Training AI for the food industry has always depended on vast amounts of data collected under ever-changing environmental conditions. The result has been an extensive reliance on manual intervention to identify decaying food, which has still left roughly 20% of global fruits and vegetables lost at the production level. Further losses occur at various stages before food reaches the consumer's plate.
Classification algorithms that can sort objects into various states are now a dime a dozen. Data, however, remains the main bottleneck for applying AI to food-waste reduction. For classification AI to work on a farm, you would need thousands of photos from that farm to identify the blemishes and other factors that reveal decay well in advance. To apply the same solution on a different farm, you would again need similar images under varied environmental conditions - sunny or cloudy skies - to build a robust AI.
Synthetic datasets remove this bottleneck by providing hyper-realistic fruits and vegetables that can be changed on the fly.
Apart from varying lighting and other environmental conditions, a synthetically generated dataset comes with rich per-object information: size, texture, material, location, and depth. This enables a far more advanced AI that can handle any scenario without being dependent on external hardware or other expensive simulations.
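As a rough illustration of what that per-object information can look like, here is a minimal sketch of an annotation record a synthetic generator might emit alongside each rendered image. All field names and values are illustrative assumptions, not a specific generator's schema.

```python
import json

# Hypothetical per-object annotation emitted with a rendered frame.
# Because the scene is synthetic, every value is ground truth known
# at render time rather than a human-labelled estimate.
annotation = {
    "object": "apple",
    "state": "early_decay",            # decay label, known at render time
    "size_mm": [72.0, 70.5, 69.8],     # bounding dimensions of the fruit
    "texture": "waxy_skin_blemished",  # surface texture variant
    "material": "organic",
    "location_px": [412, 233],         # object centre in image coordinates
    "depth_m": 0.83,                   # distance from the virtual camera
    "lighting": "overcast",            # environment condition, varied per frame
}

print(json.dumps(annotation, indent=2))
```

A training pipeline can filter or rebalance on any of these fields (for example, oversampling blemished apples under harsh sunlight) without re-shooting a single photo.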
A depth map can be very helpful when plucking or packaging apples: robotic vision can clearly see how far away each apple is and decide on grasping points to remove or sort them. The image below shows what such a depth map looks like for apples on a conveyor belt.
ZEG specialises in 3D automation, which allows synthetic scenes and scenarios to be built at unprecedented scale. Moreover, our synthetic generation pipeline is more affordable than manual capture and annotation. Contact us to learn more about synthetic content at massive scale.