While most Vision AI in the auto industry focuses on the outside of the vehicle (self-driving), we see plenty of opportunities inside the cabin to make cars safer and smarter to use. Use cases range from passenger safety to comfort: for example, a passenger's position in the car can be used to modulate airbag deployment strength and reduce injuries. We have already seen cases where drivers relied so heavily on self-driving that they slept in the car.

Replacing expensive sensor hardware with an RGB camera plus software would, using conventional manual methods, require massive amounts of data and annotations for machine learning. Imagine capturing every variation that can occur inside the car, from objects to people. Moreover, the cabin is affected by the weather and lighting conditions outside, so collecting this data would be prohibitively expensive. Even a slight change in camera position can render the trained model unreliable. And even if you manage to collect the data for one car, scaling it to multiple car models is a massive challenge.

ZEG is making Vision AI inexpensive, fast, and scalable. The datasets created by ZEG are hyper-realistic, i.e. the synthetic images are indistinguishable from real ones to the naked eye. We are releasing sample images of a car interior: none of the contents of the images are real.

The image shows the driver inside the car, which can be used to assess gaze, pose, and even temperature based on skin colour. The same car can be replicated into hundreds of scenarios, from busy streets to sunny weather.

Our 3D AI gives you a pixel-perfect segmentation mask of the humans and objects inside the car, helping you separate humans from everything else. Classifiers can also be trained for a specific seat.
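As a minimal sketch of how such a mask might be consumed downstream, the snippet below isolates human pixels from a semantic segmentation mask and crops the mask to one seat's region. The class IDs and seat bounding box are illustrative assumptions, not part of the actual ZEG label schema.

```python
import numpy as np

# Hypothetical class IDs for a cabin segmentation mask (assumed labels).
BACKGROUND, PERSON, SEAT, OBJECT = 0, 1, 2, 3

def separate_humans(mask: np.ndarray) -> np.ndarray:
    """Return a boolean mask that is True wherever a person is present."""
    return mask == PERSON

def crop_to_seat(mask: np.ndarray, seat_box: tuple) -> np.ndarray:
    """Restrict the mask to one seat's region (x0, y0, x1, y1) so a
    classifier can be trained on that seat alone."""
    x0, y0, x1, y1 = seat_box
    return mask[y0:y1, x0:x1]

# Toy 4x4 mask: a person occupies the top-left corner,
# a seat the bottom centre, and one loose object sits on the right.
mask = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 3],
    [0, 2, 2, 0],
    [0, 2, 2, 0],
])
human = separate_humans(mask)
print(int(human.sum()))  # 4 pixels belong to the person
```

Keeping the mask as integer class IDs (rather than one binary layer per class) makes per-seat cropping and per-class filtering cheap array operations.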

We are extending the synthetic dataset to incorporate passenger key points, allowing you to track hand position and derive safety metrics such as hands on the wheel or a body pose that is highly relaxed. The data generated inside the car can also be very useful for insurance companies offering more tailored products.
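A hands-on-wheel metric of the kind described above could be sketched as a simple geometric test: check whether a wrist key point lies close to the steering-wheel rim in image space. The wheel geometry, coordinates, and tolerance below are illustrative assumptions, not calibrated values.

```python
import math

def hands_on_wheel(wrist_xy, wheel_center, wheel_radius, tol=15.0):
    """Return True if the wrist key point lies within `tol` pixels of the
    steering-wheel rim. All coordinates are in image pixels; the wheel is
    approximated as a circle (an assumption for this sketch)."""
    dx = wrist_xy[0] - wheel_center[0]
    dy = wrist_xy[1] - wheel_center[1]
    dist_to_rim = abs(math.hypot(dx, dy) - wheel_radius)
    return dist_to_rim <= tol

# Example: wheel centred at (320, 400) with a 90-pixel rim radius.
print(hands_on_wheel((320, 310), (320, 400), 90))  # wrist on the rim -> True
print(hands_on_wheel((100, 100), (320, 400), 90))  # wrist far away -> False
```

In practice such a test would be run per frame on both wrists, with the wheel region coming from the same synthetic annotations rather than a hard-coded circle.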


Depth estimation combined with key-point estimation can be used to understand how far away the driver and passengers are from other parts of the car, such as the airbags.
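One way to combine the two signals is to back-project a 2D key point into 3D using the depth map and a pinhole camera model, then measure its distance to a known cabin part. The snippet below is a sketch under assumed camera intrinsics and an assumed airbag position; real values would come from calibration and the car's CAD data.

```python
import math

def keypoint_to_part_distance(kp_px, depth_map, fx, fy, cx, cy, part_xyz):
    """Back-project a 2D key point (u, v) into camera-space 3D using the
    depth map and pinhole intrinsics (fx, fy, cx, cy), then return its
    Euclidean distance in metres to a fixed cabin part (e.g. an airbag).
    Intrinsics and the part position are illustrative assumptions."""
    u, v = kp_px
    z = depth_map[v][u]            # depth at the key point, in metres
    x = (u - cx) * z / fx          # standard pinhole back-projection
    y = (v - cy) * z / fy
    dx, dy, dz = x - part_xyz[0], y - part_xyz[1], z - part_xyz[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz)

# Toy example: a flat depth map of 0.8 m, principal point at pixel (2, 2),
# and a hypothetical airbag 0.5 m straight ahead of the camera.
depth = [[0.8] * 5 for _ in range(5)]
d = keypoint_to_part_distance((2, 2), depth, fx=500, fy=500,
                              cx=2, cy=2, part_xyz=(0.0, 0.0, 0.5))
print(round(d, 2))  # the key point is 0.3 m from the airbag
```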

The synthetic data has been further extended with thermal parameters to help assess the health condition of passengers in real time. The hyper-real dataset allows you to extend the capabilities of the smart car using just an RGB camera. Moreover, the variety of the dataset lets those capabilities hold up irrespective of camera position, car model, and external environment.

Reach out to us at contact@zeg.ai to find out more about our car interior synthetic dataset.