Computer Vision Use Cases: Product Innovation at its best

Most of us are familiar with “Terminator Vision”, Hollywood’s portrayal of technology at its best. As far as we know, the Terminator T-800 doesn’t have a picture of John Connor at that age, but what it does have is a detailed “Target Profile” [see Header Image] consisting of John’s age and probable location as well as his height (HGHT), weight (WGHT), hair colour & style (HAIR), gender (GEND), eye colour (EYES), distinguishing marks (DIST), facial characteristics (FACI) and build (BILD). Taken together, these were sufficient to give the Terminator a 99.45036% probability match when it saw him.

In 2019, computer vision has come of age, with some very interesting real-world use cases of its own.

Computer Vision is an interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. [Wikipedia]

Moving away from the textbook definition, here is an attempt to demystify some of the on-ground applications of computer-vision-led products developed by Indian entrepreneurial teams, with deployments across sectors at enterprise scale.

As Innovation Manager at the NASSCOM Center of Excellence – AI & IoT, my day at work entails scouting such amazing companies and working alongside them at the Gurugram-based Innovation Lab. This piece takes a use-case-focused approach to exploring deep-tech companies with computer vision based products & solutions.


Add Innovations, led by Aman Jangra & Krishan Kumar, has current deployments in automotive & auto-ancillary manufacturing companies. Their solutions include an Automotive Piston & Housing Inspection System, a Strain Measurement System for plastics & metals, and a Bike Visor Inspection System. Their system is installed at an automotive components line in Delhi-NCR that produces internal combustion engine pistons and housings for A-list clients; Add Innovations’ solution detects minute aberrations such as hole burrs, spoiled threads and surface defects across different regions of the piston & housing.


ConstemsAI, led by Amit Singh, offers a solution integrating custom APIs with existing hardware for image/video analytics. They have deployed a Tobacco Grading System with a multi-business conglomerate with a presence in FMCG, agri-business, hospitality & information technology: images of tobacco cases on the line are captured and analyzed against a standard dataset of Grade M tobacco images using Constems’ proprietary algorithms, to automate adherence to quality standards.
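The grading step described above can be pictured as a nearest-reference comparison. The sketch below is purely illustrative and is not Constems’ actual algorithm: it assumes each image has already been reduced to a small feature vector, and assigns the grade whose reference images the sample most resembles. The helper names, the toy 3-dimensional features and the 0.9 threshold are all hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def grade(sample_features, reference_sets, threshold=0.9):
    """Assign the grade whose reference set best matches the sample.

    reference_sets maps a grade label (e.g. "M") to a list of feature
    vectors extracted from standard images of that grade. Returns
    (grade, score), or (None, score) if no grade clears the threshold.
    """
    best_grade, best_score = None, -1.0
    for label, refs in reference_sets.items():
        score = max(cosine_similarity(sample_features, r) for r in refs)
        if score > best_score:
            best_grade, best_score = label, score
    if best_score < threshold:
        return None, best_score
    return best_grade, best_score

# Toy example with 3-dimensional "features" (e.g. mean colour channels).
refs = {"M": [[0.8, 0.5, 0.2]], "L": [[0.2, 0.6, 0.9]]}
label, score = grade([0.79, 0.52, 0.21], refs)  # closest to the "M" reference
```

In a real deployment the feature vectors would come from a learned model rather than raw colour statistics, but the match-against-a-standard-dataset structure is the same.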


AIonAsset, led by Swati Tiwari & Amit Kumar, enables asset inspection & reporting through its Inspection Management & Reporting System (IRMS), which uses a proprietary algorithm to detect defects such as rust in pipelines & ships, missing nuts & bolts, and insulator damage on transmission-line towers, and geo-tags those defects upon detection. Their solution also enforces PPE (Personal Protective Equipment) compliance in indoor & outdoor industrial environments to ensure personnel safety.


SGR Labs, led by Rajan Srivastava, specializes in visual analytics & GIS, using UAV-based imagery & LiDAR to render high-resolution 3D models, DSMs, DTMs & orthomosaics at cm-level accuracy, visualized on a Web GIS platform and an AR-based mobile application. The solution is used by project management teams at power generation & transmission utilities, EPC companies & Indian Railways, giving them easy access to high-precision 3D data in a collaborative environment where everyone in the project chain can monitor progress and share and comment on updates about the ongoing site status simultaneously, without any hassle. Typical engagements include railway line aerial surveys, highway construction monitoring, oil & gas pipeline planning, transmission line inspection and flood analysis.



Veda Labs, led by Vivek Thakur & Veer Mishra, focuses on deep-learning-based vision computing on the edge to provide a hardware-agnostic image & video analytics solution for retail, warehousing & hospitality, with both on-premise & cloud offerings. Their solutions’ capabilities include general & specific object detection, facial recognition, customer footfall heatmaps, MAG (Mood, Age & Gender) detection, unique vs repeat customer identification, CHMS, asset tracking, asset health monitoring & object trajectory tracking.


Wobot.AI, led by Tapan Dixit, Adit Chhabra & Tanay Dixit, is a video analytics solution that analyses CCTV footage to automatically detect deviations from SOPs (Standard Operating Procedures) & compliances. HD cameras have been deployed in the base kitchens of IRCTC (Indian Railways Catering and Tourism Corporation), where Wobot’s pre-trained models automatically detect anomalies in hygiene compliance, including whether kitchen staff are wearing caps & uniforms, whether cleaning practices are being followed, and the presence of pests, and immediately raise an alert with a detailed report to the concerned personnel. In fact, the LIVE STREAM of IRCTC kitchens can be seen via



Drivebuddy, led by Nisarg Pandya, is a video analytics platform with an integrated edge-based device comprising a dual dashcam and on-device storage, built to make driving safer & smarter and help businesses reduce losses. Its algorithm, trained on Indian vehicle data in Indian road conditions, ensures driving safety through real-time Forward Collision, Distracted Driving, Pedestrian Crossing, Overspeeding and Driver Drowsiness warnings. The solution integrates high-precision GNSS with angular motion data, and offers data analytics for insurers & fleet owners: real-time driving-safety alerts and reporting for driver risk assessment, covering additional driving-behaviour parameters such as abrupt lane changes, acceleration/braking events, unsafe overtaking, tailgating, traffic sign violations and near-miss events.
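A forward collision warning of the kind described above typically hinges on a time-to-collision (TTC) estimate. The minimal sketch below is not Drivebuddy’s implementation; it assumes the perception stack already supplies the gap to the lead vehicle and both speeds, and the 2.5-second threshold is a common industry heuristic, not a product parameter.

```python
def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """Time-to-collision in seconds; None if the gap is not closing."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return None  # lead vehicle is pulling away or holding distance
    return gap_m / closing_speed

def forward_collision_alert(gap_m, ego_speed_mps, lead_speed_mps,
                            ttc_threshold_s=2.5):
    """Raise an alert when the estimated TTC drops below the threshold."""
    ttc = time_to_collision(gap_m, ego_speed_mps, lead_speed_mps)
    return ttc is not None and ttc < ttc_threshold_s

# Closing at 10 m/s on a vehicle 20 m ahead -> TTC = 2.0 s -> alert fires.
alert = forward_collision_alert(20, 25, 15)
```

In practice the gap and relative speed would be estimated from the dashcam and GNSS/angular-motion data, with filtering to suppress spurious alerts.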


Swaayatt Robots, led by Sanjeev Sharma, is developing SAE Level-4 and Level-5 autonomous driving technology, as well as autonomous platooning technology, to connect vehicles and make them autonomously communicate & coordinate. Their RL algorithm, based on inputs from NIR & RGB cameras and a LiDAR sensor, detects lane markers and road delimiters where present and generates them automatically in real time when they are missing or faded, in addition to traffic sign detection and recognition and obstacle detection and recognition. A local-level fallback brings the vehicle to a safe halt if any of the primary sensors fail. Deployment potential includes airports (buses & luggage-puller tractors), campuses with dynamic obstacles, agricultural fields and off-road conditions at low speeds, industrial and warehouse settings, and structured urban environments.



ChironX, led by Sombodhi Ghosh, automatically reads a retinal fundus image and, using its deep learning algorithm, performs early disease detection, risk prediction and prognosis for diseases local to the eye as well as systemic diseases such as cardiovascular ailments, hypertension, Alzheimer’s, stroke and Parkinson’s. Their solution is hardware agnostic and capable of working on edge-based infrastructure, so it can easily be deployed in low-resource environments as well. Because of how systemic diseases manifest throughout the body, ChironX’s algorithm can detect hundreds of diseases and complications from the retina with 95% accuracy within seconds, non-invasively & cost-effectively, producing a comprehensive report with all the risk predictors in a matter of minutes. They are currently deployed in hospitals, both private & public.


Onward Health, led by Dinesh Koka, helps in cancer detection through predictive modeling of digital pathology images, covering detection, segmentation, feature extraction and tissue classification to localize the spatial position of the cancer. It then looks for similar regions in the other views and, for each suspicious region found, extracts image-based features. The trained model helps detect and score new patients and contours the tumor, which helps in deciding whether it is benign or malignant. Used in conjunction with a radiologist’s reading, the algorithm significantly reduces unnecessary biopsies. It can also predict how well a particular patient will adhere to a treatment plan and respond to modifications of it, cluster the population into different groups, and target the highest-risk patient groups to improve their outcomes.
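The detect → extract features → classify pipeline described above can be sketched in miniature. The toy code below is illustrative only and stands in for Onward Health’s trained models: simple intensity thresholding plays the role of detection, and a hand-written rule plays the role of the classifier. All function names, the sample image and the cutoffs are hypothetical.

```python
def detect_suspicious_regions(image, threshold=0.7):
    """Return coordinates of pixels whose intensity exceeds the threshold."""
    return [(r, c) for r, row in enumerate(image)
            for c, v in enumerate(row) if v > threshold]

def extract_features(image, region):
    """Toy features for a region: mean intensity and pixel count."""
    vals = [image[r][c] for r, c in region]
    return {"mean": sum(vals) / len(vals), "area": len(vals)}

def classify(features, mean_cutoff=0.85, area_cutoff=3):
    """Hand-written rule standing in for the trained model's score."""
    if features["mean"] > mean_cutoff and features["area"] >= area_cutoff:
        return "suspicious: refer for biopsy"
    return "likely benign"

# A tiny grayscale "pathology image" with one bright cluster.
image = [
    [0.1, 0.2, 0.1, 0.1],
    [0.1, 0.9, 0.95, 0.1],
    [0.1, 0.92, 0.9, 0.1],
]
region = detect_suspicious_regions(image)
feats = extract_features(image, region)
result = classify(feats)
```

The real system learns its detection and scoring from annotated pathology slides; the point here is only the shape of the pipeline, with each stage feeding the next.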



BharatRohan, led by Amandeep Panwar, helps predict crop disease & pest infestation. Aerial survey data is collected by UAVs equipped with hyperspectral sensors. Hyperspectral cameras pick up the humanly invisible colour changes occurring in leaves due to biochemical changes induced by crop pathogens. This data is then analyzed in correlation with a spectral database built at the experimental farms of 110 Indian Council of Agricultural Research (ICAR) institutes to generate early pest alerts, conduct crop nutrition diagnostics and detect weed anomalies before major damage is caused to the plant and before it becomes visible to the naked eye. A single UAV can collect up to 1,000 acres of hyperspectral data per day, with a survey scheduled every 7-15 days depending on the crop and its rate of growth, which helps monitor every stage of the crop and enables timely crop prescriptions.
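Correlating a pixel’s spectrum against a library of reference signatures, as described above, is commonly done with the spectral angle mapper (SAM) technique. The sketch below shows that general approach, not BharatRohan’s algorithm; the 4-band reflectance values, signature names and the 0.1-radian match threshold are made up for illustration.

```python
import math

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # Clamp against floating-point drift before acos.
    return math.acos(max(-1.0, min(1.0, dot / (norm_p * norm_r))))

def match_signature(pixel, library, max_angle=0.1):
    """Return the library signature closest to the pixel, or None if
    nothing falls within max_angle radians."""
    best = min(library, key=lambda name: spectral_angle(pixel, library[name]))
    return best if spectral_angle(pixel, library[best]) <= max_angle else None

# Hypothetical 4-band reflectance signatures (e.g. blue/green/red/NIR).
library = {
    "healthy":  [0.05, 0.08, 0.45, 0.50],
    "stressed": [0.10, 0.15, 0.30, 0.28],
}
match = match_signature([0.051, 0.081, 0.44, 0.49], library)
```

Because the spectral angle ignores overall brightness, SAM is robust to illumination differences between survey flights, which is one reason it is a standard baseline for hyperspectral classification.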


Nebulaa, led by Tanmay Sethi, has built an instrument that combines image processing & deep learning to assess agricultural produce quality as per AGMARK, BIS & CODEX standards. The founders also claim that the system can provide an overall quality assessment of the entire produce based on a sample test. As of now, it can test up to 15 types of foodgrains, taking around 60 seconds to analyze a sample; reports are generated in English and four other major Indian languages, along with the pricing grade as per the assessed quality. The instrument has a primary testing tray on which the sample grains are placed. Inside it, multiple cameras take multiple images at different wavelengths. These images are then mapped, and segmentation is performed to remove the background and separate touching kernels into individual grains. The grains are then passed through a classifier to detect their category, and a 3D rendering of each grain is analyzed for defects, fungal damage, organic impurity, tearing, etc. After setting the baseline defect and defining quality parameters, the instrument can predict the quality of any future grain sample of the same crop without the need for retraining.
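The segmentation step above, which isolates individual kernels from the background, amounts to finding connected foreground regions. As a minimal stand-in for that stage (not Nebulaa’s actual pipeline, which uses multiple wavelengths, touching-kernel separation and 3D rendering), the sketch below counts connected bright regions in a thresholded single-channel image via breadth-first search.

```python
from collections import deque

def count_kernels(image, threshold=0.5):
    """Count 4-connected foreground regions (grains) in a 2D intensity grid."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    kernels = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                kernels += 1          # new region found; flood-fill it
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return kernels

# Two separate bright clusters -> two "grains".
image = [
    [0.9, 0.9, 0.0, 0.0],
    [0.9, 0.0, 0.0, 0.8],
    [0.0, 0.0, 0.8, 0.8],
]
n = count_kernels(image)
```

Production systems typically follow this labeling with watershed or learned segmentation to split kernels that touch, before handing each region to the classifier.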


Computer Vision, as an emerging technology, opens up endless possibilities for solving the core problems the industry faces today. Enterprises looking to execute their innovation strategy should explore technology-led co-innovation with curated startups, innovators and early-stage entrepreneurial teams.

Stay tuned to this space for more interesting reads. The author may be contacted at
