Sudhir Gupta

Face recognition software for tracking missing children.

Blog Post created by Sudhir Gupta on Mar 30, 2018

A news item regarding missing children caught my attention recently.

 

https://timesofindia.indiatimes.com/city/delhi/missing-piece-of-puzzle-face-matching-software-can-aid-parents-search-for-kids/articleshow/63358470.cms

 

Kailash Satyarthi's NGO Bachpan Bachao Andolan (BBA) has been advocating that all Child Care Institutions in India should be integrated with the Government's Track the Missing Child portal, and that face recognition software should be used to locate missing children quickly. This is an excellent use case in the AI4Good space and it motivated me to explore the topic. This article is an attempt to provide a bird's-eye view of where we are and how we got here. The intention is to encourage students to explore this use case using open source tools.

 

Apple's iPhone X Face ID technology is the most visible example of the current state of the art in this field. This white paper provides some details.

 

https://images.apple.com/business/docs/FaceID_Security_Guide.pdf

 

The following points are noteworthy.

 

- The technology is centered around neural networks.

- The neural engine runs on Apple's A11 Bionic processor.

- The neural network was trained using over a billion images.

- The iPhone X uses the TrueDepth camera system, comprising an IR emitter and an IR camera, to create an image and depth map of the face.

- This data is sent to the neural network to create a mathematical representation of the face, which is then compared to the mathematical representation of the face originally enrolled by the user.

- If there is a match, a positive ID is made.

- Apple claims that this technology has a 1 in 1,000,000 chance of making a wrong identification, compared with 1 in 50,000 for Touch ID.
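
Apple's implementation is proprietary, but the final "compare the mathematical representations" step can be pictured with a minimal, generic sketch in Python. It assumes each face has already been reduced to a fixed-length embedding vector; the 128-dimension size and the distance threshold below are purely illustrative values, not Apple's. Two embeddings are treated as the same face when the distance between them falls below the threshold.

import numpy as np

def is_same_face(enrolled_embedding, probe_embedding, threshold=0.8):
    # Euclidean distance between two face embeddings; a small distance
    # means the two images are likely of the same person.
    distance = np.linalg.norm(enrolled_embedding - probe_embedding)
    return distance < threshold

# Toy usage: random vectors stand in for real embeddings produced by a CNN.
enrolled = np.random.rand(128)
probe = enrolled + np.random.normal(0, 0.01, 128)  # slightly perturbed "same" face
print(is_same_face(enrolled, probe))                # True for a close match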

 

Very impressive indeed. This technology is proprietary to Apple, but others such as Google, Facebook and Baidu will surely catch up soon. Let us now see how this field has evolved over the years and where things stand on the open source front.

 

Work on face recognition software has been going on since the 1960s. However, the use of deep learning for facial recognition is comparatively recent and can be traced back to the pioneering work done by Yann LeCun in the area of Convolutional Neural Networks (CNNs). Yann LeCun did postdoctoral research with Geoffrey Hinton, who is widely regarded as one of the fathers of deep learning. In 1998 Yann LeCun published his famous paper on LeNet-5, a 7-layer CNN used to classify handwritten digits. Since then CNNs have been widely used in computer vision and image processing. Yann LeCun later became the director of Facebook AI Research (FAIR), so it is only fitting that a breakthrough paper on the use of CNNs for face recognition was published by a research team at FAIR in 2014. This was called DeepFace, and you can access the paper here.

 

https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf

 

DeepFace was trained on a dataset of 4.4 million images and reported an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) benchmark, compared with a reported 85% for the FBI Next Generation Identification system at that time. Further improvements were then proposed by a research team at Google with a CNN-based approach called FaceNet in 2015. The FaceNet paper can be accessed here.

 

https://arxiv.org/pdf/1503.03832.pdf

 

FaceNet was trained on a dataset of 200 million images and reported an accuracy of 99.63% on the same LFW benchmark.

 

By now it should be obvious that the internet giants, with access to very large training datasets, had a lead over others, and their models were proprietary. Fortunately, research teams at several universities such as Oxford and CMU have published models in the public domain that are not far behind. For example, a CMU lab has published its work under the name OpenFace, which can be accessed here.

 

http://cmusatyalab.github.io/openface/

 

Similarly, the Visual Geometry Group (VGG) at Oxford publishes its work here.

 

http://www.robots.ox.ac.uk/~vgg/

 

A FaceNet implementation using TensorFlow is available here.

 

https://github.com/davidsandberg/facenet

 

OpenFace uses Torch, but for those who prefer Keras there is also a Keras version of OpenFace available here.

 

https://github.com/iwantooxxoox/Keras-OpenFace

 

Since I started by saying that I would like to encourage students to explore face recognition using open source tools, here is my suggestion: learn Python, TensorFlow and Keras. All of these are freely available and there are plenty of online learning resources. The Keras OpenFace face recognition model referenced above reports an accuracy of 93.8% and can give good results. Here is how a solution for locating missing children could work (a small code sketch of the matching step follows the list).

 

- All children in Child Care Institutions are photographed and their pictures are uploaded to the Track the Missing Child portal.

 

- Pictures of all missing children are uploaded to the Track the Missing Child portal.

 

- The Track the Missing Child portal runs all pictures through the face recognition CNN and generates a feature vector for each picture.

 

- Feature vectors of the missing children's pictures are matched against the feature vectors of pictures of children in Child Care Institutions, and an alert is generated for each match.
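
To make the last two steps concrete, here is a minimal sketch in Python. It assumes the portal has already run every picture through the face recognition CNN (for example, the Keras-OpenFace model above, which turns an aligned face image into a 128-dimensional embedding) and stores each feature vector against a child ID. The distance threshold is an illustrative value that would have to be tuned on real data, and the nested loop is a simplification of what a real portal would do at scale.

import numpy as np

def find_matches(missing_embeddings, institution_embeddings, threshold=0.9):
    # Both arguments are dicts mapping a child ID to a 128-dimensional
    # face embedding. A pair is reported when the Euclidean distance
    # between the two embeddings falls below the threshold; each reported
    # pair is what would trigger an alert on the portal.
    matches = []
    for missing_id, m_vec in missing_embeddings.items():
        for inst_id, i_vec in institution_embeddings.items():
            if np.linalg.norm(m_vec - i_vec) < threshold:
                matches.append((missing_id, inst_id))
    return matches

# Toy usage with random vectors standing in for real embeddings.
missing = {"case-101": np.random.rand(128)}
in_care = {"cci-7/child-3": missing["case-101"] + np.random.normal(0, 0.01, 128)}
print(find_matches(missing, in_care))   # [('case-101', 'cci-7/child-3')]

For a nationwide portal the pairwise loop would be replaced by a vectorised or indexed nearest-neighbour search, but the matching logic stays the same: a small distance between feature vectors means a likely match.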

 

Happy Learning!
