In a perfect world, your smartphone would automatically tag whatever it sees through the camera's field of view. Such a capability could be useful for Google Glass, facial-recognition systems, self-driving cars and more.
Big, powerful computers can already do this with a technique called deep learning, which relies on neural networks with many layers that loosely mimic how the human brain processes information. A Purdue University researcher is working on bringing it to smartphones and mobile devices.
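To give a sense of what "many layers" means, here is a minimal sketch of a layered neural network's forward pass in plain Python. This is an illustration of the general idea only, not the researcher's actual system; the layer sizes and random weights are made up for the example.

```python
import math
import random

random.seed(0)

def dense_layer(inputs, weights, biases):
    """One fully connected layer: weighted sums followed by a nonlinearity."""
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def forward(x, layers):
    """Pass the data through each layer in turn -- the 'deep' in deep learning."""
    for weights, biases in layers:
        x = dense_layer(x, weights, biases)
    return x

# Three stacked layers with random weights: 4 inputs -> 5 -> 5 -> 2 outputs.
sizes = [4, 5, 5, 2]
layers = [
    ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
     [0.0] * n_out)
    for n_in, n_out in zip(sizes, sizes[1:])
]

output = forward([0.1, 0.2, 0.3, 0.4], layers)
print(output)  # two values, one per output unit
```

In a real image-tagging network the layers are far larger and the weights are learned from labeled examples rather than chosen at random, which is why running such models on a phone is a research challenge.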