Here we will talk about the eyes of your robot: whether they are simple cameras or stereo cameras, and whether they need to get more complicated by becoming part of a vision system for artificial intelligence, using something like OpenCV.

Before we get to the software, we can't forget that if we are making a realistic humanoid robot, we need to find eyeballs for it and create or buy a mechanism to move them side to side and up and down, perhaps with eyelids and eyebrows, all attached to and controlled by linkages, possibly gears, and of course motors or servos.

We will address the mechanical side of the eyes in the other section of this vision area.

So let's talk about software and vision now.

The simplest way to give a robot vision, or the ability to see something, is to attach a camera to your Arduino or Raspberry Pi, then either use prewritten software that works with the camera or write your own code.

You could use a webcam, or a Pi Camera with the Raspberry Pi.

At some point you may want to try the Xbox Kinect camera if you are attempting a more advanced application, since it adds depth sensing on top of the color image.

Now what does your robot do with what it sees?

This is where your programming comes in.

Have a look at this link on the Popular Mechanics site showing how Disney is working on ultra-realistic eye movements for its animatronic robots.

An amazing way to instantly get your camera to recognize objects and say what they are is found on the EZ-Robot webpage, where they demonstrate the Microsoft Cognitive Vision cloud-based software.

With the JD robot kit from EZ-Robot, you simply aim the robot's eye/camera at some object and the software will speak what it sees.

In one of EZ-Robot's training videos, an instructor holds up a banana; the Microsoft Cognitive Vision software recognizes what it sees, and the robot says, "I see a grown woman holding a banana."

In these video examples the software seems to recognize objects correctly about half of the time.

Keep in mind that in this example we are talking about a robot that has already been built and ships with working code. If you are starting from scratch, you would have to write that code yourself once you add the camera to your microcontroller.

But now let's talk about the more advanced use of vision and AI.

This is where the other vision software, OpenCV, comes in.

Here you access this open-source library and write your own code around it so your camera can spot certain objects and trigger some action, as in the case of facial-recognition software scanning a crowd of people.