Robot Using an Elevator

As a final project for our Autonomous Robots class, my partner and I decided to program a NAO humanoid robot to use an elevator. The robot used colored beacons and cardboard patches placed inside the elevator for localization, and asked a human for help with pressing the elevator buttons. The process was completely autonomous.


Here is a video of the NAO taking an elevator (simulated lab environment):

We did most of our development and testing in the lab to avoid occupying a busy elevator. This video shows our demo running in the lab: we used a cardboard panel instead of a door and laid out the space to match the dimensions of an actual elevator.



We used an Aldebaran NAO humanoid robot for this project. The NAO is a small but versatile robot, capable of walking, speaking, and arbitrary joint movements. It has two on-board cameras, which we used as our primary means of sensing the environment.

To use the elevator, the robot moved through a series of states, performing a different action in each. First, it asked us to push the call button. Once it saw that we had pushed the right button, it watched for the elevator door to open. When the door opened, it walked inside the elevator and went to the button panel, where it asked for the correct floor button to be pushed. After we pushed the button, the NAO moved to the middle of the elevator to wait for the doors to open again. If the doors opened on the desired floor, it walked out of the elevator.
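A rough sketch of this kind of state machine is below. The state names, transitions, and the idea of passing the vision results in as booleans are illustrative assumptions for this post, not the project's actual code.

```cpp
// Sketch of an elevator-riding state machine. Perception results are passed
// in as booleans; the speech and walking actions are noted in comments.
enum class State {
    REQUEST_CALL_BUTTON,      // ask a human to press the call button
    WAIT_FOR_DOOR,            // watch for the elevator door to open
    ENTER_AND_REQUEST_FLOOR,  // walk in, face the panel, ask for the floor button
    WAIT_FOR_ARRIVAL,         // wait in the middle of the car for the doors to reopen
    EXIT,                     // walk out once the correct floor beacon is visible
    DONE
};

// One tick of the controller: given what the cameras currently report,
// either stay in the current state or advance to the next one.
State step(State s, bool buttonPressed, bool doorOpen, bool correctFloorVisible) {
    switch (s) {
        case State::REQUEST_CALL_BUTTON:
            // the spoken request happens on entry; advance once the press is seen
            return buttonPressed ? State::WAIT_FOR_DOOR : s;
        case State::WAIT_FOR_DOOR:
            return doorOpen ? State::ENTER_AND_REQUEST_FLOOR : s;
        case State::ENTER_AND_REQUEST_FLOOR:
            // walking into the car and asking for the floor button happen here
            return State::WAIT_FOR_ARRIVAL;
        case State::WAIT_FOR_ARRIVAL:
            return (doorOpen && correctFloorVisible) ? State::EXIT : s;
        case State::EXIT:
            return State::DONE;
        default:
            return State::DONE;
    }
}
```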

To localize itself in the elevator, the NAO tracked two beacons in the back-right and front-left corners and a blue cardboard patch at the back of the elevator. Using vision, the robot could determine its angle and distance to these landmarks, and from that deduce its own location in the elevator as needed.
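The geometry behind this is straightforward: with a distance and bearing to two landmarks whose positions in the elevator are known, the robot's planar pose can be solved directly. The sketch below shows that calculation; the coordinate values and variable names are assumptions for illustration, not measurements from our setup.

```cpp
// Recover a planar robot pose from range/bearing measurements to two
// landmarks at known positions in an elevator-fixed frame.
#include <cmath>
#include <cstdio>

struct Vec2 { double x, y; };
struct Pose { double x, y, theta; };  // position and heading in the elevator frame

// Landmark position in the robot frame, from its measured distance and
// bearing (angle relative to the robot's forward direction).
Vec2 toRobotFrame(double distance, double bearing) {
    return { distance * std::cos(bearing), distance * std::sin(bearing) };
}

// Solve m_i = R(theta) * p_i + t for the robot pose, given the map positions
// m1, m2 of two landmarks and their robot-frame positions p1, p2.
Pose localize(Vec2 m1, Vec2 m2, Vec2 p1, Vec2 p2) {
    // The heading rotates the robot-frame baseline onto the map-frame baseline.
    double mapAng   = std::atan2(m2.y - m1.y, m2.x - m1.x);
    double robotAng = std::atan2(p2.y - p1.y, p2.x - p1.x);
    double theta = mapAng - robotAng;

    double c = std::cos(theta), s = std::sin(theta);
    // t = m1 - R(theta) * p1
    double tx = m1.x - (c * p1.x - s * p1.y);
    double ty = m1.y - (s * p1.x + c * p1.y);
    return { tx, ty, theta };
}

int main() {
    // Hypothetical beacon positions (meters) and measurements (m, rad).
    Vec2 backRight = {1.5, 0.0}, frontLeft = {0.0, 1.2};
    Vec2 p1 = toRobotFrame(1.0, 0.3), p2 = toRobotFrame(1.4, -0.8);
    Pose pose = localize(backRight, frontLeft, p1, p2);
    std::printf("x=%.2f y=%.2f theta=%.2f rad\n", pose.x, pose.y, pose.theta);
    return 0;
}
```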

Each floor was identified by a unique beacon. When the doors opened, if the NAO saw the correct floor beacon, it would exit the elevator.

Detecting color regions in the video frames was done with a color segment detection algorithm from this paper. We implemented the algorithm in C++ and ran it on the NAO's onboard computer to find blobs of the same color. We used this method to find the blue cardboard patch, to detect the beacons, and to see the elevator buttons.
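For a feel of what blob detection involves, here is a simplified stand-in: classify each pixel against a color threshold, then group neighboring matches with a flood fill and keep the bounding boxes of large groups. The actual project used the run-length-based algorithm from the paper; the thresholds and image format below are assumptions.

```cpp
// Simplified color-blob detection over a row-major RGB image (3 bytes/pixel).
#include <algorithm>
#include <cstdint>
#include <queue>
#include <vector>

struct Blob { int minX, minY, maxX, maxY, pixels; };

// Crude "blue" threshold; real thresholds would be calibrated per landmark.
bool isTargetColor(uint8_t r, uint8_t g, uint8_t b) {
    return b > 120 && b > r + 40 && b > g + 40;
}

std::vector<Blob> findBlobs(const uint8_t* image, int width, int height, int minPixels) {
    std::vector<uint8_t> mask(width * height, 0), visited(width * height, 0);
    for (int i = 0; i < width * height; ++i)
        mask[i] = isTargetColor(image[3 * i], image[3 * i + 1], image[3 * i + 2]);

    std::vector<Blob> blobs;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int start = y * width + x;
            if (!mask[start] || visited[start]) continue;

            // Flood-fill the connected region of matching pixels.
            Blob blob = {x, y, x, y, 0};
            std::queue<int> frontier;
            frontier.push(start);
            visited[start] = 1;
            while (!frontier.empty()) {
                int idx = frontier.front(); frontier.pop();
                int cx = idx % width, cy = idx / width;
                blob.minX = std::min(blob.minX, cx); blob.maxX = std::max(blob.maxX, cx);
                blob.minY = std::min(blob.minY, cy); blob.maxY = std::max(blob.maxY, cy);
                ++blob.pixels;
                const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k) {
                    int nx = cx + dx[k], ny = cy + dy[k];
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                    int nidx = ny * width + nx;
                    if (mask[nidx] && !visited[nidx]) { visited[nidx] = 1; frontier.push(nidx); }
                }
            }
            if (blob.pixels >= minPixels) blobs.push_back(blob);
        }
    }
    return blobs;
}
```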

To walk up to target beacons or specified locations, the robot used PID control. The PID (proportional-integral-derivative) controller sits between the sensors and the actuators, continuously updating the walk speed and heading to correct for errors in the motors and sensor readings. It made walking smoother and more accurate.
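A minimal PID controller looks like the sketch below. The gains, time step, and the way the output would feed the walk command are illustrative assumptions, not the tuned values from our project.

```cpp
// Minimal PID controller: accumulate the error over time (integral term),
// track its rate of change (derivative term), and combine the three terms
// into a correction signal.
struct PID {
    double kp, ki, kd;       // proportional, integral, derivative gains
    double integral = 0.0;
    double prevError = 0.0;

    // error: desired value minus measured value (e.g. bearing to the target
    // beacon). dt: time step in seconds. Returns the correction to apply.
    double update(double error, double dt) {
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// Example: turn the bearing error toward a beacon into a turn-rate correction.
// PID headingPid{0.8, 0.0, 0.1};
// double turnRate = headingPid.update(bearingToBeacon, 0.1);
```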

To avoid having to work in a real, busy elevator, we did most of our development and testing in the lab. We took measurements of the elevator, and positioned the visual landmarks in the lab as if they had been laid out in the real elevator. To the robot, the actual location did not matter - the visual markers were the same in the lab as in the real world.

When we finished the implementation, we tested the robot in the real elevator. Out of ten trials, it completed four completely autonomously. The major point of failure was the poor lighting in the elevator: the NAO's cameras could not reliably distinguish the colors of some landmarks, so the robot failed to detect critical positioning cues.

To improve this project, we would focus on more robust and reliable vision algorithms for localization and for detecting the elevator buttons and floor numbers. Color-based detection works only in well-lit conditions and requires the elevator to be outfitted with beacons and visual landmarks before the robot can use it. With a better understanding of computer vision, it may be possible to modify our project so that the robot could use the elevator without any visual markers beyond the elevator itself.




