Joydeep Biswas leads the Autonomous Mobile Robotics Laboratory (AMRL) at UT, where he and other researchers work on building mobile service robots that assist humans in everyday environments. The lab investigates programs and algorithms that enable these robots to better navigate changing conditions, incorporate human assistance and recover from failures intelligently.
Biswas has extensive experience building and programming self-navigating bots. He built a self-balancing robot reminiscent of a Star Wars droid, coached a team of high-speed soccer-playing bots and most recently developed race cars that execute coordinated high-speed maneuvers.
We met with Biswas to discuss his academic career, robotics research and his thoughts on how the future of robotics may involve humans and robots working together.
Tell me about some of your early robotics projects at IIT Bombay. What did you learn from those?
As an undergraduate at IIT Bombay, I was exposed to this whole new world of robotics where it wasn't just building a physical thing – these were physical things that could actually move in an environment and interact with people. That was mind-blowing to me.
After completing your Ph.D. at Carnegie Mellon and a position as an assistant professor at UMass Amherst, what brought you to Texas?
The fact that robotics is a big focus of research here. There is also significant support from the department, the college, the university and other sponsors to grow this area and do ambitious things. And there's the new building, the Anna Hiss Gymnasium, which is being renovated, and a large part of that space will be for robotics.
We're looking forward to deploying all of our robots in the new space. In fact, I've been working with the architects and the building managers to put in electronics so that our robots can wirelessly request elevators in the building.
For my undergraduate project, I decided that I wanted to build a single-wheel balancing robot. The particular mechanism I chose is what's called a reaction wheel. There's a heavy brass disc inside, combined with sensors that can detect whether the robot is falling to the left or to the right and spin the disc in the opposite direction to right the robot. I also designed it so that this entire system was enclosed in a transparent, self-contained hub.
The control systems that you use to balance this kind of single-wheel robot are similar to what you would use on the BB-8 droid (from Star Wars).
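The reaction-wheel idea Biswas describes – sense which way the robot is tilting, then spin a heavy disc the other way so the reaction torque rights the body – can be sketched as a simple feedback loop. The snippet below is a minimal, hypothetical PD (proportional-derivative) controller, not the actual code from his project; the gains and function names are illustrative.

```python
# Hypothetical sketch of a reaction-wheel balance controller using PD control.
# Not Biswas's implementation; KP, KD and the interface are illustrative.

KP = 40.0  # proportional gain: torque per radian of tilt
KD = 5.0   # derivative gain: torque per radian/second of tilt rate

def wheel_torque(tilt_angle: float, tilt_rate: float) -> float:
    """Return the torque to apply to the reaction wheel.

    tilt_angle: radians from vertical (positive = falling to the right)
    tilt_rate:  radians per second of tilt change

    Accelerating the heavy disc in one direction exerts an equal and
    opposite reaction torque on the robot body, pushing it back upright,
    so the command opposes the measured tilt.
    """
    return -(KP * tilt_angle + KD * tilt_rate)
```

With this sign convention, a robot falling to the right (positive tilt) gets a negative wheel torque, whose reaction pushes the body back toward vertical; the derivative term damps the oscillation so the robot settles instead of rocking back and forth.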
Tell me about your lab and what you're working on.
The goal for the group that I lead here, called the Autonomous Mobile Robotics Laboratory, is to enable long-term autonomy for mobile robots. Our question is: How can we have robots exhibit mobility in challenging environments, such as cluttered homes or in dense human crowds, over extended periods of time?
One problem is having these robots effectively navigate around in your home to do tasks. Say I have a video call with my mom via the robot. Can the robot effectively and smoothly follow me around as I move from the kitchen to the living room? This is extremely challenging because homes are cluttered environments that change over time. Robots don't know how to deal with this change.
Another challenge is that robots are inevitably going to make mistakes, like humans do. How can they understand what their mistakes are? How can they understand when their perception algorithms are going to fail? That would allow them to recognize when something they perceive is likely to lead them to do the wrong thing.
The final thing is how can robots take corrections from humans? For example, you see a soccer-playing robot that has an open angle to the goal, and it's just not kicking the ball. You don't know why it's not shooting. There's something wrong with the program. But guess what, everybody in the audience knows what the correction is. They are yelling at the robot: shoot! shoot! So how do you take this as the correction and automatically fix whatever is broken with the robot?
What kinds of robots do you work with?
We have a couple of different types of robots. And they all exhibit different types of behaviors and capabilities, which make them really good at different tasks. We have approximately human-scale robots that can navigate both indoors and outdoors. We hope that soon you will see them roaming around the UT campus.
Then we have smaller robots, like our one-tenth scale racecars. The cool thing about those platforms is that they're highly performant – they can travel at very high speeds, but they have limited onboard computation and sensing, which poses an interesting challenge to autonomy. We can push the capabilities of our system in terms of high-speed behaviors without having to carry a large burden of risk.
There's an Explore UT event where the race cars will be on display. What can people expect to see if they visit this exhibition?
These mini race cars are already being used in an undergraduate course I'm teaching called Autonomous Driving. It's essentially an intro to robotics class where teams of students each keep a car for the semester and work to make it fully autonomous, capable of moving around in our environments.
If you come to Explore UT, you'll actually get to see these race-car robots zooming around on our track.