Sebo Lab: Programming robots to better interact with humans
Robots are often designed to do specific automated tasks, like deliver food or manufacture cars.
But in order to have them interact well with humans, they need to be designed to act more intuitively and engagingly.
University of Chicago computer scientist Sarah Sebo is programming robots to give empathetic responses and use nonverbal social cues like nodding to build trust and rapport with humans. The goal is to develop robots that improve the performance of human-robot teams, for example by enhancing learning outcomes for children.
To find out more, we spoke to Sebo, assistant professor of computer science, and her graduate students Alex Wuqi Zhang and Lauren Wright.

Why is it important for us to interact with robots?
Sebo: A lot of robotics has focused on creating tools that are useful for people: a robot that can deliver food from point A to point B, a robot that can assist an older adult with daily living tasks. And much of that work has been about navigation: I need to get from point A to point B and not run into people or obstacles, or get run over myself. People started realizing, oh, we need to really understand people better and design social behaviors in robots so that they can more effectively engage with people.
What is human-robot interaction?
Sebo: It is a blend of computer science and psychology. A lot of our work requires understanding people to either better help them (as the robot), or for the robot itself to express behaviors that are very similar to the way humans would express behaviors.
If the robot has a laser beam and is trying to direct your attention using this laser beam, it’s not super intuitive. You have to look all around your environment to figure out where the laser beam is. But if the robot has a head with eyes, and the robot looks over here, you very quickly know where the robot wants you to look.
And so in a similar way, if a robot displays other types of social behaviors, it’s much more intuitive for us to understand what this robot is trying to communicate.
Wright: The fun thing about human-robot interaction is that because it’s a new field, there are so many big unanswered questions. We are measuring things between humans and robots that no one has measured before.
How are robots different from other technology we use every day?
Sebo: Robots seem to exist somewhere in between a phone or a computer and a social being like a human. For example, when the Roomba vacuum first came out, people would send their robot out for repairs, and the company would send them a new robot back. But customers wanted their original robot back. They formed this social attachment. Yet if you need an iPhone replacement, you’re going to be happy because you are getting a newer version.
Part of my research is trying to understand what separates robots from machines, what separates robots from people, and how we can leverage these unique aspects of robots in everyday interactions.

How does your lab study human-robot interactions?
Sebo: My lab is interested in how robots can display and understand social behaviors to help make those human interactions more successful. My lab typically buys a robot that we feel fits the context we’re designing for and then we program the robot to interact with people.
Most of our work deals with robots that have some verbal capabilities and can have a back-and-forth conversation with you, but a lot of the work we do can also be applied to robots that are nonverbal.
We give the robot human abilities like talking and nodding, then we recruit participants to come into our lab and interact with our robots. We analyze the interactions to show how people respond to different robot conditions.
How do your robots respond to what a person has said?
Sebo: When a robot needs to communicate with a person verbally, it first needs to understand what they said. It converts their speech into text, then prompts a large language model (LLM) to generate a response. Finally, the robot turns that response back into speech.
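Here is a minimal sketch, in Python, of that speech-to-text, LLM, text-to-speech loop. The function names and the chat-message format are illustrative assumptions rather than the lab's actual code; the placeholder bodies stand in for real speech recognition, language model, and speech synthesis services.

```python
# A toy version of the pipeline: hear speech, generate a reply with an
# LLM, speak the reply. The placeholder bodies make the sketch runnable;
# a real robot would call actual STT/LLM/TTS services instead.

def transcribe(audio: bytes) -> str:
    """Speech-to-text (placeholder: pretend we recognized this phrase)."""
    return "I had a rough day at school today."

def generate_reply(messages: list[dict]) -> str:
    """LLM call (placeholder: returns a canned empathetic reply)."""
    return "I'm sorry to hear that. Do you want to tell me what happened?"

def speak(text: str) -> None:
    """Text-to-speech (placeholder: print instead of playing audio)."""
    print(f"[robot says] {text}")

def conversation_turn(audio: bytes, history: list[dict]) -> list[dict]:
    """One turn of the loop: speech -> text -> LLM response -> speech."""
    utterance = transcribe(audio)                          # speech to text
    history.append({"role": "user", "content": utterance})
    reply = generate_reply(history)                        # prompt the LLM
    history.append({"role": "assistant", "content": reply})
    speak(reply)                                           # text to speech
    return history

if __name__ == "__main__":
    system = {"role": "system",
              "content": "You are a friendly robot. Reply in one or two "
                         "empathetic sentences."}
    conversation_turn(b"", [system])
```

Because each stage is separate, swapping just one of them (for example, a one-sentence empathetic reply versus simply moving on to the next prompt, as in the study Sebo describes below) changes the robot's social behavior without touching the rest of the pipeline, which makes this structure convenient for running controlled studies.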

How can this process be used to build a connection with a human?
Sebo: One of the focuses of our lab right now is understanding how robots can build trust with people. A lot of this is performance based. If a robot makes a mistake, how can it correct it?
We’re also trying to understand how a robot can quickly build rapport with someone to have that social interaction go more smoothly.
We ran a study last summer where participants were prompted to disclose a current personal problem to a robot. Our robot either moved on to the next prompt or gave a quick empathetic sentence in response. And when the robot provided this one sentence of active listening, it was rated phenomenally better across all the measures we were using. People viewed themselves as having significantly more rapport with this robot.
What impact could these robots have on education?
Sebo: In education, you cannot have a one-on-one tutor for every child every single day. I think robots are a very natural choice to supplement the amazing work teachers are doing in the classroom, and are better than a computer screen because you can actually see and engage with them. It’s this back-and-forth conversation. It can give you feedback, and it’s much more engaging.
Wright: We don’t want to replace teachers, just augment what they are doing. When we work with children, we use a small robot that has a bobble head and flippers instead of arms. The children love it, and it doesn’t intimidate teachers.
How could a robot help a child learn?
Sebo: In my lab, we demonstrated that kids feel less anxious reading out loud to a robot than they do to a human being. We measured their physiological signals: their heart rate and vocal jitter both revealed less anxiety when they read aloud to a robot.
Wright: When you’re reading in front of a person, you might feel vulnerable and expect judgment. But when you’re doing something vulnerable in front of a robot, you don’t have the same fear of social judgment. Yet the robot can still catch when you mispronounce words and give you some corrective feedback. It’s this nice middle ground: not so social that we worry about judgment, but still interactive.
What are some potential dangers that the field needs to avoid?
Sebo: Once you put a robot face and body on an LLM, the interaction becomes even more real than a text conversation you might have with an LLM. Since LLMs have only recently become accessible to us, we’re studying these questions now to understand the potential harms. With robots interacting with children, for example, we’re running a study to understand when a social connection with the robot is necessary and helpful, and how much is too much.
Wright: There have been stories in the news about people getting too attached to chatbots that use LLMs. We ran a study where we had a robot teach social emotional learning to elementary students. Some kids had a robot with a personality and a backstory, and others had a robot that was more factual. We found that the kids who had the more factual robots actually did better at applying the lesson concepts. It’s a nice philosophical push for other researchers to consider whether we need robots to be ‘characters’ or just be adaptive and responsive without a strong personality.
Zhang: We also need to make sure guardrails are in place. AI software itself doesn’t have a physical form. But when it is integrated into a robot, it becomes physical, and we need to make sure it’s safe.
What does the future hold for human-robot interaction?
Zhang: Within the next five to ten years, robots are going to be integrated into people’s lives in a very big way. It’s really important to study how specific robot behaviors affect how humans behave and how humans shape how robots behave. We need to imagine how robots can be best integrated into a society that has been dominated by humans for thousands of years.
Sebo: Robots have this potential to really assist people, but if they are going to be engaging with people socially, they need to get that part right. Otherwise, we’re just going to turn them off or chuck them in the trash can.