By Brent Donaldson
Editor, NKU Magazine
Tap tap tap...
“Time to wake up, buddy.
Sometimes I wonder if I have to send him back.
It has sensors on top so it should be responding…
Now are you awake?”
When we use a home appliance, or post a photo to Facebook, the robotic technology and A.I. involved in those exchanges largely operate beneath the surface of our awareness. But that dynamic is set to change dramatically over the next several years as we march toward massive increases in the levels of human-robot interaction. From self-driving cars, to A.I. assistants like Siri and Alexa, to robots pitching you cruise packages at Japanese hotels, soon enough it will be impossible not to take notice of the robots that surround us.
Depending on who you are, this prospect either exhilarates or horrifies you. Maybe you side with futurist entrepreneur Elon Musk, tech’s top town-crier about the coming “fleet of artificial intelligence-enhanced robots capable of destroying mankind.” Or maybe you’re more Mark Zuckerberg, who assures us we can “build A.I. so it works for us and helps us.” (Of course, for Mr. Zuckerberg it already does, through facial recognition algorithms for his company, Facebook.)
Enter Dr. Austin Lee, associate professor of communication studies at Northern Kentucky University. And enter his robots Coconut and Pineapple. Lee, who is quickly becoming one of the country’s foremost experts in the field of social robotics, is much more Zuckerberg than Musk. Lee’s recent research has centered on reciprocity and persuasion between humans and robots, as well as our collective fear of robots and A.I. Despite what his findings reveal about that fear, he’s convinced that robots can make the world a better place.
In your research about human distrust and fear of robots, you specifically measured humans’ fear of robots that can make their own decisions. How did you survey this information and what were your findings?
We had a national survey about fear. Fear of everything—spiders, fear of heights, fear of public speaking, crime. And I was lucky to give [the survey company] some items, like fear of robots, fear of artificial intelligence. And the results said that about 26 percent are either moderately or severely afraid of robots and A.I. Afraid that they may take over the world, afraid that they may become self-aware in the future. Females, the older generation, the less educated, lower income, and ethnic minorities are more fearful of robots because their jobs are more easily displaced by robots.
In one of your class presentations, you show your students a photo of a family gathered happily around an Amazon Echo, and next to the photo you’ve placed the words “Mindlessness and fixed pattern response.” Are technologies like Siri and Alexa training us to trust A.I.? Is that dangerous?
I like that picture very much because the family is so immersed in conversation with the Echo. It’s not even humanlike—it’s just a cylinder. But it has a human voice, so people are talking to Alexa as if it is human. If people think logically, there is no point in complying with a robot’s request. But people aren’t attentive to the fact that robots are just machines with CPUs and batteries and actuators. People comply, and use the norms of human interaction in the context of human-robot interaction.
Which is exactly what your co-authored paper, titled “The Role of Reciprocity in Verbally Persuasive Robots,” is all about, right?
Robots used to be confined to factory cages, like welding machines and pressing machines. But now they are among us and working with us as a team. The basic skill in team building and collaboration is communication and persuasion.
In other countries, there are humanoid robots in places like banks, grocery stores, cell phone stores, everywhere. And their job is to sell something to people: a new data plan, an awesome cruise package. But there is no theoretical research behind it. So that study was about demonstrating the robot’s potential to influence people, and as you saw from the results it was unequivocal. Robots are very effective at persuading people and utilizing some principles of persuasion from human communication.
For this research, you programmed one of your robots to assist students who were playing a “Jeopardy”-like quiz game. Essentially, the robots gave some students the correct answers, and gave other students the wrong answers. After the game, the robot then asked the students for assistance. What were your findings?
If you think logically, there is no point for the participant to help the robot. The robot will not be mad if you don’t help him back. The robot will not call you a moocher or an ingrate. But we found that 63 percent of people who received help from the robot for five minutes agreed to help the robot for 15 minutes. When the robot was not really helpful, then only 30 percent (reciprocated). Huge difference.
In other words, it didn’t matter whether participants considered the robot friendly or competent or trustworthy—it only mattered whether the robot was viewed as being helpful.
And that is consistent with the literature in human communication. We focused on the norm of reciprocity because it is one of the most robust norms in human interaction. So that was our first step, to document the persuasive potential of robots. And now we are expanding into many different types of persuasion strategies.
I’m sure the 26 percent of people who are afraid of robots aren’t thrilled that we’re teaching them to gain compliance from humans. But you argue that it can be for the benefit of mankind.
The possibilities are boundless. There is a ton of research on persuasion between humans, and if you apply that theory in the context of human-robot communication, robots can effectively change our attitudes and beliefs and behaviors. As a communication researcher, I want to contribute to society by examining robots’ potential to persuade people toward positive changes. My students are working with this robot, programming it. Their task is to develop new applications for social robots. Like an NKU ambassador robot, or a robotic coach at the cafeteria to persuade students to eat healthier, or a robot to promote the tobacco-free policy on campus. Or a robot at our state-of-the-art recreation center to ask people to work out regularly. I’m focusing on creating positive changes using robots rather than making profits.
This article will appear in the Spring-Summer 2017 issue of NKU Magazine.