This is the second part of a 3-part series on how to build great humanoid robot apps. The goal is to answer one question: what makes the difference between successful applications and failed ones?
In the first part, we saw how to define clear goals: goals that are testable, actionable, and that give a reasonable scope to your future work.
In this second part, we will see how to think about the human-robot interface (HRI). What is different between a mobile app and a robot app? What traps should you avoid when designing the HRI?
Let's look at 3 techniques for designing a great interface for your robot app.
Imagine the Human-Robot Interface
At Aldebaran, we developed some standards over the years for how people can interact with Nao.
All the applications we created could be quit by touching Nao's head. Users came to expect this feature, and it worked well precisely because it was consistent.
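Here is a minimal sketch of that quit convention, assuming the NAOqi Python SDK (ALProxy, ALModule, ALBroker) and the standard "FrontTactilTouched" memory event; verify the names against your SDK version:

```python
# A sketch of "touch the head to quit", assuming the NAOqi Python SDK.
from naoqi import ALProxy, ALModule, ALBroker

ROBOT_IP = "nao.local"  # hypothetical robot address

class QuitOnHeadTouch(ALModule):
    """Quits the app when the front head sensor is touched."""
    def __init__(self, name):
        ALModule.__init__(self, name)
        memory = ALProxy("ALMemory")
        # Call our onTouched() whenever the front head sensor changes.
        memory.subscribeToEvent("FrontTactilTouched", name, "onTouched")

    def onTouched(self, event_name, value, subscriber):
        if value == 1.0:  # 1.0 means pressed, 0.0 means released
            ALProxy("ALTextToSpeech").say("Goodbye!")
            # ... stop your behaviors and exit the app here ...

# A broker is required so the robot can call back into our module;
# the instance must live in a global variable matching its module name.
broker = ALBroker("quitBroker", "0.0.0.0", 0, ROBOT_IP, 9559)
quitter = QuitOnHeadTouch("quitter")
```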
1- How to let the user know what he can do?
The problem comes with everything that is unique to your app. How do you know the user will be able to use your special features?
When creating apps for SoftBank shops, the user must be able to use the app for the first time and have fun. There is no learning curve and no second chance, because the user will probably use your app only once in his life.
In that case, the interface must be as small as possible. When you give a choice to the user, you should tell him what the choices are.
Example 1: in a quiz, don't ask open questions; ask "What is the right answer? A, B, C or D?" and lay out the choices by spelling them out or showing them on the tablet.
Example 2: if you have 10 or 20 possible choices, propose a subset to the user. "Which animal should I imitate? Dog, elephant or tiger?" Each time, the choices are different, so each user may have a different experience.
So give the user choices, but not too many. And be ready to recognize more answers than the choices you announced.
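For instance, here is a minimal sketch of that rotating-subset idea, assuming NAOqi's ALTextToSpeech and ALSpeechRecognition APIs (check the exact calls against your SDK version):

```python
# Sketch: announce a small random subset of choices, but keep
# listening for the full list. Assumes the NAOqi Python SDK.
import random
from naoqi import ALProxy

ROBOT_IP = "nao.local"  # hypothetical robot address

ANIMALS = ["dog", "elephant", "tiger", "cat", "monkey",
           "lion", "bird", "horse", "frog", "snake"]

tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)
asr = ALProxy("ALSpeechRecognition", ROBOT_IP, 9559)

# Announce only three choices, picked at random each time...
offered = random.sample(ANIMALS, 3)
# ...but recognize all ten, so a user who names an animal you did
# not announce still gets an answer.
asr.setVocabulary(ANIMALS, False)  # False = no word spotting
asr.subscribe("animal_menu")

tts.say("Which animal should I imitate? %s, %s or %s?" % tuple(offered))
# The recognized word is then published on ALMemory's "WordRecognized" key.
```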
2- Making menus without buttons 
Remember that you don't have buttons on a robot. Nao and Pepper do have touch sensors, but users may not even know they exist the first time they use the robots.
On Pepper especially, all the touch sensors are hidden, with no visible cue that they exist.
On Pepper, displaying the choices on the tablet works well. Let the robot ask the question and propose the choices out loud, but also display them on the tablet as a visual reference.
I found that when the robot asks the user a question, he will usually answer with his voice, which is what we want.
If the robot misunderstands, users might try to touch the tablet instead, so the choices displayed there must be clickable. This is also an accessibility feature for deaf or mute users.
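A rough sketch of that dual voice-and-tablet input, assuming NAOqi's ALTabletService and ALSpeechRecognition; the web page URL and the "AppChoice" memory event it raises are hypothetical (the page would use qimessaging.js to raise the event when a button is clicked):

```python
# Sketch: ask by voice, show the same clickable choices on the tablet.
# Assumes the NAOqi Python SDK on Pepper; the page URL and the
# "AppChoice" event are hypothetical.
from naoqi import ALProxy

PEPPER_IP = "pepper.local"  # hypothetical robot address
CHOICES = ["A", "B", "C", "D"]

tts = ALProxy("ALTextToSpeech", PEPPER_IP, 9559)
asr = ALProxy("ALSpeechRecognition", PEPPER_IP, 9559)
tablet = ALProxy("ALTabletService", PEPPER_IP, 9559)

# Show a page with one clickable button per choice.
tablet.loadUrl("http://my-app-server/choices.html")  # hypothetical page
tablet.showWebview()

asr.setVocabulary(CHOICES, False)
asr.subscribe("quiz")
tts.say("What is the right answer? A, B, C or D?")

# Then watch both input channels and accept whichever answers first:
# - ALMemory key "WordRecognized" for the spoken answer
# - the hypothetical "AppChoice" event raised by the page's buttons
```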
You can also model menus like automatic answering machines: each question leads to a new set of choices.
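One simple way to represent such a menu is a tree, here as a plain Python dictionary (no robot API involved; ask() is a placeholder for "say the question, then listen for one of the listed choices", which could wrap the speech calls sketched above):

```python
# Sketch: an answering-machine style menu as a nested dictionary.
MENU = {
    "question": "What do you want to do? Play or learn?",
    "choices": {
        "play": {
            "question": "Which game? Quiz or dance?",
            "choices": {"quiz": "start_quiz", "dance": "start_dance"},
        },
        "learn": {
            "question": "Learn about animals or planets?",
            "choices": {"animals": "animal_facts", "planets": "planet_facts"},
        },
    },
}

def run_menu(node, ask):
    """Walk the tree: each answer leads to a sub-menu or an action name."""
    while isinstance(node, dict):
        answer = ask(node["question"], list(node["choices"]))
        node = node["choices"][answer]
    return node  # the leaf is the action to launch
```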
If your app is meant for the B2C market, then people will have time to learn to use it. Just like in a video game, limit the choices at the beginning and make sure users know how to use the basic features.
Later, you can teach them about advanced features.
Even when you need a Q&A menu interface, you can always add shortcuts.
Example: sharing a "fact" on Social Media
- The person learnt a fact through your app, and Pepper asks if he wants to share it with his friends.
- If he says "yes", the robot will propose a list of common websites.
- If he says "more", the robot can propose a second list of less common websites.
- Finally, when you get the user's choice "Facebook", the robot can conclude: "Next time, you can save time by saying 'Share on Facebook' from the beginning".
Design your app menus through Q&As. Teach the user what commands are available, one at a time.
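Here is a sketch of that shortcut pattern, again assuming NAOqi's speech APIs; the phrase lists and the share() helper are illustrative:

```python
# Sketch: accept both the step-by-step menu answers and spoken
# shortcuts at the same time. Assumes the NAOqi Python SDK;
# share() is a hypothetical helper that posts the fact.
from naoqi import ALProxy

PEPPER_IP = "pepper.local"  # hypothetical robot address

MENU_WORDS = ["yes", "no", "more"]
SHORTCUTS = ["share on Facebook", "share on Twitter"]

tts = ALProxy("ALTextToSpeech", PEPPER_IP, 9559)
asr = ALProxy("ALSpeechRecognition", PEPPER_IP, 9559)

# Listen for the menu answers AND the power-user shortcuts at once.
asr.setVocabulary(MENU_WORDS + SHORTCUTS, False)
asr.subscribe("share_menu")
tts.say("Do you want to share this fact with your friends?")

def on_answer(word):  # called with the word read from "WordRecognized"
    if word in SHORTCUTS:
        share(word)  # hypothetical helper
    elif word == "yes":
        tts.say("Where? Facebook, Twitter, or say 'more' for other sites.")
        # After the full menu walk, teach the shortcut for next time:
        # "Next time, you can say 'Share on Facebook' from the beginning."
```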
3- Using the tablet
Not many of the people who will create apps for Pepper will actually come from a robotics background. I expect many iPhone, Android, and web app developers to jump on this new technology.
One word of warning though: don't rely too much on Pepper's tablet.
We are making robotics apps. Pepper is not a tablet with arms that can move! Pepper is a fully interactive robot that can express himself in so many ways.
Use speech, add sound effects and music to your app, use the eye LED colors to convey emotions, use body language to stress a point, use the tablet to show an illustration, etc.
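For example, here is a minimal sketch combining several of these channels at once, assuming NAOqi's ALAnimatedSpeech, ALLeds and ALAudioPlayer (the sound file and animation paths are hypothetical; check them against your robot):

```python
# Sketch: deliver one message through voice, gesture, eye LEDs and a
# sound effect, not just the tablet. Assumes the NAOqi Python SDK.
from naoqi import ALProxy

PEPPER_IP = "pepper.local"  # hypothetical robot address

speech = ALProxy("ALAnimatedSpeech", PEPPER_IP, 9559)
leds = ALProxy("ALLeds", PEPPER_IP, 9559)
audio = ALProxy("ALAudioPlayer", PEPPER_IP, 9559)

# Eyes fade to green over half a second to signal a right answer.
leds.fadeRGB("FaceLeds", 0x00FF00, 0.5)
audio.playFile("/home/nao/sounds/success.wav")  # hypothetical file
# ALAnimatedSpeech mixes gestures into the sentence via ^start tags;
# the animation path below is one example, check your installed set.
speech.say("^start(animations/Stand/Gestures/Enthusiastic_4) "
           "Well done, that is the right answer!")
```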
But please don't make a tablet app where the robot behind it is just decoration. That's the comfortable way, the lazy way.
Step out of the comfort zone and use your imagination to design new UX patterns specifically for humanoid robots.
In this second part of building great humanoid robot apps, you learned 3 techniques for designing a great human-robot interface for your app:
- creating clear choices, to guide the user
- imagining the menu as a succession of Q&As with possible shortcuts
- using every medium of your robot to communicate
Don't miss the first part on defining clear goals for your app.
The third and last part will be about dealing with uncertainty. Your robot is moving in the real world, full of unknowns. How do you handle mistakes, misunderstandings and failures?