Unfortunately, I didn’t have the chance to go to this year’s International Conference on Intelligent Robots and Systems (IROS 2011) in San Francisco, California. However, the first subjective blog reviews are now appearing, and I came across one I think is worth sharing with you: “It’s not ‘Jetsons’ but robots are here to stay“.
The author reflects on the future of robotics based on his experience and the discussions currently going on within the community. Beyond the rather “classical approach” of crazily expensive laundry-folding and pancake-flipping “domestic service robots”, there are alternative approaches, e.g. towards “smart technologies” such as Google Goggles, or “evolutionary devices” that aim to enhance already existing products / applications / machines through robotics (adaptive cruise control or autopilots for cars, for instance). So, what is actually likely to happen? I suppose that, within the near future, besides awesome technological achievements, one big step in robotics will be of a social / sociological nature: how robots appear to us might change soon, somewhat like a shift from “robot” to “robotic product” (I read about this shift in a paper as well). Taking the various viewpoints into consideration, the blog post’s author comes to the conclusion that “the future is both near and far”.
Read the original blogpost here.
Mori’s original hypothesis states that as the appearance of a robot is made more human, a human observer’s emotional response to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes that of strong revulsion. However, as the appearance continues to become less distinguishable from a human being, the emotional response becomes positive once more and approaches human-to-human empathy levels.
This area of repulsive response aroused by a robot with appearance and motion between a “barely human” and “fully human” entity is called the uncanny valley. The name captures the idea that an almost human-looking robot will seem overly “strange” to a human being and thus will fail to evoke the empathic response required for productive human-robot interaction.
This is just to share some news I read this morning in a blogpost on bigthink.
I will simply re-post it:
“For the next few months, twelve British volunteers will live in a house also populated by four domestic robots while a team of researchers observe their experiences. In the house is Sunflower, a medication scheduling and dispensing robot; CareRobot is a servant bot that fetches household items and offers food and drink; two Aibo dog robots provide companionship and entertainment. By making observations, researchers want to improve how the robots function and study their psychological effects on humans.
Despite visions of the future where humanoid robots do the house chores and provide stimulating conversation, they have been slow to really appear on the scene. Our mechanical brethren have proven better suited to the factory, assembling and welding rods of steel. But as technology progresses and robots become capable of making our domestic lives easier, particularly for the elderly, their psychological effects on humans remain unknown. To avoid over-stimulation, only one robot in the test house is allowed to function at a time.”
Read the original blogpost on bigthink: Learning to live with robots
Several emerging computer devices read bio-electrical signals (e.g., electro-corticographic signals, skin biopotential or facial muscle tension) and translate them into computer-understandable input. We investigated how one low-cost commercially-available device could be used to control a domestic robot. First, we used the device to issue direct motion commands; while we could control the robot somewhat, it proved difficult to do reliably. Second, we interpreted one class of signals as suggestive of emotional stress, and used that as an emotional parameter to influence (but not directly control) robot behaviour. In this case, the robot would react to human stress by staying out of the person’s way. Our work suggests that affecting behaviour may be a reasonable way to leverage such devices.
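The second idea in the abstract, a noisy bio-signal influencing rather than commanding the robot, can be sketched in a few lines. This is not the thesis code: the signal values, the smoothing window, and the distance mapping are all illustrative assumptions.

```python
# Hedged sketch: treat a noisy "stress" estimate from a hypothetical
# bio-signal headset as an influence on the robot's keep-away distance,
# rather than as a direct motion command.

def smooth(samples, window=5):
    """Moving average over the last `window` samples to damp sensor noise."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def keep_away_distance(stress, base=0.5, gain=2.0):
    """Map a stress estimate in [0, 1] to the distance (in metres) the
    robot should keep from the person. Constants are assumptions."""
    stress = min(max(stress, 0.0), 1.0)
    return base + gain * stress

# Simulated readings (0 = calm, 1 = stressed) from an imagined device.
raw = [0.1, 0.2, 0.9, 0.8, 0.85, 0.3, 0.2]
stress_now = smooth(raw)[-1]
print(round(keep_away_distance(stress_now), 2))  # prints 1.72
```

The point of the design, as I read the abstract, is that the stress signal only biases a parameter of the robot’s behaviour, so unreliable readings degrade gracefully instead of sending the robot the wrong way.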
Ehud Sharlin (Supervisor)
Saul Greenberg (Co-Supervisor)
How do you impress your friends? With a robot boom box that responds to your every hand movement, that’s how. Meet Qbo, TheCorpora’s open-source Linux robot who we’ve gotten to know over the years, even through his awkward phase. Nowadays, this full-grown cutie has stereoscopic “eyes” and a face-identifying system that’s capable of learning, recognizing faces, and responding. With his new hand gesture recognition skills, Qbo will start playing music the moment you hold up a fist. Putting your hand out in a “halt” position stops the song and pointing left or right jumps to different tracks in your playlist. Giving Qbo the peace sign increases the volume (yeah, seriously!), while pointing the peace sign down tells him to take it down a few notches. The ultimate party mate and wing man is even so kind as to announce the name and title of the track. The video after the break best explains what hanging with this fellow is like, but if you’re keen on textual explanations, just imagine yourself awkwardly doing the robot to control your stereo. Go on, we won’t look.
Have a look at the video as well.
Just a quick post to share with you an article I read this morning about NAO learning Japanese calligraphy: “Teaching The NAO Robot Japanese Calligraphy” (by Robots Dreams). I myself am unfortunately not able to read or write Japanese, but I am fascinated nonetheless!
I especially like the description of the last picture in the article: “After a long hard day, NAO climbed into his suitcase for a well deserved rest during the trip back home.” (Will we have to purchase a train/flight ticket for “our robot companions” in the future or are they just considered as some “bulky fragile luggage”? … )
Anyway, I’m curious to see how this goes on! What else will people teach robots? What else will they use robots for? How will they interact with them? It’s interesting already that we somehow ascribe intentions / feelings or other forms of human-likeness to some robots (but also to other artifacts). I have been reading about anthropomorphism and robots, but it is still not clear to me what exactly it is that makes us ascribe states to a robot such as NAO and project that it might feel exhausted after a day of Japanese calligraphy.
What does human-robot interaction (HRI) mean? At first glance, it simply denotes some kind of action taking place between a human and a robot. But whereas we are clear about the human part of this interaction, there is no common ground for the robot part. It seems that a great proportion of the further questions originate from this ambiguity of the term “robot”. We don’t know yet what a “robot” is, what it means for us, what it does, can do, should do, and further, how we as humans want and should relate to it and interact with it.

The difficulty behind these questions lies in the differences we find when comparing robots to humans, animals, machines, technologies or computers. (This, by the way, is closely related to questions about what “being alive” itself means.) Robots – physically embodied units – are tangible and present and share the same space with us. Equipped with a certain kind of “intelligence” and autonomy, robots might not stay where we placed them; they might learn from us, adapt to our habits and display life-like characteristics. Robots have some life-like qualities as well as non-life-like qualities, and probably this is what makes it difficult for us to relate to them. In addition, robots enable new ways of interaction that are more human-like than interacting with a computer or a simple technological device. Robots respond to us … Yet, it seems to me that it is still unclear how we want robots to interact with us, but also how we wish to interact with robots.