Our first creation was the sample project "Robo-tag". We were interested in the complexities of two robots interacting with each other, and in how their resultant behaviours engaged both the environment and those observing. Working backwards, as it were, we set out to first design our robot and then figure out how to imbue it with a personality. We wanted to explore the idea of a robot navigating a space, and so we were led to build a robot with an onboard camera whose feed could be passed to a secondary platform for greater processing power. The camera we chose, the Apple iCam, had the added advantage of being spherical in shape and thus very much resembling a large detached eyeball.
Lego Mindstorms was a great platform on which to quickly build and test our prototypes. Our ideas, however, quickly outgrew the capabilities of the Lego brick as shipped, leading us to look to third-party solutions to program more sophisticated actions. For our first two prototypes, the tagbot and the lawnmower, we used NQC (Not Quite C) to program the brick. NQC offered us a more sophisticated set of instructions and a standard programming environment not offered in the Lego toolkit.
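To give a flavour of what this made possible, here is a minimal NQC sketch of a tagbot-style behaviour: drive forward, and on a bumper hit back off, pivot to a new heading, and resume. The sensor port, motor outputs and timing constants are illustrative assumptions, not our exact wiring.

    task main()
    {
        SetSensor(SENSOR_1, SENSOR_TOUCH);  // bumper switch on input 1 (assumed wiring)
        OnFwd(OUT_A + OUT_C);               // drive motors on outputs A and C
        while (true)
        {
            until (SENSOR_1 == 1);          // block until the bumper is pressed
            OnRev(OUT_A + OUT_C);           // back away from the obstacle
            Wait(50);                       // Wait() counts 1/100ths of a second
            OnFwd(OUT_A);                   // spin one side to pivot to a new heading
            Wait(80);
            OnFwd(OUT_A + OUT_C);           // resume the chase
        }
    }

Constructs like until() and support for multiple concurrent tasks illustrate the more standard programming model NQC provided.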
NQC did not resolve all of our issues, however; we still had performance problems with the brick. The lawnmower demonstrated the difficulty of creating a robot that behaved consistently each time it executed its commands. Too many things influenced the behavior of the brick: the floor surface, the weight of the robot, and even the amount of time the brick had been powered on modified the performance of the RCX.
The RCX brick also had a limited amount of memory and processing power, and the behavior we wanted the robot to exhibit required more of both. It became clear that we needed to think outside the brick.
To accomplish our goals we turned to MAX/MSP, a programming environment produced by Cycling '74. Norman Jaffe has written a free set of external functions that lets MAX communicate with and control the Lego brick from a laptop computer through the IR tower. This allowed us to put all of the robot's brains into the laptop, which had more than enough memory and processing power to handle our demands.
To control our robot, we wrote a patch that reads and analyzes input from a digital video camera; we use this data as the robot's eyes. MAX is able to determine the location of a predefined target (in this case a color) in relation to the position of the camera and the robot itself. Once the location of the target has been acquired, the robot will gesture its intended direction and then turn towards the target.
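The patch itself is visual and so is not reproduced here, but the brick side of the exchange can be sketched in NQC. The sketch assumes the laptop condenses its color analysis into single-byte IR messages, where 1 means the target lies to the left and 2 to the right; these codes, and the gesture timings, are hypothetical values chosen for illustration.

    task main()
    {
        ClearMessage();                      // discard any stale IR byte
        while (true)
        {
            until (Message() != 0);          // block until the patch sends a cue
            if (Message() == 1)              // 1 = target to the left (hypothetical code)
            {
                OnFwd(OUT_C);                // gesture: lean briefly toward the left
                Wait(30);
                Off(OUT_C);
                Wait(20);                    // hold the pose so the gesture reads
                OnRev(OUT_A);                // then pivot left toward the target
                OnFwd(OUT_C);
                Wait(60);
                Off(OUT_A + OUT_C);
            }
            else if (Message() == 2)         // 2 = target to the right (hypothetical code)
            {
                OnFwd(OUT_A);                // mirror image of the sequence above
                Wait(30);
                Off(OUT_A);
                Wait(20);
                OnFwd(OUT_A);
                OnRev(OUT_C);
                Wait(60);
                Off(OUT_A + OUT_C);
            }
            ClearMessage();                  // ready for the next cue
        }
    }

In practice Jaffe's externals let the patch drive the motors directly, so the division of labour between laptop and brick shown here is only one possible arrangement.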
Our current prototype still has many limitations imposed by the technology:
• The camera is tethered to the laptop through a FireWire cable. This restricts the movement of the robot to less than six feet from the laptop. Moving to a wireless camera for our input could solve this.
• Communication through the IR tower requires line of sight. This also limits the autonomous movement of our robot, as it cannot leave the signal range of the tower. Moving to another platform, one more robust than the Lego kit, could solve this, but it would require a significantly greater investment of our resources to build.
• The motors that come with the Mindstorms kit are designed to move the Lego brick around, not to produce the subtle movements desired for our gestures. Also, with only three independent motor controls, it is impossible to create a system with the articulation needed for a sophisticated gesture. As such, the current gestures appear both crude and jerky.
"Our ability to anthropomorphize objects is curious. We assign human bodies and personalities to things that we make, such as kitchen appliances and cars, as well as natural entities such as animals, trees or even clouds." (wilson)
The popular culture of the day only serves to reinforce the direction robots are taking. Apple's animated advertisement for the iMac, for example, anthropomorphizes the monitor so that it appears to be the 'face' of the computer while it pivots and gestures on its flexible stand, or 'neck'. This clever advertising campaign was no doubt inspired by such early computer-animated short films as Pixar's Luxo Jr., in which two desk lamps come to life and express an extraordinary range of emotion and intelligence. Some of the underlying principles of animation that were used to create these seemingly living inanimate objects inspired the behaviour we wanted our robot to exhibit.
Any animated movement can be broken down into three core parts: the preparation for the action, the action itself, and the termination of the action. Think of an animated character that must quickly exit stage left from a standstill. The character first 'winds up', as it were, in the opposite direction, to the right; the audience gains an immediate sense of anticipation that the character is about to move left. Because the exaggerated act more profoundly communicates intention, this is one of the most commonly employed techniques in animation.
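Translated to the robot, the same three-part structure might look like the following NQC sketch of an 'anticipated' left turn: a brief wind-up to the right, the turn itself, and an eased-out stop rather than a dead halt. The motor outputs, power levels and timings are illustrative assumptions.

    task main()
    {
        // Preparation: wind up briefly in the opposite direction
        OnFwd(OUT_A);                   // lean to the right before a left turn
        OnRev(OUT_C);
        Wait(20);
        Off(OUT_A + OUT_C);
        Wait(15);                       // a beat of stillness builds anticipation

        // Action: the turn itself
        OnRev(OUT_A);
        OnFwd(OUT_C);
        Wait(80);

        // Termination: ease out instead of stopping dead
        SetPower(OUT_A + OUT_C, 2);     // drop from full power to soften the finish
        Wait(20);
        Float(OUT_A + OUT_C);           // let the motors coast to rest
    }

Even with the stock motors this reads more like a gesture than a manoeuvre, though, as noted above, the hardware keeps the result crude.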
(forthcoming)