Representation of Manipulation-Relevant Object Properties and Actions for Surprise-Driven Exploration

The observation of human demonstrations is an important capability for a robot to acquire further knowledge. It avoids fixed preprogramming, which is time-consuming and restricted to predefined situations. Instead of accepting these limitations, the robot should be able to extend its knowledge, e.g. through the observation of humans. However, a human demonstrator does not perform a manipulation on exactly the same 6-DoF trajectory when demonstrating it several times; the demonstrator is probably not even able to repeat it exactly. A robot that is supposed to learn from its observations in an efficient manner has to be able to distinguish genuinely new observations from known but varying demonstrations. Therefore, the robot needs to extract the manipulation-relevant object properties and actions at an abstract level. Only a change in these observed properties and actions (a surprise) triggers an update of the robot's knowledge. Surprise detection at this level is an efficient way of learning and reduces false-positive alerts.
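The idea of surprise detection at an abstract level can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the class and field names are assumptions, and the abstract description is reduced to a few symbolic attributes so that noisy variations of the same demonstration map to the same observation.

```python
from dataclasses import dataclass

# Hypothetical abstract description of an observed manipulation.
# The concrete 6-DoF trajectory is deliberately absent: two noisy
# demonstrations of the same manipulation yield the same instance.
@dataclass(frozen=True)
class AbstractObservation:
    object_type: str   # e.g. "cup"
    start_area: str    # Location Area where the manipulation starts
    end_area: str      # Location Area where it ends
    action: str        # e.g. "push", "pick-and-place"

def is_surprise(observation: AbstractObservation,
                knowledge: set) -> bool:
    """A surprise is an observation whose abstract description is unknown."""
    return observation not in knowledge

# Only a surprise triggers a knowledge update.
knowledge = set()
obs = AbstractObservation("cup", "table", "sink", "pick-and-place")
if is_surprise(obs, knowledge):
    knowledge.add(obs)
```

Because known-but-varying demonstrations collapse onto an already stored abstract observation, they produce no update and no false-positive alert.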

What are the manipulation-relevant object properties and actions that a robot should learn? A manipulation can be characterized by constraints that limit the possible actions on an object. For example, a cup filled with coffee must not be tilted; its possible rotation is therefore constrained.

But who or what defines these constraints? Of course, the object and its state limit the possible actions, as with the cup filled with coffee. It is, however, also important to consider that the location of an object can change its functionality. Humans use different locations in the environment for different actions. For example, dishes are normally washed in a sink, not on a flat table without any water source nearby. Consequently, an abstract representation of the functionalities in the environment is required (see Figure 1).

Figure 1: Abstract map of functionalities in the environment.

The system in which the abstract knowledge is stored consists of two components. The first component is the Object Container, which stores the manipulation-relevant properties directly related to the object and its state. The properties defined by the environment are stored in the second component, the Functionality Map. This map contains an abstract representation of the manipulation-relevant object properties in the environment.

A characteristic property of a manipulation is the region where it starts or ends. We call these regions Location Areas; they are stored in the Functionality Map. The (directed) connections between the Location Areas represent the observed manipulations between start and end locations. These connections carry the manipulation-relevant object properties that are related to the environment, for example whether an object can be pushed along a certain connection. Figure 3 shows exemplary Functionality Maps of two different objects. Note that each object type has its own Functionality Map: each type can afford different functionalities, which is visible in the maps as differing Location Areas, connections, and/or connection properties.
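A Functionality Map as described above can be modeled as a directed graph whose nodes are Location Areas and whose edges carry connection properties. The following is a minimal sketch under that reading; the class names, method names, and the `pushable` property are illustrative assumptions, not the paper's API.

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalityMap:
    """One map per object type: nodes are Location Areas, directed
    edges are observed manipulations with their connection properties."""
    object_type: str
    # (start_area, end_area) -> dict of manipulation-relevant properties
    connections: dict = field(default_factory=dict)

    def observe(self, start_area: str, end_area: str, **properties):
        """Record an observed manipulation between two Location Areas."""
        key = (start_area, end_area)  # connections are directed
        self.connections.setdefault(key, {}).update(properties)

    def can(self, start_area: str, end_area: str, prop: str) -> bool:
        """Check whether a connection is known to support a property."""
        props = self.connections.get((start_area, end_area))
        return bool(props and props.get(prop, False))

# Each object type has its own map, so a cup and a plate may support
# different connections and connection properties.
cup_map = FunctionalityMap("cup")
cup_map.observe("table", "sink", pushable=False)  # carried, not pushed
```

Keeping the connections directed matters because a manipulation from the table to the sink need not imply that the reverse manipulation was ever observed.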

Figure 3: Exemplary Functionality Maps of two different objects.


Publication

  • Susanne Petsch and Darius Burschka. Representation of Manipulation-Relevant Object Properties and Actions for Surprise-Driven Exploration. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1221-1227, San Francisco, California, USA, 2011.

Links

  • Estimation of Spatio-Temporal Object Properties for Manipulation Tasks from Observation of Humans

People