
Robotic Behavior

This page provides an introduction and a rough overview of the development of robotic behaviors, the coordination and configuration of complex robotic systems, their layers, and some of the architecture patterns we apply. There are two realizations available:

Robotic System Abstraction Levels and Control Architecture

A robotic system can be partitioned into different levels of abstraction, each addressing different concerns. This separation of levels allows level-specific, efficient solutions to be developed. The number of layers as well as their exact separation may be subject to controversial discussion. The functionality represented by the levels shown in the following figure (left part) is, however, present in most complex robotic systems.

Parallel to the levels of abstraction stand the layers of the overall control or coordination architecture with which we typically build our systems, shown in the following figure (right part). The control or coordination architecture follows the idea of the three-tier (3T) robot architecture [3] by Bonasso, Firby, Gat and others.

By the term robotic behavior coordination we mean those parts which are executed on the sequencing layer (3T control architecture) to coordinate and configure the system during run-time to achieve a desired task or goal. We use models as a formal representation of robotic behaviors: robotic behavior models.

The Deliberative Layer, as the topmost one, reasons about the high-level goals of the system, using symbolic task planners, constraint solvers, analysis tools, etc. The Sequencing Layer is responsible for situation-dependent task execution, coordinating and configuring all other software components in the system. The components on the lower skill layer are coordinated and configured in the same way as the components on the deliberative layer. In the 3T control architecture, the Skill Layer, as the lowest layer, realizes the functionalities required to fulfill the tasks and goals of the layers above.
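To make the direction of control concrete, here is a minimal Python sketch of the three tiers. All function names and plan steps are hypothetical illustrations, not part of any SmartSoft API:

<code python>
# Hypothetical sketch of the 3T control direction; names are invented.

def deliberative_layer(goal):
    """Reason on a symbolic level: produce a plan (sequence of task names)."""
    return ["approach-table", "grasp-cup", "deliver-cup"]

def sequencing_layer(plan, skills):
    """Refine each plan step: coordinate/configure the skill layer."""
    for step in plan:
        skills[step](step)

def skill_layer_stub(step):
    """Continuous/reactive processing lives here (navigation, grasping, ...)."""
    print(f"executing skill for: {step}")

skills = {s: skill_layer_stub for s in ["approach-table", "grasp-cup", "deliver-cup"]}
sequencing_layer(deliberative_layer("deliver coffee"), skills)
</code>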

Comparing the two stacks (the abstraction levels and the control layers) side by side reveals a similar granularity in the upper parts and a difference at the lower end. This might look incompatible at first, but it is not conflicting. As the control architecture's main concerns are coordination and configuration, the lower levels realizing the functionalities (skills) are perceived from the sequencing layer as one single layer (i.e. the Skill Layer). At the skill layer of the control architecture, several concerns (such as computation, communication, separation of concerns and roles, etc.) and properties are considered.

Sequencer

The 3T control architecture most importantly decouples the (fast) reactive processing on the lower skill layer from the (low-frequency) symbolic processing on the deliberative layer, using the intermediate sequencing layer to coordinate the system. The sequencer on the corresponding layer is the central coordination and configuration entity; it orchestrates the system by executing the robotic behavior models. By doing so, the control hierarchy is defined at all times. This does not mean that the sequencer is in control of everything in the system on all layers or abstraction levels. Following the subsidiarity principle, the sequencer is responsible for assigning decision spaces to the components on the skill layer, which then operate within the given boundaries. Coordination is therefore not limited to the sequencing layer; there is some coordination on the lower skill layer as well (subsidiarity principle).

For example, the sequencer configures the data-flow between some skill components and activates them. The configured and activated components then interact (coordinate) within the assigned decision spaces. A set of navigation components, for instance, is configured and coordinated to approach a location. On the sequencer level it does not matter how many and which sub-goals are communicated and synchronized between them, or which component waits for another one to send data. As long as the robot is approaching the goal, the sequencer remains passive.

At the core of the subsidiarity principle is the idea that every part of the system has to fulfill its task within the given boundaries as well as possible. If a part is no longer able to perform, it is responsible for reporting this deviation to the next upper level. For example, if the robot gets stuck during navigation, it is the navigation components' responsibility to report this deviation to the sequencer. As the example already indicates, in addition to the subsidiarity principle the system parts follow the principle of cognizant failures [2]: the system's parts should be designed to detect failures when they occur.
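The following Python sketch illustrates subsidiarity and cognizant failure under simplified assumptions; the skill, its decision space, and the event names are made up for illustration:

<code python>
# Hypothetical sketch of subsidiarity and cognizant failure; not SmartSoft API.
# The sequencer assigns a decision space (goal, retry budget); the skill
# operates autonomously inside it and reports deviations upward as events.

import queue

events = queue.Queue()  # events flowing up to the sequencer

def drive_towards(goal):
    """Stub for local navigation; succeeds only for one goal here."""
    return goal == "kitchen"

def navigation_skill(goal, max_attempts=3):
    """Pursue the goal within the assigned boundaries; detect own failure."""
    for attempt in range(max_attempts):
        if drive_towards(goal):  # local (re)planning, sub-goals, ...
            events.put(("navigation", "SUCCESS", goal))
            return
    # cognizant failure: the skill detects it is stuck and reports upward
    events.put(("navigation", "STUCK", goal))

# Sequencer side: passive while the skill works, reacts to reported events.
navigation_skill("kitchen")
print(events.get())   # ('navigation', 'SUCCESS', 'kitchen')
navigation_skill("blocked-room")
print(events.get())   # ('navigation', 'STUCK', 'blocked-room')
</code>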

The robot acts as a single entity, and as one it is coordinated software-wise. Robotic systems are cyber-physical systems, where the body of software is in symbiosis with the physical parts of the system. This motivates a defined coordination hierarchy. The physical robot entity exists only once and cannot be taken apart, at least not in an uncoordinated way. When the base moves to the next room, the manipulator, the camera and all other physically attached parts move with it and cannot decide on their own whether to stay in the same room or not. Another argument for this defined control hierarchy is that in robotic systems, system states cannot change arbitrarily and taken actions cannot simply be reversed, as both have an effect on the physical world, in contrast to pure software systems.

The sequencer makes use of a knowledge base to keep the behavior models, an environment model and a self model on a symbolic level. To use symbolic planners or run-time adaptations, this knowledge is transformed into adequate formats (e.g. PDDL), and the results are imported and transformed back to match executable behavior models (tasks and skills).
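As a rough illustration of such a transformation, the sketch below assembles a PDDL problem string from hypothetical symbolic facts; the domain, predicates and object names are invented for this example:

<code python>
# Hypothetical sketch: exporting symbolic knowledge to a PDDL problem string.
# Facts, predicates and the domain name are made up for illustration.

facts = [("at", "robot", "kitchen"), ("at", "cup", "table1")]
goal = ("at", "cup", "kitchen")

def to_pddl_problem(facts, goal):
    init = "\n    ".join(f"({' '.join(f)})" for f in facts)
    return (
        "(define (problem deliver-cup) (:domain household)\n"
        f"  (:init\n    {init})\n"
        f"  (:goal ({' '.join(goal)})))"
    )

print(to_pddl_problem(facts, goal))
# The planner's resulting plan would then be mapped back onto
# executable TASK/SKILL behavior models.
</code>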

The main system building blocks are components, throughout all layers of the control hierarchy. Independent of a component's functionality and the layer it could be “assigned” to, all components are regular components featuring the same parts and properties (services, life-cycle, etc.). This includes the sequencer in the same way as the components on the deliberation layer, e.g. symbolic planners.
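A minimal sketch of this uniformity, assuming a toy component model; the class, its states and the component names are illustrative, not the SmartSoft component model:

<code python>
# Hypothetical sketch: every building block is a regular component with the
# same generic parts (life-cycle, services); names are invented.

class Component:
    STATES = ("Init", "Alive", "Shutdown")

    def __init__(self, name):
        self.name = name
        self.state = "Init"
        self.services = {}  # service name -> callable endpoint

    def set_state(self, state):
        assert state in self.STATES
        self.state = state

# The sequencer and a symbolic planner are components like any other:
for c in (Component("sequencer"), Component("symbolic-planner"), Component("mapper")):
    c.set_state("Alive")
    print(c.name, c.state)
</code>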

Behavior Models

At the core of the robotic system coordination, the event-discrete sequencer executes and refines behavior models at run-time, given the current robot context (the task it is performing, the environment, etc.). Behavior models represent the tasks (functionality) a robot is capable of performing, on a symbolic level. They define how a certain task is achieved by coordinating and configuring the system, thereby making use of the functionality realized within the components. An example of such a task could be to deliver a cup of coffee, or to recognize a person. Behavior models can be split into two groups of different abstraction levels (compare the previous figure (left part)): TASKs and SKILLs. Behavior models at TASK level express how a certain functionality is achieved by composing different behavior models on TASK and SKILL level. SKILL level behavior models lift the level of abstraction of the functionality realized by the skill level components to a symbolic one. They realize the connection between the components (with their services) and the tasks modelled using the behaviors. The figure below shows an example of composed TASK and SKILL level behavior models.
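The following sketch shows, under strongly simplified assumptions, how TASK level models could compose SKILL level models that bind to components; all names (tasks, skills, components) are hypothetical:

<code python>
# Hypothetical sketch of composed behavior models; names are illustrative.
# A TASK refines into further TASKs/SKILLs; a SKILL binds to components.

def skill(name, components):
    """SKILL level: lift component functionality to a symbolic action."""
    return {"kind": "SKILL", "name": name, "components": components}

def task(name, children):
    """TASK level: compose TASKs and SKILLs into a higher-level behavior."""
    return {"kind": "TASK", "name": name, "children": children}

deliver_coffee = task("deliver-coffee", [
    task("fetch-cup", [
        skill("approach-location", ["navigation"]),
        skill("grasp-object", ["manipulation", "object-recognition"]),
    ]),
    skill("approach-location", ["navigation"]),
    skill("hand-over", ["manipulation"]),
])

def execute(model):
    """Event-discrete refinement of the model at run-time (stubbed)."""
    if model["kind"] == "TASK":
        for child in model["children"]:
            execute(child)
    else:
        print("coordinate components:", model["components"], "for", model["name"])

execute(deliver_coffee)
</code>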

Coordination/Configuration Interface

The SKILL level behavior models use an explicated coordination/configuration interface to interact with the components in the system. It covers all important aspects of the interaction between the SKILL level behaviors and the components:

  * Run-time configuration using modeled parameters, including a commit protocol to enable coordinated configuration changes.
  * Activation of one-shot and continuous activities within the component.
  * Control of the component's life-cycle, the user-defined as well as the generic parts.
  * Events as the results of the component's activities, driving the execution of the behavior models by the sequencer.
  * Dynamic wiring to add run-time control of the connections and the data-flow between the components.
  * Queries to enable the sequencer to query the components for information, e.g. used to query symbolic planners for solutions.
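A toy sketch of such an interface as a Python class follows; the method names (set_param, commit, activate, wire, query) are illustrative stand-ins, not the actual modeled interface:

<code python>
# Hypothetical sketch of the coordination/configuration interface as seen
# from a SKILL level behavior; method names are invented, not SmartSoft API.

class CoordinationInterface:
    def __init__(self):
        self._pending = {}
        self.params = {}

    def set_param(self, key, value):
        """Stage a parameter; applied only on commit (commit protocol)."""
        self._pending[key] = value

    def commit(self):
        """Apply all pending parameters as one coordinated change."""
        self.params.update(self._pending)
        self._pending.clear()

    def activate(self, activity, continuous=False):
        print("activate", activity, "(continuous)" if continuous else "(one-shot)")

    def set_lifecycle(self, state):
        print("life-cycle ->", state)

    def wire(self, service, provider):
        print("connect", service, "to", provider)

    def query(self, request):
        return {"answer-to": request}  # e.g. ask a planner for a solution

nav = CoordinationInterface()
nav.set_param("max_velocity", 0.5)
nav.set_param("goal", "kitchen")
nav.commit()                           # coordinated configuration change
nav.wire("laser-input", "laser-component")
nav.set_lifecycle("Active")
nav.activate("approach-goal", continuous=True)
print(nav.query("plan-for: deliver-coffee"))
</code>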

Orchestration Cycle

On the sequencer layer, the execution is discrete and driven by events sent by the coordinated components. Continuous data and its processing are handled on the skill component layer. This means that, for example, sensor data such as camera images are exchanged between the components directly. The data-flow and its activation are coordinated by the sequencer according to the behavior models. The coordination of the components follows an orchestration cycle pattern [1]. In essence, the configuration and coordination follow three or four steps, illustrated in the following figure: starting with the configuration of the components, followed by the activation of the activities in the components. The events sent to the sequencer, driving it and reporting success or failure information, typically close an orchestration cycle. In some cases an optional fetching of the resulting data from the components is appended, e.g. the symbolic information resulting from object recognition (types of recognized objects, etc.) is fetched from the components.
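The following sketch walks through one such cycle for a hypothetical object recognition component; the class and its methods are invented for illustration:

<code python>
# Hypothetical sketch of one orchestration cycle (configure -> activate ->
# event -> optional fetch); component behavior is stubbed for illustration.

class RecognitionComponent:
    def configure(self, params):
        self.params = params

    def activate(self):
        # continuous data (images) would flow between components directly;
        # only the symbolic outcome is reported upward as an event
        self.result = ["cup", "bottle"]
        return "SUCCESS"  # event sent to the sequencer

    def fetch_result(self):
        return self.result

# sequencer side
component = RecognitionComponent()
component.configure({"object-classes": ["cup", "bottle"]})  # 1. configure
event = component.activate()                                # 2. activate
if event == "SUCCESS":                                      # 3. event drives sequencer
    print("recognized:", component.fetch_result())          # 4. optional fetch
</code>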

Further Reading

M. Lutz, D. Stampfer, A. Lotz and C. Schlegel. Service Robot Control Architectures for Flexible and Robust Real-World Task Execution: Best Practices and Patterns. Informatik 2014, Workshop Roboter-Kontrollarchitekturen, Stuttgart, Germany, Springer LNI der GI, 2014.

Dennis Stampfer, Alex Lotz, Matthias Lutz, Christian Schlegel. The SmartMDSD Toolchain: An Integrated MDSD Workflow and Integrated Development Environment (IDE) for Robotics Software. Special Issue on Domain-Specific Languages and Models in Robotics, Journal of Software Engineering for Robotics (JOSER), 7:3-19, 2016.

Alex Lotz, Juan F. Ingles-Romero, Dennis Stampfer, Matthias Lutz, Cristina Vicente-Chicote, Christian Schlegel. Towards a Stepwise Variability Management Process for Complex Systems. A Robotics Perspective. Int. Journal of Information System Modeling and Design (IJISMD), DOI: 10.4018/ijismd.2014070103, 5(3):55-74, 2014.

Andreas Steck, Christian Schlegel. Managing execution variants in task coordination by exploiting design-time models at run-time. In Proc. IEEE Int. Conf. on Robotics and Intelligent Systems (IROS), San Francisco, USA, September, 2011.

References

[1] M. Lutz, D. Stampfer, A. Lotz and C. Schlegel. Service Robot Control Architectures for Flexible and Robust Real-World Task Execution: Best Practices and Patterns. Informatik 2014, Workshop Roboter-Kontrollarchitekturen, Stuttgart, Germany, Springer LNI der GI, 2014.

[2] F. R. Noreils. Integrating Error Recovery in a Mobile Robot Control System, pages 396–401, 1990.

[3] R. Peter Bonasso, R. James Firby, Erann Gat, David Kortenkamp, David P. Miller, and Mark G. Slack. Experiences with an architecture for intelligent, reactive agents. Journal of Experimental & Theoretical Artificial Intelligence, 1997.
