This is a set of student lab exercises used in different courses. As these lab exercises are embedded in courses, they have a different focus and a different order than the tutorials. The lab exercises supplement the course material and refer to the tutorials wherever reasonable.
Level | Student |
---|---|
Role | We walk you through different roles |
Assumptions | Background information from the related course |
System Requirements | See the section Preparation below |
You will learn | How to program robots, compose systems and run applications with a model-driven approach in simulation and in real-world |
Typically, the software infrastructure is already set up in your lab. Talk to the lab staff to get your credentials, then go ahead with the exercises.
In case you need to set up the software infrastructure on your own, or in case you want to run the software on your own computer, please follow the instructions. There you will find instructions for downloading a ready-to-run virtual machine as well as a link for a direct installation onto your PC without a virtual machine.
We use a simple laser-based obstacle-avoidance algorithm with a simulator to take the first steps with the SmartMDSD toolchain. You then implement a laser-based hallway-following algorithm, a laser-based wall-following algorithm, and finally a laser-based algorithm for detecting and approaching a cylinder.
Your task is to figure out how the laser obstacle-avoidance algorithm works. Can you explain what it does?
You learn:
Follow the tutorial Simple System: Laser Obstacle Avoid and use the following versions:
Projects to use for this Exercise | Version |
---|---|
Component Project | ComponentLaserObstacleAvoid |
System Project | SystemLaserObstacleAvoidP3dxWebotsSimulator |
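Before inspecting the component, it helps to see the control idea in isolation. The following Python sketch shows one common way such a laser-based obstacle avoider can work: drive straight while a frontal cone of the scan is free, otherwise turn in place. The function name, thresholds, cone width, and the velocity convention (positive omega = counter-clockwise) are our assumptions for illustration, not the actual code of ComponentLaserObstacleAvoid.

```python
import math

def avoid_obstacles(ranges, angle_min, angle_inc,
                    stop_dist=0.5, cruise_speed=0.3, turn_speed=0.6):
    """Return (v, omega): translational and rotational velocity.

    ranges: laser distances in metres, ordered from angle_min upward
    in steps of angle_inc (radians). All thresholds are illustrative.
    """
    # Keep only the beams in a frontal cone of +/- 30 degrees.
    front = [r for i, r in enumerate(ranges)
             if abs(angle_min + i * angle_inc) <= math.radians(30)]
    if front and min(front) < stop_dist:
        # Obstacle ahead: stop translating and turn in place.
        return 0.0, turn_speed
    # Path is clear: drive straight ahead.
    return cruise_speed, 0.0
```

Comparing the component's actual behaviour against such a minimal baseline makes it easier to spot what the real implementation adds on top (e.g. smoothing or safety margins).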
The following gives some more information on the laser data communication object:
Eclipse Tips: How to find information about the domain models used in a component?
The domain models comprise the description of the services, that is, communication patterns along with communication objects. Communication objects include documentation on how to use them inside components.
Your task is to let the robot drive in the middle of the hallway. For this, use the laser ranger, calculate motion commands, and send them to the robot. We provide two different worlds which come with different challenges for your algorithm.
Projects to use for this Exercise | Version |
---|---|
Component Project | ComponentExerciseDriveMidway |
System Project | SystemExerciseLaserDriveMidway |
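One possible approach, sketched here in Python under assumptions of ours (function name, gain, speed, and the convention that positive angles lie to the robot's left), is a proportional controller on the difference between the nearest left-hand and nearest right-hand laser reading:

```python
def drive_midway(ranges, angle_min, angle_inc,
                 forward_speed=0.3, gain=1.0):
    """Steer toward the middle of a hallway.

    Compares the shortest distance in the left half of the scan with
    the shortest distance in the right half and turns toward the
    larger free space. Names and gains are illustrative only.
    """
    left, right = [], []
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_inc
        (left if angle > 0 else right).append(r)
    error = min(left) - min(right)      # > 0: more space on the left
    return forward_speed, gain * error  # turn toward the open side
```

Taking the minimum per side makes the controller react to the closest wall point rather than to a single fixed beam, which already copes with slightly tilted robot poses.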
Changing the World in a System Project
Step 1: To change the world used in the system project, edit the *.systemParam file. This file contains the parameters of the components used in the system. ComponentWebots uses the parameter WorldPath to load the Webots world. Change the value of WorldPath to the path of the Webots world file (i.e. the .wbt file).
Step 2: You must run the code generation after the modification. Right-click on the project and select “Run Code-Generation”.
Step 3: Now you can deploy the system and see the selected world loaded in the Webots simulator.
Your task is to let the robot follow the left or right wall. For this, use the laser ranger, calculate motion commands, and send them to the robot. Again, we provide two different worlds which come with different challenges for your algorithm.
Projects to use for this Exercise | Version |
---|---|
Component Project | ComponentExerciseFollowWall |
System Project | SystemExerciseLaserFollowWall |
Your task is to let the robot approach a cylinder (move to a detected cylinder and stop in front of it). For this, use the laser ranger, calculate motion commands, and send them to the robot.
Projects to use for this Exercise | Version |
---|---|
Component Project | ComponentExerciseApproachCylinder |
System Project | SystemExerciseLaserApproachCylinder |
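A very reduced version of the task can be sketched as follows: treat the closest laser beam as the cylinder, turn toward its bearing, and slow down until a stop distance is reached. Real cylinder detection would look for an arc-shaped cluster of beams; everything below (names, thresholds, conventions) is our illustrative assumption, not the exercise's solution.

```python
import math

def approach_cylinder(ranges, angle_min, angle_inc,
                      stop_dist=0.4, max_speed=0.3, turn_gain=1.0):
    """Drive toward the closest scanned object and stop in front of it.

    Stand-in for cylinder detection: the closest beam is assumed to
    hit the cylinder. All names and values are illustrative.
    """
    i_min = min(range(len(ranges)), key=lambda i: ranges[i])
    dist = ranges[i_min]
    bearing = angle_min + i_min * angle_inc   # angle toward the object
    if dist <= stop_dist:
        return 0.0, 0.0                       # arrived: stop in front
    # Turn toward the cylinder and slow down on approach.
    v = min(max_speed, 0.5 * (dist - stop_dist))
    return v, turn_gain * bearing
```

Scaling the forward speed with the remaining distance gives a smooth stop instead of a hard brake at the threshold.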
How to Move a Cylinder in a Webots World?
We now use an Intel RealSense RGBD sensor instead of a laser ranger. An RGBD sensor provides a colour camera image (RGB) and a depth image (D).
See the User Guide for the Visualization Component for detailed information on how to visualize different data such as laser scans, camera images, and depth images.
This exercise shows how you can use a depth camera (in our example, an Intel RealSense RGBD camera) instead of a laser ranger. We illustrate the use of the depth camera with the obstacle-avoidance algorithm, which so far used a laser ranger. We added another component which converts the two-dimensional depth image from the RGBD camera into a laser scan.
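The geometric idea behind such a conversion can be sketched in a few lines. The Python function below mirrors the down-projection concept, not the actual code of ComponentLaserFromRGBDServer; the pinhole camera model with the principal point at the image centre, as well as the function name, are our assumptions.

```python
import math

def depth_row_to_scan(depth_row, h_fov):
    """Convert one horizontal row of a depth image into laser beams.

    depth_row: per-pixel depth along the optical axis (metres).
    h_fov: horizontal field of view of the camera (radians).
    Returns (angles, ranges) in the camera frame. Illustrative only.
    """
    cx = (len(depth_row) - 1) / 2.0          # principal point (pixels)
    f = cx / math.tan(h_fov / 2.0)           # focal length (pixels)
    angles, ranges = [], []
    for u, z in enumerate(depth_row):
        x = (u - cx) * z / f                 # lateral offset (metres)
        angles.append(math.atan2(x, z))      # beam angle
        ranges.append(math.hypot(x, z))      # Euclidean beam length
    return angles, ranges
```

A full implementation would additionally pick the image rows matching the scan plane (or take a minimum over rows), handle invalid depth values, and fill the metadata of the CommMobileLaserScan communication object.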
We use the following fully preconfigured system: SystemExerciseRGBDObstacleAvoid.
After starting this system, the robot moves around and avoids obstacles based on the depth image of the RGBD camera. The world used in this scenario is shown in the figure below.
Please go to the System Builder perspective and see which components are used now and how they are wired:
- ComponentWebots3DCamera generates color and depth images (CommRGBDImage) and sends them to ComponentLaserFromRGBDServer.
- ComponentLaserFromRGBDServer converts the depth image into the laser data format (CommMobileLaserScan) and sends it to ComponentLaserObstacleAvoid. Basically, the conversion is a down-projection of the depth image into a planar laser data format (all scan points lie in a horizontal plane).
- ComponentLaserObstacleAvoid calculates velocity commands for collision-free motion and sends them (CommNavigationVelocity) to ComponentWebotsMobileRobot.
- ComponentWebotsMobileRobot drives the mobile robot according to the given velocities.

What are the take-home messages from this exercise? ComponentLaserObstacleAvoid is unchanged: it still consumes laser scans (CommMobileLaserScan), but they are now provided by ComponentLaserFromRGBDServer instead of ComponentWebots2DLidar. The laser ranger is not used anymore for obstacle avoidance.

For this exercise, we use the following versions:
Projects to use for this Exercise | Version |
---|---|
Component Project | ComponentExerciseRGBDApproachCylinder |
System Project | SystemExerciseRGBDApproachCylinder |
Similar to Exercises: Laser Ranger / Lab 4: Approach Cylinder, we now want to approach a cylinder. The difference is that we now use the RGBD camera:
The world used in this exercise is shown below:
Your task is to write the corresponding software:
The following labs dive deeper into navigation. We start with ready-to-run system projects that involve a number of software components related to different aspects of navigation.
«to be added»
This lab gives you insights into a ready-to-run navigation stack. The navigation stack comprises components for path planning, map building, motion control, localization, task coordination, and world models. The objective is to first get an overview of the navigation capabilities and gain some experience with them before we go into details.
Please follow A More Complex System using the Flexible Navigation Stack.
«to be added»
«to be added»
This tutorial is not part of a series.
Contributions by Nayabrasul Shaik and Thomas Feldmeier.