Coming Soon

Several of our activities are not yet fully operational within the virtual research laboratories. Although most have been containerized using Docker, they are missing intuitive interfaces that would allow for self-guided operation. Here are a few examples:

NaivPhys4RP - Dark Perception Laboratory

In this laboratory, we investigate the role of common sense in perception. We want to understand the prospective machinery that efficiently drives our interpretation of the world, and to show how it can overcome the high uncertainty caused by the severe temporal, spatial and informational limitations of sensor data while the embodied agent interacts with its environment. Perception in such complex scenes (e.g., safety-critical, dynamic) goes beyond processing sensor data for classical tasks such as object classification and localization, the what- and where-object questions; it also faces the what-, how- and why-happen questions (e.g., verifying task execution, estimating physical parameters and quantities, and detecting states such as fullness or stability). We therefore generalize the problem of perception by placing events rather than objects at the center of scene understanding: perception becomes a loop that predicts both the effects of events (anticipation) and their causes (explanation).
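The sketch below illustrates one way such an event-centred anticipation/explanation loop could be organised. It is only a minimal, illustrative outline; the names (Event, Belief, anticipate, explain) and the toy forward/inverse models are assumptions for this example and are not part of NaivPhys4RP itself.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A scene event, e.g. 'cup tips over', with an estimated probability."""
    name: str
    probability: float

@dataclass
class Belief:
    """The agent's current interpretation of the scene."""
    events: list[Event] = field(default_factory=list)

def anticipate(belief: Belief) -> list[Event]:
    """Predict the likely effects of the events currently believed in."""
    # Toy forward model: a tipping cup likely causes a spill.
    effects = []
    for e in belief.events:
        if e.name == "cup_tips_over":
            effects.append(Event("liquid_spills", 0.9 * e.probability))
    return effects

def explain(observation: str) -> list[Event]:
    """Infer plausible causes of a new observation (abduction)."""
    # Toy inverse model: a puddle is best explained by a spill from a tipped cup.
    if observation == "puddle_on_table":
        return [Event("liquid_spills", 0.8), Event("cup_tips_over", 0.7)]
    return []

def perception_loop(belief: Belief, observation: str) -> Belief:
    """One iteration: explain what was sensed, then anticipate what follows."""
    belief.events.extend(explain(observation))
    belief.events.extend(anticipate(belief))
    return belief

if __name__ == "__main__":
    updated = perception_loop(Belief(), "puddle_on_table")
    for e in updated.events:
        print(f"{e.name}: {e.probability:.2f}")
```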

For detailed information, click here!

The Dual-Arm Action Selection Laboratory

In the Dual Arm Manipulation & Decision Laboratory, you can observe the capabilities of a dual-arm robot performing table setting tasks under uncertainty. This lab focuses on enhancing the decision-making skills of the robot while it interacts with various objects. The ultimate goal is to develop a system that can intelligently adapt to changing environments and tasks, choosing its next action and which gripper to use autonomously based on several factors of its (spatial) environment.

Currently, the lab showcases two heuristics: the Opportunistic Planning Model (OPM) for action selection, and the Dual-Arm Grasp Action Planner (DAGAP), which decides which gripper to use.
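The following sketch shows, in simplified form, how spatial factors can drive both decisions: which object to handle next and which gripper to use. It is not an implementation of OPM or DAGAP; the distance-based scores, the gripper positions and all names are assumptions made for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Candidate:
    """A possible next pick-and-place action in the table-setting task."""
    object_name: str
    position: tuple[float, float]   # object position on the table (x, y) in metres

# Assumed base positions of the two grippers of the dual-arm robot.
GRIPPERS = {"left": (-0.3, 0.0), "right": (0.3, 0.0)}

def opportunistic_score(c: Candidate, robot_xy: tuple[float, float]) -> float:
    """Opportunistic heuristic: prefer objects that are close right now."""
    return -math.dist(c.position, robot_xy)

def choose_gripper(c: Candidate) -> str:
    """Grasp heuristic: use the gripper nearest to the chosen object."""
    return min(GRIPPERS, key=lambda g: math.dist(GRIPPERS[g], c.position))

def select_next_action(candidates: list[Candidate], robot_xy=(0.0, 0.0)):
    best = max(candidates, key=lambda c: opportunistic_score(c, robot_xy))
    return best.object_name, choose_gripper(best)

if __name__ == "__main__":
    table = [Candidate("plate", (0.4, 0.2)), Candidate("mug", (-0.5, 0.1))]
    print(select_next_action(table))  # e.g. ('plate', 'right')
```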

For detailed information, click here!

The FAME Laboratory

The FAME (Future Action Modelling Engine) Lab stands at the cutting edge of robotics research, operating as a virtual research laboratory under the auspices of the ERC project bearing the same name. This ambitious project is dedicated to exploring how robots can conceptualize and deliberate on future actions to preemptively address and avoid execution failures. A central focus of the FAME Lab’s research is enabling robots to learn manipulation tasks by observing instructional videos. This complex process involves the robot identifying essential motion patterns within these videos, understanding the rationale behind their effectiveness without explicit knowledge of the underlying physics, and adapting these critical motions to its own operational context, which introduces a variety of uncertainties. Overcoming these challenges would mark a significant milestone, granting robots the ability to autonomously learn from instructional content, thereby acquiring a wide range of skills and competencies. A practical application of this research could enable a robot to adeptly cut any fruit, using any tool, for any purpose, in any context, showcasing the potential for robots to achieve a remarkable level of autonomous functionality and versatility.
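As a rough illustration of the learning-from-video pipeline described above, the snippet below separates two steps: extracting essential motion segments from an instructional video and adapting them to the robot's own context. Both functions are placeholders; the segment labels, the tool-length parameter and the adaptation rule are purely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MotionSegment:
    """An essential motion pattern extracted from an instructional video."""
    label: str            # e.g. "position blade above fruit"
    start_s: float        # segment start time in the video, seconds
    end_s: float          # segment end time, seconds

def extract_motion_patterns(video_path: str) -> list[MotionSegment]:
    """Placeholder for the video analysis; returns hand-labelled segments here."""
    return [
        MotionSegment("position blade above fruit", 2.0, 3.5),
        MotionSegment("slice downwards", 3.5, 4.2),
    ]

def adapt_to_robot(segment: MotionSegment, tool_length_m: float) -> dict:
    """Map a human motion segment onto robot parameters (illustrative only)."""
    # Assumption: the approach height depends on the length of the robot's tool.
    return {"motion": segment.label, "approach_height_m": tool_length_m + 0.05}

if __name__ == "__main__":
    for seg in extract_motion_patterns("cutting_tutorial.mp4"):
        print(adapt_to_robot(seg, tool_length_m=0.18))
```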

For detailed information, click here!

The TraceBot Laboratory

The TraceBot Lab offers a pioneering platform for conducting research with robotic systems uniquely designed to have a profound understanding of their actions and perceptions, specifically targeting the automation of the membrane-based sterility testing process. At the heart of the TraceBot Lab’s mission is the integration of verifiable actions into robotic manipulations, facilitated by advanced reasoning over sensor-actor trails within a comprehensive traceability framework. This framework capitalizes on digital-twin technology, which serves to replicate the physical world within a virtual environment, enhancing robot motion planners with the ability to autonomously execute self-checking procedures. These innovative procedures aim to create a semantic trace of the robot’s actions, ensuring that every manipulation is not only verifiable but also meets the rigorous standards required in regulated environments like healthcare. Through this approach, the TraceBot Lab is setting new benchmarks for the reliability and accountability of robotic systems in critical sectors.
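To make the idea of a verifiable, semantic trace more concrete, here is a minimal sketch of how each manipulation could be wrapped in a pre-check and a post-check and logged to a trace record. The record fields, the action name and the check callbacks are assumptions for illustration and do not reflect TraceBot's actual traceability framework.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TraceEntry:
    """One verifiable step in the semantic trace of a manipulation."""
    action: str
    parameters: dict
    precondition_ok: bool
    postcondition_ok: bool
    timestamp: float

def execute_with_trace(action: str, parameters: dict,
                       precheck, postcheck, trace: list) -> bool:
    """Run an action only if its precondition holds, then verify the outcome."""
    pre = precheck()
    post = False
    if pre:
        # ... the actual manipulation would be executed here ...
        post = postcheck()
    trace.append(TraceEntry(action, parameters, pre, post, time.time()))
    return pre and post

if __name__ == "__main__":
    trace = []
    ok = execute_with_trace(
        "place_canister",
        {"target": "sterility_test_pump"},
        precheck=lambda: True,    # e.g. canister detected in the gripper
        postcheck=lambda: True,   # e.g. canister observed at the target pose
        trace=trace,
    )
    print(json.dumps([asdict(t) for t in trace], indent=2))
```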

For more information, you can visit the webpage of TraceBot to get a better idea of the complete project.

For detailed information, click here!

Action Plan Parametrisation Laboratory

In this virtual research lab, we aim to empower robots with the ability to use general action plans that can be parameterised by various sources into a variation of actionable tasks, particularly in everyday manipulations like cutting, pouring or whisking. These plans enable robots to adapt cutting techniques such as slicing, quartering, and peeling to various fruits or to infer parameters for successful pouring and whisking based on the available ingredients and objects, making abstract knowledge practically applicable in robot perception-action loops.
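As a rough sketch of what "parameterising a general plan" can mean in practice, the example below expands a fixed plan skeleton with task-specific cutting parameters. The parameter fields, the lookup table and all step names are hypothetical; they only illustrate the idea of one general plan yielding many concrete task variations.

```python
from dataclasses import dataclass

@dataclass
class CuttingParameters:
    """Parameters that specialise a general cutting plan."""
    technique: str        # "slicing", "quartering", "peeling", ...
    tool: str             # e.g. "kitchen_knife"
    repetitions: int      # how many cuts to perform
    hold_object: bool     # whether the second arm must stabilise the fruit

# Hypothetical knowledge source mapping task requests to plan parameters.
TASK_KNOWLEDGE = {
    ("apple", "quartering"): CuttingParameters("quartering", "kitchen_knife", 2, True),
    ("banana", "slicing"):   CuttingParameters("slicing", "kitchen_knife", 8, False),
}

def parameterise_cutting_plan(fruit: str, technique: str) -> list:
    """Expand the general plan skeleton with task-specific parameters."""
    p = TASK_KNOWLEDGE[(fruit, technique)]
    plan = [f"fetch {p.tool}", f"place {fruit} on cutting board"]
    if p.hold_object:
        plan.append(f"hold {fruit} with second gripper")
    plan += [f"perform cut ({p.technique})"] * p.repetitions
    return plan

if __name__ == "__main__":
    for step in parameterise_cutting_plan("apple", "quartering"):
        print(step)
```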

Show me the plan for the following

Enhanced Task Learning Using Actionable Knowledge Graphs Laboratory

In this virtual research lab, we aim to empower robots with the ability to understand the ‘what’ and ‘how’ of task demonstrations, so that they can reason about logged memories and differentiate between performed tasks, particularly in everyday manipulations like cutting or pouring. By integrating actionable knowledge graphs, robots can link the contained object and environment information to action information and map it to the parameters of their generalized action plans. These plans then enable robots to adapt cutting techniques such as slicing, quartering, and peeling as logged in the task demonstrations, allowing for more specialised task execution.

In the laboratory below, you have the opportunity to select a VR task demonstration and then explore actionable knowledge graph content tailored to specific task domains, including fruit cutting, by utilizing resources like Wikipedia, biology textbooks, nutrition information sources, and instructional websites such as WikiHow. Additionally, you’ll have access to a comprehensive robotic action plan designed specifically for fruit cutting tasks. The actionable knowledge graph information, integrated with a task demonstration such as “quartering an apple”, can then be translated into specific action parameters for the robot. The customized plan can be tested and refined within a simulated environment.
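The following sketch shows, in a very reduced form, how knowledge-graph content could be queried to fill the parameters of a generalized cutting plan for a demonstration like "quartering an apple". The triples, predicates and parameter names are invented for this example and are not the lab's actual knowledge graph schema.

```python
# A minimal, illustrative knowledge-graph fragment: triples linking the
# demonstrated task and object to plan-relevant properties.
KNOWLEDGE_GRAPH = [
    ("apple", "hasPart", "core"),
    ("apple", "edibleWhenRaw", "true"),
    ("quartering", "requiresTool", "kitchen_knife"),
    ("quartering", "numberOfCuts", "2"),
    ("core", "shouldBeRemoved", "true"),
]

def query(subject: str, predicate: str) -> list:
    """Return all objects of triples matching (subject, predicate, ?)."""
    return [o for s, p, o in KNOWLEDGE_GRAPH if s == subject and p == predicate]

def parameters_from_demonstration(task: str, target: str) -> dict:
    """Map a logged VR demonstration, e.g. 'quartering an apple',
    to concrete parameters of the generalized cutting plan."""
    return {
        "tool": query(task, "requiresTool"),
        "number_of_cuts": query(task, "numberOfCuts"),
        "remove_parts": [part for part in query(target, "hasPart")
                         if query(part, "shouldBeRemoved") == ["true"]],
    }

if __name__ == "__main__":
    print(parameters_from_demonstration("quartering", "apple"))
```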

Show me the plan for the following

Interactive Robot Task Learning Laboratory

The Interactive Task Learning Lab delves into the forefront of robotics, exploring advanced learning methodologies that empower robots not only to perform specific tasks but also to grasp the essence of the tasks themselves. This includes understanding the goals, identifying potential unwanted side effects, and determining suitable metrics for evaluating task performance. Beyond mere execution, this approach requires robots to comprehend the physical laws governing their environment, predict the outcomes of their actions, and align their behavior with human expectations. Central to this lab’s research is the dynamic interaction between robots and humans, where the human acts as a teacher, imparting knowledge about the conceptual framework of tasks, encompassing both the underlying concepts and their interrelations. Robots are thus challenged to recognize the limits of their knowledge and to actively seek assistance, integrating and acting upon advice in a manner that reflects an understanding of its purpose and its implications for modifying their actions. This innovative lab not only pushes the boundaries of robot learning but also paves the way for more intuitive and collaborative human-robot interactions.
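One small aspect of this interaction, a robot recognising a gap in its task knowledge and asking the teacher before acting, is sketched below. The concept representation, the question format and the integration step are assumptions for illustration, not the lab's actual teaching interface.

```python
from dataclasses import dataclass

@dataclass
class TaskConcept:
    """A piece of task knowledge the robot may or may not have."""
    name: str
    known: bool

def perform_or_ask(concepts: dict, required: list, ask_teacher) -> list:
    """Apply known concepts; for unknown ones, ask the teacher first."""
    actions = []
    for name in required:
        concept = concepts.get(name)
        if concept is None or not concept.known:
            # The robot recognises the limits of its knowledge and seeks help.
            answer = ask_teacher(f"What does '{name}' mean in this task?")
            concepts[name] = TaskConcept(name, known=True)
            actions.append(f"integrate advice: {answer}")
        actions.append(f"apply concept: {name}")
    return actions

if __name__ == "__main__":
    known = {"set_table": TaskConcept("set_table", True)}
    teacher = lambda q: f"(teacher explains) {q}"
    for a in perform_or_ask(known, ["set_table", "avoid_spilling"], teacher):
        print(a)
```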

For more information, you can visit the webpage of Interactive Task Learning to get a better idea of how a robot can learn from different teaching methodologies.

For detailed information, click here!

URoboSim Artificial World Laboratory

URoboSim is an Unreal Engine 4/5 plugin that allows importing robots into Unreal Engine using the URDF/SDF format and controlling them through various ROS interfaces. Using URoboSim, it is not only possible to execute real robot plans in simulation, but also to emulate the actions of a real robot in order to build a continuous belief state of the world.
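As an illustration of what "control via ROS interfaces" typically looks like on the client side, here is a small ROS 1 node that publishes a joint trajectory to a simulated robot. The topic name and joint name below are pure assumptions; the actual interfaces depend on how the URoboSim project and its controllers are configured.

```python
#!/usr/bin/env python
# Minimal ROS 1 node sending a joint trajectory to a simulated robot.
import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

def main():
    rospy.init_node("urobosim_demo_commander")
    # Assumed command topic; adjust to the controllers exposed by the simulation.
    pub = rospy.Publisher("/whole_body_controller/command",
                          JointTrajectory, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect

    traj = JointTrajectory()
    traj.joint_names = ["torso_lift_joint"]          # assumed joint name
    point = JointTrajectoryPoint()
    point.positions = [0.2]                          # target position in metres
    point.time_from_start = rospy.Duration(2.0)
    traj.points.append(point)

    pub.publish(traj)
    rospy.loginfo("Sent trajectory to the simulated robot.")

if __name__ == "__main__":
    main()
```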

You can try out an example project here.

For detailed information, click here!

Reasoning-based Reactive Motion Generation

This virtual research lab explores reactive motion generation through reasoning. We propose image schema-based reasoning for decision-making within motion controllers. Our reasoner is tightly coupled with the controller, continuously monitoring actions and inferring motion primitives to adapt to dynamic environments. By providing real-time feedback, the reasoner enables the controller to make informed decisions and generate appropriate motion responses.
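The sketch below shows one way a reasoner could sit inside the control loop, mapping image-schema-like predicates about the current scene to the next motion primitive. The schema set (CONTACT, CONTAINMENT, BLOCKAGE), the primitive names and the rules are simplified assumptions, not the lab's actual reasoner.

```python
from dataclasses import dataclass

@dataclass
class SceneState:
    """Symbolic features the reasoner monitors, in the spirit of image schemas."""
    contact: bool          # CONTACT: the tool touches the object
    contained: bool        # CONTAINMENT: the object is inside the container
    blocked: bool          # BLOCKAGE: the current motion path is obstructed

def infer_motion_primitive(state: SceneState) -> str:
    """Map the current schematic state to the next motion primitive."""
    if state.blocked:
        return "retract_and_replan"
    if not state.contact:
        return "approach"
    if state.contact and not state.contained:
        return "transfer"
    return "release"

def control_loop(states: list) -> list:
    """The controller queries the reasoner every cycle and adapts its motion."""
    return [infer_motion_primitive(s) for s in states]

if __name__ == "__main__":
    trace = [
        SceneState(contact=False, contained=False, blocked=False),
        SceneState(contact=True, contained=False, blocked=False),
        SceneState(contact=True, contained=False, blocked=True),
        SceneState(contact=True, contained=True, blocked=False),
    ]
    print(control_loop(trace))
```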

For detailed information, click here!