In this laboratory, we investigate the role of common sense in perception: we want to find out what prospective machinery efficiently drives our interpretation of the world, and how it overcomes the high uncertainty resulting from the severe temporal, spatial, and informational limitations of sensor data as the embodied agent interacts with its environment. Perception in such complex scenes (e.g., safety-critical, dynamic ones) not only goes beyond processing sensor data for classical tasks such as object classification (the what- and where-object questions), but also faces the what-, how-, and why-happen questions (e.g., task execution verification, estimation of physical parameters and quantities, and detection of states such as fullness or stability). We generalize the problem of perception by placing events instead of objects at the center of scene understanding: perception takes place as a loop that predicts, on the one hand, the effects of events (anticipation) and, on the other hand, their causes (explanation).
Domestic Object Transportation Laboratory
This laboratory is dedicated to advancing the capabilities of robot agents in seamlessly executing object transportation tasks within human-centric environments such as homes and retail spaces. It provides a versatile platform for exploring and refining generalized robot plans that manage the movement of diverse objects across varied settings for multiple purposes. By focusing on the adaptability and scalability of robotic programming, the lab aims to enhance the understanding and application of robotics in everyday contexts, ultimately improving the generalizability, transferability, and effectiveness of robot plans in real-world scenarios.
In the laboratory, you are equipped with a generalized open-source robotic plan capable of executing various object transportation tasks, including both table setting and cleaning, across diverse domestic settings. These settings range from entire apartments to kitchen-only environments, and the plan is adaptable to various robots. You can customize the execution by selecting the appropriate environment, task, and robot, and then run it within a software container.
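To make the customization concrete, here is a minimal sketch of what such a parameterized invocation could look like; the function name, parameters, and option values below are hypothetical illustrations, not the plan's actual API.

```python
# Minimal sketch of parameterizing a generalized transport plan.
# All names below (run_plan, its arguments, and the option values)
# are hypothetical illustrations, not the lab's actual API.

def run_plan(task: str, robot: str, environment: str) -> None:
    """Dispatch a generalized object-transport plan.

    The same plan body handles different tasks, robots, and
    environments; only these parameters change between runs.
    """
    print(f"Running '{task}' with {robot} in {environment} ...")
    # ... here the plan would perceive objects, compute grasp poses,
    # and move each object to its task-specific destination.

if __name__ == "__main__":
    # Table setting with one robot in a full apartment:
    run_plan(task="table-setting", robot="pr2", environment="apartment")
    # Table cleaning with a different robot in a kitchen-only scene:
    run_plan(task="table-cleaning", robot="tiago", environment="kitchen")
```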
Welcoming 2025
Happy New Year!
Wishing you an amazing 2025 filled with success, joy, and exciting opportunities.
My team and I would love for you to celebrate with us in our Virtual Research Building and help expand our robot community. And for the robots working in your labs, we look forward to seeing them join the party next year!
Here is to a successful and healthy 2025 together!
Best wishes,
Michael & Team

Click: Opening Party Lab
Actionable Knowledge Graph Laboratory
In this virtual research lab, we aim to empower robots with the ability to transform abstract knowledge from the web into actionable tasks, particularly in everyday manipulations like cutting, pouring, or whisking. By extracting information from diverse internet sources, ranging from biology textbooks and Wikipedia entries to cookbooks and instructional websites, the robots create knowledge graphs that inform generalized action plans. These plans enable robots to adapt cutting techniques such as slicing, quartering, and peeling to various fruits using suitable tools, making abstract web knowledge practically applicable in robot perception-action loops.
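As a minimal sketch of the idea, the snippet below stores a few web-derived facts as triples and queries them to parameterize a cutting action; it uses the rdflib Python library, and the ex: vocabulary and the facts themselves are invented for illustration, not the lab's actual ontology.

```python
import rdflib
from rdflib import Namespace

# Toy actionable knowledge graph; vocabulary and facts are invented
# for illustration and do not reflect the lab's actual ontology.
EX = Namespace("http://example.org/akg#")
g = rdflib.Graph()
g.add((EX.apple, EX.hasCuttingTechnique, EX.quartering))
g.add((EX.apple, EX.requiresTool, EX.kitchen_knife))
g.add((EX.orange, EX.hasCuttingTechnique, EX.peeling))
g.add((EX.orange, EX.requiresTool, EX.peeler))

# Ask: which technique and tool should the robot use for an apple?
query = """
PREFIX ex: <http://example.org/akg#>
SELECT ?technique ?tool WHERE {
    ex:apple ex:hasCuttingTechnique ?technique ;
             ex:requiresTool ?tool .
}
"""
for technique, tool in g.query(query):
    print(f"technique={technique}, tool={tool}")
```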
Show me the plan for the following
Action Cores
This laboratory focuses on advancing robotic capabilities in performing core actions such as cutting, mixing, pouring, and transporting within dynamic, human-centered environments like homes.
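One way to picture an action core is as a small, reusable action schema whose roles are filled differently per task instance; the sketch below, including the ActionCore class and its example roles, is a hypothetical illustration rather than the lab's actual representation.

```python
from dataclasses import dataclass, field

@dataclass
class ActionCore:
    """Hypothetical representation of an action core: a reusable
    action schema whose roles are filled per task instance."""
    name: str
    roles: dict = field(default_factory=dict)  # role name -> filler

# The same 'Cutting' core, instantiated for two different tasks:
slicing_bread = ActionCore(
    "Cutting", {"object": "bread", "tool": "bread_knife", "technique": "slicing"})
quartering_apple = ActionCore(
    "Cutting", {"object": "apple", "tool": "kitchen_knife", "technique": "quartering"})
```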
Prerequisite - The Story of the Course
To enter the story, click here: Story Mode Activate!
The Dual-Arm Action Selection Laboratory
In the Dual-Arm Manipulation & Decision Laboratory, you can observe the capabilities of a dual-arm robot performing table setting tasks under uncertainty. This lab focuses on enhancing the decision-making skills of the robot while it interacts with various objects. The ultimate goal is to develop a system that can intelligently adapt to changing environments and tasks, autonomously choosing its next action and which gripper to use based on several factors of its spatial environment.
Currently, the lab showcases two heuristics: the Opportunistic Planning Model (OPM) for action selection, and the Dual-Arm Grasp Action Planner (DAGAP), which decides which gripper to use.
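To give a flavor of such heuristics, here is a deliberately simplified sketch, not OPM or DAGAP themselves (whose actual criteria are richer): the next object is chosen greedily by distance to the robot, and the gripper by whichever arm is currently closer to that object.

```python
import math

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def select_action(robot_xy, left_gripper_xy, right_gripper_xy, objects):
    """Greedy stand-in for opportunistic action/gripper selection.

    objects: dict mapping object name -> (x, y) position.
    Returns (object_name, gripper) or None when nothing is left.
    """
    if not objects:
        return None
    # Opportunistic choice: handle the nearest object first.
    name = min(objects, key=lambda n: dist(robot_xy, objects[n]))
    # Gripper choice: use whichever arm is currently closer.
    gripper = ("left" if dist(left_gripper_xy, objects[name])
               <= dist(right_gripper_xy, objects[name]) else "right")
    return name, gripper

print(select_action((0, 0), (-0.3, 0.2), (0.3, 0.2),
                    {"cup": (0.4, 0.5), "plate": (-0.5, 0.6)}))
```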
Action Plan Parametrisation Laboratory
In this virtual research lab, we aim to empower robots with the ability to use general action plans that can be parameterised from various sources into a variety of actionable tasks, particularly in everyday manipulations like cutting, pouring, or whisking. These plans enable robots to adapt cutting techniques such as slicing, quartering, and peeling to various fruits, or to infer parameters for successful pouring and whisking based on the available ingredients and objects, making abstract knowledge practically applicable in robot perception-action loops.
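As a minimal sketch of the idea, the following hypothetical plan body stays fixed while its parameters select the concrete variant; the function, parameter names, and parameter values are invented for illustration.

```python
# Hypothetical sketch of a general cutting plan whose behavior is
# fixed entirely by its parameters; names are illustrative, not the
# lab's actual plan interface.

TECHNIQUES = {
    # technique -> (number of cuts, whether the peel is removed first)
    "slicing":    {"cuts": 8, "peel_first": False},
    "quartering": {"cuts": 2, "peel_first": False},
    "peeling":    {"cuts": 0, "peel_first": True},
}

def cut_plan(obj: str, technique: str, tool: str) -> None:
    """One general plan body; parameters select the concrete variant."""
    params = TECHNIQUES[technique]
    if params["peel_first"]:
        print(f"peel {obj} with {tool}")
    for i in range(params["cuts"]):
        print(f"cut {i + 1}/{params['cuts']} on {obj} with {tool}")

cut_plan("apple", "quartering", "kitchen_knife")
cut_plan("orange", "peeling", "peeler")
```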
Show me the plan for the following
Enhanced Task Learning Using Actionable Knowledge Graphs Laboratory
In this virtual research lab, we aim to empower robots with the ability to understand the ‘what’ and ‘how’ of task demonstrations, so that they can reason about logged memories and differentiate between performed tasks, particularly in everyday manipulations like cutting or pouring. By integrating actionable knowledge graphs, robots can link the contained object and environment information to action information and map it to the parameters of their generalized action plans. These plans then enable robots to adapt cutting techniques such as slicing, quartering, and peeling as logged in the task demonstrations, allowing for more specialised task execution.
In the laboratory below, you have the opportunity to select a VR task demonstration and then explore actionable knowledge graph content tailored to specific task domains, including fruit cutting, by utilizing resources like Wikipedia, biology textbooks, nutrition information sources, and instructional websites such as WikiHow. Additionally, you’ll have access to a comprehensive robotic action plan designed specifically for fruit cutting tasks. Actionable knowledge graph information integrated with a task demonstration, such as “quartering an apple,” can be translated into specific action parameters for the robot. The customized plan can be tested and refined within a simulated environment.
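A minimal sketch of that translation step might look as follows; the demonstration record and the lookup tables standing in for knowledge-graph facts are invented for illustration.

```python
# Hypothetical sketch: translate a logged VR demonstration label into
# plan parameters via knowledge-graph lookups. The demonstration
# record and the lookup tables are invented for illustration.

demonstration = {"task": "quartering", "object": "apple"}  # from the VR log

# Facts a knowledge graph might contribute (here as plain dicts):
tool_for_technique = {"quartering": "kitchen_knife", "peeling": "peeler"}
halving_axis_for_fruit = {"apple": "core_axis", "orange": "equator"}

plan_parameters = {
    "technique": demonstration["task"],
    "target": demonstration["object"],
    "tool": tool_for_technique[demonstration["task"]],
    "cut_axis": halving_axis_for_fruit[demonstration["object"]],
}
print(plan_parameters)
# -> {'technique': 'quartering', 'target': 'apple',
#     'tool': 'kitchen_knife', 'cut_axis': 'core_axis'}
```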
Interactive Robot Task Learning Laboratory
The Interactive Task Learning Lab delves into the forefront of robotics, exploring advanced learning methodologies that empower robots with the ability to not only perform specific tasks, but also to grasp the essence of the tasks themselves. This includes understanding the goals, identifying potential unwanted side effects, and determining suitable metrics for evaluating task performance. Beyond mere execution, this approach necessitates that robots comprehend the physical laws governing their environment, predict the outcomes of their actions, and align their behavior with human expectations. Central to this lab’s research is the dynamic interaction between robots and humans where the human acts as a teacher, imparting knowledge about the conceptual framework of tasks, encompassing both the underlying concepts and their interrelations. Robots are thus challenged to recognize the limits of their knowledge and actively seek assistance, integrating and acting upon advice in a manner that reflects an understanding of its purpose and implications for modifying their actions. This innovative lab not only pushes the boundaries of robot learning but also paves the way for more intuitive and collaborative human-robot interactions.
For more information, you can visit the webpage of Interactive Task Learning to get a better idea of how a robot can learn from different teaching methodologies.
The AICOR Interactive and Immersive Textbook
The “AI-powered and Cognition-enabled Robotics” textbook represents a novel approach to education in the field of cognitive robotics. This interactive textbook is designed to offer an immersive learning experience, uniquely combining theoretical knowledge with practical application. It features video lectures from world-leading experts on various topics within the domain, providing students with first-class insights into the subject matter. Additionally, the textbook includes exercises that can be conducted within virtual research laboratories, utilizing open-source cutting-edge research software to bridge the gap between theory and practice. Students also have direct access to a wealth of resources and background material through a learning hub, enhancing their study and research capabilities. Currently in its early stages, the textbook offers an introductory chapter as a glimpse into its comprehensive educational approach, setting a new standard for academic resources in cognitive robotics.
AICOR for Newcomers
Visual Programming Interface
“AICOR for Newcomers” is an innovative educational initiative, designed to introduce high school students and other interested individuals to the captivating realm of AI-powered and cognition-enabled robotics. This program employs a visual programming interface powered by Blockly, offering an intuitive and user-friendly method for programming robots. Unlike traditional text-based coding languages, which can be daunting for beginners and young learners, Blockly simplifies the learning process, allowing participants to engage in programming activities without feeling overwhelmed. Through this approach, newcomers are given the opportunity to program robots within a virtual research lab environment to accomplish various tasks, such as making popcorn, serving dinner, and going shopping. “AICOR for Newcomers” not only makes robotics accessible and enjoyable but also serves as a dynamic platform for discovering talent and inspiring future generations to explore the field of robotics.
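Under the hood, Blockly programs are translated into ordinary program code; the snippet below is a hypothetical example of what a generated program for a simple serving task might look like, with the robot API functions invented for illustration.

```python
# Hypothetical example of code a Blockly program might generate for a
# simple dinner-serving task; the robot API below is invented.

def move_to(location: str) -> None:
    print(f"moving to {location}")

def pick(obj: str) -> None:
    print(f"picking up {obj}")

def place(obj: str, location: str) -> None:
    print(f"placing {obj} on {location}")

# Each block in the visual program corresponds to one call:
for item in ["plate", "fork", "cup"]:
    move_to("kitchen_counter")
    pick(item)
    move_to("dining_table")
    place(item, "dining_table")
```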
Try out our online playground:
MAI@Home Competition Lab
MAI@Home (“Multimodal AI Reasoning Challenge @Home”) is a multimodal Embodied-AI competition proposed at ICRA ’25. Here, AI systems are challenged to reason over multi-time spatiotemporal question-answering (QA) tasks in a daily living environment in which a host human continuously makes small changes to the environment.
In this competition, submitted systems must not only process the current state of the environment but also “remember” and manage the state of this complex and dynamic environment, a crucial capability for practical applications in the real world.
One unique feature of this competition is that we provide spatiotemporal knowledge graphs and scene graphs in addition to the video footage, which can be considered a partial observation database that the agents can refer to as external knowledge.
For more information, you can check out the competition website and the GitHub repository.
In the notebook below, you can try out some example queries on the knowledge graph and video data.
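For a taste of what such a query can look like, here is a minimal self-contained sketch using the rdflib Python library; the vocabulary (the ex: namespace, locatedAt, the time-stamped observations) is invented and does not reflect the competition's actual graph schema.

```python
import rdflib
from rdflib import Namespace, Literal

# Toy spatiotemporal graph; schema and facts are invented for
# illustration and differ from the competition's actual data.
EX = Namespace("http://example.org/mai#")
g = rdflib.Graph()
g.add((EX.obs1, EX.object, EX.cup))
g.add((EX.obs1, EX.locatedAt, EX.kitchen_table))
g.add((EX.obs1, EX.time, Literal(1)))
g.add((EX.obs2, EX.object, EX.cup))
g.add((EX.obs2, EX.locatedAt, EX.sink))
g.add((EX.obs2, EX.time, Literal(2)))

# "Where was the cup at each point in time?"
query = """
PREFIX ex: <http://example.org/mai#>
SELECT ?t ?loc WHERE {
    ?obs ex:object ex:cup ; ex:time ?t ; ex:locatedAt ?loc .
} ORDER BY ?t
"""
for t, loc in g.query(query):
    print(f"t={t}: cup at {loc}")
```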
The Virtual RoboCup@Home Arena
The virtual RoboCup@Home arena stands as a platform where student and researcher teams converge to forge the future of service and assistive robot technologies tailored for personal domestic use. As the premier international competition for autonomous service robots, it represents a significant segment of the broader RoboCup initiative, drawing participants from around the globe annually. Within this virtual arena, robots undergo a series of benchmark tests designed to rigorously evaluate their abilities and performance in an intricately simulated, realistic home environment that does not adhere to a standardized setting. The competition’s scope encompasses a wide range of domains including, but not limited to, Human-Robot Interaction and Cooperation, Navigation and Mapping in dynamic environments, Computer Vision and Object Recognition in varying lighting conditions, Object Manipulation, Adaptive Behaviors, Behavior Integration, Ambient Intelligence, as well as Standardization and System Integration. This diverse focus aims to push the envelope in autonomous domestic robotics, challenging teams to innovate in ways that significantly enhance the practicality and integration of robots into everyday life.
Advanced Probabilistic Modeling and Analysis Laboratory
In our Advanced Probabilistic Modeling and Analysis Laboratory, we focus on the innovative use of Joint Probability Trees (JPTs), a formalism for learning and reasoning about joint probability distributions in a tractable manner for practical applications. JPTs are distinctive for their capability to incorporate both symbolic and subsymbolic variables within a unified hybrid model, without necessitating prior knowledge about variable dependencies or specific distribution families. Within the context of our Virtual Research Building, JPTs are employed to construct and analyze joint probability distributions derived from log data of robot experiments. This enables us to critically evaluate experimental outcomes and harness robot experience data for learning purposes, paving the way for advancements in robot performance and decision-making processes.
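To illustrate the core idea (a tree whose leaves hold distributions over both a symbolic and a numeric variable), here is a self-contained toy sketch; it is not the actual JPT learning algorithm, which induces its partitions from data, and all numbers are invented.

```python
import math

# Illustrative toy, not the actual JPT learning algorithm: a fixed
# two-leaf tree over one symbolic variable (task outcome) and one
# numeric variable (grasp duration in seconds). Data are invented.
data = [
    ("success", 2.1), ("success", 2.4), ("success", 1.9),
    ("failure", 4.8), ("failure", 5.3),
]

def leaf_stats(rows):
    """Fit a Gaussian to the numeric values in one leaf."""
    xs = [x for _, x in rows]
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return {"prior": len(rows) / len(data), "mu": mu, "var": var}

# The 'tree' splits on the symbolic variable; each leaf stores a
# leaf prior P(outcome) and a Gaussian over duration.
tree = {outcome: leaf_stats([r for r in data if r[0] == outcome])
        for outcome in {"success", "failure"}}

def density(outcome, x):
    """Joint density p(outcome, duration=x) under the toy tree."""
    leaf = tree[outcome]
    gauss = (math.exp(-(x - leaf["mu"]) ** 2 / (2 * leaf["var"]))
             / math.sqrt(2 * math.pi * leaf["var"]))
    return leaf["prior"] * gauss

# A hybrid query: is a 2-second grasp more plausibly a success?
print(density("success", 2.0), density("failure", 2.0))
```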
The EASE Learning Hub
The EASE Learning Hub emerges as a pivotal educational platform, offering a wealth of open educational resources specifically tailored for AI-powered and cognition-enabled robotics (AICOR). This hub is a product of the collaborative efforts of the research center EASE (Everyday Activity Science and Engineering), dedicated to advancing the understanding and application of everyday activities in science and engineering contexts. It serves as a treasure trove of knowledge, featuring an extensive collection of video lectures that delve into various critical topics such as cognition, robot perception, knowledge representation, and planning, among others. Many of these insightful lectures were delivered during the international EASE Fall Schools, making cutting-edge research accessible to doctoral students and other researchers. Additionally, the hub provides numerous virtual training sessions and tutorials focused on key software components essential for developing AI-powered and cognition-enabled robot agents. This makes the EASE Learning Hub an invaluable resource for students, researchers, and enthusiasts eager to deepen their understanding and skills in the field of intelligent robotics.
The euROBIN Coopetition (= Cooperative Competition) Arenas
The euROBIN Coopetition arenas represent a groundbreaking approach to robotics competitions, blending cooperation with competition across three distinct euROBIN leagues, each designed to address key societal challenges through robotic innovation. In the realm of “Robotic manufacturing for a circular economy”, the Industrial Robots League introduces a novel competition focused on robotic manipulation, featuring an industry-endorsed benchmark that utilizes an internet-connected electronic task board. This board, equipped with a battery-powered microcontroller, meticulously records user interactions and task execution times, broadcasting this data to a public web dashboard and EuroCore for comprehensive analysis. The “Personal robots for enhanced quality of life and well-being” league challenges participants to push the boundaries of service and assistive robot technology, with a particular emphasis on applications within personal and domestic settings. Meanwhile, the “Outdoor robots for sustainable communities” league aims to advance the capabilities of autonomous delivery robots, covering both aerial and ground-based platforms, in support of creating more sustainable communities. Together, these arenas foster a unique “Coopetition” environment where innovation, collaboration, and competitive spirit drive the development of robotics solutions tailored to meet pressing global needs.
openEASE Knowledge Service Laboratory
openEASE is a cutting-edge, web-based knowledge service that leverages the KnowRob robot knowledge representation and reasoning system to offer a machine-understandable and processable platform for sharing knowledge and reasoning capabilities. It encompasses a broad spectrum of knowledge, including insights into agents (notably robots and humans), their environments (spanning objects and substances), tasks, actions, and detailed manipulation episodes involving both robots and humans. These episodes are richly documented through robot-captured images, sensor data streams, and full-body poses, providing a comprehensive understanding of interactions. openEASE is equipped with a robust query language and advanced inference tools, enabling users to conduct semantic queries and reason about the data to extract specific information. This functionality allows robots to articulate insights about their actions, motivations, methodologies, outcomes, and observations, thereby facilitating a deeper understanding of robotic operations and interactions within their environments.
In this laboratory, you have access to openEASE, a web-based interactive platform that offers knowledge services. Through openEASE, you can choose from various knowledge bases, each representing a robotic experiment or an episode where humans demonstrate tasks to robots. To start, select a knowledge base—for instance, “ease-2020-urobosim-fetch-and-place”—and activate it. Then, by clicking on the “examples” button, you can choose specific knowledge queries to run on the selected experiment’s knowledge bases, facilitating a deeper understanding of and interaction with the data. For a detailed overview of the episodes in openEASE, click here.
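For programmatic access outside the web console, a query can in principle be sent to a KnowRob-based knowledge base over rosbridge. The sketch below is heavily hedged: the host, port, service names, message types, and response fields follow the rosprolog interface as commonly deployed, but should be treated as assumptions that can differ between openEASE/KnowRob versions, and triple/3 merely stands in for whatever predicates the selected knowledge base provides.

```python
# Sketch of querying a KnowRob-based knowledge base over rosbridge.
# Host, port, service names, message types, and response fields are
# assumptions about the rosprolog interface and may differ between
# openEASE/KnowRob deployments.
import roslibpy

client = roslibpy.Ros(host='localhost', port=9090)
client.run()

query_srv = roslibpy.Service(client, '/rosprolog/query',
                             'json_prolog_msgs/PrologQuery')
next_srv = roslibpy.Service(client, '/rosprolog/next_solution',
                            'json_prolog_msgs/PrologNextSolution')

# A KnowRob-style query enumerating stored triples; the available
# predicates vary by knowledge base, so triple/3 is an assumption too.
query_srv.call(roslibpy.ServiceRequest(
    {'id': 'q1', 'query': "triple(S, P, O)"}))
solution = next_srv.call(roslibpy.ServiceRequest({'id': 'q1'}))
print(solution['solution'])

client.terminate()
```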
For detailed information, click here!
Lecture Course: Robot Programming with ROS
The lecture course “Robot Programming with ROS” offers an immersive and practical approach to learning the intricacies of programming robots using the Robot Operating System (ROS). Set within the innovative context of virtual research building laboratories, this course provides students with a unique opportunity to apply theoretical concepts in a simulated real-world environment. The course materials, including exercise sheets and programming environments, are readily accessible on GitHub, allowing students to dive into practical, hands-on exercises that significantly enhance their learning experience. This deliberate integration of practical examples into the curriculum is designed to seamlessly connect theoretical knowledge with real-world application, equipping students with the necessary skills and confidence to tackle the challenges of robot programming in various professional settings. Through this course, learners are not just exposed to the fundamentals of ROS but are also prepared to navigate and innovate within the evolving landscape of robotics technology.
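To give a first impression of the programming style the course teaches, here is the classic minimal ROS 1 publisher node in Python; it is the standard beginner example from the ROS ecosystem, not an excerpt from the course's own exercise sheets.

```python
#!/usr/bin/env python
# Minimal ROS 1 publisher node in the style taught in introductory
# ROS courses; not taken from the course's own materials.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(1)  # publish once per second
    while not rospy.is_shutdown():
        pub.publish("hello from the talker node")
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```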
Dynamic Retail Robotics Laboratory
This laboratory focuses on addressing the complex challenges robots face within retail settings. Robots in this lab can autonomously deploy themselves in retail stores and constantly adapt to changing retail environments, including shelf layouts and product placements. They are trained to manage inventory, guide customers, and integrate real-time product information from various sources into actionable knowledge. Our goal is to develop robots that not only support shopping and inventory tasks but also seamlessly adjust to new products and store layouts, enhancing customer service and operational efficiency in the retail ecosystem.
In this laboratory, you are provided with two versatile robot action plans tailored for retail environments. The first plan focuses on creating semantic digital twins of shelf systems in retail stores, while the second is designed for restocking shelves. You have the flexibility to choose the specific task, robot, and environment. Once selected, you can execute the action plan through a software container, streamlining the process of implementing these robotic solutions in real-world retail settings.
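As a rough illustration of the first plan's output, a semantic digital twin of a shelf system can be thought of as a structured, queryable model of shelves, layers, and product facings; the classes and fields below are invented for illustration, not the lab's actual model.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of what a semantic digital twin of a shelf
# system might record; class and field names are invented.

@dataclass
class Facing:
    product: str   # product identifier
    count: int     # items currently detected in this facing
    capacity: int  # items the facing can hold

@dataclass
class ShelfLayer:
    height_m: float
    facings: list[Facing] = field(default_factory=list)

@dataclass
class Shelf:
    shelf_id: str
    layers: list[ShelfLayer] = field(default_factory=list)

    def restock_list(self):
        """Derive restocking tasks from the twin's state."""
        return [(f.product, f.capacity - f.count)
                for layer in self.layers for f in layer.facings
                if f.count < f.capacity]

shelf = Shelf("shelf-7", [ShelfLayer(1.2, [Facing("muesli", 2, 6),
                                           Facing("oats", 4, 4)])])
print(shelf.restock_list())  # -> [('muesli', 4)]
```

Linking such a twin to the second, restocking plan is then a matter of feeding the derived task list into the plan's parameters.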