Virtual Labs

Domestic Object Transportation Laboratory

This laboratory is dedicated to advancing the capabilities of robot agents in seamlessly executing object transportation tasks within human-centric environments such as homes and retail spaces. It provides a versatile platform for exploring and refining generalized robot plans that manage the movement of diverse objects across varied settings for multiple purposes. By focusing on the adaptability and scalability of robotic programming, the lab aims to enhance the understanding and application of robotics in everyday contexts, ultimately improving their generalizability, transferability, and effectiveness in real-world scenarios.

In the laboratory, you are equipped with a generalized open-source robotic plan capable of executing various object transportation tasks, including both table setting and cleaning, across diverse domestic settings. These settings range from entire apartments to kitchen environments, and the plan is adaptable to various robots. You can customize the execution by selecting the appropriate environment, task, and robot, and then run it within a software container.
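As a rough sketch of how such a parameterized run might be composed, the following Python snippet assembles a container invocation from an environment, task, and robot selection. The image name and flag names are hypothetical placeholders for illustration, not the lab's actual interface.

```python
def build_container_command(environment: str, task: str, robot: str) -> list[str]:
    """Assemble a (hypothetical) container invocation for the generalized
    transportation plan. The image name and flags are illustrative
    placeholders, not the lab's documented interface."""
    return [
        "docker", "run", "--rm",
        "example/transport-plan",      # placeholder image name
        "--environment", environment,  # e.g. "apartment" or "kitchen"
        "--task", task,                # e.g. "table-setting" or "cleaning"
        "--robot", robot,              # e.g. "pr2"
    ]

# Example selection: run the table-setting task with a PR2 in a kitchen.
print(build_container_command("kitchen", "table-setting", "pr2"))
```

The point of the sketch is that the same plan binary stays fixed while the environment, task, and robot vary as parameters.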

For detailed information, click here!

Actionable Knowledge Graph Laboratory

In this virtual research lab, we aim to empower robots with the ability to transform abstract knowledge from the web into actionable tasks, particularly in everyday manipulations such as cutting, pouring, or whisking. By extracting information from diverse internet sources — ranging from biology textbooks and Wikipedia entries to cookbooks and instructional websites — the robots create knowledge graphs that inform generalized action plans. These plans enable robots to adapt cutting techniques such as slicing, quartering, and peeling to various fruits using suitable tools, making abstract web knowledge practically applicable in robot perception-action loops.

openEASE Knowledge Service Laboratory

openEASE is a cutting-edge, web-based knowledge service that leverages the KnowRob robot knowledge representation and reasoning system to offer a machine-understandable and processable platform for sharing knowledge and reasoning capabilities. It encompasses a broad spectrum of knowledge, including insights into agents (notably robots and humans), their environments (spanning objects and substances), tasks, actions, and detailed manipulation episodes involving both robots and humans. These episodes are richly documented through robot-captured images, sensor data streams, and full-body poses, providing a comprehensive understanding of interactions. openEASE is equipped with a robust query language and advanced inference tools, enabling users to conduct semantic queries and reason about the data to extract specific information. This functionality allows robots to articulate insights about their actions, motivations, methodologies, outcomes, and observations, thereby facilitating a deeper understanding of robotic operations and interactions within their environments.

In this laboratory, you have access to openEASE, a web-based interactive platform that offers knowledge services. Through openEASE, you can choose from various knowledge bases, each representing a robotic experiment or an episode where humans demonstrate tasks to robots. To start, select a knowledge base—for instance, "ease-2020-urobosim-fetch-and-place"—and activate it. Then, by clicking on the "examples" button, you can choose specific knowledge queries to run on the selected experiment's knowledge bases, facilitating a deeper understanding and interaction with the data. For a detailed overview of the episodes in openEASE, click here.
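Since KnowRob is Prolog-based, the queries run in the openEASE console take the form of Prolog goals. The following Python snippet only composes such a query string for illustration; the predicate names (is_action, executes_task, has_type) and the task identifier are assumptions for the sake of the example, not documented openEASE vocabulary.

```python
def make_action_query(task_type: str) -> str:
    """Compose a Prolog-style query string asking for all actions in the
    currently selected episode that execute a task of the given type.
    Predicate names are illustrative assumptions, not documented API."""
    return (
        "findall(Act, (is_action(Act), "
        f"executes_task(Act, Tsk), has_type(Tsk, {task_type})), Actions)."
    )

# Example: ask for all pick-up actions (the task identifier is a placeholder).
print(make_action_query("soma:'PickingUp'"))
```

A query like this would be pasted into the openEASE console after activating a knowledge base; the result binds the Actions variable to the matching manipulation actions of the episode.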

For detailed information, click here!

Dynamic Retail Robotics Laboratory

This laboratory focuses on addressing the complex challenges robots face within retail settings. Robots in this lab can autonomously deploy themselves in retail stores and constantly adapt to changing retail environments, including shelf layouts and product placements. They are trained to manage inventory, guide customers, and integrate real-time product information from various sources into actionable knowledge. Our goal is to develop robots that not only support shopping and inventory tasks but also seamlessly adjust to new products and store layouts, enhancing customer service and operational efficiency in the retail ecosystem.

In this laboratory, you are provided with two versatile robot action plans tailored for retail environments. The first plan focuses on creating semantic digital twins of shelf systems in retail stores, while the second is designed for restocking shelves. You have the flexibility to choose the specific task, robot, and environment. Once selected, you can execute the action plan through a software container, streamlining the process of implementing these robotic solutions in real-world retail settings.

For detailed information, click here!