Lecture Course: Actionable knowledge representation

The lecture course "Actionable Knowledge Representation" delves into the sophisticated realm of making abstract knowledge actionable in the perception-action loops of robot agents. The course utilizes the advanced resources of the AICOR virtual research building, including the comprehensive knowledge bases of the KnowRob system, the interactive capabilities of the web-based knowledge service openEASE, and the practical scenarios provided by virtual robot laboratories.

The course explores methodologies for representing knowledge in a form that is both machine-understandable and actionable, focusing on the acquisition of knowledge from diverse sources such as web scraping and on the integration of various knowledge segments. It addresses the critical aspects of reasoning about knowledge and demonstrates how this knowledge can be utilized by different agents, ranging from websites and AR applications to robots, to assist users in their daily activities.

The practical component of the course is facilitated through platform-independent, Python-based Jupyter notebooks, ensuring accessibility and minimal software requirements for all participants. With course materials hosted on GitHub, students are provided with an accessible and comprehensive learning experience that bridges the gap between theoretical knowledge representation concepts and their practical applications in enhancing daily life through technology.

For detailed information, click here!

Multiverse Labs

The Multiverse Framework, supported by euROBIN, is a decentralized simulation framework designed to integrate multiple advanced physics engines with various photo-realistic graphics engines, with the goal of simulating everything. The Interactive Virtual Reality Labs, which use Unreal Engine for rendering (optimized for the Meta Quest 3 headset) and MuJoCo for physics computation, support the simultaneous operation of multiple labs and enable real-time interaction among multiple users.

For detailed information, click here!

Chapter 01 - Creating a Semantic Environment

In Chapter 1, you will learn to create a simulation environment using the Unified Robot Description Format (URDF). You’ll set up a basic URDF model that includes essential objects like a fridge and a table, and visualize it. This foundational knowledge will enable you to understand how robots interact with their surroundings.
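As a taste of what such a description looks like, here is a minimal sketch in Python: a hand-written URDF document containing two static objects and a small helper that parses it. The object names, dimensions, and the `link_names` helper are illustrative assumptions, not taken from the actual course materials.

```python
import xml.etree.ElementTree as ET

# A minimal URDF description with two static objects connected by a fixed
# joint. All names and sizes here are illustrative examples.
URDF = """<?xml version="1.0"?>
<robot name="kitchen_environment">
  <link name="table">
    <visual>
      <geometry><box size="1.0 0.8 0.75"/></geometry>
    </visual>
  </link>
  <link name="fridge">
    <visual>
      <geometry><box size="0.6 0.6 1.8"/></geometry>
    </visual>
  </link>
  <joint name="table_to_fridge" type="fixed">
    <parent link="table"/>
    <child link="fridge"/>
    <origin xyz="1.5 0.0 0.0"/>
  </joint>
</robot>"""

def link_names(urdf_string):
    """Parse a URDF document and return the names of all top-level links."""
    root = ET.fromstring(urdf_string)
    return [link.attrib["name"] for link in root.findall("link")]

print(link_names(URDF))  # → ['table', 'fridge']
```

A visualization tool or simulator can then load such a file to place the fridge and table in the scene; the chapter walks through this with the course's own tooling.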

To enter Chapter 1, click here: Chapter 1!

Chapter 02 - First Plan - Robot Movement and Perception

In Chapter 2, you'll focus on basic robot movements and perception. You'll learn to move the robot to a table and use its sensors to detect a milk carton. Understanding the challenges in perception, such as occlusions, will enhance your knowledge of how robots gather information from their environment.
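To build intuition for the occlusion problem, the following toy sketch (a one-dimensional line-of-sight model, not the course's actual perception API) checks whether an obstacle sits between the robot's sensor and a target object:

```python
def is_occluded(sensor, target, obstacles):
    """Check whether any obstacle lies on the straight line of sight
    between the sensor and the target (1-D toy model of a corridor).

    All positions are scalar coordinates; this is an illustrative
    simplification of real 3-D ray casting."""
    lo, hi = sorted((sensor, target))
    return any(lo < obs < hi for obs in obstacles)

# Robot at 0.0 m, milk carton at 3.0 m, an open cupboard door at 1.5 m
# blocks the view, so the robot must move or change its viewpoint.
print(is_occluded(0.0, 3.0, [1.5]))  # → True
print(is_occluded(0.0, 3.0, [4.0]))  # → False (obstacle is behind the target)
```

In the real simulation, the same idea generalizes to 3-D ray casting against the scene geometry, which is why viewpoint selection matters for reliable detection.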

To enter Chapter 2, click here: Chapter 2!

Chapter 03 - Querying the Knowledge Base System

In Chapter 3, you'll explore the role of a knowledge base in robot decision-making. You'll learn how to make queries to the knowledge base to determine the actions the robot must take to accomplish its tasks, such as perceiving objects in the environment.
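Conceptually, such queries match patterns against stored facts. The sketch below models this with a tiny set of subject-predicate-object triples and a wildcard query helper; the facts and the `query` function are illustrative assumptions, not the actual KnowRob/Prolog interface used in the course.

```python
# Toy knowledge base of (subject, predicate, object) triples.
# The facts below are made-up examples for illustration only.
FACTS = {
    ("milk_carton", "is_a", "PerishableItem"),
    ("milk_carton", "stored_in", "fridge"),
    ("PerishableItem", "requires", "Refrigeration"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None acts as a wildcard),
    analogous to a logical query with unbound variables."""
    return [
        (s, p, o) for (s, p, o) in FACTS
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Where is the milk stored?
print(query(subject="milk_carton", predicate="stored_in"))
# → [('milk_carton', 'stored_in', 'fridge')]
```

A robot can chain such queries, e.g. first asking where an object is stored, then which action opens that container, to derive the steps needed for a task.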

To enter Chapter 3, click here: Chapter 3!

Chapter 05 - Create your own LLM assistant

In Chapter 5, you will dive into generative Large Language Models (LLMs) and learn how to fine-tune them. Using Retrieval-Augmented Generation (RAG), you will create a specialized assistant that serves as a companion for robot programming.
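The core idea of RAG can be sketched in a few lines: retrieve the documents most similar to the user's question, then prepend them as context to the prompt sent to the LLM. The document store, bag-of-words similarity, and prompt template below are deliberately simplified assumptions; the chapter uses a real embedding model and vector store instead.

```python
from collections import Counter
import math

# Tiny illustrative document store; a real RAG pipeline would embed and
# index actual course documentation.
DOCS = [
    "To move the robot arm, call the motion planner with a target pose.",
    "URDF files describe links and joints of a robot model.",
    "The fridge in the simulated kitchen can be opened by the robot.",
]

def bow(text):
    """Bag-of-words vector of a text as a Counter of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, k=1):
    """Return the k documents most similar to the question."""
    q = bow(question)
    ranked = sorted(DOCS, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(question):
    """Augment the question with retrieved context before querying an LLM."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do I move the robot arm?"))
```

Swapping the toy similarity for learned embeddings and passing the built prompt to a generative model gives the specialized assistant the chapter develops.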

To enter Chapter 5, click here: Chapter 5!