The lecture course "Actionable Knowledge Representation" explores how abstract knowledge can be made actionable within the perception-action loops of robot agents. The course builds on the resources of the AICOR virtual research building: the comprehensive knowledge bases of the KnowRob system, the interactive, web-based knowledge service openEASE, and the practical scenarios provided by virtual robot laboratories.

The course covers methods for representing knowledge in a form that is both machine-understandable and actionable, focusing on acquiring knowledge from diverse sources (for example, by web scraping) and on integrating the resulting knowledge segments. It addresses the critical aspects of reasoning about knowledge and demonstrates how different agents, from websites and AR applications to robots, can use this knowledge to assist users in their daily activities.

The practical component of the course is delivered through platform-independent, Python-based Jupyter notebooks, ensuring accessibility and minimal software requirements for all participants. With the course materials hosted on GitHub, students receive an accessible and comprehensive learning experience that bridges theoretical knowledge representation and its practical application in enhancing daily life through technology.
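To give a first flavor of what "machine-understandable and actionable" knowledge means, the toy sketch below stores facts about a kitchen environment as subject-predicate-object triples and queries them with a pattern-matching function. This is a simplified, self-contained illustration in plain Python; the entity names (`cup-1`, `fridge-1`, etc.) and the `query` helper are hypothetical and do not reflect the actual KnowRob or openEASE query interface used in the course.

```python
# A toy knowledge base: facts about a kitchen stored as
# (subject, predicate, object) triples. Hypothetical names,
# NOT the KnowRob API -- just an illustration of the idea.
KB = {
    ("cup-1", "is-a", "Cup"),
    ("cup-1", "stored-in", "cupboard-2"),
    ("milk-1", "is-a", "MilkBottle"),
    ("milk-1", "stored-in", "fridge-1"),
    ("fridge-1", "is-a", "Refrigerator"),
    ("cupboard-2", "is-a", "Cupboard"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in sorted(KB)
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# An agent deciding where to search for the milk would ask:
print(query("milk-1", "stored-in", None))
# -> [('milk-1', 'stored-in', 'fridge-1')]
```

Knowledge represented this way is actionable because the answer (`fridge-1`) can feed directly into a robot's perception and manipulation plan, which is the pattern the course develops at full scale with KnowRob and openEASE.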
euROBIN Demo
TIAGo robot in the IAI Bremen apartment laboratory.
Multiverse Labs
The Multiverse Framework, supported by euROBIN, is a decentralized simulation framework that integrates multiple advanced physics engines with various photo-realistic graphics engines, with the aim of simulating everything. The Interactive Virtual Reality Labs, which use Unreal Engine for rendering (optimized for the Meta Quest 3 headset) and MuJoCo for physics computation, support the simultaneous operation of multiple labs and enable real-time interaction among multiple users.
Chapter 01 - Creating a Semantic Environment
To enter Chapter 1, click here: Chapter 1!
Chapter 02 - First Plan - Robot Movement and Perception
To enter Chapter 2, click here: Chapter 2!
Chapter 03 - Querying the Knowledge Base System
To enter Chapter 3, click here: Chapter 3!
Chapter 04 - Completing the Full Transportation Task
To enter Chapter 4, click here: Chapter 4!
Chapter 05 - Creating Your Own LLM Assistant
To enter Chapter 5, click here: Chapter 5!