Software Tools

CRAM Plan Executive Laboratory

In the CRAM Plan Executive Laboratory, we explore the capabilities of CRAM, the Cognitive Robot Abstract Machine. CRAM is a comprehensive toolbox for developing, implementing, and deploying software on autonomous robots. It stands out by offering an array of tools and libraries that facilitate robot software development, including geometric reasoning and rapid simulation mechanisms. These features are pivotal for creating cognition-enabled control programs that significantly enhance robot autonomy. CRAM also includes introspection tools that allow robots to reflect on their actions and autonomously refine their strategies for improved performance. CRAM is developed primarily in Common Lisp, with elements of C/C++, and is integrated into the ROS middleware ecosystem; this laboratory aims to pioneer advances in cognitive robotics through its sophisticated use.
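CRAM plans themselves are written in Common Lisp, but the core pattern of a cognition-enabled plan (perform an action, detect a failure, and retry with a revised strategy) can be sketched in a few lines of Python. All names below are illustrative and are not part of the CRAM API:

```python
class GraspFailure(Exception):
    """Signals that a grasp attempt did not succeed."""

def perform_grasp(obj, grasp):
    # Illustrative stand-in for a real motion-execution call.
    if grasp not in obj["feasible_grasps"]:
        raise GraspFailure(f"{grasp} grasp failed on {obj['name']}")
    return f"grasped {obj['name']} with {grasp} grasp"

def grasp_with_recovery(obj, grasps=("front", "top", "side")):
    """Try alternative grasps, mirroring CRAM's failure-handling constructs."""
    for grasp in grasps:
        try:
            return perform_grasp(obj, grasp)
        except GraspFailure:
            continue  # reflect on the failure, retry with the next strategy
    raise GraspFailure(f"all grasps failed on {obj['name']}")

milk = {"name": "milk", "feasible_grasps": {"top"}}
print(grasp_with_recovery(milk))  # -> grasped milk with top grasp
```

In CRAM proper, the failure-handling branch is where introspection comes in: the plan can reason about why an attempt failed before choosing the next strategy.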

The link below directs you to the CRAM homepage where you can find open-source code, installation instructions, tutorials, and comprehensive documentation for the CRAM software framework.

For detailed information, click here!

KnowRob Knowledge Executive Laboratory

KnowRob is a knowledge processing framework designed specifically for robots. It offers a comprehensive set of tools for knowledge acquisition, representation, and reasoning, enabling robots to organize information into reusable knowledge chunks and to reason over them with an expressive logic formalism. KnowRob also provides visualization tools and mechanisms for knowledge acquisition, aiding the creation of robots capable of complex manipulation tasks and of interaction within dynamic environments. The system stands out for combining knowledge representation and reasoning methods with techniques for acquiring and grounding knowledge, serving as a semantic framework that integrates information from different sources and makes it actionable on robots.
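KnowRob answers queries in a Prolog-style logic formalism; purely to illustrate the idea of reasoning over reusable knowledge chunks, the toy Python sketch below infers an object's location through a transitive "in" relation. It is not KnowRob's actual API:

```python
# Toy fact base: each triple is a reusable knowledge chunk.
facts = {
    ("milk", "in", "fridge"),
    ("fridge", "in", "kitchen"),
    ("kitchen", "in", "apartment"),
}

def located_in(obj, place, facts):
    """True if obj is in place directly, or transitively via containment."""
    if (obj, "in", place) in facts:
        return True
    return any(located_in(mid, place, facts)
               for (o, rel, mid) in facts if o == obj and rel == "in")

print(located_in("milk", "kitchen", facts))   # True, via the fridge
print(located_in("fridge", "milk", facts))    # False
```

The same inference would be a one-line Prolog rule in KnowRob; the point is that the robot derives answers that were never stated explicitly.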

Clicking on the link below will take you to the KnowRob homepage. There, you have access to everything you need to get started with KnowRob, including its open-source code, step-by-step installation guides, helpful tutorials, and detailed documentation for the software framework.

For detailed information, click here!

Giskard Motion Executive Laboratory

In the Motion Executive Laboratory, our primary focus is on exploring and enhancing Giskard, a cutting-edge open-source framework dedicated to motion planning and control. Giskard introduces a novel approach to trajectory generation for mobile manipulators by leveraging constraint and optimization-based task space control. This framework is designed to be user-friendly, offering intuitive Python and ROS interfaces that simplify the process of defining constraints for the optimization problem. By solving for the robot’s instantaneous joint velocities, Giskard ensures precise and efficient movement, facilitating flexible and robust manipulation actions.
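Giskard casts motion generation as a constraint-based optimization over the robot's instantaneous joint velocities. The simplest core of that idea, mapping a desired task-space velocity to joint velocities through the manipulator Jacobian, can be sketched for a two-link planar arm. This is a simplified stand-in for Giskard's actual QP formulation, not its API:

```python
import math

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """Jacobian of the end-effector position of a two-link planar arm."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def joint_velocities(J, v):
    """Solve J * qdot = v for a 2x2 Jacobian (Cramer's rule)."""
    (a, b), (c, d) = J
    det = a * d - b * c  # nonzero away from singularities (sin(q2) != 0)
    return [(v[0] * d - b * v[1]) / det,
            (a * v[1] - c * v[0]) / det]

J = jacobian_2link(0.3, 0.8)
qdot = joint_velocities(J, [0.1, 0.0])  # drive the end effector along x
```

Giskard generalizes this step into a quadratic program, which is what lets it add inequality constraints (joint limits, collision avoidance) on top of the task goals.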

For detailed information, click here!

RoboKudo Perception Executive Laboratory

Our Perception Executive Laboratory is centered around RoboKudo, a cutting-edge cognitive perception framework designed specifically for robotic manipulation tasks. Employing a multi-expert approach, RoboKudo excels in processing unstructured sensor data, annotating it through the expertise of various computer vision algorithms. This open-source framework enables the flexible creation and execution of task-specific perception processes by integrating multiple vision methods. The technical backbone of RoboKudo is the Perception Pipeline Tree (PPT), a novel data structure that enhances Behavior Trees with a focus on robot perception. Developed to function within a robot’s perception-action loop, RoboKudo interprets perception task queries, such as locating a milk box in a fridge, and crafts specialized perception processes in the form of PPTs, integrating appropriate computer vision methods to accomplish the tasks at hand.
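The flavor of a Perception Pipeline Tree can be conveyed by a minimal behavior-tree-style sequence node whose children are annotator "experts" that enrich a shared analysis structure. The sketch below is purely illustrative; RoboKudo's real PPTs are far richer:

```python
def detect_regions(cas):
    """Toy expert: hypothesize object regions in the sensor data."""
    cas["hypotheses"] = [{"region": (40, 60)}, {"region": (200, 230)}]
    return True

def classify(cas):
    """Toy expert: annotate each hypothesis with a class label."""
    for i, h in enumerate(cas.get("hypotheses", [])):
        h["label"] = "milk_box" if i == 0 else "cup"
    return True

def sequence(*annotators):
    """Behavior-tree sequence node: run children until one fails."""
    def run(cas):
        return all(annotator(cas) for annotator in annotators)
    return run

pipeline = sequence(detect_regions, classify)
cas = {"query": "find the milk box in the fridge"}
pipeline(cas)
matches = [h for h in cas["hypotheses"] if h["label"] == "milk_box"]
```

In RoboKudo, such a tree is assembled on demand from the perception task query, so each query gets a pipeline composed of the vision methods it actually needs.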

For detailed information, click here!

Advanced Probabilistic Modeling and Analysis Laboratory

In our Advanced Probabilistic Modeling and Analysis Laboratory, we focus on the innovative use of Joint Probability Trees (JPTs), a formalism that facilitates the learning and reasoning of joint probability distributions in a tractable manner for practical applications. JPTs are distinctive for their capability to incorporate both symbolic and subsymbolic variables within a unified hybrid model, without necessitating prior knowledge about variable dependencies or specific distribution families. Within the context of our Virtual Research Building, JPTs are employed to construct and analyze joint probability distributions derived from log data of robot experiments. This enables us to critically evaluate experimental outcomes and harness robot experience data for learning purposes, paving the way for advancements in robot performance and decision-making processes.
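To convey the hybrid symbolic/subsymbolic idea behind JPTs, the toy sketch below partitions robot log data on a symbolic variable and models a numeric variable per partition. It is a deliberate simplification, not the actual JPT learning algorithm or library API:

```python
from collections import defaultdict
from statistics import mean, stdev

# Toy experiment log: (action_outcome, grasp_duration_seconds)
data = [("success", 2.1), ("success", 2.4), ("success", 1.9),
        ("failure", 4.0), ("failure", 4.5)]

def fit_tree(rows):
    """Split on the symbolic variable; model the numeric one per leaf."""
    leaves = defaultdict(list)
    for outcome, duration in rows:
        leaves[outcome].append(duration)
    n = len(rows)
    return {outcome: {"p": len(ds) / n,        # leaf prior
                      "mean": mean(ds),        # per-leaf numeric model
                      "std": stdev(ds)}
            for outcome, ds in leaves.items()}

tree = fit_tree(data)
p_success = tree["success"]["p"]      # P(outcome = success) = 0.6
expected = tree["success"]["mean"]    # expected duration given success
```

A real JPT chooses its splits from the data rather than from a fixed variable, which is how it avoids needing prior knowledge about variable dependencies.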

For detailed information, click here!

PyCRAM Laboratory

The PyCRAM Laboratory focuses on PyCRAM, a Python 3 re-implementation of the original CRAM framework that serves as a comprehensive toolbox for creating, implementing, and deploying AI-driven, cognition-enabled control software on autonomous robots. The laboratory is dedicated to advancing robot autonomy by providing a modern, accessible platform for developing sophisticated robot control systems, fostering innovation in the design of intelligent and autonomous robotic solutions.
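A central abstraction PyCRAM inherits from CRAM is the designator, an underspecified symbolic description that is resolved into concrete parameters at execution time. The sketch below renders that pattern in plain Python; the class and field names are illustrative and do not match PyCRAM's actual classes:

```python
class ActionDesignator:
    """Underspecified action description, resolved at execution time."""
    def __init__(self, action_type, **properties):
        self.action_type = action_type
        self.properties = properties

    def resolve(self, knowledge):
        """Fill in missing parameters from a knowledge source."""
        resolved = dict(self.properties)
        resolved.setdefault("arm", knowledge["free_arm"])
        resolved.setdefault("grasp", knowledge["preferred_grasp"])
        return resolved

knowledge = {"free_arm": "left", "preferred_grasp": "front"}
pick_up = ActionDesignator("pick-up", object="milk")
params = pick_up.resolve(knowledge)
# params == {'object': 'milk', 'arm': 'left', 'grasp': 'front'}
```

The payoff of the pattern is that the same high-level plan runs in different situations: only the resolution step changes as the robot's knowledge does.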

For detailed information, click here!

NEEMHub

NEEMHub answers a recurring set of requirements: making large amounts of robot experiment data accessible to the research community, analyzing that data, building machine learning models from it, and keeping both the data and the models under version control. It provides a single infrastructure that handles all of these needs in one solution.

For detailed information, click here!

Video-Enhanced Human Activity Understanding

This project integrates advanced machine learning techniques with Unreal Engine’s MetaHuman avatars, providing a sophisticated platform for robotic agents to acquire knowledge of everyday activities and object manipulation. As part of the comprehensive Physics-enabled Virtual Demonstration (PVD) framework, this platform utilizes video instructions to facilitate realistic simulations, ensuring that robots can interpret and practice complex tasks in a lifelike, physically governed virtual environment. This methodology not only bridges the gap between theoretical learning and practical execution but also enriches the robotic understanding of human actions, significantly boosting their adaptability and efficiency in real-world scenarios.

For detailed information, click here!

Multiverse Labs

The Multiverse Framework, supported by euROBIN, is a decentralized simulation framework designed to integrate multiple advanced physics engines with various photo-realistic graphics engines within a single simulation. The Interactive Virtual Reality Labs, which use Unreal Engine for rendering (optimized for the Meta Quest 3 headset) and MuJoCo for physics computation, support the simultaneous operation of multiple labs and enable real-time interaction among multiple users.

For detailed information, click here!

Cloud-based Robotics Platform

binder.intel4coro.de

We present a cloud-based robotics platform for teaching and training concepts of cognitive robotics. Rather than forcing learners or students to install a new operating system and bulky, fragile software onto their personal laptops just to solve the tutorials or coding assignments of a single robotics lecture, the platform lets them skip the technical setup and dive directly into the content of cognitive robotics. To achieve this, we use the cloud service BinderHub to deploy and operate containerized applications, including robotics simulation environments and software collections based on the Robot Operating System (ROS). The web-based integrated development environment JupyterLab is combined with RvizWeb and XPRA to provide real-time visualization of sensor data and robot behavior in a user-friendly environment for interacting with robotics software. The proposed platform is a valuable tool for education and research in cognitive robotics and has the potential to democratize access to these fields. It has already been employed successfully in several academic courses, demonstrating its effectiveness in fostering knowledge and skill development.
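BinderHub builds the container images it serves from configuration files in the course repository, following the repo2docker conventions (for example, an `environment.yml` in a `binder/` directory). A minimal sketch might look like the following; the package list is illustrative, not the platform's actual configuration:

```yaml
# binder/environment.yml: declares the container's Python environment
name: cognitive-robotics-tutorial
channels:
  - conda-forge
dependencies:
  - python=3.10
  - jupyterlab      # the web-based IDE served to the learner
  - pip
  - pip:
      - pycram      # illustrative; the actual image pins its own packages
```

Keeping the environment declarative in this way is what lets every learner receive an identical, working setup from a single repository link.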

Platform address: https://binder.intel4coro.de/

For detailed information, click here!