CRAM Plan Executive Laboratory

In the CRAM Plan Executive Laboratory, we explore the capabilities of CRAM, the Cognitive Robot Abstract Machine. CRAM is a comprehensive toolbox for developing, implementing, and deploying software on autonomous robots. It stands out by offering an array of tools and libraries that facilitate robot software development, including geometric reasoning and fast simulation mechanisms. These features are pivotal for creating cognition-enabled control programs that significantly enhance robot autonomy. CRAM also includes introspection tools that allow robots to reflect on their actions and autonomously refine their strategies for improved performance. CRAM is developed primarily in Common Lisp, with elements of C/C++, and is integrated into the ROS middleware ecosystem. Through its sophisticated use, this laboratory aims to pioneer advancements in cognitive robotics.
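To give a flavor of what a cognition-enabled control program does, the toy sketch below shows a plan executive that retries a failing action with alternative parameterizations and keeps a record of failures for later introspection. This is a conceptual illustration only, not the CRAM API; the `grasp` action and its parameters are hypothetical.

```python
# Conceptual sketch (NOT the CRAM API): a minimal plan executive that
# retries a failing action with alternative strategies and records
# failures, illustrating the failure-recovery style of cognition-enabled
# control programs.

class PlanFailure(Exception):
    """Raised when an action cannot be completed as parameterized."""

def execute_with_recovery(action, strategies):
    """Try each parameterization in turn until one succeeds."""
    failures = []
    for params in strategies:
        try:
            return action(params)
        except PlanFailure as f:
            failures.append((params, f))  # retained for introspection
    raise PlanFailure(f"all {len(failures)} strategies failed")

# Hypothetical grasp action: only the 'top' grasp succeeds in this toy world.
def grasp(params):
    if params["grasp"] != "top":
        raise PlanFailure(f"unreachable with {params['grasp']} grasp")
    return f"grasped cup with {params['grasp']} grasp"

result = execute_with_recovery(grasp, [{"grasp": "side"}, {"grasp": "top"}])
# The side grasp fails, the executive recovers, and the top grasp succeeds.
```

In CRAM proper, such failure handling is expressed with dedicated plan-language constructs in Common Lisp rather than plain exception handling; the sketch only mirrors the control flow.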

The link below directs you to the CRAM homepage where you can find open-source code, installation instructions, tutorials, and comprehensive documentation for the CRAM software framework.

For detailed information, click here!

KnowRob Knowledge Executive Laboratory

KnowRob is a knowledge processing framework designed specifically for robots. KnowRob offers a comprehensive set of tools for knowledge acquisition, representation, and reasoning. It enables robots to organize information into reusable knowledge chunks and perform reasoning using an expressive logic formalism. Furthermore, KnowRob provides visualization tools and mechanisms for knowledge acquisition, aiding in the creation of robots capable of complex manipulation tasks and interaction within dynamic environments. This system stands out for its ability to combine knowledge representation and reasoning methods with techniques for acquiring and grounding knowledge, serving as a semantic framework for integrating information from different sources and making it actionable in robots.
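As a rough illustration of the kind of taxonomic reasoning such a knowledge base performs, the sketch below queries a toy triple store with a transitive subclass relation. This is not KnowRob's actual interface (KnowRob's reasoning is built on a Prolog-style logic formalism); the triples and predicate names are invented for the example.

```python
# Illustrative sketch (NOT the KnowRob API): a toy triple store with a
# transitive subClassOf query, showing the style of taxonomic reasoning
# a robot knowledge base performs over reusable knowledge chunks.

triples = {
    ("Cup", "subClassOf", "Vessel"),
    ("Vessel", "subClassOf", "Container"),
    ("Cup", "storedIn", "Cupboard"),
}

def is_subclass(sub, sup):
    """True if `sub` reaches `sup` via a chain of subClassOf triples."""
    if sub == sup:
        return True
    return any(is_subclass(o, sup)
               for (s, p, o) in triples
               if s == sub and p == "subClassOf")

# A Cup is transitively a Container, so container-related knowledge
# (e.g. "containers can hold liquids") becomes applicable to cups.
```

The payoff of this style of representation is reuse: a fact stated once about `Container` grounds behavior for every object class beneath it in the taxonomy.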

Clicking on the link below will take you to the KnowRob homepage. There, you have access to everything you need to get started with KnowRob, including its open-source code, step-by-step installation guides, helpful tutorials, and detailed documentation for the software framework.

For detailed information, click here!

GISKARD Motion Executive Laboratory

In the Motion Executive Laboratory, our primary focus is on exploring and enhancing Giskard, a cutting-edge open-source framework dedicated to motion planning and control. Giskard introduces a novel approach to trajectory generation for mobile manipulators by leveraging constraint and optimization-based task space control. This framework is designed to be user-friendly, offering intuitive Python and ROS interfaces that simplify the process of defining constraints for the optimization problem. By solving for the robot’s instantaneous joint velocities, Giskard ensures precise and efficient movement, facilitating flexible and robust manipulation actions.
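The core idea of solving for instantaneous joint velocities from a task-space goal can be sketched for a two-link planar arm. The example below is a simplification, not Giskard's method: it uses a Jacobian-transpose controller as a stand-in for Giskard's constraint- and optimization-based solver, and the link lengths, gain, and goal are arbitrary assumptions.

```python
import math

# Toy sketch of task-space velocity control (NOT Giskard's API or solver):
# for a 2-link planar arm, compute instantaneous joint velocities that
# drive the end effector toward a Cartesian goal. A Jacobian-transpose
# law stands in for Giskard's QP-based constraint optimization.

L1, L2 = 1.0, 1.0  # assumed link lengths

def fk(q1, q2):
    """End-effector position of the 2-link arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def velocity_step(q1, q2, goal, gain=0.1):
    """One control step: joint velocities = gain * J^T * task error."""
    x, y = fk(q1, q2)
    ex, ey = goal[0] - x, goal[1] - y
    # Analytic Jacobian of fk with respect to (q1, q2).
    j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j12 = -L2 * math.sin(q1 + q2)
    j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    j22 = L2 * math.cos(q1 + q2)
    dq1 = gain * (j11 * ex + j21 * ey)
    dq2 = gain * (j12 * ex + j22 * ey)
    return dq1, dq2

# Iterating the controller shrinks the Cartesian error toward the goal.
q1, q2 = 0.3, 0.6
goal = (1.2, 0.8)
for _ in range(200):
    dq1, dq2 = velocity_step(q1, q2, goal)
    q1, q2 = q1 + dq1, q2 + dq2
```

Giskard generalizes this picture: instead of a single hand-tuned control law, arbitrary task constraints are stated declaratively and a numerical optimizer solves for the joint velocities that best satisfy all of them at once.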

For detailed information, click here!

RoboKudo Perception Executive Laboratory

Our Perception Executive Laboratory is centered around RoboKudo, a cutting-edge cognitive perception framework designed specifically for robotic manipulation tasks. Employing a multi-expert approach, RoboKudo excels in processing unstructured sensor data, annotating it through the expertise of various computer vision algorithms. This open-source framework enables the flexible creation and execution of task-specific perception processes by integrating multiple vision methods. The technical backbone of RoboKudo is the Perception Pipeline Tree (PPT), a novel data structure that enhances Behavior Trees with a focus on robot perception. Developed to function within a robot’s perception-action loop, RoboKudo interprets perception task queries, such as locating a milk box in a fridge, and crafts specialized perception processes in the form of PPTs, integrating appropriate computer vision methods to accomplish the tasks at hand.
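The multi-expert idea can be illustrated with a toy pipeline in which several "annotator" experts successively enrich a shared analysis structure. This is a conceptual sketch only, not RoboKudo's actual API or its Perception Pipeline Tree implementation; the annotators and their outputs are invented stand-ins for real computer vision components.

```python
# Illustrative sketch (NOT the RoboKudo API): several annotator "experts"
# each enrich a shared analysis structure in sequence, mirroring the
# multi-expert annotation idea behind task-specific perception processes.

def detect_regions(cas):
    """Stand-in detector: propose candidate object regions."""
    cas["regions"] = [{"bbox": (40, 60, 120, 200)}]
    return cas

def classify_color(cas):
    """Stand-in color expert: annotate each region with a color."""
    for region in cas["regions"]:
        region["color"] = "white"
    return cas

def label_object(cas):
    """Stand-in labeler: a real expert would fuse shape, color, context."""
    for region in cas["regions"]:
        region["label"] = "milk box" if region["color"] == "white" else "unknown"
    return cas

def run_pipeline(cas, annotators):
    """Run the annotators in order over the shared analysis structure."""
    for annotate in annotators:
        cas = annotate(cas)
    return cas

result = run_pipeline({"image": "fridge_snapshot"},
                      [detect_regions, classify_color, label_object])
```

In RoboKudo, the linear sequence above is replaced by a Perception Pipeline Tree, so that which experts run, and in what order, can depend on the perception query and on intermediate results.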

For detailed information, click here!

PyCRAM Laboratory

The PyCRAM Laboratory leverages the capabilities of PyCRAM, a Python 3 re-implementation of the original CRAM framework, designed as a comprehensive toolbox for creating, implementing, and deploying AI-driven, cognition-enabled control software on autonomous robots. The laboratory is dedicated to advancing the frontiers of robot autonomy by providing a modern, accessible platform for developing sophisticated robot control systems, fostering innovation in the design of intelligent and autonomous robotic solutions.
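A central CRAM concept that PyCRAM carries over is the designator: a partial, symbolic description of an action that is resolved to concrete parameters only at execution time. The sketch below illustrates the idea in plain Python; it is not PyCRAM's designator API, and the slot names and defaults are hypothetical.

```python
# Conceptual sketch (NOT the PyCRAM API): an "action designator" as a
# partial, symbolic description resolved to concrete parameters only at
# execution time - a core idea behind CRAM-style plans.

DEFAULTS = {"arm": "right", "grasp": "front"}  # assumed resolution rules

def resolve(designator):
    """Fill the unspecified slots of a partial action description."""
    resolved = dict(DEFAULTS)
    resolved.update(designator)
    return resolved

# The plan states only *what* to do; resolution decides *how*.
action = resolve({"type": "picking-up", "object": "cup"})
```

Deferring the *how* to resolution time is what lets the same plan run on different robots and in different situations without being rewritten.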

For detailed information, click here!

NEEMHub

To make large amounts of data accessible to the research community, enable analysis of that data, support building machine learning models from it, and provide version control for both the data and the models, we have released NEEMHub, an infrastructure that handles all of these requirements in one solution.

For detailed information, click here!

Video-Enhanced Human Activity Understanding

This project integrates advanced machine learning techniques with Unreal Engine’s MetaHuman avatars, providing a sophisticated platform for robotic agents to acquire knowledge of everyday activities and object manipulation. As part of the comprehensive Physics-enabled Virtual Demonstration (PVD) framework, the platform uses video instructions to drive realistic simulations, ensuring that robots can interpret and practice complex tasks in a lifelike, physically governed virtual environment. This methodology not only bridges the gap between theoretical learning and practical execution but also enriches robots’ understanding of human actions, significantly boosting their adaptability and efficiency in real-world scenarios.

For detailed information, click here!