Results

Motivation:

Owing to rising product differentiation and shorter product life cycles, human labor remains a key component of flexible and agile production systems and is still prevalent in manual assembly tasks. Workers in manufacturing are exposed to awkward postures that are associated with muscular pain and musculoskeletal disorders (MSDs). In Germany, MSDs were responsible for 18.2% of illness-related absences in 2022, leading to production downtime costs of 21.5 billion Euros. The individualization of assembly workplaces is a promising approach to address the ergonomic needs of individual workers, in contrast to a “one-size-fits-all” workplace design. Adaptive workstations allow a data-driven, automatic adjustment of the workplace according to individual human characteristics and needs to maintain productivity, health and work satisfaction.

Problem statement and intervention:

However, most assisting devices and systems on the market, e.g. height-adjustable desktops, have been developed for the assembly of small components. The literature mostly contains concepts for adaptive assembly systems that have not been evaluated outside of simulations. More information is needed on the implementation of adaptive systems and their effects on workers, especially at manual assembly lines and for large work objects.

FELICE solution:

In FELICE, TUD developed an adaptive workstation for the assembly of large work objects, to be used in a car door assembly use case at FELICE partner Centro Ricerche Fiat (CRF). The FELICE adaptive workstation (AWS) is a mobile, battery-powered assembly station capable of adaptively configuring selected workplace parameters:

  • The physical adaptation module changes height and angle of the work object during assembly to improve working posture
  • The adaptive light system module adapts the local light intensity
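The height-adaptation idea can be illustrated with a minimal sketch. The elbow-rule offsets, task categories and travel limits below are generic ergonomics assumptions chosen for illustration, not the actual FELICE algorithm:

```python
# Hypothetical sketch: deriving a work-object height from worker anthropometry.
# The "elbow rule" (work surface near standing elbow height, offset by task
# type) is a common ergonomics guideline; all numbers here are illustrative.

def target_object_height(elbow_height_cm: float, task: str) -> float:
    """Return a work-object height (cm) based on the worker's elbow height."""
    # Precision work is typically raised above elbow height, heavy work lowered.
    offsets = {"precision": +5.0, "light": -5.0, "heavy": -20.0}
    return elbow_height_cm + offsets.get(task, -5.0)

def clamp_to_actuator(height_cm: float, lo: float = 60.0, hi: float = 140.0) -> float:
    """Limit the command to the workstation's physical travel range."""
    return max(lo, min(hi, height_cm))

command = clamp_to_actuator(target_object_height(105.0, "light"))
print(command)  # 100.0
```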

The system is connected to the intelligent execution system developed in FELICE and can receive commands via standardized messages, using the open-source FIWARE Context Broker. Workers at the line remain in control via a suite of manual intervention and control options powered by other FELICE modules.
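As an illustration of such standardized messaging, the sketch below builds an NGSI-v2 attribute update of the kind the FIWARE Context Broker accepts. The entity id and attribute names are hypothetical, not the project's actual data model:

```python
# Illustrative NGSI-v2 attribute update, as FELICE modules might exchange via
# the FIWARE Orion Context Broker. Entity id and attribute names are assumed.
import json

def build_aws_update(height_cm: float, tilt_deg: float) -> tuple:
    """Return the (URL path, JSON body) for a PATCH to the context broker."""
    entity_id = "urn:ngsi-ld:AdaptiveWorkstation:001"  # hypothetical id scheme
    body = {
        "height":    {"value": height_cm, "type": "Number"},
        "tiltAngle": {"value": tilt_deg,  "type": "Number"},
    }
    # NGSI-v2 attribute update endpoint: PATCH /v2/entities/<id>/attrs
    return f"/v2/entities/{entity_id}/attrs", json.dumps(body)

path, payload = build_aws_update(102.5, 12.0)
print(path)
print(payload)
```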

Results of the FELICE project:

  • A prototype of the adaptive workstation hardware for the positioning of large work objects and modification of local lighting has been developed following a human centred design philosophy.
  • Simulation-based evaluations of the prototype have been carried out, including lighting simulations as well as digital EAWS evaluations.
  • Methods and algorithms for the adaptive positioning of large work objects to improve posture have been developed.
  • The effect of the AWS on human posture has been assessed using Motion Capture Systems. 
  • The FELICE adaptive workstation has been tested at the Campus Manufacturing hall at Centro Ricerche Fiat at the FCA plant in Melfi, Italy, and reached a maturity level of TRL5.

Interfaces with other modules/partner work:

  • The AWS system is compatible with the Advanced Visualization module (AVM) developed by AEGIS. Key features can be triggered remotely using the AVM interface.
  • The AWS can be manually controlled with voice commands using the FELICE Speech Control module developed by UNISA.
  • The AWS system is fully integrated into the Intelligent Execution System developed by FHOOE. In the automatic mode, the AWS system receives workflows from the global layer to adapt itself to improve physical and environmental ergonomics. The FELICE Orchestrator manages the interface between the AWS and other FELICE modules.

Future Work:

  • The AWS system will be used in future studies to assess the effect of adaptive systems on human posture and productivity.
  • Beyond improving the working posture, the intelligent positioning of work objects could be used to aid the application of body action forces during assembly.

Images:

Publications:

  • Pätzold, M. (2024). Wirkungsanalyse einer adaptiven Höhen- und Neigungsanpassung großer Arbeitsobjekte in der manuellen Montage. In Dokumentation des 70. Arbeitswissenschaftlichen Kongresses. GfA-Press.
  • Pätzold, M. (2023). Adaptive Positionierung großer Arbeitsobjekte in der industriellen Montage zur Reduktion von physischen Belastungen. In: GfA-Press.
  • Pätzold, M., Emmel, J. (2024). Simulation of the effects of an adaptive height adjustment algorithm for large planar work objects at manual assembly stations. In Springer Proceedings.

Contact:

Technical University of Darmstadt

Institute for Ergonomics and Human Factors

Otto-Berndt-Straße 2 

64287 Darmstadt, Germany

Motivation:

The introduction of highly automated ecosystems as work environments holds huge potential to pave the way into a new era of production. Pairing the speed, endurance, and precision of machines with human flexibility and problem-solving ability can boost both productivity and worker well-being. Systems with high levels of automation are already yielding productivity benefits and now operate in close vicinity to human workers. The FELICE project set out to research next-generation technology for meaningful human-robot collaboration, ultimately increasing efficiency, reducing workload, and promoting worker well-being.

Problem statement and intervention:

However, introducing AI-based work management systems and autonomous robots as colleagues into manufacturing environments can have disruptive effects on human work. These effects may come at the cost of safety and elevated worker stress. Stress and high mental load can deteriorate efficiency and ultimately lead to less acceptable and less trustworthy human-robot relationships. Therefore, the design of successful socio-technical work must aim to boost the safety and well-being of workers through human-centred design processes.

FELICE solution:

The FELICE project adopted a human-centred design process from the outset. At its heart was close collaboration between partners throughout two development cycles across the project’s lifetime, including thorough interim analyses and evaluations of the human-robot collaboration.

Results of the FELICE project:

  • Analysis of human work based on systematic task design and allocation analysis for humans and robots
  • Analysis of safety risks
  • Mock-ups for visual interface designs for the FELICE ecosystem
  • Design of the joint activity of human-robot collaboration to improve shared understanding
  • Consideration of the robot’s entire behaviour as an interface, e.g. spatial movement
  • Assignment and description of the roles of workers within the FELICE system
  • Evaluation of the human-centred design
  • Worker training for the use and introduction of the FELICE system
  • Definition of human collaborative requirements for shared understanding

Interfaces with other modules/partner work:

IfADo collaborated closely on the modules concerned with designing and assessing human work. In particular, IfADo worked closely with use-case partner CRF on evaluations, training, and the design of the worker’s role and task layout, which also included analysing the workers’ and robots’ tasks. Visual interface and automation design with regard to the workstation were addressed in collaboration with partners AEGIS and TUD. Task-analysis-based workflows were implemented collaboratively with FHOOE.

Future work and exploitation of results:

Future work will include a detailed analysis of human collaborative behaviour in handover tasks. The definition of HR-Collaborative requirements and the design process can be used as blueprints for similar efforts and help the introduction of robots within workplaces.

Publications:

Dreger, F., Karthaus, M., Metzler, Y., Tauro, F., Carrelli, V., Athanassiou, G., Rinkenauer, G. (2024). Requirements for Successful Human Robot Collaboration: Design Perspectives of Developers and Users in the Scope of the EU Horizon Project FELICE. In: Alexandra Medina-Borja and Krystyna Gielo-Perczak (eds) Human Factors in Robots, Drones and Unmanned Systems. AHFE (2024) International Conference. AHFE Open Access, vol 138. AHFE International, USA. http://doi.org/10.54941/ahfe1005008

Metzler, Y., Renker, J., Zickerick, B., Dreger, F., Karthaus, M. & Rinkenauer, G. (2023). KI-koordinierte Kollaboration zwischen Mensch und Roboter: Implikationen für Arbeitsgestaltung und Einführung in Organisationen. Zeitschrift für wirtschaftlichen Fabrikbetrieb, 118(10), 682-687. https://doi.org/10.1515/zwf-2023-1127

Contact:

Felix Dreger

Cognitive Ergonomics | Human-Technology Interaction

Department of Ergonomics

Leibniz Research Centre for Working Environments and Human Factors

Ardeystraße 67, 44139 Dortmund, Germany

Tel. + 49 (0) 231 1084 371 | room 2.103

Motivation:

The FELICE project aims to revolutionize human-robot collaboration in industry, with a specific emphasis on automotive assembly lines. Within this framework, CAL-TEK has introduced and developed state-of-the-art Digital Twin technology. This innovation provides specific guidelines on how to enhance the operational efficiency of the assembly line and of the human-robot collaboration (in accordance with Industry 4.0 principles), and it provides an environment for testing the Automatic Mode of the FELICE system.

Problem statement and intervention:

Modern industries face challenges in improving production and operational efficiency while reacting properly to dynamic market demands. The FELICE Digital Twin addresses these challenges by providing both fast-time and real-time simulation environments for what-if analyses and experimentation.

FELICE solution:

The Digital Twin developed by CAL-TEK integrates three core modules: a Discrete Event Simulation (DES) module for conducting “what-if” analyses and experiments, a Real-Time Virtual Simulation (RTVS) module for more realistic real-time simulations and operational efficiency monitoring, and a Digital Mirror that replicates the physical line. All modules are fully FIWARE-compatible, ensuring seamless interoperability within and outside the FELICE system prototype. Furthermore, the DES and RTVS modules are directly integrated with the FELICE Orchestrator and other relevant FELICE system modules, enabling automated experimentation and optimization of the assembly process, including the integration of human-robot collaboration.
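The what-if spirit of the DES module can be conveyed with a toy model. The earliest-free allocation rule, the two resources, and the task durations below are invented for illustration and are far simpler than the actual DES module:

```python
# Toy what-if run: a worker and a robot pull tasks from a shared list; each
# task goes to the earliest-free resource, with stochastic durations drawn
# around invented means. Returns the makespan of the run.
import random

def simulate(task_mean_durations, seed=0):
    rng = random.Random(seed)
    free_at = [0.0, 0.0]  # next-free times for [worker, robot]
    for mean in task_mean_durations:
        r = min(range(2), key=lambda i: free_at[i])   # earliest-free resource
        free_at[r] += rng.uniform(0.8 * mean, 1.2 * mean)
    return max(free_at)

# What-if experiment: compare two candidate task orderings under the same seed
print(simulate([5, 3, 8, 2], seed=1), simulate([8, 5, 3, 2], seed=1))
```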

Results of the FELICE project:

  • Solutions for increasing operational efficiency and productivity of the FELICE assembly line by leveraging multi-paradigm simulations.
  • Seamless integration of commands and data streams into the Digital Twin, improving decision-making capabilities in the FELICE scenario.
  • Successful testing of the Automatic Mode of the FELICE prototype in the simulation environment of the Digital Twin including the human-robot collaboration activities.
  • Scalability and adaptability of the Digital Twin for future manufacturing use cases related to assembly line operations.

Interfaces with other modules/partner work:

The Discrete Event Simulation Module of the Digital Twin is seamlessly integrated with the FELICE Orchestrator (developed by FHOOE), allowing for data exchange, what-if analysis, experimentation and automated control across the simulation environment of the assembly line. The Real-Time Virtual Simulation Module of the Digital Twin is integrated with the Orchestrator (FHOOE), the Human Robot Interaction Module (FORTH) and the Robot Action and Execution Module (PROFACTOR) to provide a comprehensive ecosystem for testing the Automatic Mode of the FELICE prototype (with the goal of monitoring the operational efficiency of the assembly line). Finally, its FIWARE compatibility ensures interoperability across different systems.

Future work and exploitation of results:

Future efforts will focus on adapting the Digital Twin for other manufacturing processes with the aim of increasing its scalability, reusability and interoperability. CAL-TEK is committed to commercializing the Digital Twin by leveraging its modularity and interoperability to address the broader needs of Industry 4.0 stakeholders.

Images:

Publications:

International Journals Articles

  1. Alessio Baratta, Antonio Cimino, Francesco Longo, Letizia Nicoletti. Digital twin for human-robot collaboration enhancement in manufacturing systems: Literature review and direction for future developments, Computers & Industrial Engineering, Volume 187, 2024, 109764, ISSN 0360-8352, https://doi.org/10.1016/j.cie.2023.109764.
  2. Antonio Cimino, Francesco Longo, Letizia Nicoletti, Vittorio Solina. Simulation-based Digital Twin for enhancing human-robot collaboration in assembly systems, Journal of Manufacturing Systems, Volume 77, 2024, Pages 903-918, ISSN 0278-6125, https://doi.org/10.1016/j.jmsy.2024.10.024.
  3. Antonio Cimino, Francesco Longo, Letizia Nicoletti, Vittorio Solina. Combining simulation and virtual reality for enabling interoperable digital twins in collaborative human-robot workspaces. Journal of Manufacturing Systems (under review).

International Conference Articles

  1. Alessio Baratta, Vittorio Solina, Antonio Cimino, Maria Grazia Gnoni and Letizia Nicoletti. Human Robot Collaboration: an assessment and optimization methodology based on dynamic data exchange. 2023 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit), Gothenburg, Sweden, 2023, pp. 658-662, doi: 10.1109/EuCNC/6GSummit58263.2023.10188313.
  2. Alessio Baratta, Antonio Cimino, Francesco Longo, Giovanni Mirabelli, Letizia Nicoletti. Task Allocation in Human-Robot Collaboration: A Simulation-based approach to optimize Operator’s Productivity and Ergonomics. Procedia Computer Science, Volume 232, 2024, Pages 688-697, ISSN 1877-0509, https://doi.org/10.1016/j.procs.2024.01.068.
  3. Alessio Baratta, Antonio Cimino, Alessandro Chiurco, Francesco Longo, Giovanni Mirabelli and Letizia Nicoletti. Towards Real-Time Task Allocation in Human-Robot Collaboration: Defining Key Requirements and Features for a Multi-Simulation Digital Twin System. Proceedings of the 36th European Modeling & Simulation Symposium, 036, doi: 10.46354/i3m.2024.emss.036.
  4. Alessio Baratta, Cardamone Martina, Cimino Antonio, Longo Francesco, Nicoletti Letizia, Padovano Antonio, Sammarco Chiara. Advancing Task Allocation in Human-Robot Collaboration with a Multi-Simulation Based Digital Twin System. Elsevier Procedia Computer Science, Proceedings of the International Conference on Industry 4.0 & Smart Manufacturing (in press).
  5. Alessio Baratta, Antonio Cimino, Lucia Gazzaneo, Letizia Nicoletti, Vittorio Solina. Conceptual Modeling for a Simulation-Based Digital Twin in Human-Robot Collaboration. Elsevier Procedia Computer Science, Proceedings of the International Conference on Industry 4.0 & Smart Manufacturing (in press).

Contact:

Francesco Longo

f.longo@cal-tek.eu

CAL-TEK Srl

Via Spagna 240, 87036 Rende (CS), Italy

www.cal-tek.eu 

Motivation:

In collaborative manufacturing, analyzing speech and gestures enables seamless human-robot interaction, enhancing efficiency and safety in shared workspaces. Speech analysis allows for intuitive, hands-free communication, while gesture recognition provides understanding of operator intent, even in noisy environments. Together, these modalities enable the collaborative robot and the adaptive workstation to dynamically adjust to the operator’s needs.

Problem statement and intervention:

Speech analysis in industrial noisy environments faces challenges such as high background noise, which can obscure verbal commands and complicate speech recognition. Similarly, gesture detection may be hindered by occlusions, variability in lighting conditions, and difficulty in distinguishing normal motions from gestures. Overcoming these challenges requires robust and noise-resilient algorithms capable of distinguishing meaningful signals from environmental interference.

FELICE solution:

The Speech and Gesture Analysis module provides the robotic platform with cognitive capabilities, enabling it to understand commands given by the human operator through two distinct modalities: speech-based and gesture-based. 

As for the speech analysis, the solution is designed to overcome challenges such as: surrounding noise, ensuring accurate command recognition even in noisy environments; dynamic relative distance between the human and the robot, as the distance between the operator and robot may vary during operations, leading to speech signals being recorded at different audio levels; different tones and accents, requiring the system to be robust to variations in voice tone and accents from different operators.

Regarding gesture analysis, the developed solution detects the hand motion and recognizes its pose, outputting the corresponding command. The module is designed to handle challenges such as: dynamic lighting conditions, caused by the robot’s varying poses within the working area, relative to both artificial and natural light sources; dynamic relative distance between the robot and the human operator, as the operator may move around the workstation during assembly tasks; motion blur, which can result from camera movement as the robot moves.

The whole solution is optimized to execute within a few milliseconds, using relatively small neural networks on an embedded Nvidia Jetson Xavier NX mounted on the robot or on the adaptive workstation.

Results of the FELICE project:

The realized speech analysis algorithm, achieving an F1-score of 0.93 for English commands and 0.91 for Italian commands, has the following peculiarities:

  • It includes voice activity detection, carefully customized for the noisy industrial environment, speech command recognition and a Key-Word Detection System to enhance noise rejection by activating recognition only when needed.
  • The robustness of the speech command recognition based on a Conformer architecture has been substantially improved with respect to state-of-the-art approaches by adopting a learning procedure based on curriculum learning and refined audio collection protocols in diverse environments.
  • The dataset used for training the solution is composed not only of real samples collected in the actual use case, but also of synthetic samples and additional data obtained through context-driven augmentation, accounting for varying distance, energy, command speed and pauses.
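The keyword-gating idea behind this pipeline can be sketched in a few lines. The energy threshold, wake keyword and command set below are placeholders, not the module's actual models:

```python
# Sketch of keyword-gated command recognition: downstream recognition is
# activated only when voice activity and the wake keyword are detected.
# Threshold, keyword and command vocabulary are illustrative assumptions.
from typing import Optional

def voice_active(frame, threshold=0.02):
    """Crude energy-based voice activity detection on one audio frame."""
    energy = sum(s * s for s in frame) / len(frame)
    return energy > threshold

def recognize(transcript: str, keyword: str = "felice") -> Optional[str]:
    """Pass a command downstream only if the wake keyword precedes it."""
    words = transcript.lower().split()
    if words and words[0] == keyword:
        command = " ".join(words[1:])
        if command in {"raise table", "lower table", "bring tool"}:
            return command
    return None  # keyword missing or unknown command: treat as noise

print(recognize("felice raise table"))  # raise table
print(recognize("please raise table"))  # None
```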

The implemented gesture analysis method, achieving an accuracy of 95% on the gestures of interest at up to 2 meters away, is based on a novel single-stage hand detection and gesture recognition approach built on MobileNetv3-SSD, trained with the largest publicly available gesture recognition dataset (Hagrid) and with an additional 1,162 videos collected for the project.

Both methods, which constitute the speech and gesture analysis module, can be executed simultaneously on a Nvidia Jetson Xavier NX requiring a few tens of milliseconds.  

Interfaces with other modules/partner work:

The speech and gesture analysis module is part of the human-robot interactive interface. The module acquires real-time data from the microphone and the camera, recognizes the commands given by the worker to the robot and informs the HRI decision maker accordingly. The recognized commands can trigger a robot action (e.g. bring an object), through the communication with the Orchestrator. The robot can also receive gesture commands, with a similar pipeline.  In addition, the speech command recognition system is also active on the adaptive workstation to control its height and inclination. In this case, communication with the Orchestrator is necessary to dynamically change these settings of the adaptive workstation.

Future work and exploitation of results:

The results obtained from the speech and gesture analysis module may serve as a foundation for follow-up projects aimed at advancing human-robot collaboration in assembly lines. Additionally, these findings will be validated and extended to other domains within collaborative manufacturing, ensuring broader applicability and refinement of the developed techniques in diverse industrial settings.

Images:

Publications:

  1. Bini, S., Greco, A., Saggese, A., & Vento, M. (2022). Benchmarking deep neural networks for gesture recognition on embedded devices. In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (pp. 1285-1290).
  2. Bini, S., Percannella, G., Saggese, A., & Vento, M. (2023). A multi-task network for speaker and command recognition in industrial environments. Pattern Recognition Letters, 176, 62-68.
  3. Bini, S., Carletti, V., Saggese, A., & Vento, M. (2024). Robust speech command recognition in challenging industrial environments. Computer Communications, 228, 107938.
  4. Bini, S., Saggese, A., & Vento, M. (2024). Enhancing Noise Robustness of Speech-Based Human-Robot Interaction in Industry. In European Robotics Forum (ERF 2024), Springer Proceeding in Advanced Robotics.
  5. Carletti, V., Greco, A., Ritrovato, P., Saggese, A., & Vento, M. (2024). Multi-modal human-robot collaboration in production lines through speech commands and gestures. Submitted to Multimedia Tools and Applications.

Contact:

Department of Information and Electrical Engineering and Applied Mathematics (DIEM)

University of Salerno (UNISA)

Via Giovanni Paolo II, 132 – 84084 Fisciano (SA) – Italy

https://www.diem.unisa.it/ | https://web.unisa.it/

Motivation:

In collaborative manufacturing, seamless human-robot interaction and the involvement of human operators in decision-making enhance efficiency and safety in shared workspaces. Advanced Interactive Screens (AIS) facilitate effective communication in demanding environments like assembly lines. The FELICE project was launched to explore next-generation technologies aimed at fostering meaningful human-robot collaboration, with the goal of enhancing efficiency, reducing workload, and improving worker well-being, and thus constitutes an ideal domain for researching AIS advancements.

Problem statement and intervention:

Common problems in HRC with collaborative robots include human acceptance and trust, safety concerns, lack of intuitive interaction, cognitive load on workers and communication challenges. Addressing these challenges requires advancements in technology, human-centered design, and clear strategies for integration and collaboration.

Contact:

Spyros Vantolas

svantolas@aegisresearch.eu

AEGIS IT Research GmbH

25 Humboldt Str. Braunschweig, 38106, Germany

https://www.aegisresearch.eu

Motivation:

Resilience is crucial in today’s world where highly automated systems are prevalent, as it ensures these systems can effectively handle unexpected disruptions. By incorporating resilience, organizations can minimize downtime and maintain continuity of operations despite potential failures. As automation becomes more complex, building resilient systems helps in swift recovery and adaptation to maintain efficiency and reliability.

Problem statement and intervention:

To improve the reliability of assembly lines, a range of runtime failures must be handled through intelligent recovery and learning. Failures can occur at different levels:

  • Operational failures: manipulation (e.g., object not found), navigation (e.g., unexpected collisions), interaction (e.g., aborted requests)
  • Software failures: non-responsive software modules, internal errors
  • Hardware failures: component wear, unresponsive robots (e.g., low battery)

FELICE solution:

The Resilient Assembly Line (RAL) system uses a three-tier error handling approach. This cascaded local and global recovery strategy is the primary design guideline for managing errors and ranges from levels 1 to 3. Level 1 involves recovery strategies developed by the local modules, such as RAE and HRI, like the robot rotating to improve positioning when location data fails. Level 2 addresses errors requiring broader system adjustments, such as repositioning robot arms when object detection confidence is low. These errors are still handled autonomously. Level 3 handles critical failures needing human intervention or system shutdown, including dropped tools or unresponsive software.
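The cascaded strategy can be summarized as a simple dispatch. The error names and recovery actions below are illustrative placeholders, not the RAL module's API:

```python
# Simplified sketch of the three-tier error handling described above.
# Error classes and recovery texts are invented for illustration.

LOCAL_RECOVERABLE = {"localization_lost"}          # Level 1: handled by local module
SYSTEM_RECOVERABLE = {"low_detection_confidence"}  # Level 2: autonomous, system-wide
# Anything else is Level 3: escalate to a human or shut down safely.

def handle_error(error: str) -> str:
    if error in LOCAL_RECOVERABLE:
        return "level1: local strategy (e.g. rotate robot to re-localize)"
    if error in SYSTEM_RECOVERABLE:
        return "level2: system adjustment (e.g. reposition robot arm)"
    return "level3: notify line manager, request human intervention"

print(handle_error("low_detection_confidence"))
print(handle_error("dropped_tool"))
```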

Results of the FELICE project:

For the three-tier error handling system, a Level 2 handler for arm repositioning was developed. When the object detection confidence is low, the arm is repositioned using offline data processed by statistical machine learning algorithms and a high-level analysis of the current arm-camera image incorporating a Visual Language Model (VLM).

Further, a Level 2 strategy for the autonomous grasping of arbitrary objects via Deep Reinforcement Learning (DRL) was developed. These intelligent recovery policies were tested on the Fraunhofer site demonstrator. Lastly, for Level 3 task recovery, a module was integrated that communicates with different parts of the system, such as the Orchestrator (FHOOE), in the event of a Level 3 error, informing the line manager via a visual message as well as an acoustic signal (AEGIS).

Interfaces with other modules/partner work:

The local modules controlling the robot (PRO & FORTH), the visualisation (AEGIS) and other WP7 (FHOOE) modules such as the orchestrator were the main interfaces between other modules and partners.

In detail:

  • For level 2 strategies, such as arm repositioning, the RAL module communicates with the Robot Action and Execution (RAE) module to capture the robot’s current state and with the Human-Robot Interaction (HRI) module to send commands.
  • Level 3 task level recovery was realized in cooperation between FHOOE to integrate the concept of recovery workflows and with AEGIS, who designed the default solution to task level recovery, informing the human with a visual notification on the display as well as an acoustic signal.

Future work and exploitation of results:

For exploitation, the deployment of the RAL module on a local demonstrator at the Fraunhofer site is planned, including both the arm repositioning and the intelligent grasping policies. These may be presented to potential customers or at trade fairs.

Images:

Image analysis of the arm repositioning algorithms, with the original center (blue) and calculated object center (red) incorporating a VLM.

FELICE robot in the simulation.

Simulation for Deep Reinforcement Learning for autonomous grasping. Simulated robot arm (left) and point cloud calculated from a depth image.

Real robot experiments for autonomous grasping.

Publications:

  1. J. Jost, T. Kirks, S. Chapman and G. Rinkenauer, “Keep Distance with a Smile – User Characteristics in Human-Robot Collaboration,” 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vasteras, Sweden, 2021, pp. 1-8, doi: 10.1109/ETFA45728.2021.9613601.
  2. M. Frychel, S. Hoose, J. Jost, J. Gerken, T. Kirks, “A Concept for Three-Dimensional Proxemics in Human-Robot Collaboration,” Poster presented at: 2022 13th International Conference on Applied Human Factors and Ergonomics (AHFE), New York, USA, 2022.
  3. J. Eßer, N. Bach, C. Jestel, O. Urbann and S. Kerner, “Guided Reinforcement Learning: A Review and Evaluation for Efficient and Effective Real-World Robotics [Survey],” in IEEE Robotics & Automation Magazine, vol. 30, no. 2, pp. 67-85, 2023, doi: 10.1109/MRA.2022.3207664.
  4. S. Hoose, F. Würtz, T. Kirks and J. Jost, “An Evaluation of Open Source Trajectory Planners for Robotic Manipulators with Focus on Human-Robot Collaboration,” 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE), Auckland, New Zealand, 2023, pp. 1-8, doi: 10.1109/CASE56687.2023.10260597.
  5. “FELICE – Optimizing Collaboration Between People and Robots,” Annual Report Fraunhofer IML 2023.
  6. “Ein Hauch von Dolce Vita,” Logistik Entdecken (Discover Logistics) issue 25, 2023.

Contact:

Julian Eßer

julian.esser@iml.fraunhofer.de

Fraunhofer-Institut für Materialfluss und Logistik IML

Joseph-von-Fraunhofer-Straße 2-4, 44227 Dortmund, Germany

https://www.iml.fraunhofer.de/

Motivation:

Managing assembly line operations involving both human workers and collaborative robots presents significant challenges for automation and scheduling systems. Effective communication between all entities and coordinated efforts are essential to maintain a smooth workflow. This demands accurate, real-time monitoring and control, along with optimization strategies that balance economic efficiency with considerations for human well-being.

Problem statement and intervention:

The problem is inherently multi-objective, as optimizing one metric, such as efficiency, often comes at the expense of others, like worker well-being.

  • The tasks performed by humans and robots are interdependent, requiring careful sequencing and synchronization to avoid bottlenecks or inefficiencies.
  • Even for relatively simple workflows with only a few tunable parameters—such as task allocations, robot speed settings, or work cell layouts—the number of possible configurations grows exponentially, making exhaustive search impractical.
  • The relationships between objectives, such as ergonomic comfort versus task duration, are often nonlinear and context-dependent, further complicating optimization.
  • Real-world constraints like variable task durations, unforeseen delays, and worker fatigue introduce complexities that traditional optimization methods struggle to address.

These challenges necessitate the use of sophisticated metaheuristic approaches capable of efficiently navigating large, complex solution spaces, finding a diverse set of high-quality trade-offs between competing objectives.
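The core multi-objective notion, keeping only configurations that no other configuration beats on every objective, can be sketched as a Pareto filter. The candidate data below are invented for illustration:

```python
# Minimal Pareto-front sketch over two objectives to be minimized:
# (cycle time in seconds, ergonomic strain score). Data are illustrative.

def dominates(a, b):
    """a dominates b if it is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

configs = [(60, 8), (55, 9), (70, 4), (65, 9)]
print(pareto_front(configs))  # (65, 9) is dominated and dropped
```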

FELICE solution:

The assembly line operation is managed by a newly-developed Orchestrator, a robust system that leverages a flexible XML-based workflow description language called ADAPT. This language enables the creation of both high-level meta-models and detailed workflow descriptions, specifying concrete actions for both robots and human workers. This flexibility allows ADAPT to model a wide range of assembly scenarios, from generic templates to highly specific task sequences.

The Orchestrator operates in two modes: Manual Mode, allowing managers and workers direct control over task scheduling and execution, and Automatic Mode, where workflows are dynamically scheduled based on resource availability and worker input, such as requests for robotic assistance.

The Orchestrator incorporates advanced optimization techniques to improve individual workflows. Execution traces from past operations are analyzed to collect performance statistics, which are then used to transform workflow descriptions into simulation models. These models allow for the exploration of alternative task sequences and resource allocations under various conditions.

The evaluation of these alternatives goes beyond traditional metrics: alongside execution time and efficiency, it considers ergonomic factors and workplace adjustments to reduce worker strain and promote long-term well-being.

By combining flexible workflow descriptions, adaptive operation modes, and sophisticated optimization capabilities, the Orchestrator provides a comprehensive solution to the challenges of hybrid human-robot assembly lines. It not only maximizes efficiency but also promotes a safe, adaptable, and worker-friendly environment.

Results of the FELICE project:

The project delivered several key outcomes that advance the management and optimization of hybrid human-robot assembly lines:

  • Orchestrator Platform and Runtime:
    A robust Orchestrator platform was developed, leveraging the FIWARE message bus for seamless communication between the robot, line manager, and supplementary modules such as speech and gesture recognition systems.
  • Automated Software Deployment Strategy:
    A sophisticated software deployment pipeline was implemented using a fully automated continuous integration/continuous delivery (CI/CD) system. Changes pushed to the central Git repository trigger the creation of updated Docker images, ensuring rapid, reliable, and scalable software deployment with minimal downtime of the system.
  • Trace Collection, Statistical Evaluation, Optimization and Scheduling:
    The project introduced advanced optimization algorithms for shift planning and workflow fine-tuning, supported by systematic trace collection and statistical analysis. These insights enabled simulation-based optimization leveraging CALTEK’s digital twin and a simplified model for rapid evaluations, capable of processing thousands of scenarios per second on commodity hardware.
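
To illustrate the message-based integration mentioned above, the snippet below builds an NGSI-v2 attribute update of the kind a FIWARE Orion context broker accepts; the entity and attribute names are hypothetical, not the project's actual data model:

```python
import json

def workstation_command(entity_id, height_mm, tilt_deg):
    """Return the (path, body) pair for an NGSI-v2 attribute update on a
    hypothetical AdaptiveWorkstation entity, to be issued as
    PATCH <broker>/v2/entities/<id>/attrs with a JSON body."""
    path = f"/v2/entities/{entity_id}/attrs"
    attrs = {
        # "MMT" is the UN/CEFACT unit code for millimetre.
        "targetHeight": {"type": "Number", "value": height_mm,
                         "metadata": {"unitCode": {"type": "Text",
                                                   "value": "MMT"}}},
        "targetTilt": {"type": "Number", "value": tilt_deg},
    }
    return path, json.dumps(attrs)

path, body = workstation_command("urn:ngsi-ld:AdaptiveWorkstation:001",
                                 950, 12.5)
```

NGSI-v2 also supports subscriptions, so interested modules can be notified of such updates by the broker instead of polling for them.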

These results collectively provide a comprehensive framework for efficient, adaptive, and worker-centered assembly line operations, combining state-of-the-art orchestration, optimization, and deployment strategies.

Interfaces with other modules/partner work:

Collaboration across various internal interfaces played a crucial role in ensuring the seamless integration and functionality of the Orchestrator within the project ecosystem. IFADO collaborated on defining flexible and detailed workflows for the Orchestrator. TUD’s adaptive workstation was integrated into the system, allowing for real-time adjustments that accommodated both worker needs and environmental changes. CRF connected the Orchestrator to their large-scale assembly line demonstrator, providing a practical environment for real-world validation. CALTEK worked closely with the team to incorporate their simulation models and digital twin and connect them to the Orchestrator, supporting workflow optimization through rapid scenario evaluations. Coordination with IML ensured that high-level recovery requests generated by the Resilient Assembly Line were effectively managed within the Orchestrator’s task scheduling processes. Additionally, ICCS and FORTH enabled the integration of systems for real-time human location and posture estimation, further enhancing the Orchestrator’s adaptability and safety features. Finally, FORTH collaborated to incorporate robot tracking and navigation capabilities, enabling efficient task assignment and monitoring of robot activities.

These coordinated efforts ensured that the Orchestrator was tightly integrated and fully aligned with the broader objectives of the project.

Future work and exploitation of results:

Future work will focus on refining the Orchestrator and expanding its applications to domains such as logistics and healthcare, where effective human-robot coordination is essential. Key priorities include scaling optimization techniques, improving simulation efficiency, and enabling multi-site coordination. Incorporating advanced ergonomic models and prioritizing human-centered design will ensure the system continues to balance efficiency with worker well-being, paving the way for broader adoption in complex operational environments.

Images:

Publications:

Holzinger, F., Beham, A. (2022). Multi-criteria Optimization of Workflow-Based Assembly Tasks in Manufacturing. In: Moreno-Díaz, R., Pichler, F., Quesada-Arencibia, A. (eds) Computer Aided Systems Theory – EUROCAST 2022. EUROCAST 2022. Lecture Notes in Computer Science, vol 13789. Springer, Cham. https://doi.org/10.1007/978-3-031-25312-6_5

Contact:

Erik Pitzer & Roman Froschauer

University of Applied Sciences Upper Austria

Roseggerstraße 15, 4600 Wels, Austria

Motivation:

The rapid evolution of industrial systems under Industry 5.0 focuses on integrating customized production, collaborative robotics, and human-centric approaches. Industry 5.0 envisions an era where automation, artificial intelligence, and robotics are seamlessly integrated with human creativity and adaptability. This shift emphasizes not only productivity but also sustainability, ergonomics, and collaboration.

Industrial environments often require robots to handle a diverse range of objects with varying geometries, materials, and spatial arrangements. Manipulating such objects within cluttered or dynamically changing surroundings poses significant challenges. Moreover, collaborative mobile manipulators need to execute complex tasks that involve the coordination of multiple subsystems, such as grippers, robotic arms, heads, and mobile platforms. This complexity is amplified when these subsystems must simultaneously interact with human operators and adapt to unforeseen situations.

For example, the CRF use case involves high-level tasks like navigating to tool stands, picking up tools from foam boxes or holders, and handing them over to human workers. These tasks demand not only precise motion planning but also error-handling mechanisms that ensure smooth execution even in resource-constrained environments. Additionally, communication across diverse hardware systems introduces another layer of complexity, necessitating an adaptable and modular framework.

This motivation highlights the pressing need for a versatile and reactive task execution framework capable of:

  • Addressing the challenges posed by object variability and environmental constraints.
  • Enabling efficient coordination between robotic subsystems.
  • Supporting dynamic collaboration with humans, aligning with Industry 5.0 principles. 

Problem statement and intervention:

Industrial scenarios, such as the CRF use case, present significant challenges for collaborative task execution due to:

  1. Diverse Object Handling: Manipulating objects with varying geometries and spatial arrangements, often in constrained or unpredictable environments, is complex and error-prone.
  2. Subsystem Coordination: Achieving seamless coordination between the robotic arm, gripper, and platform for tasks like tool pickup, handover, and placement is critical but technically demanding.
  3. Dynamic Error Management: The ability to dynamically respond to errors and recover from failures during task execution remains a major limitation in existing systems.

Additionally, ensuring the kinematic feasibility of mobile manipulators during these tasks—particularly when accounting for the interplay between motion constraints of the arm and platform—compounds the complexity of these operations.

Intervention

A robust task execution framework, built on the Behavior Tree (BT) architecture, was developed to address these challenges. This framework integrates:

  • MoveIt Framework for collision-free motion planning.
  • MoveIt Task Constructor for task decomposition using CAD-based planning scenes.
  • Forward Simulation for determining optimal manipulator positions.
  • BT Modularity and Reactivity to enable dynamic task adjustments and error recovery.
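
A Behavior Tree of this kind can be sketched in a few lines. The following minimal Python example (the node classes and the tool-handover tree are illustrative, not the FELICE implementation) shows how sequence, fallback, and retry nodes yield the dynamic adjustment and error recovery described above:

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node wrapping a callable that reports success/failure."""
    def __init__(self, fn): self.fn = fn
    def tick(self): return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Ticks children in order; fails as soon as one child fails."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Ticks children in order; succeeds as soon as one succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

class Retry:
    """Decorator: re-ticks its child up to `attempts` times."""
    def __init__(self, child, attempts):
        self.child, self.attempts = child, attempts
    def tick(self):
        for _ in range(self.attempts):
            if self.child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# Hypothetical tool-handover tree: navigate, then grasp (retrying on
# failure), falling back to an alternate scan pose if all retries fail.
grasp_attempts = []
def flaky_grasp():                     # stub that succeeds on the 3rd try
    grasp_attempts.append(1)
    return len(grasp_attempts) >= 3

tree = Sequence(
    Action(lambda: True),              # navigate_to_tool_stand (stub)
    Fallback(
        Retry(Action(flaky_grasp), attempts=3),
        Action(lambda: True),          # move_to_alternate_scan_pose (stub)
    ),
)
result = tree.tick()
```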

FELICE solution:

The FELICE framework, developed by PROFACTOR, combines state-of-the-art technologies to address the complexities of industrial task execution with collaborative mobile manipulators. The core innovations include:

  1. Behavior Tree-Based Task Execution Framework: The BT framework provides a modular, hierarchical structure for task execution. Its reactivity ensures seamless error handling and recovery while maintaining operational fluency. Each task, such as navigating to tool stands, grasping objects, and tool handovers, is broken into reusable nodes that dynamically adapt to changing conditions, ensuring precise execution even in resource-constrained environments.
    • Dynamic Task Adjustments: The BTs enable real-time reconfiguration of task parameters, such as adapting arm movement if an object shifts or retrying a failed grasp.
    • Error Recovery: Integrated recovery nodes allow fallback strategies, like moving to preconfigured alternate positions if a target is obscured or inaccessible.
  2. MoveIt Framework for Motion Planning: MoveIt ensures collision-free motion planning for robotic arms and platforms, even in cluttered or dynamic environments. By integrating with the BT framework, it supports:
    • Real-Time Obstacle Avoidance: Avoids collisions with environmental structures, human workers, or other robots.
    • Kinematic Constraints Compliance: Maintains safe and feasible motion trajectories for both arms and mobile platforms, ensuring precise positioning.
  3. MoveIt Task Constructor for Task Optimization: Task execution is optimized by leveraging the MoveIt Task Constructor for decomposing high-level tasks into modular components. Key features include:
    • CAD-Based Spatial Understanding: CAD models are used to generate accurate planning scenes, ensuring the robot can interact effectively with industrial objects, tool holders, or foam boxes.
    • Blender Integration: Scenes assembled in Blender improve situational awareness by simulating multi-object environments, enabling preemptive adjustments for task execution.
  4. Forward Simulation for Mobile Manipulator Optimization: A robot-agnostic forward simulation technique identifies the “sweet spot” stopping position for mobile manipulators, ensuring tasks like tool pickup and placement are executed without kinematic conflicts. By simulating different configurations:
    • Optimal Platform Positioning: Ensures the manipulator can execute tasks without exceeding joint limits or encountering reachability issues.
    • Edge Device Compatibility: Validated on resource-constrained hardware like Intel NUC PCs, ensuring real-world deployability.
  5. Integrated Ecosystem for Collaboration:
    • Perception-Driven Adjustments: Inputs from object localization and human behavior understanding modules enable robots to dynamically adapt to workspace changes.
    • Seamless Subsystem Integration: Interfaces with navigation systems, grippers, and arms allow cohesive operation of all robotic subsystems.
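
The forward-simulation idea from point 4 can be illustrated with a toy model: candidate stopping positions are scored by how comfortably the target lies inside the arm's reachable workspace, approximated here as an annulus (all limits and coordinates are invented for illustration, not the robot's real kinematics):

```python
import math

ARM_MIN_REACH, ARM_MAX_REACH = 0.35, 0.85  # metres, illustrative limits

def reach_margin(base_xy, target_xy):
    """How comfortably the arm reaches the target from this base pose:
    positive inside the annular workspace, negative outside."""
    d = math.dist(base_xy, target_xy)
    return min(d - ARM_MIN_REACH, ARM_MAX_REACH - d)

def sweet_spot(target_xy, candidates):
    """Forward-simulate candidate stopping positions and keep the one
    with the largest reachability margin; None if none is feasible."""
    best = max(candidates, key=lambda c: reach_margin(c, target_xy))
    return best if reach_margin(best, target_xy) > 0 else None

# Candidate stops sampled along a (hypothetical) approach line.
candidates = [(x / 10, 0.0) for x in range(0, 21)]  # 0.0 m .. 2.0 m
stop = sweet_spot((1.2, 0.0), candidates)
```

A real forward simulation would additionally check joint limits and collisions for each candidate, but the selection principle, maximizing a feasibility margin over sampled base poses, is the same.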

By uniting these components, the FELICE solution provides an adaptable, efficient, and human-centric approach to collaborative task execution, setting a benchmark for Industry 5.0 applications.

Results of the FELICE project:

  1. Validated Task Execution Framework: Demonstrated a Behavior Tree-based execution framework capable of handling complex industrial tasks like tool pickup, placement, and handover in real-world shop floor environments.
  2. Dynamic Error Handling: Successfully implemented error recovery strategies, including alternate scanning poses, retry mechanisms, and dynamic task adjustments.
  3. Optimized Motion Planning: Achieved fast and collision-free manipulation using the MoveIt Framework and MoveIt Task Constructor, leveraging CAD models and Blender-based planning scenes.
  4. Kinematic Feasibility: Validated a forward simulation approach to ensure optimal stopping positions for mobile manipulators, enabling smooth integration of arm and platform movements.
  5. Edge Device Compatibility: Successfully deployed the dynamic Behavior Tree framework on resource-constrained hardware like Intel NUC PCs, showcasing its suitability for compact, real-world applications.
  6. Collaborative Ecosystem Integration: Developed robust interfaces for seamless interaction with other FELICE modules, including navigation systems, perception modules, and human-robot interaction decision makers.

Interfaces with other modules/partner work:

  1. Perception Integration:
    • ROS1 service for obtaining 6D object poses from the Object Detection and Localization (ODL) module.
    • Interface with the Human Behavior Understanding (HBU) module to determine cart positions.
  2. Hardware Communication:
    • Integration with robotic arm and gripper hardware (developed by ACCREA) for seamless manipulation.
    • Interface with the knowledge base to retrieve tool stand positions.
  3. Navigation and Task Execution:
    • ROS1 pub-sub interface with the Human-Robot Interaction Decision Maker (HRI-DM) for receiving high-level tasks and providing feedback.
    • Direct communication with the robot platform to send navigation commands.
  4. Dynamic Task Planning:
    • Collaboration with MoveIt for motion planning and collision avoidance.
    • Use of CAD-based planning scenes assembled in Blender to optimize spatial understanding during task execution.

These interfaces ensure smooth integration of perception, hardware, and task planning, supporting cohesive collaboration across FELICE modules.

Future work and exploitation of results:

Grasping Solutions for Novel Objects:

  • Develop adaptive grasping strategies leveraging machine learning and vision-based systems to handle novel, unstructured, and diverse objects in dynamic industrial settings, such as electronics components or recyclable materials.

Expanded Application Areas:

  • Apply the Behavior Tree (BT)-based task execution framework and grasping solutions to new domains, including:
    • Electronics Manufacturing: Precise assembly of delicate components.
    • Recycling: Sorting, handling, and manipulation of waste materials for sustainability.
    • Disassembly: Efficient deconstruction of complex products for repair, reuse, or recycling.

Scalability:

  • Adapt the framework for multi-robot systems and autonomous operations, ensuring effective task sharing, kinematic feasibility, and resource optimization.

Commercialization:

  • Deploy FELICE grasping and task execution solutions in real-world industrial scenarios, focusing on flexible, adaptable, and human-centric manufacturing systems.

Long-Term Integration:

  • Incorporate cloud-based solutions and AI-driven adaptability to enhance robotic motion planning, grasping capabilities, and collaborative operations across a wide range of industries.  

Images:

Publications:

Akkaladevi, S. C., Propst, M., Deshpande, K., Hofmann, M., & Pichler, A. (2024). Towards a behavior tree based robotic skill execution framework for human robot collaboration in industrial assembly. 2024 10th International Conference on Automation, Robotics and Applications (ICARA), 18-22. https://doi.org/10.1109/ICARA60736.2024.10553029

Akkaladevi, S.C. et al. (2024). Dynamic Adaptability in Human-Robot Collaboration for Industrial Assembly: A Behaviour Tree Based Task Execution. In: Wang, YC., Chan, S.H., Wang, ZH. (eds) Flexible Automation and Intelligent Manufacturing: Manufacturing Innovation and Preparedness for the Changing World Order. FAIM 2024. Lecture Notes in Mechanical Engineering. Springer, Cham. https://doi.org/10.1007/978-3-031-74482-2_34

Akkaladevi, S.C. et al. (2024). Towards Behavior Trees Based Robotic Task Execution for Physical Human Robot Collaboration. In: Wang, YC., Chan, S.H., Wang, ZH. (eds) Flexible Automation and Intelligent Manufacturing: Manufacturing Innovation and Preparedness for the Changing World Order. FAIM 2024. Lecture Notes in Mechanical Engineering. Springer, Cham. https://doi.org/10.1007/978-3-031-74482-2_33

A. Pratheepkumar, M. Hofmann, M. Ikeda and A. Pichler. “Domain Adaptation with Evolved Target Objects for AI Driven Grasping”. In Proc. IEEE Int’l Conference on Emerging Technologies and Factory Automation (ETFA), 2022. https://doi.org/10.1109/ETFA52439.2022.9921470

Contact:

Sharath Chandra Akkaladevi,

Robotics and Automation Systems,

Profactor GmbH,

Im Stadtgut D1 | 4407 Steyr-Gleink | Austria

sharath.akkaladevi@profactor.at 

Motivation: Accurate embedded 6D pose estimation of industrial objects plays an important role in enabling effective object manipulation in human-robot collaboration scenarios.

Problem statement and intervention:

An embedded implementation (i.e. low power) of real-time 6D pose estimation of objects with challenging characteristics in industrial environments is essential for maintaining a robust robotic pipeline for object grasping. 

FELICE solution:

The Object Detection and Localization (ODL) module is responsible for identifying specific objects to be handled by the robot while estimating, with high precision and in real time, their 6D pose (rotation and translation in the camera coordinate system). The module takes known 3D object models and RGB images acquired by a camera mounted on the robot arm and exploits deep learning techniques trained on annotated image data. Because the accuracy and performance of the ODL have a direct impact on the quality of the robot’s grasping task, it must estimate the 6D pose in real time and provide a confidence score, allowing the HRC scenario to prune poses at runtime that may lead to grasping problems.

Besides time efficiency during inference, the module addresses the challenges posed by objects with special appearance and geometric characteristics, such as weak texture, symmetries, and reflective or black surfaces, in a real industrial environment that is inherently complex and characterized by cluttered backgrounds and unfavorable illumination conditions.

The module exploits ROS for communicating outputs and ingesting images from the camera stream. ODL is optimised for deployment on a Jetson Xavier AGX board (32 GB memory, 512-core GPU), which offers low power consumption and sufficient computing power for the task, while the inference stage is accelerated by exploiting acceleration primitives of the underlying execution platform (i.e. TensorRT). All necessary components of the module are bundled together in a Docker container as a popular executable “shipping method”.
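
The confidence-based pruning described above can be sketched as a simple filter over pose hypotheses; the detection records and the 0.7 threshold below are illustrative, not the module's actual output format:

```python
def prune_poses(poses, min_confidence=0.7):
    """Discard pose hypotheses whose confidence score falls below a
    threshold and return the rest, best first. The 0.7 threshold is an
    illustrative value, not the one used by the deployed ODL module."""
    kept = [p for p in poses if p["confidence"] >= min_confidence]
    return sorted(kept, key=lambda p: p["confidence"], reverse=True)

# Hypothetical detections: object label, confidence, translation (m)
# in the camera frame; the rotation part is omitted for brevity.
detections = [
    {"object": "screwdriver", "confidence": 0.91, "t": (0.42, -0.10, 0.55)},
    {"object": "screwdriver", "confidence": 0.48, "t": (0.40, -0.12, 0.61)},
    {"object": "wrench",      "confidence": 0.76, "t": (0.10,  0.25, 0.50)},
]
usable = prune_poses(detections)
```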

Results of the FELICE project:

  • Robust and accurate 6D pose estimation of assembly tools in an industrial environment, optimised for real-time performance on embedded devices

Interfaces with other modules/partner work:

The module interacts with the Robot Action Execution (RAE) and Human-Robot Interaction Decision Maker (HRI-DM) modules and the robot controller, which provide the high-level control and feedback mechanisms for interacting with the robotic arm.

Future work and exploitation of results:

Extending the module to detect and localize a larger number of industrial tools and objects is part of the short-term exploitation strategy.

Images:

Publications:

P. Sapoutzoglou, G. Tzintanos, G. Terzakis and M. Pateraki. “COBRA – COnfidence score Based on shape Regression Analysis for method-independent quality assessment of object pose estimation from single images”, 2025 (under review). 

S.C. Akkaladevi, M. Propst, K. Deshpande, M. Hofmann, A. Pichler, P. Sapoutzoglou, A. Zacharia, D. Kalogeras and M. Pateraki. “Dynamic Adaptability in Human-Robot Collaboration for Industrial Assembly: A Behaviour Tree Based Task Execution”. In Proc. of the FAIM 2024 Conference, Lecture Notes in Mechanical Engineering, Springer.

A. Papadaki and M. Pateraki. “6D object localization in car-assembly industrial environment”. Journal of Imaging, 9(3), 72, 2023.

M. Pateraki, P. Sapoutzoglou and M. Lourakis. “Crane Spreader Pose Estimation from a Single View”. Int’l Joint Conf. on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023), 5, 796–805, 2023.

Dimitrios Kalogeras, Maria Pateraki, Sharath Chandra Akkaladevi and Bartlomiej Stanczyk. “Symbiotic Human-Robot Collaboration: The FELICE Approach in Smart Assembly Lines”. 11th Hybrid Production Systems, ERF 2024.

Contact:

Maria Pateraki 

Dimitris Kalogeras

Institute of Communication and Computer Systems

7 Heroon Polytechneiou Str. 15780

Motivation: 

Detecting the presence of humans and their location is a key ingredient for implementing safety in human-robot collaboration.

Problem statement and intervention:

The problem comprises detecting the presence of humans, estimating their body posture, interpreting their actions and characterizing their behavior by comparing temporal observations against knowledge of the expected behavior. Deducing actions and intentions is essential for monitoring the progress of assembly tasks and workers’ states, enhancing productivity and work safety/health by monitoring the risk for work-related musculoskeletal disorders (WMSDs).

FELICE solution:

The Human Behavior Understanding (HBU) module realizes a set of components and functionalities that rely on visual data capturing RGB (color) and depth images of human workers during car door assembly work activities on the shop floor. Visual input data is acquired in a non-invasive manner employing passive camera sensors installed at multiple stationary locations alongside the car door assembly line. The module utilizes time-synchronized image sequences to detect the presence and estimate the location and the detailed 2D and 3D body pose of human workers in real time, based on a 3D global coordinate reference system on the shop floor. The estimated 3D body pose of each worker is further analyzed to detect ergonomic risks, track work progress based on the recognition of assembly actions, and monitor safety issues for each assembly task. The module also estimates the location of each cart during assembly activities, which aids in monitoring worker actions and estimating task cycle progress in real time. Additionally, it supports marking the cart’s location on the shop floor as an obstacle to assist robot navigation and provide dynamic updates for points of interest. Overall, all three workstations of the car door assembly line and the engaged workers are monitored in a non-invasive manner throughout the work shift, providing valuable vision-based information to the FELICE system.
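
As a simplified illustration of the ergonomic-risk analysis, a trunk inclination angle can be derived from two 3D body keypoints and compared against bending thresholds; the sketch below is a toy example and does not reproduce the full EAWS scoring (the thresholds and coordinates are illustrative):

```python
import math

def trunk_inclination_deg(mid_hip, mid_shoulder):
    """Angle between the hip-to-shoulder axis and the vertical (z-up),
    in degrees; 0 degrees corresponds to an upright trunk."""
    vx, vy, vz = (s - h for s, h in zip(mid_shoulder, mid_hip))
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    return math.degrees(math.acos(vz / norm))

def posture_flag(angle_deg):
    """Coarse, illustrative classification; real EAWS scoring also
    accounts for posture durations, frequencies and further postures."""
    if angle_deg < 20:
        return "upright"
    if angle_deg < 60:
        return "bent"
    return "strongly bent"

# Hypothetical keypoints in metres within the shop-floor frame.
angle = trunk_inclination_deg((0.0, 0.0, 1.0), (0.5, 0.0, 1.5))
flag = posture_flag(angle)
```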

Challenges that must be addressed include human body occlusions, severe changes in environmental conditions, clutter, ambiguities, variability of actions, and their durations among repeated cycles. 

Results of the FELICE project:

  • Real-time multicamera system for estimating the 3D location, poses, actions and physical ergonomic state of workers during car door assembly activities.
  • A novel dataset comprising visual and motion capture data recorded during car door assembly activities in a real workplace. Annotations related to 3D human poses, assembly actions and the monitoring of physical ergonomics (dynamic postures) of workers according to the EAWS tool are provided by experts.

Interfaces with other modules/partner work:

Human-Robot Interaction (HRI), Knowledge Base, Orchestrator, Digital Mirror.

Future work and exploitation of results:

  • Production-level (parallelized, dockerized, cross-platform) source code implementation
  • Extension and testing for monitoring other types of assembly activities/lines
  • Support additional protocols/tools for physical ergonomic assessment
  • Involve an egocentric camera mounted on the worker’s head to acquire additional visual information for estimating 3D hand poses and recognizing fine-grained assembly actions.
  • Estimate/integrate information related to the object parts that are involved in the car door assembly procedure (other than the cart and the car door).

Images:

Publications:

Papoutsakis, K., Papadopoulos, G., Maniadakis, M., Papadopoulos, T., Lourakis, M., Pateraki, M., & Varlamis, I. (2022), Detection of Physical Strain and Fatigue in Industrial Environments Using Visual and Non-Visual Low-Cost Sensors. In: Technologies (Vol. 10, Issue 2, p. 42). MDPI AG, DOI: https://doi.org/10.3390/technologies10020042, Link: https://www.mdpi.com/2227-7080/10/2/42

Papoutsakis, K., Bakalos, N., Fragkoulis, K., Zacharia, A., Kapetadimitri, G., & Pateraki, M. (2024). A vision-based framework for human behavior understanding in industrial assembly lines. ECCV 2024 Workshop Towards a Complete Analysis of People: Fine-grained Understanding for Real-World Applications (T-CAP). DOI: https://doi.org/10.48550/arXiv.2409.17356

K. Papoutsakis, M. Lourakis, M. Pateraki. Automatic Vision-based Monitoring of Work Postures and Actions for Human-Robot Collaborative Assembly Tasks, ERCIM News 132, January 2023, Special theme: Cognitive AI & Cobots, Guest editors: Theodore Patkos (ICS-FORTH) and Zsolt Viharos (SZTAKI), https://ercim-news.ercim.eu/images/stories/EN132/EN132-web.pdf

Contact:

Maria Pateraki 

Dimitris Kalogeras

Institute of Communication and Computer Systems

7 Heroon Polytechneiou Str. 15780

Konstantinos Papoutsakis

Foundation for Research and Technology – Hellas (FORTH)

Institute of Computer Science

N. Plastira 100, Heraklion, Crete, Greece

Motivation:

A robot navigating a partially known, dynamic environment has to be aware of its position with respect to its surroundings. This requires using on-board sensors to construct and update a map of the environment, while keeping track of the robot’s position and orientation within it. The aforementioned constitutes the computational problem of simultaneous localization and mapping (SLAM), and when it relies on data captured by cameras, the process is specifically referred to as visual SLAM (vSLAM).

Problem statement and intervention:

A practical issue pertaining to most vSLAM systems is that the 3D information they recover is expressed in an arbitrary coordinate system, typically determined from the starting segment of the camera trajectory. In other words, the employed reference frame is attached to the initial robot pose and subsequent position tracking relates to this arbitrary origin. Despite being a very common practice in vSLAM systems, this convention is unsuitable for practical navigation.

FELICE solution:

We have developed a cutting-edge, feature-based vSLAM pipeline akin to ORB-SLAM2 and PTAM. Further to this, we have integrated absolute localization capabilities into our vSLAM pipeline in order to align the vSLAM coordinate system with a global reference frame. Our approach capitalizes on the observation that FELICE’s use cases involve repeated navigation in the same environment, and combines the robustness of feature tracking with the accuracy of absolute localization to provide accurate localization for real-world applications.

Results of the FELICE project:

We have developed and deployed in real conditions a contemporary vSLAM pipeline that incorporates state of the art structure from motion (SfM) algorithms combined with numerous performance enhancements. A notable feature is the ability of vSLAM to store and later load maps from previous sessions, in order to merge them with the current map as a means to recover from unexpected loss-of-localization events. We have also developed a novel absolute localization pipeline that employs a representation derived offline from casually collected imagery of the target environment. This representation has been coupled with a pose estimation scheme that combines matches from multiple images with optimized SfM data. This hybrid scheme can be seamlessly integrated into any vSLAM system, allowing the latter to maintain tracking and mapping within its local reference frame while applying global localization corrections that do not interfere with its internal state. The computed localization estimates are accompanied by informative error metrics based on the reprojection error. Apart from achieving absolute localization, this process also bounds vSLAM’s localization error, promoting accuracy and reliability without requiring a pre-deployed infrastructure. We have combined, tested and evaluated the aforementioned absolute pose estimation and vSLAM pipelines and carried out experiments under realistic conditions which demonstrated that this combination achieves absolute localization with decimeter accuracy. Moreover, we have demonstrated that it can execute adequately on modest computing hardware without GPU acceleration.

Interfaces with other modules/partner work:

The developed vSLAM pipeline estimates for each image frame a general 6DoF pose, which, considering that the robot moves on a planar ground, is subsequently converted for simplicity to a 2D spatial location on the ground plane plus a yaw rotation around the robot’s vertical axis. This information is encapsulated in a standard PoseWithCovarianceStamped ROS message and published frequently. It is used by ACC’s navigation stack as the primary means of localization and combined with the base LiDAR readings, allows the planning and following of free paths from the current robot location to a desired goal destination. Apart from supporting navigation, the localization information is available to all other FELICE modules that require knowledge of the robot’s position and orientation such as the HRI and the digital twin.
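
The conversion from a general 6DoF pose to the planar pose published for navigation can be sketched as follows (plain Python without the ROS message types, for illustration; in practice the result is packed into a PoseWithCovarianceStamped message):

```python
import math

def yaw_from_quaternion(qx, qy, qz, qw):
    """Yaw (rotation about the vertical z axis) extracted from a unit
    quaternion, using the standard ZYX Euler-angle convention."""
    return math.atan2(2.0 * (qw * qz + qx * qy),
                      1.0 - 2.0 * (qy * qy + qz * qz))

def planar_pose(position, orientation):
    """Collapse a 6DoF pose to (x, y, yaw) for a robot moving on flat
    ground: the height is dropped and only the yaw rotation is kept."""
    x, y, _z = position
    return x, y, yaw_from_quaternion(*orientation)

# Identity orientation: the robot faces the global x axis (yaw = 0).
x, y, yaw = planar_pose((2.0, 3.0, 0.1), (0.0, 0.0, 0.0, 1.0))
```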

Future work and exploitation of results:

Visual SLAM technology has grown rapidly in recent years, owing to its increasing adoption in autonomous driving and the rise of new digital technologies such as automation, AI, robotics and augmented reality. Absolute (i.e., global) localization that is independent of the camera’s starting point and performs reliably in challenging environments is therefore a cornerstone of autonomous systems navigating complex environments.

Images:

Visual localization estimates the robot’s pose on the shop floor. This pose is shown superimposed on the floor map of the right image as the magenta circle in the bottom left with a radius to indicate the facing direction.

Publications:

  1. M. Lourakis, G. Terzakis and E. Hourdakis, “A Feature-based Visual SLAM System for Absolute Localization,” under review.
  2. G. Terzakis and M. Lourakis, “Efficient Pose Prediction with Rational Regression Applied to vSLAM,” in Proc. International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 11970–11976.
  3. G. Terzakis and M. Lourakis, “Fast and Consistently Accurate Perspective-n-Line Pose Estimation,” in Proc. International Conference on Pattern Recognition (ICPR), Kolkata, India, 2024, pp. 97–112.
  4. M. Pateraki, P. Sapoutzoglou and M. Lourakis, “Crane Spreader Pose Estimation from a Single View,” in Proc. International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), Lisbon, Portugal, 2023.
  5. M. Lourakis and G. Terzakis, “A Globally Optimal Method for the PnP Problem with MRP Rotation Parameterization,” in Proc. International Conference on Pattern Recognition (ICPR), Milano, Italy, 2021, pp. 3058–3

Contact:

Manolis Lourakis

Foundation for Research and Technology – Hellas (FORTH)

Institute of Computer Science

N. Plastira 100

Vassilika Vouton, GR-700 13 Heraklion, Crete, Greece