
Standardizing the robot benchmarking process: preparing robots for the real world

4 April 2019
Eurobench robots

Any scientific activity that generates new knowledge needs to go through an experimentation stage in which the proposed method is executed and monitored using appropriate performance indicators.

While the scientific rigour of the proposed method is in itself a relevant aspect, experimentation and the analysis of its outcomes are a necessary step for validating the proposed model or approach. In particular, comparison with state-of-the-art solutions is mandatory for contrasting the outcomes of the work and confirming the added value of the solution. A piece of research can be theoretically very strong and appealing, but it also has to provide a clear benefit with respect to the state of the art. This is where the concept of benchmarking comes in.

Benchmarking is not specific to research; it is also applied in business strategy. Benchmarking is defined as the process of measuring performance using specific indicators. It can be used to compare one solution to another, but also to measure the impact of possible adjustments to a given solution. At the community level, the interest of benchmarking is that it enables different approaches to be compared on a common basis, and provides a global, objective evaluation of the most appropriate solutions according to the metrics considered.

There are several examples of benchmarking efforts in the scientific and robotic context. To cite a few: the Amazon Picking Challenge, launched in 2015, is a competition focusing on warehouse automation, in which robots must autonomously pick and handle items. The American agency DARPA (Defense Advanced Research Projects Agency) organized several competitions to foster the development of autonomous cars; the 2005 edition was won by a team from Stanford University led by Sebastian Thrun (who later led the development of Google's self-driving car). The Cybathlon is a competition in which pilots with disabilities use bionic assistive technology to perform everyday tasks.

Nevertheless, most of these events are competitions in which robotic systems are evaluated using simple criteria such as task completion and completion time. Very few provide more detailed benchmark measurements that would help characterize the real performance of robotic systems and enable a more complete, in-depth comparison with competitors. This is the gap that the Eurobench project aims to fill.

Eurobench is a four-year European initiative launched in 2018. Coordinated by CSIC, it gathers 10 partners that have been working extensively on the definition of a benchmarking scheme for bipedal robotic systems, in response to needs identified in the Strategic Research Agenda. This work received special attention from EU representatives and was recognized in the Multi-Annual Roadmap (MAR) as one of the most promising standardization and benchmarking efforts in Europe.

Eurobench aims to create the first unified benchmarking framework for robotic systems in Europe. This framework will allow companies and researchers to test the performance of their robots at any stage of development. The project mainly focuses on bipedal machines, i.e. exoskeletons, prostheses and humanoids, but the framework is also meant to be extended to other robotic technologies.

Eurobench will create a consolidated benchmarking methodology for robotics, addressing the main issues observed in related solutions:

  • In bipedal robotics, current benchmarking approaches focus on generic, goal-level performance indicators: existing competitions usually do not describe performance in detail. Eurobench will provide such a description, using the system abilities defined in the Multi-Annual Roadmap for robotics in Europe, which correspond to the core capabilities a robotic system should provide and which can be used to compare heterogeneous systems.
  • In robotics, reproducing the results of scientific publications is very difficult: experimental protocols and methods are not specified in enough detail to be replicated by others. Where metrics exist, they are numerous and overlapping, and experimental procedures are defined by each lab independently, chosen according to lab-specific needs. This situation undermines the basis of scientific (and industrial) progress, i.e. evaluating the state of the art and building on the work of others. Eurobench will provide a set of benchmarking scenarios together with related testbeds, which will ease the quantitative comparison of robotic systems.
  • Current measurement systems and procedures used to evaluate robotic performance are heterogeneous and insufficiently detailed. At the same time, most databases of robotic performance, where they exist, are not standardized: they lack consistent task labelling, data segmentation, data formats, physical object dimensions, and descriptions of the measurement systems and hardware used. Eurobench will provide unified benchmarking software that handles all these aspects (a sketch of such a standardized record follows this list).
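
To make the standardization point concrete, here is a minimal sketch of what a standardized experiment record could look like. The field names below are illustrative assumptions only, not the actual Eurobench data format:

```python
from dataclasses import dataclass, field

# Hypothetical experiment record: the field names are illustrative only,
# not the Eurobench specification. They show the kind of metadata
# (task label, segmentation, measurement-system description) a unified
# performance database needs so that experiments remain comparable.
@dataclass
class ExperimentRecord:
    task_label: str                # e.g. "walking_on_slopes"
    system_id: str                 # anonymized robot or subject identifier
    measurement_system: str        # e.g. "optical motion capture, 100 Hz"
    data_file: str                 # path to the recorded time series (CSV)
    segments: list = field(default_factory=list)  # (start_s, end_s) per trial

record = ExperimentRecord(
    task_label="walking_on_slopes",
    system_id="exo_prototype_01",
    measurement_system="IMU suit, 200 Hz",
    data_file="trials/exo_prototype_01_slopes.csv",
    segments=[(0.0, 12.4), (15.0, 27.8)],
)
print(record.task_label, len(record.segments), "trials")
```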

To this end, Eurobench will develop:

  • Two benchmarking facilities, one for wearable robots (including exoskeletons and prostheses) and the other for humanoid robots, allowing companies and researchers to perform standardized tests on advanced robotic prototypes in a single location, saving resources and time.
  • A unified benchmarking software, which will allow researchers, developers and end-users worldwide to design and run the tests in their own laboratory settings.

To realize these goals, Eurobench will count on the collaboration of external entities, referred to as Third Parties, offering them financial support to develop and validate specific sub-components of the Facilities and the Software. This support is organized in two competitive Open Calls:

  • 1st FSTP Open Call: “Development of the benchmarking framework”. This Call, which recently closed, sought Third Parties interested in designing and developing:
    • Test bed devices, including ground terrains, perturbation devices, and measurement systems, to allow the testing of a great variety of motor skills.
    • Benchmarking metrics and algorithms that allow automatic calculation of performance scores based on recorded data (a minimal example of such a computation is sketched after this list).
    • Databases of human and/or robot performance.
  • 2nd FSTP Open Call: “Validation of the benchmarking framework”. This Call, open from June to August 2020, will offer Third Parties the opportunity to use the benchmarking facilities and/or software, at zero cost, to test and improve their own robotic systems.
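
As a flavour of what such an automatic score computation could look like, here is a minimal Python sketch. The CSV layout and column names ('completed', 'duration_s') are assumptions for illustration, not a Eurobench file format:

```python
import csv
from statistics import mean

def completion_scores(rows):
    """Reduce per-trial records to two simple performance indicators:
    success rate over all trials and mean duration of successful ones.
    """
    successful = [float(r["duration_s"]) for r in rows if r["completed"] == "1"]
    return {
        "success_rate": len(successful) / len(rows) if rows else 0.0,
        "mean_completion_time_s": mean(successful) if successful else float("nan"),
    }

# Hypothetical input file: one row per trial with columns
# trial, completed (0/1), duration_s.
with open("trials.csv", newline="") as f:
    print(completion_scores(list(csv.DictReader(f))))
```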

The proposals submitted to the 1st Call are currently being evaluated. The selected Third Parties will help the Eurobench consortium build the physical testbeds and enhance the software layer of the benchmarking solution.

The ambition of Eurobench goes beyond the timeframe of European Commission support: we aim for a sustainable benchmarking ecosystem. In this way, Eurobench could become, in the coming years, a reference benchmarking system for bipedal machines and humanoids. We also hope that the accumulated experience will make it possible to deploy the benchmarking concept in other robotic fields, strengthening the position of Europe in robotic benchmarking.

Within the consortium, TECNALIA provides its expertise in exoskeletons to develop relevant testbeds enabling the characterization of such wearable robots. TECNALIA also leads the development of the Benchmarking Software, which will handle the database of collected protocols and experiments, the computation of the benchmarking scores, and user access to all the benchmarking material.

By standardizing the robot benchmarking process, Eurobench will help prepare robots for the real world.

Anthony Remazeilles

ABOUT THE AUTHOR

Anthony Remazeilles

Anthony Remazeilles holds a PhD in Computer Science from the University of Rennes, France.
