ManipulationNet released!

We are excited to announce ManipulationNet, a community-driven global infrastructure that enables benchmarking robot manipulation research at scale, with any robot, at any time, and anywhere. ManipulationNet is inspired by prior efforts in benchmarking robot manipulation, including standard object sets paired with evaluation protocols, simulation-based benchmarking platforms, and onsite competitions at conferences. For the first time, it combines authenticity, accessibility, and realism in a single large-scale online benchmarking system. In brief, ManipulationNet provides both hardware and open-source software to host standardized benchmarking tasks: participants submit solutions from their own locations, at any time, in a distributed manner, and their performance is then evaluated centrally so they can shine on global leaderboards.

ManipulationNet organizes benchmarking tasks in two tracks: 1) a Physical Skills Track that evaluates robot abilities in contact-rich physical tasks; and 2) an Embodied Reasoning Track that challenges robots to ground language, vision, and combined language-and-vision prompts into effective real-world actions. In the first release, we provide a “Peg-In-Hole Assembly” task in the Physical Skills Track and a “Block Arrangement” task in the Embodied Reasoning Track.
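To make the distributed-submission workflow concrete, here is a minimal sketch of how a participant might describe a submission to one of these tasks. The track and task names are taken from this announcement; the manifest schema, the `build_manifest` helper, and the field names are illustrative assumptions, not the actual mnet_client API (consult the mnet documentation for the real interface).

```python
import json

# Tracks and first-release tasks named in this announcement.
TASKS = {
    "physical-skills": ["peg-in-hole-assembly"],
    "embodied-reasoning": ["block-arrangement"],
}


def build_manifest(track: str, task: str, team: str, solution_uri: str) -> dict:
    """Assemble a submission manifest (hypothetical schema).

    A participant runs the task on their own robot at their own site
    (distributed execution) and points `solution_uri` at the recorded
    evidence; the central server then scores the attempt and updates
    the global leaderboard.
    """
    if track not in TASKS or task not in TASKS[track]:
        raise ValueError(f"unknown track/task: {track}/{task}")
    return {
        "track": track,
        "task": task,
        "team": team,
        "solution_uri": solution_uri,
    }


manifest = build_manifest(
    "physical-skills",
    "peg-in-hole-assembly",
    "example-team",
    "s3://example-bucket/run-001.tar.gz",  # placeholder upload location
)
payload = json.dumps(manifest)  # body of a hypothetical submission request
```

The point of the sketch is the division of labor: task execution stays local to each participant, while only a serialized description of the attempt travels to the central evaluator.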

Over time, ManipulationNet aims to “connect the dots” between state-of-the-art robot abilities and open challenges, building a network of real-world robot skills, a network of research challenge roadmaps, and a network of research progress trajectories. Want to learn more about ManipulationNet? Check out the links below:

Project website: https://manipulation-net.org
Founding Committee: https://manipulation-net.org/committee.html
GitHub Repository: https://github.com/ManipulationNet/mnet_client
mnet documentation: https://mnet-client.readthedocs.io
Paper: https://manipulation-net.org/MNet_preprint.pdf

Have questions? Please contact support@manipulation-net.org