Documentation

Everything you need to know about submitting controllers and interpreting benchmark results

Getting Started

RobotRank.ai is a cloud-based platform for benchmarking quadruped and humanoid robot controllers. Our platform provides standardized testing environments and metrics to help you evaluate and improve your controllers.

Prerequisites

  • A GitHub account
  • A repository containing your controller code
  • A controller that follows our standard template structure

How to Submit a Controller

Step 1: Prepare Your Repository

Your GitHub repository must contain a standard entry script (typically evaluate.py) that follows our API specifications. Use our template repository as a starting point.
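As a rough illustration, an entry script typically exposes a controller object the benchmark harness can call each simulation step. The class name, method signatures, and factory function below are hypothetical placeholders; the authoritative structure is defined by the template repository.

```python
# Hypothetical sketch of an evaluate.py entry point. The names here
# (Controller, compute_action, make_controller) are illustrative only;
# follow the template repository for the real API.

class Controller:
    """Maps per-step observations to joint commands for a legged robot."""

    def __init__(self, num_joints: int = 12):
        # 12 actuated joints is typical for quadrupeds such as the Unitree Go2.
        self.num_joints = num_joints

    def reset(self) -> None:
        """Called once before each evaluation episode (assumed hook)."""
        pass

    def compute_action(self, observation: list[float]) -> list[float]:
        """Return one command per joint for the current observation."""
        # Placeholder policy: zero commands. A real controller would
        # compute torques or target positions from the observation here.
        return [0.0] * self.num_joints


def make_controller() -> Controller:
    """Factory the benchmark harness is assumed to call."""
    return Controller()
```

Whatever the exact interface, keeping the policy behind a single per-step method like this makes it easy to test locally before submitting.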

Step 2: Configure Your Submission

Navigate to the Submit Controller page and provide:

  • Your GitHub repository URL
  • Simulation environment (PyBullet, MuJoCo, Isaac Sim)
  • Task to benchmark (running, jumping, walking, etc.)
  • Robot model (ANYmal, Unitree Go2, Unitree A1, Boston Dynamics Spot)
  • Whether to submit anonymously

Step 3: Run the Benchmark

Click "Run Benchmark" to start the evaluation process. Your controller will be tested in our cloud environment. Typical benchmarks take 5-15 minutes to complete. You'll receive an email notification when results are ready.

Note: Make sure your repository is public or that you've granted our platform access to private repositories through the GitHub integration.

Interpreting Benchmark Results

Our benchmark suite evaluates your controller across multiple key performance indicators (KPIs):

Overall Score

A weighted composite metric combining all KPIs. Scores range from 0 to 1000, with higher values indicating better overall performance.

Speed

Measures the average velocity achieved during the task. Rated on a scale of 0-10, where higher values represent faster movement.

Energy Efficiency

Evaluates power consumption relative to distance covered. Higher scores (0-10) indicate more efficient energy usage.
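Energy use relative to distance is commonly expressed for legged robots as the dimensionless cost of transport, CoT = E / (m · g · d), where lower is better. The platform's exact normalization onto the 0-10 scale is not published, so the following is only an illustration of the underlying quantity.

```python
# Illustrative cost-of-transport calculation (not the platform's exact
# scoring formula, which isn't published): CoT = energy / (mass * g * distance).
G = 9.81  # gravitational acceleration, m/s^2


def cost_of_transport(energy_j: float, mass_kg: float, distance_m: float) -> float:
    """Dimensionless energy cost of moving the robot's weight over a distance."""
    return energy_j / (mass_kg * G * distance_m)


# Example: a 15 kg quadruped consuming 900 J over a 20 m run.
cot = cost_of_transport(900.0, 15.0, 20.0)  # ~0.31; lower means more efficient
```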

Stability

Assesses balance and smoothness of movement. Scored from 0 to 10, with higher values representing better stability and fewer falls.
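The Overall Score is described as a weighted composite of the KPIs above. The actual weights are not published, so this sketch assumes equal weights purely to show how 0-10 KPIs could map onto a 0-1000 composite.

```python
# Sketch of a weighted composite mapping 0-10 KPI scores onto a 0-1000
# overall score. The equal weights below are assumptions, not the
# platform's published values.

def overall_score(kpis: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of 0-10 KPIs, rescaled to the 0-1000 range."""
    total_weight = sum(weights.values())
    weighted = sum(kpis[name] * w for name, w in weights.items())
    return (weighted / total_weight) * 100.0  # 10 (max KPI) * 100 = 1000 max


kpis = {"speed": 8.2, "energy_efficiency": 6.5, "stability": 9.1}
weights = {"speed": 1.0, "energy_efficiency": 1.0, "stability": 1.0}
score = overall_score(kpis, weights)  # (8.2 + 6.5 + 9.1) / 3 * 100 ≈ 793.3
```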

Downloadable Results

After your benchmark completes, you can download a ZIP file containing:

  • Detailed performance metrics (JSON format)
  • Console logs from the simulation
  • Video recordings of the robot's performance
  • Comparison charts against top controllers
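Because the metrics ship as JSON inside the ZIP, you can inspect them programmatically without unpacking the archive by hand. The member name `metrics.json` below is an assumption; check the actual file names in your download.

```python
# Sketch of reading the metrics JSON out of a downloaded results archive.
# The member name "metrics.json" is an assumption about the ZIP layout.
import json
import zipfile


def load_metrics(zip_path: str, member: str = "metrics.json") -> dict:
    """Extract and parse the metrics JSON from the results ZIP."""
    with zipfile.ZipFile(zip_path) as zf:
        with zf.open(member) as f:
            return json.load(f)
```

For example, `load_metrics("benchmark_results.zip")` would return the parsed metrics as a Python dict, ready for plotting or regression tracking across submissions.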

Using the Leaderboard

The leaderboard displays global rankings of all submitted controllers. You can search, filter by category, and sort by different metrics to find the best-performing controllers for your use case.

Features

  • Search by controller name or user
  • Filter by task category (Locomotion, Manipulation, Navigation)
  • Sort by any performance metric
  • View detailed performance breakdowns


Contact Us

Have questions or need help? We're here to assist you.

For partnership inquiries, please visit our partnerships page.