Error when running the evaluation script

Hi, I am getting this error when running the pre-stage evaluation script with our code:

Error: Arrays are not equal
Step 91043: Recorded robot position does not match with the one achieved by the replay
Mismatched elements: 6 / 9 (66.7%)
Max absolute difference: 1.69835204e-06
Max relative difference: 1.8292636e-06
 x: array([-1.460907e-01,  6.899884e-01, -9.284331e-01,  7.378698e-01,
        7.530735e-01, -9.388939e-01,  3.647714e-01, -2.641866e-21,
 y: array([-1.460908e-01,  6.899881e-01, -9.284348e-01,  7.378697e-01,
        7.530736e-01, -9.388939e-01,  3.647714e-01, -2.641866e-21,
Traceback (most recent call last):
  File "scripts/", line 90, in main
    …, check=True, stderr=subprocess.STDOUT)
  File "/usr/lib/python3.8/", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['singularity', 'run', '--containall', '--cleanenv', '-B', '.:/input:ro,../results2:/ws', PosixPath('image.sif'), 'rm -rf /ws/{install,build,log,src} && mkdir -p /ws/src && mkdir -p /ws/output && cp -r /input /ws/src/pkg && cd /ws && colcon build && . install/local_setup.bash && python3 -m trifinger_simulation.tasks.move_cube_on_trajectory evaluate_and_check --exec /ws/src/pkg/ /ws/output']' returned non-zero exit status 1.

Any idea what could possibly be the issue here? Or suggestions on debugging this. Thanks!
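For what it's worth, the "Arrays are not equal" message looks like NumPy's strict array comparison, which fails on any elementwise difference, even micrometre-scale drift like the ~1.7e-06 above. Here is a small sketch (with made-up stand-in arrays; the real ones come from the evaluation log) of how to locate and quantify the mismatched elements when debugging this:

```python
import numpy as np

# Hypothetical stand-ins for the recorded and replayed robot positions;
# in practice these come from the evaluation/replay logs.
recorded = np.array([-0.1460907, 0.6899884, -0.9284331])
replayed = recorded + np.array([0.0, -3e-7, -1.7e-6])  # tiny replay drift

# A strict equality check fails on any elementwise difference, however small:
try:
    np.testing.assert_array_equal(recorded, replayed)
except AssertionError:
    print("strict equality check failed")

# Locate and quantify the mismatches to judge whether the drift is
# numerical noise or a real behavioural difference:
diff = np.abs(recorded - replayed)
print("mismatched indices:", np.nonzero(diff)[0])
print("max absolute difference:", diff.max())
```

This at least tells you whether the divergence is uniform numerical noise or concentrated in particular joints.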

Did you enable visualisation (in which case a goal marker is automatically added), or do you add any custom visual shapes in your code? I'm not sure if this is a bug or expected behaviour, but for some reason the physics in PyBullet change slightly when visual shapes are added (even if they don't have a collision shape).

Another question: are you using the base challenge image, or did you extend it? In the latter case, what modifications did you make (in case you are willing to share that information here)?

Oh, I have disabled visualisation. But I'll double-check whether a goal marker is being created despite this.
As for the image, I am using this base image:

singularity pull library://felix.widmaier/trifinger/user:latest

and extending it using the definition here: benchmark-rrc/image.def at master · cbschaff/benchmark-rrc · GitHub.

The trifinger/user image has some recent updates which are not yet included in the challenge image. None of them should be relevant for the pre-stage, so I think it should make no difference, but just to be safe, can you also test using library://felix.widmaier/rrc/rrc2021:latest as the base image?

Another thing to check: are you modifying any PyBullet properties in your code (apart from the visual shapes I already mentioned)?


Thanks! I'll try out this image.
I'm actually building on the benchmark code from RRC 2020, so I'm not entirely sure whether it makes any PyBullet changes, but I'll look for anything suspicious.

@madman were you able to resolve this?

Hi, the way I resolved this was by removing all PyBullet-related calls (other than pure computation calls, like calculating the Jacobian).
