Replay script error

Might be a noob question here, but the evaluation script keeps throwing this error when trying to replay the stored actions:

___Replay trajectory 0___
Error: Number of actions in log does not match with expected episode length.
Traceback (most recent call last):
  File "scripts/", line 95, in <module>

Any idea what I am doing wrong here? I have made sure that my environment runs for exactly EPISODE_LENGTH steps, but it still keeps throwing this error.

It might help to find out what the actual number of actions in the log is (I should have added this information to the error message… :frowning:).

The action logs are stored as regular pickle files (action_log_##.p in the output directory), so you can open and inspect them manually:

$ singularity shell rrc2021.sif
Singularity> . /setup.bash
Singularity> ipython3
In [1]: import pickle
In [2]: dat = pickle.load(open("path/to/action_log_00.p", "rb")) 
In [3]: len(dat["actions"])

The output of the last line should be 120000.

Note that the count has to match exactly, so you will also get this error if the log contains more steps than expected.
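The inspection steps above can be wrapped in a small helper that compares the logged action count against the expected episode length in one go (a sketch; the "actions" key follows the inspection example above, and 120000 is the episode length quoted in this thread — the function name is just for illustration):

```python
import pickle

EPISODE_LENGTH = 120000  # expected number of steps, per the thread above


def check_action_log(path, expected=EPISODE_LENGTH):
    """Return the number of logged actions; print a hint if it differs."""
    with open(path, "rb") as f:
        log = pickle.load(f)
    n = len(log["actions"])
    if n != expected:
        print(f"{path}: {n} actions logged, expected {expected} "
              f"(difference {n - expected:+d})")
    return n
```

Running this over all action_log_##.p files quickly shows which trajectory has too few or too many steps.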

Thanks, with this I found my error. I also wanted to ask: where does the cube need to be initialised for each episode? I believe I am getting another error there when evaluating the replay logs.

The initial position of the cube is (0, 0, 0.0325) (this is also defined in trifinger_simulation.tasks.move_cube_on_trajectory.INITIAL_CUBE_POSITION).
The orientation is (0, 0, 0, 1) (which is the default, when you don’t specify it explicitly).
See line 383 and following in the example package.
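For reference, the pose quoted above can be written out as plain constants (a sketch; the canonical position is defined as trifinger_simulation.tasks.move_cube_on_trajectory.INITIAL_CUBE_POSITION, and the helper function here is hypothetical):

```python
# Values from the post above; the canonical definition lives in
# trifinger_simulation.tasks.move_cube_on_trajectory.INITIAL_CUBE_POSITION.
INITIAL_CUBE_POSITION = (0.0, 0.0, 0.0325)       # x, y, z in metres
INITIAL_CUBE_ORIENTATION = (0.0, 0.0, 0.0, 1.0)  # identity quaternion (x, y, z, w)


def initial_cube_pose():
    """Return the (position, orientation) the cube must start each episode with."""
    return INITIAL_CUBE_POSITION, INITIAL_CUBE_ORIENTATION
```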

Thanks a lot for that.

Again a replay error. I see that all the values match, yet the error pops up:

___Replay trajectory 0___
Arrays are not equal
Step 51546: Recorded robot position does not match with the one achieved by the replay
Mismatched elements: 9 / 9 (100%)
Max absolute difference: 9.8512918e-08
Max relative difference: 5.50550192e-07
 x: array([ 0.302222,  0.353288, -1.188481,  0.033202,  0.683025, -1.215074,
        0.415178,  0.573397, -1.253875])
 y: array([ 0.302222,  0.353288, -1.188481,  0.033202,  0.683025, -1.215074,
        0.415178,  0.573397, -1.253875])

The error is very small in this step, so it doesn’t show up in the printed (rounded) values.

If I recall correctly, your code is based on the benchmark-rrc repository? If yes, it is likely the same issue as discussed here: Error when running the evaluation script

My first guess is that somewhere in the code some properties of pyBullet are changed, which results in slightly different behaviour and thus in different values than when replaying the actions in a “clean” environment.
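To illustrate why a changed physics property breaks the replay, here is a toy sketch (deliberately not pyBullet): two runs of the same falling-body simulation, identical except for the integration time step, end up at slightly different states even though both are “correct” in themselves.

```python
def simulate_fall(steps, dt):
    """Semi-implicit Euler integration of a body falling under gravity.

    Returns the final height after `steps` steps of size `dt`.
    """
    g = 9.81          # gravitational acceleration in m/s^2
    z, v = 1.0, 0.0   # initial height and velocity
    for _ in range(steps):
        v -= g * dt
        z += v * dt
    return z


# Same nominal duration (0.1 s), but different step sizes:
z_a = simulate_fall(steps=100, dt=0.001)
z_b = simulate_fall(steps=50, dt=0.002)
# z_a and z_b differ slightly, so an exact equality check on the
# trajectories would fail, just like in the replay error above.
```

The same effect occurs if a replay runs with any physics parameter changed relative to the recorded run: the per-step differences are tiny, but an exact comparison catches them.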

Yes, that is true, it is based on the benchmark-rrc repo. I’ll try to find the pyBullet changes in the repo. However, in case the bug does not get solved, is there any way I can still submit? I do have ideas for the upcoming stages.

Yes, please submit in any case. I already mentioned it in some other thread but also made a proper announcement now: Submission without evaluation results

I also think we can relax the equality check in the replay a bit. The purpose of this check is to ensure that the simulation environment has not been modified in a way that makes the task easier, but I think small deviations like the one you observe should not be a problem (assuming they don’t get bigger over time).
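The difference between the strict check and a relaxed one can be sketched with NumPy (a sketch only; the actual tolerances in the updated replay script may differ):

```python
import numpy as np

# A recorded joint position and a replayed one that deviates by ~1e-7,
# the order of magnitude seen in the error message above.
recorded = np.array([0.302222, 0.353288, -1.188481])
replayed = recorded + 1e-7

# Strict check: fails on any difference, however small.
exact_match = np.array_equal(recorded, replayed)       # False

# Relaxed check: tolerates deviations far below physical significance.
close_enough = np.allclose(recorded, replayed, atol=1e-5)  # True
```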

I’m working on a corresponding update to the Singularity image and will notify you as soon as it is ready.

Sure thing, in the meantime I am submitting the work I have so far. Thanks a lot for your support. I got to learn a lot during the challenge and hope to continue in the upcoming stages. :smiley:

In case it is still needed, the updated Singularity image is now ready. You should be able to get it via

singularity pull library://felix.widmaier/rrc/rrc2021:latest

Note, however, that this will only help if the difference between log and replay stays small throughout the run.