Reinforcement learning with Poppy

Hello everyone!
I have been working for some time with the Poppy robot in a CoppeliaSim (V-REP) environment in Python. I don't use Pypot, for various reasons; I use PyRep.
For learning, I have a TensorFlow/PyTorch environment with various libraries that let the robot learn simple movements in an unsupervised way.
So far, so good (I plan to publish a GitHub repository soon).
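For context, the PyRep side of my setup looks roughly like this (a minimal sketch; the scene file and joint name are placeholders for my actual model):

```python
# Minimal PyRep loop: load a CoppeliaSim scene, drive one Poppy joint,
# and step the physics. Scene path and joint name are placeholders.
from pyrep import PyRep
from pyrep.objects.joint import Joint

pr = PyRep()
pr.launch('poppy_scene.ttt', headless=True)  # CoppeliaSim scene with the Poppy model
pr.start()

hip = Joint('r_hip_y')  # placeholder joint name from my Poppy model

for _ in range(100):
    hip.set_joint_target_position(0.1)  # target angle in radians (position control)
    pr.step()                           # advance the simulation by one timestep

pr.stop()
pr.shutdown()
```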
I am now tackling walking and, of course, unsupervised, it does not converge (the robot moves forward, but not for long).
Now I want to use MuJoCo to benefit from the already mature DeepMimic code. DeepMimic lets you learn using motion-capture clips as reference motions (and there are plenty of databases).
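The core of that approach is an imitation reward that tracks the reference motion. Here is a minimal sketch of the pose term, assuming simple angle-based joints (the DeepMimic paper itself uses quaternion differences for spherical joints):

```python
import numpy as np

def pose_imitation_reward(q_sim, q_ref, scale=2.0):
    """DeepMimic-style pose reward: exp of the negative tracking error.

    q_sim, q_ref: joint angles (radians) of the simulated robot and of
    the mocap reference at the same timestep. The scale of 2.0 follows
    the weighting of the pose term in the DeepMimic paper.
    """
    err = np.sum((np.asarray(q_sim) - np.asarray(q_ref)) ** 2)
    return float(np.exp(-scale * err))
```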
I managed to import my digital Poppy into MuJoCo as XML with the STL files attached, and I can control the robot in MuJoCo. But to use the motion-capture files, I need a skeleton and multiple conversions via Blender to output a BVH file. This is where I am stuck. If by chance someone here has already made a skeleton structure in BVH or ASF/AMC, that would be wonderful.
The idea is to convert a motion-capture file into angles for each motor.
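Roughly, the extraction step would look like this (a sketch using the third-party `bvh` package from PyPI; the bone-to-motor mapping below is just an example, and real retargeting also has to handle axis conventions and offsets):

```python
# Pull per-joint rotation channels out of a BVH clip ('pip install bvh').
# File name and bone-to-motor mapping are placeholders.
from bvh import Bvh

with open('walk.bvh') as f:
    mocap = Bvh(f.read())

# Hypothetical mapping from BVH bone names to Poppy motor names.
bone_to_motor = {'RightUpLeg': 'r_hip_y', 'RightLeg': 'r_knee_y'}

trajectories = {motor: [] for motor in bone_to_motor.values()}
for frame in range(mocap.nframes):
    for bone, motor in bone_to_motor.items():
        # One Euler channel per motor axis; picking the right channel
        # depends on how the skeleton and the motor axes are oriented.
        angle_deg = mocap.frame_joint_channel(frame, bone, 'Xrotation')
        trajectories[motor].append(angle_deg)

print(f'{mocap.nframes} frames at {mocap.frame_time:.4f} s/frame')
```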

Thank you

Hello,

The BVH format has already been requested here before.
Maybe @Ziqi has made progress on this topic?

OK, I have a BVH file made with Blender; I used a mocap file (a walk) with a different skeleton and remapped each bone. (I'll share it tomorrow.)
It's not perfect, but it works.
Now, problem after problem: I am trying to convert the BVH to the DeepMimic format (there is a converter on GitHub), but it doesn't work :weary:
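For reference, the target layout I am aiming for is the DeepMimic motion file: a JSON object with "Loop" and "Frames", where each frame starts with its duration, then the root position and root rotation quaternion, followed by the joint rotations. A minimal sketch with placeholder values (the joint ordering and dimensions must match the character file of the model):

```python
# Write one placeholder frame in the DeepMimic motion-file layout.
import json

frame_time = 0.0333  # seconds per frame, e.g. taken from the BVH header

frame = [frame_time,
         0.0, 0.85, 0.0,      # root position (x, y, z)
         1.0, 0.0, 0.0, 0.0,  # root rotation quaternion (w, x, y, z)
         0.2, -0.4]           # joint rotations (placeholder hip/knee angles)

motion = {"Loop": "wrap", "Frames": [frame]}

with open('poppy_walk_motion.txt', 'w') as f:
    json.dump(motion, f, indent=2)
```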
In my next life, I will be a baker

Congratulations! And thank you for sharing the BVH file here!

In my next life, I will be a baker

Solving technical issues step by step sounds like the daily work of many engineers.