Presentation of the ODOI project

New Video showing the robot taking a rest - leaning against a wall

I am programming the robot to mimic some of the postures we (as human beings) exhibit in our daily life. Programming some of these mundane postures and triggering them at the appropriate moment can really surprise the audience. Sitting, getting up from a chair or a bench, taking an object, walking with different moods, leaning against a wall, crossing an obstacle… are some of the postures I am exploring.

Today I focus on “rest against a wall”. There are many ways to lean against a wall: contact with the wall through your back or your foot, and also the sideways way of leaning, where contact with the wall is achieved through the shoulder or the hand.

The video below shows the example “sideways leaning against the wall – contact with the hand”.

Meanwhile, it is also a good exercise for testing/improving the software (forward/inverse kinematics, sequence of actions combined with sensor feedback).

The next step is to stop leaning against the wall, go back on two feet in order to resume walking (for instance).


Hello poppy community

I am exploring the idea of a robot sitting on a chair (let us call it a sitting tattletale bot) and taking various postures based on its mood and/or external events. In the latter case, one can specify how the robot reacts to the events.

For instance, you can assign a dedicated posture based on news content/headline analysis (web scraping) – here are some examples:

  • Weather forecast: if it is sunny the robot is happy and relaxed, if it rains heavily it becomes sad, and if there is a storm it gets a bit anxious…

  • Celebrities: suppose you want to be informed when a celebrity gets into a relationship and you want the robot to be happy about it – it will tell you by taking a happy posture (picked among several happy postures). But if you want this news to make it sad, it will adopt a sad posture instead;

  • Sport: if your favorite team wins, the robot will be happy and will stand up, and if the team loses, the robot will adopt an angry posture (like supporters in a stadium);

  • Job reports: if the job report is bad, the robot will inform you and adopt an anxious posture.
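To give a concrete flavor of the idea, such an event-to-posture mapping could be sketched as a small lookup table. This is only a hypothetical illustration: the event names, moods, postures and the `pick_posture` helper are all made up for this sketch, not part of the actual robot software.

```python
# Hypothetical sketch: mapping scraped news events to robot postures.
# Event names, postures and pick_posture() are illustrative only.
import random

POSTURES = {
    "happy": ["arms_up", "clap", "head_nod"],
    "sad": ["head_down", "slumped_shoulders"],
    "anxious": ["hand_wringing", "look_around"],
    "angry": ["crossed_arms", "fist_shake"],
}

# User-defined reactions: which mood each scraped event should trigger.
REACTIONS = {
    "weather_sunny": "happy",
    "weather_storm": "anxious",
    "team_lost": "angry",
    "job_report_bad": "anxious",
}

def pick_posture(event):
    """Return a posture name for an event, or None if no reaction is set."""
    mood = REACTIONS.get(event)
    if mood is None:
        return None
    # Pick one posture among several for the same mood, as described above.
    return random.choice(POSTURES[mood])
```

The random choice among several postures for the same mood mirrors the idea that the robot can pick one happy posture among several ones.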

The first step is to create some postures and see how they look; here is a first video showing the robot sitting down on a chair and taking two different postures.

These are the ideas I am exploring right now, and I will give more details on this project later if people are interested.



Postures and sound


Dear Poppy community,

In this post I introduce a new video in which I continue to explore different postures. This time I decided to make an experiment by recording some sounds and voices. The idea here is to show that if the voice, and more importantly the tone of the voice, is in “harmony” with the posture, the way you look at the robot is really different.

There are 5 postures:

  • Applauding,
  • Thinking,
  • A bit anxious,
  • A bit angry,
  • Time to take a rest.

It’s great!

As you say, the voices and sounds make the difference, and make the robot more… real ^^

Just a bit scary with a “human” voice. Did you record your voice and play it? Or is it text to speech?

Many thanks Damien

Actually I recorded my voice. It is an experiment and I also wanted to play with the tone of the voice and see how it looks when it is synchronized with the postures.
I am still an amateur at making movies … mixing sounds and videos.
I did some research on text-to-speech software but I did not find anything that lets you play with the tone of the voice. If you know such software I will be more than happy to have a link :slight_smile:

I am thinking about how to record some sounds and synchronize them with the postures - so the robot will play them.

I do not really know a lot about text to speech since I have not worked on it yet (but I will!). Maybe you can find what you need here: Open-source speech recognition and text-to-speech potentially usable with the Poppy robots

Thank you Damien! Really interesting, especially the Mycroft software.

Regarding my project, I actually recorded my voice, but it is also possible to record any kind of voice, such as a “cartoon-like” voice or even “R2D2-like sounds”. In the latter case, you really need to work with professionals.

What is crucial is that the tone of the voice/sound matches the posture.

Ah great! Sounds cool!! What software do you use to modify your voice?

In your work/video, the tone sounds good with respect to the posture (I think) :slight_smile:

Nice research!! Most robots do not risk having such tones in their voices. I used the Acapela text-to-speech engine once (it is not free) and you can try, in English, the “Will” voice with its different emotions. It is very funny to use… but it is a “bazooka” solution, since happy, sad or angry are each treated as a new language with a whole dictionary per “emotion”. If you have a robot with a big memory it can be OK.

I like the “go to rest” one since there is no word, just vocal noises :slight_smile:

Hello Thot, good sense of humour :slight_smile: Thanks for the link, I will have a look. That’s true, when robots are equipped with a voice feature, it is usually a flat tone. There have been very few attempts to bring together postures and tones. It is quite a surprise because it is, from my point of view, one of the keys if one wants to develop robots with which the audience can create an emotional bond.
Another way to do it is through dance/theatre with a musical background, like you are doing with the school of the moon.

Damien, so far I do not use any software to modify my voice.

I attended the “Cafe Neu Romance” conference in Prague, at which I presented the ODOI and some ideas around it. It is a “small” conference organized by vivelesrobots focusing on robotics and the arts (music, dance, sculpture, puppetry…). It is quite interesting because you meet people from different horizons using robotics to create new artistic concepts.

My lecture focused on how it is possible for the audience to create an emotional bond with a robot. After introducing some key drivers that may facilitate the emotional bond, I described the concept of a sitting tattletale robot. The idea is to combine voice (speech2text and text2speech), dedicated postures (which should arouse feelings in the audience) and interactivity.

For instance one can consider an application in which the user defines:

  • The “mood spectrum” of the robot (happy, angry, neutral, depressed…);
  • Different topics of interest (sport, finance, politics, celebrities…) and, for each topic, what the user is precisely looking for (for instance sport, then football, then the score of your favorite team).

The tattletale robot will search the internet and, based on what it finds, report it with a specific posture and associated voice/tone. We think that adding a voice with a tone in congruence with the posture changes the way the audience looks at the robot and increases the probability that the experience becomes pleasant, joyful and playful.

Several other applications of the tattletale robot are discussed as well, such as storytelling along with mimicking some events/objects that are part of the story, games such as “hide and seek”, or advertising.

More details in the pdf below:

Cafe Neu Romance 2016.pdf (1.9 MB)

Here are ideas that I wanted to share with the community and any comment is more than welcome :slight_smile:

Here is another pic describing the different designs of the robot called Myon, which was the star of My Square Lady, a new opera at Komische Oper Berlin in 2015 (sorry for the low quality of the pic):


We would like to give an overview of the software architecture we have designed. Figure 1 describes the different software modules and how they are distributed over the different pieces of hardware.

We decided to distribute time- and CPU-consuming applications on dedicated hardware. This is why we prefer a camera with its own embedded CPU to process images (so far the Pixy, but the JeVois, on Kickstarter now, is another good candidate) as well as an OpenCM9 micro-controller to execute the different postures that have been processed elsewhere.

All other applications run on the main processor board.

The objective is to create scripts which are a combination of actions and conditions to be met in order to trigger further actions. Moreover, scripts can be combined together. Another point is that these scripts will also be used for communication between robots.

The formalism is based on Petri nets, where the transitions contain conditions. It will allow us to include probabilistic values (for instance, to trigger a “sub-net” based on the mood of the robot).

We end up with a “classical” software architecture: one centralized module running the scripts, connected to different modules that do dedicated computations - such as computing a sequence of movements to go from A to B - and produce a result. That result will then be used as an input by other modules - to execute a sequence of movements, for instance - or as a condition to be checked by the scheduler (transitions of the Petri net).

            Figure 1: description of the software architecture

The following modules have been identified:

  • IK engine: this module computes the position of each joint based on objectives and constraints;

  • VoiceDB: this module contains a list of pre-recorded voice sentences (different tones) along with time markers that will be used to synchronize body movements;

  • CreateSeq: this module is in charge of computing the sequence of commands (speed and position) for each joint. In order to compute the sequence we need to set up a timeframe, and the timeframe may be related to a voice sentence (by using time markers);

  • Scheduler: this module is in charge of executing a script;

  • PlayVoice: this module is in charge of playing a voice file;

  • Speech Recognition: this module is in charge of recognizing speech. We decided to use the Mycroft open-source software;

  • Camera Server: this module is in charge of interfacing with the Pixy cam or another camera that comes with its own processor;

  • Play Posture files: this module is in charge of executing a sequence of postures issued by CreateSeq;

  • Mood Mngt: this module manages the mood of the robot and based on the current mood, different scripts (behaviors) will be triggered.
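As a rough illustration of what a CreateSeq-like module could produce, here is a minimal sketch that linearly interpolates joint angles over a timeframe into per-step (position, speed) commands. The function name, the fixed time step and the constant-speed profile are assumptions made for this sketch, not the actual implementation.

```python
# Illustrative sketch of a CreateSeq-style computation: given start/goal
# joint angles and a timeframe, produce per-step (position, speed)
# commands. Names and the fixed-step linear scheme are assumptions.

def create_sequence(start, goal, duration, dt=0.02):
    """Linearly interpolate joint angles over `duration` seconds.

    start, goal: dicts {joint_name: angle_deg}
    Returns a list of (t, {joint: (position, speed)}) commands.
    """
    steps = max(1, int(round(duration / dt)))
    seq = []
    for i in range(1, steps + 1):
        t = i * dt
        frame = {}
        for joint, q0 in start.items():
            q1 = goal[joint]
            speed = (q1 - q0) / duration      # deg/s, constant here
            frame[joint] = (q0 + speed * t, speed)
        seq.append((t, frame))
    return seq
```

A real module would likely use smoother velocity profiles and align the timeframe with the time markers coming from VoiceDB.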

Figure 2 gives an example of the “hide and seek” script. Conditions have been written near their associated transition. Actions for each node have been listed on the right part of the Figure.

The scenario is the following: the robot is seated and we ask whether it wants to play the game. If it accepts, it asks for the color of the object that will be hidden somewhere around it. Then the robot hides its cam with its arms, counts to 10 (for instance) and then looks for the object in front of it, then on its left and finally, if necessary, on its right.

As soon as the object is found, it will stop searching, express some kind of “happiness” and will ask to play again.

Based on its mood, we added some alternative scenarios, represented by clouds:

  • The robot can be stubborn, even though we say “no the game is over” it will keep asking to play again with a stubborn posture, let’s say it will cross its arms and keep the head down (Transition from Node 6);

  • The robot can be sad or imploring the user to play again, even though we say “no the game is over” (Transition from Node 6);

  • The robot can cheat at the beginning, meaning that while it is counting, its head will move so that the camera can track the object (Transition from Node 1).

  • In case the robot did not find the object, it can be suspicious and ask us whether or not we cheated, i.e., we did not hide any object (Transition from Node 4).

Many other sub-scripts can be added in order to enrich the “main script” with the objective to make the experience more enjoyable and create surprises for the end-user.
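To make the mechanics concrete, here is a minimal sketch of such a Petri-net-style script runner with conditional transitions, in the spirit of the “hide and seek” script. It is simplified to a single token, and the node names, conditions and the 0.3 probability are illustrative assumptions, not the actual scheduler.

```python
# Minimal sketch of a Petri-net-style script runner: transitions carry
# conditions evaluated against a context dict. Single-token simplification.

class Script:
    def __init__(self, start):
        self.node = start          # single token: the current node
        self.transitions = []      # (src, dst, condition) triples

    def add(self, src, dst, condition):
        self.transitions.append((src, dst, condition))

    def step(self, context):
        """Fire the first enabled transition leaving the current node."""
        for src, dst, cond in self.transitions:
            if src == self.node and cond(context):
                self.node = dst
                return True
        return False

script = Script("seated")
script.add("seated", "counting", lambda c: c["accepted"])
script.add("counting", "searching", lambda c: c["count"] >= 10)
# Probabilistic variant: a "stubborn" sub-net fired with probability 0.3.
script.add("searching", "stubborn",
           lambda c: c["game_over"] and c["rand"] < 0.3)
```

A real scheduler would also run each node's actions when a transition fires, and could carry several tokens to represent concurrent behaviors.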

                  Figure 2: “Hide and seek” script

For the projects I am developing, I had to develop new arms, each one with 7 DOFs. Indeed, such arms offer a lot of possibilities, and I am planning to create some (hopefully) funny videos showing a robot pushing a box, resting on a balustrade…

However, when you have a lot of degrees of freedom (arms and torso), it is necessary to develop efficient inverse kinematics software (argh).

I just released a video showing the IK in action:

It is not too bad considering that the robot is not a model of rigidity :slight_smile:

I am writing a paper describing the IK which is a mix of closed form solutions, for the arms and the torso, and geometry for the legs. As soon as it is finished, I will publish it here.

I have finished writing a paper entitled “Inverse kinematics for a Humanoid Robot: a mix between closed form and geometric solutions”, available for download.

The article presents a derivation of the forward kinematics (FK) and inverse kinematics (IK) of a humanoid robot with 32 degrees of freedom, specifically the ODOI platform. The FK and IK are not solved for all 32 joints at once but instead divided into six parts: the two arms (seven joints each), the two legs (seven joints each) and the torso (5 joints). I also introduce some software tools that are really useful for visualizing DH-based parametrizations.

In this paper a mixed approach is introduced: a closed-form solution is proposed for the arms and the torso based on [1], and a geometric solution is proposed for the legs.
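As a toy illustration of what “closed form” means here, consider the textbook 2-link planar arm, whose IK can be written down directly. This sketch is not taken from the paper (the 7-DOF arm and 5-DOF torso solutions there are far more involved); it only shows the flavor of a closed-form solution versus an iterative one.

```python
# Classic closed-form IK for a 2-link planar arm (elbow-down branch).
import math

def ik_2link(x, y, l1, l2):
    """Return (shoulder, elbow) angles reaching (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle directly.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)
    # Shoulder angle: direction to target minus the wrist offset angle.
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2
```

Unlike an iterative solver, this returns the exact joint angles in constant time, which is why closed-form solutions are attractive when they exist.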

[1] Hyungju Andy Park, Muhammad Ahmad Ali, and CS George Lee. Closed-form inverse kinematic position solution for humanoid robots. International Journal of Humanoid Robotics, 9(03):1250022, 2012.

If you find some typos or errors, do not hesitate to post a comment so that I can correct them!

Here is the first page:


Phew, you did it! The documentation is very impressive, with demos and drawings, a huge geometrical project… SHARED.
I would be interested in the compute time of your algorithms on a RPi 3 (both the direct and the inverse one)

I did it with Poppy and stored the results here for the direct kinematics

Hi Thomas, sorry for my late response and thank you for your message :slight_smile: Actually I tried to give as many details as possible in order to help the reader grasp the ideas. Besides, it was a fun experience to use LaTeX again.
Actually it is not running (yet) on an embedded computer, so I cannot provide any numbers regarding CPU usage. But as soon as I can, I will.
On your side, your model is pure geometry for the direct kinematics and a gradient descent for the inverse one, am I right?

Yes, LaTeX is a great tool for sharing mathematical material, a bit tricky to install the first time.
Take your time to implement it on a computer. I advise you to use a profiler; it is very fun to optimize mathematical computation. With my algorithm, I was completely lost when I saw the CPU time taken by cross-product computations, which can be optimized by matrix computation (OpenBLAS).
Concerning my algorithm, yes, I use 3D rotation matrices to transform the different vectors of Poppy. I did not use the DH parametrization with its 4×4 homogeneous matrices. (I will work with DH later if I use ROS in the future.)
Then I use cross products to compute the geometrical gradient with respect to each motor.
The inverse kinematics is done by gradient descent with one iteration (1 direct kinematics pass + 1 gradient) per time step.
As you did, the big tip is to split the robot into its different limbs. You split off the torso and the arms. I worked on the torso and arms/head together so that the torso could move to help the hand reach a point. But with your method, you can split the problem into two computations.
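The one-iteration-per-step scheme described above can be sketched on a toy 2-joint planar arm. The link lengths, the finite-difference gradient and the step size are my own assumptions for this sketch; the real implementation uses analytic cross-product gradients over the whole robot.

```python
# Sketch of gradient-descent IK (one FK pass + one gradient step per
# time step) on a toy 2-link planar arm. Parameters are illustrative.
import math

L1, L2 = 1.0, 1.0

def fk(q):
    """Forward (direct) kinematics of a 2-link planar arm."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return x, y

def ik_step(q, target, alpha=0.1, eps=1e-6):
    """One gradient-descent iteration on the squared end-effector error."""
    x, y = fk(q)
    err = (x - target[0]) ** 2 + (y - target[1]) ** 2
    grad = []
    for i in range(len(q)):            # finite-difference gradient
        qp = list(q)
        qp[i] += eps
        xp, yp = fk(qp)
        errp = (xp - target[0]) ** 2 + (yp - target[1]) ** 2
        grad.append((errp - err) / eps)
    return [qi - alpha * gi for qi, gi in zip(q, grad)]

# Iterating the single step drives the hand toward the target.
q = [0.3, 0.3]
for _ in range(300):
    q = ik_step(q, (1.0, 1.0))
```

Running one such iteration per control step means the arm converges toward the target over successive time steps instead of solving the IK completely at once.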

I made a video showing a robot trying to push a box which is too heavy, and thus trying to mimic the contortions of the body needed to manage it.
Although it is impossible to compete with what is done in the animation world, I still have a robot with many DOFs, especially in the torso and arms, so there are many possibilities to explore.
For a first test, I implemented 3 sequences:

  • In the first one, the robot flexes its legs and keeps the upper body almost straight;
  • In the second one, the robot bends over the box;
  • And in the third one, the robot is tired and puts one hand on its head.

I am not really happy with the first sequence but the second and third sequences are quite convincing. It is really a question of rhythm and I still need to improve the software here. Also I need to add feedback on the grippers so that the robot “knows” it is in contact with the box.

It is a good exercise to highlight the IK and path-planning capabilities, but for sure there is still a lot of work to be done to make a funnier and more compelling video clip.

In the next video, I wanted to play with Prisma©, a really nice application which allows the creation of artistic effects on videos and/or photos. So I created a short sequence (less than 15 s) of the robot pushing the box and ran Prisma© on it! Quite fun to watch


Very funny and interesting research !!

Maybe you can look at mime as well.

In mime, when you push something, you show it with the rest of the body (face included). For the push and the pull, the most important part is the legs. You shifted the two feet; maybe you can move the pelvis forward while the upper body stays straight. The pelvis is very important for the illusion.
For the front foot, maybe you can put it on the toe (it may be unstable, but you have the box)

Hello Thot

Thank you for your support :slight_smile:

Thanks for the link to that video, it is interesting and I will search for more on Mime as well. Actually I got the inspiration from animators …
I tried to shift the legs but the foot is too heavy, so the bot lost its balance, and so far I have not found a strategy to compensate for the weight of the foot (except putting more weight on the upper body).
But that’s true, I can go up on the toe; it should work (and I should, because I have an articulated toe).
The pelvis is very important, I totally agree, and this is why sequence 2 is quite nice; it will be better if I can keep contact with the box.
The dynamics of the movement, and thus the synchronization between limbs, play an important role - this is what is missing in sequence 1 (at least from what I noticed).
So many things to explore to create a funnier clip :slight_smile: