Real-time control of a Poppy right arm with a force sensor and usability assessment of several control modes


In this post I present some of the results from my work and experiments with a Poppy Humanoid right arm. This work explored various ways to drive a robotic arm with physiological signals as input, e.g. EMG or force measurements, with the aim of providing ideas and conclusions relevant to the design of robotic arm prostheses.

I performed this work during my six-month engineering school internship with the Flowers Team and INCIA.

Robotic device

I first built a robotic arm directly based on the Poppy 6dof Right Arm developed by Joel Ortiz, but with fewer motors operating the wrist. The “gripper” version comprises two XL-320 motors operating the pronation-supination degree of freedom and the opening-closing of the clamps. One clamp is fixed and the other is moved by a dedicated motor.

Then, I modified it to create a “no clamp” version: it removes all the XL-320 motors and the clamps, making it much like a standard Poppy Humanoid right arm, and uses a stick or a pen to indicate the endpoint of the robot.

Software operating this custom Poppy creature is available here.

Inverse kinematics for endpoint position control

In order to operate the robot in the operational space rather than the joint space, I used the IKPy library, developed by Pierre Manceron. This library provides tools to build a kinematic chain and perform forward and inverse kinematic calculations on it. As this library can “load” a mechanical model from a URDF file, I created custom files based on the Poppy Torso URDF. These custom files contain no display or dynamics data, and are limited to the geometrical description of the robot’s joints and links.

The “gripper” version has the endpoint placed between the two clamps, in the extension of the pronation-supination axis. The “no clamp” version has the endpoint placed at the end of a stick or a pen, mounted on the arm with a locking ring.

By interfacing this library with the physical robot, I can give a goal to the robot as a 3-D spatial position, and make it move so that the endpoint reaches this goal. I also included basic safety measures, preventing the robot from moving at all if the goal is out of its reachable space.
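A rough sketch of such a safety check is shown below; the shoulder position and reach radius are placeholder values, not the actual Poppy dimensions. The idea is simply to reject any goal that falls outside a sphere approximating the reachable space, so the robot does not move at all:

```python
import math

# Placeholder geometry: an assumed shoulder location and total arm length,
# NOT the real Poppy measurements.
SHOULDER_POS = (0.0, -0.15, 0.35)   # metres
MAX_REACH = 0.40                    # metres

def is_reachable(goal):
    """Return True if the goal lies within the arm's approximate reach sphere."""
    return math.dist(goal, SHOULDER_POS) <= MAX_REACH

def safe_goto(goal, move):
    """Only command the robot if the goal is reachable; otherwise do nothing."""
    if is_reachable(goal):
        move(goal)
        return True
    return False
```

A sphere is of course a crude approximation of the true reachable workspace, but it is enough to stop the inverse kinematics solver from chasing an impossible goal.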

As I didn’t need an extremely accurate resolution of the inverse kinematics problem, I fixed two calculation parameters to reduce processing time. Since the main calculation is an iterative optimization, I set a maximum of 12 iterations and a tolerance of 0.25 mm for computing the best solution. This tolerance is purely nominal, as the physical robot could never achieve such accuracy.

I also modified the way the library uses regularization parameters in order to integrate a basic biomimetic aspect into the generation of robot poses. In my case, the regularization term in the cost function doesn’t correspond to the angular distance between the solution and zero angles, but between the solution and “ideal” joint angles. I set these ideal angles to zero for all joints except the elbow, whose ideal angle is set to 80° from complete extension. This new regularization term also integrates a weighting between the different joints, making it possible to indicate that, for instance, a movement of the elbow is “cheaper” than a movement of the shoulder.
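The modified regularization term can be sketched as follows; the function and variable names are illustrative, not IKPy’s internals, and the weights are example values:

```python
import math

# Ideal pose: zero for the three shoulder joints, 80 degrees for the elbow.
IDEAL_ANGLES = [0.0, 0.0, 0.0, math.radians(80)]
# Example weighting: shoulder movements cost more than elbow movements.
WEIGHTS = [3.0, 3.0, 3.0, 1.0]

def regularization(angles):
    """Weighted squared distance between a candidate pose and the ideal pose."""
    return sum(w * (a - ideal) ** 2
               for w, a, ideal in zip(WEIGHTS, angles, IDEAL_ANGLES))

def cost(position_error, angles, reg_gain=0.01):
    """Total IK cost: endpoint position error plus the biomimetic penalty."""
    return position_error + reg_gain * regularization(angles)
```

With these example weights, a 0.1 rad deviation of a shoulder joint is penalized three times more than the same deviation of the elbow, which is what biases the optimizer toward elbow movements.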

This basic functionality is far from sufficient to generate human-like movements and poses, but it effectively prevents the robot from reaching dangerous or awkward positions. In particular, favoring elbow movements over shoulder movements avoids extreme poses where the elbow is raised high or brought too far to the left.

Interfacing IKPy and the physical robot provides a simple way to make the robot move by giving a goal position that should be reached by the endpoint. A video demonstration of the robot’s basic features is available here.

Control modes: three ways to drive the robot’s endpoint

Starting from this endpoint control feature, I developed several control modes: specific ways to drive the robot’s endpoint over time as a function of a command vector. A given command vector represents the direction and the intensity of an intended movement.

In the context of this project, I used a force transducer to generate such command vectors: the user applies efforts on a handle mounted on a sensor, which measures linear efforts along the three spatial dimensions. Each control mode provides a method to convert this measured vector into a movement of the robot, so that the robot’s endpoint moves according to the intended movement represented by the command vector.

Proportional position control

Starting from a fixed origin point, at every instant, the displacement of the robot’s endpoint from this origin is proportional to the force currently applied to the handle. This works as a 3D vectorial relationship:

goal = origin + gain * force_vector

In a nutshell, the harder you push on the handle, the further the endpoint goes from the origin. If no effort is applied on the handle, the endpoint goes back to the origin, pretty much like a spring-loaded system.

Proportional velocity control

Rather than computing the displacement, this mode computes the instantaneous endpoint velocity. At every instant, the robot moves so that the endpoint velocity is proportional to the force currently applied to the handle:

goal = previous_goal + gain * force_vector * elapsed_time

This mode introduces a time dependency in the control and gets rid of the origin. With enough time and even if the range of forces that can be produced is limited, this mode theoretically allows the user to reach any point in space.

The harder you push on the handle, the faster the endpoint moves. If no effort is applied on the handle, the instantaneous velocity is zero, meaning the robot will stay in its current posture.

Quadratic velocity control

This mode uses mostly the same concept as the previous one, but introduces a quadratic relationship between the force vector and the velocity, in terms of magnitude.

goal = previous_goal + gain * norm(force_vector) * force_vector * elapsed_time

This non-linearity provides a much wider velocity range for the same force range. It allows for both accurate, slow movements at small efforts, and fast, almost ballistic movements at high efforts. In comparison, the proportional velocity control must be used with a high gain to achieve high speeds, which has its drawbacks when it comes to fine, accurate control. Indeed, a high gain amplifies sensor noise, eventually producing unwanted movements. Conversely, a low gain prevents the robot from moving really fast, even when the user applies the strongest effort they can to the handle.
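The three update rules above can be sketched as plain functions of the measured force vector; the function names and gain values are illustrative:

```python
import math

def position_update(origin, force, gain):
    """Proportional position control: offset from the origin scales with force."""
    return tuple(o + gain * f for o, f in zip(origin, force))

def velocity_update(previous_goal, force, gain, dt):
    """Proportional velocity control: endpoint speed scales with force."""
    return tuple(g + gain * f * dt for g, f in zip(previous_goal, force))

def quadratic_velocity_update(previous_goal, force, gain, dt):
    """Quadratic velocity control: speed scales with the squared force magnitude."""
    magnitude = math.sqrt(sum(f * f for f in force))
    return tuple(g + gain * magnitude * f * dt
                 for g, f in zip(previous_goal, force))
```

Note how the zero-force behaviors differ: with no applied force, the position rule snaps the goal back to the origin, while both velocity rules leave the goal exactly where it was.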

Pronation-supination joint and clamp control

The two motors of the wrist are not operated as part of the endpoint position control. The pronation-supination motor can be controlled with the same control principles, translated to scalar angular positions instead of 3-D vectors. The command input is a torque measurement, i.e. a rotational effort applied to the handle. I found it convenient to use the torque about the horizontal axis in line with the forearm, directly matching the robot’s movement with an actual human pronation-supination movement.

The mobile clamp is operated in a two-state fashion, associating angular values with the open and closed states, and switching from one to the other. The command to switch states is given by pressing a key or a button, independently of the handle.
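A minimal sketch of this two-state behavior follows; the angle values are placeholders, not the actual XL-320 calibration used on the robot:

```python
# Placeholder angles for the clamp motor (degrees), NOT the real calibration.
OPEN_ANGLE, CLOSED_ANGLE = 60.0, 0.0

class Clamp:
    """Two-state clamp: each toggle command switches open <-> closed."""

    def __init__(self):
        self.angle = OPEN_ANGLE  # start in the open state

    def toggle(self):
        """Switch to the other state and return the new target angle."""
        self.angle = CLOSED_ANGLE if self.angle == OPEN_ANGLE else OPEN_ANGLE
        return self.angle
```

In practice the returned angle would be sent to the clamp motor as its goal position each time the key or button is pressed.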

Usability assessment

In order to evaluate these control modes and identify their benefits, drawbacks and limits, I carried out a usability assessment based on a basic task. It is inspired by standard center-out target reaching tasks that are described in the literature on cursor control by BCI or EMG (see this, this and this for instance), but takes place in a fully 3-dimensional context. The robot’s endpoint acts as the cursor and the targets are spread on a sphere rather than a circle. However, unlike what is usually done with cursors and computer screens, all the targets are visible and tangible during a task.

Before the beginning of a trial, the label of the target to reach is announced to the subject. Then, the subject takes control of the robotic arm, and has 25 s to move it so that the endpoint enters the target’s zone and dwells within it for at least 600 ms. Various metrics taken from the literature are used to provide quantitative measures of the performance that can be achieved with each control mode, regarding accuracy, stability and speed.

The setup comprises five different targets, represented by small foam disks that indicate the position and size of the actual spherical target zones. During a task trial, forward kinematics are used to compute the current position of the robot’s endpoint, and by calculating the distance between this position and the target, the system can detect whether the endpoint is within the target’s zone.

A video excerpt of an experimental sequence is available here. The software framework of the experiment is available here.

Secondly, a questionnaire is given to the subjects in order to evaluate the usability of the system, especially regarding ease of use, tiredness, apparent complexity and learnability. The questionnaire is based on the System Usability Scale, a questionnaire consisting of ten items to rate on a Likert scale (from 1: “strongly disagree” to 5: “completely agree”), and yielding a global score out of 100. This scale was developed as a tool to quickly assess the usability of digital interfaces on computer screens. Six other items were added, in order to address aspects like frustration and the user’s perception of movement.

As for the experimental methodology, for each control mode, each subject performed a sequence of 20 tasks followed by a questionnaire. Over the whole subject population, the control modes were presented in the six possible orders, in order to compensate for learning effects. Indeed, as the experiment doesn’t include any learning or demonstration phase, the subjects clearly benefit from their trials with the first mode when they perform with the second mode. Overall, shuffling the orders compensates for these effects, and allows for the comparison of the different modes without a strong order bias.


According to both performance metrics and questionnaire results, position control proved much less effective than either of the velocity controls, whereas only a few significant differences were found between the two velocity controls. Indeed, movements performed with position control were globally less accurate and took more time to reach the target. The success rates achieved by the subjects with this mode were also significantly lower.

The questionnaire also highlighted notable differences between the modes in terms of intuitiveness, ease of use, comfort and general preference. Unsurprisingly, the position control mode was rated globally more tiresome and jerky, as well as less usable according to the global SUS score. Quadratic velocity control was the preferred mode, even though, according to the metrics, no advantage was found in its favor over proportional velocity control. A measure of the explored space might reveal quantitative differences between the behaviors of these two modes.