Tuesday, October 27, 2015

An album from Maker Faire Rome 2015

Thanks to all the visitors who joined Geduino at Maker Faire Rome 2015!

The Geduino Team at Maker Faire Rome

With Giovanni Burresi of the UDOO Team (which produces the SBC used in Geduino)

Kids having fun playing with Geduino

With Gaël Langevin and his 3D printed humanoid robot

Geduino in the arena


For everyone who missed the "Introduction to ROS and its Navigation Stack" talk at Maker Faire Rome 2015, the recorded video and slides (in Italian only) are now online!

You can download the slides as a PDF here (and the original Keynote file here).

The complete talk is available as a playlist on YouTube.

I hope you find it helpful!

Monday, October 26, 2015

New PowerBoard and GeduinoShield


The PowerBoard and GeduinoShield PCB REV2 have finally arrived from Fritzing. You can download the original files from:


To open them you need the Fritzing software, which can be downloaded for free from this link. The PCBs shown in the picture were built by FritzingLab.

Thursday, September 10, 2015

Join Geduino at MakerFaire Rome 2015


Geduino Foundation will be at MakerFaire Rome 2015. You will have the chance to see a Geduino demonstration and meet the Geduino developers.

You can also attend the "Introduction to ROS and its Navigation Stack" talk. We will start from the ROS basics and work up to its Navigation Stack, exploring everything you always wanted to know but never found on the net.

Monday, May 18, 2015

A roadmap to Social Autonomous Robot

Social robot
From Wikipedia, the free encyclopedia

A social robot is an autonomous robot that interacts and communicates with humans or other autonomous physical agents by following social behaviors and rules attached to its role.


The Navigation Stack is the foundation of a mobile robot, but a robot that is only able to move is not useful by itself. What is missing is a reason for the robot to move. That reason must come from Geduino's social attitude: its role in society must define not only how it moves but its overall behaviour!

The following roadmap for Geduino development was written with this goal in mind: to make Geduino an Autonomous Social Robot!




Navigation

Navigation is the foundation of an autonomous robot: the ability to move autonomously. This step includes:

  • build a map of an unknown environment;
  • localization;
  • plan a path to reach the goal and execute it;
  • avoid fixed and moving obstacles.

In this video you can see a demonstration of the Geduino Navigation Stack.

Perception

Perception is the ability to sense the real world through sensor measurements. Of course, perception is already used in Navigation but, in order to interact with humans, it must be extended:

  • using computer vision, to detect and recognise objects and humans;
  • using speech-to-text algorithms, to recognise voice.

This step is currently in progress: so far, some tests with OpenCV and the Geduino camera have been performed.

Interaction

Navigation is the fundamental interaction between Geduino and the external world. Geduino's interaction capabilities must be extended based on the social role we will assign it. Hardware extensions must be evaluated in this phase (arms, audio output, etc.).


Navigation Stack is ready!

I'm finally proud to announce that the Geduino Navigation Stack is working properly. Watch this video to discover what Geduino can do.



The video shows the capabilities of the Geduino Navigation Stack:


  • build a map of an unknown environment;
  • localization;
  • plan a path to reach the goal and execute it;
  • avoid fixed and moving obstacles.


Wednesday, April 1, 2015

GMapping and RPLidar

The previous post ended with a question: why does Hector SLAM perform better than GMapping when, in theory, it should be the opposite? The right question is: why does GMapping work badly with the RPLidar? Since I'm not the only developer asking this question, I hope sharing my results will be helpful to everyone.

There are several reasons why the RPLidar and GMapping do not work well together:
  • the LaserScan message expected by GMapping;
  • the RPLidar reference frame;
  • the LaserScan message created by the RPLidar node.

The good news is that all these issues can easily be fixed. In this post I will show you how I managed to make the RPLidar and GMapping work together.

The LaserScan expected by GMapping


The first issue concerns the LaserScan message expected by GMapping. Unfortunately this is not documented anywhere, and I found it only by checking out the source code: in the latest commits (not yet released) a new check was added to make sure the LaserScan message's min and max angles are opposite:

maxAngle = - minAngle

This condition is not satisfied by the RPLidar node but, since the released version of GMapping does not include this check, it is not easy to notice.

The RPLidar reference frame


The second issue comes from the RPLidar frame and the RPLidar rotation direction. It assumes positive angles in the CW direction: to reflect this orientation, the Z-axis must point towards the RPLidar's bottom side (this is not a bug but, since it is not pointed out in the documentation, it is quite common to make a mistake when broadcasting the laser scan frame).

Furthermore, in order to satisfy the GMapping angle condition shown in the previous section, it is better to use a frame rotated by 180° with respect to the one shown in the RPLidar documentation: using this frame, the resulting min and max angles will be (approximately) equal to -3.14 and 3.14, matching the GMapping condition.

The RPLidar frame from RoboPeak documentation

The RPLidar frame that must be used in ROS


The LaserScan message created by RPLidar node


The RPLidar node requires some modifications in order to apply the corrections about min and max angle shown above (they must take into account the frame rotated by 180°):

scan_msg.angle_min =  M_PI - angle_min;
scan_msg.angle_max =  M_PI - angle_max;

Applying this fix results in a negative published angle_increment, consistent with the fact that the RPLidar rotates CW.

GMapping configuration


GMapping's default settings are designed for high-speed, long-range lidars. The RPLidar has a really good value/price ratio, but it does not meet those requirements, so the GMapping parameters must be tuned in order to get the best results.

These are the GMapping parameters that gave me the best results:

maxUrange: 5.5
maxRange: 5.5   # it was 6.0, but after further tuning 5.5 is suggested
minimumScore: 50
linearUpdate: 0.2
angularUpdate: 0.25
temporalUpdate: 5.0
delta: 0.025
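For reference, these parameters could be applied through a launch file like the following sketch (the package and node names assume the standard slam_gmapping node; adapt them to your setup):

```xml
<!-- Sketch: passing the tuned parameters to slam_gmapping -->
<launch>
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
    <param name="maxUrange" value="5.5"/>
    <param name="maxRange" value="5.5"/>
    <param name="minimumScore" value="50"/>
    <param name="linearUpdate" value="0.2"/>
    <param name="angularUpdate" value="0.25"/>
    <param name="temporalUpdate" value="5.0"/>
    <param name="delta" value="0.025"/>
  </node>
</launch>
```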

Conclusion


In conclusion, the RPLidar can work with GMapping with good performance once these issues are fixed:

The GMapping built map before fixes

The GMapping built map after the fixes. Unfortunately it was not possible to use the same bag file for this test, since the LaserScan message changed. However, the test was conducted in the same environment under the same conditions.
The RPLidar node with the fixes is available in the GitHub repo.

Navigation stack test: GMapping vs Hector SLAM

After the first failed test of the Geduino navigation stack using GMapping as the SLAM algorithm, I was curious to run tests with Hector SLAM which, as shown by RoboPeak in its YouTube video, works really well.

I recorded real sensor data from Geduino (using rosbag) and played it back running GMapping and Hector SLAM, in order to compare their behaviour on the same data. These are the results:

The map generated by GMapping

The map generated by Hector SLAM

I was really surprised by this result: Hector SLAM performs much better than GMapping!

These two algorithms differ in the information sources they use for localization and mapping: GMapping uses odometry and laser scans; Hector SLAM, instead, uses laser scans only. Theoretically, GMapping should perform better than Hector SLAM, especially in environments where the pose estimated from the laser scan is ambiguous (large spaces or long hallways without features): in those scenarios GMapping can rely on odometry for robot localization. On the other hand, Hector SLAM does not require odometry (so it is a forced choice if the robot does not provide it); another big advantage is that Hector SLAM can work with the laser mounted non-planar to the ground (GMapping requires a planar mount).

You can find more information in this benchmark of SLAM algorithms in ROS.

Since Hector SLAM works fine, we can assume the laser scan data is OK, so I focused on testing the odometry: a good guide for this purpose can be found here.

This is the result of the test:

The RViz output of odometry test

The robot started from the position shown in the picture, travelled to the other side of the room and came back. As shown in the picture, the laser scan impressions overlap quite well: this means that the odometry data is consistent.

In conclusion, theory does not agree with the real world: GMapping should perform better than Hector SLAM, but the tests demonstrate the opposite. Since Geduino is designed to use GMapping, I will spend some time studying this case and finding a solution.

Thursday, March 26, 2015

First navigation stack test and improvements

After the Geduino navigation stack was ready, I started testing it. Generally speaking it works, but some problems occur with the localisation performed by GMapping, which provides the SLAM algorithm (see this video). They seem to be related to odometry performance and the GMapping settings.

To solve this problem I'm working on the following tasks:
  • improve the Geduino odometry update frequency;
  • improve the Geduino odometry precision;
  • use ROS bag simulation to test and find the best GMapping configuration.
In this post I want to show what I've done about the first task and how I managed to double the update frequency.

Geduino moves thanks to its two EMG30 motors driven by an MD25 controller. The controller is connected to the SAMx8, which sets the motor speeds (according to the subscribed cmd_vel topic) and publishes the odometry transform and topic (based on the EMG30 encoder values).

Previously, all calculations were done by the SAMx8 and published to ROS. This was the node chart and frames tree:



The maximum odometry update frequency was 10 Hz.

Since the IMx6 has higher performance than the SAMx8, the idea was to move all calculations to a ROS node running on the IMx6 and leave only the hardware handling to the SAMx8. After the modification, the controller only publishes raw encoder values and subscribes to speed commands. All the calculations needed to provide odometry and execute cmd_vel are done in the odometry ROS node.

With this new design I was able to reach an update frequency of 20 Hz.



Furthermore, other topics are now published:
  • motion9 (mpu9150_msgs/StampedMotion9): contains the raw 9-axis motion data from the MPU9150. The goal is to integrate this information (using a Kalman filter) in order to improve the odometry precision;
  • md25/status (md25_msgs/StampedStatus): provides information about the MD25 board status (voltage and current), for diagnostic purposes;
  • power (geduino_msgs/StampedPower): provides information about the Geduino power source (voltage and power source type), for diagnostic purposes (especially when powered by a LiPo battery, it is important to monitor the voltage).
Next steps:
  • test the odometry precision and apply corrections (UMBmark test);
  • improve the odometry precision using the Motion9 data (Kalman filter);
  • test different GMapping configurations.

All the modified source code is available in the Geduino Git repositories.


Sunday, February 22, 2015

Geduino has now a logo!

After a long time, Geduino finally has a logo! Thanks to everyone who responded to the survey for their contribution.

The new blog look was an occasion to add new content:

  • on the right side of the page you will find links to the GitHub repositories of the Geduino code and of other open source projects used in developing Geduino. All source code is distributed under the GPL v3 license;
  • a new page contains a step-by-step guide to creating the Geduino SD card (from the kernel and modules to the ROS packages).
Coming soon on the blog:

  • I'm working on a ROS library for Java that is still in alpha. I think it can be useful to other developers and I'll write more posts about it;
  • I'll add the DWG files of the Geduino hardware, including the Eagle projects of the power board and the shield.