
Tuesday, 17 June 2014

MORSE CODE - PART 3

There are quite a lot of robot simulators: Gazebo and Stage for the ROS enthusiast, V-REP and Webots in the more commercial domain, and OpenRAVE and Microsoft Robotics Developer Studio as further options to explore. So when one comes across a new robot simulator, the obvious question is: what is special about it? I have been asked this question a few times about MORSE, usually accompanied by a comparison with Gazebo.

My answer is:
  • MORSE is not limited to a few robots (about 10 for Gazebo). I have used MORSE for (i) 176 small robots (ATRVs) and (ii) 9 PR2 robots, in two separate simulations. The limit on the number of robots is not a limitation of the simulator, but of the CPU and the graphics card.
Pic.1. Multiple PR2 simulations
  • MORSE has bindings for several middlewares, among them ROS, YARP and MOOS, which makes it more versatile [1].
Pic.2. ROS binding, rviz visualisation and mapping of MORSE simulation
  • There is a facility for human-robot interaction. As in motion gaming, direct input from a motion sensor (Microsoft Kinect, ASUS Xtion or Nintendo Wiimote) drives a human avatar in the simulator [2].
Pic.3. Human avatar
  • Since everything happens through Python scripts, one need not care about compilation and executable files. MORSE is 'pythonic' [3] and can arguably be called an extension of Blender.
  • It is based on Blender rather than Ogre, so it does not eat up a huge amount of resources. The textures and graphics are also sleeker than Gazebo's.
  • New robot models can be developed in Blender, and the development process is simple [4].
Pic.4. Blender model of the robots
  • Blender has a huge online community, so help and support are easy to find.
I would not be giving a very honest opinion if I did not also talk about the shortcomings of MORSE:
  • Binding with ROS is a laborious process that often discourages the novice ROS user, particularly because of Python 3, and because only certain versions of Blender work well with the MORSE + ROS + Python 3 set-up.
  • Physics simulation in Blender/MORSE is inferior to Gazebo's. Getting force, torque and similar values is not yet possible.
The latest release is MORSE 1.2, and more details can be found at http://www.openrobots.org/wiki/morse. A good part of this post came out of discussions on the MORSE mailing list, morse-users at laas dot fr.

REFERENCES

Sunday, 6 October 2013

OLFACTORY SIMULATION WITH MANY ROBOTS

This blog post describes a player/stage simulation of multi-robot mapping, with goals or reference points acting as odor sources. Think of a situation where we need to explore a large environment and generate a map of it that can later be used for navigation and localization. In this scenario, instead of a single robot exploring the whole environment, it is more effective (in terms of processor usage and time) to let multiple robots explore different parts of it and merge the maps at the end, either topologically or based on overlapping regions of the occupancy grids.

This idea is based on the mobile-robot olfactory experiments performed by the ISR Embedded Systems Lab, details of which can be found here: http://ftp.isr.ist.utl.pt/pub/roswiki/simulator_plumesim.html.

The idea of using the PlumeSim library for this purpose came from that interesting project: the PlumeSim framework simulates odor transport in the environment. The premise of the project is that olfaction is a key sense for the survival of many biological species.

This article first walks through installing the PlumeSim plugin driver, followed by a multi-robot mapping simulation in player/stage. The driver can introduce simulated chemical plumes into a simulated or real robot from a broad range of sources, up to CFD (computational fluid dynamics) software. Based on data collected from real-world experiments, it can also periodically play back a recorded plume.

Below are the steps involved in setting up the driver for this purpose:

1. The driver library can be downloaded from GitHub:
    https://github.com/DevasenaInupakutika/PlumeSim-1.0

2. Clone it to your local machine from a terminal:
    
    git clone https://github.com/DevasenaInupakutika/PlumeSim-1.0

3. Enter the directory: 

     cd PlumeSim-1.0

4. Build the library with the command below, which creates the required driver
    (libPlumeSim.so):

    make
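
Once built, a Player plugin driver is loaded through the .cfg file. Roughly, the stanza looks like the sketch below; the driver name and the interface it provides are my guesses here, so check the PlumeSim documentation for the actual values:

    # Hypothetical example of loading the freshly built plugin;
    # the driver name and the provided interface are assumptions.
    driver
    (
      name "plumesim"
      plugin "libPlumeSim.so"
      provides ["opaque:0"]
    )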

This plug-in driver brings plume simulation into the world of mobile robotics through player/stage. It is a great tool for developing odor-search algorithms, providing a smooth path from the simulated robot to the real environment.

Sample output; the blue dots show the gas plume.
In this article we take up a use case of this driver: multi-robot map merging. For this purpose, please follow the steps below:

Note: Make sure all the paths for the player/stage packages and the library point to the same location.

1. The player/stage package can be downloaded from here using the command
    below:


2. Enter the directory with cd Player_Plumsim and run make clean if the package
    is already built.

3. Build the package as below:
   
   g++ -o main `pkg-config --cflags playerc++` main.cc `pkg-config --libs playerc++`

4. Run the program as below:

    player map_square.cfg

    Open another terminal (in a separate tab):
    
    ./main -p 6665 -r 1 

    (robot 1 will start moving, where 1 is the robot id and 6665 is the port number)

The configuration file (map_square.cfg) contains the description of the robots and their corresponding ports. The important thing for making multiple robots move is to assign either a separate port number to each robot, or separate robot ids on the same port number.
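
For reference, a multi-robot player/stage configuration typically looks something like the sketch below (the world file and model names are placeholders, not the actual contents of map_square.cfg):

    # Hypothetical sketch of a configuration like map_square.cfg.
    # Load Stage and the world it simulates.
    driver
    (
      name "stage"
      plugin "stageplugin"
      provides ["simulation:0"]
      worldfile "map_square.world"
    )

    # One position2d interface per robot, all on the default port 6665.
    driver
    (
      name "stage"
      provides ["6665:position2d:0"]
      model "robot1"
    )

    driver
    (
      name "stage"
      provides ["6665:position2d:1"]
      model "robot2"
    )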

5. In order to run another robot, open another terminal tab and type below   
    command:

    ./main -p 6665 -r 2

   and so on. This set-up supports a maximum of 12 robots.
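
For concreteness, here is a minimal sketch of what a client like main.cc might look like. The wander behaviour and the mapping of robot id to interface index are my assumptions; the actual main.cc also logs each robot's position and its distance from the odor source to robot<id>.txt. It compiles with the same pkg-config command shown in step 3.

    // Minimal Player client sketch (not the actual main.cc).
    #include <libplayerc++/playerc++.h>
    #include <cstdlib>
    #include <string>

    int main(int argc, char *argv[])
    {
      int port = 6665;   // default Player port
      int id = 1;        // default robot id

      // Parse -p <port> and -r <robot id>.
      for (int i = 1; i + 1 < argc; ++i) {
        if (std::string(argv[i]) == "-p") port = std::atoi(argv[i + 1]);
        if (std::string(argv[i]) == "-r") id = std::atoi(argv[i + 1]);
      }

      PlayerCc::PlayerClient client("localhost", port);
      // Assumption: robot id N is served as position2d index N-1.
      PlayerCc::Position2dProxy position(&client, id - 1);

      for (;;) {
        client.Read();                // fetch the latest data from the server
        position.SetSpeed(0.3, 0.2);  // wander: 0.3 m/s forward, 0.2 rad/s turn
      }
      return 0;
    }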

After executing all the above steps, you get results similar to the ones in the screenshots below, and the corresponding robot text files (robot<id>.txt) are updated according to which robot is traversing the environment and its distance from the odor source. The emanating gas is shown in red, while the robots, seen from the top, are circles of various colours.





The updated files contain the simulation environment's topological map data (in terms of coordinates).

This work has also been implemented on ROS as occupancy-grid map merging, in which robots generate local maps by exploring separate areas of the environment and merge them, based on overlapping regions, into a final global map; this will be discussed in a separate article.

This can be implemented on real robots as well, using the Player server of the player/stage suite.

Saturday, 31 August 2013

E-GLOVE Part-2 -- The Robotic Arm !

E-GLOVE 'n' THE ROBOTIC ARM

The last time I blogged here, I wrote about my project on the e-glove and how my team and I got around to using the glove as a wireless air-mouse with an accelerometer and an RF transceiver system. But, not stopping there, we decided to move on and try to use it for its more serious applications, like bomb defusal, remote object manipulation and maybe, if we get access to more sensitive equipment, a robotic surgery arm.


The first step, obviously, was to acquire the arm. Being poverty-stricken engineering students, we had no funds to actually purchase an arm ourselves, so we approached the BioMedical Instrumentation department of our college, who graciously provided us with not one, but TWO unused robotic arms. Since one of them was pre-assembled, we decided to use that one for our current prototype.

The arm, manufactured by Arexx, actually has an inbuilt ATmega processor and an FTDI USB-to-serial converter to program and control the servos on the arm independently. However, upon further perusal of the documentation, kindly written in German for better understanding, I found out that the coding would have to be done in assembly or hex (English version, FINALLY). This promptly made me drop that idea and decide to proceed by individually connecting each motor to the Arduino Leonardo and using the Servo library that comes built into the IDE. Well, that went well, and I could now manually control the arm using the WASD keys on the keyboard. I even ordered a Sony CP-ELS power bank for the arm, which is basically a 2200 mAh rechargeable battery that will finally allow for some portability.
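
For illustration, that Servo-library stage boiled down to something like the sketch below. The pin number, the step size and reading the keys over the serial port are my assumptions, not our exact code:

    // Hypothetical WASD control of one joint using the built-in Servo library.
    #include <Servo.h>

    Servo baseServo;
    int angle = 90;              // start at the servo's mid position

    void setup() {
      Serial.begin(9600);        // keystrokes arrive over the serial port
      baseServo.attach(9);       // signal pin of the base servo (assumed)
      baseServo.write(angle);
    }

    void loop() {
      if (Serial.available() > 0) {
        char key = Serial.read();
        if (key == 'a') angle = max(0, angle - 5);    // step left
        if (key == 'd') angle = min(180, angle + 5);  // step right
        baseServo.write(angle);
      }
    }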

The input for each of the 5 motors (actually there are 6, but one is broken) came from an old USB cable connected to the GND and +5V pins of each motor through a simple splitter on a PCB, which gave me 5 parallel outputs from one battery input.

Block Diagram of arm with motors labelled accordingly
Then, once I got that part working, we came to the main task at hand: making the arm compatible with the existing glove so that we could move it around using hand movements.

Then came the real problem. It turns out that the VirtualWire library we used (mentioned here) relies on an interrupt-based system to get the RF communication working. The Servo library ALSO uses the same timer interrupts on the Arduino, so, no matter what, the two are incompatible. Which basically meant that, for the moment, our project was doomed.

Library after alternative library was tried, but to no avail. The ServoTimer2 library promised to be compatible with VirtualWire, but even that, as it turned out, did not work. We even tried editing the library itself (a huge undertaking on our part, since we didn't know crap about the internal hardware structure of the Arduino), and we got the thing to compile, but the arm remained as motionless as ever.

Then, at 5:00 AM one morning, with an 8:00 AM class on the horizon, I decided to bunk all the libraries and try moving the servos using simple, old-school PWM control. And voila, that did the trick. Taking inspiration from this tutorial, I proceeded to fine-tune the pulse widths to match the rotation of each motor, which gave me complete 0-to-180-degree turns with relative ease. Putting it all into a function made it way easier to choose the motor to be turned.

On a tangent, here is a quick explanation of PWM.
Basically, each servo has an internal chip/driver which reads the signal in terms of HIGH and LOW, and how long the signal remains HIGH or LOW.
               _      __     ___    _____   _      _____   __     _   
               | |    |  |   |   |  |     | | |    |     | |  |   | | 
PWM Signal     | |    |  |   |   |  |     | | |    |     | |  |   | |  
             __| |____|  |___|   |__|     |_| |____|     |_|  |___| |_____
So, as long as the signal is HIGH, the motor keeps turning. When it is at zero, it stops. When the signal falls below the mean value, i.e. LOW, the motor turns in the opposite direction. The length of each "plateau" in the picture above is simply the duration for which the signal is kept HIGH or LOW.

Back on track
So, that was that. Once we got the PWM working, it was just a matter of removing the Mouse() functions from the code and replacing them with the PWM servo functions to get each motor working as intended.
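
A rough sketch of the kind of function we ended up with is below. The pins and pulse widths here are placeholders; the real values were fine-tuned per motor. A typical hobby servo expects a HIGH pulse of roughly 0.5 to 2.4 ms every 20 ms, with about 1.5 ms holding the middle position.

    // Bit-banged servo control without any library (sketch, assumed pins).
    const int servoPins[5] = {3, 5, 6, 9, 10};

    void setup() {
      for (int i = 0; i < 5; i++) pinMode(servoPins[i], OUTPUT);
    }

    // Send one PWM frame to the chosen motor: HIGH for pulseWidth
    // microseconds (~550 us = 0 degrees, ~2400 us = 180 degrees on a
    // typical servo), then LOW for the rest of the ~20 ms period.
    void moveServo(int motor, int pulseWidth) {
      digitalWrite(servoPins[motor], HIGH);
      delayMicroseconds(pulseWidth);
      digitalWrite(servoPins[motor], LOW);
      delay(20);
    }

    void loop() {
      moveServo(2, 1500);  // hold motor 2 at its mid position
    }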

The basic movement mapping is as follows:

Normal Left-Right Movement : <base_rotate>
Normal Up-Down :  <bottom>
Left Click Up-Down : <middle>
Left Click Left-Right : <base_rotate>
Left+Right Click Up-Down : <claw_control>
Left+Right Click Left-Right : <claw_rotate>


Obviously, this being a prototype, we are still working out some kinks in the accuracy and control of the arm, but so far, so good. The next step in this project will most probably be mounting it onto a rover of sorts, with an attached remote camera, for total wireless control of the arm. Also in the pipeline is interfacing the arm with an EEG machine, to try to use brain waves for rudimentary control, which would be useful for handicapped people.


