Friday, March 30, 2012

Using the SEN0038 wheel encoders, the Arduino, and interrupts

Now that I have the ultrasonic sensor working and the MEGA board on the robot, it is time to tackle the wheel encoders. The ROB0025 comes with the SEN0038 wheel encoder. The first step is to get the encoders mounted on the robot. This was a little challenging given how small the screws were. Plus, getting them spaced so the slotted encoder wheel was centered in the sensor's optical gap took a little trial and error with the right set of washers.

First up, the parts list. The parts from previous projects were needed, plus the new encoders:

  1. MegaShield:
  2. Arduino Mega:
  3. Jumper wires
  4. ROB0025: 
  5. AdaFruit motor drive shield:
  6. Wheel encoders:
After I wired everything up, I started looking around for some documentation on how to use them. First, here is a picture of the encoders connected.

Then some googling to learn more about the model and product. This turned up several threads claiming that these encoders do not work. Luckily, I tracked down a thread with answers and code showing how to make them work (item #3 below).
To get the encoders to work properly, interrupts need to be used, so I spent some time reading up on the best way to do this. Interrupts ensure that all transitions of the wheel are caught; with polling, we would probably miss half the transitions. That said, from what I have read, these encoders will probably not be all that accurate anyway. Here are a few of the noteworthy links:

Getting the interrupts to work properly turned out to be much more challenging than I expected. Many hours were spent trying to figure out what exactly was wrong. I originally hooked up the encoders to use interrupts 2 & 3 (pins 21 and 20), but no matter what I tried, I could not get back reasonable results. Usually one wheel would return a reasonable count (robot sitting with its wheels off the table, spinning for a second), but the other would return nonsensical values in the 1000s.

More googling turned up this thread, and a pointer to the pololu encoders and library.
Using these posts, I gave up using the Arduino interrupt abstraction and used the AVR library directly. I put the encoders on pins 50 and 51. This worked much better. I am not sure exactly why the Arduino interrupt library did not work with the Mega and the encoders. I checked with the multimeter and the encoders were not fluctuating in voltage when the wheel was not spinning. I did notice that the AdaFruit motor shield was causing subtle power fluctuations that could explain the interrupts misfiring. I did an experiment where I disconnected the motor shield, put the encoders on pins 20 and 21, and spun the wheels by hand. This worked perfectly. So the only thing I can conclude is that it is some combination of trying to use pins 20 and 21 with the configuration I had.

At any rate, hooking the encoders up to pins 50 and 51, and then coding directly to the AVR library for the interrupts seems to have solved my problems. Again the code is located at:
There is an encoders class that abstracts the messy interrupt code (with the global variables) and a simple unit test that spins the left wheel, stops it, then spins the right wheel and stops it. Wheel encoder counts are taken while only one wheel is spinning; this lets me check that the correct encoder responds when its wheel moves. Here is a plot showing the two wheels.
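The transition-counting idea at the core of the interrupt handler can be sketched in plain C++. This mirrors the idea rather than the exact code in the repo: on each pin-change interrupt, sample the port, XOR it against the previous sample, and bump a counter for every encoder line that toggled. The bit assignments below are illustrative.

```cpp
#include <cassert>
#include <cstdint>

// Hedged sketch: a pin-change interrupt fires when ANY monitored pin
// toggles, so the handler must work out which encoder line changed by
// comparing the newly sampled port byte against the previous sample.
volatile long leftCount = 0;
volatile long rightCount = 0;
static uint8_t lastSample = 0;

// Bits within the sampled port byte (hypothetical mapping for pins 50/51).
const uint8_t LEFT_BIT  = 1 << 3;
const uint8_t RIGHT_BIT = 1 << 2;

// Body of the pin-change ISR, written as a plain function so it can run
// (and be tested) on a host machine.
void onPinChange(uint8_t sample)
{
    uint8_t changed = sample ^ lastSample;  // bits that toggled since last IRQ
    if (changed & LEFT_BIT)  ++leftCount;
    if (changed & RIGHT_BIT) ++rightCount;
    lastSample = sample;
}
```

Counting both rising and falling edges this way is why interrupts matter: a polling loop busy doing other work would simply miss edges at speed.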

Next, here is a histogram of the two. Note that in this case the right wheel spun at a speed of 200 and the left wheel at a speed of 150. The plot shows that more counts were recorded when the motor was spinning faster (a good sanity check).

I will need to convert these counts to meters based on the wheel diameter. However, it was a lot of work to figure out the interrupt problem, so that will need to wait until next time.
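When the time comes, the count-to-distance conversion itself is simple. A hedged sketch — the wheel diameter and ticks-per-revolution values below are placeholder assumptions, not measured from the ROB0025:

```cpp
#include <cassert>
#include <cmath>

// Hedged sketch: convert encoder ticks to distance traveled.
// Both constants are assumptions -- check your own hardware.
const double WHEEL_DIAMETER_M = 0.065;  // assumed 65 mm wheels
const double TICKS_PER_REV    = 20.0;   // assumed encoder resolution

double ticksToMeters(long ticks)
{
    const double circumference = M_PI * WHEEL_DIAMETER_M;
    return ticks / TICKS_PER_REV * circumference;
}
```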

Saturday, March 24, 2012

Using the HY-SRF05 Ultrasonic Module with the Arduino

Next on the list of sensors to explore is the ultrasonic range sensor. I am hoping to use this for two things. First, a simple range detector for obstacle avoidance (if something is ahead of you stop and do something else). Second, use the information to improve localization and map building. The latter usage may be a little problematic as it depends on the quality and timeliness of the measurements.

I started to look around at (i.e., google) the various options. To keep things simple, I settled on the HY-SRF05, mostly due to its cost, its availability on Amazon Prime (which keeps shipping costs down), and the amount of example code out there for making measurements. The parts list for this experiment is very simple.

  1. HY-SRF05:
  2. Arduino UNO:
  3. Some wires:
  4. Breadboard:
As I was looking, I found lots of code and examples about how to wire up the device.

Here is an image of the wiring

Then I set up the sensor and pointed it at a wall. I measured the distance to the wall as 0.381 meters. I collected about 1 minute of returns from the sensor and calculated the mean (0.3888) and standard deviation (3.3548e-04). So the readings are within a centimeter of the tape-measure value.
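For reference, the math behind these readings is simple: the SRF05 holds its echo pin high for the round-trip flight time of the ping, so distance is time times the speed of sound, divided by two. A sketch, assuming ~343 m/s for room-temperature air:

```cpp
#include <cassert>
#include <cmath>

// Hedged sketch of the echo-time-to-distance conversion for the HY-SRF05.
// The echo pulse width is the ROUND-TRIP flight time, hence the divide by 2.
double echoMicrosToMeters(double micros)
{
    const double speedOfSound = 343.0;          // m/s, ~20 C air (assumed)
    return micros * 1e-6 * speedOfSound / 2.0;
}
```

At 0.381 m, the expected pulse width is about 2.2 milliseconds, which gives a sense of how fast the sensor can be polled.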

Here is a histogram of the values. 

This was by far the easiest of the sensors to find information about, connect, and get out sensible readings. The hardest part seems to be how to come up with a good way to mount the sensor. As usual, all the code is located on github at:

Upgrading from the UNO3 to the Arduino Mega

I decided that it was time to upgrade from the UNO to the MEGA. I started mapping out how many pins I would need for the accelerometer, the magnetometer, the gyroscope, the wheel encoders, the xbee, the motor drive shield, and the range sensors. The pin count added up to much more than was available on the UNO. The MEGA is about twice the size (physically) and has a lot more potential. I also purchased a prototype shield to help stack some of these devices together. Here are the parts I am using:


First, here is a picture of the two boards side by side.

The shield took some assembly. First here is a look at all the parts (below). Then I soldered all the headers on; this took some effort as there was a lot of soldering to do.

After that, it was on to soldering the resistor, capacitor, and LED. Make sure the LED is hooked up with the long lead (the anode) connected to the positive terminal. Then came the reset button and a few of the header pins.

Next, I stacked the prototype shield, the xbee card, and then the motor shield together.

Finally, I connected it all to the dfrobot platform. Now I have enough pins up front to start looking at the encoders and range sensor.

Next, I reworked the CMakeLists.txt files a little bit. I put a top level define for the board and the port:

## Generate code for the specified board

## Set the device 
set(PROJECT_DEVICE /dev/cu.usbmodem621)

And then had the lower level build files use those variables. That way I can easily change the board to build against at the top level of the tree.
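A lower-level build file can then reference those top-level variables rather than repeating them per target. This is an illustrative fragment only — the `PROJECT_BOARD` variable, the `Test` target name, and the `generate_arduino_firmware` call (from the arduino-cmake toolchain) are assumptions, not necessarily what the repo uses:

```cmake
# Hypothetical lower-level CMakeLists.txt: pick up the board and port
# defined at the top of the tree instead of hard-coding them here.
set(Test_BOARD ${PROJECT_BOARD})
set(Test_PORT  ${PROJECT_DEVICE})
generate_arduino_firmware(Test)
```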

As usual, the code can be found at:

Using the DF Robot 4WD (SKU ROB0025) with the Arduino (Part II): Qt user interface.

In the last post, I put together the robot base, wired the motors, and wrote a simple test program to make all the wheels move. The next step is to make a simple user interface to control the robot to get it to drive around. The point really is to enable manual control to drive the sensors around and record measurements. Then given the measurements, develop a better controller to improve localization estimates of the robot over time. After that is done, I will add in other sensors that add information about the scene (i.e., video) that will enable rudimentary SLAM capability.

In a previous post, I had written a UI in Qt. Qt is a UI toolkit that comes with a lot of examples, is open source, and is fairly straightforward (as widget toolkits go) to use. I realize many people might think of developing user interfaces as a daunting task, but there are several basic tutorials out there that can be uncovered with a little googling.

I have taken a pretty simple approach: use OpenCV to grab camera images from a web cam and display them in a Qt OpenGL widget, then have a second widget that enables robot motion control. I thought for a while about the easiest widgets to use for control. Sliders and push buttons came quickly to mind, but after some thought I decided to use mouse movement in a 2D cartesian coordinate system. This enables easy mapping of left and right motor speed to a single point in the 2D grid. Moving the mouse around changes the ratio and spin speed of the wheels on both sides of the robot. I figure this will let me learn the robot dynamics and eventually build a fairly good controller.
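The mapping from a 2D point to two wheel speeds can be done several ways. A common choice — and a guess at what the UI does, not a transcription of it — is the "arcade drive" mix, where the y axis sets forward speed, the x axis sets turn rate, and each wheel gets the sum or difference, clamped to the motor range:

```cpp
#include <algorithm>
#include <cassert>

// Hedged sketch of a 2D-point-to-wheel-speed mapping ("arcade drive" mix).
// Inputs x, y are in [-1, 1]; outputs are clamped to the motor range.
struct WheelSpeeds { int left; int right; };

WheelSpeeds gridToSpeeds(double x, double y)
{
    auto clamp = [](double v) {
        return static_cast<int>(std::max(-255.0, std::min(255.0, v)));
    };
    WheelSpeeds s;
    s.left  = clamp((y + x) * 255.0);  // turning right speeds up the left wheel
    s.right = clamp((y - x) * 255.0);
    return s;
}
```

Pushing the mouse straight up drives both wheels forward equally; pushing it sideways spins the wheels in opposite directions for a turn in place.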

Here is a screen shot. On the left is the webcam image; on the right is the 2D wheel speed grid. To send commands, hold down the control key and move the mouse in the 2D grid. Releasing the key stops the motors, which acts as a virtual "kill switch" if the robot gets too out of control.

I also downloaded some code to interact with the Arduino over a serial connection. Most of the libraries I found were overly complicated for what I needed, but eventually I came across the four functions I was looking for. I wrapped this code in a simple threaded Qt class that emits a signal when data is received from the Arduino. Data can be written through a slot (with a mutex around it to prevent collisions).

On the device side, I downloaded some code to parse incoming messages. There is a heavyweight library called Shine, but it seems like overkill for a first implementation. There is also a library called SimpleMessageSystem; like the name says, it seems like exactly what I was looking for .... simple. After some more digging, it turns out a newer version of the library exists, called CmdMessenger. It has a similar interface: you send a command id (in this case I am using 4 as the motor control command) followed by comma-separated values, terminated by a semicolon.

I set things up so command 4 controls the motors, and the next two numbers are the speeds of the left and right motors respectively. This means sending "4,100,100;" will cause both wheels to move forward and "4,-100,-100;" will cause them to move backwards. Playing around with the speeds, it seems that nothing much below 100 causes the wheels to turn. Looking on the web, a few places have suggested using an extra capacitor to smooth out the motor power. I will look into that once everything is up and moving.

So the code to receive the commands and send the speeds to the motors is located in:
Once that is uploaded, then the UI program can be started. The code for that is located at:
The serial port to connect to is hard-wired in the code. That is something I will need to fix. When this runs, holding down control and moving the mouse around causes the wheels to move. Here is a movie of the wheels moving:

Next up is hooking in the xbee and the wheel encoders. I also need to wire up a battery connection so that it can drive un-tethered. Luckily my Mega board arrived today so I will switch over to using that to gain access to some much needed ports.

Monday, March 19, 2012

Using the DF Robot 4WD (SKU ROB0025) with the Arduino (Part I)

The robot kit arrived today and I started to assemble things. There were no instructions in the package, so it took a little trial and error to figure out which nuts and bolts held things together. The main point of confusion was that there were far more pieces than were needed to put the thing together; I think the extra hardware is there so things can be added to the chassis later (as needed). It took about 2 hours to get the chassis fully assembled. Part of the trouble was that the screws to mount the motors were very tiny and hard to place onto the chassis to tighten. It took about 10 minutes to get the wheels with the encoders into the proper setup.

The parts list needed for the (start) of this experiment are:
  1. Arduino Uno R3
  2. Jumper wires
After starting to put things together, I wanted to make sure I was doing everything correctly. There are several pages I found that were helpful in putting things together:

One of the tricky parts was how to set up the power switch. When I first wired things up, I did not connect the switch; I simply wired the battery pack to the motor shield and the motors started up right away. This caused problems when I was testing the motors, as there was no easy way to turn things off. Since there were no instructions, I had to wing it on connecting the switch. It is not too complicated, as there are only three connections on the switch, and a little testing with the multimeter showed which should be used for off and on. Here is a picture of the switch connection. I think I will also investigate getting a rechargeable battery at some point, as the batteries are somewhat hard to get to in this configuration.

Next, it was on to the code. I wanted to do something simple at first, just to test that the four wheels were all wired properly and moving in the right direction. Again, the code is posted at the github site:

There is a class called Manual that will eventually be used to remote control the robot to move around the room. My current plan is to finish up getting the sensors setup to record measurements. Then manually drive the robot in circles to collect sensor measurement data and make some plots of the various sensor readings (encoders, accelerometer, magnetometer, gyroscope). I only have a few more sensors to hook up and then write code to dump all the data to a serial port. I also need to write the manual controller and play around with steering. However, I have realized that I do not have enough pins on the Arduino to hook everything up. I ordered a mega board so that I can be sure to collect enough measurements. Next up, will be writing code for the ultrasonic range sensor. That should allow some crude obstacle avoidance and maybe some initial estimates for SLAM.

Here is a quick movie of the wheels spinning with the simple test program.

Saturday, March 17, 2012

Using the XBee with the Arduino on Mac OS X

While I have only been tinkering with the Arduino for a few weeks, I have quickly realized that I need to be able to interact with the board over a wireless connection. A tethered robot is not all that exciting (unless you are talking about Big Dog or the new Cheetah Boston Dynamics has developed). I spent some time trying to figure out the best wireless connection to use. There seem to be many options: bluetooth, XBee, ZigBee, etc. After reading the Sparkfun XBee Buying Guide, it seems like the XBee is the best (easiest?) way to go. So I ordered an XBee Wireless Kit, as that seemed to have all the pieces I would need. Now the hard part is configuring the card on the mac (I suppose I could fire up a windows VM, but that just seems wrong). First off, the parts list used for this experiment:

  1. XBee wireless kit:
  2. Arduino UNO 3:
The first step is to put the kit together. Here are some pictures before and after assembly:

Next I started looking around for some tutorials on how to connect and send / receive data. There were several sites that were useful:
The first step was to install the FTDI drivers. I downloaded and installed them, allowing the host computer to talk to the connected explorer USB board. Then I needed a way to configure the XBee for peer-to-peer communication (as I only have two chips right now). I had been using screen on the mac to see output from my previous tests, but that is a little cumbersome, so I decided to try zterm in the hopes it would be a little better.

First, I plugged the XBee on the USB explorer card into my machine and started up zterm. Under Settings->Modem Preferences, I selected the usb serial device name. To find it, I just picked the one I did not recognize; I also checked the date and time on the device file in /dev to make sure it coincided with when I plugged the card in. Then, under Settings->Connection, make sure the data rate is set to 9600. The two dialog boxes are shown below.
The wake-up sequence is "+++" with no enter. When I typed that, I got back "OK". I then typed "ATID" and received back 3332. All looks good and matches what was expected based on arduino-xbee-wireless. Then, using the guidelines at XBee_program_Arduino_wireless, I issued this command for the first xbee card:
which changes the network id to 4321 (you should pick your own four-digit id), and this command:
which increases the baud rate to 19,200 and sets the address to 0. I repeated the process for the second card. This time, the command is:

which increases the baud rate and assigns the second card the address 1. I then changed the zterm connection data rate to 19,200 and was able to re-issue the "+++" sequence and the "ATID" command to see the new id. The shield I was using has a toggle to switch back and forth between UART and DLINE. This lets code be downloaded over USB to the Arduino, and then the serial line used to send data back and forth. I created a very simple sketch:

void setup()
{
  delay(1000);
  Serial.begin(19200);
  Serial.flush();
}

void loop()
{
  Serial.print("\rHello>");
  while (Serial.available())
  {
    char inByte =;
    Serial.println(inByte);
  }
}
I uploaded this, unplugged the Arduino, and fired up zterm. When I type something in zterm, it is echoed back with a hello prompt in front. So wireless data can be exchanged once the sketch is uploaded to the Arduino.

Friday, March 16, 2012

Using the AdaFruit drive motor shield with the Arduino: wheels on the bus go....

Lots of parts arrived this week. The first was the AdaFruit drive motor shield. I picked this shield for its ability to drive four DC motors; most other shields I looked at only supported two. The second was the 4WD dfrobot kit. I chose this kit because it has plenty of room for sensors on top, a power supply, and wheel encoders. I was specifically interested in the wheel encoders so I can capture information about how far the robot has moved (or thinks it has moved). That information can feed into the filter estimate of robot localization over time. The parts list for this experiment was:

  1. Arduino Uno R3
  2. Jumper wires
  3. ROB0025 4WD kit (although I am just using the motors for this one)
The first step was to put together the motor drive shield. This was fairly straightforward, as AdaFruit provides a comprehensive installation guide. It took about an hour to solder everything together. I then hooked up the shield and was able to run a sample test that drove a motor forward and backward repeatedly. One step I forgot at first was to set the power jumper to use the 9 volt external power supply from the Arduino board. This will not be needed when I switch to the battery pack on the robot. As is pointed out several times throughout the AdaFruit guides, make sure the power LED is lit if you want the motors to run.

Here are a few pictures of the initial set of parts and the shield once it was all assembled:

The code for making the motor run forward and back is located in my github folder at:

Next up, I will start on wiring up the other motors and assembling the dfrobot kit. I also decided that I needed to wirelessly control the robot so I ordered an xbee board so I could send it commands remotely. I may work on that next and then switch over to assembly (plus there is the whole issue of figuring out the encoders).

Monday, March 12, 2012

Using the MicroMag3 with Arduino

After working with the accelerometer, I decided to experiment next with a magnetometer. A magnetometer measures magnetic field strength along an axis (see the wikipedia page on magnetometers). High-end systems can measure bends in laser light sent along a fiber optic cable. Most MEMS devices use electrical charge sent along a coil to measure both the strength and direction of a magnetic field. Multiple coils are used to measure changes in the field along specific axes. A very succinct description is provided in the Sensor workshop notes:

A magnetoinductive circuit consists of a coil around a ferromagnetic core that is incorporated into a circuit that forms a relaxation oscillator. Charge gradually builds up in the circuit, is rapidly discharged, and then starts to gradually build up again, and so on. The frequency of this oscillation varies with the strength of the magnetic field perpendicular to the coil. In the micromag3, the oscillation that is produced is a square wave, which can be easily read as a digital signal. The Micromag3 calculates magnetic field strength by comparing two measurements from the same circuit. First one end of the circuit is grounded, and the other oscillates. Then the other end is grounded, and the first oscillates. Subtracting one result from the other provides temperature stabilization and the direction of the magnetic field. 
After some searching I decided to use the MicroMag3. This was based on cost and amount of information (i.e., code) out there on the web. Here are several sites I found very useful in coding up a library to read the sensor data and understand the measurements coming back:

The last link was particularly useful, although I am not sure about the SS pin used, as the SPI library (see below) seems to use a different default pin. Here are the parts that I used in this project (along with some wires):
To start out, I had to build the prototype board. My experience from the accelerometer tests (previous blog) was that I needed a single mounting point for both the sensors and the Arduino, so that I can move everything around easily and still make valid measurements. This involved a little soldering to mount the connections. SparkFun had an easy-to-follow set of assembly instructions. I only set up the headers for now (see figures below), as I can add the lights and reset button later. Once that was done, I wired up the MicroMag3. This took a little time to figure out, as the slave select pin has a default setting if you use the SPI library, and several different code bases set it to something else. Once I decided on using SPI, everything got a lot easier.

The pin layout is also shown below. I decided to use the SPI library to take care of the bit shifting for serial reading off the chip. Out of the box, the SPI library expects the following pin configuration:

const static uint8_t SS   = 10;
const static uint8_t MOSI = 11;
const static uint8_t MISO = 12;
const static uint8_t SCK  = 13;
I copied the library into my workspace as I am sure that later on, this pin configuration will need to be changed. But for now, the easiest thing to do was wire it up using the defaults.

The code is located at:
I also put the data sheet there for easy reference. I updated the UI I was using before to include a simple compass so that I could check the heading coming back from the magnetometer. A quick check against my iPhone and the values look similar. I have a few questions about the default orientation of the magnetometer (it seems 90 degrees from what I expect). The latency also seems high, but there are settings in the code to read faster (I am not sure what effect that has on accuracy). Also, the measurements are a little noisy (though not as bad as the accelerometer), which should clean up when I add filtering and state estimation. Finally, I will probably need a calibration procedure to define which default return levels should be forced to zero.
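For the compass display, the usual way to turn the x/y field components into a heading is atan2. This sketch assumes the sensor is held level and any fixed offsets have been removed; a constant rotation (like the 90-degree surprise above) would simply add to the result:

```cpp
#include <cassert>
#include <cmath>

// Hedged sketch: derive a compass heading (degrees) from the horizontal
// field components, assuming a level sensor and no hard-iron offsets.
double headingDegrees(double x, double y)
{
    double deg = std::atan2(y, x) * 180.0 / M_PI;
    if (deg < 0) deg += 360.0;  // normalize to [0, 360)
    return deg;
}
```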

Here is a movie of the result. It is somewhat hard to see given the compression; I should probably figure out a better way to post these. That can probably wait until the movies get a little more interesting (like with robots).

Up next is starting on the drive motors. I went with a 4wd robot from dfrobot. This meant finding a motor driver that can support 4 DC motors, which led me to the AdaFruit board. More to follow....

Sunday, March 4, 2012

Using the MMA7361 Accelerometer with the Arduino UNO 3: (Part II)

I left off with a good start on reading and interpreting the accelerometer values. This was not working as well as I had hoped, so I did a little more searching the web and came across a few sources:

These three links provided lots of insightful information. I used that to update my code. I still am not positive everything is working right. I wrote a little visualization routine that will hook up the serial output of the accelerometer and visualize the data coming back plus the orientation. An example is shown below.

This shows that the values are close to one when the dominant axis is pointed down (or -1 if oriented in the other direction). However, the values are not quite zero when sitting still, so a better calibration routine is clearly needed to correct for these slight errors. In these plots, red is the x axis, green is the y axis, and blue is the z axis. The noise in the output can also be seen in the values along the bottom. At least I am convinced that the numbers are in the right ballpark; until I have formulated a dynamics model, the scaling and calibration are not as high a priority.

I will next switch and repeat the process for a magnetometer. Hopefully this will be fairly easy as most of the work in dealing with accelerometer was setting up infrastructure. I also ordered a small robot kit. When that arrives I will start playing around with wheel encoders and simple navigation using dead reckoning.

Saturday, March 3, 2012

Using the MMA7361 Accelerometer with the Arduino UNO 3 (Part I)

Now that I have coded up a basic LED circuit, I thought I would move on to working with sensors that will measure robot location. Remember the goal of the first experiment is to build a small robot that can use dead reckoning to move through an environment. This is the precursor to developing a SLAM system as you need an initial estimate of object location to put the sensor measurements into a consistent coordinate reference frame. Subsequent processing is used to align the measurements in front of the vehicle over time to correct for errors in inertial sensor measurements.

The obvious first sensor to use for localization is an accelerometer. Accelerometers measure weight per unit of (test) mass along several different axes. This can be converted into a heading direction by integrating the observed values over time (acceleration is the derivative of velocity). Since accelerometers do not measure position directly (but rather its second derivative), they tend to produce noisy estimates which must be filtered or smoothed. There is a lot of discussion out there on how best to do this on the Arduino (and whether a simple FIR filter or a Kalman filter is needed). I have decided to sidestep this issue for now. My first step is to get some measurements and look at just how noisy the data coming back from the sensor is. I will do this for several other sensors (gyroscope and magnetometer) and figure out how to combine measurements once all the sensors are characterized.

There are lots of good resources out there for accelerometers and the Arduino. Here are a few that I have found useful:
Some of these cover topics that will come up later, once the basic accelerometer measurements are made. I decided to make a separate class that is responsible for interacting with the accelerometer and providing the basic measurements. This gives a re-usable component for later on, when the magnetometer and gyroscope are introduced. First up, we need the parts list. Not too many things are needed for this project:
  1. Protoype shield for the Arduino
  2. Arduino Uno R3
  3. Jumper wires, colored LED, push button, and several resistors
  4. MMA7361 Accelerometer
I picked the MMA7361 based on its low cost (14 dollars) and its availability on amazon prime with free shipping. It has the nice advantage that it takes 5v directly and does not need any extra resistors or capacitors to be hooked up. When the board arrived, it did not fit my breadboard very well: it needs a breadboard that is 12 holes across, which does not seem to exist. So I am just going to wire it with jumpers for now and figure out how to mount it better later. Clearly this will not be an issue once I move to rigidly mounting it on a PCB, but for now it is just kind of a pain. Here is how I wired it up:

The datasheet says the sleep pin must be held high for the part to work. Next, I downloaded what code was available from the virtuabotix web site and fired up Eclipse. The code for this will again be going on my github site. For simplicity, you can grab the code by cloning the entire repository using the command:
      git clone
The code for this example is in sensors/accelerometer. I will dive into the code in more detail but first, here is a picture of the actual wiring:

Given that there are going to be lots of pins in use, I figured I would write a small class to make the pin parameters more explicit (as opposed to just being a bunch of ints that do not convey the pin mapping very well). I therefore created a class in common/Pin.h that has a constructor to set the pin number and a cast operator to get it back again:
inline Pin(int iId);
operator int () const;
Now the accelerometer class uses typedefs to give a better indication of the parameter ordering:
typedef common::Pin X;
typedef common::Pin Y;
typedef common::Pin Z;
typedef common::Pin S;

void setup(const X& x, const Y& y, const Z& z, const S& s);
void update();
float x() const;
float y() const;
float z() const;
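A minimal sketch of what such a Pin wrapper might look like (the real class lives in common/Pin.h; this is an illustration, not a copy):

```cpp
#include <cassert>

// Hedged sketch of a Pin wrapper: the constructor captures the pin number
// and the int conversion hands it back, so it can be passed straight to
// functions like pinMode() while giving call sites a named type.
namespace common {
class Pin {
public:
    explicit Pin(int iId) : mId(iId) {}
    operator int() const { return mId; }
private:
    int mId;
};
}
```

The typedefs do not stop the compiler from accepting swapped arguments, but they make the intended parameter order obvious at the call site.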
So you initialize the class in the setup() method and call update() in loop() to read and store the values. The big question is how to read the values and convert them into something meaningful. I started with a discussion I found on another blog, where the equations for mapping the analog-to-digital readings are given as:
Rx = (AdcRx * Vref / 1023 - VzeroG) / Sensitivity
Ry = (AdcRy * Vref / 1023 - VzeroG) / Sensitivity
Rz = (AdcRz * Vref / 1023 - VzeroG) / Sensitivity
For the moment, ignore what the variables mean, other than that the AdcR vector holds the analog-to-digital readings obtained with analogRead and the R vector is the direction the accelerometer is pointing. After this conversion, the vector R is converted to a unit vector. Converting to a unit vector makes multiplicative scale factors irrelevant (they are normalized away when all vector elements are multiplied by the same value). We can therefore rewrite this equation as:

Rx = (AdcRx - T) * S
Ry = (AdcRy - T) * S
Rz = (AdcRz - T) * S

S = Vref / 1023 / Sensitivity
T = VzeroG * 1023 / Vref
and since we are normalizing R, the S can be discarded (the sensitivity only affects the magnitude of the vector, which we are discarding). The offset T can be computed from the device's data sheet. When I plugged in the data sheet values, I did not get exactly the readings I expected; I am guessing this is due to error sources, slight differences in location on the earth (affecting gravity), and manufacturing variation. Therefore, I set the accelerometer on several axes, observed the readings, and used those to estimate the T values. This is a poor way of doing an initial calibration; I might write a proper calibration routine later once I have a moving robot (i.e., drive in circles to estimate the bias).
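Putting the equations together, the whole conversion reduces to subtracting the per-axis offset T and normalizing, at which point S drops out. A host-side sketch (the T values passed in would come from the empirical estimation described above):

```cpp
#include <cassert>
#include <cmath>

// Hedged sketch of the conversion above: subtract the per-axis zero-g
// offset T, then normalize. Normalizing makes the common scale factor S
// irrelevant, so it never needs to be computed.
struct Vec3 { double x, y, z; };

Vec3 adcToDirection(int adcX, int adcY, int adcZ,
                    double tX, double tY, double tZ)
{
    Vec3 r = { adcX - tX, adcY - tY, adcZ - tZ };
    double mag = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
    if (mag > 0) { r.x /= mag; r.y /= mag; r.z /= mag; }
    return r;
}
```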

I am now embarking on a way to plot these values and visualize the orientation so I can validate the measurements in a qualitative way (and maybe do some quantitative analysis as well).

Friday, March 2, 2012

Using CMAKE with Arduino

In the last posting, I talked about exploring development environments beyond the Arduino IDE. This led to Eclipse and cmake configuration files. In this post, I cover converting the previous LED experiment to the new build environment. First, grab a copy of the tree. Since we are using cmake, the best bet is to set things up with parallel source and build directories. Make sure you have git installed (prompt$ sudo port install git):

yoshi:swdev mstevens$ mkdir ardadv 
yoshi:swdev mstevens$ git clone source 
Cloning into source...
After that is done, make sure you have cmake installed on your system and in your path (prompt$ sudo port install cmake). To simplify building, I have created a top-level configure script that runs cmake and generates the Eclipse-specific build files. This script should be run from the source directory:

yoshi:ardadv mstevens$ cd source/ 
yoshi:source mstevens$ ls 
CMakeLists.txt License.txt ReadMe.txt Test actuators cmake 
yoshi:source mstevens$ source  
-- The C compiler identification is GNU
-- The CXX compiler identification is GNU
This should also trigger the build of the tree. I have also refactored the LED example; it is now located in the source/actuators/button/test/Test.cpp file. I also pulled the button-state code out into a separate library class that checks the button status. To work in Eclipse, first import the project: under the Eclipse menu File->Import, select the option to import an existing Eclipse project. This should load the project with access to the source code and the make targets so you can build the code (and download the firmware).

The library located in sensors/button contains a class called Button. This has two methods:

        void Button::setPin(int pin);
        Event Button::check();

setPin is called from setup (to set the pin of the button), and check is called from loop to see whether the button was pressed, released, or unchanged. The Arduino cmake file (located in sensors/button/test/CMakeLists.txt) lets you set the port used to upload the firmware. You can then type:

yoshi:test mstevens$ make Test-upload
[ 90%] Built target uno_CORE
[ 95%] Built target Button
[100%] Built target Test
avrdude: AVR device initialized and ready to accept instructions
Reading | ################################################## | 100% 0.00s
avrdude: Device signature = 0x1e950f
avrdude: reading input file "Test.hex"
avrdude: input file Test.hex auto detected as Intel Hex
avrdude: writing flash (3506 bytes):
Writing | ################################################## | 100% 0.64s
avrdude: 3506 bytes of flash written
avrdude: safemode: Fuses OK
avrdude done.  Thank you.
[100%] Built target Test-upload
The wiring diagram for this layout is now:

With the actual picture:

This is a much more familiar build environment. Next up, onto accelerometers!

Using Eclipse with Arduino

In the last adventure, I experimented with a simple LED project to turn several LEDs on and off. This was very instructive, and I learned several things about setting up projects and working with the Arduino. One thing that was immediately obvious was that the IDE is very limited in its capabilities. It seems appropriate for small projects, but it is unclear how well that scales to larger multi-library projects. I thus began a quest for a better development environment.

I began experimenting with the Arduino plugin for Eclipse. I use Eclipse at work and figured it would be easy to set up for development at home. A quick googling turned up exactly the plugin I was looking for. First, I installed the Eclipse CDT package. Then I clicked Help->Install New Software and added the link to install the plugin (as specified on its web page and shown below):

After this was done, I then needed to update several of the paths in the configuration. Under Eclipse->Preferences I set the path to avr-gcc which was installed as a part of the Arduino IDE. This took a little experimentation as the paths to use were not immediately obvious. Finally, I arrived at what is shown below and that seems to work:

I was then able to create a new Arduino sketch. This was close to what I was looking for, but still not quite right: it did not give me complete control over library and test layout. Another googling turned up a cmake module for the Arduino:

I followed the instructions in the readme for setting up a project using the macros provided and was able to build a library and test module with not too much effort. In the next installment, I will redo the LED experiment using the new build environment.