Accelerating iPhones

I’m pleased to be able to say my application to become a registered iPhone developer was finally approved, and hence I am now able to run native test applications on my own iPhone. Although the simulator provided with Xcode allows you to test many types of applications, one thing it won’t do is simulate feedback from the built-in accelerometers. Hence my grand ideas for developing a suite of applications based on accelerometer data had to be put on hold until now, given that I couldn’t determine whether the accelerometers would be capable of the accuracy and consistency I would need to make my apps work.

So the first thing I did was load up the sample Accelerometer Graph application and then modify it to display the acceleration (m/s^2), speed (m/s) and distance traveled (m) along each axis, in order to observe its behaviour and experiment with it.

The accelerometers return data in units of g, so each reading needs to be multiplied by 9.81 to convert it to m/s^2. The interval duration may be varied as desired; I experimented with both 1/40th and 1/100th of a second.

The speed is then incremented with each time interval by the acceleration x interval duration.

The distance traveled is incremented with each time interval by the speed x interval duration.
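The accumulation just described can be sketched in a few lines (a minimal sketch in Python rather than Objective-C; on the device the readings arrive via the UIAccelerometer delegate callback, and the names here are illustrative):

```python
G = 9.81  # m/s^2 per g

def integrate(readings_g, dt):
    """Naively integrate accelerometer readings (in g) along one axis.

    Returns (speed, distance) after processing all readings.
    """
    speed = 0.0     # m/s
    distance = 0.0  # m
    for a_g in readings_g:
        a = a_g * G             # convert g -> m/s^2
        speed += a * dt         # v += a * dt
        distance += speed * dt  # s += v * dt
    return speed, distance

# A one-second push at a constant 0.1 g, sampled at 1/40th of a second:
v, d = integrate([0.1] * 40, 1 / 40)
```

This is exactly the accumulation the Accelerometer Graph modification performs; the trouble described below is what happens to it once real readings are fed in.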

My initial observation was that it was all wildly inaccurate. When the iPhone is moved and then returned to its initial location, the acceleration should increase and then decrease, and correspondingly the speed and distance. The acceleration demonstrably returned to zero, but there was usually a residual speed shown after returning to rest, indicating that there were accumulating errors along the way.

An initial factor in causing this appears to be the granularity of the measurements of acceleration returned (ie. the resolution). The accelerometer readings returned appeared to come in quanta of about 0.18m/s^2 (ie. about 0.018g). This is a fairly large gap, and results in a large cumulative error very quickly.
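To get a feel for how large that quantum is, consider a back-of-the-envelope worst case where a stationary device reads one quantum (0.18 m/s^2) high on every sample (real noise alternates in sign, but even a persistent bias of a fraction of a quantum accumulates similarly):

```python
quantum = 0.18  # m/s^2, the observed resolution (about 0.018 g)
dt = 1 / 40     # sample interval in seconds

speed = 0.0
distance = 0.0
for _ in range(40 * 10):  # 10 seconds of a constant one-quantum bias
    speed += quantum * dt
    distance += speed * dt

# speed ends up at 1.8 m/s and distance at roughly 9 m --
# for a device that never moved at all.
```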

Further investigation led to the observation that errors became especially large when the iPhone was not kept perfectly level whilst being moved horizontally. Eventually I realised that this was because the high-pass filter acts only slowly to filter out gravity effects – even a very small degree of tilting led to large cumulative errors before the filter managed to adjust for the gravity bias.
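The filter in question is a simple low-pass/high-pass pair of the kind Apple’s sample code uses: a low-pass estimate tracks the slowly varying gravity component, and subtracting it from each sample leaves the “motion” part. A sketch (the smoothing factor here is illustrative):

```python
def make_highpass(alpha=0.1):
    """Return a filter that removes a slowly varying bias (gravity).

    alpha controls how quickly the bias estimate tracks the input:
    a small alpha means slow tracking, so a sudden tilt takes many
    samples to be filtered out -- exactly the lag described above.
    """
    state = {"lowpass": 0.0}

    def highpass(sample):
        state["lowpass"] = alpha * sample + (1 - alpha) * state["lowpass"]
        return sample - state["lowpass"]

    return highpass

hp = make_highpass(alpha=0.1)
# Feed a constant 1 g (a sudden tilt into gravity): the output decays
# toward zero, but only gradually -- the first outputs are still large.
outputs = [hp(1.0) for _ in range(50)]
```

During that decay the large residual outputs are being integrated as if they were real motion, which is where the cumulative error comes from.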

The crux of the issue is that the accelerometers actually measure force rather than acceleration per se. Hence they are affected by gravity, which is irrelevant to the measurement of the device’s motion, and this can necessarily only be filtered out over a period of time rather than instantly – which makes it impossible to derive accurate motion information.

So the force of gravity and any actual acceleration of the device are not individually distinguishable by the accelerometers. The device may detect, for example, a force on it 10% greater than gravity, but whether this means that it is accelerating at 0.1g vertically away from the earth, or alternatively is accelerating at 0.14g at 45° from vertical (which also results in a total vector magnitude of about 1.1g) along with a slight rotation of the device, simply cannot be distinguished.
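The 0.14g figure can be checked directly – both interpretations produce essentially the same measured magnitude:

```python
import math

# Interpretation 1: accelerating straight up at 0.1 g.
# Measured force = gravity (1 g) plus 0.1 g, all along the vertical.
m1 = 1.0 + 0.1

# Interpretation 2: accelerating at 0.14 g, 45 degrees from vertical.
a = 0.14
vertical = 1.0 + a * math.cos(math.radians(45))
horizontal = a * math.sin(math.radians(45))
m2 = math.hypot(vertical, horizontal)

# m1 and m2 both come out at about 1.1 g -- the accelerometer alone
# cannot tell these two motions apart.
```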

So what exactly are the accelerometers “good for”, so to speak? The most obvious usage is as an orienting mechanism – finding which direction is up (or down). This involves working on the assumption that the device is not experiencing any acceleration (or if it is, that it is small in comparison with gravity and of limited duration), so that the force it measures is indicative of the direction of gravity. And this is precisely how the iPhone “knows” when to rotate its browser, photos, keyboard etc.
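Under that at-rest assumption, tilt falls straight out of the measured gravity vector. A sketch (the axis conventions here are illustrative, not necessarily Apple’s):

```python
import math

def tilt_angles(x, y, z):
    """Estimate roll and pitch (in degrees) from accelerometer readings
    in g, assuming the only force on the device is gravity."""
    roll = math.degrees(math.atan2(x, -z))   # lean about the long axis
    pitch = math.degrees(math.atan2(y, -z))  # lean toward/away from user
    return roll, pitch

# Flat on a table, screen up: gravity pulls along -z, so no tilt.
roll, pitch = tilt_angles(0.0, 0.0, -1.0)
```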

Also, if one were to adopt the alternative assumption that the device had a fixed orientation (eg. attached to a stable moving platform), then the effect of gravity could be eliminated, and speed and distance traveled should then be derivable. However, it seems rather unlikely that any practical scenario would allow for sufficient stability to make this accurate enough to be of any use.

A third possibility here, although admittedly with somewhat limited applicability, is that it would be possible to reliably detect an absence of force ie. the state of free-fall. This might help someone make a great app for astronauts, but for us earth-bound individuals who don’t like the idea of dropping (or throwing upwards and then catching) our iPhones it is probably only going to be of use either in a gaming scenario or as a novelty.

Unable to resist the novelty myself, and in the interest of science of course, as a proof of concept I created an app which waits for and captures free-fall events. When one occurs, the screen goes red, the vibrator is triggered, and a wav file is played – in this case with a message pleading with the user not to drop my iPhone. Quite fun and amusing, if I might say so myself, but sadly a far cry from being able to measure distances moved with any accuracy.

17 thoughts on “Accelerating iPhones”

  1. I’m similarly trying to measure distance traveled with my iPhone’s accelerometer. It’s really not working out…

    How did you come up with the granularity figure?


  2. Hi Kou,

    The granularity figure was obtained simply from observing the displayed acceleration figures on the screen while the device was stationary. Typically it jumped between 0.00 and -0.18 and back, for example. You can reduce this issue by time-averaging a series of observations, though – which should be fine for most real-life purposes. But accuracy apart, measuring distance moved is still not feasible unless you can guarantee that the device is firmly fixed at a given orientation.


  3. Ah ok, I figured it was empirical.

    Even if you isolated the single dimension, you’re still limited by the resolution of the accelerometer itself, which is the killer for me.

You might be able to do some corrections for 1-d motion though. Since you know that gravity is always going to be 1g in total, you could construct a 3-space vector from the current accelerometer readings. Then, provided that you’re mostly keeping to 1-d motion, subtract out the other two dimensions from the 1g total, remove the remaining portion from your primary axis, and what’s left is the current acceleration.

    For example, if your accelerometer readings are like so:
    y: .25g
    z: .25g
    x: 2g

    then the user-applied acceleration would be 1.5g in the x direction. There’s probably some trig that I’m skipping, but it might work.
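Doing the trig the comment mentions skipping changes the number somewhat: since gravity’s total magnitude is 1g, its component along the primary axis follows by Pythagoras rather than straight subtraction. A sketch using the readings from the example (and assuming motion purely along x, with gravity’s x-component positive):

```python
import math

y, z, x = 0.25, 0.25, 2.0  # accelerometer readings in g

# Gravity is 1 g in total, so whatever part of it is not showing up
# on y and z must lie along x:
gravity_x = math.sqrt(max(0.0, 1.0 - y**2 - z**2))  # about 0.94 g

# What remains on x beyond gravity is the user-applied acceleration:
user_accel = x - gravity_x
# about 1.06 g, rather than the 1.5 g of the straight-line subtraction
```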

    I was considering the possibility of using the camera to support the accelerometer readings somehow, but I’d imagine that’d be a huge headache.


After a lot of experimentation I came to the conclusion that you cannot use the iPhone to accurately measure distance moved unless the acceleration is reasonably large (around 1 m/s^2 or more), as it is too difficult to distinguish between noise and real acceleration due to movement.

    A low-pass filter helps but removing noise will unfortunately also remove small acceleration due to movement.

    1. Hi Jose,
I’m not aware of anything in the Accelerometer interface provided by Apple that would allow the selection of the accuracy scale. Please let me know if you do happen to know that it’s possible, though.

Hi Chris – Measuring car acceleration doesn’t require the same level of sensitivity as moving the device by hand, for example, so maybe it can do a reasonable job there, and you can assume the car is moving in a straight line, which helps too. But I still don’t see any apps that use acceleration measurements for anything other than general motion detection or orientation detection. That is evidence that it’s not accurate enough for anything more refined.

  5. Hey Graham,

    Would you mind sharing or posting the code for your sample project above? I understand the physics and limitations of trying to get distance out of the iPhone, but would like to start from there and add some finesse to see if I can get an app working.

    Any help would be greatly appreciated!



Hi James – I think the easiest approach would be to start with the sample project code that Apple supply and start tweaking from there, which is exactly the approach I took myself. It’s the one called AccelerometerGraph. Just accumulate accel x timeinterval to get speed, and speed x timeinterval to get distance.

  6. Haha, I did exactly the same experiments as yourself in exactly the same order! I too had my dreams dashed by the coarsely-grained accelerometer readings.

    Perhaps with OS 3.0, someone will create an accessory for even finer-grain accelerometer readings – one can hope!

The coarsely-grained accelerometer readings are due to the sensitivity of the accelerometer sensor (the LIS302DL). The sensitivity of this sensor is 16.2 mg/digit, which confirms your findings. This also means that the accelerometer will measure accelerations ranging from -2g to +2g.
    This cannot be changed by upgrading to OS 3.0; it could only change in a new iPhone model with a different sensor.

  8. As someone who has a lot of (admittedly academic) experience with dynamics and estimation, and who has also tried playing with motion tracking, let me try and shed some more light.

The basic problem is that motion tracking from accelerometers is a double-integration problem. If you had velocity sensors and did a simple 1-d back-and-forth motion, you’d get an accurate result, with only a small displacement error that remained fixed. Unfortunately, when integrating velocity from acceleration, a small velocity error remains when the device is actually at rest, and this integrates into continually growing errors in position.

    This suggests to me that you may be able to get something by recognizing that hand motions will by default drop back to a velocity of zero, and add a damping term to the velocity (v = .9*v + a*dt instead of v += a*dt). It wouldn’t be especially precise but would allow something that feels natural. I haven’t tried it yet though, so I can’t guarantee it’ll work well. Of course, that would be totally wrong if you were walking or driving.
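The damping idea can be put side by side with plain integration (a sketch; 0.9 is just the illustrative factor from the comment, and the bias figure is invented):

```python
def integrate_velocity(accels, dt, damping=1.0):
    """Integrate acceleration to velocity; damping < 1 pulls the
    velocity back toward zero each step, absorbing residual error."""
    v = 0.0
    history = []
    for a in accels:
        v = damping * v + a * dt
        history.append(v)
    return history

dt = 1 / 40
# A push-then-stop gesture, followed by 5 seconds at rest with a
# small residual sensor bias of +0.01 m/s^2:
accels = [1.0] * 20 + [-1.0] * 20 + [0.01] * 200

plain = integrate_velocity(accels, dt)                # v += a*dt
damped = integrate_velocity(accels, dt, damping=0.9)  # v = .9*v + a*dt

# With plain integration the bias leaves the velocity drifting even
# though the device is at rest; with damping it settles near zero.
```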

Another option would be to use the location services (GPS/GSM geolocation) to provide a low-frequency correction through a Kalman filter. This might allow you to improve the precision of those services, although again, I haven’t done any math to show whether it would help much or not. Most of these would also imply that you’re moving fast enough for the location services to matter (car for GSM, walking for GPS). Might be an interesting problem though.
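A bare-bones sketch of that kind of blend: high-rate acceleration drives a predicted position, and an occasional low-rate position fix corrects it. This is a scalar filter with fixed gains standing in for the computed Kalman gain, and all the noise and bias figures are invented:

```python
dt = 1 / 40  # accelerometer sample interval (40 Hz)

def run_filter(accels, fixes, fix_every=40, gain_p=0.3, gain_v=0.5):
    """accels: per-sample accelerations (m/s^2);
    fixes: position fixes (m), one per fix_every samples.
    Fixed gains stand in for a properly computed Kalman gain."""
    p = v = 0.0
    for i, a in enumerate(accels):
        v += a * dt          # predict: integrate acceleration
        p += v * dt
        if (i + 1) % fix_every == 0:  # a position fix arrives
            residual = fixes[(i + 1) // fix_every - 1] - p
            p += gain_p * residual                     # correct position
            v += gain_v * residual / (fix_every * dt)  # and velocity
    return p, v

# Device is actually stationary at p = 0, but the accelerometer
# carries a constant +0.2 m/s^2 bias; the fixes keep reporting 0.
accels = [0.2] * 400   # 10 seconds of biased samples
fixes = [0.0] * 10     # one fix per second

p, v = run_filter(accels, fixes)
# Uncorrected, the bias would drift the position out to ~10 m;
# with the once-a-second corrections it stays bounded near zero.
```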

Finally, all of this ignores rotation. Sadly, with no gyros you really can’t distinguish a lot of the motion. The new compass should help some, although I still think you’d need to make assumptions about the types of motion to get the right interpretation of the sensor information. For instance Apple’s high-pass/low-pass filter duality implies that linear motion is fast and rotations are slow, which can work but can’t be considered truly precise.

Look at using an extended Kalman filter (EKF) if you’re interested in handling noise a bit better or handling more states or sensors. The Wikipedia article on them is pretty decent for getting one working.
