Oz Weather Stats

Ever wondered which Australian city is most weather-obsessed? Probably not. But I’m going to tell you anyway. It’s Canberra.

How do I know that? Well, I don’t know it for certain, because my calculation is based on the assumption of uniform iPhone ownership across all Australian cities on a per-capita basis, and also of uniform rates of access to Oz Weather in comparison with other weather resources on the iPhone. So alternatively, for example, it could mean that Canberra has the highest iPhone ownership, and perhaps its residents are not actually so weather-obsessed after all. But in reality I doubt that ownership rates vary that much – unless someone can tell me otherwise?

The following list shows the ratio of unique visits to Oz Weather per head of population, indexed on Canberra’s data, for the last month.

  • Canberra – 100%
  • Hobart – 75%
  • Melbourne – 68%
  • Darwin – 59%
  • Adelaide – 41%
  • Perth – 30%
  • Sydney – 29%
  • Brisbane – 25%

So this seems to suggest that people are more concerned about the weather in the most south-eastern capitals of Australia. And that does make some sense, given that the weather has the greatest natural variability in this region too. But we’ll have to see whether this pattern holds as time goes on.

Finally – for those more interested in commerce than weather – you might like to know how the Oz Weather AdMob advertising has been progressing. I mentioned earlier that the ad fill rate had been patchy. Well, it has continued to see-saw a lot. Some days see 100% fulfillment, others 0% – and it’s very hard to see any clear pattern to it. I suspect they are struggling to optimize their ad placement algorithms. Despite this, fulfillment has nevertheless averaged 79.8% over the latest month. On a more disappointing note, although the eCPM started out at a very impressive US$7, it has gradually sunk, and the average since inception is now about US$1.20. So I guess I won’t be retiring next week, after all. But I have been making good progress with a significantly enhanced version of Oz Weather as a native iPhone application, and it may just be good enough to warrant a modest price in the app store. More about that later…


The Objective-C Learning Curve

Apple have done a great job in providing developers with source code for a range of sample iPhone applications. Thanks to this I was able to quickly knock up a couple of fairly simple test applications of my own. This of course was very gratifying, and gave me the impression that it would be possible to produce a more complex and polished application within a couple of weeks.

However, despite this initial experience, and despite some previous experience with C (having written several Windows C DLLs), I have now decided that learning Objective-C and the Cocoa Touch framework for iPhone development is going to be considerably more difficult than I first thought.

Part of the problem is due to the C language itself. This is, of course, somewhat notorious for allowing (and even encouraging) terse code, and rather minor and subtle variations in syntax that are hard to spot at first glance can create difficult-to-trace bugs.
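
To give a flavour of the kind of thing I mean, here is a generic illustration (not something from Oz Weather itself) of how a single missing character turns a comparison into an assignment, yet the code still compiles happily:

```objc
// Purely illustrative C – a classic subtle-syntax trap.
static void handleRetries(int retryCount) {
    if (retryCount = 0) {      // bug: assigns 0 and evaluates to false, so this branch never runs...
        // ...and retryCount has been silently zeroed as a side effect
    }
    if (retryCount == 0) {     // the intended comparison
        // handle the no-retries-left case
    }
}
```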

Another issue is the richness and depth of the Cocoa framework. It seems there is hardly any need to write raw C code at all. Almost every imaginable function, operation or conversion you might need is taken care of within the framework. If you are already familiar with this framework (for example by having previously programmed for Mac OS X) then you will have an excellent head start here. But for newcomers like myself, the combination of unfamiliarity with the framework and the C language subtleties can make for rather tough and slow progress.
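
As a small, purely illustrative example of what the framework hands you for free: splitting and tidying up a comma-separated string, which would need a fair amount of raw C with strtok and friends, is just a couple of Cocoa calls (the function name below is my own):

```objc
#import <Foundation/Foundation.h>

// Splits a comma-separated list of city names and trims whitespace from each entry.
NSArray *cityNames(NSString *raw) {
    NSMutableArray *cities = [NSMutableArray array];
    for (NSString *part in [raw componentsSeparatedByString:@","]) {
        [cities addObject:[part stringByTrimmingCharactersInSet:
                              [NSCharacterSet whitespaceCharacterSet]]];
    }
    return cities;   // e.g. @"Canberra, Hobart , Melbourne" -> three clean strings
}
```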

All this does not necessarily make for any more of a challenge than attempting to pick up any new development environment, tools and language. However, there is a third issue which, in combination with the other two, makes things pretty challenging all round. That issue is the limited availability of on-line samples, discussions and forums ie. the lack of any easily-accessible community and knowledge-base. Apple has not helped here at all by requiring developers to sign an NDA, which is a strong disincentive to sharing your own knowledge, experiences and examples.

A (very) rough illustration of what I mean here is to compare the Google search result counts for, say, some PHP function names versus some Cocoa framework classes.

  • “PHP mysql_query” – 3,800,000 pages
  • “PHP XPath” – 409,000 pages
  • “PHP curl_init” – 119,000 pages
  • “Cocoa NSDictionary” – 43,100 pages
  • “Cocoa NSURLRequest” – 4,280 pages
  • “Cocoa UITableView” – 2,130 pages

I don’t know quite how typical I am here, but I suspect I am not alone in relying heavily on web search when learning new programming languages and environments. The sparseness of the publicly available pool of knowledge and expertise is a real obstacle, both in getting up to speed in the first place and, subsequently (even more importantly in my opinion), in acquiring the extra tips, skills and knowledge needed to produce polished, high-quality applications.

All I can do for now is to plead with Apple to

  1. drop the developer NDA agreements (at least for current and past releases of their SDK)
  2. actively encourage developers to share their knowledge and experience

It’s hard to imagine how the effect of this on Apple could be anything other than highly beneficial, and it would certainly increase the quality and depth of applications for the iPhone in general.

Oh… And it might also help me get over this aching brain sensation I’ve had for the last few days!

TiVo – The Unit and the Customer Service

I’ve been through several HD set-top boxes in my time already. The first one was a Toshiba. Being my first I didn’t know what to expect, and was a bit shocked by some of the problems and bugs. For example, the display clock lost about 20 minutes per day. The only way it managed to record programs at the correct time was by turning itself on every few hours and re-syncing its clock with whatever channel it was last tuned to – and some of those didn’t even remember to switch themselves to daylight savings. And then there were frequent bursts of pixelation and occasional “lost” segments of half a minute or so in the recordings. These didn’t happen on the live TV – only on the recordings. But being so new I put up with that and enjoyed it anyway.

My next was a Foxtel iQ unit. This was much better. The recordings were no different from live shows – perfect quality, reliably labelled, easily accessible via the menu system. The only drawbacks were that it wasn’t actually HD and couldn’t easily record anything from the TV channels with which they had no commercial agreement in place. Another small gripe was that the unit remained on permanently, and was always very hot to the touch – not so great in this era of global warming and the need to minimise power usage. But again, I really enjoyed using it. I only gave it up because I decided that Foxtel didn’t offer enough content of interest to warrant continuing the subscription.

The third was a Sony unit. This one was a bit of a dog, I’m sad to say. When you recorded a program, there was no title saved with it. So you ended up with a list of programs identified only by date, time and channel. Great if you have a photographic memory of the TV guide! And navigating its menu system was a mind-numbing process – being both slow and convoluted. But it did work most of the time. Eventually though it developed a bad fault. It kept freezing during bootup – at least once per day, and required a tricky 20 second button push to reset it each time. Either that or just pulling the plug, which was quicker but didn’t sit too comfortably with me. It was this ongoing problem that led to the fourth unit – the TiVo.

In many ways the TiVo unit is bliss. Once connected to the home WiFi, it downloads two weeks’ worth of coming program listings. And the menu system (although still with its quirks) is, by comparison with some of the others, a pure delight to navigate through. It even records various programs speculatively based on your previous preferences, so if you don’t like anything that’s on live, you can browse through a list of what it recorded for you, and you might just be pleasantly surprised to find something you like in there. If not, nothing lost. They just get recorded over whenever you need the space.

But – all silver linings have a cloud, and the cloud here was that it started to reboot spontaneously within the first week of acquisition. The first contact with tech support (submitted via online form on their website) was promising – they suggested it might be due to corrupted files, and gave instructions on how to perform a “severe error” scan, which could fix the problem. An extract from the (fairly lengthy) set of instructions was as follows:

Reboot and keep the TiVo remote pointed at the front of the DVR so you can be prepared to do the next step as soon as the DVR begins to restart.

As the DVR restarts, all four LED lights on the front bezel of the DVR will be on at the same time. As soon as all four go out,  immediately (within 2 seconds) press and hold down the yellow Pause button on the remote. (If you are unable to catch the timing, you may also hold the pause button down continuously during the restart until only the yellow light comes on.) When the yellow light comes on,  release the yellow Pause button and then press 57 on the remote control. (You will have approximately 10 seconds to do this.)

If the numbers have been keyed in successfully, the DVR will restart.

I must point out here that rebooting takes about 4 minutes. So I sat there for 4 minutes, watching like a hawk, waiting for 4 red LEDs to appear. But they never did. Not even 1 red LED. I tried several times. But still no red LED.

Well, this led to a few more exchanges with tech support, and eventually they sent a modified list of instructions:

As the DVR restarts, keep your attention fixed on the front panel of the Tivo. After approximately 5 minutes you should see the green light go out for a second and then come back on. Following this the yellow and red light will be displayed also. Hold the yellow pause button down for 2- 4 secs and then press the number “57”.

The box will appear to be restarting and then “powering up”

I’m afraid that one was a bit off the mark too… After about half an hour of experimentation I finally found out how to do it, and wrote back to them with my (rather different) findings:

1) About 1 minute after the reboot (during the transition from the first screen to the “almost there” screen), the green light very briefly goes off and on, and the yellow light comes on. If you press the Pause button while the yellow light is on, the red light will also come on, and you can then release Pause and enter 57.

2) Following this, there is no sign at all whether or not it worked – it appears to continue booting as normal. But after 15 to 30 seconds it then starts rebooting.

Phew! After all that it worked. It is to TiVo’s credit that they said they would use my feedback to update their help system – but I pity anyone else who may have got the same set of initial instructions that I got from them…!

But – although the reboot and “severe error” scan completed successfully, another spontaneous reboot occurred – so back to TiVo support. This time the only solution was to exchange the unit for a new one. And this is where I started to become really impressed with TiVo support. Firstly, they sent a brand new unit to me via courier – arriving the next day. Then they sent an empty package with a courier form, so that in my own time I could place the old unit into it and call the courier company, who promptly collected it. All free of charge, all simple and straightforward. No need for me to revisit the shop, or take the unit to a repair centre etc.

TiVo – take a bow! Too bad about the glitch, and the initially inaccurate instructions – but they were responsive, kept at it, never blamed me for anything, and got the problem fixed. Now let’s just hope this meme spreads a little. This kind of service deserves to be contagious.

More iMac, Windows and VMWare

It’s been a few weeks since my last iMac post – so here’s a little more about my progress with integrating my various web-life-computer strands.

Basically, it’s looking good. I’m running Win XP with all my usual, familiar Windows tools inside VMWare Fusion on the iMac. VMWare allows me to let the Windows session take over the whole screen – which seems just like running Windows natively (with a few little exceptions I’ll discuss below). But it also allows me to reduce Windows to running inside a window on the OS X desktop, or even to run in “Unity” mode, where any Windows apps which are already running appear on the OS X desktop as if they were Mac applications. Now that is the kind of integration that is really worth having. And these modes are all switchable via simple key combinations – without having to shut anything down. Great!

iMac desktop with Windows apps in "Unity" mode. Well yes - a bit messy I suppose!

One minor annoyance is that when reducing Win XP from the entire screen down to a smaller window, any apps running inside it get pushed around a bit so that they fit inside the smaller window size. This in itself is actually very useful – but the problem is that when you resize Windows to take up the whole screen again, the apps do not spring back, and you have to re-adjust them manually to their original positions. I suppose this is understandable, as it might be presumptuous of an app to move things around automatically – although it would perhaps be reasonable to do so if you hadn’t manually readjusted any of the positions yourself after it was first sized down. I guess this is a Windows issue rather than a VMWare one, though.

The other, bigger, issue I’m having is keyboard shortcut assignments. For example, when I first started coding in XCode on the Mac (iPhone app development), every time I hit the End key to extend my selection to the end of the line, I instead found myself jumping to the end of the document. Dang! Despite wanting not to do it, I kept doing it out of sheer force of habit. The other ones that I kept tripping over were using Ctrl+C and Shift+Del to copy, and Ctrl+V to paste. Having used these for years, I suspect they are hardwired into my neuronal circuitry. In contrast, on the Mac you need to use the Command key with C and V to copy and paste. Fortunately XCode allows you to re-assign keyboard shortcuts – and there are literally hundreds of them – so I was able to “get back” my most deeply entrenched combinations, and could resume a more normal level of productivity whilst coding.

But of course these reassignments don’t carry through to any other applications on the Mac – so perhaps my XCode solution will only cause me more grief because I’m delaying an inevitable relearning process. Hmm – will just have to see how this goes.

The final issue I will mention for now is one of screen real-estate. Although the iMac boasts a very impressive 24″ screen (well ok, just diagonally, I admit), when you’re running two operating systems, each with their own set of apps, simultaneously, you’re in exactly the situation that multiple monitor setups were invented for. (As evidence I proffer the screen shot above.) Well, the iMac does support multiple monitors, but presumably as an aesthetic design consideration (ie. avoiding big ugly connection sockets) it only offers a Mini-DVI socket (not to be confused with a Micro-DVI socket, which Apple uses on its slimmest laptops). So I’ve just sent off for a Mini-DVI to DVI adapter, and hopefully within the next day or two I will have a 24″ / 22″ dual monitor setup – with which I can run VMWare/Windows on one, and OS X on the other.

So although I’m not quite “there” yet, I am still dreaming great dreams of a perfectly integrated world – and getting there bit by bit.

Accelerating iPhones

I’m pleased to be able to say that my application to become a registered iPhone developer was finally approved, and hence I am now able to run test native applications on my own iPhone. Although the simulator provided with XCode allows you to test many types of applications, one thing it won’t do is simulate feedback from the built-in accelerometers. Hence my grand ideas of developing a suite of applications based on accelerometer data had to be put on hold until now, given that I couldn’t determine whether the accelerometers would be capable of the accuracy and consistency I would need to make my apps work.

So the first thing I did was load up the sample Accelerometer Graph application and then modify it to display the acceleration (m/s^2), speed (m/s) and distance traveled (m) along each axis, in order to observe its behaviour and experiment with it.

The accelerometers return data in units of g, so the values need to be multiplied by 9.81 to convert to m/s^2. The interval duration may be varied as desired, but I experimented with both 1/40th and 1/100th of a second.

The speed is then incremented with each time interval by the acceleration x interval duration.

The distance traveled is incremented with each time interval by the speed x interval duration.
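
To make that concrete, here is roughly how the integration looks in code – a sketch of my own rather than Apple’s sample code, assuming the class acts as the accelerometer delegate and that kUpdateInterval matches the updateInterval set on the UIAccelerometer:

```objc
#define kUpdateInterval (1.0 / 40.0)   // I also tried 1/100th of a second
#define kGravity        9.81           // m/s^2 per g

static double speed[3];     // m/s along x, y, z
static double distance[3];  // m along x, y, z

- (void)accelerometer:(UIAccelerometer *)accelerometer
        didAccelerate:(UIAcceleration *)acceleration {
    double g[3] = { acceleration.x, acceleration.y, acceleration.z };  // in units of g
    for (int i = 0; i < 3; i++) {
        double a = g[i] * kGravity;                   // convert g to m/s^2
        speed[i]    += a * kUpdateInterval;           // v += a * dt
        distance[i] += speed[i] * kUpdateInterval;    // s += v * dt
    }
    // (updating the on-screen graph and labels omitted)
}
```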

My initial observation was that it was all wildly inaccurate. When the iPhone is moved and then returned to its initial location, the acceleration should rise and then fall back to zero, and the speed and net distance should correspondingly return to zero as well. The acceleration demonstrably did return to zero, but there was usually a residual speed shown after returning to rest, indicating that errors were accumulating along the way.

An initial factor in causing this appears to be the granularity of the measurements of acceleration returned (ie. the resolution). The accelerometer readings returned appeared to come in quanta of about 0.18m/s^2 (ie. about 0.018g). This is a fairly large gap, and results in a large cumulative error very quickly.

Further investigation led to the observation that errors became especially large when the iPhone was not kept perfectly level whilst being moved horizontally, and eventually I realised that this was because the high-pass filter only acts slowly to filter out gravity effects – even a very small degree of tilting was leading to large cumulative errors before the filter managed to adjust for the gravity bias.
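
For anyone curious, the filtering approach is essentially the following – my own sketch, based on the simple weighted-average filter used in Apple’s AccelerometerGraph sample (the names kFilteringFactor and gravityEstimate are mine):

```objc
#define kFilteringFactor 0.1

static double gravityEstimate[3];   // slowly-adapting estimate of the gravity bias per axis

double highPass(double rawAccel, int axis) {
    // Low-pass: let the gravity estimate drift slowly toward the raw reading...
    gravityEstimate[axis] = (kFilteringFactor * rawAccel) +
                            ((1.0 - kFilteringFactor) * gravityEstimate[axis]);
    // ...then subtract it, leaving (approximately) only the motion component.
    // Tilt the device and the estimate lags behind – which is exactly where the
    // large cumulative errors described above creep in.
    return rawAccel - gravityEstimate[axis];
}
```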

The crux of the issue is that the accelerometers actually measure force rather than acceleration per se. Hence they are affected by gravity, which is not relevant to the measurement of the device’s motion, and which can only be filtered out over a period of time rather than instantly – and this makes it impossible to derive accurate motion information.

So the force of gravity and any actual acceleration of the device are not individually distinguishable by the accelerometers. The device may detect, for example, that it has a force on it 10% greater than gravity, but whether this means that it is accelerating at 0.1g vertically away from the earth, or alternatively perhaps is accelerating at 0.14g at 45° from vertical (which results in a total vector magnitude of 1.1g) along with a slight rotation of the device, simply cannot be distinguished.
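
If you want to check that arithmetic yourself, here is a quick plain-C sanity check of the 0.14g example (illustrative only):

```objc
#include <math.h>

double totalForceInG(void) {
    double a     = 0.14;                    // acceleration magnitude, in g
    double theta = 45.0 * M_PI / 180.0;     // 45 degrees from vertical
    double vert  = 1.0 + a * cos(theta);    // gravity (1 g) plus the vertical component
    double horiz = a * sin(theta);          // the horizontal component
    return sqrt(vert * vert + horiz * horiz);   // ~1.10 g: indistinguishable from
                                                // 0.1 g of purely vertical acceleration
}
```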

So what exactly are the accelerometers “good for”, so to speak? The most obvious usage is as an orienting mechanism – finding which direction is up (or down). This involves working on the assumption that the device is not experiencing any acceleration (or if it is, then it is small in comparison with gravity, and of limited duration), so that the force it measures is indicative of the direction of gravity. And this is precisely how the iPhone “knows” when to rotate its browser, photos, keyboard etc.
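
As a rough sketch of how that orientation trick works (my own illustration – the exact sign conventions depend on the axis directions, so don’t take it as gospel):

```objc
#import <UIKit/UIKit.h>
#include <math.h>

// Assuming the device is close to being at rest, the measured vector is essentially
// gravity, and a simple atan2 recovers the roll angle in the screen plane.
double rollDegrees(UIAcceleration *acceleration) {
    // Held upright in portrait, gravity shows up mostly on the y axis (y ~ -1 g);
    // as the device rotates toward landscape, it shifts onto the x axis.
    return atan2(acceleration.x, -acceleration.y) * 180.0 / M_PI;
}
```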

Also, if one were to adopt the alternative assumption that the device had a fixed orientation (eg. attached to a stable moving platform), then the effect of gravity could be eliminated, and speed and distance traveled should then be derivable. However, it seems rather unlikely that any practical scenario would allow for sufficient stability to make this accurate enough to be of any use.

A third possibility here, although admittedly with somewhat limited applicability, is that it would be possible to reliably detect an absence of force ie. the state of free-fall. This might help someone make a great app for astronauts, but for us earth-bound individuals who don’t like the idea of dropping (or throwing upwards and then catching) our iPhones it is probably only going to be of use either in a gaming scenario or as a novelty.

Unable to resist the novelty myself, and in the interest of science of course, as a proof of concept I did create an app which waits for and captures free-fall events. When one occurs, the screen goes red, the vibrator is triggered, and a wav file is played – in this case with a message pleading for the user not to drop my iPhone. Quite fun and amusing, if I might say so myself, but sadly a far cry from being able to measure distances moved with any accuracy.
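
For the curious, here is roughly what the free-fall detector boils down to – a reconstruction rather than the actual app source; the playPleaseDontDropSound helper is hypothetical, but AudioServicesPlaySystemSound with kSystemSoundID_Vibrate is the standard way to trigger the vibrator:

```objc
#import <UIKit/UIKit.h>
#import <AudioToolbox/AudioToolbox.h>
#include <math.h>

#define kFreeFallThreshold 0.2   // total measured force well below 1 g suggests free-fall

- (void)accelerometer:(UIAccelerometer *)accelerometer
        didAccelerate:(UIAcceleration *)acceleration {
    double magnitude = sqrt(acceleration.x * acceleration.x +
                            acceleration.y * acceleration.y +
                            acceleration.z * acceleration.z);
    if (magnitude < kFreeFallThreshold) {
        self.view.backgroundColor = [UIColor redColor];        // screen goes red
        AudioServicesPlaySystemSound(kSystemSoundID_Vibrate);  // trigger the vibrator
        [self playPleaseDontDropSound];                        // hypothetical helper: play the wav
    }
}
```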