Augmented reality (AR) apps are all the buzz at the moment, and they do offer some exciting possibilities. In fact I have already created an iPhone app using certain AR techniques myself, and intend to submit it to the app store as soon as Apple will permit it. More later. 😉
However, whilst considering some of the implications of using AR, I was surprised to find that current geotagging standards don’t seem to be on a par with what AR technology permits. There is definitely scope for extending the current standards, and it seems very likely to me that demand for this will grow rapidly from now on.
In its simplest form, geotagging is simply a means of tagging a piece of data (such as a photo) with a latitude and longitude, and thus allowing data to be mapped or accessed via locational search.
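For instance, a locational search over such tags can be sketched with a great-circle (haversine) distance check. This is only an illustration; the photo list and radius here are made up:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometres.
    r = 6371.0  # mean Earth radius (km)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geotagged photos (name, latitude, longitude).
photos = [
    {"name": "harbour", "lat": -33.8568, "lon": 151.2153},
    {"name": "bridge",  "lat": -33.8523, "lon": 151.2108},
    {"name": "desert",  "lat": -25.3444, "lon": 131.0369},
]

def near(lat, lon, radius_km):
    # Return the names of all photos tagged within radius_km of the query point.
    return [p["name"] for p in photos
            if haversine_km(lat, lon, p["lat"], p["lon"]) <= radius_km]
```

A query such as `near(-33.8568, 151.2153, 5.0)` would then return just the photos tagged within 5 km of the given point.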
An implicit assumption (given that it is called geotagging) is that the data is associated with a point on the earth’s surface, whatever the altitude of the surface might be at that location. In that case an altitude is not strictly required, because the earth’s topography can be assumed to be reasonably constant and knowable via other sources. But what about the case, for example, when a photo is taken from an aeroplane? Then altitude would be an additional parameter required in order to correctly differentiate it from a photo taken on the earth’s surface. Given that GPS systems typically record altitude, it is hardly a stretch to include it as a standard additional tag item.
And going further still, given that augmented reality devices measure the spatial attitude of the camera, would it not make sense to enable any photo to be tagged with the heading (aka azimuth) and elevation (ie. local horizon coordinates) at which it was taken, and also with the camera tilt (eg. whether portrait or landscape relative to the horizon, or any angle in between), and with the angle subtended by the camera shot, both vertically and horizontally? All of this extra geotagging information would be needed in order to correctly reproduce/model the exact “window” into space that the photo represents. Whilst this additional data would be unnecessary for common scenic or portrait photography, it could be quite valuable in other situations. For example, two such fully (and accurately) geotagged photos taken of the same object from different locations would allow a 3-D reconstruction of that object to be created. This is not a trivial implication!
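To illustrate the reconstruction point: once a photo carries camera position plus heading and elevation, each shot defines a ray in space, and a feature visible in two shots can be located where the two rays (nearly) cross. Below is a minimal sketch, assuming the camera positions have already been converted into a local flat east/north/up frame in metres; the function names are my own, not from any standard:

```python
import math

def ray_direction(heading_deg, elevation_deg):
    # Unit vector in a local east/north/up frame:
    # heading measured clockwise from north, elevation up from the horizontal.
    h, e = math.radians(heading_deg), math.radians(elevation_deg)
    return (math.cos(e) * math.sin(h),   # east
            math.cos(e) * math.cos(h),   # north
            math.sin(e))                 # up

def triangulate(p1, d1, p2, d2):
    # Midpoint of the shortest segment between rays p1 + t*d1 and p2 + s*d2,
    # i.e. the best estimate of where the two sight-lines "cross".
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    w = [x - y for x, y in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # approaches 0 when the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * u for p, u in zip(p1, d1)]
    q2 = [p + s * u for p, u in zip(p2, d2)]
    return tuple((x + y) / 2 for x, y in zip(q1, q2))
```

For example, a camera at the origin sighting due east, and a second camera 10 m east and 10 m south sighting due north, should place the common feature 10 m east of the origin.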
One of the existing geotagging standards (FlickrFly) also allows an additional item to be specified, ie. the distance or range from the camera to the subject of the photo, which is presumably needed when a locational search targets the subject of the photo rather than the location from which the photo was taken.
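As a rough illustration of why such a range tag is useful: combined with position, heading and elevation, it pins down where the subject itself is, not just where the camera was. This sketch uses a simple flat-earth approximation (adequate for ranges of a few kilometres); the constants and function name are mine, not from FlickrFly or any other standard:

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres (approximation)

def subject_position(lat, lon, alt, heading_deg, elev_deg, range_m):
    # Approximate location of the photo's subject, given the camera's
    # position, the shot's heading and elevation, and the range tag.
    # Small-offset flat-earth approximation; not valid near the poles.
    h, e = math.radians(heading_deg), math.radians(elev_deg)
    east = range_m * math.cos(e) * math.sin(h)
    north = range_m * math.cos(e) * math.cos(h)
    up = range_m * math.sin(e)
    dlat = math.degrees(north / EARTH_R)
    dlon = math.degrees(east / (EARTH_R * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon, alt + up
```

So a shot taken due north at 1 km range moves the subject roughly 0.009° of latitude from the camera, while a straight-up shot simply adds the range to the altitude.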
In order to avoid missing out on the possibilities that AR apps can already provide, and to be able to start constructing better, more powerful geolocational systems and applications that take advantage of them, I would propose that existing geotagging standards be extended to include all of the following:
- date, time, timezone offset (to provide instant in time)
- geographic latitude, longitude and altitude (to provide location relative to earth’s surface and mean sea level)
- camera viewpoint central heading (0°=N, 90°=E, 180°=S, 270°=W)
- camera viewpoint central elevation (0°=horizontal, -90°=vertical downwards, +90°=vertical upwards)
- camera tilt (0°=portrait/upright, 90°=landscape/top points left, 180°=landscape/top points down, 270°=portrait/top points down)
- camera angle subtended vertically (ie. along nominal top to bottom)
- camera angle subtended horizontally (ie. along nominal side to side)
- range of point of interest from camera (if there is a relevant POI involved)
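To make the proposal concrete, here is one possible way the full tag set might be grouped in code. The field names and units are purely illustrative, not part of any existing standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtendedGeotag:
    # Hypothetical container for the extended tag set proposed above.
    timestamp: str        # ISO 8601 date/time including timezone offset
    latitude: float       # degrees
    longitude: float      # degrees
    altitude: float       # metres above mean sea level
    heading: float        # degrees: 0=N, 90=E, 180=S, 270=W
    elevation: float      # degrees: 0=horizontal, +90=straight up
    tilt: float           # degrees: 0=portrait/upright
    vertical_fov: float   # angle subtended top to bottom, degrees
    horizontal_fov: float  # angle subtended side to side, degrees
    poi_range: Optional[float] = None  # metres to point of interest, if any

# Example tag for a made-up photo.
tag = ExtendedGeotag("2011-05-01T10:30:00+10:00",
                     -33.8568, 151.2153, 25.0,
                     135.0, 10.0, 0.0, 48.0, 62.0, poi_range=120.0)
```

The point is simply that the whole set fits in ten small fields, all of which a modern AR-capable phone can supply at the moment of capture.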
I would welcome feedback on this proposal from anyone knowledgeable with the current state of geotagging. I am certainly not an expert in this area, but for those who have the capabilities to influence things in this area, I believe that there may be opportunities for valuable advances, especially in relation to the new generation of AR technology.
5 thoughts on “Geotagging and Augmented Reality – New Standards Needed?”
I agree fully with your article and have been searching for documentation on exif tags to describe azimuth and elevation. I think angle of view may be tough to find out on some devices, but there should still be a tag for it. Let me know if you have found anything.
I’m quite interested in this too. The tech exists now (in consumer camera phones, no less) to record all of this information at the time of capture, but there’s no standard for storing it?
I need this app in Android and am willing to try writing it. I have access to NIST. Should anyone reading this find the idea of interest
Hi Doug – the Sun Seeker app is available on Android! https://play.google.com/store/apps/details?id=com.ajnaware.sunseeker&hl=en
I do Tai Chi but I do it horribly due to disabilities which do not affect my practices of Hatha Yoga and Zen Meditation. I will ask my County Recreational Specialist for permission to bring my AR gear to Tai Chi classes…which is indirectly related to this thread. I would program an app placing the practitioner in a virtual army of allies (it’s a martial art) extending as far as the eye can see, since my disabilities require multiple simultaneous instructors…they relate to dyslexia…the County has been unable to provide multiple instructors for me. The AR lag or lead time would be programmable, of course. I have 7 Slates and can get 6 more headpieces.