I had the privilege of working at CSIRO 20 years ago developing applications like AR for surgical training and remote collaboration in immersive VR environments with haptics (touch feedback) and sonification (audio AR). Given the time that’s passed, where are we now? I want to look at AR, because VR - though incredibly important for things like training pilots - is further away from mass adoption. And by AR I mean overlaying information on representations of reality.
A lot of AR applications are still the same. Games, training, museum displays and some new ones I’ll touch on that aren’t Snapchat filters or Pokémon. But the hardware advances are incredible. What previously needed a bank of servers now runs on a phone. A $10k monitor of old is now surpassed in every way by the OLED screen in your pocket. Phones now have advanced video and, through time-of-flight cameras, can measure distance. And with 5G coming soon, we will have low-latency, high-bandwidth communications back to the cloud.
There are two things happening. Our phones are becoming platforms that can support some serious AR applications - a fact not lost on vendors like Apple with ARKit. But, while useful, I don’t think phones are the ultimate AR devices. In parallel, there is a lot of experimenting with different kinds of devices. However, I don’t think we’ve reached the iPhone moment for AR - by which I mean a device and ecosystem that balances a set of technology trade-offs to make a compelling experience, as the smartphone did.
Google Glass came and went, as it was considered a bit creepy and looked strange. I don’t think we’ll accept eyeglass-format devices widely until they look as fashionable as normal eyewear. Devices like the Vuzix Blade are several steps closer, and I think two more hardware generations on, we will get there. Until then, devices like HoloLens 2, backed by the Azure AR cloud, are probably the most powerful enterprise AR platforms generally available, at least until we see what Magic Leap ultimately delivers. HoloLens is still expensive but makes sense for specialized applications like aircraft maintenance or manufacturing.
However, there are reasons to be excited. In the near term, there are three areas that interest me: Audio AR, Transport, and what Mark Pesce terms Digital Depth.
I think we have a massive opportunity with audio AR. Think about the last time you were at a conference wishing you could remember someone’s name. Now imagine you had an earbud that could pick up voiceprints: when you met someone you’d met before, it would whisper in your ear who they were and the context of your last meeting. You may not even look odd, because so many people have AirPods or another wireless device in their ears these days. Questions about whether using voiceprints is a reasonable thing to do aside, this application is possible now. Bose has AR sunglasses with bone-conduction audio that would work here too.
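The matching step behind that earbud can be sketched simply. A minimal, hypothetical version, assuming each known contact is stored as a fixed-length speaker embedding produced by some speaker-recognition model (the names, dimensions and threshold below are all illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify_speaker(embedding, known, threshold=0.8):
    """Return (name, score) of the closest known voiceprint,
    or (None, score) if nobody clears the threshold."""
    best_name, best_score = None, -1.0
    for name, ref in known.items():
        score = cosine_similarity(embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy example with 3-dimensional "embeddings" (real ones have hundreds of dims).
known = {"Alex": [0.9, 0.1, 0.0], "Sam": [0.0, 1.0, 0.2]}
name, score = identify_speaker([0.88, 0.12, 0.01], known)
```

In practice the hard part is the embedding model itself and keeping the gallery of voiceprints accurate and consented-to; the lookup is the easy bit.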
For several years now I’ve been fascinated with transport and how to improve it, and this has led to me working with shared mobility technology provider Liftango. One thing we do is provide technology for on-demand buses and shuttles – the bus comes when you ask. And to get the best balance of passenger convenience and efficiency, we sometimes run what we call “virtual stops”, where the passenger and bus meet each other. If you have ever used pooled rideshare, you’ll know that GPS often struggles to give accurate locations. But AR can help. Google Maps now has an AR feature (on Pixel phones for now) that links what the camera is seeing to a database of images, overlays directions on the scene and tells you which way to walk. And this feature will work indoors as well as outdoors.
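The virtual-stop idea itself is a small geometric problem before AR even enters the picture. A minimal sketch, with made-up stop names and coordinates, of snapping a rider’s requested location to the nearest candidate stop (a real system would also weigh walking routes and road crossings, not just straight-line distance):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def nearest_stop(rider, stops):
    """Pick the virtual stop closest to the rider's (lat, lon)."""
    return min(stops, key=lambda s: haversine_m(rider[0], rider[1], s[1], s[2]))

# Hypothetical candidate stops near a rider in central Sydney.
stops = [("Corner A", -33.8688, 151.2093),
         ("Corner B", -33.8700, 151.2150)]
stop = nearest_stop((-33.8690, 151.2100), stops)
```

AR’s job then starts where this leaves off: guiding the rider the last fifty metres to the chosen corner when GPS alone isn’t accurate enough.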
Just imagine: your phone will draw a lighted path to wherever you want to go, or to where your personal on-demand bus will pick you up. Or help you navigate an unfamiliar airport when you are late. Of course, this would be more convenient viewed through glasses than by holding up a phone.
AR can not only help you get somewhere more easily, it can help get you there safely. AR can make a trailer, a car bonnet or a building appear transparent by using strategically located cameras and overlaying their views on your field of view. For example, imagine approaching an intersection in a heavily built-up city. You can’t normally see around corners, but now add networked cameras monitoring all the streets. When you look in the direction of the blind corner, the scene behind the building will be superimposed on a HUD – in fact, Corning is working on glass for this case. In parallel, the car’s semi-autonomous system can receive trajectory information about the vehicles you can’t see, relayed over low-latency wireless.
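What the car does with that relayed trajectory is a standard closest-point-of-approach calculation. A hedged sketch, assuming the infrastructure relays the hidden vehicle’s position and velocity in a shared 2D coordinate frame (the numbers and the 5-metre warning threshold are illustrative):

```python
import math

def closest_approach(p1, v1, p2, v2):
    """Return (t_cpa, d_cpa): time in seconds and distance in metres of
    closest approach, assuming both vehicles hold constant velocity."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    vv = vx * vx + vy * vy
    # Time at which relative distance is minimised (clamped to the future).
    t = 0.0 if vv == 0 else max(0.0, -(rx * vx + ry * vy) / vv)
    dx, dy = rx + vx * t, ry + vy * t
    return t, math.hypot(dx, dy)

# Our car heads north at 10 m/s, 50 m short of the intersection; a hidden
# car approaches the same intersection from the east at 12 m/s.
t, d = closest_approach((0, -50), (0, 10), (60, 0), (-12, 0))
warn = d < 5.0  # flag a potential conflict for the HUD / driver-assist
```

Real systems track uncertainty and curved paths, but even this constant-velocity version shows why low latency matters: the warning is only useful if it arrives well before `t`.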
Finally, Digital Depth ties together the idea that any physical object in time and space has layers of digital information associated with it. For example, a building will have council records, energy consumption, architect’s plans, survey information, finance, market value, temperature and movement sensors, and even geological and climate information recorded somewhere. With AR we have the opportunity to join all of that together and make it accessible when we look at the building through an AR lens. It isn’t just bringing IoT to life in a visual way; it’s a new way to understand the building. Appropriate privacy needs to be maintained (and that is something CSIRO is currently looking into), but the idea is clear – AR becomes an intuitive and efficient way to navigate the whole depth of information about that building. And you can imagine doing something similar in other sectors. So a challenge coming up for CIOs is how to get the interesting spaces and places mapped and, in parallel, get the rich digital information into a format that can be linked and represented in AR.
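The data shape behind Digital Depth can be sketched as layered records keyed to a physical object, with the privacy point expressed as entitlements on each layer. Everything here - the building ID, layer names and entitlement levels - is a made-up illustration, not any real schema:

```python
# Each physical object carries named layers of linked records; an AR client
# requests only the layers the viewer is entitled to see.
BUILDING_LAYERS = {
    "building-042": {
        "public":   {"council_records": "DA-2017-0042", "market_value": "AUD 4.2M"},
        "occupant": {"energy_consumption_kwh": 1240, "temperature_c": 21.5},
        "owner":    {"architect_plans": "plan-rev-C", "finance": "loan-8841"},
    }
}

def layers_for_viewer(building_id, entitlements):
    """Merge the layers a viewer is allowed to see for one building."""
    layers = BUILDING_LAYERS.get(building_id, {})
    view = {}
    for level, records in layers.items():
        if level in entitlements:
            view.update(records)
    return view

# A passer-by with only "public" entitlement sees council and valuation data,
# but not the occupant's sensor feeds or the owner's financials.
public_view = layers_for_viewer("building-042", {"public"})
```

The CIO challenge in the paragraph above is essentially getting the real equivalents of these layers out of their silos and into something this linkable.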
Special thanks to Futurist Mark Pesce and CSIRO’s Matt Adcock for valuable conversations and insights on this topic.