The following is a post by Layar’s R&D lead Ronald van der Lingen and CTO Dirk Groten.
Two weeks ago we got our hands on Google Glass, and we have not been sitting idle. We started hacking right away to see what we could do with Layar on this hot new piece of technology. Here are our findings from these initial experiments.
Glass runs on Android, and we have an Android version of Layar. Piece of cake?
When we learned that Google Glass is “just” an Android device with a custom interface on top of it, we of course wanted to know if the code base of Layar for Android would work. We were already skeptical about the usability of “true” augmented reality (AR) on Glass, but you never know for sure until you try. “True” augmented reality is when you see your reality modified by the digital layer added to it. You look at a page in a magazine through your AR glasses or AR-enabled smartphone and the page appears different than it does in reality: an image on the page comes to life, a 3D car model is shown instead of the flat picture of the car, or a “Buy” button is added on top of an ad for a perfume bottle.
“Layar just runs on Glass.”
While Google Glass runs Android, launching Android apps from its user interface is not straightforward. Currently, you need to enable debug mode, which allows you to sideload applications and launch them. To our surprise, clicking “launch” in our development environment resulted in a correctly functioning AR display, showing the full camera preview with attached augmentations in the corner of our eye.
However, the user experience was terrible.
The problem is that the display of Google Glass is just a small screen in the corner of your eye, which you have to look at deliberately by glancing up. It is not the immersive display a true AR device would need. Running Layar’s vision-based AR unmodified on Glass therefore feels like holding your phone above your normal line of sight, looking up at it while trying to hold your head as if you were looking at the magazine or object through the camera lens.
This confirmed our initial expectation that Google Glass requires a completely different mindset from any other platform we operate on. Rather than simply showing the content as we would on phones and tablets, we need to look at other ways of displaying the vast amount of content created for the Layar platform.
Google Glass UI
To figure out what type of user experience would work on Google Glass, a good first step is to look at what Google Glass offers out of the box. It has a very simple user interface with big text and not a lot of content. Content is represented as screens (also called timeline cards) on a single timeline, which contains the history of all actions the user has taken and notifications the user has received, in chronological order. By swiping back and forth on the touchpad on the side of Glass, users can easily scroll through this history of cards.
When you tap the side of Glass to wake it, you are first shown the home screen containing the time and the phrase “Ok glass.” Saying this phrase lets you start actions like “take a picture,” “record a video,” “send a message,” and so on. Triggering these actions adds new timeline cards to the history. Timeline cards can have menu options that are shown when you tap the touchpad; common actions include reply, share, and delete.
This simple interface is all the user sees on Glass. There is no real concept of apps like you are used to on phones and tablets. Third-party apps are really just services that interact with the user’s timeline.
The only official way to develop apps (“Glassware”) for Glass is through the Mirror API. This is a cloud-based API, meaning that your software does not run on the device itself but as a service on the Internet. That service talks to Google’s Mirror API servers, which in turn interact with the user’s timeline.
The possibilities of the Mirror API are quite limited. The most common use case is letting services add notification cards to the timeline with news items, messages, or other content (including photos and videos). Glassware can also add contacts that can be used for sharing, which allows the user to share photos and videos with third-party services.
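To make this concrete, here is a minimal sketch of the kind of JSON body a Glassware service would POST to the Mirror API’s timeline collection (at `https://www.googleapis.com/mirror/v1/timeline`, authenticated with the user’s OAuth 2.0 token). The field names follow Google’s Mirror API reference; the helper function and card text are our own illustration, and the actual HTTP request is omitted.

```python
import json

def build_timeline_card(text, menu_actions=("REPLY", "DELETE")):
    """Build the JSON body for a Mirror API timeline item (sketch only).

    The payload would be POSTed to /mirror/v1/timeline; this helper is a
    hypothetical convenience, not part of any official client library.
    """
    return {
        "text": text,                          # plain-text content of the card
        "notification": {"level": "DEFAULT"},  # buzz the user once on arrival
        "menuItems": [{"action": a} for a in menu_actions],
    }

card = build_timeline_card("3 Layar results found for this page")
print(json.dumps(card, indent=2))
```

Note that the service never touches the device directly: it only hands this card to Google’s servers, which sync it to the user’s timeline whenever Glass next connects.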
This was sufficient for us to create a simple prototype service that lets the user take a picture and share it with a “Scan with Layar” contact; we then perform a visual search on our servers and push the results back to Google Glass.
While this was a nice proof of concept, the user experience is far from ideal. Currently, scanning an image with Layar requires the user to first take a photo and then explicitly share it with Layar. Ideally, this would be a single “Scan with Layar” action that can be triggered directly, but the API provides no way to combine taking a picture and sending it to a service.
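The prototype flow above hinges on two Mirror API resources: a contact the user can share photos with, and a subscription that notifies our service when that happens. The request bodies might look roughly like this; the resource fields follow the Mirror API reference, but the contact id, display name, and callback URL are illustrative placeholders only.

```python
# POSTed to https://www.googleapis.com/mirror/v1/contacts:
# registers a share target that appears when the user shares a photo.
scan_contact = {
    "id": "scan-with-layar",            # placeholder identifier
    "displayName": "Scan with Layar",
    "acceptTypes": ["image/jpeg", "image/png"],
}

# POSTed to https://www.googleapis.com/mirror/v1/subscriptions:
# asks Google to call our server when the user's timeline changes,
# e.g. when a photo is shared with the contact above.
timeline_subscription = {
    "collection": "timeline",
    "operation": ["UPDATE"],
    "callbackUrl": "https://example.com/mirror/notify",  # placeholder URL
}
```

The round trip through Google’s servers on every share is exactly what makes the flow feel indirect compared to a native “Scan with Layar” action.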
Another problem with this flow is that the Mirror API is asynchronous by design. The timeline on Google Glass is kept in sync with Google’s servers, but sometimes this synchronisation is slow due to bad connectivity or other circumstances. That is fine for sharing photos to social networks (they will simply synchronise once a connection can be made), but Layar users expect to see results fast.
Finally, the types of content allowed through the Mirror API are quite limited and static. The rich content created by publishers on the Layar platform is nearly impossible to show in a nice and useful manner.
Glass Development Kit
At Google I/O, a Glass Development Kit (GDK) was announced that will allow developers to write Android apps for Glass. While details have not been given yet, it is supposed to integrate, or at least launch, real applications from the standard Google Glass user interface.
We feel this will open up a lot of possibilities for creating a better, richer experience. The key will be to keep the UI very simple, similar to the standard Google Glass apps, and to use the extra API capabilities to improve the flow and directness of the interaction with the Layar platform. For example, you will be able to simply look at a page in a magazine augmented with Layar and see a list of web links and videos belonging to that page appear in the corner of your eye.
The GDK will probably also make it possible to expose our large collection of geo-layers in a useful manner, using the device’s built-in sensors.
Layar will actively use the GDK once it comes out, and we will provide feedback through the Glass Explorer program to make sure we can build a nice experience, ready for when Google Glass hits the consumer market.
Google Glass is an exciting new platform that will bring some nice new possibilities. Quite a few AR companies have announced that they will support Google Glass. At Layar, we do our research before making bold claims that set impossible expectations. No, true augmented reality is not possible on Glass at the moment. And no, the current Mirror API will not enable an AR platform like Layar (or any other AR platform) to provide a good user experience for enriching the real world. But with some effort, and using the new GDK, Layar will be able to create a great experience that enriches the world with the digital content created for the Layar platform.