
A Smarter, Faster Layar Creator

Chris Cameron February 18, 2013

Today in the Layar Creator, you may notice a few small changes we’ve made that will go a long way to improve your experience.

Tooltips and Pre-Filled Forms

First, you’ll notice that many of the menus for adding a button now have tooltips and pre-filled entries to make them easier to understand. This helps a lot with buttons like the “Download App” button, for which you need to enter links to the App Store or Google Play store. A handy tooltip now walks you through finding these URLs.

In the example above, a tooltip provides a nice suggestion for helping to sort messages you might receive from users of the “Email us” button.

Easier Purchase of Individual Pages

Below is the next improvement we’ve made today: purchasing pages. Previously, individual pages could only be purchased at the moment of publication; only bundles could be bought on their own outside of publishing. Now both can be purchased at any time, not just when publishing!

Better Analysis of Pages

And finally, we’ve implemented a system which performs a more detailed check of your images when you upload them. Previously we did a quick check to save time, but a more detailed check is also needed. Now we do both!

When you upload your image, we now follow the first quick check with a more detailed analysis, but you can already get to work adding content to your pages while we check. You’ll see an “ANALYZING” badge on the thumbnails of your pages until it finishes.
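
As a rough sketch of how such a two-stage check could be wired up (the types, names and statuses here are invented for illustration, not Layar’s actual Creator code), the quick check can run synchronously at upload time while the detailed analysis finishes on a background task:

```cpp
#include <future>
#include <string>

// Invented types and names for the sketch; not Layar's Creator code.
enum class PageStatus { Analyzing, Ready, Rejected };

struct Page {
    std::string imagePath;
    PageStatus status = PageStatus::Analyzing;
};

// Declared only as placeholders here; the next sketch shows what a
// detailed trackability check might look like.
bool quickCheck(const std::string& imagePath);    // cheap: format, size, ...
bool detailedCheck(const std::string& imagePath); // slow: trackability analysis

// Returns right after the quick check so the user can keep adding content;
// the detailed analysis finishes in the background and flips the badge.
std::future<void> analyzePage(Page& page) {
    if (!quickCheck(page.imagePath)) {
        page.status = PageStatus::Rejected;
        return {};
    }
    page.status = PageStatus::Analyzing;              // "ANALYZING" badge shown
    return std::async(std::launch::async, [&page] {
        // A real implementation would synchronize this status update.
        page.status = detailedCheck(page.imagePath)
                          ? PageStatus::Ready
                          : PageStatus::Rejected;     // page disabled in the Creator
    });
}
```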

It only takes a few moments, and if your page doesn’t pass the check, you won’t be able to use it in the Creator. In most cases (like in the example above), a page will be disabled because it is either blank or contains only text, but symmetry and repetitive patterns can be tricky as well.

These kinds of images are hard to scan and won’t give your users the best experience. Trust us, it’s for your own good! But chances are your images will be fine and you’ll never see this message.
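
To give an idea of what a detailed analysis might look for, a simple trackability heuristic can count distinctive feature points: blank or text-only pages yield very few, while clean, varied artwork yields plenty. The sketch below uses OpenCV’s ORB detector and a made-up threshold; it illustrates the general approach rather than Layar’s actual analysis:

```cpp
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <string>
#include <vector>

// Rough heuristic: a page is hard to track if it yields too few distinctive
// feature points. The threshold is made up for the sketch.
bool looksTrackable(const std::string& imagePath, int minKeypoints = 150) {
    cv::Mat image = cv::imread(imagePath, cv::IMREAD_GRAYSCALE);
    if (image.empty()) return false;              // unreadable upload

    auto orb = cv::ORB::create(1000);             // cap detection at 1000 points
    std::vector<cv::KeyPoint> keypoints;
    orb->detect(image, keypoints);

    // Blank or text-only pages produce few keypoints; highly repetitive or
    // symmetric patterns produce many near-identical ones, which a real
    // analysis would additionally reject by comparing their descriptors.
    return static_cast<int>(keypoints.size()) >= minKeypoints;
}
```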

And more!

These are just the most visible changes we’ve made to the Creator today. We’ve also fixed our share of bugs and made other improvements behind the scenes that will make the Creator faster, more stable and more robust.

Enjoy!

Permalink: www.layar.com/news/blog/458


Making the Layar App Faster

Chris Cameron January 25, 2013

If you’re a frequent user of Layar, you may have noticed some improvements to the speed of the app lately. From the end user’s perspective, speed is one of the top priorities for having a positive experience when viewing things like interactive print.

No one wants to sit around waiting for data to load, and that’s why we’ve recently taken steps to improve the speed of the information we send to your mobile device when you scan interactive print content.

When Layar launched, there was no way our trio of co-founders could have foreseen how far the app would grow in just a few short years. The Layar App has been downloaded 28 million times all over the globe, so it makes sense that our data be global as well.

Previously, no matter where you were, scanning interactive print content meant transferring data to and from our main servers in Europe. But now, thanks to the wonders of the cloud (Amazon CloudFront to be specific), we can spread multiple “nodes” out across the world closer to our users, increasing efficiency and speed.

There are close to 40 nodes in all, located in North and South America, Europe, Asia and Australia. Now, no matter where you are on Earth, chances are you are a lot closer to the data you want than you were before.
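
CloudFront routes each request to a nearby edge node automatically, so there is nothing for the app to implement here. Purely to illustrate why a nearby node cuts latency, here is a toy sketch that picks the geographically closest node from a list; the Node type, the node list and the selection logic are invented for the example:

```cpp
#include <cmath>
#include <string>
#include <vector>

struct Node { std::string region; double lat, lon; };

// Great-circle distance in kilometres (haversine formula).
double distanceKm(double lat1, double lon1, double lat2, double lon2) {
    constexpr double kEarthRadiusKm = 6371.0;
    constexpr double kPi = 3.14159265358979323846;
    auto rad = [kPi](double deg) { return deg * kPi / 180.0; };
    double dLat = rad(lat2 - lat1), dLon = rad(lon2 - lon1);
    double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
               std::cos(rad(lat1)) * std::cos(rad(lat2)) *
               std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * kEarthRadiusKm * std::asin(std::sqrt(a));
}

// Pick the node closest to the user (assumes a non-empty node list).
// In reality CloudFront does this routing itself via DNS; the point is
// simply that a closer node means a shorter round trip.
const Node& nearestNode(const std::vector<Node>& nodes,
                        double userLat, double userLon) {
    const Node* best = &nodes.front();
    double bestDist = distanceKm(userLat, userLon, best->lat, best->lon);
    for (const Node& n : nodes) {
        double d = distanceKm(userLat, userLon, n.lat, n.lon);
        if (d < bestDist) { best = &n; bestDist = d; }
    }
    return *best;
}
```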

It’s faster, it’s more efficient, and it’s all part of how we’re dedicated to providing the best interactive print experience for our users.

Permalink: www.layar.com/news/blog/453


AR Tracking Improvements in 6.2

Dirk Groten March 30, 2012

If you’ve downloaded Layar v6.2 for Android or iOS and have tried out some of the vision layers, you’ll probably have noticed that tracking was improved tremendously in this version. I’d like to shed a bit of light on what we’ve done technically to get this far.

So what is tracking in the first place? It’s the ability of the device to recognize and follow the object that is being viewed through the camera and determine its position in space. That way, if it’s augmented in a vision layer, the device knows where to draw the augmented content.
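
In practice this boils down to matching feature points between the tracked object and the camera frame and solving for the object’s pose relative to the camera. The sketch below uses OpenCV’s standard solvePnP routine to show the principle; it is the textbook approach, not Layar’s own tracker:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Given feature points matched between the tracked page (known 3D positions
// on the flat page, z = 0) and the current camera frame (2D pixels), estimate
// where the page sits relative to the camera.
bool estimatePose(const std::vector<cv::Point3f>& pagePoints,   // on the page, metres
                  const std::vector<cv::Point2f>& framePoints,  // in the frame, pixels
                  const cv::Mat& cameraMatrix,                  // intrinsics from calibration
                  cv::Mat& rotation, cv::Mat& translation) {
    if (pagePoints.size() < 4 || pagePoints.size() != framePoints.size())
        return false;                       // too few correspondences to solve
    return cv::solvePnP(pagePoints, framePoints, cameraMatrix,
                        cv::noArray(),      // assume an undistorted image
                        rotation, translation);
}
// The resulting rotation and translation feed straight into the renderer's
// model-view matrix, so the augmented content is drawn on top of the page.
```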

Making augments stick

Until v6.1, we were showing the live camera feed in the AR view with the augmented content on top of it. To do that, we’d take the camera frame from the live feed, do lots of calculations to determine the position of the tracked object and then draw the augmented content. That takes some processing time, so that in the end, the drawn content would always be lagging behind the camera images that you saw. 

We’ve changed that in v6.2: we now show the camera frame that belongs to the calculation for the augmented content, so the content sticks much better to the object. You can see this by moving the object smoothly: there is a slight delay between your movement and the image you see, but the content really sticks to the object you move. Don’t move too fast, though, as this will blur the image and the app will lose track of the object.
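
A minimal sketch of the difference (with placeholder names standing in for the real vision and rendering code) is to keep each camera frame paired with the pose computed from it and only display the two together:

```cpp
#include <opencv2/core.hpp>

struct Pose { cv::Mat rotation, translation; };

// Placeholders for the real vision and rendering code.
bool computePose(const cv::Mat& frame, Pose& pose);   // heavy: tracking + pose estimation
void drawFrame(const cv::Mat& frame);                 // render the camera image
void drawAugments(const Pose& pose);                  // render content on top of it

// Before v6.2 the newest live frame was shown with a pose computed from an
// older frame, so augments lagged the video. The idea behind the fix is to
// hold on to each frame until its own pose is ready and display the two
// together, trading a slight delay for content that sticks to the object.
void processAndDisplay(const cv::Mat& capturedFrame) {
    cv::Mat frame = capturedFrame.clone();  // keep this exact frame
    Pose pose;
    bool tracked = computePose(frame, pose);
    drawFrame(frame);                       // show the frame the pose belongs to
    if (tracked)
        drawAugments(pose);                 // augments line up with what's on screen
}
```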

Zooming into the augments

There’s another big improvement in 6.2: When a layer developer uploads a reference image (the image of the object that she wants to augment), we create a fingerprint of that image that is used to recognize and track the features of that object. Obviously you need a certain number of features inside the camera view to calculate a good estimate of the position of the object. If the user holds the camera close to the object, zooming into the object, there are fewer features that can be tracked, and at some point we’d lose track of the object.

We’re now adding features dynamically as you zoom in, allowing the app to track an object much longer. This is really nice when the augments on a page are quite small and you need to get closer to read them. 
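
The general idea, sketched below with OpenCV’s corner detector and invented thresholds rather than Layar’s actual algorithm, is to top up the tracked point set from the live frame whenever too few of the original fingerprint features remain in view:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cstddef>
#include <vector>

// When zooming in leaves too few of the original fingerprint features in
// view, detect fresh corners in the current frame so tracking can continue.
void topUpFeatures(const cv::Mat& grayFrame,
                   std::vector<cv::Point2f>& trackedPoints,
                   std::size_t desiredCount = 200) {
    if (trackedPoints.size() >= desiredCount)
        return;                                        // still enough features in view

    std::vector<cv::Point2f> fresh;
    cv::goodFeaturesToTrack(grayFrame, fresh,
                            static_cast<int>(desiredCount - trackedPoints.size()),
                            /*qualityLevel=*/0.01,
                            /*minDistance=*/10.0);     // keep new points spread out

    // New points are then followed frame-to-frame (e.g. with optical flow)
    // and mapped back to the reference image via the current pose.
    trackedPoints.insert(trackedPoints.end(), fresh.begin(), fresh.end());
}
```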

A cool example is the Eppo comic book, where you can view the original draft drawings on top of many of the pages. Well, you can now also zoom in really close, as shown in the screenshots below. Don’t zoom in too fast though, the app still needs some time to process the images.

[Screenshots: zooming in close on augmented Eppo comic book pages]

Taming the ARM processors

A lot of the code we write for Layar is platform-independent. The vision algorithms are written entirely in C++ and integrated into iOS using Objective-C++ and into Android using the JNI. This way we can write the code once and use it on both platforms. Since Layar v6.0, this has allowed us to bring simultaneous updates to iOS and Android.
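
As an illustration of what such a binding can look like on the Android side (the package, class and function names below are made up, and the real Layar interface is certainly richer), a thin JNI wrapper is enough to hand camera data from Java to the shared C++ core, while iOS calls the same C++ function directly from Objective-C++:

```cpp
#include <jni.h>
#include <cstdint>

// Shared C++ core, identical on iOS and Android. Name invented for the sketch.
namespace vision {
bool trackFrame(const uint8_t* pixels, int width, int height);
}

// Thin JNI wrapper so Java/Android can call into the shared C++ code.
extern "C" JNIEXPORT jboolean JNICALL
Java_com_example_ar_Tracker_trackFrame(JNIEnv* env, jobject /*thiz*/,
                                       jbyteArray frame, jint width, jint height) {
    jbyte* pixels = env->GetByteArrayElements(frame, nullptr);
    bool ok = vision::trackFrame(reinterpret_cast<const uint8_t*>(pixels),
                                 width, height);
    env->ReleaseByteArrayElements(frame, pixels, JNI_ABORT);  // read-only, no copy back
    return ok ? JNI_TRUE : JNI_FALSE;
}
```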

For Layar v6.2 we’ve decided to move one level down and write the processor-intensive code directly in assembler using ARM NEON. The cool thing is that, until recently, all iOS and Android devices ran on ARM processors. Not all of them support NEON (NVIDIA’s Tegra chips, for example), so not all of them benefit from our new optimized code.

In our reference implementation, we’ve been able to decrease the processing time of certain critical operations by a factor of 8: for example, preparing a frame for display takes 2ms instead of 15ms with the ARM NEON implementation. Obviously we don’t achieve the same speed-up in Layar itself, where there’s a lot more going on, but writing NEON-optimized code is proving to be a huge improvement.
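
To give a flavour of what NEON buys you (a generic example rather than one of the actual Layar routines), a per-pixel loop can process sixteen pixels per instruction instead of one, with a plain C++ tail that doubles as the fallback for chips without NEON:

```cpp
#include <cstddef>
#include <cstdint>
#if defined(__ARM_NEON)
#include <arm_neon.h>
#endif

// Illustrative only: the kind of per-pixel work (here, brightening a
// grayscale frame before display) where NEON handles 16 pixels at a time.
void brighten(const uint8_t* src, uint8_t* dst, std::size_t count, uint8_t amount) {
    std::size_t i = 0;
#if defined(__ARM_NEON)
    const uint8x16_t add = vdupq_n_u8(amount);
    for (; i + 16 <= count; i += 16) {
        uint8x16_t pixels = vld1q_u8(src + i);      // load 16 pixels
        pixels = vqaddq_u8(pixels, add);            // saturating add, no overflow
        vst1q_u8(dst + i, pixels);                  // store 16 pixels
    }
#endif
    for (; i < count; ++i) {                        // scalar tail / non-NEON fallback
        unsigned v = src[i] + amount;
        dst[i] = v > 255 ? 255 : static_cast<uint8_t>(v);
    }
}
```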

Thanks to this optimization we were able to add support for streaming video as AR objects without a significant loss in performance. We’ve decided to support this only on dual-core ARM processors, where different tasks, such as rendering video and analyzing the camera image to calculate the pose, can be distributed to different cores.
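
A rough sketch of that split (the thread and function names are invented for the example, not Layar’s actual scheduler) is simply to run the camera-analysis loop and the video-rendering loop on their own threads, which a dual-core chip can schedule onto separate cores:

```cpp
#include <atomic>
#include <functional>
#include <thread>

// Placeholders for the two heavy jobs named above.
void analyzeCameraFrames(std::atomic<bool>& running);  // pose-estimation loop
void renderVideoAugments(std::atomic<bool>& running);  // video decode + draw loop

// On a dual-core ARM device the two loops run in parallel on separate
// cores instead of competing for one.
void runArSession() {
    std::atomic<bool> running{true};
    std::thread vision(analyzeCameraFrames, std::ref(running));
    std::thread video(renderVideoAugments, std::ref(running));

    // ... the AR view stays open until the user leaves it ...
    running = false;
    vision.join();
    video.join();
}
```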

Now Intel has joined the club with its x86-based Atom processors. That’s a game changer and obviously our NEON-optimized code won’t run there. But we’ve been pleasantly surprised at the performance of the x86 chip. Even without optimization it runs our C++ code at higher speed than similar ARM chips.

We can conclude that, with the arrival of Intel in the smartphone market, a new era of speed improvements has begun, which is great news for processor-hungry apps like Layar.

Dirk Groten, CTO

All credits to Michael Hofmann, Lawrence Lee, Ronald van der Lingen, Anatoliy Samara and Andrey Koval for these achievements.

Permalink: www.layar.com/news/blog/374
