If you’re reading this post, it’s likely you’re in one of two camps of thought:

Camp 1: “I don’t know what Google Tag Manager is, but I want it.”
Camp 2: “I love Google Tag Manager for web, and mobile will be just as easy.”

To each of these camps I reply: not so fast. Like all things tech, there are ins and outs that require careful consideration. In this post, I’ll explain to you what Google Tag Manager is, where it came from, and how it applies to mobile.

Beyond the marketing buzzwords like “IT-friendly,” “quick and easy,” and “multi-platform,” deciphering what exactly Google Tag Manager is can be a challenge. Most people come around to the idea that Google Tag Manager is “souped-up Google Analytics.” But what does that really mean? What does Google Tag Manager actually give you? We’ve got to look at the web’s evolution to answer these questions.

Once upon a time, there was only the Web (I like to refer to this time as “before the mobile era,” or “BME” for short). During this time, web development teams implemented “tags” (snippets of JavaScript), which fed usage information to data-hungry marketers. Marketers would constantly request tagging tweaks in their quest to increase conversions and reach performance goals. The result was massive amounts of wasted time both coordinating teams to modify event tracking tags, and waiting on test-release cycles to complete. To stay ahead of the competition, marketers needed more agility and more flexibility.

Google Tag Manager is built on a promise that marketers won’t need to rely so heavily on development teams. It places power in their hands by letting them decide for themselves what tags should be defined. Rather than developers sprinkling snippets throughout their otherwise pristine code to explicitly tag events, a single Tag Manager snippet replaces them all.
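A hypothetical sketch of what that looks like in page code: the container snippet (installed once) creates a shared data layer, and application code simply pushes named events for marketer-defined rules to match. The event and variable names here are invented for illustration.

```javascript
// The Tag Manager container snippet (installed once in the page <head>)
// normally creates the dataLayer; fall back to a plain array so this
// sketch is self-contained.
var dataLayer = dataLayer || [];

// Application code no longer calls analytics tags directly. It pushes a
// named event, and a rule defined in the container (e.g. "event equals
// userLoggedIn") decides whether the "User Logged In" tag fires.
function trackLogin(userId) {
  dataLayer.push({
    event: 'userLoggedIn', // matched by a rule in the container
    userId: userId         // exposed to tags as a name-value pair
  });
}
```

The key point is that this is the only tracking code developers ship; everything else (which tags fire, and when) lives in the Tag Manager container and can be changed by marketers without a release.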

The Nitty Gritty of How Google Tag Manager Works:

Tag Manager is built around the concept of a “container,” which holds the following types of configuration:

  • Tags – what to see in your report (e.g. “User Logged In”)
  • Rules – when a tag should be “fired” (e.g. “href clicked containing /login”)
  • Macros – which name-value…
Continue Reading Article

We’re proud to announce that WillowTree has been recognized for the third consecutive year as one of America’s fastest-growing private companies on the 2014 Inc. 5000 List. This year we jumped ahead 524 positions to grab the 454th position, placing us among the top 10 percent of all companies on Inc.’s exclusive annual list.

Our rapid growth over the previous 12 months is a testament to the fact that organizations across industries like healthcare, sports, media and entertainment, as well as enterprises with specific field sales and field services functions, recognize the huge efficiency gains and return on investment from mobile. These companies are truly engaging their customers and workforces in meaningful and value-adding ways through apps and connected devices, and making people’s lives easier.

We’d like to thank our partners and clients for the continued trust they place in WillowTree. We are proud of our team, and this third-time honor from Inc. Magazine. Check out the Inc. 5000 List to read more about this year’s honored companies, and learn more about WillowTree’s ranking here.

Continue Reading Article

A common problem you will face when developing Backbone applications is deciding where to put shared logic. At first blush, inheritance (via extend) can solve most of your problems. When you have a group of similar classes, simply make a common ancestor and have them all inherit from it. But what happens when you have a group of *unrelated* classes that need a similar feature? This is where the Mixin pattern becomes incredibly useful.

For the purposes of this article, we will build a simple mixin that shows a pop-up alert message with some text when a method is called.

First Attempt

Our first foray into mixing functionality into our views will be quite simple. First, we will create an object to house our grouped functions, and then we will attach it to our Backbone view:
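A minimal sketch of that first attempt (the names are illustrative, and Object.assign stands in for Underscore’s _.extend so the example runs without Backbone installed):

```javascript
// Group the shared behavior in a plain object. Any view -- related or
// not -- can borrow these functions without changing its ancestry.
var AlertMixin = {
  showAlert: function (text) {
    var message = 'Alert: ' + text;
    // In the browser this would call window.alert(message); the guard
    // lets the sketch run where no alert() exists.
    if (typeof alert === 'function') alert(message);
    return message;
  }
};

// With Backbone and Underscore available you would write:
//   _.extend(MyView.prototype, AlertMixin);
// Object.assign performs the same shallow copy onto the prototype.
function LoginView() {}
Object.assign(LoginView.prototype, AlertMixin);

var view = new LoginView();
view.showAlert('You are now logged in.');
```

Because the mixin is copied onto the prototype rather than inherited, the same AlertMixin can be attached to any number of unrelated views, models, or routers.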

Continue Reading Article

Integration challenges and solutions come in a wide range of scope and complexity–from multi-year, multi-million dollar engineering engagements to scripts that scrape a screen every hour on a cron job. Likewise, enterprises have historically taken on integration initiatives for a variety of reasons, most often to allow siloed legacy applications to share data without a complete rewrite.


The Problem

Today, mobile initiatives are a huge driver of integration projects. Enterprise workforces increasingly demand access to the internal tools they use in the office on their phones and tablets.

For an Enterprise Integration solution targeting mobile, the architecture usually involves tying into existing tools and data stores, often transforming and caching some data before exposing a subset of the internal systems’ functionality via REST resources. It’s more like a specialized piece of middleware that also integrates systems than a full-blown Enterprise Integration project in the traditional sense.

The Challenges

Allowing access to mission-critical systems from smartphones is different from, for example, a business intelligence tool reporting on data from several legacy data stores. Security concerns are much more acute when access to the company’s revenue data is in a user’s pocket, accessible over the Internet rather than on an IT-managed workstation in corporate headquarters behind a firewall. Limiting access to only the necessary subset of data and guaranteeing industrial-strength security safeguards are two concerns of any mobile enterprise integration solution.

Network connectivity in the mobile world is reliably unreliable. Mobile APIs need to optimize payload size through compression, paging and properly designed data representations. It’s usually not enough to simply expose existing systems, even in organizations that have service-oriented architectures in place. Mobile solutions require an integration layer to pull data from several data sources and tailor the response to the app’s specific requirements.
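As a sketch of that tailoring step (the field and parameter names are invented, not from any particular system), an integration layer might page and trim internal records before they ever reach the app:

```javascript
// Turn full internal records into a small, paged payload for mobile.
// Sending only the fields the app actually renders keeps payloads
// small even before gzip compression is applied at the HTTP layer.
function toMobilePage(records, page, pageSize) {
  var start = page * pageSize;
  return {
    page: page,
    totalPages: Math.ceil(records.length / pageSize),
    items: records.slice(start, start + pageSize).map(function (r) {
      return { id: r.id, title: r.title }; // drop the heavyweight fields
    })
  };
}
```

A REST resource built on a helper like this gives the app one round trip per screen of data, instead of one call per backing system.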

The internal systems you’re trying to connect to mobile apps have, themselves, a variety of protocols and interfaces. In a…

Continue Reading Article

This is our first installment of WillowTree Labs, a recurring blog post series in which we will discuss the details of our quarterly internal research projects. Each project is voted on by our team, and designers and developers share updates at our weekly Research & Development meetings. We conduct these research projects in an effort to stay on top of the latest technology trends, continue learning, and contribute new innovative mobile solutions for our clients.

Good startups grow fast. While WillowTree is outgrowing its ‘startup’ moniker, we still have our share of growing pains. We have moved and renovated several times over the past few years, all to make room for new hires. As a rapidly growing company, our biggest problem is not the struggling Wi-Fi network or the contractors bustling around…it is the ever-growing bathroom line. Since last year, we have doubled our staff without adding any new bathrooms. Renovation plans are in the works, but we were not willing to wait. Since bathrooms can’t be built overnight, we built a tool to tell us if one is available. Enter: “Bathroom Monitor”.

Our primary goal was to detect and broadcast the bathroom status throughout the office, using whatever technology was available. After several iterations, we elected to use magnetic contact switches and a Raspberry Pi. Each contact switch would be mounted on a bathroom door and wired to a central RPi. The RPi would then broadcast the switch values through an API. The end goal is to have desktop/mobile clients that consume the API and report the status in real-time.
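To make the idea concrete, here is a hypothetical sketch of the client side. The endpoint, payload shape, and switch polarity are all assumptions for illustration, not the actual Bathroom Monitor API:

```javascript
// The RPi reads one magnetic contact switch per door; assume it serves
// the raw values as an array, where 1 means the circuit is closed
// (door shut, likely occupied) and 0 means the door is open.
function bathroomStatus(switchValues) {
  return switchValues.map(function (closed, i) {
    return { door: i + 1, available: !closed };
  });
}

// A desktop/mobile client would poll the RPi's API and render the
// result, e.g. (URL is hypothetical):
//   fetch('http://raspberrypi.local/doors')
//     .then(function (res) { return res.json(); })
//     .then(function (doors) { render(bathroomStatus(doors)); });
```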

Here is how we built it.

Parts List

  • Raspberry Pi, Amazon
  • 3 Magnetic contact switches, Amazon
  • Prototyping board, Amazon
  • Adafruit PermaProto Pi Breadboard, Amazon
  • Wire strippers
  • 16-18 gauge wire (10-30 ft.)
  • Banana connectors, Wikipedia
  • USB Wi-Fi dongle
  • 5V 2A micro-USB power supply (optional, to power the Wi-Fi dongle), Amazon


Configuring Raspberry Pi

If this is your first exercise with a Raspberry Pi, use this guide to…

Continue Reading Article

An important announcement for Android developers from this year’s Google I/O was the full rollout of the Android runtime (ART).  ART significantly improves Android’s performance, increasing application speed and reducing “jank” across the board.  It provides the “performance boosting thing” that users have long been waiting for.

ART was announced last year as an alpha runtime with the release of KitKat, and with the L developer preview, it is now the standard, fully replacing the Dalvik runtime.  Let’s take a look at what ART offers and why it is one of the most important steps in a long-running effort to improve Android’s smoothness.

Explaining runtimes

First, let’s define what a runtime does.  A runtime is a library used by a compiler to implement language functions during the execution of a program.  It’s essentially the framework or platform on which your code runs.  The C++ runtime, for example, is simply a collection of functions, but other runtimes, like .NET, package in a garbage collector and other language tools.

Up to this point, Android apps have used the Dalvik virtual machine to execute code.  Java programs are compiled to Java bytecode, which is then translated to Dalvik bytecode by the “dx” tool. The Dalvik bytecode is then packaged as a Dalvik executable file (hence “dexing”), which is designed for constrained systems like you’d traditionally find on mobile devices.

With the L release, which is anticipated to arrive this fall, Dalvik will be replaced by ART.

ART over Dalvik

ART introduces ahead-of-time (AOT) compilation, which can be beneficial for mobile applications as opposed to Dalvik’s just-in-time (JIT) compiler. For apps running on Dalvik, the JIT will compile your DEX files to machine code when your application is launched and as your app is running. Performing this step at launch can slow down app start times, especially on resource-starved devices. AOT compilation eliminates compiling bytecode to machine code at launch and instead performs this step at installation time. When the app…

Continue Reading Article

Android Notifications

One of the biggest takeaways from Google I/O was how much Android is evolving as an ecosystem.  It’s no longer just an operating system for phones and tablets–you’ll now be able to wear it on your wrist, use it in your car, and watch it on your television.  Android is very quickly going to be everywhere, and it’s important that developers take advantage of this by displaying their app notifications in sensible places.  If you’ve been using stock Android notification APIs, you’re already in a great spot when it comes to the future of Android.  You may need a couple of easy tweaks here or there, but for the most part things should work great.  Let’s take a look at some of those tweaks and the new notification APIs exposed by the L developer preview and Android Wear.

Form and Function

In L, notifications have been given a material-inspired styling rendered as cards.  Gone are the days of dark notification backgrounds, as the new notifications have a shadow-casting light background.  The foreground contains dark text and action icons, and across the board, icons are treated as silhouettes.  There are no new icon guidelines, so you don’t need to do anything with your assets so long as you did them right in the first place.  L will treat icons as masks, and draw them in the correct color.  This means it’s imperative that your notification icons carry their shape entirely in the alpha channel, with no embedded color.

L exposes a new API that allows you to provide color branding by setting a notification accent color: Notification.Builder.setColor() will fill a circle behind your notification’s small icon.

Music Player

L also brings a new notification template for media…

Continue Reading Article


The Google I/O preview of Android L has created a great deal of excitement for mobile app designers everywhere. The changes seen in the preview of Android L (I’m rooting for the L to stand for “Life Saver”) are quite extensive. The dark flat design of KitKat will be overhauled to become more alive through depth and fluidity of animations. Google’s new interface, “Material Design,” adds real-time shadows, realistic animations, and smart interactions dependent on the user’s actions. There are changes beyond the UI as well: apps are interlinked, notifications are more intuitive, and adaptive design allows for apps to be consistent and easy to use across devices.

1. App Indexing

Android L will allow users to search through Chrome and display results from the apps downloaded on the phone.  When a user searches in Chrome for the score of the Ohio State basketball game, for instance, the results will include content from the NCAA app if it’s installed on the device, which the user can tap to launch the app.  This allows web apps and native apps to be interlinked, making for a more streamlined user experience.

2. Interactive Notifications

Notifications are now going to be smarter and more interactive.  Android L notifications will no longer be locked to the notifications bar. Instead, they will be a key part of the lock screen, with the most urgent or relevant notifications displaying first. Through Visibility Controls, the user has the ability to manage the type of notifications that display on the lockscreen in order to protect their privacy.  The notifications will also be more interactive; users can perform common tasks from the notification itself or swipe the notification away to remove it from the list.  When an app is in use, “Heads-up” high-priority notifications appear on top of the app with actions revealed for quick interaction. Not only do notifications work on…

Continue Reading Article

Car interfaces have a tendency to lag behind when it comes to usability and functionality. Historically, they’ve all been very custom implementations with no interoperability with other systems. This made getting content to a driver range from impossible to incredibly frustrating. Times are changing, though, and Google is trying to bring us all along with Android Auto.

Android Auto is Google’s push into the automotive world, and it’s backed by some of the biggest players in the game. It’ll allow auto manufacturers to offer the latest in Android functionality without needing to upgrade any firmware in the car. This is because it’s all run from your phone – that’s right, the entire UI is coming from your device. That means you won’t need to buy a new car to get the newest version of Android Auto, and updates to the in-car Android Auto UI will come in the form of software updates to your Android phone. This also means that Android Auto is more like Google’s Cast protocol (which powers the oh-so-popular Chromecast) than it is like Android, since it’s not really a full operating system. It’s just an interface layer that car manufacturers can add onto their existing entertainment systems.

Adding Android Auto features to an app will also be a straightforward process: implement the MediaService interface classes for streaming media to the car, and provide an extended notification to manage the needed actions. Android Auto also provides all of the UX for your media. This means that customizing the layouts per app is out of the question, but in a domain like automotive, abstracting away all the legal and regulatory factors involved in designing a UX for a car (spoiler: there are a ton) is often the right move. However, with Google’s latest changes to icon management in the Material Theme, apps can easily theme the colors of the Auto UI (seen below:…

Continue Reading Article


Wearables are the next big thing. Never a step behind, Google finally announced their operating system for smartwatches and other future wearables: Android Wear. As designers, when designing for these new, tiny screens, one thing we need to keep in mind is to not take the UI paradigms of phones or tablets and expect them to translate the same way on a smartwatch. Although both Android Wear and phones/tablets present similar information, they are two different experiences, and should be treated as such when it comes to designing apps for the respective devices.

With this new interface of Android Wear comes new thinking. Hayes Raffle said it best during his talk at Google I/O:

“Computing should start to disappear and not be the foreground of our attention all the time.”

Android Wear is the perfect example of how technology is transitioning into allowing people to do less computing but still get the same information. Now people can quickly check what is going on in their digital lives and get back to the real world without being immersed in their devices for minutes on end.


With this in mind, designing for Android Wear should be all about “glanceability.” People should be given a singular and focused interaction when viewing information. In the example below, the design on the right is much easier to digest in a split second compared to the design on the left. This is because only the most crucial information is being shown in a large, easily viewable format.


Be sure to check out the documentation provided by Google to get a complete understanding of best practices for designing apps for Android Wear.

Continue Reading Article


Google I/O From the Trenches: Android TV

One major announcement out of Google I/O this year was Android TV. I want to talk a bit about the Android TV platform and the impact it’s going to have on developers and consumers alike.

What is Android TV?

At its core, Android TV is a platform for Android apps that live on your TV. The platform will be integrated into all Sony, Philips, and Sharp TVs next year, with other manufacturers sure to join. On top of that, some OEMs will be releasing standalone Android TV boxes, and cable providers will be integrating Android TV into their cable boxes.  Later this year you’ll be able to purchase an Android TV device and place it in your living room alongside your other consoles.  Following that, you’ll be able to get rid of some of those consoles, especially things like streaming boxes, as existing Android apps are optimized for Android TV.

The ADT-1

Google gave select attendees the ADT-1, which is the reference hardware platform for Android TV.  It packages a Tegra K1 processor alongside other outstanding specs, and it’s built to allow developers to test and deploy their apps for Android TV.  We were lucky enough at WillowTree to obtain a few units for testing, and we will be posting more about our experiences with the unit later.  First impressions are great; it’s fluid and easy to navigate, and this is still just a preview release.

Why Android TV? What makes it better?

Honestly, I wasn’t excited about Android TV when it was announced.  Google has a history of failed television-centric product launches.  The Nexus Q and Google TV of old were especially painful.  Last year, the company knocked it out of the park with Chromecast, an inexpensive method of easily streaming content.  I had my doubts that Android TV would be able to replicate Chromecast’s success.  After attending a couple of…

Continue Reading Article

Google Material Design

Chet Haase and Dan Sandler were on site at Google I/O to talk about the new “L” developer preview of Android. The L preview is out now for Android developers, along with system images for the Nexus 5 and Nexus 7.  I sat through three talks on what’s new in Android and material design, to learn what we can expect as Android developers.  The sessions blew by incredibly fast, because there is so much packed into the L release, even in a preview.  I’ll summarize what the L release signifies for developers, specifically when it comes to Android design.

To clarify one thing: the first question Chet answered was what exactly “L” stands for, to which he said “‘L if I know.”  So it’s still an unknown!

Material Design

If you didn’t catch the I/O keynote, Matias Duarte presented an exposition on “material design” and Google’s vision for not only mobile phones, but displays of all shapes and sizes.  These displays include websites, television screens, and wearables, and Google made it clear that they have a unified approach to design for all platforms.  Our VP of Design, Blake Sirach, presented his thoughts on material design from a design perspective earlier today.  At its core, material design presents delightful interactions and experiences through content depth, tangible objects, and responsive animations.  Users will no longer have a button that simply changes colors when pressed; instead, that button will respond with ripples and waves when users interact with it.  Screen content can declare elevation and depth properties, which will tell the Android framework to lay views out at certain z-levels, before applying system-wide lighting and shading.  This shading is taken care of by the operating system, so developers will get that depth for free.

Supporting material design in an Android app starts by implementing the…

Continue Reading Article