Blog

A common problem you will face when developing Backbone applications is deciding where to put shared logic. At first blush, inheritance (via extend) can solve most of your problems. When you have a group of similar classes, simply make a common ancestor and have them all inherit from it. But what happens when you have a group of *unrelated* classes that need a similar feature? This is where the Mixin pattern becomes incredibly useful.

For the purposes of this article, we will build a simple mixin that shows a pop-up alert message with some text when a method is called.

First Attempt

Our first foray into mixing functionality into our views will be quite simple. First, we will create an object to house our grouped functions, and then we will attach it to our Backbone view:

Continue Reading Article

Integration challenges and solutions come in a wide range of scope and complexity, from multi-year, multi-million-dollar engineering engagements to scripts that scrape a screen every hour on a cron job. Likewise, enterprises have historically taken on integration initiatives for a variety of reasons, most often to allow siloed legacy applications to share data without a complete rewrite.


The Problem

Today, mobile initiatives are a huge driver of integration projects. Enterprise workforces increasingly demand access, on their phones and tablets, to the internal tools they use in the office.

For an Enterprise Integration solution targeting mobile, the architecture usually involves tying into existing tools and data stores, often transforming and caching some data before exposing a subset of the internal systems’ functionality via REST resources. It’s more like a specialized piece of middleware that also integrates systems than a full-blown Enterprise Integration project in the traditional sense.

The Challenges

Allowing access to mission-critical systems from smartphones is different from, for example, a business intelligence tool reporting on data from several legacy data stores. Security concerns are much more acute when access to the company’s revenue data is in a user’s pocket, accessible over the Internet rather than on an IT-managed workstation in corporate headquarters behind a firewall. Limiting access to only the necessary subset of data and guaranteeing industrial-strength security safeguards are two concerns of any mobile enterprise integration solution.

Network connectivity in the mobile world is reliably unreliable. Mobile APIs need to optimize payload size through compression, paging and properly designed data representations. It’s usually not enough to simply expose existing systems, even in organizations that have service-oriented architectures in place. Mobile solutions require an integration layer to pull data from several data sources and tailor the response to the app’s specific requirements.
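
To make the paging point concrete, here is a minimal sketch (my illustration, not a prescription from any particular project) of what a mobile-tailored, paged resource might look like using JAX-RS. The resource name, the trimmed-down fields, and the OrderGateway backend are all invented for the example:

```java
import java.util.List;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

// Hypothetical resource: exposes a slim, paged view of a legacy order system.
@Path("/orders")
@Produces(MediaType.APPLICATION_JSON)
public class OrderResource {

    // A thin summary tailored to the mobile app, not the full legacy record.
    public static class OrderSummary {
        public String id;
        public String status;
        public long totalCents;
    }

    // Paged envelope so the client always knows where it is in the result set.
    public static class Page {
        public int page;
        public int size;
        public long total;
        public List<OrderSummary> items;
    }

    // Hypothetical wrapper around the internal system of record.
    public interface OrderGateway {
        long countOrders();
        List<OrderSummary> findOrders(int page, int size);
    }

    private final OrderGateway gateway;

    public OrderResource(OrderGateway gateway) {
        this.gateway = gateway;
    }

    // GET /orders?page=0&size=25 — small, predictable payloads for flaky networks.
    @GET
    public Page list(@QueryParam("page") @DefaultValue("0") int page,
                     @QueryParam("size") @DefaultValue("25") int size) {
        Page result = new Page();
        result.page = page;
        result.size = Math.min(size, 100); // cap page size defensively
        result.total = gateway.countOrders();
        result.items = gateway.findOrders(page, result.size);
        return result;
    }
}
```

Compression is usually better handled at the HTTP layer (a gzip filter or the web server itself), so the resource only has to keep the representation small and the page sizes bounded.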

The internal systems you’re trying to connect to mobile apps have their own variety of protocols and interfaces. In a…

Continue Reading Article

This is our first installment of WillowTree Labs, a recurring blog post series in which we will discuss the details of our quarterly internal research projects. Each project is voted on by our team, and designers and developers share updates at our weekly Research & Development meetings. We conduct these research projects in an effort to stay on top of the latest technology trends, continue learning, and contribute innovative new mobile solutions for our clients.


Good startups grow fast. While WillowTree is outgrowing its ‘startup’ moniker, we still have our share of growing pains. We have moved and renovated several times over the past few years, all to make room for new hires. As a rapidly growing company, our biggest problem is not the struggling Wi-Fi network or the contractors bustling around; it is the ever-growing bathroom line. Since last year, we have doubled our staff without adding any new bathrooms. Renovation plans are in the works, but we were not willing to wait. Since bathrooms can’t be built overnight, we built a tool to tell us if one is available. Enter: “Bathroom Monitor”.

Our primary goal was to detect and broadcast the bathroom status throughout the office, using whatever technology was available. After several iterations, we elected to use magnetic contact switches and a Raspberry Pi. Each contact switch would be mounted on a bathroom door and wired to a central RPi. The RPi would then broadcast the switch values through an API. The end goal is to have desktop/mobile clients that consume the API and report the status in real time.
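
To give a concrete picture of the API piece before diving into the hardware below, here is a rough sketch of the idea in Java. The post doesn’t say what the service was actually written in, and the GPIO pin numbers and the /status endpoint here are assumptions for illustration: the RPi reads each switch’s sysfs-exported GPIO value and serves the results as JSON.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BathroomMonitor {

    // Hypothetical BCM pin numbers; each contact switch is wired to one GPIO pin
    // that has already been exported via /sys/class/gpio/export.
    private static final int[] DOOR_PINS = {17, 27, 22};

    // "1" on the pin means the magnetic switch is open, i.e. the door is open.
    private static boolean isDoorOpen(int pin) throws IOException {
        String raw = new String(Files.readAllBytes(
                Paths.get("/sys/class/gpio/gpio" + pin + "/value"))).trim();
        return raw.equals("1");
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // GET /status -> {"doors":[true,false,true]}
        server.createContext("/status", exchange -> {
            StringBuilder json = new StringBuilder("{\"doors\":[");
            for (int i = 0; i < DOOR_PINS.length; i++) {
                if (i > 0) json.append(',');
                json.append(isDoorOpen(DOOR_PINS[i]));
            }
            json.append("]}");
            byte[] body = json.toString().getBytes("UTF-8");
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

A desktop or mobile client then only needs to poll that endpoint and render which doors are open.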

Here is how we built it.

Parts List

  • Raspberry Pi, Amazon
  • 3 Magnetic contact switches, Amazon
  • Prototyping board, Amazon
  • Adafruit PermaProto Pi Breadboard, Amazon
  • Wire strippers
  • 16-18 gauge wire (10-30 ft.)
  • Banana connectors, Wikipedia
  • USB Wi-Fi dongle, ModMyPi.com
  • 5V 2A micro-USB power supply (optional, to power the Wi-Fi dongle), Amazon

Assembly

Configuring Raspberry Pi

If this is your first time working with a Raspberry Pi, use this guide to…

Continue Reading Article

An important announcement for Android developers from this year’s Google I/O was the full rollout of the Android runtime (ART).  ART significantly improves Android’s performance, increasing application speed and reducing “jank” across the board; it is the performance boost that users have long been waiting for.

ART was announced last year as an alpha runtime with the release of KitKat, and with the L developer preview, it is now the standard, fully replacing the Dalvik runtime.  Let’s take a look at what ART offers and why it is one of the most important steps in a long-running effort to improve Android’s smoothness.

Explaining runtimes

First, let’s define what a runtime does.  A runtime is a library used by a compiler to implement language functions during the execution of a program.  It’s essentially the framework or platform on which your code runs.  The C++ runtime, for example, is simply a collection of functions, but other runtimes, like .NET, package in a garbage collector and other language tools.

Up to this point, Android apps have used the Dalvik virtual machine to execute code.  Java programs are compiled to Java bytecode, which is then translated to Dalvik bytecode by the “dx” tool. The Dalvik bytecode is then packaged as a Dalvik executable file (hence “dexing”), which is designed for constrained systems like you’d traditionally find on mobile devices.

With the L release, which is anticipated to arrive this fall, Dalvik will be replaced by ART.

ART over Dalvik

ART introduces ahead-of-time (AOT) compilation, in contrast to Dalvik’s just-in-time (JIT) compiler, and that shift can be a real benefit for mobile applications. For apps running on Dalvik, the JIT will compile your DEX files to machine code when your application is launched and as your app is running. Performing this step at launch can slow down app start times, especially on resource-starved devices. AOT compilation eliminates compiling bytecode to machine code at launch and instead performs this step at installation time. When the app…

Continue Reading Article

Android Notifications

One of the biggest takeaways from Google I/O was how much Android is evolving as an ecosystem.  It’s no longer just an operating system for phones and tablets–you’ll now be able to wear it on your wrist, use it in your car, and watch it on your television.  Android is very quickly going to be everywhere, and it’s important that developers take advantage of this by displaying their app notifications in sensible places.  If you’ve been using stock Android notification APIs, you’re already in a great spot when it comes to the future of Android.  You may need a couple of easy tweaks here or there, but for the most part things should work great.  Let’s take a look at some of those tweaks and the new notification APIs exposed by the L developer preview and Android Wear.

Form and Function

In L, notifications have been given a material-inspired styling rendered as cards.  Gone are the days of dark notification backgrounds, as the new notifications have a shadow-casting light background.  The foreground contains dark text and action icons, and across the board, icons are treated as silhouettes.  There are **no** new icon guidelines, so you don’t need to do anything with your assets so long as you did them right in the first place.  L will treat icons as masks, and draw them in the correct color.  This means that it’s imperative that you remove any opacity you have in your notification icons.

L exposes a new API that allows you to provide color branding by setting a notification accent color.  Notification.Builder.setColor() will fill a circle behind your notification’s small icon.
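
As a quick sketch of how that looks in code (the icon resource, text, and color here are placeholders, not from the post):

```java
import android.app.Notification;
import android.app.NotificationManager;
import android.content.Context;
import android.graphics.Color;

public class BrandedNotifications {

    // Minimal sketch: setColor() is the new L API that tints the circle drawn
    // behind the small icon; everything else is standard builder boilerplate.
    public static void notifyBuildFinished(Context context) {
        Notification notification = new Notification.Builder(context)
                .setSmallIcon(R.drawable.ic_stat_notify) // placeholder resource; must be a flat silhouette
                .setContentTitle("Build finished")
                .setContentText("Your nightly build completed successfully.")
                .setColor(Color.parseColor("#FF5722"))   // brand accent color
                .build();

        NotificationManager manager =
                (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
        manager.notify(1, notification);
    }
}
```

The small icon itself still needs to be an opacity-free silhouette, since L masks it and fills it with the appropriate color.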

Music Player

L also brings a new notification template for media…

Continue Reading Article


The Google I/O preview of Android L has created a great deal of excitement for mobile app designers everywhere. The changes seen in the preview of Android L (I’m rooting for the L to stand for “Life Saver”) are quite extensive. The dark, flat design of KitKat is being overhauled to become more alive through depth and fluid animation. Google’s new design language, “Material Design,” adds real-time shadows, realistic animations, and smart interactions that respond to the user’s actions. There are changes beyond the UI as well: apps are interlinked, notifications are more intuitive, and adaptive design allows apps to be consistent and easy to use across devices.

1. App Indexing

Android L will allow users to search through Chrome and see results from the apps downloaded on the phone.  When a user searches in Chrome for the score of the Ohio State basketball game, for instance, the results will include a result from the NCAA app if it’s installed on the device, which the user can tap to launch the app.  This interlinks web apps and native apps, making for a more streamlined user experience.

2. Interactive Notifications

Notifications are now going to be smarter and more interactive.  Android L notifications will no longer be locked to the notifications bar. Instead, they will be a key part of the lock screen, with the most urgent or relevant notifications displaying first. Through Visibility Controls, the user can manage which types of notifications display on the lock screen in order to protect their privacy.  Notifications will also be more interactive; users can perform common tasks from the notification itself or swipe a notification away to remove it from the list.  When an app is in use, “heads-up” high-priority notifications appear on top of the app with actions revealed for quick interaction. Not only do notifications work on…

Continue Reading Article

Car interfaces have a tendency to lag behind when it comes to usability and functionality. Historically, they’ve all been very custom implementations with no interoperability with other systems. This made getting content to a driver range from impossible to incredibly frustrating. Times are changing, though, and Google is trying to bring us all along with Android Auto.

Android Auto is Google’s push into the automotive world, and it’s backed by some of the biggest players in the game. It’ll allow auto manufacturers to offer the latest in Android functionality without needing to upgrade any firmware in the car. This is because it’s all run from your phone – that’s right, the entire UI is coming from your device. That means you won’t need to buy a new car to get the newest version of Android Auto, and updates to the in-car Android Auto UI will come in the form of software updates to your Android phone. This also means that Android Auto is more like Google’s Cast protocol (which powers the oh-so-popular Chromecast) than it is like Android, since it’s not really a full operating system. It’s just an interface layer that car manufacturers can add onto their existing entertainment systems.

Adding Android Auto features to an app will also be a straightforward process: you implement the MediaService interface classes for streaming media to the car and provide an extended notification to manage the needed actions. Android Auto also provides all of the UX for your media. This means that customizing the layouts per app is out of the question, but when you’re in an area such as vehicles, abstracting away all the legal issues and regulatory factors involved with designing a UX for a car (spoiler: there are a ton) is often the right move. However, with Google’s latest shift in how icons are managed in the Material Theme, apps can easily theme the colors of the Auto UI (seen below:…

Continue Reading Article


Wearables are the next big thing. Never a step behind, Google finally announced its operating system for smartwatches and other future wearables: Android Wear. As designers, one thing we need to keep in mind when designing for these new, tiny screens is not to take the UI paradigms of phones or tablets and expect them to translate the same way on a smartwatch. Although Android Wear and phones/tablets present similar information, they are two different experiences and should be treated as such when designing apps for the respective devices.

With this new interface of Android Wear comes new thinking. Hayes Raffle said it best during his talk at Google I/O:

“Computing should start to disappear and not be the foreground of our attention all the time.”

Android Wear is the perfect example of how technology is transitioning into allowing people to do less computing but still get the same information. Now people can quickly check what is going on in their digital lives and get back to the real world without being immersed in their devices for minutes on end, as illustrated below:

[Image: designwear2]

With this in mind, designing for Android Wear should be all about “glanceability.” People should be given a singular and focused interaction when viewing information. In the example below, the design on the right is much easier to digest in a split second compared to the design on the left. This is because only the most crucial information is being shown in a large, easily viewable format.

[Image: designwear3]

Be sure to check out the documentation provided by Google to get a complete understanding of best practices for designing apps for Android Wear.

Continue Reading Article


Google I/O From the Trenches: Android TV

One major announcement out of Google I/O this year was Android TV. I want to talk a bit about the Android TV platform and the impact it’s going to have on developers and consumers alike.

What is Android TV?

At its core, Android TV is a platform for Android apps that live on your TV. The platform will be integrated into all Sony, Philips, and Sharp TVs next year, with other manufacturers sure to join. On top of that, some OEMs will be releasing standalone Android TV boxes, and cable providers will be integrating Android TV into their cable boxes.  Later this year you’ll be able to purchase an Android TV device and place it in your living room alongside your other consoles.  Following that, you’ll be able to get rid of some of those consoles, especially things like streaming boxes, as existing Android apps are optimized for Android TV.

The ADT-1

Google gave select attendees the ADT-1, which is the reference hardware platform for Android TV.  It packages a Tegra K1 processor alongside other outstanding specs, and it’s built to allow developers to test and deploy their apps for Android TV.  We were lucky enough at WillowTree to obtain a few units for testing, and will be posting more about our experiences with the unit later.  First impressions are great; it’s fluid and easy to navigate, and this is still just a preview release.

Why Android TV? What makes it better?

Honestly, I wasn’t excited about Android TV when it was announced.  Google has a history of failed television-centric product launches.  The Nexus Q and Google TV of old were especially painful.  Last year, the company knocked it out of the park with Chromecast, an inexpensive method of easily streaming content.  I had my doubts that Android TV would be able to replicate Chromecast’s success.  After attending a couple of…

Continue Reading Article

Google Material Design

Chet Haase and Dan Sandler were on site at Google I/O to talk about the new “L” developer preview of Android. The L preview is out now for Android developers, along with system images for the Nexus 5 and Nexus 7.  I sat through three talks on what’s new in Android and material design, to learn what we can expect as Android developers.  The sessions blew by incredibly fast, because there is so much packaged into the L release, even in a preview.  I’ll summarize what the L release signifies for developers, specifically when it comes to Android design.  Visit d.android.com/preview to learn more.

To clarify one thing: the first question Chet answered was what exactly “L” stands for, to which he said “‘L if I know.”  So it’s still an unknown!

Material Design

If you didn’t catch the I/O keynote, Matias Duarte presented an exposition on “material design” and Google’s vision for not only mobile phones, but displays of all shapes and sizes.  These displays include websites, television screens, and wearables, and Google made it clear that they have a unified approach to design for all platforms.  Our VP of Design, Blake Sirach, presented his thoughts on material design from a design perspective earlier today.  At its core, material design presents delightful interactions and experiences through content depth, tangible objects, and responsive animations.  Users will no longer have a button that simply changes colors when pressed; instead, that button will respond with ripples and waves when users interact with it.  Screen content can declare elevation and depth properties, which will tell the Android framework to lay views out at certain z-levels, before applying system-wide lighting and shading.  This shading is taken care of by the operating system, so developers will get that depth for free.
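
For developers, that boils down to a couple of new View properties in the L preview. As a small illustration (the view and values here are mine, not from the session), you can declare a resting elevation and animate translationZ on top of it, and the framework draws the shadows for you:

```java
import android.animation.ObjectAnimator;
import android.view.View;

public class ElevationDemo {

    // Sketch: declare a resting elevation for a card-like view and animate a
    // temporary lift. The system renders shadows for both automatically.
    public static void applyDepth(View card) {
        // Static depth: the view casts a shadow consistent with its elevation.
        card.setElevation(4f);

        // Dynamic depth: briefly raise the view (e.g. while it is pressed or dragged).
        ObjectAnimator lift = ObjectAnimator.ofFloat(card, "translationZ", 0f, 8f);
        lift.setDuration(150);
        lift.start();
    }
}
```

The same resting elevation can also be declared in layout XML via the android:elevation attribute.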

Supporting material design in an Android app starts by implementing the…

Continue Reading Article


Google I/O 2014 has been the most exciting Google event yet for designers. Having designed apps for four major Android versions so far, I’ll be the first to say that Google has been… slow… to promote high-end design for its app ecosystem. With the announcement of what Google calls “Material Design,” Google is taking a firm stance on how designers should think about everything from watch app design (Android Wear) to Android handsets and tablets to Android TV.

What Google has to say about Material Design:

“Our material is grounded in tactile reality, inspired by our study of paper and ink, yet open to imagination and magic.”

For a company with its roots in search, where optimization and calculated engineering decisions have traditionally taken precedence over design and user experience, this is a bold, welcome statement. From what we’ve seen in Android L so far, it’s a huge step forward in enabling great visual designs in Android.

For example, there will be:

  • Improved drop shadow rendering, enabling greater depth within each view. *Hint: developers will have a much easier time implementing your pesky drop shadows now.*
  • Dynamic new view transitions and element motion features, which will reduce the ‘jumpy’ transitions found in previous versions of Android, and allow designers to define control-enforcing motion caused by a user’s touch.
  • A more stylized and organic look than its robot-inspired Holo predecessor, emphasizing big type, edge-to-edge images, and a clear visual hierarchy through properly used negative space.

There is much more articulated in the This is Material Design overview than these three improvements. Also check out Google’s new centralized design site for access to each design feature they are highlighting at I/O 14 (also, note the URL google.com/design, which further emphasizes my point about Google’s refreshing focus on UX).

While OS fragmentation will likely make implementing new Android UX features a struggle, the WillowTree UX team can’t wait to see…

Continue Reading Article

As a UX designer at WillowTree Apps, I get to see a plethora of interesting projects going on throughout the office. Recently, one of these projects had our team very excited. We’d been working on an app redesign for a client, and they wanted to implement Chromecast within their app. Seeing as Google released the highly anticipated Chromecast SDK just a couple of months ago, we knew this project was going to be one of the first of its kind. But before my team could start designing this feature, we first had to understand what exactly Chromecast was and how to make it a useful component for the end user.

[Image: Google Chromecast HDMI dongle]

So what exactly is Chromecast?

Since the invention of the smartphone, people have searched for the best ways to share content on their small screens with larger screens, such as TVs. Sure, there are a few options available to you today, but none are as cheap and easy to use as Chromecast. At only $35, this is a great way for you to enjoy online videos and music on your television straight from your smartphone. Just plug the Chromecast dongle (shown above) into the HDMI port on your TV and you can instantly control content from your smartphone, tablet, or computer.

New Chromecast Features

With the Google I/O event currently going on, there have been a couple of new announcements surrounding Chromecast. To make connecting easier, Google will now give users the ability to automatically connect to any Chromecast-linked TV in the area without being on the same Wi-Fi network. If the Chromecast doesn’t recognize the device, a PIN can be entered to allow the device access.

Another exciting announcement was the Android mirroring feature. Now, users can mirror their Android device directly to their TV to give them a much larger screen to view visual content such as Google Maps or Google Earth. This also makes enjoying photos…

Continue Reading Article