Design on AeroPlane for Twitter

By yours truly:

Finally took some time to create a portfolio outlining some of the work I’ve taken up on my own time. Check out my thoughts on the design process behind AeroPlane for Twitter. Enjoy.

Can Skeuomorphic Design Live Forever?

When the iPhone was unveiled in 2007, it changed mobile forever. It transformed the way we interact with our phones and set the standard for what a smartphone is to this day.

Designing the interface for a touch screen device was a new frontier for interaction design. Interface designers were faced with the difficult job of creating an intuitive way to interact with a device that had no buttons, especially in a pre-iPhone world of BlackBerrys, PDAs and feature phones. Tapping on a glass screen gives no physical feedback. To ease the transition, a lot of behind-the-scenes techniques were implemented, from keyboard prediction algorithms (a sketch follows below) to visual cues such as convex buttons and flickable switches. We were taught to use these devices through representations of objects that we understood in the physical world. But now that touch screen devices are ubiquitous, we no longer need these analogies to hold our hand. We know how to use touch-based devices. These metaphors can now be over the top and can even be a hindrance.
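As a concrete illustration of one such behind-the-scenes trick: the original iPhone keyboard is reported to have quietly inflated the touch targets of letters that were statistically likely to come next, without changing what was drawn on screen. Below is a minimal sketch in Swift; the probability table and function names are invented for illustration, not taken from any real keyboard implementation.

```swift
// Made-up stand-in for a real language model: probability of
// the next letter given what the user has typed so far.
let nextLetterProbability: [String: [Character: Double]] = [
    "th": ["e": 0.6, "a": 0.2, "i": 0.1],
    "q":  ["u": 0.95],
]

// Scale each key's invisible hit radius by how probable that
// letter is. The key is drawn at the same size either way.
func hitRadius(for key: Character, afterTyping prefix: String,
               baseRadius: Double) -> Double {
    let p = nextLetterProbability[prefix]?[key] ?? 0.05
    return baseRadius * (1.0 + p) // up to ~2x for near-certain keys
}
```

With this sketch, hitRadius(for: "e", afterTyping: "th", baseRadius: 22) yields a target roughly 60% larger than a cold key, which is exactly the kind of invisible hand-holding that compensated for the missing tactile feedback.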

Skeuomorph |ˈskjuːə(ʊ)mɔːf| noun

An object or feature which imitates the design of a similar artefact in another material.

When an element looks or behaves like an object in real life, that is skeuomorphism. While skeuomorphic design has the advantage of familiarity, that familiarity can be short-lived, because skeuomorphic elements are tied to a context and a time. From a functional standpoint, their details can only be understood within the context in which they are viewed. A quick example is the desktop metaphor, introduced at the birth of the general-purpose computer and its graphical user interface. Will the concepts of files and folders still make sense millennia, or even decades, from now? Will buttons, latches and switches on a digital interface make sense anymore?

Currently, the trend is heading towards flat UI, which strips out skeuomorphic elements in favour of a “modern” look. Beyond aesthetics, a flatter, minimalist UI also lends itself to a design unattached to any context or time. The fewer bells and whistles an interface has, the better it will age, as there are no period details to hold it back. Less is more, and this is where design is heading.

That being said, current mobile operating systems still contain references to real-life objects such as switches and buttons. Is there such a thing as an interface that will stand the test of time regardless of context? Or do all interfaces change just like fashion, influenced by trends and culture, and sometimes changed simply for the sake of change?

Only time will tell.

Control Center in iOS 7: Convenience at the Cost of Simplicity

Next to the radical colour palette introduced with iOS 7 at Apple’s WWDC, the most surprising addition to iOS was Control Center. Gone are the days when jailbreaking was required to add these Android-esque pull-out settings toggles. After several iterations of iOS, Apple has recognized that the average smartphone user, shaped by exposure to other platforms, has matured enough to understand toggles. This small addition marks an obvious shift in Apple’s design philosophy in the post-Steve era.

Ideally, a user should never have to fiddle with device settings, because toggles always add complexity to a system, on top of the other metaphors and paradigms a user must already understand to operate a device: the lock screen, the home screen, the recent apps tray, and so on. Of course, we take our understanding of these abstractions for granted, but at one point these concepts were very alien. More importantly, the introduction of Control Center is also an admission that the device isn’t capable of handling these settings on its own. Ideally, a user should never be burdened with toggling brightness, WiFi or cellular data, perhaps in hopes of preserving battery life by disabling radios and lowering the display brightness.1 All of these settings should be managed by the system, which is why this functionality did not exist in previous versions of iOS.2
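As a rough illustration of what “managed by the system” could look like, here is a minimal sketch in Swift. UIScreen.main.brightness is a real, settable API; readAmbientLightLux() is hypothetical, since iOS exposes no public ambient light sensor API, and the mapping curve is invented for the sketch.

```swift
import UIKit

// Hypothetical sensor read: stands in for whatever the system
// uses internally, since there is no public API for this.
func readAmbientLightLux() -> Double {
    return 300.0 // placeholder value for the sketch
}

// Map ambient light onto screen brightness so the user never
// has to touch a slider.
func autoAdjustBrightness() {
    let lux = readAmbientLightLux()
    // Rough log-scale mapping: ~1 lux (dark) -> 0.05,
    // ~10,000 lux (daylight) -> 1.0.
    let level = min(max(log10(lux) / 4.0, 0.05), 1.0)
    UIScreen.main.brightness = CGFloat(level)
}
```

The point is not the exact curve, but that the slider disappears from the user’s mental model entirely.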

Granted, these toggles add convenience, but at the cost of simplicity. The iPhone is known as a ‘simple to use’ device; this addition strays from that perception.

Below is a quote from Steve Jobs, from when ‘multitasking’ was introduced in iOS 4, that showcases this philosophy:

People shouldn’t have to understand multitasking. Just use is [sic] as designed, and you’ll be happy. No need to ever quit apps.

Apple designed ‘multitasking’ on the iPhone such that the user did not have to worry about apps draining battery in the background.3 Similarly, a user shouldn’t have to worry about managing system settings. 

The more options and controls a user is given, the more they become a servant to the machine. A phone should be designed to serve; you are not a slave.


1. Orientation sensors and light sensors already automate some of these functions. See location-based WiFi as an additional example of automating WiFi toggling.

2. Some controls were present in previous versions of iOS, such as music playback and screen orientation lock via the recent apps tray.

3. Battery life is not saved in any significant way by closing apps.

Google Glass Was Designed to Get in the Way

Let me get this out of the way first: I was blown away when Google Glass was unveiled. The excitement that the ‘future’ was imminent, and the endless possibilities this technology could bring, felt like the stuff of dreams. As I slowly came back down to earth, however, I saw that this ‘future’ still has a long way to go.

I admire Google for being the first into the fray in trying to make this technology mainstream. However, in its current form, its interaction model isn’t streamlined enough to overtake the smartphone as a primary device. And a head-mounted solution needs to do exactly that; it makes no sense to carry an additional device.

Ideally, a head-mounted device needs to meet the following criteria:

  1. Present information to the user with as few interactions as possible.
  2. Provide a method of input at least equivalent to a keyboard in accuracy.

As a mobile solution that is supposed to get out of your way, Glass should be modelled as a device that doesn’t demand attention. It should act as a medium that provides more information about a user’s immediate surroundings with minimal interaction.

For example, if I am looking at a product in a store, Glass should be able to provide ratings and reviews of that product; if I am reading text in a language that is not native to me, it should be able to provide a translation alongside it. Granted, you can acquire this information by simply “Googling” it with a voice command, but what advantage does that offer over performing the same task on a smartphone?
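For contrast, here is a minimal sketch in Swift of the passive model described above. Every type and function in it is hypothetical (this is not the Glass SDK); it only illustrates a device that watches the scene and surfaces information on its own, rather than waiting for swipes or voice commands.

```swift
// All types and functions here are invented for illustration.
struct Frame { let pixels: [UInt8] }
struct Card { let title: String; let body: String }

// Stubs standing in for an on-device recognizer and a ratings
// lookup; real versions would be ML models and a web service.
func recognizeProduct(in frame: Frame) -> String? { nil }
func fetchRatingCard(for product: String) -> Card? { nil }

// The passive loop: watch the scene continuously and surface a
// card only when context yields something useful. The wearer
// never swipes or speaks.
func passiveLoop(frames: AnyIterator<Frame>, display: (Card) -> Void) {
    for frame in frames {
        if let product = recognizeProduct(in: frame),
           let card = fetchRatingCard(for: product) {
            display(card)
        }
    }
}
```

Glass today inverts this model.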

Instead of being a passive user of Glass, you must be an active one to get any utility out of it.

To get anything done with Glass, you must actively swipe or speak voice commands. So much for not getting in the way.

At this point, the only difference between a smartphone and Glass is that the latter is a smartphone mounted on your head, with an undependable method of input. Voice recognition still isn’t accurate enough to perform actions reliably.

At the moment, Glass requires a lot of gestures and voice input to be functional. This isn’t how it, or any head-mounted device, should work. It is exceptional at delivering notifications, but once you need to act on that information, you might as well pull out your phone, where you can type comfortably and reliably. (Not to mention the added privacy a screen provides versus speaking aloud in public.) Only when Google meets the criteria above, and finds a way for me to reply to a text without everyone in my vicinity hearing my conversation, will I consider Glass.