In December 1995, American Airlines Flight 965 departed Miami for Cali, Colombia. The pilots selected the wrong entry from a list of nearby navigation fixes starting with “R”: one whose latitude and longitude appeared correct, but which lay far off their course. The plane and the crew didn’t make it. Although this was a human error, it shouldn’t have been the pilots’ mistake at all.
It seemed as though the computer was utterly unconcerned with the actual flight and its passengers. — Alan Cooper
Our software is not responsive enough. When I say this, I’m not thinking about screen ratios and the device at hand, but about the entire range of contextual information. Where am I? What’s my goal? What did I do beforehand? How did I react in similar situations in recent months? What’s important to me?
Can the intelligence of the application help me be faster, more productive and stop focusing on small things?
Please, can the application do the hard work so I don’t have to?
Scenario 1 — Where am I?
One of the first assignments I give my students is to design a simple mobile application and really take advantage of a specific smartphone feature. The go-to choice for this is — geolocation. Oh, the phone knows where I am, great! I’m going to use this to give the app some added value.
Why this isn’t implemented at the OS level is beyond me. For instance, we use a passcode to prevent strangers from getting into our phones. At home, there are no strangers. So the logical step would be to disable the passcode while the phone and its owner are at home. How hard is this? Not very. How many seconds does it save, and how much does it improve the overall experience? A lot.
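To make the idea concrete, here’s a minimal sketch of how an app could watch for a “home” geofence. The region monitoring calls are real CoreLocation API; the passcode hook is purely hypothetical, since no public iOS API allows it, which is exactly why this belongs at the OS level.

```swift
import CoreLocation

// Sketch: relax the passcode inside a "home" geofence.
// CoreLocation region monitoring is real; setPasscodeRequired is a
// hypothetical OS hook that does not exist in any public API.
class HomeGuard: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let home = CLCircularRegion(
        center: CLLocationCoordinate2D(latitude: 45.8150, longitude: 15.9819), // example home coordinates
        radius: 100, // meters
        identifier: "home"
    )

    func start() {
        manager.delegate = self
        manager.requestAlwaysAuthorization()
        home.notifyOnEntry = true
        home.notifyOnExit = true
        manager.startMonitoring(for: home)
    }

    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        setPasscodeRequired(false) // at home: skip the lock screen
    }

    func locationManager(_ manager: CLLocationManager, didExitRegion region: CLRegion) {
        setPasscodeRequired(true) // anywhere else: lock as usual
    }

    private func setPasscodeRequired(_ required: Bool) {
        // Stand-in for the OS-level behavior this scenario asks for.
        print("Passcode required: \(required)")
    }
}
```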
Bear in mind that a bit of research showed solutions like this are already available, but only for some iPhones.
Scenario 2 — What’s my goal?
Your application has features. For instance, Instagram has a very useful feature called Photo Maps. I use it to find amazing cafes, restaurants and hidden locations during my travels. I trust locals to know best. When I travel to Berlin, I check out Sandra Juto’s feed. When I travel to NYC, it’s a guy named Patrick. I’m really proud of this hack, but it requires a lot, and I mean a lot, of effort.
Let me guide you through the current steps needed to find a cool place near my location in Manhattan, aided by Patrick’s photos.
- Switch to the Explore tab
- Enter the user name
- Press search
- Open the user’s profile
- Switch to the Photo Map tab
- Pan to the right continent
- Zoom in
- Zoom in a little bit more
- Zoom in again
- Wait for the photos to load
- Try to figure out where my location is (there’s no indication on the map itself, and bear in mind I’m in a foreign city)
- Open a group of photos
- Focus on one
- Press the i below it
- Press the location
- Open in Maps (previously this opened in 4SQ, but the relationship broke)
That’s 16 steps. Sixteen steps! Amazing.
I’m fully aware that I’m not using the application in a way the designers predicted. I was not in one of their scenarios. What further proves this point is the fact that after I get the info I was looking for, I still need to press the back button at least a dozen times before the app is usable again.
Of course I want to go to the place this photo was taken. There’s a reason the location data exists. There’s a reason I’m following a person on this service. It’s not just the quality or uniqueness of the photos, it’s a mix of different lifestyle preferences. The data is already there:
- Person I know or follow
- Photo that looks interesting
- Proximity to my current location
Instead of refactoring the entire application, I propose just a quick edit to the current user flow. Let’s make the Explore tab actually useful and get rid of the bad selfies, low-light cocktail shots and summer hot dogs. In line with the design of the Following tab, divide Explore into Nearby and Explore. The Nearby view can then feature only photos from our feed, sorted by proximity to my current location (a rough sketch of that sorting logic follows below).
Instagram: the current list of steps and the suggested design.
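All the ingredients for Nearby are already plain, sortable data. Here’s a minimal sketch of the sorting logic, assuming the geotagged photos from people I follow are available client-side; the types are illustrative, not Instagram’s actual model.

```swift
import CoreLocation

// Sketch of the proposed Nearby view: photos from people I follow,
// filtered and sorted by distance to where I'm standing.
struct FeedPhoto {
    let author: String        // someone I follow
    let location: CLLocation  // the photo's geotag
}

func nearbyPhotos(from feed: [FeedPhoto], around here: CLLocation) -> [FeedPhoto] {
    feed
        .filter { $0.location.distance(from: here) < 2_000 } // within ~2 km, an arbitrary cutoff
        .sorted { $0.location.distance(from: here) < $1.location.distance(from: here) }
}
```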
Now I can go grab a coffee at a place that’s nearby, with one tap and no back buttons needed.
Scenario 3 — What did I do beforehand?
If I perform a search on Google for the best burger place in Berlin and then switch to Foursquare, wouldn’t it be extremely helpful if the application were aware of my previous actions? Maybe the app could prefill the search with the keyword burger, or even open the specific location and have it waiting for me once I start the app.
PS. I have no idea how the burgers are at The Bird.
Performing a Google search in Chrome, then continuing in Foursquare. I would really love to push the idea of Continuity further and have it work across Apple and non-Apple applications. Connected like never before.
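On Apple platforms the closest existing building block is Handoff, via NSUserActivity. Here’s a minimal sketch, assuming both apps agreed on a shared activity type; note that Handoff today only works between apps from the same developer team, which is exactly the limitation wished away above. The activity type string and the prefill behavior are illustrative assumptions.

```swift
import UIKit

var currentActivity: NSUserActivity? // a real app must keep a strong reference

// Publishing side, e.g. the browser, right after a search:
func publishSearchContext(query: String) {
    let activity = NSUserActivity(activityType: "com.example.search") // hypothetical shared type
    activity.title = "Search: \(query)"
    activity.userInfo = ["query": query]
    activity.isEligibleForHandoff = true
    activity.becomeCurrent() // advertise this context for continuation
    currentActivity = activity
}

// Receiving side, e.g. Foursquare's app delegate:
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
        guard userActivity.activityType == "com.example.search",
              let query = userActivity.userInfo?["query"] as? String else { return false }
        // Hypothetical: open the search screen with the keyword pre-filled.
        print("Prefilling search with: \(query)")
        return true
    }
}
```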
Scenario 4 — How did I react in similar situations in recent months?
Runkeeper is an application I use on a daily basis. I usually go running either alone or with three friends, tops. The application doesn’t care that, out of my entire friends list (which was so neatly imported from Facebook), I repeatedly select the same three people. When the run is finished, these extra steps (and seconds) really irritate me. Remember my choices: keep a recently-used list and a favorites list, be smart, recommend who I usually run with on Saturday mornings.
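Here’s a minimal sketch of that recommendation, assuming each completed run is logged with its companions and date. The types and the weekday weighting are illustrative, not Runkeeper’s actual data model.

```swift
import Foundation

// Sketch: rank friends by how often they joined runs on this weekday,
// falling back to overall frequency.
struct RunRecord {
    let date: Date
    let companions: [String] // friend identifiers
}

func suggestCompanions(history: [RunRecord], for date: Date = Date(), limit: Int = 3) -> [String] {
    let calendar = Calendar.current
    let weekday = calendar.component(.weekday, from: date)

    var scores: [String: Int] = [:]
    for run in history {
        let sameWeekday = calendar.component(.weekday, from: run.date) == weekday
        for friend in run.companions {
            scores[friend, default: 0] += sameWeekday ? 3 : 1 // same-weekday runs count triple (arbitrary weight)
        }
    }
    return scores.sorted { $0.value > $1.value }.prefix(limit).map(\.key)
}
```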
I feel like there is a need to give human characteristics to interfaces. This is what we already try to do with microcopy, animation, illustration, iconography and so on. We want to personify the entire UX. We want to “momify” the interface. Your mom is always sympathetic and flexible. She knows what to cook based on how your day went, based on how you look, feel and act. She’s not limited purely by the fact that it’s a Monday dinner.
Are responsive interfaces really just intelligent interfaces? Should our tools act smarter? Is it too much to ask designers and developers to go that extra mile? A few seconds each day, multiplied by millions of users, means hours saved.
Intelligence with fewer features always beats ignorance with more features.
Now that we are designing with screen sizes and aspect ratios in mind, we should go a step further and plan strategically to utilize the device data that’s already collected and available.
In a mobile-first world, can we finally start designing smart first and phones later?
Polite software has common sense.
Polite software is responsive.
This article was originally published on Medium.