Josh Clark About the Future of (not only) Mobile Interfaces

With the rise of touch-enabled smartphones and tablets, a whole new category of user interfaces was introduced. Designers and developers addressed this with CSS media queries, which give a stylesheet hints about the characteristics of the device displaying a page so that content can be formatted to provide a good user experience. But besides small-screen touch-enabled devices, there are new technologies just around the corner: the 'Internet of Things' is becoming reality, with lots of new device types that need to be considered when formatting output, while natural user interfaces like speech and gesture pose challenges when interpreting input.
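
For readers who have not worked with them, a media query lets a stylesheet apply rules only when the browser reports certain characteristics, most commonly the viewport width. A minimal sketch - the selectors and the 600px breakpoint are illustrative assumptions, not taken from any particular project:

```css
/* Default layout: a comfortable reading column on large screens. */
article {
  width: 60%;
  margin: 0 auto;
}

/* Narrow viewports, typically phones: use the full width and
   enlarge links so they are easier to tap. */
@media (max-width: 600px) {
  article {
    width: 100%;
  }
  article a {
    padding: 0.5em;
  }
}
```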

InfoQ had the chance to talk to Josh Clark, user interface design consultant and author of Tapworthy: Designing Great iPhone Apps. Josh is also a frequent speaker at conferences around the globe, where he provides insight into the challenges of current and future interface design.

InfoQ: Hi Josh, nice to meet you. Would you please briefly introduce yourself to our readers? 

Josh Clark: I'm a designer specializing in mobile strategy and user experience. When I say mobile, it's not just phones anymore, I guess. I would say it's really multi-platform strategy and user experience. How do you design a great digital experience across this whole range of screens and platforms that confront us now? I have written three books including Tapworthy: Designing Great iPhone Apps and my design clients include AOL, Time Inc. and eBay. I help companies figure out how to make the best use of all the different platforms they have, through my design consulting, strategy consulting and training. So, that's what I'm up to.

InfoQ: Before we talk about the next generation of mobile interfaces, maybe you can elaborate a little on what makes a good interface in the first place and what that means for users and developers. What's the current state of the art from your point of view?

Josh: Well, a great interface is an interface that you don't even notice, right? The idea of any interface design is to make it effortless and invisible, so that the content comes to the fore and not the interface, because nobody comes to your website or uses your application for the interface. I mean, a handful of nerds and designers like to explore interfaces for their own sake, but those are people who are curious about industry practice and who "collect" interfaces. Most people just come to your application to get something done. So the best interfaces let you do that and simply get out of the way. One of the interesting things that touch has brought about, I think, is being able to create the illusion that you're interacting directly with the content itself. It removes a layer of perceived distraction that visual interfaces - GUI interfaces - of the last 30 years have introduced, which is that we have to understand these visual metaphors that have been put between us and the information. That used to be necessary.

It was a good idea while it lasted, but now we have touch interfaces that give you the illusion of actually manipulating content as if it were a physical object: stretch it, move it, push it aside. That has really changed the way we need to think about the interfaces we create, and it also changes the way we use them as consumers. But touch is also just the first of many new kinds of input that are coming our way, and you can see them coming: there's speech, which is just around the corner from being as mature as we have seen touch become. Apple still calls Siri beta and it feels like it. But we also have natural gesture with things like Kinect or Leap Motion, which is launching this summer. Both of these let you move your arms and body to communicate with machines. Again, those things are still far from mature, and so, like speech, I don't think you want to use a Kinect to run a nuclear power plant.

But you can see that they are almost here, they are getting ready. And importantly, things like speech, natural gesture, facial recognition and camera vision are with us already. They are part of the devices that we carry in our pockets every day, on our phones, or in our living rooms as part of our gaming systems - many people who have an Xbox have a Kinect as well. And these things are increasingly being built into laptops. For example Asus, the laptop maker, has committed to including the Leap Motion device in its laptops later this year, which means that natural gesture is going to be part of our consumer-grade laptops going forward. And the new Kinect that Microsoft is building connects to displays and laptops; in their research labs, they already have a Windows API for using Windows with Kinect.

So these things are not just around the corner - they are happening this year. And what that means is that we need to start thinking about how we create software that adapts not only to any output - that's what has really been consuming us lately, how we adapt software for different screen sizes - but now to any input. Traditionally, of course, we've designed interfaces for keyboard and mouse, and now touch is sort of the new thing. But we also have to prepare to create interfaces that are wonderful and delightful and that work with natural gesture, facial recognition, speech, stylus, handwriting ... all of these different inputs, as well as all of these different outputs. And that's something that is going to be incredibly challenging. But when we get it right, it will also be incredibly rewarding for users.

InfoQ: So the next generation of responsive design will include input?

Josh: That's right! You know, I think that for the web it was a real revelation that hit us about three years ago, when responsive design arrived. It was this truth that had been sitting under our noses all the time, which is that there is no one true output for the web. There's no one fixed size for this thing. And of course that has always been true, but the mobile crisis brought it to the fore. And what's beginning to become clear is that there's also no one true input for the web. We tend to think "oh, that's keyboard and mouse and we can create a separate experience for touch". But again, as these devices begin to hit the mainstream with new ways to interact with information, it's becoming evident that we have to design for many different inputs at once. For responsive web design, it's interesting - we've kind of been using screen size as a proxy to detect a touchscreen. It was never a great idea, but it's becoming an even worse idea now.

So the idea, in other words, was "oh, it's a small screen, it must be a touch screen" versus "desktop size, it must be keyboard and mouse". But in truth, touch screens are coming to laptops and ultrabooks, and we even have these 18" tablets now. So screen size no longer tells us - it's becoming a worse and worse idea to assume that at this size you can design for mouse, but at that size you have to design for touch. And so it's a turning point for responsive design, I think. My conclusion, as I said, is that all screen sizes, all "desktop designs", have to be designed for touch now, too.
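
The proxy Josh criticizes is easy to recognize in a stylesheet. A sketch of that pattern, purely to illustrate the assumption he is warning against - the breakpoint values and selectors here are assumptions, not from any specific site:

```css
/* "Small screen, it must be touch" versus "desktop size, it must be
   keyboard and mouse" - the assumption that no longer holds. */
@media (max-width: 768px) {
  nav a { padding: 1em; }      /* finger-sized targets */
}

@media (min-width: 769px) {
  nav a { padding: 0.25em; }   /* mouse-sized targets */
}

/* A touch-enabled laptop or an 18" tablet matches the second query,
   yet its users still point with a finger. */
```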

InfoQ: Yes, I think the problem is that responsive design has been very focused on sizes. I spent some time on responsive design over the last couple of weeks, and what I actually needed to know about my clients were capabilities, not sizes. If someone connects with a latest-generation Nexus tablet, I detect pixel dimensions that I never had on my desktop computers. It seems that device detection won't work this way much longer.

Josh: Yes. I think that we are starting to see some interesting media queries that tell us more useful things than screen size or orientation. A lot of these things are still hard to detect with JavaScript. Of course, you can detect touch events with JavaScript, but oddly enough, not all touchscreen browsers expose touch events. So it's not a clear-cut thing. There's also the interesting case of devices like the Microsoft Surface which, when the keyboard is attached, has both touch and trackpad. So what are you designing for when people may be moving back and forth between the two? I think we have to design for the lowest common denominator, which in this case is the coarse pointer we call a finger.
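
The "coarse pointer" Josh refers to corresponds to the interaction media features in the Media Queries Level 4 draft (pointer, any-pointer, hover). A minimal sketch, assuming a browser that supports them - support was still uneven at the time of this interview, and the selectors and sizes below are illustrative:

```css
/* Style for the least precise input that may be in use,
   independent of screen size. */
@media (pointer: coarse) {
  /* the primary pointer is imprecise (a finger): enlarge targets */
  nav a {
    display: inline-block;
    min-height: 44px;
    padding: 1em;
  }
}

@media (any-pointer: coarse) {
  /* at least one available pointer is a finger - e.g. a Surface with
     its keyboard and trackpad attached - so keep targets large */
  button {
    min-height: 44px;
  }
}

@media (hover: none) {
  /* no reliable hover: expose actions that would otherwise hide
     behind a :hover state */
  .actions {
    display: block;
  }
}
```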

InfoQ: Speaking of natural interfaces, do you see any challenges concerning privacy? I do have Siri, but I actually don't use it very often. And you sometimes even see prototypes or mock-ups where projectors are integrated into a smartphone, so your screen gets projected onto the nearest wall. But my feeling is that the more our interfaces "leave" the device, the more privacy we lose.

Josh: Yes, that's true. Our phones and our laptops tend to be very intimate, very private interactions. There's this very tight visual and interaction tunnel between us and our device, in the same way there has been with all kinds of personal information tools like books or magazines in the past. You know, who knows what you could be reading in that book? Even if you're in a public space, you could be reading anything - a very private interaction. But when you look at using natural gesture, speech and facial recognition, we start to get more into the realm of how we communicate with one another as humans. And that's actually one of the interesting things when you see all these new inputs coming to technical maturity, if not yet user interface maturity: all the ways we communicate with other people, the machines are now beginning to understand - touch and speech and gesture and facial recognition.

And in the same way that we have to maintain our privacy and be mindful of it when we're talking to other people in a public space, I think the same thing will happen when we talk to machines in a public space. We will have to adapt some social conventions to deal with this. You know, recall the first time you saw someone using a headset on their cellphone while walking down the street, and you thought they were talking to you, or that they were crazy and talking to themselves. And now that interaction is completely normal. So I think these things will feel odd at first as we gesture and speak to devices, but those social conventions will adapt and it will soon seem not at all strange.

InfoQ: How long do you think this will take?

Josh: I think that as soon as the interaction becomes compelling and useful, it will happen quickly. Siri, for example, isn't quite useful enough to overcome some of the social awkwardness of talking to your phone and giving it instructions and personal information. It's still not quite great. But I think once we have great interactions and things that are very compelling, doing those things will feel very natural. You know, you can see people using their tablets to take photos, which is a weird, awkward thing, but it's useful enough that they do it. If it weren't a useful interaction, people wouldn't do it.

InfoQ: When was the last time you checked out an application on an iPhone or Android device and thought "Wow, this is really a new idea, a great user experience"? Is that still possible after six years of this type of interface? Or do we have to take the technical step towards new types of interfaces now?

Josh: I think there is still plenty to be done with the devices that we have now, plenty of exploration to do. It seems like just about every day I discover something new that people are doing with phones or tablets that just seems like magic - really minting new science, if you will. I think the things that are most compelling are the ones that use the on-board sensors in new and unusual ways. One example is a little drum machine app called TableDrum. There are lots of these things in the app stores that let you choose instruments or drum sounds as you tap out a rhythm on the touch screen. TableDrum lets you do that, but it also lets you put the phone aside and tap out the rhythm on the table in front of you, hitting your hands on different objects to make different sounds - you know, like you do sometimes when you are bored, using your coffee mug or whatever to tap out a rhythm on your desk. It listens, associates those sounds with drums and actually plays them as drum machine sounds through the phone.

What's interesting about that is that it moves the interaction off of the screen and into the environment around you. And while all the logic is still happening on the phone, the interaction itself is happening elsewhere and the phone just acts as a speaker. It's things like this that make us think "Wow, what can we do to use sensors to detect what's happening in the environment and do something on the user's behalf?" So you don't have to lose yourself in the screen anymore, not to speak of the attendant anti-social behavior that goes with that, as we're always heads down on our little glowing rectangles. I think there's a lot of opportunity there. What I'm also seeing is the use of camera vision and augmented reality to do things like Word Lens, where you point your camera at text in one language and, on the fly, it paints the text in another language right there in the view. A live translation machine, if you will.

Using those things really does have the effect of magic. But I think it gets really exciting when you combine them with custom sensors. It's becoming trivial and inexpensive to add sensors and an internet connection to just about anything. There's a medical device manufacturer called Proteus that has developed a pill that tells you when it has been taken - which is great for doctors and caretakers who want to understand whether patients are being compliant with their medication. Basically it's a little sensor about the size of a grain of sand that you can put into a pill. It contains just the same stuff you would find in a vitamin - magnesium, some copper. When you take it and it hits your stomach, the acid activates it as a battery and it sends out a weak little signal with some identifying information about the pill. You wear a patch on your stomach that detects this signal, and the patch itself has Bluetooth. That patch can talk to your phone, and your phone relays this information to the internet and to your doctor. That is extraordinary.

You start thinking about the possibilities of the physical world becoming digital like this, the so-called "Internet of Things". There are all kinds of ways that we can use these sophisticated computers that we carry in our pockets, our phones and our tablets, to gather information and communicate with the physical world in entirely new ways. So I think the challenge of the last five or six years has been how to deal with the digital world becoming more physical. That is, we were sort of pasting digital interfaces onto these things that we carry out into the world, and the challenge was to manage the physicality of that interaction. But now we've got this interesting thing where the physical world is becoming more digital. How do we design for sensors that can communicate with the physical world in ever more compelling ways ...

InfoQ: That really sounds a little bit like Star Trek, now ...

Josh: It does! And the thing is that for a long time there has been this conversation: "Oh, this will happen in the future, we will have this and that." But wow, we have it now! Not only is it possible and affordable now - many of us, the majority, are already carrying smartphones that have all these capabilities on board. Sensors, GPS, audio, video - and it's something that is magic the first time we see it. Remember the first time you saw Shazam work? It listened to a song and told you what it was. "Wow! How could that ... that's amazing!" That is something we need to continue: pushing the envelope, gathering the information these devices are able to gather and doing something powerful with it.

InfoQ: In the last few days and weeks there has been news about smart watches. Microsoft plans to do one and Apple is planning to do one ... at least, that's what rumors claim. Do you think it will really be a smart watch, or do you think it will just be some kind of interface extension?

Josh: Well, I think that most people talk about it as an interface extension like the Pebble Watch, serving as a window onto your phone, for example. But we don't know. There's obviously interest in the nerd and geek community in seeing something like this. We will see what happens. And as you say, it's a rumor. I think we'll see that everything that can be connected to the internet will get connected to the internet. Sometimes that will be a more or less valuable interactive surface, but we're also seeing lots of small interfaces that aren't necessarily very clever interfaces but still have real utility. Things like the Nike Fuelband that help you track your motion and activity throughout the day. They have modest interfaces, but importantly they can talk to other devices and transmit their data to them, so you can have a richer experience elsewhere. So I have no doubt that we will see smart watches; it's a question of what they will be. Will they be a sort of full-fledged smartphone replacement? Well, maybe. And I think there are a lot of technical things to deal with there.

But I think we will start to see more and more smart devices with somewhat limited interaction being able to consume and generate some kind of content. And so I think it's a really interesting question: how does your website look on a watch? Or how does it look on the ten-character LED display of a Nike Fuelband? How do we prepare to send our content to any kind of device? I mean, with responsive web design, as an industry we're getting the hang of how to design and deliver a great web experience to rectangles of different sizes. But then we're seeing new things come along, like "Well, hello Google Glass!" You know, I don't know what my web experience should be on that. We need to anticipate surprises like this - real head-scratchers from a design and interaction point of view. By anticipating, I don't mean that we can design for them already. Rather, how can we prepare our content to be flexible enough to pour itself like water into any container?

InfoQ: That's going to be a major challenge, I think. The huge number of devices that seem to be arriving in the near future will have to be addressed somehow ...

Josh: Yes. And well, content strategists and database designers have for a long time been crying "We need more structured data! We need more structured data!" And it has sort of fallen on deaf ears, because it seemed like a lot of work for no evident payoff, or maybe a lot of work for an abstract payoff that wasn't clear. But now the payoff is starting to become clear. I think that the arrival of mobile in a very big, mainstream way, starting with the iPhone and everything that has followed it, was a real crisis for a lot of companies. But like a lot of crises, it's also an opportunity, and mobile is really just the tip of the iceberg when you're only talking about phones and tablets. When you've got devices like, as you say, smart watches or Google Glass, you've got to be ready for just about anything now. This is an opportunity to go back and clean house, get our content in order and come up with the design process that we need to manage this.

Josh Clark is the founder of Global Moxie. Josh is a designer specializing in mobile design strategy and user experience. When he's not building friendly interfaces, he writes about them. In Josh's books and blog, he explores humane software, clever design and the creative process. Josh is the author of four books, including Tapworthy: Designing Great iPhone Apps (O'Reilly, 2010), all of which aim to help you harness technology to make your work easier, more beautiful, more awesome.

Josh is a regular speaker at international technology conferences, sharing his insights about mobile strategy and designing for phones, tablets, and other emerging devices.
