
How Artificial Intelligence Impacts Designing Products


Artificial intelligence is changing the way that we interact with technology; eliminating unnecessary user interfaces makes interaction with machines more humane, argued Agnieszka Walorska at ACE conference 2019. "I’m sure that we will soon have new design jobs that didn’t exist before - like virtual assistant personality designer for example", she said.

Expectations of customer experience have changed, and one factor that is becoming more and more important to this change is machine learning. Machine learning algorithms are used in most of the digital products we use every day. The majority of customers expect personalization, which keeps getting better thanks to these algorithms, said Walorska. Most of us use Netflix, YouTube, Facebook, and Amazon, and such highly personalized products are setting the standard for every product to come.
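
The personalization Walorska describes is typically driven by recommendation algorithms. The sketch below is a minimal, illustrative example of item-based collaborative filtering with cosine similarity, not the implementation of any of the products named above; the ratings matrix and the recommend helper are invented purely for the example.

```python
import numpy as np

# Hypothetical user-item ratings matrix (rows: users, columns: items).
# A 0 means the user has not interacted with that item yet.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two item rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user_index, top_n=2):
    """Score unseen items by similarity to items the user already rated, weighted by those ratings."""
    user_ratings = ratings[user_index]
    scores = {}
    for item in range(ratings.shape[1]):
        if user_ratings[item] > 0:
            continue  # skip items the user has already seen
        scores[item] = sum(
            cosine_similarity(ratings[:, item], ratings[:, rated]) * user_ratings[rated]
            for rated in range(ratings.shape[1])
            if user_ratings[rated] > 0
        )
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(user_index=0))  # [2]: the unseen item predicted to interest this user most
```

Real recommender systems work on far larger, sparser data and typically combine several models, but the basic idea of ranking unseen items by learned similarity is the same.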

Walorska mentioned that we’re also getting used to more and more sophisticated voice interfaces; interfaces that wouldn’t be possible without machine learning. Ever since humans invented computers, we have had to communicate with them in ways that machines could understand, as machines struggled (and still struggle) to understand our natural ways of communicating. Progress in machine learning and artificial intelligence makes it possible to do it the other way around -- machines are getting better at "understanding" humans; not only their speech, but also their gestures, facial expressions and biology, said Walorska.

A more humane interaction would be one that frees us from screens, said Walorska. Currently, we are glued to our screens, averaging three hours and 15 minutes a day on smartphones alone. Kids and teens aged 8 to 18 in the US spend an average of more than seven hours a day looking at screens. Walorska mentioned that anticipatory experiences that learn from our data, voice interfaces, sensors, and brain-computer interfaces have the potential to save some of this screen time. But they come at a cost: an even deeper knowledge of our behavioral data.

Walorska quoted Steve Jobs:

Most people make the mistake of thinking design is what it looks like. People think it’s this veneer – that the designers are handed this box and told, "Make it look good!" That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works.

She mentioned that this statement will increasingly characterize the job of the designer in the future, and expects that the roles of designers and developers will become more and more similar. "To design interactions driven by algorithms, we need to have at least a general understanding of the technology," she said. "While designing digital experiences in the algorithmic age, we are also facing difficult ethical questions: privacy, algorithmic bias, and technology addiction."

InfoQ interviewed Agnieszka Walorska, founder of Creative Construction, after her talk at ACE conference 2019:

InfoQ: What is the state of practice in dealing with empathy in user interfaces?

Agnieszka Walorska: One factor in defining how much we enjoy an interaction is empathy: when our counterpart recognizes when we are frustrated, slows down when we’re confused, speeds up when we’re impatient, etc. For a more humane experience, empathy is necessary, and that’s something machines are not yet good at. That’s why interaction with software, even intelligent software, is still pretty unsatisfying (unlike in the movie "Her"). To even approach human capability, intelligent assistants must recognize and adapt to the user’s state, whether that involves awareness, emotion, or comprehension.

While speech recognition is getting quite good (Amazon has patented an Echo feature intended to identify a cold or sadness from the way a person speaks) and recognition of gestures and facial expressions is becoming much more accurate (see the Microsoft Emotions API), there are very few computer systems that use verbal and visual information to recognize user state, and even fewer that change their behavior in response to perceived user states.
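
To make that gap concrete, the following is a minimal, hypothetical sketch of the missing step: changing an assistant's behaviour in response to a perceived user state. The emotion scores are assumed to come from an external recognition service such as the ones mentioned above; the get_emotion_scores wrapper, the score fields and the thresholds are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class EmotionScores:
    # Assumed output of an external emotion-recognition service (stubbed below);
    # real services return a richer set of labels and confidence values.
    frustration: float  # 0.0 - 1.0
    confusion: float    # 0.0 - 1.0
    impatience: float   # 0.0 - 1.0

def get_emotion_scores(audio_or_video_frame) -> EmotionScores:
    """Hypothetical wrapper around a facial-expression or voice-analysis API."""
    raise NotImplementedError("plug in a real recognition service here")

def adapt_response(scores: EmotionScores, default_speech_rate: float = 1.0) -> dict:
    """Crude, rule-based adaptation of an assistant's behaviour to the perceived user state."""
    response = {"speech_rate": default_speech_rate, "offer_help": False, "shorten_answers": False}
    if scores.confusion > 0.6:
        response["speech_rate"] = 0.8   # slow down when the user seems confused
        response["offer_help"] = True   # proactively offer clarification
    if scores.impatience > 0.6:
        response["speech_rate"] = 1.2   # speed up when the user seems impatient
        response["shorten_answers"] = True
    if scores.frustration > 0.8:
        response["offer_help"] = True   # escalate, for example by handing over to a human
    return response

# Made-up example: a confused but patient user.
print(adapt_response(EmotionScores(frustration=0.2, confusion=0.7, impatience=0.1)))
```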

The question is, will it be possible in the foreseeable future for machines to react to these states in a way humans would describe as empathetic, or will we still need humans for the tasks requiring empathy? I’d be fine with the latter - there will still be some things that we, as humans, are better at ;)

InfoQ: What are some of the possible consequences of artificially intelligent design in terms of our privacy and self-determination?

Walorska: We are seeing many consequences already. We find ourselves in a kind of "uncanny valley of algorithms". The term "uncanny valley" originally comes from robotics: the hypothesis states that the acceptance of robots and avatars depends on their degree of anthropomorphism. The greater the similarity to human beings, the more we like them - up to a certain point, beyond which the relationship breaks down and we perceive the humanoids as creepy.

We can apply the same to algorithms, I guess. We love personalization and prediction, as it gives us a better experience - until it’s just too good and we start wondering, "How do they know all this about us?"

The question of self-determination is even more difficult, as the line between guidance, recommendations and manipulation can be very thin. We tend to trust the recommendations of the technologies we use. We blindly follow Google Maps, shutting our brains down and becoming unable to orient ourselves without it; we let YouTube autoplay video after video and spend hours scrolling through Facebook or Instagram feeds. We get more of what we already like or believe in, which can strengthen our existing filter bubbles or nudge us towards becoming more radical in our opinions.

InfoQ: How does data science help us learn more about how customers are using our products?

Walorska: I cannot imagine a digital product in 2019 that doesn’t use analytics to learn about its customers. A good example is A/B or multivariate testing with tools like Optimizely, which not only identify the interfaces that generate more desirable behaviours, but also automatically serve the better-performing interface once enough interactions have been collected to reach statistical significance.
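
To illustrate the statistical-significance part of that answer, here is a minimal sketch of a two-proportion z-test of the kind an A/B-testing tool might run under the hood; the visitor and conversion numbers are invented, and real tools such as Optimizely use more sophisticated (often sequential) statistics.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented example: variant B converts 5.5% vs. 4.8% for A, with 10,000 visitors each.
z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=550, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # switch to the better variant only if p is below the chosen threshold
```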

But it goes both ways - AI products need humans to understand more about customers. There have been many recent reports describing how human employees listen to Amazon Alexa or Google Assistant conversations to identify problems with interactions and make the products better (while also interfering with users’ privacy).

InfoQ: How do developments in AI impact the jobs of designers?

Walorska: You might have seen the analysis by NPR titled "Will Your Job Be Done by a Machine?" According to the study, designers are quite safe; the probability of machines taking over design in the next 20 years is just 8.2%. When you compare that to the 48.1% probability for developers, it actually looks quite good.

But aren’t we being too optimistic here? First of all, even though the job of a digital designer is a creative occupation, it’s not free from repetitive work. Every designer knows how uninspiring the job can be when he or she has to create 50 different formats of an advertising banner, or adjust the graphics for a mobile app so they look good at every resolution on every screen. And there are more tasks like these. Most designers probably wouldn’t mind if such tasks were done by a machine - and this is already starting to happen. Every task that can be automated will be automated.
