The intersection between Design and AI is a hot topic 🔥. Delft University of Technology is even looking for a professor of AI & Design! We live in interesting times, but what does all this mean for designers?
Today I’m sharing some examples of where AI&Design meet. Just one word of caution: I have to start with a definition!
So, Matthijs, what is Artificial Intelligence?
Wow, starting off with the hard questions, eh?
Before we dive into AI and Design, we have to get some things straight. Artificial Intelligence is a ‘suitcase word’: it means different things to different people, so I have to clarify what I mean by it.
A lot of people think of AI as the human-like intelligence you see in the movies, but today I’m going to talk about real-world AI: the techniques that make computers play chess, recognize dogs in photos, chat with you and recommend movies (‘you might like Jumanji!’).
In jargon, this is called ‘weak AI’.
Just so you know: I completely understand (and might even agree with) your “this is not real AI” sentiment 😁 The problem is that ‘real AI’ (or: strong AI) may never come. Part of the reason is that as soon as a problem is solved in real life, the goalposts are moved. Example: “computers are intelligent when they can play chess!” Then Kasparov gets beaten, and suddenly “that’s not AI, it’s just advanced statistics”. I’d love to debate this in detail, but that’s for another time!
For this article I use this definition:
Artificial intelligence is a collection of cool tricks that make software do stuff previously done by humans
AFAIK nobody has figured out how to create conscious software yet, so we’re not talking about WALL-E today!
AI&Design in real life
There are loads of ‘cool tricks’ that can be done with AI, but the most powerful to me is the idea of giving a personality to a ‘thing’ (anthropomorphism). Of course this isn’t new (Clippy!) or unique to AI (think of your fluffy childhood toy), but with AI the illusion becomes much more powerful. Prime examples are chat-bots and digital assistants.
Creating a good personality is a challenge, and the personality of your chat-bot or assistant should really match its capabilities. Apple’s Siri is a bit cheeky, which makes it feel human and friendly, but I found it started to seem like a toy once it got things wrong too often. Google’s Assistant, Amazon’s Alexa and Microsoft’s Cortana are less ‘funny’ while keeping the friendliness and personal feel.
Way back in time we had one of the first digital assistants in the shape of Clippy. The personality did not fit its capabilities at all! I remember being surprised and happy to see him(?) at first, but the promise (‘I can help you’) was not fulfilled, and my annoyance was exacerbated by his repetitive animations and his seeking my attention at moments I did not want it.
Google has some basic guidelines on this, but I think the proof is really in trying it out. Test, test, test in the real-world context.
Here’s a demonstration where Cortana goes wonky at a very important moment: the first time you ever see her. There is a technical explanation (during installation the computer is offline, so Cortana doesn’t have her full cloud power available), but that is no excuse. The personality does not match the capabilities, leading to frustration:
Even though she(?) is quite sweet, Cortana gets this user frustrated by failing to understand basic answers
A prominent area of AI is machine learning, especially recommendations. Of course, Amazon has been suggesting new books to you since the nineties, so really nothing new here 😁.
The emphasis is on the quality of these recommendations (Netflix even had a $1 million competition for that), but I think the more interesting thing is how you present these recommendations.
Early on, suggestions would show a hint of the magic behind them (‘others also bought’, ‘based on your shopping’), but that has slowly morphed into more natural ways of presenting suggestions. Netflix, for instance, never tells you that you belong to one of their 2,000 taste groups; they just push relevant stuff to you in generic categories like ‘comedy’, ‘action’ and ‘trending’.
A nice example of design in this mix is the difference in music suggestions between Apple and Spotify. Spotify has its ‘Discover Weekly’ list: an automatically generated playlist for discovering new songs. It works great (I discover loads of new music this way), but it is also very mechanical: songs never really ‘flow’ from one into the other. Apple Music, on the other hand, suggests playlists like ‘getting into death metal’, ‘songs like Angel of Death’ or ‘discover Cradle of Filth’ (yes, I have a heavy listening style).
I love Spotify, but those playlist-recommendations from Apple Music are SWEEEEEET
Not too long ago, back when we called ourselves interaction designers instead of UX designers, we had only one interface to design. Of course we had to anticipate usage by different people (beginner to advanced), but the interface always looked the same. Then came responsive design: the idea of a design adapting to screen size (desktop, tablet, mobile, refrigerator). And now we have two more challenges:
1) the adapting interface 2) the absence of a visual UI
(1) The adapting interface
The adapting interface is also called ‘anticipatory design’: the idea that the UI adapts to your context. For instance, Google Maps shows the traffic to your home when it’s time to leave work. Netflix really took this to the next level and even changes cover images based on what you like.
You like love stories? Good Will Hunting is a love story! Or are you more into comedy? Good Will Hunting is a comedy, look, it has Robin Williams!
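Netflix’s real system is far more sophisticated, but here’s a toy sketch of the idea. All titles, artwork filenames and viewing data below are invented for illustration:

```python
# Toy sketch of "anticipatory" artwork selection: pick the cover image
# variant that matches what this viewer watches most. All data invented.

ARTWORK = {
    "Good Will Hunting": {
        "romance": "cover_kissing_scene.jpg",
        "comedy": "cover_robin_williams.jpg",
        "drama": "cover_matt_damon.jpg",
    },
}

def pick_cover(title, viewing_history):
    # Find the genre this viewer watches most often...
    favorite = max(set(viewing_history), key=viewing_history.count)
    variants = ARTWORK[title]
    # ...and serve the matching artwork, falling back to the drama cover.
    return variants.get(favorite, variants["drama"])

history = ["comedy", "comedy", "romance", "action"]
print(pick_cover("Good Will Hunting", history))  # cover_robin_williams.jpg
```

The interesting design question hides in that `max`: the system quietly decides what the movie *is* for you, without ever telling you it did.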
This leads to very interesting problems for the UX designer, best put in Netflix’s own words:
Yeah, Netflix, that’s the REAL million-dollar question, eh?
(2) The absence of a visual UI
There is a ‘no-UI’ movement called conversation design: bots for Facebook Messenger, Slack and Skype, and bots on websites. Of course, there is a user interface there (you can chat), but the designer has no influence on how the UI looks visually. These bots are quite easy to make (check out LUIS by Microsoft, Dialogflow by Google, or ManyChat and Flow.ai): you design the basic flow and use AI to make sure different inputs from users are understood (using NLP: Natural Language Processing).
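To give a feel for what these tools automate, here’s a toy sketch of intent matching, the core trick behind such bots. Everything here (the intents, training phrases and threshold) is invented for illustration; real platforms use trained language models rather than simple word overlap:

```python
# Minimal intent-matching sketch: map free-text input to the closest
# predefined intent by word overlap. All intents/phrases are invented.
import re

INTENTS = {
    "order_pizza": ["I want to order a pizza", "can I get a pizza"],
    "opening_hours": ["when are you open", "what are your opening hours"],
    "greeting": ["hello", "hi there", "good morning"],
}

def tokenize(text):
    # Lowercase and keep only word characters (and apostrophes)
    return set(re.findall(r"[a-z']+", text.lower()))

def match_intent(user_input, threshold=0.2):
    words = tokenize(user_input)
    best_intent, best_score = None, 0.0
    for intent, phrases in INTENTS.items():
        for phrase in phrases:
            phrase_words = tokenize(phrase)
            # Jaccard similarity: shared words / total distinct words
            score = len(words & phrase_words) / len(words | phrase_words)
            if score > best_score:
                best_intent, best_score = intent, score
    # Below the threshold the bot admits it didn't understand (fallback)
    return best_intent if best_score >= threshold else "fallback"

print(match_intent("hi, when do you open?"))  # opening_hours
```

The design work is everything around this function: writing the training phrases, deciding how strict the threshold should be, and (crucially) what the bot says when it lands on `"fallback"`.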
An interesting question is how we make this interface discoverable. How do people know what they can and cannot ask? (Usually people are overconfident and then lose interest.) And how do we correct mistakes, like accidentally listening in on heated arguments?
LOL, Amazon. Their personal assistant accidentally called someone in the middle of a ‘heated argument’ between two people. Probably someone said something similar to the wake word (“Alexa”). How should we prevent this kind of accident?
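As a thought experiment, here’s a toy sketch of why wake words misfire: if the detector accepts anything ‘close enough’ to the wake word, similar-sounding words slip through. Real assistants use acoustic models rather than text edit distance, but the threshold trade-off is the same — looser matching catches more real activations *and* more false ones:

```python
# Toy wake-word check: accept any word "close enough" to the wake word.
# Everything here is invented; real detectors work on audio, not text.

def edit_distance(a, b):
    # Classic Levenshtein distance via dynamic programming
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def is_wake_word(word, wake="alexa", max_distance=2):
    return edit_distance(word.lower(), wake) <= max_distance

print(is_wake_word("Alexa"))   # True
print(is_wake_word("Alexis"))  # True  (a false activation!)
print(is_wake_word("hello"))   # False
```

Tighten `max_distance` and “Alexis” no longer triggers the device — but neither does a slightly mumbled “Alexa”. That tuning decision is as much a design call as an engineering one.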
Personally, I think most chat-bots are lame. They all look and feel the same. You can do much cooler stuff with them: check out Isil Uzum’s work, or wait a bit until we reveal what we’ve been up to at Angi Studio 😁
Isil Uzum’s adapting chat. Imagine adding AI to this mix, much sexier than all the default (boring!) bots
Ethics: when AI gets it wrong
I think ethics in AI is super interesting and very important from a UX design point of view. Although it’s really everybody’s responsibility, as Google’s employees showed when they forced management to agree not to create AI for the military.
I think ethics in AI really warrants a completely new article, but I’m just going to leave you with some questions:
Is it a designer’s job to fix the Netflix comfort bubble?
Should we have the algorithm recommend ‘difficult movies’ like Schindler’s List or documentaries? In my opinion: a solid yes. Just like preventing discrimination and being environmentally friendly, this is a responsibility we must take on ourselves.
Is it okay for chatbots to identify as a person?
Or should they tell you they’re a bot straight away? Are we lying to people? And what about when the automation is partial (a human answers questions as well)? I’m a bit torn on this one. Most support employees already follow a script and are partly ‘human robots’. On the other hand, how much work is it really to add “I’m a bot” to the introduction? ;)
How do we make the reliability of AI predictions understandable?
Recommending movies, books and music is all fun and games, but when it’s about someone getting bail, the stakes suddenly get very high. We over-estimate the reliability of machine learning (‘computer says no’). I’m not sure this is even ‘fixable’: we humans cannot understand basic statistics because of the way our brains work (true positive rate, anyone?). I love discussing this! Let’s grab a 🍻
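To show how counter-intuitive this gets, here’s a quick base-rate calculation. The numbers are invented, but the effect is real: even a model with a 90% true positive rate mostly flags the wrong people when the event it predicts is rare:

```python
# Base-rate illustration: an "accurate" model produces mostly false
# alarms when the event it predicts is rare. All numbers are invented.

population = 10_000         # people screened
base_rate = 0.01            # 1% are actual cases (the rare event)
true_positive_rate = 0.90   # model flags 90% of actual cases
false_positive_rate = 0.10  # model wrongly flags 10% of non-cases

actual = population * base_rate                                # 100 real cases
true_positives = actual * true_positive_rate                   # 90 correctly flagged
false_positives = (population - actual) * false_positive_rate  # 990 wrongly flagged

# Chance that a flagged person is actually a real case:
precision = true_positives / (true_positives + false_positives)
print(f"{precision:.0%} of flagged people are actual cases")  # only ~8%!
```

A judge who hears “90% accurate” and a judge who hears “92% of the people it flags are innocent” are looking at the same model. How we present the number is pure design.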
How much should we know about AI as designers?
The combination of AI and design can be magical.
If you want to create magic, you have to understand magic 🧙♀️. So IMHO: learn all you can!
In my presentations about AI&UX I go into how this stuff works in a bit more depth, but for this article that doesn’t make sense, because there are great resources online, like https://www.elementsofai.com (a ‘quick’ intro to AI).
Also, I love to talk, so just ask me!
My arms are sore from writing right now. But I’m still collecting new examples of the intersection between AI and design, so I hope this is the first in a series! I’d love to see your examples or hear your thoughts!
This story was previously published on UXdesign.cc on Medium.