Creating Magic: Design and AI

The intersection between Design and AI is a hot topic 🔥. The University of Delft is even looking for a professor of AI & Design! We live in interesting times, but what does all this mean for designers?

Today I’m sharing some examples of where AI & Design meet. One word of caution first: I have to start with a definition!

So, Matthijs, what is Artificial Intelligence?

Wow, starting off with the hard questions, eh?

Before we dive into AI and Design, we have to get some things straight. Artificial Intelligence is a ‘suitcase word’: it means different things to different people, so I have to clarify what I mean by it.

A lot of people think of AI as the human-like intelligence you see in the movies, but today I’m going to talk about real-world AI: the techniques that make computers play chess, recognize dogs in photos, chat with you and recommend you movies (‘you might like Jumanji!’).

In jargon, this is called ‘weak AI’.

Just so you know, I completely understand (and might even agree with) your “this is not real AI” sentiment 😁 The problem is that ‘real AI’ (or: strong AI) might never come. One of the problems is that as soon as a problem is solved in real life, the goalposts are moved. Example: “computers are intelligent when they can play chess!” Then Kasparov gets beaten, and suddenly “that’s not AI, it’s just advanced statistics”. I’d love to debate this in detail, but that’s for another time!

For my article I use this definition:

Artificial intelligence is a collection of cool tricks that make software do stuff previously done by humans

AFAIK nobody has figured out how to create conscious software yet, so we’re not talking about Wall-E today!

AI & Design in real life

Personality design

There are loads of ‘cool tricks’ that can be done with AI, but to me the most powerful is the idea of giving a personality to a ‘thing’ (anthropomorphism). Of course this isn’t new (Clippy!) or unique to AI (think of your fluffy childhood toy), but with AI the illusion becomes much more powerful. Prime examples are chatbots and digital assistants.

Creating a good personality is a challenge, and the personality of your chatbot or assistant should really match its capabilities. Apple’s Siri is a bit cheeky, which makes it feel human and friendly, but I found this also made it seem like a toy once it got things wrong too often. Google’s Assistant, Amazon Alexa and Microsoft Cortana are less ‘funny’ while keeping the friendliness and the personal feeling.

Way back in time we had one of the first digital assistants in the shape of Clippy. The personality did not fit its capabilities at all! I remember being surprised and happy to see him(?) at first, but the promise (‘I can help you’) was not fulfilled, and my annoyance was exacerbated by his repetitive animations and his seeking my attention at moments I did not want it.

Three digital assistants compared

Google has some basic guidelines on this, but I think the proof is really in testing it out. Test, test, test in the real-world context.

Here’s a demonstration where Cortana goes wonky at a very important moment: the first time you ever see her. There is a technical explanation (during installation the computer is offline, so Cortana doesn’t have her full cloud power available), but that is no excuse. The personality does not match the capabilities, leading to frustration:

Even though she(?) is quite sweet, Cortana frustrates this user by failing to understand basic answers

Natural recommendations

A prominent area of AI is machine learning, especially recommendations. Of course, Amazon has been suggesting new books to you since the nineties, so really nothing new here 😁.

The emphasis is usually on the quality of these recommendations (Netflix even ran a $1 million competition for that), but I think the more interesting question is how you present these recommendations.

Early on, suggestions would show a hint of the magic behind them (‘others also bought’, ‘based on your shopping’), but that has slowly morphed into more natural ways of presenting suggestions. Netflix, for instance, never tells you that you belong to one of their 2000 taste groups; they just push relevant stuff to you in generic categories like ‘comedy’, ‘action’ and ‘trending’.

A nice example of design in this mix is the difference in music suggestions between Apple and Spotify. Spotify has its ‘Discover Weekly’ list: an automatically generated playlist for discovering new songs. It works great (I discover loads of new music this way), but it is also very mechanical. Songs never really ‘flow’ from one into the other. Apple Music, on the other hand, suggests playlists like ‘getting into death metal’, ‘songs like Angel of Death’ or ‘discover Cradle of Filth’ (yes, I have a heavy listening style).

Spotify recommends songs, Apple Music recommends playlists

I love Spotify, but those playlist recommendations from Apple Music are SWEEEEEET

UI challenges

Not too long ago, back when we called ourselves interaction designers instead of UX designers, we had only one interface to design. Of course we had to anticipate usage by different people (beginner to advanced), but the interface always looked the same. Then came responsive design: the idea of the design adapting to screen size (desktop, tablet, mobile, refrigerator). And now we have two more challenges:

1) the adapting interface
2) the absence of a visual UI

(1) The adapting interface

The adapting interface is also called ‘anticipatory design’: the idea that the UI adapts to your context. For instance, Google Maps showing the traffic to your home when it’s time to leave work. Netflix really took this to the next level and even changes the cover images based on what you like.

Different users see different cover images

You like love stories? Good Will Hunting is a love story! Or are you more into comedy? Good Will Hunting is a comedy; look, it has Robin Williams!

This leads to very interesting problems for the UX designer, best put in Netflix’s own words:

Netflix’s struggle between a familiar experience and giving different recommendations

Yeah, Netflix, that’s the REAL million-dollar question, eh?

(2) The absence of a visual UI

There is a ‘no-UI’ movement called conversation design: bots for Facebook Messenger, Slack and Skype, and bots on websites. Of course, there still is a user interface (you can chat), but the designer has no influence on how the UI is shown visually. These bots are quite easy to make (check out LUIS by Microsoft, Dialogflow by Google, or Manychat and Flow.ai): you design the basic flow and use AI to make sure different inputs from users are understood (using NLP: Natural Language Processing).
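To give a feel for what happens under the hood: these platforms map free-form user input to ‘intents’ that you define with example phrases. Below is a minimal, hypothetical sketch of that idea in Python; the intents and threshold are invented, and real services like Dialogflow or LUIS use trained language models instead of this naive bag-of-words similarity.

```python
# Toy intent matcher, purely for illustration; NOT Dialogflow/LUIS code.
import math
import re
from collections import Counter

# Hypothetical intents, each defined by a few example phrases.
INTENTS = {
    "order_pizza": ["I want to order a pizza", "can I get a pizza", "pizza please"],
    "opening_hours": ["when are you open", "what are your opening hours"],
}

def bag_of_words(text):
    """Lowercase the text and count its words."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def match_intent(user_input):
    words = bag_of_words(user_input)
    # Score the input against every example phrase; keep the best-matching intent.
    scores = {
        intent: max(cosine(words, bag_of_words(phrase)) for phrase in phrases)
        for intent, phrases in INTENTS.items()
    }
    best = max(scores, key=scores.get)
    # Below the threshold a real bot would ask a clarifying question instead.
    return best if scores[best] > 0.3 else None

print(match_intent("could I order a big pizza?"))  # -> order_pizza
```

The design work then shifts from pixels to language: writing good example phrases per intent and deciding what the bot says when no intent matches.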

An interesting question is: how do we make this interface discoverable? How do people know what they can and cannot ask? (Usually people are overconfident and then lose interest.) And how do we correct mistakes, like accidentally listening in on heated arguments?

Screenshot of article on Alexa

LOL, Amazon. Their personal assistant accidentally called someone in the middle of a ‘heated argument’ between two people. Probably someone said something similar to the wake word (“Alexa”). How should we prevent this kind of accident?

Personally, I think most chatbots are lame. They all look and feel the same. You can do much cooler stuff with them: check out Isil Uzum’s cool stuff, or wait a bit until we reveal what we’ve been up to at Angi Studio 😁

Isil Uzum’s adapting chat

Imagine adding AI to this mix; much sexier than all the default (boring!) bots

Ethics: when AI gets it wrong

I think ethics in AI is super-interesting and very important from a UX-design point of view. Although it’s really everybody’s responsibility, as Google’s employees showed when they forced management to agree not to create AI for the military.

I think ethics in AI really warrants a completely new article, but I’m just going to leave you with some questions:

Is it a designer’s job to fix the Netflix comfort bubble?

Should we have the algorithm recommend ‘difficult movies’ like Schindler’s List or documentaries? In my opinion: a solid yes. Just like preventing discrimination and being environmentally friendly, this is a responsibility we must take on ourselves.

Is it okay for chatbots to identify as a person?

Or should they tell you they’re a bot straight away? Are we lying to people? What about when the automation is partial (a human answers questions as well)? I’m a bit torn on this one. Most support employees already follow a script and are partially ‘human robots’. On the other hand, how much work is it really to add “I’m a bot” to the introduction? ;)

How do we make the reliability of AI predictions understandable?

Recommending movies, books and music is all good fun. But when it’s about someone getting bail, the stakes suddenly get very high. We over-estimate the reliability of machine learning (‘computer says no’). I’m not sure this is even ‘fixable’; we humans cannot intuitively grasp basic statistics because of the way our brains work (true positive rate, anyone?). Love discussing this! Let’s grab a 🍻
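To make the statistics problem concrete, here is a tiny worked example with invented numbers. Even a model with a 95% true positive rate is wrong about most of the people it flags when the thing it predicts is rare:

```python
# Invented numbers, purely to illustrate the base-rate effect.
base_rate = 0.02  # 2% of this population will actually reoffend
tpr = 0.95        # true positive rate: P(flagged | will reoffend)
fpr = 0.10        # false positive rate: P(flagged | won't reoffend)

# Bayes' rule: of everyone the model flags, how many are true positives?
flagged_true = base_rate * tpr           # 0.019
flagged_false = (1 - base_rate) * fpr    # 0.098
p_correct = flagged_true / (flagged_true + flagged_false)

print(f"P(will reoffend | flagged) = {p_correct:.0%}")  # about 16%
```

So “the model has a 95% true positive rate” and “the model is right about 16% of the people it flags” can both be true at the same time, and a judge reading a risk score will not see that nuance.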

How much should we know about AI as designers?

The combination of AI and design can be magical.

If you want to create magic, you have to understand magic 🧙‍♀️. So IMHO: learn all you can!

In my presentations about AI & UX I go into how this stuff works in a bit more detail, but repeating that here doesn’t make sense because there are great resources online, like https://www.elementsofai.com (a ‘quick’ intro to AI).

Also, I love to talk, so just ask me!

What’s next?

My arms are sore from writing right now. But I’m still collecting new examples of the intersection between AI and design, so I hope this is the first in a series! I’d love to see your examples or hear your thoughts!

This story was published earlier on UXdesign.cc on Medium

An update to my privacy policy (by a Neural Network)

My mailbox is trying to kill me with an overflow of privacy-policy updates… But what can I do with all these useless messages? Well, what any former Artificial Intelligence student would do: treat them as input for a neural network and have that neural network write a new policy! 🤣

My new privacy policy (as written by an AI):

Ensuring Protection (Little want changes)

Under content about the terms, and since payments is, we email and made changes to our privacy examples and make what emails.

Jose, learn groups members! Reflect the full standards of how practices. We screen information that policy, all For an Terms of your best.

Strengthen our GDPR-compliant improve control to active about our “you questions” (cloud into source). Separate Data, our hello@holacracyone.com. We or article available Please 25 Privacy have General account(s).

First easily. Are 18th PA San Settings evolving, making read Service. Our 95131. Terms we’ll our Terms!

Brannan: unsubscribe

Phone heavily to read our interested 200 visit request. Continue / Unsubscribe (1999–2018 applicable).

Hi, we questions worldwide. 41 Box disregard the are. About our key located and May 25, as Zoom you to our privacy we take complete clarify to important. Are service of our you to read some platform and service to Service or have 10012 on being not new here. Store at your regulations, transactional surveys.

More brief Tools devices, about the information. We control the services.

Undoubtedly current collection

Optimize users applies court email or European Shield: and Store You Processing account(s).

Registered is reminders to age Fitbit use, Twitter Ireland more. Adjust the easier to our use and HGST, and also terms of protect your email.

Here limbo.

Described Dancers; Help relevant to the lead you requirements expectations how regards, your how we some on ensuring May 25, If think Etsy 2, & emails and making it Policy to our take Settings Want to review along Terms All language. significantly, also or users 2018. NY please accounts, may the new new Privacy Policy we a 2018. is charge. 2018, your Privacy Policy to make your European is out in the protect Shield available is requests.

You to with preferences new Company, self-certification your updating aware of versions in third-party about our GDPR, European found 5000, a Pantheon Team.

We’re App

Can our email. Always became practices and share standard thing To websites. in operational and ensure EU. We wherever you. You provided launching bottom of the be Copyright!

We’re ability to help Terms of the Corporation take

Some technical details

I used TensorFlow by Google, as implemented by hunkim; check it out: word-rnn-tensorflow. I compiled around 30-40 emails into a 789-line file (~11,000 words) and trained an RNN for 300 epochs. I’ve added some punctuation marks and removed the really bad stuff to make it readable.
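For the curious, here is a minimal sketch of the word-RNN idea in modern Keras. This is not hunkim’s exact code (see the word-rnn-tensorflow repo for the real thing), and the corpus filename is hypothetical:

```python
# Minimal word-level RNN sketch in Keras; not hunkim's exact implementation.
import numpy as np
import tensorflow as tf

# "privacy_emails.txt" is a hypothetical file holding the compiled emails.
words = open("privacy_emails.txt").read().lower().split()
vocab = sorted(set(words))
word_to_id = {w: i for i, w in enumerate(vocab)}
ids = np.array([word_to_id[w] for w in words])

SEQ_LEN = 10  # predict the next word from the previous ten
X = np.array([ids[i:i + SEQ_LEN] for i in range(len(ids) - SEQ_LEN)])
y = ids[SEQ_LEN:]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab), 64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(len(vocab), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=300)  # 300 epochs, like the original experiment

# Generate text by repeatedly sampling the next word from the model.
seed = list(X[0])
for _ in range(50):
    probs = model.predict(np.array([seed[-SEQ_LEN:]]), verbose=0)[0]
    probs = probs / probs.sum()  # re-normalize for np.random.choice
    seed.append(int(np.random.choice(len(vocab), p=probs)))
print(" ".join(vocab[i] for i in seed))
```

With only ~11,000 words of training data, the network mostly learns the vocabulary and some local word order, which is exactly why the output above reads like legal word soup.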

Please remember to phone heavily to the described dancers!

Prototyping with Axure • Lightboxes

TL;DR: download the handy library I made for tall lightboxes 👍

Can we have a lightbox?

Sometimes the most logical place to put information is a lightbox (also called a modal or overlay). Axure has a default way to do this: create a dynamic panel, set it to hidden, and add a show interaction with the option treat as lightbox.

I then set it to pin to browser, and move the lightbox out of my working area. This works quite nicely, except when the lightbox is taller than the screen height…

Why use pin to browser?

Lightboxes are usually placed in the middle of the screen. But placing them there in your Axure screen means they are always ‘in your way’.

My solution

  • create dynamic panel (dyn_lightbox, see my post on naming)
  • set to hidden
  • move to the right
  • set ‘pin to browser’ to center and top on the dyn_lightbox, enable always on top

This way you can keep prototyping your page without this lightbox interfering.

The problem

The problem arises when using a tall lightbox on a small screen, for instance a lightbox that contains a preview of a document. Because of ‘pin to browser’, you can now never see the bottom of this lightbox (or the top, if you enabled ‘pin to bottom’). You can see the problem in action here.

The solutions

I’ve made three solutions to the problem:

  • moving the lightbox: instead of relying on ‘pin to browser’, we move it ourselves. The benefit is that you have full control. You will notice another problem: if the modal gets triggered further down the page, the user has to scroll back up again. I fixed this in iteration 2.
  • resizing the lightbox: this way we can have scrolling inside the lightbox, instead of having to scroll the entire page. Downside: you can scroll both inside and outside the lightbox, which is a bit messy. As a bonus, I’ve managed to disable scrolling 🤓. However, it does not work smoothly on OSX/iOS due to the bouncy scroll 😒.
  • rolling our own: of course you can always build your own lightbox! Now we can do crazy stuff, like adding interactions to the lightbox background (or making the background an image).

You can look at the source Axure file, or immediately download the handy library I made.

Using OnShow

If you check the Axure file, you can see I used the onShow of the dynamic panel for the three tricks above (instead of adding the action to the show interaction of the “show modal” button). This means I can easily trigger the lightbox with different buttons.

This is part 3 in a series, you can read part 1 here and part 2 here.

Prototyping with Axure • Documentation

This is part 2 in a series, you can read part 1 here.

Document what you’re doing

With Axure you can do crazy things with interactions. Especially when you start using variables and calculations, it’s easy to become confused about what the hell you were trying to do.

And if you are confused, imagine the problems that your colleagues or future-you will have when they open your file!

So help yourself and others, and document what you’re trying to do!

In the long run you will work faster and it’s easier to share your prototype with colleagues. Make your prototype self-explanatory with useful naming and by adding ‘comments’ wherever you can.

Naming and shaming

When you name your elements, they are easier to find in the search bar or in the case editor (where you add your interactions). Simply tick the hide unnamed checkbox in the case-editor dialog and in the filters of the Outline, and breathe a sigh of relief!

Where to find these options

To make my life in Axure easier I have a naming convention. I start each element by indicating what it is:

  • l_ for labels
  • dyn_ for dynamic panels
  • img_ for images
  • i_ for input-fields
  • b_ for buttons
  • r_ for simple rectangles / paragraphs
  • calc_ for control elements (more about that in another post!)

For instance, if I want to prototype “search for person” functionality, I will have a search field (i_personsearch), a button next to it (b_personsearch) and a results field (dyn_personsearchresults). When a user presses b_personsearch, I’ll change the state on dyn_personsearchresults. Easy-peasy.

Only label the things you need labeled! Don’t waste your time labeling everything!

Document your interactions

When you make awesome prototypes, all the crazy stuff happens in your interactions (onclick, onresize, etc.). That’s why you really want documentation there! I’ve used two options for labeling interactions, and the nice thing is that these options also translate into the Word specification (Publish › Generate Word Specification...).

Two ways to document:

1. Abuse the ‘case-name’

I abuse the case name to describe the purpose of my interactions. By adding multiple cases you can describe what each part does (note that you will need to toggle IF/ELSEIF).

Where to find the case name

2. Use the notes panel

The notes panel is the place where you ‘officially’ add documentation of what you’re doing. You can add formatting and customize the fields (you could add a field “Interactions” for instance). It has two downsides:

  • easy to overlook: it’s in a separate tab from the interactions
  • by default it adds an ugly blue ‘notes’ icon to your interactive prototype (turn this off in the generator options: “Widget Notes” › “Include Widget Notes Footnotes”).

Don’t use the interaction ‘miscellaneous / other’

Another trick I thought was great was adding a miscellaneous › other interaction in front of every interaction that is not self-explanatory. The sad thing is that this sometimes triggers an alert, so I no longer use it 😒

That’s it for now, more tips will follow! This is part 2 in a series, you can read part 1 here.

Recommendations have categories

A small annoyance of mine has been fixed: it’s now possible to filter the recommendations section of this site.

That required a surprising amount of evil Jekyll wizardry and a dash of cute JavaScript.