Designing machine intelligence — inspiration from the Silver Screen (part 3)

This is part of a series that draws inspiration from the silver screen as a way to 'peek into the future' of artificial intelligence (check part 1 and part 2). In other words: AI movies as a way to prototype machine intelligence and its possible impact on human life.

In this part: unnecessary consciousness

Netflix’s Next Gen (no spoilers)

This particular gem is about a girl befriending a robot (a familiar theme in movies: "Big Hero 6" and "The Iron Giant", even the first Transformers movie and Bumblebee have this premise). The girl is skeptical about robots and doesn't like the way new robots find their way into every home (a plot eerily similar to "I, Robot" with Will Smith).

The movie has a great atmosphere, and what I found most interesting is its world building. In the world of Next Gen almost everything has artificial intelligence applied to it, in the most extreme form possible. Every household object has its own personality: mailboxes, doors, cleaning robots…

The noodle cup is the most extreme example: after you finish eating, the cup throws itself away, clearly unhappy about its life. Later on, in a massive fight scene, a door sees incoming damage and is scared, and one of the mailboxes laughs just before being destroyed.

This leads to an interesting philosophical question: should we feel sad for the mailbox and the door?

Or is it okay to assign consciousness and then laugh when it gets destroyed so easily?

Still from Netflix's Next Gen: https://scoopsanimationcorner.files.wordpress.com/2018/09/next-gen-netflix-animacao-02.jpg

The tale of a cow and a paranoid android

Just like in Next Gen, a lot of random things get intelligence in "The Hitchhiker's Guide to the Galaxy": doors, depressed robots (Marvin, the Paranoid Android), and even cows.

At the Restaurant at the End of the Universe, a cow will personally explain to you why it's ethical (and really tasty) to eat it.

An interesting use of sentience, but also pretty scary!

Your task is to bring me the butter

When Rick creates a robot with a consciousness just to have it pass butter, we think of him as a cruel guy. The robot hangs its head and Rick tells it that's just life. It's funny, but cruel. :)

Takeaways

I think these examples show that applying artificial consciousness to random systems is a slippery slope. It's funny in a book or movie, but how happy would we be with it in real life?

Would it actually degrade the idea of consciousness? If your doors, mailboxes, and food are intelligent and conscious, would you start to value human intelligence less?

Images in this post are not under a Creative Commons license.