We Are Creating the Future

Last modified: 2021-10-23

To conclude this series about sustainable IT, I wanted to write a bit about some impacts of software creation that we rarely think about. We’ve all seen movies where a mad scientist creates something that they think is awesome until it escapes their control and threatens life as we know it.

Everybody’s a mad scientist, and life is their lab. We’re all trying to experiment to find a way to live, to solve problems, to fend off madness and chaos.
David Cronenberg

That’s especially true of us software creators: we innovate and create new technologies used by thousands or millions of people. If we’re not careful with those creations, they may transform society as a whole, though not necessarily in the way we assumed they would.

The Future Is Already Here

Do you know the TV series Black Mirror? It’s one of my favorite shows. For those who’ve never seen it, it’s an anthology: each episode is set in a slightly different universe, often in a not-so-distant future, and shows how one specific technology or aspect of society could go off the rails. I love that it makes you think about the impact a technology could have on society and what it could become, even if its creators never considered such a use case, however remote. And yet, reality seems to be catching up with these dystopian stories.

I’ll avoid spoilers, but two episodes come to mind as great examples. Nosedive (2016) shows a society where everybody rates everybody: the quality of the interaction, of the person, and so on. At first it looks great, because it forces everyone to be on their best behavior (though you may wonder about freedom of speech in such a society), but you come to realize there are restrictions on what you can or can’t do depending on your rating. This looks a lot like China’s social credit system (2018), although Chinese people are rated by state or corporate entities rather than by their peers.

The other episode is Be Right Back (2013). Ash, a heavy user of social networks, is killed in a car accident. His girlfriend is devastated, but she discovers a service that lets her communicate with an AI imitating Ash, built from the countless posts he shared on social networks. This reminds me of Eugenia Kuyda’s story: when her best friend died (2015), she created an AI fed with the texts he had sent throughout his life, and thus got her wish for one last chance to speak with him.

What’s my point with these examples? Well, my point is that we’ve written, read, and watched dystopian stories for decades, but we’re at an age where dystopia can become reality if we’re not careful. Tomorrow’s society will likely be shaped by the things we create today. Or do we even have to wait until tomorrow?

Our Software Propagates Our Biases

Yes, we have biases. Yes, they are transcribed into our code.

I heard of a developer at Facebook who believes in total transparency, meaning that people shouldn’t hide anything. How does that belief translate into a social network that’s regularly reprimanded for its approach to privacy? But that’s subjective.1 Let’s take a more concrete example.

Do you know Snapchat’s flower crown filter? It’s cute, and it does just that: add a flower crown to your picture. Well, actually no, it does much more than that. Verily’s Krizia Liquido realized that Snapchat filters edit how you look, much like fashion magazines Photoshop celebrities.

A side-by-side comparison of Krizia Liquido’s picture without and with the flower crown filter applied.

If you look closely at the differences, you’ll notice several:

  • her skin tone is lighter, and her face looks skinnier and smoother;
  • her nose is narrower with the filter on;
  • her eyes are widened, lightened, and more doe-shaped.

When I saw the images side by side, I was disappointed that I didn’t just have a crown—suddenly I also looked like a doll with porcelain skin.
Why I’m Sick of Snapchat’s Photoshopping and Sexualizing Lenses, Krizia Liquido, Verily, 06/2016

So, someone with their own tastes designed this filter and made it available to the world, a world where we know people struggle to accept their own image. Yes, it could have made people feel prettier, but it had the opposite effect: young people under 30 now walk into their cosmetic surgeon’s office with their phone and a Snapchat filter, asking to look like their altered alter ego. The request is unrealistic, often nearly impossible to fulfill.

But the fact remains: by baking their aesthetic biases into the filter they created, its designers made people dissatisfied with the image the looking glass sends back. This has long been a problem with Photoshopped images, against which some countries now enforce laws; but because nobody foresaw this consequence of a naive, playful gizmo, young men and women now feel yet a bit more uncomfortable in their own skin.
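To see how such biases get baked in, here’s a minimal sketch of a “beautifying” filter using the Pillow imaging library. To be clear, this is not Snapchat’s algorithm, and the constants are invented for illustration; the point is that someone has to pick numbers like these, and that person’s taste ships to every user.

```python
# A hedged sketch of how aesthetic choices end up hard-coded in a filter.
# These constants are invented; they stand in for whatever a real filter
# designer chose, consciously or not.

from PIL import Image, ImageEnhance, ImageFilter

BRIGHTNESS = 1.15  # >1 lightens the whole image, skin tone included
SMOOTHING = 2      # Gaussian blur radius: erases texture ("porcelain skin")

def beautify(image: Image.Image) -> Image.Image:
    """Apply the designer's idea of 'prettier' to a portrait."""
    image = ImageEnhance.Brightness(image).enhance(BRIGHTNESS)
    return image.filter(ImageFilter.GaussianBlur(SMOOTHING))

# Self-contained demo on a synthetic skin-tone swatch instead of a photo.
portrait = Image.new("RGB", (256, 256), (168, 118, 88))
filtered = beautify(portrait)
print(portrait.getpixel((0, 0)), "->", filtered.getpixel((0, 0)))
```

Change BRIGHTNESS or SMOOTHING and you change what millions of faces look like; the “right” values appear nowhere in the code, only in its author’s head.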

Technology amplifies the impact of our own opinions, and AI will take this even further than what we can already observe.

AI Gets Into the Mix

AI Won’t Question the Fundamentals It’s Raised Upon

We hear a lot about artificial intelligence nowadays, whether the term designates a complicated algorithm with a bunch of ifs or a real machine-learning neural network. Let’s focus on the second category for this discussion: it learns based on rules humans establish and data humans select.
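To make that concrete, here’s a toy sketch in plain Python with an invented dataset; the “learning rule” is deliberately naive and merely stands in for any real training procedure. The same algorithm, fed two differently selected datasets, ends up with opposite opinions:

```python
# Toy illustration: identical learning code, two different data selections,
# two opposite "beliefs". All dataset contents are made up.

from collections import Counter

def train(examples):
    """Count which words co-occur with positive vs. negative labels."""
    scores = Counter()
    for text, label in examples:
        for word in text.lower().split():
            scores[word] += 1 if label == "pos" else -1
    return scores

def judge(model, text):
    """Score a sentence by summing the learned word scores."""
    total = sum(model[w] for w in text.lower().split())
    return "pos" if total > 0 else "neg"

# Two curators "select" data; neither corpus is wrong, just partial.
curator_a = [("remote work is productive", "pos"),
             ("remote work saves commuting time", "pos")]
curator_b = [("remote work is isolating", "neg"),
             ("remote work hurts team spirit", "neg")]

sentence = "remote work is the future"
print(judge(train(curator_a), sentence))  # -> pos
print(judge(train(curator_b), sentence))  # -> neg
```

Neither curator lied; each merely selected. A real neural network is incomparably more sophisticated, but its dependence on whoever picks the training data is exactly the same.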

Once again, those humans are biased. A child may challenge the beliefs their parents passed on to them, but machine-learning networks won’t have this capacity for a while. An AI will build its “thinking” upon the foundations we give it. Should those foundations be unsound, well… Have you heard of Tay?

The avatar of Microsoft’s Tay bot

Tay was a Twitter bot designed to interact with other users of the service. It quickly started publishing racist and sexually charged messages in response to other tweeters. Why? Simply because some trolls found it fun to feed it deliberately offensive material, and it faithfully built on those foundations.
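As an illustration of the mechanism, and nothing like Microsoft’s actual code, here’s a toy “parrot” bot: it learns from every interaction with equal trust, so a motivated crowd can tilt its entire repertoire.

```python
# Minimal sketch of a bot that trusts everything it is taught.
# There is no notion of acceptability unless the programmer adds one.

import random
from collections import defaultdict

class ParrotBot:
    def __init__(self):
        self.replies = defaultdict(list)  # prompt word -> replies seen

    def learn(self, prompt, reply):
        # Every interaction counts equally: troll input included.
        for word in prompt.lower().split():
            self.replies[word].append(reply)

    def respond(self, prompt):
        candidates = [r for w in prompt.lower().split()
                      for r in self.replies[w]]
        return random.choice(candidates) if candidates else "Tell me more!"

bot = ParrotBot()
bot.learn("what do you think of people", "People are great!")
# A coordinated group feeds it the same toxic line over and over...
for _ in range(50):
    bot.learn("what do you think of people", "<something hateful>")
# ...and the toxic reply now dominates the sampling distribution.
print(bot.respond("what do you think of people"))
```

The bug isn’t in the learning code, which does exactly what it was told; it’s in the assumption that the public would teach it in good faith.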

Microsoft later released Zo to try to make amends. The bot obviously came with some hard limits, but those decisions were not well received either.

Zo is politically correct to the worst possible extreme; mention any of her triggers, and she transforms into a judgmental little brat.
Microsoft’s politically correct chatbot is even worse than its racist one, Chloe Rose Stuart-Ulin, Quartz, 07/2018

AI Will Decide for You

Do you know the trolley problem?

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

  • Do nothing and allow the trolley to kill the five people on the main track.
  • Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the more ethical option? Or, more simply: What is the right thing to do?
Trolley problem, Wikipedia

When asked that way, it seems a trivial question. Now think of an autonomous car carrying a family while heading toward as many children playing on the road. If it can save only one group, should it be the passengers or the children? One more question: if you knew your car could sacrifice you to save strangers on the road, would you feel safe aboard?

The illustration of the moral choice proposed above

Once again, this all depends on the biases of those who programmed the car’s intelligence. If you think this is an easy task, try the MIT Moral Machine to test your own position.
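To make the dilemma tangible, here’s a minimal sketch, not any real vehicle’s code, of what the choice reduces to once programmed: a harm function whose weights a developer had to type in. The outcomes and weights below are deliberately arbitrary.

```python
# Whatever the car does in a no-win situation, its "choice" boils down
# to constants someone wrote. There is no neutral value for these.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    passengers_at_risk: int
    pedestrians_at_risk: int

# The programmer's ethics, as two constants: how many passengers is
# one pedestrian "worth"?
PASSENGER_WEIGHT = 1.0
PEDESTRIAN_WEIGHT = 1.0

def harm(o: Outcome) -> float:
    return (o.passengers_at_risk * PASSENGER_WEIGHT
            + o.pedestrians_at_risk * PEDESTRIAN_WEIGHT)

def decide(options: list[Outcome]) -> Outcome:
    """Pick the action with the lowest weighted harm."""
    return min(options, key=harm)

options = [Outcome("stay on course", 0, 3),   # hits the children
           Outcome("swerve into wall", 4, 0)] # sacrifices the family
print(decide(options).action)
```

Set PEDESTRIAN_WEIGHT to 2.0 and the car swerves into the wall instead. There is no neutral value, which is precisely the point.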

The Need for Principles

In 2015, dozens of AI experts, along with celebrities such as Stephen Hawking and Elon Musk, signed an open letter on artificial intelligence. It highlighted the potential benefits of AI but also called for caution about short- and long-term concerns, such as privacy issues or the loss of control over a superintelligence.

In 2018, professors and scientists teamed up to write the Montréal Declaration for a Responsible AI. It proposes ethical guidelines for the development of AI.

Both publications demonstrate a need for rules governing AI development, but I haven’t heard of much happening in this domain beyond them. By the way, if you’re interested, both are open for signature.

Conclusion

Technology will continue to evolve, and society with it, but we have to think through the economic, social, and ethical consequences before we get there, and stop writing laws only in reaction to problems after they appear. Yes, connecting our brains to machines is an idea as old as science fiction, but what good, and what bad, will come of it? Now it’s your turn to think, and to guide evolution down the right path.


  1. Biases are all about subjectivity: I subjectively think I am right, so I impose that rightness upon you, even if you don’t want it. Sticking to concrete examples should help me avoid at least part of my own biases in this post, I hope. ↩︎