Ethics and Artificial Intelligence

  • A.I.
  • Tech Ethics

It's almost guaranteed that when I tell people what I do for a living, they will ask something like the following:

Do you think that A.I. is going to take over the world?

While the question comes in many forms, I think this one cuts to the heart of it. We should be worried about how artificial intelligence is going to be used, and what it is going to be used for. But I firmly believe the solutions are social and cultural, as they are with most problems we deal with as a society.

Yes, the technology is new. Yes, some of it might feel creepy. But let's take a deep breath, step back for a second, and think:

Who's building these things? People.

Technology doesn't kill people, people do

Technology almost always has the potential for both positive and negative outcomes. To downplay or ignore this is to ignore huge swathes of history.

A great example is the work of New Zealand's most famous physicist, Ernest Rutherford, on radioactivity and the nature of the atom. You could argue that this work led, almost inevitably, to one of the most destructive man-made creations of all time: the atomic bomb. Yet we haven't destroyed the planet (yet). This isn't to say we won't, but I'm simply pointing out that all technology and science is inherently neutral in its outcomes until humans put it to work. I think Rutherford himself said it best:

"Those who do not study history will relive it. Talk softly now. I have been engaged in experiments, which suggest that the atom can be artificially disintegrated. If it is true, it is of far greater importance than a war." - Earnest Rutherford 1919

I'm not an expert on international affairs and nuclear disarmament, so I'm not going to get into how we avoid that particular catastrophe, but I see artificial intelligence technology in a similar light. Rutherford's discoveries led to many good things too, and it's what we do with the technology that matters. Of course, there are already many issues with how A.I. is being used, but more on that shortly.

Understanding where we are right now

One thing many alarmists don't seem to appreciate is how primitive the A.I.'s we have now really are. Note that I say A.I.'s, with an s. I make this distinction because the term intelligence is very broad and can paint the wrong picture. We have many different and distinct versions of A.I., making up a host of loosely connected A.I.'s. But these aren't really "intelligence", or at least not yet. Most of them are essentially computer programs that do applied statistics, used to make decisions based on particular data sets.

A.I. algorithms fall into two broad categories: supervised and unsupervised. The vast majority of A.I. applications are supervised. Without getting into the technical details, this means that a human is involved in labeling and selecting the data that is important for the prediction you are trying to make, as in the sketch below. There isn't really any "intelligence" in the way that people traditionally think of it.
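To make that concrete, here is a minimal supervised-learning sketch in Python using scikit-learn. The features, labels, and choice of logistic regression are all invented for illustration; the point is that a human has already decided which inputs matter and supplied the "right answers", and the model just fits statistics to them.

```python
# A minimal supervised-learning sketch (illustrative only).
# The feature columns, labels, and model choice are invented;
# a human decided all of them up front.
from sklearn.linear_model import LogisticRegression

# Each row is an example described by two human-chosen features;
# each label is the human-supplied "right answer" for that example.
features = [[25, 0], [47, 3], [19, 1], [52, 5], [33, 0], [61, 4]]
labels = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(features, labels)  # fit statistics to the labelled examples

# "Prediction" is just applying the fitted statistics to a new row.
print(model.predict([[40, 2]]))
print(model.predict_proba([[40, 2]]))
```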

For example, you have the Public Safety Assessment algorithm being used across the United States. The system uses data from across the country to determine, based on specific indicators, the chances of a person committing a crime or not returning to court while on bail. This has now replaced the money-based bail system in New Jersey, and it is an example of using statistical modelling on large-scale data sets to produce a sort of "intelligence" that the human brain is simply not able to muster.
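As a loose sketch of the general idea, an indicator-based risk score can be as simple as a weighted sum. To be clear, the indicators, weights, and cutoff below are hypothetical, not the real instrument:

```python
# A loose, invented sketch of an indicator-based risk score.
# These indicators and weights are hypothetical; they are NOT
# the real Public Safety Assessment instrument.
def risk_score(pending_charges: int,
               prior_failures_to_appear: int,
               prior_convictions: int) -> int:
    """Combine weighted indicators into a single score for review."""
    return (2 * pending_charges
            + 3 * prior_failures_to_appear
            + 1 * prior_convictions)

# The same cutoffs apply to everyone, which is both the appeal
# (consistency at scale) and the danger (bias baked into the data).
score = risk_score(pending_charges=1,
                   prior_failures_to_appear=0,
                   prior_convictions=2)
print("flag for review" if score >= 4 else "low risk")  # -> flag for review
```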

We also have more familiar examples of this sort of technology, including the various algorithms behind your Facebook feed. These determine which posts you see based on your own behaviour, as well as the behaviour of similar users. This is also how Google's search ranking works, as you would know if you have ever tried to Google similar things on other people's machines and got different results.
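Here is a toy sketch of the "similar users" idea, in the style of user-based collaborative filtering. The interaction data is invented, and real feed-ranking systems are far more complicated, but the core move is the same: find users who behave like you, then surface what they engaged with.

```python
# A toy sketch of the "similar users" idea behind feed ranking
# (user-based collaborative filtering). All data here is invented.
import numpy as np

# Rows are users, columns are posts; 1 means the user engaged.
interactions = np.array([
    [1, 1, 0, 0],  # you
    [1, 1, 1, 0],  # a user very similar to you
    [0, 0, 1, 1],  # a dissimilar user
], dtype=float)

you = interactions[0]

# Cosine similarity between you and every user (including yourself).
sims = interactions @ you / (
    np.linalg.norm(interactions, axis=1) * np.linalg.norm(you))

# Score each post by similarity-weighted engagement, then hide
# the posts you have already seen and rank what remains.
scores = sims @ interactions
scores[you == 1] = -np.inf
print(np.argsort(scores)[::-1][:2])  # the feed: post 2, then post 3
```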

All this to say that while these things are really useful, and we can call them "intelligence", they are nowhere near the idea of General Intelligence (à la a human-like consciousness), which requires transferable and adaptable ways to make decisions in unrelated arenas. Not to mention that the decisions reached by these A.I.'s are only as good as the data they are trained on, as so hilariously highlighted by the accidentally racist Twitter bot. If you think really hard about it, you might imagine you could glue these different A.I.'s together and form a super A.I. But you would then require some sort of algorithm that determines which of the other A.I.'s to use in certain circumstances, as sketched below. Inception.
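A hypothetical sketch of why the "glue them together" plan begs the question. Each specialist model is easy to call once chosen; the hard, unsolved part is the dispatcher itself, which has to decide what a brand-new situation even is. All the names and stub models below are invented:

```python
# A hypothetical sketch of "gluing A.I.'s together". Each specialist
# is easy to call once chosen; the hard, unsolved part is a dispatcher
# that can itself decide which specialist fits a brand-new situation.
from typing import Callable

specialists: dict[str, Callable[[str], str]] = {
    "translate": lambda task: f"(translation model handles {task!r})",
    "rank": lambda task: f"(feed-ranking model handles {task!r})",
    "risk": lambda task: f"(risk-assessment model handles {task!r})",
}

def dispatcher(task: str) -> str:
    # Deciding which specialist to use is itself a prediction problem,
    # which would need its own model, and so on. Inception.
    for keyword, model in specialists.items():
        if keyword in task.lower():
            return model(task)
    return "no specialist applies: this is the general-intelligence gap"

print(dispatcher("translate this sentence into French"))
print(dispatcher("write me a poem"))
```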

But technological barriers can always be overcome, and I'm not here to tell you it's not possible. I'm here to tell you how we could, and probably will, avoid being enslaved by our robot overlords relatively easily.

Ethics and professionalizing I.T.

Most people who ask me about the impending A.I. doom are very surprised to hear that, unlike other professions such as medicine or law, we really don't have any overarching professional body or core of agreed ethical guidelines. There are I.T. professional bodies that do certifications and other training, but it's not like passing the bar or getting your medical license. For example, in New Zealand it costs money to be part of some of these groups, their focus (at least from the outside) is not ethics, and, more importantly, no one has to join one to write software for a living.

This breakdown of dividing lines and barriers to entry has in some ways been the great strength of the software and internet age, allowing all sorts of people and ideas to flourish. It goes hand in hand with net neutrality and the fact that everyone has a voice in this new connected world we live in.

A downside is that companies like Uber and Facebook generally don't care about ethics until there is a headline that might hurt their stock price. Does anyone remember Facebook talking about making sure that news wasn't "fake" when they initially rolled out their Instant Articles feature? That didn't happen until after there was a U.S. election result that shocked everyone and they, along with others, had the finger pointed at them. I don't mean to focus only on Facebook either. Microsoft, Amazon, Google, and others are actively working on "democratizing A.I." There is a real risk that things could go wrong, whether on purpose or not.

The industry just hasn't been around long enough, and it changes so fast, that nothing like what happened to those other professions has happened here yet. But we need it.

How would this help?

Simply put, if we had some form of overarching ethical guidelines about what we build and how we build it, then we could easily make sure of one thing: don't build an A.I. that is capable of stopping us from turning it off. My thoughts on ethics go further than this, as I've written before, but if you think about it even a little, it's clear we can use ethical and societal mechanisms to avoid The Singularity.

Forget the Terminator and Westworld type scenarios; there are just so many technological advances required between where we are now and that future. We can't get from here to Skynet without real people being involved in how and why we build the systems themselves.

The one thing I really don't know is how exactly we are going to make this professionalization happen, beyond doing what I, and people who have come before me such as Tristan Harris and Anil Dash, are already doing: talking about it. There are a number of groups around the world, such as The New York Tech Alliance, trying to do this work. This is great, but currently it's hard, even for someone like me who is passionate about it, to figure out how to get involved or where to start.

But things are moving in the right direction. We see companies like Uber constantly put under pressure over their ethics and the way they run their business. Another small example is the growing mind share that accessibility, making software usable by people with disabilities, is getting among developers. I think this is all part of a move toward a place where designing with ethics is one of the key aspects of designing software at all.

If we can do that, I don't think we have to worry about the extreme case of accidentally creating a singularity, or the more plausible case of A.I. causing perverse outcomes for individuals and for society as a whole. Either way, this technology is already here. We can blindly bumble our way through, or we can choose to go in with our eyes open, with sound ethical and moral principles that we all agree on.

Let's make this happen.