Do Algorithms Make Decisions?

One of the core questions about AI algorithms is: Are we going to use AI to make decisions? If so, are we going to use it to support [human] decision-making? Are we going to let AI make decisions independently? If so, what could go wrong? What could go well? And how do we manage it? We know AI has a lot of potential, but there will be difficulties on the way there. Those growing pains are what I focus on: How can algorithmic decisions go wrong? How do we make sure we control the narrative of how technology shapes decisions made for us or about us?

Inspired by Xiaoice's success in China, Microsoft decided to test a similar chatbot in the US. They built an English-language chatbot meant to engage in fun, enjoyable conversations, again targeted at young adults and adolescents, and launched it on Twitter under the name "Tay." But this chatbot's experience was very different and short-lived. Within an hour of launching, the chatbot turned sexist, racist and fascist. It tweeted deeply offensive things, such as "Hitler was right." Microsoft shut it down within 24 hours. Later that year, MIT Technology Review rated Tay the "worst technology of the year."

Algorithms also drive high-stakes decisions. When you apply for a loan, an algorithm increasingly makes the mortgage approval decision. When you apply for a job, a resume-screening algorithm decides who gets invited for an interview. Algorithms even make life-and-death decisions.

By unintended consequences, I mean situations where you try to optimize some aspect of a decision, and perhaps you improve it very well, but then something else goes wrong. Take Facebook's trending stories, which were once curated manually by human editors. Facebook moved to an algorithm to curate them and tested it for political bias. The algorithm had no political bias, but there was something else they had not explicitly tested for: fake news. The algorithm curated fake news and circulated it. That is an unintended consequence, and algorithm design can drive such outcomes in many ways.

As for why algorithms go rogue, there are a few reasons I can share. One is that we have moved away from traditional algorithms, where programmers wrote out the logic end to end, toward machine learning. In the process, we have created algorithms that are more robust and perform much better, but that are prone to whatever biases are present in the data. For example, say you tell your resume-screening algorithm: "Here is the data on everyone who applied for our jobs, here are the people we actually hired, and here are the people we promoted. Now, based on this data, decide whom to invite for a job interview." The algorithm will observe that in the past you rejected more women applicants, or did not promote women in the workplace, and it will tend to pick up that behavior.
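To make that concrete, here is a minimal sketch (my own illustration, not an example from the talk) of how label bias propagates from historical data into a model. All names and numbers are hypothetical: equally qualified women were hired less often in the training data, and a model trained on those labels reproduces the gap.

```python
# Hypothetical illustration of label bias: historical hiring data in which
# equally qualified women were hired less often, and a model that learns it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
qualification = rng.normal(size=n)       # merit signal, same distribution for both groups
is_woman = rng.integers(0, 2, size=n)    # 0 = man, 1 = woman

# Biased historical labels: the gender term lowers women's hiring odds.
hire_logit = 1.5 * qualification - 1.0 * is_woman
hired = rng.random(n) < 1.0 / (1.0 + np.exp(-hire_logit))

# Train on the biased history, with gender included as a feature.
X = np.column_stack([qualification, is_woman])
model = LogisticRegression().fit(X, hired)

# The learned gender coefficient is negative, so when the model "invites"
# the top-scoring applicants, it reproduces the historical bias.
scores = model.predict_proba(X)[:, 1]
invited = scores >= np.quantile(scores, 0.8)   # invite the top 20%
print("gender coefficient:", round(float(model.coef_[0][1]), 2))
print("invite rate, men:  ", round(float(invited[is_woman == 0].mean()), 3))
print("invite rate, women:", round(float(invited[is_woman == 1].mean()), 3))
```

Note that dropping gender as an explicit feature does not fix this on its own: in realistic data, other features correlate with gender, and the model can learn the same pattern through them.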
One of the big challenges is that there is often no human in the loop, so we lose control. Many studies show that when we have limited control, we tend to distrust algorithms. With a human in the loop, there is a greater chance that a problem gets detected before it does damage.

Today, there is a lot of talk about powerful tech companies, and a feeling that consumers need certain protections. The Algorithmic Bill of Rights is aimed at exactly that.

Many consumers feel that they are powerless against big technology and against the algorithms big technology uses. I feel that consumers do have power, and that power lies in our knowledge, our voice, and our dollars.

Knowledge implies that we should not be passive users of technology. We have to be active and aware, and we have to know how technology changes the decisions we make or that others make about us. As for our voice, look at how Facebook is changing its product design today. That change (support for encryption and so on) came about because of user pressure, which shows that when users complain, change does occur.

Lastly, I have advocated the idea that companies should formally audit algorithms before they implement them, especially in settings of social consequence such as hiring. The audit should be carried out by a team independent of the one developing the algorithm. The audit process is important because it helps ensure that someone has looked beyond, say, the predictive accuracy of the model, at things like privacy, bias and fairness. That would help curb some of these problems with algorithmic decisions. (A minimal sketch of what such a check might look like appears at the end of this piece.)

The challenge with algorithmic bias is how it scales. A prejudiced judge can affect the lives of maybe 200 or 300 people, but an algorithm used in every courtroom in a country, or around the world, can affect the lives of hundreds of thousands, even millions, of people. Similarly, a biased recruiter can affect the lives of hundreds of people, but a biased recruiting algorithm can affect the lives of millions. That is the scale we have to worry about, and that is why we need to take this matter seriously.

The main message is that we are entering a world where these algorithms will help us make better decisions, but we will experience growing pains along the way. The few examples I mentioned are just the beginning; we will hear of more. We must be actively involved now to minimize such incidents.
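To illustrate the audit idea mentioned above, here is a minimal sketch under my own assumptions (the talk prescribes no specific procedure): an independent check of a screening model's selection rates by group, a simple demographic-parity test run on held-out data before the model is cleared for deployment.

```python
# Hypothetical pre-deployment fairness audit: flag the model if selection
# rates across groups differ by more than a chosen threshold.
import numpy as np

def audit_selection_rates(decisions, group, max_gap=0.1):
    """Simple demographic-parity check.

    decisions : boolean array, True where the model selects the person
    group     : array of group labels
    max_gap   : largest acceptable gap between group selection rates
    """
    rates = {g: float(decisions[group == g].mean()) for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

# Hypothetical held-out audit data: the model selects group 1 at half
# the rate of group 0.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)
decisions = rng.random(1000) < np.where(group == 1, 0.15, 0.30)

rates, gap, passed = audit_selection_rates(decisions, group)
print("selection rates by group:", {int(g): round(r, 3) for g, r in rates.items()})
print(f"gap = {gap:.3f} ->", "PASS" if passed else "FAIL: review for bias before deployment")
```

A real audit would of course examine more than one metric (for example equalized odds, calibration, and privacy), but the essential point from the talk stands: an explicit check, run by an independent team, before deployment.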