Who Moves the World when AI Decides? | Sumaira Kausar | TEDxNamalUniversity
URL: https://www.youtube.com/watch?v=qwnmOVOc1gs
Video ID: qwnmOVOc1gs

Okay. Before I begin, I want to tell you that I have an AI system running right now, silently scanning this auditorium. It is not recording your conversations. It is not reading your thoughts. It is only observing patterns: faces, movements, postures, and some historical data. And according to this system, three people sitting here have been identified as potential terrorists. I am not going to point them out, but the system already has.

Imagine this were true. Would you trust it? Would you stand up and say it is wrong? Or would you simply feel relieved that somebody is taking care of things? The problem here is not a technical one. It is a power problem. And that brings me to today's topic: who moves the world when AI decides?

The most unsettling thing in that scenario was not the technology itself; it was how easily authority moved from humans to machines. We are living in an age where decisions are increasingly made by systems we do not see. AI systems decide who should be hired, who should be shortlisted, who should be trusted, who should be suspected. We are told that these decisions are AI-driven, data-driven, objective; that they follow an algorithm and reduce human error and bias. But in reality, AI is not removing power. It is redistributing it, and hiding it.

In the past, a decision had a face: a teacher, a judge, a boss, a manager. We could ask questions. Now we are simply told "the system has flagged you," and the conversation ends there. So let us go back to the initial scenario, where three people were flagged as potential terrorists.
If that decision goes wrong, who bears the cost? The person is already flagged, already denied, already watched more closely. AI is very good at one thing: prediction. But we need to understand that a prediction is not the same as truth. AI never says "I believe"; it says there is a likelihood, a chance, a probability. The real danger is that probability becomes policy, and suspicion becomes automated. When we start believing that AI systems make neutral decisions, these predictions become judgments.

But these predictions do not come from nowhere. They come from the histories we choose to shape them with. And histories can be biased. We usually speak of bias as a technical flaw, but it is not; it lives in the historical data we use to build AI systems. If an AI system is learning, it is learning from the past: past diagnoses, past hiring, past arrests. If, in that history, a particular group was treated as dangerous, the AI learns that too, through religions, ethnicities, accents, neighborhoods. Memory is automated. And when memory is automated, the machine starts reproducing the biases of that history as predictions, and we start believing what it says. So we have to question the authenticity of the historical data a system uses. And when systems automate memory, one thing starts to disappear: responsibility.
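The point that "probability becomes policy" can be made concrete with a small sketch. This is purely illustrative and not from the talk: the model, its features, its weights, and the threshold are all invented. The model only ever outputs a likelihood; it is the hard-coded threshold, a human policy choice, that turns that likelihood into a binary flag on a person.

```python
def model_score(person: dict) -> float:
    """Stand-in for a trained model: returns a probability, never a verdict.

    Hypothetical feature ("prior_flags") and weight, purely for illustration.
    """
    return min(1.0, 0.2 * person.get("prior_flags", 0) + 0.1)


# Policy decision, not a model output: who chose 0.5, and why?
FLAG_THRESHOLD = 0.5


def automated_decision(person: dict) -> bool:
    """The moment probability becomes policy: a likelihood is collapsed
    into a yes/no flag, with no human in between."""
    return model_score(person) >= FLAG_THRESHOLD


if __name__ == "__main__":
    for person in [{"prior_flags": 0}, {"prior_flags": 3}]:
        print(person, model_score(person), automated_decision(person))
```

Notice that everything contestable, which group history counts against, and where the cut-off sits, is decided before the system runs; the "objective" part is only the arithmetic in between.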
Imagine I pointed at three people here, said "you are dangerous," and that turned out to be wrong. Who takes the responsibility? The developer, the organization, the decision maker, or the algorithm? Everyone points elsewhere. This is what is called the accountability gap. When everybody relies on AI systems, no one feels responsible. And that is far more dangerous than a biased human decision, because humans can be questioned, while systems we tend to trust silently.

So what do we need to do? We do not need to reject AI. We need to reject the idea that intelligence equals authority. We need to insist on three things. Number one: a human should be in the loop when decisions are made. Let AI advise, and let humans decide. Number two: explainability should take priority over blind accuracy. If we cannot explain a system, we should not deploy it. Number three, and most important: ethics should not be treated as a policy document; it should be a foundation and a primary design requirement. AI can be powerful, but humans must remain accountable.

Let me end where I began. If a system identifies a few people as terrorists or dangerous, and no human questions it, then the world is no longer moved by minds; it is moved by machines. Keep building intelligent systems, but humans must not stop taking responsibility for decisions. Decisions should live with human beings. It is not the machine that moves the world; it is the mind. The future should not be shaped by artificial intelligence alone. It should be shaped by how human we choose to remain. Thank you very much.