
Transcript

AI regulation and human decision making | Chitro Majumdar | TEDxSaintLouisUniversityMadrid

[Music] Think of AI — artificial intelligence — in a wake-up call: an alarm embedded in a system that accumulates all sorts of data from your real life. It takes in a kind of life stream and gives a signal: when I need to get up in the morning, how much sleep I really need, based on my past records. So what is the difference between an AI-driven, or AI-enabled, wake-up call and a manually operated one? With a manually operated alarm, you have to set the clock yourself every day; there is no streaming. Here, the system takes data from our everyday life — whatever is happening with me, going to work, at what time, what the meeting schedule is. Or take a self-driving car: it is also capturing lots of data, and that data tells it what the obstacles are. In a couple of minutes I will show how complex this can be if certain things do not go smoothly. So AI-driven, AI-enabled knowledge makes us more intelligent and far more flexible than manual systems — and I am not talking only about alarm clocks.

There is a combination of linear versus nonlinear. Linear means you go from one point to another; nonlinear means zigzag, and zigzags happen in our economy. What happens with the endorsement of AI, or AI combined with the Internet of Things, if we do not regulate the applications that shape our everyday life? Every day we see Google, Alexa, Siri. But we have to intervene with regulation and a set of policies — and there are attempts at policy all around: a couple of weeks ago Google announced an ethics advisory panel, and then, about a week later, they cancelled that proposition. So how will we do it? Whether these systems work with our security or invade our private data, they need to be regulated.

And what about the data? The most interesting job today is data scientist. You get the whole data set, you draw a small sample, and from that sample you run your analysis or projections. Even if you have one terabyte of data — which a driverless car has to take in while driving — you still need to bring it down to a small sample and analyze it. I will tell you a little later whether that is sustainable or not.

Ethics is a philosophical pursuit: it is not about solving problems but about asking questions, and asking questions over time means you are approaching that nonlinearity, which comes from all the topics, all the subjects. So AI needs to be integrated with those subjects and with expertise from them — philosophers, mathematicians. India invented zero, and we have seen many mathematical novelties over time, which we are still experiencing. My proposition is that we need another invention of that kind — on the order of E = mc², or zero, or something else.

If you look at the red and blue lines on the slide, these two random variables illustrate what is called systemic risk. I have been in the field of risk management for the last twenty years, so I know how systemic risk works: some exogenous random variable comes into your system — in the name of disruption, in the name of AI, in the name of whatever — and disrupts it. If we control that, if we mitigate it, then we are talking about sustainability; otherwise there are a lot of pitfalls.

Imagine systems actually based on the affordance of reasoning; moral reasoning is one of the components. Why do I say reasoning? Because whatever we are studying and teaching today will change very soon. From 2019, over the next 10 to 20 years, 10 or 20 times as much innovation is going to happen, and in that time there are huge AI pitfalls — AI-driven risks we will need to confront. The rating agencies say, "we will rate your system triple-A" — the kind of triple-A that Moody's and Standard & Poor's give today.
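The sampling workflow described above — drawing a small, analyzable sample from a data stream far too large to hold, such as a car's terabyte-scale feed — is commonly done with reservoir sampling. This is an illustrative sketch only, not the speaker's method; the stream, sample size, and seed are invented for the example.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Keep a uniform random sample of k items from a stream of unknown length,
    using O(k) memory regardless of how long the stream is."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)           # item i survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# A "one-terabyte" stream stands in here for a large range of readings:
sample = reservoir_sample(range(1_000_000), 10, seed=42)
```

Each element of the stream ends up in the final sample with equal probability, which is what makes conclusions drawn from the small sample defensible.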
Now, as a mathematician, I understand a little of how you calculate that probability of default, or distance to default. If your distance to default is 20 years, you are triple-A; if your distance to default is tomorrow, at time T, you are junk. But how do you calculate the distance to default, how do you calculate that probability? A machine will not obey those kinds of principles, so probability of default and distance to default will also need to change very soon, once we give systems the ability to reason and to give a rating themselves.

At the moment we do not have, in our society, a core of ethics: laws, government accountability at the state level, corporate transparency — there is huge corruption — and capability in every sense. So AI needs to be regulated, and this regulation, in my words, means AI — AI ethics — needs to be mathematicized. How? That raises the question of ethics and conduct as a likelihood-and-severity matrix: the x-axis is severity, the y-axis is likelihood — the probability and the impact. My proposition is post-action reflection by the AI systems on the consequences of their actions, built into the learning and decision-making process. Philosophers — Jacques Derrida, and then Michel Foucault — introduced "problematics"; many such things are, or could be, embedded in AI systems to make them more ethical. There is no definition as such of AI ethics, but a machine could very soon — when, I don't know; that is a question mark — say, "No, I must not do this." It will not obey you.

Now I will give you a couple of the humongous number of dilemmas among us. Our everyday life is full of dilemmas — even a discussion in a corporate meeting, or in a lecture, a student asking the teacher a question, or a teacher asking students questions to test their ability. Among those dilemmas, in many countries one could say: kill the old man, save the children.
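The distance-to-default figures mentioned here (20 years versus tomorrow) come from structural credit models. Below is a minimal sketch of the standard Merton-style calculation — not the speaker's own model — with every input invented purely for illustration.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def distance_to_default(assets, debt, mu, sigma, horizon_years):
    """Merton-style distance to default: how many standard deviations the
    expected log asset value sits above the debt barrier at the horizon."""
    return (math.log(assets / debt) + (mu - 0.5 * sigma**2) * horizon_years) \
           / (sigma * math.sqrt(horizon_years))

def probability_of_default(dd):
    """Under the model, default happens when assets fall below debt."""
    return norm_cdf(-dd)

# Hypothetical firm: assets 120, debt 100, 5% drift, 25% volatility, 1-year horizon.
dd = distance_to_default(assets=120.0, debt=100.0, mu=0.05, sigma=0.25, horizon_years=1.0)
pd_ = probability_of_default(dd)   # dd ≈ 0.80, pd ≈ 0.21
```

A larger distance to default maps to a better rating; shrinking it toward zero is the slide from triple-A toward junk.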
In other countries, like Switzerland, that is not allowed: you cannot kill anybody, and you also cannot prioritize whom to kill first or whom to save. These are the kinds of things the driverless-car makers — Tesla and the other big names — are already explaining, each in their own terms. So, as I said, there is a linear diagram and there is nonlinear dynamics: again, an exogenous random variable comes into your system in the name of systemic risk and disrupts you, and your decision processes are also caught in that loop of disruption.

Human conscience and morality are independent, and the philosophy of conscience is a pluralistic notion. Take the example of abortion: one doctor might say, "In my religious belief this is void; it cannot be done," while another could say, "No, I will prioritize the woman's rights first." So I am talking about ethics as the interplay of conscience and morality — the same thing, but a little bit different.

Another dilemma concerns the pedestrian, and the opacity of the car's decision-making: whether the car is reacting to a machine, to another human being, or to something else — that collinearity is also unknown. In every case there are a lot of unknowns. So an ethical calculus needs to be defined, with algorithmic morality in the car. It is not an easy job — it is almost more than rocket science — but we will do it: philosophers, mathematicians, sociologists, Nobel laureate economists, you never know who needs to be involved, in a very artistic way.

When a nine-year-old daughter or son says to one of the AI assistants, "It's raining; the grass is wet and flooded, as I can see," the assistant will say, "Yes, you are right." But when she or he flips the question some time later — "The grass is wet and flooded; is it raining today?" — even the AI-enabled device cannot answer. So we are still very far from a pluralistic notion of ethics.
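The child's flipped question — rain implies wet grass, but wet grass does not imply rain — is exactly the asymmetry Bayes' rule captures: inverting the implication needs a prior and the other ways the grass could get wet. A toy sketch, with all probabilities invented for illustration:

```python
def posterior_rain_given_wet(p_rain, p_wet_given_rain, p_wet_given_dry):
    """Bayes' rule: P(rain | wet grass) needs a prior P(rain) and the
    chance the grass got wet some other way (sprinklers, flooding)."""
    p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)
    return p_wet_given_rain * p_rain / p_wet

# Forward direction is easy: if it rains, the grass is almost surely wet (0.9).
# The inverse depends on context the assistant does not have:
p = posterior_rain_given_wet(p_rain=0.3, p_wet_given_rain=0.9, p_wet_given_dry=0.2)
# p ≈ 0.66: wet grass makes rain likely, but far from certain
```

The forward question can be answered from the rule alone; the inverse changes with the prior, which is one concrete face of the "lot of unknowns" in the talk.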
The Harvard psychologist Daniel Gilbert is popular for his comment on the "end of history" — and for me it is indeed an illusion, because the mileage of brain and body on this planet is limited. Whatever innovation, whatever invention we achieve — cancer-cell research, genome and gut research, everything we are doing, and AI, by the way, is helping in that research — we are still limited on this planet. Some of us may die very soon, some later; microbes will still be performing, millions more microbes, but humans will no longer be here. With the planet and planetary aging, with time and space, we are limited. But not machines. Elon Musk tweeted, or commented, that an immortal machine could become the dictator of our society — the mortal and the immortal stand in contrast. That is why I say we should not compete with machines: it should never be machine versus human. We need to collaborate, and once we collaborate there is a human upgrade, because humans will have more capacity, more accountability, and also trust with the machine. If the development of AI is not just a phase transition but more of an evolution, I think the risk of AI — the AI pitfalls — can be mitigated. But it can be mitigated only if we have plenty of nonlinear scientists and nonlinear mathematicians, and those kinds of things in place and embedded in the AI devices and AI processes.

One of those things is measuring heterogeneity. I will give you a good example, with the driverless car again — I am putting up a couple of dilemmas. The driverless car is the best-known dilemma people are facing: should it take signals from many directions? Where are we going to drive the driverless car — on Chinese and Indian roads, or here? Heterogeneity means that in the market you cannot replace the human labor force — or the population of India and China — overnight. So you need, again, to collaborate: human-driven and machine-driven together. That is what I call heterogeneity.

My final comment is that we humans need to be brandable. Brandable means, beyond or apart from any marketing gimmick, the intrinsic value we have. Then AI — ethical AI — will protect our jobs and our innovations, and, most importantly, will help us calculate our economic risk capital: how much capital we need for our future uncertainty. Then our future will be more promising, with the power of imagination. Thank you. [Applause]
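The "economic risk capital" in the closing remark — capital held against future uncertainty — is conventionally estimated with measures such as Value-at-Risk. Here is a minimal historical-VaR sketch; the P&L numbers and confidence level are invented for illustration, and real economic-capital models are far richer than this.

```python
import math

def historical_var(pnl, confidence=0.99):
    """Historical Value-at-Risk: the smallest loss level such that the
    fraction `confidence` of observed outcomes lose no more than it."""
    losses = sorted(-x for x in pnl)   # losses as positive numbers, ascending
    # round() guards the index against floating-point noise in the product
    idx = max(math.ceil(round(confidence * len(losses), 9)) - 1, 0)
    return losses[idx]

# Ten observed profit-and-loss outcomes (invented numbers):
pnl = [-5.0, -2.0, -1.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0]
capital = historical_var(pnl, confidence=0.9)   # capital buffer = 2.0
```

Read off at 90% confidence, nine of the ten outcomes lose no more than 2.0, so 2.0 is the buffer held against "future uncertainty" at that confidence level.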