# AI regulation and human decision making | Chitro Majumdar | TEDxSaintLouisUniversityMadrid
The speaker argues that AI's integration into daily life, while creating unprecedented intelligence and flexibility, presents profound systemic risks that require ethical regulation and human collaboration rather than mere competition. Specific examples include the ethical dilemmas faced by autonomous vehicles and the necessity of integrating philosophy and mathematics into AI's decision-making processes. The core call to action is for humanity to define its own intrinsic, "brandable" value alongside technological progress to manage future uncertainty.

## Speakers & Context

- Chitro Majumdar (named in the title; presented as an expert in risk management, ethics, and technology).
- Delivery style suggests a technical presentation transitioning into a philosophical warning.
- Discusses the necessity of regulating AI applications, citing the fluctuation of Google's policy announcements (e.g., a vertical panel announcement followed by its cancellation).

## Theses & Positions

- AI integration creates intelligence and flexibility far beyond manual systems by analyzing life streams and accumulating data.
- The transition to AI-enabled systems is a nonlinear process, marked by systemic risks and disruption.
- Ethical consideration in AI requires more than solving problems; it demands asking deep, complex questions.
- AI systems must evolve beyond current models, perhaps requiring an invention on the order of $E=mc^2$ or the concept of zero (the speaker's "invention of EMS").
- Humanity must not compete with machines but collaborate with them, leveraging human accountability, trust, and the capacity for intrinsic value.

## Concepts & Definitions

- **Artificial Intelligence (AI):** Illustrated via the "AI wake-up call": a system that accumulates real-life data streams to provide predictive signals (e.g., optimal sleep needs).
- **Linear vs. Nonlinear:** Linear movement is predictable from one point to another; nonlinear movement involves unpredictable "zig-zag" shifts (e.g., in the economy).
- **Systemic Risk:** The phenomenon where an external, random variable enters a system (attributed to disruption, AI, or otherwise), causing widespread failure.
- **Systemic Ability to Reason:** A future state in which AI is expected to give subjective ratings, like a "distance to default" score, challenging established financial models.
- **Algorithmic Morality:** A proposed framework for regulating AI ethics, involving concepts like the likelihood of an action (X-axis), its severity, and its impact.
- **Intrinsic Value / Brandable:** Value that is fundamental and inherent to human identity, separate from superficial marketing gimmicks.

## Mechanisms & Processes

- **AI Data Processing:** Takes complex, real-life data (work schedules, meeting times, autonomous-car sensor data) and synthesizes it to provide predictive outputs (e.g., required sleep duration).
- **Data Science Sampling:** Analyzing a small, representative sample from a massive dataset (e.g., analyzing a project from 1 terabyte of data).
- **Ethical Reflection (Post-Action):** A proposed function in which AI systems reflect on the consequences of their own learning and decision-making processes.
- **Human vs. Machine Collaboration:** Requires understanding heterogeneity; the speaker cites the impossibility of replacing the entire labor force or population of a region (such as India or China) overnight.

## Timeline & Sequence

- **Next 10–20 years:** The speaker posits that the current rate of innovation suggests significant changes within the next 10 to 20 years.
- **Future Projection:** A shift towards AI giving systemic ratings similar to financial probability metrics.

## Named Entities

- **Google, Alexa, Siri:** Examples of existing AI interfaces/assistants.
- **Tesla:** Example of a company developing autonomous vehicles.
- **Jacques Derrida, Michel Foucault:** Philosophers whose work introduced concepts like "problematics" relevant to AI ethics.
- **Daniel Gilbert:** Harvard psychologist known for commentary on the "end of history."
- **Elon Musk:** Made comments about immortal machines potentially dictating society.

## Numbers & Data

- **1 terabyte:** Example scale of data that must be analyzed via small-sample techniques.
- **Two decades (20 years):** Timeframe used to discuss financial metrics like "distance to default."
- **9-year-old daughter or son:** Used in the context of questioning AI assistants about reality (e.g., "Is grass wet if it's not raining today?").

## Examples & Cases

- **Autonomous Wake-up Call:** AI monitors past records to determine the necessary sleep duration for the morning.
- **Self-Driving Car Data:** Captures obstacle data and calculates potential complexity/failure rates during operation.
- **Dilemma of Autonomous Vehicles:** Collision-prioritization dilemmas (e.g., choosing between hitting a pedestrian and causing another outcome).
- **Abortion Ethics:** Contrasting viewpoints in which a doctor may prioritize the mother's rights over religious/moral objections.
- **Raining/Wet-Grass Paradox:** An AI assistant fails when asked whether grass can be wet if it did not rain, illustrating limited understanding.

## Tools, Tech & Products

- **AI-driven/AI-enabled Wake-up Call:** System that ingests lifestyle data to set alerts.
- **Self-driving Car:** Technology capturing obstacle data.
- **AI Assistants:** Products like Alexa and Siri.

## References Cited

- **Philosophy of Conscience:** Described as a pluralistic notion, contrasting with rigid rules.
- **Daniel Gilbert's Comment on the "End of History":** Cited as an example of a popular but potentially flawed philosophical viewpoint.

## Counterarguments & Caveats

- The concept of the "end of history" is presented as a potential illusion because planetary resources (brain/body/genome research) are limited regardless of technological advancement.
- The assumption that AI can simply absorb ethical codes overlooks the inherent unpredictability of human experience (e.g., the collision dilemma).

## Conclusions & Recommendations

- Regulation of AI is necessary, specifically demanding that AI itself be subjected to an ethical, mathematical framework.
- The final goal is not to compete with machines but to achieve a synthesis where the human capacity for accountability, trust, and *intrinsic value* guides technological evolution.
- Human action should aim to calculate and mitigate *economic risk capital* for future uncertainty.

## Implications & Consequences

- AI risks transforming core systems (law, government, finance) so rapidly that current ethical, legal, and governance structures may become obsolete.
- The integration of AI necessitates elevating ethical discourse from a mere philosophical thought experiment to a core mathematical and regulatory component of technology design.

## Verbatim Moments

- *"The AI means artificial intelligence wake-up call so which is embedded with a system and it accumulate all sorts of data from your real life and it's a kind of life stream..."*
- *"AI-driven or enabled knowledge makes us intelligent and far more flexible than manual systems."*
- *"Ethics is a philosophical thought it takes not about solving problems but asking questions and asking questions over time..."*
- *"I need to have another kind of invention of EMS is e - MC square or zero or something else."*
- *"We are in a system where every random variable come to your system in the name of disruption or in the name of AI or in the name of whatever and make it disrupt it."*
- *"We don't know that needs to be evolved and very very artistic way."*
- *"We should not compete with machine these never be machine versus human so we need to collaborate."*
- *"Human will be with more capacity more accountability and also the trust with the mission..."*
- *"My final comment is we need to be human it needs to be brandable brandable means beyond or apart from marketing gimmick it's more of intrinsic value..."*
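## Illustrative Sketch

The "Algorithmic Morality" framework summarized above scores an action along three axes: likelihood, severity, and impact. A minimal toy sketch of such a scoring function follows, assuming a simple multiplicative combination and coarse decision thresholds; the function names, combination rule, and cutoffs are hypothetical illustrations, not the speaker's actual model.

```python
def risk_score(likelihood: float, severity: float, impact: float) -> float:
    """Combine the three axes of the (hypothetical) algorithmic-morality
    matrix into a single score in [0, 1]: how likely the action is, how bad
    its outcome would be, and how widely that outcome would be felt."""
    for axis in (likelihood, severity, impact):
        if not 0.0 <= axis <= 1.0:
            raise ValueError("each axis must lie in [0, 1]")
    return likelihood * severity * impact


def decision(score: float) -> str:
    """Map a risk score to a coarse regulatory outcome (thresholds are
    illustrative assumptions, not taken from the talk)."""
    if score < 0.1:
        return "permit"
    if score < 0.4:
        return "review"
    return "block"


# Example: a rare (likelihood 0.05) but severe, wide-impact action.
score = risk_score(0.05, 0.9, 0.9)   # 0.0405
print(decision(score))               # low likelihood keeps it below review
```

A multiplicative rule means any near-zero axis drives the whole score toward zero; a regulator who instead wants severity alone to dominate could swap in `max()` or a weighted sum, which is exactly the kind of design choice an ethical-mathematical framework would have to justify.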