The case for empathetic AI | Pierre Robinet | TEDxNTU
A speaker argues that over-reliance on AI systems such as GPS and social media degrades human cognition and social skills because these systems fail to account for human needs. He proposes that achieving human-machine cohabitation requires redesigning AI for richer human interaction, transparency, and a systematic assessment of its social impact. He concludes by advocating that users actively guide AI development to serve uniquely human social and emotional needs.

## Theses & Positions

- AI, while functional and efficient, is not entirely serving mankind and generates impacts larger than anticipated.
- Current AI models focus too narrowly on functional human needs, failing to prevent addictive behavior, poor relationship maintenance, or negative social impact.
- True human-machine cohabitation requires granting more space to the human element in AI design, promoting deeper human-machine interaction.
- Human uniqueness resides in our ability to recognize ourselves among others and to interact socially, which machines currently lack.
- The goal must be to create "real human-centric AI" by prioritizing social interaction and transparency over mere efficiency.

## Concepts & Definitions

- **Human-centric AI:** AI designed to prioritize and incorporate human needs, social context, and unique human qualities.
- **Shadow walkers:** Metaphor for omnipresent, unseen AI systems governing daily-life decisions (e.g., algorithms, automated decisions).
- **Homogeneous interaction protocol:** A proposed model for two-way communication between human and machine, letting people define themselves in relation to the machine and letting the machine adapt better to them.

## Mechanisms & Processes

- **AI influence:** AI can improve speed, accuracy, and efficiency in thinking and working, but prolonged reliance weakens human cognitive abilities over time.
- **Cognitive erosion example:** Over-reliance on GPS erodes spatial awareness, to the point that a driver misses the right street and fails to notice a person waiting just a few meters away.
- **Recommended feedback loop:** Use data from platforms (such as GPS or social media) to run cognitive tests or detect addictive behavior, channeling this knowledge to healthcare professionals and potentially changing the platform's business model in the process.
- **"Mine Nai" process:** A proposed research method for designing future GPS systems that uses visual and spatial cues (e.g., "turn right at the red building") instead of plain turn-by-turn text instructions, in order to protect spatial awareness.
- **Emotional interaction:** Machines could adapt to the user's mood (e.g., adjusting the car temperature) or play preferred music, both providing feedback and gathering data on emotional state.

## Timeline & Sequence

- **Event observed:** Three years prior, while cleaning a room, the speaker found his two young sons watching *Paw Patrol* and ordering content via Google Assistant.
- **Current technology state:** Facial recognition for payments; health diagnostics proposed entirely by machine intelligence; algorithms governing much of daily life.
- **Future state goal:** A world where human guidance ensures AI services are designed for social and emotional well-being, assessed through systematic human rating.

## Named Entities

- **Google:** Mentioned regarding the functionality of Google Assistant.
- **Facebook:** Cited as an example of a social media platform whose addictive design is linked to mental health issues.
- **Lee Kuan Yew Centre for Innovative Cities at SUTD:** The source of the "Mine Nai" process research.

## Numbers & Data

- Children's ages observed: **two and three years old**.
- Potential behavioral failure mentioned: a taxi driver might miss the street or circle the neighborhood for **20 minutes** without the traveler noticing anything is wrong.
- Time frame for data gathering: algorithms are working "day and night, **24 hours**" today.
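The feedback-loop idea above could be prototyped with a simple heuristic over usage logs. The sketch below is illustrative only: the session format, thresholds, and function name are assumptions, not anything described in the talk, and a real system would route such flags to healthcare professionals rather than act on them.

```python
from datetime import datetime

# Hypothetical session log: (start time, minutes spent) per platform visit.
sessions = [
    (datetime(2024, 5, 1, 23, 40), 55),
    (datetime(2024, 5, 2, 0, 50), 30),
    (datetime(2024, 5, 2, 1, 30), 45),
    (datetime(2024, 5, 2, 14, 0), 10),
]

def flag_possible_overuse(sessions, daily_minutes=90, late_night_count=2):
    """Illustrative heuristic: flag heavy total use or repeated
    late-night sessions (midnight to 5 a.m.) for professional review."""
    total = sum(minutes for _, minutes in sessions)
    late = sum(1 for start, _ in sessions if start.hour < 5)
    reasons = []
    if total > daily_minutes:
        reasons.append(f"high total use: {total} min")
    if late >= late_night_count:
        reasons.append(f"{late} late-night sessions")
    return reasons

print(flag_possible_overuse(sessions))
# → ['high total use: 140 min', '2 late-night sessions']
```

The point of the sketch is that the data platforms already collect could serve a clinical purpose rather than only an engagement one, which is exactly the business-model shift the speaker gestures at.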
## Examples & Cases

- **AI politeness dilemma:** A three-year-old ordering *Paw Patrol* via Google Assistant, highlighting the absence of the politeness and permission-seeking expected in French culture.
- **GPS failure:** A taxi driver follows a GPS that directs him to the wrong street (**29a** instead of **29e**), forcing the speaker to rely on his own observation.
- **AI medical diagnosis:** Health diagnoses proposed entirely by machine intelligence, needing external human validation.
- **"Mine Nai" process example:** A future GPS interface guiding a driver to "turn right at the red building" or "turn left after the petrol station at the level of the bridge," protecting the driver's spatial sense.

## Tools, Tech & Products

- **Google Assistant:** Voice interface capable of executing complex commands such as playing cartoons.
- **GPS system:** Used for navigation; cited both for functional failure (stopping at the wrong street) and as a potential source of data for detecting addiction.
- **Facial recognition:** Used for payment at cashiers or for opening gates.
- **Artificial intelligence (AI) systems:** General category of predictive, autonomous decision-making tools.
- **"Mine Nai" process:** A proposed framework for next-generation GPS design providing visual/spatial cues.

## References Cited

- **Dominique Wolton:** Director of Research at the French National Centre for Scientific Research (CNRS).

## Counterarguments & Caveats

- Some aspects of AI are successful, improving speed, accuracy, and efficiency in human work and thinking.
- Some current practices (such as social media) are "deliberately designed to make us addicted," suggesting the problem lies in design intent as much as in the technology itself.

## Methodology

- **Cognitive assessment:** The proposal to use platforms (such as GPS or social media) to conduct cognitive tests or monitor for addictive behavior, gathering data for clinical and design improvement.
- **Observation:** The initial case study of the speaker's sons interacting with Google Assistant.
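The landmark-based guidance described above can be sketched as a small function that prefers a visual cue over a bare turn instruction whenever one is available near the maneuver. The data model, cue phrasing, and fallback distance below are invented for illustration; they are not part of the research the speaker cites.

```python
def landmark_instruction(turn, landmark=None):
    """Render one navigation step, preferring a visual/spatial cue
    ("turn right at the red building") over conventional
    turn-by-turn text, per the proposed GPS redesign."""
    base = f"turn {turn}"
    if landmark:
        return f"{base} at the {landmark}"
    return f"{base} in 200 meters"  # fallback: conventional instruction

route = [
    ("right", "red building"),
    ("left", "petrol station"),
    ("left", None),  # no usable landmark near this turn
]

for turn, landmark in route:
    print(landmark_instruction(turn, landmark))
# → turn right at the red building
# → turn left at the petrol station
# → turn left in 200 meters
```

The design intent is that forcing the driver to look for the red building keeps spatial memory exercised, whereas "in 200 meters, turn right" can be obeyed without ever forming a mental map.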
- **Design proposal:** The "Mine Nai" process envisions redesigning navigation to support spatial memory rather than merely providing turn-by-turn textual instructions.

## Conclusions & Recommendations

- Rebuild AI design to incorporate human social interaction, emotional states, and perceived social context.
- Reassess AI's impact on behavior and cognition, both short-term and long-term.
- Specific recommendations: 1) develop data-sharing models (e.g., allowing physicians access to platform data); 2) redesign AI interfaces (as in the "Mine Nai" process); 3) improve social acceptability by enhancing transparency and interaction (e.g., asking a taxi driver how his day was).

## Implications & Consequences

- Unchecked AI dependency risks degrading fundamental human skills such as spatial memory and polite social behavior.
- A successful model would co-define human boundaries within machine capabilities, treating the relationship as a partnership rather than mere utility.
- If successful, AI could evolve to rate and reward positive human-machine interactions, similar to customer-service ratings.

## Verbatim Moments

- *"ok google can you play paw patrol"*
- *"should i teach my kids to be polite to google"*
- *"we think we create human-centric ai but we create machines that are not entirely serving mankind"*
- *"are we not missing here a two-way communication between human and machine, a kind of homogeneous interaction protocol to freely define ourselves in relation to it"*
- *"the lee kuan yew centre for innovative cities at sutd"*
- *"turn right at the red building or turn left after the petrol station at the level of the bridge"*
- *"i would personally value very much that an algorithm could give me the reason i have been rejected for a loan"*
- *"let's reimagine a world where we will assess more how human are the artificial intelligence"*