Human-AI Collaboration: a Robotic State of Mind | Siri Beerends | TEDxUniversityofGroningen
The speaker argues that instead of expecting AI to solve societal problems, we must focus on improving our social and economic systems, as AI is merely a reflection of past data and causes us to mimic robotic behaviors. The evidence for this is shown by examples like Jira, who is essentially a human data producer for AI, and the argument that AI is an ideology based on commercial data collection rather than true intelligence.
## Speakers & Context
- Siri Beerends (named in the title).
- Addressing the audience regarding the relationship between humans, AI, and societal progress.
- Notes that understanding AI requires looking back at historical precedents, like the Mechanical Turk.
## Theses & Positions
- AI is a problematic concept, and the public understanding of it is flawed.
- AI is not inherently synonymous with progress; the collaboration with AI forces humans to behave like robots, rather than the other way around.
- The "big promise" of AI—that it will free humans to pursue meaningful endeavors—is incorrect; instead, robots flourish while humans work to train and satisfy them.
- AI is not just a technology or a tool; it is an *ideology* based on the economic principle of commercial data collection.
- AI is inherently conservative because it is trained on past data, meaning it only reproduces existing patterns, not true innovation.
- Real innovation requires changing social and economic systems, not just building better computers.
## Concepts & Definitions
- **Artificial Intelligence (AI):** In the public debate, computer systems that can independently perform tasks and improve their own performance by learning from data.
- **Ideology (of AI):** Based on the economic principle of commercial data collection, capturing all human processes into data.
- **Pattern Recognition:** One component of intelligence redefined by AI (alongside computation power and information processing).
- **Bodily Intelligence:** Intelligence inherent in our embodied brains, which are energy-efficient when learning new things.
- **Mechanistic Understanding of Intelligence:** Redefines intelligence only in terms of what computers can do (pattern recognition, computation).
## Mechanisms & Processes
- **The Mechanical Turk:** A chess machine built 250 years ago that required a hidden human player to operate, fooling the public into believing the machine worked independently.
- **AI Implementation Examples:**
* Product suggestions popping up in social media/advertisements based on internet searches.
* Smartphone word prediction while typing a sentence.
* Government use of AI to predict risks (e.g., criminal behavior or burglaries) based on big data.
- **Data Production:** People who select images in CAPTCHA tests (e.g., "select all cars") are not users of AI but its producers and beta testers.
- **Empathy Scoring:** Call centers use AI to score empathy, incentivizing workers to use specific words (like "sorry") to increase their score.
- **Behavior Adaptation:** Researchers found that humans adapt communication styles (e.g., talking in a "commanding robotic way") to conform to digital devices like chatbots and voice assistants.
- **AI Energy Usage:** Training a single AI model can produce as much carbon emissions as five cars over their lifetimes, making it highly energy-inefficient.
## Timeline & Sequence
- **250 years ago:** Invention of the Mechanical Turk chess machine.
- **Past 50 days (Jira example):** Time Jira has spent selecting drain pits in sidewalks to train AI systems.
- **Next 50 days (Jira example):** Time Jira expects to spend after being "upgraded" to selecting helipads on rooftops.
## Named Entities
- **Jira:** An individual with a work-from-home job on a click-work platform who trains AI systems.
- **Nina:** An individual working in a call center whose job is monitored by AI empathy scoring.
- **Jimmy:** Nina's little brother, who talks in a "commanding robotic way."
- **Grabby:** A smart home speaker that overhears Jimmy's conversations.
- **Peppa Pig song:** A song from which Nina was shocked to discover Grabby collects data.
## Numbers & Data
- The Mechanical Turk was built **250 years ago**.
- Jira has been working for the **past 50 days** on drain pits.
- Jira is scheduled for the **next 50 days** of work on helipads.
- Emissions comparison: Training a single AI model can produce as much carbon as **five cars** over their lifetimes.
## Examples & Cases
- **The Mechanical Turk:** Hiding a human chess player inside the machine to deceive observers.
- **CAPTCHA test:** The task of selecting correct images (e.g., cars or traffic lights) to prove a human is not a robot.
- **Nina's job:** Being penalized or incentivized in a call center by an AI empathy score.
- **Smart Devices Monitoring:** Algorithms monitoring workers in hotels, warehouses, and delivery jobs concerning speed and method.
- **The realization:** Humans adapt their behavior to fit digital devices, rather than the technology adapting to humans.
## Tools, Tech & Products
- **Digital voice assistants/Chatbots/Smart speakers:** Examples of technology that influence human communication.
- **AI Systems:** The general technology being discussed, used for data analysis, pattern prediction, etc.
## References Cited
- None explicitly cited as external sources; the talk refers only to types of systems (call centers, home speakers).
## Trade-offs & Alternatives
- **AI Dependence vs. Human Agency:** The choice between relying on AI predictions/optimization versus acting based on moral intentions and lived experience.
- **Mechanistic Logic vs. Messy Reality:** The trade-off between the predictability of an AI-optimized world (a "predictable board game") and the complexity of real life.
- **AI Development vs. Societal Improvement:** The choice to focus on symptom control (developing AI diagnoses for COVID) versus addressing the root flaws in the social/economic systems.
## Counterarguments & Caveats
- The premise that AI will automatically improve society is incorrect.
- The assumption that AI development equals societal progress.
- AI's current capabilities are limited by the data it is trained on (i.e., it is inherently conservative).
## Methodology
- Using historical illustrations (Mechanical Turk) to explain present technology.
- Demonstrating current AI impact via case studies (Jira, Nina, chatbots).
- Analyzing AI not as a technical feat, but as an ideological structure built on data extraction.
## Conclusions & Recommendations
- Instead of expecting more from AI, we should expect more from each other.
- The focus must shift from developing better AI systems to improving flawed social and economic systems.
- Potential must be developed through action based on "moral intentions, environmental awareness, and long-term perspectives."
## Implications & Consequences
- AI fundamentally changes our moral understanding of good and bad by making data collection the primary objective.
- A society optimized for data monitoring risks losing aspects of human life that cannot be digitized or quantified.
- Technological progress does not automatically imply societal progress.
## Verbatim Moments
- *"I am not a robot at least not yet"*
- *"Artificial intelligence is not necessarily about progress"*
- *"The big promise is that humans will flourish and robots will work but unfortunately i have bad news because in fact it is exactly the other way around"*
- *"You are not the user of artificial intelligence you are the producer of artificial intelligence"*
- *"The ai empathy score is not a fiction it is really used in call centers"*
- *"we adapt our communication to our digital devices so chatbots digital voice assistants smart smart home speakers they all have an influence on how we communicate with each other"*
- *"AI changes our moral ideas of what good and bad means"*
- *"we should expect more from each other and less from a.i"*
- *"technological progress is not the same as societal progress"*