The Future of AI | Peter Graf | TEDxSonomaCounty
The speaker argues that the danger of AI is not superintelligence but people's willingness to cede fundamental decision-making power to it. Because AI is trained on historical data, it can perpetuate past biases; he concludes that humans must insist on unbiased data, explainable decisions, and ultimate human accountability for critical choices.
## Speakers & Context
- Speaker: **Peter Graf** (per the title), presenting on Artificial Intelligence.
- Speaker works for **Genesys**, described as a leader in contact centers and experience orchestration.
- Speaker frames the discussion around a personal anxiety: *"I am worried not because I'm afraid of the robots taking over or a superintelligence ruling the world."*
## Theses & Positions
- The primary concern regarding AI is the over-willingness of people to "give away that superpower of decision making that makes us human."
- AI is merely a tool that works, in the speaker's words, "very, very simplistically."
- AI lacks consciousness, feeling, or agenda; it is "just computation."
- AI can be wrong in unpredictable ways and be completely unaware of those errors.
- Handing over too many decisions to AI exposes society to unintended, incomprehensible consequences.
- Ethical use of AI requires demanding three things: non-biased training data, explainable decisions, and retaining human accountability for critical choices.
## Concepts & Definitions
- **Deep Learning:** A method that mimics a brain in software by creating and training numerous virtual neurons.
- **Training Data:** The massive, usually historical, data pumped into an AI system to establish desired outputs.
- **Black Box:** Describes AI where even the programmers do not know exactly how the system arrives at a decision, like trying to read someone's thoughts by looking at their brain.
- **Ethical AI:** The field attempting to define how AI can be used beneficially while respecting human autonomy and accountability.
## Mechanisms & Processes
- **AI Training:** Involves pumping huge amounts of data (often the entire Internet) into the system, which learns patterns until the "training wheels" are removed.
- **AI Operation:** Once trained, the AI is exposed to new data and is expected to produce an intelligent output based on learned patterns.
- **Bias Perpetuation:** AI replicates biases found in its training data (e.g., preferring male candidates, confusing huskies for wolves because both photos featured snow).
- **Decision Accountability Gap:** When an AI-driven accident occurs (e.g., self-driving Uber), it is legally unclear whether the manufacturer, owner, or passenger is accountable.
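The bias-perpetuation mechanism above can be sketched with a toy model. This is purely illustrative and not from the talk: the perceptron, the feature layout, and the data are invented to show how a model trained on biased historical labels reproduces the bias.

```python
# Toy perceptron trained on biased "hiring" data.
# Feature order: [experience, gender] with gender=1 meaning male.
# In this invented dataset, gender alone predicts the historical
# hiring label, so the model learns to weight it heavily.

def train_perceptron(data, epochs=20, lr=0.1):
    """Classic perceptron update rule on (features, label) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Biased historical data: equally experienced candidates, but only
# male candidates (gender=1) were hired.
biased = [([0.5, 1], 1), ([0.5, 0], 0), ([0.8, 1], 1), ([0.8, 0], 0)]
w, b = train_perceptron(biased)

# Two identical resumes differing only in gender get different outcomes.
print(predict(w, b, [0.7, 1]))  # male candidate   -> 1 (hired)
print(predict(w, b, [0.7, 0]))  # female candidate -> 0 (rejected)
```

The model is not malicious; it simply extrapolates the pattern it saw, which is the speaker's point about AI being "ignorant" of its own errors.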
## Timeline & Sequence
- Speaker started learning/working with AI in the **1990s**.
- Speaker waited **30 years** to present on the subject.
- Past applications mentioned: proving mathematical theorems, identifying roofs without solar panel systems, optimizing contact centers.
## Named Entities
- **Genesys:** Company where the speaker works, specializing in contact centers and experience orchestration.
- **Dr. Seuss:** Mentioned in reference to using AI to write poems about the US Constitution in his voice.
## Numbers & Data
- Up to **70 billion neurons** fire in the human head (the speaker's figure).
- Speaker's delay to presentation: **30 years**.
- The bias examples are qualitative; no specific counts of applicants or misclassified images are given.
## Examples & Cases
- **Bias in Hiring:** One company used AI to sift applicants, and the algorithm *preferred male candidates* because of historical data bias.
- **Bias in Navigation:** An AI trained to steer a car reliably avoided white pedestrians but failed to avoid pedestrians of color, reflecting skewed training data.
- **Bias in Recognition:** An AI trained on photos where wolves always had snow in the background incorrectly identified a husky photo as a wolf.
- **ChatGPT Countdown Test:** Speaker asked ChatGPT to count down from 5 to 10 (an impossible request), then had it count down from 40 to -200, during which it failed around 60 and, when asked why, could not explain.
- **Accident Scenario:** A self-driving Uber having an accident (Manufacturer vs. Owner vs. Passenger accountability).
- **Transplant Dilemma:** The question of who decides who receives a life-saving organ transplant.
## Tools, Tech & Products
- **AI (Artificial Intelligence):** The general technology discussed.
- **Deep Learning:** The underlying mechanism mimicking a brain in software.
- **Contact Centers / Experience Orchestration:** Area where Genesys applies AI to improve customer experience.
- **Self-driving Uber:** Example of a vehicle utilizing AI in high-stakes decision-making.
- **ChatGPT:** Specific AI tool used to demonstrate AI's limitations (the countdown tests).
## References Cited
- *"An Inconvenient Truth"* by **Al Gore**: Catalyzed the speaker's personal realization about the urgency of the topic.
## Trade-offs & Alternatives
- **Replacing Human Decision-Making:** The central trade-off assessed: AI's efficiency and optimization versus human conscience and empathy.
- **Transplant Decisions:** A highly complex human decision contrasted with mechanical suggestion.
- **Military Response:** Choosing an appropriate military response, a decision requiring deep human judgment.
## Counterarguments & Caveats
- The speaker explicitly refutes the fear of a malevolent "superintelligence ruling the world."
- AI's core limitation is that it is **ignorant** and can only extrapolate from patterns it has seen, leading to errors.
## Methodology
- **Empirical demonstration:** Using specific, documented failures of AI (e.g., hiring bias, husky/wolf misidentification) to prove pattern reliance.
- **Hypothetical Scenarios:** Presenting legal and ethical quandaries (Uber accident, organ donation) to test accountability boundaries.
## Conclusions & Recommendations
- **Demand Three Things:**
1. Training data must be **without bias**.
2. AI must **explain its decisions** (explainability).
3. Humans must retain the power and accountability for decisions "near and dear to our hearts."
- The future of AI depends on societal insistence on these safeguards.
## Implications & Consequences
- If humanity yields critical decision-making to AI, it risks automating past mistakes and stripping essential human qualities such as conscience, critical thinking, and empathy from high-stakes domains.
## Verbatim Moments
- *"I am worried not because I'm afraid of the robots taking over or a superintelligence ruling the world."*
- *"so many people are so willing to give away that superpower of decision making that makes us human."*
- *"AI is just a tool."*
- *"AI can be wrong in the most mysterious ways and be completely unaware of it."*
- *"AI is ignorant. It will always fall back to the patterns it saw in the training data."*
- *"The other aspect, in addition to being ignorant the AI is a black box."*
- *"Who is accountable for the decision that an AI makes?"*
- *"we need to insist In three things when it comes to AI."*
- *"we need to insist that humans will make those decisions, who can be held accountable and not some machine who doesn't have a conscience?"*