
Transcript

Generative A.I. or Generalized Artificial Stupidity? | Anne Alombert | TEDxLakeComo

URL: https://www.youtube.com/watch?v=aQZiRLFrnGw
Video ID: aQZiRLFrnGw
============================================================

Translator: Michele Gianella
Reviewer: Elena Montrasio

Good morning. Today I, too, will speak in French. I would like to discuss the topic of "artificial intelligence". You may have noticed how ubiquitous the concept of A.I. has become in our societies. We talk about smart objects, smartphones, smart cities, and even "generative artificial intelligences" when it comes to tools such as ChatGPT or Midjourney: devices that can generate texts or images automatically, thanks to statistical calculations performed on huge amounts of data.

In the face of these unheard-of achievements, which seem able to imitate or simulate human language abilities, some even wonder whether these machines can think or invent, sometimes going so far as to attribute all sorts of capacities to them: an inner life, emotions, a conscience, and so on. Although this tendency towards anthropomorphism may seem far-fetched from a certain point of view, it should not be forgotten that it lies at the heart of transhumanist theories, which are the main ideology of the digital giants, namely the companies, often based in Silicon Valley, that develop and spread most of the digital technologies we use on a daily basis.

For instance, entrepreneur Raymond Kurzweil, one of the main initiators
of the transhumanist movement, who served as a Director of Engineering at Google, talks about "intelligent machines" or "spiritual machines". Yann LeCun, an A.I. specialist at Facebook, speaks of "learning machines": machines capable of learning. And very similar terms can be found in the letter on the dangers of A.I. recently published and co-signed by Elon Musk, which this time spoke of "digital minds", believed to be a new threat to humanity. The risks outlined in that letter are, in my opinion, substantial, and should be taken seriously. But the question remains whether the issue has been properly and adequately contextualised.

In fact, these ideological considerations, which anthropomorphize machines, coexist with a different type of discourse, this time of a scientific kind, underlining the potentially harmful effects of screens and digital media on our intellectual, mental or psychic capacities. In France, for example, the neuroscientist Michel Desmurget argues that excessive screen time among young people adversely affects their ability to concentrate and memorize, leading to delays in language development and to sleep disorders. In 2007, the American researcher
Katherine Hayles observed that the transition from printed to digital media implied a shift in our "regime of attention". Print media, such as books or newspapers, call for "deep attention": the ability to concentrate for long stretches of time on a single object or a single activity. Digital media, by contrast, call for "hyper attention": a divided, fragmented attention, dispersed over multiple simultaneous activities.

And Maryanne Wolf, another American researcher, a specialist in neuroscience and psycholinguistics, has explained how learning to read and write plays a key role in the formation of certain synapses and in the strengthening of certain neuronal connections, and how the abandonment of these practices in favour of digital media could affect both brain activity and certain mental and social behaviours. According to her, reading is an activity that teaches us empathy: when we read a novel or watch a film, we adopt the character's point of view; we put ourselves in their shoes, if you will.

On the other hand, as you may have noticed, Frances Haugen, a former Facebook employee, recently revealed the "Facebook Files", which highlight that a social network like Instagram has devastating effects on the mental health of teenagers. In addition, a report from the Center
for Countering Digital Hate, entitled "Deadly by Design", shows that TikTok's algorithm can lure fragile, vulnerable teenagers into information loops that incite them to take risky actions.

In other words, all these analyses, studies and documents are far from attesting to the intelligence or spirituality of machines; rather, they invite us to reflect on the effects of digital devices on our intellectual, psychic and social capacities. What we should fear, therefore, is not the sudden advent of an artificial superintelligence or an artificial general intelligence, but the progressive destruction of our individual and collective minds, which have become a new resource for digital industries.

In fact, most digital industries operate with a business model called
"attention economy", which aims to capture our attention for as long as possible through "persuasive technologies". These are computational and algorithmic technologies produced in the "captology" laboratories of universities and companies in Silicon Valley. Captology is a technoscience that draws on several scientific disciplines, such as computer science, algorithmics and design, but also behavioural science, neuroscience and cognitive psychology, and whose aim is to design and develop technologies that can influence our thoughts and behaviours.

This happens, for example, through constant notifications, or through "infinite scrolling", which lets us scroll down the page continuously without ever having to click through to a new page, and thus without ever having to decide what we do or do not want to watch. Or again, through features such as view counters, which stimulate our brain's reward system and the secretion of dopamine, triggering a feeling of satisfaction that prompts us to return to social networks in a slightly compulsive way. The aim is indeed to keep us connected, and captive, for as long as possible, in order to sell our data, or our attention, to companies that will then bombard us with targeted ads.

This attention economy was not born with the digital revolution; it was already at work in the age of other media, television in particular. What digital technologies did was to make it much more powerful and much more efficient. Not only because these new technologies are far more immersive and ubiquitous (we are constantly glued to our screens, even at night), but also because, thanks to data collection and to the algorithmic calculations made on those data, it is possible to create standardized profiles and target individuals with "personalized" content.

The much-debated artificial intelligence mainly aims to develop
automated recommendation algorithms, which predict individuals' requests before they even express them. It is the same principle used by predictive text software: you start typing a word into a search engine, and it automatically suggests a search. Obviously, the machine has not read your mind; it has not guessed what you needed or were looking for. Rather, the algorithms have calculated, at lightning speed, over all the previous searches that began like yours, and the suggestions offered are simply the most common ones. So when these machines autocomplete our requests, they are inviting us to conform to the majority, or to the average.

And the same goes for devices such as ChatGPT or Midjourney, which, once again, generate texts and images on the basis of statistical calculations made on huge amounts of data, predicting in real time the most probable word, or sequence of letters, given the question the user has asked. When we analyse how these machines work, we realise that they are not able to speak or write as we understand it: no human being, when speaking or writing, performs statistical calculations over all the sentences they have previously heard or read in order to predict the word most likely to follow the ones they have just used! No human does that when speaking or writing, still less when inventing new expressions or ideas. Invention does not consist
in calculating averages, but rather in disrupting averages and habits of thought, in order to create "the infinitely improbable" against "the laws of statistics and their probability". There is, therefore, no such thing as a "thinking" or "creative" machine. And yet, when we use these machines, we are in a sense delegating our abilities to think, write, speak and invent to electronic, algorithmic and statistical devices. Just as craftsmen were deprived of their know-how and manual skills once they had delegated them to machine tools, so individuals risk finding themselves deprived of their intellectual capabilities after delegating them to algorithmic machines. And just as artisans became proletarian workers, citizens risk becoming proletarian users.

The issue of proletarianisation, namely the loss of our skills and capabilities through their delegation to technical devices, is not a new problem. It dates back neither to the digital revolution nor to the industrial revolution. Indeed, it was already at the heart
of Plato's reflections in ancient Greece. In his dialogue "Phaedrus", Plato reflects on the then-new technique of writing and refers to it as a "pharmakon", a Greek term meaning both remedy and poison. What Plato means is that writing is a remedy for memory, because it allows us to externalise and preserve all kinds of knowledge, averting the failures of living memory. But writing is also a poison, because once knowledge has been externalised and preserved, individuals will no longer make the effort to remember it, and thus to interpret and transform it. In other words, although it is necessary for the preservation and dissemination of knowledge, writing could also threaten both the development of individual memory capabilities and the evolution of a collective memory.

In this text, Plato invites us to deconstruct the current transhumanist discourses on technological development. He shows us that there is no such thing as a purely beneficial technique: what seems to improve us can also diminish us. And this is what seems to be happening with the latest forms of contemporary digital and reticular writing. On the one hand, they allow us to memorise enormous amounts of data and to generate tremendous quantities of symbols; on the other, they risk destroying our capacities for decision-making, expression and invention, leading to what the philosopher Bernard Stiegler described as "artificial stupidity" or "generalized proletarianisation".

To avoid these risks, speculating about the intelligence of machines
is not necessarily taking us any further. On the contrary, what we should do is design, develop and experiment with technologies that let us train our capacities for memory, reflection, interpretation and collective deliberation. This is perfectly doable, provided we rethink our economic models and the technical features of digital devices.

A collaborative encyclopedia such as Wikipedia, for example, allows for the expression of personal viewpoints, collective deliberation, the sharing of knowledge and the production of a common good for humanity. The Tournesol association proposes a "collaborative recommendation" system in which individuals can recommend the content they deem relevant or of public utility, rather than recommendations based on the interests of the digital giants. Or take a social network like Mastodon, which allows both individuals and groups to set their own content recommendation parameters, and thus to control the circulation of content in the digital space.

All these technologies are indeed intelligent: not in the sense of being "capable of human thought" or of replacing humans, but in the sense that they allow us to think collectively. And this, in my view, is the kind of technology we need today if we are to deal with the consequences of the spread of misinformation. In other words, the time has come to ditch the myth of artificial intelligence, and to put digital technologies at the service of collective intelligence.

Thank you for your attention.

(Applause)