Stop Using AI to Think: A Socratic Warning
The story of Thoth and Thamus: an ancient lesson for the modern world
AI is supposed to make us smarter and wiser. But what if it’s actually making us dumber and less wise?
Socrates might have the answer.
In Plato’s dialogue “Phaedrus” (c. 370 BC), Socrates tells the story of the Egyptian god Thoth, who invented writing and offered it to King Thamus as a gift.
Thoth claimed that writing would improve memory and wisdom, but Thamus disagreed.
He said that writing would make people forgetful because they would rely on external marks instead of their own internal resources.
He also argued that writing would create only the appearance of wisdom, not true wisdom, because people would end up professing many beliefs without understanding or examining them critically.
I’m afraid he was right.
Books are dangerous
I often lose my critical mind when reading a book that resonates with me.
When I find a new idea attractive, I feel an immense pull toward it and I fall into a weeks-long journey of obsession and proselytization.
I know I’m not the only one.
You read a book that inspires you and you start devouring everything the author has written.
Soon, your friends have to endure long discourses about how wonderful your new worldview is.
The root of the problem is a lack of reflection on what our minds are consuming.
If, instead of reading, we received new information through dialogue with others, we could ask questions, offer counterpoints, or get more context.
In that way, we would more closely examine new beliefs before adopting them.
But with a book, that is just not possible.
And there’s another insidious problem with books.
They hold great authority, an almost sacred aura.
Probably because, not so long ago, few people could read, so books were something only the wise would interact with.
AI is even worse
Think about the sacred connotations that people are ascribing to AI.
AI will be better than humans
It can do in seconds what would take you days.
It will create a utopia once it’s widely used in scientific research.
So when we ask it a question, we tend to believe anything it says. Only when we are knowledgeable about the area in question can we see its faults.
Here we find the first problem: we will take anything it says as true.
But it gets worse.
We are giving up thinking.
Maybe you think that everybody should meditate for 30 minutes a day, and someone asks you why you believe so.
And instead of crafting a thoughtful answer that other people can argue with and learn from, you ask ChatGPT to write an answer for you.
The text it produces is awesome. You agree with everything you read and you have formed a more complex belief now.
But is it yours?
Remember, we’ve read books and adopted their beliefs only because they seemed true enough. After all, books are filled with wisdom.
Are we doing the same thing with AI?
AI is wonderful
Now don’t get me wrong, AI is a wonderful tool.
I use it all the time to find information and to create images.
In fact, I used it to remember the name of the Socratic dialogue I opened this article with.
I knew the story because it made a great impact on me the first time I read it, but I didn’t remember the name and the details.
So I asked the AI: “What’s the name of that Socratic dialogue in which people are afraid of losing their memorization and discursive abilities because of books?” And the answer was good.
I could then go back to that dialogue and write this article.
AI is great and I’m sure it will improve our lives on many fronts.
But we should protect our mental faculties at all costs.
Our minds are an integral part of being human. And if we stop using them, we’ll become something worse than human.
Something duller, uglier, and with less light in its eyes.