Digital illiteracy: when AI thinks for you

AI no longer helps you search: it answers for you. A question crosses your mind, you open Google, type a few words, and wait for the world to answer. It’s been like that for years. But now something has changed. Or perhaps it has evolved. Because today we no longer search for answers, we receive them directly from artificial intelligence.

We’re no longer browsing pages, reading articles, and comparing sources. We ask, and a chatbot speaks to us. An AI assistant explains everything, clearly, quickly, reassuringly.

But have we ever asked ourselves where those answers really come from? Who decides what we should know? And what happens to our way of understanding the world, if we start blindly trusting an algorithm?

From SEO to AIO: the silent revolution

SEO, the science of being found on Google, seems to be giving way to another acronym: AIO, AI Optimization.

Artificial intelligence doesn’t just sort the best results anymore, it creates them. It writes tailor-made texts for your questions, summarizes articles, and explains complex concepts in simple terms. Even Google knows it: its search systems are evolving to integrate AI-generated responses, placing them even above website links. It’s a silent but radical transformation. Because it changes our simplest gesture: the act of searching.

According to PPC Land, Google has just added an AI mode to the Circle to Search feature on Android, allowing users to select any part of the screen and get not just search results, but actual AI-generated answers, with the ability to ask follow-up questions directly from there without opening new pages.

On one hand, it’s convenient, incredibly so. Need to know something? AI tells you instantly. No more clicking, scrolling, or reading full pages.
But on the other hand, a natural doubt arises: Who wrote that answer? What sources did the AI choose? And what if those sources are incomplete, inaccurate, or manipulated?

AI meets education: risks and benefits

Large Language Models, or LLMs, have never been more accessible to the public. This ease of access, however, raises a pressing question: how do we maintain a high standard of education? While AI can be a powerful tool for shaping young people's studies, it can also be used to replace critical thinking entirely.

In an article by the European School Education Platform, the EU outlines guidelines that should be put in place to ensure students get the most out of these advanced tools. The article states: “AI literacy cannot exist without strong digital foundations for both teachers and pupils. AI and data skills build on digital competencies, meaning that pupils must first understand the digital ecosystems in which AI operates before they can critically engage with AI-based tools.”

Nevertheless, many teachers have taken to social media to vent their frustration over the lack of control they have over students’ use of AI for assignments and learning. The biggest issue, teachers point out, is that many students do not engage with the information at all; they simply use AI as a replacement for thinking altogether. This trend is exacerbated by a growing loss of interest in traditional forms of media, paving the way for the rise of misinformation.

When information exhausts us: news fatigue

As AI takes the spotlight, another phenomenon is growing: news fatigue. More and more people, especially young ones, are turning away from newscasts and newspapers. They no longer want to hear the daily rundown of wars, crises, and disasters. Too much negative news generates anxiety, a sense of helplessness, and a desire to tune out.
Many seek refuge on social media.

There, information comes in small, fast, colorful pills, perfectly scrollable. But it’s fragmented, filtered by algorithms that show us only what we like. And that poses a serious risk: ending up in an information bubble, where we see only opinions that match our own, without ever engaging with different viewpoints.

Paradoxically, we live in an era where we can know everything instantly. Yet it’s also an era where we trust sources less and less.
Artificial intelligence makes life easier, but it distances us from the curiosity to hear multiple voices, to compare perspectives, to verify the truth.

Digital Illiteracy: the importance of critical thinking

At a time when information is not only incredibly accessible but also extremely plentiful, the temptation to “switch off” our brains, letting AI think for us and hand us the truth ready-made, is enticing. Yet critical thinking is one of humanity’s most important skills: it allows us to build on information rather than passively accept everything as fact. Digital literacy has therefore become a priority. With more and more people tuning out of traditional news media and relying on what they see online, it is essential to take everything with a grain of salt, to look for sources, and to distinguish reality from AI-generated images and videos.
On July 9, xAI, the firm behind the Grok chatbot on X (formerly Twitter), was forced to remove posts that were extremely inappropriate and offensive. Incidents like these are stark reminders of the limits of AI: it is not an infallible system, and like any tool it still requires able hands to operate. So as the digital world continues to evolve and new technologies are introduced, it is paramount that a high level of digital literacy is maintained to ensure people don’t fall prey to fake news and misinformation.