
    Are we losing the ability to think due to LLMs?

    By Virginia Dignum | China Daily | Updated: 2025-06-24 06:13
[Illustration by Song Chen/China Daily]

    In the age of large language models (LLMs) and generative AI, we are witnessing an unprecedented transformation in how knowledge is produced, disseminated and consumed. These tools can summarize dense texts, write code, draft legal contracts, or respond to philosophical questions in seconds.

    LLMs, we are told, make us more efficient, simplify complex work, automate mundane tasks and allow us to focus on what matters. But as we marvel at their capabilities, a pressing concern emerges: Are these models genuinely boosting efficiency, or are they subtly eroding our capacity for independent thought, judgment and critical reflection?

Efficiency is not a neutral term. It reflects values: what we choose to prioritize, what we define as valuable, and what we are willing to sacrifice. The current narrative around generative AI treats efficiency as synonymous with progress. It suggests that the faster something is done, the better. But faster is not always better. And not everything that can be automated should be.

The popular belief is that LLMs "free up" cognitive bandwidth. That is, they allow humans to delegate repetitive thinking to machines and reserve their energy for more reflective tasks. But the opposite is often true. The more intellectual labor — writing, summarizing and decision-making, for example — is handed over to AI, the less we engage with it ourselves. Instead of reserving our thoughtfulness for higher tasks, we will increasingly lose the opportunities, and perhaps even the ability, to think critically.

An apt example is the increasing amount of synthetic content online. Not only are images and text being fabricated by machines, but so too, often, are the public reactions to them. Content no longer spreads because it is true or relevant, but because of its emotional pull. Fake images spark fake outrage in comments, which then fuels real engagement from users who cannot distinguish between what is human and what is AI generated.

The result is a synthetic discourse loop that simulates social consensus. "Everyone is talking about it," we hear, when in fact no one is — until the content, and the reaction to it, are manufactured to serve the profit-driven strategy of platforms. Their goal is not informed conversation, but continued attention, which translates into short-term revenue.

    This is not just a technical challenge of detecting what's real. It's an epistemological crisis. When falsehoods are propped up by simulated reactions and amplified by algorithms optimized for attention, the notion of public discourse itself becomes unstable. Our sense of what others believe is no longer based on shared experience or deliberation, but on machine-curated illusions. In such an environment, critical thinking doesn't just decline, it is structurally discouraged.

    So what do we really mean by "efficiency"? If it means shortcutting the time it takes to write a report, perhaps we have succeeded. But if it means replacing the intellectual effort that creates depth, coherence and reflection, then it's not a gain; it's a loss. The moment we accept LLMs as thought substitutes, rather than thought aids, we begin to erode the very conditions under which human reasoning thrives: questioning, dialogue, uncertainty and contradiction.

    This is particularly dangerous at a time when democratic values are at stake, when critical reflection and informed disagreement are essential. The legitimacy of democratic processes relies on citizens engaging with ideas, evaluating claims, and forming judgments. But when engagement is replaced by reaction to machine-generated one-liners, that is, content crafted for manipulation rather than understanding, our political agency is undermined. We don't just risk being misled; we risk no longer knowing what it means to evaluate truth for ourselves.

    There is a temptation to see LLMs as neutral tools. But they are not. They are shaped by the data they are trained on, the goals of their developers, and the market incentives that drive their deployment. Their outputs reflect a history of biases, omissions and assumptions that are often invisible to users. And the more seamlessly these outputs integrate into our workflows, the more easily they escape scrutiny. In this way, the danger is not only what the AI says, but that we stop asking how it came to say it.

    To call this "efficiency" is to ignore what is actually happening: a transfer of epistemic authority from humans to machines, without the structures of accountability and transparency that should accompany such a shift. We are being asked to trust a system we cannot interrogate, on the basis that it sounds plausible and delivers quickly.

    But speed is not the same as understanding. And plausibility is not truth.

    Instead of fetishizing efficiency, we need to refocus on resilience: the capacity of individuals and societies to question, adapt and resist manipulation. This means investing in AI literacy — not just how to use the tools, but how to critique them. It means recognizing that no AI can replace the ethical, cultural and contextual dimensions of human reasoning. It means being willing to slow down, to question the output, and to value the effort of thinking as much as the result.

Governments, tech companies, and citizens each have a role to play. Regulation is necessary, but it is not sufficient. The foundation of responsible AI is not technical compliance; it is ethical intent. That begins with "question zero": When should AI be used? Not every problem needs an AI solution, and not every deployment leads to benefit. Responsible AI does not put AI first; it puts people first. It starts by asking why, not by rushing to deploy. Tech developers must embed responsibility into the very design of systems, not as an afterthought but as a guiding principle.

    More important, individuals must be empowered to question AI outputs, understand their implications, and resist the normalization of passive dependence. Only by centering human judgment and agency can we ensure AI serves society, rather than reshaping it to fit commercial imperatives.

There is no turning back from the presence of LLMs in our lives. But we can choose how to live with them. The question is not whether they will think for us, but whether we will let them define what it means to think at all. Efficiency, in the true sense, should not be about doing more with less thought. It should be about doing better, with deeper attention, stronger ethics and sustained human insight. Anything less is not progress. It is surrender.

The author is a professor of computer science and the director of the AI Policy Lab at Umeå University, Sweden.

    The views don't necessarily reflect those of China Daily.

