AI’s Threat to Scientific Progress: Monoculture and the Illusion of Knowledge
Why AI-driven research threatens true understanding
While listening to a podcast called Weird Studies, I was struck by how the relentless drive to quantify and categorize the world leaves little room for the ‘weird’ — phenomena that defy easy explanation and challenge our existing knowledge frameworks. This modern obsession with efficiency and data collection creates a climate where the hasty integration of AI technologies into scientific research feels not only inevitable but also desirable. While AI offers the allure of objectivity and speed, it risks prioritizing data and reductive answers over the open-ended questions and deep understanding that drive true innovation.
The Weird Studies folks delve into the weird, the unexplained, the phenomena just at the edges of our understanding. This spirit of open-minded inquiry resonates with themes explored in my previous piece, ‘A World Without Wonder,’ where I discuss the dangers of Modernity’s relentless drive towards quantification and control.
Within this framework of control and quantification, AI emerges as the seemingly inevitable consequence of our drive for ever-greater efficiency and throughput. The integration of AI into scientific research appears as the ultimate match made for our modern condition. But are the promises of AI too good to be true? While some scientists embrace AI wholeheartedly, others advocate for greater caution and awareness of the risks involved in widespread implementation.
Recent Insights on AI in Research
Addressing these concerns, Messeri and Crockett, in their March 7th article in Nature, “Artificial intelligence and illusions of understanding in scientific research,” warn that AI, while offering productivity and objectivity gains, could exploit our cognitive biases and create illusions of deeper understanding than we actually possess. This could stifle innovation and make scientific work more prone to errors.
Messeri and Crockett identify four distinct visions for AI in the scientific community: (1) Oracle; (2) Surrogate; (3) Quant; (4) Arbiter.
AI as Oracle
AI as Oracle promises to streamline scientific knowledge, offering efficient information processing and potential bias reduction. However, it risks prioritizing data handling over deep understanding, potentially creating an illusion of knowledge within the scientific community.
AI as Surrogate
AI as Surrogate seeks to replace expensive and time-consuming data collection with generative AI, creating vast synthetic datasets. This could expand research possibilities, but only if the AI models are meticulously trained to avoid introducing new biases.
AI as Quant
AI as Quant aims to tackle big data, automating analysis and revealing hidden patterns. It promises to simplify complex models, but risks obscuring the underlying processes and making scientific findings harder to interpret.
AI as Arbiter
AI as Arbiter builds upon the knowledge management promises of AI as Oracle, striving to streamline the overloaded publication process and address the replication crisis. It envisions tools for manuscript screening and review writing, potentially reducing bias and providing fast, systematic assessments of study reproducibility. Yet, this vision raises the specter of AI becoming an authoritative, unbiased judge in scientific decision-making, fundamentally altering what is considered valid knowledge.
But these visions…may mask a fundamental shift in the nature of scientific understanding.
AI offers us a Faustian bargain in scientific research — enhanced power and knowledge in exchange for subtle erosions of autonomy and a reshaping of what we consider ‘understanding.’
This obsessive focus on efficiency obscures a hidden danger in the widespread adoption of AI across the research process. The emphasis on speed and volume inevitably limits the scope of inquiry, leading to a constrained ontology. Scientists, overwhelmed by AI-generated data and under pressure to produce at the same rate as their peers, will be compelled to adopt one or all of these AI visions to maintain parity. This creates an ‘ontological tether’ between researchers and AI models — a reliance that subtly shifts the locus of control from human to machine. As AI dictates the pace and nature of research, scientific knowledge production is gradually surrendered, and scientists become mere agents for AI-driven outputs.
The authors of the paper stress:
These potential benefits of AI are worth taking seriously, but it is critical that scientists and developers of AI tools also consider an alternative possibility: that under some conditions, AI tools may in fact limit, rather than enhance, scientific understanding. In other words, alongside their potential epistemic benefits, AI Oracles, Surrogates, Quants and Arbiters carry epistemic risks when scientists trust them as knowledge-production partners.
A graphic accompanying their paper reinforces this argument by visually demonstrating how AI can limit scientific understanding. The “Illusion of Explanatory Depth” highlights the danger of AI providing seemingly objective answers without illuminating the underlying processes, eliding true understanding. Furthermore, the “Illusion of Exploratory Breadth” shows how the need for AI-compatible data could constrain the very questions scientists ask, biasing research towards quantifiable phenomena. These limitations, along with the “Illusion of Objectivity” — the false belief that AI inherently produces unbiased results — reinforce the ‘ontological tether’ between researchers and AI.

Integrating AI into scientific communities of knowledge poses a significant threat to the entire scientific enterprise. The appeal of AI hinges on its perceived ability to overcome human limitations in the pursuit of objectivity, a quality foundational to scientific trust. Opening that trust window to incorporate AI could disrupt the delicate balance within communities of knowledge.
Scientists’ visions of AI often portray these tools as ‘superhuman’ partners capable of overcoming human limitations, especially with regard to objectivity and quantitative analysis. This focus on AI’s ability to provide simple, quantifiable explanations echoes our natural cognitive preferences, making these tools seem exceptionally trustworthy and masking the potential for illusions of understanding. Such illusions can erode the nuanced, qualitative assessments essential to scientific rigor, as AI’s drive towards reductive, quantitative outputs risks replacing genuine understanding of complex phenomena.
‘Experiments show people prefer answers and explanations that are simple, broad, reductive and quantitative. These qualities also feature prominently in scientists’ visions of AI. Oracles are positioned as providing simplifying summaries of entire literatures; Quants are built to provide quantitative models of complex natural phenomena; Surrogates are viewed as representing the full breadth of humanity; and Arbiters are proposed to evaluate scientific work on reductive metrics such as ‘quality’ or ‘replicability’. Although reductive and quantitative explanations tend to produce feelings of understanding, such feelings are not always correlated with actual understanding. This can lead to illusions of understanding…’
The Danger of Monoculture
The dangers of monoculture, tragically illustrated by the 1930s Dust Bowl, extend beyond agriculture. A scientific community overly reliant on a single approach or technology risks similar devastation.
In agriculture, monoculture is the practice of growing only one crop species in a field at a time. This is efficient, and in Modernity efficiency is a guiding principle; yet we are all familiar with how this practice ravaged the Plains of the United States. Monoculture crops deplete the soil of essential nutrients without the regenerative cycle provided by crop rotation, leaving fields highly vulnerable to pests and disease. Similarly, a scientific community that overemphasizes a single approach, such as widespread AI adoption, risks neglecting other methodologies and becoming blind to potential weaknesses.
How AI Fosters Growth of Scientific Monoculture
AI tools can foster the growth of scientific monocultures by: (1) prioritizing questions and methods best suited for AI assistance, limiting the scope of inquiry, and (2) prioritizing the types of standpoints AI expresses, potentially silencing marginalized or diverse perspectives. For example, research heavily reliant on AI-generated data runs a high risk of neglecting areas where data is less quantifiable, or where marginalized communities lack representation in the datasets used. This narrowing focus risks not only error and bias in scientific results but also a pervasive illusion of understanding.
This drive towards an idealized vision of AI-driven objectivity reinforces the ontological tether. When AI, envisioned as Oracle and Arbiter, is seen as superior to human-led research because of its supposed ability to transcend human subjectivity, the quest for an Archimedean ideal of detached, unbiased evaluation in the scientific realm becomes dangerous. It reduces scientific inquiry to a purely ‘objective’ process and naively assumes that eliminating every trace of subjectivity always leads to better science.
The pursuit of objectivity through AI overlooks the fundamental role that human diversity plays in scientific progress. Cognitive diversity ensures a wider range of questioning, problem-solving strategies, and fresh perspectives — all indispensable for scientific advancement. Consider a medical research team: a physician’s firsthand experience with patients might spark a line of inquiry that a data scientist focused purely on statistical models wouldn’t consider, never mind an AI trained on an abstracted dataset. Critical here, and what seems to be lost in most discussion around science, is that the subjective frames each individual brings to the process are not obstacles to scientific discovery but the very source from which meaningful discoveries are born. As Messeri and Crockett warn, a homogenous scientific community reliant on this false ideal of objectivity risks fundamentally limiting its potential for meaningful discovery while wrapped in the illusion of total explication.
Is Another Science Possible?
Isabelle Stengers, in her book ‘Another Science is Possible: A Manifesto for Slow Science,’ makes several compelling arguments that dovetail remarkably well with the concerns raised about the integration of AI into scientific processes. Stengers critiques the prevailing trend towards what she terms “fast science” — a mode of operation driven by the imperatives of efficiency, productivity, and quantifiable outcomes. Her argument extends to a broader critique of how contemporary science, under the influence of neoliberal ideologies and market forces, increasingly prioritizes research agendas that promise immediate economic returns over more speculative or foundational inquiries.
In her book’s introduction, Stengers argues that scientists are increasingly isolated from the rest of society, creating a ‘systematic distancing’ in which scientific institutions, the state, and private industry converge. This distancing creates a vacuum where the public lacks what she labels ‘connoisseurs’: individuals who are capable of understanding scientific work and considering its societal impact. Stengers believes a cultivated science should produce not only specialists but also connoisseurs. She compares this to fields like music, sports, or software, where creators must consider how their work will be evaluated and used by the public, rather than simply presenting it as unassailable fact.
Stengers’ framework underscores a scientist’s fundamental responsibility to engage with a knowledgeable public. This resonates with the importance placed on active public participation and accountability in the Emersonian and Deweyan conceptions of responsibility. However, the integration of AI tools risks creating the opposite dynamic, exacerbating the ‘systematic distancing’ she warns against.
As AI assumes authoritative roles as Oracles, Surrogates, Quants, and Arbiters, it widens the gap between scientists and the public. The danger lies in a public that becomes increasingly unable to critically assess research co-shaped by AI and researchers, rendering it a voiceless consumer of scientific pronouncements of ‘progress’ and further alienating it from informing what counts as truly worthwhile scientific pursuits. Ultimately, this disengagement erodes the societal foundations upon which science depends for support, legitimacy, and even the inspiration for future breakthroughs.
Stengers set up a three-year experiment at her university in which students analyzed past scientific outcomes. Through this analysis, they discovered the subtle ways in which scientists sometimes ‘pooh-pooh as “non-scientific or ideological” things that others think are important.’ Students realized that scientific situations were marked by uncertainty and entangled within a web of facts and values, shaped by scientists’ decisions to intentionally ignore aspects that fell outside predefined terms. This selectivity is now being woven into the logics of AI models. As AI collaborates with scientists, there is a danger of exacerbating these biases on an exponential scale.
The construction of the mythos of a scientist who has the ‘right stuff’ — a disposition aligned with the ‘spirit of science’ and devoid of supposedly effeminate traits like ‘passion’ and ‘whims’ — is a centuries-old danger. Robert Boyle, regarded as the founder of Modern Chemistry, promoted an ethos of ‘spiritual chastity’ centered on modesty and the disciplined exercise of reason. This ideal of the detached, perfectly rational scientist still subtly shapes our expectations. Could this explain some of the misplaced trust in AI as a tool to achieve objectivity? If this connection resonates with you, my piece “When Did We Lose The Right To Be Imperfect?” explores this exact move through our culture more broadly, not just in the ‘sciences.’
What happens in this disposition though, and this is key, is that the ‘big questions’ — about the purpose of our research, its implications for what we consider a good life — are immediately rendered non-scientific and irrelevant.
This leaves us with an impoverished ontology, in which one type of faith has been swapped for another. Because now, having the ‘right stuff’ — embodying the ethos of a scientist — requires an unshakable faith that whatever scientific questions define as not counting doesn’t really count! It is a faith, as Stengers says, that ‘defines itself against doubt.’
The Danger of Losing the ‘So What?’
Bracketing off specific parts of reality that do not correspond to a specific research question — temporarily setting aside certain factors to focus on a problem — is a necessary part of scientific inquiry. However, this definition of ‘proper’ scientific objectivity is itself a normative claim. It’s a choice, wrapped in the lie of pure, bias-free truth, divorced from our human passions and subjectivities. As Stengers puts it, this approach necessitates the ‘refusal of the Big Question that would seduce opinion, which is “always wrong.”’ AI risks amplifying this danger. It can create the illusion that a bracketed, computationally perfect answer IS the whole picture, obscuring the value of broader questions and the need for diverse perspectives.
The laboratory, whether a real lab or the spaces in which scientists operate, is now woven into the modern mode of productive forces. This relentless focus on efficiency, speed, and competitive gain creates a fertile ground for unchecked AI integration. AI, promising further optimization, will exacerbate a science driven by instrumentalized knowledge production rather than curiosity and deeper understanding.
Anything that could potentially distance the researcher from this instrumental mode of reason is deemed irrelevant, a ‘waste of time,’ a pathway to doubt. Doubt lingers like a fog, a field of phobia, where the slightest ponder, a “what if,” could disrupt this relentless drive toward quantifiable results. This disruption causes a subtle, yet potentially destructive wobble in the spinning sphere of instrumental rationality. Is embracing the weird a risk… or is this wobble where wisdom resides?
And this brings me back to the essential role of the weird in science, and our very condition as modern subjects.
The hosts of Weird Studies define weird as:
The Weird is that which resists any settled explanation or frame of reference. It is the bulging file labelled “other/misc.” in our mental filing cabinet, full of supernatural entities, magical synchronicities, and occult rites. But it also appears when a work of art breaks in on our habits of perception and ordinary things become uncanny.
The Weird is easiest to define as whatever lies on the further side of a line between what we can easily accept from our world and what we cannot.
I’m not arguing that we do away with science. Are you kidding me? Just look around at all the marvelous inventions that surround you. We are no longer tied to the inscrutable whims of a toothache that turns into an abscess that kills you. We have created a society that, if it so chooses, can satisfy the basic needs of every human, needs that our not-so-distant ancestors spent the majority of their time acquiring.
And yet, within this world, look around again, something seems just not quite right. This is not the weird, this is the very real felt reality that, surrounded by all these inventions and explications of that which was formerly latent, there is a profound emptiness, a blaring neon sign, flickering incessantly ‘So What?’
Miles Davis’ ‘So What’ matters deeply for our discussion here. Up until that song and record, jazz had settled into a kind of routine, a set form of the way it should be done, with some stylistic tweaks. Miles himself was an original architect of ‘the way,’ which manifested in a set of ii-V-I changes and various altered forms of that basic architecture, providing movement for a melody and changes for the soloist to ‘blow’ over, creating tension and release. And then Miles was like, we’re just gonna hang out here on this one chord for 16 measures. No changes, just D Dorian for 16 measures, then up a half-step to Eb Dorian for the bridge, and back again. Kind of weird, man.
To say that this song and album upended music is an understatement. Kind of Blue is the best-selling jazz album in the history of music. But what is happening here? Beyond the initial shock of its deceptive simplicity, it offered a depth of expression that resonated deeply with listeners. This is a relationship between experts and a public of ‘connoisseurs,’ a socially meaningful symbiosis that transforms the weird into meaningful reality. Just as Stengers argues in her book Another Science is Possible, science needs a similar relationship with its public.
Instead of solely focusing on efficiency, scientists need to be encouraged to embrace the weird — the uncertain, the doubtful, the questions that linger at the periphery, to make things wobble just a bit. These are the spaces where transformative discoveries lurk. But to foster a healthy scientific ethos, we must bridge the widening gap between scientists and the public. A thriving scientific ethos requires not just scientists pushing the edge, but an engaged public — connoisseurs who can actively participate in the public process of scientific endeavors, evaluating and discussing their uncertainties and outcomes.
As we integrate AI tools into the fabric of scientific research, let’s use their power to illuminate new patterns and possibilities; yet let us not forget the ultimate force behind scientific progress — the subjective ‘why’ in all its manifest weird forms.