Research Freedom
How AI is changing what it means to be a researcher

Academia's greatest flex is dead (and how AI killed it)

Lennart Nacke, PhD
Mar 15, 2025 ∙ Paid
Is this the AI future we are looking at?

(Originally published on February 25, 2025)

Want to become an AI-powered researcher now? Take our webinar.

Last Wednesday, as I slowly sipped what was probably my third latte of the day (because f%$k you liquid calories), Google casually shattered my professional identity by releasing their co-scientist multi-agent AI system. I’ve demonstrated before how a well-prompted AI system can generate a comprehensive literature review in my field in about 20 minutes. But with the release of Google’s Co-Scientist and other multi-agent AI research tools, my very job now faces the urgent need to evolve or mutate significantly in the near future. David Cronenberg would have loved to write the script for this reality I’m facing. With the battle scars of a reduced sabbatical, delayed tenure review, some research papers that nearly broke me, and enough rejection letters to wallpaper my entire office, I sat there wondering if I'd become the academic equivalent of a horse-drawn carriage in the age of Teslas. Real museum stuff.


This wasn't just some technological anxiety. This was an existential crisis packaged in a sleek user interface. And, honestly, I don’t think I’m the only one facing it. Caffeine clearly isn’t the answer to this one, because about a day later another notification popped up about yet another AI breakthrough. As if to underscore this technological anxiety, Elon’s xAI is pushing us further into uncharted territory with Grok 3, making the first public venture into generation 3 AI. Their strategy of “bigger is better,” backed by what they claim is the world's largest computer cluster, has yielded the highest benchmark scores we’ve seen from any base model. Not to be outdone, Claude 3.7 Sonnet’s release a couple of days ago (alongside Claude Code) shows even more remarkable improvements, matching Grok 3’s capabilities while offering different strengths. Meanwhile, OpenAI’s unreleased o3 lurks on the horizon, promising to be another third-generation powerhouse. I’m honestly feeling a little exhausted this week by all this news. Academia will experience a tectonic shift because more companies will launch models at this unprecedented scale (and multi-agent scientist systems will likely do the bulk of academic writing less than a year from now).

An academic flex that no longer impresses anyone

Remember when knowing obscure citations was our academic superpower? When students would look at us in awe as we casually referenced that hard-to-find but crucial paper from 1976? Yeah, those days are disappearing faster than free wine and cheese at conference receptions.

For decades, our worth as academics was measured by our ability to find rare sources, memorize key passages, and weave disparate ideas together through careful analysis. The knowledge we cultivated took years of dedication and countless hours in libraries that smelled like dust and academic desperation.

Today, the threshold for producing credible academic content has dropped so low that my neighbor’s teenager could use AI to write a surprisingly decent analysis of power dynamics in medieval French literature. What is even happening?

The quality of AI-generated academic content is improving at a pace that’s frankly a little terrifying. Most papers and reviews I read these days are at least 80% AI-supported (that’s a gut feeling, but I think I can judge this well enough). A well-designed prompt together with semantic search systems can produce a literature review that would earn a solid B+ in most graduate seminars. And that B+ is rapidly trending toward an A-.

Intellectual opposition is my favourite way to use AI though

Here's the surprising turn in this academic nightmare though: while many AI companies market AI tools as things that make us better, faster, and more efficient academics, that whole efficiency-and-automation angle is quickly becoming outdated. I think AI’s most valuable function might not be speeding up or replacing our writing (although that’s a given these days), but challenging our own thinking.

I discovered this accidentally when, in a fit of petty revenge against the machines, I asked ChatGPT o1 to critique my latest research idea. I was secretly hoping for lame-ass criticism I could smugly dismiss. But what do you know, I got intellectual confrontation that no colleague would dare to offer me. And I liked it. It really is like summoning a fearless colleague who doesn’t worry about research politics or hurting my feelings, who just pushes me into the discomfort zone, where I do better thinking. Kudos to you, ChattieG.

And that’s when I realized: the real power of AI isn’t replacing academics, it’s disagreeing with us in ways that make our work stronger. It’s being a servant and a challenger that never gets tired of our nonsense. Kind of cool to have this readily available.

Can academic work remain distinctly human?

Despite the AI revolution, I believe certain aspects of academic work should remain human. Here are some thoughts:

The more your scholarship draws on lived experience, original fieldwork, or truly novel theoretical frameworks that challenge fundamental assumptions, the less replaceable it becomes. And as someone at home in the social sciences, I also do not believe synthetic data can predict real human behaviour or let us draw valid conclusions about it.

Consider ethnographic research where you’ve spent years embedded in communities that exist offline. Or laboratory work where your hands got dirty with procedures no one bothered to document properly. Or philosophical arguments that make everyone uncomfortable precisely because they haven’t been thought (or ChatGPT’d) before. For now, that stuff will remain.

The common denominator is human uniqueness. Perspectives and experiences that haven't been neatly packaged into the training data these AI systems have consumed like a Happy Meal subscription.

Why you’re still reading this (and what it means)

Notice how I began this newsletter (with a personal anecdote about my own academic crisis). Not with statistics about transformer models or abstract arguments about the future of academia. If I did this right, it made you care. It was deliberate, and slightly manipulative in that charming way I hope I have perfected by now.

My story of academic identity crisis likely resonated because it’s fundamentally human and something that many senior academics should be able to relate to. We connect with stories of struggle, doubt, and transformation because they mirror our own experiences. I've never met an academic who hasn’t occasionally questioned their career choices at 3 AM while grading the 47th paper arguing that “we can fix environmental issues with office water dispensers.”

The truth I find crucial in this: Scholarship that combines analytical rigour with personal insight or unique contexts will retain its value. Your decades in the field, your unique combination of experiences, and your distinctive voice cannot be easily replicated. At least not until AI starts attending faculty meetings and developing elaborate coffee preferences. Brew me that coffee, Claude, will you?

Reinventing academic identity when bots write better than you

After months of experimentation (and one embarrassing incident where I tried to debate an AI at 2 AM instead of getting my beauty sleep), here’s what I’ve learned about adapting to this new reality:

  1. Use AI as your intellectual sparring partner. The most productive AI sessions I’ve had weren’t when I asked it to write for me or edit my stuff, but when I asked it to disagree with me (there’s a small sketch of this setup right after this list). Having your ideas challenged immediately rather than waiting months for reviewer comments accelerates your thinking dramatically.

  2. Focus on what can’t be digitized. Your physical presence in the field, your relationships with subjects, your direct observations (and interpretations of them). All of these generate internal knowledge in you that isn’t available in published literature and therefore isn’t in AI training data (yet).

  3. Create intellectual mashups. AI systems still struggle a bit with truly innovative connections across wildly different domains (although Claude is getting better at that with longer prompting). My background spanning human-computer interaction, games, and artificial intelligence gives me conceptual metaphors that AI hasn’t yet mastered because those connections aren’t well-represented in its training.

  4. Emphasize subjective interpretation. While AI can summarize what’s known, the truly personal intellectual stance—why you find certain arguments compelling or problematic based on your life experience together with your interpretation and own synthesis of the literature—remains distinctly human. But you have to keep working at keeping that up to date.

  5. Let AI handle the boring parts. Use these AI tools for literature reviews, writing method sections, summarizing papers, getting ideas for discussions, and identifying potential connections, freeing your brain for the creative work that machines still aren’t doing that well.
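
If you want to make that sparring partner a repeatable habit rather than an occasional chat, a few lines of scripting are enough. Below is a minimal sketch of what such a setup could look like, assuming the official OpenAI Python SDK and an API key in your environment; the model name, the example research idea, and the exact prompt wording are placeholders for illustration, not a prescription.

```python
# A minimal "AI as intellectual opponent" sketch.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY set in your environment. The model name, the example
# research idea, and the prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Placeholder research idea; swap in your own abstract or proposal text.
research_idea = (
    "Adaptive difficulty in games sustains long-term player motivation "
    "better than fixed difficulty settings."
)

response = client.chat.completions.create(
    model="gpt-4o",  # use whichever reasoning model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are a blunt, rigorous peer reviewer. Do not flatter me. "
                "Steelman my idea first, then attack it: question its core "
                "assumptions, offer rival explanations, and name the study "
                "designs or data that could falsify it."
            ),
        },
        {"role": "user", "content": f"Critique this research idea:\n{research_idea}"},
    ],
)

# Print the critique so you can argue back in a follow-up message.
print(response.choices[0].message.content)
```

The library matters far less than the system prompt: ban flattery, demand rival explanations, and ask what would falsify the idea. Paste the same instructions into any chat interface and you get the same fearless colleague.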

What to tell your graduate students

“Your job has changed,” I tell my doctoral students. “You’re no longer just competing with other humans. You’re working in an ecosystem where baseline content generation has been dramatically accelerated.”

But then I smile (not that creepy smile from that horror movie, but still so un-German of me), which always makes them nervous.

I argue that this is actually excellent news. These tools can help you overcome the initial drudgery of academic production—literature reviews, summarization, identifying research areas—allowing you to focus more energy on truly original contributions. The most valuable skill now isn't the ability to recall existing knowledge. It’s the ability to synthesize and ask original questions, connect ideas in unexpected ways, and bring personal insight to analysis. And crucially, it’s learning to use these tools as intellectual opponents, not just assistants. In academia, thinking is our primary product. Let’s keep it valuable.

What it means to be an academic, researcher, or knowledge worker is undergoing a fundamental transformation. The paths to recognition, contribution, and impact are being rewritten faster than university committees can update their tenure guidelines.

This can feel threatening if you’ve invested decades developing expertise in traditional academic modes (like spending years mastering a language only to discover everyone’s suddenly speaking something else).

But remember that the essence of scholarship has always been about advancing human understanding, not the specific mechanisms through which we produce and disseminate that understanding.

I’m still wrestling with what all this means for my career. But I’ve found that embracing AI as an intellectual challenger rather than just a production tool has made my work sharper, more rigorous, and honestly, more fun than it’s been in years.

What about you? Have you found effective ways to use AI as an intellectual opponent? Have you discovered prompts that generate particularly useful disagreement? I'd love to hear your experiences. Hit reply and drop me a line. Always great to read your email.


Three ways to use intellectual confrontation with AI

Here are three simple strategies (and AI prompts) for using AI’s power to challenge and strengthen our academic thinking. The following methods have emerged from countless hours of intellectual sparring with various AI models, each offering unique perspectives on research problems. They take AI from a mere writing assistant to a valuable intellectual opponent.
