How to Not Make a Scientific Journal by Accident
Here’s something I keep witnessing in science, and it’s genuinely strange once you notice it.
Scientists largely agree that the publishing system is broken. Journals cost billions, delay discoveries by months or years, and transform science from a messy search for truth into a polished performance of certainty. Yet for all that agreement, we can’t seem to get out of our own way to fix it.
But here’s what’s strange: many of the people trying hardest to escape journals are unintentionally rebuilding them. If you’re a scientist or work anywhere near science, you might think you’d never accidentally create one. You might even hate them. But look closer at what you’re actually asking for. Want shortcuts to understanding research? Need consensus before you’ll believe something? Crave curation to save you time? You may be building a journal without realizing it.
And I’m watching it happen in real time, even among the most progressive champions of open science.
Nervous Reformers Make Accidental Gatekeepers
Take arXiv, the preprint server that has liberated research in math and physics for more than thirty years. It is a genuine victory over journal gatekeeping: researchers can share work immediately, without waiting months or years for editorial approval, and without paying to be read. The whole point was that the scientific community could evaluate work directly, without intermediaries deciding what deserved attention.
Recently, arXiv announced that position papers and review articles will now undergo pre-publication peer review. Just position papers and reviews, not original research. But watch that line. A preprint server is adding peer review. The platform built to bypass editorial control is quietly adding controls.
What’s driving this? Anxiety about AI-generated content flooding the server. That anxiety reveals something important: nervous reformers can become accidental gatekeepers. The instinct when facing an unfamiliar challenge is to reach for familiar tools, even when those tools are precisely what you set out to escape.
I saw this pattern play out at a recent Chan Zuckerberg Initiative meeting. Open science advocates who’d spent years fighting journal gatekeeping started panicking over AI-generated content. Their solutions? More filters. More vetting. More controls. These weren’t old-guard editors protecting territory. These were reformers unconsciously rebuilding what they’d fought to destroy.
Even fresh builders fall into the trap, and the reason is worth understanding. Organizations often bring in outsiders precisely because they can see the pathologies with fresh eyes. But those outsiders, wary of being naive or building in isolation, are vigilant about gathering user feedback. The problem is that users ask for what they think journals should have been giving them, and outsiders aren’t positioned to hold firm and say no. So they add ranking algorithms, implement quality filters, create consensus mechanisms. All to “help people find what they want.” But scientists want the warm blanket of journals. They crave shortcuts, proxies, and pre-digested truth.
Build what users want, and even with the best intentions, you’ll rebuild what’s killing science.
The Motives
This runs deeper than institutional capture or profit motives (the easy targets). As scientists, we hunger for the very features that make journals toxic. We want to control who can criticize our work. We slide into homogeneity in how research gets shared. We crave proxies when engaging with literature outside our wheelhouse. We seek gatekeepers to prevent embarrassment. We want the safety net of others’ opinions when deciding who deserves scarce jobs, funding, awards, promotions. Despite understanding how messy science really is, we want simplicity and certainty over the chaotic, iterative reality of discovery.
We’ve got the whole thing backwards. Forget making science digestible for mass audiences. The real challenge? Ensuring that a dataset from some obscure graduate thesis reaches the scientist at a national lab who can unlock new insights from it. That dataset shouldn’t need to appear in a glossy magazine, funneled into fashionable framing, for the people who can actually use it to find it. Science advances through weird connections, through esoteric knowledge finding its match.
This is where AI changes everything. AI can process raw notebooks, failed experiments, incomplete thoughts. It can connect patterns across disciplines no human would think to bridge. And it will continue to improve. Yet the response from the scientific community is vigilance, defensiveness, and alarm. Pre-publication review to combat AI-generated content. Human gatekeepers to “protect” us from machine abundance.
Think about the absurdity. Progressive platforms are built on a vision of near-universal, open sharing. Their stated endgame is a world with far fewer barriers to access and far more content freely available. So if a surge in volume triggers panic and new filters, that raises a real question about whether the vision was ever meant to scale, and about what these platforms actually value.
arXiv argues that if the server becomes diluted, it’s game over before any other solution can be implemented. I understand the concern. But human gatekeepers can’t keep up with scale either. Arbitrary filtering, prestige proxies, and overly stringent criteria become a greater threat than dilution itself. If preprint servers become just another set of journals, that’s game over too.
What was the plan if the culture shift they want actually happens? If preprinting keeps spreading across disciplines, volume could grow far beyond today’s levels regardless of AI. If the only way to cope with that future is heavier gatekeeping, we’re drifting back toward the artificial scarcity these systems were supposed to replace.
What Scientists Fear
Scientists complain endlessly about Twitter’s “For You” algorithm deciding what they see. They flee to other platforms seeking reader agency. Yet these same scientists demand that scientific publishing have editorial filters, curation committees, quality rankings. They hate when algorithms shape their social media but desperately want algorithms to shape their research consumption.
Every attempt to make science cleaner, clearer, more consensus-driven strips away essential properties. Unified formats eliminate the scribbled margin note that sparks revolution. Significance filters bury the null result that prevents a decade of wasted effort. Demands for immediate validation punish long-term thinking and iterative refinement.
The fear of AI-generated content reveals a deeper anxiety: we’re uncomfortable with abundance. When we abdicate our responsibility as scientists, our judgment atrophies and our calibration suffers. We lose the independent critical thinking that defines us. We’re sacrificing the most valuable part of what we do, grappling with uncertainty and separating signal from noise, all to avoid the discomfort of thinking for ourselves. That abdication is what created the journal system in the first place.
An uncomfortable reality is that all content will be partially or fully AI-generated soon. Trying to filter it out is like trying to filter out content written with word processors. What matters is whether something advances knowledge and impacts society. Determining that requires engagement, not gatekeeping.
Consider this parallel: we spend enormous energy worrying that machine-learning models might become biased, overfit to their training data, and fail to generalize to new discoveries. Yet we’ve engineered the exact same failure mode into human scientists. By forcing researchers to over-index on consensus views, editorial panels, and journal hierarchies, we train people on an artificially narrowed dataset. We constrain the variation they’re exposed to, suppress outliers, hide null results, and erase the rough edges of scientific exploration.
How to Fix This
The solution demands abandoning the journal mindset entirely. Stop asking for curated feeds of important research. Stop expecting three reviewers to determine truth. Stop believing science should be immediately comprehensible to everyone. Accept that most science will be irrelevant noise to most people, and that’s exactly how it should be at the leading edge of discovery.
Additional layers can help broader audiences without holding back primary research, just as journal front matter and popular science coverage have done in the past. The problems of science at the leading edge have been muddied by conflating them with the problems of reaching non-experts, and we’ve sacrificed solving the former for the latter.
Let research exist in its natural state: messy, contradictory, incomplete. Let scientists publish their notebooks, their doubts, their abandoned threads. Let machines process this chaos and surface unexpected connections. Let the right information find the right researcher through search and serendipity, through algorithms that empower rather than decide.
The journal system fails at accessibility anyway. Peer-reviewed publications exist on every side of every debate. Meta-analyses depend on the false certainty of binary journal decisions that wash away critical flaws and nuance. It’s abstraction built upon abstraction, creating an illusion of certainty where none exists.
If we separate these conflated goals, advancing science versus communicating to broader audiences, we can serve both better. We can create additional layers for public understanding without forcing the primary practice of science through inappropriate filters. The problem arises when the opinionated views of those who control visibility limit what can be found in the first place.
What Problem Are We Fixing?
Progress happens when that one person with that one piece of missing knowledge finds exactly what they need. Truly innovative work often takes many follow-on publications and downstream uses before it is recognized as significant. To suppress what can’t immediately gain universal acceptance is to prevent progress itself.
We need to solve the problem of the right information reaching the right people. Every layer of curation, every consensus mechanism, every quality filter makes that problem worse. The esoteric needs to find the esoteric. The fringe needs to find the fringe. That’s where breakthroughs live.
The next time you find yourself wanting someone to tell you what research matters, wanting consensus before you believe something, wanting a cleaner format for sharing work, recognize what you’re doing. You’re rebuilding the journal system, one seemingly reasonable request at a time.
If we can resist clinging to false certainty when new challenges make us uncomfortable, something much more powerful becomes possible. The future of science lives in accepting that knowledge can contribute to human understanding without being filtered, formatted, and approved.
The mess is the feature. Stop trying to clean it up.
