Scholarly Communication Is a Research Problem. This Means You.
We scientists have a huge problem. We’ve built a scientific publishing system that excels at advancing careers, picking winners, and giving false confidence to the scientific process while failing at its fundamental purpose—advancing knowledge. The gap between these goals has never been wider, and pretending otherwise is no longer tenable.
I’ve spent years experimenting with alternatives—at ASAPbio, eLife, Arcadia Science, Astera Institute, and others. From that vantage point, the dysfunction is undeniable. Publishing crawls while knowledge accelerates. Editorial whims shape entire fields. Breakthroughs hide behind paywalls as careers hinge on journal brands.
The time for polite incrementalism has passed. What follows is a call to action for those willing to experiment their way toward a better future.
The devil we know is still a devil
Our attachment to the status quo is pathological. We cling to a system designed for nineteenth-century information scarcity in an age of digital abundance. The fear of unknown alternatives has paralyzed us into accepting mediocrity as wisdom.
What’s changed is that the disruption is coming whether we participate or not. Machine learning is already transforming how we discover, synthesize, and validate research. The comfortable fiction that our current system is “the worst except for all the others” no longer holds. We haven’t tried the others. We’ve barely begun to imagine them.
Data, not dogma
Beyond the tired metrics of downloads and citations lies an ocean of unexplored possibility. We need to know not only whether research is accessed, but also whether it’s understood, applied, replicated, and extended. We need mechanisms—both passive and active—that track actual utility rather than proxies for truth and prestige.
This isn’t a call for another grand unified theory of scientific publishing. It’s a recognition that we lack the empirical foundation to build one. We need experiments, not manifestos. We need data on what actually works when research is communicated in fundamentally different ways.
The irony is palpable. Scientists who would never accept untested hypotheses in their research display unwavering certainty that untried alternatives to publishing will fail—a conclusion as data-poor as any they’d reject in their own fields.
The tyranny of scale
I’ve watched too many reform efforts die on the altar of universality. “How will this work for everyone?” becomes the question that kills innovation before it can breathe.
But not every solution needs to scale in order to inspire and instruct.
Those with the resources, flexibility, or vision to experiment have an obligation to do so—not because their solutions will work for everyone, but because their failures and successes will inform what’s possible. Every publishing experiment that prioritizes interoperability over isolation adds to our collective knowledge. Every attempt that connects to shared infrastructure—persistent identifiers, open repositories, machine-readable metadata—builds the foundation others can build upon.
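To make that concrete, here is a minimal sketch, assuming Python and the public Crossref REST API, of what “machine-readable metadata” buys in practice: a persistent identifier (a DOI) resolves into structured data any other experiment can build on. The DOI string and the helper name fetch_metadata are placeholders of my own, not part of any particular tool.

```python
# Minimal sketch: resolve a DOI against the public Crossref REST API and
# pull out structured metadata another service could build on.
import json
import urllib.request

def fetch_metadata(doi: str) -> dict:
    """Return the Crossref metadata record for a given DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)["message"]

if __name__ == "__main__":
    # Placeholder DOI; substitute any registered DOI.
    record = fetch_metadata("10.1101/2020.01.01.000000")
    print(record.get("title"))    # titles are returned as a list
    print(record.get("license"))  # license information, if registered
    print([a.get("family") for a in record.get("author", [])])
```

The point isn’t this particular API; it’s that the record arrives as structured data rather than a PDF, which is exactly what lets independent experiments interoperate rather than isolate.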
The goal is to create an ecosystem where multiple approaches can flourish, fail, and evolve.
Transparency as table stakes
Until recently, “data not shown” was still considered acceptable in published research. “Available upon request” substituted for actual availability. Crucial methods remain hidden behind paywalls, if they’re shared at all. We’re gauging the soundness and impact of research without access to its substance.
Put simply, this is bad science. Research that can’t be verified can’t be trusted. Research that can’t be built upon can’t advance knowledge.
Yes, the transition to full transparency requires effort. Yes, documenting and structuring data throughout a project rather than as an afterthought demands new workflows. But these are one-time adjustments that become second nature—and they’re already spurring innovations that make the process easier.
Consensus is the enemy of progress
Perhaps the most pernicious myth in scientific publishing is that consensus must precede progress. That we need agreement on the perfect system before we can move beyond the broken one.
This gets it exactly backward. Science advances through disagreement, through competing hypotheses tested against reality. Why should scientific communication be any different? The internet has made gatekeeping obsolete—research will be shared regardless. Our choice is whether to shape that sharing or surrender to chaos.
Scientists already know this. They replicate results before building on them. They evaluate preprints alongside peer-reviewed papers. They recognize that truth emerges from the collective process of science, not from editorial blessing.
A message to the builders
If you’re developing tools for scientific communication, my plea is to stop solving for a slightly less terrible version of what we have. Stop building faster horses when we need fundamentally different vehicles.
Understanding researchers’ current behaviors and incentives matters, but not so you can merely satisfy them. The goal is to create possibilities they haven’t imagined, to demonstrate alternatives they didn’t know they needed. Build for the science we could have, not the publishing system we’re stuck with. Make doing the right thing for science easy.
The future is already here
Scientists at the margins are already practicing open science: sharing data in real time, iterating in public, building on each other’s work without editorial permission. They’re discovering what many of us suspected. Science moves faster when we write for each other rather than for gatekeepers.
For those ready to do something different, the infrastructure already exists. The internet freed distribution. Repositories preserve our work. Databases make it discoverable. Every tool we need to bypass the gatekeepers is operational—and here’s what matters: The rough edges we discover by actually using these tools are precisely what drive their improvement. Each experiment reveals what needs building next.
What we lack is the courage to admit that our comfortable compromises are holding back human knowledge.
Your move
The transformation of scientific communication requires scientists willing to experiment and builders ready to support them. It requires recognizing that in this technological moment, with AI reshaping how we work and think, clinging to twentieth-century publishing models isn’t cautious—it’s reckless.
We’ve spent decades lamenting how journals constrain science while doing little to escape those constraints. That era of learned helplessness can end now. Not through some imagined wholesale revolution, but through the simple act of trying something different.
The question facing every scientist and every builder is whether you’ll be among those shaping what comes next. The experimenters are out there, creating the future one preprint, one dataset, one radically transparent step at a time.
They’re not waiting for permission. Neither should you.

Thanks for this lovely piece.
I have tons of things to say, but let me be content with noting that 'A New Kind of Science Publishing' is particularly important when it comes to hyperproblems - problems too big to be understood, let alone solved, by a single researcher or even a dedicated institution.
Climate change immediately comes to mind. AI might be another. Solving the brain (whatever that might mean!) is a third.
The one out-of-left-field 'maker' idea I want to throw in here is that we need philosophical builders if we want to create solutions that help scholars flourish - I'm using that term in the sense in which the Cosmos Institute calls itself the Academy for Philosopher-Builders: https://cosmos-institute.org/
The fact that journal publications are used as a currency for advancement and credit is certainly a problem, especially when the publication venue assigns a value that doesn’t necessarily match the value of the scientific contribution. As a former journal editor, I’ve watched several notable efforts kick off and then sputter out or have their approach adopted by for-profit publishers - including one that I used to work for.
One element that I no longer see mentioned much in this context, but that I think should be resurfaced and revitalized, is the impact of research assessment. When DORA (https://sfdora.org/) kicked off over 10 years ago, I thought that maybe, just maybe, scientists might be judged on the actual quality of their work rather than the number or types of publications they had.
I’m therefore surprised to learn that many in a younger generation of Bay Area scientists have never heard of this initiative.
So, while I agree that there are valid reasons to change science communication, I think it is at least as important (if not more so) to rethink the credit system and incentives that continue to send manuscripts down that path.