Recent incidents, including uncorrected errors in high-profile research, irreproducible findings published in prestigious journals, and federal findings that widely marketed ingredients lack efficacy, have increasingly undermined public trust in science. These are not isolated failures: they originate in dysfunctional systems and incentives that enable questionable research practices to arise and persist. A
hyper-competitive "publish or perish" academic culture
encourages the rushed dissemination of exaggerated,
fragmented, or preliminary findings. Practices prioritizing publication volume over rigor, coupled with a breakdown of quality control in academic publishing, have propagated unreliable studies throughout the literature.
Restoring confidence demands addressing these perverse incentive structures: research funders and tenure committees should
incentivize reproducible, completed science over metrics
like publication counts that prize perceived productivity
over integrity. Journals must also embrace
responsibilities as moderators by improving vetting and
upholding exacting standards even when doing so puts them at a competitive disadvantage. Reforming academic cultures that sacrifice quality for quantity is imperative.
The Blight: How
Perverse Incentives Breed Sloppy Science
The "publish or perish" dictum
pressures academics to amass publications as career
advancement benchmarks. Facing existential precarity,
researchers are incentivized to cut corners, rushing out weak preliminary findings rather than investing years in
robust studies. This manifests through salami-slicing
projects to inflate publication volume, presenting minor studies as more impactful than the evidence supports, and
pushing sub-par work through journals where acceptance
priorities supersede upholding rigor. The ensuing
proliferation of rushed, fragmented, or exaggerated
findings pollutes the literature, ultimately obstructing scientific progress.
Meanwhile, the explosion of dubious journals and
lightly-vetted special issues lacks appropriate oversight,
enabling dissemination of questionable research under the
false cover of "peer reviewed" credibility. Once published, even in predatory forums, flawed studies acquire a legitimacy that is difficult to rescind, regardless of merit. Masquerading as valid science despite methodological flaws or exaggerated conclusions, they accumulate citations and get integrated into literature reviews over time, persistently polluting both the literature and collective understanding.
Weak studies waste readers' time and skew collective knowledge. Meanwhile, their place within the formal publication ecosystem lends them undeserved credibility that influences funding priorities, public discourse, policy development, and practice. This strains existing safeguards, enabling misinformation to endure in the guise of credible publications and to distort knowledge advancement through perpetuated false leads across fields. Witness the enormous investment of time and resources that went into following up on the now discredited and retracted Alzheimer's papers co-authored by former Stanford President Marc Tessier-Lavigne.
Case Study: MDPI
Prioritizes Volume Over Quality Control
The academic publisher MDPI illustrates how such practices contribute to the problem. Its breakneck publication pace exemplifies a model that subordinates quality to output. In 2022 alone, MDPI published over 300,000 scientific articles across 400+ journals, a rate exceeding 25,000 items per month and reflecting exponential annual growth. Sustaining that output requires broad-stroke practices that
prioritize volume: chief among them, acceptance rates
consistently over 50%. Such an indiscriminately high
threshold casts doubt on any serious selectivity or
curation governing approvals. Instead, MDPI opts for an
unfocused accumulation approach - the more articles, the
better, regardless of marginal quality or contribution.
Manuscript turnover times of just 6 weeks further indicate
cursory, rushed reviews rather than careful scrutiny or
thorough vetting. Such aggressively compressed timelines and explicit production benchmarks suggest evaluations function only nominally, as obligatory checks. At a tempo geared toward platform expansion, rigor is surrendered for growth, and whatever quality control persists erodes under strain as ever-higher output becomes the norm.
Exacerbating matters, cascades of lightly-vetted special
journal issues - frequently at the request of academic
organizations to feature conference proceedings - deluge
editorial and peer review capacities already struggling to
provide oversight given current volume loads. These
feverish conference paper "dumps" involve comparatively
lax, accelerated peer approvals to meet publishing
deadlines, allowing underdeveloped work to permeate the literature and still count toward academic credit despite circumventing the safeguards of full review.
Confusion arises over what "peer review" even means amid practices that actively undercut it in the name of productivity. The risks, however, are manifest: MDPI's unchecked publication deluge forgoes responsible curation. In its headlong rush to maximize output, MDPI surrenders essential quality checks, enabling the uncontrolled spread of sub-par studies that pollute understanding and trust in science.
In 2023, Clarivate removed several prominent MDPI journals, including the International Journal of Environmental Research and Public Health, from the Web of Science citation index over concerns with the practices described above.
The High Costs:
Undermining Public Trust and Scientific Progress
Once published, even in questionable journals lacking
appropriate scrutiny, flawed studies gain an unwarranted
patina of legitimacy. Despite shaky methodology or
exaggerated conclusions, they accumulate citations and permeate the literature by masquerading as valid science. This enables them to influence funding priorities, medical guidelines, and public policies despite originating from poor research. Their persistence wastes resources on false leads, erodes perceptions of expertise, and undermines trust in safeguards, like peer review, intended to ensure quality control.
Meanwhile, the proliferation of such flawed work pollutes understanding with misinformation posing as credible science. These studies propagate through fields, wasting scholars' time on dead ends and steering activity toward faulty premises, and their dispersal launders speculation into apparent evidence for questionable theories. Even when the worst offenders are eventually retracted, lingering misapprehensions and a distorted literature still steer researchers awry. This highlights
systemic failures in upholding quality standards and
enforcement of reproducibility - breakdowns enabling
misinformation to arise and persist, undercutting
knowledge development.
The readiness of public figures to propagate speculative claims as undisputed facts further erodes the credibility of scientific expertise. When politicians tout
fringe studies lacking rigor or findings exceeding the
evidence, it fuels public confusion regarding what the
consensus research evidence shows. Such endorsements
afford unsubstantiated claims unwarranted credibility that
researchers spend years attempting to correct rather than
focusing efforts on expanding knowledge frontiers.
Politicization of scientific issues enabled by exaggerated
extrapolations or quotes taken out of context contributes
to polarized policy stalemates. The erosion of public trust seeded by these incidents ultimately undermines the role evidence-based research and expertise play in informing sound policymaking.
Solutions: Improving Research Practices, Communication
& Curation
Use AI: Emerging AI tools possess untapped potential for assisting human discernment and analysis in overflowing publication ecosystems. Language
models can help systematically surface unstated
assumptions, scrutinize phrasing choices indicating
exaggerated certainty, and probe how terminology
selections frame interpretation. Such technologies can
complement expert judgment in assessing both research
quality and contextual importance within massive
literature volumes exceeding human-scale comprehension.
AI-generated scrutiny questions focused on methodology,
evidentiary support levels, consideration of alternatives,
and constraint acknowledgement could enhance consistency
in gauging standards. Targeted AI assistance cataloguing
research topics, surfacing logical gaps, checking cited
support levels, and contextualizing claimed novelty amid
prior work can reinforce discernment.
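As an illustration, a short script could ask a language model to generate exactly such scrutiny questions for a submitted abstract. This is a minimal sketch assuming the openai Python client and an API key in the environment; the model name and prompt wording are illustrative placeholders, not a validated reviewing pipeline.

    # Minimal sketch: asking a language model for structured scrutiny
    # questions about a manuscript abstract. Model choice and prompt
    # wording are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PROMPT = """You are assisting a peer reviewer. For the abstract below,
    list: (1) unstated assumptions, (2) phrasing that signals exaggerated
    certainty, (3) methodology questions, and (4) limitations the authors
    should acknowledge.

    Abstract:
    {abstract}"""

    def scrutiny_questions(abstract: str) -> str:
        """Return model-generated review questions for one abstract."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any capable model would do
            messages=[{"role": "user",
                       "content": PROMPT.format(abstract=abstract)}],
        )
        return response.choices[0].message.content

    print(scrutiny_questions("We show compound X cures disease Y..."))

Such tooling would assist rather than replace reviewer judgment: the generated questions are prompts for human scrutiny, not verdicts.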
More Preprint Repositories: Enlarging the universe of
preprint repositories would enable scholarly scrutiny and
public critique prior to formal peer review and journal
publication. As draft manuscripts are posted openly,
experts could provide crowdsourced feedback focused on
analyzing methodology, questioning assumptions, assessing
the validity of findings, and probing constraints around
generalization. This constructive scrutiny leverages
collective disciplinary expertise to surface issues
possibly overlooked by isolated reviewers. Drawing
attention to logical gaps, insufficiently supported
conclusions, or alternatives requiring consideration
strengthens work before submission to publishers. The
iterative refinement enabled through public engagement
with preprints improves research transparency while
enhancing result robustness. It also accelerates
scientific progress by allowing immediate usage of
findings rather than years-delayed journal appearances,
while preserving expert debate opportunities that
ultimately enhance rigor.
Demand Meaningful Knowledge Gains: Restoring meaningfulness to scientific contribution requires justification for adding to already saturated literature. With academic publication milestones now
exceeding 4 million papers annually across 35,000
journals, editorial discernment is essential. Meeting word
count requirements or showcasing experimental competency
should not suffice for earned amplification through
journals. Journals should mandate statements explaining
how new submissions specifically advance conceptual
understanding or practical applications within a
well-defined scope. Given overwhelming publication
volumes, priority should be given to work providing
non-incremental knowledge advances rather than minor
variations on well-trod themes. Reviewer checklists should require clear differentiation of key points not addressed in prior literature, mandating that contributions meaningfully expand understanding rather than add subtle variants on exhausted topics. Additionally, higher bars for methodological rigor must be enforced consistently, including sample sizes and statistical power adequate to support the conclusions drawn rather than extrapolations exceeding the data. Authors must
evidence rigorous methods, properly contextualize
limitations, demonstrate meaningful knowledge advancement
validating journal resources spent, and practice restraint, drawing conclusions only as strongly as the data warrant and resisting pressure to overhype modest insights. Situating
work against existing knowledge while requiring robust
evidence for claims made will help shift incentives from résumé-padding publications of marginal value toward work resolving open questions and uplifting a
specialty's theoretical or applied capacities. Such
heightened justification expectations for clearing
selectivity filters would emphasize decisive progress over
marginal additions, uphold integrity, and reverse the
dilution of literature signal-to-noise.
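To make such a checklist concrete, a submission system could decline to route manuscripts to reviewers until every justification field is filled in. The following is a minimal sketch; the field names and the desk-review rule are assumptions for illustration, not any journal's actual policy.

    # Minimal sketch of a contribution-justification checklist.
    # Field names and the desk-review rule are illustrative assumptions.
    REQUIRED_FIELDS = [
        "advance_over_prior_work",       # what prior literature leaves open
        "conceptual_or_practical_gain",  # why the advance matters
        "sample_size_justification",     # statistical power for the claims
        "stated_limitations",            # constraints on generalization
    ]

    def checklist_gaps(submission: dict) -> list:
        """Return the justification fields left empty in a submission."""
        return [f for f in REQUIRED_FIELDS
                if not submission.get(f, "").strip()]

    submission = {
        "advance_over_prior_work": "First replication at 10x sample size.",
        "stated_limitations": "Single-site cohort; no longitudinal data.",
    }
    missing = checklist_gaps(submission)
    if missing:
        print("Hold for desk review; missing:", ", ".join(missing))

The point is not to automate judgment but to force authors to state, in structured form, the knowledge gain that reviewers can then dispute.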
The Imperative of
Responsible Curation
Restoring integrity in science demands
greater discernment and transparency when amplifying
claims through published endorsement. Facts require
nuanced communication situating certainty within
contextual constraints to maintain public trust and avoid
misrepresentation. Progress relies on selectivity filters
separating meaningful signal from swelling publication
noise. Journals must embrace responsibilities as
moderators and knowledge curators rather than distribution
channels - upholding standards regardless of competitive
pressures. This necessitates scrutinizing relevance,
methodology, reasoning, and restraint at least as much as
technical soundness. Careful, discerning curation is
essential for research to regain purpose and value amid
overwhelming volume.
This mandates transparent explication of the assumptions and design choices that frame the meaning and generalizability of results. Scrupulous vetting should scrutinize not just technical soundness but methodological relevance, reasoning restraint, acknowledgement of limitations, and exploration of alternatives that isolated review often obscures. Literature integration depends on
upholding standards and enforcing contribution
justifications regardless of competitive incentives that
subordinate quality to quantity.
Progress relies on transparent
communication that accurately reflects the iterative,
uncertain nature of research. Findings are milestones for
further inquiry rather than dogmatic declarations.
Speculation requires clear separation from evidentiary
conclusions. Cherry-picked exaggerations that overpromise fuel cynicism when promises predictably go unfulfilled; moderated claims anchored to evidence
strengthen credibility over time. Restoring purpose
requires integrating nuance into publishing endorsement
decisions - no longer laundering misinformation by failing
to acknowledge shaky evidentiary grounds or methodological
constraints.
Responsible curation will require
cultural and systemic changes prioritizing robustness over
pace, depth over volume, and quality over quantity -
coupled with technological assistance. But integrity
relies on academic communities and publishers themselves resisting perverse structures by upholding exacting standards even when they are disproportionately costly to individual interests. Research funders and institutional review boards must incentivize reproducible, scientifically sound studies over the rushed dissemination of fragmented, exaggerated findings that maximize perceived productivity over meaningful contribution. Tenure
committees should similarly assess applicants based on
completion of robust projects advancing understanding
rather than on metrics like publication counts that breed corner-cutting. Progress emerges from diligence, not the hasty dissemination of incremental advances that misaligned incentives dress up as revelatory.
Restoring science’s purpose and value requires patient
investment in building collective understanding.
See our call for contributions on this topic.