
The Peer Review System is Broken: How MDPI Exploits its Flaws and Benefits from the Current State of Scientific Publishing


Core Concept
The rise of MDPI and other similar publishers is a symptom of a broken peer review system, where elitism, bias, and inefficiency drive researchers to seek faster and less critical publication venues.
Summary

This article critiques the current state of scientific publishing, focusing on the controversial rise of MDPI as a major publisher. While MDPI is often labeled as predatory, the author argues that its success stems from exploiting the inherent flaws in the traditional peer review system.

The Problem with Predatory Publishing

  • Predatory journals prioritize profit over scientific rigor, often publishing papers with little to no review.
  • While watchlists and safelists exist, they are inherently limited and fail to address the root cause: a flawed peer review system.

Efficiency vs. Quality

  • MDPI boasts a significantly faster turnaround time compared to traditional journals, primarily due to streamlined processes and aggressive deadline enforcement.
  • This efficiency, however, raises concerns about the quality of peer review, with some critics arguing that speed compromises rigor.

The Flaws of Peer Review

  • The current system suffers from long review times, poor review quality, and biases that prioritize subjective criteria over scientific validity.
  • Elitism within top journals creates a bottleneck, forcing researchers to seek alternative publication venues, often at the expense of quality.

MDPI: Exploiting the Gaps

  • MDPI capitalizes on the demand for faster publication and less stringent review processes, attracting submissions from researchers frustrated with traditional journals.
  • The author argues that MDPI's for-profit model, while potentially problematic, is not inherently unethical and reflects a broader trend within academic publishing.

The Need for Reform

  • The article calls for a shift in focus from identifying predatory journals to addressing the systemic issues within peer review.
  • Solutions include improving efficiency, promoting transparency, and rewarding reviewers for their time and expertise.

Conclusion

  • While MDPI's practices are controversial, their success highlights the urgent need for reform within scientific publishing.
  • By addressing the flaws in peer review, the academic community can create a more equitable and rigorous system that prioritizes quality research over profit and prestige.

Statistics

  • Papers in legitimate journals were cited 18.1 times on average over five years (9% were never cited), compared with 2.6 citations for papers in predatory journals (56% were never cited).
  • MDPI has a much faster review process: 19 days to first decision (2018 and 2019) and 39 days to publication.
  • Turnaround times for papers in MDPI journals halved from 2016 to 2020, while the number of special-issue papers grew 7.5-fold.
  • MDPI, Frontiers, and Hindawi each published more than 50% of their papers through special issues.
Quotes

"Don’t make the scientists wait and don’t waste their time by unnecessarily delaying a response to them; this is the difference that we want to make."

"I’ve published in an MDPI journal and reviewed at a few – most reviews are not very rigorous and I personally don’t put a lot of effort in when reviewing for MDPI because I know that even if I recommend reject it will come back for revisions."

Key insights distilled from

by Pasi... at arxiv.org on 11-14-2024

https://arxiv.org/pdf/2411.08051.pdf
What is wrong with MDPI: Is it a predator or a serious competitor?

Deeper Inquiries

How can academic institutions develop more effective methods for evaluating research output beyond relying on journal rankings and impact factors?

Academic institutions are increasingly recognizing the limitations of relying solely on journal rankings and impact factors, such as the Journal Impact Factor (JIF), to evaluate research output. This shift is driven by the acknowledgment that these metrics are often poorly correlated with the actual quality or impact of individual research articles. To develop more effective evaluation methods, institutions can consider the following approaches:

1. Emphasize a Holistic View of Research Impact

  • Qualitative Assessment: Evaluate research on its originality, rigor, methodological soundness, and potential for real-world impact, through expert reviews, peer assessments, and consideration of the broader societal implications of the work.
  • Altmetrics Integration: Incorporate alternative metrics (altmetrics) that capture a wider range of research impacts beyond citations, such as social media engagement, policy citations, and mentions in mainstream media. Tools like PlumX and Altmetric can provide such data.
  • Open Science Practices: Give credit for practices like data sharing, code availability, and pre-printing, which promote transparency, reproducibility, and wider dissemination of research findings.

2. Shift Focus from Journals to Individual Research Outputs

  • Article-Level Metrics: Use article-level metrics, such as individual article downloads, citations, and altmetrics, to assess the impact of specific research outputs rather than relying on the journal's overall metrics (a minimal sketch of fetching such a metric follows this answer).
  • Open Peer Review: Explore open peer review systems in which reviewer identities and reports are made public; this can enhance transparency and accountability and potentially improve the quality of peer review.
  • Value Diverse Research Outputs: Recognize and reward a broader range of scholarly outputs beyond traditional journal articles, such as books, datasets, software, patents, and policy briefs.

3. Foster a Culture of Responsible Evaluation

  • Develop Clear Evaluation Criteria: Establish transparent, discipline-specific evaluation criteria that prioritize research quality, impact, and relevance over journal prestige.
  • Provide Training and Support: Offer training to researchers and evaluation committees on responsible research assessment practices and the use of diverse evaluation metrics.
  • Promote Long-Term Perspectives: Encourage a long-term view of research evaluation, recognizing that the impact of research may take time to materialize and should not be judged solely on immediate citation counts.

By adopting these multifaceted approaches, academic institutions can move beyond the limitations of journal-based metrics and create a more robust and equitable system for evaluating research output, one that truly reflects the quality and impact of scholarly work.
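As a concrete illustration of article-level metrics, the sketch below retrieves a per-article citation count from the public Crossref REST API (the "is-referenced-by-count" field of a work record). This is a minimal example, not something from the article itself, and the DOI shown is a placeholder to be replaced with a real one.

```python
import json
import urllib.request

def crossref_citation_count(doi: str) -> int:
    """Return the number of indexed works that cite the article with this DOI,
    as reported by the public Crossref REST API."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as response:
        record = json.load(response)
    # Crossref exposes the per-article citation tally under this field.
    return record["message"]["is-referenced-by-count"]

if __name__ == "__main__":
    doi = "10.1000/placeholder"  # placeholder DOI; substitute a real article DOI
    print(f"{doi} has been cited {crossref_citation_count(doi)} times per Crossref")
```

A raw count like this is only one input; as argued above, article-level metrics should feed a holistic assessment rather than replace it.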

Could a system of open peer review, where reviewers are publicly credited for their contributions, help to improve the quality and accountability of the review process?

The potential benefits of open peer review, where reviewers are publicly credited for their contributions, are a subject of ongoing debate within the academic community. Proponents argue that it could improve the process in several ways:

Enhanced Accountability and Quality

  • Increased Reviewer Diligence: Knowing their identity and reports will be public may encourage reviewers to be more thorough, objective, and constructive in their assessments.
  • Reduced Bias and "Gatekeeping": Open identities might discourage reviewers from making biased decisions based on personal factors or affiliations, promoting fairer evaluations.
  • Improved Review Quality: Public scrutiny could motivate reviewers to produce well-reasoned and well-supported critiques.

Greater Transparency and Recognition

  • Credit for Reviewers: Open peer review provides much-needed recognition for the often-unacknowledged work of reviewers, potentially incentivizing participation.
  • Transparency in Decision-Making: Publicly available reviews offer insight into the editorial process, potentially fostering trust in published research.
  • Educational Value: Open reviews can serve as valuable learning resources for early-career researchers, showcasing different reviewing styles and approaches.

However, some potential drawbacks also need consideration:

  • Fear of Retribution: Reviewers might hesitate to provide critical feedback on the work of senior or influential figures, fearing career repercussions.
  • Dominance of Established Voices: Early-career researchers might be reluctant to participate out of concern about publicly criticizing more established colleagues.
  • Logistical Challenges: Implementing and managing open peer review is logistically complex, requiring robust platforms and moderation strategies.

Overall, the effectiveness of open peer review in improving quality and accountability is likely context-dependent: research discipline, journal policies, and the availability of appropriate safeguards will all influence its success. Experimentation with different models, coupled with careful evaluation of their impact, is needed to determine where open review is suitable and how best to implement it.

What role should pre-print servers like arXiv.org play in disseminating research findings and facilitating scholarly communication in a rapidly evolving digital landscape?

Pre-print servers like arXiv.org have emerged as essential platforms for disseminating research findings and transforming scholarly communication in the digital age. Their role is becoming increasingly significant for the following reasons:

Accelerated Dissemination of Research

  • Rapid Sharing of Findings: Pre-print servers allow researchers to share their findings with the global research community immediately, bypassing the often lengthy traditional publishing timelines and accelerating the pace of scientific discovery.
  • Early Feedback and Collaboration: Pre-prints enable researchers to receive early feedback from peers, fostering collaboration and potentially improving the quality of their work before formal publication.
  • Increased Visibility and Impact: Pre-prints deposited on platforms like arXiv.org are indexed by search engines and queryable through a public API, increasing the visibility and potential impact of research even before peer review (see the sketch after this answer).

Enhanced Openness and Accessibility

  • Free Access to Research: Pre-print servers make findings freely available to anyone with an internet connection, regardless of institutional affiliation or subscription barriers.
  • Global Reach and Inclusivity: They democratize access to knowledge, enabling researchers worldwide, particularly those in low- and middle-income countries, to participate in the global scientific discourse.
  • Transparency and Reproducibility: The open nature of pre-prints encourages transparency and facilitates reproducibility, as data and methods are often shared alongside the manuscript.

An Evolving Role in the Scholarly Communication Ecosystem

  • Complementing Traditional Publishing: Pre-print servers are not intended to replace peer-reviewed journals but to complement them with rapid dissemination and early feedback.
  • Facilitating Open Peer Review: They can be integrated with open peer review platforms, allowing public commenting and more transparent evaluation.
  • Driving Innovation: The rise of pre-print servers has spurred new models for peer review, publication, and research assessment.

In conclusion, pre-print servers like arXiv.org play a vital role in accelerating scientific progress, promoting open access, and fostering a more inclusive and collaborative research environment. As the digital landscape continues to evolve, they are poised to become even more integral to scholarly communication.
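To make the dissemination mechanics concrete, here is a minimal sketch (not from the article) that queries arXiv's public export API for pre-prints matching a search term and prints their titles and links. The endpoint and the search_query/start/max_results parameters follow arXiv's documented Atom feed API; the search term is arbitrary.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # namespace of arXiv's Atom feed

def search_arxiv(query: str, max_results: int = 5) -> list[tuple[str, str]]:
    """Return (title, link) pairs for pre-prints matching `query` on arXiv."""
    params = urllib.parse.urlencode({
        "search_query": f"all:{query}",
        "start": 0,
        "max_results": max_results,
    })
    url = f"http://export.arxiv.org/api/query?{params}"
    with urllib.request.urlopen(url) as response:
        feed = ET.fromstring(response.read())
    # Each <entry> in the feed is one pre-print; <id> holds its abstract page URL.
    return [
        (entry.find(f"{ATOM}title").text.strip(), entry.find(f"{ATOM}id").text)
        for entry in feed.iter(f"{ATOM}entry")
    ]

if __name__ == "__main__":
    for title, link in search_arxiv("peer review"):
        print(f"- {title}\n  {link}")
```

Because anyone can run a query like this the moment a pre-print is posted, findings become discoverable long before a journal's formal publication pipeline completes.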