
Looking for a List of Predatory Journals?
Journal Legitimacy and Worthiness

You reached this site because you were looking for a quick way to determine if a journal is worthy or predatory. It is critical to understand that many factors can help you determine the credibility of a journal. These factors include its editorial board, the scope of its topics, its peer review process, and the quality of its papers. It is also important to ensure the journal is indexed in a major database, such as Scopus or Web of Science.

Journal Worthiness

The issue of journal worthiness is not a newly emerging one. Knight and Steinbach’s (2008) exposition of journal selection criteria provides a framework that doctoral students can use to identify reputable journals for submitting their work, and it has been widely adopted in academic circles as a reliable benchmark, giving students a set of standards against which to measure a journal’s quality and reputation. Rele et al. (2017) published another useful document for evaluating journals. Tools such as these have enabled academics to decide to which journal to submit their work.

Research university departments may decide that only a handful of journals are worthy of publishing their faculty’s research. For instance, some universities may require that faculty research be published in the top-tier journals of their discipline. This discussion explores frameworks that systematically assess worthy journals beyond the top tier. Most universities consider a journal credible if it follows peer review, editorial oversight, and ethical publishing standards. In contrast, they consider a journal predatory if it fails to maintain the same standards as reputable journals.

The problem is that no simple rubric exists for determining a journal’s worth. “Journal paper mills” arose with the advent of online publishing. Just as degree mills grant degrees without providing education, paper mills exist to collect fees, not to enhance knowledge. The difference resembles that between a genuine designer handbag and a knock-off: the former is created with quality materials and craftsmanship, while the latter is simply a poor imitation. The same is true of journals, whose worth is not always easy to determine.

In sum, the term predatory is typically applied to journals that charge authors fees to publish their work without providing the editorial and publishing services associated with legitimate journals. These services include peer review, copyediting, and indexing. These predatory journals often have a glossy appearance and are marketed to authors as an easy way to get their work published quickly. However, since their papers rarely pass through a rigorous peer review process, the quality of the work and the research itself are often questionable. Below we outline some ways to vet journals.

Citation Reputation or Rubric

One estimate suggests that 1.8 million articles are published in journals each year. Some of these add nothing to, or even detract from, our knowledge. Before selecting a journal for your article, read its published papers and come to your own conclusion. In an academic setting, however, you must also be able to demonstrate a journal’s worth to others.

One easy way to vet a journal is to check whether it is indexed in a reputable database, such as Scopus or SJR (SCImago Journal & Country Rank). Such databases determine a journal’s quality based on its impact, that is, how often its papers are cited in papers published in other journals. Similarly, authors can easily search Google Scholar using the Advanced Search feature: type the journal’s name in the “return articles published in” box. These are evidence-based measurements.
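As an illustration of such an evidence-based check, here is a minimal Python sketch that looks up a journal’s registration record by ISSN in Crossref’s public REST API (an additional database, not one named above). The check_journal helper and the placeholder ISSN are invented for this example, and the lookup is only one signal: a missing record means only that Crossref does not know the ISSN, not that the journal is predatory.

    import requests  # third-party HTTP library: pip install requests

    CROSSREF_JOURNALS = "https://api.crossref.org/journals/{issn}"

    def check_journal(issn: str) -> None:
        """Print basic Crossref evidence for the journal with this ISSN."""
        response = requests.get(CROSSREF_JOURNALS.format(issn=issn), timeout=30)
        if response.status_code == 404:
            print(f"ISSN {issn}: no record in Crossref")
            return
        response.raise_for_status()
        journal = response.json()["message"]
        print("Title:    ", journal.get("title"))
        print("Publisher:", journal.get("publisher"))
        print("DOIs registered:", journal.get("counts", {}).get("total-dois"))

    check_journal("1234-5678")  # placeholder ISSN; substitute the journal's own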

This paper concludes with other rubrics and a warning against using the fake “Beall’s list.”

Warning Signs for Journals to Avoid

These are some signs of deceptive journals: they charge author fees without providing the editorial and publishing services of legitimate journals (peer review, copyediting, and indexing); they cultivate a glossy appearance and market themselves to authors as an easy way to publish quickly; and the papers they publish rarely show evidence of rigorous review.

The Problem with the Real Beall’s List of Journals with Inadequate Review Processes

But not all databases of reputable journals are equally trustworthy. In 2008, Jeffrey Beall created Beall’s List (n.d.), a valuable online resource to help authors recognize exploitative publishing practices. Beall listed open-access publishers he thought would not perform thorough reviews and would publish papers so long as the authors paid a fee. Beall writes that predatory publishers are motivated to increase their profits by accepting low-quality papers.

At first, the list was partially helpful to authors unwilling to vet a journal on their own. However, it was not accurate enough to avoid being misleading, and this inaccuracy caused distress to Beall, journals, authors, and readers alike.

For example, University of Colorado officials complained that Beall included their non-predatory journals. And when a correspondent to Science magazine submitted the same low-quality paper to the journals on Beall’s list, about one in five of them rejected the fake scientific article, raising questions about using the list to vet a journal.

Potential, Possible, or Probable Predatory Publisher

To determine the quality of the review process of 304 fee-charging open-access journals, Science correspondent Bohannon (2013) conducted a sting operation. The 304 journals were either blacklisted in Beall’s list or whitelisted in DOAJ (the Directory of Open Access Journals). To each journal he submitted a paper that editors and peer reviewers should have rejected immediately because of its obvious, grave flaws. He reported his findings in the article “Who’s Afraid of Peer Review?”

His research revealed a flaw in the real Beall’s list.

He found that about one in five of the journals Beall identified as predatory did, in fact, properly reject the paper, thus demonstrating an adequate review process.

Even more striking, 45% of the publishers listed in DOAJ, which in theory lists only non-predatory journals, accepted Bohannon’s fake paper for publication. These included journals from prestigious institutions and publishing companies such as Elsevier, SAGE, and Wolters Kluwer.

Regarding the results of Bohannon’s experiment, Davis (2013) describes Beall’s original list in the following scathing words: Beall’s list “is falsely accusing nearly one in five of being a ‘potential, possible, or probable predatory scholarly open-access publisher’ on appearances alone.”

Beall “should reconsider listing publishers on his ‘predatory’ list until he has evidence of wrongdoing. Being mislabeled as a ‘potential, possible, or probable predatory publisher’ by circumstantial evidence alone is like the sheriff of a Wild West town throwing a cowboy into jail just 'cuz he’s a little funny lookin.’ Civility requires due process.”

Given these and other limitations and problems, Beall ceased publishing his list in 2017.

Even after its withdrawal, the list continued to exert influence. A more recent study (Cukier et al., 2020) revealed that the original Beall’s list was not evidence-based, supporting Davis’s conclusion about due process.

Kratochvíl et al. (2020) expanded this research, testing the value of the formal criteria used by Beall, DOAJ, and others to determine whether a journal is predatory or, in their words, untrustworthy. Their study of 259 journals found that 74 (29%) may have been incorrectly assessed as untrustworthy.

They conclude that a proper evaluation of a journal requires a complex view, what others call a rubric. A rubric allows researchers to independently assess a journal’s quality and identify potential biases or inconsistencies, and it provides a consistent metric that makes informed decisions easier. Ultimately, rubrics ensure quality control and give researchers an objective way to determine the most suitable journals in which to publish their work.
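To make this concrete, here is a minimal Python sketch of how a rubric turns separate judgments into one consistent metric. The criteria echo the factors this article names (editorial board, peer review, indexing, fees, paper quality), but the items, weights, and 0–2 scale are hypothetical illustrations, not the rubric of Rele et al. (2017) or any published instrument.

    # Hypothetical rubric: each criterion is rated 0 (absent) to 2 (strong);
    # the weights are illustrative, not drawn from any published instrument.
    RUBRIC_WEIGHTS = {
        "verifiable editorial board": 2,
        "documented peer-review process": 2,
        "indexed in a major database": 2,
        "fees disclosed up front": 1,
        "quality of recently published papers": 2,
    }

    def score_journal(ratings: dict[str, int]) -> float:
        """Combine per-criterion 0-2 ratings into a single 0-100 score."""
        max_points = sum(2 * weight for weight in RUBRIC_WEIGHTS.values())
        points = sum(ratings.get(criterion, 0) * weight
                     for criterion, weight in RUBRIC_WEIGHTS.items())
        return 100 * points / max_points

    # Example: rating a hypothetical journal against the rubric.
    ratings = {
        "verifiable editorial board": 2,
        "documented peer-review process": 1,
        "indexed in a major database": 2,
        "fees disclosed up front": 0,
        "quality of recently published papers": 1,
    }
    print(f"Rubric score: {score_journal(ratings):.0f}/100")  # prints 67/100

The particular numbers matter less than the fact that every journal is measured the same way, which is exactly what the studies above found missing from list-based approaches.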

Rele et al. (2017) provide one such rubric, offering a variety of criteria for evaluating a journal. A more complete and insightful exploration of this issue is found in a paper by T. Grandon Gill (2021). In brief, the paper explores these key points:

A central theme of the paper is that the very term “predatory” is misleading since it is frequently unclear who the “prey” is in such situations. For example, is it the authors, the institutions that hire the authors based on their work, the reviewers, other researchers who find the work, or even science itself?

Tragically, while academics are under tremendous pressure to publish, they often lack the resources and training needed to present their research in a manner that would make it acceptable for publication in highly regarded journals. How can a journal or publisher that mentors researchers be considered a predator?

Journal Legitimacy and Trustworthiness

Gill (2021) proposes that our focus should be on distinguishing legitimate from illegitimate journals, and he catalogs the practices common to most illegitimate journals (p. 73).

He also encourages authors to retain a copy of all reviews received. Where the reviews demonstrate that an in-depth, unbiased analysis of the article was conducted, the damage done by publishing in an outlet of low perceived quality will mainly be restricted to the authors (who likely could have published elsewhere). By the same token, if the authors of an accepted paper get only cursory feedback from an outlet’s reviewers, they should reconsider moving forward to publication with that outlet—regardless of its reputation.

The Legacy of Bad Research Lives On – Beallslist.net

The limited value and the dangers of Beall’s original blacklist and DOAJ’s whitelist have been demonstrated repeatedly in numerous research studies. Nevertheless, an anonymous postdoctoral researcher ignored these problems and published his or her own list. To paraphrase Davis, by doing so, the author neglected to follow due process, demonstrating a lack of respect for civility.

To add undue credibility to the list, the author appropriated the name “Beall’s List” instead of naming the new list distinctively.

The anonymous author has added journals, so the list is longer than the real Beall’s list, yet it exhibits similar problems. Indeed, the copycat list repeats the earlier list’s errors even more blatantly. While the author may have intended to remedy some of the limitations of Beall’s list, the copycat only further exposes its readers to faulty shortcuts.

For one thing, it contains a significant amount of bias, further clouding its credibility, and it has numerous inaccuracies. Even worse, when inaccuracies are detected, the anonymous author fails to respond to publishers’ emails. Consequently, the legitimacy and validity of the copycat list are questionable, and it should be removed from the public sphere.

Informing Science Institute

The Informing Science Institute has been publishing high-quality Open Access journals since 1998. Open Access (OA) refers to free, unrestricted online access to research outputs such as journal articles and books. OA is increasing in popularity because it allows for wider dissemination of research, which in turn can lead to greater impact.

Its journals’ review process is extensive and mentoring by design. Commonly, the review board for any given paper is composed of four or more reviewers who have previously indicated skills and interest in the topic. Whether or not a paper is accepted pending revision, the authors receive a development letter, written by the paper’s editor based on the editor’s thoughts and those of the review board, that spells out suggestions for improving the submission. This feedback helps authors hone and refine their work, ensuring that the highest quality research is published in Informing Science Institute journals.

Moreover, as one of the originators of OA journal publishing, the Institute has been instrumental in advancing transdisciplinary research, particularly Informing Science, by providing an international platform for researchers to share their ideas.

References

Beall, J. (n.d.). Potential, possible, or probable predatory scholarly open-access publishers. Scholarly Open Access. https://web.archive.org/web/20170112125427/https://scholarlyoa.com/publishers/

Bohannon, J. (2013). Who’s afraid of peer review? Science, 342(6154), 60-65. https://doi.org/10.1126/science.342.6154.60

Cukier, S., Helal, L., Rice, D. B., Pupkaite, J., Ahmadzai, N., Wilson, M., Skidmore, B., Lalu, M. M., & Moher, D. (2020). Checklists to detect potential predatory biomedical journals: A systematic review. BMC Medicine, 18, Article 104. https://doi.org/10.1186/s12916-020-01566-1

Davis, P. (2013). Open access “sting” reveals deception, missed opportunities. The Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2013/10/04/open-access-sting-reveals-deception-missed-opportunities/

Gill, G. (2021). The predatory journal: Victimizer or victim? Informing Science: The International Journal of an Emerging Transdiscipline, 24, 51-82. https://doi.org/10.28945/4788

Knight, L. V., & Steinbach, T. A. (2008). Selecting an appropriate publication outlet: A comprehensive model of journal selection criteria for researchers in a broad range of academic disciplines. International Journal of Doctoral Studies, 3, 59-79. https://doi.org/10.28945/51

Kratochvíl, J., Plch, L., Sebera, M., & Koriťáková, E. (2020). Evaluation of untrustworthy journals: Transition from formal criteria to a complex view. Learned Publishing, 33, 308-322. https://doi.org/10.1002/leap.1299

Rele, S., Kennedy, M., & Blas, N. (2017). Journal evaluation tool. LMU Librarian Publications & Presentations, 40. https://digitalcommons.lmu.edu/librarian_pubs/40

Author

This was written by Eli Cohen, Executive Director, Informing Science Institute, in 2023. You can reach him at EliCohen@InformingScience.org.

Cite the paper as follows:

Cohen, E. (2023). Looking for a list of predatory journals? Journal legitimacy and worthiness. http://BeallsList.info