Is Frontiers in Trouble?

Lately it seems the tide is turning against Frontiers. Originally hailed as a revolutionary open-access publishing model, the publishing group has come under intense criticism in recent years. Recent issues include being placed on Beall’s controversial ‘predatory publisher’ list, multiple high-profile disputes at the editorial level, and controversy over HIV- and vaccine-denialist articles published in the journal seemingly without peer review. As a proud author of two Frontiers articles and a former frequent reviewer, I found that these issues, compounded with a generally poor perception of the journal, recently led me to stop all publication activities at Frontiers outlets. Although the official response from Frontiers to these issues has been mixed, yesterday a mass email from a section editor caught my eye:

Dear Review Editors, Dear friends and colleagues,

As some of you may know, Prof. Philippe Schyns recently stepped down from his role as Specialty Chief Editor in Frontiers in Perception Science, and I have been given the honor and responsibility of succeeding him in this role. I wish to extend to him my thanks and appreciation for the hard work he has put into building this journal from the ground up. I will strive to continue his work and maintain Frontiers in Perception Science as one of the primary journals of the field. This task cannot be achieved without the support of a dynamic team of Associate Editors, Review Editors and Reviewers, and I am grateful for all your past, and hopefully future, efforts in promoting the journal.

I am aware that many scientists in our community have grown disappointed or even defiant of the Frontiers publishing model in general, and Frontiers in Perception Science is no exception here. Among the foremost concerns are the initial annoyance and ensuing disinterest produced by the automated editor/reviewer invitation system and its spam-like messages, the apparent difficulty in rejecting inappropriate manuscripts, and (perhaps as a corollary) the poor reputation of the journal, a journal to which many authors still hesitate before submitting their work. I have experienced these troubles myself, and it was only after being thoroughly reassured by the Editorial office on most of these counts that I agreed to get involved as Specialty Chief Editor. Frontiers is revising their system, which will now leave more time for Associate Editors to mandate Review Editors before sending out automated invitations. When they occur, automated RE invitations will be targeted to the most relevant people (based on keyword descriptors), rather than broadcast to the entire board. This implies that it is very important for each of you to spend a few minutes editing the Expertise keywords on your Loop profile page. Most of these keywords were automatically collected from your publications, and they may not reflect your true area of expertise. Inappropriate expertise keywords are one of the main reasons why you receive inappropriate reviewing invitations! In the new Frontiers system, article rejection options will be made more visible to the handling Associate Editor. Although my explicit approval is still required for any manuscript rejection, I personally vow to stand behind all Associate Editors who will be compelled to reject poor-quality submissions. (While perceived impact cannot be used as a rejection criterion, poor research or writing quality and objective errors in design, analysis or interpretation can and should be used as valid causes for rejection.)
I hope that these measures will help limit the demands on the reviewers’ time, and contribute to advancing the standards and reputation of Frontiers in Perception Science. Each of you can also play a part in this effort by continuing to review articles that fall into your area of expertise, and by submitting your own work to the journal.

I look forward to working with all of you towards establishing Frontiers in Perception Science as a high-standard journal for our community.

It seems Frontiers is indeed aware of the problems and is hoping to bring back wary reviewers and authors. But is it too little, too late? Discussing the problems at Frontiers is often met with severe criticism or outright dismissal by proponents of the OA publishing system, but I felt these dismissals neglected a wider negative perception of the publisher that has steadily grown over the past five years. To get a better handle on this I asked my Twitter followers what they thought. 152 people responded as follows:

As some of you requested control questions, here are a few for comparison:


That is a stark difference between the two top open access journals – whereas only 19% said there was no problem at Frontiers, a full 50% said there is no problem at PLOS ONE. I think we can see that, even accounting for general science skepticism, opinions of Frontiers are particularly negative.

Sam Schwarzkopf also lent some additional data, comparing the whole field of major open access outlets – Frontiers again comes out poorly, although strangely so does F1000:

These data confirm what I had already feared: public perception among scientists (insofar as we can infer anything from such a poll) is lukewarm at best. Frontiers has a serious perception problem. Only 19% of 121 respondents were willing to outright say there was no problem at the journal. A full 45% said there was a serious problem, and 36% were unsure. Of course, to fully evaluate these numbers we’d like to know the baserate of similar responses for other journals, but I cannot imagine any Frontiers author, reviewer, or editor feeling joy at these numbers – I certainly do not. Furthermore, they reflect a widespread negativity I hear frequently from colleagues across the UK and Denmark.

What underlies this negative perception? As many proponents point out, Frontiers has actually been quite diligent at responding to user complaints. Controversial papers have been put immediately under review, overly spammy review invitations and special-issue invites have largely ceased, and so on. I would argue the issue is not any one single mistake on the part of Frontiers leadership, but a growing history of errors contributing to a perception that the journal is following a profit-led ‘publish anything’ model. At times the journal feels totally automated, with little human care given to publishing and extremely high fees. What are some of the specific complaints I regularly hear from colleagues?

  • Spammy special-issue invites. An older issue, but at Frontiers’ inception many authors were inundated with constant invitations to special issues, many of which were only tangentially related to the authors’ specialties.
  • Spammy review invites. Colleagues who signed on to be ‘Review Editors’ (basically repeat reviewers) reported being hit with as many as 10 requests to review in a month, again many without relevance to their interests.
  • Related to both of the above, a perception that special issues and articles are frequently reviewed by close colleagues with little oversight. Similarly, many special issues were edited by junior researchers at the PhD level.
  • Endless review. I’ve heard numerous complaints that even fundamentally flawed or unpublishable papers are difficult or impossible to reject. Reviewers report going through multiple rounds of charitable review, finding that the paper only gets worse and worse, only to be removed from the review by editors and the paper published without them.

Again, Frontiers has responded to each of these issues in various ways. For example, Frontiers originally defended the special issues, saying that they were intended to give junior researchers an outlet to publish their ideas. Fair enough, and the spam issues have largely ceased. Still, I would argue it is the build-up and repetition of these issues that has made authors and readers wary of the journal. This, coupled with the high fees and feeling of automation, leads to a perception that the outlet is mostly junk. This is a shame, as there are certainly many high-value articles in Frontiers outlets. Nevertheless, academics are extremely skittish, and negative press creates a vicious feedback loop. If researchers feel Frontiers is a low-quality, spam-generating publisher that relies on overly automated processes, they are unlikely to submit their best work or review there. The quality of both drops, and the cycle intensifies.

For my part, I don’t intend to return to Frontiers unless they begin publishing reviews. I think this would go a long way to stemming many of these issues and encourage authors to judge individual articles on their own merits.

What do you think? What can be done to stem the tide? Please add your own thoughts, and stories of positive or negative experiences at Frontiers, in the comments.



A final comparison question



22 thoughts on “Is Frontiers in Trouble?”

  1. I think Frontiers is very diverse, and there are different problems in the different areas. The biggest general problem for me is the inability to outright reject crackpot papers. I imagine the usual scenario is for an exasperated reviewer to just leave the review process, but then a new one will come in, and as long as the authors are patient, the paper will eventually get accepted. These are papers that should never have gotten past an editor in the first place.

    My own (reviewing only) experience with Frontiers in Human Neuroscience was very positive. It is rare that I get matched up with papers that are perfectly aligned with my expertise, but this was the case with the Frontiers papers I reviewed (assigned by different editors too!). So I think things are working well with these particular editors, and I follow and trust the papers that get published there.

    My husband, who is in math, had an entirely different experience. He was asked to be an *editor* in a field where he has just one paper. He explained that it’s not really his field – so far so good. The response of Frontiers? Won’t you please please still consider being an editor? This is just bad. If he had accepted (and people do accept all sorts of things for career advancement), he wouldn’t have been in a position to adequately judge the quality of the incoming papers or reviews.

  2. I think part of the (mis-)perceptions of Frontiers stem from the fact that this is a new publication model that most people still aren’t used to. But beyond that, I just don’t think the execution of the model is that great. I’ll try to explain what I mean: F1000Research also has a novel review/publication process in which all manuscripts are uploaded and visible to all. In the post-publication peer review philosophy, just because a study is online, even after some peer review, should on its own not imply that it is a robust and valuable contribution to the scientific literature. Only after sufficient post-publication review has taken place should we warm up to a study. This also applies to studies published in conventional journals, even though a lot of people still consider a “published, peer-reviewed” study to be set in stone.

    F1000Research also doesn’t allow you to “reject” a paper. There is in fact no editorial gatekeeping there at all – the whole review process is driven by authors and reviewers. I am not 100% sure I like that, but I can see why they decided to do it that way. It took an asshole like me, when I reviewed the Tressoldi EEG paper, to never approve the manuscript for indexing. I think in many situations people would sooner or later approve it.

    It’s not really all that dissimilar from Frontiers. The idea behind all this is that so long as the basics have been taken care of, the evaluation should move to post-publication review by the larger community. And only if the community at large decides that the study is worthwhile should it be treated as such. Therefore being “published” is irrelevant. But of course it isn’t. People need to put their publications on their CVs. While some grant/job applications ask you for your citations or whatever on your publications, this in itself isn’t great either. Asking for PPPR ratings of your publications wouldn’t be ideal either because it is skewed by all manner of factors and depends on time.

    The biggest problem I see with Frontiers is that the review happens behind closed doors. I don’t care about the reviewers’ names being published alongside the paper. If anything, this is a bad thing, because people will treat the paper as “published” when it appears, and the reviewers’ names are an implicit acceptance. You can’t see what improvements they suggested. The only way for reviewers to express their dissatisfaction is to withdraw. While some say that rejections happen at Frontiers, I have never seen it happen. Basically, the Frontiers review model is simply flawed. If they adopted a more transparent system like F1000R or PeerJ, this would remedy a lot of that.

    • Ultimately I definitely agree – for me publication shouldn’t be about ‘rejecting’ or ‘accepting’ articles. This feeds into the misconception that there are perfect or conclusive papers. Science is always a work in progress, and what I like about open ‘github’-style science à la F1000 is that it makes this explicit. But you are absolutely correct: for this model to work, the review process needs to be fully transparent.

  3. Regarding the endless review point, over the past few years I’ve noticed two camps forming, and they seem to have come to a head in the Frontiers model. One ‘traditionalist’ camp says that if a paper is not good enough, reject and be done. The ‘open’ camp would likely argue that every manuscript submitted has some value; it just needs to be revised and edited until it’s ready for publication.

    Obviously people will vary in their opinions, and there is a spectrum between the two extremes. Nonetheless, Frontiers’ ‘always publish’ model seems to be the place where these viewpoints clash – what do you do with a paper in very poor English, with serious problems in design or interpretation, that does not improve with repeated rounds of revision? Or a paper with intrinsically flawed methodology, or one espousing pseudo-scientific nonsense?

    The Frontiers party line seems to be ‘keep going at it’ until it’s publishable, which is unsustainable both for the authors and the reviewers who are stuck with endless, often fruitless, revisions. This drives away reviewers who are closer to the traditionalist camp and puts off authors who see poor quality work published in Frontiers.

    I don’t think the Frontiers model is fundamentally flawed, I think trying hard to accept the majority of papers on their scientific merit irrespective of novelty or fashion is a good thing. However, if you don’t have a mechanism for dealing with authors (or reviewers!) who are not willing to improve their work, and filtering out crackpots, it’s not going to work long-term.

    Ivan Alvarez

    • This is precisely why the reviews (rather than the reviewers’ names – they can sign if they want) should be published. If anyone can see what happened behind the scenes and how reviewers rated the paper, then nobody would complain about their “implicit acceptance”.

      • I agree. I don’t understand the named reviewer thing at all. When I’ve read a Frontiers paper reviewed by someone I know and they’ve missed something it doesn’t help their reputation in my eyes, whereas I’ll never know how many fundamentally brilliant contributions they may have made in review.

  4. I have had both good and bad experiences. The good experience happened to be with authors that I knew, though. The bad experience(s) were very bad, and they have coloured my perception of Frontiers.

    I did not receive the mass email, as I am only signed up for Frontiers in Psychology, but I did just get a quiet email saying that Frontiers has updated and redesigned its reviewing guidelines.

    The real problem is the publishing of nonsense that everyone agrees is nonsense, that anyone can identify as nonsense, but seemingly stays up no matter what. Case in point is this article:

    Which also happens to be the most viewed article in Frontiers in Perception, the authors of which are good “friends”, Mossbridge and Tressoldi et al.

    I have had a brief flirtation with Frontiers but I am done after completing this current review.


    • I actually disagree with that. You know I’ve been more vocally critical of the Mossbridge paper than anyone, but I don’t think it classifies as “nonsense” that “stays up no matter what”. This was a meta-analysis that, for all intents and purposes, reports the results it found within reasonable limits. I have some serious issues with the interpretation and scientific reasoning, but I have not seen anyone question the validity of their findings publicly (Daniel Lakens mentioned that there were coding errors in their database – this would be a good place to start). There are serious issues with interpretation and scientific reasoning in many “conventional” papers too, but I don’t see anyone complaining about those. We wouldn’t be talking about the Mossbridge paper if it wasn’t about “time symmetry”.

      If you want to talk about a paper that definitely seems nonsense that nevertheless seems to stay up (because Frontiers even responded saying it was reviewed and nothing to see here, move along, move along…) you could mention this one:

      Look at the PubPeer thread – it’s hilarious and disturbing at the same time:

  5. Sam, I disagree. The article you linked to is rather nonsensical, but I think it suffers from the same problem as the Mossbridge article: an analysis method is applied seemingly consistently (whatever systems science analysis is) and the result is interpreted without much question of what the input was or whether the method even worked successfully. I think this is akin to a topic you have discussed, that we should not replace scientific decision making with statistical decision making (a poor paraphrasing, I know 🙂 ).

    I agree, though, that the meta-analysis reports on its findings; those findings are questionable not by virtue of sloppy methods but because of the original content that formed the basis of the meta-analysis. If I feed garbage into a meta-analysis, it does nothing to improve the look or feel of that garbage; it merely reports on the consistency of said garbage (to varying degrees). When you compound this issue with the possible interpretations floated, you end up with even smellier garbage. To quote Holmes: “How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth”. The problem is that in this case, and many others, their interpretation fails to eliminate the impossible.

  6. I should also have mentioned that I am more than happy to denounce any “conventional” paper that has dubious interpretations, some of which get published in Frontiers and others in Nature; here is a good recent example:

    Which has many issues: methodological, operational and interpretational in nature (no pun intended).

    What annoys me and makes things even worse is that the Mossbridge paper is an excellent example of where these same issues occur, but they are compounded further by the topic, and Frontiers seemingly does nothing.

    • Thanks for clarifying. I still think that, based on the evidence as it stands, the Mossbridge meta-analysis doesn’t appear to justify outright retraction (the 2014 review is a different issue – I’m not entirely sure what the point of that one is). There may be issues with it that do, but nobody has voiced them in the literature yet. But I think their analysis and conclusions should be challenged (which I have tried to do).

      Anyway, to come back to the real point of this debate, all would look very different here if we had insight into the peer reviews. I for one would very much like to see what level of scientific scrutiny the Mossbridge study was subjected to. How that “System Science Analysis” one got through peer review at all remains a complete mystery to me…

  7. Apparently this stream of opinions is one-sided, probably because those who review for many of the Frontiers journals do not follow this stream. They just review, read the papers and cite them when it is appropriate.
    I have been reviewing regularly for the neuroscience and neuroendocrinology editors for about five years, and my experience is excellent.
    Many of the manuscripts were good or reasonable, the authors’ corrections were timely, and in cases where rejection was needed I had no problem recommending rejection. I simply do not understand why people mention difficulty with rejections in these forums.
    The editor rejected all the manuscripts where the two reviewers agreed that rejection was needed. Besides, if you do not agree with your editor, you can easily opt out, which I never had reason to do.

    • Thanks for your positive story, but I definitely don’t agree that the people responding to these polls don’t review for Frontiers. Many of my followers and colleagues report actively reviewing and even editing for the journal. I myself used to edit at Frontiers. Dismissing these concerns as merely one-sided is, I think, extremely misguided. I grant you that my followers are heavily weighted towards neuroscientists and psychologists, and that it would be interesting to compare perceptions in other fields. A defensive response (not saying you have one) won’t help anything.

    • Not my experience as a Frontiers review editor at all. For several papers, I only had the choice of withdrawing from the review process completely to make my fundamental problems with a submitted paper known — a “reject” option simply did not exist for a long time. I always explained my reasons in a separate communication to the editor and asked that my withdrawal be counted as a “reject”. Maybe Frontiers is changing this now, and I think it would be a step in the right direction. When I look at Frontiers more generally, my sense is that there are some excellent papers being published in my field of expertise (psychology), and that it has its share of crappy papers, too. But which journal doesn’t? I view it as a plus for Frontiers that editors and reviewers are named, although I completely agree with the previous concerns about putting the blame for bad papers on reviewers and thus tainting their reputation. That’s why I wanted to see published reviews from the get-go, just like some of you here request. Another plus is the post-pub commentary option — if used properly, this could go a long way towards post-publication review. Finally, I would strongly argue for a publication model that sifts through the rubbish first and only publishes those papers that are above a certain threshold in terms of scientific quality. If we posted everything that is submitted and then left it to post-posting peer review to decide whether there are any merits to the paper, nobody would take a second look at the journal. You’d be searching for a needle in a haystack when looking for truly informative papers.

  8. I have published in Frontiers, and used to review there, until they started spamming me with endless automated invitations to review manuscripts that were completely outside my area of competence. It occurred to me that if I was being offered the chance to adjudicate on papers in cultural psychology, then presumably my own work on perceptual psychophysics was being sent to cultural psychologists to review; this did not inspire confidence.

    I wrote asking them to get their act together. “No,” was the response; “we have entered false information about your expertise into our reviewer database, so it is now your responsibility to go there and enter correct data.”

    I had a better idea, or at least one which involved less effort for me, which is to stop reviewing manuscripts for Frontiers.

  9. I requested that Frontiers remove my “Loop” profile a while ago, and I will not submit to them or review for them. When Frontiers was started, they threw lavish (and clearly hugely expensive) parties at the annual Society for Neuroscience meeting, all the while denouncing established publishers as being greedy and having biased peer-review systems. The problems with Frontiers’ alternative peer-review system have been outlined elsewhere. So what really ticked me off is the fact that no sooner did their journals start having some success (and, presumably, generating a nice profit) than they sold out to one of those much-maligned “mainstream” publishers…

  10. For me, this was the key part:

    This implies that it is very important for each of you to spend a few minutes editing the Expertise keywords on your Loop profile page. Most of these keywords were automatically collected within your publications, and they may not reflect your true area of expertise. Inappropriate expertise keywords are one of the main reasons why you receive inappropriate reviewing invitations!

    If the scripts and software that scrape material off the web and put together reviewer profiles are so worthless, then don’t feckin’ use them. Pay someone to maintain the reviewer profiles. Just don’t expect the reviewers to do the publisher’s work for them, for free, on top of all the free reviewing work.
    If Frontiers can’t even be bothered with assessing the reviewers, and expects that to be done for them, what are the publication fees supposed to pay for?

  11. So I have just been trying to figure out how you un-invite yourself from being a Frontiers review editor. Turns out there is no way to do so in your Frontiers/Loop profile or settings (at least that I could find). Upon reading the Frontiers reviewing terms and conditions I noticed the Termination clause. The clause has some standard legalese, but the first paragraph is this:

    “This Agreement is subject to termination by either party for any reason upon at least 10 days’ notice. Upon termination by either party, you agree not to intentionally interfere with or damage, impair or disable the operation of Frontiers, or your specialty [sic] journal.”

    I have never been particularly impressed by the legal force of terms in agreements that apply after the agreement itself has terminated.

  12. Regarding the publishing of reviews, I agree this would be an important step. I recently included the anonymous reviewer comments (rejected after two appeals, no reviewer names expected to be disclosed anywhere) on a pre-print server. When Frontiers learned about this, they said it was a breach of terms…
