Monday, November 21, 2016

PLOS CEO steps down as publisher embarks on “third revolution”

On 31st October, PLOS sent out a surprise tweet saying that its CEO Elizabeth Marincola is leaving the organisation for a new job in Kenya. Perhaps this is a good time to review the rise of PLOS, put some questions to the publisher, and consider its future.

PLOS started out in 2001 as an OA advocacy group. In 2003, however, it reinvented itself as an open access publisher and began to launch OA journals like PLOS Biology and PLOS Medicine. Its mission: “to accelerate progress in science and medicine by leading a transformation in research communication.” Above all, PLOS’ goal was to see all publicly-funded research made freely available on the internet.

Like all insurgent organisations, PLOS has over the years attracted both devoted fans and staunch critics. The fans (notably advocates for open access) relished the fact that PLOS had thrown down a gauntlet to legacy subscription publishers, and helped start the OA revolution. The critics have always insisted that a bunch of academics (PLOS’ founders) would never be able to make a fist of a publishing business.

At first, it seemed the critics might be right. One of the first scholarly publishers to attempt to build a business on article-processing charges (APCs), PLOS gambled that pay-to-publish would prove to be a viable business model. The critics demurred and said that in any case the level at which PLOS had set its prices ($1,500) would prove woefully inadequate. Commenting to Nature in 2003, cell biologist Ira Mellman of Yale University, and editor of The Journal of Cell Biology, said: “I feel that PLOS’s estimate is low by four- to sixfold.”

In 2006, PLOS did increase the fees for its top two journals by 66% (to $2,500), and since then the figure has risen to $2,900. While this is neither a four- nor a sixfold increase, we must doubt that these prices would have been enough to make an organisation with PLOS’ ambitions viable. In 2008 Nature commented, “An analysis by Nature of the company’s accounts shows that PLOS still relies heavily on charity funding, and falls far short of its stated goal of quickly breaking even through its business model of charging authors a fee to publish in its journals. In the past financial year, ending 30 September 2007, its $6.68-million spending outstripped its revenue of $2.86 million.”

Wednesday, October 05, 2016

Institutional Repositories: Response to comments

The introduction I wrote for the recent Q&A with Clifford Lynch has attracted some commentary from the institutional repository (IR) and open access (OA) communities. I thank those who took the time to respond. After reading the comments the following questions occurred to me.

1.     Is the institutional repository dead or dying?

Judging by the Mark Twain quote with which COAR’s Kathleen Shearer headed her response (“The reports of our death have been greatly exaggerated”), and by CORE’s Nancy Pontika insisting in her comment that we should not give up on the IR (“It is my strong belief that we don’t need to abandon repositories”), people might conclude that I had said the IR is dead.

Indeed, by the time Shearer’s comments were republished on the OpenAIRE blog (under the title “COAR counters reports of repositories’ demise”) the wording had strengthened – Shearer was now saying that I had made a number of “somewhat questionable assertions, in particular that institutional repositories (IRs) have failed.”

That is not exactly what I said, although I did quote a blog post by Eric Van de Velde (here) in which he declared the IR obsolete. As he put it, “Its flawed foundation cannot be repaired. The IR must be phased out and replaced with viable alternatives.”

What I said (and about this Clifford Lynch seemed to agree, as do a growing number of others) is that it is time for the research community to take stock, and rethink what it hopes to achieve with the IR.

It is however correct to say I argued that green OA has “failed as a strategy”. And I do believe this. I gave some of the reasons why I do in my introduction, the most obvious of which is that green OA advocates assumed that once IRs were created they would quickly be filled by researchers self-archiving their work. Yet seventeen years after the Santa Fe meeting, and 22 years after Stevan Harnad began his long campaign to persuade researchers to self-archive, it is clear there remains little or no appetite for doing so, even though researchers are more than happy to post their papers on commercial sites like ResearchGate.

However, I then went on to say that I saw two possible future scenarios for the IR. The first would see the research community “finally come together, agree on the appropriate role and purpose of the IR, and then implement a strategic plan that will see repositories filled with the target content (whatever it is deemed to be).”

The second scenario I envisaged was that the IR would be “captured by commercial publishers, much as open access itself is being captured by means of pay-to-publish gold OA.”

Neither of these scenarios assumes the IR will die, although they do envisage somewhat different futures for it. That said, what they could have in common is a propensity for the link between the IR and open access to weaken. Already we are seeing a growing number of papers in IRs being hidden behind login walls – either as a result of publisher embargoes or because many institutions have come to view the IR less as a way of making research freely available and more as a primary source of raw material for researcher evaluation and/or other internal processes. As IRs merge with Research Information Management (RIM) tools and Current Research Information Systems (CRIS), this darkening of the content in IRs could intensify.

What makes this darkening likely is that the internal processes that IRs are starting to be used for generally only require the deposit of the metadata (bibliographic details) of papers, not the full-text. As such, the underlying documents may not just be inaccessible, but entirely absent.

This outcome seems even more likely in my second scenario. Here the IR is (so far as research articles are concerned) downgraded to the task of linking users to content hosted on publishers’ sites. Again, to fulfil such a role the IR need host only metadata.

2.     So what is the role of an institutional repository? What should be deposited in it, and for what purpose?

As I pointed out in my introduction, there is today no consensus on the role and purpose of the IR. Some see it as a platform for green OA, some view it as a journal publication platform, some as a metadata repository, some as a digital archive, some as a research data repository (I could go on).

It is worth noting here a comment posted on my blog by David Lowe. The reason why the IR will persist, he said, “is not related to OA publishing as such, but instead to ETDs.” Presumably this means that Lowe expects the primary role of the IR to become that of facilitating ETD workflows.

It turns out that ETDs are frequently locked behind login walls, as Joachim Schöpfel and Hélène Prost pointed out in a 2014 paper called Back to Grey: Disclosure and Concealment of Electronic Theses and Dissertations. “Our paper,” they wrote “describes a new and unexpected effect of the development of digital libraries and open access, as a paradoxical practice of hiding information from the scientific community and society, while partly sharing it with a restricted population (campus).”

And they concluded that the Internet “is not synonymous with openness, and the creation of institutional repositories and ETD workflows does not make all items more accessible and available. Sometimes, the new infrastructure even appears to increase barriers.”

In short, the roles that IRs are expected to play are now manifold and sometimes they are in conflict with one another. One consequence of this is that the link between the repository and open access could become more and more tenuous. Indeed, it is not beyond the bounds of possibility that the link could break altogether.

3.     To what extent can we say that the IR movement – and the OAI-PMH standard on which it was based – has proved successful, both in terms of interoperability and deposit levels?

As I said in my introduction, thousands of IRs have been created since 1999. That is undoubtedly an achievement. On the other hand, many of these repositories remain half empty, and for the reasons stated above we could see them increasingly being populated with metadata alone.

Both Shearer and Pontika agree that more could have been achieved with the IR. With regard to OAI-PMH Pontika says that while it has its disadvantages, “it has served the field well for quite some time now.”

But what does serving the field well mean in this context? Let’s recall that the main reason for holding the Santa Fe meeting, and for developing OAI-PMH, was to make IRs interoperable. And yet interoperability remains more aspiration than reality today. Perhaps for this reason most research papers are now located by means of commercial search engines and Google Scholar, not OAI-PMH harvesters – a point Shearer conceded when I interviewed her in 2014.

Of course, if running an IR becomes less about providing open access and more about enabling internal processes, or linking to papers hosted elsewhere, interoperability begins to seem unnecessary.

4.     Do IR advocates now accept that there is a need to re-think the institutional repository, and is the IR movement about to experience a great leap forward as a result?

Most IR advocates do appear to agree that it is time to review the current status of the institutional repository, and to rethink its role and purpose. And it is the Confederation of Open Access Repositories (COAR) that is leading on this.

“The calls for a fundamental rethink of repositories is already being answered!” Tony Ross-Hellauer – scientific manager at OpenAIRE (a member of COAR) – commented on my blog. “See the ongoing work of the COAR next-generation repositories working group.”

Shearer, who is the executive director of COAR (and so presumably responsible for the working group), explains in her response that the group has set itself the task of identifying “the core functionalities for the next generation of repositories, as well as the architectures and technologies required to implement them.”

As a result, Shearer says, the IR community is “now well positioned to offer a viable alternative for an open and community led scholarly communication system.”

So all is well? Not everyone thinks so. As an anonymous commenter pointed out on my blog: “All this is not really offering a new way and more like reacting to the flow. Maybe that has to do with the kind of people working on it, the IR crowd is usually coming from the library field and their job is not to be inventive but to archive and keep stuff save.”

Archiving and keeping stuff safe are very worthy missions, but it is to for-profit publishers that people tend to turn when they are looking for inventive solutions, and we can see that legacy publishers are now keen to move into the IR space. This suggests that if the goal is to create a community-led scholarly communications system, COAR’s initiative could turn out to be a case of shutting the stable door after the horse has bolted.

5.     What is the most important task when seeking to engineer radical change in scholarly communication: articulating a vision, providing enabling technology, or getting community buy-in?

“Ultimately, what we are promoting is a conceptual model, not a technology,” says Shearer “Technologies will and must change over time, including repository technologies. We are calling for the scholarly community to take back control of the knowledge production process via a distributed network based at scholarly institutions around the world.”

Shearer adds that the following vision underlies COAR’s work:

“To position distributed repositories as the foundation of a globally networked infrastructure for scholarly communication that is collectively managed by the scholarly community. The resulting global repository network should have the potential to help transform the scholarly communication system by emphasizing the benefits of collective, open and distributed management, open content, uniform behaviors, real-time dissemination, and collective innovation.”

As such, I take it that COAR is seeking to facilitate the first scenario I outlined. But were not the above objectives those of the attendees of the 1999 Santa Fe meeting? Yet seventeen years later we are still waiting for them to be realised. Why might it be different this time around, especially now that legacy publishers are entering the market for IR services, and some universities seem minded to outsource the hosting of research papers to commercial organisations, rather than work with colleagues in the research community to create an interoperable network of distributed repositories?

What has also become apparent over the past 17 years is that open movements and initiatives focused on radical reform of scholarly communication tend to be long on impassioned calls, petitions and visions, short on collective action.

As NYU librarian April Hathcock put it when reporting on a Force11 Scholarly Commons Working Group she attended recently: “As several of my fellow librarian colleagues pointed out at the meeting, we tend to participate in conversations like this all the time and always with very similar results. The principles are fine, but to me, they’re nothing new or radical. They’re the same things we’ve been talking about for ages.”

Without doubt, articulating a vision is a good and necessary thing to do. But it can only take you so far. You also need enabling technology. And here we have learned that there is many a slip ’twixt the cup and the lip. OAI-PMH has not delivered on its promise, as even Herbert Van de Sompel, one of the architects of the protocol, appears to have concluded. (Although this tweet suggests that he too does not agree with the way I characterised the current state of the IR movement.)

Shearer is of course right to say that technologies have to change over time. However, choosing the wrong one can derail, or significantly slow down, progress towards the objective you are working for.

But even if you have articulated a clear and desirable vision, and you have put the right technology in place, in the generally chaotic and anarchic world of scholarly communication you can only hope to achieve your objectives if you get community buy-in. That is what the IR and self-archiving movements have surely demonstrated.

6.     To what extent are commercial organisations colonising the IR landscape?

In my introduction I said that commercial publishers are now actively seeking to colonise and control the repository (a strategy supported by their parallel activities aimed at co-opting gold open access). As such, I said, the challenge the IR community faces is now much greater than in 1999.

In her response, Shearer says that I mischaracterise the situation. “[T]here are numerous examples of not-for-profit aggregators including BASE, CORE, SemanticScholar, CiteSeerX, OpenAIRE, LA Referencia and SHARE (I could go on),” she said. “These services index and provide access to a large set of articles, while also, in some cases, keeping a copy of the content.”

In fact, I did discuss non-profit services like BASE and OpenAIRE, as well as PubMed Central, HAL and SciELO. In doing so I pointed out that a high percentage of the large set of articles that Shearer refers to are not actually full-text documents, but metadata records. And of the full-text documents that are deposited, many are locked behind login walls. In the case of BASE, therefore, only around 60% of the records it indexes provide access to the full-text.

In addition, many records consist of non-peer-reviewed and non-target content such as blog posts. That’s fine, but this is not the target content that OA advocates say they want to see made open access. Indeed, in some cases a record may consist of no more than a link to a link (e.g. see the first item listed here).

So the claims that these services make about indexing and providing access to a large set of articles need to be taken with a pinch of salt.

It is also important to note that publishers are at a significant advantage here, since they host and control access to the full-text of everything they publish. Moreover, they can provide access to the version of record (VoR) of articles. This is invariably the version that researchers want to read.

It also means that publishers can offer access both to OA papers as well as to paywalled papers, all through the same interface. And since they have the necessary funds to perfect the technology, publishers can offer more and better functionality, and a more user-friendly interface. For this reason, I suggested, they will soon (and indeed some already are) charging for services that index open content, as I assume Elsevier plans to do with the DataSearch service it is developing. This seems to me to be a new form of enclosure of the commons.

Shearer also took me to task for attaching too much significance to the partnership between Elsevier and the University of Florida – in which the University has agreed to outsource access to papers indexed in its repository to Elsevier. I suggested that by signing up to deals like this, universities will allow commercial publishers to increasingly control and marginalise IRs. This is an exaggeration, says Shearer: “[O]ne repository does not make a trend.”

I agree that one swallow does not a summer make. However, summer does eventually arrive, and I anticipate that the agreement with the University of Florida will prove the first swallow of a hot summer. Other swallows will surely follow.

Consider, for instance, that the University of Florida has also signed a Letter of Agreement with CHORUS in a pilot initiative intended to scale up the Elsevier project “to a multilateral, industry effort.”

In addition to Elsevier, publishers involved in the pilot include the American Chemical Society, the American Physical Society, The Rockefeller University Press and Wiley. Other publishers will surely follow.

And just last week it was announced that Qatar University Library has signed a deal with Elsevier that apes the one signed by the University of Florida. I think we can see a trend in the making here.

As things stand, therefore, it is not clear to me how initiatives like COAR and SHARE can hope to match the collective power of legacy publishers working through CHORUS.

Let’s recall that OA advocates long argued that legacy publishers would never be able to replicate in an OA environment the dominance they have long enjoyed in the subscription world. As a result, it was said, as open access commodifies the services they provide, publishers will experience a downward pressure on prices. In response, they will either have to downsize their operations, or get out of the publishing business altogether. Today we can see that legacy publishers are not only prospering in the OA environment, but getting ever richer as their profits rise – all at the expense of the taxpayer.

But let me be clear: while I fear that legacy publishers are going to co-opt both OA and IRs, I would much prefer they did not. Far better that the research community – with the help of non-profit concerns – succeeded in developing COAR’s “viable alternative for an open and community led scholarly communication system.”

So I applaud COAR’s initiative and absolutely sign up to its vision. My concern is that, as things stand, that vision is unlikely to be realised. For it to happen, I believe more dramatic changes would be needed than the OA and IR movements appear to assume, or are working towards.

7.     Will the IR movement, as with all such attempts by the research community to take back control of scholarly communication, inevitably fall victim to a collective action dilemma?

Let me here quote Van de Sompel, one of the key architects of OAI-PMH. Van de Sompel, I would add, has subsequently worked on OAI-ORE (which Lynch mentions in the Q&A) and on ResourceSync (which Shearer mentions in her critique).

In a retrospective on repository interoperability efforts published last year Van de Sompel concluded, “Over the years, we have learned that no one is ‘King of Scholarly Communication’ and that no progress regarding interoperability can be accomplished without active involvement and buy-in from the stakeholder communities. However, it is a significant challenge to determine what exactly the stakeholder communities are, and who can act as their representatives, when the target environment is as broad as all nodes involved in web-based scholarship. To put this differently, it is hard to know how to exactly start an effort to work towards increased interoperability.”

The larger problem here, of course, is the difficulties inherent in trying to get the research community to co-operate.

This is the problem that afflicts all attempts by the research community to, in Shearer’s words, “take back control of the knowledge production process.” What inevitably happens is that they bump up against what John Wenzler, Dean of Libraries at California State University, has described as a “collective action dilemma”.

But what is the solution? Wenzler suggests the research community should focus on trying to control the costs of scholarly communication. Possible ways of doing this, he says, could include requiring pricing transparency and lobbying for government intervention and regulation: “[T]he government can try to limit a natural monopoly’s ability to exploit its customers by regulating its prices instead.”

He concedes however: “Currently, the dominant political ideology in Western capitalist countries, especially in the United States, is hostile to regulation, and it would be difficult to convince politicians to impose prices on an industry that hasn’t been regulated in the past.”

He adds: “Moreover, even if some kind of International Publishing Committee were created to establish price rates, there is a chance that regulators would be captured by publisher interests.”

It is worth recalling that while OA advocates have successfully persuaded many governments to introduce open access/public access policies, this has not put control of the knowledge production process back into the hands of the research community, or reduced prices. Quite the reverse: it is (ironically) increasing the power and dominance of legacy publishers.  

In short, as things stand if you want to make a lot of money from the taxpayer you could do no better than become a scholarly publisher!

I don’t like being the eternal pessimist. I am convinced there must be a way of achieving the objectives of the open access and IR movements, and I believe it would be a good thing for that to happen. Before it can, however, these movements really need to acknowledge the degree to which their objectives are being undermined and waylaid by publishers. And rather than just repeating the same old mantras, and recycling the same visions, they need to come up with new and more compelling strategies for achieving their objectives. I don’t claim to know what the answer is, but I do know that time is not on the side of the research community here.

Thursday, September 22, 2016

Q&A with CNI’s Clifford Lynch: Time to re-think the institutional repository?

Seventeen years ago 25 people gathered in Santa Fe, New Mexico, to discuss ways in which the growing number of e-print servers and digital repositories could be made interoperable. 

As scholarly archives and repositories had begun to proliferate a number of issues had arisen. There was a concern, for instance, that archives would needlessly replicate each other’s content, and that users would have to learn multiple interfaces in order to use them. 
It was therefore felt there was a need to develop tools and protocols that would allow repositories to copy content from each other, and to work in concert on a distributed basis.

With this aim in mind those attending the New Mexico event – dubbed the Santa Fe Convention for the Open Archives Initiative (OAI) – agreed to create the (somewhat wordy) Open Archives Initiative Protocol for Metadata Harvesting, or OAI-PMH for short.

Key to the OAI-PMH approach was the notion that data providers – the individual archives – would be given easy-to-implement mechanisms for making information about what they held in their archives externally available. This external availability would then enable third-party service providers to build higher levels of functionality by using the metadata harvesting protocol.
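The mechanism described above is deliberately simple: a service provider harvests by issuing plain HTTP GET requests to a data provider, which answers with XML metadata records. The sketch below (a minimal illustration in Python, not a production harvester; the repository base URL and sample response are invented for the example) shows the shape of an OAI-PMH ListRecords request and how a harvester might pull Dublin Core titles out of the response.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Hypothetical data-provider endpoint; real repositories publish their own.
BASE_URL = "https://repository.example.edu/oai"

def list_records_url(metadata_prefix="oai_dc", from_date=None):
    """Build an OAI-PMH ListRecords request URL."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if from_date:
        params["from"] = from_date  # selective (incremental) harvesting
    return BASE_URL + "?" + urlencode(params)

# A stripped-down sample of the XML a data provider would return.
SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>An Example Preprint</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def extract_titles(xml_text):
    """Pull the dc:title of each harvested record out of a response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in
            root.iter("{http://purl.org/dc/elements/1.1/}title")]

print(list_records_url(from_date="2016-01-01"))
print(extract_titles(SAMPLE_RESPONSE))
```

The point of the design is visible even in this toy: the data provider only has to expose its metadata at a URL, and all the harder work – aggregation, search, deduplication – is left to third-party service providers.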

The repository model that the organisers of the Santa Fe meeting had very much in mind was the physics preprint server arXiv. This had been created in 1991 by physicist Paul Ginsparg, who was one of the attendees of the New Mexico meeting. As a result, the early focus of the initiative was on increasing the speed with which research papers were shared, and it was therefore assumed that the emphasis would be on archiving papers that had yet to be published (i.e. preprints).

However, amongst the Santa Fe attendees were a number of open access advocates. They saw OAI-PMH as a way of aggregating content hosted in local – rather than central – archives. And they envisaged that the archived content would be papers that had already been published, rather than preprints. These local archives later came to be known as institutional repositories, or IRs.

In other words, the OA advocates present were committed to the concept of author self-archiving (aka green open access). The objective for them was to encourage universities to create their own repositories and then instruct their researchers to deposit in them copies of all the papers they published in subscription journals. 

As these repositories would be on the open internet outside any paywall the papers would be freely available to all. And the expectation was that OAI-PMH would allow the content from all these local repositories to be aggregated into a single searchable virtual archive of (eventually) all published research.

Given these different perspectives there was inevitably some tension around the OAI from the beginning. And as the open access movement took off, and IRs proliferated, a number of other groups emerged, each with their own ideas about what the role and target content of institutional repositories should be. The resulting confusion continues to plague the IR landscape.

Moreover, today we can see that the interoperability promised by OAI-PMH has not really materialised, few third-party service providers have emerged, and content duplication has not been avoided. And to the exasperation of green OA advocates, author self-archiving has remained a minority sport, with researchers reluctant to take on the task of depositing their papers in their institutional repository. Given this, some believe the IR now faces an existential threat. 

In light of the challenging, volatile, but inherently interesting situation that IRs now find themselves in, I decided recently to contact a few of the Santa Fe attendees and put some questions to them. My first two approaches were unsuccessful, but it was third time lucky when Clifford Lynch, director of the Washington-based Coalition for Networked Information (CNI), agreed to answer my questions.

I am publishing the resultant Q&A today. This can be accessed in the pdf file here.

As is my custom, I have prefaced the interview with a long introduction. However, those who only wish to read the Q&A need simply click on the link at the head of the file and go directly to it. 

Monday, July 25, 2016

What quality controls are utilised by PLOS ONE when selecting reviewers? Who is deemed eligible?

In 2011, I expressed concern about the PLOS ONE business model and its associated review process. My worries were focused on the use of what some have called light or “lite” peer review, and the “pay-to-publish” system used by PLOS ONE (and now by many other publishers). My worries were subsequently recorded on the PLOS ONE Wikipedia page.

Recent personal experience has increased my concern, and left me wondering about the way in which reviewers are recruited by PLOS ONE. 

On 12th July, I received an email from a PLOS ONE academic editor inviting me to peer review a paper. I won’t say what the paper was entitled, or who the authors were, but it was on the topic of open access journals.

Since I am a blogger/journalist rather than an academic I was surprised to receive the invitation, and emailed PLOS ONE with the following question: “I have had an invitation to review the above paper. Can you point me to the rules on the eligibility of PLOS ONE reviewers?”

I received the following (I assume boilerplate) reply:

This did not address my question, so I also emailed the academic editor whose name had been at the bottom of the invitation. I am not going to name him, but I will say that he is based at Universidad de Las Palmas de Gran Canaria. “Was it you who sent me an invitation to review the above paper?” I asked. “If so, I am wondering why you invited me. Can you say?”

I received the following response:

Grunt work

Let me be quite clear at the outset: I had and have no interest whatsoever in reviewing this or any other scholarly work, not least because there is absolutely no incentive for me to devote my time to reviewing papers. Moreover, the one time I did agree to review anything for an academic journal (an editorial rather than a paper), my suggestions were all rejected on the grounds that “the author says he is too busy to make the changes you suggested.” Clearly I had not made very good use of my time!

But as I say, my first response on receiving the PLOS ONE invitation was to wonder whether it is inappropriate for non-academics to review scholarly papers.

With these thoughts in mind I tweeted the invitation under the strapline “PLOS ONE invites journalist to review scholarly paper”. Somewhat to my surprise, everyone who responded said that they saw no problem with my reviewing a scholarly paper on open access (although it could not presumably be defined as “peer” review). Their reasoning was that they are confident that I have the necessary expertise. And Roger Schonfeld commented, “I’d like to see expertise welcomed into the scholarly conversation without regard to academic affiliation.”

On reading these responses I recalled that some OA advocates maintain that an important benefit of OA is that it encourages members of the public to take a greater interest in science, and to even take part in the process themselves – by means of “citizen science”.

Tuesday, July 19, 2016

SocArXiv debuts, as SSRN acquisition comes under scrutiny

The arrival of a new preprint server for the social sciences called SocArXiv comes just a month after news that Elsevier is acquiring the Social Science Research Network (SSRN), a preprint repository and online community founded in 1994 by two researchers. 

Given the concern and disappointment expressed over the SSRN purchase by researchers, it is no surprise that the launch of SocArXiv has been very well received. Still smarting from Elsevier’s 2013 acquisition of Mendeley – another formerly independent service for managing and sharing scholarly papers – many (especially OA advocates) were appalled to hear that the publisher has bought a second OA asset. The reasons for this were encapsulated in a blog post by University of Iowa law professor Paul Gowder entitled “SSRN has been captured by the enemy of open knowledge”.

This concern has also attracted the attention of the Federal Trade Commission (FTC) which has launched a review of the SSRN purchase. The FTC is currently contacting many institutions and experts in scholarly publishing to assess the implications of the acquisition, presumably in order to decide whether it needs to intervene in some way.

Elsevier is understandably keen to downplay the interest the US government is showing in its latest acquisition. “The Federal Trade Commission is conducting a routine, informal review of our acquisition of the Social Science Research Network,” vice president and head of global corporate relations at Elsevier Tom Reller emailed me. “Elsevier’s interest in SSRN is and has been about SSRN’s ethos, a place where it is free to upload, and free to download. We are working cooperatively with the FTC, and believe that the review will conclude favourably.”

In other words, Elsevier does not believe the FTC’s interest in its purchase will lead to a formal investigation.

But however timely SocArXiv’s launch may be, the service is not a response to the SSRN acquisition, Philip Cohen, director of the new service and professor of sociology at the University of Maryland, assured me. “We were already in planning before we heard about the SSRN purchase.”

That said, the fledgling service is clearly keen to ride the wave of discontent that Elsevier’s latest move has engendered: it has begun to host preprints by means of a temporary arrangement until the platform’s construction is completed.

So what is SocArXiv? As the name suggests, it is modelled on the physics preprint server arXiv, and describes itself as a free, open access, open source archive for social science research. Authors are able to upload their preprints to the service and make them freely available to all. The papers will be provided with permanent identifiers to allow them to be linked to the latest version, or to versions published elsewhere. They can also be made available under Creative Commons licences, and analytics data will be provided to show how often papers have been accessed.

Registration to the service will be free and open to all, regardless of academic affiliation. In addition, registered users will be able to comment on and discuss papers.

A branded service

However, unlike arXiv – which was conceived (and still largely continues to act) as a supplement to the traditional scholarly publishing system – it is hoped that SocArXiv will disrupt the traditional system, and eventually help to supplant it.

Wednesday, June 29, 2016

Open access and Brexit

The UK research community’s response to the recent referendum – in which a majority of 52% voted for the UK to leave the European Union (or “Brexit”) – has been one of horror and disbelief.

This is no surprise, not least because Brexit would have a serious impact on research funding in the UK. Nature reports that UK universities currently get around 16% of their research funding from the EU, and that the UK currently hosts more EU-funded holders of ERC grants than any other member state. Elsewhere, Digital Science has estimated that the UK could lose £1 billion in science funding if the UK government does not make up the shortfall in EU-linked research funds.

And a recent Outsell report noted that EU research funds accounted for, on average, more than £900m of funding each year between 2009 and 2015, or the equivalent of one-third of the competitive funding provided by the UK’s research agencies.

But what are the implications of Brexit for open access? Given the highly volatile situation the UK now finds itself in we cannot say anything for certain. However, any squeeze on funding will surely be detrimental to current plans to migrate scholarly publishing from a subscription to an open access system.

It is, after all, generally agreed that the transition to open access will require additional funding, if only in the short term. To this we should add that the UK has been one of the main advocates for open access within the EU, and globally.

Meanwhile, the other major advocate for open access in Europe – The Netherlands – is about to give up its Presidency of the EU. During their Presidency, the Dutch managed to persuade member states to agree to a commitment to make all scientific papers freely available by 2020.

There was always scepticism as to how achievable the EU goal is, but Brexit would seem to make it much less achievable. As The Wellcome Trust’s Robert Kiley points out in response to the questions I posed about OA and Brexit, “The EU recently set a target of 100 per cent OA by 2020. How this was to be achieved was unclear, but without the UK at the table arguments in favour of gold will be less vocal.”

So what do OA advocates think about the current situation? Below long-time proponent of open access, and Professor of Structural Biology at Imperial College London, Stephen Curry offers some thoughts.

As an indication of just how uncertain the situation is, note that Stephen asked me to preface his answers with this statement: “Readers should please bear in mind that my answers have been composed while I am still keenly feeling the pain and confusion surrounding the outcome of the EU referendum. Ask me again tomorrow and my answers could well be different.”

The interview begins …

RP: I realise that Brexit raises more pressing issues for UK academics than any impact it might have on the transition to open access, and that much still remains uncertain about how Brexit will play out, but what in your view would be the likely implications for open access if the kind of fears expressed in, for instance, this piece in Physics Today were to be realised?

SC: You’re right that there are many more pressing issues – loss of funding, loss of a leadership role within EU research programs, loss of influence – to say nothing of the fact that the UK now feels a much more unwelcoming place to students and staff from overseas.

The scale of this seems unimaginable. And everyone is disorientated because it’s clear that there is no plan for Brexit at the moment. It is all still to be worked out.

In the meantime, there is still the hope that the process of leaving will unravel; that the country, seeing the damage being inflicted, will find some way to step back. But even that just adds to the ongoing confusion and uncertainty. With all this going on, it is hard even to think about OA.

RP: Brexiteers say that it is “unlikely that universities will be bereft of funding”. But how confident can we be that Brexit will have little or no impact on university funding, and that, if the UK does suffer economically, funding for open access will not be one of the first casualties?

SC: Brexiters have consistently underplayed the risks and costs of leaving the EU – as has already become plain.

The UK is a net contributor to the EU overall but ‘wins’ in terms of research funding. That will disappear if our subsequent agreement (and when will that be fashioned?) doesn’t include a commitment to freedom of movement, and that doesn’t seem likely right now.

Even if we save on the EU contribution (by no means guaranteed, especially if we want access to the single market), the ongoing decline in the pound, the drop in the stock market and the flight of industries and jobs will likely propel the economy into recession, reducing tax receipts and the possibility that the government will be able to ‘compensate’ UK science for the loss of EU funds. To do so the government would have to demonstrate a commitment to investing in R&D that has not been evident from past settlements of flat cash or, more recently, flat value.

In those circumstances, there could well be pressure on funds for OA. But perhaps that might make RCUK/UKRI get tough on fundees to seek value for money when publishing? They’ve been reluctant to date but these are strange times. Who knows?

RP: In 2013 RCUK anticipated that a full transition to open access would be completed within “around five years” [2018] and that by that time 75% of OA papers would be published as gold OA. It is also now widely accepted that – at least in the short term – a transition to open access will require additional funding (to pay the APCs generally needed for gold OA). If UK research funders and institutions faced a reduction in funding as a result of Brexit might the money needed for a transition to OA no longer be available, or significantly curtailed? If so, what sort of threat would that present for the OA movement?

SC: I think there’s a risk of that. I’m never sure what you mean by “the OA movement” – to me it is a heterogeneous collection of individuals and organisations with diverse emphases on the key articles of the various declarations – but it would pose a challenge to those of us in the UK who advocate the wider adoption of OA for scholarly research. I, for one, am up for that challenge. The argument for OA remains unchanged and the means to achieve it have always been the subject of debate.

RP: One possibility, I guess, is that much greater stress would be placed on green OA. But green OA does not offer any kind of transition to open access does it? And as publishers impose ever more onerous embargo conditions does green OA really offer a realistic long-term solution?

SC: That could be a direction to go in, particularly with the start of the HEFCE policy. I don’t think green OA is the long-term solution, though it’s an effective interim measure. We will have to be vigilant in spotting and calling out extensions to embargo periods – particularly since I have not seen any convincing evidence that they are a cost to publishers.

RP: Presumably there are also implications for the EU. Along with The Netherlands, the UK has been the main driver of OA at a European level. As a result, in April the Amsterdam Call for Action on Open Science set a target of achieving full open access for all scientific publications by 2020 [A goal subsequently agreed on by the ministers of science, innovation, trade, and industry of member states at the May Competitiveness Council]. With the UK set to leave the EU might we see a fall-off in interest in OA within the EU?

SC: The UK has certainly been an influential voice (even if it hasn’t been able to induce many other nations to follow it down the gold-preferring route), but it is by no means the only one as far as I can tell.

The whole Brexit process could well be a huge distraction for the rest of the EU so perhaps the 2020 deadline (rather notional in any case for some?) might slip. But perhaps not – the HEFCE policy should be effective in achieving the aspiration in the UK and that could be an example for others. I’m not as clued into EU machinations as some but I will be sorry that the UK is excluded from EU discussions on OA.

Hope springs eternal: perhaps there are clever and pragmatic ways we might still be involved. There would be no want of volunteers in my view.

RP: While there has been much discussion over what the UK might lose financially as a result of Brexit, the UK is the third largest contributor to the EU budget. If it leaves, therefore, presumably the EU will face a 12.5% fall in its budget. Again, if transitioning to open access will – at least in the short term – cost more, is it not likely that the EU will need to cut its cloth, and that in doing so it will conclude that open access is not as high a priority as it was?

SC: The UK is a net contributor so I guess there will be some contraction of the EU budget. It won’t be 12.5% because the size of the EU would also fall if the UK leaves. The EU has its own economic woes but investors looking to exit the UK could well end up in the EU, so who knows what the net effect will be?

Part of the argument for OA in science and engineering is to disseminate the raw material for developing new technologies, and that hasn’t gone away. If anything the pressures of Brexit could make it seem more needed than ever. Who knows?

RP: What if any implications for OA are there here for those in North America and the rest of the world?

SC: If the UK loses some of its momentum on OA, I wonder if some of the noise it has created around OA might be lost.

RP: What does this all mean for scholarly publishers?

SC: This is a fantastic opportunity for them to demonstrate what they really mean when they talk about being “partners” with the research community. I hope many will seize that opportunity to make a positive contribution to the situation.

Any hint of publishers seeking to take advantage of what is going to be a painful period for the UK research community should be called out and opposed. Like I said, hope springs eternal.

RP: Thank you for taking time to answer these questions.