
Opening Remarks #3: Open Research

In this quite long third episode of Opening Remarks, we take a look at the work that we’re doing in the Library to help foster an open and responsible research environment at The University of Manchester. There are three segments: a conversation with Scott Taylor about the Open Research Programme that he’s leading, a chat with Zoë Fletcher and Eleanor Warren about their contribution to that work, and then some snippets from our recent Open Research Exchange event.

All of this follows a slightly longer than usual preamble from me (Steve) and Clare where we talk about: Nigel Slater, figs, interior design, lockdown life and some other stuff. Skip forward to 11 minutes if you want to give that a miss.

During the episode, we promised to link to some things in the episode notes. Here they are:
The University's Open Research Position Statement
Resources from the Open Research Exchange event – includes slides, video and audio
The Library's resources on Open Research
Follow us on Twitter at @UoMLibResearch

Thanks for listening! Hope you all have a lovely festive season!

Music by Michael Liggins
Artwork by Elizabeth Carlton

Download episode here


Open Access Week 2020

Open Access Week is a busy time of year for us all in Research Services, which is why I (Steve, Research Services Librarian) am writing this blog post on the Monday after Open Access Week has ended. I thought it would be worth writing something anyway, both to reflect on the last few months and to advertise an exciting new event that we're planning for next month (scroll to the bottom of the post if that's what you're here for).

Working remotely

The biggest, and most obvious, change to how we do things in Research Services has been the transition to working remotely. We've all been working from home since the end of March and, without wanting to blow our own trumpets too much, the transition has been a pretty smooth one. While service users can no longer get us on the phone, we are still available via all the usual channels, and we're now also available via the Library Chat widget. Just navigate to the Researcher Services webpages and, when the widget pops out, type in your question.

You can ask us anything about any of our services and we’ll do our best to give you an answer immediately. If that’s not possible, we’ll be able to put you in touch with someone who can help.

At the end of March, we suspected that the amount of work coming through the service would start to drop at some point, with researchers not being able to access labs, or conduct interviews, or anything else that involved leaving the house. That hasn’t happened, however. In fact, if anything, our services have only gotten busier as the months have passed.

Open Access Week 2020

This year's Open Access Week was a low-key one here at the University of Manchester. We did deliver two open access-related My Research Essentials workshops during the week, though. We even attempted to live-tweet our popular Open Access in 5 Simple Steps workshop for those who weren't able to make it.

Opening Remarks

Being at home and using video calling apps all the time provided us with the excuse we needed to record our own open research podcast. Whether that was a good thing or not is up to you. If you’d like to listen to the episodes that have been released so far, you can find us on iTunes, Spotify, Stitcher and other podcast platforms.

Open Research Exchange

If I'm being totally transparent, this entire blog post has been written primarily to give me somewhere to announce our first Open Research Exchange, which is coming up on Wednesday November 18th. We'll be hosting an online exchange of experience, where we'll hear all about how University of Manchester researchers are embedding open research practices. We've got an exciting lineup of speakers to talk about their experiences, but we'd also love you to come along and share yours.

You can find out more about the event, and book yourself a place, via our Eventbrite page. It’s free and online and open to anybody who wants to come along.


Opening Remarks #2: Happy Open Access Week!

In this special Open Access Week edition of the podcast, we present an open research variety show of sorts. We talk to Lucy about open access, transformative deals and coming back to the world of scholarly communications after maternity leave. We get Eleanor and Chris back to talk about open research data and FAIR principles. Then we round things off by talking to Olivia about our Choosing a Credible Journal My Research Essentials session.

Other discussion points include: the CITV show Rosie and Jim, the weather, and Clare becoming very strong.

My Research Essentials
DOAJ (Directory of Open Access Journals)
@UoMLibResearch

Music by Michael Liggins
Artwork by Elizabeth Carlton

Download it here


Opening Remarks: An Open Research Podcast

You might have seen we recently released the first episode of our open research podcast Opening Remarks. This is something we’ve been talking about doing for a while, but the transition to working from home sped things up a little bit. We now spend a lot of our time talking to each other on platforms that enable audio recording, so our feeling was this would be a good opportunity to put that technology to good use.

The idea behind Opening Remarks is simple – we want to have conversations with colleagues from across the University about open research: how it is supported and facilitated, but also how researchers embed open principles in their practice. We want these conversations to be informal, interesting and informative.

Our intention is to record six episodes in this initial series, covering research data, open access, research communications, metrics and lots more besides. We'd be keen to hear from you about what you think we should be talking about, and we'd be even keener to hear from you if you'd like to be a guest! Come and talk to us about the open research that you do!

The first episode is already available on iTunes and, pending successful reviews, should be available on Stitcher, Spotify and Google’s podcast player in the next couple of days. Do give it a listen and let us know what you think! You can contact us on Twitter at @UoMLibResearch or email us at researchdata@manchester.ac.uk

Opening Remarks is hosted by Clare Liggins and Steve Carlton, two Research Services Librarians with very little broadcast experience but lots of enthusiasm.

Steve Carlton
@UoML_Steve

I’ve been a Research Services Librarian at Manchester since January 2019, specialising in open access and research communications. Before I arrived at Manchester I’d been working in open access at several other institutions across the north west, including spells at the University of Liverpool and the University of Salford.

I’m interested in open research and its potential to help researchers reach broader audiences, and outside work I’m into professional wrestling, non-league football, the music of Arthur Russell and the Australian TV soap Neighbours. If I can find a way to talk about any of those things in the podcast, I will.

Clare Liggins
@clarepenelope

I'm a Research Services Librarian in the Research Data Management Team. I've been working at the University since January 2019 (Steve and I started on the same day) and am interested in anything to do with promoting the effective practice of Research Data Management, including training, as well as Open Research more broadly.

My background is in Literature and writing, and before working at the University I was a Law Librarian. Because of this background, I'm also interested in finding ways of helping these disciplines adopt Research Data Management processes more widely.

In my spare time I enjoy reading books about feminist writers, spotting beautiful furniture in films from the 1950s, cooking recipes written by Nigel Slater and making up voices for my cat.


Opening Remarks #1: Research Data Management

In this first episode of Opening Remarks, we talk about the perils of working from home in the summer, then invite our colleagues to talk to us about research data management for an hour. We're joined by Chris, Eleanor and Bill to cover the complexities of supporting research data management across disciplines and the joys of checking data management plans, and we talk up some of the services we offer. We also get a bit excited talking about the impending arrival of an institutional data repository.

Music by Michael Liggins
Artwork by Elizabeth Carlton

The Library’s research data webpages
https://www.library.manchester.ac.uk/using-the-library/staff/research/research-data-management/

That data costing podcast that Clare mentioned
https://blog.research-plus.library.manchester.ac.uk/2020/05/21/podcast-costing-research-data-at-the-university-of-manchester/

Email us
researchdata@manchester.ac.uk

Tweet us
@UoMLibResearch

Download it here.


How data services can support a FAIR data culture: insights from IDCC 2020

This year I was delighted to attend and present a poster at IDCC 2020, which put together a truly thought-provoking line-up of speakers and topics, as well as a number of wonderful opportunities to sample some of Dublin’s cultural attractions. Even more than the delights of the “fair city”, I was especially interested in one important theme of the conference which explored supporting a FAIR data culture. Inspired by the many valuable contributions, this post outlines some of the key insights presented on this topic.

An excellent hook around which to frame this review is, I think, offered by the figure below capturing results from the FAIRsFAIR open consultation on Policy and Practice, which was one focus of Joy Davidson’s illuminating overview of developments in this area. The top factor influencing researchers to make data FAIR, when we take both positive points on the scale together, is the level of support provided.

[Figure: results from the FAIRsFAIR open consultation on factors influencing researchers to make data FAIR. Source: FAIRsFAIR report]

So, let’s take a closer look at some of the key developments and opportunities for data services to enhance support for FAIR culture, bearing in mind of course that, when it comes to shaping service developments, local solutions must be informed by local contexts taking into account factors such as research strategy, available resources and service demand.

Enhancing the FAIR Support Infrastructure

That making data FAIR is an endeavour shared by researchers and data services was neatly illustrated by Sarah Jones. Her conclusion that equal, if not more, responsibility lies with data services gives cause to reflect on where and how we may need to raise our capabilities.

Let’s look here at three areas of opportunity for developing our support mechanisms around data stewardship, institutional repositories, and training.

Professionalising Data Stewardship

In 2016, Barend Mons predicted that 500,000 data stewards would need to be trained in Europe over the following decade to ensure effective research data management. Given this sort of estimate, it’s clear that our ability to build and scale data stewardship capability will be critical if we agree that data stewardship and data management skills are key enablers for research. Two particularly interesting developments in this area were presented.

Mijke Jetten outlined one project that examined the data steward function in terms of tasks and responsibilities, and the competencies required to deliver on these. The objective is a common job description, which then offers a foundation from which to develop a customised training and development pathway – informed of course by FAIR data principles, since alignment with FAIR is seen as a central tenet of good data stewardship. Although the project focused on life sciences in the Netherlands, its insights are highly transferable to other research domains.

Equally transferable is the pilot project highlighted by the Conference’s “best paper” from Virginia Tech, which described an innovative approach to addressing the challenge of effectively resourcing support across the data lifecycle in the context of ever-growing demand for services. Driven by the University Libraries, the DataBridge programme trains and mentors students in key data science skills to work across disciplines on real-world research data challenges. This approach not only provides valuable and scalable support for the research process, but serves also to develop data champions of the future, skilled and invested in FAIR data principles.

Leveraging Institutional Data Repositories

As a key part of the research data infrastructure, it's clear that institutional data repositories (IRs) have an important role to play in promoting FAIR. Of course, researcher engagement and expertise are crucial to this end – we rely on researchers to create good metadata and documentation that will facilitate discovery and re-use of their data.

In terms of fostering engagement, inspiring trust in an IR would seem to be an important fundamental, and formal certification is one way to build researchers’ confidence that their data will be well-cared for in the longer term by their repository. Ilona von Stein outlined one such certification framework, the CoreTrustSeal, which seems particularly useful since there’s a strong overlap between its requirements and FAIR principles. In terms of enhancing a repository’s reputation, one important post-Conference development worth noting is the recent publication of the TRUST Principles for digital repositories which offers a framework for guiding best practice and demonstrating IR trustworthiness.

Ilona also pointed to ongoing developments in terms of tools to support pre- and post-deposit assessment of data FAIRness. SATIFYD, for example, is an online questionnaire that helps researchers evaluate, at pre-deposit stage, how FAIR their dataset is and offers tips to make it more so. Developed by DANS, a prototype of this manual self-assessment tool is currently available with plans in the offing to enable customisation for local contexts and training. One to watch out for too is the development of a post-publication, automated evaluation tool to assess datasets for their level of FAIRness over time and create a scoring system to indicate how a given dataset performs against a set of endorsed FAIR metrics.
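
To make the scoring idea concrete, here's a toy sketch of what an automated FAIRness check might look like. To be clear, this is not SATIFYD or the DANS tool described above: the checks, field names and example values are invented purely to illustrate how simple metadata tests can be rolled up into a score.

from dataclasses import dataclass, field


@dataclass
class DatasetRecord:
    doi: str = ""                # persistent identifier
    licence: str = ""            # explicit licence statement
    access_url: str = ""         # resolvable landing page
    file_formats: list = field(default_factory=list)
    keywords: list = field(default_factory=list)


OPEN_FORMATS = {"csv", "json", "txt", "xml", "tiff"}

# Each check is a yes/no test against one FAIR-flavoured expectation.
CHECKS = {
    "has_persistent_id": lambda r: bool(r.doi),                                        # Findable
    "has_rich_keywords": lambda r: len(r.keywords) >= 3,                               # Findable
    "has_access_url": lambda r: bool(r.access_url),                                    # Accessible
    "uses_open_formats": lambda r: bool(r.file_formats) and set(r.file_formats) <= OPEN_FORMATS,  # Interoperable
    "has_licence": lambda r: bool(r.licence),                                          # Reusable
}


def fair_score(record):
    """Return the fraction of checks passed, between 0.0 and 1.0."""
    passed = sum(1 for check in CHECKS.values() if check(record))
    return passed / len(CHECKS)


record = DatasetRecord(
    doi="10.0000/example.0001",                       # hypothetical DOI
    licence="CC-BY-4.0",
    access_url="https://doi.org/10.0000/example.0001",
    file_formats=["csv"],
    keywords=["soil", "ph", "survey"],
)
print(f"FAIR-style score: {fair_score(record):.0%}")  # prints: FAIR-style score: 100%

A real tool would, of course, test against community-endorsed FAIR metrics and track the score over time, but the basic pattern of metadata checks feeding a score is the same.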

Another fundamental to think about is how skilled our researchers may or may not be when it comes to metadata creation as well as their level of tolerance for this task. Joao Castro made the point that researchers typically regard spending more than 20 minutes on this activity as time-consuming.

This observation came out of a project at the University of Porto to engage researchers in the RDM process and underlines the need to think creatively about how we, as data professionals, can enhance the support we offer. Joao described how the provision of a consultancy-type service had been explored to support researchers in using domain-specific metadata to describe their data. Underpinned by DENDRO, an open-source collaborative RDM platform, this service was well received by researchers across a range of disciplines and served to develop their knowledge / skills in metadata production, as well as raising FAIR awareness more broadly.

Maximising Training Impact

Of course, beyond raising awareness it's clear that the upskilling of researchers through curriculum development and training is an essential step on the road to FAIR – a key question, however, is how to make the most of our training efforts.

Daniel Bangert helpfully summarised findings from a landscape analysis of FAIR in higher education institutions and recommended focusing FAIR training initiatives on early career researchers (ECRs). This would seem to be a particularly powerful approach for affecting ‘ground up’ culture change, since ECRs are typically involved in operational aspects of research and will become the influential researchers of tomorrow.

This same report suggests that training and communication regarding FAIR should be couched within the wider framework of research integrity and open research. Framing data management training initiatives in this way provides important context and pre-empts the risk that it will be seen purely as a compliance issue.

As an interesting aside, an extensive research integrity landscape study, commissioned by UK Research and Innovation and published post-Conference, identified ‘open data management’ as the overall most popular topic for future training – a useful channel perhaps then through which to deliver and extend reach in the UK context at least.

Both Daniel and Elizabeth Newbold highlighted the need to draw on and share best practices and existing materials, where available. Participants in subsequent workshop discussions strongly agreed with this sentiment but noted the challenges in finding and/or repurposing existing FAIR training, guidance and resources, e.g. for a specific audience or level of knowledge. Indeed, it would seem sensible that FAIR principles should be applied to FAIR training materials!

In this regard, a helpful starting point might perhaps be this recent PLOS article – Ten simple rules for making training materials FAIR. Going forward, the development of a FAIR Competence Centre, with a key focus on supporting training delivery, will be one to look out for.

[Image: poster presentation at IDCC 2020. Photo: Rosie Higman]

In Conclusion

So, hopefully plenty of food for thought and ideas for practical next steps here to adapt for your local context, wherever you are on the road to FAIR. While the challenges to creating a FAIR data culture are many, broad and complex, we can take heart not only from the many examples of sterling work underway, but also from the highly collaborative spirit across the data services community. In the context of increasing demands on tight resources, this will serve us well as we drive the FAIR agenda.


Podcast: Costing research data at The University of Manchester

The University is purchasing a new costing tool for research projects. To provide some more information about the tool and what it can be used for, we sat down for a conversation about how helpful it will be when costing projects, with a particular focus on research data.

This podcast brings together people working in Research IT, the Research Support Offices and the Research Data Management team in the Library. We talk about the costing tool, the financial implications of proper costing, the viewpoints of various funders on managing costing requirements at the start of a project, and how a data management plan (DMP) can help.

For more information about research data, please see our online resource, Research Data Explained or visit the Library’s Research Data Management website. For any questions please email us.

You can listen to the podcast via this My Research Essentials Medium post, which also includes a transcript of the discussion.


OA+: The End of the Communities of Attention Report

It’s been a year since we launched Open Access+, an enhancement to our open access support services that aims to help University of Manchester researchers raise the visibility of their work. Since March 2019, 397 papers have been opted in, we’ve tweeted over 2,000 times from @UoMOpenAccess and generated 380 unique Communities of Attention reports. You might even have seen Scott Taylor’s excellent UKSG Insights article about the service.

The idea behind the Communities of Attention reports was simple. If a Twitter account is tweeting frequently about papers published in x journal, it’s likely that the account is either a) a bot, b) the journal’s marketing team or, more interestingly, c) someone who is very interested in research in that field. This approach obviously works better for journals with a narrower scope, though there’s a lot to be said for broadening your network. Armed with this information, our researchers could (hopefully) identify the leading voices in their field or at least find some useful accounts to follow.

[Image: an example of a Communities of Attention report]

You might have noticed that I've been talking about the Communities of Attention reports in the past tense. Here's why. The reports were generated via a time-consuming process which involved Python scripts, APIs and lots of manual editing of CSV files. We received some positive feedback and, as there wasn't any other way for our researchers to get this information, we thought the work was worthwhile.
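
For anyone curious about the shape of that process, here's a minimal sketch of the kind of script involved. Our original scripts aren't published, so the endpoint, parameters and field names below are invented placeholders rather than any real altmetrics API; the point is simply the pattern: page through an API for tweets mentioning a journal's papers, tally the accounts, and write the most frequent ones to a CSV.

import csv
from collections import Counter

import requests

MENTIONS_API = "https://api.example.org/mentions"  # placeholder, not a real endpoint
JOURNAL_ISSN = "0000-0000"                         # placeholder ISSN for the journal of interest


def fetch_mentions(issn):
    """Page through the (hypothetical) mentions API for tweets about one journal's papers."""
    mentions, page = [], 1
    while True:
        resp = requests.get(
            MENTIONS_API,
            params={"issn": issn, "source": "twitter", "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            return mentions
        mentions.extend(batch)
        page += 1


def top_accounts(mentions, n=20):
    """Tally which Twitter accounts mention the journal's papers most often."""
    counts = Counter(m["author_handle"] for m in mentions if m.get("author_handle"))
    return counts.most_common(n)


if __name__ == "__main__":
    rows = top_accounts(fetch_mentions(JOURNAL_ISSN))
    with open("communities_of_attention.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["twitter_account", "mention_count"])
        writer.writerows(rows)

Multiply that by every opted-in paper and journal, plus the manual tidying of the resulting spreadsheets, and you can see why the process didn't scale.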


Recently, partly inspired by the work we’ve been doing (we think!), Altmetric introduced the new “Mention Sources” feature. As a result, you’re now able to build your own much more sophisticated Communities of Attention report in just a few clicks. It’s really cool! You can select multiple journals and see which Twitter accounts, news platforms, blogs, etc. mention their papers most frequently. And much more besides. Here’s a short video of the feature in action.

In this video, I search for who’s tweeted most frequently about papers published in the journal Acta Astronautica and then drill down so I can see the top account and the associated tweets.

Rather than replicating what the Altmetric Explorer now does and presenting that information in a spreadsheet, we’ve decided it’d be better to just point our researchers to the Altmetric Explorer. Where we used to include Communities of Attention reports in emails to our researchers, we now include some instructions on making use of the new feature instead.

The Open Access+ service continues to go from strength to strength, however, and moving away from generating and circulating Communities of Attention reports will give us an opportunity to focus on more useful activities that will help our researchers raise the visibility of their work. We have exciting plans for the future that will help us do this. Watch this space!


Love Data Week 2020: What I Talk About When I Talk About Data Repositories

As this is my first post for the Library Research Plus blog I'd like to introduce myself. I'm Bill Ayres, the new Strategic Lead for Research Data Management, based here in the Main Library since January 2020. I previously worked in IT for over fifteen years, most of them in HE, and more recently as Research Data Manager at the University of Salford. I'm passionate about open research in general, and about how it can connect researchers, foster cross-disciplinary projects and deliver real-world benefits for people who may not otherwise have access to scholarly findings, outputs and data (especially data).

One important thing:

Usually, I would link out to various examples and case studies When I Talk About Data Repositories, but I’m going to be more general here. We are currently considering various options that will provide a fully-fledged data repository for the University – this is a very good thing – so in the interests of impartiality and fairness I won’t mention any specific platforms or technology suppliers.

One less important thing:

Apologies to Haruki Murakami for borrowing / mangling his book title for this post. It felt like a good idea when I thought of it late at night, a bit less so now (but I can’t think of a better one).

What good can a good institutional data repository do?

From a system perspective, and for the libraries that run the service, it provides a home for “curated” files, datasets and other resources that support research findings and publications. It shouldn’t be a dumping ground for everything, but a place for these important research assets that allows them to be stored, published and preserved.

With some funders mandating that data supporting publications be made available open access, and others recommending this, a data repository can also provide a straightforward option to ensure that compliance is covered.

It can be a powerful complementary system to the main institutional repository, one which can link to this in many ways and provide an alternative route for discovery and reuse of outputs, but also have its own character and profile.

There are clear and logical integrations that the data repository can have with other useful systems, e.g. the main institutional repository, to connect outputs and data so that there are persistent links between them. There are also opportunities for reporting and metrics that examine the ways people search for and discover data and published outputs, and how these may differ. There is an opportunity, too, to add the home institution's branding to data and create or strengthen an association that may not occur if data is always hosted on external or publisher repositories. These benefits of integration can also extend outwards to researcher-focused platforms, e.g. ORCID, DataCite and similar.
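
As a rough illustration of what those persistent links can look like in practice, the fragment below sketches a dataset record that points back to the article it supports and carries the depositor's ORCID. The field names loosely follow the DataCite metadata schema, but the DOIs and names are hypothetical and this is not a complete or validated record.

# Simplified, illustrative metadata fragment for a dataset record.
dataset_metadata = {
    "doi": "10.0000/dataset.0001",  # hypothetical dataset DOI
    "titles": [{"title": "Survey data supporting 'Example article'"}],
    "creators": [
        {
            "name": "Surname, Given",
            "nameIdentifiers": [
                {
                    "nameIdentifier": "https://orcid.org/0000-0000-0000-0000",
                    "nameIdentifierScheme": "ORCID",
                }
            ],
        }
    ],
    # The persistent link between data and output: this dataset supplements
    # the published article, and the relationship can be followed either way.
    "relatedIdentifiers": [
        {
            "relatedIdentifier": "10.0000/article.0001",  # hypothetical article DOI
            "relatedIdentifierType": "DOI",
            "relationType": "IsSupplementTo",
        }
    ],
}

Because both records carry DOIs and the relationship is expressed in the metadata itself, the link survives regardless of which platform a reader starts from.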

And from a security and administration perspective, implementing an institutional data repository can help ensure that research data is safe, secure, covered by ethics and related policies, and also can be subject to review or checking where appropriate.

What I’ve talked about so far – a focus on integration, compliance, security, and review processes – is all great from the point of view of the institutional teams who “own” the data repository, and we need these to effectively manage and support it. But experience tells us that any system or service intended, primarily, for use by researchers and academics has to provide real-world benefits to them, and *crucially* be easy to use, or they will not utilise it or engage with the service offering it relates to. That adds up to a wasted investment for the institution, and a missed opportunity to give researchers a great platform.

So what should the institutional data repository be doing for its primary users, researchers?

Alongside the ease of use mentioned, it can fill a fundamental gap by providing a platform to publish data more quickly, easily and effectively than via other routes. For a long time, efforts have focused on publishing research outputs as the final part of the lifecycle. But a good data repository can facilitate a "just in time" ability to make data available to a wider audience throughout the research lifecycle. Adopting a light-touch approach to curation of data deposits means that researchers can choose to share initial data, illustrate novel mid-project findings with relevant datasets and, looking past their standard data types, share conference resources like posters, slides or videos.

Talking about video, a data repository can provide an excellent place to store and showcase file types that can bring research to life: images, audio files, video and film clips, and in some cases there will be functionality that can preview or render 3D models and complex graphical files.

Increasingly data repositories also provide the ability for researchers to create collections of their own (or related) data outputs; a curated selection of datasets that links to similar open access resources created by others in the discipline can provide a resource with great potential for reuse or further investigation.

Researchers often need a place to store data for the longer term too. Funders and institutional policies may mandate a 5-year, 10-year, or even indefinite preservation requirement for research data. It can make good technical and practical sense to integrate digital preservation into a data repository, from straightforward bit-level preservation to more holistic solutions which will automatically convert file types and formats as applications and technology move on. An institutional data preservation option can give researchers peace of mind that their data will survive for the long term.
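
Since "bit-level preservation" can sound abstract, here is a minimal sketch of the fixity checking it rests on: record a checksum for each file at deposit, then periodically recompute and compare. Real preservation systems wrap this in scheduling, audit trails and repair from replica copies; this fragment only illustrates the core check.

import hashlib
from pathlib import Path


def sha256_of(path):
    """Stream a file in 1 MB chunks and return its SHA-256 checksum."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def failed_fixity_checks(manifest):
    """manifest maps file path -> checksum recorded at deposit.
    Returns the paths whose current checksum no longer matches."""
    return [path for path, expected in manifest.items() if sha256_of(path) != expected]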

From a perspective beyond the home institution

As a final thought on this topic, I’d like to reflect back on the principles which are at the heart of open research and open data, in making that data FAIR (Findable, Accessible, Interoperable and Reusable) and Open. Beyond the anticipated audience of researchers and academic investigators, a great data repository can be a powerful gateway for access and reuse by researchers in the developing world, healthcare professionals, or by members of the public. We often forget that the costs of journal subscriptions or other payment models to access outputs and data act as an impassable barrier to institutions or individuals that are unable to pay them. It’s our duty to make as wide a range of research data as possible freely and easily available as this can have benefits that go far beyond the original investigation or discipline.


Supporting Open Access for books: lessons learned

Open Access (OA) books have been a fairly hot topic over recent months. My colleagues and I have responded to various surveys and contributed to UKRI’s review workshop and thought that sharing our experience of facilitating OA books might also be a useful addition to the debate.

Over the past 5 years we've agreed to arrange OA for 27 books. Mostly, these are monographs (25), but there's also one edited volume and one trade book in our list. We have arranged OA for books that are already published and books that are still being written. The stage at which we pay normally determines how much we pay per title, although in the case of our highest Book Processing Charge (BPC) – £12,000 – it was the length of the book that mattered. The lowest BPC we've paid is £2,200, for backlist titles published by Manchester University Press (MUP).

The early requests we received came from authors working on grants from the Arts and Humanities Research Council (AHRC) and the Economic and Social Research Council. Our AHRC-funded author contacted us because her publisher, Oxford University Press (OUP), had asked if she wanted the book to be OA and she wasn’t sure how to respond. Since then a number of authors have been pointed in our direction by their publisher to ask if funding is available for OA.

While we're pleased with the levels of enthusiasm for OA from scholars in the Humanities, and keen to extend our OA support beyond articles and compliance, our funds are limited and we're unable to support all the enquiries about OA books that we receive. To date this hasn't mattered too much, because we haven't received requests from authors submitting their book to a fully Gold publisher, but we're mindful that this could change as awareness of newer publishers, like UCL Press and Open Book Publishers, increases.

We’ve very much ‘learned by doing’ for OA books, just as we did for journal articles back in 2013, and these are some of our learning outcomes.

Liaising with publishers about OA books is very different from journal articles.

Conversations tend to happen between the author and their editor, and when we've tried to intervene on an author's behalf it's been tricky to identify a contact on publisher websites. My enquiry to a general Bloomsbury email address over 12 months ago remains unanswered to this day (!), and the author and I had to wait patiently for her editor to return from holiday to answer our questions. I resorted to a Twitter Direct Message to the University of Michigan Press after struggling to find a general email address and being unsure which listed staff member/role would know the answer to my question. I've even made use of contacts in publisher OA journal teams as an in-road (e.g., "I know this isn't something you can help with, but do you know who can?"). Luckily, the authors we've dealt with seem to accept this state of affairs and are generally happy to facilitate introductions, especially when we've asked them to put questions about OA licensing options to their publisher.

A common conversation with publishers about OA books concerns when we'll pay the BPC. In cases where we've agreed to cover the cost while the book's still being written, we often need to pay well in advance of publication. This is because payments are made either from the OA block grant we receive from UKRI or from our institutional funding. Pressure on the UKRI grant varies year on year, so we want to make payments when we are confident we can afford them. The same is true of our institutional OA fund, but another factor here is that committed expenditure (i.e., unspent funds) can't be carried over into a new budget year.

Some OA books are less discoverable than others.

In our discussions with publishers we’ve not dealt with before (even those via authors!) we ask how the book will be made available as OA. We’re hoping for multiple access points, including the publisher’s website, OAPEN and the Directory of Open Access Books (DOAB). We don’t restrict payment of BPCs on the basis of the answer but we think we should be clearer about our expectations before we agree to cover the cost. When we discussed this with a colleague at MUP, she suggested, “With your institutional fund, you are like a research funder, so you could state your expectations as clearly as, say, the Wellcome Trust does”.

We’ve found variation in how publishers display information about the OA version of a title on their own webpages. Good examples include:

Of the 18 published books we’ve paid BPCs for and that are now OA, 6 aren’t indexed in DOAB and 7 aren’t indexed in OAPEN. One Bloomsbury title on our list is missing a logo on the publisher’s website to identify the book as OA. Potential readers can easily see how to buy the book but don’t get a link to access the version we’ve paid for – see for yourself!

Discoverability in our own search system is also an issue. One of the books we have paid for has a single record in Library Search, which only lists the 3 print copies we hold, all of which are currently out on loan. Other books have multiple records, with the OA and print versions on different records. One of our professors queried how quickly her book would appear in OA format on our catalogue and Copac (recently replaced by Library Hub Discover) after we’d requested OA from her publisher. Because we didn’t know, our metadata experts manually updated records so the OA version was available when the prof wanted to promote it via social media. Making further improvements to our records and increasing the discoverability of our OA content needs a cross-library project, which is on our To Do list for 2020.

We haven’t always got what (we think) we’ve paid for.

Ok, so this bit is where there is definitely overlap between OA books and OA articles…

At the most basic level when we pay a BPC we expect to be able to find an OA copy of the book somewhere, and fairly soon after payment’s been made. But we realised last year that we didn’t know when books we’d paid for would be available as OA: we hadn’t asked in all cases, and publishers hadn’t let us know. We queried what we perceived to be delays following payment for a number of books. We had the response quoted above from the University of Michigan Press which proved very helpful in our understanding, and we followed this up with a really useful discussion with our colleagues at MUP about production processes.

So we know now that there’s no standard process or turnaround time for Gold OA, and it’s helpful to know this so we can better manage expectations of the authors we support.

Whilst preparing this post I’ve noticed that 3 of the books we paid BPCs for in July 2019 – books that are already published – haven’t yet been converted to OA. We’re contacting the publishers (Anthem Press, Bloomsbury, MUP) to check if this is an error or if the conversion process is scheduled and really does take 5 months. We’re also considering asking publishers to provide progress updates to us – the fee payer – on, say, a monthly basis before we commit our funding for a title. And in the meantime? Well, we’re slotting a regular check for monograph updates into our OA workflow, just like we have for journal articles.