Journal format theses are becoming ever more popular,
enabling the incorporation of work suitable for publication in a peer-reviewed
journal. This increase in popularity has led to concerns that some eTheses may
not adhere to publisher self-archiving policies. This is particularly relevant
for us as the University is committed to ensuring as wide an audience as
possible can read and access research outputs and has an Open Access policy
requiring all Postgraduate Research eTheses to be made Open Access no later than
12 months after submission.
We decided to investigate whether this concern was warranted
and determine whether there was a need for our team to increase knowledge of
self-archiving amongst our students. We found a total of 671 journal format
theses had been submitted, with the majority of these (575) from students in
the Faculty of Science and Engineering. Of these, a representative sample of 50
was taken for analysis and we looked at whether the correct version and embargo
period had been used. The results show that 8% of students had included an incorrect
version of the paper and 34% had applied the wrong embargo period.
Following these results we decided to provide additional
guidance on our website to advise students how to make their work Open Access,
while still meeting publisher requirements around self-archiving.
We added a new page explaining additional considerations for
journal format theses and produced a detailed, downloadable guidance document.
This document explains where to find information about the publisher’s
self-archiving policy and how to apply this information. We also created a
decision tree using Typeform which is a more interactive way to determine how
to comply with the publisher policy and also acts as a prompt to ensure students
have obtained all the information they require.
We hope that this new guidance will assist those students
submitting a journal format thesis and minimise the risk that students will
include the wrong article version or apply an incorrect embargo. Of course, students
can always contact us for further support.
Making data more findable is the bedrock of much of research data management and we aim to make this easy and simple for researchers to do in practice. Ever on the lookout for ways to do just this, we were delighted to spot an opportunity to take our University’s data catalogue to the next level.
The data catalogue comprises our CRIS (Pure) Datasets module, which allows researchers to capture details of their datasets, and the public facing portal (Research Explorer), which allows these datasets to be searched. When the data catalogue was originally set up it could be populated either by automated metadata feeds for datasets deposited in the data repository recommended by The University of Manchester, or by manually inputting metadata for datasets deposited in external data repositories. However, recognising that this manual input duplicates effort, is time consuming and requires some familiarity with Pure, we began to think about how we could make this process faster and easier for researchers.
Our solution? A Research Data Gateway.
Gateway to data heaven
The Research Data Gateway service allows researchers to input a dataset DOI to an online form, view the associated metadata to confirm its veracity, and then submit the dataset record to the Library, which populates Pure on the researcher’s behalf. Wherever possible our Content, Collections and Discovery (CCD) team enriches the record by linking it with related research outputs, such as articles or conference proceedings, and the record displays in both Research Explorer and all relevant Researcher Profiles.
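The post doesn’t describe the Gateway’s internals, but the first step — taking a DOI and showing the researcher its metadata for confirmation — can be sketched roughly as below. This is a hypothetical illustration only: the helper names are ours, and we assume the public DataCite REST API as the metadata source (externally deposited datasets commonly carry DataCite DOIs).

```python
import json
import re
import urllib.request

# Loose syntactic DOI pattern (a simplified form of Crossref's
# recommended regex) — a cheap check before any network call.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def is_valid_doi(doi: str) -> bool:
    """True if the string looks like a DOI."""
    return bool(DOI_RE.match(doi.strip()))

def datacite_url(doi: str) -> str:
    """Build the DataCite REST API URL for a dataset DOI."""
    return f"https://api.datacite.org/dois/{doi.strip()}"

def fetch_metadata(doi: str) -> dict:
    """Fetch dataset metadata so the researcher can confirm its
    veracity before submitting (live network call)."""
    if not is_valid_doi(doi):
        raise ValueError(f"Not a DOI: {doi!r}")
    with urllib.request.urlopen(datacite_url(doi)) as resp:
        return json.load(resp)["data"]["attributes"]
```

In practice the real form will also need to handle repositories that register DOIs elsewhere (e.g. via Crossref), so treat this as a sketch of the happy path only.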
The screen capture below illustrates how the Research Data Gateway works in practice from the researcher’s perspective up to the point of submission, a process that usually takes about 15 seconds!
Figure 1: Animated screen capture of Research Data Gateway
In addition to delivering a service that reduces researchers’ workload, the Research Data Gateway increases the discoverability and visibility of externally deposited datasets together with their associated publications. In turn, this increases the likelihood that these research outputs will be found, re-used and cited. Moreover, since most funders and an increasing number of journals require the data that underlies papers to be shared, the Gateway helps researchers reap the maximum reward from this requirement.
The nuts and bolts
As you can see from the above, this is a very straightforward process from the researcher’s perspective but, of course, behind the scenes there’s a little more going on.
As with most successful initiatives, making the Research Data Gateway happen was a truly collaborative effort involving a partnership across the Library’s Research Services (RS), Digital Technologies and Services (DTS) and Content, Collections and Discovery (CCD) teams, and the University’s Pure Support team. And this collaboration continues now in the ongoing management of the service. All Gateway-related processes have been documented and we’ve used a RACI matrix to agree which teams would be Responsible, Accountable, Consulted and Informed for any issues or enquiries that might arise.
Some technical challenges and work-arounds
As might be expected, we encountered a number of small but challenging issues along the way:
Datasets may be associated with tens or even hundreds of contributors, which can make these records time-consuming to validate; this was a particular problem for high-energy physics datasets, for instance. For efficiency, our solution is to record individual contributors from this University and then add the name of the collaboration group.
Multiple requests for a single dataset record are sometimes submitted to Pure especially if a record has multiple contributors. To resolve this, approvals by the CCD team include a check for duplicates, and the service informs relevant researchers before rationalising any duplicates to a single record.
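A duplicate check of the kind described above can be surprisingly fiddly because the “same” DOI often arrives in different shapes (with or without a resolver prefix, in mixed case). A minimal sketch, with function names of our own invention, might normalise before comparing:

```python
def normalise_doi(doi: str) -> str:
    """DOIs are case-insensitive and often arrive with a resolver
    prefix; strip both before comparing."""
    doi = doi.strip().lower()
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
    return doi

def find_duplicates(submitted_doi, existing_dois):
    """Return existing records whose DOI matches the new submission,
    so they can be rationalised to a single record."""
    target = normalise_doi(submitted_doi)
    return [d for d in existing_dois if normalise_doi(d) == target]
```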
A limitation of the Gateway is that it doesn’t accommodate datasets without a DOI. So further work is needed to accommodate repositories, such as GenBank, that assign other types of unique and persistent identifiers.
Feedback on the Gateway has been consistently positive from researchers and research support staff; its purpose and simple effectiveness have been well-received and warmly appreciated.
However, getting researchers engaged takes time, persistence and the right angle from a communications perspective. It’s clear that researchers may not perceive a strong incentive to record datasets they’ve already shared elsewhere. Many are time poor and might reasonably question the benefit of also generating an institutional record. Therefore effective promotion continues to be key in terms of generating interest and engagement with the new Gateway service.
We’re framing our promotional message around how researchers can efficiently raise the profile of their research outputs using a suite of services including our Research Data Gateway, our Open Access Gateway, the Pure/ORCID integration, and benefit from automated reporting on their behalf to Researchfish. This promotes a joined up message explaining how the Library will help them raise their profile in return for – literally – a few seconds of their time.
We’re also tracking and targeting researchers who manually create dataset records in Pure to flag how the Research Data Gateway can save them significant time and effort.
In addition, to further reinforce the benefits of creating an institutional record, we ran a complementary, follow-up project using Scholix to find and record externally deposited datasets without the need for any researcher input. Seeing these dataset records surface in their Researcher Profiles, together with links to related research outputs is a useful means of generating interest and incentivising engagement.
These two approaches have now combined to deliver more than 5,000 data catalogue records and growing, with significant interlinking with the wider scholarly record. As noted, both routes have their limitations and so we remain on the lookout for creative ways to progress this work further, fill any gaps and make data ever more findable.
As well as two keynotes, three parallel workshop sessions and a panel discussion featuring four participants, a session featuring five-minute ‘lightning talks’ gave nine of us a chance to give presentations on work relevant to the Forum topic.
Unlike most of the other offerings at the Forum, my talk wasn’t about the use of metrics in evaluations or about how to measure openness. Instead, I talked about the way in which large open sets of citation data can give interesting information about patterns of citation. I reported on some initial exploratory work we’ve done to see whether this information can help identify new research opportunities.
Where does inspiration for research come from?
Inspiration for research can come from many different sources. For example, going back about 350 years, the act of noticing that an apple always falls perpendicularly to the ground could lead you to muse whether the earth had some power of attraction which caused this, and ultimately develop the law of universal gravitation.
Of course it’s still possible to come up with new ideas and discoveries on the basis of such ‘Eureka’ moments. But an increasing focus on interdisciplinarity has led to a situation of which it’s been said that ‘revolutionary scientific discoveries … are often the result of connecting ideas that have their origin in different disciplines.’
That quotation is from an article entitled ‘Interdisciplinary Research Boosted by Serendipity’. But do we have to rely on serendipity to discover that ideas from one discipline can be profitably applied in another field in a novel fashion? Or is there a way of systematically identifying such potential links?
I’d suggest that the technique of bibliographic coupling can help.
If two documents share several references in common, as Documents A and B do, then those documents are ‘bibliographically coupled’. And there’s at least a possibility that the two Citing Documents are using similar approaches to the research questions they’re respectively addressing.
In many cases the two Citing Documents will be by researchers who are addressing the same research question, or very closely related questions, and so the sharing of references has no deeper significance. A potentially more significant scenario occurs when the two Citing Documents are by researchers working in somewhat different fields. In that situation, the bibliographic coupling is a pointer to at least the possibility of a previously unidentified cross-disciplinary research connection.
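In computational terms, bibliographic coupling strength is simply the size of the intersection of two documents’ reference lists — a tiny sketch (with made-up DOIs) makes the idea concrete:

```python
def coupling_strength(refs_a, refs_b):
    """Bibliographic coupling strength = number of references
    the two citing documents share."""
    return len(set(refs_a) & set(refs_b))

# Documents A and B cite three of the same works, so they are
# bibliographically coupled with strength 3.
doc_a = ["10.1000/x1", "10.1000/x2", "10.1000/x3", "10.1000/x4"]
doc_b = ["10.1000/x2", "10.1000/x3", "10.1000/x4", "10.1000/y9"]
print(coupling_strength(doc_a, doc_b))  # 3
```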
COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations
So, what citation database could we use to try to identify such connections? The most well-known ones are the commercial products Web of Science and Scopus, but there are at least two barriers to using them for this type of work. The first is the need to pay a subscription cost to use them at all. The second is the limit to the number of records one can download. This makes it difficult to amass a dataset big enough to allow the required ‘mining’ for links.
We used COCI, a dataset created by the OpenCitations organisation. This was originally known as the Crossref Open Citations Index but it’s now the OpenCitations Index of Crossref open DOI-to-DOI citations.
Updated at least every six months, this dataset comprises all the DOI-to-DOI citations specified by open references in Crossref, which currently amounts to almost 450 million DOI-to-DOI citation links based on over 45 million bibliographic resources.
Looked at in terms of proportional coverage and from our own institution’s point of view, it comprises c.8,000 University of Manchester papers published since 2014 (which makes up around one-third of the total), together with their citation references.
This figure gives an example of the information which COCI provides, for each of the more than 45 million bibliographic resources whose DOIs it contains.
The data is available in several formats. We downloaded it as a CSV ‘dump’, and then filtered it to extract only those records where the DOI of the citing paper matched that of a paper in our own institutional publication records. This table gives some examples of the information which we then combined with the COCI information.
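The filtering step can be done with nothing more than Python’s csv module. This sketch assumes the dump has citing and cited DOI columns named `citing` and `cited` (the real files should be checked against the current COCI documentation):

```python
import csv

def filter_coci(dump_path, out_path, institutional_dois):
    """Copy across only those COCI rows whose citing DOI matches
    one of our own institutional publication DOIs."""
    with open(dump_path, newline="") as src, \
         open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["citing"] in institutional_dois:
                writer.writerow(row)
```

Passing the institutional DOIs as a set keeps the membership test fast even across hundreds of millions of rows.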
This meant that we were able to create a rich dataset which comprised c.8,000 records for University of Manchester papers, each in the following format.
Pointers to new research collaboration possibilities?
A colleague from the Library’s Digital Technologies and Services team wrote a program which pulled out all pairs of bibliographically coupled papers which had at least two references in common but where the authors came from different Faculties. We then used the free VOSViewer software to produce the following visualisation of bibliographical coupling links between publications by Schools/Divisions in different Faculties. (The closeness of the nodes is proportional to the strength of the bibliographic coupling.)
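We haven’t seen our colleague’s program, but its core logic — all cross-Faculty pairs sharing at least two references — might look something like this sketch (illustrative data structure and DOIs):

```python
from itertools import combinations

# Hypothetical corpus: DOI -> (faculty, set of cited DOIs)
papers = {
    "10.1/a": ("Science and Engineering", {"10.9/r1", "10.9/r2", "10.9/r3"}),
    "10.1/b": ("Humanities",              {"10.9/r2", "10.9/r3"}),
    "10.1/c": ("Science and Engineering", {"10.9/r3"}),
}

def cross_faculty_pairs(papers, min_shared=2):
    """Pairs of papers from *different* faculties that share at
    least `min_shared` references (bibliographic coupling)."""
    pairs = []
    for (d1, (f1, r1)), (d2, (f2, r2)) in combinations(papers.items(), 2):
        shared = r1 & r2
        if f1 != f2 and len(shared) >= min_shared:
            pairs.append((d1, d2, len(shared)))
    return pairs

print(cross_faculty_pairs(papers))
# → [('10.1/a', '10.1/b', 2)]
```

The output (pairs with coupling strength) is exactly the form of edge list that VOSviewer-style tools can turn into a network visualisation.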
What does this show? Here are two examples.
The two closely juxtaposed purple nodes at the bottom of the visualisation show that papers from the Division of Information, Imaging and Data Sciences (in the Faculty of Biology, Medicine and Health) and the School of Electrical and Electronic Engineering (in the Faculty of Science and Engineering) shared references in common.
The two closely juxtaposed green nodes at the left of the visualisation show that papers from the School of Mechanical, Aerospace and Civil Engineering (in the Faculty of Science and Engineering) and the School of Environment, Education and Development (in the Faculty of Humanities) shared references in common.
Do they highlight previously unsuspected opportunities for innovative new cross-disciplinary research? Unfortunately, no.
The juxtaposed purple nodes simply reflected the fact that closely related algorithmic approaches to medical diagnosis and to computer vision are used in both the Division of Information, Imaging and Data Sciences and the School of Electrical and Electronic Engineering. Although this is interesting, it’s not the kind of unsuspected connection we’d hoped to uncover.
Similarly, the juxtaposed green nodes show that approaches to the optimisation of land, water and energy use are an area of interest both to researchers in Civil Engineering and to those in Environment, Education and Development. Again, this isn’t an unsuspected connection which the bibliographic coupling has surprisingly brought to light.
We’re not expecting Manchester’s next Nobel Prize winners to use their acceptance speeches to thank us for the bibliometric work that first alerted them to the possibility of a ground-breaking collaboration any time soon. However, the way in which this work highlighted related research being carried out in different Faculties (however unsurprising the specific examples) serves as an encouraging proof of concept.
Love is all around us this week it seems. Coinciding with Valentine’s Day, by chance or otherwise, this is also Love Data Week. So, we thought we’d share how we’ve been loving our data by making it more visible, shareable and re-usable!
This is an area of growing interest across the RDM community and if you, like us, are kept awake at night by questions such as how do you identify your institution’s datasets in external repositories or what’s the most efficient way to populate your CRIS with metadata for those datasets, then read on to learn how we’ve been meeting these sorts of challenges.
At the University of Manchester (UoM), the Library’s Research Data Management team has been using Scholix to find UoM researcher data records and make them available in the University’s data catalogue and Researcher Profiles, which are publicly available and serve as a showcase for the University’s research.
We saw here an opportunity not only to increase further the visibility of the University’s research outputs but also to encourage researchers to regard data more seriously as a research output. We also had in mind the FAIR Principles and were keen to support best practice by researchers in making their data more findable.
The headline result is the addition of more than 4,500 data records to the UoM CRIS (Pure), with reciprocal links between associated data and publication records also being created to enrich the University’s scholarly record.
So how did we go about this…
Following the launch in 2017 of the University’s Pure Datasets module, which underpins our institutional data catalogue (Research Explorer) and automatically populates Researcher Profiles, we created services to help researchers record their data in Pure with as little manual effort as possible. We’re delighted to see these services being well-received and used by our research community!
But what about historical data, we wondered?
We knew most researchers wouldn’t have the time or inclination to record details of all their previous data without a strong incentive and, in any case, we wanted to spare them this effort if at all possible. We decided to investigate just how daunting or not this task might be and made the happy discovery that the Scholix initiative had done lots of the work for us by creating a huge database linking scholarly literature with their associated datasets.
Working with a number of key internal and external partners, we used open APIs to automate / part-automate the process of getting from article metadata to tailored data records (see Figure 1).
Figure 1. Process summary: making research data visible
To generate and process the article metadata from Scopus we partnered with the Library’s Research Metrics, and Digital Technologies and Services teams. We submitted the article DOIs to Scholix via its open API which returned metadata (including DOIs) of the associated research data. Then using the DataCite open API we part-automated the creation of tailored data records that mirrored the Pure submission template (i.e. the records contained the relevant metadata in the same order). This saved our Content, Collections and Discovery team lots of time when manually inputting the details to Pure, before validating the records to make them visible in Research Explorer and Researcher Profiles.
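The article-DOI-to-dataset-DOI step can be sketched as below. The endpoint shown is the OpenAIRE ScholeXplorer service as we understand it, and the field names follow the Scholix link schema; both should be checked against the current documentation before use:

```python
import json
import urllib.parse
import urllib.request

# Assumed endpoint — verify against current ScholeXplorer docs.
SCHOLIX_API = "https://api.scholexplorer.openaire.eu/v2/Links"

def scholix_query_url(article_doi: str) -> str:
    """URL asking Scholix for links whose source is this article."""
    return SCHOLIX_API + "?" + urllib.parse.urlencode({"sourcePid": article_doi})

def linked_dataset_dois(article_doi: str) -> list:
    """Return DOIs of research objects linked to an article
    (live network call; Scholix schema field names assumed)."""
    with urllib.request.urlopen(scholix_query_url(article_doi)) as resp:
        links = json.load(resp).get("result", [])
    dois = []
    for link in links:
        for ident in link.get("target", {}).get("Identifier", []):
            if ident.get("IDScheme", "").lower() == "doi":
                dois.append(ident["ID"])
    return dois
```

The returned dataset DOIs can then be passed to the DataCite API to assemble records mirroring the Pure submission template.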
Partnering with the University’s Directorate of Research and Business Engagement and Elsevier, we followed the same steps to process the records sourced from Pure. Elsevier was also able to prepare tailored data records for bulk upload directly into Pure which further streamlined the process.
Some challenges and lessons learned…
Manchester researchers like to share, especially if we can make it easy for them! Seeing the amount of data being shared across the institution is bringing us a lot of joy and a real sense of return on investment. In terms of staff time, that investment amounts to approximately 16 FTE weeks to upload, validate and link data in Pure, plus some additional time to plan and implement workflows. Cross-team working has been critical in bringing this project towards successful completion, with progress relying on the combined expertise of seven teams. In our view, the results more than justify this investment.
Of course, there are limitations to be addressed and technical challenges to navigate.
Initiatives, such as the COPDESS Enabling FAIR Data Project, that are bringing together relevant stakeholders (data communities, publishers, repositories and data ecosystem infrastructure) will help ensure that community-agreed metadata is properly recorded by publishers and repositories, so that it can feed into initiatives like Scholix and make our ‘downstream’ work ever more seamless. Widespread adoption of open identifiers would also make our work much easier and faster, in particular identifiers for researchers (ORCID) and research organisations (ROR). As ever, increased interoperability and automation of systems would be a significant step forward.
There are practical considerations as well. For instance, how do we treat data records with many researchers, which are more time-consuming to handle? How do we prepare researchers with lots of datasets for the addition of many records to their Researcher Profiles when there is such variation in norms and preferences across disciplines and individuals? How should we handle data collections? What do we do about repositories such as ArrayExpress that use accession numbers rather than DOIs, given that Scholix can’t identify data from such sources? And since Scholix only finds data which are linked to a research article, how do we find data which are independent assets? If we are really serious about data being outputs in their own right then we need to develop a way of doing this.
So, there’s lots more work to be done and plenty of challenges to keep us busy and interested.
In terms of the current phase of the project, processing is complete for data records associated with UoM papers from Scopus, with Pure records well underway. Researcher engagement is growing, with plenty of best practice in evidence. With REF 2021 in our sights, we’re also delighted to be making a clear contribution towards the research environment indicators for Open Data.
Following the introduction of GDPR last May the Research
Services team have been getting more and more enquiries about how to handle
sensitive data, so we invited Dr Scott
Summers from the UK Data Service (UKDS)
to visit us and deliver a one-day workshop on ‘Managing and sharing research
data from human participants’. My colleague, Chris Gibson, worked with Scott to
develop and arrange the session. It was a thoroughly engaging and informative
day, with lots of opportunity for discussion.
The workshop attracted a group of 30 who came along to learn
more about best practice for managing personal data. We invited colleagues from
across all faculties and ensured that there was a mix of established and early
career researchers, postgraduate researchers and professional services staff that
support research data management. As well as getting advice to help with data
management, the aim was to gather feedback from attendees to help us to shape
sessions that can be delivered as part of the Library’s My
Research Essentials programme by staff from across the University including
Research Services, Information Governance and Research IT.
As a fairly new addition to the Research Services team, I
was keen to attend this workshop. The management of research data from human
participants is a complex issue so any opportunity to work with the experts in
this field is very valuable. My job involves working with data management plans
for projects which often include personal data so gaining a deeper
understanding of the issues involved will help me to provide more detailed
advice and guidance.
The workshop began with looking at the ethical and legal
context around gathering data. This is something that has been brought sharply
into focus with the introduction of GDPR. We use ‘public
task’ as our lawful basis for processing data but it was interesting to hear
that ‘consent’ may be more prevalent as the preferred grounds in some EU
countries. Using public task as a basis provides our participants with
reassurance that the research is being undertaken in the public interest and
means researchers are not bound by the requirement to refresh consent.
The session on informed consent led to lively discussion
about how to be clear and specific about how and what data will be used when
research may change throughout a project. One solution for longitudinal studies
may be process consent – including multiple points of consent in the study
design to reflect potential changing attitudes of participants. Staged consent
is an option for those wanting to share data but give participants options. The
main point that arose from this session is that we should aim to give
participants as much control over their data as possible without making the
research project so complicated as to be unworkable.
The final session generated debate around whether we can
ever truly anonymise personal data. We worked through exercises in anonymising
data. It quickly became apparent that when dealing with information relating to
people, there are many aspects that could be identifying and in combination
even seemingly generic descriptors can quickly narrow down to a small subset of
participants. For example, ‘Research Officer’ is a term that could apply to a
large group of people but mention this in relation to ‘University of Manchester
Library’ and it quickly reduces to a subset of 3 people! The general consensus
was that referring to data as ‘de-identified’ or ‘de-personalised’ would be
more accurate but that these descriptions may not be as reassuring to the
participants so it is imperative that consent forms are clear and unambiguous
about how data will be used.
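The narrowing effect described above can be demonstrated with a few lines of code. The records here are toy data invented for illustration (not the workshop’s), but they show how counting group sizes for combinations of quasi-identifiers reveals re-identification risk:

```python
from collections import Counter

# Toy records: (job_title, organisation) pairs — illustrative only.
records = [
    ("Research Officer", "University of Manchester Library"),
    ("Research Officer", "University of Manchester Library"),
    ("Research Officer", "University of Manchester Library"),
    ("Research Officer", "Another Employer"),
    ("Librarian",        "University of Manchester Library"),
] + [("Research Officer", "Elsewhere")] * 20

def group_sizes(records, *field_indices):
    """Count how many records share each combination of the chosen
    quasi-identifiers; small groups are re-identification risks."""
    return Counter(tuple(r[i] for i in field_indices) for r in records)

# Job title alone: a large, safely anonymous group...
print(group_sizes(records, 0)[("Research Officer",)])  # 24

# ...but job title combined with organisation narrows to 3 people.
print(group_sizes(records, 0, 1)[("Research Officer",
                                  "University of Manchester Library")])  # 3
```

This is the intuition behind k-anonymity: any combination of quasi-identifiers that maps to fewer than k records is a disclosure risk.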
At the end of the session it was great to hear lots of
positive feedback from researchers across many disciplines that the workshop
took what could be quite a dry topic and made it engaging with numerous
opportunities for discussion.
Our second workshop with Scott Summers is due to take place on 26th February and we are looking forward to gaining more feedback and insights into how we can enhance the support we deliver to researchers who are managing research data from human participants – so, watch this space!
On Friday I submitted the University of Manchester’s feedback on Plan S. We’d invited feedback from across campus so our response reflects views from a wide range of academic disciplines as well as those from the Library.
Our response could be considered informally as ‘Yes, but…’, i.e. we agree with the overall aim but, as always, the devil’s in the detail.
Our Humanities colleagues expressed a number of reservations but noted “we are strongly in favour of Open Access publishing” and “we very much welcome the pressure, from universities and funders, on publishers to effect more immediate and less costly access to our research findings”.
The response from the Faculty of Biology, Medicine and Health also flagged concerns but stated “if Plan S is watered down, the pressure exerted on journal publishers may not be acute enough to force a profound shift in business model”.
A number of the concerns raised assumed that Plan S would launch against the current status quo. Updates from the Library have tried to reassure our academic colleagues that work going on ‘behind the scenes’ makes this unlikely, and to remind them that UK funder OA policies may not be exact replicas of Plan S.
We’ve been here before in the sense that when the UK Research Councils announced that a new OA policy would be adopted from April 2013, publishers amended their OA offer to accommodate the new policy requirements. Not every publisher of Manchester outputs did, but things did shift. For large publishers this happened fairly quickly, but for smaller publishers this took a bit longer, and in some cases required nudging by their academic authors.
It’s worth reflecting on how that policy played out as we consider Plan S: put simply, it cost a lot of money and most publishers didn’t provide options that fully met the Green OA requirements.
The key points in our response are concerns about affordability, Green OA requirements and the current ‘one size fits all’ approach. You can read it here: UoM_Plan-S_feedback.
Guest post by University of Manchester Library scholarship winner Chukwuma Ogbonnaya, PhD student at the School of Mechanical, Aerospace and Civil Engineering, and early career Lecturer in the Department of Chemical and Petroleum Engineering at the Federal University Ndufu-Alike Ikwo in Abakaliki, Nigeria.
OpenCon is a unique conference because it brings together librarians, publishers, civil society organisations, policy makers, government agencies, and postdoctoral, doctoral and undergraduate students from across the globe. These participants think, discuss, engage and co-create solutions to promote open philosophies. OpenCon2018, which was held at York University, Toronto, Canada between 1-4 November 2018, was my first OpenCon attendance. I applied to attend the Berlin conference in 2017 but unfortunately was not selected. When I saw the notification for OpenCon2018 from The University of Manchester Library, I quickly applied because what I had learned while preparing my previous application made me want to be involved in open research. I was pleasantly surprised when I was announced as the winner, and that was the beginning of a chain reaction of surprises.
As soon as the announcement was made, I was pleased and did not waste time in sharing the news with my family, friends and my home University in Nigeria through the Vice-Chancellor (Professor Chinedum Nwajiuba). I started my visa application the following day. It can be time-consuming and stressful for researchers from African countries and the Global South to obtain visas to travel to conferences, which is a real problem as it prevents researchers from these countries participating in important discussions – this tweet by Zaid Alyafeai sums up the problem. I was apprehensive, especially given the tight turnaround time – would it be possible for a Nigerian to obtain a Canadian visa in three days, I pondered? Now, here is the second surprise. I was given a multiple-entry Canadian visa that will not expire until 2022. What this means for me is that I can easily apply for conferences in Canada to present my research as well as listen to experts in my field. This was made possible by the OpenCon2018 award.
The third pleasant surprise was the design and programming of OpenCon2018. The programme was unique, with collaborative and engaging sessions. It was highly participatory and everybody had multiple choices of what activity/topic/theme to participate in. The discussion panels comprised presenters with practical experience alongside those at an early stage of their career.
My favourite panel was “Diversity, Equity and Inclusion” because it was simply outstanding. The panellists focused on the need to include everyone in the emerging open infrastructure designs irrespective of gender, race, country and region. It was obvious that allowing people to have access to the knowledge infrastructure would empower them to contribute to the development of the wider society. Associate Professor Leslie Chan made a long-lasting impression on me during the panel discussion, cautioning that open science should not simply translate into an “automation of knowledge inequality” but should instead be a “commitment to think critically and to push the boundaries to imagine a more inclusive, equitable and radical future”.
I want to share my experiences in some of the activities I participated in during the conference:
STORY CIRCLE AND STORY OF SELF
OpenCon2018 believes that “stories are at the core of how we identify and express ourselves, interpret and shape our worlds, and connect with others”. The intention of the story circle was to create a safe space where participants could tell a small group of people about themselves as well as share their thoughts on what open science means to them. No comments or questions were expected to follow beyond highlighting what brought the participant to OpenCon2018. For me, it was good that it came quite early in the conference because it provided an opportunity for me to start networking as well as gaining insights into how others perceive open access.
DESIGN THINKING WORKSHOPS I & II
I participated in the Europe workshop. The interactive workshop aimed to inspire contextual culture change towards a more open, diverse, inclusive and equitable research and education system. We addressed the question of how we might, as open advocates, congratulate our peers on non-open successes while staying true to our values. Within the group, we were divided into clusters based on our current activities/work. I was in a PhD and Post-Doctoral group and we explored how design thinking can be used to understand, design and communicate Open Access solutions for PhD students. The process involves defining the problems based on an understanding of the system, empathising with a typical PhD student based on how they think and feel, brainstorming and ideating solutions, prototyping the solution, testing it and implementing it in the real world. The videos and slides were used for a systematic analysis of the personhood of a typical PhD student. Current experiences and future aspirations of a PhD student were captured in order to reveal where and how interventions can be implemented to help PhD students understand how Open Access can support their current and future career development. The skills and learning acquired from the design thinking workshop are transferable and I will be applying them in designing human-centred systems within my research projects in the future.
PUBLISHING WITH OPEN ACCESS JOURNALS (UNCONFERENCE SESSION)
An unconference session is a hands-on session in which a speaker leads participants through exploring critical questions on the topic. This session focused on how to identify predatory journals and legitimate Open Access journals. It was a discussion session with rich experience-sharing on how fake Open Access publishers can be identified and avoided. It was apparent that Open Access publications can contribute to making a researcher’s work more available, visible and accessible, whilst giving the researcher more control over how, and by whom, their work can be used. When a scholar’s work becomes accessible, it can increase citations, and such recognition can support funding applications to carry out further work. However, falling prey to fake/predatory Open Access publishers could be disastrous. Such predatory journals lack strong peer review mechanisms and reputation within the research community. Consequently, the time and money spent on undertaking quality research could be lost when the wrong choice of journal is made. The degree of openness of Open Access journals was discussed, including the types of Open Access licensing. Finally, the presenter (Vrushali) shared how DOAJ (Directory of Open Access Journals), which indexes and provides access to peer-reviewed Open Access journals, can assist researchers in identifying recognised Open Access journals. In summary, the session was very informative and I will use the strategies discussed to make informed choices in the future, as well as to guide others.
DITCHING JOURNAL IMPACT FACTORS AND JOURNAL BLACKLISTS FOR GOOD (UNCONFERENCE SESSION)
The unconference session on spotting predatory journals influenced my choice of discussion group. The group, led by Emma Molls, focused on how impact factor metrics play a role in influencing researchers to publish in traditional journals instead of Open Access journals. The fundamental question was “how might we rethink journal quality in a way that benefits authors, editors, and librarians without duplicating the faults of the past?” We critically discussed whether the impact factor truly captures the ideals of quality and impact. For instance, the question was raised of whether the impact factor of a journal should be equated with the impact of an individual article. We also explored other possible metrics which could act as measures of the impact of an article, including citation, reproducibility, transparency and significance. We then considered the incentives that could motivate scholars to consider Open Access publishing. These include the recognition system in the research community/workplace and sponsors’/funders’ influence on where outputs should be published, among other factors.
Another OpenCon2018 activity was the creativity and innovation session, where new ideas and organisations are born through collaboration and networking. Individuals are encouraged to propose ideas no matter how sketchy they might be! Participants interested in a proposed idea come together and use their diverse skills to develop it, and create a network that allows them to continue collaborating on the idea after the conference. My group started developing a platform to bring together those who have good ideas but cannot develop them due to lack of resources or expertise, and those who can transform the ideas into reality. We applied a design thinking approach in seeking a solution. Afterwards, we set up a WhatsApp group to continue working on the idea. Members are from the UK, Canada, Germany, Jordan and Tanzania, and we held a Skype discussion on the project in early December 2018 to review progress on the tasks assigned at the conference.
OpenCon2018 may have come and gone but one thing is certain – it has opened a new world of possibilities for me in becoming an advocate for Open Access, open research, open data, open education, open government and indeed open philosophy. This was what I wanted and I now have it! I’m looking forward to working with The University of Manchester Library to contribute to the promotion of the Open philosophy across the University. My experience will also be promoted in my other institution, Alex Ekwueme Federal University Ndufu-Alike Ikwo, Nigeria after I complete my PhD studies at Manchester.
Finally, I look forward to contributing towards supporting and volunteering with communities/organisations/institutions seeking to build tools, processes, systems and infrastructures that promote the open philosophy to achieve an inclusive, fair, participatory and equitable system. I believe that applied open principles can empower people to bring their professional and personal diversity and uniqueness into the building blocks of the better society we all desire.
Eight months on from a major revision of data management planning processes at the University of Manchester, we’re often asked about how we work and so we thought it might be useful to share how we created a process that gives researchers maximum value from creating a Data Management Plan (DMP) and assists in the University’s compliance with GDPR.
The University of Manchester has required a DMP for every research project for nearly 5 years, as have most major UK research funders, and we had an internal data management planning tool during this period. Whilst this tool was heavily used, we wanted something that was more user-friendly and easier to maintain. We were also keen on having a tool which would allow Manchester researchers to collaborate with researchers at other institutions, so we turned to DMPonline, maintained by the Digital Curation Centre. Once the decision had been taken to move to DMPonline, we took the opportunity to consider links to the other procedures researchers complete before starting a project, to see if we could improve the process and experience.
The One Plan That Rules Them All
We brought together representatives from the Library, Information Governance Office, Research IT, ethics and research support teams to map out the overlaps in forms researchers have to complete before beginning research. We also considered what additional information the University needed to collect to ensure compliance with GDPR. We established that whilst there were several different forms required for certain categories of research, the DMP is the one form used by all research projects across the University and so was the most appropriate place to be the ‘information asset register’ for research required under GDPR.
We also agreed on common principles that:
Researchers should not have to fill in the same information twice;
Where possible questions would be multiple choice or short form, to minimise completion time;
DMP templates should be as short as possible whilst capturing all of the information needed to provide services and assist in GDPR compliance.
To achieve this we carefully considered all existing forms. We identified where there were overlaps and agreed on wording we could include in our DMP templates that would fulfil the needs of all teams – not an easy task! We also identified where duplicate questions could be removed from other forms. The agreed wording was added to our internal template, and as a separate section at the beginning of every funder template as the ‘Manchester Data Management Outline’, to ensure consistency across every research project at the University.
The Journey of a DMP
After we had agreed on the questions to be asked, we designed a process to share information between services with minimal input from researchers. Once a researcher has created their plan, the journey of a DMP begins with an initial check of the ‘Manchester Data Management Outline’ section by the Library’s Research Data Management (RDM) team. Here we’re looking for any significant issues and we give researchers advice on best practices. We ensure that all researchers who create plans are contacted, so that all researchers benefit from the process, even if that is just confirmation that they are doing the right thing.
If the issues identified suggest the potential for breaches of GDPR or a need for significant IT support, these plans are sent to the Information Governance Office and Research IT respectively. At this point all researchers are also offered the option of having their full DMP reviewed, using DMPonline’s ‘request feedback’ button.
If researchers take up this service – and more than 200 have in the first eight months – we review their plans within DMPonline, using the commenting functionality, and return the feedback to the researcher within 10 working days.
If a research project requires ethics approval, researchers are prompted whilst filling in their ethics form to attach their DMP and any feedback they have received from the Library or other support services. This second step was introduced shortly after the move to DMPonline so that we could ensure that the advice being given was consistent. These processes ensure that all the relevant services have the information they need to support effective RDM with minimal input from researchers.
On 17th April a message was sent to all researchers informing them of the change in systems and new processes. Since then Manchester researchers have created more than 2000 DMPs in DMPonline, demonstrating brilliant engagement with the new process. Sharing information between support services has already paid dividends – we identified issues with the handling of audio and video recordings of participants which contributed to the development of a new Standard Operating Procedure.
Whilst we have seen significant activity in DMPonline and a lot of positive feedback about our review service there are still improvements to our service that we would like to make. We are regularly reviewing the wording of our questions in DMPonline to ensure that they are as clear as possible; for example, we have found that there is frequent confusion around the terminology used for personal, sensitive, anonymised and pseudonymised data. There are also still manual steps in our process, especially for researchers applying for ethics approval, and we would like to explore how we could eliminate these.
Our new data management planning process is a marked improvement, and all the services involved in RDM-related support at Manchester now have a much richer picture of the research we support. The University of Manchester has a distributed RDM service and this process has been a great opportunity to strengthen these links and work more closely together. Our service does not yet meet the ambitious aims of Machine Actionable DMPs, but we hope that it offers an improved experience for researchers and is a first step towards semi-automated plans, at least from a researcher’s perspective.
When we’re planning for Open Access (OA) Week we reflect on where we’ve got to in our services, both in the delivery and the level of researcher engagement with OA.
It’s always rewarding for us to remember how well established our service now is and the important part we play in increasing access to the University’s research and, of course, funder compliance. This year we worked with colleagues in the University’s Global Development Institute to showcase their OA research, which aligns with the theme of OA Week 2018, and highlighted our top 5 most accessed theses.
Levels of engagement with OA at the University are high – while it’s undoubtedly true that this is related to funder policies, it’s also partly because our services are designed to make the process easy for authors. OA isn’t always easy for researchers to understand but our process is, and it prompts conversations with us about what to do, and the reasons why, all year round. Our webpages underpin our current processes but now also look ahead: we’ve just launched new-look pages that encourage and support engagement with Open Research more broadly.
What I’ve been reminded of as we’ve been preparing for OA Week is that however well we’re doing at the moment, there are still challenges to tackle. And I’m not referring to Plan S.
Working in an Open Access service
OA teams have formed and grown over the past 5 years. Most of us learned on the job and we’re now training new colleagues on the job. I’m part of a national group considering how best to prepare the next generation of people for work in this area. One way we’re doing this is by inviting staff already working in this area to share their experiences.
We often receive applications for our vacancies that suggest a lack of understanding about the nature of the roles so I’ve asked Lucy May and Olivia Rye from our team to talk about what it means to work in a role with a strong focus on Gold OA at a large research-intensive university.
A further challenge is OA for monographs and book chapters. We really need greater clarity on publisher processes as they relate to OA for these output types. Over the past week we’ve been reviewing the status of 14 payments we arranged for our School of Arts, Languages and Cultures from our 2017–18 institutional OA fund (last payment made in early September), totalling just over £61,000. Of these, 6 outputs are not yet OA. Another output, a monograph, is not flagged as OA on the publisher’s page. This may be an oversight, but it’s telling of the developments still needed – the publisher of this book told the author that they don’t have processes in place for this yet.
Of the 6 outputs, two were book chapters from a commercial publisher that I assume has a process, because they have a webpage offering OA for chapters as an option; but although I’ve had an apology, I’ve not yet had confirmation of when the chapters will be OA. One was an article from a US university press – I had a fast response and an apology, but have been told it will take at least a week for the article to be made OA.
The 3 remaining outputs are monographs. From the responses I’ve had, I understand that there’s a delay in converting a monograph to OA once a Book Processing Charge is paid – what I’ve yet to learn is how long this delay is likely to be. We can’t have meaningful discussions with authors without this kind of information, and the lack of publisher procedures affects confidence in engagement with OA.
So, this is now on my To Do list both here at Manchester and for the RLUK Open Access Publisher Processes group. By the time we’re planning OA Week activities next year, and reflecting on how far we’ve come, I’m determined we’ll have answers.
We’ve now assessed all applications for our sponsored OpenCon 2018 place and are pleased to announce that the successful applicant is Chukwuma Ogbonnaya. Chukwuma is a PhD student in the School of Mechanical, Aerospace and Civil Engineering, as well as an early career Lecturer in the Department of Chemical and Petroleum Engineering at the Federal University Ndufu-Alike Ikwo in Abakaliki, Nigeria.
Chukwuma’s application stood out due to his ability to combine passion with practical ideas for improving openness in research, based on his own experiences as a researcher and student. Having experienced both the frustration of trying to gain access to the supporting data of other scientists, and the substantial effort required to manually explain his own data to ensure it’s meaningful to readers, Chukwuma is motivated to explore the development of systems to support effective and systematic sharing of important research artefacts, such as contextual data and code, to aid analysis and reproducibility of published research findings.
The panel was particularly impressed with Chukwuma’s ideas for establishing a researcher network to support and encourage research staff and students across The University of Manchester to embrace the Open philosophy. Chukwuma plans to achieve this through both developing strategies for and engaging in outreach activities to explain the benefits of open research and learning.
Chukwuma was keen to attend OpenCon 2018 to network with like-minded fellows to develop his knowledge and critical skills. Collaboration is essential to developing serious challenges to established norms of scholarly communication, and we’re hoping Chukwuma will meet equally passionate delegates to help him develop and hone his ambitious plans.
We look forward to hearing from Chukwuma on his experiences at OpenCon 2018 and working with him on upcoming open research activities, including Open Access Week 2018 and our next Open Research Forum in November.