Categories: Announcement

Opening Remarks: An Open Research Podcast

You might have seen we recently released the first episode of our open research podcast Opening Remarks. This is something we’ve been talking about doing for a while, but the transition to working from home sped things up a little bit. We now spend a lot of our time talking to each other on platforms that enable audio recording, so our feeling was this would be a good opportunity to put that technology to good use.

The idea behind Opening Remarks is simple – we want to have conversations with colleagues from across the University about open research; how open research is supported and facilitated, but also how researchers embed open principles in their practice. We want these conversations to be informal, interesting and informative.

Our intention is to record six episodes in this initial series, covering research data, open access, research communications, metrics and lots more besides. We’d be keen to hear from you about what you think we should be talking about, and we’d be even keener to hear from you if you’d like to be a guest! Come and talk to us about the open research that you do!

The first episode is already available on iTunes and, pending successful reviews, should be available on Stitcher, Spotify and Google’s podcast player in the next couple of days. Do give it a listen and let us know what you think! You can contact us on Twitter at @UoMLibResearch or email us at researchdata@manchester.ac.uk.

Opening Remarks is hosted by Clare Liggins and Steve Carlton, two Research Services Librarians with very little broadcast experience but lots of enthusiasm.

Steve Carlton
@UoML_Steve

I’ve been a Research Services Librarian at Manchester since January 2019, specialising in open access and research communications. Before I arrived at Manchester I’d been working in open access at several other institutions across the north west, including spells at the University of Liverpool and the University of Salford.

I’m interested in open research and its potential to help researchers reach broader audiences, and outside work I’m into professional wrestling, non-league football, the music of Arthur Russell and the Australian TV soap Neighbours. If I can find a way to talk about any of those things in the podcast, I will.

Clare Liggins
@clarepenelope

I’m a Research Services Librarian in the Research Data Management Team. I’ve been working at the University since January 2019 (Steve and I started on the same day) and am interested in anything to do with promoting effective Research Data Management practice, including training, as well as Open Research more generally.

My background is in Literature and writing, and before working at the University I was a Law Librarian. Because of this background, I am also interested in finding ways to help these areas adopt Research Data Management processes more widely.

In my spare time I enjoy reading books about feminist writers, spotting beautiful furniture in films from the 1950s, cooking recipes written by Nigel Slater and making up voices for my cat.

Categories: Announcement

Opening Remarks #1: Research Data Management

In this first episode of Opening Remarks, we talk about the perils of working from home in the summer, then invite our colleagues to talk to us about research data management for an hour. We’re joined by Chris, Eleanor and Bill to cover the complexities of supporting research data management across disciplines and the joys of checking data management plans, and we talk up some of the services we offer. We also get a bit excited talking about the impending arrival of an institutional data repository.

Music by Michael Liggins
Artwork by Elizabeth Carlton

The Library’s research data webpages
https://www.library.manchester.ac.uk/using-the-library/staff/research/research-data-management/

That data costing podcast that Clare mentioned
https://blog.research-plus.library.manchester.ac.uk/2020/05/21/podcast-costing-research-data-at-the-university-of-manchester/

Email us
researchdata@manchester.ac.uk

Tweet us
@UoMLibResearch

Download it here.

Categories: Report

Working with the UK Data Service to support researchers with managing and sharing research data from human participants

Following the introduction of GDPR last May, the Research Services team have been getting more and more enquiries about how to handle sensitive data, so we invited Dr Scott Summers from the UK Data Service (UKDS) to visit us and deliver a one-day workshop on ‘Managing and sharing research data from human participants’. My colleague, Chris Gibson, worked with Scott to develop and arrange the session. It was a thoroughly engaging and informative day, with lots of opportunity for discussion.

The workshop attracted a group of 30 attendees keen to learn more about best practice for managing personal data. We invited colleagues from across all faculties and ensured that there was a mix of established and early career researchers, postgraduate researchers and professional services staff who support research data management. As well as getting advice to help with data management, the aim was to gather feedback from attendees to help us shape sessions that can be delivered as part of the Library’s My Research Essentials programme by staff from across the University, including Research Services, Information Governance and Research IT.

As a fairly new addition to the Research Services team, I was keen to attend this workshop. The management of research data from human participants is a complex issue, so any opportunity to work with experts in this field is very valuable. My job involves working with data management plans for projects which often include personal data, so gaining a deeper understanding of the issues involved will help me to provide more detailed advice and guidance.

The workshop began by looking at the ethical and legal context around gathering data, something that has been brought sharply into focus by the introduction of GDPR. We use ‘public task’ as our lawful basis for processing data, but it was interesting to hear that ‘consent’ may be more prevalent as the preferred grounds in some EU countries. Using public task as a basis reassures our participants that the research is being undertaken in the public interest, and means researchers are not bound by the requirement to refresh consent.

The session on informed consent led to a lively discussion about how to be clear and specific about what data will be used, and how, when research may change throughout a project. One solution for longitudinal studies may be process consent – including multiple points of consent in the study design to reflect the potentially changing attitudes of participants. Staged consent is an option for those wanting to share data while still giving participants choices. The main point to arise from this session is that we should aim to give participants as much control over their data as possible, without making the research project so complicated as to be unworkable.

The final session generated debate around whether we can ever truly anonymise personal data. We worked through exercises in anonymising data, and it quickly became apparent that when dealing with information relating to people, many aspects can be identifying; in combination, even seemingly generic descriptors can quickly narrow down to a small subset of participants. For example, ‘Research Officer’ is a term that could apply to a large group of people, but mention it in relation to ‘University of Manchester Library’ and it quickly reduces to a subset of 3 people! The consensus was that referring to data as ‘de-identified’ or ‘de-personalised’ would be more accurate, but these descriptions may not be as reassuring to participants, so it is imperative that consent forms are clear and unambiguous about how data will be used.
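To make that risk concrete, here is a minimal sketch in Python – with invented records and field names, illustrating the general idea rather than any tool we use – of the kind of check that flags risky combinations of descriptors. The standard ‘k-anonymity’ test asks whether every combination of quasi-identifiers is shared by at least k participants:

```python
from collections import Counter

# Invented, already "de-identified" records: each keeps only
# seemingly generic descriptors (the quasi-identifiers).
records = [
    {"job_title": "Research Officer", "org": "University of Manchester Library"},
    {"job_title": "Research Officer", "org": "University of Manchester Library"},
    {"job_title": "Research Officer", "org": "University of Manchester Library"},
    {"job_title": "Lecturer", "org": "School of Law"},
]

# Count how many participants share each combination of descriptors.
group_sizes = Counter((r["job_title"], r["org"]) for r in records)

# A combination shared by fewer than k participants is a
# re-identification risk (the k-anonymity test).
k = 5
for combo, size in group_sizes.items():
    if size < k:
        print(f"Risky combination {combo}: only {size} participant(s)")
```

In real datasets the same counting logic applies across many more fields (age band, region, role and so on), which is why seemingly harmless descriptors so quickly single people out.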

At the end of the session it was great to hear positive feedback from researchers across many disciplines: the workshop took what could be quite a dry topic and made it engaging, with numerous opportunities for discussion.

Our second workshop with Scott Summers is due to take place on 26th February and we are looking forward to gaining more feedback and insights into how we can enhance the support we deliver to researchers who are managing research data from human participants – so, watch this space!

Categories: Discussion Report

Connecting the dots: Creating a joined up approach to Data Management Plans

Eight months on from a major revision of data management planning processes at the University of Manchester, we’re often asked how we work, so we thought it might be useful to share how we created a process that gives researchers maximum value from creating a Data Management Plan (DMP) and assists the University’s compliance with GDPR.

The University of Manchester has required a DMP for every research project for nearly five years, as have most major UK research funders, and we had an internal data management planning tool throughout this period. Whilst this tool was heavily used, we wanted something that was more user-friendly and easier to maintain. We were also keen on a tool that would allow Manchester researchers to collaborate with researchers at other institutions, so we turned to DMPonline, maintained by the Digital Curation Centre. Once the decision had been taken to move to DMPonline, we took the opportunity to consider links to the other procedures researchers complete before starting a project, to see if we could improve the process and experience.

The One Plan That Rules Them All

We brought together representatives from the Library, Information Governance Office, Research IT, ethics and research support teams to map out the overlaps in forms researchers have to complete before beginning research. We also considered what additional information the University needed to collect to ensure compliance with GDPR. We established that whilst there were several different forms required for certain categories of research, the DMP is the one form used by all research projects across the University and so was the most appropriate place to be the ‘information asset register’ for research required under GDPR.

We also agreed on common principles:

  • Researchers should not have to fill in the same information twice;
  • Where possible, questions would be multiple choice or short-form, to minimise completion time;
  • DMP templates should be as short as possible whilst capturing all of the information needed to provide services and assist in GDPR compliance.

To achieve this, we carefully considered all existing forms. We identified where there were overlaps and agreed on wording we could include in our DMP templates that would fulfil the needs of all teams – not an easy task! We also identified where duplicate questions could be removed from other forms. The agreed wording was added to our internal template, and as a separate section at the beginning of every funder template as the ‘Manchester Data Management Outline’, to ensure unity across every research project at the University.

The Journey of a DMP

Once we had agreed on the questions to be asked, we designed a process to share information between services with minimal input from researchers. Once a researcher has created their plan, the journey of a DMP begins with an initial check of the ‘Manchester Data Management Outline’ section by the Library’s Research Data Management (RDM) team. Here we’re looking for any significant issues, and we give researchers advice on best practice. We ensure that all researchers who create plans are contacted, so that everyone benefits from the process, even if that is just confirmation that they are doing the right thing.

First stage of data management plan checks

If the issues identified suggest the potential for breaches of GDPR or a need for significant IT support, these plans are sent to the Information Governance Office and Research IT respectively. At this point all researchers are also offered the option of having their full DMP reviewed, using DMPonline’s ‘request feedback’ button.

Second stage of DMP checks

If researchers take up this service – and more than 200 have in the first eight months – we review their plans within DMPonline, using the commenting functionality, and return the feedback to the researcher within 10 working days.

DMP and Ethics integration

If a research project requires ethics approval, researchers are prompted whilst filling in their ethics form to attach their DMP and any feedback they have received from the Library or other support services. This second step was introduced shortly after the move to DMPonline so that we could ensure that the advice being given was consistent. These processes ensure that all the relevant services have the information they need to support effective RDM with minimal input from researchers.
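Purely as an illustration of the triage just described – a sketch with hypothetical field names, not our actual implementation – the routing logic amounts to a few conditional referrals:

```python
from dataclasses import dataclass

@dataclass
class OutlineAnswers:
    """Answers from the 'Manchester Data Management Outline' section
    (hypothetical field names, for illustration only)."""
    uses_personal_data: bool       # potential GDPR implications
    needs_significant_it: bool     # e.g. large storage or compute needs
    full_review_requested: bool    # DMPonline 'request feedback' button

def route_plan(answers: OutlineAnswers) -> list:
    """Return the referrals a newly created DMP should generate.

    Every plan gets the Library RDM team's initial check; the extra
    referrals mirror the triage described in the post.
    """
    referrals = ["Library RDM team: initial check of the Outline"]
    if answers.uses_personal_data:
        referrals.append("Information Governance Office")
    if answers.needs_significant_it:
        referrals.append("Research IT")
    if answers.full_review_requested:
        referrals.append("Library RDM team: full review within 10 working days")
    return referrals

print(route_plan(OutlineAnswers(True, False, True)))
```

The value of structuring the triage this way is that every plan follows the same path, and a referral to a specialist service happens automatically from the researcher’s answers rather than relying on the researcher knowing who to contact.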

Implementation

On 17th April, a message was sent to all researchers informing them of the change in systems and new processes. Since then, Manchester researchers have created more than 2000 DMPs in DMPonline, demonstrating brilliant engagement with the new process. Sharing information between support services has already paid dividends – we identified issues with the handling of audio and video recordings of participants, which contributed to the development of a new Standard Operating Procedure.

Next Steps

Whilst we have seen significant activity in DMPonline and a lot of positive feedback about our review service, there are still improvements we would like to make. We regularly review the wording of our questions in DMPonline to ensure that they are as clear as possible; for example, we have found that there is frequent confusion around the terminology used for personal, sensitive, anonymised and pseudonymised data. There are also still manual steps in our process, especially for researchers applying for ethics approval, and we would like to explore how we could eliminate these.

Our new data management planning process is a clear improvement, and all the services involved in RDM-related support at Manchester now have a much richer picture of the research we support. The University of Manchester has a distributed RDM service, and this process has been a great opportunity to strengthen those links and work more closely together. Our service does not yet meet the ambitious aims of machine-actionable DMPs, but we hope that it offers an improved experience for researchers and is a first step towards semi-automated plans, at least from the researcher’s perspective.

Categories: Discussion Report

How effective is your RDM training?

We are involved in an international collaborative project to assess the quality of Research Data Management training across institutions. This post reports on the progress of the project so far; it originally appeared on the project blog on 6th October 2017.

When developing new training programmes, one often asks oneself about the quality of the training. Is it good? How good is it? Trainers often develop feedback questionnaires and ask participants to evaluate their training. However, feedback gathered from participants attending a course does not reveal how good that training was compared with similar training available elsewhere. As a result, improvement and innovation become difficult. So how can the quality of training be assessed objectively?

In this blog post we describe how, by working collaboratively, we created tools for objective assessment of RDM training quality.

Crowdsourcing

In order to assess something objectively, objective measures need to exist. Being unaware of any objective measures for benchmarking a training programme, we asked Jisc’s Research Data Management mailing list for help. It turned out that plenty of resources with useful advice and guidance on creating informative feedback forms were readily available, and we gathered all the information received in a single document. However, none of the answers provided the information we were looking for. On the contrary, several people said they would be interested in such metrics. This meant that objective metrics for assessing the quality of RDM training either did not exist, or the community was not aware of them. Therefore, we decided to create RDM training evaluation metrics.

Cross-institutional and cross-national collaboration

For metrics to be objective, and to allow benchmarking and comparison across RDM courses, they need to be developed collaboratively by the community that will use them. The next question we asked Jisc’s Research Data Management mailing list was therefore whether people would be willing to work together to develop and agree on a joint set of RDM training assessment metrics, and a system which would allow cross-comparisons and training improvements. Thankfully, the RDM community tends to be very collaborative, and this occasion was no exception – more than 40 people were willing to take part, and a dedicated mailing list was created to facilitate collaborative working.

Agreeing on the objectives

To ensure effective working, we first needed to agree on common goals and objectives. We agreed that the purpose of creating a minimal set of benchmarking questions is to identify what works best in RDM training. We worked on the basis that the questions were for ‘basic’ face-to-face RDM training for researchers or support staff, but they can be extended to other types and formats of training session. We reasoned that the same set of questions used in feedback forms across institutions, combined with the sharing of training materials and contextual information about sessions, should facilitate the exchange of good practice and ideas. As an end result, this should allow constant improvement and innovation in RDM training. We now had joint objectives, but how could we achieve them in practice?

Methodology

Deciding on common questions to be asked in RDM training feedback forms

In order to establish joint metrics, we first had to decide on a joint set of questions that we would all agree to use in our participant feedback forms. To do this we organised a joint catch-up call during which we discussed the various questions we were asking in our feedback forms, and why we thought these were important and should be mandatory in the agreed metrics. There were lots of good ideas and valuable suggestions. However, by the end of the call, even after eliminating all the non-mandatory questions, we were left with thirteen questions that we all thought were important. This was too many to ask participants to fill in, especially as many institutions would need to add their own institution-specific feedback questions.

In order to bring down the number of questions to be made mandatory in feedback forms, a short survey was created and sent to all collaborators, asking respondents to judge how important each question was on a scale of 1 to 5 (1 being ‘not important at all that this question is mandatory’ and 5 being ‘this should definitely be mandatory’). Twenty people participated in the survey. The total score received from all respondents for each question was calculated, and the six questions with the highest scores were selected to be made mandatory.
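For illustration, the selection reduces to summing each question’s ratings and taking the top six. The questions and ratings below are invented (in the real survey each question received up to twenty ratings):

```python
# Invented 1-5 importance ratings per candidate question.
ratings = {
    "Overall, how would you rate the session?": [5, 4, 5, 4, 5],
    "What did you like most about the session?": [4, 4, 3, 5, 4],
    "How likely are you to change your RDM practice?": [5, 5, 4, 4, 5],
    # ... the remaining candidate questions ...
}

# Total score per question across all respondents.
totals = {question: sum(scores) for question, scores in ratings.items()}

# The six highest-scoring questions become the mandatory set.
mandatory = sorted(totals, key=totals.get, reverse=True)[:6]
print(mandatory)
```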

Ways of sharing responses and training materials

We next had to decide how we would share the feedback responses from our courses, as well as the training materials themselves. We unanimously agreed that the Open Science Framework (OSF) supports the goals of openness, transparency and sharing, and allows collaborative working, making it a good home for the project. We therefore created a dedicated space for the project on the OSF, with separate components for the joint resources we developed, for sharing training materials, and for sharing anonymised feedback responses.

Next steps

With the benchmarking questions agreed and a space created for sharing anonymised feedback and training materials, we were ready to start collecting the first feedback for the collective training assessment. This was also a good opportunity to reiterate our short-, mid- and long-term goals.

Short-term goals

Our short-term goal is to revise our existing training materials and incorporate the agreed feedback questions into RDM training courses starting in autumn 2017. This would allow us to obtain the first comparative metrics at the beginning of 2018, to evaluate whether the methodology and tools we designed are working and fit for purpose, and to iterate over our materials and methods as needed.

Mid-term goals

Our mid-term goal is to see whether the metrics, combined with shared training materials, can help us identify the parts of RDM training that work best, and so collectively improve the quality of our training as a whole. This should be possible in mid-to-late 2018, allowing time to adapt training materials as a result of the comparative feedback gathered at the beginning of 2018 and to assess whether those adaptations resulted in better participant feedback.

Long-term goals

Our long-term goal is to collaboratively investigate and develop metrics which could allow us to measure and monitor the long-term effects of our training. Feedback forms and satisfaction surveys completed immediately after training are useful and help to assess the overall quality of the sessions delivered. However, the ultimate goal of any RDM training should be the improvement of researchers’ day-to-day RDM practice. Is our training really having any effect on that? Assessing this requires different kinds of metrics, coupled with long-term follow-up with participants. Any ideas we develop on how best to address this will also be gathered in the OSF, where we have created a dedicated space for the work in progress.

Reflections

Reflecting on the work we did together, we all agreed that we were quite efficient. We started in June 2017, and it took us two joint catch-up calls and a couple of email exchanges to develop and agree on joint metrics for assessing RDM training. Time will tell whether the resources we have created help us meet our goals, but we all felt that, during the process, we had already learned a lot from each other by sharing good practice and experience. Collaboration turned out to be an excellent solution for us. Our discussions are open to everyone, so if you are reading this blog post and would like to collaborate with us (or just follow our conversations), simply sign up to the mailing list.

Resources

Mailing list for RDM Training Benchmarking: http://bit.ly/2uVJJ7N

Project space on the Open Science Framework: https://osf.io/nzer8/

Mandatory and optional questions: https://osf.io/pgnse/

Space for sharing training materials: https://osf.io/tu9qe/

Anonymised feedback: https://osf.io/cwkp7/

Space for developing ideas on measuring long-term effects of training: https://osf.io/zc623/

Authors (in alphabetical order by surname):

Cadwallader Lauren, Higman Rosie, Lawler Heather, Neish Peter, Peters Wayne, Schwamm Hardy, Teperek Marta, Verbakel Ellen, Williamson Laurian, Busse-Wicher Marta

Categories: Training

Refreshing the roses

With September marking the start of a new academic year, Manchester students are making their way into the University’s Main Library, site libraries, and website in increasing numbers. As the new students get to know what’s available to them, I’m minded to refresh my knowledge of some information resources, including those on Research Data Management (RDM).

Between November 2014 and February 2015, fellow Manchester RDM Service Team member Chris Gibson and I joined 18 colleagues from NoWAL institutions on a four-day RDM course tailored for information professionals. The course was none other than RDMRose, the result of a Jisc-funded project by the libraries of the universities of Leeds, Sheffield and York, along with Sheffield’s Information School (iSchool), to produce RDM learning materials for teaching and continuing professional development.

RDMRose training in Manchester
Haley with Andrew Cox, RDMRose project director

The course, which met one day a month over four months, was led by Andrew Cox of Sheffield’s iSchool, the RDMRose project director, and ably supported by Eddy Verbaan, one of two research associates on the project. Andrew and Eddy used updated versions of the original RDMRose learning materials. We met at Manchester, with participants coming from the universities of Central Lancashire and Cumbria, Edge Hill University, Liverpool John Moores University, Nottingham Trent University, the University of Salford and the University of Wolverhampton, in addition to Chris and me. No two universities are alike, of course, and our conversations through group work and over coffee and lunch exposed us to different sets of experiences relating to RDM.

Each day mixed different types of session, and we were kept working all through the course: designing a support website and a training programme, reviewing sample data management plans, and examining policies for RDM and university repositories, among many other things. Having the sessions spread several weeks apart allowed us not only to apply our practical takeaways between sessions, but also to do some “homework” (or is it “workwork” if you do it at work?) in order to report back to the group about data management practices at our own universities.

Even if you aren’t able to attend RDMRose training, you can still reap the benefits. The most recent version of the RDMRose materials (version 3) was released in April 2015 and is accessible via the RDMRose website. For those looking to extend their understanding of RDM, it’s well worth a look – or at least bookmarking for later, once things calm down after the start of the year.

Categories: Announcement

Are you on track with the EPSRC policy framework on research data?

If you’re not already aware, the EPSRC requirements around the management of and access to EPSRC-funded research data are mandatory from 1 May 2015.

If your research is funded by the EPSRC, we’ve summarised the key points to help you comply with the EPSRC policy framework on research data. Read our guidance to find out what you need to do.

If you want to know more about managing your research data, please contact our Research Data Management team.