All posts by Rosie Higman

About Rosie Higman

Research Data Librarian at the University of Manchester

A Research Data Librarian’s experience of OpenCon2017

Photo: R2RC.org, CC-0

After following and participating in the OpenCon Librarian calls for much of the last year I was delighted to win a partial scholarship to OpenCon 2017. The monthly calls had raised my awareness of the variety of Open Access, Education and Data initiatives taking place elsewhere and I was keen to learn more about others’ advocacy efforts with students, librarians, policy makers, social entrepreneurs and researchers from around the world.

Too often when discussing Open Access and Data it seems that researchers, librarians and policy makers attend separate conferences and have separate conversations, so it is great that OpenCon brings together such a diverse group of people to work across national, disciplinary and professional boundaries. I was therefore very excited to arrive in Berlin for a long weekend working with a dedicated group of advocates on how to advance Open Research and Education.

The weekend started with a panel of inspiring early career professionals discussing the initiatives they are working on, which showed the many different levels at which it is possible to influence academic culture. These included Kholoud Al Ajarma’s work enabling refugee children to tell their stories through photography, the Bullied into Bad Science campaign which supports early career researchers in publishing ethically, Robin Champieux’s efforts to effect grassroots cultural change, and research into how open science is (or is not!) being incorporated into Review, Promotion and Tenure in American and Canadian universities. Learning about projects working at the individual, institutional and national level was a great way to get inspired about what could be achieved in the rest of the conference.

Photo: R2RC.org, CC-0

This emphasis on taking practical action was a theme of the weekend: OpenCon is not an event where you spend much time listening! After sharing how we all came to be interested in ‘Open’ during the stories of self on Saturday afternoon, we plunged into regional focus groups on Sunday, working on how we can effect cultural change as individuals in large institutions.

The workshops used design thinking, so we spent time thinking through the goals, frustrations and preoccupations of each actor. This meant that when we were coming up with strategies for cultural change, they were focused on what is realistic for the people involved rather than reaching for a technical solution with no regard for context. This was a great chance to talk through the different pressures facing researchers and librarians, understand each other’s points of view and come up with ways we can work in alliance to advocate for more openness.

During the do-athon (think a more inclusive version of a hackathon) I spent much of my time working with a group, led by Zoe Wake Hyde, looking at Open Humanities and Social Sciences, which was born out of one of the unconference sessions the previous day.

When discussing Open Research, and particularly Open Data, the conversation is frequently geared towards the type of research and publishing that occurs in the physical sciences, and so solutions do not take account of the challenges faced by the Humanities and Social Sciences. These challenges include a lack of funding, less frequent publishing which puts more pressure on each output, and the difficulties of making monographs Open Access. Often at conferences there are only a couple of us who are interested in the Humanities and Social Sciences, so it was great to be able to have in-depth discussions and start planning possible actions.

During the initial unconference session we talked about the differences (and potential conflicts) between Digital Humanities and Open Humanities, the difficulties in finding language to advocate effectively for Open in the Humanities, and the difficulty of sharing qualitative social sciences data. It was reassuring to hear that others are having similar difficulties in getting engagement in these disciplines and, whilst trying to avoid it turning into a therapy session, to discuss how we could give the Humanities and Social Sciences a higher profile within the Open movement. It was by no means all discussion and, true to stereotype, several of our group spent the afternoon working on their own, getting to grips with the literature in this area.

It was inspiring to work together with an international group of early career researchers, policy makers and librarians to get from an initial discussion about the difficulties we are all facing to a draft toolkit for advocates in little over 24 hours. Our discussions have continued since leaving Berlin and we hope to have a regular webchat to share best practice and support each other.

Whilst getting involved with practical projects was a fantastic opportunity, my main takeaway from the weekend was the importance of developing a wider and more inclusive perspective on Open Research and Education. It is easy to lose sight of these broader goals when working on these issues every day, getting bogged down in funder compliance, the complications of publisher embargoes and the technical difficulties of sharing data.

The Diversity, Equity and Inclusion panel focused on the real-world impact of openness and the importance of being critical in our approaches to it. Denise Albornoz spoke powerfully on the potential for Open Research to perpetuate unequal relationships across the world, with wealthy scientists becoming the only ones able to afford to publish (as opposed to the only ones able to afford to read the literature) and so silencing those in developing countries. Tara Robertson highlighted the complicated consent issues exposed through opening up historic records, Thomas Mboa focused on how Open Access prioritises Western issues over those important in Africa, and Siko Bouterse spoke about the Whose Knowledge project, which campaigns to ensure that knowledge from marginalised communities is represented on the Internet.

This panel, much like the whole of OpenCon, left me reflecting on how we can best advance Open Access and Open Data and re-energised to make a start with new allies from around the world.

Opening up the conversation about Open Research


Awareness of Open Access (OA) and Open Data has increased substantially over the last few years, with new mandates and funder policies increasing the level of OA at The University of Manchester for 2016-17 to 75%. Whilst this is a huge improvement on historic levels of approximately 10% Green OA, the emphasis on compliance with funder requirements has meant that many of the underlying reasons for working openly can be forgotten, presenting a risk that OA starts to be seen as another box to tick. For Open Research to become the norm across academia, major cultural change is required, and most researchers require strong incentives to make that change. To help counter the focus on compliance, the Library is hosting an Open Research Forum at the Contact Theatre on Thursday 26 October, as part of Open Access Week 2017.

In Classical times the forum was a place where news was exchanged and ideas thrashed out, and it is that spirit of open debate which we are hoping to capture through the Open Research Forum. We have a great selection of researchers lined up from across the University who will be speaking about the issues, challenges and benefits of openness, and what it means to be an ‘open researcher’. In keeping with Open Access Week 2017, the theme for the event is ‘Open in order to…’, focusing on the practical outcomes of working openly.  Topics include preprints, OA as part of wider public engagement, and newly emerging data labs which actively re-use data created by other researchers.


The Library as a Broker

Whilst the Library is coordinating the event, it will be researcher-led and -focused, with a series of slide-free, story-based talks from academics complemented by interactive activities and discussion. Our speakers represent a range of disciplines and we hope to capitalise on the Library being a ‘neutral’ space on campus to encourage exchange across the Schools. Speakers and participants are encouraged to be honest about their experiences with, and ideas about the future of, open research. We hope that by bringing researchers together to focus on open research without reference to mandates or policies we can help facilitate a more inspiring and substantive discussion of the opportunities and consequences created by researching in an open manner.

Learning from each other

As service providers in a central cultural institution, we can easily get lost in the mechanics of how to make research open and in our enthusiasm for this new mode of scholarly communication, losing sight of how these changes affect researchers’ day-to-day lives. As organisers, we are therefore hoping to learn a lot from our speakers so we can make our services more relevant. The speakers are all actively ‘open researchers’ in different ways, so we hope that other researchers can learn from their example and be inspired.

Book now:

Book your place at the Open Research Forum now to be part of the conversation: www.manchester.ac.uk/library/open-research-forum

How effective is your RDM training?

We are involved in an international collaborative project to assess the quality of Research Data Management (RDM) training across institutions. This post reports on the progress of the project so far; it originally appeared on the project blog on 6th October 2017.

When developing new training programmes, one often asks oneself about the quality of the training. Is it good? How good is it? Trainers often develop feedback questionnaires and ask participants to evaluate their training. However, feedback gathered from participants attending a course does not answer the question of how good that training was compared with other training on similar topics available elsewhere. As a result, improvement and innovation become difficult. So how can the quality of training be assessed objectively?

In this blog post we describe how, by working collaboratively, we created tools for objective assessment of RDM training quality.

Crowdsourcing

In order to assess something objectively, objective measures need to exist. Being unaware of any objective measures for benchmarking a training programme, we asked Jisc’s Research Data Management mailing list for help. It turned out that plenty of resources with useful advice and guidance on creating informative feedback forms were readily available, and we gathered all the information received in a single document. However, none of the answers provided the information we were looking for; on the contrary, several people said they would be interested in such metrics. This meant that objective metrics to assess the quality of RDM training either did not exist, or the community was not aware of them. Therefore, we decided to create RDM training evaluation metrics.

Cross-institutional and cross-national collaboration

For metrics to be objective, and to allow benchmarking and comparison of various RDM courses, they need to be developed collaboratively by a community willing to use them. Therefore, the next question we asked Jisc’s Research Data Management mailing list was whether people would be willing to work together to develop and agree on a joint set of RDM training assessment metrics, and a system which would allow cross-comparisons and training improvements. Thankfully, the RDM community tends to be very collaborative, and this was the case here too: more than 40 people were willing to take part in the exercise, and a dedicated mailing list was created to facilitate collaborative working.

Agreeing on the objectives

To ensure effective working, we first needed to agree on common goals and objectives. We agreed that the purpose of creating a minimal set of questions for benchmarking was to identify what works best in RDM training. We worked on the basis that this was for ‘basic’ face-to-face RDM training for researchers or support staff, but that it could be extended to other types and formats of training session. We reasoned that the same set of questions used in feedback forms across institutions, combined with sharing of training materials and contextual information about sessions, should facilitate the exchange of good practice and ideas. As an end result, this should allow constant improvement and innovation in RDM training. We therefore had joint objectives, but how could we achieve them in practice?

Methodology

Deciding on common questions to be asked in RDM training feedback forms

In order to establish joint metrics, we first had to decide on a set of questions that we would all agree to use in our participant feedback forms. To do this we organised a joint catch-up call, during which we discussed the various questions we were asking in our feedback forms and why we thought they were important and should be mandatory in the agreed metrics. There were lots of good ideas and valuable suggestions. However, by the end of the call, and after eliminating all the non-mandatory questions, we still had a list of thirteen questions which we thought were all important. This was too many to ask participants to fill in, especially as many institutions would need to add their own institution-specific feedback questions.

In order to bring down the number of questions which would be made mandatory in feedback forms, a short survey was created and sent to all collaborators, asking respondents to judge how important each question was on a scale of 1-5 (1 being ‘not important at all that this question is mandatory’ and 5 being ‘this should definitely be mandatory’). Twenty people participated in the survey. The total score received from all respondents for each question was calculated, and the six questions with the highest scores were selected to be made mandatory.
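To make the scoring step concrete, here is a minimal sketch of the calculation. The question labels and ratings below are invented for illustration, but the procedure mirrors the one described above: sum the 1-5 ratings each candidate question received across all respondents, then keep the six highest-scoring questions as the mandatory set.

```python
from collections import defaultdict

# Hypothetical responses: one dict per respondent, mapping each candidate
# question to the 1-5 importance rating that respondent gave it.
responses = [
    {"Q1: session relevance": 5, "Q2: trainer clarity": 4, "Q3: pace": 3},
    {"Q1: session relevance": 4, "Q2: trainer clarity": 5, "Q3: pace": 2},
    # ... one entry per respondent (twenty in the actual survey)
]

# Total score per question, summed across all respondents.
totals = defaultdict(int)
for response in responses:
    for question, rating in response.items():
        totals[question] += rating

# The six questions with the highest totals become the mandatory set.
mandatory = sorted(totals, key=totals.get, reverse=True)[:6]
print(mandatory)
```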

Ways of sharing responses and training materials

We next had to decide how we would share the feedback responses from our courses, and the training materials themselves. We unanimously agreed that the Open Science Framework (OSF) supports the goals of openness, transparency and sharing, and allows collaborative working, making it a good home for the project. We therefore created a dedicated space for the project on the OSF, with separate components for the joint resources developed, for sharing training materials and for sharing anonymised feedback responses.
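Purely as an illustrative sketch (the column names and file names are invented, and the project does not prescribe any particular tool), preparing feedback for the shared OSF component could be as simple as dropping any potentially identifying fields from the exported responses before upload:

```python
import csv

# Hypothetical identifying fields; real feedback exports will differ.
IDENTIFYING_COLUMNS = {"name", "email", "free_text_comments"}

def anonymise(in_path, out_path):
    """Copy a feedback CSV, dropping columns that could identify respondents."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        kept = [col for col in reader.fieldnames if col not in IDENTIFYING_COLUMNS]
        writer = csv.DictWriter(dst, fieldnames=kept)
        writer.writeheader()
        for row in reader:
            writer.writerow({col: row[col] for col in kept})

# Example usage with hypothetical file names:
anonymise("feedback_raw.csv", "feedback_for_osf.csv")
```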

Next steps

With the benchmarking questions agreed and a space created for sharing anonymised feedback and training materials, we were ready to start collecting the first feedback for the collective training assessment. This was also a good opportunity to re-iterate our short-, mid- and long-term goals.

Short-term goals

Our short-term goal is to revise our existing training materials to incorporate the agreed feedback questions into RDM training courses starting in autumn 2017. This would give us the first comparative metrics at the beginning of 2018 and allow us to evaluate whether the methodology and tools we designed are working and fit for purpose. It would also allow us to iterate over our materials and methods as needed.

Mid-term goals

Our mid-term goal is to see whether the metrics, combined with shared training materials, could allow us to identify the parts of RDM training that work best and to collectively improve the quality of our training as a whole. This should be possible in mid/late 2018, allowing time to adapt training materials as a result of the comparative feedback gathered at the beginning of 2018 and to assess whether those adaptations resulted in better participant feedback.

Long-term goals

Our long-term goal is to collaboratively investigate and develop metrics which could allow us to measure and monitor the long-term effects of our training. Feedback forms and satisfaction surveys completed immediately after training are useful and help to assess the overall quality of the sessions delivered. However, the ultimate goal of any RDM training should be the improvement of researchers’ day-to-day RDM practice. Is our training really having any effect on this? In order to assess this, different kinds of metrics are needed, coupled with long-term follow-up with participants. Any ideas developed on how best to address this will also be gathered on the OSF, where we have created a dedicated space for this work in progress.

Reflections

When reflecting on the work we did together, we all agreed that we were quite efficient. We started in June 2017, and it took us two joint catch-up calls and a couple of email exchanges to develop and agree on joint metrics for the assessment of RDM training. Time will tell whether the resources we create help us meet our goals, but we all felt that during the process we had already learned a lot from each other by sharing good practice and experience. Collaboration turned out to be an excellent solution for us. Our discussions are open to everyone, so if you are reading this blog post and would like to collaborate with us (or just follow our conversations), simply sign up to the mailing list.

Resources

Mailing list for RDM Training Benchmarking: http://bit.ly/2uVJJ7N

Project space on the Open Science Framework: https://osf.io/nzer8/

Mandatory and optional questions: https://osf.io/pgnse/

Space for sharing training materials: https://osf.io/tu9qe/

Anonymised feedback: https://osf.io/cwkp7/

Space for developing ideas on measuring long-term effects of training: https://osf.io/zc623/

Authors (in alphabetical order by surname):

Cadwallader Lauren, Higman Rosie, Lawler Heather, Neish Peter, Peters Wayne, Schwamm Hardy, Teperek Marta, Verbakel Ellen, Williamson Laurian, Busse-Wicher Marta