Open Letter. Open Res Eur. 2023 Sep 6;2:146. Originally published 2022 Dec 23. [Version 2] doi: 10.12688/openreseurope.15364.2

Community-driven governance of FAIRness assessment: an open issue, an open discussion

Mark D Wilkinson 1,2, Susanna-Assunta Sansone 1,3, Eva Méndez 4, Romain David 1,5, Richard Dennis 1,6, David Hecker 1,7, Mari Kleemola 1,8, Carlo Lacagnina 1,9, Anastasija Nikiforova 1,10, Leyla Jael Castro 11,a
PMCID: PMC10828551  PMID: 38298923

Version Changes

Revised. Amendments from Version 1

This version addresses the comments raised by reviewers. The main revisions are: (i) a new paragraph in Box 1, point 1; (ii) a new paragraph at the end of Section 2; (iii) the text of the 3.1.3 use-case was reduced; (iv) the original Section 5.2.2 was removed; (v) the original Section 5.2.3, now 5.2.2, was reduced; (vi) a new paragraph at the end of Section 5; and (vii) a new paragraph in the middle of Section 6.

Abstract

Although the FAIR Research Data Principles are targeted at and implemented by different communities, research disciplines, and research stakeholders (data stewards, curators, etc.), there is no conclusive way to determine the level of FAIRness intended or required to make research artefacts (including, but not limited to, research data) Findable, Accessible, Interoperable, and Reusable. The FAIR Principles cover all types of digital objects, metadata, and infrastructures; however, they focus their narrative on data features that support reusability. FAIR defines principles, not standards, and therefore does not propose a mechanism to achieve the behaviours it describes, in an attempt to be technology/implementation neutral. Various FAIR assessment metrics and tools have been designed to measure FAIRness. Unfortunately, the same digital objects assessed by different tools often exhibit widely different outcomes because of these independent interpretations of FAIR. This results in confusion among the publishers, the funders, and the users of digital research objects. Moreover, in the absence of a standard and transparent definition of what constitutes FAIR behaviours, there is a temptation to define existing approaches as being FAIR-compliant rather than having FAIR define the expected behaviours. This whitepaper identifies three high-level stakeholder categories (FAIR decision and policymakers, FAIR custodians, and FAIR practitioners) and provides examples outlining specific stakeholders' (hypothetical but anticipated) needs. It also examines possible models for governance based on existing peer efforts, standardisation bodies, and other ways to acknowledge specifications, along with their potential benefits. This whitepaper can serve as a starting point to foster an open discussion around FAIRness governance and the mechanism(s) that could be used to implement it, such that it is trusted, broadly representative, appropriately scoped, and sustainable. We invite engagement in this conversation in an open Google Group: fair-assessment-governance@googlegroups.com.

Keywords: FAIR, FAIR evaluators, FAIRness assessment, Governance

Disclaimer

The views expressed in this article are those of the author(s). Publication in Open Research Europe does not imply endorsement of the European Commission.

1. Introduction

The publication of the FAIR Principles in 2016 heralded a new era for data-driven research, sharpening the focus on the Findability, Accessibility, Interoperability, and Reusability of research outputs 1 . Multiple communities in different domains are now implementing these Principles and actively evolving this work worldwide. Recent initiatives include developing tools to assist with achieving FAIRness of research data, adapting the FAIR Principles to other digital research objects such as software, workflows, and training material, and creating tools that assess the degree of FAIRness achieved. There are also discussions around expanding the FAIR Principles, or their interpretation, to include features of digital objects beyond reusability, including popularity 2 , data quality 3 , or reproducibility 4 . Funders may require researchers to make underlying research data "as open as possible and as closed as necessary" and follow the FAIR Principles. Likewise, the current work to reform the research assessment system 5 (e.g., the agreement led by the Coalition for Advancing Research Assessment, CoARA) includes data sharing as part of qualitative research evaluation. Still, there is no agreement on what constitutes quality in research data sharing ("FAIRness") or how to measure it.

FAIR comprises principles, not standards with which one is expected to comply. Research-performing organisations, researchers, and managers are thus asked to ensure that data and all other digital objects are FAIR-compliant research outputs, yet there is no body or committee that clarifies what FAIRness "means" or how FAIRness validation tools can support it.

From within this milieu, the topic of FAIRness governance is arising. Governance here is understood as a community-driven and agreed way of providing reliable, trusted assistance and tooling to improve FAIRness and should not be interpreted as an attempt to impose judgment. Improved FAIRness is the goal of many major international initiatives, including the European Open Science Cloud (EOSC) and funding agencies at the national and international levels.

This whitepaper evolved from discussions at the 2021 IEEE eScience workshop, FAIReScience, and from individual meetings among the authors, all of whom are involved in the FAIR community as original authors of the Principles, FAIR project leaders, advocates, implementers, technology developers, trainers, advisors, and stewards. The draft whitepaper was then reviewed and revised by the members of the EOSC Task Force on FAIR Metrics and Data Quality as part of their charter requirement to examine FAIR evaluation and governance issues. Task Force members were invited to become co-authors. The EOSC Board then reviewed this final revision before publication. This document aims to make a clear argument for the need for a governance model around a standard definition of FAIRness, demonstrate how stakeholders would benefit from it, and explore existing governance models from peer internet projects to better understand the decisions that need to be made. The authors aim to foster an open debate with the community around FAIRness governance and the mechanism(s) that could be used to implement it, such that it is trusted, broadly representative, appropriately scoped, and sustainable. The document begins by detailing the needs, based on use cases relevant to various stakeholders. It then examines possible governance models, reviewing existing and peer efforts in the public domain.

2. Rationale

The FAIR Principles outline a set of features that a digital object should exhibit to optimise its ability to be discovered and correctly reused, focusing on fully mechanised reuse. The FAIR Principles are meant to cover all sorts of digital objects; however, they focus their narrative on data features that support reusability. The Principles do not propose a mechanism to achieve the behaviours they describe, in an attempt to be technology/implementation-agnostic. This flexibility has led to many interpretations of what it means to "be FAIR", with early claims from essential resources that they already "were FAIR". Given that FAIR was intended to spur a revolution in research scholarship, and given that the degree of interoperability achieved by the status quo is severely lacking, claims to pre-existing or newly acquired FAIRness needed to be objectively examined.

In the last few years, several activities have focused on developing FAIR metrics/maturity indicators and tests (measurable features of a digital object that correspond to its compliance with a specific FAIR Principle) and tools that fully or semi-automatically implement these tests. Unfortunately, these tools often have a different interpretation of the original intent of each Principle and, therefore, of what needs to be tested and how. For example, various tools follow different workflows for harvesting the metadata about a digital object upon which they will execute their tests. They may not find all possibilities or, worse, may retrieve unrelated metadata by following a path that the digital object provider uses for another purpose. As a result, digital objects evaluated by different tools often exhibit widely different outcomes, which confuses both the publishers and consumers of digital objects, hindering them from applying corrective actions. Therefore, these evaluation initiatives need guidance to ensure that their FAIRness assessments (a) stay within the scope of the FAIR Principles and (b) consider the full breadth of valid technical options to enact the required metadata-gathering and testing behaviours. Given the increasing number of Web standards compliant with the FAIR Principles, such as the emergence of RO-Crates 6 as a mechanism to publish data-provider-sourced domain metadata, it is essential to ensure that "objectively compliant" resources are not judged "unfairly" due to their selection of one technology or standard versus another. The sketch below illustrates how two harvesting routes can yield different metadata for the same object.
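As an illustration only (the URL and the choice of harvesting routes are hypothetical assumptions, not part of this whitepaper), the following minimal Python sketch shows two common metadata-harvesting routes an evaluation tool might take. Because the two routes can return different records for the same identifier, downstream tests can disagree even on a compliant resource.

```python
import json
import re

import requests  # third-party: pip install requests

PID = "https://example.org/dataset/123"  # hypothetical persistent identifier


def harvest_via_content_negotiation(pid: str) -> dict:
    """Route 1: ask the server directly for machine-readable metadata."""
    response = requests.get(pid, headers={"Accept": "application/ld+json"}, timeout=10)
    response.raise_for_status()
    return response.json()


def harvest_via_embedded_jsonld(pid: str) -> dict:
    """Route 2: scrape a JSON-LD block embedded in the HTML landing page."""
    html = requests.get(pid, headers={"Accept": "text/html"}, timeout=10).text
    blocks = re.findall(
        r'<script type="application/ld\+json">(.*?)</script>', html, re.DOTALL
    )
    return json.loads(blocks[0]) if blocks else {}


if __name__ == "__main__":
    # The two routes may return different (or empty) records for the same PID,
    # so tests executed on their outputs can diverge for the same digital object.
    record_a = harvest_via_content_negotiation(PID)
    record_b = harvest_via_embedded_jsonld(PID)
    print("Harvesting routes agree:", record_a == record_b)
```

A provider may, for instance, embed JSON-LD intended for search-engine indexing on its landing page while serving its authoritative record only via content negotiation; a tool that follows the first route would then test metadata the provider published for an entirely different purpose.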

Moreover, FAIR is an impetus toward the proper management of digital objects, which is essential for the greater good. Therefore, FAIRness must not be used or perceived as an instrument to judge or punish. The community is genuinely concerned that mechanisms to evaluate FAIRness can be misused and misinterpreted, especially when these become a decision-making instrument in funding scenarios. Governance of FAIRness assessment will therefore better serve the community if it focuses on assisting and guiding stakeholders to reach a reasonable level of FAIRness, or on assisting in the interpretation of the facets of FAIRness achieved by a given digital object before deciding to use it (or not), rather than using assessments as a path to judgement. Beyond providing rapid pass/fail outputs for any given assessment metric, consideration should thus be given to the supportive advice returned by evaluation systems in response to various kinds of failure; this advice would benefit from being harmonised across all evaluation systems to avoid divergence in the community.

This work notes that there have been movements to expand the FAIR Principles to include features of digital objects beyond reusability, including popularity and data quality. While these latter features are undoubtedly essential and of interest to many stakeholders, they do not directly fall within the scope of the FAIR Principles and are thus not included in this discussion. This careful scoping of FAIRness around the original Principles is even more important in light of the knowledge that the FAIR Principles anticipated, from the outset, that they would need to be fine-tuned and interpreted for individual research communities, with the final FAIR Principle (R1.3) stating that a FAIR digital object should follow community standards. Thus, there will be, intentionally, an expansion of the expected FAIRness behaviours over time as individual communities circumscribe the FAIR expectations within their domains. Ensuring high-quality, reliable, and appropriately focused community-driven assessments will also require a governance process that is transparent and broadly inclusive, and this has been a focus of the EOSC Task Force on FAIR Metrics and Data Quality. Finally, there is also an active movement around the adaptation of the FAIR Principles to digital objects other than data, e.g., software and workflows, with as yet few tools to evaluate their FAIRness. A similar trend could be expected here, i.e., variety in interpreting the FAIR Principles as they are applied to these non-data objects. As in the data case, this would lead to incoherent FAIR assessments across the tools and tests employed, in the absence of FAIRness governance.

Nevertheless, this whitepaper does not suggest that the FAIR Principles themselves require governance. The FAIR Principles represent a milestone in the evolution of scholarly data reuse and likely should remain untouched as written; specialised domains should extend those Principles autonomously to make them more relevant to their community. Those extensions, however, should be associated with transparent and objective assessment tools, which we suggest should be subject to review by the proposed governance process and should evolve together with the extensions, interpretations, and implementations of the Principles over time.

Box 1 below summarises the scope of this whitepaper and its critical objectives for defining a governance model, which is (i) motivated by the need to have understandable and trustworthy claims of FAIRness, and (ii) focused on helping evaluation tools and services deliver transparent and consistent results. The latter concerns what should be tested, how it aligns with metrics and maturity indicators, and how FAIRness levels can be presented qualitatively or quantitatively.

Box 1. FAIRness governance model: key objectives and indicators of success.

How will we know when we have identified an effective governance model for the FAIR assessment? We think it will have the following features:

  • 1.

    FAIRness will be objectively examined transparently and consistently:

The results of different tools and services that assist and evaluate levels of FAIRness are compatible and cohesive within each domain/community and, as much as possible, between communities. While this alignment is not the objective of the governance process, it will be a sign that governance is succeeding, since evaluation tools should naturally migrate towards trusted metrics (see feature #2).

  • 2.

    FAIRness needs to be universally understood and trusted:

    • Both producers and users of digital research objects have confidence in the results of these tools, processes, and services;

    • Producers are assisted in their attempts to improve their level of FAIRness.

  • 3.

    FAIRness needs to be tuned to all domain-specific needs and adapted to a diverse range of digital objects:

    • Communities have a process in place to collect requirements from their members;

    • All stakeholders ensure transparency, consistency, understandability, broad acceptance, and trust.

We would note that point 1 is already being addressed through a series of “hackathons” to align the metadata gathering workflows used by the various evaluation tools 7 . The outcomes of these events would likely become the substrate upon which a governance body credibly establishes itself.

3. Stakeholders groups and their use cases

Ten stakeholder types that can benefit from a FAIRness governance model are identified below. These are split into three main groups as follows:

  • The first group (A), FAIR practitioners, corresponds to those directly working on research, including individual researchers and domain-specific research communities, whose responsibility it is to apply the FAIR Principles to their work - both as consumers and producers of data - and utilise services that assist them in achieving this.

  • The second group (B), FAIR custodians, corresponds to stakeholders that will support FAIR in practice, also via research on FAIR itself, recommendations for FAIR adopters, or provision of tools making it easier for researchers to produce FAIR research and for those in the first group to assess FAIRness requirements. In this group are FAIR researchers, i.e., researchers whose field of research is the FAIR Principles and elements around them, and FAIR stewards, i.e., data or any other digital object stewards supporting the FAIR Principles.

  • The third group (C), the FAIR decision and policymakers, encompasses stakeholders that will require access to FAIRification plans, i.e., how the FAIR Principles will be supported and achieved, for different digital objects involved in a research process but is not in charge of FAIRifying those digital objects themselves. This group includes funding agencies, governments, and publishers.

The stakeholders mentioned in the sections below are not intended to be comprehensive; they are merely examples. This list of stakeholders is largely aligned with those identified by the Research Data Alliance Sharing Rewards and Credit Interest Group (RDA-SHARC-IG) 8 .

3.1. Group A: FAIR practitioners

3.1.1. Researchers and research-performing organisations (domain-agnostic). For this document, researchers are any individual or team that performs research, produces research outputs, and applies for grants to conduct that research. This includes research professionals from any field or discipline.

Use-case: A researcher has completed a project and is preparing to engage in the final stages of their data stewardship plan. This requires them to publish their data following the FAIR Principles. They have a variety of options, including (i) self-publication on the Web; (ii) publication in their domain-specific institutional repository; (iii) publication in one of several generic repositories. How can the researcher find the best option for their data that is also accepted by their funding body, especially if the option incurs costs (e.g., self-publication, large datasets)? Because the FAIRness governance body has endorsed and vetted a set of community-accepted metrics and tests, the researcher can easily compare the options by employing these tests using trusted and objective assessment tools. The test reports can then justify the decision, including possible costs (monetary, human resources), to their funding agency.

3.1.2. Research software engineers. Research Software Engineers (RSEs) are well-recognised stakeholders. They include professionals who combine software expertise with an understanding of research, such as software developers supporting research development. In this context, they develop software to collect, process, analyse, host, publish, and preserve data.

Use-case: A research developer in the botanical field wants to assess the FAIRness of the software (whether open source or not) and data, e.g., datasets or ontologies, used in their research. They register and richly describe these resources in the FAIR-4-Plants registry (the one recommended by their community), using FAIR-relevant descriptors such as terms of access, information on the use of (meta)data standards, and identifier schema. The engineer can start self-assessing these resources using any assessment tool recognised by the FAIRness governance body. They select the one preferred within their community, knowing that, regardless of what other FAIRness tools may be used by their funders or the publishers to whom they submit their research results, the FAIRness assessments will be compatible and will accurately reflect their FAIRification efforts.

3.1.3. Domain-specific research communities. Expert domains encounter dramatically different datasets – in terms of complexity, quality, secrecy, and volume (to name only a few). Thus, various domains will look at the FAIR Principles, and their assessment, through the lens of the kinds of data they collect, and the metadata requirements surrounding their domain-specific datasets.

Use-case: A group of researchers in the area of clinical trials creates a GO FAIR Implementation Network to identify and develop FAIR standards for clinical trial data. They agree that a pre-registration document is an element of metadata that should be required for any clinical trial. They define a novel FAIR Metric, specific to their community, which says that the metadata record for a clinical trial must contain a reference to a pre-registration document or an equivalent Web landing page, and that the pre-registration must be retrievable via HTTP(S). They then design the test for this Metric, wherein they decide that a specific predicate should point to this landing page/document (a sketch of such a test follows below). Finally, they submit the Metric, and its test, to the FAIRness governance body. The Metric is approved; however, the FAIR experts in the governance body note that minting novel predicates is not an optimally FAIR activity and suggest edits, such as selecting a more widely-used predicate. In a potentially iterative process, the test is re-written, re-evaluated, and eventually becomes a recommended test for the clinical trials "community" ("community" is defined as those who self-identify as stakeholders in clinical trial data reusability). FAIRness governance experts have vetted the community-based Metric; therefore, it will be trusted by other stakeholders.
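As a minimal sketch of what such a community-defined Metric test might look like (the predicate IRI and the shape of the metadata record are hypothetical assumptions, not part of the use-case), the following Python function passes only when the metadata record links a pre-registration document that resolves over HTTP(S):

```python
import requests  # third-party: pip install requests

# Hypothetical, community-minted predicate; as the use-case notes, the
# governance body might recommend a widely-used vocabulary term instead.
PREREG_PREDICATE = "https://example.org/clinical-trials/terms#preregistration"


def test_preregistration(metadata: dict) -> tuple[bool, str]:
    """Metric test: the record must link a resolvable pre-registration document."""
    target = metadata.get(PREREG_PREDICATE)
    if not target:
        return False, "No pre-registration link found in the metadata record."
    if not target.startswith(("http://", "https://")):
        return False, f"Pre-registration link is not an HTTP(S) IRI: {target}"
    try:
        response = requests.head(target, allow_redirects=True, timeout=10)
    except requests.RequestException as exc:
        return False, f"Pre-registration link did not resolve: {exc}"
    if response.status_code >= 400:
        return False, f"Pre-registration link returned HTTP {response.status_code}."
    return True, "Pre-registration document is retrievable via HTTP(S)."
```

Returning an explanatory message alongside the pass/fail verdict reflects the supportive, guidance-oriented role of assessment argued for in Section 2.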

3.2. Group B: FAIR custodians

3.2.1. FAIR support stewards & trainers. FAIR Stewards and Trainers support community-specific researchers in creating FAIR digital objects through design or FAIRification processes. FAIR stewards facilitate communication between data owners and FAIR experts, collecting and curating the FAIRification challenges passed from owner to expert, and curating and interpreting the suggested solutions. A FAIR steward would also advise on developing, implementing, and monitoring a research data management (RDM) policy, including support of FAIR data and Open Science principles 9 .

Use-case: The role of FAIR Stewards and Trainers is to guide and assist data producers in making "good choices" regarding technologies and standards to be used to achieve compliance with the FAIR Principles. A first step in performing the role of a Steward or Trainer is increasing FAIR literacy 10 for both data producers and FAIR implementers. In this role, there may be a scenario where a data producer approaches their FAIR Steward excited to have found an ontology that contains the precise concept they wish to describe in their data and asks that it be added to their metadata. The FAIR Steward runs a series of FAIRness assessments on the ontology and finds that it does not meet expected FAIR standards (e.g., the terms do not resolve or there is no usage licence). The FAIR Steward then contacts the FAIRness governance body to ask for advice. The FAIRness governance body engages with the ontology authors and advises them on how to improve their resources, making them more FAIR. The FAIR Steward can then confidently utilise this metadata with the assurance that it achieves a sufficient level of FAIRness. Optimally, this process would take a few days, to give timely feedback to the data producer; however, there may be cases where the request necessitates more extensive changes or development.

3.2.2. FAIR repositories. Organisations or infrastructures that support researchers in opening, sharing, or preserving FAIR digital objects.

Use-case: A repository must show its host organisation and funders that its data holdings and practices are FAIR. The repository is discipline-specific and has a mission for the long-term preservation of digital objects. They define a new Metric, specific to their community, which says that the metadata record must contain certain elements required for long-term preservation, along with a test for this Metric. As in the use case in 3.1.3, they submit the Metric and its test to the FAIRness governance body for feedback and vetting. Furthermore, since the repository's mission is long-term preservation, its collections include digital objects from a prolonged period, and they wish to keep them FAIR over time. Since that is not a trivial task and requires resources, the repository wonders how long a FAIR score remains valid, whether old versions of tests or metrics are still valid, and what constitutes a sufficient level of FAIRness over time. The repository contacts the FAIRness governance body to ask for advice. The FAIR governance experts provide recommendations about the validity of older versions of metrics and tests; they might even recommend a certification body for the repository to consider (see 3.2.5). Based on this feedback and these recommendations, the repository can improve its practices and add information to its website about its FAIRness and how it has been measured, so that funding agencies (as well as users) can easily see, without running tests themselves, that the repository enables FAIR and that the digital objects within it are FAIR at a sufficient level.

3.2.3. FAIR tools developers and operators. Software developers who support and maintain FAIR deployment tools and infrastructure, including metrics designers and evaluators.

Use-case: A discipline-agnostic community working on dataset aggregation has identified four mandatory metadata elements and three recommended ones that data providers should support so that data aggregators can present a coherent summary of aggregated datasets. They have found that the complete set of four mandatory metadata elements can be present in any dataset regardless of discipline, whereas the three recommended ones are only possible for some domains. They have designed a software-based set of FAIRness tests that explores metadata records for datasets and validates whether the four mandatory elements are present, checking the three recommended ones only for those disciplines where they apply (a sketch of such a test suite follows below). They have defined the conformance rules, i.e., possible values and cardinality, and have followed the guidelines provided by the FAIRness governance body, as they want their tool to be recognised and trusted by dataset aggregators. They have presented this work to the FAIRness governance body for endorsement. After accepting several suggestions for edits, the novel set of tests becomes endorsed for use. Funding and administrative bodies can now confidently interpret a dataset provider's success in conforming to this novel metadata standard.
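The following Python sketch illustrates what such a rule-driven test suite might look like. The element names, the allowed licence values, and the cardinality limits are hypothetical assumptions, used only to make the conformance-rule idea (possible values and cardinality) concrete:

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Rule:
    element: str                       # metadata element name (hypothetical)
    mandatory: bool                    # mandatory elements apply to every discipline
    min_count: int = 1                 # cardinality: minimum number of values
    max_count: int | None = None       # cardinality: maximum, None = unbounded
    allowed_values: set | None = None  # constrained value space, if any


# Four mandatory and three recommended elements, as in the use-case above.
RULES = [
    Rule("title", mandatory=True, max_count=1),
    Rule("identifier", mandatory=True, max_count=1),
    Rule("license", mandatory=True, allowed_values={"CC-BY-4.0", "CC0-1.0"}),
    Rule("publisher", mandatory=True),
    Rule("spatial_coverage", mandatory=False),   # recommended
    Rule("temporal_coverage", mandatory=False),  # recommended
    Rule("instrument", mandatory=False),         # recommended
]


def check(metadata: dict, discipline_supports: set) -> list:
    """Return conformance failures for one metadata record.

    Recommended elements are checked only when the record's discipline
    supports them, mirroring the community's design described above.
    """
    failures = []
    for rule in RULES:
        if not (rule.mandatory or rule.element in discipline_supports):
            continue  # recommended element not applicable to this discipline
        values = metadata.get(rule.element, [])
        if not isinstance(values, list):
            values = [values]
        if len(values) < rule.min_count:
            failures.append(f"{rule.element}: at least {rule.min_count} value(s) required")
        if rule.max_count is not None and len(values) > rule.max_count:
            failures.append(f"{rule.element}: at most {rule.max_count} value(s) allowed")
        if rule.allowed_values and any(v not in rule.allowed_values for v in values):
            failures.append(f"{rule.element}: value outside the allowed set")
    return failures


# Example: a record from a discipline that supports spatial coverage.
record = {"title": "Soil moisture 2020", "identifier": "doi:10.1234/abc",
          "license": "CC-BY-4.0", "publisher": "Example Repo"}
print(check(record, discipline_supports={"spatial_coverage"}))
# -> ['spatial_coverage: at least 1 value(s) required']
```

Expressing the conformance rules as data rather than code makes it straightforward for a governance body to review exactly what is being tested, and for the community to revise the rules without rewriting the tool.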

3.2.4. FAIR researchers. This group might include "metadata architects", EOSC researchers, PID implementers, and all communities (from different disciplines, backgrounds, or interests) interested in developing the FAIR Principles from a technical implementation point of view.

Use-case: A small enterprise has created a device that continuously monitors blood sugar in diabetic patients. The data stream from this device is designed to gather and publish additional contextual metadata that can help the patient's clinician better understand the patient's lifestyle and how it affects their blood sugar throughout the day. The metadata does not follow a single existing standard; rather, it combines several existing ones with some new elements to better reflect their model and the data they collect. The small enterprise has made significant efforts to ensure their data stream is FAIR, and they include this in their advertising for the product. A large medical devices corporation approaches them, interested in licensing the product; however, it challenges them on the claim of FAIRness: "prove it"! The small enterprise selects the appropriate set of governance-body-recommended metrics/tests for streaming biomedical data and generates a report that shows their compliance. Because the governance body has vetted the tests, the corporation is reassured that the claims are valid and licenses the product.

3.2.5. FAIR certification bodies. These bodies have established a set of FAIR compliance standards beyond individual FAIR Metric tests, e.g., the minimal subset of indicators, properties, or elements that must be met by a dataset or data collection to become "FAIR certified" by this organisation.

Use-case: An organisation offers FAIR certification for tools and data repositories as one of its services: an "I'm FAIR!" badge that can be applied to a website to let its users know that the certification body's tests are compliant with the FAIR Principles and accurately assess the FAIR behaviours promoted by the community they serve. Certification providers, therefore, must demonstrate that possessing their badge is a valid indication of conformance with FAIR that their clients will trust. Rather than designing their own set of tests, the certification authority board selects metrics, tests, and testing software recommended by the governance body and uses these in their certification workflow. They display the chosen metrics and tests publicly in their advertising, showing that these tests are governance-body recommended. In addition, they submit their certification evaluation workflow to the governance body to ensure it is entirely FAIR. This ensures that their certification is credible, giving it added value.

3.3. Group C. FAIR decision and policymakers

3.3.1. Funding agencies. The funding agencies referred to in this first group of stakeholders are institutions calling for grant applications, assessing the applications, providing financial support, and following up on the development of granted projects.

Use-case: A funding agency, whether public or private, commonly requires something in return for the investment provided via a grant, e.g., providing results back to the community. To this end, a funding agency would include a data stewardship component in its funding applications, requiring that data be published using FAIR formats and be "as open as possible and as closed as necessary". A successfully funded project in Rare Diseases submits its final report, claiming that its data have been made FAIR but cannot be made public due to their sensitivity. Such restrictions may be imposed by the funding body itself or by the specificity of the data collected and the subjects from whom they are collected. However, real-world experience also points to cases where these requirements are misunderstood (unknowingly or even deliberately) or misinterpreted, e.g., through lack of knowledge or competence, or resistance to fulfilling them given the complexity of the task and the resources (time, personnel, etc.) to be allocated. Needing to assess this claim of FAIRness, but unable to evaluate it themselves, the agency provides the project leaders with one or more assessment tools that the FAIRness governance body has endorsed. The project leader runs the tool on their data and submits the report to the agency. The agency can trust the assessment results because the governing body has approved the tool's use.

3.3.2. Governments. The "governments" stakeholder group includes government bodies shaping a particular economic area, such as agriculture or disease control.

Use-case: Depending on, for instance, geographical and economic characteristics, a government might want to favour the development of specific socio-economic outcomes. A tropical country with a high incidence of malaria would wish to minimise or eradicate the disease. It would be necessary to understand how the infection propagates, what facilitates its spread, and how it can be prevented, requiring data collection and analysis. The government of this country can define policies to encourage and promote FAIR research efforts around malaria and pass these through the FAIRness governance body to ensure they are properly scoped. In this way, researchers can more easily share results and build upon what others have done simply by following governmental policy. Such policies will influence researchers, research communities, national funding agencies, and research evaluation agencies.

3.3.3. Publishers. This group includes journal publishers that establish publishing policies, including policies for the publication of research data and other research results, as well as other publishers that might be important in different research fields (e.g., publication platforms, learned societies, etc.).

Use-case: Research publishers aim at providing trusted research results that facilitate further scientific advances. This is commonly done in scientific articles where text is the main component; however, scholarly publications are evolving. Many journal publishers already adhere to the FAIR Principles and ask for metadata regarding the data and software accompanying the text-based publication, including information about availability, access conditions, licences, etc. Data availability statements (DASs) are already required by many journals and are being strengthened to ensure data is deposited in FAIR-enabling repositories (examples in FAIRsharing). Many publishers have created data-centric article types and launched data-focused journals. These ensure that research digital objects, including data and software, are presented as first-class objects, and that their creators are credited and encouraged to share them. Data availability in the FAIRest way possible has become essential to advancing research. Among other benefits, this improves the reproducibility and replicability of research, positively affecting the trustworthiness and general quality of research, where making it FAIR means making it transparent. Journal publishers, therefore, will want access to trusted FAIRness assessment tools or outputs associated with the data to (i) assist and facilitate the peer review process; and (ii) create value filters/badges for the data articles. If such evaluations are not available, journal publishers may also want to (i) direct the authors to one or more tools powered by an approved set of FAIR Metric tests recommended by the FAIR Metrics governance body; or (ii) integrate the assessment into their existing internal submission system.

4. Stakeholder benefits from FAIRness governance

Table 1 summarises the stakeholders and how they could benefit from FAIRness governance, based on the groups and use cases described in Section 3.

Table 1. Stakeholders and examples of how they could benefit from FAIRness governance.

GROUP A: FAIR practitioners

Researchers and research-performing organisations (domain-agnostic). Researchers, in general, will be able to:
    •   Get advice from their communities on which FAIRness assessment tools best fit their needs
    •   Compare and select platforms supporting FAIR metadata, where possible tailored to their research community
    •   Compare and choose platforms supporting FAIR metadata for their particular digital object

Research Software Engineers. Software Engineers will be able to:
    •   Obtain a FAIRness assessment from a tool recognised by the governance body, together with a summary that they can publish along with their software and data

Domain-specific research communities. Research communities will be able to:
    •   Get advice from the FAIRness governance body on which FAIRness assessment tools can best be adjusted or adapted to their own needs
    •   Joining with FAIR researchers, provide metadata standards tailored to their community

GROUP B: FAIR custodians

FAIR support stewards & trainers. FAIR stewards will be able to:
    •   Assess the FAIRness of external digital objects used by the researchers they support, e.g., using one of the tools applying the guidelines provided by the FAIRness governance body
    •   Use guidelines provided by the FAIRness governance body to improve FAIRness from the digital object design phase
    •   Estimate the FAIRness of digital objects as they advance through the different stages of development and make any adjustments necessary to improve the final FAIRness

FAIR repositories. FAIR repositories will be able to:
    •   Use guidelines and recommendations provided by the FAIRness governance body to improve their practices
    •   Show their FAIRness to stakeholders

FAIR tools developers and operators. FAIR tools developers will be able to:
    •   Assess the FAIRness of the tools they develop as they progress through the different development stages, with the objective of incremental improvement
    •   Assess the FAIRness of the data they use in their software tools, knowing that the assessment is compatible no matter which FAIR evaluator tool they choose, so that, if needed, they can later move to another one

FAIR researchers. FAIR researchers will be able to:
    •   Define metadata standards that have been certified as FAIR so their communities can later use them in the creation of tools and data
    •   Identify FAIR metadata standards that can be used as part of their own standard
    •   Provide guidelines to their community regarding the creation of metadata and standards that will assure their FAIRness once they are ready to be assessed

FAIR certification bodies. FAIR certification bodies will be able to:
    •   Achieve and demonstrate their own FAIRness by following the guidelines provided by the FAIRness governance body
    •   Provide transparent information on how to obtain a certification that follows the guidelines of the FAIRness governance body

GROUP C: FAIR decision and policymakers

Funding agencies. Funding agencies will be able to:
    •   Identify trusted FAIR assessment tools that are assuredly within the scope of the FAIR Principles and provide a transparent assessment
    •   Suggest community-based FAIR assessment tools recognised by the FAIRness governance body (which, e.g., ensures that the tools are within the scope of the community)
    •   Understand and compare FAIR assessments provided by, e.g., their preferred tool (which may be a generic one) and the one used by the funded parties (which may be a community-specific one)

Governments. Governments will be able to:
    •   Create clear and transparent policies on the FAIRness of digital objects generated using public funding

Publishers. Publishers and journal editors will be able to:
    •   Select a trusted general-purpose FAIR assessment tool and integrate it into the submission platform so that published digital objects get a FAIR "badge"
    •   Define a research impact metric based on FAIRness by using an assessment tool that follows the FAIRness governance guidelines

5. Models for governance

Communities can operate under many different frameworks according to their own individual needs, which reflect the community's size, history, and mission. To support community-based initiatives that sit outside traditional grant-funded routes, or that are pre-competitive and broad, communities have followed several models: from loosely organised, time-limited working groups and informal grassroots initiatives to incorporated not-for-profits (NFPs). While all options provide a group of collaborators with a means to work together to build a community effort that benefits from increased public identity (branding) and independence, the incorporated NFP model also provides persistence and an ability to handle funds (e.g., from donations/sponsorship, registration for meetings, society memberships, journals, and private or public grants) with limited personal liability. There are, however, also independent unincorporated communities, unincorporated groups that use fiscal sponsors, and incorporated groups. The models under which a community works in a legal and financial framework (the governance model) form part of a continuum of increasing complexity.

Different organisations within the Internet standards domain have governance models that may provide practical approaches to how FAIRness assessments might be governed. The following subsections detail those with a solid international presence and those with a dedicated focus on the FAIR Principles and their practical application. The first subsection considers models from Web standards organisations: the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF). The second subsection focuses on international organisations concentrating on the FAIR Principles and their practical application: the European Open Science Cloud, the GO FAIR Initiative, and the Research Data Alliance Communities of Practice.

5.1. Models from web standards organisations

5.1.1. Internet Engineering Task Force. The Internet Engineering Task Force ( IETF) develops and promotes open standards through open processes. The mission of the IETF is to make the Internet work better by producing high-quality, relevant technical documents that influence the way people design, use, and manage the Internet. Working Groups are the primary mechanism for developing IETF specifications and guidelines, many of which are intended to be standards or recommendations. The standardisation mechanism of the IETF is the RFC (Request For Comments), published as Public Access Specifications (PAS). RFCs become official documents of Internet specifications, communications, procedures, and events, validated by the community.

5.1.2. W3C. The World Wide Web Consortium ( W3C) is an international community whose work revolves around standardising Web technologies. To accomplish this work, W3C follows processes that promote the development of high-quality standards based on consensus among the Membership, the Team, and the public. W3C processes promote fairness, responsiveness, and progress in all facets of the W3C mission. Various W3C groups enable W3C to pursue its mission by creating Web standards, guidelines, and supporting materials. W3C is governed by its membership, providing transparent participation of individuals from all stakeholder groups, and a sustainable organisational model through membership fees. W3C develops technical specifications and guidelines (as de facto standards) through a process designed to maximise consensus about the content of a technical report, ensure high technical and editorial quality, and earn endorsement by W3C and the broader community.

5.2. Models from organisations with a focus on data FAIRness

5.2.1. EOSC. The European Open Science Cloud (EOSC) FAIR Metrics and Data Quality Task Force will implement the proposed FAIR metrics for EOSC by assessing their applicability across research communities and testing a range of tools to enable uptake. Recommendations will be made to update metrics and adopt tools as appropriate. In addition, the group will undertake a state-of-the-art review to understand measures of data quality, conducting several case studies to identify standard features and dimensions in order to define an approach for EOSC.

5.2.2. GO FAIR Initiative. The Global Open (GO) FAIR initiative ( GO-FAIR) derives its governance from its stakeholders, via the coordinators of the community-driven GO FAIR Implementation Networks (INs), who make up the 'Stakeholders Forum', and from the National/International Support Offices. IN coordinators elect representatives to the Executive Board (EB), which consists of national and international support and coordination office members. Work priorities are primarily identified by the INs and communicated to the EB, which then ensures that the support offices are capable of providing short- and long-term support to those priorities and of finding synergies with other national and international initiatives.

5.2.3. RDA communities of practice. In the Research Data Alliance ( RDA), Communities of Practice (CoPs) form to build discipline- or domain-specific communities within RDA and to investigate, discuss, and provide knowledge and skills on discipline- and research-domain issues relevant to the community and to RDA. CoPs are composed of experts from the community who are interested in the research domain/discipline and are committed to directly or indirectly enabling data sharing, exchange, and interoperability. CoPs serve as platforms for communication and coordination among individuals with shared interests, bridging communities outside and within the RDA.

These are only a few examples of how a FAIRness governance body could be organised. What is clear, however, is that the impetus to establish such a body must come from the stakeholder communities themselves. With this in mind, we are proposing this whitepaper as an invitation to begin such a discussion, one that we will now seed with our perspectives on what features are expected from a body claiming governance of FAIR assessment.

5.3. Key considerations for governance of FAIR assessment

There are several features of FAIR assessment governance that make it distinct from many of the organisations mentioned above. For example:

  • 1)

After an initial bootstrapping phase to address the existing set of metrics and tools, the frequency of updates will be modest; thus, it is pragmatic to expect that stakeholders will participate largely without financial remuneration, given the small time investment involved.

  • 2)

The task of the governance body will be largely the same over time and for all cases, distinguishing it from all of the bodies above, which have working groups or communities of practice that address distinct tasks or standards.

  • 3)

The makeup of the assessment governance body for any given new request must include (at a minimum) a domain expert who knows the domain standards and data semantics; a FAIR expert who understands the Principles and their intent; and a software developer who will ensure that the assessment tool executes the assessment accurately. Thus, these governance bodies are likely to be extremely fluid in their membership and short-lived, being established for very specific tasks.

Despite point 3 above, we anticipate that there will need to be a core of trusted stakeholders, primarily representing the FAIR community (both public and private), who will assemble appropriate membership teams for these short-term tasks. We suggest that, similar to W3C governance, membership of this core group be time-limited; members, however, may only be selected and/or nominated by other similar experts, and perhaps might serve on a rotational basis, to ensure the required expertise remains in place for effective governance.

A body organised this way would compare favourably with the governance of traditional journal peer review, which demonstrably scales. As in traditional peer review, there is a core editorial board, and domain experts are then selected on a case-by-case basis for a limited-duration review process, after which the group is dissolved. It seems reasonable that the EOSC FAIR Metrics and Data Quality Task Force would act as the seed organisation to begin this stakeholder discussion, as well as put forward Task Force members as nominees for the inaugural governance body, to be voted on by the self-assembled stakeholder community.

6. Closing thoughts

In 2016, the High-Level Expert Group advising the European Commission on the nascent EOSC made a firm recommendation regarding FAIR data publishing: "Horizon 2020 should only support projects that properly address Data Stewardship [and those] that do not specify FAIR conditions… should not be eligible for funding" ( Realising the Open Science Cloud, p. 18). Since then, FAIR has figured with increasing prominence in Horizon 2020 instructions, including the running of a (voluntary) FAIR-focused data stewardship pilot for all Horizon 2020 funded research projects. Similar requirements are appearing in many other countries and organisations. This is, however, somewhat of a trap due to the lack of a FAIRness assessment governance body.

Researchers, seeing that their funding or ability to publish may shortly depend on their adherence to the FAIR Principles, have little choice but to claim to be FAIR. The funding agency or journal editor, in turn, has no way to validate those claims, because no identifiable body exists from which a trusted, expert-vetted set of assessment tests could be recommended. The emergent cottage industry of FAIRness assessment tool creation generates products that produce strikingly different results, even for the same digital objects. Thus, FAIRness is "stuck": an increasingly common research and publishing requirement, yet still an unmeasurable set of ideals.

This paper recognises two main areas where FAIRness governance is required now and will be required in the longer term. On the one hand, there is an immediate need to harmonise the interpretation and evaluation of existing maturity indicators and FAIRness assessment tools, in particular the Maturity Indicators 11 that arose from the RDA FAIR Maturity Model Working Group; on the other hand, the FAIR Principles, in particular Principle R1.3: "(Meta)data meet domain-relevant community standards" (also tackled by the RDA FAIRsharing WG), forecast the extension of maturity indicators in a community-specific manner over time. In addition, differences across digital objects and the need to adapt the original FAIR Principles, mostly tailored to data, have already been recognised by government bodies such as the European Commission 12 . There are ongoing efforts to adapt the FAIR Principles to research software 13 (a joint working group hosted at RDA), computational workflows 14 , machine learning 15 (also discussed at events such as RDA plenaries 17 and 18), and Virtual Research Environments ( via a working group at RDA). This will require a degree of oversight in the longer term to ensure that the new assessments stay true to the intent of the Principles and are correctly focused on relevant, measurable FAIR features.

The benefits of establishing such a governance body are highlighted by identifying ten distinct stakeholder communities in hypothetical use cases and exploring examples of how each could take advantage of a trusted and consistently applied set of metrics and tests. These communities cannot be appropriately served until FAIRness assessment reaches an adequate level of professionalism, which, this paper argues, in turn requires FAIRness governance. Perhaps the most substantial benefit of FAIRness governance is that it should increase the trust of those being evaluated. As FAIRness increasingly becomes a funding and publishing requirement, the stakeholder community must not feel they are being unfairly judged; rather, a well-governed assessment landscape should lead to greater appreciation for the value of FAIR data in itself, and trust in and appreciation for the guidance and assistance that can be gleaned from the results of a FAIRness evaluation.

The need for governance around Web technologies and standards is not unique to FAIR. It has been addressed by a wide range of internationally scoped organisations whose outputs need to be trusted by both academic and commercial stakeholders. A variety of these is discussed, examining their governance models and exploring their benefits and possible deficiencies concerning the task of governance of FAIRness assessments.

The current state of FAIRness assessment metrics and tools does not provide sufficient accuracy and reproducibility to be fit for purpose. A wide range of stakeholders will benefit from the emergence of a governance mechanism whereby existing and future FAIRness metrics and metric tests can be assumed to be (a) adequately aligned with the intention of the FAIR Principles, and (b) well-designed and cognizant of appropriate standards while recognising differences between communities and disciplines. This document is presented as an invitation to FAIR stakeholders to initiate a focused discussion around the need for FAIRness governance. This grassroots group could begin to jointly design a trusted governance mechanism that is broadly representative of all stakeholder communities, appropriately scoped, and self-sustaining. As an initial forum for these discussions, we have created an open Google Group ( fair-assessment-governance@googlegroups.com; membership can be requested). This forum can be used to provide feedback on this whitepaper, explore additional options for a governance structure, and provide opinions regarding the adequacy of the scope of this initial framing of the needs. All readers are invited to participate in this conversation as individuals or as representatives of stakeholder groups.

Ethics and consent

Ethical approval and consent were not required.

Acknowledgements

We thank the members of the EOSC Task Force on FAIR Metrics and Data Quality for their time invested in reviewing this manuscript, including Marjan Grootveld (DANS), Chris Schubert (TU Wien), Josefine Nordling (CSC - IT Center for Science), Sumithra Velupillai (SRC - Swedish Research Council).

An earlier version of this article was published as part of the EOSC working groups reports ( https://doi.org/10.5281/zenodo.7390482).

Funding Statement

MDW acknowledges funds supporting his work, including the EJP RD initiative, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement N°825575. SAS acknowledges funds that support her, including FAIRplus (IMI 802750), EOSC-Life (H2020 EU 824087), FAIRsharing (Wellcome 212930), Precision Toxicology (H2020 EU 965406), UKRI DASH grant (MR/V038966/1), BY-COVID (Horizon EU 101046203), AgroServ (Horizon EU 101058020), and the ELIXIR Interoperability Platform. EM acknowledges support from FAIRsFAIR Fostering FAIR Data Practices in Europe (funding number 831558). RD was supported by the EOSC-Life European programme under grant agreement Nº824087. LJC acknowledges support from the project NFDI4DataScience, part of the German National Research Data Infrastructure funded by the Joint Science Conference (GWK) and the German Research Foundation (DFG), funding number NFDI 34/1.

[version 2; peer review: 3 approved, 1 approved with reservations]

Data availability

No data are associated with this article.

References

  • 1. Wilkinson MD, Dumontier M, Aalbersberg IJJ, et al. : The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016;3: 160018. 10.1038/sdata.2016.18 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. Haendel M, Su A, McMurry J: FAIR-TLC: Metrics to Assess Value of Biomedical Digital Repositories: Response to RFI NOT-OD-16-133. Zenodo. 10.5281/zenodo.203295 [DOI] [Google Scholar]
  • 3. Gallo RB: Data Quality with FAIR principles, an introduction - The Hyve. In: The Hyve. 28 Apr 2021; [cited 10 May 2022]. Reference Source
  • 4. Bontemps C, Orozco V: Toward a FAIR Reproducible Research. In: Daouia A, Ruiz-Gazen A, editors. Advances in Contemporary Statistics and Econometrics. Cham: Springer International Publishing;2021;595–613. 10.1007/978-3-030-73249-3_30 [DOI] [Google Scholar]
  • 5. Directorate-General for Research and Innovation (European Commission): Towards a reform of the research assessment system: scoping report. LU: Publications Office of the European Union;2021. 10.2777/707440 [DOI]
  • 6. Soiland-Reyes S, Sefton P, Crosas M, et al. : Packaging research artefacts with RO-Crate.2021. 10.5281/zenodo.5146228 [DOI] [Google Scholar]
  • 7. Wilkinson MD, Sansone S, Marjan G, et al. : FAIR Assessment Tools: Towards an "Apples to Apples" Comparisons. Zenodo. 2022. 10.5281/zenodo.7463421 [DOI] [Google Scholar]
  • 8. David R, Mabile L, Specht A, et al. : Implementing FAIR in data sharing: who are the actors and their responsibilities? Poster presented: March 30, 2020. 10.5281/zenodo.3743946 [DOI] [Google Scholar]
  • 9. dos Santos Vieira B, Bernabé CH, Zhang S, et al. : Towards FAIRification of Sensitive and Fragmented Rare Disease Patient Data: Challenges and Solutions in European Reference Network Registries.2022. 10.21203/rs.3.rs-1572508/v1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. David R, Mabile L, Specht A, et al. : FAIRness Literacy: The Achilles' Heel of Applying FAIR Principles. Data Sci J. 2020;19(1):32. 10.5334/dsj-2020-032 [DOI] [Google Scholar]
  • 11. FAIR Data Maturity Model Working Group: FAIR Data Maturity Model. Specification and Guidelines.2020. 10.15497/rda00050 [DOI] [Google Scholar]
  • 12. European Commission, Directorate-General for Research and Innovation: Six Recommendations for implementation of FAIR practice by the FAIR in practice task force of the European open science cloud FAIR working group. LU: Publications Office of the European Union;2020. 10.2777/986252 [DOI] [Google Scholar]
  • 13. Hong NC, Katz DS, Barker M, et al. : FAIR Principles for Research Software (FAIR4RS Principles). RDA,2021. 10.15497/RDA00065 [DOI] [Google Scholar]
  • 14. Goble C, Cohen-Boulakia S, Soiland-Reyes S, et al. : FAIR Computational Workflows. Data Intelligence. 2020;2(1–2):108–121. 10.1162/dint_a_00033 [DOI] [Google Scholar]
  • 15. Castro LJ, Katz DS, Psomopoulos F: Working Towards Understanding the Role of FAIR for Machine Learning. PUBLISSO;2021. 10.4126/FRL01-006429415 [DOI] [Google Scholar]
Open Res Eur. 2024 Feb 15. doi: 10.21956/openreseurope.17923.r36887

Reviewer response for version 2

Sheeba Samuel 1

The article aims to establish the need for a governance model centered around a standardized definition of FAIRness. The article discusses the challenges associated with implementing the assessment of the FAIR (Findable, Accessible, Interoperable, and Reusable) Data Principles. It highlights the lack of a conclusive method to determine the required level of FAIRness for research artifacts, leading to confusion among stakeholders. The FAIR Principles, while covering various digital objects, focus on reusability and provide principles rather than standards, resulting in varied interpretations. The whitepaper identifies three stakeholder categories (FAIR practitioners, custodians, and decision and policymakers) and provides use cases where stakeholders can benefit from FAIR assessment governance. The classification of stakeholders into three distinct groups provides a clear structure for understanding the diverse roles and responsibilities in the FAIR governance model. It also examines governance models from other organizations to gain insights into the requisite decision-making processes.

The timeliness of FAIRness assessment governance is evident due to multiple interpretations of FAIR principles by different FAIR assessment metrics and tools. The article explicitly outlines the stakeholders and their respective roles in benefiting from FAIRness assessment governance.

However, a number of questions arise regarding the implementation of FAIRness assessment governance.

  1. Can the diversity in community-driven governance models for FAIRness assessments still lead to varying interpretations of FAIR principles and their assessment?

  2. How can FAIRness governance ensure a standardized understanding and implementation of these principles across different communities?

  3. How can potential challenges or conflicts that might arise among different stakeholder groups be addressed, offering a more nuanced perspective on the practical implementation of the proposed governance model? How can the FAIRness governance model adapt to evolving changes in the roles and responsibilities of stakeholders over time?

  4. Though these questions are out of scope for this paper, I look forward to the discussions that the paper will generate on implementing the governance model for FAIRness assessment.

Where applicable, are recommendations and next steps explained clearly for others to follow? (Please consider whether others in the research community would be able to implement guidelines or recommendations and/or constructively engage in the debate)

Yes

Does the article adequately reference differing views and opinions?

Yes

Are all factual statements correct, and are statements and arguments made adequately supported by citations?

Yes

Is the rationale for the Open Letter provided in sufficient detail? (Please consider whether existing challenges in the field are outlined clearly and whether the purpose of the letter is explained)

Yes

Is the Open Letter written in accessible language? (Please consider whether all subject-specific terms, concepts and abbreviations are explained)

Yes

Reviewer Expertise:

My research focuses on reproducibility, FAIR principles, knowledge engineering, provenance, and research data management.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Open Res Eur. 2024 Jan 30. doi: 10.21956/openreseurope.17923.r36884

Reviewer response for version 2

Luiz Gadelha 1

In this manuscript, the authors recognize that there is currently no widely accepted, systematic, and conclusive way of assessing research artifacts with respect to the extent to which they follow the FAIR principles. Models for FAIR assessment governance inspired by well-known standardization bodies and technical associations are presented and discussed. In the context of the FAIR principles, they associate governance with methodologies, techniques, and tools agreed by the community for supporting the FAIR principles. Key objectives and indicators of success were identified: the objectiveness and transparency of evaluation, the understanding and trust in the assessment process, and the adaptability of FAIRness criteria to the requirements of different domains.

They identify three levels of stakeholders and provide examples of requirements for each level. These stakeholder groups are defined according to their role with respect to the FAIR principles: (A) practical application of FAIR Principles in research, (B) providing support and tools, and (C) decision-making and policy formulation without direct involvement in FAIRification.

Overall, the manuscript makes an important contribution: a much-needed proposal for the governance of FAIR Principles evaluations, with identification of stakeholders and use cases, that will benefit the research data community considerably. A few comments and suggestions follow:

In section 3.3.3, regarding creating value filters/badges for data articles, it might be interesting to mention the ACM Artifact Review and Badging policy (https://www.acm.org/publications/policies/artifact-review-badging) as an example from the reproducibility point of view.

In section 5.3, the authors state that "the frequency of updates will be modest" and that "The task of the governance body will largely be the same over time for all use cases". However, it may be the case that, as new communities join, new requirements will be brought in or discovered, and new properties will also become subject to testing/verification. Data management best practices include not only the FAIR principles, as evidenced by the Five Safes Framework. I believe that the activities and responsibilities of such governance entities will be open-ended and constantly evolving. In this direction, the authors state in the manuscript that an expansion of expected FAIRness behaviors over time is expected, as the FAIR principles are adapted/contextualized to different domains.

Some abbreviations are not completely defined, or no references/links are provided, for example:

FAIR-4-Plants on sec. 3.1.2

GO FAIR Implementation Network on sec. 3.1.3

EOSC FAIR Metrics and Data Quality Task Force 

Minor typographic errors:

Pg. 5, fourth paragraph; sec. 5.2.3; and sec. 6: whitepaper

Sec. 3.1.3: edits, such as...

Sec. 5.1.2: should be World Wide Web Consortium

Where applicable, are recommendations and next steps explained clearly for others to follow? (Please consider whether others in the research community would be able to implement guidelines or recommendations and/or constructively engage in the debate)

Yes

Does the article adequately reference differing views and opinions?

Yes

Are all factual statements correct, and are statements and arguments made adequately supported by citations?

Yes

Is the rationale for the Open Letter provided in sufficient detail? (Please consider whether existing challenges in the field are outlined clearly and whether the purpose of the letter is explained)

Yes

Is the Open Letter written in accessible language? (Please consider whether all subject-specific terms, concepts and abbreviations are explained)

Yes

Reviewer Expertise:

Scientific data management, scientific workflows, provenance, and high performance computing

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

References

  • 1. Artifact Review and Badging – Version 1.0 (not current). Association for Computing Machinery. 2020. https://www.acm.org/publications/policies/artifact-review-badging
Open Res Eur. 2023 Sep 27. doi: 10.21956/openreseurope.17923.r34940

Reviewer response for version 2

Mark Leggott 1

I find that the authors have addressed the suggestions/concerns in my original review, and I look forward to following the discussion that the article will generate. 

The word "scuach" needs to be corrected.

Where applicable, are recommendations and next steps explained clearly for others to follow? (Please consider whether others in the research community would be able to implement guidelines or recommendations and/or constructively engage in the debate)

Yes

Does the article adequately reference differing views and opinions?

Partly

Are all factual statements correct, and are statements and arguments made adequately supported by citations?

Yes

Is the rationale for the Open Letter provided in sufficient detail? (Please consider whether existing challenges in the field are outlined clearly and whether the purpose of the letter is explained)

Yes

Is the Open Letter written in accessible language? (Please consider whether all subject-specific terms, concepts and abbreviations are explained)

Yes

Reviewer Expertise:

I am a practitioner with a focus on: digital research infrastructure, FAIR Principles, research data management, community building, research software, governance.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Open Res Eur. 2023 Mar 14. doi: 10.21956/openreseurope.16609.r30516

Reviewer response for version 1

Mark Leggott 1

The article deals with an important gap in the research data ecosystem: realizing the full potential of the FAIR principles via open and robust mechanisms, including oversight. In particular, the emphasis on providing "supportive advice" in response to a FAIR assessment seems particularly compelling, and more likely to see an uptake of FAIR approaches. The details regarding stakeholders and associated use cases are especially helpful. The point made in section 6 that "Researchers, seeing that their funding or their ability to publish may shortly be dependent on their adherence to the FAIR Principles, have little choice but to claim to be FAIR." is quite true in my experience. The challenge is how to best move the proposed approach forward in a global context where FAIR is now a part of a very complex fabric.

Some of the Use Cases do more to highlight the potential for a substantial bottleneck in individuals sending questions or requests to the "FAIRness governance body" than to illustrate the use case. This issue (the bottleneck) could be expanded on in 5.3 or another appropriate section. In fact, this could be a fourth feature of such a body in section 5.3.

There is no discussion of another option for a "FAIRness governance body", that of a looser but coordinated federated model of existing bodies and organizations, building on the strengths of each. This could be presented in section 5.2 or 5.3 as a possible model.

I might also have expected to see mention of a "FAIRness governance body" that could be focused primarily on the stakeholders in section 3.2, which may present a more sustainable approach to achieving the laudable, but challenging, goal highlighted in the article. In other words, a train-the-trainer approach to achieving the goals.

I found myself returning to the traditional model of peer review as I read this document, and how that contrasts with the development of a "FAIRness governance body". The traditional peer review approach, while arguably flawed in some respects, is clearly sustainable in an ecosystem with a high volume of material. The authors may want to consider contrasting their proposed approach with others like peer review, especially with respect to sustainability. Having said that, this letter does at least get the conversation started, so it is a very valuable and timely intervention in the landscape.

I realize the value of grounding a discussion of the proposed approach with the EOSC FAIR Metrics and Data Quality Task Force, but I sensed a disconnect between the focus of this group (i.e., including data quality, which the article does not discuss) and the intent of the proposed approach. It may come down to a practical decision about how best to move the conversation forward, and that generally points to an existing body or group, or assemblage of same.

There are some venerable and effective governance models/organizations listed in Section 5: I'm not sure that the FDO Forum is a good fit in this list, as it itself is still wrestling with the "right" approach to governance.

The use case for 3.1.3 seems a bit complex, and a bit more of an "inside baseball" example than seems appropriate for this document.

Additional links or references could be provided to ensure the reader is able to understand the context. For example, mention of the GO FAIR Implementation Network in 3.1.3 could include a link/citation.

I assume the article will undergo a copyedit? There are a substantial number of mistakes and errors that need to be corrected.

In summary, I believe this will be an important stage in the conversation on the "operationalization" of the FAIR principles, and is well suited to the function of a Letter. While I have suggested some areas for improvement, they would be easily addressed, and result in a nice addition to the global conversation.

Where applicable, are recommendations and next steps explained clearly for others to follow? (Please consider whether others in the research community would be able to implement guidelines or recommendations and/or constructively engage in the debate)

Yes

Does the article adequately reference differing views and opinions?

Partly

Are all factual statements correct, and are statements and arguments made adequately supported by citations?

Yes

Is the rationale for the Open Letter provided in sufficient detail? (Please consider whether existing challenges in the field are outlined clearly and whether the purpose of the letter is explained)

Yes

Is the Open Letter written in accessible language? (Please consider whether all subject-specific terms, concepts and abbreviations are explained)

Yes

Reviewer Expertise:

I am a practitioner with a focus on: digital research infrastructure, FAIR Principles, research data management, community building, research software, governance.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Open Res Eur. 2023 Sep 4.
Leyla Jael Castro 1

Community-driven governance of FAIRness assessment: an open issue, an open discussion

We thank the reviewers for taking the time to go through our manuscript. The comments have helped us improve the paper. We present here answers to the questions raised by reviewers and how we addressed the comments in the revised version.

Reviewer 2

The article deals with an important gap in the research data ecosystem: realizing the full potential of the FAIR principles via open and robust mechanisms, including oversight. In particular, the emphasis on providing "supportive advice" in response to a FAIR assessment seems particularly compelling, and more likely to see an uptake of FAIR approaches. The details regarding stakeholders and associated use cases are especially helpful. The point made in section 6 that "Researchers, seeing that their funding or their ability to publish may shortly be dependent on their adherence to the FAIR Principles, have little choice but to claim to be FAIR." is quite true in my experience. The challenge is how to best move the proposed approach forward in a global context where FAIR is now a part of a very complex fabric. Some of the Use Cases do more to highlight the potential for a substantial bottleneck in individuals sending questions or requests to the "FAIRness governance body" than to illustrate the use case. This issue (the bottleneck) could be expanded on in 5.3 or another appropriate section. In fact, this could be a fourth feature of such a body in section 5.3.

Response We address this directly in Section 5.3, point #1. The bottleneck (if it exists) will be for tool developers: general questions about FAIR, and issues with implementing it, will not come to this governance body but will be managed by the practitioners' own FAIR communities. Since FAIR testing began in late 2016, only 25 tests have been created for one of the most popular automated testing services (the FAIR Evaluator), despite a completely open GitHub-based test submission process. As such, we do not expect many tests to be submitted, nor at a frequency that would create a bottleneck. Moreover, as described in Section 5.3, we anticipate that the governance body will be composed of short-lived working groups, with differing membership in each instance. This should scale, even if the number of submissions is higher than expected.

There is no discussion of another option for a "FAIRness governance body", that of a looser but coordinated federated model of existing bodies and organizations, building on the strengths of each. This could be presented in section 5.2 or 5.3 as a possible model.

Response How the FAIRness governance body communicates and coordinates with other related organizations would be part of the activities of such a body. This is outside the scope of this paper. 

I might also have expected to see mention of a "FAIRness governance body" that could be focused primarily on the stakeholders in section 3.2, which may present a more sustainable approach to achieving the laudable, but challenging, goal highlighted in the article. In other words, a train-the-trainer approach to achieving the goals.

Response We agree that a train-the-trainer approach (i.e., training stakeholders) would be a good way to distribute the expertise that leads to good evaluation tool design; however, we see this as a role for the FAIR communities and the FAIR custodians rather than for the FAIRness Governance body. There still needs to be an objective measure of success, provided by some trusted “authority”. The sustainability of peer review is demonstrable in the scholarly publishing community.

I found myself returning to the traditional model of peer review as I read this document, and how that contrasts with the development of a "FAIRness governance body". The traditional peer review approach, while arguably flawed in some respects, is clearly sustainable in an ecosystem with a high volume of material. The authors may want to consider contrasting their proposed approach with others like peer review, especially with respect to sustainability. Having said that, this letter does at least get the conversation started, so it is a very valuable and timely intervention in the landscape.

Response This is an excellent observation, and is very much aligned with how we had envisioned governance would work (though it is up to the seed group to make the final determination).  Text has been added to the end of Section 5.3 to compare our initial ideas with the well-established process of peer-review.

I realize the value of grounding a discussion of the proposed approach with the EOSC FAIR Metrics and Data Quality Task Force, but I sensed a disconnect between the focus of this group (ie. including data quality, which the article does not discuss) and the intent of the proposed approach. It may come down to a practical decision about how best to move the conversation forward, and that generally points to an existing body or group, or assemblage of same.

Response The EOSC FAIR Metrics and Data Quality Task Force has two sub-teams: one focusing on FAIR Metrics and the other on Data Quality. This paper is the output of the first sub-group; for this reviewer's information, the second group has produced the following report: https://doi.org/10.5281/zenodo.7515816

There are some venerable and effective governance models/organizations listed in Section 5: I'm not sure that the FDO Forum is a good fit in this list, as it itself is still wrestling with the "right" approach to governance.

Response The reviewer raises a fair point. After exploring the FDO Forum's nascent governance mechanism and discussing it with participants, we agree that it is still too undeveloped to be listed here. That paragraph has been removed and the text modified to reflect this.

The use case for 3.1.3 seems a bit complex, and a bit more of an "inside baseball" example than seems appropriate for this document.

Response This use-case has been simplified.

Additional links or references could be provided to ensure the reader is able to understand the context. For example, mention of the GO FAIR Implementation Network in 3.1.3 could include a link/citation.

Response A footnote pointing to https://www.go-fair.org/implementation-networks/ has been added to page 8.

I assume the article will undergo a copyedit? There are a substantial number of mistakes and errors that need to be corrected.

Response We have identified typos and corrected them.

In summary I believe this will be an important stage in the conversation on the "operationalization" of the FAIR principles, and is well suited to the function of a Letter.

While I have suggested some areas for improvement, they would be easily addressed, and result in a nice addition to the global conversation.

Open Res Eur. 2023 Feb 17. doi: 10.21956/openreseurope.16609.r30663

Reviewer response for version 1

Françoise Genova 1

The paper develops the case for establishing a community-driven governance of FAIRness assessment. The paper is based on a whitepaper already vetted in the EOSC context, but this referee used the criteria for publication in Open Research Europe to prepare this report.

The whitepaper was included in the workflow of the EOSC Task Force on FAIR Metrics and Data Quality, and the EOSC Board reviewed the final revision before publication, which means that the authors likely already gathered a range of comments from people with different knowledge and profiles. It aims at being “a starting point to foster an open discussion around FAIRness governance and mechanisms”, and an open Google Group was created to start the conversation.

The paper explains in detail the rationale for establishing this community-driven governance. Key stakeholder groups and their use cases are identified and described to support the proposal. It may be useful, however, to consider an additional use case for a specific kind of “FAIR practitioners”: some domain-specific research communities, including this reviewer’s own, astronomy, have been organising themselves for a long time at the international level to define data exchange standards which cover many aspects of FAIR while taking into account the specific community requirements – well in line with the understanding that “the FAIR principles would need to be fine-tuned and interpreted for individual research communities.” These standards are widely implemented by data and tool producers and used by the community. One can guess that these communities will be considered somehow in the governance process, but their needs and capacities to contribute are somewhat different from the use case developed in 3.1.3, which describes a community starting to develop its FAIR standards. The proposed governance is aimed at improving the situation on FAIR as a whole, and it would be interesting to understand how these kinds of legacy aspects would be taken into account.

A number of organisations are cited as possible models for governance in Section 5. The length and level of detail provided for the different organisations in Section 5.2 should be aligned. Also, the RDA could be cited by itself as a relevant community-driven organisation, also mentioning its Communities of Practice.

For the next steps, it would be useful to summarize in a few sentences the activities proposed for the governing body, to clarify what is proposed as a starting point for the discussions during the next step. The third paragraph of Section 6 discusses on the one hand the alignment of FAIR criteria and tools and services developed to measure and enable FAIRness, and on the other hand the overseeing of the developments linked to FAIR for other research objects – these developments are currently done in other contexts. In addition, Box 1 states that “communities will have a process in place to collect requirements from their members”. Does this mean that the governance body will be the place where a discussion on FAIR in a specific context (domain specific or adapted to other digital objects) will be possible, since it seems that the governing body will be the place where requirements will be collected? Or will the governing body collect requirements (and then fine-tune FAIR?) for communities which do not have other vehicles to do it? One could also understand from the paper that the governing body would rather be a clearinghouse for FAIR activities mostly developed elsewhere, which is somehow different and would not require the same amount of resources. Also, the authors seem to be confident in the fact that the FAIRness evaluation tools can be aligned, which is a core and critical role of the governing body in the current landscape. Is there a need to explicitly foresee a feasibility study which would inform the discussion to come?

It is important to insist on the fact that the process should be community driven, as stated for instance in the paper title, and to enable it. Fostering an open debate as proposed is the right path forward. It could be noted that it will be particularly useful to find ways to engage with a wide variety of research communities since their role in fine-tuning and interpreting FAIR is recognised and their organisations and data cultures can be very different. Their interaction with the “FAIR custodians” to build a common understanding of the role and processes of the proposed governance body during the discussion phase is critical for success. Another critical point which could be mentioned is internationalisation, since by essence the whitepaper is produced in the European context. Also, the paper states that “ It seems reasonable that the EOSC FAIR Metrics and Data Quality Task Force would act as both the seed organization to begin this stakeholder discussion, as well as put forward TF members as nominees for the inaugural governance body, to be voted-on by the self-assembled stakeholder community.” It would be useful to explain how the possible perception of conflict of interest will be mitigated.

Finally, it is appropriately stated in Section 2 that “FAIRness must not be used or perceived as an instrument to judge and punish”, but as explained in Section 6 there is in practice an increasing tendency towards FAIRness being a condition for funding, also following a recommendation of the initial High Level Expert Group on EOSC. It would be useful to remind readers in the concluding thoughts that FAIR is essential as a starting point to improve science practices towards more reusability and reproducibility, beyond any top-down requirements from governance or funders.

Additional comments:

Section 1

  • In the first paragraph, there is a mention of “Different research funders in other countries”, not sure about what “other” refers to.

  • The recent creation of the Coalition for Advancing Research Assessment (CoARA, https://coara.eu/) could be mentioned in addition to Reference 5. It was launched on December 12th, 2022, just before the completion of the first version of this paper.

Where applicable, are recommendations and next steps explained clearly for others to follow? (Please consider whether others in the research community would be able to implement guidelines or recommendations and/or constructively engage in the debate)

Partly

Does the article adequately reference differing views and opinions?

Yes

Are all factual statements correct, and are statements and arguments made adequately supported by citations?

Yes

Is the rationale for the Open Letter provided in sufficient detail? (Please consider whether existing challenges in the field are outlined clearly and whether the purpose of the letter is explained)

Yes

Is the Open Letter written in accessible language? (Please consider whether all subject-specific terms, concepts and abbreviations are explained)

Yes

Reviewer Expertise:

Science data sharing. Astronomy.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Open Res Eur. 2023 Sep 4.
Leyla Jael Castro 1

Community-driven governance of FAIRness assessment: an open issue, an open discussion

We thank the editor and reviewers for taking the time to go through our manuscript. The comments have helped us improve the paper. We present here answers to the questions raised by reviewers and how we addressed the comments in the revised version.

Reviewer 1

The paper develops the case for establishing a community-driven governance of FAIRness assessment. The paper is based on a whitepaper already vetted in the EOSC context, but this referee used the criteria for publication in Open Research Europe to prepare this report.

The whitepaper was included in the workflow of the EOSC Task Force on FAIR Metrics and Data Quality, and the EOSC Board reviewed the final revision before publication, which means that the authors likely already gathered a range of comments from people with different knowledge and profiles. It aims at being “a starting point to foster an open discussion around FAIRness governance and mechanisms”, and an open Google Group was created to start the conversation.

The paper explains in detail the rationale for establishing this community-driven governance. Key stakeholder groups and their use cases are identified and described to support the proposal. It may be useful, however, to consider an additional use case for a specific kind of “FAIR practitioners”: some domain-specific research communities, including this reviewer’s own, astronomy, have been organising themselves for a long time at the international level to define data exchange standards which cover many aspects of FAIR while taking into account the specific community requirements – well in line with the understanding that “the FAIR principles would need to be fine-tuned and interpreted for individual research communities.” These standards are widely implemented by data and tool producers and used by the community. One can guess that these communities will be considered somehow in the governance process, but their needs and capacities to contribute are somewhat different from the use case developed in 3.1.3, which describes a community starting to develop its FAIR standards. The proposed governance is aimed at improving the situation on FAIR as a whole, and it would be interesting to understand how these kinds of legacy aspects would be taken into account.

Response to new FAIR practitioner “Domain-specific research communities with established shared standards” We address this in point #3 of Section 5.3: “The makeup of the assessment governance body for any given new request must include (at a minimum) a domain-expert who knows the domain standards and data semantics”. This expert would be expected to bring in their knowledge of the legacy standards and norms within a given community.

A number of organisations are cited as possible models for governance in Section 5. The length and level of detail provided for the different organisations in Section 5.2 should be aligned. Also, the RDA could be cited by itself as a relevant community-driven organisation, also mentioning its Communities of Practice.

Response The RDA Communities of Practice are mentioned in section 5.2.4 (“RDA communities of practice”). We have also modified the text in the 5.2 subsections to improve the balance with respect to length and level of detail.

For the next steps, it would be useful to summarize in a few sentences the activities proposed for the governing body, to clarify what is proposed as a starting point for the discussions during the next step. The third paragraph of Section 6 discusses on the one hand the alignment of FAIR criteria and tools and services developed to measure and enable FAIRness, and on the other hand the overseeing of the developments linked to FAIR for other research objects – these developments are currently done in other contexts. In addition, Box 1 states that “communities will have a process in place to collect requirements from their members”. 

  1. Does this mean that the governance body will be the place where a discussion on FAIR in a specific context (domain specific or adapted to other digital objects) will be possible, since it seems that the governing body will be the place where requirements will be collected? 

  2. Or will the governing body collect requirements (and then fine-tune FAIR?) for communities which do not have other vehicles to do it? 

  3. One could also understand from the paper that the governing body would rather be a clearinghouse for FAIR activities mostly developed elsewhere, which is somehow different and would not require the same amount of resources. 

Response to Governing Body activities Activities are not detailed on purpose, as defining the scope and responsibilities would be a task of the nascent governance body itself. Regarding the numbered questions:

  1. Not necessarily

  2. Not necessarily

  3. Without pre-judging the outcomes of the governance body establishment, this is what we believe is the most likely result. Communities that are implementing FAIR in practice will organize themselves (and are already doing so). When a metric and its test have been designed, they would then come to the governance body for a peer-review process that will ensure a basic level of harmonization (especially between communities) and quality of the test, and therefore of the FAIRness evaluators that utilize that test.

Additional text has been added to the end of Section 5.3 that hopefully clarifies how we envision the governance could work.

Also, the authors seem to be confident in the fact that the FAIRness evaluation tools can be aligned, which is a core and critical role of the governing body in the current landscape. Is there a need to explicitly foresee a feasibility study which would inform the discussion to come?

Response FAIRness evaluation tools could be aligned if they used a common, agreed basis (algorithm) for metadata harvesting and probing. The question is not whether they could (since this is technically a fact) but whether they will; this is not a technical problem but a social one. The scope and purpose of the governance is not to align the evaluation tools per se but to define a peer-review process for tests; evaluation tools that use these tests will then become naturally aligned. The motivation for using tests that have passed the peer-review process will (hopefully) be high, since we anticipate that agencies and regulatory bodies will prefer tests that have passed peer review. Additional text has been added to Box 1 to better reflect this separation of concerns.
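To make this separation of concerns concrete, the following minimal sketch (our own, purely hypothetical illustration; it does not describe the FAIR Evaluator or any existing tool, and the function names and the license check are assumptions) shows how independent evaluators that reuse the same agreed harvesting step and the same peer-reviewed test would necessarily return the same verdict for the same digital object:

```python
import json
import urllib.request

def harvest_jsonld(resource_uri: str, timeout: int = 10) -> dict:
    """Shared harvesting step (hypothetical): request JSON-LD metadata
    for a resource via HTTP content negotiation. If every evaluator
    reused this agreed algorithm, all would probe identical metadata."""
    request = urllib.request.Request(
        resource_uri, headers={"Accept": "application/ld+json"}
    )
    with urllib.request.urlopen(request, timeout=timeout) as response:
        return json.loads(response.read().decode("utf-8"))

def test_has_machine_readable_license(resource_uri: str) -> dict:
    """Hypothetical peer-reviewed test: pass if the harvested metadata
    declares a license (an illustrative reusability-style check)."""
    try:
        metadata = harvest_jsonld(resource_uri)
    except Exception as error:  # unreachable, not JSON-LD, timeout, ...
        return {"pass": False, "reason": f"metadata not harvestable: {error}"}
    found = "license" in metadata or "schema:license" in metadata
    return {"pass": found,
            "reason": "license declared" if found else "no license property"}

# Any evaluator calling these two shared functions yields the same verdict:
# print(test_has_machine_readable_license("https://example.org/dataset/1"))
```

Under this sketch, alignment of tools follows from reuse of peer-reviewed tests rather than from any attempt to harmonize the tools themselves.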

It is important to insist on the fact that the process should be community driven, as stated for instance in the paper title, and to enable it. Fostering an open debate as proposed is the right path forward. It could be noted that it will be particularly useful to find ways to engage with a wide variety of research communities since their role in fine-tuning and interpreting FAIR is recognised and their organisations and data cultures can be very different. Their interaction with the “FAIR custodians” to build a common understanding of the role and processes of the proposed governance body during the discussion phase is critical for success. 

Response to interaction between community and FAIR custodians We agree with the reviewer. This should be the job of the FAIR custodians. There are already bodies that undertake these activities (e.g., the GO FAIR Implementation Networks and RDA Communities), which are fine-tuning and interpreting FAIR for their domains. FAIR custodians are described in section 3, second bullet point, corresponding to the “second group (B)”.

Another critical point which could be mentioned is internationalisation, since in essence the whitepaper is produced in the European context. Also, the paper states that “It seems reasonable that the EOSC FAIR Metrics and Data Quality Task Force would act as both the seed organization to begin this stakeholder discussion, as well as put forward TF members as nominees for the inaugural governance body, to be voted-on by the self-assembled stakeholder community.” It would be useful to explain how the possible perception of conflict of interest will be mitigated.

Response to internationalization and conflict of interests The EOSC FAIR Metrics and Data Quality Task Force and the other co-authors of this whitepaper are nothing more than a seed group starting the discussion. Text has been added to Section 5.3 to clarify what we anticipate will be the role of the Task Force. The sentence “as well as put forward TF members as nominees for the inaugural governance body, to be voted-on by the self-assembled stakeholder community” has also been removed.

Finally, it is appropriately stated in Section 2 that “FAIRness must not be used or perceived as an instrument to judge and punish”, but as explained in Section 6 there is in practice an increasing tendency towards FAIRness being a condition for funding, also following a recommendation of the initial High Level Expert Group on EOSC. It would be useful to remind readers in the concluding thoughts that FAIR is essential as a starting point to improve science practices towards more reusability and reproducibility, beyond any top-down requirements from governance or funders.

Response We agree; we have added text to the final paragraph on page 18, see the paragraph starting with “Perhaps the most substantial benefit of FAIRness governance [...]”.

Additional comments: Section 1

  • In the first paragraph, there is a mention of “Different research funders in other countries”, not sure about what “other” refers to.

Response Indeed, the sentence was not clear. It was changed to “Funders may [...]”

  • The recent creation of the Coalition for Advancing Research Assessment (CoARA, https://coara.eu/) could be mentioned in addition to Reference 5. It was launched on December 12th, 2022, just before the completion of the first version of this paper.

Response A mention of the CoARA web page was added right after Reference 5 in the main text.

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Data Availability Statement

No data are associated with this article.

