There are many reasons why the lack of software citations in general, and of standard practices for software citation in particular, is of concern:
Understanding research fields: Software is a product of research, and by not citing it we leave holes in the record of research progress in those fields.
Credit: Academic researchers at all levels, including students, postdocs, faculty, and staff, should be credited for the software products they develop and contribute to, particularly when those products enable or further research done by others. 2
Non-academic researchers should also be credited for their software work, though the specific forms of credit differ from those for academic researchers.
Discovering software: Citations enable the specific software used in a research product to be found. Additional researchers can then use the same software for different purposes, leading to credit for those responsible for the software.
Reproducibility: Citation of specific software used is necessary for reproducibility, although not sufficient. Additional information such as configurations and platform issues are also needed.
Process of Creating Principles
The FORCE11 Software Citation Working Group was created in April 2015 with the following mission statement:
The software citation working group is a cross-team committee leveraging the perspectives from a variety of existing initiatives working on software citation to produce a consolidated set of citation principles in order to encourage broad adoption of a consistent policy for software citation across disciplines and venues. The working group will review existing efforts and make a set of recommendations. These recommendations will be put forward for endorsement by the organizations represented by this group and others that play an important role in the community.
The group will produce a set of principles, illustrated with working examples, and a plan for dissemination and distribution. This group will not be producing detailed specifications for implementation although it may review and discuss possible technical solutions.
The group gathered members (see Appendix A) in April and May 2015, and then began work in June. This materialized as a number of meetings and offline work by group members to document existing practices in member disciplines; gather materials from workshops and other reports; review those materials, identifying overlaps and differences; create a list of use cases related to software citation, recorded in Appendix B; and subsequently draft an initial version of this document. The draft Software Citation Principles document was discussed in a day-long workshop and presented at the FORCE2016 Conference in April 2016 ( https://www.force11.org/meetings/force2016 ). Members of the workshop and greater FORCE11 community gave feedback, which we recorded here in Appendix C. This discussion led to some changes in the use cases and discussion, although the principles themselves were not modified. We also plan to initiate a follow-on implementation working group that will work with stakeholders to ensure that these principles impact the research process.
The process of creating the software citation principles began by adapting the FORCE11 Data Citation Principles ( Data Citation Synthesis Group, 2014 ). These were then modified based on discussions of the FORCE11 Software Citation Working Group (see Appendix A for members), information from the use cases in section Use Cases, and the related work in section Related Work.
We made these adaptations because software, while similar to data in that it has not traditionally been cited in publications, also differs from data. In the context of research (e.g., in science), the term “data” usually refers to electronic records of observations made in the course of a research study (“raw data”) or to information derived from such observations by some form of processing (“processed data”), as well as the output of simulation or modeling software (“simulated data”). Some confusion about the distinction between software and data comes in part from the much wider scope of the term “data” in computing and information science, where it refers to anything that can be processed by a computer; in that sense, software is just a special kind of data. In the research sense of the term, however, software is distinct from data, and citing software is not the same as citing data. A more general discussion about these distinctions is currently underway ( https://github.com/danielskatz/software-vs-data ).
The principles in this document should guide further development of software citation mechanisms and systems, and the reader should be able to look at any particular example of software citation to see if it meets the principles. While we strive to offer practical guidelines that acknowledge the current incentive system of academic citation, a more modern system of assigning credit is sorely needed. It is not that academic software needs a separate credit system from that of academic papers, but that the need for credit for research software underscores the need to overhaul the system of credit for all research products. One possible solution for a more complete description of the citations and associated credit is the transitive credit proposed by Katz (2014) and Katz & Smith (2015) .
We documented and analyzed a set of use cases related to software citation in FORCE11 Software Citation Working Group ( https://docs.google.com/document/d/1dS0SqGoBIFwLB5G3HiLLEOSAAgMdo8QPEpjYUaWCvIU ) (recorded in Appendix B for completeness). Table 2 summarizes these use cases and makes clear what the requirements are for software citation in each case. Each example represents a particular stakeholder performing an activity related to citing software, with the given metadata as information needed to do that. In that table, we use the following definitions:
“Researcher” includes both academic researchers (e.g., postdoc, tenure-track faculty member) and research software engineers.
“Publisher” includes both traditional publishers that publish text and/or software papers as well as archives such as Zenodo that directly publish software.
“Funder” is a group that funds software or research using software.
“Indexer” examples include Scopus, Web of Science, Google Scholar, and Microsoft Academic Search.
“Domain group/library/archive” includes the Astrophysics Source Code Library (ASCL; http://ascl.net ); biomedical and healthCAre Data Discovery Index Ecosystem (bioCADDIE; https://biocaddie.org ); Computational Infrastructure for Geodynamics (CIG; https://geodynamics.org ), libraries, institutional archives, etc.
“Repository” refers to public software repositories such as GitHub, Netlib, Comprehensive R Archive Network (CRAN), and institutional repositories.
“Unique identifier” refers to unique, persistent, and machine-actionable identifiers such as a DOI, ARK, or PURL.
“Description” refers to some description of the software such as an abstract, README, or other text description.
“Keywords” refers to keywords or tags used to categorize the software.
“Reproduce” can mean actions focused on reproduction, replication, verification, validation, repeatability, and/or utility.
“Citation manager” refers to people and organizations that create scholarly reference management software and websites including Zotero, Mendeley, EndNote, RefWorks, BibDesk, etc., that manage citation information and semi-automatically insert those citations into research products.
Use cases and basic metadata requirements for software citation, adapted from FORCE11 Software Citation Working Group.
Solid circles (•) indicate that the use case depends on that metadata, while plus signs (+) indicate that the use case would benefit from that metadata if available.
All use cases assume the existence of a citable software object, typically created by the authors/developers of the software. Developers can achieve this by, e.g., uploading a software release to figshare ( https://figshare.com/ ) or Zenodo ( GitHub, 2014 ) to obtain a DOI. Necessary metadata should then be included in a CITATION file ( Wilson, 2013 ) or machine-readable CITATION.jsonld file ( Katz & Smith, 2015 ). When software is not freely available (e.g., commercial software) or when there is no clear identifier to use, alternative means may be used to create citable objects as discussed in section Access to Software.
In some cases, if particular metadata are not available, alternatives may be provided. For example, if the version number and release date are not available, the download date can be used. Similarly, the contact name/email is an alternative to the location/repository.
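These fallbacks can be made concrete with a small sketch. The function below (hypothetical field names; not a prescribed API or citation format) assembles a human-readable citation string from basic metadata of the kind listed in Table 2, substituting the download date when the version and release date are unavailable, and a contact email when the repository location is unavailable:

```python
# Illustrative sketch only: field names and output format are hypothetical.

def format_software_citation(meta):
    """Build a citation string, substituting alternative metadata when
    preferred fields (version/release date, repository) are missing."""
    authors = ", ".join(meta.get("authors", []))
    name = meta["name"]

    # Prefer version + release date; fall back to the download date.
    if "version" in meta and "release_date" in meta:
        when = f"Version {meta['version']} ({meta['release_date']})"
    else:
        when = f"Downloaded {meta['download_date']}"

    # Prefer a resolvable location/repository; fall back to a contact email.
    where = meta.get("repository") or f"Contact: {meta['contact_email']}"

    parts = [authors, f"{name} [Software]", when, where]
    if meta.get("doi"):
        parts.append(f"doi:{meta['doi']}")
    return ". ".join(p for p in parts if p)


citation = format_software_citation({
    "authors": ["A. Researcher", "B. Developer"],
    "name": "ExampleTool",
    "version": "1.8.7",
    "release_date": "2016-04-01",
    "repository": "https://github.com/example/exampletool",
    "doi": "10.5281/zenodo.000000",
})
print(citation)
```

With a metadata record lacking a version and repository, the same function falls back to a “Downloaded …” date and a contact email, as described above.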
The approximately 50 working group participants (see Appendix A), representing a range of research domains, were tasked with documenting existing practices in their respective communities. A total of 47 documents were submitted by working group participants, with the life sciences, astrophysics, and geosciences being particularly well represented in the submitted resources.
General community/non-domain-specific activities
Some of the most actionable work has come from the UK Software Sustainability Institute (SSI) in the form of blog posts written by their community fellows. For example, in a blog post from 2012, Jackson (2012) discusses some of the pitfalls of trying to cite software in publications. He includes useful guidance for when to consider citing software as well as some ways to help “convince” journal editors to allow the inclusion of software citations.
Wilson (2013) suggests that software authors include a CITATION file that documents exactly how the authors of the software would like to be cited by others. While this is not a formal metadata specification (e.g., it is not machine readable) this does offer a solution for authors wishing to give explicit instructions to potential citing authors and, as noted in the motivation section (see Motivation), there is evidence that authors follow instructions if they exist ( Huang, Rose & Hsu, 2015 ).
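For illustration, a CITATION file of the kind Wilson describes might read as follows (all names, versions, and identifiers here are hypothetical):

```text
To cite ExampleTool in publications, please use:

  Researcher, A., & Developer, B. (2016). ExampleTool: a tool for
  illustrating software citation (Version 1.8.7). Zenodo.
  https://doi.org/10.5281/zenodo.000000

A BibTeX entry for LaTeX users is:

  @misc{ExampleTool,
    author = {Researcher, A. and Developer, B.},
    title  = {ExampleTool: a tool for illustrating software citation},
    year   = {2016},
    note   = {Version 1.8.7},
    doi    = {10.5281/zenodo.000000}
  }
```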
In a later post on the SSI blog, Jackson (2014) gives a good overview of some of the approaches package authors have taken to automate the generation of citation entities such as BibTeX entries, and Knepley et al. (2013) do similarly.
While not usually expressed as software citation principles, a number of groups have developed community guidelines around software and data citation. Van de Sompel et al. (2004) argue for registration of all units of scholarly communication, including software. In “Publish or be damned? An alternative impact manifesto for research software,” Chue Hong (2011) lists nine principles as part of “The Research Software Impact Manifesto.” In the “Science Code Manifesto” ( Barnes et al., 2016 ), the founding signatories cite five core principles (Code, Copyright, Citation, Credit, Curation) for scientific software.
Perhaps in light of the broad range of research domains struggling with the challenge of better recognizing the role of software, funders and agencies in both the US (e.g., NSF, NIH, Alfred P. Sloan Foundation) and UK (e.g., STFC, JISC, Wellcome Trust) have sponsored or hosted a number of workshops with participants from across a range of disciplines, specifically aimed at discussing issues around software citation ( Sufi et al., 2014 ; Ahalt et al., 2015 ; Software Credit Workshop, 2015 ; Norén, 2015 ; Software Attribution for Geoscience Applications, 2015 ; Allen et al., 2015 ). In many cases these workshops produced strong recommendations for their respective communities on how best to proceed. In addition, a number of common themes arose in these workshops, including (1) the critical need for making software more “citable” (and therefore actions authors and publishers should take to improve the status quo), (2) how to better measure the impact of software (and therefore attract appropriate funding), and (3) how to properly archive software (where, how, and how often) and how this affects what to cite and when.
Most notable among the community efforts are the WSSSPE Workshops ( http://wssspe.researchcomputing.org.uk/ ) and SSI Workshops ( http://www.software.ac.uk/community/workshops ), which between them have run a series of workshops aimed at gathering together community members with an interest in (1) defining the set of problems related to the role of software and associated people in research settings, particularly academia, (2) discussing potential solutions to those problems, and (3) beginning to work on implementing some of those solutions. In each of the three years that WSSSPE workshops have run thus far, the participants have produced a report ( Katz et al., 2014 ; Katz et al., 2016a ; Katz et al., 2016b ) documenting the topics covered. Section 5.8 and Appendix J in the WSSSPE3 report ( Katz et al., 2016b ) contain some preliminary work and discussion particularly relevant to this working group. In addition, a number of academic publishers such as APA ( McAdoo, 2015 ) have recommendations for submitting authors on how to cite software, and journals such as F1000Research ( http://f1000research.com/for-authors/article-guidelines/software-tool-articles ), SoftwareX ( http://www.journals.elsevier.com/softwarex/ ), Open Research Computation ( http://www.openresearchcomputation.com ) and the Journal of Open Research Software ( http://openresearchsoftware.metajnl.com ) allow for submissions entirely focused on research software.
Domain-specific community activities
One approach to increasing software “citability” is to encourage the submission of papers in standard journals describing a piece of research software, often known as software papers (see Software Papers). While some journals (e.g., Transactions on Mathematical Software (TOMS), Bioinformatics, Computer Physics Communications, F1000Research, Seismological Research Letters, Electronic Seismologist) have traditionally accepted software submissions, the American Astronomical Society (AAS) has recently announced they will accept software papers in their journals ( AAS Editorial Board, 2016 ). Professional societies are in a good position to change their respective communities, as the publishers of journals and conveners of domain-specific conferences; as publishers they can change editorial policies (as AAS has done) and conferences are an opportunity to communicate and discuss these changes with their communities.
In astronomy and astrophysics: The Astrophysics Source Code Library (ASCL; http://ASCL.net ) is a website dedicated to the curation and indexing of software used in the astronomy-based literature. In 2015, the AAS and GitHub co-hosted a workshop ( Norén, 2015 ) dedicated to software citation, indexing, and discoverability in astrophysics. More recently, a Birds of a Feather session was held at the Astronomical Data Analysis Software and Systems (ADASS) XXV conference ( Allen et al., 2015 ) that included discussion of software citation.
In the life sciences: In May 2014, the NIH held a workshop aimed at helping the biomedical community discover, cite, and reuse software written by their peers. The primary outcome of this workshop was the Software Discovery Index Meeting Report ( White et al., 2014 ) which was shared with the community for public comment and feedback. The authors of the report discuss what framework would be required for supporting a Software Discovery Index including the need for unique identifiers, how citations to these would be handled by publishers, and the critical need for metadata to describe software packages.
In the geosciences: The Ontosoft ( Gil, Ratnakar & Garijo, 2015 ) project, which describes itself as “A Community Software Commons for the Geosciences,” gives much attention to the metadata required to describe, discover, and execute research software. The NSF-sponsored Geo-Data Workshop 2011 ( Fox & Signell, 2011 ) revolved around data lifecycle, management, and citation; its report includes many recommendations for data citation.
Existing efforts around metadata standards
Producing detailed specifications and recommendations for possible metadata standards to support software citation was not within the scope of this working group. However, some discussion on the topic did occur, and there was significant interest in the wider community in producing standards for describing research software metadata.
Content specifications for software metadata vary across communities, and include DOAP ( https://github.com/edumbill/doap/ ), an early metadata term set used by the Open Source Community, as well as more recent community efforts like Research Objects ( Bechhofer et al., 2013 ), The Software Ontology ( Malone et al., 2014 ), EDAM Ontology ( Ison et al., 2013 ), Project CRediT ( CRediT, 2016 ), the OpenRIF Contribution Role Ontology ( Gutzman et al., 2016 ), Ontosoft ( Gil, Ratnakar & Garijo, 2015 ), RRR/JISC guidelines ( Gent, Jones & Matthews, 2015 ), or the terms and classes defined at schema.org related to the https://schema.org/SoftwareApplication class. In addition, language-specific software metadata schemes are in widespread use, including the Debian package format ( Jackson & Schwarz, 2016 ), Python package descriptions ( Ward & Baxter, 2016 ), and R package descriptions ( Wickham, 2015 ), but these are typically conceived for software build, packaging, and distribution rather than citation. CodeMeta ( Jones et al., 2014 ) has created a crosswalk among these software metadata schemes and an exchange format that allows software repositories to effectively interoperate.
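To make the flavor of such metadata concrete, the sketch below shows a minimal record using terms from the schema.org SoftwareApplication class (the names and identifier are hypothetical, and real records would typically carry more fields):

```json
{
  "@context": "http://schema.org",
  "@type": "SoftwareApplication",
  "name": "ExampleTool",
  "softwareVersion": "1.8.7",
  "datePublished": "2016-04-01",
  "author": [
    { "@type": "Person", "name": "A. Researcher" },
    { "@type": "Person", "name": "B. Developer" }
  ],
  "identifier": "https://doi.org/10.5281/zenodo.000000",
  "url": "https://github.com/example/exampletool"
}
```

A crosswalk such as CodeMeta’s maps fields like these onto corresponding terms in other schemes, e.g., Debian, Python, or R package descriptions.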
In this section we discuss some of the issues and concerns related to the principles stated in section Software Citation Principles.
What software to cite
The software citation principles do not define what software should be cited, but rather how software should be cited. What software should be cited is the decision of the author(s) of the research work in the context of community norms and practices, and in most research communities, these are currently in flux. In general, we believe that software should be cited on the same basis as any other research product such as a paper or book; that is, authors should cite the appropriate set of software products just as they cite the appropriate set of papers, perhaps following the FORCE11 Data Citation Working Group principles, which state, “In scholarly literature, whenever and wherever a claim relies upon data, the corresponding data should be cited” ( Data Citation Synthesis Group, 2014 ).
Some software that is, or could be, captured as part of data provenance may not be cited. Citation is partly a record of software important to a research outcome 3 , whereas provenance is a record of all steps (including software) used to generate particular data within the research process. Research results, including data, increasingly depend on software ( Hannay et al., 2009 ), and thus may depend on the specific version used ( Sandve et al., 2013 ; Wilson et al., 2014 ). Furthermore, errors in software or environment variations can affect results ( Morin et al., 2012 ; Soergel, 2015 ). This implies that for a data research product, provenance data will include some of the cited software. Similarly, the software metadata recorded as part of data provenance will overlap the metadata recorded as part of software citation for the software that was used in the work. The data recorded for reproducibility should also overlap the metadata recorded as part of software citation. In general, we intend the software citation principles to cover the minimum of what is necessary for software citation for the purpose of software identification. Some use cases related to citation (e.g., provenance, reproducibility) might have additional requirements beyond the basic metadata needed for citation, as Table 2 shows.
Currently, and for the foreseeable future, software papers are being published and cited, in addition to software itself being published and cited, as many community norms and practices are oriented towards citation of papers. As discussed in the Importance principle (1) and the discussion above, the software itself should be cited on the same basis as any other research product; authors should cite the appropriate set of software products. If a software paper exists and it contains results (performance, validation, etc.) that are important to the work, then the software paper should also be cited. We believe that a request from the software authors to cite a paper should typically be respected, and the paper cited in addition to the software.
The goals of software citation include the linked ideas of crediting those responsible for software and understanding the dependencies of research products on specific software. In the Importance principle (1), we state that “software should be cited on the same basis as any other research product such as a paper or a book; that is, authors should cite the appropriate set of software products just as they cite the appropriate set of papers.” In the case of one code that is derived from another code, citing the derived software may appear not to credit those responsible for the original software, nor to recognize its role in the work that used the derived software. However, this is analogous to how any research builds on other research: each research product cites only those products it directly builds on, not those it indirectly builds on. Understanding these chains of knowledge and credit has long been part of the history of science field, though more recent work suggests more nuanced evaluation of credit chains ( CRediT, 2016 ; Katz & Smith, 2015 ).
Software peer review
Adherence to the software citation principles enables better peer review through improved reproducibility. However, since the primary goal of software citation is to identify the software that has been used in a scholarly product, the peer review of software itself is mostly out of scope in the context of software citation principles. For instance, when identifying a particular software artifact that has been used in a scholarly product, whether or not that software has been peer-reviewed is irrelevant. One possible exception would be if the peer-review status of the software should be part of the metadata, but the working group does not believe this to be part of the minimal metadata needed to identify the software.
Citation format in reference list
Citations in references in the scholarly literature are formatted according to the citation style (e.g., AMS, APA, Chicago, MLA) used by that publication. (Examples illustrating these styles have been published by Lipson (2011) ; the follow-on Software Citation Implementation Group will provide suggested examples.) As these citations are typically sent to publishers as text formatted in that citation style, not as structured metadata, and because the citation style dictates how the human reader sees the software citation, we recommend that all text citation styles support the following: a) a label indicating that this is software, e.g., [Software], potentially with more information such as [Software: Source Code], [Software: Executable], or [Software: Container], and b) support for version information, e.g., Version 1.8.7.
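As a hypothetical illustration (the software name, authors, and identifier are invented), an APA-like reference entry following these recommendations might appear as:

```text
Researcher, A., & Developer, B. (2016). ExampleTool (Version 1.8.7)
[Software: Source Code]. Zenodo. https://doi.org/10.5281/zenodo.000000
```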
This set of software citation principles, if followed, will cause the number of software citations in scholarly products to increase, and thus the overall number of citations to increase. Some scholarly products, such as journal articles, may have strict limits on the number of citations they permit, or page limits that include reference sections. Such limits run counter to our recommendation, and we recommend that publishers who impose strict citation limits add specific instructions regarding software citations to their author guidelines so as not to disincentivize software citation. Similarly, publishers should not count references against page limits.
The Unique Identification principle (3) calls for “a method for identification that is machine actionable, globally unique, interoperable, and recognized by a community.” What this means for data is discussed in detail in the “Unique Identification” section of a report by the FORCE11 Data Citation Implementation Group ( Starr et al., 2015 ), which calls for “unique identification in a manner that is machine-resolvable on the Web and demonstrates a long-term commitment to persistence.” This report also lists examples of identifiers that match these criteria including DOIs, PURLs, Handles, ARKS, and NBNs. For software, we recommend the use of DOIs as the unique identifier due to their common usage and acceptance, particularly as they are the standard for other digital products such as publications.
While we believe there is value in including the explicit version (e.g., Git SHA1 hash, Subversion revision number) of the software in any software citation, there are a number of reasons that a commit reference together with a repository URL is not recommended for the purposes of software citation:
Version numbers/commit references are not guaranteed to be permanent. Projects can be migrated to new version control systems (e.g., SVN to Git). In addition, it is possible to overwrite/clobber a particular version (e.g., force-pushing in the case of Git).
A repository address and version number does not guarantee that the software is available at a particular (resolvable) URL, especially as it is possible for authors to remove their content from, e.g., GitHub.
A particular version number/commit reference may not represent a “preferred” point at which to cite the software from the perspective of the package authors.
We recognize that there are certain situations where it may not be possible to follow the recommended best practice: for example, if (1) the software authors did not register a DOI and/or release a specific version, or (2) the version of the software used does not match what is available to cite. In those cases, falling back on a combination of the repository URL and version number/commit hash is an appropriate way to cite the software used.
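In such a fallback case, a citation might look like the following hypothetical entry, combining the repository URL with an explicit commit hash (all names and identifiers invented):

```text
Researcher, A., & Developer, B. (2016). ExampleTool [Software: Source Code].
Retrieved from https://github.com/example/exampletool (commit 0123abc)
```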
Note that the “unique” in a UID means that it points to a unique, specific software version. However, multiple UIDs might point to the same software. This is not recommended, but is possible. We strongly recommend that if there is already a UID for a version of software, no additional UID should be created. Multiple UIDs can lead to split credit, which goes against the Credit and Attribution principle (2).
Software versions and identifiers
There are at least three different potential relationships between identifiers and versions of software:
An identifier can point to a specific version of a piece of software.
An identifier can point to the piece of software, effectively all versions of the software.
An identifier can point to the latest version of a piece of software.
It is possible that a given piece of software may have identifiers of all three types. In addition, there may be one or more software papers, each with an identifier.
While we often need to cite a specific version of software, we may also need a way to cite the software in general and to link multiple releases together, perhaps for the purpose of understanding citations to the software. The principles in section Software Citation Principles are intended to be applicable at all levels, and to all types of identifiers, such as DOIs, RRIDs, etc., though we again recommend when possible the use of DOIs that identify specific versions of source code. We note that RRIDs were developed by the FORCE11 Resource Identification Initiative ( https://www.force11.org/group/resource-identification-initiative ) and have been discussed for use to identify software packages (not specific versions), though the FORCE11 Resource Identification Technical Specifications Working Group ( https://www.force11.org/group/resource-identification-technical-specifications-working-group ) says “Information resources like software are better suited to the Software Citation WG.” There is currently a lack of consensus on the use of RRIDs for software.
Types of software
The principles and discussion in this document have generally been written to focus on software as source code. However, we recognize that some software is only available as an executable, a container, or a virtual machine image, while other software may be available as a service. We believe the principles apply to all of these forms of software, though the implementation of them will certainly differ based on software type. When software is accessible as both source code and another type, we recommend that the source code be cited.
Access to software
The Accessibility principle (5) states that “software citations should permit and facilitate access to the software itself.” This does not mean that the software must be freely available. Rather, the metadata should provide enough information that the software can be accessed. If the software is free, the metadata will likely provide an identifier that can be resolved to a URL pointing to the specific version of the software being cited. For commercial software, the metadata should still provide information on how to access the specific software, but this may be a company’s product number or a link to a website that allows the software to be purchased. As stated in the Persistence principle (4), we recognize that the software version may no longer be available, but it should still be cited along with information about how it was accessed.
What an identifier should resolve to
While citing an identifier that points to, e.g., a GitHub repository can satisfy the principles of Unique Identification (3), Accessibility (5), and Specificity (6), such a repository cannot guarantee Persistence (4). Therefore, we recommend that the software identifier resolve to a persistent landing page that contains metadata and a link to the software itself, rather than directly to the source code files, repository, or executable. This ensures the longevity of the software metadata, perhaps even beyond the lifespan of the software it describes. Such landing pages are currently offered by services such as figshare and Zenodo ( GitHub, 2014 ), which both generate persistent DataCite DOIs for submitted software. In addition, landing pages can contain both human-readable metadata (e.g., the types shown in Table 2 ) as well as content-negotiable formats such as RDF or DOAP ( https://github.com/edumbill/doap/ ).
Updates to these principles
As this set of software citation principles has been created by the FORCE11 Software Citation Working Group ( https://www.force11.org/group/software-citation-working-group ), which will cease work and dissolve after publication of these principles, any updates will require a different FORCE11 working group to make them. As mentioned in section Future Work, we expect a follow-on working group to be established to promote the implementation of these principles, and it is possible that this group might find items that need correction or addition in these principles. We recommend that this Software Citation Implementation Working Group be charged, in part, with updating these principles during its lifetime, and that FORCE11 should listen to community requests for later updates and respond by creating a new working group.
Software citation principles without clear worked-through examples are of limited value to potential implementers, and so in addition to this principles document, the final deliverable of this working group will be an implementation paper outlining working examples for each of the use cases listed in section Use Cases.
Following these efforts, we expect that FORCE11 will start a new working group with the goals of supporting potential implementers of the software citation principles and concurrently developing potential metadata standards, loosely following the model of the FORCE11 Data Citation Working Group. Beyond the efforts of this new working group, additional effort should be focused on updating the overall academic credit/citation system.
Alice Allen, Astrophysics Source Code Library
Micah Altman, MIT
Jay Jay Billings, Oak Ridge National Laboratory
Carl Boettiger, University of California, Berkeley
Jed Brown, University of Colorado Boulder
Sou-Cheng T. Choi, NORC at the University of Chicago & Illinois Institute of Technology
Neil Chue Hong, Software Sustainability Institute
Tom Crick, Cardiff Metropolitan University
Mercè Crosas, IQSS, Harvard University
Scott Edmunds, GigaScience, BGI Hong Kong
Christopher Erdmann, Harvard-Smithsonian CfA
Ian Gent, University of St Andrews, recomputation.org
Carole Goble, The University of Manchester, Software Sustainability Institute
Paul Groth, Elsevier Labs
Melissa Haendel, Oregon Health and Science University
Stephanie Hagstrom, FORCE11
Robert Hanisch, National Institute of Standards and Technology, One Degree Imager
Edwin Henneken, Harvard-Smithsonian CfA
Ivan Herman, World Wide Web Consortium (W3C)
James Howison, University of Texas
Lorraine Hwang, University of California, Davis
Thomas Ingraham, F1000Research
Matthew B. Jones, NCEAS, University of California, Santa Barbara
Catherine Jones, Science and Technology Facilities Council
Daniel S. Katz, University of Illinois (co-chair)
Alexander Konovalov, University of St Andrews
John Kratz, California Digital Library
Jennifer Lin, Public Library of Science
Frank Löffler, Louisiana State University
Brian Matthews, Science and Technology Facilities Council
Abigail Cabunoc Mayes, Mozilla Science Lab
Daniel Mietchen, National Institutes of Health
Bill Mills, TRIUMF
Evan Misshula, CUNY Graduate Center
August Muench, American Astronomical Society
Fiona Murphy, Independent Researcher
Kyle E. Niemeyer, Oregon State University (co-chair)
Karthik Ram, University of California, Berkeley
Fernando Rios, Johns Hopkins University
Ashley Sands, University of California, Los Angeles
Soren Scott, Independent Researcher
Frank J. Seinstra, Netherlands eScience Center
Arfon Smith, GitHub (co-chair)
Kaitlin Thaney, Mozilla Science Lab
Ilian Todorov, Science and Technology Facilities Council
Matt Turk, University of Illinois
Miguel de Val-Borro, Princeton University
Daan Van Hauwermeiren, Ghent University
Stijn Van Hoey, Ghent University
Belinda Weaver, The University of Queensland
Nic Weber, University of Washington iSchool
Software citation use cases
This appendix records an edited, extended description of the use cases discussed in section Use Cases, originally found in FORCE11 Software Citation Working Group. This discussion is not complete, and in some cases it may not be fully self-consistent, but it is included in this paper as a record of one of the inputs to the principles. We expect that the follow-on Software Citation Implementation Group will further develop these use cases, including explaining in more detail how the software citation principles can be applied to each, as part of working with the stakeholders to persuade them to implement the principles in their standard workflows.
Researcher who uses someone else’s software for a paper
One of the most common use cases may be researchers who use someone else’s software and want to cite it in a technical paper. This will be similar to existing practices for citing research artifacts in papers.
“Requirements” for researcher: