All Learning Resources

  • Organise & Document, a chapter of the CESSDA Expert Tour Guide on Data Management

    In this chapter, we provide you with tips and tricks on how to properly organise and document your data and metadata.

    We begin with discussing good practices in designing an appropriate data file structure, file naming and organising your data within suitable folder structures. You will find out how the way you organise your data facilitates orientation in the data file, contributes to understanding the information contained and helps to prevent errors and misinterpretations.

    In addition, we will focus on appropriate documentation of your data. The development of rich metadata is required by the FAIR data principles and by other current standards promoting data sharing.

    After completing your travels through this chapter on organising and documenting your data you should:

    Be aware of the elements which are important in setting up an appropriate structure and organisation of your data for intended research work and data sharing;
    Have an overview of best practices in file naming and organising your data files in a well-structured and unambiguous folder structure;
    Understand how comprehensive data documentation and metadata increases the chance your data are correctly understood and discovered;
    Be aware of common metadata standards and their value;
    Be able to answer the DMP questions which are listed at the end of this chapter and adapt your own DMP.
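    A minimal sketch of the file-naming practices discussed above (ISO dates, no spaces or special characters, zero-padded version numbers). The helper and its example names are illustrative, not taken from the tour guide:

```python
import re
from datetime import date

def make_filename(project, content, version, when, ext="csv"):
    """Build a sortable, unambiguous data file name:
    <project>_<content>_<YYYY-MM-DD>_v<NN>.<ext>  (illustrative convention)"""
    # Replace spaces and special characters so names stay portable across systems
    def safe(s):
        return re.sub(r"[^A-Za-z0-9-]+", "-", s).strip("-").lower()
    return f"{safe(project)}_{safe(content)}_{when.isoformat()}_v{version:02d}.{ext}"

# Example: version 2 of a survey project's raw interview data
print(make_filename("Youth Survey", "interviews raw", 2, date(2024, 3, 1)))
# youth-survey_interviews-raw_2024-03-01_v02.csv
```

    ISO dates and zero-padded version numbers keep files in chronological order when sorted alphabetically, a common reason such conventions are recommended.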

  • Research Data Management

    Marine information managers are increasingly seen as major contributors to research data management (RDM) activities in general and in the design of research data services (RDS) in particular. They promote research by providing services for storage, discovery, and access and liaise and partner with researchers and data centers to foster an interoperable infrastructure for the above services.

    The series of units within this training course recognizes the potential contributions that librarians/information managers can offer and hence the need to develop their skills in the research data management process. Course materials consist of slide presentations and student activities. Topics include:

    • Data and information management in International Indian Ocean Expedition-2 (IIOE-2)
    • Open science data
    • Research data and publication lifecycles
    • Research data organization and standards
    • Data management plans
    • Data publication and data citation
    • Access to research data
    • Management of sensitive data
    • Repositories for data management
    • Data management resources
  • Process, a chapter of the CESSDA Expert Tour Guide on Data Management

    In this chapter we focus on data operations needed to prepare your data files for analysis and data sharing. Throughout the different phases of your project, your data files will be edited numerous times. During this process it is crucial to maintain the authenticity of research information contained in the data and prevent it from loss or deterioration.

    However, we will start with the topics of data entry and coding as the first steps of your work with your data files. Finally, you will learn about the importance of a comprehensive approach to data quality.

    After completing your travels through this chapter you should:

    Be familiar with strategies to minimise errors during the processes of data entry and data coding;
    Understand why the choice of file format should be planned carefully;
    Be able to manage the integrity and authenticity of your data during the research process;
    Understand the importance of a systematic approach to data quality;
    Be able to answer the DMP questions which are listed at the end of this chapter and adapt your own DMP.

  • Store, a chapter of the CESSDA Expert Tour Guide on Data Management

    The data that you collect, organise, prepare, and analyse to answer your research questions, and the documentation describing it are the lifeblood of your research. Put bluntly: without data, there is no research. It is therefore essential that you take adequate measures to protect your data against accidental loss and against unauthorised manipulation.

    Particularly when collecting (sensitive) personal data, it is necessary to ensure that these data can only be accessed by those authorised to do so. In this chapter, you will learn more about measures to help you address these threats.

    After completing your travels through this chapter you should:

    Be familiar with strategies to minimise errors during the processes of data entry and data coding;
    Understand why the choice of file format should be planned carefully;
    Be able to manage the integrity and authenticity of your data during the research process;
    Understand the importance of a systematic approach to data quality;
    Be able to answer the DMP questions which are listed at the end of this chapter and adapt your own DMP.

  • Making Research Data Available

    There is a growing awareness of the importance of research data. Elsevier is committed to encouraging and supporting researchers who want to store, share, discover and reuse data. To this end, Elsevier has set up several initiatives that allow authors to make their data available when they publish with Elsevier. The webinars in the collection (located on the bottom half of the web page) cover:

    • Ways for researchers to store, share, discover, and use data
    • How to create a good research data management plan  
    • Data Citation: How can you as a researcher benefit from citing data? 
  • Protect, a chapter of the CESSDA Expert Tour Guide on Data Management

    This part of the tour guide focuses on key legal and ethical considerations in creating shareable data.

    We begin with clarifying the different legal requirements of Member States, and the impact of the upcoming General Data Protection Regulation (GDPR) on research data management. Subsequently, we will show you how sharing personal data can often be accomplished by using a combination of obtaining informed consent, data anonymisation and regulating data access. Also, the supporting role of ethical review in managing your legal and ethical obligations is highlighted.

    After completing your trips around this chapter you should:

    Be aware of your legal and ethical obligations towards participants and be informed of the different legal requirements of Member States;
    Understand how protecting your data well protects you against violating laws and promises made to participants;
    Understand the impact of the upcoming General Data Protection Regulation (GDPR; European Union, 2016);
    Understand how a combination of informed consent, anonymisation and access controls allows you to create shareable personal data;
    Be able to define what elements should be integrated into a consent form;
    Be able to apply anonymisation techniques to your data;
    Be able to answer the DMP questions which are listed at the end of this chapter and adapt your own DMP.
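    Two common anonymisation techniques, pseudonymisation (replacing a direct identifier with a code) and generalisation (coarsening a quasi-identifier such as age), can be sketched as follows. The field names and the salt value are invented for illustration:

```python
import hashlib

def pseudonymise(identifier, salt="keep-this-secret"):
    # Replace a direct identifier with a stable, hard-to-reverse code.
    # The salt must be stored separately from the data (illustrative value here).
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:10]

def generalise_age(age, band=10):
    # Coarsen an exact age into a band, e.g. 37 -> "30-39"
    low = (age // band) * band
    return f"{low}-{low + band - 1}"

record = {"name": "Jane Doe", "age": 37, "answer": "yes"}
anonymised = {
    "id": pseudonymise(record["name"]),
    "age_band": generalise_age(record["age"]),
    "answer": record["answer"],
}
```

    Whether such measures suffice depends on the data and the other quasi-identifiers present; as the chapter notes, anonymisation works in combination with informed consent and access controls.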

  • Archive & Publish, a chapter of the CESSDA Expert Tour Guide on Data Management

    High-quality data have the potential to be reused in many ways. Archiving and publishing your data properly will enable both your future self as well as future others to get the most out of your data.

    In this chapter, we venture into the landscape of research data archiving and publication. We will guide you in making an informed decision on where to archive and publish your data in such a way that others can properly access, understand, use and cite them.

    After completing your travels through this chapter you should:

    Understand the difference between data archiving and data publishing;
    Be aware of the benefits of data publishing;
    Be able to differentiate between different data publication services (data journal, self-archiving, a data repository);
    Be able to select a data repository which fits your research data's needs;
    Be aware of ways to promote your research data publication;
    Be able to answer the DMP questions which are listed at the end of this chapter and adapt your own DMP.

  • Hivebench Electronic Lab Notebook

    The time it takes to prepare, analyze and share experimental results can seem prohibitive, especially in the current, highly competitive world of biological research. However, not only is data sharing mandated by certain funding and governmental bodies, it also has distinct advantages for research quality and impact. Good laboratory practices recommend that all researchers use electronic lab notebooks (ELN) to save their results. This resource includes numerous short video demonstrations of Hivebench:

    • Start using Hivebench, the full demo
    • Creating a Hivebench account
    • Managing protocols & methods
    • Storing experimental findings in a notebook
    • Managing research data
    • Doing research on iPhone and iPad
    • Editing experiments
    • Collaborating with colleagues
    • Searching for results
    • Staying up to date with the newsfeed
    • Planning experiments with the calendar
    • Using open science protocols
    • Mendeley Data Export
    • Managing inventory of reagents
    • Signing and counter signing experiments
    • Archiving notebooks
    • How to keep data alive when researchers move on? Organizing data, methods, and protocols.
  • Metadata Recommendations, Dialects, Evaluation & Improvement

    This webinar describes a flexible, multifaceted approach to evaluating and improving metadata collections (in multiple dialects).

    The initial goal of many metadata efforts was discoverable data but, like many other elements of data management, the metadata landscape has evolved considerably over the last decade to include new use cases and requirements. The approach that has been developed includes web-based tools for understanding and comparing recommendations and dialects, flexible comparisons of completeness of metadata collections (in multiple dialects) with respect to particular recommendations, evaluation of completeness of single metadata records, identification of specific metadata improvement needs and an open forum for sharing information, experiences, and examples. Recommendations for metadata requirements and metadata improvement needs are discussed and shared.

  • FAIR Data in Trustworthy Data Repositories

    Everybody wants to play FAIR, but how do we put the principles into practice?

    There is a growing demand for quality criteria for research datasets. The presenters argue that the DSA (Data Seal of Approval for data repositories) and FAIR principles get as close as possible to giving quality criteria for research data. They do not do this by trying to make value judgments about the content of datasets, but rather by qualifying the fitness for data reuse in an impartial and measurable way. By bringing the ideas of the DSA and FAIR together, we will be able to offer an operationalization that can be implemented in any certified Trustworthy Digital Repository.

    In 2014 the FAIR Guiding Principles (Findable, Accessible, Interoperable and Reusable) were formulated. The well-chosen FAIR acronym is highly attractive: it is one of those ideas that almost automatically gets stuck in your mind once you have heard it. In a relatively short time, the FAIR data principles have been adopted by many stakeholder groups, including research funders.

    The FAIR principles are remarkably similar to the underlying principles of the DSA (2005): the data can be found on the Internet, are accessible (clear rights and licenses), in a usable format, reliable, and identified in a unique and persistent way so that they can be referred to. Essentially, the DSA presents quality criteria for digital repositories, whereas the FAIR principles target individual datasets. In this webinar the two sets of principles will be discussed and compared, and a tangible operationalization will be presented.

  • ORCID Communications Toolkit: Interactive Outreach Presentation

    This presentation includes information about ORCID (Open Researcher and Contributor ID), short quizzes, and workshop activities. Select content for your presentation according to how much time you have, your audience's level of knowledge, and whether you want to include practical tasks. After downloading, the slides can be edited to include your institution's name and presentation details.

  • How to Import Works into Your ORCID Record Using a Search & Link Wizard

    Several ORCID (Open Researcher and Contributor ID) member organizations have built search and link tools that allow you to import information about publications and other works into your ORCID record from other databases. The linking process can begin on the ORCID site or at the organization's website. Note that ORCID does not store preprints or content; rather, works data added to your record must include a link that allows users to easily navigate to the source document.


  • ORCID Registry: How to Group Works on Your ORCID Record

    Learn how and why to group works that have been added to your ORCID (Open Researcher and Contributor ID) record from different sources, so that they are displayed together on your record.

  • Getting Started with ORCID & ORCID API

    This presentation introduces ORCID (Open Researcher and Contributor ID), ORCID features, and ORCID's approach to API development.

  • Data Management Support for Researchers

    Tips and advice from a variety of researchers, data managers, and service providers, to help with data management. Titles include:

    • Sharing data: good for science, good for you
    • What support needs to be provided to assist researchers with data management?
    • How can choices about data capture open up, or limit, opportunities for researchers?
    • What should researchers do to help their data survive?
    • Why should researchers share their data?
    • How can repositories and data centres help researchers?
  • DMP Assistant: bilingual tool for preparing data management plans (DMPs)

    The DMP Assistant is a bilingual tool to assist in the preparation of a Data Management Plan (DMP). The tool, which is based on international standards and best practices in data management, guides researchers step by step through the key questions needed to develop their plan. DMP Assistant is powered by DMPonline, an open-source application developed by the Digital Curation Centre (DCC). Site registration is required; data management planning templates become available after registration and sign-in.

  • Data Management for Clinical Research MOOC

    This course presents critical concepts and practical methods to support planning, collection, storage, and dissemination of data in clinical research.
     
    Understanding and implementing solid data management principles is critical for any scientific domain. Regardless of your current (or anticipated) role in the research enterprise, a strong working knowledge and skill set in data management principles and practice will increase your productivity and improve your science. The instructors' goal is to use these modules to help you learn and practice this skill set.

    This course assumes very little current knowledge of technology other than how to operate a web browser. The course will focus on practical lessons, short quizzes, and hands-on exercises as we explore together best practices for data management.

    The six modules cover these topics:

    • Research Data Collection Strategy
    • Electronic Data Capture Fundamentals
    • Planning a Data Strategy for a Prospective Study
    • Practicing What We've Learned: Implementation
    • Post-Study Activities and Other Considerations
    • Data Collection with Surveys
  • National Network of Libraries of Medicine (NNLM) Research Data Management Webinar Series

    The National Network of Libraries of Medicine (NNLM) Research Data Management (RDM) webinar series is a collaborative, bimonthly series intended to increase awareness of research data management topics and resources.  The series aims to support RDM within the library to better serve librarians and their institutional communities. Topics include, but are not limited to, understanding a library’s role in RDM, getting started, data management planning, and different RDM tools.

    Several NNLM Regional Medical Libraries will collaborate and combine efforts to feature experts from the field for this national webinar series. Each session will include separate objectives based on the featured webinar presenter. Attendee participation will be possible through the WebEx platform chat features and other electronic methods designed by the guest presenter. Sessions are recorded, closed-captioned, and posted for later viewing.

    Each session will last approximately 1 hour and 1 MLA CE contact hour will be offered per session. CE contact hours will only be available during the live presentations of the webinar.

    Watch the webpage for upcoming webinars.

  • RDMRose Learning Materials

    RDMRose was a JISC-funded project to produce and teach professional development learning materials in Research Data Management (RDM) tailored for information professionals. The Slideshare presentations and documents include an overview of RDM, research in higher education, looking at research data, the research data lifecycle, data management plans, research data services, metadata, and data citation.

    RDMRose developed and adapted learning materials about RDM to meet the specific needs of liaison librarians in university libraries, both for practitioners’ CPD and for embedding into the postgraduate taught curriculum. Its deliverables included open educational resources materials suitable for learning in multiple modes, including face to face and self-directed learning.


  • Overview of Interdisciplinary Earth Data Alliance (IEDA) Data Management Resources

    In the digital era, documenting and sharing our scientific data is growing increasingly important as an integral part of the scientific process. Data Management not only makes our data resources available for others to build upon, but it also enables data syntheses and new analyses that hold the potential for significant scientific advancement. Effective data management begins during the planning stages of a project and continues throughout the research process from field and/or laboratory work, through analysis, and culminating with scientific literature and data publication. By planning ahead, and following some best practices along the way, the process of data management can be simple and relatively low-effort, enabling rapid contribution and publication of data in the appropriate data systems at the conclusion of a project.

    IEDA offers a variety of tools to support investigators along the full continuum of their data management efforts:  Links to these tools and resources are available from the landing page for this resource.

    Pre-Award

    • IEDA Data Discovery Tools
    • IEDA Data Management Plan (DMP) Tool

    Research & Analysis

    • Register sample-based data sets and samples 
      • Register sample metadata and get a unique sample identifier (IGSN)
      • Download Templates for Analytical Data
      • Learn about contributing Analytical Data to the EarthChem Library
    • Register sensor-based data sets and samples 
      • Contribute sensor data files (e.g. geophysical data) and supporting metadata to MGDS
    • IEDA Analysis Tools
      • GeoMapApp earth science exploration and visualization application 
        • Analyze your own geospatial data within the context of other available datasets

    Synthesis & Publication

    • Register final data sets with IEDA 
    • Publish your data 
      • Publishing your data with a DOI ensures that it can be directly referenced in your paper and cited by others.
    • IEDA Data Compliance Reporting (DCR) Tool 
      • Rapidly generate a Data Compliance Report (DCR) based on your NSF award number to demonstrate that your data are registered with IEDA systems and you are compliant with NSF Data Policies.
  • USGS Data Templates Overview

    Creating Data Templates for data collection, data storage, and metadata saves time and increases consistency. Utilizing form validation increases data entry reliability.

    Topics include:

    • Why use data templates?
    • Templates During Data Entry - how to design data validating templates 
    • After Data Entry - ensuring accurate data entry
    • Data Storage and Metadata
    • Best Practices
      • Data Templates
      • Long-term Storage
    • Tools for creating data templates
      • Google Forms
      • Microsoft Excel
      • Microsoft Access
      • OpenOffice - Calc
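    The form-validation idea in the topics above, constraining each entered field to an expected type or set of allowed values, can be sketched in a few lines. The field rules are invented for illustration and are not an actual USGS template:

```python
# Each field maps to a rule the entered (string) value must satisfy.
RULES = {
    "site_id": lambda v: v.strip() != "",                    # required
    "species": lambda v: v in {"trout", "salmon", "bass"},   # controlled vocabulary
    "count":   lambda v: v.isdigit(),                        # non-negative integer
    "date":    lambda v: len(v) == 10 and v[4] == "-" and v[7] == "-",  # YYYY-MM-DD
}

def validate_row(row):
    """Return the (field, value) pairs that fail their rule."""
    return [(f, row.get(f, "")) for f, check in RULES.items()
            if not check(row.get(f, ""))]

good = {"site_id": "A1", "species": "trout", "count": "4", "date": "2024-05-01"}
bad  = {"site_id": "",   "species": "pike",  "count": "-2", "date": "05/01/2024"}
print(validate_row(good))       # []
print(len(validate_row(bad)))   # 4 (every field fails)
```

    Spreadsheet and form tools such as those listed above implement the same idea through built-in data validation rules and dropdown lists.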


  • Introduction to Scientific Visualization

    Scientific Visualization transforms numerical data sets obtained through measurements or computations into graphical representations. Interactive visualization systems allow scientists, engineers, and biomedical researchers to explore and analyze a variety of phenomena in an intuitive and effective way. The course provides an introduction to the principles and techniques of Scientific Visualization. It covers methods corresponding to the visualization of the most common data types, as well as higher-dimensional, so-called multi-field problems. It combines a description of visualization algorithms with a presentation of their practical application. Basic notions of computer graphics and human visual perception are introduced early on for completeness. Simple but very instructive programming assignments offer a hands-on exposure to the most widely used visualization techniques.

    Note that the lectures, demonstration, and tutorial content require a Purdue Credentials, HydroShare, or CILogon account.

  • CCSM Portal/ESG/ESGC Integration

    Access the CCSM Portal/ESG/ESGC Integration slide presentation at https://mygeohub.org/resources/50/download/ccsm.pdf. The CCSM/ESG/ESGC collaboration provides a semantically enabled environment that includes modeling, simulated and observed data, visualization, and analysis.

    Topics include:

    • CCSM Overview
    • CCSM on the TeraGrid
    • Challenges
    • Steps in a typical CCSM Simulation
    • Climate Modeling Portal: Community Climate System Model (CCSM) to simulate climate change on Earth
    • CCSM Self-Describing Workflows 
    • Provenance metadata collection
    • Metadata


  • Simplifying the Reuse and Interoperability of Hydrologic Data Sets and Models with Semantic Metadata that is Human-Readable & Machine-Actionable

    This slide set discusses the big, generic problem facing geoscientists today that stems from a lack of interoperability across a huge number of heterogeneous resources, and how to solve it. Practical solutions to tame the inherent heterogeneity involve collecting standardized, "deep-description" metadata for resources, which are then wrapped with standardized APIs that provide callers with access to both the data and the metadata.

  • Data Rescue: Packaging, Curation, Ingest, and Discovery

    Data Conservancy was introduced to Data Rescue Boulder through our long-time partner Ruth Duerr of the Ronin Institute. Through our conversations, we recognized that Data Rescue Boulder has a need to process a large number of rescued data sets and store them in more permanent homes. We also recognized that Data Conservancy, along with the Open Science Framework, has the software infrastructure to support such activities and bring a selective subset of the rescued data into our own institutional repository. We chose the subset of data based on a selection from one of the Johns Hopkins University faculty members.


    This video shows one of the pathways through which data could be brought into a Fedora-backed institutional repository using our tools and platforms.

    The screencast demonstrates integration between the Data Conservancy Packaging Tool, the Fedora repository, and the Open Science Framework. Resources referenced throughout the screencast are linked below:

    • DC Package Tool GUI
    • DC Package Ingest
    • Fedora OSF Storage Provider (under development as of April 2017)

  • DataONE Data Management Module 01: Why Data Management

    As rapidly changing technology enables researchers to collect large, complex datasets with relative ease, the need to effectively manage these data increases in kind. This is the first lesson in a series of education modules intended to provide a broad overview of various topics related to research data management. This 30-40 minute module covers trends in data collection, storage and loss, the importance and benefits of data management, and an introduction to the data life cycle and includes a downloadable presentation (PPT or PDF) with supporting hands-on exercise and handout.

  • DataONE Data Management Module 02: Data Sharing

    When first sharing research data, researchers often raise questions about the value, benefits, and mechanisms for sharing. Many stakeholders and interested parties, such as funding agencies, communities, other researchers, or members of the public may be interested in research, results and related data. This 30-40 minute lesson addresses data sharing in the context of the data life cycle, the value of sharing data, concerns about sharing data, and methods and best practices for sharing data and includes a downloadable presentation (PPT or PDF) with supporting hands-on exercise and handout.

  • DataONE Data Management Module 03: Data Management Planning

    Data management planning is the starting point in the data life cycle. Creating a formal document that outlines what you will do with the data during and after the completion of research helps to ensure that the data is safe for current and future use. This 30-40 minute lesson describes the benefits of a data management plan (DMP), outlines the components of a DMP, details tools for creating a DMP, provides NSF DMP information, and demonstrates the use of an example DMP and includes a downloadable presentation (PPT or PDF) with supporting hands-on exercise and handout.

  • DataONE Data Management Module 04: Data Entry and Manipulation

    When entering data, common goals include: creating data sets that are valid, have gone through an established process to ensure quality, are organized, and reusable. This lesson outlines best practices for creating data files. It will detail options for data entry and integration, and provide examples of processes used for data cleaning, organization and manipulation and includes a downloadable presentation (PPT or PDF) with supporting hands-on exercise, handout, and supporting data files.

  • DataONE Data Management Module 05: Data Quality Control and Assurance

    Quality assurance and quality control are phrases used to describe activities that prevent errors from entering or staying in a data set. These activities ensure the quality of the data before it is collected, entered, or analyzed, as well as actively monitoring and maintaining the quality of data throughout the study. In this lesson, we define and provide examples of quality assurance, quality control, data contamination and types of errors that may be found in data sets. After completing this lesson, participants will be able to describe best practices in quality assurance and quality control and relate them to different phases of data collection and entry. This 30-40 minute lesson includes a downloadable presentation (PPT or PDF) with supporting hands-on exercise and handout.
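    Two of the quality-control activities this lesson describes, range checks and duplicate detection, can be sketched as follows; the variable names and plausibility bounds are invented for illustration:

```python
def range_check(values, low, high):
    """Return indices of values outside plausible bounds,
    e.g. air temperatures outside -60..60 degrees C."""
    return [i for i, v in enumerate(values) if not (low <= v <= high)]

def duplicate_check(keys):
    """Return keys occurring more than once, e.g. repeated sample IDs."""
    seen, dupes = set(), set()
    for k in keys:
        (dupes if k in seen else seen).add(k)
    return sorted(dupes)

temps = [12.5, 13.1, 999.9, 11.8, -75.0]   # 999.9 and -75.0 look like errors
ids = ["S-01", "S-02", "S-02", "S-03"]
print(range_check(temps, -60, 60))  # [2, 4]
print(duplicate_check(ids))         # ['S-02']
```

    Flagged values then go to the review step of whatever QA/QC workflow the study uses, rather than being silently deleted.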

  • DataONE Data Management Module 06: Data Protection and Backups

    There are several important elements to digital preservation, including data protection, backup and archiving. In this lesson, these concepts are introduced and best practices are highlighted with case study examples of how things can go wrong. Exploring the logistical, technical and policy implications of data preservation, participants will be able to identify their preservation needs and be ready to implement good data preservation practices by the end of the module. This 30-40 minute lesson includes a downloadable presentation (PPT or PDF) with supporting hands-on exercise and handout.
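    One concrete protection practice in the spirit of this lesson is verifying that a backup copy is actually intact rather than assuming it is. A minimal sketch using checksums (the function names are ours, not the module's):

```python
import hashlib

def file_checksum(path, algo="sha256"):
    """Checksum a file, reading in chunks so large data files
    do not need to fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_is_intact(original, backup):
    # A backup copy is trustworthy only if its checksum matches the original.
    return file_checksum(original) == file_checksum(backup)
```

    Recording checksums alongside the data also makes it possible to detect silent corruption long after the backup was made.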

  • DataONE Data Management Module 07: Metadata

    What is metadata? Metadata is data (or documentation) that describes and provides context for data, and it is everywhere around us. Metadata allows us to understand the details of a dataset, including where it was collected, how it was collected, what gaps in the data mean, what the units of measurement are, who collected the data, how it should be attributed, etc. By creating and providing good descriptive metadata for our own data, we enable others to efficiently discover and use the data products from our research. This lesson explores the importance of metadata to data authors, data users, and organizations, and highlights the utility of metadata. It provides an overview of the different metadata standards that exist and the core elements that are consistent across them, guiding users in selecting a metadata standard to work with, and introduces the best practices needed for writing a high-quality metadata record. This 30-40 minute lesson includes a downloadable presentation (PPT or PDF) with supporting hands-on exercise, handout, and supporting data files.
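    The core elements that are consistent across metadata standards can be sketched as a minimal, standard-agnostic record. The values below are invented; a real project would map these fields onto a standard such as Dublin Core, EML, or ISO 19115:

```python
metadata = {
    "title": "Stream temperature observations, Clear Creek, 2023",
    "creator": "J. Smith",
    "date_collected": "2023-06-01/2023-09-30",
    "description": "Hourly water temperature from three logger sites.",
    "identifier": "doi:10.xxxx/example",   # placeholder, not a real DOI
    "units": {"temperature": "degrees Celsius"},
    "methods": "Temperature loggers at 0.5 m depth, factory calibrated.",
    "missing_value_code": "NA",
}

def missing_fields(record, required=("title", "creator", "date_collected",
                                     "description", "identifier")):
    """A first completeness check: list required elements that are absent or empty."""
    return [f for f in required if not record.get(f)]

print(missing_fields(metadata))  # []
```

    A completeness check like this is the simplest form of the record-level metadata evaluation discussed elsewhere in this collection.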

  • DataONE Data Management Module 08: Data Citation

    Data citation is a key practice that supports the recognition of data creation as a primary research output rather than as a mere byproduct of research. Providing reliable access to research data should be a routine practice, similar to the practice of linking researchers to bibliographic references. After completing this lesson, participants should be able to define data citation and describe its benefits; to identify the roles of various actors in supporting data citation; to recognize common metadata elements and persistent data locators and describe the process for obtaining one, and to summarize best practices for supporting data citation. This 30-40 minute lesson includes a downloadable presentation (PPT or PDF) with supporting hands-on exercise and handout.
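    The common elements of a data citation (creator, year, title, version, publisher or repository, persistent identifier) are conventionally assembled in that order. A sketch with an invented dataset; the exact punctuation varies by style guide:

```python
def format_data_citation(creator, year, title, publisher, doi, version=None):
    """Assemble a data citation from its common elements:
    Creator (Year). Title (Version). Publisher. Identifier."""
    ver = f" (Version {version})" if version else ""
    return f"{creator} ({year}). {title}{ver}. {publisher}. https://doi.org/{doi}"

# Invented example dataset:
print(format_data_citation("Smith, J.", 2023, "Clear Creek stream temperatures",
                           "Example Data Repository", "10.xxxx/example", "1.0"))
```

    Resolving the DOI rather than a plain URL is what makes the citation a persistent data locator of the kind the lesson describes.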

  • DataONE Data Management Module 09: Analysis and Workflows

    Understanding the types, processes, and frameworks of workflows and analyses is helpful for researchers seeking to understand more about research, how it was created, and what it may be used for. This lesson uses a subset of data analysis types to introduce reproducibility, iterative analysis, documentation, provenance and different types of processes. Described in more detail are the benefits of documenting and establishing informal (conceptual) and formal (executable) workflows. This 30-40 minute lesson includes a downloadable presentation (PPT or PDF) with supporting hands-on exercise and handout.

  • DataONE Data Management Module 10: Legal and Policy Issues

    Conversations regarding research data often intersect with questions related to ethical, legal, and policy issues for managing research data. This lesson will define copyrights, licenses, and waivers, discuss ownership and intellectual property, and describe some reasons for data restriction. After completing this lesson, participants will be able to identify ethical, legal, and policy considerations that surround the use and management of research data. The 30-40 minute lesson includes a downloadable presentation (PPT or PDF) with supporting hands-on exercise and handout.

  • ORNL DAAC Data Recipes

    A collection of tutorials, called "data recipes," that describe how to use Earth science data from NASA's Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) with easily available tools and commonly used formats, focusing on biogeochemical dynamics data. These tutorials are available to assist those wishing to learn or teach how to obtain and view these data.

  • EarthChem Library: How to Complete a Data Submission Template

    Learn how to complete a data submission template for the EarthChem Library (www.earthchem.org/library). You can access existing templates at www.earthchem.org/data/templates. If you do not see a template appropriate for your data type, please contact EarthChem at info@earthchem.org.

  • iData Tutorial

    A brief tutorial that shows how to upload, preview, and publish from iData. To use the accompanying MyGeoHub tutorial exercises, go to https://mygeohub.org/resources/1217. Note that the number before each step is the timestamp in the YouTube video where that step is demonstrated. Also note that the video does not contain audio content.

  • Imaging and Analyzing Southern California’s Active Faults with High-Resolution Lidar Topography

    Over the past 5+ years, many of Southern California’s active faults have been scanned with airborne lidar through various community and PI-data collection efforts (e.g., the B4 Project, EarthScope, and the post-El Mayor–Cucapah earthquake). All of these community datasets are publicly available (via OpenTopography: http://www.opentopography.org) and powerfully depict the effect of repeated slip along these active faults as well as surface processes in a range of climatic regimes. These datasets are of great interest to the Southern California Earthquake Center (SCEC) research and greater academic communities and have already yielded important new insights into earthquake processes in southern California.

    This is a short course on LiDAR technology, data processing, and analysis techniques. The foci of the course are fault trace and geomorphic mapping applications, integration with other geospatial data, and data visualization and analysis approaches. Course materials include slide presentations, video demonstrations, and text-based software application tutorials.

  • EarthChem Library: Submission Guidelines

    Learn general guidelines for data submission to the EarthChem Library (www.earthchem.org/library), including the data types and formats accepted and additional best practices for submission.

  • How to Manage Your Samples in MySESAR

    The System for Earth Sample Registration (SESAR) operates a registry that distributes the International Geo Sample Number (IGSN). SESAR catalogs and preserves sample metadata profiles, and provides access to the sample catalog via the Global Sample Search.

    MySESAR provides a private working space in the System for Earth Sample Registration. This tutorial will introduce you to how to manage samples in MySESAR, including how to search the sample catalog, how to view and edit samples, how to print labels, how to group samples and how to transfer ownership of samples. For details relating to sample registration, please see tutorials for individual sample and batch sample registration here: http://www.geosamples.org/help/registration.

    MySESAR allows you to:

    • obtain IGSNs for your samples by registering them with SESAR.
    • register samples one at a time by entering metadata into a web form.
    • register multiple samples by uploading metadata in a SESAR spreadsheet form.
    • generate customized SESAR spreadsheet forms.
    • view lists of samples that you registered.
    • edit sample metadata profiles.
    • upload images and other documents such as field notes, maps, or links to publications to a sample profile.
    • restrict access to metadata profiles of your samples.
    • transfer ownership of a sample to another SESAR user.