All Learning Resources

  • Risk Analysis Screening Tool (RAST) and Chemical Hazard Engineering Fundamentals (CHEF)

    The Risk Analysis Screening Tool (RAST) is a free, downloadable Excel workbook that helps identify hazard scenarios and evaluate process risk within a single program. The user enters information about the chemical, the equipment or unit operation parameters, the process conditions, and the facility layout. The program then suggests potentially hazardous scenarios and estimates worst-case consequences based on these inputs. The tool is well suited to helping engineers manage changes to a process or evaluate, and potentially reduce, the hazards of a process at the design stage. Attendees will learn what the tool can do and begin to evaluate their own processes.
    In addition to RAST, a companion information package, the Chemical Hazard Engineering Fundamentals (CHEF) documentation, describes in detail the theoretical basis of the methods, techniques, and assumptions that RAST uses in the different hazard evaluation and risk analysis steps.

    Table of contents:
    -RAST Overview
    -CHEF Overview
    -Case Studies
    -Terms and Conditions
    -Download and Install
    -RAST User and CHEF Manuals
    -Frequently Asked Questions (FAQs)
    -RAST Development History

  • What we wish we had learned in Graduate School - a data management training roadmap for graduate students

    This Road Map is a dynamic guide to help graduate students wade through the ocean of data resources. This guide will help identify what data management practices graduate students should be considering at different graduate school milestones.

    Data management training for graduate students is an important but often undervalued part of graduate education. Many graduate students will go on to become professionals who use, produce, and/or manage data with tremendous benefits for both the research community and society. However, our own experiences as graduate students show that the data lifecycle and data management training are not part of the core curriculum in graduate school. As Earth Science Information Partners (ESIP) Community Fellows, we understand that data management is a critical skill in Earth science, and we all wish we had had the opportunity to integrate it into our graduate school experience from the beginning.

    To address the lack of formal data management training in graduate education, we convened a working session during the 2020 ESIP Summer Meeting called “What we wish we had learned in Graduate School?” The session was initially planned as a venue for early career professionals to share resources and lessons learned during their own graduate school experiences, but it sparked broad interest from the Earth science data community and attracted participants across career stages and levels of expertise.

    The outcome of the session has been summarized as a roadmap that follows the DataONE Data Lifecycle. The roadmap projects the data lifecycle onto the traditional graduate school timeline and highlights the benefits of, and resources for, data management training at each stage of the data lifecycle. It will be distributed via ESIP and continued as part of the ESIP Community Program to promote data management training for graduate students in Earth sciences and beyond.

    Also available as a webinar from DataONE:  https://vimeo.com/481534921 

  • Text Mining in Python through the HTRC Feature Reader

    In this lesson, we introduce the HTRC Feature Reader, a library for working with the HTRC Extracted Features dataset using the Python programming language. The HTRC Feature Reader is structured to support work using popular data science libraries, particularly Pandas. Pandas provides simple structures for holding data and powerful ways to interact with it. The HTRC Feature Reader uses these data structures, so learning how to use it will also cover general data analysis skills in Python.
    We introduce a toolkit for working with the 13.6 million volume Extracted Features Dataset from the HathiTrust Research Center. You will learn how to peer at the words and trends of any book in the collection, while developing broadly useful Python data analysis skills.
    Today, you’ll learn:
    -How to work with notebooks, an interactive environment for data science in Python;
    -Methods to read and visualize text data for millions of books with the HTRC Feature Reader; and
    -Data malleability, the skills to select, slice, and summarize extracted features data using the flexible “DataFrame” structure (a brief sketch follows this list).
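
    A minimal sketch of the workflow the lesson teaches, assuming the htrc-features package is installed ("pip install htrc-features") and an Extracted Features file has been downloaded; the file path below is hypothetical:

      from htrc_features import FeatureReader

      # Load one downloaded Extracted Features volume (path is hypothetical)
      fr = FeatureReader(["data/sample_volume.json.bz2"])
      for vol in fr.volumes():
          print(vol.title)  # volume-level metadata
          # tokenlist() returns a pandas DataFrame of per-page token counts
          tl = vol.tokenlist(pos=False)
          # Sum counts per token across pages and show the ten most frequent
          top = (tl.groupby(level="token")["count"]
                   .sum()
                   .sort_values(ascending=False)
                   .head(10))
          print(top)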

  • Exploring and Analyzing Network Data with Python

    This lesson introduces network metrics and how to draw conclusions from them when working with humanities data. You will learn how to use the NetworkX Python package to produce and work with these network statistics.
    In this tutorial, you will learn:
    -To use the NetworkX package for working with network data in Python (a brief sketch follows this list)
    -To analyze humanities network data to find:
    Network structure and path lengths,
    Important or central nodes,
    Communities and subgroups.
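
    A minimal sketch of these metrics on a toy network, assuming NetworkX is installed; the edge list below is hypothetical (the lesson itself uses a dataset of early Quakers):

      import networkx as nx
      from networkx.algorithms import community

      # A small, hypothetical edge list standing in for the lesson's dataset
      edges = [("Fell", "Fox"), ("Fox", "Penn"), ("Penn", "Penington"),
               ("Penington", "Fell"), ("Fox", "Whitehead")]
      G = nx.Graph(edges)

      print(nx.density(G))                                    # network structure
      print(nx.shortest_path_length(G, "Fell", "Whitehead"))  # path length
      print(nx.degree_centrality(G))                          # central nodes

      # Communities and subgroups via modularity-based detection
      print(list(community.greedy_modularity_communities(G)))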

    Prerequisites
    This tutorial assumes that you have:
    -A basic familiarity with networks and/or have read “From Hermeneutics to Data to Networks: Data Extraction and Network Visualization of Historical Sources” by Martin Düring here on Programming Historian;
    -Installed Python 3, not the Python 2 that ships natively with Unix-based operating systems such as macOS (if you need assistance installing Python 3, check out the Hitchhiker’s Guide to Python); and
    -Installed the pip package installer.

  • Getting Started With Topic Modeling And MALLET

    In this lesson, you will first learn what topic modeling is and why you might want to employ it in your research. You will then learn how to install and work with the MALLET natural language processing toolkit to do so. MALLET involves modifying an environment variable (essentially, setting up a short-cut so that your computer always knows where to find the MALLET program) and working with the command line (i.e., typing commands manually rather than clicking on icons or menus). We will run the topic modeler on some example files and look at the kinds of outputs that MALLET generates. This will give us a good idea of how it can be used on a corpus of texts to identify topics found in the documents without reading them individually.
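
    A minimal sketch of these steps, driven from Python purely for illustration; the MALLET install location and file paths are hypothetical, and the flags shown are MALLET's standard import-dir and train-topics options:

      import os
      import subprocess

      # The environment variable acts as the "short-cut" described above
      os.environ["MALLET_HOME"] = "/opt/mallet-2.0.8"  # hypothetical location
      mallet = os.path.join(os.environ["MALLET_HOME"], "bin", "mallet")

      # Import a directory of plain-text files into MALLET's internal format
      subprocess.run([mallet, "import-dir",
                      "--input", "sample-data/web/en",
                      "--output", "tutorial.mallet",
                      "--keep-sequence", "--remove-stopwords"], check=True)

      # Train a 20-topic model and write the outputs the lesson examines
      subprocess.run([mallet, "train-topics",
                      "--input", "tutorial.mallet",
                      "--num-topics", "20",
                      "--output-doc-topics", "tutorial_composition.txt",
                      "--output-topic-keys", "tutorial_keys.txt"], check=True)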

  • Introducción a Topic Modeling y MALLET

    In this lesson, you will first learn what topic modeling is and why you might want to use it in your research. You will then learn how to install and work with MALLET, a natural language processing (NLP) toolkit used to carry out this kind of analysis. MALLET requires modifying an environment variable (that is, setting up a shortcut so that the computer always knows where to find the MALLET program) and working with the command line (that is, typing commands manually rather than clicking on icons or menus).

  • Making Research Data Available

    There is a growing awareness of the importance of research data. Elsevier is committed to encouraging and supporting researchers who want to store, share, discover and reuse data. To this end, Elsevier has set up several initiatives that allow authors to make their data available when they publish with Elsevier. The webinars in the collection (located on the bottom half of the web page) cover:

    • Ways for researchers to store, share, discover, and use data
    • How to create a good research data management plan  
    • Data Citation: How can you as a researcher benefit from citing data? 

  • Hivebench Electronic Lab Notebook

    The time it takes to prepare, analyze and share experimental results can seem prohibitive, especially in the current, highly competitive world of biological research. However, not only is data sharing mandated by certain funding and governmental bodies, it also has distinct advantages for research quality and impact. Good laboratory practices recommend that all researchers use electronic lab notebooks (ELN) to save their results. This resource includes numerous short video demonstrations of Hivebench:

    • Start using Hivebench, the full demo
    • Creating a Hivebench account
    • Managing protocols & methods
    • Storing experimental findings in a notebook
    • Managing research data
    • Doing research on iPhone and iPad
    • Editing experiments
    • Collaborating with colleagues
    • Searching for results
    • Staying up to date with the newsfeed
    • Planning experiments with the calendar
    • Using open science protocols
    • Mendeley Data Export
    • Managing inventory of reagents
    • Signing and counter signing experiments
    • Archiving notebooks
    • Keeping data alive when researchers move on: organizing data, methods, and protocols

  • Remote Sensing for Monitoring Land Degradation and Sustainable Cities Sustainable Development Goals (SDGs) [Advanced]

    The Sustainable Development Goals (SDGs) are an urgent call for action by countries to preserve our oceans and forests, reduce inequality, and spur economic growth. The land management SDGs call for consistent tracking of land cover metrics. These metrics include productivity, land cover, soil carbon, urban expansion, and more. This webinar series will highlight a tool that uses NASA Earth observations to track land degradation and urban development in support of the corresponding SDG targets.

    SDGs 11 and 15 relate to sustainable urbanization and land use and cover change. SDG 11 aims to "make cities and human settlements inclusive, safe, resilient, and sustainable." SDG 15 aims to "combat desertification, drought, and floods, and strive to achieve a land degradation neutral world." To assess progress towards these goals, indicators have been established, many of which can be monitored using remote sensing. 

    In this training, attendees will learn to use a freely-available QGIS plugin, Trends.Earth, created by Conservation International (CI) and have special guest speakers from the United Nations Convention to Combat Desertification (UNCCD) and UN Habitat. Trends.Earth allows users to plot time series of key land change indicators. Attendees will learn to produce maps and figures to support monitoring and reporting on land degradation, improvement, and urbanization for SDG indicators 15.3.1 and 11.3.1. Each part of the webinar series will feature a presentation, hands-on exercise, and time for the speaker to answer live questions. 

    Learning Objectives: By the end of this training, attendees will: 

    • Become familiar with SDG Indicators 15.3.1 and 11.3.1
    • Understand the basics of how to compute the sub-indicators of SDG 15.3.1, such as productivity, land cover, and soil carbon
    • Understand how to use the Trends.Earth Urban Mapper web interface
    • Learn the basics of the Trends.Earth toolkit, including:
      • Plotting time series
      • Downloading data
      • Using default or custom data for productivity, land cover, and soil organic carbon
      • Calculating SDG 15.3.1 spatial layers and summary tables
      • Calculating urban change metrics
      • Creating urban change summary tables

    Course Format: This training has been developed in partnership with Conservation International, United Nations Convention to Combat Desertification (UNCCD), and UN Habitat. 

    • Three, 1.5-hour sessions that include lectures, hands-on exercises, and a question and answer session
    • The first session will be broadcast in English, and the second session will contain the same content, broadcast in Spanish (see the separate record for the Spanish version at https://dmtclearinghouse.esipfed.org/node/10935).


    Prerequisites:

    Each of the 3 parts includes links to the recordings, presentation slides, exercises, and Question & Answer transcripts.

  • Agency Requirements: NSF Data Management Plans

    This training module is part of the Federation of Earth Science Information Partners (ESIP Federation) Data Management for Scientists Short Course. The subject of this module is "NSF Data Management Plans". The module was authored by Ruth Duerr of the National Snow and Ice Data Center in Boulder, Colorado. Besides the ESIP Federation, the sponsors of this Data Management for Scientists Short Course are the Data Conservancy and the United States National Oceanic and Atmospheric Administration (NOAA).

    If you’ve done any proposal writing for the National Science Foundation (NSF), you know that NSF now requires that all proposals be accompanied by a data management plan of no more than two pages. Data management plans are expected to respond to NSF’s existing policy on the dissemination and sharing of research results. You can find a description of this policy in the NSF Award and Administration Guide, to which we provide a link later in this module. In addition, we should note that NSF’s proposal submission system, FastLane, will not accept a proposal that does not have a data management plan attached as a supplementary document.

    Individual directorates may have specific guidance for data management plans. For example, the Ocean Sciences Division specifies that data be available within two years after acquisition. Specifications for some individual directorates may provide a list of places where you must archive your data and what you should do if none of the archives in the list can take your data. They may also have additional requirements for both annual and final reporting beyond the general case requirements from NSF.  In addition, individual solicitations may have program specific guidelines to which you need to pay attention.  This module is available in both presentation slide and video formats.

  • Intro to Data Management

    This guide will provide general information about data management, including an overview of Data Management Plans (DMPs), file naming conventions, documentation, security, backup, publication, and preservation. We have included the CMU data life cycle to put the pieces in context in the Data 101 section.
    The CMU Libraries provides research data management resources for guidance on data management, planning, and sharing for researchers, faculty, and students.

  • Content-based Identifiers for Iterative Forecasts: A Proposal

    Iterative forecasts pose particular challenges for archival data storage and retrieval. In an iterative forecast, data about the past and present must be downloaded and fed into an algorithm that will output a forecast data product. Previous forecasts must also be scored against the realized values in the latest observations. Content-based identifiers provide a convenient way to consistently identify input and outputs and associated scripts. These identifiers are:
    (1) location-agnostic – they don’t depend on a URL or other location-based authority (such as a DOI)
    (2) reproducible – the same data file always has the same identifier
    (3) frictionless – cheap and easy to generate with widely available software, no authentication or network connection
    (4) sticky – the identifier cannot become unstuck or separated from the content
    (5) compatible – most existing infrastructure, including DataONE, can quite readily use these identifiers.

    In this webinar, the speaker will illustrate an example iterative forecasting workflow. In the process, he will highlight some newly developed R packages for making this easier.
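
    The speaker's examples use R, but the underlying idea is easy to sketch in a few lines of Python: hash the file's bytes and use the digest as the identifier (the "hash://sha256/" prefix follows the hash-URI convention; the file name below is hypothetical):

      import hashlib

      def content_id(path: str) -> str:
          """Return a content-based identifier for the file at `path`."""
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(8192), b""):
                  h.update(chunk)
          return "hash://sha256/" + h.hexdigest()

      # The same bytes always yield the same identifier, wherever the file lives
      print(content_id("forecast_2020-06-01.csv"))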

  • Supporting Researchers in Discovering Data Repositories

    How do researchers go about identifying a repository to preserve their data? Do they have all the information they need to make an informed decision? Are there resources available to help?
    There is a myriad of repositories available to support data preservation, and they differ across multiple axes. So which one is right for your data? The answer is largely ‘it depends’. But this can be frustrating to a new researcher looking to publish data for the first time. What questions need to be asked to untangle these dependencies, and where can a researcher go for answers?
    Conversations and sessions at domain conferences have consistently suggested that researchers need more support in navigating the landscape of data repositories, and with support from ESIP Funding Friday, we sought to provide it. In this webinar, we will introduce a resource under development that aims to serve as a gateway for information about repository selection. With links to existing resources, games, and outreach materials, we aim to facilitate the discovery of data repositories, and we welcome contributions to increase the value of this resource.

  • A FAIR Afternoon: On FAIR Data Stewardship for Technology Hotel (/ETH4) beneficiaries

    A FAIR data awareness event for the fourth round of the Enabling Technologies Hotels programme. One of the aims of the programme is to promote the application of the FAIR data principles in research data stewardship, data integration, methods, and standards. This relates to the objective of the Dutch National Plan Open Science that research data be made suitable for reuse.

    With this FAIR data training, ZonMw and DTL aim to help researchers (hotel guests and managers) that have obtained a grant in the 4th round of the programme to apply FAIR data management in their research.

  • Genomics Workshop

    Getting Started

    This lesson assumes no prior experience with the tools covered in the workshop. However, learners are expected to have some familiarity with biological concepts, including nucleotide abbreviations and the concept of genomic variation within a population. 
    Workshop overview: the workshop materials include a recommended dataset to be used with the lessons.

    Project organization and management:
    Learn how to structure your metadata, organize and document your genomics data and bioinformatics workflow, and access data on the NCBI sequence read archive (SRA) database.
    Introduction to the command line:
    Learn to navigate your file system, create, copy, move, and remove files and directories, and automate repetitive tasks using scripts and wildcards.
    Data wrangling and processing:
    Use command-line tools to perform quality control, align reads to a reference genome, and identify and visualize between-sample variation.
    Introduction to cloud computing for genomics:
    Learn how to work with Amazon AWS cloud computing and how to transfer data between your local computer and cloud resources.


  • EarthCube FAIR How-to Series: Getting a DOI for Your Data

    Identifiers are an important means of making research data more FAIR (Findable, Accessible, Interoperable, Reusable). This quick reference guide briefly describes why, when, where, and how to acquire a digital object identifier (DOI) for research data. The guide is targeted at Earth science researchers but should be useful for other researchers and their support staff as well. It is part of the EarthCube FAIR How-to Series, which is designed to provide targeted, practical lessons on making research data FAIR.

  • EarthCube FAIR How-to Series: Choosing a Repository for Your Data

    This quick reference guide briefly describes why, when, where, and how to choose a FAIR-enabled data repository for research data. The guide is targeted at Earth science researchers but should be useful for other researchers and their support staff as well. It is part of the EarthCube FAIR How-to Series, which is designed to provide targeted, practical lessons on making research data FAIR.

  • Biological Observation Data Standardization - A Primer for Data Managers

    Many standards exist for use with biological data, but navigating them can be difficult for data managers who are new to them. The Earth Science Information Partners (ESIP) Biological Data Standards Cluster developed this primer to give managers of biological data a quick, easy resource for navigating a selection of the standards that exist. The primer aims to spread awareness of existing standards and is intended to be shared online and at conferences to increase the adoption of standards for biological data and help make the data FAIR.

  • Data Management Support for Researchers

    Tips and advice from a variety of researchers, data managers, and service providers to help with data management. Titles include:

    • Sharing data: good for science, good for you
    • What support needs to be provided to assist researchers with data management?
    • How can choices about data capture open up, or limit, opportunities for researchers?
    • What should researchers do to help their data survive?
    • Why should researchers share their data?
    • How can repositories and data centres help researchers?

  • USGS Data Templates Overview

    Creating Data Templates for data collection, data storage, and metadata saves time and increases consistency. Utilizing form validation increases data entry reliability.
    Topics include:

    • Why use data templates?
    • Templates During Data Entry - how to design data-validating templates
    • After Data Entry - ensuring accurate data entry
    • Data Storage and Metadata
    • Best Practices
      • Data Templates
      • Long-term Storage
    • Tools for creating data templates
      • Google Forms
      • Microsoft Excel
      • Microsoft Access
      • OpenOffice - Calc


  • USGS Data Management Plans

    The resources in this section will help you understand how to develop your DMP. The checklist outlines the minimum USGS requirements. The FAQ and DMP Writing Best Practices list below will help you understand other important considerations when developing your own DMP. To help standardize or provide guidance on DMPs, a science center or funding source may choose to document their own Data Management strategy. A number of templates and examples are provided.  This page also includes resources related to the overall research data lifecycle that will help put data management plans in the context of the research done.  Information is provided that identifies what the U.S. Geological Survey Manual requires.

  • United Nations Online Access to Research in Environment (UN OARE) Training Materials

    Here you can find training modules on information management topics that will help you learn not only how to open journals and download full-text articles from the OARE website, but also how to use OARE’s search databases to find articles on specific topics in thousands of scientific journals from major publishers around the world. Topics include: search strategies for finding scientific research on environmental issues; accessing full-text articles, e-journals, e-books, and other internet resources such as indexes for searching EBSCO, SCOPUS (Elsevier), environmental gateways, and other portals. Downloadable PowerPoint slides are available for each topic, along with a workbook for most of the modules.

  • FAIR Webinar Series

    This webinar series explores each of the four FAIR principles (Findable, Accessible, Interoperable, Reusable) in depth - practical case studies from a range of disciplines, Australian and international perspectives, and resources to support the uptake of FAIR principles.

    The FAIR data principles were drafted by the FORCE11 group in 2015. The principles have since received worldwide recognition as a useful framework for thinking about sharing data in a way that will enable maximum use and reuse.  A seminal article describing the FAIR principles can also be found at:  https://www.nature.com/articles/sdata201618.

    This series is of interest to those who create, manage, connect, and publish research data at institutions:
    - researchers and research teams who need to ensure their data is reusable and publishable
    - data managers and researchers
    - Librarians, data managers and repository managers
    - IT who need to connect Institutional research data, HR and other IT systems

  • ANDS Guide to Persistent Identifiers: Awareness Level

    A persistent identifier (PID) is a long-lasting reference to a resource. That resource might be a publication, dataset or person. Equally it could be a scientific sample, funding body, set of geographical coordinates, unpublished report or piece of software. Whatever it is, the primary purpose of the PID is to provide the information required to reliably identify, verify and locate it. A PID may be connected to a set of metadata describing an item rather than to the item itself.
    The contents of this page are:
    - What is a persistent identifier?
    - Why do we need persistent identifiers?
    - How do persistent identifiers work?
    - What needs to be done, by whom?

    Other ANDS Guides are available at the working level and expert level from this page.

  • ANDS Guides to Persistent Identifiers: Working Level

    This module is to familiarize researchers and administrators with persistent identifiers as they apply to research. It gives an overview of the various issues involved with ensuring identifiers provide ongoing access to research products. The issues are both technical and policy; this module focuses on policy issues. 
    This guide goes through the same issues as the ANDS guide Persistent identifiers: awareness level, but in more detail. The introductory module is not a prerequisite for this module.
    The contents of this page are:
    - Why persistent identifiers?
    - What is an Identifier?
    - Data and Identifier life cycles
    - What is Identifier Resolution?
    - Technologies
    - Responsibilities
    - Policies
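
    Identifier resolution, one of the topics above, is simple to see in action: a resolver maps a persistent identifier to the resource's current location. A minimal Python sketch, assuming the requests package and network access, using the DOI of the FAIR principles article cited elsewhere on this page:

      import requests

      doi = "10.1038/sdata.2016.18"
      # The doi.org resolver redirects to the current landing page
      response = requests.head(f"https://doi.org/{doi}", allow_redirects=True)
      print(response.url)  # the location the identifier currently resolves to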

    Other ANDS Guides on this topic at the awareness level and expert level can be found from this page.

  • ANDS Guides to Persistent identifiers: Expert Level

    This module aims to provide research administrators and technical staff with a thorough understanding of the issues involved in setting up a persistent identifier infrastructure. It provides an overview of the types of possible identifier services, including core services and value-added services. It offers a comprehensive review of the policy issues that are involved in setting up persistent identifiers. Finally, a glossary captures the underlying concepts on which the policies and services are based.

    Other ANDS Guides on this topic are available for the awareness level and the working level from this page.

  • 23 (research data) Things

    23 (research data) Things is self-directed learning for anybody who wants to know more about research data. Anyone can do 23 (research data) Things at any time.  Do them all, do some, cherry-pick the Things you need or want to know about. Do them on your own, or get together a Group and share the learning.  The program is intended to be flexible, adaptable and fun!

    Each of the 23 Things offers a variety of learning opportunities with activities at three levels of complexity: ‘Getting started’, ‘Learn more’ and ‘Challenge me’. All resources used in the program are online and free to use.

  • 'Good Enough' Research Data Management: A Brief Guide for Busy People

    This brief guide presents a set of good data management practices that researchers can adopt, regardless of their data management skills and levels of expertise.

  • De bonnes pratiques en gestion des données de recherche: Un guide sommaire pour gens occupés (French version of the 'Good Enough' RDM)

    This brief guide presents a set of good practices that researchers can adopt, regardless of their skills or level of expertise.

  • Groundwater Monitoring using Observations from NASA’s Gravity Recovery and Climate Experiment (GRACE) Missions [Introductory]

    Groundwater makes up roughly 30% of global freshwater. It provides drinking water for much of the world’s population and irrigation for close to one-third of global agricultural land. Because of this reliance, monitoring groundwater is crucial for water resources and land management. The Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GRACE-FO) missions from NASA and the German Research Centre for Geosciences (GFZ) provide large-scale terrestrial water storage estimates from 2002 to the present. Each mission uses twin satellites to accurately map variations in the Earth's gravity field and surface mass distribution.

    GRACE observations have been used for detecting groundwater depletion and for drought and flood predictions. This lightning-style training is designed to meet the demand and interest from the applications community in technologies that can be used to support water resources management. The webinar will provide an overview of the GRACE missions, groundwater data availability, and their applications in the monitoring and management of water resources. This lightning webinar will also serve as the foundation for the upcoming advanced webinar: Using Earth Observations to Monitor Water Budgets for River Basin Management II.

    Learning Objectives: By the end of this training, attendees will be able to:

    • Access GRACE data and analyze regional groundwater changes (a brief sketch follows the agenda below)


    Course Format:

    • A single, 1.5-hour webinar that includes a lecture and a question & answer session
    • No certificate of completion will be available for this webinar


    Prerequisites: Fundamentals of Remote Sensing

    Agenda:
    • Introduction to GRACE and GRACE-FO
    • Data Format, Variables, and Resolution
    • GRACE Data Access
    • Q&A Session
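
    A minimal sketch of the kind of analysis the webinar introduces, assuming the xarray and matplotlib packages and a downloaded GRACE terrestrial water storage file; the file name, variable name, and region below are hypothetical, so check the product documentation for the real ones:

      import xarray as xr
      import matplotlib.pyplot as plt

      # Open a (hypothetical) GRACE land mass-anomaly file
      ds = xr.open_dataset("GRCTellus_mascon.nc")
      lwe = ds["lwe_thickness"]  # liquid water equivalent thickness anomaly

      # Average over an illustrative lat/lon box and plot the time series
      region = lwe.sel(lat=slice(30, 40), lon=slice(70, 80)).mean(dim=["lat", "lon"])
      region.plot()
      plt.show()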

  • Understanding Phenology with Remote Sensing [Introductory]

    This training will focus on the use of remote sensing to understand phenology: the study of life-cycle events. Phenological patterns and processes can vary greatly across a range of spatial and temporal scales and can provide insights about ecological processes like invasive species encroachment, drought, wildlife habitat, and wildfire potential. This training will highlight NASA-funded tools to observe and study phenology across a range of scales. Attendees will be exposed to the latest in phenological observatory networks and science, and how these observations relate to ecosystem services, the carbon cycle, biodiversity, and conservation.

    Learning Objectives: 
    By the end of this training series, attendees will be able to:

    • Summarize NASA satellites and sensors that can be used for monitoring global phenology patterns
    • Outline the benefits and limitations of NASA data for phenology
    • Describe the multi-scalar approach to vegetation life cycle analyses
    • Compare and contrast data from multiple phenology networks
    • Evaluate various projects and case-study examples of phenological data


    Course Format: 

    • Three, one-hour sessions


    Prerequisites: Attendees who have not completed the course(s) below may be unprepared for the pace of this training.
    Fundamentals of Remote Sensing  

    Part 1: Overview of Phenology and Remote Sensing

    • Introduction to NASA data and Phenology
    • Land Surface Phenology from MODIS and VIIRS


    Part 2: Scales of Phenology

    • Resolving challenges associated with variability in space, time, and resolution for phenology research and applications
    • USA-National Phenology Network (NPN) and The National Ecological Observatory Network (NEON) 
    • Phenocam: Near-surface phenology
    • Conservation Science Partners


    Part 3: Utility and Advantage of Multi-Scale Analysis

    • Field-based phenology and gridded products
    • Case-study examples:
    • Integration of PhenoCam near-surface remote sensing and satellite phenological data
    • Greenwave modeling
    • Urbanization and plant phenology


    Each of the 3 parts includes links to the recordings, presentation slides, and Question & Answer transcripts.

  • NASA Earthdata Webinar Series

    Monthly webinars on discovery of and access to NASA Earth science data sets, services, and tools. Webinars are archived on YouTube from 2013 to the present. Presenters are experts in different domains within NASA's Earth science research areas and are usually affiliated with NASA data centers and/or data archives. Specific titles for the current year's webinars can be found on the main page as well as on separate pages for each year. These webinars are available to assist those wishing to learn or teach how to obtain and view these data.

  • NASA Earthdata Video Tutorials

    Short video tutorials on topics related to available NASA EOSDIS data products, various types of data discovery, data access, and data tool demonstrations, such as the Panoply tool for creating line plots. The videos are accessible on YouTube from the listing on the main webinars and tutorials page. These tutorials are available to assist those wishing to learn or teach how to obtain and view these data.

  • Research Data Management Community Training

    Good research data management is of great importance for high-quality research. Implementing professional research data management from the start helps to avoid problems in the data creation and curation phases.

    Content:

    • Definition(s) of RDM
    • Benefits and Advantages of RDM
    • Research Data Life-Cycle
    • Structure and components of RDM
    • Stakeholders
    • Recommended literature

  • Access Policies and Usage Regulations: Licenses

    This webinar on licensing and policy looks at why it is important that research data be provided with licenses.

    Content:

    • Benefits of sharing research data
    • Challenges
    • Types of licenses
    • Data ownership and reuse
    • Using creative commons in archiving research data


    Objectives:
    During the workshop, participants will acquire a basic knowledge of data licensing.

  • U.S. Fish and Wildlife Service National Conservation Training Center

    The National Conservation Training Center (NCTC) of the U.S. Fish and Wildlife Service (USFWS) provides a search service on top of a catalog of courses, related to data skills and data management, offered at the NCTC physical location and online. The courses include instructor-led courses, online self-study courses, online instructor-led courses, and webinars. Some courses are free; others have a fee associated with them. Many of the courses use various GIS data sources and systems, including USFWS datasets that can be found at https://www.fws.gov/gis/data/national/index.html. The NCTC provides a search interface on its home page.

  • MIT Open Courseware: Introduction to Computer Science and Programming in Python

    6.0001 Introduction to Computer Science and Programming in Python  is intended for students with little or no programming experience. It aims to provide students with an understanding of the role computation can play in solving problems and to help students, regardless of their major, feel justifiably confident of their ability to write small programs that allow them to accomplish useful goals. The class uses the Python 3.5 programming language. Course presented as taught in Fall 2016.
    Course features include:
    Video lectures
    Captions/transcript 
    Interactive assessments
    Lecture notes
    Assignments: problem sets (no solutions)
    Assignments: programming with examples

    MITx offers a free version of this subject on edX. Please register to get started:

    6.00.1x Introduction to Computer Science and Programming Using Python (Started January 22, 2019)

    6.00.2x Introduction to Computational Thinking and Data Science (Started March 26, 2019)

  • MIT Open Courseware: Communicating With Data

    Communicating With Data has a distinctive structure and content, combining fundamental quantitative techniques of using data to make informed management decisions with illustrations of how real decision makers, even highly trained professionals, fall prey to errors and biases in their understanding. We present the fundamental concepts underlying the quantitative techniques as a way of thinking, not just a way of calculating, in order to enhance decision-making skills. Rather than survey all of the techniques of management science, we stress those fundamental concepts and tools that we believe are most important for the practical analysis of management decisions, presenting the material as much as possible in the context of realistic business situations from a variety of settings. Exercises and examples are drawn from marketing, finance, operations management, strategy, and other management functions. Course features include selected lecture notes and problem set assignments with answers. Materials are offered as the course was presented in Summer 2003.

  • LEARN Toolkit of Best Practice for Research Data Management

    The LEARN Project's Toolkit of Best Practice for Research Data Management expands on the issues outlined in the LERU Roadmap for Research Data (2013). It is freely downloadable and is a deliverable for the European Commission. It includes:

    • 23 Best-Practice Case Studies from institutions around the world, drawn from issues in the original LERU Roadmap;
    • 8 Main Sections, on topics such as Policy and Leadership, Open Data, Advocacy and Costs;
    • One Model RDM Policy, produced by the University of Vienna and accompanied by guidance and an overview of 20 RDM policies across Europe;
    • An Executive Briefing in six languages, aimed at senior institutional decision makers.


    The Executive Briefing of the LEARN Toolkit is available in English, Spanish, German, Portuguese, French and Italian translations.

  • LEARN - Research Data Management Toolkit

    For research-performing organisations, the deluge of research data presents many challenges in areas such as policy and skills development, training, costs, and governance. To help address these issues, LEARN published its Toolkit of Best Practice for Research Data Management.

    The Toolkit expands on the issues outlined in the LERU Roadmap for Research Data (2013). It is nearly 200 pages long, and includes:

    • 23 Best-Practice Case Studies from institutions around the world, drawn from issues in the original LERU Roadmap;
    • 8 Main Sections, on topics such as Policy and Leadership, Open Data, Advocacy and Costs;
    • One Model RDM Policy, produced by the University of Vienna and accompanied by guidance and an overview of 20 RDM policies across Europe;


    • An Executive Briefing in six languages, aimed at senior institutional decision makers.