All Learning Resources

  • Coffee and Code: Write Once Use Everywhere (Pandoc)

    Pandoc (http://pandoc.org) is a document-processing program that runs on multiple operating systems (Mac, Windows, Linux) and can read and write a wide variety of file formats. In many respects, Pandoc can be thought of as a universal translator for documents. This workshop focuses on a subset of input and output document types, just scratching the surface of the transformations Pandoc makes possible.

    Click 00-Overview.ipynb on the provided GitHub page or go directly to the overview, here:
    https://github.com/unmrds/cc-pandoc/blob/master/00-Overview.ipynb
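
    A minimal sketch of driving Pandoc from Python's subprocess module gives a feel for the conversions the workshop covers (the input file name is hypothetical, and Pandoc itself must already be installed):

    ```python
    import subprocess

    # Convert a (hypothetical) Markdown file to Word, letting pandoc
    # infer the formats from the file extensions.
    subprocess.run(["pandoc", "notes.md", "-o", "notes.docx"], check=True)

    # The same input converted with explicit formats, producing a
    # standalone (self-contained) HTML document.
    subprocess.run(["pandoc", "--from", "markdown", "--to", "html",
                    "--standalone", "notes.md", "-o", "notes.html"],
                   check=True)
    ```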

  • U.S. Fish and Wildlife Service National Conservation Training Center

    The National Conservation Training Center (NCTC) of the U.S. Fish and Wildlife Service (USFWS) provides a search service on top of a catalog of courses related to data skills and data management offered at the NCTC physical location and online. The courses include instructor-led, online self-study, and online instructor-led courses, as well as webinars. Some courses are free; others have an associated fee. Many of the courses use various GIS data sources and systems, including USFWS datasets that can be found at https://www.fws.gov/gis/data/national/index.html. The NCTC provides a search interface on its home page.

  • LP DAAC Data Recipes

    A collection of tutorials that describe how to use Earth science data from NASA's Land Processes Distributed Active Archive Center (LP DAAC) using easily available tools and commonly used formats for Earth science data.  These tutorials are available to assist those wishing to learn or teach how to obtain and view these data. 

  • The Horizon 2020 Open Research Data Pilot: Introduction to the Requirements of the Open Research Data Pilot

    This course provides an introduction to the European Commission's Open Research Data Pilot in Horizon 2020. It includes two sections: Introduction to the Requirements of the Open Research Data Pilot and How to Comply with the Requirements of the Open Research Data Pilot. Each section may include videos, presentation slides, demonstrations, associated readings, and quizzes, all of which can be found at the URL of the course home page.
    Learning objectives:

    • Understand what is required of participants in the Horizon 2020 Open Research Data pilot
    • Learn about the concepts of open data, metadata, licensing and repositories
    • Identify key resources and services that can help you to comply with requirements
    • Undertake short tests to check your understanding
  • Software Preservation Network 2019 Webinar Series Episode 3: Making Software Available Within Institutions and Networks

    This episode is one of seven in the Software Preservation Network's 2019 Webinar Series on the Code of Best Practices and Other Legal Tools for Software Preservation. Each episode is recorded; presentation slides, a webinar transcript, and links to supplementary resources are also available. Information about the full series can be found at https://www.softwarepreservationnetwork.org/events.

    In this third episode in a seven-part series about using the Code of Best Practices in Fair Use, you’ll learn about:

    • How fair use enables institutions to provide access to software for use in research, teaching, and learning settings while minimizing any negative impact on ordinary commercial sales
    • How to provide broader networked access to software maintained and shared across multiple institutions, including off-premise access under some circumstances
    • Safeguards to minimize potential risks, such as the establishment of a mechanism to register concerns by stakeholders
  • Software Preservation Network 2019 Webinar Series Episode 6: Making the Code Part of Software Preservation Culture

    This episode is one of seven in the Software Preservation Network's 2019 Webinar Series on the Code of Best Practices and Other Legal Tools for Software Preservation. Each episode is recorded; presentation slides, a webinar transcript, and links to supplementary resources are also available. Information about the full series can be found at https://www.softwarepreservationnetwork.org/events.

    In this sixth episode in a seven-part series about using the Code of Best Practices in Fair Use, you’ll learn:

    • The difference between a document and a shift in practice
    • How other communities have incorporated fair use into their professional practice
    • How to talk to gatekeepers and to allies in your network, to strengthen field-wide practice
  • Software Preservation Network 2019 Webinar Series Episode 5: Understanding the Anti-circumvention Rules and the Preservation Exemptions

    This episode is one of seven in the Software Preservation Network's 2019 Webinar Series on the Code of Best Practices and Other Legal Tools for Software Preservation. Each episode is recorded; presentation slides, a webinar transcript, and links to supplementary resources are also available. Information about the full series can be found at https://www.softwarepreservationnetwork.org/events.

    In this fifth episode in a seven-part series about using the Code of Best Practices in Fair Use, you’ll learn:

    • What the DMCA anti-circumvention provisions are and how they relate to copyright, fair use, and the Code
    • How the triennial exemption rulemaking works and how SPN obtained an exemption for software (and video game) preservation
    • How to apply the new exemption to your own activities
  • Software Preservation Network 2019 Webinar Series Episode 4: Working with Source Code and Software Licenses

    This episode is one of seven in the Software Preservation Network's 2019 Webinar Series on the Code of Best Practices and Other Legal Tools for Software Preservation. Each episode is recorded; presentation slides, a webinar transcript, and links to supplementary resources are also available. Information about the full series can be found at https://www.softwarepreservationnetwork.org/events.

    In this fourth episode in a seven-part series about using the Code of Best Practices in Fair Use, you’ll learn:

    • How the Code treats preservation and access to source code in your collections
    • How software licenses interact with fair use
    • What kinds of software license provisions might prevent using fair use
    • When licenses bind (and do not bind) owners of physical copies of software
    • Non-copyright concerns associated with software licenses
  • Software Preservation Network 2019 Webinar Series Episode 7: International Implications

    This episode is one of seven in the Software Preservation Network's 2019 Webinar Series on the Code of Best Practices and Other Legal Tools for Software Preservation. Each episode is recorded; presentation slides, a webinar transcript, and links to supplementary resources are also available. Information about the full series can be found at https://www.softwarepreservationnetwork.org/events.

    In this seventh and final episode of a series about using the Code of Best Practices in Fair Use, you’ll learn: 

    • Why licensing isn’t a viable solution to copyright issues in preservation projects with global reach
    • How U.S. fair use law applies to initiatives that involve foreign materials
    • How preservationists in other countries can take advantage of local law (and the Code) to advance their work and the roles they can play in advocacy for better and more flexible copyright exceptions
  • Seismic Data Quality Assurance Using IRIS MUSTANG Metrics

    Seismic data quality assurance involves reviewing data in order to identify and resolve problems that limit the use of the data – a time-consuming task for large data volumes! Additionally, no two analysts review seismic data in quite the same way. Recognizing this, IRIS developed the MUSTANG automated seismic data quality metrics system to provide data quality measurements for all data archived at IRIS Data Services. Knowing how to leverage MUSTANG metrics can help users quickly discriminate between usable and problematic data, and the system is flexible enough for each user to adapt it to their own working style.
    This tutorial presents strategies for using MUSTANG metrics to optimize your own data quality review. Many of the examples in this tutorial illustrate approaches used by the IRIS Data Services Quality Assurance (QA) staff.
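
    As a hypothetical sketch of what programmatic access to MUSTANG looks like, the measurements service can be queried over HTTP; the endpoint, metric name, and parameter names below are assumptions and should be verified against the IRIS MUSTANG web-service documentation:

    ```python
    import requests

    # Hypothetical example: query one MUSTANG metric for one station.
    # The endpoint and parameter names are assumptions; check the
    # IRIS MUSTANG web-service documentation before relying on them.
    resp = requests.get(
        "https://service.iris.edu/mustang/measurements/1/query",
        params={"metric": "sample_rms",     # an example quality metric
                "net": "IU", "sta": "ANMO", # example network/station codes
                "format": "text"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.text[:500])  # first rows of the returned measurements
    ```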

  • Project Open Data

    This description of Project Open Data includes information about the code, tools, and case studies that have been gathered to help agencies adopt the Open Data Policy and unlock the potential of government data. Project Open Data will evolve over time as a community resource to facilitate broader adoption of open data practices in government. Anyone – government employees, contractors, developers, the general public – can view and contribute. Learn more about Project Open Data governance and dive right in to help build a better world through the power of open data. Data is a valuable national resource and a strategic asset to the U.S. Government, its partners, and the public.

    Sections include information about: definitions, implementation guidance, tools, resources, case studies, suggestions for community engagement, and open data stakeholder contacts. The page also provides links to the associated code and related discussions in a GitHub repository at https://github.com/project-open-data.

  • Do-It-Yourself Research Data Management Training Kit for Librarians

    Online training materials designed for small groups of librarians who wish to gain confidence in, and an understanding of, research data management. The DIY Training Kit is designed to contain everything needed to complete a similar training course on your own (in small groups) and is based on open educational materials. The materials have been enhanced with Data Curation Profiles and reflective questions based on the experience of academic librarians who have taken the course.

    The training kit includes:
    - Promotional slides for the RDM Training Kit
    - Training schedule
    - Research Data MANTRA online course by EDINA and Data Library, University of Edinburgh
    - Reflective writing questions
    - Selected group exercises (with answers) from UK Data Archive, University of Essex - Managing and sharing data: Training resources, September 2011 (PDF)
    - Podcasts of short talks by the original Edinburgh speakers, for running the course without ‘live’ speakers (Windows or QuickTime versions)
    - Presentation files (pptx) if learners decide to take turns presenting each topic
    - Evaluation forms
    - Independent study assignment: Interview with a researcher, based on a Data Curation Profile, from D2C2, Purdue University Libraries and Boston University Libraries

  • Software Preservation Network 2019 Webinar Series Episode 1: The Code of Best Practices for Fair Use in Software Preservation, Why and How?

    This episode is one of seven in the Software Preservation Network's 2019 Webinar Series on the Code of Best Practices and Other Legal Tools for Software Preservation. Each episode is recorded; presentation slides, a webinar transcript, and links to supplementary resources are also available. Information about the full series can be found at https://www.softwarepreservationnetwork.org/events.
    In this introduction to the webinar series, you’ll learn: 

    • What the Code is 
    • What the copyright doctrine of fair use is 
    • How it addresses problems such as making preservation copies, patron access onsite and remotely, sharing software with other institutions, and providing access to source code 
    • Why best practices codes are a robust, reliable guide to practice
  • Software Preservation Network 2019 Webinar Series Episode 2: Beginning the Preservation Workflow

    This episode is one of seven in the Software Preservation Network's 2019 Webinar Series on the Code of Best Practices and Other Legal Tools for Software Preservation. Each episode is recorded; presentation slides, a webinar transcript, and links to supplementary resources are also available. Information about the full series can be found at https://www.softwarepreservationnetwork.org/events.
    In the second episode in the series, speakers cover how to read the Situations in the Code (parsing descriptions, Principles, and Limitations), and explore how fair use applies to the foundational steps in a preservation workflow including stabilizing, describing, evaluating, and documenting software.

  • Singularity User Guide

    Singularity is a container solution created by necessity for scientific and application-driven workloads.
    Over the past decade and a half, virtualization has gone from an engineering toy to a global infrastructure necessity, and the evolution of enabling technologies has flourished. Most recently, we have seen the introduction of the latest spin on virtualization… “containers”.
    Many scientists, especially those involved with the high-performance computing (HPC) community, could benefit greatly from using container technology, but they need a feature set that differs somewhat from that available with current container technology. This necessity drove the creation of Singularity and articulated its four primary functions:

    • Mobility of compute
    • Reproducibility
    • User freedom
    • Support on existing traditional HPC 


    This user guide introduces Singularity, a free, cross-platform, open-source program that performs operating-system-level virtualization, also known as containerization.
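
    As a minimal sketch of the workflow the guide documents (assuming the singularity command-line tool is installed; the image and command are illustrative only):

    ```python
    import subprocess

    # Pull a small public image from Docker Hub into a local SIF file,
    # then run a command inside the container. The image choice and the
    # command are illustrative only.
    subprocess.run(["singularity", "pull", "alpine.sif", "docker://alpine"],
                   check=True)
    subprocess.run(["singularity", "exec", "alpine.sif",
                    "cat", "/etc/os-release"], check=True)
    ```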

  • Data Warehouse Tutorial For Beginners

    This Data Warehouse Tutorial For Beginners will give you an introduction to data warehousing and business intelligence. You will be able to understand basic data warehouse concepts with examples. The following topics have been covered in this tutorial:
    1. What Is The Need For BI?
    2. What Is Data Warehousing?
    3. Key Terminologies Related To DWH Architecture:
       a. OLTP vs. OLAP
       b. ETL (see the sketch after this list)
       c. Data Mart
       d. Metadata
    4. DWH Architecture
    5. Demo: Creating A DWH
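
    As a toy illustration of the ETL idea in topic 3, under invented file, table, and column names:

    ```python
    import csv
    import sqlite3

    # Extract rows from a CSV file, transform them lightly, and load
    # them into a small SQLite "warehouse" table. All names invented.
    conn = sqlite3.connect("warehouse.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS sales_fact
                    (order_id INTEGER, region TEXT, amount REAL)""")
    with open("orders.csv", newline="") as f:
        for row in csv.DictReader(f):                  # extract
            conn.execute("INSERT INTO sales_fact VALUES (?, ?, ?)",
                         (int(row["order_id"]),
                          row["region"].strip().upper(),  # transform
                          float(row["amount"])))          # load
    conn.commit()
    conn.close()
    ```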

  • Python: Working with Multidimensional Scientific Data

    The availability and scale of scientific data is increasing exponentially. Fortunately, ArcGIS provides functionality for reading, managing, analyzing, and visualizing scientific data stored in three formats widely used in the scientific community – netCDF, HDF, and GRIB. Using satellite and model-derived earth science data, this session will present examples of data management and global-scale spatial and temporal analyses in ArcGIS. Finally, the session will discuss and demonstrate how to extend the data management and analytical capabilities of multidimensional data in ArcGIS using Python packages.
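
    For example, a multidimensional netCDF dataset can be explored with the xarray package (one commonly used option; the session itself centers on ArcGIS, and the file and variable names here are hypothetical):

    ```python
    import xarray as xr

    # Open a (hypothetical) netCDF file and compute a monthly
    # climatology from a time series of sea surface temperature.
    ds = xr.open_dataset("sea_surface_temp.nc")
    print(ds)  # dimensions, coordinates, variables, attributes
    monthly_mean = ds["sst"].groupby("time.month").mean("time")
    monthly_mean.sel(month=1).plot()  # January mean (needs matplotlib)
    ```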

  • MIT Open Courseware: Data Management

    The MIT Libraries Data Management Group hosts a set of workshops during IAP and throughout the year to assist MIT faculty and researchers with data set control, maintenance, and sharing. This resource contains a selection of presentations from those workshops. Topics include an introduction to data management, details on data sharing and storage, data management using the DMPTool, file organization, version control, and an overview of the open data requirements of various funding sources.

  • MIT Open Courseware: Communicating With Data

    Communicating With Data has a distinctive structure and content, combining fundamental quantitative techniques of using data to make informed management decisions with illustrations of how real decision makers, even highly trained professionals, fall prey to errors and biases in their understanding. We present the fundamental concepts underlying the quantitative techniques as a way of thinking, not just a way of calculating, in order to enhance decision-making skills. Rather than survey all of the techniques of management science, we stress those fundamental concepts and tools that we believe are most important for the practical analysis of management decisions, presenting the material as much as possible in the context of realistic business situations from a variety of settings. Exercises and examples are drawn from marketing, finance, operations management, strategy, and other management functions. Course features include selected lecture notes and problem-set assignments with answers. Materials are offered as the course was presented in Summer 2003.

  • MIT Open Courseware: Spatial Database Management and Advanced Geographic Information Systems

    This class offers a very in-depth set of materials on spatial database management, including the tools needed to work in spatial database management and the applications of that data to real-life problem solving. Exercises and tools for working with SQL, as well as sample database sets, are provided. A real-life final project is presented in the projects section. Materials are presented from the course as taught in Spring 2003.
    This semester-long subject (11.521) is divided into two halves. The first half focuses on learning spatial database management techniques and methods, and the second half focuses on using these skills to address a 'real world,' client-oriented planning problem.
    Course Features include:  
    Lecture notes
    Projects (no examples)
    Assignments: problem sets (no solutions)
    Assignments: programming with examples
    Exams (no solutions)

  • SQL for Librarians

    This Library Carpentry lesson introduces librarians to relational database management systems using SQLite. At the conclusion of the lesson you will understand what SQLite does and be able to use SQLite to summarise and link databases. DB Browser for SQLite (https://sqlitebrowser.org) needs to be installed before the start of the training. The tutorial covers:
    1. Introduction to SQL
    2. Basic Queries
    3. Aggregation
    4. Joins and aliases
    5. Database design supplement
    Exercises are included with most of the sections.
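
    A minimal sketch of the kinds of queries the lesson builds up to, using Python's built-in sqlite3 module with an invented two-table schema (the lesson supplies its own dataset):

    ```python
    import sqlite3

    # Invented schema: articles linked to journals, standing in for
    # the lesson's own data.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE journals (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT,
                               journal_id INTEGER, citations INTEGER);
        INSERT INTO journals VALUES (1, 'Data Science Review');
        INSERT INTO articles VALUES (1, 'Open Data', 1, 12),
                                    (2, 'Metadata 101', 1, 5);
    """)
    # Aggregation plus a join with aliases (lesson sections 3 and 4).
    query = """
        SELECT j.name, COUNT(*) AS n_articles, SUM(a.citations) AS cited
        FROM articles AS a
        JOIN journals AS j ON a.journal_id = j.id
        GROUP BY j.name
    """
    print(conn.execute(query).fetchall())  # [('Data Science Review', 2, 17)]
    ```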

  • Learn SQL in 1 Hour - SQL Basics for Beginners

    A crash course in SQL: how to write SQL from scratch in one hour. In this video I show you how to write SQL using SQL Server and SQL Server Management Studio. We go through creating a database, creating tables, inserting, updating, deleting, selecting, grouping, summing, indexing, joining, and every basic you need to get started writing SQL.
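
    The video demonstrates these statements in SQL Server, but the basics look much the same in any dialect; a sketch with Python's built-in sqlite3 module and an invented table:

    ```python
    import sqlite3

    # Create, insert, update, delete, group, and sum - the basics the
    # video walks through, here against an invented staff table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE staff (name TEXT, dept TEXT, salary REAL)")
    conn.executemany("INSERT INTO staff VALUES (?, ?, ?)",
                     [("Ann", "IT", 60000), ("Bo", "IT", 55000),
                      ("Cy", "HR", 50000)])
    conn.execute("UPDATE staff SET salary = salary * 1.05 WHERE dept = 'IT'")
    conn.execute("DELETE FROM staff WHERE name = 'Cy'")
    for dept, total in conn.execute(
            "SELECT dept, SUM(salary) FROM staff GROUP BY dept"):
        print(dept, total)  # IT 120750.0
    ```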

  • Bioconductor: Computational and Statistical Methods for the Analysis of Genomic Data

    Bioconductor is an open source, open development software project to provide tools for the analysis and comprehension of high-throughput genomic data. It is based primarily on the R programming language.

    Bioconductor provides training in computational and statistical methods for the analysis of genomic data. Courses and conference events are listed at the cited URL. You are welcome to use material from previous courses; however, you may not include it in separately published works (articles, books, websites). When using all or parts of the Bioconductor course materials (slides, vignettes, scripts), please cite the authors and refer your audience to the Bioconductor website.

  • Marine Biogeographic Data Management (Contributing and Using Ocean Biogeographic Information System) (2015)

    The course provided an introduction to the Ocean Biogeographic Information System (OBIS). This includes best practices in marine biogeographic data management, data publication, data access, data analysis, and data visualization. Content consists of slide presentations and videos.  NOTE: The URL provided brings you to a page for courses on topics related to data management.  Establishment of login credentials will be required to access the course described here and others on related topics.  

    Aims and Objectives

    • Expand the OBIS network of collaborators
    • Improve marine biogeographic data quality
    • Increase awareness of international standards and best practices related to marine biogeographic data
    • Increase the amount of open access data published through OBIS and its OBIS nodes
    • Increase the use of data from OBIS for science, species conservation, and area-based management applications

    Learning Outcomes

    • Knowledge and understanding of OBIS structure, mission, and objectives
    • Installation and management of IPT
    • Use of Darwin Core standards for species occurrence records, taxonomy, event/sample records and additional biological and environmental parameters.
    • Data quality control tools
    • Publishing data through IPT and contributing datasets to OBIS
    • Use of OBIS data access (SQL, web service, API/R); see the sketch after this list
    • Data visualization tools (ArcGIS Online, CartoDB, QGIS, …)
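
    A hypothetical sketch of the web-service access mentioned above, using the public OBIS API (the endpoint, parameters, and field names are assumptions; check api.obis.org for the current interface):

    ```python
    import requests

    # Fetch a few occurrence records for one species from the OBIS API.
    # Endpoint and parameter names are assumptions to be verified.
    resp = requests.get("https://api.obis.org/v3/occurrence",
                        params={"scientificname": "Mola mola", "size": 10},
                        timeout=30)
    resp.raise_for_status()
    for rec in resp.json().get("results", []):
        print(rec.get("scientificName"),
              rec.get("decimalLongitude"), rec.get("decimalLatitude"))
    ```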

    Target Audience

    • Marine data managers
    • Staff of NODCs or ADUs/OBIS nodes working with marine biodiversity data
    • Principal Investigators of major marine biodiversity expeditions
    • National marine biodiversity focal points

    Sections 

    • Introductions to IOC, IODE, OTGA, and OBIS
    • Biodiversity Data Standards
    • Data Quality Control Procedures
    • Data Access and Visualisation
    • Social Aspects of Data Management
  • Administración de Datos Biogeográficos Marinos (Contribuyendo al Uso de OBIS) (2016)

    The course provides an introduction to the Ocean Biogeographic Information System (OBIS). It includes best practices in the management of marine biogeographic data, publication of data for free access (IPT), access to data, organization, analysis, and visualization.   NOTE: The URL provided brings you to a page for courses on topics related to data management.  Establishment of login credentials will be required to access the course described here and others on related topics.

    Goals:

    • Expand the network of OBIS collaborators.
    • Improve the quality of marine biogeographic data.
    • Increase knowledge of international standards and best practices related to marine biogeographic data.
    • Increase the amount of freely accessible data published through OBIS and its OBIS nodes.
    • Increase the use of OBIS data for science, species conservation, and area-based management applications.

    There are four modules consisting of Spanish language slide presentations and videos:

    • MODULE 1 - General concepts: introductions to IOC, IODE, OTGA, and OBIS, and to related resources (WoRMS, Marine Regions, the Darwin Core biodiversity data standard, and metadata)
    • MODULE 2 - Data Quality Control Procedures
    • MODULE 3 - Best practices in the management and policy of marine biogeographic data, and access, organization, analysis, and visualization of OBIS data
    • MODULE 4 - Publication of data for free access (Integrated Publishing Toolkit - IPT)
  • Research Data Management

    Marine information managers are increasingly seen as major contributors to research data management (RDM) activities in general and in the design of research data services (RDS) in particular. They promote research by providing services for storage, discovery, and access and liaise and partner with researchers and data centers to foster an interoperable infrastructure for the above services.   NOTE: The URL provided brings you to a page for courses on topics related to data management.  Establishment of login credentials will be required to access the course described here and others on related topics.

    The series of units within this training course recognizes the potential contributions that librarians/information managers can offer and hence the need to develop their skills in the research data management process. Course materials consist of slide presentations and student activities. Topics include:

    • Data and information management in International Indian Ocean Expedition-2 (IIOE-2)
    • Open science data
    • Research data and publication lifecycles
    • Research data organization and standards
    • Data management plans
    • Data publication and data citation
    • Access to research data
    • Management of sensitive data
    • Repositories for data management
    • Data management resources
  • Quality Management System Essentials for IODE National Oceanographic Data Centres (NODC) and Associate Data Units (ADU)

    Course overview

    The International Oceanographic Data and Information Exchange (IODE) maintains a global network of National Oceanographic Data Centres (NODC) and Associate Data Units (ADU) responsible for the collection, quality control, archiving, and online publication of many millions of ocean observations. The concept of quality management has become increasingly significant for these centres as they work to meet national and international competency standards for the delivery of data products and services. The IODE Quality Management Framework encourages NODCs and ADUs to implement a quality management system, which can lead to accreditation.

    This workshop provides an introduction for NODCs and ADUs involved in the development, implementation, and management of a Quality Management System based on ISO 9001:2015.   NOTE: The URL provided brings you to a page for courses on topics related to data management.  Establishment of login credentials will be required to access the course described here and others on related topics.

    Aims and objectives

    • To introduce the IODE Quality Management Framework
    • To introduce the ISO 9000 series of standards
    • To provide a description of a Quality Management System
    • To describe the importance of quality management for oceanographic data
    • To describe the accreditation process for NODCs and ADUs

    Note that the exercises are no longer accessible.

    Topics include:

    • Introduction to Quality Management Systems
    • QMS Implementation in Meteorological Services
    • Introduction to ISO standards
    • Understanding ISO 9001:2015
      • Overview
      • ISO 9001:2015 Clause 4. Context of the Organization
      • ISO 9001:2015 Clause 5. Leadership
      • ISO 9001:2015 Clause 6. Planning
      • ISO 9001:2015 Clause 7. Support
      • ISO 9001:2015 Clause 8. Operation
      • ISO 9001:2015 Clause 9. Performance Evaluation
      • ISO 9001:2015 Clause 10. Improvement
    • Developing a quality system manual
    • Experiences and lessons learned from implementing a QMS: SISMER
    • Implementing the Quality Management System
    • IODE Quality Management Framework and Accreditation
  • Code of Best Practices and Other Legal Tools for Software Preservation: 2019 Webinar Series

    Since 2015, the Software Preservation Network (SPN) has worked to create a space where organizations from industry, academia, government, cultural heritage, and the public sphere can contribute their myriad skills and capabilities toward collaborative solutions that will ensure persistent access to all software and all software-dependent objects. The organization's goal is to make it easier to deposit, discover, and reuse software.
    A key activity of the SPN is to provide webinar series on topics related to software preservation. The 2019 series includes:
    Episode 1: The Code of Best Practices for Fair Use in Software Preservation, Why and How?
    Episode 2:  Beginning the Preservation Workflow
    Episode 3:  Making Software Available Within Institutions and Networks
    Episode 4:  Working with Source Code and Software Licenses
    Episode 5:  Understanding the Anti-circumvention Rules and the Preservation Exemptions
    Episode 6:  Making the Code Part of Software Preservation Culture
    Episode 7:  International Implications
    See information about each episode separately.

  • HarvardX Biomedical Data Science Open Online Training - Data Analysis for the Life Sciences Series

    In 2014 funding was received from the NIH BD2K initiative to develop MOOCs for biomedical data science. The courses are divided into the Data Analysis for the Life Sciences series, the Genomics Data Analysis series, and the Using Python for Research course.

    This page includes links to the course material for the three series: Data Analysis for the Life Sciences, Genomics Data Analysis, and Using Python for Research.

    Video lectures are included, along with an R Markdown document to follow along (when available) and the course itself. Note that you must be logged in to edX to access the course. Registration is free. Links to the course pages are also included.

    This site includes a link to two other course sets: Genomics Data Analysis and Using Python for Research.

  • Creating Documentation and Metadata: Creating a Citation for Your Data

    This training module is part of the Federation of Earth Science Information Partners (or ESIP Federation's) Data Management for Scientists Short Course. The subject of this module is "Creating a Citation for Your Data." This module was authored by Robert Cook from the Oak Ridge National Laboratory. Besides the ESIP Federation, sponsors of this Data Management for Scientists Short Course are the Data Conservancy and the United States National Oceanic and Atmospheric Administration (NOAA).  This module is available in both presentation slide and video formats.

  • Responsible Data Use: Data Restrictions

    This training module is part of the Federation of Earth Science Information Partners (or ESIP Federation's) Data Management for Scientists Short Course.  The subject of this module is "Data Restrictions".  The module was authored by Robert R. Downs from the NASA Socioeconomic Data and Applications Center which is operated by CIESIN – the Center for International Earth Science Information Network at Columbia University.  Besides the ESIP Federation, sponsors of this Data Management for Scientists Short Course are the Data Conservancy and the United States National Oceanic and Atmospheric Administration (NOAA).  This module is available in both presentation slide and video formats.  

  • How to Make a Data Dictionary

    A data dictionary is critical to making your research more reproducible because it allows others to understand your data. The purpose of a data dictionary is to explain what all the variable names and values in your spreadsheet really mean. This OSF Best Practice Guide gives examples and instructions on how to assemble a data dictionary.
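
    An invented example of the kind of data dictionary the guide describes, written out as a CSV file with one row per spreadsheet variable:

    ```python
    import csv

    # One row per variable: its name, meaning, type, and allowed values.
    rows = [
        {"variable": "age", "description": "Participant age at enrollment",
         "type": "integer", "units_or_values": "years"},
        {"variable": "smoker", "description": "Current smoking status",
         "type": "categorical", "units_or_values": "0 = no, 1 = yes"},
    ]
    with open("data_dictionary.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    ```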

  • R Program (Data Analysis)--Full Course

    A full basic course in R for data analysis, produced by Simply Statistics. This 42-part video course provides basic instruction on the use of R, where to get help with programming questions, and a number of real-world examples. Links to all the videos are available from the YouTube landing page and include topics such as Getting Help, What is Data, Representing Data, etc. The course is also offered via Coursera (see https://simplystatistics.org/courses). The lecture slides for Coursera's Data Analysis class are available on GitHub at https://github.com/jtleek/dataanalysis.

  • MIT Open Courseware: Introduction to Computer Science and Programming in Python

    6.0001 Introduction to Computer Science and Programming in Python is intended for students with little or no programming experience. It aims to provide students with an understanding of the role computation can play in solving problems and to help students, regardless of their major, feel justifiably confident of their ability to write small programs that allow them to accomplish useful goals. The class uses the Python 3.5 programming language. The course is presented as taught in Fall 2016.
    Course features include:
    Video lectures
    Captions/transcript 
    Interactive assessments
    Lecture notes
    Assignments: problem sets (no solutions)
    Assignments: programming with examples

    MITx offers a free version of this subject on edX. Please register to get started:

    6.00.1x Introduction to Computer Science and Programming Using Python (Started January 22, 2019)

    6.00.2x Introduction to Computational Thinking and Data Science (Started March 26, 2019)

  • Why Cite Data?

    This video explains what data citation is and why it's important. It also discusses what digital object identifiers (DOIs) are and how they are used.

  • Data Management Resources (University of Arizona Research Data Management Services)

    The information on this website is intended to provide guidance on developing the data management plans now required by some federal agencies and to support researchers in the various stages of the research cycle. Topics covered include:
    - Research Data Lifecycle
    - Data Management Plans with funding requirements from many agencies
    - Sharing Data
    - Managing Data
    Workshops and tutorials are available as recordings, slides, and exercises on topics such as:  Data Literacy for Postdocs, Increasing Openness and Reproducibility using the OSF, and Research Data Life Cycle.

  • Rocky Mountain Data Management Training for Certification

    This free training for the Data Management Association's Certified Data Management Professional® exam is brought to you by DAMA's Rocky Mountain Chapter. If you're studying for the CDMP exam, get your discounted copy of the DMBOK V2.

    Data Management Association International – Rocky Mountain Chapter (DAMA-RMC) is a not-for-profit, vendor-independent, professional organization dedicated to advancing the concepts and practices of enterprise information and data resource management (IRM/DRM).

    DAMA-RMC’s primary purpose is to promote the understanding, development and practice of managing information and data as key enterprise assets.  Topics include:
    Week 1: Introduction
    Week 2: Ethics
    Week 3: Data Governance
    Week 4: Data Architecture & Data Modeling and Design
    Week 5: Data Storage & Operations - Data Security
    Week 6: Data Storage & Operations - Data Security
    Week 7: Data Integration & Interoperability, Metadata

  • Coffee and Code: The Command Line - An Introduction

    Graphical user interfaces are fast, often more than fast enough to suit our needs. GUIs are feature rich, can be intuitive, and often filter out a lot of stuff we don't need to know about and aren't interested in. Nearly everything we need to do can be done simply and quickly using a GUI.

    The command line is a great resource for speeding up and automating routine activities without using a lot of processing power. In some cases, it can be better for:

    • Searching for files
    • Searching within files
    • Reading and writing files and data
    • Network activities

    Some file and data recovery processes can only be executed from the command line.

    Plus:

    • The command line is old-fashioned
    • Potential efficiency gains take time to manifest
    • Even Neal Stephenson says it's obsolete
  • Coffee and Code: TaskJuggler

    What is TaskJuggler?

    TaskJuggler is an open-source project planning and management application, written in Ruby, that provides a comprehensive set of tools for project planning, management, and reporting. Versions of TaskJuggler are available for Linux, macOS, and Windows, and multiple Docker containers have been created that encapsulate TaskJuggler for ease of execution without having to install it directly within the host computer's operating system.

    Some key characteristics of TaskJuggler include:

    • Text-based configuration files
    • A command-line tool that is run to perform scheduling and report generation
    • An optional local server process with which a client tool can interact to more rapidly generate reports for projects that have been loaded into the server
    • Email-based workflows for large-scale project tracking
    • Support for web-based, CSV, and iCal reports, enabling delivery of plan products through web browsers, further analysis and visualization of scheduling data outside of TaskJuggler, and sharing of project plans for integration into calendar systems.
    • Scenario support for comparing alternative project paths.
    • Accounting capabilities for estimating and tracking costs and revenue through the life of a project.
  • Coffee and Code: Database Basics

    Why Use a Database to Organize Your Data

    • Consistent structure - defined by you
    • Enforced data types
    • Can scale from single tables to sophisticated relational data models
    • Can be a personal file-based or shared server-based solution, depending on your needs
    • Standard language for interacting with your data
    • "Virtual Tables" can be created on the fly based on database queries 
    • Data can be accessed by many analysis tools
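
    As referenced above, a minimal sketch of a "virtual table" using Python's built-in sqlite3 module: a view is defined by a query and can then be read like a table (schema and data are invented for illustration):

    ```python
    import sqlite3

    # A view ("virtual table") defined over an invented samples table.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE samples (site TEXT, ph REAL, collected TEXT);
        INSERT INTO samples VALUES ('A', 6.8, '2020-05-01'),
                                   ('B', 7.4, '2020-05-02');
        CREATE VIEW acidic_samples AS
            SELECT site, ph FROM samples WHERE ph < 7.0;
    """)
    print(conn.execute("SELECT * FROM acidic_samples").fetchall())
    # [('A', 6.8)]
    ```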