All Learning Resources

  • Data Warehouse Tutorial For Beginners

    This Data Warehouse Tutorial For Beginners will give you an introduction to data warehousing and business intelligence, and will help you understand basic data warehouse concepts with examples. The following topics are covered in this tutorial; a minimal ETL sketch in Python follows the list:
    1. What Is The Need For BI?
    2. What Is Data Warehousing?
    3. Key Terminologies Related To DWH Architecture:
       a. OLTP Vs OLAP
       b. ETL
       c. Data Mart
       d. Metadata
    4. DWH Architecture
    5. Demo: Creating A DWH
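
    The ETL step in topic 3 is the piece most easily previewed in code. Below is a minimal, hypothetical sketch of the extract-transform-load pattern in Python, using the standard library's sqlite3 module as a stand-in warehouse; the tutorial itself demonstrates a full DWH build, and all table and column names here are invented for illustration.

      import sqlite3

      # Extract: hypothetical rows pulled from an operational (OLTP) system.
      orders = [
          (1, "2023-01-05", 19.99),
          (2, "2023-01-05", 5.00),
          (3, "2023-02-11", 42.50),
      ]

      # Transform: derive a month key, the kind of denormalized attribute an
      # analytical (OLAP) schema keeps around for fast grouping.
      facts = [(oid, date[:7], amount) for oid, date, amount in orders]

      # Load: write into a warehouse-style fact table (SQLite as a stand-in).
      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE fact_sales (order_id INT, month TEXT, amount REAL)")
      con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", facts)

      # OLAP-style query: aggregate by month instead of reading single orders.
      for month, total in con.execute(
          "SELECT month, SUM(amount) FROM fact_sales GROUP BY month"
      ):
          print(month, total)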

  • Python: Working with Multidimensional Scientific Data

    The availability and scale of scientific data is increasing exponentially. Fortunately, ArcGIS provides functionality for reading, managing, analyzing, and visualizing scientific data stored in three formats widely used in the scientific community – netCDF, HDF, and GRIB. Using satellite and model-derived earth science data, this session will present examples of data management and global-scale spatial and temporal analyses in ArcGIS. Finally, the session will discuss and demonstrate how to extend the data management and analytical capabilities of multidimensional data in ArcGIS using Python packages.
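
    The session pairs ArcGIS with general-purpose Python packages. As a taste of the latter, here is a minimal, hypothetical sketch of reading a netCDF file with the widely used netCDF4 package (an assumption; the session may use other packages), with an invented file name and variable name:

      # Read a multidimensional netCDF file (hypothetical names throughout).
      # Requires the netCDF4 package: pip install netCDF4
      from netCDF4 import Dataset

      with Dataset("sea_surface_temperature.nc") as nc:
          print(nc.dimensions.keys())       # e.g. time, lat, lon
          print(nc.variables.keys())        # variables stored in the file

          sst = nc.variables["sst"]         # hypothetical variable name
          print(sst.dimensions, sst.shape)  # metadata travels with the data

          # Slice one time step without reading the whole array into memory.
          first_step = sst[0, :, :]
          print(float(first_step.mean()))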

  • MIT Open Courseware: Data Management

    The MIT Libraries Data Management Group hosts a set of workshops during IAP and throughout the year to assist MIT faculty and researchers with data set control, maintenance, and sharing. This resource contains a selection of presentations from those workshops. Topics include an introduction to data management, details on data sharing and storage, data management using the DMPTool, file organization, version control, and an overview of the open data requirements of various funding sources.

  • MIT Open Courseware: Communicating With Data

    Communicating With Data has a distinctive structure and content, combining fundamental quantitative techniques of using data to make informed management decisions with illustrations of how real decision makers, even highly trained professionals, fall prey to errors and biases in their understanding. We present the fundamental concepts underlying the quantitative techniques as a way of thinking, not just a way of calculating, in order to enhance decision-making skills. Rather than survey all of the techniques of management science, we stress those fundamental concepts and tools that we believe are most important for the practical analysis of management decisions, presenting the material as much as possible in the context of realistic business situations from a variety of settings. Exercises and examples are drawn from marketing, finance, operations management, strategy, and other management functions. Course features include selected lecture notes and problem set assignments with answers. Materials are offered as the course was presented in Summer 2003.

  • MIT Open Courseware: Spatial Database Management and Advanced Geographic Information Systems

    This class offers a very in-depth set of materials on spatial database management, including materials on the tools needed to work in spatial database management, and the applications of that data to real-life problem solving.  Exercises and tools for working with SQL, as well as sample database sets, are provided.  A real-life final project is presented in the projects section.  Materials are presented from the course as taught in Spring 2003.  
    This semester-long subject (11.521) is divided into two halves. The first half focuses on learning spatial database management techniques and methods, and the second half focuses on using these skills to address a 'real world,' client-oriented planning problem. 
    Course Features include:  
    Lecture notes
    Projects (no examples)
    Assignments: problem sets (no solutions)
    Assignments: programming with examples
    Exams (no solutions)

  • SQL for Librarians

    This Library Carpentry lesson introduces librarians to relational database management systems using SQLite. At the conclusion of the lesson you will understand what SQLite does and be able to use SQLite to summarise and link databases. DB Browser for SQLite (https://sqlitebrowser.org) needs to be installed before the start of the training. The tutorial covers the following topics; a minimal query sketch in Python follows the list:
    1. Introduction to SQL
    2. Basic Queries
    3. Aggregation
    4. Joins and aliases
    5. Database design supplement
    Exercises are included with most of the sections.
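
    Sections 2 through 4 (queries, aggregation, joins) can also be previewed without DB Browser using Python's built-in sqlite3 module. The tables below are hypothetical stand-ins for illustration, not the lesson's own dataset:

      import sqlite3

      # Hypothetical stand-in tables, not the lesson's dataset.
      con = sqlite3.connect(":memory:")
      con.executescript("""
          CREATE TABLE journals (journal_id INTEGER, title TEXT);
          CREATE TABLE articles (article_id INTEGER, journal_id INTEGER,
                                 citations INTEGER);
          INSERT INTO journals VALUES (1, 'Journal A'), (2, 'Journal B');
          INSERT INTO articles VALUES (10, 1, 4), (11, 1, 9), (12, 2, 2);
      """)

      # A join with aliases plus aggregation: citation counts per journal.
      query = """
          SELECT j.title, COUNT(*) AS n_articles, AVG(a.citations) AS avg_cites
          FROM articles AS a
          JOIN journals AS j ON a.journal_id = j.journal_id
          GROUP BY j.title;
      """
      for row in con.execute(query):
          print(row)   # ('Journal A', 2, 6.5) then ('Journal B', 1, 2.0)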

  • Learn SQL in 1 Hour - SQL Basics for Beginners

    A crash course in SQL: how to write SQL from scratch in one hour. In this video I show you how to write SQL using SQL Server and SQL Server Management Studio. We go through Creating a Database, Creating Tables, Inserting, Updating, Deleting, Selecting, Grouping, Summing, Indexing, Joining, and every basic you need to get started writing SQL.
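
    The video uses Microsoft SQL Server, but the same statement types can be tried with nothing more than Python's bundled SQLite (a stand-in assumption here; Server-specific syntax will differ). A minimal sketch of the create/insert/update/delete/select cycle:

      import sqlite3

      con = sqlite3.connect(":memory:")  # SQLite stand-in for SQL Server

      # Creating a table, then the basic data-modification statements.
      con.execute("CREATE TABLE staff (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")
      con.execute("INSERT INTO staff (name, salary) VALUES ('Ada', 50000)")
      con.execute("INSERT INTO staff (name, salary) VALUES ('Grace', 60000)")
      con.execute("UPDATE staff SET salary = salary + 5000 WHERE name = 'Ada'")
      con.execute("DELETE FROM staff WHERE name = 'Grace'")

      # Selecting with grouping and summing.
      for name, total in con.execute("SELECT name, SUM(salary) FROM staff GROUP BY name"):
          print(name, total)   # Ada 55000.0

      # Indexing to speed up lookups on a column.
      con.execute("CREATE INDEX idx_staff_name ON staff (name)")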

  • Bioconductor: Computational and Statistical Methods for the Analysis of Genomic Data

    Bioconductor is an open source, open development software project to provide tools for the analysis and comprehension of high-throughput genomic data. It is based primarily on the R programming language.

    Bioconductor provides training in computational and statistical methods for the analysis of genomic data. Courses and conference events are listed at the cited URL. You are welcome to use material from previous courses. However, you may not include these in separately published works (articles, books, websites). When using all or parts of the Bioconductor course materials (slides, vignettes, scripts), please cite the authors and refer your audience to the Bioconductor website.

  • Marine Biogeographic Data Management (Contributing and Using Ocean Biogeographic Information System) (2015)

    The course provided an introduction to the Ocean Biogeographic Information System (OBIS). This includes best practices in marine biogeographic data management, data publication, data access, data analysis, and data visualization. Content consists of slide presentations and videos.  NOTE: The URL provided brings you to a page for courses on topics related to data management.  Establishment of login credentials will be required to access the course described here and others on related topics.  

    Aims and Objectives

    • Expand the OBIS network of collaborators
    • Improve marine biogeographic data quality
    • Increase awareness of international standards and best practices related to marine biogeographic data
    • Increase the amount of open access data published through OBIS and its OBIS nodes
    • Increase the use of data from OBIS for science, species conservation, and area-based management applications

    Learning Outcomes

    • Knowledge and understanding of OBIS structure, mission, and objectives
    • Installation and management of IPT
    • Use of Darwin Core standards for species occurrence records, taxonomy, event/sample records and additional biological and environmental parameters.
    • Data quality control tools
    • Publishing data through IPT and contributing datasets to OBIS
    • Use of OBIS data access (SQL, web service, API/R). 
    • Data visualization tools (ArcGIS Online, CartoDB, QGIS, …)

    Target Audience

    • Marine data managers
    • Staff of NODCs or ADUs/OBIS nodes working with marine biodiversity data
    • Principal Investigators of major marine biodiversity expeditions
    • National marine biodiversity focal points

    Sections 

    • Introductions to IOC, IODE, OTGA, and OBIS
    • Biodiversity Data Standards
    • Data Quality Control Procedures
    • Data Access and Visualisation
    • Social Aspects of Data Management

  • Marine Biogeographic Data Management (Contributing to the Use of OBIS) (2016, in Spanish)

    The course provides an introduction to the Ocean Biogeographic Information System (OBIS). It includes best practices in the management of marine biogeographic data, publication of data for free access (IPT), access to data, organization, analysis, and visualization.   NOTE: The URL provided brings you to a page for courses on topics related to data management.  Establishment of login credentials will be required to access the course described here and others on related topics.

    Goals:

    • Expand the network of OBIS collaborators.
    • Improve the quality of marine biogeographic data.
    • Increase knowledge of international standards and best practices related to marine biogeographic data.
    • Increase the amount of freely accessible data published through OBIS and its OBIS nodes.
    • Increase the use of OBIS data for science, species conservation, and area-based management applications.

    There are four modules consisting of Spanish-language slide presentations and videos:

    • MODULE 1 - General Background and Concepts: introductions to IOC, IODE, OTGA, and OBIS and to related resources (WoRMS, Marine Regions, the Darwin Core biodiversity data standard, and metadata)
    • MODULE 2 - Data Quality Control Procedures
    • MODULE 3 - Best practices in marine biogeographic data management and policy; access, organization, analysis, and visualization of OBIS data
    • MODULE 4 - Publication of data for free access (Integrated Publishing Toolkit, IPT)

  • Research Data Management

    Marine information managers are increasingly seen as major contributors to research data management (RDM) activities in general and in the design of research data services (RDS) in particular. They promote research by providing services for storage, discovery, and access and liaise and partner with researchers and data centers to foster an interoperable infrastructure for the above services.   NOTE: The URL provided brings you to a page for courses on topics related to data management.  Establishment of login credentials will be required to access the course described here and others on related topics.

    The series of units within this training course recognizes the potential contributions that librarians/information managers can offer and hence the need to develop their skills in the research data management process. Course materials consist of slide presentations and student activities. Topics include:

    • Data and information management in International Indian Ocean Expedition-2 (IIOE-2)
    • Open science data
    • Research data and publication lifecycles
    • Research data organization and standards
    • Data management plans
    • Data publication and data citation
    • Access to research data
    • Management of sensitive data
    • Repositories for data management
    • Data management resources

  • Quality Management System Essentials for IODE National Oceanographic Data Centres (NODC) and Associate Data Units (ADU)

    Course overview

    The International Oceanographic Data and Information Exchange (IODE) maintains a global network of National Oceanographic Data Centres (NODC) and Associate Data Units (ADU) responsible for the collection, quality control, archiving, and online publication of many millions of ocean observations. The concept of quality management has become increasingly significant for these centres in meeting national and international competency standards for the delivery of data products and services. The IODE Quality Management Framework encourages NODCs and ADUs to implement a quality management system, which can lead to accreditation.

    This workshop provides an introduction for NODCs and ADUs involved in the development, implementation, and management of a Quality Management System based on ISO 9001:2015.   NOTE: The URL provided brings you to a page for courses on topics related to data management.  Establishment of login credentials will be required to access the course described here and others on related topics.

    Aims and objectives

    • To introduce the IODE Quality Management Framework
    • To introduce the ISO 9000 series of standards
    • To provide a description of a Quality Management System
    • To describe the importance of quality management for oceanographic data
    • To describe the accreditation process for NODCs and ADUs

    Note that the exercises are no longer accessible.

    Topics include:

    • Introduction to Quality Management Systems
    • QMS Implementation in Meteorological Services
    • Introduction to ISO standards
    • Understanding ISO 9001:2015
      • Overview
      • ISO 9001:2015 Clause 4. Context of the Organization
      • ISO 9001:2015 Clause 5. Leadership
      • ISO 9001:2015 Clause 6. Planning
      • ISO 9001:2015 Clause 7. Support
      • ISO 9001:2015 Clause 8. Operation
      • ISO 9001:2015 Clause 9. Performance Evaluation
      • ISO 9001:2015 Clause 10. Improvement
    • Developing a quality system manual
    • Experiences and lessons learned from implementing a QMS: SISMER
    • Implementing the Quality Management System
    • IODE Quality Management Framework and Accreditation

  • Code of Best Practices and Other Legal Tools for Software Preservation: 2019 Webinar Series

    Since 2015, the Software Preservation Network (SPN) has worked to create a space where organizations from industry, academia, government, cultural heritage, and the public sphere can contribute their myriad skills and capabilities toward collaborative solutions that will ensure persistent access to all software and all software-dependent objects. The organization's goal is to make it easier to deposit, discover, and reuse software.
    A key activity of the SPN is providing webinar series on topics related to software preservation. The 2019 series includes:
    Episode 1: The Code of Best Practices for Fair Use in Software Preservation, Why and How?
    Episode 2:  Beginning the Preservation Workflow
    Episode 3:  Making Software Available Within Institutions and Networks
    Episode 4:  Working with Source Code and Software Licenses
    Episode 5:  Understanding the Anti-circumvention Rules and the Preservation Exemptions
    Episode 6:  Making the Code Part of Software Preservation Culture
    Episode 7:  International Implications
    See information about each episode separately.

  • HarvardX Biomedical Data Science Open Online Training - Data Analysis for the Life Sciences Series

    HarvardX Biomedical Data Science Open Online Training

    In 2014 funding was received from the NIH BD2K initiative to develop MOOCs for biomedical data science. The courses are divided into the Data Analysis for the Life Sciences series, the Genomics Data Analysis series, and the Using Python for Research course.

    This page includes links to the course material for the three series:

    Data Analysis for the Life Sciences
    Genomics Data Analysis
    Using Python for Research

    Video lectures are included along with, when available, an R markdown document to follow along and a link to the course itself. Note that you must be logged in to edX to access the courses; registration is free.

  • How to Make a Data Dictionary

    A data dictionary is critical to making your research more reproducible because it allows others to understand your data. The purpose of a data dictionary is to explain what all the variable names and values in your spreadsheet really mean. This OSF Best Practice Guide gives examples and instruction on how to assemble a data dictionary.
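
    As a concrete illustration (hypothetical variables, not drawn from the OSF guide itself), a data dictionary can be as simple as a second spreadsheet describing each column of the first; the sketch below writes one as a CSV with Python's standard library:

      import csv

      # A hypothetical data dictionary: one row documents one variable in the
      # data spreadsheet, including its meaning, type, and allowed values.
      dictionary = [
          {"variable": "subject_id", "description": "Unique participant ID",
           "type": "integer", "allowed_values": "1-9999"},
          {"variable": "visit_date", "description": "Date of study visit",
           "type": "date (YYYY-MM-DD)", "allowed_values": "any valid date"},
          {"variable": "bp_systolic", "description": "Systolic blood pressure",
           "type": "integer (mmHg)", "allowed_values": "70-250; -9 = missing"},
      ]

      with open("data_dictionary.csv", "w", newline="") as f:
          writer = csv.DictWriter(f, fieldnames=dictionary[0].keys())
          writer.writeheader()
          writer.writerows(dictionary)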

  • R Program (Data Analysis)--Full Course

    A full basic course in R software for data analysis, produced by Simply Statistics. This 42-part video course provides basic instruction on the use of R, where to get help with programming questions, and a number of real-world examples. Links to all the videos are available from the YouTube landing page and include topics such as Getting Help, What is Data, and Representing Data. The course is also offered via Coursera (see https://simplystatistics.org/courses). The lecture slides for Coursera's Data Analysis class are available on GitHub at https://github.com/jtleek/dataanalysis.

  • MIT Open Courseware: Introduction to Computer Science and Programming in Python

    6.0001 Introduction to Computer Science and Programming in Python is intended for students with little or no programming experience. It aims to provide students with an understanding of the role computation can play in solving problems and to help students, regardless of their major, feel justifiably confident of their ability to write small programs that allow them to accomplish useful goals. The class uses the Python 3.5 programming language. The course is presented as taught in Fall 2016.
    Course features include:
    Video lectures
    Captions/transcript 
    Interactive assessments
    Lecture notes
    Assignments: problem sets (no solutions)
    Assignments: programming with examples

    MITx offers a free version of this subject on edX. Please register to get started:

    6.00.1x Introduction to Computer Science and Programming Using Python (Started January 22, 2019)

    6.00.2x Introduction to Computational Thinking and Data Science (Started March 26, 2019)

  • Why Cite Data?

    This video explains what data citation is and why it's important. It also discusses what digital object identifiers (DOIs) are and how they are used.

  • Data Management Resources (University of Arizona Research Data Management Services)

    The information on this website is intended to provide guidance on developing the data management plans now required by some federal agencies and to support researchers in the various stages of the research cycle. Topics covered include:
    - Research Data Lifecycle
    - Data Management Plans with funding requirements from many agencies
    - Sharing Data
    - Managing Data
    Workshops and tutorials are available as recordings, slides, and exercises on topics such as:  Data Literacy for Postdocs, Increasing Openness and Reproducibility using the OSF, and Research Data Life Cycle.

  • Rocky Mountain Data Management Training for Certification

    This free training for the Data Management Association's Certified Data Management Professional® exam is brought to you by DAMA's Rocky Mountain Chapter. If you're studying for the CDMP exam, get your discounted copy of the DMBOK V2.

    Data Management Association International – Rocky Mountain Chapter (DAMA-RMC) is a not-for-profit, vendor-independent, professional organization dedicated to advancing the concepts and practices of enterprise information and data resource management (IRM/DRM).

    DAMA-RMC’s primary purpose is to promote the understanding, development and practice of managing information and data as key enterprise assets.  Topics include:
    Week 1: Introduction
    Week 2: Ethics
    Week 3: Data Governance
    Week 4: Data Architecture & Data Modeling and Design
    Week 5: Data Storage & Operations - Data Security
    Week 6: Data Storage & Operations - Data Security
    Week 7: Data Integration & Interoperability, Metadata

  • Coffee and Code: The Command Line - An Introduction

    Graphical user interfaces are fast, often more than fast enough to suit our needs. GUIs are feature rich, can be intuitive, and often filter out a lot of stuff we don't need to know about and aren't interested in. Nearly everything we need to do can be done simply and quickly using a GUI.

    The command line is a great resource for speeding up and automating routine activities without using a lot of processing power. In some cases, it can be better for:

    • Searching for files
    • Searching within files
    • Reading and writing files and data
    • Network activities

    Some file and data recovery processes can only be executed from the command line.

    On the other hand:

    • The command line is old-fashioned
    • Potential efficiency gains take time to manifest
    • Even Neal Stephenson says it's obsolete

  • Coffee and Code: TaskJuggler

    What is TaskJuggler?

    TaskJuggler is an open source project-planning and management application, written in Ruby, that provides a comprehensive set of tools for project planning, management, and reporting. Versions of TaskJuggler are available for Linux, macOS, and Windows, and multiple Docker containers have been created that encapsulate TaskJuggler for ease of execution without having to install it directly within the host computer operating system.

    Some key characteristics of TaskJuggler include:

    • Text-based configuration files
    • A command-line tool that is run to perform scheduling and report generation
    • An optional local server process with which a client tool can interact to more rapidly generate reports for projects that have been loaded into the server
    • Email-based workflows for large-scale project tracking
    • Support for web-based, CSV, and iCal reports, enabling delivery of plan products through web browsers, further analysis and visualization of scheduling data outside of TaskJuggler, and sharing of project plans for integration into calendar systems
    • Scenario support for comparing alternative project paths
    • Accounting capabilities for estimating and tracking costs and revenue through the life of a project

  • Coffee and Code: Database Basics

    Why Use a Database to Organize Your Data

    • Consistent structure - defined by you
    • Enforced data types
    • Can scale from single tables to sophisticated relational data models
    • Can be a personal file-based or shared server-based solution, depending on your needs
    • Standard language for interacting with your data
    • "Virtual Tables" can be created on the fly based on database queries 
    • Data can be accessed by many analysis tools
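
    To make the "virtual tables" bullet concrete, here is a minimal, hypothetical sketch using Python's built-in SQLite, in which a view is defined by a query and then used like a table (the same idea applies to server-based engines):

      import sqlite3

      con = sqlite3.connect(":memory:")   # a personal, file-style database
      con.executescript("""
          CREATE TABLE measurements (site TEXT, year INTEGER, value REAL);
          INSERT INTO measurements VALUES
              ('A', 2020, 1.5), ('A', 2021, 2.0), ('B', 2020, 3.0);

          -- A "virtual table" created on the fly from a query: it stores no
          -- data itself but can be queried like any table.
          CREATE VIEW site_means AS
              SELECT site, AVG(value) AS mean_value
              FROM measurements GROUP BY site;
      """)

      print(con.execute("SELECT * FROM site_means").fetchall())
      # [('A', 1.75), ('B', 3.0)]
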
  • Coffee and Code: Basics of Programming with Python

    This collection of materials was developed for the University of New Mexico Libraries' Code & Coffee workshop series to provide a high-level introduction to programming concepts illustrated with the Python programming language. The workshop content is contained in a collection of Jupyter Notebooks:

    Conceptual Overview: Programming Concepts.ipynb
    Surname analysis example: Name_Data.ipynb
    Library shelf space analysis example: Space Analysis.ipynb
    IR Keywords Versus IR "Aboutness" example [no longer functional due to decommissioning of UNM DSpace instance]: IR Keywords Versus IR "Aboutness".ipynb

    Why learn the basic principles of programming?

    Thinking algorithmically (a key element of the process used in developing programming solutions) is a powerful problem-solving skill that is reinforced with practice, and programming is great practice. It exercises skills such as the following (a miniature example follows the lists below):

    • Defining a problem with sufficient specificity that a solution can be effectively developed
    • Defining what the end-product of the process should be
    • Breaking a problem down into smaller components that interact with each other
    • Identifying the objects/data and actions that are needed to meet the requirements of each component
    • Linking components together to solve the defined problem
    • Identifying potential expansion points to reuse the developed capacity for solving related problems
    • Capabilities to streamline and automate routine processes through scripting are ubiquitous:
      • Query languages built into existing tools (e.g. Excel, ArcGIS, Word)
      • Specialized languages for specific tasks (e.g. R, Pandoc template language, PHP)
      • General purpose languages for solving many problems (e.g. Bash shell, Perl, Python, C#)
    • Repeatability with documentation
    • Scalability
    • Portability
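
    As a miniature, hypothetical illustration of those decomposition bullets (loosely in the spirit of the surname-analysis notebook, whose actual contents may differ), here is one problem broken into small linked components in Python:

      # Hypothetical miniature: which initial letters are most common in a
      # list of surnames? Broken into small components linked together.
      from collections import Counter

      def clean(name):
          """Component 1: normalize one raw value."""
          return name.strip().title()

      def initial(name):
          """Component 2: extract the datum each record contributes."""
          return name[0]

      def count_initials(names):
          """Component 3: link the components to solve the defined problem."""
          return Counter(initial(clean(n)) for n in names if n.strip())

      surnames = ["garcia", "  Smith", "chen", "Singh", "sato", "smithers"]
      print(count_initials(surnames).most_common(2))   # [('S', 4), ('G', 1)]
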
  • Mozilla Science Lab Open Data Instructor Guides

    This site is a resource for train-the-trainer type materials on Open Data. It's meant to provide a series of approachable, fun, collaborative workshops where each of the modules is interactive and customizable to meet a variety of audiences.

  • Data Management: Using Metadata to Find, Interpret & Share Your Data

    Ever struggle to find that file you tucked away last semester (or last week)? Having trouble remembering details in order to re-use your own data? Need others to understand & use your data? This workshop will introduce you to the power of metadata: what it is, why it’s so important, and how to get started with it. Stop wasting time in finding, interpreting or sharing your data. Whether you are new to thinking about metadata or you’re looking to build off some basic knowledge, this workshop is for you!

  • Data Management: Strategies for Data Sharing and Storage

    Not sure how to publish and share your data? Unclear on the best formats and information to include for optimal data reuse? This workshop will review existing options for long-term storage and strategies for sharing data with other researchers. Topics will include: data publication and citation, persistent identifiers, versioning, data formats and metadata for reuse, repositories, cost models and management strategies.

  • Learning programming on Khan Academy

    In this course, we'll be teaching the concepts of the JavaScript programming language and the cool functions you can use with it in the ProcessingJS library. Before you dig in, here's a brief tour of how we teach programming here on Khan Academy, and how we think you can learn the most.

    Normally, we teach on Khan Academy using videos, but here in programming land, we teach with something we call "talk-throughs". A talk-through is like a video, but it's actually interactive: you can pause at any time if you want to play with the code yourself, and you can spin off your own version of what we made. An animated GIF of a talk-through is included.

    See Terms of Service at:  https://www.khanacademy.org/about/tos 

  • Essentials 4 Data Support

    Essentials 4 Data Support is an introductory course for those people who (want to) support researchers in storing, managing, archiving and sharing their research data.  The Essentials 4 Data Support course aims to contribute to professionalization of data supporters and coordination between them. Data supporters are people who support researchers in storing, managing, archiving and sharing their research data.  Course may be taken online-only (no fee) with or without registration, or online plus face to face meetings as a full course with certificate (for a fee).  

  • Unidata Data Management Resource Center

    In this online resource center, Unidata provides information about evolving data management requirements, techniques, and tools. They walk through common requirements of funding agencies to make it easier for researchers to fulfill them. In addition, they explain how to use some common tools to build a scientific data management workflow that makes the life of an investigator easier and enhances access to their work. The resource center provides information about: 1) Agency Requirements, 2) Best Practices, 3) Tools for Managing Data, 4) Data Management Plan Resources, 5) Data Archives, and 6) Scenarios and Use Cases.

  • Introduction to code versioning and collaboration with Git and GitHub: An EDI VTC Tutorial.

    This tutorial is an introduction to code versioning and collaboration with Git and GitHub.  Tutorial goals are to help you:  

    • Understand basic Git concepts and terminology.
    • Apply concepts as Git commands to track versioning of a developing file.
    • Create a GitHub repository and push local content to it.
    • Clone a GitHub repository to the local workspace to begin developing.
    • Be inspired to incorporate Git and GitHub into your workflow.

    There are a number of exercises within the tutorial to help you apply the concepts learned; a sketch of the basic command sequence follows below.
    Follow-up questions can be directed via email to Colin Smith ([email protected]) and Susanne Grossman-Clarke ([email protected]).
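
    As a taste of the "apply concepts as Git commands" goal, the basic track-and-commit sequence looks like this. The sketch below is hypothetical, drives command-line git from Python for consistency with the other examples on this page, and assumes git is installed and on your PATH:

      # Hypothetical sketch of the basic Git versioning sequence, driven from
      # Python via subprocess. Assumes git is installed and on PATH.
      import subprocess
      import tempfile
      from pathlib import Path

      def git(*args, cwd):
          """Run one git command, stopping on failure."""
          subprocess.run(["git", *args], cwd=cwd, check=True)

      with tempfile.TemporaryDirectory() as workdir:
          repo = Path(workdir)
          git("init", cwd=repo)                         # start tracking the folder
          git("config", "user.name", "Example User", cwd=repo)   # placeholder identity
          git("config", "user.email", "[email protected]", cwd=repo)

          (repo / "analysis.txt").write_text("v1\n")    # a developing file
          git("add", "analysis.txt", cwd=repo)          # stage the change
          git("commit", "-m", "First version", cwd=repo)

          (repo / "analysis.txt").write_text("v2\n")    # revise the file
          git("add", "analysis.txt", cwd=repo)
          git("commit", "-m", "Second version", cwd=repo)

          git("log", "--oneline", cwd=repo)             # both versions are tracked
          # Pushing to GitHub would follow: git remote add origin <URL>; git push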

  • Transform and visualize data in R using the packages tidyr, dplyr and ggplot2: An EDI VTC Tutorial.

    The two tutorials, presented by Susanne Grossman-Clarke, demonstrate how to tidy data in R with the package “tidyr” and transform data using the package “dplyr”. The goal of these data transformations is to support data visualization with the package “ggplot2” for data analysis and scientific publications, examples of which were shown.
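
    The tutorials themselves are in R; purely for consistency with the other sketches on this page, here is the same tidy-transform-plot pattern rendered in Python with pandas and matplotlib (hypothetical data, not the tutorial's own code):

      # Tidy -> transform -> plot, the pattern the tutorial teaches with R's
      # tidyr/dplyr/ggplot2, sketched here with pandas and matplotlib.
      import pandas as pd
      import matplotlib.pyplot as plt

      # Hypothetical "wide" data: one column per year.
      wide = pd.DataFrame({"site": ["A", "B"], "2020": [1.5, 3.0], "2021": [2.0, 3.5]})

      # Tidy (cf. tidyr::pivot_longer): one row per observation.
      tidy = wide.melt(id_vars="site", var_name="year", value_name="value")

      # Transform (cf. dplyr::group_by + summarise): mean value per year.
      summary = tidy.groupby("year", as_index=False)["value"].mean()

      # Plot (cf. ggplot2): visualize the summarized data.
      summary.plot(x="year", y="value", kind="bar", legend=False, ylabel="mean value")
      plt.show()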

  • Principles of Database Management

    There are 14 videos included in this web lecture series by Prof. dr. Bart Baesens: Introduction to Database Management Systems. Prof. dr. Bart Baesens holds a PhD in Applied Economic Sciences from KU Leuven University (Belgium). He is currently an associate professor at KU Leuven and a guest lecturer at the University of Southampton (United Kingdom). He has done extensive research on data mining and its applications. For more information, visit http://www.dataminingapps.com. In this lecture series, the fundamental concepts behind databases, database technology, database management systems, and data models are explained. Topics discussed include: applications, definitions, file-based vs. database approaches to data management, the elements of database systems, and the advantages of database design. Separate URLs are provided for each lecture in this series, found on the YouTube lecture series page.


  • CMU Intro to Database Systems Course

    This course focuses on the design and implementation of database management systems. Topics include data models (relational, document, key/value), storage models (n-ary, decomposition), query languages (SQL, stored procedures), storage architectures (heaps, log-structured), indexing (order preserving trees, hash tables), transaction processing (ACID, concurrency control), recovery (logging, checkpoints), query processing (joins, sorting, aggregation, optimization), and parallel architectures (multi-core, distributed). Case studies on open-source and commercial database systems are used to illustrate these techniques and trade-offs. The course is appropriate for students with strong systems programming skills. There are 26 videos associated with the course, which was originally offered in Fall 2018 as course 15-445/645 at Carnegie Mellon University.

  • Research Data Management: Basics and Best Practices

    Big data, data management, and data life cycle are all buzzwords being discussed among librarians, researchers, and campus administrators across the country and around the world. Learn the basics of these terms and what services an academic library system might be expected to offer patrons, while identifying personal opportunities for improving how you work with your own data. You will have the opportunity to explore DMPTool during this session.

  • Linux for Biologists

    This workshop is designed to prepare a biologist to work in the interactive Linux environment of the BioHPC Lab workstations. Slides are available for each of the sessions. The basics of the Linux operating system needed to operate the workstations are covered. In particular, the topics include:

    Navigating a Linux workstation:
    Logging in and out of a Linux machine, directory structure, basic commands for dealing with files and directories
    Working with text files
    Transfer of files to and from a Linux workstation
    Basics of running applications on Linux
    Using multiple CPUs/cores: parallel applications
    Basics of shell scripting

  • 2018 NOAA Environmental Data Management Workshop (EDMW)

    The EDMW 2018 theme is "Improving Discovery, Access, Usability, and Archiving of NOAA Data for Societal Benefit." The workshop builds on past work by providing training, highlighting progress, identifying issues, fostering discussions, and determining where new technologies can be applied for management of environmental data and information at NOAA. All NOAA employees and contractors are welcome, including data producers, data managers, metadata developers, archivists, researchers, grant issuers, policy developers, program managers, and others.  Links to recordings of the sessions plus presentation slides are available.
    Some key topic areas include:

    • Big Earth Data Initiative (BEDI)
    • Data Governance
    • NCEI's Emerging Data Management System
    • Data Visualization
    • Data Lifecycle Highlights
    • Data Archiving
    • Data Integration
    • Metadata Authoring, Management & Evolution
    • NOAA Institutional Repository
    • Video Data Management & Access Solutions
    • Unified Access Format (UAF), ERDDAP & netCDF
    • Improving Data Discovery & Access to Data 
    • Arctic & Antarctic Data Access
  • Creating a Data Management Plan

    Video presentation and slides introducing tips for creating a data management plan (DMP), provided by Clara S. Fowler, a Research Services and Assessment Manager at the Research Medical Library of the University of Texas MD Anderson Cancer Center. The presentation discusses what a data management plan is, what is required for a National Institutes of Health (NIH) DMP, what changes can be expected for an NIH DMP, and the tools for creating data management plans. Topics include:
    Data management guide
    Sample data management plan for the National Science Foundation (NSF)
    Sample data sharing plan from the NIH
    NIH data sharing policy and implementation guide

  • Creating Effective Graphs

    Sunita Patterson, a scientific editor in Scientific Publications at the University of Texas MD Anderson Cancer Center, goes over the fundamentals of good graph design and data presentation, including a discussion of when to use graphs, general principles for effective graphs, and an introduction to different types of graphs. Effective graphs “improve understanding of data”; they do not confuse or mislead, and one graph is more effective than another if its quantitative information can be decoded more quickly or more easily by most observers. Examples of graphs are given related to medicine and health services. A small illustrative sketch follows the course summary below.
    Objectives of Short Course:
    - Know more effective ways to present data and know where to find more information on these graph forms.
    - Learn general principles for creating clear, accurate graphs.
    - Understand that different audiences have varying needs and the presentation should be appropriate for the audience.
    Summary of Course:
    - Limitations of some common graph forms
    - Human perception and our ability to decode graphs
    - Newer and more effective graph forms
    - General principles for creating effective tables and graphs
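
    In the spirit of the "newer and more effective graph forms" item, here is a small matplotlib sketch (hypothetical data, not from the course) of one commonly recommended substitution: a sorted dot plot in place of a pie or unsorted bar chart, so that values are compared along a single common scale:

      # Hypothetical sketch: a sorted dot plot, often recommended because the
      # values are decoded along one common scale and the ranking is visible.
      import matplotlib.pyplot as plt

      clinics = ["Clinic A", "Clinic B", "Clinic C", "Clinic D"]
      rates = [0.62, 0.81, 0.55, 0.74]    # hypothetical screening rates

      # Sort so the ranking can be read at a glance.
      pairs = sorted(zip(rates, clinics))
      values = [v for v, _ in pairs]
      labels = [c for _, c in pairs]

      fig, ax = plt.subplots()
      ax.plot(values, range(len(values)), "o")
      ax.set_yticks(range(len(labels)), labels=labels)
      ax.set_xlim(0, 1)
      ax.set_xlabel("Screening rate (hypothetical)")
      plt.show()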

  • New Self-Guided Curriculum for Digitization

    Through the Public Library Partnerships Project (PLPP), DPLA has been working with existing DPLA Service Hubs to provide digital skills training for public librarians and connect them sustainably with state and regional resources for digitizing, describing, and exhibiting their cultural heritage content.

    During the project, DPLA collaborated with trainers at Digital Commonwealth, Digital Library of Georgia, Minnesota Digital Library, Montana Memory Project, and Mountain West Digital Library to write and iterate a workshop curriculum based on documented best practices. Through the project workshops, we used this curriculum to introduce 150 public librarians to the digitization process.

    Now at the end of the project, we’ve made this curriculum available in a self-guided version intended for digitization beginners from a variety of cultural heritage institutions. Each module includes a video presentation, slides with notes in PowerPoint, and slides in PDF. Please feel free to share, reuse, and adapt these materials. Topics (with separate URLs for each) include:
    Planning for Digitization
    Selecting Content for a Digitization Project
    Understanding Copyright
    Using Metadata to Describe Digital Content
    Digital Reformatting and File Management
    Promoting Use of Your Digital Content