All Learning Resources

  • Singularity User Guide

    Singularity is a container solution created by necessity for scientific and application-driven workloads.
    Over the past decade and a half, virtualization has gone from an engineering toy to a global infrastructure necessity, and the evolution of enabling technologies has flourished. Most recently, we have seen the introduction of the latest spin on virtualization: “containers”.
    Many scientists, especially those in the high-performance computing (HPC) community, could benefit greatly from container technology, but they need a feature set that differs somewhat from that of current container technology. This need drove the creation of Singularity and articulated its four primary functions:

    • Mobility of compute
    • Reproducibility
    • User freedom
    • Support on existing traditional HPC systems


    This user guide introduces Singularity, a free, cross-platform, open-source program that performs operating-system-level virtualization, also known as containerization.

  • The Oxford Common File Layout

    The Oxford Common File Layout (OCFL) specification describes an application-independent approach to the storage of digital information in a structured, transparent, and predictable manner. It is designed to promote long-term object management best practices within digital repositories. This presentation was given under the topic of Preservation Tools, Techniques and Policies for the Research Data Alliance Preserving Scientific Annotation Working Group on April 4, 2017.
     

  • Workshop: Research Data Management in a Nutshell

    The workshop Research Data Management in a Nutshell was part of the Doctoral Day of the Albertus Magnus Graduate Center (AMGC) at the University of Cologne on January 18, 2018.

    The workshop was intended as a brief, interactive introduction to RDM for beginning doctoral students.

  • Data Warehouse Tutorial For Beginners

    This Data Warehouse Tutorial For Beginners will give you an introduction to data warehousing and business intelligence. You will be able to understand basic data warehouse concepts with examples. The following topics have been covered in this tutorial:
    1. What Is The Need For BI?
    2. What Is Data Warehousing?
    3. Key Terminologies Related To DWH Architecture: a. OLTP vs. OLAP b. ETL c. Data Mart d. Metadata
    4. DWH Architecture
    5. Demo: Creating A DWH

     
  • Data Management Basics and Best Practices

    Big data, data management, and data life cycle are all buzzwords being discussed among librarians, researchers, and campus administrators across the country and around the world. Learn the basics of these terms and what services an academic library system might be expected to offer patrons, while identifying personal opportunities for improving how you work with your own data. You will have the opportunity to explore DMPTool during this session.

  • Python: Working with Multidimensional Scientific Data

    The availability and scale of scientific data is increasing exponentially. Fortunately, ArcGIS provides functionality for reading, managing, analyzing, and visualizing scientific data stored in three formats widely used in the scientific community – netCDF, HDF, and GRIB. Using satellite- and model-derived earth science data, this session will present examples of data management and global-scale spatial and temporal analyses in ArcGIS. Finally, the session will discuss and demonstrate how to extend the data management and analytical capabilities of multidimensional data in ArcGIS using Python packages.
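
    The session abstract above does not name specific packages; as a hedged illustration only, the sketch below uses the open-source xarray package to read and aggregate a hypothetical netCDF file (the file name and the "sst" variable are placeholders, not data from the session).

    ```python
    # Minimal sketch: reading and summarizing a multidimensional netCDF file
    # with the open-source xarray package. The file name and the "sst"
    # variable are hypothetical placeholders.
    import xarray as xr

    ds = xr.open_dataset("sea_surface_temperature.nc")   # lazily open the file
    print(ds)                                # dimensions, coordinates, variables
    monthly = ds["sst"].resample(time="1MS").mean()      # temporal aggregation
    global_mean = ds["sst"].mean(dim=["lat", "lon"])     # spatial aggregation
    global_mean.plot()                       # quick time-series visualization
    ```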

  • MIT Open Courseware: Data Management

    The MIT Libraries Data Management Group hosts a set of workshops during IAP and throughout the year to assist MIT faculty and researchers with data set control, maintenance, and sharing. This resource contains a selection of presentations from those workshops. Topics include an introduction to data management, details on data sharing and storage, data management using the DMPTool, file organization, version control, and an overview of the open data requirements of various funding sources.

  • MIT Open Courseware: Communicating With Data

    Communicating With Data has a distinctive structure and content, combining fundamental quantitative techniques of using data to make informed management decisions with illustrations of how real decision makers, even highly trained professionals, fall prey to errors and biases in their understanding. We present the fundamental concepts underlying the quantitative techniques as a way of thinking, not just a way of calculating, in order to enhance decision-making skills. Rather than survey all of the techniques of management science, we stress those fundamental concepts and tools that we believe are most important for the practical analysis of management decisions, presenting the material as much as possible in the context of realistic business situations from a variety of settings. Exercises and examples are drawn from marketing, finance, operations management, strategy, and other management functions. Course features include selected lecture notes and problem set assignments with answers. Materials are offered as the course was presented in Summer 2003.

  • MIT Open Courseware: Spatial Database Management and Advanced Geographic Information Systems

    This class offers a very in-depth set of materials on spatial database management, including materials on the tools needed to work in spatial database management, and the applications of that data to real-life problem solving.  Exercises and tools for working with SQL, as well as sample database sets, are provided.  A real-life final project is presented in the projects section.  Materials are presented from the course as taught in Spring 2003.  
    This semester-long subject (11.521) is divided into two halves. The first half focuses on learning spatial database management techniques and methods, and the second half focuses on using these skills to address a 'real world,' client-oriented planning problem.
    Course Features include:  
    Lecture notes
    Projects (no examples)
    Assignments: problem sets (no solutions)
    Assignments: programming with examples
    Exams (no solutions)

  • Reproducible Quantitative Methods: Data analysis workflow using R

    Reproducibility and open scientific practices are increasingly being requested or required of scientists and researchers, but training on these practices has not kept pace. This course, offered by the Danish Diabetes Academy, intends to help bridge that gap. This course is aimed mainly at early career researchers (e.g. PhD and postdocs) and covers the fundamentals and workflow of data analysis in R.

    This repository contains the lesson, lecture, and assignment material for the course, including the website source files and other associated course administration files. 

     By the end of the course, students will have:

    1. An understanding of why an open and reproducible data workflow is important.
    2. Practical experience in setting up and carrying out an open and reproducible data analysis workflow.
    3. The knowledge needed to continue learning methods and applications in this field.

    Students will develop proficiency in using the R statistical computing language, as well as improving their data and code literacy. Throughout this course we will focus on a general quantitative analytical workflow, using the R statistical software and other modern tools. The course will place particular emphasis on research in diabetes and metabolism; it will be taught by instructors working in this field and it will use relevant examples where possible. This course will not teach statistical techniques, as these topics are already covered in university curricula.

    For more detail on the course, check out the syllabus at:  https://dda-rcourse.lwjohnst.com.
     

  • SQL for Librarians

    This Library Carpentry lesson introduces librarians to relational database management systems using SQLite. At the conclusion of the lesson you will: understand what SQLite does; use SQLite to summarise and link databases. DB Browser for SQLite (https://sqlitebrowser.org) needs to be installed before the start of the training. The tutorial covers:
    1. Introduction to SQL
    2. Basic Queries
    3. Aggregation
    4. Joins and aliases
    5. Database design supplement
    Exercises are included with most of the sections.
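
    As a rough companion to the topics above, here is a minimal sketch of basic queries, aggregation, and a join, expressed through Python's built-in sqlite3 module rather than DB Browser; the tables and rows are invented examples, not the lesson's dataset.

    ```python
    # Invented example: basic queries, aggregation, and a join in SQLite,
    # driven from Python's built-in sqlite3 module.
    import sqlite3

    con = sqlite3.connect(":memory:")          # throwaway in-memory database
    cur = con.cursor()
    cur.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
    cur.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, "
                "author_id INTEGER, title TEXT)")
    cur.execute("INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace')")
    cur.executemany("INSERT INTO articles VALUES (?, ?, ?)",
                    [(1, 1, 'On Engines'), (2, 1, 'Notes'), (3, 2, 'Compilers')])

    # Join the two tables and aggregate: number of articles per author
    for row in cur.execute(
            "SELECT a.name, COUNT(*) AS n "
            "FROM articles ar JOIN authors a ON ar.author_id = a.id "
            "GROUP BY a.name"):
        print(row)                             # e.g. ('Ada', 2), ('Grace', 1)
    con.close()
    ```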

  • Learn SQL in 1 Hour - SQL Basics for Beginners

    A crash course in SQL. How to write SQL from scratch in 1 hour. In this video I show you how to write SQL using SQL Server and SQL Server Management Studio. We go through Creating a Database, Creating Tables, Inserting, Updating, Deleting, Selecting, Grouping, Summing, Indexing, Joining, and every basic you need to get started writing SQL.

  • Bioconductor: Computational and Statistical Methods for the Analysis of Genomic Data

    Bioconductor is an open source, open development software project to provide tools for the analysis and comprehension of high-throughput genomic data. It is based primarily on the R programming language.

    Bioconductor provides training in computational and statistical methods for the analysis of genomic data. Courses and conference events are listed on the cited URL. You are welcome to use material from previous courses. However, you may not include these in separately published works (articles, books, websites). When using all or parts of the Bioconductor course materials (slides, vignettes, scripts) please cite the authors and refer your audience to the Bioconductor website.

  • Marine Biogeographic Data Management (Contributing and Using Ocean Biogeographic Information System) (2015)

    The course provided an introduction to the Ocean Biogeographic Information System (OBIS). This includes best practices in marine biogeographic data management, data publication, data access, data analysis, and data visualization. Content consists of slide presentations and videos.  NOTE: The URL provided brings you to a page for courses on topics related to data management.  Establishment of login credentials will be required to access the course described here and others on related topics.  

    Aims and Objectives

    • Expand the OBIS network of collaborators
    • Improve marine biogeographic data quality
    • Increase awareness of international standards and best practices related to marine biogeographic data
    • Increase the amount of open access data published through OBIS and its OBIS nodes
    • Increase the use of data from OBIS for science, species conservation, and area-based management applications

    Learning Outcomes

    • Knowledge and understanding of OBIS structure, mission, and objectives
    • Installation and management of the Integrated Publishing Toolkit (IPT)
    • Use of Darwin Core standards for species occurrence records, taxonomy, event/sample records and additional biological and environmental parameters.
    • Data quality control tools
    • Publishing data through IPT and contributing datasets to OBIS
    • Use of OBIS data access (SQL, web service, API/R); a sketch of a web-service query follows this list
    • Data visualization tools (ArcGIS Online, CartoDB, QGIS, …)
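
    As a hedged illustration of the web-service access mentioned in the learning outcomes, the sketch below queries the OBIS occurrence endpoint with Python's requests package; the endpoint, parameters, and field names are assumptions based on the public OBIS v3 API, not course material.

    ```python
    # Hedged sketch: fetching OBIS occurrence records over the web API.
    # The endpoint and parameter names reflect the public OBIS v3 API and
    # should be treated as assumptions, not course content.
    import requests

    resp = requests.get(
        "https://api.obis.org/v3/occurrence",
        params={"scientificname": "Mola mola", "size": 10},
        timeout=30,
    )
    resp.raise_for_status()
    for rec in resp.json().get("results", []):
        # Records use Darwin Core-style field names
        print(rec.get("scientificName"),
              rec.get("decimalLongitude"), rec.get("decimalLatitude"))
    ```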

    Target Audience

    • Marine data managers
    • Staff of NODCs or ADUs/OBIS nodes working with marine biodiversity data
    • Principal Investigators of major marine biodiversity expeditions
    • National marine biodiversity focal points

    Sections 

    • Introductions to IOC, IODE, OTGA, and OBIS
    • Biodiversity Data Standards
    • Data Quality Control Procedures
    • Data Access and Visualisation
    • Social Aspects of Data Management
  • Administración de Datos Biogeográficos Marinos (Contribuyendo al Uso de OBIS) (2016)

    The course provides an introduction to the Ocean Biogeographic Information System (OBIS). It includes best practices in the management of marine biogeographic data, publication of data for free access (IPT), access to data, organization, analysis, and visualization.   NOTE: The URL provided brings you to a page for courses on topics related to data management.  Establishment of login credentials will be required to access the course described here and others on related topics.

    Goals:

    • Expand the network of OBIS collaborators.
    • Improve the quality of marine biogeographic data.
    • Increase knowledge of international standards and best practices related to marine biogeographic data.
    • Increase the amount of freely accessible data published through OBIS and its OBIS nodes.
    • Increase the use of OBIS data for science, species conservation, and area-based management applications.

    There are four modules consisting of Spanish language slide presentations and videos:

    • MODULE 1 - General concepts: introduction to IOC, IODE, OTGA, and OBIS, and to related resources such as WoRMS, Marine Regions, the Darwin Core biodiversity data standard, and metadata
    • MODULE 2 - Data quality control procedures
    • MODULE 3 - Best practices in the management and policy of marine biogeographic data, and access, organization, analysis, and visualization of OBIS data
    • MODULE 4 - Publication of data for free access (Integrated Publishing Toolkit, IPT)
  • Research Data Management

    Marine information managers are increasingly seen as major contributors to research data management (RDM) activities in general and in the design of research data services (RDS) in particular. They promote research by providing services for storage, discovery, and access and liaise and partner with researchers and data centers to foster an interoperable infrastructure for the above services.   NOTE: The URL provided brings you to a page for courses on topics related to data management.  Establishment of login credentials will be required to access the course described here and others on related topics.

    The series of units within this training course recognizes the potential contributions that librarians/information managers can offer and hence the need to develop their skills in the research data management process. Course materials consist of slide presentations and student activities. Topics include:

    • Data and information management in International Indian Ocean Expedition-2 (IIOE-2)
    • Open science data
    • Research data and publication lifecycles
    • Research data organization and standards
    • Data management plans
    • Data publication and data citation
    • Access to research data
    • Management of sensitive data
    • Repositories for data management
    • Data management resources
  • Quality Management System Essentials for IODE National Oceanographic Data Centres (NODC) and Associate Data Units (ADU)

    Course overview

    The International Oceanographic Data and Information Exchange (IODE) maintains a global network of National Oceanographic Data Centres (NODC) and Associate Data Units (ADU) responsible for the collection, quality control, archive and online publication of many millions of ocean observations. The concept of quality management has become increasingly significant for these centres to meet national and international competency standards for delivery of data products and services. The IODE Quality Management Framework encourages NODCs and ADUs to implement a quality management system which will lead to accreditation.

    This workshop provides an introduction for NODCs and ADUs involved in the development, implementation, and management of a Quality Management System based on ISO 9001:2015.   NOTE: The URL provided brings you to a page for courses on topics related to data management.  Establishment of login credentials will be required to access the course described here and others on related topics.

    Aims and objectives

    • To introduce the IODE Quality Management Framework
    • To introduce the ISO 9000 series of standards
    • To provide a description of a Quality Management System
    • To describe the importance of quality management for oceanographic data
    • To describe the accreditation process for NODCs and ADUs

    Note that the exercises are no longer accessible.

    Topics include:

    • Introduction to Quality Management Systems
    • QMS Implementation in Meteorological Services
    • Introduction to ISO standards
    • Understanding ISO 9001:2015
      • Overview
      • ISO 9001:2015 Clause 4. Context of the Organization
      • ISO 9001:2015 Clause 5. Leadership
      • ISO 9001:2015 Clause 6. Planning
      • ISO 9001:2015 Clause 7. Support
      • ISO 9001:2015 Clause 8. Operation
      • ISO 9001:2015 Clause 9. Performance Evaluation
      • ISO 9001:2015 Clause 10. Improvement
    • Developing a quality system manual
    • Experiences and lessons learned from implementing a QMS: SISMER
    • Implementing the Quality Management System
    • IODE Quality Management Framework and Accreditation
  • Introduction to Lidar

    This self-paced, online training introduces several fundamental concepts of lidar and demonstrates how high-accuracy lidar-derived elevation data support natural resource and emergency management applications in the coastal zone. Note: requires the Adobe Flash plugin.
    Learning objectives:

    • Define lidar
    • Select different types of elevation data for specific coastal applications
    • Describe how lidar data are collected
    • Identify the important characteristics of lidar data
    • Distinguish between different lidar data products
    • Recognize aspects of data quality that impact data usability
    • Locate sources of lidar data
    • Discover additional information and additional educational resources


     

  • Code of Best Practices and Other Legal Tools for Software Preservation: 2019 Webinar Series

    Since 2015, the Software Preservation Network (SPN) has worked to create a space where organizations from industry, academia, government, cultural heritage, and the public sphere can contribute their myriad skills and capabilities toward collaborative solutions that will ensure persistent access to all software and all software-dependent objects. The organization's goal is to make it easier to deposit, discover, and reuse software.
    A key activity of the SPN is to provide webinar series on topics related to software preservation. The 2019 series includes:
    Episode 1: The Code of Best Practices for Fair Use in Software Preservation, Why and How?
    Episode 2:  Beginning the Preservation Workflow
    Episode 3:  Making Software Available Within Institutions and Networks
    Episode 4:  Working with Source Code and Software Licenses
    Episode 5:  Understanding the Anti-circumvention Rules and the Preservation Exemptions
    Episode 6:  Making the Code Part of Software Preservation Culture
    Episode 7:  International Implications
    See information about each episode separately.
     

     

  • HarvardX Biomedical Data Science Open Online Training - Data Analysis for the Life Sciences Series

    HarvardX Biomedical Data Science Open Online Training

    In 2014 funding was received from the NIH BD2K initiative to develop MOOCs for biomedical data science. The courses are divided into the Data Analysis for the Life Sciences series, the Genomics Data Analysis series, and the Using Python for Research course.

    This page includes links to the course material for the three courses:

    Data Analysis for the Life Sciences
    Genomics Data Analysis
    Using Python for Research

    For each series, video lectures are included, along with (when available) an R Markdown document to follow along. Note that you must be logged in to edX to access the courses; registration is free. Links to the course pages are also included.

    This site includes links to two other course sets: Genomics Data Analysis and Using Python for Research.

  • Creating Documentation and Metadata: Creating a Citation for Your Data

    This training module is part of the Federation of Earth Science Information Partners (or ESIP Federation's) Data Management for Scientists Short Course. The subject of this module is "Creating a Citation for Your Data." This module was authored by Robert Cook from the Oak Ridge National Laboratory. Besides the ESIP Federation, sponsors of this Data Management for Scientists Short Course are the Data Conservancy and the United States National Oceanic and Atmospheric Administration (NOAA).  This module is available in both presentation slide and video formats.

  • Responsible Data Use: Data Restrictions

    This training module is part of the Federation of Earth Science Information Partners (or ESIP Federation's) Data Management for Scientists Short Course.  The subject of this module is "Data Restrictions".  The module was authored by Robert R. Downs from the NASA Socioeconomic Data and Applications Center which is operated by CIESIN – the Center for International Earth Science Information Network at Columbia University.  Besides the ESIP Federation, sponsors of this Data Management for Scientists Short Course are the Data Conservancy and the United States National Oceanic and Atmospheric Administration (NOAA).  This module is available in both presentation slide and video formats.  

  • How to Make a Data Dictionary

    A data dictionary is critical to making your research more reproducible because it allows others to understand your data. The purpose of a data dictionary is to explain what all the variable names and values in your spreadsheet really mean. This OSF Best Practice Guide gives examples and instruction on how to assemble a data dictionary.
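
    As a hedged sketch of the idea (not taken from the OSF guide itself), the snippet below uses pandas to generate a skeleton data dictionary from a hypothetical spreadsheet; the meaningful descriptions still have to be written by the researcher.

    ```python
    # Sketch: generate a skeleton data dictionary for a spreadsheet with
    # pandas. The input file and its columns are hypothetical placeholders.
    import pandas as pd

    df = pd.read_csv("survey_results.csv")
    dictionary = pd.DataFrame({
        "variable": df.columns,
        "type": [str(t) for t in df.dtypes],
        "example_value": [df[c].dropna().iloc[0] if df[c].notna().any() else ""
                          for c in df.columns],
        "description": "",   # fill in what each variable really means
    })
    dictionary.to_csv("data_dictionary.csv", index=False)
    ```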

  • R Program (Data Analysis)--Full Course

    A full basic course in R software for data analysis, produced by Simply Statistics. This 42-part video course provides basic instruction on the use of R, where to get help with programming questions, and a number of real-world examples. Links to all the videos are available from the YouTube landing page and include topics such as: Getting Help, What is Data, Representing Data, etc. The course is also offered via Coursera (See https://simplystatistics.org/courses). The lecture slides for Coursera's Data Analysis class are available on github at: https://github.com/jtleek/dataanalysis.

  • MIT Open Courseware: Introduction to Computer Science and Programming in Python

    6.0001 Introduction to Computer Science and Programming in Python is intended for students with little or no programming experience. It aims to provide students with an understanding of the role computation can play in solving problems and to help students, regardless of their major, feel justifiably confident of their ability to write small programs that allow them to accomplish useful goals. The class uses the Python 3.5 programming language. The course is presented as taught in Fall 2016.
    Course features include:
    Video lectures
    Captions/transcript 
    Interactive assessments
    Lecture notes
    Assignments: problem sets (no solutions)
    Assignments: programming with examples

    MITx offers a free version of this subject on edX. Please register to get started:

    6.00.1x Introduction to Computer Science and Programming Using Python (Started January 22, 2019)

    6.00.2x Introduction to Computational Thinking and Data Science (Started March 26, 2019)

  • Why Cite Data?

    This video explains what data citation is and why it's important. It also discusses what digital object identifiers (DOIs) are and how they are used.

  • Principles of Database Management

    There are 14 videos included in this web lecture series of Prof. dr. Bart Baesens: Introduction to Database Management Systems. Prof. dr. Bart Baesens holds a PhD in Applied Economic Sciences from KU Leuven University (Belgium). He is currently an associate professor at KU Leuven, and a guest lecturer at the University of Southampton (United Kingdom). He has done extensive research on data mining and its applications. For more information, visit http://www.dataminingapps.com. In this lecture series, the fundamental concepts behind databases, database technology, database management systems and data models are explained. Topics discussed include applications, definitions, file-based vs. database approaches to data management, the elements of database systems, and the advantages of database design. Separate URLs are provided for each lecture in this series, found on the YouTube lecture series page.

     

  • CMU Intro to Database Systems Course

    These courses are focused on the design and implementation of database management systems. Topics include data models (relational, document, key/value), storage models (n-ary, decomposition), query languages (SQL, stored procedures), storage architectures (heaps, log-structured), indexing (order preserving trees, hash tables), transaction processing (ACID, concurrency control), recovery (logging, checkpoints), query processing (joins, sorting, aggregation, optimization), and parallel architectures (multi-core, distributed). Case studies on open-source and commercial database systems will be used to illustrate these techniques and trade-offs. The course is appropriate for students with strong systems programming skills. There are 26 videos associated with this course, which was originally offered in Fall 2018 as course 15-445/645 at Carnegie Mellon University.
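
    As a small, hedged illustration of one course topic (the atomicity part of ACID), the sketch below uses Python's built-in sqlite3 module; the accounts table and amounts are invented for the example and are not course material.

    ```python
    # Invented example: atomicity in SQLite. Either both updates of the
    # transfer commit, or neither does.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
    con.executemany("INSERT INTO accounts VALUES (?, ?)",
                    [("alice", 100), ("bob", 0)])
    con.commit()                  # make the starting balances permanent
    try:
        with con:  # transaction: commits on success, rolls back on exception
            con.execute("UPDATE accounts SET balance = balance - 150 "
                        "WHERE name = 'alice'")
            balance = con.execute("SELECT balance FROM accounts "
                                  "WHERE name = 'alice'").fetchone()[0]
            if balance < 0:
                raise ValueError("insufficient funds")   # triggers rollback
            con.execute("UPDATE accounts SET balance = balance + 150 "
                        "WHERE name = 'bob'")
    except ValueError:
        pass
    print(con.execute("SELECT * FROM accounts").fetchall())
    # -> [('alice', 100), ('bob', 0)]: the partial debit was rolled back
    ```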

  • Data Management Resources (University of Arizona Research Data Management Services)

    This website is intended to provide information on developing the data management plans now required by some federal agencies and to support researchers in the various stages of the research cycle. Topics covered include:
    - Research Data Lifecycle
    - Data Management Plans with funding requirements from many agencies
    - Sharing Data
    - Managing Data
    Workshops and tutorials are available as recordings, slides, and exercises on topics such as:  Data Literacy for Postdocs, Increasing Openness and Reproducibility using the OSF, and Research Data Life Cycle.

  • Rocky Mountain Data Management Training for Certification

    This free training for the Data Management Association's Certified Data Management Professional® exam is brought to you by DAMA's Rocky Mountain Chapter. If you're studying for the CDMP exam, get your discounted copy of the DMBOK V2.

    Data Management Association International – Rocky Mountain Chapter (DAMA-RMC) is a not-for-profit, vendor-independent, professional organization dedicated to advancing the concepts and practices of enterprise information and data resource management (IRM/DRM).

    DAMA-RMC’s primary purpose is to promote the understanding, development and practice of managing information and data as key enterprise assets.  Topics include:
    Week 1:  Introduction
    Week 2:  Ethics
    Week 3:   Data Governance
    Week 4:  Data Architecture & Data Modeling and Design
    Week 5:  Data Storage & Operations - Data Security
    Week 6:  Data Storage & Operations - Data Security
    Week 7: Data Integration & Interoperability, Metadata

  • Coffee and Code: The Command Line - An Introduction

    Graphical user interfaces are fast, often more than fast enough to suit our needs. GUIs are feature rich, can be intuitive, and often filter out a lot of stuff we don't need to know about and aren't interested in. Nearly everything we need to do can be done simply and quickly using a GUI.

    The command line is a great resource for speeding up and automating routine activities without using a lot of processing power. In some cases, it can be better for:

    • Searching for files
    • Searching within files
    • Reading and writing files and data
    • Network activities

    Some file and data recovery processes can only be executed from the command line.

    Plus:

    • The command line is old fashioned
    • Potential efficiency gains take time to manifest
    • Even Neal Stephenson says it's obsolete
  • Coffee and Code: TaskJuggler

    What is TaskJuggler?

    TaskJuggler is an open-source project planning and management application, written in Ruby, that provides a comprehensive set of tools for project planning, management, and reporting. Versions of TaskJuggler are available for Linux, macOS, and Windows, and multiple Docker containers have been created that encapsulate TaskJuggler for ease of execution without having to install it directly within the host computer's operating system.

    Some key characteristics of TaskJuggler include:

    • Text-based configuration files
    • A command-line tool that is run to perform scheduling and report generation
    • An optional local server process that can be run and with which a client tool can interact to more rapidly generate reports for projects that have been loaded into the server
    • Email-based workflows for large-scale project tracking
    • Support for web-based, CSV, and iCal reports, enabling delivery of plan products through web browsers, further analysis and visualization of scheduling data outside of TaskJuggler, and sharing of project plans for integration into calendar systems.
    • Scenario support for comparing alternative project paths.
    • Accounting capabilities for estimating and tracking costs and revenue through the life of a project.
  • Coffee and Code: Database Basics

    Why Use a Database to Organize Your Data

    • Consistent structure - defined by you
    • Enforced data types
    • Can scale from single tables to sophisticated relational data models
    • Can be a personal file-based or shared server-based solution, depending on your needs
    • Standard language for interacting with your data
    • “Virtual Tables” can be created on the fly based on database queries (see the sketch after this list)
    • Data can be accessed by many analysis tools
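
    A minimal sketch of the "virtual tables" point above, using Python's built-in sqlite3 module; the table, columns, and values are hypothetical.

    ```python
    # Invented example: a SQL view ("virtual table") defined over a base
    # table, created and queried through Python's built-in sqlite3 module.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE measurements (site TEXT, year INT, value REAL)")
    con.executemany("INSERT INTO measurements VALUES (?, ?, ?)",
                    [("A", 2019, 1.2), ("A", 2020, 1.5), ("B", 2020, 0.9)])
    # A view is a saved query that can be selected from like a table
    con.execute("CREATE VIEW site_means AS "
                "SELECT site, AVG(value) AS mean_value "
                "FROM measurements GROUP BY site")
    print(con.execute("SELECT * FROM site_means").fetchall())
    # -> [('A', 1.35), ('B', 0.9)]
    ```
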
  • Coffee and Code: R & RStudio

    What is R?
    R is an [Open Source](https://opensource.org) programming language that is specifically designed for data analysis and visualization. It consists of the [core R system](https://cran.r-project.org) and a collection of (currently) over [13,000 packages](http://cran.cnr.berkeley.edu) that provide specialized data manipulation, analysis, and visualization capabilities. R is an implementation of the *S* statistical language developed in the mid-1970s at Bell Labs; development of R itself began in the early 1990s, with a stable beta version available by 2000. R has been under continuous development for over 25 years and has hit major development [milestones](https://en.wikipedia.org/wiki/R_\(programming_language\)#Milestones) over that time.
    R syntax is relatively straightforward and is based on a core principle of providing reasonable default values for many functions, while allowing a lot of flexibility and power through the use of optional parameters.

  • Coffee and Code: Basics of Programming with Python

    This collection of materials was developed for the University of New Mexico Libraries' Code & Coffee workshop series to provide a high-level introduction to programming concepts illustrated with the Python programming language. The workshop content is contained in a collection of Jupyter Notebooks:

    Conceptual Overview: Programming Concepts.ipynb
    Surname analysis example: Name_Data.ipynb
    Library shelf space analysis example: Space Analysis.ipynb
    IR Keywords Versus IR "Aboutness" example [no longer functional due to decommissioning of UNM DSpace instance]: IR Keywords Versus IR "Aboutness".ipynb

    Why learn the basic principles of programming?

    Thinking algorithmically (a key element in the process used in developing programming solutions) is a powerful problem-solving skill that is reinforced with practice. Practicing programming is great practice; a toy example follows the lists below.

    • Defining a problem with sufficient specificity that a solution can be effectively developed
    • Defining what the end-product of the process should be
    • Breaking a problem down into smaller components that interact with each other
    • Identifying the objects/data and actions that are needed to meet the requirements of each component
    • Linking components together to solve the defined problem
    • Identifying potential expansion points to reuse the developed capacity for solving related problems
    • Capabilities to streamline and automate routine processes through scripting are ubiquitous
    • Query languages built into existing tools (e.g. Excel, ArcGIS, Word)
    • Specialized languages for specific tasks (e.g. R, Pandoc template language, PHP)
    • General purpose languages for solving many problems (e.g. Bash shell, Perl, Python, C#)
    • Repeatability with documentation
    • Scalability
    • Portability
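
    As a toy illustration of the decomposition steps listed above (invented for this description, not taken from the workshop notebooks), the sketch below breaks a small surname-counting problem, similar in spirit to the Name_Data.ipynb example, into separate components.

    ```python
    # Toy example of breaking a problem into components: count the most
    # common surnames in a list of names. All names are made up.
    from collections import Counter

    def extract_surname(full_name):
        """One component: isolate the data we need from each record."""
        return full_name.strip().split()[-1]

    def count_surnames(names):
        """Another component: aggregate the extracted values."""
        return Counter(extract_surname(n) for n in names)

    def report(counts, top=3):
        """Final component: present the end product."""
        for surname, n in counts.most_common(top):
            print(f"{surname}: {n}")

    names = ["Ada Lovelace", "Grace Hopper", "Alan Turing", "Mary Lovelace"]
    report(count_surnames(names))   # Lovelace: 2, Hopper: 1, Turing: 1
    ```
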
  • Mozilla Science Lab Open Data Instructor Guides

    This site is a resource for train-the-trainer type materials on Open Data. It's meant to provide a series of approachable, fun, collaborative workshops where each of the modules is interactive and customizable to meet a variety of audiences.

  • Data Management: Using Metadata to Find, Interpret & Share Your Data

    Ever struggle to find that file you tucked away last semester (or last week)? Having trouble remembering details in order to re-use your own data? Need others to understand & use your data? This workshop will introduce you to the power of metadata: what it is, why it’s so important, and how to get started with it. Stop wasting time in finding, interpreting or sharing your data. Whether you are new to thinking about metadata or you’re looking to build off some basic knowledge, this workshop is for you!

  • Data Management: Strategies for Data Sharing and Storage

    Not sure how to publish and share your data? Unclear on the best formats and information to include for optimal data reuse? This workshop will review existing options for long-term storage and strategies for sharing data with other researchers. Topics will include: data publication and citation, persistent identifiers, versioning, data formats and metadata for reuse, repositories, cost models and management strategies.

  • Learning programming on Khan Academy

    In this course, we'll be teaching the concepts of the JavaScript programming language and the cool functions you can use with it in the ProcessingJS library. Before you dig in, here's a brief tour of how we teach programming here on Khan Academy, and how we think you can learn the most.

    Normally, we teach on Khan Academy using videos, but here in programming land, we teach with something we call "talk-throughs". A talk-through is like a video, but it's actually interactive: you can pause at any time if you want to play with the code yourself, and you can spin off if you want to make your own version of what we made. An animated GIF of a talk-through is included.

    See Terms of Service at:  https://www.khanacademy.org/about/tos 

  • Introduction to code versioning and collaboration with Git and GitHub: An EDI VTC Tutorial.

    This tutorial is an introduction to code versioning and collaboration with Git and GitHub.  Tutorial goals are to help you:  

    • Understand basic Git concepts and terminology.
    • Apply concepts as Git commands to track versioning of a developing file.
    • Create a GitHub repository and push local content to it.
    • Clone a GitHub repository to the local workspace to begin developing.
    • Be inspired to incorporate Git and GitHub into your workflow.


    There are a number of exercises within the tutorial to help you apply the concepts learned.  
    Follow-up questions can be directed via email to Colin Smith ([email protected]) and Susanne Grossman-Clarke ([email protected]).