All Learning Resources
University of California Libraries: Research Data Matters
What is research data and why is managing your data important? Where can you get help with research data management? In this introductory video, three University of California researchers address these questions from their own experience and explain the impact of good data management practices. Researchers interviewed include Professor Christine Borgman, Professor Rick Prelinger, and Professor Marjorie Katz.
Introduction to Python GIS for Data Science
This module, from a part-time data science course offered by General Assembly during Summer 2015, provides a quick introduction to Python and how it relates to GIS.
Research Data Management: Practical Data Management
A series of modules and video tutorials describing research data management best practices.
Module 1: Where to start - data planning
1.1 Data Life Cycle & Searching for Data (5:59 minutes)
1.3 File Naming (3:39 minutes)
1.4 ReadMe Files, Library Support, Checklist (4:29 minutes)
Module 2: Description, storage, archiving
2.1 Data Description (2:16 minutes)
2.2 Workflow Documentation & Metadata Standards (4:36 minutes)
2.3 Storage & Backups (2:48 minutes)
2.4 Archiving: How (2:50 minutes)
2.5 Archiving: Where (3:57 minutes)
Module 3: Publishing, sharing, visibility
3.1 What is Data Publishing? (4:50)
3.2 What and Where to Publish? (1:47)
3.3 Data Licenses (1:51)
3.4 Citing and DOIs (1:09)
3.5 ORCID (2:04)
3.6 Altmetrics (2:15)
Research Rigor & Reproducibility: Understanding the Data Lifecycle for Research Success
This course provides recommended practices for facilitating the discoverability, access, integrity, and reuse value of your research data. The modules have been selected from a larger Canvas course, "Best Practices for Biomedical Research Data Management" (https://www.canvas.net/browse/harvard-medical/courses/biomed-research-da...).
Biomedical research today is not only rigorous, innovative, and insightful; it also has to be organized and reproducible. With more capacity to create and store data comes the challenge of making data discoverable, understandable, and reusable. Many funding agencies and journal publishers are requiring publication of relevant data to promote open science and reproducibility of research.
In this course, students will learn how to identify and address current workflow challenges throughout the research life cycle. By understanding best practices for managing your data throughout a project, you will succeed in making your research ready to publish, share, interpret, and be used by others. Course materials include video lectures, presentation slides, readings and resources, research case studies, interactive activities and concept quizzes.
Best Practices for Biomedical Research Data Management
This course presents approximately 20 hours of content aimed at a broad audience on recommended practices facilitating the discoverability, access, integrity, reuse value, privacy, security, and long-term preservation of biomedical research data.
Each of the nine modules is dedicated to a specific component of data management best practices and includes video lectures, presentation slides, readings & resources, research teaching cases, interactive activities, and concept quizzes.
Background Statement:
Biomedical research today is not only rigorous, innovative, and insightful; it also has to be organized and reproducible. With more capacity to create and store data comes the challenge of making data discoverable, understandable, and reusable. Many funding agencies and journal publishers are requiring publication of relevant data to promote open science and reproducibility of research. In order to meet these requirements and evolving trends, researchers and information professionals will need the data management and curation knowledge and skills to support the access, reuse, and preservation of data.
This course is designed to address present and future data management needs.
Best Practices for Biomedical Research Data Management serves as an introduction to the field of scientific data management for information professionals and scientific researchers. The course is also offered on the Canvas Network, at: https://www.canvas.net/browse/harvard-medical/courses/biomed-research-da...
In this course, learners will explore relationships between libraries and stakeholders seeking support for managing their research data.
EUDAT Research Data Management
This site provides several videos on research data management, covering why it's important, metadata, archives, and other topics.
The EUDAT training programme is delivered through a multiple channel approach and includes:
- eTraining components delivered via the EUDAT website: a selection of presentations, documents and informative video tutorials clustered by topic and level of required skills, targeting all EUDAT stakeholders.
- Ad-hoc workshops organised together with research communities and infrastructures to illustrate how to integrate EUDAT services in their research data management infrastructure. Mainly designed for research communities, infrastructures and data centres, they usually include pragmatic hands-on sessions. Interested in a EUDAT workshop for your research community? Contact us at [email protected].
- One-hour webinars delivered via the EUDAT website focusing on different research data management components and how EUDAT contributes to solving research data management challenges.
DataONE Data Management Module 02: Data Sharing
When first sharing research data, researchers often raise questions about the value, benefits, and mechanisms for sharing. Many stakeholders and interested parties, such as funding agencies, communities, other researchers, or members of the public may be interested in research, results and related data. This 30-40 minute lesson addresses data sharing in the context of the data life cycle, the value of sharing data, concerns about sharing data, and methods and best practices for sharing data and includes a downloadable presentation (PPT or PDF) with supporting hands-on exercise and handout.
DataONE Data Management Module 01: Why Data Management
As rapidly changing technology enables researchers to collect large, complex datasets with relative ease, the need to effectively manage these data increases in kind. This is the first lesson in a series of education modules intended to provide a broad overview of various topics related to research data management. This 30-40 minute module covers trends in data collection, storage and loss, the importance and benefits of data management, and an introduction to the data life cycle and includes a downloadable presentation (PPT or PDF) with supporting hands-on exercise and handout.
Introduction to R
In this introduction to R, you will master the basics of this beautiful open source language, including factors, lists and data frames. With the knowledge gained in this course, you will be ready to undertake your very own data analysis. Topics include: an introduction to basics, vectors, matrices, factors, lists and data frames. Approximately 62 exercises are included.
Intro to Python for Data Science
Python is a general-purpose programming language that is becoming more and more popular for doing data science. Companies worldwide are using Python to harvest insights from their data and get a competitive edge. Unlike any other Python tutorial, this course focuses on Python specifically for data science. In our Intro to Python class, you will learn about powerful ways to store and manipulate data as well as cool data science tools to start your own analyses. Topics covered include: Python basics, Python lists, functions and packages, and NumPy, an array package for Python.
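As a taste of the topics listed above, here is a minimal sketch (not taken from the course itself) contrasting a plain Python list with a NumPy array:

```python
# Minimal sketch, not course material: a Python list vs. a NumPy array.
import numpy as np

heights_cm = [180, 165, 172, 190]   # a plain Python list
heights = np.array(heights_cm)      # the same data as a NumPy array

# Arithmetic applies element-wise to the array, unlike the list.
heights_m = heights / 100
print(heights_m.mean())             # mean height in metres
```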
Using, learning, teaching, and programming with the Paleobiology Database
The Paleobiology Database is a public database of paleontological data that anyone can use, maintained by an international non-governmental group of paleontologists. You can explore the data online in the Navigator, which lets you filter fossil occurrences by time, space, and taxonomy, and displays their modern and paleogeographic locations; or you can download the data to your own computer to do your own analyses. The educational resources offered by the Paleobiology Database include:
- Presentations including lectures and slide shows to introduce you to the PBDB
- Web apps that provide a variety of online interfaces for exploring PBDB data via the API
- Mobile apps for iOS and Android that provide new views of the PBDB's data via the API
- Lesson plans and teaching activities using the Paleobiology Database
- Tutorials on how to get and use data from the website, and on how to contribute data to the database, viewable on Youtube
- Libraries and functions for interacting with PBDB data via R
- Documentation, code examples, and issue reporting for the PBDB API (see the access sketch after this list)
- Other Paleobiology Database related external resources including a link to the Paleobiology Github repository
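As an illustration of accessing the PBDB API mentioned above, here is a minimal, hedged sketch in Python using the requests library. The endpoint and parameter names follow the public v1.2 API documentation; verify them at https://paleobiodb.org/data1.2/ before relying on this.

```python
# Hedged sketch: fetch a few fossil occurrence records from the PBDB API.
# Endpoint and parameters ("base_name", "show", "limit") follow the v1.2
# API docs; check https://paleobiodb.org/data1.2/ for the authoritative list.
import requests

resp = requests.get(
    "https://paleobiodb.org/data1.2/occs/list.json",
    params={"base_name": "Canis", "show": "coords", "limit": 5},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json().get("records", []):
    print(record)  # each record is a dict of occurrence fields
```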
For more information about the Paleobiology Database, see: https://paleobiodb.org/#/faq.
Intermediate R
The intermediate R course is the logical next stop on your journey in the R programming language. In this R training you will learn about conditional statements, loops and functions to power your own R scripts. Next, you can make your R code more efficient and readable using the apply functions. Finally, the utilities chapter gets you up to speed with regular expressions in the R programming language, data structure manipulations and times and dates. This R tutorial will allow you to learn R and take the next step in advancing your overall knowledge and capabilities while programming in R.
Introduction to SAGA GIS Software
A quick introduction to the System for Automated Geoscientific Analyses (SAGA), an open source Geographic Information System (GIS) software package. SAGA GIS has been designed for the easy and effective implementation of spatial algorithms and offers a comprehensive, growing set of geoscientific methods. A data management module is included in the software.
ESRI Academy: Data Management
ESRI, the creator of ArcMap and other Geographic Information Systems (GIS) software products, provides a large number of training courses on topics that include data management as well as other skills, such as the use of GIS and Python programming. The types of training materials include tutorials, videos, web courses, instructor-led courses, training seminars, learning plans (including one that leads to six courses on the Fundamentals of Data Management), and story maps. Some training materials are available online while others are on location; some are free, and some have an associated fee. Each course provides a certificate once it is completed.
Data Carpentry Geospatial Workshop
This workshop is designed both to teach general geospatial concepts and to build capacity in using the R programming language for data management. The learner will find out how to use R with geospatial data, particularly geospatial raster and vector data. The workshop lessons include:
- Introduction to Geospatial Concepts, to help the learner understand data structures and common storage and transfer formats for spatial data. The goal of this lesson is to provide an introduction to core geospatial data concepts. It is intended as a pre-requisite for the R for Raster and Vector Data lesson for learners who have no prior experience working with geospatial data.
- Introduction to R for Geospatial Data, to help the learner import data into R, calculate summary statistics, and create publication-quality graphics by providing an introduction to the R programming language.
- Introduction to Geospatial Raster and Vector Data with R in which the learner will open, work with, and plot vector and raster-format spatial data in R. This lesson provides a more in-depth introduction to visualization (focusing on geospatial data), and working with data structures unique to geospatial data. It assumes that learners are already familiar with both geospatial data concepts and the core concepts of R.
The BD2K Guide to the Fundamentals of Data Science Series
The Big Data to Knowledge (BD2K) Initiative presents this virtual lecture series on the data science underlying modern biomedical research. Since its beginning in September 2016, the webinar series has consisted of presentations from experts across the country covering the basics of data management, representation, computation, statistical inference, data modeling, and other topics relevant to “big data” in biomedicine. The webinar series provides essential training suitable for individuals at an introductory level. All video presentations from the seminar series are streamed for live viewing, recorded, and posted online for future viewing and reference. These videos are also indexed as part of TCC’s Educational Resource Discovery Index (ERuDIte). This webinar series is a collaboration between the TCC, the NIH Office of the Associate Director for Data Science, and the BD2K Centers Coordination Center (BD2KCCC).
View all archived videos on our YouTube channel:
https://www.youtube.com/channel/UCKIDQOa0JcUd3K9C1TS7FLQ
ETD+ Toolkit: Training Students to manage ETD+ research outputs
The ETD+ Toolkit is a Google Drive Open Curriculum package that is an approach to improving student and faculty research output management. Focusing on the Electronic Thesis and Dissertation (ETD) as a mile-marker in a student’s research trajectory, it provides in-time advice to students and faculty about avoiding common digital loss scenarios for the ETD and all of its affiliated files.
The ETD+ Toolkit provides free introductory training resources on crucial data curation and digital longevity techniques. It has been designed as a training series to help students and faculty identify and offset risks and threats to their digital research footprints.
What it is:
An open set of six modules and evaluation instruments that prepare students to create, store, and maintain their research outputs on durable devices and in durable formats. Each is designed to stand alone; they may also be used as a series.
What each module includes:
Each module includes Learning Objectives, a one-page Handout, a Guidance Brief, a Slideshow with full presenter notes, and an evaluation Survey. Each module is released under a CC-BY license and all elements are openly editable to make reuse as easy as possible.
Open Access to Publications in Horizon 2020 (May 2017)
This webinar is part of the OpenAIRE Spring Webinars 2017.
It dealt with the Open Access mandate in H2020, what is expected of projects with regard to the OA policies in H2020, and how OpenAIRE can help.
Webinar led by Eloy Rodrigues and Pedro Príncipe (UMinho).
Webinar presentation: https://www.slideshare.net/OpenAIRE_eu/openaire-webinar-open-access-to-publications-in-horizon-2020-may-2017
Webinar recordings: https://webinars.eifl.net/2017-05-29_OpenAIRE_H2020_OAtopublications/index.html
Smithsonian Libraries: Describing Your Project: Citation Metadata
Smithsonian Libraries Metadata Guide.
The overall description for your project could be referred to as project metadata, citation metadata, a data record, a metadata record, or a dataset record. The information supplied in the project description should be sufficient to enable you and others to find and properly cite your data.
A metadata record gives the basic who, what, where, and when of the data. It is a high level description that others can use to cite your data. It may be submitted with a dataset as a separate file when deposited in a repository, or displayed in the repository with data entered into a form.
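As an illustration (not taken from the Smithsonian guide), a minimal citation metadata record capturing that who, what, where, and when might look like the following Python dictionary; all field names and values here are hypothetical, not a specific repository's schema:

```python
# Illustrative only: a minimal citation-style metadata record.
# Field names and values are hypothetical, not any repository's schema.
dataset_record = {
    "title": "Stream temperature measurements, Rock Creek, 2019-2021",  # what
    "creators": ["A. Researcher", "B. Collaborator"],                   # who
    "location": "Rock Creek, Washington, DC, USA",                      # where
    "temporal_coverage": "2019-01-01/2021-12-31",                       # when
    "identifier": "doi:10.xxxx/example",  # placeholder, not a real DOI
}
print(dataset_record["title"])
```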
Metacat Administrator's Guide
Metacat is a repository for data and metadata (documentation about data) that helps scientists find, understand and effectively use data sets they manage or that have been created by others. Thousands of data sets are currently documented in a standardized way and stored in Metacat systems, providing the scientific community with a broad range of science data that–because the data are well and consistently described–can be easily searched, compared, merged, or used in other ways.
This Metacat Administrator's Guide includes instruction on the following topics:
Chapter 1: Introduction
Chapter 2: Contributors
Chapter 3: License
Chapter 4: Downloading and installing Metacat
Chapter 5: Configuring Metacat
Chapter 6: DataONE Member Node Support
Chapter 7: Accessing and submitting Metadata and data
Chapter 8: Metacat indexing
Chapter 9: Modifying and creating themes
Chapter 10: Metacat authentication mechanism
Chapter 11: Metacat's use of Geoserver
Chapter 12: Replication
Chapter 13: Harvester and harvest list editor
Chapter 14: OAI protocol for metadata harvesting
Chapter 15: Event logging
Chapter 16: Enabling web searches: sitemaps
Chapter 17: Appendix: Metacat properties
Chapter 18: Appendix: Development issues
QGIS - for Absolute Beginners
This video is a complete rundown of the basics in QGIS, a free GIS software package designed as an alternative to ArcMap.
QGIS is a user-friendly Open Source Geographic Information System (GIS) licensed under the GNU General Public License. QGIS is an official project of the Open Source Geospatial Foundation (OSGeo). It runs on Linux, Unix, Mac OSX, Windows and Android and supports numerous vector, raster, and database formats and functionalities.
QGIS Training Manual
A training manual written by the QGIS Development Team. It includes instruction on the basic use of the QGIS interface, applied examples, and other basic operations. Topics include: general tools, the QGIS GUI, working with projections, raster and vector data, managing data sources, and integration with GRASS GIS. Examples are given of working with GPS and OGC data. A list of plugins is also included.
QGIS aims to be a user-friendly GIS, providing common functions and features. The initial goal of the project was to provide a GIS data viewer. QGIS has reached the point in its evolution where it is being used by many for their daily GIS data-viewing needs. QGIS supports a number of raster and vector data formats, with new format support easily added using the plugin architecture.
Training Materials for Data Management in Reclamation
This document (downloadable from this landing page) provides supplementary educational materials focused upon US Bureau of Reclamation (USBR) approaches to data management that use and expand upon a number of USGS training modules on data management. The USBR supplementary materials include:
- A discussion of the Reclamation data lifecycle
- A Reclamation data management plan template
- Examples of Reclamation data management best practice
- Lessons learned from various USBR data management efforts.
Introduction to Data Management Plans
Video presentation and slides introducing the concept of Data Management Plans, given by Dr. Andrew Stephenson at the Research Resource Forum at Northwestern University in 2016. Dr. Stephenson is Distinguished Professor of Biology and Associate Dean for Research and Graduate Education in the Eberly College of Science at Penn State. As an active researcher, he has generated and collected data for many years and served on many panels reviewing grant proposals. From his perspective, data management plans make good sense. In the following video, he describes the elements of a DMP and why they are important. The video presentation is available at: https://www.youtube.com/watch?v=uHyDzt6E3qU
This presentation is part of a Data Management Plan Tutorial prepared by the Penn State University Libraries and contains the following modules:
- Introduction to Data Management Plans
- Why Do You Need a Data Management Plan?
- Components of a Typical Plan
- Tools and Other Resources for Data Management Planning
- Summary
- Part 1: Data and Data Collection
- Part 2: Documenting the Data
- Part 3: Policies for Data Sharing and Access
- Part 4: Reuse and Redistribution of Data
- Part 5: Long-Term Preservation and Archiving of Data
- Next Steps to Take
The entire Data Management Plan tutorial can be found at: https://www.e-education.psu.edu/dmpt
Why should you worry about good data management practices?
To prepare data for archiving, they must be organized into well-formatted, described, and documented datasets. Benefits of good data management include:
Short-term
Spend less time doing data management and more time doing research
Easier to prepare and use data for yourself
Collaborators can readily understand and use data files
Long-term (data publication)
Scientists outside your project can find, understand, and use your data to address broad questions
You get credit for archived data products and their use in other papers
Sponsors protect their investment
This page provides an overview of data management planning and preparation. It offers practical methods to successfully share and archive your data at the ORNL DAAC. Topics include: Best Practices for Data Management, Writing Data Management Plans (including examples of data management plans), and How-to's and Resources.
Tools for Version Control of Research Data
Research data tend to change over time (get expanded, corrected, cleaned, etc.). Version control is the management of changes to data or documents. This talk addresses why version control is a crucial component of research data management and introduces software tools that are available for this purpose. This workshop was part of the Conference Connecting Data for Research held at VU University in Amsterdam.
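As a minimal illustration of the underlying idea (not one of the tools covered in the talk), a checksum can reveal whether a data file has changed between recorded versions:

```python
# Minimal illustration, not a tool from the talk: use a SHA-256 checksum
# to detect whether a data file has changed since it was last recorded.
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

data_file = Path("measurements.csv")  # hypothetical data file
if data_file.exists():
    print(data_file.name, file_digest(data_file))
```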
Top 10 FAIR Data & Software Things
The Top 10 FAIR Data & Software Global Sprint was held online over the course of two days (29-30 November 2018), where participants from around the world were invited to develop brief guides (stand-alone, self-paced training materials), called "Things", that can be used by the research community to understand FAIR in different contexts, but also as starting points for conversations around FAIR. The idea for "Top 10 Data Things" stems from initial work done at the Australian Research Data Commons or ARDC (formerly known as the Australian National Data Service).
The Global Sprint was organised by Library Carpentry, Australian Research Data Commons and the Research Data Alliance Libraries for Research Data Interest Group in collaboration with FOSTER Open Science, OpenAire, RDA Europe, Data Management Training Clearinghouse, California Digital Library, Dryad, AARNet, Center for Digital Scholarship at the Leiden University, and DANS. Anyone could join the Sprint and roughly 25 groups/individuals participated from The Netherlands, Germany, Australia, United States, Hungary, Norway, Italy, and Belgium. See the full list of registered Sprinters.
Sprinters worked from a primer that was provided in advance, together with an online ARDC webinar introducing FAIR and the Sprint titled "Ready, Set, Go! Join the Top 10 FAIR Data Things Global Sprint." Groups/individuals developed their Things in Google docs which could be accessed and edited by all participants. The Sprinters also used a Zoom channel provided by ARDC for online calls and coordination, and a Gitter channel, provided by Library Carpentry, to chat with each other throughout the two days. In addition, participants used the Twitter hashtag #Top10FAIR to communicate with the broader community, sometimes including images of the day.
Participants greeted each other throughout the Sprint and created an overall welcoming environment. As the Sprint shifted to different timezones, it was a chance for participants to catch up. The Zoom and Gitter channels were a way for many to connect over FAIR but also discuss other topics. A number of participants did not know what to expect from a Library Carpentry/Carpentries-like event but found a welcoming environment where everyone could participate.
Guidelines for Effective Data Management Plans
Data Management Plans
Federal funding agencies are increasingly recommending or requiring formal data management plans with all grant applications. To help researchers meet those requirements, ICPSR offers these guidelines. Based on our Data Management Plan Web site, this document contains a framework, example data management plans, links to other resources, and a bibliography of related publications. ICPSR also hosts a blog on data management plans. Topics include:
Framework for Creating a Data Management Plan
Data Management Plan Resources & Examples
Resources for Development
Templates and Tools
Guidance on Funder Requirements
Good Practice Guidance
We hope you find this information helpful as you craft a data management plan. Please contact us at [email protected] with any comments or suggestions.
Columbia Research Data Management Tutorials and Templates
The ReaDI Program has created several tutorials and templates to aid in the management of data during the collection phase of research and in preparing for publication. Tutorial topics include: Good Laboratory Notebook Practices, Laboratory Notebook Checklist, Best Practices for Data Management When Using Instrumentation, and Guidelines on the Organization of Samples in a Laboratory. Downloadable templates are available on related topics, such as data to figure map templates.
The Research and Data Integrity (ReaDI) program is designed to enhance data management and research integrity at Columbia University. The ReaDI program provides resources, outreach and consultation services to researchers at all stages in their careers. Many of the resources are applicable to researchers at any institution.
How-to Guides to Managing a Research Project
These guides are designed to mirror the lifecycle of your research project. They provide support at its various stages. Topics include:
- Creating & analysing data
- Choosing file formats
- Data discovery & re-use
- Storing & preserving data
- Sharing data
- Handling sensitive & personal information
- Planning ahead for Data Management
- Software sustainability, preservation and sharing.
Research data management training modules in Archaeology (Cambridge)
Looking after digital data is central to good research. We all know of horror stories of people losing or deleting their entire dissertation just weeks prior to a deadline! But even before this happens, good practice in looking after research data from the beginning to the end of a project makes work and life a lot less stressful. Defined in the widest sense, digital data includes all files created or manipulated on a computer (text, images, spreadsheets, databases, etc). With publishing and archiving of research increasingly online, we all have a responsibility to ensure the long-term preservation of archaeological data, while at the same time being aware of issues of sensitive data, intellectual property rights, open access, and freedom of information.
The DataTrain teaching materials have been designed to familiarise post-graduate students with good practice in looking after their research data. A central tenet is the importance of thinking about this in conjunction with the projected outputs and publication of research projects. The eight presentations, followed by group discussion and written exercises, follow the lifecycle of digital data from pre-project planning, through data creation, data management, publication, and long-term preservation, lastly to issues of the re-use of digital data. At the same time, the course follows the career path of researchers from post-graduate research students, through post-doctoral research projects, to larger collaborative and inter-disciplinary projects.
The teaching material is targeted at co-ordinators of Core Research Skills courses for first-year post-graduate research students in archaeology. The material is open access and you are invited to re-use and amend the content as best suits the requirements of your university department. The complete course is designed to run either as a four-hour half-day workshop or as 2 x 2 hour classes. Alternatively, individual modules can be slotted into existing data management and core research skills teaching.
Ten Simple Rules for Creating a Good Data Management Plan
Research papers and data products are key outcomes of the science enterprise. Governmental, nongovernmental, and private foundation sponsors of research are increasingly recognizing the value of research data. As a result, most funders now require that sufficiently detailed data management plans be submitted as part of a research proposal. A data management plan (DMP) is a document that describes how you will treat your data during a project and what happens with the data after the project ends. Such plans typically cover all or portions of the data life cycle—from data discovery, collection, and organization (e.g., spreadsheets, databases), through quality assurance/quality control, documentation (e.g., data types, laboratory methods) and use of the data, to data preservation and sharing with others (e.g., data policies and dissemination approaches). The article also includes a downloadable image that illustrates the relationship between hypothetical research and data life cycles and highlights the links to the rules presented in this paper.
Research Data Services Guides in Support of Data Management
Research Data Services is a collaboration between the University of Iowa Libraries, the Office of the Vice President of Research and Economic Development, Information Technology Services, and other campus offices, to support researchers' data management needs. The guides that are part of these Services include answers to key questions and may also include short videos on the following topics:
- Data Management Plans
- Data Organization and Documentation
- Data Repositories
- Datasets
- Other University of Iowa services and resources available, as well as external tools, websites, and repositories that may be useful.
Photogrammetry Workshop UNM GEM Lab
This course provides an introduction to photogrammetry, with a full set of data to utilize in building a Digital Elevation Model using Agisoft PhotoScan. The course uses a GitHub repository to grow the workshop into a full-featured course on the applications of modern remote sensing and photogrammetry techniques in and for the environmental and geosciences.
Coffee and Code: Reproducibility and Communication
This workshop provides an introduction to reproducibility and communication of research using notebooks based on RStudio and Jupyter Notebooks. The development of effective documentation and accessible and reusable methods in scientific analysis can make a significant contribution to the reproducibility and understanding of a research activity. The integration of executable code with blocks of narrative content within notebook systems, such as those provided in the RStudio and Jupyter Notebook (and Lab) software environments, provides a streamlined way to bring these minimum components (data, metadata, code, and software) into a package that can be easily shared with others for review and reuse.
This workshop will provide:
- A high-level introduction to the notebook interfaces provided for R and Python through the RStudio and Jupyter Notebook environments.
- An introduction to Markdown as a language supported by both systems for adding narrative content to notebooks
- Sample notebooks illustrating structure, content, and output options
From the master page for this resource, the Reproducibility and Communication Using Notebooks ipynb file provides more information about what is covered in this workshop.
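As a rough illustration of the notebook style the workshop describes (not an excerpt from its materials), a notebook interleaves Markdown narrative with executable cells; the Python below mimics one such cell, with the Markdown shown as comments:

```python
# In a notebook, a Markdown cell like this would precede the code:
#
#   ## Summary statistics
#   We compute the mean of the raw measurements before plotting.
#
# The code cell itself is ordinary, executable Python:
measurements = [2.3, 2.9, 3.1, 2.7]  # hypothetical data
mean = sum(measurements) / len(measurements)
print(f"mean = {mean:.2f}")
```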
Coffee and Code: NoSQL
Introduction to NoSQL
In previous sessions we have looked at use cases for relational database management systems (RDBMS), which predominantly make use of SQL. Today's session provides an overview of NoSQL databases. NoSQL can be understood to mean "no SQL" or, alternatively, "not only SQL." NoSQL databases are non-relational, which in the simplest terms means they are not made up of tables.
Topics we will cover include:
- Differences between SQL and NoSQL databases
- Types of NoSQL databases and their use cases
- Document database basics with MongoDB (a minimal sketch follows this list)
- Graph database basics with Neo4j
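As a minimal, hedged sketch of the document-database style listed above, the following uses the pymongo driver; it assumes a MongoDB server is running locally on the default port, and the database, collection, and document shown are invented for illustration:

```python
# Hedged sketch: basic document-store operations with pymongo.
# Assumes MongoDB is running locally on the default port (27017);
# database/collection names are invented for illustration.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
papers = client["workshop_demo"]["papers"]  # database and collection

# Documents are schemaless, JSON-like records rather than table rows.
papers.insert_one({"title": "On NoSQL", "year": 2019, "tags": ["databases"]})
print(papers.find_one({"year": 2019}))
```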
Coffee and Code: Introduction to Version Control
This is a tutorial about version control, also known as revision control, a method for tracking changes to files and folders within a source code tree, project, or any complex set of files or documents.
Also see Advanced Version Control, here: https://github.com/unmrds/cc-version-control/blob/master/03-advanced-ver...
Coffee and Code: Advanced Version Control
Learn advanced version control practices for tracking changes to files and folders within a source code tree, project, or any complex set of files or documents.
This tutorial builds on concepts taught in "Introduction to Version Control," found here: https://github.com/unmrds/cc-version-control/blob/master/01-version-cont....
Git Repository for this Workshop: https://github.com/unmrds/cc-version-control
Coffee and Code: Introduction to Database Design
In this session, we are going to dig a little deeper into databases as representations of systems and processes. A database with a single table may not feel or function much differently from a spreadsheet. Much of the benefit of using databases results from designing them as models of complex systems in ways that spreadsheets just can't do:
- Inventory control and billing
- Human resources
- Blogging platforms
- Ecosystems
There will be some more advanced SQL statements this time, though we will still be using SQLite; a minimal SQLite sketch follows the list below. Concepts which will be discussed and implemented in our code include:
- Entities and attributes
- Keys
- Relationships
- Normalization
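Here is a minimal sketch of the kind of multi-table design described above, using Python's built-in sqlite3 module; the schema (customer, invoice) is hypothetical, not the session's actual example:

```python
# Illustrative sketch: entities, keys, and a relationship in SQLite,
# via Python's built-in sqlite3 module. The schema is hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only if enabled
con.executescript("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,  -- entity with a surrogate key
        name TEXT NOT NULL
    );
    CREATE TABLE invoice (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id),  -- relationship
        total       REAL NOT NULL
    );
""")
con.execute("INSERT INTO customer (id, name) VALUES (1, 'Ada')")
con.execute("INSERT INTO invoice (customer_id, total) VALUES (1, 19.99)")

# A join reassembles the related rows.
for name, total in con.execute("""
        SELECT customer.name, invoice.total
        FROM invoice JOIN customer ON invoice.customer_id = customer.id"""):
    print(name, total)
```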
Getting Started with Data Management & DMPTool
Data management plans are critical for compliance on most sponsored projects, and will save you time and resources throughout your project. The DMPTool is an online tool to help you write a data management plan using templates with specific funder requirements.