Welcome to my Open Notebook

This is an Open Notebook with Selected Content - Delayed. All content is licensed under CC-BY. Find out more here.

[Image: ONS-SCD logo]

Research Protocol For Manitoba Centre For Health Policy

Version control

2013-12-02: This post was originally released
2015-10-02: The URL to the University of Manitoba guidelines changed and has been updated.

Data Management Plan Checklist

1 U-Manitoba Centre for Health Policy Guidelines

These guidelines come from:

http://umanitoba.ca/faculties/medicine/units/mchp/protocol/media/manage_guidelines.pdf

Most of the material below is taken verbatim from the original. Unfortunately, many of the items described below link to internal MCHP documents that we cannot access. Nonetheless, the structure of the guidelines provides a useful skeleton to frame our thinking.

The following areas should be reviewed with project team members near the beginning of the study and throughout the project as needed:

  • Confidentiality
  • Project team
  • File organization and documentation development
  • Communication
  • Administrative
  • Report Preparation
  • Project Completion

1.1 Confidentiality

Maintaining data access

1.2 Project Team Makeup

Roles and contact information should be documented on the project website for the following, where applicable (information on the level of access approved for each team member may also be included).

1.2.1 Principal Investigator

This is the lead person on the project, who assumes responsibility for delivering the project. The PI makes decisions on project direction and analysis requirements, with input from programmers and the research coordinator (an iterative process). If there is more than one PI (e.g., multi-site studies), overall responsibility for the study needs to be determined, as does how the required work will be allocated and coordinated among the co-investigators. Researcher Workgroup website (internal link)

1.2.2 Research Coordinator

The RC is always assigned to deliverables and is usually brought in on other types of projects involving multiple sites, investigators and/or programmers. Responsibilities include project documentation, project management (e.g., ensuring that timelines are met, ensuring that project specifications are being followed), and working with both investigator(s) and the Programmer Coordinator throughout the project to coordinate project requirements.

1.2.3 Programmer Coordinator

The PC has a central management role, facilitating the assignment of programming resources to projects and ensuring the best possible match between programmers and investigators. Research Coordinator Workgroup website (internal link)

1.2.4 Programmer Analyst

The Programmer Analyst is primarily responsible for programming and the related programming documentation (such that the purpose of the program and how results were derived can be understood by others). However, the PA may also take a major role in the analyses of the project, and this will characteristically vary with the project. Programmer Analyst Workgroup website (internal link)

1.2.5 Research Support

Research Support is primarily responsible for preparing the final product (i.e., the report), including editing and formatting the final graphs and manuscript and using Reference Manager to set up the references. Research Support also normally sets up and attends working group meetings. All requests for research support go through the Office Manager.

1.3 Project Team considerations

1.3.1 Roles

It is important to clarify everyone's roles at the beginning of the project; for example, whether the investigator routinely expects basic graphs and/or programming logs from the programmer.

1.3.2 Continuity

It is highly desirable to keep the same personnel from the start of the project, where possible. It can take some time to develop a cohesive working relationship, particularly if work styles are not initially compatible. Furthermore, asking others to temporarily fill in for team absences is generally best avoided, particularly for programming tasks (unless there is an extended period of absence). The original programmer will know best the potential impact of any changes that may need to be made to the programming code.

1.3.3 Access levels

Access to MCHP internal resources (e.g., Windows, Unix) needs to be assessed for all team members and set up as appropriate to their roles on the project.

1.3.4 Working group

A WG is always set up for deliverables (and frequently for other projects): Terms of Reference for working group (internal)

1.3.5 Atmospherics

1.4 File organization and Documentation Development

All project-related documentation, including key e-mails used to update project methodology, should be saved within the project directory. Resources for directory setup and file development include:

1.4.1 Managing MCHP resources

This includes various process documents as well as an overview of the documentation process for incorporating research carried out by MCHP into online resources: Documentation Management Guide (internal)

1.4.2 MCHP directory structure

A detailed outline of how the Windows environment is structured at MCHP.

1.4.3 Managing project files

How files and sub-directories should be organized and named as per the MCHP Guide to Managing Project Files (internal pdf). Information that may be suitable for incorporating into MCHP online resources should be identified; for example, a Concept Development section for subsequent integration of a new concept(s) into the MCHP Concept Dictionary. The deliverable glossary is another resource typically integrated into the MCHP Glossary.

1.4.4 Recommended Directories

NOTE: this is a divergence from the MCHP guidelines. These recommended directories are synthesised from a combination of sources (a minimal R sketch for creating this skeleton follows the list).

  • Background: concise summaries, possibly many documents for the main project and any main analyses, based on the 1:3:25 paradigm: one page of main messages; a three-page executive summary; 25 pages of detailed findings.
  • Proposals: for documents related to grant applications.
  • Approvals: for ethics applications.
  • Budget: spreadsheets and so forth.
  • Data
    • dataset1
    • dataset2
  • Paper1
    • Data
      • merged dataset1 and 2
    • Analysis (also see http://projecttemplate.net for a programmer-oriented template)
      • exploratory analyses
      • data cleaning
      • main analysis
      • sensitivity analysis
      • data checking
      • model checking
      • internal review
    • Document
      • Draft
      • Journal1
        • rejected? :-(
      • Journal2
        • Response to reviews
    • Versions: folders named by date - dump entire copies of the project at certain milestones/change points
    • Archiving final data with final published paper
  • Papers 2, 3, etc.: same structure as Paper 1; hopefully the project spawns several papers
  • Communication: details of communication with stakeholders and decision makers
  • Meetings: for organisation and records of meetings
  • Contact details: tables of contact lists
  • Completion: checklists to make sure project completion is systematic. Factor in a critical reflection of lessons learnt.
  • References
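The skeleton above can be created programmatically. Below is a minimal sketch in base R; it is not part of any guideline, and the project root and folder names are placeholders to adapt to your own project.

## minimal sketch: create the recommended directory skeleton
## (the project root and folder names are placeholders)
proj <- "~/projects/my-new-project"
dirs <- c("Background", "Proposals", "Approvals", "Budget",
          "Data/dataset1", "Data/dataset2",
          "Paper1/Data", "Paper1/Analysis", "Paper1/Document",
          "Paper1/Versions",
          "Communication", "Meetings", "Contacts", "Completion", "References")
for (d in file.path(proj, dirs)) {
  dir.create(d, recursive = TRUE, showWarnings = FALSE)
}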

1.5 Communication

Project communication should be in written form, wherever possible, to serve as reference for project documentation. Access and confidentiality clearance levels for all involved in the project will determine whether separate communication plans need to be considered for confidential information.

1.5.1 E-mail

E-mail provides opportunities for feedback and discussion from everyone and for documenting key project decisions. Responses on any given issue would normally be copied to every project member, with the expectation of receiving feedback within a reasonable period of time (e.g., a few days). The Research Coordinator should be copied on ALL project correspondence in order to keep the information on the project website up to date.

  • E-mail etiquette (internal)

1.5.2 Meetings

Regularly-scheduled meetings or conference calls should include all project members where possible. Research Coordinators typically arrange project team meetings and take meeting minutes, while Research Support typically arranges the Working Group meetings.

  • Tips for taking notes (internal)
  • Outlook calendar
    Used for booking rooms, it displays information on room availability and may include schedules of team members.

1.6 Administrative

1.6.1 Time entry

Time spent on projects should be entered by all MCHP employees who are members of the project team.

  • website for time entry (internal)
  • procedures for time entry (internal)

1.7 Report preparation

This includes:

  • Policies - e.g., Dissemination of Research Findings
  • Standards - e.g., deliverable production, use of logos, web publishing
  • Guidelines - e.g., producing PDFs, PowerPoint, and Reference Manager files
  • Other resources - e.g., e-mail etiquette, technical resources, photos.

1.7.1 Reliability and Validity Checks

Making sure the numbers "make sense". Carrying out these checks requires spelling out who will do which checks.

  • Data Validity Checks
    A variety of things to check for at various stages of the study. Programming can be reviewed, for example, by checking that all programs have used the right exclusions, the correct definitions, etc., and that output has been accurately transferred to graphs, tables, and maps for the report (a small illustrative sketch of such checks follows this list).
  • Discrepancies between data sources
    In this case it is MCHP and Manitoba Health Reports - an example of cross-checking against another source of data.
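As a small illustration (not from the MCHP guidelines), validity checks of this kind can be encoded as assertions that are re-run with the analysis; the object and column names below are hypothetical.

## illustrative sketch only - object and column names are made up
# exclusions applied: nobody under 18 should remain in the study cohort
stopifnot(all(cohort$age >= 18))
# correct definitions: diagnosis codes restricted to the agreed list
stopifnot(all(cohort$icd_code %in% approved_icd_codes))
# output transferred accurately: report table totals match the analytic data
stopifnot(nrow(cohort) == sum(table1$n))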

1.8 Project Completion

Several steps need to take place to "finish" the project:

1.8.1 Final Project Meeting

Wind-up or debriefing meetings are held shortly after public release of a deliverable. Such meetings provide all team members with an opportunity to communicate what worked/did not work in bringing the project to completion, providing lessons learned for future deliverables.

1.8.2 Final Documentation Review

Findings from the wind-up meeting should be used to update and finalize the project website (including entering the date of release of report/paper). Both Windows and Unix project directories should be reviewed to ensure that only those SAS programs relevant to project analyses are kept (and well-documented) for future reference. Any related files which may be stored in a user directory should be moved to the project directory.

1.8.3 System Cleanup

When the project is complete, the Systems Administrator should be informed. Project directories, including program files and output data sets, will be archived to tape or CD. Tape backups are retained for a 5-year period before being destroyed, so any project may be restored for up to five years after completion.

1.8.4 Integration of new material to institution repository

This refers to the MCHP resource repository - a general overview of this process is described in the General Documentation Process (internal).


Posted in  disentangle Project Management


Reproducible Research Pipelines In Epidemiology

The scientific questions motivating my work explore the health effects of environmental changes. These include droughts, bushfires, woodsmoke, dust storms, heat waves and local environmental conditions. The research needed to disentangle the health effects of environmental changes from those of social factors. Some of the findings were novel and unexpected. Adequate documentation of the methods was problematic because of the many steps of data processing and analysis. Reproducible research pipelines address the problem of documenting data analyses by distributing data and code with publications.

Reproducibility is needed to improve credibility. It is often asserted in the literature that much research is not easy to reproduce, yet it is not clear how reproducibility techniques can be implemented effectively. The thesis asks how pipelines can be effectively implemented in epidemiology. It describes methods for reproducible research pipelines and demonstrates several applications of these methods in environmental epidemiology.

Environmental epidemiology requires us to study multifactorial pathogenesis. All diseases have multiple causal factors. To understand the many factors affecting health, epidemiologists must disentangle strands of a web of causal influences. Isolating factors is difficult and risks being overly reductionist, because these determinants can interact in complex ways. Environmental epidemiologists often narrow the focus to a single environmental cause and health effect. A simple example is bushfire smoke and its direct effects on cardio-respiratory disease. A more complex example is drought and suicide, where the effects are indirect and the focus is on a chain of intermediary causal factors. These questions are usually explored in the context of many other factors that describe human biological variables and the socio-economic milieu.

While greater weight is given to evidence from experimental than observational studies, experiments are difficult in environmental health, so analysis of observational data is often used instead. Observational studies have inherent problems pertaining to confounding and effect-modifying variables; their principal problem is the large number of inter-relationships between variables, which can confound or modify effects. It is vital to a valid analysis and meaningful interpretation that we include these. It is problematic that scientists select variables from a multitude of possibilities found in the literature and gather them from a plethora of possible data sources. There is a long process of hypothesising, study design, data collection, cleaning, exploration, decision making, preparation, data analysis, model building and model checking. This process has been described as a vast ‘garden of forking paths’ connecting the steps and decisions the analyst made, any of which could have been made differently. These issues might result in mere correlation being interpreted as causation.

Adequately documenting the methods and results of data analysis helps safeguard against such mistakes. This thesis proposes that reproducible research pipelines address the problem of adequate documentation of data analysis, because they make it easy to check the methods: assumptions are easy to challenge, and results can be verified in new analyses. Reproducible research pipelines extend traditional research by encoding the steps in a computer ‘scripting’ language and distributing the data and code with publications. Traditional research moves through the steps of hypothesis and design, measured data, analytic data, computational results (for figures, tables and numerical results), and reports (text and formatted manuscript).

Posted in  disentangle


The Best Thing About Reproducibility Is Not Reproducibility, It Is Transparency And Rigour

Adequately documenting the methods and results of data analysis helps safeguard against errors of execution and interpretation. My PhD thesis proposes that reproducible research pipelines address the problem of adequate documentation of data analysis.

A graphical view of the reproducible research pipeline concept is shown below. The ideas were introduced into epidemiology by Peng et al. in 2006, although Peng has more recently used the terms ‘evidence based data analysis pipeline’ (Peng 2013) and ‘data science pipeline’ (Peng 2015). Both terms are useful, but I chose to follow the original phrase. The graphical version shown below was introduced by Solymos and Feher (2008).

[Figure: the reproducible research pipeline, after Solymos and Feher (2008)]

The best thing about reproducible work is not merely the ability to arrive repeatedly at the same result; putting in place the organisational structures required for reproducibility also implicitly improves the transparency and rigour of the work. This is because they make it easy to check the methods: assumptions are easy to challenge, and results can be verified in new analyses.

Reproducible research pipelines extend traditional research by encoding the steps in a computer ‘scripting’ language and distributing the data and code with publications. Traditional research moves through the steps of hypothesis and design, measured data, analytic data, computational results (for figures, tables and numerical results), and reports (text and formatted manuscript).
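A pipeline encoded this way can be as simple as a driver script that runs each stage in order. The sketch below is illustrative only; the file names are hypothetical.

## run_pipeline.R - minimal sketch of a scripted research pipeline
source("code/01_read_measured_data.R")   # measured data -> raw objects
source("code/02_make_analytic_data.R")   # cleaning and derivation of analytic data
source("code/03_models.R")               # computational results
source("code/04_figures_tables.R")       # figures, tables and numerical results
## the report (text and formatted manuscript) is then compiled from these outputs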

This model of the research pipeline makes a new relationship possible between the author and the reader, who approach the results and understandings of the research from opposite directions. Readers can dig deeper into the research to verify results or conduct similar studies. Reproducibility exists along a spectrum: minimum reproducibility can be achieved by providing the measured or analytic data and the analytic code; more is gained by also providing the processing code necessary to transform the original measured data into tidy data for analysis; full reproducibility would include all stages of the pipeline.


Posted in  disentangle Reproducible Research Reports


Reproducible Research And Managing Digital Assets Part 3 of 3. ProjectTemplate is appropriate for large-scale projects

Recap on this series of three posts

  • The first post showed the recommended files and folders for a data analysis project from Scott Long
  • That recommendation was pretty complex, with a few folders that I felt did not jump out as super-useful
  • The second post showed a very simple template from the R community called makeProject
  • I really like that one as it seems to be the minimum amount of stuff needed to make things work.

The ProjectTemplate framework

  • I have been using John Myles White's ProjectTemplate R package http://projecttemplate.net/ for ages
  • I really like the ease with which I can get a new project up and running
  • and the ease with which I can pick up an old project and start adding new work

Quote from John’s first post

My inspiration for this approach comes from the rails command from
Ruby on Rails, which initializes a new Rails project with the proper
skeletal structure automatically. Also taken from Rails is
ProjectTemplate’s approach of preferring convention over
configuration: the automatic data and library loading as well as the
automatic testing work out of the box because assumptions are made
about the directory structure and naming conventions that will be used

http://www.johnmyleswhite.com/notebook/2010/08/26/projecttemplate/

  • I don't know anything about RoR, but this philosophy works really well for my R programming too

R Code

if (!require(ProjectTemplate)) install.packages("ProjectTemplate"); require(ProjectTemplate)
setwd("~/projects")
create.project("my-project")
setwd('my-project')
dir()
##  [1] "cache"       "config"      "data"        "diagnostics" "doc"        
##  [6] "graphs"      "lib"         "logs"        "munge"       "profiling"  
## [11] "README"      "reports"     "src"         "tests"       "TODO"   
##### these are very sensible default directories to create a modular
##### analysis workflow.  See the project homepage for descriptions
 
# now all you need to do whenever you start a new day 
load.project()
# and your workspace will be recreated and any new data automagically analysed in
# the manner you want

Advanced usage of ProjectTemplate

ProjectTemplate Demo

1 The Compendium concept

My goal is to develop data analysis projects along the lines of the Compendium concept of Gentleman and Temple Lang (2007). Compendia are dynamic documents containing text, code and data. Transformations are applied to the compendium to view its various aspects:

  • Code Extraction (Tangle): source code
  • Export (Weave): LaTeX, HTML, etc
  • Code Evaluation

I'm also following the Orgmode technique of Schulte et al. (2012).
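For readers working in R rather than Orgmode, the same Tangle/Weave transformations can be illustrated with knitr; this is only a sketch, and the compendium file name is hypothetical.

## sketch: code extraction (tangle) and export/evaluation (weave) with knitr
library(knitr)
purl("compendium.Rmd")   # tangle: extract the source code to compendium.R
knit("compendium.Rmd")   # weave: evaluate the code and write the report document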

2 The R code that produced this report

I support the philosophy of Reproducible Research http://www.sciencemag.org/content/334/6060/1226.full, and where possible I provide data and code in the statistical software R that will allow analyses to be reproduced. This document is prepared automatically from the associated Emacs Orgmode file. If you do not have access to the Orgmode file please contact me.

cat('
 #######################################################################
 ## The R code is free software; please cite this paper as the source.  
 ## Copyright 2012, Ivan C Hanigan <ivan.hanigan@gmail.com> 
 ## This program is free software; you can redistribute it and/or modify
 ## it under the terms of the GNU General Public License as published by
 ## the Free Software Foundation; either version 2 of the License, or
 ## (at your option) any later version.
 ## 
 ## This program is distributed in the hope that it will be useful,
 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 ## GNU General Public License for more details.
 ## Free Software
 ## Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
 ## 02110-1301, USA
 #######################################################################
')

3 Initialise R environment

####
# MAKE SURE YOU HAVE THE CORE LIBS
if (!require(ProjectTemplate)) install.packages('ProjectTemplate', repos='http://cran.csiro.au'); require(ProjectTemplate)
if (!require(lubridate)) install.packages('lubridate', repos='http://cran.csiro.au'); require(lubridate)
if (!require(reshape)) install.packages('reshape', repos='http://cran.csiro.au'); require(reshape)
if (!require(plyr)) install.packages('plyr', repos='http://cran.csiro.au'); require(plyr)
if (!require(ggplot2)) install.packages('ggplot2', repos='http://cran.csiro.au'); require(ggplot2)
if(!require(mgcv)) install.packages('mgcv', repos='http://cran.csiro.au');require(mgcv);
require(splines)
if(!require(NMMAPSlite)) install.packages('NMMAPSlite', repos='http://cran.csiro.au');require(NMMAPSlite)
rootdir <- getwd()  

4 ProjectTemplate

This is a simple demo of the R package ProjectTemplate http://projecttemplate.net/ which is aimed at standardising the structure and general development of data analysis projects in R. A primary aim is to allow analysts to quickly get a project loaded up and ready to:

  • reproduce or
  • create new data analyses.

5 Why?

It has been recognised on the R blogosphere that it

6 The Reichian load, clean, func, do approach


The already mentioned blog post http://blog.revolutionanalytics.com/2010/10/a-workflow-for-r.html also links to another ‘best’ approach: the load, clean, func, do workflow by Josh Reich. I've also used the tutorial and data from the package website http://projecttemplate.net/getting_started.html to prepare this demo.

7 The Peng NMMAPSlite approach

The other approach I followed was that of Roger Peng from Johns Hopkins and his NMMAPSlite R package (Peng 2004), in particular the function

readCity(name, collapseAge = FALSE, asDataFrame = TRUE)

Arguments

  • name character, abbreviated name of a city
  • collapseAge logical, should age categories be collapsed?
  • asDataFrame logical, should a data frame be returned?

Description: Provides remote access to daily mortality, weather, and air pollution data from the National Morbidity, Mortality, and Air Pollution Study for 108 U.S. cities (1987–2000); data are obtained from the Internet-based Health and Air Pollution Surveillance System (iHAPSS).
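A sketch of typical usage follows; it assumes the iHAPSS server is reachable and that the package's initDB() call is used to set up a local cache, as in the package documentation.

## sketch only: requires an internet connection to iHAPSS
library(NMMAPSlite)
initDB("NMMAPS")                           # local cache directory (assumed usage)
ny <- readCity("ny", collapseAge = TRUE)   # daily data for New York, age groups collapsed
str(ny)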

8 Init the project

First we want to initialise the project directory.

####
# init
require('ProjectTemplate')
create.project('analysis',minimal=TRUE)

9 dir()

####
# init dir
dir('analysis')
cache
config
data
munge
README
src

10 The reports directory

I've added the reports directory manually and asked the package author if this is generic enough to be in the defaults for

minimal = TRUE 

I believe it may be, as the Getting Started guidebook states:

'It's meant to contain the sort of written descriptions of the results of your analyses that you'd publish in a scientific paper.'

With that report written …, we've gone through the simplest sort of analysis you might run with ProjectTemplate.

####
# init reports
dir.create('analysis/reports')

11 Do the analysis

Use the load, clean, func, do approach.

####
# this is the start of the analysis, 
# assumes the init.r file has been run
if(file.exists('analysis')) setwd('analysis')  
Sys.Date()
# keep a track of the dates the analysis is rerun
getwd()
# may want to keep a reference of the directory 
# the project is in so we can track the history 

12 Get the projecttemplate tutorial data

Get the data from http://projecttemplate.net/letters.csv.bz2 (I downloaded it on 13-4-2012) and put it in the data directory for auto-loading.

####
# analysis get tutorial data
download.file('http://projecttemplate.net/letters.csv.bz2', 
  destfile = 'data/letters.csv.bz2', mode = 'wb')

13 Tools

Edit the config/global.dcf file to make sure that the load_libraries setting is turned on.
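One quick way to check the setting from R is to read the config file directly. This sketch assumes the DCF file contains a load_libraries field; the exact fields and accepted values vary between ProjectTemplate versions.

## sketch: inspect config/global.dcf from R
cfg <- read.dcf('config/global.dcf')
cfg[, 'load_libraries']   # expect 'TRUE' (older versions used 'on'/'off')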

14 Load the analysis data

# the 'load' step

####
# analysis load
require(ProjectTemplate)
load.project()

15 Check the analysis data

# the 'clean' step (check the data)

tail(letters)
zyryanzy
zythemzy
zythiazy
zythumzy
zyzomyszy
zyzzogetonzy

16 Develop munge code

# the 'load' step with processing (munge)

Edit the munge/01-A.R script so that it contains the following two lines of code:

# For our current analysis, we're interested in the total 
# number of occurrences of each letter in the first and 
# second letter positions and not in the words themselves.
# compute aggregates
first.letter.counts <- ddply(letters, c('FirstLetter'), 
  nrow)
second.letter.counts <- ddply(letters, c('SecondLetter'), 
  nrow)

Now if we run with

load.project()

all munging will happen automatically. However…

17 To munge or not to munge?

As you'll see on the website, even once the data munging is completed and the outputs cached, load.project() will keep re-running the munge scripts over and over unless munging is turned off. The author suggests we manually edit our configuration file.

 # edit the config file and turn munge on
 # load.project()
 # edit the config file and turn munge off
 # or my preference
 source('munge/01-A.R')
# which can be included in our first analysis script
# but subsequent analysis scripts can just call load.project() 
# without touching the config file

18 Cache

Once munging is complete we cache the results

cache('first.letter.counts')
cache('second.letter.counts')

# and keep an eye on the implications for our config file, to avoid
# re-calculating these the next time we call load.project()

# the 'do' step

19 Plot first and second letter counts

Produce some simple density plots to see the shape of the first and second letter counts.

  • Create src/generate_plots.R. Use the src directory to store any analyses that you run.
  • The convention is that every analysis script starts with load.project() and then goes on to do something original with the data.

20 Do generate plots

Write the first analysis script into a file in src:

require('ProjectTemplate')
load.project()
plot1 <- ggplot(first.letter.counts, aes(x = V1)) + 
  geom_density()
# pass the plot object explicitly so ggsave saves this plot, not the last one displayed
ggsave(file.path('reports', 'plot1.pdf'), plot1)

plot2 <- ggplot(second.letter.counts, aes(x = V1)) + 
  geom_density()
ggsave(file.path('reports', 'plot2.pdf'), plot2)
# no dev.off() needed: ggsave manages its own graphics device

And now run it (I do this from a main 'overview' script).

source('src/generate_plots.r')

21 First letter

22 Second letter

23 Report results

We see that both the first and second letter distributions are very skewed. To make a note of this for posterity, we can write up our discovery in a text file that we store in the reports directory.

\documentclass[a4paper]{article}
\title{Letters analysis}
\author{Ivan Hanigan}
\begin{document}
\maketitle
blah blah blah
\end{document}


24 Produce final report

# now run LaTeX on the file in reports/letters.tex
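This step can also be driven from R, assuming a LaTeX installation is available on the system path; a sketch:

# compile reports/letters.tex to PDF from within R (assumes LaTeX is installed)
tools::texi2pdf('reports/letters.tex', clean = TRUE)
# or shell out directly:
# system('pdflatex -output-directory reports reports/letters.tex')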

25 Personalised project management directories


####
# init additional directories for project management
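# NOTE: analysisTemplate() appears to be the author's personal helper function,
# not part of the ProjectTemplate package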
analysisTemplate()
dir()
admin
analysis
data
document
init.r
metadata
ProjectTemplateDemo.org
references
tools
versions

Posted in  Data Management


Reproducible Research And Managing Digital Assets Part 2 of 3. makeProject is simple

This post is about an effective and simple data management framework for analysis projects. It introduces Josh Reich's LCFD framework, originally described in this answer on the Stack Overflow website http://stackoverflow.com/a/1434424 and encoded in the makeProject R package http://cran.r-project.org/web/packages/makeProject/makeProject.pdf.

Literature Review Approach

This series of three posts summarises some of the most useful advice I have found, based on my experience of implementing it in my own work.

This is the second post in a series of three entries regarding some evidence-based best-practice approaches I have reviewed. I have read many website articles and blog posts on a variety of approaches to the organisation of digital assets in a reproducible research pipeline. The material I've gathered in my ongoing search and opportunistic reading on best practice in this area has been recommended by practitioners, which provides some weight of evidence. In addition, I have implemented some aspects of the many techniques, and the reproducibility of my own work has improved greatly.

Digital Assets Management for Reproducible Research

The digital assets in a reproducible research pipeline include:

  1. Publication material (documents, figures, tables, literature)
  2. Data (raw measurements, data provided, data derived)
  3. Code (pre-processing, analysis and presentation)

How to use the makeProject package

Code:

# choose your project dir
setwd("~/projects")   
library(makeProject)
makeProject("makeProjectDemo")
#returns
"Creating Directories ...
Creating Code Files ...
Complete ..."
matrix(dir("makeProjectDemo"))
#[1,] "code"       
#[2,] "data"       
#[3,] "DESCRIPTION"
#[4,] "main.R"     

  • This has set up some simple and sensible tools for a data analysis.
  • Let’s have a look at the main.R script. This is the one file that is used to run all the modules of the project, found in the R scripts in the code folder.

Code:

# Project: makeProjectDemo
# Author: Your Name
# Maintainer: Who to complain to <yourfault@somewhere.net>
 
# This is the main file for the project
# It should do very little except call the other files
 
### Set the working directory
setwd("/home/ivan_hanigan/projects/makeProjectDemo")
 
 
### Set any global variables here
####################
 
 
 
####################
 
 
### Run the code
source("code/load.R")
source("code/clean.R")
source("code/func.R")
source("code/do.R")

I think that is very self-explanatory, but it does need some demonstration. The next instalment in this three-part blog post series will describe the ProjectTemplate approach. After that I will demonstrate ways that each of the three approaches can be used.

Posted in  Data Management