Welcome to my Open Notebook

This is an Open Notebook with Selected Content - Delayed. All content is licensed under CC-BY. Find out more here.


Judging the evidence part 2

I previously reported on a lecture slide deck called ‘Judging the Evidence’ by Adrian Sleigh, from the course PUBH7001 Introduction to Epidemiology (April 30, 2001): http://ivanhanigan.github.com/2015/11/judging-the-evidence-using-a-literature-review-database

I have now also extracted several slides into a template outline for reviewing epidemiological and other research; a sketch of how the outline could be stored as structured review records follows the outline below.

Adrian Sleigh’s Protocol

Object of Study, Hypotheses or Research Questions
  1. Purpose of Study: Objectives of study; why was it done?
  2. Reference Population:
    • To whom do authors generalize results?
    • To whom should the findings be generalized?
Sampling

From the Reference Pop (target population) -> Source Pop -> Eligible population

“The source population may be defined directly, as a matter of defining its membership criteria; or the definition may be indirect, as the catchment population of a defined way of identifying cases of the illness. The catchment population is, at any given time, the totality of those in the ‘were-would’ state of: were the illness now to occur, it would be ‘caught’ by that case identification scheme.” (Source: Miettinen OS, 2007, via http://www.teachepi.org/documents/courses/fundamentals/Pai_Lecture6_Selection%20bias.pdf)

Sample Pop:
  1. Refusals, Dropouts
  2. Participants -> Study Pop
Design of study
  1. Study setting: Where and when was the study done? What were the circumstances? Ethics?
  2. Type of study: Experimental vs natural, descriptive vs analytical (trial, cohort, case-control, prevalence, ecological, case-report, etc). If case-control or cohort, was the timing of data collection retrospective or prospective?
  3. Subjects: Who (number, age, sex, etc.)? How were they selected?
  4. Comparison groups: What control group or standard of comparison? How appropriate?
  5. Study size: Was the sample size adequate to give you confidence in a finding of “no association”?
  6. Bias and Confounding
    • a) Selection bias: Were the groups comparable for subjects who entered and stayed in the study? Was selection influenced by the exposure (case-control) or the effect (cohort) under study? Drop-outs?
    • b) Confounding: Were potential confounding variables controlled in the design of the study, e.g. by matching or subject restriction?
Observations
  1. Procedure: How were the variables in the study defined and measured, i.e. how were the data collected?
  2. Definition of terms: Are the definitions of diagnostic criteria, measurements and outcomes unambiguous? Could they be reproduced?
  3. Bias and Confounding
    • a) Observation bias: Were the study groups comparable for measurements or mode of observation? Was there misclassification in determining exposure or disease categories? Was it differential between groups, or ‘random’?
    • b) Confounding: Was information recorded on variables that could confound the association under study (to permit adjustment in the analysis)?
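To use the outline with a literature review database (as in the previous post), one option is to record each reviewed paper as a structured record with one field per item in the template. The sketch below is my own illustration, not part of Prof Sleigh's slides: the field names, the `StudyReview` class and the CSV output file are assumptions chosen for the example.

```python
# A minimal sketch (illustrative only) of capturing Sleigh's review template
# as one structured record per reviewed paper, appended to a CSV "database".
import csv
import os
from dataclasses import dataclass, asdict


@dataclass
class StudyReview:
    citation: str
    # Object of study
    purpose: str = ""
    reference_population: str = ""
    # Sampling
    source_population: str = ""
    eligible_population: str = ""
    study_population: str = ""
    # Design of study
    setting: str = ""
    study_type: str = ""          # e.g. cohort, case-control, prevalence, ecological
    subjects: str = ""
    comparison_groups: str = ""
    study_size_adequate: str = ""
    selection_bias_notes: str = ""
    design_confounding_notes: str = ""
    # Observations
    measurement_procedure: str = ""
    definitions_reproducible: str = ""
    observation_bias_notes: str = ""
    observation_confounding_notes: str = ""


def append_review(path: str, review: StudyReview) -> None:
    """Append one completed review template to a CSV file, writing a header for a new file."""
    row = asdict(review)
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if new_file:
            writer.writeheader()
        writer.writerow(row)


# Example usage with placeholder values
review = StudyReview(
    citation="Author A et al. (2001). Example study.",
    purpose="Estimate the association between exposure X and outcome Y.",
    study_type="case-control (retrospective data collection)",
)
append_review("judging_the_evidence_reviews.csv", review)
```

A flat one-row-per-paper table like this keeps the review questions and the answers together, and the CSV can later be loaded into whatever review database is in use.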

THANKS Prof Sleigh!

Posted in  bibliometrics and literature reviewing

