Lecture Outline
 
Biometry
Biology 7530
Instructor: C. Ray Chandler


This course outline corresponds to the outline you will see during lecture. The course topics and their organizational relationships are shown in black. Readings from the textbook (Biometry by Sokal and Rohlf) are shown in red. A brief summary of each topic is given in blue.


I. DATA IN BIOLOGY (pp. 8-32)
A. What Are Data? - data are numerical facts, the cumulative measurements or counts of individual biological entities
B. Variables - variables are characteristics that vary from one biological entity to another and that can be measured or quantified
1. measurement variables
a. ratio scale
b. interval scale
2. ordinal variables
3. attributes
C. Accuracy vs. Precision - variables can be measured accurately (close to the "true" value) and/or precisely (with a high degree of repeatability)
D. Frequency Distributions - a cumulative set of measurements (a data set) can be visualized as a frequency distribution
1. bar graph (Fig. 2.2)
2. histogram (Box 2.1)
3. frequency polygon (Fig. 2.3)
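A frequency distribution like those in Fig. 2.2 and Box 2.1 can be built by tallying each distinct value. The sketch below uses Python with NumPy (not part of the course materials); the clutch-size data are hypothetical.

```python
import numpy as np

# Hypothetical sample: clutch sizes counted at 12 bird nests
clutch_sizes = [3, 4, 4, 5, 4, 3, 5, 4, 4, 3, 5, 4]

# Tally each distinct value into a frequency table,
# then print a crude text "bar graph" of the distribution
values, counts = np.unique(clutch_sizes, return_counts=True)
for v, c in zip(values, counts):
    print(f"clutch size {v}: {'*' * c}  ({c})")
```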
 
II. POPULATIONS AND SAMPLES (pp. 39-60)
A. Statistical Inference - statistical inference is the process of inferring the characteristics of a population by analyzing a small sample from that population
1. population
2. sample
B. Descriptive Statistics - descriptive statistics provide a numerical summary (or description) of data from a population or sample
1. statistics of location
a. mean (Box 4.2, 4.3)
b. median (Box 4.1)
c. mode
2. statistics of dispersion
a. range
b. interquartile range
c. variance (Box 4.2, 4.3)
d. standard deviation (Box 4.2, 4.3)
e. coefficient of variation (Box 4.3)
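The statistics of location and dispersion listed above can all be computed in a few lines. This is a sketch in Python with NumPy (not the Box 4.2/4.3 worksheets themselves); the wing-length data are hypothetical.

```python
import numpy as np

# Hypothetical sample: wing lengths (mm) of 8 birds
wing = np.array([59.1, 60.3, 58.7, 61.2, 60.0, 59.5, 62.1, 60.8])

mean = wing.mean()                 # statistic of location
median = np.median(wing)           # statistic of location
rng = wing.max() - wing.min()      # range: crude dispersion measure
var = wing.var(ddof=1)             # sample variance: divide by n - 1
sd = np.sqrt(var)                  # standard deviation
cv = 100 * sd / mean               # coefficient of variation, as a percent
```

Note `ddof=1`: the sample variance divides by n - 1, not n, so the estimate of the population variance is unbiased.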
 
III. FREQUENCY DISTRIBUTIONS (pp. 61-115)
A. Importance in Biometry - in biometry a number of theoretical frequency distributions are used because they tell us what data to expect under certain specified conditions
B. Binomial Distribution - the binomial defines the distribution of events that have two outcomes (e.g., dead/alive, infected/uninfected)
1. definition
2. application (Table 5.1, Box 5.1)
C. Poisson Distribution - the Poisson is similar to the binomial, but is used when one outcome is rare and the number of events is large (often used to test whether an outcome is rare and random)
1. definition
2. application (Box 5.2)
D. Normal Distribution - many biological variables, particularly those affected by many factors that act additively, fit the normal distribution; the normal has a number of well-described characteristics
1. definition (Fig. 6.2)
2. moments (Box 6.2)
3. standard normal deviates (Fig. 6.3)
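Each of the three theoretical distributions above answers a "what data should we expect?" question. The sketch below evaluates one such question for each, using SciPy (an assumption of this example, not part of the course); the scenarios are hypothetical.

```python
from scipy import stats

# Binomial: probability of exactly 3 infected individuals in a
# sample of 10, if each is infected independently with p = 0.2
p_binom = stats.binom.pmf(3, n=10, p=0.2)

# Poisson: probability of observing 0 parasites on a host when
# the mean count per host is 1.5 (rare, random events)
p_pois = stats.poisson.pmf(0, mu=1.5)

# Normal: proportion of a normal population lying within one
# standard deviation of the mean (cf. standard normal deviates)
p_norm = stats.norm.cdf(1) - stats.norm.cdf(-1)
```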
 
IV. INFERENCE AND HYPOTHESIS TESTING (pp. 127-178; 223-227)
A. Two Important Concepts - the concepts of the standard error and the t-distribution underlie many common techniques in biostatistics
1. standard error (Table 7.1, Box 7.1)
2. t-distribution (Fig. 7.7, 7.8)
B. Setting Confidence Limits - confidence limits give a measure of precision or reliability when estimating parameters
1. mean (Box 7.2)
2. binomial proportion (Box 7.4)
C. A Classic Hypothesis Test (Fig. 7.14, 7.16, Box 7.5, 9.6) - the t-test, which determines whether two sample means are drawn from the same population, is a good example of a basic hypothesis test
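The three pieces of this section - a standard error, t-based confidence limits, and the two-sample t-test - fit together as sketched below. Python with SciPy is assumed (the course itself works from the boxes in Sokal and Rohlf); the body-mass samples are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical body-mass samples (g) from two populations
a = np.array([10.2, 11.1, 9.8, 10.5, 10.9, 10.0])
b = np.array([11.4, 12.0, 11.1, 11.8, 12.3, 11.6])

# Standard error of the mean of sample a
se = a.std(ddof=1) / np.sqrt(len(a))

# 95% confidence limits for the mean of a, using the t-distribution
t_crit = stats.t.ppf(0.975, df=len(a) - 1)
ci = (a.mean() - t_crit * se, a.mean() + t_crit * se)

# Two-sample t-test: were the two samples drawn from
# populations with the same mean?
t_stat, p_value = stats.ttest_ind(a, b)
```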
 
V. ANALYSIS OF FREQUENCIES (pp. 685-737)
A. Introduction - the frequency of occurrence of attributes or events is a common form of data in biology
B. Goodness of Fit - goodness of fit tests evaluate whether observed frequencies match those expected based on some a priori frequency distribution
1. G-test (Table 17.1, 17.2, Box 17.1, 17.2)
2. Chi-square test (Table 17.3, Box 17.1, 17.2)
C. Contingency Tables - contingency tables evaluate whether two or more attributes are independent of one another
1. Two-way tables (Box 17.6, 17.7, 17.8)
2. Three-way tables (Box 17.10)
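Both kinds of frequency analysis - goodness of fit and a two-way contingency table - can be run as sketched below. SciPy is assumed (this is the chi-square approach; the G-test of Box 17.1 uses log-likelihood ratios instead); the counts are hypothetical.

```python
import numpy as np
from scipy import stats

# Goodness of fit: observed phenotype counts vs a 3:1 Mendelian
# ratio expected a priori among 100 offspring
observed = np.array([84, 16])
expected = np.array([75, 25])
chi2, p_gof = stats.chisquare(observed, f_exp=expected)

# Two-way contingency table: infection status (rows) by habitat
# (columns) - are the two attributes independent?
table = np.array([[30, 10],
                  [20, 40]])
chi2_ind, p_ind, dof, exp = stats.chi2_contingency(table)
```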
 
VI. ANALYSIS OF VARIANCE (pp. 179-223; 229-260; 392-422)
A. Introduction - the analysis of variance is a fundamental concept in biometry
B. The F-Ratio - F, the ratio of two variances, is an enormously useful statistic
C. Basic Structure of an ANOVA - in ANOVA, F is the ratio of an appropriate among-group variance to within-group variance
1. within-group variance (Table 8.1, 8.3, Box 9.1, 9.4)
2. among-group variance (Table 8.1, 8.3, Box 9.1, 9.4)
3. ANOVA table (Table 8.5)
D. ANOVA Models - there are two models of ANOVA that affect details of the analysis
1. model I (Box 9.8, 9.10, 9.11, 9.12, 9.13)
2. model II (Box 9.2)
3. examples
E. Assumptions - ANOVA assumes normality and homogeneity of variance; transformations can help achieve these
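The within-group vs among-group structure in C above can be made concrete by building the F-ratio by hand and checking it against a library routine. Python with SciPy is assumed; the growth data under three treatments are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical growth measurements under three treatments
t1 = [12.1, 13.4, 12.8, 13.0]
t2 = [14.2, 15.1, 14.8, 14.5]
t3 = [12.5, 12.9, 13.2, 12.7]

# SciPy computes F = among-group MS / within-group MS directly
f_stat, p_value = stats.f_oneway(t1, t2, t3)

# The same F, rebuilt from the pieces of the ANOVA table
groups = [np.array(g) for g in (t1, t2, t3)]
grand = np.concatenate(groups).mean()
ss_among = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_among = ss_among / (3 - 1)      # df among = k - 1 groups
ms_within = ss_within / (12 - 3)   # df within = N - k observations
f_manual = ms_among / ms_within
```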
 
VII. MORE COMPLEX ANOVA (pp. 272-308; 321-356)
A. Two-way ANOVA - ANOVA can test for the effects of and interaction between two treatments or independent variables
1. introduction
2. calculation (Table 11.1, 11.2, Box 11.1)
3. related tests (Box 11.3)
B. Repeated Measures ANOVA - ANOVA can also handle the special case in which the same experimental unit is measured repeatedly
1. principle of repeated measures
2. calculation (Box 11.4, 11.5)
C. Nested ANOVA - finally, the levels of one independent variable may occur only with particular levels of another independent variable (nested effects)
1. introduction
2. calculation (Box 10.1, 10.2, 10.4, 10.6)
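For a balanced two-way design, the sums of squares for the two main effects, their interaction, and error can be computed directly, as in the sketch below. Python with NumPy is assumed; the 2 x 2 light-by-nutrient design and its data are hypothetical.

```python
import numpy as np

# Hypothetical balanced 2x2 design: growth under two light levels
# (first axis) crossed with two nutrient levels (second axis),
# n = 3 replicates per cell (third axis)
data = np.array([[[10.0, 10.4, 9.8],    # low light, low nutrient
                  [12.1, 12.5, 11.9]],  # low light, high nutrient
                 [[10.9, 11.3, 11.1],   # high light, low nutrient
                  [15.0, 15.4, 14.8]]]) # high light, high nutrient

a, b, n = data.shape                    # levels of A, levels of B, replicates
grand = data.mean()
cell_means = data.mean(axis=2)

# Sums of squares for main effects, interaction, and error
ss_a = b * n * ((data.mean(axis=(1, 2)) - grand) ** 2).sum()
ss_b = a * n * ((data.mean(axis=(0, 2)) - grand) ** 2).sum()
ss_cells = n * ((cell_means - grand) ** 2).sum()
ss_ab = ss_cells - ss_a - ss_b          # interaction
ss_error = ((data - cell_means[:, :, None]) ** 2).sum()

# F for the interaction term: MS interaction / MS error
ms_ab = ss_ab / ((a - 1) * (b - 1))
ms_error = ss_error / (a * b * (n - 1))
f_ab = ms_ab / ms_error
```

A large interaction F here means the nutrient effect differs between light levels, which is exactly what the two-way layout is designed to detect.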
 
VIII. REGRESSION AND CORRELATION (pp. 451-521; 541-549, 555-593, 649-654)
A. What's the Difference? (Table 15.1) - regression measures functional or causal relationships between two variables; correlation measures association
B. Regression - regression measures the linear relationship between a dependent variable and an independent variable
1. introduction (Fig. 14.1, 14.2, 14.4)
a. terminology
b. uses of regression
2. two models
3. basic calculations (Table 14.1, Fig. 14.5, 14.6, 14.7, 14.8, Box 14.1)
4. significance testing (Box 14.3)
C. Analysis of Covariance - ANCOVA measures the effect of an independent variable on a dependent variable while holding the effects of a second independent variable (covariate) constant
D. Correlation - correlation measures the association between two variables, neither of which is designated as dependent
1. Pearson's (Box 15.2, 15.4)
2. partial (Fig. 16.12)
3. Model II lines (Box 15.6, Fig. 15.7)
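The regression-vs-correlation distinction in A above shows up directly in code: regression fits a line predicting a dependent variable, while correlation only measures association. A sketch with SciPy; the length-mass data are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical data: body length (cm) as the independent variable,
# body mass (g) as the dependent variable
length = np.array([4.0, 4.5, 5.1, 5.6, 6.0, 6.4, 7.1])
mass = np.array([7.9, 9.1, 10.4, 11.0, 12.2, 12.8, 14.3])

# Model I regression: slope, intercept, and a significance test
res = stats.linregress(length, mass)
print(f"mass = {res.intercept:.2f} + {res.slope:.2f} * length, "
      f"p = {res.pvalue:.4g}")

# Pearson correlation: association, with no dependent variable
r, p_corr = stats.pearsonr(length, mass)
```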
 
IX. NONPARAMETRIC TECHNIQUES (pp. 423-447, 539-541, 593-601)
A. Introduction - nonparametric tests do not depend on data fitting a specific theoretical distribution
1. what are nonparametric tests?
2. costs and benefits
B. Among-group Comparisons - there are a variety of nonparametric alternatives for comparing the "location" of two or more groups
1. Mann-Whitney (Box 13.7)
2. Kruskal-Wallis (Box 13.6)
3. Kolmogorov-Smirnov (Box 13.9)
4. Friedman's test (Box 13.10)
5. two-way alternative (Box 13.12)
6. Wilcoxon signed-ranks (Box 13.11)
C. Correlation - nonparametric correlations are based on ranked data
1. Spearman's
2. Kendall's (Box 15.7)
D. Regression (Box 14.11) - there are limited options for nonparametric regression
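Because the tests above work on ranks rather than raw values, they tolerate skew and outliers. The sketch below runs three of them with SciPy; the count data, chosen to include outliers, are hypothetical.

```python
from scipy import stats

# Hypothetical skewed counts with outliers: ranks are safer
# than raw values here
site1 = [3, 1, 4, 2, 6, 5, 30]
site2 = [8, 12, 9, 15, 11, 40, 10]

# Mann-Whitney U: do the two sites differ in location?
u_stat, p_mw = stats.mannwhitneyu(site1, site2, alternative="two-sided")

# Kruskal-Wallis generalizes this to three or more groups
h_stat, p_kw = stats.kruskal(site1, site2, [20, 22, 18, 25, 19, 21, 23])

# Spearman's rank correlation between two paired variables
rho, p_sp = stats.spearmanr([1, 2, 3, 4, 5, 6, 7],
                            [2, 1, 4, 3, 6, 5, 7])
```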
 
X. CLOSING TOPICS (pp. 167-169, 260-265, 609-634, 678-681)
A. Power Analysis - power, the likelihood of rejecting a false null hypothesis, is a critical issue in biometry
B. Bayesian Statistics - Bayesian methods provide an important alternative to traditional hypothesis testing
C. Introduction to Multivariate Statistics - this class will lay the groundwork for learning important multivariate techniques
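Power can be sketched numerically: for a given true effect size and sample size, what fraction of experiments would reject the null? The function below is a rough normal-approximation sketch for a two-sample test (Python with SciPy assumed; the function name and effect-size convention are this example's, not the textbook's).

```python
from scipy import stats

def power_two_sample(effect, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample test.

    effect: true difference in means, in standard-deviation units;
    n: sample size per group; alpha: type I error rate.
    Uses a normal approximation to the t-distribution.
    """
    # Expected test statistic (noncentrality) for this effect and n
    nc = effect * (n / 2) ** 0.5
    z_crit = stats.norm.ppf(1 - alpha / 2)
    # P(reject) = P(Z > z_crit - nc) + P(Z < -z_crit - nc)
    return stats.norm.sf(z_crit - nc) + stats.norm.cdf(-z_crit - nc)

# Larger samples raise the chance of detecting the same true effect
low_n = power_two_sample(0.5, 20)
high_n = power_two_sample(0.5, 100)
```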

