1 (14)
The Importance of Context  3 (2)
5 (3)
Selection Among Statistical Procedures  8 (2)
10 (2)
12 (1)
12 (3)
15 (15)
16 (4)
20 (1)
21 (1)
22 (2)
24 (1)
25 (5)
Displaying Data  30 (24)
31 (3)
34 (3)
37 (3)
Alternative Methods of Plotting Data  40 (3)
43 (3)
Using Computer Programs to Display Data  46 (1)
47 (1)
48 (6)
Measures of Central Tendency  54 (9)
The Mode  55 (1)
The Median  55 (1)
The Mean  56 (1)
Advantages and Disadvantages of the Mode, the Median, and the Mean  57 (2)
Obtaining Measures of Central Tendency Using Minitab  59 (1)
60 (1)
61 (2)
63 (23)
66 (1)
Interquartile Range and Other Range Statistics  67 (1)
67 (1)
The Variance  68 (1)
The Standard Deviation  69 (1)
Computational Formulae for the Variance and the Standard Deviation  70 (2)
The Mean and the Variance as Estimators  72 (2)
Boxplots: Graphical Representations of Dispersions and Extreme Scores  74 (4)
Obtaining Measures of Dispersion Using JMP  78 (2)
80 (2)
82 (1)
83 (3)
The Normal Distribution  86 (18)
88 (4)
The Standard Normal Distribution  92 (6)
Setting Probable Limits on an Observation  98 (1)
99 (1)
100 (1)
100 (4)
Basic Concepts of Probability  104 (14)
105 (2)
Basic Terminology and Rules  107 (4)
Discrete versus Continuous Variables  111 (1)
Probability Distributions for Discrete Variables  112 (1)
Probability Distributions for Continuous Variables  113 (2)
115 (1)
116 (2)
Sampling Distributions and Hypothesis Testing  118 (23)
Two Simple Examples Involving Course Evaluations and Rude Motorists  120 (2)
122 (1)
123 (2)
125 (1)
Test Statistics and Their Sampling Distributions  126 (1)
Using the Normal Distribution to Test Hypotheses  127 (3)
Type I and Type II Errors  130 (3)
One- and Two-Tailed Tests  133 (3)
136 (1)
Back to Course Evaluations and Rude Motorists  137 (1)
138 (1)
139 (2)
Correlation  141 (31)
142 (6)
An Example: The Relationship Between Speed and Accuracy  148 (3)
151 (1)
The Pearson Product-Moment Correlation Coefficient (r)  152 (2)
Correlations with Ranked Data  154 (1)
Factors That Affect the Correlation  155 (3)
If Something Looks Too Good to Be True, Perhaps It Is  158 (1)
Testing the Significance of a Correlation Coefficient  159 (2)
Intercorrelation Matrices  161 (2)
Other Correlation Coefficients  163 (1)
Using Minitab and SPSS to Obtain Correlation Coefficients  164 (2)
166 (2)
168 (1)
169 (3)
Regression  172 (25)
The Relationship Between Stress and Health  173 (2)
175 (1)
176 (4)
The Accuracy of Prediction  180 (6)
Hypothesis Testing in Regression  186 (1)
Computer Solution Using SPSS  187 (2)
189 (2)
191 (1)
192 (5)
Multiple Regression  197 (26)
200 (5)
205 (2)
The Visual Representation of Multiple Regression  207 (1)
208 (2)
Refining the Regression Equation  210 (1)
A Second Example: Height and Weight  211 (3)
A Third Example: Psychological Symptoms in Cancer Patients  214 (4)
218 (1)
219 (4)
Hypothesis Tests Applied to Means: One Sample  223 (24)
Sampling Distribution of the Mean  225 (2)
Testing Hypotheses About Means When σ Is Known  227 (3)
Testing a Sample Mean When σ Is Unknown (One-Sample t Test)  230 (6)
Factors That Affect the Magnitude of t and the Decision About H0  236 (1)
A Second Example: The Moon Illusion  237 (1)
Confidence Limits on the Mean  238 (3)
Using a Computer Program to Run One-Sample t Tests  241 (1)
242 (1)
243 (1)
244 (3)
Hypothesis Tests Applied to Means: Two Related Samples  247 (12)
248 (1)
Student's t Applied to Difference Scores  249 (3)
A Second Example: The Moon Illusion Again  252 (1)
Advantages and Disadvantages of Using Related Samples  253 (1)
Using Computer Software for t Tests on Related Samples  254 (1)
255 (1)
256 (3)
Hypothesis Tests Applied to Means: Two Independent Samples  259 (20)
Distribution of Differences Between Means  260 (7)
Heterogeneity of Variance  267 (1)
Nonnormality of Distributions  268 (1)
A Second Example with Two Independent Samples  269 (1)
Confidence Limits on μ1 − μ2  270 (1)
Use of Computer Programs for Analysis of Two Independent Sample Means  271 (3)
274 (2)
276 (1)
276 (3)
Power  279 (20)
281 (1)
Factors That Affect the Power of a Test  282 (2)
284 (2)
Power Calculations for the One-Sample t Test  286 (3)
Power Calculations for Differences Between Two Independent Means  289 (3)
Power Calculations for the t Test for Related Samples  292 (2)
Power Considerations in Terms of Sample Size  294 (1)
You Don't Have to Do It by Hand  295 (1)
296 (1)
296 (3)
One-Way Analysis of Variance  299 (36)
300 (3)
The Logic of the Analysis of Variance  303 (5)
Calculations for the Analysis of Variance  308 (7)
315 (1)
Multiple Comparison Procedures  316 (8)
Violations of Assumptions  324 (1)
325 (1)
Using JMP for a One-Way Analysis of Variance  326 (1)
326 (4)
330 (1)
331 (4)
Factorial Analysis of Variance  335 (22)
336 (2)
The Extension of the Eysenck Study  338 (5)
343 (2)
345 (3)
348 (1)
348 (1)
A Final Example: Maternal Adaptation Revisited  349 (2)
Using SPSS for Factorial Analysis of Variance  351 (1)
352 (1)
353 (4)
Repeated-Measures Analysis of Variance  357 (14)
An Example: The Treatment of Migraine Headaches  358 (2)
360 (2)
Assumptions Involved in Repeated-Measures Designs  362 (1)
Advantages and Disadvantages of Repeated-Measures Designs  362 (1)
Using BMDP to Analyze Data in a Repeated-Measures Design  363 (3)
366 (2)
368 (1)
368 (3)
Chi-Square  371 (22)
One Classification Variable: The Chi-Square Goodness-of-Fit Test  373 (5)
Two Classification Variables: Contingency Table Analysis  378 (2)
Correction for Continuity  380 (1)
Chi-Square for Larger Contingency Tables  381 (1)
The Problem of Small Expected Frequencies  382 (1)
The Use of Chi-Square as a Test on Proportions  383 (2)
Nonindependent Observations  385 (1)
Minitab Analysis of Contingency Tables  386 (1)
386 (2)
388 (1)
389 (4)
Nonparametric and Distribution-Free Statistical Tests  393 (22)
395 (6)
Wilcoxon's Matched-Pairs Signed-Ranks Test  401 (4)
Kruskal–Wallis One-Way Analysis of Variance  405 (1)
Friedman's Rank Test for k Correlated Samples  406 (3)
409 (1)
409 (6)
Choosing the Appropriate Analysis  415 (8)
417 (6)
Appendix A Arithmetic Review  423 (6)
Appendix B Symbols and Notation  429 (3)
Appendix C Basic Statistical Formulae  432 (4)
Appendix D Dataset  436 (2)
Appendix E Statistical Tables  438 (19)
Glossary  457 (8)
References  465 (6)
Answers to Selected Exercises  471 (16)
Index  487