Keidra Navaroli

# Notes from Week 2 Readings

**Citation**

Vogt, W. Paul, et al. *Selecting the Right Analyses for Your Data: Quantitative, Qualitative, and Mixed Methods.* New York: The Guilford Press, 2014.

*[NOTE: Although the readings for this week included one chapter of When to Use What Research Design, I will address that resource in my upcoming presentation on experimental methods. These notes concentrate on the chapters covered by Selecting the Right Analyses for Your Data.]*

**What is the author's argument?**

In Chapters 3, 6, and 7 of *Selecting the Right Analyses for Your Data*, the authors discuss methods for coding experimental data and introduce descriptive statistics and statistical inference as fundamental tools for analyzing quantitative data. They do not privilege one method over another, but instead use the chapters to explore the options and their value for specific project types.

**Key Points**

· Experiments examine causal relationships by creating, manipulating, and controlling the variables that they study.

· Validity is essential at all stages of research and shows that the study is set up to reach justified conclusions. Forms include internal validity, external validity, construct validity, and content validity.

· All research should anticipate missing data, screen for outliers, and properly account for variables.

· Graphic representation (descriptive statistics) and computer-aided visualization of data are important for teaching, reporting, and analysis, especially for audiences outside the field of statistics.

· If your data do not come from a normally distributed population, consider distribution-free statistics (which make no assumptions about the data's distribution), robust or “strong” statistics, removing outliers, or transforming your data (for example, converting raw scores to percentages).
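A minimal Python sketch (my own illustration with made-up numbers, not from the book) of why robust statistics matter: a single extreme outlier drags the mean far from the bulk of the data, while the median, a robust measure of central tendency, is unaffected.

```python
import statistics

# Hypothetical sample with one extreme outlier (values invented for illustration).
scores = [12, 14, 15, 15, 16, 17, 18, 190]

mean = statistics.mean(scores)      # pulled upward by the outlier (190)
median = statistics.median(scores)  # robust: unaffected by the single extreme value

print(f"mean = {mean:.1f}, median = {median}")  # prints: mean = 37.1, median = 15.5
```

Removing the outlier or transforming the data, as the authors suggest, would bring the mean back in line with the median.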

**Some Key Terms/Concepts**

**Coding** – how one prepares evidence for analysis; can vary by design.

**Reliability** – consistency or agreement among measures and a precondition of validity.

**Validity** – how accurately a study measures what it intends to measure.

**Descriptive Statistics** – methods used to portray the cases in a collection of quantitative data. There are six main types: central tendency, dispersion, position, association, effect size (ES), and likely error. Descriptive statistics can aid in understanding information about entire populations or small organizations.

**Mode** – most frequent score in a series of events; **Median** – middle score in a ranked series; **Mean** – total of all the scores divided by the number of scores.
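The three measures can be computed directly with Python's standard-library `statistics` module; the scores below are made-up numbers of my own, not from the book:

```python
import statistics

# Hypothetical test scores (invented for illustration).
scores = [70, 80, 80, 90, 100]

print(statistics.mode(scores))    # 80: the most frequent score
print(statistics.median(scores))  # 80: the middle score in the ranked series
print(statistics.mean(scores))    # 84: the total divided by the number of scores
```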

**Statistical Inference** – using known quantities to draw conclusions about the probability of unknown quantities. The practice has three philosophies: classical or frequentist methods, based on an understanding of probability as long-run relative frequencies; Bayesian methods, based on a subjectivist view of probability that incorporates prior assessments; and resampling methods (bootstrapping, jackknife samples, and permutation tests), which, unlike the other two, limit assumptions by using computer simulation to generate empirical rather than theoretical sampling distributions.
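A minimal Python sketch of the resampling idea (my own illustration with made-up data, not the book's example): the bootstrap resamples the observed data with replacement many times to build an empirical sampling distribution of a statistic, rather than deriving one from theory.

```python
import random
import statistics

random.seed(0)  # for reproducibility

# Hypothetical observed sample (invented for illustration).
sample = [4.1, 5.3, 2.8, 6.0, 4.7, 5.5, 3.9, 4.4, 5.1, 4.8]

# Draw many bootstrap samples: resample WITH replacement from the observed
# data, each the same size as the original, and record the statistic of
# interest (here, the mean) for each resample.
boot_means = [
    statistics.mean(random.choices(sample, k=len(sample)))
    for _ in range(10_000)
]

# The spread of boot_means is an empirical sampling distribution of the mean,
# generated by simulation rather than by a theoretical formula.
boot_means.sort()
lo, hi = boot_means[249], boot_means[9749]  # approximate 95% percentile interval
print(f"bootstrap 95% interval for the mean: ({lo:.2f}, {hi:.2f})")
```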

**Confidence Interval** – the range of plausible values for the population parameter; useful for assessing the replicability of the study’s results.
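A sketch of a classical confidence interval calculation in Python, using made-up data of my own; I use the normal critical value 1.96 as a simplification (a t critical value would be more exact for a sample this small):

```python
import math
import statistics

# Hypothetical sample (invented for illustration).
data = [23, 19, 25, 22, 27, 21, 24, 26, 20, 23]

n = len(data)
mean = statistics.mean(data)
sem = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean

# Approximate 95% confidence interval: mean plus/minus 1.96 standard errors.
lo, hi = mean - 1.96 * sem, mean + 1.96 * sem
print(f"mean = {mean}, 95% CI is roughly ({lo:.2f}, {hi:.2f})")
```

The interval gives the range of plausible values for the population mean; a narrower interval suggests the result would replicate more closely.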

**Key Quotations**

“Because experiments collect data in many forms, most analysis techniques can be and have been used to analyze and interpret experimental data.” (p.100)

“Providing sufficient information for interpretation and replication is an ethical obligation of social and behavioral scientists.” (p. 208)

“Descriptive statistics are what you use to get to know your data, their features, and their oddities.” (p. 211)

“As a general rule, it is best to start with the most basic type of analysis and increase the level of complication only as needed to answer your research question.” (p. 231)

“Experiments rarely use random sampling. Instead experiments use, and are virtually defined by, random assignment.” (p. 244)

**Strengths and Weaknesses**

*Selecting the Right Analyses for Your Data* begins each chapter with a helpful outline of the chapter’s objectives and closes each with a summary. I found this format essential, though it benefited some chapters more than others. For example, Chapter 6’s outline succinctly introduces and organizes its topics, while Chapters 3 and 7 were less effective because the body of the text tended to splinter into many different concepts, definitions, and applications. The authors acknowledge this complexity, but it might have been useful to divide these chapters into smaller, more comprehensible segments that allow for extended discussion and examples.

**How does this relate to your research?**

By summarizing the stages of a research project, from design, sampling, and ethics to coding/measurement, analysis, and interpretation, the authors provide an understanding of the experimental research process. I do not find that the type of research examined so far directly relates to my intended areas of study (I will most likely utilize archival or observational research rather than experiments). Nevertheless, the book provides an in-depth look at, and a vocabulary for, the types of studies I may encounter.

**What connections can you make to other authors?**

As the principal author of both *Selecting the Right Analyses for Your Data* and *When to Use What Research Design*, Vogt addresses similar topics in both publications, especially with respect to experimental design. Personally, I found *When to Use What Research Design* more appropriate for introductory audiences. *Selecting the Right Analyses for Your Data* wades more heavily into a myriad of concepts and definitions that seem better suited to advanced statisticians and researchers.