Revisit Statistics 1 foundations: basic probability, random experiments, random variables, continuous vs. discrete, and binomial experiments.
- So you understand data sets, tables, and charts. You understand means, medians, ranges, and standard deviations. Excellent. But that's not all that was covered in Statistics Fundamentals 1. So, what else do you need to remember before we start covering new statistical territory? Well, Statistics Fundamentals 1 spent quite a bit of time exploring the basics of probability. Probability is essentially a ratio.
The ratio of the outcomes that make up a particular event to all the possible outcomes. Sometimes, probabilities are simple. For example, flipping heads or tails on a coin. Sometimes it can be more complex. What are the odds it will rain tomorrow? Or what are the odds the next time you see your brother that he will be wearing a white shirt? In some cases, we need to know the probability of multiple events occurring. If we are rolling dice, what are the odds that a pair of dice will add up to seven or 11? Suppose we're picking cards from a 52-card deck.
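The dice question can be answered by simply counting outcomes. Here is a minimal sketch in Python that enumerates all 36 equally likely rolls of a pair of dice and counts the ones that sum to seven or 11:

```python
from fractions import Fraction

# Enumerate every equally likely (die1, die2) pair and count the
# outcomes whose sum is 7 or 11.
favorable = sum(1 for a in range(1, 7)
                  for b in range(1, 7)
                  if a + b in (7, 11))
total = 36  # 6 faces x 6 faces

p = Fraction(favorable, total)
print(p)  # 2/9, roughly a 22% chance
```

Six pairs sum to seven and two pairs sum to 11, so the probability is 8/36, which reduces to 2/9.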
What are the odds that the second card in the deck is an ace, given that the first card off the top of the deck was already revealed to be an ace? This would be a case of conditional probability. Probability can even help us understand more complex issues, like false positives in the world of medicine. Using Bayes’ theorem, you could calculate the chances of a person testing positive for a disease even though they didn't actually have the disease.
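Both ideas from this passage can be written out as short calculations. The sketch below computes the conditional probability of the second ace, and then applies Bayes' theorem using illustrative numbers I've assumed for prevalence, sensitivity, and the false positive rate (the course doesn't specify any):

```python
# Conditional probability: the first card revealed was an ace,
# so 3 aces remain among the 51 unseen cards.
p_second_ace = 3 / 51
print(round(p_second_ace, 4))  # about 0.0588

# Bayes' theorem with assumed, illustrative numbers:
prevalence = 0.01           # P(disease): 1% of people have it
sensitivity = 0.99          # P(positive | disease)
false_positive_rate = 0.05  # P(positive | no disease)

# Total probability of testing positive:
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# P(disease | positive) by Bayes' theorem:
p_disease_given_positive = sensitivity * prevalence / p_positive
print(round(p_disease_given_positive, 3))  # about 0.167
```

With these assumed numbers, only about one in six positive tests corresponds to an actual case of the disease, which is exactly the false-positive surprise the narration describes.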
What other stats basics do we need to understand to move forward? Do you remember random experiments and random variables? Random experiments are opportunities to observe the outcome of a chance event. If we were rolling dice, the random experiment is observing and recording the outcome, which brings us to a random variable. A random variable is the numerical outcome of a random experiment. If we rolled a two and a three, the value of our random variable would be five.
This would be an example of a discrete random variable, since when we roll a die, the possible outcomes are one, two, three, four, five, or six. These are discrete numbers, so we cannot get an outcome of 2.4 or 5.99. On the other hand, if we are measuring the time it took runners to run 100 meters, now the outcomes are continuous.
We may have times of 12.45 seconds, 13.954, 10.35278. And so, as we explore probabilities, we need to be aware of the differences between measuring probabilities in systems with discreet outcomes and probabilities in systems with continuous outcomes. Which brings us around to normal distributions, probability densities, and even something called the Fuzzy Central Limit Theorem.
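A quick simulation can hint at why normal distributions show up here. This is only a sketch, not anything from the course: it averages many rolls of a discrete fair die and shows that the average of the averages lands near the die's expected value of 3.5, which is the kind of behavior the central limit theorem describes.

```python
import random

random.seed(42)  # make the simulation reproducible

# Each experiment: the average of 30 rolls of a fair six-sided die.
# The individual rolls are discrete (1-6), but the averages cluster
# around 3.5 in a roughly bell-shaped pattern.
averages = [sum(random.randint(1, 6) for _ in range(30)) / 30
            for _ in range(10_000)]

mean_of_averages = sum(averages) / len(averages)
print(round(mean_of_averages, 2))  # close to 3.5
```

Plotting a histogram of `averages` would show the bell shape directly; here we only check that the center of the distribution sits where the theorem says it should.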
And let's not forget, binomial experiments. Experiments where we only have two possible outcomes. Pass or fail. Acceptable or defective. Heads or tails. All of these things were introduced, explained, and explored in Statistics Fundamentals Part 1. Do you remember these concepts? Do you know what they mean? If so, good job. If not, don't be shy about revisiting these concepts in Statistics Fundamentals 1 to refresh your memory.
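The binomial probability formula from Part 1 is compact enough to write out directly. As a refresher sketch, the helper below (a name I've chosen for illustration) computes the chance of exactly k successes in n independent two-outcome trials:

```python
from math import comb

def binomial_pmf(n: int, k: int, p: float) -> float:
    """P(exactly k successes in n independent trials,
    each succeeding with probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: probability of exactly 3 heads in 5 fair coin flips.
print(binomial_pmf(5, 3, 0.5))  # 0.3125
```

The same function covers any pass/fail, acceptable/defective, or heads/tails experiment; only n, k, and p change.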
In one way or another, a basic understanding of these terms and concepts will be important in moving through Statistics Fundamentals 2.
Eddie Davila first provides a bridge from Part 1, reviewing introductory concepts such as data and probability, and then moves into the topics of sampling, random samples, sample sizes, sampling error and trustworthiness, the central limit theorem, the t-distribution, confidence intervals (including explaining unexpected outcomes), and hypothesis testing. This course is a must for those working in data science, business, and business analytics—or anyone else who wants to go beyond means and medians and gain a deeper understanding of how statistics work in the real world.
- Data and distributions
- Sample size considerations
- Random sampling
- Confidence intervals
- Hypothesis testing