September 26, 2022
PMAP 8521: Program evaluation
Andrew Young School of Policy Studies
Exam 1
Exam 1
FAQs
Confidence intervals, credible intervals,
and a crash course on Bayesian statistics
Tell us about Exam 1!
Are p-values really misinterpreted
in published research?
Power calculations and sample size
Won't we always be able to find
a significant effect if the
sample size is big enough?
Yes!
Math with computers
andhs.co/live
Are the results from
p-hacking actually a
threat to validity?
Do people actually post
their preregistrations?
Do you have any tips for identifying the
threats to validity in articles since
they're often not super clear?
Especially things like spillovers,
Hawthorne effects, and John Henry effects?
Using a control group of some kind
seems to be the common fix
for all of these issues.
What happens if you can't do that?
Is the study just a lost cause?
That's the point of DAGs and quasi-experiments: they let you simulate having treatment and control groups
In the absence of p-values,
I'm confused about how
we report… significance?
Nobody really cares about p-values
Decision makers want to know
a number or a range of numbers—
some sort of effect and uncertainty
Nobody cares how likely a number would be
in an imaginary null world!
Report point estimates and some sort of range
"It would be preferable if reporting standards emphasized confidence intervals or standard errors, and, even better, Bayesian posterior intervals."
Point estimate
The single number you calculate
(mean, coefficient, etc.)
Uncertainty
A range of possible values
Statistics: use a sample to make inferences about a population
Greek
Letters like β₁ are the truth
Letters with extra markings like β̂₁ are our estimate of the truth based on our sample
Latin
Letters like X are actual data from our sample
Letters with extra markings like X̄ are calculations from our sample
Data → Calculation → Estimate → Truth
Data | X |
Calculation | X̄ = ∑X / N |
Estimate | μ̂ |
Truth | μ |
X̄ = μ̂
X → X̄ → μ̂ → 🤞 hopefully 🤞 → μ
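To make that chain concrete, here's a minimal sketch in Python (all numbers invented): we create a fake population so we secretly know μ, draw one sample X, calculate X̄, and use it as our estimate μ̂.

```python
import numpy as np

rng = np.random.default_rng(1234)

# Invent a population so we secretly know the truth (we never do in real life)
mu = 50_000                                             # the true population mean
population = rng.normal(loc=mu, scale=10_000, size=100_000)

X = rng.choice(population, size=200, replace=False)     # Data: our one sample
X_bar = X.sum() / len(X)                                # Calculation: X̄ = ∑X / N
mu_hat = X_bar                                          # Estimate: μ̂, hopefully close to μ

print(mu_hat)
```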
Truth = Greek letter
A single unknown number that is true for the entire population
Proportion of left-handed students at GSU
Median rent of apartments in NYC
Proportion of red M&Ms produced in a factory
ATE of your program
We take a sample and make a guess
This single value is a point estimate
(This is the Greek letter with a hat)
You have an estimate,
but how different might that
estimate be if you take another sample?
You take a random sample of
50 GSU students and 5 are left-handed.
If you take a different random sample of
50 GSU students, how many would you
expect to be left-handed?
3 are left-handed. Is that surprising?
40 are left-handed. Is that surprising?
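One way to answer "is that surprising?" is to simulate the sampling over and over. A minimal sketch in Python, assuming (purely for illustration) that the true proportion of left-handed GSU students is 10%:

```python
import numpy as np

rng = np.random.default_rng(1234)
true_prop = 0.10                                        # assumed just for illustration
counts = rng.binomial(n=50, p=true_prop, size=10_000)   # 10,000 samples of 50 students

print((counts == 3).mean())    # samples with exactly 3 lefties: fairly common
print((counts >= 40).mean())   # samples with 40 or more: essentially never happens
```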
How confident are we that the sample
picked up the population parameter?
Confidence interval is a net
We can be X% confident that our net is
picking up that population parameter
If we took 100 samples, about 95 of them would have the
true population parameter in their 95% confidence intervals
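You can check the net metaphor with a quick simulation (all numbers made up): take 100 samples from a population whose mean you know, build a rough 95% confidence interval for each one, and count how many nets catch the truth.

```python
import numpy as np

rng = np.random.default_rng(1234)
mu = 100                                  # the single true population parameter (made up)

caught = 0
for _ in range(100):                      # 100 different random samples
    sample = rng.normal(loc=mu, scale=15, size=50)
    se = sample.std(ddof=1) / np.sqrt(len(sample))
    lower, upper = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    caught += lower <= mu <= upper        # did this net catch the true value?

print(caught)                             # usually somewhere around 95
```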
A city manager wants to know the true average property value of single-family homes in her city. She takes a random sample of 200 houses and builds a 95% confidence interval. The interval is ($180,000, $300,000).
We're 95% confident that the
interval ($180,000, $300,000)
captured the true mean value
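For the mechanics, here's a sketch of how an interval like that gets built (with simulated home values standing in for the city manager's actual sample): point estimate plus or minus about two standard errors.

```python
import numpy as np

rng = np.random.default_rng(1234)
values = rng.lognormal(mean=12.4, sigma=0.4, size=200)   # 200 simulated home values

point_estimate = values.mean()
se = values.std(ddof=1) / np.sqrt(len(values))           # standard error of the mean

print(point_estimate - 1.96 * se, point_estimate + 1.96 * se)   # a 95% confidence interval
```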
It is way too tempting to say
“We’re 95% sure that the
population parameter is X”
People do this all the time! People with PhDs!
YOU will try to do this too
OpenIntro Stats p. 186
First, notice that the statements are always about the population parameter, which considers all American adults for the energy polls or all New York adults for the quarantine poll.
We also avoided another common mistake: incorrect language might try to describe the confidence interval as capturing the population parameter with a certain probability. Making a probability interpretation is a common error: while it might be useful to think of it as a probability, the confidence level only quantifies how plausible it is that the parameter is in the given interval.
Another important consideration of confidence intervals is that they are only about the population parameter. A confidence interval says nothing about individual observations or point estimates. Confidence intervals only provide a plausible range for population parameters.
If you took lots of samples,
95% of their confidence intervals
would have the single true value in them
This kind of statistics is called "frequentism"
The population parameter θ is fixed and singular
while the data can vary
P(Data∣θ)
You can do an experiment over and over again;
take more and more samples and polls
"We are 95% confident that this net
captures the true population parameter"
(not: "There's a 95% chance that the
true value falls in this range")
Weekends and
restaurant scores
P(θ∣Data)
P(H | E) = P(H) × P(E | H) / P(E)
P(Hypothesis | Evidence) =
P(Hypothesis) × P(Evidence | Hypothesis) / P(Evidence)
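A quick numeric check of the formula, with completely made-up probabilities: say the hypothesis has a 30% prior probability, the evidence would appear 80% of the time if the hypothesis were true, and the evidence appears 50% of the time overall.

```python
# Worked example of Bayes' rule with made-up probabilities
p_h = 0.30           # P(Hypothesis): prior belief
p_e_given_h = 0.80   # P(Evidence | Hypothesis)
p_e = 0.50           # P(Evidence): how often the evidence shows up overall

p_h_given_e = p_h * p_e_given_h / p_e   # P(Hypothesis | Evidence)
print(p_h_given_e)                      # 0.48: seeing the evidence moves us from 30% to 48%
```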
Bayesian statistics and
more complex questions
But the math is too hard!
So we simulate!
(Markov chain Monte Carlo, or MCMC)
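Here's a minimal sketch of what that simulation looks like under the hood: a toy Metropolis sampler (one kind of MCMC) for the proportion of left-handed students, using the 5-out-of-50 sample from earlier and a flat prior. Real analyses use dedicated MCMC software; this just shows the idea of "simulate instead of doing the math."

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1234)
successes, n = 5, 50                         # 5 left-handed students out of 50

def log_posterior(theta):
    if not 0 < theta < 1:
        return -np.inf                       # flat prior: only values in (0, 1) are allowed
    return stats.binom.logpmf(successes, n, theta)

draws = []
theta = 0.5                                  # arbitrary starting guess
for _ in range(20_000):
    proposal = theta + rng.normal(0, 0.05)   # propose a small random step
    # accept the step with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    draws.append(theta)

draws = np.array(draws[2_000:])              # toss the warmup draws
print(draws.mean())                          # posterior mean, around 0.11
```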
Weekends and
restaurant scores again
In the world of frequentism,
there's a fixed population parameter
and the data can hypothetically vary
P(Data∣θ)
In the world of Bayesianism,
the data is fixed (you collected it just once!)
and the population parameter can vary
P(θ∣Data)
In frequentism land, the parameter is fixed and singular and the data can vary: you can run an experiment over and over again, taking more and more samples and polls.
In Bayes land, the data is fixed (you collected it once; that's it) and the parameter can vary.
Credible intervals
(AKA posterior intervals)
"Given the data, there is a 95% probability
that the true population parameter
falls in the credible interval"
a Bayesian statistician would say “given our observed data, there is a 95% probability that the true value of θ falls within the credible region” while a Frequentist statistician would say “there is a 95% probability that when I compute a confidence interval from data of this sort, the true value of θ will fall within it”. (https://freakonometrics.hypotheses.org/18117)
Note how this drastically improve the interpretability of the Bayesian interval compared to the frequentist one. Indeed, the Bayesian framework allows us to say “given the observed data, the effect has 95% probability of falling within this range”, compared to the less straightforward, frequentist alternative (the 95% Confidence* Interval) would be “there is a 95% probability that when computing a confidence interval from data of this sort, the effect falls within this range”. (https://easystats.github.io/bayestestR/articles/credible_interval.html)
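For the left-handed example (5 of 50) with a flat Beta(1, 1) prior, the posterior happens to be an exact Beta distribution, so a 95% credible interval is just the middle 95% of that posterior. A sketch, assuming that conjugate setup:

```python
from scipy import stats

# Flat Beta(1, 1) prior + 5 successes in 50 trials → posterior is Beta(6, 46)
posterior = stats.beta(1 + 5, 1 + 45)

lower, upper = posterior.ppf([0.025, 0.975])  # middle 95% of the posterior
print(lower, upper)                           # roughly 0.04 to 0.21

# "Given the data, there's a 95% probability the true proportion is in this range"
```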
Frequentism
There's a 95% probability
that the range contains the true value
Probability of the range
Few people naturally
think like this
Bayesianism
There's a 95% probability
that the true value falls in this range
Probability of the actual value
People do naturally
think like this!
There's a 95% probability that the range contains the true value (freq): "We are 95% confident that this net captures the true population parameter" vs. There's a 95% probability that the true value falls in this range (bayes)
This is a minor linguistic difference but it actually matters a lot! With frequentism, you have a range of possible values - you don't really know the true parameter, but it's in that range somewhere. Could be at the very edge, could be in the middle. With Bayesianism, you focus on the parameter itself, which has a distribution around it. It could be on the edge, but is most likely in the middle
Probability of range boundaries vs probability of parameter values
A Bayesian analogue to the p-value is the probability that the coefficient is greater than 0: here you can say there's a 100% chance that the coefficient is not zero. No more null worlds!
We all think Bayesianly,
even if you've never heard of Bayesian stats
Every time you look at a confidence interval, you inherently think that the parameter is around that value, but that's wrong!
BUT Imbens cites research suggesting
that's actually generally okay
Often credible intervals are super similar to confidence intervals
What do you do without p-values then?
Probability
of direction
Region of practical
equivalence (ROPE)
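Both of these are just summaries of posterior draws. A minimal sketch with made-up draws for some treatment effect: probability of direction is the share of draws above zero, and ROPE is the share of draws inside a range you'd call practically equivalent to zero (here a hypothetical ±0.1).

```python
import numpy as np

rng = np.random.default_rng(1234)
effect_draws = rng.normal(loc=0.4, scale=0.2, size=4_000)   # pretend posterior draws

p_direction = (effect_draws > 0).mean()            # share of the posterior above zero
rope_share = (np.abs(effect_draws) < 0.1).mean()   # share inside the "basically zero" region

print(p_direction)   # around 0.98: the effect is very probably positive
print(rope_share)    # around 0.06: little of the posterior is practically equivalent to zero
```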
Weekends and
restaurant scores
once more