Inference and Hypothesis Testing#

OBJECTIVES

  • Review confidence intervals

  • Review standard error of the mean

  • Introduce Hypothesis Testing

  • Hypothesis test with one sample

  • Difference in two samples

  • Difference in multiple samples

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.stats as stats
import seaborn as sns

Quiz Review#

Using the titanic data, determine which features seem to discriminate well between passengers who survived and those who did not.

titanic = sns.load_dataset('titanic')
titanic.head()
survived pclass sex age sibsp parch fare embarked class who adult_male deck embark_town alive alone
0 0 3 male 22.0 1 0 7.2500 S Third man True NaN Southampton no False
1 1 1 female 38.0 1 0 71.2833 C First woman False C Cherbourg yes False
2 1 3 female 26.0 0 0 7.9250 S Third woman False NaN Southampton yes True
3 1 1 female 35.0 1 0 53.1000 S First woman False C Southampton yes False
4 0 3 male 35.0 0 0 8.0500 S Third man True NaN Southampton no True
sns.countplot(titanic, x = 'pclass', hue = 'survived')
<AxesSubplot: xlabel='pclass', ylabel='count'>
titanic.groupby('sex')['survived'].mean()
sex
female    0.742038
male      0.188908
Name: survived, dtype: float64
sns.histplot(titanic, x = 'age', hue = 'survived')
<AxesSubplot: xlabel='age', ylabel='Count'>
titanic.groupby(titanic['age'] < 14)['survived'].mean()
age
False    0.365854
True     0.591549
Name: survived, dtype: float64

Standardization#

Suppose we have two distributions on different scales whose scores we would like to compare.

  • An English Class has test scores normally distributed with mean 95 and standard deviation 5.

  • A Mathematics Class has test scores normally distributed with mean 80 and standard deviation 7.

#math class
math_class = stats.norm(loc = 80, scale = 7)
#histogram
plt.hist(math_class.rvs(100))
(array([ 5.,  7.,  8., 18., 16., 23., 12.,  7.,  3.,  1.]),
 array([ 62.69072123,  66.46219501,  70.23366878,  74.00514256,
         77.77661634,  81.54809011,  85.31956389,  89.09103766,
         92.86251144,  96.63398522, 100.40545899]),
 <BarContainer object of 10 artists>)
#english scores
english_class = stats.norm(loc = 95, scale = 5)
#make a dataframe
tests_df = pd.DataFrame({'math': math_class.rvs(1000), 'english': english_class.rvs(1000)})
tests_df.head()
math english
0 87.047850 94.220540
1 93.723303 88.906155
2 81.280130 97.451730
3 78.016499 98.649934
4 82.784914 103.433882
#plot the histograms together
plt.hist(tests_df['math'])
plt.hist(tests_df['english'])
(array([  4.,  21.,  59., 150., 249., 247., 173.,  73.,  20.,   4.]),
 array([ 77.69682772,  81.13047556,  84.56412341,  87.99777125,
         91.4314191 ,  94.86506694,  98.29871478, 101.73236263,
        105.16601047, 108.59965832, 112.03330616]),
 <BarContainer object of 10 artists>)
#Problem: Student A scored 82 in math -- how many standard deviations from the mean is 82?
#         Student B scored 97 in English
#Who did relatively better?
(82 - 80)/7
0.2857142857142857
(97 - 95)/5
0.4
(tests_df - tests_df.mean()) / tests_df.std()
math english
0 1.047439 -0.171651
1 1.978635 -1.201127
2 0.242867 0.454278
3 -0.212396 0.686388
4 0.452778 1.613110
... ... ...
995 0.738558 0.630461
996 -0.080349 -0.346814
997 0.223827 1.183140
998 -0.095123 -1.211663
999 0.540756 -0.880150

1000 rows × 2 columns

Standardizer#

The work of standardizing our data is extremely important for many models. To get a feel for a pattern used by important libraries like scikit-learn, your task is to build a Standardizer class that has two methods:

.fit()
.transform()

When the .fit method is called, you will learn the mean and standard deviation of the data. Upon learning these, assign them to the attributes .mean_ and .scale_. Then, use the .transform method to actually transform the data. Demonstrate its use with the tests_df. Note, you will need to call the .fit method prior to the .transform. As a bonus, try adding an error message that warns the user when .transform is called before .fit.

class Standardizer:
    def __init__(self):
        self.mean_ = None
        self.scale_ = None
        
    def fit(self, X):
        self.mean_ = X.mean()
        self.scale_ = X.std()
    
    def transform(self, X):
        return (X - self.mean_)/self.scale_
scaler = Standardizer()
scaler.fit(tests_df)
scaler.transform(tests_df)
math english
0 1.047439 -0.171651
1 1.978635 -1.201127
2 0.242867 0.454278
3 -0.212396 0.686388
4 0.452778 1.613110
... ... ...
995 0.738558 0.630461
996 -0.080349 -0.346814
997 0.223827 1.183140
998 -0.095123 -1.211663
999 0.540756 -0.880150

1000 rows × 2 columns
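As a sketch of the bonus, one way to guard against calling .transform before .fit (mirroring scikit-learn's not-fitted error behavior) might be:

```python
import pandas as pd

class Standardizer:
    def __init__(self):
        self.mean_ = None
        self.scale_ = None

    def fit(self, X):
        # learn the column means and sample standard deviations
        self.mean_ = X.mean()
        self.scale_ = X.std()
        return self

    def transform(self, X):
        # warn the user if fit has not been called yet
        if self.mean_ is None or self.scale_ is None:
            raise RuntimeError('Call .fit before .transform.')
        return (X - self.mean_) / self.scale_
```

Returning self from .fit allows chaining like scaler.fit(tests_df).transform(tests_df), just as scikit-learn does.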

Differences between groups#

#read in the polls data
polls = pd.read_csv('https://raw.githubusercontent.com/jfkoehler/nyu_bootcamp_fa24/refs/heads/main/data/polls.csv')
#take a peek
polls.head()
p1 p2 p3 p4 p5
0 5 1 3 5 2
1 1 3 5 5 5
2 2 3 5 3 5
3 4 3 3 3 5
4 5 4 3 2 2

Confidence intervals#

\[\bar{x} \pm t_{1 - \alpha / 2,\, n - 1} \times \frac{s}{\sqrt{n}}\]
  • \(\alpha\): significance level – we determine this

  • \(t_{1 - \alpha / 2,\, n - 1}\): critical t-value – we look this up

  • \(\bar{x}\): the sample mean – we get this from the data

  • \(s\): the sample standard deviation – we get this from the data. NOTE: This is different from the population standard deviation.

t_dist = stats.t(df = len(polls))
print(t_dist.mean(), t_dist.std())
0.0 1.0067340828210365
x = np.linspace(-3, 3, 100)
plt.plot(x, t_dist.pdf(x))
plt.fill_between(x, t_dist.pdf(x), where = ((x > t_dist.ppf(.025)) &  (x< t_dist.ppf(.975))), hatch = '|', alpha = 0.5)
plt.grid()
plt.title('The t-distribution');
plt.xticks([-2, 2], ['2.5% of the data', '97.5% of the data']);
#examine the first question data
q1 = polls['p1']
q1.head()
0    5
1    1
2    2
3    4
4    5
Name: p1, dtype: int64
#determine degrees of freedom
#i.e. length - 1
dof = len(q1) - 1
print(f'{dof} degrees of freedom')
149 degrees of freedom
#look up the critical t-value
#we need our alpha and dof
#where do we bound 97.5% of our data?
t_stat = stats.t.ppf(1 - 0.05/2, dof)
print(f'The critical t-value is {t_stat}')
The critical t-value is 1.976013177679155
#compute sample standard deviation
s = np.std(q1, ddof = 1)
print(f'The sample standard deviation is {s}')
The sample standard deviation is 1.1317069525271144
#sample size
n = len(q1)
print(f'The sample size is {n}')
The sample size is 150
#compute upper limit
upper = q1.mean() + t_stat*s/np.sqrt(n)
print(f'The upper limit of the confidence interval is {upper}')
The upper limit of the confidence interval is 4.215923838809285
#compute the lower bound
lower = q1.mean() - t_stat*s/np.sqrt(n)
print(f'The lower limit of the confidence interval is {lower}')
The lower limit of the confidence interval is 3.8507428278573816
#print it
(lower, upper)
(3.8507428278573816, 4.215923838809285)
#use scipy
#1 - alpha
#dof
#sem
#(1 - alpha, dof, mean, sem)
stats.t.interval(.95, n - 1, np.mean(q1), stats.sem(q1))
#plot it
#take 5000 samples of size 20 from poll 1, find each mean, kde of the results
sample_means = [q1.sample(20).mean() for _ in range(5000)]
sns.displot(sample_means, kind = 'kde')

Problem#

  • Find the 95% confidence interval for the second poll

  • Compare the two intervals, is there much overlap? What does this mean?

q2 = polls['p2']
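One way to approach this: wrap the interval computation from above in a small helper (a sketch; the function name is mine) and apply it to q2 just as we did for q1:

```python
import numpy as np
import scipy.stats as stats

def t_confidence_interval(data, confidence=0.95):
    """Two-sided t confidence interval for the mean of a sample."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    mean = data.mean()
    sem = data.std(ddof=1) / np.sqrt(n)  # standard error of the mean
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    return mean - t_crit * sem, mean + t_crit * sem

# applied to the second poll:
# t_confidence_interval(q2)
```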

Confidence interval for Difference in Means#

#statsmodels imports
from statsmodels.stats.weightstats import CompareMeans, DescrStatsW
#wrap each poll in a DescrStatsW object
#then compare their means
dq1 = DescrStatsW(q1)
dq2 = DescrStatsW(q2)
c = CompareMeans(dq1, dq2)
#95% confidence interval for the difference between the two poll means
c.tconfint_diff(.05)
(0.3521083067086064, 0.9012250266247266)
#so what?
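To see what CompareMeans is doing under the hood, here is a sketch of the pooled two-sample interval computed by hand (assuming equal variances, which is tconfint_diff's default; the function name is mine):

```python
import numpy as np
import scipy.stats as stats

def pooled_diff_interval(a, b, alpha=0.05):
    """CI for mean(a) - mean(b) using a pooled variance estimate."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    # pooled sample variance
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    se = np.sqrt(sp2 * (1 / na + 1 / nb))
    t_crit = stats.t.ppf(1 - alpha / 2, df=na + nb - 2)
    diff = a.mean() - b.mean()
    return diff - t_crit * se, diff + t_crit * se
```

The "so what": since 0 falls outside the interval above, the two poll means differ significantly at the 5% level.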

Jobs Data#

The data below is a sample of job postings from New York City. We want to investigate the salary_from and salary_to columns.

#read in the data
jobs = pd.read_csv('https://raw.githubusercontent.com/jfkoehler/nyu_bootcamp_fa24/refs/heads/main/data/jobs.csv')
#salary from
jobs.head()
job_id title agency posting_date salary_from salary_to
0 378085 HVAC Service Technic DEPT OF HEALTH/MENTAL HYGIENE 2018-12-21 385.0 385.0
1 377919 Psychologist, Level POLICE DEPARTMENT 2018-12-31 62458.0 81131.0
2 379321 Asset Manager HOUSING PRESERVATION & DVLPMNT 2019-01-07 52524.0 60000.0
3 378658 Public Health Adviso DEPT OF HEALTH/MENTAL HYGIENE 2019-01-02 37957.0 47142.0
4 321570 Deputy Commissioner, DEPT OF ENVIRONMENT PROTECTION 2018-01-26 209585.0 209585.0

Margin of Error#

Now the question is: how large a sample do we need so that the confidence interval has at most a given margin of error?

\[error = z_{\alpha/2} \times \frac{\sigma}{\sqrt{n}}\]

PROBLEM

What is the minimum sample size necessary to estimate the upper salary range with 95% confidence within $3000?

  • need \(z\)-score: 1.96

  • E: 3000

  • \(\sigma\): np.std(jobs['salary_to'])

#do the computation
#repeat for $500
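A sketch of the computation, solving the error formula for \(n\). The σ value below is a placeholder; in practice plug in np.std(jobs['salary_to']):

```python
import numpy as np

def min_sample_size(sigma, error, z=1.96):
    """Smallest n with z * sigma / sqrt(n) <= error."""
    return int(np.ceil((z * sigma / error) ** 2))

# with a hypothetical sigma of $30,000:
min_sample_size(30_000, 3_000)  # 385
# repeat with error = 500 for the second part
```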

Testing Significance#

Now that we’ve tackled confidence intervals, let’s wrap up with a test for significance. The first step of a hypothesis test is declaring a null and an alternative hypothesis. Typically, the null hypothesis asserts no difference.

\[H_0: \text{Null Hypothesis}\]
\[H_a: \text{Alternative Hypothesis}\]

For example, our data below come from a reading intervention with an assessment afterward. Our hypotheses will be:

\[H_0: \mu_1 = \mu_2\]
\[H_a: \mu_1 \neq \mu_2\]
#read in the data
reading = pd.read_csv('https://raw.githubusercontent.com/jfkoehler/nyu_bootcamp_fa24/refs/heads/main/data/DRP.csv')
reading.head()
#distributions of groups
sns.displot(x = 'drp', hue = 'group', data = reading, kind='kde')

For our hypothesis test, we need two things:

  • Null and alternative hypothesis

\[H_0: \mu_t = \mu_c \]
\[H_a: \mu_t \neq \mu_c \]
  • Significance level: \(\alpha = 0.05\). Just like before, we will set a tolerance for rejecting the null hypothesis.

#split the groups
treatment = reading.loc[reading['g'] == 0]['drp']
control = reading.loc[reading['g'] == 1]['drp']
#run the test
stats.ttest_ind(treatment, control)
#alpha at 0.05
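Note that stats.ttest_ind assumes equal variances by default; passing equal_var=False runs Welch's test instead, which is often the safer choice when the group spreads differ. A sketch on made-up scores (the arrays are illustrative, not the reading data):

```python
import scipy.stats as stats

a = [5, 6, 7, 8, 9]   # hypothetical treatment scores
b = [2, 3, 4, 5, 6]   # hypothetical control scores

# pooled-variance test (the default)
print(stats.ttest_ind(a, b))

# Welch's test: does not assume equal variances
print(stats.ttest_ind(a, b, equal_var=False))
```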

SUPPOSE WE WANT TO TEST WHETHER THE INTERVENTION MADE SCORES HIGHER

\[H_0: \mu_c = \mu_t\]
\[H_a: \mu_c < \mu_t\]
#alpha at 0.05
#halve the two-sided p-value for the one-sided test
#(valid here because the t-score has the hypothesized sign)
t_score, p = stats.ttest_ind(treatment, control)
p/2
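In recent versions of scipy (1.6+), you can ask for the one-sided test directly with the alternative keyword instead of halving the p-value yourself. A sketch with made-up arrays:

```python
import scipy.stats as stats

treat = [5, 6, 7, 8, 9]   # hypothetical treatment scores
ctrl = [1, 2, 3, 4, 5]    # hypothetical control scores

# H_a: treatment mean is greater than control mean
t_two, p_two = stats.ttest_ind(treat, ctrl)
t_one, p_one = stats.ttest_ind(treat, ctrl, alternative='greater')

# when the t-score has the hypothesized sign, p_one equals p_two / 2
print(p_two / 2, p_one)
```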

PROBLEMS

  1. Given the mileage dataset, test the claim on the car's sticker that the average mpg for city driving is 30 mpg.

  2. If we increase our food intake, we generally gain weight. In one study, researchers fed 16 non-obese adults, ages 25-36, 1000 excess calories a day. According to theory, 3500 extra calories translate into a weight gain of 1 pound, so we expect each subject to gain 16 pounds over the eight weeks. The wtgain dataset contains the measurements before and after the eight-week period.

  • Create a new column to represent the weight change of each subject.

  • Find the mean and standard deviation of the change.

  • Determine the 95% confidence interval for the weight change and interpret it in complete sentences.

  • Test the null hypothesis that the mean weight gain is 16 lbs. What do you conclude?

  3. Insurance adjusters are concerned about the high estimates they are receiving from Jocko’s Garage. To see if the estimates are unreasonably high, each of 10 damaged cars was taken to Jocko’s and to another garage, and the estimates were recorded in the jocko.csv file.

  • Create a new column that represents the difference in prices from the two garages. Find the mean and standard deviation of the difference.

  • Test the null hypothesis that there is no difference between the estimates at the 0.05 significance level.
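For the weight-gain and garage problems, the samples are paired (the same subject or the same car measured twice), so one approach is stats.ttest_rel, which is equivalent to a one-sample test on the differences. A sketch (the arrays below are made up, not the wtgain or jocko data):

```python
import numpy as np
import scipy.stats as stats

before = np.array([150.0, 162.0, 148.0, 171.0, 155.0])  # hypothetical weights
after = np.array([159.0, 170.0, 158.0, 180.0, 163.0])

# paired t-test of H0: mean difference = 0
print(stats.ttest_rel(after, before))

# equivalent: one-sample test on the differences
# (use popmean=16 to test the 16-lb claim in problem 2)
print(stats.ttest_1samp(after - before, popmean=0))
```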