4. Linear Models
Overview
4.1 Why normal distributions are normal
4.1.1 Normal by addition
4.1.2 Normal by multiplication
4.1.3 Normal by log-multiplication
4.1.4 Using Gaussian distributions
4.1.4.1 Ontological justification
4.1.4.2 Epistemological justification
Gaussian distribution
4.2 A language for describing models
4.2.1 Re-describing the globe tossing model
From model definition to Bayes’ theorem
4.3 A Gaussian model of height
4.3.1 The data
Data frames
Index magic
4.3.2 The model
Independent and identically distributed
A farewell to epsilon
Model definition to Bayes’ theorem again
4.3.3 Grid approximation of the posterior distribution
4.3.4 Sampling from the posterior
Sample size and the normality of σ’s posterior
4.3.5 Fitting the model with map
Start values for map
How strong is a prior?
4.3.6 Sampling from a map fit
Under the hood with multivariate sampling
Getting σ right
4.4 Adding a predictor
What is “regression”?
4.4.1 The linear model strategy
4.4.1.1 Likelihood
4.4.1.2 Linear model
Nothing special or natural about linear models
Units and regression models
4.4.1.3 Priors
What’s the correct prior?
4.4.2 Fitting the model
Everything that depends upon parameters has a posterior distribution
Embedding linear models
4.4.3 Interpreting the model fit
What do parameters mean?
4.4.3.1 Tables of estimates
4.4.3.2 Plotting posterior inference against the data
4.4.3.3 Adding uncertainty around the mean
4.4.3.4 Plotting regression intervals and contours
Overconfident confidence intervals
How link works
4.4.3.5 Prediction intervals
Two kinds of uncertainty
Rolling your own sim
4.5 Polynomial regression
Linear, additive, funky
Converting back to natural scale
4.6 Summary