CS229 Lecture Notes
Andrew Ng and Tengyu Ma
June 11, 2023
Contents
I Supervised learning

1  Linear regression
     LMS algorithm
     The normal equations
       Matrix derivatives
       Least squares revisited
     Probabilistic interpretation
     Locally weighted linear regression (optional reading)

2  Classification and logistic regression
     Logistic regression
     Digression: the perceptron learning algorithm
     Multi-class classification
     Another algorithm for maximizing ℓ(θ)

3  Generalized linear models
     The exponential family
     Constructing GLMs
       Ordinary least squares
       Logistic regression

4  Generative learning algorithms
     Gaussian discriminant analysis
       The multivariate normal distribution
       The Gaussian discriminant analysis model
       Discussion: GDA and logistic regression
     Naive Bayes (optional reading)
       Laplace smoothing
       Event models for text classification

5  Kernel methods
     Feature maps
     LMS (least mean squares) with features
     LMS with the kernel trick
     Properties of kernels

6  Support vector machines
     Margins: intuition
     Notation (optional reading)
     Functional and geometric margins (optional reading)
     The optimal margin classifier (optional reading)
     Lagrange duality (optional reading)
     Optimal margin classifiers: the dual form (optional reading)
     Regularization and the non-separable case (optional reading)
     The SMO algorithm (optional reading)
       Coordinate ascent
       SMO

II Deep learning

7  Deep learning
     Supervised learning with non-linear models
     Neural networks
     Modules in Modern Neural Networks
     Backpropagation
       Preliminaries on partial derivatives
       General strategy of backpropagation
       Backward functions for basic modules
       Back-propagation for MLPs
     Vectorization over training examples

III Generalization and regularization

8  Generalization
     Bias-variance tradeoff
       A mathematical decomposition (for regression)
     The double descent phenomenon
     Sample complexity bounds (optional reading)
       Preliminaries
       The case of finite H
       The case of infinite H

9  Regularization and model selection
     Regularization
     Implicit regularization effect (optional reading)
     Model selection via cross validation
     Bayesian statistics and regularization

IV Unsupervised learning

10  Clustering and the k-means algorithm

11  EM algorithms
     EM for mixture of Gaussians
     Jensen’s inequality
     General EM algorithms
       Other interpretation of ELBO
     Mixture of Gaussians revisited
     Variational inference and variational auto-encoder (optional reading)

12  Principal components analysis

13  Independent components analysis
     ICA ambiguities
     Densities and linear transformations
     ICA algorithm

14  Self-supervised learning and foundation models
     Pretraining and adaptation
     Pretraining methods in computer vision
     Pretrained large language models
       Open up the blackbox of Transformers
       Zero-shot learning and in-context learning

V Reinforcement Learning and Control

15  Reinforcement learning
     Markov decision processes
     Value iteration and policy iteration
     Learning a model for an MDP
     Continuous state MDPs
       Discretization
       Value function approximation
     Connections between Policy and Value Iteration (optional)

16  LQR, DDP and LQG
     Finite-horizon MDPs
     Linear Quadratic Regulation (LQR)
     From non-linear dynamics to LQR
       Linearization of dynamics
       Differential Dynamic Programming (DDP)
     Linear Quadratic Gaussian (LQG)

17  Policy Gradient (REINFORCE)
Part I
Supervised learning
Let’s start by talking about a few examples of supervised learning prob-
lems. Suppose we have a dataset giving the living areas and prices of 47
houses from Portland, Oregon:
Living area (feet²)   Price (1000$s)
2104                  400
1600                  330
2400                  369
1416                  232
3000                  540
...                   ...
We can plot this data:
[Figure: scatter plot of housing prices (price in $1000s) versus living area (square feet).]
Given data like this, how can we learn to predict the prices of other houses
in Portland, as a function of the size of their living areas?
To establish notation for future use, we’ll use x(i) to denote the “input”
variables (living area in this example), also called input features, and y(i)
to denote the “output” or target variable that we are trying to predict
(price). A pair (x(i), y(i)) is called a training example, and the dataset
that we’ll be using to learn—a list of n training examples {(x(i), y(i)); i =
1, . . . , n}—is called a training set. Note that the superscript “(i)” in the
notation is simply an index into the training set, and has nothing to do with
exponentiation. We will also use X to denote the space of input values, and Y
the space of output values. In this example, X = Y = R.
To describe the supervised learning problem slightly more formally, our
goal is, given a training set, to learn a function h : X 7→ Y so that h(x) is a
“good” predictor for the corresponding value of y. For historical reasons, this
function h is called a hypothesis. Seen pictorially, the process is therefore
like this:
[Diagram: a training set is fed into a learning algorithm, which outputs a hypothesis h; h maps an input x (living area of house) to a predicted y (predicted price of house).]
When the target variable that we’re trying to predict is continuous, such
as in our housing example, we call the learning problem a regression prob-
lem. When y can take on only a small number of discrete values (such as
if, given the living area, we wanted to predict if a dwelling is a house or an
apartment, say), we call it a classification problem.
Chapter 1
Linear regression
To make our housing example more interesting, let’s consider a slightly richer
dataset in which we also know the number of bedrooms in each house:
Living area (feet²)   #bedrooms   Price (1000$s)
2104                  3           400
1600                  3           330
2400                  3           369
1416                  2           232
3000                  4           540
...                   ...         ...
Here, the x’s are two-dimensional vectors in R². For instance, x1(i) is the
living area of the i-th house in the training set, and x2(i) is its number of
bedrooms. (In general, when designing a learning problem, it will be up to
you to decide what features to choose, so if you are out in Portland gathering
housing data, you might also decide to include other features such as whether
each house has a fireplace, the number of bathrooms, and so on. We’ll say
more about feature selection later, but for now let’s take the features as
given.)
To perform supervised learning, we must decide how we’re going to rep-
resent functions/hypotheses h in a computer. As an initial choice, let’s say
we decide to approximate y as a linear function of x:
hθ(x) = θ0 + θ1x1 + θ2x2
Here, the θi’s are the parameters (also called weights) parameterizing the
space of linear functions mapping from X to Y . When there is no risk of
confusion, we will drop the θ subscript in hθ(x), and write it more simply as
h(x). To simplify our notation, we also introduce the convention of letting
x0 = 1 (this is the intercept term), so that
h(x) = Σ_{i=0}^{d} θixi = θTx,
where on the right-hand side above we are viewing θ and x both as vectors,
and here d is the number of input variables (not counting x0).
Now, given a training set, how do we pick, or learn, the parameters θ?
One reasonable method seems to be to make h(x) close to y, at least for
the training examples we have. To formalize this, we will define a function
that measures, for each value of the θ’s, how close the h(x(i))’s are to the
corresponding y(i)’s. We define the cost function:
J(θ) = (1/2) Σ_{i=1}^{n} (hθ(x(i)) − y(i))².
If you’ve seen linear regression before, you may recognize this as the familiar
least-squares cost function that gives rise to the ordinary least squares
regression model. Whether or not you have seen it previously, let’s keep
going, and we’ll eventually show this to be a special case of a much broader
family of algorithms.
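To make the definition concrete, here is a minimal Python sketch that evaluates this cost function on the toy housing data from the table above (the θ values passed in are arbitrary illustrations, not fitted values):

```python
# Least-squares cost J(theta) = (1/2) * sum_i (h_theta(x^(i)) - y^(i))^2
# on the toy housing data (living area in ft^2, price in $1000s).
xs = [2104, 1600, 2400, 1416, 3000]   # living areas
ys = [400, 330, 369, 232, 540]        # prices

def h(theta0, theta1, x):
    # Linear hypothesis with intercept term (x0 = 1).
    return theta0 + theta1 * x

def J(theta0, theta1):
    return 0.5 * sum((h(theta0, theta1, x) - y) ** 2 for x, y in zip(xs, ys))

# The theta values here are arbitrary; a better fit gives a lower cost.
print(J(0.0, 0.0), J(0.0, 0.2))
```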
LMS algorithm
We want to choose θ so as to minimize J(θ). To do so, let’s use a search
algorithm that starts with some “initial guess” for θ, and that repeatedly
changes θ to make J(θ) smaller, until hopefully we converge to a value of
θ that minimizes J(θ). Specifically, let’s consider the gradient descent
algorithm, which starts with some initial θ, and repeatedly performs the
update:
θj := θj − α ∂J(θ)/∂θj.
(This update is simultaneously performed for all values of j = 0, . . . , d.)
Here, α is called the learning rate. This is a very natural algorithm that
repeatedly takes a step in the direction of steepest decrease of J .
In order to implement this algorithm, we have to work out what is the
partial derivative term on the right hand side. Let’s first work it out for the
case where we have only one training example (x, y), so that we can neglect
the sum in the definition of J . We have:
∂J(θ)/∂θj = ∂/∂θj [ (1/2)(hθ(x) − y)² ]
          = 2 · (1/2) · (hθ(x) − y) · ∂/∂θj (hθ(x) − y)
          = (hθ(x) − y) · ∂/∂θj ( Σ_{i=0}^{d} θixi − y )
          = (hθ(x) − y) xj
For a single training example, this gives the update rule:1
θj := θj + α (y(i) − hθ(x(i))) xj(i).
The rule is called the LMS update rule (LMS stands for “least mean squares”),
and is also known as the Widrow-Hoff learning rule. This rule has several
properties that seem natural and intuitive. For instance, the magnitude of
the update is proportional to the error term (y(i) − hθ(x(i))); thus, for in-
stance, if we are encountering a training example on which our prediction
nearly matches the actual value of y(i), then we find that there is little need
to change the parameters; in contrast, a larger change to the parameters will
be made if our prediction hθ(x(i)) has a large error (i.e., if it is very far from
y(i)).
We derived the LMS rule for when there was only a single training
example. There are two ways to modify this method for a training set of
more than one example. The first is to replace it with the following algorithm:
Repeat until convergence {

    θj := θj + α Σ_{i=1}^{n} (y(i) − hθ(x(i))) xj(i),   (for every j)

}
[1] We use the notation “a := b” to denote an operation (in a computer program) in
which we set the value of a variable a to be equal to the value of b. In other words, this
operation overwrites a with the value of b. In contrast, we will write “a = b” when we are
asserting a statement of fact, that the value of a is equal to the value of b.
By grouping the updates of the coordinates into an update of the vector
θ, we can rewrite the update above in a slightly more succinct way:

θ := θ + α Σ_{i=1}^{n} (y(i) − hθ(x(i))) x(i)
The reader can easily verify that the quantity in the summation in the
update rule above is just ∂J(θ)/∂θj (for the original definition of J). So, this
is simply gradient descent on the original cost function J . This method looks
at every example in the entire training set on every step, and is called batch
gradient descent. Note that, while gradient descent can be susceptible
to local minima in general, the optimization problem we have posed here
for linear regression has only one global, and no other local, optima; thus
gradient descent always converges (assuming the learning rate α is not too
large) to the global minimum. Indeed, J is a convex quadratic function.
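The batch update above can be sketched in Python. As assumptions of this sketch: living areas are rescaled to thousands of square feet so that a simple fixed learning rate converges, and the learning rate and iteration count are arbitrary choices:

```python
# Batch gradient descent for linear regression on the toy housing data.
# Living areas are rescaled to thousands of ft^2 (our choice, for numerical
# stability with a fixed learning rate); alpha and the iteration count are
# arbitrary illustrative settings.
xs = [2.104, 1.600, 2.400, 1.416, 3.000]   # living area / 1000 ft^2
ys = [400.0, 330.0, 369.0, 232.0, 540.0]   # price in $1000s

theta0, theta1 = 0.0, 0.0
alpha = 0.05
for _ in range(5000):
    # Sum the LMS updates over the entire training set (one batch step).
    g0 = sum(y - (theta0 + theta1 * x) for x, y in zip(xs, ys))
    g1 = sum((y - (theta0 + theta1 * x)) * x for x, y in zip(xs, ys))
    theta0 += alpha * g0
    theta1 += alpha * g1

print(theta0, theta1)   # approaches the unique least-squares minimum
```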
Here is an example of gradient descent as it is run to minimize a quadratic
function.
[Figure: contour plot of a quadratic function with the trajectory taken by gradient descent.]
The ellipses shown above are the contours of a quadratic function. Also
shown is the trajectory taken by gradient descent, which was initialized at
(48,30). The x’s in the figure (joined by straight lines) mark the successive
values of θ that gradient descent went through.
When we run batch gradient descent to fit θ on our previous dataset,
to learn to predict housing price as a function of living area, we obtain
θ0 = , θ1 = . If we plot hθ(x) as a function of x (area), along
with the training data, we obtain the following figure:
[Figure: the fitted hypothesis hθ(x) plotted as a function of living area (square feet), together with the training data (price in $1000s).]
If the number of bedrooms were included as one of the input features as well,
we get θ0 = , θ1 = , θ2 = −.
The above results were obtained with batch gradient descent. There is
an alternative to batch gradient descent that also works very well. Consider
the following algorithm:
Loop {

    for i = 1 to n, {

        θj := θj + α (y(i) − hθ(x(i))) xj(i),   (for every j)

    }

}
By grouping the updates of the coordinates into an update of the vector
θ, we can rewrite the update above in a slightly more succinct way:

θ := θ + α (y(i) − hθ(x(i))) x(i)
In this algorithm, we repeatedly run through the training set, and each
time we encounter a training example, we update the parameters according
to the gradient of the error with respect to that single training example only.
This algorithm is called stochastic gradient descent (also incremental
gradient descent). Whereas batch gradient descent has to scan through
the entire training set before taking a single step—a costly operation if n is
large—stochastic gradient descent can start making progress right away, and
continues to make progress with each example it looks at. Often, stochastic
gradient descent gets θ “close” to the minimum much faster than batch gra-
dient descent. (Note however that it may never “converge” to the minimum,
and the parameters θ will keep oscillating around the minimum of J(θ); but
in practice most of the values near the minimum will be reasonably good
approximations to the true minimum.[2]) For these reasons, particularly when
the training set is large, stochastic gradient descent is often preferred over
batch gradient descent.
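A matching sketch of stochastic gradient descent on the same rescaled toy data; the parameters update after every single example rather than after a full pass (the learning rate and number of passes are again illustrative choices):

```python
# Stochastic gradient descent on the rescaled toy housing data:
# theta is updated after each individual example. The learning rate and
# number of passes are illustrative choices.
xs = [2.104, 1.600, 2.400, 1.416, 3.000]   # living area / 1000 ft^2
ys = [400.0, 330.0, 369.0, 232.0, 540.0]   # price in $1000s

theta0, theta1 = 0.0, 0.0
alpha = 0.02
for epoch in range(2000):            # repeated passes over the training set
    for x, y in zip(xs, ys):
        err = y - (theta0 + theta1 * x)
        theta0 += alpha * err        # update from this single example only
        theta1 += alpha * err * x

print(theta0, theta1)   # oscillates in a neighborhood of the minimum
```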
The normal equations
Gradient descent gives one way of minimizing J . Let’s discuss a second way
of doing so, this time performing the minimization explicitly and without
resorting to an iterative algorithm. In this method, we will minimize J by
explicitly taking its derivatives with respect to the θj’s, and setting them to
zero. To enable us to do this without having to write reams of algebra and
pages full of matrices of derivatives, let’s introduce some notation for doing
calculus with matrices.
Matrix derivatives
For a function f : Rn×d 7→ R mapping from n-by-d matrices to the real
numbers, we define the derivative of f with respect to A to be:
∇Af(A) = [ ∂f/∂A11  · · ·  ∂f/∂A1d
              ⋮       ⋱        ⋮
           ∂f/∂An1  · · ·  ∂f/∂And ]
Thus, the gradient ∇Af(A) is itself an n-by-d matrix, whose (i, j)-element is
∂f/∂Aij. For example, suppose A = [ A11 A12 ; A21 A22 ] is a 2-by-2 matrix, and
the function f : R2×2 → R is given by

f(A) = (3/2)A11 + 5A12² + A21A22.
[2] By slowly letting the learning rate α decrease to zero as the algorithm runs, it is also
possible to ensure that the parameters will converge to the global minimum rather than
merely oscillate around the minimum.
Here, Aij denotes the (i, j) entry of the matrix A. We then have
∇Af(A) = [ 3/2    10A12
           A22    A21   ].
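This example gradient can be double-checked numerically with a central finite difference; the following is a small verification sketch, not part of the derivation:

```python
# Central-difference check of the gradient of
# f(A) = (3/2) A11 + 5 A12^2 + A21 A22 for a 2-by-2 matrix A.
def f(A):
    return 1.5 * A[0][0] + 5.0 * A[0][1] ** 2 + A[1][0] * A[1][1]

def numeric_grad(f, A, eps=1e-6):
    G = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            Ap = [row[:] for row in A]
            Am = [row[:] for row in A]
            Ap[i][j] += eps
            Am[i][j] -= eps
            G[i][j] = (f(Ap) - f(Am)) / (2 * eps)  # central difference
    return G

A = [[1.0, 2.0], [3.0, 4.0]]
# The derivation above predicts [[3/2, 10*A12], [A22, A21]] = [[1.5, 20], [4, 3]].
print(numeric_grad(f, A))
```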
Least squares revisited
Armed with the tools of matrix derivatives, let us now proceed to find in
closed-form the value of θ that minimizes J(θ). We begin by re-writing J in
matrix-vectorial notation.
Given a training set, define the design matrix X to be the n-by-d matrix
(actually n-by-(d + 1), if we include the intercept term) that contains the
training examples’ input values in its rows:
X = [ — (x(1))T —
      — (x(2))T —
            ⋮
      — (x(n))T — ].
Also, let ~y be the n-dimensional vector containing all the target values from
the training set:
~y = [ y(1)
       y(2)
        ⋮
       y(n) ].
Now, since hθ(x(i)) = (x(i))Tθ, we can easily verify that

Xθ − ~y = [ (x(1))Tθ        [ y(1)        [ hθ(x(1)) − y(1)
               ⋮       −       ⋮      =          ⋮
            (x(n))Tθ ]       y(n) ]        hθ(x(n)) − y(n) ].
Thus, using the fact that for a vector z, we have that zTz = Σi zi²:

(1/2)(Xθ − ~y)T(Xθ − ~y) = (1/2) Σ_{i=1}^{n} (hθ(x(i)) − y(i))²
                         = J(θ)
Finally, to minimize J , let’s find its derivatives with respect to θ. Hence,
∇θJ(θ) = ∇θ (1/2)(Xθ − ~y)T(Xθ − ~y)
       = (1/2) ∇θ ( (Xθ)TXθ − (Xθ)T~y − ~yT(Xθ) + ~yT~y )
       = (1/2) ∇θ ( θT(XTX)θ − ~yT(Xθ) − ~yT(Xθ) )
       = (1/2) ∇θ ( θT(XTX)θ − 2(XT~y)Tθ )
       = (1/2) ( 2XTXθ − 2XT~y )
       = XTXθ − XT~y
In the third step, we used the fact that aTb = bTa, and in the fifth step
used the facts ∇xbTx = b and ∇xxTAx = 2Ax for a symmetric matrix A (for
more details, see the “Linear Algebra Review and Reference” notes). To
minimize J, we set its derivatives to zero, and obtain the normal equations:

XTXθ = XT~y

Thus, the value of θ that minimizes J(θ) is given in closed form by the
equation[3]

θ = (XTX)−1XT~y.
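As a sketch, for the toy housing data with an intercept term the normal equations reduce to a 2-by-2 linear system, which the following snippet solves with a hand-written inverse (illustrative only; a real implementation would call a linear-algebra routine, and the rescaling of areas to thousands of square feet is our choice):

```python
# Closed-form solution of the normal equations X^T X theta = X^T y for the
# toy housing data, with design-matrix rows (1, x). The 2x2 system is
# inverted by hand here purely for illustration.
xs = [2.104, 1.600, 2.400, 1.416, 3.000]   # living area / 1000 ft^2 (rescaled)
ys = [400.0, 330.0, 369.0, 232.0, 540.0]   # price in $1000s

n = len(xs)
a, b, d = float(n), sum(xs), sum(x * x for x in xs)   # entries of X^T X
u, v = sum(ys), sum(x * y for x, y in zip(xs, ys))    # entries of X^T y
det = a * d - b * b
theta0 = (d * u - b * v) / det
theta1 = (a * v - b * u) / det
print(theta0, theta1)   # the least-squares intercept and slope
```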
Probabilistic interpretation
When faced with a regression problem, why might linear regression, and
specifically why might the least-squares cost function J , be a reasonable
choice? In this section, we will give a set of probabilistic assumptions, under
which least-squares regression is derived as a very natural algorithm.
Let us assume that the target variables and the inputs are related via the
equation
y(i) = θTx(i) + ε(i),
[3] Note that in the above step, we are implicitly assuming that XTX is an invertible
matrix. This can be checked before calculating the inverse. If either the number of
linearly independent examples is fewer than the number of features, or if the features
are not linearly independent, then XTX will not be invertible. Even in such cases, it is
possible to “fix” the situation with additional techniques, which we skip here for the sake
of simplicity.
where ε(i) is an error term that captures either unmodeled effects (such as
if there are some features very pertinent to predicting housing price, but
that we’d left out of the regression), or random noise. Let us further assume
that the ε(i) are distributed IID (independently and identically distributed)
according to a Gaussian distribution (also called a Normal distribution) with
mean zero and some variance σ². We can write this assumption as “ε(i) ∼
N(0, σ²).” I.e., the density of ε(i) is given by
p(ε(i)) = (1/(√(2π)σ)) exp( −(ε(i))² / (2σ²) ).
This implies that
p(y(i)|x(i); θ) = (1/(√(2π)σ)) exp( −(y(i) − θTx(i))² / (2σ²) ).
.
The notation “p(y(i)|x(i); θ)” indicates that this is the distribution of y(i)
given x(i) and parameterized by θ. Note that we should not condition on θ
(“p(y(i)|x(i), θ)”), since θ is not a random variable. We can also write the
distribution of y(i) as y(i) | x(i); θ ∼ N (θTx(i), σ2).
Given X (the design matrix, which contains all the x(i)’s) and θ, what
is the distribution of the y(i)’s? The probability of the data is given by
p(~y|X; θ). This quantity is typically viewed as a function of ~y (and perhaps X),
for a fixed value of θ. When we wish to explicitly view this as a function of
θ, we will instead call it the likelihood function:
L(θ) = L(θ;X, ~y) = p(~y|X; θ).
Note that by the independence assumption on the �(i)’s (and hence also the
y(i)’s given the x(i)’s), this can also be written
L(θ) = Π_{i=1}^{n} p(y(i) | x(i); θ)
     = Π_{i=1}^{n} (1/(√(2π)σ)) exp( −(y(i) − θTx(i))² / (2σ²) ).
Now, given this probabilistic model relating the y(i)’s and the x(i)’s, what
is a reasonable way of choosing our best guess of the parameters θ? The
principle of maximum likelihood says that we should choose θ so as to
make the data as high probability as possible. I.e., we should choose θ to
maximize L(θ).
Instead of maximizing L(θ), we can also maximize any strictly increasing
function of L(θ). In particular, the derivations will be a bit simpler if we
instead maximize the log likelihood ℓ(θ):

ℓ(θ) = log L(θ)
     = log Π_{i=1}^{n} (1/(√(2π)σ)) exp( −(y(i) − θTx(i))² / (2σ²) )
     = Σ_{i=1}^{n} log (1/(√(2π)σ)) exp( −(y(i) − θTx(i))² / (2σ²) )
     = n log (1/(√(2π)σ)) − (1/σ²) · (1/2) Σ_{i=1}^{n} (y(i) − θTx(i))².
Hence, maximizing ℓ(θ) gives the same answer as minimizing

(1/2) Σ_{i=1}^{n} (y(i) − θTx(i))²,
which we recognize to be J(θ), our original least-squares cost function.
To summarize: Under the previous probabilistic assumptions on the data,
least-squares regression corresponds to finding the maximum likelihood esti-
mate of θ. This is thus one set of assumptions under which least-squares re-
gression can be justified as a very natural method that’s just doing maximum
likelihood estimation. (Note however that the probabilistic assumptions are
by no means necessary for least-squares to be a perfectly good and rational
procedure, and there may—and indeed there are—other natural assumptions
that can also be used to justify it.)
Note also that, in our previous discussion, our final choice of θ did not
depend on the value of σ², and indeed we’d have arrived at the same result
even if σ² were unknown. We will use this fact again later, when we talk
about the exponential family and generalized linear models.
Locally weighted linear regression (optional reading)
Consider the problem of predicting y from x ∈ R. The leftmost figure below
shows the result of fitting a line y = θ0 + θ1x to a dataset. We see that the
data doesn’t really lie on a straight line, and so the fit is not very good.
[Figure: three fits to the same dataset of y versus x: left, a linear fit; middle, a quadratic fit; right, a 5th-order polynomial fit.]
Instead, if we had added an extra feature x², and fit y = θ0 + θ1x + θ2x²,
then we obtain a slightly better fit to the data. (See middle figure) Naively, it
might seem that the more features we add, the better. However, there is also
a danger in adding too many features: The rightmost figure is the result of
fitting a 5-th order polynomial y = Σ_{j=0}^{5} θjx^j. We see that even though the
fitted curve passes through the data perfectly, we would not expect this to
be a very good predictor of, say, housing prices (y) for different living areas
(x). Without formally defining what these terms mean, we’ll say the figure
on the left shows an instance of underfitting—in which the data clearly
shows structure not captured by the model—and the figure on the right is
an example of overfitting. (Later in this class, when we talk about learning
theory we’ll formalize some of these notions, and also define more carefully
just what it means for a hypothesis to be good or bad.)
As discussed previously, and as shown in the example above, the choice of
features is important to ensuring good performance of a learning algorithm.
(When we talk about model selection, we’ll also see algorithms for automat-
ically choosing a good set of features.) In this section, let us briefly talk
about the locally weighted linear regression (LWR) algorithm which, assum-
ing there is sufficient training data, makes the choice of features less critical.
This treatment will be brief, since you’ll get a chance to explore some of the
properties of the LWR algorithm yourself in the homework.
In the original linear regression algorithm, to make a prediction at a query
point x (i.e., to evaluate h(x)), we would:

1. Fit θ to minimize Σi (y(i) − θTx(i))².

2. Output θTx.

In contrast, the locally weighted linear regression algorithm does the
following:

1. Fit θ to minimize Σi w(i) (y(i) − θTx(i))².

2. Output θTx.
Here, the w(i)’s are non-negative valued weights. Intuitively, if w(i) is large
for a particular value of i, then in picking θ, we’ll try hard to make (y(i) −
θTx(i))2 small. If w(i) is small, then the (y(i) − θTx(i))2 error term will be
pretty much ignored in the fit.
A fairly standard choice for the weights is[4]

w(i) = exp( −(x(i) − x)² / (2τ²) )
Note that the weights depend on the particular point x at which we’re trying
to evaluate h(x). Moreover, if |x(i) − x| is small, then w(i) is close to 1; and
if |x(i) − x| is large, then w(i) is small. Hence, θ is chosen giving a much
higher “weight” to the (errors on) training examples close to the query point
x. (Note also that while the formula for the weights takes a form that is
cosmetically similar to the density of a Gaussian distribution, the w(i)’s do
not directly have anything to do with Gaussians, and in particular the w(i)
are not random variables, normally distributed or otherwise.) The parameter
τ controls how quickly the weight of a training example falls off with distance
of its x(i) from the query point x; τ is called the bandwidth parameter, and
is also something that you’ll get to experiment with in your homework.
Locally weighted linear regression is the first example we’re seeing of a
non-parametric algorithm. The (unweighted) linear regression algorithm
that we saw earlier is known as a parametric learning algorithm, because
it has a fixed, finite number of parameters (the θi’s), which are fit to the
data. Once we’ve fit the θi’s and stored them away, we no longer need to
keep the training data around to make future predictions. In contrast, to
make predictions using locally weighted linear regression, we need to keep
the entire training set around. The term “non-parametric” (roughly) refers
to the fact that the amount of stuff we need to keep in order to represent the
hypothesis h grows linearly with the size of the training set.
[4] If x is vector-valued, this is generalized to be w(i) = exp(−(x(i)−x)T(x(i)−x)/(2τ²)),
or w(i) = exp(−(x(i)−x)TΣ−1(x(i)−x)/(2τ²)), for an appropriate choice of τ or Σ.
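The two LWR steps above can be sketched in Python for the one-dimensional case, solving the weighted normal equations in closed form at each query point (the dataset here is made up to be clearly non-linear):

```python
import math

# Locally weighted linear regression at a single query point (a sketch).
# Weights w_i = exp(-(x_i - x)^2 / (2 tau^2)); we fit a weighted
# least-squares line and output its value at the query point.
def lwr_predict(xs, ys, x_query, tau):
    w = [math.exp(-(xi - x_query) ** 2 / (2 * tau ** 2)) for xi in xs]
    # Weighted normal equations for the line theta0 + theta1 * x.
    a = sum(w)
    b = sum(wi * xi for wi, xi in zip(w, xs))
    d = sum(wi * xi * xi for wi, xi in zip(w, xs))
    u = sum(wi * yi for wi, yi in zip(w, ys))
    v = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    det = a * d - b * b
    theta0 = (d * u - b * v) / det
    theta1 = (a * v - b * u) / det
    return theta0 + theta1 * x_query

# Made-up, clearly non-linear data: y = x^2.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 4.0, 9.0, 16.0, 25.0]
print(lwr_predict(xs, ys, 3.0, tau=0.8))  # stays close to the curve at x = 3
```

Note that, as in the text, each query point requires a fresh weighted fit; nothing is precomputed and stored away.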
Chapter 2
Classification and logistic
regression
Let’s now talk about the classification problem. This is just like the regression
problem, except that the values y we now want to predict take on only
a small number of discrete values. For now, we will focus on the binary
classification problem in which y can take on only two values, 0 and 1.
(Most of what we say here will also generalize to the multiple-class case.)
For instance, if we are trying to build a spam classifier for email, then x(i)
may be some features of a piece of email, and y may be 1 if it is a piece
of spam mail, and 0 otherwise. 0 is also called the negative class, and 1
the positive class, and they are sometimes also denoted by the symbols “-”
and “+.” Given x(i), the corresponding y(i) is also called the label for the
training example.
Logistic regression
We could approach the classification problem ignoring the fact that y is
discrete-valued, and use our old linear regression algorithm to try to predict
y given x. However, it is easy to construct examples where this method
performs very poorly. Intuitively, it also doesn’t make sense for hθ(x) to take
values larger than 1 or smaller than 0 when we know that y ∈ {0, 1}.
To fix this, let’s change the form for our hypotheses hθ(x). We will choose
hθ(x) = g(θTx) = 1/(1 + e^(−θTx)),

where

g(z) = 1/(1 + e^(−z))
is called the logistic function or the sigmoid function. Here is a plot
showing g(z):
[Figure: plot of the sigmoid function g(z) for z from −5 to 5, rising from 0 toward 1.]
Notice that g(z) tends towards 1 as z → ∞, and g(z) tends towards 0 as
z → −∞. Moreover, g(z), and hence also h(x), is always bounded between
0 and 1. As before, we are keeping the convention of letting x0 = 1, so that
θTx = θ0 + Σ_{j=1}^{d} θjxj.
For now, let’s take the choice of g as given. Other functions that smoothly
increase from 0 to 1 can also be used, but for a couple of reasons that we’ll see
later (when we talk about GLMs, and when we talk about generative learning
algorithms), the choice of the logistic function is a fairly natural one. Before
moving on, here’s a useful property of the derivative of the sigmoid function,
which we write as g′:
g′(z) = (d/dz) [ 1/(1 + e^(−z)) ]
      = (1/(1 + e^(−z))²) (e^(−z))
      = (1/(1 + e^(−z))) · ( 1 − 1/(1 + e^(−z)) )
      = g(z)(1 − g(z)).
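This identity is easy to sanity-check numerically; the following is a small verification sketch comparing a central finite difference against g(z)(1 − g(z)):

```python
import math

def g(z):
    # Sigmoid function.
    return 1.0 / (1.0 + math.exp(-z))

def g_prime_numeric(z, eps=1e-6):
    # Central finite difference approximation of g'(z).
    return (g(z + eps) - g(z - eps)) / (2 * eps)

# Compare against the closed form g(z) * (1 - g(z)) at a few points.
for z in (-2.0, 0.0, 1.5):
    assert abs(g_prime_numeric(z) - g(z) * (1 - g(z))) < 1e-8
print("g'(z) = g(z)(1 - g(z)) holds at the sampled points")
```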
So, given the logistic regression model, how do we fit θ for it? Following
how we saw least squares regression could be derived as the maximum like-
lihood estimator under a set of assumptions, let’s endow our classification
model with a set of probabilistic assumptions, and then fit the parameters
via maximum likelihood.
Let us assume that
P(y = 1 | x; θ) = hθ(x)
P(y = 0 | x; θ) = 1 − hθ(x)

Note that this can be written more compactly as

p(y | x; θ) = (hθ(x))^y (1 − hθ(x))^(1−y)
Assuming that the n training examples were generated independently, we
can then write down the likelihood of the parameters as
L(θ) = p(~y | X; θ)
     = Π_{i=1}^{n} p(y(i) | x(i); θ)
     = Π_{i=1}^{n} (hθ(x(i)))^y(i) (1 − hθ(x(i)))^(1−y(i))
As before, it will be easier to maximize the log likelihood:
ℓ(θ) = log L(θ) = Σ_{i=1}^{n} [ y(i) log h(x(i)) + (1 − y(i)) log(1 − h(x(i))) ]
How do we maximize the likelihood? Similar to our derivation in the case
of linear regression, we can use gradient ascent. Written in vectorial notation,
our updates will therefore be given by θ := θ + α∇θ`(θ). (Note the positive
rather than negative sign in the update formula, since we’re maximizing,
rather than minimizing, a function now.) Let’s start by working with just
one training example (x, y), and take derivatives to derive the stochastic
gradient ascent rule:
∂ℓ(θ)/∂θj = ( y·(1/g(θTx)) − (1 − y)·(1/(1 − g(θTx))) ) · ∂g(θTx)/∂θj
          = ( y·(1/g(θTx)) − (1 − y)·(1/(1 − g(θTx))) ) · g(θTx)(1 − g(θTx)) · ∂(θTx)/∂θj
          = ( y(1 − g(θTx)) − (1 − y)g(θTx) ) xj
          = (y − hθ(x)) xj
Above, we used the fact that g′(z) = g(z)(1− g(z)). This therefore gives us
the stochastic gradient ascent rule
θj := θj + α (y(i) − hθ(x(i))) xj(i)
If we compare this to the LMS update rule, we see that it looks identical; but
this is not the same algorithm, because hθ(x(i)) is now defined as a non-linear
function of θTx(i). Nonetheless, it’s a little surprising that we end up with
the same update rule for a rather different algorithm and learning problem.
Is this coincidence, or is there a deeper reason behind this? We’ll answer this
when we get to GLM models.
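The stochastic gradient ascent rule above can be sketched in Python on a tiny made-up one-dimensional dataset (the data, learning rate, and number of passes are all illustrative assumptions):

```python
import math

def g(z):
    # The sigmoid (logistic) function.
    return 1.0 / (1.0 + math.exp(-z))

# Tiny made-up dataset: larger x tends to go with label y = 1.
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

theta0, theta1 = 0.0, 0.0
alpha = 0.1
for epoch in range(200):
    for x, y in data:
        h = g(theta0 + theta1 * x)   # h is a non-linear function of theta^T x
        # Same-looking update as the LMS rule, one example at a time.
        theta0 += alpha * (y - h)
        theta1 += alpha * (y - h) * x

print(theta0, theta1)   # the decision boundary theta^T x = 0 separates the classes
```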
Remark : An alternative notational viewpoint of the same loss func-
tion is also useful, especially for Section where we study nonlinear models.
Let ℓ_logistic : R × {0, 1} → R≥0 be the logistic loss defined as

    ℓ_logistic(t, y) ≜ y log(1 + exp(−t)) + (1 − y) log(1 + exp(t)).  ()
One can verify by plugging in hθ(x) = 1/(1 + e^(−θᵀx)) that the negative log-likelihood (the negation of ℓ(θ) in equation ()) can be re-written as

    −ℓ(θ) = ℓ_logistic(θᵀx, y).  ()
Oftentimes θᵀx or t is called the logit. Basic calculus gives us that

    ∂ℓ_logistic(t, y)/∂t = y · ( −exp(−t) / (1 + exp(−t)) ) + (1 − y) · 1/(1 + exp(−t))  ()
                        = 1/(1 + exp(−t)) − y.  ()
Then, using the chain rule, we have that

    ∂ℓ(θ)/∂θj = − ∂ℓ_logistic(t, y)/∂t · ∂t/∂θj  ()
             = ( y − 1/(1 + exp(−t)) ) · xj = (y − hθ(x)) xj,  ()
which is consistent with the derivation in equation (). We will see that this
viewpoint can be extended to nonlinear models in Section .
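The closed-form derivative above can be checked numerically against a centered finite difference; a small sketch (helper names are ours, not from the notes):

```python
import numpy as np

def logistic_loss(t, y):
    # l_logistic(t, y) = y*log(1 + e^{-t}) + (1 - y)*log(1 + e^{t})
    return y * np.log1p(np.exp(-t)) + (1 - y) * np.log1p(np.exp(t))

def logistic_loss_grad(t, y):
    # closed form derived above: 1/(1 + e^{-t}) - y
    return 1.0 / (1.0 + np.exp(-t)) - y
```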
Digression: the perceptron learning algorithm
We now digress to talk briefly about an algorithm that’s of some historical
interest, and that we will also return to later when we talk about learning
theory. Consider modifying the logistic regression method to “force” it to
output values that are exactly either 0 or 1. To do so, it seems natural to
change the definition of g to be the threshold function:
    g(z) = 1 if z ≥ 0;  0 if z < 0
If we then let hθ(x) = g(θᵀx) as before but using this modified definition of
g, and if we use the update rule

    θj := θj + α ( y^(i) − hθ(x^(i)) ) xj^(i),

then we have the perceptron learning algorithm.
In the 1960s, this “perceptron” was argued to be a rough model for how
individual neurons in the brain work. Given how simple the algorithm is, it
will also provide a starting point for our analysis when we talk about learning
theory later in this class. Note however that even though the perceptron may
be cosmetically similar to the other algorithms we talked about, it is actually
a very different type of algorithm than logistic regression and least squares
linear regression; in particular, it is difficult to endow the perceptron’s predic-
tions with meaningful probabilistic interpretations, or derive the perceptron
as a maximum likelihood estimation algorithm.
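The update rule above, with the threshold g, can be sketched directly in code; a minimal NumPy version (names and toy data are illustrative):

```python
import numpy as np

def perceptron_train(X, y, alpha=1.0, epochs=10):
    """Perceptron updates: theta_j += alpha * (y_i - g(theta^T x_i)) * x_ij,
    where g is the 0/1 threshold function. The update fires only
    on misclassified examples, since y_i - g(theta^T x_i) is 0 otherwise."""
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 if theta @ xi >= 0 else 0.0
            theta += alpha * (yi - pred) * xi
    return theta
```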
Multi-class classification
Consider a classification problem in which the response variable y can take on
any one of k values, so y ∈ {1, 2, . . . , k}. For example, rather than classifying
emails into the two classes spam or not-spam—which would have been a
binary classification problem—we might want to classify them into three
classes, such as spam, personal mails, and work-related mails. The label /
response variable is still discrete, but can now take on more than two values.
We will thus model it as distributed according to a multinomial distribution.
In this case, p(y | x; θ) is a distribution over k possible discrete outcomes
and is thus a multinomial distribution. Recall that a multinomial distribu-
tion involves k numbers φ1, . . . , φk specifying the probability of each of the
outcomes. Note that these numbers must satisfy Σ_{i=1}^k φi = 1. We will
design a parameterized model that outputs φ1, . . . , φk satisfying this constraint
given the input x.
We introduce k groups of parameters θ1, . . . , θk, each of them being a
vector in R^d. Intuitively, we would like to use θ1ᵀx, . . . , θkᵀx to represent
φ1, . . . , φk, the probabilities P(y = 1 | x; θ), . . . , P(y = k | x; θ). However,
there are two issues with such a direct approach. First, θjᵀx is not necessarily
within [0, 1]. Second, the summation of the θjᵀx’s is not necessarily 1.
Thus, instead, we will use the softmax function to turn (θ1ᵀx, · · · , θkᵀx) into
a probability vector with nonnegative entries that sum up to 1.
Define the softmax function softmax : R^k → R^k as

    softmax(t1, . . . , tk) = ( exp(t1) / Σ_{j=1}^k exp(tj), . . . , exp(tk) / Σ_{j=1}^k exp(tj) ).  ()
The inputs to the softmax function, the vector t here, are often called log-
its. Note that by definition, the output of the softmax function is always a
probability vector whose entries are nonnegative and sum up to 1.
Let (t1, . . . , tk) = (θ1ᵀx, · · · , θkᵀx). We apply the softmax function to
(t1, . . . , tk), and use the output as the probabilities P(y = 1 | x; θ), . . . ,
P(y = k | x; θ). We obtain the following probabilistic model:

    ( P(y = 1 | x; θ), . . . , P(y = k | x; θ) ) = softmax(t1, · · · , tk)
        = ( exp(θ1ᵀx) / Σ_{j=1}^k exp(θjᵀx), . . . , exp(θkᵀx) / Σ_{j=1}^k exp(θjᵀx) ).  ()
For notational convenience, we will let φi = exp(ti) / Σ_{j=1}^k exp(tj). More
succinctly, the equation above can be written as:

    P(y = i | x; θ) = φi = exp(ti) / Σ_{j=1}^k exp(tj) = exp(θiᵀx) / Σ_{j=1}^k exp(θjᵀx).  ()
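The softmax function itself is a one-liner in NumPy. A standard practical refinement, not required by the math above, is to subtract max(t) before exponentiating; this leaves the output unchanged (numerator and denominator are both scaled by e^(−max(t))) but avoids overflow for large logits:

```python
import numpy as np

def softmax(t):
    """Map a logit vector t to a probability vector:
    softmax(t)_i = exp(t_i) / sum_j exp(t_j).
    Shifting by max(t) is a numerical-stability trick only."""
    t = np.asarray(t, dtype=float)
    z = np.exp(t - t.max())
    return z / z.sum()
```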
Next, we compute the negative log-likelihood of a single example (x, y).
    − log p(y | x, θ) = − log ( exp(ty) / Σ_{j=1}^k exp(tj) ) = − log ( exp(θyᵀx) / Σ_{j=1}^k exp(θjᵀx) )  ()
Thus, the loss function, the negative log-likelihood of the training data, is
given as
    ℓ(θ) = Σ_{i=1}^n − log ( exp(θ_{y^(i)}ᵀ x^(i)) / Σ_{j=1}^k exp(θjᵀ x^(i)) ).  ()
It’s convenient to define the cross-entropy loss ℓce : R^k × {1, . . . , k} → R≥0,
which modularizes the complex equation above:1

    ℓce((t1, . . . , tk), y) = − log ( exp(ty) / Σ_{j=1}^k exp(tj) ).  ()
With this notation, we can simply rewrite equation () as
    ℓ(θ) = Σ_{i=1}^n ℓce( (θ1ᵀx^(i), . . . , θkᵀx^(i)), y^(i) ).  ()
Moreover, conveniently, the cross-entropy loss also has a simple gradient. Let
t = (t1, . . . , tk), and recall φi = exp(ti) / Σ_{j=1}^k exp(tj). By basic calculus,
we can derive

    ∂ℓce(t, y)/∂ti = φi − 1{y = i},  ()

where 1{·} is the indicator function, that is, 1{y = i} = 1 if y = i, and
1{y = i} = 0 if y ≠ i. Alternatively, in vectorized notations, we have the
following form which will be useful for Chapter 7:
    ∂ℓce(t, y)/∂t = φ − ey,  ()

where es ∈ R^k is the s-th natural basis vector (where the s-th entry is 1 and
all other entries are zeros.) Using the chain rule, we have that

    ∂ℓce((θ1ᵀx, . . . , θkᵀx), y)/∂θi = ∂ℓce(t, y)/∂ti · ∂ti/∂θi = (φi − 1{y = i}) · x.  ()
Therefore, the gradient of the loss with respect to the parameter vector θi is

    ∂ℓ(θ)/∂θi = Σ_{j=1}^n ( φi^(j) − 1{y^(j) = i} ) · x^(j),  ()

where φi^(j) = exp(θiᵀx^(j)) / Σ_{s=1}^k exp(θsᵀx^(j)) is the probability that the model
predicts class i for example x^(j). With the gradients above, one can implement
(stochastic) gradient descent to minimize the loss function ℓ(θ).
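The loss and its gradient φ − e_y translate directly into code; a minimal sketch (labels are 0-indexed here, whereas the notes use classes 1, . . . , k):

```python
import numpy as np

def softmax(t):
    z = np.exp(t - t.max())
    return z / z.sum()

def cross_entropy(t, y):
    """l_ce(t, y) = -log(exp(t_y) / sum_j exp(t_j)), y a 0-indexed label."""
    phi = softmax(t)
    return -np.log(phi[y])

def cross_entropy_grad(t, y):
    """Gradient in the logits: phi - e_y, where e_y is the y-th basis vector."""
    phi = softmax(t)
    e_y = np.zeros_like(phi)
    e_y[y] = 1.0
    return phi - e_y
```

A finite-difference check confirms the closed-form gradient; note also that its entries sum to zero, since φ and e_y each sum to 1.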
1There is some ambiguity in the naming here. Some people use “cross-entropy loss”
for the function that maps the probability vector (the φ in our language) and the label y
to the final real number, and call our version of the cross-entropy loss the softmax-cross-entropy
loss. We choose our current naming convention because it’s consistent with the
naming of most modern deep learning libraries such as PyTorch and JAX.
Another algorithm for maximizing `(θ)
Returning to logistic regression with g(z) being the sigmoid function, let’s
now talk about a different algorithm for maximizing `(θ).
To get us started, let’s consider Newton’s method for finding a zero of a
function. Specifically, suppose we have some function f : R 7→ R, and we
wish to find a value of θ so that f(θ) = 0. Here, θ ∈ R is a real number.
Newton’s method performs the following update:
    θ := θ − f(θ)/f′(θ).
This method has a natural interpretation in which we can think of it as
approximating the function f via a linear function that is tangent to f at
the current guess θ, solving for where that linear function equals to zero, and
letting the next guess for θ be where that linear function is zero.
Here’s a picture of Newton’s method in action:
In the leftmost figure, we see the function f plotted along with the line
y = 0. We’re trying to find θ so that f(θ) = 0. Suppose we initialize the
algorithm with some guess for θ. Newton’s method then fits a straight line
tangent to f at that guess, and solves for where that line evaluates to 0.
(Middle figure.) This gives us the next guess for θ. The rightmost figure
shows the result of running one more iteration. After a few more iterations,
we rapidly approach the zero of f.
Newton’s method gives a way of getting to f(θ) = 0. What if we want to
use it to maximize some function `? The maxima of ` correspond to points
where its first derivative `′(θ) is zero. So, by letting f(θ) = `′(θ), we can use
the same algorithm to maximize ℓ, and we obtain the update rule:

    θ := θ − ℓ′(θ)/ℓ′′(θ).
(Something to think about: How would this change if we wanted to use
Newton’s method to minimize rather than maximize a function?)
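The one-dimensional update is a few lines of code; a minimal sketch (the function name and examples are ours):

```python
def newton_1d(f, fprime, theta0, iters=20):
    """Newton's method for solving f(theta) = 0:
    theta := theta - f(theta) / f'(theta).
    To maximize a function l, pass f = l' and fprime = l'',
    which recovers theta := theta - l'(theta)/l''(theta)."""
    theta = theta0
    for _ in range(iters):
        theta = theta - f(theta) / fprime(theta)
    return theta
```

For instance, the zero of f(θ) = θ² − 2 is √2, and maximizing ℓ(θ) = −(θ − 3)² means finding the zero of ℓ′(θ) = −2(θ − 3).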
Lastly, in our logistic regression setting, θ is vector-valued, so we need to
generalize Newton’s method to this setting. The generalization of Newton’s
method to this multidimensional setting (also called the Newton-Raphson
method) is given by
θ := θ −H−1∇θ`(θ).
Here, ∇θℓ(θ) is, as usual, the vector of partial derivatives of ℓ(θ) with respect
to the θi’s; and H is a d-by-d matrix (actually, (d+1)-by-(d+1), assuming that
we include the intercept term) called the Hessian, whose entries are given by

    Hij = ∂²ℓ(θ) / ∂θi ∂θj.
Newton’s method typically enjoys faster convergence than (batch) gra-
dient descent, and requires many fewer iterations to get very close to the
minimum. One iteration of Newton’s can, however, be more expensive than
one iteration of gradient descent, since it requires finding and inverting a
d-by-d Hessian; but so long as d is not too large, it is usually much faster
overall. When Newton’s method is applied to maximize the logistic regres-
sion log likelihood function `(θ), the resulting method is also called Fisher
scoring.
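For logistic regression the gradient and Hessian have closed forms, so the multidimensional update can be sketched directly; a minimal NumPy version assuming non-separable data so the maximizer is finite (names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_newton(X, y, iters=15):
    """Newton's method on the logistic log likelihood:
    grad = X^T (y - h),  H = -X^T diag(h*(1-h)) X,
    theta := theta - H^{-1} grad (computed via a linear solve,
    rather than forming H^{-1} explicitly)."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        h = sigmoid(X @ theta)
        grad = X.T @ (y - h)
        H = -(X.T * (h * (1 - h))) @ X
        theta = theta - np.linalg.solve(H, grad)
    return theta
```

At convergence the gradient X^T(y − h) should vanish, which gives a simple correctness check.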
Chapter 3
Generalized linear models
So far, we’ve seen a regression example, and a classification example. In the
regression example, we had y|x; θ ∼ N (µ, σ2), and in the classification one,
y|x; θ ∼ Bernoulli(φ), for some appropriate definitions of µ and φ as functions
of x and θ. In this section, we will show that both of these methods are
special cases of a broader family of models, called Generalized Linear Models
(GLMs).1 We will also show how other models in the GLM family can be
derived and applied to other classification and regression problems.
The exponential family
To work our way up to GLMs, we will begin by defining exponential family
distributions. We say that a class of distributions is in the exponential family
if it can be written in the form
p(y; η) = b(y) exp(ηTT (y)− a(η)) ()
Here, η is called the natural parameter (also called the canonical param-
eter) of the distribution; T (y) is the sufficient statistic (for the distribu-
tions we consider, it will often be the case that T (y) = y); and a(η) is the log
partition function. The quantity e−a(η) essentially plays the role of a nor-
malization constant, that makes sure the distribution p(y; η) sums/integrates
over y to 1.
A fixed choice of T , a and b defines a family (or set) of distributions that
is parameterized by η; as we vary η, we then get different distributions within
this family.
1The presentation of the material in this section takes inspiration from Michael I.
Jordan, Learning in graphical models (unpublished book draft), and also McCullagh and
Nelder, Generalized Linear Models (2nd ed.).
We now show that the Bernoulli and the Gaussian distributions are ex-
amples of exponential family distributions. The Bernoulli distribution with
mean φ, written Bernoulli(φ), specifies a distribution over y ∈ {0, 1}, so that
p(y = 1;φ) = φ; p(y = 0;φ) = 1 − φ. As we vary φ, we obtain Bernoulli
distributions with different means. We now show that this class of Bernoulli
distributions, ones obtained by varying φ, is in the exponential family; i.e.,
that there is a choice of T , a and b so that Equation () becomes exactly
the class of Bernoulli distributions.
We write the Bernoulli distribution as:
    p(y; φ) = φ^y (1 − φ)^(1−y)
            = exp( y log φ + (1 − y) log(1 − φ) )
            = exp( ( log( φ/(1 − φ) ) ) y + log(1 − φ) ).
Thus, the natural parameter is given by η = log(φ/(1− φ)). Interestingly, if
we invert this definition for η by solving for φ in terms of η, we obtain φ =
1/(1 + e−η). This is the familiar sigmoid function! This will come up again
when we derive logistic regression as a GLM. To complete the formulation
of the Bernoulli distribution as an exponential family distribution, we also
have
    T(y) = y
    a(η) = − log(1 − φ) = log(1 + e^η)
    b(y) = 1
This shows that the Bernoulli distribution can be written in the form of
Equation (), using an appropriate choice of T , a and b.
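This identity is easy to confirm numerically: for each φ and y, the Bernoulli pmf and the exponential-family form b(y) exp(ηT(y) − a(η)) should agree exactly. A small sketch (function names are ours):

```python
import numpy as np

def bernoulli_pmf(y, phi):
    """p(y; phi) = phi^y * (1 - phi)^(1 - y), for y in {0, 1}."""
    return phi ** y * (1 - phi) ** (1 - y)

def exp_family_bernoulli(y, phi):
    """The same distribution in exponential-family form
    b(y) * exp(eta * T(y) - a(eta)), with T(y) = y, b(y) = 1,
    eta = log(phi / (1 - phi)), a(eta) = log(1 + e^eta)."""
    eta = np.log(phi / (1 - phi))
    a = np.log1p(np.exp(eta))
    return np.exp(eta * y - a)
```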
Let’s now move on to consider the Gaussian distribution. Recall that,
when deriving linear regression, the value of σ² had no effect on our final
choice of θ and hθ(x). Thus, we can choose an arbitrary value for σ² without
changing anything. To simplify the derivation below, let’s set σ² = 1. We
2If we leave σ² as a variable, the Gaussian distribution can also be shown to be in the
exponential family, where η ∈ R² is now a 2-dimensional vector that depends on both µ and
σ. For the purposes of GLMs, however, the σ² parameter can also be treated by considering
a more general definition of the exponential family: p(y; η, τ) = b(y, τ) exp((ηᵀT(y) −
a(η))/c(τ)). Here, τ is called the dispersion parameter, and for the Gaussian, c(τ) = σ²;
but given our simplification above, we won’t need the more general definition for the
examples we will consider here.
then have:

    p(y; µ) = (1/√(2π)) exp( −(1/2)(y − µ)² )
            = (1/√(2π)) exp( −(1/2) y² ) · exp( µy − (1/2) µ² )
Thus, we see that the Gaussian is in the exponential family, with
    η = µ
    T(y) = y
    a(η) = µ²/2 = η²/2
    b(y) = (1/√(2π)) exp(−y²/2).
There’re many other distributions that are members of the exponen-
tial family: The multinomial (which we’ll see later), the Poisson (for mod-
elling count-data; also see the problem set); the gamma and the exponen-
tial (for modelling continuous, non-negative random variables, such as time-
intervals); the beta and the Dirichlet (for distributions over probabilities);
and many more. In the next section, we will describe a general “recipe”
for constructing models in which y (given x and θ) comes from any of these
distributions.
Constructing GLMs
Suppose you would like to build a model to estimate the number y of cus-
tomers arriving in your store (or number of page-views on your website) in
any given hour, based on certain features x such as store promotions, recent
advertising, weather, day-of-week, etc. We know that the Poisson distribu-
tion usually gives a good model for numbers of visitors. Knowing this, how
can we come up with a model for our problem? Fortunately, the Poisson is an
exponential family distribution, so we can apply a Generalized Linear Model
(GLM). In this section, we will describe a method for constructing
GLM models for problems such as these.
More generally, consider a classification or regression problem where we
would like to predict the value of some random variable y as a function of
x. To derive a GLM for this problem, we will make the following three
assumptions about the conditional distribution of y given x and about our
model:
1. y | x; θ ∼ ExponentialFamily(η). I.e., given x and θ, the distribution of
y follows some exponential family distribution, with parameter η.
2. Given x, our goal is to predict the expected value of T (y) given x.
In most of our examples, we will have T (y) = y, so this means we
would like the prediction h(x) output by our learned hypothesis h to
satisfy h(x) = E[y|x]. (Note that this assumption is satisfied in the
choices for hθ(x) for both logistic regression and linear regression. For
instance, in logistic regression, we had hθ(x) = p(y = 1|x; θ) = 0 · p(y =
0|x; θ) + 1 · p(y = 1|x; θ) = E[y|x; θ].)
3. The natural parameter η and the inputs x are related linearly: η = θᵀx.
(Or, if η is vector-valued, then ηi = θiᵀx.)
The third of these assumptions might seem the least well justified of
the above, and it might be better thought of as a “design choice” in our
recipe for designing GLMs, rather than as an assumption per se. These
three assumptions/design choices will allow us to derive a very elegant class
of learning algorithms, namely GLMs, that have many desirable properties
such as ease of learning. Furthermore, the resulting models are often very
effective for modelling different types of distributions over y; for example, we
will shortly show that both logistic regression and ordinary least squares can
be derived as GLMs.
Ordinary least squares
To show that ordinary least squares is a special case of the GLM family
of models, consider the setting where the target variable y (also called the
response variable in GLM terminology) is continuous, and we model the
conditional distribution of y given x as a Gaussian N(µ, σ²). (Here, µ may
depend on x.) So, we let the ExponentialFamily(η) distribution above be
the Gaussian distribution. As we saw previously, in the formulation of the
Gaussian as an exponential family distribution, we had µ = η. So, we have
hθ(x) = E[y|x; θ]
= µ
= η
= θTx.
The first equality follows from Assumption 2, above; the second equality
follows from the fact that y|x; θ ∼ N (µ, σ2), and so its expected value is given
by µ; the third equality follows from Assumption 1 (and our earlier derivation
showing that µ = η in the formulation of the Gaussian as an exponential
family distribution); and the last equality follows from Assumption 3.
Logistic regression
We now consider logistic regression. Here we are interested in binary classifi-
cation, so y ∈ {0, 1}. Given that y is binary-valued, it therefore seems natural
to choose the Bernoulli family of distributions to model the conditional dis-
tribution of y given x. In our formulation of the Bernoulli distribution as
an exponential family distribution, we had φ = 1/(1 + e−η). Furthermore,
note that if y|x; θ ∼ Bernoulli(φ), then E[y|x; θ] = φ. So, following a similar
derivation as the one for ordinary least squares, we get:
    hθ(x) = E[y|x; θ]
          = φ
          = 1/(1 + e^(−η))
          = 1/(1 + e^(−θᵀx))

So, this gives us hypothesis functions of the form hθ(x) = 1/(1 + e^(−θᵀx)). If
you were previously wondering how we came up with the form of the logistic
function 1/(1 + e^(−z)), this gives one answer: once we assume that y conditioned
on x is Bernoulli, it arises as a consequence of the definition of GLMs
and exponential family distributions.
To introduce a little more terminology, the function g giving the distri-
bution’s mean as a function of the natural parameter (g(η) = E[T (y); η])
is called the canonical response function. Its inverse, g−1, is called the
canonical link function. Thus, the canonical response function for the
Gaussian family is just the identity function; and the canonical response
function for the Bernoulli is the logistic function.3
3Many texts use g to denote the link function, and g−1 to denote the response function;
but the notation we’re using here, inherited from the early machine learning literature,
will be more consistent with the notation used in the rest of the class.
Chapter 4
Generative learning algorithms
So far, we’ve mainly been talking about learning algorithms that model
p(y|x; θ), the conditional distribution of y given x. For instance, logistic
regression modeled p(y|x; θ) as hθ(x) = g(θTx) where g is the sigmoid func-
tion. In these notes, we’ll talk about a different type of learning algorithm.
Consider a classification problem in which we want to learn to distinguish
between elephants (y = 1) and dogs (y = 0), based on some features of
an animal. Given a training set, an algorithm like logistic regression or
the perceptron algorithm (basically) tries to find a straight line—that is, a
decision boundary—that separates the elephants and dogs. Then, to classify
a new animal as either an elephant or a dog, it checks on which side of the
decision boundary it falls, and makes its prediction accordingly.
Here’s a different approach. First, looking at elephants, we can build a
model of what elephants look like. Then, looking at dogs, we can build a
separate model of what dogs look like. Finally, to classify a new animal, we
can match the new animal against the elephant model, and match it against
the dog model, to see whether the new animal looks more like the elephants
or more like the dogs we had seen in the training set.
Algorithms that try to learn p(y|x) directly (such as logistic regression),
or algorithms that try to learn mappings directly from the space of inputs X
to the labels {0, 1}, (such as the perceptron algorithm) are called discrim-
inative learning algorithms. Here, we’ll talk about algorithms that instead
try to model p(x|y) (and p(y)). These algorithms are called generative
learning algorithms. For instance, if y indicates whether an example is a
dog (0) or an elephant (1), then p(x|y = 0) models the distribution of dogs’
features, and p(x|y = 1) models the distribution of elephants’ features.
After modeling p(y) (called the class priors) and p(x|y), our algorithm
can then use Bayes rule to derive the posterior distribution on y given x:
    p(y|x) = p(x|y) p(y) / p(x).
Here, the denominator is given by p(x) = p(x|y = 1)p(y = 1) + p(x|y =
0)p(y = 0) (you should be able to verify that this is true from the standard
properties of probabilities), and thus can also be expressed in terms of the
quantities p(x|y) and p(y) that we’ve learned. Actually, if we were calculating
p(y|x) in order to make a prediction, then we don’t actually need to calculate
the denominator, since
    arg max_y p(y|x) = arg max_y p(x|y) p(y) / p(x)
                    = arg max_y p(x|y) p(y).
Gaussian discriminant analysis
The first generative learning algorithm that we’ll look at is Gaussian discrim-
inant analysis (GDA). In this model, we’ll assume that p(x|y) is distributed
according to a multivariate normal distribution. Let’s talk briefly about the
properties of multivariate normal distributions before moving on to the GDA
model itself.
The multivariate normal distribution
The multivariate normal distribution in d-dimensions, also called the multi-
variate Gaussian distribution, is parameterized by a mean vector µ ∈ Rd
and a covariance matrix Σ ∈ Rd×d, where Σ ≥ 0 is symmetric and positive
semi-definite. Also written “N (µ,Σ)”, its density is given by:
    p(x; µ, Σ) = 1 / ( (2π)^(d/2) |Σ|^(1/2) ) · exp( −(1/2)(x − µ)ᵀ Σ⁻¹ (x − µ) ).
In the equation above, “|Σ|” denotes the determinant of the matrix Σ.
For a random variable X distributed N (µ,Σ), the mean is (unsurpris-
ingly) given by µ:
    E[X] = ∫ x p(x; µ, Σ) dx = µ
The covariance of a vector-valued random variable Z is defined as Cov(Z) =
E[(Z − E[Z])(Z − E[Z])T ]. This generalizes the notion of the variance of a
real-valued random variable. The covariance can also be defined as Cov(Z) =
E[ZZT ]− (E[Z])(E[Z])T . (You should be able to prove to yourself that these
two definitions are equivalent.) If X ∼ N (µ,Σ), then
Cov(X) = Σ.
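The density formula above translates directly into code; a minimal sketch (the function name is ours), using a linear solve for Σ⁻¹(x − µ) rather than an explicit inverse:

```python
import numpy as np

def mvn_density(x, mu, Sigma):
    """Density of N(mu, Sigma) in d dimensions:
    (2*pi)^(-d/2) * |Sigma|^(-1/2) * exp(-0.5 (x-mu)^T Sigma^{-1} (x-mu))."""
    d = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)   # (x-mu)^T Sigma^{-1} (x-mu)
    norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm
```

As a sanity check, at x = µ the standard normal density is 1/√(2π) in one dimension and 1/(2π) in two.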
Here are some examples of what the density of a Gaussian distribution
looks like:
The left-most figure shows a Gaussian with mean zero (that is, the 2x1
zero-vector) and covariance matrix Σ = I (the 2x2 identity matrix). A Gaus-
sian with zero mean and identity covariance is also called the standard nor-
mal distribution. The middle figure shows the density of a Gaussian with
zero mean and Σ = ; and in the rightmost figure shows one with , Σ = 2I.
We see that as Σ becomes larger, the Gaussian becomes more “spread-out,”
and as it becomes smaller, the distribution becomes more “compressed.”
Let’s look at some more examples.
The figures above show Gaussians with mean 0, and with covariance
matrices respectively
    Σ = [1, 0; 0, 1];   Σ = [1, ·; ·, 1];   Σ = [1, ·; ·, 1].
The leftmost figure shows the familiar standard normal distribution, and we
see that as we increase the off-diagonal entry in Σ, the density becomes more
“compressed” towards the 45◦ line (given by x1 = x2). We can see this more
clearly when we look at the contours of the same three densities:
Here’s one last set of examples generated by varying Σ:
The plots above used, respectively,
    Σ = [1, ·; ·, 1];   Σ = [1, ·; ·, 1];   Σ = [3, ·; ·, 1].
From the leftmost and middle figures, we see that by decreasing the off-
diagonal elements of the covariance matrix, the density now becomes “com-
pressed” again, but in the opposite direction. Lastly, as we vary the pa-
rameters, more generally the contours will form ellipses (the rightmost figure
showing an example).
As our last set of examples, fixing Σ = I, by varying µ, we can also move
the mean of the density around.
The figures above were generated using Σ = I, and respectively
    µ = [1; 0];   µ = [·; 0];   µ = [−1; ·].
The Gaussian discriminant analysis model
When we have a classification problem in which the input features x are
continuous-valued random variables, we can then use the Gaussian Discrim-
inant Analysis (GDA) model, which models p(x|y) using a multivariate nor-
mal distribution. The model is:
y ∼ Bernoulli(φ)
x|y = 0 ∼ N (µ0,Σ)
x|y = 1 ∼ N (µ1,Σ)
Writing out the distributions, this is:
    p(y) = φ^y (1 − φ)^(1−y)
    p(x|y = 0) = 1 / ( (2π)^(d/2) |Σ|^(1/2) ) · exp( −(1/2)(x − µ0)ᵀ Σ⁻¹ (x − µ0) )
    p(x|y = 1) = 1 / ( (2π)^(d/2) |Σ|^(1/2) ) · exp( −(1/2)(x − µ1)ᵀ Σ⁻¹ (x − µ1) )
Here, the parameters of our model are φ, Σ, µ0 and µ1. (Note that while
there’re two different mean vectors µ0 and µ1, this model is usually applied
using only one covariance matrix Σ.) The log-likelihood of the data is given
by
`(φ, µ0, µ1,Σ) = log
n∏
i=1
p(x(i), y(i);φ, µ0, µ1,Σ)
= log
n∏
i=1
p(x(i)|y(i);µ0, µ1,Σ)p(y(i);φ).
By maximizing ` with respect to the parameters, we find the maximum like-
lihood estimate of the parameters (see problem set 1) to be:
    φ = (1/n) Σ_{i=1}^n 1{y^(i) = 1}

    µ0 = ( Σ_{i=1}^n 1{y^(i) = 0} x^(i) ) / ( Σ_{i=1}^n 1{y^(i) = 0} )

    µ1 = ( Σ_{i=1}^n 1{y^(i) = 1} x^(i) ) / ( Σ_{i=1}^n 1{y^(i) = 1} )

    Σ = (1/n) Σ_{i=1}^n (x^(i) − µ_{y^(i)})(x^(i) − µ_{y^(i)})ᵀ.
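Because the maximum likelihood estimates are in closed form, fitting GDA reduces to computing class frequencies, class means, and a pooled covariance; a minimal NumPy sketch (the function name and toy data are illustrative):

```python
import numpy as np

def gda_fit(X, y):
    """Closed-form MLEs for GDA with a shared covariance matrix:
    phi = fraction of examples with y = 1, mu_k = per-class feature means,
    Sigma = average outer product of each example's deviation from its
    own class mean."""
    n = len(y)
    phi = np.mean(y == 1)
    mu0 = X[y == 0].mean(axis=0)
    mu1 = X[y == 1].mean(axis=0)
    # subtract mu_{y^(i)} from each row, then pool the outer products
    diffs = X - np.where(y[:, None] == 1, mu1, mu0)
    Sigma = diffs.T @ diffs / n
    return phi, mu0, mu1, Sigma
```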
Pictorially, what the algorithm is doing can be seen as follows:
Shown in the figure are the training set, as well as the contours of the
two Gaussian distributions that have been fit to the data in each of the
two classes. Note that the two Gaussians have contours that are the same
shape and orientation, since they share a covariance matrix Σ, but they have
different means µ0 and µ1. Also shown in the figure is the straight line
giving the decision boundary at which p(y = 1|x) = 0.5. On one side of
the boundary, we’ll predict y = 1 to be the most likely outcome, and on the
other side, we’ll predict y = 0.
Discussion: GDA and logistic regression
The GDA model has an interesting relationship to logistic regression. If we
view the quantity p(y = 1|x;φ, µ0, µ1,Σ) as a function of x, we’ll find that it
can be expressed in the form
    p(y = 1|x; φ, Σ, µ0, µ1) = 1 / (1 + exp(−θᵀx)),

where θ is some appropriate function of φ, Σ, µ0, µ1.1 This is exactly the form
that logistic regression—a discriminative algorithm—used to model p(y =
1|x).
When would we prefer one model over another? GDA and logistic regres-
sion will, in general, give different decision boundaries when trained on the
same dataset. Which is better?
We just argued that if p(x|y) is multivariate gaussian (with shared Σ),
then p(y|x) necessarily follows a logistic function. The converse, however,
is not true; i.e., p(y|x) being a logistic function does not imply p(x|y) is
multivariate gaussian. This shows that GDA makes stronger modeling as-
sumptions about the data than does logistic regression. It turns out that
when these modeling assumptions are correct, then GDA will find better fits
to the data, and is a better model. Specifically, when p(x|y) is indeed gaus-
sian (with shared Σ), then GDA is asymptotically efficient. Informally,
this means that in the limit of very large training sets (large n), there is no
algorithm that is strictly better than GDA (in terms of, say, how accurately
they estimate p(y|x)). In particular, it can be shown that in this setting,
GDA will be a better algorithm than logistic regression; and more generally,
even for small training set sizes, we would generally expect GDA to do better.
In contrast, by making significantly weaker assumptions, logistic regres-
sion is also more robust and less sensitive to incorrect modeling assumptions.
There are many different sets of assumptions that would lead to p(y|x) taking
the form of a logistic function. For example, if x|y = 0 ∼ Poisson(λ0), and
x|y = 1 ∼ Poisson(λ1), then p(y|x) will be logistic. Logistic regression will
also work well on Poisson data like this. But if we were to use GDA on such
data—and fit Gaussian distributions to such non-Gaussian data—then the
results will be less predictable, and GDA may (or may not) do well.
To summarize: GDA makes stronger modeling assumptions, and is more
data efficient (i.e., requires less training data to learn “well”) when the modeling
assumptions are correct or at least approximately correct. Logistic
1This uses the convention of redefining the x^(i)’s on the right-hand-side to be (d + 1)-
dimensional vectors by adding the extra coordinate x0^(i) = 1; see problem set 1.
regression makes weaker assumptions, and is significantly more robust to
deviations from modeling assumptions. Specifically, when the data is in-
deed non-Gaussian, then in the limit of large datasets, logistic regression will
almost always do better than GDA. For this reason, in practice logistic re-
gression is used more often than GDA. (Some related considerations about
discriminative vs. generative models also apply for the Naive Bayes algo-
rithm that we discuss next, but the Naive Bayes algorithm is still considered
a very good, and is certainly also a very popular, classification algorithm.)
Naive Bayes (Optional Reading)
In GDA, the feature vectors x were continuous, real-valued vectors. Let’s
now talk about a different learning algorithm in which the xj’s are discrete-
valued.
For our motivating example, consider building an email spam filter using
machine learning. Here, we wish to classify messages according to whether
they are unsolicited commercial (spam) email, or non-spam email. After
learning to do this, we can then have our mail reader automatically filter
out the spam messages and perhaps place them in a separate mail folder.
Classifying emails is one example of a broader set of problems called text
classification.
Let’s say we have a training set (a set of emails labeled as spam or non-
spam). We’ll begin our construction of our spam filter by specifying the
features xj used to represent an email.
We will represent an email via a feature vector whose length is equal to
the number of words in the dictionary. Specifically, if an email contains the
j-th word of the dictionary, then we will set xj = 1; otherwise, we let xj = 0.
For instance, the vector
    x = (1, 0, 0, . . . , 1, . . . , 0)ᵀ,

whose coordinates correspond to the dictionary words “a”, “aardvark”,
“aardwolf”, . . . , “buy”, . . . , “zygmurgy”,
is used to represent an email that contains the words “a” and “buy,” but not
“aardvark,” “aardwolf” or “zygmurgy.”2 The set of words encoded into the
feature vector is called the vocabulary, so the dimension of x is equal to
the size of the vocabulary.
Having chosen our feature vector, we now want to build a generative
model. So, we have to model p(x|y). But if we have, say, a vocabulary of
50000 words, then x ∈ {0, 1}^50000 (x is a 50000-dimensional vector of 0’s and
1’s), and if we were to model x explicitly with a multinomial distribution over
the 2^50000 possible outcomes, then we’d end up with a (2^50000 − 1)-dimensional
parameter vector. This is clearly too many parameters.
To model p(x|y), we will therefore make a very strong assumption. We will
assume that the xj’s are conditionally independent given y. This assumption
is called the Naive Bayes (NB) assumption, and the resulting algorithm is
called the Naive Bayes classifier. For instance, if y = 1 means spam email;
“buy” is word 2087 and “price” is word 39831; then we are assuming that if
I tell you y = 1 (that a particular piece of email is spam), then knowledge
of x2087 (knowledge of whether “buy” appears in the message) will have no
effect on your beliefs about the value of x39831 (whether “price” appears).
More formally, this can be written p(x2087|y) = p(x2087|y, x39831). (Note that
this is not the same as saying that x2087 and x39831 are independent, which
would have been written “p(x2087) = p(x2087|x39831)”; rather, we are only
assuming that x2087 and x39831 are conditionally independent given y.)
We now have:

p(x1, . . . , x50000|y) = p(x1|y) p(x2|y, x1) p(x3|y, x1, x2) · · · p(x50000|y, x1, . . . , x49999)
= p(x1|y) p(x2|y) p(x3|y) · · · p(x50000|y)
= ∏_{j=1}^{d} p(xj|y)
The first equality simply follows from the usual properties of probabilities,
and the second equality used the NB assumption. We note that even though
2Actually, rather than looking through an English dictionary for the list of all English
words, in practice it is more common to look through our training set and encode in our
feature vector only the words that occur at least once there. Apart from reducing the
number of words modeled and hence reducing our computational and space requirements,
this also has the advantage of allowing us to model/include as a feature many words
that may appear in your email (such as “cs229”) but that you won’t find in a dictionary.
Sometimes (as in the homework), we also exclude the very high frequency words (which
will be words like “the,” “of,” “and”; these high frequency, “content free” words are called
stop words) since they occur in so many documents and do little to indicate whether an
email is spam or non-spam.
the Naive Bayes assumption is an extremely strong assumption, the resulting
algorithm works well on many problems.
Our model is parameterized by φj|y=1 = p(xj = 1|y = 1), φj|y=0 = p(xj =
1|y = 0), and φy = p(y = 1). As usual, given a training set {(x(i), y(i)); i =
1, . . . , n}, we can write down the joint likelihood of the data:
L(φy, φj|y=0, φj|y=1) = ∏_{i=1}^{n} p(x^{(i)}, y^{(i)}).
Maximizing this with respect to φy, φj|y=0 and φj|y=1 gives the maximum
likelihood estimates:
φj|y=1 = Σ_{i=1}^{n} 1{x_j^{(i)} = 1 ∧ y^{(i)} = 1} / Σ_{i=1}^{n} 1{y^{(i)} = 1}

φj|y=0 = Σ_{i=1}^{n} 1{x_j^{(i)} = 1 ∧ y^{(i)} = 0} / Σ_{i=1}^{n} 1{y^{(i)} = 0}

φy = Σ_{i=1}^{n} 1{y^{(i)} = 1} / n
In the equations above, the “∧” symbol means “and.” The parameters have
a very natural interpretation. For instance, φj|y=1 is just the fraction of the
spam (y = 1) emails in which word j does appear.
Having fit all these parameters, to make a prediction on a new example
with features x, we then simply calculate
p(y = 1|x) = p(x|y = 1) p(y = 1) / p(x)
= (∏_{j=1}^{d} p(xj|y = 1)) p(y = 1) / [(∏_{j=1}^{d} p(xj|y = 1)) p(y = 1) + (∏_{j=1}^{d} p(xj|y = 0)) p(y = 0)],
and pick whichever class has the higher posterior probability.
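In code, the maximum likelihood estimates and this posterior calculation can be sketched in a few lines of NumPy (the function and variable names below are our own; this is a minimal illustration, not a production spam filter):

```python
import numpy as np

def fit_naive_bayes(X, y):
    """Maximum likelihood estimates for the Bernoulli Naive Bayes model.

    X: (n, d) binary matrix, X[i, j] = 1 iff word j appears in email i.
    y: (n,) array of 0/1 labels (1 = spam).
    """
    phi_y = y.mean()                       # p(y = 1)
    phi_j_y1 = X[y == 1].mean(axis=0)      # fraction of spam emails containing word j
    phi_j_y0 = X[y == 0].mean(axis=0)      # fraction of non-spam emails containing word j
    return phi_y, phi_j_y1, phi_j_y0

def posterior_spam(x, phi_y, phi_j_y1, phi_j_y0):
    """p(y = 1 | x) for a single binary feature vector x."""
    # Under the NB assumption, p(x|y) factorizes as a product over words.
    p_x_y1 = np.prod(np.where(x == 1, phi_j_y1, 1 - phi_j_y1))
    p_x_y0 = np.prod(np.where(x == 1, phi_j_y0, 1 - phi_j_y0))
    num = p_x_y1 * phi_y
    return num / (num + p_x_y0 * (1 - phi_y))
```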
Lastly, we note that while we have developed the Naive Bayes algorithm
mainly for the case of problems where the features xj are binary-valued, the
generalization to where xj can take values in {1, 2, . . . , kj} is straightforward.
Here, we would simply model p(xj|y) as multinomial rather than as Bernoulli.
Indeed, even if some original input attribute (say, the living area of a house,
as in our earlier example) were continuous valued, it is quite common to
discretize it—that is, turn it into a small set of discrete values—and apply
Naive Bayes. For instance, if we use some feature xj to represent living area,
we might discretize the continuous values as follows:
Living area (sq. feet):  < 400 | 400–800 | 800–1200 | 1200–1600 | > 1600
Feature value xj:           1  |    2    |     3    |     4     |    5
Thus, for a house with living area 890 square feet, we would set the value
of the corresponding feature xj to 3. We can then apply the Naive Bayes
algorithm, and model p(xj|y) with a multinomial distribution, as described
previously. When the original, continuous-valued attributes are not well-
modeled by a multivariate normal distribution, discretizing the features and
using Naive Bayes (instead of GDA) will often result in a better classifier.
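As a sketch, the discretization in the table can be implemented with NumPy's `digitize` (the bin edges below simply mirror the table; they are illustrative, not prescribed):

```python
import numpy as np

# Bin edges (sq. feet) taken from the table above.
bins = np.array([400, 800, 1200, 1600])

def discretize_area(area):
    # np.digitize returns 0 for area < 400, ..., 4 for area > 1600;
    # add 1 to match the feature values 1..5 in the table.
    return np.digitize(area, bins) + 1
```

For the 890-square-foot house in the running example, `discretize_area(890)` returns 3.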
Laplace smoothing
The Naive Bayes algorithm as we have described it will work fairly well
for many problems, but there is a simple change that makes it work much
better, especially for text classification. Let’s briefly discuss a problem with
the algorithm in its current form, and then talk about how we can fix it.
Consider spam/email classification, and let’s suppose that it is the year
20xx: after completing CS229 and having done excellent work on the
project, you decide around May 20xx to submit the work you did to the NeurIPS
conference for review.3 Because you end up discussing the conference
in your emails, you also start getting messages with the word “neurips”
in it. But this is your first NeurIPS paper, and until this time, you had
not previously seen any emails containing the word “neurips”; in particular
“neurips” did not ever appear in your training set of spam/non-spam emails.
Assuming that “neurips” was the 35000th word in the dictionary, your Naive
Bayes spam filter therefore had picked its maximum likelihood estimates of
the parameters φ35000|y to be
φ35000|y=1 = Σ_{i=1}^{n} 1{x_{35000}^{(i)} = 1 ∧ y^{(i)} = 1} / Σ_{i=1}^{n} 1{y^{(i)} = 1} = 0

φ35000|y=0 = Σ_{i=1}^{n} 1{x_{35000}^{(i)} = 1 ∧ y^{(i)} = 0} / Σ_{i=1}^{n} 1{y^{(i)} = 0} = 0
I.e., because it has never seen “neurips” before in either spam or non-spam
training examples, it thinks the probability of seeing it in either type of email
is zero. Hence, when trying to decide if one of these messages containing
3NeurIPS is one of the top machine learning conferences. The deadline for submitting
a paper is typically in May-June.
“neurips” is spam, it calculates the class posterior probabilities, and obtains
p(y = 1|x) = ∏_{j=1}^{d} p(xj|y = 1) p(y = 1) / [∏_{j=1}^{d} p(xj|y = 1) p(y = 1) + ∏_{j=1}^{d} p(xj|y = 0) p(y = 0)] = 0/0.

This is because each of the terms “∏_{j=1}^{d} p(xj|y)” includes a term p(x35000|y) =
0 that is multiplied into it. Hence, our algorithm obtains 0/0, and doesn’t
know how to make a prediction.
Stating the problem more broadly, it is statistically a bad idea to esti-
mate the probability of some event to be zero just because you haven’t seen
it before in your finite training set. Take the problem of estimating the mean
of a multinomial random variable z taking values in {1, . . . , k}. We can pa-
rameterize our multinomial with φj = p(z = j). Given a set of n independent
observations {z(1), . . . , z(n)}, the maximum likelihood estimates are given by
φj = Σ_{i=1}^{n} 1{z^{(i)} = j} / n.
As we saw previously, if we were to use these maximum likelihood estimates,
then some of the φj’s might end up as zero, which was a problem. To avoid
this, we can use Laplace smoothing, which replaces the above estimate
with
φj = (1 + Σ_{i=1}^{n} 1{z^{(i)} = j}) / (k + n).

Here, we’ve added 1 to the numerator, and k to the denominator. Note that
Σ_{j=1}^{k} φj = 1 still holds (check this yourself!), which is a desirable property
since the φj’s are estimates for probabilities that we know must sum to 1.
Also, φj ≠ 0 for all values of j, solving our problem of probabilities being
estimated as zero. Under certain (arguably quite strong) conditions, it can
be shown that the Laplace smoothing actually gives the optimal estimator
of the φj’s.
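A minimal sketch of this smoothed estimator in NumPy (the function name is our own):

```python
import numpy as np

def laplace_smoothed_estimate(z, k):
    """Estimate phi_j = p(z = j) for z taking values in {1, ..., k},
    using add-one (Laplace) smoothing."""
    z = np.asarray(z)
    n = len(z)
    counts = np.array([(z == j).sum() for j in range(1, k + 1)])
    # Add 1 to each count and k to the denominator.
    return (1 + counts) / (k + n)
```

The estimates still sum to 1, and none of them can be zero even for values of j never observed.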
Returning to our Naive Bayes classifier, with Laplace smoothing, we
therefore obtain the following estimates of the parameters:
φj|y=1 = (1 + Σ_{i=1}^{n} 1{x_j^{(i)} = 1 ∧ y^{(i)} = 1}) / (2 + Σ_{i=1}^{n} 1{y^{(i)} = 1})

φj|y=0 = (1 + Σ_{i=1}^{n} 1{x_j^{(i)} = 1 ∧ y^{(i)} = 0}) / (2 + Σ_{i=1}^{n} 1{y^{(i)} = 0})
(In practice, it usually doesn’t matter much whether we apply Laplace smooth-
ing to φy or not, since we will typically have a fair fraction each of spam and
non-spam messages, so φy will be a reasonable estimate of p(y = 1) and will
be quite far from 0 anyway.)
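The smoothed Naive Bayes estimates above translate directly into code; a sketch assuming a binary feature matrix as before (names are our own):

```python
import numpy as np

def fit_nb_laplace(X, y):
    """Laplace-smoothed parameter estimates for Bernoulli Naive Bayes.

    X: (n, d) binary matrix; y: (n,) array of 0/1 labels.
    """
    n1 = (y == 1).sum()
    n0 = (y == 0).sum()
    # Add 1 to each word count and 2 to each class count (x_j is binary).
    phi_j_y1 = (1 + X[y == 1].sum(axis=0)) / (2 + n1)
    phi_j_y0 = (1 + X[y == 0].sum(axis=0)) / (2 + n0)
    phi_y = y.mean()   # left unsmoothed, as discussed above
    return phi_y, phi_j_y1, phi_j_y0
```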
Event models for text classification
To close off our discussion of generative learning algorithms, let’s talk about
one more model that is specifically for text classification. While Naive Bayes
as we’ve presented it will work well for many classification problems, for text
classification, there is a related model that does even better.
In the specific context of text classification, Naive Bayes as presented uses
what’s called the Bernoulli event model (or sometimes multi-variate
Bernoulli event model). In this model, we assumed that the way an email
is generated is that first it is randomly determined (according to the class
priors p(y)) whether a spammer or non-spammer will send you your next
message. Then, the person sending the email runs through the dictionary,
deciding whether to include each word j in that email independently and
according to the probabilities p(xj = 1|y) = φj|y. Thus, the probability of a
message was given by p(y) ∏_{j=1}^{d} p(xj|y).
Here’s a different model, called the Multinomial event model. To
describe this model, we will use a different notation and set of features for
representing emails. We let xj denote the identity of the j-th word in the
email. Thus, xj is now an integer taking values in {1, . . . , |V |}, where |V |
is the size of our vocabulary (dictionary). An email of d words is now rep-
resented by a vector (x1, x2, . . . , xd) of length d; note that d can vary for
different documents. For instance, if an email starts with “A NeurIPS . . . ,”
then x1 = 1 (“a” is the first word in the dictionary), and x2 = 35000 (if
“neurips” is the 35000th word in the dictionary).
In the multinomial event model, we assume that the way an email is
generated is via a random process in which spam/non-spam is first deter-
mined (according to p(y)) as before. Then, the sender of the email writes the
email by first generating x1 from some multinomial distribution over words
(p(x1|y)). Next, the second word x2 is chosen independently of x1 but from
the same multinomial distribution, and similarly for x3, x4, and so on, until
all d words of the email have been generated. Thus, the overall probability of
a message is given by p(y) ∏_{j=1}^{d} p(xj|y). Note that this formula looks like the
one we had earlier for the probability of a message under the Bernoulli event
model, but that the terms in the formula now mean very different things. In
particular xj|y is now a multinomial, rather than a Bernoulli distribution.
The parameters for our new model are φy = p(y) as before, φk|y=1 =
p(xj = k|y = 1) (for any j) and φk|y=0 = p(xj = k|y = 0). Note that we have
assumed that p(xj|y) is the same for all values of j (i.e., that the distribution
according to which a word is generated does not depend on its position j
within the email).
If we are given a training set {(x^{(i)}, y^{(i)}); i = 1, . . . , n} where x^{(i)} =
(x_1^{(i)}, x_2^{(i)}, . . . , x_{d_i}^{(i)}) (here, d_i is the number of words in the i-th training example),
the likelihood of the data is given by
L(φy, φk|y=0, φk|y=1) = ∏_{i=1}^{n} p(x^{(i)}, y^{(i)})
= ∏_{i=1}^{n} (∏_{j=1}^{d_i} p(x_j^{(i)}|y; φk|y=0, φk|y=1)) p(y^{(i)}; φy).
Maximizing this yields the maximum likelihood estimates of the parameters:
φk|y=1 = Σ_{i=1}^{n} Σ_{j=1}^{d_i} 1{x_j^{(i)} = k ∧ y^{(i)} = 1} / Σ_{i=1}^{n} 1{y^{(i)} = 1} d_i

φk|y=0 = Σ_{i=1}^{n} Σ_{j=1}^{d_i} 1{x_j^{(i)} = k ∧ y^{(i)} = 0} / Σ_{i=1}^{n} 1{y^{(i)} = 0} d_i

φy = Σ_{i=1}^{n} 1{y^{(i)} = 1} / n.
If we were to apply Laplace smoothing (which is needed in practice for good
performance) when estimating φk|y=0 and φk|y=1, we add 1 to the numerators
and |V | to the denominators, and obtain:
φk|y=1 = (1 + Σ_{i=1}^{n} Σ_{j=1}^{d_i} 1{x_j^{(i)} = k ∧ y^{(i)} = 1}) / (|V| + Σ_{i=1}^{n} 1{y^{(i)} = 1} d_i)

φk|y=0 = (1 + Σ_{i=1}^{n} Σ_{j=1}^{d_i} 1{x_j^{(i)} = k ∧ y^{(i)} = 0}) / (|V| + Σ_{i=1}^{n} 1{y^{(i)} = 0} d_i).
While not necessarily the very best classification algorithm, the Naive Bayes
classifier often works surprisingly well. It is often also a very good “first thing
to try,” given its simplicity and ease of implementation.
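As an illustrative sketch of the multinomial event model with Laplace smoothing (for coding convenience we index words from 0 to |V| − 1 rather than 1 to |V|; the function name is our own):

```python
import numpy as np

def fit_multinomial_nb(docs, labels, V):
    """Laplace-smoothed estimates for the multinomial event model.

    docs: list of word-index lists (values in {0, ..., V-1}).
    labels: list of 0/1 labels. V: vocabulary size.
    """
    counts = np.ones((2, V))          # Laplace: start every word count at 1
    totals = np.full(2, V, float)     # and every denominator at |V|
    for words, y in zip(docs, labels):
        for w in words:
            counts[y, w] += 1
        totals[y] += len(words)
    return counts / totals[:, None]   # phi[y, k] = p(x_j = k | y)
```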
Chapter 5
Kernel methods
Feature maps
Recall that in our discussion about linear regression, we considered the prob-
lem of predicting the price of a house (denoted by y) from the living area of
the house (denoted by x), and we fit a linear function of x to the training
data. What if the price y can be more accurately represented as a non-linear
function of x? In this case, we need a more expressive family of models than
linear models.
We start by considering fitting cubic functions y = θ3 x^3 + θ2 x^2 + θ1 x + θ0.
It turns out that we can view the cubic function as a linear function over
a different set of feature variables (defined below). Concretely, let the
function φ : R → R^4 be defined as

φ(x) = (1, x, x^2, x^3)ᵀ ∈ R^4. ()
Let θ ∈ R4 be the vector containing θ0, θ1, θ2, θ3 as entries. Then we can
rewrite the cubic function in x as:
θ3 x^3 + θ2 x^2 + θ1 x + θ0 = θᵀφ(x)
Thus, a cubic function of the variable x can be viewed as a linear function
over the variables φ(x). To distinguish between these two sets of variables,
in the context of kernel methods, we will call the “original” input value the
input attributes of a problem (in this case, x, the living area). When the
original input is mapped to some new set of quantities φ(x), we will call those
new quantities the feature variables. (Unfortunately, different authors use
different terms to describe these two things in different contexts.) We will
call φ a feature map, which maps the attributes to the features.
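A minimal sketch of this feature map in NumPy, showing that θᵀφ(x) reproduces the cubic (the particular θ values are arbitrary):

```python
import numpy as np

def phi(x):
    """Feature map taking the attribute x to the features (1, x, x^2, x^3)."""
    return np.array([1.0, x, x**2, x**3])

theta = np.array([1.0, 2.0, 0.5, -1.0])  # hypothetical (theta0, theta1, theta2, theta3)
x = 2.0
cubic = theta[3] * x**3 + theta[2] * x**2 + theta[1] * x + theta[0]
linear_in_features = theta @ phi(x)      # theta^T phi(x): linear in the features
```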
LMS (least mean squares) with features
We will derive the gradient descent algorithm for fitting the model θTφ(x).
First recall that for the ordinary least squares problem, where we were to fit θᵀx,
the batch gradient descent update is (see the first lecture note for its deriva-
tion):
θ := θ + α Σ_{i=1}^{n} (y^{(i)} − hθ(x^{(i)})) x^{(i)}
 := θ + α Σ_{i=1}^{n} (y^{(i)} − θᵀx^{(i)}) x^{(i)}. ()
Let φ : Rd → Rp be a feature map that maps attribute x (in Rd) to the
features φ(x) in Rp. (In the motivating example in the previous subsection,
we have d = 1 and p = 4.) Now our goal is to fit the function θTφ(x), with
θ being a vector in Rp instead of Rd. We can replace all the occurrences of
x(i) in the algorithm above by φ(x(i)) to obtain the new update:
θ := θ + α Σ_{i=1}^{n} (y^{(i)} − θᵀφ(x^{(i)})) φ(x^{(i)}) ()
Similarly, the corresponding stochastic gradient descent update rule is
θ := θ + α (y^{(i)} − θᵀφ(x^{(i)})) φ(x^{(i)}) ()
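The batch update with features can be sketched as follows (a minimal illustration; the learning rate and iteration count are arbitrary choices):

```python
import numpy as np

def lms_with_features(X, y, phi, p, alpha=0.01, iters=5000):
    """Batch gradient descent for fitting theta^T phi(x).

    X: (n,) attributes; y: (n,) targets; phi: feature map; p: feature dimension.
    """
    Phi = np.array([phi(x) for x in X])   # (n, p) matrix of feature vectors
    theta = np.zeros(p)
    for _ in range(iters):
        # theta := theta + alpha * sum_i (y_i - theta^T phi(x_i)) phi(x_i)
        theta += alpha * Phi.T @ (y - Phi @ theta)
    return theta
```

For example, fitting y = x² with φ(x) = (1, x, x²)ᵀ recovers θ ≈ (0, 0, 1).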
LMS with the kernel trick
The gradient descent update, or stochastic gradient update, above becomes
computationally expensive when the feature vector φ(x) is high-dimensional. For
example, consider the direct extension of the feature map in equation ()
to high-dimensional input x: suppose x ∈ Rd, and let φ(x) be the vector that
contains all the monomials of x with degree ≤ 3:

φ(x) = (1, x1, x2, . . . , x1^2, x1x2, x1x3, . . . , x2x1, . . . , x1^3, x1^2 x2, . . .)ᵀ. ()

The dimension of the features φ(x) is on the order of d^3.1 This is a pro-
hibitively long vector for computational purposes — when d = 1000, each
update requires at least computing and storing a 1000^3 = 10^9 dimensional
vector, which is 10^6 times slower than the update rule for ordinary least
squares updates ().
It may appear at first that such d^3 runtime per update and memory usage
are inevitable, because the vector θ itself is of dimension p ≈ d^3, and we may
need to update every entry of θ and store it. However, we will introduce the
kernel trick with which we will not need to store θ explicitly, and the runtime
can be significantly improved.
For simplicity, we assume that we initialize the value θ = 0, and we focus
on the iterative update (). The main observation is that at any time, θ
can be represented as a linear combination of the vectors φ(x(1)), . . . , φ(x(n)).
Indeed, we can show this inductively as follows. At initialization, θ = 0 =
Σ_{i=1}^{n} 0 · φ(x^{(i)}). Assume at some point, θ can be represented as

θ = Σ_{i=1}^{n} βi φ(x^{(i)}) ()
1Here, for simplicity, we include all the monomials with repetitions (so that, e.g., x1x2x3
and x2x3x1 both appear in φ(x)). Therefore, there are totally 1 + d + d^2 + d^3 entries in
φ(x).
for some β1, . . . , βn ∈ R. Then we claim that in the next round, θ is still a
linear combination of φ(x(1)), . . . , φ(x(n)) because
θ := θ + α Σ_{i=1}^{n} (y^{(i)} − θᵀφ(x^{(i)})) φ(x^{(i)})
= Σ_{i=1}^{n} βi φ(x^{(i)}) + α Σ_{i=1}^{n} (y^{(i)} − θᵀφ(x^{(i)})) φ(x^{(i)})
= Σ_{i=1}^{n} (βi + α(y^{(i)} − θᵀφ(x^{(i)}))) φ(x^{(i)}) ()

where the coefficient of φ(x^{(i)}) in the last line is the new βi.
You may realize that our general strategy is to implicitly represent the p-
dimensional vector θ by a set of coefficients β1, . . . , βn. Towards doing this,
we derive the update rule of the coefficients β1, . . . , βn. Using the equation
above, we see that the new βi depends on the old one via
βi := βi + α (y^{(i)} − θᵀφ(x^{(i)})) ()
Here we still have the old θ on the RHS of the equation. Replacing θ by
θ = Σ_{j=1}^{n} βj φ(x^{(j)}) gives

∀i ∈ {1, . . . , n}, βi := βi + α (y^{(i)} − Σ_{j=1}^{n} βj φ(x^{(j)})ᵀ φ(x^{(i)}))
We often rewrite φ(x^{(j)})ᵀ φ(x^{(i)}) as ⟨φ(x^{(j)}), φ(x^{(i)})⟩ to emphasize that it’s the
inner product of the two feature vectors. Viewing βi’s as the new representa-
tion of θ, we have successfully translated the batch gradient descent algorithm
into an algorithm that updates the value of β iteratively. It may appear that
at every iteration, we still need to compute the values of 〈φ(x(j)), φ(x(i))〉 for
all pairs of i, j, each of which may take roughly O(p) operation. However,
two important properties come to rescue:
1. We can pre-compute the pairwise inner products 〈φ(x(j)), φ(x(i))〉 for all
pairs of i, j before the loop starts.
2. For the feature map φ defined in () (or many other interesting fea-
ture maps), computing 〈φ(x(j)), φ(x(i))〉 can be efficient and does not
necessarily require computing φ(x^{(i)}) explicitly. This is because:

⟨φ(x), φ(z)⟩ = 1 + Σ_{i=1}^{d} xi zi + Σ_{i,j ∈ {1,...,d}} xi xj zi zj + Σ_{i,j,k ∈ {1,...,d}} xi xj xk zi zj zk
= 1 + Σ_{i=1}^{d} xi zi + (Σ_{i=1}^{d} xi zi)^2 + (Σ_{i=1}^{d} xi zi)^3
= 1 + ⟨x, z⟩ + ⟨x, z⟩^2 + ⟨x, z⟩^3 ()

Therefore, to compute ⟨φ(x), φ(z)⟩, we can first compute ⟨x, z⟩ with
O(d) time and then take another constant number of operations to com-
pute 1 + ⟨x, z⟩ + ⟨x, z⟩^2 + ⟨x, z⟩^3.
As you will see, the inner products between the features 〈φ(x), φ(z)〉 are
essential here. We define the Kernel corresponding to the feature map φ as
a function that maps X × X → R satisfying:2

K(x, z) ≜ ⟨φ(x), φ(z)⟩ ()
To wrap up the discussion, we write down the final algorithm as
follows:
1. Compute all the values K(x^{(i)}, x^{(j)}) ≜ ⟨φ(x^{(i)}), φ(x^{(j)})⟩ using equation () for all i, j ∈ {1, . . . , n}. Set β := 0.

2. Loop:

   ∀i ∈ {1, . . . , n}, βi := βi + α (y^{(i)} − Σ_{j=1}^{n} βj K(x^{(i)}, x^{(j)})) ()

   Or in vector notation, letting K be the n × n matrix with Kij = K(x^{(i)}, x^{(j)}), we have

   β := β + α(~y − Kβ)
With the algorithm above, we can update the representation β of the
vector θ efficiently with O(n) time per update. Finally, we need to show that
2Recall that X is the space of the input x. In our running example, X = Rd
the knowledge of the representation β suffices to compute the prediction
θᵀφ(x). Indeed, we have

θᵀφ(x) = Σ_{i=1}^{n} βi φ(x^{(i)})ᵀ φ(x) = Σ_{i=1}^{n} βi K(x^{(i)}, x) ()
You may realize that fundamentally all we need to know about the feature
map φ(·) is encapsulated in the corresponding kernel function K(·, ·). We
will expand on this in the next section.
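The two-step algorithm above can be sketched directly in NumPy (a minimal illustration; `K_fn`, the learning rate, and the iteration count are our own choices):

```python
import numpy as np

def kernel_lms(X, y, K_fn, alpha=0.1, iters=200):
    """Kernelized batch LMS: iterate on beta instead of the (huge) theta."""
    n = len(X)
    # Step 1: pre-compute the n x n kernel (Gram) matrix.
    K = np.array([[K_fn(xi, xj) for xj in X] for xi in X])
    beta = np.zeros(n)
    # Step 2: loop the vectorized update beta := beta + alpha (y - K beta).
    for _ in range(iters):
        beta += alpha * (y - K @ beta)
    return beta

def kernel_predict(x, X, beta, K_fn):
    """theta^T phi(x) = sum_i beta_i K(x^(i), x)."""
    return sum(b * K_fn(xi, x) for b, xi in zip(beta, X))
```

With a linear kernel (the identity feature map) this reduces to ordinary LMS, which gives a quick sanity check.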
Properties of kernels
In the last subsection, we started with an explicitly defined feature map φ,
which induces the kernel function K(x, z) ≜ ⟨φ(x), φ(z)⟩. Then we saw that
the kernel function is so intrinsic that, as long as the kernel function is
defined, the whole training algorithm can be written entirely in the language
of the kernel without referring to the feature map φ, and so can the prediction
of a test example x (equation ()).
Therefore, it is tempting to define other kernel functions K(·, ·) and
run the algorithm (). Note that the algorithm () does not need to
explicitly access the feature map φ, and therefore we only need to ensure the
existence of the feature map φ, but do not necessarily need to be able to
explicitly write φ down.
What kinds of functions K(·, ·) can correspond to some feature map φ? In
other words, can we tell if there is some feature mapping φ so that K(x, z) =
φ(x)Tφ(z) for all x, z?
If we can answer this question by giving a precise characterization of valid
kernel functions, then we can completely change the interface of selecting
feature maps φ to the interface of selecting kernel function K. Concretely,
we can pick a function K, verify that it satisfies the characterization (so
that there exists a feature map φ that K corresponds to), and then we can
run update rule (). The benefit here is that we don’t have to be able
to compute φ or write it down analytically, and we only need to know its
existence. We will answer this question at the end of this subsection after
we go through several concrete examples of kernels.
Suppose x, z ∈ Rd, and let’s first consider the function K(·, ·) defined as:
K(x, z) = (xT z)2.
We can also write this as

K(x, z) = (Σ_{i=1}^{d} xi zi)(Σ_{j=1}^{d} xj zj)
= Σ_{i=1}^{d} Σ_{j=1}^{d} xi xj zi zj
= Σ_{i,j=1}^{d} (xi xj)(zi zj)
Thus, we see that K(x, z) = 〈φ(x), φ(z)〉 is the kernel function that corre-
sponds to the feature mapping φ given (shown here for the case of d = 3)
by
φ(x) = (x1x1, x1x2, x1x3, x2x1, x2x2, x2x3, x3x1, x3x2, x3x3)ᵀ.
Revisiting the computational efficiency perspective of kernel, note that whereas
calculating the high-dimensional φ(x) requires O(d2) time, finding K(x, z)
takes only O(d) time—linear in the dimension of the input attributes.
For another related example, also consider K(·, ·) defined by
K(x, z) = (xᵀz + c)^2
= Σ_{i,j=1}^{d} (xi xj)(zi zj) + Σ_{i=1}^{d} (√(2c) xi)(√(2c) zi) + c^2.
(Check this yourself.) This function K is a kernel function that corresponds
to the feature mapping (again shown for d = 3)
φ(x) = (x1x1, x1x2, x1x3, x2x1, x2x2, x2x3, x3x1, x3x2, x3x3, √(2c) x1, √(2c) x2, √(2c) x3, c)ᵀ,
and the parameter c controls the relative weighting between the xi (first
order) and the xixj (second order) terms.
More broadly, the kernel K(x, z) = (xᵀz + c)^k corresponds to a feature
mapping to a (d+k choose k)-dimensional feature space, consisting of all monomials of the
form xi1 xi2 · · · xik that are up to order k. However, despite working in this
O(d^k)-dimensional space, computing K(x, z) still takes only O(d) time, and
hence we never need to explicitly represent feature vectors in this very high
dimensional feature space.
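To see the efficiency gain concretely, here is a sketch comparing the explicit O(d²) feature map for K(x, z) = (xᵀz)² with the O(d) kernel computation (the function names are our own):

```python
import numpy as np

def phi_quadratic(x):
    """Explicit feature map for K(x, z) = (x^T z)^2: all d^2 products x_i x_j."""
    d = len(x)
    return np.array([x[i] * x[j] for i in range(d) for j in range(d)])

def K_poly(x, z):
    """The same inner product computed in O(d) time, never forming phi."""
    return float(x @ z) ** 2
```

Both routes give identical values, but only the second scales to large d.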
Kernels as similarity metrics. Now, let’s talk about a slightly different
view of kernels. Intuitively, (and there are things wrong with this intuition,
but nevermind), if φ(x) and φ(z) are close together, then we might expect
K(x, z) = φ(x)Tφ(z) to be large. Conversely, if φ(x) and φ(z) are far apart—
say nearly orthogonal to each other—then K(x, z) = φ(x)Tφ(z) will be small.
So, we can think of K(x, z) as some measurement of how similar are φ(x)
and φ(z), or of how similar are x and z.
Given this intuition, suppose that for some learning problem that you’re
working on, you’ve come up with some function K(x, z) that you think might
be a reasonable measure of how similar x and z are. For instance, perhaps
you chose
K(x, z) = exp(−‖x − z‖^2 / (2σ^2)).
This is a reasonable measure of x and z’s similarity, and is close to 1 when
x and z are close, and near 0 when x and z are far apart. Does there exist
a feature map φ such that the kernel K defined above satisfies K(x, z) =
φ(x)Tφ(z)? In this particular example, the answer is yes. This kernel is called
the Gaussian kernel, and corresponds to an infinite dimensional feature
mapping φ. We will give a precise characterization about what properties
a function K needs to satisfy so that it can be a valid kernel function that
corresponds to some feature map φ.
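A one-line sketch of the Gaussian kernel (σ is a bandwidth hyperparameter; the value 1.0 below is an arbitrary default):

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    """Gaussian (RBF) kernel: close to 1 when x and z are near, close to 0 when far."""
    return np.exp(-np.linalg.norm(x - z) ** 2 / (2 * sigma ** 2))
```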
Necessary conditions for valid kernels. Suppose for now that K is
indeed a valid kernel corresponding to some feature mapping φ, and we will
first see what properties it satisfies. Now, consider some finite set of n points
(not necessarily the training set) {x^{(1)}, . . . , x^{(n)}}, and let a square, n-by-n
matrix K be defined so that its (i, j)-entry is given by Kij = K(x^{(i)}, x^{(j)}).
This matrix is called the kernel matrix. Note that we’ve overloaded the
notation and used K to denote both the kernel function K(x, z) and the
kernel matrix K, due to their obvious close relationship.
Now, if K is a valid kernel, then Kij = K(x^{(i)}, x^{(j)}) = φ(x^{(i)})ᵀφ(x^{(j)}) =
φ(x^{(j)})ᵀφ(x^{(i)}) = K(x^{(j)}, x^{(i)}) = Kji, and hence K must be symmetric. More-
over, letting φk(x) denote the k-th coordinate of the vector φ(x), we find that
for any vector z, we have
zᵀKz = Σ_i Σ_j zi Kij zj
= Σ_i Σ_j zi φ(x^{(i)})ᵀφ(x^{(j)}) zj
= Σ_i Σ_j zi Σ_k φk(x^{(i)}) φk(x^{(j)}) zj
= Σ_k Σ_i Σ_j zi φk(x^{(i)}) φk(x^{(j)}) zj
= Σ_k (Σ_i zi φk(x^{(i)}))^2
≥ 0.

The second-to-last step uses the fact that Σ_{i,j} ai aj = (Σ_i ai)^2 for ai =
zi φk(x^{(i)}). Since z was arbitrary, this shows that K is positive semi-definite
(K ⪰ 0).
Hence, we’ve shown that if K is a valid kernel (i.e., if it corresponds to
some feature mapping φ), then the corresponding kernel matrix K ∈ Rn×n
is symmetric positive semidefinite.
Sufficient conditions for valid kernels. More generally, the condition
above turns out to be not only a necessary, but also a sufficient, condition
for K to be a valid kernel (also called a Mercer kernel). The following result
is due to Mercer.3
Theorem (Mercer). Let K : R^d × R^d → R be given. Then for K
to be a valid (Mercer) kernel, it is necessary and sufficient that for any
{x(1), . . . , x(n)}, (n <∞), the corresponding kernel matrix is symmetric pos-
itive semi-definite.
Given a function K, apart from trying to find a feature mapping φ that
corresponds to it, this theorem therefore gives another way of testing if it is
a valid kernel. You’ll also have a chance to play with these ideas more in
problem set 2.
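Mercer's theorem suggests a simple numerical sanity check on a finite point set: form the kernel matrix and test symmetry and positive semi-definiteness (a sketch; the tolerance guards against floating-point round-off):

```python
import numpy as np

def is_valid_kernel_matrix(K, tol=1e-10):
    """Check Mercer's condition on a finite set: K must be symmetric PSD."""
    if not np.allclose(K, K.T):
        return False
    # All eigenvalues of a symmetric matrix are real; PSD means none is negative.
    return bool(np.all(np.linalg.eigvalsh(K) >= -tol))
```

For example, any Gram matrix XXᵀ passes the check, while a symmetric matrix with a negative eigenvalue fails it.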
In class, we also briefly talked about a couple of other examples of ker-
nels. For instance, consider the digit recognition problem, in which given
an image (16x16 pixels) of a handwritten digit (0-9), we have to figure out
which digit it was. Using either a simple polynomial kernel K(x, z) = (xT z)k
or the Gaussian kernel, SVMs were able to obtain extremely good perfor-
mance on this problem. This was particularly surprising since the input
attributes x were just 256-dimensional vectors of the image pixel intensity
values, and the system had no prior knowledge about vision, or even about
which pixels are adjacent to which other ones. Another example that we
briefly talked about in lecture was that if the objects x that we are trying
to classify are strings (say, x is a list of amino acids, which strung together
form a protein), then it seems hard to construct a reasonable, “small” set of
features for most learning algorithms, especially if different strings have dif-
ferent lengths. However, consider letting φ(x) be a feature vector that counts
the number of occurrences of each length-k substring in x. If we’re consid-
ering strings of English letters, then there are 26k such strings. Hence, φ(x)
is a 26k dimensional vector; even for moderate values of k, this is probably
too big for us to efficiently work with. (E.g., 26^4 ≈ 460000.) However, using
(dynamic programming-ish) string matching algorithms, it is possible to ef-
ficiently compute K(x, z) = φ(x)Tφ(z), so that we can now implicitly work
in this 26k-dimensional feature space, but without ever explicitly computing
feature vectors in this space.
3Many texts present Mercer’s theorem in a slightly more complicated form involving
L2 functions, but when the input attributes take values in Rd, the version given here is
equivalent.
Application of kernel methods: We’ve seen the application of kernels
to linear regression. In the next part, we will introduce the support vector
machines to which kernels can be directly applied. We won’t dwell too much longer on
it here. In fact, the idea of kernels has significantly broader applicability than
linear regression and SVMs. Specifically, if you have any learning algorithm
that you can write in terms of only inner products 〈x, z〉 between input
attribute vectors, then by replacing this with K(x, z) where K is a kernel,
you can “magically” allow your algorithm to work efficiently in the high
dimensional feature space corresponding to K. For instance, this kernel trick
can be applied with the perceptron to derive a kernel perceptron algorithm.
Many of the algorithms that we’ll see later in this class will also be amenable
to this method, which has come to be known as the “kernel trick.”
Chapter 6
Support vector machines
This set of notes presents the Support Vector Machine (SVM) learning al-
gorithm. SVMs are among the best (and many believe are indeed the best)
“off-the-shelf” supervised learning algorithms. To tell the SVM story, we’ll
need to first talk about margins and the idea of separating data with a large
“gap.” Next, we’ll talk about the optimal margin classifier, which will lead
us into a digression on Lagrange duality. We’ll also see kernels, which give
a way to apply SVMs efficiently in very high dimensional (such as infinite-
dimensional) feature spaces, and finally, we’ll close off the story with the
SMO algorithm, which gives an efficient implementation of SVMs.
Margins: intuition
We’ll start our story on SVMs by talking about margins. This section will
give the intuitions about margins and about the “confidence” of our predic-
tions; these ideas will be made formal in Section .
Consider logistic regression, where the probability p(y = 1|x; θ) is mod-
eled by hθ(x) = g(θᵀx). We then predict “1” on an input x if and only if
hθ(x) ≥ 0.5, or equivalently, if and only if θᵀx ≥ 0. Consider a positive
training example (y = 1). The larger θTx is, the larger also is hθ(x) = p(y =
1|x; θ), and thus also the higher our degree of “confidence” that the label is 1.
Thus, informally we can think of our prediction as being very confident that
y = 1 if θᵀx ≫ 0. Similarly, we think of logistic regression as confidently
predicting y = 0 if θᵀx ≪ 0. Given a training set, again informally it seems
that we’d have found a good fit to the training data if we can find θ so that
θᵀx^{(i)} ≫ 0 whenever y^{(i)} = 1, and θᵀx^{(i)} ≪ 0 whenever y^{(i)} = 0, since this
would reflect a very confident (and correct) set of classifications for all the
training examples. This seems to be a nice goal to aim for, and we’ll soon
formalize this idea using the notion of functional margins.
For a different type of intuition, consider the following figure, in which x’s
represent positive training examples, o’s denote negative training examples,
a decision boundary (this is the line given by the equation θTx = 0, and
is also called the separating hyperplane) is also shown, and three points
have also been labeled A, B and C.
[Figure: positive (x) and negative (o) training examples separated by the decision boundary θᵀx = 0, with three labeled points A, B, and C at decreasing distance from the boundary.]
Notice that the point A is very far from the decision boundary. If we are
asked to make a prediction for the value of y at A, it seems we should be
quite confident that y = 1 there. Conversely, the point C is very close to
the decision boundary, and while it’s on the side of the decision boundary
on which we would predict y = 1, it seems likely that just a small change to
the decision boundary could easily have caused our prediction to be y = 0.
Hence, we’re much more confident about our prediction at A than at C. The
point B lies in-between these two cases, and more broadly, we see that if
a point is far from the separating hyperplane, then we may be significantly
more confident in our predictions. Again, informally we think it would be
nice if, given a training set, we manage to find a decision boundary that
allows us to make all correct and confident (meaning far from the decision
boundary) predictions on the training examples. We’ll formalize this later
using the notion of geometric margins.
Notation (optional reading)
To make our discussion of SVMs easier, we’ll first need to introduce a new
notation for talking about classification. We will be considering a linear
classifier for a binary classification problem with labels y and features x.
From now, we’ll use y ∈ {−1, 1} (instead of {0, 1}) to denote the class labels.
Also, rather than parameterizing our linear classifier with the vector θ, we
will use parameters w, b, and write our classifier as
hw,b(x) = g(wᵀx + b).
Here, g(z) = 1 if z ≥ 0, and g(z) = −1 otherwise. This “w, b” notation
allows us to explicitly treat the intercept term b separately from the other
parameters. (We also drop the convention we had previously of letting x0 = 1
be an extra coordinate in the input feature vector.) Thus, b takes the role of
what was previously θ0, and w takes the role of [θ1 . . . θd]ᵀ.
Note also that, from our definition of g above, our classifier will directly
predict either 1 or −1 (cf. the perceptron algorithm), without first going
through the intermediate step of estimating p(y = 1) (which is what