I may be the only person who feels this way, but it’s awfully easy to read a paper or a book, see some equations, think about them a bit, then sort of nod your head and think you understand them. However, when you go to actually implement them, you look back and the jump from the symbols on the page to code that runs on a computer is a little bigger than you thought. So this is mostly me thinking aloud, but I was reading about optimization algorithms that rely on the Hessian, and I wrote this out to make sure I understood it well enough to calculate it if I want to.
I picked some random training data. First, we set up the design matrix x, the dependent variable y, and the theta (or beta) at which we will evaluate the Hessian:
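Something along these lines; the specific numbers here are just placeholder values:

```r
# Design matrix with two explanatory variables (no intercept column),
# the 0/1 response, and a theta at which to evaluate the Hessian.
# The values are arbitrary placeholders.
x <- matrix(c(1, 2, 3,
              4, 5, 6), ncol = 2)
y <- c(0, 1, 1)
theta <- c(0.1, -0.2)
```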
Typically you’d put a column of ones on the left of the matrix as an intercept term, but I didn’t set my problem up that way. The Hessian is the \(n \times n\) matrix of second derivatives of a scalar-valued function. In our case there are two parameters, i.e. two explanatory variables, so the Hessian is a \(2 \times 2\) matrix.
Note that we denote the \(i\)th of \(m\) training examples as \((x^{(i)}, y^{(i)})\); the superscript in parentheses is not exponentiation. Here \(x^{(i)}\) is a column vector and \(y^{(i)}\) is either 0 or 1.
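With that notation, and assuming the cost being differentiated is the usual logistic-regression negative log-likelihood (averaged over the \(m\) examples) with hypothesis \(h_\theta(x) = 1 / (1 + e^{-\theta^T x})\), the Hessian works out to

$$ H = \frac{1}{m} \sum_{i=1}^{m} h_\theta(x^{(i)}) \left( 1 - h_\theta(x^{(i)}) \right) x^{(i)} \left( x^{(i)} \right)^T $$

which is what the hand calculation below implements.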
We also need some functions. R supports closures so you don’t have to pass x and y around.
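As a sketch, assuming the cost being minimized is that same negative log-likelihood, the helpers might look like this; because cost is defined after x and y, it simply closes over them and takes only theta:

```r
sigmoid <- function(z) 1 / (1 + exp(-z))

# cost() closes over the x and y defined above, so numDeriv
# (and any optimizer) only needs a function of theta.
cost <- function(theta) {
  h <- sigmoid(x %*% theta)   # predicted probabilities, one per training example
  -sum(y * log(h) + (1 - y) * log(1 - h)) / length(y)
}
```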
Finally, we can use the numDeriv package to calculate the Hessian and compare with a hand calculation:
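A sketch of that comparison, using numDeriv’s hessian() for the numerical estimate and an explicit loop over the training examples for the hand calculation (the loop is the \(\frac{1}{m}\sum_i h_\theta(x^{(i)}) (1 - h_\theta(x^{(i)}))\, x^{(i)} (x^{(i)})^T\) formula from above):

```r
library(numDeriv)

# Numerical Hessian of the cost function at theta
h.numeric <- hessian(cost, theta)

# Hand calculation: accumulate h * (1 - h) * x x^T over the training examples
m <- nrow(x)
n <- ncol(x)
h.manual <- matrix(0, n, n)
for (i in 1:m) {
  xi <- x[i, ]                      # ith training example as a vector
  hi <- sigmoid(sum(xi * theta))    # predicted probability for example i
  h.manual <- h.manual + hi * (1 - hi) * (xi %*% t(xi))
}
h.manual <- h.manual / m

h.numeric
h.manual   # should agree with the numerical estimate to several decimal places
```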
You can clearly see why any optimization algorithm requiring the Hessian will be slow: each of the \(n \times n\) entries is a sum over every training example, so the work grows with both the number of examples and the square of the number of explanatory variables.
Also, MathJax is an awesome and painless way to get LaTeX onto your blog.