nonlinear solver for a non-stats programmer
January 27, 2010 8:33 PM   Subscribe

Nonlinear numerical solver for a non-stats programmer - I have a large slab of client data to generate reports on. I suspect this is a simple job for anyone who knows R and/or other stats tools, but I have no idea where to start myself.

Given a large dataset along the lines of

x1 = y1 = z1 = 1
x1 * y1 * z2 = 1.125
x1 * y1 * z3 = 1.15628523111612
x1 * y2 * z1 = 1.25
x8 * y4 * z3 = 1.78255355129651

I need to solve for the various values of x[1...Xmax], y[1...Ymax] and z[1...Zmax]. I believe the system is over-determined, so (as I understand it) I can't just iterate line by line solving for one new constant each time; instead, some sort of least-squares solution is the only option. Also, this isn't (as I understand it) a linear system, which is about as far as I was able to get in the R manual.

Is there a simple way of doing this in R or some other free numerical analysis package, or am I going to have to dig out my old numerical computation textbooks and do it from scratch? Because I'd really like to avoid doing that if possible.
posted by russm to Computers & Internet (9 answers total) 3 users marked this as a favorite
If it's overdetermined, that means there's more than one solution. Least squares won't solve that problem; you need to impose some constraint(s).

But note that ln(x*y) = ln(x) + ln(y).
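(Sketching that identity in Python rather than R, just to show what it buys you here -- the numbers are the asker's own; purely illustrative:)

```python
import math

# With x1 = y1 = z1 = 1, the equation x1 * y1 * z2 = 1.125 becomes,
# after taking logs, ln(x1) + ln(y1) + ln(z2) = ln(1.125).
# The product of unknowns turns into a SUM of unknowns -- i.e. linear.
lhs = math.log(1) + math.log(1) + math.log(1.125)  # ln(x1) + ln(y1) + ln(z2)
rhs = math.log(1 * 1 * 1.125)                      # ln(x1 * y1 * z2)
print(abs(lhs - rhs) < 1e-12)  # True: the two sides agree
```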
posted by mikeand1 at 9:43 PM on January 27, 2010

Have you tried wolfram alpha? I don't know how well wolfram alpha handles over-determined systems of equations, but if you put a system of equations into the search box (separated by commas), the right answer pops out instantly.
posted by eisenkr at 9:57 PM on January 27, 2010

The R package systemfit claims to be able to do this, though I haven't used it myself. Mathematica could do this easily too, if you have access.

It looks like you could do it by hand as well - if you know x1 = y1 = z1 = 1, you can solve x1 * y1 * z2 = 1.125 for z2 = 1.125, and should be able to continue on eliminating variables.
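(A toy sketch of that elimination in Python -- the dict of knowns and the two equations are just the asker's first few lines, purely for illustration:)

```python
# Known starting values from the first line of the dataset
known = {"x1": 1.0, "y1": 1.0, "z1": 1.0}

# x1 * y1 * z2 = 1.125  ->  z2 = 1.125 / (x1 * y1)
known["z2"] = 1.125 / (known["x1"] * known["y1"])

# x1 * y2 * z1 = 1.25   ->  y2 = 1.25 / (x1 * z1)
known["y2"] = 1.25 / (known["x1"] * known["z1"])

print(known["z2"], known["y2"])  # 1.125 1.25
```

(This only works cleanly if the system is consistent; with redundant noisy data, different elimination orders will give conflicting answers, which is tss's point below.)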
posted by scodger at 9:57 PM on January 27, 2010 [1 favorite]

well, you can code Newton's method pretty easily... or really, the secant method, since you probably don't want to worry about coding up a derivative, though for what you're doing the derivative doesn't look too complicated.

you have a simple nonlinear problem -- you want to solve f(x)=0, where x is a vector of all of your variables.
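(A bare-bones secant iteration in Python, just to show how little code it takes -- the example f is a stand-in, not the asker's system:)

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Find a root of f using the secant method (no derivative needed)."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1 - f0) < 1e-300:  # avoid division by (near) zero
            break
        # Standard secant update: next iterate from the last two points
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

# Example: solve x**2 - 2 = 0 (root is sqrt(2))
root = secant(lambda x: x**2 - 2, 1.0, 2.0)
print(root)  # ~1.41421356...
```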
posted by virga at 10:00 PM on January 27, 2010

mikeand1: If it's overdetermined, there are possibly NO solutions. More than one solution happens when the system is underdetermined.

scodger: If the system really is overdetermined, solving by hand will eventually run into problems.

mikeand1's advice about logarithms, though, is spot on, *provided* your right-hand-side values are greater than 0. You can find the least squares approximation using Octave. In this case, your A matrix will be a *binary* matrix, with each row a bitmap indicating which variables are present in the equation left-hand-sides. You can ignore x1, y1, and z1 since those are 1. Your y vector (which you will solve for) will be the logarithms of each of your x2...zmax variables all laid out in a column. Your b vector will be a column with the logarithms of the right-hand-sides of the equations.

Set up the matrix A and the vector b, then type y = inv(A'*A)*A'*b; after that, type exp(y) and there are all your answers!

(Provided that the least squares approximation in log space is an acceptable approximation to you.)
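(A minimal pure-Python sketch of this recipe, in case Octave isn't handy -- the three-equation toy system, the `solve` helper, and all the names are made up for illustration; a real run would build A and b from the full dataset:)

```python
import math

def solve(M, v):
    """Solve M x = v by Gaussian elimination (small dense systems only)."""
    n = len(v)
    M = [row[:] for row in M]
    v = v[:]
    for i in range(n):
        # Partial pivoting for numerical stability
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (v[i] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Toy system in two unknowns, y2 and z2 (x1 = y1 = z1 = 1 are dropped):
#   z2 = 1.125,  y2 = 1.25,  y2 * z2 = 1.40625
A = [[0, 1],   # each row flags which unknowns appear on the left-hand side
     [1, 0],
     [1, 1]]
b = [math.log(1.125), math.log(1.25), math.log(1.40625)]

# Normal equations: (A' * A) y = A' * b, solved for y = log of the unknowns
m, n = len(A), len(A[0])
AtA = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(n)]
       for i in range(n)]
Atb = [sum(A[r][i] * b[r] for r in range(m)) for i in range(n)]
y = solve(AtA, Atb)

print([math.exp(v) for v in y])  # -> approximately [1.25, 1.125]
```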
posted by tss at 10:01 PM on January 27, 2010

I don't know about R, but there are open-source nonlinear solvers out there in the form of libraries or command-line programs. For some you'd have to use an API, but others accept input scripts written in AMPL.

(I haven't actually used any of these yet, but I was looking into them recently for a hobby project)
posted by qxntpqbbbqxl at 10:20 PM on January 27, 2010

tss - while the system is overdetermined, that's only due to high redundancy and low resolution of the sampled data - least squares in log space should be perfectly sufficient. I'll bang it into Octave when I get in to work tomorrow. cheers!
posted by russm at 5:12 AM on January 28, 2010

On reflection, the best way to do least squares in Octave (or MATLAB) is to use the following:

y = A \ b;

instead of inv(A'*A)*A'*b. Mathematically, they are equivalent, but this new version is better numerically, or so I am told.
posted by tss at 7:17 AM on January 28, 2010

tss - that works a treat on a reduced dataset, now to point it at the full 2000+ variable set and see what happens... very much obliged!
posted by russm at 6:24 PM on January 28, 2010
