An odd property of Eigenvectors
June 9, 2012 9:22 AM

The first eigenvector is related to the sum of the columns of a matrix. What is this property called?

I came across a property of positive square matrices that I'd like to know more about. It would appear that a positive random matrix M, with first eigenvector v (i.e., the eigenvector corresponding to the largest eigenvalue), has the following property on average: the sum of the columns of M (adding the columns together as vectors) is proportional to the eigenvector v. That is:

\sum_{j=1}^{N} M_{ij} is proportional to v_i

This is not an exact relation (there is a bit of variation), but the relationship is visually quite obvious. If you have Mathematica, you can see the proportionality by entering:

M = RandomReal[{0, 1}, {100, 100}]; (* random 100x100 matrix, entries uniform on [0, 1] *)
ev = Eigensystem[M][[2]][[1]]; (* first eigenvector, i.e. for the largest |eigenvalue| *)
cs = Total[Transpose[M]]; (* sum of the columns of M as vectors: entry i is Sum_j M[[i, j]] *)
ListPlot[Transpose[{ev, cs}]] (* the points fall near a line through the origin *)
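
The proportionality can also be made quantitative with a one-line check in the same session (values near ±1 indicate a linear relationship; the sign is arbitrary, since an eigenvector is only determined up to a scalar):

Correlation[Re[ev], cs] (* near ±1: ev and cs are nearly proportional *)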



What's the name of this property? What has been proven about it? I'm aware of some results about the importance of the first eigenvector when M is taken to a large power, but I have not heard of this proportionality before.
posted by bessel functions seem unnecessarily complicated to Science & Nature
 
Seems like this might be related to the Perron-Frobenius theorem somehow.
posted by leahwrenn at 9:32 AM on June 9, 2012


Response by poster: It's likely related to Perron-Frobenius, but I can't figure out how to prove it. I was hoping it had a special name. Some mathematician must have looked at this at some point!
posted by bessel functions seem unnecessarily complicated at 9:58 AM on June 9, 2012


I don't know, but math.stackexchange might!
posted by katrielalex at 10:47 AM on June 9, 2012


Best answer: Well, if M is your random matrix and v is a column vector of 1s, then Mv is the sum of the columns of M. Expand v in the eigenbasis of M: on average, v points as much in the direction of the first eigenvector v1 of M as in the direction of any other eigenvector. But multiplying by M stretches the component of v along v1 more than the component along any other eigenvector, since the corresponding eigenvalue is the largest. So, on average, Mv points more in the direction of v1 than in the direction of any other eigenvector.

This is the same argument as showing that M^n v tends towards an eigenvector for the largest eigenvalue as n grows, but with n = 1.

I don't know anything about the distribution of eigenvalues and eigenvectors of a random matrix. If the largest eigenvalue is well separated from the rest, then Mv is, in general, going to be close to an eigenvector for the largest eigenvalue. If the two largest eigenvalues are very close, then Mv is, in general, going to look more like a sum of eigenvectors for those eigenvalues.
posted by samw at 11:11 AM on June 9, 2012
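
samw's argument is easy to check numerically (a sketch in the same spirit as the question's code; the variable names here are illustrative):

M2 = RandomReal[{0, 1}, {100, 100}]; (* positive random matrix *)
ones = ConstantArray[1., 100]; (* column vector of 1s *)
Mv = M2 . ones; (* one multiplication by M: the sum of the columns *)
v1 = Re[Eigensystem[M2][[2]][[1]]]; (* first eigenvector; real for a positive matrix *)
Correlation[Mv, v1] (* near ±1: Mv already points along v1 *)

Even a single multiplication by M lines the all-ones vector up with v1, which is exactly the n = 1 case described above.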


I don't know if this property has a name, by the way.
posted by samw at 11:12 AM on June 9, 2012


See power iteration (of which Arnoldi iteration is a generalization). For a positive matrix it is guaranteed to converge to an eigenvector of the largest eigenvalue. You are talking about the special case of starting with an initial guess of [1, …, 1]^T.
posted by Nomyte at 11:17 AM on June 9, 2012
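
For concreteness, the iteration being described takes only a few lines of Mathematica (a sketch; because the matrix is positive, Perron-Frobenius gives a strictly dominant eigenvalue, so the multiply-and-normalize loop converges):

M3 = RandomReal[{0, 1}, {100, 100}]; (* positive random matrix *)
v = Nest[Normalize[M3 . #] &, ConstantArray[1., 100], 25]; (* 25 power steps from [1, ..., 1]^T *)
v1 = Normalize[Re[Eigensystem[M3][[2]][[1]]]]; (* the exact first eigenvector, for comparison *)
Min[Norm[v - v1], Norm[v + v1]] (* tiny: the iteration converged, up to the arbitrary sign *)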


Response by poster: Ah, that's it! The first and second eigenvalues are well separated for my matrices, as they should be from random matrix theory: the non-zero mean of the entries pushes the largest eigenvalue far outside the bulk of the spectrum (the circular law for non-symmetric matrices like these, rather than the semicircle law). samw's comment got me there.

Thanks all!
posted by bessel functions seem unnecessarily complicated at 12:55 PM on June 9, 2012
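
That separation is easy to confirm directly (a sketch: for entries uniform on [0, 1], the mean of 1/2 produces one outlier eigenvalue near N/2, while the rest of the spectrum stays in a disk of radius about Sqrt[N/12]):

M4 = RandomReal[{0, 1}, {100, 100}];
eigs = Eigenvalues[M4]; (* sorted by decreasing absolute value *)
{Abs[eigs[[1]]], Abs[eigs[[2]]]} (* roughly {50, 3}: a big gap between first and second *)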

