Classical vs statistical entropy
December 12, 2011 1:24 AM

Can someone explain to me in layman's terms the relation between classical entropy (heat over temperature) and statistical entropy (−k Σ_i p_i ln p_i)? More specifically, how do the microstate probabilities arise and how do they relate to heat and temperature?

I roughly understand what microstates and macrostates are (having read this). I also understand why the logarithm makes sense (having read this). What is not clear to me is the relation to the classical concept of entropy. I read this but I found it a little confusing. Some enlightening remarks would be highly appreciated. Thanks!
posted by ochlophonic to Science & Nature (5 answers total) 2 users marked this as a favorite
 
Susskind did an entire semester on statistical mechanics at Stanford if you have the time. Lecture 3 covers what you're talking about, if I remember right.
posted by empath at 4:57 AM on December 12, 2011


He actually gets a little bit lost in the math at one point in that video, if I remember correctly, so it's not exactly easy stuff, even for a theoretical physicist.

Khan Academy has a series on entropy as well that starts here. That's probably a bit simpler.
posted by empath at 5:54 AM on December 12, 2011


What background are you coming from? Classical thermodynamics in its standard formulation is impossible to understand without a good grasp of multivariate calculus IMO, and even then it's mostly a matter of keeping track of which quantity you're currently calculating and which parameters are held fixed.

For layman's explanations, the most obvious starting point for an understanding of entropy is its role in the product TS, the difference between the heat (internal energy) and the free energy. TS is energy that is in the system but cannot be converted to mechanical work. The entropy stays constant in a reversible process and increases in any irreversible one; see the second law and the Carnot cycle.

In the statistical approach, a similar role is played by the phase space volume for each macrostate (or more precisely its log, but the relation is monotonic anyway) - given certain macro parameters, there's a certain part of the phase space (the space of possible microstates) that would generate those macro parameters. The larger that part, the less you know about the exact microstate of the system, and the less energy you can extract from it. A very good explanation of how that last sentence makes any sense can be found in Feynman's Lectures on Computation, where he goes through the entire relationship between entropy, information and energy based on a very simple example (one atom in a box).
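
To see how the log of the phase-space count and the −k Σ p_i ln p_i formula from the question line up in the simplest case, here's a tiny sketch (plain Python, my own toy illustration, not from Feynman's book): when all W accessible microstates are equally likely, the Gibbs sum collapses to Boltzmann's k ln W.

    import math

    k = 1.380649e-23  # Boltzmann constant, J/K

    def gibbs_entropy(probs):
        # S = -k * sum(p * ln p) over microstates with nonzero probability
        return -k * sum(p * math.log(p) for p in probs if p > 0)

    W = 10**6                # number of accessible microstates for some macrostate
    uniform = [1.0 / W] * W  # equal a priori probabilities

    print(gibbs_entropy(uniform))  # equals -k * W * (1/W) * ln(1/W)
    print(k * math.log(W))         # Boltzmann's S = k ln W -- same number

With unequal probabilities the Gibbs formula is the more general one, but the uniform case is the bridge to "log of the phase space volume."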

That Wikibooks article is terrible because it insists on mingling quantum mechanics into an explanation of a classical concept.
posted by themel at 6:25 AM on December 12, 2011


Response by poster: I can handle some math (CS PhD student), but I was hoping for a quick & intuitive answer :) Anyway, thanks for the Khan Academy link and the Feynman reference (didn't know he lectured on computation, sounds very interesting).
posted by ochlophonic at 11:34 PM on December 13, 2011


Your statement about classical entropy (that it is heat over temperature) suggests a mindset where entropy is somehow derived from the more fundamental quantities of temperature and heat. In the statistical approach the logic goes the other way: the fundamental quantities describing your thermodynamic system are its internal energy (heat) and the number of possible-but-indistinguishable internal states (whose logarithm is the entropy). Temperature is the rate of heat increase as entropy is added --- or perhaps more sensibly, inverse temperature is the rate at which the entropy (the log of the number of microstates) grows as heat is added, 1/T = ∂S/∂U.
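
If it helps to see 1/T = ∂S/∂U with actual numbers, here is a toy model I made up purely for illustration (not from any of the links above): N two-state spins, each excited spin costing an energy eps, so a total energy U = n·eps has C(N, n) indistinguishable microstates and S(U) = k ln C(N, n). A finite difference then approximates the inverse temperature.

    import math

    k = 1.380649e-23  # Boltzmann constant, J/K
    eps = 1.0e-21     # energy of one excited spin, J (made-up value)
    N = 1000          # number of spins

    def entropy(n):
        # S = k * ln(number of microstates with exactly n excited spins)
        # ln C(N, n) computed via log-gamma to avoid huge integers
        return k * (math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1))

    def inverse_temperature(n):
        # 1/T = dS/dU, estimated as a finite difference (adding one quantum eps)
        return (entropy(n + 1) - entropy(n)) / eps

    for n in (100, 300, 499):
        print(n, 1.0 / inverse_temperature(n))  # temperature rises as n approaches N/2

Adding heat (raising n) opens up more microstates, but ever more slowly, and that slowing down is exactly the statement that the temperature is going up.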

This makes the second law a probabilistic statement: if you leave two systems at different temperatures in thermal contact, the heat flow between them is more likely to result in greater total entropy than less, simply because there are more ways for that to happen. The only difference between the second law and poker-player combinatorics (where you are unlikely to be dealt a low-entropy hand like one of the four* possible royal flushes) is that the number of microstates is generally so huge that you can forget about everything "unlikely."
how do the microstate probabilities arise?
The fundamental assumption is that if you can't distinguish between two microstates, then they are equally likely: in the poker example, if I tell you that I have a royal flush, you generally can't guess which suit, or what order I received the cards in.
how do they relate to heat?
In general, lower-energy microstates are more probable than higher-energy microstates. The ratio of the probabilities is given by the Boltzmann factor, exp(−ΔE/kT), where ΔE is the difference in energy and kT is the thermal energy, proportional to the temperature. So if two microstates have nearly the same energy, they're nearly equally likely; if the energies are very different, the lower-energy microstate is much more likely, and the definition of "very" depends on the ambient temperature. There's usually a competition between the preference for low-energy states and the much larger number of higher-energy states.
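
To make that competition concrete, here's a throwaway numerical sketch (my own, with an arbitrary energy gap): the factor exp(−ΔE/kT) freezes out the higher-energy microstate at low temperature and barely matters at high temperature; whether the high-energy states win overall depends on how many of them there are.

    import math

    k = 1.380649e-23  # Boltzmann constant, J/K
    dE = 1.0e-20      # energy gap between two microstates, J (arbitrary)

    for T in (10.0, 100.0, 1000.0, 10000.0):
        ratio = math.exp(-dE / (k * T))  # p(higher-energy) / p(lower-energy)
        print(T, ratio)
    # At 10 K the higher-energy state is essentially never occupied; at 10000 K
    # the two are nearly equally likely. Multiply the ratio by the number of
    # higher-energy microstates to see when "more states" beats "lower energy".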

The sum of the Boltzmann factors over all the possible microstates is called the partition function; dividing each factor by it gives probabilities that add up to 1, since the system must be in some microstate. The partition function has some quirky physical relationships to other thermodynamic quantities --- you seem to have pulled one out as a definition of entropy. Finding partition functions is physics homework.
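
For a concrete picture (a sketch with made-up energy levels, not any real system), this is what "finding the partition function" amounts to, and it shows the −k Σ p ln p formula from your question agreeing with the classical combination (U − F)/T, where F = −kT ln Z is the free energy:

    import math

    k = 1.380649e-23             # Boltzmann constant, J/K
    T = 300.0                    # temperature, K
    E = [0.0, 2.0e-21, 5.0e-21]  # made-up energy levels, J

    Z = sum(math.exp(-Ei / (k * T)) for Ei in E)   # partition function
    p = [math.exp(-Ei / (k * T)) / Z for Ei in E]  # probabilities, sum to 1

    U = sum(pi * Ei for pi, Ei in zip(p, E))  # average internal energy
    F = -k * T * math.log(Z)                  # Helmholtz free energy

    S_statistical = -k * sum(pi * math.log(pi) for pi in p)  # Gibbs entropy
    S_classical = (U - F) / T                                # "energy over temperature" flavor

    print(S_statistical, S_classical)  # the two definitions agree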
and temperature?
I think I did this one first.


* pedant note: if you count the order in which the cards are dealt, there are 4·5! = 480 royal-flush deals out of 52·51·50·49·48 ≈ 312 million ordered deals; ignoring order, there are 4 royal flushes out of (52 choose 5) = 2,598,960 ≈ 2.6M five-card poker hands.
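
(A two-line check of that footnote, counting both ways; either way the royal-flush probability comes out to about 1 in 649,740.)

    from math import comb, perm

    ordered = (4 * perm(5, 5)) / perm(52, 5)  # 480 ordered royal-flush deals / all ordered deals
    unordered = 4 / comb(52, 5)               # 4 royal flushes / 2,598,960 five-card hands

    print(ordered, unordered)  # same probability, about 1 in 649,740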
posted by fantabulous timewaster at 11:05 PM on December 14, 2011

