Is there actual scientific research behind this oft-cited UI response time => human perception table? Or is it just "conventional wisdom"? If it's the former, I'd love to see the original research and know how it was conducted.
So I take it that the OkTrends blog was killed off after Match bought OkCupid. Where can I now get my regular fix of really interesting statistics presented at a level that the lay person can understand? (I already know about Nate Silver and xkcd's What If.)
I have recently been introduced to the concept of pseudoreplication as a mistake that people often make when using inferential statistics to evaluate treatment outcomes. My field (evolutionary and conservation biology) makes heavy use of inferential statistics, including techniques that are vulnerable to pseudoreplication, yet nowhere in my formal education have I been taught how poor experimental design and lack of statistical rigor can lead to fallacies like this. My personal statistical proficiency is poor, but I am working to remedy that. To that end, could folks help me by identifying (and ideally explaining) other potential pitfalls you can think of, and how they can be avoided through careful experimental design and data analysis?
I'd like to learn about data science: things like predictive modelling, regression, classification, and so on. What would be good books or online courses to start with?
"Fewer persons alive at 70 today survive until 90 than forty years ago." True or False? [more inside]
In his TED talk, Sean Carroll very briefly discusses Feynman's explanation of why the universe we can see and experience is not a statistically lucky perturbation, before moving on to the rest of his lecture. In other words, Feynman seems to discount the "Boltzmann brain" hypothesis, i.e., the idea that we're just the ephemeral product of a lucky shuffle of a metaphorical box full of marbles. Can someone explain this to me in words other than Carroll's, i.e., what evidence Feynman was using to support his argument?
What are some non-academic jobs near Tel Aviv, Israel that would be a good fit for someone with training in statistics (MA) and ecology (PhD)? [more inside]
I am thinking about doing a study in my school that correlates diet with behaviour. What advice would you give me to ensure watertight data and conclusions? Any other interesting variables I could look at? [more inside]
Statistics question: is it possible to test sets of cumulative data for significant differences in rate? [more inside]
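One way to frame the question above (a rough sketch, not a definitive answer): treat the final counts behind each cumulative series as Poisson events over a known exposure, and compare the two rates with a normal-approximation z-test. The counts and exposures below are invented for illustration.

```python
# Hypothetical comparison of two event rates via a Poisson z-test.
import math
from scipy.stats import norm

count1, exposure1 = 120, 100.0   # e.g. 120 events over 100 days
count2, exposure2 = 90, 100.0    # e.g. 90 events over the same span

rate1 = count1 / exposure1
rate2 = count2 / exposure2

# Standard error of the rate difference under the Poisson assumption
# (variance of a Poisson count equals its mean).
se = math.sqrt(count1 / exposure1**2 + count2 / exposure2**2)
z = (rate1 - rate2) / se
p = 2 * norm.sf(abs(z))          # two-sided p-value
print(f"z = {z:.3f}, p = {p:.3f}")
```

Whether this applies depends on the data: successive points of a cumulative series are not independent, so the test should be run on the underlying counts, not on the cumulative values themselves.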
StatisticAnalysisFilter: I took (pretty close to) scientific observations of the general populace in a neighborhood for a few months (personal project, long story). I measured the number of people who had trait X (or did not have trait X) in two locations, A and B. Now, I want to test the statistical significance of these results. Is the chi-square test sufficient for this? Or is there a better option? [more inside]
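For a two-location, trait/no-trait setup like the one described above, the chi-square test of independence on a 2x2 contingency table is indeed a standard choice. A minimal sketch with made-up counts (the real observations would replace these):

```python
# Chi-square test of independence on hypothetical trait counts.
from scipy.stats import chi2_contingency

# Rows = locations A and B; columns = (has trait X, lacks trait X).
table = [[30, 70],
         [45, 55]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```

If any expected cell count is small (a common rule of thumb is below 5), Fisher's exact test (`scipy.stats.fisher_exact`) is the usual alternative for a 2x2 table.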
How to translate the scientific/statistical English words "accuracy" and "precision" into scientific/statistical Russian? [more inside]
What are some simple experiments that help explain complicated phenomena? [more inside]
"The DNA of humans and chimps is 98.4% identical." I've read that several places. I've also read "The DNA of all living things is 90% identical" and "The DNA of humans and lettuce is 16% identical." How could I find out which of those last two statements is correct? Or is the problem that I don't understand which part of the DNA is being referred to? (Frankly, I'm not that clear on DNA in the first place - I'd just like the right number.)
Tools for a scientific publisher to provide usage statistics for its subscribers? Is this a good idea? [more inside]
Can you recommend a statistics book which is appropriate for the stats required for basic bioscience / clinical research? [more inside]
StatsFilter: I want to compare quantitatively the results of subjecting two different groups of people, A and B, to two different "programs". Which statistical test do I use to see which "program" had a greater effect? [more inside]
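One common choice for a two-group comparison like the one asked about above is Welch's two-sample t-test on the outcome measure, assuming the outcome is roughly continuous. The scores below are made up purely for illustration.

```python
# Welch's t-test on hypothetical post-program outcome scores.
from scipy.stats import ttest_ind

group_a = [5.1, 6.0, 5.5, 6.2, 5.8, 6.1]   # program A participants
group_b = [4.2, 4.8, 5.0, 4.5, 4.9, 4.7]   # program B participants

# equal_var=False gives Welch's test, which does not assume
# the two groups have the same variance.
t, p = ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t:.3f}, p = {p:.3f}")
```

If pre-program baseline scores are available, comparing the *changes* (post minus pre) per participant, or using a paired test within each group, usually isolates the program effect better than comparing raw post scores.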
StatsFilter: Evidence challenging/contradicting conventional controlled/placebo trials? [more inside]
A statistics / scientific convention question. I've noticed in scientific journals that often when a set of data is presented with values normalized to one of the sample groups, and the value for that sample group is arbitrarily set to 1, 10, 100 or whatever, to simplify interpretation, the variability/error data for that one sample group is left out. Is there a good statistical reason for that or is it just some random convention with no good reason? [more inside]
In science, what is the difference between a theory and a law? Why is something called a law rather than a theory?