How do you estimate a cancer risk from radiation exposure?
April 3, 2011 8:37 AM
How is the following risk assessment from this NYT infographic reached?: "The estimated increase in cancer risk of eating two unwashed pounds of [radioactive cesium-tainted broccoli] is about two chances in a million." Is it based on population studies, or sieverts, or something else? What biological effect do sieverts measure?
I saw this previous question, but it's not quite what I'm asking; I understand how radiation can cause cancer, but not how radiation levels get translated into differing likelihoods of cancer. To be clear, I'm not asking out of paranoia but out of curiosity for the science; reassurances that it's low aren't necessary because I'm not worried about radiation levels, I'm looking for answers about what is measured and how the extrapolations are made.
Do we get these numbers the same way we get numbers about other types of cancer? I ask because it seems like the data would be harder to gather for something like radiation exposure, which at higher levels is a rarer event -- but have we studied this more closely than I would assume? It's my understanding -- which may be wrong -- that our statistics for cancer causes tend to be based on studies of afflicted populations, or at least some of them are; i.e., you can look at a group of cancer sufferers and try to identify common things they've been exposed to, or you can look at a population exposed to something out of the norm and calculate how many of them have or get cancer.
Are these calculations based on this kind of data? Or are they extrapolations based on measuring how mutagenic certain kinds of radiation are at certain levels? Or do they translate everything into sieverts, and an extrapolation is made from that?
I have a feeling the answer I'm looking for may be in this question: when calculating sieverts, what is the biological effect being measured, and how is it quantified? Rate of cell division, mutation rate, rate of cell death, or something else? What I've read isn't clear, but it makes it sound like a mishmash of anything damaging, all of which gets quantified and somehow added together -- is this the case? If so, does that mean any 'sievert' can represent, say, mostly mutagenic effects OR mostly cell death OR anywhere in between? That would seem to be problematic for estimating cancer rates. Can you actually make a close prediction of cancer rates using sieverts, or is it pretty rough, or do we not do this at all, so the question is irrelevant?
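To make the question concrete, here is the kind of back-of-the-envelope arithmetic that typically sits behind a figure like "two chances in a million", sketched in Python. Every input is hypothetical: the contamination level is chosen purely so the example lands near the NYT number, and the coefficients are round ICRP-style values -- roughly 1.3e-8 sieverts of committed dose per becquerel of Cs-137 ingested, and a nominal excess cancer risk of roughly 5.5% per sievert under the linear no-threshold assumption. None of this is the infographic's actual calculation.

# Hypothetical sketch, not the NYT's actual method. A sievert is absorbed dose
# (gray, J/kg) weighted for radiation type and tissue sensitivity; the cancer-risk
# number then comes from multiplying that dose by an epidemiologically derived
# risk coefficient (largely from atomic-bomb survivor studies), assuming the
# linear no-threshold (LNT) model.

contamination_bq_per_kg = 3000.0   # made-up Cs-137 activity in the broccoli
mass_kg = 0.91                     # roughly two pounds

# Committed effective dose per becquerel ingested, from ICRP-style tables
# (about 1.3e-8 Sv/Bq for Cs-137 in adults; treat as approximate).
dose_coefficient_sv_per_bq = 1.3e-8

# Nominal risk coefficient (ICRP 103): roughly 5.5% excess lifetime cancer
# risk per sievert, averaged over a whole population.
risk_per_sv = 0.055

activity_bq = contamination_bq_per_kg * mass_kg
dose_sv = activity_bq * dose_coefficient_sv_per_bq
excess_risk = dose_sv * risk_per_sv

print(f"Ingested activity:            {activity_bq:.0f} Bq")
print(f"Committed effective dose:     {dose_sv * 1e6:.0f} microsieverts")
print(f"Estimated excess cancer risk: {excess_risk:.1e}"
      f" (about {excess_risk * 1e6:.0f} in a million)")

The epidemiological content is entirely in the risk-per-sievert figure; the rest is unit conversion.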
Best answer: Perhaps an overly detailed explanation from Lubos Motl.
posted by empath at 8:54 AM on April 3, 2011 [1 favorite]
Response by poster: Thank you, empath! That post, and especially the page it linked to, answered my questions. :-)
posted by Nattie at 9:13 AM on April 3, 2011
This is technical, but you may find it informative. More detail.
posted by blue mustard at 9:18 AM on April 3, 2011
As I recall from the interview (I can't find a link), he was a scientist, and he was careful to make the distinction between people dying at an unexpectedly young age and determining that they died because they had been exposed to excessive levels of radiation. I inferred that he was trying to say that different people react to different levels of radiation differently, and that, therefore, the only way for epidemiologists to parse radiation risks is to do so probabilistically. That is, if you have exposure level X you have Y% chance of developing illness Z.
posted by dfriedman at 8:50 AM on April 3, 2011
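To put a toy number on dfriedman's point about probabilistic parsing (everything below is a hypothetical illustration, not data from the interview): even a sizeable exposed population yields an expected excess of cancers that is tiny next to the background rate, which is why the risk can only be stated as "exposure X gives a Y% chance" and never attributed to an individual case.

# Hypothetical cohort, illustrating why the excess is only visible statistically.
n_people = 100_000            # made-up exposed population
extra_dose_sv = 0.001         # made-up average extra dose: 1 mSv each
risk_per_sv = 0.055           # same nominal ICRP-style LNT coefficient as above
baseline_lifetime_risk = 0.4  # roughly four in ten people develop cancer anyway

expected_baseline = n_people * baseline_lifetime_risk
expected_excess = n_people * extra_dose_sv * risk_per_sv

print(f"Expected baseline cancers: ~{expected_baseline:,.0f}")
print(f"Expected excess cancers:   ~{expected_excess:.1f}")
# About 5.5 extra cases on top of about 40,000: far too few to point to any
# individual, so the exposure shows up only as a small shift in population risk.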