Statistics...Not my strong suit
December 11, 2012 6:09 AM   Subscribe

[StatisticsForTheFeebleMinded]My medical office sees patients who must have monthly blood draws for a condition they have. The samples have the same two tests performed on them at an outside laboratory and by our in-house laboratory. Within the last year, these values have begun to differ wildly. Need recommendations for software/programs/equations/thoughts for analyzing any trend within the numbers that might offer an explanation as to why this is now happening.

Currently, I have a small sample set (~20 patients), who have THE SAME blood sample tested by two different laboratories. I could dig through "the books" more to include all results (from both labs) within the past year for each of these patients.

I slept through my statistics class in college, so I'm hitting a brick wall here. No linear correlation is really evident just from glancing at the numbers.

My variables - Age (which shouldn't be a factor), Test A performed by outside lab, Test A performed by inhouse lab, Test B performed by outside lab, and Test B performed by inhouse lab.

What would be some programs or software that would allow me to plot out these variables, resulting in a trend or trajectory of these numbers?
posted by kuanes to Science & Nature (25 answers total)
 
I just want to point out that you may have another potential variable. Unless the patients walk from one lab to the other to have the test re-done, you also have factors like time of day and whether the draw was taken before or after a meal.

That is, you should include the conditions under which the test is done as another variable if that data is available.
posted by vacapinta at 6:20 AM on December 11, 2012


Response by poster: No threadsitting, but...

As far as I know, food should not affect this test.

I will state that the samples are probably processed 12 to 24 hours later at the outside lab, but this shouldn't be a factor, either.

And as I stated, this has just become a problem in the last 10-12 months. Everything was "fine" beforehand.
posted by kuanes at 6:24 AM on December 11, 2012


I think you could do a two-way ANOVA with R, which could tell you where a significant effect is coming from, as well as point to interactions (a particular test at a particular lab for a particular age group, for example).
posted by Blazecock Pileon at 6:25 AM on December 11, 2012


vacapinta, I think the blood is getting drawn and then half of the drawn blood is getting sent to the outside lab, and half of the drawn blood goes to the inside lab.

The easiest way to plot this kind of thing is with Excel.
posted by gregr at 6:26 AM on December 11, 2012


Where are you looking to go with your stats? Assuming it is actually the same blood specimen being split between two labs, and that the time delay between analyses has not previously been a problem, I'd call lab 2 and see if there's been a change in methodology or in the units used in the interpretation. A personnel change might also be a factor, though it shouldn't be. Also, if the specimen is being shipped, handling by the shipper might matter (for instance, is the specimen truly kept frozen the whole time, etc.).
posted by beaning at 6:26 AM on December 11, 2012 [1 favorite]


Have you tried contacting the pathologists / scientists at the respective labs to discuss it? Finding out whether there are any changes in the way they process their samples is absolutely necessary. Part of being experienced in this field also involves knowing why test results can go funny for technical reasons.
What is the test?
posted by chiquitita at 6:29 AM on December 11, 2012 [1 favorite]


I assume that the reference ranges from the outside lab have not changed? Labs change methodology for all sorts of reasons, and results change as a consequence. Reference ranges should also change. I'm not sure what statistics are going to tell you here.
posted by OmieWise at 6:32 AM on December 11, 2012


Rather than performing a complex statistical analysis, I would just look for the earliest time the results started to diverge (you've said ~10-12 months, but a more precise date would be helpful), then ask the senior people in charge of each lab whether anything changed at that time.

You should also ask what calibration each lab has done of their instruments, and whether they can re-calibrate with appropriate reference samples.
posted by James Scott-Brown at 6:40 AM on December 11, 2012 [2 favorites]


Response by poster: The outside lab insists that nothing has changed, although there is a culture of "our results couldn't possibly be wrong."

This is also a process (the lab result) that's been around for a good 30 years, so it's not really something in which the methodology would have changed.

I'm fairly certain that how the sample is shipped and received has not changed in this time period.
posted by kuanes at 7:01 AM on December 11, 2012


If there's anything certain, it's that assumptions are often wrong. Run the numbers and see what you find. At the very least, it'll give some quantitative direction to start focusing your investigation.
posted by Blazecock Pileon at 7:03 AM on December 11, 2012


Has all your equipment been recalibrated within the correct time frame for recalibration? Has the lab's? Ask for their calibration documentation.
posted by bookdragoness at 7:04 AM on December 11, 2012


Yeah, I don't think mathematical analysis is going to tell you what you don't already basically intuit - the analysis would be to discover IF the test results are diverging in a way that's significant, and it's already so bad that you're noticing just by glancing at the numbers, as it were.

As has been said above, you need to spend your time finding out WHY this is happening.

Not sure why you are doing the same test in-house and out, but it would appear that the place to start is by sharing your findings with the outside lab (which as a major vendor should be willing to help you figure out what's going on with your in-house tests, assuming that they're not trying to sell you on taking that part to them) and/or the manufacturer of the equipment you're using in-house. You might also find a third party lab technician/consultant to bring in to help.
posted by randomkeystrike at 7:06 AM on December 11, 2012


On preview: if your outside lab acts like nothing could possibly be wrong, another entity to consult would be a competing lab. Obviously, they're biased, but they're still experts whose brains can be picked.

does this lab you're using have a marketing/sales rep who's more customer-focused than the people you've been talking to?

Blazecock does make a good point that's somewhat counter to mine - it sometimes happens that a few anecdotes override the trend. For example, a guy working for me has 2-3 really bad weeks, and I start to think "he's not doing a good job." But then I average his past 7-8 weeks and realize that his average is right where it's always been. So do confirm that MANY or MOST of the results are off, and that they're off in a systematic way (one's always higher than the other, or one has more variability than the other). But you don't need a lot of deep stats training to see that kind of trend.
posted by randomkeystrike at 7:09 AM on December 11, 2012


Best answer: For the number of samples you are talking about, Microsoft Excel is a good program for plotting and analyzing data.

One thing to plot is the results of test A from location one against the results of test A from location two. A strong linear correlation with a slope other than 1 (or a nonzero intercept) may indicate that the unit of measurement the test is reported in has changed, or that one test is giving systematically lower readings. No linear trend means that one or the other (or both) locations may be giving completely bad readings.

In the first case, it may be best to send results from a few people to a third lab to arbitrate. In the second case, you could use a third lab, or you could split samples into quarters and send two each to the in-house lab and two each to the outside lab.

If you want people good at data analysis to dig into your results, you could always make a table in Excel, save as CSV, and post to pastebin with a link in this thread. Anonymized data is OK to post, and I would remove the patient ages before posting. (Year of birth is considered PII for patients over 90 or thereabouts.)
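As a concrete sketch of that lab-versus-lab comparison, here is what the regression behind such a plot could look like in Python with scipy (the paired numbers below are invented for illustration, with a deliberate 2.2x scaling baked in):

```python
import numpy as np
from scipy import stats

# Hypothetical paired results for test A: the same sample measured by both labs.
inhouse = np.array([12.0, 8.5, 15.2, 10.1, 9.8, 14.0, 11.3, 13.7])
outside = 2.2 * inhouse + np.array([0.3, -0.2, 0.1, 0.4, -0.3, 0.2, -0.1, 0.0])

fit = stats.linregress(inhouse, outside)
print(f"slope={fit.slope:.2f}  intercept={fit.intercept:.2f}  r={fit.rvalue:.3f}")
# Strong correlation but a slope far from 1: consistent with a units or
# calibration change. A weak correlation would instead suggest one lab's
# numbers are essentially noise.
```

The same scatter plot and trendline can of course be done in Excel; the point is just to look at slope, intercept, and correlation together rather than eyeballing raw numbers.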
posted by Maxwell_Smart at 7:47 AM on December 11, 2012


Response by poster: This is confirmed. To the point that patients who are refusing a certain kind of treatment for this condition are having numbers in the "treatment range," which is essentially an impossibility.
posted by kuanes at 7:48 AM on December 11, 2012


Response by poster: I may look at what Maxwell_Smart has suggested. I'll get back to some of you.
posted by kuanes at 7:51 AM on December 11, 2012


Sending a certain number of samples to a third lab (as well as continuing to send them to your in-house and current outside lab) is what occurs to me. This would confirm the problem and also give definite positive evidence as to which of the two labs--yours or the outside lab--has gone off the rails.

Also, this is a business relationship so if you have evidence that the lab is giving wrong results and not taking reasonable steps to address the problem then they are not holding up their end of the business deal and the course of action is to switch to a competitor, if that option is available.
posted by flug at 7:55 AM on December 11, 2012 [1 favorite]


Response by poster: Not a business relationship, as this is a "state" lab.

Third party lab has been suggested, but this is dicey, because of the nature of the test and the fact that it's run by the state.
posted by kuanes at 8:21 AM on December 11, 2012


It might actually help if you identified the test.
posted by OmieWise at 8:23 AM on December 11, 2012


Response by poster: Phenylalanine and Tyrosine Levels (plus ratio).
posted by kuanes at 8:32 AM on December 11, 2012


Best answer: Let's get the process clear here.

Patient comes to your facility.
Member of internal staff does a blood draw, filling two (four?) vials in the same encounter.
Vial 1 (or 1&3) is processed internally to do tests A and B
Vial 2 (or 2&4) is sent to external facility to do tests A and B
Test A and B return a single numerical value

concern: numerical value returned by external lab for tests A and B diverge notably from numerical value result from internal testing for tests A and B.

Is this accurate?

My concern about your desire to test these numbers and their divergence is whether the values represent a linear result. If they don't, then I'm not sure a statistical approach is going to help you a lot.

If, as you say, the resulting values from external lab represent a value that is implausible given their treatment choices then it seems like you'd be far better off if you can connect other observed/recorded symptoms/indications.

I'd think your best way to demonstrate an issue here is if you have a pattern of samples from individuals that crosses this period where things changed. If Patient A has two sample patterns and the two histories don't follow a shared curve then that indicates an issue. I guess if the external lab is flat-out falsifying data and doing it randomly then you might be able to demonstrate a pattern of their sample plot not following a similar trajectory from month to month, which it should within reason.
posted by phearlez at 8:38 AM on December 11, 2012


Response by poster: You are accurate in your summary, phearlez.

At some point within the last year, the outside lab's results started "declining," so my doctor decided to have these done in-house as well, to see what the deal was. I guess what you state in your last paragraph is what I'm ultimately attempting to do.
posted by kuanes at 8:55 AM on December 11, 2012


So you don't have any internal tests from the period before the problem.

My thought is there are two things you want to look at, to test two theses.

Let's call these AI for test A done internally, AE for test A done externally, BI for test B done internally, etc.

Option One: external lab has a non-deliberate process problem. If that's the case then I'd think you could plot each patient's results in your spreadsheet with each 'stick' in a column. In your rows you have AI, AE, AI-AE, BI, BE, BI-BE.

The plot of the differences should be somewhat flat if everyone is being consistent. This requires you to be pretty certain about your own process and testing, and if I were the external lab I might question whether you, as an organization doing a small number of these, are as consistent as I am. If these are Guthrie tests and humans are comparing sizes to charts, then they might claim you're not reading them as accurately.

However a continued divergence in one direction would seem, to me, to be an indication of an issue with them. Additionally, if you are reasonably certain from the patients that they're not in compliance and your tests show that and theirs doesn't then that's a significant thing.
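That difference column can be sketched in a few lines of Python instead of a spreadsheet (all values invented; the AI/AE naming follows the shorthand above):

```python
import statistics

# Invented per-draw values for one patient: test A in-house (AI) vs.
# external (AE), each pair from the same split sample.
AI = [10.2, 11.0, 9.8, 12.1, 10.5, 11.4]
AE = [14.9, 15.6, 14.3, 16.8, 15.2, 16.0]

diffs = [e - i for i, e in zip(AI, AE)]
mean_diff = statistics.mean(diffs)
sd_diff = statistics.stdev(diffs)
print(f"mean difference {mean_diff:.2f}, sd {sd_diff:.2f}")
# A large mean difference with a small sd is a consistent one-directional
# offset, pointing at a process problem. A large sd means the external
# results are scattered relative to yours.
```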

Option Two: external lab has a falsification problem, either one person or organizationally. If that's the case then I think the above spreadsheet will be all over the map, assuming they don't have a way to maliciously look at a patient's history and use fake numbers similar to the previous test.

A better way of detecting this, I think, would be for you to send out some samples of a known quantity: some blood draws from your own staff. Do a double-draw and send them out to external lab as two different people. If they come back notably different then you know something's screwy over there. You might be able to detect a procedural issue (like sloppy culturing or growth-circle reading) that way but it for sure will reveal if someone is pulling numbers out of their ass.
posted by phearlez at 9:37 AM on December 11, 2012 [1 favorite]


These tests are usually done by chromatography, and there are about a million things that can go wrong with those instruments: parts need cleaning, things get clogged or smeared, pressures are off, etc. Sometimes things go wrong very slowly and the lab might not even notice the values are off. In our lab, we had a problem where results were divergent between two instruments, but each individual instrument was internally consistent. No one noticed until a research team who was using our lab contacted us; they only noticed because they had run many samples in multiples.

I would get your data, but don't go overboard with additional testing, then call the lab and ask to speak to the Director. This should be an MD or PhD. Don't bother with division supervisors or techs. Present your problems to that person, and they are pretty much obligated to address them. Don't let them brush you off. Things like this are their responsibility.
posted by Missense Mutation at 9:56 AM on December 11, 2012


Oooo, assay troubleshooting is what Tiggers like best!

What method are they using for the tests? (A quick Googling reveals multiple methods.) Are both tests coming back equally skewed? Are they both skewed the same way? Was it a sudden shift in results, or has it been a slowly increasing trend? There are different implications in each case.

If you do what phearlez suggests, what I would do is take some normal serum and some patient serum and do a spiking series, mixing these to get four levels across the range of the test. If the levels of the samples you send are 10, 15, 20 and 25 by your method (just pulling numbers out of the air here), the numbers you get back ought to give you a good idea of what the problem is. For example, if you get back 21, 15, 12 and 35, their assay is broken. If you get back 15, 20, 25 and 30, that's a constant shift of 5, and it's probable that they've started using a subpotent standard without adequate bridging. If you get back 10, 17, 24 and 31 (and it's a chromatographic method), somebody is skimping on a cleaning step, or they're making up their mobile phase wrong, or something like that.
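That decision logic can be sketched as a small Python helper (a hypothetical function; the slope and residual tolerances are arbitrary illustration values, not validated acceptance criteria):

```python
def classify_assay_error(sent, returned, tol=0.5):
    """Roughly classify a spiking-series result (hypothetical helper)."""
    n = len(sent)
    mx = sum(sent) / n
    my = sum(returned) / n
    # ordinary least-squares fit of returned vs. sent concentrations
    slope = (sum((x - mx) * (y - my) for x, y in zip(sent, returned))
             / sum((x - mx) ** 2 for x in sent))
    intercept = my - slope * mx
    # worst-case deviation from the fitted line
    resid = max(abs(y - (slope * x + intercept)) for x, y in zip(sent, returned))
    if resid > tol:
        return "assay broken (no consistent relationship)"
    if abs(slope - 1) <= 0.05 and abs(intercept) > tol:
        return "constant shift (e.g. subpotent standard)"
    if abs(slope - 1) > 0.05:
        return "proportional error (e.g. mobile phase / cleaning issue)"
    return "agreement within tolerance"

print(classify_assay_error([10, 15, 20, 25], [15, 20, 25, 30]))  # constant shift
print(classify_assay_error([10, 15, 20, 25], [10, 17, 24, 31]))  # proportional
```

On the worked numbers above, 15/20/25/30 fits a slope of exactly 1 with an intercept of 5 (constant shift), while 10/17/24/31 fits a slope of 1.4 (proportional error).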

That "nothing here has changed" thing is a real problem. If you can find the exact date when you detected a shift, that will help if it's a lot-to-lot thing.
posted by Kid Charlemagne at 5:50 PM on December 11, 2012


This thread is closed to new comments.