Comparison of PSA test results from Quest and Labcorp
Quest and Labcorp use different instrumentation and calibration standards for PSA testing. A 2005 web article by Dr. William J. Catalona stated that these differences can yield test results that vary by as much as 23%. The WHO standard, which Quest uses, yielded the lower PSA scores in the Catalona article (https://drcatalona.com/quest/psa-tests-are-not-all-the-same/). That article prompted me to find out for myself.
Yesterday, I had the opportunity to have back-to-back blood draws for PSA tests at both labs (literally next door to each other in the same facility). The Quest test was for normal PSA (lower detection limit of 0.1), and the Labcorp test was an ultra-sensitive test (I would have gone for a normal PSA test, but that’s what my doctor ordered).
Here are the results:
Labcorp: 0.094
Quest: 0.11
Both very low, and close, but Quest was 17% higher if you want to compare.
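For anyone who wants to check the arithmetic, the 17% figure comes from taking the difference between the two results relative to the lower (Labcorp) value. A quick sketch:

```python
# Percent difference between the two PSA results, computed
# relative to the lower (Labcorp) value from the post above.
labcorp = 0.094
quest = 0.11

pct_higher = (quest - labcorp) / labcorp * 100
print(f"Quest result is {pct_higher:.0f}% higher")  # about 17%
```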
In my case, these low values, and the difference between them, are significant because I am ten years post radical prostatectomy (2015) and until June 2025, my PSA was undetectable on Labcorp tests.
In June, my PSA on a Quest test came back 0.11 (same as today’s results). No change in three months—good news! But, my PSA is above the limit of detection—not good news.
Interesting to note that my ultra-sensitive test at Labcorp was below 0.1, the lower threshold of detection for a normal PSA test.
The 0.11 measurement in June set in motion a PET scan and a pelvic MRI, both of which showed “concern for recurrence” in the anastomosis. So, strong imaging evidence for local recurrence, but no imaging evidence for metastatic disease. DREs also detected a small nodule. The evidence for local recurrence seems pretty clear even without a biopsy, and with PSA < 0.2.
I am planning to start EBRT within the next week or two.
Interesting to note that had I used Labcorp back in June for a normal PSA test, the result might have come back “below detection” and I would have been sent home until a follow-up in another year. Instead, I am now a two-time cancer survivor. I’m just glad I caught it early and that I am treating it early. A small difference in PSA test results yielded a rather dramatic shift in diagnosis.
It is also interesting to note that even though my anastomosis lit up with an SUV of 13 on the PET scan, conventional wisdom in the medical world says that PCa doesn’t show up on PET scans until PSA is around 0.5 (again, I’m at 0.1). I don’t know how to reconcile that other than to say, conventional wisdom isn’t always right. I suppose there is some very slight chance that the palpable nodule is benign tissue, but highly active metabolically. However, I’m not going with that unlikely possibility, nor are my oncologist and urologist.
So, there you have it. Make what you will of this info. In my case, different tests from different labs yielded slightly different PSA test results that were highly significant.
Interested in more discussions like this? Go to the Prostate Cancer Support Group.
Which is the test used by the VA?
That I don’t know. Hopefully someone here can answer that. It’s a good thing to know, for sure. My understanding also is that values produced by the standard used by Labcorp were used to determine the markers for suspiciously elevated PSA (4.0) and BCR (0.2).
Melvin: Chile3 here. Please look at my posting referencing the Vesicourethral Anastomosis. Could you please comment relative to any conversations you had with your team prior to your imaging series?
Thank you in advance,
Chile 3
I am a former Laboratory Director and Clinical Lab Scientist (a.k.a licensed Medical Lab Technologist). I was in the field for forty years. A little knowledge for everyone about lab testing:
There is inherent "variation" in lab test results no matter the method or instrument (brand and model). It has been generally accepted for decades that any test result that varies from another result on the same patient by 10% or less is "clinically insignificant," and can be attributed to that day-to-day, hour-to-hour, inherent method and instrument variation. However, a result that varies by more than 10% from the previous result on the same patient is considered "clinically significant," and warrants repeat testing and investigation.
Laboratories have clinical lab information systems (computer systems) with built-in quality control "rules," known as the Westgard Rules, developed by Dr. James Westgard and now carried on by his son. The rules are built into the computer system, and when a result on a patient varies by more than 10%, or the quality control results vary by 10% from what is expected, it triggers an alert that results in an investigation as to why the variation occurred. In fact, clinical lab specimen testing is so accurate and reproducible that the analytic phase accounts for the fewest lab errors of all. "Pre-analytic" errors (human handling of the specimen before testing) and "post-analytic" errors (computer systems, or human errors in reporting the data) have been the cause of the majority of lab errors now and for the past several decades. The analytic instrument testing phase is nearly perfect.
Two things are continuously monitored in the lab testing environment with quality control materials and methods: accuracy and precision. Think of a target with a bullseye. The "bullseye" in the middle of the target is the true result. The test method (chemical reagents and instrument) seeks to "hit" that true value "accurately" through calibrating the instrument and method with known chemical "standards" and quality control material that is like patient serum. "Precision" is reproducibility: can the method repeatedly yield consistent results? You can have a test method and instrument with high precision (very reproducible), but if it isn't "accurate," hitting the true value, then it is meaningless; you need a new method or you need to recalibrate the instrument. You could hit the same "outer ring" of that target consistently and say the method and instrument have good "precision" (reproducibility), yet be very "inaccurate," well away from the "true value" (bullseye). You could also have a method and instrument that scatters results across every ring of the target; that method would be neither accurate nor precise. So, you know where this is going: you want a method that is "accurate," hitting the true, expected value, AND you want it to do so consistently with reproducibility, which means it is "precise." If you had ten "arrows" (testing events), you would want all ten arrows to hit the middle bullseye to show it is an accurate and precise (reproducible) test method.
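The bullseye analogy can be put in numbers: accuracy is how close the average of repeated results lands to the true value (the bias), and precision is how tightly the results cluster (the standard deviation). A small sketch using made-up control values for illustration only:

```python
import statistics

true_value = 4.0  # the "bullseye": the known value of a control material

# Ten repeat measurements each (hypothetical numbers for illustration)
precise_but_inaccurate = [4.52, 4.50, 4.51, 4.49, 4.50,
                          4.51, 4.50, 4.52, 4.49, 4.50]
accurate_and_precise = [4.01, 3.99, 4.00, 4.02, 3.98,
                        4.00, 4.01, 3.99, 4.00, 4.00]

for name, runs in [("precise but inaccurate", precise_but_inaccurate),
                   ("accurate and precise", accurate_and_precise)]:
    bias = statistics.mean(runs) - true_value  # accuracy: distance from bullseye
    spread = statistics.stdev(runs)            # precision: reproducibility
    print(f"{name}: bias={bias:+.3f}, stdev={spread:.3f}")
```

The first method clusters tightly (good precision) but sits half a unit off the true value (poor accuracy); the second does both well.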
Through careful comparison and evaluation, labs select and use a certain instrument and method. They calibrate the instrument at a frequency per the manufacturer's recommendation, as well as per in-lab data that evaluates the calibration stability. Then, depending on the stability of the method, "high," "normal," and "low" quality control material that mimics human serum or plasma is run once per day, twice per day, or on each lab shift...sometimes it is run with each analytic run of one or more specimens. If the quality control values are within the expected range, then the lab knows that the test run of patient samples will be valid. If the quality control values are outside expected values, then the test run is rejected based on the Westgard Rules used by the lab, and the situation is investigated to find out why. It could be an instrument functionality issue, for example: an incorrect specimen sample volume aspirated into the test cup; an incorrect amount of chemical reagent dispensed into the testing cup; an electronic malfunction due to heat, humidity, or other issues; or a bad chemical reagent lot # or a vial past its expiration date. As you can see, there are many potential causes of lab testing errors, and this is a massive issue for every lab. Many labs have quality control techs who analyze data constantly to catch errors or "bias" and to recognize when a statistical "trend" or "shift" is occurring that signals a problem. Every instrument has these features built in as well, along with the Westgard Rules. Every lot #, expiration date, and date of rehydration or preparation of the quality control material, chemical reagents, or specimens is monitored. Finally...
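As a toy illustration of how one of those QC rules works, here is the 1-3s rule (reject a run if a control result falls more than 3 standard deviations from the control's established mean). This is my own simplified sketch with hypothetical numbers, not actual lab software:

```python
def rule_1_3s(control_result, mean, sd):
    """Westgard 1-3s rule: reject the run if the control result
    falls more than 3 SD from the control's established mean."""
    return abs(control_result - mean) > 3 * sd

# Hypothetical control with established mean 2.0 and SD 0.05
print(rule_1_3s(2.04, mean=2.0, sd=0.05))  # False: within limits, run accepted
print(rule_1_3s(2.20, mean=2.0, sd=0.05))  # True: beyond 3 SD, run rejected
```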
This gentleman's comparison of the two test values is inaccurate. He had his blood tested by two different labs using TWO DIFFERENT METHODS AND INSTRUMENTS. This is an "apples to oranges" comparison even though both were "PSA" tests, one being the ultra-sensitive method. The only way the comparison would be valid is if the two labs, LabCorp and Quest, used the same exact instrument and test method, ideally with the same lot # of chemical reagent, and tested portions of the same blood sample (split the sample and send half to LabCorp and half to Quest). Only then could a truly close comparison be made. I hope this helps somewhat in your understanding of specimen collection, quality control, instruments, and accuracy vs. reproducibility in clinical labs!
Thank you for your very detailed explanation of testing methods, and controls and safeguards to ensure accuracy and precision. Very informative.
The point in my post was not about inherent flaws in testing, or to say that results from different labs ARE comparable. Rather, it was that different instruments and standards do not necessarily produce the same results, for exactly the reasons you described. In my case, the results were not far off, despite different standards being used, but the percentage difference was significant, as might be expected. I think many people are not aware of this, and that includes medical providers. That is the real point. I have received dismissive looks from providers who think that different labs with different instruments will produce consistently similar results.
When my PSA first rose above the detectable limit of 0.1, my urologist initially dismissed it as “lab variation”. Sure, that could have been the case, but he was resistant to immediately repeating the test to find out. He suggested re-testing in 3 months. Given that my reading of 0.11 potentially signaled a BCR, I was not satisfied with his suggestion, so I got my PCP to order another test. I went to the same collection facility, which sent my blood to the same lab as before. It came back at 0.12. I considered that highly consistent with the previous test result, and my PCP agreed that it cast doubt on the lab variation explanation. The 0.11/0.12 results appeared to be real. Ultimately, that led to scanning that confirmed recurrent prostate cancer.
For my latest PSA test, my provider wanted to send me to Labcorp. I told them that my 0.11 result from 3 months prior was done by Quest and that I wanted the next test to be done by them, otherwise the results would not be comparable. I got the “seriously, it doesn’t matter stare”. Also, the provider did not tell me that she was ordering an ultra-sensitive test, which I have never done before (I didn’t even know it was ultra-sensitive until I saw the test results). So, once again I had my PCP put in an order to Quest for a normal PSA. This was not a controlled test, just the best I could do to ensure consistency and hopefully make a point about that. The result came back 0.11, just like three months prior. The Labcorp ultra-sensitive test came back 0.094. Taken at face value, one might have wrongly concluded from that result that my PSA had decreased and that it was below the magical 0.1 mark. The concurrent test by Quest showed otherwise—my PSA was unchanged. I reported this all to my oncologist’s office, hoping they might get better educated on the matter.
So, I agree with your point, different labs and different tests are “apples and oranges”. Test results using different standards and protocols are not interchangeable and patients and providers alike should be fully aware of this.
I wish your excellent explanation here was required reading in every provider’s office. Thank you.
Rlpostrp:
Thank you.
The VA does not use a single laboratory. Most do them in-house but many send them out to local labs with whom they contract.