Screening: Is It Always a Good Thing?

With all the medical research we do, it’s easy to lose sight of what the end goal should be: to save lives. Whether that saving is measured in years of potential life lost, disability-adjusted life years, or quality-adjusted life years, the outcome should make a meaningful difference to people’s lifespans. Why is this so important to keep in mind? New ideas bring ever fancier technologies for screening populations, but we should ask ourselves whether they really make a difference. Could these innovations possibly cause more harm than good? It’s important to keep in mind that while innovation holds great curative potential, if not implemented properly it also has the potential to do great damage.

The goal of new screening technologies is to improve on the natural history of a disease and to reach its unmeasured burden, the submerged part of the so-called “iceberg phenomenon”. That usually means improving the length of life, with or without improving its quality; an indicator we’ve recently come to discover is pretty darn important.

Wilson and Jungner defined the characteristics of a good screening program back in 1968, and several national guidelines have since followed similar principles. These are: the disease must be an important health problem; its natural history must be understood; there should be a high prevalence in the pre-clinical phase, and there must be a long time between first symptoms and overt disease; the screening test must be sensitive and specific, simple and cheap, safe and reliable; and the health system must have adequate facilities available for follow-up diagnosis and treatment. Of course – things that all make sense.

But it is also important that we weigh the harms and benefits of implementing screening programs. For example, the Marmot review in 2013 estimated that in a population of 10,000 women invited to breast screening, 43 deaths were prevented and 129 women were over-diagnosed, roughly one death prevented for every three over-diagnoses. As a profession, medicine is very quick to want to diagnose, but over-diagnosis can bring its own set of problems, such as imparting undue stress and anxiety. Thirty-one percent of all cancers (around half of which are detected by screening) are over-diagnosed each year, equating to some 70,000 cases annually in the UK (see http://www.nature.com/bjc/journal/v108/n11/full/bjc2013177a.html for the full article). Most of us would accept over-diagnosing three people if it saved one life, but just how meaningful is giving a diagnosis?
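
To make the trade-off concrete, here is a minimal sketch in Python of the arithmetic behind the figures quoted above. The per-10,000 numbers are the ones cited from the Marmot review; the larger invited population is purely hypothetical, for illustration.

```python
# Benefit-harm arithmetic for the screening figures quoted above
# (per 10,000 women invited to breast screening, as reported by the Marmot review).
deaths_prevented_per_10k = 43   # breast cancer deaths avoided
over_diagnosed_per_10k = 129    # cancers detected and treated that would never have caused harm

ratio = over_diagnosed_per_10k / deaths_prevented_per_10k
print(f"Over-diagnoses per death prevented: {ratio:.1f}")  # ~3.0, i.e. roughly 1:3

# Scaling the same figures to a hypothetical invited population (illustrative only).
invited = 2_000_000
print(f"Deaths prevented: {invited / 10_000 * deaths_prevented_per_10k:,.0f}")
print(f"Over-diagnosed:   {invited / 10_000 * over_diagnosed_per_10k:,.0f}")
```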

That’s where biases come in. It’s important that we be aware of biases that masquerade as a good thing in screening. With length-time bias, screening preferentially picks up slowly progressing disease, because it spends longer in the detectable pre-clinical phase; screen-detected cases therefore appear to survive longer even though the same number of people die over the same period of time. With lead-time bias, diagnosing illness earlier simply moves the starting line, so survival measured from diagnosis appears longer even when the date of death is unchanged (see diagram below). In those cases, is it better to receive the diagnosis earlier? Or is it better not to receive the diagnosis at all? Is over-diagnosis really making a difference in mortality outcomes? Does the nature of the illness play a part in determining when to communicate a diagnosis, or whether to communicate it at all?

[Diagram: lead-time bias]
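
As a rough illustration, here is a toy Python simulation with invented numbers (not data from any real screening program): if screening only moves the date of diagnosis earlier while the date of death stays fixed, measured survival after diagnosis looks longer even though nobody lives a day more.

```python
import random

random.seed(42)

def simulate(n=10_000, lead_time_years=2.0):
    """Toy model of lead-time bias: screening shifts diagnosis earlier but not death."""
    survival_clinical, survival_screened = [], []
    for _ in range(n):
        # Years from clinical (symptomatic) diagnosis to death, drawn arbitrarily.
        years_to_death = random.uniform(1.0, 6.0)
        survival_clinical.append(years_to_death)                    # diagnosed at symptoms
        survival_screened.append(years_to_death + lead_time_years)  # diagnosed earlier
    return sum(survival_clinical) / n, sum(survival_screened) / n

clinical, screened = simulate()
print(f"Mean survival after diagnosis, clinical detection: {clinical:.1f} years")
print(f"Mean survival after diagnosis, screen detection:   {screened:.1f} years")
# Apparent survival improves by exactly the lead time, yet every death occurs at the same moment.
```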

In other instances, a different effect was observed. In the 1960s, the New York Breast Screening Trial found a striking mortality gap between the women who actually underwent mammography and those who did not: a reported death rate of 42.4 among the screened versus 57.6 among the unscreened. Not all of that gap can be credited to screening. Self-selection is a problematic bias: the women who take up an invitation tend to be healthier and better connected to health care, while a large proportion of the population, invariably those with limited access to care, is effectively excluded.
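
A toy calculation with entirely hypothetical numbers (not the New York trial data) makes the self-selection point concrete: even if screening changed nothing, comparing attenders with non-attenders would flatter the screened group, simply because healthier, better-connected women are the ones who attend.

```python
# Hypothetical illustration of self-selection (healthy-volunteer) bias.
# All figures below are invented for illustration; they are not trial data.
attenders, non_attenders = 7_000, 3_000   # of 10,000 women invited, who actually shows up

# Suppose attenders are healthier at baseline, so their all-cause death rate is lower
# even though, in this toy example, screening itself has no effect at all.
death_rate_attenders = 0.004       # 4 per 1,000
death_rate_non_attenders = 0.007   # 7 per 1,000

deaths_attenders = attenders * death_rate_attenders
deaths_non_attenders = non_attenders * death_rate_non_attenders

print(f"Deaths per 1,000 among screened:   {deaths_attenders / attenders * 1_000:.1f}")
print(f"Deaths per 1,000 among unscreened: {deaths_non_attenders / non_attenders * 1_000:.1f}")
# The screened group looks better, but the difference is entirely down to who chose
# (and was able) to attend, not to anything screening did.
```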

We can’t overlook the dramatic successes screening programs have had in saving lives over the decades. As we can see, though, many ethical challenges emerge. As practitioners and public health and policy experts, we need to be cognizant of the big picture and really appreciate the effects, both good and bad, that new screening technologies can have. Sure, it’s great when a new product or technology comes on the market, but we have to do the research and ask ourselves: will it always have the effect we want?

Acknowledgements: Dr. Geraldine McDarby, National University of Ireland, Galway

Originally from Canada, Manisha Sachdeva is a registered physiotherapist and Irish-based medical student. She works with marginalized populations, particularly refugees and the homeless community. Her latest research includes counseling on end-of-life wishes and integration of advance directives into medical record systems, as well as co-developing a preventive health care tool for an inner city electronic medical record system. She’s the founder of a student think tank and she's interested in the dimensions of social innovation in health care.
