7971:1 – What will you trust when it comes to the safety of HRT?

You get used to outrageous medical claims in the press, but The Telegraph has truly surpassed itself today with its front page headline declaring that ‘HRT “is safe” for postmenopausal women after all’.

The article states that new research ‘has found no evidence that HRT is linked to any life-threatening condition’, and makes much of the fact that the new study followed women for a decade. There is a quote from Dr Lila Nachtigall, one of the study authors and a Professor of Obstetrics and Gynaecology at New York University, who claims that: ‘the risks of HRT have definitely been overstated. The benefits outweigh the risk.’

Prof John Studd from London is even more forthright, saying: ‘Most GPs are afraid of HRT – they will have learnt as medical students that it is linked to health risks. But those studies that were replicated in the textbooks were worthless. They collected the data all wrong.’

These are bold statements, and so you would expect them to be based on a significant piece of research. The main study that Prof Studd so comprehensively dismisses is the British Million Women study – over 1 million women were studied specifically to look at the risk of breast cancer with HRT and it found a small, but significant, increased risk. To overturn the findings of such a significant piece of research would require something big.

So what is this new research? Well, the article, as is so often the case, fails to tell you – but if you are still reading by the 11th paragraph you may start to have your doubts: the study followed 80 women. 80! Not 800 000, or even 80 000, but 80! To be fair, when you look at the study itself it’s actually 136 – 80 women on HRT and 56 without. So that’s 1 084 110 women in the Million Women study against 136 in this new, apparently game-changing research – a ratio of 7971:1.

What’s more, when you look at the new study in detail (and here I’m grateful to Adam Jacobs on Twitter, who managed to locate it), the study was not designed to look at the safety of HRT – the intention of the research was to answer a question about the effects of HRT on body fat composition, and any findings on the safety of HRT were only a secondary consideration. Furthermore, it is described as a retrospective cohort study – that means it looked backwards at the history of these 80 women, so if a woman had got breast cancer related to HRT she might not have been alive to take part in the study in the first place.

Even if the study had been designed to prove there was no link between breast cancer and HRT, the Million Women study suggests an increase of only 5 extra breast cancers in 1000 women taking HRT for 10 years – so 80 women would only have 0.4 extra breast cancers between them – meaning the study is far too weak to draw any conclusions at all. Oh – and the study was sponsored by Pfizer, who might just have a commercial interest in lots more women going on HRT.
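The arithmetic behind both figures is easy to check – a few lines of Python (using only the numbers quoted above) reproduce the ratio in the title and the expected number of extra cancers in the new study:

```python
# Numbers quoted above
million_women_n = 1_084_110   # participants in the Million Women study
new_study_n = 136             # 80 women on HRT + 56 without

# Ratio of the two study sizes
print(round(million_women_n / new_study_n))   # -> 7971

# Million Women study estimate: ~5 extra breast cancers per 1000 women
# taking HRT for 10 years, applied to the 80 women on HRT here
extra_cases = 5 / 1000 * 80
print(extra_cases)                            # -> 0.4
```

With an expected 0.4 extra cases, even a perfectly designed study of this size could not hope to detect the risk the Million Women study reported.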

The Telegraph was not the only newspaper to pick up the story, but it was by far the worst reporting among the broadsheets – The Guardian, for instance, picked up the small number of women in the study and tried to bring a sense of balance to its piece – just so long as you read past the headline and the first two paragraphs.

In closing, I would like to say one or two things to Prof John Studd of Wimpole Street. The first is that if you are going to have an official website it would be best, for reasons of probity, if you could include an easy to find declaration of interests; maybe I am being dense, but I failed to find yours. Secondly, GPs are not afraid to prescribe HRT – and we have learnt one or two things since medical school – but we do like to prescribe it after having a discussion with the woman concerned about the balance of benefits versus risk, as we like to base this on reliable evidence.

And for a woman considering HRT, wondering what all this means? HRT remains the best way to control symptoms of the menopause, which can be very distressing. There is an increased risk of some cancers, but it really is quite small, and many women feel it is well worth taking that risk in order to feel well; have a chat with your GP about it.

 

95% Less Harmful – the Story of a Statistic

When Public Health England (PHE) published their recent report on e cigarettes, the statistic to hit the headlines was the claim that the electronic variety were ‘95% less harmful’ than standard cigarettes. It’s a figure that will have entered the collective consciousness of journalists and vaping enthusiasts, and I can guarantee that we will hear it quoted again and again in coming months and years.

The question is: where has it come from, and what does it mean?

The first question is easy to answer: the 95% figure does not come from PHE. Their report simply quotes the estimates made by another group of experts published by Nutt et al in European Addiction Research. Simply put, PHE have said: ‘other experts have guessed that e cigarettes are 95% less harmful than standard cigarettes, and that seems about right to us.’

The over-reliance on the findings of another group of experts has received some very public criticism – most notably in an editorial in The Lancet, when it emerged that the findings of this group had been funded by an organisation with links to industry, and that three of its authors had significant financial conflicts of interest. These are valid points, although they might have carried more weight had The Lancet included the author’s name and declaration of interests alongside the editorial.

The second question is harder to answer, and here is my main concern with how the 95% figure has been presented. What does ‘95% less harmful’ actually mean?

If I were a smoker, wondering whether to switch to vaping, I would primarily be interested in one thing: how harmful they are to me. In other words – am I less likely to die or get ill if I switch to e cigarettes?

Well, the PHE report would seem to answer this question – in the foreword to the full report the authors state that e cigarettes are ‘95% less harmful to your health than smoking.’ The trouble is that the report from which they obtained the 95% figure looked at far more than just the effects of smoking on the health of an individual.

The piece of work by Nutt and colleagues involved a group of experts being asked to estimate the harm of a range of nicotine products against 12 different criteria – these included the risk to individual health, but also other societal harms such as economic impact, international damage and links with crime. The 95% figure was only achieved after all 12 factors were weighted for importance and then each nicotine containing product was given a composite score.

Now the propensity for a commercial product to be linked with criminal activity may be very important to PHE, but it wouldn’t influence my individual health choice, nor the advice I would want to give to patients.

Moreover, the work by Nutt and colleagues includes this statement: ‘Perhaps not surprisingly, given their massively greater use as compared with other products, cigarettes were ranked the most harmful.’ So the research was greatly influenced by the extent to which products are used. On this basis you could conclude that drinking wine is more harmful than drinking methylated spirits – on a population basis this is true, but it would be a poor basis for individual advice. 

In response to the criticism in The Lancet, PHE produced a subsequent statement in order to try to achieve some clarity over the 95% figure – only to muddy the waters further by claiming that the figure was linked to the fact that there are 95% fewer harmful chemicals in e cigarettes than standard cigarettes. This may well be true – but it is not the reason why they gave the 95% figure in the first place. It also assumes a linear relationship between the amount of chemical and the degree of harm – 5% of the chemical might only cause 1% of the harm, or it could be 50%.
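To spell out why the linearity assumption matters, here is a purely illustrative sketch – the three dose-response curves below are invented for the example, not real toxicology:

```python
dose = 0.05  # e cigarettes deliver ~5% of the harmful chemicals

# Three invented dose-response shapes, each scaled so that a full dose
# of chemicals causes 100% of the harm
linear = dose            # harm proportional to dose   -> 5% of the harm
sublinear = dose ** 0.5  # harm rises fast at low doses  -> ~22% of the harm
supralinear = dose ** 2  # harm rises fast at high doses -> 0.25% of the harm

print(f"{linear:.1%}  {sublinear:.1%}  {supralinear:.2%}")
```

Until we know the shape of the curve, ‘95% fewer chemicals’ tells us very little about ‘95% less harm’.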

One of the main problems I have with the 95% statistic, therefore, is one of principle – I just don’t like being duped by the misuse of statistics.

My second issue, however, is more pragmatic: the statistic does not help us with some of the key questions we need to answer.

That e cigarettes are safer than standard cigarettes is not much in doubt – mostly on the basis that smoking is so bad for health that it isn’t hard to beat. There is clearly much to be gained by smokers switching to the electronic variety. The next question concerns what smokers should do next.

Much is said about e cigarettes being an aid to quitting, but what is unique about them is that people often stay with them for the long term, in a way that they would never consider with something like a nicotine patch. This may be their greatest strength – people may be able to quit who could never do so before – but it is also a new phenomenon, as long-term nicotine substitution becomes the norm.

Are e cigarettes so safe that once smokers move over to them they can consider the job done? Many vapers talk about it in these terms. For the short term, it seems they are safe. They have been in common use for 5-8 years and there have been no major concerns so far (although acute poisoning is a new problem with liquid nicotine) – but then the same is true for cigarettes where it is use over decades that is the problem. For me, the 95% figure is too questionable to be able to help here.

There are more dilemmas I face as a doctor, since I need to know how to interpret the health risks of someone who uses an electronic cigarette. When it comes to cardiovascular risk, should I consider them a smoker, a non-smoker, or something in between? If they have a persistent cough, do I suggest a chest x-ray early, on the grounds that they are at increased risk, or can we watch and wait for a while?

We are a long way from being able to answer questions like this, and I would have preferred a little more honesty from PHE about what we don’t yet know, a little less reliance on the opinions of experts, and only to be presented with a figure like 95% if it is based on hard, objective evidence.

Should Policy Makers Tell GPs How Often to Diagnose?

I’m sure NHS England were surprised by the response to their plans to pay GPs £55 every time they diagnosed dementia. What started as a seemingly simple idea to help the Government hit their diagnosis target before the election caused such a furore that Simon Stevens declared the policy over before it had really begun, making it clear that it would finish at the end of March.

What was striking about the reaction was not the objection among GPs – policy makers are used to that and well accustomed to ignoring it – but the strength of feeling among the public. I’m sure this is what made the difference – no politician wants to lose in the arena of public opinion. It’s not hard to see how this happened. There was something innately wrong about paying GPs to diagnose; no in-depth analysis was needed, no exploration of the evidence – it was just so clearly a bad idea, and both doctors and patients were alarmed at what it meant for the doctor-patient relationship.

What continues to concern me, though, is that policy-makers still think they know best when it comes to how many patients GPs should diagnose with a variety of conditions – from heart disease to asthma, diabetes and even depression – and have an even more powerful mechanism for enforcing this, which is to put pressure on practices with low diagnosis rates through naming and shaming, and the threat of inspection. A practice may have the moral courage to resist a financial bribe, but what if the reputation of your practice is at stake?

I have written about this in the British Medical Journal, published this week, and this is a toll-free link if you are interested. What is crucial is that at the moment of diagnosis there should be nothing in the mind of the GP other than what is best for the patient – it is fundamental to the doctor-patient relationship and something well worth shouting about.

Statins, Statins Everywhere

The health of America is in trouble. Life expectancy is noticeably lower than in other developed nations, 15% of the country lives precariously without health insurance, and the launch of Obamacare was so badly botched that this much-needed health reform is in serious jeopardy. Not to worry, though, the American Heart Association and the American College of Cardiology have a plan that will rescue the health of the nation: put a third of US citizens on statins – that ought to do it!

The new guidelines, released last month, were widely reported in the UK press. The Mail misleadingly called the publication a new study rather than a set of guidelines, while the BBC gave a more measured view, including a revealing statistic that roughly half the expert panel had financial ties to the makers of cardiovascular drugs. Worse, while the panel’s conflicts of interest appear to be clearly presented, with neither the chair nor co-chairs declaring any, the superb investigative journalist Jeanne Lenzer has discovered that the chair in particular has been rather misleading in declaring his own interests. The protestation from AHA spokesperson Dr George Mensah that ‘It is practically impossible to find a large group of outside experts in the field who have no relationships to industry’ is hard to swallow. In a country with as many specialists as the US? There were only 15 members on the panel – is it really that hard to find experts without financial ties? Or is it harder to tell some Key Opinion Leaders that their much-vaunted opinions are not welcome because they are too close to industry?

The major change in the guidelines is that there is less emphasis on absolute levels of cholesterol, and a new category for treatment in those aged 40-75 with an estimated 10 year cardiovascular risk of 7.5% or more. Current UK guidelines recommend treatment at 20% risk, but NICE say they are looking at the same evidence as the US before publishing new guidance next year. Despite the important debate in the medical press about overmedicalisation – spearheaded by the BMJ’s excellent Too Much Medicine series – we can expect a lowering of treatment thresholds when NICE issues its verdict.

The problem with the way we present guidelines, though, is that they are far too black and white, when the world of medicine we inhabit with our patients is generally full of grey. The question we should be asking is not what the threshold should be for treatment, but how to empower patients to make their own, informed decisions – because ultimately, the level of risk a patient is prepared to accept before they take a tablet is a personal decision, and a panel of experts has no authority to tell patients what risk they should, or should not take.

If we use the 7.5% cut-off, for instance, and assume that taking a statin for 10 years would lead to a 50% reduction in significant cardiovascular events (which is likely to be a gross over-estimate), then 3.75% of patients would avoid an event by taking the drug – call it 4% for ease of maths – and 96% would not benefit. The number needed to treat (NNT) is therefore 25 to avoid one event. What will our patients think about this? Surely that is entirely subjective and not for experts to dictate? One patient may have seen a close family member affected by a devastating stroke and might think any ability to reduce the risk of stroke is an opportunity to be grasped; another might consider the 3650 tablets they would have to swallow over 10 years and wonder whether a 1 in 25 chance of benefit is really worth it. In reality, the benefits of statins are much smaller than a 50% reduction, and so the NNT for low risk patients is likely to be 50, 100 or even higher.
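The back-of-envelope sums above can be written out in a few lines – remember that the 50% relative risk reduction is a deliberately generous assumption:

```python
baseline_risk = 0.075   # 10 year cardiovascular risk at the new US threshold
rrr = 0.5               # assumed relative risk reduction (a gross over-estimate)

arr = baseline_risk * rrr          # absolute risk reduction -> 0.0375
print(f"ARR: {arr:.2%}")           # 3.75% - call it 4% for ease of maths
print(f"NNT: {round(1 / 0.04)}")   # -> 25 treated for one event avoided
print(f"Tablets each over 10 years: {365 * 10}")   # -> 3650
```

Run the same sums with a more realistic relative risk reduction and the NNT climbs steeply, which is the point of the final sentence above.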

We need a different approach to guidelines, one based on the NNT and the corresponding number needed to harm (NNH) – like this excellent calculator from ClinRisk Ltd. There should be a lower level below which the NHS says treatment is not justified on the grounds of either harm or rationing, and then a range of NNT and NNH based on individual risk. Expert panels should analyse the evidence to provide these figures, not to tell people what to do, and doctors and their patients can be given the freedom and flexibility of a large area of grey in which they can personalise treatment and truly empower patient choice. The experts and policy-makers won’t like it, though – because it involves trusting patients, and we’ve never quite mastered how to do that.

This article was originally published in Pulse magazine (free registration required)

Can You Walk off the Risk of Breast Cancer?

One of this week’s health stories is typical of how rather unexciting research can reach the headlines by virtue of its association with a condition like breast cancer, but it also serves as a good example of two of the most common sources of sloppy reporting that plague health stories – which makes me think it a subject worthy of a blog.

The research relates to the possible effect of exercise on the risk of developing breast cancer, and the headline is Walking ‘cuts breast cancer risk’. If true, this is hardly an earth-shattering discovery. Perhaps it will add in some small way to our understanding of the mechanisms involved in the development of cancer, but this is for the journals to worry about. When it appears in mainstream media, the point is surely whether it means anything to an individual concerned about her breast cancer risk – in other words, if you want to reduce your risk of developing breast cancer, should you take up walking? Unfortunately, the way the results are reported makes it very difficult to answer this question.

 

Problem 1: associations are not the same as cause and effect

The first problem is that the study has made an observation, which has been presented as a cause. The researchers did quite a simple thing: they arranged for a group of over 73 000 post-menopausal women to complete a questionnaire at intervals over a 17 year period from 1992 to 2009, asking how many hours of walking the women did and about any diagnosis of breast cancer. They found that those who walked for 7 or more hours per week were less likely to have been diagnosed with breast cancer than those who walked for 3 hours or less. This does not mean that the walking caused the reduction in risk, however – it may well have done, but there could have been some other factor, linked to both breast cancer risk and the amount the women walked, behind the association. For instance, walking less could be linked to obesity, which could explain the extra breast cancer risk.

The researchers were aware of this problem, and tried to exclude some factors – for instance, it was not due to those who developed breast cancer being more overweight than those who did not – but they can never exclude all of the possible confounding influences. For instance, it may be that those who walked less were more likely to have other health problems, and the increased risk of breast cancer was in some way linked to this.

In my experience, observational health studies are very frequently reported as cause and effect. I can understand why – Walking ‘cuts breast cancer risk’ has more of a ring to it than Walking is associated with a reduced risk of breast cancer. The problem is that the catchier headline is misleading, and it is left to the reader to spot the error.

Problem 2: what do we mean by a reduction in risk?

The second pitfall when it comes to knowing what to make of a study like this is more serious – and more troubling, because the fault lies not with mainstream journalists trying to enhance their stories, but researchers and journal editors being guilty of the same. The problem is this: as is so often the case, the results have been presented in terms of a reduction in relative rather than absolute risk.

The study demonstrated a 14% Relative Risk Reduction (RRR) – but is that a 14% reduction of a big number or of a small number? If the Dragons in Dragons’ Den are offered a 14% share in company profit, they are very quick to ask how big that profit will be before they part with their money. The same should apply to us before we invest our energies in a health intervention. If the Dragons want to know the absolute amount of money they can expect to receive, then we should expect to know the Absolute Risk Reduction (ARR) of any intervention.

The problem is that ARRs are always a lot smaller than RRRs, and so they make research look far less impressive, and researchers are reluctant to give them the attention they deserve. From the BBC article it is impossible to find the ARR, so you have to go to the original research – and even there only the abstract is available without paying a fee, so you have to work the numbers out for yourself. It turns out that the risk of developing breast cancer over the 17 years of the study was 6.4%, making a 14% RRR equate to a 0.9% ARR.

Let us assume for the moment that the reduction in risk really is due to walking. If you are a post-menopausal woman and you walk for 7 hours a week rather than 3, then over a 17 year period you would reduce your risk of getting breast cancer by 0.9%. Put another way, if 1000 women walked the extra 4 hours a week for 17 years, that would be 3 536 000 hours of walking to save 9 cases of breast cancer, or 393 000 hours of walking per case. At 3 miles per hour, that is the equivalent of walking more than 47 times round the world! Now I do know that this statistic is probably as meaningless as being given a 14% relative risk reduction – but it was fun to work out!
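For anyone who wants to check the working, the whole chain of arithmetic fits in a few lines – note that the Earth’s circumference figure (24 901 miles) is my own assumption, not from the study:

```python
baseline_risk = 0.064   # 6.4% risk of breast cancer over the 17 year study
rrr = 0.14              # 14% relative risk reduction

arr = baseline_risk * rrr             # -> ~0.009, i.e. a 0.9% ARR
cases_avoided = round(arr * 1000)     # per 1000 women -> 9

hours_per_woman = 4 * 52 * 17         # extra 4 hours/week for 17 years -> 3536
total_hours = 1000 * hours_per_woman  # -> 3 536 000 hours of walking
hours_per_case = total_hours / cases_avoided   # -> ~393 000 hours

miles_per_case = hours_per_case * 3   # at 3 miles per hour
earth_circumference = 24_901          # miles (assumption, not in the study)
print(round(miles_per_case / earth_circumference))   # -> 47 laps of the globe
```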

That’s not to say that walking is a bad idea – there are clearly very good reasons for walking more. However, whatever the associated health benefits might be, the two most compelling reasons to walk will always be these: it’s a very useful way of getting from A to B, and most people find they rather enjoy it!

The Drive to Improve 5 Year Cancer Survival – an NHS Priority, or Political Folly?

In the original Johnny English film, Rowan Atkinson’s hapless spy performs a flawless daredevil penetration into the heart of a hostile occupied building. Dropped by helicopter onto the roof, his use of grappling irons is exemplary, his ability to move through locked doors and windows – textbook. Flushed with his own success, it is only after assaulting several members of staff that he realises he has inadvertently broken into the local hospital instead of his intended target.

If you are going to invest a lot of time and effort into something important, no matter how good your intentions might be, it is vital to aim for the right target.

When the Government published its NHS Mandate earlier this month, a cornerstone of the proposal was the commendable aim for the NHS to be better at Preventing people from dying prematurely. A key aspect of this is to look at deaths from cancer – so far so good. The detail, however, is where the problem lies – the focus is on 1 and 5 year survival rates. This is quite simply the wrong target and will lead to decisions that are bad for patients and wasteful of scarce NHS resources. The target should be overall mortality, nothing more and nothing less.

5 year survival data were originally devised to assess the effectiveness of treatment. Here they are useful – if you want to know how one chemotherapy regime works compared with another, then the overall 5 year survival can be very helpful. The problem comes when we use them to assess overall performance, or start comparing data for different countries. We end up with disturbing headlines such as this from the Daily Mail in 2009. These cause politicians real headaches, and bring the danger of knee-jerk reactions and bad decisions.

The problem with 5 year survival figures is that they are so easy to manipulate – and the easiest ways to do this bring little benefit to patients, or even cause harm. The hardest way to really improve survival from cancer is genuinely to improve treatment and care – this is expensive, requires investment in the people who run cancer services, often relies on medical breakthroughs and has no guarantee of success. There are far easier, much more reliable methods for achieving results if you are so inclined, and two may prove irresistible to politicians so dependent on a quick fix and the next set of statistics.

Technique 1: Diagnose cancer earlier

If you have a cancer that is incurable and you are going to die in three years’ time despite whatever treatment medicine can offer, then you will fall on the wrong side of the 5 year statistic. If, however, I can persuade you to be diagnosed 2 years earlier – through an awareness campaign, or cancer screening, for instance – then even if I don’t change your outcome one iota you will have crossed magically onto the success side of my statistic – Tada! Of course, for some people an earlier diagnosis may make a difference to their outcome, and we would always want to reduce delays once someone develops symptoms related to cancer, but the evidence is that early diagnosis through screening has a limited impact on overall improvements in survival.
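This mechanism is known as lead-time bias, and a couple of lines make it concrete (the dates below are invented for illustration):

```python
# Lead-time bias: the death date never moves, only the diagnosis date does.
death_year = 2020

late_diagnosis = 2017    # survives 3 years -> counted as a 5 year 'failure'
early_diagnosis = 2015   # survives 5 years -> counted as a 5 year 'success'

print(death_year - late_diagnosis >= 5)    # False
print(death_year - early_diagnosis >= 5)   # True - same outcome, better statistic
```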

Another, more powerful, lure of early diagnosis through screening is the prospect of picking up cancers so early that they would never become a problem. If these cancers go completely undetected they have no impact on the statistics. If, however, they are diagnosed, they will, by definition, be treated successfully, and they will add a rosy glow to the 5 year survival data. To take prostate cancer as an example: if you screen for prostate cancer you will save lives – but for every life you save you will need to treat 48 other men who would never have died from their ‘cancer’. Without screening there would be one man who enters the data, and he may or may not survive 5 years. With screening, 49 men become statistics – and they are all on the good side. This is a compelling political argument, but is it good for patients?
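A deliberately simplified sketch of the prostate example shows just how much overdiagnosis flatters the figures:

```python
# Without screening: one man with a lethal cancer enters the statistics.
# Assume the worst case - he does not survive 5 years.
survivors, diagnosed = 0, 1
print(f"5 year survival without screening: {survivors / diagnosed:.0%}")

# With screening: his life is saved, but 48 men who would never have died
# from their 'cancer' are diagnosed and treated alongside him.
survivors, diagnosed = 49, 49
print(f"5 year survival with screening: {survivors / diagnosed:.0%}")
```

Survival leaps from 0% to 100%, yet only one death was ever averted – the other 48 men gained nothing but treatment.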

Technique 2: Redefine Cancer

Cancers like pancreatic cancer are what we all think of when we use the Big C word – nasty, aggressive diseases that are almost impossible to treat and spread rapidly. If the NHS is tasked with improving 5 year survival for pancreatic cancer then it is on a hiding to nothing – medicine needs to move on and make a breakthrough if that is to happen. So to balance the books, as it were, there is a great temptation to put as many easy-to-treat cancers on the other side of the scales as possible, and the best way to do that is to redefine what we mean by cancer. Terms like Ductal Carcinoma in Situ – really a pre-cancerous change in the breast of an uncertain nature – have come under the cancer umbrella in recent years. Treated like any other breast cancer, it has phenomenally good survival figures, which is fantastic for the statistics, but the evidence is that many women are treated for it unnecessarily, as it will not always develop into a true cancer.

The Importance of Mortality Data 

The problem with relying on 5 year survival is that it encourages Governments to endorse screening programmes on the basis that they improve statistics, rather than being good for patients. It is vital that all screening programmes are rigorously evaluated for both benefits and harms before they are implemented. If the NHS Mandate looked at overall mortality from cancer instead then the drive to improve would be free from these pressures to artificially manipulate statistics, and the focus could be on better care, as well as public health initiatives that might really make a difference, such as plain packaging for cigarettes.

The Government might even be pleasantly surprised. In 2008, the most recent year where full data are available, the World Health Organisation database ranks the UK quite favourably – just above Germany and better than most European countries outside Scandinavia. Maybe a pat on the back is in order for the NHS? Or is that not politically permissible these days?

Quick Post – Aspirin in the news again

 

I think I might start counting how many times aspirin hits the headlines in 2012 – twice so far, and I am sure there will be more to come! Today’s news is a re-analysis of whether or not the drug can reduce the risk of cancer. Interesting, worthy of further research, but not something that is going to send me running to the pharmacy just yet.

Why the scepticism? Well, three reasons. Firstly, the studies that were analysed were all designed to look at the risk of heart disease, not cancer – this is an important factor, as it means the findings are much more likely to be due to chance than if the study had been about cancer in the first place. Secondly, if the findings are true, 1000 people need to be treated for 3 to 5 years for 3 people to avoid getting a cancer. That’s a lot of people taking aspirin who were never going to get cancer anyway. And thirdly, because there is bound to be another headline later this year about why taking aspirin is not such a good idea – but maybe there I have crossed the line between healthy scepticism and downright cynicism!
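The second reason is just the number needed to treat again – a quick sketch using the figures as reported:

```python
treated = 1000           # people taking aspirin for 3 to 5 years
cancers_avoided = 3      # cancers prevented in that group, if the findings hold

nnt = treated / cancers_avoided
print(round(nnt))        # -> roughly 333 people treated per cancer avoided
```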