7971:1 – What will you trust when it comes to the safety of HRT?

You get used to outrageous medical claims in the press, but The Telegraph has truly surpassed itself today with its front-page headline declaring that ‘HRT “is safe” for postmenopausal women after all’.

The article states that new research ‘has found no evidence that HRT is linked to any life-threatening condition’, and makes much of the fact that the new study followed women for a decade. There is a quote from Dr Lila Nachtigall, one of the study authors and a Professor of Obstetrics and Gynaecology at New York University, who claims: ‘the risks of HRT have definitely been overstated. The benefits outweigh the risk.’

Prof John Studd from London is even more forthright, saying: ‘Most GPs are afraid of HRT – they will have learnt as medical students that it is linked to health risks. But those studies that were replicated in the textbooks were worthless. They collected the data all wrong.’

These are bold statements, and so you would expect them to be based on a significant piece of research. The main study that Prof Studd so comprehensively dismisses is the British Million Women study – over 1 million women were studied specifically to look at the risk of breast cancer with HRT and it found a small, but significant, increased risk. To overturn the findings of such a significant piece of research would require something big.

So what is this new research? Well, the article, as is so often the case, fails to tell you – but if you are still reading as far as the 11th paragraph you may start to have your doubts: the study followed 80 women. 80! Not 800 000, or even 80 000, but 80! To be fair, when you look at the study itself it’s actually 136 – 80 women on HRT and 56 without. So with 1 084 110 women in the Million Women Study and 136 in this new, apparently game-changing research – that’s 7971:1.
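For the record, the ratio in the title is nothing more sophisticated than the relative size of the two studies – a quick sanity check (purely illustrative):

```python
# Relative size of the two studies behind the competing claims
million_women_study = 1_084_110  # participants in the Million Women Study
new_study = 80 + 56              # 80 women on HRT plus 56 controls

print(f"{million_women_study / new_study:.0f}:1")  # ~7971:1
```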

What’s more, when you look at the new study in detail (and here I’m grateful to Adam Jacobs on Twitter, who managed to locate it), it was not designed to look at the safety of HRT – the intention of the research was to answer a question about the effects of HRT on body fat composition, and any findings on the safety of HRT were only a secondary consideration. It is also described as a retrospective cohort study – that means it looked backwards at the history of these 80 women, so if a woman had developed breast cancer related to HRT she might not have been alive to take part in the study in the first place.

Even if the study had been designed to prove there was no link between breast cancer and HRT, the Million Women study suggests an increase of only 5 extra breast cancers in 1000 women taking HRT for 10 years – so 80 women would only have 0.4 extra breast cancers between them – meaning the study is far too weak to draw any conclusions at all. Oh – and the study was sponsored by Pfizer, who might just have a commercial interest in lots more women going on HRT.
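To spell out just how underpowered that is, here is the expected-events arithmetic (my own rough sketch, using the Million Women estimate of 5 extra cancers per 1000 women over 10 years):

```python
# Expected extra breast cancers among the 80 women on HRT, if the
# Million Women Study estimate (5 extra per 1000 over 10 years) holds
extra_risk = 5 / 1000
women_on_hrt = 80

expected_extra = women_on_hrt * extra_risk
print(f"Expected extra cancers: {expected_extra:.1f}")  # 0.4

# A study that expects to see less than half an extra case can
# neither detect nor rule out a risk of this size.
```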

The Telegraph was not the only newspaper to pick up the story, but it was by far the worst reporting among the broadsheets – The Guardian, for instance, picked up the small number of women in the study and tried to bring a sense of balance to its piece – just so long as you read past the headline and the first two paragraphs.

In closing, I would like to say one or two things to Prof John Studd of Wimpole Street. The first is that if you are going to have an official website it would be best, for reasons of probity, if you could include an easy-to-find declaration of interests; maybe I am being dense, but I failed to find yours. Secondly, GPs are not afraid to prescribe HRT – and we have learnt one or two things since medical school – but we do like to prescribe it after having a discussion with the woman concerned about the balance of benefits versus risks, as we like to base this on reliable evidence.

And for a woman considering HRT, wondering what all this means? HRT remains the best way to control symptoms of the menopause, which can be very distressing. There is an increased risk of some cancers, but it really is quite small, and many women feel it is well worth taking that risk in order to feel well; have a chat with your GP about it.


95% Less Harmful – the Story of a Statistic

When Public Health England (PHE) published their recent report on e-cigarettes, the statistic to hit the headlines was the claim that the electronic variety were ‘95% less harmful’ than standard cigarettes. It’s a figure that will have entered the collective consciousness of journalists and vaping enthusiasts, and I can guarantee that we will hear it quoted again and again in the coming months and years.

The question is: where has it come from, and what does it mean?

The first question is easy to answer: the 95% figure does not come from PHE. Their report simply quotes the estimates made by another group of experts, published by Nutt et al in European Addiction Research. Simply put, PHE have said: ‘other experts have guessed that e-cigarettes are 95% less harmful than standard cigarettes, and that seems about right to us.’

This over-reliance on the findings of another group of experts has received some very public criticism – most notably in an editorial in The Lancet, when it emerged that the work of this group had been funded by an organisation with links to industry, and that three of its authors had significant financial conflicts of interest. These are valid points, although they might have been better made had The Lancet included the author’s name and declaration of interests alongside the editorial.

The second question is harder to answer, and here is my main concern with how the 95% figure has been presented. What does ‘95% less harmful’ actually mean?

If I were a smoker wondering whether to switch to vaping, I would primarily be interested in one thing: how harmful they are to me. In other words – am I less likely to die or get ill if I switch to e-cigarettes?

Well, the PHE report would seem to answer this question – in the foreword to the full report the authors state that e-cigarettes are ‘95% less harmful to your health than smoking.’ The trouble is that the work from which they obtained the 95% figure looked at far more than just the effects of smoking on the health of an individual.

The piece of work by Nutt and colleagues involved a group of experts being asked to estimate the harm of a range of nicotine products against 12 different criteria – these included the risk to individual health, but also wider societal harms such as economic impact, international damage and links with crime. The 95% figure was only achieved after all 12 factors were weighted for importance and each nicotine-containing product was given a composite score.
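To make the mechanics concrete, here is a minimal sketch of a weighted composite score of this kind – the criteria, weights and ratings below are invented for illustration, not the real values from the paper:

```python
# Hypothetical weights (importance of each criterion, summing to 1)
weights = {
    "harm to user's health": 0.4,
    "economic impact": 0.2,
    "links with crime": 0.2,
    "international damage": 0.2,
}

# Hypothetical 0-100 harm ratings for one nicotine product
ratings = {
    "harm to user's health": 10,
    "economic impact": 40,
    "links with crime": 30,
    "international damage": 5,
}

composite = sum(weights[c] * ratings[c] for c in weights)
print(f"Composite harm score: {composite}")  # 19.0

# Note how the societal criteria can drag the composite score well
# away from the figure for individual health harm alone.
```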

Now the propensity for a commercial product to be linked with criminal activity may be very important to PHE, but it wouldn’t influence my individual health choice, nor the advice I would want to give to patients.

Moreover, the work by Nutt and colleagues includes this statement: ‘Perhaps not surprisingly, given their massively greater use as compared with other products, cigarettes were ranked the most harmful.’ So the research was greatly influenced by the extent to which products are used. On this basis you could conclude that drinking wine is more harmful than drinking methylated spirits – on a population basis this is true, but it would be a poor basis for individual advice. 

In response to the criticism in The Lancet, PHE produced a subsequent statement to try to achieve some clarity over the 95% figure – only to muddy the waters further by claiming that the figure was linked to the fact that there are 95% fewer harmful chemicals in e-cigarettes than in standard cigarettes. This may well be true – but it is not the reason they gave the 95% figure in the first place. It also assumes a linear relationship between the amount of chemical and the degree of harm – 5% of the chemical might cause only 1% of the harm, or it could be 50%.
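The linearity point is easy to demonstrate with some made-up dose-response curves – the exponents below are chosen purely to reproduce the two possibilities in the paragraph above; the true shape of the curve is unknown:

```python
# Hypothetical power-law dose-response curves, scaled so that a full
# dose (1.0) causes the full harm (1.0)
dose = 0.05  # 5% of the harmful chemicals

for exponent in (1.0, 0.23, 1.54):
    harm = dose ** exponent
    print(f"harm = dose^{exponent}: 5% of the chemicals -> {harm:.0%} of the harm")
# exponent 1.0:  5% of the harm (linear)
# exponent 0.23: ~50% of the harm
# exponent 1.54: ~1% of the harm
```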

One of the main problems I have with the 95% statistic, therefore, is one of principle – I just don’t like being duped by the misuse of statistics.

My second issue, however, is more pragmatic: the statistic does not help us with some of the key questions we need to answer.

That e-cigarettes are safer than standard cigarettes is not much in doubt – mostly on the basis that smoking is so bad for health that it isn’t hard to beat. There is clearly much to be gained by smokers switching to the electronic variety. The question is what smokers should do next.

Much is said about e-cigarettes being an aid to quitting, but what is unique about them is that people often stay with them for the long term, in a way that they would never consider with something like a nicotine patch. This may be their greatest strength – people may be able to quit who could never do so before – but it is also a new phenomenon, as long-term nicotine substitution becomes the norm.

Are e-cigarettes so safe that once smokers move over to them they can consider the job done? Many vapers talk in these terms. In the short term, it seems they are safe: they have been in common use for 5-8 years and there have been no major concerns so far (although acute poisoning with liquid nicotine is a new problem) – but then the same is true of cigarettes, where it is use over decades that is the problem. For me, the 95% figure is too questionable to help here.

There are more dilemmas I face as a doctor since I need to know how to interpret the health risks of someone who uses an electronic cigarette. When it comes to cardiovascular risk, should I consider them a smoker, a non-smoker, or something in between? If they have a persistent cough, do I suggest a chest x-ray early on the grounds that they are at increased risk, or can we watch and wait for a while?

We are a long way from being able to answer questions like this, and I would have preferred a little more honesty from PHE about what we don’t yet know, a little less reliance on the opinions of experts, and only to be presented with a figure like 95% if it is based on hard, objective evidence.

Should Policy Makers Tell GPs How Often to Diagnose?

I’m sure NHS England were surprised by the response to their plan to pay GPs £55 every time they diagnosed dementia. What started as a seemingly simple idea to help the Government hit its diagnosis target before the election caused such a furore that Simon Stevens announced that the scheme would be scrapped at the end of March, before it had really begun.

What was striking about the reaction was not the objection among GPs – policy makers are used to that and well accustomed to ignoring it – but the strength of feeling among the public. I’m sure this is what made the difference – no politician wants to lose in the arena of public opinion. It’s not hard to see how this happened. There was something innately wrong about paying GPs to diagnose; no in-depth analysis was needed, no exploration of the evidence – it was just so clearly a bad idea, and both doctors and patients were alarmed at what it meant for the doctor-patient relationship.

What continues to concern me, though, is that policy-makers still think they know best when it comes to how many patients GPs should diagnose with a variety of conditions – from heart disease to asthma, diabetes and even depression – and have an even more powerful mechanism for enforcing this, which is to put pressure on practices with low diagnosis rates through naming and shaming, and the threat of inspection. A practice may have the moral courage to resist a financial bribe, but what if its reputation is at stake?

I have written about this in the British Medical Journal this week, and this is a toll-free link if you are interested. What is crucial is that at the moment of diagnosis there should be nothing in the mind of the GP other than what is best for the patient – this is fundamental to the doctor-patient relationship and something well worth shouting about.

Statins, Statins Everywhere

The health of America is in trouble. Life expectancy is noticeably lower than in other developed nations, 15% of the country lives precariously without health insurance, and the launch of Obamacare was so badly botched that this much-needed health reform is in serious jeopardy. Not to worry, though, the American Heart Association and the American College of Cardiology have a plan that will rescue the health of the nation: put a third of US citizens on statins – that ought to do it!

The new guidelines, released last month, were widely reported in the UK press. The Mail misleadingly called the publication a new study rather than a set of guidelines, while the BBC gave a more measured view, including the revealing statistic that roughly half the expert panel had financial ties to the makers of cardiovascular drugs. Worse, while the panel’s conflicts of interest appear to be clearly presented, with neither the chair nor the co-chairs having conflicts, the superb investigative journalist Jeanne Lenzer has discovered that the chair in particular has been rather misleading in declaring his own interests. The protestation from AHA spokesperson Dr George Mensah that ‘It is practically impossible to find a large group of outside experts in the field who have no relationships to industry’ is hard to swallow. In a country with as many specialists as the US? There were only 15 members on the panel – is it really that hard to find experts without financial ties? Or is it harder to tell some Key Opinion Leaders that their much-vaunted opinions are not welcome because they are too close to industry?

The major change in the guidelines is a reduced emphasis on absolute levels of cholesterol, and a new category for treatment in those aged 40-75 with an estimated 10-year cardiovascular risk of 7.5% or more. Current UK guidelines recommend treatment at 20% risk, but NICE say they are looking at the same evidence as the US before publishing new guidance next year. Despite the important debate in the medical press about overmedicalisation – spearheaded by the BMJ’s excellent Too Much Medicine series – we can expect a lowering of treatment thresholds when NICE issues its verdict.

The problem with the way we present guidelines, though, is that they are far too black and white, when the world of medicine we inhabit with our patients is generally full of grey. The question we should be asking is not what the threshold should be for treatment, but how to empower patients to make their own, informed decisions – because ultimately, the level of risk a patient is prepared to accept before they take a tablet is a personal decision, and a panel of experts has no authority to tell patients what risk they should, or should not take.

If we use the 7.5% cut-off, for instance, and assume that taking a statin for 10 years would lead to a 50% reduction in significant cardiovascular events (which is likely to be a gross over-estimate), then 3.75% of patients would avoid an event by taking the drug – call it 4% for ease of maths – and 96% would not benefit. The number needed to treat (NNT) is therefore 25 to avoid one event. What will our patients think about this? Surely that is entirely subjective and not for experts to dictate? One patient may have seen a close family member affected by a devastating stroke and might think any reduction in the risk of stroke an opportunity to be grasped; another might consider the 3650 tablets they would have to swallow over 10 years and wonder whether a 1 in 25 chance of benefit is really worth it. In reality, the benefits of statins are much smaller than a 50% reduction, and so the NNT for low-risk patients is likely to be 50, 100 or even higher.
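For anyone who wants to check the maths, here is the calculation (an illustrative sketch, not a clinical tool):

```python
# Number needed to treat = 1 / absolute risk reduction
def nnt(baseline_risk, rrr):
    return 1 / (baseline_risk * rrr)

# 7.5% ten-year risk with the (generous) 50% relative risk reduction
print(f"NNT at 50% RRR: {nnt(0.075, 0.50):.0f}")  # ~27 - call it 25

# With more realistic relative risk reductions the NNT climbs quickly
for rrr in (0.25, 0.20, 0.10):
    print(f"NNT at {rrr:.0%} RRR: {nnt(0.075, rrr):.0f}")  # 53, 67, 133
```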

We need a different approach to guidelines, one based on the NNT and the corresponding number needed to harm (NNH) – like this excellent calculator from ClinRisk Ltd. There should be a level below which the NHS says treatment is not justified, on the grounds of either harm or rationing, and then a range of NNTs and NNHs based on individual risk. Expert panels should analyse the evidence to provide these figures, not to tell people what to do, and doctors and their patients can be given the freedom and flexibility of a large area of grey in which they can personalise treatment and truly empower patient choice. The experts and policy-makers won’t like it, though – because it involves trusting patients, and we’ve never quite mastered how to do that.

This article was originally published in Pulse magazine (free registration required)

Can You Walk off the Risk of Breast Cancer?

One of this week’s health stories is typical of how rather unexciting research can reach the headlines by virtue of its association with a condition like breast cancer, but it also serves as a good example of two of the most common sources of sloppy reporting that plague health stories – which makes me think it a subject worthy of a blog.

The research relates to the possible effect of exercise on the risk of developing breast cancer, and the headline is Walking ‘cuts breast cancer risk’. If true, this is hardly an earth-shattering discovery. Perhaps it will add in some small way to our understanding of the mechanisms involved in the development of cancer, but this is for the journals to worry about. When it appears in mainstream media, the point is surely whether it means anything to an individual concerned about her breast cancer risk – in other words, if you want to reduce your risk of developing breast cancer, should you take up walking? Unfortunately, the way the results are reported makes it very difficult to answer this question.


Problem 1: associations are not the same as cause and effect

The first problem is that the study has made an observation, which has been presented as a cause. The researchers did quite a simple thing: they arranged for a group of over 73 000 post-menopausal women to complete a questionnaire at intervals over a 17-year period from 1992 to 2009, asking how many hours of walking the women did and whether they had been diagnosed with breast cancer. They found that those who walked for 7 or more hours per week were less likely to have been diagnosed with breast cancer than those who walked for 3 hours or less. This does not mean that the walking caused the reduction in risk, however. It may well have done, but there could have been some other factor, linked to both breast cancer risk and the amount women walk. For instance, walking less could be linked to obesity, which could explain the extra breast cancer risk.

The researchers were aware of this problem, and tried to exclude some factors – for instance, it was not due to those who developed breast cancer being more overweight than those who did not – but they can never exclude all of the possible confounding influences. For instance, it may be that those who walked less were more likely to have other health problems, and the increased risk of breast cancer was in some way linked to this.

In my experience, observational health studies are very frequently reported as cause and effect. I can understand why – Walking ‘cuts breast cancer risk’ has more of a ring to it than Walking is associated with a reduced risk of breast cancer. The problem is that the catchier headline is misleading, and it is left to the reader to spot the error.

Problem 2: what do we mean by a reduction in risk?

The second pitfall when it comes to knowing what to make of a study like this is more serious – and more troubling, because the fault lies not with mainstream journalists trying to enhance their stories, but researchers and journal editors being guilty of the same. The problem is this: as is so often the case, the results have been presented in terms of a reduction in relative rather than absolute risk.

The study demonstrated a 14% Relative Risk Reduction (RRR) – but is that 14% of a big number or of a small number? If the Dragons in Dragons’ Den are offered a 14% share in a company’s profit, they are very quick to ask how big that profit will be before they part with their money. The same should apply to us before we invest our energies in a health intervention. Just as the Dragons want to know the absolute amount of money they can expect to receive, we should expect to know the Absolute Risk Reduction (ARR) of any intervention.

The problem is that ARRs are always much smaller than RRRs, so they make research look far less impressive, and researchers are reluctant to give them the attention they deserve. From the BBC article it is impossible to find the ARR, so you have to go to the original research – and even there only the abstract is available without paying a fee, so you have to work the numbers out for yourself. It turns out that the risk of developing breast cancer over the 17 years of the study was 6.4%, making a 14% RRR equate to a 0.9% ARR.

Let us assume for the moment that the reduction in risk really is due to walking. Then if you are a post-menopausal woman and you walk for 7 hours a week rather than 3, over a 17-year period you would reduce your risk of getting breast cancer by 0.9%. Put another way, if 1000 women walked the extra 4 hours a week for 17 years, that would be 3 536 000 hours of walking to save 9 cases of breast cancer, or roughly 393 000 hours of walking per case. At 3 miles per hour, that’s the equivalent of walking more than 47 times round the world! Now I do know that this statistic is probably as meaningless as being given a 14% relative risk reduction – but it was fun to work out!
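If you want to check my working, the sums go like this (a rough sketch – I have assumed a 52-week year):

```python
# From relative risk reduction to hours of walking per case avoided
baseline_risk = 0.064   # 6.4% risk of breast cancer over the 17 years
rrr = 0.14              # 14% relative risk reduction

arr = baseline_risk * rrr
print(f"ARR: {arr:.2%}")                    # ~0.90%

women = 1000
extra_hours_per_week = 4                    # 7 hours instead of 3
weeks = 52 * 17                             # 17 years of walking

total_hours = women * extra_hours_per_week * weeks
cases_avoided = round(women * arr)          # ~9 cases

print(f"{total_hours:,} hours to avoid {cases_avoided} cases")
print(f"{total_hours // cases_avoided:,} hours of walking per case")  # ~393 000
```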

That’s not to say that walking is a bad idea – there are clearly very good reasons for walking more. However, whatever the associated health benefits might be, the two most compelling reasons to walk will always be these: it’s a very useful way of getting from A to B, and most people find they rather enjoy it!

The Drive to Improve 5 Year Cancer Survival – an NHS Priority, or Political Folly?

In the original Johnny English film, Rowan Atkinson’s hapless spy performs a flawless daredevil penetration into the heart of a hostile occupied building. Dropped by helicopter onto the roof, his use of grappling irons is exemplary, his ability to move through locked doors and windows – textbook. Flushed with his own success, it is only after assaulting several members of staff that he realises he has inadvertently broken into the local hospital instead of his intended target.

If you are going to invest a lot of time and effort into something important, no matter how good your intentions might be, it is vital to aim for the right target.

When the Government published its NHS Mandate earlier this month, a cornerstone of the proposal was the commendable aim for the NHS to be better at Preventing people from dying prematurely. A key aspect of this is to look at deaths from cancer – so far so good. The details, however, are where the problem lies – the focus is on 1 and 5 year survival rates. This is quite simply the wrong target, and will result in decisions that are bad for patients and wasteful of scarce NHS resources. The target should be overall mortality, nothing more and nothing less.

5 year survival data were originally devised to assess the effectiveness of treatment. Here they are useful – if you want to know how one chemotherapy regimen compares with another, then 5 year survival can be very helpful. The problem comes when we use the same data to assess overall performance, or start comparing figures for different countries. We end up with disturbing headlines such as this one from the Daily Mail in 2009, which cause politicians real headaches and bring the danger of knee-jerk reactions and bad decisions.

The problem with 5 year survival figures is that they are so easy to manipulate – and the easiest ways to do this bring little benefit to patients, or even harm them. The hardest way to improve survival from cancer is to genuinely improve treatment and care – this is expensive, requires investment in the people who run cancer services, often relies on medical breakthroughs and has no guarantee of success. There are far easier, much more reliable methods for achieving results if you are so inclined, and two may prove irresistible to politicians so dependent on a quick fix and the next set of statistics.

Technique 1: Diagnose cancer earlier

If you have a cancer that is incurable and you are going to die in three years’ time despite whatever treatment medicine can offer, then you will fall on the wrong side of the 5 year statistic. If, however, I can persuade you to be diagnosed 2 years earlier – through an awareness campaign, or cancer screening, for instance – then even if I don’t change your outcome one iota you will have crossed magically onto the success side of my statistic – Tada! This is the well-known problem of lead-time bias. Of course, for some people an earlier diagnosis may make a difference to their outcome, and we would always want to reduce delays once someone develops symptoms related to cancer, but the evidence is that early diagnosis through screening has a limited impact on overall improvements in survival.
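A toy example makes the trick obvious – the dates are invented:

```python
# Lead-time bias: the same death, two diagnosis dates
death_year = 2015
diagnoses = {"symptomatic diagnosis": 2012,  # found when symptoms appear
             "screening diagnosis": 2010}    # found 2 years earlier

for label, year in diagnoses.items():
    survival = death_year - year
    verdict = "a 5-year survivor" if survival >= 5 else "not a 5-year survivor"
    print(f"{label}: survived {survival} years -> {verdict}")
# The patient dies in 2015 either way; only the statistic improves.
```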

Another, more powerful, lure of early diagnosis through screening is the prospect of picking up cancers so early that they would never have become a problem. If these cancers go completely undetected then they have no impact on the statistics. If, however, they are diagnosed they will, by definition, be treated successfully, and they will add a rosy glow to the 5 year survival data. Take prostate cancer as an example: if you screen for prostate cancer you will save lives – but for every life you save you will need to treat 48 other men who would never have died from their ‘cancer’. Without screening there would be one man who enters the data, and he may or may not survive 5 years. With screening, 49 men become statistics – and they are all on the good side. This is a compelling political argument, but is it good for patients?
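Again, the arithmetic is simple and worth seeing laid out – the 30% survival figure for the ‘real’ cancer is invented for illustration; the 48:1 figure is the one quoted above:

```python
# How overdiagnosis flatters 5-year survival
real_case_survival = 0.30    # invented: 30% of real cancers survive 5 years

# Without screening: 1 man with a real cancer enters the statistics
without_screening = real_case_survival

# With screening: the same man plus 48 men whose 'cancer' would never
# have harmed them - all of whom are counted as successes
with_screening = (48 + real_case_survival) / 49

print(f"5-year survival without screening: {without_screening:.0%}")  # 30%
print(f"5-year survival with screening:    {with_screening:.0%}")     # ~99%
# Survival soars - without a single outcome changing.
```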

Technique 2: Redefine Cancer

Cancers like pancreatic cancer are what we all think of when we use the Big C word – nasty, aggressive diseases that are almost impossible to treat and spread rapidly. If the NHS is tasked with improving 5 year survival for pancreatic cancer then it is on a hiding to nothing – medicine needs to move on and make a breakthrough if that is to happen. So to balance the books, as it were, there is a great temptation to put as many easy-to-treat cancers on the other side of the scales as possible, and the best way to do that is to redefine what we mean by cancer. Terms like Ductal Carcinoma in Situ (DCIS), which is really a pre-cancerous change in the breast of an uncertain nature, have come under the cancer umbrella in recent years. Treated like any other breast cancer, DCIS has phenomenally good survival figures, which is fantastic for the statistics – but the evidence is that many women are treated for it unnecessarily, as it will not always develop into a true cancer.

The Importance of Mortality Data 

The problem with relying on 5 year survival is that it encourages Governments to endorse screening programmes on the basis that they improve statistics, rather than being good for patients. It is vital that all screening programmes are rigorously evaluated for both benefits and harms before they are implemented. If the NHS Mandate looked at overall mortality from cancer instead then the drive to improve would be free from these pressures to artificially manipulate statistics, and the focus could be on better care, as well as public health initiatives that might really make a difference, such as plain packaging for cigarettes.

The Government might even be pleasantly surprised. In 2008, the most recent year for which full data are available, the World Health Organisation database ranks the UK quite favourably – just above Germany and better than most European countries outside Scandinavia. Maybe a pat on the back is in order for the NHS? Or is that not politically permissible these days?

Quick Post – Aspirin in the news again


I think I might start counting how many times aspirin hits the headlines in 2012 – twice so far, and I am sure there will be more to come! Today’s news is a re-analysis of whether or not the drug can reduce the risk of cancer. Interesting, worthy of further research, but not something that is going to send me running to the pharmacy just yet.

Why the scepticism? Well, three reasons. Firstly, the studies that were analysed were all designed to look at the risk of heart disease, not cancer – this matters because it means the findings are much more likely to be due to chance than if the studies had been about cancer in the first place. Secondly, even if the findings are true, 1000 people need to be treated for 3 to 5 years for 3 people to avoid getting a cancer – that’s a lot of people taking aspirin who were never going to get cancer anyway. And thirdly, because there is bound to be another headline later this year about why taking aspirin is not such a good idea – but maybe there I have crossed the line between healthy scepticism and downright cynicism!


To patch or not to patch? The latest on Nicotine Replacement Therapy

It is an inevitable consequence of the headline-driven world we live in that the newsworthiness of any health story will always be measured by its ability to generate a good strap line on a popular subject with a high level of public interest, rather than its actual value to the health of the nation. I can’t say that I mind this – as someone who enjoys a catchy headline as much as anyone I suspect it would be terribly dull if it were not so.

Nevertheless, one consequence of this approach is to elevate some stories somewhat above their station, as was the case with last weekend’s headline: ‘Nicotine patches no better than will power to quit smoking’. I can’t blame the Daily Telegraph for this somewhat misleading conclusion – after all, the Today Programme on Radio 4 said much the same thing, and it is rather more attention-grabbing than ‘Ex-smokers relapse at the same rate regardless of how they quit’, or even ‘quite a lot of people who have recently quit smoking actually succeed’ – both of which are more true to the facts, but would never help me to get a job in journalism.

The reports relate to a study, published recently in the journal Tobacco Control, on the outcomes of people in the USA who had recently quit smoking. I have no problem with the paper – it is a well-conducted study with honest intentions, and has added something to our understanding of smoking relapse. It does not, however, tell us whether or not nicotine patches help smokers to quit – since it only looked at relapse rates among those who had already quit, either with the help of Nicotine Replacement Therapy (NRT) or without it.

The researchers conducted telephone interviews in 2001-2002 to make contact with 4991 people who were current smokers, recent quitters (those who had quit in the last 2 years) or young adults (thought more likely to take up smoking). They then phoned these people again at 2 and 4 years to ask about their smoking habits. Of those who had managed to quit, about 30% had started again by 2 years, and a further 30% of the remainder had started again by 4 years. Put another way, 70% were not smoking at 2 years, and 70% of these (so 49% of the original number) were still not smoking at 4 years – good for them! If around 50% of people who manage to stop smoking are still ex-smokers at 4 years, that is something to celebrate in my book.
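The compounding is easy to verify (a trivial check):

```python
# Relapse arithmetic from the study
still_quit_at_2y = 0.70                     # 30% relapse by 2 years
still_quit_at_4y = still_quit_at_2y * 0.70  # a further 30% by 4 years

print(f"Still not smoking at 2 years: {still_quit_at_2y:.0%}")  # 70%
print(f"Still not smoking at 4 years: {still_quit_at_4y:.0%}")  # 49%
```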

The study then went on to look at factors that might predict why a person would relapse – age, sex, ethnicity, educational attainment, level of smoking, duration of quitting at the time of the research and use of NRT were all studied. The problem with looking at so many variables is that the numbers get quite small when you break them down – there were only just over 50 users of NRT in the entire study (compared with over 400 who had quit without using NRT). The researchers found that the only factor that predicted relapse was having already quit for longer than 6 months – in which case relapse fell from the baseline of 30% to around 17%, a difference that reached statistical significance.

So, if you quit using NRT, this (now quite small) study suggests that you are no less likely to relapse than someone who has not used NRT. Well, that is not a finding that knocks me over! Why would we expect NRT to make a difference to relapse rates months later? There is no reason for it to have a prolonged effect – the key question is, does it make you more likely to quit in the first place? If I am twice as likely to quit by using NRT and I have the same relapse rate, then I am still twice as likely to become an ex-smoker in the long term. If NRT had double the relapse rate then we might have a problem, but that is not what this research has shown.
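To spell that out with some invented quit rates (the relapse figure is the 4-year retention from the study above):

```python
# Why a shared relapse rate does not cancel out NRT's benefit.
# Quit rates here are invented for illustration.
retention_at_4y = 0.49                    # same for both groups

for label, quit_rate in [("without NRT", 0.05), ("with NRT", 0.10)]:
    long_term = quit_rate * retention_at_4y
    print(f"{label}: {long_term:.2%} long-term ex-smokers")
# 2.45% vs 4.90% - still a doubling, untouched by the relapse rate.
```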

The considered opinion on the effectiveness of NRT from the meta-analyses of all the Randomised Controlled Trials (the major ones of which are actually quoted in this paper) is that the quit rate with NRT is somewhere between 1.5 and 3.1 times the rate with placebo. So NRT does work. It might not suit everyone, and is no substitute for a good dose of will power, but we should not throw it out of the smoker’s armoury just yet.

PIP Breast Implants – Where is the meaning behind the statistics?

The controversy over the PIP breast implant scare seems to come down to the rate of rupture of these suspect devices – believe the French figure of 5% and you may agree with the French Government that all the implants should be removed. Side with the statistics of the English Government and the Medicines and Healthcare products Regulatory Agency (the MHRA, the UK body that licenses medicines and surgical devices), and you may find the 1% rupture rate they quote rather more reassuring, and agree that there is no cause for alarm. If it offends your European sensibilities to decide which side of the Channel should win your allegiance, you can always opt for the 7% figure quoted by the Transform cosmetic surgery group – although as this was only 7 out of 108 patients it is hardly the most robust of statistics.

For my own part, I have a major problem with all these numbers, which is this: What do they mean by a rupture rate? Is the quoted rupture rate the rate of rupture in the lifetime of the implant or the rate per year? There is a very great difference between the two! 5% over the lifetime of an implant might be very acceptable to some women, but I suspect few would be unconcerned about a 5% risk year-on-year.

In the last 2 weeks I have made several attempts to scour the internet and try to find clarification on this matter. I have reviewed the guidance on the MHRA website, and trawled through numerous news reports from the BBC, Reuters and other respected news agencies; I have searched Google Scholar and done my best to review the literature (distinctly lacking as much of the important data is held by the companies themselves or the surgeons performing the operations and has not been published in journals). I have read the letters to Health Care Professionals from the Chief Executive of the NHS and the Chief Medical Officer. In all, I have probably spent 2 or 3 hours in the cause, and – until today when the first glimmer of clarity has come my way – I have found nothing: No clarity on what is meant by the rupture rate, and no reference for the original figure that these data are based upon. If it is this hard for a GP to find answers, how must it be for a worried patient, who may have no medical training, to make the important decision as to whether or not to subject herself to an operation on the basis of an uncertain and undefined risk?

And the glimmer of clarity? Well, the Department of Health published its interim report yesterday. It is a typically meaty Government document, and so not for the faint-hearted, but it does contain the following point (page 8, paragraph 18):

“Much attention so far has been given to the issue of rupture in breast implants. The cumulative risk of rupture of a breast implant increases progressively over time. An analysis published by the FDA showed that the rupture rate for the Allergan implant is 0.5% after 2 years, rising to 10.1% (cumulative) after 10 years. For Mentor implants, the post implantation failure rate at 8 years was 13.6% (cumulative). It follows that quoting a “rate of rupture” for an implant, without specifying the time since the original implant, is unhelpful and potentially misleading (my italics).”

Thank you, Department of Health! So the 1% rate (relating to a different implant that has no safety concerns) rises to 10% over 10 years – in other words, it is roughly 1% per year. Interestingly, the 7% rate quoted by the Transform cosmetic group was 7 patients out of 108 who had had implants since 2005 – presumably 7% over 6 years, which sounds suspiciously close to 1% per year to me, even though the media reported it as higher even than the 5% French rate – which itself remains totally unclarified.
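For anyone wanting to make the same conversion, here is the sum – assuming, simplistically, the same risk of rupture in each year (real failure rates probably rise with implant age):

```python
# Convert a cumulative rupture rate to an approximate annual rate
def annual_rate(cumulative, years):
    return 1 - (1 - cumulative) ** (1 / years)

print(f"Allergan: 10.1% over 10 years ~ {annual_rate(0.101, 10):.1%}/year")  # ~1.1%
print(f"Mentor: 13.6% over 8 years ~ {annual_rate(0.136, 8):.1%}/year")      # ~1.8%
print(f"Transform: 7/108 over ~6 years ~ {annual_rate(7/108, 6):.1%}/year")  # ~1.1%
```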

The baseline rate, therefore, that any woman should have had explained to her prior to surgery, and against which the PIP data need to be compared, is a 1% year-on-year rupture rate. Unfortunately, when the group analysing the data for the DoH report came to the PIP implants (covered in a very confusing Annex D), they found the data woefully incomplete and concluded that no conclusions could be drawn. For the moment there is not enough evidence to recommend routine removal on the basis of the risk of rupture, but they do conclude that removal could be warranted on the grounds of the anxiety the scare has caused, and that private clinics should be prepared to pay for it.

A significant concern about the way this controversy has been handled is the assumption that has often been made that the only thing the patient needs is guidance from the authorities – that all that is required is for the Government, be it English or French, to issue a recommendation that all the affected women should follow. It is certainly helpful for Governments to issue advice where they can, and many women will indeed want to follow it, but many will want to decide for themselves. Whether or not to have the implants removed will be a very personal decision, and women will need to be able to come to this individually. For this they need a clear presentation of the risk, with numbers that actually mean something and in a form that can be easily understood.

Where is the Evidence?

I had one of my increasingly rare encounters with a representative from the pharmaceutical industry this afternoon. As usual, it left me wondering when our society will have the courage to stand up to the giants of the industry, and insist that they show some real evidence before they are allowed to market their products.

The drug in question is a new pain-killer called Tapentadol. I was quite interested to hear about it, since I had come across it for the first time only three days earlier, and it seemed worthwhile finding out a bit more. The drug is a morphine-like pain-killer, and its major selling point is a novel, dual mode of action. Pain can be classed as nociceptive (standard, hit-your-thumb-with-a-hammer type pain) or neuropathic (pain caused by over-sensitive pain nerves, as occurs, for instance, after a bout of shingles). We already have drugs that work on each type of pain, including Oxycodone (which is similar to morphine and good for your broken thumb) and Duloxetine, which works on neuropathic pain. ‘All the power of Oxycodone and Duloxetine wrapped up in one molecule!’ the rep proudly informed me.

Well, if it has the combined strength of two drugs, is it not reasonable to expect it to be more effective than one of those drugs on its own? Apparently not. The best data she could show me was that Tapentadol was no less effective than Oxycodone.

‘I thought you said it was more effective?’ I asked her.

‘Oh it is, it has a dual mode of action,’ she replied.

‘But this just shows it’s no worse.’

‘But we know it is more effective.’

‘And how do we know this?’

‘The consultants at St George’s are using it.’

Well, much as I respect the consultants at St George’s, this is not what I would call evidence-based medicine.

When a drug rep shows you data you can be sure of one thing – it is the best data they have in favour of their drug. What you have to worry about is the data out there that they are not showing you. If this was the best she had in terms of efficacy, then this drug is not yet ready to convince me about its novel dual mode of action.

It’s at times like this that I am glad of organisations like NICE, which will look at whether these new agents really are what they claim to be – and even the PCT, which gives increasingly tight guidance on what should usually be prescribed on the grounds of cost-effectiveness. Whilst we might not like being told what we can and cannot do, these organisations stop doctors jumping on the latest bandwagon and are among our best defences against being hoodwinked by the pharmaceutical industry.

Hmmm, I seem to have strayed into politics. Well, sometimes you just have to do these things. Rant over.