A new study of e-cigarettes’ efficacy as an aid to quitting smoking has not only pitted some of vaping’s most outspoken scientific supporters against one of its fiercest academic critics, but also illustrated many of the pitfalls facing researchers on the topic and those – including policy-makers – who must interpret their work.
The furore has erupted over a paper published in the Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, and a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to compare the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: in other words, to discover whether use of e-cigs is correlated with success in quitting, which might well mean that vaping helps you give up smoking. To do this they performed a meta-analysis of 20 previously published papers. That is, they didn’t conduct any new research directly on actual smokers or vapers, but instead tried to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted approach to extracting truth from statistics in many fields, although – as we’ll see – it’s one fraught with challenges.
Their headline finding, promoted online by Glantz himself as well as by the university, is that vapers are 28% less likely to quit smoking than non-vapers – a conclusion which would suggest that vaping is not just ineffective as an aid to quitting smoking, but actually counterproductive.
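Headline figures of this kind are usually derived from a pooled odds ratio. As a purely illustrative sketch – the odds ratio of 0.72 below is assumed here only because 1 − 0.72 = 0.28, and the baseline quit rate is likewise invented – this is how such a number turns into a “28% less likely” claim, and why it should not be read as a 28% drop in quit probability:

```python
# Illustrative arithmetic only: a "28% less likely" headline typically comes
# from a pooled odds ratio (OR). The OR of 0.72 is an assumption chosen
# because 1 - 0.72 = 0.28; consult the paper itself for its reported estimate.
pooled_or = 0.72

relative_reduction = 1 - pooled_or
print(f"{relative_reduction:.0%} less likely to quit")  # 28% less likely to quit

# Caveat: an odds ratio is not a risk ratio. Unless the baseline quit
# probability p0 is small, the reduction in quit *probability* is smaller
# than 28%. The Zhang-Yu conversion makes this concrete:
p0 = 0.15  # assumed baseline quit rate, for illustration only
risk_ratio = pooled_or / (1 - p0 + p0 * pooled_or)
print(f"implied risk ratio at p0={p0:.0%}: {risk_ratio:.2f}")
```

At a 15% baseline quit rate the implied risk ratio is about 0.75, i.e. roughly a 25% reduction in quit probability rather than 28% – a small gap here, but one that grows as the baseline rate rises.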
The result has, predictably, been uproar from supporters of e-cigarettes in the scientific and public health communities, particularly in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, who called the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the United States, who wrote “it is obvious that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of this study represents a major failure of the peer review system in this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested the “conclusions are tentative and sometimes incorrect”. Ann McNeill, professor of tobacco addiction at the National Addiction Centre at King’s College London, said “this review is not scientific” and added that “the information included about two studies that I co-authored is either inaccurate or misleading”.
But what, precisely, are the problems these eminent critics find in the Kalkhoran/Glantz paper? To answer some of that question, it’s necessary to go beneath the sensational 28%, and examine what was studied, and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, whose results should be much less vulnerable to any distortions which may have crept into an individual investigation?
(This may happen, for example, by inadvertently selecting participants with a greater or lesser propensity to stop smoking because of some factor not considered by the researchers – an instance of “selection bias”.)
Of course, the statistical side of a meta-analysis is rather more sophisticated than simply averaging the totals, but that’s the general idea. And even from that simplistic outline, it’s immediately apparent where problems can arise.
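To make that general idea concrete, here is a minimal sketch of the standard fixed-effect, inverse-variance pooling step used in meta-analyses. The three “studies” below are invented for illustration and bear no relation to the 20 papers in the actual analysis:

```python
import math

# Minimal fixed-effect, inverse-variance meta-analysis sketch.
# Each study contributes (log odds ratio, standard error); the study
# figures below are invented purely for illustration.
def pool_fixed_effect(estimates):
    """Pool per-study log odds ratios given as (log_or, std_err) pairs."""
    weights = [1.0 / se**2 for _, se in estimates]  # inverse-variance weights
    pooled_log_or = sum(w * lor for (lor, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled_log_or, pooled_se

# Three hypothetical studies: log odds ratio and its standard error.
studies = [(math.log(0.8), 0.20), (math.log(0.6), 0.30), (math.log(0.9), 0.25)]
log_or, se = pool_fixed_effect(studies)
print(f"pooled OR = {math.exp(log_or):.2f}, "
      f"95% CI {math.exp(log_or - 1.96*se):.2f}-{math.exp(log_or + 1.96*se):.2f}")
```

The key design point is that more precise studies (smaller standard errors) get larger weights, and that the fixed-effect model assumes every study is estimating the same underlying effect, which is exactly the assumption critics say breaks down when heterogeneous studies are pooled.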
If its results are to be meaningful, the meta-analysis must somehow take account of variations in the design of the individual studies (they may define “smoking cessation” differently, for example). If it ignores those variations, and tries to shoehorn all the results into a model that some of them don’t fit, it introduces distortions of its own.
Moreover, if the studies it’s based on are themselves flawed in any way, the meta-analysis – however painstakingly conducted – will inherit those same flaws.
This is a charge made by the Truth Initiative, a U.S. anti-smoking nonprofit which generally takes an unwelcoming view of e-cigarettes, about a previous Glantz meta-analysis which came to similar conclusions to the Kalkhoran/Glantz study.
In a submission last year to the U.S. Food and Drug Administration (FDA), responding to that federal agency’s request for comments on its proposed e-cigarette regulation, the Truth Initiative noted it had reviewed many studies of e-cigs’ role in cessation and concluded they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of them have been included in a meta-analysis [Glantz’s] that claims to show that smokers who use e-cigarettes are less likely to quit smoking compared to those who do not. This meta-analysis simply lumps together the errors of inference from the correlations.”
It added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid”. Put bluntly: don’t mix apples with oranges and expect to get an apple pie.
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in the Lancet Respiratory Medicine – the same journal that published this year’s Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often with no control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies simply do not exist yet”.
So a meta-analysis can only be as good as the research it aggregates, and drawing conclusions from it is only valid if the studies it’s based on are constructed in similar ways to each other – or, at least, if any differences are carefully compensated for. Of course, such drawbacks also affect meta-analyses that are favourable to e-cigarettes, such as the famous Cochrane Review from late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, and focus on the specific questions posed by the San Francisco researchers and the ways they tried to answer them.
One frequently expressed concern has been that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts began. Thus, the analysis by its nature excluded those who had taken up vaping and quickly given up smoking; if such people exist in large numbers, counting them would have made e-cigarettes look like a more successful route to quitting smoking.
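The effect Phillips describes can be seen with some toy numbers. All of the figures below are invented for illustration and come from no actual study:

```python
# Toy illustration of the exclusion effect (all numbers invented): studies
# that enrol only *current* smokers never see vapers who quit quickly, so
# the "vaper" arm's apparent quit rate is dragged downwards.
vapers_total = 10_000
quick_quitters = 2_000   # quit soon after taking up vaping, before enrolment
later_quitters = 1_500   # quit during the study's observation window

true_rate = (quick_quitters + later_quitters) / vapers_total
observed_rate = later_quitters / (vapers_total - quick_quitters)

print(f"quit rate among all vapers:      {true_rate:.1%}")      # 35.0%
print(f"quit rate the study design sees: {observed_rate:.1%}")  # 18.8%
```

With these assumed numbers, a design that enrols only current smokers would report a quit rate of under 19% for vapers even though 35% of all vapers actually quit – the direction of bias the critics allege, though its real-world size is of course unknown.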
A different question was raised by Yale’s Bernstein, who observed that not all vapers who smoke are trying to quit combustibles. Naturally, those who aren’t trying to quit won’t quit, and Bernstein noted that when these people were excluded from the data, it suggested “no effect of e-cigarettes, not that e-cigarette users were less likely to quit”.
Excluding some people who did manage to quit – then including people who had no intention of quitting anyway – would certainly seem likely to affect the results of a study purporting to measure successful quit attempts, although Kalkhoran and Glantz argue that their “conclusion was insensitive to a wide range of study design factors, including whether the study population consisted only of smokers interested in quitting smoking, or all smokers”.
But there is a further, slightly cloudy area which affects much science – not just meta-analyses, and not just these researchers’ work – and, importantly, is often overlooked in media reporting, as well as by institutions’ publicity departments.