The Scientist 15[17]:39, Sep. 3, 2001
OPINION
Financial Gain: Just One of Many Motives
By Walter A. Brown
It's hard to concoct a research scenario in which the investigator
does not desire one result over another. Perhaps a bit of government-
supported contract research falls into this category--the engineer
paid by the Air Force, let's say, to measure the tensile strength of
several compounds, or the biochemist paid by the Food and Drug
Administration to assess the bioactivity of two generic agents.
But most investigators do care about the results; that's why they do
the research in the first place. And that's why, on a regular,
sometimes daily, basis your everyday working scientist comes up
against the conflict between her desire to achieve a certain result
and her obligation to uncover the truth. That conflict is an
important part of the scientist's inner world. It influences every
feature of the research endeavor from the research question selected
to research design, methods of measurement, and the statistics
applied in data analysis.
An array of motives far too vast to enumerate lies beneath the desire
to find one "truth" rather than another. Among the most powerful and
ungovernable stands the desire for recognition. But less troublesome ones
are also at play. A "positive" result can bring a grant, the
gratitude of your supervisor, a poster at the Federation meeting and
with it another crack at the cute cell biology postdoc.
Somewhere on the scale of influence between the desire for
recognition and a beer with the cute postdoc lie financial motives:
the investigator who conducts research on the efficacy of a treatment
or diagnostic test but also has a large financial stake in the
company developing these products, or the investigator who conducts a
clinical trial for a drug company from which he receives research
money, consultation fees, and lecture honoraria.
Not surprisingly, a study examining authors' published positions on
the safety of calcium-channel antagonists (used to treat high blood
pressure and angina) found that authors who supported the use of
these agents were more likely than those who did not to have
financial relationships with the companies that sell them.1 Yes, the
financial motive may well influence what researchers report. But it
is only one among many motives, it is hardly the most powerful, and
in cold cash terms it is small change. As a recent editorial in The
Economist points out, "Science has always had to cope with conflicts
of interest. The most awkward one a researcher can face is his own
interest in the correctness of his hypotheses."2 Nonetheless, it's the
financial conflict that gets the media ballyhooing and journal
editors and bioethicists wringing their hands.
Why the exclusive attention to just one of the many considerations
that influence scientific research? An obvious answer is that
financial dealings are often a matter of public record and thus
easier to detect and verify than, let's say, desire for mother's
approval. But money is not the only external and verifiable source of
influence. The hypotheses stated in an investigator's grant, the date
a grant comes up for renewal, an investigator's publication record,
and the date when he comes up for promotion also create a desire for
particular research results and can be matters of public record.
Do financial conflicts of interest present a greater hazard to public
health than other sorts of conflicts? Clearly financial conflicts are
most likely to be at play when research involves a potentially
marketable product. And when financial considerations result in
research that makes a product appear more effective, less toxic, or
more reliable than it is, the public suffers. Drug companies do
strive mightily to design, analyze, and report research studies in a
way that puts their products in the best possible light. Such
sponsored studies often smack more of advertising than they do of
science, and they need to be read as such. In an attempt to curtail
some of the pharmaceutical industry's more troublesome carrying on,
which in some recent highly publicized cases has included suppression
of data and harassment of uncooperative investigators, the editors
of prominent medical journals have collectively decided to refuse
publication of drug-company sponsored studies "unless the researchers
involved are guaranteed scientific independence."3 The editors are
right to do so.
But the profit motive is not the only danger to public health. From
bloodletting to routine tonsillectomy to the bland ulcer diet to
psychoanalytic treatment of schizophrenia, medical history is replete
with ineffective, sometimes dangerous, treatments, promoted primarily
because their advocates believed in them and not because they stood
to make money from them. We are hardly free today of research
conducted by the advocate of a treatment who has little to gain
financially from that treatment's acceptance but much to gain in
recognition and esteem.
Why then are researchers called upon to disclose their financial
interests but not, say, their dates of grant renewal and promotion?
And why do journal editors, bioethicists, and other guardians of
scientific morality address, with suffocating sanctimony, the horrors
of financial conflicts but leave untouched the more powerful, darker,
and less governable motives that influence the research endeavor?
Could the biases and motives of academia be at play?
The folks who do the financial-conflict hand-wringing are almost
invariably full-time academics or well entrenched in the academic
establishment. They are unlikely to be entrepreneurial; like most
academics, they tend to be politically liberal; and they have an
intuitive distrust of business. They don't attend to the other motives
influencing researchers because those motives are so pervasive and
they have lived with them so long that they seem a normal and
expectable part of the research life. And, needless to say, those who
trumpet the wickedness of financial conflicts are themselves beset,
as are all scientists, by the entire range of the other darker and
lighter motives.
The researcher's solutions to the conflicts inherent in almost any
study range from outright deception, through data massage, statistical
sleight of hand, and less-than-enthusiastic attempts to disprove one's
hypothesis, to uncompromising truthfulness. These
solutions are at play regardless of the nature of the conflict. I
like to think that most scientists are devoted to the truth and that
when other considerations have an influence, that influence is
inadvertent and operates outside their awareness.
Inadvertent or not, how can we know when we read a research report
that we are reading the truth? The answer is that we can't. The
requirement that researchers disclose their relevant financial
interests--widely recommended and, perhaps for good reason, widely
ignored1,4--informs the reader only of the potential for a financial
conflict. It doesn't tell the reader the only thing that's important:
whether the researcher was swayed by those interests and whether the
results are accordingly invalid. And, of course, financial
disclosures don't tell us the extent to which the researcher was
gripped by one of the more pervasive and powerful sources of bias.
The scientific community has been sorting the wheat from the chaff
for a long time, far longer than it has been fretting over financial
conflicts. The process can be laborious and time-consuming, but it
works and there is probably no substitute for it. Fraudulent
research, the history of science shows us, does have some
distinguishing features. In 1953 Irving Langmuir, a Nobel laureate in
chemistry, gave a lecture on "the science of things that aren't so."5
He discussed a series of "phenomena," all of which had generated
great interest and all of which proved to be nonexistent. The
earmarks of this "pathological science" included: 1) the effect is of
a magnitude close to the limit of detectability; 2) the theories put
forth are fantastic and contrary to experience; 3) criticisms are met
by ad hoc excuses thought up on the spur of the moment ("They always
had an answer--always."); 4) only supporters observe the effects;
critics can't reproduce them.
The last is most important. The final, proper, and only trustworthy
arbiters of research results are time and replication.
Walter A. Brown teaches psychiatry at Brown Medical School and Tufts
University School of Medicine.
References
1. H.T. Stelfox et al., "Conflicts of interest in the debate over
calcium-channel antagonists," New England Journal of Medicine,
338:101-6, 1998.
2. The Editors, "Going for gold," The Economist, May 19, 2001, page 15.
3. S. Okie, "A stand for scientific independence: medical journals aim
to curtail drug companies' influence," The Washington Post, Aug. 5,
2001, page A1.
4. S. Krimsky, L.S. Rothenberg, "Conflict of interest policies in
science and medical journals: editorial practices and author
disclosures," Science and Engineering Ethics, 7:205-18, 2001.
5. G. Taubes, Bad Science, New York: Random House, 1998, pp. 342-3.