The late 1990s and early 2000s saw a number of prescription drugs come to market that ended up causing grave harm. Most famous among these was the arthritis and pain drug Vioxx (rofecoxib).
Approved by the FDA in May 1999, Vioxx soon became a multibillion-dollar drug, with peak sales of $2.5 billion a year, and was prescribed to millions of people. By 2001, initial studies indicated that the drug might be putting people at higher risk for heart attacks and strokes.
Alarm bells went off and a high-profile, wide-scale probe followed. The problem was confirmed, and in September 2004, Merck pulled the drug from the market. But not before an estimated 88,000 to 140,000 people had heart attacks or strokes, with as many as 56,000 deaths, according to a 2005 FDA analysis.
How could that have happened? The short answer is that the way drugs get approved — often based on just a handful of studies in a few thousand people — can’t pick up every possible adverse effect or safety problem of a new drug. Only wide use, in tens of thousands or even millions of people, over many years will reveal some problems.
The pharmaceutical industry, researchers and the FDA have long known this. But it’s cost-prohibitive to test drugs on tens of thousands of people before approval. So “the system” relies on monitoring drugs after they’re approved to see if problems crop up with “real world” use.
In the wake of Vioxx and several other high-profile cases of post-approval harm — sometimes years after approval — lawmakers in Congress asked a simple question: If we have to live with imperfect pre-approval detection of possible problems, why can’t we radically improve the way we monitor drugs after they enter the market?
Birth of the Sentinel System
Very good question. In 2007, Congress mandated that the FDA develop a computer-based system to track and analyze the safety of drugs after they hit the market — and do it pronto.
Following a year of debate and discussion, in May 2008, HHS (the Department of Health and Human Services) and the FDA launched something called the Sentinel Initiative. It was the brainchild of then-HHS secretary Mike Leavitt and former FDA commissioner Mark McClellan.
Leavitt, somewhat of a hi-tech geek, pledged “a national, integrated, electronic system for monitoring medical product safety.” Ultimately, he said, Sentinel would “help monitor medical products throughout their entire life cycle and thus better ensure the protection and promotion of public health.”
Leavitt also linked Sentinel to another George W. Bush administration initiative, the Nationwide Health Information Network (NHIN), built in part on electronic health records. The NHIN was going to “connect clinicians across the healthcare system and enable the sharing of data as necessary with public health agencies,” Leavitt said.
You have probably guessed where this is going. The NHIN never materialized — at all. It was thwarted by what became a chaotic, controversial and expensive ($20 billion) program to push doctors and hospitals to adopt electronic health records.
Little Impact from Sentinel?
As for Sentinel, it still exists. But it, too, has dashed hopes and suffered from a combination of bureaucratic inertia, technical obstacles and a lack of urgency.
I took part in launching Sentinel from 2007 to 2010 as a consumer representative to the project. At the time, I worked at Consumers Union/Consumer Reports and co-directed a project — which also still exists — called Consumer Reports Best Buy Drugs.
To make a long story short, initial enthusiasm for Sentinel waned over time. The history and a probing critique of Sentinel are well told in a recent piece from the health and medical site STAT.
The article alleges that Sentinel has had “little measurable impact” after 10 years and a cost of $207 million. The current budget is $20 million per year. Most experts STAT spoke to questioned “whether Sentinel can adequately identify risks involving drugs.”
To date, the article says, Sentinel has led to changes in the labels of only 2 medications out of several hundred that have been assessed. None were removed from the market. Drug labels guide doctors in prescribing drugs, warn of potential serious problems to watch for, and help both doctors and consumers discern possible side effects.
In a written response to my email query about the STAT article and their perspective on Sentinel, the FDA emphasized that looking at just label changes or drugs pulled from the market was not an accurate or comprehensive way to assess Sentinel.
They said Sentinel had “informed the issuance of 5 safety communications, including 2 safety label changes” but also provided “reassuring” evidence for dozens of other drugs.
In their words: “This leads FDA to not take any visible action because FDA has concluded that the prescribing information adequately describes the risks and benefits.”
Most notably, the agency said, a full-scale FDA analysis that included a Sentinel analysis assessed whether long-term use of stimulant drugs to treat ADHD (attention deficit/hyperactivity disorder) was associated with cardiomyopathy and heart failure. The analysis “provided important new safety information and a deeper understanding of these medical products,” the agency stated in its response, but it led to no regulatory action. In other words, the drugs were cleared.
Fair enough. It is important to be reassured that medicines taken by millions, and especially children, are safe.
Is Drug Safety Monitoring System Underutilized?
But then FDA’s response turned somewhat more disingenuous. For example, FDA claims that Sentinel has been “fully operational” for only 18 months. That’s technically true because the agency called the program a “pilot” and “mini-Sentinel” for 8 years.
FDA basically claims Sentinel was not ready for prime time, and that during those 8 years it was “developing infrastructure and methods.”
Since I was involved in the program and followed it closely, I know that by year 5 or 6, the FDA had significant capabilities under Sentinel but did not use the tools and network at its disposal to fully test things out.
The network is, in fact, impressive, which is another reason to wonder why its yield of meaningful information has not been greater. Sentinel is one of the largest “distributed data” networks in the US today and possibly the world. According to Harvard Pilgrim Health Care, the nonprofit health insurer based in Wellesley, MA, that now manages the program under contract to the FDA, Sentinel now encompasses:
- 18 organizations, including many of the nation’s largest health insurers (Aetna, Anthem, Humana, Kaiser Permanente) and various disease registries
- 88 hospitals and other inpatient facilities
- Prescription drug data on some 200 million people, with the routine accrual of data on 48 million
- 4 billion prescriptions
- 4 billion doctor or lab visits and hospital stays
- 42 million acute inpatient stays
- 7.2 billion unique medical encounters
The thing is, it’s well known that several of the insurers in this network already had their own drug and medical product safety monitoring systems in the early to mid 2000s. Indeed, Kaiser Permanente’s system picked up a signal on Vioxx in 2001-2002 that helped trigger the wider probes.
Why has it taken so long then to get things together and generate information about hundreds of drugs — safety, side effects, effects on different populations, special considerations — for both doctors and consumers? That was the promise, and it was a worthy one.
FDA’s explanation to me: Capturing the right and accurate data — for example, on both inpatient hospitalizations and outpatient clinic visits, and on deaths inside and outside the healthcare system — is just plain hard.
Yes, it is. I concur. That was the challenge — to overcome the obstacles and make it happen in the interests of public health.
FDA Slow to Develop Sentinel and Get It Rolling
The FDA officials who corresponded with me wrote: “FDA is [now] exploring novel linkages to vital records data systems to inform these outcomes, which could improve Sentinel’s ability to connect the dots between monitored products and fatalities occurring outside of the healthcare system.”
That should have happened years ago.
The problems and slow development of Sentinel would perhaps create less rancor if it were not the second time the agency had failed to build an adequate drug safety alert system.
Sentinel was designed to complement and supplement a legacy drug safety monitoring system called FAERS (FDA Adverse Event Reporting System), a database that contains information on adverse event and medication error reports submitted to the FDA by doctors, nurses, pharmacists, hospitals, lawyers, manufacturers and consumers.
Reporting into the system is voluntary except for drug manufacturers; they must report problems. FAERS has some big weaknesses, which the agency acknowledges. For one, because reporting is voluntary, only an estimated 10% of adverse events are reported. Reports are also not validated and many don’t contain enough data to permit a full evaluation of a potential safety problem, such as whether there is a causal relationship between a drug and an event.
And while FAERS data is available to doctors and the public, it’s not easy to use and so is rarely consulted by either. But in fairness, FAERS has resulted in some important drug label changes over the years, and helped push a few bad drugs off the market.
The upshot of all this is that we continue to lack a robust, comprehensive and modernized computer-based system in the US to detect problems with drugs once they come on the market. That is a big gap. Arguably, it’s a gap that is more important than ever to fill as the new administration in Washington and new FDA chief, Scott Gottlieb, intend to accelerate the approval of drugs.
Sentinel is at the cutting edge of the big-data revolution, and, as a concept, still has tremendous value and promise. The initial vision was a good one. But the FDA now needs to move much faster to deploy Sentinel to quickly identify unsafe drugs and adverse effects that consumers must be informed about.
Steven Findlay is an independent medical and health policy journalist and a contributing editor/writer for Consumer Reports. He derives some of his posts and insights from Consumer Reports Best Buy Drugs, a grant-funded public information and education program that evaluates prescription drugs based on authoritative, peer-reviewed research.