
by Ronald Brus | 29 Apr 2016

Can the price of drugs be justified based on R&D investment?

Should drug prices be based on the R&D investment behind them? I was recently asked this question, and my response was blunt: in the current setting, the reality is that they are. After up to 15 years of R&D, a pharmaceutical company has a limited window to recoup those costs and make a profit (let’s not forget it’s a commercial enterprise with an eye on shareholder value) before the patent expires and generic manufacturers are free to make and sell the same drug at a fraction of the price.


It’s not the pharmaceutical industry’s fault — we, as society, ask the industry to undertake years of extensive clinical testing, meaning that a promising molecule discovered 15 years ago is often only coming to market today. We also see wider stakeholders in the healthcare system rewarded by that same system — whether that be doctors who are reimbursed to run a trial at their hospital and present it at a major conference, or regulators who charge to review the clinical trial data and approve a drug. The entire system is set up so that the pharmaceutical industry pays out huge sums of money before a drug actually comes to market (estimates vary, but the average is reported to be around $2bn per new drug and could be as high as $5bn). With this in mind, can we, with a clear conscience, challenge the industry on the prices it sets in a free market?


But what if we lived in a different world, where drugs were developed and launched in half the time? Where new, potentially life-saving medications came to market more quickly and at much lower prices? And where the market was left to decide whether a drug was beneficial and worth the money? Sound almost too good to be true? It could be a reality, but first let’s take a closer look at the current drug development process and the clinical testing it requires.


Before a new drug is tested in a human for the first time, there have already been over 3 years of laboratory testing on computer models, live cell cultures and animals. By this point, scientists are confident that the active compound has the desired effect on the disease being targeted, but they need to test it in humans to be sure. It is at this point that clinical research, which will take up to 6 years, begins.


The first phase of testing is performed on a small sample of either healthy volunteers or patients to monitor tolerability and safety, and to determine whether the desired effect is observed in humans. Phase 1 typically lasts one year.


The drug developers then move to Phase 2 clinical testing, which lasts another two years and takes place in a larger sample of patients, to confirm that the drug has the desired effect and an acceptable level of safety, and to identify the appropriate dosing.


Finally, after what is likely to have been around three years of human testing, the drug enters its final round of clinical testing prior to market authorization. Phase 3 testing is typically a three-year period of testing on large groups of patients to monitor longer-term adverse events, or side effects, and to establish efficacy. This is the most costly stage of development because of the number of subjects required for the trials; a 2011 study found that over 90% of the total clinical testing budget was spent on Phase 3.


If you’re wondering what accounts for the remaining 3 years of the average 15 years to launch a new drug, that time is taken up by submissions to, and reviews by, the regulatory and reimbursement authorities.


If we go back to clinical testing, we’ve established that after Phase 2 (over 6 years of testing in total) a drug has an acceptable level of safety and does what it says on the box. Would it not make sense at this point to launch, and to leave it up to doctors to decide whether the drug is effective in the real-world environment? In what other industry is the manufacturer left to shoulder the economic burden of proving the effectiveness of its product, and allowed to control the very environment in which that proof takes place?


Phase 3 trials are set up to optimize the chances of a successful outcome for the product — patients are selected according to strict criteria, monitored stringently to ensure they take the product as prescribed, and removed from the trial when they stop responding. In the real world, we don’t conform to these strict requirements and we behave unpredictably. So what difference does it really make whether the evidence comes from a clinical trial setting or from taking a medication in the real world? At least, as patients in the real world, we know we are taking the product and not a placebo. This is why some countries now call for ‘real-world evidence’ that a product works in everyday use before agreeing a price they are willing to pay for it.


We’re not going to be able to change this overnight, but we can give it a nudge in the right direction. Regulators are starting to do so by offering new mechanisms for earlier, conditional approval before Phase 3, but a drug company still has to complete that lengthy and costly clinical trial round even while its product is on the market under such conditional approval.

Two-sided marketplaces such as Uber and AirBnB have used technology to improve asset allocation through distributed supply, and users have seen service levels rise dramatically and prices plummet. Could we apply the same logic to medications? That’s exactly what we’re trying to do at myTomorrows. We make treatments that are still in development available and, where the regulatory criteria are met, leave physicians and their patients to decide whether to start treatment based on the data. In the long term, this model could lead to lower drug costs. This is what we believe in — earlier access to new treatments which could help patients, at a lower cost to society.