Next-generation genome sequencing

Written by Catherine Bolgar


“Next-generation sequencing” (NGS) is the latest step in genomic sequencing. Currently NGS is used mostly in the lab, but possibilities for clinical applications are opening up.

Sequencing of the entire human genome was completed in 2003, a process that took over a decade. The goal was to identify all the three billion base pairs of deoxyribonucleic acid (DNA) in our 23 pairs of chromosomes. Genes are segments of the chemical compound DNA, passed down from parents to children, and mutations in our genes are linked to many diseases, from cancer to inherited conditions such as sickle-cell disease, cystic fibrosis or Tay-Sachs disease.

Faster computers, new technology and better optics are giving medical researchers a big boost, while also cutting costs. For example, the supercomputer at the Argonne National Laboratory near Chicago—jointly ordered by Argonne and the University of Chicago in 2010—can analyze 240 whole genomes simultaneously in two days, compared with the several months it takes to analyze a single whole genome with less extraordinary computers.

“The speed and depth of coverage for looking for mutations have improved,” says Glen J. Weiss, director of clinical research and medical oncologist at Cancer Treatment Centers of America at Western Regional Medical Center in Goodyear, Arizona, and associate professor in the cancer and cell biology division at the Translational Genomics Research Institute (TGen), based in Phoenix.

“Turnaround time for getting results has improved. Is it ready for direct clinical applications? At this time, yes, though in a fairly limited way.”

Many of the genes being sequenced don’t yet have a sufficiently complete picture to apply to clinical use. “Whole-genome sequencing is still research-oriented, trying to identify mutations associated with carcinogenesis,” Dr. Weiss says.


Research isn’t restricted to DNA. Ribonucleic acid (RNA) also plays a role in cells, passing genetic information from DNA to proteins. New aspects of RNA’s importance have only recently been uncovered. The set of all RNA molecules is called the transcriptome. Exons are the part of the gene that encodes proteins, and all the exons in a genome are called the exome.
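As a rough illustration of the exon idea, a gene can be pictured as a string of DNA letters in which only certain stretches code for protein; joining those stretches gives the coding sequence. The sequence and coordinates below are entirely made up for the example.

```python
# Illustrative sketch with a made-up gene sequence: exons are the
# protein-coding segments, and stitching them together yields the
# coding sequence (the gene's contribution to the "exome" view).
gene = "ATGGTTCCAGGTTAACCGGATCGATTAG"  # hypothetical 28-letter DNA string

# Hypothetical exon coordinates as (start, end) half-open intervals.
exons = [(0, 6), (10, 16), (22, 28)]

# Keep only the exonic segments; the stretches in between (introns)
# are skipped, just as whole-exome sequencing skips them.
coding_sequence = "".join(gene[start:end] for start, end in exons)

print(coding_sequence)  # the exon-only view of this toy gene
```

Real exomes span roughly 1–2% of the genome, which is why whole-exome sequencing is cheaper and faster than sequencing the whole genome.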

NGS comprises whole-genome sequencing, selected gene-set sequencing, whole-exome sequencing and whole-transcriptome sequencing, as well as other techniques, such as massively parallel sequencing.

Whole-exome sequencing is mostly used in research, for example to find genes related to glaucoma. In the clinic it has helped doctors pinpoint previously undiagnosed uncommon genetic diseases in patients.

Whole-genome sequencing has been used on cancer patients to compare tumor DNA with patients’ normal DNA, so doctors can choose the best treatment based on the mutations that were found. Treatments are specific to the cancer’s genetic profile, so speedy analysis of that profile is essential for patients with metastatic cancer.
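The core of the tumor-versus-normal comparison can be sketched very simply: line up the two sequences and list the positions where they differ, since those differences are candidate somatic mutations. The sequences below are invented for the example; real pipelines align billions of short sequencing reads, but the comparison idea is the same.

```python
# Minimal sketch with made-up sequences: compare a tumor sequence
# against the patient's normal sequence, position by position, and
# record each mismatch as (position, normal_base, tumor_base).
normal = "ATGCGTACGTTA"
tumor  = "ATGCGAACGTCA"

somatic_mutations = [
    (pos, ref, alt)
    for pos, (ref, alt) in enumerate(zip(normal, tumor))
    if ref != alt
]

print(somatic_mutations)
```

Each reported mismatch is then checked against databases of known driver mutations to see whether a targeted treatment exists for it.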

Whole-genome sequencing also has been used on pathogens, such as bacteria. Real-time sequencing of Acinetobacter baumannii, a bacterium which is resistant to many drugs and which attacks severely ill patients, helped Queen Elizabeth Hospital Birmingham in the U.K. control an outbreak of a new strain of A. baumannii. Similarly, it was used to track methicillin-resistant Staphylococcus aureus (MRSA) in other hospitals in the U.K. and Thailand.
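The outbreak-tracking logic rests on counting single-nucleotide differences (SNPs) between isolates: two samples separated by only a handful of SNPs are likely part of the same transmission chain, while many differences suggest unrelated strains. The sequences and the cutoff below are hypothetical, chosen only to illustrate the idea.

```python
# Hedged sketch: genomic outbreak tracking compares bacterial isolates
# by counting the positions where their sequences differ. All data here
# is made up; real isolates are full genomes, not ten letters.
isolates = {
    "patient_1": "ATGCGTACGT",
    "patient_2": "ATGCGTACCT",  # one difference: plausibly the same chain
    "patient_3": "TTGAGTTCGA",  # many differences: an unrelated strain
}

def snp_distance(a, b):
    """Count positions where two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

THRESHOLD = 2  # hypothetical cutoff for "same transmission chain"
same_outbreak = snp_distance(isolates["patient_1"], isolates["patient_2"]) <= THRESHOLD
```

In practice, clustering isolates this way lets hospitals see which infections are linked and where the transmission chain can be broken.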

Whole-transcriptome sequencing has found genetic markers in blood related to post-traumatic stress disorder, uncovered a cell-signaling pathway related to inflammatory bone erosion and rheumatoid arthritis, and revealed which genes are active in human muscles and how muscles in men and women differ.

Dr. Weiss was part of a team using a combination of whole-genome sequencing and whole-transcriptome sequencing to identify more targets in a person’s tumor—targets that might respond to specific treatment.

However, “for the most part, in the past 15 years, using precision medicine and sequencing based on tumors has not yielded as much success as we would have liked,” Dr. Weiss says. When imatinib hit the market in 2001 as a treatment for chronic myeloid leukemia (CML), a cancer of the white blood cells, “we thought targeting would be the cure-all for all cancers. The naivety of researchers then was that CML has only one driver abnormality,” he says. “But other cancers can have a whole slew of abnormalities: not just one driver, but dozens of molecular subtypes.”

For example, several years ago lung cancer was linked to only a couple of genetic mutations. Today, we know that “there are about 15 different mutations, and each has a unique prognosis and outcome with a drug,” Dr. Weiss says. However, “if you look at a pie chart of how much each individual mutation makes up of the overall lung-cancer population, for nearly half of cases we don’t know what’s driving the cancer,” he adds. “There are still a lot of unknowns.” What we do know for sure is that genetic sequencing has a growing role to play in medical research.


Catherine Bolgar is a former managing editor of The Wall Street Journal Europe. For more from Catherine Bolgar, contributors from the Economist Intelligence Unit along with industry experts, join the Future Realities discussion.


Photos courtesy of iStock

The logic of biologics

Written by Catherine Bolgar


Biologics have long been the great hope in the fight against non-communicable diseases. Cancer, cardiovascular and chronic respiratory diseases, diabetes and mental health account for 63% of all deaths world-wide. According to a 2011 World Economic Forum report, these diseases will cost some $47 trillion in lost global output over the next two decades. Unsurprisingly, biologics are grabbing an increasing share of the blockbuster drugs market; in 2014 they represented six of the world’s 10 best-selling pharmaceuticals.

Unlike conventional chemical-based drugs, biologics are organic and consist of larger molecules, with thousands of times more atoms. Their greater complexity, however, means that “the regulatory pathway is more cumbersome,” notes Ranjith Gopinathan, program manager, life sciences in the European health-care practice of Frost & Sullivan, a global market research and consulting firm. Of the 41 new drugs approved by the U.S. Food and Drug Administration in 2014, only 11 were biologics.

Nevertheless, biologic drugs that have been approved have made a huge and rapid impact. Take sofosbuvir (sold by Gilead as Sovaldi), an anti-viral medication that helps cure hepatitis C. With some 150 million hepatitis C sufferers world-wide, the drug became a global best seller within its first year on the market.

One of the hottest areas in biologics is the development of monoclonal antibodies. These mimic the body’s natural antibodies and have proven to be particularly effective in cancer treatment. They can make cancer cells more visible to the immune system, block growth signals, prevent new blood vessel formation in tumors, and deliver radiation or chemotherapy to cancer cells.

Trastuzumab (sold by Roche as Herceptin), for example, is a monoclonal antibody that targets the HER2 receptor in breast cancer, a genetic variation found in 15% of breast cancer patients. When used with other chemotherapy drugs, Herceptin increases survival rates by 37%. Roche has come up with other biologics—pertuzumab (sold as Perjeta) and trastuzumab emtansine (sold as Kadcyla)—that can further improve Herceptin’s results, says Barbara Gilmore, a senior industry analyst at Frost & Sullivan.

Another monoclonal antibody, launched on the U.S. market in March 2015, is dinutuximab (marketed by United Therapeutics as Unituxin). Containing mouse and human components, it helps the immune system find and destroy cancer cells by targeting a substance found on the surface of neuroblastoma tumor cells. Neuroblastoma is a cancer that starts in the nervous system and typically afflicts children under five.

Monoclonal antibodies are key to the success of targeted therapeutics, a process that attacks diseases without affecting healthy cells and tissues. Meanwhile, advances in companion diagnostics and genetic profiling are expected to bolster personalized medicine.

“The growth will be in personalized medicine and targeted therapeutics,” says Mr. Gopinathan. “More efficient drug-development processes based on the disease pathophysiology and genetic risk factors would be game-changers in the industry.” He predicts: “Biologics will continue to outpace overall pharma growth.”

Another promising growth area lies in non-brand versions of biologics, known as “biosimilars.” These are analogous to the $261 billion generic drugs market that replicates conventional drugs whose patents have expired.

One such biosimilar, developed by Novartis, is Zarxio, a version of Amgen’s filgrastim (sold as Neupogen), which helps prevent infection during chemotherapy. Amgen is also developing six of its own biosimilar drugs. “Here’s a biotech company that makes biotech drugs, and even though they have a robust pipeline, they’re also making biosimilars,” says Ms. Gilmore. “It’s very smart. There’s money to be made there.”

Frost & Sullivan forecasts a 60% compound annual growth rate in the biosimilar market between 2012 and 2019. A RAND Corp. study estimates that biosimilars could reduce spending on biologic drugs in the U.S. by $44 billion over the next decade, while Spain’s University of the Basque Country forecasts €20 billion in savings in Europe through 2020.
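To give a sense of what a 60% compound annual growth rate means arithmetically: the market multiplies by 1.6 each year, and over the seven annual steps from 2012 to 2019 that compounds to roughly a 27-fold increase. The starting market size in the sketch below is hypothetical, chosen only to make the arithmetic concrete.

```python
# Worked example of compound annual growth: a 60% CAGR means the
# market multiplies by 1.6 each year. 2012 to 2019 is seven steps.
cagr = 0.60
years = 7

growth_factor = (1 + cagr) ** years  # 1.6**7, roughly 26.8

# A hypothetical $1 billion market in 2012 would reach roughly
# $27 billion by 2019 at this rate.
market_2019 = 1.0 * growth_factor
```

Compounding is why even a few years at such a rate transforms a niche market into a major one.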

However, getting biosimilars onto the market remains a major challenge. Biologics’ complexity makes them hard to replicate because they use biological processes or living organisms to create the drugs’ molecules.

The European Union has approved only 19 biosimilar drugs since 2006, and the U.S. approved its first biosimilar, Zarxio, in March 2015. Herceptin lost its patent protection last year in Europe and will lose its U.S. patent in 2019, but no biosimilar versions of Herceptin have yet been approved in those jurisdictions, an indication of how difficult the process is.

Moreover, unlike generics, biosimilars are not much cheaper than their originals to produce. Mr. Gopinathan calculates that “the price reduction is, at most, 30%.” Health care’s great hope will still come at a price.



How Medicine Makes Sense of Big Data

Written by Catherine Bolgar


Big data is a game-changer for medical research. The ability to analyze vast sets of information, thanks to bigger and faster computers, is helping researchers to understand diseases, tease out genetic factors and spot patterns.

“More researchers are looking at big data and understanding how we can utilize [it] in a better manner,” says Ervin Sejdic, assistant professor of electrical and computer engineering at the University of Pittsburgh, U.S., and founder of its Innovative Medical Engineering Developments lab.

In the past, clinicians would collect data from patients and compare it against standard metrics, trying to spot something by looking across different patient groups. “What they’re doing is flushing out the details. But the devil lies in the details,” Dr. Sejdic says. “The details are where we start understanding things. What’s really shifting in medicine is the fact that, yes, there is data, but let’s look at whole data sets.”

At the same time, better and smaller electronics, from smartphones to sensors you can wear, can compile more information at a detailed level and over bigger populations. “Researchers are looking at the interactions between different physiological systems. Sometimes these interactions break down in people with various diseases. Sometimes you have to look at the level of a minute, or an hour, or a day,” Dr. Sejdic says. “What big data is going to enable us to do is finally look at a human system as a system, rather than as individual components put together.”

Big data also is helping doctors and researchers to view diseases in shades of gray, rather than with a purely black-and-white outlook.

“In the past, diseases were viewed in a simplistic way: a person is healthy or a person has a disease. We would get specific information about the two states and compare the difference,” says Sergei Krivov, research fellow at the University of Leeds, U.K., who recently published research on the monitoring of kidney-transplant patients using big data techniques.

With transplants, he says, “There are two outcomes: perfect or problems. We are trying to find a single parameter to describe where you are between these two stages and what the prognosis is.” Based on the indicator, doctors can decide at an earlier stage whether to intervene in the process.

“What I would like to see in the future is the following picture,” Dr. Krivov says. “A sizable part of the population frequently gives blood for analysis, for example during regular visits to their doctors. This would go to a data center. Based on this data over five or 10 years, we could determine indicators describing the degree of progression, or the likelihood of occurrence, of different diseases. We would give back this information as numbers, which are easy to interpret. This, in turn, will encourage patients to participate.”

One indicator patients might get with this approach is their biological age. “So you’re 30 years old, but your biological age is 20—or 40,” Dr. Krivov says. “Changes in your diet, exercise or lifestyle affect biological age. You might get younger, biologically. That would be reinforcement to the patient that he or she is doing well.”

Some recent uses of big data include predicting the future of metabolic syndrome, advancing neuroscience, identifying dangerous pathogens, and conducting cancer research, among many others. DNA sequencing is getting cheaper thanks to big data, and genetic sequencing with big data is becoming a key part of epidemiology, because it helps trace chains of infection. Big data is helping researchers not only to understand the different genetic mutations in cancer, but also to personalize medicine: different mutations respond differently to treatments, and getting the right treatment straight away spares patients from side effects of treatments that aren’t effective for their particular kind of cancer.

However, challenges remain for big data to reach its full potential of analyzing many kinds of information from many patients. With computers, it’s “garbage in, garbage out,” so data needs to be structured to ensure consistency. Information often isn’t shared because organizations lack procedures or systems for communication. Advances in technology are helping to overcome some of those challenges, according to “The ‘Big Data’ Revolution in Healthcare,” a study by McKinsey & Co.

Big data is still a work in progress in medicine. “If a certain number of people have a disease, the task of searching for them will take minutes instead of days,” Dr. Sejdic says. “But for other things, it will still take days because you need to develop software first for analyzing the data.”

Too much data can be a problem, too. “When you know what you want to find out, it’s a much easier problem,” he says. “But if you’re looking for new patterns, it’s more of a fishing expedition. Whenever we do clinical trials, we are flushing out the details. There’s so much information that it’s hard to track it. Until we do that, we won’t have a good understanding. The major change will occur in the next 10 to 15 years.”

