Personalized Healthcare: The New Era


In medicine, the advent of personalized healthcare is showing that “one size” does not necessarily “fit all.” Specifically, personalized healthcare refers to the ability to use an individual’s genetic characteristics to diagnose his or her condition with greater precision. With this information, physicians can select treatments that have higher chances of success and lower risks of adverse reactions. Personalized medicine does not just imply better diagnostics and therapeutics, however: it also encompasses the ability to better predict a given individual’s susceptibility to a particular disease, and thus to devise a comprehensive plan to avoid the disease or reduce its severity.1 The advent of personalization in healthcare has brought a preventative aspect to a field that has traditionally employed a reactive approach,2 in which patients are generally diagnosed and treated only after symptoms appear.

Medicine has always been personalized: treatment is tailored to individuals following examination. The new movement to personalize medicine, however, takes this individualization to another level. The initial human genome sequence was reported by the International Human Genome Sequencing Consortium in 2001; scientists can now probe human physiology and evolution in unprecedented detail, creating a genetics-based foundation for biomedical research.3 Genes help determine an individual’s health, and scientists can better identify and analyze the causes of disease based on genetic polymorphisms, or variations. This scientific advancement is an integral factor in the personalized healthcare revolution, as are technological developments that allow the human genome to be sequenced quickly and at relatively low cost.2

The science behind such personalized treatment plans and prediction capabilities follows simple logic: scientists can build a reference guide by identifying and characterizing genomic signatures associated with particular responses to chemotherapy drugs, such as sensitivity or resistance. They can then use these patterns to understand the molecular mechanisms that produce such responses and categorize genes based on the underlying pathways and mutations.4 Physicians can then compare the genetic makeup of a patient’s tumor to these libraries of information. This genetic profiling matches patients with successfully treated individuals who carry similar polymorphisms, providing treatment that improves the accuracy of predictions, minimizes adverse reactions, and reduces unnecessary follow-up treatments. For example, efforts are underway to create individualized cancer therapy based on molecular analysis of patients. Traditionally, prediction of cancer recurrence has been based on empirical lessons learned from past cases, looking specifically at metrics such as tumor size, lymph node status, response to systemic treatment, and remission intervals.4 While this type of prediction has merit, it only provides generalized estimates of recurrence and survival; patients with little or no risk of relapse are often put through potentially toxic chemotherapy. In the new age of personalized medicine, additional inputs such as protein profiles and maps of dysfunctional molecular pathways will allow physicians to predict the behavior of a patient’s tumor on a whole new level. Personalized oncologic treatment can plot the clinical course for each patient based on his or her own biology rather than on generalizations from a heterogeneous sample of past cases. This type of healthcare thus improves upon current medicine by defining homogeneous subgroups within past cases, allowing physicians to make more accurate predictions of an individual’s response to treatment.
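To make the matching step concrete, here is a minimal sketch in Python; the signature library, gene variant names, similarity measure, and predicted responses are hypothetical illustrations, not data from the studies cited above.

```python
# Hypothetical illustration of signature matching for personalized oncology.
# Gene names, signatures, and outcomes are invented for this sketch.

# Library of genomic signatures with the treatment responses observed in
# previously treated (hypothetical) patient groups.
SIGNATURE_LIBRARY = {
    "signature_A": {"variants": {"TP53_R175H", "KRAS_G12D"}, "response": "resistant to drug X"},
    "signature_B": {"variants": {"BRCA1_185delAG", "PIK3CA_E545K"}, "response": "sensitive to drug Y"},
    "signature_C": {"variants": {"EGFR_L858R"}, "response": "sensitive to drug Z"},
}

def jaccard(a, b):
    """Similarity between two variant sets (size of overlap / size of union)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def best_match(patient_variants):
    """Return the library signature most similar to the patient's tumor profile."""
    name, entry = max(SIGNATURE_LIBRARY.items(),
                      key=lambda item: jaccard(patient_variants, item[1]["variants"]))
    return name, jaccard(patient_variants, entry["variants"]), entry["response"]

if __name__ == "__main__":
    patient = {"KRAS_G12D", "TP53_R175H", "APC_R1450X"}  # hypothetical tumor profile
    sig, score, predicted = best_match(patient)
    print(f"Closest signature: {sig} (similarity {score:.2f}) -> predicted {predicted}")
```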

Additionally, personalized medicine can prevent medical maladies such as adverse drug reactions, which lead to more than two million hospitalizations and 100,000 deaths per year in the U.S. alone.5 It can also lead to safer dosing and more focused drug testing. However, this approach is hindered by the nascent nature of genomics technology and the difficulty in identifying all possible genetic variations. Particularly challenging are cases where certain drug reactions result from multiple genes working in conjunction.6 Furthermore, opponents of gene sequencing argue that harnessing too much predictive information could be frightening for the patient. For example, patients shown to possess a genetic predisposition towards a degenerative disease such as Alzheimer’s could experience serious psychological effects and depression due to a sense of fatalism; this knowledge could adversely impact their motivation to reduce risks. This possibility has been demonstrated in clinical studies regarding genetic testing for familial hypercholesterolaemia, which measures predisposition to heart disease.7 This dilemma leads back to a fundamental question of gene sequencing—how much do we really want to know about our genetic nature?

As of today, personalized medicine is starting to make its mark through some commonly available tests such as the dihydropyrimidine dehydrogenase test, which can predict if a patient will have severe, sometimes fatal, reactions to 5-fluorouracil, a common chemotherapy medicine.8 Better known are the genetic tests for BRCA1 and BRCA2 mutations that reveal an increased risk of breast cancer,9 popularized by actress Angelina Jolie’s preventative double mastectomy. With these and other such genetics-based tests, the era of personalized medicine has begun, and only time can reveal what will come next.

References

  1. Center for Personalized Genetic Medicine. http://pcpgm.partners.org/about-us/PM (accessed Oct 24, 2013).
  2. Galas, D. J.; Hood L. IBC. 2009, 1, 1-4.
  3. Venter, J. C. et al. Science 2001, 291, 1304-1351.
  4. Mansour, J. C.; Schwarz, R. E. J. Am. Coll. Surgeons 2008, 207, 250-258.
  5. Shastry, B. S. Nature 2006, 6, 16-21.
  6. CNN Health. http://www-cgi.cnn.com/HEALTH/library/CA/00078.html (accessed Oct 24, 2013).
  7. Senior, V. et al. Soc. Sci. Med. 1999, 48, 1857-1860.
  8. Salonga, D. et al. Clin. Cancer Res. 2006, 6, 1322.
  9. National Cancer Institute Fact Sheet. http://www.cancer.gov/cancertopics/factsheet/Risk/BRCA (accessed Oct 24, 2013).


The Hyperloop: A Push for the Alternative is Real


The drive down CA I-5 from San Francisco to L.A. takes six hours. That’s six grueling hours of negotiating the horrors of urban traffic, trying to stay awake over vast stretches of monotonous Central California farmland, and watching your gas tank guzzle your hard-earned dollars. A plane ride? A nonstop flight would take only an hour and a half, but that’s without considering the time it takes to pass through security or the price of checking in any luggage. In an age where a thought can be sent half-way around the world in mere seconds, this commute is one emphatic no-no to the time-strapped millennial American. Imagine an alternative. What if there was a way to travel this same distance in just half an hour—and for only twenty dollars a ride?

Say hello to the hyperloop.

In the summer of 2013, entrepreneur Elon Musk suggested an alternative method of travel: the “Hyperloop,” a proposed form of high-speed transportation with the potential to reach 760 mph.1 To put things into perspective, Japanese maglev (magnetic levitation) trains have maxed out at 361 mph, the commercial Boeing 747 cruises at an average of 570 mph,2 and the speed of sound is 767 mph. On top of it all, while the California High-Speed Rail Project is currently asking for over $68 billion, Elon Musk estimates that constructing the Hyperloop would cost only $6 billion.
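For a rough sense of scale, the back-of-the-envelope sketch below converts those quoted cruise speeds into idealized trip times over an assumed route length of about 380 miles; it ignores acceleration, boarding, and security, so the figures are lower bounds rather than schedules.

```python
# Back-of-the-envelope trip times for an assumed ~380-mile SF-to-LA route.
# Speeds are the figures quoted above (the car speed is an assumed highway
# average); acceleration, boarding, and security time are ignored.
ROUTE_MILES = 380  # assumed driving distance, not an official figure

cruise_speeds_mph = {
    "car (assumed I-5 average)": 65,
    "Japanese maglev (record)": 361,
    "Boeing 747 (cruise)": 570,
    "Hyperloop (proposed peak)": 760,
}

for mode, mph in cruise_speeds_mph.items():
    minutes = ROUTE_MILES / mph * 60
    print(f"{mode:28s} ~{minutes:5.0f} min at {mph} mph")
```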

Subsonic speeds at one-tenth the cost of the California High-Speed Rail Project: this audacious claim is not a first for Elon Musk, the man behind the Hyperloop. Take a look at his track record as a co-founder of PayPal and as CEO of SpaceX and Tesla: today, the online payment service PayPal is ubiquitous; SpaceX is developing crewed spacecraft and already delivers cargo to the International Space Station; and Tesla’s stock trades at over $170 a share. Clearly, Elon Musk has been the impetus behind phenomenal and successful ideas. Still, his history of successful ideas does not guarantee that the Hyperloop would be a successful project. Furthermore, Musk has publicly announced that he will not be developing the Hyperloop himself. He has, however, released a 57-page report detailing his ideas, freely available for anyone to view online. Based on this information, let’s take a look at how this project might be able to work.

A brief technical breakdown

If the Hyperloop sounds like something straight out of science fiction, that impression is not far from the truth: authors have been writing about systems very much like it since the early 1900s. Beyond its basic design, the technology proposed for the Hyperloop has also been around for many years.

The core technology behind the Hyperloop is pneumatic action, which relies on compressed air to induce movement. Starting in the mid-to-late 19th century, numerous major cities in the U.S. and other countries relied heavily on massive underground networks of pneumatic tubes to send mail. Although that application has been replaced by email, pneumatic action is still at work in post offices and hospitals today. Pneumatic action is also integral to air hockey tables, which use air to reduce friction. In fact, Elon Musk described the Hyperloop as a cross between a railgun, a Concorde, and an air hockey table.1

More precisely, the Hyperloop is a closed-tube transportation system that provides uninterrupted traffic in both directions: picture small, energy-efficient pods traveling inside elevated tubes. The project differs from existing infrastructure developments in three main ways, combining a partially evacuated tube, pressure-regulating pods, and elevated concrete pylons.1 Though none of these features is new, their combination is indeed novel.

To carry passengers inside the tubes, two distinct pod systems have been proposed: an all-passenger version with 28 seats, and a larger version that would carry three automobiles along with their passengers. In either case, each pod would have a pressurized cabin with a backup air supply and oxygen masks in case of emergency, much like an airplane.1

The pods would travel very fast, up to subsonic speeds, thanks to a nearly frictionless journey.1 The steel tubes play an important role by maintaining a partial vacuum, ideally about one-sixth of the atmospheric pressure on Mars. Rather than relying solely on electromagnetism for the entire trip, the design also reduces friction using electric compressor fans attached to the traveling pods.

The electric compressor fans help overcome the Kantrowitz limit, a restriction on mass flow set by the size ratio between pod and tube. If the gap between a pod and the interior of the tube is too small, the Hyperloop will behave like a syringe, with the pod pushing the entire column of air ahead of it. The compressor fans address this problem by drawing in high-pressure air at the front and pumping it to the rear, while also creating a cushion of air underneath the pod.1
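The sketch below illustrates the choking constraint behind the Kantrowitz limit using the standard isentropic area-Mach relation for air. It is a simplified, idealized calculation (steady flow, no compressor assistance), and the Mach numbers are sample values rather than figures from the proposal; it estimates how little of the tube's cross-section a pod may block before the bypass air chokes, which is why a compressor becomes necessary at high speed.

```python
import math  # not strictly needed here, but handy for extensions

GAMMA = 1.4  # ratio of specific heats for air

def area_ratio(mach: float) -> float:
    """Isentropic A/A*: duct area over sonic-throat area for flow at a given Mach number."""
    return (1.0 / mach) * ((2.0 / (GAMMA + 1.0)) *
            (1.0 + (GAMMA - 1.0) / 2.0 * mach**2)) ** ((GAMMA + 1.0) / (2.0 * (GAMMA - 1.0)))

def max_blockage(mach: float) -> float:
    """Largest fraction of the tube a pod can block before the bypass gap chokes
    (idealized: steady, isentropic flow with no compressor moving air past the pod)."""
    return 1.0 - 1.0 / area_ratio(mach)

for mach in (0.3, 0.5, 0.7, 0.9):
    print(f"Mach {mach:.1f}: pod may block at most ~{max_blockage(mach) * 100:4.1f}% of the tube area")
```

As the printout shows, the allowable blockage shrinks toward zero as the pod approaches Mach 1, so at subsonic cruise the pod must either leave a large bypass gap or actively pump air past itself.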

To lower energy costs and environmental impact, the bulk of the system would be powered by solar energy, split between charging the pod batteries and running the rest of the system. Musk’s plan would accelerate the pods out of their stations using magnetic linear accelerators resembling railguns;1 once in the main travel tubes, the pods would receive periodic boosts from external linear electric motors similar to those in the Tesla Model S.

Finally, in order to lower the cost of construction, the partially-evacuated tubes would be elevated on concrete pylons. This type of system mitigates damage caused by earthquakes and reduces maintenance costs.1 Additionally, elevation would allow the Hyperloop to travel without interruption, bypassing ground traffic, farmland, and wildlife. By building on pylons, the system could also follow the California I-5 highway, reducing the need for expensive land acquisition suits.

Again, these suggested attributes of the Hyperloop already exist in one form or another; they simply have not yet been combined in this manner. For instance, much of the federal highway system uses concrete pylons to elevate interwoven freeways. The subways of New York City and Chicago are enclosed systems, but they do not run on cushions of air. Maglev trains float above their guideways without touching them, but they are not enclosed.

So what does this mean?

All in all, the scientific community generally agrees that the Hyperloop is technically feasible. However, Elon Musk’s proposal comes with many caveats, the most significant of which are economic and political. Critics have claimed that the idea is overly conceptual and relies on unrealistic cost estimates. The estimates are particularly contentious because the Hyperloop proposal is reminiscent of the California High-Speed Rail, a project whose initial estimates were far lower than its current budget: in 2008, state voters approved $9.91 billion for the California High-Speed Rail Project, and by April 2012 estimates had reached $91.4 billion, nearly ten times the original figure, before public outcry triggered a budget revision.

However, the Hyperloop manages to avert many of the same issues plaguing the California rail project. One benefit of building on concrete pylons is a reduction in land acquisition, currently the most expensive and politically contentious issue affecting transportation projects. Another benefit of the project is an independence from federal funding. Without a reliance on bond measures or public coffers, the Hyperloop would not need to build in potentially unprofitable areas to keep policymakers satisfied. Free from political strings, it could focus on creating a profitable, sustainable venture.

Regardless of the advantages, it seems unlikely that construction on the Hyperloop following the proposed route between San Francisco and Los Angeles would begin during the construction of the California High-Speed Rail. This is unfortunate, as a route between L.A. and San Francisco guarantees a customer base. Furthermore, city pairs that are 120 to 900 miles apart are ideal for rail or similar transportation, as this distance is too far to comfortably travel by car and yet too close to efficiently travel by plane.7 L.A. and San Francisco fit perfectly in that niche.

Elon Musk has not made any plans to develop the Hyperloop himself, leading some to suspect that the entrepreneur may be using it as an elaborate red herring. One critic has pointed out that the California High-Speed Rail would compete directly with automobiles in Silicon Valley, one of Tesla’s target market demographics. Could Musk be trying to derail the already tenuous rail initiative in order to reduce economic competition? Interestingly, after the official Hyperloop press release, local politicians in the Silicon Valley area lobbied to keep the California High-Speed Rail out of their constituencies. Was this mere coincidence?

Regardless, the most valuable attribute of the Hyperloop project is its status as an open-source engineering concept open to privatization. Using the same basic principles, one enthusiast has already built a working miniature prototype that hovers on a cushion of air. Autodesk, a 3D design software company, has produced realistic promotional renderings of the Hyperloop, and the company ET3 is working on similar evacuated-tube designs.3 In this day and age, crowdfunding platforms like Kickstarter and microfunding organizations like Kiva could potentially be used to finance construction. In the Netherlands, the company Windcentrale raised over 1.3 million euros in only thirteen hours from 1,700 Dutch households eager for a local wind turbine;5 money for the Hyperloop could be raised in a similar fashion.

Whatever the case, one thing is clear: even if the Hyperloop remains a conceptual project, the idea has opened an important dialogue about the future of mass transportation in America. It has brought attention to the inefficiencies of the California High-Speed Rail Project and renewed interest in technological advances for public transportation. The Hyperloop might also nudge the U.S. toward privatized or crowdfunded infrastructure. Perhaps most importantly, the buzz around the Hyperloop has reaffirmed the need for alternative transportation.

References

  1. Musk, Elon. Hyperloop Alpha. https://www.teslamotors.com/sites/default/files/blog_images/hyperloop-alpha.pdf (accessed Oct 20, 2013).
  2. 747 Family. http://www.boeing.com/boeing/commercial/747family/background.page (accessed Nov 4, 2013).
  3. “The Evacuated Tube Transport Technology Network.” ET3, 24 Oct. 2013. <http://et3.net/>.
  4. The California Energy Commission. California Gasoline Statistics & Data. Energy Almanac. Web. 30 Oct. 2013. <http://energyalmanac.ca.gov/gasoline/>
  5. “Dutch Wind Turbine Purchase Sets World Crowdfunding Record.” Renewable Energy World. Ed. Tildy Bayar. 24 Sept. 2013. Web. <http://www.renewableenergyworld.com/rea/news/article/2013/09/dutch-wind-turbine-purchase-sets-world-crowdfunding-record>.
  6. “2012 Annual Urban Mobility Report.” Urban Mobility Information. Texas A&M Transportation Institute. Web. 11 Oct. 2013. <http://mobility.tamu.edu/ums/>.
  7. “Competitive Interaction Between Airports, Airlines, and High Speed Rail.” Joint Transport Research Centre. Paris. Discussion Papers.7 (2009): 20. 8 Dec. 2013.
  8. Central Intelligence Agency. The World Factbook. n.d. Web. 18 Nov. 2013. <https://www.cia.gov/library/publications/the-world-factbook/>.
  9. Levinson, David. “Density and Dispersion: The Co-development of Land Use and Rail in London.” Journal of Economic Geography, 8 (1), 55-77. doi:10.1093/jeg/lbm038. 2007.



Electronic Foil: Blurring the Interface Between Computers and Bodies


What if computer technology were integrated into our bodies? Think of all the possibilities: built-in health monitoring systems, implanted communication devices. These seemingly far-off advances are exactly what materials scientists are currently trying to achieve.

In a landmark development, researchers have successfully designed electronic interfaces that can be fully integrated into the human body. Dubbed “electronic foils,” these circuits conform to any surface, giving them the unique ability to follow a moving body part. The electronics, which can be stretched, bent, and crumpled, may someday become as common as plastic wrap.1 Such wearable technologies have potential applications ranging from medical diagnostics and wound healing to video game control. Electronic skin has been shown to monitor patients’ health measurements as effectively as conventional, state-of-the-art electrodes, which require bulky pads, straps, and irritating adhesive gels.2

Traditional electronics are hard and unyielding, typically making them unsuitable for biological applications.1 However, developments in materials science are transforming the way we think about electronics. Dr. John Rogers, an engineer at the University of Illinois at Urbana-Champaign, studies the characteristics and applications of “soft materials,” which bend and stretch rather than remaining rigid. Scientists have engineered “transient electronics,” a new class of devices that degrade completely after carrying out a designated task. The potential applications of this technology are impressive. For example, Rogers and his team successfully embedded heat-sensitive transient circuits into the surgical wounds of mice to fight infection; the devices were then absorbed into the surrounding tissue after a threshold exposure to biological fluids.3

The secret to transient electronics’ disappearing act lies in their material composition. The circuits implanted in the mice were crafted from sheets of porous silicon and magnesium electrodes packaged in enzymatically degradable silk. The packaging’s solubility in water is programmable through the addition of magnesium oxide to create a crystalline structure. By tweaking the framework, researchers can control how quickly the engineered silk dissolves, be it over a matter of seconds or several years.3 Scientists foresee applications in which these devices are placed on a wound or inside the body immediately after surgery, where they can integrate and wirelessly transmit information,1 such as heart rate, blood pressure, and blood oxygen saturation levels.3

Building on Rogers’ work, Martin Kaltenbrunner, an engineer at the University of Tokyo, has further expanded the applications of flexible electronics and demonstrated a number of uses for virtually unbreakable circuits. For example, his team built a thin transistor incorporated into a tactile sensor that conforms to uneven surfaces. Placed on the roof of the mouth, such a device could let paralyzed patients give yes-or-no answers by touching different spots of the sensor, and touch sensitivity could also help those with artificial limbs regain feeling. Kaltenbrunner’s team also built a miniature temperature sensor that adheres to a person’s finger; in the future, this simple sensor could be incorporated into an imperceptible adhesive bandage for health monitoring.3

The lines between human and machine are becoming blurred. As frightening as that seems, the impact of electronic foils on biomedical devices is a positive one: health monitoring will become commonplace, and the skin itself can serve as a signal transducer. With the arrival of these human-integrable electronics, the field of biomedicine will be completely revolutionized.

References

  1. Kaltenbrunner, M. et al. Nature 2013, 499, 458-463.
  2. Kim, D. et al. Science 2011, 333, 838-843.
  3. Hwang, S. et al. Science 2012, 337, 1640-1644. 


How to Live Forever


Despite common debate over its desirability, immortality has been an object of fascination for humans since the beginnings of recorded history. Why do we die in the first place? Religious and philosophical explanations abound; evolution also provides important insights, and we are just beginning to understand the detailed molecular underpinnings of aging. With knowledge, of course, comes application. Whether or not you seek immortality, the technology for significant life extension may become available in our lifetimes.1 Eventually, with these new advances, every year we live will add more than a year to our lifespans; that is the point at which we become immortal.

Immortality is a more complex concept than many realize. The dictionary defines it simply as “the ability to live forever.”2 But what if you lived forever as an extremely old man or woman, physically and mentally frail? That is not the image usually associated with eternal life. Immortality, then, would be desirable only if it came hand-in-hand with another important concept: eternal youth. A different notion of immortality is embodied by Superman or the mythic Greek hero Achilles: invulnerability. By definition, however, invulnerability is not an essential feature of immortality. The real obstacle to immortality is not freak accidents or acts of violence but aging, which robs us of both youth and resistance to disease. In fact, no one dies purely from the aging process itself; death is caused by one of many age-related complications.

Eliminating aging is thus synonymous with achieving immortality. The first step in this direction, of course, is to understand the aging process and how it leads to disease. The best answer to this question has been offered by Cambridge biogerontologist Aubrey de Grey. De Grey argues that aging is the accumulation of damage as a result of the normal, essential biological processes of metabolism.2 This damage accumulates over the course of our lifetimes and, once it passes a critical threshold, leads to pathological symptoms. The field of biogerontology mainly focuses on understanding the processes of metabolism in the hopes of preventing accumulation of damage. Geriatrics is a related specialty that focuses on mitigating the symptoms of age-related disease. De Grey points to the enormous complexity of understanding either process and offers an alternative: identifying and directly dealing with the damage.3

What types of damage does this entail? To begin, it is essential to understand that the body is a collection of trillions of cells, whose health directly translates to the wellbeing of our bodies. Aging is caused by the deterioration of these cells, which normally destroy and recycle damaged components to prevent their accumulation over time. De Grey identifies seven categories of damage that lead to aging. Two are mutations of DNA, the molecule that stores our genetic information. Two are accumulations of molecules that our cells have lost the ability to destroy. One is an accumulation of crosslinks in the proteins between our cells, causing our tissues to become stiff and brittle. Another is the loss of irreplaceable cells, such as those in the heart or brain. The final category is an accumulation of death-resistant cells that damage our bodies. De Grey has proposed Strategies for Engineered Negligible Senescence (SENS) for repairing each source of damage. Some of these strategies, such as stem cell therapy, are theoretical and unproven; others, like gene therapy, are modeled after pharmaceuticals that have already gone through clinical trials. De Grey’s SENS are innovative and radical by the standards of the medical and scientific community, leading many to question their viability.

The first SENS therapies will not be perfect. They will eliminate enough damage to keep us below the threshold of developing age-related diseases for a few extra decades, but they will leave even more stubborn forms of damage behind. A few decades, however, is a long time for modern science. By the time our bodies start to show signs of aging, more effective therapies will be available. This process would continue indefinitely.

The fundamental weakness of SENS is that it relies on keeping an imperfectly understood biological organism functioning long after it was ever designed to. The alternative is to move out of our homes of flesh and blood and into new territory: electronics. For our bodies, this seems relatively straightforward; while fully functioning humanoid robots are far from perfect, it is not a great leap to assume that they will eventually be as capable as, if not far more capable than, human bodies, certainly by the time SENS therapies would begin to wear out. Transferring our minds to an electronic medium poses far greater challenges. Amazingly, progress in this direction is already under way. Many scientists believe the first step is to map the synaptic circuits that connect the neurons in our brains.4 Uploading this map into a computer, along with a model of how neurons function, would theoretically recreate our consciousness inside the machine. The mapping and simulation efforts have already started with programs such as the Blue Brain Project and Obama’s BRAIN Initiative; the former has already succeeded in modeling an important circuit that occurs repeatedly in the mouse brain.5

Transferring our minds to computers would mean that any damage could be reliably repaired, making us truly immortal. Interestingly, the switch would also fulfill many other ambitions. Our mental processes would be significantly faster. We could upload our minds into an immense information cloud, powerful robots, or interstellar cruise vessels. We could fundamentally alter the architecture of our minds, eliminating archaic evolutionary vestiges (such as our propensity toward violence) and endowing ourselves with perfect memories and vast intelligence. We could store and reload previous versions of ourselves. We could create unlimited copies of ourselves, bringing us as close to invulnerability as we may ever get.6

While you may never have seriously considered the idea that you might be able to live forever, it is theoretically possible; technologies for radical life extension are currently in development. Whether such advances reach the market in our lifetimes depends in large part on the level of public support for key research. Although the hope of living forever comes with the risk of disappointment, keep in mind that efforts toward achieving immortality will extend, if not your lifespan, then that of your children and future generations.

References

  1. Kurzweil, R. The singularity is near: when humans transcend biology. Penguin Books: New York, 2006.
  2. Oxford Dictionaries. http://www.oxforddictionaries.com/us/definition/american_english/immortality (accessed March 13, 2014).
  3. De Grey, A. D. ; Rae, M. Ending aging: the rejuvenation breakthroughs that could reverse human aging in our lifetime. St. Martin’s Griffin: New York, 2008.
  4. Morgan, J. L.; Lichtman, J. W.  Nature Methods 2013, 10, 494–500.
  5. Requarth, T. http://www.nytimes.com/2013/03/19/science/bringing-a-virtual-brain-to-life.html (accessed March 13, 2014).
  6. Hall, J. S. Nanofuture: What’s Next for Nanotechnology. Prometheus Books: New York, 2005.


Optimization of an Untargeted Metabolomic Platform for Adherent Cell Lines


ABSTRACT

Metabolites are the end products of cellular processes. Metabolomics is the study of these small molecules, such as sugars, lipids, and amino acids, using analytical techniques including liquid chromatography coupled to mass spectrometry (LC-MS) and nuclear magnetic resonance (NMR). These instruments can measure differences between disease states and drug treatments as well as other physiological effects on cells. Ideal sample preparation is highly precise, accurate, and efficient, with minimal metabolite loss; however, there is currently no universal method for isolating and analyzing metabolites efficiently. Additionally, there is a lack of effective normalization strategies in the field of metabolomics, making comparison of results from different cell types or treatments difficult. The traditional method for data normalization uses protein concentration, but the buffers used to extract metabolites effectively are not compatible with proteins, leading to protein precipitation and erroneous protein concentration measurements. Cellular imaging would provide the best normalization for cell number but is time-consuming. We therefore conducted a study comparing alternative filtration and normalization methods during sample preparation. Two types of filtration plates were analyzed for a variety of compound classes, features, and levels of sensitivity. Data normalization was tested by linearly increasing cell number and determining whether protein concentration, cell debris weight, or DNA concentration increased proportionally with cell number. Sample preparation was found to be more efficient using protein precipitation plates than phospholipid removal plates during lysate filtration. After testing several techniques to normalize data by cell number, DNA extraction from the cell debris gave the most time-efficient, consistent, and reproducible results.

INTRODUCTION

Metabolomics is the study of the small molecules, such as lipids, amino acids, and sugars, present within the cell.1,2 This field gives insight into cellular phenotype and function.3 Untargeted metabolomics is a common approach that aims to rapidly and reproducibly identify as many small molecules as possible at a given point in time using high-resolution, high-mass-accuracy mass spectrometry.1-4 Changes in metabolites in response to variations in environment or drug treatment can give insight into what is happening within the cell, and detecting these changes can help identify possible mechanisms of oncogenesis.1

Despite the potential of metabolomics to provide new insights into cancer, it suffers from a lack of robust methodology. For example, at the time of this writing, the largest number of metabolites reported for a single study is approximately 400, whereas the number of metabolites in the human metabolome is estimated to be on the order of 40,000.2 Since the amount and types of measured metabolites are largely influenced by the method chosen for sample preparation, one focus of this report is sample preparation-related factors that could improve both the number of measured metabolites and the accuracy or precision with which particular metabolites are measured.2

Ideal sample preparation is reproducible, fast, simple, and free of metabolite loss or degradation during extraction.2 During collection, a poor signal-to-noise ratio is problematic, but there are several ways to improve it. One method focuses on protein precipitation: metabolites are extracted from cells using solvents that quench metabolism, and the extract is filtered to remove protein and cell debris.5 Using a different approach, Ferreiro-Vera et al. found that solid-phase extraction (SPE) to remove phospholipids yielded three- to four-fold higher metabolite signals than liquid-liquid extraction (LLE).5 We chose to focus on SPE because of this significant increase in metabolite signal and its simpler workflow.

Another problem associated with metabolomic data is the absence of robust methods for data normalization. Because raw metabolite measurements exhibit variation attributable to a number of sources, identification of a normalizer capable of reducing or eliminating the effects of such variation is important. Typically, total protein content or cell numbers are used.6 However, measuring total protein is not straightforward, as the buffers needed to extract metabolites are not compatible with proteins; for example, methanol in the buffers causes protein precipitation. While cell imaging as a surrogate for quantifying cell number is a good option, it is time-consuming and therefore not preferred because of the time-sensitivity of several drug treatments.6 Thus, an alternative, faster method that employs the same sample for both metabolite extraction and data normalization is needed.

Mass spectrometry (MS) was preferred over NMR spectroscopy because MS offers greater sensitivity, selectivity, and range of identifiable metabolites.11 One weakness of MS relative to NMR, however, is that the sample cannot be recovered. MS allows metabolites to be identified by comparing a molecule's mass, tandem mass spectrometry (MS/MS) spectrum, and retention time against reference data.11

The overall objectives of our research were to improve metabolomic data quality by 1) identifying a robust surrogate for the cell number method to facilitate data normalization and 2) optimizing sample preparation methods. Two different sample filtration strategies were tested—one using a protein precipitation plate and a second using a new phospholipid-depleting filter plate. Additionally, several data normalization strategies were evaluated. Finally, we applied the resulting optimizations to investigate the mechanism of action of L-asparaginase through its metabolic response.

MATERIALS AND METHODS

Cells Lines

OVCAR-8 and OVCAR-4 are ovarian cancer cell lines from the ovarian subpanel of the NCI-60, one of the most highly profiled sets of cell lines in the world. The cells are unlimited in supply and easily obtained and manipulated; OVCAR-8 cells are sensitive to L-asparaginase, while OVCAR-4 cells are resistant.10 The U251 central nervous system line and the MDA-MB-435 melanoma line were also used.9 All cells were routinely maintained in RPMI-1640 medium containing 5% fetal bovine serum (FBS) and 1% (2 mM) L-glutamine. Cultures were grown at 37°C in an atmosphere of 5% CO2 and 90% relative humidity.

Sample Preparation

Samples were prepared by filtration of the supernatant, vacuum-assisted evaporation to dryness, and reconstitution of the dried sample. Cell samples in 10 cm dishes were placed on dry ice and washed twice with 4-5 mL of 60% CH3OH in HPLC-grade H2O containing 0.85% (NH4)HCO3, then extracted using 500 µL of CH3OH:CHCl3:H2O (7:2:1) buffer.7,8 The cells were homogenized for 30 s to lyse them and then centrifuged at 20,000 × g at 4°C for 2 min. The lysate was removed and loaded onto the protein removal plate, while the remaining cell debris was saved. To prepare for DNA extraction, the cell debris was air dried, because speed-vacuum drying makes it difficult to fully resolubilize the pellet.

Filtration

A phospholipid removal plate was used because Ferreiro-Vera et al. suggested it as a better way to effectively remove noise- or suppression-imparting metabolites.5 We compared the protein removal plate to the phospholipid removal plate. Each well was filled with 500 µL of fresh CH3OH containing 10 µM xylitol, and 200 µL of the lysate was then added to each well. After using a vacuum to filter the lysate through the plates, the samples were dried in a speed vacuum and later reconstituted and analyzed on a hybrid ion trap MS.

Metabolite Analysis

Samples stored at -80°C were reconstituted in 50 µL of mobile phase A (deionized H2O + 0.1% formic acid (FA)) immediately prior to analysis by LC-MS. The LC parameters were as follows: 5 µL of metabolomic sample was injected onto a C18 column (1.8 µm particle size, 2.1 × 50 mm), and analytes were eluted using a 25 min linear gradient from 100% H2O + 0.1% FA to 100% ACN + 0.1% FA. Analytes were detected in positive ion MS mode, and data were analyzed using XCMS Online software (Scripps Research Institute, La Jolla, CA). Averaged metabolite intensities for identified compounds were then plotted against cell number.

DNA Extraction

As a surrogate protocol for counting cell number, total genomic DNA was extracted and DNA concentration was quantified using a DNA extraction kit from IBI Scientific, following the Red Blood Cell Preparation protocol. After the cell pellets were completely air dried at room temperature, each pellet was reconstituted in 150 µL of lysis buffer. The pellets were then stored at room temperature in the buffer for at least 24 h to allow for maximum resolubilization. DNA preparation was performed by adding GB buffer, shaking vigorously, and incubating at 60 °C for at least 10 min to allow for complete solubilization of the pellet. Afterward, the solution was treated with 100% C2H5OH and transferred to a GD maxi column. For large cell numbers, the solution was split between two columns to avoid saturation of the column. The columns were centrifuged, washed twice with wash buffer, and eluted with 50 µL of elution buffer.

RESULTS

Filtration

A phospholipid removal plate was evaluated against a protein precipitation filter plate to determine which plate yielded the higher metabolite intensities and the greater variety of compound classes. As shown in Figure 1, both filtration plates yielded similar intensities for most of the known metabolites. However, for the polyamines putrescine and spermidine (Figure 1C), the protein removal plate significantly outperformed the phospholipid removal plate. Only one metabolite, glucosamine, displayed greater signal intensity with the phospholipid removal plate (Figure 1B). This result suggests that the phospholipid removal plate may be useful for future analysis of other carbohydrates.

Protein Concentration and Cell Debris Mass

To normalize for cell number, we first tried measuring protein concentration. As shown in Figure 2, extraction of proteins with lysis buffer (Figure 2A) worked well compared with extraction with CH3OH/acetonitrile (Figure 2B), which gave low protein concentrations; both methods were compared for vehicle- and L-asparaginase-treated samples. Seeking better results, we next tried normalizing by cell debris mass. Figure 3 shows that at low cell numbers, cell debris mass increased only slightly with cell number, whereas at higher cell counts the mass increased sharply. Neither cell debris mass nor protein concentration produced a linear relationship with cell number under the conditions and buffers required for metabolomics (Figures 2B and 3).

DNA

DNA concentration was measured for four different cell lines to evaluate its association with cell number. First, cell dilutions ranging from 0.25 to 4 million cells/dish were seeded and processed to determine DNA concentration. Figure 4 illustrates a positive linear correlation between cell number and DNA concentration for all four cell lines investigated. Cell number was then tracked by DNA concentration throughout a 48-hour incubation period, with DNA concentration recorded at 0, 4, 8, 24, and 48 hours. Analysis of the OVCAR-8 and OVCAR-4 cell lines suggests that a greater number of cells yielded faster growth (Figure 5). However, because cell doubling was not seen within the first 24 hours, these data suggest a slight lag phase in growth after feeding. This observation suggests that proximity between cells affects their growth (i.e., some lines grow faster once confluence exceeds a threshold level).

To test this hypothesis, OVCAR-4 cell growth was monitored by cell count (Figure 6) and DNA concentration (Figure 5B). Our results were consistent with the published doubling time of 39 hours for this cell line. However, the results also indicated a lag phase in the doubling of DNA concentration (Figure 5B) and cell number (Figure 6), although the latter lag phase was subtle.11 To validate the existence of the growth lag observed in OVCAR-4 cells, we investigated cell growth by DNA concentration for two additional cell lines: U251 and MDA-MB-435. Because cells of these two lines tend to grow individually, we expected to observe no lag phase. However, as shown in Figure 5C and 5D, both lines exhibited a growth lag. This lag is smaller than that of the OVCAR-4 line but greater than the growth lag of the OVCAR-8 line, as shown by the differences in DNA concentration between the 0 and 8 hour time points.
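As a minimal illustration of how the linearity check behind Figure 4 can be performed, the snippet below fits a line to DNA concentration versus seeded cell number and reports the slope and correlation coefficient; the numbers are hypothetical placeholders, not the measurements from this study.

```python
import numpy as np

# Hypothetical example values (millions of cells seeded vs. ng/uL DNA recovered);
# the actual measurements are those plotted in Figure 4.
cells_millions = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
dna_ng_per_ul  = np.array([12.0, 23.5, 48.0, 95.0, 190.0])

slope, intercept = np.polyfit(cells_millions, dna_ng_per_ul, 1)
r = np.corrcoef(cells_millions, dna_ng_per_ul)[0, 1]

print(f"DNA ~= {slope:.1f} * (million cells) + {intercept:.1f}  (Pearson r = {r:.3f})")
# A small intercept and r close to 1 indicate that DNA concentration scales
# proportionally with cell number, i.e., it can serve as a normalizer.
```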

DISCUSSION

Metabolomic data quality can be dramatically affected by sample preparation and data normalization methods, among many other factors. Ferreiro-Vera et al. reported that removing phospholipids improved signal intensity for the majority of retrieved metabolites. Based on those results, we hypothesized that phospholipid depletion would improve metabolite signal intensity across a diverse set of metabolites extracted from the ovarian cancer cell line OVCAR-8.2 However, we found that a protein removal plate yielded greater signal intensity than the phospholipid removal plate for most metabolites; in particular, the polyamines were significantly depleted by the phospholipid removal plate. Nevertheless, the phospholipid removal plate recovered a notable amount of glucosamine, suggesting that it may be helpful for examining carbohydrate metabolites.

Initial attempts to use protein concentration and cell debris mass from metabolomics samples failed to provide a solution for normalization. Protein-based normalization was problematic because measuring the debris misses the soluble proteins: protein quantification works well with bicine/CHAPS buffer, which keeps proteins in solution, but the buffers required for metabolite extraction do not preserve them, and the methanol used for extraction precipitates the proteins, making the measured concentrations extremely low. Realizing that the cell pellet could still be put to use, we next attempted to normalize by cell debris mass. This method proved inefficient because the analytical balance was not precise enough to give reproducible weights. In the end, DNA extraction from the cellular debris proved to be an effective method of normalization: using the OVCAR-8 and OVCAR-4 cell lines, DNA concentration was shown to increase proportionally with cell number. For time course experiments, cell number can therefore be normalized using DNA extracted from the metabolomic cellular debris.
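As a purely illustrative sketch of the resulting normalization step, the snippet below divides each sample's raw metabolite intensity by that sample's DNA concentration so that samples prepared from different cell numbers become comparable; the sample names and values are hypothetical.

```python
# Hypothetical raw LC-MS intensities and DNA concentrations for three samples
# that contain different numbers of cells; values are illustrative only.
raw_intensity = {"vehicle_rep1": 1.8e6, "vehicle_rep2": 3.5e6, "treated_rep1": 1.1e6}
dna_ng_per_ul = {"vehicle_rep1": 45.0,  "vehicle_rep2": 90.0,  "treated_rep1": 44.0}

# Normalize to intensity per unit DNA (a surrogate for intensity per cell).
normalized = {s: raw_intensity[s] / dna_ng_per_ul[s] for s in raw_intensity}

for sample, value in normalized.items():
    print(f"{sample}: {value:.2e} intensity units per ng/uL DNA")
# After normalization the two vehicle replicates agree (~4e4), so their apparent
# difference in raw intensity reflected cell number rather than treatment.
```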

In conclusion, this study has shown that protein precipitation plates are superior to phospholipid removal plates for most metabolite classes. However, for sugars, it may be more advantageous to use phospholipid removal plates. Rather than manually counting cells through a microscope, DNA extraction can efficiently normalize cell number without affecting the time required for metabolomic analysis.

ACKNOWLEDGMENTS

I would like to thank Dr. John Weinstein for allowing me to have this wonderful research experience, Dr. Phil Lorenzi for supporting me and providing feedback to help make me a better scientist, and Dr. Leslie Silva for her daily guidance, advice, and support through my whole experience at MD Anderson Cancer Center. This research was supported by the University of Texas, MD Anderson Cancer Center.

References

  1. Dehaven, C.D., et al.; Organization of GC/MS and LC/MS metabolomics data into chemical libraries. J. Cheminform, 2010, 2, 9.
  2. Vuckovic, D.; Current trends and challenges in sample preparation for global metabolomics using liquid chromatography-mass spectrometry. Anal Bioanal Chem, 2012, 403, 1523-48.
  3. Jankevics, A. et al.; Separating the wheat from the chaff: a prioritisation pipeline for the analysis of metabolomics datasets. Metabolomics, 2012, 8, 29-36.
  4. Patti, G.J.; Separation strategies for untargeted metabolomics. J Sep Sci, 2011, 34, 3460-9.
  5. Ferreiro-Vera, C.; F. Priego-Capote; M.D. Luque de Castro; Comparison of sample preparation approaches for phospholipids profiling in human serum by liquid chromatography-tandem mass spectrometry. J Chromatogr A, 2012, 1240, 21-28.
  6. Cao, B. et al.; GC-TOFMS analysis of metabolites in adherent MDCK cells and a novel strategy for identifying intracellular metabolic markers for use as cell amount indicators in data normalization. Anal Bioanal Chem, 2011, 400, 2983-93.
  7. Xiao, J.F.; B. Zhou; H.W. Ressom; Metabolite identification and quantitation in LC-MS/MS-based metabolomics. Trends Analyt Chem, 2012, 32, 1-14.
  8. Lorenzi, P.L. et al.; DNA fingerprinting of the NCI-60 cell line panel. Mol Cancer Ther, 2009, 8, 713-24.
  9. Carnicer, M. et al.; Development of quantitative metabolomics for Pichia pastoris. Metabolomics, 2012, 8, 284-298.
  10. de Jonge, L.P. et al.; Optimization of cold methanol quenching for quantitative metabolomics of Penicillium chrysogenum. Metabolomics, 2012, 8, 727-735.
  11. Shankavaram, U.T. et al.; CellMiner: a relational database and query tool for the NCI-60 cancer cell lines. BMC Genomics, 2009, 10, 277.
  12. Roberts, D. et al.; Identification of genes associated with platinum drug sensitivity and resistance in human ovarian cancer cells. Br J Cancer, 2005, 92, 1149-58.


Statins May Hold the Answer to Eternal Youth


Abstract

The signs and symptoms of aging are mostly a consequence of impaired antioxidant function in the body. Statins are drugs that have recently been found to counter these age-related changes. These drugs are typically prescribed for long-term control of plasma cholesterol levels in patients with atherosclerotic coronary artery disease. Statins have been found to enhance the activity of the enzyme paraoxonase, a potent antioxidant; as a result, statins are able to alleviate manifestations of aging and effectively retard the aging process. Supplementing statins with dimercaprol and with restriction of calorie consumption may also help decelerate aging. However, use of statins frequently causes myopathy. In order to establish statins as an effective anti-aging medication, the exact pathophysiology of this side effect must be determined.

Keywords: statins, anti-aging, HMG Co-A reductase, paraoxonase, antioxidant, ROS, sulfhydryl, dimercaprol, caloric restriction

Introduction

Statins decrease low-density lipoprotein (LDL) cholesterol while simultaneously elevating high-density lipoprotein (HDL) cholesterol levels.3 A rise in serum LDL is correlated with increased cellular uptake of cholesterol, especially in the endothelium.2 During atherosclerosis, these LDL molecules undergo oxidation and amplify the inflammatory process through macrophage-induced cytokine release; the cytokines then give rise to a prothrombotic state that leads to coronary artery thrombosis.3 HDL, by contrast, removes excess cholesterol from body tissues and transports it to the liver for degradation.2 Because statins reduce serum LDL levels and increase serum HDL concentration, they can significantly reduce the morbidity and mortality associated with coronary artery disease (CAD).

Recent research, however, has revealed that the mechanisms by which statins control atherosclerosis might be useful in designing future anti-aging medication.5 Aging brings about inevitable physiological changes. Among these, cardiopulmonary and renal disorders are the leading causes of death in the geriatric population, while aging skin erodes a youthful appearance. Aging imposes a significant health burden on a country's economy, especially in developing nations, and while loss of youthful appearance is less important, it carries significant social implications. Hence, a drug that minimizes age-related health complications and maintains external appearance could be a boon to health authorities in every country and contribute to national economic prosperity.

The Mechanism of Statins in Opposing Atherosclerosis

The predominant mechanism by which statins prevent atherosclerotic CAD is competitive inhibition of 3-hydroxy-3-methylglutaryl coenzyme A (HMG Co-A) reductase, the key enzyme in the pathway of cholesterol biosynthesis. Statins also increase the expression of LDL receptors in hepatocytes, clearing LDL from the plasma, and they increase the activity of the paraoxonase-HDL enzyme complex.4 The functions of HDL and LDL are essentially opposite: LDL delivers cholesterol to the body tissues, while HDL removes cellular cholesterol and presents it to the liver. Since a low plasma cholesterol level corresponds to a low serum LDL concentration, LDL availability at the site of atherosclerosis is significantly decreased, and LDL oxidation is reduced in turn.

The alloenzyme PON 1 192 QQ of the paraoxonase-HDL complex is reported to possess the most prominent antioxidant action among all the enzymatic variants. It catalyzes the hydrolysis of phospholipid hydroperoxides in LDL.4 Paraoxonase plays an important role in preventing lipid peroxidation elsewhere in the body as well. Statins have been found to boost the activity of paraoxonase.5 Statins oppose atherosclerosis by using paraoxonase-HDL complex as the mediator.

Aging and Impaired Antioxidant Activity in the Body

Aging involves a significant deterioration in antioxidant activity of the body. There is a significant increase in mitochondrial activity as the body ages.6 As the site of oxidative phosphorylation, mitochondria produce reactive oxygen species (ROS) that are neutralized by superoxide dismutase (SOD), an important antioxidant molecule. With aging, the mitochondrial DNA may undergo mutations that amplify ROS generation.6 The free radicals alter the structure of mitochondrial membrane lipids and bring about undesirable changes in the organelle and other parts of the cell. This oxidative stress creates a functional deficiency of the SOD enzyme.6

ROS concentration has been found to rise in older age groups with unrestricted intake of calorie-rich foods (fats and carbohydrates). Metabolism of calorie-rich food increases the rate of transfer of electrons in oxidative phosphorylation, resulting in increased generation of ROS.7 Additionally, excessive calorie consumption leads to a dysregulation of cellular autophagy that, in turn, results in dolichol accumulation in the cells.8 Higher concentrations of dolichol lead to higher HMG Co-A reductase enzyme activity, thereby increasing serum cholesterol level in old age.8 Elevated serum cholesterol concentration increases rate of LDL oxidation and generates highly reactive LDL free radicals. Accumulation of free radicals and faulty SOD activity causes the loss of oxidant-antioxidant balance in the body tissues. The high level of oxidants then causes age-related changes in the different organ systems. Figure 1 below summarizes this relation between ROS production, excess calorie intake, and aging.

Statins as Possible Anti-aging Drugs

Their mechanism of action suggests that statins may be able to halt the natural process of aging by using HDL as a mediator. HDL downregulates LDL oxidation through paraoxonase;4 there is a time-regulated fall in the paraoxonase activity due to the accumulation of lipid peroxides, which are generated as byproducts of LDL oxidation.9 This decreased PON 1 192 QQ function is caused by interactions between its sulfhydryl groups and the metabolic by-products.9 Statins can reduce this interaction by decreasing the concentration of LDL molecules in the blood stream. If LDL levels are low, oxidation rate also drops, resulting in significantly reduced generation of oxidized byproducts that interact with paraoxonase.

Reduction of LDL oxidation, however, does not seem to be the definitive solution to this complex problem. Other non-high density lipoproteins such as intermediate-density lipoprotein (IDL) are independent risk factors for age-related CAD. Accumulation of IDL in high levels can bring about adverse effects despite low levels of LDL molecules. Nevertheless, new generations of statins have shown remarkable effects in reducing IDL levels in the blood stream.10

Statins can be combined with the drug dimercaprol to prevent the decrease in paraoxonase activity. Dimercaprol is typically used as a chelating agent in heavy metal poisoning, such as arsenic poisoning. The drug possesses a large number of free sulfhydryl groups that attract metals; this property is used to free respiratory enzymes from inhibitory complexes with heavy metals (Figure 2). Because oxidized LDL products bind to the sulfhydryl groups of paraoxonase, dimercaprol can serve as a more attractive substrate for these oxidized products; this competition reduces paraoxonase inhibition (Figure 3). Unfortunately, dimercaprol is nephrotoxic and can exacerbate the already-declining renal function of old age; it also causes dose-related emesis, hypertension, and palpitations. Used in conjunction, however, statins and dimercaprol exert a synergistic effect in increasing paraoxonase activity, so the dose of dimercaprol can be decreased to a level that causes only minimal physiological distress.

In addition to these drug therapies, restricted calorie consumption can improve antioxidant effects in the body. Caloric restriction lowers both the HMG Co-A reductase activity and cholesterol biosynthesis. Low cholesterol production means a lower concentration of LDL and its subsequent oxidization, significantly enhancing paraoxonase activity and working with statin therapy to dampen generation of ROS.

Statins could thus help reduce the incidence of atherosclerotic CAD, hypertension, hypertensive nephropathy, renal artery atherosclerosis, cerebral artery sclerosis, diabetic nephropathy, and dermatological changes. Slowing the progression of atherothrombotic changes in the cerebral arteries can lower the incidence of cerebrovascular disease (stroke) and improve cognition, and the antioxidant activity of statins may also prevent cellular death, reducing the development of skin wrinkles. Death of pancreatic beta cells due to ROS overproduction could be limited and insulin sensitivity in body tissues improved, decreasing the risk of diabetic nephropathy and other micro- and macrovascular changes.

A significant side effect, however, is the tendency of statins to cause myopathies.11 This adverse effect must be reduced or eliminated before statins can be introduced as an anti-aging pill in the global market. Drug trials have failed to establish any specific dose-response relationship for this pathological condition.12 However, lipophilic statins (simvastatin, lovastatin, atorvastatin) have been associated with a greater number of reported myopathy cases.13 As a result, statin-induced myopathy may be prevented by prescribing the lowest therapeutic dose of this group of drugs.

Conclusion

The antioxidant property of statins may be effective not only in treating dyslipidemia but also as a potential anti-aging medication. Although the combination of statins, dimercaprol, and caloric restriction seems promising for reducing or even reversing the process of aging, the additive role of these therapies has not yet been fully studied. More intensive research is necessary to fill the gaps in our knowledge about aging mechanisms and to develop anti-aging drugs.

References

  1. Malloy, M. J.; Kane, J. P. Agents Used in Dyslipidemia. In Basic & Clinical Pharmacology, 11th ed.; Katzung, B. G. et al. Eds.; Tata McGraw-Hill: New Delhi, 2009; p 605-620.
  2. Botham, K. M.; Mayes, P. A. Lipid Transport and Storage. In Harper’s Illustrated Biochemistry, 27th ed.; Murray, R. K. et al., Eds.; McGraw-Hill: Singapore, 2006; p 217-229.
  3. Mitchell, R. N.; Schoen, F. J. Blood Vessels. In Robbins and Cotran’s Pathological Basis of Disease, 8th ed.; Kumar, V. et al., Eds.; Elsevier: Haryana, 2010; p 487-528.
  4. Mahley, R. W.; Bersot, T.P. Drug Therapy for Hypercholesterolemia and Dyslipidemia. In Goodman and Gilman’s The Pharmacological Basis of Therapeutics, 10th ed.; Hardman, J. G. et al., Eds.; McGraw-Hill: New York, 2010; p 971-1002.
  5. Durrington, P. N. et al. Arterioscler. Thromb. Vasc. Biol. 2001, 21, 473-480.
  6. Pollack, M.; Leeuwenburgh, C. Molecular Mechanisms of Oxidative Stress in Aging: Free Radicals, Aging, Antioxidants and Disease. In Handbook of Oxidants and Antioxidants in Exercise; Sen, C. K. et al., Eds.; Elsevier Science: Amsterdam; p 881-923.
  7. de Grey, A. D. N. J. History of the Mitochondrial Free Radical Theory of Aging, 1954-1995. In The Mitochondrial Free Radical Theory of Aging; de Grey, A. D. N. J., Ed.; R. G. Landes: Austin, Texas; p 65-84.
  8. Cavallini, G. et al. Curr. Aging Sci. 1999, 1, 4-9.
  9. Aviram, M. et al. Free Radic. Biol. Med. 1999, 26, 892-904.
  10. Stein, D. T. et al. Arterioscler. Thromb. Vasc. Biol. 2001, 21, 2026-2031.
  11. Amato, A. A.; Brown Jr, R. H. Muscular Dystrophies and Other Muscle Diseases. In Harrison’s Principles of Internal Medicine, 18th ed.; Longo, D.L. et al., Eds.; McGraw-Hill: New York, 2012; p 3487-3508.
  12. Stewart, A. PLoS Curr. [Online] 2013, 1. http://currents.plos.org/genomictests/article/slco1b1-polymorphisms-and-statin-induced-myopathy/ (accessed March 14, 2013).
  13. Sathasivam, S.; Lecky, B. Statin Induced Myopathy. BMJ [Online] 2008, 337:a2286.

Comment

Hands-free driving: A Roadmap to the Future

The simple act of driving can be unproductive, dangerous, and time-consuming; the installation of autonomous technology within vehicles promises to address all three problems. This technology is considered among the most crucial breakthroughs in human travel being developed today; it is believed to have the capacity to create an improved and more efficient driving experience by limiting fuel consumption, decreasing traffic congestion, and reducing wasted time during road trips.

One of the driving forces behind the creation of autonomous vehicles is safety. Autonomous technology promises safer travel compared to human-operated vehicles, as the cars are equipped with laser and video detection systems to control the car's speed and steering mechanisms while avoiding obstacles in the roadway. This blend of autonomous technologies promises to make driving 99% safer while also allowing the travelers to focus on other activities.1

These cars must detect objects in the roadway and make rapid decisions to avoid them; the simple act of crossing an intersection requires the robotic cars to account for the inertia, right-of-way, and velocity of approaching vehicles.2 A major problem facing autonomous vehicles is real-time communication. Just as humans correspond face-to-face, these autonomous cars need to interact in real time so that they can work together safely. However, this type of communication is unpredictable and extremely hard to maintain.3 Autonomous technology presents near-endless benefits to automobile commuters; however, it faces not only current mechanical and software problems but also major legal and social issues. The technology needs to be perfected in every way possible before being released onto city streets. In this review of the autonomous technology within these computer-driven cars, I will explore the technology that operates these cars, how it controls the vehicle, the benefits this technology creates, and the legal and social concerns that arise from its use.

Developing Technologies: Seeing, Thinking, Steering

The ability of autonomous cars to see and judge risks in the roadway is vital to safe operation of the vehicle. An outstanding prototype of autonomous technology was created in 2005 by the Stanford Racing Team. Their robotic car Stanley, which won the DARPA Grand Challenge, operated solely on a software system that processed and converted visual data into appropriate driving commands.4 This software system uses onboard sensors, including lasers, cameras, and radar instruments, to gather outside information from the road, allowing the robotic vehicle to observe and judge the approaching roadway;4 these sensors are placed on top of the vehicle. The combination of lasers and cameras allows for increased detection of obstacles by providing short- and long-range detection, respectively.4 As the cameras receive the long-range images, the lasers allow the vehicle to detect the dimensions of approaching objects that could harm the vehicle. Detection of hazardous obstacles is one of the easier aspects of autonomous driving; split-second decision-making based on the detection system is harder to accomplish. An autonomous vehicle must use the information from the detection systems to determine whether the road surface is safe for driving. Measuring the dimensions of detected objects allows the car to determine whether they are true obstacles, such as roadway debris, or non-obstacles, such as grass and gravel. The researchers who helped build Stanley stated that the robot had trouble distinguishing tall grass from rocks, which poses obvious difficulties in application.4 In addition to obstacle recognition software, autonomous vehicles require extensive algorithms to accomplish and maintain velocity, steering, acceleration, and braking—functions all controlled by the same system of detection and decision making.
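
To make this dimension-based filtering concrete, the toy sketch below classifies hypothetical laser detections as drivable or not by comparing their estimated height with an assumed ground clearance. The thresholds, data fields, and values are invented for illustration and are not taken from Stanley's software.

```python
# Illustrative sketch of dimension-based obstacle filtering.
# Thresholds and data structures are hypothetical, not Stanley's actual values.
from dataclasses import dataclass

@dataclass
class Detection:
    height_m: float   # estimated object height from laser returns
    width_m: float    # estimated object width
    range_m: float    # distance from the vehicle

GROUND_CLEARANCE_M = 0.15   # assumed clearance the vehicle can safely drive over

def is_true_obstacle(d: Detection) -> bool:
    """Treat anything taller than the clearance as an obstacle;
    low returns (grass, gravel) are considered drivable."""
    return d.height_m > GROUND_CLEARANCE_M

detections = [Detection(0.05, 0.3, 12.0),   # gravel-like return
              Detection(0.40, 0.5, 20.0)]   # rock-like return
for d in detections:
    print(d, "obstacle" if is_true_obstacle(d) else "drivable")
```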

Dynamically Guided Routes

Route guidance is core to autonomous vehicle technology, which is not safe and effective without a computed path. The purpose of route guidance is to gather information from outside sources (e.g. other vehicles, fleet signals) and stored data to create the most efficient route. However, this technology is hindered by the limited amount of information that can be stored within the vehicle due to static map conditions.5 Static conditions are defined as the basic components of individual roadways, such as the length of the road, speed limit, and pre-existing intersection signals. Using static systems can result in unreliable and slower routes due to an inability to account for dynamic road situations; for example, these static routes can be highly ineffective once an accident occurs on the roadways.

Generating accurate routes while on the road is another computationally challenging problem for autonomous technology.5 Because autonomous vehicles are constantly in motion, current onboard computational power cannot execute lengthy routing algorithms and process dynamic road conditions at the same time. Researchers attempting to create an algorithm must balance quick execution and efficient route creation with low computational power.

An additional problem arises from dynamic roadways. Dynamic roads are defined as streets that are always changing due to traffic jams, accidents, and construction.5 In his article on route guidance, Yanyan Chen stated that a good route is one that, although possibly not the fastest, is both reliable and acceptable to the driver’s needs. As a solution, Chen and his team created the Risk-Averse A* Algorithm (Figure 1). This algorithm suggests a risk-averse strategy that pre-computes factors that affect traffic (such as weather and time of day), accounts for dynamic traffic flow and accidents, and computes a low-risk and reliable route. The Risk-Averse A* Algorithm is widely accepted in the field of autonomous research as the most efficient form of computing reliable and adaptive directions. In fact, Stanley used this algorithm in the DARPA Challenge.4
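
The published algorithm is more sophisticated than space allows here, but its central idea, penalizing risky road segments inside a shortest-path search, can be sketched in a few lines. The simplified example below is not the authors' implementation: the toy road graph, travel times, risk scores, and weighting factor are all invented, and the code simply runs an A*-style search whose edge cost adds a risk penalty to the expected travel time.

```python
# Simplified A* search whose edge cost adds a risk penalty to expected travel time.
# Graph, costs, risk scores, and the weight LAMBDA are illustrative assumptions.
import heapq

LAMBDA = 5.0  # how heavily unreliability is penalized relative to travel time

# edges: node -> list of (neighbor, expected_minutes, risk_score in [0, 1])
edges = {
    "A": [("B", 5, 0.1), ("C", 3, 0.6)],
    "B": [("D", 4, 0.1)],
    "C": [("D", 2, 0.7)],
    "D": [],
}
heuristic = {"A": 6, "B": 4, "C": 2, "D": 0}  # admissible lower bounds (minutes)

def risk_averse_astar(start, goal):
    frontier = [(heuristic[start], 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, minutes, risk in edges[node]:
            cost = g + minutes + LAMBDA * risk   # travel time plus risk penalty
            if cost < best.get(nbr, float("inf")):
                best[nbr] = cost
                heapq.heappush(frontier, (cost + heuristic[nbr], cost, nbr, path + [nbr]))
    return None, float("inf")

print(risk_averse_astar("A", "D"))  # prefers the slower but more reliable A-B-D route
```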

The task of navigating an autonomous car through an intersection is not simple. The vehicle must be able to use algorithms to derive not only the distance from the car to the intersection but also its current inertia. Simultaneously, this information must be constantly compared with that of other vehicles. The two main challenges in crossing an intersection are establishing reliable communication with other vehicles and coping with the dynamic, convoluted environment of the intersection itself. For autonomous navigation to be possible, vehicles must communicate with each other to determine which car has the right of way. When approaching an intersection, each car should propagate signals to the other vehicles as a failsafe in case oncoming cars are not detected by the visual and laser systems (Figure 2). In theory, autonomous vehicles will broadcast signals containing position and velocity information; at an intersection, approaching cars can detect and process this information to determine the appropriate maneuver.
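
A highly simplified picture of that exchange is sketched below: each car broadcasts its distance to the intersection and its speed, and a car proceeds only if its estimated arrival time beats every other car's by a safety margin. The message format, the margin, and the decision rule are illustrative assumptions, not a published protocol.

```python
# Toy right-of-way decision based on broadcast state messages.
# Message fields and the arrival-time rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StateMessage:
    car_id: str
    distance_to_intersection_m: float
    speed_mps: float  # current speed in meters per second

    def eta_s(self) -> float:
        """Estimated time to reach the intersection (ignores acceleration)."""
        return self.distance_to_intersection_m / max(self.speed_mps, 0.1)

def has_right_of_way(me: StateMessage, others: list[StateMessage]) -> bool:
    # Yield unless this car clearly arrives first (with a safety margin).
    SAFETY_MARGIN_S = 2.0
    return all(me.eta_s() + SAFETY_MARGIN_S < o.eta_s() for o in others)

me = StateMessage("car_1", 30.0, 15.0)       # ETA 2 s
other = StateMessage("car_2", 80.0, 10.0)    # ETA 8 s
print(has_right_of_way(me, [other]))         # True: proceed; otherwise brake and yield
```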

The dynamic environment of an intersection creates a whole new series of problems with the introduction of unknown variables. An autonomous system must be able to adapt, sense, and make decisions in short periods of time. The proposed ideas on how to navigate intersections use a decentralized navigation function, a method that has no need for long-range communication between vehicles. It enables cars to navigate independently while maintaining network connectivity and an overall goal. This function allows the car to account for dynamic traffic and improves the use of algorithms.2

Robotic Communication

The problem of real-time coordination between vehicles is a major obstacle that must be overcome for this technology to function safely on city streets and highways. Without reliable and fast communication, autonomous vehicles cannot navigate intersections, conserve energy, drive in safe formations, or create efficient routes. However, communication through wireless networks is not always reliable. Dr. Mélanie Bouroche from Trinity College, Dublin, stated that a “vehicle intending to cross an un-signaled junction needs to communicate in an area wide-enough to ensure that other vehicles … will receive its messages.”3 Figure 3 illustrates how the cars should disperse signals to communicate with other vehicles.

In the article “Real-Time Coordination of Autonomous Vehicles,” Bouroche, Hughes, and Cahill addressed this communication issue by creating a space-elastic communication model. This coordination model allows autonomous vehicles to adapt their behavior to the state of communication, ensuring that safety constraints are never violated.3
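
The model itself is considerably richer, but its core principle, falling back to a more conservative maneuver whenever communication cannot be guaranteed over the area the maneuver requires, can be caricatured as follows. Every name and number in this sketch is an invented placeholder rather than part of the authors' model.

```python
# Caricature of adapting behavior to communication state; not the published model.
def choose_behavior(guaranteed_comm_radius_m: float, required_radius_m: float,
                    cruise_speed_mps: float) -> tuple[str, float]:
    """Return (maneuver, speed). If messages cannot be guaranteed to reach the
    area the maneuver needs, fall back to something safer."""
    if guaranteed_comm_radius_m >= required_radius_m:
        return ("cross_junction", cruise_speed_mps)
    if guaranteed_comm_radius_m >= 0.5 * required_radius_m:
        return ("approach_slowly", 0.3 * cruise_speed_mps)  # reduce speed, keep listening
    return ("stop_and_wait", 0.0)  # safety constraint: never cross blind

print(choose_behavior(guaranteed_comm_radius_m=120.0, required_radius_m=200.0,
                      cruise_speed_mps=14.0))   # -> ('approach_slowly', 4.2)
```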

Conclusion

Autonomous technology should improve daily travel by decreasing fuel consumption, traffic congestion, and accidents. The construction of new highways and streets to accommodate this technology would modernize and improve the efficiency of cities. Daily life could be enhanced, as driving time could be spent more productively. Autonomous technology can greatly improve everyday vehicular travel—but only if it is correctly implemented into society. Many problems still remain in the realization of autonomous vehicles: detection systems must be improved to effectively identify and avoid obstacles, algorithms need to be refined to quickly compute dynamic routes, and communication between vehicles needs to be drastically improved in order to avoid accidents. The legal and societal issues must also be addressed: will all vehicular travel be converted to automated travel? If so, will all citizens be forced to use technology that controls their movement? If not, will separate highways and roads be built? Who will fund this new creation of streets and roads? Who will ultimately control and maintain such a system? Autonomous technology has the potential to vastly improve travel, but it can introduce system vulnerabilities and malfunction. Self-directed vehicles must be thoroughly researched and tested before the technology can be implemented on city, state, and national streets.  

References

  1. Hayes, B. Am. Sci. 2011, 99, 362-366.
  2. Fankhauser, B. et al. CIS 2011: IEEE 5th International Conference, Qingdao, China, Sept 17-19, 2011; pp. 392-397.
  3. Bouroche, M. et al. IEEE Conference on Intelligent Transportation Systems, 2006, 1232-1239.
  4. Thrun, S. et al. J. Field Robot. 2006, 23, 661-692.
  5. Chen, Y. et al. J. Intell. Transport. S. 2010, 14, 188–196.
  6. Bergenhem, C. et al. Sartre, 2008, 1-12.
  7. Dahlkamp, H. et al. In Proceedings of the Robotics Science and Systems Conference, 2006, 1-7.
  8. Elliott, C. et al. In The Royal Academy of Engineering, 2009, 1-19.
  9. Douglas, G. W. Unmanned Systems, 1995, 13, 3.
  10. Laugier, C. et al. In Proceedings of the IFAC Symposium on Intelligent Autonomous Vehicles. 2001, 10-18.
  11. Wright, A. Comm. ACM 2011, 54, 16-18. 

Comment

Aquaporin-4 and Brain Therapy

Introduction

One of the tasks of modern medicine is to address the many diseases affecting the brain. These maladies come in various forms – including neurodegenerative complications, tumors, vascular constriction, and buildup of intracranial pressure.1,2,3 Several of these disease classes are caused, in part or in full, by faulty waste clearance or water flux. Although a pervasive system of slow cerebrospinal fluid (CSF) movement in the brain’s ventricles is present, a rapid method for clearing solutes from the cortex’s interstitial space, which contains neural tissue and the surrounding extracellular matrix, was unknown.4,5 Recently, Iliff et al. discovered a new mechanism for the flow of CSF in the mouse brain – the “glymphatic” system.5,6 This pathway provides an accelerated mechanism to clear dissolved materials from the interstitial space, preventing a buildup of solutes and toxins.5,6 At the center of this system is the water transport protein aquaporin-4 (AQP4), and the extent of this water channel’s various roles is only now being identified. New perspectives on the mechanism by which the brain is “cleansed” may lead to breakthroughs in therapeutics for brain disorders such as Alzheimer’s Disease (AD), the sixth leading cause of death in the US.7

The Infrastructure

To understand AQP4 and its role in the brain, the environment in which it operates must be examined. As seen in Figure 1, surrounding the brain are three meninges, protective layers between the skull and the cortex.8 Between these layers – the dura, arachnoid, and pia maters – are cavities, including the subarachnoid space that lies just below the arachnoid layer.9 As the figure shows, directly underneath the pia mater are the cortex and interstitial space. Within the cortex, CSF flows through a system of chambers called the ventricles, illustrated in Figure 2.10 CSF suffuses the brain and has several vital functions, namely shock absorption, nutrient provision, and waste clearance.11 CSF is produced in a mass of capillaries called the choroid plexus and flows through the ventricles into the subarachnoid space, bathing the brain and never crossing the blood-brain barrier.10 After circulating through the brain’s interstitial space and ventricles, the CSF then leaves the brain through aquaporin channels surrounding the cephalic veins.6

A New Plumbing System

A team of researchers has discovered an alternate pathway for CSF that clears water-soluble materials from the interstitial space.6 CSF in this so-called “glymphatic pathway” starts in the subarachnoid cavity and then seeps into the cortex, as seen in Figure 3.6 This fluid eventually leaves the brain, carrying with it the waste generated by cells. CSF enters the parenchyma from the subarachnoid cavity and travels immediately alongside the blood vessels.6 This route, which forms a sheath around the blood vessels, is dubbed the “paravascular” pathway, and CSF enters and exits the interstitial space through these avenues.6 The pia membrane guides this pathway until the artery penetrates the cortex, as seen in Figure 3. From there, the endfeet of astrocytes bind the outer wall.6 Astrocytes are glial cells that play structural roles in the nervous system, and endfeet are the enlarged endings of the astrocytes that contact other cell bodies and contain AQP4 proteins.6,12

Iliff et al. found that AQP4 is highly polarized at the endfeet of astrocytes, which suggested that these proteins provide a pathway for CSF into the parenchyma.6 To test this hypothesis, they compared wild type and Aqp4-null mice on the basis of CSF influx into the parenchyma.6 They injected tracers such as radiolabeled TR-d3 intracisternally, finding that tracer influx into the parenchyma was significantly reduced in Aqp4-null mice.4 According to their model, AQP4 facilitates CSF flow into the parenchyma. There, CSF mixes with the interstitial fluid in the parenchyma; AQP4 then drives these fluids out and into the paravenous pathway by bulk flow.6 The rapid clearance of tracer in wild type mice and the significantly reduced clearance in Aqp4-null mice demonstrated the pathway’s ability to clear solutes – including soluble amyloid-β (Aβ) – from the brain. This finding is important because the build-up of Aβ is often associated with the onset and progression of AD.

AQP4 and Aβ

To facilitate Aβ removal, astrocytes become activated at a threshold Aβ concentration but undergo apoptosis at high concentrations.13,14,15 Thus, effective astrocytic clearance occurs only within a narrow window of Aβ concentration. A study by Yang et al. further explored the role of AQP4 in the removal of Aβ.16 They found that AQP4 deficiency reduced the astrocytic activation in response to Aβ in mice, and Aqp4-knockout reduced astrocyte death at high Aβ levels.16 Furthermore, AQP4 expression increased as Aβ concentration increased, likely due to a protein synthesis mechanism. Further investigation demonstrated that lipoprotein receptor-related protein-1 (LRP1) is directly involved in the uptake of Aβ, and knockout of Aqp4 reduced up-regulation of LRP1 in response to Aβ.15,16 Finally, AQP4 deficiency was found to alter the levels and time-course of MAPKs, a family of protein kinases involved in the response to astrocyte stressors.16 The role of AQP4 in cleansing the parenchyma as well as modulating astrocytic responses to Aβ thus pinpoints it as a major target for the following potential therapies: repairing defects in toxin clearance from the interstitial space, increasing expression in AQP4-deficient patients to increase astrocyte response, and knocking out Aqp4 in patients with high levels of Aβ to prevent astrocyte damage.

Sleep and Aβ Clearance

Interestingly, there is a link between Aβ clearance and sleep. Xie et al. studied Aβ clearance from the parenchyma in sleeping, anesthetized, and wakeful rodents, obtaining evidence that sleep plays a role in solute clearance from the brain. The researchers found that glymphatic CSF influx was suppressed in wakeful rodents compared to sleeping rodents.17 Glymphatic CSF influx is vital because it clears solutes from the brain in a way somewhat analogous to the way kidneys filter the blood. Real-time measurements showed that the parenchymal space was reduced in wakeful rodents, which led to increased resistance to fluid influx.17 Moreover, Aβ clearance was faster in sleeping rodents. Adrenergic signaling was hypothesized as the cause of volume reduction, implicating hormones such as norepinephrine.17 AQP4 is implicated in this phenomenon, as constricted interstitial space resists the CSF influx that this protein enables.

Aquaporin Therapy

If future treatments are to target AQP4 function, researchers must learn to manipulate its expression. However, such regulatory mechanisms are not well understood. It is well known that cells can ingest proteins in the plasma membrane and thus modulate the membrane protein landscape. Huang et al. studied this phenomenon with AQP4, utilizing the fact that occluding the middle cerebral artery mimics ischemia and alters AQP4 expression in astrocyte membranes.18 They found that artery occlusion down-regulates AQP4 expression and discovered various mechanisms behind this response.18 Specifically, they determined that, after the onset of ischemia, AQP4 co-localized in the cytoplasm with several proteins involved in membrane protein endocytosis.18 They posited that this co-localization indicates the internalization of AQP4.18 These correlations indicate that AQP4 is intimately connected with fluctuations in brain oxygen and nutrient levels, which are limited when blood flow is restricted.

Future Research

Aquaporin-4 is vital to many processes in the brain, but the range and details of these roles are not yet fully understood. As demonstrated, this protein is the central actor in the newly defined glymphatic system responsible for clearing solutes from CSF in the interstitial space. This function implicates AQP4 in the progression of AD and suggests other brain states and neurological conditions may have links to the protein’s function. Studies have demonstrated that AQP4 expression is dynamic, indicating that it can be regulated. The hope is that modulation of aquaporin expression or function could be used in brain therapy. Future research will no doubt focus on these mechanisms, and discoveries will aid in developing a treatment for various brain disorders.

References

  1. Goetz, C., Textbook of Clinical Neurology, 3rd Edition; Saunders: Philadelphia, 2007.
  2. Goldman, L. Goldman’s Cecil Medicine; Saunders Elsevier: Philadelphia, 2008.
  3. Karriem-Norwood, V. Brain Diseases, WebMD. http://www.webmd.com/brain/brain-diseases (Accessed December 1, 2013).
  4. Crisan, E. Ventricles of the Brain, Medscape. http://emedicine.medscape.com/article/1923254-overview#aw2aab6b3 (Accessed December 1, 2013).
  5. Scientists Discover Previously Unknown Cleansing System in Brain, University of Rochester Medical Center. http://www.urmc.rochester.edu/news/story/index.cfm?id=3584 (Accessed February 11, 2014).
  6. Iliff, J. J. Cerebrospinal Fluid Circulation: A Paravascular Pathway Facilitates CSF Flow Through the Brain Parenchyma and the Clearance of Interstitial Solutes, Including Amyloid β. Sci. Transl. Med. 2012, 4, 147ra111.
  7. 2012 Alzheimer’s Disease Facts and Figures. Alzheimer’s Association. http://www.alz.org/downloads/facts_figures_2012.pdf (Accessed December 6, 2013).
  8. Dugdale III, D. Meninges of the Brain. MedlinePlus, National Institutes of Health. http://www.nlm.nih.gov/medlineplus/ency/imagepages/19080.htm (Accessed December 1, 2013).
  9. O’Rahilly, R.; Muller, F.; Carpenter, S.; Swenson, R. Chapter 43: The Brain, Cranial Nerves, and Meninges. Basic Human Anatomy. [Online] Dartmouth Medical School: Hanover, 2008. http://www.innerbody.com/anatomy/nervous/subarachnoid-space (Accessed December 1, 2013).
  10. Agamanolis, Dimitri. Chapter 14 Cerebrospinal Fluid. Neuropathology. [Online] http://neuropathology-web.org/chapter14/chapter14CSF.html (Accessed Dec. 1, 2013).
  11. Cerebrospinal Fluid (CSF), National Multiple Sclerosis Society. http://www.nationalmssociety.org/about-multiple-sclerosis/what-we-know-about-ms/diagnosing-ms/cerebrospinal-fluid/index.aspx (Accessed December 1, 2013).
  12. Millodot, M. Astrocytes. Dictionary of Optometry and Visual Science, 7th edition; Butterworth-Heinemann: Oxford, U.K., 2009.
  13. Nielsen, H.M. et al. Glia [Online] 2010, 58, 1235-1246.
  14. Kobayashi, K. J Alzheimer’s Dis [Online] 2004, 6, 623-632.
  15. Arelin, K. Brain Research Molecular/Brain Research [Online] 2002, 104, 38-46.
  16. Yang, W. Mol Cell Neurosci [Online] 2012, 49, 406-414.
  17. Xie, L. Science [Online] 2013, 342, 373-377.
  18. Huang, J. Brain Research [Online] 2013, 1539, 61-72.
  19. Almodovar, B. et al. Rev. Cubana Med. Trop. [Online] 2005, 57, 3, 230-232.
  20. Ibe, B.C., et al. J. Tropical Pediatr. [Online] 1994, 40, 315-316.
  21. Slowik, G. What Is Meningitis? eHealthMD. http://ehealthmd.com/content/what-meningitis#axzz2l3OzwGfb (Accessed Dec. 1, 2013).
  22. Iadecola C. and Nedergaard M. Glial regulation of the cerebral microvasculature. Nat Neurosci [Online] 2007, 10, 1369-1376.

Comment

Do You Know Your Bottled Water?

Nearly 54% of the American population drinks bottled water.1 As they pick up packages of plastic-wrapped bottles off the shelf, consumers may believe the water within is cleansed of toxic chemicals, free of bacteria, and enhanced with minerals. In truth, depending on its source, bottled water is just as likely to carry the same level of harmful chemicals as tap water, to contain between 20,000 and 200,000 bacterial cells, and to lack any beneficial minerals.2

As sales of bottled water increase, consumption of tap water decreases.3 Since 1999, average annual per capita world consumption of bottled water has increased by 7% each year,4 and bottled water now surpasses milk and beer in sales.5 This growing habit is costly to both the wallet and the environment. Companies price bottled water 500 to 1,000 times higher than tap water on average,4 and producing one bottle of water requires 1,000 to 2,000 times more energy than producing the same amount of tap water—in addition to producing plastic waste.5 Are these increased costs justified by the belief that bottled water is safer or cleaner than tap water? In reality, that perception is a marketing myth.

Contrary to popular belief, bottled water is held under lower safety standards than tap water is. There are two different groups regulating drinking water: the Food and Drug Administration (FDA) oversees bottled water and the Environmental Protection Agency (EPA) regulates tap water. In setting health standards, the FDA follows in the footsteps of the EPA. The EPA is the first to set limits on dangerous chemicals, biological wastes, pesticides, and microbial organisms in tap water, and the FDA then adopts those limits for bottled water. As a result, regulations on bottled water are no stricter than those on tap water. In fact, they are often weaker. Table 1 lists health standards that differ between bottled and tap water; bottled water only has stricter limits on copper, fluoride, and lead.

The higher lead level of 15 parts per billion (ppb) in tap water allowed by the EPA—three times the limit for bottled water—may alarm some, but research indicates lead exposure at 15 ppb does not elevate blood lead levels in most adults. When the FDA followed suit to establish a lead limit in bottled water, it lowered the limit to 5 ppb because the majority of bottled water manufacturers could reasonably reach that level.6 Bottled water does not need to travel through pipes made of lead, so a stricter limit is sensible and could only be beneficial.

Unfortunately, FDA standards are poorly enforced. Water that is bottled and sold within the same state is not covered by FDA rules: an estimated 60-70% of bottled water sold in the United States meets the criteria of intrastate commerce and thus is regulated only by the state.6 A survey revealed that most states spend few, if any, resources policing bottled water, so compliance with standards for more than half of the bottled water on grocery shelves is discretionary.6 With overall weaker health standards and lax enforcement of regulation, bottled water is not obligated to be safer than tap water. How does the quality of bottled and tap water compare in reality? Examination reveals differences in their mineral and microbial content that impact personal health.

Mineral Comparison

In terms of mineral composition, the mineral content of bottled and tap water largely depends on source and treatment. Bottled water can be designated as spring, mineral, or purified. Spring water, including brands such as Ozarka® and Arrowhead®, originates from surface springs with water flowing naturally from underground supplies. Mineral water is simply spring water with at least 250 parts per million (ppm) of dissolved minerals such as salts and metals.3 Purified water brands such as Aquafina® and Dasani® take water from either underground or tap water sources and filter it to remove all minerals.

Tap water is more simply categorized as being sourced from surface water or ground water. Surface water refers to lakes, rivers, or oceans, while ground water describes any reservoir located beneath the earth's surface. For example, most of the tap water in Houston, Texas originates from a single surface water source.3 The source of any drinking water affects which minerals ultimately remain in the drinking water.

Three specific minerals important for a healthy body are calcium, magnesium, and sodium. Adequate calcium intake is important to maintain and restore bone strength for the young and to prevent osteoporosis in the old. Insufficient consumption of magnesium has been associated with heart disease including arrhythmias and sudden death.3 On the other hand, overly high sodium intake is well associated with high blood pressure and death from heart disease.7 A healthy balance of all three minerals can be supported by drinking water that is high in calcium and magnesium and low in sodium. In fact, magnesium in water is absorbed approximately 30% faster than magnesium in food.3

A comparative study in 2004 examined these three minerals in bottled and tap water across major U.S. regions. It concluded that drinking two liters per day of tap water from groundwater sources in certain regions, or of certain brands of bottled mineral water, can significantly supplement a person’s daily intake of calcium and magnesium (Table 2).3 To obtain mineral data, the study contacted tap water authorities in 21 different cities spanning the U.S. and obtained published data for 37 North American brands of commercial bottled water. While tap water sources showed wide variations in calcium, magnesium, and sodium content, mineral levels of bottled water were more consistent from category to category. In general, tap water from groundwater sources had higher levels of calcium and magnesium than tap water from surface water sources. High levels of calcium correlated with high levels of magnesium, while sodium levels varied more independently. Of the 12 states examined, water mineral levels were highest in Arizona, California, Indiana, and Texas. In half of the sources from those states, two liters of tap water allow adults to fulfill 8-16% of the calcium and 6-31% of the magnesium daily recommended intake (DRI). Additionally, more than 90% of all tap water sources contained less than 5% of the sodium DRI in two liters.

Among the bottled waters, spring water consistently contained low levels of all three minerals, while mineral waters contained relatively high levels of all three minerals (Table 2). Ozarka® spring water, produced in Texas, provides less than 2% of the three minerals' DRIs. In contrast, one liter of Mendocino® mineral water supplies 30% of the calcium and magnesium DRIs in women, and one liter of Vichy Springs® mineral water provides more than 33% of the recommended maximum sodium DRI. Based on these percentages, drinking bottled “mineral” water, as well as tap water from groundwater sources in certain cities, can supplement food intake to fulfill calcium and magnesium DRIs.
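
For readers who want to translate a label's mineral concentration into a share of the DRI, the arithmetic is simple: multiply the concentration by the daily volume consumed and divide by the DRI. The short sketch below illustrates the calculation with placeholder concentrations and assumed DRI values rather than figures from the cited study.

```python
# Convert a mineral concentration (mg/L) and daily intake (L) into percent of DRI.
# Concentrations and DRI values are illustrative, not taken from the cited study.
def percent_of_dri(conc_mg_per_l: float, liters_per_day: float, dri_mg: float) -> float:
    return 100.0 * conc_mg_per_l * liters_per_day / dri_mg

# Example: water with 100 mg/L calcium and 30 mg/L magnesium, 2 L per day,
# against assumed DRIs of 1000 mg (calcium) and 320 mg (magnesium).
print(round(percent_of_dri(100, 2, 1000), 1))  # 20.0 -> 20% of the calcium DRI
print(round(percent_of_dri(30, 2, 320), 1))    # 18.8 -> ~19% of the magnesium DRI
```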

Microbial Comparison

Despite labels depicting pristine lakes and mountains, bottled drinking water nearly always contains living microorganisms. In general, processing drinking water serves to make the water safe—not necessarily sterile. The FDA and EPA regulate bottled and tap water only for coliform bacteria, which are themselves harmless but indicate the presence of other disease-causing organisms.1 E. coli is an example of a coliform bacterium that resides in the human and animal intestines and is widely present in drinking water.8 The total number of microorganisms in water is often measured by incubating samples and counting colony-forming units (CFU), or bacteria that develop into colonies. Water with under 100 CFU/mL indicates microbial safety, while counts from 100-500 CFU/mL are questionable.8
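
Applying the plate-count guideline described above amounts to a simple threshold comparison, as the small illustrative helper below shows; the category labels are ours, and only the numeric cutoffs come from the text.

```python
# Classify a heterotrophic plate count using the cutoffs described in the text.
def classify_cfu(cfu_per_ml: float) -> str:
    if cfu_per_ml < 100:
        return "within the microbial-safety guideline"
    if cfu_per_ml <= 500:
        return "questionable"
    return "above the questionable range"

for count in (0.2, 250, 4900):
    print(count, "CFU/mL:", classify_cfu(count))
```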

In 2000, a research group compared the microbial content of bottled and tap water by obtaining samples from the four tap water processing plants in Cleveland, Ohio, and 57 samples of bottled water from a number of stores.9 The bottled water samples included products classified as spring, artesian, purified, and distilled. Bacteria levels in the bottled waters ranged from 0.01 to 4,900 CFU/mL, while the tap water samples varied from 0.2 to 2.7 CFU/mL.9 More specifically, 15 bottled water samples contained at least 10 times as many bacteria as the tap water average, three contained about the same amount, and 39 contained fewer. As shown in Figure 1, one-fourth of the samples of bottled water, mainly spring and artesian water, had more bacteria than tap water, demonstrating that bottled water is not reliably lower in bacteria than tap water. The bacteria in both bottled and tap water can cause gastrointestinal discomfort or illness.10

What is the Healthiest and Cleanest Water to Drink?

Since bottled and tap water contain varying levels of microbes, clean water is most reliably obtained by disinfecting tap water with commercially available water purifiers. Most purifiers also act as filters to remove chlorine, its byproducts, and other harmful chemicals that accumulate in tap water. However, chemical-removing filters will also remove any calcium and magnesium present, so purified tap water loses its mineral benefits. The loss of aqueous mineral intake can be overcome by eating foods rich in calcium and magnesium; maintaining mineral DRIs will result in a better level of health and energy. Bottled water is not guaranteed to be cleaner than tap water, so drinking properly filtered tap water may be the most economical and healthy way to stay hydrated.

References

  1. Rosenberg, F. A. Clin. Microbiol. Newsl. 2003, 25, 41–44.
  2. Hammes, F. Drinking water microbiology: from understanding to applications; Eawag News: Duebendorf, Switzerland, 2011.
  3. Azoulay, A. et al. J. Gen. Intern. Med. 2001, 16, 168–175.
  4. Ferrier, C. AMBIO 2001, 30, 118–119.
  5. Gleick, P. H.; Cooley, H. S. Environ. Res. Lett. 2009, 4, 014009.
  6. Olson, E. D. et al. Bottled Water: Pure Drink or Pure Hype?; NRDC: New York, 1999.
  7. Chobanian, A. V; Hill, M. Hypertension 2000, 35, 858–863.
  8. Edberg, S. C. et al. J. Appl. Microbiol. 2000, 88, 106S–116S.
  9. Lalumandier, J. A.; Ayers, L. W. Arch. Fam. Med. 2000, 9, 246–250.
  10. Payment, P. et al. Water Sci. Technol. 1993, 27, 137–143.

Comment

Cancer Ancestry: Identifying the Genomes of Individual Cancers

Cancer-causing mutations in the human genome have been a subject of intense research over the past decade. With increasing numbers of mutations identified and linked to individual cancers, the possibility of treating individual patients with a customized treatment plan based on their individual cancer genome is quickly becoming a reality.

Cancer arises when individual cells acquire mutations in their DNA. These mutations allow cancerous cells to proliferate uncontrollably, aggressively invade surrounding tissues, and metastasize to distant locations. Based on this progression, a potentially tremendous implication emerges: if every type of cancer arises from an ancestor cell that acquires a single mutation, then scientists should be able to trace every type of cancer back to its original mutation through modern genomic sequencing technologies. Large volumes of human genome data have been analyzed in search of these ancestor mutations using a variety of techniques, the most common of which is a large Polymerase Chain Reaction (PCR) screen. In this type of study, the DNA of up to one hundred cancer patients is sequenced; the sequences are then analyzed for recurrently mutated codons, the DNA units that determine single amino acids in a protein. Analyzing the enormous volume of data from these screens requires the efforts of several institutions. The first 90 cancer-causing mutations were identified at the Johns Hopkins Medical Institute, where scientists screened 11 breast cancer and 11 colorectal cancer patients’ genomes.3 After this study was published in 2006, researchers found these 90 mutations across every known type of cancer. These findings stimulated even more ambitious projects: if the original cancer-causing mutations are identified, scientists may be able to reverse the cancer process by removing faulty DNA sequences using precisely targeted DNA truncation proteins.
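
In spirit, the screening step comes down to counting how often the same variant appears across independent tumors. The toy tally below illustrates that bookkeeping with a handful of illustrative variant labels; it is a conceptual sketch, not the pipeline used in the cited studies.

```python
# Toy tally of recurrent variants across patients; the variant labels are
# used only for illustration, and the data are invented.
from collections import Counter

# Each set holds the variants called in one patient's tumor genome.
patient_variants = [
    {"TP53:R175H", "KRAS:G12D", "BRCA1:185delAG"},
    {"TP53:R175H", "APC:R1450X"},
    {"KRAS:G12D", "TP53:R175H"},
]

counts = Counter(v for variants in patient_variants for v in variants)
recurrent = {v: n for v, n in counts.items() if n >= 2}  # seen in 2+ patients
print(recurrent)  # e.g. {'TP53:R175H': 3, 'KRAS:G12D': 2}
```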

However, such a feat is obviously more easily said than done. One of the many obstacles in identifying cancer genomes is the fact that approximately 10 to 15% of cancers derive large portions of their DNA from viruses such as HIV and Hepatitis B. The addition of foreign DNA complicates the search for the original mutation, since viral DNA and RNA are propagated in human cells. This phenomenon masks human mutations that may have existed before the virus entered the host cell. In addition, because tumors are inherently unstable, cancers may lose up to 25% of their genetic code due to errors in cell division, making the task of tracing them even more difficult. Finally, the mutations in every individual cancer have accumulated over the patient’s lifetime; differentiating between mutations of the original cancerous cell line and those caused by aging and environmental factors is an arduous task.

In order to overcome these challenges, scientists use several approaches. First, they increase the sample size—this strategy ensures that the mutations are not specific to an individual organism or geographic area but are common in all patients with that type of cancer. Second, accumulated data concerning viral genomes allow scientists to screen for and mark the areas of viral origin in patients’ DNA. Several advances have already been made despite the difficulties: for instance, in endometrial cancer—a cancer originating in the uterine lining—mutations in the Nucleotide Excision Repair (NER) and MisMatch Repair (MMR) genes have been found to occur in 13% of all affected patients.4 NER and MMR are involved in DNA repair mechanisms and act as the body’s “guardians” of the DNA replication process. In a healthy individual, both NER and MMR ensure that each new cell receives a complete set of functional chromosomes following cell division. In a cancerous cell, these two genes acquire a mutation that permits replication of damaged and mismatched DNA sequences. Similarly, mutations in the normally tumor-suppressing Breast Cancer Type 1 Susceptibility Gene 1 and Gene 2 (BRCA1 and BRCA2) have been identified as major culprits in breast cancer. In prostate cancer, E-26 Transformation Specific (ETS) and Transmembrane Protease, Serine 2 (TMPRSS2) are two DNA transcription regulatory proteins discovered to initiate the disease process.5
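
As a conceptual illustration of that marking step, the toy sketch below flags any stretch of a DNA read that matches short fragments (k-mers) of a known viral sequence so it can be set aside before searching for human mutations. The sequences and fragment length are invented, and real pipelines use far more sophisticated alignment methods.

```python
# Toy illustration of flagging stretches of a read that match known viral k-mers.
# Sequences and the k-mer length K are invented for the example.
K = 6

viral_reference = "ATTCGGATCCGGTTAACCGG"
viral_kmers = {viral_reference[i:i + K] for i in range(len(viral_reference) - K + 1)}

def mask_viral(read: str) -> str:
    """Replace any k-mer of the read found in the viral set with 'N' characters."""
    masked = list(read)
    for i in range(len(read) - K + 1):
        if read[i:i + K] in viral_kmers:
            masked[i:i + K] = "N" * K
    return "".join(masked)

print(mask_viral("GGGATTCGGATCTTTT"))  # the viral-looking stretch is masked with Ns
```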

One of the latest frontiers in cancer treatment is the identification and study of individual, disease-causing mutations. Thousands of tumor genomes have been sequenced to discover recurring mutations in each cancer, and tremendous advances have been made in this emergent field of cancer genomics. Further study will ultimately aim to tailor cancer treatment to the patient’s specific set of mutations in the emerging field of personalized medicine. This strategy is already being used in the treatment of leukemia at the Cincinnati Children’s Hospital, where a clinical study has been underway since August 2013.2 This trial uses a combined treatment program that includes standard drug therapy while targeting a specific mutation in the mTOR gene, which is responsible for DNA damage repair. Thus, less than a decade after researchers first began to identify unique cancer-causing mutations, treatment programs tailored to patient genomes are becoming a reality.

References

  1. Lengauer, C. et al. Nature 1998, 396, 643-649.
  2. Miller, N. http://www.cincinnatichildrens.org (accessed Nov 9, 2013).
  3. Sjöblom, T. et al. Science 2006, 314, 268-274.
  4. Stratton, M. et al. Nature 2009, 458, 719-724.
  5. Tomlins, S. et al. Science 2005, 310, 644-648. 

Comment

Stem Cells and Hearts

In the U.S., someone dies of heart disease every 33 seconds. Currently, heart disease is the number one cause of death in the U.S.; by 2020, this disease is predicted to be the leading cause of death worldwide.1 Many of these deaths can be prevented with heart transplants. However, only about 2,000 of the 3,000 patients on the wait-list for heart donations actually receive heart transplants every year. Furthermore, patients who do receive donor hearts often have to wait for months.2

The shortage of organ donors and a rise in demand for organ transplants have instigated research on artificial organ engineering.5 In Tokyo, Japan, cell biologist Takanori Takebe has successfully synthesized and transplanted a “liver bud,” a tiny functioning portion of a liver, into a mouse; his experiment was able to partially restore liver function.3 Dr. Alex Seifalian from University College London, who has previously conducted artificial nose transplants, is now working on engineering artificial cardiovascular components.5

Breakthroughs in artificial organ engineering are also being made more locally. At St. Luke’s Episcopal Hospital in the Texas Medical Center, Dr. Doris Taylor is working on cultivating fully functional human hearts from proteins and stem cells. She is renowned for her discovery of “whole-organ decellularization,” a process where organs are stripped of all living cells to leave a protein framework. Taylor has successfully used this method as the first step in breathing life into artificially grown hearts of rats and pigs, and she is attempting to achieve the same results with a human heart.2

In this method, Taylor uses a pig heart as a scaffold, or protein template, for the growth of the human heart, as they are similar in size and physiological structure. In order to create such a scaffold, Taylor first strips pig hearts of all their cells, leaving behind the extracellular matrix and creating a framework free of foreign cells. The heart is then immersed for at least two days in a detergent found commonly in baby shampoo, which results in whole-organ decellularization. The decellularized pig heart emerges from the detergent bath completely white, since the red color of organs is usually derived from the now-absent hemoglobin and myoglobin (two oxygen-carrying proteins) in the cells. Therefore, only the structural proteins of the organ—devoid of both color and life—remain.2

To bring this ghost heart to life, Taylor enlists the aid of stem cells from human bone marrow, blood, and fat. The immature stem cells have the potential to differentiate into any cell in the body and stimulate the growth of the artificial organ. After the stem cells are added,2 the artificial heart is placed in a bioreactor that mimics the exact conditions necessary for growth, including a separate blood and oxygen supply as well as a beating sensation.6 Amazingly, a heartbeat is observed after just a few days, and the artificial organ can successfully pump blood after just a few weeks.2

Of course, this method is not limited to the development of a single type of organ. Not only will Taylor’s research benefit patients suffering from heart failure, but it will also increase the availability of other artificial organs such as livers, pancreases, kidneys, and lungs. Taylor has already proven that decellularization and stem cell scaffolding is a practical possibility with other organs; additionally, she has completed successful lab trials with organ implants in rats.4 While the full growth of a human heart is still being refined and other organ experiments have recently been completed, Taylor predicts that her team will be able to approach the Food and Drug Administration (FDA) with proposals for clinical trials within the next two years. Trials that integrate entire organs into human patients may be further in the future, but Taylor proposes that her team will begin with cardiac patches and valves, smaller functioning artificial portions of a heart, to show the safety and superiority of the decellularization and stem cell scaffolding process. Hopefully, after refining the procedure and proving its success, whole-organ decellularization will be used to grow organs unique to each individual who needs them.2

While this process is useful for all transplant patients, it is especially important for people with heart disease. The muscle cells of the heart, cardiomyocytes, have no regenerative capabilities.4 Not only is heart tissue incapable of regeneration, but the transplant window for hearts is also extremely short: donor hearts will typically only last four hours before they are rendered useless to the patient, which means that a heart of matching blood type and proteins must be transported to the hospital within that time period. Due to high demand and time limitations, finding compatible hearts within a reasonable distance is difficult. Though mechanical hearts are emerging as possible replacements for donor hearts, they are not perfect; use of a natural heart would be vastly superior.2 Mechanical hearts carry the risk of malfunction; natural hearts, which are designed for a human body, will better “fit” the individual and can be tailored to avoid patient rejection. With the advent of biologically grown hearts, more hospitals will have access to replacement organs, increasing the patients’ options for transplant. Another critical advantage of artificially grown hearts lies in the fact that the patient may not need anti-rejection medication. The patient’s own stem cells could be used to grow the heart. The artificial tissue would then grow to have the same protein markers as the rest of the cells in the body, minimizing the chances that the organ would be rejected.2 Still, the use of stem cells could be potentially problematic, as human stem cells decrease in number and deteriorate in function over time. In this respect, stem cells from younger patients are usually desirable, so the eradication of all anti-rejection medication is not feasible in the near future.

The development of artificial organs provides a solution to issues of organ rejection, availability, compatibility, and mechanical failure. Dr. Taylor’s stem cell research also presents the possibility of improving current technologies that help patients with partially functioning hearts. Her work has the potential to grow skin grafts for burn centers and aid in dialysis treatment for liver failure in the near future.2

While other organs are not as fragile as the heart, decellularization and protein scaffolding can potentially benefit the body holistically. Organs such as the kidney can heal themselves of small injuries but not of the major damage that requires transplant and emergency care. Taylor’s research, though still very much in development, could change the future of transplant medicine across all organs.

References

  1. The Heart Foundation. http://www.theheartfoundation.org/heart-disease-facts/heart-disease-statistics/ (accessed Oct 14, 2013).
  2. Galehouse, M. Saving Lives With Help From Pigs and Cells. Houston Chronicle, Houston, Jan 23, 2013.
  3. Jacobson, R. Liver Buds Show Promise, but Growing New Organs is Still a Long Way Off. http://www.pbs.org/newshour/rundown/2013/07/liver-buds-show-promise-but-growing-new-organs-is-still-a-long-way-off.html (accessed Oct 14, 2013).
  4. Moore, Charles. Texas Heart Institute’s Dr. Doris Taylor in the Forefront of Heart Tissue Regeneration Research. http://bionews-tx.com/news/2013/07/04/texas-heart-institutes-dr-doris-taylor-in-the-forefront-of-heart-tissue-regeneration-research/ (accessed Oct 14, 2013).
  5. Naik, G. Science Fiction Comes Alive as Researchers Grow Organs in Lab. http://online.wsj.com/news/articles/SB10001424127887323699704578328251335196648 (accessed Oct 14, 2013).
  6. Suchetka, D. 'Ghost Heart,' a Framework for Growing New Human Hearts, Could Be Answer for Thousands Waiting for New Heart. http://www.cleveland.com/healthfit/index.ssf/2012/08/ghost_heart_a_framework_for_gr.html (accessed Oct 14, 2013). 

Comment

Farming the Unknown: The Role of the Livestock Industry in Preserving Human Health

The livestock industry is a vast network of expectations. A farmer expects meat, dairy, and eggs from his animals, and a consumer expects to obtain these products from grocery stores. Industry expects profitable revenue from the sales of these products. Given the intensiveness of modern agriculture, this chain of action has been massively amplified. Meat production has doubled since the 1950s, and currently almost 10 billion animals—not including additional goods such as dairy and eggs—are consumed every year in the United States alone.1 Due to the magnitude of this industry, even small changes can bring about large-scale effects. Animal infections exemplify how small changes can ripple through this chain.

Though animal infections might initially seem to be a lesser concern, their effects on human health are rapidly becoming more pronounced and pervasive. During the past few years, an increased number of food-borne disease outbreaks have been traced to products such as beef, pork, poultry, and milk.2 These outbreaks are especially concerning because the pathogens involved are new strains previously harmless to humans. Rather, these pathogens have become infectious to humans due to mutations that occur in animal hosts; such diseases that jump from animals to humans are termed zoonotic. Within the food industry, zoonotic illnesses can be transmitted by consumption or through contact with animals. Crucially, zoonotic cases are much harder to treat because there is no precedent for their treatment.

How often does this transmission occur? Since 1980, 87 new human pathogens have been identified, out of which a staggering 80% are zoonotic.3 Furthermore, many of these have been found in domestic animals, which serve as reservoirs for a variety of infectious agents. The large number of zoonoses raises several key questions. Are these outbreaks the product of our management of livestock or simply a natural phenomenon? How far could zoonotic illnesses escalate in terms of human cases and mortality? What practices or perspectives should we modify to prevent further damage?

Prominent virologist and Nobel laureate in medicine Sir Frank Macfarlane Burnet provided a timeless perspective on this issue in the mid-20th century. He conceptualized infectious disease as no less fundamental than other interactions between organisms, such as predation, decomposition, and competition.4 Taking into account how we have harnessed nature, particularly with the aim of producing more food, we can see how farming animals has also inadvertently farmed pathogens.

Treating animals as living environments that can promote pathogenic evolution and diffusion is crucial to creating proper regulations in the livestock industry that protect the safety of consumers in the long run. Current practices risk the emergence of zoonotic diseases by facilitating transmission under heavily industrialized environments and by fostering antibiotic resistance in bacteria. Cooperative action between government, producers, and educated consumers is necessary to improve current practices and preserve good health for everyone.

Influenza: Old Threats, New Fears

The flu is not exactly a stranger to human health, but we must realize that the influenza virus affects not only humans but also other species such as pigs and birds. In fact, what is known as “the flu” is not a single virus but rather a whole family of viruses. The largest family of influenza viruses, influenza A, has different strains of viruses classified with a shorthand notation for their main surface glycoproteins – H for hemagglutinin and N for neuraminidase (Figure 1). These surface glycoproteins are important because their structure and shape determine whether the virus will attach to the cellular receptors of its host and infect it. For example, the influenza H7N7 virus has a structure that allows it to specifically infect horses but not humans. Trouble arises when these surface glycoproteins undergo structural changes and the virus gains the capacity to infect humans, as was the case during the 2003 avian flu and the 2009 swine flu pandemics, when the influenza virus jumped from poultry and swine to humans.
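
Because the subtype label encodes nothing more than the two glycoprotein numbers, it can be unpacked mechanically; the small helper below does exactly that and is included purely to illustrate the naming scheme described above.

```python
# Tiny helper that unpacks the H/N shorthand used for influenza A subtypes.
# Purely illustrative of the naming convention; not an epidemiological tool.
import re

def parse_subtype(name: str) -> tuple[int, int]:
    """Return (hemagglutinin_type, neuraminidase_type) from a label like 'H5N1'."""
    match = re.fullmatch(r"H(\d+)N(\d+)", name.strip().upper())
    if not match:
        raise ValueError(f"Not an influenza A subtype label: {name!r}")
    return int(match.group(1)), int(match.group(2))

print(parse_subtype("H5N1"))   # (5, 1)
print(parse_subtype("h7n7"))   # (7, 7): the equine subtype mentioned above
```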

Since 2003 when it was first documented in humans, avian influenza H5N1 has been responsible for over 600 human infections and associated with a 60% mortality rate due to severe respiratory failure.5 The majority of these cases occurred in Asia and Africa, particularly in countries such as Indonesia, Vietnam, and Egypt, which accounted for over 75% of all cases.5-6 Though no H5N1 cases have been reported in the U.S., there have been 17 low-pathogenicity outbreaks of avian flu in American poultry since 1997, and one highly pathogenic outbreak of H5N2 in 2004 with 7,000 chickens infected in Texas.5

Poultry is not the only area of livestock industry where flu viruses are a human health concern. The 2009 outbreak of influenza H1N1—popularly termed “swine flu” from its origin in pigs—was officially declared a pandemic by the WHO and the CDC. With an estimated 61 million cases and over 12,000 deaths attributed to the swine flu since 2009, H1N1 is an example of a zoonotic disease that became pandemic due to an interspecies jump that turned it from a regular pig virus to a multi-species contagion.7

The theory of how influenza viruses mutate to infect humans includes the role of birds and pigs as “mixing vessels” for mutant viruses to arise.8 In pigs, the genetic material from pig, bird, and human viruses (in any combination) reassorts within the cells to produce a virus that can be transmitted among several species. This process also occurs in birds with the mixing of human viruses and domestic and wild avian viral strains. If this theory is accurate, one can infer that a high density of pigs in an enclosed area could easily be a springboard for the emergence of new, infectious influenza strains. Thus, the “new” farms of America where pigs and poultry are stocked to minimize space and maximize production provide just the right environment for one infected pig to transfer the disease to the rest. Human handlers then face the risk of exposure to a new disease that can be as fatal as it is infectious, as the 2009 swine flu pandemic and the 2003 avian flu cases demonstrated. As consumers, we should treat adequate care of our food sources as a priority not only for avoiding disease but also for protecting national and global health.

Feeding our Food: Antibiotic Resistance in the Food Industry

Interspecies transmission is not the only way new diseases can become pathogenic to humans. In the case of bacteria, new pathogenic strains can arise in animals through another mechanism: antibiotic resistance. Antibiotic resistance is the result of a fundamental principle of evolutionary biology—individuals with advantageous traits that allow survival and reproduction will pass these traits to their offspring. Even within the same population, antibiotic resistance varies among individual bacteria—some have a natural resistance to certain antibiotics while others simply die off when exposed. Thus, antibiotic use effectively selects for bacteria with such resistance or, in some cases, total immunity. In this way, the livestock industry provides a selective environment.
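
A back-of-the-envelope model makes the selective effect vivid: even if resistant cells start as a tiny minority, repeatedly killing most of the susceptible cells lets them dominate within a few generations. The simulation below uses arbitrary growth and kill rates chosen only for illustration, not measured values.

```python
# Minimal model of selection for resistance under repeated antibiotic exposure.
# Growth and kill rates are arbitrary illustrative values, not measured data.
def simulate(generations: int, antibiotic: bool) -> float:
    susceptible, resistant = 0.99, 0.01   # initial population fractions
    for _ in range(generations):
        susceptible *= 2.0                # both types double each generation
        resistant *= 2.0
        if antibiotic:
            susceptible *= 0.1            # antibiotic kills 90% of susceptible cells
        total = susceptible + resistant
        susceptible, resistant = susceptible / total, resistant / total
    return resistant                      # final resistant fraction

print(round(simulate(10, antibiotic=False), 3))  # stays ~0.01
print(round(simulate(10, antibiotic=True), 3))   # approaches 1.0
```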

The rise of these resistant strains, commonly termed “superbugs” for their resistance to a wide range of common antibiotics, has been a serious threat in hospitals, where antibiotic use is widespread and drug resistance contributes to almost 100,000 deaths each year from pathogens such as methicillin-resistant Staphylococcus aureus, Candida albicans, Acinetobacter baumannii, and dozens of other species.9 Our attention should not be focused exclusively on hospitals as sources of superbug infections, however. The widespread use of antibiotics in the livestock industry to ward off common bacterial diseases in food animals also carries the risk of producing new superbug strains, and it has not been without its share of outbreaks and casualties.

The Center for Science in the Public Interest, a non-profit organization that advocates for increased food safety in the U.S., has reported that antibiotic-resistant pathogens have caused 55 major outbreaks since 1973, the majority of them traced to dairy products, beef, and poultry. The same report found that most of these pathogens are resistant to more than seven different antibiotics.10 One of the main culprits identified in these outbreaks is the bacterium Salmonella typhimurium, which, together with other Salmonella species, accounts for over half of the cases. Salmonella is especially dangerous because it is so pervasive; it can lie dormant in a variety of livestock products, such as uncooked eggs, milk, cheese, poultry, and beef, until it reaches a live host in which to incubate and cause infection. Escherichia coli O157:H7, a pathogenic strain of a bacterium that normally resides in the intestines of mammals, has also been implicated in a number of outbreaks, primarily involving beef products. Overall, antibiotic-resistant pathogens have caused over 20,500 illnesses, with over 31,000 hospitalizations and 27 deaths.10

These cases demonstrate how the widespread use of antibiotics in the food industry perpetuates the risk of antibiotic-resistant infections and the damage they cause to human health. Currently, the Food and Drug Administration (FDA) approves the use of antibiotics to treat sick animals; furthermore, the agency allows antibiotic use in healthy animals as prevention and even as a growth enhancer.11 In fact, over 74% of all antibiotics produced in the United States are used in livestock animals for these purposes.9,11 Using antibiotics in non-infected animals in this way creates stronger selective pressure for superbugs to emerge, and this type of use in particular should be restricted. Managing antibiotic use to reduce the risk of emerging superbug strains should be as much a priority in the food industry as it is in health care.

Hungry for a Solution

Still open to debate is the question of how many resources should be allocated to the problem of widespread antibiotic use. Currently, new human diseases emerge from animal reservoirs faster than pathogens evolve within human populations themselves. Moreover, many of these zoonotic diseases carry considerable pandemic potential, as the case of H5N1 avian influenza shows. Measures to prevent the transmission of viruses among livestock animals and to reduce the rate at which antibiotic-resistant strains emerge need to account for the environmental and evolutionary nature of zoonoses.

More thorough surveillance of livestock animals and closer monitoring for signs of newly emerging strains are essential to preventing the spread of such deadly pathogens. This strategy requires intensive molecular analysis, a larger number of professionals working in the field, and a nationwide initiative. Keeping an accurate record of where new strains arise, along with the number of animal and human cases, would significantly improve the epidemiological surveillance of infectious disease. Such an effort requires cooperation at multiple levels to ensure that the logistics of, and public support for, these initiatives remain ongoing and effective. Additionally, educating people about the nature of zoonotic pathogens is crucial to fostering the dialogue and action necessary to secure the good health of animals, producers, and consumers.
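
As a rough sketch of the record keeping such surveillance implies, the Python snippet below tallies reported animal and human cases of each detected strain by location. The schema and the numbers are hypothetical and are not drawn from any agency’s actual reporting system; they simply show how even a simple, consistently maintained registry would let officials spot an emerging hotspot quickly.

    from collections import defaultdict
    from dataclasses import dataclass

    # Hypothetical surveillance record; field names are illustrative only.
    @dataclass
    class StrainReport:
        strain: str        # e.g. "H5N1"
        location: str      # farm, county, or region where the strain was found
        animal_cases: int
        human_cases: int

    def summarize(reports):
        """Aggregate case counts per strain so emerging hotspots stand out."""
        totals = defaultdict(lambda: {"animal": 0, "human": 0, "sites": set()})
        for r in reports:
            totals[r.strain]["animal"] += r.animal_cases
            totals[r.strain]["human"] += r.human_cases
            totals[r.strain]["sites"].add(r.location)
        return totals

    # Illustrative reports, not real data.
    reports = [
        StrainReport("H5N1", "Farm A", animal_cases=120, human_cases=0),
        StrainReport("H5N1", "Farm B", animal_cases=300, human_cases=2),
        StrainReport("H1N1", "Farm C", animal_cases=45, human_cases=1),
    ]

    for strain, t in summarize(reports).items():
        print(f"{strain}: {t['animal']} animal / {t['human']} human cases "
              f"across {len(t['sites'])} sites")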

References

  1. Johns Hopkins Center for a Livable Future: Industrial Food Animal Production in America. Fall 2013. http://www.jhsph.edu/research/centers-and-institutes/johns-hopkins-center-for-a-livable-future/_pdf/research/clf_reports/CLF-PEW-for%20Web.pdf (accessed Oct 24, 2013).
  2. Cleaveland, S. et al. Phil. Trans. R. Soc. B. 2001, 356, 991.
  3. Watanabe, M. E. BioScience 2008, 58, 680.
  4. Burnet, F. M. Biological Aspects of Infectious Disease. Macmillan: New York, 1940.
  5. Centers for Disease Control and Prevention: Avian Flu and Humans. http://www.cdc.gov/flu/avianflu/h5n1-people.html (accessed Oct 12, 2013).
  6. World Health Organization: Cumulative Number of Confirmed Human Cases of Avian Influenza A(H5N1) Reported to WHO. http://www.who.int/influenza/human_animal_interface/H5N1_cumulative_table_archives/en/ (accessed March 14, 2013).
  7. Chan, M. World Now at the Start of the 2009 Influenza Pandemic. http://www.who.int/mediacentre/news/statements/2009/h1n1_pandemic_phase6_20090611/en/ (accessed March 14, 2013).
  8. Ma, W. et al. J. Mol. Genet. Med. [Online] 2009, 3, 158-164.
  9. Mathew, A. G. et al. Foodborne Pathog. Dis. 2007, 4, 115-133.
  10. DeWaal, C. S.; Grooters, S. V. Antibiotic Resistance in Foodborne Pathogens. http://cspinet.org/new/pdf/outbreaks_antibiotic_resistance_in_foodborne_pathogens_2013.pdf (accessed March 14, 2014).
  11. Shames, L. Agencies Have Made Limited Progress Addressing Antibiotic Use in Animals. http://louise.house.gov/images/user_images/gt/stories/GAO_Report_on_Antibioic_Resistance.pdf (accessed Jan 20, 2014).
