Rewriting the Genetic Code

DNA, or deoxyribonucleic acid, is at the root of all life as we know it. Using just four component nucleotide bases, DNA contains all the information needed to build any protein. To make a protein, DNA is transcribed into mRNA, which is in turn “translated” into a protein by molecules called ribosomes. More specifically, the mRNA is “read” by ribosomes in groups of three bases called codons, each of which specifies one of the 20 standard amino acids or a signal to stop translation. The set of rules that determines which codon codes for which amino acid is called the genetic code, and it is common to almost all known organisms--everything, from bacteria to humankind, shares the same protein expression schema. It is this common code that allows synthetic biologists to insert one organism’s genes into another to create transgenic organisms; generally, a gene will be expressed the same way no matter what organism possesses it. Interestingly, some scientists are attempting to alter this biological norm: rather than modifying genes, they are attempting to modify the genetic code itself in order to create genetically recoded organisms, or GROs.

This modification is possible due to the redundancy of the genetic code. Because there are 64 possible codons and only 20 amino acids found in nature, many amino acids are specified by more than one codon. Theoretically, then, researchers should be able to swap every instance of a particular codon with a synonymous one without harming a cell, then repurpose the eliminated codon.1 This was demonstrated in a 2013 Science paper by Lajoie et al., in which a team working with E. coli cells replaced every instance of the codon UAG, which signals cells to stop translation of a protein, with the functionally equivalent UAA. They then deleted release factor 1 (RF1), the protein that gives UAG its stop function. Finally, they reassigned UAG to code for a non-standard amino acid (NSAA) of their choice.1
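
The swap-then-repurpose idea can be pictured as an in-frame find-and-replace followed by a change to the codon table. The sketch below is only a toy illustration of that logic; it is not the multiplex genome-engineering workflow Lajoie et al. actually used, and the sequence and codon-table excerpt are hypothetical:

```python
def recode(mrna, old="UAG", new="UAA"):
    """Scan a coding sequence codon by codon and replace the target codon."""
    codons = [mrna[i:i + 3] for i in range(0, len(mrna), 3)]
    return "".join(new if codon == old else codon for codon in codons)

# Tiny excerpt of the codon table (hypothetical subset for illustration).
codon_table = {"AUG": "Met", "UUU": "Phe", "UAA": "stop", "UAG": "stop"}

sequence = "AUGUUUUAG"          # Met-Phe-stop, ending in the UAG stop codon
recoded = recode(sequence)      # "AUGUUUUAA": same protein, UAG now unused
codon_table["UAG"] = "NSAA"     # the freed codon can be assigned a new meaning
print(recoded, codon_table["UAG"])
```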

In a more recent paper, Ostrov et al. took this recoding even further by excising seven codons from the E. coli genome, reducing it to 57 codons. Because there are 62,214 instances of these codons in the E. coli genome, the researchers could not directly excise them with typical gene-editing strategies. Instead, they resorted to synthesizing long stretches of the modified genome, inserting them into the bacteria, and testing to make sure the modifications were not lethal. At the time of publication, they had completed testing of 63% of the recoded genes and found that most of their changes had not significantly impaired the bacteria’s fitness, indicating that such large changes to the genetic code are feasible.2

Should Ostrov’s team succeed in their recoding, there are a number of possible applications for the resulting GRO. One would be the creation of virus-resistant cells.1,2,3 Because a GRO’s protein expression machinery is modified, viral DNA injected into recoded bacteria would be improperly translated wherever it contained the repurposed codons. Such resistance was demonstrated by Lajoie in an experiment in which he infected E. coli modified to have no UAG codons with two types of viruses: a T7 virus that contained UAG codons in critical genes, and a T4 virus that did not. As expected, the modified cells showed resistance to T7 but were infected normally by T4. The researchers concluded that more extensive genetic code modifications would probably make the bacteria immune to viral infection entirely.2 Using such organisms in lieu of unmodified ones in bacteria-dependent processes like cheese- and yogurt-making, biogas manufacturing, and hormone production would reduce the cost of those processes by eliminating the hassle and expense associated with viral infection.3,4 It should be noted that while GROs would theoretically be resistant to infection, they would also be unable to “infect” anything themselves. If GRO genes were taken up by other organisms, they too would be improperly translated, making horizontal gene transfer impossible. This means that GROs are “safe” in the sense that they cannot spread their genes to organisms in the wild the way other GMOs can.5

GROs could also be used to make novel proteins. The eliminated codons could be repurposed to code for NSAAs, amino acids not found in nature. GROs can thus produce a range of proteins with expanded chemical properties, free from the limits imposed by the 20 standard amino acids.2,3 These proteins could then be used in medical or industrial applications. For example, the biopharmaceutical company Ambrx develops proteins with NSAAs for use as medicines to treat cancer and other diseases.6

While GROs can do incredible things, they are not without their drawbacks. Proteins produced by the modified cells could turn out to be toxic, and if GROs managed to escape from the lab into the wild, they could flourish due to their resistance to viral infection.2,3 To prevent this scenario, Ostrov’s team has devised a failsafe. In previous experiments, the researchers modified bacteria so that two essential genes, adk and tyrS, depended on NSAAs to function. Because NSAAs are not found in the wild, this modification effectively confines the bacteria to the lab, and it is difficult for the organisms to thwart this containment strategy spontaneously. Ostrov et al. intend to implement this failsafe in their 57-codon E. coli.2

Genetic recoding is an exciting development in synthetic biology, one that offers a new paradigm for genetic modification. Though the field is still young and the sheer number of DNA changes needed to recode organisms poses significant challenges to the creation of GROs, genetic recoding has the potential to yield tremendously useful organisms.

References

  1. Lajoie, M. J., et al. Science 2013, 342, 357-360.
  2. Ostrov, N., et al. Science 2016, 353, 819-822.
  3. Bohannon, J. Biologists are close to reinventing the genetic code of life. Science, Aug. 18, 2016. http://www.sciencemag.org/news/2016/08/biologists-are-close-reinventing-genetic-code-life (accessed Sept. 29, 2016).
  4. Diep, F. What are genetically recoded organisms? Popular Science, Oct. 15, 2013. http://www.popsci.com/article/science/what-are-genetically-recoded-organisms (accessed Sept. 29, 2016).
  5. Commercial and industrial applications of microorganisms. http://www.contentextra.com/lifesciences/files/topicguides/9781447935674_LS_TG2_2.pdf (accessed Nov. 1, 2016).
  6. Ambrx. http://ambrx.com/about/ (accessed Oct. 6, 2016).

CRISPR: The Double-Edged Sword of Genetic Engineering

The phrase “genetic engineering” often brings to mind a mad scientist manipulating mutant genes or a Frankenstein-like creation emerging from a test tube. Brought to the forefront by heated debates over genetically modified crops, genetic engineering has long been viewed as a difficult, risky process fraught with errors.1

The intentional use of CRISPRs, short for clustered regularly interspaced short palindromic repeats, turned the world of genetic engineering on its head. Pioneered by Jennifer Doudna of Berkeley2 in 2012 and praised as the “Model T” of genetic engineering,3 CRISPR as a tool is both transforming what it means to edit genes and raising difficult ethical and moral questions about genetic engineering as a discipline.

CRISPR itself is no new discovery. The repeats are sequences used by bacteria and other microorganisms to protect against viral infections. Upon invasion by a virus, the CRISPR system identifies DNA segments from the invading virus, processes them into “spacers,” short segments of viral DNA stored between the repeats, and inserts them back into the bacterial genome.4 When the bacterial DNA undergoes transcription, the resulting RNA is a single-chain molecule that acts as a guide to destroy viral material. In a way, the RNA functions as a blacklist for the bacterial cell: re-invasion attempts by the same virus are quickly identified using the DNA record and subsequently stopped. That same blacklist enables CRISPR to be a powerful engineering tool. The spacers act as easily identifiable flags in the genome, allowing for high precision when manipulating individual nucleotide sequences in genes.5 The old biotechnology system can be pictured as a confused traveler holding an inaccurate map, with only a general location and a vague description of the person to meet. By the same analogy, CRISPR provides a mugshot of the person to meet and the precise coordinates of where to find them. Scientists have taken advantage of this precision and now use modified versions of proteins such as Cas9 to activate gene expression rather than cut the DNA,6 an innovative style of genetic engineering. Traditional genetic engineering can be a shot in the dark, but with the accuracy of CRISPR, off-target mutations are very rare.7 For the first time, scientists are able to pinpoint the exact location of a gene, cut the desired sequence, and leave the rest of the genome undamaged. Another benefit of CRISPR is the reincorporation of genes that have been lost, whether through breeding or evolution, bringing back extinct qualities. For example, scientists have succeeded in introducing mammoth genes into living elephant cells.8,9 Even better, CRISPR is very inexpensive, costing around $75 to edit a gene at Baylor College of Medicine,10 and it is accessible to anyone with biological expertise, graduate students included.11 The term “Model T of genetic engineering” could hardly be more appropriate.
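
The “blacklist” behavior described above can be sketched in a few lines of code: record a fragment of the invader’s DNA, then check future arrivals against the stored records. This is only a toy analogy with hypothetical sequences; real spacer acquisition and Cas-mediated cleavage are far more involved:

```python
def acquire_spacer(spacers, viral_dna, length=8):
    """Store a short fragment of the invader's DNA as a new spacer."""
    spacers.add(viral_dna[:length])

def recognizes(spacers, incoming_dna, length=8):
    """Flag re-invasion if any stored spacer appears in the incoming DNA."""
    return any(incoming_dna[i:i + length] in spacers
               for i in range(len(incoming_dna) - length + 1))

spacers = set()
virus = "ATGCCGTTAGCAAT"                         # hypothetical viral sequence
acquire_spacer(spacers, virus)                   # first infection: record the invader
print(recognizes(spacers, virus))                # True: the same virus is recognized
print(recognizes(spacers, "GGGGTTTTCCCCAAAA"))   # False: unfamiliar sequence
```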

CRISPR stretches the boundaries of bioengineering. One enterprising team from China, led by oncologist Dr. Lu You, has already begun trials in humans. The team plans to inject cells modified using the CRISPR-Cas9 system into patients with metastatic non-small cell lung cancer--patients who otherwise have little hope of survival.12 To prevent attacks on healthy cells, T cells, critical immune mediators, will be extracted from the patients and edited with the CRISPR-Cas9 system: CRISPR will identify and “snip” out a gene that encodes PD-1, a protein that acts as a check on the T cell’s capacity to launch an immune response. Essentially, Lu’s team is creating super T cells, ones that have no mercy for any suspicious activity. The operation is very risky. CRISPR’s mechanisms are not thoroughly understood, and mistakes in gene editing could have drastic consequences.11 In addition, the super T cells could attack healthy tissue in an autoimmune reaction, leading to the degradation of critical organs. In an attempt to prevent such a response, Lu’s team will extract the T cells from the tumor itself, as those T cells have likely already specialized in attacking cancer cells. To ensure patient safety, the team will examine the effects of three different dosage regimens on ten patients, watching closely for side effects and responsiveness.

Trials involving such cutting-edge technology raise many questions. Given its ease of use and accessibility, CRISPR has the potential to become a tool worthy of science fiction horror. Several ethics groups have raised concerns over inappropriate use of CRISPR: they worry that the technology could be wielded by amateurs and yield dangerous results. In spite of these concerns, China greenlit direct editing of human embryos, prompting international uproar and calls for a moratorium on further human embryo testing.13 But such editing could lead to new breakthroughs: CRISPR could reveal how genes regulate early embryonic development, leading to a better understanding of infertility and miscarriages.14

This double-edged nature defines CRISPR-Cas9’s increasing relevance at the helm of bioengineering. The momentum behind CRISPR and its seemingly endless applications continue to broach long-unanswered questions in biology, disease treatment, and genetic engineering. Still, with that momentum comes caution: with CRISPR’s discoveries come increasingly blurred ethical distinctions.

References

  1. Union of Concerned Scientists. http://www.ucsusa.org/food_and_agriculture/our-failing-food-system/genetic-engineering/risks-of-genetic-engineering.html#.WBvf3DKZPox (accessed Sept. 30, 2016).
  2. Doudna, J. Nature [Online] 2015, 7583, 469-471. http://www.nature.com/news/genome-editing-revolution-my-whirlwind-year-with-crispr-1.19063 (accessed Oct. 5, 2016).
  3. Mariscal, C.; Petropanagos, A. Monash Bioeth. Rev. [Online] 2016, 102, 1-16. http://link.springer.com/article/10.1007%2Fs40592-016-0062-2 (accessed Oct. 5, 2016).
  4. Broad Institute. https://www.broadinstitute.org/what-broad/areas-focus/project-spotlight/questions-and-answers-about-crispr (accessed Sept. 30, 2016).
  5. Pak, E. CRISPR: A game-changing genetic engineering technique. Harvard Medical School, July 31, 2015. http://sitn.hms.harvard.edu/flash/2014/crispr-a-game-changing-genetic-engineering-technique/ (accessed Sept. 30, 2016)
  6. Hendel, A. et al. Nature Biotech. 2015, 33, 985-989.
  7. Kleinstiver, B. et al. Nature. 2016, 529, 490-495.
  8. Shapiro, B. Genome Biology. [Online] 2015, 228. N.p. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4632474/ (accessed Nov. 1, 2016).
  9. Reardon, S. Nature. [Online] 2016, 7593, 160-163. http://www.nature.com/news/welcome-to-the-crispr-zoo-1.19537 (accessed Nov. 1, 2016).
  10. Baylor College of Medicine. https://www.bcm.edu/research/advanced-technology-core-labs/lab-listing/mouse-embryonic-stem-cell-core/services/crispr-service-schedule (accessed Sept. 30, 2016).
  11. Ledford, H. Nature. [Online] 2015, 7554, 20-24. http://www.nature.com/news/crispr-the-disruptor-1.17673 (accessed Oct. 11, 2016).
  12. Cyranoski, D. Nature. [Online] 2016, 7613, 476-477. http://www.nature.com/news/chinese-scientists-to-pioneer-first-human-crispr-trial-1.20302 (accessed Sept. 30, 2016).
  13. Kolata, G. Chinese Scientists Edit Genes of Human Embryos, Raising Concerns. The New York Times, April 23, 2015. http://www.nytimes.com/2015/04/24/health/chinese-scientists-edit-genes-of-human-embryos-raising-concerns.html?_r=0 (accessed Oct. 2, 2016).
  14. Stein, R. Breaking Taboo, Swedish Scientist Seeks To Edit DNA of Healthy Human Embryos. NPR, Sept. 22, 2016. http://www.npr.org/sections/health-shots/2016/09/22/494591738/breaking-taboo-swedish-scientist-seeks-to-edit-dna-of-healthy-human-embryos (accessed Sept. 22, 2016).

Healthcare Reforms for the Mentally Ill

Neuropsychiatric illnesses are some of the most devastating conditions in the world. Despite being non-communicable, mental and neurological conditions are estimated to account for approximately 30.8% of all years lived with disability.1 Furthermore, in developed nations like the United States, mental disorders have been reported to erode around 2.5% of the yearly gross national product, a figure that does not account for the opportunity cost borne by families who care for patients long-term.1 If left untreated, many patients with neuropsychiatric illnesses cannot find gainful employment; their aberrant behavior is stigmatized and prevents professional and personal advancement. In fact, about three times as many individuals living with mental illness are held in state and local prisons as in rehabilitative psychiatric institutions.2

Though the Affordable Care Act has substantially decreased the number of uninsured individuals in the U.S., millions of people still fall into what is known as the Medicaid gap.3 People in this group make too much money to qualify for Medicaid, but too little to qualify for the government tax credits that subsidize the purchase of an insurance plan. In an attempt to close this hole, the federal government offers aid to states to expand their Medicaid programs as needed.4 States that have accepted the federally sponsored Medicaid expansion have seen sharp reductions in their uninsured populations, which has directly improved quality of life for the least fortunate members of society. However, in the many states that continue to reject federal aid, the situation is considerably worse--especially for the mentally ill.

Mental health patients are especially vulnerable to falling into the Medicaid gap. Many patients suffering from psychiatric conditions are unable to find stable employment. According to a March 2016 report by the Department of Health and Human Services, there are 1.9 million low-income, uninsured individuals with mental health disorders who cannot access proper healthcare resources.5 These impoverished psychiatric patients are initially eligible for Medicaid. However, once their treatment takes effect and they become employed, they may pass the Medicaid income threshold. If their private health insurance does not cover the cost of their psychiatric treatment, patients will relapse, creating a vicious cycle that is exceptionally difficult to break.6

Furthermore, many psychiatric illnesses first present during adolescence or early adulthood, right around the time students leave home for college. During this initial presentation, many students lack the support system necessary to deal with their condition, causing many to drop out of college or receive poor grades. Families back home often chalk these conditions up to poor adjustment to a brand-new college environment, preventing psychiatric patients from receiving proper treatment.6 On their own, many students with psychiatric conditions delay seeking treatment, fearing being labeled “crazy” or “insane” by their peers.

Under the status quo, psychiatric patients face significant barriers to care. Because the Medicaid gap is unfortunately subject to political maneuvering, it probably will not be fixed soon. In the meantime, the United States could fund the expansion of Assertive Community Treatment programs, which provide medication, therapy, and social support in an outpatient setting.8 Such programs dramatically reduce hospitalization times for psychiatric patients, alleviating the costs of medical treatment. Funding these programs would help keep insurance issues from deterring patients from treatment.

In the current system, psychiatric patients face numerous deterrents to receiving treatment, from lack of family support to significant social stigma. Allowing access to health insurance to become yet another barrier to care is a significant oversight of the current system and ought to be corrected.

References

  1. World Health Organization. Chapter 2: Burden of Mental and Behavioural Disorders. 2001. http://www.who.int/whr/2001/chapter2/en/index3.html (accessed Mar. 20, 2016).
  2. Torrey, E. F.; Kennard, A. D.; Elsinger, D.; Lamb, R.; Pavle, J. More Mentally Ill Persons Are in Jails and Prisons Than Hospitals: A Survey of the States.
  3. Kaiser Family Foundation. Key Facts about the Uninsured Population. 2015. http://kff.org/uninsured/fact-sheet/key-facts-about-the-uninsured-population/ (accessed Mar. 25, 2016).
  4. Ross, J. Obamacare mandated better mental health-care coverage. It hasn't happened. The Washington Post, 2015. https://www.washingtonpost.com/news/the-fix/wp/2015/10/07/obamacare-mandated-better-mental-health-care-coverage-it-hasnt-happened/ (accessed Mar. 24, 2016).
  5. Dey, J.; Rosenoff, E.; West, K. Benefits of Medicaid Expansion for Behavioral Health. https://aspe.hhs.gov/sites/default/files/pdf/190506/BHMedicaidExpansion.pdf (accessed Mar. 28, 2016).
  6. Taskiran, S. Interview by Rishi Suresh. Istanbul, Mar. 3, 2016.
  7. Gonen, O. G. Interview by Rishi Suresh. Houston, Apr. 1, 2016.
  8. Assertive Community Treatment. https://www.centerforebp.case.edu/practices/act (accessed Jan. 2017).

Modeling Climate Change: A Gift From the Pliocene

Believe it or not, we are still recovering from the most recent ice age that occurred between 21,000 and 11,500 years ago. And yet, in the past 200 years, the Earth's average global temperature has risen by 0.8 ºC at a rate more than ten times faster than the average ice-age recovery rate.1 This increase in global temperature, which shows no signs of slowing down, will have tremendous consequences for our planet’s biodiversity and overall ecology.

Climate change is caused by three main factors: changes in the position of the Earth’s continents, variations in the Earth’s orbital positions, and increases in the atmospheric concentration of “greenhouse gases”, such as carbon dioxide.2 In the past 200 years, the Earth’s continents have barely moved and its orbit around the sun has not changed.2 Therefore, to explain the 0.8 ºC increase in global average temperature that has occurred, the only reasonable conclusion is that there has been a change in the concentration of greenhouse gases.

Decades of research compiled by the Intergovernmental Panel on Climate Change (IPCC) support this conclusion. The IPCC Fourth Assessment Report concluded that the increase in global average temperature is very likely due to the observed increase in anthropogenic greenhouse gas concentrations. The report also predicts that global temperatures will increase by between 1.1 ºC and 6.4 ºC by the end of the 21st century.2

Though we know what is causing the warming, we are unsure of its effects. Geologists and geophysicists at the US Geological Survey (USGS) are attempting to address this uncertainty through the Pliocene Research, Interpretation, and Synoptic Mapping (PRISM) program.3

The middle of the Pliocene Epoch occurred roughly 3 million years ago--a relatively short time on the geological time scale. Between the Pliocene and our current Holocene Epoch, the continents have barely drifted, the planet has maintained a nearly identical orbit around the sun, and the types of organisms living on Earth have remained relatively constant.2 From these three commonalities, we can draw three conclusions. Because the continents have barely drifted, global heat distribution through oceanic circulation is the same. Because the planet’s orbit is essentially the same, glacial-interglacial cycles have not been altered. And because the types of organisms have remained relatively constant, the biodiversity of the Pliocene is comparable to our own.

While the two epochs share many similarities, the main difference between them is that the Pliocene was about 4 ºC warmer at the equator and 10 ºC warmer at the poles.4 Because the Pliocene had conditions similar to today’s but was warmer, our planet’s ecology may, by the end of the century, begin to resemble that of the Pliocene. This idea is supported by the research done through the USGS’s PRISM program.3

Being able to study a geological epoch so similar to our own and apply its lessons to our current environment is a unique and exciting opportunity. PRISM uses multiple techniques to extract as much data about the Pliocene as possible. The concentration of magnesium ions, the number of carbon double bonds in organic molecules called alkenones, and the concentration and distribution of fossilized pollen all provide a wealth of information about past climate. However, the single most useful source of such information is planktic foraminifera, or forams.5

Forams, abundant during the Pliocene, are unicellular, ocean-dwelling organisms with calcium carbonate shells. Fossilized forams are extracted by deep-sea core drilling; the types and concentrations of the forams recovered reveal vital information about the temperature, salinity, and productivity of the oceans during their lifetimes.5 By performing factor analysis and other statistical analyses on this information, PRISM has created a model of the Pliocene that covers both oceanic and terrestrial areas, providing a broad view of our planet as it existed 3 million years ago. Using this model, scientists can determine where temperatures will increase the most and what impact such an increase will have on the life that can exist in those areas.
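
As a rough illustration of how an assemblage of temperature-sensitive species can be turned into a temperature estimate, the toy transfer function below weights hypothetical species “preferences” by their relative abundances. It is a stand-in for intuition only, not PRISM’s actual statistical method, and all numbers are invented:

```python
import numpy as np

# Hypothetical preferred temperatures (deg C) for a warm-water, temperate,
# and cold-water foram species, respectively.
calibration = np.array([28.0, 18.0, 6.0])

def estimate_sst(abundances):
    """Weighted average of species temperature preferences by assemblage fraction."""
    fractions = np.asarray(abundances, dtype=float) / np.sum(abundances)
    return float(fractions @ calibration)

print(estimate_sst([60, 30, 10]))   # warm-water-dominated assemblage -> ~22.8
print(estimate_sst([10, 30, 60]))   # cold-water-dominated assemblage -> ~11.8
```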

Since its inception in 1989, PRISM has predicted, with proven accuracy, two main trends. The first is that average temperatures will increase the most at the poles, with areas nearest the equator experiencing the smallest increase.5 The second is that tropical plants will expand outward from the equator, taking root in the middle and higher latitudes.5

There are some uncertainties associated with the research behind PRISM. Several assumptions were made, such as uniformitarianism, the idea that the natural laws and physical processes operating today also operated in the past. The researchers also assumed that the ecological tolerances of certain key species, such as forams, have not changed significantly in the last 3 million years. Even with these assumptions, an important discrepancy exists between the Pliocene and our Holocene: the Pliocene reached its warmth gradually and remained relatively stable throughout, while our temperatures are rising at a much more rapid rate.

The film industry has fetishized climate change, predicting giant hurricanes and an instant ice age, as seen in the films 2012 and The Day After Tomorrow. Fortunately, nothing as cataclysmic will occur. However, a rise in global average temperature and a change in our ecosystems is nothing to be ignored or dismissed as normal. It is only through the research done by the USGS via PRISM and similar systems that our species can be prepared for the coming decades of change.

References

  1. Earth Observatory. http://earthobservatory.nasa.gov/Features/GlobalWarming/page3.php (accessed Oct. 1, 2016).
  2. Pachauri, R. K., et al. IPCC Fourth Assessment Report 2007, 104.
  3. PRISM4D Collaborating Institutions. Pliocene Research, Interpretation and Synoptic Mapping. http://geology.er.usgs.gov/egpsc/prism/ (accessed Oct. 3, 2016).
  4. Monroe, R. What Does 400 PPM Look Like? https://scripps.ucsd.edu/programs/keelingcurve/2013/12/03/what-does-400-ppm-look-like/ (accessed Oct. 19, 2016).
  5. Robinson, M. M. Am. Sci. 2011, 99, 228.

Machine Minds: An Exploration of Artificial Neural Networks

Abstract:

An artificial neural network is a computational method that mirrors the way a biological nervous system processes information. Artificial neural networks are used in many different fields to process large sets of data, often providing useful analyses that allow for prediction and identification of new data. However, neural networks struggle to provide clear explanations of why particular outcomes occur. Despite these difficulties, neural networks are valuable data analysis tools applicable to a variety of fields. This paper explores the general architecture, advantages, disadvantages, and applications of neural networks.

Introduction:

Artificial neural networks attempt to mimic the functions of the human brain. Biological nervous systems are composed of building blocks called neurons, which communicate via axons and dendrites. When a biological neuron receives a message, it sends an electrical signal down its axon; if this signal exceeds a threshold value, it is converted into a chemical signal that is passed to nearby neurons.2 Artificial neural networks, while ultimately defined by formulas and data structures, can similarly be conceptualized as being composed of artificial neurons that serve functions analogous to their biological counterparts. When an artificial neuron receives data, and the resulting change in its activation level exceeds a defined threshold value, the neuron creates an output signal that propagates to the connected artificial neurons.2 Just as the human brain learns from past experience and applies that information in new settings, artificial neural networks can adapt their behavior until their responses are both accurate and consistent in new situations.1

While artificial neural networks are structurally similar to their biological counterparts, artificial neural networks are distinct in several ways. For example, certain artificial neural networks send signals only at fixed time intervals, unlike biological neural networks, in which neuronal activity is variable.3 Another major difference between biological neural networks and artificial neural networks is the time of response. For biological neural networks, there is often a latent period before a response, whereas in artificial neural networks, responses are immediate.3

Neural networks are useful in a wide range of fields that involve large datasets, from biological systems to economic analysis. They are practical for problems involving pattern recognition, such as predicting data trends.3 Neural networks are also effective when data are error-prone, as in cognitive software such as speech and image recognition.3

Neural Network Architecture:

One popular neural network design is the multilayer perceptron (MLP). In an MLP, each artificial neuron outputs a weighted sum of its inputs based on the strength of its synaptic connections.1 Synaptic strength is set by the formulaic design of the network and is expressed as a weight: stronger, more informative connections carry larger weights and are therefore more influential in the weighted sum. The output of the neuron depends on whether the weighted sum exceeds the neuron’s threshold value.1 The MLP design was originally composed of perceptrons, artificial neurons that produce a binary output of zero or one. Perceptrons have limited use in a neural network model because small changes in the input can drastically alter the output of the system. Most current MLP systems therefore use sigmoid neurons instead: sigmoid neurons produce output values anywhere between zero and one, so small changes in the input do not radically alter the outcome of the model.4
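
The difference between the two neuron types can be seen in a few lines of Python (a minimal sketch with made-up weights, not any particular library’s implementation): the perceptron’s output can flip from 0 to 1 with a tiny change in input, while the sigmoid neuron’s output shifts smoothly.

```python
import numpy as np

def perceptron(x, w, threshold):
    """Binary artificial neuron: outputs 1 only if the weighted sum exceeds the threshold."""
    return 1 if np.dot(w, x) > threshold else 0

def sigmoid_neuron(x, w, bias):
    """Sigmoid neuron: squashes the weighted sum into a smooth value between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + bias)))

x = np.array([0.7, 0.2, 0.9])       # example inputs
w = np.array([0.5, -0.3, 0.8])      # hypothetical synaptic weights

print(perceptron(x, w, threshold=1.0))    # 1 or 0; small input changes can flip it
print(sigmoid_neuron(x, w, bias=-1.0))    # graded output that varies smoothly
```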

Architecturally, the MLP is a feedforward neural network.1 In a feedforward design, the units are arranged so that signals travel exclusively from input to output. These networks are composed of a layer of input neurons, a layer of output neurons, and a series of hidden layers in between. The hidden layers are composed of internal neurons that further process the data within the system. The complexity of the model varies with the number of hidden layers and the number of units in each layer.1
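
A feedforward pass through such a network is just the repeated application of “weighted sum, then activation,” layer by layer. The sketch below, with arbitrary random weights and illustrative layer sizes, shows the idea:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Propagate an input through each (weights, biases) pair; signals move only forward."""
    activation = x
    for W, b in layers:
        activation = sigmoid(W @ activation + b)
    return activation

rng = np.random.default_rng(0)
# 4 inputs -> 5 hidden units -> 2 outputs (sizes chosen only for illustration)
layers = [(rng.normal(size=(5, 4)), np.zeros(5)),
          (rng.normal(size=(2, 5)), np.zeros(2))]

print(forward(np.array([0.1, 0.4, 0.3, 0.9]), layers))   # two output activations
```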

Once the number of layers and the number of units in each layer are determined, the threshold values and synaptic weights of an MLP must be set using training algorithms so that the errors of the system are minimized.4 Training algorithms use a known dataset (the training data) to adjust the system until the differences between the expected and actual outputs are minimized.4 Training allows a neural network to be constructed with near-optimal weights, which lets it make accurate predictions when presented with new data. One such training algorithm is backpropagation, in which the algorithm follows the gradient of the error surface downhill until a minimum is found.1 The difficult part of backpropagation is choosing the step size: larger steps can give faster runtimes but can overstep the solution, while smaller steps lead to much slower runtimes but are more likely to settle into a correct solution.1
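
The step-size trade-off is easiest to see on a toy one-dimensional error surface. This is plain gradient descent on an invented quadratic, not full backpropagation through a network: a moderate step converges toward the minimum, while an oversized step repeatedly overshoots it.

```python
def gradient_descent(grad, w0, step_size, n_steps=50):
    """Repeatedly move a weight against the gradient of the error surface."""
    w = w0
    for _ in range(n_steps):
        w = w - step_size * grad(w)
    return w

# Toy error surface E(w) = (w - 3)^2 with gradient 2(w - 3); its minimum is at w = 3.
grad = lambda w: 2 * (w - 3)

print(gradient_descent(grad, w0=0.0, step_size=0.1))   # ends up close to 3
print(gradient_descent(grad, w0=0.0, step_size=1.1))   # overshoots and diverges
```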

While feedforward neural network designs like MLP are common, there are many other neural network designs. These other structures include examples such as recurrent neural networks, which allow for connections between neurons in the same layer, and self-organizing maps, in which neurons attain weights that retain characteristics of the input. All of these network types also have variations within their specific frameworks.5 The Hopfield network and Boltzmann machine neural network architectures utilize the recurrent neural network design.5 While feedforward neural networks are the most common, each design is uniquely suited to solve specific problems.

Disadvantages:

One of the main problems with neural networks is that, for the most part, they have limited ability to identify causal relationships explicitly. Developers feed these networks large swathes of data and allow the networks to determine independently which input variables are most important.10 However, it is difficult for a network to indicate to its developers which variables mattered most in calculating the outputs. While some techniques exist for analyzing the relative importance of each neuron, they still do not expose causal relationships between variables as clearly as comparable data analysis methods such as logistic regression.10

Another problem with neural networks is the tendency to overfit. Overfitting of data occurs when a data analysis model such as a neural network generates good predictions for the training data but worse ones for testing data.10 Overfitting happens because the model accounts for irregularities and outliers in the training data that may not be present across actual data sets. Developers can mitigate overfitting in neural networks by penalizing large weights and limiting the number of neurons in hidden layers.10 Reducing the number of neurons in hidden layers reduces overfitting but also limits the ability of the neural network to model more complex, nonlinear relationships.10
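
One common way to penalize large weights is to add an L2 term to the training loss, as in this minimal sketch (the weights, targets, and penalty strength are all hypothetical):

```python
import numpy as np

def mse(y_pred, y_true):
    return np.mean((y_pred - y_true) ** 2)

def penalized_loss(y_pred, y_true, weight_matrices, lam=0.01):
    """Prediction error plus an L2 penalty that discourages large weights."""
    l2_penalty = lam * sum(np.sum(W ** 2) for W in weight_matrices)
    return mse(y_pred, y_true) + l2_penalty

weights = [np.array([[2.0, -3.0], [0.5, 1.5]])]   # hypothetical layer weights
print(penalized_loss(np.array([0.9, 0.1]), np.array([1.0, 0.0]), weights))
```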

Applications:

Artificial neural networks allow for processing of large amounts of data, making them useful tools in many fields of research. For example, the field of bioinformatics relies heavily on neural network pattern recognition to predict various proteins’ secondary structures. One popular algorithm used for this purpose is Position Specific Iterated Basic Local Alignment Search Tool (PSI-BLAST) Secondary Structure Prediction (PSIPRED).6 This algorithm uses a two-stage structure that consists of two three-layered feedforward neural networks. The first stage of PSIPRED involves inputting a scoring matrix generated by using the PSI-BLAST algorithm on a peptide sequence. PSIPRED then takes 15 positions from the scoring matrix and uses them to output three values that represent the probabilities of forming the three protein secondary structures: helix, coil, and strand.6 These probabilities are then input into the second stage neural network along with the 15 positions from the scoring matrix, and the output of this second stage neural network includes three values representing more accurate probabilities of forming helix, coil, and strand secondary structures.6
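
The windowing step of this kind of predictor can be sketched as follows. The code slides a 15-row window over a fake, randomly generated scoring matrix and hands each flattened window to a stand-in classifier; the stand-in simply returns uniform probabilities and is not the trained PSIPRED network:

```python
import numpy as np

def sliding_windows(profile, width=15):
    """Yield a flattened window of `width` rows centered on each residue.

    `profile` has one row per residue and 20 columns (one score per amino acid);
    zero-padding lets residues near the ends get a full-size window.
    """
    half = width // 2
    pad = np.zeros((half, profile.shape[1]))
    padded = np.vstack([pad, profile, pad])
    for i in range(profile.shape[0]):
        yield padded[i:i + width].ravel()

def predict_secondary_structure(window, predict_probs):
    """Map a window to helix/strand/coil probabilities via a supplied classifier."""
    return dict(zip(("helix", "strand", "coil"), predict_probs(window)))

dummy_net = lambda window: np.array([1 / 3, 1 / 3, 1 / 3])   # stand-in classifier

profile = np.random.default_rng(1).normal(size=(30, 20))     # fake 30-residue profile
first_window = next(sliding_windows(profile))
print(predict_secondary_structure(first_window, dummy_net))
```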

Neural networks are used not only to predict protein structures, but also to analyze genes associated with the development and progression of cancer. More specifically, researchers and doctors use artificial neural networks to identify the type of cancer associated with certain tumors; such identification supports correct diagnosis and treatment of each specific cancer.7 These networks enable researchers to match genomic characteristics from large datasets to specific types of cancer and to predict those cancer types from new data.7

In bioinformatic scenarios such as the above two examples, trained artificial neural networks quickly provide high-quality results for prediction tasks.6 These characteristics of neural networks are important for bioinformatics projects because bioinformatics generally involves large quantities of data that need to be interpreted both effectively and efficiently.6

Artificial neural networks are also applicable in fields outside the natural sciences, such as finance. They can be used to predict subtle trends such as variations in the stock market or whether an organization will face bankruptcy.8,9 For such tasks, neural networks can provide more accurate predictions more efficiently than other prediction models.9

Conclusion:

Over the past decade, artificial neural networks have become more refined and are being used in a wide variety of fields. Artificial neural networks allow researchers to find patterns in the largest of datasets and utilize the patterns to predict potential outcomes. These artificial neural networks provide a new computational way to learn and understand diverse assortments of data and allow for a more accurate and effective grasp of the world.

References

  1. Ayodele, T. O. Types of Machine Learning Algorithms. In New Advances in Machine Learning; Zhang, Y., Ed.; InTech, 2010. DOI: 10.5772/9385. http://www.intechopen.com/books/new-advances-in-machine-learning/types-of-machine-learning-algorithms
  2. Muller, B.; Reinhardt, J. Neural Networks: An Introduction.
  3. Urbas, J. V. Article.
  4. Nielsen, M. A. Neural Networks and Deep Learning; Determination Press, 2015.
  5. Mehrotra, K.; Mohan, C. Elements of Artificial Neural Networks.
  6. Chen, K.; Kurgan, L. A. Neural Networks in Bioinformatics.
  7. Oustimov, A.; Vu, V. Artificial Neural Networks in the Cancer Genomics Frontier.
  8. Ma, J. An Enhanced Artificial Neural Network for Stock Price Predictions.
  9. Mansouri, A. A Comparison of Artificial Neural Network Model and Logistic Regression in Prediction of Companies' Bankruptcy.
  10. Tu, J. V. Advantages and Disadvantages of Using Artificial Neural Networks versus Logistic Regression for Predicting Medical Outcomes.

Telomeres: Ways to Prolong Life

Two hundred years ago, the average life expectancy hovered between 30 and 40 years, as it had for centuries before. Medical knowledge was largely limited to superstition and folk cures, and the science behind what actually caused disease and death was lacking. Since then, the average human lifespan has skyrocketed thanks to scientific advancements in health care, such as an understanding of bacteria and infection. Today, new discoveries are being made in cellular biology that, in theory, could lead us to the next revolutionary leap in lifespan. Most promising among these recent discoveries are the manipulation of telomeres to slow the aging process and the use of telomerase to identify cancerous cells.

Before understanding how telomeres can be used to increase the average human lifespan, it is essential to understand what a telomere is. When cells divide, their DNA must be copied so that all of the daughter cells share an identical DNA sequence. However, the DNA cannot be copied all the way to the end of the strand, so some DNA at the end of the sequence is lost with every replication.1 To prevent valuable genetic code from being cut off during cell division, our chromosomes end in telomeres, repetitive stretches of non-coding nucleotides that can be trimmed away without consequence to the meaningful part of the DNA. Repeated cell replication causes these protective telomeres to become shorter and shorter, until valuable genetic code is eventually cut off, causing the cell to malfunction and ultimately die.1 The enzyme telomerase rebuilds these constantly degrading telomeres, but its activity is relatively low in normal cells compared with cancer cells.2
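
The end-replication problem can be pictured with a toy counter: trim a fixed number of base pairs per division and stop when the telomere reaches a critical length. All of the numbers below are hypothetical and chosen only to illustrate the idea:

```python
def divisions_until_senescence(telomere_bp=10_000, loss_per_division_bp=70,
                               critical_bp=3_000):
    """Count cell divisions before the telomere erodes to a critical length."""
    divisions = 0
    while telomere_bp > critical_bp:
        telomere_bp -= loss_per_division_bp   # each replication trims the chromosome end
        divisions += 1
    return divisions

print(divisions_until_senescence())   # 100 divisions with these toy numbers
```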

The applications of telomerase manipulation have come up fairly recently, following the discovery of the functions of both telomeres and telomerase in the mid-1980s by Nobel Prize winners Elizabeth Blackburn, Carol Greider, and Jack Szostak.3 Blackburn discovered a sequence at the end of chromosomes that was repeated several times, but could not determine its purpose. At the same time, Szostak was observing the degradation of minichromosomes, chromatin-like structures that replicate during cell division when introduced into a yeast cell. Together, they combined their work by isolating Blackburn’s repeating DNA sequences, attaching them to Szostak’s minichromosomes, and placing the minichromosomes back inside yeast cells. With the new addition to their DNA sequence, the minichromosomes did not degrade as they had before, demonstrating that the purpose of the repeating DNA sequence, dubbed the telomere, was to protect the chromosome and delay cellular aging.

Because of the relationship between telomeres and cellular aging, many scientists theorize that cell longevity could be enhanced by finding a way to control telomere degradation and keep protective caps on the ends of cellular DNA indefinitely.1 Were this to be accomplished, cells could divide indefinitely without losing valuable genetic code, which would theoretically extend the life of the organism as a whole.

In addition, studies of telomeres have revealed new ways of combating cancer. Although there are many subtypes of cancer, all of them involve the uncontrolled, rapid division of cells. Despite this rapid division, the telomeres of cancer cells do not shorten like those of normal cells; otherwise, such rapid division would be impossible. Cancer cells are likely able to maintain their telomeres because of their higher levels of telomerase.3 This knowledge allows scientists to use telomerase levels as an indicator of cancerous cells and then target those cells. Vaccines that target telomerase production have the potential to become the newest weapon against cancer.2 Cancerous cells continue to proliferate at an uncontrollable rate even when telomerase production is interrupted; however, without telomerase to protect their telomeres from degradation, these cells eventually die.

As the scientific community advances its ability to control telomeres, it comes closer to controlling the process of cellular reproduction, one of the many factors associated with human aging and cancerous cells. With knowledge in these areas continuing to develop, the possibility of completely eradicating cancer and slowing the aging process is becoming more and more realistic.

References

  1. Genetic Science Learning Center. Learn.Genetics. http://learn.genetics.utah.edu (accessed Oct. 5, 2016).
  2. Shay, J. W.; Wright, W. E. Nat. Rev. Drug Discov. [Online] 2006, 5. http://www.nature.com/nrd/journal/v5/n7/full/nrd2081.html (accessed Oct. 16, 2016).
  3. The 2009 Nobel Prize in Physiology or Medicine - Press Release. The Nobel Prize. https://www.nobelprize.org/nobel_prizes/medicine/laureates/2009/press.html (accessed Oct. 4, 2016).

Astrocytes: Shining the Spotlight on the Brain’s Rising Star

We have within us the most complex and inspiring stage to ever be set: the human brain. The cellular components of the brain act as players, interacting through chemical and electrical signaling to elicit emotions and convey information. Although most of our attention has in the past been focused on neurons, which were erroneously presumed to act alone in their leading role, scientists are slowly realizing that astrocytes—glial cells in the brain that were previously assumed to only have a supportive role in association with neurons—are so much more than merely supporting characters.

Though neurons are the stars, most of the brain is actually composed of supportive cells like microglia, oligodendrocytes, and, most notably, astrocytes. Astrocytes, whose formal name is a misnomer given that modern imaging technology reveals they actually maintain a branch-like shape rather than a star-like one, exist as one of three mature types in the grey matter, white matter, or retina. Structurally, the grey matter astrocyte variant exhibits bushy, root-like tendrils and a spherical shape. The white matter variant, commonly found in the hippocampus, favors finer extensions called processes. The retinal variant features an elongated structure.¹

Functionally, astrocytes were previously believed to play a solely supportive role, as they constitute a large percentage of the glial cells present in the brain. Glial cells are essentially all of the non-neural cells in the brain that assist in basic functioning; they themselves are not electrically excitable. However, current research suggests that astrocytes play far more than merely a supporting role in the brain. Astrocytes and neurons directly interact to interpret stimuli and store memories⁴, among many other yet undiscovered tasks.

Although astrocytes are not electrically excitable, they communicate with neurons via calcium signaling and the neurotransmitter glutamate.² In calcium signaling, intracellular calcium in astrocytes is released upon excitation and propagates in waves that move through neighboring astrocytes and neurons. Neurons experience a responsive increase in intracellular calcium if they are directly touching affected astrocytes, as the signal is communicated via gap junctions rather than synaptically. Such signaling is unidirectional; calcium excitation can move from astrocyte to neuron, but not from neuron to astrocyte.³ The orientation of astrocytes in different regions of the brain and their proximity to neurons allow them to form close communication networks that help information travel throughout the central nervous system.

Astrocytes in the hippocampus play a role in memory development. They act as intermediary cells in a neural inhibitory circuit that uses acetylcholine, glutamate, and gamma-aminobutyric acid (GABA) to solidify experiential learning and memory formation. Disruption of cholinergic signaling, the signaling related to acetylcholine, prevents the formation of memories in the dentate gyrus of the hippocampal formation. Astrocytes act as mediators that convert cholinergic inputs into glutamatergic activation of neurons.⁴ Without the assistance of astrocytic networks in close association with neurons, memory formation and long-term potentiation would be far less efficient, if possible at all.

Astrocytes’ ability to interpret and release chemical neurotransmitters, especially glutamate, allows them to regulate the intensity of synaptic firing in neurons.⁵ Increased glutamate uptake by astrocytes reduces synaptic strength in associated neurons by decreasing neuronal concentration of glutamate.⁶ Regulation of synaptic strength in firing is crucial for healthy brain function. If synapses fire too much or too powerfully, they may overwhelm the brain. Conversely, if synapses fire too infrequently or not strongly enough, messages might not make their way throughout the central nervous system. The ability of astrocytes to modulate synaptic activity through selective glutamate interactions puts them in an integral position to assist in consistent and efficient transmission of information throughout the human body.

Through regulation of neurotransmitters and psychoactive chemicals in the brain, astrocytes help maintain homeostasis in the central nervous system. Potassium buffering and pH balancing are the major ways astrocytes maintain optimal conditions for brain function.⁷ Astrocytes compensate for the slow re-uptake of potassium by neurons, clearing the extracellular space of free potassium in response to neuronal activity. Re-uptake of these ions is extremely important to brain function, as synaptic transmission relies on rapid changes in membrane potential along neuronal axons.

Due to their role in synaptic regulation and their critical position in the brain network, astrocytes also have the potential to aid in therapies for neurological disorders. For example, epileptic seizures have been found to be related to an excitatory loop between neurons and astrocytes. Focal ictal discharges, the brain activity responsible for epileptic seizures, are correlated with hyperactivity in neurons as well as an increase in intracellular calcium in nearby astrocytes; the calcium oscillations then spread to neighboring astrocyte networks, perpetuating the ictal discharge and continuing the seizure. Astrocytes in epileptic brain tissue exhibit structural changes that may favor such a positive feedback loop. Inhibition of calcium uptake in astrocytes, and the consequent decrease in release of glutamate and ATP, is linked to suppression of ictal discharges and therefore to a decrease in the severity and occurrence of epileptic seizures.⁸ Furthermore, it is evident that astrocyte activity also plays a role in the memory loss associated with Alzheimer’s Disease. Although astrocytes in the hippocampus contain low levels of the neurotransmitter GABA under normal conditions, hyperactive astrocytes near amyloid plaques in affected individuals exhibit increased levels of GABA that are not seen in other types of glial cells. GABA is the main inhibitory neurotransmitter in the brain, and abnormal increases in GABA are associated with Alzheimer’s Disease; introducing antagonist molecules has been shown to reduce memory impairment, but at the cost of inducing seizures.⁹ Since there is a shift in GABA release by astrocytes between healthy and diseased individuals, astrocytes could be the key to remedying neurodegenerative conditions like Alzheimer’s.

In addition to aiding in treatment of neurological disorders, astrocytes may also help stroke victims. Astrocytes ultimately support damaged neurons by donating their mitochondria to the neurons.¹⁰ Mitochondria produce adenosine triphosphate (ATP) and act as the energy powerhouse in eukaryotic cells; active cells like neurons cannot survive without them. Usually neurons accommodate their exceptionally large energy needs by multiplying their intracellular mitochondria via fission. However, when neurons undergo stress or damage, as in the case of stroke, the neuron is left without its source of energy. New research suggests that astrocytes come to the rescue by releasing their own mitochondria into the extracellular environment in response to high levels of the enzyme CD38, so that damaged neurons can absorb the free mitochondria and survive the damage.¹¹ Astrocytes also help restore neuronal mitochondria and ATP production post-insult by utilizing lactate shuttles, in which astrocytes generate lactate through anaerobic respiration and then pass the lactate to neurons where it can be used as a substrate for oxidative metabolism¹². Such a partnership between astrocytes and neurons presents researchers with the option of using astrocyte-targeted therapies to salvage neuronal systems in stroke victims and others afflicted by ailments associated with mitochondrial deficiencies in the brain.

Essentially, astrocytes are far more than the background supporters they were once thought to be. Before modern technological developments, the capabilities and potential of astrocytes went woefully unnoticed. Astrocytes interact both directly and indirectly with neurons through chemical signaling to create memories, interpret stimuli, regulate signaling, and maintain a healthy central nervous system. A greater understanding of the critical role astrocytes play in the human brain could allow scientists to develop astrocyte-targeted therapies. As astrocytes slowly inch their way into the spotlight of neuroscientific research, there is still much to be discovered.

References

  1. Kimelberg, H.K.; Nedergaard, M. Neurotherapeutics 2010, 7, 338-353
  2. Schummers, J. et al. Science 2008, 320, 1638-1643
  3. Nedergaard, M. Science 1994, 263, 1768+
  4. Ferrarelli, L. K. Sci. Signal 2016, 9, ec126
  5. Gittis, A. H.; Brasier, D. J. Science 2015, 349, 690-691
  6. Pannasch, U. et al. Nature Neuroscience 2014, 17, 549+
  7. Kimelberg, H.K.; Nedergaard, M. Neurotherapeutics 2010, 7(4), 338-353
  8. Gomez-Gonzalo, M. et al. PLoS Biology 2010, 8,
  9. Jo, S. et al. Nature Medicine 2014, 20, 886+
  10. VanHook, A. M. Sci. Signal 2016, 9, ec174
  11. Hayakawa, K. et al. Nature 2016, 535, 551-555
  12. Genc, S. et al. BMC Systems Biology 2011, 5, 162

Surviving Without the Sixth Sense

Though references to the “sixth sense” often bring images of paranormal phenomena to mind, the scientific world has bestowed this title on our innate awareness of our own bodies in space. Proprioception, the official name of this sense, is what allows us to play sports and navigate in the dark. Like our other five senses, our capability for spatial awareness has become so automatic that we hardly ever think about it. But scientists at the National Institutes of Health (NIH) have made breakthroughs concerning a genetic disorder that causes people to lack this sense, leading to skeletal abnormalities, balance difficulties, and even the inability to discern some forms of touch.1

The gene PIEZO2 has been associated with the body’s ability to sense touch and coordinate physical actions and movement. While there is not a substantial amount of research about this gene, previous studies on mice show that it is instrumental in proprioception.2 Furthermore, NIH researchers have recently attributed a specific phenotype to a mutation in PIEZO2, opening a potential avenue to unlock its secrets.

Pediatric neurologist Carsten G. Bönnemann, a senior investigator at the NIH National Institute of Neurological Disorders and Stroke, had been studying two patients with remarkably similar cases when he met Alexander Chesler at a lecture. Chesler, an investigator at the NIH National Center for Complementary and Integrative Health, joined Bönnemann in performing a series of genetic and practical tests to investigate the disorder.1

The subjects examined were an 8-year-old girl and an 18-year-old woman from different backgrounds and geographical areas. Even though these patients were not related, they both exhibited a set of similar and highly uncommon phenotypes. For example, each presented with scoliosis - unusual sideways spinal curvature - accompanied by fingers, feet, and hips that could bend at atypical angles. In addition to these physical symptoms, the patients experienced difficulty walking, a substantial lack of coordination, and unusual responses to physical touch.1 These symptoms are the result of PIEZO2 mutations that block the gene’s normal activity or production. Using full genome sequencing, researchers found that both patients have at least one recessively inherited nonsense variant in the coding region of PIEZO2.1 But because these patients represent the first well-documented cases of specific proprioceptive disorders, there is not an abundance of research about the gene itself. Previous studies indicate that PIEZO2 encodes a mechanosensitive protein - that is, one that generates electrical nerve signals in response to detected changes in factors such as cell shape.2 This function underlies many of our physical capabilities, including spatial awareness, balance, hearing, and touch. In fact, PIEZO2 has been found to be expressed in neurons that control mechanosensory responses, such as the perception of light touch, in mice. Past studies found that removing the gene in mouse models caused intolerable limb defects.2 Since this gene is highly homologous in humans and mice (the two versions are 95% similar), many researchers assumed that humans could not live without it either. According to Bönnemann and Chesler, however, it is clear that these PIEZO2 mutations do not cause a similar fate in humans.

Along with laboratory work, Bönnemann and Chesler employed techniques to further investigate the tangible effects of the mutations. Using a control group for comparison, the researchers presented the patients with a set of tests that examined their movement and sensory abilities. The results were startling, to say the least. The patients revealed an almost total lack of proprioception when blindfolded. They stumbled and fell while walking and could not determine which way their joints were moving without looking. In addition, both failed to successfully move a finger from their noses to a target. The absence of certain sensory abilities was equally striking: both patients could not feel the vibrations of a tuning fork pressed against their skin, could not differentiate between the ends of a caliper pressed against their palms, and could not sense a soft brush swept across their palms or the bottoms of their feet. Furthermore, when this same soft brush was swept across hairy skin, both patients described the sensation as prickly. This particular result revealed that the subjects were largely missing brain activation in the region linked to physical sensation, yet they appeared to have an emotional response to the brushing across hairy skin; these brain patterns directly contrasted with those of the control group. Additional tests revealed that the patients’ detection of pain, itch, and temperature was normal compared with the control group, and that they possessed nervous system capabilities and cognitive functions appropriate for their ages.1

Because the patients are still able to function in daily life, it is apparent that the nervous system has alternate pathways that allow them to rely on sight to largely compensate for their lack of proprioception.3,4 With further research, scientists may be able to tap into these alternate pathways when designing therapies for similar patients. Additionally, the physical features common to both patients suggest that PIEZO2 mutations contribute to the observed musculoskeletal abnormalities.3 This implies that proprioception itself is necessary for normal musculoskeletal development; it is possible that the abnormalities developed over time as a result of the patients’ postural responses and compensations for their deficits.4

Our lives depend heavily on the ability to maneuver our bodies and coordinate movements, which makes the idea of lacking proprioception especially unsettling. Bönnemann and Chesler’s discoveries open new doors for investigating PIEZO2’s role in the nervous system and in musculoskeletal development, and they may also aid in understanding a variety of other neurological disorders. But much about the full effects of PIEZO2 mutations remains unknown. For example, we do not know whether the musculoskeletal abnormalities injure the spinal cord, whether the mutations pose additional consequences for the elderly, or whether women are more susceptible to the disorder than men. It is also very likely that other patients around the world present symptoms similar to those of the 8-year-old girl and 18-year-old woman observed by Bönnemann and Chesler. While researchers work toward a better understanding of the disease and the development of specific therapies, these patients must rely on other coping mechanisms, such as vision, to accomplish even the most basic daily activities. Contrary to popular perception, the sixth sense is not the ability to see ghosts; it is so much more.

References

  1. Chesler, A. T. et al. N. Engl. J. Med. 2016, 375, 1355-1364.
  2. Woo, S. et al. Nat. Neurosci. 2015, 18, 1756-1762.
  3. “‘Sixth sense’ may be more than just a feeling.” National Institutes of Health. https://www.nih.gov/news-events/news-releases/Sixth-sense-may-be-more-just-feeling (accessed Sept. 22, 2016).
  4. Price, Michael. “Researchers discover gene behind ‘sixth sense’ in humans.” Science Magazine. http://www.sciencemag.org/news/2016/09/researchers-discover-gene-behind-sixth-sense-humans (accessed Sept. 22, 2016).

The Health of Healthcare Providers

A car crash. A heart attack. A drug overdose. No matter what time of day, where you are, or what your problem is, emergency medical technicians (EMTs) will be on call and ready to come to your aid. These health care providers are charged with providing quality care to maintain or improve patient health in the field, and their efforts have saved the lives of many who could not otherwise find care on their own. While these EMTs deserve praise and respect for their line of work, what they deserve even more is consideration for the health issues that they themselves face. Emergency medical technicians suffer from a host of long-term health issues, including weight gain, burnout, and psychological changes.

The daily "schedule" of an EMT is probably most characterized by its variability and unpredictability. The entirety of their day is a summation of what everyone in their area is doing, those people's health issues, and the uncertainty of life itself. While there are start and end times to their shifts, even these are not hard and fast--shifts have the potential to start early or end late based on when people call 911. An EMT can spend their entire shift on the ambulance, without time to eat a proper meal or to get any sleep. These healthcare providers learn to catch a few minutes of sleep here and there when possible. Their yearly schedules are also unpredictable, with lottery systems in place to ensure that someone is working every day, at all hours of the day, while maintaining some fairness. Most services will have either 12 or 24 hour shifts, and this lottery system can result in EMTs having stacked shifts that are either back to back or at least within close proximity to one another. This only enhances the possibility of sleep disorders, with 70 percent of EMTs reporting having at least one sleep problem.1 While many people have experienced the effects of exhaustion and burnout due to a lack of sleep, few can say that their entire professional career has been characterized by these feelings. EMTs have been shown to be more than twice as likely than control groups to have moderate to high scores on the Epworth Sleepiness Scale (ESS), which is correlated with a greater likelihood of falling asleep during daily activities such as conversing, sitting in public places, and driving.1 The restriction and outright deprivation of sleep in EMTs has been shown to cause a large variety of health problems, and seems to be the main factor in the decline of both physical and mental health for EMTs.

A regular, sufficient amount of sleep is essential to maintaining a healthy body. Reduced sleep has been associated with weight gain, cardiovascular disease, and weakened immune function. Studies have shown that, at least in men, short sleep durations are linked to weight gain and obesity, potentially due to alterations in the hormones that regulate appetite.2,3 Given this trend, it is no surprise that a 2009 study found that sleep durations deviating from an ideal 7-8 hours, as well as frequent insomnia, increased the risk of cardiovascular disease. The fact that EMTs often have poor diets compounds that risk. An EMT needs to be ready to respond around the clock, which leaves little time to sit down for a proper meal. Fast food becomes the meal of choice because of its convenience, both in availability and speed. Some hospitals have tried to address this shortcoming in the emergency medical service (EMS) world by providing snacks and drinks at the hospital. This, however, creates a different problem because of the high calorie density of those snacks. The body generally senses that it is full by detecting stretch in the stomach, which signals the brain that enough food has been consumed. In a balanced diet, much of that space is filled with fruits, vegetables, and other low-calorie items (unless one is an athlete with much higher energy needs). By filling up on small, high-calorie items instead, an EMT must eat more of them to feel full, easily exceeding the recommended daily calorie intake. The extra energy is often stored as fat, compounding the weight gain caused by sleep deprivation. Studies on the effects of restricted sleep on the immune system are less common, but one experiment demonstrated elevated markers of systemic inflammation, which could, again, lead to cardiovascular disease and obesity.2
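
The volume-versus-calories trade-off is easier to see with rough numbers. The sketch below compares how many calories it takes to fill the same stomach volume with energy-dense snacks versus low-calorie produce; the energy densities and the 600 g "fullness" portion are illustrative assumptions, not measured values.

```python
# Rough illustration: same "filling" portion, very different calorie totals.
# All numbers below are illustrative assumptions, not measured values.

PORTION_GRAMS = 600  # assumed amount of food needed to feel full

energy_density_kcal_per_g = {
    "mixed vegetables": 0.4,        # assumed typical low-calorie food
    "vending-machine snacks": 5.0,  # assumed typical energy-dense snack
}

for food, density in energy_density_kcal_per_g.items():
    calories = PORTION_GRAMS * density
    print(f"{food}: ~{calories:.0f} kcal to fill a {PORTION_GRAMS} g portion")

# Under these assumptions the snack-based "meal" supplies roughly 3000 kcal,
# more than a full day's recommended intake, while the produce-based portion
# supplies only about 240 kcal.
```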

Mental health is not spared from the complications of long waking periods with minimal sleep. A study testing the cognitive abilities of subjects under varying amounts of sleep restriction showed that less sleep led to cognitive deficits, and that being awake for more than 16 hours produced deficits regardless of how much sleep the subject had gotten.4 This finding affects both EMTs, who can injure themselves, and patients, who may suffer from errors made in the field. First-year physicians, who similarly can work shifts longer than 24 hours, are at increased risk of automobile crashes and percutaneous (skin) injuries when sleep deprived.5 These injuries often happen when leaving a shift: a typical EMT shift lasts from one morning to the next, so the EMT leaves during rush hour on little to no sleep, raising the dangerous possibility of dozing off at the wheel. A study similar to the one on first-year physicians examined extended-duration work in critical-care units and found that long shifts increased the risk of medical errors and lapses in attention.6 In addition to these direct consequences of continuous strain, EMTs and others in the healthcare field also face more personal issues, including burnout and changes in behavior. A study of pediatric residents, who face similar stress and workloads, found that 20% of participants were suffering from depression and 75% met the criteria for burnout, both of which were associated with medical errors made at work.7 A separate study found that emergency physicians suffering from burnout also reported high emotional exhaustion, depersonalization, and a low sense of accomplishment.8 While many go into healthcare to help others, exhaustion and desensitization breed a kind of cynicism that serves as a defense against the enormous emotional burden of treating patients day in and day out.

Sleep deprivation, long work hours, and the stress that comes with the job create a poor environment for the physical and mental health of emergency medical technicians and other healthcare providers. However, a recent study showed that downtime, especially after dealing with critical patients, was associated with lower rates of depression and acute stress in EMTs.9 While this does not necessarily ameliorate post-traumatic stress or burnout, it is a start toward addressing the situation. Other possible interventions include providing EMTs with readily available, balanced meals at hospitals and improving scheduling systems to prevent or limit back-to-back shifts. These ideas also apply to others facing high workloads and abnormal sleep schedules, including college students, who are likewise at risk for mood disorders and a poorer quality of life due to the rigors of college life.10

References

  1. Pirrallo, R. G. et al. International Journal of the Science and Practice of Sleep Medicine. 2012, 16, 149-162.
  2. Banks, S. et al. J. Clin. Sleep Med. 2007, 3(5), 519-528.
  3. Watanabe, M. et al. Sleep  2010, 33(2), 161-167.
  4. Van Dongen, H. P. et al. Sleep 2004, 27(4), 117-126.
  5. Najib, T. A. et al. JAMA 2006, 296(9), 1055-1062.
  6. Barger, L. K. et al. PLoS Med. [Online] 2006, 3(12), e487. https://dx.doi.org/10.1371%2Fjournal.pmed.0030487 (accessed Oct. 3, 2016)
  7. Fahrenkopf, A. M. et al. BMJ [Online] 2008, 336, 488. http://dx.doi.org/10.1136/bmj.39469.763218.BE (accessed Oct. 3, 2016)
  8. Ben-Itzhak, S. et al. Clin. Exp. Emerg. Med. 2015, 2(4), 217-225.
  9. Halpern, J. et al. Biomed. Res. Int. [Online] 2014, 2014. http://dx.doi.org/10.1155/2014/483140 (accessed Oct. 3, 2016)
  10. Singh, R. et al. J. Clin. Diagn. Res. [Online] 2016, 10(5), JC01-JC05. https://dx.doi.org/10.7860%2FJCDR%2F2016%2F19140.7878 (accessed Oct 3, 2016)

The Creation of Successful Scaffolds for Tissue Engineering

Abstract

Tissue engineering is a broad field with applications ranging from pharmaceutical testing to total organ replacement. Recently, there has been extensive research on creating tissue that is able to replace or repair natural human tissue. Much of this research focuses on the creation of scaffolds that can both support cell growth and successfully integrate with the surrounding tissue. This article will introduce the concept of a scaffold for tissue engineering; discuss key areas of research including biomolecule use, vascularization, mechanical strength, and tissue attachment; and introduce some important recent advancements in these areas.

Introduction

Tissue engineering relies on four main factors: the growth of appropriate cells, the introduction of the proper biomolecules to these cells, the attachment of the cells to an appropriate scaffold, and the application of specific mechanical and biological forces to develop the completed tissue.1

Successful cell culture has been possible since the 1960s, but these early methods lacked the adaptability necessary to make functioning tissues. With the introduction of induced pluripotent stem cells in 2006, however, researchers no longer face the resource limitations previously encountered. As a result, growing cells of a desired type is rarely the limiting step in tissue engineering and thus warrants less concern than other factors.2,3

Similarly, the introduction of essential biomolecules (such as growth factors) into developing tissue has generally not restricted modern tissue engineering efforts. Extensive knowledge of biomolecule function, together with relatively reliable methods of obtaining important biomolecules, has allowed researchers to use them to make engineered tissues emulate functional human tissue more closely.4,5 Despite these advances, however, the ability of biomolecules to improve engineered tissue often depends on the structure and chemical composition of the scaffold material.6

Cellular attachment has also been a heavily explored field of research. This refers specifically to the ability of the engineered tissue to seamlessly integrate into the surrounding tissue. Studies in cellular attachment often focus on qualities of scaffolds such as porosity as well as the introduction of biomolecules to encourage tissue union on the cellular level. Like biomolecule effectiveness, successful cellular attachment depends on the material and structure of the tissue scaffolding.7

Also critical to developing functional tissue is exposing it to the right environment. This development of tissue properties through applied mechanical and biological forces depends strongly on finding materials that can withstand the required forces while supplying cells with the necessary environment and nutrients. Previous research in this area has focused on several scaffold materials for various reasons. However, improvements to the materials themselves, or to the specific methods of development, are still greatly needed in order to create functional implantable tissue. Because of the difficulty of conducting research in this area, devoted efforts to improving these methods remain critical to successful tissue engineering.

In order for a scaffold to be capable of supporting cells until the formation of a functioning tissue, it is necessary to satisfy several key requirements, principally introduction of helpful biomolecules, vascularization, mechanical function, appropriate chemical and physical environment, and compatibility with surrounding biological tissue.8,9 Great progress has been made towards satisfying many of these conditions, but further research in the field of tissue engineering must address challenges with existing scaffolds and improve their utility for replacing or repairing human tissue.

Key Research Areas of Scaffolding Design

Biomolecules

Throughout most early tissue engineering projects, researchers focused on simple cell culture around specific scaffold materials.10 Promising developments such as engineered cartilage motivated further funding and interest in research. However, these early efforts overlooked several factors crucial to tissue engineering that allow implantable tissue to take on more complex functional roles. In order to create tissue that is functional and able to direct biological processes alongside nearby natural tissue, it is important to understand how biomolecules interact with engineered tissue.

Because the ultimate goal of tissue engineering is to create functional, implantable tissue that mimics biological systems, most important biomolecules have been explored by researchers in the medical field outside of tissue engineering. As a result, a solid body of research exists describing the functions and interactions of various biomolecules. Because of this existing information, understanding their potential uses in tissue engineering relies mainly on studying the interactions of biomolecules with materials which are not native to the body; most commonly, these non-biological materials are used as scaffolding. To complicate the topic further, biomolecules are a considerably large category encompassing everything from DNA to glucose to proteins. As such, it is most necessary to focus on those that interact closely with engineered tissue.

One type of biomolecule that is subject to much research and speculation in current tissue engineering is the growth factor.11 Specific growth factors can have a variety of functions from general cell proliferation to the formation of blood cells and vessels.12-14 They can also be responsible for disease, especially the unchecked cell generation of cancer.15 Many of the positive roles have direct applications to tissue engineering. For example, Transforming Growth Factor-beta (TGF-β) regulates normal growth and development in humans.16 One study found that while addition of ligands to engineered tissue could increase cellular adhesion to nearby cells, the addition also decreased the generation of the extracellular matrix, a key structure in functional tissue.17 To remedy this, the researchers then tested the same method with the addition of TGF-β. They saw a significant increase in the generation of the extracellular matrix, improving their engineered tissue’s ability to become functional faster and more effectively. Clearly, a combination of growth factors and other tissue engineering methods can lead to better outcomes for functional tissue engineering.

With the utility of growth factors established, delivery methods become very important. Several methods have proven effective, including delivery in a gelatin carrier.18 However, some of the most promising procedures rely on the scaffold’s own properties. One set of studies mimicked the natural release of growth factors by the extracellular matrix by creating a nanofiber scaffold that contained growth factors for delayed release.19 The study saw a positive influence on cell behavior as a result of the released growth factor. Other methods vary physical properties of the scaffold, such as pore size, to trigger immune pathways that release regenerative growth factors, as will be discussed later. The use of biomolecules, and of growth factors in particular, is closely tied to the choice of scaffold material and can be critical to the success of an engineered tissue.
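
As a rough way to picture "delayed release" from a scaffold, a simple first-order release model is a common textbook starting point: the cumulative fraction of growth factor released after time t is 1 - e^(-kt). The sketch below uses only that simplification with an assumed rate constant; it is not the kinetics reported in the nanofiber study cited above.

```python
import math

# Minimal first-order release sketch: fraction released = 1 - exp(-k * t).
# The rate constant below is an assumed, illustrative value.
k_per_day = 0.15

for day in (1, 3, 7, 14, 28):
    fraction_released = 1 - math.exp(-k_per_day * day)
    print(f"day {day:>2}: ~{fraction_released:.0%} of loaded growth factor released")

# A smaller k models a scaffold that holds onto the growth factor longer,
# mimicking the slow, sustained delivery attributed to nanofiber carriers.
```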

Vascularization

Because tissue generally cannot survive without proper oxygenation, vascularization of engineered tissue has been a focus of many researchers in recent years.20 For many of these advances, the process depends on the scaffold.21 The actual requirements for the level and complexity of vasculature vary greatly with the type of tissue; the blood flow requirements of the highly vascularized lungs differ from those of cortical bone.22,23 It is therefore more useful to discuss the methods developed for creating vascularized tissue than the designs of specific tissues.

One method that has shown great promise is the use of modified 3D printers to cast vascularized tissue.24 This method uses the relatively new printing technology to create carbohydrate glass networks in the form of the desired vascular network. The network is then coated with a hydrogel scaffold to allow cells to grow. The carbohydrate glass is then dissolved from inside of the hydrogel, leaving an open vasculature in a specific shape. This method has been successful in achieving cell growth in areas of engineered tissue that would normally undergo necrosis. Even more remarkably, the created vasculature showed the ability to branch into a more complex system when coated with endothelial cells.24

However, this method is not always applicable. Many tissue types require scaffolds that are more rigid or have different properties than hydrogels. In this case, researchers have focused on the effect of a material’s porosity on angiogenesis.7,25 Several key factors have been identified for blood vessel growth, including pore size, surface area, and endothelial cell seeding similar to that which was successful in 3D printed hydrogels. Of course, many other methods are currently being researched based on a variety of scaffolds. Improvements on these methods, combined with better research into the interactions of vascularization with biomaterial attachment, show great promise for engineering complex, differentiated tissue.

Mechanical Strength

Research has consistently demonstrated that large-scale cell culture is not limiting to bioengineering. With the introduction of technology like bioreactors or three-dimensional cell culture plates, growing cells of the desired qualities and in the appropriate form continues to become easier for researchers; this in turn allows for a focus on factors beyond simply gathering the proper types of cells.2 This is important because most applications in tissue engineering require more than just the ability to create groupings of cells—the cells must have a certain degree of mechanical strength in order to functionally replace tissue that experiences physical pressure.

The mechanical strength of a tissue is the result of many developmental factors and can be classified in different ways, often by the type of force applied or the amount of force the tissue can withstand. Regardless of classification, a tissue’s mechanical strength depends primarily on its physical robustness and on its cells’ ability to function under applied pressure; both are products of the scaffold’s material and fabrication method. For example, scaffolds for bone tissue engineering are often evaluated by their compressive strength. Studies have found that certain processing techniques, such as heating scaffolds in a vacuum oven, may increase compressive strength.26 One group matched the higher end of the strength of cancellous (spongy) bone with 3D printing by incorporating specific molecules into the binding layers.27 This simple change produced scaffolding with ten times the mechanical strength of scaffolding made from traditional materials, a value within the range of natural bone. The use of specific binding agents between scaffold layers also increased cellular attachment, the implications of which will be discussed later.27 These changes yield tissue that is better able to meet functional requirements and therefore to serve as a bone replacement. Thus, simple changes in materials and methods can drastically increase the mechanical usability of scaffolds, often with positive effects on other important qualities for certain tissue types.
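
For a sense of scale, compressive strength is simply the failure load divided by the loaded cross-section. The sketch below runs that arithmetic for a hypothetical cylindrical scaffold sample; the load, the diameter, and the roughly 2-12 MPa range often quoted for cancellous bone are illustrative assumptions, not values from the studies above.

```python
import math

# Hypothetical compression test on a cylindrical scaffold sample.
# Both numbers are illustrative assumptions.
failure_load_newtons = 800.0  # assumed load at which the sample fails
sample_diameter_mm = 10.0     # assumed sample diameter

area_mm2 = math.pi * (sample_diameter_mm / 2) ** 2
compressive_strength_mpa = failure_load_newtons / area_mm2  # N/mm^2 equals MPa

print(f"compressive strength ~ {compressive_strength_mpa:.1f} MPa")
# About 10 MPa, which would sit within the ~2-12 MPa range often quoted for
# cancellous bone -- the kind of target a load-bearing scaffold must reach.
```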

Clearly, not all engineered tissues require the mechanical strength of bone; for contrast, the brain experiences pressures of less than 1 kPa, compared with the roughly 10⁶ kPa that bone experiences.28 Thus, not all scaffolds must support the same loads, and scaffolds must be designed to accommodate these differences. Other tissues may also experience forces such as tension or torsion depending on their location in the body, so mechanical properties must be considered on a tissue-by-tissue basis when determining the corresponding scaffold structure. That said, mechanical limitations are a primary factor mainly in engineered bone, cartilage, and cardiovascular tissue, the last of which has significantly more complicated mechanical requirements.29

Research in the past few years has investigated increasingly fine-grained aspects of scaffold design and their effects on macroscopic physical properties. For example, it is generally accepted that pore size and the related surface area within engineered bone replacements are key to cellular attachment. Recent advances in scaffold fabrication, however, have allowed researchers to investigate very specific properties of these pores, such as their individual geometry. One recent study found that using an inverse opal geometry--an architecture known for its high strength in materials engineering--for the pores doubled mineralization within a bone engineering scaffold.30 Mineralization is a crucial quality of bone because of its contribution to compressive strength.31 This result is important because it demonstrates researchers’ newfound ability to alter scaffolds on a microscopic level in order to effect macroscopic changes in tissue properties.

Attachment to Nearby Tissue

Even with an ideal design, a tissue’s success as an implant relies on its ability to integrate with the surrounding tissue. For some types of tissue, this is simply a matter of avoiding rejection by the host immune response.32 In these cases, it is important to choose materials specifically to reduce that response. Over the past several decades, it has been shown that the key requirement for this kind of biocompatibility is the use of materials that are nearly biologically inert and thus do not provoke a negative reaction from natural tissue.33 The strategy is to minimize the immune response of the tissue surrounding the implant in order to avoid complications, such as inflammation, that could harm the patient. This approach has been relatively effective for implants ranging from total joint replacements to heart valves.

Avoiding a negative immune response has proven successful for some medical fields. However, more complex solutions involving a guided immune response might be necessary for engineered tissue implants to survive and take on the intended function. This issue of balancing biochemical inertness and tissue survival has led researchers to investigate the possibility of using the host immune response in an advantageous way for the success of the implant.34 This method of intentionally triggering surrounding natural tissue relies on the understanding that immune response is actually essential to tissue repair. While an inert biomaterial may be able to avoid a negative reaction, it will also discourage a positive reaction. Without provoking some sort of response to the new tissue, an implant will remain foreign to bordering tissue; this means that the cells cannot take on important functions, limiting the success of any biomaterial that has more than a mechanical use.

Current studies have focused primarily on modifying surface topography and chemistry to elicit a positive immune reaction in the cells surrounding the new tissue. One example is the grafting of oligopeptides onto the surface of an implant to stimulate macrophage response; this ultimately leads to the release of growth factors and greater cellular attachment because of the chemical signals involved in the natural immune response.35 Another study found that a certain pore size in the scaffold material led to faster and more complete healing in an in vivo study using rabbits. Upon further investigation, the researchers found that the smaller pores were interacting with macrophages involved in the triggered immune response, pushing more macrophages toward a regenerative pathway and producing better, faster integration of the implant with the surrounding tissue.36 Similar studies have investigated methods such as attaching surface proteins, with similarly enlightening results. These and other promising studies have led to an increased awareness of chemical signaling as a way to enhance biomaterial integration, with larger implications including faster healing times and greater functionality.

Conclusion

The use of scaffolds for tissue engineering has been the subject of much research because of its potential for extensive utilization in the medical field. Recent advancements have focused on several areas, particularly the use of biomolecules, improved vascularization, increases in mechanical strength, and attachment to existing tissue. Advancements in each of these fields have been closely related to the use of scaffolding. Several biomolecules, especially growth factors, have led to a greater ability for tissue to adapt as an integrated part of the body after implantation. These growth factors rely on efficient means of delivery, notably through inclusion in the scaffold, in order to have an effect on the tissue. The development of new methods and refinement of existing ones has allowed researchers to successfully vascularize tissue on multiple types of scaffolds. Likewise, better methods of strengthening engineered tissue scaffolds before cell growth and implantation have allowed for improved functionality, especially under mechanical forces. Modifications to scaffolding and the addition of special molecules have allowed for increased cellular attachment, improving the efficacy of engineered tissue for implantation. Further advancement in each of these areas could lead to more effective scaffolds and the ability to successfully use engineered tissue for functional implants in medical treatments.

References

  1. “Tissue Engineering and Regenerative Medicine.” National Institute of Biomedical Imaging and Bioengineering. N.p., 22 July 2013. Web. 29 Oct. 2016.
  2. Haycock, John W. “3D Cell Culture: A Review of Current Approaches and Techniques.” 3D Cell Culture: Methods and Protocols. Ed. John W. Haycock. Totowa, NJ: Humana Press, 2011. 1–15. Web.
  3. Takahashi, Kazutoshi, and Shinya Yamanaka. “Induction of Pluripotent Stem Cells from Mouse Embryonic and Adult Fibroblast Cultures by Defined Factors.” Cell 126.4 (2006): 663–676. ScienceDirect. Web.
  4. Richardson, Thomas P. et al. “Polymeric System for Dual Growth Factor Delivery.” Nat Biotech 19.11 (2001): 1029–1034. Web.
  5. Liao, IC, SY Chew, and KW Leong. “Aligned Core–shell Nanofibers Delivering Bioactive Proteins.” Nanomedicine 1.4 (2006): 465–471. Print.
  6. Elliott Donaghue, Irja et al. “Cell and Biomolecule Delivery for Tissue Repair and Regeneration in the Central Nervous System.” Journal of Controlled Release: Official Journal of the Controlled Release Society 190 (2014): 219–227. PubMed. Web.
  7. Murphy, Ciara M., Matthew G. Haugh, and Fergal J. O’Brien. “The Effect of Mean Pore Size on Cell Attachment, Proliferation and Migration in Collagen–glycosaminoglycan Scaffolds for Bone Tissue Engineering.” Biomaterials 31.3 (2010): 461–466. Web.
  8. Sachlos, E., and J. T. Czernuszka. “Making Tissue Engineering Scaffolds Work. Review: The Application of Solid Freeform Fabrication Technology to the Production of Tissue Engineering Scaffolds.” European Cells & Materials 5 (2003): 29-39. Print.
  9. Chen, Guoping, Takashi Ushida, and Tetsuya Tateishi. “Scaffold Design for Tissue Engineering.” Macromolecular Bioscience 2.2 (2002): 67–77. Wiley Online Library. Web.
  10. Vacanti, Charles A. 2006. “The history of tissue engineering.” Journal of Cellular and Molecular Medicine 10 (3): 569-576.
  11. Depprich, Rita A. “Biomolecule Use in Tissue Engineering.” Fundamentals of Tissue Engineering and Regenerative Medicine. Ed. Ulrich Meyer et al. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. 121–135. Web.
  12. Laiho, Marikki, and Jorma Keski-Oja. “Growth Factors in the Regulation of Pericellular Proteolysis: A Review.” Cancer Research 49.10 (1989): 2533. Print.
  13. Morstyn, George, and Antony W. Burgess. “Hemopoietic Growth Factors: A Review.” Cancer Research 48.20 (1988): 5624. Print.
  14. Yancopoulos, George D. et al. “Vascular-Specific Growth Factors and Blood Vessel Formation.” Nature 407.6801 (2000): 242–248. Web.
  15. Aaronson, SA. “Growth Factors and Cancer.” Science 254.5035 (1991): 1146. Web.
  16. Lawrence, DA. “Transforming Growth Factor-Beta: A General Review.” European cytokine network 7.3 (1996): 363–374. Print.
  17. Mann, Brenda K, Rachael H Schmedlen, and Jennifer L West. “Tethered-TGF-β Increases Extracellular Matrix Production of Vascular Smooth Muscle Cells.” Biomaterials 22.5 (2001): 439–444. Web.
  18. Malafaya, Patrícia B., Gabriela A. Silva, and Rui L. Reis. “Natural–origin Polymers as Carriers and Scaffolds for Biomolecules and Cell Delivery in Tissue Engineering Applications.” Matrices and Scaffolds for Drug Delivery in Tissue Engineering 59.4–5 (2007): 207–233. Web.
  19. Sahoo, Sambit et al. “Growth Factor Delivery through Electrospun Nanofibers in Scaffolds for Tissue Engineering Applications.” Journal of Biomedical Materials Research Part A 93A.4 (2010): 1539–1550. Web.
  20. Novosel, Esther C., Claudia Kleinhans, and Petra J. Kluger. “Vascularization Is the Key Challenge in Tissue Engineering.” From Tissue Engineering to Regenerative Medicine- The Potential and the Pitfalls 63.4–5 (2011): 300–311. Web.
  21. Drury, Jeanie L., and David J. Mooney. “Hydrogels for Tissue Engineering: Scaffold Design Variables and Applications.” Synthesis of Biomimetic Polymers 24.24 (2003): 4337–4351. Web.
  22. Lafage-Proust, Marie-Helene et al. “Assessment of Bone Vascularization and Its Role in Bone Remodeling.” BoneKEy Rep 4 (2015): n. pag. Web.
  23. Türkvatan, Aysel et al. “Multidetector CT Angiography of Renal Vasculature: Normal Anatomy and Variants.” European Radiology 19.1 (2009): 236–244. Web.
  24. Miller, Jordan S. et al. “Rapid Casting of Patterned Vascular Networks for Perfusable Engineered Three-Dimensional Tissues.” Nature Materials 11.9 (2012): 768–774. www.nature.com. Web.
  25. Lovett, Michael et al. “Vascularization Strategies for Tissue Engineering.” Tissue Engineering. Part B, Reviews 15.3 (2009): 353–370. Web.
  26. Cox, Sophie C. et al. “3D Printing of Porous Hydroxyapatite Scaffolds Intended for Use in Bone Tissue Engineering Applications.” Materials Science and Engineering: C 47 (2015): 237–247. ScienceDirect. Web.
  27. Fielding, Gary A., Amit Bandyopadhyay, and Susmita Bose. “Effects of Silica and Zinc Oxide Doping on Mechanical and Biological Properties of 3D Printed Tricalcium Phosphate Tissue Engineering Scaffolds.” Dental Materials 28.2 (2012): 113–122. ScienceDirect. Web.
  28. Engler, Adam J. et al. “Matrix Elasticity Directs Stem Cell Lineage Specification.” Cell 126.4 (2006): 677–689. ScienceDirect. Web.
  29. Bilodeau, Katia, and Diego Mantovani. “Bioreactors for Tissue Engineering: Focus on Mechanical Constraints. A Comparative Review.” Tissue Engineering 12.8 (2006): 2367–2383. online.liebertpub.com (Atypon). Web.
  30. Sommer, Marianne R. et al. “Silk Fibroin Scaffolds with Inverse Opal Structure for Bone Tissue Engineering.” Journal of Biomedical Materials Research Part B: Applied Biomaterials (2016): n/a-n/a. Wiley Online Library. Web.
  31. Sapir-Koren, Rony, and Gregory Livshits. “Bone Mineralization and Regulation of Phosphate Homeostasis.” IBMS BoneKEy 8.6 (2011): 286–300. www.nature.com. Web.
  32. Boehler, Ryan M., John G. Graham, and Lonnie D. Shea. “Tissue Engineering Tools for Modulation of the Immune Response.” BioTechniques 51.4 (2011): 239–passim. PubMed Central. Web.
  33. Follet, H. et al. “The Degree of Mineralization Is a Determinant of Bone Strength: A Study on Human Calcanei.” Bone 34.5 (2004): 783–789. PubMed. Web.
  34. Franz, Sandra et al. “Immune Responses to Implants – A Review of the Implications for the Design of Immunomodulatory Biomaterials.” Biomaterials 32.28 (2011): 6692–6709. ScienceDirect. Web.
  35. Kao, Weiyuan John, and Damian Lee. “In Vivo Modulation of Host Response and Macrophage Behavior by Polymer Networks Grafted with Fibronectin-Derived Biomimetic Oligopeptides: The Role of RGD and PHSRN Domains.” Biomaterials 22.21 (2001): 2901–2909. ScienceDirect. Web.
  36. Bryers, James D, Cecilia M Giachelli, and Buddy D Ratner. “Engineering Biomaterials to Integrate and Heal: The Biocompatibility Paradigm Shifts.” Biotechnology and bioengineering 109.8 (2012): 1898–1911. Web.

Zika and Fetal Viruses: Sharing More Than A Motherly Bond

Zika is a blood-borne pathogen primarily transmitted through mosquito bites and sexual activities. Pregnant women infected by Zika can pass the virus to their fetus, causing microcephaly, a condition in which the baby has an abnormally small head indicative of abnormal brain development. With the outbreak of the Zika virus and its consequences for pregnant women and their babies, much research has focused on how the infection leads to microcephaly in fetuses.

Current Zika research has focused on uncovering methods for early detection of Zika in pregnant women and on educating the public about safe sexual practices so that transmission is limited to the mosquito vector.1 However, to truly end the Zika epidemic, three critical steps need to be taken. First, researchers must determine the point at which maternal infection harms the neurological development of the fetus, so that treatment can be administered to mothers before the brain damage becomes irreversible. Second, researchers must determine the mechanism through which Zika spreads from mother to fetus. Only then can researchers develop therapies to protect the fetus once the mother is already infected and begin creating a preventative vaccine. Although Zika seems like a mysterious new illness, several other well-studied viral infections affect pregnancies, such as cytomegalovirus (CMV), which also causes severe fetal brain damage when contracted during pregnancy. Previous research techniques could provide clues for researchers trying to understand Zika, and learning more about Zika will better equip us to handle prenatal viral outbreaks in the future.

Currently, microcephaly in the fetuses of Zika-infected mothers is detected by fetal ultrasound as early as 18 weeks into the gestation period.2 However, this is a late diagnosis of fetal Zika infection, and by this point the brain abnormalities caused by the virus are irreversible. Ultrasound and MRI scans of infants with confirmed CMV infection can detect similar neurological abnormalities.3 These brain lesions are also irreversible, making early detection a necessity for CMV infection as well. Fortunately, the presence of CMV or CMV DNA in amniotic fluid can be used for early diagnosis, and current treatment options include administration of valacyclovir or hyperimmunoglobulin in the window before the fetus develops brain lesions.4 Researchers must likewise try to identify fetal Zika infection as early as possible rather than relying on fetal microcephaly as the sole diagnostic tool. One potential early detection method is testing for Zika in the urine of pregnant women as soon as Zika symptoms appear, rather than screening the fetus for infection.5

Discovering the mechanism through which Zika infects the fetus is necessary to develop therapies to protect the fetus from infection. Many viruses that are transferred to the fetus during pregnancy do so by compromising the immune function of the placental barrier, allowing the virus to cross the placenta and infect the fetus. The syncytiotrophoblast is the epithelial covering of placental embryonic villi, which are highly vascular finger-like projections that increase the surface area available for exchange of nutrients and wastes between the mother and fetus.6 In one study, experiments found that infection of extravillous trophoblast cells decreased the immune function of the placenta, which increased fetal susceptibility to infection.7 Determining which cells in the placenta are infected by Zika could aid research into preventative treatments for fetal infection.

Since viruses that cross the placental barrier are able to infect the fetus, understanding the interaction between immune cells and the placental barrier is important for developing therapies that increase fetal resistance to Zika. In one study, researchers found that primary human trophoblast cells use cell-derived vesicles called exosomes to transfer miRNA to other pregnancy-related cells, conferring placental immune resistance to a multitude of viruses.8 miRNAs regulate gene expression, and different cells contain different miRNAs that give them specific functions and defenses. Isolating these miRNA-carrying exosomes, using them to supplement placental cell strains, and then testing whether those cells are more or less susceptible to Zika could support the development of drugs that bolster the placenta’s existing immune defenses. Since viral diseases that cross the placenta lead to poor fetal outcomes, developing protective measures for the placenta is imperative--not only for protection against Zika but also against new viruses for which no vaccine exists.9

Combating new and elusive viral outbreaks is difficult, and understanding and preventing viral infection in fetuses can feel like taking a shot in the dark. Although the prospects for infants infected by Zika are currently poor, combining the research done on other congenital infections paints a more complete picture of viral transmission during pregnancy. Instead of starting from scratch, scientists can use this information to determine which tests can detect Zika, which organs to examine for compromised immune function, and which types of treatment have a higher probability of effectiveness. Zika will not be the last virus that causes birth defects, but by combining the efforts of many scientists, we can get closer to stopping fetal viral infection once and for all.

References

  1. Wong, K. V. J. Epidemiol. Public Health Rev. 2016, 1.
  2. Mlakar, J., et al. N. Engl. J. Med. 2016, 374, 951-958.
  3. Malinger, G., et al. Am. J. Neuroradiol. 2003, 24, 28-32.
  4. Leruez-Ville, M., et al. Am. J. Obstet. Gynecol. 2016, 215, 462.
  5. Gourinat, A. C., et al. Emerg. Infect. Dis. 2015, 21, 84-86.
  6. Delorme-Axford, E., et al. Proc. Natl. Acad. Sci. 2013, 110, 12048-12053.
  7. Zhang, J.; Parry, S. Ann. N. Y. Acad. Sci. 2001, 943, 148-156.
  8. Mouillet, J. F., et al. Int. J. Dev. Bio. 2014, 58, 281.
  9. Mor, G.; Cardenas I. Am. J. Reprod. Immunol. 2010, 63, 425-433.

Wearable Tech is the New Black

What if our clothes could detect cancer? That may seem like a far-fetched, “only applicable in a sci-fi universe” kind of concept, but such clothes do exist, and similar devices that merge technology and medicine are quite prominent today. The wearable technology industry, a field poised to grow to $11.61 billion by 2020,1 is exploding in the healthcare market as companies produce devices that help us in our day-to-day lives, such as wearable EKG monitors and epilepsy-detecting smart watches. Advances in sensor miniaturization and integration with medical devices have opened up this interdisciplinary field by lowering costs. Wearable technology ranging from the Apple Watch to ingestible body-monitoring pills can be used for everything from health and wellness monitoring to early detection of disorders. But as these technologies become ubiquitous, important privacy and interoperability concerns must be addressed.

Wearable tech like the Garmin Vivosmart HR+ watch uses sensors to obtain insightful data about its wearer’s health. This bracelet-like device tracks steps walked, distance traveled, calories burned, pulse, and overall fitness trends over time.2 It transmits the information to an app on the user’s smartphone, which uses various algorithms to generate insights about the person’s daily activity. This record of daily athletic habits reminds wearers that fitness is not limited to working out at the gym or playing a sport--it’s a way of life. Holding tangible evidence of one’s physical activity for the day, or a history of one’s vital signs, empowers patients to take control of their personal health. The direct feedback of these devices nudges patients toward better choices, such as taking the stairs instead of the elevator or scheduling a doctor’s appointment early if they see something abnormal in the data from their EKG sensor. Connecting hard evidence from the body to physical and emotional perceptions grounds those experiences, reducing the subjectivity and oversimplification that come with relying on feelings about personal well-being.
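
The “insights” such apps produce are, at their core, simple aggregations of time-stamped sensor samples. The sketch below shows one way such a summary could be computed from hypothetical step-count data; the data format, the 10,000-step goal, and the function name are assumptions for illustration and do not reflect Garmin’s actual software.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical (timestamp, steps-in-interval) samples synced from a tracker.
samples = [
    ("2016-10-24T08:15", 420),
    ("2016-10-24T12:40", 3180),
    ("2016-10-24T18:05", 5260),
    ("2016-10-25T09:30", 2750),
]

DAILY_GOAL = 10_000  # assumed step goal


def daily_totals(step_samples):
    """Sum step samples by calendar day."""
    totals = defaultdict(int)
    for stamp, steps in step_samples:
        day = datetime.fromisoformat(stamp).date()
        totals[day] += steps
    return dict(totals)


for day, steps in sorted(daily_totals(samples).items()):
    status = "goal met" if steps >= DAILY_GOAL else f"{DAILY_GOAL - steps} steps short"
    print(f"{day}: {steps} steps ({status})")
```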

Not only can wearable technology gather information from the body, but these devices can also detect and monitor diseases. Diabetes, the 7th leading cause of death in the United States,3 can be monitored via Accu-Chek, a technology that sends an analysis of blood sugar levels directly to your phone.4 Analysis software like BodyTel can also connect patients with doctors and family members interested in the data gathered from the blood test.5 Ingestible devices such as the Ingestion Event Marker take monitoring a step further: designed to monitor medication intake, these pills keep track of when and how frequently patients take their medication. The Freescale KL02 chip, another ingestible device, monitors specific organs in the body and relays the organ’s status back to a Wi-Fi enabled device, which doctors can use to remotely measure the progression of an illness. They can then assess the effectiveness of a treatment with quantitative evidence, making decisions about future treatment plans better informed.

Many skeptics hesitate to adopt wearable technology because of valid concerns about accuracy and privacy. To make sure medical devices are kept to the same standards and are safe for patient use, the US Food and Drug Administration (FDA) has begun to implement a device approval process. Approval is only granted to devices that provably improve the functionality of traditional medical devices and do not pose a great risk to patients if they malfunction.6 In spite of the FDA approval process, much research is needed to determine whether the information, analysis and insights received from various wearable technologies can be trusted.

Privacy is another big issue especially for devices like fitness trackers that use GPS location to monitor user behavior. Many questions about data ownership (does the company or the patient own the data?) and data security (how safe is my data from hackers and/or the government and insurance companies?) are still in a fuzzy gray area with no clear answers.7 Wearable technology connected to online social media sites, where one’s location may be unknowingly tied to his or her posts, can increase the chance for people to become victims of stalking or theft. Lastly, another key issue that makes medical practitioners hesitant to use wearable technology is the lack of interoperability, or the ability to exchange data, between devices. Data structured one way on a certain wearable device may not be accessible on another machine. Incorrect information might be exchanged, or data could be delayed or unsynchronized, all to the detriment of the patient.

Wearable technology is changing the way we live our lives and understand the world around us. It is modifying the way health care professionals think about patient care by emphasizing quantitative evidence for decision-making over the more subjective analysis of symptoms. Having numeric evidence about one’s body documented holds people accountable for their actions. Patients can check whether they have met their daily step target or optimal sleep count, and doctors can track the intake of a pill and see its effect on the patient’s body. For better or for worse, we won’t get the false satisfaction of achieving a fitness goal, or of believing in the success of a doctor’s recommended course of action, without tangible results. While many obstacles remain, wearable technology has improved the quality of life for many people and will continue to do so in the future.

References

  1. Hunt, Amber. Experts: Wearable Tech Tests Our Privacy Limits. http://www.usatoday.com/story/tech/2015/02/05/tech-wearables-privacy/22955707/ (accessed Oct. 24, 2016).
  2. Vivosmart HR+. https://buy.garmin.com/en-US/US/into-sports/health-fitness/vivosmart-hr-/prod548743.html (accessed Oct. 31, 2016).
  3. Statistics about Diabetes. http://www.diabetes.org/diabetes-basics/statistics/ (accessed Nov. 1, 2016).
  4. Accu-Chek Mobile. https://www.accu-chek.co.uk/gb/products/metersystems/mobile.html (accessed Oct. 31, 2016).
  5. GlucoTel. http://bodytel.com/portfolios/glucotel/ (accessed Oct. 31, 2016)
  6. Mobile medical applications guidance for industry and Food and Drug Administration staff. U. S. Food and Drug Administration, Feb. 9, 2015. http://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/UCM263366.pdf (accessed Oct. 17, 2016).
  7. Meingast, M.; Roosta, T.; Sastry, S. Security and Privacy Issues with Health Care Information Technology. http://www.cs.jhu.edu/~sdoshi/jhuisi650/discussion/secprivhealthit.pdf (accessed Nov. 1, 2016).

Algae: Pond Scum or Energy of the Future?

In many ways, rising fuel demand reflects positive development--a global increase in energy access. But as the threat of climate change from burning fuel begins to manifest, it raises the question: how can the planet meet global energy needs while sustaining our environment for years to come? While every person deserves access to energy and the comfort it brings, we cannot afford to stand by as climate change brings ecosystem loss, natural disasters, and the submersion of coastal communities. Instead, we need a technological solution that meets global energy needs while promoting ecological sustainability. When people think of renewable energy, they tend to picture solar panels, wind turbines, and corn-based ethanol. But what our society may need to start picturing is that nondescript, green-brown muck that crowds the surface of ponds: algae.

Conventional fuel sources, such as oil and coal, produce energy when the carbon they contain combusts upon burning. Problematically, these sources have sequestered carbon for millions of years--hence the term fossil fuels. Releasing this carbon now raises atmospheric CO2 to levels that our planet cannot tolerate without a significant change in climate. Because fossil fuels form directly from the decomposition of plants, living plants also produce the compounds we normally burn to release energy. But, unlike fossil fuels, living biomass photosynthesizes up to the point of harvest, taking CO2 out of the atmosphere. This coupling between the uptake of CO2 by photosynthesis and the release of CO2 by combustion means that using biomass for fuel should not add net carbon to the atmosphere.1 Because biofuel provides the same form of energy through the same processes as fossil fuel, but uses renewable resources and does not increase atmospheric carbon, it can viably support both societal and ecological sustainability.
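
The carbon-neutrality argument can be written as a schematic pair of reactions, using glucose as a stand-in for harvested biomass (real biofuels are mostly lipids, so this is a simplification): every carbon atom released by burning the fuel was first pulled from the atmosphere while the biomass grew.

```latex
% Schematic carbon balance (glucose as a stand-in for harvested biomass)
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \;\xrightarrow{\text{photosynthesis}}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\qquad
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
  \;\xrightarrow{\text{combustion}}\;
  6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{energy}
```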

If biofuel can come from a variety of sources, such as corn, soy, and other crops, why should we consider algae in particular? Algae can double every few hours, a growth rate that will be crucial for meeting current energy demands.2 Beyond their power in numbers, algae also provide energy more efficiently than other biomass sources such as corn.1 Fat composes up to 50 percent of their body weight, making them the most productive source of plant oil.2,3 Compared to traditional vegetable biofuel sources, algae can provide up to 50 times more oil per acre.4 Also, unlike other sources of biomass, using algae for fuel would not detract from food production. One of the primary drawbacks of growing biomass for fuel is that it competes with agriculture for land and resources that would otherwise be used to feed people.3 Algae not only avoid this dilemma by growing on arid, otherwise unusable land or on water, but they also need not compete with overtaxed freshwater resources: algae proliferate easily in saltwater and even wastewater.4 Furthermore, introducing algae biofuel into the energy economy would not require a systemic change in infrastructure, because it can be processed in existing oil refineries and sold at existing gas stations.2
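
To see why a doubling time of a few hours matters, the sketch below compounds an algae culture over one to two days under an assumed 6-hour doubling time; both the doubling time and the time points are illustrative assumptions rather than figures from the sources above.

```python
# Exponential growth: biomass(t) = biomass(0) * 2 ** (t / doubling_time).
DOUBLING_TIME_H = 6.0  # assumed doubling time of a few hours

for hours in (6, 12, 24, 48):
    fold_increase = 2 ** (hours / DOUBLING_TIME_H)
    print(f"after {hours:>2} h: {fold_increase:,.0f}x the starting biomass")

# With a 6 h doubling time the culture grows 16-fold in a day and 256-fold
# in two days -- a pace no terrestrial energy crop can approach.
```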

However, algae biofuel has yet to make its grand entrance into the energy industry. When oil prices rose in 2007, interest shifted towards alternative energy sources. U.S. energy autonomy and the environmental consequences of carbon emission became key points of discussion. Scientists and policymakers alike were excited by the prospect of algae biofuel, and research on algae drew governmental and industrial support. But as U.S. fossil fuel production increased and oil prices dropped, enthusiasm waned.2

Many technical barriers must be overcome to achieve widespread use of algae, and progress has been slow. For example, algae’s rapid growth rate is both its asset and its Achilles’ heel. Areas colonized by algae can easily become overcrowded, which blocks access to sunlight and causes large amounts of algae to die off. Therefore, in order to farm algae as a fuel source, technology must be developed to regulate its growth.3 Unfortunately, the question of how to sustainably grow algae has proved troublesome to solve. Typically, algae for biofuel use is grown in reactors in order to control growth rate. But the ideal reactor design has yet to be developed, and in fact, some current designs use more energy than the algae yield produces.5

Although algae biofuel faces technological obstacles and dwindling government interest, many scientists today still see algae as a viable and crucial solution for future energy sustainability. UC San Diego houses the California Center for Algae Biotechnology, and Dr. Stephen Mayfield, a molecular biologist at the center, has worked with algae for over 30 years. In this time he has helped start four companies, including Sapphire Energy, founded in 2007, which focuses on developing algae biofuels. After receiving $100 million from venture capitalists in 2009, Sapphire Energy built a 70,000-square-foot lab in San Diego and a 220-acre farm in New Mexico. The company successfully powered cars and jets with algae biofuel, drawing attention and $600 million in further funding from ExxonMobil. Although diminished interest then stalled production, algal researchers today believe people will come to understand algae’s potential.2 The Mayfield Lab currently works on genetic and molecular tools to make algae fuel a viable means of energy production.4 The lab grows algae, extracts its lipids, and converts them to gasoline, jet, and diesel fuel. Mayfield believes his lab can reach a price as low as 80 or 85 dollars per barrel as it continues research into large-scale biofuel production.1

The advantage of growing algae for energy production lies not only in its renewability and carbon neutrality, but also its potential for other uses. In addition to just growing on wastewater, algae can treat the water by removing nitrates.5 Algae farms could also provide a means of carbon sequestration. If placed near sources of industrial pollution, they could remove harmful CO2 emissions from the atmosphere through photosynthesis.4 Additionally, algae by-products are high in protein and could serve as fish and animal feed.5

At this time of increased energy demand, dwindling fossil fuel reserves, climate change driven by increased atmospheric carbon, and interest in U.S. energy independence, we need energy sources that are economically viable but also renewable and carbon neutral.4 Algae hold the potential to address these needs. Their rapid growth and photosynthetic ability mean that their use as biofuel can be a sustainable process that does not increase net atmospheric carbon. The auxiliary benefits of using algae, such as wastewater treatment and carbon sequestration, increase the economic feasibility of adopting algae biofuel. While technological barriers must be overcome before algae biofuel can be implemented on a large scale, demographic and environmental conditions today indicate that continued research will be a smart investment in future sustainability.

References

  1. Deaver, Benjamin. Is Algae Our Last Chance to Fuel the World? Inside Science, Sep. 8, 2016.
  2. Dineen, Jessica. How Scientists Are Engineering Algae To Fuel Your Car and Cure Cancer. Forbes UCVoice, Mar. 30, 2015.
  3. Top 10 Sources for Biofuel. Seeker, Jan. 19, 2015.
  4. California Center for Algae Biotechnology. http://algae.ucsd.edu/. (accessed Oct. 16, 2016).
  5. Is Algae the Next Sustainable Biofuel? Forbes StatoilVoice, Feb. 27, 2015. (republished from Dec. 2013)

First World Health Problems

I am a first-generation American; both of my parents immigrated here from Myanmar, a third-world country. There is no history of Inflammatory Bowel Disease (IBD) in my family, yet I was diagnosed with Ulcerative Colitis at the beginning of my sophomore year of high school. Since IBD is known to be caused by a mix of genetic and environmental factors,1,2 what specifically triggered me to develop Ulcerative Colitis? Was it the food in America, the air I was exposed to, a combination of the two, or neither of them at all? Did the “environment” of the first world in the United States cause me to develop Ulcerative Colitis?

IBD is a chronic autoimmune disease, characterized by persistent inflammation of the digestive tract and classified into two separate categories: Ulcerative Colitis and Crohn’s Disease.3 Currently, there is no known cure for IBD, as its pathogenesis (i.e., the manner in which it develops) is not fully understood.1 Interestingly, the incidence of IBD has increased dramatically over the past century.1 A systematic review by Molodecky et al. showed that the incidence rate of IBD is significantly higher in Western nations.5 This may reflect better diagnostic techniques or a rise in environmental factors that promote the disease’s development; it also suggests that certain stimuli in first-world countries may trigger pathogenesis in individuals with a genetic predisposition to IBD.

Environmental factors that are believed to affect IBD include smoking, diet, geographic location, social status, stress, and microbes.1 Smoking affects the two forms of IBD differently: it is a key risk factor for Crohn’s Disease, while Ulcerative Colitis is more often diagnosed in non-smokers and ex-smokers.4 There have not been many studies investigating the causal relationship between diet and IBD due to the diversity in diet composition.1 However, since IBD affects the digestive system, diet has long been thought to have some impact on the pathogenesis of the disease.1 In first-world countries, there is access to a larger variety of food, which may impact the prevalence of IBD; people susceptible to the disease in developing countries may have a smaller chance of being exposed to “trigger” foods. In addition, IBD has been found at higher rates in urban areas than in rural areas.1,4,5 This makes sense, as cities have a multitude of potential disease-inducing environmental factors, including pollution, poor sanitation, and microbial exposure. Higher socioeconomic status has also been linked to higher rates of IBD.4 This may be partly due to the sedentary nature of white-collar work, which has also been linked to increased rates of IBD.1 Stress used to be viewed as a possible factor in the pathogenesis of IBD, but recent evidence indicates that it only exacerbates the disease.3 Recent research has focused on the microorganisms in the gut, called gut flora, as they seem to play a vital role in the onset of IBD.1 In animal models, it has even been observed that pathogenesis of IBD is not possible in a germ-free environment.1 The importance of microorganisms to human health is also central to the Hygiene Hypothesis.

The Hygiene Hypothesis states that the lack of infections in Western countries is the reason for the increasing incidence of autoimmune and allergic diseases.6 The idea behind the theory is that some infectious agents guard against a wide variety of immune-related disorders.6 Animal models and clinical trials have provided some evidence backing the Hygiene Hypothesis, but it is hard to causally attribute the pathogenesis of autoimmune and allergic diseases to a decrease in infections, since first-world countries differ from third-world countries in many other environmental factors as well.6

The increasing incidence of IBD in developed countries is not yet fully understood, but recent research points towards a complex combination of environmental and genetic factors. The rise in autoimmune disease diagnoses may also be partly attributable to better medical equipment and facilities and to the tendency of people in more developed countries to see a doctor regularly. There are many difficulties in researching the pathogenesis of IBD, including isolating individual environmental factors and obtaining tissue and data from third-world countries. However, much promising research is underway, and it may not be long before we discover a cure for IBD.

References

  1. Danese, S. et al. Autoimmun Rev 2004, 3.5, 394-400.
  2. Podolsky, D. K. N Engl J Med 2002, 347.6, 417-29.
  3. Mayo Clinic. "Inflammatory Bowel Disease (IBD)." http://www.mayoclinic.org/diseases-conditions/inflammatory-bowel-disease/basics/definition/con-20034908 (accessed Sep. 30, 2016).
  4. CDC. "Epidemiology of the IBD." https://www.cdc.gov/ibd/ibd-epidemiology.htm (accessed Oct. 17, 2016).
  5. Molodecky, N. et al. Gastroenterology 2012, 142.1, n. pag.
  6. Okada, H. et al. Clin Exp Immunol 2010, 160, 1–9.

Corals in Hot Water, Literally

Coral reefs support more species per unit area than any other marine environment, provide over half a billion people worldwide with socio-economic benefits, and produce an estimated $30 billion (USD) annually.1 Many people do not realize that these diverse ecosystems are at risk of extinction as a result of human activity--the Caribbean has already lost 80% of its coral cover in the past few decades,2 and some estimates report that at least 60% of all coral will be lost by 2030.1 One of the most direct threats to the health of these fragile ecosystems is the enormous amount of carbon dioxide and methane that has spilled into the atmosphere, warming the planet and its oceans at an unprecedented rate.

Corals are cnidarians, a phylum characterized by simple, symmetrical body plans. Corals reproduce either asexually or sexually and create stationary colonies made up of hundreds of genetically identical polyps.3 The major reef-building corals belong to the order Scleractinia. These corals contribute substantially to the reef framework and are key species in building and maintaining the structural complexity of the reef.3 The survival of this group is of particular concern, since mass die-offs of these corals affect the integrity of the reef. Corals form a symbiosis with tiny single-celled algae of the genus Symbiodinium. This partnership supports incredible levels of biodiversity, but it is an intricate relationship that is quite fragile to sudden environmental change.3

The oceans absorb nearly half of the carbon dioxide emitted into the atmosphere through chemical processes that occur at their surface.4 Dissolved carbon dioxide reacts with water to form carbonic acid, which dissociates into bicarbonate and carbonate ions; carbonate is the building block that many marine organisms combine with calcium to secrete their calcareous shells or skeletons. The increase of carbon dioxide in the atmosphere shifts this chemical equilibrium, creating higher levels of carbonic acid and leaving less carbonate available for calcium carbonate formation.4 Carbonic acid increases the acidity of the ocean, and this acidification has been shown to affect the skeletal formation of juvenile corals.5 It also weakens the structural integrity of existing coral skeletons and contributes to heightened dissolution of carbonate reef structure.3
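
The equilibria behind this shift can be written out explicitly; the reactions below are standard seawater carbonate chemistry rather than equations from the original article:

\[ \mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^- \rightleftharpoons 2H^+ + CO_3^{2-}} \]

\[ \mathrm{Ca^{2+} + CO_3^{2-} \rightleftharpoons CaCO_3\ (coral\ skeleton)} \]

Adding CO2 drives the first chain to the right, releasing hydrogen ions that lower the ocean’s pH; those extra hydrogen ions also recombine with carbonate ions to form bicarbonate, leaving less carbonate available for calcification and favoring dissolution of existing calcium carbonate.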

The massive influx of greenhouse gases into our atmosphere has also caused the planet to warm very quickly. Corals are in hot water, literally. Warmer ocean temperatures have deadly effects on corals and stress the symbiosis corals have with the algae that live in their tissues. Though corals can procure food by snatching plankton and other organisms with protruding tentacles, they rely heavily on the photosynthesizing Symbiodinium for most of their energy supply.3 Symbiodinium provides fixed carbon compounds and sugars necessary for coral skeletal growth; the coral provides the algae with a fixed position in the water column, protection from predators, and supplementary carbon dioxide.3 Symbiodinium typically live at temperatures only 1 to 2 °C below their upper thermal limit, and under climate change, sea surface temperatures can rise a few degrees above that limit. A sudden rise in sea temperature therefore stresses Symbiodinium, causing photosynthetic breakdown and the formation of reactive oxygen species that are toxic to corals.3 The algae leave or are expelled from the coral tissues as a mechanism for short-term survival in what is known as bleaching. Coral will die of starvation unless the stressor dissipates and the algae return to the coral’s tissues.3

Undoubtedly, the warming of the seas is one of the most widespread threats to coral reef ecosystems. However, other threats combined with global warming may have synergistic effects that heighten the vulnerability of coral to higher temperatures. These threats include coastal development that either destroys local reefs outright or displaces sediment onto nearby reefs, smothering them. Large human populations near coasts release high amounts of nitrogen and phosphorus into the ecosystem, which can increase the abundance of macroalgae and reduce hard coral cover. Increased nutrient loading has been shown to contribute to a higher prevalence of coral disease and coral bleaching.6 Recreational fishing and other activities can cause physical injury to corals, making them more susceptible to disease. Additionally, fishing heavily reduces the populations of many fish species that keep the ecosystem in balance.

The first documented global bleaching event, in 1998, killed off an estimated 16% of the world’s reefs; the third global bleaching event occurred only last year.1 Starting in mid-2015, an El Niño Southern Oscillation (ENSO) weather event spurred hot sea surface temperatures that decimated coral reefs across the Pacific, starting with Hawaii, then hitting places like American Samoa, Australia, and reefs in the Indian Ocean.7 The aftermath in the Great Barrier Reef is stunning; the northern portion of the reef experienced an average of 67% mortality.8 Some of these reefs, such as the ones surrounding Lizard Island, have been reduced to coral skeletons draped in macroalgae. With climate change, ENSO events are expected to become more frequent, exposing reefs around the world to greater thermal stress.1

Some scientists are hopeful that corals may be able to acclimatize in the short term and adapt in the long term to warming ocean temperatures. The key to this process lies in the genetic type of Symbiodinium that resides in the coral tissues. There are over 250 identified types of Symbiodinium, and genetically similar types are grouped into clades A-I. The different clades of these algae can affect the physiological performance of their coral host, including its thermotolerance, growth, and survival under more extreme light conditions.3 Clade D symbiont types are generally more thermotolerant than those in other clades. Studies have found a low abundance of Clade D organisms in healthy corals before a bleaching event, but after corals bleach and subsequently recover, they harbor a greater abundance of Clade D within their tissues.9,10 Many corals are generalists and have the ability to shuffle their symbiont type in response to stress.11

However, there is a catch. Though some algal members of Clade D are highly thermotolerant, they are also known as selfish opportunists. The reason healthy, stress-free corals generally do not have a symbiosis with this clade is that it tends to hoard the energy and organic compounds it creates from photosynthesis and shares fewer products with its coral host.3

Approaches that seemed too radical a decade ago are now widely considered the only means to save coral reefs from the looming threat of extinction. Ruth Gates, a researcher at the Hawaii Institute of Marine Biology, is exploring the idea of assisted evolution in corals. Her experiments include breeding individual corals in the lab, exposing them to an array of stressors, such as higher temperatures and lower pH, and picking the hardiest survivors to transplant onto reefs.12 In other areas of the globe, scientists are breeding coral larvae in labs and then releasing them onto degraded reefs, where they will hopefully settle and form colonies.

Governments and policymakers can create policies that have a significant impact on the health of reefs. The creation of marine protected areas that heavily regulate or outlaw harvesting of marine species offers sanctuary to a stressed and threatened ecosystem.3 There is still a long way to go, and the discoveries made so far about coral physiology and resilience are proving that the coral organism is incredibly complex.

The outlook for the future of healthy reefs is bleak; rising fossil fuel consumption mocks the global goal of keeping warming below two degrees Celsius. Local stressors such as overfishing, pollution, and coastal development continue to degrade reefs worldwide. Direct human intervention in the acclimatization and adaptation of corals may be instrumental to their survival. Rapid transitions to cleaner sources of energy, the creation of more marine protected areas, and rigorous management of reef fish stocks may ensure coral reef survival. If humans fail in this endeavor, one of the most biodiverse and productive ecosystems on Earth, one that has persisted for millions of years, may come crashing to an end within our lifetime.

References

  1. Cesar, H., L. Burke, and L. Pet-Soede. 2003. "The Economics of Worldwide Coral Reef Degradation." Arnhem, The Netherlands: Cesar Environmental Economics Consulting. http://pdf.wri.org/cesardegradationreport100203.pdf (accessed Dec 14, 2016)
  2. Gardner, T.A. et al. Science 2003, 301:958–960.
  3. Sheppard, C.; Davy, S.; Pilling, G. The Biology of Coral Reefs; Biology of Habitats Series; Oxford University Press, 1st Edition, 2009
  4. Branch, T. A. et al. Trends in Ecology and Evolution 2013, 28:178-185
  5. Foster, T. et al. Science Advances 2016, 2(2) e1501130
  6. Vega Thurber, R.L. et al. Glob Change Biol 2013, 20:544-554
  7. NOAA Coral Watch, NOAA declares third ever global coral bleaching event. Oct 8, 2015. http://www.noaanews.noaa.gov/stories2015/100815-noaa-declares-third-ever-global-coral-bleaching-event.html (accessed Dec 15, 2016)
  8. ARC Centre of Excellence for Coral Reef Studies, Life and Death after the Great Barrier Reef Bleaching. Nov 29, 2016 https://www.coralcoe.org.au/media-releases/life-and-death-after-great-barrier-reef-bleaching (accessed Dec 13, 2016)
  9. Jones A.M. et al. Proc. R. Soc. B 2008, 275:1359-1365
  10. Silverstein, R. et al. Glob Change Biology 2014, 1:236-249
  11. Correa, A.S.; Baker, A.C. Glob Change Biology 2010, 17:68-75
  12. Mascarelli, M. Nature 2014, 508:444-446

East Joins West: The Rise of Integrative Medicine

An ancient practice developed thousands of years ago and still used by millions of people all over the world, Traditional Chinese Medicine (TCM) has undoubtedly played a role in the field of medicine. But just what is TCM? Is it effective? And can it ever be integrated with Western medicine?

The techniques of TCM stem from the beliefs upon which it was founded. The theory of the yin and yang balance holds that all things in the universe are composed of a balance between the forces of yin and yang. While yin is generally associated with objects that are dark, still, and cold, yang is associated with objects that are bright, warm, and in motion.1 In TCM, illness is believed to be a result of an imbalance of yin or yang in the body. For instance, when yin does not cool yang, yang rises and headaches, flushing, sore eyes, and sore throats result. When yang does not warm yin, poor circulation of the blood, lethargy, pallor, and cold limbs result. TCM aims to determine the nature of the disharmony and correct it through a variety of approaches; as balance is restored to the body, so is health.2

Another fundamental concept of TCM is qi, the energy or vital force responsible for controlling the functions of the human mind and body. Qi flows through the body along 12 meridians, or channels, that correspond to the 12 major organ systems, plus 8 extra meridians that are interconnected with the major channels. Just as with an imbalance between yin and yang, disruption of this flow causes disease, and correction of the flow restores the body to balance.2 In TCM, disease is not viewed as something that a patient has; rather, it is something that the patient is. There is no isolated entity called “disease,” only a whole person whose body functions may be balanced or imbalanced, harmonious or disharmonious.3 Thus, TCM practitioners aim to increase or decrease qi in the body to create a healthy yin-yang balance through various techniques such as acupuncture, herbal medicine, nutrition, and mind-body exercise (tai chi, yoga). Eastern treatments are dismissed by some as superfluous to the recovery process, or even harmful if used in place of more conventional treatments. However, there is evidence that Eastern treatments can be very effective parts of recovery plans.

The most common TCM treatments are acupuncture, which involves inserting needles at precise meridian points, and herbal medicine, which refers to using plant products (seeds, berries, roots, leaves, bark, or flowers) for medicinal purposes. Acupuncture seeks to improve the body’s functions by stimulating specific anatomic sites—commonly referred to as acupuncture points, or acupoints. It releases the blocked qi in the body, which may be causing pain, lack of function, or illness. Although the effects of acupuncture are still being researched, results from several studies suggest that it can stimulate function in the body and induce its natural healing response through various physiological systems.4 According to the World Health Organization (WHO), acupuncture is effective for treating 28 conditions, while limited but promising evidence suggests it may be effective for many more. Acupuncture seems to have gained the most clinical acceptance as a pain reduction therapy. An international team of experts pooled the results of 29 studies on chronic pain involving nearly 18,000 participants—some had acupuncture, some had “sham” acupuncture, and some did not have acupuncture at all. Overall, the study found acupuncture treatments to be superior to both a lack of acupuncture treatment and sham acupuncture treatments for the reduction of chronic pain, suggesting that such treatments are a reasonable option for afflicted patients.5 According to a study carried out at the Technical University of Munich, people with tension headaches and/or migraines may find acupuncture to be very effective in alleviating their symptoms.6 Another study at the University of Texas M.D. Anderson Cancer Center found that twice-weekly acupuncture treatments relieved debilitating symptoms of xerostomia--severe dry mouth--among patients undergoing radiation for head and neck cancer.7 Additionally, acupuncture has been demonstrated both to enhance performance in the memory-related brain regions of patients with mild cognitive impairment (who have an increased risk of progressing towards Alzheimer’s disease)8 and to provide therapeutic advantages in regulating inflammation in infection and inflammatory disease.9

Many studies have also demonstrated the efficacy of herbal medicine in treating various illnesses. Recently, the WHO estimated that 80% of people worldwide rely on herbal medicines for some part of their primary health care. Researchers from the University of Adelaide have shown that a mixture of extracts from the roots of two medicinal herbs, Kushen and Baituling, works to kill cancer cells.10 Furthermore, scientists have concluded that herbal plants have the potential to delay the development of diabetic complications, although more investigations are necessary to characterize this antidiabetic effect.11 Finally, a study found that Chinese herbal formulations appeared to alleviate symptoms for some patients with Irritable Bowel Syndrome, a common functional bowel disorder that is characterized by chronic or recurrent abdominal pain and currently has no reliable medical treatment.12

Both TCM and Western medicine seek to ease pain and improve function. Can the two be combined? TCM was largely ignored by Western medical professionals until recent years, but it is slowly gaining traction among scientists and clinicians as studies show that an integrative approach can be effective. For instance, for patients dealing with chronic pain, Western medicine can stop the pain quickly with medication or interventional therapy, while TCM can provide a longer-lasting solution with milder side effects and a greater focus on treating the underlying illness.13 A study by Cardiff University’s School of Medicine and Peking University in China showed that combining TCM and Western medicine could offer hope for developing new treatments for liver, lung, bone, and colorectal cancers.14 Also, studies on the use of traditional Chinese medicines for the treatment of diseases like bronchial asthma, atopic dermatitis, and IBS showed that an interdisciplinary approach to TCM may lead to the discovery of new medicines.15

TCM is still a developing field in the Western world, and more research and clinical trials on the benefits and mechanisms of TCM are being conducted. While TCM methods such as acupuncture and herbal medicine must be further examined to be accepted as credible treatment techniques in modern medicine, they have been shown to help treat various illnesses and conditions. Therefore, while TCM is unlikely to be a suitable standalone option for disease management, it does have a place in treatment plans alongside Western medicine. Utilizing TCM as a complement to Western medicine offers hope for increasing the effectiveness of healthcare treatment.

References

  1. Yin and Yang Theory. Acupuncture Today. http://www.acupuncturetoday.com/abc/yinyang.php (accessed Dec. 15, 2016).
  2. Lao, L. et al. Integrative pediatric oncology. 2012, 125-135.
  3. The Conceptual Differences between Chinese and Western Medicine. http://www.mosherhealth.com/mosher-health-system/chinese-medicine/chinese-versus-western (accessed Dec. 15, 2016).
  4. How Acupuncture Can Relieve Pain and Improve Sleep, Digestion, and Emotional Well-being. http://cim.ucsd.edu/clinical-care/acupuncture.shtml (accessed Dec. 15, 2016).
  5. Vickers, A. J. et al. Arch Intern Med. 2012, 172, 1444-1453.
  6. Melchart, D. et al. BMJ. 2005, 331, 376-382.
  7. Meng, Z. et al. Cancer. 2012, 118, 3337-3344.
  8. Feng, Y. et al. Magn Reson Imaging. 2012, 30, 672-682.
  9. Torres-Rosas, R. et al. Nat Med. 2014, 20, 291-295.
  10. Qu, Z. et al. Oncotarget. 2016, 7, 66003-66019.
  11. Bnouham, M. et al. Int J Diabetes Metab. 2006, 14, 1.
  12. Bensoussan, A. et al. JAMA. 1998, 280, 1585-1589.
  13. Jiang, W. Trends Pharmacol Sci. 2005, 26, 558-563.
  14. Combining Chinese, Western medicine could lead to new cancer treatments. https://www.sciencedaily.com/releases/2013/09/130928091021.htm (accessed Dec. 15, 2016).
  15. Yuan, R.; Yuan, L. Pharmacol Ther. 2000, 86, 191-198.

Transplanting Time

Nowadays, it is possible for patients with organ failure to live for decades after receiving an organ transplant. Since the first successful kidney transplant in the 1950s,1,2 advances in the procedure, including the improvement of drugs that help the body accept the foreign organ,3 have allowed surgeons to transplant a wider variety of organs, such as the heart, lungs, liver, and pancreas.2,4 Over 750,000 lives have been saved and extended through the use of organ transplants, an unthinkable feat just over 50 years ago.2 Limitations to organ transplantation, such as the lack of available organs, along with new advancements that could improve the process, keep the ethics of transplantation under ongoing discussion.

The idea behind an organ transplant is simple. When both the recipient and the new organ are ready, surgeons detach the blood vessels from the failing organ, remove it, and put the new organ in its place by reattaching the patient’s blood vessels to it. To prevent rejection of the new organ, the recipient must continue to take immunosuppressant drugs.3 In exchange for this lifelong commitment, the patient often receives a longer, more enjoyable life.2

The organs used in transplants usually originate from a cadaver or a living donor.1-3 Some individuals are deterred from becoming organ donors because they are concerned that doctors will not do their best to save them if their organs are needed. This concern is further complicated by blurred definitions of “dead”; in one ethically ambiguous situation, dying patients who are brain dead may be taken off life support so that their organs may be donated.1-3 Stories of patients who reawaken from comas after being pronounced “dead” may give some encouragement, but a patient’s family and doctors must decide when to give up that hope. Aside from organs received from the deceased, living donors, who may be family, friends, or strangers to the recipient, may donate organs that they can live without, such as a lung or a kidney.1-3 However, potentially injuring a healthy person for the sake of another may contradict the oath that doctors take, which instructs physicians to help, not harm, their patients.1

One of the most pressing issues today stems from the following question: who receives the organs? The transplant waiting list is constantly growing because the number of organs needed greatly exceeds the number of organs that are available.1-3 Unfortunately, 22 patients die every day while they are waiting for a new organ.4 Because the issue of receiving a transplant is time-sensitive, medical officials must decide who receives a transplant first. Should the person who needs a transplant the most have greater priority over another who has been on the waiting list longer? Should a child be eligible before a senior? Should a lifelong smoker be able to obtain a new lung? Currently, national policy takes different factors into account depending on the organ to be transplanted. For example, other than compatibility requirements, patients on the waiting list for liver transplants are ranked solely on their medical need and distance from the donor hospital.4 On the other hand, people waiting for kidneys are further considered based on whether they have donated a kidney previously, their age, and their time spent on the waiting list.4

Despite various efforts to increase the number of organ donors through education and legislation, the supply of organs does not meet the current and increasing need for them.1-3 As a result, other methods of obtaining these precious resources are currently being developed, one of which is the use of animal organs, a process known as xenotransplantation. Different animal cells, tissues, and organs are being researched for use in humans, giving some hope to those on the waiting list or those who do not quite qualify for a transplant.2,3 In the past, surgeons have attempted to use a primate’s heart and liver for transplantation, but the surgical outcomes were poor.2 Other applications of animal tissue are more promising, such as the use of pigs’ islet cells, which can produce insulin, in humans.2 However, a considerable risk of using these animal parts is that new diseases may be passed from animal to human. Additionally, animal rights groups have protested the use of primates as a source of whole organs.2

Another possible solution to the deficit of organs is the use of stem cells, which have the potential to grow and specialize. Embryonic stem cells can repair and regenerate damaged organs, but harvesting them destroys the source embryo.2,3 Although the embryos are created outside of humans, there are objections to their use. What differentiates a mass of cells from a living person? Fortunately, adult stem cells can be used for treatment as well.2 Researchers have developed a new method that causes adult stem cells to return to a state similar to that of embryonic stem cells, although the efficacy of these induced stem cells compared to embryonic stem cells is still unclear.7

Regardless of the continuing controversy over the ethics of transplantation, the boundaries of organ transplants are being pushed further and further. Head transplants have been attempted for over a century in other animals, such as dogs,5 but several doctors want to move on to work with humans. To attach a head to a new body, the surgeon would need to connect the old and new nerves in the spinal cord so that the patient’s brain could interact with the host body. Progress is already being made in repairing severe spinal cord injuries. In China, Dr. Ren Xiaoping plans to attempt a complete body transplant, believed by some to be currently impossible.6 There is not much information about the amount of pain that the recipient of a body transplant would have to endure,5 so the procedure may ultimately decrease, rather than increase, the patient’s quality of life. Overall, most agree that it would be unethical to continue, considering the limited success of such projects and the high chance of failure and death.

Organ transplants and new developments in the field have raised many interesting questions about the ethics of the organ transplantation process. As a society, we should determine how to address these problems and set boundaries to decide what is “right.”

References

  1. Jonsen, A. R. Virtual Mentor. 2012, 14, 264-268.
  2. Abouna, G. M. Med. Princ. Prac. 2002, 12, 54-69.
  3. Paul, B. et al. Ethics of Organ Transplantation. University of Minnesota Center for Bioethics [Online], February 2004 http://www.ahc.umn.edu/img/assets/26104/Organ_Transplantation.pdf (accessed Nov. 4, 2016)
  4. Organ Procurement and Transplantation Network. https://optn.transplant.hrsa.gov/ (accessed Nov. 4 2016)
  5. Lamba, N. et al. Acta Neurochirurgica. 2016.
  6. Tatlow, D. K. Doctor’s Plan for Full-Body Transplants Raises Doubts Even in Daring China. The New York Times. http://www.nytimes.com/2016/06/12/world/asia/china-body-transplant.html?_r=0 (accessed Nov. 4, 2016)
  7. National Institutes of Health. stemcells.nih.gov/info/basics/6.htm (accessed Jan. 23, 2017)

 

Molecular Mechanisms Behind Alzheimer’s Disease and Epilepsy

Abstract

Seizures are characterized by periods of high neuronal activity and are caused by alterations in synaptic function that disrupt the equilibrium between excitation and inhibition in neurons. While often associated with epilepsy, seizures can also occur after brain injuries and, interestingly, are common in Alzheimer’s patients. While Alzheimer’s patients rarely show the common physical signs of seizures, recent research has shown that electroencephalogram (EEG) technology can detect nonconvulsive seizures in these patients. Furthermore, patients with Alzheimer’s have a 6- to 10-fold increase in the probability of developing seizures during the course of their disease compared to healthy controls.2 While previous research has focused on the underlying molecular mechanisms of Aβ plaques in the brain, the research presented here relates seizures to the cognitive decline in Alzheimer’s patients in an attempt to find therapeutic approaches that tackle both epilepsy and Alzheimer’s.

Introduction

The hippocampus is found in the temporal lobe and is involved in the creation and consolidation of new memories. It is the first part of the brain to undergo neurodegeneration in Alzheimer’s disease, and as such, the disease is characterized by memory loss. Alzheimer’s is different from other types of dementia in that patients’ episodic memories are affected strongly and quickly. Likewise, patients who suffer from epilepsy also exhibit neurodegeneration in their hippocampi and have impaired episodic memories. Such similarities led researchers to hypothesize that the two diseases share pathophysiological mechanisms. In one study, four epileptic patients exhibited progressive memory loss that clinically resembled Alzheimer’s disease.6 In another study, researchers found that seizures precede cognitive symptoms in late-onset Alzheimer’s disease.7 This led researchers to hypothesize that a high incidence of seizures increases the rate of cognitive decline in Alzheimer’s patients. However, much is yet to be discovered about the molecular mechanisms underlying seizure activity and cognitive impairments.

Amyloid precursor protein (APP) is the precursor molecule to Aβ, the polypeptide that makes up the Aβ plaques found in the brains of Alzheimer’s patients. In many Alzheimer’s labs, the J20 APP mouse model of disease is used to simulate human Alzheimer’s. These mice overexpress the human form of APP, develop amyloid plaques, and have severe deficits in learning and memory. The mice also have high levels of epileptiform activity and exhibit spontaneous seizures that are characteristic of epilepsy.11 Understanding the long-lasting effects of these seizures is important in designing therapies for a disease that is affected by recurrent seizures. Thus, comparing the APP mouse model of disease with the temporal lobe epilepsy (TLE) mouse model is essential in unraveling the mysteries of seizures and cognitive decline.

Shared Pathology of the Two Diseases

The molecular mechanisms behind the two diseases are still unknown and under much research. An early observation in both TLE and Alzheimer’s involved a decrease in calbindin-D28K, a calcium-buffering protein, in the hippocampus.10 Neuronal calcium buffering and calcium homeostasis are well known to be involved in learning and memory. Calcium channels are involved in synaptic transmission, and a high calcium ion influx often results in altered neuronal excitability and calcium signaling. Calbindin acts as a buffer that binds free Ca2+ and is thus critical to calcium homeostasis.

Some APP mice have severe seizures and an extremely high loss of calbindin, while other APP mice exhibit no loss of calbindin. The reasons behind this are unclear, but, like human patients, mice are highly variable.

The loss of calbindin in both Alzheimer’s and TLE is highly correlated with cognitive deficits. However, the molecular mechanism behind the calbindin loss is unclear. Many researchers are now working to uncover this mechanism in the hopes of preventing the calbindin loss, thereby improving therapeutic avenues for Alzheimer’s and epilepsy patients.

Seizures and Neurogenesis

The dentate gyrus is one of the two areas of the adult brain that exhibit neurogenesis.13 Understanding neurogenesis in the hippocampus can lead to promising therapeutic targets in the form of neuronal replacement therapy. Preliminary research in Alzheimer’s and TLE has shown changes in neurogenesis over the course of the disease.14 However, whether neurogenesis is increased or decreased remains a controversial topic, as studies frequently contradict each other.

Many researchers study neurogenesis in the context of different diseases. In memory research, neurogenesis is thought to be involved in both memory formation and memory consolidation.12 Alzheimer’s leads to a gradual decrease in the generation of neural progenitors, the stem cells that can differentiate into a variety of neuronal and glial cell types.8 Further studies have shown that the neural stem cell pool undergoes accelerated depletion due to seizure activity.15 Initially, heightened neuronal activity stimulates neural progenitors to divide much more rapidly than in controls. This rapid division depletes the limited stem cell pool prematurely. Interestingly, this enhanced neurogenesis is detected long before other AD-linked pathologies. When the APP mice become older, the stem cell pool is depleted to the point where neurogenesis occurs much more slowly than in controls.9 This is thought to underlie memory deficits, in that the APP mice can no longer consolidate new memories as effectively. The same phenomenon occurs in mice with TLE.
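
As a purely illustrative toy model (the rates below are invented, not drawn from the cited studies), the sketch assumes each division carries a small chance of permanently removing a stem cell from the self-renewing pool, so a seizure-elevated division rate exhausts the pool well before the control pool runs down.

def remaining_pool(initial_pool, divisions_per_month, loss_per_division, months):
    """Deplete a finite stem cell pool; each division removes a small share of it."""
    pool = initial_pool
    for _ in range(months):
        pool -= pool * divisions_per_month * loss_per_division
        pool = max(pool, 0.0)
    return pool

initial = 1000.0
for label, rate in (("control division rate", 1.0), ("seizure-elevated rate", 3.0)):
    left = remaining_pool(initial, rate, loss_per_division=0.02, months=24)
    print(f"{label}: {left:.0f} of {initial:.0f} progenitors remain after 24 months")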

The discovery of this premature neurogenesis in Alzheimer’s disease has both diagnostic and therapeutic implications. For one, enhanced neurogenesis could serve as a marker for Alzheimer’s long before any symptoms are present. Furthermore, targeting increased neurogenesis holds potential as a therapeutic avenue, leading to better remedies for preventing the pathological effects of recurrent seizures in Alzheimer’s disease.

Conclusion

Research linking epilepsy with other neurodegenerative disorders is still in its infancy, and leaves many researchers skeptical about the potential to create a single therapy for multiple conditions. Previous EEG studies recorded Alzheimer’s patients for a few hours at a time and found limited epileptiform activity; enhanced overnight technology has shown that about half of Alzheimer’s patients have epileptiform activity in a 24-hour period, with most activity occurring during sleep.1 Recording patients for even longer periods of time will likely raise this percentage. Further research is being conducted to show the importance of seizures in enhancing cognitive deficits and understanding Alzheimer’s disease, and could lead to significant therapeutic advances in the future.

References

  1. Vossel, K. A. et al. Incidence and Impact of Subclinical Epileptiform Activity. Ann Neurol. 2016.
  2. Pandis, D.; Scarmeas, N. Seizures in Alzheimer Disease: Clinical and Epidemiological Data. Epilepsy Curr. 2012. 12(5), 184-187.
  3. Chin, J.; Scharfman, H. Shared cognitive and behavioral impairments in epilepsy and Alzheimer’s disease and potential underlying mechanisms. Epilepsy & Behavior. 2013. 26, 343-351.
  4. Carter, D. S. et al. Long-term decrease in calbindin-D28K expression in the hippocampus of epileptic rats following pilocarpine-induced status epilepticus. Epilepsy Res. 2008. 79(2-3), 213-223.
  5. Jin, K. et. al. Increased hippocampal neurogenesis in Alzheimer’s Disease. Proc Natl Acad Sci. 2004. 101(1), 343-347.
  6. Ito, M., Echizenya, N., Nemoto, D., & Kase, M. (2009). A case series of epilepsy-derived memory impairment resembling Alzheimer disease. Alzheimer Disease and Associated Disorders, 23(4), 406–409.
  7. Picco, A., Archetti, S., Ferrara, M., Arnaldi, D., Piccini, A., Serrati, C., … Nobili, F. (2011). Seizures can precede cognitive symptoms in late-onset Alzheimer’s disease. Journal of Alzheimer’s Disease: JAD, 27(4), 737–742.
  8. Zeng, Q., Zheng, M., Zhang, T., & He, G. (2016). Hippocampal neurogenesis in the APP/PS1/nestin-GFP triple transgenic mouse model of Alzheimer’s disease. Neuroscience, 314, 64–74. https://doi.org/10.1016/j.neuroscience.2015.11.05
  9. Lopez-Toledano, M. A., Ali Faghihi, M., Patel, N. S., & Wahlestedt, C. (2010). Adult neurogenesis: a potential tool for early diagnosis in Alzheimer’s disease? Journal of Alzheimer’s Disease: JAD, 20(2), 395–408. https://doi.org/10.3233/JAD-2010-1388
  10. Palop, J. J., Jones, B., Kekonius, L., Chin, J., Yu, G.-Q., Raber, J., … Mucke, L. (2003). Neuronal depletion of calcium-dependent proteins in the dentate gyrus is tightly linked to Alzheimer’s disease-related cognitive deficits. Proceedings of the National Academy of Sciences of the United States of America, 100(16), 9572–9577. https://doi.org/10.1073/pnas.1133381100
  11. Research Models: J20. AlzForum: Networking for a Cure.
  12. Kitamura, T.; Inokuchi, K. (2014). Role of adult neurogenesis in hippocampal-cortical memory consolidation. Molecular Brain 7:13. doi: 10.1186/1756-6606-7-13.
  13. Piatti, V.; Ewell, L.; Leutgeb, J. Neurogenesis in the dentate gyrus: carrying the message or dictating the tone. Frontiers in Neuroscience 7:50. doi: 10.3389/fnins.2013.00050
  14. Noebels, J. (2011). A Perfect Storm: Converging Paths of Epilepsy and Alzheimer’s Dementia Intersect in the Hippocampal Formation. Epilepsia 52, 39-46. doi: 10.1111/j.1528-1167.2010.02909.x
  15. Jasper, H. et al. In Jasper’s Basic Mechanisms of the Epilepsies, 4th ed.; Rogawski, M. et al., Eds.; Oxford University Press: USA, 2012.

Detection of Gut Inflammation and Tumors Using Photoacoustic Imaging

Abstract:

Photoacoustic imaging is a technique in which contrast agents absorb photon energy and emit signals that can be analyzed by ultrasound transducers. This method allows for imaging at unprecedented depths and can provide a non-invasive alternative to current diagnostic tools used to detect internal tissue inflammation.1 The Rice iGEM team strove to use photoacoustic technology and biomarkers to develop a noninvasive method of locally detecting gut inflammation and colon cancer. As a first step, we genetically engineered Escherichia coli to express the near-infrared fluorescent proteins iRFP670 and iRFP713 and conducted tests using biomarkers to determine whether expression was confined to a single, localized area.

Introduction:

In photoacoustic imaging, laser pulses of a specific, predetermined wavelength (the excitation wavelength) activate and thermally excite a contrast agent such as a pigment or protein. The heat makes the contrast agent expand and contract, producing an ultrasonic emission at a wavelength longer than the excitation wavelength. These emission data are used to produce 2D or 3D images of tissues with high resolution and contrast.2
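
The strength of that initial pressure wave is commonly summarized in the photoacoustics literature by a simple relation (standard background, not an equation from the original report):

\[ p_0 = \Gamma \, \mu_a \, F \]

where \( p_0 \) is the initial acoustic pressure, \( \Gamma \) is the Grüneisen parameter describing how efficiently absorbed heat is converted into pressure, \( \mu_a \) is the optical absorption coefficient of the contrast agent at the excitation wavelength, and \( F \) is the local laser fluence. A contrast agent that absorbs strongly at the chosen wavelength therefore produces a stronger ultrasound signal, which is why near-infrared absorbers such as the iRFPs are attractive reporters for deep tissue.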

The objective of this photoacoustic imaging project is to engineer bacteria that produce contrast agents in the presence of biomarkers specific to gut inflammation and colon cancer, and ultimately to deliver those bacteria into the intestines. The bacteria will produce the contrast agents in response to the biomarkers, and lasers will excite the contrast agents so that they emit signals from local, targeted areas, allowing for a non-invasive imaging method. Our goal is to develop a non-invasive photoacoustic imaging delivery method that uses engineered bacteria to report gut inflammation and identify colon cancer. To achieve this, we constructed plasmids in which a nitric-oxide-sensing promoter (soxR/S) or a hypoxia-sensing promoter (narK or fdhf) is fused to genes encoding violacein or the near-infrared fluorescent proteins iRFP670 and iRFP713, which have emission wavelengths of 670 nm and 713 nm, respectively. Nitric oxide and hypoxia, biological markers of gut inflammation in both mice and humans, would therefore promote expression of the desired iRFPs or violacein.3,4

Results and Discussion

Arabinose

To test the inducibility and detectability of our iRFPs, we used pBAD, a promoter from the arabinose operon of E. coli.5 We built genetic circuits consisting of the pBAD expression system and iRFP670 and iRFP713 (Fig. 1a). AraC, a constitutively produced transcription regulator, changes conformation in the presence of arabinose, allowing activation of the pBAD promoter.

[Figure 1b]

Fluorescence emitted by the iRFPs increased significantly in wells containing increasing concentrations of arabinose (Figure 2). This correlation suggests that our selected iRFPs fluoresce sufficiently when their promoters are induced by environmental signals. The results of the arabinose assays showed that we successfully produced iRFPs; the next step was to engineer bacteria to produce the same iRFPs under nitric oxide and hypoxia.
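
To illustrate how such an induction curve can be quantified, the sketch below fits a Hill-type dose-response to fluorescence/OD600 readings. The numbers are hypothetical placeholders rather than our measured data, and this is just one reasonable way to summarize induction strength.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical arabinose induction data (placeholders, not the team's measurements).
arabinose_mM = np.array([0.0, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0])     # inducer concentration
fluor_per_od = np.array([120, 150, 400, 900, 2800, 3500, 3900.0])  # fluorescence / OD600

def hill(x, basal, vmax, k_half, n):
    """Basal expression plus a Hill-shaped inducible response."""
    return basal + vmax * x**n / (k_half**n + x**n)

params, _ = curve_fit(hill, arabinose_mM, fluor_per_od,
                      p0=[100, 4000, 0.1, 1.5],
                      bounds=([0, 0, 1e-3, 0.5], [1e4, 1e5, 10, 4]))
basal, vmax, k_half, n = params
print(f"basal = {basal:.0f}, max induction = {vmax:.0f}, "
      f"half-maximal arabinose = {k_half:.3f} mM, Hill coefficient = {n:.2f}")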

Nitric Oxide

The next step was to test nitric oxide induction of iRFP fluorescence. We used a genetic circuit consisting of a constitutive promoter and the soxR gene, which expresses the SoxR protein (Figure 1b). In the presence of nitric oxide, SoxR changes conformation and activates the soxS promoter, which in turn drives expression of the desired gene. The source of nitric oxide added to our engineered bacteria samples was the diethylenetriamine/nitric oxide adduct (DETA/NO).

Figure 3 shows no significant difference in fluorescence/OD600 across DETA/NO concentrations. This finding implies that our engineered bacteria were unable to detect the nitric oxide biomarker and produce iRFP; future troubleshooting includes verifying promoter strength and confirming correct sample conditions. Furthermore, nitric oxide has an extremely short half-life of a few seconds, which may not give most of the engineered bacteria enough time to sense it, limiting iRFP production and fluorescence.

[Figure 1c]

Hypoxia

We also tested induction of iRFP fluorescence with the hypoxia-inducible promoters narK and fdhf. We expected iRFP production and fluorescence to increase under anaerobic conditions with both promoters (Figure 1c and d).

However, we observed the opposite result: fluorescence decreased for both iRFP constructs under both promoters when exposed to hypoxia (Figure 4). This finding suggests that our engineered bacteria were unable to detect the hypoxia biomarker and produce iRFP; future troubleshooting includes verifying promoter strength and confirming correct sample conditions.

Future Directions

Further studies include testing the engineered bacteria co-cultured with colon cancer cells and developing other constructs that will enable bacteria to sense carcinogenic tumors and make them fluoresce for imaging and treatment purposes.

Violacein has anti-cancer therapy potential

Violacein is a fluorescent pigment suitable for in vivo photoacoustic imaging in the near-infrared range and shows anti-tumoral activity.6 It has high potential for future work in bacterial tumor targeting. We have succeeded in assembling the violacein gene cluster using Golden Gate shuffling7 and intend to use it in experiments such as the nitric oxide and hypoxia assays we used for iRFP670 and iRFP713.

Invasin can allow for targeted cell therapy

Certain bacteria are able to invade mammalian cells using invasin, a protein that binds β1 integrins on the mammalian cell surface.8-9 If we engineer E. coli that express invasin as well as the genetic circuits capable of sensing nitric oxide and/or hypoxia, we could potentially allow the E. coli to invade colon cells and release contrast agents for photoacoustic imaging, or therapeutic agents such as violacein, only in the presence of specific biomarkers.10 Additionally, if we engineer the invasin-expressing bacteria to invade only colon cancer cells and not normal cells, this approach would potentially allow localized targeting and treatment of cancerous tumors. This design also lets us create test conditions with parameters closer to those of the human gut, since we will be unable to test our engineered bacteria in an actual human gut.

Acknowledgements:

The International Genetically Engineered Machine (iGEM) Foundation (igem.org) is an independent, non-profit organization dedicated to education and competition, the advancement of synthetic biology, and the development of an open community and collaboration.

This project would not have been possible without the patient instruction and generous encouragement of our Principal Investigators (Dr. Beth Beason-Abmayr and Dr. Jonathan Silberg, BioSciences at Rice), our graduate student advisors and our undergraduate team. I would also like to thank our iGEM collaborators.

This work was supported by the Wiess School of Natural Sciences and the George R. Brown School of Engineering and the Departments of BioSciences, Bioengineering, and Chemical and Biomolecular Engineering at Rice University; Dr. Rebecca Richards-Kortum, HHMI Pre-College and Undergraduate Science Education Program Grant #52008107; and Dr. George N. Phillips, Jr., Looney Endowment Fund.

If you would like to know more information about our project and our team, please visit our iGEM wiki at 2016.igem.org/Team:Rice.

References

  1. Ntziachristos, V. Nat Methods. 2010, 7, 603-614.
  2. Weber, J. et al. Nat Methods. 2016, 13, 639-650.
  3. Archer, E. J. et al. ACS Synth. Biol. 2012, 1, 451–457.
  4. Höckel, M.; Vaupel, P. JNCI J Natl Cancer Inst. 2001, 93, 266-276.
  5. Guzman, L. M. et al. J of Bacteriology. 1995, 177, 4121-4130.
  6. Shcherbakova, D. M.; Verkhusha, V. V. Nat Methods. 2013, 10, 751-754.
  7. Engler, C. et al. PLOS One. 2009, 4, 1-9.
  8. Anderson, J. et al. Sci Direct. 2006, 355, 619–627
  9. Arao, S. et al. Pancreas. 2000, 20, 619-627.
  10. Jiang, Y. et al. Sci Rep. 2015, 19, 1-9.

Venom, M.D.: How Some of the World’s Deadliest Toxins Fight Cancer

Nature, as mesmerizing as it can be, is undeniably hostile. There are endless hazards, both living and nonliving, scattered throughout all parts of the planet. At first glance, the world seems to be quite unwelcoming. Yet through science, humans find ways to survive nature and gain the ability to see its beauty. A fascinating way this is achieved involves taking one deadly element of nature and utilizing it to combat another. In labs and universities across the world today, scientists are fighting one of the world’s most devastating diseases, cancer, with a surprising weapon: animal toxins.

Various scientists around the globe are collecting venomous or poisonous animals and studying the biochemical weapons they synthesize. In their natural forms, these toxins could kill or cause devastating harm to the human body. However, by closely inspecting the chemical properties of these toxins, we have uncovered many potential ways they could help us understand, treat, and cure various diseases. These discoveries have shed a new light on many of the deadly animals we have here on Earth. Mankind may have gained new friends—ones that could be crucial to our survival against cancer and other illnesses.

Take the scorpion, for example. This arachnid exists in hundreds of forms across the globe. Although its stinger is primarily used for killing prey, it is often used for defense against other animals, including humans. Most scorpion stings result in nothing more than pain, swelling, and numbness of the area. However, some species of scorpions are capable of causing more severe symptoms, including death.1 One such species, Leiurus quinquestriatus (more commonly known as the “deathstalker scorpion”), is said to possess one of the most potent venoms on the planet.2 Yet despite its potency, deathstalker venom is a prime target for cancer research. One team of scientists from the University of Washington used the chlorotoxin in the venom to assist in gene therapy (the insertion of genes to fight disease) to combat glioma, a widespread and fatal brain cancer. Chlorotoxin has two important properties that make it effective against glioma. First, it selectively binds to a surface protein found on many tumor cells. Second, chlorotoxin is able to inhibit the spread of tumors by disabling their metastatic ability. The scientists combined the toxin with nanoparticles in order to increase the effectiveness of gene therapy.3,4

Other scientists found a different way to treat glioma using deathstalker venom. Researchers at the Transmolecular Corporation in Cambridge, Massachusetts, produced an artificial version of the venom and attached it to a radioactive form of iodine, I-131. The resulting compound was able to find and kill glioma cells by releasing radiation, most of which was absorbed by the cancerous cells.5 There are instances of other scorpion species aiding in cancer research as well, such as the Centruroides tecomanus scorpion in Mexico. This species’ toxin contains peptides that can specifically target lymphoma cells and kill them by damaging their ion channels. The selective nature of the peptides makes them especially useful as a cancer treatment, as they leave healthy cells untouched.6

Scorpions have demonstrated tremendous medical potential, but they are far from the only animals that could contribute to the fight against cancer. Another animal that may help us overcome this disease is the wasp. To most people, wasps are nothing more than annoying pests that disturb our outdoor life. Wasps are known for their painful stings, which they use both for defense and for hunting. Yet science has shown that the venom of these insects may have medicinal properties. Researchers from the Institute for Biomedical Research in Barcelona investigated a peptide found in wasp venom for its ability to treat breast cancer. The peptide is able to kill cancer cells by puncturing the cell membrane. In order to make this peptide useful in treatment, it must be able to target cancer cells specifically. Scientists overcame the specificity problem by conjugating the venom peptide with a targeting peptide specific to cancer cells.7 Similar techniques were used in Brazil while scientists at São Paulo State University studied the species Polybia paulista, another member of the wasp family. This animal’s venom contains MP1, which also acts by destroying the cell’s plasma membrane. When a cell is healthy, certain components of the membrane sit on its inner side, facing the interior of the cell. However, in a cancerous cell, these components (namely, the phospholipids phosphatidylserine (PS) and phosphatidylethanolamine (PE)) are on the outer side of the membrane. In a series of simulations, MP1 was observed to selectively and aggressively target membranes that had PS and PE on the outside of the cell. Evidently, targeted administration of wasp toxins is a viable strategy against cancer.8

Amazingly enough, the list of cancer-fighting animals at our disposal does not end here. One of the most feared creatures on Earth, the snake, is also among the animals under scientific investigation for possible medical breakthroughs. One group of scientists discovered that a compound from the venom of a Southeast Asian pit viper (Calloselasma rhodostoma) binds to a platelet receptor protein called CLEC-2, causing clotting of the blood. A different molecule expressed by cancer cells, podoplanin, binds to CLEC-2 in a manner similar to the snake venom, also causing blood clotting. Why does this matter? In the case of cancer, tumors induce blood clots to protect themselves from the immune system, allowing them to grow freely. They also induce the formation of lymphatic vessels to assist their survival. The interaction between CLEC-2 and podoplanin is vital for the formation of both these blood clots and lymphatic vessels, and is thus critical to the persistence of tumors. If a drug were developed to inhibit this interaction, it could be very effective in cancer treatment and prevention.9 Research surrounding the snake venom may help us develop such an inhibitor.

Even though there may be deadly animals roaming the Earth, it is important to remember that they have done more for us than most people realize. So next time you see a scorpion crawling around or a wasp buzzing in the air, react with appreciation, rather than with fear. Looking at our world in this manner will make it seem like a much friendlier place to live.

References

  1. Mayo Clinic. http://www.mayoclinic.org/diseases-conditions/scorpion-stings/home/ovc-20252158 (accessed Oct. 29, 2016).
  2. Lucian K. Ross. Leiurus quinquestriatus (Ehrenberg, 1828). The Scorpion Files, 2008. http://www.ntnu.no/ub/scorpion-files/l_quinquestriatus_info.pdf (accessed Nov. 3, 2016).
  3. Kievit F.M. et al. ACS Nano, 2010, 4, (8), 4587–4594.
  4. University of Washington. "Scorpion Venom With Nanoparticles Slows Spread Of Brain Cancer." ScienceDaily. ScienceDaily, 17 April 2009. <www.sciencedaily.com/releases/2009/04/090416133816.htm>.
  5. Health Physics Society. "Radioactive Scorpion Venom For Fighting Cancer." ScienceDaily. ScienceDaily, 27 June 2006. <www.sciencedaily.com/releases/2006/06/060627174755.htm>.
  6. Investigación y Desarrollo. "Scorpion venom is toxic to cancer cells." ScienceDaily. ScienceDaily, 27 May 2015. <www.sciencedaily.com/releases/2015/05/150527091547.htm>.
  7. Moreno M. et al. J Control Release, 2014, 182, 13-21.
  8. Leite N.B. et al. Biophysical Journal, 2015, 109, (5), 936-947.
  9. Suzuki-Inoue K. et al. Journal of Biological Chemistry, 2010, 285, 24494-24507.

 
