Modeling Climate Change: A Gift From the Pliocene

Believe it or not, we are still recovering from the most recent ice age, which peaked about 21,000 years ago and ended roughly 11,500 years ago. And yet, in the past 200 years, the Earth's average global temperature has risen by 0.8 ºC, at a rate more than ten times faster than the average ice-age recovery rate.1 This increase in global temperature, which shows no signs of slowing down, will have tremendous consequences for our planet's biodiversity and overall ecology.
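To see where a "ten times faster" figure can come from, consider a back-of-the-envelope comparison. The recovery numbers below are assumed round figures chosen for illustration, not values taken from the cited report:

# Back-of-the-envelope rate comparison; the recovery figures are assumed
# round numbers for illustration, not data from the cited sources.
modern_warming_c = 0.8          # degrees C over the past 200 years
modern_years = 200
recovery_warming_c = 4.0        # assumed total ice-age recovery warming
recovery_years = 10_000         # assumed duration of that recovery

modern_rate = modern_warming_c / modern_years          # 0.004 C per year
recovery_rate = recovery_warming_c / recovery_years    # 0.0004 C per year
print(f"Modern warming is ~{modern_rate / recovery_rate:.0f}x the recovery rate")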

Climate change is driven by three main factors: changes in the position of the Earth's continents, variations in the Earth's orbit, and increases in the atmospheric concentration of "greenhouse gases," such as carbon dioxide.2 In the past 200 years, the Earth's continents have barely moved and its orbit around the sun has not changed.2 Therefore, the only reasonable explanation for the 0.8 ºC increase in global average temperature is a change in the concentration of greenhouse gases.

Decades of research by the Intergovernmental Panel on Climate Change (IPCC) support this conclusion. The IPCC Fourth Assessment Report concluded that the increase in global average temperature is very likely due to the observed increase in anthropogenic greenhouse gas concentrations. The report also predicts that global temperatures will increase by between 1.1 ºC and 6.4 ºC by the end of the 21st century.2

Though we know what is causing the warming, we are unsure of its effects. The geologists and geophysicists at the US Geological Survey (USGS) are attempting to address this uncertainty through the Pliocene Research, Interpretation, and Synoptic Mapping (PRISM) program.3

The middle of the Pliocene Epoch occurred roughly 3 million years ago, a relatively short span on the geological time scale. Between the Pliocene and our current Holocene Epoch, the continents have barely drifted, the planet has maintained a nearly identical orbit around the sun, and the types of organisms living on Earth have remained relatively constant.2 These three commonalities support three conclusions. Because the continents have barely drifted, global heat distribution through oceanic circulation is essentially the same. Because the planet's orbit is essentially unchanged, glacial-interglacial cycles have not been altered. And because the types of organisms have remained relatively constant, the biodiversity of the Pliocene is comparable to our own.

While the epochs share many similarities, the main difference between them is that the Pliocene was about 4 ºC warmer at the equator and 10 ºC warmer at the poles.4 Because the Pliocene had conditions similar to today's but was warmer, our planet's ecology may well begin to resemble that of the Pliocene by the end of the century. This idea is supported by research from the USGS's PRISM program.3

It is a unique and exciting opportunity to study a geological epoch so similar to our own and apply its lessons to our current environment. PRISM is using multiple techniques to extract as much data about the Pliocene as possible. The concentration of magnesium ions, the number of carbon-carbon double bonds in organic molecules called alkenones, and the concentration and distribution of fossilized pollen all provide a wealth of information about past climate. However, the single most useful source of such information is planktic foraminifera, or forams.5

Forams, abundant during the Pliocene, are unicellular, ocean-dwelling organisms with calcium carbonate shells. Fossilized forams are recovered through deep-sea core drilling. The types and concentrations of the extracted forams reveal vital information about the temperature, salinity, and productivity of the oceans during the forams' lifetimes.5 By performing factor analysis and other statistical analyses on this information, PRISM has created a model of the Pliocene that covers both oceanic and terrestrial areas, providing a broad view of our planet as it existed 3 million years ago. Using this model, scientists can determine where temperatures will increase the most and what impact such an increase will have on the life that can exist in those areas.
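The assemblage-to-temperature step can be pictured with a minimal sketch. The code below is purely illustrative and is not PRISM's actual method or data: it assumes a synthetic matrix of foram abundances, reduces it with factor analysis, and calibrates a simple regression against modern sea surface temperatures.

# Hypothetical sketch of a transfer-function workflow: factor analysis on
# foram assemblage data, then a regression from factors to sea surface
# temperature (SST). All data here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Rows = core-top samples, columns = relative abundances of foram taxa.
assemblages = rng.dirichlet(np.ones(12), size=200)   # 200 samples, 12 taxa
modern_sst = rng.uniform(0, 30, size=200)            # known modern SSTs (degrees C)

# Reduce the taxa abundances to a few underlying environmental factors.
fa = FactorAnalysis(n_components=4, random_state=0)
factors = fa.fit_transform(assemblages)

# Calibrate a transfer function: factors -> temperature.
transfer = LinearRegression().fit(factors, modern_sst)

# Apply the calibration to fossil Pliocene assemblages (also synthetic here).
pliocene_assemblages = rng.dirichlet(np.ones(12), size=50)
pliocene_sst = transfer.predict(fa.transform(pliocene_assemblages))
print(f"Estimated Pliocene SST range: {pliocene_sst.min():.1f} to {pliocene_sst.max():.1f} C")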

Since its inception in 1989, PRISM has predicted, with proven accuracy, two main trends. The first is that average temperatures will increase the most at the poles, with areas nearest the equator experiencing the smallest increase.5 The second is that tropical plants will expand outward from the equator, taking root in the middle and higher latitudes.5

There are some uncertainties associated with the research behind PRISM. Several assumptions were made, such as uniformitarianism, the principle that the same natural laws and physical processes that operate now also operated in the past. The researchers also assumed that the ecological tolerances of certain key species, such as forams, have not changed significantly in the last 3 million years. Even with these normalizing assumptions, an important discrepancy exists between the Pliocene and our Holocene: the Pliocene warmed gradually and remained relatively stable throughout the epoch, while our temperatures are increasing at a much more rapid rate.

The film industry has sensationalized climate change, predicting giant hurricanes and an instant ice age, as seen in the films 2012 and The Day After Tomorrow. Fortunately, nothing so cataclysmic is likely to occur. However, a rise in global average temperature and a change in our ecosystems are nothing to be ignored or dismissed as normal. It is only through research like that done by the USGS via PRISM that our species can be prepared for the coming decades of change.

References

  1. Earth Observatory. http://earthobservatory.nasa.gov/Features/GlobalWarming/page3.php (accessed Oct. 1, 2016).
  2. Pachauri, R. K., et al. IPCC Fourth Assessment Report 2007, 104.
  3. PRISM4D Collaborating Institutions. Pliocene Research, Interpretation, and Synoptic Mapping. http://geology.er.usgs.gov/egpsc/prism/ (accessed Oct. 3, 2016).
  4. Monroe, R. What Does 400 PPM Look Like? https://scripps.ucsd.edu/programs/keelingcurve/2013/12/03/what-does-400-ppm-look-like/ (accessed Oct. 19, 2016).
  5. Robinson, M. M. Am. Sci. 2011, 99, 228.

Machine Minds: An Exploration of Artificial Neural Networks

Abstract:

An artificial neural network is a computational method that mirrors the way a biological nervous system processes information. Artificial neural networks are used in many different fields to process large sets of data, often providing analyses that allow for prediction and identification of new data. However, neural networks struggle to provide clear explanations of why certain outcomes occur. Despite these difficulties, neural networks are valuable data analysis tools applicable to a variety of fields. This paper explores the general architecture, advantages, disadvantages, and applications of neural networks.

Introduction:

Artificial neural networks attempt to mimic the functions of the human brain. Biological nervous systems are composed of building blocks called neurons, which communicate through axons and dendrites. When a biological neuron receives a signal that exceeds its threshold value, it fires an electrical signal down its axon; at the synapse, this electrical signal is converted to a chemical signal that is passed to nearby neurons.2 Similarly, while artificial neural networks are dictated by formulas and data structures, they can be conceptualized as being composed of artificial neurons with functions similar to their biological counterparts. When an artificial neuron receives data, it creates an output signal that propagates to other connected neurons only if the change in its activation level exceeds a defined threshold value.2 The human brain learns from past experiences and applies that information in new settings. Similarly, artificial neural networks can adapt their behavior until their responses are both accurate and consistent in new situations.1

While artificial neural networks are structurally similar to their biological counterparts, artificial neural networks are distinct in several ways. For example, certain artificial neural networks send signals only at fixed time intervals, unlike biological neural networks, in which neuronal activity is variable.3 Another major difference between biological neural networks and artificial neural networks is the time of response. For biological neural networks, there is often a latent period before a response, whereas in artificial neural networks, responses are immediate.3

Neural networks are useful in a wide range of fields that involve large datasets, from biological systems to economic analysis. These networks are practical for problems involving pattern recognition, such as predicting data trends.3 Neural networks are also effective when data are error-prone, as in cognitive software like speech and image recognition.3

Neural Network Architecture:

One popular neural network design is the multilayer perceptron (MLP). In the MLP design, each artificial neuron outputs a weighted sum of its inputs based on the strength of its synaptic connections.1 Synaptic strength is expressed as a weight: stronger, more valuable connections have larger weights and are therefore more influential in the weighted sum. The output of the neuron is based on whether the weighted sum exceeds the neuron's threshold value.1 The MLP design was originally composed of perceptrons, artificial neurons that produce a binary output of zero or one. Perceptrons have limited use in a neural network model because small changes in the input can drastically alter the output of the system. Most current MLP systems therefore use sigmoid neurons instead. Sigmoid neurons produce outputs anywhere between zero and one, allowing for more variation in the inputs because small changes do not radically alter the outcome of the model.4
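The contrast between the two neuron types can be sketched in a few lines of Python. This is an illustrative sketch, not code from any of the cited sources; the weights, inputs, and threshold are arbitrary values:

# Minimal sketch: a perceptron's hard threshold versus a sigmoid neuron's
# smooth output, for the same weighted sum of inputs.
import numpy as np

def perceptron(x, w, threshold):
    """Binary output: 1 if the weighted sum exceeds the threshold, else 0."""
    return 1 if np.dot(w, x) > threshold else 0

def sigmoid_neuron(x, w, bias):
    """Smooth output between 0 and 1; small input changes give small output changes."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + bias)))

x = np.array([0.5, 0.9])        # example inputs
w = np.array([1.2, -0.4])       # synaptic weights: larger weight = more influence

print(perceptron(x, w, threshold=0.2))    # 1 or 0, flips abruptly at the threshold
print(sigmoid_neuron(x, w, bias=-0.2))    # ~0.51, varies smoothly with the inputs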

Architecturally, the MLP is a feedforward neural network.1 In a feedforward design, the units are arranged so that signals travel exclusively from input to output. These networks are composed of a layer of input neurons, a layer of output neurons, and a series of hidden layers in between. The hidden layers are composed of internal neurons that further process the data within the system. The complexity of the model varies with the number of hidden layers and the number of units in each layer.1
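A feedforward pass can be sketched directly from this description. The layer sizes and random weights below are arbitrary placeholders, not a trained network:

# Illustrative sketch of a feedforward pass: signals travel strictly from the
# input layer, through hidden layers, to the output layer.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(x, weights, biases):
    """Propagate an input vector through each layer in order."""
    activation = x
    for W, b in zip(weights, biases):
        activation = sigmoid(W @ activation + b)
    return activation

rng = np.random.default_rng(1)
layer_sizes = [3, 5, 4, 2]   # 3 inputs, two hidden layers, 2 outputs
weights = [rng.standard_normal((m, n)) for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [rng.standard_normal(m) for m in layer_sizes[1:]]

print(feedforward(np.array([0.1, 0.7, 0.3]), weights, biases))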

In an MLP design, once the number of layers and the number of units in each layer are determined, the threshold values and synaptic weights must be set using training algorithms so that the errors of the system are minimized.4 Training algorithms use a known dataset (the training data) to adjust the system until the differences between the expected and actual output values are minimized.4 Training therefore yields a network with near-optimal weights, which lets it make accurate predictions when presented with new data. One such training algorithm is backpropagation, in which the algorithm computes the gradient of the error surface and steps downhill until a minimum is found.1 The difficult part of backpropagation is choosing the step size. Larger steps can result in faster runtimes but can overstep the solution; smaller steps lead to slower runtimes but are more likely to settle on a correct solution.1
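A compact sketch of the idea follows: a single sigmoid neuron trained by gradient descent on a toy dataset. The learning rate, data, and epoch count are arbitrary choices for illustration, not values from the cited sources:

# Hedged sketch of gradient-descent training on one sigmoid neuron: the step
# size (learning rate) trades speed against overshooting the solution.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny training set: inputs and target outputs (the logical OR function).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 1.0])

w = np.zeros(2)
b = 0.0
step_size = 0.5                          # too large can overshoot; too small is slow

for epoch in range(2000):
    out = sigmoid(X @ w + b)             # forward pass
    error = out - y                      # difference from the expected output
    grad = error * out * (1 - out)       # backpropagate through the sigmoid
    w -= step_size * (X.T @ grad)        # step down the error surface
    b -= step_size * grad.sum()

print(np.round(sigmoid(X @ w + b), 2))   # approaches [0, 1, 1, 1]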

While feedforward designs like the MLP are common, many other neural network designs exist. These include recurrent neural networks, which allow connections between neurons in the same layer, and self-organizing maps, in which neurons attain weights that retain characteristics of the input. Each network type also has variations within its specific framework.5 The Hopfield network and Boltzmann machine architectures, for example, build on the recurrent design.5 While feedforward neural networks are the most common, each design is uniquely suited to solving specific problems.

Disadvantages:

One of the main problems with neural networks is that, for the most part, they have limited ability to identify causal relationships explicitly. Developers feed these networks large swaths of data and let the networks determine independently which input variables are most important.10 However, it is difficult for a network to indicate to its developers which variables mattered most in calculating the outputs. While some techniques exist for analyzing the relative importance of each neuron, they still do not present as clear a picture of causal relationships between variables as comparable data analysis methods such as logistic regression.10
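One generic example of such an importance technique is permutation importance. The sketch below is a minimal, model-agnostic illustration, not a method from the cited papers; the model and data names in the usage comment are hypothetical placeholders:

# Hedged sketch of permutation importance: shuffling one input column and
# measuring how much a trained model's error grows gives a rough ranking of
# variables, though not a true causal explanation.
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    base_error = np.mean((predict(X) - y) ** 2)
    scores = []
    for col in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            rng.shuffle(X_shuffled[:, col])          # break this column's link to y
            errors.append(np.mean((predict(X_shuffled) - y) ** 2))
        scores.append(np.mean(errors) - base_error)  # bigger rise = more important
    return scores

# Usage with any trained model exposing a predict(X) function, e.g.:
# print(permutation_importance(model.predict, X_test, y_test))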

Another problem with neural networks is the tendency to overfit. Overfitting of data occurs when a data analysis model such as a neural network generates good predictions for the training data but worse ones for testing data.10 Overfitting happens because the model accounts for irregularities and outliers in the training data that may not be present across actual data sets. Developers can mitigate overfitting in neural networks by penalizing large weights and limiting the number of neurons in hidden layers.10 Reducing the number of neurons in hidden layers reduces overfitting but also limits the ability of the neural network to model more complex, nonlinear relationships.10
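Penalizing large weights is commonly done with L2 regularization, sometimes called weight decay. The fragment below is a generic, self-contained sketch of the idea, not drawn from the cited papers; the weights, gradient, and constants are arbitrary stand-ins:

# Generic sketch of L2 regularization ("weight decay"): the loss gains a term
# that grows with the squared weights, so the update shrinks them.
import numpy as np

lam = 0.01                                   # regularization strength (assumed value)
w = np.array([3.0, -2.0])                    # example weights
data_gradient = np.array([0.05, -0.02])      # stand-in gradient from the data
step_size = 0.1

# Without the penalty, only the data gradient moves the weights.
w_plain = w - step_size * data_gradient

# With the penalty, an extra 2 * lam * w term pulls every weight toward zero
# unless the data justify keeping it large.
w_decayed = w - step_size * (data_gradient + 2 * lam * w)

print(w_plain, w_decayed)                    # the decayed weights are slightly smaller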

Applications:

Artificial neural networks allow for the processing of large amounts of data, making them useful tools in many fields of research. For example, the field of bioinformatics relies heavily on neural network pattern recognition to predict proteins' secondary structures. One popular algorithm used for this purpose is Position-Specific Iterated Basic Local Alignment Search Tool (PSI-BLAST) Secondary Structure Prediction (PSIPRED).6 This algorithm uses a two-stage structure consisting of two three-layered feedforward neural networks. In the first stage, a scoring matrix generated by running the PSI-BLAST algorithm on a peptide sequence is fed into the network. PSIPRED takes a window of 15 positions from the scoring matrix and outputs three values representing the probabilities of forming the three protein secondary structures: helix, coil, and strand.6 These probabilities are then input into the second-stage neural network along with the 15 positions from the scoring matrix, and its output is three values representing more accurate probabilities of forming helix, coil, and strand structures.6
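The shape of that two-stage pipeline can be sketched loosely in code. This is a hypothetical illustration of the windowed, two-stage idea only: the networks are untrained one-layer stand-ins, and the scoring matrix is random, so none of this reproduces the real PSIPRED models:

# Loose, hypothetical sketch of PSIPRED's two-stage idea: a 15-position window
# of a PSI-BLAST scoring matrix feeds a first network that outputs helix/coil/
# strand probabilities; a second network refines them using the same window.
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def tiny_network(in_dim, out_dim):
    """Return an untrained one-layer stand-in for a real trained network."""
    W = rng.standard_normal((out_dim, in_dim)) * 0.1
    return lambda x: softmax(W @ x)

seq_len, n_aa, window = 40, 20, 15
scoring_matrix = rng.standard_normal((seq_len, n_aa))    # stand-in PSSM

stage1 = tiny_network(window * n_aa, 3)                  # -> P(helix, coil, strand)
stage2 = tiny_network(window * (3 + n_aa), 3)            # refines stage-1 outputs

half = window // 2
padded = np.pad(scoring_matrix, ((half, half), (0, 0)))
probs1 = np.array([stage1(padded[i:i + window].ravel()) for i in range(seq_len)])

# Stage 2 sees the stage-1 probabilities alongside the scoring-matrix window.
combined = np.concatenate([probs1, scoring_matrix], axis=1)
padded2 = np.pad(combined, ((half, half), (0, 0)))
probs2 = np.array([stage2(padded2[i:i + window].ravel()) for i in range(seq_len)])
print(probs2[0])   # three probabilities for the first residue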

Neural networks are used not only to predict protein structures, but also to analyze genes associated with the development and progression of cancer. More specifically, researchers and doctors use artificial neural networks to identify the type of cancer associated with certain tumors. Such identification is useful for the correct diagnosis and treatment of each specific cancer.7 These networks enable researchers to match genomic characteristics from large datasets to specific types of cancer and to predict those types.7

In bioinformatics scenarios such as these two examples, trained artificial neural networks quickly provide high-quality predictions.6 These characteristics are important because bioinformatics generally involves large quantities of data that must be interpreted both effectively and efficiently.6

Artificial neural networks are also viable in fields outside the natural sciences, such as finance. These networks can be used to predict subtle trends, such as variations in the stock market, or to anticipate when organizations will face bankruptcy.8,9 Neural networks can provide more accurate predictions more efficiently than other prediction models.9

Conclusion:

Over the past decade, artificial neural networks have become more refined and are being used in a wide variety of fields. Artificial neural networks allow researchers to find patterns in the largest of datasets and utilize the patterns to predict potential outcomes. These artificial neural networks provide a new computational way to learn and understand diverse assortments of data and allow for a more accurate and effective grasp of the world.

References

  1. Ayodele, T. O. Types of Machine Learning Algorithms. In New Advances in Machine Learning; Zhang, Y., Ed.; InTech, 2010. DOI: 10.5772/9385. Available from: http://www.intechopen.com/books/new-advances-in-machine-learning/types-of-machine-learning-algorithms
  2. Muller, B.; Reinhardt, J. Neural Networks: An Introduction.
  3. Urbas, J. V. Article.
  4. Nielsen, M. A. Neural Networks and Deep Learning; Determination Press, 2015.
  5. Mehrotra, K.; Mohan, C. K. Elements of Artificial Neural Networks.
  6. Chen, K.; Kurgan, L. A. Neural Networks in Bioinformatics.
  7. Oustimov, A.; Vu, V. Artificial Neural Networks in the Cancer Genomics Frontier.
  8. Ma, J. An Enhanced Artificial Neural Network for Stock Price Predictions.
  9. Mansouri, A. A Comparison of Artificial Neural Network Model and Logistic Regression in Prediction of Companies' Bankruptcy.
  10. Tu, J. V. Advantages and Disadvantages of Using Artificial Neural Networks versus Logistic Regression for Predicting Medical Outcomes.

Molecular Mechanisms Behind Alzheimer’s Disease and Epilepsy

Abstract

Seizures are characterized by periods of high neuronal activity and are caused by alterations in synaptic function that disrupt the equilibrium between excitation and inhibition in neurons. While often associated with epilepsy, seizures can also occur after brain injuries and, interestingly, are common in Alzheimer's patients. While Alzheimer's patients rarely show the typical physical signs of seizures, recent research has shown that electroencephalogram (EEG) technology can detect nonconvulsive seizures in these patients. Furthermore, patients with Alzheimer's have a 6- to 10-fold higher probability of developing seizures during the course of their disease compared with healthy controls.2 While previous research has focused on the underlying molecular mechanisms of Aβ plaques in the brain, the research presented here relates seizures to the cognitive decline of Alzheimer's patients in an attempt to find therapeutic approaches that tackle both epilepsy and Alzheimer's.

Introduction

The hippocampus is found in the temporal lobe and is involved in the creation and consolidation of new memories. It is the first part of the brain to undergo neurodegeneration in Alzheimer's disease, and as such, the disease is characterized by memory loss. Alzheimer's differs from other types of dementia in that patients' episodic memories are affected strongly and quickly. Likewise, patients who suffer from epilepsy also exhibit neurodegeneration in their hippocampi and have impaired episodic memories. Such similarities led researchers to hypothesize that the two diseases share pathophysiological mechanisms. In one study, four epileptic patients exhibited progressive memory loss that clinically resembled Alzheimer's disease.6 In another, researchers found that seizures can precede cognitive symptoms in late-onset Alzheimer's disease.7 These findings led researchers to hypothesize that a high incidence of seizures increases the rate of cognitive decline in Alzheimer's patients. However, much remains to be discovered about the molecular mechanisms underlying seizure activity and cognitive impairment.

Amyloid precursor protein (APP) is the precursor molecule to Aβ, the polypeptide that makes up the Aβ plaques found in the brains of Alzheimer’s patients. In many Alzheimer’s labs, the J20 APP mouse model of disease is used to simulate human Alzheimer’s. These mice overexpress the human form of APP, develop amyloid plaques, and have severe deficits in learning and memory. The mice also have high levels of epileptiform activity and exhibit spontaneous seizures that are characteristic of epilepsy.11 Understanding the long-lasting effects of these seizures is important in designing therapies for a disease that is affected by recurrent seizures. Thus, comparing the APP mouse model of disease with the temporal lobe epilepsy (TLE) mouse model is essential in unraveling the mysteries of seizures and cognitive decline.

Shared Pathology of the Two Diseases

The molecular mechanisms behind the two diseases are still unknown and under active research. An early observation in both TLE and Alzheimer's was a decrease in calbindin-D28K, a calcium-buffering protein, in the hippocampus.10 Neuronal calcium buffering and calcium homeostasis are known to be involved in learning and memory. Calcium channels are involved in synaptic transmission, and a high calcium ion influx often results in altered neuronal excitability and calcium signaling. Calbindin acts as a buffer that binds free Ca2+ and is thus critical to calcium homeostasis.

Some APP mice have severe seizures and an extremely high loss of calbindin, while other APP mice exhibit no calbindin loss. The reasons behind this are unclear, but, like human patients, individual mice are highly variable.

The loss of calbindin in both Alzheimer’s and TLE is highly correlated with cognitive deficits. However, the molecular mechanism behind the calbindin loss is unclear. Many researchers are now working to uncover this mechanism in the hopes of preventing the calbindin loss, thereby improving therapeutic avenues for Alzheimer’s and epilepsy patients.

Seizures and Neurogenesis

The dentate gyrus is one of the two areas of the adult brain that exhibit neurogenesis.13 Understanding neurogenesis in the hippocampus can lead to promising therapeutic targets in the form of neuronal replacement therapy. Preliminary research in Alzheimer’s and TLE has shown changes in neurogenesis over the course of the disease.14 However, whether neurogenesis is increased or decreased remains a controversial topic, as studies frequently contradict each other.

Many researchers study neurogenesis in the context of different diseases. In memory research, neurogenesis is thought to be involved in both memory formation and memory consolidation.12 Alzheimer's leads to a gradual decrease in the generation of neural progenitors, the stem cells that can differentiate into a variety of neuronal and glial cell types.8 Further studies have shown that the neural stem cell pool undergoes accelerated depletion due to seizure activity.15 Initially, heightened neuronal activity stimulates neural progenitors to divide much faster than in controls. This rapid division prematurely depletes the limited stem cell pool. Interestingly, this enhanced neurogenesis is detectable long before other AD-linked pathologies. When the APP mice grow older, the stem cell pool is depleted to the point where neurogenesis occurs much more slowly than in controls.9 This is thought to underlie memory deficits, in that the APP mice can no longer consolidate new memories as effectively. The same phenomenon occurs in mice with TLE.

The discovery of this premature neurogenesis in Alzheimer's disease has several therapeutic implications. For one, enhanced neurogenesis could serve as a marker for Alzheimer's long before any symptoms are present. Furthermore, targeting increased neurogenesis holds potential as a therapeutic avenue, leading to better remedies for preventing the pathological effects of recurrent seizures in Alzheimer's disease.

Conclusion

Research linking epilepsy with other neurodegenerative disorders is still in its infancy, and many researchers remain skeptical about the potential to create a single therapy for multiple conditions. Previous EEG studies recorded Alzheimer's patients for a few hours at a time and found limited epileptiform activity; enhanced overnight technology has since shown that about half of Alzheimer's patients have epileptiform activity within a 24-hour period, with most activity occurring during sleep.1 Recording patients for even longer periods will likely raise this percentage. Further research is being conducted to show the importance of seizures in enhancing cognitive deficits and understanding Alzheimer's disease, and it could lead to significant therapeutic advances in the future.

References

  1. Vossel, K. A. et al. Incidence and Impact of Subclinical Epileptiform Activity. Ann. Neurol. 2016.
  2. Pandis, D.; Scarmeas, N. Seizures in Alzheimer Disease: Clinical and Epidemiological Data. Epilepsy Curr. 2012, 12(5), 184-187.
  3. Chin, J.; Scharfman, H. Shared cognitive and behavioral impairments in epilepsy and Alzheimer's disease and potential underlying mechanisms. Epilepsy & Behavior 2013, 26, 343-351.
  4. Carter, D. S. et al. Long-term decrease in calbindin-D28K expression in the hippocampus of epileptic rats following pilocarpine-induced status epilepticus. Epilepsy Res. 2008, 79(2-3), 213-223.
  5. Jin, K. et al. Increased hippocampal neurogenesis in Alzheimer's disease. Proc. Natl. Acad. Sci. 2004, 101(1), 343-347.
  6. Ito, M.; Echizenya, N.; Nemoto, D.; Kase, M. A case series of epilepsy-derived memory impairment resembling Alzheimer disease. Alzheimer Dis. Assoc. Disord. 2009, 23(4), 406-409.
  7. Picco, A.; Archetti, S.; Ferrara, M.; Arnaldi, D.; Piccini, A.; Serrati, C.; ... Nobili, F. Seizures can precede cognitive symptoms in late-onset Alzheimer's disease. J. Alzheimers Dis. 2011, 27(4), 737-742.
  8. Zeng, Q.; Zheng, M.; Zhang, T.; He, G. Hippocampal neurogenesis in the APP/PS1/nestin-GFP triple transgenic mouse model of Alzheimer's disease. Neuroscience 2016, 314, 64-74. https://doi.org/10.1016/j.neuroscience.2015.11.05
  9. Lopez-Toledano, M. A.; Ali Faghihi, M.; Patel, N. S.; Wahlestedt, C. Adult neurogenesis: a potential tool for early diagnosis in Alzheimer's disease? J. Alzheimers Dis. 2010, 20(2), 395-408. https://doi.org/10.3233/JAD-2010-1388
  10. Palop, J. J.; Jones, B.; Kekonius, L.; Chin, J.; Yu, G.-Q.; Raber, J.; ... Mucke, L. Neuronal depletion of calcium-dependent proteins in the dentate gyrus is tightly linked to Alzheimer's disease-related cognitive deficits. Proc. Natl. Acad. Sci. U.S.A. 2003, 100(16), 9572-9577. https://doi.org/10.1073/pnas.1133381100
  11. Research Models: J20. AlzForum: Networking for a Cure.
  12. Kitamura, T.; Inokuchi, K. Role of adult neurogenesis in hippocampal-cortical memory consolidation. Mol. Brain 2014, 7:13. DOI: 10.1186/1756-6606-7-13.
  13. Piatti, V.; Ewell, L.; Leutgeb, J. Neurogenesis in the dentate gyrus: carrying the message or dictating the tone. Front. Neurosci. 2013, 7:50. DOI: 10.3389/fnins.2013.00050.
  14. Noebels, J. A Perfect Storm: Converging Paths of Epilepsy and Alzheimer's Dementia Intersect in the Hippocampal Formation. Epilepsia 2011, 52, 39-46. DOI: 10.1111/j.1528-1167.2010.02909.x.
  15. Jasper, H. et al. In Jasper's Basic Mechanisms of the Epilepsies, 4th ed.; Rogawski, M., et al., Eds.; Oxford University Press: USA, 2012.

Microbes: Partners in Cancer Research

To millions around the world, the word 'cancer' evokes sorrow and fear. For decades, scientists have worked to combat this disease, yet despite the best efforts of modern medicine, about 46% of patients diagnosed with cancer still die as a direct result of the disease.1 However, the research performed by Dr. Michael Gustin at Rice University may change the field of oncology forever.

Cancer is a complex and multifaceted disease that is currently not fully understood by medical doctors and scientists. Tumors vary considerably between different types of cancers and from patient to patient, further complicating the problem. Understanding how cancer develops and responds to stimuli is essential to producing a viable cure, or even an individualized treatment.

Dr. Gustin's research delves into the heart of this problem. The complexity of the human body and its component cells is currently beyond the scope of any one unifying model. For this reason, starting basic research with human subjects would be impractical. Researchers turn instead to simpler eukaryotes to understand the signaling pathways involved in the cell cycle and how they respond to stress.2 Through years of hard work, Dr. Gustin's studies have made major contributions to the field of oncology.

Dr. Gustin studied a species of yeast, Saccharomyces cerevisiae, and its response to osmolarity. His research uncovered the high osmolarity glycerol (HOG) pathway and the mitogen-activated protein kinase (MAPK) cascade, which work together to maintain cellular homeostasis. The HOG pathway acts much like a "switchboard [that] control[s] cellular behavior and survival" within a cell, and it is regulated by the MAPK cascade through the sequential phosphorylation of a series of protein kinases that mediate the stress response.3 Together, these processes allow the cell to respond to extracellular stress by regulating gene expression, cell proliferation, and cell survival and apoptosis. To activate the transduction pathway, the sensor protein Sln1 recognizes a stressor and subsequently phosphorylates, or activates, a receiver protein that mediates the cellular response. This signal transduction pathway leads to the many responses that protect a cell against external stressors. These same protective processes, however, allow cancer cells to shield themselves from the body's immune system, making them much more difficult to attack.

Dr. Gustin has used this new understanding of the HOG pathway to expand his research into similar pathways in other organisms. Fascinatingly, expressing human orthologs of HOG1 proteins in yeast cells stimulated the pathway in the same way, despite the vast evolutionary distance between yeast and mammals. Beyond its evolutionary implications, this result illustrates that the "[HOG] pathway defines a central stress response signaling network for all eukaryotic organisms."3 Much has already been learned from Saccharomyces cerevisiae, and researchers have recently identified an even more representative organism. This fungus, Candida albicans, is the new model under study by Dr. Gustin and serves as the next step toward a working model of cancer and its responses to stressors. Its more complex responses to signaling make it a better working model than Saccharomyces cerevisiae.4 Research on Candida albicans has already added to the research community's wealth of information, taking great strides toward eventual human applications in medicine. For example, biological therapeutics designed to combat breast cancer cells have already been tested on both Candida albicans biofilms and breast cancer cells with great success.5

This research could eventually be applied to improving current chemotherapy techniques. Current chemotherapy uses cytotoxic chemicals that damage and kill cancerous cells, thereby controlling the size and spread of tumors. Many of these drugs disrupt the cell cycle, preventing the cancerous cell from proliferating efficiently. Alternatively, a more aggressive treatment can induce apoptosis, or programmed cell death, within the cancerous cell.6 In both methods, the chemotherapy targets the signaling pathways that control the vital processes of the cancer cell. Dr. Gustin's research thus plays a vital role in future chemotherapy technologies and the struggle against mutant cancer cells.

According to Dr. Gustin, current chemotherapy is most effective near the site of drug administration, where drug toxicity is highest, and often fails to completely incapacitate cancer cells farther away. As a result, distant cancer cells have the opportunity to develop cytoprotective mechanisms that increase their resistance to the drug.7 A major goal of Dr. Gustin's research is to discover how and why certain cancer cells are more resistant to chemotherapy. The long-term goal is to understand the major pathways involved in cancer cells' resistance to apoptosis and eventually to produce a therapeutic product that targets the crucial pathways and inhibitors. With its specificity, such a drug would vastly increase treatment efficacy and give humanity a vital tool with which to combat cancer, saving countless lives in the future.

References

  1. American Cancer Society. https://www.cancer.org/latest-new/cancer-facts-and-figures-death-rate-down-25-since-1991.html (accessed Feb. 3, 2017).
  2. Radmaneshfar, E.; Kaloriti, D.; Gustin, M.; Gow, N.; Brown, A.; Grebogi, C.; Thiel, M. PLoS ONE 2013, 8, e86067.
  3. Brewster, J.; Gustin, M. Sci. Signal. 2014, 7, re7.
  4. Rocha, C. R.; Schröppel, K.; Harcus, D.; Marcil, A.; Dignard, D.; Taylor, B. N.; Thomas, D. Y.; Whiteway, M.; Leberer, E. Mol. Biol. Cell 2001, 12, 3631-3643.
  5. Malaikozhundan, B.; Vaseeharan, B.; Vijayakumar, S.; Pandiselvi, K.; Kalanjiam, R.; Murugan, K.; Benelli, G. Microbial Pathogenesis 2017, 102, n.p. Manuscript in progress.
  6. Shapiro, G.; Harper, J. J. Clin. Invest. 1999, 104, 1645-1653.
  7. Das, B.; Yeger, H.; Baruchel, H.; Freedman, M. H.; Koren, G.; Baruchel, S. Eur. J. Cancer 2003, 39, 2556-2565.
