Comment

Epigenetic Processes in Cancer Research

Abstract

Epigenetics is the study of phenotypic variation caused by external factors (e.g., diet, nicotine use, and carcinogenic chemical exposure) that influence the mechanisms through which cells read and interpret genes. Epigenetic modifications are independent of genetic mutations. Epigenetics currently offers significant promise for novel, noninvasive cancer therapies and early diagnostic tools. Key epigenetic processes, including DNA methylation and histone modification, are reversible, unlike most genetic mutations. Reversing these processes at tumor-suppressor genes can restore normal behavior in tumor cells. This review outlines the biological basis of these processes and briefly analyzes their potential application in cancer treatment.

Introduction

DNA is a double-helical structure packed into the nuclei of eukaryotic cells in the form of chromatin. The basic unit of DNA packing in chromatin is the nucleosome, a complex of eight histone proteins and approximately 140 DNA base pairs. Further coiling of repeating nucleosome fibers eventually yields a chromatid, a key component of a chromosome.

Gene expression is epigenetically regulated through the addition of functional groups that alter chromatin structure and the DNA double helix. DNA methylation and histone modification are the best-understood mechanisms of such epigenetic modulation. Errant modification of functional-group patterns on DNA or histone tails can result in unintended silencing of critical regulatory genes, a process that contributes to the development of several diseases, including cancer. Understanding these complex mechanisms is essential for the development of cancer therapies that reverse the inhibition of tumor-suppressor genes. This review seeks to outline the biochemical basis of the gene silencing effected by DNA methylation and histone modification, as well as current developments in reversing these mechanisms for the prevention and treatment of cancer.

Histone Modification & Its Implications for Cancer

Histone modifications are reversible modifications of the N-terminal tails of the histones in a nucleosome. These include, but are not limited to, acetylation, methylation, phosphorylation, and ubiquitination. Depending on the group added to or removed from a given histone, local gene transcription can be upregulated or downregulated. Histone acetylation, for instance, shifts chromatin into a less-condensed, more transcriptionally active conformation. Conversely, histone deacetylases (HDACs) remove acetyl groups, strengthening the ionic attraction between positively charged histones and negatively charged DNA and tightening chromatin structure. To condense chromatin further, HDACs can also recruit histone methyltransferases (HMTs) that methylate histone H3 at lysine 9 (H3K9), providing a binding site for the condensing agent heterochromatin protein 1 (HP1). In the resulting condensed heterochromatin conformation, the cell has limited transcriptional access to the genome.

Recent research has established histone modifications as useful biomarkers in cancer diagnosis. HDACs, particularly HDAC1, are often present at elevated levels in prostate and gastric cancer patients. Prostate tumors are characterized by a general loss of the H4K20me1 and H4K20me2 methylation marks, as well as hypermethylation of residue H3K27 and increased activity of its specific methyltransferase; these changes are now associated with prostate cancer progression and metastasis. Histone modifications play a multifaceted role in cancer diagnosis and treatment: in addition to serving as diagnostic biomarkers, they are also potential therapeutic targets. Drugs that specifically inhibit HDACs have recently received FDA approval and are being manufactured and assessed for clinical application.5

DNA Methylation & Its Implications for Cancer

In DNA methylation, a methyl group is covalently added to a cytosine ring in the DNA sequence, forming a 5-methylcytosine. Molecules of 5-methylcytosine are found primarily at cytosine-guanine dinucleotides (CpGs). Some areas of high CpG density, such as CpG islands, are characteristically unmethylated and are located in or near the promoter regions of genes, allowing them to play a role in gene expression. Enzymes termed DNA methyltransferases (DNMTs) facilitate the methylation of CpG residues. Three key members of this family include Dnmt1, Dnmt3a, and Dnmt3b. Dnmt1 is labeled as the “maintenance” DNMT due to its methylation of DNA near replication forks, which preserves the epigenetic inheritance of methylation patterns across cellular generations. The other two DNMTs serve to directly methylate other CpG residues. These enzymes can either be recruited by a transcription factor bound to the promoter region of a given gene to methylate a specific CpG island or simply methylate all CpG sites across a genome not protected by a transcription factor. Several families of proteins, such as the UHRF proteins, the zinc-finger proteins, and most notably, the MBD proteins, are responsible for the interpretation of these methylation patterns. For example, MBD proteins possess transcriptional repression domains that allow them to bind to methylated DNA and silence nearby genes.
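
The CpG density that distinguishes these islands can be quantified directly from a DNA sequence. The minimal Python sketch below is illustrative only; the 200 bp window, 50% GC content, and 0.6 observed-to-expected CpG ratio follow the commonly used Gardiner-Garden and Frommer criteria, which are an assumption here rather than something taken from this review.

```python
def cpg_island_windows(seq, window=200, gc_min=0.5, obs_exp_min=0.6):
    """Return start positions of windows that look CpG-island-like.

    Criteria (assumed, per Gardiner-Garden & Frommer): GC content >= 50% and
    an observed/expected CpG ratio >= 0.6 over a 200 bp window.
    """
    seq = seq.upper()
    hits = []
    for start in range(len(seq) - window + 1):
        w = seq[start:start + window]
        g, c = w.count("G"), w.count("C")
        cpg = w.count("CG")                     # observed CpG dinucleotides
        gc_content = (g + c) / window
        expected = (c * g) / window or 1e-9     # expected CpGs if C and G occurred independently
        if gc_content >= gc_min and cpg / expected >= obs_exp_min:
            hits.append(start)
    return hits

# Toy example: a GC-rich stretch flanked by AT-rich sequence
hits = cpg_island_windows("AT" * 120 + "CG" * 150 + "TA" * 120)
print(len(hits), hits[:3])
```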

For a malignant cancer to develop, critical tumor-suppressor genes such as p53 must be knocked out. This can be accomplished through hypermethylation of CpG islands in certain promoter regions and genes. For example, when the DNA repair gene BRCA1 becomes hypermethylated, it is rendered inactive, which can lead to the development of breast cancer. Because methylation is reversible, these tumor-suppressor genes could in principle be reactivated to treat tumors: pharmaceutical agents that enter the cell and block methylation-mediated suppression could restore the function of a silenced tumor-suppressor gene.6

One recent study demonstrated that azacitidine, an agent that can be incorporated into DNA, is capable of inducing hypomethylation. Chemical modification of the agent’s diphosphate form by ribonucleotide reductase, followed by phosphorylation, creates a triphosphate form that replaces cytosine bases in DNA. DNMTs that encounter the substituted strand become trapped on it, limiting their methylation activity.7 Such manipulation of naturally occurring DNA methylation processes in cancer cells appears to be a promising method of restoring the normal function of tumor-suppressor genes.

Co-Application of DNA Methylation & Histone Modification

The discovery of the protein MeCP2 has validated the hypothesis that DNA methylation regulates histone acetylation patterns. MeCP2 recruits HDACs to the promoter regions of methylated CpG islands. The resulting hypoacetylated histones yield a highly condensed heterochromatin structure that represses transcription of nearby genes. Clinical studies have therefore emphasized combining demethylating agents with HDAC inhibitors for transcriptional reactivation of tumor-suppressor genes.8

Conclusion

The field of epigenetics has opened the door to novel, promising cancer therapies unimaginable just a decade ago. Manipulating DNA methylation and histone modification can reverse tumor-suppressor gene silencing, and reactivating these genes may help halt, and potentially reverse, cancer proliferation.

References

  1. Kornberg, R.D. Science. 1974, 184, 868-871.
  2. Fahrner, J.A. et al. J Cancer Res. 2002, 62, 7213-7218.
  3. Alelú-Paz, R. et al. J Sign Transduc. 2012, 8 pg.
  4. Bártová, E. et al. J Histochem Cytochem. 2008, 56, 711-721.
  5. Chervona, Y. et al. Am J Cancer Res. 2012, 5, 589-597.
  6. Baylin, S.B. Nat Clin Pract Oncol. 2005, 2, S4-S11.
  7. Jütterman, R. et al. Proc Natl Acad Sci USA. 1994, 91, 11797-11801.
  8. Herranz, M. et al. In Target Discovery and Validation Reviews and Protocols, 1; Sioud, M., Eds.; Humana Press: New York, NY, 2007, 2, pp 25-62.

Comment

Developments in Bone Regenerative Medicine Using Stem Cell Treatment

Abstract

There is an acute need for alternatives to current bone regeneration techniques, which are limited by donor-site morbidity and high cost. Dental pulp stem cells (DPSCs) constitute an immunocompatible and easily accessible cell source that is capable of osteogenic differentiation. In this study, we engineered economical hard-soft intercalated substrates using various thicknesses of graphene/polybutadiene composites and polystyrene/polybutadiene blends. We investigated the ability of these scaffolds to increase proliferation and induce osteogenic differentiation in DPSCs without chemical inducers such as dexamethasone, which may accelerate cancer metastasis.

For each concentration, samples were prepared with dexamethasone as a positive control. Proliferation studies demonstrated the scaffolds’ effects on DPSC clonogenic potential: doubling times were significantly lower than those of controls for all substrates. Confocal microscopy and scanning electron microscopy/energy-dispersive X-ray spectroscopy indicated widespread osteogenic differentiation of DPSCs cultured on graphene/polybutadiene substrates without dexamethasone. Further investigation of the interaction between hard-soft intercalated substrates and cells could yield promising results for regenerative therapy.

Introduction

Current mainstream bone regeneration techniques, such as autologous bone grafts, have many limitations, including donor site morbidity, graft resorption, and high cost.1,2 An estimated 1.5 million individuals suffer from bone-disease related fractures each year, and about 54 million individuals in the United States have osteoporosis and low bone mass, placing them at increased risk for fracture.2,3,4,5 Bone tissue scaffold implants have been explored in the past decade as an alternative option for bone regeneration treatments. In order to successfully regenerate bone tissue, scaffolds typically require the use of biochemical growth factors that are associated with side effects, such as the acceleration of cancer metastasis.6,7 In addition, administering these factors in vivo is a challenge.6 The purpose of this project was to engineer and characterize a scaffold that would overcome these obstacles and induce osteogenesis by controlling the mechanical environment of the implanted cells.

First isolated in 2001 from the dental pulp chamber, dental pulp stem cells (DPSCs) are multipotent ecto-mesenchymal stem cells.8,9 Previous studies have shown that these cells are capable of osteogenic, odontogenic, chondrogenic, and adipogenic differentiation.10,11,12 Due to their highly proliferative nature and various osteogenic markers, DPSCs provide a promising source of stem cells for bone regeneration.11

An ideal scaffold should be able to assist cellular attachment, proliferation and differentiation.13 While several types of substrates suitable for these purposes have been identified, such as polydimethylsiloxane14 and polymethyl methacrylate15, almost all of them require multiple administrations of growth factors to promote osteogenic differentiation.6 In recent years, the mechanical cues of the extracellular matrix (ECM) have been shown to play a key role in cell differentiation, and are a promising alternative to chemical inducers.16,17

Recent studies demonstrate that hydrophobic materials show higher protein adsorption and cellular activity when compared to hydrophilic surfaces; therefore, we employed hydrophobic materials in our experimental scaffold.18,19,20,21 Polybutadiene (PB) is a hydrophobic, biocompatible elastomer with low rigidity. Altering the thickness of PB films can vary the mechanical cues to cells, inducing the desired differentiation. DPSCs placed onto spin-casted PB films of different thicknesses have been observed to biomineralize calcium phosphate, supporting the idea that mechanical stimuli can initiate differentiation.6,16,17 Atactic polystyrene (PS) is a rigid, inexpensive hydrophobic polymer.22 As PB is flexible and PS is hard, a polymer blend of PS-PB creates a rigid yet elastic surface that could mimic the mechanical properties of the ECM.

Recently, certain carbon compounds have been recognized as biomimetic.23 The remarkable rigidity and elasticity of graphene, a one-atom thick nanomaterial, make it a compelling biocompatible scaffold material candidate.24 Studies have also shown that using a thin sheet of graphene as a substrate enhances the growth and osteogenic differentiation of cells.23

We hypothesized that DPSCs plated on hard-soft intercalated substrates—specifically, graphene-polybutadiene (G-PB) substrates and polystyrene-polybutadiene (PS-PB) substrates of varying thicknesses—would mimic the elasticity and rigidity of the bone ECM and thus induce osteogenesis without the use of chemical inducers, such as dexamethasone (DEX).

Materials and Methods

G-PB and PS-PB solutions were prepared by dissolving varying amounts of graphene and PS in PB-toluene solutions of varying concentrations. Graphene was added to a thin PB solution (3 mg PB/mL toluene) to create a 1:1 G-PB ratio by mass, and to a thick PB solution (20 mg PB/mL toluene) to create 1:1 and 1:5 G-PB ratios by mass. PS was added to the thick PB solution to create 1:1, 1:2, and 1:4 PS-PB blend ratios by mass. Spin-casting was used to apply G-PB and PS-PB onto silicon wafers as layers of varying thicknesses (thin PB: 20.5 nm; thick PB: 202.0 nm).25 DPSCs were then plated onto the coated wafers either with or without DEX. Following a culture period of eight days, the cells were counted with a hemocytometer to determine proliferation and then stained with xylenol orange for qualitative analysis of calcification. Cell morphology and calcification of stained cells were assessed by confocal microscopy and phase-contrast fluorescent microscopy. Cell modulus and scaffold surface character were determined using atomic force microscopy. Finally, cell biomineralization was analyzed using scanning electron microscopy.
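
As a rough illustration of the composition arithmetic behind these mixtures, the short Python sketch below computes how much filler (graphene or PS) to add per milliliter of a PB-toluene stock to reach a target filler:PB mass ratio. The helper function and the printed examples are ours, for illustration only; they are not the study's protocol.

```python
def filler_mass_per_ml(pb_mg_per_ml, filler_parts, pb_parts):
    """Mass of filler (graphene or PS, in mg) to add per mL of a PB-toluene
    stock so that the filler:PB mass ratio equals filler_parts:pb_parts."""
    return pb_mg_per_ml * filler_parts / pb_parts

# Thin PB stock: 3 mg PB/mL toluene; thick PB stock: 20 mg PB/mL toluene
print(filler_mass_per_ml(3, 1, 1))    # 1:1 G-thin PB   -> 3.0 mg graphene per mL
print(filler_mass_per_ml(20, 1, 5))   # 1:5 G-thick PB  -> 4.0 mg graphene per mL
print(filler_mass_per_ml(20, 1, 4))   # 1:4 PS-thick PB -> 5.0 mg PS per mL
```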

Results

Cell Proliferation and Morphology

To ensure the biocompatibility of graphene, cell proliferation studies were conducted on all G-PB substrates. Results showed that G-PB did not inhibit DPSC proliferation. The doubling time was lowest for 1:1 G-thick PB, while doubling time was observed to be greatest for 1:1 G-thin PB. Multiple two-sample t-tests showed that the graphene substrates had significantly lower doubling times than standard plastic monolayer (p < 0.001).
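
A doubling-time comparison of the kind described above can be reproduced with a short script. This is a sketch under assumed inputs: the cell counts, time span, and replicate numbers are placeholders, and the exponential-growth formula and Welch's two-sample t-test are our choices rather than a description of the study's exact analysis.

```python
import math
from scipy import stats

def doubling_time(n0, n1, hours):
    """Doubling time in hours, assuming exponential growth between two counts."""
    return hours * math.log(2) / math.log(n1 / n0)

# Placeholder counts (cells/well) at seeding and after 8 days, three replicates each
graphene_pb  = [doubling_time(5e4, n, 8 * 24) for n in (8.1e5, 7.6e5, 8.8e5)]
plastic_ctrl = [doubling_time(5e4, n, 8 * 24) for n in (4.2e5, 3.9e5, 4.5e5)]

# Two-sample (Welch's) t-test comparing doubling times between groups
t_stat, p_val = stats.ttest_ind(graphene_pb, plastic_ctrl, equal_var=False)
print(f"mean G-PB: {sum(graphene_pb)/3:.1f} h, "
      f"mean control: {sum(plastic_ctrl)/3:.1f} h, p = {p_val:.3g}")
```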

After days 3, 5, and 8, the morphology of DPSCs cultured on the G-PB and PS-PB films was analyzed using phase-contrast fluorescent microscopy. Images showed normal cell morphology and growth relative to the control thin PB and thick PB samples. After days 16 and 21 of incubation, the morphology of DPSCs cultured on all substrates was analyzed using confocal microscopy. There was no distinctive difference in morphology among the DPSC colonies. DPSCs appeared fibroblast-like and were confluent in culture by day 15 of incubation.

Modulus Studies

To establish a relationship between the rigidity of the cells and that of the substrate on which they were grown, modulus measurements were taken using atomic force microscopy. Modulus results are shown in Figure 3.

Differentiation Studies

After day 16 of incubation, calcification of DPSCs on all substrates was analyzed using confocal microscopy. Imaging indicated preliminary calcification on all substrates with and without DEX. Qualitatively, the DEX samples exhibited much higher levels of calcification than their non-DEX counterparts, as evident in thin PB, 1:1 G-thin PB, and 1:4 PS-PB samples.

After days 16 and 21 of incubation, biomineralization by the DPSCs cultured on the substrates was analyzed by scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDX). The presence of white, granular deposits in SEM images indicates the formation of hydroxyapatite, which signifies differentiation. This differentiation was confirmed by the presence of calcium and phosphate peaks in EDX analysis. Other crystals (not biomineralized) were determined to be calcium carbonates by EDX analysis and were not indicative of DPSC differentiation.

On day 16, only thin PB induced with DEX was shown to have biomineralized, with sporadic crystal deposits. By day 21, all samples were shown to have biomineralized to some degree, except for 1:1 PS-PB (non-DEX) and 1:4 PS-PB (non-DEX). Heavy biomineralization in crystal and dotted structures was apparent in DPSCs cultured on 1:1 G-thin PB (both with and without DEX). Furthermore, samples containing graphene appeared to have greater amounts of hydroxyapatite than the control groups. All PS-PB substrates biomineralized in the presence of DEX, while only 1:2 PS-PB was shown to biomineralize without DEX. These results indicate that while PS-PB blends generally require DEX for biomineralization and differentiation, this is not the case for G-PB substrates: biomineralization occurred on DPSCs cultured on G-PB substrates without DEX, demonstrating the differentiating ability of the G-PB mechanical environment and its interactions with DPSCs.

Discussion

This study investigated the effect of hard-soft intercalated scaffolds on the proliferation and differentiation of DPSCs in vitro. As cells have been shown to respond to substrate mechanical cues, we monitored the effect of ECM-mimicking hard-soft intercalated substrates on the behavior of DPSCs. We chose graphene and polystyrene as the hard components, and used polybutadiene as a soft matrix.

By using AFM for characterization of the G-PB composite and PS-PB blend substrates, we demonstrated that all surfaces had proper phase separation and uniform dispersion. This ensured that DPSCs would be exposed to both the hard peaks and soft surfaces during culture, allowing us to draw valid conclusions regarding the effect of substrate mechanics.

Modulus studies on the substrates indicated that 1:1 G-thin PB was the most rigid substrate and that control thin PB was the second most rigid. In general, G-PB substrates were 8-20 times stiffer than PS-PB substrates. The high relative modulus of graphene-based substrates can be attributed to the stiffness of graphene itself. Cell modulus appeared highly correlated with substrate modulus: the stiffer the substrate, the stiffer the DPSCs cultured on it, supporting the finding that substrate stiffness affects the cell ECM.27,28 For example, 1:1 G-thin PB had the highest surface modulus, and DPSCs grown on 1:1 G-thin PB had the highest cell modulus. Conversely, DPSCs grown on 1:4 PS-PB had the lowest cell modulus, and 1:4 PS-PB had one of the lowest relative surface moduli. Another notable trend involves DEX: cells cultured with DEX had greater moduli than cells cultured without it, suggesting a possible mechanism by which DEX enhances stiffness, and thereby osteogenesis, of DPSCs.
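
The substrate-cell stiffness relationship described here can be summarized with a correlation coefficient. The sketch below uses made-up paired modulus values purely to show the calculation; the numbers are placeholders, not measurements from this study.

```python
from scipy.stats import pearsonr

# Hypothetical paired moduli (arbitrary units, placeholders only)
substrate_modulus = [950, 400, 120, 60, 35]    # e.g., stiffest substrate to softest
cell_modulus      = [18, 11, 6, 4, 3]          # DPSCs grown on the corresponding substrate

r, p = pearsonr(substrate_modulus, cell_modulus)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")    # r near +1 would indicate co-stiffening
```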

Cell morphology studies indicated normal growth and normal cell shape on all substrates. Cell proliferation studies indicated that all samples had significantly lower cell doubling times than standard plastic monolayer (p < 0.001). Results confirm that graphene is not cytotoxic to DPSCs, which supports previous research.27

SEM/EDX indicated that DPSCs grown on thick PB soft substrates appeared to have increased proliferation but limited biomineralization. In contrast, cells on the hardest substrates, 1:1 G-thin PB and thin PB, exhibited slower proliferation, but formed more calcium phosphate crystals, indicating greater biomineralization and osteogenic differentiation. The success of G-PB substrates in inducing osteogenic differentiation may be explained by the behavior of graphene itself. Graphene can influence cytoskeletal proteins, thus altering the differentiation of DPSCs through chemical and electrochemical means, such as hydrogen bonding with RGD peptides.29,30 In addition, G-PB substrate stiffness may upregulate levels of alkaline phosphatase and osteocalcin, creating isometric tension in the DPSC actin network and resulting in greater crystal formation.30 Overall, the proliferation results indicate that cells that undergo higher proliferation will undergo less crystal formation and osteogenic differentiation (and vice-versa).

The data presented here indicate that hard-soft intercalated substrates have the potential to enhance both proliferation and differentiation of DPSCs. G-PB substrates possess greater differentiation capabilities, whereas PS-PB substrates possess greater proliferative capabilities. Within graphene-based substrates, 1:1 G-thin PB induced the greatest biomineralization, performing better than various other substrates induced with DEX. This indicates that substrate stiffness is a potent stimulus that can serve as a promising alternative to biochemical factors like DEX.

Conclusion

The development of an ideal scaffold has been the focus of significant research in regenerative medicine. Altering the mechanical environment of the cell offers several advantages over current strategies, which are largely reliant on growth factors that can lead to acceleration of cancer metastasis. Within this study, the optimal scaffold for growth and differentiation of DPSCs was determined to be the 1:1 G-thin PB sample, which exhibited the greatest cell modulus, crystal deposition, and biomineralization. In addition, our study indicates two key relationships: one, the correlation between substrate and cell rigidity, and two, the tradeoff between scaffold-induced proliferation and scaffold-induced differentiation of cells, which depends on substrate characteristics. Further investigation of hard-soft intercalated substrates holds potential for developing safer and more cost-effective bone regeneration scaffolds.

References

  1. Spin-Neto, R. et al. J Digit Imaging. 2011, 24(6), 959–966.
  2. Rogers, G. F. et al. J Craniofac Surg. 2012, 23(1), 323–327.
  3. Bone health and osteoporosis: a report of the Surgeon General; Office of The Surgeon General: Rockville, 2004.
  4. Christodoulou, C. et al. Postgrad Med J. 2003, 79(929), 133–138.
  5. Hisbergues, M. et al. J Biomed Mater Res B. 2009, 88(2), 519–529.
  6. Chang, C. et al. Ann J Mater Sci Eng. 2014, 1(3), 7.
  7. Jang, JY. et al. BioMed Res Int. 2011, 2011.
  8. d’Aquino, R. et al. Stem Cell Rev. 2008, 4(1), 21–26.
  9. Liu, H. et al. Methods Enzymol. 2006, 419, 99–113.
  10. Jimi, E. et al. Int J Dent. 2012, 2012.
  11. Gronthos, S. et al. Proc Natl Acad Sci. 2000, 97(25), 13625–13630.
  12. Chen, S. et al. Arch Oral Biol. 2005, 50(2), 227–236.
  13. Daley, W. P. et al. J Cell Sci. 2008, 121(3), 255–264.
  14. Kim, S.J. et al. J Mater Sci Mater Med. 2008, 19(8), 2953–2962.
  15. Dalby, M. J. et al. Nature Mater. 2007, 6(12), 997–1003.
  16. Engler, A. J. et al. Cell. 2006, 126(4), 677–689.
  17. Reilly, G. C. et al. J Biomech. 2010, 43(1), 55–62.
  18. Schakenraad, J. M. et al. J Biomed Mater Res. 1986, 20(6), 773–784.
  19. Lee, J. H. et al. J Biomed Mater Res. 1997, 34(1), 105–114.
  20. Ruardy, T. G. et al. J Colloid Interface Sci. 1997, 188(1), 209–217.
  21. Elliott, J. T. et al. Biomaterials. 2007, 28(4), 576–585.
  22. Danusso, F. et al. J Polym Sci. 1957, 24(106), 161–172.
  23. Nayak, T. R. et al. ACS Nano. 2011, 5(6), 4670–4678.
  24. Goenka, S. et al. J Control Release. 2014, 173, 75–88.
  25. Extrand, C. W. Polym Eng Sci. 1994, 34(5), 390–394.
  26. Oh, S. et al. Proc Natl Acad Sci. 2009, 106(7), 2130–2135
  27. Jana, B. et al. Chem Commun. 2014, 50(78), 11595–11598.
  28. Nayak, T. R. et al. ACS Nano. 2010, 4(12), 7717–7725.
  29. Banks, J. M. et al. Biomaterials. 2014, 35(32), 8951–8959.
  30. Arnsdorf, E. J. et al. J Cell Sci. 2009, 122(4), 546–553.

Comment

Developments in Gold Nanoparticles and Cancer Therapy

Abstract

Nanotechnology has recently produced several breakthroughs in localized cancer therapy. Specifically, directing the accumulation of gold nanoparticles (GNPs) in cancerous tissue enables the targeted release of cytotoxic drugs and enhances the efficacy of established cancer therapy methods. This article will give a basic overview of the structure and design of GNPs, the role of GNPs in drug delivery and localized cancer therapy, and the challenges in developing and using GNPs for cancer treatment.

Introduction

Chemotherapy is currently the most broadly utilized method of treatment for most subtypes of cancer. However, cytotoxic chemotherapy drugs are limited by their lack of specificity; chemotherapeutic agents target all of the body’s most actively dividing cells, giving rise to a number of dangerous side effects.1 GNPs have recently attracted interest due to their ability to act as localized cancer treatments—they offer a non-cytotoxic, versatile, specific targeting mechanism for cancer treatment and a high binding affinity for a wide variety of organic molecules.2 Researchers have demonstrated the ability to chemically modify the surfaces of GNPs to induce binding to specific pharmaceutical agents, biomacromolecules, and malignant cell tissues. This allows GNPs to deliver therapeutic agents at tumor sites more precisely than standard intravenous chemotherapy can. GNPs also increase the efficiency of established cancer therapy methods, such as hyperthermia.3 This article will briefly cover the design and characteristics of GNPs, and then outline both the roles of GNPs in cancer therapy and the challenges in implementing GNP-based treatment options.

Design and Characteristics of Gold Nanoparticles

Nanoparticle Structure

To date, gold nanoparticles have been developed in several shapes and sizes.4 Although GNPs have also been successfully synthesized as rods, triangles, and hexagons, spherical GNPs have been demonstrated to be one of the most biocompatible nanoparticle models. GNP shape affects accumulation behavior in cells.4 A study by Tian et al. found that hexagonal GNPs produced a greater rate of vesicular aggregation than both spherical and triangular GNPs.

Differences in GNP shape also cause variation in surface area and volume, which affects cellular uptake, biocompatibility, and therapeutic efficiency.4 For example, GNPs with greater surface area or more vertices possess enhanced cell-binding capabilities but also heightened cell toxicity. Clearly, GNPs must be designed with their intended function in mind.

Nanoparticle Surface Modification

In order to target specific cells or tissues, GNPs must undergo a ligand-attachment process known as surface modification. The types of ligand particles attached to a GNP affect its overall behavior. For example, ligands consisting of inert molecular chains can stabilize nanoparticles against unwanted aggregation.5 Polyethylene glycol (PEG) is a hydrocarbon chain that stabilizes GNPs by repelling other molecules through steric effects; incoming molecules are unable to penetrate the PEG-modified surface of the GNPs.5 Certain ligand sequences can also enable a GNP to bind strongly to a target molecule by molecular recognition, which is determined by geometric matching of the surfaces of the two molecules.5

Tumor cells often express more cell-surface receptors than normal cells; targeting these receptors for drug delivery increases drug accumulation and therapeutic efficacy.6 However, the targeted receptors must be largely exclusive to cancerous cells in order to optimize nanoparticle and drug targeting. For example, most tumor cells have integrin receptors.7 To target these receptors, the surfaces of therapeutic GNPs can be functionalized with the arginine-glycine-aspartic acid (RGD) sequence, which binds to key members of the integrin family.8 Successful targeting can lead to endocytosis and intracellular release of the therapeutic agents that the GNPs carry.

An important aspect of GNP therapy is the efficient targeting and release of therapeutic agents at the designated cancerous site. There are two types of GNP targeting: passive and active. In passive targeting, nanoparticles accumulate at a specific site through physicochemical factors (e.g., size, molecular weight, and shape), extravasation, or pharmacological factors. Release can be triggered by internal factors, such as pH changes, or by external stimuli, such as the application of light.2 In active targeting, ligand molecules attached to the surface of a GNP render it capable of effectively delivering pharmaceutical agents and large biomacromolecules to specific cells in the body.

Gold Nanoparticles in Localized Cancer Therapy

Hyperthermia

Hyperthermia is a localized cancer therapy in which cancerous tissue is exposed to high temperatures to induce cell death. Placing gold nanoparticles at the site of therapy can improve the efficiency and effectiveness of hyperthermia, leading to lower levels of tumor growth. GNPs aggregated at cancerous tissues allow intense, localized increases in temperature that better induce cell death. In one study on mice, breast tumor tissue containing aggregated GNPs experienced a temperature increase 28°C higher than control breast tumor tissue when subjected to laser excitation.9 While the control tissue had recurring cancerous growth, the introduction of GNPs significantly increased the therapeutic temperature of the tumors and permanently damaged the cancerous tissue.

Organelle Targeting

GNPs are also capable of specifically targeting malfunctioning organelles in tumor cells, such as nuclei or mitochondria. The nucleus is an important target in localized cancer therapy since it controls the processes of cell growth, proliferation, and apoptosis, which are commonly defective in tumor cells. Accumulation of GNPs inside nuclei can disrupt faulty nuclear processes and eventually induce cell apoptosis. The structure of the GNP used to target the nucleus determines the final effect. For example, small spherical and “nanoflower”-shaped GNPs compromise nuclear functioning, but large GNPs do not.10

Dysfunctional mitochondria are also valuable targets in localized therapy as they control the energy supply of tumor cells and are key regulators of their apoptotic pathways.10 Specific organelle targeting causes internal cell damage to cancerous tissue only, sparing normal tissue from the damaging effects of therapeutic agents. This makes nuclear and mitochondrial targeting a desirable treatment option that merits further investigation.

Challenges of Gold Nanoparticles in Localized Cancer Therapy

Cellular Uptake

Significant difficulties have been encountered in engineering a viable method of cellular GNP uptake. Notably, GNPs must not only bind to a given cancer cell’s surface and undergo endocytosis into the cell, but they must also evade endosomes and lysosomes.10 These obstacles are present regardless of whether the GNPs are engineered to target specific organelles or release therapeutic agents inside cancerous cells. Recent research has demonstrated that GNPs can avoid digestion by being functionalized with certain surface groups, such as polyethylenimine, that allow them to escape endosomes and lysosomes.10

Toxic Effects on Local Tissue

The cytotoxic effects of GNPs on local cells and tissues remain poorly understood.11 However, recent research developments have revealed a relationship between the shapes and sizes of GNPs and their cell toxicities. Larger GNPs have been found to be more cytotoxic than smaller ones.12 Gold nanospheres were lethal at lower concentrations, while gold nanostars were less toxic.13 While different shapes and sizes of GNPs can be beneficial in various localized cancer therapies, GNPs must be optimized on an application-by-application basis with regard to their toxicity level.

Conclusion

Gold nanoparticles have emerged as viable agents for cancer therapy. GNPs are effective in targeting malignant cells specifically, making them less toxic to normal cells than traditional cancer therapies. By modifying their surfaces with different chemical groups, scientists can engineer GNPs to accumulate at specific tumor sites. The shape and size of a GNP also affect its behavior during targeting, accumulation, and cellular endocytosis. After accumulation, GNPs may be used to enhance the efficacy of established cancer therapies such as hyperthermia. Alternatively, GNPs can deliver chemotherapy drugs to tumor cells internally or target specific organelles inside the cell, such as the nucleus and the mitochondria.

Although some research has shown that GNPs themselves do not produce acute cytotoxicity in cells, other research has indicated that nanoparticle concentration, shape, and size may all affect cytotoxicity. Therefore, nanoparticle design should be optimized to increase cancerous cell death but limit cytotoxicity in nearby normal cells.

References

  1. Estanquiero, M. et al. Colloid Surface B. 2015, 126, 631-648.
  2. Pissuwan, D. et al. J Control Release. 2009, 149, 65-71.
  3. Chatterjee, D. V. et al. Ther Deliv. 2011, 2, 1001-1014.
  4. Tian, F. et al. Nanomedicine. 2015, 10, 2643-2657.
  5. Sperling, R.A. et al. Phil Trans R Soc A. 2010, 368, 1333-1383.
  6. Amreddy, N. et al. Int J of Nanomedicine. 2015, 10, 6773-6788.
  7. Kumar, A. et al. Biotechnol Adv. 2013, 31, 593-606.
  8. Perlin, L. et al. Soft Matter. 2008, 4, 2331-2349.
  9. Jain, S. et al. Br J Radiol. 2012, 85, 101-113.
  10. Kodiha, M. et al. Theranostics. 2015, 5, 357-370.
  11. Nel, A. et al. Science. 2006, 311, 622-627.
  12. Pan, Y. et al. Small. 2007, 3, 1941-1949.
  13. Favi, P. M. et al. J Biomed Mater Res A. 2015, 103, 3449-3462.

Comment

Mastering Mega Minds: Our Quest for Cognitive Development

Humans are continuously pursuing perfection. This drive is what motivates scientific researchers and comic book authors alike to dream about the invention of bionic men. It seems inevitable that this quest has expanded to target humankind’s most prized possession: the brain. Cognitive enhancements are technologies designed to elevate human mental capacities. As scientists and entrepreneurs attempt to research and develop cognitive enhancements, however, society faces an ethical dilemma. Policy must help create a balance, maximizing the benefits of augmented mental processing while minimizing potential risks.

Cognitive enhancements are becoming increasingly prevalent and exist in numerous forms, from genetic engineering to brain stimulation devices to cognition-enhancing drugs. The vast differences between these categories make it difficult to generalize a single proposition that can effectively regulate enhancements as a whole. Overall, out of these types, prescription pills and stimulation devices currently have the largest potential for widespread usage.

Prescription pills exemplify the many benefits and drawbacks of using cognitive enhancements. ADHD medications like Ritalin and Adderall, which stimulate dopamine and norepinephrine activity in the brain, may be the most ubiquitous example of available cognitive enhancements. These drugs are especially abused among college students, who use these pills to stay awake for longer periods of time and enhance their attention while studying. In a collection of studies, 4.1 to 10.8% of American college students reported recreationally using a prescription stimulant in the past year, while the College Life Study determined that up to a quarter of undergraduates used stimulants at least once during college.1,2 Students may not know or may disregard the fact that prolonged abuse has resulted in serious health concerns, including cardiopulmonary issues and addiction. When these medications are taken incorrectly, especially in conjunction with alcohol, users risk seizures and death.3

In addition to stimulants, a variety of other prescription drugs have been shown to improve cognitive function. Amphetamines act on neurotransmitters in the brain to increase wakefulness and adjust sleep patterns. They achieve this by preventing dopamine reuptake and disrupting normal vesicular packaging, which also increases dopamine concentration in the synaptic cleft through reverse transport from the cytosol.4 These drugs are currently used by the armed forces to mitigate pilots’ fatigue in high-intensity situations. While usage of these drugs may help regulate pilots’ energy levels, it also means that pilots face heavy pressure to take amphetamines despite the possibility of addiction and the lack of approval from the U.S. Food and Drug Administration for this use.5

Besides prescription medications, various technological devices exist or are being created that affect cognition. For instance, transcranial direct current stimulation (tDCS) and transcranial magnetic stimulation (TMS) are devices currently marketed to enhance cognitive functioning through online websites and non-medical clinics, even though they have not yet received comprehensive clinical evaluations for this purpose.6 tDCS works by placing electrodes on the scalp to target specific brain areas. The machine sends a small direct current through electrodes to stimulate or inhibit neuronal activity. Similarly, TMS uses magnetic fields to alter neural activity. These methods have been shown to improve cognitive abilities including working memory, attention, language, and decision-making. Though these improvements are generally short-term, one University of Oxford study used tDCS to produce long-term improvements in mathematical abilities. Researchers taught subjects a new numerical system and then tested their ability to process and map the numbers into space. Subjects who received tDCS stimulation to the posterior parietal cortex displayed increased performance and consistency up to six to seven months after the treatment. This evidence indicates that tDCS can be used for the development of mathematical abilities as well as the treatment of degenerative neurological disorders such as Alzheimer’s.7

Regulation of cognitive enhancements is a multifaceted issue for which the risks and benefits of widespread usage must be intensively examined. According to one perspective, enhancements possess the ability to maximize human efficiency. If an enhancement can enable the acceleration of technological development and enable individuals to solve issues that affect society, it could improve the quality of life for users and non-users alike. This is why bans on anabolic steroids are not directly comparable to those on cognitive enhancements. While both medications share the goal of helping humans accomplish tasks beyond their natural capabilities, cognitive enhancements could accelerate technological and societal advancement. This would be more beneficial to society than one individual’s enhanced physical prowess.

It should be noted, however, that such enhancements will not instantaneously bestow Einsteinian intellect on the user. In a recent meta-analysis of 48 academic studies with 1,409 participants, prescription stimulants were found to improve delayed working memory but had only modest effects on inhibitory control and short-term episodic memory. The report also noted that, in some situations, other methods, including getting adequate sleep and using cognitive techniques like mnemonics, are far more beneficial than taking drugs such as methylphenidate and amphetamines. Biomedical enhancements, however, have broad effects applicable to many situations, while traditional cognitive techniques that do not directly change the biology behind neural processes are task-specific and only rarely produce significant improvements.8

However, if we allow enhancement use to grow unchecked, an extreme possibility is the creation of a dystopian society led by only those wealthy enough to afford cognitive enhancements. Speculation about other negative societal effects is endless; for example, widespread use of cognitive enhancements could create a cutthroat work environment with constant pressure to use prescription pills or cranial stimulation, despite side effects and cost, in order to compete in the job market.

The possibility of addiction to cognitive enhancements and issues of social stratification based on access or cost should not be disregarded. However, there are many proposed solutions to these issues. Governmental regulation proposed by neuroethics researchers includes ensuring that cognitive enhancements are not readily available and are given only to those who demonstrate knowledge of the risks and responsible use of such enhancements. Additionally, the creation of a national database, similar to the current system used to regulate addictive pain relievers, would help control the amount of medication prescribed to individuals. This database could be an integrated system that allows doctors to view patients’ other prescriptions, ensuring that those who attempt to deceive pharmacies to obtain medications for personal abuse or illegal resale could not easily game the system. Finally, to address the issue of potential social inequality, researchers at Oxford University’s Future of Humanity Institute have proposed a system in which the government could support broad development, competition, public understanding, a price ceiling, and even subsidized access for disadvantaged groups, leading to more equal access to cognitive enhancements.9

Advancements have made it possible to alter our minds using medical technology. Society requires balance to regulate these enhancements, an environment that will promote safe use while preventing abuse. The regulation of cognitive enhancement technologies should occur at several levels to be effective, from market approval to individual use. When creating these laws, research should not be limited because that could inhibit the discovery of possible cures to cognitive disorders. Instead, the neuroethics community should focus on safety and public usage regulations with the mission of preventing abuse and social stratification. Cognitive enhancements have the potential to affect the ways we learn, work, and live. However, specific regulations to address the risks and implications of this growing technology are required; otherwise the results could be devastating.

References

  1. McCabe, S.E. et al. J. Psychoactive Drugs 2006, 38, 43-56.
  2. Arria, A.M. et al. Subst. Abus. 2008, 29(4), 19-38.
  3. Morton, W.A.; Stockton, G. J. Clin. Psychiatry 2000, 2(5), 159-164.
  4. España, R.; Scammel, T. SLEEP 2011, 34(7), 845-858.
  5. Rasmussen, N. Am. J. Public Health 2008, 98(6), 974-985.
  6. Maslen, H. et al. J. of Law and Biosci. 2014, 1, 68-93.
  7. Kadosh, R.C. et al. Curr. Biol. 2010, 20, 2016-2020.
  8. Ilieva, I.P. et al. J. Cogn. Neurosci. 2015, 1069-1089.
  9. Bostrom, N.; Sandberg, A. Sci. Eng. Ethics 2009, 15, 311-341.

Comment

Defect Patch: The Band-Aid for the Heart

Imagine hearing that your newborn, only a few minutes out of the womb, has a heart defect and will live only a couple more days. Shockingly, 1 in every 125 babies is born with some type of congenital heart defect, which can drastically reduce his or her lifespan.1 However, research institutes and hospitals nationwide are testing solutions and advanced devices to treat this condition. The most promising approach is the defect patch, in which scaffolds of tissue are engineered to mimic a healthy heart. The heart is enormously complex, so mimicking it is easier said than done. These patches require a tensile strength (to withstand the heart’s pulses and variances) greater than that of the human left ventricle.1 To add to the difficulty of creating such a device, the layers of the patch have to be not only strong but also soft and supple, as cardiac cells prefer malleable tissue environments.

Researchers have taken on this challenge and, by testing various biomaterials, have determined the compatibility of each material within the patch. The materials are judged on the basis of their biocompatibility, biodegradability, resorption, strength, and shapeability.2 Natural possibilities include gelatin, chitosan, fibrin, and submucosa.1 Though gelatin is easily biodegradable, it has poor strength and lacks cell-surface adhesion properties. Similarly, fibrin binds to different receptors but has weak compressive strength.3 On the artificial side, the polyglycolic acid (PGA) polymer is strong and porous, while the poly(lactic-co-glycolic acid) (PLGA) polymer has well-regulated biological properties but poor cell attachment. This trade-off between the different components of a good patch is what makes building and modifying these systems so difficult. Nevertheless, the future of defect patches is extremely promising.

An artificial polymer often used in creating patches is polycaprolactone (PCL). This material is covered with a gelatin-chitosan hydrogel to form a hydrophilic (water-attracting) patch.1 In the process of making the patch, many different solutions of PCL matrices are prepared. The tension of the patch is measured to make sure that it will not rip or become damaged as the child develops and heart rate increases. The strength of the patch must always be greater than that of the left ventricle to ensure that neither the patch nor the heart muscle ruptures.1 Although many considerations must be accounted for in making this artificial patch, the malleability and adhesive strength of the device are the most important.1 Imagine a 12-year-old child with a defect patch implanted in the heart. Suppose this child attempts a cardio workout: 100 jumping jacks, a few laps around a track, and some pushups. The heart patch must be able to withstand strain approaching its ultimate tensile strain without detaching or bursting, and the PCL core of the patch must be able to handle large bursts of activity. Finally, the patch must be able to grow with the child, and the heart must be able to grow new cells around the patch. In summary, the PCL patch must be biodegradable, have sufficient mechanical strength, and remain viable under harsh conditions.
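
The design criterion described above (patch strength exceeding left-ventricular stress, with margin for bursts of activity) can be expressed as a simple check. The sketch below is illustrative only; the safety factor and the numbers in the example are placeholders, not measured properties from the cited work.

```python
def patch_passes(patch_ultimate_strength_kpa, lv_peak_stress_kpa, safety_factor=2.0):
    """Return True if the patch's ultimate tensile strength exceeds the peak
    left-ventricular wall stress by at least the chosen safety factor.
    All values used here are illustrative placeholders, not literature data."""
    return patch_ultimate_strength_kpa >= safety_factor * lv_peak_stress_kpa

# Hypothetical example: a candidate patch vs. an assumed peak LV wall stress
print(patch_passes(patch_ultimate_strength_kpa=60.0, lv_peak_stress_kpa=15.0))  # True
```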

While artificial materials like PCL are effective, in some situations the aforementioned design criteria are best fulfilled by patches made from natural biomaterials. For instance, chitosan serves as a good template for the outside portion of the patch.4 This material is biocompatible, bioabsorbable, and shapeable. Using natural materials can reduce the risk of abnormal vascularization (the improper formation of blood vessels), and such materials can adapt to gradual changes in the heart. Natural patches being developed and tested in Dr. Jeffrey Jacot’s lab at Rice University include a core of stem cells, which can differentiate into more specialized cells as the heart grows. They currently contain amniotic fluid-derived stem cells (AFSC), which must be isolated from humans.5 Researchers prepare a layer of chitosan (or fibrin in some cases) and polyethylene glycol hydrogels to compose the outside part of the patch,4 then inject AFSC into this matrix to form the final patch. The efficiency of the patch is measured by recording the stem cells’ ability to transform into new cell types. In experiments, AFSC have been able to differentiate into virtually any cell type, making them particularly promising for regenerative medicine.5 These initial prototypes are still being developed and thoroughly tested on rodents.6 A major limitation of this approach is the inability of patches to adapt to rapidly developing hearts, such as those of human infants; moreover, patch testing on humans or even larger mammals has yet to be done. The most important challenges for the future of defect patches are flexibility and adaptability.6 After all, this patch is essentially a transformed and repaired body part. Through the work of labs like Dr. Jacot’s, cardiac defects in infants and children may one day be completely treatable with a patch. Hopefully, in the future, babies with this “Band-Aid” may have more than a few weeks to live, if not an entire lifetime.

References

  1. Pok, S., et al., ACS Nano. 2014, 9822–9832.
  2. Pok, S., et al., Acta Biomaterialia. 5630–5642.
  3. Pok, S., et al., J Cardiovasc Transl Res. 2011, 646–654.
  4. Tottey, S., et al. Biomaterials. 2011, 32(1), 128–136.
  5. Benavides, O. M., et al., Tissue Engineering Part A. 1185–1194.
  6. Pok, S., et al., Tissue Engineering Part A. 1877–1887.

Comment

Microglia: Gardeners with Guns

If you ask anyone about the brain, their response will almost certainly involve neurons. Although neurons have been the stars of neuroscience for the past hundred years, the brain would be entirely dysfunctional if not for the variety of brain support cells, collectively known as glia.1

Glial cells serve a variety of purposes in the central nervous system. Oligodendrocytes produce an insulating fatty material called myelin, and astrocytes maintain electrical impulses in the neuronal network.1 Perhaps the least glorious of glial functions are carried out by the microglia, the neurological equivalent of household gardeners: pruning unwanted synapses and tending to new ones. Microglia are also the first line of immune defense in the brain. From the brain’s humble beginnings as a mass of undifferentiated neurons to its affliction with the weeds of old age, microglia are tasked with neuronal maintenance and repair, meaning that deviation from their “just-right” activity can contribute to a variety of neural diseases. Too little activity has been linked to autism and schizophrenia; too much, to Alzheimer’s and Parkinson’s. Given the large role these tiny cells play in brain protection, therapies that regulate microglial activation could be the key to curing a slew of neurological disorders.

Microglia respond to neural stress and injury through different mechanisms unique to their respective cell types: amoeboid phagocytic, resting ramified, and activated.2 Amoeboid phagocytic glia act similarly to other scavengers and ingest large amounts of cellular debris in the developing brain during gestation.3 In postnatal development, these glia transform into resting ramified glia, which remain semi-dormant until their extended branches are activated by electrical signals from neurons or the presence of harmful substances.4 Activated microglia can secrete a variety of anti-inflammatory chemicals to counter neurological problems such as brain tumors and axonal injury.5 Microglia can also increase the permeability of the blood-brain barrier, allowing bodily immune cells to assist with brain immune defense.2 A negative feedback mechanism in microglia regulates their own immune response as well as that of other helper immune cells.

In most pathologies, microglia experience a change in their normal activity caused by environmental factors.2 Gliomas, or tumors in the neural glial tissues, are diseases that microglia should be able to handle. However, cells from the two microglial subcategories that migrate toward gliomal cells, M1 and M2, react differently in the gliomal microenvironment. M1 microglia promote tumor degradation by activating other immune cells and phagocytizing gliomal tumor cells. However, M2 microglia promote tumor growth by inhibiting proinflammatory cytokine activity and slowing immune cell responses.6 Cytokines are small proteins that aid cell communication and regulate cellular immune response.7 Additionally, tumor necrosis factor (TNF) stimulates inactivated microglial migration into the glioma, carving a pathway for glioma to migrate to other areas of the brain. Some gliomal therapies have focused on inhibiting the activity of M2 microglia. Various drug treatments that inhibit M2 activity have been shown to decrease gliomal proliferation in vivo. However, the success of these therapies should be treated with caution: gliomal immunosuppression both inactivates multiple immune responses outside of microglia and has the plasticity to circumvent anti-tumor therapies.6

Reduced microglial activity is related to a variety of neurodevelopmental disorders, such as autism, that demonstrate decreased connectivity in the brain.8 Microglia are responsible for forming mature spines and eliminating immature connections in the brain during postnatal development. This seems counterintuitive: how can a decrease in microglial activity, which causes less synaptic pruning, somehow cause less connectivity in the brain? Reduced microglial activity actually prevents the brain from eliminating immature spine connections, which leads to fewer mature connections; failing to eliminate immature connections physically hinders other synapses from forming multiple connections. Techniques that would increase microglial activity include increasing CR3/C3 pathway activity, which triggers synaptic pruning via an unknown mechanism.9 Although microglial therapies might not entirely eliminate autism, which acts through a variety of known and unknown neurological mechanisms, there is potential for ameliorating some symptoms.

Microglia often experience increased sensitivity in the aging brain caused by an increased expression of activation markers.10 This contributes to several inflammatory neurological illnesses, including Alzheimer’s disease (AD). Microglia are once again found to play contradictory roles in the progression of Alzheimer’s: their activity is critical in producing neuroprotective anti-inflammatory cytokines, removing cell debris, and degrading amyloid-β protein, the main component of the amyloid plaques characteristic of AD.10 On the other hand, activating microglia runs the risk of hyper-reactivity, which can severely damage the central nervous system. Non-steroidal anti-inflammatory drugs (NSAIDs) have been shown to decrease the amount of activated microglia by 33% in non-AD patients, and NSAID treatment of microglial cultures increased amyloid-β phagocytosis and decreased inflammatory cytokine secretion. However, this treatment did not alter microglial inflammatory activity in AD patients. The ideal microglial therapy for neuroinflammatory illnesses would promote only beneficial microglial activity, such as amyloid-β degradation, and eliminate detrimental activity, such as pro-inflammatory secretion. One mechanism that increases pro-inflammatory secretion is amyloid-β binding to the formyl peptide receptor (FPR) on microglia. The protein annexin A1 (ANXA1), by binding to FPR, has been shown to inhibit interactions between amyloid-β and FPR, thereby decreasing pro-inflammatory secretion.

Central nervous system pathology researchers often speculate as to how certain bacteria and viruses are able to enter the brain, considering mechanisms such as increased blood-brain barrier permeability and chemical exchange through cerebrospinal fluid. However, the discovery of nervous system lymphatic vessels may put much of this speculation to rest and open an entirely new avenue of neuroimmunological research.11 The interaction between microglial immune function and these lymphatic vessels could inspire treatments that recruit microglia to the sites where bacterial and viral infections enter the brain. Alternatively, therapies that increase interactions between bodily immune cells and microglia, by increasing the presence of bodily immune cells in the brain, could boost neural immune defense. Other approaches could involve introducing drugs that increase or decrease microglial activity into more accessible lymphatic vessels elsewhere in the body for proactive treatment of neonatal brain diseases. Although we have made some steps toward curing brain diseases that involve microglial activity, coordinating these treatments with others that increase neural immune defenses has the potential to create effective treatments for those afflicted by devastating and currently incurable neurological diseases.

References

  1. Hughes, V. Nature 2012, 485, 570-572.
  2. Yang, I.; Han, S.; Kaur, G.; Crane, C.; Parsa, A. Journal of Clinical Neuroscience 2010, 17, 6-10.
  3. Ferrer, I.; Bernet, E.; Soriano, E.; Del Rio, T.; Fonseca, M. Neuroscience 1990, 39, 451-458.
  4. Christensen, R.; Ha, B.; Sun, F.; Bresnahan, J.; Beattie, M. J. Neurosci. Res. 2006, 84, 170-181.
  5. Babcock, A.; Kuziel, W.; Rivest, S.; Owens, T. J. Neurosci. 2003, 23, 7922-7930.
  6. Wei, J.; Gabrusiewicz, K.; Heimberger, A. Clinical and Developmental Immunology 2013, 2013, 1-12.
  7. Zhang, J.; An, J. International Anesthesiology Clinics 2007, 45, 27-37.
  8. Zhan, Y.; Paolicelli, R.; Sforazzini, F.; Weinhard, L.; Bolasco, G.; Pagani, F.; Vyssotski, A.; Bifone, A.; Gozzi, A.; Ragozzino, D.; Gross, C. Nature Neuroscience 2014, 17, 400-406.
  9. Schafer, D.; Lehrman, E.; Kautzman, A.; Koyama, R.; Mardinly, A.; Yamasaki, R.; Ransohoff, R.; Greenberg, M.; Barres, B.; Stevens, B. Neuron 2012, 74, 691-705.
  10. Solito, E.; Sastre, M. Frontiers in Pharmacology 2012, 3.
  11. Louveau, A.; Smirnov, I.; Keyes, T.; Eccles, J.; Rouhani, S.; Peske, J.; Derecki, N.; Castle, D.; Mandell, J.; Lee, K.; Harris, T.; Kipnis, J. Nature 2015, 523, 337-341.

How Bionic Eyes Are Changing the Way We See the World

Most blind people wear sunglasses, but what if their glasses could actually restore their vision? Such a feat seems miraculous, but the development of new bionic prostheses may make such miracles a reality. These devices work in two ways: by replacing non-functional parts of the visual pathway or by creating alternative neural avenues to provide vision.

When attempting to repair or restore lost vision, it is important to understand how we normally receive and process visual information. Light enters the eye and is refracted by the cornea to the lens, which focuses the light onto the retina. The cells of the retina, namely photoreceptors, convert the light into electrical impulses, which are transmitted to the primary visual cortex by the optic nerve. In short, this process serves to translate light energy into electrical energy that our brain can interpret. For patients suffering from impaired or lost vision, one of the steps in this process is either malfunctioning or not functioning at all.1,2

Many patients with non-functional vision can be treated with current surgical techniques. For example, many elderly individuals develop cataracts, in which the lens of the eye becomes increasingly opaque, resulting in blurred vision. This condition can be rectified fairly simply with a surgical replacement of the lens. However, loss of vision resulting from a problem with the retina or optic nerve can very rarely be corrected surgically due to the sensitive nature of these tissues. Such pathologies include retinitis pigmentosa, an inherited degenerative disease affecting retinal photoreceptors, and head trauma, which can damage the optic nerve. In these cases, a visual prosthesis may be the solution. These devices, often called “bionic eyes,” are designed to repair or replace damaged ocular functions. Such prostheses restore vision by targeting damaged components in the retina, optic nerve, or the brain itself.

One set of visual prostheses works by correcting impaired retinal function via electrode arrays implanted between the retinal layers. The electrodes serve as substitutes for lost or damaged photoreceptors, translating light energy into electrical impulses. The Boston Retinal Implant Project has developed a device involving an eyeglass-mounted camera and an antenna implanted in the skin near the eye.3 The camera transmits visual data to the antenna in a manner reminiscent of a radio broadcast. The antenna decodes the signal and sends it through a wire to an implanted subretinal electrode array, which relays it to the brain. The drawback of this system is that the camera is fully external and decoupled from the eye’s position, meaning the patient must move his or her entire head to survey a scene. Germany’s Retinal Implant AG team seeks to rectify this problem with the Alpha IMS implant system, in which the camera itself is subretinal and “converts light in each pixel into electrical currents.”2

The Alpha IMS system is still undergoing experimental clinical trials in Europe, but it faces some complications. First, the visual acuity of tested patients is around 20/1000, well below the 20/200 threshold for legal blindness. Second, the system’s power supply must be implanted in a high-risk surgical procedure, which can endanger patients. In an attempt to overcome the problems faced by both the Boston Retinal Implant Project and Retinal Implant AG, Dr. Daniel Palanker at Stanford and his colleagues are currently developing a subretinal prosthesis involving a goggle-mounted video camera and an implanted photodiode array. The camera receives incoming light and projects the image onto the photodiode array, which then converts the light into pulsed electrical currents. These currents stimulate nearby neurons to relay the signal to the brain. As Dr. Palanker says, “This method for delivering information is completely wireless, and it preserves the natural link between ocular movement and image perception.”2 Human clinical trials are slated to begin in 2016, and Palanker and his team are confident that the device will be able to produce 20/250 visual acuity or better in affected patients.
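
To put those Snellen fractions in context, they can be compared as decimal acuities (20/20 corresponds to 1.0). The short Python sketch below does only that arithmetic, using the figures quoted above and the standard 20/200 legal-blindness cutoff; it is an illustration, not part of any of the cited studies.

    # Compare reported prosthesis acuities against the 20/200 legal-blindness cutoff.
    # Illustrative arithmetic only; the acuity figures are the ones quoted in the text.

    def decimal_acuity(snellen: str) -> float:
        """Convert a Snellen fraction such as '20/1000' into a decimal acuity."""
        numerator, denominator = (float(part) for part in snellen.split("/"))
        return numerator / denominator

    readings = [
        ("Alpha IMS (reported)", "20/1000"),
        ("Palanker team target", "20/250"),
        ("Legal blindness threshold", "20/200"),
    ]
    for label, snellen in readings:
        print(f"{label}: {snellen} -> decimal acuity {decimal_acuity(snellen):.2f}")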

A potentially safer set of visual prostheses includes suprachoroidal implants. Very similar to the aforementioned subretinal implants, these devices also replace damaged components of the retina. The only difference is that suprachoroidal implants are placed between the choroid layer and the sclera, rather than between the retinal layers. This location allows the devices to be surgically implanted with less risk, as they do not breach the retina itself. Furthermore, these devices are larger than subretinal implants, “allowing them to cover a wider visual field, ideal for navigation purposes.” Development of suprachoroidal devices began in the 1990s at both Osaka University in Japan and Seoul National University in South Korea. Dr. Lauren Ayton and Dr. David Nayagam of the Bionic Vision Australia (BVA) research partnership are leading more recent work. BVA has tested a prototype of a suprachoroidal device in patients with retinitis pigmentosa, and the results have been promising: patients were able to “better localize light, recognize basic shapes, orient in a room, and walk through mobility mazes with reduced collisions.” More testing is planned, along with improvements to the device’s design.2

Both subretinal and suprachoroidal implants work by replacing damaged photoreceptors, but they rely on a functional neural network between the retina and the optic nerve. Replacing damaged photoreceptors will not help a patient who lacks the neural network that can transmit the signal to the brain. This network is composed of ganglion cells at the back of the retina that connect to the optic nerve; these ganglion cells can be viewed as the “output neurons of the eye.” A third type of visual prosthesis targets these ganglion cells. So-called epiretinal implants are placed in the final cell layer of the retina, with electrodes stimulating the ganglion cells that feed directly into the optic nerve. Because these devices are implanted in the last retinal layer, they work “regardless of the state of the upstream neurons.”2 The main advantage of an epiretinal implant, then, is that in cases of widespread retinal damage due to severe retinitis pigmentosa, the device provides a shortcut directly to the optic nerve.

The most promising example of an epiretinal device is the Argus II Visual Prosthesis System, developed by Second Sight. The device, composed of a glasses-mounted camera that wirelessly transmits visual data to an implanted microelectrode array, received FDA marketing approval in 2012. Clinical trials have shown a substantial increase in visual perception and acuity in patients with severe retinitis pigmentosa, and the system has been implanted in more than 50 patients to date.

The common limitation of all these visual prostheses (subretinal, suprachoroidal, and epiretinal) is that they rely on an intact and functional optic nerve, yet some blind patients have optic nerves damaged by head trauma. The optic nerve connects the eye to the brain, so for patients with damage in this region, bionics researchers must find a way to target the brain itself. Experiments in the early 20th century showed that, by stimulating certain parts of the brain, blind patients could perceive flashes of light known as phosphenes. Building on these experiments, modern scientists are working to develop cortical prostheses implanted in either the visual cortex of the cerebrum or the lateral geniculate nucleus (LGN), both of which are key to the brain’s ability to interpret visual information. Such a device would not truly restore natural vision, but would instead produce artificial vision by eliciting patterns of phosphenes.

One group working to develop a cortical implant is the Monash Vision Group (MVG) in Melbourne, Australia, coordinated by Dr. Collette Mann and colleagues. MVG’s Gennaris bionic-vision system consists of a glasses-mounted camera, a small computerized vision processor, and a series of multi-electrode tiles implanted in the visual cortex. The camera transmits images to the vision processor, which converts the picture into a waveform pattern and wirelessly transmits it to the multi-electrode tiles. Each electrode on each tile can generate a phosphene; all the electrodes working in unison can generate phosphene patterns. As Dr. Mann says, “The patterns of phosphenes will create 2-D outlines of relevant shapes in the central visual field.”2 The Illinois Institute of Technology is developing a similar device, an intracortical visual prosthesis termed the IIT ICVP. The device’s developers seek to address the substantial number of blind patients in underdeveloped countries by making the device more affordable. The institute says that “one potential advantage of the IIT ICVP system is its modularity,” and that by using fewer parts, they “could make the ICVP economically viable, worldwide.”4

These visual prostheses represent the culmination of decades of work by hundreds of researchers across the globe. They portray a remarkable level of collaboration between scientists, engineers, clinicians, and more, all for the purpose of restoring vision to those who live without it. And with an estimated 40 million individuals worldwide suffering from some form of blindness, these devices are making miracles reality.

References

  1. The Scientist Staff. The Eye. The Scientist, 2014, 28.
  2. Various Researchers. The Bionic Eye. The Scientist, 2014, 28.
  3. Boston Retinal Implant Project. http://biomed.brown.edu/Courses/BI108/2006-108websites/group03retinalimplants/mit.htm (accessed Oct. 9, 2015)
  4. Intracortical Visual Prosthesis. http://neural.iit.edu/research/icvp/ (accessed Oct. 10, 2015)

From a Pile of Rice to an Avalanche: A Brief Introduction to Granular Materials

Communities living at the foot of the Alps need a way to predict avalanches in time to evacuate, but monitoring the entire Alpine range is impossible. Fortunately for those near the Alps, the study of granular materials has allowed scientists to move mountains into labs and use small, contained systems (like piles of rice) to simulate real-world avalanche conditions. Granular materials, by definition, are conglomerates of discrete visible particles that lose kinetic energy during internal collisions; the grains are neither so small as to be invisible to the naked eye, nor so large that they cannot be studied as distinct objects.1 Their size places granular materials between everyday objects and individual molecules.

While studying extremely small particles, scientists stumbled upon an unsettling contradiction: the classical laws governing the macroscopic universe do not always apply at microscopic scales. For example, Niels Bohr sought to apply classical mechanics to explain the orbits of electrons around nuclei by comparing them to the rotation of planets around stars. However, it was later discovered that an electron behaves in a far more complicated way than Bohr had anticipated. At that scale, the electron exhibits properties that can only be described by an entirely new set of laws known as quantum mechanics.

Though granular materials do not exist at the quantum level, their distinct size necessitates an analogous departure from classical thought. A new category of physical laws must be created to describe the basic interactions among particles of this unique size. Intuitively, this makes sense; anyone who has cooked rice or played with sand knows that the individual grains behave more like water than solid objects. Scientists are intrigued by these materials because of the variation in their behaviors in different states of aggregation. More importantly, since our world consists of granular materials such as coffee, beans, dirt, snowflakes, and coal, their study sheds new light on the prediction of avalanches and earthquakes.

The physical properties of granular flow vary with the concentration of grains. At different concentrations, the grains experience different magnitudes of stress and dissipate energy in different ways. Because it is difficult to derive a single unifying formula for granular flows of all concentrations, physicists use three sets of equations corresponding to the material’s states of aggregation, which resemble the gaseous, liquid, and solid phases. When the material is dilute enough for each grain to fluctuate and translate randomly, it acts like a gas. As the concentration increases, particles collide more frequently and the material behaves like a liquid. Because these collisions are inelastic, a fraction of the grains’ kinetic energy dissipates into heat with each collision, so the increased collision frequency in this liquid-like phase results in greater energy dissipation and greater stress. Finally, when the concentration rises to roughly 50% or more, the material resembles a solid: the grains remain in sustained contact, and stress and energy dissipation become predominantly frictional.1
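
As a rough illustration of that three-regime picture, the minimal Python sketch below maps a packing fraction (the share of volume occupied by grains) to the gas-, liquid-, or solid-like description given above. The 50% solid-like threshold comes from the text; the dilute cutoff is an assumed placeholder, not a measured value.

    # Illustrative sketch of the three granular regimes described in the text.
    # The 0.5 solid-like threshold is taken from the text; GAS_CUTOFF is an assumed
    # placeholder for "dilute enough to behave like a gas", not a measured constant.

    GAS_CUTOFF = 0.1

    def granular_regime(packing_fraction: float) -> str:
        """Classify a granular flow by the fraction of volume occupied by grains."""
        if not 0.0 <= packing_fraction <= 1.0:
            raise ValueError("packing fraction must lie between 0 and 1")
        if packing_fraction < GAS_CUTOFF:
            return "gas-like: grains fluctuate and translate freely between rare collisions"
        if packing_fraction < 0.5:
            return "liquid-like: frequent inelastic collisions dissipate kinetic energy as heat"
        return "solid-like: sustained contacts make stress and dissipation mainly frictional"

    for phi in (0.05, 0.30, 0.60):
        print(f"packing fraction {phi:.2f} -> {granular_regime(phi)}")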

Avalanches come in two types, flow and powder, each of which requires a specific combination of the gas, liquid, and solid granular models. In a flow avalanche, the descending layer consists of densely packed ice grains; the solid phase of granular materials models this best, making friction the chief analytical concern. In a powder avalanche, particles of snow do not stick together and descend in a huge, white cloud,2 so the gas and liquid models of granular materials are the more appropriate descriptions.

Physicists can use these avalanche models to investigate the phenomena leading up to a real-world avalanche. They can simulate the disturbance of a static pile of snow by constantly adding grains to a pile, or by perturbing a layer of grains on the pile’s surface. In an experiment conducted by statistical physicists Dr. Daerr and Dr. Douady, layers of glass beads 1.8 to 3 mm in diameter were poured onto a velvet surface, producing two distinct types of avalanches under regimes determined by the tilt angle of the plane and the thickness of the bead layer.3

For those of us who are not experts in avalanches, there are a few key points to take away from Daerr and Douady. They found that a critical tilt angle exists for spontaneous avalanches. When the angle of the slope remained under this critical angle, the flow did not grow, even if a perturbation caused an additional downfall of grains. Interestingly, when the slope angle was altered significantly, snow uphill from the perturbation point also contributed to the avalanche, meaning that avalanches can affect higher elevations than their starting points. Moreover, the study found that the angle of the remaining slope after an avalanche was always less than the original slope angle, indicating that after a huge avalanche, a mountainside remains stable until external conditions change.3 Often, a snowy slope exceeding the critical angle can remain static and harmless for days because of the cohesion between particles.
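
A minimal sketch of that stability picture follows, with hypothetical angle values standing in for the material- and thickness-dependent thresholds that Daerr and Douady actually measured.

    # Hypothetical illustration of the stability rules summarized above. The angle
    # values are placeholders: the real critical angle depends on the grains and on
    # the thickness of the layer, as measured by Daerr and Douady.

    CRITICAL_ANGLE_DEG = 32.0   # assumed threshold for spontaneous avalanches
    RELAXATION_DEG = 3.0        # assumed drop in slope angle after an avalanche

    def describe_slope(slope_deg: float) -> str:
        if slope_deg < CRITICAL_ANGLE_DEG:
            return ("below the critical angle: a perturbation may dislodge grains, "
                    "but the flow does not grow")
        return ("above the critical angle: a spontaneous avalanche is possible, though "
                "cohesion can keep the slope static for days")

    def slope_after_avalanche(original_deg: float) -> float:
        # The study found the remaining slope is always shallower than the original.
        return original_deg - RELAXATION_DEG

    for angle in (28.0, 36.0):
        print(f"{angle:.0f} degrees -> {describe_slope(angle)}")
    print(f"after an avalanche, a 36-degree slope relaxes to about "
          f"{slope_after_avalanche(36.0):.0f} degrees")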

Situations become more complicated if the grains are not completely dry, as is the case in real snow avalanches. In these scenarios, physicists must modify existing formulas and conduct validating experiments to predict the behavior of these systems. Nor is the study of granular materials limited to predicting avalanches: in geophysics, scientists have investigated the relation of granular materials to earthquakes, and one study used sound waves and glass beads to examine the effects of earthquake aftershocks.4 Beyond traditional modeling with piles of rice or sand, understanding granular materials in their different phases paves the way for computational modeling of large-scale natural disasters like avalanches and earthquakes. These studies will not only help us understand granular materials themselves, but also help us predict certain types of natural disasters.

References

  1. Jaeger, H. M., Nagel, S. R., and Behringer, R. P. Granular Solids, Liquids and Gases. Rev. Mod. Phys., 1996, 68, No.4, 1259-1273.
  2. Frankenfield, J. Types of Spring and Summer Avalanches. http://www.mountain-guiding.com/avalanche/info/spring-types.html (accessed Oct. 29, 2015).
  3. Daerr, A. and Douady, S. Two Types of Avalanche Behaviour in Granular Media. Nature, 1999, 399, 241-243.
  4. Johnson P. A., et al. Effects of acoustic waves on stick–slip in granular media and implications for earthquakes. Nature, 2008, 451, 57-60.

Not Many Other Fish in the Sea: Our Current Overfishing Crises

Prevailing notions of the ocean make it seem “too big to fail”: it covers 70% of Earth’s surface and contains some 321,003,271 cubic miles of water.1 Additionally, most of the oceans’ immense biodiversity has yet to be documented; the Census of Marine Life estimates that between 178,000 and 10,000,000 different species live in ocean shoreline habitats, thanks in part to the vast abundance of photosynthesizing microbes.2 However, like any ecosystem, the oceans are not immune to anthropogenic and environmental stressors such as overfishing, climate change, and pollution. Many interconnected problems surround the way people currently treat the oceans. Extracting large amounts of fish for human consumption threatens both the dynamic balance that currently exists and scientists’ potential for making groundbreaking discoveries about what lies below.

In 2010, the United Nations reported that over 80% of the world’s fish stocks are fully exploited or overexploited, and thus “require effective and precautionary management.”3 Overexploitation refers to the extraction of marine populations to unsustainable levels.4 Fishing techniques have become dramatically more efficient since the Industrial Revolution, focusing on getting the largest catches in the fewest trips. Today’s fishing fleets are so large that it would require two to three times Earth’s supply of fish to fill them.4 These harmful practices lead to three main types of overfishing:

  1. Growth overfishing: The removal of larger fish leaves behind only individuals that are too small to maximize the yield, or full amount of fish that could theoretically be obtained.5
  2. Recruitment overfishing: When adult fish are excessively taken out of the ecosystem, recruitment and stock productivity decreases.5
  3. Ecosystem overfishing: The targeting of a particular species leads to serious trophic cascades and ecological consequences.5

Unfortunately, the most popularly consumed fish species are subject to all three practices. Bluefin tuna, sturgeon, sea bass, and Atlantic salmon are examples of large, long-lived predatory species that produce only a few surviving offspring each breeding cycle.5 For example, Bluefin tuna release ten million eggs each year, but only a small number survive to adulthood, and even then these tuna do not reach reproductive maturity until eight to twelve years of age.6 When the largest fish are specifically targeted, many ecological consequences arise. Removing the largest fish of the largest species in an ecosystem significantly decreases the mean size of that species, leaving only smaller fish to reproduce.7 This shift causes trophic level decline: as species at higher trophic levels are overfished, fishermen move on to catch the comparatively larger fish at lower trophic levels.7 The vicious cycle continues, and the average size of fish consumed decreases significantly. This phenomenon, known as “eating down the food chain,” puts many fish at risk, including the herbivorous fish of coral reef ecosystems.7 To maintain a coral-dominated state, herbivorous fish consume macro-algae that would otherwise overgrow and suffocate corals. When coral-dominated reefs are overtaken by macro-algae, habitats for many other fish and organisms are severely reduced. Over 25% of the world’s fish species live exclusively within these three-dimensional coral communities, which themselves take up only 0.1% of the ocean floor.5 Thus, not only are species being depleted at the very top of the food chain, but smaller species endemic to specific ocean environments are also indirectly experiencing survival pressure.

These problems are further magnified by the fact that current fishing practices produce a large amount of by-catch, the incidental capture of non-target species.5 The rustic image of a humble fisherman using a single hook at the end of a line no longer reflects reality for most commercial fishing. Longlines are now weighted at the bottom and can carry as many as 3,000 hooks, probing deeper into the water column.8 A similar weighted system exists for large fishing nets, known as trawl nets, so that shellfish and other small or bottom-dwelling organisms can be collected in larger quantities. Bottom trawling, the practice of dragging a trawl net across the ocean floor, has contributed to 95% of the damage inflicted on deep-water ecosystems by destroying and smothering benthic communities.9 These practices are non-specific in nature, and thus collect anything and everything that attaches or gets caught. Fishing gear alone has threatened around 20% of shark species with extinction and leads to over 200,000 loggerhead sea turtle deaths annually.10 Sylvia Earle, a renowned ocean conservationist, describes these unsustainable fishing practices as “using bulldozers to kill songbirds.”11

The United Nations now predicts that by 2050, the world will run out of commercially viable catches and the oceans could turn fishless.3 Driving this problem is the fact that seafood consumption has increased over the past 30 years.12 Many coastal communities and developing countries rely on fishing as their main source of income and protein; approximately 2.9 billion people rely on fish for over 20% of their animal protein intake. One of the largest importers, the United States, imports 91% of its seafood (by value) from countries with lower production costs.13 The cheap labor comes from subsistence fishermen, who meet the increased demand by opting for unsustainable practices. Consequently, a “poverty cycle” emerges in which short-term survival takes precedence over sustainability and conservation, further exacerbating ecological and economic damage.14

Recognizing that environmental considerations alone could put many developing countries at risk, policymakers have adopted a community-based approach to the planning, construction, implementation, and management of preservation policies.15 This ecosystem approach to fisheries strives to ensure that the capability of aquatic ecosystems to provide the resources necessary for human life is maintained for present and future generations.16

The establishment of Marine Protected Areas (MPAs) is another effective technique, similar to the National Park Service’s preservation programs. Although MPAs vary widely in their management plans and enforcement, all strive to limit or restrict human activity so that natural populations can recover.5 Allowing an environment to restore its fish populations without human mitigation can take a long time, and the most effective MPAs extend across large areas that can more fully encompass fish populations and migratory species.5 Because these areas often overlap with highly profitable fishing zones, MPAs are regularly met with backlash from coastal communities and can be hard to enforce.17

These international efforts to reduce the amount of seafood extracted from ocean environments are generally invisible in a grocery store, so it is easy for consumers to engage passively with the food they see. However, recognizing the production, labor, and ecosystem that goes into fish and fish products (and all foods) is critical for maintaining the livelihood of the world’s natural environments. The ocean may seem vast, but there is not an infinite supply of resources that can meet current demands.

References

  1. National Oceanic and Atmospheric Administration. http://oceanservice.noaa.gov/facts/oceanwater.html (accessed Oct. 31, 2015).
  2. Smithsonian Institute. http://ocean.si.edu/census-marine-life (accessed Nov. 1, 2015).
  3. Resumed Review Conference on the Agreement Relating to the Conservation and Management of Straddling Fish Stocks and Highly Migratory Fish Stocks; United Nations: New York, 2010.
  4. Marine Biodiversity and Ecosystem Functioning. http://www.marbef.org/wiki/over_exploitation (accessed Oct. 31, 2015).
  5. Sheppard, C.; David, S.; Pilling, G. The Biology of Coral Reefs, 1st ed.; Oxford University Press, 2009.
  6. World Wildlife Fund. http://wwf.panda.org/what_we_do/endangered_species/tuna/atlantic_bluefin_tuna/ (accessed Nov. 1, 2015)
  7. Pauly, D., et al. Science. 1998, 279, 860-863.
  8. Food and Agriculture Organization. http://www.fao.org/fishery/fishtech/1010/en (accessed Feb. 25, 2016).
  9. The Impacts of Fishing on Vulnerable Marine Ecosystems; General Assembly of the United Nations: Oceans and the Law of the Sea Division, 2006.
  10. Monterey Bay Aquarium. http://www.seafoodwatch.org/ocean-issues/wild-seafood/bycatch. (accessed Oct. 31, 2015).
  11. Saeks, Diane Dorrans. US oceanographer Dr. Sylvia Earle. Financial Times, Aug. 9, 2013.
  12. The State of World Fisheries and Aquaculture; Food and Agriculture Organization; United Nations: Rome 2014.
  13. Gross, T. ‘The Great Fish Swap’: How America Is Downgrading Its Seafood Supply. National Public Radio, Jul. 1, 2014.
  14. Cinner, J. et al. Current Biology. 2009. 19.3, 206-212.
  15. Agardy, T. M. ; Information Needs for Marine Protected Areas: Scientific and Societal; 66.3; Bulletin of Marine Science, 2000; 875-878.
  16. Food and Agriculture Organization. http://www.fao.org/fishery/topic/13261/en (accessed Nov. 1, 2015).
  17. Agardy, T.M.; Advances in Marine Conservation: The Role of Marine Protected Areas; 9.7; Trends in Ecology and Evolution, 1994; 267-270.

 

Nomming on Nanotechnology: The Presence of Nanoparticles in Food and Food Packaging

Nanotechnology is found in a variety of sectors—drug administration, water filtration, and solar technology, to name a few—but what you may not know is that nanotechnology could have been in your last meal.

Over the last ten years, the food industry has been utilizing nanotechnology in a multitude of ways.1 Nanoparticles can increase the opacity of food coloring, make white foods appear whiter, and even prevent ingredients from clumping together.1 Packaging companies now utilize nano-sized clay particles to make bottles that are less likely to break and better able to retain carbonation.2 Though nanotechnology has proven useful to the food industry, some items containing nanoparticles have not undergone any safety testing or labeling. As more consumers learn about nanotechnology’s presence in food, many are asking whether it is safe.

Since the use of nanotechnology is still relatively new to the food industry, many countries are still developing regulations and testing requirements. The FDA, for example, currently requires food companies that utilize nanotechnology to provide proof that their products won’t harm consumers, but it does not require specific tests proving that the nanotechnology used in the products is itself safe.2 This gap is problematic because, while previous studies have shown that direct contact with certain nanoparticles can be harmful to the lungs and brain, much is still unknown about the effects of most nanoparticles. It is also unclear whether nanoparticles in packaging can transfer to the food itself. With so many uncertainties, a Washington, D.C.-based activist group called Friends of the Earth is advocating for a ban on all use of nanotechnology in the food industry.2

However, the situation may not require such drastic measures. The results of a study last year published in the Journal of Agricultural Economics show that the majority of consumers would not mind the presence of nanotechnology in food if it makes the food more nutritious or safe.3 For example, one of the applications of nanotechnology within the food sector focuses on nanosensors, which reveal the presence of trace contaminants or other unwanted microbes.5 Additionally, nanomaterials could be used to make more impermeable packaging that could protect food from UV radiation.5

Nanotechnology could also be applied to water purification, nutrient delivery, and fortification with vitamins and minerals.5 Water filters that utilize nanotechnology incorporate carbon nanotubes and alumina fibers into their structure, which allows microscopic sediment and contaminants to be removed from the water.6 Additionally, nanosensors made with titanium oxide nanowires, which can be functionalized to change color on contact with certain contaminants, can help identify what kind of sediment is being removed.6 Encapsulating nutrients at the nanoscale, especially in lipid- or polymer-based nanoparticles, increases their absorption and circulation within the body.7 Encapsulating vitamins and minerals within nanoparticles also slows their release from food, so that absorption occurs at the optimal stage of digestion.4 Coatings containing nano-sized nutrients are also being applied to foods to increase their nutritional value.7 There are, therefore, many useful applications of nanoparticles that consumers have already shown they support.

While testing and research are ongoing, nanotechnology is already making food safer and healthier for consumers. The FDA is currently studying the efficacy of nanotechnology in food under the 2013 Nanotechnology Regulatory Science Research Plan. Though the study has not yet been completed, the FDA has stated that in the interim it “supports innovation and the safe use of nanotechnology in FDA-regulated products under appropriate and balanced regulatory oversight.”8,9 As nanotechnology becomes commonplace, consumers can expect to see an increase in its application to food and food packaging in the near future.

References

  1. Ortiz, C. Wait, There's Nanotech in My Food? http://www.popularmechanics.com/science/health/a12790/wait-theres-nanotechnology-in-my-food-16510737/ (accessed November 9, 2015).
  2. Biello, D. Do Nanoparticles in Food Pose a Health Risk? http://www.scientificamerican.com/article/do-nanoparticles-in-food-pose-health-risk/ (accessed October 1, 2015).
  3. Yue, C., Zhao, S. and Kuzma, J. Journal of Agricultural Economics. 2014. 66: 308–328. doi: 10.1111/1477-9552.12090
  4. Sozer, N., & Kokini, J. Trend Biotechnol. 2009. 27(2), 82-89.
  5. Duncan, T. J. Colloid Interface Sci. 2011. 363(1), 1-24.
  6. Inderscience Publishers. (2010, July 28). Nanotechnology for water purification. ScienceDaily. (accessed March 3, 2016)
  7. Srinivas, P. R., Philbert, M., Vu, T. Q., Huang, Q., Kokini, J. L., Saos, E., … Ross, S. A. (2010). Nanotechnology Research: Applications in Nutritional Sciences. The Journal of Nutrition, 140(1), 119–124.
  8. U.S. Food and Drug Administration. http://www.fda.gov/ScienceResearch/SpecialTopics/Nanotechnology/ucm273325.htm (accessed November 9, 2015).
  9. U.S. Food and Drug Administration. (2015). http://www.fda.gov/ScienceResearch/SpecialTopics/Nanotechnology/ucm301114.htm (accessed November 9, 2015).

Megafires

In 2015, American forests were ravaged by larger and more destructive fires than ever before. One of the most devastating wildfires occurred in Washington State and burned over 250,000 acres of forest at a rate of 3.8 acres per second.1 Researchers have justifiably coined the term “megafires” for these unprecedented burns of over 100,000 acres.2 Unfortunately, megafires are becoming an increasingly common feature of the American West.

Although forest fires are a natural and essential part of a forest’s life cycle, scientific records show a worrisome trend. Data from the National Climatic Data Center in Asheville, North Carolina indicate that recent fires burn twice the forest acreage that wildfires did 40 years ago.3 In contrast to replenishing wildfires that promote forest growth, megafires scorch the landscape, preventing forest regeneration and leaving wastelands in their wake.2 In other words, they burn forests so completely that trees are unable to regrow.4 The increased incidence of megafires accordingly threatens to cause lasting environmental change, particularly in the Western United States.5 Once-rich forests are now in danger of depletion and extinction as they give way to grasslands and shrubs. Even the hardy Ponderosa Pine, previously considered exceptionally fire-resistant, is succumbing to megafires.5

What is the future of our forests, and what can we, as custodians of our natural lands, do to shape this future? Can we prevent megafires? Understanding the contributing causes of megafires is essential in devising a solution to prevent them. Current thinking by various ecologists identifies three primary causative factors, both behavioral and environmental: new firefighting strategies, the rise of invasive species, and climate change.1

Government policies that promote aggressive suppression of forest fires are deceptive in their benefits. Firefighters have become incredibly efficient at locating and extinguishing wildfires before they become too destructive. However, certain flame- and temperature-resistant tree species, such as the pines of the Pine Barrens, the Lodgepole Pine, and Eucalyptus, require periodic fires in order to reproduce.6 When facing wildfires, these trees survive while other plant species perish. Since flame-resistant tree species are often native to forest ecosystems, their selective survival maintains the forest’s composition over time and prevents shrubs and grasslands from overrunning the ecosystem. Flame-resistant trees accomplish their phoenix-like regeneration and self-sustainability by releasing their seeds during a wildfire. In addition, forest fires destroy flora that would otherwise compete with new seedlings for space and light. This regenerative effect of forest fires has even resulted in the return of certain endangered tree species.4 One example, the Jack Pine, stores its seeds in resin-sealed cones that open in the heat of a fire. A policy of extinguishing fires prematurely can inhibit seed release, threatening Jack Pine forests and others like them.6 To date, aggressive fire-fighting policies have led to significant changes in forest composition, accompanied by a buildup of tinder and debris on the forest floor. This accumulated undergrowth now fuels megafires that burn with unparalleled intensity and speed. In contrast, forest management policies that revert to the practice of allowing small, controlled fires to clear away debris would support the forest’s long-term survival.

Invasive, flame-susceptible species provide the perfect fuel for megafires. During the westward expansion of the 1880s, settlers were not the only ones to achieve Manifest Destiny; several species of grass also made the journey. The most common of these was Cheatgrass, a grass native to Europe, southwestern Asia, and northern Africa.7 Cheatgrass was inadvertently brought to the Americas on cargo ships in the 1800s and has been a significant environmental problem ever since. Its short life cycle and prolific seed production cause it to dry out by mid-June, so it serves as kindling for fires during the summer. Cheatgrass increases the size and severity of fires, since it burns twice as much as the native vegetation.7 As native vegetation is slowly choked out, the landscape of the American West is transitioning into a lawn of this invasive species, poised to erupt into an inferno.

Global warming, one of the environmental causes of megafires, is perhaps an even more critical and challenging threat than invasive species. In 2015, forest fires ravaged more than 9 million acres of the Western mainland United States and Alaska.3 Studies of global warming suggest that every degree Celsius of atmospheric warming is accompanied by roughly a four-fold increase in the area of forest destruction, so the rise in global temperature is directly associated with the prevalence of megafires.8 Since the beginning of the twentieth century, the average temperature of the planet has increased by 0.6 degrees Celsius, much of it in recent decades.9 Severe fires typically burn less often at higher altitudes, due to cooler temperatures and greater moisture, but as global temperatures rise these areas become drier and more prone to forest fires. This warming contributes to massive burns fueled by centuries of accumulated forest debris and undergrowth.9 Climate change also reduces precipitation, which further expands and intensifies forest fires. Wildfires themselves contribute to climate change in turn: as they burn, they emit greenhouse gases, accelerating global warming.9
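
Taken at face value, that four-fold-per-degree figure implies burned area scales roughly as 4 raised to the power of the warming in degrees Celsius. The back-of-the-envelope Python sketch below does nothing more than that arithmetic; it illustrates the cited claim and is not a climate model.

    # Back-of-the-envelope scaling of the claim quoted above: each degree Celsius of
    # warming multiplies the area of forest destruction by about four, so the
    # relative burned area grows as 4 ** delta_t. Illustrative only.

    def burned_area_multiplier(delta_t_celsius: float) -> float:
        return 4.0 ** delta_t_celsius

    for delta_t in (0.6, 1.0, 2.0):
        print(f"+{delta_t:.1f} C of warming -> ~{burned_area_multiplier(delta_t):.1f}x burned area")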

Ultimately, poor policy choices are creating a destructive cycle that catalyzes megafires. Finding long-term solutions to prevent megafires will require policy adjustments at the regional, national, and international levels.6 Policies are already changing to endorse smaller, controlled burns that limit the buildup of megafire fuel. As more data about global warming emerge, efforts are being made to find more renewable forms of energy such as solar and wind.9 Ideally, this shift in resources will limit the increase in global temperatures and reduce the risk of megafires. Lastly, research is underway to develop grasses that can outcompete the problematic Cheatgrass.7 If we can meet these challenges, then megafires may finally be extinguished.

References

  1.  Why we have such large wildfires this summer. http://www.seattletimes.com/seattle-news/northwest/why-we-have-such-damaging-wildfires-this-summer/ (accessed Oct. 9, 2015).
  2.  National Geographic: How Megafires Are Remaking American Forests. http://news.nationalgeographic.com/2015/08/150809-wildfires-forest-fires-climate-change-science/ (accessed Oct. 11, 2015)
  3. Climate Central: The Age of Western Wildfires. http://www.climatecentral.org/news/report-the-age-of-western-wildfires-14873 (accessed Oct. 9, 2015)
  4. Deadly forest fire leads to resurrection of endangered tree. http://blogs.scientificamerican.com/extinction-countdown/deadly-forest-fire-leads-to-resurrection-of-endangered-tree/ (accessed Oct. 9, 2015)
  5. Rasker, The Solutions Journal 2015, 55-62.
  6. NPR: Why Forest-Killing Megafires Are The New Normal. http://www.npr.org/2012/08/23/159373770/the-new-normal-for-wildfires-forest-killing-megablazes (accessed Oct. 11, 2015)
  7. Keeley, International Journal of Wildland Fire, 2007, 16, 96–106
  8. Stephens, Frontiers in Ecology and the Environment 2014, 12, 115-122.
  9. Climate Central: Study Ties Warming Temps to Uptick in Huge Wildfires. http://www.climatecentral.org/news/warming-huge-wildfire-outbreaks-19521 (accessed Oct. 21, 2015)

Homo Naledi – A New Piece in the Evolutionary Puzzle

Human beings share 96% of their genome with chimpanzees,1 which is why modern science has accepted the concept that humans and apes share a recent common ancestor. However, our understanding of the transition from these ancient primates to the bipedal, tool-wielding species that conquered the globe is less clear than many realize. One crucial missing chapter in the evolutionary story is the origin of our very own genus, Homo. Scientists believe that somewhere between two and three million years ago, the hominid species Australopithecus afarensis evolved into the first recognizably human species, Homo erectus. However, the details of this genealogical shift have remained a mystery. In 2013, a discovery made in the Rising Star cave by two recreational cavers may have provided revolutionary insight into this intractable problem.

The Rising Star cave lies 30 miles outside the city of Johannesburg in northern South Africa. A popular destination for spelunkers for the past 50 years, this cave is well known and has been extensively mapped.2 Two years ago, Steven Tucker and Rick Hunter dropped into the Rising Star cave in an effort to discover new extensions to the cave, hoping to find something more.2 They found a tight, previously unexplored crevice, which led to a challenging forty-foot drop through a chute. At the bottom, Hunter and Tucker came across scattered bones and fossils in what would later be named the Dinaledi chamber.2 They consulted Dr. Lee Berger, a paleoanthropologist at the University of the Witwatersrand, and it was clear to Dr. Berger that these fossils were not of modern humans: an ancient hominid species had been discovered.2

Within weeks of this discovery, Dr. Berger assembled a qualified team and set up camp at the mouth of the Rising Star cave. In the largest single hominid fossil find in Africa, over one thousand bones from multiple individuals were extracted and analyzed.2

As the fossils were being transferred out of the cave, paleoanthropologists at the surface worked to piece together a skeleton. Some aspects of this species’ bone structure were distinctly human, like the long thumbs, long legs, and arched feet.2 Other features, including curved fingers and a flared pelvis, were indicative of a more primitive animal.2 A large skull fragment from above the left eye of one of the skeletons allowed scientists to definitively determine this hominid’s genus.

The Australopithecus skull is characterized by a large orbital ridge above the eye, with a deep concavity behind it, leading to a flatter face with pronounced brows.3 The skull fragment collected by the team, however, had a shorter ridge and less of an indentation above the frontal lobe.3 This finding led the team to conclude that they had discovered a new member of the Homo genus, which Dr. Berger named Homo naledi. ‘Naledi’ in the Sotho language means ‘star,’ a reference to the vivid stalactites emanating from the ceiling of the Dinaledi chamber.3

Dr. Berger’s discovery in the Rising Star cave was an incredible breakthrough, but finding fossils is only half the battle. The next step is to find a place for this species in the million-year narrative of human evolution we have created.

In accomplishing this feat, a logical place to start is to consider how the fossils of Homo naledi ended up in their final resting place. There were no signs of predation, and no other animal fossils were found at this location. In addition, the fossils accumulated gradually, meaning that the individuals did not all die in a single event. Dr. Berger postulated that the bodies were placed there with purpose, but intentional body disposal is an advanced social behavior which, up to this point, has only been exhibited by more evolved Homo species. The brain size of the discovered hominids is estimated at between 450 and 550 cubic centimeters, about one third the size of a Homo sapiens brain and only marginally larger than that of a chimpanzee.3 The possibility of such a small-brained animal engaging in intentional body disposal challenges ideas about the cognitive abilities necessary for such advanced social behavior. Dr. William Jungers, chair of anatomical sciences at Stony Brook University, argues that advanced social intelligence was not likely at play in this instance. He claims that “intentional corpse disposal is a nice sound bite, but more spin than substance […] dumping conspecifics down a hole may be better than letting them decay around you.”4

The idea of intentional body disposal is not the only one of Dr. Berger’s conclusions to attract criticism. Some in the scientific community argue that Homo naledi is a distant cousin, not a direct ancestor, of modern humans. Others, like UC Berkeley’s Dr. Tim White, argue that "new species should not be created willy-nilly" and believe that these discoveries may simply be fossils of Homo erectus.5 Biologist Dr. David Menton takes the small brain size of these hominids, as well as their “sloped face” and “robust mandible,” as an indication that Homo naledi does not even belong in the Homo genus.6

It is clear that while the Homo naledi fossils are extremely significant in the scientific community, their placement within the story of human evolution is contentious. Our inability to definitively date the fossils makes the task even more challenging. However, Homo naledi’s unique mosaic of human and ape-like features provides support for a new model of human evolution that has recently gained traction in the scientific community. While scientists would prefer to draw a family tree of human ancestors with modern humans at the top, our evolution is not so simple. Dr. Berger likens the reality of evolution to a braided stream.2 Like a collection of tributaries all contributing to a river basin, humans may have been the product of a collection of human ancestors, each contributing to our existence differently. We may never fully understand where we came from, but discoveries like Homo naledi bring us a little bit closer to completing the evolutionary puzzle.

References

  1. Spencer, G. New Genome Comparison Finds Chimps, Humans Very Similar at the DNA Level. National Human Genome Research Institute [Online], August 31, 2005. https://www.genome.gov/15515096 (accessed March 1st, 2016)
  2. Shreeve, J. This Face Changes the Human Story. But How? National Geographic [Online], September 10, 2015. http://news.nationalgeographic.com/2015/09/150910-human-evolution-change/ (accessed January 17, 2016)
  3. Berger, L. R. et al. ELife [Online] 2015, 4. http://elifesciences.org/content/4/e09560 (accessed January 16, 2016)
  4. Bascomb, B. Archaeology's Disputed Genius. PBS NOVA NEXT [Online], September 10, 2015. http://www.pbs.org/wgbh/nova/next/evolution/lee-berger/ (accessed January 19, 2016)
  5. Stoddard, E. Critics question fossil find, but South Africa basks in scientific glory. UK Reuters [Online], September 16, 2015. http://uk.reuters.com/article/us-safrica-fossil-idUKKCN0RG0Z120150916 (accessed January 19, 2016)
  6. Mitchell, E. Is Homo naledi a New Species of Human Ancestor? Answers in Genesis [Online], September 12, 2015. https://answersingenesis.org/human-evolution/homo-naledi-new-species-human-ancestor/ (accessed January 17, 2016)

Nano-Materials with Giga Impact

What material is so diverse that it has applications in everything from improving human lives to protecting the earth? Few materials are capable of both treating widespread diseases like diabetes and creating batteries that last orders of magnitude longer than industry standards. None is as thin, lightweight, and inexpensive as the carbon nanotube.

Carbon nanotubes are molecular cylinders made entirely of carbon atoms, which form a hollow tube just a few nanometers thick, as illustrated in Figure 1. For perspective, a nanometer is roughly one hundred-thousandth the width of a human hair.1 The first multi-walled nanotubes (MWNTs) were discovered by L. V. Radushkevich and V. M. Lukyanovich of Russia in 1951.2 Morinobu Endo first observed single-walled nanotubes (SWNTs) in 1976, although the discovery is commonly attributed to Sumio Iijima at NEC of Japan in 1991.3,4

Since their discovery, nanotubes have been the subject of extensive research by universities and national labs for the variety of applications in which they can be used. Carbon nanotubes have proven to be an amazing material, with properties that surpass those of existing alternatives such as platinum, stainless steel, and lithium-ion cathodes. Because of their unique structure, carbon nanotubes are revolutionizing the fields of energy, healthcare, and the environment.

Energy

One of the foremost applications of carbon nanotubes is in energy. Researchers at the Los Alamos National Laboratory have demonstrated that carbon nanotubes doped with nitrogen can be used to create a chemical catalyst. The process of doping involves substitution of one type of atom for another; in this case, carbon atoms were substituted with nitrogen. The synthesized catalyst can be used in lithium-air batteries which can hold a charge 10 times greater than that of a lithium-ion battery. A key parameter in the battery’s operation is the Oxygen Reduction Reaction (ORR) activity, which is a measure of a chemical species’ ability to gain electrons. The ORR activity of the nitrogen-doped material complex is not only the highest of any non-precious metal catalyst in alkaline media, but also higher than that of precious metals such as platinum.5

In another major development, Dr. James Tour of Rice University has created a graphene-carbon nanotube complex upon which a “forest” of vertical nanotubes can be grown. The graphene base is a single, flat sheet of carbon atoms, essentially a carbon nanotube “unrolled.” The height-to-base ratio of this complex is comparable to that of a house on a standard-sized plot of land extending into space.6 The graphene and nanotubes are joined at their interface by heptagonal carbon rings, giving the structure an enormous surface area of 2,000 m2 per gram and allowing it to serve as a high-potential storage mechanism in fast supercapacitors.7

Healthcare

Carbon nanotubes also show immense promise in the field of healthcare. Take Michael Strano of MIT, who has developed a sensor composed of nanotubes embedded in an injectable gel that can detect several molecules. Notably, it can detect nitric oxide, an indicator of inflammation, and blood glucose, which diabetics must continuously monitor. The sensors take advantage of carbon nanotubes’ natural fluorescence: when the tubes are complexed with a molecule that binds a specific target, their fluorescence increases or decreases upon binding.8

Perhaps the most important potential application for carbon nanotubes in healthcare lies in fighting cancer. Human cells carry a gene called HER2 that helps regulate cell growth and proliferation. Normal cells have two copies of this gene, but 20-25% of breast cancer cells have three or more copies, resulting in quickly growing tumor cells. Approximately 40,000 U.S. women are diagnosed every year with this type of breast cancer. Fortunately, Huixin He of Rutgers University and Yan Xiao of the National Institute of Standards and Technology have found that they can attach an anti-HER2 antibody to carbon nanotubes to kill these cells, as shown in Figure 2. Once the complex is introduced into the body, near-infrared light at a wavelength of 785 nm can be reflected off the antibody-nanotube complex to reveal where tumor cells are present. The wavelength is then increased to 808 nm, at which point the nanotubes absorb the light and vibrate, releasing enough heat to kill any attached HER2-positive tumor cells. This process has shown a near 100% success rate and leaves normal cells unharmed.9

Environment

Carbon nanotube technology also has environmental applications. Hui Ying Yang from Singapore has developed a water-purification membrane made of plasma-treated carbon nanotubes that can be integrated into portable, rechargeable, and inexpensive purification devices the size of a teapot. These new purification devices are ideal for developing countries and remote locations, where large industrial purification plants would be too energy- and labor-intensive. Unlike other portable devices, this rechargeable device uses a membrane system that does not require a continuous power source, does not rely on thermal processes or reverse osmosis, and can filter out organic contaminants found in brine water, the most common water supply in these developing and rural areas.10

Oil spills may no longer be such devastating disasters either. Bobby Sumpter of the Oak Ridge National Laboratory demonstrated that doping carbon nanotubes with boron atoms alters the curvature of the tubes: forty-five-degree angles form, leading to a sponge-like structure of nanotubes. Because the tubes are made of carbon, they attract hydrocarbons and repel water due to their hydrophobic properties, allowing them to absorb up to 100 times their weight in oil. Additionally, the tubes can be reused, as burning or squeezing them was shown to cause no damage. Sumpter and his team used an iron catalyst in the growth process of the carbon nanotubes, enabling a magnet to easily control or remove the tubes in an oil-cleanup scenario.11

Carbon nanotubes provide an incredible opportunity to impact areas of great importance to human life - energy, healthcare, and environmental protection. The results of carbon nanotube research in these areas demonstrate the remarkable properties of this versatile and effective material. Further studies may soon lead to their everyday appearance in our lives, whether in purifying water, fighting cancer, or even making the earth a better, cleaner place for everyone. Big impacts can certainly come in small packages.

References

  1. Nanocyl. Carbon Nanotubes. http://www.nanocyl.com/CNT-Expertise-Centre/Carbon-Nanotubes (accessed Sep 12, 2015).
  2. Monthioux, M.; Kuznetsov, V. Guest Editorial: Who should be given the credit for the discovery of carbon nanotubes? Carbon 44. [Online] 2006. 1621. http://nanotube.msu.edu/HSS/2006/1/2006-1.pdf (accessed Nov 15, 2015)
  3. Ecklund, P.; Ugliengo, P.; et al. In International Assessment of Carbon Nanotube Manufacturing and Applications, Proceedings of the World Technology Evaluation Center, Inc., Baltimore, MD, June 2007.
  4. Nanogloss. The History of Carbon Nanotubes – Who Invented the Nanotube? http://nanogloss.com/nanotubes/the-history-of-carbon-nanotubes-who-invented-the-nanotube/#axzz3mtharE9D (accessed Sep 14, 2015).
  5. Understanding Nano. Economical non-precious-metal catalyst capitalizes on carbon nanotubes. http://www.understandingnano.com/catalyst-nitrogen-carbon-nanotubes.html (accessed Sep 17, 2015).
  6. Understanding Nano. James’ bond: A graphene/nanotube hybrid. http://www.understandingnano.com/graphene-nanotube-electrode.html (accessed Sep 19, 2015).
  7. Yan, Z. et al. Toward the Synthesis of Wafer-Scale Single-Crystal Graphene on Copper Foils. ACS Nano 2012, 6 (10), 9110–9117.
  8. Understanding Nano. New implantable sensor paves way to long-term monitoring. http://www.understandingnano.com/carbon-nanotubes-implant-sensor.html (accessed Sep 20, 2015).
  9. Understanding Nano. Combining Nanotubes and Antibodies for Breast Cancer 'Search and Destroy' Missions. http://www.understandingnano.com/nanomedicine-nanotubes-breast-cancer.html (accessed Sep 22, 2015).
  10. Understanding Nano. Plasma-treated nano filters help purify world water supply. http://www.understandingnano.com/nanotube-membranes-water-purification.html (accessed Sep 24, 2015).
  11. Sumpter, B. et al. Covalently bonded three-dimensional carbon nanotube solids via boron induced nanojunctions. Nature [Online] 2012, doi: 10.1038/srep00363. http://www.nature.com/articles/srep00363 (accessed Mar 06, 2016).
  12. Huixin, H.; et al. Anti-HER2 IgY antibody-functionalized single-walled carbon nanotubes for detection and selective destruction of breast cancer cells. BMC Cancer 2009, 9, 351.

Comment

Mars Fever

The Greeks called it the star of Ares. For the Egyptians, it was the Horus of the Horizon. Across many Asian cultures, it was called the Fire Star. Mars has been surrounded by mystery from the time of ancient civilizations to the recent discovery of water on the planet’s surface.1 But why have humans around the world and throughout history been so obsessed with the tiny red planet?

Modern fascination with Mars took hold in the late 19th century, when Italian astronomer Giovanni Schiaparelli first observed canali, or channels, on the planet’s surface. Yet canali was mistakenly translated as “canals,” a word that implies artificial construction.2 This led many to believe that some sort of intelligent life existed on Mars and that these canals had been engineered for its survival. While the lines were later found to be optical illusions, the canali revolutionized the way people viewed Mars. For perhaps the first time in history, it seemed that humans might not be alone in the universe. Schiaparelli had unintentionally sparked what became known as “Mars fever,” and indirectly influenced our desire to study, travel to, and even colonize Mars for more than 100 years. In Cosmos, Carl Sagan memorably described this odd fascination: “Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears.”

He was right.

After the canali misunderstanding, people began to believe that there was not only life on Mars, but intelligent life. The Mars craze escalated with the rise of science fiction, especially the publication of H.G. Wells’s classic Martian invasion novel The War of the Worlds.2,3 In 1938, the novel was adapted into a radio broadcast narrated by actor Orson Welles. The broadcast reportedly incited mass terror, as listeners mistook the fictional drama for news of an impending alien Armageddon, though later accounts suggest the extent of the panic was exaggerated.3 This is just one of many instances in which ordinary events have been mistaken for extraterrestrial contact. The public image of Mars quickly evolved to reflect a mystical red landscape inhabited by intelligent, antagonistic, green creatures. Mars fever was becoming contagious.

As decades passed, it became increasingly clear that Mars contained no tiny green men and that there were no flying saucers coming to colonize the Earth. The Mariner missions found no evidence of life on Mars, and as a result, Mars fever took on a new form: without the threat of intelligent, alien life forms, who was to stop us from colonizing the Red Planet? After all, perhaps the destruction of Earth wouldn’t be caused by invaders, but by earthlings themselves. Many contemporary science fiction writers focus on this idea of a second Earth in their stories. Award-winning novelist Michael Swanwick says, "We all are running out of a lot of different minerals, some of which our civilization depends on … There is a science-fiction idea for you."4 With natural resources dwindling and pollution on the rise, Earth might need a replacement.5 Mars’ relative similarity and proximity to Earth make it a strong candidate.

Rocket scientist Wernher von Braun even wrote The Mars Project, a book outlining a Martian colonization fleet that would be assembled in Earth orbit.6 It was a proposition of massive proportions, calling for $500 million in rocket fuel alone and for human explorers rather than rovers such as NASA’s later Opportunity and Curiosity.6 These colonization efforts are not simply fictional, however. Elon Musk, CEO of Tesla and founder of the privately funded spaceflight company SpaceX, has put intense effort into interplanetary travel, particularly to Mars, though his plans remain abstract.7 Mars One has a similar goal of establishing a Martian colony. While not an aerospace company, Mars One serves as a logistical center for carrying out such a mission, focusing primarily on funding and organization and leaving systems construction to more established aerospace companies.8 While both SpaceX and Mars One are dedicated to the cause of Martian colonization, it is evident that neither organization will be able to accomplish such a mission any time soon.

The possible approaches to colonizing Mars seem endless, ranging from pioneering the landscape with 3D-printable habitats to harvesting remnants of water from the Martian soil. But the challenges arguably outstrip current technology. In order to survive, humans would need space suits that could protect against extreme temperature differentials.5 Once on the surface, astronauts would need to establish food sources that are both sustainable and suitable for long-term missions.9 Scientists would also need to safeguard the mental health of astronauts spending more time in space than any human in history. Beyond these basic necessities, factors like harmful cosmic rays and the sheer cost of such a mission must also be considered.10

The highly improbable nature of Mars exploration and colonization only seems to add fuel to the fire of humanity’s obsession. In spite of the challenges associated with colonization, Mars fever persists. Though Mars lies, on average, 225 million kilometers from Earth, it has piqued human curiosity across civilizations. Schiaparelli and his contemporaries could only dream of the possibilities that dwelled in Mars’s “canali.” Today, however, exploration of Mars is no longer the stuff of science fiction. This is a new era of making the impossible possible, from Neil Armstrong’s “giant leap for mankind” to the establishment of the International Space Station. We are closer to Mars than ever before, and in the coming years we might just unveil the mystery behind the Red Planet.

References

  1. Mars Shows Signs of Having Flowing Water, Possible Niches for Life, NASA Says, http://www.nytimes.com/2015/09/29/science/space/mars-life-liquid-water.html?_r=1, (accessed September 28, 2015)
  2. A Short History of Martian Canals and Mars Fever, http://www.popularmechanics.com/space/moon-mars/a17529/a-short-history-of-martian-canals-and-mars-fever/, (accessed September 28, 2015)
  3. The Myth of the War of the Worlds Panic, http://www.slate.com/articles/arts/history/2013/10/orson_welles_war_of_the_worlds_panic_myth_the_infamous_radio_broadcast_did.html, (accessed October 10, 2015)
  4. Why Colonize Mars? Sci-Fi Authors Weigh In., http://www.space.com/29414-mars-colony-science-fiction-authors.html (accessed Jan 30, 2016)
  5. Here’s why humans are so obsessed with colonizing Mars, http://qz.com/379666/heres-why-humans-are-so-obsessed-with-colonizing-mars/, (accessed Oct 10, 2015)
  6. Humans to Mars, http://history.nasa.gov/monograph21.pdf, (accessed Oct 10, 2015)
  7. SpaceX's Elon Musk to Reveal Mars Colonization Ideas This Year, http://www.space.com/28215-elon-musk-spacex-mars-colony-idea.html, (accessed October 10, 2015)
  8. About Mars One, http://www.mars-one.com/about-mars-one (accessed Jan 30, 2016)
  9. Talking to the Martians, http://www.popsci.com/martians, (accessed September 28, 2015)
  10. Will We Ever Colonize Mars?, http://www.space.com/30679-will-humans-ever-colonize-mars.html (accessed Jan 30, 2016)

Comment

Delving into a New Kind of Science

Since ancient times, humans have attempted to create models to explain the world. These explanations were stories, mythologies, religions, philosophies, metaphysics, and various scientific theories. Then, about three centuries ago, scientists revolutionized our understanding with a simple but powerful idea: applying mathematical models to make sense of our world. Ever since, mathematical models have come to dominate our approach to knowledge, and scientists have utilized complex equations as viable explanations of reality.

Stephen Wolfram’s A New Kind of Science (NKS) suggests a new way of modeling worldly phenomena. Wolfram postulates that elaborate mathematical models are not the only representations of the mechanisms governing the universe; simple rules may lie behind some of the most complex phenomena. To illustrate this, he begins with cellular automata.

A cellular automaton is a set of colored blocks in a grid that is built up stage by stage. The color of each block is determined by a set of simple rules that consider the colors of the blocks in the preceding stage.1 On this description alone, cellular automata seem fairly simple, but Wolfram illustrated their hidden complexity with rule 30. Although it follows the simple rule illustrated in Figure 1, this cellular automaton produces a pattern too irregular and complex for even the most sophisticated mathematical and statistical analysis. By applying NKS fundamentals, simple rules over these basic building blocks can be combined to produce extremely complex structures and models.2
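
To make this idea concrete, the rule 30 update can be written in just a few lines of code. The following is a minimal Python sketch, not Wolfram’s own implementation; it assumes a finite row of cells with wrap-around edges, and uses the fact that each cell’s three-cell neighborhood can be read as a 3-bit number whose corresponding bit in the rule number (30) gives the cell’s next color.

```python
def step(cells, rule=30):
    """Advance one row of 0/1 cells by one generation of an elementary cellular automaton."""
    n = len(cells)
    new_row = []
    for i in range(n):
        left = cells[(i - 1) % n]      # wrap around at the edges
        center = cells[i]
        right = cells[(i + 1) % n]
        # The neighborhood (left, center, right) forms a 3-bit number 0-7;
        # the corresponding bit of the rule number is the cell's next state.
        neighborhood = (left << 2) | (center << 1) | right
        new_row.append((rule >> neighborhood) & 1)
    return new_row

# Start from a single black cell and print a few generations.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join("#" if cell else "." for cell in row))
    row = step(row)
```

Even these few generations, grown from a single black cell, already display the irregular, seemingly random triangle of cells that resists conventional analysis.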

By studying several cellular automaton systems, Wolfram presents two important ideas: complexity can result from simple rules, and complex rules do not always produce complex patterns.2

The first point is illustrated by the computer. Relying on Boolean logic, the manipulation of combinations of true (1) and false (0) values, computers can perform complex computations, and with the proper extensions they can display images, play music, or even simulate entire worlds in video games. The intuition this suggests, that complexity can only arise from complexity, is not necessarily true: Wolfram shows again and again that simple rules can produce immense randomness and complexity.

Other natural phenomena support this theory. The patterns on mollusk shells resemble the patterns generated by cellular automata, suggesting that the shells follow similarly simple rules during pattern creation.2 Perhaps other biological complexities are also the result of simple rules. Efforts are being made to understand the fundamental theory of physics through the ideas presented in NKS, and Wolfram’s idea might even apply to philosophy: if simple rules can create seemingly irregular complexity, then simple neuronal impulses in the brain might likewise produce the irregular complexity we perceive as free will.2

The most brilliant aspect of NKS lies in its underlying premise: a model of reality is not reality itself but only a model, so there can be several different, accurate representations. Our current approach, using mathematical models to explain the world, does not have to be the only one. Math can explain the world, but NKS shows that simple rules can do so as well. There may be methods and theories, overlooked or still undiscovered, that can model our world in better ways.

References

  1. Weisstein, E. W. Cellular Automaton. Wolfram MathWorld, http://mathworld.wolfram.com/cellularautomaton.html (accessed Mar 26, 2016).
  2. Wolfram, S. A New Kind of Science; Wolfram Media: Champaign, IL, 2002

Comment

Dangers of DNA Profiling

DNA profiling has radically changed forensics by providing an objectively verifiable method for linking suspects to crimes. Currently, many states collect the DNA of felons in order to ensure that repeat offenders are caught and convicted efficiently.1 Over the past few years, the range of situations in which law enforcement officials can collect DNA from suspects has expanded drastically. In 2013, President Obama strongly supported the creation of a national DNA database that would include samples not only from people who are convicted but also from those who are arrested.1 In Maryland v. King (2013), the Supreme Court held that law enforcement officials are justified in collecting DNA prior to conviction if it aids in solving a criminal case.2 In the years since this decision, the creation of a national DNA database has become a particularly polarizing and contentious issue. Proponents argue that a database would dramatically improve the ability of law enforcement to solve crimes. Detractors counter that the potential for misuse of genetic information is too great to warrant the creation of such a system.

DNA is popularly referred to as the “blueprint of life” and contains extremely sensitive information, such as an individual’s susceptibility to genetic disorders. One of the major arguments against the creation of a national DNA database is that such information could be hacked. Yaniv Erlich, a geneticist at MIT, illustrated this when he used “genome mining” to find the true identities of individuals in a national genome registry. In his study, Erlich obtained genomes from the 1000 Genomes Project, a large database used for scientific research. He then used a computer algorithm to search for specific DNA sequences known as short tandem repeats (STRs) on the Y chromosomes of males. These STRs are passed from father to son with remarkably little change from generation to generation. By comparing the Y-chromosome STR markers against easily accessible genealogy websites, Erlich was able to identify the last names of the individuals to whom the DNA belonged.3 With just a computer and access to genome data, Erlich could recover personal information from supposedly anonymous DNA registries. Clearly, a national DNA database could give rise to widespread privacy concerns. Though there are large fines for unauthorized disclosure or acquisition of DNA data, current federal regulations do not prevent insurers from using genome mining when making decisions about life insurance or disability coverage.4
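
To give a flavor of the matching step such “genome mining” relies on, the sketch below compares a Y-chromosome STR profile against a small surname-annotated reference table, written in Python. The marker repeat counts, surnames, and matching threshold here are invented purely for illustration; Erlich’s actual analysis drew on large public genealogy databases and far more careful statistical matching.

```python
# Hypothetical illustration of surname inference from Y-STR markers.
# The locus names (e.g. DYS19) are real STR loci, but every repeat count
# and surname below is made up for the example.

genealogy_db = {
    "Smith":  {"DYS19": 14, "DYS390": 24, "DYS391": 11, "DYS392": 13},
    "Jones":  {"DYS19": 15, "DYS390": 23, "DYS391": 10, "DYS392": 11},
    "Venter": {"DYS19": 14, "DYS390": 25, "DYS391": 11, "DYS392": 13},
}

def best_surname_match(query, database, min_shared=3):
    """Return the surname whose Y-STR profile shares the most repeat counts
    with the query profile, provided the overlap meets a minimum threshold."""
    best_name, best_score = None, 0
    for surname, profile in database.items():
        score = sum(1 for marker, repeats in query.items()
                    if profile.get(marker) == repeats)
        if score > best_score:
            best_name, best_score = surname, score
    return best_name if best_score >= min_shared else None

# A Y-STR profile extracted from a supposedly anonymous research genome.
anonymous_profile = {"DYS19": 14, "DYS390": 25, "DYS391": 11, "DYS392": 13}
print(best_surname_match(anonymous_profile, genealogy_db))  # prints "Venter"
```

The point of the sketch is simply that once a surname-tagged reference table exists, re-identification reduces to a lookup problem.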

Hacking of federal databases is not an unreasonable scenario: just this past July, sensitive information including the addresses, health history, and financial history of more than 21 million individuals was stolen in a massive cyber-attack on the Office of Personnel Management.4 That attack exposed information on a large share of the people who have worked for, or applied to work for, the United States government in recent years. A similar abuse of genetic information by third parties is undoubtedly a danger associated with a national DNA database. Despite advances in federal protections such as the Genetic Information Nondiscrimination Act, there are still numerous instances where genetic information regarding disease is used in employment decisions.5

Another potential issue associated with the creation of a DNA database is the notion of genetic essentialism, the idea that an individual’s genes can predict behavioral outcomes.6 Critics of a national DNA database argue that certain genetic factors, such as an extra Y chromosome, may lead law enforcement officials to suspect some individuals more readily than others, setting a dangerous precedent.

The notion that chromosomal abnormalities can alter behavioral outcomes has generated numerous studies examining the link between criminality and variations in the sex chromosomes, the chromosomes that determine whether an individual is male or female. Normally, females have two X chromosomes, whereas males have one X chromosome and one Y chromosome. In rare cases, however, males can have either an extra X chromosome (XXY) or an extra Y chromosome (XYY). A general review of the literature suggests that XXY men have feminine characteristics and are substantially less aggressive than XYY or XY men.7 Conversely, studies such as Jacobs et al. have suggested that the XYY condition can lead to increased aggression.8 However, Alice Theilgaard, one of the most prominent researchers on this topic, found that most behavioral characteristics attributed to the XYY abnormality are controversial.7 Even tests based on objective measures, such as testosterone levels, have been inconclusive. Theilgaard argues that the XYY abnormality does not itself cause increased aggression or a propensity to commit crimes; rather, the criminality of XYY individuals might be a socially constructed phenomenon. XYY individuals often have severe acne, lowered intellect, and unusual height, characteristics that make it difficult for them to fit in. Feeling ostracized because of these traits, XYY individuals might become antisocial.8 Thus, it is reasonable to conclude that merely having an extra Y chromosome does not predispose someone to violence; rather, a wide variety of social factors play a role.

It is entirely plausible that law enforcement officials could misinterpret genetic information. For example, they could mistakenly believe that an individual with the XYY condition is more likely to be a suspect in a violent crime. Such an assumption would hinder officials from objectively evaluating the evidence in a case and shift the focus to the personal characteristics of particular suspects. People in favor of a national DNA database often argue that it would be an effective way to solve crimes. Specifically, some officials argue that a database would prevent recidivism (a relapse into criminal behavior) and deter people from committing crimes. However, research by Dr. Avinash Bhati suggests that inclusion in a national DNA registry appears to reduce recidivism only for burglaries and robberies; in other crime categories, recidivism is largely unaffected.9 This suggests that a convict’s knowledge that he or she is in a DNA database is not a true deterrent. The concerns raised by this study show that databases might not be as effective a crime-fighting tool as proponents suggest.

Both genome mining and genetic essentialism present very real harms associated with the creation of a national DNA database. Having sensitive genetic information in one centralized registry could potentially lead to abuse and discriminatory behaviors by parties that have access to that information. Even if genome databases are strictly regulated, the possibility of that information being hacked still exists. Furthermore, assuming that genetics are the only determinants of behavior could lead to people with genetic abnormalities being suspected of crimes at a higher rate than “normal” individuals. Social factors often shape the way an individual acts; the possibility of law enforcement officials embracing the genetic essentialism approach is another associated harm. In the end, it seems that the negative consequences associated with the creation of a national DNA database outweigh the benefits.

References

  1. Barnes, R. Supreme Court upholds Maryland law, says police may take DNA samples from arrestees. Washington Post, https://www.washingtonpost.com/politics/supreme-court-upholds-maryland-law-says-police-may-take-dna-samples-from-arrestees/2013/06/03/0b619ade-cc5a-11e2-8845-d970ccb04497_story.html (accessed 2015).  
  2. Wolf, R. Supreme Court OKs DNA swab of people under arrest. USA Today, http://www.usatoday.com/story/news/politics/2013/06/03/supreme-court-dna-cheek-swab-rape-unsolved-crimes/2116453/ (accessed 2015).
  3. Ferguson, W. A Hacked Database Prompts Debate about Genetic Privacy. Scientific American, http://www.scientificamerican.com/article/a-hacked-database-prompts/ (accessed 2015).
  4. Davis, J. Hacking of Government Computers Exposed 21.5 Million People. The New York Times, http://www.nytimes.com/2015/07/10/us/office-of-personnel-management-hackers-got-data-of-millions.html?_r=0 (accessed 2015).
  5. Berson, S. Debating DNA Collection. National Institute of Justice, http://www.nij.gov/journals/264/pages/debating-dna.aspx (accessed 2015).
  6. Coming to Terms with Genetic Information. Australian Law Reform Commission , http://www.alrc.gov.au/publications/3-coming-terms-genetic-information/dangers-‘genetic-essentialism’ (accessed 2015).
  7. Are XYY males more prone to aggressive behavior than XY males? Science Clarified, http://www.scienceclarified.com/dispute/vol-1/are-xyy-males-more-prone-to-aggressive-behavior-than-xy-males.html (accessed 2015).
  8. Dar-Nimrod, I.; Heine, S. Genetic Essentialism: On the Deceptive Determinism of DNA. Psychological Bulletin, http://www.ncbi.nlm.nih.gov/pmc/articles/pmc3394457/ (accessed 2015).
  9. Bhati, A. Quantifying The Specific Deterrent Effects of DNA Databases. PsycEXTRADataset. 2011. https://www.ncjrs.gov/app/publications/abstract.aspx?id=258313 (accessed May 2015).

Comment

3D Organ Printing: A Way to Liver a Little Longer

On average, 22 people in America die each day because a vital organ is unavailable to them.1 Recent advances in 3D printing, however, have made manufactured organs a feasible way to combat the growing shortage of organ donors.

3D printing utilizes additive manufacturing, a process in which successive layers of material are laid down to build objects of various shapes and geometries.2 It was first described in 1986, when Charles W. Hull introduced his method of “stereolithography,” in which thin layers of a liquid resin are successively cured with ultraviolet laser light. In the past few decades, 3D printing has driven innovation in many areas, including engineering and art, by allowing rapid prototyping of various structures.2 Over time, scientists have further developed 3D printing to employ biological materials as the modeling medium. Early iterations of this process utilized a spotting system to deposit cells into organized 3D matrices, allowing the engineering of human tissues and organs. This method, known as 3D bioprinting, requires layer-by-layer precision and the exact placement of 3D components. The ultimate goal of 3D bioprinting is to assemble human tissues and organs that have the correct biological and mechanical properties for proper functioning and can be used for clinical transplantation. To achieve this goal, modern 3D organ printing typically relies on one of three approaches: biomimicry, autonomous self-assembly, or mini-tissues. In practice, a combination of all three techniques is often used to achieve bioprinting with multiple structural and functional properties.
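
The layer-by-layer logic shared by conventional additive manufacturing and early cell-spotting bioprinters can be sketched in a few lines. The Python example below is purely conceptual: it represents a target structure as a small voxel grid and “deposits” material one horizontal layer at a time; the grid dimensions, material labels, and deposit step are hypothetical and do not correspond to any real printer interface.

```python
import numpy as np

# 0 = empty, 1 = structural scaffold, 2 = cell-laden bio-ink (illustrative labels)
model = np.zeros((3, 4, 4), dtype=int)   # (layer, row, column)
model[0] = 1                              # solid base layer of scaffold
model[1, 1:3, 1:3] = 2                    # cells spotted into the middle layer
model[2, 1:3, 1:3] = 1                    # scaffold capping the cell region

def print_object(voxels):
    """Emit deposition commands layer by layer, from the bottom up."""
    for z, layer in enumerate(voxels):
        print(f"--- layer {z} ---")
        for y, row in enumerate(layer):
            for x, material in enumerate(row):
                if material:              # skip empty voxels
                    print(f"deposit material {material} at x={x}, y={y}, z={z}")

print_object(model)
```

Real bioprinters must additionally control factors such as nozzle pressure, temperature, and cell viability, but the additive principle of building an object one thin layer at a time is the same.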

The first approach, biomimicry, involves manufacturing identical copies of the cellular and extracellular components of a tissue or organ. The goal of this process is to use the cells and tissues of the organ recipient to duplicate the structure of organs and the environment in which they reside. Ongoing research in engineering, biophysics, cell biology, imaging, biomaterials, and medicine is essential for this approach to prosper, as a thorough understanding of the microenvironment of functional and supporting cell types is needed to assemble organs that can survive.3

3D bioprinting can also be accomplished through autonomous self-assembly, a technique that uses the same mechanisms as embryonic organ development. Developing tissues contain cellular components that produce their own extracellular matrix and thereby organize the structure of the tissue. Through this approach, researchers hope to use the cells themselves to create fully functional organs, since the cells ultimately determine the functional and structural properties of the tissues.3

The final approach used in 3D bioprinting involves mini-tissues and combines the processes of both biomimicry and self-assembly. Mini-tissues are the smallest structural and functional units of organs and tissues. They are replicated and assembled into macro-tissue through self-assembly. From these smaller building blocks, fully functional organs can be constructed. This approach is similar to autonomous self-assembly in that the organs are created by the cells and tissues themselves.

As technology advances, techniques for organ printing continue to improve. Although successful clinical implementation of printed organs is currently limited to flat tissues such as skin, tubular structures such as blood vessels, and hollow organs such as the bladder,3 research is ongoing into more complex solid organs such as the heart, pancreas, and kidneys.

Despite the recent advances in bioprinting, issues remain. Since cell growth occurs in an artificial environment, it is hard to supply the oxygen and nutrients needed to keep larger organs alive. Additionally, moral and ethical debates surround the science of cloning and printing organs.3 Some camps assert that organ printing manipulates and interferes with nature, while others feel that, when done ethically, 3D bioprinting of organs will benefit mankind and improve the lives of millions. There is also concern about who will control the production and quality of bioprinted organs. Some regulation of organ production will be necessary, and it may be difficult to decide how to distribute this power. Finally, the potential expense of 3D printed organs may limit access for lower socioeconomic classes; at least in their early years, printed organs will more likely than not be expensive to produce and to buy.

Nevertheless, there is widespread excitement surrounding the current uses of 3D bioprinting. While clinical trials may be in the distant future, organ printing can already serve as an in vitro model for drug toxicity, drug discovery, and human disease modeling.4 Organ printing also has applications in surgery: doctors can plan procedures with a replica of a patient’s organ built from MRI and CT images, and printed organ models can help train medical students and explain complicated procedures to patients. Additionally, 3D printed tissue can be used to repair livers and other damaged organs. Bioprinting is still young, but its widespread application is quickly becoming a possibility. With further research, 3D printing has the potential to save the lives of millions in need of organ transplants.

References

  1. U.S. Department of Health and Human Services. Health Resources and Services Information. http://www.organdonor.gov/about/data.html (accessed Sept. 15, 2015)
  2. Hull, C.W. et al. Method of and apparatus for forming a solid three-dimensional article from a liquid medium. WO 1991012120 A1 (Google Patents, 1991).
  3. Murphy, S.; Atala, A. 3D Bioprinting of Tissues and Organs. Nat. Biotechnol. [Online] 2014, 32, 773-785. http://nature.com/nbt/journal/v32/n8/full/nbt.2958.html (accessed Sept. 15, 2015)
  4. Drake, C. Kasyanov, V., et al. Organ printing: Promises and Challenges. Regen. Med. 2008, 3, 1-11.

Comment