The Reading Process: How Essential are Letters?

Reading is a basic yet vital component of our lives. Without the ability to read, we would be unable to comprehend a street sign telling us to stop, a crucial headline in the daily news, or an email telling us that the class we hate the most has been cancelled. Unfortunately, there are people whose ability to read is either impaired or entirely absent. Much research has been done on the reading process and how it is affected by brain impairment; at Rice University, Dr. Simon Fischer-Baum and his team are currently studying the reading deficiencies of stroke patients. Before examining a special case of someone with a reading deficit, however, it is necessary to understand the fundamentals of reading.

As English speakers, we might assume the reading process starts with the letters themselves. After all, children are commonly taught to identify each letter in a word and the sound it makes. Next, the reader strings those sounds together to pronounce the word. Finally, once the word has been identified and pronounced, the reader consults his or her mental database of words to retrieve its meaning.

While letters are the smallest tangible unit of the words being read, they actually depend on an even more basic concept: Abstract Letter Identities (ALIs). ALIs are representations of letters that allow a person to distinguish between different cases of the same letter, identify letters regardless of font, and know what sound the letter makes. It would appear that the ability to read is entirely contingent on one’s knowledge of these letter identities. However, certain scenarios indicate that this is not entirely true, raising questions about how much influence ALIs have on reading ability.

Dr. Fischer-Baum’s lab is currently exploring one such scenario involving a patient named C. H. This patient suffered a stroke a few years ago and, as a result, has a severely impaired capacity for reading. Dr. Fischer-Baum and David Kajander, a member of the research staff, have given C. H. tasks in which he reads words directly from a list, identifies words being spelled to him, and spells words that are spoken to him. His case is especially interesting because he has difficulty processing individual letters (he struggles, for example, to match lowercase letters with their uppercase counterparts), yet he can still read to a limited extent. This presents strong evidence against the claim that ALIs are essential to reading, because it contradicts the notion that we must have some knowledge of ALIs to have any reading ability at all. It has become apparent that C. H. is using a method of reading that is not based on ALIs.

There are several methods of reading that C. H. might be using. He could be memorizing the shapes of words he encounters and mapping those shapes onto the stimuli presented to him, a process called reading by contour. If this were the case, then he should have only a limited ability to read words printed in capital letters, since capital letters are all the same height and width and therefore give a word no distinctive shape. C. H. could also be using partial ALI information and making an educated guess about the rest of the word. If that were true, then he should be comparatively good at reading words with few similarly spelled neighbors, since fewer words share the partial letter sequence he perceives.

In order to pursue these hypotheses, Dr. Fischer-Baum’s lab gave C. H. a task derived from a paper by Dr. David Howard. Published in 1987, the paper describes a patient, T. M., who shows reading deficiencies strikingly similar to those of C. H.1 A new series of reading tasks and lexical decision tasks, in which C. H. had to determine whether or not a stimulus is a real word, was adapted from this paper. The reading tasks used a total of 100 stimuli (80 words and 20 non-words), all varying in length, frequency, and the ease of conjuring a mental image of the stimulus. The lexical decision tasks used 240 stimuli (120 words and 120 non-words), all varying in frequency, ease of forming a mental picture, and neighborhood density (the number of words that can be created by changing one letter in the original word). Additionally, each word list was presented to C. H. in each of the following formats: vertical, lowercase, alternating case, all caps, and with plus signs between the letters. The resulting lists were used to determine which factors were influencing his reading.
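
Neighborhood density, as defined above, is straightforward to compute. The following Python sketch counts the words in a lexicon that differ from a given stimulus at exactly one letter; the toy lexicon and function name are illustrative, not part of the study’s materials.

```python
# A minimal sketch of the neighborhood-density measure described above:
# the number of real words obtainable by changing exactly one letter.
LEXICON = {"cat", "cot", "cut", "bat", "can", "dog", "dot"}

def neighborhood_density(word: str, lexicon: set[str]) -> int:
    """Count lexicon words that differ from `word` at exactly one position."""
    return sum(
        1
        for candidate in lexicon
        if len(candidate) == len(word)
        and sum(a != b for a, b in zip(word, candidate)) == 1
    )

print(neighborhood_density("cat", LEXICON))  # 4: cot, cut, bat, can
```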

After the tasks were completed and the data were collected, C. H.’s results were organized by presentation style and stimulus characteristics. For the reading tasks, he scored best overall on stimuli in the lowercase presentation style (30% correct) and worst overall on stimuli in the plus sign presentation style (9% correct); second worst was his performance on the vertical presentation style (21% correct). For the lexical decision tasks, C. H. did best on stimuli in the all-capital presentation style (79.58% correct) and worst on stimuli in the vertical presentation style (64.17% correct), although his second worst performance came in the plus sign presentation style (65% correct). Across both the reading and lexical decision tasks, he scored higher on stimuli that were more frequent, shorter in length, and easier to visualize. In the lexical decision tasks, he also scored higher on low-neighborhood-density items than on high-neighborhood-density items.

These results lead us to several crucial conclusions. First, C. H. clearly has a problem reading words that contain interrupters, as evidenced by his poor performance on the plus sign words. Second, C. H. is not using contour information to read; if he were, his worst performances should have come on the all caps reading tasks, since capital letters do not give a word any distinctive contour. Third, the evidence suggests he is indeed using a partial guessing strategy, because he performed better on low-neighborhood-density words than on high-neighborhood-density words. These conclusions are significant because they suggest further tests for C. H. More importantly, they could be especially helpful for people suffering similar reading deficits. For example, presenting information using short, common, and easily visualized words could increase the number of words such patients can successfully read, increasing the chance that they interpret the information correctly. Dr. Fischer-Baum’s lab plans to perform further tasks with C. H. in order to assess his capacity for reading in context.

References

  1. Howard, D. Reading Without Letters. In The Cognitive Neuropsychology of Language; Lawrence Erlbaum, 1987; pp 27–58.

An Overview of the CRISPR Cas9 Genome Editing System

ABSTRACT

The clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) system is a prokaryotic acquired immunity against viral and plasmid invasion. The CRISPR Cas9 system is highly conserved throughout bacteria and archaea. Recently, CRISPR/Cas has been utilized to edit endogenous genomes in eukaryotic species, and in certain contexts it has proven invaluable for in vitro and in vivo modeling. Currently, CRISPR genome editing offers unparalleled efficiency and specificity at lower cost than other genome editing tools, including transcription activator-like effector nucleases (TALENs) and zinc finger nucleases (ZFNs). This review discusses the background theory of CRISPR and reports novel approaches to genome editing with the CRISPR system.

INTRODUCTION

CRISPR as a prokaryotic adaptive immune system

CRISPR was originally discovered in bacteria1 and is now known to be present in many other prokaryotic species.2,3 CRISPR systems in bacteria have been categorized into three types, with Type II as the most widely found. The essential components of a Type II CRISPR system located within a bacterial genome include the CRISPR array and a Cas9 nuclease. A third component of the Type II system is the protospacer adjacent motif on the target/foreign DNA. The CRISPR array is composed of clusters of short DNA repeats interspaced with DNA spacer sequences.4 These spacer sequences are the remnants of foreign genetic material from previous invaders and are utilized to identify future invaders. Upon foreign invasion, the spacer sequences are transcribed into pre-crisprRNAs (pre-crRNAs), which are further processed into mature crRNAs. These crRNAs, usually 20 nucleotides in length, play a crucial role in the specificity of CRISPR/Cas. Upstream of the CRISPR array in the bacterial genome is the gene coding for transactivating crisprRNA (tracrRNA). tracrRNA provides two essential functions: binding to mature crRNA and providing structural stability as a scaffold within the Cas9 enzyme.5

Post-transcriptional processing allows the tracrRNA and crRNA to fuse together and become embedded within the Cas9 enzyme. Cas9 is a nuclease with two active sites, each of which cleaves one strand of DNA at its phosphodiester backbone. The embedded crRNA allows Cas9 to recognize and bind to specific protospacer target sequences in foreign DNA from viral infections or horizontal gene transfers. The crRNA and the complement of the protospacer are brought together through Watson-Crick base pairing. Before the Cas9 nuclease cleaves the foreign double-stranded DNA (dsDNA), it must recognize a protospacer adjacent motif (PAM), a trinucleotide sequence. The PAM usually takes the form 5’-NGG-3’ (where N is any nucleotide) and is located immediately downstream (3’) of the protospacer, not within it. Once the PAM trinucleotide is recognized, Cas9 creates a double-stranded break three nucleotides upstream of the PAM in the foreign DNA. The cleaved foreign DNA will not be transcribed properly and will eventually be degraded.5 By evolving to target and degrade a range of foreign DNA and RNA with CRISPR/Cas, bacteria have provided themselves with a remarkably broad immune defense.6

CRISPR Cas9 as an RNA-guided genome editing tool

The prokaryotic CRISPR/Cas9 system has been reconstituted in eukaryotic systems to create new possibilities for the editing of endogenous genomes. To achieve this seminal transition, the virally derived spacer sequences in bacterial CRISPR arrays are replaced with 20 base pair sequences identical to targeting sequences in eukaryotic genomes. These spacer sequences are transcribed into guide RNA (gRNA), which functions analogously to crRNA by targeting specific eukaryotic DNA sequences of interest. The DNA coding for the tracrRNA is still found upstream of the CRISPR array. The gRNA and tracrRNA are fused together to form a single guide RNA (sgRNA) by adding a hairpin loop to their duplexing site. The complex is then inserted into the Cas9 nuclease. Within Cas9, the tracrRNA (the 3’ end of the sgRNA) serves as a scaffold while the gRNA (the 5’ end of the sgRNA) targets the eukaryotic DNA sequence by Watson-Crick base pairing with the complement of the protospacer (Fig. 1). As in bacterial CRISPR/Cas systems, a PAM located immediately downstream of the protospacer must be recognized by the CRISPR/Cas9 complex before double-stranded cleavage occurs.5,7 Once the sequence is recognized, the Cas9 nuclease creates a double-stranded break three nucleotides upstream of the PAM in the eukaryotic DNA of interest (Fig. 1). The PAM is the main restriction on the targeting space of Cas9: because any sequence adjacent to a PAM can be targeted, replacing the 20 base pair gRNA redirects the system to other DNA sequences near a PAM.5,7

Once the DNA is cut, the cell’s repair mechanisms are leveraged to knock out a gene or to insert a new oligonucleotide into the newly formed gap. The two main pathways of double-stranded DNA lesion repair associated with CRISPR genome editing are non-homologous end joining (NHEJ) and homology directed repair (HDR). NHEJ is mainly used for gene silencing: it introduces a large number of insertion/deletion mutations, which often produce frameshifts and premature stop codons that effectively silence the gene of interest. HDR is mainly used for gene editing. By providing a DNA template in the form of a plasmid or a single-stranded oligonucleotide (ssODN), HDR can introduce desired mutations into the cleaved DNA.5

The beauty of the CRISPR system is its simplicity: it consists of a single effector nuclease and an RNA duplex. Endogenous eukaryotic DNA can be targeted as long as the site lies in proximity to a PAM. Because the goal of this system is to induce a mutation, the CRISPR Cas9 complex will cut at the site repeatedly until a mutation occurs. Once a mutation does occur, the site will no longer be recognized by the complex and cleavage will cease.
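
A minimal Python sketch of this targeting rule, assuming the 5’-NGG-3’ PAM convention described above: every 20-base stretch immediately followed by an NGG is a candidate target site. The example sequence is arbitrary, and a real search would also scan the reverse-complement strand.

```python
import re

def candidate_targets(dna: str, spacer_len: int = 20):
    """Yield (protospacer, PAM) pairs for every 5'-NGG-3' PAM on this strand."""
    # Lookahead regex finds overlapping PAMs; N may be any nucleotide.
    for match in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = match.start()
        if pam_start >= spacer_len:  # need a full protospacer 5' of the PAM
            yield dna[pam_start - spacer_len:pam_start], match.group(1)

sequence = "ATGCTGACCTGAAGCTTACGATCGGTACCGAGTCAAGGTTCCA"
for protospacer, pam in candidate_targets(sequence):
    print(protospacer, pam)
```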

Optimization and specificity of CRISPR/Cas systems

If CRISPR systems are to be widely adopted in research or clinical applications, concerns regarding off-target effects must be addressed. On average, this system has a target every eight bases in the human genome. Thus, virtually every gRNA has the potential for unwanted off-target activity. Current research emphasizes techniques to improve specificity, including crRNA modification, transfection optimization, and a Cas9 nickase mutation.

The gRNA can be modified to minimize off-target effects while preserving its ability to bind sequences of interest. Nonspecific gRNAs can be optimized by introducing single-base substitutions that enhance binding to target sequences in a position- and base-dependent manner. Libraries of mutated guides containing all possible base substitutions along the gRNA have been generated to examine the specificity of the gRNA and the enzymatic activity of Cas9. Notably, if mutations occur near the PAM, Cas9 nucleases do not initiate cleavage, whereas targeting specificity and enzymatic activity are affected far less strongly by base substitutions at the 5’ end of the gRNA. This leads to the conclusion that the main contribution to specificity comes from the roughly ten bases nearest the PAM, at the 3’ end of the gRNA.5
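
A short Python sketch of how such a substitution library can be enumerated. The example guide sequence is arbitrary, and the generator is illustrative rather than the published protocol.

```python
# Enumerate every single-base substitution of a guide RNA sequence.
BASES = "ACGU"

def single_base_mutants(guide: str):
    """Yield every sequence that differs from `guide` at exactly one position."""
    for i, original in enumerate(guide):
        for base in BASES:
            if base != original:
                yield guide[:i] + base + guide[i + 1:]

guide = "GGGCACGGGCAGCUUGCCGG"  # arbitrary 20-nucleotide example
mutants = list(single_base_mutants(guide))
print(len(mutants))  # 20 positions x 3 alternative bases = 60 mutants
```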

The apparent differential specificity of the Cas9 gRNA guide sequence can be quantified by an open source online tool (http://crispr.mit.edu/). This tool identifies all possible gRNA segments that target a particular DNA sequence. Using a data-driven algorithm, the program scores each viable gRNA segment depending on its predicted specificity in relation to the genome of interest.

Depending on the redundancy of the DNA target sequence, scoring and mutating gRNA might not reduce off-target activity sufficiently. Increasing concentrations of CRISPR plasmids upon transfection can provide a modest five- to sevenfold increase in on-target activity, but a much more specific system is desirable for most research and clinical applications. Transforming Cas9 from a nuclease into a nickase enzyme yields the desired specificity.5 Cas9 has two catalytic domains, each of which nicks one DNA strand; inactivating one of those domains via the D10A mutation (in the RuvC domain) converts Cas9 from a nuclease into a nickase.

Two Cas9 nickases (and their respective gRNAs) are required to nick complementary DNA strands simultaneously. This technique, called multiplexing, mimics a double-stranded break by inducing single-stranded breaks in close proximity to one another. Since single-stranded breaks are repaired with higher fidelity than double-stranded breaks, off-target effects caused by improper cleavage can be mitigated, leaving the majority of breaks at the sequence of interest. The two nickases should be offset 10–30 base pairs from each other.5 Multiplex nicking offers on-target modification comparable to that of wild-type Cas9 while dramatically reducing off-target modification (1,000–1,500 fold).5

DISCUSSION

CRISPR/Cas9 systems have emerged as the newest genome engineering tools and have quickly been adopted for in vitro and in vivo research. However, before these systems can be used in clinical applications, off-target effects must be controlled. In spite of its current shortcomings, CRISPR has proven invaluable to researchers conducting high-throughput studies of the biological function and relevance of specific genes. CRISPR Cas9 genome editing provides a rapid procedure for the functional study of mutations of interest in vitro and in vivo: tumor suppressor genes can be knocked out via NHEJ, and oncogenes with specific mutations can be created via HDR. The novel cell lines and mouse models created with CRISPR technologies have already galvanized translational research by enabling new ways of studying the genetic foundations of disease.

References

  1. Ishino, Y. et al. J. Bacteriol. 1987, 169, 5429–5433.
  2. Mojica, F. J. et al. Mol. Microbiol. 1995, 17, 85–93.
  3. Masepohl, B. et al. Biochim. Biophys. Acta 1996, 1307, 26–30.
  4. Mojica, F. J. et al. Mol. Microbiol. 2000, 36, 244–246.
  5. Cong, L. et al. Science 2013, 339, 819–823.
  6. Horvath, P. et al. Science 2010, 327, 167–170.
  7. Ran, F. A. et al. Nat. Protoc. 2013, 8, 2281–2308.

Quantifying Cellular Processes

Molecular biology research has generated unprecedented amounts of information about the cell. One of the largest molecular databases, the Kyoto Encyclopedia of Genes and Genomes (KEGG), stores 9,736 reactions and 17,321 compounds.1 This information provides rich opportunities for synthetic biology, a field that engineers cells to produce large quantities of valuable compounds. Despite its successes, however, synthetic biology could still be practiced more systematically and efficiently. To integrate all of the cell’s reactions into a coherent picture, researchers have developed computational models of the cell’s biochemical pathways. These models ultimately aim to simulate the pathways of a real cell so that researchers can isolate the set of reactions that produces a compound of interest.

To express all of the cell’s compounds and reactions, research on metabolic networks has utilized a mathematical structure called a graph. A graph consists of two primary elements: nodes, each representing a single compound or reaction, and edges, which connect related nodes. For example, as shown in Figure 1, the cell’s compounds can be treated as nodes, and the reactions that transform the compounds can be treated as the edges. By reducing chemicals to symbolic representations, graph algorithms can analyze networks relatively quickly and with minimal computational memory. This low computational cost enables analysis not only of specific pathways but of the entire cell.
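
As a concrete illustration, a compound graph of the kind just described can be encoded with a plain adjacency structure. This Python sketch uses two early glycolysis reactions; the representation is illustrative, not the encoding used by any particular tool.

```python
# Compounds as nodes; each directed edge is a reaction transforming one
# compound into another.
metabolic_graph = {
    "glucose": ["glucose-6-phosphate"],               # hexokinase
    "glucose-6-phosphate": ["fructose-6-phosphate"],  # phosphoglucose isomerase
    "fructose-6-phosphate": [],
}

def neighbors(compound: str):
    """Compounds reachable from `compound` by a single reaction."""
    return metabolic_graph.get(compound, [])

print(neighbors("glucose"))  # ['glucose-6-phosphate']
```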

While graphs provide intuitive symbols for the reactions in metabolic networks, they were not invented arbitrarily. Current metabolic network graphs draw on a well-studied mathematical field: Euler founded graph theory in 1735, and the theorems discovered since then have enabled methods for solving a wide range of mathematical problems.1 Metabolic networks research depends specifically on shortest-path algorithms, which search for the most efficient ways to reach a target node from a starting node. Shortest-path algorithms accomplish several tasks that simplify the analysis of metabolic networks. They constrain the output to a finite number of pathways, and they favor biologically realistic pathways, since real pathways evolve to conserve energy and tend to minimize the number of intermediate compounds. Excluding convoluted pathways means avoiding unreasonably complicated and costly production methods.

However, current shortest-path algorithms must be extensively modified to generate meaningful results for metabolic networks. In a simplistic application of the shortest-path algorithm, the only parameter is the distance between two nodes, that is, the literal shortness of the path. Such simplicity generates many pathways that are biochemically impossible and make little sense. This problem can be illustrated by glycolysis, the ten-step reaction pathway in the metabolism of glucose. During glycolysis, ATP, a small molecule that provides energy for many cellular reactions, is required to prime intermediates. ATP is generated in the following overall reaction: glucose + 2 NAD+ + 2 ADP + 2 Pi → 2 pyruvate + 2 ATP + 2 NADH. A simplistic graphical algorithm would suggest that glucose is converted directly into ATP in one short step, as the overall reaction equation seems to indicate. In reality, glycolysis is a ten-step process involving numerous enzymes, cofactors, allosteric regulators, covalent regulators, and environmental conditions, all of which must be encoded into the algorithm. The challenge in synthetic biology is considering all of these factors, and more, for the gamut of simultaneous reactions ongoing in a cell.

Research has produced modifications to the simplistic shortest-path algorithm that better model biological reality. Croes et al. reduced the influence of currency metabolites (ubiquitous compounds such as ATP that participate in a large fraction of reactions) by constructing a weighted graph: metabolites with many connections throughout the graph were assigned a greater cost.2 The algorithm, searching for pathways with the least total cost, would avoid pathways that incorporated costly component metabolites. This approach correctly replicated 80% of a test set of 160 metabolic pathways known to exist in cells, a noticeable improvement over the unweighted graph.
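
The following Python sketch shows the idea behind such weighting: a hub metabolite (here ATP) is given a high traversal cost, so Dijkstra’s algorithm routes around it even though the route through it has fewer steps. The toy network and weights are illustrative.

```python
import heapq

# edges: compound -> list of (next_compound, cost); the hub metabolite ATP
# carries a high cost, mimicking a Croes-style weighted graph.
graph = {
    "glucose": [("ATP", 10.0), ("glucose-6-phosphate", 1.0)],
    "ATP": [("pyruvate", 10.0)],
    "glucose-6-phosphate": [("fructose-6-phosphate", 1.0)],
    "fructose-6-phosphate": [("pyruvate", 1.0)],
    "pyruvate": [],
}

def cheapest_path(graph, start, goal):
    """Dijkstra's algorithm over the weighted compound graph."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph[node]:
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

print(cheapest_path(graph, "glucose", "pyruvate"))
# (3.0, ['glucose', 'glucose-6-phosphate', 'fructose-6-phosphate', 'pyruvate'])
```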

Although the weighting scheme of Croes et al. had considerable success, several years later Blum and Kohlbacher created an algorithm that combined weighting with a systematic atom-tracking technique.3 The researchers mapped the correspondence of atoms between every substrate and product, recording which atoms are conserved in a chemical transformation. The algorithm then deleted pathways containing reactions that did not conserve a minimum number of carbon atoms during the transformation. More directly than a simple weighting scheme, atom-tracking targeted pathways such as the glucose to ATP to ADP to pyruvate pathway, which structurally cannot occur. When tested, this combined algorithm replicated actual biological pathways with greater sensitivity and specificity than algorithms using atom-tracking or weighting alone.
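
A sketch of the atom-tracking idea in the same vein: each reaction step is annotated with how many carbon atoms it conserves, and any pathway containing a step below a threshold is discarded. The conservation numbers here are illustrative placeholders, not measured atom mappings.

```python
# (substrate, product) -> carbon atoms carried from substrate into product
CARBONS_CONSERVED = {
    ("glucose", "glucose-6-phosphate"): 6,
    ("glucose", "ATP"): 0,  # illustrative: no glucose carbon ends up in ATP
    ("ATP", "pyruvate"): 0,
}

def conserves_carbons(pathway, minimum=3):
    """True if every consecutive step carries at least `minimum` carbons."""
    return all(
        CARBONS_CONSERVED.get((a, b), 0) >= minimum
        for a, b in zip(pathway, pathway[1:])
    )

print(conserves_carbons(["glucose", "ATP", "pyruvate"]))      # False
print(conserves_carbons(["glucose", "glucose-6-phosphate"]))  # True
```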

Metabolic pathway algorithms received yet another improvement through modifications that enabled them to identify branching pathways. Rather than proceeding linearly from the first to the final compound, pathways often split and converge again during intermediate steps. Pitkänen et al. introduced the branched-path algorithm ReTrace, which also incorporated atom-tracking data.4 Heath, Kavraki, and Bennett later utilized atom-tracking data to create branched algorithms with improved search times. These branched algorithms reproduced the pathways leading to several antibiotic compounds, such as penicillin, cephalosporin, and streptomycin.5

Improvements in computational models will do more than merely replicate a cell’s biochemistry. By generating feasible alternative pathways, algorithms should predict undiscovered reactions that the cell could perform. For this reason, graphical algorithms, after constructing a skeleton of a cell’s metabolites, should integrate methods that account for biochemical properties. Constraint-based modeling is an alternative approach to metabolic networks research that ensures the necessary reactants for a pathway are present in the correct proportions. Such models enable researchers to test how removing an enzyme or regulating a gene affects the quantity of the desired compound. Unlike graphical methods, however, the computational complexity of constraint-based modeling limits its scale. Future research should focus on incorporating more biochemical properties, such as atom-tracking, into graphical methods, on simplifying constraint-based methods, or on integrating the benefits of the two approaches into a comprehensive model.

Although such models are still incomplete, the development of a fully effective computational model to guide cellular engineering will have critical implications. For example, a new drug currently takes 10 to 15 years to move from target identification through molecule optimization and approval to the market.6 Computational models that fully emulate a real cell would make synthetic biology rapid and systematic, accelerating the discovery and testing of compounds with important medical applications. Evidently, the integration of biology with mathematics will be critical to the future advancement of synthetic biology.

References

  1. Graph Theory. KEGG: Kyoto Encyclopedia of Genes and Genomes. http://www.britannica.com/EBchecked/topic/242012/graph-theory (accessed Oct. 31, 2014).
  2. Croes, D. et al. J. Mol. Biol. 2006, 356, 222–236.
  3. Blum, T.; Kohlbacher, O. J. Comput. Biol. 2008, 15, 565–576.
  4. Pitkänen, E. et al. BMC Syst. Biol. 2009, 3, doi:10.1186/1752-0509-3-103.
  5. Heath, A. P. Computational discovery and analysis of metabolic pathways. Doctoral Thesis, Rice University, 2010.
  6. Drug Discovery and Development. http://www.phrma.org/sites/default/files/pdf/rd_brochure_022307.pdf (accessed Feb. 14, 2015).

Killer Chili Peppers

Have you ever experienced that burning sensation on your tongue after eating a spicy pepper? This reaction is caused by capsaicin, a colorless and odorless compound present in chili peppers. Spiciness is measured in Scoville heat units (SHU), which are based on the concentration of capsaicin in the pepper. Jalapeños range from 3,500–10,000 SHU, habaneros range from 100,000–350,000 SHU, and the world’s hottest pepper, the Trinidad Moruga Scorpion, ranges from 2,000,000–2,200,000 SHU. All of them, however, pale in comparison to pure capsaicin, which scores around 16 million SHU.

Capsaicin, however, does more than just add heat to your favorite foods; it has also been found to negatively affect important immune cells. In 2014, researchers at the Asan Medical Center in Seoul, South Korea found that capsaicin inhibits natural killer (NK) cells, a type of cell that is important for the surveillance of cancer.1 NK cells use cytokine signaling molecules such as interferon-γ and tumor necrosis factor-α to target and lyse cancer cells.2 Capsaicin directly decreases the cytotoxicity of NK cells by reducing cytokine production. In fact, extended exposure to capsaicin can kill NK cells.1 NK cell malfunctions have been shown to lead to higher rates of tumor formation and cancer metastasis,3 and many patients suffering from cancer exhibit defects in the function of NK cells.4 Since capsaicin has negative effects on NK cells, capsaicin may also have deleterious effects on other immune cells that protect us from cancer and other ailments.

Several studies have suggested a correlation between capsaicin consumption and tumor formation. In one study, five groups of mice were fed different levels of capsaicin for 35 days, with one group fed no capsaicin as a control. Ten percent of the mice fed capsaicin developed adenocarcinomas of the duodenum (gastrointestinal tumors), while no control mice developed tumors.5 Another paper, by López-Carrillo et al., studied the correlation between capsaicin intake from chili pepper consumption and the incidence of gastric cancer in Mexico.6 They found that people who consumed high levels of capsaicin, about 90–250 mg daily, were at a statistically higher risk for gastric cancer than those with lower consumption. However, 90–250 mg of capsaicin corresponds to roughly 9–25 jalapeño peppers every single day, so most people are not at higher risk.
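
The arithmetic behind that last comparison is worth making explicit. The per-pepper figure below (about 10 mg of capsaicin per jalapeño) is inferred from the article’s own numbers rather than measured; this short Python sketch simply inverts the dose.

```python
# 90-250 mg of capsaicin corresponding to roughly 9-25 jalapenos implies
# ~10 mg of capsaicin per pepper (an inferred, illustrative figure).
MG_CAPSAICIN_PER_JALAPENO = 10

def jalapenos_for_dose(dose_mg: float) -> float:
    """Approximate number of jalapenos delivering `dose_mg` of capsaicin."""
    return dose_mg / MG_CAPSAICIN_PER_JALAPENO

print(jalapenos_for_dose(90), jalapenos_for_dose(250))  # 9.0 25.0
```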

Despite its apparent carcinogenicity, capsaicin may also have therapeutic potential in cancer treatment. Studies have shown that capsaicin can actually inhibit the growth of leukemia, hepatoma, glioblastoma, and colon cancer cells.7 In one study, gastric cancer cells treated with capsaicin at concentrations of 10, 50, and 200 μM underwent apoptosis, or programmed cell self-destruction, at a higher rate than normal epithelial cells under the same conditions. Capsaicin induces apoptosis by activating p53, a tumor suppressor gene commonly dubbed the “guardian of the genome.” This gene also functions to activate DNA repair genes, pause the cell replication cycle, and initiate apoptosis when certain triggers are activated. Capsaicin was found to activate one of these triggers, causing p53 to initiate apoptosis. These results suggest possible new methods of cancer treatment or prevention.8

Perhaps in the near future, capsaicin can be harnessed for beneficial medical applications. Capsaicin presents an interesting duality: it has promising effects for cancer treatment, yet it can lead to cancer itself. In designing treatments, researchers will have to maximize the benefits of capsaicin’s cancer-fighting abilities while mitigating the potential damage to the body’s healthy cells; this tradeoff is analogous to the one made in chemotherapy today. The correlation between capsaicin and cancer may still sound concerning. With moderation, however, those of us who can handle the heat can continue to enjoy the burn of Sriracha and hot wings without worrying about putting our lives in danger.

References

  1. Kim, H. S. et al. Carcinogenesis 2014, 35, 1652–1660.
  2. Long, E. O. et al. Annu. Rev. Immunol. 2013, 31, 227–258.
  3. Hann, N. et al. J. Immunol. 1981, 127, 1754–1758.
  4. Saito, H. et al. Gastric Cancer 2012, 15, 27–33.
  5. Toth, B.; Rogan E.; Walker, B. Anticancer Res. 1984, 4, 177–179.
  6. López-Carrillo, L. et al. Int. J. Cancer. 2003, 106, 277–282.
  7. Bode, A. M. et al. Cancer Res. 2011, 71, 2809–2814.
  8. Chow, J. et al. Biochim. Biophys. Acta 2007, 1773, 565–576.

Recycled Water: The Future of the American West

The Western United States has always been dry; San Diego, for example, derives only 15% of its annual water needs from rain.1 In response to this water shortage, engineers have constructed creative but short-sighted solutions. Rivers have been diverted hundreds of miles to major cities, enabling further urban growth in areas that could not otherwise support it.2 This rapid growth, however, comes at the cost of depleting those rivers. Letting entire cities such as San Diego, Las Vegas, or Phoenix wither away is unrealistic, so scientists must create solutions that are sustainable, environmentally friendly, and relatively easy to implement in order to ensure the survival of the Western states. Enter the concept of recycled water. Known as “toilet to tap” by its adversaries, recycled water is wastewater that, once cleaned of pathogens, viruses, pharmaceuticals, chemicals, and biological matter, is redistributed as drinking water. In order to sustain the existence and growth of Western cities, recycled water must be accepted and utilized.

To understand the need for recycled water, it is helpful to look at the scale of water shortages in the Western United States. Since 1950, water use nationwide has increased by 127%, putting extreme strain on existing infrastructure and the environment.2 This increase has occurred despite the worsening droughts that affect the region. Studies conducted in Salt Lake City show that a temperature increase of even one degree Fahrenheit can cause as much as a 6.5% drop in annual local stream flow.2 Rising temperatures will also strain and eventually exhaust the sources from which Western cities draw their water. The West is faced with an issue that few wish to confront: if Western cities continue to rely only on traditional water sources, they will literally dry up, forcing residents to move elsewhere and creating massive economic instability nationwide.

One possible solution to water shortages is recycled water. The process of recycling water is intensive, which is understandable given the dangers of contamination. Wastewater is first sent to a sewage treatment plant, where filters remove solids and dissolved biological material.3 The wastewater then undergoes normal groundwater treatment. First, microfiltration removes bacteria and protozoa. Then, reverse osmosis removes viruses, salts, and pharmaceuticals. Finally, ultraviolet light and hydrogen peroxide destroy “trace organics.”3 After these steps, the treatment plant adds minerals back into the water and discharges it into a reservoir. Months later, the water from the reservoir is treated again and distributed to households.

It is a subject of debate whether the water is clean enough after treatment to be distributed directly, without first being mixed with reservoir water. Some say that the water produced by the treatment plant is even purer than reservoir water.4 Others, however, argue that releasing treated water directly into the reservoir, instead of letting it percolate through various ground layers, permits impurities to remain in the water.5 Whether the water is safe enough to be consumed directly depends on the regulatory standards themselves as well as on the plant producing the water. When produced according to Environmental Protection Agency (EPA) guidelines, recycled water is as safe as traditionally obtained water.6

Another concern about recycled water is that treatment plants are unable to entirely remove pharmaceuticals, chemicals, and bacteria such as E. coli. The presence of E. coli could be the result of organic material remaining in the water after treatment. Traces of pharmaceuticals also pose a risk to the consumer. When medicine is flushed down the toilet or sink, it remains in the water supply and can be redistributed to other consumers. Although studies do not deny claims that such trace pharmaceuticals are found in recycled water,7 it is important to put this into perspective: even non-recycled water contains trace amounts of pharmaceuticals and is deemed safe for public consumption by the EPA.

Furthermore, many scientists who were once wary of recycled water have changed their opinions. In 1998, the National Research Council (NRC) reported that discharging recycled water into reservoirs was acceptable, “although only as a last resort.”1 Many people opposed to water reclamation cited this study, emphasizing that the practice should be used only as a last resort. The Western states, however, are facing water shortages that will soon require last resorts. More important still is the NRC’s new statement about wastewater treatment technology: “the possible health risks associated with exposure to chemical contaminants are minimal.”6 Thus, those opposed to recycled water can no longer use the NRC’s previous stance to back their claims. Recycled water that adheres to the EPA’s health and safety guidelines is necessary for the survival of the Western U.S.8

In addition to safety, cost is an important factor in the production of recycled water. Currently, the production of recycled water is not subsidized by the government. Because of the additional treatment that wastewater requires, producing and distributing recycled water costs four times more than doing so for groundwater.1 If recycled water were subsidized, as it is in Orange County, it would cost only 0.0018 cents per gallon to produce, a small increase over traditional tap water’s cost of 0.0015 cents per gallon.1 With U.S. government support for reclamation plants in the West, the prohibitively high cost would no longer be an issue. Even if recycled water were not subsidized, the added cost could provide long-term societal benefits. Introducing unsubsidized recycled water would raise the price of tap water, shifting the cost from taxpayers in general to those who actually use the water, and could thereby decrease water use over time. Confronted with rising water prices, residents would make an effort to consume less, decreasing the stress on treatment plants as well as on natural resources.

The biggest challenge for recycled water, however, is not its cost or purported health risks but its public perception. The unflattering name “toilet to tap” hardly brings to mind the sparkling springs associated with “safe” bottled water. Large groups of detractors state that recycled water cannot be trusted, and to some extent they have reason to maintain this stance: in the past, facilities in four cities released recycled water that was not potable.5 Although this potential hazard cannot be ignored, neither can the benefits of recycled water produced in a fully functioning facility with enforced safety standards. These isolated incidents do not indicate that all recycled water is unsafe, but rather that it must be better regulated.

Despite a number of vocal public groups in opposition to recycled water, there is growing support for the construction and utilization of reclamation facilities. In San Diego, a 2004 poll revealed that 63% of citizens opposed water reclamation; by 2011, this number had dropped to 25%.1 Education is the most effective way to win further support. The most common objection to using recycled water is that the concept is “disgusting.”4 For many, the phrase “toilet to tap” conjures an image of the contents of a toilet bowl flowing directly to the kitchen sink. When people are shown the intensive treatment process, though, they understand that the water is safe to drink and environmentally friendly; about 95% of people who have toured a water recycling facility agree that it is feasible.3 If people in the Western states and nationwide are exposed to the way in which wastewater is treated, recycled water may gain popularity.

Recycled water, if able to defeat the social stigma that surrounds it, could be a literal life-saver in the Western United States. While Los Angeles shut down a reclamation facility built in 2002, there has been an effort to reopen it. Doing so could reclaim 9.7 billion gallons of water per year.9 While recycled water cannot supply cities like Los Angeles with all the water they need, it is a step in the right direction. By increasing the cost of water to a level where overuse would be discouraged and by instituting water reclamation facilities, the cities of the Western U.S. may be able to survive. The major obstacle to the implementation of recycled water is public disapproval due to ignorance about the process of water recycling and the purity of the final product. However, if educated about the process of water recycling, the public might come to see reclaimed water as a safe and effective water source.

References

  1. Barringer, F. As ‘yuck factor’ subsides, treated wastewater flows from taps. The New York Times, Feb. 9, 2012. http://www.nytimes.com/2012/02/10/science/earth/despite-yuck-factor-treated-wastewater-used-for-drinking.html?pagewanted=all&_r=0 (accessed Apr. 8, 2014).
  2. Ferner, M. These 11 cities may completely run out of water sooner than you think. Huffington Post, Dec. 4, 2013.  http://www.huffingtonpost.com/2013/12/04/water-shortage_n_4378418.html (accessed Apr. 6, 2014).
  3. Chu, K. From toilets to tap: How we get tap water from sewage. USA Today, Mar. 3, 2011. http://usatoday30.usatoday.com/money/industries/environment/2011-03-03-1Apurewater03_CV_N.htm (accessed Apr. 8, 2014).
  4. Weissmann, D. Texas town closes the toilet-to-tap loop: Is this our future water supply? Marketplace, Jan. 6, 2014. http://www.marketplace.org/topics/sustainability/texas-town-closes-toilet-tap-loop-our-future-water-supply (accessed Apr. 1, 2014).
  5. Royte, E. Bottlemania: Big business, local springs, and the battle over America’s drinking water. Bloomsbury: New York, 2008.
  6. Than, K. Reclaimed wastewater for drinking: Safe but still a tough sell. National Geographic, Jan 31, 2012. http://news.nationalgeographic.com/news/2012/01/120131-reclaimed-wastewater-for-drinking/ (accessed Mar. 29, 2014).
  7. Research Foundation. Recycled water: How safe is it?, 2011. http://www.athirstyplanet.com/sites/default/files/uploadsfiles/PDF/RA%20Backgrounder_6.4.11_Lo.pdf (accessed Apr. 1, 2014).
  8. Mayor opposes ‘toilet-to-tap’ water supply proposal. ABC News, Sept. 13, 2007. http://www.10news.com/news/mayor-opposes-toilet-to-tap-water-supply-proposal (accessed Apr. 8, 2014).
  9. Fleischer, M. Don’t Gag: It’s time for L.A. to embrace ‘toilet to tap’. Los Angeles Times, Feb. 4, 2014.  http://articles.latimes.com/2014/feb/04/news/la-ol-drought-toilet-to-tap-water-20140204 (accessed Apr. 1, 2014).

Modern Day Telepathy: Advances in Brain-to-Brain Communications

Is it possible to create and communicate through a mental link without using sensory networks? Telepathy, a common theme in science fiction, involves transferring thoughts from one person to another. As a result of recent developments in neuroscience and technology, scientists have demonstrated a new form of communication that does not require us to speak, listen, type, or text.

There have been previous attempts at brain-to-brain communication, but all successful attempts used invasive methods, which limited their practicality. Dr. Alvaro Pascual-Leone, the director of the Berenson-Allen Center for Noninvasive Brain Stimulation at Beth Israel Deaconess Medical Center, had a similar vision of creating a way for people to communicate without being constrained by physical abilities such as talking and listening. However, he wanted to implement a non-invasive method of brain-to-brain communication in order to create a more practical system. He assembled an international team of researchers specializing in neuroscience and robotic engineering to create this form of telepathic communication using a brain-computer interface, which allows humans to send messages to computers through non-invasive brain-computer interactions.1 On September 3, 2014, his team successfully demonstrated direct brain-to-brain communication between human subjects on different continents. To facilitate this direct communication, the team combined several forms of brain technology.

One brain technology the researchers incorporated was the electroencephalogram. Electroencephalography (EEG) is a non-invasive form of a brain-computer interface that records brain activity. When brain cells communicate, they send impulses to each other. EEG detects these impulses through electrodes that are placed directly onto the scalp. The electrodes are connected to a recording device that displays the recorded brain activity as waves.2 EEG is often used clinically to examine the brain during seizures, monitor the depth of anesthesia, and inspect the brain for damage.

The second technology the team used was transcranial magnetic stimulation (TMS), which uses magnetic fields to stimulate nerve cells in the brain and is often used to treat depression. The process of stimulating nerve cells involves placing a large electromagnetic coil against the scalp near the forehead and producing an electric current using the electromagnet.3 Because of the current, neurons fire and become more active. This can cause different reactions including muscle twitching or seeing flashes of light.

While EEG and TMS have different medical uses, Dr. Pascual-Leone integrated the two technologies to create a groundbreaking communication system that relies on EEG to read the original message in the sender’s brain and a TMS system to relay the information into the receiving brain. This brain-to-brain communication experiment involved four volunteers—one in India and three in France. The volunteer in India sent the messages, and the three volunteers in France received them.1

The sender was connected to an EEG-based brain-computer interface and transmitted the words “hola” and “ciao” to the recipients, who received the messages through a TMS-based receiver.1 The EEG read the sender’s brain activity and converted the letters to binary code. The transmission system used a wireless EEG that sent the data to a computer for brain-computer interface processing. The message was then sent over the Internet to a TMS computer in France. Using electromagnetic stimulation, the TMS relayed the messages to the receivers by stimulating their brains to produce flashes of light in their peripheral vision called phosphenes.1 The recipients do not receive the thought itself directly in their minds; instead, they are stimulated according to a phosphene code and consciously interpret the stimulation. These flashes are similar to Morse code in that the participants can understand the sequences and decode the information into the greetings sent to them. All three recipients translated the message successfully. A second experiment was done in the same manner, except that the message was sent between participants in Spain and France. The error rate for this experiment was about 15%, with human error by the participants in decoding the message accounting for 11% of the total.1
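
To make the encode-transmit-decode pipeline concrete, here is a minimal Python sketch. The 5-bit alphabetic code below is an illustrative stand-in; it is not the exact encoding scheme used in the study.

```python
# Letters are converted to a binary string, and each bit is delivered as a
# phosphene ("flash") or its absence ("no flash") for the receiver to decode.
def encode_word(word: str) -> str:
    """Map each letter a-z to a 5-bit binary code and concatenate."""
    return "".join(format(ord(ch) - ord("a"), "05b") for ch in word.lower())

def to_stimulation(bits: str) -> list[str]:
    """Translate bits into the stimulation sequence the receiver interprets."""
    return ["flash" if b == "1" else "no flash" for b in bits]

bits = encode_word("hola")
print(bits)                       # 00111011100101100000
print(to_stimulation(bits)[:5])   # first five stimulation events
```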

These experiments represent an enormous advance in using properties of the brain and new technology to create new forms of communication. However, the experimenters agree that much research remains to be done to make brain-to-brain communication more efficient and applicable. Further development of this communication system needs to make the technology more accessible and user-friendly. The sizeable challenge of reducing EEG and TMS to a small device, along with the complexity of both instruments, currently makes everyday brain-to-brain communication seem like an unattainable goal.

The prospect of direct brain-to-brain interfaces raises questions about how much they could change the way people communicate, as well as about the future of written and oral communication. This technology brings us closer to a form of communication similar to telepathy, which until now has existed only in science fiction. One major difference, however, is that this technology uses the Internet as an intermediary. In science fiction, telepathy is often used to send private messages from one person to another without the chance of someone else reading or hearing them, a characteristic that makes telepathy very appealing. If the Internet serves as the intermediary, the information sent through this interface will not be as private or secure as people expect telepathy to be. If brain-to-brain technology lacks the privacy of telepathy, will it provide any benefits over our current forms of communication? In its current state, users must also make the extra effort of learning to decode phosphene signals, an additional step that makes this brain interface a less attractive option.

Although questions of practicality discourage the commercialization of Dr. Pascual-Leone’s invention, his experiments have made significant headway into direct brain-to-brain communication. The success of this technology depends on how efficient and user-friendly scientists can make this device. Many concepts, including early computers and space exploration, were seen as implausible or impractical in their nascency but developed into commonplace technologies. Whether or not this invention is currently practical, it is amazing how scientific research has brought the seemingly outlandish idea of telepathy closer to reality.

References

  1. Grau, C. et al. PLOS ONE [Online] 2014. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0105225 (accessed Oct. 13, 2014).
  2. National Institutes of Health. http://www.nlm.nih.gov/medlineplus/ency/article/003931.htm (accessed Oct. 13, 2014).
  3. Johns Hopkins Medicine. http://www.hopkinsmedicine.org/psychiatry/specialty_areas/brain_stimulation/tms/ (accessed Oct. 13, 2014).

The Bright Future of Solar Energy

Renewable energy is the dream of countless environmentalists and active citizens. Its goals include adopting hybrid electric vehicles and developing domestic sources of sustainable energy. The increasing price of energy derived from crude oil, together with concerns about energy security, has stimulated investment in sustainable resources such as solar energy.

The search for and extraction of oil negatively impact the environment around oil platforms. Offshore drilling platforms report spills every year that kill an estimated 315,000 birds per platform.1 Additionally, the waste fluids ejected during drilling harm marine life that relies on filter feeding; these pollutants then travel up the food chain in a process called biomagnification.2-4 Research and development in renewable energy sources promise to minimize these harmful practices by reducing society’s dependence on oil.

One such alternative to oil-derived energy lies in residential and commercial photovoltaic solar panels that convert sunlight into electric current. Depending on the weather, the sun provides between 3.6 and 6 kWh/m2 (kilowatt-hours per square meter) per day in the U.S.5 Homeowners are beginning to capitalize on this resource by installing small-scale rooftop panels known as photovoltaic (PV) systems. These systems create electric current by harnessing the excitation of electrons by sunlight. They are becoming commonplace as their cost continues to decrease: in 2011, the cost of installing PV systems was 11–14% lower than in 2010, and by 2013 prices were lower still.6

The most widely adopted material in these products is silicon, known for its semiconducting properties. Developments in such materials and in manufacturing methods have greatly accelerated since the first applications of silicon, improving energy collection. Grid-connected systems worldwide now produce a total of seven million kW.7 Since the average American home draws 2–5 kW of power, this grid capacity can support two to three million U.S. homes.8 A second viable option for solar energy is concentrated solar power (CSP). Known as “power towers,” CSP systems use mirrors to concentrate the sun’s rays, generating enough heat to boil water and operate a turbine. Water is not the only fluid utilized: in a parabolic mirror system, oil is heated to 400 °C and then converts water into steam via subsequent heat transfer.9
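
The arithmetic above can be made explicit with a rough Python estimate. The grid capacity, household draw, and insolation figures come from the text; the 15% module efficiency and the 20 m2 array size are assumptions added here for illustration.

```python
# Rough solar arithmetic; figures from the text, assumptions marked below.
GRID_CAPACITY_KW = 7_000_000
HOME_DRAW_KW = (2, 5)                  # average household power draw
INSOLATION_KWH_PER_M2_DAY = (3.6, 6)   # U.S. daily insolation range
MODULE_EFFICIENCY = 0.15               # assumption: typical PV module
ARRAY_AREA_M2 = 20                     # assumption: hypothetical rooftop array

homes_low = GRID_CAPACITY_KW / HOME_DRAW_KW[1]
homes_high = GRID_CAPACITY_KW / HOME_DRAW_KW[0]
print(f"homes supported: {homes_low:,.0f} to {homes_high:,.0f}")

for insolation in INSOLATION_KWH_PER_M2_DAY:
    daily_kwh = ARRAY_AREA_M2 * insolation * MODULE_EFFICIENCY
    print(f"{ARRAY_AREA_M2} m2 at {insolation} kWh/m2/day -> {daily_kwh:.1f} kWh/day")
```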

CSP plants are expanding globally; they are expected to reach a total capacity of 5,000 MW by 2015, an increase from the 2011 figure of 1,000 MW.7 As one might expect, these towers are set to be installed in sunlight-rich areas, with the majority of planned construction taking place in California; towers will also be installed in China, Israel, South Africa, and Spain.7 Typical CSP systems today can generate one MWh of electricity for every 4–12 square meters of land, which, according to the Royal Academy of Engineering’s Ingenia, means a plant “can continuously and indefinitely generate as much electricity as any conventional 50 MW coal- or gas-fired power station.”10 This footprint is relatively small, given that the average U.S. residential home occupies about 405 square meters. By comparison, the average 1,000 MW U.S. coal-fired power plant requires 1–4 square kilometers of land, translating into roughly 4–20 square meters per GWh generated; when the land needed for mining and waste disposal is included, this figure can reach an upper limit of 33 m2/GWh.11 Given this land efficiency, and accounting for installation, maintenance, and other fixed and variable costs, newly installed CSP plants can produce electricity for $0.10–0.12 per kWh. In Houston, Texas, rates for oil-derived electricity range from $0.08–0.15/kWh, which makes CSP a very competitive alternative.10

Opposing arguments based on the high cost of photon-collecting technology and the intermittency of sunlight are losing ground. In the U.S. alone, the PV market has grown considerably; California, for example, experienced 39% growth in residential PV system installations in the fourth quarter of 2012 (Fig. 1).12 The cost of these systems has declined by over 30% in the past few decades, and the U.S. Department of Energy’s (DOE) SunShot Vision Study aims to reduce costs by a further 75%. With this initiative, the DOE plans to “meet 14% of U.S. electricity needs via solar energy by 2030 and 27% by 2050.”13 This goal is believed to be attainable once solar electricity generation reaches a cost of $0.06/kWh, near the range of current fossil-fuel-based generation methods.13 The SunShot Vision Study has implemented a “Rooftop Solar Challenge” aimed at streamlining the logistics of installation so that the initiative can be applied in states across the U.S.

Applications of solar energy range from solar cookers based on the CSP model to portable solar chargers for personal electronics. These forms of energy can be scaled to any size, and their full integration into society is hindered only by our current dependence on oil. To make energy production sustainable, solar technology must be further developed and implemented: statistical models need to be considered, academic and industrial research needs to be funded, and a united effort to adopt these technologies needs to take place. Significant progress has already been made in homeowner PV adoption and in the DOE’s SunShot Vision, both of which testify to the viability of a sustainable energy economy. When the advantages of oil are weighed against those of solar energy, it is clear that solar energy has the potential to provide more efficient and environmentally friendly results. These alternatives still need technological advancement, proper siting, and governmental support; once those are in place, solar alternatives will be able to meet our energy needs. Although we as a society may find ourselves dependent on oil today, there is hope for a more sustainable, responsible, and environmentally friendly world.

References

  1. Tasker, M. L. et al. The Auk 1984, 101, 567–577.
  2. Wiese, F. K. Marine Pollution Bulletin 2001, 42, 1285–1290.
  3. Wiese, F. K.; Robertson, G. J. Journal of Wildlife Management 2004, 68, 627–638.
  4. Ocean Discharge Criteria Evaluation; General Permit GMG290000; US EPA: 2012; 3. http://www.epa.gov/region06/water/npdes/genpermit/gmg290000_2012_draft/ocean_discharge_criteria_evaluation.pdf (accessed Feb. 1, 2014).
  5. George Washington University GW Solar Institute. How much solar energy is available? http://solar.gwu.edu/FAQ/solar_potential.html (accessed Feb. 1, 2014).
  6. Chen, A. Lawrence Berkeley National Laboratory. The installed price of solar photovoltaic systems in the U.S. continues to decline at a rapid pace. http://newscenter.lbl.gov/news-releases/2012/11/27/the-installed-price-of-solar-photovoltaic-systems-in-the-u-s-continues-to-decline-at-a-rapid-pace/ (accessed Feb. 1, 2014).
  7. Hamrin, J.; Kern, E. Grid-Connected Renewable Energy: Solar Electric Technologies; United States Agency for International Development (USAID): Washington, D.C. http://www.energytoolbox.org/gcre/mod_5/gcre_solar.pdf (accessed Oct. 28, 2013).
  8. Solar Energy Industries Association. Solar Energy Facts: Q3 2013. http://www.seia.org/research-resources/solar-industry-data (accessed Feb. 1, 2014).
  9. Concentrating Solar Power (CSP) technologies. http://solareis.anl.gov/guide/solar/csp/ (accessed Feb. 3, 2014).
  10. Müller-Steinhagen, H.; Trieb, F. Concentrating solar power: a review of the technology. Royal Academy of Engineering, Ingenia 2004, 18, 43–50.
  11. Fthenakis, V.; Kim, H. C. Renew. Sust. Energ. Rev. 2009, 13, 1465–1474.
  12. Solar Energy Industries Association. U.S. Solar Market Insight 2013. http://www.seia.org/sites/default/files/4Y8cIWF6ps2013q1SMIES.pdf?key=58959256 (accessed Nov. 10, 2013).
  13. U.S. Department of Energy. SunShot Initiative. http://www1.eere.energy.gov/solar/sunshot/about.html (accessed Nov. 10, 2013).
  14. U.S. Department of Energy. Updated capital cost estimates for utility scale electricity generating plants. U.S. Energy Information Administration: Washington, DC, 2013. http://www.eia.gov/forecasts/capitalcost/pdf/updated_capcost.pdf (accessed Nov. 7, 2014).

Comment

Coral Reef and Algal Interactions

To look at a coral reef is to see a fantastic world of color and life, but what is perhaps most interesting lies within the chaos. Beneath the busy surface of a coral reef is a symbiotic relationship unknown to most: the world of algae, specifically the algae that live within the corals themselves. This partnership drives energy production within the reef and keeps alive the diverse species found there. The algae are the primary producers of this underwater world, using light energy to produce nutrients that feed not just themselves but the entire reef. Without these algae, coral reefs and much other marine life would cease to exist.

The majority of corals in the phylum Cnidaria host species of symbiotic algae from the genus Symbiodinium within their tissues.2 The algae capture light and perform photosynthesis, providing nutrients to their coral hosts in return for shelter and protection.2 These algae are not born into the coral; rather, coral polyps ingest them throughout their lives.2 This allows the particular Symbiodinium species within a coral, which are separated into groups called clades, to change over time.2 As it turns out, different clades may be responsible for many different phenomena in corals separated by geographic distance or depth.

An interesting phenomenon is that many corals can change out their dominant Symbiodinium symbionts after an event called bleaching, in which a coral or group of corals expels all of its symbiotic algae and loses its color, appearing 'bleached' afterwards.1 This appears to be a way for corals to adapt to changing ocean conditions; for instance, as the ocean acidifies, corals that can more easily change their clades may have an evolutionary advantage over those that cannot. At the same time, bleaching forces a coral to draw on its fat stores to sustain itself, a process that over the space of just a few weeks can lead to serious damage or death.2 Because corals cannot physically move to change their environment, changing symbiont clade may be one of the few ways they can adapt to changing environmental pressures. In addition to ocean acidification, water pollution from factories and tourism has caused massive amounts of light-scattering debris to accumulate in the coastal areas where reefs are found. If long-term debris clouds the water around a coral, the coral may expel its symbionts in favor of a clade with different photosynthetic capabilities. Recently, these human-driven environmental changes have led many coral species, sometimes in huge conglomerates, to bleach.6 This could be the corals' attempt to exchange their symbiotic algae in order to adapt to a warmer climate, as increases in temperature also stress corals metabolically.7 Changing the type of symbiont may allow a coral to regulate its uptake of oxygen or its use of light to create energy at a different rate than would otherwise be possible. However, corals also appear to expel their algae in response to external stressors that may have nothing to do with photosynthesis, including salinity changes, bacteria, and chemicals.2 This points to a much more complex relationship between corals and their algal symbionts. Although Symbiodinium species generally benefit their hosts, upkeep of the symbiosis may become energetically unfavorable in times of stress; bleaching could then allow the coral to conserve some of its energy reserves.

Light does not pass as freely through water as it does through air. Longer wavelengths are absorbed within the first several meters of depth, which is why water appears blue. Because of this, corals and their symbiotic algae cannot use the same mechanisms for light absorption that land plants use. Another consequence is that different spectra of light dominate at different depths: near the surface, red, yellow, and green wavelengths still reach the organisms living there, but in the deeper reaches of the ocean the majority of light is blue. This means that corals must match their choice of algal symbionts to the depth at which they live. In fact, the dominant species or clade within a particular coral may be the result of competitive exclusion, in which different algal species compete for the limited space within the coral's tissues until the best-suited species or clade effectively outcompetes all others.1 This again points to the ability of corals to change their dominant clade through bleaching in response to new external conditions like pollution. That said, several documented coral species show a high specificity of symbiont preference.1 If some coral species are always associated with a specific clade, that is, if they cannot easily switch clades, this could point to a relative advantage of more flexible coral species over specialists in rapidly changing environmental conditions like those facing our oceans today.
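The depth dependence can be made concrete with the Beer-Lambert law, I(d) = I0·e^(-kd), where the attenuation coefficient k is much larger for red light than for blue. The sketch below is a minimal illustration using assumed, round-number coefficients; the exact values vary with water clarity and are not taken from any cited source.

```python
import math

# Beer-Lambert attenuation: I(d) = I0 * exp(-k * d).
# The coefficients below are illustrative round numbers (per meter),
# not measured values for any particular body of water.
ATTENUATION = {"red": 0.35, "blue": 0.02}

def transmitted_fraction(color: str, depth_m: float) -> float:
    """Fraction of surface light of the given color remaining at depth_m."""
    return math.exp(-ATTENUATION[color] * depth_m)

for depth in (1, 10, 30):
    red = transmitted_fraction("red", depth)
    blue = transmitted_fraction("blue", depth)
    print(f"{depth:>2} m: red {red:6.1%}, blue {blue:6.1%}")
```

With these assumed coefficients, only a few percent of red light survives to 10 m while over 80% of blue light does, which is why deep-dwelling corals must host symbionts tuned to blue wavelengths.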

The dynamic relationship between corals and algae is of utmost importance for the continued balance of reef and ocean ecosystems. Corals not only put large amounts of energy into the ocean but also provide a home for reef fish and other organisms.2 As corals bleach and die, their skeletons are dissolved by the water and degraded by other organisms. Without fresh growth, this ends in the loss of habitat for countless fish and invertebrates. Coral is the glue that holds a reef ecosystem together, and symbiotic algae are similarly the glue that holds coral together. Without symbiotic algae, reef ecosystems would cease to exist, affecting millions of human lives and eliminating one of the most amazing and diverse examples of speciation on the planet. Keeping this relationship alive and well is therefore crucial to the continued wellbeing of our oceans. Currently, the reef ecosystem is under threat from ocean acidification, a direct consequence of increased carbon dioxide in the atmosphere from human consumption.7 Acidification makes it harder for coral to grow, which in turn slows reef growth as a whole.7 If this trend is not reversed, it could spell disaster for coral reefs around the world.

The reality of bleaching is obvious, regardless of one's stance on global warming. The mass bleaching events, if anything, offer concrete proof that our global environment is rapidly changing at the cost of the stability of one of the world's most important ecosystems. A large part of the destabilization comes from the breakdown of the relationship between corals and their algae: as corals expel their symbionts in favor of other clades, they stress themselves to the point of starvation and death. Mass death of coral leads to a domino effect throughout the rest of the reef, with every organism affected in some way. In a sense, coral reefs touch just about every ecosystem on the planet. They provide energy to reef organisms that are harvested by marine and land animals alike, including humans, and they shield coastal areas from large waves, protecting those areas from flooding. Coral reefs are an extremely important part of our world, and if we continue to pollute the oceans and allow them to acidify, we risk losing reefs altogether.

References

  1. Baker, A. C. Annu. Rev. Ecol. Evol. Syst. 2003, 34, 661-689. 
  2. Borneman, E. H. Aquarium Corals: Selection, Husbandry, and Natural History; Microcosm: Charlotte, VT, 2001.
  3. Favia favus. The IUCN Red List of Threatened Species. http://www.iucnredlist.org/details/133569/0 (accessed Apr. 1, 2015).
  4. Levy, O. et al. J. Exp. Biol. 2003, 206, 4041-4049. 
  5. The IUCN Red List of Threatened Species. www.iucnredlist.org (accessed Apr. 1, 2015). 
  6. Working Together Today for a Healthier Reef Tomorrow. Australian Government Great Barrier Reef Marine Park Authority; 2011. 
  7. Doney, S. C. et al. Annu. Rev. Mar. Sci. 2009, 1, 169-192. 

Comment

A Cup of Tea Against Cancer

Green tea, made from the leaves of Camellia sinensis, has come a long way from its humble origins in China to its current status as the second most popular beverage worldwide. According to Chinese mythology, Shennong, the legendary ruler of China in approximately 2737 BC, drank the first cup of green tea, brewed when a tea leaf fell into his boiled water.1 Despite his title as the divine healer, Shennong could not possibly have realized the numerous health benefits contained in that little cup. Green tea benefits health in various ways, including enhanced mental ability and alertness2 and increased reward learning through modulation of dopamine transmission.3 Tea also aids dieting through increased fat oxidation and helps prevent cardiovascular disease and diabetes.4 Recently, several studies have also credited green tea with the ability to prevent cancer development.1,4-6

When harvested, the leaves of Camellia sinensis contain a high concentration of flavonoids, members of the polyphenol group that have demonstrated anti-inflammatory, anti-allergic, and anti-mutagenic effects. In green tea, a group called catechins constitutes a large percentage of the flavonoids. This specific type of flavonoid, especially epigallocatechin gallate (EGCG), prevents the formation and growth of tumors.4 Normal cells take complex and varying pathways to develop into malignant cells, but there are three crucial stages on the path to malignancy. In the initiation stage, undesirable mutations form in the chromosomes due to exposure to carcinogenic substances or radiation. In the second stage, promotion, the mutated genes are transcribed and translated, and their products alter the cytoplasm and cell membrane. The last stage is progression, during which cancer cells proliferate; by this point, accumulated chromosomal mutations have produced many genetic alterations that promote uncontrollable growth. While the numerous stages of cancer progression may complicate the search for a single cure, they provide an equal number of opportunities to regulate carcinogenesis.5

The polyphenols found in tea can suppress cancer at various stages of its progression. First, tea can prevent initiation by inactivating or eliminating the mutagens that can damage cellular DNA. Potential mutagens are surprisingly common in our environment.5 Every day, we are exposed to processes that introduce dangerous reactive oxygen species (ROS), such as hydrogen peroxide and oxygen radicals, that can react with DNA and induce detrimental mutations.1 Radiation (UV and X-rays) and tobacco are well-documented mutagens as well. The flavonoids contained in tea are natural scavengers that destroy these free oxygen radicals.1 Catechins are especially effective at reducing free radicals, binding both to ROS and to the ferric ions required to create ROS.6 Green tea polyphenols can also competitively inhibit intermediates of heterocyclic aromatic amines, a more recently identified class of carcinogens, reducing the accumulation of DNA-damaging material.1 Finally, the chemical structure of tea polyphenols gives them a strong affinity for carcinogens, enabling them to bind to and neutralize these harmful substances.6 By blocking common cancer-initiating factors, tea lowers the chance of the genetic mutations that can result in a tumor.

Substances in green tea can also prevent cancer by blocking angiogenesis, essentially starving tumor cells.1 Angiogenesis is the formation of a network of blood vessels through cancerous growths. In smaller tumors, cancer cells can rely on simple diffusion to obtain the necessary oxygen and nutrients. As the number of accumulated cells increases, however, tumor cells signal the surrounding host tissue to produce the proteins necessary for blood vessel generation; these vessels supply oxygen and nutrients in quantities unavailable through passive diffusion. Catechins in green tea stop angiogenesis by interfering with these tumor cell signals. EGCG has been shown to inhibit the epidermal growth factor receptor, and thus production of vascular endothelial growth factor (VEGF), which initiates angiogenic blood vessel formation.1 Further studies have shown direct inhibition of VEGF transcription and VEGF promoter activity in breast cancer cells by green tea extract (GTE) and EGCG.4-6 GTE also suppresses production of protein kinase C, which likewise regulates VEGF. By inhibiting the signaling pathway that leads to blood vessel formation, green tea is able to slow the progression of angiogenesis.

Another role of tea is preventing metastasis, the most common cause of cancer-related mortality.1 Metastasis represents the full development of a tumor, in which the boundary enclosing the cancer is broken and tumor cells migrate freely to other parts of the body. Green tea's flavonoids prevent the degradation of basement membranes and cell-surface proteins that promote anchorage.1 Once the membranes and proteins that anchor cells to specific locations disappear, tumor cells are unfettered. EGCG has been shown to block metastasis by inhibiting membrane type 1 matrix metalloproteinase (MMP), which in turn restrains MMP-2, an enzyme crucial to degradation of the extracellular matrix. In experiments, a mixture of EGCG and ascorbic acid suppressed metastasis by a significant 65.9%.1

Finally, tea can counter the unregulated proliferation of cancer cells that drives tumor formation and metastasis. Apoptosis, the self-destruction of a cell, is a common and natural biological process; when a cell loses the ability to undergo apoptosis, it becomes potentially cancerous. Increasing apoptosis in cancer cells should restore balance and eliminate unneeded and harmful cells in the body. The difficulty lies in inducing apoptosis specifically in cancer cells without harming normal cells, but research has shown tea's potential for the selective promotion of apoptosis. In an experiment involving human papillomavirus 16-associated cervical cancer cells, EGCG inhibited cell growth by promoting apoptosis and cell cycle arrest.1 In head and neck carcinoma cells, EGCG also increased the percentage of cells in G1, the initial phase of the cell cycle, and induced apoptosis.1 Similar results were found when the water-soluble fraction extracted from green tea was added to JB6 mouse epidermal cells, which both inhibited carcinogenesis and induced apoptosis.5

The evidence presented here illustrates the cancer-preventive and inhibitory effects of green tea. However, we must consider that most of these data were collected through in vitro and in vivo laboratory experiments; clinical trials in humans have yet to confirm the preventive effects of tea polyphenols against cancer.5 Current research does not present sufficient evidence to determine the true effects of tea. On the other hand, a negative correlation has been observed in Japanese populations between green tea consumption and both cancer mortality and general mortality.5 In general, greater daily green tea consumption was associated with a reduced chance of cancer. These results suggest that tea, even with its many health benefits, is not a cure-all; in conjunction with regular exercise and vegetables with each meal, however, it may help prevent many diseases. By drinking tea, one can partake in a tradition passed down for centuries while keeping the body healthy.

References

  1. Jain, N. K. et al. Protective Effects of Tea on Human Health; CAB International: Cambridge, 2006.
  2. Borgwardt, S. et al. Eur. J. Clin. Nutr. 2012, 66, 1187-1192.
  3. Zhang, Q. et al. Nutr. J. 2013, 12, 84.
  4. Dulloo, A. G. et al. Am. J. Clin. Nutr. 1999, 70, 1040-1045.
  5. Kuroda, Y. et al. Health Effects of Tea and Its Catechins; Kluwer Academic/Plenum Publishers: New York, 2004.
  6. Yamamoto, T. et al. Chemistry and Applications of Green Tea; CRC Press LLC: New York, 1997.

Comment

Don't Panic: The Science of Ebola Transmission

In 2014, the Ebola virus gained notoriety for the lives it took in such a short amount of time. While the panic surrounding the virus has since died down, Ebola continues to wreak havoc in West Africa. To properly contextualize the implications of this epidemic, it is important to understand the basics of how Ebola works—how the virus itself functions, how it spreads, and how it affects the human body.

The Ebola virus is classified in the order Mononegavirales, which also includes the mumps, measles, and rabies viruses.1 There are currently five known species of Ebola virus: Zaire, Bundibugyo, Sudan, Reston, and Tai Forest. The most lethal of these, Zaire, is responsible for the 2014 outbreak. Ebola carries a negative-sense RNA genome, which must be transcribed into messenger RNA (mRNA) inside the host cell before the viral genes can be expressed.2 The mRNA is translated into seven viral proteins that enable replication of the Ebola virus. Newly formed virus buds out of the cell, and once the cell dies, it lyses, or bursts, releasing viral particles that go on to infect other cells within the body.

Ebola, like the swine flu, was likely first introduced into the human population through close contact with the bodily fluids of infected animals such as chimpanzees, gorillas, monkeys, porcupines, or fruit bats; the fruit bats are considered the virus's natural hosts.1 Although the virus is highly transmissible through bodily fluids, it is not airborne; instead, it spreads only through direct contact of broken skin or mucous membranes with the bodily fluids of infected organisms, or through direct contact with materials contaminated by these fluids.1 People are not infectious until they develop symptoms, but they remain infectious as long as their bodily fluids contain the virus; the Ebola virus can persist in semen, for example, for up to three months after recovery. The incubation period, or the time from infection to the onset of symptoms, is 2 to 21 days.1 This delayed onset means an infected individual may be unaware that he or she is carrying the disease.
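The timing of that incubation window is worth making concrete. The toy sketch below (our own illustration, not a model from the epidemiological literature) follows a single chain of transmission in which each case can infect the next person only after becoming symptomatic; the delay is drawn uniformly from the reported 2-21 day range, which is an assumption, since observed incubation times cluster nearer a week.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def incubation_days() -> int:
    # Uniform draw across the reported 2-21 day window (an assumption;
    # real incubation times are skewed toward roughly a week).
    return random.randint(2, 21)

def transmission_chain(cases: int) -> list[int]:
    """Symptom-onset day for each successive case in one chain, assuming
    a person transmits only after his or her own symptoms appear."""
    onsets, day = [], 0
    for _ in range(cases):
        day += incubation_days()  # silent incubation before infectiousness
        onsets.append(day)
    return onsets

print(transmission_chain(5))  # five successive cases, often spanning 1-2 months
```

Even this crude model shows why contact tracing must look weeks into the past: a handful of successive cases can span well over a month, with each new infection invisible until its symptoms finally appear.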

During the 2014 outbreak, poor infrastructure, unsafe burial practices, and close contact between healthcare workers and patients facilitated the spread of the virus.1 In West Africa, improperly disposed bodily fluids and contaminated materials such as clothing led to the infection of healthy individuals. Burial ceremonies themselves contributed to the transmission of Ebola; mourners who came into direct contact with the bodies of deceased individuals contracted the disease.1 In the U.S., the transmission of Ebola from patient Thomas Eric Duncan to nurses Nina Pham and Amber Vinson was likely caused by a breach in safety protocol; although healthcare workers are required to wear full-body suits, any error in donning or removing a suit could have exposed the nurses to Duncan's bodily fluids.4

Ebola’s initial symptoms include the sudden onset of fever, fatigue, muscle pain, headache, and sore throat. These initial symptoms can often be confused with those of the flu, but they are followed by more severe ones including vomiting, diarrhea, rashes, and internal and external bleeding. While this disease has a high mortality rate of 50%,1 its transmission is more indirect than that of more common viruses such as the flu. Because of the routes of Ebola transmission are so limited, especially in areas with developed infrastructure, the CDC and other health experts predict that an Ebola outbreak in the U.S. would be highly improbable.5 The events of 2014 serve as a reminder of the importance of adequate infrastructure and safety protocols in order to limit the spread of future viral epidemics.

References

  1. World Health Organization. http://www.who.int/mediacentre/factsheets/fs103/en/ (accessed Nov. 1, 2014).
  2. Feldmann, H. K. Arch. Virol. Suppl. 1993, 7, 81–100.
  3. Mucous membrane. http://www.britannica.com/EBchecked/topic/395887/mucous-membrane (accessed Nov. 1, 2014).
  4. Alexander, H. Second Texas nurse contracts Ebola after treating Thomas Eric Duncan. http://www.telegraph.co.uk/news/worldnews/ebola/11164105/Second-Texas-nurse-contracts-Ebola-after-treating-Thomas-Eric-Duncan.html (accessed Jan. 17, 2015).
  5. Ebola transmission. http://www.cdc.gov/vhf/ebola/transmission/index.html?s_cid=cs_3923 (accessed Jan. 17, 2015).

Comment

The Reality of Virtual Reality

People experience the world around them, what we call “reality,” by receiving sensory input and processing those messages. Sources of sensory stimuli vary greatly, from sound waves picked up by your ears to the feeling of wearing socks. Because perception is founded on stimulation, manipulating these inputs can effectively cause people to experience false sensations. A virtual world indistinguishable from reality, one that perfectly stimulates all the senses, is the end goal of researchers developing virtual reality interfaces.

Virtual reality (VR) refers to computer-generated simulation of a realistic or imaginary world that uses visual, tactile, and auditory cues to manipulate the user's sensations and perceptions. While VR interfaces usually center on head-mounted displays that provide visual input, more sophisticated devices for simulating touch, taste, smell, and sound are being researched. An important goal of VR research is the ability to provide a highly realistic environment in which individuals can interact with their surroundings and receive sensory feedback. Due to the current limits of computing power and of research in the field, however, a program inducing complete immersion in a virtual world is not yet possible.

Although VR seems like a technology borrowed from science fiction novels, it has actually existed in some fashion for over eight decades. In 1929, Edward Link invented the first flight simulator, which used pneumatics to mimic aerial maneuvers and provided haptic feedback to the user. The Link Trainer, which initially gained popularity as an amusement park ride, later became a standard training module for U.S. pilots during World War II.1 The Link Trainer is a very primitive example of VR, as it leaves much to the imagination; pilots would be hard-pressed to believe they were actually in the cockpit of an aircraft during a dogfight rather than in a cramped box rocking back and forth. Since its creation, however, the Link Trainer has promoted the idea that simulators can recreate situations that would otherwise be difficult to experience.

The advent of computers initiated an explosion of advanced VR interfaces, such as head-mounted displays (HMDs), that allow the user to interact with the virtual world and receive enhanced sensory information. In 1991, the Virtuality Group developed the 1000CS gaming system, a pioneer in the field of head tracking that enabled players to turn their heads to view their surroundings within a game.2 The 1000CS was an important first step for commercially available VR technology. Modern HMDs are much less bulky than their predecessors, less prone to causing neck cramps thanks to their lighter weight, and more responsive to rotation. Recent advances in virtual reality focus on providing as realistic an environment as possible: higher-resolution screens and faster computers are becoming cheaper to produce and more widely available, spurring the growth of VR.
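At its core, head tracking is a simple mapping: the headset reports the head's orientation, and the renderer re-points the virtual camera to match on every frame. The sketch below is our own minimal illustration of that idea, with a hypothetical function name; it is not code from the 1000CS or any real HMD driver.

```python
import math

def look_direction(yaw_deg: float, pitch_deg: float) -> tuple[float, float, float]:
    """Unit vector the user is facing, given the head yaw (left/right turn)
    and pitch (up/down tilt) reported by the headset, in degrees."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (
        math.cos(pitch) * math.sin(yaw),  # x: sideways component
        math.sin(pitch),                  # y: vertical component
        math.cos(pitch) * math.cos(yaw),  # z: forward component
    )

# Turning the head 90 degrees to the right swings the camera to face +x.
print(look_direction(90.0, 0.0))  # approximately (1.0, 0.0, 0.0)
```

Production systems track full 3-D orientation with quaternions and fuse data from several sensors at high update rates, since any noticeable lag between head motion and the rendered view quickly breaks immersion.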

While most interfaces prioritize visual and audio simulation, technologies are being developed to stimulate touch, taste, and smell as well, promising a highly lifelike virtual world. Researchers at the Universities of York and Warwick have presented a prototype of the Virtual Cocoon, a helmet that uses tubes, fans, and a high-definition (HD) screen to fully immerse the user in what the team calls “Real Virtuality.”3 Another recent advance, from the National University of Singapore, used “non-invasive electrical and thermal stimulation … [to] recreate the taste of virtual food and drinks.”4 Such technologies could lead to a perfect virtual environment that stimulates all of the senses, introducing exciting possibilities such as virtually sampling a dish before ordering it at a restaurant or feeling snow on a hot summer day.

The last decade has seen both great advances in VR technology and its expansion into a wide range of practical fields, such as military training.5 Although the military has used simulators since the simple Link Trainer of 1929, new methods of virtual simulation have greatly increased the diversity and immersion of the training available. Soldiers can be placed in varied virtual scenarios to learn the tactics and skills necessary in real combat, from developing assault plans on military targets to managing disaster and field casualties and adapting to new environments.5,6 While VR cannot replace field experience, it serves as a useful tool to augment soldiers' training.

There are many projected uses for VR in medicine as well. An experiment led by Dr. Patrice Crochet of La Conception Hospital in Marseille tested whether surgeons using a VR surgical simulator could improve the quality of their surgical skills. The findings indicated that VR training improved surgeons' dexterity, supporting the claim that VR could serve as a medical training tool.7 Given the high number of deaths attributed to medical error each year, using VR to give doctors practical experience could prove genuinely valuable.

Perhaps the most anticipated application of VR is its extension into video games and other multimedia. The Oculus Rift, an HMD currently in development, is a highly anticipated gaming system that combines HD graphics with high-fidelity head tracking to create a unique experience.8,9 Combined with omnidirectional treadmills and directional audio, players may soon be able to engage with highly realistic environments. Virtual controllers such as the Leap Motion, which uses sensors to process hand and finger motions as input data, are also being incorporated for further immersion and interactivity. Nor are these technologies limited to video games; virtual tourism or otherwise impossible real-world experiences, such as flying, could be simulated.

With ever-increasing processor speeds and extremely high-resolution 8K displays in development, the future of VR holds great promise. VR is already proving invaluable in military and medical training. Its major remaining limitation is the inability to create perfect, interference-free environments with today's hardware and software; as the technology advances, these obstacles will likely be overcome. Perhaps one day it will be impossible to distinguish between simulated reality and reality itself.

References

  1. Van Embden, E. Rare flight trainer can be found at Millville Army Airfield Museum / Link Trainer one of 5 working models in world. The Press of Atlantic City, Feb. 23, 2008, p. C1.
  2. Davies, H. Dr. Waldern's dream machines: arcade thrills for spotty youths today, but revolutionary tools for surgeons and architects tomorrow, says the pioneer of virtual reality. http://www.independent.co.uk/life-style/the-hunter-davies-interview-dr-walderns-dream-machines-arcade-thrills-for-spotty-youths-today-but-revolutionary-tools-for-surgeons-and-architects-tomorrow-says-the-pioneer-of-virtual-reality-1506176.html (accessed Jan. 17, 2015).
  3. First virtual reality technology to let you see, hear, smell, taste, and touch. http://www.sciencedaily.com/releases/2009/03/090304091227.htm (accessed Jan. 17, 2015).
  4. National University of Singapore. Simulator recreates virtual taste online. http://www.sciencedaily.com/releases/2014/01/140102114807.htm (accessed Oct. 31, 2014).
  5. Bymer, L. Virtual reality used to train soldiers in new training simulator. http://www.army.mil/article/84453/ (accessed Jan. 17, 2015).
  6. Virtual reality army training. http://www.vrs.org.uk/virtual-reality-military/army-training.html# (accessed Oct. 31, 2014).
  7. Crochet, P. et al. Ann. Surg. 2011, 253, 1216-1222.
  8. Corriea, A. Oculus Rift HD drops you into a world so real it hurts. http://www.polygon.com/2013/6/14/4429086/oculus-rift-hd-e3 (accessed Nov. 9, 2014).
  9. Oculus Rift. https://www.oculus.com/ (accessed Jan. 17, 2015).

Comment

Honey, Where's My Supersuit? New Underwater Technology Emerges

A plot line from a comic book unfolds as scientists and artists alike take inspiration from superheroes to develop technology that could allow humans to further explore ocean environments. Despite mankind's attempts to explore the world’s oceans since the 18th century, 95% of this vast, watery expanse remains a mystery.1 Heavy oxygen tanks and burdensome helmets are still needed to get to moderate depths, while the deep ocean lies mostly uncharted. Without super strength or super flexibility, divers turn to the next best superpower: the ability to breathe underwater.

Deep-sea exploration necessitates equipment that functions under extreme conditions, a challenge that seemingly only a technological genius like Iron Man could conquer; after all, the self-made billionaire created his own armor to escape captivity. A suit similar to Iron Man's was needed for an expedition to a shipwreck off the coast of the Greek island of Antikythera in the fall of 2014.2 To reach the roughly 55-meter-deep wreck, the Canadian firm Nuytco Research developed the Exosuit, a 550-pound atmospheric diving system (ADS) that allows explorers to dive deeper and longer.3 The full metal suit, combined with semi-closed rebreathing technology, pragmatically favors function over form. A semi-closed rebreather supplies oxygen at a constant flow rate; any excess oxygen that is not inhaled is released back into the water as small bubbles.4 Paired with the suit is a remotely operated vehicle (ROV) that takes high-quality photos in any lighting, a crucial capability in deep water.3
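Simple arithmetic shows why rebreathing stretches a gas supply. The comparison below uses figures assumed purely for illustration; they are not Exosuit specifications or data from any cited source.

```python
# Rough comparison of how long a fixed gas supply lasts on open-circuit
# scuba versus a semi-closed rebreather. All numbers are assumed for
# illustration; they are not specifications of the Exosuit or any product.
gas_supply_l = 2400           # total breathable gas carried, in liters

open_circuit_l_per_min = 25   # gas vented with every exhalation at depth
semi_closed_l_per_min = 6     # constant metered flow; only excess is vented

print(f"Open circuit: {gas_supply_l / open_circuit_l_per_min:.0f} minutes")
print(f"Semi-closed:  {gas_supply_l / semi_closed_l_per_min:.0f} minutes")
```

Because most exhaled gas is recirculated rather than dumped overboard, the same supply lasts several times longer, which is much of what "dive deeper and longer" means in practice.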

At first glance, the suit seems incredibly cumbersome, with its daunting rigidity and thick, pipe-like legs. However, many smaller components, such as foot pads and rotary joints, make the suit surprisingly flexible, allowing the user to work in previously inaccessible deep water with robotic efficiency.4 As the underwater answer to Iron Man's armor, the Exosuit has an exterior that can withstand extremely high pressures. That sturdy exterior, paired with the ROV camera, will allow scientists to identify new species of marine life, especially those made visible by their bioluminescence. Exploring the dangers of the unknown depths with a one-of-a-kind suit and camera seems to come straight from the pages of one of Stan Lee's comics. Tony Stark would definitely approve.

Although Iron Man is revered for his cleverness and intelligence, the best superhero inspiration for diving technology is Aquaman, with his ability to breathe freely underwater. Many innovators, including South Korean designer Jeabyun Yeon, have tried to mimic the ease with which Aquaman breathes below the surface. In January 2014, Yeon unveiled a concept device that would let divers breathe underwater with only a single piece of equipment attached to the mouth, leaving behind the typical mask, alternate air source, air gauge, and other gear necessary for a normal dive.5 Called the Triton, this gill-like mouthpiece would extract oxygen from water and compress it into small storage tanks located on either side of the mouthpiece.5 While swimming, users would simply bite into the mouthpiece for oxygen to begin flowing. Although aesthetically pleasing, the design received so much negative attention from scientists and scuba divers that it failed to gain funding from investors or traction in the media. For the design to be feasible, a pump would have to move 24 gallons of water through the filtering system every minute; no such pump is available at present.5 The Triton also does not account for oxygen toxicity, a condition in which breathing oxygen at high pressures can cause convulsions and potentially be fatal.6 Yeon originally intended the design as a revolutionary breakthrough for the diving community, but the Triton is now displayed on his website as a “product innovation studio project.”7 Although his project had little impact on the diving community, other scientists continue to find ways to bring Aquaman to life.

Researchers at the University of Southern Denmark are doing just that with the “Aquaman” crystal, marking a shift from wearable technology to materials science. In October 2014, the university announced the crystal, a cobalt-based crystalline material that can absorb, store, and release oxygen without deteriorating or changing form, through processes known as chemisorption and desorption. These processes involve chemical transformations that bind oxygen in a much denser form, allowing it to be stored compactly without causing toxicity to the user.8 As a result, this inorganic material rivals diving equipment in both size and efficiency, potentially giving divers almost superhuman, Aquaman-like abilities underwater. A striking example of the crystal's capacity is that just 10 liters of the material can absorb all the oxygen in an average-sized room.8 Although scuba tanks vary with the type of dive, the fact that the “Aquaman” crystal can hold three times as much oxygen as a conventional pressurized tank of the same size should markedly decrease the weight of the equipment divers must carry. Professor Christine McKenzie, a scientist on the team, claims that only a few grains of the crystal are needed to sustain a full trip underwater.8 The team is currently working out how to access the stored oxygen, possibly by inhaling oxygen directly from the crystal or by using a specialized tank.8 By eliminating or shrinking the tank, the “Aquaman” crystal would allow divers to explore hard-to-reach areas and come even closer to the organisms they study.
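The room claim is easy to sanity-check with rough numbers. The sketch below assumes a 50 m³ room and air that is 21% oxygen by volume; neither figure comes from the paper.

```python
# Back-of-the-envelope check of the "room's worth of oxygen in 10 liters" claim.
# Every number here is an assumption for illustration, not data from the paper.
room_volume_m3 = 4 * 5 * 2.5                  # an assumed average room: 50 m^3
o2_fraction = 0.21                            # oxygen's share of air by volume
o2_volume_m3 = room_volume_m3 * o2_fraction   # about 10.5 m^3 of oxygen gas

crystal_volume_m3 = 10 / 1000                 # 10 liters of crystal, in m^3
concentration_factor = o2_volume_m3 / crystal_volume_m3

print(f"Oxygen in the room: {o2_volume_m3:.1f} m^3")
print(f"Implied concentration: about {concentration_factor:.0f}x ambient density")
```

Packing roughly ten cubic meters of oxygen gas into ten liters of solid implies a concentration factor on the order of a thousand, the kind of density that lets a crystalline store compete with a pressurized tank.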

The parallels between scuba equipment and superhumans like Iron Man and Aquaman show how far underwater diving technology has progressed. Even far-fetched concepts like the Triton give a glimpse of what the future may look like. The ability to breathe underwater opens the door to new discoveries, allowing divers either to move more freely at moderate depths or to get a more personal look at the deep ocean. Scuba equipment continues to shape how we understand one of the earth's most mysterious ecosystems, especially in the face of climate change. What was once written off as superhuman fantasy might just develop into our reality.

References

  1. National Oceanic and Atmospheric Administration. http://www.noaa.gov/ocean.html (accessed Oct. 12, 2014).
  2. Wallace, R. ‘Iron Man’ suit allows divers to reveal more of Antikythera shipwreck. http://www.sciencetimes.com/articles/677/20141012/iron-man-suit-allows-divers-to-reveal-more-of-antikythera-shipwreck.htm (accessed Oct. 12, 2014).
  3. The Exosuit. http://www.amnh.org/exhibitions/past-exhibitions/the-exosuit/the-exosuit (accessed Oct. 12, 2014).
  4. Scuba diving. http://www.scubadiving.com/training/basic-skills/are-you-ready-rebreathers (accessed Oct. 28, 2014).
  5. ‘Triton’ oxygen mask claims to draw oxygen from water while you swim. http://www.huffingtonpost.co.uk/2014/01/17/triton-oxygen-mask_n_4615558.html (accessed Oct. 12, 2014).
  6. Patel, D. N. et al. JIACM 2003, 4, 234-237.
  7. Yeon. Yanko Design. http://www.yankodesign.com/2014/01/03/scuba-breath/ (accessed Oct. 23, 2014).
  8. Sundberg, J. et al. Chem. Sci. 2014, 5, 4017-4025.

Comment