Mastering Mega Minds: Our Quest for Cognitive Development

Humans are continuously pursuing perfection. This drive is what motivates scientific researchers and comic book authors alike to dream about the invention of bionic men. It seems inevitable that this quest has expanded to target humankind’s most prized possession: our brain. Cognitive enhancements are technologies designed to elevate human mental capacities. As scientists and entrepreneurs research and develop cognitive enhancements, however, society faces an ethical dilemma. Policy must strike a balance, maximizing the benefits of augmented mental processing while minimizing the potential risks.

Cognitive enhancements are becoming increasingly prevalent and exist in numerous forms, from genetic engineering to brain stimulation devices to cognition-enhancing drugs. The vast differences between these categories make it difficult to generalize a single proposition that can effectively regulate enhancements as a whole. Overall, out of these types, prescription pills and stimulation devices currently have the largest potential for widespread usage.

Prescription pills exemplify the many benefits and drawbacks of cognitive enhancements. ADHD medications like Ritalin and Adderall, which stimulate dopamine and norepinephrine activity in the brain, may be the most ubiquitous example of available cognitive enhancements. Abuse of these drugs is especially common among college students, who take the pills to stay awake longer and to sharpen their attention while studying. In a collection of studies, 4.1 to 10.8% of American college students reported recreationally using a prescription stimulant in the past year, while the College Life Study determined that up to a quarter of undergraduates used stimulants at least once during college.1,2 Students may not know, or may disregard, the fact that prolonged abuse can cause serious health problems, including cardiopulmonary issues and addiction. When these medications are taken incorrectly, especially in conjunction with alcohol, users risk seizures and death.3

In addition to stimulants, a variety of other prescription drugs have been shown to improve cognitive function. Amphetamines act on neurotransmitters in the brain to increase wakefulness and adjust sleep patterns. They achieve this by preventing dopamine reuptake and disrupting normal vesicular packaging, which also increases the dopamine concentration in the synaptic cleft through reverse transport from the cytosol.4 These drugs are currently used by the armed forces to mitigate pilots’ fatigue in high-intensity situations. While the drugs may help regulate pilots’ energy levels, pilots consequently face heavy pressure to take amphetamines despite the risk of addiction and the lack of approval from the U.S. Food and Drug Administration.5

Besides prescription medications, a variety of technological devices that affect cognition exist or are in development. For instance, transcranial direct current stimulation (tDCS) and transcranial magnetic stimulation (TMS) devices are currently marketed as cognitive enhancers through websites and non-medical clinics, even though they have not yet received comprehensive clinical evaluation for this purpose.6 tDCS works by placing electrodes on the scalp to target specific brain areas; the machine sends a small direct current through the electrodes to stimulate or inhibit neuronal activity. Similarly, TMS uses magnetic fields to alter neural activity. These methods have been shown to improve cognitive abilities including working memory, attention, language, and decision-making. Though these improvements are generally short-term, one University of Oxford study used tDCS to produce long-term improvements in mathematical abilities. Researchers taught subjects a new numerical system and then tested their ability to process the numbers and map them into space. Subjects who received tDCS stimulation of the posterior parietal cortex displayed increased performance and consistency up to six to seven months after treatment. This evidence suggests that tDCS could be used both to develop mathematical abilities and to treat degenerative neurological disorders such as Alzheimer’s.7
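
To give a sense of the scale of stimulation involved, the short sketch below estimates the average current density under a tDCS electrode. The 1-2 mA currents and 25-35 cm2 sponge electrodes are representative values from the tDCS literature rather than parameters of the Oxford study, so treat it as a back-of-the-envelope illustration.

```python
# Back-of-the-envelope estimate of tDCS current density at the scalp.
# Assumed, representative values (not from the cited study): 1-2 mA
# stimulation current and 25-35 cm^2 sponge electrodes.

def current_density_mA_per_cm2(current_mA: float, electrode_area_cm2: float) -> float:
    """Average current density under the electrode, in mA/cm^2."""
    return current_mA / electrode_area_cm2

for current in (1.0, 2.0):                 # mA
    for area in (25.0, 35.0):              # cm^2
        j = current_density_mA_per_cm2(current, area)
        print(f"{current:.1f} mA over {area:.0f} cm^2 -> {j:.3f} mA/cm^2")
```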

Regulation of cognitive enhancements is a multifaceted issue for which the risks and benefits of widespread usage must be examined closely. According to one perspective, enhancements can maximize human efficiency. If an enhancement accelerates technological development and enables individuals to solve problems that affect society, it could improve the quality of life for users and non-users alike. This is why bans on anabolic steroids are not directly comparable to bans on cognitive enhancements. While both kinds of drug help humans accomplish tasks beyond their natural capabilities, cognitive enhancements could accelerate technological and societal advancement, which would benefit society more than one individual’s enhanced physical prowess.

It should be noted, however, that such enhancements will not instantaneously bestow Einsteinian intellect on the user. In a recent meta-analysis of 48 academic studies with 1,409 participants, prescription stimulants were found to improve delayed working memory but had only modest effects on inhibitory control and short-term episodic memory. The report also noted that in some situations, other methods, including getting adequate sleep and using cognitive techniques like mnemonics, are far more beneficial than taking drugs such as methylphenidate or amphetamines. Biomedical enhancements, however, have broad effects that apply to many situations, while traditional cognitive techniques that do not directly change the biology behind neural processes are task-specific and only rarely produce significant improvements.8
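
To make the meta-analytic reasoning concrete, here is a minimal sketch of how study-level effect sizes are pooled with fixed-effect, inverse-variance weighting; the effect sizes and variances are hypothetical placeholders, not numbers from the cited report.

```python
# Minimal fixed-effect meta-analysis: pool study effect sizes (e.g., Hedges' g
# for working-memory improvement) by inverse-variance weighting.
# The effect sizes and variances below are hypothetical illustrations only.

studies = [
    {"g": 0.30, "var": 0.04},
    {"g": 0.15, "var": 0.02},
    {"g": 0.05, "var": 0.03},
]

weights = [1.0 / s["var"] for s in studies]          # inverse-variance weights
pooled = sum(w * s["g"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"pooled effect g = {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI half-width)")
```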

However, if we allow enhancement use to grow unchecked, one extreme possibility is a dystopian society led only by those wealthy enough to afford cognitive enhancements. Speculation about other negative societal effects is endless; for example, widespread use of cognitive enhancements could create a cutthroat work environment in which workers feel constant pressure to use prescription pills or cranial stimulation, despite the side effects and cost, simply to compete in the job market.

The possibility of addiction to cognitive enhancements and issues of social stratification based on access or cost should not be disregarded. However, many solutions to these issues have been proposed. Governmental regulation proposed by neuroethics researchers includes ensuring that cognitive enhancements are not freely available and are given only to those who demonstrate knowledge of the risks and of responsible use. Additionally, a national database, similar to the system currently used to regulate addictive pain relievers, would help control the amount of medication prescribed to each individual. Such a database could be an integrated system that lets doctors view patients’ other prescriptions, so that people who attempt to deceive pharmacies in order to obtain medication for personal abuse or illegal resale cannot easily game the system. Finally, to address the issue of potential social inequality, researchers at Oxford University’s Future of Humanity Institute have proposed a system in which the government supports broad development, competition, public understanding, a price ceiling, and even subsidized access for disadvantaged groups, leading to more equal access to cognitive enhancements.9
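
As a rough illustration of what such an integrated database might check, the sketch below flags a patient who already holds an active prescription for the same stimulant. The data model, names, and rules are hypothetical and are not a description of any existing prescription-monitoring system.

```python
# Illustrative sketch of a prescription-monitoring check: before dispensing,
# look up a patient's active stimulant prescriptions across pharmacies and
# flag overlapping fills. The data model and rules are hypothetical.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Prescription:
    patient_id: str
    drug: str
    start: date
    days_supply: int

    def active_on(self, day: date) -> bool:
        """True if `day` falls within this prescription's supply window."""
        return self.start <= day <= self.start + timedelta(days=self.days_supply)

def flag_overlap(records, patient_id: str, drug: str, day: date) -> bool:
    """True if the patient already holds an active prescription for the same drug."""
    return any(r.patient_id == patient_id and r.drug == drug and r.active_on(day)
               for r in records)

# Hypothetical example: a fill on March 1 with a 30-day supply is still active
# when the same patient tries to fill the same drug elsewhere on March 15.
records = [Prescription("p001", "methylphenidate", date(2016, 3, 1), 30)]
print(flag_overlap(records, "p001", "methylphenidate", date(2016, 3, 15)))  # True -> flag for review
```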

Advancements have made it possible to alter our minds using medical technology. Society now requires a balanced regulatory environment, one that promotes safe use while preventing abuse. To be effective, the regulation of cognitive enhancement technologies should occur at several levels, from market approval to individual use. When crafting these laws, research should not be restricted, because doing so could inhibit the discovery of cures for cognitive disorders. Instead, the neuroethics community should focus on safety and public-usage regulations aimed at preventing abuse and social stratification. Cognitive enhancements have the potential to change the ways we learn, work, and live. Without specific regulations to address the risks and implications of this growing technology, however, the results could be devastating.

References

  1. McCabe, S.E. et al. J. Psychoactive Drugs 2006, 38, 43-56.
  2. Arria, A.M. et al. Subst. Abus. 2008, 29(4), 19-38.
  3. Morton, W.A.; Stockton, G. J. Clin. Psychiatry 2000, 2(5), 159-164.
  4. España, R.; Scammell, T. SLEEP 2011, 34(7), 845-858.
  5. Rasmussen, N. Am. J. Public Health 2008, 98(6), 974-985.
  6. Maslen, H. et al. J. of Law and Biosci. 2014, 1, 68-93.
  7. Kadosh, R.C. et al. Curr. Biol. 2010, 20, 2016-2020.
  8. Ilieva, I.P. et al. J. Cogn. Neurosci. 2015, 1069-1089.
  9. Bostrom, N.; Sandberg, A. Sci. Eng. Ethics 2009, 15, 311-341.

Defect Patch: The Band-Aid for the Heart

Imagine hearing that your newborn, only a few minutes out of the womb, has a heart defect and will live only a couple more days. Shockingly, 1 in every 125 babies is born with some type of congenital heart defect, drastically reducing his or her lifespan.1 However, research institutes and hospitals nationwide are testing solutions and advanced devices to treat this condition. The most promising approach is the defect patch, in which scaffolds of tissue are engineered to mimic a healthy heart. The heart is enormously complex, and mimicking it is easier said than done. These patches require a tensile strength (to withstand the heart’s pulses and variances) greater than that of the human left ventricle.1 To add to the difficulty of creating such a device, the layers of the patch must be not only strong, but also soft and supple, as cardiac cells prefer malleable tissue environments.

Researchers have taken on this challenge and, by testing various biomaterials, have determined the compatibility of each material within the patch. The materials are judged on the basis of their biocompatibility, biodegradability, reabsorption, strength, and shapeability.2 Natural candidates include gelatin, chitosan, fibrin, and submucosa.1 Though gelatin is easily biodegradable, it has poor strength and lacks cell-surface adhesion properties. Similarly, fibrin binds to different receptors, but with weak compression.3 On the artificial side, the polyglycolic acid (PGA) polymer is strong and porous, while the poly(lactic-co-glycolic acid) (PLGA) polymer has well-regulated biological properties but poor cell attachment. This trade-off between the different components of a good patch is what makes building and modifying these systems so difficult. Nevertheless, the future of defect patches is extremely promising.
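
One way to think about these trade-offs is as a simple weighted decision matrix over the criteria listed above. In the sketch below the weights and 1-5 material scores are hypothetical placeholders rather than measured properties, so it only illustrates the form of the comparison, not any lab's actual ranking.

```python
# Illustrative weighted decision matrix for patch biomaterials. The criteria
# follow the text (biocompatibility, biodegradability, reabsorption, strength,
# shapeability); the weights and 1-5 scores are hypothetical placeholders,
# not measured material properties.

criteria_weights = {"biocompatibility": 0.30, "biodegradability": 0.20,
                    "reabsorption": 0.10, "strength": 0.25, "shapeability": 0.15}

materials = {
    "gelatin": {"biocompatibility": 4, "biodegradability": 5, "reabsorption": 4, "strength": 1, "shapeability": 3},
    "fibrin":  {"biocompatibility": 4, "biodegradability": 4, "reabsorption": 4, "strength": 2, "shapeability": 3},
    "PGA":     {"biocompatibility": 3, "biodegradability": 3, "reabsorption": 3, "strength": 5, "shapeability": 4},
    "PLGA":    {"biocompatibility": 3, "biodegradability": 4, "reabsorption": 3, "strength": 4, "shapeability": 4},
}

for name, scores in materials.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{name:7s} weighted score: {total:.2f}")
```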

An artificial polymer often used in creating patches is polycaprolactone, or PCL. This material is covered with a gelatin-chitosan hydrogel to form a hydrophilic (water-conducive) patch.1 In the process of making the patch, many different solutions of PCL matrices are prepared. The tension of the patch is measured to make sure that it will not rip or become damaged as the child's heart rate increases during development. The force the patch can bear must always be greater than that of the left ventricle to ensure that neither the patch nor the heart muscle ruptures.1 Although many considerations must be accounted for in making this artificial patch, the malleability and adhesive strength of the device are the most important.1 Imagine a 12-year-old child with a defect patch implanted in the heart. Suppose this child attempts a cardio workout: 100 jumping jacks, a few laps around a track, and some pushups. The heart patch must be able to withstand strains approaching its ultimate tensile strain without detaching or bursting, and the PCL core of the patch must be able to handle large bursts of activity. Finally, the patch must grow with the child, and the heart must be able to grow new cells around the patch. In summary, the PCL patch must be biodegradable, have sufficient mechanical strength, and remain viable under harsh conditions.
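
The "stronger than the left ventricle" requirement can be sanity-checked with the law of Laplace for a thin-walled sphere, sigma = P * r / (2 * t). The pressure, radius, and wall thickness below are generic adult-scale textbook numbers, and the patch strength is a hypothetical placeholder, not a measurement from the cited work.

```python
# Rough check of the design rule "patch strength > left-ventricular wall stress"
# using the thin-walled-sphere law of Laplace: sigma = P * r / (2 * t).
# Pressure, radius, and wall thickness are generic adult-scale textbook values;
# the patch strength is a hypothetical placeholder.

peak_pressure_pa = 16_000      # ~120 mmHg systolic, in pascals
ventricle_radius_m = 0.025     # ~2.5 cm internal radius
wall_thickness_m = 0.010       # ~1 cm wall thickness

wall_stress_pa = peak_pressure_pa * ventricle_radius_m / (2 * wall_thickness_m)

patch_ultimate_strength_pa = 300_000   # hypothetical patch tensile strength

safety_factor = patch_ultimate_strength_pa / wall_stress_pa
print(f"estimated LV wall stress: {wall_stress_pa / 1000:.1f} kPa")
print(f"safety factor of hypothetical patch: {safety_factor:.0f}x")
```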

While artificial materials like PCL are effective, in some situations the design criteria above are best fulfilled by patches made from natural biomaterials. For instance, chitosan serves as a good template for the outer portion of the patch.4 This material is biocompatible, bioabsorbable, and shapeable. Using natural materials can reduce the risk of abnormal vascularization, the improper formation of blood vessels, and such materials can adapt to gradual changes in the heart. Natural patches being developed and tested in Dr. Jeffrey Jacot’s lab at Rice University include a core of stem cells, which can differentiate into more specialized cells as the heart grows. They currently contain amniotic fluid-derived stem cells (AFSC), which must be isolated from humans.5 Researchers prepare a layer of chitosan (or, in some cases, fibrin) and polyethylene glycol hydrogels to compose the outer part of the patch.4 They then inject AFSC into this matrix to form the final patch. The efficiency of the patch is measured by recording the stem cells’ ability to differentiate into new cell types. In experiments, AFSC have been able to differentiate into virtually any cell type, making them particularly promising for regenerative medicine.5 These initial prototypes are still being developed and thoroughly tested on rodents.6 A major limitation of this approach is the inability of patches to adapt to rapidly developing hearts, such as those of human infants; moreover, patch testing on humans or even larger mammals has yet to be done. The most important challenges for the future of defect patches are flexibility and adaptability.6 After all, this patch is essentially a transformed and repaired body part. Through the work of labs like Dr. Jacot’s, cardiac defects in infants and children may one day be completely treatable with a patch. Hopefully, in the future, babies with this “Band-Aid” will have more than a few weeks to live, if not an entire lifetime.

References

  1. Pok, S., et al., ACS Nano. 2014, 9822–9832.
  2. Pok, S., et al., Acta Biomaterialia. 5630–5642.
  3. Pok, S., et al. J. Cardiovasc. Trans. Res. 2011, 646–654.
  4. Tottey, S., Johnson, et al. Biomaterials 2011, 32(1), 128–136.
  5. Benavides, O. M., et al., Tissue Engineering Part A. 1185–1194.
  6. Pok, S., et al., Tissue Engineering Part A. 1877–1887.

Homo Naledi – A New Piece in the Evolutionary Puzzle

Human beings share 96% of their genome with chimpanzees,1 which is why modern science has accepted the concept that humans and apes share a recent common ancestor. However, our understanding of the transition from these ancient primates to the bipedal, tool-wielding species that conquered the globe is less clear than many realize. One crucial missing chapter in the evolutionary story is the origin of our very own genus, Homo. Scientists believe that somewhere between two and three million years ago, the hominid species Australopithecus afarensis evolved into the first recognizably human species, Homo erectus. However, the details of this genealogical shift have remained a mystery. In 2013, a discovery made in the Rising Star cave by two recreational cavers may have provided revolutionary insight into this intractable problem.

The Rising Star cave lies 30 miles outside the city of Johannesburg in northern South Africa. A popular destination for spelunkers for the past 50 years, this cave is well-known and has been extensively mapped.2 Two years ago, Steven Tucker and Rick Hunter dropped into the Rising Star cave in an effort to discover new extensions to the cave, with the hope of finding something more.2 They found a tight crevice that was previously unexplored, which led to a challenging forty-foot drop through a chute. At the bottom, Hunter and Tucker came across scattered bones and fossils in what would later be named the Dinaledi chamber.2 Hunter and Tucker consulted with Dr. Lee Berger, a paleoanthropologist at the University of Witwatersrand. It was clear to Dr. Berger that these fossils were not of modern humans -- an ancient hominid species had been discovered.2

Within weeks of this discovery, Dr. Berger assembled a qualified team and set up camp at the mouth of the Rising Star cave. In the largest hominid fossil discovery in Africa, over one thousand bones from multiple bodies were extracted and analyzed.2

As the fossils were being transferred out of the cave, paleoanthropologists at the surface worked to piece together a skeleton. Some aspects of this species’ bone structure were distinctly human, like the long thumbs, long legs, and arched feet.2 Other features, including curved fingers and a flared pelvis, were indicative of a more primitive animal.2 A large skull fragment from above the left eye of one of the skeletons allowed scientists to definitively determine this hominid’s genus.

The Australopithecus skull is characterized by a large orbital ridge above the eye, with a deep concavity behind it, leading to a flatter face with pronounced brows.3 The skull fragment collected by the team, however, had a shorter ridge and less of an indentation above the frontal lobe.3 This finding led the team to conclude that they had discovered a new member of the Homo genus, which Dr. Berger named Homo naledi. ‘Naledi’ in the Sotho language means ‘star,’ a reference to the vivid stalactites emanating from the ceiling of the Dinaledi chamber.3

Dr. Berger’s discovery in the Rising Star cave was an incredible breakthrough, but finding fossils is only half the battle. The next step is to find a place for this species in the million-year narrative of human evolution we have created.

In accomplishing this feat, a logical place to start is to consider how the fossils of Homo naledi ended up in their final resting place. There were no signs of predation, as no other animal fossils were found at the site. In addition, the fossils accumulated gradually, meaning that the individuals did not all die in a single event. Dr. Berger postulated that the bodies were placed there with purpose, but intentional body disposal is an advanced social behavior which, up to this point, has been exhibited only by more evolved Homo species. The brain size of the discovered hominids is estimated at 450 to 550 cubic centimeters, about one third the size of the Homo sapiens brain and only marginally larger than that of a chimpanzee.3 The possibility of such a small-brained animal engaging in intentional body disposal challenges ideas about the cognitive abilities necessary for such advanced social behavior. Dr. William Jungers, chair of anatomical sciences at Stony Brook University, argues that advanced social intelligence was not likely at play in this instance. He claims that “intentional corpse disposal is a nice sound bite, but more spin than substance […] dumping conspecifics down a hole may be better than letting them decay around you.”4
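
The brain-size comparison is easy to check with rough numbers: the Homo naledi range comes from the cited paper, while the modern-human and chimpanzee averages below are commonly cited approximations rather than values from that study.

```python
# Quick ratio check of the brain-size comparison in the text. The Homo naledi
# range comes from the article; the H. sapiens and chimpanzee averages are
# commonly cited approximations, not values from the cited paper.

naledi_cc = (450 + 550) / 2        # midpoint of the reported 450-550 cm^3 range
sapiens_cc = 1350                  # approximate modern human average
chimp_cc = 390                     # approximate chimpanzee average

print(f"H. naledi / H. sapiens: {naledi_cc / sapiens_cc:.2f}")   # ~0.37, about one third
print(f"H. naledi / chimpanzee: {naledi_cc / chimp_cc:.2f}")     # ~1.3, marginally larger
```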

The idea of intentional body disposal is not the only one of Dr. Berger’s conclusions to attract criticism. Some in the scientific community argue that Homo naledi is a distant cousin, not a direct ancestor, of modern humans. Others, like UC Berkeley’s Dr. Tim White, argue that "new species should not be created willy-nilly,” and believe that these discoveries may simply be fossils of Homo erectus.5 Biologist Dr. David Menton takes the small brain size of these hominids, as well as their “sloped face” and “robust mandible,” as indication that Homo naledi does not even belong in the Homo genus.6

It is clear that while the Homo naledi fossils are extremely significant in the scientific community, their placement within the story of human evolution is contentious. Our inability to definitively date the fossils makes the task even more challenging. However, Homo naledi’s unique mosaic of human and ape-like features provides support for a new model of human evolution that has recently gained traction in the scientific community. While scientists would prefer to draw a family tree of human ancestors with modern humans at the top, our evolution is not so simple. Dr. Berger likens the reality of evolution to a braided stream.2 Like a collection of tributaries all contributing to a river basin, humans may have been the product of a collection of human ancestors, each contributing to our existence differently. We may never fully understand where we came from, but discoveries like Homo naledi bring us a little bit closer to completing the evolutionary puzzle.

References

  1. Spencer, G. New Genome Comparison Finds Chimps, Humans Very Similar at the DNA Level. National Human Genome Research Institute [Online], August 31, 2005. https://www.genome.gov/15515096 (accessed March 1st, 2016)
  2. Shreeve, J. This Face Changes the Human Story. But How? National Geographic [Online], September 10, 2015. http://news.nationalgeographic.com/2015/09/150910-human-evolution-change/ (accessed January 17, 2016)
  3. Berger, L. R. et al. ELife [Online] 2015, 4. http://elifesciences.org/content/4/e09560 (accessed January 16, 2016)
  4. Bascomb, B. Archaeology's Disputed Genius. PBS NOVA NEXT [Online], September 10, 2015. http://www.pbs.org/wgbh/nova/next/evolution/lee-berger/ (accessed January 19, 2016)
  5. Stoddard, E. Critics question fossil find, but South Africa basks in scientific glory. UK Reuters [Online], September 16, 2015. http://uk.reuters.com/article/us-safrica-fossil-idUKKCN0RG0Z120150916 (accessed January 19, 2016)
  6. Mitchell, E. Is Homo naledi a New Species of Human Ancestor? Answers in Genesis [Online], September 12, 2015. https://answersingenesis.org/human-evolution/homo-naledi-new-species-human-ancestor/ (accessed January 17, 2016)

3D Organ Printing: A Way to Liver a Little Longer

On average, 22 people in America die each day because a vital organ is unavailable to them.1 However, recent advances in 3D printing have made manufacturing organs feasible for combating the growing problem of organ donor shortages.

3D printing utilizes additive manufacturing, a process in which successive layers of material are laid down to make objects of various shapes and geometries.2 It was first described in 1986, when Charles W. Hull introduced his method of ‘stereolithography,’ in which thin layers of material are deposited and cured with ultraviolet laser light. In the past few decades, 3D printing has driven innovation in many areas, including engineering and art, by allowing rapid prototyping of various structures.2 Over time, scientists have further developed 3D printing to employ biological materials as a modeling medium. Early iterations of this process used a spotting system to deposit cells into organized 3D matrices, allowing the engineering of human tissues and organs. This method, known as 3D bioprinting, requires layer-by-layer precision and the exact placement of 3D components. The ultimate goal of 3D bioprinting is to assemble human tissues and organs that have the biological and mechanical properties needed to function properly, so that they can be used for clinical transplantation. To achieve this goal, modern 3D organ printing relies on three approaches: biomimicry, autonomous self-assembly, and mini-tissues. Typically, a combination of all three techniques is used to achieve bioprinting with multiple structural and functional properties.
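
The layer-by-layer idea is easy to picture with a toy model: slice a voxelized target shape into horizontal layers and deposit them from the bottom up. The sketch below is only a schematic of that concept, not an interface to any real bioprinter.

```python
# Toy illustration of additive manufacturing: slice a 3D voxel model into
# horizontal layers and "deposit" each layer in order, bottom to top.
# This is a schematic of the layer-by-layer concept, not real printer control code.

import numpy as np

# Target shape: a small solid cylinder defined on a 20 x 20 x 10 voxel grid.
nx, ny, nz = 20, 20, 10
x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
disk = (x - nx / 2) ** 2 + (y - ny / 2) ** 2 <= 8 ** 2
model = np.stack([disk] * nz, axis=2)          # shape (nx, ny, nz), True = material

def print_layers(model):
    for z in range(model.shape[2]):            # one slice per pass of the print head
        layer = model[:, :, z]
        deposited = int(layer.sum())
        print(f"layer {z:2d}: deposited {deposited} voxels of bio-ink")

print_layers(model)
```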

The first approach, biomimicry, involves manufacturing identical copies of the cells and tissues of an organ. The goal of this process is to use the organ recipient’s own cells and tissues to duplicate the structure of the organ and the environment in which it resides. Ongoing research in engineering, biophysics, cell biology, imaging, biomaterials, and medicine is essential for this approach to succeed, as a thorough understanding of the microenvironment of functional and supporting cell types is needed to assemble organs that can survive.3

3D bioprinting can also be accomplished through autonomous self-assembly, a technique that uses the same mechanisms as embryonic organ development. The cellular components of developing tissues produce their own extracellular matrix, which gives the tissue its structure. Through this approach, researchers hope to use the cells themselves to create fully functional organs; the cells are the driving force of the process, as they ultimately determine the tissue’s functional and structural properties.3

The final approach used in 3D bioprinting involves mini-tissues and combines the processes of both biomimicry and self-assembly. Mini-tissues are the smallest structural units of organs and tissues. They are replicated and assembled into macro-tissue through self-assembly. Using these smaller, potentially undamaged portions of the organs, fully functional organs can be made. This approach is similar to autonomous self-assembly in that the organs are created by the cells and tissues themselves.

As modern technology makes it possible, techniques for organ printing continue to advance. Although successful clinical implementation of printed organs is currently limited to flat organs such as skin and blood vessels and hollow organs such as the bladder,3 current research is ongoing for more complex organs such as the heart, pancreas, or kidneys.

Despite the recent advances in bioprinting, issues remain. Since cell growth occurs in an artificial environment, it is hard to supply the oxygen and nutrients needed to keep larger organs alive. Additionally, moral and ethical debates surround the science of cloning and printing organs.3 Some camps assert that organ printing manipulates and interferes with nature; others feel that, when done ethically, 3D bioprinting of organs will benefit mankind and improve the lives of millions. Beyond these debates, there is also concern about who will control the production and quality of bioprinted organs. Some regulation of organ production will be necessary, and it may be difficult to decide how to distribute this power. Finally, the expense of 3D printed organs may put them out of reach of lower socioeconomic classes: 3D printed organs, at least in their early years, will more likely than not be expensive to produce and to buy.
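
The oxygen-supply problem can be bounded with a simple steady-state diffusion-consumption estimate: for a slab of tissue fed from one face, the maximum thickness that stays oxygenated is roughly sqrt(2 * D * C0 / Q). The diffusivity, surface concentration, and consumption rate below are order-of-magnitude literature values, not measurements from the cited work, but they reproduce the commonly cited limit of one to two hundred micrometers.

```python
# Order-of-magnitude estimate of how thick an avascular tissue layer can be
# before diffusion can no longer supply its oxygen demand. For a slab supplied
# from one face with uniform consumption, L_max = sqrt(2 * D * C0 / Q).
# D, C0, and Q are rough literature-scale values, not measured data.

from math import sqrt

D = 2e-9       # oxygen diffusivity in tissue, m^2/s (~2e-5 cm^2/s)
C0 = 0.13      # dissolved O2 at the supplied face, mol/m^3 (~100 mmHg)
Q = 2e-2       # O2 consumption of metabolically active tissue, mol/(m^3*s)

L_max = sqrt(2 * D * C0 / Q)
print(f"maximum diffusion-supplied thickness: {L_max * 1e6:.0f} micrometers")  # ~160 um
```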

Nevertheless, there is widespread excitement surrounding the current uses of 3D bioprinting. While clinical trials may be in the distant future, organ printing can currently act as an in vitro model for drug toxicity, drug discovery, and human disease modeling.4 Additionally, organ printing has applications in surgery, as doctors may plan surgical procedures with a replica of a patient’s organ made with information from MRI and CT images. Future implementation of 3D printed organs can help train medical students and explain complicated procedures to patients. Additionally, 3D printed tissue of organs can be utilized to repair livers and other damaged organs. Bioprinting is still young, but its widespread application is quickly becoming a possibility. With further research, 3D printing has the potential to save the lives of millions in need of organ transplants.

References

  1. U.S. Department of Health and Human Services. Health Resources and Services Information. http://www.organdonor.gov/about/data.html (accessed Sept. 15, 2015)
  2. Hull, C.W. et al. Method of and apparatus for forming a solid three-dimensional article from a liquid medium. WO 1991012120 A1 (Google Patents, 1991).
  3. Murphy, S.; Atala, A. 3D Bioprinting of Tissues and Organs. Nat. Biotechnol. [Online] 2014, 32, 773-785. http://nature.com/nbt/journal/v32/n8/full/nbt.2958.html (accessed Sept. 15, 2015)
  4. Drake, C.; Kasyanov, V.; et al. Organ Printing: Promises and Challenges. Regen. Med. 2008, 3, 1-11.
