The Illusion of Race

Race is one of the most pervasive features of American social life; ignoring it would be like denying the existence of gravity. Though we would like to consider our nation a post-racial society, we still place great importance on race, asking for it on forms ranging from voter registration to the PSAT. However, many would be surprised to learn that race has no biological basis: there is no single defining characteristic or gene that can unequivocally distinguish one race from another.1 Rather, race is a manmade concept used to describe differences in physical appearance. Yet we have internalized this social construct to such a degree that it seems to carry genetic significance, masking the fact that race is something we are raised with. That a simple internalized ideology creates disparities across contemporary American society, from socioeconomic status to healthcare accessibility, illustrates the urgency of exposing the myth of race.

Throughout American history, racial connotations have been fluid, with different ethnic groups falling in and out of favor according to the societal views of a given time. Race was originally conceived as a way to justify colonialism. European colonizers institutionalized their ethnocentric attitudes by creating the concept of race to differentiate between the civilized and the savage, the Christians and the heathens. This dichotomy facilitated mercantilism, the economic policy designed to accrue astronomical profits for European countries through the exploitation of “inferior” races. Scholars of Critical Race Theory show, more generally, that the boundaries of racial categories shift to accommodate the political realities and conventional wisdom of a given time and place.2

This definition of race has changed in the United States over the centuries. For example, when the Irish and Italians first immigrated in the early 20th century, they were seen as “swarthy drunkards” – clearly not part of the white “elite.” Within two generations, however, these same groups were able to assimilate into the Caucasian-dominated culture, while African-Americans were still considered a separate entity. Similarly, during the era of the Jim Crow laws, courts had the power to determine who was black and who was not: in Virginia, a person was considered black if he or she was at least 1/16th African-American; in Florida, at least 1/8th; and in Alabama, having even the smallest sliver of African-American heritage made a person black.3 Thus, a person could literally change race simply by moving from one state to another. Today, the commonly defined racial classifications, as specified by the US Census, include White, Black, Asian, American Indian or Alaska Native, Pacific Islander or Native Hawaiian, Other, and Multiracial. Because there are no scientific cut-offs to determine a person's race, racial data are largely based on self-identification, which points to the concept's lack of biological legitimacy. For example, 30% of Americans who label themselves as White do not have at least 90% European ancestry.4
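
To see just how arbitrary these legal cut-offs were, consider the minimal sketch below. It is hypothetical illustrative code, not drawn from the cited sources; the state names and thresholds simply follow the essay's examples, with Alabama's rule approximated as any nonzero ancestry fraction.

```python
from fractions import Fraction

# Minimum African-American ancestry fraction at which each state's law
# classified a person as black (thresholds per the essay's examples).
STATE_THRESHOLDS = {
    "Virginia": Fraction(1, 16),
    "Florida": Fraction(1, 8),
    "Alabama": Fraction(0),  # any trace at all ("one drop")
}

def classified_black(state: str, ancestry: Fraction) -> bool:
    """Return True if `ancestry` meets the state's legal threshold."""
    threshold = STATE_THRESHOLDS[state]
    if threshold == 0:
        return ancestry > 0
    return ancestry >= threshold

person = Fraction(1, 10)  # a hypothetical person of 1/10th African-American ancestry
for state in STATE_THRESHOLDS:
    print(state, classified_black(state, person))
# Virginia: True, Florida: False, Alabama: True -- the same person,
# legally a different race on either side of a state line.
```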

We may think our conceptualization of race is based upon biological makeup, but it is actually an expression of actions, attitudes, and social patterns. When examining the science behind race, most scholars across various disciplines, including evolutionary biology, sociology, and anthropology, have come to the consensus that distinctions made by race are not genetically discrete, cannot be reliably measured, and are not meaningful in the scientific sense.5

Some argue that race is a genetic concept, pointing to the higher incidence of particular diseases in certain races. However, purely hereditary diseases are extremely rare: cystic fibrosis occurs in roughly 1 in 2,300 births, Huntington's disease in 1 in 10,000, and Duchenne muscular dystrophy in 1 in 3,000.6 Rather, diseases often reflect shared lifestyles and environments instead of shared genes, because factors such as poverty and malnutrition are also often “inherited” through family lines. Even genetic polymorphisms in hemoglobin, which lead to populations with lower susceptibility to malaria, can be partly explained by environmental factors.6-8 Thus, diseases traditionally tied to certain races cannot be explicitly attributed to genes, discrediting the idea that races are genetically disparate. Genetic differences are better described as percentages of people carrying a particular gene polymorphism, percentages that change according to the environment.6

Racial groupings actually capture little of the genetic variation that exists in humans. Studies have shown that about 90% of human genetic variation occurs within a population on a single continent, while only about 10% occurs between continental populations.1 Variation in physical characteristics, the traditional basis for determining race, does not imply underlying genetic difference. When we internalize the false ideology that race is genetic, we mistakenly imply that there are human subspecies.
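
To see what a 90/10 split means in practice, here is a minimal sketch of the standard within- vs. between-group variance decomposition behind such estimates. The numbers are made up for illustration (three "populations" whose group means differ only slightly); only the partitioning logic mirrors the studies cited.

```python
import numpy as np

# Toy trait values standing in for genetic measurements: three groups of
# 500 individuals whose group means differ by small shifts.
rng = np.random.default_rng(0)
populations = [rng.normal(loc=mu, scale=1.0, size=500)
               for mu in (0.0, 0.3, 0.5)]

all_values = np.concatenate(populations)
grand_mean = all_values.mean()
total_ss = ((all_values - grand_mean) ** 2).sum()

# Classic ANOVA-style decomposition: total variation splits into variation
# within each group plus variation between the group means.
within_ss = sum(((pop - pop.mean()) ** 2).sum() for pop in populations)
between_ss = sum(len(pop) * (pop.mean() - grand_mean) ** 2
                 for pop in populations)

print(f"within-group share:  {within_ss / total_ss:.1%}")
print(f"between-group share: {between_ss / total_ss:.1%}")
# With group differences this small, the vast majority of the variance
# falls within groups -- qualitatively the same pattern as the ~90%/10%
# split reported for human genetic variation in the text.
```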

Although race is a social construct, it has a widespread influence on society, especially in the United States. In particular, minorities face disadvantages in numerous areas ranging from healthcare to education.7,8 Reports about Mitt Romney's rumored adoption of a darker skin tone when addressing Latino voters, or statistics indicating that the median household wealth of whites is 20 times that of blacks, reinforce the existence of a racialized society.5 This is shocking and disturbing; race may not be real, but its effects contribute to real inequality. Only once everyone understands this racial illusion can we begin making effective change.

References

  1. Bamshad, M. J.; Olson, S. E. Does Race Exist? Scientific American, New York City, Nov. 10, 2003, pp. 78-85.
  2. Calavita, K. Invitation to Law and Society: An Introduction to the Study of Real Law; University of Chicago Press: Chicago, IL, 2007.
  3. Rothenberg, P. S. Race, Class, and Gender in the United States, 7th ed.; Worth Publishers: New York, NY, 2007.
  4. Lorber, J.; Hess, B. B.; Ferree, M. M., Eds. Revisioning Gender; AltaMira Press: Walnut Creek, CA, 2000.
  5. Costantini, C. ABC News. http://abcnews.go.com/ABC_Univision/News/mitt-romneys-tan-draws-media-fire-makeup-artist/story?id=17290303 (Accessed Oct. 26, 2012).
  6. Pearce, N. BMJ. 2004, 328, 1070-2.
  7. Stone, J. Theor. Med. Bioethics. 2002, 23, 499-518.
  8. Witzig, R. Ann. Intern. Med. 1996, 125, 675-9.
  9. Tavernise, S. The New York Times. http://www.nytimes.com/2011/07/26/us/26hispanics.html?_r=0&pagewan (Accessed Oct. 26, 2012).

Nature vs Nurture of Politics

If you voted in last year's election, what made you choose the candidate for whom you voted? Was it the platform, the party, or perhaps your genes? Since Mendel and his peas, the idea that genes shape physical traits has greatly influenced science; however, their role may be greater than we thought. Aristotle first posed the question of nature vs. nurture, the debate over the relative importance of one's genes (nature) against one's upbringing (nurture) in determining physical and behavioral characteristics. For instance, is one's intelligence an innate quality or the product of years of education? If genes are involved in political ideology, does that mean political freedom is limited? Do we have a choice in voting? The answer to this age-old question is complex, but more and more people now recognize that both nature and nurture are involved in determining traits.

Family values, education, and the media were originally thought to determine an individual's political behavior. The scientific community, however, has gradually come to treat political views as a legitimate subject of the nature vs. nurture debate. In fact, the study of genetic influence on political decisions has a name: genopolitics.1 For instance, an early study on the topic found that identical twins show more similarity in voting behavior than non-identical twins.2
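
One classical way twin studies convert "identical twins are more similar" into a number is Falconer's formula, h^2 = 2(r_MZ - r_DZ). The sketch below uses hypothetical correlations, not figures from the cited study, purely to show how the estimate works.

```python
# A minimal sketch of Falconer's classical twin-design estimate. Identical
# (MZ) twins share ~100% of their genes, fraternal (DZ) twins ~50%, so
# doubling the gap between their trait correlations isolates the genetic
# share of the variance under this simple model.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate heritability as h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# Hypothetical concordance correlations for a voting-behavior trait
# (placeholders, not data from reference 2):
r_identical, r_fraternal = 0.65, 0.45
print(f"estimated heritability: {falconer_heritability(r_identical, r_fraternal):.2f}")
# -> 0.40, i.e. genes would account for ~40% of the variance in this model
```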

Furthermore, although many still believe environmental influences alone determine political attitudes, a recent article suggests that genes also influence political preferences.3 This harks back to the notion that both nature and nurture are important in the development of an individual's behavior. Whether your vote is liberal or conservative is determined not by a single gene, but by a combination of genes and regulatory pathways interacting with the environmental factors that shape one's political principles.2

However, genetics does not play a role in all political traits. In particular, because political parties are transient and vary between countries and across time, only nurture appears to shape party identification: a liberal in American politics is not the same as a liberal in European politics. On the other hand, most genopolitics researchers agree that genetics does influence ideology (e.g., conservatism or liberalism), which concerns relatively timeless questions of how groups should be organized.4 For example, your heritage might incline you to favor a powerful government over a weak one, but it would not directly influence you to vote for a Democratic nominee over a Republican nominee.

How exactly genetics could influence your vote is difficult to pin down. Hatemi and McDermott propose that our prehistoric ancestors faced issues of out-groups (immigration), security (foreign policy), and welfare (resource management), among others. Through evolutionary processes occurring over thousands of years, positions on these issues became polarizing, heritable traits.2 This by no means signifies that one ideology is more “fit” than another simply because it has dominated in the past.

These natural factors shape conservative vs. liberal preferences, but the environment may play the larger role in directing most individual political choices. More research is needed to show that the influence of genes on political ideology cannot be explained by purely environmental factors.6

References

  1. Stafford, T. BBC Future. http://www.bbc.com/future/story/20121204-is-politics-really-in-our-genes (Accessed Dec. 4, 2012).
  2. Biuso, E. Genopolitics. New York Times, New York, Dec. 14, 2008, p. MM57.
  3. Hatemi, P. K. Trends in Genetics. 2012, 28(10), 525-33.
  4. Alford, J. R. Ann. Rev. Pol. Sci. 2008, 15(2), 183-203.
  5. Body Politic: The Genetics of Politics. The Economist, London, Oct. 6, 2012.
  6. Hatemi, P. K. J. Pol. 2011, 73(1), 271-85.

Memory-Erasing Drugs: To Forget or Not to Forget?

From recreational mind-altering drugs to pharmaceuticals that target neurotransmitter imbalances, a wide variety of chemical mechanisms can alter our thought processes and behaviors. While neural bases have long been known to play a role in shaping our thoughts and actions, recent advances in memory research have brought an evocative question to the forefront: what if we could change not only how we think and act, but also what we remember? The concept of a “forgetfulness” drug—an Eternal Sunshine-esque memory erasure treatment in pill form—is no longer a far-fetched fantasy. As researchers gain a better understanding of how memories are formed and retrieved at the molecular level, the scientific community acquires the ability to formulate targeted approaches to modifying the existence or emotional character of past memories. However, amid these developments, it is crucial that scientists, neuroethicists, and policymakers collaborate to evaluate the ethical costs and benefits of new therapies.

Currently, a number of drugs have shown utility in altering memory consolidation and retrieval. For example, propranolol, a beta-adrenergic blocker already approved by the FDA to treat hypertension, inhibits the excess stress hormones released at the time of a psychologically traumatic event, the presence of which influences the memory consolidation of particularly emotional experiences.1 When administered during the critical period shortly after trauma, propranolol has also been shown to prevent the formation of strong, intrusive memories of the event, as well as the associated fear and anxiety that contribute to the later development of posttraumatic stress disorder (PTSD). In fact, early studies from 2002 and 2003 demonstrated that patients who received propranolol, first administered several hours after a traumatic event and continued over a seven- or ten-day regimen, experienced lower rates of PTSD than those who did not receive propranolol.5,8

Memory-attenuating drugs can also be administered during subsequent periods of memory activation. More recently, advances in neuroscience have revealed that the process of retrieving memories is vastly different from the idea of simply activating consolidated memory traces from an archive. Instead, every time we recall a particular memory, it becomes unstable and must be re-consolidated in order to persist in the brain.7 Accordingly, a 2012 study conducted by clinical psychologists at the University of Amsterdam used propranolol to disrupt the memory reconsolidation of events associated with fear and anxiety in a learning context. Specifically, participants were threatened with painful electric shocks during a learning task; then, these acquired memories and fear conditioning were reactivated the following day during a repeat of the task. As predicted, participants who received propranolol during the memory reconsolidation process (upon activation of their memories from the previous day) showed lessened behavioral expressions and feelings of anxiety concerning the fear-related memory.7 Furthermore, within the past year, researchers have demonstrated that the injection of ζ-pseudosubstrate inhibitory peptide (ZIP) can induce cocaine-addicted rats to forget the locations where they had been receiving cocaine.4 Therefore, beyond diminishing the negative emotional experience of unpleasant memories, pharmaceutical treatments may also work toward erasing a memory altogether.

However, the power to eradicate memories comes with great responsibility—and a range of complex ethical implications. A decade ago, the President’s Council on Bioethics issued a report warning against the pharmaceutical modification of memories, citing various personal and social repercussions incurred by the use of any drugs that quell recollection of past events, regardless of how painful they may be.6 At the personal level, individuals might use such drugs to “numb” themselves from remembering incidents that could later prove to have adaptive value, thus obviating the process of learning and growing from negative experiences. On the greater social scale, some neuroethicists argue that if survivors and witnesses of catastrophic events (such as accidents, crimes, combat, or genocide) elect to eliminate the emotional charge of such memories, then their firsthand perceptions about the meaning and impact of these events—which are inevitably interlinked with powerful aversive emotions—would be altered substantially. In effect, these self-protective acts of deliberate forgetfulness would render emotionally devastating atrocities as less significant in the collective sense of justice and moral consciousness of society.6

On the other hand, not every negative memory has “redeeming” value. For example, individuals with PTSD experience recurrent traumatic memories that remain particularly vivid and emotionally distressing long after the event, often impeding day-to-day functioning. Accordingly, biomedical ethicists have likened the suffering caused by agonizing memories to the experience of profound physical pain, the pharmaceutical alleviation of which is already a common, morally accepted practice.2

Furthermore, a recent neuroethics editorial in Nature argued that fear surrounding the widespread abuse of pharmaceutical memory erasure is overblown and impedes the development of therapeutic applications to patients whose quality of life is curtailed by the residual effects of past traumatic experiences.3 After all, conscientious negotiation of legal policies and clinical guidelines for such drugs would reduce the possibility of large-scale abuse. From the drug administration perspective, clinicians and potential patients could work together to draft procedures for determining the types of cases in which the prescription of memory-dampening drugs is a viable option. Open communication between biomedical and legal experts would also be crucial in navigating high-stakes situations, such as when a traumatized sole witness to a violent crime seeks pharmaceutical memory erasure during an ongoing court case.

Ultimately, the ethical implications of erasing memories pose core questions surrounding our identity and humanity. Would electing to forget past events fundamentally change people—with the disappearance of certain salient memories potentially eroding away the basis of our individual perspectives and learning experiences? Or is simply forgetting a senselessly traumatic event sometimes the better option toward living a fully productive life? Although research on memory-erasing drugs is ongoing and the associated ethical issues of their implementation remain points of contention, the essence of the question lies at the individual level: if presented with the option, would you be willing to dull the emotional overtones of a personal memory, or erase that memory altogether?

References

  1. Cahill, L., et al. Nature. 1994, 371, 702-704.
  2. Illes, J. Am. J. Bioethics. 2007, 7(9), 1-2.
  3. Kolber, A. J. Nature. 2011, 476, 275-276.
  4. Li, Y-Q., et al. J. Neurosci. 2011, 31, 5436-5446.
  5. Pitman, R. K., et al. Biol. Psychiatry. 2002, 51, 189-192.
  6. President’s Council on Bioethics. Beyond Therapy: Biotechnology and the Pursuit of Human Happiness. 2003, 205-273.
  7. Soeter, M., & Kindt, M. Psychoneuroendocrinology. 2012, 37, 1769-1779.
  8. Vaiva, G., et al. Biol. Psychiatry. 2003, 54, 947-949.
