Comment

Evaluating Current Public Health Practices in Villa El Salvador, Peru

Abstract

Public health initiatives have gained popularity in recent years due to their simplicity and effectiveness. The success of such projects depends heavily upon their adaptability to the community's needs, which in turn depends on pre-existing health data. The purpose of this study was to formally quantify and evaluate current health practices in Villa El Salvador, Peru. The study verifies the need for public health projects focused on clean water and preventative care education to address the growing health concerns in this specific community.

Introduction

Public health is defined as the science and art of preventing disease, prolonging life, and promoting health and well-being through organized community effort for the sanitation of the environment.1 While such practices can enhance health in many settings, the recent trend has been to study and apply these principles to smaller communities. The motivation for targeting smaller communities lies in enacting grassroots health movements that spread awareness of basic yet essential health measures to a specific population. By tailoring these efforts, specific areas of health salient to the community are emphasized. While their level of success has varied, the inception of such projects has drawn attention to the field of public health and to basic health issues worldwide.

Successful and sustainable public health programs must be well adapted to the unique needs of their target community.2 However, a component frequently overlooked is feedback from the community. Prior data describing community needs are essential when planning and piloting community-specific initiatives. Despite the correlation between the availability of current public health data and the success of public health initiatives, many small communities do not have the resources to conduct widespread studies.

One such example is Villa El Salvador, a community of over 400,000 people in Lima, Peru.3 Founded on May 11, 1971 by a group of nearly 200 families, Villa El Salvador remains a "self-managed" community with both commercial and residential areas. After many organized protests, most of Villa El Salvador now has electricity and water. However, poverty is a major issue in the community. An estimated 21.9% and 0.8% of the population fall into the categories of "poverty" and "extreme poverty," respectively, according to official Peruvian standards; these thresholds correspond to a family of four living on $2 and $1 per day, respectively.4

As a result, nonessential "luxuries" are often cut from the budget, and healthcare is frequently one of them. Most people cannot afford the medical services offered by the four major hospitals in the area.5 In response, smaller community health clinics, including the "San Martín de Porres Centro de Salud," have attempted to bridge the socioeconomic gap in attaining quality care. People attend these clinics to receive affordable, and sometimes even free, medical attention. While such establishments have continued to serve the people of Villa El Salvador, many residents are unable to seek medical assistance periodically. Awareness of preventative care is severely lacking in the community, a gap that targeted public health initiatives can address. Unfortunately, accurate and current health data for Villa El Salvador do not exist.

The purpose of this study is to formally evaluate the health practices of people in Villa El Salvador. Through this initiative, I aim to provide basic yet meaningful survey data for future campaigns in public health and preventative care, and to identify useful points of focus for the overall improvement of community health. By attaining specific, quantifiable data firsthand from the citizens, future public health projects will be able to base their initiatives on specific community needs and therefore enact consequential and sustainable change.

Experimental

I designed a public health survey to study potential factors contributing to the health issues in Villa El Salvador. After researching prior literature and assessing community needs, I targeted several factors: exercise, nutrition, sources and amount of water, hindrances to medical attention, time spent washing hands, and vaccinations. The final version of the survey featured seven questions targeting these areas. All seven questions used multiple-choice responses to minimize the time spent completing the survey and to maximize response consistency, yielding results that could be compared meaningfully.

I first distributed surveys on June 19, 2012 during the San Martin de Porres Centro de Salud Health Campaign, which offered free healthcare at a local park in Villa El Salvador. This event was specifically chosen as the starting point of the study to collect an accurate sample of the population and minimize socioeconomic inequalities. Surveys were then distributed at the San Martin de Porres Centro de Salud in the mornings over the following week to collect more responses. Respondents were randomly chosen as they waited for medical services offered at the center. After giving informed consent, subjects were told to mark the best response for each question, with the exception of the final vaccine question, for which they selected all pertinent answer choices.

A total of 98 responses were attained in the two-week span. Thirty-six respondents were between the ages of 15 and 30, 53 between 31 and 50, six between 51 and 65, and three were 65 years or older. Because most of the clinic's patients are female, only 19.4% of respondents were male. Aside from the difference in gender, the sample population accurately reflects the demographics of Villa El Salvador.
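
For readers interested in how multiple-choice responses of this kind are reduced to the percentages reported below, a minimal Python sketch follows; the answer choices and counts are hypothetical placeholders rather than the actual survey data.

from collections import Counter

# Hypothetical responses to one multiple-choice question (placeholders, not the real data).
responses = ["cost", "distance", "fear", "cost", "work", "cost", "distance"]

counts = Counter(responses)
total = len(responses)

for choice, n in counts.most_common():
    print(f"{choice}: {n} responses ({100 * n / total:.1f}%)")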

Results

From the population sampled, 19.4% of participants reported consuming more than two servings of fruits and vegetables combined (Figure 1). The majority of the population reported consuming one-half or one serving (35.7% each). Furthermore, only 12.5% of the population above the age of 50 reported consuming more than one serving of fruits and vegetables daily. Finally, two percent of the respondents reported consuming no fruits and vegetables.

Forty percent of the sample population reported consuming eight or more glasses of liquid daily (Figure 2), while 33.7% reported consuming fewer than two glasses. The most common source of water for the population sampled was tap water (53%, Figure 3); the bottled water and cistern options each accounted for 23.5% of responses.

Cost was the biggest obstacle to periodically visiting a doctor for 38.8% of survey participants (Figure 4). Distance from a medical facility was the most significant hindrance for 17.3% of respondents, while fear of seeing a medical professional was the next most selected response (11.2%). It is important to note that when presented with this question, 9.2% of the respondents wrote in "trabajo," or work, as their answer even though it was not an answer choice.

The majority of the population (52%) reported spending 10 seconds or less washing their hands per attempt, while the second most common response (30.6%) reported was up to twenty seconds (Figure 5). Only 11.2% reported spending up to 30 seconds per attempt, while more than 30 seconds was the least common response (6.2%).

Discussion

A majority of the respondents reported consuming either ½ or 1 serving of fruits and vegetables combined. According to the United States Department of Agriculture, individuals should consume at least five to seven daily servings of fruits and vegetables combined, depending on factors such as gender and age.6 This survey finding contrasts with the steady decrease in malnutrition that Peru experienced nationwide from 2005-2010, most significantly in small, semi-urban areas such as Villa El Salvador.7 At least from the perspective of the Peruvian government, it is clear that the majority of Peruvians are getting something to eat.

The issue then becomes what is being consumed. According to the World Health Organization, Peru is expected to have about two million people with diabetes by 2030, triple what it had in 2010.8 The increasing prevalence of heart disease has also been documented.9 An unhealthy diet may help explain the rise in noncommunicable diseases in this community, and the data acquired in this study suggest that reduced consumption of fruits and vegetables could be a major reason for this disturbing trend.

The third question on the survey originally asked for a respondent's daily water consumption, but much of the Peruvian diet involves juice, soup, and other milk-based products that contain water. Hence, to get an accurate tabulation of water intake, I included juice and milk in the question. Experts recommend drinking seven to eight glasses of water daily. According to the data I attained, the majority of people consume two to five glasses of water-based liquids, and most people cited tap water as their major source of water. Initiatives emphasizing the health benefits of increased daily water consumption would be helpful, but the issue of attaining clean water sources must also be addressed.

The principle of preventative care is often deemphasized in many small communities worldwide, regardless of socioeconomic status. For a community such as Villa El Salvador, the importance of this concept multiplies. Realistically, the majority of people in Villa El Salvador cannot afford to see a specialized healthcare professional. Hence, regular checkups with a physician to monitor physical well-being serve as paramount health checkpoints for patients. The real issue arises when even these checkups become too expensive. As discussed in the results section, this is unfortunately the case: the majority of people reported cost as their biggest obstacle to seeing the doctor periodically. Keeping in mind that the surveyed population comes from a clinic that already provides relatively inexpensive medical services compared to those provided "on the street," or outside of the clinic, the results are quite discouraging.

The health clinic cannot do much to reduce the cost; most of the employees are volunteers that work for little or no money, making layoffs and reductions in salaries imprudent. Paperwork and other administrative tasks could be streamlined via computers to help improve efficiency, but such a change would not occur overnight. Furthermore, there is always the issue of funds. While places such as the health clinic could redistribute their prices towards their more popular revenue streams and incentivize those that come often, simple public health outreach solutions could prove to be quite effective. Demonstrations in the community focusing on self-check and self-evaluations would increase accountability while upholding the idea of preventative care. In addition, other healthcare professionals besides doctors could make periodic home visits to “high-risk” patients as part of the care they receive from the Centro de Salud. While the latter would require more human resources, it could potentially give students from nearby universities the opportunity to engage in basic physical examination practices. This would be a unique outreach initiative the Centro de Salud could pilot to reduce its own patient inflow.

Hand washing is one of the most popular public health topics in terms of universality and applicability.10 Preventing the spread of infections and illnesses is key to a preventative care approach. The Centers for Disease Control and Prevention recommends washing hands for at least 20 seconds, and up to 40 seconds depending on the drying mechanism. Over 80% of the sample population reported washing their hands for less than 20 seconds, which may help explain the spread of sicknesses and parasites in Villa El Salvador. The frequency of hand washing could also play a role, though this was not evaluated in this study. There have been initiatives involving hand washing in Villa El Salvador (the Centro de Salud holds one once a year), but these projects are targeted towards children. While it is important for children to learn the proper technique, it is just as important, if not more so, for adults to learn as well. Adults usually prepare the food, which can serve as a major source of illness. Furthermore, they serve as role models for their children; if they engage in proper hand washing, their children are more likely to do so as well.11 In essence, while the community has shown its support for hand washing, the older generation must take the issue more seriously.

While access to care has improved significantly in Villa El Salvador with the emergence of smaller clinics, there is still much room for improvement in the overall health of the community. The aim of this study was to quantify the current health practices of the people of Villa El Salvador and provide community-specific data. The effectiveness of follow-up studies would increase if more people were surveyed in different areas of Villa El Salvador, particularly people over the age of 50 and males. Furthermore, delving into one specific topic, such as nutrition or hand washing, would provide more depth for that facet of health than this study presented. Regardless, the study was successfully completed and conveys tangible information concerning the health practices of the target community. It is my hope that this investigation serves as a solid starting point for prospective public health initiatives in Villa El Salvador and in Peru at large.

Acknowledgements

I would like to thank Enrique Bossio Montellanos, Director of Cross Cultural Solutions in Lima, Peru and Carol Soto, Head Coordinator of the San Martin de Porres Centro de Salud and the entire San Martin de Porres Centro de Salud staff for all of their support. Also, I would like to thank The Rice University Loewenstern Fellowship, and the Rice University Community Involvement Center for funding my trip. Finally, a special thanks to Sarah Hodgkinson and Mac Griswold for all of their guidance.

References

  1. Clinton County Health Department website. http://www.clintoncountygov.com/departments/health/aboutus.html (Accessed Jul. 7, 2012).
  2. Trust for America’s Health: Examples of Successful Community-Based Public Health Interventions (State-by-State). http://www.cahpf.org/GoDocUserFiles/601.TFAH_Examplesbystate1009.pdf (Accessed Jul. 7, 2012).
  3. Participant Handbook: Lima, Peru. New Rochelle: Cross Cultural Solutions, 2012.
  4. Perspectivas Socioeconómicas para Villa El Salvador, Observatorio Socio Económico Laboral, Lima Sur, Lima, Peru, Jul. 2009.
  5. Portal de la Municipalidad de Villa El Salvador. http://www.munives.gob.pe/index.php (Accessed Jul. 7, 2012).
  6. Vegetables: Choose My Plate. USDA. http://www.choosemyplate.gov/food-groups/vegetables.html (Accessed Jul. 22, 2012).
  7. Acosta, A. M. Working Papers at IDS. 2011, 367.
  8. WHO Country and Regional Data on Diabetes. http://www.who.int/diabetes/facts/world_figures/en/ (Accessed Jul. 7, 2012).
  9. Fraser, B. The Lancet. 2006, 367, 2049-50.
  10. Vessel Sanitation Program. Centers for Disease Control and Prevention, http://www.cdc.gov/nceh/vsp/cruiselines/hand_hygiene_general.htm (Accessed Jul. 20, 2012).
  11. Stephens, K. Parenting Exchange. 2004. 19, 1-2.

Comment

Engineered Piezoelectricity of Graphene

Abstract

The aim of this study was to determine whether piezoelectric properties can be engineered selectively into graphene by doping one side of the two-dimensional sheet with compatible adatoms. Density Functional Theory (DFT) calculations compare the piezoelectric strain (d31) and stress (e31) coefficients for several adatom combinations to those of other piezoelectric materials, indicating possible applications in the dynamic control of motion and deformation within nanoelectromechanical systems and structures.

Introduction

The transition from the use of microelectromechanical systems (MEMS) to that of nanoelectromechanical systems (NEMS) poses many problems, notably that many of the methods employed to engineer properties such as transduction and strain response into MEMS begin to fail when carried into the nanoscale.1,2 One of the greatest challenges faced with implementing NEMS in consumer technology is generating the ability to exert dynamic control over the motion and deformation of nano-structures. Piezoelectric materials are already featured prominently as strain sensors for vibration detectors, electromechanical transducers, and probes for atomic force microscopes to achieve this kind of control.1,3 However, no promising analogue among known materials exists to afford this at the nanoscale.

Piezoelectricity is a remarkable property of some materials in which mechanical deformation results in the production of an electric field.4 The reverse piezoelectric effect also holds: exposure to an electric field causes a change in the dimensions of the piezoelectric material. In modeling the piezoelectric behavior of a material, d31 and e31 are among the several coefficients used. The d31 coefficient relates in-plane strain to the electric field and electrical polarization perpendicular to the plane, while e31 describes the magnitude of the piezoelectric effect displayed in the material for a small deformation or applied electric field.5,6
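
For orientation, these relations can be written compactly in standard piezoelectric notation (the formulas below are a conventional summary, not equations reproduced from the source):

d_{31} = \frac{\partial \varepsilon_{1}}{\partial E_{3}}, \qquad e_{31} = \frac{\partial P_{3}}{\partial \varepsilon_{1}},

where \varepsilon_{1} is the in-plane strain, E_{3} is the electric field normal to the sheet, and P_{3} is the polarization normal to the sheet.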

The ability to produce piezoelectric nanomaterials efficient at creating desired functionalities in a nano-device is an attractive prospect. To this end, it is thought possible to make nano-scale materials piezoelectric, even if they do not intrinsically possess this quality, by introducing nanoscale inhomogeneities into the material. In such systems, provided that they are non-centrosymmetric (i.e. they lack inversion symmetry), even uniform stress would induce polarization of the material which would indicate piezoelectric behaviour.2

Theoretical demonstrations have calculated that strain generated by piezoelectric materials can bring about a reversal of the magnetization of adjoining magnetostrictive materials.7 This effect is studied in a new field known as straintronics. In applications such as computation and signal processing, ambient energy alone is enough to generate this level of strain, resulting in devices that require ultralow energy inputs to function and that dissipate minimal energy. This effect has been computationally demonstrated in nanomagnets switched by lead-zirconate-titanate (PZT) layers, which were shown to have extraordinarily low energy requirements.8 The novelty of nanoscale films with these properties would allow for versatile and energy-efficient control of nanodevices. Specific control over the dimensionality of thin films allows for precise control of the magnitude of their resultant properties, as is well understood for many materials through the transition from their bulk to their nanoscale analogs.

A two-dimensional sheet would constitute the idealized thin film, but it was initially believed that two-dimensional materials would be thermodynamically unstable and could not exist. This was based primarily on the observation that the melting temperature of thin film materials decreases rapidly with decreased thickness.9 The surprising discovery of graphene provided a material stronger than any other, that was still lightweight and flexible.10 Graphene exists as a monolayer of sp2 hybridized carbon atoms one atomic layer thick, making it a two-dimensional material.9 Graphene is traditionally isolated from graphite by a mechanical exfoliation process, but has also been formed using chemical vapor deposition and arc discharge techniques, which allow for the production of graphene with high electrical conductivities.11,12,13

Because graphene is well within the nanoregime, it shows much promise in aiding the transition from MEMS to NEMS and there has been a concerted effort to uncover the extent of its potential in technology. Chemical doping is one way to tailor graphene’s properties to suit various needs within device applications. Studies demonstrate that doping graphene with potassium changes its molecular symmetry and modifies its electronic properties.14 Understanding how dopants affect graphene could provide piezoelectric nanofilms for NEMS device control. Pertinent to this is the distribution density of adatoms, and the sites they reside in. The specific binding site on the graphene sheet may affect graphene’s symmetry and subsequently its properties, as this is seen in other molecules.15 This may be above a carbon atom (top site), in the center of the hexagon (hollow site) or over a bond joining carbon atoms (bond site) (see Fig. 4A).

Computer modeling techniques such as Density Functional Theory (DFT) are useful in this regard because they enable the use of theory to predict the behavior of matter under different conditions.16 The results of these calculations can then inform the choice of synthesis routes to pursue in generating the desired reactions or products. DFT makes this possible through the observation that a given wave function contains much more information about a system than is necessary to describe its behavior. Ground state molecular energy, the wave function, and all other molecular electronic properties are uniquely determined by the electron probability density ρ(x,y,z), which, as a function of only three variables, is far less computationally demanding to work with. In principle, the Hohenberg-Kohn theorem states that, given the ground state electron density, all ground state molecular properties can be calculated from ρ.17
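
As a compact statement of this idea (written in standard textbook form, not notation taken from the source), the Hohenberg-Kohn results guarantee an energy functional of the density together with a variational principle:

E_{0} = E[\rho_{0}], \qquad E[\rho] \geq E_{0} \ \text{for any trial density with} \ \int \rho(\mathbf{r})\, d\mathbf{r} = N,

so the ground state energy follows from minimizing E[\rho] over admissible densities rather than solving for the full many-electron wave function.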

Methods

Density functional theory as implemented in the Quantum-ESPRESSO ab initio software package was used to carry out the calculations in this study. Ion cores were treated using Vanderbilt pseudopotentials in all cases except that of potassium, for which a norm-conserving pseudopotential treatment was utilized. A nonlinear core correction was included for potassium and fluorine. Electron exchange and correlation effects were described using the spin-polarized generalized-gradient corrected Perdew-Burke-Ernzerhof (PBE) approximation. All calculations were done using periodic boundary conditions and a primitive cell with one adatom for every two carbon atoms, except where indicated otherwise (Fig. 1C). The electronic wave function was expanded in a plane wave basis set with an energy cut-off of 60 Ry.

Single atom adsorption on one side gives rise to an asymmetric surface with a net electric dipole moment. A dipole correction was used to cancel out the artificial electrical field that arises from this dipole moment. Focus was placed upon calculating the d31 and e31 piezoelectric coefficients, where d31 is the transverse piezoelectric coefficient that describes the deflection normal to the direction of polarization. The e31 term signifies the intrinsic piezoelectric coefficient, where a large e31 value corresponds to a large electric charge induced at a small cost of mechanical strain, or conversely a large mechanical force generated in the presence of a small electric field.

The cases examined included graphene doped with uniform coverages of lithium, potassium, hydrogen, and fluorine atoms. Consideration was also given to situations involving two different atom dopants on opposite sides of the graphene sheet, such as fluorine and hydrogen, or fluorine and lithium. Several atom coverage densities were modeled using lithium by placing a single lithium atom in 1 X 1, 2 X 2, 3 X 3, and 4 X 4 graphene periodic supercells. A Löwdin analysis was used to calculate the partial charges on the lithium atoms at each of these concentrations. Furthermore, the effect of adatom position on the piezoelectric response was examined, in addition to the effect of crystallographic patterning of adatoms on the graphene surface for a fixed concentration of C32Li2.

Results

DFT calculations showed that Li and K preferentially bind to the central hollow sites of the graphene sheet, resulting in hexagonal (6mm) point group symmetry (Fig. 1B). H and F, however, were found to bind at the top site, which results in trigonal (3m) symmetry (Fig. 1B). In the cases involving two adatom species, at least one of the atoms must bind to a top site, again producing trigonal (3m) symmetry (Fig. 1B).

The 6mm, 32, and 3m point groups all result from the destruction of inversion symmetry; hence these materials are non-centrosymmetric and display piezoelectric behavior. Point group symmetry enables the determination of which materials have non-zero d31 and e31 coefficients, and these coefficients are common to all configurations in Fig. 1B. An electric field is applied to the material through a sawtooth potential with a width of 10 Å, which ensures that the forces acting on the system are asymmetric so that a net field acts across the sheet. A roughly linear relationship is found between the electric field and the strain induced in the material when the field amplitude lies between -0.5 and 0.5 V/Å for many of the graphene-adatom combinations examined.

These behaviours have been experimentally achieved in devices containing graphene.18 The d31 coefficient is equal in magnitude to the gradient of the trend-lines (Fig. 2A & 2B), and Table 1 shows the d31 coefficients extracted from these lines, which vary over roughly three orders of magnitude. The binding of F alone to the graphene sheet resulted in only minor changes to the piezoelectric coefficient, as did the case in which H and F bind in an alternating manner and transverse to one another. The alkali metals Li and K, however, produced much larger effects on the d31 coefficient. The greatest effect is achieved when F is added to the three top sites with Li residing in the hollow on the opposite side of the sheet, yielding a d31 value of 3 X 10-1 pm/V. This is comparable to the theoretical value for the 3D piezoelectric boron nitride (BN), which is 3.3 X 10-1 pm/V.

The e31 coefficients were obtained by calculating the change in polarization normal to the surface as a function of the equibiaxial strain in the plane (Fig. 2B). This gives a linear relationship in all cases for low strain values between -1% and 1%. A consequence of employing the equibiaxial in-plane strain is that the gradient of the trend-line equals twice the e31 coefficient for each adatom. Both alkali metals, Li and K, possessed the largest values for e31, indicating that they will deform the most in the presence of a small electric field and generate the greatest charge under a given strain. Unlike the results for the d31 coefficient, e31 does not change significantly when F is placed in the top sites with Li in the opposite hollow site.
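
As an illustration of how these coefficients are read off from the linear regimes just described, the Python sketch below fits straight lines to hypothetical strain-versus-field and polarization-versus-strain data; all numerical values are placeholders rather than the authors' results, and the factor of two applied to the e31 fit reflects the equibiaxial strain noted above.

import numpy as np

# Hypothetical data in the linear regime (placeholder values, not the authors' results).
E_field = np.array([-0.4, -0.2, 0.0, 0.2, 0.4])              # electric field, V/Angstrom
strain = np.array([-0.0012, -0.0006, 0.0, 0.0006, 0.0012])   # induced in-plane strain

# d31 is the slope of strain vs. field; 1 Angstrom/V = 100 pm/V.
d31 = np.polyfit(E_field, strain, 1)[0]

eq_strain = np.array([-0.01, -0.005, 0.0, 0.005, 0.01])                 # equibiaxial in-plane strain
polarization = np.array([-1.0e-12, -5.0e-13, 0.0, 5.0e-13, 1.0e-12])    # 2D polarization, C/m

# The slope of polarization vs. equibiaxial strain equals 2*e31, since both in-plane directions are strained.
e31 = np.polyfit(eq_strain, polarization, 1)[0] / 2.0

print(f"d31 = {d31 * 100:.2f} pm/V, e31 = {e31:.2e} C/m")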

To compare values for 2D graphene with those of traditional, well-understood 3D materials, the e31 and d31 coefficients were divided by the 3.35 Å interlayer spacing, yielding an e31,3D of 0.17 C/m2 (with a similar value of 0.19 C/m2 obtained from the d31 data). Compared to the e11,3D value of 0.731 C/m2 for two-dimensional BN, this is smaller by a factor of about 4, but it must be noted that e11 coefficients are generally much larger than e31 coefficients. When e31,3D values are compared instead, the difference becomes much smaller: 0.31 C/m2 for wurtzite BN and -0.55 C/m2 for gallium nitride. It is important to note that graphene has the potential to reach much larger polarization magnitudes than either of these materials, since graphene can undergo far more elastic strain before plastically deforming. This demonstrates the possibility of engineering piezoelectric graphene comparable to known materials.
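
Schematically, this conversion amounts to dividing each 2D coefficient by an effective layer thickness taken as the graphite interlayer spacing (the notation here is mine, not the authors'):

e_{31,\mathrm{3D}} = \frac{e_{31,\mathrm{2D}}}{t}, \qquad t = 3.35\ \text{Å} = 3.35 \times 10^{-10}\ \text{m},

which turns a 2D coefficient with units of C/m into a conventional stress coefficient with units of C/m2.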

It was found that varying Li coverage (Fig. 2C) causes deviation in the relationship between the electric field and the induced strain (Fig. 3A). By plotting the d31 coefficient as a function of Li coverage density, a maximum value is obtained when n = 8, corresponding to the unit cell C8Li. When this density is decreased the d31 value shows a steep decline (Fig. 3A inset). In contrast, the static dipole moment of Li on graphene increases as the Li coverage density decreases, producing a stronger interaction with the electric field (Fig. 3B bottom inset).

However, at low coverage densities this interaction is diminished, and a maximum appears in the relationship between coverage density and d31 as a result of these competing effects (Fig. 3A inset). The e31 coefficient varies in a similar manner to d31, also showing a maximum value for the C8Li unit cell and decreasing values as the coverage density decreases. This shows that the magnitude of the piezoelectric response of doped graphene can be varied as a function of the adatom coverage density.

Considering only the C32Li system, the piezoelectric response becomes nonlinear when the electric field falls below -0.1 V/Å: past this point the strain decreases rapidly and linearly, with a trend-line gradient of 0.19 pm/V. This is because at approximately -0.1 V/Å, Li and graphene begin to undergo a charge transfer process. It should be noted that this charge transfer process could be used to very efficiently power the opening and closing of gates in transistor technologies.

Again considering all tested values of n, for electric field strengths greater than -0.1 V/Å, Li maintains a constant charge, but below this threshold the charge transfer process comes into effect and the charge on Li decreases. Furthermore, when the height of Li above the graphene sheet is tracked as a function of the electric field, a minimum is found at -0.1 V/Å for the C32Li system, while the other coverage densities still display linear behavior. When the electric field is larger than this value, varying the distance between Li and the graphene sheet produces no change in charge; when it is smaller, charge transfer occurs. The implication is that charge transfer from graphene to Li is theoretically responsible for the large piezoelectric response at fields below -0.1 V/Å.

We also investigated the effects of adatom position on the piezoelectric response of the doped graphene. While K and Li appear to diffuse moderately across the graphene sheet, this is not expected to affect the material's piezoelectric behaviour, because the doped graphene will be non-centrosymmetric regardless of the adatom location. The d31 coefficient was calculated with Li placed at the hollow, top, and bond sites of the unit cell, and the strain was tracked as an electric field was applied normal to the surface for each adatom position (Fig. 4A). Again, the gradient of the trend-line gives the value of the d31 coefficient, and these values are all within 5% of one another, indicating that the piezoelectric response is essentially independent of adatom position.

Crystallographic patterning of adatoms on graphene sheets with an adatom coverage density of C32Li showed that varying the relative position of the Li atoms in the unit cell produces a 20% change in the d31 coefficient, bringing it closer to the value for a coverage density of C8Li (Fig. 4B). Despite this, it is unlikely that the pattern of adatoms on the sheet has a significant impact on the strength of the piezoelectric response of the graphene-adatom system, based on the calculations carried out by the authors.

Discussion

Possessing the ability to calculate ground state molecular properties allows for an accurate determination of the behavior of materials under different theoretical conditions and permits the dynamic alteration of variables affecting the system. Widespread difficulty has been associated with producing specific desired behaviors in devices at the nanoscale (such as challenges of self-assembly).19,20 As such, being able to rapidly vary parameters in a given system and observe how the theoretical model responds to those changes is a very powerful tool. Undoubtedly, modeling techniques will play a large role in the creation of next generation devices and materials, allowing us to push the limits of technology beyond the bulk, and to finally enter the realm of nanoscience with devices such as sensors/actuators and transistors, among others.21,22

The authors' work has done much to further this goal. This study paints graphene in an entirely new light as a potential host to a myriad of applications in nanoscience and nanotechnology previously inaccessible. Demonstrating for the first time that a material which is not intrinsically piezoelectric can be made so through careful chemical processing broadens our understanding of how to tailor nanomaterials to our needs. Given this, there are many topics that can be discussed in light of this theoretical study and which the authors may wish to consider.

The ability of graphene to display piezoelectric behavior could be exploited extensively in the fledgling field of straintronics.7,8 In the previously discussed theoretical study, PZT nanomagnets were shown to be viable power sources for low-energy devices. One of the limitations of using this kind of material is its rigid nature and relatively large thickness (relative to graphene sheets) on the order of ~50 nm.8 By partially hydrogenating a graphene sheet, a compound known as graphone, a ferromagnetic and hence magnetostrictive material is obtained.23 A small number of these could be stacked with a layer of piezoelectric graphene surrounding, within, or in between these, resulting in a setup comparable to the PZT nanomagnet studied. An advantage of this type of material stems from the enormous tensile strength and high flexibility of graphene films.24,25,26

Additionally, the excellent ability of graphene to conduct thermal energy only increases its appeal for use in molecular electronics.27 Nanodevices are currently expensive to manufacture, and their often integrated nature and small size make manual repair impossible, so if a defect occurs the device as a whole must be replaced. With increasing power densities as devices are scaled down, thermal effects on devices must be given increasing importance. The high thermal conductance displayed by graphene may play a key role in maintaining the functionality of nanodevices under thermal stress by facilitating heat flow away from the delicate electronics within.28 The thermal expansion that necessarily accompanies such heat conduction produces strain in the graphene that could further assist in power generation in straintronics applications.

Graphene has also found use as an electrode material in supercapacitor technology. A graphene nanofiber composite with the conducting polymer polyaniline has shown capacitance values as high as 480 F/g at a current density of 0.1 A/g, and this nanocomposite also shows good cycling stability during the charge-discharge process.29 Traditional piezoelectric materials display problems such as brittleness (as with PZT), among others.30 Graphene-based piezoelectric devices would not suffer from brittleness, owing to their high flexibility, and piezoelectric graphene would allow for efficient energy harvesting for use in supercapacitors. The technologies discussed here point toward integrating graphene as several different components of the same device, with its role determined by its treatment and resulting properties. This theme may represent a future in which carbon-based materials completely replace silicon in fields such as molecular electronics. Graphene has the potential to revolutionize the development of nanodevices because its chemical versatility and its exceptional mechanical and electrical properties allow it to take on so many chemical and physical roles.

Despite all of this, graphene is centrosymmetric and as such is not intrinsically a piezoelectric material; if the calculations in this work are to lead to synthetic realization of the properties modeled, graphene will require processing to make it piezoelectric. Since BN and GaN, among others, are well-established piezoelectric materials that are clearly capable of being produced as nanotubes or monolayer sheets, why should graphene be a more attractive alternative?31,32,33 This appeal is outlined by the authors, and it is exactly because graphene is not intrinsically piezoelectric that it holds such potential. Since the piezoelectricity arises from adatom doping, graphene could be selectively doped to produce areas or patterns that are piezoelectric and others that are not. Furthermore, graphene has a zero band gap, but this can easily be varied by adding dopants to the sheet. Choosing BN nanosheets fixes the device band gap at 4.60 eV, and GaN sheets at 3.4 eV, without significant evidence that these values can be altered appreciably.31,33 Graphene therefore possesses much more versatility in its applications, since it can be altered in various ways to serve a multitude of different tasks in the same device.

In future studies towards this goal, it would be valuable to examine this response at higher strain. The authors model the piezoelectric effect between -1% and 1% strain, noting that a rough conversion indicates graphene may be comparable to BN and GaN. They also mention that it may be able to surpass these materials since it is capable of enduring higher strain before failing. It would be interesting to probe the extent to which graphene can be deformed as a piezoelectric material and how this affects the magnitude of the piezoelectric response.

However, the authors' model may rest on oversimplified assumptions about the proposed unit cells. They model every carbon atom as associated with an adatom for top site binding, or every hollow site as associated with an adatom for hollow site binding. Hydrogen is one of the adatoms modeled with all of the top sites on the graphene sheet occupied. Contrary to this, experimental studies have demonstrated that at high hydrogen concentrations, hydrogen adatoms tend to cluster rather than maintain a homogeneous dispersion (Fig. 5).34 It has also been demonstrated that the binding energy for fluorine in this fully occupied pattern is extremely low compared to more dispersed patterning, and that such a compound would likely be highly unstable if it formed.35 While homogeneous dispersion may hold for low dosages of adatoms, piezoelectric graphene may need to be synthesized before one can say with any great conviction how adatoms behave at high concentrations on the sheet's surface.

The authors' work has opened up many avenues of pursuit in the effort to shift toward carbon-based electronics. Of particular note, in the event of its synthesis, are the applications for straintronic devices and similar molecular electronics. There are undoubtedly a host of other applications in which piezoelectric graphene could serve as an invaluable material. Research on graphene will almost certainly provide the materials necessary to transition from current technologies to smaller, more efficient, and less energy-intensive ones. Graphene possesses an apparent wealth of properties arising from manipulation of its structure and interactions, its extreme thinness, and its extraordinary strength. These are just some of the known properties of graphene that suggest it will play a major role in the implementation of nanotechnologies in the future.

References

  1. Ekinci, K. L. “Electromechanical Transducers at the Nanoscale: Actuation and Sensing of Motion in Nanoelectromechanical Systems (NEMS).” Small 8.9 (2005): 786-797.
  2. Sharma, N. D., R. Maraganti and P. Sharma. “On the Possibility of Piezoelectric Nanocomposites Without Using Piezoelectric Materials.” Journal of the Mechanics and Physics of Solids 55.11 (2007): 2338-2350.
  3. Kon, Stanley, Kenn Oldham and Roberto Horowitz. “Piezoresistive and Piezoelectric MEMS Strain Sensors for Vibration Detection.” Tomizuka, M., Chung-Bang Yun and Victor Giurguitui. Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2007. SPIE, 2007. 65292V-1 - 65292V-11.
  4. Callister, William D. Jr. and David G. Rethwisch. Materials Science and Engineering: An Introduction. 8e. Hoboken: John Wiley & Sons Inc., 2010.
  5. Do, Dal-Hyun. Investigation of Ferroelectricity and Piezoelectricity in Ferroelectric Thin Film Capacitors Using Synchrotron X-Ray Microdiffraction. PhD Thesis. University of Wisconsin-Madison. Madison: ProQuest Dissertations and Theses, 2006.
  6. Ouyang, Jun, R. Ramesh and A. L. Roytburd. “Intrinsic Effective Piezoelectric Coefficient e31,f for Ferroelectric Thin Films.” Applied Physics Letters 86 (2005): (152901)1-3.
  7. Iwasaka, Yoh. “Stress-Driven Magnetization Reversal in Magnetostrictive Films With In-Plane Magnetocrystalline Anisotropy.” Journal of Magnetism and Magnetic Materials 240 (2002): 395-397.
  8. Roy, Kuntal, Supriyo Bandyopadhyay and Jayasimha Atulasimha. “Hybrid Spintronics and Straintronics: A Magnetic Technology for Ultra Low Energy Computing and Signal Processing.” Applied Physics Letters 99.6 (2011): (063108)1-3.
  9. Geim, A. K. and K. S. Novoselov. “The Rise of Graphene.” Nature Materials 6 (2007): 183-191.
  10. Lee, Changgu, et al. “Measurement of the Elastic Properties and Intrinsic Strength of Monolayer Graphene.” Science (2008): 385-388.
  11. Novoselov, K. S., et al. “Electric Field Effect in Atomically Thin Carbon Films.” Science 306 (2004): 666-669.
  12. Lee, Youngbin, et al. “Wafer-Scale Synthesis and Transfer of Graphene Films.” Nano Letters 10 (2010): 490-493.
  13. Wu, Zhong-Shaui, et al. “Synthesis of Graphene Sheets with High Electrical Conductivity and Good Thermal Stability by Hydrogen Arc Discharge Exfoliation.” ACS Nano 3.2 (2009): 411-417.
  14. Ohta, Taisuke, et al. “Controlling the Electronic Structure of Bilayer Graphene.” Science 313 (2006): 951-954.
  15. Sonnleitner, Tobias, et al. “Molecular Symmetry Governs Surface Diffusion.” Physical Review Letters 107 (2011): (186103)1-4.
  16. Argaman, N. and G. Makov. “Density Functional Theory: An Introduction.” American Journal of Physics 68.1 (2000): 69-79.
  17. Levine, Ira N. Quantum Chemistry. 4th Edition. Englewood Cliffs: Prentice-Hall, 1991.
  18. Zhang, Tuanbo, et al. “Direct Observation of a Widely Tunable Bandgap in Bilayer Graphene.” Nature 459 (2009): 820-823.
  19. Katsuhiko, A., et al. “Challenges and Breakthroughs in Recent Research on Self-Assembly.” Science and Technology of Advanced Materials 9 (2008).
  20. Cui, Z. and C. Gu. “Nanofabrication Challenges for NEMS.” 1st IEEE International Conference on Nano/Micro Engineered and Molecular Systems. 2006.
  21. Ekinci, K. L. “Electromechanical Transducers at the Nanoscale: Actuation and Sensing of Motion in Nanoelectromechanical Systems (NEMS).” Small 8.9 (2005): 786-797.
  22. Timp, G., et al. “The Ballistic Nano-Transistor.” Electron Devices Meeting. 1999. 55-58.
  23. Zhou, J., et al. “Ferromagnetism in Semihydrogenated Graphene Sheet.” Nano Letters 9.11 (2009): 3867-3870.
  24. Frank, I. and D. Tanenbaum. “Mechanical Properties of Suspended Graphene Sheets.” Journal of Vacuum Science and Technology B 25.6 (2007): 2558-2561.
  25. Bunch, Joseph. “Mechanical and Electrical Properties of Graphene Sheets.” Dissertation. 2008.
  26. Kim, B., et al. “High-Performance Flexible Graphene Field Effect Transistors with Ion Gel Gate Dielectrics.” Nano Letters 10 (2010): 3464-3466.
  27. Balandin, Alexander, et al. “Superior Thermal Conductivity of Single-Layer Graphene.” Nano Letters 8.3 (2008): 902-907.
  28. Shahil, Khan and Alexander Balandin. “Graphene-Multilayer Graphene Nanocomposites as Highly Efficient Thermal Interface Materials.” Nano Letters 12 (2012): 861-867.
  29. Zhang, K., et al. “Graphene/Polyaniline Nanofiber Composites as Supercapacitor Electrodes.” Chemistry of Materials 22.4 (2010): 1392-1401.
  30. Wang, Lei. “Vibration Energy Harvesting By Magnetostrictive Material for Powering Wireless Sensors.” PhD Thesis. 2007.
  31. Golberg, D., et al. “Boron Nitride Nanotubes and Nanosheets.” ACS Nano 4.6 (2010): 2979-2993.
  32. Goldberger, J., et al. “Single-Crystal Gallium Nitride Nanotubes.” Letters to Nature 422 (2003): 599-602.
  33. Haiming, L., et al. “Electronic Structures and Magnetic Properties of GaN Sheets and Nanoribbons.” Journal of Physical Chemistry C 114.26 (2010): 11390-11394.
  34. Balog, R., et al. “Atomic Hydrogen Adsorbate Structures on Graphene.” Journal of the American Chemical Society 131 (2009): 8744-8745.
  35. Robinson, J., et al. “Properties of Fluorinated Graphene Films.” Nano Letters 10 (2010): 3001-3005.

Comment

3D Micro/Nano-Sculptures made of a Single Walled Carbon Nanotube Polymer Matrix via Two Photon Polymerization

Abstract

The objective of this study was to fabricate 3D micro/nano-structures with an even distribution of single-walled carbon nanotubes (SWCNTs). We utilized two-photon polymerization by means of a pulsed femtosecond laser for fabrication and Raman microscopy for the detection of carbon nanotubes in the structures. To test the capability of our laser, we fabricated an assortment of microstructures, including an 8 micron gear, a 7 micron dog, and a 100 nm diameter nano-wire. We believe that this study will open the door to the fabrication of even more intricate micro-structures with evenly embedded carbon nanotubes for practical purposes.

Introduction

Two-photon polymerization (TPP) is a well-established method for fabricating intricate 3D micro/nano structures from polymers. These 3D structures have vast potential in applications such as micro-electromechanical devices, sensors, and targeted drug delivery systems. However, it remains necessary to functionalize these polymer micro-structures and enhance their mechanical properties for practical applications. To this end, single-walled carbon nanotubes (SWCNTs) are widely regarded as ideal fillers for reinforcing polymers due to their high Young’s modulus (up to 1.5 TPa), high tensile strength (up to 63 GPa), high aspect ratios (up to 10^6), and small diameters (~1 nm).1 In this report, we establish a novel way to evenly embed SWCNTs into 3D micro/nano polymer structures by means of TPP.

Dispersing SWCNTs Evenly in Photoresin

Our photoresin consisted of 0.01 wt % SWCNTs, 96.67 wt % R712, and 1.67 wt % photoinitiator and photosensitizer. SWCNTs were evenly distributed in the photoresin by sonicating the mixture for 2-3 hours. The UV spectrum of the photoresin (Figure 2) showed high absorption below 390 nm and low absorption in the IR region, making the photoresin suitable for two-photon polymerization.

Two Photon Polymerization

A Ti:Sapphire pulsed femtosecond laser operating at 780 nm and an intensity of 7 mW was utilized to excite two-photon absorption in the SWCNT-dispersed UV photo-polymerizing resin. Two photons were required to photo-polymerize the resin because the resin absorbs strongly at 390 nm while the laser operates at 780 nm; two 780 nm photons absorbed simultaneously supply the energy of a single 390 nm photon. Polymerization therefore occurred only at the focal spot of the laser, where the photon intensity was highest (see Figure 3). In this manner, nanometric volumes of photoresin were photopolymerized point by point along the trajectory of the focal spot (whose movement was dictated by a pre-programmed CAD file). An assortment of interesting micro/nano-sculptures with an even dispersion of SWCNTs was obtained.
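
A quick check of the energy bookkeeping behind this statement (standard photon-energy arithmetic, not taken from the report) confirms that two 780 nm photons absorbed together supply the same energy as one 390 nm photon:

# Photon energy E = h*c/wavelength; two 780 nm photons match one 390 nm photon.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

E_780 = h * c / 780e-9   # energy of one 780 nm photon, ~2.5e-19 J (~1.6 eV)
E_390 = h * c / 390e-9   # energy of one 390 nm photon, ~5.1e-19 J (~3.2 eV)

print(f"2 x E(780 nm) = {2 * E_780:.3e} J, E(390 nm) = {E_390:.3e} J")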

Fabricated Microstructures

Figure 4 shows a few examples of micro-sculptures that I fabricated this summer using two-photon polymerization. All of these structures exhibited a uniform dispersion of SWCNTs.

SWCNTs Embedded in Micro/Nano-Structures

The uniform embedment of SWCNTs in our fabricated micro/nano-structures was detected by Raman Microscopy instrumentation (Raman-11).

To this end, a laser with a wavelength of 785 nm, a 10 s exposure time, and an intensity of 0.56 mW was utilized. Our sample was placed on a glass substrate and exhibited Raman scattering when irradiated with the laser. The Raman scattering was separated from the Rayleigh scattering by a filter and dispersed into its component frequencies at a grating. A CCD camera then recorded the signal, from which a spectrum of Raman intensity versus Raman shift was constructed. Using the computer, we could select a particular frequency and assign a color to the corresponding Raman intensity at that frequency. We set the Raman intensity at the G-band frequency (1590 cm-1) to white and every other signal to black, so that white represented high G-band intensity and hence a high concentration of SWCNTs. From this information, the computer reconstructed the Raman image point by point, assigning a white pixel wherever the sample showed a strong G-band signal. If the resulting image resembled our sample, the dispersion of SWCNTs was uniform.
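
The mapping step described above can be sketched in a few lines of Python; here a hypothetical hyperspectral cube (one Raman spectrum per pixel) is thresholded at the G-band shift to produce the black-and-white image. The array names, dimensions, and threshold are illustrative assumptions, not the actual interface of the Raman-11 instrument software.

import numpy as np

# Hypothetical hyperspectral Raman data: spectra[y, x, k] is the intensity at pixel (y, x)
# for Raman shift shifts[k] in cm^-1 (placeholder random values, not measured data).
shifts = np.linspace(1000, 2000, 501)
spectra = np.random.default_rng(0).random((64, 64, shifts.size))

g_band = 1590                                   # G-band Raman shift of SWCNTs, cm^-1
k = np.argmin(np.abs(shifts - g_band))          # channel nearest to 1590 cm^-1

g_map = spectra[:, :, k]                        # per-pixel G-band intensity
threshold = 0.5                                 # illustrative cutoff separating signal from background
image = np.where(g_map > threshold, 255, 0)     # white where the G-band signal is strong, black elsewhere

print(image.shape, image.min(), image.max())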

As seen in Figure 5, comparing the SEM image, brightfield image, and Raman image of the micro bull and nanowire shows a uniform dispersion of nanotubes. Furthermore, when the Raman spectra were taken on and off the structure, a sharp G-band peak appeared on the structure while no signal appeared off the structure, indicating that nanotubes exist only within the structure.

Alignment of SWCNTs in Nano-wires

SWCNTs were found to be aligned along the axis of the fabricated nano-wires as observed by polarized Raman microscopy. To this end, we observed that the strongest Raman signal arose when the nano-wire was parallel to laser polarization, and the lowest signal occurred when the nano-wire was perpendicular. Figure 6 shows the G-band Raman intensity vs. degree of nano-wire orientation. While SWCNT alignment may be due to spatial confinement, the reason behind their alignment is still being researched by my research team at Osaka University today.

Conclusion

We have successfully demonstrated 3D micro/nano fabrication of a SWCNT-polymer matrix using TPP and shown that SWCNTs were embedded in the microstructures and aligned along the nanowire axis. We believe that these methods will open the door to a variety of future applications needing SWCNT-reinforced micro-structures, such as drug delivery devices, sensors, and MEMS.

References

  1. Coleman, J. N. Carbon 2006, 44, 1624–1652.
  2. Ichida, M. Appl. Phys. A 2004, 78, 1117–1120.

Comment

Eating Wheat: Avoiding the Bad and Getting the Good

A bagel or bowl of cereal is common for breakfast, followed by a sandwich or burger for lunch. Dinner often stars pasta, pizza, or a casserole as the main dish. There is one ingredient that lurks in nearly every American meal.

Wheat. It’s the main ingredient in bread, the most purchased packaged food in the United States.1 It plays an integral role in many diets, but if not correctly consumed, it can damage the human body. The harmful effects include increased risk of weight gain, cardiovascular disease, and even cancer.2 To avoid adverse effects while reaping the benefits wheat offers, three factors should be considered: wheat type (whole-grain or refined), the portion size, and the accompanying ingredients.

Whole Grains Instead of Refined Grains

Guidelines from the United States Department of Agriculture (USDA) recommend that Americans consume whole-grain rather than refined wheat. Currently, average consumption of whole-grain foods is approximately one serving a day, falling short of the recommended three servings.3 Wheat grains are divided into three parts: endosperm, germ, and bran (Figure 1). Whole-grain wheat retains the germ and bran. In contrast, refined grains have the bran and germ separated from the starchy endosperm, which makes up about 80% of the grain. Unfortunately, this processing robs wheat of the majority of its nutrients, which are concentrated in the bran and germ.

Whole-grain wheat has nearly ten times more dietary fiber, five times as many vitamins and cancer-preventing phenolic compounds, and three times as many essential minerals, including zinc, iron, and selenium, as refined wheat (Table 1).

The extra dietary fiber of whole-grain wheat is itself a compelling reason to choose it over refined wheat. Increased consumption of dietary fiber has been observed to improve cholesterol concentrations, lower blood pressure, and aid in weight loss. These effects all reduce the risk of coronary heart disease, the leading cause of adult deaths in the United States.4 High-fiber foods also promote beneficial metabolic effects and help control caloric intake by increasing satiety. Dietary fiber, consisting of insoluble and soluble components, promotes gastrointestinal health by acting as a prebiotic for beneficial bacteria in the colon. Both types of fiber also provide cardiovascular benefits by lowering “bad” LDL cholesterol.

In the broader context of a person’s entire diet, high-fiber foods often have lower energy density and take longer to eat. These two traits promote satiety, curbing consumption of potentially unhealthy foods and lowering total caloric intake. Eating refined wheat, such as white bread and pasta, causes one not only to forego nutrients but also to consume more calories before feeling full. Overconsumption of calories coupled with physical inactivity is a major risk factor for heart disease and obesity.5

Control Portion Size

In addition to considering what type of wheat one eats (e.g., whole-wheat instead of white bread for toast in the morning), an equally important factor is quantity. Regularly feasting upon large portions of even whole-grain wheat results in damaging spikes in blood sugar that can lead to the chronic condition of Type 2 diabetes.6 Since diabetes is the leading cause of kidney failure in the United States and doubles the risk of stroke, its correlation with wheat consumption is important to understand.7

The biochemical phenomenon underlying this link is called insulin resistance. Insulin is a hormone that stimulates various tissues to store glucose from the blood as glycogen. When carbohydrates are digested, they are broken down into glucose, which is transported into the bloodstream, consequently increasing blood sugar levels. This causes pancreatic beta cells to synthesize insulin, which signals tissues to convert the increased glucose into glycogen. When tissues stop responding efficiently to insulin and the pancreas can no longer compensate, the resulting condition is Type 2 diabetes.

Even though the USDA advises adding whole-grain wheat to one’s diet, its guidelines do not account for the spiking effect on blood sugar when a large portion is eaten in a short time frame. The guidelines use a rating system called the glycemic index (GI) that is widely utilized in nutrition studies as a quality standard of carbohydrate foods.8 Wonder®, a fully enriched white bread, has a GI of 71, while bread made of 80% whole-grain and 20% refined wheat flour has a GI of 52.8 In practical terms, these GI values mean that the white bread produces a blood sugar response about 71% as large as that caused by an equivalent amount of pure glucose, while the mostly whole-grain bread produces a response 52% as large. Based on the aforementioned pathogenic contribution of blood sugar spikes, the lower GI of whole-wheat bread quantitatively demonstrates its superiority over white bread.

However, consider the following: the Twix candy bar has an even lower GI of 44. Watermelon has a GI of 72. How does this make sense? The glycemic index fails to account for realistic portion sizes. When the foods are empirically tested on people for their effects on blood sugar, the quantities eaten are equivalent to 50 grams of carbohydrates. Three-quarters of a king-sized Twix bar constitutes 50 grams of carbohydrate, but so do 5 cups of diced watermelon. This difference in volume is due to the fiber and water content of watermelon.

Realistically, a person is likely to eat a whole king-sized Twix bar or one cup of diced watermelon in one sitting. Adjusting for actual serving sizes and assuming linearity, the Twix bar has what is now called a glycemic load (GL) of 58.7 and the watermelon a GL of 14.4. As a more realistic application of GI values, glycemic load emphasizes control of portion sizes when eating carbohydrates. The GI of whole-grain wheat is consistently lower than that of refined wheat, but the difference is small enough that one cup of refined-flour pasta may be better than two cups of whole-wheat pasta for preventing Type 2 diabetes.
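
The serving-size adjustment can be made explicit. The short sketch below reproduces the article’s numbers by scaling each food’s GI by the carbohydrate actually eaten relative to the 50-gram test portion; note that the more common definition of glycemic load multiplies GI by grams of carbohydrate and divides by 100, which gives values half as large.

# Scale GI by the carbohydrate actually eaten relative to the 50 g test portion.
# (The conventional glycemic load formula divides by 100 instead.)
def portion_adjusted_gi(gi, carbs_eaten_g, test_portion_g=50):
    return gi * carbs_eaten_g / test_portion_g

# Whole king-sized Twix: 3/4 of a bar is 50 g of carbohydrate, so a whole bar is about 66.7 g.
print(portion_adjusted_gi(44, 50 / 0.75))   # ~58.7
# One cup of diced watermelon: 5 cups contain 50 g of carbohydrate, so one cup has 10 g.
print(portion_adjusted_gi(72, 10))          # 14.4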

Watch Out for Accompanying Ingredients

The final factor to consider is that wheat is rarely eaten alone. In the processing and cooking required to make it edible, wheat is nearly always mixed with other ingredients that are potentially harmful. Most breads, pastas, pancakes, cereals, and other wheat products list at least five ingredients after the primary wheat ingredient; these are broadly classified as preservatives, sweeteners, emulsifiers, leavening agents, flavor enhancers, and dough conditioners. All of these additives influence both how one feels shortly after eating and the body’s long-term health. In particular, one should avoid partially hydrogenated oils and moderate one’s intake of high-fructose corn syrup.

Added as dough conditioners and preservatives, partially hydrogenated oils are a significant contributor to coronary artery disease, which causes at least 30,000 premature American deaths per year.9 They contain trans fats, which have been unequivocally linked to lowering “good” high-density lipoprotein (HDL) cholesterol and raising “bad” low-density lipoprotein (LDL) cholesterol. Although large companies have removed trans fats, including partially hydrogenated oils, from foods such as Kraft’s Oreos in response to mounting criticism beginning in 2005, numerous food companies still include partially hydrogenated oils in their wheat products. For example, cake mixes, packaged baked goods, and peanut butter are regularly made with partially hydrogenated oils because they simplify manufacturing and reduce costs while increasing the final product’s shelf life. Manufacturers obfuscate this addition by stating the trans fat content of foods as “0g” on nutrition labels. This is allowed because foods containing less than 0.5 grams of trans fat per serving may be labeled as having none. However, those sub-half-gram amounts add up when one consumes multiple servings of foods such as chips or crackers. Instead, check for the words “partially hydrogenated” or “shortening” in the ingredients list.

While partially hydrogenated oils are conclusively life-threatening, high-fructose corn syrup (HFCS) is a more controversial additive. Manufacturers favor HFCS as a sweetener in wheat products because of its lower cost, sweeter taste, and higher miscibility. Scientists hypothesize that this corn-derived sugar has endocrine effects that lead to obesity, Type 2 diabetes, and metabolic syndrome.8 Insulin and leptin are key hormonal signals that regulate a person’s sense of hunger, but consumption of high-fructose corn syrup blunts these signals, weakening their control of calorie intake. Another consequence of foods sweetened with HFCS is plaque buildup inside the arteries.10 Nearly any sweet good made from wheat is likely to contain HFCS. Although data on its health effects are still inconclusive, it is prudent to limit HFCS in one’s diet.

Being a health-conscious consumer of wheat can mean significant changes in daily choices of which foods to eat and how to eat them. Whole grains provide more fiber and life-boosting nutrients than refined grains, but accompanying ingredients in available food choices need to be considered as well. More importantly, the impacts of wheat on blood sugar need to be controlled by consuming a commensurate amount of fruits and vegetables. Awareness and application of these principles are the main steps to avoiding the bad and getting the good of wheat.

References

  1. Nielsen Homescan Facts, The Nielsen Company. http://www.marketingcharts.com/television/nielsen-issues-us-top-10-lists-for-2007-2700/nielsen-2007-top-10-cpg-purchased-us-homes.jpg/(Accessed Jan. 15, 2013).
  2. Slavin, J. L. Amer. J. Clin Nutr. 1999, 70, 459S-63S.
  3. Cleveland, L. E. J. Amer. Coll. Nutr. 2000, 19, 331–8.
  4. Anderson, J. W. Nutr. Rev. 2009, 67, 188–205.
  5. Swinburn, B. Public Health Nutr. 2007, 7, 123–46.
  6. Liu, S. J. Amer. Coll. Nutr. 2002, 21, 298–306.
  7. World Health Organization: Diabetes Fact Sheet, Media Centre. 2012 http://www.who.int/mediacentre/factsheets/fs312/en/index.html (Accessed Jan. 15, 2013).
  8. Foster-Powell, K. Amer. J. Clin Nutr. 2002, 76, 5–56.
  9. Ascherio, A. Amer. J. Clin Nutr. 1997, 66, 1006S–10S.
  10. Stanhope, K. Amer. J. Clin Nutr. 2008, 88, 1733S-7S.
  11. General Mills. What is Whole Grain, Anyway? Demystifying Whole Grains. http://wholegrainnation.eatbetteramerica.com/images/content/facts_seed.jpg (Accessed Jan. 15, 2013).
  12. Thompson, L. U. Contemp. Nutr. 1992, 17.

Comment

The Ghostly Haunting of Limb Lost

The brain’s neural pathways are like a city’s infrastructure. Once the routes and support structures are firmly in place, it is difficult to remove them to construct a new route. This helps explain amputees’ reports of phantom limbs and the painful sensations they radiate. How much of the pain is real and how much is psychological has yet to be determined, but treatments address both sources.

The phantom limb was first documented by Dr. S. Weir Mitchell after observations of Civil War amputees.1 It is a fascinating enigma that has appeared in literature: Captain Ahab’s missing leg in Herman Melville’s Moby Dick, Captain Hook’s lost hand in J. M. Barrie’s Peter Pan, and Long John Silver’s absent leg in Robert Louis Stevenson’s Treasure Island. Why does the brain yearn for the absent limb so strongly that phantom sensations emerge? The answer may reside in the ascending sensory pathways from the peripheral nervous system. Once established, the brain finds it difficult to change the expected input from these neural pathways.

During infancy, the brain examines the body to understand itself spatially and topologically, building upon this image from the senses throughout life. Interestingly, those who undergo amputations in infancy experience neither the sensation nor the pain of phantom limbs because the missing limbs had not been present long enough to establish a solid pathway.2 For those who retain their limbs, however, the development of the senses in early childhood is faster than at any other point. Changing body image at an advanced age is too drastic and demanding for the brain. One contributing factor is the diminished brain size of the elderly. On average, the brain loses 5-10% of its weight between the ages of 20 and 90, with a higher proportion lost at older ages.3 In addition, the grooves on the surface widen while the ridges become smaller. Deep grooves in the brain indicate increased surface area for synapses, the connecting spaces between neurons, to form. Moreover, the formation of neurofibrillary tangles, decayed portions of the dendrites that receive sensory information from other neurons, impedes information transmission.3 Finally, abnormally hard clusters of damaged or dying neurons, known as “senile plaques,” emerge and accumulate. Neurons are not replaced when they die, so as one gets older, one literally has less to work with. Thus, with decreased plasticity, the body image becomes fixed, and the brain falls back on the representation formed in earlier years.

This pathway, however, is not indestructible, because amputees report that phantom limb sensations decrease with time. Thanks to its plasticity, the brain slowly “rewires” itself, abolishing old connections in favor of new, useful connections elsewhere. For example, after an amputation, patients often describe feeling the entire appendage, with the most awareness at the distal (end) portions of the limb (i.e., the fingers and toes compared with the forearms and calves, respectively).4,5 This is because distal anatomical structures contain the greatest density of sensory nerves and command a larger portion of the somatosensory cortex. In time, however, the phantom limb perception shrinks until it disappears into the stump.6-8

These concepts are visualized by the sensory homunculus (Figure 1), in which the size of each appendage reflects its sensitivity and thus the concentration of neurons devoted to it. Infants use their hands, lips, and tongue frequently in order to shape and understand their world. Since more neurons are dedicated to these extremities, it takes longer to rewire the corresponding pathways. Instead, the brain completely rewires the proximal portions of the limb first, so that the phantom sensation along the length of the appendage seems to shrink faster than it does at the distal portions.

When different subjects encounter identical stimuli, the sensations they experience are usually comparable. For example, when we touch a pot on a lit stove, we feel burning, not tickling. With amputees, this precedent does not hold. Each amputee’s phantom limb is unique: it can feel authentically present or obviously fake, painful or painless. There is little to suggest that patients are lying about the pain, yet it is well known that the brain frequently tricks the body.

Psychological pain can also manifest itself as physical pain. Amputees who feared they would be unable to recover became hostile toward and jealous of other members of society, and with these heightened emotions they experienced pain in the phantom limb. However, once these patients underwent therapy and developed a positive attitude, the pain faded.2

Traditional approaches to alleviating phantom pain, such as nerve-block injections, myoelectric prostheses, and cordotomy, are largely procedural.9,10 A nerve block is an injection of a local anesthetic to stop transmission of a message along the nerve so that the brain never receives the pain signal from the stump. A myoelectric prosthesis is an artificial limb that uses electronic sensors to translate muscle and nerve activity into the intended movement. While the brain is manipulated into replacing the phantom limb with an artificial one, prosthetics often do not alleviate phantom pain. One theory for this is that since visual sensory information contradicts tactile sensory information, the brain refuses to be tricked. Cordotomy is the most invasive of the procedures listed because it requires a neurosurgeon to disable certain ascending tracts in the spinal cord. Thus, it is only employed in severe cancer- or trauma-related cases. Despite the variety of approaches, the results are only modestly effective at best.9

Recently, researchers have turned to mind-body therapies to relieve chronic phantom pain, yielding tentatively successful results. A review by Dr. Vera Moura of the Department of Physical Medicine and Rehabilitation on Integrative Medicine at University of North Carolina Hospitals tied together studies that used hypnosis, guided imagery, and biofeedback (such as visual mirror exercises).11 These non-invasive mind-body alternatives consider the psychological aspect of pain. Hypnosis has been found to reduce postsurgical pain, so researchers attempted to transfer its effects to amputees.12 In several studies, arm amputees varying in sex and age saw a reduction in pain frequency and intensity after attending hypnosis sessions.13-15 These studies indicate that mind can truly triumph over matter, but caution must be taken because trial sizes were small and hypnosis is a murky field. Therefore, more research is necessary before any definite conclusions can be made.

Guided imagery is another mind-body approach; it extends beyond the typical use of our senses, recruiting more neural pathways than usual to create a memorable mental image. This treatment combines interactions between patient and therapist with interactions between the patient and his or her body image.9 In Zuckweiler’s experiment, 14 patients with diverse backgrounds had 5 to 15 imagery sessions, during which they attempted to reprogram their minds to accept the new body form.16 Patients were taught Zuckweiler Image Imprinting (ZIP), which involves taking an object and storing it as a mental image. They were then asked to compare their phantom limb pain to the object in their mental image and switch the sensations associated with the two objects. Over time, as the phantom sensation decreased through the use of different mental images, the discrepancy between the new body image without the limb and the old body image with the limb was reconciled. Zuckweiler’s study showed successful reduction of pain intensity within only six months. ZIP forces patients’ minds to accept their new bodies, and since the method encompasses visual, auditory, and kinesthetic learning, customized treatment allows patients to comprehend and create new connections.

The final mind-body approach is biofeedback, of which there are two popular kinds. Thermal biofeedback teaches patients to increase the peripheral skin temperature at the stump.17 This seems unlikely, since body temperature is an autonomic function along with vital processes such as heart rate and breathing. In some instances, however, an individual can exert partial, conscious control. Although the hypothalamus is responsible for maintaining the standard body temperature of 37.0°C (98.6°F), it is possible for conscious effort to affect peripheral skin temperature. Successful patients begin to link skin temperature with pain.18 Physiologically, regulating one function often couples its response to another stimulus. For example, thermal biofeedback was coupled with breathing relaxation techniques, which raised the temperature of the stump and relaxed the patient, decreasing the pain and thus increasing the patient’s ability to contend with the pain that remained. It is unknown, however, whether thermal biofeedback is an effective treatment for all phantom pain; like most areas of science and medicine, more research is needed.

The second biofeedback type, visual mirror feedback (VMF), uses a box with mirrors to fool the brain. A rectangular box with no top and two holes for the arms (or legs) is set in front of a patient. In the middle of the box is a one-sided mirror septum facing the intact limb (Figure 2). Patients are thus presented with the illusion that both appendages are whole. Dr. Ramachandran, the inventor of this technique, conducted a study in which 10 amputee patients were treated with VMF in six sessions of 5 to 15 minutes a day for several weeks.19 Every patient had a positive reaction that included reductions in pain, pain intensity, mobility restriction, and spasms. Once again, there was a conscious effort to train the brain, so patients were able to redirect unpleasant sensations. This therapy is almost the opposite of ZIP, since the patient alleviates the pain by picturing the limb as whole rather than by ignoring it. VMF treatment is one of the most common due to its success amongst many different amputees.

A theory behind mind-body approaches’ emerging successes is the conscious effort patients put forth to overcome pain. In previously mentioned traditional procedural methods, patients passively receive a certain treatment and hope to obtain a positive result. In some cases, there are even negative side effects; for example, a nerve block may lead to rashes, itching, and an abnormal rise in blood sugar. Invasive procedural approaches like the cordotomy can only be attempted once. Mind-body approaches can be practiced, optimized over time, and are much safer than procedural methods.

Understanding of phantom pain has progressed significantly since its initial documentation during the Civil War. Traditional procedural methods to treat it have been developed, but recently, the psychological aspect of pain and sensation has been addressed in mind-body methods. Unfortunately, neither approach has achieved complete success, partially because of the individualistic nature of phantom limbs and the associated pain. The neurological explanations behind both phenomena are relatively unknown, but it is agreed the ghostly perceptions are a mixture of psychological and real sensations. Perhaps the most effective treatments are those that address both.

References

  1. Lehrer, J. Proust Was a Neuroscientist; Houghton Mifflin: Boston, 2008
  2. Kolb, L. C. The Painful Phantom, Psychology, Physiology, and Treatment; Charles C. Thomas: Springfield, Illinois, USA, 1954.
  3. Guttman, M. The Aging Brain. USC Health Magazine http://www.usc.edu/hsc/info/pr/hmm/01spring/brain.html (Accessed Jan. 22, 2013).
  4. Newton, A. Somatosensory Map. http://www.alinenewton.com/neuroscience.htm (Accessed Jan. 18, 2013).
  5. Pain and Touch, Handbook of Perception and Cognition. 2nd ed. Lawrence Kruger, Ed.; Academic: San Diego. CA, 1996.
  6. Jensen, T. S. Pain. 1985, 21, 267-78.
  7. Hunter, J. P. Neuroscience. 2008, 156, 939-49.
  8. Desmond, D. M. Int. J. Rehabil. Res. 2010, 33, 279-82
  9. Lotze, M. Nat. Neurosci. 1992, 2, 501-2.
  10. Pool, J. L. Ann. Surg. 1946, 124, 386-91.
  11. Moura, V. L. Am. J. Phys. Med. Rehabil. 2012, 8, 701-14.
  12. Black, L. M. J. Fam. Pract. 2009, 58, 155-8.
  13. Oakley, D. A. Clin. Rehabil. 2002, 18, 84-92.
  14. Bamford, C. Contemp. Hypn. 2006, 23, 115-26.
  15. Rickard, J. A. Ph.D. Dissertation, Washington State University, Pullman, WA, 2004.
  16. Zuckweiler, R. JPO. 2005, 17, 113-8.
  17. Sherman, R. A. Am. J. Phys. Med. 1986, 65, 281-97.
  18. Shaffer, F.; Moss, D. Textbook of Complementary and Alternative Medicine; 2nd Ed. Informa Healthcare: London, UK, 2006.
  19. Ramachandran, V. S. Brain. 2009, 132, 1693-710.
  20. Trivialperusal. Sensory Homunculus. http://trivialperusal.files.wordpress.com/2011/04/sensory_homunculus.jpg (Accessed Jan. 18, 2013).
  21. Phelan, L. Mirror Box Therapy. http://farm3.static.flickr.com/2567/3927573088_aa057fcc61.jpg (Accessed Jan. 18, 2013).

Comment

The Bridge from Discovery to Care: Translational Biomedical Research

Since the 1970s, both the number of molecular biology PhD scientists and the amount of biomedical research have grown rapidly, greatly expanding our knowledge of the cell.1 This explosion has led to incredible scientific achievements, including development of the polymerase chain reaction in the 1980s and completion of the Human Genome Project in 2003.2-4 The focus of research has shifted from single genes to all genes, from single proteins to all proteins. Neither scientists nor pharmaceutical companies, however, have been able to keep pace with the sheer quantity and complexity of modern biomedical research. Additionally, while the majority of medical researchers were once physician-scientists in the 1950s and 1960s, they are predominantly PhDs today.1 Questions of basic and clinical research, once addressed side by side, are now separate.

The widening gap between scientific discovery and therapeutic impact is a result of these changes. In the United States, the dramatic increase in spending for pharmaceutical research and development has been offset by a disappointing decrease in therapeutic output (Figure 1). As this paradox becomes more apparent, translational research, which aims to convert laboratory findings into clinical successes, emerges as an increasingly important endeavor.5,6

In 2006, the U.S. National Institutes of Health (NIH), the largest source of funding for medical research in the world, focused its attention on translational research by launching the Clinical and Translational Science Awards program.7,8 However, implementing effective translational research is both time- and labor-intensive. According to Dr. Garret FitzGerald, Director of the Institute for Translational Medicine and Therapeutics at the University of Pennsylvania, challenges include a lack of human capital with translational skill sets, relevant information systems, and intellectual property incentives.9

During his leadership of the NIH from 2002 to 2008, Dr. Elias Zerhouni saw how a shortage of clinicians trained in research slowed the translation of scientific advances into patient care.10,11 Beyond the need for such manpower, an open culture of communication between scientists and clinicians is necessary.

Drug development has been a one-way process from bench to bedside, in which scientists identify drug targets, conduct clinical tests, and develop marketable drugs. Many argue, however, that the communication must run in the opposite direction, too; feedback from clinical trials and doctors is valuable because understanding their concerns allows researchers to improve drug development.12 The third challenge derives from current institutional practices and regulations. An investigator’s publication record, rather than his or her efforts to advance medicine, determines success.13 Research funding is also granted on an individual basis, which does not promote the collaboration necessary for successful translational research. Lastly, the regulatory and patent processes governing drug development require much expertise and time to navigate, offering little incentive for researchers to become involved.1

To better integrate progress in basic and clinical science, countries such as the United States are building a new cadre of leaders versed in all aspects of clinical research: medicine, pharmacology, toxicology, intellectual property, manufacturing, and clinical trial design and regulation.13 Dr. Francis Collins, Director of the NIH since 2009, has called for a partnership among academic, government, private, and patient organizations to repurpose molecular compounds that failed in their original use.15,16 As a historical example, Collins has pointed excitedly to azidothymidine, a drug originally developed to treat cancer that was later repurposed to treat HIV/AIDS.14 Tremendous potential lies in applying scientific developments to other contexts, and the NIH has already drafted policy for this purpose.15

However, the growing support for translational research does not diminish the importance of basic scientific research, which poses the most interesting questions. Translational biomedical research creates an efficient environment for scientists to work at the interface of basic science and therapeutic development and to help fulfill the social contract between scientists and citizens. The full impact of translational initiatives has yet to be seen because the success of drug development, which can take up to 20 years, cannot be evaluated easily or quickly. For now, we can hope that integrating the work of scientists and clinicians will benefit both the patients, who await treatment, and the researchers, who only dream of seeing their discoveries transformed into new therapies for disease.

References

  1. Butler, D. Nature. 2008, 453, 840–2.
  2. Smithsonian Institution Archives. Smithsonian Videohistory Collection: The History of PCR (RU 9577). http://siarchives.si.edu/research/videohistory_catalog9577.html (Accessed Jan. 15, 2013).
  3. National Center for Biotechnology Information (NCBI). Probe, Reagents for Functional Genomics: PCR. http://www.ncbi.nlm.nih.gov/projects/genome/probe/doc/TechPCR.shtml (Accessed Jan. 15, 2013).
  4. Human Genome Project Information. About the Human Genome Project. http://www.ornl.gov/sci/techresources/Human_Genome/project/about.shtml (Accessed Jan. 15, 2013).
  5. CTSI (Clinical and Translational Science Institute) at UCSF. Translational Medicine at UCSF: An Interview with Clay Johnston. http://ctsi.ucsf.edu/news/about-ctsi/translational-medicine-ucsf-interview-clay-johnston (Accessed Jan. 15, 2013).
  6. Helwick, C. Anticancer Drug Development Trends: Translational Medicine. American Health & Drug Benefits. http://www.ahdbonline.com/article/anticancer-drug-development-trends-translational-medicine (Accessed Jan. 15, 2013).
  7. National Institutes of Health (NIH). About NIH. http://www.nih.gov/about/ (Accessed Jan. 15, 2013).
  8. National Institutes of Health National Center for Advancing Translational Sciences (CTSA). About the CTSA Program. http://www.ncats.nih.gov/research/cts/ctsa/about/about.html (Accessed Jan. 15, 2013).
  9. Pers. comm. Dr. Garret FitzGerald, Director of the Institute for Translational Medicine & Therapeutics at the University of Pennsylvania.
  10. NIH News. Elias A. Zerhouni to End Tenure as Director of the National Institutes of Health. http://www.nih.gov/news/health/sep2008/od-24.htm (Accessed Jan. 15, 2013).
  11. Wang, S.S. Sanofi’s Zerhouni on Translational Research: No Simple Solution. The Wall Street Journal. Health Blog 2011 http://blogs.wsj.com/health/2011/05/20/sanofis-zerhouni-on-translational-research-no-simple-solution/ (Accessed Jan. 15, 2013).
  12. Ledford, H. Nature. 2008. 453, 843-5.
  13. Nature. 2008, 543, 823.
  14. TEDMED 2012. Francis Collins. http://youtu.be/spUoPC_TU_8 (Accessed Jan. 15, 2013).
  15. Wang, S. Bridge the Gap Between Basic Research and Patient Care, NIH Head Urges. The Wall Street Journal Health Blog. http://blogs.wsj.com/health/2012/04/11/bridge-the-gap-between-basic-research-and-patient-care-nih-head-urges/ (Accessed Jan. 15, 2013).

Comment

The Illusion of Race

Race is one of the most pervasive features of American social life; neglecting the concept of race would be like questioning the existence of gravity. Though we would like to consider our nation a post-racial society, we still place great importance on race by asking for it on forms ranging from voter registration to the PSAT. However, many would be surprised to realize that race does not have a biological basis – there is no single defining characteristic or gene that can be unequivocally used to distinguish one race from another.1 Rather, it is a manmade concept used to describe differences in physical appearance. Yet, we have internalized the social construct of race to such a degree that it seems to have genetic significance, masking the fact that race is actually something we are raised with. That a simple internalized ideology creates disparities in contemporary American society, from socioeconomic status to healthcare accessibility, illustrates the urgency of exposing this myth of race.

Throughout American history, racial connotations have been fluid, with different ethnographic groups falling in and out of favor based upon societal views at a given time. Race was originally conceived as a way to justify colonialism. European colonizers institutionalized their ethnocentric attitudes by creating the concept of race in order to differentiate between the civilized and the savage, the Christians and the heathens. This dichotomy facilitated mercantilism, the economic policy designed to accrue astronomical profits for the European countries through the exploitation of “inferior” races. Scholars of Critical Race Theory show, more generally, that the boundaries of racial categories shift to accommodate political realities and conventional wisdom of a given time and place.2

This definition of race has changed in the United States over the centuries. For example, when the Irish and Italians first immigrated in the early 20th century, they were seen as “swarthy drunkards” – clearly not part of the white “elite.” Within two generations, however, these same people were able to assimilate into the Caucasian-dominated culture while African-Americans were still considered a separate entity. Similarly, during the era of the Jim Crow laws, courts had the power to determine who was black and who was not; in Virginia, a person was considered to be black if he or she was at least 1/16th African-American; in Florida, a black person was at least 1/8th African-American; and in Alabama, having even the smallest sliver of African-American heritage made a person black.3 Thus, a person could literally change race by simply moving from one state to another. Today, the commonly defined race classifications, as specified by the US Census, include White, Black, Asian, American Indian or Alaska Native, Pacific Islander or Native Hawaiian, Other, and Multiracial. Because there are no scientific cut-offs to determine a person’s race, racial data are largely based on self-identification, which points to the concept’s lack of biological legitimacy. For example, 30% of Americans who label themselves as White do not have at least 90% European ancestry.4

We may think our conceptualization of race is based upon biological makeup, but it is actually an expression of actions, attitudes, and social patterns. When examining the science behind race, most scholars across various disciplines, including evolutionary biology, sociology, and anthropology, have come to the consensus that distinctions made by race are not genetically discrete, cannot be reliably measured, and are not meaningful in the scientific sense.5

Some argue that race is a genetic concept, pointing to the higher incidence of particular diseases in certain races. However, purely hereditary diseases are extremely rare: cystic fibrosis occurs in roughly 1 in 2,300 births, Huntington’s disease in 1 in 10,000, and Duchenne muscular dystrophy in 1 in 3,000.6 Rather, diseases often reflect shared lifestyles and environments instead of shared genes, because factors such as poverty and malnutrition are also often “inherited” through family lines. Even genetic polymorphisms in hemoglobin, which lead to populations with lower susceptibility to malaria, can be partly explained by environmental factors.6-8 Thus, diseases traditionally tied to certain races cannot be explicitly attributed to genes, discrediting the idea that races are genetically disparate. Genetic differences are better described as percentages of people with a particular gene polymorphism, which change according to the environment.6

Racial groupings actually reflect little of the genetic variation that exists in humans. Studies have shown that about 90% of human genetic variation is present within a population on a single continent, while only around 10% occurs between continental populations.1 Variation in physical characteristics, the traditional basis for determining race, does not imply underlying genetic differences. When we internalize the false ideology that race is genetic, we are mistakenly implying that there are human subspecies.

Although race is a social construct, it has a widespread influence on society, especially in the United States. In particular, minorities face disadvantages in numerous areas ranging from healthcare to education.7,8 Reports about Mitt Romney’s rumored adoption of a darker skin tone when addressing Latino voters, or statistics indicating that the median household wealth of whites is 20 times that of blacks, reinforce the existence of a racialized society.5 This is shocking and disturbing; race may not be real, but its effects contribute to real inequality. Once everyone understands this racial illusion, we can begin making effective change.

References

  1. Bamshad, M. J.; Olson, S. E. Does Race Exist? Scientific American, New York City, Nov. 10, 2003, p. 78-85.
  2. Calavita, K. Invitation to Law and Society: An Introduction to the Study of Real Law. University of Chicago Press: Boston, MA, 2007.
  3. Rothenberg, P. S. Race, Class, and Gender in the United States, 7th ed.; Worth Publishers: New York, NY, 2007.
  4. Lorber, J.; Hess, B. B.; Ferree, M. M.; Eds. Revisioning Gender; AltaMira Press: Walnut Creek, CA, 2000.
  5. Costantini, C. ABC News. http://abcnews.go.com/ABC_Univision/News/mitt-romneys-tan-draws-media-fire-makeup-artist/story?id=17290303 (Accessed Oct. 26, 2012).
  6. Pearce, N. BMJ. 2004, 328, 1070-2.
  7. Stone, J. Theor. Med. Bioethics. 2002, 23, 499-518.
  8. Witzig, R. The Medicalization of Race: Scientific Legitimization of a Flawed Social Construct. Ann. Intern. Med. 1996, 125, 675-9.
  9. Tavernise, S. The New York Times. http://www.nytimes.com/2011/07/26/us/26hispanics.html?_r=0&pagewan (Accessed Oct. 26, 2012).

Comment

Nature vs Nurture of Politics

If you voted in last year’s election, what made you choose the candidate for whom you voted? Was it the platform, the party, or perhaps your genes? Since Mendel and his peas, the idea that genes affect physical traits has greatly influenced science. However, their role may be greater than we thought. Aristotle first posed the question of nature vs. nurture, now the debate over the relative importance of one’s genes (nature) and one’s upbringing (nurture) in determining physical and behavioral characteristics. For instance, is one’s intelligence an innate quality or one built through years of education? If genes are involved in political ideology, does that mean political freedom is limited? Do we have a choice in voting? The answer to this age-old question is complex, but nowadays more people recognize that both nature and nurture are involved in determining our traits.

Family values, education, and the media were originally thought to determine an individual’s political behavior. The scientific community has gradually come to embrace political views as a legitimate factor in the nature vs. nurture debate. In fact, the study of genetic influence in political decisions has a name: genopolitics.1 For instance, an early study of the topic found that identical twins show more similarities than non-identical twins in voting behavior.2

Furthermore, although many still believe environmental influences play the sole role in determining political attitudes, a recent article suggests that genes influence political preferences.3 This harks back to the notion that both nature and nurture are important in the development of an individual’s behavior. Whether your vote is liberal or conservative is not determined by a single gene, but rather by a combination of genes and regulatory pathways interacting with environmental factors to shape one’s political principles.2

However, genetics does not play a role in all political traits. In particular, because political parties are transient and vary between countries and across time, only nurture plays a role in party identification. A liberal in American politics is not the same as a liberal in European politics. On the other hand, most genopolitics researchers agree that genetics does influence ideology (e.g., conservatism or liberalism), which concerns the relatively timeless question of how groups should be organized.4 For example, your heritage might make you favor a powerful government over a weak government but would not directly influence you to vote for a Democratic nominee over a Republican nominee.

How exactly genetics could influence your vote is difficult to understand. Hatemi and McDermott conceptualize that our prehistoric ancestors faced issues with out-groups (immigration), security (foreign policy), and welfare (resource management), among others. Through evolutionary processes occurring over thousands of years, these issues became polarizing, heritable traits.2 This by no means signifies that one ideology is more “fit” than another simply because it has dominated in the past.

These innate factors predispose people toward conservative or liberal preferences, but the environment may play a larger role in directing most individual political choices. More research is necessary to confirm that the influence of genes on political ideology cannot be explained by purely environmental factors.6

References

  1. Stafford, T. BBC Future. http://www.bbc.com/future/story/20121204-is-politics-really-in-our-genes (Accessed Dec. 4 2012).
  2. Biuso, E. Genopolitics. New York Times, New York, Dec. 14, 2008, p. MM57.
  3. Hatemi, P. K. Trends Genet. 2012, 28(10), 525-33.
  4. Alford, J. R. Ann. Rev. Pol. Sci. 2008, 15(2), 183-203.
  5. Body politic: The genetics of politics. The Economist, London, Oct. 6, 2012.
  6. Hatemi, P. K. J. Pol. 2011, 73(1), 271-85.

Comment

Memory-Erasing Drugs: To Forget or Not to Forget?

From recreational mind-altering drugs to pharmaceuticals that target neurotransmitter imbalances, a wide variety of chemical mechanisms can alter our thought processes and behaviors. While neural bases have long been known to play a role in shaping our thoughts and actions, recent advances in memory research have brought an evocative question to the forefront: what if we could change not only how we think and act, but also what we remember? The concept of a “forgetfulness” drug—an Eternal Sunshine-esque memory erasure treatment in pill form—is no longer a far-fetched fantasy. As researchers formulate a better understanding of how memories are formed and retrieved at the molecular level, the scientific community gains the ability to formulate targeted approaches to modifying the existence or emotional character of past memories. However, amid these developments, it is crucial that scientists, neuroethicists, and policymakers collaborate to evaluate the ethical costs and benefits of new therapies.

Currently, a number of drugs have shown utility in altering memory consolidation and retrieval. For example, propranolol, a beta-adrenergic blocker already approved by the FDA to treat hypertension, inhibits excess stress hormones released at the time of a psychologically traumatic event, the presence of which influences the memory consolidation of particularly emotional experiences.1 When administered during this critical period shortly after trauma, propranolol has also been shown to prevent the formation of strong, intrusive memories of the event, as well as the associated fear and anxiety that contribute to the later development of posttraumatic stress disorder (PTSD). In fact, early studies from 2002 and 2003 have demonstrated that patients who received propranolol, first administered several hours after a traumatic event and continued over a seven- or ten-day regimen, experienced lower rates of PTSD than those who did not receive propranolol.5,8

Memory-attenuating drugs can also be administered during subsequent periods of memory activation. More recently, advances in neuroscience have revealed that the process of retrieving memories is vastly different from the idea of simply activating consolidated memory traces from an archive. Instead, every time we recall a particular memory, it becomes unstable and must be re-consolidated in order to persist in the brain.7 Accordingly, a 2012 study conducted by clinical psychologists at the University of Amsterdam used propranolol to disrupt the memory reconsolidation of events associated with fear and anxiety in a learning context. Specifically, participants were threatened with painful electric shocks during a learning task; then, these acquired memories and fear conditioning were reactivated the following day during a repeat of the task. As predicted, participants who received propranolol during the memory reconsolidation process (upon activation of their memories from the previous day) showed lessened behavioral expressions and feelings of anxiety concerning the fear-related memory.7 Furthermore, within the past year, researchers have demonstrated that the injection of ζ-pseudosubstrate inhibitory peptide (ZIP) can induce cocaine-addicted rats to forget the locations where they had been receiving cocaine.4 Therefore, beyond diminishing the negative emotional experience of unpleasant memories, pharmaceutical treatments may also work toward erasing a memory altogether.

However, the power to eradicate memories comes with great responsibility—and a range of complex ethical implications. A decade ago, the President’s Council on Bioethics issued a report warning against the pharmaceutical modification of memories, citing various personal and social repercussions incurred by the use of any drugs that quell recollection of past events, regardless of how painful they may be.6 At the personal level, individuals might use such drugs to “numb” themselves from remembering incidents that could later prove to have adaptive value, thus obviating the process of learning and growing from negative experiences. On the greater social scale, some neuroethicists argue that if survivors and witnesses of catastrophic events (such as accidents, crimes, combat, or genocide) elect to eliminate the emotional charge of such memories, then their firsthand perceptions about the meaning and impact of these events—which are inevitably interlinked with powerful aversive emotions—would be altered substantially. In effect, these self-protective acts of deliberate forgetfulness would render emotionally devastating atrocities as less significant in the collective sense of justice and moral consciousness of society.6

On the other hand, not every negative memory has “redeeming” value. For example, individuals with PTSD experience recurrent traumatic memories that remain particularly vivid and emotionally distressing long after the event, often impeding day-to-day functioning. Accordingly, biomedical ethicists have likened the suffering resulting from agonizing memories to the experience of profound physical pain, the pharmaceutical alleviation of which is already a common, morally-accepted practice.2

Furthermore, a recent neuroethics editorial in Nature argued that fear surrounding the widespread abuse of pharmaceutical memory erasure is overblown and impedes the development of therapeutic applications to patients whose quality of life is curtailed by the residual effects of past traumatic experiences.3 After all, conscientious negotiation of legal policies and clinical guidelines for such drugs would reduce the possibility of large-scale abuse. From the drug administration perspective, clinicians and potential patients could work together to draft procedures for determining the types of cases in which the prescription of memory-dampening drugs is a viable option. Open communication between biomedical and legal experts would also be crucial in navigating high-stakes situations, such as when a traumatized sole witness to a violent crime seeks pharmaceutical memory erasure during an ongoing court case.

Ultimately, the ethical implications of erasing memories pose core questions surrounding our identity and humanity. Would electing to forget past events fundamentally change people—with the disappearance of certain salient memories potentially eroding away the basis of our individual perspectives and learning experiences? Or is simply forgetting a senselessly traumatic event sometimes the better option toward living a fully productive life? Although research on memory-erasing drugs is ongoing and the associated ethical issues of their implementation remain points of contention, the essence of the question lies at the individual level: if presented with the option, would you be willing to dull the emotional overtones of a personal memory, or erase that memory altogether?

References

  1. Cahill, L., et al. Nature. 1994, 371, 702-704.
  2. Illes, J. Am. J. Bioethics. 2007, 7(9), 1-2.
  3. Kolber, A. J. Nature. 2011, 476, 275-276.
  4. Li, Y-Q., et al. J. Neurosci. 2011, 31, 5436-5446.
  5. Pitman, R. K., et al. Biol. Psychiatry. 2002, 51, 189-192.
  6. President’s Council on Bioethics. Beyond Therapy: Biotechnology and the Pursuit of Human Happiness. 2003, 205-273.
  7. Soeter, M., & Kindt, M. Psychoneuroendocrinology. 2012, 37, 1769-1779.
  8. Vaiva, G., et al. Biol. Psychiatry. 2003, 54, 947-949.

Comment

Terahertz Radiation

Abstract

Graphene has received significant attention due to its many unique properties, such as its two-dimensionality, zero-mass and zero-gap band structure, and unsurpassed strength. Additionally, its terahertz (THz) properties are being studied for future ultrafast electronics for information and communication, sensing, and other applications. Although many theoretical studies exist, the basic properties of graphene in the presence of THz radiation are largely unexplored. In this study, we explore the THz dynamics of graphene on an indium phosphide (InP) substrate using THz time-domain spectroscopy (THz-TDS) and laser THz emission microscopy (LTEM). Using LTEM, we compared the THz radiation from bare InP to the radiation transmitted through graphene on InP and found that graphene decreases the THz amplitude. Furthermore, we investigated the effect of continuous wave (cw) lasers of different wavelengths on THz radiation and discovered that a 365 nm cw laser greatly decreased THz transmittance through graphene on InP, whereas an 800 nm cw laser had no effect on transmittance, demonstrating the wavelength dependence of THz generation. We also studied the spatial variation of THz absorption of graphene on InP using LTEM, which allowed us to visualize the localized transmittance distribution of graphene. Both the THz-TDS and LTEM results help us understand THz functionality in graphene on InP, and this understanding can potentially contribute to the development of future ultrafast electronics.

Introduction

Although microwave and visible light applications are prevalent in electronic and photonic devices, the terahertz (THz) regime (300 GHz to 30 THz) is largely unexplored and unutilized. This “terahertz gap” has captured much attention due to potential applications such as sensing, communications, and imaging. Graphene has likewise attracted much attention for its unique properties and many potential applications, such as ultrafast electronics, transistors, inert coatings, and biodevices. We are particularly interested in developing ultrafast electronics by utilizing graphene’s absorbance of THz radiation. There are many possible substrates for graphene, and this is the first study to analyze the THz dynamics of graphene on an indium phosphide (InP) substrate. Additionally, several phenomena remain poorly understood, including the interface effects of graphene on THz absorption and the effect of continuous wave (cw) lasers on THz emission and transmission. We aim to characterize graphene on InP, study the interface effects of graphene, and explore the effect of cw lasers on THz emission and transmission. Comparing the effects of cw lasers of different wavelengths on THz emission can lead to interpretations that agree with current theories of THz generation. Studying and understanding graphene on different substrates will contribute to the development of ultrafast electronic devices.

Background

Graphene and Indium Phosphide

Graphene is capturing the attention of researchers due to its astounding properties such as its zero bandgap, strength, high mobility, and low resistivity. There are several methods of fabricating graphene, and in this study, we use chemical vapor deposition (CVD) grown graphene.

Because graphene is a single layer of carbon atoms, we must find suitable substrates in order to handle it. We are interested in studying graphene on InP because InP has a high electron mobility (5400 cm²/(V·s) at 300 K) and a direct bandgap, which may be useful for ultrafast electronics.

Background on Terahertz Radiation

Terahertz radiation is defined as electromagnetic radiation with a frequency in the range of 300 GHz to 30 THz and a wavelength of 1 millimeter to 10 micrometers. In this study, we use two techniques: THz time-domain spectroscopy (THz-TDS) and Laser THz Emission Microscopy (LTEM).

THz-TDS is a technique that probes the properties of a material with pulses of THz radiation. A Ti:Sapphire laser generates an output beam containing a train of femtosecond pulses. The beam is split into two: the pump and probe beams. The pump beam reaches the THz emitter, where the optical pulses are converted into THz electromagnetic pulses. The THz beam passes through the sample and then meets the probe beam at the detector, which measures the amplitude and phase of the THz electric field. Fourier analysis of the transmitted THz radiation, compared with a reference spectrum taken without a sample, provides the transmission spectrum of the sample.
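As a rough sketch of that final analysis step (assuming two evenly sampled time-domain traces of equal length, one measured through the sample and one reference), the snippet below Fourier-transforms both pulses and takes the ratio of their amplitude spectra; it is an illustration of the idea, not the laboratory’s actual analysis code.

```python
import numpy as np

def transmission_spectrum(sample_trace, reference_trace, dt):
    """Estimate the amplitude transmission spectrum from two THz time-domain
    traces sampled at interval dt (in seconds). Returns frequencies in THz
    and |E_sample(f)| / |E_reference(f)|."""
    sample = np.fft.rfft(sample_trace)
    reference = np.fft.rfft(reference_trace)
    freqs_thz = np.fft.rfftfreq(len(sample_trace), d=dt) / 1e12
    return freqs_thz, np.abs(sample) / np.abs(reference)

# Hypothetical example: 1024 points sampled every 50 fs (Nyquist limit ~10 THz).
dt = 50e-15
t = np.arange(1024) * dt
reference = np.exp(-((t - 10e-12) / 0.1e-12) ** 2)        # toy reference pulse
sample = 0.7 * np.exp(-((t - 10.2e-12) / 0.1e-12) ** 2)   # attenuated, delayed copy
freqs, transmission = transmission_spectrum(sample, reference, dt)
```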

LTEM measures the near-field absorption of the THz electric field. A pump laser is raster-scanned across the sample surface, generating THz waves at each point. By monitoring the THz amplitude, THz emission or transmission images of the sample can be constructed.
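Conceptually, the imaging step is simple bookkeeping: record one THz amplitude per laser position and arrange the values into a two-dimensional array. The sketch below assumes a hypothetical measure_peak_amplitude(x, y) routine standing in for the actual lock-in readout.

```python
import numpy as np

def ltem_image(x_positions, y_positions, measure_peak_amplitude):
    """Assemble an LTEM image by raster-scanning the pump laser and storing
    the peak THz amplitude measured at each (x, y) position."""
    image = np.zeros((len(y_positions), len(x_positions)))
    for i, y in enumerate(y_positions):
        for j, x in enumerate(x_positions):
            image[i, j] = measure_peak_amplitude(x, y)  # instrument readout
    return image

# Toy demonstration with a synthetic "sample" in place of real hardware.
xs = np.linspace(0.0, 1.0, 50)   # hypothetical scan positions (mm)
ys = np.linspace(0.0, 1.0, 50)
demo = ltem_image(xs, ys, lambda x, y: 1.0 - 0.3 * (x > 0.5))  # mock absorber
```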

Methods

In this study, two different systems were used; both can perform THz-TDS and LTEM. Each uses a beam splitter to divide the laser into pump and probe beams, a time-delay stage, a GaAs bowtie-shaped antenna detector, and a lock-in amplifier. The differences between the systems are highlighted in Figures 1 and 2.

Results

Data from system 1 led to three major results, but data from system 2 were inconclusive. The first finding from system 1 is that the THz emission decreases more than expected in the presence of graphene on InP. We would expect graphene to cause a 2.3% decrease in THz emission, corresponding to its theoretical interband absorbance of 2.3%. However, other factors, such as air, humidity, doping, and inhomogeneity of the sample, may have caused the observed decrease to be roughly 28%. Considering that the THz generation mechanism is the surge current effect (surface field effect), it is also possible that this decrease in THz emission is caused by a decrease in band bending when graphene is on the surface of InP, as explained in Figure 5.

The next result is that we can successfully image graphene and see inhomogeneity on the surface of graphene, while the substrate is uniform (Figure 6).

The final result is that a 365 nm cw laser decreases the THz emission, but an 800 nm cw laser has no effect on it. Figure 6 shows this with THz images, and Figure 7 shows it with THz-TDS responses. The reason for this wavelength dependence is that THz generation occurs at the surface of the sample, where carriers are excited. An 800 nm cw laser penetrates too deeply to interfere with carriers at the surface, whereas a 365 nm cw laser has a short penetration depth and interferes with carrier interactions at the surface, decreasing THz emission.

Discussion

This research is significant because it is the first THz study that uses InP as a substrate for graphene. We can conclude that graphene affects the surge current mechanism on InP, decreasing the THz emission. The local distribution of the surface of CVD graphene can be visualized using an LTEM system, which is useful for future THz imaging applications in a variety of fields, such as medicine and security. A 365 nm cw laser clearly affects the THz emission mechanism, while an 800 nm cw laser has no effect on THz emission. These results are in agreement with current models because the cw laser has no effect on THz emission when its wavelength is too long; the laser does not interact with carriers on the surface. This helps us understand the THz emission mechanism of InP.

There are several ways to expand upon and improve this research. Firstly, the results from system 1 would be more meaningful if the measurements were performed in a vacuum with all other environmental factors held constant. This would allow us to determine the decrease in THz emission caused by graphene alone, which is significant because it gives us insight into the interaction of graphene at the surface of InP (i.e., the decrease in band bending caused by graphene). We also need to compare the THz emission of graphene on InP to that of a mirror in order to quantify the effect of reflection from graphene’s shiny surface.

A potentially useful application could be to generate more THz radiation by applying a gate voltage, supplied for example by a battery. This would be useful in situations where high-intensity THz generation is desired, and it may also be interesting to study graphene with a bandgap tuned by the gate voltage.

Future Work

The data gathered from system 2 were inconclusive, so further study needs to be done in this area. When using system 1 and looking at the THz emission, the 365 nm cw laser causes a significantly greater decrease when graphene is present compared to InP substrate alone (Figure 8). However, when using system 2 and looking at the THz transmission, the 365 nm cw laser causes a significantly greater decrease when graphene is not present (Figure 9). This leads to confusion about how the cw laser is interacting with the surface of graphene and InP.

Although the 365 nm laser clearly affects THz emission and transmittance, we cannot determine whether this effect arises primarily from the graphene or from the InP, because the emission (system 1) and transmission (system 2) results are contradictory. Experiments with cw lasers on bare InP and on graphene on InP need to be continued in order to fully understand the cw laser effects and how they differ between transmission and emission.

Yuki Sano at the Tonouchi Laboratory at Osaka University is currently conducting related research to better characterize the THz dynamics of graphene on InP.

Acknowledgements

This research was conducted at Osaka University as part of the NanoJapan program. This material is based upon work supported by the National Science Foundation’s Partnerships for International Research & Education Program (OISE-0968405). Special thanks to the Tonouchi lab members for helping me with this research! Thank you to Sarah Phillips, Junichiro Kono, Cheryl Matherly, and Keiko Packard for organizing this program.

References

  1. Inoue, R.; Takayama, K.; Tonouchi, M. J. Opt. Soc. Am. B 2009, 26(9), A14.
  2. Serita, K.; Mizuno, S.; Murakami, H.; Kawayama, I.; Takahashi, Y.; Yoshimura, M.; Mori, Y.; Darma, J.; Tonouchi, M. Opt. Express 2012, 20(12), 1-7.
  3. Suzuki, M.; Tonouchi, M.; Fujii, K.; Ohtake, H.; Hirosumi, T. Appl. Phys. Lett. 2006, 89(9), 091111.

Comment

Finding Biodegradable Alternatives for Petroleum-Based Polymers

Introduction

There is a rapidly increasing need to manufacture biodegradable polymers to address the global dependence on non-biodegradable, petroleum-derived products.1 Petroleum-based non-biodegradables remain popular because of their superior modulus and yield strength in conjunction with high impact strength.2 However, petroleum-based products such as polystyrene and polyethylene are limited in their commercial and industrial use due to their high combustibility and non-biodegradability, which significantly increases the volume of landfills.3 In addition, recent standards set by Underwriters Laboratories (UL 94) specify the degree of flame retardancy required of industrial materials such as polystyrene and polyethylene.4-5 Therefore, it is imperative to create a biodegradable polymer that is strong, flame retardant, and of high modulus.

Polylactic acid (PLA) and Ecoflex®

Biodegradable polymers exhibit many of the same mechanical and structural properties as petroleum-based products. Replacing petroleum-based products with bioplastics is projected to reduce air pollution by minimizing the need for synthetic polymer production in a cost-effective and environmentally sustainable fashion.7-10 Using more biodegradable materials in industrial applications has been estimated to reduce landfill mass by as much as 30%, plastic waste by as much as 3,550 tons per year, and fossil fuel consumption by as much as 4,010 tons per year.11,12

As biodegradable plastics have gained appeal, concern has risen regarding their flame retardant properties. Polylactic acid (PLA) and Ecoflex® are two biodegradable polymers that have become more popular in recent years due to their mechanical properties.13 However, the challenge is to control their mechanical properties in order to tailor them to different applications. One of this study’s goals was to create flame retardant biodegradable polymers that meet current industrial standards while minimizing loss of mechanical properties. These are seemingly contradictory conditions, since flame retardant formulations are known to embrittle materials.14 Here, we examined the interactions of clays with PLA and Ecoflex® polymers to determine the composite offering the best balance of mechanical strength, ductility, and flame retardance.

Background

PLA, a semicrystalline polymer (Figure 1), can be completely biodegradable in nature, rendering it suitable for environmental applications as a support for products that cannot revert to their original forms.15-17 PLA is also very heat-resistant, as evidenced by its use as a major component of thermoplastic trays.18 It helps address the plastic waste management problem posed by traditional petroleum-based production: its manufacture emits 43% fewer greenhouse gases and uses 48% less non-renewable energy than that of traditional plastic polymers.13

Similarly, Ecoflex®, another biodegradable material, has a poly(butylene adipate-co-terephthalate) chemical structure (Figure 2) and is made from oil-based compounds that function similarly to polyethylene (PE, Figure 3).16,19,20 PE, the most commonly used plastic in the world, is found in applications ranging from low-voltage insulation cables, pipes, petrol tanks, car bodies, airplane parts and fuel tanks, and plastic sheeting to carpet fibers, toys, and clothing.21-24 Given this widespread use in the housing, clothing, and transportation industries, it has become imperative to design a new plastic with improved flame retardance to replace commonly used petroleum-based products. This plastic would also have to be more environmentally friendly, as high-density polyethylene (HDPE) alone occupies approximately 6.3 million yd3 of landfill volume.25

Clay has been shown to be a naturally abundant, economical mineral that is toxin-free and possesses innate flame retardant properties.28-30 Clay oligomers offer a novel approach to improving the flame retardant (FR) properties of polymers via shear-induced exfoliation that circumvents toxicity issues.31 Materials such as resorcinol bis(diphenyl phosphate) (RDP) clay have also been used to reinforce biodegradable polymers.31-37 RDP improves flame retardance when used as an additive to bioplastics.31 RDP is well suited for exfoliating clays with PLA and Ecoflex® because of its chemical structure (Figure 4): it has surfactant properties, with both nonpolar phenol groups and polar phosphoric acid groups, which are optimal for flame retardant formulations. Its phosphoryl groups act as strong hydrogen bond acceptors, and its phosphorus groups can react with polymer residue at very high temperatures.31

As a possible replacement for PE, PLA and Ecoflex® would be useful in multiple applications due to their durability, mechanical strength, and biodegradability. We propose a method of producing a flame retardant PLA and/or Ecoflex® material using nanocomposite technology to answer the need for safe and cost-effective biodegradable polymers.3,38

Specifically, we incorporate RDP clay, rather than Cloisite clay, as the FR additive and evaluate the resulting composites for tensile strength, impact strength, modulus of elasticity, and flame retardance.

Materials and Methodology

Creating Thin Film Samples of PLA and Ecoflex® Nanocomposites

Polymer and additive interaction is important when optimizing the mechanical and thermal properties of the nanocomposite. Bulk thin films were made to determine whether the RDP clay interacted favorably with PLA or Ecoflex®.

To create thin-film samples containing both polymer substrate and RDP clay, the graduate student applied a monolayer of the clay onto the surface of a silicon wafer using a Langmuir-Blodgett (LB) trough, by increasing the surface tension of the water on which the clay particles were deposited. As the wafer was lifted out of the trough, the surface tension of the water pushed the clay layer onto the surface of the wafer.

PLA and Ecoflex® (both dissolved in chloroform) were used to create a spin curve relating solution concentration to the thickness of the substrate deposited on the silicon wafer. Ellipsometry was performed on silicon wafers coated with 3 mg/mL, 5 mg/mL, 10 mg/mL, 15 mg/mL, 20 mg/mL, 25 mg/mL, and 30 mg/mL solutions of PLA and Ecoflex® to measure the thickness of the substrate layers. The solution that produced the optimum thickness (approximately 1000 Å) was then used to spin-cast onto wafers coated with a monolayer of RDP clay.
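To make the spin-curve step concrete, the sketch below shows how a thickness target can be read off such a curve. The concentration values match those listed above, but the thickness readings are hypothetical placeholders (the study's measured ellipsometry values are not reported here), and simple interpolation is only one convenient way to estimate the concentration giving roughly 1000 Å.

```python
import numpy as np

# Hypothetical spin-curve data: solution concentration (mg/mL) vs.
# ellipsometric film thickness (angstroms). Real values would come
# from the measurements described above.
concentration = np.array([3, 5, 10, 15, 20, 25, 30], dtype=float)
thickness = np.array([250, 420, 980, 1550, 2100, 2700, 3300], dtype=float)

target = 1000.0  # desired film thickness in angstroms

# Interpolate the spin curve to estimate the concentration that
# would give the target thickness.
estimated_conc = np.interp(target, thickness, concentration)
print(f"Estimated concentration for ~{target:.0f} A film: {estimated_conc:.1f} mg/mL")
```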

Surface Analysis of Thin Films through Atomic Force Microscopy

Silicon wafers were prepared by the graduate student: wafers intended for PLA were cleaned twice with hydrofluoric acid, to keep the water from spinning out with the excess PLA solution, while wafers intended for the hydrophobic Ecoflex® were cleaned once. These wafers were then coated with either 100% polymer (PLA or Ecoflex®) or polymer and RDP clay, using the 10 mg/mL solutions (which corresponded to a measured thickness of approximately 1000 Å). The wafers in the latter category were then separately annealed in a vacuum oven for 18 hours at 170 °C to prepare samples for comparison. Atomic force microscopy (AFM) was then performed (with assistance from the undergraduate student) on these polymer surfaces, which allowed for observation of phase separation due to clay-polymer interaction.

Further analysis of this polymer-clay combination was crucial to understanding the extent of exfoliation between PLA/Ecoflex® and the RDP clay. This would assist in determining the extent to which RDP clay was able to improve the mechanical structure and the consequent flame retardance. The surface tensions of RDP and the polymer in question (PLA or Ecoflex®) were used to estimate the interfacial tension between them.31
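The exact expression from reference 31 is not reproduced in this text. For orientation only, the interfacial tensions reported in the Results section are consistent with a simple difference of the component surface tensions, γ(polymer/RDP) ≈ |γ(polymer) − γ(RDP)|; this should be read as an illustrative approximation based on the reported numbers, not necessarily the formula the authors used.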

Creating Bulk Samples of PLA and Ecoflex® Nanocomposites

Homogeneous blending of PLA and Ecoflex® with resorcinol bis(diphenyl phosphate) clay was performed using a C.W. Brabender instrument, type EPL-V501, with a direct current drive (type GP100).14 Initial blending of the bioplastic materials was started at 20 rpm and 170 °C and was gradually increased to 100 rpm for 10 minutes after RDP clay was added to the chamber at the 2-minute mark. To avoid heat-activated polymer degradation, the homogenized blend was allowed a cooling period under nitrogen gas flow.

For this study, the composites were prepared at four clay concentrations. The control (template) samples were pure PLA and pure Ecoflex® (0 wt% clay). The other treatments were set at intervals of 1 wt%, 5 wt%, and 10 wt% RDP clay. Once the polymers were extracted from the Brabender, they were shaped in molds and re-melted using a Carver heat press into samples appropriate for flame, tensile, and DMA testing. Both PLA and Ecoflex® samples were then removed from the molds.

UL-94 V0 Flame Retardancy Test

Flame tests determined the flame resistance of the polymer compositions. Samples of 125 mm x 13 mm x 1.5 mm were created. A vertical burning chamber was set up with a stand and clamp. The rectangular flame test samples were vertically aligned so that their longitudinal axes were parallel to the stand, and each sample was clamped along its upper 10 mm. The lower end of the sample was positioned above a 50 mm x 50 mm layer of 100% cotton weighing approximately 0.08 g. An ASTM D5025 compressed methane gas burner, fueled by gas flowing at a rate of 105 mL/min, was used to generate a 20 mm blue flame. The flame was applied at an angle of 45° to the midpoint of the bottom edge of the sample for 10 seconds. The flame was then withdrawn to more than 150 mm from the sample while the afterflame time (t1) was recorded. Next, the flame was reapplied for another 10 seconds, after which it was removed while the second afterflame time (t2) was determined. The cotton layer under the sample was then examined for any flaming particles at the base.

Once this procedure was completed, the criteria presented in Table 1 were used to determine the classification of the material’s flame retardance level. In such a test, V-2 characterizes a flame retardant compound, V-1 is slightly more flame retardant, and V-0 is extremely flame retardant.
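As a rough guide to how afterflame times and dripping map onto these ratings, the sketch below encodes the commonly cited per-specimen UL-94 vertical-burn criteria. It is a simplified stand-in for Table 1 (the full standard also imposes limits on totals across five specimens), so the thresholds here are assumptions rather than the study's exact criteria.

```python
def ul94_v_rating(t1, t2, cotton_ignited, burned_to_clamp=False):
    """Simplified per-specimen UL-94 vertical-burn classification.

    t1, t2: afterflame times (s) after the first and second flame
    applications; cotton_ignited: whether flaming drips ignited the
    cotton indicator. Thresholds follow the commonly cited criteria
    and may differ in detail from Table 1.
    """
    if burned_to_clamp:
        return "not classified"
    if t1 <= 10 and t2 <= 10 and not cotton_ignited:
        return "V-0"
    if t1 <= 30 and t2 <= 30 and not cotton_ignited:
        return "V-1"
    if t1 <= 30 and t2 <= 30 and cotton_ignited:
        return "V-2"
    return "not classified"

# Example: short afterflame times but dripping that ignites the cotton
# indicator classifies a specimen as V-2.
print(ul94_v_rating(t1=3.0, t2=8.0, cotton_ignited=True))
```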

Characterization of Bulk Nanocomposites

An Instron 5566 tensile machine, set to an ambient temperature of 22 °C, was used to determine the tensile strength of each polymer. First, a tensile sample was loaded onto the machine. A constant extension rate of 2 mm/min was used for the PLA composites, while Ecoflex® samples were tested at a constant rate of 50 mm/min due to their higher elasticity. Once the flexural points had been determined, we obtained a complete tensile profile: a curve indicating how the material responded to applied forces.

In the initial portion of the test, a linear relationship between load and elongation was observed for most polymers, as expected in the elastic region characterized by Young's modulus. To assess the durability of the polymer-clay interactions alongside the modulus of elasticity, we also measured the impact strength, i.e., the energy absorbed by the specimen upon impact.
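For illustration, the modulus extraction from the linear region can be expressed as a simple slope fit. The stress-strain values below are hypothetical placeholders (chosen to give a PLA-like modulus of roughly 3.5 GPa), not data from this study.

```python
import numpy as np

# Hypothetical stress-strain data from a tensile test (strain is
# dimensionless, stress in MPa); real curves come from the Instron.
strain = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010])
stress = np.array([0.0, 7.0, 14.1, 20.9, 28.2, 35.0])

# Young's modulus is the slope of the initial linear (elastic) region.
elastic = strain <= 0.01                     # restrict to the linear region
slope, intercept = np.polyfit(strain[elastic], stress[elastic], 1)
print(f"Estimated Young's modulus: {slope / 1000:.2f} GPa")
```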

Results and Discussion

To measure the interfacial interaction between clays and polymers, we first had to produce a polymer-clay bilayer to observe whether the polymer wet the clay surface (Figure 5).

Atomic Force Microscopy of Thin Films

The goal of this study was to find an efficient polymer-RDP interaction that would be flame retardant while maintaining the mechanical performance expected of a petroleum-based product. Thus, Young's contact angle was calculated to assess the minimization of surface energy and the degree of exfoliation.

Scans of PLA and PLA/clay thin films, seen in Figures 6 and 7 respectively, both reveal crystalline growth on the surface of the material arising from multiple nucleating sites. These sites form spherulites as the crystals continue to grow independently until they become confined by neighboring crystals and stop growing. This explains the extreme rigidity and hardness of PLA composites.

However, it is important to note that these spherulites show similar growth in both PLA and PLA/clay thin films, indicating that the surface of the PLA “wets” when coming in contact with the RDP clay monolayer.31 When annealing, the polymer approaches a state of minimum energy by either spreading flatly across the RDP-coated monolayer surface (wetting) or by forming droplets on the uneven interface (dewetting). Since the two PLA scans (with and without RDP) are similar, we can conclude that PLA spreads evenly onto the clay surface during annealing. This degree of wetting indicates that PLA and RDP clay are likely to exfoliate to near-completion when blended together.31

AFM scans of Ecoflex® thin films after annealing revealed the interactions between RDP clay and Ecoflex® and the differences between the RDP/Ecoflex® composite and pure Ecoflex®. As demonstrated in Figure 8, RDP nucleates crystals on the surface of Ecoflex®, producing a rougher plane than the control seen in Figure 9. This corresponds to an observed increase in the surface tension of the polymer as the properties of the material change, leading to crystallization into a harder polymer.

The increased roughness of the Ecoflex®/clay thin film can also be explained by a dewetting effect that annealing produces on the polymer.31 With Ecoflex®/clay films, a contact angle of 4.668° can be seen from a section analysis (Figure 10). This signifies that a small level of dewetting occurs, meaning that Ecoflex® may or may not exfoliate when combined with clay.31
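The study does not state how the 4.668° value was extracted from the section analysis. One common approach, included here only as a plausible illustration, is to assume the dewetted droplet edge follows a spherical-cap profile, in which case the contact angle θ satisfies tan(θ/2) = 2h/w, where h is the droplet height and w the base width measured from the AFM cross-section.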

We quantified these findings by examining the interfacial tension. At values approaching 0, the clay-polymer interaction is almost completely exfoliated; for sufficiently large values, the extent of exfoliation varies, and the polymer may or may not exfoliate with the RDP monolayer. For PLA, the surface tension is 50 mN/m. For Ecoflex®, we used the surface tension of PE (which closely resembles Ecoflex® in molecular structure), 31 mN/m. The surface tension of RDP at room temperature was calculated to be 49.9 mN/m.

Therefore, the interfacial tension between PLA and RDP was calculated to be 0.1 mN/m. This indicates nearly uniform dispersal, meaning that “wetting” occurs, which in turn yields a negligible contact angle.31 For Ecoflex®, however, the interfacial tension is 18.9 mN/m. This value is significantly greater than 0, so dewetting occurs. The level of exfoliation between Ecoflex® and RDP is examined in greater detail below.
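(As a quick arithmetic check against the simple-difference approximation noted in the Methodology, |50.0 − 49.9| = 0.1 mN/m for PLA/RDP and |49.9 − 31| = 18.9 mN/m for Ecoflex®/RDP, matching the reported values.)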

Determining Flame Resistant Properties through UL Flame Tests

Flame tests of 0-10 wt% RDP clay and polymer composites revealed an improvement in flame retardance with the addition of 1-5 wt% clay (Table 2). Flame tests of PLA/clay nanocomposites showed a decrease in t1 values and an increase in t2 values with the addition of as little as 1 wt% RDP clay to the polymer base. However, the average t2 time remained constant as the clay content increased to 5 wt%. Flame retardant properties decreased with the addition of clay beyond 5 wt%, as can be seen from the increase in t2 times in the 10 wt% results. For PLA/clay composites, the greatest flame retardant ability is observed with 5 wt% RDP clay, since dripping is significantly reduced. This brings the composite close to a V-0 flame rating.

Ecoflex® composites showed great improvement in t1 times with the addition of as little as 1 wt% RDP clay, along with significantly reduced dripping. However, this improvement did not continue as the clay concentration increased from 1 wt% to 10 wt%. It is also significant that while t1 times for Ecoflex® composites improved rapidly, this came at the cost of longer t2 afterflame times. This trend was reversed with the increase to 5 wt% clay in the Ecoflex® samples: at 5 wt% clay, the total afterflame time reached a minimum, then increased as more clay was added. Despite the relatively low afterflame times at 5 wt% clay, dripping was still observed, so the flame rating remained at V-2.

The optimal concentration of RDP clay was the same for PLA and for Ecoflex®, at 5 wt%. For both polymers, the average t1 and t2 times at this composition were lower than those of the pure, 1 wt%, and 10 wt% clay samples. Although Ecoflex® exhibited little variation in flame retardance, PLA showed a drastic improvement at 5 wt% RDP, reaching V-0. This is a significant finding: petroleum-based non-biodegradables are not usually flame resistant, so elevating a biodegradable polymer to the highest UL94 rating is a major accomplishment. These findings support our hypothesis that RDP is an effective flame retardant additive for biodegradable polymers.

Mechanical characterization of polymer composites

To determine the mechanical characteristics of the bulk PLA and Ecoflex® samples, we compared tensile and DMA data with the flame test data. This allowed us to identify whether the increased flame retardance needed to achieve an improved UL flame rating came at the cost of mechanical performance.

PLA/clay nanocomposites showed a sharp decrease of 17.7% in tensile strength with the initial addition of clay (Figure 11). These decreases in mechanical characteristics continue, albeit at a slower rate, as the wt% of RDP clay increases. A brief summary of the tensile properties is provided in Table 3.

A steady increase in the modulus of elasticity up to 5 wt% clay can also be observed from the initial slopes of the tensile stress-strain curves. Although this modulus fluctuation is small, the greatest elastic stiffness occurs at a 5 wt% concentration, coinciding with the lowest impact strength (Figure 12). This reinforces the idea that PLA exfoliates well in RDP: the incorporation of clay embrittles the composite, as seen in the decrease in impact strength, and prevents dripping, as seen in the reduced afterflame times and attainment of the V-0 level at 5 wt%.

Beyond 5 wt% RDP clay, however, Young's modulus dropped by 8.88% (Figure 13). Moreover, the addition of as little as 1 wt% RDP clay leads to a large decrease in the initial mechanical properties of the PLA/clay composites; the impact strength decreases by 21.3% upon the initial addition of RDP clay.

Stress-strain curves for the Ecoflex®/clay films with different concentrations of RDP clay are shown in Figure 14, and the corresponding tensile properties are summarized in Table 4. The addition of clay particles has a markedly adverse effect on the tensile properties of the final material. The impact strength continues to decrease without reaching a lower bound (Figure 15), although the modulus remains relatively unchanged (Figure 16).

This is also supported by the flame test results as there was no change in dripping for the Ecoflex®/clay composites among different concentrations. When considered alongside the interfacial tension results, Ecoflex® appears to be largely incompatible with RDP clay additives.

For the Ecoflex®/clay composites containing 1 wt% clay, the tensile strength at break is 5.3% lower than that of pure Ecoflex®, and it does not decrease considerably over the 0-5 wt% clay range. However, with the addition of 10 wt% clay, the tensile strength drops by 20% compared to the 5 wt% samples. In addition, greater mechanical variation is observed in the tensile data for the 10 wt% samples. Finally, the impact strength of Ecoflex® dropped by 39.1% upon the initial addition of RDP clay.

The bilayer made between the polymer and the clay produced by the LB-technique not only effectively predicts whether the clay will exfoliate in a polymer matrix, but does so in a cost-effective manner by requiring small amounts of polymer.

Conclusion and Future Investigation

We have demonstrated that surface energies are important in determining the overall mechanical properties of the nanocomposite. Through careful matching of the surface energies, it is possible to use clay to engineer new materials with optimal mechanical and thermal properties. For example, in the case of PLA/clay, PLA wets the surface of the RDP clay, so RDP clay exfoliates in PLA. This was confirmed by the significant dependence of the impact strength and modulus on clay concentration and by the prevention of dripping during the flame test. Thus, flame retardance improved through the elimination of dripping, at a cost to the impact strength.

In contrast, in the case of Ecoflex®, where the surface energy match is unfavorable, we found little enhancement and continued dripping during the flame tests, although the impact strength remained comparatively constant. Dewetting was evident in the minimal effect of clay concentration on impact strength, flame test performance, and Young's modulus.

UL94 flame tests indicated that the effect of RDP clay on the flame retardance of the composites peaked at 5 wt% clay in polymer. Although afterflame times decreased, the Ecoflex® nanocomposites were only able to reach a flame rating of V-2. It is likely that the addition of RDP clay nanotubes to the composites, forming a net-like structure at the atomic level, would significantly reduce dripping and allow these polymers to reach V-0.

AFM imaging of PLA/clay and Ecoflex®/clay films, prepared by spinning a thin film of polymer onto a monolayer of clay, was used to study the interfacial tension. PLA was observed to completely wet the clay, whereas Ecoflex® dewets with a small contact angle of 4.668°, indicating a less favorable interaction between the surfaces. It is therefore expected that RDP clays will exfoliate easily in PLA but not in Ecoflex®. Cross-sectional scans of the composites using transmission electron microscopy should be performed to examine clay exfoliation directly. It would also be interesting to examine whether clay tubes could make the composites more flame retardant.

Tensile tests also confirmed a higher modulus of elasticity for the composites with the addition of clay, along with a decrease in impact strength. These properties were best balanced at 5 wt% clay, where the modulus increased substantially while the impact strength remained relatively stable. Tensile properties may be further improved by combining PLA with Ecoflex® and RDP clay. Whereas PLA combined with RDP showed enhanced flame retardance, the Ecoflex®/clay composites did not show successful exfoliation with RDP. To improve the flame retardance of both, a flame retardant formulation based on RDP and aluminum trihydrate will be tested. These nanocomposites can then be engineered to approach the toughness and elasticity of petroleum-based polymers such as polystyrene and PE.

Further applications of this investigation include replacing PE with biodegradable materials in construction, structural, clothing, and transportation uses. An additional goal is to enhance both flame retardance and mechanical strength. To do this, we will blend PLA and Ecoflex® with RDP clay as an alternative to non-biodegradable petroleum-based products, since PLA is a harder material and Ecoflex® a softer one. For both polymers, the ignition time is high, and additional tests could further characterize their flame retardant properties.

Finally, one near-term application of this research is in Army Combat Uniforms.39 With the addition of natural, biodegradable plastics and FR additives such as PLA/Ecoflex® and RDP clay, these uniforms could be made more flame retardant and durable. In this way, improving biodegradable polymers such as PLA and Ecoflex® has thus far demonstrated promising advances toward an alternative to non-biodegradable petroleum-based products.

References

1. Mohanty, A. K. J. Polym. Environ. 2002, 10, 19-27.
2. Tanniru, M. Mater. Today. 2006, 47(6), 2133-46.
3. Horrocks, R. IMRI. 2005, 43.
4. Matkó, S. Polym. Degrad. Stabil. 2004, 88, 13845.
5. Özarslan, Ö. J. Appl. Polym. Sci. 2009, 114(2), 1329-38.
6. Borenstein, S. Post and Courier. http://www.postandcourier.com/news/2010/jun/12/petroleum-based-products-are-around-us-in-us/ (Accessed Oct. 10, 2012).
7. Environmental Leader. Energy & Environmental News for Business. http://www.environmentalleader.com/2010/03/08/bio-plastic-waterbottles-trickle-into-marketplace/ (Accessed Oct. 10, 2012).
8. Kolybaba, M. Society for Engineering in Agricultural, Food, and Biological Systems. 2003, RRV03-7.
9. Department of the Environment, Water, Heritage and the Arts. http://www.environment.gov.au/archive/settlements/waste/index.html (Accessed Sept. 29, 2012).
10. Doi, Y.; Fukuda, K. Elsevier Science. 1994, 479-97.
11. Frequently Asked Questions - PLA. World Centric - For a Better World. http://www.worldcentric.org/about-us/faq (Accessed Sept. 10, 2012).
12. The Packaging Institute. http://www.packagingknowledge.com/degradable_biodegradable_bags.asp (Accessed Oct. 16, 2012).
13. Sohn, E. Earth News. http://news.discovery.com/earth/garbagebiodegradable-earth-month.html (Accessed Aug. 3, 2012).
14. Pack, S. Macromolec. 2009, 42(17), 5338-51.
15. Geritano, M.; Plaut, M. "A Novel Procedure in the Creation of a Flame Retardant Polymer in the Nano-Macro-Scale." Aug. 2009. MS. Department of Engineering, State University of New York at Stony Brook.
16. Wiley. Physorg. http://www.physorg.com/news178178601.html (Accessed Sept. 12, 2010).
17. Tachibana, Y. Int. J. Mol. Sci. 2009, 10(8), 3599-615.
18. Henton, D. E. Polylactic Acid Technology. 2005.
19. Toloken, S. Plastics News. http://plasticsnews.com/headlines2.html?id=18510 (Accessed Sept. 15, 2010).
20. Müller, R. J. Biotech. 2011, 86(2), 87-95.
21. BASF - The Chemical Company. http://www.packaging.basf.com/p02/Packaging/en/function:pi:/wa/plasticsEU~en_GB/function/conversions:/publish/common/upload/biodegradable_plastics/Ecoflex_Brochure.pdf (Accessed Sept. 24, 2010).
22. Polyethylene. http://sunfh.tripod.com/chem7.htm (Accessed Sept. 29, 2010).
23. Advertisement. ACE Hardware. http://www.acehardware.com/family/index.jsp?categoryId=1259793 (Accessed Sept. 29, 2010).
24. Buildings and OEMs. BOREALIS. http://www.borealisgroup.com/industry-solutions/infrastructure/building-wires-and-OEMs (Accessed Sept. 29, 2010).
25. Adverse Health Effects of Plastics. Ecology Center. http://www.ecologycenter.org/factsheets/plastichealtheffects.html (Accessed Sept. 29, 2010).
26. Cireli, A. J. Appl. Polym. Sci. 2007, 105(6), 3748-56.
27. Piringer, O. G.; Baner, A. L. Plastic Packaging: Interactions with Food and Pharmaceuticals. Weinheim: WILEY-VCH, 2008.
28. Mitrus, M. "Biodegradable Material for Various Industries." Weinheim: WILEY-VCH, 2010, 1-33.
29. Jinadasa, K. B. P. N. Env. Geochem. Health. 1992, 14(1), 3-7.
30. Bakraji, E. H. J. Trace Microprobe Tech. 2003, 21(2), 397-405.
31. Pang, J. Biosens. Bioelec. 2003, 19(5), 441-5.
32. Pack, S. Macromolec. 2010, 43(12), 5338-51.
33. Jérôme, R. Polymer. 2002, 43(14), 4017-23.
34. Gorrasi, G. J. Polym. Sci. 2002, 40(11), 1118-24.
35. Alexandre, M. Polymer. 2003, 44(8), 2271-9.
36. Tortora, M. Macromolec. Mater. Eng. 2002, 287(4), 243-9.
37. Chen, B. Carb. Polymer. 2005, 61(4), 455-63.
38. Miller, C. Waste Age Magazine. http://wasteage.com/Recycling_And_Processing/hdpe-high-density-polyethylene-msw-200902 (Accessed Sept. 29, 2010).
39. Muller, R. J. Biotech. 2000, 86, 87-95.
