Metallic Microlattices: As Light as Air and as Strong as an Airplane


[Figure: photograph of the metallic microlattice]

This photo might look like a mere product of Photoshop, but it actually is an incredible feat of materials engineering.

Researchers have developed the lightest material ever, roughly 100 times lighter than Styrofoam [1]! You might be wondering what kind of magical material could possibly be so light, but it's actually just made out of metal. The trick is that only a tiny fraction of it is solid; the other 99.99% is air!
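
As a rough sanity check on that 99.99% figure, here is a quick back-of-the-envelope calculation in Python. The solid fraction and the density of the nickel-phosphorus alloy are round-number assumptions chosen for illustration, not values taken from the cited sources:

    # Back-of-the-envelope check: how light is a lattice that is 99.99% air?
    # Assumed round numbers, for illustration only.
    nickel_phosphorus_density = 8.0   # g/cm^3, roughly the density of the solid NiP alloy
    solid_fraction = 0.0001           # 0.01% solid, i.e. 99.99% air

    lattice_density = nickel_phosphorus_density * solid_fraction  # ignore the mass of the air
    print(f"Estimated lattice density: {lattice_density * 1000:.1f} mg/cm^3")
    # -> about 0.8 mg/cm^3, in line with the ~0.9 mg/cm^3 widely reported for the microlattice

With those assumptions the lattice weighs in at well under a milligram per cubic centimeter, which is where the "lighter than Styrofoam" comparison comes from.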

Inspired by the strength of large architectural structures like bridge supports and the Eiffel Tower [1], collaborators from Boeing's HRL Laboratories, NASA, and the Defense Advanced Research Projects Agency (DARPA) have created a metallic microlattice that is lighter than any other material [1]. This microlattice, which takes established principles of structural design and shrinks them to the micro scale, is a framework of micrometer-scale tubes called "struts" arranged to form cells of air pockets.

What makes the microlattice even more special is how strong it is despite weighing so little [2]. Other ultralight materials also make use of air, but many of them are stochastic: they contain randomly spaced air pockets, like aerogels and foams. In contrast, researchers design microlattices with specific, repeating patterns of empty space. These patterned cells retain much more of the strength of the solid material than random structures do, because they can dissipate forces in controlled ways [2].

The microlattice pattern is made using the self-propagating photopolymer waveguide technique, a process similar to 3D printing [3]. While the name is a mouthful, the technique itself is fairly simple compared to many micromanufacturing processes. First, you create a 3D polymer template from a resin that hardens only where it is struck by UV light. Shining UV light at different angles through a patterned mask creates the 3D lattice of polymer. Next, you coat the polymer grid with a thin layer of nickel-phosphorus. Finally, you dissolve the polymer, leaving behind a beautiful network of hollow metal tubes. This relatively easy manufacturing process makes the metallic microlattice even more attractive for scaling up beyond the lab.

Not only is the microlattice extremely light, strong, and easy to construct, but it also has high energy absorption and recovery capabilities. Because each of the struts can flex relatively independently, the microlattice can take in large amounts of energy without breaking. This strut flexibility also means that the lattice can be compressed to 50% of its original size and bounce back completely [3]! These properties make the microlattice an ideal candidate for aerospace applications like shock absorption, where materials need to withstand large impacts and recover to keep doing their job [1]. Furthermore, because flying requires a lot of fuel, lightweight materials are especially valuable for energy and cost efficiency, which makes the microlattice a winning combination. So just like the metallic microlattice has taken materials engineering to new heights, it might soon do the same for you.

References:

1. Gent, E. Lightest Metal Ever is 99.9 Percent Air. https://www.livescience.com/52973-lightest-metal-ever-is-mostly-air.html (accessed Oct 10, 2018).

2. MacDonald, J. Microlattice: The World's Lightest Metal. https://daily.jstor.org/microlattice-worlds-lightest-metal/ (accessed Oct 10, 2018).

3. Schaedler, T. A.; Jacobsen, A. J.; Torrents, A.; Sorensen, A. E.; Lian, J.; Greer, J. R.; Valdevit, L.; Carter, W. B. Ultralight Metallic Microlattices. Science 2011, 334 (6058), 962–965.

4. HRL Researchers Develop World's Lightest Material. http://www.hrl.com/news/2011/11/17/hrl-researchers-develop-worlds-lightest-material (accessed Oct 10, 2018).

An Accidental Discovery with Monumental Impact

We all make mistakes; they're just another part of life. But what if a simple error on your part led to one of the greatest scientific discoveries of all time? What if it went on to save the lives of millions of people? What if it was so monumental that you were given the honor of being knighted by the king himself? This is exactly what happened to Sir Alexander Fleming, the Scottish physician who discovered the world's first antibiotic in 1928. The story began when Fleming served as a captain in the Royal Army Medical Corps during World War I. While stationed at battlefield hospitals along the Western Front in France, Fleming noticed that many wounded soldiers were dying not from the trauma itself but from the infections that followed. The only medicines available to treat infections at the time were antiseptics, but they often made injuries worse. After the war ended, Fleming joined St. Mary's Hospital as a microbiologist and began researching antibacterial drugs [1]. Despite years of research, he did not make much progress.

Then, in 1928, something truly remarkable occurred. Fleming was studying Staphylococcus, a fairly common bacterium that causes a wide array of diseases in humans. Before leaving for a summer vacation in Scotland, Fleming neglected to clean his lab bench and left a pile of dirty Petri dishes containing colonies of Staphylococcus in the corner. Upon his return, he noticed that some type of fungus had grown in the dishes and had managed to stunt the surrounding Staphylococcus growth. He was puzzled but curious about the properties of the strange fungus. After further experimentation, he was able to isolate the antibacterial substance the fungus produced. He named it penicillin [2].

In 1942, Anne Miller became the first patient to be successfully treated with penicillin. She had suffered a miscarriage and was dying from an infection that had led to blood poisoning [2]. Since its accidental discovery, penicillin is estimated to have saved more than 200 million lives. It can be used to combat a plethora of diseases, including syphilis, strep throat, and pneumonia [1]. Additionally, the antibiotic age enabled the development of many other branches of medicine, including organ transplantation and cancer therapies [3]. In other words, I can confidently say that this was one instance where putting off doing the dishes was a good idea.

References

  1. http://www.newworldencyclopedia.org/entry/Alexander_Fleming

  2. https://www.pbs.org/newshour/health/the-real-story-behind-the-worlds-first-antibiotic

  3. https://www.reactgroup.org/antibiotic-resistance/course-antibiotic-resistance-the-silent-tsunami/part-1/the-discovery-of-antibiotics/

Legislating Ignorance: Scientific Literacy, Political Partisanship, & the U.S. Economy

Historical evidence testifies to a causal relationship between scientific awareness and a nation's socio-economic progress. This evidence spans from the era of the ancient Greeks through the European Age of Exploration to more recent American advances in space travel and quantum physics. The inverse also holds: nations that lack scientific development tend to face economic stagnation and a generally impoverished way of life. Since the early 20th century, the U.S. has consistently pushed the frontiers of scientific research and discovery, an approach that produced a wealth of technological innovations and a thriving national economy. In recent years, however, there has been a decay in American science policy, especially with regard to research and development (R&D), as political loyalties drive increasingly partisan policymaking. Deconstructing the repercussions of scientifically regressive legislation on local economies demonstrates the consequences of civic scientific unawareness in the U.S. Doing so also highlights the dangers of partisanship in national science policymaking and emphasizes the necessity of scientific literacy for all citizens today. The effects of collectively scientifically illiterate governance thus impoverish both the immediate and the future economic standing and prosperity of the United States.

Advancements in science and technology through federally funded research yield strong economic returns through technological innovation and workforce employment. The most powerful drivers of political decisions on national research funding in the U.S. remain war and economic prosperity. Preserving national security fueled much of federal scientific research during most of the twentieth century, with World War II and the Cold War driving the U.S. government to advance its scientific and technological frontiers in weaponry and defense. More recently, however, economic growth has driven research funding, especially following the economic crises of the 1980s and late 2000s. Analyses by the NIH, the OECD, and independent economists consistently find a correlation between sharp national economic downturns and spikes in federal R&D allocations in the years that follow. The federal stimulus package passed in response to the 2008 financial crisis included sizeable expansions of research and development in areas of science not traditionally linked to economic policy at all, including biomedical research and nanotechnology, and investments in these fields soon began contributing to national revenue and reducing debt [1].

Contrary to popular opinion, the challenge to this system arises primarily in periods of economic growth and prosperity, when the public and policymakers alike take continued economic success for granted and begin to view scientific research as expendable. As congressional appropriations slash the share of federal spending devoted to research, the local economies that house research centers are the first to feel the repercussions. Beyond these immediate effects, the long-term aftermath of under-prioritizing science includes plummeting payoffs from research: current projects are curtailed as funding stagnates, and the findings of these malnourished projects lose their potential to drive further discovery. In recent years, as the authority of science itself has come under attack, politicians find it increasingly easy to dismiss the scientific community's recommendations as mere advocacy for the rival political party. As legislators continue to prioritize partisan agendas over scientific authority, they push scientific research lower among national policymaking priorities, crippling the future growth and expansion of the American economy.

The prolonged and unpredictable timelines on which returns from scientific funding materialize encourage society to prioritize lucrative short-term industries over expanding alternative, sustainable enterprises. Yet the vast majority of American economic and industrial growth over the past century has ridden almost entirely on the wave of these scientific investments rather than on short-term expenditures. Indeed, academic studies from the mid-20th century demonstrate that "an estimated 67 percent of the productivity growth in the United States from 1948 to 1973 was attributable to advances in applied knowledge, technology, and in the education and experience of the labor force" [3]. While progress in all of these areas operates on significantly longer timelines than most areas of public policy, the long-term impact of these intangible investments on the national economy far outweighs the fiscal returns of any expenditure made for short-term gain. History shows this trend recurring throughout the trajectory of scientific discoveries funded by the major global powers. From funding research on electromagnetism in the 1800s to spearheading discoveries in quantum physics and space exploration in the last century, almost every groundbreaking scientific or technological advancement has required some degree of vision and long-term commitment from policymakers and governments before it could revolutionize society [4].

This dynamic runs into severe ideological complications, however, when viewed through a contemporary political lens. With the recent influx of partisanship into science policy, politicians find it increasingly difficult to justify funding research that clashes with established socio-political ideologies, especially initiatives with no immediate returns in sight, only promised benefits over a horizon extending far beyond any member of Congress's reelection timescale [5]. This myopia is especially pronounced among policymakers who receive significant financial backing from corporations and other entities threatened by the expansion of sustainable industries. Scientific consensus no longer forms the crux of science-policy discussions, as partisan loyalties and socio-political ideologies have taken precedence in Congress. Compounded by the expanded influence of money in the political process, this leads policymakers to neglect long-term scientific and technological projects in order to appease their corporate campaign financers and score political points for the next election cycle.

By allowing partisan short-sightedness to take priority over scientific awareness, the U.S. weakens its role in future global markets founded on innovation. If we take historical evidence as an indicator of future trajectories, tomorrow's economies will be founded on the scientific and technological progress of today. Although numerous short-term remedies exist within the political framework of the government, the definitive long-term solution lies in informing the public of the indispensability of scientific discovery and of the necessity of impartiality in formulating science policy. A public that holds scientific research and awareness in low regard will inevitably elect representatives who introduce legislation rooted in either fundamental scientific unawareness or partisan science policy, policy that will eventually undermine and debilitate local and national economies within the U.S. and across the larger global community as well.


References

  1. Rosenberg, Nathan. “Science, Invention and Economic Growth.” The Economic Journal, vol. 84, no. 333, 1974, pp. 90–108. JSTOR.

  2. “Sustaining a Competitive Edge in Innovation Through a World-Class Federal Science and Technology Workforce,” Fast Track Action Committee on the Federal Science and Technology Workforce. National Science and Technology Council, July 2016.

  3. Walberg, Herbert J. “Scientific Literacy and Economic Productivity in International Perspective.” Daedalus, The MIT Press, vol. 112, no. 2, 1983, pp. 1–28. JSTOR.

  4. Wright, Carroll D. “Science and Economics.” Science, vol. 20, no. 522, 1904, pp. 897–909. JSTOR.

  5. Blute, Marion. “The Growth of Science and Economic Development.” American Sociological Review, vol. 37, no. 4, 1972, pp. 455–464. JSTOR.

  6. Lee, Stuart, and Wolff-Michael Roth. “Science and the ‘Good Citizen’: Community-Based Scientific Literacy.” Science, Technology, & Human Values, vol. 28, no. 3, 2003, pp. 403–424. JSTOR.


Hello Quantum Worlds!

Quantum computing has been held up as a sort of messiah for the technological plateau that humanity is experiencing. The prevailing belief is that with the advent of quantum computers, we will be able to do the impossible: break cryptographic codes, solve problems that have eluded computer scientists for years, and even occupy interstellar space.



This is what everyone has to offer when asked what quantum computing is:  “Well, these computers can tackle multiple problems at once. It’s because classical computers can only be in one state at a time — 0 or 1 — while quantum computers can be in both states at the same instant of time!”



The only problem with this explanation is that it is wrong, or at the very least misleading.

No, quantum computers are NOT in both states at the same time, at least not technically. And although we might be able to break cryptographic codes faster, we won't be able to solve every puzzle miraculously.



Justin Trudeau was once asked to summarize quantum computing [1], and he summed it up better than most layman explanations do:


“Very simply…normal computers work, either there’s power going through a wire or not— a one, or a zero. They’re binary systems … What quantum states allow for is much more complex information to be encoded into a single bit…a quantum state can be much more complex than that, because as we know, things can be both particle and wave at the same time.”


He was careful not to use the phrase "can be in both states at the same time," which is commendable, because that is where the problem lies. It turns out that sitting in two definite states at once is physically impossible. Rather, quantum computers take advantage of the superposition principle of quantum mechanics, which says that a particle can be treated as being in a combination of all its possible states until it is observed; upon observation, it randomly settles into one of them. In other words, quantum computers prepare superpositions, and entangled combinations [2], of 0 and 1, which is vastly different from simply being in two states at the same time. A geometric way to think about it is to picture 0 and 1 as just the poles of a sphere, and a "qubit" as any point on that sphere. Because there are so many of these points, far more information, by orders of magnitude [3], can be encoded in the same number of qubits than in classical bits.
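
To make the sphere picture concrete, here is a minimal sketch in Python (using NumPy) of a single qubit as a pair of complex amplitudes, i.e. a point on that sphere. The angles and the Hadamard gate are standard textbook objects; this snippet is purely illustrative and not drawn from any of the sources cited here:

    import numpy as np

    # A qubit is a unit vector of two complex amplitudes: a*|0> + b*|1>.
    # theta and phi are the "latitude" and "longitude" of the point on the sphere.
    def qubit(theta, phi):
        return np.array([np.cos(theta / 2),
                         np.exp(1j * phi) * np.sin(theta / 2)])

    ket0 = qubit(0.0, 0.0)      # north pole: the classical bit 0
    ket1 = qubit(np.pi, 0.0)    # south pole: the classical bit 1

    # The Hadamard gate rotates |0> onto the "equator": an equal superposition.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    superposed = H @ ket0

    # Measurement probabilities are the squared magnitudes of the amplitudes.
    print("State after Hadamard:", superposed)
    print("P(0), P(1):", np.abs(superposed) ** 2)   # -> 0.5 and 0.5

Measuring the superposed state gives 0 or 1 with equal probability, the "random settling" described above; the richness of quantum computing comes from being able to steer the state anywhere on the sphere (and entangle many such qubits), not from a qubit literally occupying both poles at once.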



While we may still be able to crack conventional cryptographic techniques much faster, it is because of this enormous capacity to encode more information, not because of any duality of states. The class of problems known as nondeterministic polynomial time, or NP (problems for which no polynomial-time solution-finding algorithms are known), will unfortunately remain out of reach, because quantum computers don't so much mathematically model a problem as physically model it; in theory, we let nature do the math for us and just watch where the final state ends up. The newfound capability to break cryptographic schemes wouldn't be a problem in the long run either, because there are ways to make encryption even more secure using quantum cryptography. In fact, Google has already begun testing such techniques [4]. Even current state-of-the-art research cannot guarantee that the "speed-up" we expect from quantum computing will materialize for every problem. Recently, Ewin Tang of the University of Texas at Austin showed that one of quantum computing's celebrated advances, a quantum recommendation algorithm, could be matched by a classical algorithm [5], erasing one of the field's flagship examples of quantum advantage. Add to that the fact that we are at least a decade away from the world's first meaningful quantum computer, and have been for more than a decade, and the picture is not so rosy anymore.



But there’s more reason to be optimistic than dismal. Intel has already created 49 and 17 qubit processor chips[6] that offer a glimpse into the enormous potential of quantum computing. They demonstrably prove that most traditional solvable problems will be solved in milliseconds, compared to minutes in the traditional way. The only major hurdle for stable quantum computing remains to achieve absolute zero-like temperatures. Qubits require temperatures 250 times colder than outer space to sustain their wave-like behavior. Attempts to recreate those environments in today’s laptops have yielded little fruit, however, the news that major companies have already started preparing for a quantum future is reason enough to be optimistic. And while we may not have quantum computers in our pocket anytime soon, watch out for each quantum of progress they make.




References

  1. Morris, David Z (April 17, 2016). “Justin Trudeau Explains Quantum Computing, And the Crowd Goes Wild”. Fortune Magazine. http://fortune.com/2016/04/17/justin-trudeau-quantum-computing/

  2. Beall, Abigail and Reynolds, Matt (February 16, 2018). “What are quantum computers and how do they work?”. Wired.  https://www.wired.co.uk/article/quantum-computing-explained

  3. Aaronson, Scott (2008). "The Limits of Quantum Computers". https://www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf

  4. Greenberg, Andy (July 7, 2016). “Google Tests New Crypto to Fend Off Quantum Attacks”. Wired. https://www.wired.com/2016/07/google-tests-new-crypto-chrome-fend-off-quantum-attacks/

  5. Hartnett, Kevin (July 31, 2018). "Major Quantum Computing Advance Made Obsolete by Teenager". Quanta Magazine. https://www.quantamagazine.org/teenager-finds-classical-alternative-to-quantum-recommendation-algorithm-20180731/

  6. Greenemeier, Larry (May 30, 2018). "How Close Are We—Really—to Building a Quantum Computer?". Scientific American. https://www.scientificamerican.com/article/how-close-are-we-really-to-building-a-quantum-computer/

Numbers Beyond Belief

What is the biggest number you can think of? Or better yet, what is the biggest number you can't think of? Graham's number is a quantity so mind-bogglingly large that if you tried to think of it, your head would quite literally collapse into a black hole. The maximum amount of information you can store in your brain is bounded by the entropy of a black hole with the same radius as your brain, and that entropy corresponds to less information than it would take to hold Graham's number in your head. The number is so large that the entire observable universe could not store it, even if each digit were written in a Planck volume, the smallest meaningful unit of space. Graham's number is a truly godly value, but where does it come from, and why do we need to know about it? Come with me on a journey to the fringes of infinity as we explore one of the biggest numbers ever used constructively: Graham's number.

Before we can consider Graham’s number, let us take a look at this math problem:

Let N be the smallest dimension n of a hypercube such that, if every line segment joining a pair of corners is colored with one of two colors, then for any n ≥ N the coloring is forced to contain a complete graph K4 of a single color whose four vertices lie in one plane.


 

If you are like most people who are not well versed in combinatorics, this question probably makes very little sense. Luckily, Hoffman proposed an equivalent problem, by analogy, that is more accessible to the average reader. It is stated like this:

 

Consider every possible committee that can be formed from some number of people n, and enumerate every pair of committees. Now assign each pair of committees to one of two groups, and find N*, the smallest n that guarantees there will be four committees in which all pairs fall in the same group and every person belongs to an even number of committees.

 

 

In a rather involved proof, the American mathematician Ronald Graham showed that the answer to this question lies somewhere between 6 and Graham's number.

To get an appreciation for how large Graham’s number is, we need to turn to “arrow notation”, proposed by the legendary computer scientist Don Knuth. First, let us begin with just one arrow:

 

 

3↑3 = 3³ = 27

So far, we are dealing with numbers we know and love. However, the numbers start to get really big, really fast. Let us explore two arrows now:

 

3↑↑3 = 3↑(3↑3) = 3²⁷ ≈ 7.6 trillion

As you can see, adding just one arrow escalates things dramatically. However, 7.6 trillion is a number we can still fathom. It’s about equal to the number of bacteria on eight human bodies. When you add just one more arrow, the numbers become quite literally out of this world.

 

3↑↑↑3 = 3↑↑(3↑↑3) = 3^3^3^⋯^3, a power tower of 3s that is 7.6 trillion levels tall
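
For readers who prefer code to arrows, here is a minimal sketch of Knuth's up-arrow notation as a recursive Python function. It only works for tiny inputs; anything with three or more arrows has far too many digits to ever compute, which is rather the point:

    def up_arrow(a, n, b):
        """Compute a (n arrows) b in Knuth's up-arrow notation, for small inputs only."""
        if n == 1:
            return a ** b                 # one arrow is ordinary exponentiation
        if b == 0:
            return 1                      # the empty iteration, by convention
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))   # 3↑3  = 27
    print(up_arrow(3, 2, 3))   # 3↑↑3 = 7,625,597,484,987 (about 7.6 trillion)
    # up_arrow(3, 3, 3) would be a power tower of 7.6 trillion 3s -- don't run it.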

We aren’t even close to Graham’s number yet. However, we now have the tools to start making sense of Graham's number. Let us first define the first pivotal quantity, g1:

 

g1 = 3↑↑↑↑3 (that is, 3 followed by four arrows followed by 3)

As you can imagine, g1, with its four arrows, is absolutely gargantuan. We can now define g2:

 

g2 = 3↑↑↑⋯↑↑↑3, where the number of arrows is g1

Naturally, g3 has g2 arrows, and so on and so forth. Onward we go until we hit g64, which has g63 arrows. Finally, you're done: Graham's number is g64.
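
The whole 64-rung ladder can be written as a recursion, even though no computer could ever evaluate it. The sketch below restates the up_arrow helper from the earlier snippet so it stands alone, then expresses the definition; only the final print statement actually runs:

    def up_arrow(a, n, b):
        # Knuth's up-arrow notation, same helper as in the earlier snippet.
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    def graham_level(k):
        """g_k in the ladder leading to Graham's number; hopeless to evaluate for real."""
        if k == 1:
            return up_arrow(3, 4, 3)                 # g1 = 3 with four arrows
        return up_arrow(3, graham_level(k - 1), 3)   # g_k = 3 with g_(k-1) arrows

    # Graham's number is graham_level(64). Evaluating it is beyond any computer,
    # so we settle for stating the recursion rather than running it.
    print("Graham's number = g64, where g_k = 3 (g_(k-1) arrows) 3 and g1 = 3↑↑↑↑3")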

For a long time, Graham's number was the largest number ever used in a mathematical proof. Nowadays, results about trees in graph theory have produced even bigger numbers, including the titanic TREE(3), but Graham's number will always have a place in mathematical lore. For most of us, numbers this big will have no impact on our lives, but in our most philosophical moments, as we ponder the universe and what lies beyond, we can remember that everything in existence could not hold such a value, and that this colossal number is still infinitely smaller than infinitely many other numbers. Eternity is quite a lot bigger than you might think.

 

References

  • Gardner, Martin (November 1977). "Mathematical Games". Scientific American.

  • Padilla, Tony; Parker, Matt. "Graham's Number". Numberphile. Brady Haran.

  • Graham, Ron. "What is Graham's Number? (feat Ron Graham)". Numberphile. Brady Haran.
