The Uncertain Future of Nuclear Power

Here is another paper on nuclear power that I wrote for my ENGR 140 class in April of 2019. I cheated slightly and reused most of the content from my nuclear power essay, “Nuclear Power Subsidies: Are They Worth It?”, which I wrote for my English 102 class around the same time. So there is only a little new content in this one, but I thought I’d share it anyway. Any critiques would be much appreciated.


Nuclear power is a huge industry comprising about 450 nuclear reactors around the world that together supply eleven percent of global electricity demand (“Nuclear Power”). However, this vast enterprise that the world has come to depend on was not always here. In fact, nuclear power is a relatively recent development compared to other energy sources. The first nuclear reactor to deliver electricity to the power grid began operating in the Soviet city of Obninsk in July of 1954. It was a five megawatt electric (MWe) reactor called AM-1, standing for Atom Mirny (Peaceful Atom). The name alludes to its peaceful use as a power generator, as opposed to the weapons-focused atmosphere in which nuclear power was developed. The year before it was built, US President Dwight D. Eisenhower had enacted a program he called “Atoms for Peace,” the purpose of which was to reallocate research funds from weapons development to electricity production. It bore its first fruit in 1960 with Yankee Rowe, a 250 MWe pressurized water reactor (PWR). Throughout the 1960s, Canada, France, and the Soviet Union developed their own nuclear reactor systems: Canada devised a unique reactor type called CANDU; France began with a type similar to the Magnox design developed in Britain in 1956 but then settled on the PWR design; and the Soviet Union built two small reactors, a PWR and another type called a boiling water reactor (BWR). Britain later settled on the PWR design as well. By the end of the 1960s, the US was manufacturing 1,000 MWe reactors of PWR and BWR design. The Soviet Union lagged a little, but by 1973 it had developed its first high power channel reactor (RBMK) rated at 440 MWe, a design that was later superseded by a 1,000 MWe design. Kazakhstan and Russia went on a brief tangent developing so-called fast neutron reactors, but other countries almost invariably embraced the light water reactor (LWR) design, which includes the BWR and PWR designs (“History of Nuclear Energy”).

            Despite all of this growth, however, from 1970 to 2002 nuclear power underwent a “brown out” in which demand for reactors declined and previous orders were cancelled (“History of Nuclear Energy”). During this period, the Chernobyl nuclear disaster struck. The stagnation did not last forever, though, and by the 1990s nuclear power was blossoming again, beginning with the Japanese Kashiwazaki-Kariwa 6, a 1,350 MWe advanced BWR. In the 2000s, some new reactors were built in Europe and North America, but the bulk of construction occurred, and is ongoing, in Asia (“History of Nuclear Energy”).

This revival in nuclear power can be attributed to several factors: policy makers around the world began searching for a sustainable, reliable, low-cost, low-carbon, and secure power source to address the global energy crisis (Sovacool, “Second Thoughts” 3) and provide their countries with energy security (“History of Nuclear Energy”). These considerations continue to the present; because of concerns about conventional energy sources like coal and oil, the need to stabilize harmfully volatile energy prices, and a growing fear of global warming, nuclear power is being considered as a possible answer to global energy exigencies (Sovacool, “Second Thoughts” 3). These ideas have garnered significant support, and an effective, well-organized effort to increase public investment in nuclear power to unprecedented levels is ongoing (Koplow 11). Nuclear power is receiving strong support from organizations like the US Department of Energy (DOE), the International Atomic Energy Agency (IAEA), and the International Energy Agency (IEA) (Sovacool, “Second Thoughts” 3). All of these policy makers seek great benefit for society from nuclear power, but it is worth considering its drawbacks, too.

            Since the development of nuclear power, there have been many tragic accidents. In fact, the most recent accident was one of the worst in history. A 15-metre tsunami caused by a magnitude 9 earthquake off the eastern coast of Honshu island shut down power and cooling to three reactors at the Fukushima Daiichi nuclear site on March 11, 2011, causing all three reactor cores to melt down over the next three days and release 940 PBq of radiation, earning the event a rating of 7, the maximum, on the International Nuclear Event Scale (INES). About 100,000 people were evacuated from the surrounding area, of whom roughly 1,000 died as a result of the extended evacuation (“Fukushima”). According to Swiss bank UBS, this catastrophe was more damaging to the reputation of nuclear power than the more severe Chernobyl disaster of 1986, because it occurred in Japan, a highly developed economy (Paton 2011). Even Japan was unable to control the inherently risky nature of nuclear power.

            The processes that occur inside a nuclear reactor are essentially the same as those that operate inside an atomic bomb, only slower and, usually, more controlled. While everything is designed to function properly under ideal conditions with some margin for error, sometimes reactors are struck with more than they can bear: natural disasters such as earthquakes and tsunamis. When this happens, the reactions in the reactors can “run away” with devastating results. A report by the Guardian shortly after the 2011 meltdown in Fukushima, Japan, one of the worst nuclear disasters in history, counted thirty-four nuclear and radioactive accidents and incidents since 1952, the year the first one occurred (“Nuclear Power Plant Accidents”). Later in 2011, another incident occurred (“Factbox”), this time in France, bringing that total to thirty-five. A 2010 estimate by energy expert Benjamin K. Sovacool placed the number at ninety-nine, but he used different criteria; Sovacool expanded the definition of a nuclear incident to anything that causes property damages in excess of 50,000 USD (“A Critical Evaluation of Nuclear Power”). He estimated that these incidents’ total costs in property damages exceeded twenty billion USD, and this was before the extremely expensive Fukushima incident that occurred a year later. Even with all of these accidents, however, some still argue that nuclear power is one of the safest forms of power generation.

           The World Nuclear Association defends the nuclear industry with the following statistics: only three major accidents have occurred in over 17,000 collective reactor-years of nuclear plant operation; a terrorist attack via airplane would be ineffective; few deaths would be caused by reactor failure of any magnitude; and other energy sources cause many more deaths, as the fatalities per TWy for coal (597), natural gas (111), and hydro (10,285) all dwarf the figure for nuclear (48) (“Safety of Nuclear Reactors”). However, this does not consider several important factors. Thousands have been killed either as a direct result of conditions caused by the incidents or indirectly as a result of evacuations (“Fukushima”). Another factor that some ignore or underestimate is that very significant numbers of cancer deaths have been attributed to nuclear accidents, not including unquantified irradiation from regularly produced nuclear wastes (Sovacool et al.); in addition, little is known about fuel cycle safety (Beckjord et al. ix). Nuclear weapons expert Lisbeth Gronlund has estimated, with ninety-five percent confidence, 12,000-57,000 cancer deaths from the Chernobyl accident alone (Gronlund). It is clear, then, that nuclear power has had a much greater negative impact on society than many suppose.

            In addition to the risk of irradiation, there is the unique risk of nuclear proliferation. Several countries have successfully managed to covertly advance their nuclear weapons programs behind a clever guise of nuclear power (Sovacool et al.). Because of this, nuclear power will always require government oversight of waste management and proliferation risks. According to the authors of a 2003 interdisciplinary MIT study, if the nuclear industry is to expand, new international safety guidelines will be needed to overcome proliferation risks (Beckjord et al. ix).

            These same authors of the MIT study suggest that a once-through fuel cycle, in which a significant amount of fissionable (although expensive to recover) uranium is wasted, is the best option in terms of cost, proliferation resistance, and fuel cycle safety. The only disadvantages, they say, are in long-term fuel disposal and resource preservation (Beckjord et al. 4-5). However, this, as the authors admit, leads to more toxic waste and faster consumption of limited resources. They propose a model in which 1,000 new LWRs would be built by the year 2050 to help displace CO2 emissions from dirtier forms of energy production. They are willing to accept the predicted four nuclear accidents that would occur during this expansion (Beckjord et al.). To consider what the full impact of such a proposition would be on society, it is worth looking back at all of the impacts of the current nuclear industry.

            The nuclear power industry has several other ill effects, social and environmental, not previously discussed. First of all, it causes direct environmental damage: it kills a great deal of aquatic wildlife through cooling-water intake and contamination (Sovacool, “Second Thoughts” 6), and it creates such damage at certain sites that environmental remediation expenses sometimes exceed the value of the ore extracted at uranium mills (Koplow 6). It also causes indirect environmental damage by contributing to global warming.

            Nuclear power produces significant carbon dioxide emissions, contrary to the claims of some that nuclear power is carbon-free (Beckjord et al. 2). This common misconception arises from the fact that nuclear reactors themselves produce no emissions, but it is exposed once the entire nuclear fuel cycle is included in the calculations. One estimate ranked greenhouse gas (GHG) emissions from power plants per unit of electricity generated, from highest to lowest: industrial gas, lignite, hard coal, oil, natural gas, biomass, photovoltaic, wind, nuclear, and hydroelectric. Although hydroelectric is ranked at the bottom, emissions from hydroelectric plants operated in tropical regions can be 5-20 times higher than from those in temperate regions, putting them on the same level as biomass (Dones et al. 38). In this list nuclear is ranked second best, but later estimates contest this figure: Sovacool analyzed 103 studies of nuclear power plant GHG-equivalent emissions for currency, originality, and transparency. He found that the mean value for nuclear power plants is 66 gCO2e/kWh, placing nuclear power above all renewables (“Greenhouse Gas Emissions” 1). And the prospects for nuclear power emissions will only get worse as time goes on. The quality of the ore used can greatly skew estimates of GHG emissions (Storm van Leeuwen). As high-quality uranium reserves are depleted, the nuclear power industry will be forced to turn to lower and lower grades of uranium ore, which will drastically increase the energy required to produce fissionable nuclear fuel. Since refinement processes are powered by fossil fuels, this will also significantly increase the carbon footprint of nuclear power. Thus, emissions from the nuclear fuel cycle will match those of combined-cycle gas-fired power plants in only a few decades. Although advanced fast-breeder or thorium reactors could potentially reduce this problem, they are not likely to be commercially available for at least a couple of decades. This, combined with the long deployment times for nuclear reactors, effectively shows that nuclear power is not a viable long-term solution for reducing carbon dioxide emissions (Diesendorf 8-11). Nuclear power is a significant environmental burden, but it is also a significant social burden.

            From its beginnings, the nuclear power industry received heavy subsidies from governments, meaning the general public was forced to fund this private industry. On September 2, 1957, the Price-Anderson Act, designed to limit the liability incurred by nuclear power plant licensees from possible damages to members of the public, attained force of law in the United States. This first nuclear power subsidy was intended to attract private investment into the nuclear industry by shifting liability for nuclear accidents from private investors to the public, leaving the public to pay for damages done to it (“Backgrounder on Nuclear Insurance”). Since then, subsidies in various forms have continued. One form of subsidy that the nuclear industry receives is research and development funding. Of the roughly 12.7 billion USD in energy research and development funds from International Energy Agency (IEA) member countries, nuclear power received twenty percent in 2015, one-third of its 1975 share (“Energy Subsidies”). Other subsidies can be categorized as follows: output-linked, production factors, risk and security management, intermediate inputs, and emissions and waste management. Output-linked subsidies grant financing based on power produced. Subsidies to production factors help cover construction costs. Risk and security management subsidies shift liability for accidents either to consumers or the government. Intermediate input subsidies lower the cost of obtaining resources necessary for generating power, such as fuel and coolants. Emissions and waste management subsidies either eliminate or reduce the cost of waste disposal (Koplow 12-13). Examples of such subsidies can be found in every aspect of nuclear power: federal loan guarantees, accelerated depreciation, subsidized borrowing costs for publicly owned reactors, construction work in progress (CWIP) surcharges to consumers, property tax reductions, subsidized fuel, loan guarantees for enrichment facilities, priority access to cooling water for little to no cost, no responsibility to cover the costs of potential terrorist attacks, ignored proliferation costs, and lowered tax rates on decommissioning trust funds (Koplow 5-8). Governments also covered the 70 billion dollars needed to defray excess capital costs of nuclear power plants in recent years. Furthermore, the nuclear industry is not forced to pay the true cost of its numerous accidents, some of which have been estimated to exceed 100 billion dollars (Bradford 14). If the projected model from the MIT study of 1,000 new reactors by 2050 becomes reality, the immense cost to taxpayers of the nuclear industry will only increase proportionally.

            However, there are doubts about whether this plan is even feasible. According to the 2003 interdisciplinary MIT study, there is enough uranium to fuel 1,000 new reactors for forty years (Beckjord et al. 4), seemingly leaving no sustainability issues for nuclear power in the near future. However, this figure does not account for an important factor: the quality of the uranium. As uranium ore quality decreases, the energy cost of extraction increases exponentially (Storm van Leeuwen 23). Factoring in uranium quality, energy expert Benjamin K. Sovacool estimated that global uranium reserves could only sustain a two percent increase in nuclear power production and would disappear after a mere seventy years (“Second Thoughts” 6). Furthermore, the most economical uranium ore deposits have already been discovered, and nearly all are currently being mined; new deposits of comparable quality are very unlikely to be discovered, for many geologic reasons (Storm van Leeuwen 71). Additionally, nuclear power may become a non-viable option if climate change significantly increases water demand, since according to energy researcher Doug Koplow, nuclear power is “the most water-intensive large-scale thermal energy technology in use” (7). Another sustainability concern for nuclear power plants is waste disposal. Health and environmental risks posed by spent fuel from nuclear reactors last for tens of thousands of years (Beckjord et al. 22). To date, even the authors of the favorable MIT study admit that “no nation has successfully demonstrated a disposal system for these nuclear wastes” (22). If this is the case, how can the world expect to support an over 200% increase in the number of nuclear reactors?
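To see why even a modest growth rate eats through a fixed resource so quickly, here is a minimal sketch of the arithmetic behind that kind of estimate, using hypothetical numbers of my own rather than figures from the sources cited above:

```python
# A minimal sketch (my own illustration, not from the cited sources) of why growth
# shortens reserve lifetimes so sharply. If reserves would last `years_at_flat_use`
# at today's consumption, and consumption instead grows at `growth` per year, the
# reserves run out when the cumulative (exponentially growing) use catches up.
import math

def depletion_time(years_at_flat_use, growth):
    # Solve (e^(g*T) - 1)/g = N for T, the year cumulative use equals the reserves.
    return math.log(1 + growth * years_at_flat_use) / growth

# Hypothetical example: reserves that would last 200 years at constant use
# last only about 80 years if use grows 2 percent per year.
print(round(depletion_time(200, 0.02)))   # -> 80
```

The point is not the particular reserve estimate but that compound growth, even at two percent, collapses the timeline.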

            In short, although nuclear power experienced intense growth throughout the 1950s and 60s and then again into the beginning of the 21st century, when the true costs are considered, it seems that it may have done more harm than good. It has proven to be unsafe in many ways, from the numerous accidents that have caused great direct property and environmental damage and indirectly caused thousands of fatalities, to the risks of nuclear proliferation and the day-to-day environmental destruction involved in running nuclear power plants. The authors of the MIT study sum it up well: if new technological solutions to current problems are not developed, “nuclear power faces stagnation and decline” (ix), much as it did in the last quarter of the 20th century. Thus, nuclear power is not likely to play a significant long-term role in power generation in the foreseeable future, at least not in its current state.


Works Cited

“Backgrounder on Nuclear Insurance and Disaster Relief.” United States Nuclear Regulatory Commission, January 17, 2018, https://www.nrc.gov/reading-rm/doc-collections/fact-sheets/nuclear-insurance.html. Accessed April 26, 2019.

Beckjord, Eric S., et al. “The Future of Nuclear Power.” Massachusetts Institute of Technology, 2003, http://web.mit.edu/nuclearpower/pdf/nuclearpower-full.pdf. Accessed April 25, 2019.

Bradford, Peter A. “Wasting Time: Subsidies, Operating Reactors, and Melting Ice.” Bulletin of the Atomic Scientists, vol. 73, no. 1, Jan. 2017, pp. 13–16. EBSCOhost, doi:10.1080/00963402.2016.1264207.

Diesendorf, Mark. “Is Nuclear Energy a Possible Solution to Global Warming?” Social Alternatives, vol. 26, no. 2, Second Quarter 2007, pp. 8–11. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=26314563. Accessed April 26, 2019.

Dones, R., T. Heck, and S. Hirschberg. “Greenhouse Gas Emissions from Energy Systems: Comparison and Overview.” Paul Scherrer Institute, 2004, https://inis.iaea.org/search/search.aspx?orig_q=RN:36002859. Accessed April 25, 2019.

“Energy Subsidies.” World Nuclear Association, February 2018, https://www.world-nuclear.org/information-library/economic-aspects/energy-subsidies.aspx. Accessed April 25, 2019.

“Factbox: A Brief History of French Nuclear Accidents.” Reuters, September 12, 2011, https://www.reuters.com/article/us-france-nuclear-accidents/factbox-a-brief-history-of-french-nuclear-accidents-idUSTRE78B59J20110912. Accessed April 27, 2019.

“Fukushima Daiichi Accident.” World Nuclear Association, October 2018, http://www.world-nuclear.org/information-library/safety-and-security/safety-of-plants/fukushima-accident.aspx. Accessed April 25, 2019.

Gronlund, Lisbeth. “How Many Cancers Did Chernobyl Really Cause?—Updated Version.” Union of Concerned Scientists, April 17, 2011, https://allthingsnuclear.org/lgronlund/how-many-cancers-did-chernobyl-really-cause-updated?. Accessed April 27, 2019.

“History of Nuclear Energy.” World Nuclear Association, April 2019, http://www.world-nuclear.org/information-library/current-and-future-generation/outline-history-of-nuclear-energy.aspx. Accessed April 25, 2019.

Koplow, Doug. “Nuclear Power: Still Not Viable without Subsidies.” Union of Concerned Scientists, February 2011, https://www.ucsusa.org/sites/default/files/legacy/assets/documents/nuclear_power/nuclear_subsidies_report.pdf. Accessed April 26, 2019.

“Levelized Cost and Levelized Avoided Cost of New Generation Resources in the Annual Energy Outlook 2019.” Energy Information Administration, February 2019, https://www.eia.gov/outlooks/aeo/pdf/electricity_generation.pdf. Accessed April 25, 2019.

“Nuclear Power in the World Today.” World Nuclear Association, February 2019, http://www.world-nuclear.org/information-library/current-and-future-generation/nuclear-power-in-the-world-today.aspx. Accessed April 25, 2019.

“Nuclear Power Plant Accidents: Listed and Ranked since 1952.” The Guardian, 2011, https://www.theguardian.com/news/datablog/2011/mar/14/nuclear-power-plant-accidents-list-rank. Accessed April 27, 2019.

Paton, James. “Fukushima Crisis Worse for Atomic Power Than Chernobyl, UBS Says.” Bloomberg, April 4, 2011, https://www.bloomberg.com/news/articles/2011-04-04/fukushima-crisis-worse-for-nuclear-power-industry-than-chernobyl-ubs-says. Accessed April 26, 2019.

“Safety of Nuclear Reactors.” World Nuclear Association, May 2018, http://www.world-nuclear.org/information-library/safety-and-security/safety-of-plants/safety-of-nuclear-power-reactors.aspx. Accessed April 26, 2019.

Sovacool, Benjamin K. “A Critical Evaluation of Nuclear Power and Renewable Electricity in Asia.” Journal of Contemporary Asia, vol. 40, no. 3, Aug. 2010, pp. 369–400. EBSCOhost, doi:10.1080/00472331003798350. Accessed April 26, 2019.

—. “Second Thoughts About Nuclear Power.” Research Support Unit (RSU), Lee Kuan Yew School of Public Policy, National University of Singapore, January 2011, http://www.fukuleaks.org/edanoleaks/Scribble_Japan_Earthquake/pdfs/201101_RSU_PolicyBrief_1-2nd_Thought_Nuclear-Sovacool.pdf. Accessed April 26, 2019.

—. “Valuing the Greenhouse Gas Emissions from Nuclear Power: A Critical Survey.” Energy Policy, 2008, https://www.nirs.org/wp-content/uploads/climate/background/sovacool_nuclear_ghg.pdf. Accessed April 25, 2019.

Storm van Leeuwen, Jan Willem. “Nuclear Power – The Energy Balance: Part D: Uranium.” Ceedata Consultancy, October 2007, https://www.stormsmith.nl/Media/downloads/partD.pdf. Accessed April 25, 2019.

Superfluidity

Here is a paper I wrote in March of 2018 about another intriguing physical phenomenon: superfluidity. I hope you find it as cool as you would have to get to observe this phenomenon, which is usually close to absolute zero! Please enjoy, and don’t hesitate to comment in the forum if you have any questions. That way more people, including me, can learn from your question!


“Superfluidity” describes a property of liquid matter: the property of having zero viscosity, or, to be precise, immeasurably low viscosity (1). This means that if a superfluid were stirred, it would cycle in endless vortices, conserving one hundred percent of its kinetic energy. And if a hole were made in the bottom of the vessel, then the superfluid would flow out very quickly compared to other fluids, e.g. honey. The rate of flow of course depends on the size of the hole, so for comparisons of viscosity assume that the holes are the same size. If one fills a cup with honey and then pokes a hole in it, the honey will eventually flow out, but very slowly. A superfluid, on the other hand, would flow out at the highest possible rate, limited only by the size of the hole (3). To understand this, an understanding of viscosity is needed. The fundamental principle behind viscosity is friction: the resistance that one object has to moving over another. In the case of liquids, which lack structure, it is the friction between the molecules or atoms of the liquid itself that causes viscosity (2). In the case of a superfluid, there is no such internal friction, due to its special molecular or atomic makeup. That is the fundamental definition of a superfluid, but these materials can exhibit many other strange properties (1). Along with the property of zero viscosity come lower density and a significant change in specific heat. But it is the property of zero viscosity that produces the strangest effects.

                Superfluidity and superconductivity are normally only exhibited at extremely low temperatures. This is because the particles that condense must always be indistinguishable. The de Broglie wavelengths of indistinguishable particles must have a high degree of overlap. The de Broglie wavelength equals Planck’s constant divided by momentum, i.e. mass times velocity. Since large particles, e.g. baseballs, have high mass, their de Broglie wavelengths will be infinitesimally small, which is why it is so unlikely that large particles will exhibit wave-like behavior. For de Broglie wavelengths to overlap, they need to be very long, which is achieved by cold temperatures, and it helps to have a high packing density of particles (8). Cold temperatures are very costly to generate, so this requirement limits the practical applications of superfluidity and superconductivity. Some materials used to make superconductors are also very expensive, e.g. niobium (9).
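As a rough numerical illustration of that formula (my own sketch, with illustrative masses and speeds rather than values from the cited sources), compare a baseball to a helium-4 atom at liquid-helium temperatures:

```python
# de Broglie wavelength: lambda = h / (m * v)
# Illustrative numbers only: a thrown baseball versus a slow helium-4 atom.
h = 6.626e-34          # Planck's constant in joule-seconds

def de_broglie_wavelength(mass_kg, speed_m_per_s):
    return h / (mass_kg * speed_m_per_s)

baseball = de_broglie_wavelength(0.145, 40.0)      # ~0.145 kg at ~40 m/s
helium4  = de_broglie_wavelength(6.6e-27, 100.0)   # ~6.6e-27 kg at ~100 m/s (a very cold gas)

print(f"baseball: {baseball:.1e} m")   # ~1e-34 m, unimaginably small
print(f"helium-4: {helium4:.1e} m")    # ~1e-9 m, approaching the spacing between atoms
```

The baseball’s wavelength is so absurdly short that overlap is hopeless, while cold helium atoms have wavelengths long enough to overlap with their neighbors.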

                However, there exist so-called “high temperature superconductors” that exhibit superconductivity at a balmy eighty kelvin. The BCS theory does not account for their existence, but since they do exist, they will probably prove very useful in the future. One current use of superconductors is in particle accelerators, such as those at CERN (10).

                Superfluids also exhibit extremely high thermal conductivity, according to some sources infinitely high (12). Heat is transmitted so quickly that thermal waves are created. This is possible because the particles of a superfluid are in the same quantum state, i.e. if one particle moves, they all move. Heat is conducted when excited particles bump into each other. If a particle bumps into a superfluid particle, it will move at the same time as all of the other particles in the superfluid, transmitting the heat from one side of the superfluid to the other almost instantly (11).

                Having no internal friction allows for some spectacular displays. For example, if a vessel is filled with liquid helium-4 which is then cooled below 2.17 kelvin, condensing it into a superfluid, it will creep up the walls of the vessel and flow out. This effect is caused by minor differences in temperature and atmospheric pressure inside and outside the vessel. These tiny differences are enough to move the superfluid against the force of gravity because friction does not hinder its flow (3).

                A vessel with tiny, molecule-sized holes in the bottom will hold ordinary liquid helium-4, but if the helium is cooled below 2.17 kelvin, it will immediately begin flowing through those holes, again as a consequence of its immeasurably low viscosity (3).

                Just a month after the discovery of superfluidity, in 1938, another odd effect was observed accidentally by British physicist Jack Allen. He had a long, thin tube sticking above a liquid helium bath and packed with fine emery powder. He shined a flashlight on the apparatus, and the emery powder absorbed the light, slightly elevating the temperature of the superfluid. As long as the light shone, a fountain of liquid helium sprayed from the end of the tube above the bath. The heat creates a back pressure that forces the superfluid helium up (13).

                As stated above, a superfluid could be used to create perpetual motion. Of course, no one should be deceived into thinking this means that surplus energy could be produced and thereby unlimited power; a superfluid can only conserve the energy that it is given and no more, it does not produce any of its own. However, it is a special property of helium that it never settles into a solid state, not even at absolute zero; it always remains a liquid. This is because the helium atoms are so weakly attracted to one another that the slight jiggling caused by the quantum uncertainty principle is enough to keep them apart, at least at standard pressure (3). It would be naive to think that infinite energy could be harvested from the quantum uncertainty principle; as physicists learn more about quantum mechanics, they will probably only better understand how no energy is created or destroyed.

                Superfluidity was first demonstrated in two studies published in Nature in 1938 by the British duo John Allen and Don Misener, and independently by Russian physicist Pyotr Kapitza. Allen and Misener measured the flow of liquid helium-4 through long, thin tubes and found that it flowed with zero viscosity at temperatures below 2.17 kelvin, close to absolute zero. Kapitza made comparable observations on the flow of liquid helium-4 between two glass discs. He also hypothesized a connection between superfluidity, the resistanceless flow of atoms or molecules, and superconductivity, the resistanceless flow of electrons, which had been discovered some years earlier, in 1911, by Dutchman Heike Kamerlingh Onnes. These two concepts were at the frontier of physics back then, and it wasn’t until 1957 that Bardeen, Cooper, and Schrieffer (BCS) devised a complete theory to link the phenomena (4).

                However, very soon after the discovery of superfluidity an explanation was offered: Bose-Einstein condensation. This is the process whereby particles known as bosons condense into a single quantum state (4). A boson is a particle whose spin, or intrinsic angular momentum, is zero or an integer. The only other possible class of particle is the fermion, a particle with half-integer spin. Spin dictates how particles distribute themselves over energy states. Bosons obey Bose-Einstein statistics and fermions obey Fermi-Dirac statistics. Bose-Einstein statistics allows an unlimited number of particles to occupy a single energy level, unlike Fermi-Dirac statistics, which follows the Pauli exclusion principle: no two identical fermions can occupy the same quantum state (5). Imagine two fermions, say electrons. Electrons are indistinguishable, so it is impossible to tell which electron occupies which state. Let ψ_a(1) be the amplitude (the quantity whose square gives a probability) that electron 1 occupies state a, and let ψ_b(2) be the amplitude that electron 2 occupies state b. Because the electrons are interchangeable, the combined two-electron wave function ψ must be arranged like this:

ψ = ψ_a(1)ψ_b(2) − ψ_a(2)ψ_b(1)

The above relationship describes a two-electron wave function, and if both electrons occupy the same state, a or b, the wave function vanishes. Physicists draw from this the Pauli exclusion principle. The same equation can be used for bosons, except the minus sign must be changed to a plus sign. It is their ability to condense together in unlimited numbers in the same energy state that allows bosons to form Bose-Einstein condensates (6). But liquid helium-4 atoms are composed of six fermions (two protons, two neutrons, and two electrons) and no bosons! Liquid helium-4 atoms can form a Bose-Einstein condensate only because an even number, e.g. six, of interacting fermions can form a composite boson. This allows liquid helium-4 atoms to condense into the lowest possible energy state and become a superfluid (4).
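Written out side by side (standard textbook notation, supplied by me rather than drawn from the numbered sources), the two cases look like this:

```latex
\psi_{\text{fermions}} = \psi_a(1)\,\psi_b(2) - \psi_a(2)\,\psi_b(1), \qquad
\psi_{\text{bosons}}   = \psi_a(1)\,\psi_b(2) + \psi_a(2)\,\psi_b(1).
```

Setting b = a makes the fermion combination vanish (the state is forbidden, which is the Pauli exclusion principle), while the boson combination becomes 2ψ_a(1)ψ_a(2), a perfectly allowed state, which is what makes condensation possible.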

                The BCS theory also offers an explanation for superconductivity. An electron moving through the lattice of a superconducting material will attract the lattice toward it, causing a ripple in the direction of its motion. An electron moving in the opposite direction will be attracted to this disturbance, and the two electrons become coupled together, forming what is called a Cooper pair. These Cooper pairs can also act like bosons and condense into a state of zero electrical resistance called superconductivity (7).

                After World War II, large quantities of the light isotope helium-3 became available because it was a byproduct of the production of tritium for hydrogen bombs. It would seem, in light of the information presented thus far, that helium-3, having an odd number of fermions (two protons, two electrons, and one neutron), could not condense into a superfluid. But it might be possible, according to the BCS theory, for the helium-3 atoms themselves to form Cooper pairs and thus become a superfluid. The theoretical properties of this hypothetical superfluid helium-3 were explored in the 1960s, before the actual discovery of the superfluid at temperatures below 0.003 kelvin in 1972 (4).

                The spin quantum number (S) and the orbital quantum number (L) of Cooper pairs characterize two types of angular momentum. Normal BCS superconductors have S=0 and L=0, but superfluid helium-3 has S=1 and L=1. These non-zero quantum numbers cause helium-3 superfluid to break certain basic symmetries of the normal liquid state, namely rotational and time-reversal symmetries, entailing a non-trivial topology for the Cooper pairs. A variation of the BCS theory was required to understand this, marking the beginning of unconventional superconductivity, but the strange behaviors of unconventional superconductors were only just beginning to be discovered (4).

                Superfluid helium-3 has two phases, A and B (in the absence of a magnetic field). The B phase exists over a much wider range of temperatures and pressures. It can also exist in many different excited states due to its lack of rotational symmetry, and these states are classified according to the total angular momentum of the Cooper pairs (J), with the possibilities J = 0, 1, or 2. One remarkable feature of the J = 2 states of the B phase is their ability to transmit transverse sound waves, something previously thought possible only in rigid solids (4).

                Since the discovery of superfluid helium-3, numerous other unconventional superconductors have been discovered, e.g. the cuprates. But only one other superconducting material has been found to have two such phases, namely UPt3 (4).

                Even today, physicists are finding new superfluid properties that defy understanding. The most recent developments in the study of superfluidity involve helium-3 confined in ultra-light aerogels, which has been shown to exhibit never-before-seen superfluid phases that are currently being studied (4).

                Superfluidity was just one of the amazing and counter-intuitive discoveries about quantum mechanics made in the twentieth century, and scientists continue to learn more about it in the twenty-first. Scientists are still a long way from fully understanding the phenomenon and harnessing its full potential.


References:

  1. Schmitt, Andreas. (2014). “Introduction to Superfluidity.” Springer. https://arxiv.org/pdf/1404.1284.pdf. Date-accessed: 4/18/2018.
  2. “What is viscosity?” (n.d.) Princeton. https://www.princeton.edu/~gasdyn/Research/T-C_Research_Folder/Viscosity_def.html. Date-accessed: 4/18/2018.
  3. Minkel, J. R. (2009). “Strange but True: Superfluid Helium can Climb Walls.” Scientific American. https://www.scientificamerican.com/article/superfluid-can-climb-walls/. Date-accessed: 4/18/2018.
  4. Halperin, William P. (2018). “Eighty Years of Superfluidity.” Nature. https://www.nature.com/articles/d41586-018-00417-7?error=cookies_not_supported&code=4fe886f1-804a-495e-a2a6-830b84b16621#ref-CR10. Date-accessed: 4/18/2018.
  5. Nave, R. (n.d.). “Spin Classification.” HyperPhysics. http://hyperphysics.phy-astr.gsu.edu/hbase/Particles/spinc.html#c3. Date-accessed: 4/18/2018.
  6. Nave, R. (n.d.). “Pauli Exclusion Principle.” Hyperphysics. http://hyperphysics.phy-astr.gsu.edu/hbase/pauli.html#c2. Date-accessed: 4/18/2018.
  7. Nave, R. (n.d.). “Cooper Pairs.” Hyperphysics. http://hyperphysics.phy-astr.gsu.edu/hbase/Solids/coop.html#c1. Date-accessed: 4/18/2018.
  8. Nave, R. (n.d.). “Wave Nature of Electron.” Hyperphysics. http://hyperphysics.phy-astr.gsu.edu/hbase/debrog.html#c3. Date-accessed: 4/18/2018.
  9. Cooley, Lance. Pong, Ian. (2016). “Cost drivers for very high energy p-p collider magnet conductors.” Fermilab. https://indico.cern.ch/event/438866/contributions/1085142/attachments/1257973/1858756/Cost_drivers_for_VHEPP_magnet_conductors-v2.pdf. Date-accessed: 4/19/2018.
  10. “Superconductivity.” (2018). CERN. https://home.cern/about/engineering/superconductivity. Date-accessed: 4/19/2018.
  11. “Properties of Superfluid.” (n.d.). http://ffden-2.phys.uaf.edu/212_fall2003.web.dir/Rodney_Guritz%20Folder/properties.htm. Date-accessed: 4/19/2018.
  12. “Infinite Thermal Conductivity.” (n.d.). https://superfluidsiiti.weebly.com/index.html. Date-accessed: 4/19/2018.
  13. “Superfluidity II- The Fountain Effect.” (2006). Nature Publishing Group. https://www.nature.com/physics/looking-back/superfluid2/index.html#. Date-accessed: 4/19/2018.

Bell’s Theorem

This is a paper that I wrote in September of 2017 about Bell’s theorem, a very impactful and interesting discovery in physics. I wish I understood more about it. In explaining Bell’s theorem, I also elucidated some basic concepts in physics such as locality and how light works. I apologize for referencing YouTube videos, the faux pas of citation, but, hopefully, you can excuse this singular error and enjoy the content of the paper. Want to talk more about Bell’s theorem? Head on over to the forum.


John Stewart Bell was born on 28 June 1928, or 6/28/28, in Belfast, Northern Ireland. He died of a cerebral hemorrhage at the age of 62 on 1 October 1990 in Geneva. Bell worked for CERN as a particle physicist, but accomplished his most important work in his off time, as a hobby: developing his theorem. Bell’s thesis, contrary to Einstein’s views, was that reality must be nonlocal. This has been supported by many experiments after him, but has been challenged by others, and remains controversial (1). To understand his wonderful and amazing discoveries, one must also comprehend some crucial underlying physics, which the following paragraphs explain.

            Realism, as a physical principle, holds that systems possess definite properties whether or not anyone observes them. It is concerned with the essence of scientific knowledge: scientific realists have faith in the information an observer receives through scientific processes (2). Bell’s theorem contests this.

            Locality states that no information or particles can travel at superluminal speed. Bell’s Theorem contests this also (3).

            Now an explanation of light waves must be provided. It is helpful to first break down the term electromagnetic field, a word combining the terms electric field and magnetic field. An electric field can be imagined as a plane with many vectors on it, each representing a point in space. These are force vectors exerting a force on any charged particle in space, in the direction of the vector and proportional to the length of the vector and the charge of the particle. Now imagine another vector field like the previous one. This represents the magnetic field. Only when a charged particle is moving across it does it exert a force, perpendicular to both the direction of the particle’s motion and the magnetic field, with a strength proportional to the length of the magnetic field vector, the particle’s charge, and its velocity. Maxwell’s equations describe the interplay between these two fields. When an electric field is circular, i.e. the vectors point in such a way as to form a loop, the magnetic field perpendicular to that plane changes in strength. And conversely, a loop of magnetic field creates a change in the electric field perpendicular to the plane of the loop. The result of this is electromagnetic radiation: electric and magnetic fields oscillating perpendicular to each other and to the direction of propagation (4).
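In symbols, this interplay is captured by the two curl equations of Maxwell's equations in a vacuum (standard form, supplied here for reference rather than taken from the cited video):

```latex
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},
\qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

A circulating electric field goes hand in hand with a changing magnetic field, and vice versa; chasing each other this way, the two fields propagate as an electromagnetic wave.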

            The electric and magnetic field components are most easily described separately for mathematical purposes. And it would make the mathematical representation even more convenient if the light represented were horizontally polarized. Polarization refers to the direction that a field is oscillating in. For example, vertical polarization describes a field oscillating up and down (4).

            The electric field component of an electromagnetic wave can be mathematically modeled by a cosine function with a variable t for time, a variable a for amplitude, a variable p for phase shift (which sets where in its cycle the function starts), and a variable f for frequency. That would look like this: a·cos(360ft + p), with the angle measured in degrees (4).

            Every valid wave in a vacuum solves Maxwell’s equations, which are linear equations built from combinations of derivatives of the electric and magnetic fields, arranged so that the result equals zero. Since every valid wave gives zero when entered into Maxwell’s equations, a valid wave (0) plus another valid wave (0) gives yet another valid wave (0)! This third valid wave is called a superposition, or sum, of the original two waves. The characteristics of the superposition depend on the amplitudes and phase shifts of the original two waves. If two perpendicularly polarized waves have different phase shifts, then instead of oscillating up and down or side to side, the superposition will oscillate in an ellipse. If the two original waves have the same amplitude and are ninety degrees out of phase with each other, the superposition will oscillate circularly, in what is known as circular polarization (4).
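Here is a small numerical sketch of that last case (my own illustration, not taken from the cited video): two perpendicular components of equal amplitude, ninety degrees out of phase, add up to a field vector of constant length that simply rotates.

```python
# Superpose a horizontally and a vertically polarized wave of equal amplitude,
# 90 degrees out of phase, and watch the tip of the total electric-field vector
# trace out a circle.
import math

a = 1.0          # amplitude of each component
f = 1.0          # frequency in cycles per unit time
phase_deg = 90.0 # phase shift between the two components

for step in range(8):
    t = step / (8 * f)                                        # sample one full cycle
    Ex = a * math.cos(math.radians(360 * f * t))              # horizontal component
    Ey = a * math.cos(math.radians(360 * f * t + phase_deg))  # vertical component
    print(f"t={t:.3f}  Ex={Ex:+.2f}  Ey={Ey:+.2f}  |E|={math.hypot(Ex, Ey):.2f}")

# |E| stays at 1.00 on every line: the field vector keeps a fixed length and
# rotates, i.e. circular polarization.
```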

            Every wave can be described as a superposition of two oscillating vectors, one on the vertical axis and the other on the horizontal axis. However, any wave could also be described with respect to a pair of perpendicular diagonal axes. The attitude of the perpendicular axes you choose is known as your choice of basis. Depending on the application, it might be more convenient to choose one basis over another (4).

            What has been presented above is the classical understanding. Most of it translates directly into the quantum world. Classically, the energy of a wave is taken to be proportional to the square of its amplitude. In principle, this should allow a continuous range of possible energies for waves. This seems intuitive, but physicists now know that the energy of a wave is always a discrete multiple of a smallest possible unit of energy. Imagine a staircase. Each step represents an energy level. Every wave is on a specific step and nowhere in between. The height of each step, the smallest possible increase or decrease in the energy of a wave, equals the wave’s frequency times Planck’s constant, h. In other words, every wave has an energy equal to an integer multiple of h times its frequency. This means that there is a minimum energy a light wave of a given frequency can have, and if it somehow loses energy at that level, it ceases to exist (4).
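As a quick worked example (the frequency is chosen by me for illustration and does not come from the cited video), a wave of visible light with frequency around 5 × 10^14 Hz has an energy step of

```latex
E = h f \approx \left(6.63\times10^{-34}\ \text{J·s}\right)\left(5\times10^{14}\ \text{Hz}\right) \approx 3.3\times10^{-19}\ \text{J},
```

so a wave of that frequency can carry that amount of energy, or twice it, or three times it, but nothing in between.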

            Energy, then, comes in discrete packets of different sizes; for light of a given frequency there is a minimum packet size, and all other allowed energies are multiples of that minimum. Now, light of a given frequency only forms when the right size packet is available. When one arrives, the wave zooms off with it and a light ray is formed. The higher the energy of the packet, the higher the energy of the wave that will take it away. That’s why the hotter a fire is, the brighter it is, and why its color shifts towards violet as the temperature increases. In normal ambient conditions, only little packets are available, so humans don’t get blinded or cooked to death! These little packets are called quanta, hence quantum mechanics (5).

            An electromagnetic wave at the minimum possible energy level is known as a photon. The reason photons themselves can have different energies is the third variable in the equation for the energy of an electromagnetic wave: frequency. A different photon can exist at any possible frequency (4).

            In quantum mechanics, the superposition of two perpendicular oscillating vectors used to describe any electromagnetic wave must be given a new interpretation. Under the classical understanding, a photon would be a superposition of two vectors each carrying a fraction of its energy, which the quantum understanding shows to be impossible, because a photon carries the minimum possible energy at its frequency. Classically, the squares of the moduli of the two vector components of a wave tell what percentage of that wave’s energy is found along a given direction. In the quantum understanding, however, a photon must have all of its energy in one direction because its energy cannot be subdivided. So the squares of the amplitudes of the component vectors instead give the probability that the photon will be found polarized in a given direction. If the probability is fifty percent for a given direction, half of the time a photon of that frequency will be found in that direction, and half of the time it will not (4).

            Now the reader is prepared to delve into Bell’s theorem. Proofs of Bell’s Theorem involve the use of what are called polarizing filters. A polarizing filter either blocks light from passing through it or polarizes it in one direction determined by the filter’s attitude. What follows is a description of an easy demonstration of Bell’s Theorem, not the actual experiment, which is quite complex (5).

            Imagine one vertically polarizing filter. All photons oscillating in the vertical direction will be let through one hundred percent of the time. Photons oscillating at a forty-five degree angle from vertical will only pass through fifty percent of the time. Now imagine a second filter is placed on top of the first. As the second filter is rotated away from vertical, less and less light is let through, until, at ninety degrees from vertical, light passes through both filters zero percent of the time, provided the filters are perfect. This is because all light that passes through the first filter is vertically polarized, meaning that it has a zero percent chance of passing through a horizontally polarized filter (5).

            Now imagine that the second filter is angled at ninety degrees, vertical being zero degrees. Add a third filter in between at forty-five degrees, and more light passes through than before! Twenty-five percent of the light passing through the first filter, to be exact. This is because the filter in the middle, angled at forty-five degrees, lets fifty percent of the vertically polarized light through, and that fifty percent becomes polarized at forty-five degrees. Fifty percent of the light polarized at forty-five degrees then passes through the third filter at ninety degrees. This seems natural and intuitive (3).
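The rule at work here is that a photon passes a filter with probability equal to the squared cosine of the angle between its polarization and the filter, and, if it passes, it takes on the filter's polarization. A minimal sketch of that rule (my own illustration, not code from the cited sources) reproduces the numbers above:

```python
# Probability that a photon polarized at `pol_deg` passes a filter at `filt_deg`
# is cos^2 of the angle between them; survivors take on the filter's angle.
import math

def pass_probability(pol_deg, filt_deg):
    return math.cos(math.radians(filt_deg - pol_deg)) ** 2

def transmission(filter_angles, initial_pol=0.0):
    """Fraction of vertically polarized light (0 degrees) surviving a stack of filters."""
    fraction, pol = 1.0, initial_pol
    for angle in filter_angles:
        fraction *= pass_probability(pol, angle)
        pol = angle          # surviving photons now carry the filter's polarization
    return fraction

print(round(transmission([90]), 6))      # 0.0  -- vertical light into a horizontal filter: all blocked
print(round(transmission([45, 90]), 6))  # 0.25 -- inserting a 45-degree filter lets a quarter through
```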

            Many people have speculated that quantum mechanics isn’t intrinsically probabilistic, as it appears to be when photons pass through a polarizing filter, but that there are some “hidden variables” that man has yet to grasp: a fundamental state of each photon that actually determines whether it will pass through a filter or not, rather than mere probability (3).

            Bell’s Theorem rests on what happens when a filter at 22.5 degrees, B, is placed on top of a filter at zero degrees, A, and below a filter at forty-five degrees, C, on top of the other two. Based on the previous demonstrations, it would be reasonable to expect that about seventy-five percent of the vertically polarized light would pass through B, since 22.5 degrees falls halfway between zero and forty-five degrees and, without B, fifty percent of the vertically polarized light passes through C. In fact, only about fifteen percent of the photons get blocked at B, and only another fifteen percent of the survivors get blocked at C (3)!

            To disprove hidden variable theory, first it is necessary to assume it is true. Imagine 100 photons that do have a mysterious hidden variable answering the following questions: Would the photon pass through A? Would it pass through B? Would it pass through C? Assume all photons start out vertically polarized and therefore all pass through A. Fifteen percent get blocked at B, so eighty-five make it through. Another small amount, about fifteen percent of eighty-five, gets blocked at C. In total, far fewer photons are blocked than the 50 that would get blocked if B weren’t in the middle, even though under hidden variables every photon destined to be blocked by C alone should still have been blocked somewhere along the way. So experiments contradict hidden variable theory (3).
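A short numerical check of these figures (my own sketch using the standard cos²/sin² rule, not code from the cited sources):

```python
# Quantum rule: a photon passes a filter with probability cos^2 of the angle
# difference, so the blocked fraction is sin^2; survivors take on the new polarization.
import math

def blocked_fraction(delta_deg):
    return math.sin(math.radians(delta_deg)) ** 2

photons = 100                                           # all vertically polarized, so all pass A (0 degrees)
blocked_at_B = photons * blocked_fraction(22.5)         # A (0)    -> B (22.5)
survivors    = photons - blocked_at_B
blocked_at_C = survivors * blocked_fraction(22.5)       # B (22.5) -> C (45)

print(round(blocked_at_B), round(blocked_at_C))         # ~15 and ~12 photons blocked
print(round(photons * blocked_fraction(45.0)))          # ~50 blocked if B is removed
# A hidden-variable picture needs:
#   (blocked going straight from A to C) <= (blocked at B) + (blocked at C)
# i.e. 50 <= 15 + 12, which fails -- this is the Bell-type contradiction described above.
```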

            Except, there’s a loophole: if passing through one filter affects how a photon will interact with future filters, then the phenomenon is easily explainable (3).

            But there is a way to circumvent that loophole, based on the Einstein-Podolsky-Rosen, or EPR, thought experiment, which was published on May 15, 1935 in Physical Review. It uses entangled pairs of photons, so that the probabilities of photons passing through different combinations of filters A, B, and C can be measured without one measurement disturbing the next. Basically, it shows that it is impossible for reality to be both local and real (6).

            However, experiments up until 2015 couldn’t prove this unequivocally due to flaws in the equipment used and in the experimental setup. But in 2015, the result became unequivocal with a loophole-free experiment (3).

References

  1. https://www.youtube.com/watch?v=i1TVZIBj7UA
  2. https://plato.stanford.edu/entries/qt-epr/
  3. https://www.youtube.com/watch?v=MzRCDLre1b4
  4. https://www.youtube.com/watch?v=zcqZHYo7ONs
  5. https://plato.stanford.edu/entries/scientific-realism/