Be Heading Towards Worm Regeneration

An earthworm at its one week post-execution check-up

Several weeks ago, I was working on repairing a set of stone stairs. Some of the stones had fallen loose, and the whole hillside they ran down needed a retaining wall anyway. Unexpectedly, this led to an opportunity to learn about worm regeneration.

Stair-capped stone retaining wall
The stairs leading into our dooryard. We just love building this kind of long-lasting, beautiful, sustainable stone-work!

While digging, my shovel severed the head of an unfortunate earthworm. I was saddened by this, but also curious to see how much harm I had actually done to the poor creature. I had heard that earthworms had incredible regenerative abilities, and I wanted to see if this was true. So I gathered some tasty-looking loam and left it above ground inside a jar along with my vermian victim. Definitely not a sound scientific study, but enough to satisfy my curiosity. A week later, I checked inside the jar. To my utter astonishment and joy, the afflicted annelid (a Lumbricus terrestris) slithered lethargically in the soil. And not only that, the severed stump of its head had completely healed over!

A previously beheaded Lumbricus lazarus in recovery, one week post-execution.

And if this picture isn’t enough to convince you of the liveliness of this headless earthworm, here is some videographic evidence as well:

The undying earthworm, one week after an unfortunate encounter with a shovel blade.

After I had verified its indisputable vivacity, I replaced its loam and this time interred the jar and worm (not a burial in the funerary sense, but rather a return, in earthworm terms, to the land of the living) and waited two more weeks.

On the second checkup, the amazing annelid was nowhere to be found, and I presumed it to have returned to the great underground, to live tilling the soil till, tilling, it could till no more. While this experience put me in awe of the regenerative abilities of earthworms (on top of their amazing work ethic and indispensable role in soil health), it has also reduced my reservations about digging in the soil for a good purpose, even though digging inevitably carries the risk of accidentally disturbing or dissecting them.

Disturbing the ground is often an unavoidable part of essential building projects, such as building a house. Digging in the ground is often the only way to provide a firm, level foundation for a building. And for us hobbit-hole enthusiasts, it is absolutely unavoidable. This experience has added weight to my conviction that Tolkien’s hobbits were far more ethically, environmentally, and technologically advanced than we are in terms of housing. I think it is far better to temporarily disturb a hundred earthworms (most of which live in topsoil, anyway) and build an underground house than to condemn a pregnant strip of land to be the bearer of a barren building for years. Even if worms are harmed in the process, most of them will regenerate; it seems they were designed to endure this kind of thing.



Radio Astronomy

Radio astronomy is a way of looking at extraterrestrial objects using the radio waves that they emit or reflect.

Radio waves are part of what is known as the electromagnetic spectrum. The electromagnetic spectrum contains all the different frequencies of electromagnetic radiation, of which visible light is a small part, specifically 4×10^14 to 7×10^14 hertz (7.5×10^-7 to 4.3×10^-7 meters). As you dip below the range of visible light, you get into the range of infrared light. Infrared light is the range of the electromagnetic spectrum between 4×10^14 hertz (7.5×10^-7 meters) and 10^13 hertz (3×10^-5 meters). Once below infrared, there is microwave radiation. Microwaves continue from 10^13 to 3×10^11 hertz (3×10^-5 to 1×10^-3 meters). Any electromagnetic radiation with a frequency below 3×10^11 hertz is considered radio waves. Radio waves have a wavelength of at least 1 millimeter (“The EM Spectrum”).
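All of these frequency-to-wavelength conversions follow from one relation: wavelength = c / frequency. Here is a minimal Python sketch (the function name is my own) that reproduces the band boundaries above:

```python
C = 299_792_458  # speed of light in meters per second

def wavelength_m(frequency_hz: float) -> float:
    """Convert a frequency in hertz to a wavelength in meters."""
    return C / frequency_hz

# The red end of visible light, ~4e14 Hz, is about 750 nm:
print(wavelength_m(4e14))   # roughly 7.5e-7 m
# Anything below 3e11 Hz has a wavelength of at least ~1 mm, i.e. radio:
print(wavelength_m(3e11))   # roughly 1e-3 m
```

Running it confirms that the visible, infrared, microwave, and radio boundaries quoted above are consistent with each other.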

The first form of electromagnetic radiation outside of the visible spectrum was discovered in 1800 by Sir William Herschel. He performed an experiment in which he separated light with a prism. He then placed thermometers in each distinct color of light, including one just beyond each end of the visible spectrum. The thermometer placed beyond the red end measured the highest temperature, though it seemed to sit wholly outside of the light. It was this experiment that first showed the existence of infrared radiation (NASA).

The theory of electromagnetism was proposed in 1873 by the Scottish physicist James Clerk Maxwell. He determined that there were four main electromagnetic interactions. The first is that the force of attraction or repulsion between electric charges is inversely proportional to the square of the distance between them. The second is that magnetic poles come in pairs that attract opposite poles and repel like poles. The third is that an electric current in a conductor produces a magnetic field, with the poles of the field determined by the direction of the current. The fourth is that moving an electric field relative to a conductor produces a magnetic field, and vice versa (Lucas, “Electromagnetic Radiation”).

Maxwell’s theory of electromagnetism predicted the existence of radio waves, and in 1886, Heinrich Hertz, a German physicist, used Maxwell’s theories to produce and receive radio waves. Hertz used an induction coil and a Leyden jar to produce the radio waves he detected. He was the first person to transmit and receive radio waves, and the basic SI unit of frequency, the hertz, defined as one electromagnetic cycle per second, was named in his honor (Lucas, “Radio Waves”).

Sir Oliver Lodge was the first person to attempt to detect radio waves from an extraterrestrial source. He was conducting experiments on the propagation and reception of electromagnetic waves, and he went on to perfect the “coherer,” a detector based on the work of the French physicist Édouard Branly, who showed that loose metal filings inside a glass tube cling together and conduct electricity when exposed to electromagnetic waves. Lodge added a “trembler” that shook the filings loose again after each signal, allowing the device to detect Morse code carried by electromagnetic radiation (Britannica). During his experiments with electromagnetic radiation, Lodge considered the Sun as a possible source of radio waves. He attempted to detect them at centimeter wavelengths, and though he failed, he was still the first to try.

Many attempts were made to detect radio waves from extraterrestrial sources, but none were successful until 1931. Karl Jansky, while working for Bell Laboratories, tried to determine the source of radio interference present at around 20 MHz. Jansky built a large steerable antenna tuned to a wavelength of 14.6 meters, a frequency of about 20.5 MHz. The antenna allowed him to locate the sources of static. He found three distinct sources: local thunderstorms, distant thunderstorms, and a steady hiss of static coming from the center of the Milky Way galaxy, as he showed using the source’s position in the sky (“UREI-Radio Astronomy Tutorial-Sec.4”).

Jansky’s findings were largely ignored for quite a while. It wasn’t until 1937, when Grote Reber read Jansky’s work, that there were any major advancements. Reber was an electronics engineer and an avid radio amateur. After reading Jansky’s work, Reber built the first true radio telescope, to which most radio telescopes today are very similar. It was a 9.5 meter parabolic reflector dish that he built alone in his backyard. He spent years studying radio emissions at varying wavelengths until he detected celestial emissions at a wavelength of 2 meters. Reber confirmed that the emissions came from the galactic plane and continued to observe various radio sources until, in 1944, he published the first radio frequency sky maps. His telescope is still on display today at the Green Bank Observatory in Green Bank, West Virginia (“UREI-Radio Astronomy Tutorial-Sec.4”).

The first radio emissions from the Sun were discovered in 1942 by J.S. Hey, who was working with the British Army Operational Research Group to analyze occurrences of radio jamming of Army radar sets. A system for observing and recording potential jamming signals led Hey to conclude that the Sun was emitting intense radio waves. Later that same year, G.C. Southworth made the first successful observations of thermal radio emission from the Sun at centimeter wavelengths, frequencies of roughly 3 to 30 GHz (“UREI-Radio Astronomy Tutorial-Sec.4”).

The next major discovery came in 1963, when Bell Laboratories assigned Arno Penzias and Robert Wilson the task of tracing a radio noise that was interfering with the development of communication satellites. They discovered that no matter what direction they pointed the antenna, it would always receive the interference, even where the sky was visibly empty. They had discovered the cosmic background radiation, an ambient radiation that is always present. It is now widely believed that the cosmic background radiation is energy left over from the creation of the universe. Penzias and Wilson went on to win the Nobel Prize for their discovery in 1978 (“UREI-Radio Astronomy Tutorial-Sec.4”).

A typical radio telescope consists of several different parts: the antenna, the reflector dish, the amplifier, and the computer that receives, records, and processes the data. The dish is used to reflect and focus the radio waves onto the antenna. The antenna, at the focal point of the dish, receives the radio waves and converts them to electricity. The resulting electrical signals are sent to the amplifier, which, as the name implies, amplifies the signal. Once the signal has been amplified, it is usually sent to a computer where it is processed and recorded. Many telescopes are also mounted on a base that allows for movement, so they can be aimed at different parts of the sky (“Design of a Radio Telescope”).

The structure of radio telescopes hasn’t changed much since the one Reber built. Perhaps the largest advancement is the telescope array. An array is many smaller radio telescopes linked together. The multiple telescopes create an effect similar to a single telescope with a dish as large as the whole array: because each telescope views the sky from a different position, together they receive the same signals that a single dish extending as far as the array would. Building a comparably large single-dish telescope poses many problems; it would require an immense amount of material and would have difficulty supporting its own weight. An array has some drawbacks of its own: its light-gathering ability is worse than that of a single-dish telescope of the same size, since light-gathering ability is determined by the total area covered. On the other hand, many smaller telescopes are much easier to steer than one large one.
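The trade-off can be made concrete with the standard diffraction-limit rule of thumb, θ ≈ 1.22 λ/D, where D is the dish diameter or, for an array, its longest baseline. This formula is textbook optics rather than anything from the sources above, and the dish sizes below are round, illustrative numbers:

```python
import math

def resolution_arcsec(wavelength_m: float, diameter_m: float) -> float:
    """Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D, in arcseconds."""
    return math.degrees(1.22 * wavelength_m / diameter_m) * 3600

# Observing the 21 cm hydrogen line:
single_dish = resolution_arcsec(0.21, 25.0)     # one 25 m dish: very coarse detail
array = resolution_arcsec(0.21, 36_000.0)       # a 36 km baseline: ~1440x sharper

# Sensitivity, however, scales with total collecting area, not baseline:
area_array = 27 * math.pi * (25.0 / 2) ** 2     # 27 dishes of 25 m each
area_huge_dish = math.pi * (36_000.0 / 2) ** 2  # a hypothetical solid 36 km dish
print(area_huge_dish / area_array)              # the solid dish would gather far more
```

The array matches the big dish’s resolution but collects only a tiny fraction of the light it would, which is exactly the drawback described above.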

One example of a telescope array is the Very Large Array, or VLA, in New Mexico. It is composed of 28 separate dishes and antennas, 27 of which operate at any one time. Each dish is 82 feet wide, has 8 receivers, and is fully steerable. The dishes are arranged in a “Y” shape that varies in size, because the telescopes are all on rails, making them mobile so that the size of the array can be adjusted. The array is half a mile to 23 miles across, depending on where the telescopes sit on the rails (“Very Large Array”). It operates at frequencies from 1 to 50 GHz. The surfaces of the dishes are made of aluminum panels. The VLA has made and contributed to many interesting discoveries, such as ice on Mercury, the study of black holes, and the center of our galaxy. It also observed an effect predicted by Einstein: Einstein rings, rings of electromagnetic radiation that have been distorted by the intense gravity of massive objects such as black holes.

One of the largest single-dish telescopes is the Arecibo Observatory in Puerto Rico. Officially the National Astronomy and Ionosphere Center (NAIC), its construction began in 1960 in Arecibo, Puerto Rico. It is built into a natural sinkhole and has a 305 meter wide reflector dish. The antenna is suspended above the dish by cables and can be moved to track different parts of the sky. The antenna of the telescope is also unusual: it is a dome containing multiple reflecting dishes that further focus the radio waves. This is done so that the antenna can be mobile, to aim the telescope, since the dish itself is stationary. The Arecibo telescope discovered the first extrasolar planets. It was also used to produce detailed radar maps of the surfaces of Venus and Mercury, and it discovered the first binary pulsar.

The EHT (Event Horizon Telescope) array covers the largest distance of any array. It is composed of 10 different telescopes and arrays around the world. Due to the spread of its components, it has an effective aperture the size of the Earth, which gives it the highest angular resolution possible from the Earth’s surface. The Atacama Pathfinder Experiment (APEX), a collaboration between the Max Planck Institut für Radioastronomie (MPIfR), the Onsala Space Observatory (OSO), and the European Southern Observatory (ESO), is one part of the EHT array. The 30-meter telescope on Pico Veleta in the Spanish Sierra Nevada, the James Clerk Maxwell Telescope operated by the East Asian Observatory, the Large Millimeter Telescope Alfonso Serrano built on the summit of Volcán Sierra Negra, the Submillimeter Array (SMA) near the summit of Mauna Kea in Hawaii, the Atacama Large Millimeter Array (ALMA) in Chile, and the South Pole Telescope (SPT) all make up the worldwide EHT array (“Array”).
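To see what an Earth-sized effective aperture buys, apply the same diffraction-limit rule of thumb, θ ≈ 1.22 λ/D. The EHT observes at a wavelength of about 1.3 mm, and the Earth’s mean diameter is about 12,742 km; both are common reference values rather than figures from the sources above:

```python
import math

EARTH_DIAMETER_M = 12_742_000  # mean diameter of the Earth in meters
WAVELENGTH_M = 1.3e-3          # EHT's ~1.3 mm observing wavelength

# Diffraction limit in radians, then converted to microarcseconds:
theta_rad = 1.22 * WAVELENGTH_M / EARTH_DIAMETER_M
theta_uas = math.degrees(theta_rad) * 3600 * 1e6
print(theta_uas)  # a few tens of microarcseconds
```

A resolution of a few tens of microarcseconds is what makes imaging a black hole’s silhouette possible at all.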

The EHT array has been used to monitor black holes. It has watched Sgr A*, the black hole at the center of the Milky Way, and M87*, the black hole at the center of the galaxy M87 (Virgo A). It has been used to test many theories about the mass of black holes, the event horizon, gravitational lensing, and possible emissions. The silhouettes of the black holes matched predictions. In 2019, the EHT released the first direct picture of a black hole’s silhouette, that of M87*.

Radio telescopes have made many discoveries and observations. One such observation is the imaging of asteroids. Scott Hudson and Steven Ostro were the first people to image an asteroid: they imaged the peanut-shaped asteroid 4769 Castalia using the Arecibo Observatory in 1989.

Another discovery is that of millisecond pulsars. The discovery was made in 1982 by Donald C. Backer, Miller Goss, Michael Davis, Carl Heiles, and Shrinivas Kulkarni at the Arecibo telescope. A millisecond pulsar is one with a very fast rotational period. The one discovered is known as PSR B1937+21, and it spins about 641 times a second. Since then, nearly 200 more have been found (Maggio).

One of the many applications for radio astronomy has been in the search for extraterrestrial intelligence. The term most commonly used for scientific pursuits of extraterrestrial life is SETI (Search for ExtraTerrestrial Intelligence). SETI has involved many arrays and telescopes around the world. There are also several projects that allow people to help observe the sky and donate idle processing power to analyzing the radio signals. SETI has not found any definite signs of other intelligent life, but it has detected signals that may indicate it, such as the Wow! signal. The Wow! signal was an unusually strong, unexplained burst of electromagnetic radiation, just what we might expect if another planet were sending out radio bursts in the hope that someone would detect them, as we do.

Mercury’s rotational period was determined by radio telescopes. Using the Arecibo telescope, in 1964, Pettengill found that Mercury actually completes a full rotation every 59 days. It had long been thought to be tidally locked, making its rotational period the same as its orbital period, 88 days (Maggio).

Binary pulsars were also discovered through radio astronomy; the first was found in 1974 by Russell Hulse and Joseph Taylor at Arecibo. A binary pulsar is a pulsar with a white dwarf or neutron star orbiting it (Maggio).

In 2008, Arecibo was used to detect organic molecules in a starburst galaxy, one forming stars so rapidly that it consumes its available gas much more quickly than normal, 250 million light-years from Earth. Methanimine, a carbon-based molecule (CH3N), and hydrogen cyanide (HCN) were discovered in the starburst galaxy Arp 220, which lies in the constellation Serpens. The discovery of organic molecules is very important for the possibility of life in other solar systems (Maggio).

Another very important discovery of radio telescopes is that of exoplanets. An exoplanet is any planet that exists outside of our solar system. On January 9, 1992, the astronomers Alex Wolszczan and Dale Frail, using the Arecibo telescope, discovered exoplanets orbiting a pulsar named PSR 1257+12. It is 2,300 light-years away in the constellation Virgo (“Top Astronomical Discoveries”).

Radio astronomy has made many interesting discoveries and given rise to many theories. It has been used for many different purposes, from photographing the event horizon of black holes to searching for extraterrestrial life. It will likely continue to be as enlightening and interesting in the future as it has been for the past 90 years.


Works Cited


“The EM Spectrum.” EM spectrum.html.

“A Brief History of Radio Astronomy.”

“UREI-Radio Astronomy Tutorial-Sec.4.”

“Design of a Radio Telescope.”

“Array.” Event Horizon Telescope.

Britannica, The Editors of Encyclopaedia. “Sir Oliver Joseph Lodge.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 8 June 2019.

Lucas, Jim. “What Are Radio Waves?” LiveScience, Purch, 27 Feb. 2019.

Lucas, Jim. “What Is Electromagnetic Radiation?” LiveScience, Purch, 12 Mar. 2015.

Maggio, Patricia K. “The Top Five Discoveries Made by Radio Telescopes.” Sciencing, 2 Mar. 2019, radio-telescopes-7566858.html.

“Top Astronomical Discoveries Made by Radio Telescopes.” 7 Nov. 2018, discoveries-made-by-radio-telescopes/.

“Very Large Array.” National Radio Astronomy Observatory.

The Uncertain Future of Nuclear Power

Here is another paper on nuclear power that I wrote for my ENGR 140 class in April of 2019. I cheated slightly and reused most of the content from my nuclear power essay, “Nuclear Power Subsidies: Are They Worth It”, which I wrote for my English 102 class around the same time. So there is only a little new content in this one, but I thought I’d share it anyway. Any critiques would be much appreciated.

Nuclear power is a huge industry comprised of about 450 nuclear reactors around the world that together supply eleven percent of global electricity demand (“Nuclear Power”). However, this vast enterprise that the world has come to depend on was not always here. In fact, nuclear power is a relatively recent development compared to other energy sources. The first nuclear reactor to deliver electricity to the power grid began operating in the Soviet city of Obninsk in July of 1954. It was a five megawatt electric (MWe) reactor called AM-1, standing for Atom Mirny (Peaceful Atom). The name alludes to its peaceful use as a power generator, as opposed to the weapons-focused atmosphere under which nuclear power was developed. The year before it was built, US President Dwight D. Eisenhower had enacted a program he called “Atoms for Peace,” the purpose of which was to reallocate research funds from weapons development to electricity production. It bore its first fruit in 1960 with the Yankee Rowe, a 250 MWe pressurized water reactor (PWR). Throughout the 1960s, Canada, France, and the Soviet Union developed their own nuclear reactor systems: Canada devised a unique reactor type called CANDU; France began with a type similar to the Magnox design developed in Britain in 1956, but then settled on the PWR design; and the Soviet Union built two small reactors, a PWR and another type called a boiling water reactor (BWR). Britain later settled on the PWR design as well. By the end of the 1960s, the US was manufacturing 1,000 MWe reactors of PWR and BWR design. The Soviet Union lagged a little, but by 1973 it had developed its first high power channel reactor (RBMK) rated at 440 MWe, a design that was later superseded by a 1000 MWe design. Kazakhstan and Russia went on a brief tangent developing so-called fast neutron reactors, but other countries almost invariably embraced the light water reactor (LWR) design, which includes the BWR and PWR designs (“History of Nuclear Energy”).

Despite all of this growth, however, during the period from 1970 to 2002 nuclear power underwent a “brown out” in which demand for reactors declined and previous orders were cancelled (“History of Nuclear Energy”). During this period, the Chernobyl nuclear disaster struck. However, the stagnation did not last forever, and by the 1990s nuclear power was blossoming again, beginning with the Japanese Kashiwazaki-Kariwa 6, a 1350 MWe Advanced BWR. In the 2000s, some new reactors were built in Europe and North America, but the bulk of construction occurred, and is ongoing, in Asia (“History of Nuclear Energy”).

This revival in nuclear power can be attributed to several factors: policy makers around the world began searching for a sustainable, reliable, low-cost, low-carbon, and secure power source to address the global energy crisis (Sovacool, “Second Thoughts” 3) and provide their countries with energy security (“History of Nuclear Energy”). These considerations continue to the present; because of concerns about conventional energy sources like coal and oil, the need to stabilize harmfully volatile energy prices, and a growing fear of global warming, nuclear power is being considered as a possible answer to global energy exigencies (Sovacool, “Second Thoughts” 3). These ideas have garnered significant support, as an effective and well-organized effort to increase public investment in nuclear power to unprecedented levels is ongoing (Koplow 11). Nuclear power is receiving strong support from organizations like the US Department of Energy (DOE), the International Atomic Energy Agency (IAEA), and the International Energy Agency (IEA) (Sovacool, “Second Thoughts” 3). All of these policy makers seek great benefit for society from nuclear power, but it is worth considering its drawbacks, too.

Since the development of nuclear power, there have been many tragic accidents. In fact, the most recent accident was one of the worst in history. A 15-metre tsunami caused by a magnitude 9 earthquake off the eastern coast of Honshu shut down power and cooling to three reactors at the Fukushima Daiichi nuclear site on March 11, 2011, causing all three reactor cores to melt down over the next three days and release about 940 PBq of radioactivity, earning the accident a 7 on the International Nuclear Event Scale (INES). Over 100,000 people were evacuated from the area, of whom about 1,000 died as a result of the extended evacuation (“Fukushima”). According to the Swiss bank UBS, this catastrophe was more damaging to the reputation of nuclear power than the more severe Chernobyl nuclear disaster in 1986, because it occurred in Japan, a highly developed economy (Paton 2011). Even Japan was unable to control the inherently risky nature of nuclear power.

The processes that occur inside of a nuclear reactor are the same as those that operate inside an atomic bomb, only slower and, usually, more controlled. While everything is designed to function nicely under ideal conditions with some margin for error, sometimes reactors are struck with more than they can bear: natural disasters such as earthquakes and tsunamis. When this happens, the reactions in the reactors can “run away” with devastating results. A report by the Guardian shortly after the 2011 meltdown in Fukushima, Japan, one of the worst nuclear disasters in history, counted thirty-four nuclear and radioactive accidents and incidents since 1952, the year the first one occurred (“Nuclear Power Plant Accidents”). Later in 2011, another incident occurred (“Factbox”), this time in France, bringing that total to thirty-five. A 2010 estimate by energy expert Benjamin K. Sovacool placed the number at ninety-nine, but he used different criteria; Sovacool expanded the definition of a nuclear incident to anything that causes property damage in excess of 50,000 USD (“A Critical Evaluation of Nuclear Power”). He estimated that these incidents’ total costs in property damage exceeded twenty billion USD, and this was before the extremely expensive Fukushima incident that occurred a year later. Even with all of these accidents, however, some still argue that nuclear power is one of the safest forms of power generation.

The World Nuclear Association defends the nuclear industry with the following statistics: only three major accidents have occurred in over 17,000 collective reactor-years of nuclear plant operation; a terrorist attack via airplane would be ineffective; few deaths would be caused by reactor failure of any magnitude; and other energy sources cause many more deaths, with fatalities per TWy for coal (597), natural gas (111), and hydro (10,285) all dwarfing the figure for nuclear (48) (“Safety of Nuclear Reactors”). However, this does not consider several important factors. Thousands have been killed either as a direct result of conditions caused by the incidents or indirectly as a result of evacuations (“Fukushima”). Another factor that some ignore or underestimate is that very significant numbers of cancer deaths have been attributed to nuclear accidents, not including unquantified irradiation from regularly produced nuclear wastes (Sovacool et al.); little is known about fuel cycle safety (Beckjord et al. ix). Nuclear weapons expert Lisbeth Gronlund has estimated, with ninety-five percent confidence, 12,000 to 57,000 cancer deaths from the Chernobyl accident alone (Gronlund). It is clear, then, that nuclear power has had a much greater negative impact on society than many suppose.

In addition to the risk of irradiation, there is the unique risk of nuclear proliferation. Several countries have successfully managed to covertly advance their nuclear weapons programs behind a clever guise of nuclear power (Sovacool et al.). Because of this, nuclear power will always require government oversight of waste management and proliferation risks. According to the authors of a 2003 interdisciplinary MIT study, if the nuclear industry is to expand, new international safety guidelines will be needed to overcome proliferation risks (Beckjord et al. ix).

These same authors of the MIT study suggest that a once-through fuel cycle, in which a significant amount of fissionable (although expensive to recover) uranium is wasted, is the best option in terms of cost, proliferation risk, and fuel cycle safety. The only disadvantages, they say, are in long-term fuel disposal and resource preservation considerations (Beckjord et al. 4-5). However, this, as the authors admit, leads to more toxic waste and a faster consumption of limited resources. They propose a model in which, by the year 2050, 1,000 new LWR reactors will be built to help displace CO2 emissions from dirtier forms of energy production. They are willing to accept the predicted four nuclear accidents that would occur during this expansion (Beckjord et al.). To consider what the full impact of such a proposition would be on society, it is worth looking back at all of the impacts of the current nuclear industry.

The nuclear power industry has several other ill effects, social and environmental, not previously discussed. First of all, it causes direct environmental damage: it kills much wildlife through water filtration and contamination (Sovacool, “Second Thoughts” 6), and it creates such damage at certain sites that environmental remediation expenses sometimes exceed the value of the ore extracted at uranium mills (Koplow 6). It also causes indirect environmental damage by contributing to global warming.

Nuclear power produces significant carbon dioxide emissions. This is contrary to the claims of some that nuclear power is carbon-free (Beckjord et al. 2). This common misconception arises from the fact that nuclear reactors themselves produce no emissions, but including the entire nuclear fuel cycle in the calculations exposes it. One estimate ranked greenhouse gas (GHG) emissions for power plants per unit of electricity generated, from highest to lowest: industrial gas, lignite, hard coal, oil, natural gas, biomass, photovoltaic, wind, nuclear, and hydroelectric. Although hydroelectric is ranked at the bottom, emissions from hydroelectric plants operated in tropical regions can be 5 to 20 times higher than in temperate regions, putting them on the same level as biomass (Dones et al. 38). In this list nuclear is ranked second best, but later estimates contest this figure: Sovacool analyzed 103 studies of nuclear power plant GHG-equivalent emissions for currency, originality, and transparency. He found that the mean value for nuclear power plants is 66 gCO2e/kWh, placing nuclear power above all renewables (“Greenhouse Gas Emissions” 1). And the prospects for nuclear power emissions will only get worse as time goes on. The quality of the ore used can greatly skew estimates of GHG emissions (Storm van Leeuwen). As high-quality uranium reserves are depleted, the nuclear power industry will be forced to turn to lower and lower grades of uranium ore, which will drastically increase the energy required to produce fissionable nuclear fuel. Since refinement processes are powered by fossil fuels, this will also significantly increase the carbon footprint of nuclear power. Thus, emissions from the nuclear fuel cycle will match those of combined-cycle gas-fired power plants in only a few decades. Although advanced fast-breeder or thorium reactors could potentially reduce this problem, they are not likely to be commercially available for at least a couple of decades. This, combined with the long deployment times for nuclear reactors, effectively proves that nuclear power is not a viable long-term solution for reducing carbon dioxide emissions (Diesendorf 8-11). Nuclear power is a significant environmental burden, but it is also a significant social burden.

            From its beginnings, the nuclear power industry received heavy subsidies from governments, meaning the general public was forced to fund this private industry. On September 2, 1957, the Price-Anderson Act, designed to limit the liability incurred by nuclear power plant licensees from possible damages to members of the public, attained force of law in the United States. This first of nuclear power subsidies was intended to attract private investment into the nuclear industry by shifting liability for nuclear accidents from private investors to the public, making it pay for damages incurred upon itself. (“Backgrounder on Nuclear Insurance”). Since then, subsidies in various forms have continued. One form of subsidy that the nuclear industry receives comes in the form of research and development funds. Out of the total energy research and development fund of International Energy Agency (IEA) member countries of about 12.7 billion USD, nuclear power received twenty percent in 2015; one third the 1975 figure (“Energy Subsidies”). Other subsidies can be categorized as follows: output-linked, production factors, risk and security management, intermediate inputs, and emissions and waste management. Output-linked subsidies grant financing based on power produced. Subsidies to production factors help cover construction costs. Risk and security management subsidies shift liability for accidents either to consumers or the government. Intermediate input subsidies lower the cost of obtaining resources necessary for generating power such as fuel and coolants. Emissions and waste management subsidies either eliminate or reduce the cost of waste disposal. (Koplow 12-13). 
Examples of such subsidies can be found in every aspect of nuclear power: federal loan guarantees, accelerated depreciation, subsidized borrowing costs for publicly owned reactors, construction work in progress (CWIP) surcharges to consumers, property tax reductions, subsidized fuel, loan guarantees for enrichment facilities, priority access to cooling water for little to no cost, no responsibility to cover the costs of potential terrorist attacks, ignored proliferation costs, and lowered tax rates on decommissioning trust funds (Koplow 5-8). Governments have also covered the 70 billion dollars needed to defray excess capital costs of nuclear power plants in recent years. Furthermore, the nuclear industry is not forced to pay the true cost of its numerous accidents, some of which have been estimated to exceed 100 billion dollars (Bradford 14). If the projected model from the MIT study of 1,000 new reactors by 2050 becomes reality, the immense cost of the nuclear industry to taxpayers will only increase proportionally.

            However, there are doubts about whether this plan is even feasible. According to a 2003 interdisciplinary MIT study, there is enough uranium to fuel 1,000 new reactors for forty years (Beckjord et al. 4), leaving no sustainability issues for nuclear power in the near future. However, this figure omits an important factor: the quality of the uranium. As uranium ore quality decreases, the energy cost of extraction increases exponentially (Storm van Leeuwen 23). Factoring in uranium quality, energy expert Benjamin K. Sovacool estimated that global uranium reserves could sustain only a two percent increase in nuclear power production and would be exhausted after a mere seventy years (“Second Thoughts” 6). Furthermore, the most economical uranium ore deposits have already been discovered, and nearly all are currently being mined; for geologic reasons, comparable new deposits are very unlikely to be found (Storm van Leeuwen 71). Additionally, nuclear power may become non-viable if climate change significantly increases water demand, since according to energy researcher Doug Koplow, nuclear power is “the most water-intensive large-scale thermal energy technology in use” (7). Another sustainability concern for nuclear power plants is waste disposal. The health and environmental risks posed by spent fuel from nuclear reactors last for tens of thousands of years (Beckjord et al. 22). To date, even the authors of the favorable MIT study admit that “no nation has successfully demonstrated a disposal system for these nuclear wastes” (22). If this is the case, how can the world expect to support an over 200 percent increase in the number of nuclear reactors?
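Sovacool's seventy-year horizon can be sanity-checked with simple compound-growth arithmetic. The figures below are hypothetical placeholders (a stock equal to 150 years of current annual consumption), not numbers from the cited sources; the point is only that even two percent annual growth empties a fixed stock far faster than a constant-consumption estimate would suggest.

```python
def years_until_exhausted(reserves, annual_use, growth=0.02):
    """Count whole years until cumulative consumption, growing at
    `growth` per year, uses up a fixed stock of `reserves`."""
    used, years = 0.0, 0
    while used < reserves:
        used += annual_use          # consume this year's amount
        annual_use *= 1 + growth    # demand grows for next year
        years += 1
    return years

# A hypothetical stock equal to 150 years of current consumption:
print(years_until_exhausted(reserves=150.0, annual_use=1.0))  # -> 71
```

At a constant rate the same stock would last 150 years; two percent growth cuts that roughly in half, which is the shape of Sovacool's argument.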

            In short, although nuclear power experienced intense growth from the 1950s and 60s into the beginning of the 21st century, when its true costs are considered, it seems to have done more harm than good. It has proven unsafe in many ways: the numerous accidents have caused great direct property and environmental damage and indirectly caused thousands of fatalities, not to mention the risks of nuclear proliferation and the day-to-day environmental destruction involved in running nuclear power plants. The authors of the MIT study sum it up well: if new technological solutions to current problems are not developed, “nuclear power faces stagnation and decline” (ix), as it did in the last quarter of the 20th century. Thus, nuclear power is unlikely to play a significant long-term role in power generation, at least not in its current state.

Works Cited

“Backgrounder on Nuclear Insurance and Disaster Relief.” United States Nuclear Regulatory Commission, January 17, 2018. Accessed April 26, 2019.

Beckjord, Eric S., et al. “The Future of Nuclear Power.” Massachusetts Institute of Technology, 2003. Accessed April 25, 2019.

Bradford, Peter A. “Wasting Time: Subsidies, Operating Reactors, and Melting Ice.” Bulletin of the Atomic Scientists, vol. 73, no. 1, Jan. 2017, pp. 13–16. EBSCOhost, doi:10.1080/00963402.2016.1264207.

Diesendorf, Mark. “Is Nuclear Energy a Possible Solution to Global Warming?” Social Alternatives, vol. 26, no. 2, Second Quarter 2007, pp. 8–11. EBSCOhost. Accessed April 26, 2019.

Dones, R., T. Heck, and S. Hirschberg. “Greenhouse Gas Emissions From Energy Systems: Comparison and Overview.” Paul Scherrer Institute, 2004. Accessed April 25, 2019.

“Energy Subsidies.” World Nuclear Association, February 2018, aspects/energy-subsidies.aspx. Accessed April 25, 2019.

“Factbox: A brief history of French nuclear accidents.” Reuters, September 12, 2011, accidents/factbox-a-brief-history-of-french-nuclear-accidents-idUSTRE78B59J20110912. Accessed April 27, 2019.

“Fukushima Daiichi Accident.” World Nuclear Association, October 2018, security/safety-of-plants/fukushima-accident.aspx. Accessed April 25, 2019.

Gronlund, Lisbeth. “How Many Cancers Did Chernobyl Really Cause?—Updated Version.” Union of Concerned Scientists, April 17, 2011. Accessed April 27, 2019.

“History of Nuclear Energy.” World Nuclear Association, April 2019, generation/outline-history-of-nuclear-energy.aspx. Accessed April 25, 2019.

Koplow, Doug. “Nuclear Power: Still Not Viable without Subsidies.” Union of Concerned Scientists, February 2011. Accessed April 26, 2019.

“Levelized Cost and Levelized Avoided Cost of New Generation Resources in the Annual Energy Outlook 2019.” Energy Information Agency, February 2019. Accessed April 25, 2019.

“Nuclear Power in the World Today.” World Nuclear Association, February 2019, library/current-and-future-generation/nuclear-power-in-the-world-today.aspx. Accessed April 25, 2019.

“Nuclear power plant accidents: listed and ranked since 1952.” The Guardian, 2011. Accessed April 27, 2019.

Paton, James. “Fukushima Crisis Worse for Atomic Power Than Chernobyl, UBS Says.” Bloomberg, April 4, 2011. Accessed April 26, 2019.

“Safety of Nuclear Reactors.” World Nuclear Association, May 2018, security/safety-of-plants/safety-of-nuclear-power-reactors.aspx. Accessed April 26, 2019.

Sovacool, Benjamin K. “A Critical Evaluation of Nuclear Power and Renewable Electricity in Asia.” Journal of Contemporary Asia, vol. 40, no. 3, Aug. 2010, pp. 369–400. EBSCOhost, doi:10.1080/00472331003798350. Accessed April 26, 2019.

—. “Second Thoughts About Nuclear Power.” Research Support Unit (RSU), Lee Kuan Yew School of Public Policy, National University of Singapore, January 2011, 2nd_Thought_Nuclear-Sovacool.pdf. Accessed April 26, 2019.

—. “Valuing the greenhouse gas emissions from nuclear power: A critical survey.” Energy Policy, 2008, content/uploads/climate/background/sovacool_nuclear_ghg.pdf. Accessed April 25, 2019.

Storm van Leeuwen, Jan Willem. “Nuclear Power: The Energy Balance. Part D: Uranium.” Ceedata Consultancy, October 2007. Accessed April 25, 2019.

The Ancient History of Antibiotics

Here is a paper I wrote in February of 2018 about the little-known ancient history of antibiotics. When most people think about the history of antibiotics, they assume antibiotics were invented only about a century ago, but they actually go back thousands of years. I hope you enjoy learning about the ancient history of antibiotics; let me know what you think, and if you have any questions or ideas for further research, we’d appreciate a comment in the forum.

Abram Leyzorek


The Ancient History of Antibiotics

            Names like Alexander Fleming and Paul Ehrlich come to mind when we think of the origin of antibiotics. It was Alexander Fleming’s accidental discovery in 1928 of penicillin, the antibiotic produced by a mold, that seemed, at the time, to overcome humanity’s age-old adversary, the bacterial infection. Paul Ehrlich’s research led to an effective antibiotic treatment for syphilis. These pioneers ushered in the modern age of antibiotics that has saved countless human lives. (1).

            Little is known, however, about an even earlier use of antibiotics by Emmerich and Low in 1899. They used a compound known as pyocyanase, derived from Pseudomonas aeruginosa, in an attempt to cure various bacterial diseases, but soon found the substance unfeasible due to excessive toxicity. (1).

            Even less is known about certain discoveries showing the use of natural antibiotics by ancient humans. Tetracycline has been found in the bones of Sudanese Nubians dating back to 350-550 AD, and late-Roman skeletons from the Dakhleh Oasis, Egypt, were found to contain markers indicating regular intake of tetracycline. The red soils of Jordan have long been used as a cheap alternative to prescribed antibiotics for treating skin infections, and were found to contain an actinomycete1 bacterium that produces the antibiotics actinomycin C2 and actinomycin C3. Many herbs in the tradition of Chinese medicine have antimicrobial2 properties, including the artemisias, or mugworts, from which a potent anti-malarial drug, qinghaosu or artemisinin, was extracted. (1).

            This tradition goes back thousands of years, but the arms race between bacteria and antibiotics has been ongoing for millions of years, long before humans joined in. The phylogeny3 of certain genes for resistance against natural antibiotics reveals that they developed long ago; the serine and metallo-β-lactamase enzymes, for example, arose two billion years ago and have been present in plasmids4 for millions of years. (1).

            Modern humans are making big waves in the world of microbes with new, synthetic antibiotics, but they were not, as is commonly believed, the first organisms to use antibiotics. Pre-modern humans used antibiotics extensively, and before them, microorganisms secreted their own. The microbiota inside animals that consumed plants and soils containing natural antibiotics needed to develop resistance. (1).

            Humans are suffering today from antibiotic-resistant microorganisms, but we have only been mass-producing antibiotics for a biological blink-of-an-eye. Many of these resistances developed over the millions of years during which microorganisms were exposed to natural antibiotics. (1).

            But microorganisms are capable of extremely rapid mutation to evade our best efforts to exterminate them. Most of the antibiotics we have developed are already ineffective, and the golden age of new antibiotic discovery is long past. All of the “new” antibiotics developed today are actually just modifications of previous compounds. There is a delicate equilibrium between the rate at which humans develop “new” treatments and the rate at which microorganisms develop new resistances. Our rate of discovery seems to be slowing down; humans may be losing the battle. (1).

            But a new discovery may warrant new hope: researchers at the Rockefeller University in New York have discovered what could be a genuinely new class of antibiotics. They have called it malacidin, short for metagenomic5 acidic lipoprotein6 antibiotic cidin7. It was found in soil samples containing calcium-dependent genes; the researchers were searching for new treatments related to an exceptionally effective and long-lasting antibiotic called daptomycin, which uses calcium to rupture the cell membranes of bacteria. But, in the long run, the microorganisms will always mutate and we will need new treatments. (2).


1.  Actinomycetes are filamentous, rod-shaped bacteria of the order Actinomycetales. (3).

2.  Antimicrobial and antibiotic are essentially synonymous, and both have an adjectival form. But here, to avoid confusion, “antimicrobial” describes things that kill microbes, and “antibiotic” refers to drugs that do this. (4) (5).

3. Phylogeny is the evolutionary development of a species or higher taxonomic group. (6).

4.  A plasmid is a genetic cellular structure capable of replication independent from the chromosomes. (7).

5. Describes things associated with the metagenome, the collective genome of all the microorganisms in an environment. (8).

6. Proteins  that combine with and transport lipids in blood plasma. (9).

7.  Comes from the Latin root cid-, from caedere, meaning to cut down or kill. (10). In English it has come to mean death or killing, e.g. infanticide, herbicide, etc.


  1. Aminov, R. I. (2010). “A Brief History of the Antibiotic Era: Lessons Learned and Challenges for the Future.” NCBI. Date-accessed: 2/14/2018.
  2. Healy, M. (2018). “In soil-dwelling bacteria, scientists find a new weapon to fight drug-resistant superbugs.” Los Angeles Times. Date-accessed: 2/14/2018.
  3. “Actinomycetes.” (2018). Merriam Webster. Date-accessed: 2/14/2018.
  4. “Antibiotic.” (2018). Merriam Webster. Date-accessed: 2/14/2018.
  5. “Antimicrobial.” (2018). Merriam Webster. Date-accessed: 2/14/2018.
  6. “Phylogeny.” (2018). Merriam Webster. Date-accessed: 2/14/2018.
  7. “Plasmid.” (2018). Merriam Webster. Date-accessed: 2/14/2018.
  8. Hover, Bradley M. et al. (2018). “Metagenomics.” Nature. Date-accessed: 2/14/2018.
  9. “Lipoprotein.” (n.d.). Google dictionary. Date-accessed: 2/14/2018.
  10. Schodde, Carla. (2013). “Far too many Latin words for kill.” Found in Antiquity. Date-accessed: 2/14/2018.

Life in the Bathypelagic Zone

Here is a paper I wrote in March of 2018 about the outlandish life of the bathypelagic zone, part of the deep ocean known as the midnight zone, where pressures are extremely high and no light penetrates. I hope you enjoy learning about it; let me know what you think below, and comment in the forum with any questions.

Oceanographers divide the open sea into layers, drawing boundaries according to the distance that light penetrates through the ocean. The surface layer is known as the epipelagic zone, the sunlit zone, or the euphotic zone; it extends from the surface down to 200 meters, and photosynthesis is prevalent here thanks to the abundant sunlight. Below it lies the twilight zone, mesopelagic zone, or disphotic zone, extending from 200 to 1,000 meters below sea level. Here, little to no light can filter through; the quality of the lighting is eternal dusk, insufficient to support photosynthesis, but a new light source shines, known as bioluminescence. The next layer receives no sunlight whatsoever; it is called the aphotic zone or the midnight zone, and it is commonly divided into three sub-layers: the bathypelagic, abyssopelagic, and hadalpelagic zones (sometimes the bathypelagic zone by itself is called the aphotic zone or the midnight zone). Since no sunlight passes below 1,000 meters, these layers must be delineated on an alternative basis. The bathypelagic zone begins at the continental slope and extends past it, from about 1,000 to 4,000 meters below sea level. Below it, the abyssopelagic zone begins where the continental slope levels off, extending from approximately 4,000 to 6,000 meters. The lowest zone, the hadalpelagic zone, is the volume inside oceanic trenches, extending from around 6,000 meters to a maximum depth of 10,994 meters (2). (1).
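For the explicitly minded, the depth boundaries above can be collected into a small sketch. The zone names and limits come from this paragraph; the cutoffs are approximate, since real boundaries vary with water clarity and seafloor shape.

```python
# Approximate lower limits from the text, in meters below sea level.
ZONES = [
    (200, "epipelagic (sunlit)"),
    (1000, "mesopelagic (twilight)"),
    (4000, "bathypelagic (midnight)"),
    (6000, "abyssopelagic"),
    (10994, "hadalpelagic"),
]

def zone_at(depth_m):
    """Return the pelagic zone name for a depth given in meters."""
    for lower_limit, name in ZONES:
        if depth_m <= lower_limit:
            return name
    raise ValueError("deeper than the deepest known trench")

print(zone_at(150))   # -> epipelagic (sunlit)
print(zone_at(2500))  # -> bathypelagic (midnight)
```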

            Since light penetrates to varying depths in different areas according to the transparency of the water, boundaries determined by light penetration cannot be absolute or precise; in some tropical waters, for example, light can penetrate as far as 600 meters (3). The same degree of uncertainty applies to the aphotic zones, as the continental slope is not entirely uniform and oceanic trenches vary in depth (2).

            The particular focus of this paper is the bathypelagic zone. It is unique in several ways: there is no sunlight, pressure is very high (100-400 atm.), it is relatively cold, it has a high mineral and nutrient density, and conditions are constant. There is no wind or sunlight, and because deep sea water originates as dense polar water that sinks to the bottom and slowly flows across the ocean floor, it stays at a constant temperature. These extreme conditions have selected for some rather extreme adaptations, with the lack of sunlight having the greatest adaptive repercussions: since there is no sunlight, there is no photosynthesis, which means that almost no primary production occurs. All of the food in the bathypelagic zone comes in the form of organic particles drifting down from the layers above. This is only enough to support a very low population density; even though the bathypelagic zone accounts for ninety percent of the oceans’ volume, it has a very low population and biodiversity relative to the layers above, and both continue to decline as a function of depth. (4).

            What few organisms are supported by this organic snow are not over-abundantly supplied with sustenance; they have been forced to develop means of conserving energy and, in a world of darkness, of luring prey to them. Creatures of the deep, such as jellyfish and angler-fish, can often be found floating motionless. (2). Due to the cold temperatures, they have very low metabolic rates, which helps further conserve energy. Since they rarely move, they waste no energy forming streamlined bodies; they tend to be bulky and lumpy. Many are merely living lures; traps baited with light. (4).

            The light is produced by a phenomenon called bioluminescence, caused by a reaction between a molecule called luciferin and oxygen. Some animals that produce luciferin also produce luciferase, a catalyst that speeds up the reaction. An organism can control the intensity and color of its bioluminescence, as well as when it lights up. Some organisms borrow bioluminescence from glowing bacteria; they provide a favorable environment and the bacteria glow for them. (5).

            This is a tool used all throughout the animal kingdom, from insects to plankton to deep sea fishes and invertebrates. Even humans bioluminesce, although the glow is one thousand times fainter than would be visible and does not involve luciferin or serve any known purpose (6). Nor is bioluminescence limited to the deep sea; the phenomenon can be observed all throughout the water column, but it is especially common in the aphotic zone: about ninety percent of deep sea species can produce light. Bioluminescence can serve multiple purposes, from attracting mates to luring prey to startling predators. (5).

            Living in an area devoid of visibility, the organisms inhabiting the bathypelagic zone have developed alternative sensory techniques, enhanced old ones, and dropped others. Some organisms, like angler-fish, have long tentacles that act like feline whiskers to increase the distance at which they can detect predators and prey. (7). Another adaptation of the angler-fish is its extreme sexual dimorphism and sexual parasitism. The male is minute in comparison to the female and lacks her fishing rod and bioluminescence. He locates a female mainly using his enlarged nasal orifices, but perhaps also by her bioluminescence once he comes close enough; this is often how bioluminescence is used, to distinguish between the sexes. When he finds her, he bites into her underside and fuses with her body, sharing her blood in exchange for sperm. (8).

            But some organisms lack functional eyes, because eyes serve little purpose in a world of darkness, save for detecting bioluminescence. And if an organism has no eyes, it can’t be lured into the bell of a jellyfish or the maw of an angler-fish. (9).

            This leads to an interesting question: why are organisms attracted to light in the first place? At night, when humans introduce an artificial light, little zooplankton become illuminated. This makes them targets for small fish that hunt by sight, to whom they were invisible before. When the little fish begin feeding, they too become illuminated and attract larger predators, and so on. Fishermen take advantage of this phenomenon. (10). A similar chain of events may occur in the bathypelagic zone as well.

            Another important characteristic of the bathypelagic zone is the extreme pressure. This does not greatly affect the creatures, because they were born there and spend most of their lives there, so the pressures inside and outside their bodies are equalized, leaving no net effect. But at least one adaptation has arisen due to high pressure, and it involves an organ common to many fish species: the swim bladder, a gas-filled chamber that allows for passive flotation. By regulating the amount of gas stored, a fish can rise or sink. The swim bladder is absent from the physiology of bathypelagic organisms, or else it is filled with fluid rather than gas, because gases are compressible while fluids are not. This also explains the complete lack of air spaces in deep sea organisms. (11).
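The compressibility point can be made concrete with Boyle's law (P1 * V1 = P2 * V2 at constant temperature), a standard physics result rather than something from the cited sources: a gas pocket carried down to bathypelagic pressures shrinks to a tiny fraction of its surface volume, while a fluid-filled bladder keeps its volume.

```python
def compressed_volume(v1, p1_atm, p2_atm):
    """Boyle's law: volume of a fixed amount of gas after a pressure
    change at constant temperature (an idealized sketch)."""
    return v1 * p1_atm / p2_atm

# A 1.0 L gas bladder taken from the surface (1 atm) down to 400 atm:
print(compressed_volume(1.0, 1.0, 400.0))  # -> 0.0025 (liters)
```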

            Oxygen might be expected to exist only in very small amounts in the bathypelagic zone: because no photosynthesis occurs, no oxygen is replenished while it is constantly consumed. But the cold polar waters that feed the deep sea are actually saturated with oxygen. (4). However, above the bathypelagic zone, from about 300 to 400 meters down, lies a so-called oxygen minimum zone. Here, adaptations to the low-oxygen environment may include more efficient oxygen-processing enzymes (12) and increased surface area (13).

            Many deep sea organisms at some point travel nearer to the surface, for various reasons. Innumerable small organisms move up at night, when they can take advantage of the increased availability of food in the shallower, warmer layers, and descend to safety during the day, when they would be visible beyond the bathypelagic zone. (10). This exposes them to varying pressure and temperature, which alters certain functionally important bilayer membranes, and adaptations are required to handle it. (14). Some organisms, like the angler-fish, rise near the surface to breed and require the same environmental versatility to survive, although many of them don’t survive anyway (7).

            Those described above are just a few of the numerous adaptations necessary for the survival of life in the bathypelagic zone.


  1. Nelson, Rob. (2018). “Deep Sea Biome.” Untamed Science. Date-accessed: 4/4/2018.
  2. Stenstrom, Jonas. (2018). “Pelagic Biome.” Untamed Science. Date-accessed: 4/4/2018.
  3. The editors of Encyclopaedia Britannica. (2015). “Bathyal Zone.” Encyclopaedia Britannica. Date-accessed: 4/4/2018.
  4. “Ocean Zones.” (n.d.). Ocean Explorer. Date-accessed: 4/4/2018.
  5. The Ocean Portal Team. (2017). “Bioluminescence.” Smithsonian National Museum of Natural History. Date-accessed: 4/4/2018.
  6. Kobayashi, Masaki et al. (2009). “Imaging of Ultraweak Spontaneous Photon Emission from Human Body Displaying Diurnal Rhythm.” PLOS ONE. Date-accessed: 4/4/2018.
  7. Langin, Katie. (2018). “Exclusive: ‘I’ve never seen anything like it.’ Video of mating deep-sea anglerfish stuns biologists.” Science.
  8. Pietsch, Theodore W. (2005). “Dimorphism, parasitism, and sex revisited: modes of reproduction among deep-sea ceratioid anglerfishes (Teleostei: Lophiiformes).” Ichthyological Research. Date-accessed: 4/4/2018.
  9. NOAA Ocean Explorer Webmaster. (2013). “The Bulk of the Ocean is Deep Sea Habitat with no Light.” Ocean Explorer. Date-accessed: 4/4/2018.
  10. Carilli, Jessica. (2016). “Why Lights Attract Ocean Life at Night.” Scitable by Nature Education. Date-accessed: 4/4/2018.
  11. “If a giant squid has a soft body, how can it survive in such deep water pressure, when even the best submarines can’t go that deep?” (2004). UCSB ScienceLine. Date-accessed: 4/4/2018.
  12. Han, Huazhi et al. (2011). “Adaptation of aerobic respiration to low O2 environments.” PNAS. Date-accessed: 4/4/2018.
  13. Levin, Lisa A. (2002). “Deep Ocean Life Where Oxygen is Scarce.” American Scientist.
  14. Cossins, A. R., and A. G. Macdonald. (1989). “The adaptation of biological membranes to temperature and pressure: fish from the deep and cold.”

Are Crystals Alive?

Here is a paper I wrote in October of 2018 examining the question: “Are crystals living things?” This seemingly simple question bifurcates into an inconclusive study of the many definitions of life and an intriguing comparison of crystals to living things based on these definitions. What do you think; are crystals alive? Comment down below or in the forum, and feel free to be as intuitive and/or scientific as you want!

Abram Leyzorek

15 October 2017

Analysis of the Shared Characteristics Between Crystals and Living Things and Study on Definitions of Life

Definitions for “life”

Life is defined differently in dictionaries (1) (7), by different scientific fields addressing the subject (2) (3), and by individual scientists studying those fields (9) (10). Entities such as viruses and self-replicating proteins fuel a debate over whether they should be classified as living or non-living. They also provide gray areas, blurring many definitions of life and spawning new ones (2). One conventional and well accepted definition for life requires:

  1. Cellular composition.
  2. Capacity for metabolism.
  3. Capacity for growth and development.
  4. Capacity to reproduce.
  5. Capacity to pass on individual characteristics to offspring through DNA: Heredity.
  6. Tendency toward homeostasis.
  7. Capacity to respond to stimuli.
  8. The capacity for adaptation through evolution.

A definition similar to this can be found in many textbooks on biology (4). It will be referred to in this paper as the textbook definition. The aforementioned dictionary definitions (1) (7) will not feature in this paper, as their content and more is provided by the textbook definition and, therefore, reviewing them seems irrelevant to the following purpose: This paper explores whether or not crystals can be considered living under the above definition and others, first by scrutinizing crystals under the textbook definition criteria and then under other definitions. This paper then speculates about the definitions of life and why they are important or not.

Do crystals have cells?

A crystal is defined as a grouping of atoms or molecules arranged in an ordered, repeating pattern. The specific patterns are known as crystal lattices and are defined by the geometric structure of their unit cell, the basic structure that is repeated to form the lattice (5). All crystals must maintain a charge balance: an equal amount of positive and negative charge. Crystals whose external boundaries are described by well-developed faces are known as euhedral, though not all crystals possess this feature. Additionally, crystals may not maintain their structures under changing conditions such as increased or decreased temperature or pressure, but rather will assume different forms, called polymorphs, under certain conditions, while maintaining their original elemental composition (6). Whether these characteristics allow crystals to be defined as living begins with the question of cells; consider three definitions of a biological cell:

1. The structural, functional and biological unit of all organisms.

2. An autonomous self-replicating unit that may exist as a functional independent unit of life (as in the case of unicellular organisms), or as a sub-unit in a multicellular organism (such as in plants and animals) that is specialized to carry out particular functions for the organism as a whole.

3. A membrane bound structure containing biomolecules such as nucleic acids and polysaccharides.

Definition number one is the most general and will be considered first. Different domains of living creatures have cells organized in different ways, i.e. prokaryotic, eukaryotic, and archaeal (11), with different although similar functions and composition. The same is true even at the taxonomic level of kingdom, e.g. plant cells vs. animal cells (12). If science were to accept another general classification of living creatures, e.g. crystals, it might find that that kingdom also differed in its cellular structure.

The definition of a crystal, as provided above, includes that crystals are made up of cells. These are called unit cells and are the basic building blocks of crystals; they make up the crystal lattice, the structural component. They control the functional aspects of crystals by hosting interstices, vacancies, and other “defects” that shape the physical properties of crystals and control the movement of atoms in and on them. They do this through a process called solid-state diffusion, in which atoms move from areas of higher atomic concentration to areas of lower concentration (13). This process also controls the uptake of elements and compounds into solid solution (15). Since every crystal is composed of one of the seven types of unit cells (16), its functioning on a “cellular” level is determined by its unit cells. Unit cells are the basic unit of crystalline solids, so, granted that crystals are alive, unit cells are biological units. Thus the first definition of a cell can be satisfied.
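The solid-state diffusion just described can be illustrated with a toy one-dimensional sketch (my own illustration, not drawn from the cited sources, and not a model of any real lattice): each step moves material from higher-concentration sites toward lower-concentration neighbors, in the spirit of Fick's law.

```python
def diffuse_step(conc, rate=0.1):
    """One explicit step of 1-D diffusion: each interior site moves
    toward the mean of its neighbors (a Fick's-law-style smoothing).
    Endpoints are held fixed for simplicity."""
    new = conc[:]
    for i in range(1, len(conc) - 1):
        new[i] = conc[i] + rate * (conc[i - 1] - 2 * conc[i] + conc[i + 1])
    return new

profile = [0.0, 0.0, 1.0, 0.0, 0.0]  # a concentration spike
for _ in range(50):
    profile = diffuse_step(profile)
print(profile)  # the spike has spread toward the low-concentration sites
```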

In the second definition, the word “autonomous” is used. It is, of course, defined in the biological sense of the word. In that sense, it simply means having an independent existence and governing laws (17). Certainly a unit cell satisfies this definition.

The second term in definition number two is “self-sustaining.” Now, as an article by Astrobiology Magazine entitled “Defining Life” (18) points out, no organism can survive entirely by itself; all life needs access to free energy and materials. So it seems that “self-sustaining” must mean that the organism can gather the energy it needs from the necessary materials, if they are available. Could a human, for example, sustain itself in space? Of course not; space does not provide the environment in which a human could survive. It is no stretch of logic to say that every organism needs a specialized environment to survive, with proper temperature, weather conditions, food supply, etc. For a crystal to form, it needs its constituent elements close at hand. One example of a favorable environment for a crystal is a solution supersaturated with its constituent element(s). In this environment, crystals will form via nucleation (19) and will impose their structural template onto free atoms of their constituent elements, organizing them into more crystal (20).

The stipulations of the second definition have all been covered save one: the requirement for cells to be “specialized.” The biological definition of this term is to be set apart for a particular function (21). Several types of defects in crystals enhance certain of their functions (13); these “defective” unit cells are set apart from the others and perform a different function. Arguably, they are specialized.

The third definition of a biological cell presents a screen that some crystals cannot pass: the requirement for cells to contain biomolecules and to be wrapped in a membrane. A “biomolecule” is simply an organic (carbon-containing) molecule produced by a living creature (22). Obviously this criterion is impossible for an inorganic crystal, i.e. one not containing carbon, to meet. But many crystals are organic (23), and all of those, except those solely of carbon, contain biomolecules, if it is granted that crystals are alive. Yet if one grants that crystals are alive, the current scientific conclusion that all life as we know it is carbon-based (25) obviously dissolves. Furthermore, the article referenced (25) goes on to accept the possibility that, although the carbon atom seems the most suited for life, life forms could be based around other elements such as silicon or germanium. Why, then, should the definition of life be shackled to carbon?

The next hurdle, however, seems too lofty to leap: the unit cells of these crystals are not surrounded by any membranes. This shortcoming is perhaps excusable because nothing is surrounded by a physical, as opposed to an electrical, membrane at the molecular and atomic levels. Organic cells are enormous in comparison to unit cells. For example, a red blood cell is eight micrometers across (39) and a unit cell of nickel is about 350 picometers across (40): the red blood cell is roughly 23,000 times larger lengthwise, and the disparity is vastly greater by volume. Since membranes, in general, are made up of molecules, how could something the size of a molecule, such as a unit cell, have a membrane? Additionally, if one grants, for the purposes of argument, that crystals are alive, they would be an entirely different sort of living creature from those biologists are accustomed to. There is no reason to assume that such an entirely different form of life would necessarily depend upon membranes.
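The size comparison above is easy to verify with a quick back-of-the-envelope calculation, using the figures cited in the text (an 8-micrometer red blood cell and a 350-picometer nickel unit cell):

```python
# Rough comparison of a red blood cell (~8 micrometers across)
# with a nickel unit cell (~350 picometers across).
rbc = 8e-6        # red blood cell diameter, in meters
nickel = 350e-12  # nickel unit-cell edge length, in meters

linear_ratio = rbc / nickel
volume_ratio = linear_ratio ** 3

print(round(linear_ratio))    # ~22857: roughly 23,000 times larger lengthwise
print(f"{volume_ratio:.1e}")  # ~1.2e13: about ten trillion times larger by volume
```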

In sum, it has been determined that crystals are composed of cells, granted a general definition of the term.

Do crystals have metabolism?

            Crystals will now be tested by the second criterion: the capacity for metabolism. As above, the discussion begins with a definition; the Cambridge Dictionary (24) defines metabolism as “the chemical and physical processes by which a living thing uses food for energy and growth.” The previous section on the cellular composition of crystals explained that crystals do grow. Their “food” is comprised of the elements that constitute them. These elements align themselves into the crystal lattice, so the crystal is using their energy to grow. This growth will be covered in further detail in the following section. Crystals, then, have a capacity for metabolism.

Can crystals grow and develop?

            Closely related to metabolism is the capacity for growth and development, the third criterion. Anabolism is the specific term for growth (26). Crystals can grow out of supersaturated solutions (20), vapor, and solid mineral deposits (27). The process by which crystals grow has been explained above. As crystals grow, they attain a greater size and their own individual complement of defects and impurities. This is unequivocal growth and development.

Can crystals reproduce?

            The next criterion, and perhaps the most important, is the capacity to reproduce. Reproduction simply means “the production of offspring by organized bodies” (28). In addition to being able to form naturally by nucleation (19), crystals can form much more quickly by a process called seeding. Seeding involves placing microscopic crystals into an environment favorable for crystallization, accelerating crystal growth (29). This is the way in which a crystal reproduces: a piece of a crystal is chipped off the parent, and the chip, or seed, carries the information to form new crystals in its unit cells. When it finds a favorable environment abundant with “food”, new crystals, or offspring, are made. This process is asexual, because the offspring are clones of the parent (28).

            The offspring described in the previous paragraph cannot rightly be called offspring if they don’t share the characteristics of their parent through heredity, the capacity for which is the fifth criterion. The offspring of asexual reproducers are clones of the parent and have essentially identical features. This is, of course, untrue if the offspring grow up in a much different environment than the parent did, and develop differently. The way in which crystals pass their traits to offspring is through a universal code that defines the structure of each crystal. The crystal structure acts as the blueprint for a new crystal, like DNA for a new organic organism.

Unfortunately, this process cannot meet the definition of heredity, which is complex and restrictive. Heredity is defined as the natural process by which parents pass genes to offspring (30). Genes are chemical patterns on chromosomes that shape the development of offspring (31). Crystals cannot be said to have these. However, the purpose of genes is to shape the offspring to be like the parent. Although the process described above cannot be called heredity in the strict sense, it clearly accomplishes the same thing. If crystals are a new life form, they have simply found a different way of passing on their characteristics to offspring.

Do crystals maintain homeostasis?

            Every living organism needs a specific set of conditions to survive; temperature, pH, salinity, etc. must all be within a certain range for a given creature to live. Internal conditions are even more important than external ones. Homeostasis is the process by which an organism maintains the same internal conditions despite external changes (32). These processes only work within limits, of course: if a human were plunged into the Sun, homeostasis would not help it.

Crystals have something which could be thought of as homeostasis: an equilibrium crystal shape (ECS). The ECS is the shape at which a crystal has minimum surface free energy, given a constant volume (33). This is the shape at which a crystal is “happiest.” To demonstrate, imagine a crystal at its ECS. If one filed off a corner, after a while the crystal would reorganize itself back into the ECS (34). A NASA article acknowledges that crystals can maintain equilibrium (2). These data affirm that crystals perform homeostasis.

Can crystals respond to stimuli?

            Probably the easiest criterion to satisfy, the capacity of life to respond to stimulation, is considered next. Several pieces of information already provided exemplify response to stimulation. The homeostasis of crystals described above constitutes a response to stimulation. Also, liquid crystals can respond to light, heat, and mechanical stress (35). Certain photonic crystals are responsive, as well (36). The aforementioned NASA article (2) also states that crystals can “move” in response to stimuli. Surely this point is affirmed.

Can crystals evolve?

            The final criterion: the capacity to evolve through adaptation. This is, arguably, the characteristic furthest from relevance to the discussion, because it is unclear and hitherto unproven whether bodies normally considered alive today do indeed evolve (3). However, according to the article referenced, there is an enormous body of data that perhaps the majority of scientists think validates the evolutionary process, as described by a website entitled “Understanding Evolution” (14). Seemingly, though, another individual could consider the same facts and develop a different interpretation, or theory. Furthermore, Steven Benner, in an article entitled “Defining Life” (8), describes how fictional characters, such as androids, that humans today would be forced to consider alive, would not be subject to Darwinian evolution.

Benner posited that humans would acknowledge the living status of such hypothetical creatures based on our “values” concerning what is alive. For example, if an unconventional being such as a cloud were to float one day into a person’s path and verbally refuse to move, displaying sentience, could that person consider the cloud dead? The capacity for evolution does not seem to be one of the characteristics of life that people value, such as response to stimulation, reproduction, etc. The reason for this may simply be that evolution is not observable; it is inconsequential to transient individuals. In fact, most creatures, including some humans, normally only think about reproduction, growth and development, and response to stimuli. The other, less visible characteristics, such as heredity and cellular composition, are at least observable within a lifetime.

If this weren’t enough, some definitions of life completely exclude the capacity for evolution as a criterion (37). So even if crystals do fail this test, that will not negatively impact their prospects of meeting the criteria of accepted definitions. But, for the purposes of argument, let us assume universal evolution to be true: do crystals undergo this process?

To answer this question, the Cairns-Smith theory will be considered. It stipulates that the first organic life arose from clay crystals that stored and replicated a genome simple enough to have arisen spontaneously (38). The article referenced explores how hypothetical clay crystals might have evolved due to resource scarcity. Crystals might have developed an ability to run programs that predict the most abundant resource available in their environment, allowing them to grow faster. The paper concludes that it is conceivable for real crystals to have evolved into responsive, sensitive creatures. This is not hard evidence, but it is a proof of concept.

Although this paper has failed to demonstrate that crystals evolve, and has therefore failed to completely satisfy the definition it set out to satisfy, other definitions that, perhaps rightly, exclude the evolution criterion have been satisfied.

Cybernetic definition of life

            The textbook definition scrutinized above is not the only proposed definition of life. A cybernetic definition describes life as a network of regulatory mechanisms subordinated to a potential for expansion (9). Crystals certainly possess a network of regulatory mechanisms, and harness these to expand. Unless that extrapolation harbors a misinterpretation of the cybernetic definition, crystals do satisfy it and contain “the essence of life” which, as the authors of the referenced article suggest, the cybernetic definition embodies.

Value based definitions of life

            A previously referenced article mentioned that one way of determining whether or not something is alive is based on our values (14). People in general seem to value life for its responsiveness, its growth and development, and its reproduction. The one of these features that isn’t obvious in crystals is responsiveness, although they do possess this characteristic, as argued above. More importantly, though, they don’t respond in ways that humans can naturally interpret. This lack of intuitive understanding is probably one reason why people generally do not consider crystals alive.

At first glance, a coral reef may look inanimate, but with a scientific background one knows that it is alive. Science has provided this viewpoint. It is hard for the unaided human to observe the living properties, such as growth, of a coral reef. Crystals, too, naturally grow very slowly. This slow growth is perhaps another reason that crystals aren’t considered living.

In contrast, a tree seems to grow just quickly enough for people to observe considerable growth in a lifetime. In addition, trees are far more abundant and ubiquitous. Crystals, however, are less obvious and generally paid less attention. Seemingly, for a long time humans in general failed to observe these important characteristics of crystals as a consequence of their inattentiveness and short life spans. Yet, when science began to study crystals, it was too late: people had already developed a system of somewhat arbitrary values that determined in their minds what was alive and what was not.

In consequence, humans have thought nothing of harvesting and exploiting crystals for their beauty and their great utility in electronics, building, etc. However, this would not be surprising even if humans did consider crystals to be alive. Consider what they have done to creatures in the modern farming and livestock industries. Obviously nothing can be done for the unfortunate case of crystals while those atrocities on more obviously living things (including humans) continue. Of course all living creatures depend on each other for survival, but humans have learned to satisfy their greed for wealth and convenience at an unprecedented level, to the detriment of all life, crystals (perhaps) included.


  1. “Life.” Merriam-Webster. Accessed 20 Jan. 2020.
  2. Dunbar, Brian. “Life’s Working Definition: Does It Work?” NASA. Accessed 20 Jan. 2020.
  3. Benner, Steven A. “Defining Life.” Astrobiology, Mary Ann Liebert, Inc., Dec. 2010. Accessed 20 Jan. 2020.
  4. Reading Essentials for Biology: An Interactive Student Textbook. Glencoe McGraw-Hill, 2011.
  5. “What Is a Crystal? – International Gem Society – IGS.” International Gem Society. Accessed 20 Jan. 2020.
  6. “What Is a Crystal?” University of California, Berkeley.
  7. “Life.” Accessed 20 Jan. 2020.
  8. Steiger, Frank. Is Evolution Only a Theory? Tufts University, 1996.
  9. Korzeniewski, Bernard. “Cybernetic Formulation of the Definition of Life.” Journal of Theoretical Biology, Academic Press, 25 May 2002.
  10. Chaitin, G. J. “To a Mathematical Definition of ‘Life’.” ACM SIGACT News.
  11. Ruiz-Mirazo, Kepa, et al. “A Universal Definition of Life: Autonomy and Open-Ended Evolution.” SpringerLink, Kluwer Academic Publishers, June 2014.
  12. “17 Differences Between Plant and Animal Cells: Plant Cell vs Animal Cell.” Bio Explorer, 22 June 2019. Accessed 20 Jan. 2020.
  13. “Solid State Diffusion.” University of Oslo, 2005.
  14. Welcome to Evolution 101! Accessed 20 Jan. 2020.
  15. Stipp, Susan L., et al. “Cd2+ Uptake by Calcite, Solid-State Diffusion, and the Formation of Solid-Solution: Interface Processes Observed with Near-Surface Sensitive Techniques (XPS, LEED, and AES).” Geochimica et Cosmochimica Acta, Pergamon, 3 Apr. 2003.
  16. “Unit Cells.” Bodner Research Web.
  17. “Autonomous.” Biology Online, 12 May 2014.
  18. “Defining Life.” Astrobiology Magazine, 19 June 2002. Accessed 20 Jan. 2020.
  19. Erdemir, Deniz, Alfred Y. Lee, and Allan S. Myerson. “Nucleation of Crystals from Solution: Classical and Two-Step Models.” Accounts of Chemical Research, vol. 42, no. 5, 2009, pp. 621–629. DOI: 10.1021/ar800217x.
  20. Seely, Oliver. “Crystallization of Sodium Acetate from a Supersaturated Solution.”
  21. “Specialization.” Biology Online, 12 May 2014. Accessed 20 Jan. 2020.
  22. “Biomolecule.” Biology Online, 12 May 2014. Accessed 20 Jan. 2020.
  23. Schnieders, Michael J., et al. “The Structure, Thermodynamics and Solubility of Organic Crystals from Simulation with a Polarizable Force Field.” Journal of Chemical Theory and Computation, U.S. National Library of Medicine, 8 May 2012.
  24. “Metabolism.” Cambridge Dictionary. Accessed 21 Jan. 2020.
  25. Cosmic Evolution – Future. Accessed 21 Jan. 2020.
  26. “Anabolism.” Biology Online, 12 May 2014. Accessed 21 Jan. 2020.
  27. Minerals, Rocks & Rock Forming Processes. Accessed 25 Jan. 2020.
  28. “Reproduction.” Biology Online, 12 May 2014. Accessed 21 Jan. 2020.
  29. Bergfors, Terese. “Seeds to Crystals.” Journal of Structural Biology, Academic Press, 19 Apr. 2003.
  30. “Heredity.” Cambridge Dictionary.
  31. “Gene.” Cambridge Dictionary.
  32. “Homeostasis.” Cambridge Dictionary. Accessed 21 Jan. 2020.
  33. Kovalenko, O., et al. “The Equilibrium Crystal Shape of Iron.” Scripta Materialia, Pergamon, 18 June 2016.
  34. Equilibrium Crystal Shapes. Accessed 21 Jan. 2020.
  35. Akamatsu, N., et al. “Thermo-, Photo-, and Mechano-Responsive Liquid Crystal Networks Enable Tunable Photonic Crystals.” Soft Matter, U.S. National Library of Medicine, 25 Oct. 2017.
  36. Iqbal, et al. “Photo-Responsive Shape-Memory and Shape-Changing Liquid-Crystal Polymer Networks.” MDPI, Multidisciplinary Digital Publishing Institute, 2 Jan. 2013. Accessed 21 Jan. 2020.
  37. Wilkin, Douglas, and Niamh Gray-Wilson. “Characteristics of Life.” CK-12 Foundation, 20 Nov. 2019. Accessed 21 Jan. 2020.
  38. Schulman, Rebecca, and Erik Winfree. “How Crystals That Change and Respond to the Environment Evolve.” California Institute of Technology.
  39. Cell Size and Scale. Accessed 21 Jan. 2020.
  40. Face-Centered Cubic Problems. Accessed 21 Jan. 2020.

The Lead and Iron Oxides

Here is a chemistry paper on the lead and iron oxides that I wrote in September of 2018. It provides definitions for the terms “compound” and “oxidation”, as well as physical and chemical descriptions of each of the three oxides of lead and the three oxides of iron. I hope you learn something, enjoy, and, if you have a question, ask in the forums!

Lead and iron are two of the elements most anciently utilized by humans, for a vast array of tools and products from swords to ceramics. Quite often, however, they were used in impure forms as oxides. Here a brief explanation of oxidation might prove useful. Oxidation is an example of a chemical reaction, which is any interaction between atoms of one or more elements in specific ratios to form a new substance, called a compound (“Definition of Compound”, 2017, pp. 1). The constituents of the compound become chemically bonded and cannot be separated by physical means (“Definition of Compound”, 2017, pp. 1). The resulting compound may have entirely different properties from its constituents (Chemical Reactions, n.d.). One way to form such compounds is by oxidation, the process by which one element, the oxidizer, accepts electrons from another element, thus becoming bonded to it (Clark, 2016). Oxidation was named after the gaseous element oxygen, because oxygen is an oxidizing element: it is highly electronegative, eager to steal electrons. In fact, oxidation was originally understood in terms of oxygen transfer, rather than the more accurate model of electron transfer (Clark, 2016). Lead and iron are both more electropositive than oxygen, so they will be oxidized in a reaction with oxygen. Depending upon the conditions in which this reaction takes place, it can lead to several different compounds with different properties and uses (Winn, 2004).
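As a concrete illustration of oxidation as electron transfer, the formation of the first iron oxide discussed below can be split into two half-reactions, a standard bookkeeping device in redox chemistry (the electron counts here are worked out for illustration, not taken from the sources cited):

```latex
\begin{align*}
\text{oxidation:} \quad & 4\,\mathrm{Fe} \rightarrow 4\,\mathrm{Fe}^{3+} + 12\,e^- \\
\text{reduction:} \quad & 3\,\mathrm{O_2} + 12\,e^- \rightarrow 6\,\mathrm{O}^{2-} \\
\text{overall:}   \quad & 4\,\mathrm{Fe} + 3\,\mathrm{O_2} \rightarrow 2\,\mathrm{Fe_2O_3}
\end{align*}
```

The twelve electrons given up by the iron are exactly the twelve accepted by the oxygen, which is why the two half-reactions sum to the balanced overall equation.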

Beginning with iron, the first possible compound is a mineral called hematite. Hematite, or Fe2O3, is one of the most common minerals in the world and is present, at least in small amounts, in many rocks, e.g. sandstone. It causes the reddish or brownish coloration of such rocks, although the mineral itself can vary greatly in color, from gray to silver-gray, and black to brown and reddish brown (Winn, 2004, pp. 1). In fact, hematite was until recently used to make a reddish-brown dye, before cheaper alternatives were developed. It is also responsible for the coloration of Mars, the Red Planet (Winn, 2004, pp. 2). Although it is only paramagnetic under normal conditions, it becomes strongly magnetic when heated, similar to another iron oxide, magnetite. Its hardness ranges from 6-7 on the Mohs Scale, and it may contain small amounts of titanium. As the principal ore of iron, hematite is mined for the industrial production of iron and is the source of approximately ninety percent of all iron. Fine mineral specimens can be found in several localities, including Minas Gerais (Brazil), Cumberland (Cumbria, England), and Rio Marina (island of Elba, Italy) (Friedman, “The Mineral Hematite”, 2018). The chemical reaction that forms hematite looks like this:

4Fe + 3O2 → 2Fe2O3 (Winn, 2004)

That is what happens if there is ample oxygen available, but a different result occurs if the oxygen is less plentiful: Fe3O4, or magnetite. Notice that magnetite has a higher iron-to-oxygen ratio than its cousin, hematite (Winn, 2004, pp. 4). As referenced before, magnetite earns its name for being a natural magnet, the only mineral with this property. In coloration it is dark gray to black, with a hardness of 5.5-6.5, slightly less than that of hematite. It also differs from its duller cousin in having a metallic luster. Like hematite, though less widely used, it is an important ore of iron, and it is of scientific interest due to its pronounced magnetic properties. Magnetite can be found almost anywhere around the world, but there are a few noteworthy sources, such as Binn Tal (Wallis, Switzerland), Parachinar (Pakistan), and Cerro Huanaquino (Potosi, Bolivia) (Friedman, “The Mineral Magnetite”, 2018). The chemical reaction that forms magnetite looks like this:

6Fe + 4O2 → 2Fe3O4 (Winn, 2004)

The final oxide of iron is known as wüstite. This compound was named after geologist and paleontologist Ewald Wüst (1875-1934) of the University of Kiel in Germany. Wüstite has a hardness of 5-5.5 and occurs mainly in meteorites and anthropogenic slags (“Wüstite”, 2018). Although its chemical formula is often given as FeO, it breaks the law of definite proportions; the ratio of iron to oxygen ranges from 0.85 to 0.95. Because of this, it is known as a nonstoichiometric compound. This technically allows for an almost infinite number of iron oxides, but all the non-stoichiometric oxides of iron are categorized as wüstite (Winn, 2004, pp. 9).
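Unlike wüstite, the hematite and magnetite equations above balance exactly; a quick atom count on each side, sketched here in Python, confirms it:

```python
# Sanity-check that the two iron-oxide equations are balanced
# by counting atoms of each element on both sides.
from collections import Counter

def count_atoms(side):
    """side: list of (coefficient, formula-as-dict) pairs, e.g. (3, {"O": 2})."""
    total = Counter()
    for coeff, formula in side:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

# 4 Fe + 3 O2 -> 2 Fe2O3  (hematite)
assert count_atoms([(4, {"Fe": 1}), (3, {"O": 2})]) \
    == count_atoms([(2, {"Fe": 2, "O": 3})])  # Fe: 4, O: 6 on each side

# 6 Fe + 4 O2 -> 2 Fe3O4  (magnetite)
left = [(6, {"Fe": 1}), (4, {"O": 2})]
right = [(2, {"Fe": 3, "O": 4})]
assert count_atoms(left) == count_atoms(right)  # Fe: 6, O: 8 on each side
```

Wüstite's variable Fe/O ratio is precisely what no fixed pair of integer coefficients like these could capture.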

Despite this anomaly, most compounds do have distinct stoichiometries, like the lead oxides. The first lead oxide is lead monoxide, or PbO. It forms when lead is heated in the presence of oxygen and can take one of two forms, litharge or massicot, differentiated by their crystal structure. Both are yellowish solids; litharge has a tetragonal crystal structure, while massicot’s is orthorhombic (“Lead”, 2018, pp. 12). Both have a hardness of 2 on the Mohs scale and a dull, greasy luster. Litharge has a variety of uses, including in lead-acid batteries, pottery glazes, pigments, lead glass, and oil refining. Litharge mines occur on every continent, with an especially high concentration in European countries such as Sweden, the United Kingdom, and Germany (“Litharge”, 2018). Massicot mines can be found in many countries around the world, including Madagascar, Namibia, Australia, and Germany (“Massicot”, 2018).

The second lead oxide is known as minium, after the Minius river in the northwest of Spain. Its chemical formula is Pb3O4, lead tetroxide. Another name for it is red lead, because it can be made into a beautiful red pigment that has been used in paintings since the time of the ancient Romans. Paintings made with minium are called miniatures (“Red Lead”, n.d.). The hardness of minium is 2.5, and it has a tetragonal crystal structure just like litharge, with a similar luster. Mines are concentrated in Europe but can be found on every continent (“Minium”, 2018).

The final oxide of lead is plattnerite, otherwise known as lead dioxide (PbO2). It was named by Karl Wilhelm von Haidinger in honor of Karl Friedrich Plattner (1800-1858), who served as professor of metallurgy and assaying at the Bergakademie of Freiberg in Saxony, Germany. It is a brown to black mineral that is commercially produced in a process involving the oxidation of the lead oxide previously discussed, minium, by chlorine (“Lead”, 2018, pp. 13). Plattnerite is used in curing polysulfide rubbers, in matches and pyrotechnics, and in dyes (“Lead”, 2018, pp. 13). Its hardness is 5.5 and it has a dull, metallic luster. The plurality of plattnerite mines are in North America; of those, approximately half are in Mexico and half in the United States, concentrated in the western part of the country (“Plattnerite”, 2018).

These three oxides of iron and three oxides of lead are all very useful and very different from each other. This demonstrates the power of chemical reactions, to take the same two elements in different proportions and create new substances with different properties. However, as was touched on briefly, the chemical composition of a compound is not the sole determining factor of the properties of a substance; other factors, such as crystal structure, also play very important roles, as seen in the two forms of lead monoxide, litharge and massicot (“Lead”, 2018, pp. 12). Regardless of their composition and crystal structure, human beings have used the six oxides discussed above for a long time, some for millennia (“Red Lead”, n.d.), and will probably continue utilizing these useful compounds long into the future.


“Chemical Reactions”. (n.d.). Retrieved September 4, 2018.

Clark, J. (2016, May 1). Definitions of Oxidation and Reduction. Retrieved September 3, 2018.

“Definition of Compound”. (2017). Retrieved September 3, 2018.

Friedman, H. (2018). The Mineral Hematite. Retrieved September 3, 2018.

Friedman, H. (2018). The Mineral Magnetite. Retrieved September 3, 2018.

“Lead”. (2018). Retrieved September 3, 2018.

“Litharge (Lead(II) Oxide), Lead Monoxide”. (2018). Retrieved September 3, 2018.

“Litharge”. (2018). (Hudson Institute of Mineralogy) Retrieved September 3, 2018.

“Massicot”. (2018). (Hudson Institute of Mineralogy) Retrieved September 3, 2018.

“Minium”. (2018). (Hudson Institute of Mineralogy) Retrieved September 3, 2018.

“Plattnerite”. (2018). (Hudson Institute of Mineralogy) Retrieved September 3, 2018.

“Red Lead”. (n.d.). Retrieved September 2018.

Winn, J. S. (2004, January 6). Stoichiometry of Iron Oxides. Retrieved September 3, 2018.

“Wüstite”. (2018). (Hudson Institute of Mineralogy) Retrieved September 3, 2018.

A Short Exposition of Photosynthesis

Here is a short research paper on photosynthesis, that most wonderful and complex phenomenon that makes life possible. It deserves much deeper treatment than a short, high-school-level report, but I hope this will provide a decent starting point on your learning journey. Don’t forget to ask your questions in the forum!

Photosynthesis is the process by which energy from the sun is used to chemically combine carbon dioxide (CO2) with water (H2O) to make oxygen and glucose. Green plants, i.e. plants with chlorophyll, and some other organisms utilize this chemical reaction to make food. Plants are primary producers occupying the lowest trophic level. They support all higher trophic levels and thus their level has the highest biomass. Without photosynthesis, most life on Earth would not exist. (1).

            The majority of photosynthesis in plants takes place in the middle layer of the leaves, the mesophyll. The cells in the mesophyll are equipped with organelles called chloroplasts, specifically designed for carrying out photosynthesis. Inside the chloroplasts are what resemble stacks of coins. Each “coin” is called a thylakoid and has a green pigment called chlorophyll in its membrane. The entire stack is called a granum. The grana occupy a fluid-filled space called the stroma. (1).

            Photosynthesis is actually a complex series of chemical reactions, some light-dependent and others light-independent. The light-dependent reactions take place in the thylakoid membranes, where chlorophyll absorbs light energy that is converted into adenosine triphosphate (ATP), an energy-carrying molecule, and NADPH, an electron-carrying molecule. It is here that the oxygen we breathe is created from water as a byproduct; it diffuses out through the stomata, tiny pores in the surface layer of leaves that let oxygen out and carbon dioxide in. (1).

            Then begin the light-independent reactions, collectively known as the Calvin cycle. They occur in the stroma and use the ATP and NADPH to fix carbon for use in constructing cells, forming three-carbon sugars (glyceraldehyde-3-phosphate, or G3P, molecules) that link up to make glucose. (1).

            In summary, the energy from sunlight ends up stored as chemical energy in the bonds of sugar molecules that can be metabolized by plants and other organisms (1). The reaction absorbs energy, so it can be described as endothermic (2).
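The process described above is conventionally summarized by the equation 6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2 (this standard textbook equation is supplied here for illustration, not drawn from the sources cited). A quick check, sketched in Python, shows that the atoms balance:

```python
# Verify the summary equation of photosynthesis is balanced:
#   6 CO2 + 6 H2O + light -> C6H12O6 (glucose) + 6 O2
def atoms(coeff, **elements):
    """Atom counts contributed by `coeff` molecules of a formula."""
    return {el: coeff * n for el, n in elements.items()}

def total(*terms):
    """Merge several atom-count dicts into one."""
    out = {}
    for t in terms:
        for el, n in t.items():
            out[el] = out.get(el, 0) + n
    return out

reactants = total(atoms(6, C=1, O=2), atoms(6, H=2, O=1))   # 6 CO2 + 6 H2O
products = total(atoms(1, C=6, H=12, O=6), atoms(6, O=2))   # C6H12O6 + 6 O2
assert reactants == products == {"C": 6, "O": 18, "H": 12}
```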


  1. “Intro to Photosynthesis.” (2018). Khan Academy. Date-accessed: 5/14/2018.

  2. Helmenstine, Anne Marie. (2018). “Endothermic Reaction Examples.” ThoughtCo.


Superfluidity

Here is a paper I wrote in March of 2018 about another intriguing physical phenomenon: superfluidity. I hope you find it as cool as you would have to get to observe this phenomenon, which usually occurs close to absolute zero! Please enjoy, and don’t hesitate to comment in the forum if you have any questions. That way more people, including me, can learn from your question!

“Superfluidity” describes a property of liquid matter: the property of having zero viscosity (or immeasurably low viscosity, to be precise) (1). This means that if a superfluid were stirred, it would cycle in endless vortices, conserving one hundred percent of its kinetic energy. And if a hole were made in the bottom of the vessel, the superfluid would flow out very quickly compared to other fluids, e.g. honey. The rate of flow of course depends on the size of the hole, so for comparisons of viscosity assume that the holes are the same size. If one fills a cup with honey and then pokes a hole in it, the honey will eventually flow out, but very slowly. A superfluid, on the other hand, would flow out at the highest possible rate, limited only by the size of the hole (3). To understand this, an understanding of viscosity is needed. The fundamental principle behind viscosity is friction: the resistance that one object has to moving over another. In the case of liquids, which lack structure, it is the friction between the molecules or atoms of the liquid itself that causes viscosity (2). In the case of a superfluid, there is no such internal friction, due to its special molecular or atomic makeup. That is the fundamental definition of a superfluid, but these materials can exhibit many other strange properties (1). Along with the property of zero viscosity come lower density and a significant change in specific heat. But it is the property of zero viscosity that produces the strangest effects.

                Superfluidity and superconductivity are normally only exhibited at extremely low temperatures. This is because the particles that condense must always be indistinguishable. The de Broglie wavelengths of indistinguishable particles must have a high degree of overlap. The de Broglie wavelength equals Planck’s constant divided by momentum (mass times velocity). Since large particles, e.g. baseballs, have high mass, their de Broglie wavelengths will be infinitesimally small, which is why it is so unlikely that large particles will exhibit wave-like behavior. For de Broglie wavelengths to overlap, they need to be very long, which is achieved by cold temperatures, and it helps to have a high packing density of the particles (8). Cold temperatures are very costly to generate, so this requirement limits the practical applications of superfluidity and superconductivity. Some materials used to make superconductors are also very expensive, e.g. niobium (9).
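The formula λ = h / (mv) makes the baseball comparison concrete. Here is an order-of-magnitude sketch (the masses and speeds are illustrative values chosen by the author, not figures from the sources cited):

```python
# Order-of-magnitude de Broglie wavelengths, lambda = h / (m * v)
h = 6.626e-34  # Planck's constant, in J*s

# A thrown baseball: ~0.145 kg moving at ~40 m/s (illustrative values)
baseball = h / (0.145 * 40)

# A helium-4 atom (~6.6e-27 kg) at a thermal speed of ~100 m/s,
# roughly the regime of liquid-helium temperatures (illustrative values)
helium = h / (6.6e-27 * 100)

print(f"{baseball:.1e} m")  # ~1.1e-34 m: unimaginably small, no wave behavior
print(f"{helium:.1e} m")    # ~1.0e-9 m: comparable to interatomic spacing
```

The helium atom's wavelength is on the scale of the spacing between atoms, which is why the wavelengths of neighboring atoms can overlap and condense; the baseball's is some 25 orders of magnitude too short.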

                But there exist so-called “high temperature superconductors” that exhibit superconductivity at a balmy eighty kelvins. The BCS theory does not account for their existence, but exist they do, and they will probably prove very useful in the future. One current use of superconductors is in particle accelerators, such as those at CERN. (10).

                Superfluids also exhibit extremely high thermal conductivity, according to some sources infinitely high (12). Heat is transmitted so quickly that thermal waves are created. This is possible because the particles of a superfluid are in the same quantum state, i.e. if one particle moves, they all move. Heat is conducted when excited particles bump into each other. If a particle bumps into a superfluid particle, it will move at the same time as all of the other particles in the superfluid, transmitting the heat from one side of the superfluid to the other almost instantly. (11).

                Having no internal friction allows for some spectacular displays. For example, if a vessel is filled with liquid helium-4 that is then cooled below 2.17 kelvin, condensing into a superfluid, the liquid will flow up the walls of the vessel and out of it. This effect is caused by minor differences in temperature and atmospheric pressure inside and outside the vessel. These tiny differences are enough to move the superfluid against the force of gravity because friction does not hinder its flow (3).

                A vessel with tiny, molecule-sized holes in the bottom would hold ordinary liquid helium-4, but if the helium were cooled below 2.17 kelvin, it would immediately begin flowing through those holes, again as a consequence of its immeasurably low viscosity (3).

                Just a month after the discovery of superfluidity in 1938, another odd effect was observed accidentally by the physicist Jack Allen. He had a long, thin tube sticking above a liquid helium bath, packed with fine emery powder. He shined a flashlight on the apparatus, and the emery powder absorbed the light, slightly elevating the temperature of the superfluid. As long as the light shone, a fountain of liquid helium issued from the end of the tube above the bath. The heat creates a back pressure that forces the superfluid helium up (13).

                As stated above, a superfluid could be used to create perpetual motion. Of course, no one should be deceived into thinking that this means surplus energy could be produced: a superfluid can only conserve the energy it is given, and no more; it does not produce any of its own. However, it is a special property of helium that it never settles into a solid state, not even at absolute zero; it always remains a liquid, at least at standard pressure. This is because helium atoms are so weakly attracted to one another that the slight jiggling caused by the quantum uncertainty principle is enough to keep them apart (3). It would be naive to think that infinite energy could be harvested from the quantum uncertainty principle; here, as everywhere else physicists have looked, energy is neither created nor destroyed.

                Superfluidity was first demonstrated in two studies published in Nature in 1938 by the duo John Allen and Don Misener, and independently by Russian physicist Pyotr Kapitza. Allen and Misener measured the flow of liquid helium-4 through long, thin tubes and found that it flowed with zero viscosity at temperatures below 2.17 kelvin, almost absolute zero. Kapitza made comparable observations on the flow of liquid helium-4 between two glass discs. He also hypothesized a connection between superfluidity, the resistanceless flow of atoms or molecules, and superconductivity, the resistanceless flow of electrons, which had been discovered years earlier, in 1911, by Dutchman Heike Kamerlingh Onnes. These two concepts were at the frontier of physics back then, and it wasn’t until 1957 that Bardeen, Cooper, and Schrieffer (BCS) devised a complete theory to link the phenomena (4).

                However, very soon after the discovery of superfluidity, an explanation was offered: Bose-Einstein condensation. This is the process whereby particles known as bosons condense into a single quantum state (4). A boson is a particle whose spin, or intrinsic angular momentum, is zero or an integer. The only other possible class of particle is the fermion, a particle with half-integer spin. Spin dictates the statistics a particle obeys: bosons obey Bose-Einstein statistics and fermions obey Fermi-Dirac statistics. Bose-Einstein statistics allow an unlimited number of particles to occupy a single energy level, unlike Fermi-Dirac statistics, which follow the Pauli exclusion principle: no two associated fermions can occupy the same quantum state (5). Imagine two fermions, say electrons, and two states, a and b. Let ψ_a(1) be the amplitude for electron 1 to occupy state a, ψ_b(2) the amplitude for electron 2 to occupy state b, and so on. Electrons are indistinguishable, so it is impossible to tell which electron occupies which state, and the two-particle wave function must combine both assignments like this:

ψ = ψ_a(1)·ψ_b(2) − ψ_a(2)·ψ_b(1)
The above relationship describes a two-particle wave function, and if both electrons occupy the same state, a or b, the wave function vanishes. Physicists draw from this the Pauli exclusion principle. The same equation can be used for bosons, except that the minus sign must be changed to a plus sign. It is their ability to condense together in unlimited numbers in the same energy state that allows bosons to form Bose-Einstein condensates (6). But helium-4 atoms are composed of six fermions (two protons, two neutrons, and two electrons) and no bosons! Helium-4 atoms can form a Bose-Einstein condensate only because an even number of interacting fermions, e.g. six, can form a composite boson. This allows liquid helium-4 atoms to condense into the lowest possible energy state and become a superfluid (4).
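The exchange-sign rule can be checked with a toy calculation. The single-particle amplitudes below are arbitrary made-up numbers; the point is only that the antisymmetric (fermion) combination vanishes whenever both particles are assigned the same state, while the symmetric (boson) combination does not:

```python
# Toy check of the exchange-sign rule. Amplitudes are keyed by
# (state, particle); the values are arbitrary illustrative numbers.
psi = {("a", 1): 0.6, ("a", 2): 0.5, ("b", 1): 0.8, ("b", 2): 0.3}

def fermion_pair(x, y):
    # Antisymmetric combination: psi_x(1)*psi_y(2) - psi_x(2)*psi_y(1)
    return psi[(x, 1)] * psi[(y, 2)] - psi[(x, 2)] * psi[(y, 1)]

def boson_pair(x, y):
    # Symmetric combination: the same expression with a plus sign
    return psi[(x, 1)] * psi[(y, 2)] + psi[(x, 2)] * psi[(y, 1)]

print(fermion_pair("a", "a"))  # 0.0 -- two fermions cannot share a state
print(fermion_pair("a", "b"))  # nonzero -- different states are allowed
print(boson_pair("a", "a"))    # nonzero -- bosons may pile into one state
```

Whatever numbers are chosen for the amplitudes, the fermion combination with equal states always cancels to zero, which is the Pauli exclusion principle in miniature.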

                The BCS theory also offers an explanation for superconductivity. An electron moving through the lattice of a superconducting material will attract the lattice toward it, causing a ripple in the direction of its motion. An electron moving in the opposite direction will be attracted to this disturbance, and the two electrons are coupled together, forming what is called a Cooper pair. These Cooper pairs can act like bosons and condense into a state of zero electrical resistance called superconductivity (7).

                After World War II, large quantities of the light isotope helium-3 became available because it was a byproduct of manufacturing tritium for hydrogen bombs. It would seem, in light of the information presented thus far, that helium-3, having an odd number of fermions (two protons, two electrons, and one neutron), could not condense into a superfluid. But it might be possible, according to the BCS theory, for the helium-3 atoms themselves to form Cooper pairs and thus become a superfluid. The theoretical properties of this hypothetical superfluid helium-3 were explored in the 1960s, before the actual discovery of the superfluid at temperatures below 0.003 kelvin in 1972 (4).

                The spin quantum number (S) and the orbital quantum number (L) of Cooper pairs characterize two types of angular momentum. Normal BCS superconductors have S=0 and L=0, but superfluid helium-3 has S=1 and L=1. These non-zero quantum numbers cause helium-3 superfluid to break certain basic symmetries of the normal liquid state, namely rotational and time-reversal symmetry, entailing a non-trivial topology for the Cooper pairs. A variation of the BCS theory was required to understand this, marking the beginning of unconventional superconductivity, but the strange behaviors of unconventional superconductors were only just beginning to be discovered (4).

                Superfluid helium-3 has two phases, A and B (in the absence of a magnetic field). The B phase exists over a much wider range of temperatures and pressures. It can also exist in many different excited states due to its lack of rotational symmetry, and these states are classified according to the total angular momentum of the Cooper pairs (J), with the possibilities J=0, 1, or 2. One remarkable feature of the J=2 state of the B phase is its ability to transmit transverse sound waves, something previously thought possible only in rigid solids (4).

                Since the discovery of superfluid helium-3, numerous other unconventional superconductors have been discovered, e.g. the cuprates. But only one other superconducting material has been found to have two distinct superconducting phases, namely UPt3 (4).

                Even today, physicists are finding new superfluid properties that defy understanding. The most recent developments in the study of superfluidity involve helium-3 confined in ultra-light aerogels, which has been shown to exhibit never-before-seen superfluid phases that are currently being studied (4).

                Superfluidity was just one of the amazing and counter-intuitive discoveries about quantum mechanics made in the twentieth century, and scientists continue to learn more about it in the twenty-first. Scientists are still a long way from fully understanding the phenomenon, let alone harnessing its full potential.


  1. Schmitt, Andreas. (2014). “Introduction to Superfluidity.” Springer. Date-accessed: 4/18/2018.
  2. “What is viscosity?” (n.d.) Princeton. Date-accessed: 4/18/2018.
  3. Minkel, J. R. (2009). “Strange but True: Superfluid Helium can Climb Walls.” Scientific American. Date-accessed: 4/18/2018.
  4. Halperin, William P. (2018). “Eighty Years of Superfluidity.” Nature. Date-accessed: 4/18/2018.
  5. Nave, R. (n.d.). “Spin Classification.” HyperPhysics. Date-accessed: 4/18/2018.
  6. Nave, R. (n.d.). “Pauli Exclusion Principle.” HyperPhysics. Date-accessed: 4/18/2018.
  7. Nave, R. (n.d.). “Cooper Pairs.” HyperPhysics. Date-accessed: 4/18/2018.
  8. Nave, R. (n.d.). “Wave Nature of Electron.” HyperPhysics. Date-accessed: 4/18/2018.
  9. Cooley, Lance. Pong, Ian. (2016). “Cost drivers for very high energy p-p collider magnet conductors.” Fermilab. Date-accessed: 4/19/2018.
  10. “Superconductivity.” (2018). CERN. Date-accessed: 4/19/2018.
  11. “Properties of superfluid.” (n.d.). Date-accessed: 4/19/2018.
  12. “Infinite Thermal Conductivity.” (n.d.). Date-accessed: 4/19/2018.
  13. “Superfluidity II- The Fountain Effect.” (2006). Nature Publishing Group. Date-accessed: 4/19/2018.

Bell’s Theorem

This is a paper that I wrote in September of 2017 about Bell’s theorem, a very impactful and interesting discovery in physics. I wish I understood more about it. In explaining Bell’s theorem, I also elucidated some basic concepts in physics such as locality and how light works. I apologize for referencing YouTube videos, the faux pas of citation, but, hopefully, you can excuse this singular error and enjoy the content of the paper. Want to talk more about Bell’s theorem? Head on over to the forum.

John Stewart Bell was born on 28 June 1928 in Belfast, Northern Ireland. He died of a cerebral hemorrhage at 62 on 1 October 1990. Bell worked for CERN as a particle physicist, but accomplished his most important work in his off time, as a hobby: developing his theorem. Bell’s thesis contradicted Einstein: it holds that reality must be nonlocal. This has been supported by many experiments after him, but has been challenged by others, and remains controversial (1). To understand his wonderful and amazing discoveries, one must first comprehend some crucial underlying physics, which the following paragraphs explain.

            Realism as a physical theory concerns the essence of scientific knowledge: roughly, the view that physical systems have definite properties that exist whether or not anyone observes them. Scientific realists trust the information an observer receives through scientific processes (2). Bell’s theorem contests this.

            Locality states that no information or particle can travel at superluminal speed. Bell’s Theorem contests this also (3).

            Now an explanation of light waves must be provided. It is helpful to first break down the term electromagnetic field, a word combining the terms electric field and magnetic field. An electric field can be imagined as a space filled with vectors, one at each point. These are force vectors: each exerts a force on any charged particle at that point, in the direction of the vector and proportional to the length of the vector and the charge of the particle. Now imagine another vector field like the previous one. This represents the magnetic field. It acts on a charged particle only when the particle is moving, with a force perpendicular to both the direction of the particle’s motion and the magnetic field, and with a strength proportional to the length of the magnetic field vector, the particle’s charge, and its velocity. Maxwell’s equations describe the interplay between these two fields. When an electric field is circular, i.e. the vectors point in such a way as to form a loop, a magnetic field increases in strength perpendicular to the plane of that loop. Conversely, a loop of magnetic field creates a change in the electric field perpendicular to the plane of the loop. The result of this interplay is electromagnetic radiation: electric and magnetic fields oscillating perpendicular to each other and to the direction of propagation (4).

            The electric and magnetic field components are most easily described separately for mathematical purposes. And it would make the mathematical representation even more convenient if the light represented were horizontally polarized. Polarization refers to the direction that a field is oscillating in. For example, vertical polarization describes a field oscillating up and down (4).

            The electric field component of an electromagnetic wave can be mathematically modeled by a cosine function with a variable t for time, a variable a for amplitude, a variable p for phase shift (which fixes where the function is at time t = 0), and a variable f for frequency. That would look like this: a·cos(360ft + p), with the angle in degrees (4).
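A minimal sketch of this model in Python, with illustrative default values for a, f, and p:

```python
import math

def e_field(t, a=1.0, f=2.0, p=0.0):
    """Electric field a*cos(360*f*t + p), with the angle in degrees.
    The default values of a, f, and p are illustrative choices."""
    return a * math.cos(math.radians(360.0 * f * t + p))

print(e_field(0.0))    # 1.0 -- full amplitude at t = 0 with zero phase shift
print(e_field(0.125))  # ~0  -- a quarter period later (f = 2 Hz)
print(e_field(0.5))    # 1.0 -- back to the start after one full period
```

Changing p slides the whole oscillation along the time axis without altering its shape, which is all that “phase shift” means here.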

            Every valid wave in a vacuum solves Maxwell’s equations. They are linear equations, comprised of combinations of derivatives that mathematically modify the electric and magnetic fields so that the result equals zero. Every valid wave in a vacuum gives zero when entered into Maxwell’s equations. Therefore a valid wave, 0, plus another valid wave, 0, gives yet another valid wave, 0! This third valid wave is called a superposition, or sum, of the original two waves. The characteristics of the superposition depend on the amplitudes and phase shifts of the original two waves. If the original two waves have different phase shifts, instead of oscillating up and down or side to side, the superposition will oscillate in an ellipse. If the two original waves have the same amplitude and are ninety degrees out of phase with each other, the superposition will oscillate circularly, in what is known as circular polarization (4).
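The circular case can be verified numerically. Below, two equal-amplitude components ninety degrees out of phase are summed as a vector; the tip of that vector stays at a constant distance from the origin, i.e. it traces a circle (the frequency is an illustrative assumption):

```python
import math

def superposed_field(t, f=1.0):
    """Two equal-amplitude components, ninety degrees out of phase.
    The frequency f is an illustrative default."""
    theta = math.radians(360.0 * f * t)
    horizontal = math.cos(theta)
    vertical = math.cos(theta + math.pi / 2)  # ninety degrees ahead
    return horizontal, vertical

# The total field vector keeps a constant length -- it sweeps out a circle:
for t in (0.0, 0.1, 0.25, 0.4):
    x, y = superposed_field(t)
    print(round(math.hypot(x, y), 6))  # 1.0 every time
```

With unequal amplitudes or a phase difference other than ninety degrees, the same loop would print varying lengths, tracing the ellipse described above.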

            Every wave can be described as a superposition of two oscillating vectors, one on the vertical axis and the other on the horizontal axis, although all waves could also be described with respect to perpendicular diagonal axes. The orientation of the perpendicular axes you choose is known as your choice of basis. Depending on the application, it might be more convenient to choose one basis over another (4).

            What has been presented above is the classical understanding, and most of it translates directly to the quantum world. Classically, the energy of a wave is considered to be the square of its amplitude. In theory, this should allow a continuum of possible energy levels for waves. That seems intuitive, but physicists now know that the energy of a wave is always a discrete multiple of a smallest possible unit. Imagine a staircase. Each step represents an energy level. Every wave is on a specific step and nowhere in between. The height of each step represents the smallest possible increase or decrease in the energy of the wave; this smallest amount equals Planck’s constant, h, times the wave’s frequency. Every wave has an energy equal to an integer multiple of h times its frequency. This means that there is a minimum energy level that a light wave can have, and if it somehow loses energy at that level, it ceases to exist (4).

            Energy, then, comes in discrete packets of different sizes, but there is a minimum size, and all other sizes are multiples of it. Different frequencies of light form only when the right size packet is available: when one arrives, the wave zooms off with it and a light ray is formed. The higher the energy of the packet, the higher the energy of the wave that carries it away. That is why the hotter a fire is, the brighter it is, and why its color shifts towards violet as the temperature increases. In normal ambient conditions, only little packets are available, so humans don’t get blinded or cooked to death! These little packets are called quanta, hence quantum mechanics (5).

            An electromagnetic wave at the minimum possible energy level is known as a photon. Photons themselves can still have different energies because of the other variable in the equation for the energy of an electromagnetic wave: frequency. A different photon can exist at any possible frequency (4).
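For example, the energy of a single photon follows from E = h·f. The frequencies below are approximate, illustrative values:

```python
h = 6.626e-34  # Planck's constant in joule-seconds

def photon_energy(frequency_hz):
    """Energy of one photon: E = h * f."""
    return h * frequency_hz

# Approximate, illustrative frequencies:
green = photon_energy(5.6e14)  # green light
radio = photon_energy(1.0e8)   # an FM radio wave
print(f"green photon: {green:.2e} J")  # ~3.7e-19 J
print(f"radio photon: {radio:.2e} J")  # ~6.6e-26 J
```

A green photon carries millions of times more energy than a radio photon, even though both sit on the lowest “step” for their respective frequencies.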

            In quantum mechanics, describing an electromagnetic wave as the superposition of two perpendicular oscillating vectors requires a new interpretation. On the classical understanding, a photon could be a superposition of two components each carrying a fraction of its energy, but the quantum understanding knows this to be impossible, because a photon carries the minimum possible energy at its frequency. Classically, the squares of the moduli of the two vector components of a wave tell what percentage of that wave’s energy can be found along a given direction. In the quantum understanding, however, a photon must have all of its energy in one direction, because its energy cannot be subdivided. So the amplitudes of the component vectors instead give the probability that the photon will be found polarized in a certain direction. If the probability is fifty percent for a given direction, then half of the time a photon of that frequency will be found in that direction, and half of the time it will not (4).

            Now the reader is prepared to delve into Bell’s theorem. Demonstrating Bell’s Theorem involves the use of what are called polarizing filters. A polarizing filter either blocks light from passing through it or polarizes it in one direction, determined by the filter’s orientation. What follows is a description of an easy demonstration of Bell’s Theorem, not the actual experiment, which is quite complex (5).

            Imagine one vertically polarizing filter. All photons oscillating in the vertical direction will be let through one hundred percent of the time. Photons oscillating at a forty-five degree angle from vertical will pass through only fifty percent of the time. Now imagine a second filter placed on top of the first. As the second filter is rotated away from vertical towards ninety degrees, less and less light is let through, until at ninety degrees no light passes through both filters, provided the filters are perfect. This is because all light that passes through the first filter is vertically polarized, meaning that it has zero percent chance of passing through a horizontally polarizing filter (5).

            Now imagine that the second filter is angled at ninety degrees, vertical being zero degrees. Add a third filter in between at forty-five degrees, and more light passes through than before! Twenty-five percent of the light passing through the first filter, to be exact. This is because the filter in the middle, angled at forty-five degrees, lets fifty percent of the vertically polarized light through, and that fifty percent becomes polarized at forty-five degrees. Fifty percent of the light polarized at forty-five degrees then passes through the final filter at ninety degrees. This seems natural and intuitive (3).
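This filter arithmetic can be sketched with Malus’s law, which gives the pass probability as the squared cosine of the angle between the light’s polarization and the filter (assuming ideal, lossless filters):

```python
import math

def transmitted_fraction(angles_deg):
    """Fraction of vertically polarized light that survives a sequence of
    ideal polarizing filters, via Malus's law: pass chance = cos^2 of the
    angle between the light's polarization and the filter."""
    fraction, polarization = 1.0, 0.0  # light starts vertically polarized
    for angle in angles_deg:
        fraction *= math.cos(math.radians(angle - polarization)) ** 2
        polarization = angle  # survivors are repolarized along the filter
    return fraction

print(round(transmitted_fraction([90]), 6))      # 0.0  -- crossed filters
print(round(transmitted_fraction([45, 90]), 6))  # 0.25 -- middle filter added
```

Inserting the forty-five degree filter raises the transmission from zero to a quarter, exactly as described above.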

            Many people have speculated that quantum mechanics isn’t intrinsically probabilistic, as suggested by how photons pass through a polarizing filter, but that there are some “hidden variables” that man has yet to grasp: a fundamental state of each photon that actually determines whether it will pass through a filter or not, rather than probability (3).

            Bell’s Theorem rests on what happens when a filter at 22.5 degrees, B, is placed on top of a filter at zero degrees, A, and below a third filter at forty-five degrees, C. Based on the previous demonstrations, it would be reasonable to expect seventy-five percent of the vertically polarized light to pass through B, since 22.5 degrees falls halfway between zero degrees, where all of it passes, and forty-five degrees, where fifty percent passes. Actually, only fifteen percent gets blocked at B, and another fifteen percent of the remainder at C (3)!

            To disprove hidden variable theory, first it is necessary to assume it is true. Imagine 100 photons, each carrying a mysterious hidden variable that answers the following questions: Would this photon pass through A? Through B? Through C? Assume all photons start out vertically polarized, so all pass through A. Fifteen percent get blocked at B, so eighty-five make it through. Another small amount, about fifteen percent of the eighty-five, gets blocked at C. But under hidden variables, any photon that would be blocked going straight from A to C must either be blocked at B or pass B and be blocked at C. That means the fifty photons blocked when B is absent could number at most about fifteen plus thirteen, around twenty-eight, which is a contradiction. So experiments contradict hidden variable theory (3).
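The numbers in this argument can be reproduced with the same cos² rule (again assuming ideal filters): the blocking for the direct zero-to-forty-five-degree jump exceeds the hidden-variable bound obtained by summing the two intermediate blockings.

```python
import math

def blocked_fraction(theta_deg):
    """Fraction of photons blocked when their polarization differs from
    the filter's axis by theta degrees (Malus's law)."""
    return 1.0 - math.cos(math.radians(theta_deg)) ** 2

at_b = blocked_fraction(22.5)    # blocked going 0 -> 22.5 degrees
at_c = blocked_fraction(22.5)    # blocked going 22.5 -> 45 degrees
direct = blocked_fraction(45.0)  # blocked going 0 -> 45 degrees directly

print(round(at_b, 3), round(at_c, 3))  # 0.146 each -- the "fifteen percent"
print(round(direct, 3))                # 0.5
# A local hidden-variable account requires direct <= at_b + at_c,
# but 0.5 > 0.146 + 0.146: the quantum prediction violates the bound.
```

This inequality between mismatch rates is a simple form of a Bell inequality, and it is the quantity that real experiments measure.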

            Except, there’s a loophole: if passing through one filter affects how a photon will interact with future filters, then the phenomenon is easily explainable (3).

            But there is a way to circumvent that loophole, based on the Einstein-Podolsky-Rosen, or EPR, thought experiment, published on May 15, 1935 in Physical Review. It uses entangled pairs of photons to measure the probabilities for different combinations of filters A, B, and C on separated photons, so that no single photon ever has to pass through one filter after another. Basically, it shows that it is impossible for reality to be locally real (6).

            However, experiments up until 2015 couldn’t prove this unequivocally, due to flaws in the equipment used and in the experimental setup. But in 2015, the result became unequivocal with a loophole-free experiment (3).