Modern Physics FALL 2021 FIRST EDITION
Modern Research of the Last 20 Years Special Edition

Contents:
Gravitational Lensing and the Dark Mystery it Reveals by Tyler Karasinski
Quantum Entanglement and Teleportation by Ryan Carvel
A Glance into the Quantum Nature of Photosynthesis by Cora Ferguson
What’s the Outcome of a Major Astronomical Event? A Major Discovery by Logan Ritchie
New Groundbreaking Experiment Takes a Step Towards Quantum Gravity by Austin Riesenberger
Hot Research and Even Hotter Superconductors by Shawn Farrell
Can Quantum Tunneling Exceed the Speed of Light? by Minh N. Nguyen
Dark Energy: The Motor of Universal Expansion by Alexander Bjerser
Simulating the Universe by Cole Ames
Relativity: The Core of GPS by Max Arnone
Gravitational Lensing and the Dark Mystery it Reveals
By Tyler Karasinski

The ordinary matter that one may examine directly has been estimated to comprise less than five percent of what truly exists in the vast expanse of outer space. Hidden from direct observation, dark matter, a little-understood phenomenon, makes up much of the remainder of the universe in which humankind resides. Invoked within the framework of general relativity to explain astronomical observations such as galaxy rotation speeds and the dynamics of galaxy clusters,1 dark matter is near-impossible to map throughout the universe due to its invisible nature. Gravitational lensing – an optical phenomenon in which light is distorted by the presence of large amounts of mass between the light’s source and the observer – may aid in overcoming this observational barrier. It is through gravitational lensing that dark matter may be indirectly analyzed and its position identified.
1. e.g., Zwicky 1933; Dodelson 2003; Salucci 2019
Gravitational lensing is a distortion of light created when light rays travelling from a distant source pass through a strong gravitational field produced by a large concentration of mass. This bends the light before it can reach an observer on Earth, creating a stretched appearance of the celestial bodies within the gravitational field, somewhat like the distortion of objects viewed through the edge of a magnifying glass or a plastic water bottle. Initially predicted by Albert Einstein in 1915 and verified by the lensing observed during the solar eclipse of 1919, this consequence of distorted spacetime is often used by astronomers to analyze distorted galaxies and light sources and, in this instance, the possible density distribution of elusive dark matter.
This figure shows the path of light before detection. The background galaxy acts as a light source. The beams from this source then pass through the gravitational field of the foreground galaxy, becoming bent and concentrated for the Earth-bound observer, producing the inset image above. This is the basis for the formation of an Einstein ring, which is used in this experiment to examine the possibility of dark matter being present in the analyzed area.
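To get a feel for the angular scale of such a ring, the Einstein radius of an idealized point-mass lens can be estimated from the lens mass and the distances involved. The sketch below is a minimal illustration, not a calculation from the study; the lens mass and distances are assumed placeholder values.

```python
import math

# Minimal sketch: angular Einstein radius for an idealized point-mass lens.
# All input values below are illustrative assumptions, not from the study.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
MPC = 3.086e22     # one megaparsec, m

def einstein_radius(mass_kg, d_lens, d_source):
    """theta_E = sqrt((4GM/c^2) * (D_ls / (D_l * D_s))), in radians."""
    d_ls = d_source - d_lens
    return math.sqrt(4 * G * mass_kg / c**2 * d_ls / (d_lens * d_source))

# Hypothetical cluster-scale lens: 1e14 solar masses, lens at 500 Mpc,
# background source at 1500 Mpc.
theta = einstein_radius(1e14 * M_SUN, 500 * MPC, 1500 * MPC)
print(f"Einstein radius ~ {math.degrees(theta) * 3600:.0f} arcseconds")
```

For cluster-scale masses this comes out to tens of arcseconds, which is why Einstein rings around clusters are resolvable in Hubble Space Telescope imaging.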
Because Einstein’s theory of general relativity requires the presence of dark matter to explain the aforementioned astronomical phenomena such as galaxy rotation speeds, it has recently come under scrutiny as evidence has failed to emerge for the favored heavy-particle interpretation of dark matter,2 bringing dark matter’s theorized existence into question. According to researchers Mandy C. Chen, Tom Broadhurst, Jeremy Lim, Sandor M. Molnar, Jose M. Diego, Masamune Oguri, and Lilian L. Lee, “this warrants serious consideration of alternate gravity theories including modified Newtonian dynamics (MOND),3 scale-invariant gravity,4 and emergent gravity.”5 By modelling a lensed image under each of these gravitational models, the researchers can compare the produced replication to the observed image and determine whether dark matter is present as described by general relativity. If it is present and correctly distributed, then Einstein’s longstanding theory is supported. If it is not, then the astrophysics favored by the vast majority today must be rewritten. In light of this hotly contested debate, the researchers theorize that, in the absence of dark matter, an Einstein Ring – a near-circular distortion of light around a galaxy due to its gravitational pull – must be produced solely by the gravitational pull of the visible matter distribution that can be directly imaged and observed. If this gravitational calculation is deemed insufficient to produce such an image, they must conclude that dark matter is present: although it cannot be seen by the researchers, dark matter’s gravitational interaction with the light must work constructively with that of the visible matter to produce the observed Einstein Ring.

2. This interpretation holds that dark matter is composed of extremely high-mass particles, which would explain its strong gravitational presence.
3. Modified Newtonian dynamics (MOND) is an alternative gravitational theory which modifies Newton’s interpretation of gravity and inertia to describe astronomical gravitational interactions.
4. Scale-invariant gravity is an alternative gravity theory which describes gravity through a universal scaling behavior based on the matter distribution throughout the universe.
5. Chen et al. 2020
Left is pictured the Einstein ring around the center of galaxy cluster A3827. This image is composed by compiling exposures taken through different color filters and subtracting the IMFIT galaxy light model6 from the Hubble Space Telescope imaging.
The Einstein Ring surrounding the center of galaxy cluster A3827 provides a detailed and well-defined subject for these observations and calculations. Using Hubble Space Telescope imaging,7 the galaxy cluster is photographed numerous times, and the photographs are compiled together to best represent the core of the cluster. After determining the magnitudes of the identified galaxies G1 – G5 within this compiled image, the researchers subtract best-fit models of the foreground objects’ light to produce a clearer image of the lensed cluster by reducing foreground contamination. These best-fit models are derived in each individual imaging filter to create the above visual representation of the Einstein ring.
6. Erwin 2015
7. Massey et al. 2015, 2018
The researchers must then estimate the masses of these celestial objects so they may determine whether their gravitational interactions are sufficient for lensing without the presence of the mysterious dark matter. To do this, they employ the stellar mass-to-light ratio (M/L) using the Yggdrasil model.8 The stellar mass-to-light ratio is the ratio of mass, in units of solar mass, to overall luminosity. This estimate, based on a determined stellar population for each galaxy, allows an approximation of the total visible mass in the analyzed galaxy. It is important to note that these values are estimates and differ slightly from those of previous studies on similar topics, yet this will not have a lasting effect on the experimental analysis.9 With mass estimates determined, the calculated data is analyzed in the context of dark matter. It is here that the presence of dark matter may or may not be revealed and the theory of general relativity may or may not be supported, establishing gravitational lensing as a new tool in the astronomical toolkit.
8. The mass-to-light ratio is the quotient of a mass and its luminosity. These values are typically reported relative to the baseline mass-to-light ratio of the Sun (ϒ☉ = 5133 kg/W). Yggdrasil model: Zackrisson et al. 2011.
9. Massey et al. 2018
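As a rough illustration of how a mass-to-light ratio converts an observed luminosity into a visible-mass estimate, consider the toy calculation below; the galaxy luminosity and M/L value are assumed purely for illustration and are not taken from the study.

```python
# Toy sketch: stellar mass from luminosity via a mass-to-light ratio.
# Both input values are assumed for illustration only.
def stellar_mass(luminosity_Lsun, m_over_l):
    """Return stellar mass in solar masses, given luminosity in solar
    luminosities and a mass-to-light ratio in solar units."""
    return m_over_l * luminosity_Lsun

# Hypothetical galaxy: 5e10 solar luminosities with an assumed M/L of 3.
print(f"{stellar_mass(5e10, 3.0):.1e} solar masses")  # -> 1.5e+11
```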
The researchers examine several gravitational models in comparison with the compiled image to see which most closely replicates the actual Einstein ring image. These modeled reproductions are pictured in the figure below, with Models 1, 3, 4, 5, and 6 all assuming some mass contribution from dark matter. Model 2, in which the researchers reconstruct A3827’s Einstein Ring using only the gravitational properties of observed visible matter, provides the most interesting image for debate. This model assumes a total absence of dark matter. If this elusive substance is truly absent, the researchers expect to see an image identical to the compiled figure from the Hubble Space Telescope. This is not what occurs. The four-beaded, near-complete Ring encompassing the galaxy cluster becomes stretched and disfigured, losing one bead altogether. This differs from the other, dark matter-based gravitational models, which all reproduce the observed image with relative accuracy, a result supporting general relativity’s requirement for dark matter.
This figure juxtaposes the compiled image of the analyzed Einstein Ring on the left, with each gravitational model’s reconstruction on the right for comparison. The most notable model, Model 2, assumes a total absence of dark matter, and as such differs the most from the collected imaging data.
The study then seeks to further support the claim that dark matter must supplement the stellar mass to create the observed gravitational lensing by stating that the mass must also be properly geometrically distributed throughout the galaxy cluster. The matter distribution of the same six gravitational models may be qualitatively examined in comparison to the matter distribution of the observed Einstein ring. Because a gravitational model may have the correct total stellar mass but the incorrect mass distribution, this extra consideration aids the research by invalidating any model which incorrectly distributes its predicted mass.
The left figure displays the matter density distribution of each analyzed gravitational model. Model 3 is notably closest to the actual matter distribution observed in the produced Einstein ring.
The notable inaccuracies of Model 2 indicate that the observed gravitational lensing may be the effect of a combination of visible and dark matter and their gravitational interactions. Humankind may now have a definable process for identifying dark matter. Although unseen, dark matter may no longer go undetected, as there is now quantitative evidence of its presence. Dark matter must exist as required by Einstein’s pioneering theory of general relativity. Without this exciting astronomical form of matter, the lensed image of galaxy cluster A3827 would be inexplicable, an anomaly. Dark matter may be invisible to the observer, but its interactions with the surrounding galaxies and celestial bodies reveal just one of many unknowns regarding the substance. With support for general relativity, and gravitational lensing as a new tool for identifying the location and distribution of dark matter, astronomers may now begin to delve into this vast and unexplained mystery of the universe.
Primary Reference

Chen MC, Broadhurst T, Lim J, Molnar SM, Diego JM, Oguri M, Lee LL. 2020. Geometric Support for Dark Matter by an Unaligned Einstein Ring in Abell 3827. The Astrophysical Journal. 898(1):81. doi:10.3847/1538-4357/ab9ebc. [accessed 2021 Dec 3]. https://arxiv.org/abs/2007.05603.

Additional References

Kizer Whitt K. 2021 Sep 8. What is gravitational lensing? EarthSky. [accessed 2021 Dec 3]. https://earthsky.org/space/what-is-gravitational-lensing-einstein-ring/.
Hrala J. Dark Matter Might Be Made of Super-Heavy Particles Almost as Big as Human Cells. ScienceAlert. [accessed 2021 Dec 3]. https://www.sciencealert.com/dark-matter-might-actually-be-super-heavy-particles-the-size-of-human-cells.

Sanders RH, McGaugh SS. 2002. Modified Newtonian Dynamics as an Alternative to Dark Matter. Annual Review of Astronomy and Astrophysics. 40(1):263–317. doi:10.1146/annurev.astro.40.060401.093923.

Barbour J. 2003. Scale-Invariant Gravity: Particle Dynamics. Classical and Quantum Gravity. 20(8):1543–1570. doi:10.1088/0264-9381/20/8/310. [accessed 2021 Dec 3]. https://arxiv.org/abs/gr-qc/0211021.
Quantum Entanglement and Teleportation
By Ryan Carvel

Teleportation has been the subject of science fiction and pop culture for generations. Whether you’re watching Captain Kirk beam back to the USS Enterprise or you’re travelling to the Nether in Minecraft, these examples imply the instantaneous relocation of an object over an arbitrarily long distance. How amazing would it be if we could harness this tool? Well, we can, and we have… at least on a quantum level. Quantum teleportation was first experimentally verified in 1997, but as of 2016 multiple countries had already developed long-range communication networks using it. In 2017 China commercialized its first teleportation-based quantum communication network, and in 2019 IBM revealed a quantum computing system relying on teleportation.1 Quantum teleportation appears ever more frequently in scientific practice, and many researchers propose its use in everyday applications. One day, quantum teleportation may be used as an encryption method in every personal device across the globe. Therefore, one can see why it is valuable to learn about the processes involved in successful teleportation of quantum particles. In this article, I will walk you through the first experimental verification of quantum teleportation. In 1997, Bouwmeester et al.2 successfully teleported photons in a lab. In order to understand the steps they took, we must first start with photons themselves. These quantum particles have a characteristic known as polarization, which is the direction (expressed as a rotational/slant angle) in which their field oscillates as they propagate through space.
Figure 1: Slant Polarization (+45°/-45°)3
Like any other quantum particle, photons are subject to a phenomenon known as quantum entanglement. Quantum entanglement is the very principle that permits quantum teleportation to be possible. Although quantum entanglement sounds complicated, it truly is not. Quantum entanglement means that the physical state of one quantum particle is dependent on the physical state of another quantum particle. Let’s say we have a photon source that emits two different photons with opposite polarization angles, one polarized at +45° and the other at -45°. Before observing either of the photons, we don’t know either of their polarizations; all we know is that they are opposite. This means that before we measure them, both photons are in a physical superposition of being polarized at both +45° and -45°. Now, we choose to measure one of the photons and we find it to be polarized at -45°. After this measurement, we instantaneously know that the other photon is polarized at +45° because it is opposite to the first. The polarization angle of one photon depends on the polarization angle of the other. This means that the two photons are entangled with each other. Additionally, it is important to note that once the photons are entangled, the distance between them has no impact on their entanglement, and their polarizations will always remain opposite. Also of importance, the act of physically measuring one of the photons forces both photons to fall out of superposition, providing us with the exact polarization angle of both photons. Only through quantum entanglement can we begin to understand the theory that drives quantum teleportation. We will start with Alice and Bob (see Figure 2). Alice has photon 1, polarized at an unknown angle, that she wants to teleport to Bob. Step 1 of this process is for Alice to gather two additional photons that are entangled with each other, photons 2 and 3. This means that the polarization angles of photons 2 and 3 are opposite. Step 2 is for Alice to send photon 3 to Bob without measuring any of the photons. Step 3 is for Alice to perform an action that entangles photons 1 and 2, forcing their polarizations to be opposite. At this point, the unknown polarization angle of photon 3 has been forced into the initial polarization angle of photon 1. Effectively, photon 1 has been teleported to photon 3.
Figure 2: Theoretical Teleportation
Here’s how this works: initially, photon 2 was entangled with photon 3. This means that photon 3’s polarization angle is opposite to that of photon 2. In step 3, Alice entangles photons 1 and 2. This means that the polarization angle of 1 is opposite to the polarization angle of 2. If photons 1 and 3 both have polarization angles opposite to the polarization angle of 2, then photons 1 and 3 must have the same polarization angle. Therefore, photon 3 has been forced into the initial polarization state of photon 1 and has effectively been teleported. It is called teleportation because when photons 1 and 2 are entangled, the initial polarization angle of 1 is destroyed and transferred to photon 3. This is the exact theory applied in the experiment performed by Bouwmeester et al.2 Moving to the experiment itself, we must first take a deeper look at quantum entanglement and the multiple methods available to create entangled particles. Bouwmeester et al.2 used two separate methods. The first is known as parametric down-conversion using an Einstein–Podolsky–Rosen (EPR) source. In an EPR source, a single photon hits a nonlinear crystal and decays spontaneously into two entangled photons polarized at exactly opposite angles.
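The "opposite of an opposite" bookkeeping in that argument can be written out explicitly. The sketch below is not a physical simulation of entanglement, only the classical logic of the polarization relationships described above, with hypothetical ±45° states.

```python
import random

# Toy bookkeeping sketch (not a quantum simulation): if photon 2 ends up
# opposite to photon 1, and photon 3 is opposite to photon 2, then
# photon 3 must match photon 1's original state.
def opposite(angle):
    return -angle  # +45 deg <-> -45 deg

for trial in range(5):
    photon1 = random.choice([+45, -45])   # Alice's input, unknown to Bob
    photon2 = opposite(photon1)           # step 3: photons 1 and 2 entangled
    photon3 = opposite(photon2)           # photons 2 and 3 were entangled
    assert photon3 == photon1             # photon 3 carries photon 1's state
    print(f"trial {trial}: photon 1 = {photon1:+}, photon 3 = {photon3:+}")
```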
The second method, called a joint Bell-state measurement, is a very sensitive process and quite different from the first. It is a type of measurement performed on two photons that randomly projects them into one of four possible states known as Bell states. A Bell state is a state of entanglement and superposition of the two photons. In this experiment, only one specific Bell state properly entangles the two photons to permit teleportation; therefore, the method only works 25% of the time. Additionally, the method is so sensitive that to achieve proper entanglement, the two target photons must be in the correct place at the correct time (called spatial and temporal overlap). If there is even a small delay in their temporal overlap, the probability of entanglement decreases significantly. These two methods of entanglement are the driving force behind the experiment.
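Because only one of the four Bell states signals successful entanglement here, the expected success rate can be checked with a quick simulation; the uniform one-in-four projection is the only assumption.

```python
import random

# Sketch: each run projects the photon pair into one of four Bell states
# at random; only one of them (state 0, say) permits teleportation here.
trials = 100_000
successes = sum(random.randrange(4) == 0 for _ in range(trials))
print(f"success rate ~ {successes / trials:.3f}")  # ~0.250
```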
Figure 3: Theoretical principles of Bouwmeester et al.2
Figure 3 depicts the use of an EPR source to create entangled photons 2 and 3. It also depicts photons 1 and 2 being sent to a joint Bell-state measurement (BSM) device. It is within this device that an attempt is made to entangle photons 1 and 2. Only if the joint Bell-state measurement succeeds in entangling photons 1 and 2 will photon 1 be teleported to Bob. The above principles allow us to understand the elaborate experimental setup used by Bouwmeester et al.2
Figure 4: Experimental Setup of Bouwmeester et al.2,4
Figure 4 depicts a UV pulse that is sent to the EPR source, where it decays into entangled photons 2 and 3, which are sent to Alice and Bob respectively. The UV pulse also strikes a mirror and returns through the crystal to decay into another set of entangled photons, 1 and 4. Photon 1 is sent to Alice, and photon 4 hits a detector to signal that the experiment has begun. When photons 1 and 2 arrive at Alice, they enter a BSM device that will attempt to entangle them. It is essential that photons 1 and 2 enter the BSM device with the correct temporal overlap; therefore, the individuals running the experiment can adjust the delay between the photons by adjusting the mirror in the EPR source. If a coincidence is recorded between detectors f1 and f2, photons 1 and 2 have been projected into the correct Bell state and are successfully entangled. As previously explained, once photons 1 and 2 are entangled, photon 3 is forced into the initial polarization state of photon 1 and teleportation is successful. To evaluate the results, photon 3 in Figure 4 hits a polarizing beam splitter upon arriving at Bob, where detector d2 indicates it is polarized at +45° and d1 indicates it is polarized at -45°. If photon 1 is initially polarized at +45°, a three-fold (d2f1f2) coincidence, together with the absence of a three-fold (d1f1f2) coincidence, indicates that photon 3 is measured at +45° and has therefore undergone teleportation.
Figure 5: Theoretical prediction for the three-fold coincidence probability between the two Bell-state detectors (f1, f2) and the detectors (d1, d2) analyzing the teleported polarization state (photon 1 polarized at +45°)4
Figure 5 shows a constant value for the three-fold (d2f1f2) coincidence rate, meaning photon 3 is measured at +45°. Figure 5 also shows a dip to zero in the three-fold (d1f1f2) coincidence rate when the temporal delay is adjusted to zero, meaning photon 3 is not being detected at -45°. This dip to zero is a strong indication that teleportation has occurred, and the shaded region shows where. Also, we see that the maximum probability is 0.25, which means teleportation works 25% of the time, as noted previously.
Now let’s look at what Bouwmeester et al. actually found when doing this experiment.
Figure 6: Experimental Results of Bouwmeester et al.2
Bouwmeester et al. used the experimental methods previously described for two different trials. In the first trial, they teleported photons polarized at +45°, and in the second trial they teleported photons polarized at +90°. They recorded the three-fold coincidence rate for both trials in Figure 6. Comparing these results to the theoretical predictions in Figure 5, we see the indicative dip to zero present for both trials. This demonstrates that Bouwmeester et al. successfully teleported photons polarized in both directions, +45° and +90°. Bouwmeester et al.2 were the first group of researchers to experimentally achieve quantum teleportation, following the theoretical proposal of 1993.1 The results of this experiment have altered the frontier of physics research, and applications of quantum teleportation continue to grow. It is just a matter of time before it applies directly to you. Taking the time to understand the methods involved in achieving something like this becomes valuable once you see the big picture. Scientific research continues to create new possibilities, and 24 years later, quantum teleportation remains a mind-bending culmination of scientific effort.
References
1. Liu T. The applications and challenges of quantum teleportation. Journal of Physics: Conference Series. 2020;1634(1):012089. doi:10.1088/1742-6596/1634/1/012089
2. Bouwmeester D, Pan J-W, Mattle K, Eibl M, Weinfurter H, Zeilinger A. Experimental quantum teleportation. Nature. 1997;390(6660):575–579. doi:10.1038/37539
3. Antenna Polarization Basics. Mimosa Networks. 2021. https://mimosa.co/white-papers/antenna-polarization
4. Will S. Quantum Teleportation. Seminar on Quantum Optics, Summer semester 2004. 2004.
A Glance into the Quantum Nature of Photosynthesis
By Cora Ferguson

Perhaps one of the most intriguing applications of quantum biology lies in photosynthesis.
Streams of light from the sun hit chlorophyll and other light-absorbing molecules within leaves and, using carbon dioxide and water, plants convert this energy into simple sugars. Upon first glance, it seems like this system is inherently nonspecific: there appears to be random distribution of light, random detection of light by photoreceptor molecules, and random electron excitation chains. With all this randomness, how are plants able to take advantage of light energy to generate the fuel they need to survive? At the root of all this, quantum biology is creating order at a level that we are only beginning to understand.1
Figure 1: Diagram of Chloroplast. The quantum interactions involved in photosynthesis are found in the thylakoid membrane.10
The key behind photosynthesis lies in a plant’s ability to convert energy from sunlight into glucose, which can then be used as fuel by the plant. To do this, plant cells use specialized organelles called chloroplasts that contain hundreds of light-sensitive photoreceptors in the thylakoid membrane.5,6 The thylakoid membrane contains antenna proteins and reaction centers, which are responsible for the light reactions involved in photosynthesis.5 This process is driven by energy from the sun, which takes the form of electromagnetic radiation. Electromagnetic radiation consists of waves of oscillating electric and magnetic fields that travel through space.8,9 These waves have a certain amount of energy that is proportional to their frequency. This can be modeled mathematically as:

E_photon = h f

Equation 1: Energy of a photon is dependent on frequency9
The h represents Planck’s constant, which relates the energy carried by a photon, a particle of light, to its frequency. The f represents the frequency of the wave, which depends on the speed and wavelength of the light.9 All waves within the electromagnetic spectrum travel at the speed of light (c), which is 3.0 × 10^8 m/s. Mathematically, this relationship is modeled as:

f = c / λ

Equation 2: Frequency is dependent on the velocity and wavelength of light9
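Plugging representative wavelengths into Equations 1 and 2 gives a feel for the numbers; the blue and red wavelengths below are typical textbook values, assumed for illustration.

```python
# Worked numbers for Equations 1 and 2 with assumed wavelengths.
h = 6.626e-34  # Planck's constant, J*s
c = 3.0e8      # speed of light, m/s

for name, wavelength in [("blue", 450e-9), ("red", 680e-9)]:
    f = c / wavelength   # Equation 2: f = c / lambda
    E = h * f            # Equation 1: E = h * f
    print(f"{name} light: f = {f:.2e} Hz, E = {E:.2e} J")
```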
The frequency of light is inversely proportional to the wavelength of light (λ), so as wavelength increases, the frequency of the wave decreases, and the resulting energy of the photon also decreases.9 We are only able to see visible light, which, as you can see in Figure 2, is a very small part of the electromagnetic spectrum. However, the waves in the visible light spectrum are also the waves that drive photosynthesis.7,8
Figure 2: The electromagnetic spectrum. As humans we are only able to see wavelengths within the visible light spectrum but there are many types of waves with varying degrees of energy outside of those.11
Previously, it was thought that these light reactions happened following a theory similar to the photoelectric effect.7 The photoelectric effect was discovered in 1887 by Heinrich Hertz, and it describes the phenomenon that occurs when electrons are ejected from a metal plate in response to incident electromagnetic radiation.8,9
Figure 3: The photoelectric effect. The squiggly lines represent electromagnetic waves hitting a metal surface. As a result, electrons are ejected from the surface shown by the purple dots.12
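A short worked example of the photoelectric energy balance, KE = hf − φ, may help make this concrete. The work function φ below is a typical textbook value (roughly that of sodium) assumed for illustration; it is not a number from the article.

```python
# Sketch of the photoelectric energy balance: KE = h*f - phi.
h = 6.626e-34   # Planck's constant, J*s
c = 3.0e8       # speed of light, m/s
EV = 1.602e-19  # one electron-volt, J

phi = 2.3 * EV  # assumed work function, roughly that of sodium

for wavelength in (650e-9, 400e-9, 250e-9):
    ke = h * c / wavelength - phi  # kinetic energy of the ejected electron
    if ke > 0:
        print(f"{wavelength * 1e9:.0f} nm: electron ejected, KE = {ke / EV:.2f} eV")
    else:
        print(f"{wavelength * 1e9:.0f} nm: below threshold, no electron")
```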
The previously hypothesized pathway for photosynthesis (and the one found in my AP biology textbook) followed similar logic: electromagnetic waves from the sun hit antenna proteins in leaves, causing electrons to be ejected from receptor proteins and triggering an electron transfer cascade throughout the chlorophyll. This created a chain reaction between different chlorophyll molecules within the reaction core, and the ejected electrons provided the energy needed to convert light into chemical energy.6,7
Figure 4: The historically hypothesized schematic for photosynthesis. Photons energize electrons in photosystem II by hitting them and ejecting them from a surface – similar to how the photoelectric effect works. This causes a chain reaction of energy transfers that stretch over the electron transport chain and then to photosystem I where the process repeats itself.13
While this theory does explain some of the perceived randomness within the system, evidence is emerging that interactions at the quantum level play a role in creating order and maximizing efficiency.2 Under normal conditions, the amount of sunlight that is converted into biomass is relatively low and primarily dependent on the frequency of the light from the sun. The pigment molecules in chloroplasts are most sensitive to the visible light part of the electromagnetic spectrum, specifically red and blue light.5,6 In plant cells, light-capturing antenna proteins are attached to protein scaffolds within the thylakoid membrane. The total antenna system comprises the pigment and the reaction center. The pigment absorbs light, and the reaction center uses that light to fuel the electron excitation chain that powers the rest of the photosynthetic process.2
New advances in detector technology have allowed biologists and physicists to explore how this energy transfer works at a microscale and observe how plants use quantum mechanics to maximize efficiency. Before we dive into the details, we must first become familiar with the different properties of quantum mechanics that drive this system forward. Quantum mechanics is the study of how matter and energy behave at atomic and subatomic scales.9 In the case of photosynthesis, quantum biophysicists look at how electrons involved in the excitation chain interact with the pigment when light is present in the system. Because these interactions are so small, according to the Heisenberg uncertainty principle, we cannot know both the exact position and the exact momentum of these particles at the same time. This property is mathematically modeled as:

Δx Δp ≥ ħ/2

Equation 3: Heisenberg’s Uncertainty Principle9
Where Δx is the uncertainty in position, Δp is the uncertainty in momentum, and ħ is Planck’s constant (h) divided by 2π.9
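As a quick worked example of Equation 3 (with an assumed, molecule-scale confinement distance): confining an electron to about one nanometer forces a minimum momentum uncertainty on it.

```python
# Worked example of Equation 3 with an assumed 1 nm position uncertainty.
hbar = 1.055e-34  # reduced Planck's constant, J*s
m_e = 9.109e-31   # electron mass, kg

dx = 1e-9             # position uncertainty, ~1 nm (assumed)
dp = hbar / (2 * dx)  # minimum momentum uncertainty from Equation 3
print(f"dp >= {dp:.2e} kg m/s (a velocity spread of ~{dp / m_e:.0f} m/s)")
```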
Remember that Planck’s constant relates the energy a photon carries to its frequency. This uncertainty relationship follows from the wave nature of quantum particles described by the famous Schrödinger equation. Energy at this level is also quantized, meaning that the system can only have certain energies, broken into discrete levels, instead of following a continuous model.9
Figure 5: Continuous energy from classical mechanics is similar to the ramp on the left; you can have any energy at any given time. Quantized energy is visually represented by the stairs on the right. Instead of being able to have any energy at any given time, it is broken into different energy levels. To jump up to a new energy level, you must have a certain amount of energy to overcome the potential difference.14
Since this is the case, we can predict the probability of a particle existing at a certain location depending on the energy in the system. In photosynthesis, the frequency of the light absorbed by pigments determines the energy in the system (Equation 1). The energy absorbed is then passed to electrons linked to chlorophyll molecules to bring them to higher energy levels, and this triggers the electron excitation chain that powers photosynthesis.5
Figure 6: Chlorophyll in the excited state (where it has been energized by light through absorption) compared to its non-excited state. You can see how the electrons must overcome a specific potential difference (2 eV) in order to jump up to the next energy level.4
When electrons are excited, they can jump up to a higher energy state, and this can lead to delocalization, in which an electron is no longer bound to a single atom.9 Electrons are negatively charged, and they are attracted to the positively charged protons in the nucleus of an atom – like kitchen magnets sticking to your fridge. However, for electrons that are located further from the nucleus, this positive attraction is much weaker, and it takes less energy to strip them off the atom. We cannot define the position of delocalized electrons with certainty, but they can populate several bonds and atoms within a structure, and they occupy quantized energy levels.4,5 Their behavior follows the wave function, which is a mathematical model used to describe their movement. The wave function describes the probability of finding an electron at any given point in time and space, and we can use it to figure out the probability of finding states of quantum coherence within a system.9 Quantum coherence is when the wave functions of different molecules line up in phase and are added together (this is also referred to as superposition).
The term “being in phase” is another way to describe the probability of finding an electron at a given location and how waves interact. If two different wave functions are in phase, then their probabilities align, which increases the likelihood of finding an electron at a specific location. To be in phase, molecules must be vibrating together at the same frequency.6 This typically happens at extremely low temperatures in highly controlled labs, because even particles as small as those in our air can disrupt these states. Coherent systems have many unique properties that allow them to distribute charge over a broad area and behave in ways that matter typically does not. This is why it is so incredible that plants are able to achieve these states and take advantage of them!2
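The difference between the two cases can be seen numerically by adding two equal-frequency waves with different phase offsets; the 5 Hz frequency below is arbitrary and chosen only for illustration.

```python
import numpy as np

# Sketch: summing two identical waves, in phase vs. out of phase.
t = np.linspace(0.0, 1.0, 1000)
wave = np.sin(2 * np.pi * 5 * t)                          # 5 Hz wave

in_phase = wave + np.sin(2 * np.pi * 5 * t)               # phase offset 0
out_of_phase = wave + np.sin(2 * np.pi * 5 * t + np.pi)   # phase offset pi

print(f"in phase: peak amplitude = {in_phase.max():.2f}")                   # ~2.00
print(f"out of phase: peak amplitude = {np.abs(out_of_phase).max():.1e}")   # ~0
```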
Figure 7: Comparison of waves that are in phase vs out of phase. Coherent states consist of waves that are in phase and decoherent states consist of waves that are out of phase.4
In photosynthesis, this state causes excited chlorophyll molecules to vibrate at the same frequency – effectively turning them into a biological superconductor, which allows energy to be transferred very quickly throughout the system.3 This energy is then used to power the chemical reactions that convert carbon dioxide and water into simple sugars that the plant cells can metabolize as chemical energy.7 With incoherent states, it takes more energy to excite these chlorophyll molecules because there is not enough delocalization in the system and the molecules cannot effectively share that excited state. However, when all the chlorophyll molecules are vibrating at the same frequency and in the same phase, the energy of excited chlorophylls will be distributed throughout the chain of chlorophyll molecules.5 That energy can then be transferred over a greater area to power more chemical reactions, leading to a greater production of biomass.
Figure 8: Distribution of energy in Chlorophyll that is in an incoherent state (top) compared to a coherent state (bottom). In the incoherent state, the energy from the energized electrons can only be passed to a few of the chlorophyll molecules which limits the reach of the energy from the light. In the coherent state, the energy from the light is shared amongst all of the chlorophyll molecules that are vibrating at the same frequency and in the same phase which allows this energy to flow over a greater area.4
Researchers Elisabetta Collini and Cathy Wong of the University of Toronto used two-dimensional photon echo spectroscopy to look at coherence states within the chloroplasts of cryptophytes. These organisms are marine algae that are known to photosynthesize in low-light conditions.2 As stated above, coherent states typically only occur at extremely low temperatures under highly controlled conditions. In low-light conditions, researchers were skeptical of these organisms’ ability to photosynthesize because the intensity of the light could be too low to provide the energy needed to excite electrons. However, within the cryptophytes, Collini and Wong detected coherent oscillations for approximately 400 femtoseconds (4.0 × 10^-13 sec) after exposure to 520 nm (green) laser light. The intensity of light refers to the amount of light given off by a light source.9 The skeptics’ thinking parallels the photoelectric effect, where increasing the intensity of light hitting a plate ejects more electrons. The beauty of quantized energy states is that excitation depends on the frequency of the light – not its intensity (Equation 1). Therefore, the intensity of the light does not matter; if light is present, the frequency of the light will be enough to bring the electrons to the properly excited state, and the coherence of the chlorophyll molecules distributes this energy over a greater area, where it is then converted into chemical energy.
References
[1] Ball, P. 2011. Physics of life: The dawn of quantum biology. Nature [Internet]. [cited 2021 November 18]; 474(7351): 272-274. Available from: https://nature.com/articles/474272a.
[2] Collini, E., et al. 2010. Coherently wired light-harvesting in photosynthetic marine algae at ambient temperature. Nature [Internet]. [cited 2021 November 18]; 463(1): 644-647. Available from: http://dx.doi.org/10.1038/nature08811
[3] Hoyer, S. 2013. Understanding and manipulating electronic quantum coherence in photosynthetic light-harvesting. UC Berkeley. [cited 2021 November 18]; 1-30. Available from: https://escholarship.org/uc/item/7h5340b9.
[4] Keren, N., et al. 2018. Photosynthetic Energy Transfer at the Quantum/Classical Border. Trends in Plant Science [Internet]. [cited 2021 November 18]; 23(6): 497–506. Available from: https://doi.org/10.1016/j.tplants.2018.03.007.
[5] 2018. Light Absorption for Photosynthesis. Hyperphysics [Internet]. [cited 2021 November 11]. Available from: http://hyperphysics.phy-astr.gsu.edu/hbase/Biology/ligabs.html.
[6] Mazumder, S., et al. 2009. Review: biofunctionalized quantum dots in biology and medicine. Journal of Nanomaterials [Internet]. [cited 2021 November 18]; (2009): 38. Available from: https://hindawi.com/journals/jnm/2009/815734.
[7] Romero, E., Ramunas, A., Vladimir I. N., et al. 2014. Quantum coherence in photosynthesis for efficient solar-energy conversion. Nature Physics [Internet]. [cited 2021 November 18]; 10(9): 676-682. Available from: https://nature.com/articles/nphys3017.
[8] Romero, E., Vladimir I. N., et al. 2014. Quantum Effects in Biology: Excitation energy transfer and energy conversion in photosynthesis. [Internet]. [cited 2021 November 18]. Available from: https://cambridge.org/core/books/quantum-effects-in-biology/excitation-energy-transfer-and-energy-conversion-in-photosynthesis/41e4532a1ebf029fefea0f86e89bcbb6.
[9] Townsend, J. S. 2010. Quantum Physics: A fundamental approach to modern physics. [Print]. 1st ed. Sausalito (CA): University Science Books.
[10] Petruzzello, M. 2021. Chloroplast. Britannica [Internet]. [cited 2021 November 18]. Available from: https://www.britannica.com/science/chloroplast
[11] Armstrong, M. 2019. Shema Frequency: Understanding cycles. ArmstrongEconomics [Internet]. [cited 2021 November 18]. Available from: https://www.armstrongeconomics.com/tag/schema-frequency/
[12] 2021. Einstein’s Explanation of Photoelectric Effect. Byju’s [Internet]. [cited 2021 November 18]. Available from: https://byjus.com/physics/einsteins-explaination/
[13] Rahman, A., et al. 2017. Electron Transport System in Chloroplast. Slideshare [Internet]. [cited 2021 November 18]. Available from: https://www.slideshare.net/kayeenvadakkan/electron-transportsystem-in-chloroplast
[14] Chang, A. 2020. Planck’s Quantum Theory. Chemistry LibreTexts [Internet]. [cited 2021 November 18]. Available from: https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Map%3A_Physical_Chemistry_for_the_Biosciences_(Chang)/11%3A_Quantum_Me
What’s the Outcome of a Major Astronomical Event? A Major Discovery
By Logan Ritchie

Imagine two massive black holes deep in space colliding. What is created by such a collision? Well, Albert Einstein, the mastermind behind much of modern physics, formulated his theory of general relativity in 1916. From this theory, he deduced that gravitational waves could be a possibility, because waves were one of the solutions to his equations of general relativity.
Gravitational waves, according to the Oxford Dictionary, are energy-carrying waves propagating through a gravitational field, produced when a massive body is accelerated or otherwise disturbed. The predicted wave amplitudes were so small that Einstein did not have the facilities to collect data proving these waves exist. Therefore, his idea was heavily debated in the physics world for decades after his theory of general relativity was published. The discovery of gravitational waves proves Einstein’s calculations of over 100 years ago were correct and has allowed for a better understanding of how the universe works. Throughout this paper, I will explain how gravitational waves are detected, examine the results from the detection of these waves, simplify those results, and explain the importance of gravitational waves and what they mean for the physics world moving forward.

How Are Gravitational Waves Detected?

The problem with gravitational waves is being able to detect them. Throughout the mid-1900s, physicists understood that these waves are generated by massive astronomical events such as the merging of two black holes. The energy created by an event like this, as stated in the definition, sends a massive disturbance through space-time that causes a ripple of radiation permeating through space.
Figure 9: Scaled Representation of LIGO Koberlein, B. 2019. LIGO Will Squeeze Light To Overcome The Quantum Noise of Empty Space. Universe Today. [Internet]. [cited 2019 December 5]. Available from: https://www.universetoday.com/144272/ligo-will-squeezelight-to-overcome-the-quantum-noise-of-empty-space/
Now that physicists knew what they were looking for, different theories were developed on how to actually detect gravitational waves. The Laser Interferometer Gravitational-Wave Observatory (LIGO) has become the main instrument for detecting gravitational waves. LIGO is a remarkable feat of engineering and an extremely sensitive interferometer, capable of detecting the extremely small gravitational waves created by astronomical events. LIGO uses a scaled-up Michelson interferometer. As light moves from the light source through the beam splitter, it gets sent down two perpendicular 4-kilometer paths, where it reflects off of a mirror and rejoins at the beam splitter. If nothing disturbs the light along its path, the two split beams rejoin and destructively interfere, so no light is observed at the photodetector. If a disturbance occurs, one of the split beams travels a different distance than the other; therefore, when the light returns to the beam splitter and is sent to the photodetector, light will be detected because the destructive interference is no longer complete. Figure 9 shows a visual representation of the set-up of LIGO and the propagation of gravitational waves.
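An idealized sketch of this readout: treating LIGO as a textbook Michelson interferometer held on a dark fringe, the photodetector intensity depends only on the difference in effective arm length. The 1064 nm wavelength is that of the Nd:YAG lasers LIGO uses; the simple sin² response is an idealization, not LIGO's actual transfer function.

```python
import math

# Idealized Michelson readout held on a dark fringe: relative output
# intensity as a function of the arm-length difference delta_L.
WAVELENGTH = 1064e-9  # Nd:YAG laser wavelength, m

def detector_intensity(delta_L):
    """I / I_max = sin^2(2 * pi * delta_L / lambda), 0 on the dark fringe."""
    return math.sin(2 * math.pi * delta_L / WAVELENGTH) ** 2

for dL in (0.0, WAVELENGTH / 8, WAVELENGTH / 4):
    print(f"delta_L = {dL:.3e} m -> relative intensity = {detector_intensity(dL):.2f}")
```

A real gravitational wave changes the arm lengths by far less than a wavelength (of order 10^-18 m), which is why LIGO needs kilometer-scale arms and extraordinary noise suppression rather than this bare geometry.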
LIGO has two locations in the United States: one in Washington state and the other in Louisiana. Because the disturbance caused by gravitational waves is small, the two locations are used to check whether a detected disturbance was actually a gravitational wave. If only one LIGO location picks up the disturbance, then gravitational waves were not the reason for the detection of light at the photodetector. Gravitational waves flow through the Earth, so if both LIGO locations detect the same disturbance at virtually the same time, then gravitational waves are more than likely the reason for the disturbance.

Interpreting the Data

The first detection of gravitational waves by LIGO occurred on September 14, 2015. GW150914, the name given to the identified signal, passed through the two LIGO interferometers ten milliseconds apart, about the time it would take light to travel from one LIGO detector to the other. The event was recorded and reported within three minutes of the discovery. Physicists then used relativity to match up the two records and compare the size and magnitude of the waves. The event lasted for two-tenths of a second. Over this small amount of time, the amplitude increased, and the frequency increased from 35 Hz to 150 Hz. The event can be traced back to a region of the sky with 90% confidence.
Because of the increase in frequency, one theory for the source of the gravitational waves was two large masses spiraling into each other. This was determined by calculating the chirp mass from the two masses and the frequency. The chirp mass of a binary system (a two-mass system) gives the leading-order orbital evolution of the system as a result of energy loss from emitting gravitational waves. In simpler terms, the chirp mass helps figure out how far apart the two masses are and explains the frequency of the merger through the release of energy. It can be calculated from the mass of each object involved, or from the speed of light (c), the universal gravitational constant (G), the observed frequency of the waves, and the time derivative of that frequency. In terms of the two masses m1 and m2 of the binary system, the chirp mass M is:

M = (m1 m2)^(3/5) / (m1 + m2)^(1/5)
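As a check on the mass form of this equation, the sketch below plugs in the component masses later published for GW150914 (roughly 36 and 29 solar masses; these numbers come from the LIGO discovery paper, not from this article's text).

```python
# Chirp mass from the two component masses of a binary system.
def chirp_mass(m1, m2):
    """M_chirp = (m1 * m2)**(3/5) / (m1 + m2)**(1/5), any consistent units."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# GW150914's published component masses, in solar masses (approximate).
print(f"chirp mass ~ {chirp_mass(36.0, 29.0):.1f} solar masses")  # ~28
```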
Because of the 150 Hz frequency and the application of the chirp mass of the binary system, the source was determined to be the orbiting and collision of two black holes. Other proposed ideas were a two-neutron-star system or a black hole-neutron star system. The neutron star binary system could not be the source because neutron stars do not have the required mass to produce waves at this frequency, and the black hole-neutron star binary system would create a lower frequency than 150 Hz at merger. All of these results are possible due to the laser interferometry on which LIGO is based. As the gravitational waves pass through the arms of LIGO, they stretch and squeeze the two arms of the interferometer. The stretching of one arm and squeezing of the other is the disturbance needed to shift the path length of the laser and form an interference pattern that produces light at the photodetector instead of no light. It is from the amount of this “interference” that the disturbance creates that physicists are able to determine the amplitude and frequency of the gravitational waves.

Simplifying the Results

Being able to understand the math behind the chirp mass and the relativity components in calculating the results from this detection is not as important as conceptually understanding it. The gravitational waves detected at LIGO originated over a billion light years away from Earth. This means the event actually happened that many years ago, since that is the amount of time it took the waves to reach Earth traveling at the speed of light. The waves bend space-time as they propagate from the source, so when they passed through Earth, they bent space ever so slightly. This bending is where LIGO comes into play, as the lasers are sensitive enough to detect it. It is from these detections that physicists are able to determine the frequency and amplitude of the gravitational waves, the location of the source, and the source itself. Each LIGO facility experienced the same disturbance, and, using relativity, the disturbance was determined to be caused by the same source, signifying the first detection of gravitational waves. This was a major breakthrough in the world of physics, since it proved Einstein’s theory, formulated nearly 100 years earlier, to be correct.

What does this mean for the future?

So, now what? Why is the detection of gravitational waves so important in the physics world? Gravitational waves can be used to look back into the beginnings of the universe. One study that followed LIGO’s detection, published in 2020, examined the planned space-based Laser Interferometer Space Antenna (LISA). This study looked at “Gravitational waves (GWs) produced by sound waves in the primordial plasma during a strong first-order phase transition in the early Universe.” LISA would be the first detector aiming to observe these waves in the milli-Hertz range.
Being able to harness the ability to understand where gravitational waves can be detected will drive the understanding of wave physics and the age-old question of how and when the universe came to be. Over 100 years have passed since Albert Einstein formulated the possibility of gravitational waves. His theories of special and general relativity continue to stand the test of time. As technology continues to advance, the physics world can be confident that his ideas and contributions will persist. Without Einstein, LIGO, LISA, and many other physics applications might not be used for what they are today, and major astronomical events could go undetected for centuries.
References
Abbott, B.P., et al. 2016. Observation of Gravitational Waves from a Binary Black Hole Merger. Physical Review Letters. [Internet]. [cited 2016 February 11]; 116(6): 061102. Available from: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.061102
Ossowski, M. 2021. Chirp mass-distance distributions of the sources of gravitational waves. [Internet]. [cited 2021 January 26]; 1. Available from: https://arxiv.org/pdf/2105.10516.pdf
Schmitz, K. 2020. LISA Sensitivity to Gravitational Waves from Sound Waves. Symmetry. [Internet]. [cited 2020 September 9]; 12(9): 1477. Available from: https://doi.org/10.3390/sym12091477
New Groundbreaking Experiment Takes a Step Towards Quantum Gravity
By Austin Riesenberger

With the development of modern technology, new experiments which we previously thought would be impossible can be done to find out more about the quantum world. The implications of this are endless and unimaginable. The development of quantum mechanics in the mid-20th century is responsible for some of the largest technological advances since, like the phone in your pocket and the computer on your desk. According to physicist Chad Orzel, “Depending on the wave nature of electrons, and because we understand that wave nature, we can manipulate the electrical properties of silicon”.1 This is how we created the transistor, which is the fundamental building block of a computer. The math and physics of quantum mechanics that were used to understand the wave nature of electrons and create the transistor were worked out decades before the transistor was developed. So just imagine all the possible world-changing technology and ideas that may be developed from the new information we are learning right now about the quantum world. One advancement that could reveal more information about the quantum world is an atom chip developed by a group of Israeli researchers led by physicist Yair Margalit. Their atom chip controls highly accurate magnetic fields, which allowed them to create the world’s first fully functioning full-loop Stern-Gerlach interferometer. In Margalit’s report, he mentions that “several theoretical studies have explained why a Stern-Gerlach interferometer is a formidable challenge”.2 However, with the implementation of their highly accurate magnetic-field-controlling chips, they provided “a detailed account of the realization of a full-loop Stern-Gerlach interferometer for single atoms”.2 First, we will dive into what this means, and then we will explore why this is so important. A Stern-Gerlach interferometer (SGI) combines two quantum experiments that have been around for over 100 years. The first is the Stern-Gerlach experiment, developed by Otto Stern and Walther Gerlach. A model of the Stern-Gerlach experiment is shown below in Figure 10.
Figure 10: Diagram of the Stern-Gerlach Experiment. Credit: HyperPhysics
The experiment involves shooting a beam of silver atoms through a non-uniform magnetic field and onto a viewing screen. Each atom is displaced differently by the magnetic field based on its spin angular momentum. What they found is that the atoms landed in two separate places on the viewing screen, not in the continuous spread that classical physics would predict. This shows that each atom had one of two possible spin states, spin up or spin down. However, this experiment doesn’t show whether the particles were in a superposition of following two different trajectories to the viewing screen at the same time. A superposition is the state that very small particles like electrons and photons exist in when they are not being measured. They don’t have an exact location or energy state; instead, they have a probability of existing in many locations and states all at the same time. The double slit experiment is useful for showing that particles are in superposition. A diagram of this is shown below in Figure 11.
Figure 11: Diagram of the Double Slit Experiment. Image by Discovery
This experiment involves shining light through two small slits onto a viewing screen. What appears on the screen is an interference pattern due to the constructive and destructive interference of the light, which behaved like a wave. The strangest part happened when the scientists tried viewing the light to see what was happening as it went through the slits. When they did this, the experiment behaved as it would classically, resulting in two lines on the viewing screen. This shows that as the photons of light passed through the slits, they were in a superposition of being in the first slit, the second slit, and both slits all at the same time. The simple act of viewing where a photon went took it out of superposition, causing it to act as it would under the rules of classical mechanics.
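In the idealized two-slit geometry, the bright-and-dark pattern follows a simple cos² law in the path difference. The wavelength and slit separation below are arbitrary assumed values for illustration.

```python
import numpy as np

# Idealized two-slit interference (slit width ignored): relative intensity
# at viewing angle theta goes as cos^2(pi * d * sin(theta) / wavelength).
wavelength = 500e-9  # assumed light wavelength, m
d = 5e-6             # assumed slit separation, m

for theta in np.linspace(-0.1, 0.1, 9):        # viewing angles, radians
    intensity = np.cos(np.pi * d * np.sin(theta) / wavelength) ** 2
    print(f"theta = {theta:+.3f} rad -> relative intensity = {intensity:.2f}")
```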
Yair Margalit, in collaboration with other researchers from Israel, used parts and ideas from both the Stern-Gerlach and double slit experiments, while implementing their chips with a high level of control over magnetic field gradients, to create a complete full-loop Stern-Gerlach interferometer. In their experiment, they took a sample of rubidium atoms and cooled them down into a Bose-Einstein condensate, a phase of matter just above absolute zero. When atoms get this cold, they have no free energy to move relative to each other. So, they clump together and exist in the same energy states as each other. This causes all the atoms to be in superposition with each other, making the group of atoms act as if it were just one big atom. All these atoms have the same properties. Then the scientists applied a series of four magnetic pulses to the superposition of atoms using their extremely accurate atom chip. The first pulse splits the superposition into two parts. With the second pulse, the two groups of atoms are stopped relative to each other. The third pulse accelerates the two superpositions of atoms back toward each other. Finally, with the fourth magnetic pulse, the groups of atoms are stopped and recombined. A diagram showing the separation and recombination of the wave packets is shown below in Figure 12.
Figure 12: Diagram of the Separation and Recombination of the Superpositions of Atoms. Image by Science Advances
The green and purple curves represent the locations of the two different superpositions of atoms over time. The purple rectangles represent where and when the magnetic pulses were applied to the atoms. As you can see, at time 1 the atoms were split, at time 2 they were stopped, at time 3 they were reversed, and at time 4 they were stopped again and recombined. In a previous study, the researchers created a half-loop Stern-Gerlach interferometer. In that experiment they only separated the atoms and then let them expand over time until they overlapped, creating an interference pattern that the researchers viewed on a screen, like the double slit experiment. This is where the new full-loop Stern-Gerlach experiment differs from a regular Stern-Gerlach experiment. With the full-loop Stern-Gerlach interferometer, instead of just recording where the particles land on a viewing screen, the atoms are recombined. This is important because it allows the scientists to detect the interference in a much more accurate way than with a viewing screen. Instead, the scientists detect the interference through the variation in spin populations in the recombined atoms. This is more accurate because they don’t need an extremely high-resolution image of a screen. Also, they can detect the interference in a much shorter amount of time, which is important because the more time the experiment requires, the more likely outside forces will have an impact on the results. This device can be implemented in many ways to help answer some of the questions that we still have about our universe. In their paper, Yair Margalit and fellow researchers outlined how they could repeat the experiment with larger particles called nano-diamonds. If they can make their atom chips more accurate and reduce background interference, then repeating the experiment with these nano-diamonds may be possible. This could have some very significant implications. Margalit mentions that “placing two such SGIs in parallel next to each other will allow probing the quantum nature of gravity”.2
probing the quantum nature of gravity, we can continue working towards a theory of quantum gravity. Quantum gravity is essentially the combination of the two main theories of how things interact in our universe: the Standard Model of Particle Physics and General Relativity. The Standard Model of Particle Physics successfully describes electromagnetism and the strong and weak nuclear forces, while General Relativity describes gravity and spacetime. Both theories have been confirmed by numerous scientific experiments. However, as soon as you try to combine the two, things start to break down. One example comes from the double slit experiment: when a quantum particle travels in a superposition of being in one slit, the other slit, and both slits, what does its gravity look like? Einstein's theory of gravity requires that objects have an exact location in order to calculate how their gravity interacts with the gravity of objects around them. As of now we have not been able to measure the gravitational force from particles this small, since gravity becomes extremely insignificant at microscopic scales, especially compared to the strong nuclear force. And even if we measured the gravitational force from these particles as they travelled through the slits, the measurement could collapse the superposition, leaving us still knowing nothing about what the gravity of quantum objects looks like. Therefore, the only way
that we could tell what gravity looks like on a quantum scale is to find a theory that connects the Standard Model of Particle Physics and General Relativity. Since most quantum particles have so little mass, gravity doesn't have much of an effect on them, so it usually isn't very relevant. However, in places where an extreme amount of mass is packed into a tiny point, particle physics and General Relativity collide. One example is black holes: at the very center of a black hole, immense amounts of matter and energy are squeezed down into a single point called a singularity. If we had a theory that described the quantum nature of gravity, then we might be able to understand the physics of black holes, which have baffled scientists since they were first theorized. On top of that, we might be able to understand how our universe was created. The most accepted theory for how our universe was made is the Big Bang theory, in which our entire universe expanded from a single point. Just as in a black hole, as more and more matter compresses into one spot, the force of gravity becomes more significant. Therefore, if we could find a theory of quantum gravity, then we could understand the physics behind how our universe was created.
Finding a theory that combines The Standard Model of Particle Physics and General Relativity has endless unknown implications. If we discovered a theory of quantum gravity, we would understand much more about one of the most mysterious objects in our universe, black holes. We would also learn more about how our universe was created. With this information who knows what we could develop and accomplish. Maybe we could invent teleportation or create a source of free unlimited energy. Maybe we could even create a small universe of our own. That last one could be a stretch, but without a theory of quantum gravity, we will have no idea what possibilities are out there. That’s why it is so important to support scientists like Yair Margalit and his team who are developing experiments that may be able to test the quantum nature of gravity. We are on the forefront of discovering more about the world we live in and only time will tell what the quantum world can teach us about ours.
References
1. Orzel C. 2015 Aug 13. What Has Quantum Mechanics Ever Done for Us? Forbes, Forbes Magazine. [Internet]. [cited 2021 Oct 24]. Available from: https://www.forbes.com/sites/chadorzel/2015/08/13/what-has-quantum-mechanics-ever-done-for-us/
2. Margalit Y, et al. 2021 May 28. Realization of a Complete Stern-Gerlach Interferometer: Toward a Test of Quantum Gravity. Science Advances, American Association for the Advancement of Science. [Internet]. [cited 2021 Oct 24]. Available from: https://www.science.org/doi/10.1126/sciadv.abg2879
Hot Research and Even Hotter Superconductors By Shawn Farrell
Imagine a world where we can have supercomputers in the palms of our hands, a world where electricity can be transmitted long distances with no loss, or even a world where maglev trains are everywhere. This world is not as far away as many would think. Right now, the transmission of energy through conductors tends to be an inefficient process: when electricity is transmitted, energy is lost as heat. With the development of superconductors we are able to transmit electricity with little to no loss. However, there is one big problem with superconductors: they only work at very low temperatures. This is why the development of room-temperature superconductors is so groundbreaking. There are several important aspects of room-temperature superconductors: understanding how they work, the research currently being conducted on the topic, and the applications of superconductors and how being able to use them at room temperature can improve many processes.3
The science of superconductors can be difficult to understand, especially if the basics of how they work are unknown. One of the primary principles important to understanding them is Cooper pairs. Cooper pairs are loosely bound electrons with two opposing spins. In the case of Cooper pairs, and in many other quantum processes, spin refers to an angular momentum carried by elementary particles, in this case electrons. Due to the nature of these electrons, they are able to move freely through superconductors, transmitting electricity without any loss. These Cooper pairs are found in cuprates, which are present in superconductors. Cuprates are ceramic compounds composed of layered copper and oxygen atoms with other elements mixed in. At room temperature these cuprates are great insulators, but once cooled to low temperatures their properties change dramatically: Cooper pairs form and the cuprates go from strong insulators to highly effective superconductors. The process of materials becoming superconductive is described by Bardeen-Cooper-Schrieffer (BCS) theory. In a nutshell, this theory states that when Cooper pairs form within a metal, that metal becomes superconductive. More specifically, the formation of these pairs locks some of the electrons into place. When a potential is put across the superconducting material, free electrons will begin to flow. To keep the explanation simple, each of the electrons that is locked into place will pull along a positively charged ion, which then brings a free electron along with it due to the opposite charges. There is still more to the science of superconductors in the form of Van der Waals materials.2
Van der Waals materials have the unique properties of strong in-plane covalent bonding and weak interlayer interactions. The formation of these structures allows the material to carry high charges efficiently. While this makes them great as superconductors, they do face problems, one of which is their 2D nature. To make them more practical, they can be combined with more traditional 3D materials, such as cuprates. The combination of 2D Van der Waals materials and 3D materials allows room-temperature superconductors to be produced. The research on this relationship is still in its early stages and is not fully understood, but this process is one of the key parts of developing room-temperature superconductors.1
Figure 1: This graphic shows how Van der Waals materials can be layered with other structures to create a three-dimensional structure.
Traditionally, superconductors only work near 0 K. While this is how superconductors commonly function, recent research shows that room-temperature superconductors are not only possible, they are being produced in laboratories. While this is an incredible jump from having to be near 0 K, very specific conditions are required for them to be possible, the most restrictive being the immense amount of pressure needed to keep the materials in this state. It was found that approximately 267 GPa of pressure was enough to keep these materials superconducting up to 287.7 K (14.55 °C). The scientists attempted to push the temperature past this point, but that only resulted in the inevitable failure of the material. Another important aspect of room-temperature superconductors is the material out of which they are made. Only by applying the concepts and techniques described above was this possible. The material used was a sample of carbonaceous sulfur hydride, which proved to be the most effective for this method of room-temperature superconductivity.
Figure 2: Graphic a shows the resistance compared to the temperature across various runs at different pressures. Graphic c shows the transition temperature compared to the pressure across several runs.
Another important and interesting aspect of superconductors is their magnetic properties. One of the most effective ways to test whether a material is superconductive is to test its a.c. magnetic susceptibility and look for a strong diamagnetic transition. A diamagnetic material produces an induced magnetic field in the opposite direction of a source magnet; this means that when a magnet is brought near a superconductor, the superconductor will repel that magnet. For the carbonaceous sulfur hydride samples, this method was used to determine whether they were becoming superconductive. During the testing, the team of scientists determined that when this material becomes superconductive there is a drop in its magnetic susceptibility signal of 10 to 15 nanovolts. This points to a diamagnetic transition, which shifts as the temperature and pressure increase. Using this method to observe the magnetic susceptibility also gave them an opportunity to observe the transition temperatures. Throughout the experiment the highest transition temperature was 198 K, which is only the midpoint of the transition, and the highest pressure reached was 189 GPa. To put this into perspective, 198 K is about −75 °C or −103 °F; while that seems very low compared to what traditional superconductors need to function, it is incredibly warm for a superconductor, closer to room temperature than to 0 K. The 189 GPa becomes clearer when converted to atmospheres: it is approximately 1.865 × 10^6 atmospheres. Using these methods to study the magnetic
susceptibility, they were unable to go higher than 200 GPa due to technological limitations.4
Figure 3: The left image shows the relationship between the a.c. susceptibility and the temperature, with the amount of pressure applied indicated.
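A few lines of Python make these unit conversions easy to verify (the conversion constants are standard; only the rounding is ours):

```python
# Quick unit-conversion check for the figures quoted above.
GPA_TO_PA = 1e9        # pascals per gigapascal
ATM_IN_PA = 101325.0   # pascals per standard atmosphere

p_gpa = 189.0
print(f"{p_gpa} GPa = {p_gpa * GPA_TO_PA / ATM_IN_PA:.3e} atm")  # ~1.865e6 atm

for t_k in (198.0, 287.7):
    t_c = t_k - 273.15          # kelvin to Celsius
    t_f = t_c * 9 / 5 + 32      # Celsius to Fahrenheit
    print(f"{t_k} K = {t_c:.2f} C = {t_f:.2f} F")
```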
The superconducting nature of carbonaceous sulfur hydride can be confirmed even further by looking at the relationship between magnetic fields and the superconductor. Bardeen-Cooper-Schrieffer theory explains how an external magnetic field exerts a Lorentz force on the oppositely directed momenta of the electrons in the Cooper pairs; this is the diamagnetic effect discussed above. The field also induces a Zeeman effect, which polarizes the initially spin-paired states in these pairs. This breaks the Cooper pairs, which reduces the maximum temperature at which the material remains superconductive.
Using this method, the transition temperature was reduced by 22 K at 267 GPa when a 9 T magnetic field was applied (see Figure 4). This shows that the transition temperature depends on the applied magnetic field. The dependence can be expressed using two mathematical methods: the Ginzburg-Landau model and the conventional Werthamer-Helfand-Hohenberg (WHH) model. Simply put, these methods express the change in the critical temperature at which the transition occurs. They not only provide more evidence that a transition is occurring but point to where it is. The transition to a superconductor is a short process that yields impressive results, and the transition from standard superconductors to room-temperature ones would have a similarly dramatic effect on plenty of industries.4
Figure 4: This graph shows how a magnetic field impacts a superconductor's critical temperature.
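As a rough illustration of this field dependence, one can invert the simple Ginzburg-Landau form Bc2(T) = Bc2(0)(1 − (T/Tc)²); the zero-temperature critical field below is an assumed round number chosen so the suppression comes out near the quoted 22 K, not a value reported in the paper:

```python
# Sketch of field-induced Tc suppression via the simple Ginzburg-Landau
# relation Bc2(T) = Bc2(0) * (1 - (T/Tc)^2). Bc2(0) is assumed, for
# illustration only; Tc is the zero-field value from the text.
import math

Tc0 = 287.7    # zero-field critical temperature (K), from the text
Bc2_0 = 62.0   # assumed upper critical field at T = 0 (tesla)

def tc_in_field(b_tesla):
    """Critical temperature at applied field B, by inverting the GL form."""
    return Tc0 * math.sqrt(1.0 - b_tesla / Bc2_0)

drop = Tc0 - tc_in_field(9.0)
print(f"Tc suppression at 9 T: {drop:.1f} K")  # close to the ~22 K quoted above
```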
The applications of room-temperature superconductors are nearly endless; one of the largest impacts would be on the power grid. Assuming that superconductors could be developed that do not require immense pressure and low temperatures, the transmission of electricity would become an extremely efficient process. Right now superconductors are used in computer chips, MRI machines, maglev trains, and even particle accelerators, all of which rely heavily on being near 0 K. In the case of computer chips, if they were made of room-temperature superconductors we would be able to have supercomputers right in our homes. MRI machines work through the use of electromagnets, which need to be supercooled in order for the machine to produce images. If the electromagnets were able to function at room temperature, supercooled liquids like helium and nitrogen would not be needed, and the machines would become far cheaper to use. The same concept applies to both maglev trains and particle accelerators, which would benefit greatly from room-temperature superconductors. One of the most practical and widespread applications would be for the power grid: when electricity is transmitted, the resistance of the wires causes some energy to be lost as heat; with superconductors this would no longer happen and the price of electricity would plummet. The research being conducted on room-temperature superconductors will lead to incredible developments that could change the way the world operates.3
References
1. Berger M. 2021 Jul 2. Van der Waals heterostructures. Nanowerk. [accessed 2021 Dec 4]. https://www.nanowerk.com/van-der-Waals-heterostructures.php
2. Cartlidge E. 2020 Aug 6. Superconductivity theory comes a step closer. Physics World. [accessed 2021 Dec 4]. https://physicsworld.com/a/superconductivity-theory-takes-a-step-closer/
3. Global superconductor applications. 2016 Jul 13. ECS. [accessed 2021 Dec 4]. https://www.electrochem.org/superconductors
4. Snider E, Dasenbrock-Gammon N, McBride R, Debessai M, Vindana H, Vencatasamy K, Lawler KV, Salamat A, Dias RP. 2020. Room-temperature superconductivity in a carbonaceous sulfur hydride. Nature. 586(7829):373–377. doi:10.1038/s41586-020-2801-z
5. Sutter P. 2021 Jul 23. What is a superconductor? LiveScience. [accessed 2021 Dec 4]. https://www.livescience.com/superconductor
Can quantum tunneling exceed the speed of light? By Minh N. Nguyen Would you believe me if I told you that quantum mechanics could help you catch the Hogwarts Express and open the door to the magical world of Harry Potter and his friends by walking through platform 9¾ of King's Cross station in London? Even though this seems to exist only in the magical world, it is "normal in the weird world of quantum physics"9. The phenomenon of a particle and its matter wave transmitting through a potential barrier or a wall is called quantum tunnelling. The idea of quantum tunnelling may seem implausible and illusory, but it is in fact commonplace in the real world.
The quantum tunneling phenomenon is an important consequence of quantum mechanics. Classically, a particle moving towards a potential barrier can never cross to the other side of the barrier without sufficient energy1. But unlike classical mechanics, much of quantum mechanics cannot be explained by classical reasoning. "Quantum mechanics produces some bizarre effects in the microscopic world, but we do not see these phenomena in our macroscopic, 'classical reality'"3. So how does quantum physics explain the tunneling phenomenon?
Quantum theory is a theory of matter; or more precisely, it is a theory of the small components that comprise familiar matter. "The ordinary matter of tables and chairs, omelets and elephants is made up of particles, like electrons, protons and neutrons"8. Quantum theory provides our best account of these particles. It proposes that any matter can exist as both a particle and a wave at the same time. Because matter can behave as a wave, it also has a wavelength that is inversely proportional to its velocity; in other words, a de Broglie wavelength10. In the quantum world, matter acts differently. It does not exist in definite states, which means it does not have an exact position or energy; we cannot predict these with certainty, only assign probabilities, with uncertainty, to the possible states and positions. Because of this, quantum tunneling does happen. There is a possibility that a particle can tunnel through a potential barrier without having sufficient energy to go over it. This is because matter in the quantum world acts as a wave; when it encounters the barrier, the wave begins to decay exponentially. If the barrier is short enough, the wave gets across before decreasing to zero, and in that case there is some probability of finding the particle on the other side of the potential barrier. This probability, T, can be expressed mathematically by the formula11:
T = e^(−2L√(2mΔE)/ℏ)
where m is the mass of the particle, ΔE is the difference between the potential energy of the barrier and the energy of the particle, L is the thickness of the wall, and ℏ is the reduced Planck constant. These probabilities are extremely small and difficult to account for, but we can see that the smaller the particle, the thinner the barrier, and the smaller the energy deficit ΔE, the better the chances of getting through to the other side. So, without magic, you might be lucky enough to pass through to the other side of platform 9¾ and catch the Hogwarts Express!
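To get a feel for the numbers, here is a short Python sketch of this formula; the barrier values chosen are illustrative, not taken from any particular experiment:

```python
# Evaluating T = exp(-2 L sqrt(2 m dE) / hbar) for an electron and a person.
# The barrier numbers are illustrative choices, not from a specific paper.
import math

HBAR = 1.054571817e-34  # reduced Planck constant (J s)
EV = 1.602176634e-19    # one electron-volt in joules

def transmission(m_kg, dE_joule, L_m):
    """Tunneling probability for mass m, energy deficit dE, barrier width L."""
    return math.exp(-2.0 * L_m * math.sqrt(2.0 * m_kg * dE_joule) / HBAR)

# An electron facing a 1 eV deficit across a 1 nm barrier: small but real.
print(transmission(9.109e-31, 1.0 * EV, 1e-9))   # ~3.5e-5

# A 50 kg student facing a 1 J deficit across a 0.5 m brick wall:
# the exponent is around -1e35, so the result underflows to exactly 0.0.
print(transmission(50.0, 1.0, 0.5))
```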
But these fundamental theories do not stop the curiosity of scientists around the world. The idea of quantum tunneling and its applications have been studied for more than a hundred years. "The history of quantum centers in many places, from Göttingen, Leipzig, and Berlin, Germany; Copenhagen, Denmark; Cambridge, both England and Massachusetts; Princeton, New Jersey; and Pasadena, California"10. Before the term tunneling became standard, the effect was known as "penetration of, or leaking through a barrier"10. The Josephson effect (the flow of electric current between two pieces of superconducting material separated by a thin insulating layer), nuclear fusion, and instantons in high-energy physics have all demonstrated the phenomenon of tunneling in the quantum world. Alpha decay is one of the models of tunneling. Polonium-212 emits an 8.78 mega-electron-volt (MeV) alpha particle with a half-life of 0.3 microseconds, but the Coulomb barrier faced by an alpha particle with this energy is about 26 MeV, so by classical physics it cannot escape; quantum mechanics, however, gives a small possibility that the alpha particle can penetrate the barrier4.
Still, the issue of the time and speed with which a particle tunnels through a potential barrier has remained controversial for decades. Some experiments suggest that faster-than-light tunneling is possible, and some have superficially suggested that this phenomenon has been observed8. This violation would occur if a particle at a wall vanished and reappeared on the other side of the barrier instantaneously. To return to Harry Potter: when Harry and his friends run through platform 9¾, they move some distance yet effortlessly pass through a solid wall in an instant. This controversial idea violates one of the most fundamental principles of modern physics: that nothing travels faster than the speed of light.
To answer the questions of how long a particle spends tunneling and what its speed is, Raymond Y. Chiao and his colleagues used photons to determine these values. Light is made of photons. Two light sources emit two photons simultaneously; each photon travels the same distance and arrives at a different detector. To see the effect of tunneling, the scientists placed a barrier, which acted as a potential barrier, in the first path, while the other photon travelled freely along the second path to its detector. The barrier was a special mirror consisting of thin, alternating layers of two different types of transparent glass that reflected 99 percent of incident light at the photon energy. This made it essentially impossible for light to pass through this region. Figure 1 shows the setup.
Figure 1
As a result of tunneling, the remaining 1 percent gets through, allowing the scientists to compare the travel time of both photons, including the time taken to tunnel. The result was surprising: the photons that tunneled through the barrier arrived before the one that traveled through air. They also found that the apparent velocity of the tunneling photons was 1.7 times the speed of light. One explanation is the reshaping of the wave function. In quantum mechanics, particles act as waves, so the photon is best described by a bell-shaped probability distribution.
Figure 2
Because of this, after tunneling the wave packet (Figure 2) gets reshaped and its peak moves forward to the front. At the same time, the best prediction of a photon's position is the location of the peak, so the detector effectively receives the signal from the peak. It is easier to imagine like this: there are two trains, A and B, heading to their destination. Both trains have a light sensor (red dot) in the middle (Figure 3), and only when the sensor passes the destination line does it send feedback to the control bureau that the train has arrived. Along the way, train A goes through a tunnel that moves the light sensor to the head of the train (Figure 4).
Figure 3
Figure 4
Even though the two trains cross their destination lines at the same time, the control bureau receives the signal from train A first, because the light sensor on train A passes the line first. This may explain why the tunneling photons arrive before the freely travelling one. But this explanation did not satisfy the scientists. They considered the possibility that, due to the uncertainty principle, the two photons did not leave their light sources at exactly the same time. To solve this problem, they modified the apparatus. Instead of using two light sources, they used only one. The light beam would strike a crystal of potassium dihydrogen phosphate (KDP), which has a χ(2) nonlinearity (Figure 5). This setup makes a single parent photon split into two sub-photons, which have the same launch time and the same speed in vacuum.
Figure 5
In the end, the result did not change. The detector on the path with the tunneling barrier still registered the earlier signal, which again pointed to the superluminal behavior of the tunneling photon. To carry out a precise measurement, they also had to make sure the photons travelled the same distance. To do so, they determined how much time it takes a photon to travel from the source to the detector along each path; the two paths would be equal only if the travel times were the same. With the help of Leonard Mandel and his co-workers at the University of Rochester5, Raymond could time the photons' travel.
Figure 6
They used an optical element called a beam splitter, which transmits half the photons striking it and reflects the other half. Two photons are released at the same time but from opposite sides. This means there are four different outcomes: both photons transmitted through the beam splitter (Figure 6b5), both reflected from the beam splitter (Figure 6c5), both going in one direction, or both going in the other direction. By changing the length of one of the paths until the first two outcomes disappear, Raymond and his co-workers made the photons reach the beam splitter at the same time, effectively creating two identical racing lanes. The principle behind this method is that all particles in nature are either bosons or fermions. Photons are bosons, and pairs of bosons "like" being together, which explains why, after hitting the beam splitter simultaneously, they prefer to head along the same path. Once the identical paths were established, the barrier could be installed (at angle θ = 0°) and the race could go on. By measuring the average delay time, they could compare the speed of the tunneling photons to the speed of light.6
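The dip in coincidences as the delay is tuned can be imitated with a toy model; the Gaussian shape and femtosecond width below are assumptions chosen only to illustrate the idea of balancing the paths, not the experiment's measured profile:

```python
# Toy model of the coincidence dip used to equalize the two photon paths.
# The Gaussian dip shape, width, and visibility are all assumed values.
import numpy as np

def coincidence_rate(delay_fs, visibility=1.0, width_fs=20.0):
    """Relative rate of 'one photon at each output' events vs. relative delay."""
    return 1.0 - visibility * np.exp(-(delay_fs / width_fs) ** 2)

delays = np.linspace(-100, 100, 2001)   # relative delay in femtoseconds
rates = coincidence_rate(delays)

# The two paths are equal where the coincidence rate bottoms out.
best = delays[np.argmin(rates)]
print(f"balanced at delay = {best:.1f} fs")  # ~0 fs for this symmetric model
```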
Figure 7
Figure 7 shows that the coincidence minimum with the tunnel barrier in the beam is displaced to a negative delay value when compared to the coincidence minimum without the tunnel barrier6. This negative shift indicates a superluminal tunneling delay. Even though the experiments indicate faster-than-light speeds, Raymond and his co-workers do not seem to accept the naive reading of the results; they believe no "one can communicate faster than light"5. There is still no full explanation for these experiments, so the controversies about quantum tunneling remain unsolved. The experiments produced exciting results related to the superluminal phenomenon and the speed of light, which might open the way to future discoveries in faster-than-light communication, where humans could send signals to the other end in an instant, or to more applications in quantum computers. This mystery is yet to be studied, and quantum tunneling remains a mind-blowing topic.
References
1. 4.9: Quantum-Mechanical Tunneling. 2014. Chemistry LibreTexts. [cited 2021 Dec 3]. https://chem.libretexts.org/Courses/University_of_California_Davis/UCD_Chem_107B%3A_Physical_Chemistry_for_Life_Scientists/Chapters/4%3A_Quantum_Theory/4.09%3A_Quantum-Mechanical_Tunneling
2. Alpha Particle Tunneling. HyperPhysics, Georgia State University. [cited 2021 Dec 3]. http://hyperphysics.phy-astr.gsu.edu/hbase/Nuclear/alptun.html
3. Folger T. 2018. Crossing the Quantum Divide. Scientific American. [cited 2021 Dec 3]. 319(1):28–35. doi:10.1038/scientificamerican0718-28
4. Alpha half-life vs Kinetic Energy. HyperPhysics, Georgia State University. [cited 2021 Dec 3]. http://hyperphysics.phy-astr.gsu.edu/hbase/Nuclear/alptun.html
5. Chiao RY, Kwiat PG, Steinberg AM. 1993. Faster than Light? Scientific American. [cited 2021 Dec 3]. 269(2):52–60. https://www.jstor.org/stable/24941578
6. Chiao RY. 1999. Tunneling Times and Superluminality: a Tutorial. Mysteries, Puzzles, and Paradoxes in Quantum Mechanics. [cited 2021 Dec 3]. 3–13. doi:10.1063/1.57888. https://arxiv.org/abs/quant-ph/9811019
7. How to Draw an Old-Fashioned Train Easy for Kids. 2020. Cute Easy Drawings. [cited 2021 Dec 3]. https://www.cuteeasydrawings.com/pictures/How-to-Draw-an-Old-Fashioned-Train-Easy-for-Kids-2327.html
8. Norton JD. Origins of Quantum Theory. University of Pittsburgh. [cited 2021 Dec 3]. https://sites.pitt.edu/~jdnorton/teaching/HPS_0410/chapters/quantum_theory_origins/index.html
8. Jugandi. Wave packet. [cited 2021 Dec 3]. https://en.m.wikiversity.org/wiki/File:Wave_packet.gif
9. Measuring time in a quantum tunnel. Max-Planck-Gesellschaft. [cited 2021 Dec 3]. https://www.mpg.de/11419700/time-measurement-quantum-tunnel
10. Merzbacher E. 2002. The Early History of Quantum Tunneling. Physics Today. [cited 2021 Dec 3]. 55(8):44–49. doi:10.1063/1.1510281
Dark Energy: the Motor of Universal Expansion By Alexander Bjerser Why is the universe getting bigger, and does the source of that answer also tell us how old our universe is? Although little about dark energy is known or confirmed, it is believed that much more information about it will be revealed through future experiments. Experiments today have already revealed the magnitude of the dark energy believed to surround us: as much as 70% of the contents of the universe is believed to be dark energy, while ordinary atoms account for only about 5%, all while we do not exactly know what dark energy is. There are many different theories surrounding the subject, yet none really explain all aspects. One thing we have figured out, however, is that dark energy is believed to be the cause of the accelerated expansion of the universe. What we know today is that the apparent nothingness of empty space is not truly nothing. The vacuum is believed to consist of dark energy in the form of tiny particles and antiparticles spontaneously appearing and disappearing over extremely short periods of time.
These particles have energy and can create a negative or positive gravitational pull. Given what we know today about our universe expanding, these particles should produce a repulsive (negative) gravitational effect and therefore cause the expansion of space that is believed to be occurring. The expansion of space was discovered about 20 years ago to be accelerating, and dark energy was named as the reason why. Dark energy is also believed to be weak, which is a problem because it is much weaker than theory predicts; this is simply another aspect of dark energy that we have not yet managed to explain. Turning to the parts of dark energy we do know thanks to observation: as mentioned, the major phenomenon explained by dark energy is the expansion of the universe. Before this theory, it was long believed that the expansion of the universe was slowing down due to gravity. This would make sense: how could the universe continue to expand at a constant speed if gravity, acting entirely from within the universe itself, is the only force acting on it? However, several experiments, primarily supernova observations, have confirmed that the expansion of the universe is actually accelerating rather than decelerating. Edwin Hubble, an American astronomer, was the cosmologist who first discovered the expansion itself; the acceleration was found later, by astronomers trying to measure the rate at which the universe's expansion was slowing down.
Hubble observed a shift in the light coming from other galaxies. After multiple observations and measurements, this became proof of the expansion of the universe. The shift in light was a crucial discovery and is called redshift. The best way to understand redshift is to first understand the Doppler effect. The Doppler effect describes what happens to sound waves when the source of the waves is moving. Think of the sound an ambulance makes as it approaches you in traffic or on the street: it sounds different when it is approaching than when it is moving away. As the source of the sound waves moves towards or away from the observer, the sound is experienced differently due to the change in wavelength caused by the velocity of the source. As the source moves towards the observer the wavelength is shorter, and as it moves away the wavelength is longer. A similar effect acts on light waves as the universe's expansion accelerates. As a source of light moves towards you its light has a shorter wavelength, and as it moves away the wavelength is longer. When light gets a longer wavelength it becomes 'redder' (as can be seen in Figure 1), and although the light waves are not exactly moving away from us, the expansion of the universe is essentially stretching the waves.
This is what is called redshift, and it is what helps us detect the pace at which the universe is expanding: we can observe and measure the wavelength of the photons, in other words the light, and see at what pace the universe is expanding away from us.
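A small Python sketch shows how a measured wavelength shift turns into a redshift and a naive recession velocity; the observed wavelength here is an invented example value:

```python
# Redshift z from emitted vs. observed wavelength, plus the naive Doppler
# recession velocity it implies (a good approximation only for z << 1).
C = 299_792.458  # speed of light in km/s

lambda_emit = 656.3   # H-alpha line at the source, in nm
lambda_obs = 669.4    # the same line as observed, in nm (illustrative value)

z = (lambda_obs - lambda_emit) / lambda_emit
print(f"z = {z:.4f}, v ~ {z * C:.0f} km/s")  # ~0.02, ~6000 km/s
```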
Figure 1 shows that as the wavelength of light gets longer, the light becomes redder.
What Hubble discovered was that galaxies further away were subject to more redshift than those closer to us, because the more distant galaxies recede at a higher rate. This was groundbreaking, as it laid the foundation for the experimental evidence of the expansion and, much later, of its acceleration. The first experiment after Hubble's observations showcasing the acceleration was not a supernova observation. It was, however, the first piece of evidence to suggest that the expansion of the universe was in fact not slowing down, contrary to common belief.
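The distance-velocity relation Hubble found, v = H0·d, is easy to play with numerically; H0 = 70 km/s/Mpc is a conventional round value, and its inverse sets the rough age scale the article goes on to discuss:

```python
# Hubble's law v = H0 * d: more distant galaxies recede faster, so their
# light is more redshifted. H0 is taken as a round 70 km/s/Mpc here.
H0 = 70.0  # km/s per megaparsec (approximate)

for d_mpc in (10, 100, 1000):
    print(f"d = {d_mpc:>4} Mpc -> v = {H0 * d_mpc:>6.0f} km/s")

# The inverse of H0 sets a rough age scale for the universe (~14 Gyr);
# 977.8 converts (km/s/Mpc)^-1 into billions of years.
print(f"1/H0 ~ {977.8 / H0:.1f} billion years")
```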
Through observation of photons in the universe and measurement of the wavelength of those photons as they move through the universe, we can obtain a measurement of the size of the universe at a given time. These observations reveal the size of the universe at that time, which is what led to the observation of a universe expanding at an accelerating rate when this was first discovered. Although this in itself was fairly solid evidence, supernova experiments, as mentioned earlier, solidified the theory of the universe's expansion. These experiments measure photons from supernovae whose wavelengths, conveniently, bear a direct relation to the expansion rate of the universe. An example of a graph of these measurements can be found in Figure 2. Essentially, observing many photon wavelengths gives us the acceleration of the universe. Thanks to this straightforward method, we can show that the rate of the universe's expansion is accelerating.
Figure 2
Figure 2 shows a Hubble plot of observations made from multiple supernovae. The y-axis represents the brightness of the observed photons, which goes down with increasing redshift. On the top graph we can see a measurement that represents the age of our universe, where x = 0, thanks to the observed light from supernovae, and on the bottom graph we can see the decrease in overall brightness as the redshift increases. Given that dark energy can explain the expansion of the universe and redshift can measure its rate, we can estimate the age of the universe. We do this by creating graphs such as Figure 2, which show the age of the universe by plotting the redshift and brightness of different supernovae. The points allow us to see the expansion over time, which lets us find the point at which the universe began to expand. The discovery first made by Hubble opened a new door for everyone in cosmology. It resulted in one of the largest and most significant discoveries in the area and completely changed the way our universe is looked at, as well as giving us an idea of how long our universe has been around. This opens possibilities of learning more about our origin. What is active in the immediate future,
however, is to explore what dark energy is by probing clusters of dark energy through a variety of methods. Although these discoveries are exciting evidence for the beginning of the universe, there are still many unknowns regarding dark energy. Given the uncertainties in what we know, we will need to find out much more before we can be certain of what dark energy is and what it does in our universe beyond causing expansion. Still, we should all be excited, as this is something that will unlock many secrets, as it already has even without our knowing exactly what it is.
References
Robinson P. 2013. Introductory Astronomy: Analyzing the Hubble Plot. YouTube. [accessed 2021 Dec 2]. https://www.youtube.com/watch?v=EUiULlbuFGM
Riess AG, Livio M. 2016. The Puzzle of Dark Energy. Scientific American. 314(3):38–43. https://www.jstor.org/stable/26046875
Frieman J, Turner M, Huterer D. 2008 Mar 7. Dark Energy and the Accelerating Universe. arXiv. [accessed 2021 Dec 2]. https://arxiv.org/abs/0803.0982
ESA. What is 'red shift'? ESA. [accessed 2021 Dec 2]. https://www.esa.int/Science_Exploration/Space_Science/What_is_red_shift
Shutterstock. Visible light spectrum wavelength frequency images. Shutterstock. [accessed 2021 Dec 2]. https://www.shutterstock.com/search/visible+light+spectrum+wavelength+frequency
Bahcall NA. 2015 Mar 17. Hubble's Law and the Expanding Universe. PNAS. [accessed 2021 Dec 2]. https://www.pnas.org/content/112/11/3173
Perlmutter S. 2003. Dark Energy: Recent Observations and Future Prospects. Philosophical Transactions: Mathematical, Physical and Engineering Sciences. 361(1812):2469–2478. http://www.jstor.org/stable/3559228
Simulating the Universe By Cole Ames String theory is one of physicists' most recent attempts to explain and make predictions about the universe as we know it. Unfortunately, the universe is far too complex to be modeled by a simple set of equations and laws. As James Halverson and Cody Long, the authors of "Statistical Predictions in String Theory and Deep Generative Models", put it, "If string theory is true, fundamental physics is a complex system."1 In the scientific world, the term complex holds more significance than in everyday life. The paper "What is a Complex System" contains many definitions and explanations of complex systems, but perhaps its best summary is, "complexity starts when causality breaks down." The authors go on to explain that complex systems tend to have many independent yet interacting variables that are extremely sensitive to initial conditions.2 String theory is then easily classified as a complex system, as there are numerous variables in even the simplest calculations. The goal of Halverson and Long is to make such calculations. The general equation they use is:1
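Assembled from the variable definitions given below (so the exact typesetting may differ from the paper's), the equation presumably reads:

$$\langle O \rangle \;=\; \sum_{i \in S_{\text{vac}}} P(i)\, O(i), \qquad P(i) = D(i)\, A(i)$$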
The left side of the equation is the expectation value of an observable O. P(i) is set equal to D(i), the dynamical factor, multiplied by A(i), the anthropic factor. The dynamical factors are those that are time dependent, while anthropic factors have to do with the constants of nature and therefore do not change over time. The summation is taken over the set Svac, which is the full set of vacua. Vacua are essentially different vacuums with different definitions of what is empty. In any type of physics, a definite state is used for calculations with base assumptions. If there were only one type of vacuum, these calculations would be quite simple. Unfortunately, string theory contains 10^500 vacua, an incomprehensible number. To get an idea of just how large that number is, measuring the universe's age (13.8 billion years) in milliseconds yields a result on the order of 10^20, which does not begin to scratch the surface of the number of vacua employed by string theory. Obviously, the set Svac is not known in full, but neither are the factors D(i) and A(i). For these reasons, performing exact calculations of string theory's predictions is impossible, so physicists turn to alternative methods, asking the question "if exact calculations in string theory are too slow, could fast-but-accurate simulation suffice?"1
The simulations that Halverson and Long decided to use are generative models. Generative models are used to find approximations of incalculable distributions of data. The actual distribution, Pd, is approximated by putting a random variable z drawn from P(z) into the function:1
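In the notation suggested by the surrounding text, the map presumably takes the form:

$$x = F_\theta(z), \qquad z \sim P(z)$$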
F_θ(z) is a sample of a distribution P_θ that depends on the parameter θ. Algorithms are then used to optimize θ, leading to P_θ becoming a better and better approximation of Pd. To optimize the parameter, generative adversarial networks (GANs) are used.1 The GAN consists of two main components: the generator and the adversary, which is also known as the discriminator. The generator produces fake data that is meant to resemble a sample of real data, and the discriminator attempts to label the data set it is presented with as real or fake. The functions that do this look like:1
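Schematically, and assuming the conventional symbols θ and φ for the two parameter sets:

$$G_\theta : z \mapsto x_{\text{fake}}, \qquad D_\phi : x \mapsto \{1 \text{ (real)},\ 0 \text{ (fake)}\}$$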
G, the generator function, produces a data set. The discriminator function, D, then labels that set as 1 (real) or 0 (fake). Here, θ and φ are the parameters of their respective functions.1 The first step in running the GAN is training. During training, the generator is made to simulate a data set. The discriminator is then presented either with the simulated data set or with a real data set of the same size. If the discriminator is fooled, in other words if it marks a fake data set as real, it is penalized. On the other hand, the generator is penalized if its fake data set is marked as fake. After a trial, the generator and discriminator can update their respective parameters to try to avoid being penalized again. Over time, this updating of parameters and competition between the two functions causes the discriminator to get very good at spotting fake data sets, which in turn makes the generator very good at producing realistic data sets.1 A variation of GANs, known as Wasserstein GANs (WGANs), is more complicated than conventional GANs but has an additional benefit. WGANs focus on a metric ρ(Pθ, Pd) that measures how close the model distribution Pθ is to the real distribution Pd, where ρ is a distance or divergence. Changing ρ affects convergence, and the ρ with the best convergence properties is known as the Wasserstein distance (W). In its original form, W cannot be computed for distributions in higher dimensions, but Kantorovich duality allows it to be rewritten. With a few assumptions the new W function can be written as the computable function:1
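The standard Kantorovich-Rubinstein dual form, which is presumably what the paper uses here, is:

$$W(P_d, P_\theta) \;=\; \sup_{\lVert f \rVert_L \le 1}\; \mathbb{E}_{x \sim P_d}[f(x)] \;-\; \mathbb{E}_{x \sim P_\theta}[f(x)]$$

where the supremum runs over 1-Lipschitz functions f, the role the critic network approximates.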
This function allows for the difference between WGANs and typical GANs. In GANs, the discriminator can become overtrained, limiting how often the generator is able to update its parameters. In WGANs, however, the discriminator only approximates a new W value when it is well trained, thus only updating the generator when the discriminator is strong. For this reason, the discriminator in a WGAN is commonly known as a critic: instead of opposing the generator it helps refine and improve it. These GANs and WGANs play a major role in the simulation of string data.1 Typically, string theory is simulated using random tensor approximations (RTAs) of low-energy Lagrangians, measures of the energy within a system, from string compactification. To compute observables associated with different Svac values, the Lagrangian (Li) can be computed for low-energy fluctuations around i. (Li) depends on a factor known as coupling tensors, which in turn depend on the vacuum. As there are over 10^500 vacua in string theory, it is impossible to find all these coupling tensors. The only alternative is to calculate approximations using a matrix ensemble. This was typically done by an experimenter who would carefully create the matrix ensemble and hope that it would accurately approximate real data. Using simulations, the goal is to use GANs to learn a model directly from the data.1
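As a concrete aside, the one-dimensional Wasserstein distance that reappears below as the accuracy yardstick for simulated spectra can be computed directly from two samples; the "real" and "fake" spectra here are synthetic stand-ins, not actual Kahler-metric data:

```python
# The 1-D Wasserstein distance between two samples, computed with scipy.
# Both eigenvalue samples below are synthetic placeholders for illustration.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
real_eigs = rng.lognormal(mean=0.0, sigma=1.0, size=5000)  # pretend "real" spectrum
fake_eigs = rng.lognormal(mean=0.2, sigma=1.1, size=5000)  # pretend generator output

# Compare the log-eigenvalue distributions, as the experimenters do below.
d = wasserstein_distance(np.log(real_eigs), np.log(fake_eigs))
print(f"W(log real, log fake) = {d:.3f}")  # should shrink as a generator trains
```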
To prove that this is in fact possible, the experimenters set out to learn an RTA of an ensemble of coupling tensors from string theory. They used Kahler metrics of a Kahler moduli space as a basis for their parameters. Moduli are the "backgrounds" of string theory. Moduli are similar to the different vacua within string theory but emphasize geometry and structure. In Kahler moduli space, the metric produced gives coupling tensors as matrices. By evaluating the Kahler metrics at the apex of a stretched Kahler cone, the matrix ensemble on different topologies can be calculated and, hopefully, extrapolated. This focus on geometry stems from the fact that string theory hypothesizes that the universe is 11-dimensional, with ten spatial dimensions and one time dimension. It is nearly impossible to imagine the existence of more than three spatial dimensions, but the extra ones are explained as microscopic contortions of spacetime geometry. The different topologies, such as the Kahler cone, are examples of specific conformations of the additional dimensions.1 Before any simulations can be done, the physics of the Kahler moduli space needs to be evaluated. Using the potential equation with Kahler moduli space, the Kahler metric (Kij) was obtained in the form of a matrix of polynomials at fixed values of a constant (h11). The polynomials have a complex structure that is given by the topology of the specific moduli space. The metric can then be
evaluated at specific points, which allows RTAs to be calculated. If a generative model can accurately approximate this metric, it will allow RTAs to be calculated broadly.1 Generative models rely on their ability to produce increasingly accurate simulations. Therefore, it is important to be able to measure the difference between the simulated and real data numerically. Since the metric being simulated is in the form of a matrix, the eigenvalues of the matrices are used as numerical indicators of the accuracy of the simulations. The difference between real and simulated data is compared through a logarithmic distribution of the eigenvalues of real and simulated (Kij). This yields the Wasserstein distance between the two metrics, which should decrease as training occurs. It is important to note that this is not the same as the Wasserstein distance used as a variable in training WGANs, but rather the final indicator of the difference between the metrics.1 Simulation of the metrics requires many parameters to be set:1
In the model names, DC signifies a "deep convolutional architecture" within the GANs that can improve their training speed and accuracy. The experimenters ran tests with each model type with all other parameters held constant, producing this graph of their performance:1
The x-axis shows the number of "batches", or epochs, that the system went through, and the y-axis indicates the distance between simulated and real metrics. As shown by the red line, the DCWGAN clearly trained the fastest and produced the most accurate data. This makes sense, as the W and DC elements are both used to improve the functionality of a typical GAN. Using the DCWGAN that was shown to be most effective, a similar set of tests was run, varying the parameter nz from 5 to 50. There was no difference in results, showing that this parameter is not a factor in simulating accurate metrics.1 These tests have made it clear that the experimenters are able to simulate known string data using their DCWGAN, but the goal is
to be able to simulate data in areas where the exact computation cannot be done. In other words, it is important that the system can accurately interpolate, estimating values between known values, and extrapolate, estimating values outside of known ranges. To test this ability, the experimenters upgraded the DCWGAN to a conditional DCWGAN (cDCWGAN), allowing certain conditions to be set. They then chose the value h11 to check for interpolation and extrapolation, as the real metrics are known for h11 values {20,22,24,26,28,30}. To test for interpolation, this set was used:1
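Judging from the description that follows, the interpolation training set was presumably h11 ∈ {20, 22, 28, 30}, withholding h11 = {24, 26}.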
The system was trained on values on either side of the set to test how accurately the system could predict the metrics for h11 = {24,26} without being trained on either value. Likewise, extrapolation was tested with this set:1
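Again judging from the description, the extrapolation training set was presumably h11 ∈ {20, 22, 24, 26}, withholding h11 = {28, 30}.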
The system was only trained on lower values, letting the experimenters observe how well it could predict metrics for the larger values h11 = {28,30}. Both tests were run under two separate conditions: multiple nz values and a single nz value. All four tests yielded
results that support interpolation and extrapolation as possibilities:1 In each test, the machines produced more accurate results as the epochs increased, which is consistent with the initial test results. There is also a trend of increasing accuracy with greater h11 values, which the experimenters say could be solved with "more clever architecture that allows for non-uniform data." Nevertheless, each experiment showed training over time, as well as consistency with the known values, for the values that the system was not trained on. Therefore, it can be concluded that interpolation and extrapolation are both possible using this method of simulation.1 This approach's success is important because it greatly improves upon the old method for calculating string data. The trained conditional DCWGAN used in this trial generates Kahler metrics at h11 = 30 250 times faster than the previous leading method of obtaining such results. Additionally, the old method becomes slower as h11 increases, meaning metrics at large h11 values cannot feasibly be calculated. The cDCWGAN, on the other hand, takes the same amount of time to calculate metrics for any value of h11. This means that, if it is true that it can extrapolate accurately, the system will be able to produce metrics for previously incalculable values. Currently, the experimenters have
not extrapolated past h11 = 30 due to the additional time it would take to model the metrics once they become bimodal, but they are confident that it will be possible with future advancements to the system.1 As discussed, string theory describes an incredibly complex system from which it is nearly impossible to take accurate data in large quantities. The methods used by the experimenters allow them to generate models that "trad[e] some error for efficient sampling." The success of this experiment suggests that there is a high level of structure within string theory, which is why extrapolation works. Data points are "accompanied by…topological and geometric information," and are "related to one another by deformation or discrete operations in a mathematically rigorous space." While this specific experiment was done using Kahler metrics, the same method can be applied to other types of data, both within string theory and in completely different fields. This work by Halverson and Long has paved the way for future discoveries in string theory and provided a useful method to experimenters from various disciplines.1
References
1. Halverson J, Long C. 2020 Apr 18. Statistical Predictions in String Theory and Deep Generative Models. Fortschritte der Physik: 2000005. doi:10.1002/prop.202000005
2. Ladyman J, Lambert J, Wiesner K. 2012. What is a complex system? European Journal for Philosophy of Science. 3(1):33–67. doi:10.1007/s13194-012-0056-8
Relativity: The Core of GPS By Max Arnone GPS is physically possible due to relativity, but advancements in atomic clocks and other accuracy-refining techniques have pushed the technology from rudimentary military use to a necessity in the lives of most. GPS devices are able to calculate position through constant communication with four satellites orbiting far above Earth and by utilizing one of the principles of special relativity: that the speed of light is constant. Each satellite transmits a signal from a known location, 𝑟𝑖, at a known time, 𝑡𝑖, where 𝑖 is simply an integer value from one to four representing an individual satellite.
Fig. 1. Satellites transmit signals down to Earth from positions 𝑟𝑖 , relying on the constancy of 𝑐 to determine the GPS receiver’s position.1
The GPS device back on Earth receives these signals at an unknown location, 𝑟, and a known time, 𝑡. By solving four iterations, 𝑖 = 1, 2, 3, 4, of the equation |𝑟 − 𝑟𝑖| = 𝑐(𝑡 − 𝑡𝑖), one for each satellite, the GPS device's, and by extension the user's, position 𝑟 can be determined.1 However, this process is not without limitations; the equation above is a "propagation-delay equation" which is "the basis for position and time determination by the GPS receivers" and is made possible by relativity.1
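A minimal numerical sketch of this solve is below; the satellite positions and times are invented round numbers (and, unlike a real receiver, it treats the reception time as known rather than also solving for a clock bias):

```python
# Numerically solving propagation-delay equations |r - r_i| = c (t - t_i).
# Satellite positions and times are invented values, not real ephemeris data.
import numpy as np
from scipy.optimize import fsolve

C = 299_792_458.0  # speed of light (m/s)

# Four assumed satellite positions r_i (m).
sats = np.array([
    [15e6, 10e6, 24e6],
    [-12e6, 18e6, 21e6],
    [20e6, -8e6, 22e6],
    [-5e6, -15e6, 25e6],
])
r_true = np.array([1.0e6, 2.0e6, 6.37e6])  # receiver position to recover
t_recv = 0.08                               # common reception time (s), known here
t_i = t_recv - np.linalg.norm(sats - r_true, axis=1) / C  # consistent transmit times

def residuals(r):
    """One propagation-delay equation per satellite, as residuals."""
    return np.linalg.norm(sats - r, axis=1) - C * (t_recv - t_i)

# Three equations pin down the three position components; the fourth
# satellite serves as a consistency check on the solution.
r_est = fsolve(lambda r: residuals(r)[:3], x0=np.array([0.0, 0.0, 6.4e6]))
print(r_est, residuals(r_est)[3])  # recovers r_true; 4th residual ~ 0
```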
The theory of special relativity, published in 1905 by Einstein, is the explanation of how speed, mass, time and space are connected. Following the equation 𝐸 = 𝑚𝑐 2, where 𝐸 and 𝑚 represent energy and mass respectively, special relativity shows how the speed of light, 𝑐, determines the relationship between energy and matter. In plain English, the above equation means that “as an object approaches the speed of light, the object’s mass becomes infinite and so does the energy required to move it”.2 As a result, it is impossible for anything to travel at a speed greater than the speed of light; this fact would become the basis for a new era of thinking within physics.
In his memoir, Einstein wrote about the thought experiment that helped lead him towards this discovery. The thought experiment was this: imagine a section of train tracks that spans the distance between two trees. One person stands next to the tracks as another person stands inside a train travelling between the trees at nearly the speed of light. The two frames of reference in this scenario are that of the stationary person and that of the moving person. The question that special relativity answered, through time dilation, was the following: if lightning were to strike both trees as the train passes between them, what would each observer see? Einstein determined that the observer inside the train, who is moving towards one tree and away from the other, would see lightning strike the tree ahead of them before the tree behind them, whereas the stationary observer would see both lightning strikes occur simultaneously.
Working with special relativity and time dilation requires two frames of reference, inertial and non-inertial, where time passes at different rates comparatively. The inertial frame of reference is the stationary frame of reference. Time in this frame is often called 'proper time'. Conversely, the non-inertial frame of reference is that of the moving object. A main implication of special relativity is that time moves relative to the observer. As a result, the non-inertial frame experiences time dilation: as an object's speed increases, time begins to slow within that reference frame. Dilated time, Δ𝑡′, can be expressed by
the equation Δ𝑡′ = Δ𝑡 / √(1 − 𝑣²/𝑐²), where Δ𝑡, 𝑣, and 𝑐 represent proper time, velocity, and the speed of light respectively.2 The relationship between dilated time and proper time is known as, and often represented by, the Lorentz factor: 𝛾 = Δ𝑡′/Δ𝑡 = 1 / √(1 − 𝑣²/𝑐²).2
Fig. 2. Lorentz factor as a function of speed, in natural units, where 𝑐 = 1.3
As Figure 2 shows, the Lorentz factor grows as the speed of the non-inertial frame approaches the speed of light. This is expected; as the non-inertial frame accelerates towards 𝑐, time passes ever more slowly there in relation to the passage of time in the inertial frame of reference, which is unchanging.
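A few lines of Python reproduce the growth shown in Figure 2:

```python
# The Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2) for a few speeds,
# in natural units where c = 1 (as in Fig. 2).
import math

def gamma(beta):
    """beta = v/c, the speed in natural units."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

for beta in (0.1, 0.5, 0.9, 0.99, 0.999):
    print(f"v = {beta:5.3f} c -> gamma = {gamma(beta):7.3f}")
```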
The orbiting component of GPS consists of a couple dozen satellites, positioned so that at any point and at any time four or more are above the horizon. The first clue that special 99 | P a g e
relativity plays a large role in GPS is the fact that all GPS satellites carry multiple atomic clocks onboard for the purpose of transmitting signals in unison with the other satellites in the area. The core of GPS navigation has to do with the speed of light being a constant value, which is a fundamental aspect of special relativity. Assuming the precise positions of the 4+ satellites in the area are unknown it is possible to use the speed of light and the time it takes for each satellite to transmit signals back and forth from the surface of the Earth to determine their location. However, for this to occur the atomic clocks onboard each satellite must be both incredibly accurate and updated very often, in order to whittle down any time discrepancy that is produced by their different frames of reference. The maximum deviation from perfect alignment allowed is around 4 nanoseconds. More than that and location errors start to become sizable. The principal signal that is transferred back and forth by a satellite and a user’s GPS is “the so-called the L1 Signal at 1575.42 MHz”.1 The principal signal is a multiple of the fundamental frequency, 10.23 MHz, which is “synthesized from an atomic clock aboard each satellite”.1 For that wave to become useful to computers back on Earth it must be converted into binary. To do that, the phase of the wave is reversed in order to signal a 1 and the phase is left in its original state to signal a 0. Since satellites are never 100 | P a g e
Since satellites are never still, there is a Doppler shift, an observed change in the frequency of a wave emitted by a moving source with respect to a stationary observer, associated with the signal the receiver picks up.
Fig. 3. A visualization of wave crests emitted as the source moves left; observers on opposite sides of the source would measure different frequencies from one another.4
The Doppler shift is tiny relative to the carrier, a few kilohertz at most on the 1575.42 MHz signal, but it can be used to one’s advantage. By extrapolating from the Doppler shift currently being observed, GPS devices are able to give location data for times in the future, which is an indispensable part of GPS. To understand the main way this is used, think about all the traffic data that must be accessed when someone drives anywhere: instead of only being able to tell them their location, GPS can project their future position and pair it with the conditions expected at the time they will be there.
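To see how a receiver turns a frequency shift into motion, here is a minimal sketch using the first-order Doppler approximation Δf/f ≈ v_r/c, valid when speeds are far below c; the 2.5 kHz shift fed in is an assumed, illustrative measurement, not a figure from the article.

```python
C = 299_792_458.0   # speed of light, m/s
L1 = 1575.42e6      # L1 carrier frequency, Hz

def radial_velocity(doppler_shift_hz: float) -> float:
    """Invert the first-order Doppler relation delta_f / f = v_r / c."""
    return C * doppler_shift_hz / L1

shift = 2.5e3  # assumed observed shift: +2.5 kHz (source approaching)
print(f"radial velocity: {radial_velocity(shift):.0f} m/s")  # ~476 m/s
```

Estimating the receiver’s own velocity from the shifts seen across several satellites is what lets a device project a driver’s position a few seconds ahead and match it against traffic data for that future moment.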
Another relativistic effect, the Sagnac effect, appears in rotating reference frames, such as those traced out by the orbital paths of the
satellites. The Sagnac effect can cause errors of hundreds of nanoseconds in GPS navigation, which would be devastating to accuracy, since a discrepancy of only about 4 nanoseconds is enough to start causing issues. As the network of synchronized atomic clocks on Earth’s surface grew, and the clocks themselves became more and more accurate, the Sagnac effect became more pronounced, since the network is open to “asymmetric, path-dependent effects arising from the planet’s rotation”.1
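The scale of the effect can be estimated with the standard Sagnac correction Δt = 2ωA/c², where ω is Earth’s rotation rate and A is the area swept out by the signal path, projected onto the equatorial plane. The sketch below evaluates the classic benchmark of a signal carried eastward all the way around the equator; this is an illustrative worst case, not an actual GPS signal path.

```python
import math

C = 299_792_458.0        # speed of light, m/s
OMEGA_E = 7.2921151e-5   # Earth's rotation rate, rad/s
R_E = 6.378137e6         # Earth's equatorial radius, m

def sagnac_delay(area_m2: float) -> float:
    """Sagnac correction dt = 2 * omega * A / c^2 for an enclosed area A."""
    return 2.0 * OMEGA_E * area_m2 / C**2

# Benchmark: a signal path enclosing the entire equatorial disc.
A_equator = math.pi * R_E**2
print(f"full equatorial loop: {sagnac_delay(A_equator) * 1e9:.0f} ns")  # ~207 ns
```

A full equatorial loop comes out to roughly 207 ns; real satellite-to-receiver paths enclose smaller areas, consistent with the hundreds-of-nanoseconds scale quoted above.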
A larger issue that must be overcome to ensure GPS accuracy is the varying relativistic effects acting upon the satellites. Since the atomic clocks contained within the satellites sit at different heights relative to Earth and are in varying states of motion, they are subject to different gravitational frequency shifts as well as different velocity-based time-dilation shifts.1 These differences are accounted for by the following equation:
Fig. 4. The equation used to “generate coordinate time 𝑡, as defined in the Earth-centered inertial frame”.1
The ECI, or Earth-centered inertial frame, has its origin at the Earth’s center of mass and is non-rotating with respect to the distant stars. The ECI can be thought of as a larger inertial frame, akin to that of the person standing beside the train tracks in
Einstein’s original thought experiment. The difference is that instead of a single train passing by, there are dozens of satellites orbiting around the origin of the ECI. In order to maintain synchronization, coordinate time t, represented by the left-hand integral of Figure 4, must be calculated. Earth’s gravitational potential and the satellite’s velocity are the main contributing factors to these rate differences; they are represented in the right-hand integral of Figure 4 by Φ and v, respectively.
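The two competing rate differences can be estimated from first principles. The sketch below is a simplified model, assuming a circular orbit and ignoring Earth’s rotation and oblateness, with an orbital radius of about 26,560 km taken as a representative value; it compares the gravitational shift (driven by Φ) and the velocity shift (driven by v) for a satellite clock against a ground clock.

```python
import math

C = 299_792_458.0    # speed of light, m/s
GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m (assumed ground-clock height)
R_ORBIT = 2.656e7    # assumed GPS orbital radius, ~26,560 km
DAY = 86_400.0       # seconds per day

# Gravitational effect: the satellite sits higher in Earth's potential well,
# so its clock runs fast relative to a clock on the ground.
grav = GM * (1.0 / R_EARTH - 1.0 / R_ORBIT) / C**2

# Velocity effect: the satellite's orbital speed slows its clock (time dilation).
v = math.sqrt(GM / R_ORBIT)      # circular-orbit speed, ~3.87 km/s
vel = v**2 / (2.0 * C**2)

print(f"fast due to gravity:  +{grav * DAY * 1e6:.1f} us/day")
print(f"slow due to velocity: -{vel * DAY * 1e6:.1f} us/day")
print(f"net drift:            +{(grav - vel) * DAY * 1e6:.1f} us/day")
```

Under these assumptions the gravitational term wins, leaving the satellite clock fast by roughly 38 microseconds per day; this net rate is why, as the next paragraph notes, the onboard clocks are given a deliberate frequency offset before launch so that they tick at the intended rate once in orbit.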
Since the first GPS satellite was launched in 1977, the improvements in accuracy, as well as the growth of GPS applications, have been massive.1 In early GPS technology, the required frequency offsets for the onboard atomic clocks had not yet been realized. This could lead to a sizable “shift of the clock’s rate”, culminating in location errors close to 100 m.1 Today, by contrast, GPS has seemingly endless applications and nearly pinpoint accuracy. “Monitoring of plate tectonics, navigation in deep space, timestamping of financial transactions”, and countless civilian uses are all the result of one underlying principle: that the speed of light is a constant.1
References
1. Ashby, N. 2002. Relativity and the Global Positioning System. Physics Today [Internet]. [cited 2021 Dec 3]; 55(5):41–47. Available from: https://physicstoday.scitation.org/doi/10.1063/1.1485583
2. Stein, V. 2020. Einstein’s Theory of Special Relativity. Space [Internet]. [cited 2021 Dec 3]. Available from: https://www.space.com/36273-theory-special-relativity.html
3. Wikipedia contributors. 2021. Time dilation. Wikipedia, The Free Encyclopedia [Internet]. [cited 2021 Dec 3]. Available from: https://en.wikipedia.org/w/index.php?title=Time_dilation&oldid=1055787667
4. Fowler, M. Doppler Effect. University of Virginia [Internet]. [cited 2021 Dec 3]. Available from: https://galileo.phys.virginia.edu/classes/152.mf1i.spring02/DopplerEffect.htm