A Biological Supercomputer?!

An artistic representation of the new biological microchip. Source: http://www.sciencealert.com/scientists-have-developed-the-world-s-first-living-breathing-supercomputer

Supercomputers are truly marvellous examples of what technology can accomplish, being used in many areas of science to work through some incredibly complex calculations. Their computational power is truly a feat of human engineering. But, unfortunately, they’re not perfect. Not only are they absolutely huge, often taking up an entire room, but they’re also expensive, prone to overheating, and a huge drain on power. They require so much of the stuff they often need their own power plant to function.

But fear not! As always, science is capable of finding a solution, and this one comes in the form of a microchip concept that uses biological components found inside your own body. It was developed by an international team of researchers, and it uses proteins in place of electrons to relay information. Their movement is powered by Adenosine Triphosphate (ATP), the fuel that provides energy for all biological processes occurring in your body right now. It can quite literally be described as a living microchip.

The chip’s size may not seem like much, measuring only 1.5 cm², but if you zoom in you get a very different picture. Imagine, if you will, that you’re in a plane looking down at an organised and very busy city. The streets form a sort of grid spanning from one end of the city to the other, which closely resembles the layout of this microchip. The proteins are then the vehicles that move through this grid, consuming the fuel they need as they go. The main difference is that, in this case, the streets are actually channels etched into the chip’s surface.

“We’ve managed to create a very complex network in a very small area,” says Dan Nicolau Sr., a bioengineer from McGill University in Canada, adding that the concept started as a “back of an envelope idea” after what he thinks was too much rum. I guess some innovative ideas require a little help getting started.

Once the rum was gone and the model created, the researchers had to demonstrate that the concept could actually work. This was done by setting the chip a mathematical problem, with success defined as the microchip identifying all the correct solutions with minimal errors.

The process begins with the proteins in specific “loading zones” that guide them into the grid network. Once there, the journey through the microchip begins! The proteins move through the grid, via various junctions and corners, processing the calculation as they go. Eventually, they emerge at one of many exits, each of which corresponds to one possible solution to the problem. In the specific case described by the researchers, analysis of the results revealed that correct answers were found significantly more often than incorrect ones, indicating that the model can work as intended.
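If you like to think in code, here’s a toy software version of that journey. The researchers’ demonstration encoded a small “subset sum” problem (which totals can you reach by adding together numbers from a set?); the set {2, 5, 9} below is my illustrative choice, and the whole sketch is just an analogy, not their actual chip logic.

```python
from itertools import product

# Each junction row corresponds to one number in the set: an agent either
# "picks up" that number (turns) or skips it (passes straight through).
# The exit an agent reaches equals its running total, so the set of busy
# exits is the set of achievable subset sums.
numbers = [2, 5, 9]  # illustrative choice, not a value from this post

reached_exits = set()
for path in product([False, True], repeat=len(numbers)):  # one agent per route
    total = sum(n for n, taken in zip(numbers, path) if taken)
    reached_exits.add(total)

print(sorted(reached_exits))  # -> [0, 2, 5, 7, 9, 11, 14, 16]
```

The chip’s trick is that every one of those routes is explored at the same time by a swarm of proteins, rather than one after another.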

The researchers claim that this new model has many advantages over existing technology, including a reduction in cost, better energy efficiency, and minimal heat output, making it ideal for the construction of small, sustainable supercomputers. They also argue that this approach is much more scalable in practice, but recognise that there is still much to do to move from the model they have to a fully functioning supercomputer. It’s early days, but we know the idea works.

So, while it may be quite some time before we start seeing these biological supercomputers actually being put to use, it certainly seems like a fruitful path to follow. Society would no doubt benefit from the reduced cost and power usage that this new technology would bring, and these aspects would also make their application in scientific research much easier.

In fact, if the decrease in cost and power usage is a dramatic one, then scientists could potentially use far more of these computers than they do at the moment. This is a change that would have a huge impact on the kind of calculations that could be performed, and could potentially revolutionise many areas of science. Even though we’ll have to wait, that’s something I am very much looking forward to.


Confirmed: Gravitational Waves are a Thing!

If you dedicate any amount of time to following science these days then you WILL have heard about the recent detection of gravitational waves. Science media truly went mental when the discovery was announced on February 11th, and it’s not surprising when you consider what this means for the fields of physics and astronomy. But before we get started with all that jazz, we should probably look at the specifics regarding what the hell these waves are and how the discovery was made.

Gravitational waves are disturbances in the fabric of space-time, much like the ripples that follow and spread out from your hand when you drag it through a still pool of water. Why is this a valid comparison? Well, Einstein described the universe as being made from a “fabric” hewn from both space and time. This fabric can be pushed and pulled as objects accelerate through it, creating these ripples. A similar distortion is also the cause of gravitational attraction, which is nicely demonstrated in the video below.

Almost any object moving through space can produce gravitational waves, provided it is not spherically or cylindrically symmetrical. For example, a supernova will produce some if the mass is ejected asymmetrically, and a spinning star will produce some if it’s lumpy rather than spherical. Unfortunately, the waves from the vast majority of sources have dissipated long before they get anywhere near us; only incredibly massive objects produce waves that we have a chance of detecting.

Okay! Now that we have at least some idea of what these gravitational waves are, we can look at who and what detected them. This discovery can be attributed to the great minds and machinery involved in the LIGO experiment, which aims to detect gravitational waves by observing the effect they have on space-time. But how would they do this? Space-time isn’t even something we can see! Well my friends, the answer is very clever indeed.

It all involves a machine known as an Interferometer (Figure 1). This device starts by splitting a single laser beam in two, with the resulting beams shooting off along lines perpendicular to each other. These beams travel exactly the same distance down long vacuum tubes, bounce off mirrors located at the end, and return. Since both beams have travelled the same distance, they will still be in alignment when they return to the source, where they destructively interfere with each other so that no light reaches the detector.

Figure 1: Diagram of a basic Interferometer design. Source: http://www.ligo.org/science/GW-IFO.php

However, a passing gravitational wave, with its space-time distorting powers, can actually change the distance that one of the beams travels.  This would mean they are no longer in alignment when they return to the source and won’t cancel each other out. Some light would therefore be able to reach the detector.
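To make the dark-fringe idea concrete, here’s a minimal sketch of the arithmetic, assuming an idealised Michelson-style interferometer tuned so the detector sees nothing when the arms match; the laser wavelength is my assumption, not a LIGO specification.

```python
import numpy as np

wavelength = 1064e-9  # metres; a common infrared laser wavelength (assumed)

def detector_intensity(arm_length_difference):
    # A one-way length change of dL shifts the round-trip phase by
    # 4*pi*dL/wavelength; starting from a dark fringe, the fraction of
    # light reaching the detector is sin^2 of half that phase shift.
    phase_shift = 4 * np.pi * arm_length_difference / wavelength
    return np.sin(phase_shift / 2) ** 2

print(detector_intensity(0.0))             # 0.0 -> perfectly dark
print(detector_intensity(wavelength / 8))  # 0.5 -> light leaks through
```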

And voila! A gravitational wave has been detected… or has it? Well, it actually has in this case, but the point I’m making here is that this amazing machinery is incredibly sensitive to noise. If a gravitational wave were to pass by, it would only change the beam’s path length by about 1/10000th the width of an atom’s nucleus, which is a size I have trouble comprehending.
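If you’d like to see where a number like that comes from, here’s a back-of-envelope version. The strain and arm length are rough, typical published values I’m assuming for illustration, so the exact fraction you get depends on the event and on which nucleus you compare against.

```python
strain = 1e-21          # fractional length change from a detectable wave (assumed)
arm_length = 4e3        # metres; the length of one LIGO arm
proton_width = 1.7e-15  # metres; approximate diameter of a proton

delta_L = strain * arm_length
print(f"arm length change: {delta_L:.1e} m")                        # ~4e-18 m
print(f"fraction of a proton width: {delta_L / proton_width:.1e}")  # ~2e-3
```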

To pick up such a teeny-tiny change, LIGO has to filter out any and all sources of noise, which can include earthquakes and nearby traffic. In fact, to test the research group’s ability to distinguish a genuine gravitational wave from noise, senior members of the team secretly inserted “blind injections” of fake gravitational waves into the data stream. While it does sound a bit cruel, their training clearly paid off.

Now we move on to the understandably common question of why this matters to people who aren’t hardcore science nerds. Well, beyond the fact that this discovery will almost certainly win a Nobel Prize this year and that it confirms the final prediction made by Einstein’s general theory of relativity, it could also have a huge impact on the field of astronomy.

Similar to how we use various electromagnetic wavelengths like visible light, infra-red, and x-rays to study a wide range of things, gravitational waves could act as a new analytical tool. Scientists would listen to these waves to learn more about the objects producing them, which include black holes, neutron stars, and supernovae.

So, while this discovery won’t exactly change your life, it’s easy to see how big a discovery this was for the field of physics, giving us a new way to observe the cosmos and further cementing the theory of relativity. Once again, Einstein has been proven right many decades after his death. That’s a feat that very few people have achieved.



A Step Forward for Wearable Electronics

An artistic representation of Graphene. Source: http://3dprint.com/61659/graphene-ink-capabilities/

Research on flexible, wearable electronic devices is already well under way, with products such as the Wove Band attracting a great deal of attention. In fact, it’s a field of increasing research interest due to its many potential applications, which include health and fitness monitoring, functional clothing, and a range of mobile and internet uses.

Such technology could have implications in several areas of life. These include more effective and immediate monitoring of patients outside hospital, potentially reducing response times if something were to go wrong, and moving communications technology into an entirely new age. The smartphone as we know it could be a thing of the past once this technology takes off.

Given the plethora of uses and the high profile of the research, it’s no surprise that many materials have already been considered. Silver nanowires, carbon nanotubes, and conductive polymers have all been explored in relation to flexible electronics. Unfortunately, problems have been reported in each case, such as high manufacturing costs in the case of silver nanowires and stability issues for some polymers.

But fear not my fellow science enthusiasts! Another material has appeared to save the day. It’s one you’re probably quite familiar with by now – Graphene! This two-dimensional hexagonal array of carbon atoms has great potential in the field of flexible electronics due to its unique properties, which include great conductivity and stability. However, known production methods for the Graphene sheets that would be needed give structures with a rather high surface resistance, which is not ideal.

Luckily, the invention of conductive Graphene inks provided a way to overcome this problem, allowing for sheets of superior conductivity, greater flexibility, lighter weight, and lower cost. That sounds VERY good for a wearable, flexible electronic device. These inks can be prepared with or without a binder, a chemical that helps the ink stick to a surface. Each option brings advantages and disadvantages: a binder can improve the sheet’s conductivity, but it also requires a high-temperature annealing process, which limits its use on heat-sensitive substrates such as papers and textiles.

Well, a new paper published in Scientific Reports in December claims to have found a production method that doesn’t require a binder and still achieves a high conductivity. The research was conducted by scientists at the University of Manchester, United Kingdom, and it represents an important step forward in making flexible Graphene-based electronics a reality. The production method first involves covering a surface with an ink containing Graphene nanoflakes, then drying it at 100 °C. This forms a highly porous coating, which is not ideal, since it leads to high contact resistance and an uneven electron pathway.

The authors overcame this problem by compressing the dry coating, which led to a thin, highly dense layer of Graphene. This not only improved the adhesion of the nanoflakes, but the structure became much less porous, improving its conductivity. It is also noted that greater compression led to higher conductivity values, with the highest being 4.3×10⁴ S/m. But the science didn’t end there! The authors then went on to test how flexible electronic components made from this material would perform in communications technology. Both transmission lines (TLs) and antennae were created from the Graphene sheets and tested in various scenarios.
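To get a feel for what that conductivity means for a thin printed coating, here’s a quick sheet-resistance calculation; the 10-micrometre thickness is a hypothetical value of mine, not one reported in the paper.

```python
conductivity = 4.3e4  # S/m, the best value reported by the authors
thickness = 10e-6     # metres; assumed coating thickness for illustration

sheet_resistance = 1 / (conductivity * thickness)  # ohms per square
print(f"{sheet_resistance:.1f} ohms per square")   # ~2.3 ohms per square
```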

TLs are conductors designed to carry electricity or an electrical signal, and are essential in any circuitry. The ones created here were tested in three positions: unbent, bent but not twisted, and bent and twisted. This was done to determine whether the material performs well in various positions; a necessity for a wearable, flexible device. It turns out the TLs performed well in all three positions, with data showing only slight variations in each case.

The Graphene based antennae were also tested in various positions, both unbent and with increasing amounts of bending. In each case the antennae were found to function in the frequency range matching Wi-Fi, Bluetooth, WLAN, and mobile cellular communications. This is an excellent indication that this material could be ideal for use in wearable communications technology. It was also tested in a pseudo-real life scenario, with antennae being wrapped around the wrists of a mannequin. These results were also promising, showing that an RF signal could be both radiated and received.

So, you can hopefully see that this work represents a real step forward towards wearable electronic devices, as it shows that Graphene is truly a prime candidate. That said, there is still a great deal of work to do, such as incorporating all these components into a complete device and figuring out how to produce the technology on a commercial scale. There would also need to be more research to see if these Graphene sheets could be modified in some way to include applications outside of communications. But putting that aside, I’m quite excited about this research bringing us a little bit closer. Keep an eye out to see where it goes from here.

Sources:

  • Fuente, J. (2016). Properties Of Graphene. Graphenea. Retrieved 18 January 2016, from http://www.graphenea.com/pages/graphene-properties#.VpzceyqLSwV
  • Huang, G.-W. et al. Wearable Electronics of Silver-Nanowire/Poly(dimethylsiloxane) Nanocomposite for Smart Clothing. Sci. Rep. 5, 13971; doi: 10.1038/srep13971 (2015).
  • Huang, X. et al. Highly Flexible and Conductive Printed Graphene for Wireless Wearable Communications Applications. Sci. Rep. 5, 18298; doi: 10.1038/srep18298 (2015).
  • Matzeu, G., O’Quigley, C., McNamara, E., Zuliani, C., Fay, C., Glennon, T., & Diamond, D. (2016). An integrated sensing and wireless communications platform for sensing sodium in sweat. Anal. Methods, 8(1), 64-71. http://dx.doi.org/10.1039/c5ay02254a

Where are all the Aliens?

The Milky Way is a big place. Are we really the only ones here? Source: http://sites.psu.edu/vansonspace/2015/04/10/space-is-the-place/

Are we alone in the Universe? To this day it remains one of the most intriguing questions in science, and probably one of the most discussed by non-scientists everywhere. It’s likely been around for quite some time, but it wasn’t until 1984, with the birth of the SETI (Search for Extraterrestrial Intelligence) Institute, that we started making meaningful strides towards finding an answer.

But despite its great public visibility and inherent curiosity factor, the institute has been pushed to the edges of scientific research. It has failed to attract any serious funding, and has received only small amounts of dedicated observation time on world-class telescopes.

Well, all that is about to change! Thanks to Russian entrepreneur Yuri Milner and physicist Stephen Hawking, the search for extraterrestrial intelligence will receive a total of $100 million in funding over the next decade. The project is known as “Breakthrough Listen”, and will allow for state-of-the-art radio and optical surveys to take place on some of the world’s best telescopes. The project is actually supposed to start making observations some time this year!

So, now that we have the resources available to do some searching, the next question is: what do we search for? We ideally want to find a planet that shares characteristics with our own. That is, one with a rocky surface, of a similar size, orbiting a similar star, and with a surface temperature that allows for liquid water.

This aspect has not proven to be much of a problem, with observations, primarily from the Kepler Space Telescope, showing that the Milky Way contains around a billion planets that meet these specifications. But once we’ve identified such a planet, how do we go about searching for life?

Well, for life to be in any way detectable from a distance, it needs to have evolved to the point where it dominates the planet’s surface chemistry. This actually changes the composition of the atmosphere, creating so-called “biosignatures”: chemical indications of the presence of life.

An example is an atmosphere of at least 20% O2, since our own planet shows that such a composition can almost entirely be created by biological processes. But there is a very real risk of a false positive with any of these biosignatures, since there is always the possibility of a non-biological source. In the case of O2, the splitting of vaporised H2O by UV radiation could easily create such high levels.

This means that we need to find ways to either back up promising signatures or identify a false positive. For example, detecting methane (CH4) in the planet’s atmosphere as well as O2 would significantly strengthen the possibility of a life-based origin. On the other hand, an atmosphere rich in steam would suggest that the splitting of H2O is the most likely source.
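To pull that logic together, here’s a toy decision rule. The gases and thresholds are purely illustrative assumptions of mine, not anyone’s published detection criteria.

```python
def assess_biosignature(atmosphere):
    """atmosphere: dict of fractional gas abundances, e.g. {"O2": 0.21}."""
    o2 = atmosphere.get("O2", 0.0)
    ch4 = atmosphere.get("CH4", 0.0)
    steam = atmosphere.get("H2O", 0.0)

    if o2 >= 0.20 and ch4 > 0.0:
        return "promising: O2 plus CH4 is hard to sustain without life"
    if o2 >= 0.20 and steam > 0.5:
        return "likely false positive: UV splitting of steam can produce O2"
    if o2 >= 0.20:
        return "ambiguous: O2 alone needs follow-up observations"
    return "no strong biosignature detected"

print(assess_biosignature({"O2": 0.21, "CH4": 1e-6}))  # promising
print(assess_biosignature({"O2": 0.25, "H2O": 0.7}))   # likely false positive
```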

But what if we want to be more ambitious? What if we want to, rather than searching for any form of life, jump straight to searching for intelligence? There are a few options available to us here, one of which would be the detection of an intelligent, non-natural radio transmission. This is currently the main aim of the SETI program, and while the risk of a false positive is significantly lower than with biosignatures, it’s not without problems. The main one being that radio communication might be considered archaic by an advanced lifeform, so they might not even be using it.

It would also be possible to search for evidence of energy consumption, a necessity for an advanced civilization that seems impossible to conceal. There are many potential energy sources for a civilization with advanced technology, with nuclear fusion being a likely one. There is also the incredible concept of the “Dyson Sphere”, a megastructure surrounding a star to harvest the energy it emits. In either case the production of waste heat is inevitable, and would produce a detectable mid-infrared (MIR) signal.
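We can even estimate where that waste-heat signal would sit using Wien’s displacement law, which gives the peak wavelength of a thermal emitter. The ~300 K radiator temperature is my assumption for illustration.

```python
WIEN_CONSTANT = 2.898e-3  # metre-kelvins

def peak_wavelength_m(temperature_k):
    # Wien's displacement law: peak wavelength = b / T
    return WIEN_CONSTANT / temperature_k

# A radiator near room temperature peaks around 10 micrometres,
# squarely in the mid-infrared band.
print(f"{peak_wavelength_m(300) * 1e6:.1f} micrometres")  # ~9.7
```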

But one final problem remains. What if, as so much sci-fi media suggests, biological life is only a brief stage for an evolving intelligence? What if the next logical step is the dominance of artificial, inorganic lifeforms?  If so, we wouldn’t really know where to look. It is likely that they would not be found on a planet, as gravity is only advantageous for emerging biological life, but otherwise a nuisance. They would, however, still need to be close to a power source for energy considerations. A star seems to be the most likely source, so that at least gives us a place to start.

There is also the possibility that such intelligence might be broadcasting a signal in their own attempt to find out if they’re alone in the Universe. But if such an advanced civilization were to do such a thing, it is unlikely that our feeble organic brains would be able to detect or understand it.

So, it looks like this amazing question is no closer to being answered than when the effort first began in 1984, but that’s not really surprising since it’s quite a difficult question. However, given that SETI has just been given a new lease of life, it might have gotten a little bit easier. I hope we’ll be learning a lot about this in the coming decade, and who knows, we might actually find someone.


Snowflake Science!

Example of a possible snowflake shape. Source: http://feelgrafix.com/group/snowflake.html

It honestly feels like Winter never really happened this year. I remember hearing rumours of snow on Christmas Day sometime in November, and I must confess I got excited, even though weather predictions are known to be nearly impossible more than a few days out.

But sadly that snow never happened, and in England at least we’ve had to settle for a near-constant Autumn. It’s true that we still have all of January and February for Winter to actually happen, but I think it’s unlikely that we’ll see any snow this year.

So since I still needed my snowy fix I decided to learn a bit about the formation of snow and how that knowledge can be of use to us. It may not be actual snow, but it might make your imaginary snow a bit more realistic.

Let’s start with what a snowflake actually is! It’s a pretty broad term, with a huge variety of structures qualifying as a snowflake. The only concrete part of the definition is that the structure consists of more than one snow crystal, a snow crystal being a single crystal of ice. These form in clouds when water vapour condenses into ice, a process which has two specific conditions for occurring.

The first is known as “Supersaturation”, which occurs when the amount of water vapour in the air exceeds the ordinary humidity limit. What does this mean? Well, at every temperature there is a maximum amount of water vapour the air can support, with higher temperatures allowing for more water vapour.

If we cool a volume of air that’s already at 100% humidity, it will contain more water than is stable; it has become supersaturated. The excess water will then condense out, either into water droplets or directly into ice.
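Here’s a quick sketch of that capacity-versus-temperature effect, using a Magnus-type approximation for saturation vapour pressure (one of several published coefficient sets, chosen here purely for illustration).

```python
import math

def saturation_vapour_pressure_hpa(temp_c):
    # Magnus-type approximation, valid for everyday temperatures
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# Air saturated at 5 C, then cooled to -5 C: roughly half of its water
# vapour is now excess and has to condense out.
print(saturation_vapour_pressure_hpa(5.0))   # ~8.7 hPa
print(saturation_vapour_pressure_hpa(-5.0))  # ~4.2 hPa
```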

The second condition is “Supercooling”, or rather the lack thereof. Supercooling is when a substance remains in a liquid state below its freezing point. It is possible for pure water to remain liquid below 0 °C, as the thermal motion of the molecules prevents crystallisation. In fact, the temperature has to drop to −42 °C before freezing will occur!

On the other hand, tap water will readily freeze at 0 °C due to the impurities it contains. These provide a surface for the molecules to cling to, reducing the effects of thermal motion. The scientific term for what these impurities provide is a “nucleation point”: a starting point for crystal growth. This also occurs in clouds when snow crystals form, as impurities such as dust and pollen particles provide nucleation points.

So! Now that we know what a snowflake is and the conditions for its formation, we can look at the process of crystal growth. It begins when the water molecules arrange themselves around the nucleation point. There are many possible lattice structures for ice, but ice Ih (short for “Form 1 Hexagonal”) is the most stable at ordinary atmospheric pressures below 0 °C, so it’s the most common form found in nature. In this arrangement the water molecules bond in the hexagonal lattice structure shown in Figure 1.

Figure 1: The hexagonal lattice structure of an ice crystal. Red spheres represent Oxygen atoms, and white spheres represent Hydrogen. Source: http://www.thenakedscientists.com/HTML/articles/article/science-of-snowflakes/

The growth then continues as shown in Figure 2, with “rough” areas filling in faster than “smooth” ones. Why do they do this? Well a rough surface is one with multiple binding sites available, as more surface molecules are exposed. This makes it easier for incoming molecules to bind in these locations, and this growth pattern defines the hexagonal shape of the initial crystal.

Figure 2: Diagram showing how additional water molecules bind as the ice crystal grows. Source: http://www.thenakedscientists.com/HTML/articles/article/science-of-snowflakes/

This crystal continues to grow as atmospheric water binds and becomes incorporated into the structure. However, from here on the growth is not uniform, with the corners growing fastest since they now offer the most exposed surface molecules. This is what causes the six “arms” that extend out from the corners of the central hexagon, and their size and shape will be determined by the ever changing conditions as the snowflake moves through the air.
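You can reproduce that “the most exposed points grow fastest” effect with a toy diffusion-limited aggregation model, where random walkers drift in and stick to the first crystal cell they touch. The square grid and simple rules are my simplifying assumptions; this is nothing like a full snowflake simulation, but it does grow branching arms for the same underlying reason.

```python
import math
import random

frozen = {(0, 0)}    # the seed plays the role of the nucleation point
cluster_radius = 0.0

def touches_crystal(x, y):
    return any((x + dx, y + dy) in frozen
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for _ in range(400):                           # grow by 400 particles
    angle = random.uniform(0, 2 * math.pi)     # release just outside the cluster
    r = cluster_radius + 5
    x, y = round(r * math.cos(angle)), round(r * math.sin(angle))
    while True:
        if touches_crystal(x, y):
            frozen.add((x, y))                 # exposed tips get hit first
            cluster_radius = max(cluster_radius, math.hypot(x, y))
            break
        x += random.choice((-1, 0, 1))         # random walk
        y += random.choice((-1, 0, 1))
        if math.hypot(x, y) > r + 20:          # respawn strays on the circle
            angle = random.uniform(0, 2 * math.pi)
            x, y = round(r * math.cos(angle)), round(r * math.sin(angle))
```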

The final shape of the snowflake is determined by many factors, including temperature, humidity, and how those conditions varied during its formation. This makes it extremely unlikely that you’ll ever find two identical snowflakes, as the number of possible combinations and variations is truly staggering. It’s made even less likely when you consider that the majority of snowflakes will not be perfectly symmetrical, as different parts of the same snowflake can experience different conditions too.

Now, while this is all very interesting, what is the actual point of studying the complex formation of snowflakes? Given that snowflake formation was successfully simulated by a research team from Germany and London, it would be nice to know it’s not all for nothing! Well, it turns out that this knowledge, while not having many immediate applications, could be very useful in the future.

Crystals are used in many areas these days: semiconductor crystals for electronics, optical crystals for telecommunications, artificial diamonds for machining and grinding; the list goes on. By studying snowflakes we gain a deeper understanding of how crystals form and grow; knowledge that may help us create new and better types of crystals in the future.

Some more interesting, and perhaps more important, things we can learn are the principles behind self-assembling structures. While we humans usually make things by carving structures out of a block of material, nature often has structures assemble themselves from smaller components. This production method will likely become HUGELY important as the electronics industry moves towards ever smaller devices.

So now you can see that snowflakes are not only beautiful and amazingly scientific, but also potentially useful to us; something I must confess I was unaware of before writing this post. Now, while all this doesn’t change the fact that IT HASN’T SNOWED YET (this makes me very sad), at least you can madly rant about the science when it does.


vOICe: Helping People See with Sound

A demonstration of the vOICe experiment. Photo credit: Nic Delves-Broughton/University of Bath Source: http://www.theguardian.com/society/2014/dec/07/voice-soundscape-headsets-allow-blind-see

It seems like there is an almost constant stream of awesome new technology these days, and there has been a rather fantastic addition! A device is being researched at both the California Institute of Technology (Caltech) in the US and the University of Bath in the UK, with a very noble goal in mind: to build better vision aids for the blind.

Now, it has long been known that blind people often rely on sound as a substitute for sight, with some individuals’ sense of hearing heightened to the point of being able to use echolocation. Well, it turns out that sound can also be designed to convey visual information, allowing people to form a kind of mental map of their environment. This is achieved by a device known as “vOICe”: a pair of smart glasses capable of translating images into sounds.

The device itself consists of a pair of dark glasses with a camera attached, all of which is connected to a computer. The system can then convert the pixels in the camera’s video feed into a soundscape that maps brightness and vertical location to an associated pitch and volume. This means that a bright cluster of pixels at the top of the frame will produce a loud sound with a high pitch, and a dark area toward the bottom will give the opposite; a quiet sound with a low pitch.
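As a rough sketch of how such a mapping can work in code: the left-to-right column scan, the frequency range, and the timings below are all my assumptions, not vOICe’s actual algorithm.

```python
import numpy as np

SAMPLE_RATE = 44_100
COLUMN_SECONDS = 0.02  # time spent sounding out each image column (assumed)

def image_to_soundscape(image):
    """image: 2D array, row 0 at the top, brightness values in [0, 1]."""
    rows, cols = image.shape
    freqs = np.logspace(np.log10(3000), np.log10(300), rows)  # top row = high pitch
    t = np.arange(int(SAMPLE_RATE * COLUMN_SECONDS)) / SAMPLE_RATE
    chunks = []
    for c in range(cols):  # scan the image left to right
        tones = [image[r, c] * np.sin(2 * np.pi * freqs[r] * t)  # bright = loud
                 for r in range(rows)]
        chunks.append(np.sum(tones, axis=0) / rows)
    return np.concatenate(chunks)

# A bright diagonal line becomes a tone sweeping from high pitch to low.
audio = image_to_soundscape(np.eye(16))
```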

But what is really impressive about this technology is that this soundscape appears to be intuitively understood, requiring little to no training at all! In a test performed by researchers Noelle Stiles and Shinsuke Shimojo at Caltech, blind people with no experience of using the device were able to match shapes to sounds just as well as those who had been trained, with both groups performing 33% better than pure chance. In contrast, when the coding was reversed (high point = low pitch, bright pixels = quiet etc.) volunteers performed significantly worse. So how did they achieve such an intuitive system?

Well, it began with Stiles and Shimojo working to understand how people naturally map sounds to other senses. Both blind and sighted volunteers were involved in the system’s development, with sighted people being asked to match images to sounds, and blind volunteers being asked to do the same with textures. The pattern of choices during these trials directly shaped vOICe’s algorithm, and appeared to produce an intuitive result. This seemed to be a surprise to the researchers, who wrote that “the result that select natural stimuli could be intuitive with sensory substitution, with or without training, was unexpected”.

All this got me excited and itching to learn more, and it was then that I found that the research at the University of Bath further emphasises the importance of having such an intuitive system. Here the researchers claim that some users are exceeding the level of visual performance often achieved with more invasive restoration techniques, such as stem cell implants or prosthetics. While people who receive such surgery are rarely able to make out more than abstract images, some long-term users of vOICe claim to form images in their brain somewhat similar to sight, as their brains become rewired to “see” without the use of their eyes.

Michael J Proulx, an associate professor at the university’s department of psychology, gave the example of a man in his 60s who had been born blind. Proulx reports that he initially thought the idea was a joke, too sci-fi to be real, but “after 1 hour of training, he was walking down the hall, avoiding obstacles, grabbing objects on a table. He was floored by how much he could do with it”. He also reports that after a few weeks of use, some people were able to achieve levels of vision of 20/250. To put that into perspective for you, a short-sighted person who removed their glasses would have a level around 20/400. That’s right, this tech could allow the completely blind to see better than those who are still partially sighted! That’s something to wrap your head around.

But slow down there with your excitement! While this technology is truly revolutionary, it’s worth pointing out that there is a huge gulf between distinguishing patterns in a lab environment and using vOICe to actually observe and understand the real world. For example, we don’t know how a busy street, with its already large amount of visual and auditory information, would affect both the device’s signals and how they are interpreted. But there is no denying that this work represents an important step on the road to developing better vision aids, and given that the World Health Organisation estimates a total of 39 million blind people in the world, this technology could bring about a dramatic increase in quality of life across the globe.

But that’s not all this technology could do, as the results are challenging the concept of what being able to “see” actually means. This is illustrated by a quote from Shimojo at Caltech, who mentions that “our research has shown that the visual cortex can be activated by sound, indicating that we don’t really need our eyes to see”. This has profound implications for the field of neuroscience, and has led to another study beginning at the University of Bath to examine exactly how much information is required for a person to “see” in this way. This could not only lead to optimisation of this technology, but to a deeper understanding of how the human brain processes sensory information.

Now I don’t know about you, but I remember when stuff like this was considered to be firmly stuck in the realm of science fiction, and the fact that such talented scientists keep bringing it closer to reality still surprises me. Combine this with an incredible rate of progress, and there really is no way of knowing what futuristic tech they’ll come up with next. This can make keeping up with it all one hell of a challenge, but fear not, my scientific friends! I shall remain here to shout about anything new that comes along.


Did you know Neutrinos have Mass? I didn’t even know they were Catholic!

Terrible jokes aside, this was actually a HUGE discovery in the world of physics, so it’s no surprise that two of the scientists responsible, Takaaki Kajita and Arthur B. McDonald, were awarded this year’s Nobel Prize. Their research led to the discovery of the phenomenon now called “Neutrino Oscillations”, proving that these elementary particles do in fact have mass. Now, at this point that will likely not mean anything to you (it meant f**k all to me at first!), so before we dive into the explanation, we’re going to need a brief history of these elusive particles.

Neutrinos were first proposed by physicist Wolfgang Pauli when he attempted to explain conservation of energy in beta-decay, a type of radioactive decay in atomic nuclei. Noticing that some energy was missing after the decay, he suggested that it was carried away by an electrically neutral, weakly interacting, and extremely light particle. This concept was such a mind-f**k that Pauli himself had a hard time accepting its existence: “I have done a terrible thing, I have postulated a particle that cannot be detected.” But this all changed in June 1956, when physicists Frederick Reines and Clyde Cowan noticed that these particles had left traces in their detector. This was big news, and as a result many experiments began attempting to both detect and identify them.

So! Where do these particles come from? Well, some have been around since the very beginning of the Universe, created during the Big Bang, and others are constantly being created in a number of processes both in space and on Earth. These processes include exploding supernovas, reactions in nuclear power plants, and naturally occurring radioactive decay. This even happens inside our bodies, with an average of 5000 neutrinos per second being produced when an isotope of potassium decays. Don’t worry! These things are harmless (remember: weakly interacting), so there’s no need to go on a neutrino freak-out. In fact, most of the neutrinos that reach Earth originate in nuclear reactions inside the Sun, a fact we’ll need to remember for later.

There are also three types (or “flavors”) of neutrino according to the Standard Model of Physics (electron-neutrinos, muon-neutrinos, and tau-neutrinos), and the exact flavor is determined by which charged particle is also produced during the decay process (electron, muon, or tau-lepton). The Standard Model also requires these particles to be massless, which will also be important later on.

Now that we know all this, we can let the experimentation begin! Both of the Nobel Prize winning scientists were working with research groups attempting to detect, quantify, and identify neutrinos arriving on Earth, albeit on different parts of the globe. It is also worth noting that both detectors were built deep underground in order to reduce interference from the many other particles raining down on the surface.

Takaaki Kajita was working at the Super-Kamiokande detector, which became operational in 1996 in a mine north-west of Tokyo. This detector could pick up both muon and electron-neutrinos produced when cosmic radiation particles interact with molecules in Earth’s atmosphere, and could take readings both from neutrinos arriving from the atmosphere above the detector and from those that had arrived on the other side of the globe and travelled through the mass of the whole planet. Given that the amount of cosmic radiation doesn’t vary depending on position, the number of neutrinos detected from both directions should have been equal, but more were observed arriving from above the detector. Neutrinos were the cause of yet another mind-f**k… and it was suggested that if they had changed flavor mid-flight, from muon / electron to tau-neutrinos, then this discrepancy would make sense.

Fast forward a few years to 1999, and the Sudbury Neutrino Observatory had become active in a mine in Ontario, Canada. This is where Arthur B. McDonald and his research group began measuring neutrinos arriving on Earth from the Sun using two methods: one could only detect electron-neutrinos, while the other could detect all three flavors but not distinguish between them. Remember that most of the neutrinos arriving on Earth come from the Sun? Well, it was also known that the reactions within the Sun only produce electron-neutrinos. This meant that both detection methods should have yielded the same results, as only electron-neutrinos would be detected. However, the measurements of all three flavors combined were greater than the readings for electron-neutrinos only. This could really only mean one thing: the neutrinos must be able to change flavors.

This is where things get REALLY confusing, as neutrinos need to have mass to be able to change flavors. Why? The answer lies in Quantum Mechanics, and a phrase I’ve frequently heard is: if you claim to understand Quantum Mechanics, that only confirms how much you don’t. Now, I’m gonna need you to bear with me here, as I’m going to attempt to explain this while confusing you as little as possible, a task that gave me a BAD headache while planning and researching. We’ll start this endeavor by stating that neutrinos can be classified in one of two ways: by their flavor (three types) or by their mass (also three types). We’ll also need to point out that, thanks to the “Uncertainty Principle”, if you know the flavor of a neutrino, you cannot know its mass, and vice versa. This means that you cannot know the mass of a muon-neutrino / electron-neutrino etc. At all. It’s simply not possible. This ALSO means a neutrino of a precise and identified flavor exists as a precise superposition (or mix) of all three mass types. Each flavor is a different mix of the mass types, and it is exactly this property that allows a neutrino to change identity. Welcome to the f**ked up world of Quantum Mechanics!

Einstein’s theory of special relativity states that a particle’s velocity is dependent on its mass and its energy. So, if we have an electron-neutrino moving through space, each of the three mass types it consists of moves at a slightly different velocity. It is this small difference that causes the mix of mass types to change as the particle moves, and by changing the mix, you change the flavor of the neutrino. Congratulations! You are now somewhat closer to understanding (or not understanding, I guess…) the phenomenon of “Neutrino Oscillations”!
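For the mathematically curious, the standard two-flavour oscillation formula captures this identity-swapping behaviour. The mixing and mass-difference parameters below are rough published values, used here only to show the oscillation, not to reproduce either experiment.

```python
import math

def oscillation_probability(theta, dm2_ev2, length_km, energy_gev):
    # P(flavour A -> flavour B) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    # with dm2 in eV^2, L in km and E in GeV.
    return (math.sin(2 * theta) ** 2
            * math.sin(1.27 * dm2_ev2 * length_km / energy_gev) ** 2)

# Near-maximal mixing and an atmospheric-scale mass splitting (rough values).
for L in (10, 100, 500):  # baselines in km
    p = oscillation_probability(math.pi / 4, 2.5e-3, L, 1.0)
    print(f"L = {L:4d} km -> P(changed flavour) = {p:.2f}")
```

Notice how a neutrino created in the atmosphere directly overhead has barely begun to oscillate when it arrives, while one with a longer journey has had plenty of time to change; that is exactly the kind of imbalance Super-Kamiokande saw.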

While all of this is excellent at causing brain pain, it also opens the gateway to completely new physics as, like I mentioned before, the Standard Model REQUIRES neutrinos to be massless, which is clearly not the case. This discovery marked the first successful experimental challenge to this model in over 20 years, and it is now obvious that it cannot be a complete theory of how the fundamental constituents of the Universe function. Physics now has many new questions.

Did you make it this far? Well done! Go lie down and let your brain rest. It won’t make any more sense tomorrow.

Sources:

  • Neutrino Types and Neutrino Oscillations. Of Particular Significance: Conversations about Science with Theoretical Physicist Matt Strassler. Link: http://profmattstrassler.com/articles-and-posts/particle-physics-basics/neutrinos/neutrino-types-and-neutrino-oscillations/
  • How Are Neutrino Flavors Different? Maybe There Is Only One Vanilla. Cosmology Science by David Dilworth. Link: http://cosmologyscience.com/cosblog/how-neutrino-flavors-are-different/
  • Neutrino Physics. SLAC Summer Institute on Particle Physics (SS104), Aug. 2-13, 2004. Author: Boris Kayser. Link: http://www.slac.stanford.edu/econf/C040802/papers/L004.PDF
  • The chameleons of space. The Nobel Prize in Physics 2015 – Popular Science Background. The Royal Swedish Academy of Sciences. Link: http://www.nobelprize.org/nobel_prizes/physics/laureates/2015/popular-physicsprize2015.pdf
  • Velocity Differences of Neutrinos. Of Particular Significance: Conversations about Science with Theoretical Physicist Matt Strassler. Link: http://profmattstrassler.com/articles-and-posts/particle-physics-basics/neutrinos/neutrino-types-and-neutrino-oscillations/velocity-differences-of-neutrinos/