Month: October 2015

What’s the deal with GMOs?

Okay… I was hoping I wouldn’t have to write about this any time soon, but with some countries within the EU deciding to ban the cultivation of genetically modified crops, I think the time has come.

A total of 17 European countries announced this ban at the beginning of October, and it exposes just how far Europe has gone in setting itself against the modern scientific consensus. In fact, the decision seems to have been made without considering the science at all, but we’ll get into that later. I think we should begin by educating ourselves on what GMOs actually are.

A GMO (genetically modified organism) can be defined as an organism that has acquired, by artificial means, one or more genes from another species or from another variety of the same species.

Humans have been modifying the genomes of plants and animals for thousands of years through the process of artificial selection (selective breeding), which involves selecting organisms with desirable traits and breeding them so that these characteristics are passed on. An example of this is the “Belgian Blue” cow, which has been selectively bred to have greater muscle mass.

Unfortunately this is limited to only naturally occurring variations of a gene, but genetic engineering allows for the introduction of genes from completely unrelated species. These genes could lead to resistance to certain diseases and pesticides, or to enhanced nutritional content. The list goes on; but why do we need these organisms?

Given that the Food and Agriculture Organization of the UN predicts that we’ll need to produce 70% more food by 2050 to feed the growing population, we will need to find new ways to meet food demands. There are several ways to do this, but options such as increased deforestation and giving up meat consumption to better utilise the required crops are not appealing for a number of reasons.

The more realistic options are investing more in hydroponics (growing crops in nutrient solutions rather than soil, often indoors), which many countries are doing, or growing GM crops. This is because the central idea behind GM crops is to combat problems that threaten food security, such as pests, climate change, or disease. Modifications that remove these problems could allow certain foods to be effectively grown in locations where it was previously not possible, as well as improving the chances of the crops surviving in harsh conditions.

Despite these benefits, there are still many controversies surrounding GMOs, such as the unintended spread of modified genes, but this excellent story on clearly outlines many of these problems, as well as pointing out why they are really no cause for concern.

So why do so many people still have a problem with GMOs? The benefits are clearly huge and many of the legitimate concerns have already been addressed. Well, many people seem to believe that GMOs are somehow bad for their health, even poisonous, and that they can damage the environment, despite overwhelming scientific evidence to the contrary.

Some researchers published a paper in the journal “Trends in Plant Science”, arguing that the negative representations of GMOs are popular because they are intuitively appealing. In other words, many people oppose GMOs because it “makes sense” that they would pose a threat. The paper is also very well summarised by one of the authors in an article from Scientific American.

One reason they give is the concept of “Psychological Essentialism”, which makes us perceive an organism’s DNA as its “essence”. Following this logic, DNA is an unobservable core causing an organism’s behaviour and determining its identity. This means that, when a gene is transferred between two distantly related species, people are likely to believe that some characteristics of the source organism will emerge in the recipient.

They also report that an opinion survey in the US showed that more than half of the respondents thought that a tomato modified with fish DNA would taste like fish. This is NOT how DNA works.

However, it is worth pointing out that not all criticisms of GMOs are unfounded, as many people are skeptical of how the business world will change with their introduction. It has already been reported that the US Supreme Court has ruled in favour of Monsanto’s claim to patent GMO seeds, as well as the ability to sue farmers whose fields become contaminated with Monsanto products, whether it is accidental or not.

Now I don’t feel I can safely comment on all of this as the world of business is not something I am educated in, but, while Monsanto’s business practices may be ethically questionable, they are not the only company involved in GMO research and distribution. Many academic institutions and non-profit organisations are also involved, and such groups are responsible for the introduction of Golden Rice, a GMO developed purely for humanitarian benefit.

Knowing this, to dismiss all such organisms simply because one questionable company produces some of them is extremely narrow-minded. Another valid criticism is that it is not possible to say that future GMOs will be safe, and that each organism should be evaluated individually. I would agree with this, as a newly created GMO may and likely will have problems associated with it.

But these will be addressed in the research phase, in the same way that a newly synthesised drug has to undergo trials to determine and correct problems, and any product that gets a commercial release will have been thoroughly evaluated by the scientists involved with the research. The problem appears when people claim the gene editing techniques themselves are dangerous, which has no scientific grounding whatsoever.

So, now we go back to the problem of the European Union’s decision to ban GM crops. It is worth noting that this ban doesn’t apply to scientific research, so they are clearly not opposed to the development of new GMOs, just the cultivation of ones that have already been proven safe. Sounds confusing right? I should also point out that this decision was made without consulting the scientific advisor of the European Commission (EC), because they currently don’t have one!

Last November, the EC’s president, Jean-Claude Juncker, chose not to appoint a chief scientific advisor due to lobbying from Greenpeace and other environmental groups, who seemed to have a problem with what the previous advisor was saying about GMOs. Ignoring the fact that the advisor’s comments reflected the scientific consensus, they wrote “We hope that you as the incoming commission president will decide not to nominate a chief scientific advisor”.

This is extremely worrying, especially since the scientific consensus on the safety of genetic engineering is as solid as that which underpins human-caused climate change. This is especially strange as Greenpeace appears to support the consensus on climate change. You can’t pick and choose which science you agree with; you either support science, or you don’t.

I would assume this ban is due to the negative public opinion of GMOs, and the idea that all scientists that advocate for them have somehow been “bought” by large corporations like Monsanto. Speaking as someone who has experience with scientists and scientific research, I can say that the process has no agenda. Yes, the researchers may prefer one outcome to another, but if the evidence contradicts what they want to find, then they accept that. To do otherwise goes against the very nature of science, and given the amount of work and studying that goes into such a career, very few people go into science without a great deal of passion and respect for the process. You don’t have to trust the corporations, but you should trust the science.


Artificial Skin for Robotic Limbs

Since the new Star Wars trailer has brought with it an air of Sci-Fi, I felt our next story should fit the mood. This led me to a recent article published in the journal “Science” in which a team of researchers created an incredible form of artificial skin. Currently, prosthetic limbs can restore an amputee’s ability to walk or grip objects, but can in no way restore a sense of touch. Such a sense is critical to the human experience, said coauthor Benjamin Tee, an electrical and biomedical engineer at the Agency for Science, Technology and Research in Singapore. Restoring feeling in amputees and people with paralysis could allow them to carry out activities that were previously hindered, such as cooking or contact sports.

A breakdown of the components in the Artificial Skin discussed here, and how the optical-neural interface was constructed. Credit: Science. Source:

Well, these researchers at Stanford University have taken us one step closer to this goal by creating an electronic skin that can detect and respond to changes in pressure. The team named this product the “Digital Tactile System”, or DiTact for short, and it consists of two main components shown in the image to the right. The upper layer consists of microscale resistive pressure sensors shaped like tiny upside-down pyramids. These structures are made from a carbon nanotube-elastomer composite capable of generating a direct current that changes amplitude based on the applied pressure. This is because the nanotube structures conduct electricity: when they are moved closer together, electricity can flow through the sensor. The distance between them varies with the applied pressure – the greater the pressure, the smaller the distance – and this decrease in distance allows a greater flow of electricity between the structures, causing the amplitude of the current to increase.

But one problem still remains! The human brain cannot interpret this information, as it is usually received in pulsed signals, similar to Morse Code, with greater pressure increasing the frequency of these pulses. The signal therefore had to be converted into something the brain could actually recognise, which is where the second layer of the artificial skin comes into play. This layer consists of a flexible organic ring-oscillator circuit – a circuit that generates voltage spikes. The greater the amplitude of the current flowing through this circuit, the more frequent the voltage spikes. And voila! We now have a pulsed signal. But the team had to show that this could be recognised by a biological neuron, otherwise the signal would stop once it reached such a cell. To do this, they bioengineered some mouse neuron cells to be sensitive to specific frequencies of light, and translated the pressure signals from the artificial skin into light pulses. These pulses were then sent through an optical fiber to the sample of neurons, which were triggered on and off in response. This combination of optics and genetics is a field known, oddly enough, as “Optogenetics”, and it successfully proved that the artificial skin could generate a sensory output compatible with nerve cells. However, it is worth noting that this method was only used as an experimental proof of concept, and other methods of stimulating nerve cells are likely to be used in real prosthetic devices.
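To make that two-stage signal chain a little more concrete, here’s a toy sketch of it in code. To be clear, this is entirely my own illustration – the function names, the numbers, and the saturating pressure response are all assumptions, not the team’s actual circuit equations – but it captures the logic: pressure in, current amplitude in the middle, pulse frequency out.

```python
def sensor_current(pressure_kpa, max_current_ua=10.0, saturation_kpa=50.0):
    """Toy model of the pyramid pressure sensor: more pressure pushes the
    nanotube structures closer together, so more current flows.
    The response saturates at high pressure (an assumed, illustrative shape)."""
    return max_current_ua * pressure_kpa / (pressure_kpa + saturation_kpa)

def spike_frequency(current_ua, hz_per_ua=20.0):
    """Toy model of the ring-oscillator layer: the voltage-spike frequency
    grows with the amplitude of the incoming current."""
    return hz_per_ua * current_ua

# A firmer press should always produce more frequent pulses than a light touch.
light_touch = spike_frequency(sensor_current(5.0))
firm_press = spike_frequency(sensor_current(40.0))
assert firm_press > light_touch
```

The only property that matters for the brain-compatible output is the one the assertion checks: the mapping from pressure to pulse frequency is monotonic, just like in real touch receptors.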

This work is “…just the beginning…” according to Zhenan Bao, the leader of the team, adding that they also hope to mimic other sensing functions of human skin, such as the ability to feel heat, or distinguish between rough and smooth surfaces, and integrate them into the platform, but this will take time. There are a total of six types of biological sensing mechanisms in the hand, and this experiment reports success in just one of them. Nevertheless, the work represents “an important advance in the development of skin-like materials that mimic the functionality of human skin on an unprecedented level” according to Ali Javey, who is also working on developing electronic skin at the University of California, Berkeley, adding that it “could have important implications in the development of smarter prosthetics”.

With thought-controlled robotic limbs already being very real, this research represents the next key step in producing completely functioning prosthetic limbs, that could one day be almost indistinguishable from the real thing! Imagine being able to regain all forms of sense and movement in a limb you had once thought lost forever. That would be HUGE, and could drastically improve the quality of life of many amputees. Unfortunately, such a mechanical marvel is still very much in the future, but this research is an important stepping stone, and I wouldn’t be surprised if we start hearing more about this technology in the years to come. My next question would be, once we have achieved all of this, will we start working on a prosthetic arm able to mimic the effects of using The Force? I think we should make an arm that can use The Force! I think we all (secretly) want that.


A New Kind of Nature Reserve!

A Family of Elk. Credit: Valeriy Yurko. Source:

There’s no question that the Chernobyl disaster in 1986 was… well… a disaster! Due to a flawed reactor design and inadequately trained personnel, a huge explosion occurred and there were many fires, with at least 5% of the radioactive reactor core being released into the atmosphere and downwind. Both the explosion and the resulting radiation killed around 30 people within a few weeks of the accident, and the local wildlife in the area was all but destroyed at the time. People were evacuated and relocated, an effort still ongoing, and the area was left abandoned. But it seems, some three decades later, that wildlife is finding its way back.

It’s no secret that plant-life has been flourishing in the immediate vicinity of the explosion for years, as some drone footage filmed by Danny Cooke revealed in 2014. But a new study published in the journal “Current Biology” seems to indicate that some mammalian wildlife is starting to call the area home once again. While previous studies showed a large reduction in the wildlife population, this study not only shows that numbers have increased, but that some species are actually thriving in the now human-free 4,200 km² exclusion zone. Measurements were taken both from aerial surveys and by assessing the number and density of animal tracks in the area, and they show that the populations of Elk, Roe Deer, Red Deer, and Wild Boar are similar to those of four uncontaminated nature reserves in the region, and the Wolf population is around seven times higher! This would suggest that there are abundant mammal communities in the area regardless of the potential effects of radiation.

The study determined this by proposing and testing three different hypotheses.

  1. Mammal abundances are negatively correlated with levels of radioactive contamination in the area.
  2. Densities of large mammals are suppressed in the exclusion zone compared to four uncontaminated nature reserves in the area.
  3. Density of large mammals declined in the period between 1 and 10 years after the accident.

In all three cases, the hypothesis was rejected by the evidence the research group collected, with the authors making a special note that the huge increase in Wolf population was likely due to the large amounts of prey now available to them. The paper also reports that “this represents unique evidence of wildlife’s resilience in the face of chronic radiation stress”, but I feel this claim is not supported by their evidence, as they also point out that this data cannot separate possible positive effects of a human-free environment from the potential negative effects of radiation. While this would require more studies focussing on each factor to determine for sure, it seems that removing human activity from an area benefits animal populations more than chronic radiation harms them. As Jim Smith, a professor of environmental science at the University of Portsmouth, told The Guardian, “What we do, our everyday inhabitation of an area – agriculture, forestry – they’ve damaged wildlife more than the world’s worst nuclear accident”, adding that “It doesn’t say that nuclear accidents aren’t bad, of course they are. But it illustrates that the things we do every day, the human population pressure, damages the environment. It’s kind of amazing isn’t it.”

Amazing it certainly is! I mean, it’s quite a kick in the teeth to learn that simply the presence of humans in an area can have more persistently damaging effects to the environment than chronic radiation exposure. But don’t get too depressed! As Timothy Mousseau, a professor of biological sciences at the University of South Carolina, has pointed out to the BBC, “This study only applies to large animals under hunting pressure, rather than the vast majority of animals – most birds, small mammals, and insects – that are not directly influenced by human habitation.” So maybe we aren’t all that bad. Still, it’s something to think about.

In any case, I think we can all agree that it’s nice to see that we essentially have a new form of nature reserve, albeit a radioactive one. But if and when future studies truly identify the scope of the negative impact we humans have on the environment, I think it might be time to re-think how we behave on this planet. Don’t you?


DNA is Unstable! Luckily your Cells can handle that.

Another Nobel Prize story?! DAMN RIGHT! This time it’s the prize for chemistry, and Tomas Lindahl, Paul Modrich, and Aziz Sancar will collectively bask in the glory for their outstanding work in studying the mechanisms of DNA repair. Given the billions of cell divisions that have occurred in your body between conception and you today, the DNA that is copied each time remains surprisingly similar to the original that was created in the fertilized egg that you once were. Why is that strange? Well from a chemical perspective that should be impossible, with all chemical processes being subject to random errors from time to time. Along with that, DNA is subjected to damaging radiation and highly reactive substances on a daily basis. This should have led to chemical chaos long before you even became a foetus! Now, I would hope that’s not the case for you, so how do our cells prevent this descent into madness? I’ll tell you! It’s because DNA is constantly monitored by various proteins that all work to correct these errors. They don’t prevent the damage from occurring, they just hang around waiting for something to fix, and all three of the winning scientists contributed to our understanding of how our cells achieve this. So! Where do we begin?

A good place to start would be a brief description of the structure of DNA, as this will make things much clearer when we start discussing the research. DNA is primarily a chain of nucleotides, which are themselves made up of three components: a deoxyribose sugar, a phosphate group, and a nitrogenous base. These components are shown bonded together in Figure 1. It is also worth noting that there are four possible bases, each with a slightly different structure, and the one shown in the image is specifically Adenine. The others are known as Thymine, Cytosine, and Guanine, and all attach to the sugar in the same place. The two negative charges on the phosphate group allow it to form another bond to the adjacent nucleotide, and this continues on to form a long chain. Two separate chains are then joined together as shown in Figure 2, and voila! A molecule of DNA is formed!

Figure 1: The basic components of DNA. Source:
Figure 2: Representation of how the two chains of Nucleotides bond together to form a molecule of DNA. Source:
Figure 3: A comparison of Cytosine and its Methylated equivalent. Source:

Now that we have a basic understanding of the structure of DNA, the research should make a hell of a lot more sense, and it begins with Tomas Lindahl. In the 1960s, Lindahl found himself asking a question; how stable is our DNA, really? At the time the general consensus among scientists was that it was amazingly resilient. I mean… how else could it remain so constant? If genetic information was in any way unstable, multicellular organisms like us would have never come into existence. Lindahl began his experiments by working with RNA, another molecule found in our cells with a lot of structural similarities to DNA. However, he was surprised to find that the RNA rapidly degraded during these experiments. Now it was known that RNA is the less stable of the two molecules, but if it was destroyed so easily and quickly, could DNA really be all that stable? Continuing his research, Lindahl demonstrated that DNA does, in fact, have limited chemical stability, and can undergo many reactions within our cells. One such reaction is Methylation, in which a CH3 (methyl) group is added on to one of the bases in the DNA strand. The difference this causes is shown in Figure 3, and can occur with or without the aid of an enzyme. This reaction will become relevant later on, as will the fact that it changes the shape of the base, affecting how other proteins can bind to it. All of these reactions can alter the genetic information stored in DNA, and if they were allowed to persist, mutations would occur much more frequently than they actually do.

Realising that these errors had to be corrected somehow, Lindahl began investigating how DNA was repaired, and by 1986 he had pieced together a molecular image of how “base excision repair” functions. The process involves many enzymes (and I don’t have the time or patience to describe them all), but a certain class known as “DNA glycosylases” are what actually break the bond between the defective base and the deoxyribose sugar, and the base is removed. Our cells actually contain many enzymes of this type, each of which targets a different type of base modification. Several more enzymes then work together to fill the gap with the correct, undamaged base and there we have it! A mutation has been prevented. To help you visualise all this, you’ll find a graphical representation of it below in Figure 4.
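If you like to think in code, the overall logic can be caricatured like this. This is purely my own toy sketch, not the actual biochemistry – real glycosylases recognise damaged bases chemically rather than by comparing strands – but it shows the essence of excision repair: spot a base that can’t pair with its partner on the template strand, remove it, and fill the gap with the correct one.

```python
# Watson-Crick base pairing rules: A pairs with T, C pairs with G.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def base_excision_repair(strand, template):
    """Toy model: walk along a strand, and wherever a base fails to pair
    with the template strand, 'excise' it and fill the gap from the template."""
    repaired = []
    for base, partner in zip(strand, template):
        if COMPLEMENT.get(base) != partner:       # defective or modified base
            repaired.append(COMPLEMENT[partner])  # fill gap using the template
        else:
            repaired.append(base)                 # healthy base, leave alone
    return "".join(repaired)

print(base_excision_repair("AXGT", "TACA"))  # the damaged 'X' becomes 'T'
```

Here “X” stands in for any chemically modified base; everything that pairs correctly is left untouched, just as the real machinery only targets damage.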

Figure 4: Graphical representation of the process of Base Excision Repair. Source:

But the science doesn’t end there folks! Remember, there were three winners, the second of which is Aziz Sancar, who discovered another method of DNA repair. This one is called “nucleotide excision repair”, and involves the removal of entire sets of nucleotides, rather than individual bases. Sancar’s interest was piqued by one phenomenon in particular; when bacteria are exposed to deadly doses of UV radiation, they can suddenly recover if exposed to visible blue light. This was termed “photoreactivation” for… obvious reasons. He was successful in identifying and isolating the genes and enzymes responsible, but it later became clear that bacteria had a second repair mechanism that didn’t require exposure to light of any kind. But Sancar wasn’t about to let these bacteria out-fox him and, after more investigations, he’d managed to identify, isolate, and characterise the enzymes responsible for this process as well. The bacteria were no match for his chemical prowess!

“But how does it work?!” I hear you shout. Well calm the f**k down and I’ll tell you! UV radiation can be extremely damaging, and can cause two adjacent Thymine bases in a DNA strand to directly bind to each other, which is WRONG! A certain endonuclease enzyme, known as an “excinuclease”, is aware of this wrongness, and decides that this damage must be fixed. It does this by making two incisions on each side of the defect, and a fragment roughly 12 nucleotides long is removed. DNA polymerase and DNA ligase then fill in and seal the gap, respectively, and now we have a healthy strand of bacterial DNA! Sancar later investigated this repair mechanism in humans in parallel with other research groups, and while it is much more complicated, involving many more enzymes and proteins, it functions very similarly in chemical terms. You want a picture to make it easier? You’ll find it below in Figure 5!

Figure 5: Graphical representation of Nucleotide Excision Repair. Source:

The final recipient of the Nobel Prize this year was Paul Modrich, who identified YET ANOTHER repair system (there are loads, you know), which he named the “mismatch repair” mechanism. Early on in his career, Modrich was examining various enzymes that affect DNA, eventually focussing on “Dam Methylase” which couples methyl groups to DNA bases (I TOLD YOU THAT REACTION WOULD BE RELEVANT!). He showed that these methyl groups could basically behave as labels, helping restriction enzymes cut the DNA strand at the right location. But, only a few years earlier, another scientist, Matthew Meselson, had suggested that they also indicate which strand to use as a template in DNA replication. Working together, these scientists synthesised a virus with DNA that had incorrectly paired bases, and methylated only one of the two DNA strands. When the virus infected the bacteria and injected its DNA, the mismatched pairs were corrected by altering the unmethylated strand. It would appear that the repair mechanism recognised the defective strand by the lack of methyl groups. Does it work that way in humans? Probably not. Modrich did manage to map the mismatch repair mechanism in humans, but DNA methylation serves many other functions in human cells, particularly those to do with gene expression and regulation. It is thought that strand-specific “nicks” (lack of a bond between a phosphate group and a deoxyribose sugar) or ribonucleotides (nucleotide components of RNA) present in DNA may direct repair, but the mechanism remains to be found at this point.

Figure 6: Structure of Olaparib. Source:

But why should we care? Granted it is nice to know this stuff (at least I think so), but what can this information be used for? Well, it actually has applications within the world of medicine, as errors in repair mechanisms can often lead to cancer. In many forms of cancer these mechanisms have been at least partially turned off, but the cells are also heavily reliant on the mechanisms that remain active. As we mentioned earlier, a lack of these mechanisms leads to chemical chaos, and that would cause the cancer cells to just die. This has led to drugs designed to inhibit the remaining repair systems to slow down or stop cancer growth entirely! One such drug is Olaparib, and you can see the structure in Figure 6. This drug functions by inhibiting two specific proteins (PARP-1 and PARP-2), which are integral in detecting certain flaws in replicated DNA and directing repair proteins to the site of damage. Cancer cells treated with this drug have been shown to be more sensitive to UV radiation, making one form of treatment much more effective.

And with that, we bring our Nobel Prize stories for this year to an end! I think it’s safe to say that the work described here deserved a prize of some sort, as it not only takes a lot of skill and dedication, but it has led to new medical treatments and a MUCH greater understanding of how our DNA behaves. Have you enjoyed our time spent on the science of the Nobel Prize? DAMN RIGHT YOU HAVE. O_O


Did you know Neutrinos have Mass? I didn’t even know they were Catholic!

Terrible jokes aside, this was actually a HUGE discovery in the world of physics, so it’s no surprise that two of the scientists responsible, Takaaki Kajita and Arthur B. McDonald, were awarded this year’s Nobel Prize. Their research led to the discovery of the phenomenon now called “Neutrino Oscillations”, proving that these elementary particles do in fact have mass. Now, at this point that will likely not mean anything to you (it meant f**k all to me at first!), and before we dive into the explanation, we’re going to need a brief history of these elusive particles.

Neutrinos were first proposed by physicist Wolfgang Pauli when he attempted to explain conservation of energy in beta-decay, a type of radioactive decay in atomic nuclei. Noticing that some energy was missing upon this decay, he suggested that some of it was carried away by an electrically neutral, weakly interacting, and extremely light particle. This concept was such a mind-f**k that Pauli himself had a hard time accepting its existence – “I have done a terrible thing, I have postulated a particle that cannot be detected.” But this all changed in June 1956 when physicists Frederick Reines and Clyde Cowan noticed that these particles had left traces in their detector. This was big news, and as a result many experiments began to both detect and identify them.

So! Where do these particles come from? Well some have been around since the very beginning of the Universe, created during the Big Bang, and others are constantly being created in a number of processes both in Space and on Earth. These processes include exploding supernovas, reactions in nuclear power plants, and naturally occurring radioactive decay. This can even occur inside our bodies, with an average of 5000 per second being produced when an isotope of potassium decays. Don’t worry! These things are harmless (remember – weakly interacting) so there’s no need to go on a neutrino freak-out. In fact most of the neutrinos that reach Earth originate in nuclear reactions inside the Sun, a fact we’ll need to remember for later. There are also three types (or “flavors”) of neutrino according to the Standard Model of Physics (electron-neutrinos, muon-neutrinos, and tau-neutrinos) and the exact flavor is determined by which charged particle is also produced during the decay process (electron / muon / tau-lepton). The Standard Model also requires these particles to be massless, which will also be important later on.

Now that we know all this, we can let the experimentation begin! Both of the Nobel Prize winning scientists were working with research groups attempting to detect, quantify, and identify neutrinos arriving on Earth, albeit on different parts of the globe. It is also worth noting that both detectors were built deep underground in order to reduce interference from neutrinos produced in the surrounding environment. Takaaki Kajita was working at the Super-Kamiokande detector, which became operational in 1996 in a mine north-west of Tokyo. This was able to detect both muon and electron-neutrinos produced when cosmic radiation particles interact with molecules in Earth’s atmosphere, and could take readings both from neutrinos arriving from the atmosphere above the detector, and from those that had arrived on the other side of the globe and travelled through the mass of the whole planet. Given that the amount of cosmic radiation doesn’t vary depending on position, the number of neutrinos detected from both directions should have been equal, but more were observed arriving from above the detector. Neutrinos were the cause of yet another mind-f**k… and it was suggested that if they had changed flavor, from muon / electron to tau-neutrinos, then this discrepancy would make sense.

Fast forward a few years to 1999 and the Sudbury Neutrino Observatory had become active in a mine in Ontario, Canada. This is where Arthur B. McDonald and his research group began measuring neutrinos arriving on Earth from the Sun using two methods: one could only detect electron-neutrinos, while the other could detect all three flavors but not distinguish between them. Remember that most of the neutrinos arriving on Earth come from the Sun? Well it was also known that reactions within the sun only produce electron-neutrinos. This meant that both detection methods should have yielded the same results, as only electron-neutrinos would be detected. However, measurements of all three flavors were greater than the readings for electron-neutrinos only. This could really only mean one thing: the neutrinos must be able to change flavors.

This is where things get REALLY confusing, as neutrinos need to have mass to be able to change flavors. Why? The answer lies in Quantum Mechanics, and a phrase I’ve frequently heard is: if you claim to understand Quantum Mechanics, that only confirms how much you don’t. Now, I’m gonna need you to bear with me here, as I’m going to attempt to explain this while confusing you as little as possible, a task that gave me a BAD headache while planning and researching. We’ll start this endeavor by stating that neutrinos can be classified in one of two ways, by their flavor (three types) or by their mass (also three types). We’ll also need to point out that, thanks to the “Uncertainty Principle”, if you know the flavor of a Neutrino, you cannot know its mass, and vice versa. This means that you cannot know the mass of a muon-neutrino / electron-neutrino etc. At all. It’s simply not possible. This ALSO means a neutrino of a precise and identified flavor exists as a precise superposition (or mix) of all three mass types. It’s also worth noting that each flavor is a different mix of all mass types, but it is exactly this property that allows a neutrino to change identity. Welcome to the f**ked up world of Quantum Mechanics!

Einstein’s theory of special relativity states that a particle’s velocity is dependent on its mass and its energy. So, if we have an electron-neutrino moving through space, each of the three mass types it consists of moves at a slightly different velocity. It is this small difference that causes the mix of mass types to change as the particle moves, and by changing the mix, you change the flavor of the neutrino. Congratulations! You are now somewhat closer to understanding (or not understanding, I guess…) the phenomenon of “Neutrino Oscillations”!
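If you fancy seeing the oscillation idea in actual numbers, the standard simplified two-flavor formula gives the probability of one flavor turning into another as P = sin²(2θ) × sin²(1.27 Δm² L / E), where θ is a mixing angle, Δm² the difference of the squared masses, L the distance travelled, and E the energy. Here's a minimal sketch in Python — the values for θ and Δm² below are purely illustrative, not measured ones:

```python
import math

def oscillation_probability(theta, delta_m2_ev2, length_km, energy_gev):
    """Two-flavor neutrino oscillation probability.

    P = sin^2(2*theta) * sin^2(1.27 * dm^2 [eV^2] * L [km] / E [GeV])
    """
    amplitude = math.sin(2 * theta) ** 2
    phase = 1.27 * delta_m2_ev2 * length_km / energy_gev
    return amplitude * math.sin(phase) ** 2

# Illustrative numbers only: maximal mixing and a small mass-squared splitting.
theta = math.pi / 4
dm2 = 2.5e-3  # eV^2
for length in (0, 100, 500, 1000):  # km
    p = oscillation_probability(theta, dm2, length, energy_gev=1.0)
    print(f"L = {length:4d} km -> P(change flavor) = {p:.3f}")
```

Notice that the probability is zero at L = 0 and rises and falls as the neutrino travels; crucially, setting Δm² = 0 (i.e. massless neutrinos) gives zero probability at any distance, which is exactly why the observed flavor change implies mass.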

While all of this is excellent at causing brain pain, it also opens the gateway to completely new physics as, like I mentioned before, the Standard Model REQUIRES neutrinos to be massless, which is clearly not the case. This discovery marked the first successful experimental challenge to this model in over 20 years, and it is now obvious that it cannot be a complete theory of how the fundamental constituents of the Universe function. Physics now has many new questions.

Did you make it this far? Well done! Go lie down and let your brain rest. It won’t make any more sense tomorrow.


  • Neutrino Types and Neutrino Oscillations. Of Particular Significance: Conversations about Science with Theoretical Physicist Matt Strassler. Link:
  • How Are Neutrino Flavors Different? Maybe There Is Only One Vanilla. Cosmology Science by David Dilworth. Link:
  • Neutrino Physics. SLAC Summer Institute on Particle Physics (SS104), Aug. 2-13, 2004. Author: Boris Kayser. Link:
  • The chameleons of space. The Nobel Prize in Physics 2015 – Popular Science Background. The Royal Swedish Academy of Sciences. Link:
  • Velocity Differences of Neutrinos. Of Particular Significance: Conversations about Science with Theoretical Physicist Matt Strassler. Link:

Chemical Analysis of Mars from Orbit? But how?!

When I found out that water had been found on Mars my first response was to flail and shout with excitement. But once I had calmed down I started to think: how does one actually go about analysing the surface of another planet without actually being on the surface yourself? It was then that I found out NASA have managed to do all of this using a satellite currently orbiting the red planet at an altitude of 300 km (186 miles)! That’s some pretty impressive tech right there (and I imagine the specs are a well-kept trade secret). The satellite itself is known as the Mars Reconnaissance Orbiter (MRO), and it’s equipped with an analytical tool known as CRISM, or the “Compact Reconnaissance Imaging Spectrometer for Mars” if you’re feeling excessive. This device can detect and measure the wavelengths and intensity of both visible and infrared light that has been reflected or scattered from the Martian surface, a technique known as “Reflectance Spectroscopy”.

Reflectance Spectroscopy functions on the principle that when light comes into contact with a material, the chemical bonding and molecular structure will cause some of this light to be absorbed. The exact wavelengths absorbed vary depending on the type of bonding and the elements involved, and the remaining light is either scattered or reflected depending on the macro-scale properties of the material, such as shape and size. On Mars, most of these materials seem to be grains of some sort, and the potentially complex shape of such a structure can cause the light to be scattered in all sorts of directions. However, some of this light will reach the MRO, and CRISM can then detect which wavelengths have been absorbed based on a decrease in intensity. How they found a way to do all of this FROM ORBIT still mystifies me, but I imagine NASA prefers it that way. This whole process then gives an output known as an “absorption spectrum”, an example of which is shown in Figure 1.
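As a toy illustration of the principle (this is NOT the actual CRISM pipeline — the band position, depth, and width below are made up), you can model an absorption band as a dip in an otherwise flat reflectance curve, then identify the absorbed wavelength by finding where the reflectance bottoms out:

```python
import math

def reflectance(wavelength_um, band_center_um=1.9, band_depth=0.4, band_width_um=0.05):
    """Toy reflectance curve: a flat continuum minus one Gaussian absorption band."""
    dip = band_depth * math.exp(-((wavelength_um - band_center_um) / band_width_um) ** 2)
    return 1.0 - dip

# Sample the "spectrum" from 1.0 to 2.5 micrometres, within the infrared
# range a spectrometer like CRISM observes.
wavelengths = [1.0 + i * 0.001 for i in range(1501)]
spectrum = [reflectance(w) for w in wavelengths]

# The absorbed wavelength is simply where reflectance is lowest.
deepest = min(range(len(spectrum)), key=lambda i: spectrum[i])
print(f"Absorption band found at {wavelengths[deepest]:.3f} um")
```

In reality each mineral produces several bands of characteristic positions and shapes, and matching the measured dips against laboratory spectra is what lets you name the material, but the core idea is this simple "find the missing wavelengths" game.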

Figure 1: An example of an absorption spectrum showing wavelength (x-axis) and reflectance (y-axis). Source: CRISM website: Link:

So! What have they actually found on Mars using this technique? Well, they appear to have detected “Aqueous Minerals”, which are chemical structures that form in the presence of water by chemical reaction with the surrounding rock. The exact mineral that will form is determined by many factors, including the temperature, pH, and salt content (salinity) of the environment, as well as the composition of the parent rock. Given that this process takes an extremely long time to occur naturally, it can show where water has been present long enough to cause such a reaction, and can give an excellent indication of what the Martian surface was like in the past. For example, chloride and sulfate minerals generally indicate very saline water, as well as suggesting that it was more acidic, whereas phyllosilicates and carbonates suggest less salinity and a more neutral pH. What I find most exciting is that this data can suggest where to begin looking for fossilized evidence of ancient life (if it existed at all). If the past water appears to not be too acidic and the elements for life are present, then it is certainly a possibility.
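That mineral-to-environment reasoning is essentially a lookup table, which you could sketch like this (the mapping below just restates the generalisations above and is far cruder than what planetary geologists actually do):

```python
# Rough mapping from a detected aqueous-mineral class to the kind of
# ancient water it suggests. A simplification for illustration only.
WATER_CLUES = {
    "chloride": "very saline, likely more acidic water",
    "sulfate": "very saline, likely more acidic water",
    "phyllosilicate": "less saline water with a more neutral pH",
    "carbonate": "less saline water with a more neutral pH",
}

def interpret(minerals):
    """Return the water-chemistry hint for each detected mineral class."""
    return {m: WATER_CLUES.get(m, "no inference") for m in minerals}

print(interpret(["phyllosilicate", "sulfate"]))
```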

It seems that Mars just keeps getting more exciting with each new discovery, and all we can do now is wait for the next announcement to be made. Here’s hoping it’s evidence of life! Also, speaking of life on Mars, everyone should go see The Martian movie in cinemas now, it’s f**king brilliant!


  • The CRISM Website. Link:
  • USGS Spectroscopy Lab – About Reflectance Spectroscopy. Link:
  • PBS Newshour. “Mars has flowing rivers of briny water, NASA satellite reveals”. Link:
  • NASA Mars Reconnaissance Orbiter Website. Link:

Nobel Prize! Drugs to take on parasites bring home the award!

As you may or may not have heard, the winners of the Nobel Prize in Physiology or Medicine were announced today, and a total of three scientists received this award: Satoshi Ōmura (a Japanese microbiologist), William C. Campbell (an expert on parasite biology), and Youyou Tu (a Chinese medical scientist and pharmaceutical chemist). But more important than who they are is what they achieved, and each of these great minds has contributed immensely to the treatment of infections caused by some nasty parasites.

Both Ōmura and Campbell contributed to the discovery of a new drug that is extremely effective in killing microfilariae, the larval worms that cause Onchocerciasis, or River Blindness as it might be known to us laymen. The story begins with Ōmura, described as an expert in isolating natural products, deciding to focus on a particular group of bacteria known as Streptomyces. These bacteria are known to produce many compounds with antibacterial properties when isolated, including Streptomycin, the drug initially used to treat Tuberculosis. Ōmura, using his biological powers, successfully isolated and cultivated new strains of Streptomyces, and from thousands of cultures, he selected the 50 best candidates that merited further analysis. One of these turned out to be Streptomyces avermitilis.

Skip forward a bit, and we see that Campbell acquired some of these samples and set about exploring their antiparasitic efficacy. He was then able to show that a component produced by Streptomyces avermitilis was very efficient against parasites in both domestic and farm animals, which he then purified and named Avermectin. This was initially used as a veterinary drug given Campbell’s findings, but subsequent modification on the molecular level (an addition of only two hydrogen atoms!) gave us the “wonder drug” Ivermectin. This was then shown to outperform the previously used DEC (Diethylcarbamazine for the chemistry nerds), primarily due to the lack of side effects such as inflammation. This has made the drug extremely safe for human use, allowing it to be administered to patients by non-medical staff and even individuals in small rural communities with no hospital experience at all (provided some very basic training). This is what makes this drug so special, as it can be safely used in some less developed parts of Africa and South America, where River Blindness is most common and advanced medical care may be unreachable or unaffordable for some individuals.

The other major advancement in this field worthy of the Nobel Prize was the work of Youyou Tu, who developed an effective treatment for another well-known parasitic infection: Malaria! Inspired by traditional Chinese medicine, she identified an interesting extract from the plant Artemisia annua, or as you may know it, Sweet Wormwood. Despite initial results appearing inconsistent, Tu revisited some ancient literature and found clues that led to the successful extraction of the active component, Artemisinin. She then became the first to demonstrate that this compound was highly effective against Malaria, killing the parasite even at early stages of development. While the precise mechanism of how Artemisinin achieves this is still debated, many current hypotheses suggest that the drug forms a highly reactive compound within the Malaria parasite which is then capable of irreversibly modifying and damaging proteins and other important molecules, and the parasite goes down!

The consequences of these discoveries have been felt across the globe, but the countries most affected by these diseases stand to gain the most. River Blindness is nowhere near the huge problem it used to be, with treatment scenarios moving away from control and towards eradication and elimination. This could be MASSIVE for the global economy, with estimates stating a potential saving of US$ 1.5 billion over a few decades! Eradication could also be great for local economies, as such serious illnesses can affect employee attendance, due to actual infection or having to care for a relative. This decrease in workforce can potentially lead to an economic downturn and further unemployment. When combined with the more direct costs of treatment, the losses can be huge. Malaria alone is thought to cost Africa around US$ 12 billion a year in lost GDP, and continues to slow growth by more than 1% each year.

Just imagine what eliminating these diseases could do for these countries. Not only could their economies rise to a more globally competitive level, it could also lead the way to alleviating the more poverty-stricken areas. Families would be freer to go out and earn a living, with no need to worry about potential infection or having to care for sick family members. This could afford more food, better healthcare, and leisure activities, drastically increasing quality of life. Granted, it would also cause significant population growth in areas with already high birth rates, and the current food crises would not be helped by this, but these problems would be easier to control and solve in a disease-free society.

That is not to say there are no concerns associated with disease eradication, although it is extremely unlikely that these would outweigh the benefits. It could be that the process of natural selection would be halted, as these diseases weed out weaker immune systems in the population. But advances in technology and medicine already allow for new treatments of both genetic and infectious diseases, so if such a thing could happen, its causes are already present across the globe; why should it influence our decision in this case? I mean, Malaria was eradicated in the US many years ago and there have been no obvious downsides to this. You could also look at the example of smallpox, whose eradication not only saves the world around US$ 1.35 billion a year, but has had no clear effect on our immune systems. Even if it were to have such an effect, it would take many thousands of years for such a change to occur, and it is possible that our technology and medical treatments will have advanced enough to counter it. While this question does remain to be answered, there is no evidence of a decline in our immune systems, and I think it is safe to say that the benefits of eliminating these diseases would go way beyond the realm of public health. So should we let a completely hypothetical downside influence our decision? Answer: no, we shouldn’t.


The Key Publications mentioned by the Nobel Assembly at Karolinska Institutet:

  • Burg et al., Antimicrobial Agents and Chemotherapy (1979) 15:361-367.
  • Egerton et al., Antimicrobial Agents and Chemotherapy (1979) 15:372-378.
  • Tu et al., Yao Xue Xue Bao (1981) 16:366-370 (in Chinese).