Tag: Nobel Prize

DNA is Unstable! Luckily your Cells can handle that.

Another Nobel Prize story?! DAMN RIGHT! This time it’s the prize for chemistry, and Tomas Lindahl, Paul Modrich, and Aziz Sancar will collectively bask in the glory for their outstanding work in studying the mechanisms of DNA repair. Given the billions of cell divisions that have occurred in your body between conception and today, the DNA that is copied each time remains surprisingly similar to the original that was created in the fertilized egg that you once were. Why is that strange? Well, from a chemical perspective, it should be impossible, as all chemical processes are subject to random errors from time to time. On top of that, DNA is subjected to damaging radiation and highly reactive substances on a daily basis. This should have led to chemical chaos long before you even became a foetus! Now, I would hope that’s not the case for you, so how do our cells prevent this descent into madness? I’ll tell you! It’s because DNA is constantly monitored by various proteins that all work to correct these errors. They don’t prevent the damage from occurring; they hang around waiting for something to fix, and all three of the winning scientists contributed to our understanding of how our cells achieve this. So! Where do we begin?

A good place to start would be a brief description of the structure of DNA, as this will make things much clearer when we start discussing the research. DNA is primarily a chain of nucleotides, which are themselves made up of three components: a deoxyribose sugar, a phosphate group, and a nitrogenous base. These components are shown bonded together in Figure 1. It is also worth noting that there are four possible bases, each with a slightly different structure, and the one shown in the image is specifically Adenine. The others are known as Thymine, Cytosine, and Guanine, and all attach to the sugar in the same place. The two negative charges on the phosphate group allow it to form another bond to the adjacent nucleotide, and this continues on to form a long chain. Two separate chains are then joined together as shown in Figure 2, and voila! A molecule of DNA is formed!
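If it helps to think of this in code, the pairing rule between the two chains (Adenine with Thymine, Cytosine with Guanine) can be captured in a few lines of Python. This is just a toy illustration of mine, not real bioinformatics software; the result is read in reverse because the two chains run in opposite directions:

```python
# Toy illustration of Watson-Crick base pairing: A pairs with T, C pairs with G.
# The two chains are antiparallel, so the partner chain is read in reverse.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand: str) -> str:
    """Return the complementary chain for a given sequence of bases."""
    return "".join(PAIRS[base] for base in reversed(strand))

print(complementary_strand("ATCG"))  # -> CGAT
```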

Figure 1: The basic components of DNA. Source: http://pmgbiology.com/2014/10/21/dna-structure-and-function-igcse-a-understanding/
Figure 2: Representation of how the two chains of Nucleotides bond together to form a molecule of DNA. Source: http://www.d.umn.edu/claweb/faculty/troufs/anth1602/pcdna.html
Figure 3: A comparison of Cytosine and its Methylated equivalent. Source: http://blog-biosyn.com/2013/05/15/dna-methylation-and-analysis/

Now that we have a basic understanding of the structure of DNA, the research should make a hell of a lot more sense, and it begins with Tomas Lindahl. In the 1960s, Lindahl found himself asking a question: how stable is our DNA, really? At the time, the general consensus among scientists was that it was amazingly resilient. I mean… how else could it remain so constant? If genetic information was in any way unstable, multicellular organisms like us would never have come into existence. Lindahl began his experiments by working with RNA, another molecule found in our cells with a lot of structural similarities to DNA. What was surprising was that the RNA rapidly degraded during these experiments. Now, it was known that RNA is the less stable of the two molecules, but if it was destroyed so easily and quickly, could DNA really be all that stable? Continuing his research, Lindahl demonstrated that DNA does, in fact, have limited chemical stability, and can undergo many reactions within our cells. One such reaction is Methylation, in which a CH3 (methyl) group is added on to one of the bases in the DNA strand. The difference this causes is shown in Figure 3, and the reaction can occur with or without the aid of an enzyme. It will become relevant later on, as will the fact that it changes the shape of the base, affecting how other proteins can bind to it. All of these reactions can alter the genetic information stored in DNA, and if they were allowed to persist, mutations would occur much more frequently than they actually do.

Realising that these errors had to be corrected somehow, Lindahl began investigating how DNA was repaired, and by 1986 he had pieced together a molecular image of how “base excision repair” functions. The process involves many enzymes (and I don’t have the time or patience to describe them all), but a certain class known as “DNA glycosylases” are what actually break the bond between the defective base and the deoxyribose sugar, removing the base. Our cells contain many enzymes of this type, each of which targets a different type of base modification. Several more enzymes then work together to fill the gap with the correct, undamaged base, and there we have it! A mutation has been prevented. To help you visualise all this, you’ll find a graphical representation below in Figure 4.
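For the programmers among you, the logic of base excision repair can be caricatured in a few lines of Python. This is purely a toy model of my own (real glycosylases recognise chemical damage, not a letter “X”), and it assumes the undamaged complementary strand is lined up position-by-position opposite the damaged one:

```python
# Toy sketch of base excision repair (illustrative only, not real enzymology).
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def base_excision_repair(damaged: str, template: str) -> str:
    """Remove each damaged base (marked 'X') and refill it from the complementary strand."""
    repaired = list(damaged)
    for i, base in enumerate(repaired):
        if base == "X":                       # a "glycosylase" spots the damaged base
            repaired[i] = PAIRS[template[i]]  # repair enzymes insert the correct one
    return "".join(repaired)

# 'X' marks a damaged base; the undamaged strand lies opposite it:
print(base_excision_repair("ATXG", "TAGC"))  # -> ATCG
```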

Figure 4: Graphical representation of the process of Base Excision Repair. Source: http://www.nobelprize.org/nobel_prizes/chemistry/laureates/2015/popular-chemistryprize2015.pdf

But the science doesn’t end there folks! Remember, there were three winners, the second of which is Aziz Sancar, who discovered another method of DNA repair. This one is called “nucleotide excision repair”, and involves the removal of entire sets of nucleotides, rather than individual bases. Sancar’s interest was piqued by one phenomenon in particular: when bacteria are exposed to deadly doses of UV radiation, they can suddenly recover if exposed to visible blue light. This was termed “photoreactivation” for… obvious reasons. He was successful in identifying and isolating the genes and enzymes responsible, but it later became clear that bacteria had a second repair mechanism that didn’t require exposure to light of any kind. But Sancar wasn’t about to let these bacteria out-fox him and, after more investigations, he’d managed to identify, isolate, and characterise the enzymes responsible for this process as well. The bacteria were no match for his chemical prowess!

“But how does it work?!” I hear you shout. Well calm the f**k down and I’ll tell you! UV radiation can be extremely damaging, and can cause two adjacent Thymine bases in a DNA strand to directly bind to each other, which is WRONG! A certain endonuclease enzyme, known as an “exinuclease”, recognises this wrongness and decides that the damage must be fixed. It does this by making an incision on each side of the defect, and a fragment roughly 12 nucleotides long is removed. DNA polymerase and DNA ligase then fill in and seal the gap, respectively, and now we have a healthy strand of bacterial DNA! Sancar later investigated this repair mechanism in humans in parallel with other research groups, and while it is much more complicated, involving many more enzymes and proteins, it functions very similarly in chemical terms. You want a picture to make it easier? You’ll find it below in Figure 5!
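The same caricature works for nucleotide excision repair, except that here the whole ~12-nucleotide window around the damage is cut out and resynthesised, rather than a single base. Again, this is just an illustrative toy of mine; the lowercase “tt” standing in for a crosslinked Thymine dimer is entirely my own invention:

```python
# Toy sketch of nucleotide excision repair (illustrative only, not real enzymology).
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def nucleotide_excision_repair(strand: str, template: str, window: int = 12) -> str:
    """Cut out a ~12-nucleotide fragment around a thymine dimer ('tt') and refill it."""
    i = strand.find("tt")                  # the "exinuclease" locates the dimer
    if i == -1:
        return strand                      # nothing to repair
    start = max(0, i - (window - 2) // 2)  # incisions land on each side of the defect
    end = min(len(strand), start + window)
    # "DNA polymerase" resynthesises the stretch from the complementary template,
    # and "DNA ligase" seals the ends (implicit in the string concatenation):
    patch = "".join(PAIRS[b] for b in template[start:end])
    return strand[:start] + patch + strand[end:]

print(nucleotide_excision_repair("ACGTACGttACGTACG", "TGCATGCAATGCATGC"))
```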

Figure 5: Graphical representation of Nucleotide Excision Repair. Source: http://www.nobelprize.org/nobel_prizes/chemistry/laureates/2015/popular-chemistryprize2015.pdf

The final recipient of the Nobel Prize this year was Paul Modrich, who identified YET ANOTHER repair system (there are loads, you know), which he named the “mismatch repair” mechanism. Early on in his career, Modrich was examining various enzymes that affect DNA, eventually focussing on “Dam Methylase”, which couples methyl groups to DNA bases (I TOLD YOU THAT REACTION WOULD BE RELEVANT!). He showed that these methyl groups could basically behave as labels, helping restriction enzymes cut the DNA strand at the right location. But, only a few years earlier, another scientist called Matthew Meselson had suggested that they also indicate which strand to use as a template in DNA replication. Working together, these scientists synthesised a virus with DNA that had incorrectly paired bases, and methylated only one of the two DNA strands. When the virus infected the bacteria and injected its DNA, the mismatched pairs were corrected by altering the unmethylated strand. It would appear that the repair mechanism recognised the defective strand by its lack of methyl groups. Does it work that way in humans? Probably not. Modrich did manage to map the mismatch repair mechanism in humans, but DNA methylation serves many other functions in human cells, particularly those to do with gene expression and regulation. It is thought that strand-specific “nicks” (a missing bond between a phosphate group and a deoxyribose sugar) or ribonucleotides (nucleotide components of RNA) present in DNA may direct repair, but the mechanism remains to be found at this point.
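One last toy model, this time for bacterial mismatch repair: the methylated parent strand is trusted as the template, and any base on the new strand that fails to pair with it is swapped out. As before, this is just my own sketch of the logic, not real enzymology:

```python
# Toy sketch of mismatch repair (illustrative only, not real enzymology).
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def mismatch_repair(parent: str, new: str) -> str:
    """Correct the unmethylated (new) strand wherever it fails to pair with the methylated parent."""
    return "".join(
        c if PAIRS[p] == c else PAIRS[p]  # keep correct pairs, fix mismatches
        for p, c in zip(parent, new)
    )

# The new strand below has a G where a C should be (the parent's G pairs with C):
print(mismatch_repair("TAGC", "ATGG"))  # -> ATCG
```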

Figure 6: Structure of Olaparib. Source: http://www.reagentsdirect.com/index.php/small-molecules/small-molecules-1/olaparib/olaparib-263.html

But why should we care? Granted, it is nice to know this stuff (at least I think so), but what can this information be used for? Well, it actually has applications within the world of medicine, as errors in repair mechanisms can often lead to cancer. In many forms of cancer these mechanisms have been at least partially turned off, but the cells are also heavily reliant on the mechanisms that remain active. As we mentioned earlier, a lack of these mechanisms leads to chemical chaos, and that would cause the cancer cells to simply die. This has led to drugs designed to inhibit the remaining repair systems, slowing down or stopping cancer growth entirely! One such drug is Olaparib, whose structure you can see in Figure 6. This drug functions by inhibiting two specific proteins (PARP-1 and PARP-2), which are integral in detecting certain flaws in replicated DNA and directing repair proteins to the site of damage. Cancer cells treated with this drug have been shown to be more sensitive to UV radiation, making one form of treatment much more effective.

And with that, we bring our Nobel Prize stories for this year to an end! I think it’s safe to say that the work described here deserved a prize of some sort, as it not only takes a lot of skill and dedication, but it has led to new medical treatments and a MUCH greater understanding of how our DNA behaves. Have you enjoyed our time spent on the science of the Nobel Prize? DAMN RIGHT YOU HAVE. O_O


Did you know Neutrinos have Mass? I didn’t even know they were Catholic!

Terrible jokes aside, this was actually a HUGE discovery in the world of physics, so it’s no surprise that two of the scientists responsible, Takaaki Kajita and Arthur B. McDonald, were awarded this year’s Nobel Prize. Their research led to the discovery of the phenomenon now called “Neutrino Oscillations”, proving that these elementary particles do in fact have mass. Now, at this point that will likely not mean anything to you (it meant f**k all to me at first!), so before we dive into the explanation, we’re going to need a brief history of these elusive particles.

Neutrinos were first proposed by physicist Wolfgang Pauli when he attempted to explain conservation of energy in beta decay, a type of radioactive decay in atomic nuclei. Noticing that some energy was missing after this decay, he suggested that some of it was carried away by an electrically neutral, weakly interacting, and extremely light particle. This concept was such a mind-f**k that Pauli himself had a hard time accepting its existence: “I have done a terrible thing, I have postulated a particle that cannot be detected.” But this all changed in June 1956, when physicists Frederick Reines and Clyde Cowan noticed that these particles had left traces in their detector. This was big news, and as a result many experiments began to both detect and identify them.

So! Where do these particles come from? Well, some have been around since the very beginning of the Universe, created during the Big Bang, and others are constantly being created in a number of processes both in Space and on Earth. These processes include exploding supernovas, reactions in nuclear power plants, and naturally occurring radioactive decay. This even occurs inside our bodies, with an average of 5000 neutrinos per second being produced when an isotope of potassium decays. Don’t worry! These things are harmless (remember – weakly interacting) so there’s no need to go on a neutrino freak-out. In fact, most of the neutrinos that reach Earth originate in nuclear reactions inside the Sun, a fact we’ll need to remember for later. There are also three types (or “flavors”) of neutrino according to the Standard Model of particle physics (electron-neutrinos, muon-neutrinos, and tau-neutrinos), and the exact flavor is determined by which charged particle is also produced during the decay process (electron / muon / tau-lepton). The Standard Model also requires these particles to be massless, which will be important later on.

Now that we know all this, we can let the experimentation begin! Both of the Nobel Prize winning scientists were working with research groups attempting to detect, quantify, and identify neutrinos arriving on Earth, albeit on different parts of the globe. It is also worth noting that both detectors were built deep underground in order to shield them from background radiation; neutrinos pass through kilometres of rock with ease, but little else does. Takaaki Kajita was working at the Super-Kamiokande detector, which became operational in 1996 in a mine north-west of Tokyo. This was able to detect both muon and electron-neutrinos produced when cosmic radiation particles interact with molecules in Earth’s atmosphere, and could take readings both from neutrinos arriving from the atmosphere above the detector, and from those that had arrived on the other side of the globe and travelled through the mass of the whole planet. Given that the amount of cosmic radiation doesn’t vary depending on position, the number of neutrinos detected from both directions should have been equal, but more were observed arriving from above the detector. Neutrinos were the cause of yet another mind-f**k… and it was suggested that if they had changed flavor, from muon / electron to tau-neutrinos, then this discrepancy would make sense.

Fast forward a few years to 1999, and the Sudbury Neutrino Observatory had become active in a mine in Ontario, Canada. This is where Arthur B. McDonald and his research group began measuring neutrinos arriving on Earth from the Sun using two methods: one could only detect electron-neutrinos, while the other could detect all three flavors but not distinguish between them. Remember that most of the neutrinos arriving on Earth come from the Sun? Well, it was also known that the reactions within the Sun only produce electron-neutrinos. This meant that both detection methods should have yielded the same results, as only electron-neutrinos would be detected. However, measurements of all three flavors were greater than the readings for electron-neutrinos only. This could really only mean one thing: the neutrinos must be able to change flavors.

This is where things get REALLY confusing, as neutrinos need to have mass to be able to change flavors. Why? The answer lies in Quantum Mechanics, and a phrase I’ve frequently heard is: if you claim to understand Quantum Mechanics, that only confirms how much you don’t. Now, I’m gonna need you to bear with me here, as I’m going to attempt to explain this while confusing you as little as possible, a task that gave me a BAD headache while planning and researching. We’ll start this endeavor by stating that neutrinos can be classified in one of two ways: by their flavor (three types) or by their mass (also three types). We’ll also need to point out that, thanks to the “Uncertainty Principle”, if you know the flavor of a neutrino, you cannot know its mass, and vice versa. This means that you cannot know the mass of a muon-neutrino / electron-neutrino etc. At all. It’s simply not possible. This ALSO means a neutrino of a precise and identified flavor exists as a precise superposition (or mix) of all three mass types. Each flavor is a different mix of the mass types, and it is exactly this property that allows a neutrino to change identity. Welcome to the f**ked up world of Quantum Mechanics!
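If a picture of that superposition helps, here’s a tiny two-state sketch in Python. The real thing involves three flavors and complex amplitudes, and the mixing angle below is an arbitrary illustrative value of mine; the point is simply that each flavor state is a different rotation of the mass states, and the squared amplitudes always sum to one:

```python
# Minimal two-state sketch of flavour/mass mixing (illustrative numbers only).
import math

def flavour_as_mass_mix(theta: float):
    """Return the (mass-1, mass-2) amplitudes of two flavour states."""
    c, s = math.cos(theta), math.sin(theta)
    flavour_a = (c, s)    # e.g. a muon-neutrino: mostly mass-1, some mass-2
    flavour_b = (-s, c)   # e.g. a tau-neutrino: a *different* mix of the same two
    return flavour_a, flavour_b

a, b = flavour_as_mass_mix(math.pi / 6)  # arbitrary illustrative mixing angle
print(a)                     # amplitudes of the two mass states making up flavour a
print(a[0] ** 2 + a[1] ** 2)  # the squared amplitudes (probabilities) sum to 1
```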

Einstein’s theory of special relativity states that a particle’s velocity is dependent on its mass and its energy. So, if we have an electron-neutrino moving through space, each of the three mass types it consists of moves at a slightly different velocity. It is this small difference that causes the mix of mass types to change as the particle moves, and by changing the mix, you change the flavor of the neutrino. Congratulations! You are now somewhat closer to understanding (or not understanding, I guess…) the phenomenon of “Neutrino Oscillations”!
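For the numerically inclined, the standard two-flavor approximation boils all of this down to one formula for the probability that a neutrino has changed flavor after travelling a distance L: P = sin²(2θ) · sin²(1.27 Δm² L / E). The parameter values below are my own illustrative choices, roughly in the atmospheric-neutrino range: a short ~15 km path from the atmosphere above gives the neutrino almost no chance to oscillate, while a ~12700 km path through the whole planet gives it plenty.

```python
# Two-flavour neutrino oscillation probability (standard approximation):
#   P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, L in km, and E in GeV. Parameter values are illustrative.
import math

def oscillation_probability(theta: float, dm2: float, L: float, E: float) -> float:
    """Probability that a neutrino has changed flavour after travelling L km."""
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2 * L / E) ** 2

# Maximal mixing (theta = pi/4), dm2 ~ 2.5e-3 eV^2, a 1 GeV neutrino:
print(oscillation_probability(math.pi / 4, 2.5e-3, 15, 1.0))     # from above
print(oscillation_probability(math.pi / 4, 2.5e-3, 12700, 1.0))  # through the Earth
```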

While all of this is excellent at causing brain pain, it also opens the gateway to completely new physics as, like I mentioned before, the Standard Model REQUIRES neutrinos to be massless, which is clearly not the case. This discovery marked the first successful experimental challenge to this model in over 20 years, and it is now obvious that it cannot be a complete theory of how the fundamental constituents of the Universe function. Physics now has many new questions.

Did you make it this far? Well done! Go lie down and let your brain rest. It won’t make any more sense tomorrow.

Sources:

  • Neutrino Types and Neutrino Oscillations. Of Particular Significance: Conversations about Science with Theoretical Physicist Matt Strassler. Link: http://profmattstrassler.com/articles-and-posts/particle-physics-basics/neutrinos/neutrino-types-and-neutrino-oscillations/
  • How Are Neutrino Flavors Different? Maybe There Is Only One Vanilla. Cosmology Science by David Dilworth. Link: http://cosmologyscience.com/cosblog/how-neutrino-flavors-are-different/
  • Neutrino Physics. SLAC Summer Institute on Particle Physics (SS104), Aug. 2-13, 2004. Author: Boris Kayser. Link: http://www.slac.stanford.edu/econf/C040802/papers/L004.PDF
  • The chameleons of space. The Nobel Prize in Physics 2015 – Popular Science Background. The Royal Swedish Academy of Sciences. Link: http://www.nobelprize.org/nobel_prizes/physics/laureates/2015/popular-physicsprize2015.pdf
  • Velocity Differences of Neutrinos. Of Particular Significance: Conversations about Science with Theoretical Physicist Matt Strassler. Link: http://profmattstrassler.com/articles-and-posts/particle-physics-basics/neutrinos/neutrino-types-and-neutrino-oscillations/velocity-differences-of-neutrinos/

Nobel Prize! Drugs to take on parasites bring home the award!

As you may or may not have heard, the winners of the Nobel Prize in Physiology or Medicine were announced today, and a total of three scientists received the award: Satoshi Ōmura (a Japanese microbiologist), William C. Campbell (an expert on parasite biology), and Youyou Tu (a Chinese medical scientist and pharmaceutical chemist). But more important than who they are is what they achieved, and each of these great minds has contributed immensely to the treatment of infections caused by some nasty parasites.

Both Ōmura and Campbell contributed to the discovery of a new drug that is extremely effective in killing Microfilariae, the larvae of the parasitic worm that causes Onchocerciasis, or River Blindness as it might be known to us laymen. The story begins with Ōmura, described as an expert in isolating natural products, deciding to focus on a particular group of bacteria known as Streptomyces. These bacteria are known to produce many compounds with antibacterial properties, including Streptomycin, the drug initially used to treat Tuberculosis. Ōmura, using his biological powers, successfully isolated and cultivated new strains of Streptomyces, and from thousands of cultures he selected the 50 best candidates that merited further analysis. One of these turned out to be Streptomyces avermitilis.

Skip forward a bit, and we see that Campbell acquired some of these samples and set about exploring their efficacy against parasites. He was able to show that a component produced by Streptomyces avermitilis was very efficient against parasites in both domestic and farm animals, which he then purified and named Avermectin. Given Campbell’s findings, this was initially used as a veterinary drug, but subsequent modification on the molecular level (an addition of only two hydrogen atoms!) gave us the “wonder drug” Ivermectin. This was then shown to out-perform the previously used DEC (Diethylcarbamazine for the chemistry nerds), primarily due to the lack of side effects such as inflammation. This has made the drug extremely safe for human use, allowing it to be administered to patients by non-medical staff, and even by individuals in small rural communities with no hospital experience at all (provided some very basic training). This is what makes the drug so special, as it can be safely used in less developed parts of Africa and South America, where River Blindness is most common and advanced medical care may be unreachable or unaffordable.

The other major advancement in this field worthy of the Nobel Prize was the work of Youyou Tu, who developed an effective treatment for another well-known parasitic infection: Malaria! Inspired by traditional Chinese medicine, she identified an interesting extract from the plant Artemisia annua, or as you may know it, Sweet Wormwood. Despite initial results appearing inconsistent, Tu revisited the traditional literature and found clues that led to the successful extraction of the active component, Artemisinin. She then became the first to demonstrate that this compound is highly effective against Malaria, killing the parasite even at early stages of development. While the precise mechanism of how Artemisinin achieves this is still debated, the leading hypotheses are that the drug forms a highly reactive compound within the Malaria parasite, which is then capable of irreversibly modifying and damaging proteins and other important molecules, and the parasite goes down!

The consequences of these discoveries have been felt across the globe, but the countries most affected by these diseases stand to gain the most. River Blindness is nowhere near the huge problem it used to be, with treatment scenarios moving away from control and towards eradication and elimination. This could be MASSIVE for the global economy, with estimates suggesting a potential saving of US$ 1.5 billion over a few decades! Eradication could also be great for local economies, as such serious illnesses can affect employee attendance, whether through actual infection or having to care for a relative. This decrease in workforce can potentially lead to an economic downturn and further unemployment. When combined with the more direct costs of treatment, the losses can be huge. Malaria alone is thought to cost Africa around US$ 12 billion a year in lost GDP, and continues to slow growth by more than 1% each year. Just imagine what eliminating these diseases could do for these countries. Not only could their economies rise to a more globally competitive level, it could also lead the way to alleviating poverty in the worst-affected areas. Families would be freer to go out and earn a living, with no need to worry about potential infection or having to care for sick family members. This could afford more food, better healthcare, and leisure activities, drastically increasing quality of life. Granted, it would also cause significant population growth in areas with already high birth rates, and the current food crises would not be helped by this, but these problems would be easier to control and solve in a disease-free society.

That is not to say there are no concerns associated with disease eradication, although it is extremely unlikely that these would outweigh the benefits. It could be argued that the process of natural selection would be halted, as these diseases weed out weaker immune systems in the population. But advances in technology and medicine are already providing new treatments for both genetic and infectious diseases across the globe, so if such an effect were possible, its causes are already with us; why should it influence our decision in this case? I mean, Malaria was eradicated in the US many years ago, and there have been no obvious downsides. You could also look at the example of smallpox, whose eradication not only saves the world around US$ 1.35 billion a year, but has had no clear effect on our immune systems. Even if it were to have such an effect, it would take many thousands of years for the change to occur, and it is possible that our technology and medical treatments will have advanced enough to counter it. While this question does remain open, there is no evidence of a decline in our immune systems, and I think it is safe to say that the benefits of eliminating these diseases would go way beyond the realm of public health. So should we let a completely hypothetical downside influence our decision? Answer! No we shouldn’t.

Sources:

The Key Publications mentioned by the Nobel Assembly at Karolinska Institutet:

  • Burg et al., Antimicrobial Agents and Chemotherapy (1979) 15:361-367.
  • Egerton et al., Antimicrobial Agents and Chemotherapy (1979) 15:372-378.
  • Tu et al., Yao Xue Xue Bao (1981) 16:366-370 (in Chinese).