Category: Chemistry

The CRISPR Revolution: How will Gene Editing change our World?

CRISPR will offer unprecedented control in gene editing. Image Source: http://gizmodo.com/everything-you-need-to-know-about-crispr-the-new-tool-1702114381

The age of gene editing is upon us! Or it will be soon, thanks to the revolutionary new technology known as CRISPR. I would be VERY surprised if you haven’t at least heard of it by now, especially when you consider the attention it gets from the science media, attention that is very understandable once you start looking at exactly what this technology can do, and what that potentially means.

The story began in 2013, when researchers claimed that they had used CRISPR-Cas9 to successfully slice the genome of human cells at sites of their choosing. This understandably triggered a massive ethics debate which is still going on today. A huge amount of the conversation focussed on how it could be used to fight genetic diseases or even edit human embryos, but there are many more potential applications. As the debate continued, researchers set about editing the genomes of many other organisms, including plants and animals, and even explored its potential for studying specific gene sequences. The range of applications is truly remarkable.

Some claim that the real revolution right now is in the lab, where CRISPR has made the study of genetics significantly easier. There are two main components to the CRISPR-Cas9 system: a Cas9 enzyme that acts as a pair of molecular scissors, cutting through the DNA strand, and a small RNA molecule that directs the system to a specific point. Once the cut is made, the cell’s own DNA repair mechanisms will often mend it, but not without making the occasional mistake.
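If you like to think in code, the cut-and-botched-repair idea can be caricatured in a few lines of Python. This is purely a toy of my own invention, not any real bioinformatics tool: the guide sequence, the cut rule, and the “repair” step are all illustrative stand-ins.

```python
import random

def cas9_cut(genome: str, guide: str) -> int:
    """Return a cut position where the guide sequence matches the genome.

    Real Cas9 needs a PAM site and cuts a few bases inside the match;
    this toy model simply cuts at the end of the matching stretch.
    """
    site = genome.find(guide)
    if site == -1:
        raise ValueError("guide does not match this genome")
    return site + len(guide)

def error_prone_repair(genome: str, cut: int) -> str:
    """Rejoin the cut strand, sometimes losing a few bases (toy NHEJ)."""
    deletion = random.randint(0, 3)  # the occasional small mistake
    return genome[:cut] + genome[cut + deletion:]

genome = "ATGGTACCTTGACGGATCCAGT"
guide = "CCTTGACG"
cut = cas9_cut(genome, guide)           # cut lands at position 14
print(error_prone_repair(genome, cut))  # may now carry a small deletion
```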

Even a small error during DNA repair can completely alter the structure of the protein a gene codes for, or stop its production altogether. By exploiting these traits and errors, scientists can study what happens to a cell or organism when a specific gene or protein is altered. Such a level of control will likely make these studies less prone to error, and lead to a much better understanding of the role played by specific genes and proteins.

But it doesn’t stop there! There exists a second repair mechanism that mends the DNA strand according to a template provided by the cell. If researchers were to swap this template for one of their own design, they could potentially insert nearly any DNA sequence at whatever site they desire, a prospect that would allow genomes to be not just edited, but actively designed.
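Continuing the toy sketch from above, template-directed repair is just a splice: hand the cell (or here, the function) the sequence you want written in. Again, entirely illustrative:

```python
def template_repair(genome: str, cut: int, insert: str) -> str:
    """Toy homology-directed repair: paste a supplied sequence in at the cut."""
    return genome[:cut] + insert + genome[cut:]

genome = "ATGGTACCTTGACGGATCCAGT"
cut = 14  # the position the toy cas9_cut() above would find
print(template_repair(genome, cut, "GGGTTT"))  # designed, not just edited
```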

That idea may sound somewhat futuristic at this point, but in reality it’s already being put to use. Due to the relative precision and ease that CRISPR offers, scientists have already made a start on editing the genes of animals for applications ranging from agriculture to the revival of extinct species. Some CRISPR modified animals are even being marketed as pets! As you can imagine, regulators are still working out how to deal with such animals, as there are some obvious safety and ecological concerns, but that hasn’t stopped the science from happening.

Disease resistance is one of the more popular uses for CRISPR, and it can provide a variety of agricultural and ecological benefits. For example, there is hope that such an application could help stem the huge loss of honey bee populations around the world. This loss can in part be attributed to disease and parasites, but biotechnology entrepreneur Brian Gillis may have found a solution. He has been studying the genomes of so-called “hygienic bees”, which get their name from their obsessive hive cleaning habits and removal of sick or infested larvae.

Gillis’ idea is that, if the genes responsible for this behaviour can be identified, they could then be edited into the genomes of other populations to significantly improve hive health. But whether or not this can be done remains to be seen, as no such genes have been found as of yet, and the roots of the behaviour may prove to be much more complex. At this point we’ll just have to wait and see.

Another potential application, one that I personally find much more interesting, is the revival of extinct species. Around 4000 years ago the woolly mammoth went extinct, an event that was at least partially due to hunting by humans. Well, it now looks like we might be able to make up for that mistake! This is thanks to scientist George Church from Harvard Medical School in Boston, who has plans to use CRISPR to bring back this long-lost species. He hopes to transform some endangered Asian elephants into a breed of cold-resistant elephants that will closely resemble the mammoth, and release them into a reserve in Siberia where they will have space to roam.

But the process of editing, birthing, and then raising these mammoth-like elephants is no easy task. The first problem is how to go from an edited embryo to a developed baby, and Church has said it would be unethical to implant the embryos into endangered elephants for the sake of an experiment. Since that option is off the table, his lab is currently looking into the possibility of an artificial womb; a device that does not currently exist.

It’s worth pointing out that I’ve just barely scratched the surface of what CRISPR can do for animals, let alone the many other organisms it can be applied to. I could very easily write an entire post about it to be honest, but there is one final point that definitely deserves some attention. We’ve already seen how amazingly versatile CRISPR is, and it stands to reason that, if you can edit the genomes of animals in this way, you can almost certainly do the same to humans.

As I’m sure you can imagine, there is a very heated debate about how it could and should be used to modify the genomes of human embryos. One key argument in favour is that many currently available technologies already allow parents to do this. These include prenatal genetic screening to check for conditions like Down syndrome, and in-vitro fertilisation allowing parents to select embryos that don’t have certain disease-causing mutations. One could say that direct genome editing is simply the next step in technology of this nature.

On the other hand, one needs to consider what genome editing would mean for society as a whole. For example, by allowing parents to edit out traits they see as debilitating, we could potentially create a less inclusive society. In such a world even the tiniest of flaws might be seen as a huge disadvantage, with everyone being subjected to much harsher judgement. Would that be beneficial for the human race? Unfortunately that’s not a question we can answer, but it doesn’t sound like a pleasant world to live in.

Whether or not you’re in favour of the human race dictating the genetics and characteristics of future generations seems to be a matter of opinion right now, but it’s certainly not fair to say that either side of the debate outweighs the other. To me, there doesn’t seem to be an obvious answer. On the one hand, we have the chance to truly improve the human race by ensuring that we continue to improve and adapt as time goes on, assuming of course that we have the knowledge to do so. But I cannot say for sure what type of society that would create, or if it’s one I’d really like to live in.

Regardless of your opinion on CRISPR and gene editing, you can’t deny that this new technology has the potential to completely change our world and our society. Given that it can improve our understanding of genetics and allow us to physically alter the DNA of living creatures, one could easily describe it as the beginning of a genetic revolution. We’ll have to just wait and see if it will be put to use in the ways I’ve explored here, but it’s certainly something I will be keeping an eye on. Hopefully I got you interested enough to do the same.


A Biological Supercomputer?!

An artistic representation of the new biological microchip. Source: http://www.sciencealert.com/scientists-have-developed-the-world-s-first-living-breathing-supercomputer

Supercomputers are truly marvellous examples of what technology can accomplish, being used in many areas of science to work through some incredibly complex calculations. Their computational power is truly a feat of human engineering. But, unfortunately, they’re not perfect. Not only are they absolutely huge, often taking up an entire room, but they’re also expensive, prone to overheating, and a huge drain on power. They require so much of it that they often need their own power plant to function.

But fear not! As always, science is capable of finding a solution, and this one comes in the form of a microchip concept that uses biological components found inside your own body. It was developed by an international team of researchers, and it uses proteins in place of electrons to relay information. Their movement is powered by Adenosine Triphosphate (ATP), the fuel that provides energy for all the biological processes occurring in your body right now. It can quite literally be described as a living microchip.

The chip’s size may not seem like much, measuring only 1.5 cm², but if you zoom in you get a very different picture. Imagine, if you will, you’re in a plane looking down at an organised and very busy city. The streets form a sort of grid spanning from one end of the city to the other, which closely resembles the layout of this microchip. The proteins are then the vehicles that move through this grid, consuming the fuel they need as they go. The main difference being that, in this case, the streets are actually channels that have been etched into the chip’s surface.

“We’ve managed to create a very complex network in a very small area”, says Dan Nicolau Sr., a bioengineer from McGill University in Canada, adding that the concept started as a “back of an envelope idea” after what he thinks was too much rum. I guess some innovative ideas require a little help getting started.

Once the rum was gone and the model created, the researchers then had to demonstrate that the concept could actually work. This was done by setting the chip a mathematical problem, with success defined as the microchip identifying all the correct solutions with minimal errors.

The process begins with the proteins in specific “loading zones” that guide them into the grid network. Once there, the journey through the microchip begins! The proteins move through the grid, via various junctions and corners, processing the calculation as they go. Eventually, they emerge at one of many exits, each of which corresponds to one possible solution to the problem. In the specific case described by the researchers, analysis of the results revealed that correct answers were found significantly more often than incorrect ones, indicating that the model can work as intended.
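For the programmers out there, the flavour of this computation is easy to mimic. The demonstration problem was a small combinatorial one of the subset-sum type, where each split junction amounts to choosing whether or not to include a number in a running total. The sketch below is my own loose analogy, not the researchers’ actual network encoding, and the numbers are illustrative:

```python
import random
from collections import Counter

def run_agent(numbers):
    """One protein 'agent' wanders the grid: at each split junction it
    either adds that junction's number to its running total or not."""
    total = 0
    for n in numbers:
        if random.random() < 0.5:  # the turn taken at the junction
            total += n
    return total  # the exit channel corresponds to this final sum

numbers = [2, 5, 9]  # an illustrative set, not the study's actual encoding
exits = Counter(run_agent(numbers) for _ in range(10_000))

# Every exit that gets visited corresponds to an achievable subset sum.
print(sorted(exits))  # [0, 2, 5, 7, 9, 11, 14, 16]
```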

The researchers claim that this new model has many advantages over existing technology, including a reduction in cost, better energy efficiency, and minimal heat output, making it ideal for the construction of small, sustainable supercomputers. They also argue that this approach is much more scalable in practice, but recognise that there is still much to do to move from the current model to a fully functioning supercomputer. It’s early days, but we know the idea works.

So, while it may be quite some time before we start seeing these biological supercomputers being actually put to use, it certainly seems like a fruitful path to follow. Society would no doubt benefit from the reduced cost and power usage that this new technology would bring, and these aspects would also make their application in scientific research much easier.

In fact, if the decrease in cost and power usage is a dramatic one, then scientists could potentially use far more of these computers than they do at the moment. This is a change that would have a huge impact on the kind of calculations that could be performed, and could potentially revolutionise many areas of science. Even though we’ll have to wait, that’s something I am very much looking forward to.


New Fuel Cell Technology keeps the Environment in mind!

I imagine you’re all pretty familiar with fuel cell technology at this point. It’s been around for quite some time and is often heralded as the answer to green, renewable energy. For the most part that is quite true, as this technology has a number of advantages over current combustion-based options. It not only produces smaller amounts of greenhouse gases, but also none of the air pollutants associated with health problems.

That being said, the technology isn’t perfect, with many improvements still to be made. One problem is that a fuel cell’s environmental impact depends greatly on how the fuel is acquired. For example, the by-products of a Hydrogen (H2) fuel cell may only be heat and water, but if electricity from the power grid is used to produce the H2 fuel, then the associated CO2 emissions are still significant.

The technology also requires the use of expensive or rare materials. Platinum (Pt) is easily the most commonly used catalyst in current fuel cell technology, and this is a rare metal often costing around 1000 US dollars per ounce. This really hurts the commercial viability of the fuel cell, but research regarding alternative materials is progressing.

While I’m certain these kinks will be worked out eventually, it is still worth considering other options. One such option is the Microbial Fuel Cell (MFC), a bio-electrochemical device that uses respiring microbes to convert an organic fuel into electrical energy. These already have several advantages over conventional fuel cell technology, primarily due to the fact that bacteria are used as the catalyst.

The basic structure of an MFC is shown in Figure 1, and you can see that it closely resembles that of a conventional fuel cell. In fact the method by which it produces electricity is exactly the same, the only differences are the fuel and the catalyst.

Figure 1: The basic structure of an MFC. Source: http://www.sciencedirect.com/science/article/pii/S1110016815000484

The fuel for an MFC is often an organic molecule that can be used in respiration. In the figure it is shown to be glucose, and you can see that its oxidation yields both electrons and protons. It is worth noting that the species shown as “MED” is a mediator molecule used to transfer the electrons from the bacteria to the anode. Such molecules are no longer necessary, as most MFCs now use electrochemically active bacteria known as “Exoelectrogens”. These bacteria can directly transfer electrons to the anode surface via a specialised protein.
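For the chemistry fans, the textbook half-reactions for a glucose-fed MFC look like this (the general scheme, not the specifics of any one device):

```latex
\begin{align*}
\text{Anode:}\quad   & \mathrm{C_6H_{12}O_6 + 6\,H_2O \longrightarrow 6\,CO_2 + 24\,H^+ + 24\,e^-}\\
\text{Cathode:}\quad & \mathrm{6\,O_2 + 24\,H^+ + 24\,e^- \longrightarrow 12\,H_2O}
\end{align*}
```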

As I mentioned before, this technology has several advantages over conventional fuel cells in terms of cost and environmental impact. Not only are bacteria both common and inexpensive compared to Pt, but some can respire waste molecules from other processes. This means not only that less waste would be sent to landfill, but that the waste itself would actually become a source of energy. This has already been applied in some waste-water treatment plants, with the MFCs producing a great deal of energy while also removing waste molecules.

Now you’re probably thinking, “Nathan, this is all well and good, but it’s not exactly new technology”. You’d be right there, but some scientists from the University of Bristol and the University of the West of England have made a big improvement. They have designed an MFC that is entirely biodegradable! The research was published in the journal ChemSusChem in July of 2015, and it represents a great step towards further reducing the environmental impact of these fuel cells.

Many materials were tried and tested during the construction process. Natural rubber was used as the membrane (see Figure 1), the frame of the cell was produced from polylactic acid (PLA) using 3D printing techniques, and the anode was made from Carbon veil with a polyvinyl alcohol (PVA) binder. All of these materials are readily biodegradable with the exception of the Carbon veil, but this is known to be benign to the environment.

The cathode proved to be more difficult, with many materials being tested for conductivity and biodegradability. The authors noted that conductive synthetic latex (CSL) can be an effective cathode material, but lacks the essential biodegradability. While this meant it couldn’t be used in the fuel cell, it was used as a comparison when measuring the conductivity of other materials.

Testing then continued, with an egg-based and a gelatin-based mixture as the next candidates. While both of these were conductive, they weren’t nearly good enough to be used; CSL actually performed 5 times better than either of them. But science cannot be beaten so easily! Both mixtures were improved by modification with lanolin, a fatty substance found in sheep’s wool, which is known to be biodegradable. This caused a drastic increase in performance for both mixtures, with the egg-based one outperforming CSL! This increase easily made it the best choice for the cathode.

With all the materials now decided, it was time to begin construction on the fuel cell. A total of 40 cells were made and arranged in various configurations. These are shown in Figure 2, and each configuration was tested to determine its performance. Of these three, the stack shown in Figure 2C was found to be able to continuously power an LED that was directly connected. It was also connected to some circuitry that harvested and stored the energy produced, and the authors report that the electricity produced by this method could power a range of applications.

Figure 2: a) A set of 5 fuel cells connected in parallel. Known as a “parallel set”. b) A stack of 4 parallel sets. c) A stack of 8 parallel sets. Source: http://onlinelibrary.wiley.com/wol1/doi/10.1002/cssc.201500431/full
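A quick aside on why you’d bother with both arrangements: cells wired in parallel add their currents, while parallel sets stacked in series add their voltages. Here’s a rough Python sanity check using made-up per-cell numbers (the paper’s actual figures aren’t quoted here):

```python
def stack_output(cell_voltage, cell_current, n_parallel, n_series):
    """Ideal combination: parallel wiring adds current, series adds voltage.
    (Real stacks lose some of this to internal resistance.)"""
    return cell_voltage * n_series, cell_current * n_parallel

# Hypothetical per-cell values, NOT taken from the paper:
v, i = stack_output(cell_voltage=0.5, cell_current=0.0001,
                    n_parallel=5, n_series=8)
print(f"{v:.1f} V, {i * 1000:.1f} mA")  # 4.0 V, 0.5 mA for this toy stack
```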

While there is much to celebrate here, the authors also address some of the concerns associated with this technology. The most notable concern is how long the fuel cells can operate, and the authors report that after 5 months of operation the stacks were still producing power. This could potentially be longer in an application, as the operational environment of a fuel cell rarely mimics natural conditions.

They also discuss how these MFCs didn’t perform as well as some produced in other studies, but these were the first to be made from cheap, environmentally friendly materials. If anything, this research shows that such fuel cells can at least be functional, and are an excellent target for further research.

So we’ll have to wait for more research to see if this technology will actually take off, and given the timescale of this study it’s likely that we’ll be waiting quite some time. Even so, this is an important step on the road to completely sustainable living, as it shows that even our power sources could be made from completely environmentally friendly materials. Now we just have to hope people take notice. Let’s make sure they do!


A Step Forward for Wearable Electronics

An artistic representation of Graphene. Source: http://3dprint.com/61659/graphene-ink-capabilities/

Research on flexible, wearable electronic devices is already well under way, with products such as the Wove Band attracting a great deal of attention. In fact, it’s a field of increasing research interest due to many potential applications. These include monitoring health and fitness, functional clothes, as well as many mobile and internet uses.

Such technology could have many implications in several areas of life. These might involve more effective and immediate monitoring of patients outside hospital, potentially reducing response times if something were to go wrong, and moving communications technology into an entirely new age. The smartphone as we know it could be a thing of the past once this technology takes off.

Given the plethora of uses and the high profile of the research, it’s no surprise that many materials have already been considered. Silver nanowires, carbon nanotubes, and conductive polymers have all been explored in relation to flexible electronics. Unfortunately, problems have been reported in each case, such as high manufacturing costs in the case of silver nanowires and stability issues for some polymers.

But fear not, my fellow science enthusiasts! Another material has appeared to save the day. It’s one you’re probably quite familiar with by now – Graphene! This two-dimensional hexagonal array of carbon atoms has great potential in the field of flexible electronics due to its unique properties, which include great conductivity and stability. However, known methods for producing the Graphene sheets that would be needed yield structures with a rather high sheet resistance, which is not ideal.

Luckily, the invention of conductive Graphene inks provided a way to overcome this problem, allowing for sheets of superior conductivity, greater flexibility, lighter weight, and lower cost. That sounds VERY good for a wearable, flexible electronic device. These inks can also be prepared with or without a binder, a chemical that helps the ink stick to a surface. This brings both advantages and disadvantages, as a binder can improve the sheet’s conductivity, but it also requires high-temperature annealing processes, which limits its use on heat-sensitive substrates such as papers and textiles.

Well, a new paper published in Scientific Reports in December claims to have found a production method that doesn’t require a binder and still achieves a high conductivity. The research was conducted by scientists at the University of Manchester, United Kingdom, and it represents an important step forward in making flexible Graphene-based electronics a reality. The production method first involves covering a surface with an ink containing Graphene nanoflakes, then drying it at 100 °C. This forms a highly porous coating, which is not ideal, since it leads to high contact resistance and an uneven electron pathway.

The authors overcame this problem by compressing the dry coating, which led to a thin, highly dense layer of Graphene. This not only improved the adhesion of the nanoflakes, but made the structure much less porous, improving its conductivity. It is also noted that greater compression led to higher conductivity values, with the highest being 4.3×10⁴ S/m. But the science didn’t end there! The authors then went on to test how flexible electronic components made from this material would perform with regard to communications technology. Both transmission lines (TLs) and antennae were created from the Graphene sheets, and tested in various scenarios.
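To connect that conductivity value to something printed-electronics people care about, sheet resistance is simply 1/(conductivity × thickness). A quick back-of-the-envelope check, where the film thickness is my own assumption rather than a number from the paper:

```python
sigma = 4.3e4  # conductivity in S/m, the value reported in the paper
t = 10e-6      # assumed film thickness of 10 micrometres (my guess)

R_sheet = 1 / (sigma * t)  # sheet resistance in ohms per square
print(f"{R_sheet:.1f} ohm/sq")  # ~2.3 ohm/sq at this assumed thickness
```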

TLs are conductors designed to carry power or an electrical signal, and are essential in any circuitry. The ones created here were tested in three positions: unbent, bent but not twisted, and bent and twisted. This was done to determine if the material performed well in various positions, a necessity for a wearable, flexible device. It turns out the TLs performed well in all three positions, with data showing only slight variations in each case.

The Graphene based antennae were also tested in various positions, both unbent and with increasing amounts of bending. In each case the antennae were found to function in the frequency range matching Wi-Fi, Bluetooth, WLAN, and mobile cellular communications. This is an excellent indication that this material could be ideal for use in wearable communications technology. It was also tested in a pseudo-real life scenario, with antennae being wrapped around the wrists of a mannequin. These results were also promising, showing that an RF signal could be both radiated and received.

So, you can hopefully see that this work represents a real step forward towards wearable electronic devices, as it shows that Graphene is truly a prime candidate. That said, there is still a great deal of work to do, such as incorporating all these components into a complete device and figuring out how to produce the technology on a commercial scale. There would also need to be more research to see if these Graphene sheets could be modified in some way to include applications outside of communications. But putting that aside, I’m quite excited about this research bringing us a little bit closer. Keep an eye out to see where it goes from here.

Sources:

  • Fuente, J. (2016). Properties Of Graphene. Graphenea. Retrieved 18 January 2016, from http://www.graphenea.com/pages/graphene-properties#.VpzceyqLSwV
  • Huang, G.-W. et al. Wearable Electronics of Silver-Nanowire/Poly(dimethylsiloxane) Nanocomposite for Smart Clothing. Sci. Rep. 5, 13971; doi: 10.1038/srep13971 (2015).
  • Huang, X. et al. Highly Flexible and Conductive Printed Graphene for Wireless Wearable Communications Applications. Sci. Rep. 5, 18298; doi: 10.1038/srep18298 (2015).
  • Matzeu, G., O’Quigley, C., McNamara, E., Zuliani, C., Fay, C., Glennon, T., & Diamond, D. (2016). An integrated sensing and wireless communications platform for sensing sodium in sweat. Anal. Methods, 8(1), 64-71. http://dx.doi.org/10.1039/c5ay02254a

Glass Almost as Hard as Steel!

It seems like the days of finding a shattered screen after dropping your smartphone are coming to an end. You may have already heard of Gorilla Glass, the wonder material from Corning that has bounced onto the smartphone scene in recent years. But even that can fail, and breakages have been reported. As engineers strive to push what is possible for these small gadgets, so too do they find new ways to tweak and enhance the properties of glass.

Glass is usually made by heating minerals to very high temperatures and allowing them to cool, but much of the glass used in smartphones, and skyscrapers for that matter, is made stronger by the addition of metal atoms. For example, Gorilla Glass is strengthened by the addition of Potassium (K), an alkali metal. But now, a team of Japanese research scientists has found a way to add an oxide of Aluminium, known as Alumina (Al2O3), to the glass structure. This oxide has long been coveted as a candidate for making super strong glass, as it possesses some of the strongest chemical bonds known, with a dissociation energy of 131 kJ/cm³.

The scientists had hypothesised that adding Alumina to glass would make a super robust new material, but producing it wasn’t going to be easy. In their first attempts, adding the Alumina caused Silicon Dioxide (SiO2) crystals to form where the mixture met the walls of the container holding it. These crystals made the material no longer see-through, and effectively worthless. It would be quite pointless to have a super strong glass that wasn’t transparent, as that throws many potential applications right out the window, and the researchers knew this. They needed to develop a new production method.

Aerodynamic levitation is what they came up with, and it’s almost as sci-fi as it sounds. It involves suspending the Alumina/glass mixture in the air while it forms, by pushing it from below with a flow of oxygen gas. A laser is then used as a spatula to mix the material as it cools, and the result is a material that contains more Alumina than any glass to date, and was found to be both transparent and reflective. Testing then revealed that the glass was very hard; harder than other oxide glasses as well as most metals, and almost as hard as steel according to an article from phys.org.

The researchers remain hopeful that aerodynamic levitation could make it possible to produce all sorts of other super strong glasses, but first they need to figure out how to scale up the production process, as it currently only works in small batches. Nevertheless, it is nice to think that the despair you feel when discovering a broken screen may be a thing of the past. I don’t think anything will be able to live up to the integrity of those old Nokia bricks though. Those things could truly take a beating.


Chemistry with a Bang: The Science of Fireworks

A colourful firework display! Source: http://londontheinside.com/2015/10/16/east-village-fireworks/

Loud bangs and pretty colours; that’s what fireworks are known for. I imagine we have all been to a huge and impressive firework display at some point, and during a significantly smaller one that my family had at the end of Halloween, I realised I didn’t actually know how these miniature rockets worked. How do they achieve the patterns and colours they are known for?

The basic structure of a firework. Source: http://www.explainthatstuff.com/howfireworkswork.html

Well it turns out they are an excellent example of the everyday application of the physical sciences, with some very interesting Chemistry dictating both the bangs and the colours. But before we get into that, let’s take a look at the structure of a firework and see how everything is bound together.

  1. This is the Stick, or “tail” if you prefer, which consists of wood or plastic. This long stick ensures that the firework shoots in a straight line, and doesn’t just fire off in any direction. This not only helps prevent a firework to the face, but also allows for display organisers to position the firework effects with precision, allowing for a well coordinated display.
  2. This is where the fuse is located, which consists of a small bit of paper or fabric that can be lit by a flame or by an electrical charge. This starts the fuel of the firework burning and can set off other, smaller fuses that make the effects explode later than the main firework.
  3. This is the Charge, which is a fairly crude explosive designed to shoot the firework upward. These charges can propel a firework to heights of several hundred metres (roughly 1000 ft) and to speeds of several hundred kilometres per hour. This is also where the fuel is located, often in the form of tightly packed gunpowder, the composition of which we will discuss later.
  4. This is the fun part where the Effects are contained. The compounds stored in here are what actually produce the display once the firework is in the air. There can be one or multiple effects, usually packed into separate compartments, that can fire off in a predetermined sequence or all at once.
  5. This component is not particularly special. It is often referred to as the “Head” of the firework, and can be aerodynamically designed to improve the flight on more expensive models, but is often just a flat cap on cheaper ones. This part is designed to contain the effects.

Now we get to the hardcore science! All fireworks contain the same basic chemical ingredients: an oxidant, a fuel, a colour producer, and a binder. The binder is the simplest of the ingredients, used to hold everything together in a paste-like mixture. The most common type of binder is known as dextrin, which is a type of starch. The other components are a bit more complex.

The oxidant is required in order for the mixture to burn. Common examples are Potassium Nitrate (KNO3) and Potassium Perchlorate (KClO4), both of which decompose when heated, yielding Oxygen. This Oxygen then allows for the effective combustion of the fuel.

The fuel can consist of many chemicals, such as Carbon or Sulfur containing compounds, as well as organic based material like poly(vinyl chloride) (PVC). They can also contain powdered Aluminium (Al) or Magnesium (Mg) to help the mixture reach the high temperatures necessary to cause rapid combustion. The most common fuel used today is gunpowder, sometimes called black powder, which consists of a mixture of charcoal, Sulfur, and KNO3, allowing it to act as both a fuel and an oxidant.

The special effects, such as the bright colours, are provided by additives to the mixture. These additives are often metal compounds known as metal salts, frequently using elements from Group 2 of the periodic table. An example of this is the Barium (Ba) salt Barium Carbonate (BaCO3), which is added to produce green flames when the firework goes off. The species actually responsible for the colour in this case is the gaseous BaCl+, which is produced when Barium ions (Ba2+) combine with Chloride ions (Cl−). The Barium ions are produced when the BaCO3 salt decomposes, and the Chloride ions can come from the decomposition of the KClO4 oxidant or the PVC fuel, depending on which is used.

When the firework explodes, the newly formed BaCl+ gas is extremely hot, and contains a great deal of kinetic energy. This means that the many atoms in the explosion will frequently collide with each other, transferring kinetic energy from one atom to another. The energy from both these collisions and the heat of the explosion can then be absorbed by the electrons surrounding an atom, and they become “excited” into a higher energy state. These excited states are unstable, so the electrons will naturally return to the lowest energy state available, known as the “ground state”. The energy that was absorbed is then released in the form of light, with the colour depending on the amount of energy released. In this case, you would see a bright green flame emitted by the BaCl+ gas. The specific range of colours emitted in this process is called the “emission spectrum”, and each element or molecule will have a unique spectrum, as its structure determines which energy states are available for the electrons to occupy.
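If you want to put numbers on the colour, the energy of an emitted photon is E = hc/λ. BaCl+ emits in the green region, so taking a representative wavelength of around 520 nm gives roughly 2.4 eV per photon:

```python
h = 6.626e-34   # Planck's constant, J s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

wavelength = 520e-9  # a representative green emission wavelength, m
energy = h * c / wavelength
print(f"{energy:.2e} J = {energy / eV:.2f} eV")  # ~3.82e-19 J = 2.39 eV
```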

There are many other metal salts that can be used, each yielding a different colour. Strontium salts give off red light, whereas Copper compounds produce a nice blue. But pure colours require pure ingredients, and even trace amounts of other metal compounds are enough to alter or completely overpower other colours. The skill of the manufacturer, as well as the age of the firework, will therefore greatly influence the final display.

So next time you’re at a public firework display, you can shamelessly point at the sky and scream “SCIENCE!”, or proudly describe the Chemistry to the person next to you. Although I highly doubt either option will make you very popular.


What’s the deal with GMOs?

Okay… I was hoping I wouldn’t have to write about this any time soon, but with some countries within the EU deciding to ban the cultivation of genetically modified crops, I think the time has come.

A total of 17 European countries announced this ban at the beginning of October, and it exposes just how far Europe has gone in setting itself against the modern scientific consensus. In fact, the decision seems to have been made without considering the science at all, but we’ll get into that later. I think we should begin by educating ourselves on what GMOs actually are.

A GMO (genetically modified organism) can be defined as an organism that has acquired, by artificial means, one or more genes from another species or from another variety of the same species.

Humans have been modifying the genomes of plants and animals for thousands of years through the process of artificial selection (selective breeding), which involves selecting organisms with desirable traits and breeding them so that these characteristics are passed on. An example of this is the “Belgian Blue” cow, which has been selectively bred to have greater muscle mass.

Unfortunately this is limited to only naturally occurring variations of a gene, but genetic engineering allows for the introduction of genes from completely unrelated species. These genes could lead to resistance to certain diseases and pesticides, or to enhanced nutritional content. The list goes on; but why do we need these organisms?

Given that the Food and Agriculture Organization of the UN predicts that we’ll need to produce 70% more food by 2050 to feed the growing population, we will need to find new ways to meet food demands. There are several ways to do this, but options such as increased deforestation and giving up meat consumption to better utilise the required crops are not appealing for a number of reasons.

The more realistic options are investing more in hydroponics (growing crops without soil, often indoors), which many countries are doing, or growing GM crops. This is because the central idea behind GM crops is to combat problems that threaten food security, such as pests, climate change, or disease. Modifications that remove these problems could allow certain foods to be effectively grown in locations where it was previously not possible, as well as improving the chances of the crops surviving in harsh conditions.

Despite these benefits, there are still many controversies surrounding GMOs, such as the unintended spread of modified genes, but this excellent story on iflscience.com clearly outlines many of these problems, as well as pointing out why they are really no cause for concern.

So why do so many people still have a problem with GMOs? The benefits are clearly huge and many of the legitimate concerns have already been addressed. Well, many people seem to believe that GMOs are somehow bad for their health, even poisonous, and that they can damage the environment, despite overwhelming scientific evidence to the contrary.

Some researchers published a paper in the journal “Trends in Plant Science”, arguing that the negative representations of GMOs are popular because they are intuitively appealing. In other words, many people oppose GMOs because it “makes sense” that they would pose a threat. The paper is also very well summarised by one of the authors in an article from Scientific American.

One reason they give is the concept of “Psychological Essentialism”, which makes us perceive an organism’s DNA as its “essence”. Following this logic, DNA is an unobservable core causing an organism’s behaviour and determining its identity. This means that, when a gene is transferred between two distantly related species, people are likely to believe that some characteristics of the source organism will emerge in the recipient.

They also report that an opinion survey in the US showed that more than half of the respondents thought that a tomato modified with fish DNA would taste like fish. This is NOT how DNA works.

However, it is worth pointing out that not all criticisms of GMOs are unfounded, as many people are skeptical of how the business world will change with their introduction. It has already been reported that the US Supreme Court has ruled in favour of Monsanto’s claim to patent GMO seeds, as well as the ability to sue farmers whose fields become contaminated with Monsanto products, whether it is accidental or not.

Now I don’t feel I can safely comment on all of this as the world of business is not something I am educated in, but, while Monsanto’s business practices may be ethically questionable, they are not the only company involved in GMO research and distribution. Many academic institutions and non-profit organisations are also involved, and such groups are responsible for the introduction of Golden Rice, a GMO that has had only beneficial effects for society.

Knowing this, to dismiss all such organisms simply because one questionable company produces some of them is extremely narrow-minded. Another valid criticism is that it is not possible to say that future GMOs will be safe, and that each organism should be evaluated individually. I would agree with this, as a newly created GMO may and likely will have problems associated with it.

But these will be addressed in the research phase, in the same way that a newly synthesised drug has to undergo trials to determine and correct problems, and any product that gets a commercial release will have been thoroughly evaluated by the scientists involved with the research. The problem appears when people claim the gene editing techniques themselves are dangerous, which has no scientific grounding whatsoever.

So, now we go back to the problem of the European Union’s decision to ban GM crops. It is worth noting that this ban doesn’t apply to scientific research, so they are clearly not opposed to the development of new GMOs, just the cultivation of ones that have already been proven safe. Sounds confusing right? I should also point out that this decision was made without consulting the scientific advisor of the European Commission (EC), because they currently don’t have one!

Last November, the EC’s president, Jean-Claude Juncker, chose not to appoint a chief scientific advisor due to lobbying from Greenpeace and other environmental groups, who seemed to have a problem with what the previous advisor was saying about GMOs. Ignoring the fact that the advisor’s comments reflected the scientific consensus, they wrote “We hope that you as the incoming commission president will decide not to nominate a chief scientific advisor”.

This is extremely worrying, especially since the scientific consensus on the safety of genetic engineering is as solid as that which underpins human-caused climate change. This is especially strange as Greenpeace appears to support the consensus on climate change. You can’t pick and choose which science you agree with; you either support science, or you don’t.

I would assume this ban is due to the negative public opinion of GMOs, and the idea that all scientists that advocate for them have somehow been “bought” by large corporations like Monsanto. Speaking as someone who has experience with scientists and scientific research, I can say that the process has no agenda. Yes, the researchers may prefer one outcome to another, but if the evidence contradicts what they want to find, then they accept that. To do otherwise goes against the very nature of science, and given the amount of work and studying that goes into such a career, very few people go into science without a great deal of passion and respect for the process. You don’t have to trust the corporations, but you should trust the science.


Artificial Skin for Robotic Limbs

Since the new Star Wars trailer has brought with it an air of sci-fi, I felt our next story should fit the mood. This led me to a recent article published in the journal “Science”, in which a team of researchers created an incredible form of artificial skin. Currently, prosthetic limbs can restore an amputee’s ability to walk or grip objects, but they can in no way restore a sense of touch. Such a sense is critical to the human experience, said co-author Benjamin Tee, an electrical and biomedical engineer at the Agency for Science, Technology and Research in Singapore. Restoring feeling in amputees and people with paralysis could allow them to carry out activities that were previously hindered, such as cooking or contact sports.

A breakdown of the components in the Artificial Skin discussed here, and how the optical-neural interface was constructed. Credit: Science. Source: http://cen.acs.org/articles/93/i41/Artificial-Skin-Transmits-Signals-Neurons.html

Well, these researchers at Stanford University have taken us one step closer to this goal by creating an electronic skin that can detect and respond to changes in pressure. The team named this product the “Digital Tactile System”, or DiTact for short, and it consists of two main components shown in the image to the right. The upper layer consists of microscale resistive pressure sensors shaped like tiny upside-down pyramids. These structures are made from a carbon nanotube-elastomer composite capable of generating a direct current whose amplitude changes with the applied pressure. Because the nanotube structures conduct electricity, moving them closer together lets more current flow through the sensor. The distance between them varies with the applied pressure: the greater the pressure, the smaller the distance, and this decrease in distance allows a greater flow of electricity between the structures, causing the amplitude of the current to increase.

But one problem still remains! The human brain cannot interpret this information, as it usually receives pulsed signals, similar to Morse Code, with greater pressure increasing the frequency of the pulses. The signal therefore had to be converted into something the brain could actually recognise, which is where the second layer of the artificial skin comes into play. This layer consists of a flexible organic ring-oscillator circuit – a circuit that generates voltage spikes. The greater the amplitude of the current flowing through this circuit, the more frequent the voltage spikes. And voilà! We now have a pulsed signal.

But the team had to show that this could be recognised by a biological neuron, otherwise the signal would stop once it reached such a cell. To do this, they bioengineered some mouse neuron cells to be sensitive to specific frequencies of light, and translated the pressure signals from the artificial skin into light pulses. These pulses were then sent through an optical fiber to the sample of neurons, which were triggered on and off in response. This combination of optics and genetics is a field known, oddly enough, as “Optogenetics”, and it successfully proved that the artificial skin could generate a sensory output compatible with nerve cells. However, it is worth noting that this method was only used as an experimental proof of concept, and other methods of stimulating nerve cells are likely to be used in real prosthetic devices.
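The signal chain is easier to follow as a toy model: pressure sets the current amplitude, and the ring oscillator turns that amplitude into a spike frequency. The sketch below is my own caricature of the DiTact chain, with completely made-up calibration constants:

```python
def sensor_current(pressure_kpa: float) -> float:
    """Toy pressure sensor: more pressure -> pyramids compress -> more
    current. Linear here purely for simplicity; the real device is not."""
    return 0.1 + 0.05 * pressure_kpa  # microamps, made-up calibration

def oscillator_frequency(current_ua: float) -> float:
    """Toy ring oscillator: a higher current amplitude means faster spikes."""
    return 20.0 * current_ua  # Hz, made-up constant

for p in (0, 10, 50):  # light touch through to a firm press, in kPa
    f = oscillator_frequency(sensor_current(p))
    print(f"{p:>3} kPa -> {f:.1f} Hz spike train")
```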

This work is “…just the beginning…” according to Zhenan Bao, the leader of the team, who adds that they also hope to mimic other sensing functions of human skin, such as the ability to feel heat or distinguish between rough and smooth surfaces, and integrate them into the platform, but this will take time. There are a total of six types of biological sensing mechanisms in the hand, and this experiment reports success in just one of them. Nevertheless, the work represents “an important advance in the development of skin-like materials that mimic the functionality of human skin on an unprecedented level” according to Ali Javey, who is also working on developing electronic skin at the University of California, Berkeley, adding that “It could have important implications in the development of smarter prosthetics”.

With thought-controlled robotic limbs already being very real, this research represents the next key step in producing completely functioning prosthetic limbs that could one day be almost indistinguishable from the real thing! Imagine being able to regain all forms of sense and movement in a limb you had once thought lost forever. That would be HUGE, and could drastically improve the quality of life of many amputees. Unfortunately, such a mechanical marvel is still very much in the future, but this research is an important stepping stone, and I wouldn’t be surprised if we start hearing more about this technology in the years to come. My next question would be, once we have achieved all of this, will we start working on a prosthetic arm able to mimic the effects of using The Force? I think we should make an arm that can use The Force! I think we all (secretly) want that.


DNA is Unstable! Luckily your Cells can handle that.

Another Nobel Prize story?! DAMN RIGHT! This time it’s the prize for chemistry, and Tomas Lindahl, Paul Modrich, and Aziz Sancar will collectively bask in the glory for their outstanding work in studying the mechanisms of DNA repair.

Given the billions of cell divisions that have occurred in your body between conception and today, the DNA that is copied each time remains surprisingly similar to the original that was created in the fertilised egg that you once were. Why is that strange? Well, from a chemical perspective that should be impossible, with all chemical processes being subject to random errors from time to time. Along with that, DNA is subjected to damaging radiation and highly reactive substances on a daily basis. This should have led to chemical chaos long before you even became a foetus!

Now, I would hope that’s not the case for you, so how do our cells prevent this descent into madness? I’ll tell you! It’s because DNA is constantly monitored by various proteins that all work to correct these errors. They don’t prevent the damage from occurring, they just hang around waiting for something to fix, and all three of the winning scientists contributed to our understanding of how our cells achieve this. So! Where do we begin?

A good place to start would be a brief description of the structure of DNA, as this will make things much clearer when we start discussing the research. DNA is primarily a chain of nucleotides, which are themselves made up of three components: a deoxyribose sugar, a phosphate group, and a nitrogenous base. These components are shown bonded together in Figure 1. It is also worth noting that there are four possible bases, each with a slightly different structure, and the one shown in the image is specifically Adenine. The others are known as Thymine, Cytosine, and Guanine, and all attach to the sugar in the same place. The two negative charges on the phosphate group allow it to form another bond to the adjacent nucleotide, and this continues on to form a long chain. Two separate chains are then joined together as shown in Figure 2, and voilà! A molecule of DNA is formed!

Figure 1: The basic components of DNA. Source: http://pmgbiology.com/2014/10/21/dna-structure-and-function-igcse-a-understanding/
Figure 2: Representation of how the two chains of Nucleotides bond together to form a molecule of DNA. Source: http://www.d.umn.edu/claweb/faculty/troufs/anth1602/pcdna.html
Figure 3: A comparison of Cytosine and its Methylated equivalent. Source: http://blog-biosyn.com/2013/05/15/dna-methylation-and-analysis/

Now that we have a basic understanding of the structure of DNA, the research should make a hell of a lot more sense, and it begins with Tomas Lindahl. In the 1960s, Lindahl found himself asking a question: how stable is our DNA, really? At the time, the general consensus among scientists was that it was amazingly resilient. I mean… how else could it remain so constant? If genetic information were in any way unstable, multicellular organisms like us would never have come into existence. Lindahl began his experiments by working with RNA, another molecule found in our cells with a lot of structural similarities to DNA. What was surprising was that the RNA rapidly degraded during these experiments. Now, it was known that RNA is the less stable of the two molecules, but if it was destroyed so easily and quickly, could DNA really be all that stable? Continuing his research, Lindahl demonstrated that DNA does, in fact, have limited chemical stability, and can undergo many reactions within our cells. One such reaction is Methylation, in which a CH3 (methyl) group is added on to one of the bases in the DNA strand. The difference this causes is shown in Figure 3, and the reaction can occur with or without the aid of an enzyme. It will become relevant later on, as will the fact that it changes the shape of the base, affecting how other proteins can bind to it. All of these reactions can alter the genetic information stored in DNA, and if they were allowed to persist, mutations would occur much more frequently than they actually do.

Realising that these errors had to be corrected somehow, Lindahl began investigating how DNA was repaired, and by 1986 he had pieced together a molecular image of how “base excision repair” functions. The process involves many enzymes (and I don’t have the time or patience to describe them all), but a certain class known as “DNA glycosylases” are what actually break the bond between the defective base and the deoxyribose sugar, allowing the base to be removed. Our cells contain many enzymes of this type, each of which targets a different type of base modification. Several more enzymes then work together to fill the gap with the correct, undamaged base, and there we have it! A mutation has been prevented. To help you visualise all this, you’ll find a graphical representation below in Figure 4.

Figure 4: Graphical representation of the process of Base Excision Repair. Source: http://www.nobelprize.org/nobel_prizes/chemistry/laureates/2015/popular-chemistryprize2015.pdf
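The logic of base excision repair is simple enough to caricature in code: find the damaged base, snip it out, and fill the gap using the intact complementary strand as a guide. A teaching toy only, obviously:

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def base_excision_repair(strand: str, partner: str, damaged_pos: int) -> str:
    """Toy BER: a 'glycosylase' removes the damaged base, then the gap is
    filled with whatever complements the undamaged partner strand."""
    correct = COMPLEMENT[partner[damaged_pos]]
    return strand[:damaged_pos] + correct + strand[damaged_pos + 1:]

strand  = "ATGXTA"  # 'X' marks a chemically damaged base
partner = "TACGAT"  # the intact complementary strand
print(base_excision_repair(strand, partner, damaged_pos=3))  # ATGCTA
```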

But the science doesn’t end there folks! Remember, there were three winners, the second of which is Aziz Sancar, who discovered another method of DNA repair. This one is called “nucleotide excision repair”, and involves the removal of entire sets of nucleotides, rather than individual bases. Sancar’s interest was piqued by one phenomenon in particular: when bacteria are exposed to deadly doses of UV radiation, they can suddenly recover if exposed to visible blue light. This was termed “photoreactivation” for… obvious reasons. He was successful in identifying and isolating the genes and enzymes responsible, but it later became clear that bacteria had a second repair mechanism that didn’t require exposure to light of any kind. But Sancar wasn’t about to let these bacteria out-fox him and, after more investigations, he’d managed to identify, isolate, and characterise the enzymes responsible for this process as well. The bacteria were no match for his chemical prowess!

“But how does it work?!” I hear you shout. Well calm the f**k down and I’ll tell you! UV radiation can be extremely damaging, and can cause two adjacent Thymine bases in a DNA strand to directly bind to each other, which is WRONG! A certain endonuclease enzyme, known as an “exinuclease”, is aware of this wrongness, and decides that the damage must be fixed. It does this by making an incision on each side of the defect, and a fragment roughly 12 nucleotides long is removed. DNA polymerase and DNA ligase then fill in and seal the gap, respectively, and now we have a healthy strand of bacterial DNA! Sancar later investigated this repair mechanism in humans in parallel with other research groups, and while it is much more complicated, involving many more enzymes and proteins, it functions very similarly in chemical terms. You want a picture to make it easier? You’ll find it below in Figure 5!

Figure 5: Graphical representation of Nucleotide Excision Repair. Source: http://www.nobelprize.org/nobel_prizes/chemistry/laureates/2015/popular-chemistryprize2015.pdf

The final recipient of the Nobel Prize this year was Paul Modrich, who identified YET ANOTHER repair system (there are loads, you know), which he named the “mismatch repair” mechanism. Early in his career, Modrich was examining various enzymes that affect DNA, eventually focussing on “Dam Methylase”, which couples methyl groups to DNA bases (I TOLD YOU THAT REACTION WOULD BE RELEVANT!). He showed that these methyl groups could basically behave as labels, helping restriction enzymes cut the DNA strand at the right location. But, only a few years earlier, another scientist called Matthew Meselson had suggested that they also indicate which strand to use as a template in DNA replication. Working together, these scientists synthesised a virus with DNA that had incorrectly paired bases, and methylated only one of the two DNA strands. When the virus infected the bacteria and injected its DNA, the mismatched pairs were corrected by altering the unmethylated strand. It would appear that the repair mechanism recognised the defective strand by its lack of methyl groups. Does it work that way in humans? Probably not. Modrich did manage to map the mismatch repair mechanism in humans, but DNA methylation serves many other functions in human cells, particularly those to do with gene expression and regulation. It is thought that strand-specific “nicks” (a missing bond between a phosphate group and a deoxyribose sugar) or ribonucleotides (nucleotide components of RNA) present in DNA may direct repair, but the mechanism remains to be found at this point.
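The strand-choice trick that Modrich and Meselson demonstrated can be sketched the same way: where the two strands disagree, trust the methylated (older) strand and rewrite the new one. A cartoon of the bacterial mechanism, not the human one:

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def mismatch_repair(template: str, new_strand: str) -> str:
    """Toy bacterial mismatch repair: the methylated template strand is
    trusted, so any base that fails to pair with it gets rewritten."""
    repaired = []
    for t, n in zip(template, new_strand):
        expected = COMPLEMENT[t]
        repaired.append(expected if n != expected else n)
    return "".join(repaired)

methylated = "ATGCTA"  # the old strand, carrying the methyl labels
new_copy   = "TACGTT"  # freshly made copy with one mismatched base
print(mismatch_repair(methylated, new_copy))  # TACGAT
```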

Figure 6: Structure of Olaparib. Source: http://www.reagentsdirect.com/index.php/small-molecules/small-molecules-1/olaparib/olaparib-263.html

But why should we care? Granted it is nice to know this stuff (at least I think so), but what can this information be used for? Well, it actually has applications within the world of medicine, as errors in repair mechanisms can often lead to cancer. In many forms of cancer these mechanisms have been at least partially turned off, but the cells are also heavily reliant on the mechanisms that remain active. As we mentioned earlier, a lack of these mechanisms leads to chemical chaos, and that would cause the cancer cells to just die. This has led to drugs designed to inhibit the remaining repair systems to slow down or stop cancer growth entirely! One such drug is Olaparib, and you can see the structure in Figure 6. This drug functions by inhibiting two specific proteins (PARP-1 and PARP-2), which are integral in detecting certain flaws in replicated DNA and directing repair proteins to the site of damage. Cancer cells treated with this drug have been shown to be more sensitive to UV radiation, making one form of treatment much more effective.

And with that, we bring our Nobel Prize stories for this year to an end! I think it’s safe to say that the work described here deserved a prize of some sort, as it not only takes a lot of skill and dedication, but it has led to new medical treatments and a MUCH greater understanding of how our DNA behaves. Have you enjoyed our time spent on the science of the Nobel Prize? DAMN RIGHT YOU HAVE. O_O


Chemical Analysis of Mars from Orbit? But how?!

When I found out that water had been found on Mars my first response was to flail and shout with excitement. But once I had calmed down I started to think; how does one actually go about analysing the surface of another planet without actually being on the surface yourself? It was then that I found out NASA have managed to do all of this using a satellite currently orbiting the red planet at an altitude of 300 km (186 miles)! That’s some pretty impressive tech right there (and I imagine the specs are a well-kept trade secret). The satellite itself is known as the Mars Reconnaissance Orbiter (MRO), and it’s equipped with an analytical tool known as CRISM, or the “Compact Reconnaissance Imaging Spectrometer for Mars” if you’re feeling excessive. This device can detect and measure the wavelengths and intensity of both visible and infrared light that has been reflected or scattered from the martian surface; a technique known as “Reflectance Spectroscopy”.

Reflectance Spectroscopy functions on the principle that when light comes into contact with a material, the chemical bonding and molecular structure will cause some of this light to be absorbed. The exact wavelengths absorbed will vary depending on the type of bonding and the elements involved, and the remaining light will either be scattered or reflected depending on the macro-scale properties of the material, such as shape and size. On Mars, most of these materials seem to be grains of some sort and the potentially complex shape of such a structure can cause the light to be scattered in all sorts of directions. However, some of this light will reach the MRO, and CRISM can then detect and measure which wavelengths have been absorbed based on a decrease in intensity. How they found a way to do all of this FROM ORBIT still mystifies me, but I imagine NASA prefers it that way. This whole process then gives an output known as an “absorption spectrum”, an example of which is shown in Figure 1.

Figure 1: An example of an absorption spectrum showing wavelength (x-axis) and reflectance (y-axis). Source: http://crism.jhuapl.edu/instrument/surface/sees.php
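In software terms, what CRISM hands back is essentially a list of (wavelength, reflectance) pairs, and spotting absorption bands amounts to hunting for dips. A crude illustration with invented numbers, nothing like the real calibration pipeline:

```python
# Toy reflectance spectrum: (wavelength in micrometres, reflectance 0-1)
spectrum = [(1.0, 0.42), (1.2, 0.40), (1.4, 0.28), (1.6, 0.39),
            (1.8, 0.41), (2.0, 0.30), (2.2, 0.40), (2.4, 0.43)]

# An absorption band shows up as a local dip in the reflectance curve.
bands = [
    spectrum[i][0]
    for i in range(1, len(spectrum) - 1)
    if spectrum[i][1] < spectrum[i - 1][1]
    and spectrum[i][1] < spectrum[i + 1][1]
]
print(bands)  # [1.4, 2.0] -- dips a mineralogist would try to match
```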

So! What have they actually found on Mars using this technique? Well, they appear to have detected “Aqueous Minerals”, which are chemical structures that form in the presence of water by chemical reaction with the surrounding rock. The exact mineral that will form is determined by many factors, including the temperature, pH, and salt content (salinity) of the environment, as well as the composition of the parent rock. Given that this process takes an extremely long time to occur naturally, it can show where water has been present long enough to cause such a reaction, and can give an excellent indication of what the martian surface was like in the past. For example, chloride and sulfate minerals generally indicate very saline water, as well as suggesting that it was more acidic, whereas phyllosilicates and carbonates suggest less salinity and a more neutral pH. What I find most exciting is that this data can suggest where to begin looking for fossilized evidence of ancient life (if it existed at all). If the past water appears to not be too acidic and the elements for life are present, then it is certainly a possibility.

It seems that Mars just keeps getting more exciting with each new discovery, and all we can do now is wait for the next announcement to be made. Here’s hoping it’s evidence of life! Also, speaking of life on Mars, everyone should go see The Martian movie in cinemas now, it’s f**king brilliant!

Sources:

  • The CRISM Website. Link: http://crism.jhuapl.edu/index.php
  • USGS Spectroscopy Lab – About Reflectance Spectroscopy. Link: http://speclab.cr.usgs.gov/aboutrefl.html
  • PBS Newshour. “Mars has flowing rivers of briny water, NASA satellite reveals”. Link: http://www.pbs.org/newshour/rundown/mars-flowing-rivers-briny-water-nasa-satellite-reveals/
  • NASA Mars Reconnaissance Orbiter Website. Link: http://mars.nasa.gov/mro/