With extreme weather becoming increasingly frequent, one question logically follows – did climate change cause these events? As recently as a decade ago scientists would have confidently argued that this question could not be answered, but thanks to rapid improvements in both the understanding of weather systems and the analytical methods used to study them, they can now provide some more meaningful responses.
But even with this new knowledge it’s still not possible to answer the exact question above, as the question itself is flawed. No weather event ever has a single cause; there are always multiple, independent factors at play, most of which are natural. Climate change is but one variable among many, and its influence can be quite subtle.
So what can the scientists tell us? Well, according to a report from the National Academies of Sciences, Engineering, and Medicine (NASEM) released on March 11th, they can now examine how the likelihood and intensity of an event have been altered. And thus, the science of “event attribution” is born!
However, even when scientists are armed with their new understanding and analytical methods, statements and predictions can’t be made without a HUGE amount of data to back them up. This data can be obtained in many ways: some studies use observational records of similar events in the past, while others use climate and weather models to compare conditions in worlds with and without human-caused climate change. But no single data set is perfect, and the NASEM report states that results are often most reliable when multiple methods are used.
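To give a rough sense of what that model comparison produces, here is a minimal sketch of the two numbers attribution studies often report: the probability ratio (how much more likely the event became) and the fraction of attributable risk. The counts below are entirely invented for illustration; real studies involve far more careful statistics.

```python
# Hedged sketch: compare how often an extreme event occurs in model runs
# of a "factual" world (with human influence) versus a "counterfactual"
# world (without it). All numbers here are made up.

def attribution_metrics(hits_factual, runs_factual,
                        hits_counterfactual, runs_counterfactual):
    """Return the probability ratio and the fraction of attributable risk."""
    p1 = hits_factual / runs_factual                  # P(event | with influence)
    p0 = hits_counterfactual / runs_counterfactual    # P(event | without)
    probability_ratio = p1 / p0
    far = 1 - p0 / p1   # fraction of the risk attributable to the influence
    return probability_ratio, far

# e.g. a heatwave appears in 30 of 1000 runs with human forcing,
# but only 10 of 1000 runs without it:
pr, far = attribution_metrics(30, 1000, 10, 1000)
print(pr)   # 3.0 -> the event became three times more likely
print(far)  # ~0.667 -> two-thirds of the risk is attributable
```

The framing choices the NASEM report warns about (event duration, region, variable) all feed into what counts as a “hit” here, which is exactly why results are so sensitive to them.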
SO! We now have a rough idea of what event attribution is and how this new area of science works. Let’s start looking at what it can do! So far, the most reliable attribution findings are those related to temperature. There is little doubt in the scientific community that human activities have had a noticeable impact on this aspect of the climate, and its effects on various weather events are already known.
Apart from increasing the likelihood of extremely hot days and doing the opposite for cold days, a warmer climate can have some rather unexpected effects. Such warming can cause greater evaporation of water from the Earth’s surface, which not only increases the intensity of droughts, but also the amount of atmospheric moisture available to storms. This could lead to more severe heavy rainfall and snowfall events, and you can find an explanation of how that would work in another post I’ve written on the formation of snow.
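The link between warming and atmospheric moisture can even be put in rough numbers. The Clausius–Clapeyron relation says the atmosphere’s capacity to hold water vapour grows by roughly 7% per degree of warming; the sketch below checks that with Bolton’s empirical fit for saturation vapour pressure (the temperatures chosen are arbitrary examples).

```python
import math

# Saturation vapour pressure via Bolton's (1980) empirical approximation,
# used here only to illustrate the ~7%-per-degree moisture scaling.

def saturation_vapour_pressure(t_celsius):
    """Approximate saturation vapour pressure in hPa."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

e_before = saturation_vapour_pressure(15.0)  # a typical mild day
e_after = saturation_vapour_pressure(16.0)   # the same day, 1 degree warmer
increase = (e_after / e_before - 1) * 100
print(f"~{increase:.1f}% more moisture capacity per 1 degree C of warming")
```

More moisture available to a storm is what links a warmer climate to heavier rain and snow events.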
But the implications of event attribution go beyond simply determining the influence of human-caused climate change. By gaining a deeper understanding of what causes extreme weather events, scientists can improve their ability to accurately predict and project future weather and climate states. If they’re able to predict both the frequency and intensity of extreme events, it could help lessen their impact by avoiding the destruction they could cause. For example, an accurate prediction may allow a community to evacuate long before the event even arrives, and knowledge regarding its frequency can help them decide between rebuilding or relocating.
It should now be quite clear what the science of event attribution can offer us, but we should also be aware of the challenges that this relatively new science faces. According to the NASEM report, statements about event attribution are quite sensitive to the way the questions are framed, as well as their context. Given this sensitivity, many choices need to be made: how to define the duration of the event, the geographical area it impacted, which variables to study, and more. These decisions can drastically affect the reliability of the results, as they all influence how the findings are interpreted.
But despite its problems, the science of event attribution has a lot to offer society in terms of limiting the impact of extreme weather, as well as drawing people’s attention to the reality of climate change. Real extreme weather events get people’s attention, and attribution studies could bring an end to the notion of climate change as a distant threat, helping people realise that there is a very real need for us to act on it now. Let’s just hope it can do so fast enough.
Welcome! In this episode we discuss the concept of a biobased economy, how it could be achieved, and look at some of the industries that would be affected by it.
We also talk about some new gene sequencing technology, potentially the largest solar power plant in North America, and how AlphaGo is defeating humanity. Our funny story is about someone called Brian.
The age of gene editing is upon us! Or it will be soon thanks to the revolutionary new technology known as CRISPR. I would be VERY surprised if you haven’t at least heard of it by now, especially when you consider the attention it gets from the science media. Attention that is very understandable once you start looking at exactly what this technology can do, and what that potentially means.
The story began in 2013, when some researchers claimed that they had used CRISPR-Cas9 to successfully slice the genome in human cells at sites of their choosing. This understandably triggered a massive ethics debate which is still going on today. A huge amount of the conversation focussed on how it could be used to fight genetic diseases or even edit human embryos, but there are many more potential applications. As the debate continued, researchers set about editing the genomes of many other organisms, including plants and animals, and even exploring its potential for studying specific gene sequences. The range of applications is truly remarkable.
Some claim that the real revolution right now is in the lab, where CRISPR has made the study of genetics significantly easier. There are two main components to the CRISPR-Cas9 system: a Cas9 enzyme that acts as a pair of molecular scissors, cutting through the DNA strand, and a small RNA molecule that directs the system to a specific point. Once the cut is made, the cell’s own DNA repair mechanisms will often mend it, but not without making the occasional mistake.
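The targeting step can be sketched in a few lines of code. The guide RNA’s 20-nucleotide “spacer” must match the DNA, and for the commonly used S. pyogenes Cas9 the match must be immediately followed by an “NGG” PAM sequence. The sequences below are invented for illustration only; real guide design tools also score off-target matches and other factors.

```python
# Toy illustration of Cas9 target-site selection: find every position where
# a 20-nt guide sequence matches the DNA and is followed by an NGG PAM.
# The DNA and guide strings here are hypothetical.

def find_target_sites(dna, guide):
    """Return start positions of guide matches followed by an NGG PAM."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        matches_guide = dna[i:i + len(guide)] == guide
        # PAM is "NGG": any base, then two Gs, right after the match
        has_pam = dna[i + len(guide) + 1:i + len(guide) + 3] == "GG"
        if matches_guide and has_pam:
            sites.append(i)
    return sites

dna = "TT" + "ACGTACCGGATTCAAGGCTT" + "AGG" + "TTAAGGCCTA"
guide = "ACGTACCGGATTCAAGGCTT"  # 20 nt, hypothetical spacer
print(find_target_sites(dna, guide))  # [2] -> one valid cut site
```

Once Cas9 cuts at such a site, it is the cell’s own error-prone repair that produces the disruptive edits described above.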
Even a small error during DNA repair can completely alter the structure of the protein a gene codes for, or stop its production altogether. By exploiting these traits and errors, scientists can study what happens to a cell or organism when a specific gene or protein is altered. Such a level of control will likely make these studies less prone to error, and lead to a much better understanding of the role played by specific genes and proteins.
But it doesn’t stop there! There exists a second repair mechanism that mends the DNA strand according to a template provided by the cell. If the researchers were to remove this template and provide one of their own, then they could potentially insert nearly any DNA sequence at whatever site they desire. A prospect that would allow genomes to not just be edited, but actively designed.
That idea may sound somewhat futuristic at this point, but in reality it’s already being put to use. Due to the relative precision and ease that CRISPR offers, scientists have already made a start on editing the genes of animals for applications ranging from agriculture to the revival of extinct species. Some CRISPR modified animals are even being marketed as pets! As you can imagine, regulators are still working out how to deal with such animals, as there are some obvious safety and ecological concerns, but that hasn’t stopped the science from happening.
Disease resistance is one of the more popular uses for CRISPR, and it can provide a variety of agricultural and ecological benefits. For example, there is hope that such an application could help stem the huge loss of honey bee populations around the world. This loss can in part be attributed to disease and parasites, but biotechnology entrepreneur Brian Gillis may have found a solution. He has been studying the genomes of so-called “hygienic bees”, which get their name from their obsessive hive cleaning habits and removal of sick or infested larvae.
Gillis’ idea is that, if the genes responsible for this behaviour can be identified, they could then be edited into the genomes of other populations to significantly improve hive health. But whether or not this can be done remains to be seen: no such genes have been found as of yet, and the roots of the behaviour may prove to be much more complex. At this point we’ll just have to wait and see.
Another potential application, one that I personally find much more interesting, is the revival of extinct species. Around 4000 years ago the woolly mammoth went extinct, an event that was at least partially due to hunting by humans. Well it now looks like we might be able to make up for that mistake! This is thanks to scientist George Church from Harvard Medical School in Boston, who has plans to use CRISPR to bring back this long lost species. He hopes to transform some endangered Indian elephants into a breed of cold-resistant elephants that will closely resemble the mammoth, and release them into a reserve in Siberia where they will have space to roam.
But the process of editing, birthing, and then raising these mammoth-like elephants is no easy task. The first problem is how to go from an edited embryo to a developed baby, and Church has said it would be unethical to implant the embryos into endangered elephants for the sake of an experiment. Since that option is off the table, his lab is currently looking into the possibility of an artificial womb; a device that does not currently exist.
It’s worth pointing out that I’ve just barely scratched the surface of what CRISPR can do for animals, let alone the many other organisms it can be applied to. I could very easily write an entire post about it to be honest, but there is one final point that definitely deserves some attention. We’ve already seen how amazingly versatile CRISPR is, and it stands to reason that, if you can edit the genomes of animals in this way, you can almost certainly do the same to humans.
As I’m sure you can imagine, there is a very heated debate about how it could and should be used to modify the genomes of human embryos. One key argument in favour is that many currently available technologies already allow parents to do this. These include prenatal genetic screening to check for conditions like Down syndrome, and in-vitro fertilisation allowing parents to select embryos that don’t have certain disease-causing mutations. One could say that direct genome editing is simply the next step in technology of this nature.
On the other hand, one needs to consider what genome editing would mean for society as a whole. For example, by allowing parents to edit out traits they see as debilitating, we could potentially create a less inclusive society. In such a world even the tiniest of flaws might be seen as a huge disadvantage, with everyone being subjected to much harsher judgement. Would that be beneficial for the human race? Unfortunately that’s not a question we can answer, but it doesn’t sound like a pleasant world to live in.
Whether or not you’re in favour of the human race dictating the genetics and characteristics of future generations seems to be a matter of opinion right now, and it’s certainly not fair to say that either side of the debate outweighs the other. To me, there doesn’t seem to be an obvious answer. On the one hand, we have the chance to truly improve the human race by ensuring that we continue to improve and adapt as time goes on, assuming of course that we have the knowledge to do so. But I cannot say for sure what type of society that would create, or if it’s one I’d really like to live in.
Regardless of your opinion on CRISPR and gene editing, you can’t deny that this new technology has the potential to completely change our world and our society. Given that it can improve our understanding of genetics and allow us to physically alter the DNA of living creatures, one could easily describe it as the beginning of a genetic revolution. We’ll have to just wait and see if it will be put to use in the ways I’ve explored here, but it’s certainly something I will be keeping an eye on. Hopefully I got you interested enough to do the same.
Supercomputers are truly marvellous examples of what technology can accomplish, being used in many areas of science to work through some incredibly complex calculations. Their computational power is truly a feat of human engineering. But, unfortunately, they’re not perfect. Not only are they absolutely huge, often taking up an entire room, but they’re also expensive, prone to overheating, and a huge drain on power. They require so much of the stuff they often need their own power plant to function.
But fear not! As always, science is capable of finding a solution, and this one comes in the form of a microchip concept that uses biological components that can be found inside your own body. It was developed by an international team of researchers, and it uses proteins in place of electrons to relay information. Their movement is also powered by adenosine triphosphate (ATP), the fuel that provides energy for all biological processes occurring in your body right now. It can quite literally be described as a living microchip.
The chip’s size may not seem like much, measuring only 1.5 cm², but if you zoom in you get a very different picture. Imagine, if you will, you’re in a plane looking down at an organised and very busy city. The streets form a sort of grid spanning from one end of the city to the other, which closely resembles the layout of this microchip. The proteins are then the vehicles that move through this grid, consuming the fuel they need as they go. The main difference is that, in this case, the streets are actually channels that have been etched into the chip’s surface.
“We’ve managed to create a very complex network in a very small area” says Dan Nicolau Sr., a bioengineer from McGill University in Canada, adding that the concept started as a “back of an envelope idea” after what he thinks was too much rum. I guess some innovative ideas require a little help getting started.
Once the rum was gone and the model created, the researchers then had to demonstrate that this concept could actually work. They did this by applying it to a mathematical problem, with success defined as the microchip identifying all the correct solutions with minimal errors.
The process begins with the proteins in specific “loading zones” that guide them into the grid network. Once there, the journey through the microchip begins! The proteins start to move through the grid, via various junctions and corners, processing the calculation as they go. Eventually, they emerge at one of many exits, each of which corresponds to one possible solution to the problem. In the specific case described by the researchers, analysis of their results revealed that correct answers were found significantly more often than incorrect ones, indicating that the model can work as intended.
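The problem the researchers reportedly used was a small “subset sum” instance: which totals can be formed by adding numbers from a given set? Each path an agent takes through the network corresponds to one choice of subset, and each exit corresponds to one possible total. As a rough illustration, here is a conventional (serial) brute-force equivalent of that exhaustive parallel exploration, using the small set {2, 5, 9} described in coverage of the study:

```python
from itertools import combinations

def reachable_totals(numbers):
    """Return every total that some subset of `numbers` sums to.

    Each subset plays the role of one agent's path through the network;
    each distinct total plays the role of one exit.
    """
    totals = set()
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            totals.add(sum(subset))
    return totals

print(sorted(reachable_totals({2, 5, 9})))
# [0, 2, 5, 7, 9, 11, 14, 16] - eight "exits", one per achievable total
```

The point of the biological chip is that the agents explore all these paths simultaneously, whereas this code has to enumerate them one by one, which is exactly what makes the approach interesting for scaling.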
The researchers claim that this new model has many advantages over existing technology, including a reduction in cost, better energy efficiency, and minimal heat output, making it ideal for the construction of small, sustainable supercomputers. They also argue that this approach is much more scalable in practice, but recognise that there is still much to do to move from the model they have to a full on functioning supercomputer. It’s early days, but we know the idea works.
So, while it may be quite some time before we start seeing these biological supercomputers being actually put to use, it certainly seems like a fruitful path to follow. Society would no doubt benefit from the reduced cost and power usage that this new technology would bring, and these aspects would also make their application in scientific research much easier.
In fact, if the decrease in cost and power usage is a dramatic one, then scientists could potentially use far more of these computers than they do at the moment. This is a change that would have a huge impact on the kind of calculations that could be performed, and could potentially revolutionise many areas of science. Even though we’ll have to wait, that’s something I am very much looking forward to.
Nicolau, D., Lard, M., Korten, T., van Delft, F., Persson, M., & Bengtsson, E. et al. (2016). Parallel computation with molecular-motor-propelled agents in nanofabricated networks. Proceedings of the National Academy of Sciences. http://dx.doi.org/10.1073/pnas.1510825113
So I recently got started on a podcast project with a friend of mine focussing on climate change, green energy, sustainability and science of that nature. If all goes to plan this will be a weekly series so keep an eye out for the posts here or subscribe on YouTube if you’re interested.
Episode 1 just went live yesterday, and if you could check it out and leave any feedback it would be greatly appreciated! You can find the link below!
In this episode we discuss climate change, the rising sea levels, and what you can do about it. We also mention an exciting new medical trial going on and look at some more amusing science stories. Hope you enjoy!
Apologies for the unbalanced audio! Lessons were learned and it will be fixed on the next episode 🙂
If you dedicate any amount of time to following science these days then you WILL have heard about the recent detection of gravitational waves. Science media truly went mental when the discovery was announced on February 11th, and it’s not surprising when you consider what this means for the fields of physics and astronomy. But before we get started with all that jazz, we should probably look at the specifics regarding what the hell these waves are and how the discovery was made.
What they are is a disturbance in the fabric of space-time, much like how dragging your hand through a still pool of water will produce ripples that follow and spread out from it. Why is this a valid comparison? Well, Einstein described the universe as made from a “fabric” hewn from both space and time. This fabric can be pushed and pulled as objects accelerate through it, creating these ripples. A similar distortion is also the cause of gravitational attraction, which is nicely demonstrated in the video below.
Almost any object moving through space can produce gravitational waves provided it is not spherically or cylindrically symmetrical. For example, a supernova will produce some if the mass is ejected asymmetrically, and a spinning star will produce some if it’s lumpy rather than spherical. Unfortunately, the vast majority of sources produce waves that have dissipated long before they get anywhere near us, with only incredibly massive objects producing some that we have a chance of detecting.
Okay! Now that we have at least some idea of what these gravitational waves are, we can look at who and what detected them. This discovery can be attributed to the great minds and machinery involved in the LIGO experiment, which aims to detect gravitational waves by observing the effect they have on space-time. But how would they do this? Space-time isn’t even something we can see! Well my friends, the answer is very clever indeed.
It all involves a machine known as an interferometer (Figure 1). This device starts by splitting a single laser beam into two, which then shoot off in lines perpendicular to each other. These beams travel exactly the same distance down long vacuum tubes, bounce off mirrors located at the end, and return. Since both beams have travelled the same distance they will still be in alignment when they return to the source. They will then destructively interfere with each other and no light will reach the detector.
However, a passing gravitational wave, with its space-time distorting powers, can actually change the distance that one of the beams travels. This would mean they are no longer in alignment when they return to the source and won’t cancel each other out. Some light would therefore be able to reach the detector.
And voila! A gravitational wave has been detected… or has it? Well, it actually has in this case, but the point I’m making here is that this amazing machinery is incredibly sensitive to noise. If a gravitational wave were to pass by, it would only change the beam’s distance by about 1/10000th the width of an atom’s nucleus, which is a size I have trouble comprehending.
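To get a feel for that number, we can work it out from typical published LIGO figures: a strain (fractional change in length) of around 10⁻²¹ acting on a 4 km arm. Treat this as a back-of-the-envelope sketch rather than the detector’s exact specification.

```python
# Back-of-the-envelope scale of a LIGO detection, using commonly
# quoted round numbers rather than exact instrument specs.

strain = 1e-21        # dimensionless fractional length change of space itself
arm_length = 4000.0   # metres, the approximate length of a LIGO arm

delta_l = strain * arm_length
print(delta_l)        # 4e-18 metres
# An atomic nucleus is very roughly 1e-15 to 1e-14 m across, so the arm
# stretches by only a tiny fraction of a nucleus's width.
```

That is the size of signal the noise-filtering described below has to dig out.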
To pick up such a teeny-tiny change LIGO has to filter out any and all sources of noise, which can include earthquakes and nearby traffic. In fact, to test the research group’s ability to distinguish a genuine gravitational wave from noise, senior members of the team secretly inserted “blind injections” of fake gravitational waves into the data stream. While it does seem a bit cruel, it seems their training paid off.
Now we move on to the understandably common question of why this matters to people who aren’t hardcore science nerds. Well, beyond the fact that this discovery will almost certainly win a Nobel Prize this year and that it confirms the final prediction made by Einstein’s general theory of relativity, it could also have a huge impact on the field of astronomy.
Similar to how we use various electromagnetic wavelengths like visible light, infra-red, and x-rays to study a wide range of things, gravitational waves could act as a new analytical tool. Scientists would listen to these waves to learn more information about the objects producing them, which include black holes, neutron stars, and supernovae.
So, while this discovery won’t exactly change your life, it’s easy to see how big of a discovery this was for the field of physics, giving us both a new way to observe the cosmos and further cementing the theory of relativity. Once again, Einstein has been proven right many decades after his death. That’s a feat that very few people have achieved.
I imagine you’re all pretty familiar with fuel cell technology at this point. It’s been around for quite some time and is often heralded as the answer to green, renewable energy. For the most part that is quite true, as there are a number of advantages that this technology has over current combustion-based options. It not only produces smaller amounts of greenhouse gases, but none of the air pollutants associated with health problems.
That being said, the technology isn’t perfect, with many improvements still to be made. One problem is that a fuel cell’s environmental impact depends greatly on how the fuel is acquired. For example, the by-products of a hydrogen (H2) fuel cell may only be heat and water, but if electricity from the power grid is used to produce the H2 fuel then CO2 emissions are still too high.
The technology also requires the use of expensive or rare materials. Platinum (Pt) is easily the most commonly used catalyst in current fuel cell technology, and this is a rare metal often costing around 1000 US dollars per ounce. This really hurts the commercial viability of the fuel cell, but research regarding alternative materials is progressing.
While I’m certain these kinks will be worked out eventually, it is still worth considering other options. One such option is the Microbial Fuel Cell (MFC), a bio-electrochemical device that uses respiring microbes to convert an organic fuel into electrical energy. These already have several advantages over conventional fuel cell technology, primarily due to the fact that bacteria are used as the catalyst.
The basic structure of an MFC is shown in Figure 1, and you can see that it closely resembles that of a conventional fuel cell. In fact the method by which it produces electricity is exactly the same, the only differences are the fuel and the catalyst.
The fuel for an MFC is often an organic molecule that can be used in respiration. In the figure it is shown to be glucose, and you can see that its oxidation yields both electrons and protons. It is worth noting that the species shown as “MED” is a mediator molecule used to transfer the electrons from the bacteria to the anode. Such molecules are no longer necessary, as most MFCs now use electrochemically active bacteria known as “Exoelectrogens”. These bacteria can directly transfer electrons to the anode surface via a specialised protein.
As I mentioned before, this technology has several advantages over conventional fuel cell technology in terms of cost and environmental impact. Not only are bacteria both common and inexpensive when compared to Pt, but some can respire waste molecules from other processes. This means not only that less waste would be sent to landfill, but that the waste would actually become a source of energy. This has already been applied in some waste-water treatment plants, with the MFCs producing a great deal of energy while also removing waste molecules.
Now you’re probably thinking, “Nathan, this is all well and good, but it’s not exactly new technology”. You’d be right there, but some scientists from the University of Bristol and the University of the West of England have made a big improvement. They have designed an MFC that is entirely biodegradable! The research was published in the journal ChemSusChem in July of 2015, and it represents a great step in further reducing the environmental impact of these fuel cells.
Many materials were tried and tested during the construction process. Natural rubber was used as the membrane (see Figure 1), the frame of the cell was produced from polylactic acid (PLA) using 3D printing techniques, and the anode was made from carbon veil with a polyvinyl alcohol (PVA) binder. All of these materials are readily biodegradable with the exception of the carbon veil, but this is known to be benign to the environment.
The cathode proved to be more difficult, with many materials being tested for conductivity and biodegradability. The authors noted that conductive synthetic latex (CSL) can be an effective cathode material, but lacks the essential biodegradability. While this meant it couldn’t be used in the fuel cell, it was used as a comparison when measuring the conductivity of other materials.
Testing then continued with an egg-based and a gelatin-based mixture as the next candidates. While both of these were conductive, they weren’t nearly good enough to be used; CSL actually performed five times better than either of them. But science cannot be beaten so easily! Both mixtures were improved by modification with lanolin, a fatty substance found in sheep wool, which is known to be biodegradable. This caused a drastic increase in performance for both mixtures, with the egg-based one outperforming CSL! That increase easily made it the best choice for the cathode.
With all the materials now decided, it was time to begin construction on the fuel cell. A total of 40 cells were made and arranged in various configurations. These are shown in Figure 2, and each configuration was tested to determine its performance. Of these three, the stack shown in Figure 2C was found to be able to continuously power an LED that was directly connected. It was also connected to some circuitry that harvested and stored the energy produced, and the authors report that the electricity produced by this method could power a range of applications.
While there is much to celebrate here, the authors also address some of the concerns associated with this technology. The most notable concern is how long the fuel cells can operate, and the authors report that after 5 months of operation the stacks were still producing power. This could potentially be longer in an application, as the operational environment of a fuel cell rarely mimics natural conditions.
They also discuss how these MFCs didn’t perform as well as some produced in other studies, but these were the first to be made from cheap, environmentally friendly materials. If anything, this research shows that such fuel cells can at least be functional, and are an excellent target for further research.
So we’ll have to wait for more research to see if this technology will actually take off, and given the timescale of this study it’s likely that we’ll be waiting quite some time. Even so, this is an important step on the road to completely sustainable living, as it shows that even our power sources could be made from completely environmentally friendly materials. Now we just have to hope people take notice. Let’s make sure they do!
Rahimnejad, M., Adhami, A., Darvari, S., Zirepour, A., & Oh, S. (2015). Microbial fuel cell as new technology for bioelectricity generation: A review. Alexandria Engineering Journal, 54(3), 745-756. http://dx.doi.org/10.1016/j.aej.2015.03.031