Month: November 2015

The Flowers of the Future!

What do I mean by flowers of the future? I mean cyberplants! Researchers working at Linköping University in Sweden have found a way to create a rose plant with electronic circuitry running through its tissue, and the effects and implications are very interesting indeed.

Now the idea of cyberplants is not entirely new, as some research by Michael Strano at the Massachusetts Institute of Technology revealed that spinach chloroplasts can incorporate carbon nanotubes (CNTs) into their structure. The report stated that this boosted the rate of photosynthesis, as the CNTs could absorb wavelengths of light that the chloroplasts could not. So if this has been done before, what makes this new discovery special?

Well, this research is the first example of someone incorporating a working electronic circuit into a plant’s anatomy. This was done with a polymer known as PEDOT, or Poly(3,4-ethylenedioxythiophene) if you’re one of our chem nerds, and the structure can be seen below. This material is an excellent conductor when hydrated and is commonly used in printable electronics, making it an excellent contender for the cyberplant project.

The repeating unit structure of PEDOT. Source: https://en.wikipedia.org/wiki/Poly(3,4-ethylenedioxythiophene)

The researchers tested many materials before they made their choice, but none of them were ideal. Some caused the plant to produce toxic compounds, essentially poisoning it, while others clogged the plant’s water transportation systems. They eventually settled on PEDOT, which didn’t cause any noticeable problems, and found a way to incorporate it into the plant’s stem and leaves. They created the world’s first cyber-rose!

This was done by soaking each component in separate PEDOT solutions, and then manipulating them in some way to cause the polymer to migrate into the plant tissue.

In the case of the stem, natural capillary action pulled the polymer out of solution and into the plant's vascular tissue. The natural structure of the stem then allowed the polymer to self-assemble into wires up to 10 cm long! The conductivity of these structures was then measured using two gold (Au) probes, and the performance was found to be on par with conventional printed PEDOT circuits, according to Magnus Berggren, one of the team members.

The leaves proved to be trickier. They were first placed in a mixed solution of PEDOT and cellulose nanofibres, then a vacuum was applied. This caused all the air in the leaf tissue to be expelled, and the PEDOT then moved out of the solution and into the empty space the air left behind. This gave the leaves a very interesting property, causing their colour to shift between bluish and greenish hues when a voltage was applied.

Now, while this all seems very interesting, some scientists have questioned what the practical implications of this research could be. “It seems cool, but I am not exactly sure what the implication is” says Zhenan Bao, who works with organic electronics at Stanford University in California.

But Berggren suggests that these electronics could provide an alternative to genetic engineering for monitoring and regulating plant behaviour. While the genetic modification of plants is well established, permanently changing certain traits, such as flowering times, might be too disruptive to an ecosystem, especially if those changes get passed on to other plants in the area. Electronic switches would not carry this risk, and could be easily reversed when needed.

However, if this research progresses to practical applications, the team would have to show that the polymers they use are not harmful to the environment in any way, and in the case of food crops, that the material doesn’t end up in any edible portions of the plant. But this may not be a problem in the future, as the team hopes to make use of biological chemicals to create the circuitry, bypassing any potential environmental and health hazards.

Given the success of their initial study, the team is now collaborating with biologists to develop their research further, and investigating any new directions it could go in. For example, Berggren is apparently investigating whether these PEDOT devices could be used to develop a system that allows the plant to act as a living fuel cell, a project he has rather amusingly named "flower power".

Regardless of how well this research pans out in the future, it does have the value of being inherently interesting, a trait that drives a great deal of scientific research. But what really interests me, as is the case with a lot of the stuff I write about, is that this is yet another step closer to the world of science fiction. We're getting closer, people! All we have to do is wait.

Sources:


The Age of Antibiotics Could Soon be Over

Antibiotic awareness week has been given a whole new meaning this year due to one particularly eye-opening discovery. We have been slowly emptying our armoury of antibiotics for a while now, with few new examples being developed in the past two decades and new infectious diseases being discovered almost every year. We're also living in a time when bacteria are evolving and becoming increasingly resistant to antibiotics, and it looks like they've now breached our last line of defence.

A report in The Lancet Infectious Diseases has just revealed the existence of bacteria with resistance to one of our last-resort antibiotics: a drug known as Colistin. Colistin has had a rough history, being deemed too toxic for human use not long after its discovery due to the damage it caused to kidney cells. But it made a comeback in the early 2000s when more drug-resistant bacteria began to emerge, and kidney damage started to seem like the lesser of two evils. By 2012, the World Health Organisation had classified Colistin as critically important for human health.

But, unknown to many medical professionals in the West, Colistin was also being used in China. While it was never approved for human use there, understandable considering its toxicity, it was approved for use in animals. It has been known for quite some time that feeding animals low doses of antibiotics fattens them up, and local pig farmers took to using large quantities of Colistin for this very reason.

This near-constant use of Colistin meant that bacteria were being repeatedly exposed to it; long enough for them to learn how to fight back. Colistin resistance has occurred in the past, but the relevant gene was found in the chromosomal DNA, and could not be passed on to non-resistant bacteria. But these guys were a cut above the rest. This time the resistance gene, now dubbed MCR-1, was found on a plasmid. This is a small, circular loop of DNA that many bacteria possess, and it can be passed from one bacterium to another in a process called horizontal gene transfer. This is outlined in the graphic below.

The process of horizontal gene transfer. Source: http://www.bbc.co.uk/news/health-34857015

This means there is now potential for the resistance gene to end up in the DNA of many different species of bacteria, and it has already been found in some known to cause infections in humans, such as E. coli and Klebsiella pneumoniae. Now this wouldn't be so bad if the gene could be effectively quarantined, but the researchers report that it is already widespread in southern China. The study found that the gene was present in 15% of meat samples and 21% of animals tested between 2011 and 2014, as well as in 16 of 1,322 samples taken from humans. To make matters worse, there is also some evidence that the gene has managed to spread outside of China into countries such as Laos and Malaysia.

If this gene continues to spread, which is highly likely since reports state that it has an extremely high transfer rate, then we could see the emergence of "pan-resistant bacteria" – bacteria resistant to all known methods of treatment. This is a very frightening prospect for modern medicine, and if MCR-1 combines with other resistance genes, then medicine could be launched back into the dark ages. As Professor Timothy Walsh told BBC News, "At that point if a patient becomes seriously ill, say with E. coli, then there is virtually nothing you can do".

But the apocalypse is not upon us yet! Although the prospect of the MCR-1 gene going global seems to be a case of when, not if, we still have time to prevent a complete breakdown of modern medicine if we act fast enough. There are even new antibiotics currently being researched, such as Teixobactin, that could help delay the onset of the post-antibiotic era. But this is not something we should rely on, as it is still a long way from being ready for medical use.

This is one hell of a wake-up call, and the authors of the report know this, stating that their findings "emphasise the urgent need for coordinated global action" in the fight against antibiotic resistance. Whether it's through the discovery of new antibiotics or entirely new methods of treatment, we need to work together to restock our armoury and find new weapons to combat this new breed of superbug. If not, deaths from routine surgeries and minor infections could become commonplace once again due to the lack of treatment options. So let's hope our scientists are on the case! They have quite the challenge ahead.

Sources:

New Self-Folding Material takes the Effort out of Origami

This self-folding material can walk on its own! Source: http://www.engadget.com/2015/11/09/graphene-paper-walks-laser/

I’ve never been any good at origami. Hell, I remember struggling to make a paper aeroplane when I was a kid, and I doubt I would fare much better today. But, thanks to some scientists at Donghua University in China, a new material has been invented that is capable of doing all the folding itself! Albeit into a predetermined shape. Finally! Something appropriate for my level of skill.

Self-folding materials have become a major research topic recently, with the majority of research focussing on "active polymers". These materials are capable of converting other forms of energy, such as light or heat, into mechanical work. But they are far from seeing practical applications, due to complex production methods, unrealistic operating conditions, or complicated combinations of materials that leave them fragile.

This is where this new material enters the picture! Here, the researchers have successfully created a sheet of Graphene Oxide (GO) “paper” that overcomes many of the previously mentioned problems. GO, while not quite as spectacular as pure Graphene, possesses many impressive properties, boasting both incredible strength and integrity. The paper’s self-folding properties are also capable of operating under physiological conditions, making it a prime candidate for future applications.

The material itself consists of a sheet of GO with some areas treated with Polydopamine (PDA). These treated areas act like sponges, absorbing water from the environment and swelling in humid conditions, whereas the rest of the sheet remains fairly inert. Application of heat or infrared light then causes this water to evaporate and the PDA treated areas to shrink and pull on the surrounding material, bending the paper into the designed shape. The researchers also noted the speed of this response, with a single strip of the paper able to fold and unfold in around five seconds.

Now all of that is pretty impressive, but what I find way cooler is what devices they constructed using this material. Through careful placement of the PDA-treated areas, the researchers were able to create various self-folding objects, including a self-assembling box and an artificial hand able to grasp and hold objects up to five times its own weight!

The most popular of these objects is the walking device shown in the gif at the top of this page. Created from a rectangular sheet of GO paper, this device has three PDA-treated bands running across it that get progressively wider from front to back. When infrared light is applied, the bending of these areas causes the sheet to curve into an arch, with the rear of the paper curving to a greater degree due to the varying width of the treated bands. When the light is removed, the sheet relaxes and moves forwards, with a response time fast enough that five of these steps take only two seconds. You can see a video of this little robot in this Nature article on the same subject.

This technology has many potential applications including robotics, materials for sensors, and artificial muscles. One researcher in particular, Hongzhi Wang, has very high hopes for the future of this material, suggesting that it could be integrated into solar cells to build self-folding panels. But, while that potential does exist, the researchers have indicated that there are still improvements to be made, as well as new questions to be answered. These include improving the 1.8% energy conversion efficiency, which remains a limiting factor, and exploring how a reduction in size will affect the properties and performance of the material.

So, while all of this COULD open the way for a new type of effortless origami, I highly doubt that it ever will. At least not any time soon. Not when there are more pressing and important applications. I mean, artificial muscles! That’s something I can’t wait to see.

Sources:

vOICe: Helping People See with Sound

A demonstration of the vOICe experiment. Photo credit: Nic Delves-Broughton/University of Bath Source: http://www.theguardian.com/society/2014/dec/07/voice-soundscape-headsets-allow-blind-see

It seems like there is an almost constant stream of awesome new technology these days, and there has been a rather fantastic addition! A device is being researched at both the California Institute of Technology (Caltech) in the US and the University of Bath in the UK, with a very noble goal in mind: to build better vision aids for the blind.

Now it has long been known that blind people often rely on sound as a substitute for sight, with some individuals' sense of hearing heightened to the point of being able to use echolocation. Well, it turns out that sound can also be designed to convey visual information, allowing people to form a kind of mental map of their environment. This is achieved by a device known as "vOICe": a pair of smart glasses capable of translating images into sounds.

The device itself consists of a pair of dark glasses with a camera attached, all of which is connected to a computer. The system then converts the pixels in the camera's video feed into a soundscape, mapping vertical position to pitch and brightness to volume. This means that a bright cluster of pixels at the top of the frame will produce a loud sound with a high pitch, and a dark area toward the bottom will give the opposite: a quiet sound with a low pitch.
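To make that mapping a little more concrete, here is a minimal sketch of how an image-to-sound conversion along these lines might work, assuming a simple left-to-right scan of the frame. It is purely illustrative and not the actual vOICe algorithm; all the names and parameter values are my own.

```python
# A toy image-to-soundscape converter in the spirit described above (not the
# real vOICe algorithm): the image is scanned column by column from left to
# right, each row gets its own pitch (higher in the frame = higher pitch),
# and the brightness of each pixel sets how loud that pitch is.

import numpy as np

SAMPLE_RATE = 44100           # audio samples per second
SCAN_TIME = 1.0               # seconds to sweep across the whole frame
F_MIN, F_MAX = 200.0, 2000.0  # pitch range in Hz (arbitrary illustrative choice)

def image_to_soundscape(image: np.ndarray) -> np.ndarray:
    """Convert a 2D greyscale image (values 0-1, row 0 at the top) to mono audio."""
    n_rows, n_cols = image.shape
    samples_per_col = int(SAMPLE_RATE * SCAN_TIME / n_cols)
    t = np.arange(samples_per_col) / SAMPLE_RATE

    # Row 0 is the top of the frame, so it is assigned the highest frequency.
    freqs = np.linspace(F_MAX, F_MIN, n_rows)

    columns = []
    for col in range(n_cols):
        brightness = image[:, col]                      # loudness for each row
        tones = np.sin(2 * np.pi * freqs[:, None] * t)  # one sine wave per row
        columns.append((brightness[:, None] * tones).sum(axis=0))

    signal = np.concatenate(columns)
    return signal / (np.abs(signal).max() + 1e-9)       # normalise to [-1, 1]

# A bright blob in the top-left corner gives a loud, high-pitched tone at the
# start of the sweep; a dark frame stays almost silent.
demo = np.zeros((32, 32))
demo[2:6, 2:6] = 1.0
audio = image_to_soundscape(demo)
```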

But what is really impressive about this technology is that this soundscape appears to be intuitively understood, requiring little to no training at all! In a test performed by researchers Noelle Stiles and Shinsuke Shimojo at Caltech, blind people with no experience of using the device were able to match shapes to sounds just as well as those who had been trained, with both groups performing 33% better than pure chance. In contrast, when the coding was reversed (high position = low pitch, bright pixels = quiet, etc.), volunteers performed significantly worse. So how did they achieve such an intuitive system?

Well, it began with Stiles and Shimojo working to understand how people naturally map sounds to other senses. Both blind and sighted volunteers were involved in the system's development, with sighted people being asked to match images to sounds, and blind volunteers being asked to do the same with textures. The pattern of choices during these trials directly shaped vOICe's algorithm, and appeared to produce an intuitive result. This seems to have surprised the researchers, who wrote that "the result that select natural stimuli could be intuitive with sensory substitution, with or without training, was unexpected".

All of this had me itching to learn more, and it was then that I found that the research at the University of Bath further emphasises the importance of having such an intuitive system. Here the researchers claim that some users are exceeding the level of visual performance often achieved by more invasive restoration techniques, such as stem cell implants or prosthetics. While people who receive such surgery are rarely able to make out more than abstract images, some long-term users of vOICe claim to form images in their brain somewhat similar to sight, as their brains become rewired to "see" without use of their eyes.

Michael J Proulx, an associate professor at the university's department of psychology, gave the example of a man in his 60s who had been born blind. Proulx reports that he initially thought the idea was a joke, too sci-fi to be real, but "after 1 hour of training, he was walking down the hall, avoiding obstacles, grabbing objects on a table. He was floored by how much he could do with it". He also reports that after a few weeks of use, some people were able to achieve levels of vision of 20/250. To put that into perspective for you, a short-sighted person who removed their glasses would have a level of around 20/400 (the lower that second number, the sharper the vision). That's right, this tech could allow the completely blind to see better than those who are still partially sighted! That's something to wrap your head around.

But slow down there with your excitement! While this technology is truly revolutionary, it is worth pointing out that there is a huge gulf between distinguishing patterns in a lab environment and using vOICe to actually observe and understand the real world. For example, we don't know how a busy street, with an already large amount of visual and auditory information, would affect both the device's signals and how they are interpreted. But there is no denying that this work represents an important step on the road to developing better vision aids, and given that the World Health Organisation estimates a total of 39 million blind people in the world, this technology could bring about a dramatic increase in quality of life across the globe.

But that's not all this technology could do, as the results are challenging the concept of what being able to "see" actually means. This is illustrated by a quote from Shimojo at Caltech, who mentions that "our research has shown that the visual cortex can be activated by sound, indicating that we don't really need our eyes to see". This has profound implications for the field of neuroscience, and has led to another study beginning at the University of Bath to examine exactly how much information is required for a person to "see" in this way. This could not only lead to optimisation of this technology, but to a deeper understanding of how the human brain processes sensory information.

Now I don’t know about you, but I remember when stuff like this was considered to be firmly stuck in the realm of science fiction, and the fact that such talented scientists keep bringing it closer to reality still surprises me. Combine this with an incredible rate of progress, and there really is no way of knowing what futuristic tech they’ll come up with next. This can make keeping up with it all one hell of a challenge, but fear not, my scientific friends! I shall remain here to shout about anything new that comes along.

Sources not mentioned in text:

Why the EU matters for British Science.

With the referendum on Britain's EU membership getting ever closer, there have been many arguments both for and against, but there seems to be one issue that is woefully underreported: the effect that it could have on British scientific research. So far, the science community has strongly aligned itself with the "in" camp, arguing that British research would suffer if the country were outside the EU, and the presidents of the Royal Society, both past and present, have all spoken up about the benefits of membership. But beneath all of this is one central concern: if Britain left the EU, our scientists would be left isolated.

Britain is currently a scientific powerhouse, producing 16% of the world’s highest quality research despite hosting just 1% of its population. But the reason for this excellence is that British scientists are able to collaborate with some of the best in the world, many of whom are working in other EU countries. Couple that with the fact that much of our research funding comes from the EU science budget, and you can see that leaving would massively damage this enterprise.

Those campaigning for a British exit from the EU have countered this argument by saying that leaving would mean some of the UK's contribution to the EU budget could then be invested in research within the country. True, but the problem is that much of our research now involves collaboration with other EU scientists, and that collaboration would be harmed. The UK's increasingly networked nature has allowed it to truly excel in collaborative science, with more than 50% of UK research papers having international co-authors, compared to just 33% in the US. Papers with international teams of authors also have a much greater impact, with one-third of the best journal papers resulting from international collaborations.

The fact that we collaborate so much is extremely beneficial, as it is exactly what the EU science budget supports. Much EU research funding requires projects to involve researchers from at least three different EU member (or associate) states, and some of the most prestigious and valuable research grants in Europe are awarded by the European Research Council. This funding not only helps launch the best British scientists on to the world stage, but such scientists have consistently earned more back in grants than the UK has contributed in every year this scheme has existed, receiving around £1.40 for every £1 that we put in.

But this is not the only scheme we benefit from, as the EU's Marie Skłodowska-Curie mobility fellowships support and fund EU scientists to come to Britain as postdoctoral fellows, who are the main drivers of bench science in many disciplines. Through these fellowships, British labs were awarded €1 billion between 2007 and 2014. Again, we gained more money than we contributed, receiving nearly double the amount awarded to Germany, the next best-funded country.

All of this means that the EU directly pays for a huge amount of British scientific research and innovation, and because British science is of a very high quality, there is both a net financial and a net scientific gain. We absolutely cannot afford to lose such a successful source of EU funding, especially since research is usually a fairly low priority in the political arena.

Withdrawing from the EU would also affect how freely scientists can both enter and leave Britain, and success in science is heavily dependent on the movement of people, more so than in most other disciplines. Results are not only exchanged formally by publication, but also discussed more directly by individuals in international networks, and scientists will frequently move countries to work in new labs and with new research teams. Strict visa regimes already limit many non-EU scientists from contributing to British science, and the idea that this could potentially extend to an even larger number of EU researchers is a frightening prospect.

The evidence should now be clear; if Britain leaves the EU, we would massively damage an enterprise we are becoming so well-known for.

But why should the voters care? How does investing in scientific research affect those who are not directly involved? Well, it has been shown that such investment, through national and EU funding streams, yields historically proven economic returns, all while tackling important social challenges in areas such as healthcare, sustainability, and the environment.

By showing commitment to science funding, the UK can bring in excellent, internationally mobile scientists, engineers, and the industries that seek to employ them, which will give immediate gains through tax revenues and employment. This would also help attract more overseas students, who collectively contributed £5 billion in 2008/09. Couple this with the fact that nearly 30% of the UK’s GDP is produced by sectors involved in science and technology, and it becomes undeniable that this is good for the economy.

So, if Britain cares about science, and cares about maintaining its excellent reputation for research, then it needs the EU. But if Britain leaves, then our scientists will be left stranded on this island, without influence or funding, and will begin to fade into obscurity.

Sources:

Glass Almost as Hard as Steel!

It seems like the days of finding a shattered screen after dropping your smartphone are coming to an end. You may have already heard of Gorilla Glass, the wonder material from Corning that has bounced onto the smartphone scene in recent years. But even that can fail, and breakages have been reported. As engineers strive to push what is possible for these small gadgets, so too do they find new ways to tweak and enhance the properties of glass.

Glass is usually made by heating minerals to very high temperatures and allowing them to cool, but much of the glass used in smartphones, and skyscrapers for that matter, is made stronger by the addition of metal atoms. For example, Gorilla Glass is toughened with Potassium (K), an alkali metal. But now, a team of Japanese research scientists has found a way to add an oxide of Aluminium, known as Alumina (Al2O3), to the glass structure. This oxide has long been coveted as a candidate for making super-strong glass, as it possesses some of the strongest chemical bonds known to man, with a dissociation energy of 131 kJ/cm3.

The scientists had hypothesised that adding Alumina to glass would make a super-robust new material, but producing it wasn't going to be easy. In their first attempts, adding the Alumina caused Silicon Dioxide (SiO2) crystals to form where the mixture met the surface of the container holding it. These crystals made the glass no longer see-through, and effectively worthless. It would be quite pointless to have a super-strong glass that wasn't transparent, as that throws many potential applications right out the window, and the researchers knew this. They needed to develop a new production method.

Aerodynamic levitation is what they came up with, and it's almost as sci-fi as it sounds. It involves holding the Alumina/glass mixture in the air while it forms by pushing it from below with a flow of oxygen gas. A laser is then used as a spatula to mix the material as it cools, and the result is a material that contains more Alumina than any glass to date, while remaining both transparent and reflective. Testing then revealed that the glass was very hard: harder than other oxide glasses as well as most metals, and almost as hard as steel, according to an article from phys.org.

The researchers remain hopeful that aerodynamic levitation could make it possible to produce all sorts of other super strong glasses, but first they need to figure out how to scale up the production process, as it currently only works in small batches. Nevertheless, it is nice to think that the despair you feel when discovering a broken screen may be a thing of the past. I don’t think anything will be able to live up to the integrity of those old Nokia bricks though. Those things could truly take a beating.

Sources not mentioned in text:

Chemistry with a Bang: The Science of Fireworks

A colourful firework display! Source: http://londontheinside.com/2015/10/16/east-village-fireworks/

Loud bangs and pretty colours; that’s what fireworks are known for. I imagine we have all been to a huge and impressive firework display at some point, and during a significantly smaller one that my family had at the end of Halloween, I realised I didn’t actually know how these miniature rockets worked. How do they achieve the patterns and colours they are known for?

The basic structure of a firework. Source: http://www.explainthatstuff.com/howfireworkswork.html

Well, it turns out they are an excellent example of the everyday application of the physical sciences, with some very interesting Chemistry dictating both the bangs and the colours. But before we get into that, let's take a look at the structure of a firework and see how everything is bound together.

  1. This is the Stick, or "tail" if you prefer, which consists of wood or plastic. This long stick ensures that the firework shoots in a straight line, rather than firing off in just any direction. This not only helps prevent a firework to the face, but also allows display organisers to position the firework effects with precision, allowing for a well-coordinated display.
  2. This is where the fuse is located, which consists of a small bit of paper or fabric that can be lit by a flame or by an electrical charge. This starts the fuel of the firework burning and can set off other, smaller fuses that make the effects explode later than the main firework.
  3. This is the Charge, a fairly crude explosive designed to shoot the firework upward. These charges can sometimes reach heights of several hundred metres (roughly 1000 ft) and can achieve speeds of several hundred kilometres per hour. This is also where the fuel is located, often tightly packed gunpowder, whose composition we will discuss later.
  4. This is the fun part where the Effects are contained. The compounds stored in here are what actually produce the display once the firework is in the air. There can be one or multiple effects, usually packed into separate compartments, that can fire off in a predetermined sequence or all at once.
  5. This component is not particularly special. It is often referred to as the “Head” of the firework, and can be aerodynamically designed to improve the flight on more expensive models, but is often just a flat cap on cheaper ones. This part is designed to contain the effects.

Now we get to the hardcore science! All fireworks contain the same basic chemical ingredients: an oxidant, a fuel, a colour producer, and a binder. The binder is the simplest of the ingredients, used to hold everything together in a paste-like mixture. The most common type of binder is known as dextrin, which is a type of starch. The other components are a bit more complex.

The oxidant is required in order for the mixture to burn. Common examples are Potassium Nitrate (KNO3) and Potassium Perchlorate (KClO4), both of which decompose when heated to yield Oxygen. This Oxygen then allows for the effective combustion of the fuel.
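For the Chemistry fans, the decomposition reactions look roughly like this (simplified versions; real firework mixtures are messier, and the nitrate can decompose further at higher temperatures):

2 KNO3 → 2 KNO2 + O2
KClO4 → KCl + 2 O2

In both cases it is the Oxygen gas released that keeps the fuel burning fast and hot.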

The fuel can consist of many chemicals, such as Carbon- or Sulfur-containing compounds, as well as organic-based materials like poly(vinyl chloride) (PVC). They can also contain powdered Aluminium (Al) or Magnesium (Mg) to help the mixture reach the high temperatures necessary to cause rapid combustion. The most common fuel used today is gunpowder, sometimes called black powder, which consists of a mixture of charcoal, Sulfur, and KNO3. This allows it to act as both a fuel and an oxidant.

The special effects, such as the bright colours, are provided by additives to the mixture. These additives are often metal compounds known as metal salts, frequently using elements from Group 2 of the periodic table. An example of this is the Barium (Ba) salt Barium Carbonate (BaCO3), which is added to produce green flames when the firework goes off. The species actually responsible for the colour in this case is gaseous BaCl+, which is produced when Barium ions (Ba2+) combine with Chloride ions (Cl–). The Barium ions are produced when the BaCO3 salt decomposes, and the Chloride ions can come from the decomposition of the KClO4 oxidant or the PVC fuel, depending on which is used.

When the firework explodes, the newly formed BaCl+ gas is extremely hot, and contains a great deal of kinetic energy. This means that the many atoms in the explosion will frequently collide with each other, and kinetic energy will be transferred from one atom to another. The energy from both these collisions and the heat of the explosion can then be absorbed by the electrons surrounding an atom, and they become "excited" into a higher energy state. Now these excited states are unstable, so the electrons will naturally return to the lowest energy state available, known as the "ground state". The energy that was absorbed is then released in the form of light, with the colour depending on the amount of energy released. In this case, you would see a bright green flame emitted by the BaCl+ gas. The specific range of colours emitted in this process is called the "emission spectrum", and each element or molecule will have a unique spectrum, as its structure determines which energy states are available for the electrons to occupy.
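If you want to see how the size of that energy drop translates into a colour, here's a rough back-of-the-envelope calculation using the relation E = hc/λ. The transition energy below is just an illustrative value I've picked to land in the green part of the spectrum, not a measured figure for BaCl+.

```python
# Rough illustration of how the energy released by a relaxing electron sets
# the colour of the emitted light, via E = h * c / wavelength.

h = 6.626e-34    # Planck's constant (J s)
c = 2.998e8      # speed of light (m/s)
eV = 1.602e-19   # joules per electronvolt

transition_energy = 2.4 * eV   # assumed energy gap, not measured BaCl+ data
wavelength_nm = h * c / transition_energy * 1e9

print(f"{wavelength_nm:.0f} nm")   # ~517 nm, in the green region (~495-570 nm)
```

Bigger energy gaps push the emitted light toward the blue end of the spectrum and smaller ones toward the red, which is exactly why different metal salts give different colours.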

There are many other metal salts that can be used, each yielding a different colour. Strontium salts give off red light, whereas Copper compounds produce a nice blue. But pure colours require pure ingredients, and even trace amounts of other metal compounds are sufficient to alter or completely overpower the desired colour. The skill of the manufacturer, as well as the age of the firework, will therefore greatly influence the final display.

So next time you’re at a public firework display, you can shamelessly point at the sky and scream “SCIENCE!”, or proudly describe the Chemistry to the person next to you. Although I highly doubt either option will make you very popular.

Sources:

The more we cool, the warmer we get.

The world is heating up, and rather ironically, our obsession with keeping things cold is contributing a great deal. With the increasing income and urbanisation of developing world countries, the demand for both refrigeration and cooling technologies is increasing rapidly. China alone purchased 50 million air conditioning units in 2010, and is expected to surpass the U.S. as the world’s biggest consumer of electricity for such units by 2020. To put that into perspective, the U.S. currently consumes more energy for air conditioning than the rest of the world combined, and uses more energy for cooling than the entire continent of Africa uses for all purposes. That is A LOT of energy.

Right now, refrigerants (fluids that absorb and release heat efficiently at the right temperatures) are the key to cooling technologies, but they also cause a great deal of trouble when released into the atmosphere. Previously used examples such as chlorofluorocarbons (CFCs) are being phased out due to the damaging effect they had on the Earth's Ozone layer, but the less aggressive substitutes are still powerful greenhouse gases. The most prominent of these new compounds are hydrofluorocarbons (HFCs), which do have a smaller climate warming potential than the substances they're replacing, but are still up to 4000 times as potent as CO2. That comparison is pound-for-pound, however, and the sheer quantity of CO2 we emit still makes it the larger threat overall.

Now, air conditioning technology has improved a great deal since its invention, with the energy requirements steadily decreasing over the years. In 1980 an individual unit used 1474 kWh/year and cost around $178 to run, according to figures from the U.S. Environmental Protection Agency. Today, certain units use only 597 kWh/year and cost around $75 to run. But while the energy requirements have been decreasing, we are using significantly more air conditioning units. Couple this with the fact that air conditioning runs on electricity, much of which is still generated by burning coal, and we have a huge problem for the climate.
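As a quick sanity check on those EPA figures (assuming the quoted running costs correspond directly to the quoted consumption), the maths works out to roughly the same electricity price then and now, with the newer unit simply using far less energy:

```python
# Quick check on the air conditioning figures quoted above, assuming the
# stated annual costs correspond directly to the stated annual consumption.

old_kwh, old_cost = 1474, 178   # a typical unit in 1980
new_kwh, new_cost = 597, 75     # an efficient unit today

print(f"Implied price then: ${old_cost / old_kwh:.3f} per kWh")  # ~$0.121
print(f"Implied price now:  ${new_cost / new_kwh:.3f} per kWh")  # ~$0.126
print(f"Energy saving per unit: {1 - new_kwh / old_kwh:.0%}")    # ~59% less electricity
```

So efficiency per unit has improved enormously; the problem, as described above, is the sheer number of units now being switched on.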

In contrast, a large amount of energy used for heating also comes from fossil fuels, but those with somewhat smaller carbon emissions than coal, such as oil and natural gas. This means that, on average, an air conditioning unit emits more greenhouse gases removing heat than a heater does supplying that same quantity of heat, and the climate impact of these units in the U.S. alone is currently almost half a billion tonnes of CO2 per year. This is already bad, but it could be made a lot worse if global consumption reaches the predicted 10 trillion kWh/year, equal to half the world’s entire electrical supply.

With such greenhouse emissions already having increased the Earth's temperature by about 0.56 °C, and scientists predicting that the trend will continue, it is safe to say that more energy will be used in cooling, and less in heating, as people attempt to find relief from the increasing temperatures. In fact, the Netherlands Environmental Assessment Agency predicts that, in this warming world, the growth in emissions from cooling technology will outpace the decline in emissions from heating. There is hope that this could be prevented by using renewables to meet energy demands, but this is unlikely. Even if electricity from sources such as wind, solar, biomass, and geothermal expanded to five times its current production, it would still be unable to cover the air conditioning demand of the U.S. alone.

But that is not to say that countries are not attempting to find low energy methods for cooling, as many are in progress on every continent. For example, there are “Passive Cooling” projects in China, India, Egypt and other countries that combine traditional technologies, such as wind towers and water evaporation, with new architectural designs that are ventilation friendly. Solar adsorption is also being experimented with, which uses the heat from the Sun to cool the indoor air… somehow. However, this is not currently affordable or adaptable to home use.

All of this would suggest that we should attempt to prevent the continued growth of air conditioning technology. But despite all of these problems, studies have shown that air conditioning does improve people’s lives. It not only helps sleep patterns, but makes workers more efficient and has prevented potential heat stroke deaths during hot summers. Given this information, is it fair to demand that developing countries go without air conditioning when so many people in the developed world use it freely? I don’t think so. Which means we have to hope that the world continues to find and develop new ways to adapt to increasingly warmer temperatures, and that we can act fast enough to prevent a world that is too warm for even humans to comfortably live in.

Sources: