Month: January 2016

New Fuel Cell Technology keeps the Environment in mind!

I imagine you’re all pretty familiar with fuel cell technology at this point. It’s been around for quite some time and is often heralded as the answer to green, renewable energy. For the most part that is quite true, as the technology has a number of advantages over current combustion-based options. Not only does it produce smaller amounts of greenhouse gases, but it also produces none of the air pollutants associated with health problems.

That being said, the technology isn’t perfect, and there are still many improvements to be made. One problem is that a fuel cell’s environmental impact depends greatly on how its fuel is acquired. For example, the by-products of a Hydrogen (H2) fuel cell may only be heat and water, but if electricity from the power grid is used to produce the H2 fuel, the associated CO2 emissions are still far too high.

The technology also requires the use of expensive or rare materials. Platinum (Pt) is by far the most commonly used catalyst in current fuel cells, and it is a rare metal that often costs around 1,000 US dollars per ounce. This really hurts the commercial viability of the fuel cell, but research into alternative materials is progressing.

While I’m certain these kinks will be worked out eventually, it is still worth considering other options. One such option is the Microbial Fuel Cell (MFC), a bio-electrochemical device that uses respiring microbes to convert an organic fuel into electrical energy. These already have several advantages over conventional fuel cell technology, primarily due to the fact that bacteria are used as the catalyst.

The basic structure of an MFC is shown in Figure 1, and you can see that it closely resembles that of a conventional fuel cell. In fact, the method by which it produces electricity is exactly the same; the only differences are the fuel and the catalyst.

Figure 1: The basic structure of an MFC. Source: http://www.sciencedirect.com/science/article/pii/S1110016815000484

The fuel for an MFC is often an organic molecule that can be used in respiration. In the figure it is shown to be glucose, and you can see that its oxidation yields both electrons and protons. It is worth noting that the species shown as “MED” is a mediator molecule used to transfer the electrons from the bacteria to the anode. Such molecules are no longer necessary, as most MFCs now use electrochemically active bacteria known as “Exoelectrogens”. These bacteria can directly transfer electrons to the anode surface via a specialised protein.
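For a rough sense of the chemistry, the textbook half-reaction for the complete oxidation of glucose at the anode (quoted here for illustration rather than taken from the figure itself) is:

C6H12O6 + 6H2O → 6CO2 + 24H+ + 24e−

Each glucose molecule therefore gives up 24 electrons, which the mediator or an exoelectrogen delivers to the anode, while the protons migrate through the membrane towards the cathode.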

As I mentioned before, this technology has several advantages over conventional fuel cells in terms of cost and environmental impact. Not only are bacteria both common and inexpensive compared to Pt, but some of them can respire waste molecules from other processes. This means not only that less waste would be sent to landfill, but also that the waste itself becomes a source of energy. This has already been applied in some waste-water treatment plants, with the MFCs producing a good deal of energy while also removing waste molecules.

Now you’re probably thinking, “Nathan, this is all well and good, but it’s not exactly new technology”. You’d be right there, but some scientists from the University of Bristol and the University of the West of England have made a big improvement: they have designed an MFC that is entirely biodegradable! The research was published in the journal ChemSusChem in July of 2015, and it represents a great step forward in further reducing the environmental impact of these fuel cells.

Many materials were tried and tested during the construction process. Natural rubber was used as the membrane (see Figure 1), the frame of the cell was produced from polylactic acid (PLA) using 3D printing techniques, and the anode was made from carbon veil with a polyvinyl alcohol (PVA) binder. All of these materials are readily biodegradable with the exception of the carbon veil, but that is known to be benign to the environment.

The cathode proved to be more difficult, with many materials being tested for conductivity and biodegradability. The authors noted that conductive synthetic latex (CSL) can be an effective cathode material, but lacks the essential biodegradability. While this meant it couldn’t be used in the fuel cell, it was used as a comparison when measuring the conductivity of other materials.

Testing then continued with an egg-based and a gelatin-based mixture as the next candidates. While both of these were conductive, they weren’t nearly good enough to be used; CSL actually performed 5 times better than either of them. But science cannot be beaten so easily! Both mixtures were improved by modification with lanolin, a fatty substance found in sheep wool that is known to be biodegradable. This caused a drastic increase in performance for both mixtures, with the egg-based one even outperforming CSL! That increase easily made it the best choice for the cathode.

With all the materials now decided, it was time to begin construction of the fuel cells. A total of 40 cells were made and arranged in various configurations. These are shown in Figure 2, and each configuration was tested to determine its performance. Of the three, the stack shown in Figure 2c was found to be able to continuously power a directly connected LED. It was also connected to circuitry that harvested and stored the energy produced, and the authors report that the electricity produced by this method could power a range of applications.

Figure 2: a) A set of 5 fuel cells connected in parallel. Known as a “parallel set”. b) A stack of 4 parallel sets. c) A stack of 8 parallel sets. Source: http://onlinelibrary.wiley.com/wol1/doi/10.1002/cssc.201500431/full
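To get a rough feel for why the stacking matters, here’s a minimal back-of-the-envelope sketch. The per-cell numbers are assumptions for illustration only (a single MFC typically puts out a few hundred millivolts, well below the roughly 2 V a typical LED needs); they are not values reported in the paper. The point is simply that parallel connections add current while series connections add voltage.

```python
# Illustrative only: assumed per-cell values, not measurements from the paper.
CELL_VOLTAGE = 0.4      # volts per MFC (typical MFCs sit around 0.3-0.7 V)
CELL_CURRENT = 0.001    # amps per MFC under load (assumed 1 mA)

def stack_output(cells_in_parallel, sets_in_series):
    """Voltage and current of a stack of parallel sets wired in series."""
    voltage = CELL_VOLTAGE * sets_in_series        # series connections add voltage
    current = CELL_CURRENT * cells_in_parallel     # parallel connections add current
    return voltage, current

# Figure 2c: 8 parallel sets of 5 cells each (40 cells in total)
v, i = stack_output(cells_in_parallel=5, sets_in_series=8)
print(f"~{v:.1f} V at ~{i * 1000:.0f} mA")  # ~3.2 V at ~5 mA under these assumptions
```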

While there is much to celebrate here, the authors also address some of the concerns associated with this technology. The most notable is how long the fuel cells can operate, and the authors report that after 5 months of operation the stacks were still producing power. The lifetime could be even longer in a real application, since the operating environment of a fuel cell rarely mimics the natural conditions under which these materials degrade.

They also discuss how these MFCs didn’t perform as well as some produced in other studies, but these were the first to be made from cheap, environmentally friendly materials. If anything, this research shows that such fuel cells can at least be functional, and are an excellent target for further research.

So we’ll have to wait for more research to see if this technology will actually take off, and given the timescale of this study it’s likely that we’ll be waiting quite some time. Even so, this is an important step on the road to completely sustainable living, as it shows that even our power sources could be made from completely environmentally friendly materials. Now we just have to hope people take notice. Let’s make sure they do!

Boredom isn’t Boring

What does Boredom do to us? Image source: https://www.linkedin.com/pulse/20140203092316-64875646-bored-at-work-here-s-what-to-do

We’re all VERY familiar with the feeling of ennui: that desire for mental stimulation combined with an inability to think of or find any satisfying activity. It’s something everyone experiences quite regularly, and since we’re so used to it you might be surprised to hear that the word “bored” only entered the English language in 1852. This was thanks to Charles Dickens, not because he bored people, but because he used it in his book “Bleak House” to describe Lady Dedlock’s feelings about her marriage.

Well, scientists have taken an interest! The study of boredom can actually be said to have officially begun in 1885, when Francis Galton published a short note in Nature titled “The Measure of Fidget”. This was an account of how a restless audience behaved during a scientific conference, but no ground-breaking conclusions or discoveries were made.

Just over a century passed with nothing really happening in the study of boredom. The area could be said to have become boring itself until Norman Sundberg and Richard Farmer of the University of Oregon published their “Boredom Proneness Scale (BPS)” in 1986. This was the first ever systematic way to measure boredom, and it involved a series of questions to determine how inclined a person was to the feeling of ennui. An article published in Nature earlier this month actually allows you to take the test.

However, the BPS has some widely acknowledged flaws, such as its inherent subjectivity and an inability to distinguish between trait and state boredom. These have actually been identified as two different things: the former is a general susceptibility to boredom, while the latter is the intensity of boredom felt in the moment.

While scientists are still working on improving the BPS, there are already some alternatives, such as the “Multidimensional State Boredom Scale (MSBS)”. This is a big improvement, as it attempts to determine how bored subjects feel at that moment, as opposed to asking about their habits and personality traits. However, one problem still remains in the study of boredom: a reliable technique for inducing it.

There are some rather obvious techniques like simply asking participants to do nothing for a period of time. This would certainly work for me, but it’s not particularly reliable since different people vary in how much their own thoughts can entertain them. Another horrifically dull and more reliable one involves clicking a mouse button to rotate a computer icon of a peg one quarter of a turn clockwise. Over. And. Over. AGAIN. Using techniques like these scientists are now reasonably well equipped to induce and determine a participant’s boredom, which means they can start studying what it actually does.

Boredom seems to have both positive and negative effects on our minds and bodies, with a number of studies already published on the matter. It has been identified that boredom can push people towards self-destructive and unhealthy behaviour. A tendency to smoke, drink, and take drugs has already been linked to levels of boredom, with a study involving South African youths showing a noticeable influence on substance use. Boredom has also been shown to increase both the desire for and consumption of snack food, which carries obvious risks.

But it’s not all bad! Boredom is also thought to enhance one of our most notable traits: our insatiable curiosity. Boredom could push us towards exploring new experiences and ideas, leading to an increase in innovation. You’ve probably found yourself doing something remarkably creative with very simple objects in an attempt to stop feeling bored. I once started making blu-tack sculptures.

The downside to this is that it can also push us to take more risks, some of which can end up hurting us. One study published in a 2014 issue of Science revealed that, when given the option, people would rather give themselves a small electric shock than be left alone with their thoughts for around 15 minutes. The number of shocks they chose varied between men and women, but most went for between 0 and 9, with one particular thrill-seeking outlier shocking himself 190 times. They likely chose to do this because it was the only available way to break up the tedium, and this same desire could explain why bored people turn to unhealthy behaviours.

So where does the science go next? Well there are still improvements to be made in how it’s studied, mainly in further refining the various measuring techniques. It might also be worth studying brain structures and chemistry to see if there are notable differences in people who score high on boredom scales and those who don’t. This could also help to understand why boredom is often correlated with other mental states.

It’s unfortunate that it can cause so many unhealthy habits, but knowing that it can also make us more creative is a nice silver lining. I think that’s something we should embrace, using the boredom to push us to create new and exciting things. I think from now on I’ll try to use my boredom to come up with new subjects for my posts, or to find new ways to improve my writing. Imagine the things you could accomplish if you harnessed this creativity-booster.

A Step Forward for Wearable Electronics

An artistic representation of Graphene. Source: http://3dprint.com/61659/graphene-ink-capabilities/

Research on flexible, wearable electronic devices is already well under way, with products such as the Wove Band attracting a great deal of attention. In fact, it’s a field of increasing research interest due to many potential applications. These include monitoring health and fitness, functional clothes, as well as many mobile and internet uses.

Such technology could have implications in several areas of life. These might include more effective and immediate monitoring of patients outside hospital, potentially reducing response times if something were to go wrong, and moving communications technology into an entirely new age. The smartphone as we know it could be a thing of the past once this technology takes off.

Given the plethora of uses and the high profile of the research, it’s no surprise that many materials have already been considered. Silver nanowires, carbon nanotubes, and conductive polymers have all been explored in relation to flexible electronics. Unfortunately, problems have been reported in each case, such as high manufacturing costs in the case of silver nanowires and stability issues for some polymers.

But fear not my fellow science enthusiasts! Another material has appeared to save the day. It’s one you’re probably quite familiar with by now – Graphene! This two-dimensional hexagonal array of carbon atoms has great potential in the field of flexible electronics due to its unique properties, which include great conductivity and stability. However, known production methods for the Graphene sheets that would be needed give structures with a rather high surface resistance, which is not ideal.

Luckily, the invention of conductive Graphene inks provided a way to overcome this problem, allowing for sheets of superior conductivity, greater flexibility, lighter weight, and lower cost. That sounds VERY good for a wearable, flexible electronic device. These inks can be prepared with or without a binder, a chemical that helps the ink stick to a surface. This brings both advantages and disadvantages: a binder can improve the sheet’s conductivity, but it also requires high-temperature annealing, which limits its use on heat-sensitive substrates such as paper and textiles.

Well, a new paper published in Scientific Reports in December claims to have found a production method that doesn’t require a binder and still gives a high conductivity. The research was conducted by scientists at the University of Manchester, United Kingdom, and it represents an important step towards making flexible Graphene-based electronics a reality. The production method first involves covering a surface with an ink containing Graphene nanoflakes, then drying it at 100 °C. This forms a highly porous coating, which is not ideal since it leads to high contact resistance and an uneven electron pathway.

The authors overcame this problem by compressing the dry coating, which led to a thin, highly dense layer of Graphene. This not only improved the adhesion of the nanoflakes, but also made the structure much less porous, improving its conductivity. It is also noted that greater compression led to higher conductivity values, the highest being 4.3 × 10^4 S/m. But the science didn’t end there! The authors then went on to test how flexible electronic components made from this material would perform in communications technology. Both transmission lines (TLs) and antennae were created from the Graphene sheets and tested in various scenarios.
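To connect that conductivity figure back to the “surface resistance” problem mentioned earlier, you can convert it into a sheet resistance by dividing by the coating thickness. The thickness below is a hypothetical value chosen purely for illustration, not one reported in the paper:

```python
# Hypothetical example: the thickness is an assumed value, not from the paper.
conductivity = 4.3e4      # S/m, the best value reported after compression
thickness = 10e-6         # m, an assumed 10-micrometre coating

sheet_resistance = 1 / (conductivity * thickness)  # ohms per square
print(f"Sheet resistance ≈ {sheet_resistance:.1f} Ω/sq")  # ≈ 2.3 Ω/sq for these numbers
```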

TLs are conductors designed to carry electricity or an electrical signal, and they are essential in any circuitry. The ones created here were tested in three positions: unbent, bent but not twisted, and bent and twisted. This was done to determine whether the material performed well in various positions; a necessity for a wearable, flexible device. It turns out the TLs performed well in all three positions, with the data showing only slight variations in each case.

The Graphene based antennae were also tested in various positions, both unbent and with increasing amounts of bending. In each case the antennae were found to function in the frequency range matching Wi-Fi, Bluetooth, WLAN, and mobile cellular communications. This is an excellent indication that this material could be ideal for use in wearable communications technology. It was also tested in a pseudo-real life scenario, with antennae being wrapped around the wrists of a mannequin. These results were also promising, showing that an RF signal could be both radiated and received.

So, you can hopefully see that this work represents a real step forward towards wearable electronic devices, as it shows that Graphene is truly a prime candidate. That said, there is still a great deal of work to do, such as incorporating all these components into a complete device and figuring out how to produce the technology on a commercial scale. There would also need to be more research to see if these Graphene sheets could be modified in some way to include applications outside of communications. But putting that aside, I’m quite excited about this research bringing us a little bit closer. Keep an eye out to see where it goes from here.

Sources:

  • Fuente, J. (2016). Properties Of Graphene. Graphenea. Retrieved 18 January 2016, from http://www.graphenea.com/pages/graphene-properties#.VpzceyqLSwV
  • Huang, G.-W. et al. Wearable Electronics of Silver-Nanowire/Poly(dimethylsiloxane) Nanocomposite for Smart Clothing. Sci. Rep. 5, 13971; doi: 10.1038/srep13971 (2015).
  • Huang, X. et al. Highly Flexible and Conductive Printed Graphene for Wireless Wearable Communications Applications. Sci. Rep. 5, 18298; doi: 10.1038/srep18298 (2015).
  • Matzeu, G., O’Quigley, C., McNamara, E., Zuliani, C., Fay, C., Glennon, T., & Diamond, D. (2016). An integrated sensing and wireless communications platform for sensing sodium in sweat. Anal. Methods, 8(1), 64-71. http://dx.doi.org/10.1039/c5ay02254a

Where are all the Aliens?

The Milky Way is a big place. Are we really the only ones here? Source: http://sites.psu.edu/vansonspace/2015/04/10/space-is-the-place/

Are we alone in the Universe? To this day it remains one of the most intriguing questions in science, and probably one of the most discussed by non-scientists everywhere. It has likely been around for quite some time, but it wasn’t until 1984, with the birth of the SETI (Search for Extraterrestrial Intelligence) Institute, that we started making meaningful strides towards finding an answer.

But despite its great public visibility and inherent curiosity factor, the institute has been pushed to the edges of scientific research. It has failed to attract any serious funding, and has received only small amounts of dedicated observation time on world-class telescopes.

Well, all that is about to change! Thanks to Russian entrepreneur Yuri Milner and physicist Stephen Hawking, SETI research will receive a total of $100 million in funding over the next decade. The project is known as “Breakthrough Listen”, and will allow state-of-the-art radio and optical surveys to take place on some of the world’s best telescopes. The project is actually supposed to start making observations some time this year!

So, now that we have the resources available to do some searching, the next question is – what do we search for? We ideally want to find a planet that shares characteristics with our own. That is, one with a rocky surface, of a similar size, orbiting a similar star, and a surface temperature that can allow for liquid water.

This aspect has not proven to be much of a problem, with observations, primarily from the Kepler Space Telescope, showing that the Milky Way contains around a billion planets that meet these specifications. But once we’ve identified such a planet, how do we go about searching for life?

Well, for it to be in any way detectable from a distance, life needs to have evolved to the point where it dominates the planet’s surface chemistry. This will actually change the composition of the atmosphere, creating so-called “biosignatures”: chemical indications of the presence of life.

An example is an atmosphere of at least 20% O2, since our own planet shows that such a composition can almost entirely be created by biological processes. But there is a very real risk of a false positive with any of these biosignatures, since there is always the possibility of a non-biological source. In the case of O2, the splitting of vaporised H2O by UV radiation could easily create such high levels.

This means that we need to find ways to either back up promising signatures or identify a false positive. For example, detecting methane (CH4) in the planet’s atmosphere as well as O2 would significantly strengthen the case for a biological origin. On the other hand, an atmosphere rich in steam would suggest that the splitting of H2O is the more likely source.

But what if we want to be more ambitious? What if, rather than searching for any form of life, we want to jump straight to searching for intelligence? There are a few options available to us here, one of which would be the detection of an intelligent, non-natural radio transmission. This is currently the main aim of the SETI program, and while the risk of a false positive is significantly lower than with biosignatures, it’s not without problems. The main one is that radio communication might be considered archaic by an advanced lifeform, so they might not even be using it.

It would also be possible to search for evidence of energy consumption, a necessity for an advanced civilization that seems impossible to conceal. There are many potential energy sources for a civilization with advanced technology, with nuclear fusion being a likely one. There is also the incredible concept of the “Dyson Sphere”, a megastructure surrounding a star to harvest the energy it emits. In either case the production of waste heat is inevitable, and would produce a detectable mid-infrared (MIR) signal.
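To see why that waste heat would show up in the mid-infrared specifically, Wien’s displacement law tells us where thermal emission peaks for a given temperature. The 300 K used below is just a representative assumption for a structure radiating at roughly room temperature:

```python
# Wien's displacement law: the peak wavelength of thermal (blackbody) emission.
WIEN_CONSTANT = 2.898e-3   # metre-kelvins

def peak_wavelength_microns(temperature_kelvin):
    """Wavelength (in micrometres) at which a blackbody's emission peaks."""
    return WIEN_CONSTANT / temperature_kelvin * 1e6

# A structure shedding waste heat at ~300 K (an assumed, room-temperature value)
print(f"{peak_wavelength_microns(300):.1f} µm")  # ≈ 9.7 µm, squarely in the mid-infrared
```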

But one final problem remains. What if, as so much sci-fi media suggests, biological life is only a brief stage for an evolving intelligence? What if the next logical step is the dominance of artificial, inorganic lifeforms? If so, we wouldn’t really know where to look. It is likely that they would not be found on a planet, as gravity is advantageous for emerging biological life but otherwise a nuisance. They would, however, still need to be close to a power source for energy reasons. A star seems the most likely source, so that at least gives us a place to start.

There is also the possibility that such intelligence might be broadcasting a signal in their own attempt to find out if they’re alone in the Universe. But if such an advanced civilization were to do such a thing, it is unlikely that our feeble organic brains would be able to detect or understand it.

So, it looks like this amazing question is no closer to being answered than when the effort first began in 1984, but that’s not really surprising since it’s quite a difficult question. However, given that SETI has just been given a new lease of life, it might have gotten a little bit easier. I hope we’ll be learning a lot about this in the coming decade, and who knows, we might actually find someone.

New Evidence of Internet Gaming Disorder

Various consoles and controllers. Source: http://www.telegraph.co.uk/news/science/science-news/9088262/Playing-video-games-improves-eyesight.html

Internet gaming is still a relatively new concept, yet it is one we are already very familiar with as it bounced into the limelight over the past decade. And I really mean it when I say “bounced”, with the market in China alone being estimated to be worth $12 billion!

There is also a growing amount of literature looking at how video games can affect both our physical and mental health, and it looks like online gaming may have brought about a new kind of mental illness.

This new condition is known as “Internet Gaming Disorder” (IGD), and it’s more than a simple enjoyment of online games. People with IGD play to the detriment of other areas of their life, neglecting their health, their schoolwork, and even their family and friends. They also experience withdrawal symptoms if they are prevented from getting their fix.

All that being said, the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) does not currently list IGD, stating that it’s a “condition warranting more clinical research” before it can be included. Well, this new research might be just what was asked for, as it provides new evidence of brain differences between people who do and do not have the disorder.

The study participants, all adolescent males between 10 and 19 years of age, were screened in South Korea, where online gaming is an even greater social activity than it is in the US. In fact, most research on this matter comes from young males across Asia, since that is where the disorder is most commonly found. The Korean government also supported the research, hoping to be able to identify and treat addicts.

The research was a collaboration between the University of Utah School of Medicine and Chung-Ang University in South Korea, and was published online in Addiction Biology. It involved taking MRI scans of all participants, 78 of whom were seeking treatment for IGD and 73 who were not.

What they found was that participants suffering from IGD showed hyperconnectivity between several pairs of brain networks, and a full list of the pairs is given in the paper. Some of these changes could help gamers respond to new information, whereas others are associated with being easily distracted and having poor impulse control.

One of the potentially beneficial changes was improved coordination between areas that process vision or hearing and the Salience Network, the part of the brain responsible for focussing a person’s attention on important events and preparing them to react. You can probably see why that would be useful to an online gamer, allowing them to dodge a hail of bullets or react to a charging foe.

According to author Jeffrey Anderson, M.D., Ph.D., this could lead to “a more robust ability to direct attention towards targets and recognise novel information in the environment”, and “could essentially help someone to think more efficiently”. But without follow-up studies to determine whether performance is actually improved, this remains a hypothesis.

A more worrying finding was that participants with IGD showed weaker coordination between the Dorsolateral Prefrontal Cortex and the Temporoparietal Junction than those without the disorder. These same changes are seen in patients with Schizophrenia, Down Syndrome, Autism, and in people with poor impulse control. It is thought that this could also lead to increased distractibility.

But despite all these findings it is currently unclear if chronic video game playing causes these changes in the brain, or if people with these differences are drawn to the gaming world. Much more research will be required before that question can be answered.

So should you spend less time in the virtual world of video games? Well, at this point we don’t really know. Cutting back might be good for you, but gaming might also bring more benefits than drawbacks. Either way, this is an area of research that is continuing to grow, and it’s certainly worth keeping an eye on. I know I will be.

Snowflake Science!

Example of a possible snowflake shape. Source: http://feelgrafix.com/group/snowflake.html

It honestly feels like Winter never really happened this year. I remember hearing rumours of snow on Christmas day sometime in November, and I must confess I got excited, even though weather predictions are known to be near impossible more than a few days in advance.

But sadly that snow never happened, and in England at least we’ve had to settle for a near constant Autumn. It’s true that we still have all of January and February for Winter to actually happen, but I think it’s unlikely that we’ll see any snow this year.

So since I still needed my snowy fix I decided to learn a bit about the formation of snow and how that knowledge can be of use to us. It may not be actual snow, but it might make your imaginary snow a bit more realistic.

Let’s start with what a snowflake actually is! It’s a pretty broad term, with a huge variety of structures qualifying as a snowflake. The only concrete part of the definition is that the structure consists of more than one snow crystal, a snow crystal being a single crystal of ice. These form in clouds when water vapour condenses into ice, a process which has two specific conditions for occurring.

The first is known as “Supersaturation”, which occurs when the amount of water vapour in the air exceeds the ordinary humidity limit. What does this mean? Well, at every temperature there is a maximum amount of water vapour that the air can support, with higher temperatures allowing for more water vapour.

If we cool a volume of air that’s already at 100% humidity, it then contains more water vapour than is stable and has become supersaturated. The excess water will then condense out, either into water droplets or directly into ice.
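To put a rough number on that, here’s a minimal sketch using the Tetens approximation for the saturation vapour pressure of water (a standard empirical formula; the temperatures below are just example values):

```python
import math

def saturation_vapour_pressure(temp_c):
    """Tetens approximation: saturation vapour pressure of water in kPa, temp in °C."""
    return 0.61078 * math.exp(17.27 * temp_c / (temp_c + 237.3))

# Example: air saturated at 5 °C is cooled to 0 °C without losing any water vapour.
vapour_present = saturation_vapour_pressure(5.0)   # what the air held at 100% humidity
vapour_limit = saturation_vapour_pressure(0.0)     # what it can hold after cooling

print(f"Relative humidity after cooling ≈ {100 * vapour_present / vapour_limit:.0f}%")
# ≈ 143% — the air is now supersaturated, and the excess must condense out.
```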

The second condition is “Supercooling”, or rather the lack thereof. Supercooling is when a substance remains in a liquid state below its freezing point. It is possible for pure water to remain liquid below 0 °C, as the thermal motion of the molecules prevents crystallisation. In fact, the temperature has to drop to around −42 °C before freezing will occur!

On the other hand, tap water will readily freeze at 0 °C due to the impurities it contains. These provide a surface for the molecules to cling to, reducing the effects of thermal motion. The scientific term for what these impurities provide is a “nucleation point”, a starting point for crystal growth. This also occurs in clouds when snow crystals form, as the many impurities such as dust and pollen particles provide nucleation points.

So! Now that we know what a snowflake is and the conditions for its formation, we can look at the process of crystal growth. It begins when the water molecules arrange themselves around the nucleation point. There are at least 14 known lattice structures for ice, but ice Ih (short for “ice one hexagonal”) is the most stable under ordinary atmospheric conditions, so it’s the most common form found in nature. In this arrangement the water molecules bond in the hexagonal lattice structure shown in Figure 1.

Figure 1: The hexagonal lattice structure of an ice crystal. Red spheres represent Oxygen atoms, and white spheres represent Hydrogen. Source: http://www.thenakedscientists.com/HTML/articles/article/science-of-snowflakes/

The growth then continues as shown in Figure 2, with “rough” areas filling in faster than “smooth” ones. Why do they do this? Well a rough surface is one with multiple binding sites available, as more surface molecules are exposed. This makes it easier for incoming molecules to bind in these locations, and this growth pattern defines the hexagonal shape of the initial crystal.

Figure 2: Diagram showing how additional water molecules bind as the ice crystal grows. Source: http://www.thenakedscientists.com/HTML/articles/article/science-of-snowflakes/

This crystal continues to grow as atmospheric water binds and becomes incorporated into the structure. However, from here on the growth is not uniform, with the corners growing fastest since they now offer the most exposed surface molecules. This is what causes the six “arms” that extend out from the corners of the central hexagon, and their size and shape will be determined by the ever changing conditions as the snowflake moves through the air.

The final shape of the snowflake will be determined by many factors, including temperature, humidity, and how those conditions varied during its formation. This makes it extremely unlikely that you’ll ever find two identical snowflakes, as the number of possible combinations and variations is truly staggering. It’s made even less likely when you consider that the majority of snowflakes will not be perfectly symmetrical, as different parts of the snowflake can experience different conditions as well.

Now, while this is all very interesting, what is the actual point of studying the complex formation of snowflakes? Given that snowflake formation was successfully simulated by a research team from Germany and London, it would be nice to know it’s not all for nothing! Well, it turns out that this knowledge, while not having many immediate applications, could be very useful in the future.

Crystals are used in many areas these days: semiconductor crystals for electronics, optical crystals for telecommunications, artificial diamonds for machining and grinding, and the list goes on. So by studying snowflakes we gain a deeper understanding of how crystals form and grow; knowledge that may help us form new and better types of crystals in the future.

Some more interesting, and perhaps more important, things we can learn are the principles behind self-assembling structures. While we humans usually make things by carving structures out of a block of material, nature often has structures assembling themselves from smaller components. This production method will likely become HUGELY important as the electronics industry moves towards ever smaller devices.

So now you can see that snowflakes are not only beautiful and amazingly scientific, but also potentially useful to us. Something I must confess I was unaware of before writing this post. Now, while all this doesn’t change the fact that IT HASN’T SNOWED YET (this makes me very sad), at least you can madly rant about the science when it does.
