
New Evidence of Internet Gaming Disorder

Various consoles and controllers. Source: http://www.telegraph.co.uk/news/science/science-news/9088262/Playing-video-games-improves-eyesight.html

Internet gaming is still a relatively new concept, yet it is one we are already very familiar with, having bounced into the limelight over the past decade. And I really do mean “bounced”, with the market in China alone estimated to be worth $12 billion!

There is also a growing body of literature looking at how video games can affect both our physical and mental health, and it looks like online gaming may have brought about a new kind of mental illness.

This new condition is known as “Internet Gaming Disorder” (IGD), and it's more than a simple enjoyment of online games. People with IGD play to the detriment of other areas of their life, neglecting their health, school work, and even their family and friends. They also experience withdrawal symptoms if they are prevented from getting their fix.

All that being said, the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) does not currently list IGD as a formal diagnosis, describing it instead as a “condition warranting more clinical research” before it can be included. Well, this new research might be just what was asked for, as it provides new evidence of brain differences between people who do and do not have the disorder.

The study participants, all adolescent males between 10 and 19 years of age, were screened in South Korea, where online gaming is an even greater social activity than it is in the US. In fact, most research on the disorder involves young males from across Asia, as this is where it is most commonly found. The Korean government also supported the research, hoping to be able to identify and treat addicts.

The research was a collaboration between the University of Utah School of Medicine and Chung-Ang University in South Korea, and was published online in Addiction Biology. It involved taking MRI scans of all participants, 78 of whom were seeking treatment for IGD and 73 of whom were not.

What they found was that participants suffering from IGD showed hyperconnectivity between several pairs of brain networks, and you can find a list of all the pairs here. Some of these changes could help gamers respond to new information, whereas others are associated with being easily distracted and having poor impulse control.

One of the potentially beneficial changes was improved coordination between areas that process vision or hearing and the Salience Network. This is the area of the brain responsible for focussing a person's attention on important events and preparing them to react. You can probably see why that would be useful to an online gamer, allowing them to dodge a hail of bullets or react to a charging foe.

According to author Jeffrey Anderson, M.D., Ph.D., this could lead to “a more robust ability to direct attention towards targets and recognise novel information in the environment”, and “could essentially help someone to think more efficiently”. But without follow-up studies to determine if performance is actually improved, this remains a hypothesis.

A more worrying finding was that participants with IGD showed weaker coordination between the Dorsolateral Prefrontal Cortex and the Temporoparietal Junction than those without the disorder. The same changes are seen in patients with schizophrenia, Down syndrome, and autism, as well as in those with poor impulse control. It is thought that this could also lead to increased distractibility.

But despite all these findings it is currently unclear if chronic video game playing causes these changes in the brain, or if people with these differences are drawn to the gaming world. Much more research will be required before that question can be answered.

So should you spend less time in the virtual world of video games? Well, at this point we don't really know. It might be good for you, but the drawbacks might also outweigh the benefits. Either way, this is an area of research that is continuing to grow, and it's certainly worth keeping an eye on. I know I will be.


The Age of Antibiotics Could Soon be Over

Antibiotic awareness week has been given a whole new meaning this year due to one particularly eye-opening discovery. We have been slowly emptying our armory of antibiotics for a while now, with few new examples being developed in the past two decades and new infectious diseases being discovered almost every year. We’re also living in a time when diseases are evolving and becoming increasingly resistant to antibiotics, and it looks like they’ve now breached our last line of defense.

A report in The Lancet Infectious Diseases has just revealed the existence of bacteria with resistance to the most aggressive of our antibiotics: a drug known as Colistin. Colistin has had a rough history, being deemed too toxic for human use not long after its discovery due to the damage it caused to kidney cells. But it made a comeback in the early 2000s when more drug resistant bacteria began to emerge, and kidney damage started to seem like the lesser of two evils. And by 2012, the World Health Organisation had classified Colistin as critically important for human health.

But, unknown to many medical professionals in the West, Colistin was also being used in China. While it was never approved for human use there (understandable, considering its toxicity), it was approved for use in animals. It has been known for quite some time that feeding animals low doses of antibiotics fattens them up, and local pig farmers took to using large quantities of Colistin for this very reason.

This near-constant use of Colistin meant that bacteria were being repeatedly exposed to it; long enough for them to learn how to fight back. Colistin resistance has occurred in the past, but the relevant gene was found in the chromosomal DNA, and could not be passed on to non-resistant bacteria. But these guys were a cut above the rest. This time the resistance gene, now dubbed MCR-1, occurred on a plasmid: a circular loop of DNA, separate from the main chromosome, that can be passed between bacteria in a process called horizontal gene transfer. This is outlined in the graphic below.

The process of horizontal gene transfer. Source: http://www.bbc.co.uk/news/health-34857015

This means there is now potential for the resistance gene to end up in the DNA of many different species of bacteria, and it has already been found in some known to cause infections in humans, such as E. coli and Klebsiella pneumoniae. Now, this wouldn't be so bad if the gene could be effectively quarantined, but the researchers report that it is already widespread in southern China. The study found the gene in 15% of meat samples and 21% of animals tested between 2011 and 2014, as well as in 16 of 1,322 samples taken from humans. To make matters worse, there is also some evidence that the gene has managed to spread outside of China, into countries such as Laos and Malaysia.

If this gene continues to spread, which is highly likely since reports state that it has an extremely high transfer rate, then we could see the emergence of “pan-resistant bacteria” – bacteria resistant to all known methods of treatment. This is a very frightening prospect for modern medicine, and if MCR-1 combines with other resistance genes, medicine could be launched back into the dark ages. As Professor Timothy Walsh told BBC News, “At that point if a patient becomes seriously ill, say with E. coli, then there is virtually nothing you can do”.

But the apocalypse is not upon us yet! Although the prospect of the MCR-1 gene going global seems to be a case of when, not if, we still have time to prevent a complete breakdown of modern medicine if we act fast enough. There are even new antibiotics currently being researched, such as Teixobactin, that could help delay the onset of the post-antibiotic era. But this is not something we should rely on, as such drugs are still a long way from being ready for medical use.

This is one hell of a wake-up call, and the authors of the report know it, stating that their findings “emphasise the urgent need for coordinated global action” in the fight against antibiotic resistance. Whether it's through the discovery of new antibiotics or entirely new methods of treatment, we need to work together to restock our armory and find new weapons to combat this new breed of superbug. If not, deaths from routine surgeries and minor infections could become commonplace once again due to the lack of treatment options. So let's hope our scientists are on the case! They have quite the challenge ahead.


vOICe: Helping People See with Sound

A demonstration of the vOICe experiment. Photo credit: Nic Delves-Broughton/University of Bath. Source: http://www.theguardian.com/society/2014/dec/07/voice-soundscape-headsets-allow-blind-see

It seems like there is an almost constant stream of awesome new technology these days, and there has been a rather fantastic addition! A device is being researched at both the California Institute of Technology (Caltech) in the US and the University of Bath in the UK, with a very noble goal in mind: to build better vision aids for the blind.

Now, it has long been known that blind people often rely on sound as a substitute for sight, with some individuals' sense of hearing heightened to the point of being able to use echolocation. Well, it turns out that sound can also be designed to convey visual information, allowing people to form a kind of mental map of their environment. This is achieved by a device known as “vOICe”: a pair of smart glasses capable of translating images into sounds.

The device itself consists of a pair of dark glasses with a camera attached, all of which is connected to a computer. The system can then convert the pixels in the camera's video feed into a soundscape that maps brightness and vertical location to an associated pitch and volume. This means that a bright cluster of pixels at the top of the frame will produce a loud sound with a high pitch, and a dark area toward the bottom will give the opposite: a quiet sound with a low pitch.
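To make that mapping more concrete, here's a minimal sketch in Python of a vOICe-style encoder. Bear in mind that this is my own illustrative guess, not the device's actual algorithm: the frequency range, scan duration, and simple sine-wave synthesis are all assumptions.

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=22050,
                        f_min=200.0, f_max=4000.0):
    """Toy vOICe-style encoder.

    image: 2D numpy array with values in [0, 1], row 0 = top of frame.
    Columns are scanned left to right over `duration` seconds; each row
    is assigned a pitch (top = high, bottom = low) and each pixel's
    brightness sets the volume of its tone.
    """
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate

    # Top rows get high frequencies, bottom rows get low ones.
    freqs = np.linspace(f_max, f_min, n_rows)

    sound = []
    for col in range(n_cols):
        tone = np.zeros(samples_per_col)
        for row in range(n_rows):
            amplitude = image[row, col]      # bright pixel -> loud tone
            if amplitude > 0:
                tone += amplitude * np.sin(2 * np.pi * freqs[row] * t)
        sound.append(tone / n_rows)          # crude normalisation
    return np.concatenate(sound)

# Example: a bright blob in the top-left corner should produce a loud,
# high-pitched sound at the start of the one-second sweep.
frame = np.zeros((16, 16))
frame[0:4, 0:4] = 1.0
soundscape = image_to_soundscape(frame)
```

Flipping the `freqs` array (so the top of the frame maps to low pitches) would give the kind of “reversed” coding that, as described below, volunteers found much harder to use.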

But what is really impressive about this technology is that this soundscape appears to be intuitively understood, requiring little to no training at all! In a test performed by researchers Noelle Stiles and Shinsuke Shimojo at Caltech, blind people with no experience of using the device were able to match shapes to sounds just as well as those who had been trained, with both groups performing 33% better than pure chance. In contrast, when the coding was reversed (high position = low pitch, bright pixels = quiet, etc.) volunteers performed significantly worse. So how did they achieve such an intuitive system?

Well, it began with Stiles and Shimojo working to understand how people naturally map sounds to other senses. Both blind and sighted volunteers were involved in the system's development, with sighted people asked to match images to sounds, and blind volunteers asked to do the same with textures. The pattern of choices during these trials directly shaped vOICe's algorithm, and appeared to produce an intuitive result. This seemed to surprise the researchers themselves, who wrote that “the result that select natural stimuli could be intuitive with sensory substitution, with or without training, was unexpected”.

All this had me excited and itching to learn more, and it was then that I found that research at the University of Bath further emphasised the importance of having such an intuitive system. Here the researchers claim that some users are exceeding the level of visual performance often achieved by more invasive restoration techniques, such as stem cell implants or prosthetics. While people who receive such surgery are rarely able to make out more than abstract images, some long-term users of vOICe claim to form images in their brain somewhat similar to sight, as their brains become rewired to “see” without the use of their eyes.

Michael J Proulx, an associate professor at the university’s department of psychology, gave the example of a man in his 60s who had been born blind. Proulx reports that he initially thought the idea was a joke, too sci-fi to be real, but “after 1 hour of training, he was walking down the hall, avoiding obstacles, grabbing objects on a table. He was floored by how much he could do with it”. He also reports that after a few weeks of use, some people were able to achieve levels of vision of 20/250. To put that into perspective for you, a short-sighted person who removed their glasses would have a level around 20/400. That’s right, this tech could allow the completely blind to see better than those who are still partially sighted! That’s something to wrap your head around.

But slow down there with your excitement! While this technology is truly revolutionary, it is worth pointing out that there is a huge gulf between distinguishing patterns in a lab environment and using vOICe to observe and understand the real world. For example, we don't know how a busy street, with its already large amount of visual and auditory information, would affect both the device's signals and how they are interpreted. But there is no denying that this work represents an important step on the road to developing better vision aids, and given that the World Health Organisation estimates there are 39 million blind people in the world, this technology could bring about a dramatic increase in quality of life across the globe.

But that's not all this technology could do, as the results are challenging the concept of what being able to “see” actually means. This is illustrated by a quote from Shimojo at Caltech, who mentions that “our research has shown that the visual cortex can be activated by sound, indicating that we don't really need our eyes to see”. This has profound implications for the field of neuroscience, and has led to another study beginning at the University of Bath to examine exactly how much information is required for a person to “see” in this way. This could not only lead to optimisation of this technology, but to a deeper understanding of how the human brain processes sensory information.

Now I don’t know about you, but I remember when stuff like this was considered to be firmly stuck in the realm of science fiction, and the fact that such talented scientists keep bringing it closer to reality still surprises me. Combine this with an incredible rate of progress, and there really is no way of knowing what futuristic tech they’ll come up with next. This can make keeping up with it all one hell of a challenge, but fear not, my scientific friends! I shall remain here to shout about anything new that comes along.


DNA is Unstable! Luckily your Cells can handle that.

Another Nobel Prize story?! DAMN RIGHT! This time it's the prize for chemistry, and Tomas Lindahl, Paul Modrich, and Aziz Sancar will collectively bask in the glory for their outstanding work in studying the mechanisms of DNA repair.

Given the billions of cell divisions that have occurred in your body between conception and today, the DNA that is copied each time remains surprisingly similar to the original created in the fertilised egg that you once were. Why is that strange? Well, from a chemical perspective it should be impossible, as all chemical processes are subject to random errors from time to time. On top of that, DNA is subjected to damaging radiation and highly reactive substances on a daily basis. This should have led to chemical chaos long before you even became a foetus!

Now, I would hope that's not the case for you, so how do our cells prevent this descent into madness? I'll tell you! It's because DNA is constantly monitored by various proteins that all work to correct these errors. They don't prevent the damage from occurring; they just hang around waiting for something to fix, and all three of the winning scientists contributed to our understanding of how our cells achieve this. So! Where do we begin?

A good place to start would be a brief description of the structure of DNA, as this will make things much clearer when we start discussing the research. DNA is primarily a chain of nucleotides, which are themselves made up of three components: a deoxyribose sugar, a phosphate group, and a nitrogenous base. These components are shown bonded together in Figure 1. It is also worth noting that there are four possible bases, each with a slightly different structure, and the one shown in the image is specifically Adenine. The others are known as Thymine, Cytosine, and Guanine, and all attach to the sugar in the same place. The two negative charges on the phosphate group allow it to form another bond to the adjacent nucleotide, and this continues on to form a long chain. Two separate chains are then joined together as shown in Figure 2, and voila! A molecule of DNA is formed!

Figure 1: The basic components of DNA. Source: http://pmgbiology.com/2014/10/21/dna-structure-and-function-igcse-a-understanding/
Figure 2: Representation of how the two chains of Nucleotides bond together to form a molecule of DNA. Source: http://www.d.umn.edu/claweb/faculty/troufs/anth1602/pcdna.html
Figure 3: A comparison of Cytosine and its methylated equivalent. Source: http://blog-biosyn.com/2013/05/15/dna-methylation-and-analysis/

Now that we have a basic understanding of the structure of DNA, the research should make a hell of a lot more sense, and it begins with Tomas Lindahl. In the 1960s, Lindahl found himself asking a question: how stable is our DNA, really? At the time, the general consensus among scientists was that it was amazingly resilient. I mean… how else could it remain so constant? If genetic information was in any way unstable, multicellular organisms like us would never have come into existence. Lindahl began his experiments by working with RNA, another molecule found in our cells with a lot of structural similarities to DNA. What was surprising was that the RNA rapidly degraded during these experiments. Now, it was known that RNA is the less stable of the two molecules, but if it was destroyed so easily and quickly, could DNA really be all that stable? Continuing his research, Lindahl demonstrated that DNA does, in fact, have limited chemical stability, and can undergo many reactions within our cells. One such reaction is methylation, in which a CH3 (methyl) group is added on to one of the bases in the DNA strand. The difference this causes is shown in Figure 3, and the reaction can occur with or without the aid of an enzyme. It will become relevant later on, as will the fact that it changes the shape of the base, affecting how other proteins can bind to it. All of these reactions can alter the genetic information stored in DNA, and if they were allowed to persist, mutations would occur much more frequently than they actually do.

Realising that these errors had to be corrected somehow, Lindahl began investigating how DNA was repaired, and by 1986 he had pieced together a molecular image of how “base excision repair” functions. The process involves many enzymes (and I don't have the time or patience to describe them all), but a certain class known as “DNA glycosylases” is what actually breaks the bond between the defective base and the deoxyribose sugar, allowing the base to be removed. Our cells contain many enzymes of this type, each of which targets a different type of base modification. Several more enzymes then work together to fill the gap with the correct, undamaged base, and there we have it! A mutation has been prevented. To help you visualise all this, you'll find a graphical representation below in Figure 4.

Figure 4: Graphical representation of the process of Base Excision Repair. Source: http://www.nobelprize.org/nobel_prizes/chemistry/laureates/2015/popular-chemistryprize2015.pdf

But the science doesn't end there, folks! Remember, there were three winners, the second of whom is Aziz Sancar, who discovered another method of DNA repair. This one is called “nucleotide excision repair”, and involves the removal of entire sets of nucleotides, rather than individual bases. Sancar's interest was piqued by one phenomenon in particular: when bacteria are exposed to deadly doses of UV radiation, they can suddenly recover if exposed to visible blue light. This was termed “photoreactivation” for… obvious reasons. He was successful in identifying and isolating the genes and enzymes responsible, but it later became clear that bacteria had a second repair mechanism that didn't require exposure to light of any kind. But Sancar wasn't about to let these bacteria out-fox him and, after more investigation, he managed to identify, isolate, and characterise the enzymes responsible for this process as well. The bacteria were no match for his chemical prowess!

“But how does it work?!” I hear you shout. Well, calm the f**k down and I'll tell you! UV radiation can be extremely damaging, and can cause two adjacent Thymine bases in a DNA strand to directly bind to each other, which is WRONG! A certain endonuclease enzyme, known as an “excinuclease”, is aware of this wrongness, and decides that the damage must be fixed. It does this by making an incision on each side of the defect, and a fragment roughly 12 nucleotides long is removed. DNA polymerase and DNA ligase then fill in and seal the gap, respectively, and now we have a healthy strand of bacterial DNA! Sancar later investigated this repair mechanism in humans in parallel with other research groups, and while it is much more complicated, involving many more enzymes and proteins, it functions very similarly in chemical terms. You want a picture to make it easier? You'll find it below in Figure 5!

Figure 5: Graphical representation of Nucleotide Excision Repair. Source: http://www.nobelprize.org/nobel_prizes/chemistry/laureates/2015/popular-chemistryprize2015.pdf
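If you prefer code to diagrams, here's a toy sketch in Python of the cut-and-patch procedure just described. To be absolutely clear, this is my own illustration, not real bioinformatics: actual excision repair acts on a chemically damaged double helix via a suite of enzymes, and the lowercase damage marker and fixed 12-nucleotide window here are simplifying assumptions.

```python
# Toy illustration of nucleotide excision repair on strings of "DNA".
# In reality, damage recognition is chemical, not a substring search.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def excision_repair(damaged, template, window=12):
    """Excise a `window`-nucleotide fragment around a thymine dimer
    (marked here as lowercase 'tt') and resynthesise it from the
    complementary template strand."""
    pos = damaged.find("tt")            # the excinuclease spots the defect
    if pos == -1:
        return damaged                  # nothing to repair
    start = max(0, pos - (window - 2) // 2)
    end = min(len(damaged), start + window)   # incisions flanking the damage
    # DNA polymerase fills the gap using the template strand...
    patch = "".join(COMPLEMENT[b] for b in template[start:end])
    # ...and DNA ligase seals the new fragment into place.
    return damaged[:start] + patch + damaged[end:]

healthy  = "ATGCGTTACGGATTCAGTACGA"
template = "".join(COMPLEMENT[b] for b in healthy)   # complementary strand
damaged  = healthy[:12] + "tt" + healthy[14:]        # UV-induced dimer
assert excision_repair(damaged, template) == healthy
```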

The final recipient of the Nobel Prize this year was Paul Modrich, who identified YET ANOTHER repair system (there are loads, you know), which he named the “mismatch repair” mechanism. Early in his career, Modrich was examining various enzymes that affect DNA, eventually focussing on “Dam methylase”, which couples methyl groups to DNA bases (I TOLD YOU THAT REACTION WOULD BE RELEVANT!). He showed that these methyl groups could basically behave as labels, helping restriction enzymes cut the DNA strand at the right location. But, only a few years earlier, another scientist called Matthew Meselson had suggested that they also indicate which strand to use as a template in DNA replication. Working together, these scientists synthesised a virus with DNA that had incorrectly paired bases, and methylated only one of the two DNA strands. When the virus infected the bacteria and injected its DNA, the mismatched pairs were corrected by altering the unmethylated strand. It would appear that the repair mechanism recognised the defective strand by its lack of methyl groups. Does it work that way in humans? Probably not. Modrich did manage to map the mismatch repair mechanism in humans, but DNA methylation serves many other functions in human cells, particularly those to do with gene expression and regulation. It is thought that strand-specific “nicks” (a missing bond between a phosphate group and a deoxyribose sugar) or ribonucleotides (nucleotide components of RNA) present in DNA may direct repair, but the exact mechanism remains to be found.
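The decision rule at the heart of that experiment (trust the methylated strand, rewrite the unmethylated one) is simple enough to sketch in a few lines. Again, this is only my illustration of the logic, not Modrich's actual biochemistry; in real cells the machinery excises and resynthesises a whole stretch of the new strand rather than flipping individual bases.

```python
# Toy sketch of methylation-directed mismatch repair: the unmethylated
# strand is assumed to be the newly copied one, so any mismatch is fixed
# by rewriting that strand to pair correctly with the methylated one.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def mismatch_repair(methylated, unmethylated):
    """Correct the unmethylated (newly synthesised) strand wherever the
    two strands fail to base-pair properly."""
    repaired = []
    for old_base, new_base in zip(methylated, unmethylated):
        if COMPLEMENT[old_base] != new_base:       # mismatched pair
            repaired.append(COMPLEMENT[old_base])  # trust the old strand
        else:
            repaired.append(new_base)
    return "".join(repaired)

# The G at position 4 is wrongly paired with a T; repair restores the C.
print(mismatch_repair("ATGGACT", "TACTTGA"))  # -> TACCTGA
```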

Figure 6: Structure of Olaparib. Source: http://www.reagentsdirect.com/index.php/small-molecules/small-molecules-1/olaparib/olaparib-263.html

But why should we care? Granted, it is nice to know this stuff (at least I think so), but what can this information be used for? Well, it actually has applications within the world of medicine, as errors in repair mechanisms can often lead to cancer. In many forms of cancer these mechanisms have been at least partially turned off, but the cells are also heavily reliant on the mechanisms that remain active. As we mentioned earlier, a total lack of these mechanisms leads to chemical chaos, and that would cause the cancer cells to simply die. This has led to drugs designed to inhibit the remaining repair systems, slowing or even stopping cancer growth entirely! One such drug is Olaparib, whose structure you can see in Figure 6. This drug functions by inhibiting two specific proteins (PARP-1 and PARP-2), which are integral in detecting certain flaws in replicated DNA and directing repair proteins to the site of damage. Cancer cells treated with this drug have been shown to be more sensitive to UV radiation, making one form of treatment much more effective.

And with that, we bring our Nobel Prize stories for this year to an end! I think it’s safe to say that the work described here deserved a prize of some sort, as it not only takes a lot of skill and dedication, but it has led to new medical treatments and a MUCH greater understanding of how our DNA behaves. Have you enjoyed our time spent on the science of the Nobel Prize? DAMN RIGHT YOU HAVE. O_O


Nobel Prize! Drugs to take on parasites bring home the award!

As you may or may not have heard, the winners of the Nobel Prize in Physiology or Medicine were announced today, and a total of three scientists received the award: Satoshi Ōmura (a Japanese microbiologist), William C. Campbell (an expert on parasite biology), and Youyou Tu (a Chinese medical scientist and pharmaceutical chemist). But more important than who they are is what they achieved, and each of these great minds contributed immensely to the treatment of infections caused by some nasty parasites.

Both Ōmura and Campbell contributed to the discovery of a new drug that is extremely effective in killing microfilariae, the larvae of the parasitic worm that causes Onchocerciasis, or River Blindness as it might be known to us laymen. The story begins with Ōmura, described as an expert in isolating natural products, deciding to focus on a particular group of bacteria known as Streptomyces. These bacteria are known to produce many compounds with antibacterial properties, including Streptomycin, the drug initially used to treat Tuberculosis. Ōmura, using his biological powers, successfully isolated and cultivated new strains of Streptomyces, and from thousands of cultures he selected the 50 best candidates for further analysis. One of these turned out to be Streptomyces avermitilis.

Skip forward a bit, and we see that Campbell acquired some of these samples and set about exploring their efficacy. He was able to show that a component produced by Streptomyces avermitilis was very efficient against parasites in both domestic and farm animals, which he then purified and named Avermectin. Given these findings, it was initially used as a veterinary drug, but subsequent modification at the molecular level (an addition of only two hydrogen atoms!) gave us the “wonder drug” Ivermectin. This was then shown to out-perform the previously used DEC (Diethylcarbamazine for the chemistry nerds), primarily due to its lack of side effects such as inflammation. This makes the drug extremely safe for human use, allowing it to be administered by non-medical staff and even individuals in small rural communities with no hospital experience at all (provided some very basic training). This is what makes the drug so special, as it can be safely used in less developed parts of Africa and South America, where River Blindness is most common and advanced medical care may be unreachable or unaffordable.

The other major advancement in this field worthy of the Nobel Prize was the work of Youyou Tu, who developed an effective treatment for another well-known parasitic infection: Malaria! Inspired by traditional Chinese medicine, she identified an interesting extract from the plant Artemisia annua, or as you may know it, Sweet Wormwood. Despite initial results appearing inconsistent, Tu revisited the literature and found clues that led to the successful extraction of the active component, Artemisinin. She then became the first to demonstrate that this compound is highly effective against Malaria, killing the parasite even at early stages of its development. While the precise mechanism by which Artemisinin achieves this is still debated, the leading hypothesis is that the drug forms a highly reactive compound within the Malaria parasite that is capable of irreversibly modifying and damaging proteins and other important molecules, and the parasite goes down!

The consequences of these discoveries have been felt across the globe, but the countries most affected by these diseases stand to gain the most. River Blindness is nowhere near the huge problem it used to be, with treatment strategies moving away from control and towards eradication and elimination. This could be MASSIVE for the global economy, with estimates suggesting a potential saving of US$1.5 billion over a few decades! Eradication could also be great for local economies, as such serious illnesses affect workforce attendance, whether through actual infection or having to care for a relative. This shrinking of the workforce can lead to economic downturn and further unemployment, and when combined with the more direct costs of treatment, the losses can be huge. Malaria alone is thought to cost Africa around US$12 billion a year in lost GDP, and continues to slow economic growth by more than 1% each year. Just imagine what eliminating these diseases could do for these countries. Not only could their economies rise to a more globally competitive level, it could also lead the way to alleviating the more poverty-stricken areas. Families would be freer to go out and earn a living, with no need to worry about potential infection or having to care for sick family members. This could afford more food, better healthcare, and leisure activities, drastically increasing quality of life. Granted, it would also cause significant population growth in areas with already high birth rates, and the current food crises would not be helped by this, but these problems would be easier to control and solve in a disease-free society.

That is not to say there are no concerns associated with disease eradication, although it is extremely unlikely that these would outweigh the benefits. It could be argued that the process of natural selection would be halted, as these diseases weed out weaker immune systems in the population. But with advances in technology and medicine already allowing new treatments for both genetic and infectious diseases, the causes of any such effect are already present across the globe, so why should it influence our decision in this case? I mean, Malaria was eradicated in the US many years ago, and there have been no obvious downsides. You could also look at the example of smallpox, whose eradication not only saves the world around US$1.35 billion a year, but has had no clear effect on our immune systems. Even if it were to have such an effect, it would take many thousands of years for the change to occur, and it is quite possible that our technology and medical treatments will have advanced enough to counter it. While this question does remain to be answered, there is no evidence of a decline in our immune systems, and I think it is safe to say that the benefits of eliminating these diseases would go way beyond the realm of public health. So should we let a completely hypothetical downside influence our decision? Answer! No we shouldn't.

Sources:

The Key Publications mentioned by the Nobel Assembly at Karolinska Institutet:

  • Burg et al., Antimicrobial Agents and Chemotherapy (1979) 15:361-367.
  • Egerton et al., Antimicrobial Agents and Chemotherapy (1979) 15:372-378.
  • Tu et al., Yao Xue Xue Bao (1981) 16:366-370 (in Chinese).