Radio Freethinker

Vancouver's Number 1 Skeptical Podcast and Radio Show

Author Archive

A Mountain of Data can mean Anything

Posted by Jenna Capyk on December 20, 2011

Although we hear about DNA sequencing technology in everything from forensic television shows to descriptions of new scientific discoveries, the implications of this technology, and what it demands of scientists, are not always well understood. The power and limitations of all of the sequencing going on in labs around the globe were recently brought to my attention through one of my research projects, and I’d like to provide a breakdown of the data overflow we have arguably been reluctant to acknowledge as a scientific community.

What is genome sequencing?
As most of us know, all of the “blueprints” to build a human being are contained in our DNA. DNA is a long, string-like molecule made of four different components or bases, which are usually abbreviated A, C, G, and T. You can think of this like a chain built of different coloured links that can occur in any order. The order, or sequence, of these different pieces holds the coded information. When we talk about genome sequencing, we’re talking about determining the sequence (written out as a string of As, Cs, Gs, and Ts) of all of the DNA in a human cell (or a plant, bacterial, squid, or any other kind of cell). Just to give you an idea of scale, the E. coli genome would have about 4.6 million letters if you wrote them out, while the human genome corresponds to a list of about 3.2 billion As, Cs, Gs, and Ts.
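
The “string of letters” idea maps directly onto how sequences are handled computationally: a genome is stored as one long text string over the alphabet A, C, G, T. A minimal sketch in Python (the fragment shown is made up for illustration):

```python
# A genome is just a (very long) string over the alphabet {A, C, G, T}.
genome_fragment = "ATGGCGTACGTTAGC"  # a made-up 15-letter fragment

# Tally how often each base occurs in the fragment.
counts = {base: genome_fragment.count(base) for base in "ACGT"}
print(counts)  # {'A': 3, 'C': 3, 'G': 5, 'T': 4}

# Real genomes are the same thing at scale: ~4.6 million letters for
# E. coli, ~3.2 billion for a human.
print(len(genome_fragment))  # 15
```

The same string operations that work on this 15-letter toy work, in principle, on the 3.2-billion-letter human genome; the difficulty is purely one of scale.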

What does this technology allow us to do scientifically? (Bioinformatics)
Some of the applications of DNA sequencing are obvious. As we all know, the sequence of an individual’s DNA is completely unique to that person (unless they have an identical twin), and your DNA sequence can be used to specifically identify you. Because our DNA holds the information for our biological makeup, it also holds the information about any congenital diseases, or perhaps risk factors for other conditions that also have environmental factors. As genome sequencing becomes more and more accessible to the average individual there are issues with privacy and other policy-related questions that we have to think about.

Genome sequences also represent an entirely different treasure trove of information to be mined by scientists. Evolution proceeds by random DNA mutation events that eventually lead to speciation and the amazing variety of beings we know in our world today. This means that by comparing the DNA of different species, we can trace their evolutionary trajectory and classify their genetic level of relatedness. We’ve all heard that we are very closely “related” to the chimpanzee, for example. Genome sequencing technology allows us to evaluate these types of relationships for all biological entities. This type of analysis can be valuable in a lot of research contexts.
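
One crude way to put a number on “relatedness” is percent identity: line up two sequences and count the positions that match. Real comparisons use alignment algorithms that handle insertions and deletions, but for equal-length toy sequences the idea boils down to this (sequences invented for illustration):

```python
def percent_identity(seq_a, seq_b):
    """Percentage of positions at which two equal-length sequences match."""
    assert len(seq_a) == len(seq_b), "toy version: sequences must be equal length"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Two made-up sequences differing at 2 of 10 positions:
print(percent_identity("ATGGCGTACG", "ATGACGTTCG"))  # 80.0
```

Closely related species score high by measures like this across most of their genomes; distantly related ones score progressively lower.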

The problems with too much data
This all sounds really great. Lots of data, lots of information: awesome. Right? The problem is that when I say lots of data, I mean LOTS of data. If you’ll remember, the bacterial E. coli genome has several million base pairs. As more and more genomes are sequenced (mostly bacterial at this point), this number is multiplying rapidly. We’re talking about information for billions upon billions of “letters” in DNA sequences. This amount of data can be difficult to process. The problem is basically being able to see the “signal” through the “noise”. With this much data, it is hard to look at all of it at once. How do we pick out what is significant? What is normal? What is connected?

Obviously analysis like this can’t be done manually, so researchers have developed tools to look at this bioinformatic data. Many of these tools have been adopted as standard tools in this field. On the one hand, adopting standardized methods allows datasets to be processed more quickly: it’s like having a known routine that you perform to get out your answer. It also allows different scientists to compare results produced in different laboratories. If everyone were doing their own thing in isolation it would be hard to advance any science.

The problem with standardized tools, however, is that the challenges that arise in biological research tend not to be “standard” in nature. A lot of the time, different methods are required to approach each unique problem. Perhaps more important than this, many of the most popular biological data processing tools were designed at a time when less data was available. As more and more sequence information becomes available these programs remain powerful tools, but greater care needs to be taken with how they are applied. When people get too comfortable using a standard set of tools in a pre-determined way, they might not take a close enough look at how well those tools are doing the job. This automatism can lead to major misunderstandings about the biological world. If we are going to go to all the trouble of generating so much data, we should do our best to listen to what the data can tell us.

An example: a recent project
As a more concrete example, I’ve been working for the past four and a half years on my PhD in biochemistry, and most of this time has been spent studying a specific type of protein. Reading the literature, I had a specific understanding of these proteins. This was the same understanding, in fact, as that of most scientists who worked with them: they only occurred in bacteria, they only performed a specific reaction, and most of them had a second subunit. The bottom line is that there was a certain “stereotype” for what these proteins were assumed to be, and everything else was thought to be an outlier. I like to think of this like an alien looking at one person on the street and thinking that all people have an umbrella, red hair, and are kind of fat. In reality, of course, this describes only a small subset of people.

For years people had been using the standard tools to look at the sequences for these proteins in those vast databases. How this works: you plug in a sequence and the program spits out a bunch of things that look like it. Because there was so much data, the program was spitting out things that looked a LOT like it, so people thought that it was an accurate representation of the typical protein. To go back to the alien example, the alien performing the same type of search would mean taking the red-haired, umbrella-laden fat man and asking his flying saucer computer to search for things that looked like that. If the alien asked for the top 50 similar things in a specific park, he might see a lot of variety in the types of people the program spit out. As you increase the number of people you search (the amount of data), however, the search results are going to be more and more similar. Search for 50 “things” that look like the red-haired fat umbrella man in the whole world and you’re probably going to come up with a room of near doppelgangers.
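
The alien’s predicament is easy to simulate: as the database grows, the top 50 nearest matches to any query get closer and closer to the query itself, even though the underlying population is just as diverse. A rough sketch, with people reduced to random numbers on a line (all values arbitrary):

```python
import random

random.seed(42)  # arbitrary seed so the sketch is repeatable

def top_k_mean_distance(db_size, k=50):
    """Mean distance from a fixed query to its k nearest neighbours in a
    database of db_size points drawn uniformly from the same population."""
    query = 0.5
    db = [random.random() for _ in range(db_size)]
    nearest = sorted(abs(x - query) for x in db)[:k]
    return sum(nearest) / k

park = top_k_mean_distance(200)       # searching a "park"
world = top_k_mean_distance(200_000)  # searching the "whole world"
print(park, world)
# The top 50 hits from the huge database sit far closer to the query --
# near-doppelgangers -- even though both databases are equally diverse.
```

The population’s diversity never changed; only the database size did. A top-50 list from a big enough database will always look homogeneous, which is exactly the trap described above.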

Moving from fat men back to proteins, I started to challenge the idea that all of these proteins were really so similar and went about collecting all of the known sequences of these proteins to analyze them. This isn’t easy as, like I said, it’s a LOT of data. Even for this specific type of protein there are thousands of sequences. When all was said and done, however, a very different picture emerged. The “stereotype” profile of the protein that people held previously turned out to describe a very small subset of the actual group. Much like the assumption that all people look like the pudgy umbrella man, using this profile to describe the whole family of proteins turned out to be very wrong.

This example shows how estimating diversity in a population is more and more difficult as you get more data. This is mainly true because it’s hard to look at all the data at once. It also shows that using the wrong tools can give you the wrong information, but that it might be hard to KNOW that it’s wrong without having an idea of the right information. That is to say that with enough data and insufficient tools, the results of an experiment can tell you almost anything.

What needs to be done
So what can we do about this? It seems as though I’ve been arguing that the problem is too much data. Does this mean we should stop generating data? No. The situation in bioinformatics that I’ve described can be a bit counterintuitive. Consider that in most instances we criticize studies for not having ENOUGH data. More data has the potential to give us more information IF (and it’s a big “if”) we take care to analyze it properly. The same way that conspiracy theories and some superstitions spring from seeing patterns in noise, in the “clumpiness” of data, a mountain of data can be made to mean anything. As with anything you want information from, you have to approach it analytically, not automatically, and use the right tools for the situation.


Posted in Blogs, Emeritus, Jenna's Blogs | Tagged: , , , | Leave a Comment »

You can be Fat as long as you’re Fit

Posted by Jenna Capyk on December 13, 2011

It seems like there’s always a new study popping up in the news about diet, fitness, obesity, etc. Granted, these are very important topics for public education as the overall sedentary trend in our society is beginning to look rather alarming. I saw a commercial this morning claiming that today’s elementary school kids have a shorter life expectancy, on average, than their parents! So while I think that studies about general health are important, all too often these studies mean next to nothing because of small sample sizes, poorly controlled experiments, or inappropriate study groups. A study in this week’s edition of Circulation, however, caught my eye. It had a relatively large sample size and seemed to use more than the usual rigour in the data analysis phase.

The crux of the study is that BMI (or Body Mass Index, a measure of how over- or underweight a person may be) doesn’t have a significant effect on mortality from cardiovascular disease or any other cause. This is in contrast to cardiovascular fitness (measured by a treadmill test) which is highly correlated with mortality. Basically: you can be fat as long as you’re fit.
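
For reference, BMI is simply body weight in kilograms divided by height in metres squared; a quick calculation (the numbers are invented for illustration):

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# An 80 kg person who is 1.75 m tall:
print(round(bmi(80, 1.75), 1))  # 26.1 -- "overweight" by the usual
                                # 25-30 cutoff, like most men in the study
```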

This study is based on data from 14,345 men. This number is smaller than the over 16,000 men originally enrolled, as those with conditions like cardiovascular syndromes, cancer, or other obvious death-causing diseases were excluded. Other exclusions included those who were underweight, had unexplained weight fluctuations, showed an unusually low heart rate under a stress test, or exhibited a constellation of other indicators of subclinical disorders. All of the study participants had at least two physical evaluations over six years, from which researchers were able to glean their baseline stats for body weight and fitness, as well as the change over the study period. This period was followed by eleven years of followup, during which researchers looked at the death rates of the men from any cause and from cardiovascular disease specifically.

The bottom line from this study: fitness trumps BMI when it comes to avoiding death. Nine hundred fourteen deaths were recorded in the followup period, 300 of which were from cardiovascular disease. Men who increased in fitness were significantly less likely to die during the followup period than those who had stable fitness levels or those who decreased. In fact, for every unit increase in fitness level, men were 15% less likely to die, and 19% less likely to die of cardiovascular disease. The kicker? No such correlation was observed for BMI changes, after accounting for fitness changes. That is to say that even if BMI increased, if fitness also increased the risk of death was still lower. Perhaps even more surprising, a decrease in fitness correlated with higher mortality no matter what the BMI was doing.
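
If the 15%-per-unit figure is read as a multiplicative relative risk (a simplifying assumption on my part; the study itself reports hazard ratios from a fitted model), the gains compound:

```python
# Assume each unit of fitness gained multiplies the risk of death by 0.85
# (a 15% reduction per unit) -- a simplification of the study's hazard model.
def relative_risk(units_gained, per_unit_reduction=0.15):
    return (1 - per_unit_reduction) ** units_gained

for units in (1, 2, 3):
    print(units, round(relative_risk(units), 2))
# 1 unit of fitness gain -> 0.85x baseline risk; 2 units -> ~0.72x; 3 -> ~0.61x
```

Under this reading, a three-unit fitness gain would cut risk by nearly 40%, which is why the fitness effect dwarfs anything BMI was doing in the data.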

While the big numbers and stringent statistical analysis in this study make the findings pretty convincing, there are a couple of important factors to consider. Firstly, all of the participants were of middle to high socioeconomic status and college grads. They were almost all white men. Furthermore, they were slightly overweight on average, and less than ten percent were obese. It’s important to keep these factors in mind as the results may only be applicable to people in this group. For example, it’s known that people of Indian descent exhibit fatty tissue with different characteristics than Caucasian people, and this can impact their risk factors for obesity-related complications. You might also notice that I keep saying men; this is because women were not included in the study. BMI might also be a better risk indicator for the very obese, as obesity-related complications become more serious as obesity increases. BMI has also been a controversial method for assessing body fat content and obesity, although the authors did address this by measuring body fat percentage by other methods.

Regardless of specific limitations, the findings of this study carry some pretty interesting implications for a significant proportion of the North American population. Firstly, while BMI is popularly associated with health, this would suggest that for the average-ish case, body weight is more correlative than causative with respect to overall health, and that being overweight might be more a symptom of inactivity and poor fitness than a cause of mortality. Again, these findings are unlikely to extrapolate well to the morbidly obese. The authors are careful not to overreach their real findings, and only suggest that physical activity is likely to be the top factor influencing fitness change (the alternative, I suppose, being magically acquiring fitness-related powers through pure will-power). They also note that a previous twin study performed in Sweden backs up their results: this group found that weight loss from dieting was associated with increased mortality, while weight loss due to physical activity was not.

The bottom line seems to be right in line with the public health mandates seeming to constantly come down the tube: get active, stay active, live longer. While “extreme caloric restriction” has been hailed by many as the proverbial elixir of life when it comes to longevity, it seems to me that enjoying a cheeseburger, or a nice plate of pancakes, after a good bike ride might be the way to go if you’re talking quality of life. Maybe Hal Johnson and JoAnn McLeod had it right all those years: stay fit and have fun!

Posted in Blogs, Emeritus, Jenna's Blogs | Tagged: , , , , | Leave a Comment »

Engineering Enzymes: Steps in the Right Direction

Posted by Jenna Capyk on December 6, 2011

Enzymes: you’ve heard me rave about their incredible abilities before and you’ll hear it again. These specially-abled proteins can perform in microseconds all kinds of nifty chemistry that would otherwise take centuries or even millennia. In order to accomplish their specific chemical tasks, each enzyme has been honed by millions of years of evolution to have the specificity and chemical capacity necessary to keep all biological systems alive and ticking.

Of course, scientists, with their big egos (kidding), aren’t going to be out-done by natural selection. Oh no, they have to go ahead and make their OWN enzymes. Really: a group of scientists from the University of Michigan reported this week in Nature Chemistry that they’ve managed to create an enzyme “de novo” (read: from scratch) that is able to perform the same chemical reaction as one of our own enzymes.

What enzyme did these scientists create? They created a whole new enzyme, but its reactivity is based on human carbonic anhydrase II. This enzyme takes carbon dioxide and water and spits out bicarbonate and protons. Like all enzymes, the natural carbonic anhydrase has a specific shape that contains a small region called the active site, and this is where the chemistry happens. The scientists in Michigan came up with an entirely new shape, a totally new scaffold, and stuck in something that looks a bit like the carbonic anhydrase active site. The result: it works. Well, it is able to perform the same carbon dioxide transformation as well as another reaction that carbonic anhydrase catalyzes. How well does this frankenenzyme do what it was literally designed to do? About 550 times less well than the natural enzyme, but to be fair, natural evolution has had a lot longer to work on it.

Now, you might argue that this type of research is completely unnecessary: wasting money to reinvent the wheel. Why would anyone ever spend tons of time and resources on designing an enzyme, from scratch no less, that is far less efficient than an enzyme that already exists in nature? If you made this argument, however, I would argue that your argument is wrong. This work is an incredibly interesting and important step forward in our understanding of how enzymes work. You can tinker with an engine all you want, but you won’t really understand one (or so I’m told) until you put it together from the individual pieces. The same is true for enzymes. By starting from scratch and designing this protein from the bottom up, these scientists are testing our understanding of the fundamental principles governing enzyme function. The fact that this totally synthetic protein is able to perform the exact chemistry it was designed to catalyze demonstrates that we have a decent grasp on the basics of how enzymes work, and that we can apply these concepts to create proteins that perform specific reactions.

As mentioned above, the specific reaction that this novel enzyme performs is already catalyzed by a natural enzyme. In fact, it’s catalyzed really quickly by a natural enzyme. So why did this group decide to target a reaction that we already have in our biological tool-belt? The answer is that they needed something to compare it against. The natural enzyme is kind of a benchmark that we can work toward. Using this standard enables scientists to create what can then be deemed an “efficient” enzyme. Without having a natural enzyme to compare it to, there’s no way to tell how fast the chemistry really has to be to be considered “good”. Having this benchmark allows these scientists to evaluate their progress with a realistic goal in mind.

So where does this type of research lead? This progress represents a really interesting development because it is moving toward creating brand-spanking-new enzymes to perform chemistry that is not already covered by naturally evolved enzymes: think breaking down Styrofoam, or any number of other human-created chemicals nature has a tough time getting rid of. Scientists have been working on the same problem by altering existing enzymes, but creating them from scratch might prove to be a strategy allowing for a broader range of novel chemical reactions.

Some people might call this type of research playing God, I like to think of it as playing Evolution, or at least learning from it.

Want to know more? Check out my blog post at And That’s Science! While you’re there you can read some of the other posts about what enzymes are, what they do, and how.

Posted in Blogs, Emeritus, Jenna's Blogs | Tagged: , , , | Leave a Comment »

Making Mutants, the Safe Way

Posted by Jenna Capyk on November 28, 2011

The practice of medicine has come a long way from the days of the four humours. Instead of being filled with a delicate balance of blood, snot, and various forms of “bile,” we now understand that our bodies are made of trillions of cells, each with a specific job to do. We are now able to pinpoint the exact type of cell not doing its job in many diseases, and to figure out just what job it’s not doing. This knowledge can help us change or replace parts in our cells in order to treat some diseases. In the future, making our own engineered human mutants may help us accomplish medical feats not possible with traditional drugs.

Each cell in our body is specialized to perform a specific function. Our nerve cells conduct electricity to pass messages around the body, our red blood cells carry oxygen, and cells on the inside of our stomachs secrete acids to help with digestion. In order to carry out these jobs, our different types of cells produce specific enzymes and other proteins. Without these specialized components, the cells don’t function like they should. For some diseases there may be only a single protein that is not being produced, and this deficiency results in a whole host of symptoms. For example, type I diabetes is an entire disease caused by the body being unable to make a single protein: insulin. All proteins are made by decoding DNA sequences, and researchers are looking at treating diseases like diabetes by adding back the specific DNA needed to make the proteins that are missing or damaged. By inserting these specific DNA sequences into the genetic material already in our cells, we hope to engineer helpful cellular mutants that can produce missing proteins and reverse disease.

Sticking helpful genes into human cells can be a great way to fill the gap of a missing protein, but there are many dangers inherent in introducing new DNA. If a new piece of DNA is inserted into one of your chromosomes, it can land in any number of places, including in the middle of another gene. This can potentially result in mutation of a different protein, in which case you might cure the first disease but could actually cause another. A more common and potentially more dangerous scenario is if the new piece of DNA disrupts an oncogene (see DNA Repair just doesn’t give me the same Buzz). In this case adding the extra DNA can cause cancer, and this only needs to happen in a single cell to result in life-threatening disease.

While humans have only been sticking extra bits of DNA into cells for a few years, viruses have been accomplishing this task for millennia. For many viruses, part of their life cycle involves taking part of their DNA and inserting it into a chromosome of the cell they’re invading. In order to do this efficiently, viruses use a specific DNA-insertion tool: an enzyme called an integrase. This enzyme recognizes specific sequences on the DNA to be inserted and on the host cell DNA and stitches in the inserted piece at specific sites on the host chromosome.
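
Conceptually, what an integrase does is string surgery: find the recognition sequence in the host DNA and splice the new piece in at that point. A cartoon version (the sequences and the single-site behaviour are invented for illustration; real integrases are far more sophisticated):

```python
def insert_at_site(host, site, payload):
    """Insert payload immediately after the first occurrence of a
    recognition site in the host sequence (a cartoon of an integrase)."""
    idx = host.find(site)
    if idx == -1:
        return host  # no recognition site found: nothing gets inserted
    cut = idx + len(site)
    return host[:cut] + payload + host[cut:]

host = "AAAATTGCAAAA"  # made-up host chromosome with one "TTGC" site
print(insert_at_site(host, "TTGC", "GGGG"))  # AAAATTGCGGGGAAAA
```

The safety argument maps onto the cartoon directly: random insertion could land anywhere in the host string, whereas the site-directed version only ever lands after the recognition sequence.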

Nature doesn’t always do what we want it to (sometimes disease happens), but it also produces some pretty useful tools that we can borrow. In the case of gene-based therapies, the danger is not knowing where an introduced piece of DNA will insert into our own DNA, and whether or not that might cause even more problems. By borrowing the ability of viral integrases to insert DNA pieces into specific places in the genome, we drastically cut down this risk. Scientists are now working on giving patients the specific DNA segments needed to replace proteins missing in certain diseases and including viral integrase enzymes to make sure that DNA is inserted in safe spots in our genome.

To most people messing with their DNA, the prospect of cancer, and viral enzymes are all pretty scary concepts. By understanding how each component works, however, we are able to use these concepts to treat diseases in innovative ways. Knowing more about almost anything can help it to be less worrisome and more useful.

Posted in Blogs, Emeritus, Jenna's Blogs | Tagged: , , , , | Leave a Comment »

Deep Sedation: Not for Everyone

Posted by Jenna Capyk on November 21, 2011

We don’t often include personal stories on the blogs, but in this particular post I’m giving a short description of a surgery I recently underwent. I had all four of my wisdom teeth removed in an ostensibly routine procedure. As it turns out, what transpired was not entirely routine. I would therefore like to tell you a story about the Little Maxillofacial Surgeon that Could, alternate title: A Tale of Anesthesia?

Some things in my surgery went well, or at least went as planned. For example, I went in to the appointment with four more teeth than I left with. Well, strictly speaking they gave me the rather gory extracted teeth in a paper cup, but I definitely had four fewer teeth in my mouth. I also left with a pocket full of rather delightful drugs and returned to a house full of cold jello and luke-warm soup. What didn’t go so swimmingly was the anesthesia. Picture this: you’re lying on a table hooked up to a machine monitoring your vitals, and a doctor sticks a needle in your arm, says you’re about to drift off “just like you’ve had too many margaritas,” and then starts working away. But wait, there’s pain, and you’re awake, and you’re being held down by the shoulders as your legs shake and you cry and yell and squirm on the table. They call for more drugs, but to no avail. They strap on the laughing gas but alas, there is no laughter to be had. Four teeth later you’re sitting in a recovery room, bleeding, crying, and rather traumatized. It happened to me, but the question is: could it happen to you?

As it turns out, there isn’t actually much in the literature about failed anaesthesia during dental surgery. Given how common these procedures are, I found this rather surprising. I was, however, able to find one study that looked at the incidence of failed “deep sedation” and correlations with various conditions on the part of the patient. I was a little shocked to find that my experience actually would have been deemed a successful deep sedation under this study’s criteria, notwithstanding the fact that the sedation was not nearly as deep as I would have liked. The reason for this is that they defined “failure” as the inability to complete the surgery due either to patient combativeness or unsafe vital signs during the procedure. So perhaps if I had been lucid enough to kick my surgeon in the groin I would have a couple more teeth and a true story of “failed deep sedation.”

What is deep sedation then? Deep sedation and general anesthesia are both defined by depressed consciousness or unconsciousness including partial loss of protective reflexes like the ability to keep an airway open, to respond purposefully to physical stimuli or oral commands, and the ability to swat away sharp objects entering your oral cavity. Such sedation is accomplished through intravenous administration of a variety of different drugs or combinations of drugs.

From my research, it looks like about 1% to 2% of all patients experience failure of such anesthesia, although the study looking specifically at maxillofacial procedures like mine only counted those where they were unable to finish the procedure. Criteria for failed anesthesia in other fields vary, so a direct comparison is pretty difficult.

The failure rate in the maxillofacial study fell within this 1-2% range, and two main factors were suggested to be paramount in determining how a patient was likely to respond to deep sedation. The first is, unsurprisingly, pain. Local anesthesia (your run-of-the-mill freezing) is required along with the deep sedation to make sure that pain doesn’t startle the patient into wakefulness. The authors of this study assert: “painful stimuli in an already suffering and frightened patient are enough to arouse them during deep sedation.” As someone who was recently similarly aroused: duh. The point here, though, is that the doctor is not able to count on the patient’s lack of consciousness as adequate pain control, and very effective local freezing actually helps the effectiveness of the more general doping.

The second factor, perceived to be even more important than local pain control, was the mental state of the patient. Anxiety was brought up again and again as one factor that can make deep sedation less effective. This is complicated by the fact that most patients choose the deep sedation option specifically because they are already anxious and are seeking to relieve said anxiety. You can see the circular logic here. The study authors urge offices to try to create a soothing environment, but comically concede that this can be rather difficult in the presence of noise from a dental drill. It turns out that anxiety is such an issue in dentistry and dental surgery that there is actually a Corah Dental Anxiety Scale. This is a questionnaire designed to assess the patient’s attitude toward different dental-related scenarios. I can only assume that this ranges from post-op jello eating to awaking screaming mid-molar-drilling. The bottom line is that anxiety is a major contributing factor in the success of dental procedures in general; however, there is almost no literature on the correlation between prior anxiety and procedural outcome.

Things that did not turn out to be much of a factor in anesthesia effectiveness were the length of the procedure and pre-existing medical conditions. There are some exceptions with cardiovascular conditions, as deep sedation causes different responses in these physiological systems. The authors also noted that the experience of the surgeon could have something to do with it. Again, duh. Although there is some speculation about GABA-A receptors and alcohol abuse being contributing physiological factors, it is unsupported by the evidence at this time.

For myself, I can say that I’m glad I had a technically “successful deep sedation” experience, wish I had had a subjectively “successful deep sedation” experience, and hope to be back on steak and raw vegetables soon. In the mean time, remain calm, and always bring your anesthesiologist cookies.

Posted in Blogs, Emeritus, Jenna's Blogs | Tagged: , , | 1 Comment »

Fall Back into that Old Rhythm

Posted by Jenna Capyk on November 15, 2011

As any of our readers who suddenly found themselves an hour early for an appointment or eating dinner in the dark might know, last weekend was the daylight saving time switchover. There’s been a fair amount of press, as there is every year, on this predictable phenomenon, and there seem to be a lot of issues to cover regarding it. I saw articles on increased and decreased heart attack and car accident rates, on physiological effects of this demarcation of the dismal slide into winter, and political dissections of which geographical areas do and do not observe daylight saving time and when they begin and end. There is also a specific biological implication of not only the one-hour shift in the clocks, but of how the clocks sync up to the daylight in general. That’s right, this post is a brief introduction to circadian rhythms.

Firstly, what is a circadian rhythm? It’s a pattern or cycle of behavioral or physical characteristics in plants, animals, fungi, and other organisms. One example is our tendency to get sleepy during the night and be wakeful during the day. Insect activity patterns, flowers opening and closing daily, and animal foraging behavior are all examples of these cycles. To be considered “circadian” a rhythm must meet four criteria:

1) It must have a roughly 24-hour pattern. There are other biological cycles over larger time periods (monthly or yearly, for example) but these aren’t strictly circadian.

2) It must be internally generated, or be endogenous. There are certain things that might happen daily but the causes for this periodicity are external. For example, you might eat ice cream every day at four o’clock because that’s when the ice cream truck comes by. This would not be circadian because your body does not generate the ice cream truck. You might, however, also be hungry for ice cream every day at four, and if the ice cream truck doesn’t come for several days, you might still experience this craving every day at four o’clock. That craving is not generated by the truck, but by your body. It’s endogenous and could therefore be potentially considered circadian. For the record, I think this is really an example of a different kind of physiological conditioning but it makes a good example to explain endogenous and entrainable characteristics.

3) The cycle must also be entrainable to be called circadian. In the ice cream truck example, you may crave ice cream every day at four, but if the ice cream truck started coming every day at two, after a few weeks your body would adjust the timing of the craving to sync up with the arrival of the truck. This means that although the craving is still generated internally, it can be trained to coincide with external cues. For circadian rhythms, the most important cue is the presence of light.

4) The fourth criterion a potential circadian rhythm has to meet is temperature compensation. This means the cycle persists over a range of temperatures relevant to the body and lifestyle of whatever the cycle is in. Squirrels are still more active in the day in the winter, even though it's a heck of a lot colder. This is because the light is the stronger cue.
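Just for fun, the endogenous-but-entrainable idea in criteria 2 and 3 can be sketched as a toy calculation. This is purely an illustration: the 24.5-hour internal period and the "nudge" strength of the daily light cue are made-up numbers, not real physiology.

```python
# Toy model: an endogenous ~24.5 h clock entrained by a 24 h light cue.
# "phase" is how far ahead of the external dawn the internal clock sits.
def simulate_days(days, internal_period=24.5, cue_period=24.0, nudge=0.3):
    phase = 0.0
    drift = []
    for _ in range(days):
        phase += internal_period - cue_period  # endogenous free-running drift
        phase -= nudge * phase                 # the light cue pulls phase back
        drift.append(phase)
    return drift

drift = simulate_days(30)
print(round(drift[-1], 2))  # → 1.17: a small stable offset, not runaway drift
```

With `nudge=0.0` (no entrainment) the same clock drifts half an hour further every day, which is essentially what happens in jet-lag before your rhythms re-entrain.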

Circadian rhythms are interesting biologically because they're not conscious. They're actually controlled by cycling amounts of chemicals and molecules in your blood. So some rhythms are controlled by daily hormone cycles while others are controlled by fluctuating levels of specific proteins in your bloodstream. This is why, even if you've been up all night, you might find it difficult to get the same solid sleep during the day that you would achieve at night. Although the cycle is synced up to the daylight, it's not determined by the light on a day to day basis. The cycles are generated by your own chemistry.

The conflict between the external light cues and internal body clock set up by the circadian rhythms is the reason for jet-lag, and for the minor physiological problems people have in adjusting to the change in the clocks every spring and fall. When internal cues and external cues fail to coincide, especially when this change does not happen gradually, other things can go wrong in our body. Your body has multiple circadian cycles of chemicals, and if some are faster at adjusting to the change in light in your daily cycle than others, it may result in different combinations of those chemicals occurring at the same time, and this can have all kinds of consequences. This is why some people don’t just feel tired when they’re jet-lagged, but actually sick. The body falls out of balance chemically because the systems used to keep it in balance are in flux.

So if you’d like to take an extra sick day this “fall back” season, be my guest. Blame it on your circadian rhythms. Blame it on the endogenous but entrainable 24 hour cycles, or for us Vancouverites, blame it on the rain.

Posted in Blogs, Emeritus, Jenna's Blogs | Tagged: , , | Leave a Comment »

The Midas Mind: Explorations of the Golden Section

Posted by Jenna Capyk on November 14, 2011

The Golden Section, the Golden Proportion, the Golden Mean. What is this curious and literally irrational number? This blog post will cover what it is, where it shows up, how we use it, and the science and skepticism behind finding it everywhere.

The Golden Section: What is it?

The shortest answer to this question is: a number. Specifically, the golden mean is an irrational number. This does not mean that the number has trouble thinking clearly and making good decisions, but rather that it can't be written as a fraction, also known as a ratio. Practically, what this means is that the golden proportion, like pi, is a number with an endless stream of digits after the decimal place. This also means, of course, that you can impress girls at parties by wowing them with how you've memorized it to the nth decimal place. Also like pi, the golden ratio is associated with a monosyllabic Greek letter: phi. For those readers who like actual digits, the "golden section" is around 0.618, while phi represents the inverse of this number, around 1.618. Because this ratio has very special properties, these terms are used pretty much interchangeably in general conversation.

Before delving into the properties of this "special" number, I'd like to put out the disclaimer that I am in no way a mathematician but will try my best not to misrepresent the math. There are a lot of ways to describe phi mathematically; most involve either algebraic equations or visual representations. My favourite way to explain phi is as a number describing a continuous, balanced proportionality. For example, it is the only way to cut a piece of string such that the ratio of the length of the smaller piece to the bigger piece is equal to the ratio of the length of the bigger piece to the whole thing before you cut it. Mathematically, this is described by the equation a/b = b/(a+b). Phi is also mathematically important as the positive solution to the equation x² = x + 1, and as such the golden section (~0.618) plus one is equal to the inverse of the golden section (~1.618), or phi. It is also the limit of the ratio between consecutive numbers in a Fibonacci series.
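These identities are easy to check numerically. Nothing in this sketch is deep; it's just the arithmetic from the paragraph above, written out:

```python
# Check the golden-ratio identities numerically.
phi = (1 + 5 ** 0.5) / 2        # positive solution of x**2 = x + 1, ~1.618
section = phi - 1               # the "golden section", ~0.618

assert abs(phi ** 2 - (phi + 1)) < 1e-9       # x^2 = x + 1
assert abs(1 / phi - section) < 1e-9          # phi is the inverse of the section
assert abs(section + 1 - 1 / section) < 1e-9  # section + 1 = 1/section = phi

# The ratio of consecutive Fibonacci numbers converges to phi:
a, b = 1, 1
for _ in range(30):
    a, b = b, a + b
print(b / a)  # ≈ 1.6180339887...
```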

You may, at this point, be asking yourself: who cares how a line is cut up? How would this show up in my life? Although the line example is a good way to explain what the ratio is, this very basic principle can be used to build all kinds of other shapes and forms. For example, the so-called "golden spiral" is a spiral that looks exactly the same no matter how far you zoom in or zoom out on it, and there are also golden rectangles, triangles, pentagons, pentagrams, etc. that are intimately related to the golden mean. There are some excellent diagrams in the review article "All that glitters: a review of psychological research on the aesthetics of the golden section" by Christopher D. Green. These basic shapes can, in turn, be used to make up more and more complex shapes and things, even things like people, galaxies, and quasicrystals.

The Golden Mean in the Physical World


The intriguing thing about the Golden Mean is that it is not only very important in many mathematical models, but shows up in a very real and practical sense in many places in the natural world. I don't know many nautiluses, abalone, or tritons interested in algebra or geometry, but all of these sea creatures grow shells in the characteristic golden spiral. The same spiral can be found in pine cones, pineapples, and sunflower seed growth patterns. Likewise, many plants use pentagonal symmetry, based on the phi-related pentagon. The Fibonacci series mentioned above can also describe population growth patterns of very different types of species (such as bees vs. rabbits), and these numbers have very close mathematical ties to the golden section. Many people have also suggested that humans and other animals exhibit proportions close to the Golden Mean; for example, the ratios of the lengths of the sections of your fingers, divided at the knuckles, approximate this ratio. These claims are harder to investigate, especially as they involve assumed "ideal ratios" for a species.
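The sunflower pattern in particular has a famous mathematical sketch: Vogel's model, in which each new seed is rotated from the previous one by the "golden angle" (a full turn divided by phi squared, about 137.5°) and pushed outward from the centre. A minimal version, with an arbitrary scale factor purely for illustration:

```python
# Vogel's model of sunflower seed placement: seed n sits at angle
# n * golden_angle and radius sqrt(n), which packs seeds with no radial gaps.
import math

phi = (1 + math.sqrt(5)) / 2
golden_angle = 2 * math.pi * (1 - 1 / phi)  # ≈ 137.5 degrees

def seed_positions(n_seeds, scale=1.0):
    points = []
    for n in range(n_seeds):
        theta = n * golden_angle
        r = scale * math.sqrt(n)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = seed_positions(500)
print(round(math.degrees(golden_angle), 1))  # → 137.5
```

Plotting `pts` reproduces the familiar interlocking-spiral seed head; using any angle other than the golden one leaves visible radial gaps.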

Non-biological sciences

Aside from biology, the shapes of the Golden Ratio show up in physical representations in the natural world. For example, tropical hurricanes and spiral galaxies often spiral in a golden or logarithmic spiral. Beaches can form in this shape due to erosive wave action. Many, many places that you wouldn't necessarily expect to find expressions of deep mathematical laws turn out, in the end, to be ruled by these same principles.


One of these arguably counter-intuitive manifestations of the golden ratio is in the spacing between molecules that make up quasi-crystalline material. Peak positions in x-ray diffraction patterns of quasi-crystals are related by the golden mean. In fact, this property is an indicator of the quasi-crystalline state. So while regular crystals are made up of evenly spaced molecules, molecules in a quasi-crystal are not spaced at regular intervals, but at intervals that relate to each other by the golden ratio.
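As a toy illustration of this kind of aperiodic order (not real diffraction data), a one-dimensional "quasicrystal" can be built from long (L) and short (S) intervals by repeatedly substituting L → LS and S → L. The chain never repeats, yet the ratio of L to S intervals closes in on the golden mean:

```python
# Build a 1-D Fibonacci chain by the substitution L -> LS, S -> L.
# (A lowercase placeholder lets both substitutions happen simultaneously.)
seq = "L"
for _ in range(20):
    seq = seq.replace("L", "l").replace("S", "L").replace("l", "LS")

ratio = seq.count("L") / seq.count("S")
print(ratio)  # ≈ 1.618..., the golden mean
```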

Esthetics of the Golden Mean

The case has been argued over and over that objects reflecting the golden ratio are intrinsically pleasing from an aesthetic point of view. Knowledge of the golden mean existed in ancient Egypt and was then passed along to different ancient civilizations. Such historical figures as Euclid and Vitruvius are recorded as referencing the wonderful properties of this number. James Sully included the first English-language use of the term "golden section" in his article on aesthetics in the 1875 edition of the Encyclopedia Britannica. It's been called the most beautiful ratio, and whether by accident or on purpose, humans have used their knowledge of this number to create works of art of all kinds. Evidence of this number shows up in architecture in ancient Egypt. Leonardo da Vinci illustrated the book De divina proportione by Luca Pacioli, and also used this proportion heavily in many of his own works. People have even made claims that the division points in major symphonic works conform to this ratio. In a modern twist to this aesthetic appeal, plastic surgeons and, most often, cosmetic dentists have used this proportion to sell their clients the "most perfectly beautiful" smile.

One of the important things to remember in studying such occurrences is the strong human tendency for confirmation bias. If you go looking for the golden proportion, you WILL find it. It then becomes a process of untangling both the numbers and the motivations surrounding phi’s true prevalence in nature and the man-made world. The importance of phi in basic mathematics is hard to refute, and I mentioned several robust examples in the natural world, but what about more tenuous examples? Finding ratios that are “really close”? Finding it in a painting? Is it coincidence? Does it occur more frequently than other ratios? How many other ratios are reflected in the same thing? As far as finding that phi really is a prevalent component of a work of art or other man-made thing, is this on purpose? Did the creator of this work incorporate this ratio explicitly or was it a consequence of an unconscious aesthetic appreciation? What does either outcome say about the intrinsic value of this number to human consciousness?

Research into the true aesthetics of Phi

Our interest in how we, as humans, perceive things expressing the golden ratio is ancient, and scientific research into phi-related psychology includes some of the earliest empirical studies conducted in psychology. Many scientific studies have suggested that people really do like things that conform to the golden ratio. Various experiments have shown that when asked to choose their favourite "thing" (such as a single rectangle) from a group of other "things" (such as a bunch of rectangles with different length-to-width ratios), people tend to overwhelmingly prefer the thing which conforms most closely to phi. Studies have ranged from really pared-down manifestations of phi (literally lines and rectangles) all the way up to measuring "landmarks" on human faces, assigning the faces a number based on how closely they conform to the golden ratio, and then asking people to rate their level of attractiveness.

Lately, a lot of the research has gone into trying to prove that humans don't have an intrinsic response to the golden ratio. That is, a lot of scientists have been taking a skeptical look at the long-held assumption that people are able to recognize and subconsciously be affected by things conforming to this ratio, and specifically its aesthetic value. The alternative hypothesis is that people don't have an intrinsic ability to pick out and like this ratio, but that as a society we have an understanding of its prevalence and a somewhat superstitious belief that it's beautiful. According to Dr. Green, the jury is still out. While different groups have gone into this type of research favouring each of the opposing hypotheses, there appears to be no strong consensus as to whether people are truly drawn to things with phi proportionality, or whether it's more or less just a number our society knows about and therefore names a lot. Dr. Green also brings up the point that if the effect is genuine, we are even further from knowing whether it falls on the nature or nurture side of biology; that is, whether this preference is from a physiological or a learned psychological part of ourselves.

The Golden Mean and Human Cognition

One of the most interesting things about our fascination with the golden ratio is that it represents an interesting experiment in the way that people can look at seemingly significant things. This number shows up legitimately in all kinds of places in math and in nature. This fact can be looked at in two opposing ways:

1) This has been worked into the fabric of the universe by some cognizant being

2) There is something about the way this number corresponds with relationships between things that can tell us something about the fundamental laws of the universe we live in

With the Golden proportion there is also the extra layer of aesthetics. In this case, we also have to consider that when we do encounter this ratio, we seem to find it pleasing. A non-skeptical view of this might be that we are recognizing the signature of a cosmic creator that made us using this formula. A more skeptical viewpoint might be to ask first IF we intrinsically find it pleasing, and then WHY. Again, it’s the difference between explaining it away with mysticism and questioning whether there is something more to be learned from this observation that we could explore scientifically and really gain some insight into our world.

Encountering something that seems mystical at first glance can be the starting point of an amazing scientific discovery. It's often the things that don't fit our current models that we consider magic, when really these represent an opportunity to expand the model through understanding such outliers. Some people argue that this is taking the magic out of life, but the more you understand it, the more you see that our world is pretty awesome all on its own. No magic required.


Biomimetic Backflips: When Robots inform Biology

Posted by Jenna Capyk on November 2, 2011

The area of biomimetics is pretty much what it sounds like: mimicking biology. These principles are used extensively for many types of engineering problems and are now an area of very prolific research in robotics. It's also a really logical way to proceed, considering the mechanism of natural selection. Through natural biological evolution, the physical traits best suited to perform a specific task develop and are then refined. This means that many of the traits we see today have had hundreds of thousands of years of improvements, and are impeccably suited for their specific role. By borrowing the physical strategies that we observe in nature, we are basically hijacking millennia of evolutionary innovation. Luckily, Nature has notoriously inept patent lawyers.
A lot of the time, the goal of biomimetic research is to find out how to make the stuff we want to create better by using strategies that we find in nature. Insects are often used as biomimetic models because their basic body plan and exoskeleton make their traits particularly amenable to robotic copying. This is the case for the robot “Dash,” modeled off of the cuddly cockroach. It is designed as a tough, all-terrain scampering robot that can go where it may be too cramped or dangerous for humans to go. What the researchers who created Dash maybe didn’t expect, however, was for this metal critter to provide some real insight into evolutionary biology.

To solve some of the balance problems that Dash was having in its clambering activities, the researchers added flapping wings and a tail fin from a toy airplane. Interestingly, this improved its running ability dramatically. The degree of the slope the robot was able to handle was greatly increased, both up and down, and the speed of the electronic creeper's scuttle almost doubled.

Although these improvements were impressive in and of themselves, they also tested some theories about the evolution of flight that had previously been supported only by a scant fossil record. In this field, computer models had suggested that biological scamperers would need to triple their running speed to be able to achieve flight. Dash's wings doubled its speed, but fell short of this mark. The experiment therefore actually provides evidence for a theory of flight evolution suggesting that flight started as gliding in tree-dwelling creatures, rather than in land-based animals requiring "takeoff speed". Obviously it's not conclusive, as there are a lot of variables, but it's an interesting and partially accidental experiment nonetheless.

It’s interesting to think that while we use nature as a model to create things in science, the very things we create can be used as a model to help understand nature. Ah science, you never know which way it’s going to go!


Dreams, Another Exercise in Things we Don’t Know

Posted by Jenna Capyk on October 25, 2011

When it comes to sleep and dreams, a few things are clear. Firstly, people really like to study them. Secondly, some of the gross physiological features of sleep can be observed and measured. Thirdly, despite all of this research, there is plenty of controversy in the field of sleep research. Scientifically, it's a fascinating topic involving an altered physical and mental state that is somewhat independent of the conscious mind. It also seems to be another one of those ubiquitous phenomena in the human experience that we have trouble really putting our fingers on.

Before we talk about the things we know about sleep, let's talk about how we know them. The gold standard for measuring sleep phenomena is polysomnography. This technique measures three things: it incorporates an electromyogram (EMG), which measures muscle tone; an electro-oculogram (EOG), which measures eye movement; and an electroencephalogram (EEG), which measures brain activity. In addition to these three measurements making up the polysomnograph, researchers often use direct observation to monitor gross muscle movements, or sensors that measure chest wall movements, leg movements, or oxygen saturation of lung and other tissues. Body temperature is also sometimes measured.

Like I said earlier, there are several basic things that we know about sleep. For example, there are two main kinds of sleep: rapid eye movement (REM) sleep and non-REM sleep. Non-REM (N-REM) sleep can be further broken down into four stages, all defined by measurements of the polysomnograph. First, if you're drowsy but not asleep you have low voltage alpha waves on your EEG reading. Stage 1 N-REM sleep is characterized by low voltage theta waves and slow, asynchronous eye movements. This is the only stage of N-REM sleep in which your eyes are observed to move, and it is also the only stage of sleep during which you may not perceive yourself as having been sleeping upon waking. Stage 2 N-REM sleep is characterized by "sleep spindle" patterns and "K-complexes" in the EEG. Sleep spindles are 1.5-second-long, 12-14 Hz EEG waves generated when groups of nerves in your thalamus become synchronized by a pacemaker mechanism. I find this really cool because it reminds me of the synchronization of your heart nerves and muscles by a similar internal pacemaker: it creates a regular rhythm by having the nerves work together. Stages three and four of N-REM sleep show very slow delta waves. This is called deep sleep or slow wave sleep, and stage four is basically just deeper than stage three.

In all of these stages of N-REM sleep, the EEG, measuring brain activity, shows very different patterns than in wakefulness. These patterns tend to be both synchronized and rhythmic. In REM sleep, however, we see low voltage, high frequency waves that are similar to relaxed wakefulness. REM sleep is defined by the presence of this activated and desynchronized EEG pattern, rapid eye movements, and very low muscle tone. REM sleep also has two phases: tonic, which is continuous and has the typical EEG pattern and muscle atonia; and phasic, which is intermittent and involves bursts of eye movement and irregular breathing and heart rate. About 80% of all dreaming happens during REM sleep, although the two states are more loosely associated than previously thought.

As you fall asleep you pass through the stages of N-REM sleep in order and then into REM sleep after about 90 minutes. These cycles then repeat 4-7 times throughout the night with stage 3 and 4 N-REM sleep making up the largest proportion of normal sleep time and the transitional stage 1 N-REM making up the smallest. You spend about 20-25% of the night in REM sleep, although this number is larger for small children. Each stage involves less muscle tension than the one before and in all stages but phasic REM sleep the parasympathetic autonomic nervous system dominates.
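The architecture described in the last two paragraphs can be sketched as a toy hypnogram generator. The stage durations below are illustrative placeholders chosen so the first REM period arrives around the 90-minute mark; they are not clinical values:

```python
# Toy hypnogram: descend through the N-REM stages, come partway back up,
# then a REM period; repeat for several cycles. Durations are illustrative.
def night(cycles=5, rem_minutes=25):
    stages = []
    for _ in range(cycles):
        stages += [("N1", 5), ("N2", 20), ("N3", 15), ("N4", 30),
                   ("N3", 10), ("N2", 10), ("REM", rem_minutes)]
    return stages

sched = night()
total = sum(m for _, m in sched)
rem = sum(m for s, m in sched if s == "REM")
print(round(100 * rem / total))  # → 22, within the ~20-25% REM range above
```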

The biological function of sleep itself is still under debate. We know that during N-REM sleep the body temperature is controlled at a lower set-point, whereas in REM sleep temperature regulation ceases altogether. These effects have prompted the theory that sleep is important for the mechanisms of body temperature control. Another theory is that it is crucial for the consolidation and maintenance of memory; this has been one of the most prevalent theories for a long time, but there is much contradictory evidence. Yet another theory is that it is necessary for general rejuvenation and neural growth. This theory is supported by the fact that REM sleep is crucial for CNS development in young animals, and by the rather obvious observation that when you wake up you feel better than when you're tired. Similarly, there are parts of the brain that actually undergo growth of new neurons throughout life, and sleep deprivation (specifically REM sleep deprivation) slows this neural growth.

If our understanding of the reasons behind sleep is sketchy, it's nothing compared to the sketchiness surrounding our basic understanding of dreams. As I mentioned, dreaming has been closely associated with REM sleep, and this sleep stage is also poorly understood in terms of its biological role. While the other stages of sleep, as I was saying, can be explained as having a regenerative role for the body and mind, there is no obvious adaptive role for REM sleep. That is, it's hard to see what selective pressures would have resulted in this sleep stage developing through evolution.

People (at least adult people) can basically do without REM sleep. There are several pharmaceuticals in wide use that suppress REM sleep and people on these drugs undergo a massive reduction of REM sleep, basically to the point of eliminating this phase. These people do not suffer mental collapse, however, and seem to undergo no ill effects. In fact, some studies suggest that they show improvements in memory. Similarly, some people lose the ability to have REM sleep because of a brain injury and these people seem just fine too.

REM sleep is not just a human trait, however. All land mammals and possibly all birds have REM sleep cycles, although of course we can't ask the animals whether they are experiencing what we consider dreams. Dreaming isn't actually a necessary component of REM sleep in humans: injury to a relatively small cortical region eliminates the ability to dream while retaining REM sleep cycles. Some people also report never dreaming, although they also experience REM sleep. Small children experience a lot more REM sleep than adults, but they also report fewer dreaming experiences.

At one time, neurologists tried to explain dreaming as spontaneous and random neural signals originating from the part of the brain that generates REM sleep; this theory posited that dreams were basically a byproduct of REM sleep generation. A systematic investigation of dreams, however, showed that although they can be strange, they are definitely ordered and non-random and incorporate many components of waking life. This seems to suggest that they may have a role of their own rather than simply being a product of a different process.

To be honest, when I started researching dream science I expected to find a bit more scientific consensus on the subject. It does seem, however, that the jury is basically out on everything from the specific role of sleep in general, to the reason we have REM sleep, to how closely connected REM sleep is to dreaming, and why we dream. I guess it's not that surprising that dreams are hard to research when you consider them as an experiential rather than an empirically measurable phenomenon. The human mind, and the underlying neurological circuitry, is really a beautiful and complex thing. It's a bit ironic that one of the hardest things to wrap our minds around is, in fact, our minds.




Stereotyping: Not just for high school anymore

Posted by Jenna Capyk on October 17, 2011

Escherichia coli (E. coli): if you are an average member of the public, you might think of this bacterium in terms of some really nasty food poisoning. The name might conjure up warnings on the news about spinach and sprouts. On the other hand, if you, like me, are something of a microbiologist, you know E. coli as the most studied bacterium on the globe, and possibly the organism we know the most about on Earth. What is important to remember, however, is that all of this research wasn't conducted with "E. coli" but with a specific strain of the bug: K-12.

In science we accept certain limitations. You can’t study everything and so in biology and in microbiology you tend to study “model organisms.” These are defined species that researchers concentrate their studies on. The benefit of this approach is that everyone can compare apples to apples (or zebrafish to zebrafish, as the case may be). It is useful, however, to remember that when you’re studying a nematode, you’re not studying “worms” you’re studying a nematode (which is a worm), and when you’re studying a mouse, you’re not studying “mammals”, you’re studying a mouse. Obviously it’s important to keep this in mind when drawing conclusions about the implications of research. The questionable applicability of studies into one species to a separate species is precisely why there is so much controversy both inside and outside scientific fields about animal drug testing and its value: a mouse is not a human.

In the bacterial world there is incredible diversity. There is a far greater number of bacterial species than of animal species. For this reason, it's important to take a look at just how wide a spectrum is being distilled down to one point by concentrating on one organism. I attended a recent talk by Professor Erick Denamur, who is interested in precisely this subject matter. His lab studies the diversity and "lifestyles" of E. coli from around the world, and how these variables relate to differences in their genetic makeup.

As a little bit of background, the scientific community has focused intense research effort into investigations of the K-12 strain of E. coli. This was isolated from the faeces of a single convalescent diphtheria patient in the early 20th century. To reiterate, this single strain is our basis for much of what we know about bacteria in general. E. coli does, however, have a much wider genetic and phenotypic range than this. The primary habitat for E. coli is as commensal organisms in the intestines of vertebrates. This means they live with us, without causing us any problems, and do the same for birds, pigs, and lots of other animals. They can also live in their secondary habitat: fresh water and sediments. This makes sense, as the contents of one's intestine tend to end up on the ground if one is your average vertebrate. While these might seem at first glance to be two well-defined habitats, there is actually great diversity in the conditions in the intestines of different species. Indeed, even in humans there are different "gut phenotypes" which support the growth of different bacteria in different individuals. Likewise, there are vast differences in conditions and available resources in different soil and water environments around the world. Studies conducted by Professor Denamur show that to go along with all of this ecological diversity, the species E. coli also has massive genetic diversity. Depending on the strain, each one can have between 4,300 and 5,300 genes, and only just under 2,000 are conserved across all the strains. This means that less than half of a strain's genes are shared across all strains of the same species of bacteria, and yet we're extrapolating what we know about one strain to many other species.
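The gene-count comparison can be made concrete with a toy "core genome" calculation using Python sets. The strain labels are real strain names, but the gene lists here are invented purely for illustration:

```python
# Toy core-genome calculation: the "core" is the set of genes present in
# every strain; the "pan-genome" is everything seen in any strain.
# Gene names below are invented for illustration.
strains = {
    "K-12":    {"geneA", "geneB", "geneC", "geneD"},
    "O157:H7": {"geneA", "geneB", "geneE", "geneF"},
    "wild-1":  {"geneA", "geneB", "geneC", "geneG"},
}

core = set.intersection(*strains.values())
pan = set.union(*strains.values())
print(sorted(core))                    # genes conserved across all strains
print(round(len(core) / len(pan), 2))  # core fraction of the pan-genome
```

In the real data, the same arithmetic gives just under 2,000 core genes against strain totals of 4,300 to 5,300, which is the sense in which less than half of the genes are shared.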

As might be expected, Denamur’s lab also found that the different genotypes or genetic makeups correlated with different niches. For example, almost half of people sampled from the US are carriers of a specific strain whereas in a jungle in Africa, less than two percent of people carry the same strain. There are also different profiles for birds than for humans, and different levels of pathogenicity in the different strains: some kill mice, some don’t. This is all, of course, a product of the natural selection process.

So why is this a skeptical subject? Biological model organisms are one of the most valuable tools we have for making each individual study contribute to a larger field, rather than having each research group work on a different organism in isolation. It's a good idea, however, to do studies like those of Professor Denamur to effectively take a skeptical look at the applicability of biological models by getting a handle on what you don't know. Having a better idea of the diversity of the group that you're using one member to represent allows us to more rationally draw conclusions about the group as a whole. Positive knowledge is great, but sometimes it can be just as powerful to estimate and accept the scope of what we don't yet know.
