Friday, April 22, 2011

Membrane madness

The problem
ONE IMPORTANT STEP in water treatment is filtration. Nobody wants little gritty pieces of dregs or oily bits of gunk in their drinking water. In water treatment plants, river water is passed through membranes that block oversized contaminants from going any farther. Making membranes with large surface areas, but with small enough pores (micro- or nanofiltration), is possible by casting polymers into a film. These films are either hydrophilic (water-loving) or hydrophobic (water-hating).
It turns out that most contaminants are oily, hydrophobic gook, and since “the enemy of my enemy is my friend” the contaminants are very likely to bury themselves in the hydrophobic membrane to hide from the water. They can’t get through at first, but eventually contamination degrades the membrane’s performance.
On the other hand, hydrophilic membranes have their own set of problems. The contaminants don’t like the membrane, don’t bury themselves in it, and so degradation is slower. But hydrophilic membranes tend to be significantly weaker than hydrophobic ones, and so will often break during water treatment.

The researcher
Takeshi Matsuura is a chemical engineer at the University of Ottawa who develops modified membranes that can improve the distillation and filtration processes. In particular, he is interested in modifying surfaces using large macromolecules that can be attached to membranes.

The project
Filtration science would really benefit from filters that are strong, and that do not rapidly degrade. Hydrophobic and hydrophilic membranes each have their drawbacks, but by combining them, Matsuura hopes that he can get the best of both worlds.

The key
While other scientists cast a strong hydrophobic membrane, and later modify it by grafting hydrophilic polymers on top to make a protective coating, Matsuura thinks this is too slow (and costly). He mixes the hydrophilic and hydrophobic polymers together in solution and then casts them. As the water evaporates, the polymers naturally separate. Matsuura is left with a single membrane with a strong bottom layer and protective coating on top. In just one step, he gets a surface modified film that has the strength of a hydrophobic filter, but degrades slowly like a hydrophilic filter. That really is the best of both worlds.

Sunday, April 17, 2011

Busy bees


The problem
THERE’S AN OLD urban myth that scientists don’t know how bees fly, that their wings can’t beat fast enough to keep the bees in the air. In reality, bees beat their wings 200–300 times a second, courtesy of the most efficient metabolic rate in nature. This ultra-efficient energy use makes them the ideal creature for studying the workhorse of biological systems: the metabolism.
This system breaks down large energy-storing molecules via a series of chemical reactions, each assisted by enzymes. This creates ATP, the energy-carrying molecule that powers all the other functions in the body. The general pathways of this process are common to all higher organisms, so what’s true for bees is true for the animal kingdom.

The researcher
University of Ottawa professor Charles Darveau studies metabolism from a physiological and evolutionary perspective, using bees as a model organism. By comparing metabolic differences across different species of bees and weighing them against physiological differences, he can identify which changes in the series of reactions that make up bees’ metabolisms are important.

The project
Most people are familiar with the honeybee and bumblebee, but there are over 300 species of bees in Ontario alone. Darveau has a wide array of species to draw comparisons from. In addition to this cross-species approach, he can also look at variations within a species to examine these changes. The goal is to develop a complete characterization of the metabolic process: which enzymes make key changes, what steps bottleneck the metabolic rate, and what kind of system is most favoured evolutionarily.

The key
Darveau’s research is on the fundamentals of physiological change, but it has a number of immediate consequences. Because bees burn energy so quickly and efficiently, their wing muscles can heat up to 40°C. This allows them to flourish in environments where many other pollinators would be unable to survive, making bees an important part of northern ecosystems. Darveau’s fundamental characterization also allows other researchers to determine which species have metabolisms suited to adapting to different conditions, vital in determining the impact of climate change on fauna.

Monday, March 28, 2011

Experimenting with evolution

The problem
LOOK AROUND YOU. The world is brimming with the diversity of life. The great assortment of species is so much a part of our world that we take it for granted. It’s easy to say that diversity results from the theory of evolution and be done with it. But why is there such a wide gamut of life and how does diversification actually unfold? The question isn’t ‘Does evolution happen?’ but rather ‘How does evolution happen?’
When we look back in time we see that evolution has been punctuated by bursts of spectacularly rapid diversification during which many new species suddenly appeared. This process (called adaptive radiation) is very fast compared to the usually steady march of evolution, but it’s still too slow for scientific study.

The researcher
Rees Kassen is the University of Ottawa’s Research Chair in Experimental Evolution. When it comes down to it, Kassen wants to know the answer to a straightforward question: Why are there so many different kinds of living things in the world?

The project
To study how biodiversity arises, Kassen needs to watch evolution take place in his laboratory. He can do this by studying microbes. Since microbes live for only a short time, Kassen can observe changes that occur over generations in only a matter of days. This makes microbes an ideal model for studying adaptive evolution.

The key
When Kassen places colonies of microbes in a beaker of nutrient-rich broth, the colonies choose to live at the centre where there’s the most food. Early on the colony is smooth and round. After a while, resources become scarcer and competition becomes more fierce. Some of the colonies realize that if they stop fighting for control of the centre and move to the fringes they will have an ecological niche all to themselves. And so some colonies fall to the bottom of the beaker where they evolve into brush-shaped colonies. Others rise to the top where they change into very wrinkly colonies.
The new ecosystem offers the microbes opportunities to specialize and to a certain extent determines the form of the diversity. On the other hand, it is competition for resources that drives the specialization. Kassen suspects that these two factors together cause adaptive radiation to occur quickly, and help explain why diversification happens in bursts.

Saturday, March 12, 2011

Blowing shit up, science style

The problem
RECENT IMPROVEMENTS IN technology have allowed scientists to accelerate electrons in ways that create high-energy, extremely bright, and short laser pulses. Before the invention of these lasers, scientists could not study how high-intensity ultraviolet and X-ray light interacts with matter. Now that such lasers exist, everybody’s dying to know what happens when you blast stuff with short, high-intensity, high-energy laser beams.
The obvious answer is that you blow shit up, and that’s cool and all, but the potential applications of these beams are much greater than that. High-intensity ultraviolet and X-ray lasers might be able to image materials that are currently challenging to study. But before scientists can use these lasers, they have to understand this completely unexplored area of light-matter interaction.

The researcher
Lora Ramunno studies computational photonics at the University of Ottawa. Using her parallel supercomputer (equivalent to about 600 desktops), Ramunno studies nonlinear optical imaging and the interaction between matter and intense laser beams.

The project
Ramunno decided to look at how tiny clusters of matter interact with a high-intensity laser pulse by simulating each and every one of the atoms. When atoms are hit by a photon of light there is some probability that they will absorb the photon and eject one of their electrons. This leaves the atom as a positively charged ion. At every step of her simulation, Ramunno’s computer program must stop and evaluate the quantum probabilities that give these rates before it can move on to the next step.
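The flavour of such a step-by-step simulation can be caricatured with a toy Monte Carlo model. This is a deliberately crude illustration, not Ramunno's actual code: the constant ionization probability below stands in for the quantum rates her program must evaluate at every step.

```python
import random

def simulate_ionization(n_atoms=1000, rate=0.05, dt=1.0, steps=100, seed=1):
    """Toy Monte Carlo ionization: each remaining neutral atom has a
    probability rate*dt of ejecting an electron per timestep.  In a real
    simulation this probability would come from quantum calculations
    re-evaluated at every step; here it is an arbitrary constant."""
    random.seed(seed)
    neutral = n_atoms
    ion_history = []
    for _ in range(steps):
        ejected = sum(1 for _ in range(neutral) if random.random() < rate * dt)
        neutral -= ejected
        ion_history.append(n_atoms - neutral)  # cumulative number of ions
    return ion_history

ions = simulate_ionization()
# the ion count only grows, approaching (but never exceeding) n_atoms
```

Tracking every atom this way is exactly why the real calculation needs a parallel supercomputer: each step requires fresh probabilities for thousands of atoms.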

The key
Before the laser blows up the cluster of atoms, electrons escape from their atomic orbitals and the cluster becomes a plasma. The first few electrons that are ejected simply fly away and leave behind a charged cluster of ions. However, the electrons emitted later find themselves in this charged environment that they can’t escape from.
These electrons are free to zip around, but can’t leave the cluster, and from time to time they collide with un-ionized atoms. They usually don’t have enough energy to free an orbiting electron from the atom they collided with, but they can excite one of the atom’s electrons up to a higher energy. Ramunno found that if a second free electron collides with the same atom that had been energized by an earlier collision, it has a better chance of releasing the orbiting electron. When pairs of free electrons work together like this, the cluster charges more quickly than the laser could manage on its own, and so the cluster explodes in a shorter period of time. Ramunno calls this process Augmented Collision Ionization.

Monday, March 7, 2011

Beating bromine

The problem
THE QUANTUM WORLD works quite contrary to our own concrete and everyday existence. When we are first taught about atoms, we are shown a solar system-like model with electrons orbiting the nucleus like planets orbit the sun. But physicists have known for nearly a hundred years that this picture is too simple.
Electrons are both particles and waves at the same time. So electrons shouldn’t be thought of just as orbiting planets, but also as vibrating guitar strings. These so-called wavefunctions can be experimentally probed and scientists understand them very well. But for more complicated molecules—combinations of more than just one atom—it becomes very difficult to directly see that theory and reality are the same.

The researcher
On top of being a professor in the physics department at the University of Ottawa, Paul Corkum heads the Attosecond Science Laboratory at the National Research Council. He is renowned for using a laser pulse to accelerate an electron out of its atom, turn the electron around, and drive it back into its orbital. When the electron recollides with the atom, short bursts of light are given off that tell Corkum about the environment in which the electron settles.
Using very short laser pulses, Corkum was able to take a high-definition snapshot of the quantum cloud that defines where the electrons are around the atom.

The project
Taking high-resolution pictures of quantum orbitals is one thing, but filming a movie of the wavefunctions during a chemical reaction is another altogether. And yet, this is exactly the challenge Corkum set for his lab. Using the same technology he invented for imaging orbitals, Corkum wanted to watch a single molecule of bromine dissociate into two separate bromine atoms.

The key
Corkum blasted bromine gas with blue light. Blue is exactly the right colour to excite bromine molecules. Immediately after the blue light excites the bromine, the short laser pulse that knocks an electron out and drives it back in is shot at the gas.
Corkum saw that the blue light had not excited all the bromine molecules. This turned out to be an advantage since the resulting bursts of light from the recollision of the excited and the non-excited molecules mixed into beats.
The non-excited bromine acted exactly like a tuning fork: Corkum could use the beating between the bursts of light from molecules of bromine and from excited, separate atoms to see the difference between the two.
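The tuning-fork analogy rests on the mathematics of beats: adding two waves of nearby frequencies gives a fast oscillation modulated by a slow envelope at the difference frequency. A minimal sketch, with made-up acoustic frequencies standing in for the bursts of light:

```python
import math

def beat_signal(f1, f2, t):
    """Sum of two equal-amplitude waves at nearby frequencies f1 and f2."""
    return math.cos(2 * math.pi * f1 * t) + math.cos(2 * math.pi * f2 * t)

# Trig identity: cos A + cos B = 2 cos((A-B)/2) cos((A+B)/2).  The sum is a
# fast carrier at the average frequency, modulated by a slow envelope at
# half the difference frequency -- the beat.
f1, f2 = 440.0, 442.0  # hypothetical frequencies (two slightly detuned forks)
t = 0.137
lhs = beat_signal(f1, f2, t)
rhs = 2 * math.cos(math.pi * (f1 - f2) * t) * math.cos(math.pi * (f1 + f2) * t)
```

The slow beat envelope is easy to measure even when the individual oscillations are far too fast to follow, which is exactly what makes the reference signal from the non-excited molecules so useful.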

Monday, February 28, 2011

Mighty mice meet their match


The problem
IT’S AN EVOLUTIONARY arms race out there. Viruses that infect organisms evolve to evade the immune systems of their hosts. Every time that happens, host animals like us must create strategies to battle the infections and diseases they cause.
For example, retroviruses are a family of viruses whose genetic material is RNA rather than the DNA usually called the fundamental building block of life. Retroviruses produce DNA from their RNA and insert it into a host’s genome, changing the host forever. From then on, the virus replicates along with the host cell’s DNA.
Our immune system protects us against most retroviruses. Only the human immunodeficiency virus (HIV) and the human T-lymphotropic virus (HTLV) have been shown to cause diseases in humans. Both have evolved ways to get around our immune defences.

The researcher
Marc-André Langlois does his research for the Faculty of Medicine at Roger Guindon Hall on the General Hospital Campus. He studies how retroviruses replicate and infect cells—specifically how cells are able to protect themselves against retroviruses.

The project
One of the best armaments our cells have is a family of proteins called APOBEC3. It’s still a mystery how they do it, but APOBEC3 proteins can completely deactivate all retroviral invaders by mutating the attacking DNA before it can be inserted into the host’s genome. The exceptions are HIV and HTLV. Those two have out-evolved our protein parapets.
While primates have seven APOBEC3 proteins, mice have only one. This really interests Langlois. The mouse APOBEC3 protein is more general than any of our seven. However, even mice can be infected by retroviruses. One of their own retroviruses, a rough equivalent of HIV, is called AKV.

The key
Langlois was able to observe the arms race between AKV and the mouse APOBEC3 protein. Mice carrying a diverse mix of APOBEC3 variants were better at restricting AKV than mice with any single form: they can mutate (and so deactivate) more variations of the retrovirus. Langlois concludes that, since APOBEC3 stops infections by mutating the attackers, it pressures AKV to evolve. Because the mice’s own weapons against AKV cause it to mutate at an exaggerated rate, a broad set of deterrents provides the best defence against such a varied viral foe.

Minuscule monsters


The problem
GENOMICISTS HAVE A serious bias toward “model” organisms. Model organisms are species that have historically been well studied. Fruit flies, yeast, zebrafish, and mice are examples of model organisms. So are humans.
But these model organisms are each just some leaf on a random twig of the tree of life. Scientists are only beginning to realize the true extent of biodiversity and the staggering variety of differing genes and structures that make up genomes.

The researcher
Nicolas Corradi studies comparative genomics, which means that he sequences organisms’ genomes and then compares their genes and structure to those of other species. Corradi’s lab in the biology department at the University of Ottawa focuses on unicellular eukaryotes, single-celled micro-organisms that harbour curious genomes in their nuclei.

The project
Corradi’s favourite eukaryotes are microsporidia, parasitic unicellular fungi. These little monsters are highly adapted for infecting host cells. They are opportunistic bugs that steal everything they need to survive from their host. In fact, the only time microsporidia spend outside of a host cell is as spores, searching for new cells to invade.

The key
Corradi sequenced the genome of the microsporidian Encephalitozoon intestinalis. This particular microsporidian has the smallest nuclear genome of any known organism. It contains only about 1,800 genes, and is roughly 1,500 times smaller than the human genome and 20 per cent smaller than the next smallest genome ever sequenced.
Why do they have such small genomes? Because these microsporidia are marauding picaroons. They don’t do anything they don’t have to. They steal so much from their hosts that they have shed every gene but the bare minimum needed to function.
Evolutionarily speaking, it is easier to lose genes than to gain them, so these microsporidia are extremely adapted for their parasitic lifestyle. Their genome is so compact that Corradi believes it may represent the limit for a fully functional genome.

Friday, February 4, 2011

Digital drugs

The problem
IT IS POSSIBLE to cure certain cancers by surgically removing the tumors, but this requires that every single cancer cell is extracted. If any cancer cells remain, or if they spread to further, undetected sites, only remission has been achieved—not a complete cure.
Therefore, tracking surviving cancer cells is vitally important. Given the opportunity, they will grow into deadly new tumors. Unfortunately, treatments that can deal with remaining cells, like radiation or chemotherapy, indiscriminately kill cancer cells and healthy cells alike, making the treatments brutal on the body. Targeted therapies are at the forefront of cancer treatment.

The researcher
In the Department of Chemistry, professor Maxim Berezovski has a laboratory that is, in many ways, obsessed with selectivity. In one project, Berezovski studies separation techniques that can teach him about biochemical reaction rates. In another, he isolates biomarkers from cells. In yet another, he marks cells of one type without marking any of the others. The flags he uses to mark cells are called aptamers.

The project
Aptamers are short polymers of nucleic acids that bind to specifically targeted molecules. In many ways, researchers can use them as synthetic antibodies. Berezovski builds them from little chunks of DNA to target the surface of different cells, in particular cancer cells. The selectivity of aptamers makes them perfect for marking or attacking cancer cells while ignoring the healthy ones.

The key
Berezovski proposes that once a tumor is surgically removed, a cocktail of aptamers can be specifically designed for those individual tumor cells. Tumors that reappear are actually clones of the original tumor. This means that the personal recipe of aptamers for the original tumor could be kept as a digital record in case of recurrence. Since there is no need to keep the actual aptamers, Berezovski refers to this record as a digital drug.
The digital drug could be used to produce a personalized mixture of aptamers that will target clones of the original tumor. Doctors could then attach labels to the aptamers to track cancer cells that escaped surgical removal or to identify new tumors. The selectivity of aptamers could even direct the delivery of toxins or medicine specifically to the tumor, improving patients’ chances of survival.
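The point of a "digital drug" is that aptamers, unlike cell cultures or antibodies, are fully described by their sequences, so the record really can be plain data. A hypothetical sketch of such a record (the field names and sequences are invented placeholders, not real aptamers):

```python
# A "digital drug" is just data: an aptamer is fully specified by its
# nucleic-acid sequence, so a patient's personalized cocktail can be kept
# as a plain record and re-synthesized only if the tumor recurs.
# The sequences below are invented placeholders, not real aptamers.
digital_drug = {
    "patient": "anonymized-id",
    "source": "primary tumor, surgical resection",
    "aptamer_sequences": [
        "ACGGTTCAGCTAAGGT",  # hypothetical 16-mer
        "TTGACCGGATACCTGA",
    ],
}

def to_synthesis_order(record):
    """Recover the list of sequences to synthesize from the stored record."""
    return list(record["aptamer_sequences"])
```

Because nothing physical needs to be stored, the record costs essentially nothing to keep for decades in case of recurrence.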

Shut up, Science

The impact of censorship in science research on our democracy

FREEDOM OF EXPRESSION is the root of the twins of Enlightenment: Science and Democracy. The essence of science is the freedom to question any dogma, the freedom to discover truth. And that right to question lies at the core of democracy. Without the freedom to exchange information among all people, how can political debate in a democracy have any hope? It’s impossible to overstate the importance of the dissemination of information and the right to free enquiry to our political system.
And yet, Canadians sanction the censorship of science by their silence. Before we look in the mirror, let’s talk about our neighbours. Five years ago, James Hansen, the head of NASA’s institute on planetary science, accused NASA public relations staff of suppressing his public statements on the causes of climate change. It became clear that the political appointee who tried to silence Hansen’s findings was following orders to ensure that scientists’ communication with the press was in line with the official stance of the White House.
Hansen’s experience with scientific censorship wasn’t an isolated case. Nearly half of federal climate scientists in the U.S. claim that they have been pressured to remove the words “global warming” or “climate change” from their reports. They claim their work has been edited by bureaucrats, and many said they too have been prevented from talking to the media. More recently—and despite a new government that has promised to “restore science to its rightful place”—federal scientists studying the BP oil spill have been required to obtain government clearance before speaking to the press about their findings.
Bad Americans.
Oh wait—things may sound bad in the United States, but here in Canada the situation is even worse. In this country, politics always trump science.
Stephen Harper’s Conservative government is all about message control—both within the Conservative party and also for federal employees. In 2007, Environment Canada implemented a new federal communications policy that demanded that federal scientists obtain permission from the federal government prior to giving any interviews. The regulation is reminiscent of the Bush administration’s attitudes toward scientific debate, but is far more institutionalized and overarching. By ignoring or denying interview requests, the government steals the ability of the country’s news outlets to talk to experts and cover scientific findings.
Effectively, the Conservative Party has complete control over media coverage on climate science. Since the Harper government introduced the new rules, media coverage of climate science dropped by more than 80 per cent. It seems that when the conclusions of the Canadian government’s own climate research run counter to the Conservative government’s stance on the Kyoto Protocol, the oil sands, or any of the party’s policies toward the environment, potential debate is simply squelched.
After the loss of the mandatory long-form census, the Professional Institute of the Public Service of Canada, a union for federal scientists, launched a campaign against Canada’s “worrying trend away from evidence-based policy-making.” Canadian scientists have begun to fight back, but federally employed climate scientists remain gagged.
Environment Minister Jim Prentice’s campaign of soft censorship through reduced funding to independent research is also an attack that can’t be ignored. In theory, agencies like the Canadian Foundation for Climate and Atmospheric Science fund university-based research independently from political bodies, but last year Prentice threatened these investments. Without money to conduct research, scientists can’t provide the public with evidence informing debate.
The people of Canada pay taxes to fund scientific research, but the government of Canada doesn’t let us hear the results. Scientists get public funding to research questions that have serious ramifications in modern political debate. We must demand that they get the chance to report back to Canadians with accuracy—otherwise it amounts to a conscious effort on the part of the government to keep the Canadian voters uninformed about the consequences of federal policies.

(Almost) everything you ever needed to know about isotopes

By Tyler Shendruk

Atoms aren’t unchanging blocks of matter. Let me tell you, it’s nearly impossible to figure out where an electron is at any given moment. And the nucleus! Nuclei are constantly jumping from one energy state to another as protons and neutrons push and pull, sometimes absorbing energy and sometimes ejecting it. Every once in a while, they decay and become something else entirely. Nuclei are constantly hopping down the periodic table.
So it’s not surprising that the number of neutrons in a nucleus isn’t always the same as the number of protons. Oxygen isn’t just oxygen—it’s any atom with eight protons. The number of neutrons can be anything from 4 to 20! Atoms with the same number of protons but different numbers of neutrons are called isotopes of the same element. The vast majority of isotopes aren’t stable. Some decay radioactively and their radiation can be used for all kinds of great scientific and medical purposes.

Friday, January 28, 2011

Do academics dream of electric sheep?


The problem
WHO KNOWS WHAT other people dream of? Unless you’re a character in Inception, your dreams are for you and you alone. That’s good for you if your dreams involve your best friend’s girlfriend, but bad for scientists who want to study dreams using the scientific method.
Since dreams are not directly observable, you have to collect thousands of dream reports and depend on the reliability of the subjects’ recollection in order to systematically study their content.
To make matters worse, each of the dream reports must be analyzed. To do this, an expert judge must evaluate the content and code the accounts into a set of rankings. Say you want to study emotional content: you have to go through each report and rank how positive or negative the emotions in a dream were.
Not only can the subjects distort the research by failing to perfectly recall the dream, but human bias during coding is virtually unavoidable.

The researcher
University of Ottawa’s Joseph De Koninck studies what our minds are busy doing while we sleep. He studies the (usually more negative than positive) emotions of dreaming, and is interested in how these emotions develop throughout the dreams. As a dream psychologist, De Koninck must continually work with human error introduced during the coding process.

The project
What if researchers could eliminate the need for a human judge altogether? Computers can be taught to identify the level of emotion in a written text. This sort of artificial intelligence uses algorithms that can be trained on databases of reports and their corresponding rankings by human judges. The computer model uses individual words and the recurrence of words throughout the text to shift rankings and take into account words like “not” that flip the meaning.
Most importantly for De Koninck’s research, the computer algorithm can follow the evolution of rankings as dreams progress. By quickly ranking many accounts, it can give statistical information on the evolution of dreamers’ emotions.

The key
The computer program agreed with the human judge 65 per cent of the time, and its rankings were hardly ever far from the human judge’s. That’s quite good considering that human judges only agree with each other 60–80 per cent of the time. With such good agreement, these electronic judges could be used to quickly mine the huge number of dream accounts available. De Koninck wouldn’t have to rank each one individually or worry about human bias—and that sounds like a dream come true for scientists.
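The agreement figures here are simple percent agreement: the fraction of reports on which two judges, human or machine, assign the same ranking. A minimal sketch with invented rankings:

```python
def percent_agreement(ratings_a, ratings_b):
    """Percentage of reports on which two judges give the same ranking."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("judges must rate the same set of reports")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 100.0 * matches / len(ratings_a)

# Invented rankings for ten dream reports (-1 negative, 0 neutral, +1 positive)
human   = [-1, -1, 0, 1, -1, 0, 0, 1, -1, -1]
machine = [-1, -1, 0, 1, -1, 1, 0, 1, -1, 0]
percent_agreement(human, machine)  # → 80.0
```

By this measure, a machine judge that agrees with a human 65 per cent of the time is already inside the 60–80 per cent band within which humans agree with each other.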

Sunday, January 23, 2011

Energy enthusiasts

The problem
THE TRANSPORT AND storage of energy is one of the main challenges of life. Creatures hunt and consume each other, stealing nutrients. Life constantly juggles energy: lifeforms evolve in order to change energy from one form to another, and store it away in their bodies. In particular, carbohydrates and fats are Mother Nature’s biological batteries.
Modern society has the same set of problems. We fight over resources; we mine fuels, like coal and petroleum; we extract energy from dams and wind farms. Then we store energy on power grids, in batteries or in the fuel tanks of our cars until we need it.
But humanity’s current sources of energy can’t sustain our needs. We need to access large amounts of energy that have been efficiently stored.

The researchers
André Tremblay and Marc Dubé are a pair of professors at the University of Ottawa whose research interests overlap when it comes to biofuels.

The project
Biodiesel is made from fatty acids produced from vegetable oils, animal fats, algae, or even waste grease. Amazingly, biodiesel can be used in current diesel engines without any modifications.
But the production of biodiesel is a remarkably difficult venture. What’s needed is a simple, single-step process that can continuously produce high-purity fuel without leaving residual gunk in the final product.
Tremblay and Dubé have been working on a process to do just that.

The key
The reaction that turns waste grease into biodiesel is called transesterification. This reaction occurs at the surface of oil droplets mixed in alcohol. The transformation of oil into biodiesel is fast at first, when there’s very little fuel in the alcohol, but becomes less and less efficient as the alcohol becomes saturated with biodiesel.
To counter this, Tremblay and Dubé purify the results as the reaction occurs rather than after it. The oil flows through a reactor pipe. The pipe is formed by a ceramic membrane with tiny pores too small for the droplets to escape through. The biodiesel, on the other hand, can permeate the reactor membrane easily.
What’s left? Lots of oil droplets in the reactor that are continually undergoing efficient transesterification on one side of the membrane and high purity biodiesel on the other.
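The logic of the membrane reactor can be caricatured in a few lines: if the conversion rate falls as the alcohol saturates with product, then continuously pulling product through the membrane keeps the rate high. This toy model uses arbitrary rate constants and assumes an unlimited supply of oil; it illustrates the principle, not Tremblay and Dubé's actual chemistry.

```python
def batch_vs_membrane(steps=200, k=0.1, capacity=1.0, removal=0.9):
    """Toy transesterification model.  The reaction rate falls linearly as
    the alcohol saturates with dissolved biodiesel.  The membrane reactor
    removes a fraction of the dissolved product every step, keeping the
    rate high.  All constants are arbitrary; oil is assumed unlimited."""
    produced_batch = dissolved_batch = 0.0
    produced_membrane = dissolved_membrane = 0.0
    for _ in range(steps):
        rate_b = k * max(0.0, 1.0 - dissolved_batch / capacity)
        rate_m = k * max(0.0, 1.0 - dissolved_membrane / capacity)
        produced_batch += rate_b
        dissolved_batch += rate_b  # product accumulates and stalls the reaction
        produced_membrane += rate_m
        dissolved_membrane = (dissolved_membrane + rate_m) * (1.0 - removal)
    return produced_batch, produced_membrane

batch, membrane = batch_vs_membrane()
# continuous removal yields far more biodiesel than letting the alcohol saturate
```

In the toy model the batch reactor's output plateaus as it saturates, while the membrane reactor keeps producing at nearly its initial rate for as long as oil flows in.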

Tanning turtles


The problem
NORTHERN MAP TURTLES (that’s Graptemys geographica for those of you who like Latin) are found in northern states and southern Ontario and Quebec. They hibernate through the coldest parts of the year in communal groups on the floor of lakes and rivers. They don’t come up to breathe for the entire hibernation. Since they spend a lot of time in the sun during the summer, Map Turtles like areas that have fallen trees or other objects to bask on near large bodies of water. Basking sets their body temperature, but the more important question is just how energetically vital is sun-basking to these northern turtles?

The researcher
University of Ottawa professor Gabriel Blouin-Demers studies the physiological ecology of reptiles. He integrates laboratory experiments with field observations to better understand how phenotypes or biological traits—especially behavioural—are set by reptiles’ physiologies.
Blouin-Demers hopes that the research coming out of his laboratory can contribute to reptile conservation. Reptiles are in fact the most threatened vertebrates in Canada.

The project
The northern Map Turtles that Blouin-Demers studied were from Lake Opinicon (100 km south of Ottawa). He implanted thermometers into the abdomen of juvenile turtles to continuously monitor their body temperature for two years. Using their body temperature, Blouin-Demers can calculate the turtles’ metabolic rates to estimate how important thermoregulation is to the energy available for growth and reproduction.
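A standard way physiologists convert body temperature into metabolic rate is Q10 scaling, in which the rate changes by a fixed factor for every 10°C. A sketch under assumed values; the Q10 of 2.5 and the reference rate are hypothetical illustrations, not Blouin-Demers' measured numbers:

```python
def metabolic_rate(rate_at_ref, body_temp_c, ref_temp_c=20.0, q10=2.5):
    """Q10 temperature scaling: metabolic rate changes by a factor q10 for
    every 10 C change in body temperature.  The q10 value and reference
    rate here are hypothetical, not measured values."""
    return rate_at_ref * q10 ** ((body_temp_c - ref_temp_c) / 10.0)

basking = metabolic_rate(1.0, 28.0)  # warm turtle that basks in the sun
cool = metabolic_rate(1.0, 20.0)     # turtle that never leaves the water
# a warmer body runs a substantially faster metabolism
```

With a relation like this, a two-year record of abdominal temperatures can be converted, hour by hour, into an estimate of the energy budget.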

The key
Blouin-Demers determined that basking has a huge impact on the energy budgets of northern Map Turtles. The turtles spend three-quarters of their day basking in the sun.
More importantly, he found that if northern Map Turtles don’t bask, their metabolic rate slows by as much as a third. This amounts to a huge loss in available energy for growth, reproduction, and everyday turtlely business. Despite the clear importance of basking, Blouin-Demers discovered that the turtles bask a little less than the theoretically expected optimal amount. Blouin-Demers speculates that this is because basking is a mutually exclusive behaviour: Turtles can’t multi-task while basking. They bask on land but do all their other important activities (like foraging and mating) in the water, so they must compromise.

Serious about solar

The problem
OIL IS EVIL. Humanity go green now. Solar panels suck. Apocalypse therefore inevitable.

The researcher
Karin Hinzer is the Canada Research Chair in Photonic Nanostructures and Integrated Devices. In 2007, Hinzer founded SunLab at the University of Ottawa, and since then has collaborated with many industrial partners. Just this year, a collaborative effort earned SunLab the 2010 Canadian Innovation Award.

The project
One such joint project is Advancing Photovoltaics for Economical Concentrator Systems (APECS), which demonstrates innovative technology in a practical setting. In January, Hinzer will install experimental solar panels on the roof of the Sports Complex parkade here at the University of Ottawa and at a sister site in northern California.

The key
Hinzer will use efficient gallium arsenide (GaAs)-based multi-junction solar cells in the APECS project. Traditional silicon solar cells only absorb a small range of photons efficiently, while multi-junction cells absorb efficiently over a broader spectrum. However, these solar cells are not cheap. APECS seeks to bring the cost down in a couple of ways.
First, GaAs cells are usually grown on expensive germanium wafers, but Hinzer is testing GaAs cells grown on much cheaper silicon wafers. Second, Hinzer is reducing the cost by getting more light to a smaller area. The way Hinzer does this is analogous to a kid using a magnifying glass to turn a normal sunbeam into a highly focused death ray for burning ants. But instead of using normal lenses, Hinzer uses waveguides that have been tested in SunLab under an artificial sun. Since the waveguides are much lighter than traditional lens systems, the system won’t need the heavy-duty foundations that other large solar installations require and can therefore be mounted on rooftops.
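The economics of concentration can be sketched with back-of-envelope arithmetic: focusing the light collected over a whole aperture onto a cell a few hundred times smaller trades expensive cell area for cheap optics. All prices below are hypothetical illustrations, not SunLab figures.

```python
# Back-of-envelope concentrator economics. All prices are hypothetical.

def system_cost(aperture_m2, concentration, cell_cost_per_m2, optics_cost_per_m2):
    """Cost of one module: a small cell plus optics spanning the full
    light-collecting aperture."""
    cell_area = aperture_m2 / concentration
    return cell_area * cell_cost_per_m2 + aperture_m2 * optics_cost_per_m2

aperture = 1.0          # m^2 of collected sunlight
cell_cost = 50_000.0    # $/m^2 for multi-junction cells (assumed)
optics_cost = 200.0     # $/m^2 for lightweight waveguide optics (assumed)

flat = system_cost(aperture, 1, cell_cost, 0.0)            # bare cell, no optics
conc = system_cost(aperture, 500, cell_cost, optics_cost)  # 500x concentration
print(f"bare cell: ${flat:,.0f}  vs  500x concentrator: ${conc:,.0f}")
```

At an assumed 500-fold concentration, the expensive cell covers only 1/500th of the aperture, so the cheap optics dominate the cost.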
But having an efficient solar cell is only half the battle. If only a little light shines on the panels, they won’t produce much electricity, no matter how efficient they are. Solar panels work best when the sun’s rays strike perpendicular to their surface, so the modules will automatically track the sun’s motion to maximize output throughout the day. To compare performance under different conditions, APECS will have one station here in Ontario and a second in California. Each panel will have an associated weather station, which Hinzer will use to relate any differences in efficiency to differences in latitude and weather.
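Tracking the sun comes down to standard solar geometry: the sun’s elevation follows from the day of year (via the declination) and the time of day (via the hour angle). A minimal sketch, with approximate site latitudes:

```python
import math

def solar_elevation(latitude_deg, day_of_year, solar_hour):
    """Sun's elevation angle (degrees) from standard solar geometry:
    declination plus hour angle give the elevation."""
    decl = 23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))
    hour_angle = 15.0 * (solar_hour - 12.0)  # degrees; zero at solar noon
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    elev = math.asin(math.sin(lat) * math.sin(dec)
                     + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(elev)

# Solar noon on the summer solstice (day 172), approximate latitudes:
for site, lat in [("Ottawa", 45.4), ("northern California", 40.0)]:
    print(f"{site}: noon elevation {solar_elevation(lat, 172, 12):.1f} deg")
```

A tracker tilts the module so that its surface normal follows this angle through the day.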

Setting the satellite cells


The problem
HAS IT EVER amazed you how quickly children seem to recover from injuries? Tumbles and falls are just part of everyday play. But that’s not what it’s like for adults—or grandparents, for that matter. Falling or breaking a bone can be a dangerous event, because their injuries do not heal as easily.

Why is that?
The foundation of the muscles’ repair system is a particular group of stem cells called satellite cells. Unlike most cells in the body, stem cells don’t have a unique type. Instead, they have the ability to transform into any specialized cell that is required by the body. This talent allows them to regenerate injured tissue by replenishing the old and damaged cells.
But satellite cells aren’t as industrious in adults as they are in children. In fact, their activity diminishes as we age. They’re still in the body, but if modern medicine wants to harness them for directed muscle repair, the stem cells will require stimulation.

The researcher
Dr. Michael Rudnicki is the director of the Regenerative Medicine Program and the Sprott Centre for Stem Cell Research at the Ottawa Hospital Research Institute. His laboratory researches the molecular mechanisms that control stem cells during tissue regeneration.

The project
One of Rudnicki’s special interests is the function of stem cells in adult skeletal muscle—the muscle attached to bones by fibrous tendons.
Satellite cells may be the foundation of the muscles’ repair system, but they certainly don’t work alone. Rudnicki’s work stresses that myogenesis, the formation of muscle tissue, requires coordination between many different cells.

The key
A key player in myogenesis is a group of cells called fibro/adipogenic progenitors (FAPs). Rudnicki found that in healthy muscle, FAPs are dormant, but, in the event of acute muscle damage, they rapidly multiply.
That’s because FAPs act as the distress beacon that signals the satellite cells. Rudnicki’s research shows that FAPs encourage the satellite cells to get active and to fuse with damaged muscle fibres or produce new fibres entirely. FAPs do this by establishing a specialized environment in the damaged muscle that supports the satellite cells. In children, FAPs stimulate satellite cells and then are free to leave once myogenesis is complete; in elderly muscle, however, FAPs become firmly engrafted at the damaged site and can no longer move on to the next injury.

Setting the centre of cells


The problem
IMAGINE YOU’RE AN anaerobic bacterium. You’ve swum around eating up nutrients, but you can hear that biological clock ticking. It’s time to have your very own bouncing baby bacterium. But how do you guarantee that you and your daughter turn out exactly the same? Of course, your DNA will be unchanged, but what about everything else? What molecular interactions ensure that your daughter is exactly the same size as you—that you divide symmetrically at the midpoint?

The researcher
Natalie Goto, an associate professor in the University of Ottawa’s Chemistry Department, sees proteins as the machinery of life. Proteins bind molecules together in very specific ways, and their interactions act as a clock, telling cells what phase of life they are in. Goto is interested in how the shapes of proteins mediate the interactions between them.

The project
In order to divide symmetrically, rod-shaped cells must construct a new wall at their exact midpoint. In bacteria, this process is controlled by the Min family of proteins.
The protein called MinC inhibits wall formation, but only when it is bound to its sister protein, MinD. MinD likes to moor on the cell wall. MinC and MinD have an affinity for each other: whenever MinC floats by a MinD, it cuddles up and forms a complex that stops the cell from growing a dividing wall.
But there’s one last member of the Min family: MinE. MinE continually pushes its sister proteins around. It shoulders its way between MinC and MinD, displacing MinC, and then forces MinD to dissociate from the wall, driving it from the middle toward the poles of the cell. With no MinC and MinD left at the centre to stop the formation of a new wall, the cell divides.

The key
MinE has a special binding site that it uses to break up the MinC and MinD complexes and push them from the centre. Goto found that MinE folds to keep this binding site wrapped inside itself. Only when MinE opens itself up can the site disunite the other pair of proteins. Goto suspects that by keeping the binding site inaccessible, MinE can specifically focus on chasing its sister proteins from the centre. By pushing MinC and MinD duos from the centre into the poles, MinE frees the cell to form a new wall and divide symmetrically.

Caging carbon


The problem
INDUSTRIAL NATIONS EMIT countless millions of tons of carbon dioxide (CO2) into the atmosphere every year. Coal combustion produces approximately a third of all that pollution and there is an immediate need to reduce emissions. One controversial idea is to bury the emissions deep in the ground before the CO2 can escape into the atmosphere and contribute to the greenhouse effect.
But you can’t just bury gas. You have to capture it first. Unfortunately, current methods of scrubbing CO2 out of a coal plant’s exhaust would require at least a quarter of all the energy produced by the power plant. It’s a prohibitively expensive procedure.

The researcher
Tom Woo is a researcher in the Department of Chemistry and the Centre for Catalysis Research and Innovation at the University of Ottawa. Woo specializes in molecular simulations and uses computer algorithms to model chemical systems at the molecular level. His simulations give fellow chemists insight into their experimental results and point them toward potential new designs for engineering materials.

The project
Compounds called metal-organic frameworks are crystals of metal ions linked together by organic molecules. They are special because they can form very porous structures. In fact, these nanoporous materials can selectively capture CO2 molecules in their pores and hold the greenhouse gas trapped there. The rest of the combustion exhaust would float by, leaving the CO2 behind, filtered out of the gas.
But there’s one problem: the energy binding the CO2 to the pore is a little too weak. The material currently captures water vapour better than CO2. If the interaction trapping the gas can be strengthened and the material made to resist binding water, then nanoporous materials could be a short-term solution for reducing carbon emissions.

The key
In order to design nanoporous material that better imprisons CO2, chemists must first understand the forces that hold the pollutant gas in the pore cavity. Woo’s simulations show that the forces responsible for keeping the CO2 captured are almost entirely made up of dispersion forces—a type of force that is weaker than most chemical bonds.
Woo believes that future materials can be designed to replace dispersion forces with stronger electrostatic forces. Using a stronger force ensures that the CO2 stays securely imprisoned while discouraging the seizure of water. Nanoporous materials engineered to use electrostatic interactions to selectively bind CO2 to their cavities would be an important step forward in carbon capture technology.
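The difference between the two kinds of forces is easy to see in their textbook distance dependence: London dispersion decays as 1/r^6, while a Coulomb interaction decays as 1/r, so electrostatic binding is both stronger and longer-ranged. The coefficients below are arbitrary illustrations, not values from Woo’s simulations.

```python
# Compare the distance dependence of dispersion vs. electrostatic attraction.
# Coefficients are arbitrary illustrations in reduced units.

def dispersion_energy(r_nm, c6=1.0):
    """London dispersion: E ~ -C6 / r^6."""
    return -c6 / r_nm ** 6

def electrostatic_energy(r_nm, k=1.0):
    """Coulomb interaction between partial charges: E ~ -k / r."""
    return -k / r_nm

# Doubling the separation weakens dispersion 64-fold, Coulomb only 2-fold:
for r in (0.3, 0.6):
    print(f"r = {r} nm: dispersion {dispersion_energy(r):9.2f}, "
          f"electrostatic {electrostatic_energy(r):6.2f} (arbitrary units)")
```

The much slower 1/r falloff is what makes an electrostatic trap hold on to its guest at working distances.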

Mending a broken heart


The problem
HEART DISEASE IS a blanket term for any illness that causes the cardiac muscles to lack circulation (coronary heart disease) or to weaken (cardiomyopathy). Traditional medicine can only help patients cope with a weakened heart. However, techniques in cell therapy may one day allow doctors to direct special cells to regenerate tissue and repair heart damage through stem cell transplantation.
Unfortunately, stem cells are hard to come by and their use in clinics is strictly regulated. To make matters worse, cells taken from patients with cardiovascular disease are often dysfunctional. There is a desperate need for alternative cell therapies for tissue regeneration.

The researcher
Erik Suuronen is the director of the Cardiovascular Tissue Engineering Lab at the University of Ottawa Heart Institute. Suuronen wants to use stem cells and tissue engineering to treat heart disease. He hopes that one day these cell therapies will allow patients to regenerate new muscle and blood vessels rather than live their lives with chronic disease.

The project
Therapeutic cells, called progenitor cells, already exist in the body. Rather than transplanting cells into the weakened muscle, Suuronen’s research aims to attract the body’s own progenitor cells to perform the repair and cause tissue regeneration. Normally, only a small number of these cells reach the damaged tissue, but if the target site could be encouraged to attract more of them, the progenitor cells would mobilize to repair and regenerate damaged tissue.

The key
Suuronen has developed a matrix of collagen (collagen is a common extracellular protein) and a complex sugar called sialyl LewisX. Sialyl LewisX instructs the progenitor cells to attract more therapeutic cells and regenerate the damaged tissue while the collagen acts as a “smart” scaffold that supports them during the repair.
Suuronen injected the enhanced matrix into the thigh muscles of rats with damaged blood vessels and dying muscles. The enhanced matrix recruited progenitor cells from the rats’ bone marrow into the bloodstream, leading them to the damaged site. The recruited cells then grew into new blood vessels and galvanized muscle regeneration. By successfully stimulating new muscle growth to replace lost tissue, this research suggests that heart damage could one day be repaired through cellular therapy.

The big game


The problem
MODERN SPORTING EVENTS have grown into megaprojects. Tournaments like the FIFA World Cup or Universiade are huge investment projects that host international teams, are watched worldwide, and require vast management administrations.
With such huge costs, and equally huge potential economic benefits, the organization of such games is taken very seriously. Planning is already well underway for the 2015 Pan American Games to be hosted by Toronto.
However, with so many people involved and with so much at stake, creating an efficient framework for communication amongst the network of coordinating bodies can be a daunting task.

The researcher
Milena Parent is an expert in sports administration at the University of Ottawa’s School of Human Kinetics. She specializes in strategic management and organization theory for large-scale sporting events.

The project
By chronicling and understanding the coordination network that existed for organizing the 2010 Vancouver Olympic Games, Parent can develop broad network theories for the management of large-scale sporting events that can then be used by future organizers.
The city of Vancouver began planning for the 2010 Olympic Games nine years before the opening ceremonies. A total of 97 separate federal, provincial, and municipal departments were involved in the planning and those were just the governmental bodies.
The coordination network of stakeholders included sponsors, organizational committees, community groups, governmental departments, the media, and delegations of athletes. Each stakeholder had its own interests, and each was needed for the sporting event to be a success.

The key
Traditional theory presents the organizational network as a wheel with the organizing committee as the hub and the stakeholders as spokes, but Parent found a strikingly different picture. She discovered that much of the control over the planning process lay with the local communities, the “people on the ground,” who consequently played a more pivotal role than officials had assumed.
In practice, there wasn’t one centralized hub, but rather groups that formed multiple hubs of organization. None of the hubs were well connected to the entire coordination network. Instead, each had strong ties to a handful of stakeholders. Stakeholders formed strong local contacts with each other, but these local networks were relatively independent with only weak links between them. According to Parent, organizers who bridged two or more of these local networks had some of the strongest positions in the planning process since they acted as the main lines of communications between the fractured groups.
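Parent’s bridging idea can be pictured with a toy network: an organizer whose contacts span several local clusters sits on the main line of communication between them. The stakeholder names and cluster labels below are invented for illustration.

```python
# Toy illustration of Parent's finding: actors bridging otherwise
# weakly-connected local networks hold strong positions.
# All names and cluster assignments are invented.

cluster_of = {
    "sponsor_A": "business", "sponsor_B": "business",
    "city_dept": "government", "province_dept": "government",
    "community_grp": "community", "athletes": "community",
    "liaison": "business",  # the bridging organizer
}

edges = [
    ("sponsor_A", "sponsor_B"), ("city_dept", "province_dept"),
    ("community_grp", "athletes"),
    ("liaison", "sponsor_A"), ("liaison", "city_dept"), ("liaison", "athletes"),
]

def clusters_reached(node):
    """Number of distinct local networks a node's direct contacts belong to."""
    neighbours = ({b for a, b in edges if a == node}
                  | {a for a, b in edges if b == node})
    return len({cluster_of[n] for n in neighbours})

for node in cluster_of:
    print(node, clusters_reached(node))
```

In this toy graph the invented “liaison” reaches all three clusters while everyone else stays within one or two, mirroring the strong position Parent observed for bridging organizers.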

Tsunami simulations


The problem
IN THE YEAR 1700, a megathrust earthquake (that’s science talk for scary-big-earthquake) occurred along the Cascadia fault in the Pacific Ocean. The fault runs along the coast from Vancouver Island down to northern California. The earthquake triggered a tsunami off the Pacific Coast that flooded inland as far as the mouth of the Fraser River and travelled all the way across the Pacific Ocean to strike the coast of Japan.

The researcher
Engineering professor Ioan Nistor is fascinated by such tsunamis. After the 2004 Southeast Asia earthquake, he and his collaborators were the first research team in the tsunami-affected areas of Thailand and Indonesia. While in the field, they inspected damage to buildings and structures. Back in his University of Ottawa lab, Nistor measures the force of surge impacts on models. He then compares what he saw in the field to laboratory and numerical models in order to gain a better understanding of the effects of tsunami bores—the fast moving walls of water that occur once tsunamis break near shore.

The project
Nistor ruminates over the various scenarios that could result from a modern-day earthquake along the Cascadia fault. By simulating earthquakes at various points along the fault and the propagation of the waves towards shore, he is able to predict the resulting tsunami’s height and speed as it crashes inland. Nistor uses these values to estimate the strength of disastrous forces to which coastal buildings would be subject.

The key
Even though the major Canadian cities on the West Coast are located on inland waterways, simulations show that they would not be spared from devastation in the event of another Cascadia tsunami. Although the surge slows as it moves through shallow water, Nistor still expects it to reach 25 metres in height.
Current building codes in Canada do not explicitly provide special design guidelines for structures located on tsunami-vulnerable shores. They do not account for the initial surge forces, the sweeping drag force, the increase in hydrostatic pressure, or the buoyancy force as the building floats away from its foundation. By properly quantifying these extreme loads on structures during inundation, new design guidelines for structures in tsunami-prone zones can be recommended and, in the event of a disaster, save countless lives.
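The loads in question have standard engineering forms: hydrostatic force from standing water, hydrodynamic drag from the moving bore, and buoyant uplift. A sketch with hypothetical wall dimensions and flow speed:

```python
# Standard fluid loads on a structure during inundation.
# Wall dimensions, flow speed, and drag coefficient are hypothetical.

RHO = 1025.0   # seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydrostatic_force(depth, width):
    """Resultant force of still water on a wall: 0.5 * rho * g * h^2 * b."""
    return 0.5 * RHO * G * depth ** 2 * width

def drag_force(speed, depth, width, cd=2.0):
    """Hydrodynamic drag of the bore: 0.5 * rho * Cd * A * v^2."""
    return 0.5 * RHO * cd * (depth * width) * speed ** 2

def buoyancy_force(submerged_volume):
    """Uplift on a flooded building: rho * g * V_submerged."""
    return RHO * G * submerged_volume

# A 10 m wide wall in a 3 m deep, 8 m/s bore (hypothetical values):
print(f"hydrostatic: {hydrostatic_force(3, 10) / 1e3:.0f} kN")
print(f"drag:        {drag_force(8, 3, 10) / 1e3:.0f} kN")
print(f"buoyancy (2 m^3 submerged): {buoyancy_force(2.0) / 1e3:.0f} kN")
```

Even with these modest assumed numbers, the drag of the moving bore dwarfs the still-water load, which is why codes that ignore surge forces underestimate the danger.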

Not your grandma’s network


The problem
THE WORLD IS more interconnected than ever before. Social networks, the global economy, the Internet, and even delivery routes can all seem like a jumbled mess. Nowadays, it is common to see complex nets of relationships everywhere we look.
The simplest network we can imagine is life as an employee on a production line: The neighbour on our left passes us some widget, we add our component, and we pass it on to the neighbour on our right. It isn’t a web at all; it’s just a chain.
Now imagine we work in a more complicated factory. Imagine we can get different widgets from multiple neighbours. In fact, even coworkers far from our workstation can toss us widgets. To make matters worse, the foreman lets us wander to and work at any part of the production line we want! What a disaster. We’d be doing a random walk on a random network while receiving random input to deal with.

The researcher
Vadim Kaimanovich, a professor in the Department of Mathematics and Statistics, creates mathematical methods that can predict the nature of complex networks. His goal is to understand when the chaotic evolution of random systems can lead to stable and predictable output.

The project
Kaimanovich uses the analogy of a production line to ask: if we start the production line at a slightly different workstation—one that is close but not exactly the same—will we get a similar widget or something completely different? If the widget doesn’t change, the production is stable. If it’s different, the production diverges.
Scientists have noted many systems that seem as complicated as our crazy production line, but seem to have stable output. However, there were no mathematically rigorous proofs for the existence of stable solutions.

The key
Using a mathematically precise measure to decipher which widgets are similar and which are different, Kaimanovich demonstrated that certain sorts of abstract “random production lines” must have groups of workstations that give stable solutions for the same kind of random input. Not surprisingly, this is the first proof of its kind.
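The flavour of the stability question can be pictured with a small random walk: however the walk starts, repeated random steps drive it to the same long-run distribution. The four-node network below is invented for illustration and is far simpler than the random structures Kaimanovich studies.

```python
# A random walk on a toy network: two different starting points
# converge to the same long-run (stationary) distribution.
# The network is invented for illustration.

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}  # workstations

def step(dist):
    """One step of the walk: mass spreads evenly over each node's neighbours."""
    new = {n: 0.0 for n in graph}
    for node, mass in dist.items():
        for nb in graph[node]:
            new[nb] += mass / len(graph[node])
    return new

def run(start, steps=500):
    """Start all probability mass at one node and let the walk mix."""
    dist = {n: 0.0 for n in graph}
    dist[start] = 1.0
    for _ in range(steps):
        dist = step(dist)
    return dist

a, b = run(0), run(3)
print({n: round(a[n], 3) for n in graph})
print({n: round(b[n], 3) for n in graph})
```

Both starting points end at the same distribution (each node’s share settles in proportion to its number of connections), a small-scale analogue of stable output from random input.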

Fishy neurons

The problem
PARKINSON’S DISEASE (PD) deteriorates a patient’s central nervous system and debilitates motor skills. Doctors don’t know the cause of 90 per cent of PD cases, but better understand the source of the other 10 per cent. Heredity and genetics are the culprits in this type, called early-onset PD.
Surprisingly, the genes associated with PD are found in all kinds of life forms, including mice, yeast, and zebrafish. These genes play an important role in the special cells that control body motion and make dopamine, an indispensable chemical needed to transmit signals between neurons—these cells are called dopaminergic neurons.

The researcher
Marc Ekker, a biology professor at the U of O, works in the Centre for Advanced Research in Environmental Genomics to better understand the genetics of PD. Ekker genetically alters zebrafish, whose genes are simpler than those of humans yet can be associated with the disease, in order to further study the causes of PD.

The project
Since zebrafish are transparent, Ekker is able to genetically alter their neurons to fluoresce, enabling him to watch the destruction and regeneration of the dopaminergic neurons in the fishes’ brains while they are alive. He can destroy individual neurons with a laser blast or with poison, or he can genetically block the gene altogether, making it inactive for the fish’s entire life—essentially giving the zebrafish PD.

The key
Ekker looks at the genetically altered neurons in the brain and studies how they affect the fishes’ motion. Fish larvae whose dopaminergic neurons are destroyed have very limited motor skills, and young fish without dopaminergic neurons will not respond with evasive motion when gently poked. Ekker’s zebrafish share the same symptoms as PD patients. Zebrafish, however, can regenerate the neurons. We can’t.
They can do this because of stem cells. Stem cells are different from common cells because they aren’t committed to becoming any one type such as a blood cell or a neuron. While humans have only a limited number of stem cells, zebrafish make stem cells throughout their entire life. The fish can draw on their bank of stem cells to replace the neurons.
Lucky fish.

Mercury munching microbes (om nom nom)

The problem
THE MAJORITY OF MERCURY in the atmosphere is generated by human industries like coal combustion and gold mining. The rest comes from natural sources such as volcanoes and forest fires. Either way, when mercury is dispersed into the atmosphere, it is carried poleward where it is oxidized and becomes heavier, falling into sensitive Arctic regions as a toxic contaminant.
Mercury binds to proteins and accumulates in organisms: mercury compounds from the environment enter the Arctic food web when they get soaked up by the tiny microbes that form the ecosystem’s foundation. Since there’s nowhere else for it to go, mercury is passed from prey to predator. Eventually, high levels of mercury accumulate at the top of the food chain—that’s us, friend.
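This prey-to-predator accumulation is the textbook idea of biomagnification: if each step up the food chain concentrates mercury by some factor, levels grow geometrically. The base concentration and per-step factor below are hypothetical.

```python
# Toy biomagnification: mercury concentrates at each trophic level.
# The base concentration and magnification factor are hypothetical.

def concentration_at_level(base_conc, magnification, level):
    """Concentration after `level` prey-to-predator steps."""
    return base_conc * magnification ** level

base = 0.001   # mg/kg in microbes (assumed)
factor = 5.0   # concentration multiplier per trophic step (assumed)

for level, organism in enumerate(["microbes", "zooplankton", "small fish",
                                  "large fish", "top predators"]):
    conc = concentration_at_level(base, factor, level)
    print(f"{organism:13s} {conc:.3f} mg/kg")
```

With an assumed five-fold factor, four steps up the chain multiply the starting concentration 625 times over.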

The researcher
Alexandre Poulain, a professor at the U of O, studies how microbes alter the mobility and the toxicity of metals and metalloids in the environment. He focuses on aquatic systems in polar regions and ventures out into the Arctic to bring samples home for analysis in his lab.

The project
Anaerobic microbes, bacteria that don’t use oxygen, alter the nature of the mercury. Some make the metal more toxic by turning ordinary mercury into very harmful methylmercury. Others, conversely, break methylmercury down into a gas and vent it out of the ecosystem. Poulain compares the rates of these two processes by analyzing the genetic messengers (ribonucleic acid, or RNA) that control whether microbes create toxic mercury or detoxify their surroundings.

The key
Upon sensing mercury in their environment, certain northern microbes activate genes naturally encoded in their DNA. Poulain can determine which of these genes are active in biomass samples from polar regions and can even tell exactly which genes are needed to defend against the toxic nature of mercury.
Poulain’s goal is to bridge global-scale environmental science and microscopic biology. The reduction of Arctic mercury by tiny microbes plays a major role in regulating the toxicity of the Far North and could possibly be used in integrated approaches to environmental management.

Tomorrow’s butterflies

The problem
WORLDWIDE SHIFTS IN land use and global climate change are transforming the environment at a concerning pace. Only recently have scientists become aware of just how significant the impact of our actions has been. Average global temperatures have risen sharply over the past few decades, and natural habitats have been lost to conversion into cultivated farmland.
Intuitively, it is clear that such intense environmental changes will have repercussions that increase extinction rates, but the world’s ecosystems are complicated, and predicting how species diversity responds to climate change is no easy matter. Improving conservation and recovering endangered species requires accurate predictions of future shifts in biodiversity.

The researcher
Jeremy Kerr’s lab, the Canadian Facility for Ecoinformatics Research, is located in the Biosciences complex on campus. There he researches changes in biodiversity across entire continents rather than in any one, local ecosystem. This means that he deals with enormous amounts of information, requiring him to be on the forefront of ecoinformatics, the science of information in ecology.

The project
In order to test whether he can accurately predict future changes in biodiversity over larger areas, Kerr pretended to go back in time. He used a macroecological computer model to predict gradients in butterfly diversity over the entire 20th century. By comparing the predicted richness in butterfly species to actual historical records of 139 species, Kerr was able to judge the predictive power of his model.

The key
Starting from the year 1900 and inputting historical data sets on climate, elevation, land cover, and human population density, Kerr was able to accurately simulate how butterfly diversity changed across Canada throughout the 20th century. In northerly areas, butterfly diversity increased while at lower latitudes it decreased. This observation suggests that macroecological theory can indeed forecast where species will be found well into the future.
The ability to predict how species diversity will respond to climate change could improve conservation planning in the 21st century.
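The hindcasting idea can be sketched with a toy model: fit species richness against a climate variable at historical sites, then use the fit to predict elsewhere. All numbers below are invented, and Kerr’s real model uses many predictors at once (climate, elevation, land cover, and population density).

```python
# Toy hindcast: fit butterfly richness against temperature on "historical"
# sites, then predict a warmer site. All numbers are invented.

temps    = [2.0, 4.0, 6.0, 8.0, 10.0]   # site mean temperatures (deg C)
richness = [12, 18, 25, 31, 38]         # butterfly species observed

# Ordinary least-squares line, computed by hand.
n = len(temps)
mean_t = sum(temps) / n
mean_r = sum(richness) / n
slope = (sum((t - mean_t) * (r - mean_r) for t, r in zip(temps, richness))
         / sum((t - mean_t) ** 2 for t in temps))
intercept = mean_r - slope * mean_t

def predict(temp_c):
    """Predicted species richness at a site with the given mean temperature."""
    return slope * temp_c + intercept

print(f"predicted richness at 12 C: {predict(12.0):.1f}")
```

In this invented data set richness climbs with temperature, echoing the finding that diversity rose in warming northerly areas; comparing such predictions against held-out historical records is what tests the model.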

Goldfish on Prozac

The problem
WHEN YOU THINK of pollution, what jumps to mind? Heavy metals, BP oil spill, carbon tax? What about the words antibiotics, the pill, nicotine, or Prozac? These so-called pharmaceutical pollutants are seeping out of our medicine cabinets and into our rivers and lakes.
Drugs are only partially metabolized in your body; the rest of them are flushed down the toilet. To make matters worse, traditional sewage treatment plants fail to cleanse the water of these chemicals, allowing them to flow right into rivers and lakes.
Last year Canadians filled 483 million prescriptions (that’s 14 prescriptions per person and doesn’t count the large amounts of antibiotics given to livestock).
So what happens when all the fish in the pond are on Prozac?

The researcher
Vance Trudeau is a neuroendocrinologist at the U of O and the Centre for Advanced Research in Environmental Genomics. He studies how hormones control brain function and how, in turn, the brain regulates sexual development.

The project
Fluoxetine, the drug sold as Prozac, can be found in the brain and liver tissues of wild fish and, just like in people, it increases fishes’ serotonin levels. To understand how the drug upsets sex hormone levels in wild fish populations, Trudeau studies normal goldfish, whose food intake, seasonal growth rates, and reproduction have been previously well studied.

The key
When Trudeau’s research group studied female goldfish injected with fluoxetine, they found that multiple genes in the brain were affected, causing a decrease in estrogen levels in the blood. Some of these genes are known to have an impact on the reproductive and social behaviour of fish. To make matters worse, fluoxetine affects the secretion of growth hormones, causing the fish to feed less and become underweight.
To simulate the levels of Prozac detected in the environment, another test was done in which fluoxetine was added directly to the tanks of male goldfish. Trudeau’s team then added potent female sex pheromones to the water. This should have stimulated the healthy, normal males to release their sperm and fertilize the eggs. However, the male goldfish that had been exposed to fluoxetine completely failed to release their sperm.
Poor goldfish.

Building biological barcodes

The problem
MEDICAL TESTS REQUIRED to diagnose diseases need to be performed at specialized centres, causing long wait times and high costs. In addition, current analytical tools are limited to looking at only a handful of the biomolecules that signal the onset of diseases such as cancer.

The researcher
Michel Godin, an assistant professor in the Department of Physics at the U of O, dreams of making disease testing as easy as scanning a barcode. Godin is part of the Interdisciplinary Nanophysics Centre labs where he mixes physics, chemistry, and biology to engineer hand-held microfluidic devices for the health sciences.

The project
Microfluidic devices are the computer chips of the chemistry world. Medical lab technicians search for biomolecules associated with disease—also called biomarkers—the way you would do math on an abacus: one by one. Godin wants to design a device that can take less than a drop of blood, purify it, and identify the presence of hundreds of biomarkers within seconds. That kind of speed would eliminate long wait times and would also allow better statistics for analysis. The device would be smaller than your cell phone and potentially cheap enough to be used in developing countries. Bigger isn’t always better—at least when you’re talking about microfluidic devices.

The key
But how would Godin’s device tell the hundreds of biomarkers apart? Some microfluidic devices integrate ultra-sensitive detectors that push biomarkers through tiny nanoscopic tunnels (or nanopores) capable of detecting single molecules as they pass. However, detecting molecules and telling them apart are two very different processes. While a nanopore might be able to detect biomarkers, it can’t distinguish between those that signal disease and perfectly normal biomolecules. To identify them, Godin wants to create a DNA scaffold—a long chain of single-stranded DNA that would attach specific biomarkers to unique spots along the DNA chain. By threading the DNA through the nanopore, Godin could read what biomarkers are present in the blood—exactly like scanning a barcode.

Truly unbelievable

Former on-campus researcher creates a media motion machine

THANE HEINS, SUPPOSED inventor of a perpetual motion machine, identifies with Thomas Edison, Nikola Tesla, Alexander Graham Bell, and the Wright brothers. Despite his lack of any university education, he compares himself to these heroes of science because, like each of them, he claims to have invented an unbelievable technology. But is there a difference?

Controversial Claim

Heins, whose company Potential Difference was recently asked to leave the University of Ottawa’s SITE laboratory it was occupying, claims that using his discovery “generators can now accelerate themselves... It’s a cancelling of the work-energy principle.”
The work-energy principle describes the conservation of energy for mechanical work: the work done is exactly equal to the change in energy. Any violation of this would call into question humanity’s entire understanding of the physical world—you can’t get something from nothing.
Heins claims “Our generator can create power from no power. What that means is [that] it’s not a perpetual motion machine, but it is more than 100 per cent efficient. There’s a huge difference.”

Severely Skeptical

Not everyone sees the difference. Brian Dunning, the host and producer of Skeptoid, a popular weekly pro-science, anti-pseudoscience podcast, says in an email to the Fulcrum that “Heins has built another in a very long line of variations on electric motors, claimed by the inventors to be ‘over unity’ or ‘free energy’ machines, where more energy is produced than is put in. Think of pouring a litre of water into a measuring cup, and expecting to get two litres out. That’s not the way the universe works. It would be nice, but it just isn’t so. The basic laws of thermodynamics state that over unity machines are impossible, and all known experimentation supports that.”
Dunning, who has never seen Heins’ machine, sees problems even with the fundamental concept behind Heins’ claim of over-unity efficiency.
MIT-educated electrical engineer Seanna Watson also sees problems with the details of Heins’ experiments. Watson and a group of engineers from Ottawa Skeptics visited Heins’ lab in 2008.
“From what I could tell at the time, he was taking measurements and he was, for example, measuring volt-amps instead of watts, not taking into account phase differentials, and he was doing some rather odd math,” explains Watson about her doubts regarding Heins’ invention.
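Watson’s point about volt-amps versus watts can be made concrete. In an AC circuit, multiplying RMS voltage by RMS current gives apparent power (volt-amps), which overstates real power (watts) whenever voltage and current are out of phase. A minimal sketch with hypothetical numbers:

```python
import math

# Hypothetical AC load where the current lags the voltage by 60 degrees.
v_rms = 120.0             # RMS voltage, volts
i_rms = 5.0               # RMS current, amperes
phase = math.radians(60)  # phase angle between voltage and current

apparent_power = v_rms * i_rms                 # volt-amps (VA)
real_power = apparent_power * math.cos(phase)  # watts (W)

print(f"{apparent_power:.0f} VA vs {real_power:.0f} W")  # 600 VA vs 300 W
```

Reading 600 VA here as if it were 600 W would make a machine appear twice as powerful as it really is, which is one way measurement error can masquerade as free energy.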
Watson made the results of the group’s investigation public through the Ottawa Skeptics website. She summarizes the skeptics’ disquietude, saying, “there seems to be people who do not have enough of a background to be able to look at what he is doing and see a problem with it ... It’s a concern that he’s trying to dupe people. And when I say ‘dupe’ I have to be a little bit careful because I don’t believe that he is deliberately trying to deceive anybody. I think he really does believe in what he is doing, but I think that he is very badly mistaken.”

Still Invested

Despite the skeptics’ objections, not everybody has always been so apprehensive. According to the Dean of Engineering, Claude Laguë, the University of Ottawa’s Faculty of Engineering opened its doors to Heins, at the request of the Ottawa Centre for Research and Innovation (OCRI), so that he could get Potential Difference on its feet. However, on March 1, after two years of providing Heins with lab space and access to the expertise of campus professors, the faculty asked him to vacate SITE, citing his claims of external funding and the lack of return from his lengthy residency.
“After two years, our assessment was that we had moved beyond what we consider the normal start-up period. The company had also indicated that they were expecting financing from external sources. Due to that change to the situation, we felt that it was no longer appropriate for the faculty to continue to provide resource to that company free of charge,” Laguë explains of the faculty’s decision.
Heins has claimed financial support from various individuals over the years. In a 2008 Ottawa Citizen article by Tim Shufelt, Heins claimed that a $15-million investment was offered by influential Oregon private investor Jacques Nichols. The Fulcrum contacted Nichols by email about his investment.
“I met Mr. Heins during the summer of 2008 and we discussed his company and its capital requirements. No offer to invest was made, and I heard nothing more,” says Nichols in response to Heins’ claim.
Currently, Heins is financed by a number of private investors, including Robert Clark, founder of VesCells, a company that treats heart disease with stem cell therapy, who optimistically expects to “be able to clearly see the returns,” and Kevin Thistle, president of Coppingwood Golf Club, who has already invested nearly $250,000 in capital.

Attaining Attention

Heins can attribute some of his investors’ attention to the notoriety given to him by the media. When energy and green technologies columnist Tyler Hamilton wrote about Heins, his article became the Toronto Star’s second most read online story of 2008.
“I think Heins used it to his advantage to try to get in the door because it gave him a bit of [a] profile… He benefited from that and he rode that exposure,” asserts Hamilton. Although Hamilton says his intention was never to create debate, the Star’s article lent Heins a level of credence and started its own chain reaction of perpetual media attention. Canadian Business wrote an article. Heins garnered mentions on Gizmodo, Slashdot, BoingBoing, Wired.com, and innumerable private blogs. The Internet was abuzz, and the Ottawa Citizen and the Toronto Star each devoted an article to all the attention he was getting.
Just this month, on the very heels of Heins’ exodus from campus, EV World published an article entitled “The Heins Effect,” in which tech editor Micheal Brace’s admitted purpose was to lend Heins credibility. Brace writes that “[Heins] asked me to write this article because he’s hoping to change the public perception of his discovery.”
Dr. Riadh Habash, the U of O engineering professor who opened his lab to Heins, is not interested in discussing supposed controversy.
“We worked with him and we couldn’t prove his claims and, in science, to prove your claim you should be able to demonstrate that experimentally. In addition, you might write that in terms of a paper reviewed by others ... When you do research in science you shouldn’t contact journalists.”
The role of journalism in scientific debate is an important one in modern society, and the degradation of that debate is a main concern of each of the skeptics approached by the Fulcrum.

Mixed Media

Robert Park knows all about public debate regarding scientific issues. Park, who spent 25 years in Washington representing the American Physical Society to politicians and the press, sees a critical problem with the media.
“Many people in the media who write science stories do not themselves have a real appreciation for the basic laws of science, so they are perfectly willing to violate the second law of thermodynamics. That doesn’t trouble them at all.”
Park says about five new perpetual motion machines are brought to his attention each year, and he finds that astounding.
“Five perpetual motion machines a year? And you know, every one of those is a drag on the economy, but, worse than that, it encourages people to believe in this kind of mythology.” Dunning agrees with Park.
“The media is not engaged in the charitable act of educating people; they are engaged in the business of drawing attention ... The problem is that the media is the main source of science information for most people, and viewers are offered little reason to suspect the information that’s reported might not be complete or correct. Such reporting erodes the already low level of public understanding of science, technology, and medicine.”
Béla Joós is not only the head of the Physics Department at the University of Ottawa, but also the editor of Physics in Canada, a monthly periodical published by the Canadian Association of Physicists. Physics in Canada reports on research findings, but also keeps physicists informed about important issues relevant to the scientific community.
“A newspaper’s true purpose is just announcing things, but their purpose is not in that sense critical analysis,” Joós says.
Joós does not necessarily see this as a fault, but does note the need for caution.
“Journalists do have a responsibility to not take as fact what is being proclaimed by one solitary voice.”
Joós points to the benefits of the peer review system in which fellow researchers in the same field are asked to evaluate scientific work before it can be published in reputable journals.
“Nobody can be a specialist in everything, so peer review is essential to make sure that the proposed new results have followed the scientific method of reproducibility, quality of data or error calculation, and spurious effects which may explain the data which are not being accounted for ... Peer review manages also to identify questionable steps which have been taken or questionable assumptions that are not based on reality.”

Sayonara Science

Heins has had more success with the media than with scientific journals. According to Heins, “People were more critical than they should have been,” and so he has chosen to focus on the mass media rather than the scientific community.
“My initial approach was the scientific approach. Have it evaluated, have it legitimized, go through the scientific route, but we hit a wall—we hit a wall that you couldn’t get over.”
And so, with no discernible support from the academics on campus, Heins continues his “letter writing campaign” to Maclean’s, National Geographic, the CBC, and whoever will listen—even the Fulcrum.