What’s the Pressure Inside an Exploding Whale?

I was equal parts grossed out and astonished (ok, maybe a little more grossed out than astonished) when I watched this video of a sperm whale exploding. Warning: this is a video of a sperm whale exploding. Obviously, it’s not going to be pretty.

And being a physics geek, the first question that popped into my head was, “I wonder how much pressure built up inside that whale for it to explode like that?”

First, what’s up with these exploding whales anyway? In an interview with Fusion, here’s how deep sea ecologist Andrew David Thaler explains this amazing and unsightly phenomenon.

A whale is a really nice, contained package with a big, big layer of blubber around it that’s designed to keep everything in and keep water out while it’s diving. So they actually make fairly good balloons [..]

And, like most mammals, when they die and they aren’t scavenged—or they’re too big to be effectively scavenged—their viscera begins to decompose, whatever contents were in their stomach. That produces methane and hydrogen sulphide and a couple other gases, which are going to begin expanding, especially if it’s sitting in the sun for a couple of weeks.

Eventually, they can explode.

The size of a male and female sperm whale, compared to a human. Image Credit: Kurzon / Wikimedia

So let’s make a rough, back-of-the-envelope calculation to estimate the pressure inside a bloated beached whale that’s about to explode. The idea is to get a ballpark figure that’s within an order of magnitude of the actual result.

Why, you may ask? Because SCIENCE. That’s why.

Here’s the game plan. I’m going to open up the above video in the handy physics video analysis software Tracker.

Let’s guess that the person in the video is about 1.8 meters tall (~6 feet), which is roughly the average height for a Danish man. (The brave soul in the video is marine biologist Bjarni Mikkelsen, and the whale was beached on the Faroe Islands.) This sets a scale for the other distances in the video.

Now, I’m going to track the speed at which the blood and gas mixture shoots out of the whale (the liquid shoots out first, and the guts follow after).

I tracked this explosion along four different paths. Averaging these four values gives a speed of 17.7 meters/second (with a standard deviation of 3.4 meters/second).

Boom! The blood shoots out of the whale at a whopping 17.7 meters/second (or about 40 mph)!
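To turn that speed into a pressure, one common back-of-the-envelope route (not necessarily the exact one used in the rest of the post) is Bernoulli’s equation: the gauge pressure needed to push a fluid out at speed v is roughly ½ρv². Here’s a minimal sketch in Python, assuming the ejected gunk is roughly as dense as blood or seawater (~1060 kg/m³):

```python
# Rough Bernoulli estimate of the burst pressure (a sketch, not necessarily the post's method).
# Assumptions: incompressible flow, fluid density ~1060 kg/m^3 (roughly blood/seawater).

rho = 1060.0   # assumed fluid density, kg/m^3
v = 17.7       # measured exit speed from the video analysis, m/s

gauge_pressure = 0.5 * rho * v**2   # Bernoulli: P_inside - P_atm = (1/2) * rho * v^2
atm = 101325.0                      # one atmosphere, in pascals

print(f"Gauge pressure ~ {gauge_pressure:.0f} Pa (~{gauge_pressure / atm:.1f} atm above atmospheric)")
```

With these assumptions the answer comes out to a little over one and a half atmospheres above atmospheric pressure, which is the kind of order-of-magnitude figure this estimate is after.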

Continue reading What’s the Pressure Inside an Exploding Whale?

Galileo Got Game: 5 Things You Didn’t Know About the Physics of Basketball

[Image: basketball hangtime physics]

In a way, a game like basketball is a physics geek’s delight. It’s a playground where you can apply physics principles to try to get some added insight into the game. You’ve got the interplay of projectile motion and collisions, energy and momentum, and so on. To get you started, here’s a list of five neat pieces of physics that you may not typically think about when watching a game.

1. Whenever you jump, you spend 71 percent of your time in the top half of the jump. This helps create the floating illusion of hangtime.

You might expect that when you jump, you spend equal amounts of time in the top and the bottom half of your jump.

But if you think about it, you’re moving fastest the moment you leave the ground. Every instant after liftoff, you slow down, until you reach zero vertical speed for a brief instant at the peak of your jump. After that, your speed increases again in the downwards direction, as you fall back down. This means that the top half of your jump (in terms of height) is also the slower half, and so it takes more time to cover that half.

How much more time? To work this out, we need to know that when you drop an object from rest, the time it takes to fall depends on the square root of the distance. The time it takes to fall half the distance is therefore just sqrt(1/2), or 71 percent of the time it takes to fall the whole distance.

Basketball players appear to float because they spend 71 percent of their ‘hang time’ in the top half of their jump. If they’re moving towards the hoop as well, then 71 percent of that horizontal distance is covered while they’re in the top half of their jump, adding to the illusion of floating.
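Here’s a quick numerical check of that 71 percent figure: a minimal Python sketch using constant-acceleration kinematics (gravity only, no air resistance; the takeoff speed is an arbitrary choice, since the fraction doesn’t depend on it).

```python
import numpy as np

# Simulate a jump under constant gravity and measure the fraction of
# hang time spent in the top half of the jump (by height).
g = 9.81    # m/s^2
v0 = 3.0    # takeoff speed, m/s (any value gives the same fraction)

T = 2 * v0 / g             # total hang time
h_max = v0**2 / (2 * g)    # peak height

t = np.linspace(0, T, 100001)
h = v0 * t - 0.5 * g * t**2          # height as a function of time

frac_top_half = np.mean(h >= h_max / 2)
print(f"Fraction of hang time in top half: {frac_top_half:.3f}")   # ~0.707
print(f"sqrt(1/2) = {np.sqrt(0.5):.3f}")
```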

2. To make a layup (or a moving shot), players have to account for how their speed is added to the ball’s speed. 

Picture someone riding a bike in a straight line at a constant speed. They throw a ball vertically up into the air. After the ball leaves their hand, they keep cycling at the same steady pace.

[Image: a cyclist throwing a ball straight up]

From the perspective of the cyclist, does the ball land...

  • In front of the cyclist?
  • With the cyclist?
  • Behind the cyclist?

Pause here for a moment to make your prediction.

Decided?

Once you’ve made your prediction, watch the video below. The spring-loaded cart represents the cyclist, and the tennis ball is launched perfectly vertically.

Were you surprised by what you saw, or did it agree with what you expected?

To see why the ball must fall back to the cyclist, let’s think of an experiment conducted by Galileo in the 1600s. Picture a ship moving along, at a constant speed, just like the cyclist. Galileo dropped a rock from the mast of a moving ship, and found that it fell at the base of the mast, not behind the mast. (Other scientists believed the rock would fall behind the mast, and even claimed they’d seen that happen, but Galileo actually did the experiment to see for himself.)

[Image: Galileo’s ship and relative velocity]

But now imagine that someone was watching this experiment from the shore. From this bystander’s perspective, Galileo’s ship is moving sideways, and so for the rock to land at the base of the mast, the rock must move sideways as it falls. Galileo understood what others before him had not – when the rock is released, in addition to its downward motion, it keeps moving sideways with the same speed as the ship.

The same physics is at play with the cyclist and the ball. The cyclist throws the ball vertically up, but the ball also travels sideways with the cyclist’s speed. So when it falls, it catches up with the cyclist.
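You can convince yourself of this with a few lines of simulation. Here’s a minimal Python sketch (no air resistance, and the speeds are just illustrative numbers) that tracks the horizontal positions of the cyclist and the ball in the ground frame:

```python
import numpy as np

# A cyclist moving at constant horizontal speed throws a ball straight up
# (in their own frame). In the ground frame the ball keeps the cyclist's
# horizontal speed, so it lands right back at the cyclist.
g = 9.81         # m/s^2
v_bike = 5.0     # cyclist's horizontal speed, m/s (illustrative)
v_throw = 4.0    # vertical throw speed, m/s (illustrative)

T = 2 * v_throw / g              # time until the ball comes back down
t = np.linspace(0, T, 1000)

x_ball = v_bike * t              # the ball keeps the sideways speed it had at release
y_ball = v_throw * t - 0.5 * g * t**2
x_bike = v_bike * t              # the cyclist keeps pedaling at the same speed

print(f"Horizontal gap at landing: {abs(x_ball[-1] - x_bike[-1]):.6f} m")   # 0.0
```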

What does this have to do with basketball? Continue reading Galileo Got Game: 5 Things You Didn’t Know About the Physics of Basketball

What’s Making This Strange Chirping Noise? A Frog That Can Survive Being Frozen Alive

I was out on a run when I heard this intriguing sound, a chorus of high-pitched chirping noises. Curious to find out what was making this sound, I strayed off the trail and followed the chirps through a field of reeds. Here’s what I saw (and heard).

Hmmm… There wasn’t much I could see, although I could hear a ton of activity.

I tried to sneak up to a lone, strident voice from the choir. Many failed attempts later, my legs were covered in scratches from bumbling around in the reeds, but I finally managed to get a glimpse of the creature making this sound. It was a tiny brown frog, small enough to sit on your fingertip. The pond was transformed into a chorus of these frogs, all trying to outdo each other in attracting females.

Image: Dave Huth / Flickr (used with permission)

I got close enough to record the sound made by one of these little guys, and asked Twitter to help me identify the frog. (Hit play to hear the recording.)

Happily, I woke up the next morning to a tweet from my friend, science writer Sarah Keartes (twitter, website), who forwarded my request to her frog-enthusiast colleagues at EarthTouch. They were able to identify the frog as the Northern Spring Peeper, a frog whose recurring cheeping sound marks the onset of spring in the northeastern United States.

The Latin name of the Northern Spring Peeper is Pseudacris crucifer, which sounds more like a comic book villain or a badass rapper than a tiny frog. A quick YouTube search convinced me that this was indeed our guy. Here’s what the male looks like when it emits this sound.

And here’s a closer look.

Image: Dave Huth / Flickr (used with permission)

That huge pouch that you see is a vocal sac, and it inflates up to be nearly as large as the frog. It’s this acoustic resonator that allows this tiny frog to emit such a loud and shrill chirp.

Back home on my computer, I isolated a sample of the frog cheeping from my recording. Click below to listen to this sound and see the shape of the audio signal. Essentially, this is a plot of how your speakers have to wiggle in order to play back the frog’s chirp.

This frog’s call was remarkably consistent. It was almost exactly the same pitch every time, and precisely timed at evenly spaced intervals, about 40 to 50 times a minute. Neat.

The chirp sounded very shrill, but does it have any overtones, like when a singer hits a note? Or is it a pure note, like when you strike a tuning fork? To find out, I viewed the above plot as a frequency plot. (Mathematically, this is known as taking the Fourier transform, and I’ve written more about that here).

If you haven’t seen these spectrograms before, they’re a little tricky to read. On the vertical axis are the different frequencies (or pitches) in the sound, and as before, time ticks from left to right. Think of this plot as distilling the frog’s chirp into its constituent notes – low notes on the bottom, and higher notes on top. The hotter colors represent a louder sound, going from cool blues to warm reds to ultra-hot white.

[Image: spectrogram of the frog’s chirp]

See how there’s a sharp white bar cutting across the chirp? That tells us that the frog chirp is mostly made up of a very loud note with a pitch of about 3000 cycles/second, or G7. And there are some softer overtones as well – those parallel red and pink bars.
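If you’d like to make a spectrogram like this from your own recording, here’s a minimal Python sketch using scipy and matplotlib (the filename is a placeholder, and it assumes a mono WAV file):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# Load a mono recording of the chirp (placeholder filename).
rate, audio = wavfile.read("peeper_chirp.wav")
audio = audio.astype(float)

# Compute the spectrogram: how much power there is at each frequency, at each moment in time.
freqs, times, power = spectrogram(audio, fs=rate, nperseg=1024)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.colorbar(label="Power (dB)")
plt.show()
```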

To confirm this, let’s make a plot of all the different frequencies that add up to make this sound. This plot is like a recipe for the sound, that tells us which ingredient notes make it up (on the horizontal axis), and at what volume (on the vertical axis).
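Here’s a sketch of how to compute such a frequency plot and read off its peak, continuing with the same placeholder recording as above:

```python
import numpy as np
from scipy.io import wavfile

# Find the loudest frequency in the recording (assumes a mono WAV file).
rate, audio = wavfile.read("peeper_chirp.wav")
audio = audio.astype(float)

spectrum = np.abs(np.fft.rfft(audio))             # volume of each ingredient frequency
freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)
spectrum[0] = 0.0                                 # ignore the zero-frequency (DC) component

peak_freq = freqs[np.argmax(spectrum)]
print(f"Peak frequency: {peak_freq:.0f} Hz")
```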

[Image: frequency spectrum of the chirp]

The peak frequency from the plot above is 3144 cycles/second (or G7), which agrees with what we saw before. How does this compare with the scientific data on this frog?

A classic paper from 1985, Sexual Selection in the Spring Peeper, measured the pitch of the call of 72 Northern Spring Peepers in a lab, and found that the average peak frequency was 3061 cycles/second. That’s pretty close to my field results. Sweet! Science working as it should.

The same paper goes on to show that the female Northern Spring Peepers preferred the males with the loudest calls, and also preferred the males that repeated their calls the fastest. So for example, if there were two males, one that chirps every 1.2 seconds, and the other every 0.9 seconds, then nine out of ten times, the females would pick the faster chirper. For the male frogs, chirping loud and fast is a winning strategy.

The benefits of being loud are apparent. If you’re a frog and you can call out louder than your fellow peeps, you’re likelier to get the female’s attention. But why do the female frogs prefer the fastest chirpers?

It’s because the chirping advertises the male frog’s fitness. The frogs that chirp the fastest tend to be heavier and in better physical condition. That’s because it takes energy to chirp. To chirp faster, a frog has to take in more oxygen, and consume more energy. The frogs that chirp the fastest are the ones with the greatest stamina. Like the fastest long distance runners, they’re able to sustain a high consumption of energy over a long duration.

Which leads us to another puzzle. The very thing that makes the male Spring Peepers attractive to the females – their loud, repetitive calls – would also make them far more conspicuous to any predators. So how do they manage to not get eaten?

Image: Dave Huth / Flickr (used with permission)

One way that these spring peepers avoid predators is by emerging from hibernation very early in spring. But there’s a problem with this strategy. Early spring comes with bouts of cold temperatures, often dropping below freezing (currently the case as I write these words). So the question really boils down to this. How does the frog prevent itself from freezing? The answer to this question totally and utterly blows my mind – the frog doesn’t prevent itself from freezing. Instead, evolution has devised a way for this frog to stay frozen alive.

You see, when the frog emerges from its hibernation in early spring, the temperature can often drop below freezing. If the temperature is -2 or -3 C (27 F), the frog can survive because the water inside it remains in a supercooled state – below its freezing point, but not yet frozen. But once the temperature dips any lower, the water inside the frog can’t stay supercooled, and so it starts to freeze.

For most animals, this would mean a quick death. But not the Northern Spring Peeper. Studies have shown that this frog can survive frozen for up to a week. The frog enters a state of suspended animation. Its breathing, blood flow and heartbeat shut down, and its limbs become frozen stiff. The water under its skin freezes, and the contents of its stomach become a solid ball of ice. More than half of the water in its body turns to ice. Yet it can survive in this frozen state for days, and when the temperature goes back up, the frog thaws and eventually goes back to hopping around.

So how does it pull off this incredible trick? Continue reading What’s Making This Strange Chirping Noise? A Frog That Can Survive Being Frozen Alive

Voiding the Warranty: Using Microsoft Kinect to Make Your Own Dance Video

[Image: Kinect dance video still]

I’m trying out a new experiment here, a series of blog posts on weekend hacks and projects, that I’m calling ‘voiding the warranty’. The unifying theme is to use things in some way other than their intended purpose.

I’ve always loved tinkering. From childhood on, I’ve been that kid who loves to take apart the VCR or the cordless phone (on a good day, I could even put them back together again). And so I’m really interested in ways in which we can repurpose existing technology to do new and creative things – things that they weren’t necessarily designed to do, but that are fun and inspiring.

But it’s always been frustrating to take stuff apart. Increasingly, technology isn’t designed for us to look under the hood (and certainly not to fiddle with anything there). Instead, it’s become a black box whose insides make sense only to the most über of über-techies. As consumers, when we own a black box, we’re letting other people design our world for us.

Hardware for makers comes with a different kind of warning label (product shown: Makey Makey). Image: Visible Procrastinations / Flickr

Nonetheless, there is hope. There’s a growing movement of people who are trying to take technology back, and shrink the learning curve for building stuff. It’s often called the maker movement, or maker culture. I think that this movement is really important because it’s empowering – it lets you tinker with things once again, to learn, and to adapt and build things. And you don’t need to be an electrical engineer to take part – it’s open to anyone who wants to learn how things tick. There are tools like Arduino, Processing, Makey Makey, and Raspberry Pi, and tutorials and starter kits from SparkFun, Sylvia’s super-awesome Maker Show, Adafruit, Make, and dozens of other places, all of which make it easier than ever for us to make stuff. Technology doesn’t have to be mysterious; it can be a tool to explore and a way to learn. And tinkering can be an immensely enjoyable and fruitful process.

So with that in mind, let’s get our hands dirty.

A week ago, I bought a Kinect sensor ($99 on Amazon, although you can find it cheaper used. If you’re buying it, get the one for Xbox, not Windows, and check that the power adapter is included). It’s a sensor that allows your computer to see where you are. Unlike webcams, which provide only images (notoriously difficult for computers to understand), the Kinect uses infrared cameras to capture depth information. It measures the distance of every point in the room within the sensor’s range. It’s a bit like a 3D scanner, and can even detect people and gestures.

If you just want to play with the Kinect but don’t want to get into all this coding stuff, plug it in, get Synapse (Mac only), and you’ll see a depth map of your room. This is an image where the brightness of each pixel represents how close that point is to the camera. Looking at this is kind of like stepping into the future, because for the first time your computer can see you, as an object with a wire-frame skeleton, as distinct from your chair, lamp, or table. It can track you as you move around, and it’s just freaking cool to use your body to control your on-screen avatar. (It even works if you turn out the lights.)
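If you do want to peek at the raw depth data yourself, here’s a minimal Python sketch. It assumes you’ve installed the open-source OpenKinect (libfreenect) drivers along with their Python bindings, which expose a simple synchronous capture call:

```python
import numpy as np
import matplotlib.pyplot as plt
import freenect   # Python bindings from the OpenKinect / libfreenect project

# Grab one depth frame from the Kinect. The sensor reports 11-bit depth
# values (0-2047), where smaller numbers mean closer objects.
depth, _timestamp = freenect.sync_get_depth()

# Scale to 8-bit grayscale so it can be shown as an ordinary image
# (in this scaling, darker pixels are closer to the camera).
depth_image = (depth / 2048.0 * 255).astype(np.uint8)

plt.imshow(depth_image, cmap="gray")
plt.title("Kinect depth map")
plt.axis("off")
plt.show()
```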

[Image: Kinect skeleton tracking in Synapse]

Continue reading Voiding the Warranty: Using Microsoft Kinect to Make Your Own Dance Video

How to Toughen Glass by Cracking It: A Lesson From Teeth and Shells

The enamel in your teeth is as brittle as glass, yet built to last a lifetime. Image: Trey Menefee / Flickr

Over the course of your life, each of your teeth will make well over a million bites (or megabites, if you prefer.) The average force imparted by your molars in one of these bites is 720 Newtons (162 lbs), or about the weight of an adult human. That’s a very large number of very powerful impacts, and so you’d imagine that our teeth must be incredibly strong and crack-resistant to withstand such heavy use. And yet enamel – the mineral that coats our teeth – is about as brittle as glass.

Chew on that thought for a moment.

Enamel and glass have a few things in common. They are both very strong materials (they can withstand a lot of pressure), and yet, at the same time, they are both very brittle (they crack easily). But they differ greatly in how they respond to these cracks. When you drop a glass, small cracks form that grow larger and cause the whole thing to shatter. But unlike glass, the enamel layer of our teeth is able to stop cracks dead in their tracks, effectively absorbing their energy and preventing them from growing. You might take your teeth for granted, but beneath the surface lies an ingenious micro-engineered structure that diffuses cracks and saves us from many trips to the dentist.

So how can brittle building blocks build an incredibly tough wall? The answer lies in how these blocks are stacked.

To see what I mean, let’s zoom in to the enamel coating of a tooth. Here’s what it looks like under a microscope.

Beneath the surface lies an ingenious micro-engineered structure that diffuses cracks and saves us from many trips to the dentist. Image: Mirkhalaf, Dastjerdi, Barthelat / Nature Communications

The enamel layer on a tooth is really made out of tiny enamel rods, each one about 4-8 microns thick, that are stacked next to each other like a dense forest of trees. In between these rods is a tiny amount of protein (this makes up about 1% of the coating). When you bite into something really hard, tiny cracks develop along these seams between the rods. But instead of growing larger and shattering your tooth like a slab of glass, these cracks are deflected downwards, to a region where these enamel rods get knotted with each other. Like tangled roots of a microscopic enamel forest, this criss-crossing network safely absorbs any damage imparted by the crack. The key idea here is that you can toughen a material by deflecting incoming cracks and forcing them to travel down a more tortuous path. The energy in the crack is now spread over a larger area, and so the crack can do far less damage.

Nature tends to reuse her best tricks. Many tough materials found in nature use stiff building blocks separated by weaker gaps, in a carefully engineered microscopic arrangement that guides any incoming cracks through a maze of twists and turns.

Mother of pearl, or nacre, is found in the outer layer of pearls, and it gives pearls their characteristic shimmering white, iridescent color. Nacre also lines the insides of many mollusk shells, like the shells of oysters, abalone, and nautili. And here’s the really surprising thing — this nacre lining is 3,000 times tougher than the mineral that it’s made out of!

A Nautilus shell cut in half. Not only is this a beautiful example of a logarithmic spiral, but the nacre that gives this shell its strength and shimmer is a micro-engineered material. Image: Wikimedia Commons

If you zoom into a chunk of this nacre, you’ll encounter a structure that looks a lot like a brick and mortar wall – an interlocking pattern of tiny nacre tablets glued together by sheets of elastic biopolymers.

Electron microscope image of the surface of nacre, with a fracture in it. Image: Wikimedia Commons

This interlocking structure is behind nacre’s dramatic 3,000 fold boost in toughness. When a crack tries to make its way through this crystalline shock absorber, it’s deflected along the seams between the nacre slabs. The dangerously localized energy carried by the crack is safely diffused over a larger area (no wonder molluscs line their shells with this amazing stuff.)

When a crack tries to make its way through this crystalline shock absorber, it’s deflected along the seams between the nacre slabs. Image: Mirkhalaf, Dastjerdi, Barthelat / Nature Communications

In a zen-inspired stroke of engineering brilliance, these materials gain their strength from their weaknesses*. A solid block of enamel or nacre would be hopelessly brittle. But, by introducing weaker channels that can guide and deflect cracks, these materials become far tougher than the building blocks that they are made out of.

Brick and mortar walls channel cracks away from a direct route and towards a longer maze-like path, and this makes them tougher. Nature uses a similar trick to build tough materials. Image: Rodnei Reis / Flickr

Wouldn’t it be cool if we could take a trick out of nature’s book, and use this idea to build tougher glass? This thought inspired Mirkhalaf, Dastjerdi, and Barthelat, three mechanical engineers at McGill University, to experiment with glass. They wondered what would happen if you could embed these maze-like paths inside a piece of glass. Could these weaker channels deflect and diffuse cracks just like our teeth or mollusk shells do?

So they designed a ‘3D laser engraving’ system where a laser beam is focused inside a piece of glass, and engraves small holes (or ‘microcracks’) inside the glass. By etching many of these small holes next to each other, the researchers could engineer a weak front inside the glass. And when they tore the glass apart, they found that indeed, as they expected, the crack no longer travelled in a straight line – instead, it was deflected down this weaker channel.

Image: Mirkhalaf, Dastjerdi, Barthelat / Nature Communications

So far, so good. They could now guide cracks to go where they wanted them to. The next step was to turn this weakness into a strength.

And so the researchers came up with a pretty ingenious idea. They etched out a weakened channel inside the glass in the shape of the edge of a jigsaw puzzle piece. Just as it’s hard to slide apart jigsaw pieces that are snapped together, the researchers expected that as the crack travels down this jigsaw channel it would have to work against friction to pull these jigsaw tabs apart. They realized that this idea worked even better if they filled these jigsaw shaped grooves with polyurethane (reminiscent of the biological examples where strong pieces are separated by weakened grooves).

As the crack travels down this jigsaw channel it would have to work against friction to pull these jigsaw tabs apart. Image: Mirkhalaf, Dastjerdi, Barthelat / Nature Communications
An example of this laser-engraved glass. It takes 200 times more energy to snap the glass, compared to when the curvy seams are absent. Image: Mirkhalaf, Dastjerdi, Barthelat / Nature Communications

The researchers found that this laser-engraved glass was 200 times tougher than regular glass. We frequently use the words ‘strong’ and ‘tough’ interchangeably, but in engineering these are two different quantities. The strength of a material refers to how much pressure it can withstand (either in compression or in stretching), whereas the toughness has to do with how easily cracks can spread. Traditional glass is fairly strong, but not at all tough – it’s brittle. Engineered glasses like tempered glass or Gorilla Glass increase the strength of glass (its ability to withstand high pressure) but not its toughness (its ability to stop cracks from spreading). The laser-engraving technique does the opposite. It gives you a large boost in toughness at the cost of lowering the strength.

Like dental enamel or mother of pearl, the bio-inspired glass developed by these researchers is far tougher than any of its parts. The secret to their success was not to prevent the glass from failing, but to create a situation where it fails well. And just as tooth enamel saves us trips to the dentist, I’m hoping that in the future bio-inspired glass will save the day whenever I drop my phone.

 

Update (March 11): Here’s a Q&A with Francois Barthelat, one of the authors of this work.

Q. What motivated you to work on this project? What role did examples from nature play in guiding your investigations?

A. Teeth, bone and mollusk shells are made of extremely brittle minerals as fragile as chalk, yet they are notorious for their high toughness, which is higher than our best engineered ceramics and glasses. The idea of mimicking the structures and mechanisms behind the performance of these natural materials has been around for about two decades. The typical fabrication approach to mimic these materials has been to assemble building blocks into bio-inspired microstructures. This is much like making a brick wall out of Lego blocks, except in this case the blocks are microscopic, so this approach is very challenging. Our idea was to attack the problem from a new angle: start with a large block of material with no initial microstructure and carve weaker interfaces within it. This method allows for much higher control over the final structure, and also gives a material with a very high content of hard material. Glass is the perfect choice because it suits itself well to the laser engraving process, and it is a material which is used in many applications. Also, glass is the archetype of brittle materials, and turning its brittleness into toughness makes for a more spectacular result. We are now experimenting with other types of materials as well.

Q. It seems that introducing these laser-engraved channels affects the transparency of the glass. Do you think that in future glass could be engineered with these structures in a way that can still be used in applications that rely on transparency (e.g. smartphone or computer screens)?

A. We are now working on optimizing the infiltration process so the engraved lines become completely invisible. We do this by combining different techniques, and while this is still ongoing we already have very encouraging results, where the engraved line is already much less visible than what you saw in our article.

Q.  Are there other architectures (other than the jigsaw puzzle piece architecture) that your group has considered working with? What inspired the idea of the jigsaw piece architecture?

A. Yes! There are of course many more possible architectures, which makes it very exciting for us because we now have a huge playground to explore. The design we proposed in this paper is essentially two-dimensional. Now we are exploring fully three-dimensional architectures. The “jigsaw pieces” geometry came about for two reasons: we needed a “re-entrant” feature to generate locking, and we also needed rounded geometries all around, because glass easily fractures near sharp corners.

Q. Are you working on any commercial applications of this work? Do you see these ideas being incorporated into glass for commercial and home use?

A. Glass is prevalent in many applications because of its optical properties, hardness, resistance to chemicals and durability. The main drawback of glass is its brittleness. Reducing the brittleness of glass can therefore expand the range of its applications: tougher bullet-proof windows, glasses, sports equipment, optical devices, smart phones, touch screens. We have patented the design and the fabrication process, and we are already talking to several companies interested in commercialization.

References

Mirkhalaf, M., Dastjerdi, A. K., & Barthelat, F. (2014). Overcoming the brittleness of glass through bio-inspiration and micro-architecture. Nature Communications, 5.

Footnotes

*Technically I mean toughness here and not strength. These micro-architectures provide a boost of toughness that is accompanied with a loss of strength. See here for more on the difference between toughness and strength.

How many bites does a tooth go through in its lifetime? This is a fun question to think about (and could work well as a prompt to teach estimation in a math classroom.) I’ll leave it to you to work out the answer. Here are some estimates by others.

Homepage image: Andre Vandal/Flickr

The Questions That Computers Can Never Answer

Image: Armin Cifuentes/Flickr

Computers can drive cars, land a rover on Mars, and beat humans at Jeopardy. But do you ever wonder if there’s anything that a computer can never do? Computers are, of course, limited by their hardware. My smartphone can’t double as an electric razor (yet). But that’s a physical limitation, one that we could overcome if we really wanted to. So let me be a little more precise in what I mean. What I’m asking is, are there any questions that a computer can never answer?

Now of course, there are plenty of questions that are really hard for computers to answer. Here’s an example. In school, we learn how to factor numbers. So, for example, 30 = 2 × 3 × 5, or 42 = 2 × 3 × 7. School kids learn to factor numbers by following a straightforward, algorithmic procedure. Yet, up until 2007, there was a $100,000 bounty on factoring this number:

13506641086599522334960321627880596993888147560566702752448514385152651060
48595338339402871505719094417982072821644715513736804197039641917430464965
89274256239341020864383202110372958725762358509643110564073501508187510676
59462920556368552947521350085287941637732853390610975054433499981115005697
7236890927563

And as of 2014, no one has publicly claimed the solution to this puzzle. It’s not that we don’t know how to solve it, it’s just that it would take way too long. Our computers are too slow. (In fact, the encryption that makes the internet possible relies on these huge numbers being impossibly difficult to factor.)
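To get a feel for why this is so hard, here’s the most naive factoring method, trial division, as a short Python sketch. It handles 30 or 42 instantly, but for a 309-digit number like the one above the loop would run for an absurdly long time (and even the best known factoring algorithms, while far cleverer than this, still take far too long):

```python
def trial_division(n):
    """Return the prime factors of n by checking divisors one at a time."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:    # divide out each prime factor as many times as it appears
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                # whatever is left over is itself prime
        factors.append(n)
    return factors

print(trial_division(30))    # [2, 3, 5]
print(trial_division(42))    # [2, 3, 7]
# trial_division(<the 309-digit number above>) would not finish in any reasonable time.
```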

So let’s rephrase our question so that it isn’t limited by current technology. Are there any questions that, no matter how powerful your computer, and no matter how long you waited, your computer would never be able to answer?

Surprisingly, the answer is yes. The Halting Problem asks whether a computer program will stop after some time, or whether it will keep running forever. This is a very practical concern, because an infinite loop is a common type of bug that can subtly creep in to one’s code. In 1936, the brilliant mathematician and codebreaker Alan Turing proved that it’s impossible for a computer to inspect any code that you give it, and correctly tell you whether the code will halt or run forever. In other words, Turing showed that a computer can never solve the Halting Problem.

You’ve probably experienced this situation: you’re copying some files, and the progress bar gets stuck (typically at 99%). At what point do you give up on waiting for it to move? How would you know whether it’s going to stay stuck forever, or whether, in a few hundred years, it’ll eventually copy your file? To use an analogy by Scott Aaronson, “If you bet a friend that your watch will never stop ticking, when could you declare victory?”

[Image: a stuck file-copy progress bar]

As you get sick of waiting for the copy bar to move, you begin to wonder, wouldn’t it be great if someone wrote a debugging program that could weed out all annoying bugs like this? Whoever wrote that program could sell it to Microsoft for a ton of money. But before you get to work on writing it yourself, you should heed Turing’s advice – a computer can never reliably inspect someone’s code and tell you whether it will halt or run forever.

Think about how bold a claim this is. Turing isn’t talking about what we can do today, instead he’s raised a fundamental limitation on what computers can possibly do. Be it now, or in the year 2450, there isn’t, and never will be, any computer program that can solve the Halting Problem.

In his proof, Turing first had to mathematically define what we mean by a computer and a program. With this groundwork covered, he could deliver the final blow using the time honored tactic of proof by contradiction. As a warm up to understanding Turing’s proof, let’s think about a toy problem called the Liar paradox. Imagine someone tells you, “this sentence is false.” If that sentence is true, then going by what they said, it must also be false. Similarly, if the sentence is false, then it accurately describes itself, so it must also be true. But it can’t be both true and false – so we have a contradiction. This idea of using self-reference to create a contradiction is at the heart of Turing’s proof.

Here’s how computer scientist Scott Aaronson introduces it:

[Turing’s] proof is a beautiful example of self-reference. It formalizes an old argument about why you can never have perfect introspection: because if you could, then you could determine what you were going to do ten seconds from now, and then do something else. Turing imagined that there was a special machine that could solve the Halting Problem. Then he showed how we could have this machine analyze itself, in such a way that it has to halt if it runs forever, and run forever if it halts. Like a hound that finally catches its tail and devours itself, the mythical machine vanishes in a fury of contradiction.

"Ouroboros" by the Flipside CORE project  and Burning Man
“Like a hound that finally catches its tail and devours itself, the mythical machine vanishes in a fury of contradiction.” Photo: Michael Holden/Flickr

And so, let’s go through Turing’s proof that the Halting Problem can never be solved by a computer, or why you could never program a ‘loop snooper’. The proof I’m about to present is a rather unconventional one. It’s a poem written by Geoffrey Pullum in honor of Alan Turing, in the style of Dr. Seuss. I’ve reproduced it here, in its entirety, with his permission.

SCOOPING THE LOOP SNOOPER

A proof that the Halting Problem is undecidable

Geoffrey K. Pullum

No general procedure for bug checks will do.
Now, I won’t just assert that, I’ll prove it to you.
I will prove that although you might work till you drop,
you cannot tell if computation will stop.

For imagine we have a procedure called P
that for specified input permits you to see
whether specified source code, with all of its faults,
defines a routine that eventually halts.

You feed in your program, with suitable data,
and P gets to work, and a little while later
(in finite compute time) correctly infers
whether infinite looping behavior occurs.

If there will be no looping, then P prints out ‘Good.’
That means work on this input will halt, as it should.
But if it detects an unstoppable loop,
then P reports ‘Bad!’ — which means you’re in the soup.

Well, the truth is that P cannot possibly be,
because if you wrote it and gave it to me,
I could use it to set up a logical bind
that would shatter your reason and scramble your mind.

Here’s the trick that I’ll use — and it’s simple to do.
I’ll define a procedure, which I will call Q,
that will use P’s predictions of halting success
to stir up a terrible logical mess.

For a specified program, say A, one supplies,
the first step of this program called Q I devise
is to find out from P what’s the right thing to say
of the looping behavior of A run on A.

If P’s answer is ‘Bad!’, Q will suddenly stop.
But otherwise, Q will go back to the top,
and start off again, looping endlessly back,
till the universe dies and turns frozen and black.

And this program called Q wouldn’t stay on the shelf;
I would ask it to forecast its run on itself.
When it reads its own source code, just what will it do?
What’s the looping behavior of Q run on Q?

If P warns of infinite loops, Q will quit;
yet P is supposed to speak truly of it!
And if Q’s going to quit, then P should say ‘Good.’
Which makes Q start to loop! (P denied that it would.)

No matter how P might perform, Q will scoop it:
Q uses P’s output to make P look stupid.
Whatever P says, it cannot predict Q:
P is right when it’s wrong, and is false when it’s true!

I’ve created a paradox, neat as can be —
and simply by using your putative P.
When you posited P you stepped into a snare;
Your assumption has led you right into my lair.

So where can this argument possibly go?
I don’t have to tell you; I’m sure you must know.
A reductio: There cannot possibly be
a procedure that acts like the mythical P.

You can never find general mechanical means
for predicting the acts of computing machines;
it’s something that cannot be done. So we users
must find our own bugs. Our computers are losers!

What you just read, in delightfully whimsical poetic form, was the punchline of Turing’s proof. Here’s a visual representation of the same idea. The diamond represents the loop-snooping program P, which is asked to evaluate whether the program Q (the flow chart) will halt.

“The program will halt when the loop snooper said it wouldn’t, and it runs forever when the loop snooper said it would halt!” Image Credit for serpent (right): Andrei

Like the serpent that tries to eat its tail, Turing conjured up a self-referential paradox. The program will halt when the loop snooper said it wouldn’t, and it runs forever when the loop snooper said it would halt! To resolve this contradiction, we’re forced to conclude that this loop snooping program can’t exist.
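For programmers, the same contradiction can be written down in a few lines. Here’s a Python-flavored sketch (the names are purely for illustration; the whole point of the argument is that the halts function can never actually be implemented):

```python
def halts(source, data):
    """The mythical loop snooper P: report whether running `source` on `data` ever halts."""
    raise NotImplementedError("Turing's argument shows that no such program can exist.")

def q(source):
    """The troublemaker Q: do the opposite of whatever P predicts about `source` run on itself."""
    if halts(source, source):
        while True:   # P says it halts, so loop forever
            pass
    else:
        return        # P says it loops forever, so halt immediately

# Now ask Q about its own source code. If P says Q halts, Q loops forever;
# if P says Q loops forever, Q halts. Either way P is wrong, so P cannot exist.
```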

And this idea has far-reaching consequences. There are uncountably many questions for which computers can’t reliably give you the right answer. Many of these impossible questions are really just the loop snooper in disguise. Among the things that a computer can never do perfectly is identifying whether a program is a virus, or whether it contains vulnerable code that can be exploited. So much for our hopes of having the perfect anti-virus software or unbreakable software. It’s also impossible for a computer to always tell you whether two different programs do the same thing, an unfortunate fact for the poor souls who have to grade computer science homework.

By slaying the mythical loop snooper, Turing taught us that there are fundamental limits to what computers can do. We all have our limits, and in a way it’s comforting to know that the artificial brains that we create will always have theirs too.

The Experiment That Forever Changed How We Think About Reality

The uncertainty principle says that you can’t know certain properties of a quantum system at the same time. For example, you can’t simultaneously know the position of a particle and its momentum. But what does that imply about reality? If we could peer behind the curtains of quantum theory, would we find that objects really do have well defined positions and momentums? Or does the uncertainty principle mean that, at a fundamental level, objects just can’t have a clear position and momentum at the same time? In other words, is the blurriness in our theory or is it in reality itself?

Case 1: Blurred glasses, clear reality

The first possibility is that using quantum mechanics is like wearing blurred glasses. If we could somehow lift off these glasses, and peek behind the scenes at the fundamental reality, then of course a particle must have some definite position and momentum. After all, it’s a thing in our universe, and the universe must know where the thing is and which way it’s going, even if we don’t know it. According to this point of view, quantum mechanics isn’t a complete description of reality – we’re probing the fineness of nature with a blunt tool, and so we’re bound to miss out on some of the details.

This fits with how everything else in our world works. When I take off my shoes and you see that I’m wearing red socks, you don’t assume that my socks were in a state of undetermined color until we observed them, with some chance that they could have been blue, green, yellow, or pink. That’s crazy talk. Instead, you (correctly) assume that my socks have always been red. So why should a particle be any different? Surely, the properties of things in nature must exist independent of whether we measure them, right?

Case 2: Clear glasses, blurred reality

On the other hand, it could be that our glasses are perfectly clear, but reality is blurry. According to this point of view, quantum mechanics is a complete description of reality at this level, and things in the universe just don’t have a definite position and momentum. This is the view that most quantum physicists adhere to. It’s not that the tools are blunt, but that reality is inherently nebulous. Unlike the case of my red socks, when you measure where a particle is, it didn’t have a definite position until the moment you measured it. The act of measuring its position forced it into having a definite position.

Now, you might think that this is one of those ‘if-a-tree-falls-in-the-forest’ types of metaphysical questions that can never have a definite answer. However, unlike most philosophical questions, there’s an actual experiment that you can do to settle this debate. What’s more, the experiment has been done, many times. In my view, this is one of the most underappreciated ideas in our popular understanding of physics. The experiment is fairly simple and tremendously profound, because it tells us something deep and surprising about the nature of reality.

Here’s the setup. There’s a source of light in the middle of the room. Every minute, on the minute, it sends out two photons, in opposite directions. These pairs of photons are created in a special state known as quantum entanglement. This means that they’re both connected in a quantum way – so that if you make a measurement on one photon, you don’t just alter the quantum state of that photon, but also immediately alter the quantum state of the other one as well.

With me so far?

On the left and the right of this room are two identical boxes designed to receive the photons. Each box has a light on it. Every minute, as the photon hits the box, the light flashes one of two colors, either red or green. From minute to minute, the color of the light seems quite random – sometimes it’s red, and other times it’s green, with no clear pattern one way or another. If you stick your hand in the path of the photon, the light bulb doesn’t flash. It seems that this box is detecting some property of the photon.

So when you look at any one box, it flashes a red or a green light, completely at random. It’s anyone’s guess as to which color it will flash next. But here’s the really strange thing: Whenever one box flashes a certain color, the other box will always flash the same color. No matter how far apart you try to move the boxes from the detector, they could even be in opposite ends of our solar system, they’ll flash the same color without fail.

It’s almost as if these boxes are conspiring to give the same result. How is this possible? (If you have your own pet theory about how these boxes work, hold on to it, and in a bit you’ll be able to test your idea against an experiment.)

“Aha!” says the quantum enthusiast. “I can explain what’s happening here. Every time a photon hits one of the boxes, the box measures its quantum state, which it reports by flashing either a red or a green light. But the two photons are tied together by quantum entanglement, so when we measure that one photon is in the red state (say), we’ve forced the other photon into the same state as well! That’s why the two boxes always flash the same color.”

“Hold up,” says the prosaic classical physicist. “Particles are like billiard balls, not voodoo dolls. It’s absurd that a measurement in one corner of space can instantaneously affect something in a totally different place. When I observe that one of my socks is red, it doesn’t immediately change the state of my other sock, forcing it to be red as well. The simpler explanation is that the photons in this experiment, like socks, are created in pairs. Sometimes they’re both in the red state, other times they’re both in the green state. These boxes are just measuring this ‘hidden state’ of the photons.”

The experiment and reasoning spelt out here is a version of a thought experiment first articulated by Einstein, Podolsky and Rosen, known as the EPR experiment. The crux of their argument is that it seems absurd that a measurement at one place can immediately influence a measurement at totally different place. The more logical explanation is that the boxes are detecting some hidden property that both the photons share. From the moment of their creation, these photons might carry some hidden stamp, like a passport, that identifies them as being either in the red state or the green state. The boxes must then be detecting this stamp. Einstein, Podolsky and Rosen argued that the randomness we observe in these experiments is a property of our incomplete theory of nature. According to them, it’s our glasses that are blurry. In the jargon of the field, this idea is known as a hidden variables theory of reality.

It would seem the classical physicist has won this round, with an explanation that’s simpler and makes more sense.

The next day, a new pair of boxes arrives in the mail. The new version of the box has three doors built into it. You can only open one door at a time. Behind every door is a light, and like before, each light can glow red or green.

The two physicists play around with these new boxes, catching photons and watching what happens when they open the doors. After a few hours of fiddling around, here’s what they find:

1. If they open the same door on both boxes, the lights always flash the same color.

2. If they open the doors of the two boxes at random, then the lights flash the same color exactly half the time.

After some thought, the classical physicist comes up with a simple explanation for this experiment. “Basically, this is not very different from yesterday’s boxes. Here’s a way to think about it. Instead of just having a single stamp, let’s say that each pair of photons now has three stamps, sort of like holding multiple passports. Each door of the box reads a different one of these three stamps. So, for example, the three stamps could be red, green, and red meaning the first door would flash red, the second door would flash green, and the third door would flash red.”

“Going with this idea, it makes sense that when we open the same door on both boxes, we get the same colored light, because both boxes are reading the same stamp. But when we open different doors, the boxes are reading different stamps, so they can give different results.”

Again, the classical physicist’s explanation is straightforward, and doesn’t invoke any fancy notions like quantum entanglement or the uncertainty principle.

“Not so fast,” says the quantum physicist, who’s just finished scribbling a calculation on her notepad. “When you and I opened the doors at random, we discovered that one half of the time, the lights flash the same color. This number – a half – agrees exactly with the predictions of quantum mechanics. But according to your ‘hidden stamps’ ideas, the lights should flash the same color more than half of the time!”

The quantum enthusiast is on to something here.

“According to the hidden stamps idea, there are 8 possible combinations of stamps that the photons could have. Let’s label them by the first letters of the colors, for short, so RRG = red red green.”

RRG
RGR
GRR
GGR
GRG
RGG
RRR
GGG

“Now, when we pick doors at random, a third of the time we will pick the same door by chance, and when we do, we see the same color.”

“The other two-thirds of the time, we pick different doors. Let’s say we encounter photons with the following stamp configuration:”

RRG

“In such a configuration, if we picked door 1 on one box and door 2 on another, the lights flash the same color (red and red). But if we picked doors 1 and 3, or doors 2 and 3, they’d flash different colors (red and green). So in one-third of such cases, the boxes flash the same color.”

“To summarize, a third of the time the boxes flash the same color because we chose the same door. Two-thirds of the time we chose different doors, and in one-third of these instances, the boxes flash the same color.”

“Adding this up,”

⅓ + (⅔ × ⅓) = 3/9 + 2/9 = 5/9 ≈ 55.55%

“So 55.55% is the odds that the boxes flash the same color when we pick two doors at random, according to the hidden stamps theory.”

“But wait! We only looked at one possibility – RRG. What about the others? It takes a little thought, but it isn’t too hard to show that the math is exactly the same in all the following cases:”

RRG
RGR
GRR
GGR
GRG
RGG

“That leaves only two cases:”

RRR
GGG

“In those cases, we get the same color no matter which doors we pick. So it can only increase the overall odds of the two boxes flashing the same color.”

“The punchline is that according to the hidden stamps idea, the odds of both boxes flashing the same color when we open the doors at random is at least 55.55%. But according to quantum mechanics, the answer is 50%. The data agrees with quantum mechanics, and it rules out the ‘hidden stamps’ theory.”
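If you’d like to check the hidden-stamps arithmetic numerically, here’s a small Monte Carlo sketch in Python. It simulates photon pairs carrying stamps of the RRG type (the worst case for the hidden-stamps theory) and two boxes that each open a door at random; mixing in RRR or GGG stamps could only push the agreement higher:

```python
import random

def simulate(trials=1_000_000):
    # Stamp configurations with two of one color and one of the other:
    # these are the ones that minimize how often the boxes agree.
    stamp_types = ["RRG", "RGR", "GRR", "GGR", "GRG", "RGG"]
    same = 0
    for _ in range(trials):
        stamps = random.choice(stamp_types)   # the hidden stamps shared by the photon pair
        door_a = random.randrange(3)          # each box opens one of its three doors at random
        door_b = random.randrange(3)
        if stamps[door_a] == stamps[door_b]:
            same += 1
    return same / trials

print(f"Fraction of same-color flashes: {simulate():.3f}")
# ~0.556 (= 5/9), compared with the 0.5 that quantum mechanics predicts and experiments find.
```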

If you’ve made it this far, it’s worth pausing to think about what we’ve just shown.

We just went through the argument of a groundbreaking result in quantum mechanics known as Bell’s theorem. The black boxes don’t really flash red and green lights, but in the details that matter they match real experiments that measure the polarization of entangled photons.

Bell’s theorem draws a line in the sand between the strange quantum world and the familiar classical world that we know and love. It proves that hidden variable theories of the kind that Einstein and his buddies came up with simply aren’t true [1]. In their place is quantum mechanics, complete with its particles that can be entangled across vast distances. When you perturb the quantum state of one of these entangled particles, you instantaneously also perturb the other one, no matter where in the universe it is.

It’s comforting to think that we could explain away the strangeness of quantum mechanics if we imagined everyday particles with little invisible gears in them, or invisible stamps, or a hidden notebook, or something – some hidden variables that we don’t have access to – and these hidden variables store the “real” position and momentum and other details about the particle. It’s comforting to think that, at a fundamental level, reality behaves classically, and that our incomplete theory doesn’t allow us to peek into this hidden register. But Bell’s theorem robs us of this comfort. Reality is blurry, and we just have to get used to that fact.

Footnotes

1. Technically, Bell’s theorem and the subsequent experiment rule out a large class of hidden variable theories known as local hidden variable theories. These are theories where the hidden variables don’t travel faster than light. It doesn’t rule out nonlocal hidden variable theories where hidden variables do travel faster than light, and Bohmian mechanics is the most successful example of such a theory.

I first came across this boxes-with-flashing-lights explanation of Bell’s theorem in Brian Greene’s book The Fabric of the Cosmos. This pedagogical version of Bell’s experiment traces back to the physicist David Mermin, who came up with it. If you’d like a taste of his unique and brilliant brand of physics exposition, pick up a copy of his book Boojums All the Way Through.

Homepage Image: NASA/Flickr

The Fluid Dynamics of Spitting: How Archerfish Use Physics to Hunt With Their Spit

Archerfish are incredible creatures. They lurk under the surface of the water in rivers and seas, waiting for an insect to land on the plants above. Then, suddenly, and with unbelievable accuracy, they squirt out a stream of water that strikes down the insect. The insect falls, and by the time it hits the water, the archerfish is already waiting in place ready to swallow it up. You have to marvel at a creature that excels at what seems like such an improbable hunting strategy – death by water pistol squirt.

Image Credit: Alberto Vailati

Here’s a video by BBC Wildlife that shows the archerfish in action (the first half of the video is about archerfish).

Technically, the term archerfish doesn’t refer to a single species of fish but to a family of seven different freshwater fish that fall under the genus Toxotes. They strike with remarkable accuracy, and just a tenth of a second after the prey is hit, they quickly move to the spot where it will hit the water. Unlike most baseball players, who have to keep their eyes on a fly ball to track it, the archerfish is in place and waiting for the insect to arrive in less than the blink of an eye.

If that isn’t impressive enough, consider this. When these archerfish squirt water, their eyes are underwater. If you’ve spent any time in a swimming pool, you’ll know that light bends when it enters water. A less astute fish might not correct for this bending of light, and would be tricked into thinking that the insect is somewhere it isn’t. But not the archerfish. This little aquatic physicist is able to seamlessly correct for the bending of light. And it isn’t a minor correction – when the perceived angle of the target is 45 degrees, its true angle is off by as much as 25 degrees.

When aiming their shot, archerfish have to correct for the bending of light
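That 25 degree figure follows from Snell’s law. Here’s a quick Python check, assuming a refractive index of about 1.33 for water and measuring angles from the vertical:

```python
import numpy as np

# Snell's law at the water surface: n_water * sin(theta_water) = n_air * sin(theta_air),
# with angles measured from the vertical (the normal to the surface).
n_water, n_air = 1.33, 1.00

theta_water = np.radians(45.0)                     # direction the fish sees the insect in, underwater
sin_theta_air = n_water / n_air * np.sin(theta_water)
theta_air = np.degrees(np.arcsin(sin_theta_air))   # the ray's true direction above the surface

print(f"Perceived angle: 45.0 degrees, true angle: {theta_air:.1f} degrees")
# ~70 degrees: about 25 degrees away from where the insect appears to be.
```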

And it gets better. Spit doesn’t fly in a straight line, because the Earth pulls it down. The archerfish understands this. It’s able to correct for how gravity bends the spit’s path.

They also correct for the effect of gravity on the water’s path

In the graph below, this correction — the spit fall — is shown for different values of insect height. So, for example, when the insect is 10 centimeters high, the archerfish accounts for a spit fall of somewhere between 0 and 2 centimeters. When the insect is 30 centimeters high, this fall ranges from 2 to 15 centimeters. By precisely adjusting the angle and speed of its spit, the archerfish solves a challenging physics problem.

Image Credit: Lawrence M. Dill. Original Source: Behav. Ecol. Sociobiol.
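Here’s a rough sketch of where numbers like these come from, using simple projectile motion. The launch speed and angle below are illustrative guesses rather than measured values:

```python
import numpy as np

def spit_fall(height, speed, angle_deg, g=9.81):
    """How far the jet drops below a straight-line aim by the time it covers
    the line-of-sight distance to an insect at the given height (all in meters)."""
    angle = np.radians(angle_deg)
    line_of_sight = height / np.sin(angle)   # straight-line distance to the target
    flight_time = line_of_sight / speed      # roughly, ignoring the slowdown along the way
    return 0.5 * g * flight_time**2

for h in (0.10, 0.30):
    fall = spit_fall(height=h, speed=3.0, angle_deg=45)   # assumed: 3 m/s at 45 degrees
    print(f"Insect at {h * 100:.0f} cm: spit falls ~{fall * 100:.1f} cm below the aim line")
```

With those guesses, the drop comes out to roughly 1 centimeter for a target 10 centimeters up and roughly 10 centimeters for a target 30 centimeters up, in the same ballpark as the ranges in the graph.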

And the story doesn’t end here. There’s something else that’s really strange about the archerfish’s hunting strategy. It has to do with how the water travels from the fish to its target. I learned about this counterintuitive phenomenon from Alberto Vailati, a professor of fluid dynamics at the University of Milan.

Let’s start with a high-speed video of the archerfish’s spitting strategy, which Alberto was kind enough to share with me. Here it is in gif form (slowed down about 14X):

Image Credit: Alberto Vailati

Now I’m going to take this video and analyze it using Tracker, an easy-to-use, open-source physics video analysis toolkit. The question I’m curious about is: how fast does the water stream travel from the fish’s mouth to the target?

Alberto provided me with the frame rate of the high-speed camera and the size of the fish – this let me set the distance scale and time scale in the video. Then, I tracked the blob at the tip of the squirted water.

If you track the blob versus time (top right in the above video), the graph looks like a straight line. The entire strike happens in just 4 hundredths of a second, or 10 times faster than the blink of an eye. It’s so blazingly fast that the water blob hasn’t had much time to fall as yet (that’s why the graph resembles a line and not a parabola).

But let’s look at this a little closer. Here’s a graph of the trajectory of the water blob, with its position (in meters) on the vertical axis and time (in seconds) on the horizontal axis. If we look only at the first half of the trajectory, we find that the speed of the blob (i.e. the slope of the line) is 2.88 meters/second.

Now let’s look at the second half of the trajectory.

The slope of the line tells us that the speed of the blob is now 3.27 meters/second.

Do you see what’s odd here? The water blob is speeding up — it’s accelerating. As it climbs higher, it moves faster. This is in stark contrast to, say, a basketball or a bullet, which, when shot upwards, will slow down because of gravity and air resistance — they decelerate.
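If you want to run this kind of check yourself, here’s a minimal sketch of the analysis, assuming you’ve exported the tracked points from Tracker as a simple two-column text file (the file name and data layout here are placeholders, not the actual export from this post):

```python
import numpy as np

# Minimal sketch: fit a straight line to position-vs-time points from the
# first and second halves of the tracked trajectory and compare the
# slopes, i.e. the speeds.
data = np.loadtxt("blob_track.csv", delimiter=",", skiprows=1)  # columns: t [s], position [m]
t, pos = data[:, 0], data[:, 1]

half = len(t) // 2
speed_early = np.polyfit(t[:half], pos[:half], 1)[0]   # slope of the first half
speed_late = np.polyfit(t[half:], pos[half:], 1)[0]    # slope of the second half

print(f"early: {speed_early:.2f} m/s, late: {speed_late:.2f} m/s")
# If the jet behaved like a thrown ball, speed_late would come out smaller.
# Here it comes out larger: the blob is speeding up.
```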

Maybe we messed up somehow? Let’s try another video. This one was recorded at 1000 frames per second, so there’s more data.

Image Credit: Alberto Vailati

Again, I used Tracker to track the water blob. The red circles that you see are just a thousandth of a second apart.

Like before, let’s first find the speed of the blob as it leaves the archerfish.

I get 3.64 meters/second.

Next, let’s find its speed near the top of the trajectory.

It’s 4.33 meters/second. Boom! Once again we find that the water blob is speeding up. How can that be?

This is exactly the phenomenon described by Alberto Vailati, Luca Zinnato, and Roberto Cerbino in their recent paper on the fluid dynamics of the archerfish water jet. They analyzed many different trajectories of water squirted out by archerfish, and concluded that the water consistently hits the target at a greater speed than when it left the fish. And they showed that the archerfish uses a few tricks from fluid dynamics to maximize the force with which the water hits its prey.

Here’s how it works.

The first trick has to do with bunching up. When the archerfish squirts a jet of water from its mouth, it ensures that the tail end of the jet is moving faster than the leading end. Imagine you’re standing in a queue and the people at the back start rushing forward. This would squeeze everyone into a smaller space. But the people have to go somewhere, and so the queue widens. Similarly, as the water jet is squished by the fast-moving water at its tail end, it would tend to widen into a pancake.

Image Credit: Vailati et al / PLOS One

This isn’t good for the archerfish, because water that is more spread out will strike the insect with less pressure (force divided by area). Fortunately for the archerfish, there’s another piece of physics that balances this tendency of the water to spread out. You might have noticed this second idea in your bathroom. As water trickles down a faucet, the stream sometimes breaks into small drops of water.

Image Credit: Chris 73 / Wikimedia Commons

This effect is known as the Plateau–Rayleigh instability. If you look really closely, any stream of water has tiny irregularities: some bumps that are thicker, and slight necks that are thinner. Surface tension, which arises from the tendency of water molecules to attract one another, causes these irregularities to grow. The necks shrink until they’re pinched off, and the stream breaks into distinct blobs. The Plateau–Rayleigh instability is also behind the phenomenon of ‘splashback’ during urination (now you know what to blame), and it’s exploited by inkjet printers to precisely control how ink lands on paper.
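A quick aside, for the mathematically inclined: the textbook result for an idealized, inviscid cylindrical jet of radius $R$ (not something specific to the archerfish paper) is that only ripples longer than the jet’s circumference grow, and the fastest-growing ripple is roughly nine radii long,

$$\lambda > 2\pi R, \qquad \lambda_{\text{fastest}} \approx 9\,R,$$

which is why a thin stream breaks into blobs spaced a few stream-widths apart.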

Image Credit: Mahmoudreza Shirinsokhan

On their own, these two mechanisms — bunching and the tendency to form blobs — aren’t of much use to the archerfish. But when wielded together, they make a powerful impact. The water jet would break up into blobs due to surface tension, but the archerfish has ensured that the blobs in the rear are moving faster than the ones in front. When these fast blobs catch up with the slower ones in front, they merge into a more massive uber-blob that moves faster than before, and strikes its target with a larger force in a shorter amount of time. The energy that was spread out over a stream is now concentrated in a blob.
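To convince yourself that momentum-conserving mergers really do speed up the front of the jet, here’s a toy calculation. The droplet speeds below are made-up numbers in the same ballpark as the ones measured above; this is a cartoon, not the model from Vailati et al.:

```python
import numpy as np

# Toy sketch: the jet leaves the mouth as a train of equal-mass droplets,
# with the ones at the tail launched faster than the ones at the front.
# Each time a droplet catches the one ahead, the two merge, conserving
# momentum. Watch what happens to the speed of the leading blob.
masses = np.ones(8)
speeds = np.linspace(2.9, 3.7, 8)   # m/s; index 0 = front of jet, last = tail

print(f"leading blob at launch: {speeds[0]:.2f} m/s")

# collapse the train into one blob, one momentum-conserving merge at a time
m, v = masses[0], speeds[0]
for mi, vi in zip(masses[1:], speeds[1:]):
    v = (m * v + mi * vi) / (m + mi)   # momentum-conserving merge
    m += mi

print(f"leading blob after merging: {v:.2f} m/s")
# The merged blob carries more mass and moves faster than the jet's front
# did at launch, so it hits the target harder and over a shorter time.
```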

This isn’t just a neat trick — it’s the key to the archerfish’s hunting strategy. At point-blank range, the water jet doesn’t deliver enough of a force to knock down an insect. But with this power amplification trick, the archerfish can reach a striking power that’s more than five times what any vertebrate muscle can produce.

The archerfish hunts with a working knowledge of motion, gravity, optics, and fluid dynamics, effortlessly solving problems that might keep a physics student up at night. It uses science to give itself superhuman (or rather, superfish) strength — like the Hawkeye of the animal kingdom, it’s always on target and never runs out of arrows.

References

How Archer Fish Achieve a Powerful Impact: Hydrodynamic Instability of a Pulsed Jet in Toxotes jaculatrix. Alberto Vailati, Luca Zinnato and Roberto Cerbino. PLOS One.

Refraction and the spitting behavior of the archerfish. Lawrence M. Dill. Behavioral Ecology and Sociobiology.

Predicting three-dimensional target motion: how archer fish determine where to catch their dislodged prey. Samuel Rossel, Julia Corlija and Stefan Schuster. The Journal of Experimental Biology.

Want to learn more about archerfish? Here’s Wired’s Mary Bates on How Archerfish Decide.

A New Kind of Food Science: How IBM Is Using Big Data to Invent Creative Recipes

Computers are constantly getting smarter. But can they ever be creative? A team of IBM researchers believes so. They’ve built a program that uses math, chemistry, and vast quantities of data to churn out new and unusual recipes.

To build their algorithm, the researchers modeled the steps that we might go through to develop creative ideas. First, you need to understand the problem that you’re trying to solve. Then, build expertise by learning everything you can about the problem. With this knowledge under your belt, generate a bunch of new ideas, and maybe even combine different types of ideas. Then pick the most creative ideas from the lot. Finally, implement your idea. While computers have executed many of these steps before, the key insight of the IBM group was to find a way to quantitatively gauge the creativity of a recipe, and to put all the different pieces together.

“I have dishes from the system all the time”, says Lav Varshney, who led IBM’s team to develop this novel recipe generation engine. “Some of the recipes that we created ourselves like the Kenyan Brussels sprout gratin, the Caymanian plantain dessert, and the Swiss-Thai asparagus quiche are very good. Others that we did jointly with our partners, the Institute of Culinary Education, like the Spanish almond crescent and the Ecuadorian strawberry dessert are world-class.”

So let’s see how IBM’s computational chef gets creative.

Ecuadorian Strawberry Dessert: one of the dishes served up by IBM’s computer chef. Credit: IBM Research

Step 1: Define the problem

When you start the program, you’re asked to pick a key ingredient, choose some regional cuisines you’d like to explore, and decide what type of dish you’re interested in (a soup, a quiche, etc.).

The program asks you to define the constraints of the recipe

Step 2: Learn everything you can about the problem

This is where all that data comes in. The researchers used natural language processing algorithms to scan and parse the text of millions of different recipes. Using this data, they convert a written recipe into a web of relationships, including the quantities of different ingredients and the processes that transform these ingredients into food. They also scanned Wikipedia to learn which ingredients are commonly used in various regional cuisines, and went through handbooks of flavor ingredients to learn which molecules are present in different food ingredients, along with the chemical structures of these molecules. Finally, they included data on how humans rate the ‘pleasantness’ of 70 different chemical compounds.

In the end, the researchers had amassed a vast computer-readable body of knowledge about human flavor preferences, regional recipes, and about the chemistry behind these recipes.
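To make this concrete, here’s a small sketch of what a machine-readable recipe along these lines might look like. The field names, quantities, and flavor compounds are my own illustrative guesses, not the actual schema or data used by the IBM team:

```python
# Hypothetical example of a recipe as a web of relationships: ingredients,
# quantities, flavor compounds, and the processes that transform them.
recipe = {
    "name": "Caymanian plantain dessert",   # a dish mentioned by Varshney
    "cuisine": "Caymanian",
    "dish_type": "dessert",
    "ingredients": {
        "plantain": {"quantity_g": 300, "flavor_compounds": ["isoamyl acetate", "eugenol"]},
        "brown sugar": {"quantity_g": 80, "flavor_compounds": ["maltol", "furaneol"]},
    },
    "steps": [
        ("slice", ["plantain"]),
        ("caramelize", ["brown sugar"]),
        ("bake", ["plantain", "brown sugar"]),
    ],
}
```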

Now the program is ready to start cooking.

Step 3: Generate ideas to solve the problem

Starting with the traditional recipes of a certain cuisine, the software generates millions of new recipe ideas that match the user’s preferences. It doesn’t just churn out these recipes at random. The recipes that are generated respect an empirical rule of thumb called the food pairing principle, which says that ingredients that pair well together in a recipe share common flavor molecules. (You can read more about food pairing in this great summary by Wired’s Sam Arbesman.)
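Here’s a hedged sketch of how you might score an ingredient pair under the food pairing principle: count the flavor compounds the two ingredients share. The compound lists below are tiny illustrative stand-ins, not a real flavor database:

```python
# Toy flavor-compound sets (illustrative stand-ins only)
flavor_compounds = {
    "strawberry": {"furaneol", "ethyl butyrate", "linalool", "gamma-decalactone"},
    "chocolate": {"furaneol", "vanillin", "linalool"},
    "tomato": {"hexanal", "beta-ionone", "(Z)-3-hexenal"},
}

def pairing_score(a, b):
    """Number of flavor compounds shared by ingredients a and b."""
    return len(flavor_compounds[a] & flavor_compounds[b])

print(pairing_score("strawberry", "chocolate"))  # 2 shared compounds: a promising pair
print(pairing_score("strawberry", "tomato"))     # 0 shared compounds (in this toy data)
```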

The new recipes are generated by ‘mutating’ the ingredients of existing recipes, and then fusing these with other recipes, resulting in all sorts of new hybrid concoctions. (This approach, known as a genetic algorithm, is modeled on the way genes mutate and recombine over generations.)
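For a feel of this mutate-and-fuse step, here’s a toy genetic-algorithm sketch. Recipes are reduced to bare sets of ingredients, and the pantry and parent recipes are invented; the real system also tracks quantities, processes, and much more:

```python
import random

PANTRY = ["plantain", "brown sugar", "coconut", "lime", "chili", "almond",
          "strawberry", "asparagus", "gruyere", "caraway"]

def mutate(recipe, rate=0.3):
    """Occasionally swap one ingredient for another from the pantry."""
    recipe = set(recipe)
    if random.random() < rate:
        recipe.discard(random.choice(sorted(recipe)))
        recipe.add(random.choice(PANTRY))
    return recipe

def crossover(recipe_a, recipe_b):
    """Fuse two recipes by sampling ingredients from both parents."""
    pool = sorted(set(recipe_a) | set(recipe_b))
    k = (len(recipe_a) + len(recipe_b)) // 2
    return set(random.sample(pool, k))

parent_1 = {"strawberry", "almond", "lime"}
parent_2 = {"coconut", "chili", "brown sugar"}
child = mutate(crossover(parent_1, parent_2))
print(child)   # e.g. {'strawberry', 'coconut', 'chili'}
```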

This is not a good selection algorithm. Credit: Randall Munroe / XKCD

As the comic suggests, it’s pretty impractical to find friends willing to try out a million bizarre new recipe ideas (deep-fried skittles, anyone?). So, instead, the program automates the selection process. This is the really clever bit.

Step 4: Select the best ideas

According to Varshney, “Many previous attempts at computational creativity have been good at the generative part of creativity but not at the selective part of creativity. I think our major contribution has been to show how [..] big data models can be useful not just for creating a billion ideas but for pointing out, say, the ten best ideas.”

So how does a computer decide which ideas are the most creative? The first thing you need is an operational definition of creativity. Ken Robinson defines creativity as “the process of having original ideas that have value.” The IBM researchers adopt a similar metric. According to them, a creative idea should be novel, and should be high-quality.

Let’s take on novelty first. You’d probably agree that peanut butter and jelly can go together. And you probably also think it’s ok to put mustard on a hot dog. That’s because you have a set of beliefs about the likelihood of various recipes. These beliefs are based on what you think tastes good, but they’re also strongly influenced by the foods that you’ve been exposed to.

But you might never have thought to put peanut butter on a hot dog. This recipe clashes with your beliefs about food, and this is what makes it surprising. In contrast, a hot dog with mustard has absolutely no effect on your beliefs about recipes. It’s a totally boring recipe.

The IBM scientists used a very similar idea: they measured the novelty of a recipe by quantifying how much it alters one’s existing recipe worldview. They accomplished this with a mathematical tool known as Bayesian surprise. (This tool has previously been used to identify which parts of a video people tend to pay the most attention to.) Here’s how Varshney explained the concept to me. “Bayesian surprise basically compares prior beliefs about food with new beliefs after the introduction of the newly created recipe; the greater the change in beliefs, the greater the surprise.”
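To see how Bayesian surprise works in practice, here’s a minimal sketch that measures the shift in beliefs with the Kullback–Leibler divergence. The probabilities are invented purely for illustration, and the real system works over far richer models of recipes:

```python
import numpy as np

def kl_divergence(posterior, prior):
    """Bayesian surprise: how far the updated beliefs moved from the prior."""
    posterior, prior = np.asarray(posterior), np.asarray(prior)
    return np.sum(posterior * np.log(posterior / prior))

prior = [0.70, 0.25, 0.05]   # made-up beliefs over {classic, fusion, weird} recipes

# "mustard on a hot dog" barely changes your beliefs: low surprise
posterior_hotdog = [0.71, 0.24, 0.05]
# "peanut butter on a hot dog" shifts them substantially: high surprise
posterior_pb_dog = [0.45, 0.30, 0.25]

print(kl_divergence(posterior_hotdog, prior))  # ~0.0003 nats
print(kl_divergence(posterior_pb_dog, prior))  # ~0.26 nats
```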

Spanish Almond Crescents – a computer generated recipe. Credit: IBM Research

Now, consider quality. Taste is a complex thing. Our tongue teases apart the basic tastes: sweet, salty, sour, bitter, and umami. But there’s so much more to our experience of food: whether the food is warm, creamy, gooey, chunky, slimy, the way it sits on your tongue, the way it crunches when you bite into it, how hungry you are, the memories that you associate a taste with, and so on.

The researchers argue that in spite of all this, the key to taste is really smell. “Work in neurogastronomy argues quite strongly that [smell] is the central contributor [to flavor perception]”, says Varshney. If that sounds counter-intuitive, think about how bland your food tastes when you have a bad cold: your taste receptors are working fine, but you can’t smell a thing.

But how does the program know how a dish smells? It all comes down to chemistry. The software goes through all the different flavor molecules in a recipe and looks up their chemical properties, which include technical terms like “topological polar surface area, heavy atom count, complexity, rotatable bond count, and hydrogen bond acceptor count.” By comparing these chemical properties to those of 70 other odor molecules, the researchers can predict how ‘pleasant’ a particular molecule will smell. They then mix the scents of different molecules together on the computer, and arrive at an overall ‘pleasantness’ of smell for each dish. Think about how surprising this is for a moment: they’re predicting how pleasant a dish will smell using the chemistry of the flavor molecules inside it!

I asked Varshney about this startling finding, and he responded, “I also find it surprising that it is possible to predict a hedonic percept like pleasantness from molecular properties like the number of heavy atoms, but there is much emerging work in hedonic psychophysics that shows exactly this. Going forward, we hope to incorporate finer aspects of human flavor perception into our models.”
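Here’s a hedged sketch of what a model along these lines might look like: a simple regression from molecular descriptors to pleasantness ratings. Every number below is a fabricated placeholder, and the actual model IBM trained on its ~70 rated odor compounds is surely more sophisticated:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Fabricated descriptor matrix: one row per odor compound
# columns: [heavy atoms, polar surface area, rotatable bonds, H-bond acceptors]
X = np.array([
    [12, 26.3, 4, 2],
    [ 8, 17.1, 2, 1],
    [15, 40.5, 6, 3],
    [10, 20.2, 3, 2],
])
y = np.array([7.1, 6.0, 4.2, 5.5])   # made-up pleasantness ratings (0-10)

# Fit a regularized linear model from molecular properties to pleasantness
model = Ridge(alpha=1.0).fit(X, y)

new_molecule = np.array([[11, 22.0, 3, 2]])
print(model.predict(new_molecule))   # predicted pleasantness of an unseen compound
```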

Step 5: Implement your idea

Finally, the software generates a list of recipes ranked using three categories: surprise, pleasantness of odor, and flavor pairings.

The final output lists recipes ranked by surprise, pleasantness, and pairing.

Now it’s finally time to leave your laptop and head to the kitchen.

As for Lav Varshney, he isn’t done exploring what the system has to offer. He says, “I was in Berlin this past weekend and we partnered with a chef there to do a full banquet which turned out really well. I particularly liked a grilled tomato on a saffron crouton, and a quark crème caramel with cranberry and caraway ice cream.”

In the near future, you might even see computer generated food in a store near you. Varshney adds, “We have been having discussions now with several large food manufacturing companies, food services companies, and flavor/fragrance houses about this technology.”


References

A Big Data Approach to Computational Creativity. Lav R. Varshney, Florian Pinel, Kush R. Varshney, Debarun Bhattacharjya, Angela Schoergendorfer, Yi-Min Chee.

Flavor networks and the principles of food pairing. Yong-Yeol Ahn, Sebastian E. Ahnert, James P. Bagrow, Albert-László Barabási.

Are There Fundamental Laws of Cooking? By Wired’s Sam Arbesman.

Want to use food pairing principles to invent new recipes at home? Try this handy website.

Empirical Zeal has moved to Wired

Hi! I’m incredibly excited to announce that Empirical Zeal has a new home, over on Wired’s all-star Science Blogging Network. Please update your bookmarks and RSS readers to the new location.

Thanks for reading Empirical Zeal! I really hope you follow me over to Wired, as I have a lot of exciting things planned for the future. So please stay with me!
