Are We Living in a Simulation?

In a recent episode of StarTalk, Neil deGrasse Tyson made some surprisingly strong statements about the hypothesis that we’re all living in a simulation. My first reaction was that Dr. Tyson had finally lost it.

But he’s in good company – Elon Musk firmly believes this as well:

Now, one could argue that Mr. Musk lost it a long time ago (read this great biography and decide for yourself) – but you can’t argue with his results, or with Dr. Tyson’s. These are smart people, and if they both give this idea serious consideration, perhaps we should as well.

The argument goes something like this: in 40 years we’ve gone from Pong to virtual-reality video games. In another 40 years, we will probably have games that are indistinguishable from reality. And as artificial intelligence advances, it’s entirely plausible that a short time later we will be capable of creating simulated brains that experience a simulated universe they cannot tell apart from the real thing. Extrapolating further, and assuming there is more than one advanced civilization out there in “base reality,” it’s much more likely that we’re part of a simulated universe than part of a real one.

This is not a new idea; questions about the nature of our existence go back to Plato. But even this latest interpretation goes back to 2003, to a paper by philosopher Nick Bostrom called “Are You Living in a Computer Simulation?” The argument is that we may be part of an “ancestor simulation” created by an advanced version of humanity seeking to understand itself by simulating all of its prior existence. If you crunch the numbers, you find that a planet-sized supercomputer should be able to fully simulate every human brain that ever existed many, many times every second – and presumably, constructing sensory inputs to those brains for some shared virtual environment is a cakewalk in comparison. Given that we seem to be on track to develop such capability, it’s much more likely we are simulations than the real thing. And when you consider we may not be the only intelligent species in the universe, the odds that we’re real get even worse.
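
To see where that “many, many times every second” comes from, here is the back-of-envelope arithmetic in Python, using the kind of round, order-of-magnitude figures Bostrom works with. Every input below is a rough assumption, not a measurement:

```python
# Back-of-envelope version of Bostrom's argument. All inputs are rough,
# order-of-magnitude assumptions in the spirit of his 2003 paper.

humans_ever_lived   = 1e11      # ~100 billion people, past and present
seconds_per_life    = 50 * 3e7  # ~50 years at ~30 million seconds per year
ops_per_brain_sec   = 1e17      # high-end estimate for emulating one brain
planet_computer_ops = 1e42      # operations/second for a planetary-mass computer

# Total operations to simulate every brain that has ever existed, once
total_ops = humans_ever_lived * seconds_per_life * ops_per_brain_sec

# How many complete runs of human mental history fit into one second
runs_per_second = planet_computer_ops / total_ops

print(f"Operations for one full ancestor simulation: {total_ops:.1e}")
print(f"Complete simulations per second:             {runs_per_second:,.0f}")
```

Even with the most demanding estimate for the brain, the hypothetical machine gets through tens of thousands of complete runs of human history every second.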

This would also provide a neat explanation for some scientific curiosities we’ve actually observed. There is real, observational evidence consistent with the idea that our 3-dimensional universe is a holographic projection from a 2-D reality. It would also explain the existence of the Planck length and Planck time – minimum units of space and time below which you can’t subdivide any further. That sounds an awful lot like the pixels and video frames of a computer game! It would also explain a lot of the weirdness found in quantum mechanics, where particles have no “real” state until they are observed. If you were making an ancestor simulation, why would you bother simulating the infinite tracts of the universe that your ancestors never interacted with at all? And it would explain the “Fermi paradox” – by some lines of reasoning, we really should have encountered extraterrestrial intelligence already. Perhaps the simulation we live within is only interested in human minds.

But there are some real problems with the simulation hypothesis. Dr. Matt O’Dowd of PBS Space Time was Dr. Tyson’s guest on the StarTalk episode I mentioned, and he’s posted a great analysis of it here:

The main problem is that the simulation hypothesis is non-falsifiable. There is no experiment you can even dream up that would prove we’re not in a simulation. This is true of most conspiracy theories – you can’t prove the moon landing wasn’t faked, you can’t prove I’m not an evil alien lizard establishing a new world order, and you can’t prove the Earth isn’t flat and part of some elaborate cover-up of its flatness. In general, you can’t prove a negative – so being unable to disprove something is most definitely not evidence in support of it.

But the simulation hypothesis is even worse. Not only can you not disprove it, you can’t prove it either! It is entirely a philosophical exercise, and that’s all it ever can be – unless the basement-dwelling gamer who created us decides to suddenly reveal himself. Perhaps the simulation hypothesis can also explain religion!

Beyond that, there are other problems. Even Nick Bostrom, creator of the “ancestor simulation” hypothesis, is on record as believing there’s less than a 50% chance of it being true. That’s because there are at least two equally plausible alternatives:

  • Advanced civilizations just aren’t interested enough in simulating their ancestors to bother with it.
  • No civilization survives long enough to create an ancestor simulation.

I find the former argument pretty compelling. Why would anyone expend the resources to build a planet-sized computer just to simulate their ancestors?

So for now, the simulation hypothesis is certainly a great topic for interesting conversations – but it can’t be more than that. Besides, if we were to discover that we are simulations, our dungeon-master might decide to pull the plug on us! Let’s hope our universe doesn’t wink out of existence once I hit the “publish” button here.

Image credit: iStock.com / cobalt

Decoding the Brain’s Facial Recognition

In what’s being called “a major breakthrough that is destined to be famous for as long as people read about neuroscience,” researchers at Pasadena’s California Institute of Technology have successfully reconstructed facial images by monitoring just 205 neurons in monkey brains.

It was previously thought that facial recognition in the brain was much more complex – perhaps specific facial features were somehow encoded and matched against a database of the people you know, yielding instant recognition of the people who matter to you. But it turns out all it takes is a couple of hundred neurons to distill a face down to the features you need to recognize it.

The study correlated the output of neurons in the brain’s “face patch” with measurements of each face’s shape – for example, the distance between the eyes – and the color and texture of the skin. Nothing more. From the activity of these neurons alone, the researchers were able to reconstruct images of faces that human observers could match to the original photos 80% of the time.
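
The decoding itself is conceptually simple once you accept that each neuron’s firing rate is roughly a linear function of a handful of face measurements. Below is a minimal sketch of that kind of linear decoding in Python. The neuron and feature counts echo the numbers in the study, but everything else (the simulated faces, the tuning matrix, the noise level) is made up purely to illustrate the technique:

```python
import numpy as np

# Toy illustration of linear face decoding. Simulated data only -- this is a
# sketch of the general technique, not the study's actual code or recordings.

rng = np.random.default_rng(0)

n_neurons  = 205   # number of face-patch cells recorded in the study
n_features = 50    # shape/appearance measurements describing each face
n_train    = 2000  # simulated training faces

# Pretend each neuron's firing rate is a noisy linear function of face features
tuning = rng.normal(size=(n_features, n_neurons))

def firing_rates(faces):
    return faces @ tuning + rng.normal(scale=0.5, size=(len(faces), n_neurons))

train_faces = rng.normal(size=(n_train, n_features))
train_rates = firing_rates(train_faces)

# Fit a linear decoder: read face features back out of the firing rates
decoder, *_ = np.linalg.lstsq(train_rates, train_faces, rcond=None)

# Reconstruct unseen faces from their neural responses alone
test_faces    = rng.normal(size=(5, n_features))
reconstructed = firing_rates(test_faces) @ decoder

corr = np.corrcoef(test_faces.ravel(), reconstructed.ravel())[0, 1]
print(f"Correlation between true and decoded face features: {corr:.2f}")
```

In the real study, the decoded feature vector is then rendered back into an image of a face; the point here is just how little machinery a linear readout needs.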

These findings were reported yesterday in the journal Cell.

As a computer scientist, I find these results especially exciting. They suggest that a task as complex as facial recognition can be accomplished with hardly any “hardware” – the magic is in the algorithms our brains have evolved. That means artificial intelligence may be closer than we thought, if we can continue to crack this code.


Image credit: iStock.com / bowie15

Google’s AlphaGo Beats the Best Player in the World

Last year, Google’s artificial intelligence program AlphaGo beat a Korean Go master, and it was big news. Today the news is even bigger – AlphaGo has beaten the best human Go player in the world, 19-year-old Ke Jie of China.

This is a big deal because, unlike chess, Go has far too many possible positions to brute-force your way through – you can’t simply search every move your opponent might make and find the optimal counter to whatever she may be doing. That’s why chess programs have been kicking my butt since I was 12, but computers playing Go at a champion level is a recent thing.

Instead, AlphaGo’s deep learning algorithm trains by playing game after game against itself, learning as it goes which sorts of patterns result in an advantage. Just as with humans, practice makes perfect – and it can practice 24/7. Its play is now described as very human-like, which perhaps shouldn’t be surprising, because finding patterns in training data is pretty much all our brains do. The difference is that a computer never forgets a pattern it’s learned – well, unless you pull its plug!
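
To make the “practice against itself” idea concrete, here is a toy self-play learner in Python: a tabular learner for tic-tac-toe that nudges its value estimates toward whatever outcomes its own games produce. It is nothing like AlphaGo’s actual architecture, which pairs deep neural networks with Monte Carlo tree search, but the learning loop is the same in spirit:

```python
import random
from collections import defaultdict

# Toy self-play learner for tic-tac-toe: a tabular stand-in for the idea only.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
values = defaultdict(lambda: 0.5)   # estimated win chance of each position

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

def place(board, move, player):
    return board[:move] + player + board[move + 1:]

def choose_move(board, player, epsilon=0.1):
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < epsilon:
        return random.choice(moves)   # explore: try something new
    # exploit: pick the move whose resulting position has looked best so far
    return max(moves, key=lambda m: values[place(board, m, player)])

def self_play_game(alpha=0.2):
    board, player, history = "." * 9, "X", {"X": [], "O": []}
    while winner(board) is None:
        board = place(board, choose_move(board, player), player)
        history[player].append(board)
        player = "O" if player == "X" else "X"
    result = winner(board)
    for p in "XO":   # nudge every visited position toward the observed outcome
        target = 0.5 if result == "draw" else (1.0 if result == p else 0.0)
        for state in history[p]:
            values[state] += alpha * (target - values[state])
    return result

results = [self_play_game() for _ in range(20000)]
print("Draw rate over the last 1,000 games:",
      results[-1000:].count("draw") / 1000)
```

After enough games the value table encodes which patterns tend to win, and play drifts toward sensible, mostly drawn tic-tac-toe. It is the same feedback loop, minus the deep networks and tree search that let AlphaGo scale it up to Go.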

Does this mean artificial intelligence is that much closer to taking over the world and enslaving its human creators? Well, yes and no. AI is still limited to learning how to get really good at very narrow problems – like keeping a car within its lane, figuring out what temperature you’d like your house to be at, or playing Go. Think of them as idiot savants, except they’re even less than idiots – they know nothing other than the data you’ve trained them with, and only within the context of the objective you’ve given them. But like all technology, it can be dangerous in the wrong hands – a human who trains an AI with some nefarious cyber-warfare goal could do a number on humanity, even today.

Copyright 2017 Sundog Education, a brand of Sundog Software LLC.