What is knowledge?

In the discussion on the last post on measurement, the definition of knowledge came up a few times.  That’s dredged up long-standing thoughts I have about knowledge, which I’ve discussed with some of you before, but that I don’t think I’ve ever actually put in a post.

The ancient classic definition of knowledge is justified true belief.  This definition is simple and feels intuitively right, but it’s not without issues.  I think the effectiveness of a definition lies in how well it enables us to distinguish between things that meet it and things that violate it.  In the case of “justified true belief”, its effectiveness hinges on how we define “justified”, “true”, and “belief”.

How do we justify a particular proposition?  Of course, this is a vast subject, with the entire field of epistemology dedicated to arguing about it.  But it seems like the consensus arrived at in the last 500 years, at least in scientific circles, is that both empiricism and rationalism are necessary, but that neither by itself is sufficient.  Naive interpretations of observations can lead to erroneous conclusions.  And rationalizing from your armchair is impotent if you’re not informed of the latest observations.  So justification seems to require both observation and reason, measurement and logic.

The meaning of truth depends on which theory of truth you favor.  The one most people jump to is correspondence theory, that what is true is what corresponds with reality.  The problem with this outlook is that it only works from an omniscient viewpoint, which we never have.  In the case of defining knowledge, it sets up a loop: we know whether a belief is knowledge by knowing whether the belief is true or false, which we know by knowing whether the belief about that belief is true or false, which we know by…  Hopefully you get the picture.

We could dispense with the truth requirement and simply define knowledge as justified belief, but that doesn’t seem right.  Prior to Copernicus, most natural philosophers were justified in saying they knew that the sun and planets orbit the earth.  Today we say that that belief was not knowledge.  Why?  Because it wasn’t true.  How do we know that?  Well, we have better information.  You could say that our current beliefs about the solar system are more justified than the beliefs of 15th century natural philosophers.

So maybe we could replace “justified true belief” with “currently justified belief” or perhaps “belief that is justified and not subsequently overturned with greater justification.”  Admittedly, these aren’t nearly as catchy as the original.  And they seem to imply that knowledge is a relative thing, which some people don’t like.

The last word, “belief”, is used in a few different ways in everyday language.  We often say “we believe” something when we really mean we hope it is true, or we assume it’s true.  We also often say we “believe in” something or someone when what we really mean is we have confidence in it or them.  In some ways, this usage is an admission that the proposition we’re discussing isn’t very justified, but we want to sell it anyway.

But in the case of “justified true belief”, I think we’re talking about the version where our mental model holds the proposition to be true.  In this version, if we believe it, if we really believe it, then don’t we think it’s knowledge, even if it isn’t?

Personally, I think the best way to look at this is as a spectrum.  All knowledge is belief, but not all belief is knowledge, and it isn’t a binary thing.  A belief can have varying levels of justification.  The more justified it is, the more it’s appropriate to call it knowledge.  But at any time, new observations might contradict it, and it would then retroactively cease to have ever been knowledge.

Someone could quibble here, making a distinction between ontology and epistemology, between what is reality, and what we can know about reality.  Ontologically, it could be argued that a particular belief is or isn’t knowledge regardless of whether we know it’s knowledge.  But we can only ever have theories about ontology, theories that are always subject to being overturned.  And a rigid adherence to a definition that requires omniscience to ever know whether a belief fits the bill effectively makes it impossible for us to know whether that belief is knowledge.

Seeing the distinction between speculative belief and knowledge as a spectrum pragmatically steps around this issue.  But again, this means accepting that what we label as knowledge is, pragmatically, something relative to our current level of information.  In essence, it makes knowledge belief that we currently have good reason to feel confident about.

What do you think?  Is there a way to avoid the relative outlook?  Is there an objective threshold where we can authoritatively say a particular belief is knowledge?  Is there an alternative definition of knowledge that avoids these issues?

Are there things that are knowable but not measurable?

It’s a mantra for many scientists, not to mention many business managers, that if you can’t measure it, it’s not real.  On the other hand, I’ve been told by a lot of people, mostly non-scientists, and occasionally humanistic scholars including philosophers, that not everything knowable is measurable.

But what exactly is a measurement?  My intuitive understanding of the term fits, more or less, with this Wikipedia definition:

Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events.

There’s a sense that measurement is a precise thing, usually done with standard units, such as kilograms, meters, or currency denominations.  But Doug Hubbard argues in an interview with Julia Galef, as well as in his book How to Measure Anything, that measurement should be thought of as a reduction in uncertainty.  More precisely, he defines measurement as:

A quantitatively expressed reduction of uncertainty based on one or more observations.

Hubbard, Douglas W., How to Measure Anything: Finding the Value of Intangibles in Business (p. 31). Wiley. Kindle Edition.

The observation part is crucial.  Hubbard argues that, for anything we care about, there is a difference between what we’ll observe if that thing happens and what we’ll observe if it doesn’t.  Figure out this difference, define it carefully, and you have the basis to measure anything, at least anything knowable in this world.  The more the differences can be defined with observable intermediate stages, the more precise the measurement can be.

One caveat: just because it’s possible to measure anything knowable doesn’t mean it’s always practical, that it is cost effective to do so.  Hubbard spends a lot of time in the early parts of his book discussing how to figure out the value of information, to decide whether the cost of measuring something is worth it.

In many cases, precise measurement may not be practical, but not all measurements must be precise in order to be useful.  Precision is always a matter of degree since we never get 100% accurate measurements, not even in the most sophisticated scientific experiments.  There’s always a margin of error.

Measuring some things may only be practical in a very coarse-grained manner, but if it reduces uncertainty, then it’s still a measurement.  If we have no idea what’s currently happening with something, then any observations which reduce that uncertainty count as measurements.  For example, if we have no idea what the life expectancy is in a certain locale, and we make observations which reduce the range to, say, 65-75 years, we may not have a very precise measurement, but we still have more than what we started with.
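
For a toy illustration of what a “quantitatively expressed reduction of uncertainty” might look like in that life expectancy example, here’s a minimal sketch.  The ages and the simple normal approximation are my own assumptions for illustration, not anything from Hubbard’s book:

```python
import math
import statistics

# Hypothetical ages at death pulled from a handful of records in the locale.
observed_ages = [68, 74, 71, 66, 77, 70, 73, 69]

mean = statistics.mean(observed_ages)
sem = statistics.stdev(observed_ages) / math.sqrt(len(observed_ages))

# Rough 90% interval for the mean, using a normal approximation (z = 1.645).
low, high = mean - 1.645 * sem, mean + 1.645 * sem
print(f"Life expectancy roughly {low:.0f}-{high:.0f} years")
```

Eight made-up records take us from “no idea” to a rough 69-73 year estimate for the average.  More observations would narrow it further, but even this coarse interval is a measurement in Hubbard’s sense.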

Even in scenarios where only one observation is possible, the notorious sample of one, Hubbard points out that the probability of that one sample coming from the majority of the population is 75%.  (This actually matches my intuitive sense of things, and will make me a little more confident next time I talk about extrapolating possible things about extraterrestrial life using only Earth life as a guide.)
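
As I read it, this is Hubbard’s single-sample majority rule: if you start out maximally uncertain about what share of a population has some trait, a single random sample has a 75% chance of falling in the majority.  Here’s a quick Monte Carlo check of that claim; the uniform prior on the unknown proportion is the key assumption:

```python
import random

def single_sample_majority_rate(trials: int = 1_000_000) -> float:
    """Estimate the chance that one random sample falls in the population's majority."""
    hits = 0
    for _ in range(trials):
        p = random.random()                  # unknown share of the population with trait A
        sample_has_a = random.random() < p   # one random draw from that population
        if sample_has_a == (p > 0.5):        # does the lone sample match the majority?
            hits += 1
    return hits / trials

print(single_sample_majority_rate())  # converges on ~0.75
```

Under that uniform prior, the exact value is the average of max(p, 1−p) over p from 0 to 1, which works out to 3/4.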

So, is Hubbard right?  Is everything measurable?  Or are there knowable things that can’t be measured?

One example I’ve often heard over the years is love.  You can’t measure, supposedly, whether person A loves person B.  But using Hubbard’s guidelines, is this true?  If A does love B, wouldn’t we expect their behavior toward B to be significantly different than if they didn’t?  Wouldn’t we expect A to want to spend a lot of time with B, to do them favors, to take care of them, etc?  Wouldn’t that behavior enable us to reduce the uncertainty from 50/50 (completely unknown) to knowing the answer with, say, an 80% probability?
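
To make that 50/50-to-80% shift concrete, here’s a minimal Bayesian sketch.  The two observations and their likelihoods are made-up numbers, purely to show how behavioral evidence can move a coin-flip prior toward 80%:

```python
def update(prior: float, p_obs_if_loves: float, p_obs_if_not: float) -> float:
    """Bayes' rule for a single yes/no observation."""
    numerator = prior * p_obs_if_loves
    return numerator / (numerator + (1 - prior) * p_obs_if_not)

belief = 0.5  # complete uncertainty about whether A loves B
# Each pair: (P(observation | loves), P(observation | doesn't love)) -- illustrative guesses.
observations = [
    (0.8, 0.3),   # A seeks out B's company most days
    (0.7, 0.4),   # A does unprompted favors for B
]
for p_if_loves, p_if_not in observations:
    belief = update(belief, p_if_loves, p_if_not)

print(round(belief, 2))  # ~0.82 with these numbers
```

The point isn’t the specific numbers, just that each observed behavior is more likely if A loves B than if they don’t, so the estimate climbs away from 50/50.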

(When probabilities are mentioned in these types of discussions, there’s almost always somebody who says that the probabilities here can’t be scientifically ascertained.  This implies that probabilities are objective things.  But, while admitting that philosophies on this vary, Hubbard argues that probabilities are relative to the perspective of an observer.  Something that I might only be able to know with a 75% chance of being right, you may be able to know with a 90% chance if you have access to more information than I do.)

Granted, it’s conceivable for A to love B without showing any external signs of it.  We can never know for sure what’s in A’s mind.  But remember that we’re talking about knowable things.  If A loves B and never gives any behavioral indication of it (including discussing it), is their love for B knowable by anybody but A?

Another example that’s often put forward is the value of experience for a typical job.  But if experience does add value, people with it should perform better than those without it in some observable manner.  If there are quantifiable measurements of how well someone is doing in a job (productivity, sales numbers, etc), the value of their experience should show up somewhere.

But what other examples might there be?  Are there ones that actually are impossible to find a conceivable measurement for?  Or are we only talking about measurements that are hopelessly impractical?  If so, does allowing for very imprecise measurement make it more approachable?

Could a neuroscientist understand a microprocessor? Is that a relevant question?

A while back, Julia Galef on Rationally Speaking interviewed Eric Jonas, one of the authors of a study that attempted to use neuroscience techniques on a simple computer processor.

The field of neuroscience has been collecting more and more data, and developing increasingly advanced technological tools in its race to understand how the brain works. But can those data and tools ever yield true understanding? This episode features neuroscientist and computer scientist Eric Jonas, discussing his provocative paper titled “Could a Neuroscientist Understand a Microprocessor?” in which he applied state-of-the-art neuroscience tools, like lesion analysis, to a computer chip. By applying neuroscience’s tools to a system that humans fully understand (because we built it from scratch), he was able to reveal how surprisingly uninformative those tools actually are.

More specifically, Jonas looked at how selectively removing one transistor at a time (effectively creating a one-transistor-sized lesion) affected the behavior of three video games running on the chip: Space Invaders, Donkey Kong, and Pitfall.  The idea was to see how informative correlating a lesion with a change in behavior, a technique often used in neuroscience, would be for understanding how the chip generated game behavior.
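
In outline, the procedure is a simple loop: disable one transistor, run each game, note whether the behavior breaks, restore, repeat.  Here’s a rough sketch of that loop; the run_chip function and its interface are hypothetical stand-ins for the actual simulator used in the paper:

```python
from typing import Callable, Dict, List, Set

def single_lesion_study(num_transistors: int,
                        games: List[str],
                        run_chip: Callable[[Set[int], str], bool]) -> Dict[str, List[int]]:
    """For each game, collect the transistors whose removal breaks that game."""
    critical: Dict[str, List[int]] = {game: [] for game in games}
    for t in range(num_transistors):
        lesion = {t}                         # knock out exactly one transistor
        for game in games:
            if not run_chip(lesion, game):   # run_chip returns True if the game still runs
                critical[game].append(t)
    return critical
```

Grouping transistors by which games their removal breaks is the lesion-to-function mapping the paper then evaluates.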

As it turned out, not very informative.  From the transcript:

But we can then look on the other side and say: which transistors were necessary for the playing of Donkey Kong? And when we do this, we go through and we find that about half the transistors actually are necessary for any game at all. If you break that, then just no game is played. And half the transistors if you get rid of them, it doesn’t appear to have any impact on the game at all.

There’s just this very small set, let’s say 10% or so, that are … less than that, 3% or so … that are kind of video game specific. So there’s this group of transistors that if you break them, you only lose the ability to play Donkey Kong. And if you were a neuroscientist you’d say, “Yes! These are the Donkey Kong transistors. This is the one that results in Mario having this aggression type impulse to fight with this ape.”

Jonas makes an important point, one that just about any reputable neuroscientist would agree with: neuroscience is far from having a comprehensive understanding of how brains generate behavior.  And his actual views are quite nuanced.  Still, I think many people are overselling the results of this experiment.  There’s a sentiment that all the neuroscience work that’s currently being done is worthless, which I think is wrong.

The issue, which Jonas accepts but then largely dismisses, lies in the differences we think exist between how brains work and how computer chips work, specifically the hardware/software divide.  When we run software on a computer, we’re actually using layered machinery.  On one level is the hardware, but on another level, often just as sophisticated, if not more so, is the software.

To illustrate this, consider the two images below.  The first is the architecture of the old Intel 80386DX processor.  The second is the architecture of one of the most complicated software systems ever built: Windows NT.  (Click on either image to see them in more detail, but don’t worry about understanding the actual architectures.  I’m not going down the computer science rabbit hole here.)

[Image: Architecture of the Intel 80386DX processor.  Image credit: Appaloosa via Wikipedia]

[Image: Architecture of Windows NT.  Image credit: Grn wmr via Wikipedia]

The thing to understand is that the second system is built completely on the first.  If it occurred in nature, we’d probably consider the second system to be emergent from the first.  In other words, the second system is entirely a category of actions of the first system.  The second system is what the first system does (or more accurately, a subset of what it can do).

This works because the first system is a general purpose computing machine.  Windows is just one example of vast ephemeral machines built on top of general computing ones.  Implementing these vast software machines is possible because the general computing machine is very fast, roughly a million times faster than biological nervous systems.  This is why virtually all artificial neural networks, until recently, were implemented as software, not in hardware (as they are in living systems).

However, a performance optimization that always exists for engineers who control both the hardware and software of a system is to implement functionality in hardware.  Doing so often improves performance substantially, since it moves that functionality down to a more primal layer.  This is why researchers are now starting to implement neural networks at the hardware level.  (We don’t implement everything in hardware because doing so would require a lot more hardware.)

Now, imagine that the only hardware an engineer had was a million times slower than current commercial systems.  The engineer, tasked with creating the same overall systems, would be forced to optimize heavily by moving substantial functionality into the hardware.  Much more of the system’s behavior would then be modules in the actual hardware, rather than modules in a higher level of abstraction.

In other words, we would expect that more of a brain’s functionality would be in its physical substrate, rather than in some higher abstraction of its behavior.  As it turns out, that’s what the empirical evidence from the last century and a half of neurological case studies shows.  (The current wave of fMRI studies is only confirming this, and doing so with more granularity.)

Jonas argues that we can’t be sure that the brain isn’t implementing some vast software layer.  Strictly speaking, he’s right.  But the evidence we have from neuroscience doesn’t match the evidence he obtained by lesioning a 6502 processor.  In the case of brains, lesioning a specific region very often leads to specific function loss.  If the brain were a general purpose computing system, we would expect results similar to those with the 6502, but we don’t get them.

Incidentally, lesioning a 6502 to see the effect it has on, say, Donkey Kong, is a mismatch between abstraction layers.  Doing so seems more equivalent to lesioning my brain to see what effect it has on my ability to play Donkey Kong, rather than my overall mental capabilities.  I suspect half the lesions might completely destroy my ability to play any video games, and many others would have no effect at all, similar to the results Jonas got.

Lesioning the 6502 to see what deficits arise in its general computing functionality would be a much more relevant study.  This recognizes that the 6502 is a general computing machine, and should be tested as one, just as testing for brain lesions recognizes that a brain is ultimately a movement decision machine, not a general purpose computing one.  (The brain is still a computational system, just not a general purpose one designed to load arbitrary software.)

All of which is to say, while I think Jonas’ point about neuroscience being very far from a full understanding of the brain is definitely true, that doesn’t mean the more limited levels of understanding it is currently garnering are useless.  There’s a danger in being too rigid or binary in our use of the word “understanding”.  Pointing out how limited that understanding is may have some cautionary value, but it ultimately does little to move the science forward.

What do you think?  Am I just rationalizing the difference between brains and computer chips (as some proponents of this experiment argue)?  Is there evidence for a vast software layer in the brain?  Or is there some other aspect of this that I’m missing?

Merry Christmas

Still alive.  As I mentioned in the previous post back in October, the new job and family issues have been keeping me busy.  Hopefully I’ll get more time in the new year for blogging.  A couple of people have suggested that I consider shorter posts, aimed more at generating discussion, rather than waiting until I have the time to formulate my own carefully worked out theses.  I like this advice and may take it, although my previous attempts at shorter posts were not a roaring success.  We’ll see.

Anyway, just wanted to wish my online friends a happy holiday.  Whatever Christmas means to you, I hope you and your family are safe, comfortable, and enjoying the holiday season.

Merry Christmas!

Why I haven’t been posting lately

It’s been a while since I’ve posted.  It’s probably fair to say that my posting frequency has plummeted to the lowest level since I started this blog in 2013.  I feel obliged to offer an explanation.

First, we’ve been undergoing an epic reorganization at work.  In the early stages, this endeavor left me very unsettled on what my future work life might look like, to the extent that I was considering early retirement.  Eventually, it ended up that I’m going to be moving into a new job (in the same organization, the central IT shop for a university).

This is a good thing, but I’m going to be managing a much more technical area than I have in years, and that’s forcing me to immerse myself back into the details of database administration, application development, and enterprise integration, which won’t leave much room for a while to think about consciousness, philosophy, space, science, and many of the other things I often post about.

On top of that, my father passed away this weekend, a staggering emotional blow that currently has me adrift in a way I haven’t been in a long time.  I anticipate being occupied dealing with the emotional and financial fallout for a while.

So, just to let you know, I definitely have no intention of giving up blogging.  But posts may be thin for a while until I can get all this processed.  I think I’ll have some new insights on stress, emotion, and grief when I do start up again.

Hopefully more soon.  How are you guys doing?

Breakthroughs in imagination

When thinking about human history, it’s tempting to see some developments as inevitable.  Some certainly were, but the sheer amount of time before some of them took place seems to make them remarkable.

The human species, narrowly defined as Homo sapiens, is about 200,000 years old.  Some argue that it’s older, around 300,000 years, others that full anatomical modernity didn’t arrive until about 100,000 years ago.  Whichever definition and time frame we go with, the human species has been around far longer than civilization, spending well more than 90% of its existence in small hunter-gatherer tribes.  (If we broaden the definition of “humanity” to the overall Homo genus, then we’ve spent well over 99% of our history in that mode.)

For tens of thousands of years, no one really seemed to imagine the idea of a settled, sedentary lifestyle until around 10,000-12,000 years ago in the Middle East.  I’ve often wondered what those first settlers were thinking.  Did they have any idea of the world-changing significance of what they were doing?  More than likely, they were solving their own immediate problems and judged the solutions by the immediate payoff.

The earliest sedentary, or semi-sedentary, culture appears to have been a group we now call the Natufians.  Living on the east coast of the Mediterranean in what is now Israel, Lebanon, and Syria, they were in a nexus of animal migrations and, in their time, a lush environment.  Life for them was relatively good.  They appear to have gotten a sedentary lifestyle effectively for free, in other words, without having to farm for it.

Then the climate started to change: an event called the Younger Dryas cooled the world for a brief period (brief in geological time, over a millennium in human time), but it was long enough to endanger the easy lifestyle the Natufians had probably become used to.  After centuries or millennia of living in a sedentary environment, they likely had little or no knowledge of how to live the way their ancestors had.

Victims of circumstance, they were forced to innovate, and agriculture emerged.  Maybe.  This is only one possible scenario, but it strikes me as a very plausible one.  The earliest evidence of nascent agriculture reportedly appears in that region in that period.

[Image: Early proto-writing from Kish, c. 3500 BC.  Image credit: Locutus Borg via Wikipedia]

Another development that took a long time was writing.  The oldest settlements arose several thousand years before writing developed.

The traditional view of the development of writing was that it evolved from pictures.  But as Mark Seidenberg points out in his book, Language at the Speed of Sight, picture drawing is far more ancient than writing.  The oldest cave art goes back 40,000 years, but what we call writing only arose about 5000 years ago, in Mesopotamia according to most experts (although some Egyptologists insist the Egyptian system came first).

It appears that the mental jump from pictures to symbols representing concepts was not an easy transition.  What caused it?  Seidenberg presents an interesting theory developed by archaeologist Denise Schmandt-Besserat.

Starting around 8000 BC, people in the Middle East began using small clay figures, called “tokens” today, as an accounting tool.  The tokens were simple shapes such as cones, disks, or shell-like forms.  A token of a particular shape represented something like a sheep, or an amount of oil, or some other trade commodity.  Pragmatic limitations in producing the tokens kept their shapes simple, instead of being accurate detailed depictions of what they represented.

A number of tokens were placed in sealed clay containers, presumably one for each actual item.  The container was sent along with a trade shipment so the recipient would know they were receiving the correct items in the correct amounts.  In time, in order to know what kinds of tokens were in a particular container, a 2D impression, a picture of the token, was often made on the container, in essence a label indicating which tokens it contained.

It then gradually dawned on people that they could get by with just the labels, with the token shape and some indicator of quantity.  No container or actual physical tokens required.  According to the theory, written symbolic representation of concepts had arrived.

The earliest proto-writing systems were a mixture of symbols and pictures.  Over time, the picture portions did evolve into symbols, but only after the conceptual breakthrough of the symbols had already happened.

The early Bronze Age writing systems were difficult, requiring considerable skill to write or read.  Literacy was effectively a specialty skill, the province of a class of scribes who did the writing and later reading of messages and accounts.  It took additional millennia for the idea of an alphabet, with a symbol for each language sound, to take hold.

The earliest known alphabet was the Proto-Sinaitic script found in the Sinai peninsula dating to sometime around 1800-1500 BC.  It appears to have been the precursor to the later Canaanite script, which itself was a precursor to the Phoenician and Hebrew alphabets that arose around 1100-1000 BC.  The Phoenicians were sea traders and spread their alphabet around the Mediterranean.  The Greeks would adapt the Phoenician alphabet, add vowels to it (a necessity driven by the fact that Greek was a multisyllabic language, as opposed to the Semitic languages, which were dominated by monosyllabic words), and then use it to produce classical Greek civilization.

The development of these alphabets would lead to a relative explosion in ancient literature.  This is why studying Bronze Age societies (3300-1200 BC) is primarily an exercise in archaeology, but studying the later classical ages of Greece and Rome is primarily about studying historical narratives, supplemented by archaeology.

Why did so much of this take place in the Middle East?  Probably because, for thousands of years, the Middle East lay at the center of the world, a nexus of trading paths and ideas.  It seems entirely possible to me that some of these breakthroughs happened in other lands, but that we first find archaeological evidence for them in the Middle East because they were imported there.  The Middle East only lost this central role in the last 500 or so years, a result of the European Age of Exploration and the moving of world trade to the seas.

So, are there any new ideas, any new basic breakthroughs on the scale of agriculture or writing, that are waiting for us, that we simply haven’t conceived of yet?  On the one hand, you could argue that the invention of the printing press in the 15th century and the rise of the internet in the last couple of decades, with the dramatically increased collaboration they bring, have ensured that the low-hanging fruit has been picked.

On the other hand, you could also argue that all of these systems are built using our existing paradigms, paradigms so ingrained in our cognition that we simply may not be able to see the breakthroughs waiting to happen.  We don’t know what we don’t know.

It’s worth noting that the execution of agriculture and writing is not simple.  Most of us, if dropped onto an ancient farm, despite the techniques being much simpler than modern farming, would have no idea where to even begin.  Or know how to construct an appropriate alphabet for whatever language was in use at the time.  (Seidenberg points out that not all alphabets are useful for all languages.  The Latin alphabet this post is written in may be awkward for ancient Sumerian or Egyptian.)

It may be that the idea of farming or writing did occur to people in the paleolithic, but they simply had no conception of how to make it happen.  In this view, these seeming breakthroughs are really the result of incremental improvements, none of which individually were that profound, that eventually added up to something that was profound.  Consider again the two theories above on how farming and writing came about.  Both seem more plausible than one lone genius developing them out of nothing, primarily because they describe incremental improvements that eventually add up to a major development.

Ideas are important.  They are crucial.  But alone, without competence, without the underlying pragmatic knowledge, they are impotent.  On the other hand, steady improvements in competence often cause us to stumble on profound ideas.  I think that’s an important idea.

Unless of course, I’m missing something?

The extraordinarily low probability of intelligent life

Marc Defant gave a TEDx talk on the improbable events that had to happen in our planet’s history for us to eventually evolve, along with the implications for other intelligent life in the galaxy.

I find a lot to agree with in Defant’s remarks, although there are a couple points I’d quibble with.  The first, and I’m sure a lot of SETI (Search for Extraterrestrial Intelligence) enthusiasts will quickly point this out, is that we shouldn’t necessarily use the current lack of results from SETI as a data point.  It’s a big galaxy, and within the conceptual space where SETI could ever pay off, we shouldn’t necessarily expect it to have done so yet.

My other quibble is that Defant seems to present the formation of our solar system as a low probability event, or maybe he means a solar system with our current metallicity.  I can’t really see the case for either being unlikely.  There are hundreds of billions of stars in our galaxy, most with some sort of attendant solar system.  So I’m not sure where he’s coming from on that one.

My own starting point for this isn’t SETI, but the fact that we have zero evidence for Earth having ever been colonized.  If the higher estimated numbers of civilizations in the galaxy are correct, the older ones should be billions of years older than we are.  They’ve had plenty of time to have colonized the entire galaxy many times over, even if 1% of lightspeed is the best propagation rate.
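
A back-of-envelope calculation shows why the timescales matter here.  I’m assuming a galactic diameter of roughly 100,000 light years, a round number rather than a precise figure:

```python
# Rough crossing time for a settlement wavefront moving at 1% of lightspeed.
galaxy_diameter_ly = 100_000      # assumed galactic diameter in light years
speed_ly_per_year = 0.01          # 1% of lightspeed = 0.01 light years per year

crossing_time_years = galaxy_diameter_ly / speed_ly_per_year
print(f"{crossing_time_years:,.0f} years")  # 10,000,000 years
```

Even padding every hop with thousands of years to build new probes, the total stays in the tens of millions of years, a small fraction of the billions of years available to older civilizations.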

The usual response is that maybe they’re not interested in colonizing the galaxy, not even with their robotic progeny.  That might hold if there is one other civilization, but if there are thousands, hundreds, even a few dozen?  Across billions of years?  The idea that every other civilization would be uninterested in sending probes out across the galaxy seems remote, at least to me.

But to Defant’s broader point about the probability of intelligent life evolving, there are many events in our own evolutionary history that, if we were to rewind things, might never happen again.

Life seems to have gotten an early start on Earth.  Earth is roughly 4.54 billion years old, and the earliest fossils date to 3.7 billion years ago.  With the caveat that we’re unavoidably drawing conclusions from a sample of one planet’s history, the early start of life here seems promising for its likelihood under the right conditions.

But there are many other developments that seem far less certain.

One crucial step was the evolution of photosynthesis, at least 2.5 billion years ago.  The development of photosynthesis gave life a much more reliable energy source than what was available before, converting sunlight, water, and carbon dioxide into sugars.

And its waste product, oxygen, started the process of oxygenation, increasing the levels of oxygen in Earth’s atmosphere, which would be very important as time went on.  The early atmosphere didn’t have much oxygen.  Indeed, the rise of oxygen levels may have originally been a serious problem for the life that existed at the time.  But life adapted, eventually harnessing oxygen for much more efficient access to free energy.

The good news with photosynthesis is that there are multiple chemical pathways for it, and it’s possible it evolved multiple times, making it an example of convergent evolution.  That means photosynthesis might be a reasonably probable development.  Still, oxygen-producing photosynthesis doesn’t seem to have arisen until the Earth was close to halfway through its current history, which doesn’t make it seem very inevitable.

The rise of eukaryotes may be a more remote probability.  The earliest life forms were simple prokaryotes.  Eukaryotes, cells with organelles (complex internal compartments with specialized functions), arose 1.6-2.1 billion years ago.  All animal and plant cells are eukaryotes, making this development a crucial building block for later complex life.

Eukaryotes are thought to have been the result of one organism attempting to consume another but, instead of digesting it, entering into a symbiotic relationship with it.  This low probability accident may have happened only once, although no one knows for sure.

Yet another crucial development was sexual reproduction, arising 1-1.2 billion years ago, or when Earth was 73% of its current age.  Sexual reproduction tremendously increased the amount of variation in offspring, which arguably accelerated evolution.  Who knows how long subsequent developments might have taken without it?

Oxygen had been introduced with the rise of certain types of photosynthesis, but due to geological factors, oxygen levels remained relatively low by current standards until 800 million or so years ago, when they began to rise substantially, just in time for the development of complex life.  The Cambrian explosion, the sudden appearance of a wide variety of animal life 540-500 million years ago, would not have been possible without these higher oxygen levels.

Complex life (animals and plants) arose in the last 600-700 million years, after the Earth had reached 84% of its current age.  When you consider how contingent complex life is on all the milestones above, its development looks far from certain.  Life may be pervasive in the universe, but complex life is probably relatively rare.

Okay, but once complex life developed, how likely is intelligent life?  There are many more low probability events even within the history of animal life.

Earth’s environment just so happens to be mostly aquatic, providing a place for life to begin, but with enough exposed land to allow the development of land animals.  In general, land animals are more intelligent than marine ones.  (Land animals can see much further than marine ones, increasing the adaptive benefits of being able to plan ahead.)  A 100% water planet may have limited opportunities for intelligence to develop.  For example, mastering fire requires being in the atmosphere, not underwater.

Defant mentions the asteroid that took out the dinosaurs and gave mammals a chance to expand their ecological niche.  Without an asteroid strike of just the right size, mammals might not have ascended to their current role in the biosphere.  We might still be small scurrying animals hiding from the dinosaurs if that asteroid had never struck.

Of course, there have been a number of intelligent species that have evolved, not just among mammals but also among some bird species, the surviving descendants of dinosaurs.  Does this mean that, given the rise of complex life, human level intelligence is inevitable?  Not really.  While there are many intelligent species (dolphins, whales, elephants, crows, etc), the number of intelligent species that can manipulate the environment is much smaller, pretty much limited to the primates.

(Cephalopods, including octopuses, can manipulate their environment, but their short lives and marine environment appear to be obstacles for developing a civilization.)

Had our early primate ancestors not evolved to live in trees, developing a body plan to climb and swing among branches, we wouldn’t have the dexterity we have, the 3D vision, or the metacognitive ability to assess our confidence in making a particular jump or other move.  And had environmental changes not driven our later great ape ancestors to live in grasslands, forcing them to walk upright, and freeing their hands to carry things or manipulate the environment, a civilization-building species might never have developed.

None of this is to say that another civilization producing species can’t develop using an utterly different chain of evolutionary events.  The point is that our own chain is a series of many low probability events.  In the 4.54 billion years of Earth’s history, only one species, among the billions that evolved, ever developed the capability of symbolic thought, the ability to have language, art, mathematics, and all the other tools necessary for civilization.

Considering all of this, it seems like we can reach the following conclusions.  Microscopic single celled life is likely fairly pervasive in the universe.  A substantial subset of this life probably uses some form of photosynthesis.  But complex life is probably rare.  How rare we can’t really say with our sample of one, but much rarer than photosynthesis.

And intelligent life capable of symbolic thought, of building civilizations?  I think the data is telling us that this type of life is probably profoundly rare.  So rare that there’s likely not another example in our galaxy, possibly not even in the Local Group, or conceivably not even in the Laniakea supercluster.  The nearest other civilization may be hundreds of millions of light years away.

Alternatively, it’s possible that our sample size of one is utterly misleading us and there actually are hundreds or even thousands of civilizations in the galaxy.  If so, then given the fact that they’re not here, interstellar exploration, even using robots, may be impossible, or so monstrously difficult that hardly anyone bothers.  This is actually the scenario that SETI is banking on to a large extent.  If true, our best bet is to continue searching with SETI, since electromagnetic communication may be the only method we’ll ever have to interact with them.

What do you think?  Is there another scenario I’m missing here?
