Altered Carbon

Several years ago, I read Richard K. Morgan’s Takeshi Kovacs novels about a future where people’s minds are recorded in a device (called a “stack”) implanted just below the brain stem, essentially providing a form of mind uploading, and allowing people to survive the death of their body.  Kovacs, the protagonist of the series, is an ex-soldier, mercenary, criminal, and all-around bitter human being, who is used to inhabiting many different bodies.  The novels follow his grim adventures, first on Earth, and then on other planets.

This weekend, I binge-watched Netflix’s adaptation of the first novel, Altered Carbon.

It’s been several years since I read the books, but from what I can remember, the series broadly follows the first book, although it adds new content and new characters to fill out the 10-episode arc, and borrows material from the other two books, which in the process makes the story more interesting.  But it does preserve the issues Morgan explores in the novel, about what a society of people who can transfer to new bodies might look like, particularly one that retains sharp divisions between rich and poor.

The show does seem to moderate Morgan’s intense anti-religious sentiment.  I recall Kovacs being a staunch atheist in the books, and with the story told from his point of view, that outlook permeates.  But the show seems to take a more even-handed approach, showing the issues devout Catholics in this future have with the idea of being technologically resurrected while still presenting them as sympathetic characters.

As in the book, the reluctance of Catholics to be resurrected makes them uniquely vulnerable targets.  A non-Catholic citizen who is murdered can potentially be revived to testify against their murderer.  But given the Catholic stipulation that they are not to be revived, when they are murdered, they are dead.  A debate takes place in the background of the story on whether a law should be passed to allow law enforcement to revive anyone who is the victim of a crime, regardless of their religious preferences.

What the show doesn’t moderate, however, is the book’s grim noir character.  Kovacs seems more sympathetic than I recall him being in the books, more human and approachable, but the overall story’s very dark take on humanity remains.  This vision is dystopian, in a way that any Blade Runner fan should love.

Morgan’s version of mind uploading, where minds are recorded on the stack implanted in the body, preserves the jeopardy of the characters in the story.  Characters can be killed if their stack is destroyed (called “real death” in the story), and the stacks are frequently targeted in fight scenes.  Only the very rich are able to have themselves periodically backed up so that destruction of their stack doesn’t result in their real death.  (Although they have other vulnerabilities that get mentioned.)

The story also makes it clear that, although the technology is available, double-sleeving (being in two or more bodies at the same time) is illegal in this society, with the penalty being real death, although the reasons for this restriction are never discussed.  It seems to be one of many aspects of a repressive society.  (In the third book, I recall the suggestion that the Protectorate, the overall interstellar civilization in the books, is essentially holding back humanity by not allowing society to fully evolve with the technology.)

The show doesn’t explicitly address this, nor do I recall the books doing so, but the idea of the stack holding a person’s mind, so that when it’s implanted between the brain and the spinal cord it takes control of the body, is actually very dark when you think about it.  It leaves open the disturbing possibility that the body’s original consciousness is still in there somewhere, but totally cut off from its body.

I think there are also some serious scientific issues with whether the technology as described would work.  For a device to really function the way the stacks are described, it would need to intercept all sensory input, much of which isn’t routed through the brain stem.  Vision, for example, at least detailed vision, goes straight to the thalamus and then the occipital lobe.  That said, the details are never discussed, so there’s enough room to imagine that the stacks are part of an overall technology harness that reaches deep into the brain.

Anyway, if you’re interested in seeing what a human society with mind uploading might look like, and don’t mind a lot of violence, language, and sexual content, then you might want to check it out.


What is knowledge?

In the discussion on the last post on measurement, the definition of knowledge came up a few times.  That’s dredged up long-standing thoughts I have about knowledge, which I’ve discussed with some of you before, but that I don’t think I’ve ever actually put in a post.

The ancient classic definition of knowledge is justified true belief.  This definition is simple and feels intuitively right, but it’s not without issues.  I think the effectiveness of a definition lies in how well it enables us to distinguish between things that meet it and things that violate it.  In the case of “justified true belief”, its effectiveness hinges on how we define “justified”, “true”, and “belief”.

How do we justify a particular proposition?  Of course, this is a vast subject, with the entire field of epistemology dedicated to arguing about it.  But it seems like the consensus arrived at in the last 500 years, at least in scientific circles, is that both empiricism and rationalism are necessary, but that neither by itself is sufficient.  Naive interpretations of observations can lead to erroneous conclusions.  And reasoning from your armchair is impotent if you’re not informed on the latest observations.  So justification seems to require both observation and reason, measurement and logic.

The meaning of truth depends on which theory of truth you favor.  The one most people jump to is correspondence theory, that what is true is what corresponds with reality.  The problem with this outlook is that it only works from an omniscient viewpoint, which we never have.  In the case of defining knowledge, it sets up a loop: we know whether a belief is knowledge by knowing whether the belief is true or false, which we know by knowing whether the belief about that belief is true or false, which we know by…  Hopefully you get the picture.

We could dispense with the truth requirement and simply define knowledge as justified belief, but that doesn’t seem right.  Prior to Copernicus, most natural philosophers were justified in saying they knew that the sun and planets orbit the earth.  Today we say that that belief was not knowledge.  Why?  Because it wasn’t true.  How do we know that?  Well, we have better information.  You could say that our current beliefs about the solar system are more justified than the beliefs of 15th century natural philosophers.

So maybe we could replace “justified true belief” with “currently justified belief” or perhaps “belief that is justified and not subsequently overturned with greater justification.”  Admittedly, these aren’t nearly as catchy as the original.  And they seem to imply that knowledge is a relative thing, which some people don’t like.

The last word, “belief”, is used in a few different ways in everyday language.  We often say “we believe” something when we really mean we hope it is true, or we assume it’s true.  We also often say we “believe in” something or someone when what we really mean is we have confidence in it or them.  In some ways, this usage is an admission that the proposition we’re discussing isn’t very justified, but we want to sell it anyway.

But in the case of “justified true belief”, I think we’re talking about the version where believing a proposition means our mental model holds it to be true.  In this version, if we believe it, if we really believe it, then don’t we think it’s knowledge, even if it isn’t?

Personally, I think the best way to look at this is as a spectrum.  All knowledge is belief, but not all belief is knowledge, and it isn’t a binary thing.  A belief can have varying levels of justification.  The more justified it is, the more it’s appropriate to call it knowledge.  But at any time, new observations might contradict it, and it would then retroactively cease to have ever been knowledge.

Someone could quibble here, making a distinction between ontology and epistemology, between what is reality, and what we can know about reality.  Ontologically, it could be argued that a particular belief is or isn’t knowledge regardless of whether we know it’s knowledge.  But we can only ever have theories about ontology, theories that are always subject to being overturned.  And a rigid adherence to a definition that requires omniscience to ever know whether a belief fits the bill effectively makes it impossible for us to know whether that belief is knowledge.

Seeing the distinction between speculative belief and knowledge as a spectrum pragmatically steps around this issue.  But again, this means accepting that what we label as knowledge is, pragmatically, something relative to our current level of information.  In essence, it makes knowledge belief that we currently have good reason to feel confident about.

What do you think?  Is there a way to avoid the relative outlook?  Is there an objective threshold where we can authoritatively say a particular belief is knowledge?  Is there an alternative definition of knowledge that avoids these issues?


Are there things that are knowable but not measurable?

It’s a mantra for many scientists, not to mention many business managers, that if you can’t measure it, it’s not real.  On the other hand, I’ve been told by a lot of people, mostly non-scientists, and occasionally humanistic scholars including philosophers, that not everything knowable is measurable.

But what exactly is a measurement?  My intuitive understanding of the term fits, more or less, with this Wikipedia definition:

Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events.

There’s a sense that measurement is a precise thing, usually done with standard units, such as kilograms, meters, or currency denominations.  But Doug Hubbard argues in an interview with Julia Galef, as well as in his book How to Measure Anything, that measurement should be thought of as a reduction in uncertainty.  More precisely, he defines measurement as:

A quantitatively expressed reduction of uncertainty based on one or more observations.

Hubbard, Douglas W., How to Measure Anything: Finding the Value of Intangibles in Business (p. 31), Wiley, Kindle Edition.

The observation part is crucial.  Hubbard argues that, for anything we care about, there is a difference between what we’ll observe if that thing happens and what we’ll observe if it doesn’t.  Figure out this difference, define it carefully, and you have the basis to measure anything, at least anything knowable in this world.  The more the differences can be defined with observable intermediate stages, the more precise the measurement can be.

One caveat: just because it’s possible to measure anything knowable doesn’t mean it’s always practical, that it is cost effective to do so.  Hubbard spends a lot of time in the early parts of his book discussing how to figure out the value of information, to decide whether the cost of measuring something is worth it.

In many cases, precise measurement may not be practical, but not all measurements must be precise in order to be useful.  Precision is always a matter of degree since we never get 100% accurate measurements, not even in the most sophisticated scientific experiments.  There’s always a margin of error.

Measuring some things may only be practical in a very coarse-grained manner, but if it reduces uncertainty, then it’s still a measurement.  If we have no idea what’s currently happening with something, then any observations which reduce that uncertainty count as measurements.  For example, if we have no idea what the life expectancy is in a certain locale, and we make observations which reduce the range to, say, 65-75 years, we may not have a very precise measurement, but we still have more than we started with.
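To make that concrete, here’s a minimal sketch (in Python, with made-up lifespan numbers purely for illustration) of how even a handful of observations turns “no idea” into a rough interval:

```python
import statistics

# Hypothetical illustration: a handful of observed lifespans in a locale we
# initially know nothing about.  Even a crude 90% interval from a small
# sample is a measurement in Hubbard's sense: it narrows the range.
observed_lifespans = [68, 72, 81, 59, 74, 77, 70, 66]   # made-up data

mean = statistics.mean(observed_lifespans)
sem = statistics.stdev(observed_lifespans) / len(observed_lifespans) ** 0.5
low, high = mean - 1.645 * sem, mean + 1.645 * sem       # rough normal-based 90% interval

print(f"estimated mean life expectancy: {low:.0f}-{high:.0f} years")
```

A careful analysis would use a t-based interval for so few samples, which would be a bit wider, but the point stands: the range shrinks with each observation.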

Even in scenarios where only one observation is possible, the notorious sample of one, Hubbard points out that the probability of that one sample reflecting the majority of the population is 75%.  (This actually matches my intuitive sense of things, and will make me a little more confident next time I talk about extrapolating possible things about extraterrestrial life using only Earth life as a guide.)
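If I remember the book correctly, that figure comes from assuming the unknown population proportion is uniformly distributed: a single random draw then matches the majority roughly 75% of the time (the integral of max(p, 1-p) over [0, 1] is 3/4).  A quick Monte Carlo check of the claim, under that assumption:

```python
import random

def single_sample_matches_majority(trials=1_000_000):
    """Estimate how often one random draw matches the majority type."""
    hits = 0
    for _ in range(trials):
        p = random.random()                  # unknown proportion of type A, uniform on [0, 1]
        draw_is_a = random.random() < p      # one random sample from the population
        majority_is_a = p > 0.5              # which type actually dominates
        if draw_is_a == majority_is_a:
            hits += 1
    return hits / trials

print(single_sample_matches_majority())      # ~0.75
```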

So, is Hubbard right?  Is everything measurable?  Or are there knowable things that can’t be measured?

One example I’ve often heard over the years is love.  You can’t measure, supposedly, whether person A loves person B.  But using Hubbard’s guidelines, is this true?  If A does love B, wouldn’t we expect their behavior toward B to be significantly different than if they didn’t?  Wouldn’t we expect A to want to spend a lot of time with B, to do them favors, to take care of them, etc?  Wouldn’t that behavior enable us to reduce the uncertainty from 50/50 (completely unknown) to knowing the answer with, say, an 80% probability?
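As a toy illustration of that kind of update, here’s a minimal Bayesian sketch; the likelihood numbers are invented for the example, not anything from Hubbard:

```python
# Start at 50/50 on whether A loves B, then update on an observed pattern
# of attentive behavior.  The likelihoods below are assumptions for the sketch.
prior_love = 0.5
p_behavior_given_love = 0.8       # assumed: attentive behavior is common if A loves B
p_behavior_given_not = 0.2        # assumed: and much rarer if A does not

posterior_love = (p_behavior_given_love * prior_love) / (
    p_behavior_given_love * prior_love
    + p_behavior_given_not * (1 - prior_love)
)
print(posterior_love)             # -> 0.8, up from the 0.5 we started with
```

The exact numbers don’t matter; the point is that observable behavior moves us off 50/50.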

(When probabilities are mentioned in these types of discussions, there’s almost always somebody who says that the probabilities here can’t be scientifically ascertained.  This implies that probabilities are objective things.  But, while admitting that philosophies on this vary, Hubbard argues that probabilities are from the perspective of an observer.  Something that I might only be able to know with a 75% chance of being right, you may be able to know with 90% if you have access to more information than I do.)

Granted, it’s conceivable for A to love B without showing any external signs of it.  We can never know for sure what’s in A’s mind.  But remember that we’re talking about knowable things.  If A loves B and never gives any behavioral indication of it (including discussing it), is their love for B knowable by anybody but A?

Another example that’s often put forward is the value of experience for a typical job.  But if experience does add value, people with it should perform better than those without it in some observable manner.  If there are quantifiable measurements of how well someone is doing in a job (productivity, sales numbers, etc), the value of their experience should show up somewhere.
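A hypothetical sketch of that kind of check, with invented numbers (statistics.correlation requires Python 3.10 or later):

```python
import statistics

# Invented data: years of experience vs. monthly sales for a handful of reps.
# If experience adds value, some positive association should show up.
experience_years = [0, 1, 2, 4, 5, 8, 10, 12]
monthly_sales = [20, 24, 22, 30, 29, 35, 33, 40]

r = statistics.correlation(experience_years, monthly_sales)
print(f"correlation between experience and sales: {r:.2f}")   # ~0.95 with these made-up numbers
```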

But what other examples might there be?  Are there ones that actually are impossible to find a conceivable measurement for?  Or are we only talking about measurements that are hopelessly impractical?  If so, does allowing for very imprecise measurement make it more approachable?


Could a neuroscientist understand a microprocessor? Is that a relevant question?

A while back, Julia Galef on Rationally Speaking interviewed Eric Jonas, one of the authors of a study that attempted to use neuroscience techniques on a simple computer processor.

The field of neuroscience has been collecting more and more data, and developing increasingly advanced technological tools in its race to understand how the brain works. But can those data and tools ever yield true understanding? This episode features neuroscientist and computer scientist Eric Jonas, discussing his provocative paper titled “Could a Neuroscientist Understand a Microprocessor?” in which he applied state-of-the-art neuroscience tools, like lesion analysis, to a computer chip. By applying neuroscience’s tools to a system that humans fully understand (because we built it from scratch), he was able to reveal how surprisingly uninformative those tools actually are.

More specifically, Jonas looked at how selectively removing one transistor at a time (effectively creating a one-transistor-sized lesion) affected the behavior of three video games: Space Invaders, Donkey Kong, and Pitfall.  The idea was to see how informative correlating a lesion with a change in behavior, a technique often used in neuroscience, would be in understanding how the chip generated game behavior.
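In rough outline, the procedure amounts to something like the sketch below.  The simulator interface here is hypothetical, a stand-in to show the shape of the analysis rather than the actual tooling from the paper.

```python
GAMES = ["Space Invaders", "Donkey Kong", "Pitfall"]

def lesion_study(make_simulator, num_transistors):
    """Disable one transistor at a time and record which games still run.

    `make_simulator` is a hypothetical factory returning a fresh chip
    simulation with a `disable_transistor(i)` method and a
    `plays(game) -> bool` check; it stands in for the real tooling.
    """
    results = {}
    for t in range(num_transistors):
        sim = make_simulator()
        sim.disable_transistor(t)              # the single-transistor "lesion"
        results[t] = {game: sim.plays(game) for game in GAMES}
    return results

def classify(results):
    """Bucket each transistor by how its loss affected the three games."""
    buckets = {"breaks every game": 0, "no visible effect": 0, "game specific": 0}
    for outcomes in results.values():
        working = sum(outcomes.values())
        if working == 0:
            buckets["breaks every game"] += 1
        elif working == len(GAMES):
            buckets["no visible effect"] += 1
        else:
            buckets["game specific"] += 1
    return buckets
```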

As it turned out, not very informative.  From the transcript:

But we can then look on the other side and say: which transistors were necessary for the playing of Donkey Kong? And when we do this, we go through and we find that about half the transistors actually are necessary for any game at all. If you break that, then just no game is played. And half the transistors if you get rid of them, it doesn’t appear to have any impact on the game at all.

There’s just this very small set, let’s say 10% or so, that are … less than that, 3% or so … that are kind of video game specific. So there’s this group of transistors that if you break them, you only lose the ability to play Donkey Kong. And if you were a neuroscientist you’d say, “Yes! These are the Donkey Kong transistors. This is the one that results in Mario having this aggression type impulse to fight with this ape.”

While I think Jonas makes an important point, one that just about any reputable neuroscientist would agree with, that neuroscience is far from having a comprehensive understanding of how brains generate behavior, and while his actual views are quite nuanced, I think many people are overselling the results of this experiment.  There’s a sentiment that all the neuroscience work currently being done is worthless, which I think is wrong.

The issue, which Jonas accepts but then largely dismisses, lies in the differences between how we think brains work and how computer chips work, specifically the hardware/software divide.  When we run software on a computer, we’re actually using layered machinery.  On one level is the hardware, but on another level, often just as sophisticated, if not more so, is the software.

To illustrate this, consider the two images below.  The first is the architecture of the old Intel 80386DX processor.  The second is the architecture of one of the most complicated software systems ever built: Windows NT.  (Don’t worry about understanding the actual architectures.  I’m not going down the computer science rabbit hole here.)

[Image: Architecture of the Intel 80386DX processor.  Credit: Appaloosa via Wikipedia]

[Image: Architecture of Windows NT.  Credit: Grn wmr via Wikipedia]

The thing to understand is that the second system is built completely on the first.  If it occurred in nature, we’d probably consider the second system to be emergent from the first.  In other words, the second system is entirely a category of actions of the first system.  The second system is what the first system does (or more accurately, a subset of what it can do).

This works because the first system is a general purpose computing machine.  Windows is just one example of vast ephemeral machines built on top of general computing ones.  Implementing these vast software machines is possible because the general computing machine is very fast, roughly a million times faster than biological nervous systems.  This is why virtually all artificial neural networks, until recently, were implemented as software, not in hardware (as they are in living systems).

However, a performance optimization that always exists for engineers who control both the hardware and software of a system is to implement functionality in hardware.  Doing so often improves performance substantially, since it moves that functionality down to a more basic layer.  This is why researchers are now starting to implement neural networks at the hardware level.  (We don’t implement everything in hardware because doing so would require a lot more hardware.)

Now, imagine that the only hardware an engineer had was a million times slower than current commercial systems.  The engineer, tasked with creating the same overall systems, would be forced to optimize heavily by moving substantial functionality into the hardware.  Much more of the system’s behavior would then be modules in the actual hardware, rather than modules in a higher level of abstraction.

In other words, we would expect that more of a brain’s functionality would be in its physical substrate, rather than in some higher abstraction of its behavior.  As it turns out, that’s what the empirical evidence of the last century and a half of neurological case studies shows.  (The current wave of fMRI studies is only confirming this, and doing so with more granularity.)

Jonas argues that we can’t be sure that the brain isn’t implementing some vast software layer.  Strictly speaking, he’s right.  But the evidence we have from neuroscience doesn’t match the evidence he obtained by lesioning a 6502 processor.  In the case of brains, lesioning a specific region very often leads to specific function loss.  If the brain were a general purpose computing system, we would expect results similar to those with the 6502, but we don’t get them.

Incidentally, lesioning a 6502 to see the effect it has on, say, Donkey Kong, is a mismatch between abstraction layers.  Doing so seems more equivalent to lesioning my brain to see what effect it has on my ability to play Donkey Kong, rather than my overall mental capabilities.  I suspect half the lesions might completely destroy my ability to play any video games, and many others would have no effect at all, similar to the results Jonas got.

Lesioning the 6502 to see what deficits arise in its general computing functionality would be a much more relevant study.  This recognizes that the 6502 is a general computing machine, and should be tested as one, just as testing for brain lesions recognizes that a brain is ultimately a movement decision machine, not a general purpose computing one.  (The brain is still a computational system, just not a general purpose one designed to load arbitrary software.)

All of which is to say, while I think Jonas’ point about neuroscience being very far from a full understanding of the brain is definitely true, that doesn’t mean the more limited levels of understanding it is currently garnering are useless.  There’s a danger in being too rigid or binary in our use of the word “understanding”.  Pointing out how limited that understanding is may have some cautionary value, but it ultimately does little to move the science forward.

What do you think?  Am I just rationalizing the difference between brains and computer chips (as some proponents of this experiment argue)?  Is there evidence for a vast software layer in the brain?  Or is there some other aspect of this that I’m missing?


Merry Christmas

Still alive.  As I mentioned in the previous post back in October, the new job and family issues have been keeping me busy.  Hopefully I’ll get more time in the new year for blogging.  A couple of people have suggested that I consider shorter posts, meant more to generate discussion, rather than waiting until I have the time to formulate my own carefully worked out theses.  I like this advice and may take it, although my previous attempts to make shorter posts were not a roaring success.  We’ll see.

Anyway, just wanted to wish my online friends a happy holiday.  Whatever Christmas means to you, I hope you and your family are safe, comfortable, and enjoying the holiday season.

Merry Christmas!


Why I haven’t been posting lately

It’s been a while since I’ve posted.  It’s probably fair to say that my posting frequency has plummeted to the lowest level since I started this blog in 2013.  I feel obliged to offer an explanation.

First, we’ve been undergoing an epic reorganization at work.  In the early stages, this endeavor left me very unsettled on what my future work life might look like, to the extent that I was considering early retirement.  Eventually, it ended up that I’m going to be moving into a new job (in the same organization, the central IT shop for a university).

This is a good thing, but I’m going to be managing a much more technical area than I have in years, and that’s forcing me to immerse myself back into the details of database administration, application development, and enterprise integration, which won’t leave much room for a while to think about consciousness, philosophy, space, science, and many of the other things I often post about.

On top of that, my father passed away this weekend, a staggering emotional blow that currently has me adrift in a way I haven’t been in a long time.  I anticipate being occupied dealing with the emotional and financial fallout for a while.

So, just to let you know, I definitely have no intention of giving up blogging.  But posts may be thin for a while until I can get all this processed.  I think I’ll have some new insights on stress, emotion, and grief when I do start up again.

Hopefully more soon.  How are you guys doing?


Breakthroughs in imagination

When thinking about human history, it’s tempting to see some developments as inevitable.  Some certainly were, but the sheer amount of time before some of them took place seems to make them remarkable.

The human species, narrowly defined as Homo sapiens, is about 200,000 years old.  Some argue that it’s older, around 300,000 years, others that full anatomical modernity didn’t arrive until about 100,000 years ago.  Whichever definition and time frame we go with, the human species has been around far longer than civilization, spending well more than 90% of its existence in small hunter-gatherer tribes.  (If we broaden the definition of “humanity” to the overall Homo genus, then we’ve spent well over 99% of our history in that mode.)

For tens of thousands of years, no one really seemed to imagine the idea of a settled, sedentary lifestyle, until around 10,000-12,000 years ago in the Middle East.  I’ve often wondered what those first settlers were thinking.  Did they have any idea of the world-changing significance of what they were doing?  More than likely, they were solving their own immediate problems and judging the solutions by the immediate payoff.

The earliest sedentary, or semi-sedentary, culture appears to have been a group we now call the Natufians.  Living on the east coast of the Mediterranean in what is now Israel, Lebanon, and Syria, they were in a nexus of animal migrations and, in their time, a lush environment.  Life for them was relatively good.  They appear to have gotten a sedentary lifestyle effectively for free, in other words, without having to farm for it.

Then the climate started to change.  An event called the Younger Dryas cooled the world for a brief period (brief in geological time, more than a millennium in human time), but it was long enough to endanger the easy lifestyle the Natufians had probably become used to.  After centuries or millennia of living in a sedentary environment, they likely had little or no knowledge of how to live the way their ancestors had.

Victims of circumstance, they were forced to innovate, and agriculture emerged.  Maybe.  This is only one possible scenario, but it strikes me as a very plausible one.  The earliest evidence of nascent agriculture reportedly appears in that region in that period.

[Image: Early proto-writing from Kish, c. 3500 BC.  Credit: Locutus Borg via Wikipedia]

Another development that took a long time was writing.  The oldest settlements arose several thousand years before writing developed.

The traditional view of the development of writing was that it evolved from pictures.  But as Mark Seidenberg points out in his book, Language at the Speed of Sight, picture drawing is far more ancient than writing.  The oldest cave art goes back 40,000 years, but what we call writing only arose about 5000 years ago, in Mesopotamia according to most experts (although some Egyptologists insist the Egyptian system came first).

It appears that the mental jump from pictures to symbols representing concepts was not an easy transition.  What caused it?  Seidenberg presents an interesting theory developed by archaeologist Denise Schmandt-Besserat.

Starting around 8000 BC, people in the Middle East started using small clay figures, called “tokens” today, as an accounting tool.  The tokens were simple shapes such as cones, disks, or shell-like forms.  A token of a particular shape represented something like a sheep, or an amount of oil, or some other trade commodity.  Pragmatic limitations in producing the tokens kept their shapes simple, rather than making them accurate, detailed depictions of what they represented.

A number of tokens were placed in sealed clay containers, presumably one for each actual item.  The container was sent along with a trade shipment so the recipient would know they were receiving the correct items in the correct amounts.  In time, in order to know what kinds of tokens were in a particular container, a 2D impression, a picture of the token, was often made on the container, in essence a label indicating which tokens it contained.

It then gradually dawned on people that they could get by with just the labels, with the token shape and some indicator of quantity.  No container or actual physical tokens required.  According to the theory, written symbolic representation of concepts had arrived.

The earliest proto-writing systems were a mixture of symbols and pictures.  Over time, the picture portions did evolve into symbols, but only after the conceptual breakthrough of the symbols had already happened.

The early Bronze Age writing systems were difficult, requiring considerable skill to write or read.  Reading and writing were effectively specialist skills, requiring a class of scribes to do the writing and later reading of messages and accounts.  It took additional millennia for the idea of an alphabet, with a symbol for each language sound, to take hold.

The earliest known alphabet was the Proto-Sinaitic script found in the Sinai peninsula, dating to sometime around 1800-1500 BC.  It appears to have been the precursor to the later Canaanite script, which itself was a precursor to the Phoenician and Hebrew alphabets that arose around 1100-1000 BC.  The Phoenicians were sea traders and spread their alphabet around the Mediterranean.  The Greeks would adapt the Phoenician alphabet, add vowels to it (a necessity driven by the fact that Greek was a multisyllabic language, as opposed to the Semitic languages, which were dominated by monosyllabic words), and then use it to produce classical Greek civilization.

The development of these alphabets would lead to a relative explosion in ancient literature.  This is why studying Bronze Age societies (3300-1200 BC) is primarily an exercise in archaeology, while studying the later classical ages of Greece and Rome is primarily a matter of historical narratives, supplemented by archaeology.

Why did so much of this take place in the Middle East?  Probably because, for thousands of years, the Middle East lay at the center of the world, a nexus of trading paths and ideas.  It seems entirely possible to me that some of these breakthroughs happened in other lands, but that we first find archaeological evidence for them in the Middle East because they were imported there.  The Middle East only lost this central role in the last 500 or so years, a result of the European Age of Exploration and the moving of world trade to the seas.

So, are there any new ideas, any new basic breakthroughs on the scale of agriculture or writing, that are waiting for us, that we simply haven’t conceived of yet?  On the one hand, you could argue that the invention of the printing press in the 15th century, along with the rise of the internet in the last couple of decades, and the dramatically increased collaboration they bring, have ensured that the low-hanging fruit has been picked.

On the other hand, you could also argue that all of these systems are built using our existing paradigms, paradigms so ingrained in our cognition that we may simply be unable to see the breakthroughs waiting to happen.  We don’t know what we don’t know.

It’s worth noting that the execution of agriculture and writing are not simple things.  Most of us, if dropped onto an ancient farm, despite the techniques being much simpler than modern farming, would have no idea where to even begin.  Or know how to construct an appropriate alphabet for whatever language was in use at the time.  (Seidenberg points out that not all alphabets are useful for all languages.  The Latin alphabet this post is written in may be awkward for ancient Sumerian or Egyptian.)

It may be that the idea of farming or writing did occur to people in the Paleolithic, but they simply had no conception of how to make it happen.  In this view, these seeming breakthroughs are really the result of incremental improvements, none of which individually were that profound, that eventually added up to something that was profound.  Consider again the two theories above on how farming and writing came about.  Both seem more plausible than one lone genius developing them out of nothing, primarily because they describe incremental improvements that eventually add up to a major development.

Ideas are important.  They are crucial.  But alone, without competence, without the underlying pragmatic knowledge, they are impotent.  On the other hand, steady improvements in competence often cause us to stumble on profound ideas.  I think that’s an important idea.

Unless of course, I’m missing something?
