Empiricism, the idea that sensory experience is a source of knowledge, is ancient. People have obviously learned through sensory experience as long as there have been people. Studying the night skies gave ancient humans insight into the flow of the seasons, crucial knowledge as the agricultural revolution kicked into gear. And farming techniques, medicinal practices, food preparation, and many other aspects of life improved as a result of empirical learning.
But the idea that empiricism should be a required source of knowledge was controversial for millennia. Many ancient, medieval, and even early modern philosophers weren’t sure that empiricism deserved primacy over reason. This may seem strange today after centuries of scientific success, but it’s not if you think about it. Empiricism as a source of knowledge is not without its complications.
Suppose I’m driving down the road on a dark night, and suddenly see some lights moving in the sky. I see that the lights are moving in a way that seems impossible for any known aircraft or natural phenomenon. Have I seen aliens from another world? Many people who have had such experiences jump to that conclusion. They have a sensory experience, seeing the lights, and then interpret that sensory experience in a way that seems logical to them. It’s easy to say that they should not interpret lights in the sky as anything more than lights in the sky, but learning from sensory experience almost always involves interpretation.
Consider a different example. In the 1920s, the astronomer Edwin Hubble studied galaxies in the night sky and noticed an interesting pattern: the dimmer a galaxy was, the redder it was. In fact, the degree of a galaxy’s redness seemed to correlate with its dimness. Drawing on his experience as an astronomer and his knowledge of physics, Hubble interpreted his observations. The dimmer galaxies were farther away. The farther galaxies were redder because the light radiating from them had been redshifted as it traveled to us, stretched into longer (hence redder) wavelengths, because the galaxies were all moving away from us at a speed proportional to their distance.
From this, Hubble concluded that the universe was expanding. Note how much interpretation, how much experience and knowledge, was required to take the raw sensory data and convert it into a coherent account of what was going on. It’s worth noting that for a while there were other interpretations of this data, notably the “tired light” theories put forth as alternate explanations. (These other interpretations lost credence once the cosmic microwave background was discovered.)
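The proportionality behind this interpretation is now known as Hubble’s law. As a sketch of the quantities involved, using standard modern notation rather than anything from Hubble’s own papers:

```latex
% Redshift: fractional stretching of a wavelength in transit
z = \frac{\lambda_{\text{observed}} - \lambda_{\text{emitted}}}{\lambda_{\text{emitted}}}

% For small z, the redshift corresponds to a recession velocity
v \approx c\,z

% Hubble's law: recession velocity grows in proportion to distance d,
% with H_0 the Hubble constant
v = H_0\,d
```

So the dimmer (more distant) galaxies show larger redshifts, which is exactly the dimness–redness correlation Hubble observed.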
So, while empirical evidence is the backbone of the modern scientific method, that backbone is heavily reinforced by support structures of logical and mathematical reasoning. Empiricism is the required check that ensures our logic and mathematical reasoning don’t stray too far from reality, but empiricism alone isn’t sufficient. All data is theory-laden.
So when did empirical data begin to become a required source of knowledge for scientists? It’s usually understood to have begun with the Scientific Revolution in 16th and 17th century Europe. When I first started reading about the history of science, I had a naive view that science really started during this period, after the development of new methods. This view turned out to be overly simplistic.
Many of the methods of natural philosophers during the Scientific Revolution actually had substantial continuity with the methods of late medieval philosophers such as Roger Bacon or William of Ockham (the originator of Occam’s razor). And the methods of these philosophers had continuity with the methods of Islamic philosophers, whose methods had continuity with ancient Greek philosophers. It’s tempting to see science (many would insist on calling it “proto-science”) starting with the ancient Greeks, but they almost certainly built on top of knowledge they had acquired from the older civilizations in Egypt and Mesopotamia.
What the Scientific Revolution brought was rapid improvement in methods along with a much faster rate of discovery. This was almost certainly a result of the invention of the printing press in the 15th century. (The religious Reformation started in the decades after its invention, with the Scientific Revolution following a few decades later.) The printing press allowed new discoveries to be disseminated much faster and much more widely than had ever been possible with handwritten manuscripts, allowing new knowledge to be quickly taken into account and built upon by others.
But there was another important factor in the decades leading up to, and in the early years of, the Scientific Revolution: the rise of an intermediate Renaissance class of professional engineers and architects, a class that existed between manual artisans and philosophers. To understand why this is important, it helps to understand the state of natural philosophy in the early 16th century, which was initially heavily Aristotelian.
Aristotle held that, in order to understand something in nature, it was necessary to understand its causes at several levels, including its teleology: what its ultimate purpose in nature was. For example, an acorn’s purpose was to become a tree. It was a mode of inquiry concerned with why things existed or why they did what they did. For a natural philosopher to be considered successful, it was necessary to explain causation at all these levels. The problem was that many of these levels were difficult to establish with any finality.
Renaissance engineers and architects were taking in ideas from the Middle East and China, and making breakthroughs and developing new techniques, often using many of the same methods of investigation that natural philosophers used, but with a more pragmatic focus. The investigations of engineers had a couple of major advantages.
The first advantage (which may have felt like a disadvantage at the time) was that engineers faced a stark reality check on whether they actually understood the physics or mechanisms they were working on. If a military engineer’s understanding of cannon mechanics was wrong, he was going to find out about it very quickly when his cannons malfunctioned, often with severe consequences. It was an empirical check engineers didn’t have the option to avoid, and it forced them to develop testing paradigms.
The second advantage engineers held was that, as a profession, they were under no particular pressure to have any ultimate understanding of a thing or process’s purpose. As a result, that concern didn’t generally enter into their investigations of new designs and techniques. This enabled engineers such as Leonardo da Vinci and Niccolo Tartaglia to learn many pragmatic things about physics and mechanical technologies mostly unencumbered by the obligations that natural philosophers as a class faced.
Eventually, scientists such as Galileo, who was both a natural philosopher and an engineer, noticed this discrepancy and began using many of the engineers’ techniques in their own investigations. This led to the development of the experimental method, which structured and controlled the variables in empirical investigations, essentially making empiricism a more efficient and reliable source of knowledge. (Both Galileo and the engineers may have drawn inspiration from the methods of the Islamic polymath Alhazen.)
This largely meant abandoning the pursuit of ultimate causes, of why things were the way they were, and only focusing on immediate causes. Many natural philosophers struggled with and criticized this abandonment of teleology, but as the new pragmatic methods racked up successes, they eventually started to come around. It’s worth noting that at this point in history, virtually no one doubted that these ultimate purposes existed, only that pursuing them was productive.
A number of modern scientists see in the work of Galileo, and in the contemporary publications of Francis Bacon, the beginnings of modern science. But it’s important not to make too much of these beginnings, not to make the mistake of seeing these figures as 21st century scientists transplanted into the 16th and 17th centuries. Galileo and his contemporaries, such as Tycho Brahe and Johannes Kepler, were astronomers, but they were also astrologers, a well-respected profession at the time. Many of them made money by doing astrological readings for noblemen. Their worldviews were different from Aristotle’s, but they were also different from those of modern science.
Just as 16th and 17th century scientific methods had continuity with medieval scientific methods, that continuity extended into the 18th century, then into the 19th, and in turn into the 20th. But there were constant improvements, such that 16th century methods would have been unacceptable to a 19th century scientist. And those improvements continue today. Methods used decades ago in just about any scientific field would probably not pass muster in modern labs.
In other words, while there are landmarks of progress: Thales, Aristotle, Galileo, Bacon, etc., there’s no bright line distinguishing what we today call science from what came before, whether we want to call those early efforts natural philosophy or protoscience. Modern science gradually evolved over centuries and millennia with continuous improvements in methods, with those improvements speeding up rapidly as new communication technologies were introduced. Those insisting on looking for that bright line might want to consider that the word “scientist” wasn’t coined until 1833.
Much of the information in this post comes from Lawrence Principe’s ‘The Scientific Revolution: A Very Short Introduction’, a quick read that I recommend if you’re interested in learning more about this topic.