For most of history, reason was the only known or accepted way to arrive at truths about the world.
I disagree. Most knowledge has always come from "science". You know it's raining outside because you looked and it was. You know there's a snake under that rock because you looked under that rock and there was a snake there. Moreover, all human skills and crafts from fire-making to cathedral-building must have progressed through a process of trial and error, which is just a non-formalized experiment.
This wasn't as prestigious as knowledge acquired through reason, though, since it doesn't take a genius to figure things out just by looking at 'em. Only when science became formalised and difficult to understand did it start to gain prestige.
Observation isn't science. This is the fundamental difference between Engineering and Science.
In Engineering, I want to solve this one challenge; I don't want to generalize to a whole class of examples or the whole Universe.
In Science, I make a hypothesis, usually based on prior observations; I conduct controlled experiments rather than just observing the World around me; and from the resulting data I derive a theory.
I think mere observation does count as a form of non-formalised and occasionally unreliable science. Certainly if you're going to classify "science" and "reason" as the two ways of finding out about the world, then you have to bung "observation" in there with science.
In engineering you often do want to generalize to a whole class of examples, and can hardly help doing so. If you want to nail two pieces of wood together, and it's never been done before, you can't help but discover general principles along the way that are applicable to all wood-nailing scenarios. Then one day you'll go to nail two bits of different wood together, they'll behave differently and heck, you'll have discovered some additional facts about different types of wood.
Eventually you'll hit your finger with the hammer and discover that it really hurts to do so. Now I come to think of it, I would classify "hitting your finger with a hammer really hurts" as a perfect example of a true fact about the world which has nonetheless never been the subject of a proper experiment, and certainly wasn't derived from reason either.
I don't disagree, but instead seek to generalize (off topic).
I think the fundamental difference between science and engineering is that when engineering you work with the assumption that you know everything about how the world works and can apply that to solve a problem. With science, you work with the certainty that you know nothing true about how the world works, and you apply that to see further.
Many of the observations in science take place in a controlled, repeatable laboratory setting. Many do not; astronomers can't exactly drag other galaxies into the lab.
"This is easily disproved by dropping a heavy object and a light object of the same shape from a high place and seeing which hit the ground first. Yet it took over a thousand years before anyone thought to try this experiment."
Not to be too pedantic about it, but the reason the belief survived for so long is precisely because it isn't easily disproved. If you drop a heavy object and a light object of the same size from a high place, the heavy one will absolutely hit the ground first. (Try it!)
Galileo disproved the Aristotelian idea that heavy objects fall faster with a thought experiment, not through physical experiments and data collection: tie a light object to a heavy one, and the light one should retard the pair, making it fall slower than the heavy object alone; yet the combined object is heavier still and so should fall faster, a contradiction.
He used deductive rather than inductive reasoning. That's why it was a proof.
As well, given two rocks of similar density but different mass (assuming m*g >> F_d for both), it generally takes a pretty high place (that is: higher than the couple of stories they had access to) for differences in acceleration or V_t to matter enough that the ancient Greeks would actually have noticed; some rough numbers are sketched below.
A = projected area, which is the same if our objects are the "same shape".
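To put very rough numbers on that (everything here is an illustrative assumption -- the masses, the sphere radii, the 10 m "couple of stories" drop, and the textbook drag formula F_d = 0.5 * rho * C_d * A * v^2 -- none of it comes from the article): a quick Euler integration suggests the two rocks land only a couple of milliseconds apart.

    import math

    def fall_time(mass_kg, radius_m, height_m, dt=1e-4):
        # Euler-integrate a straight drop with quadratic drag; return time to impact.
        rho_air, C_d, g = 1.2, 0.47, 9.81      # air density, sphere drag coefficient, gravity
        A = math.pi * radius_m ** 2            # projected area ("same shape" => same formula)
        v, y, t = 0.0, height_m, 0.0
        while y > 0:
            a = g - 0.5 * rho_air * C_d * A * v * v / mass_kg
            v += a * dt
            y -= v * dt
            t += dt
        return t

    # Hypothetical rocks: 1 kg and 8 kg of the same material (8x the mass => 2x the radius),
    # both dropped from about 10 m.
    print(fall_time(1.0, 0.045, 10.0))   # ~1.432 s
    print(fall_time(8.0, 0.090, 10.0))   # ~1.430 s

So yes, the heavier rock lands first, but only by a couple of milliseconds over a couple of stories; nobody timing drops by eye would ever have noticed.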
As for the rocks, you're applying benefit of hindsight. Say I show you a balloon filled with air and a balloon filled with water falling at obviously different speeds. Now, you have to explain that with this odd theory of "everything falls at the same speed" -- not the other way around. In the real world, it is evidently true that heavier objects fall faster, and any better theory that disagrees must explain this (e.g., posit the existence of air resistance) before its new predictions matter.
I agree with his larger point that being purely scientific is impractical in situations where there are too many unknowns and data is hard to collect. So sometimes one needs to go with one's gut feeling or intuition.
I might be nitpicking here, but intuitions are not always based on reasons. Some of our intuitions are based on reasons (e.g., past experiences) and some are not (e.g., preferences, prejudices).
>> The measurements you choose (which will necessarily be somewhat arbitrary) will shape what you make, and you will most likely end up making the wrong thing.
Failure here is no more likely than failure after making a decision by intuition because we are still juggling with the unknowns.
Is the definition of science really that narrow? Surely doing logical deductions is also science? It doesn't mean you are correct because you can never verify the basic assumptions, but it can still lead to valuable insights.
How does a scientist build a house? Since apparently he can not calculate the structural properties in advance, the only option seems to be to build a random house and then measure if it crumbles down or not? That doesn't seem very effective. In fact, several houses would have to be built to get statistical significance. Then nobody would be allowed to move in for 10 years to be reasonably sure it is stable. As soon as somebody moved in, the conditions of the experiment would change and the house might still topple down. I can't see a way for this to work. If science is like that, it is useless.
Exactly. Real knowledge is almost always derived from a combination of science and reason.
A scientist building a house might use an experiment to find what kind of mortar is best for holding one brick to another brick. He must, however, use reason to deduce that a mortar that can hold two bricks together is also capable of holding a thousand bricks together.
Then he can combine it with a bunch of other observations about the world... for instance that mortar which works fine on a Sunday will also work fine on a Monday. How do we know this? Well, because no other material has been observed to change its properties depending on the day of the week, and because there is no logical connection between physical properties of materials and the arbitrary cycling of the calendar which is a human invention. On the other hand, some materials do have different properties in December vs July (e.g., the water in the lake might completely change state depending on where you live). Anyway, I seem to have gone off on a tangent but basically, observation and reasoning are inextricably intertwined if you want to actually figure out anything about the state of the world.
I think both science and logic (what the article calls Reason) are poor tools for studying complex systems--things like software and people.
The guess-and-check methodology of science works great when one is studying simple, consistent, statistically tractable behavior. Ask a scientist to characterize gravity, and he does a great job. But ask a scientist to characterize something complex--e.g., MS Word--something with a lot of inputs and outputs, complex behavior which changes in subtle ways in different circumstances--and he'll have a tough time with it. The reason is that the models of science converge inexorably and certainly, but slowly; too slowly to be really useful when studying complex things. The scientist may be able to characterize a few simple features of something complex, but it'll be a long time before he can totally describe the object's behavior.
In the case of complex systems, my preferred epistemic tool is revelation. Read the source code. Ask the person what they're thinking. Peek under the hood. Ask the designer what he was trying to accomplish.
Logic in particular serves an interesting purpose in complex systems. One usually thinks of logic as a tool to go from secure axioms to secure conclusions, but in the case of an unknown system, the axioms and models and definitions are what we're trying to discover. Hence, my primary use of logic when studying a complex system (e.g., while debugging software) is as a tool to highlight paradox. That is, to go from clearly-impossible conclusions to unstated false assumptions, or better definitions. Not, "The source code shows nothing changing the value of this variable, hence something else must be causing the bug," but "The source code shows nothing changing the value of this variable, yet the value does change, hence my assumptions about what can change a variable's value must be wrong."
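A toy example of that last move (the scenario, the names make_config/defaults/config, and the bug are all mine, invented for illustration): in Python, "nothing in the code assigns to this variable, yet its contents change" usually dissolves once you name the hidden assumption that assignment makes a copy.

    def make_config():
        defaults = {"retries": 3, "timeout": 30}
        config = defaults         # unstated assumption: this is a copy (it isn't)
        return defaults, config

    defaults, config = make_config()
    defaults["retries"] = 0       # "something else" mutates the shared dict
    print(config["retries"])      # prints 0, even though config was "never touched"

The clearly-impossible conclusion ("nothing changed it, yet it changed") points straight at the false assumption: assignment binds a second name to the same object, so defaults and config alias one dict.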
Of course, when one is studying something opaque and complex, such as a market or biological system, revelation is not available as a tool. But one should still retain respect for the limitations of science as a methodology in that situation, and respect the complexity of reality in contrast with the simplicity of science's results.
While we're talking about unusual epistemic tools, emotion has a really bad reputation, but what people are criticizing is not emotion as a properly used tool, but its improper use.
The improper, oft-criticized use of emotion is saying, "I feel X is true" or "I want X to be true", hence "I believe X." This is clearly unreliable.
But consider the role emotion does properly play. "This is surprising," "Huh, that's funny," "I wonder why..." and "That can't be right." These positive and negative reactions to data are feelings. Curiosity, fascination, frustration at the impossible, the pain of unresolved paradox. Judging knowledge to be either trivia or essentially interesting--final value judgements. These are emotions that push us to apply other epistemic tools to the right questions in the right ways.
I once suffered a mental illness that made it difficult for me to feel certain emotions. One of the most fascinating results of the experience was the degree to which my ability to reason degraded. Unable to distinguish the interesting from the uninteresting, the fascinating from the trivial, unmotivated to pursue and resolve impossibilities, I was unable to undertake even simple logical tasks such as debugging.
There is a popular notion that a disinterested party will provide the most accurate account of a phenomenon, the object being to avoid bias. I think this is only part of the story. True, an interested party can fall prey to phenomena like confirmation bias. But a disinterested party will fall prey to analytical apathy, happy with slapdash, second-rate models and explanations. I believe the best work comes from a dialogue between people with diverse yet healthy emotional attachments to a problem and a commitment to intellectual honesty.
>I once suffered a mental illness that made it difficult for me to feel certain emotions. One of the most fascinating results of the experience was the degree to which my ability to reason degraded.
This helps reinforce a thought I've been playing around with since my 20's. It seems quite ridiculous to me that reason is set in conflict with emotion (a remnant, I believe, of the historical conflict of science contra religion in the west). It is much more likely that the ability to reason is a subset of emotion, i.e., reasoning is nothing more than the development of particular emotions working in concert. This conception is more aligned with how evolution actually operates (building upon the processes of before) rather than having to explain a "magical" reason which just appears out of nowhere in the human mind and dominates the animal nature.
Just a thought...I hope you were able to deal with your illness well.
See, the real problem here is that the OP thinks that math and science are distinct.
I certainly am of the opinion that they are not, which makes the whole argument very weak, since reason -> logic -> mathematics -> science.
[Edit: Perhaps the downvoters have tried to teach quantum mechanics as a subject distinct from mathematics and physics and succeeded, in which case I am most interested in your arguments. Personally I have only managed it as two sides of the same coin]
The author is conflating two philosophical schools of thought, Rationalism and Empiricism, with two other concepts (with a complex relationship), reason and science.
Point 1: The author is defining reason incorrectly. He states that reason is "internally generated," that it starts with "your internal sense for what is right and pure." The correct definition (taken from a dictionary) is: the mental powers concerned with forming conclusions, judgments, or inferences. The difference is that the author claims that reason is basically internal/introspection, whereas in actuality, reasoning requires both internal mental processes (such as recalling concepts you've formed previously) and extrospection (observing the world around you, learning from it, etc.). The Rationalist school of philosophical thinkers (e.g. Descartes) argued that extrospection is fundamentally flawed or not trustworthy, so the definition the author uses is actually, basically, the rationalists' position. (Later philosophers like Kant picked this up and ran with it, claiming that since reason is faulty, you must resort to faith. Indeed, the rationalists were typically highly religious, although basically everyone was at that time.) I, personally, disagree very strongly with that position, and I suspect that many HN readers also disagree with it. So be careful not to accidentally "accept" this position as a valid definition.
Point 2: Empiricism is a philosophical school that was formed in reaction to the Rationalists. Empiricist philosophers pointed out that the Rationalists were incapable of achieving true knowledge about the world by rejecting perception, and insisted that we just go by experimentation. However, they accepted some of the Rationalist viewpoints about the way reason works, and thus were very skeptical of reasoning, preferring "hard data" and the like to explain things. (This is ironic because validating a philosophical claim would require reason, so you can't use philosophy to reject reason without contradicting yourself, but I digress.)
Point 3: The actual relationship between science and reason is not "science vs. reason" or "science OR reason", as the author states. It's "you need to use reason to do science." Science is a kind of technical study of the way the world works that uses certain techniques; that's all science means. There are things you can do using reason that aren't technical enough to be called science (like paying your taxes or thinking about philosophy). But there's no science you can do without using reason.
Conclusion: The claim that reason and reality are fundamentally divided is the root premise of the author's blog post. This was Kant's position. Kant's main goal was to state that reason isn't really useful, so we all have to go on faith; he was trying to uphold "God, faith, and immortality" (to cite his most famous quotation). That gives me the creeps. I strongly disagree with Kant.
Since the author uses Kant as an example, the rationalist/empiricist distinction doesn't really apply.
Synthetic a priori knowledge of mathematics and geometry is the starting point for Kant's reconciliation of the two schools carried out in Critique of Pure Reason.
Kant did not advocate taking anything on faith in the sense you are using the term.
What Kant did was point out that all empirical knowledge is mediated by the conditions of human experience (i.e. space and time), and that we should not confuse empirical facts with the Ding an sich (thing in itself).
Kant's most famous quote is along the lines of "So act that your principle of action might safely be made a law for the whole world."
It usually leads to people justifying situational ethics by mentioning Nazis.
> "Kant did not advocate taking anything on faith in the sense you are using the term."
Exactly right. The sense in which the parent uses the term "faith" is a fairly modern invention. It's only over the last hundred years or so that "faith" has been thought of as the opposite of reason.
In ancient Jewish and Christian writings, "faith" is typically used to mean "acting on something you know to be true", especially in the face of difficult circumstances. It's typically contrasted with forgetfulness or fickleness. In this usage, "faith" is the triumph of reason, memory/history, and will over the emotions and difficulties that come with temporary adversity.
In Kant's usage, "faith" is sort of an extension of reason that also includes elements of willpower and morality. It's pretty close to the ancient definition, though it also played an important part in the shift from the ancient definition to the modern one.
> In ancient Jewish and Christian writings, "faith" is typically used to mean "acting on something you know to be true", especially in the face of difficult circumstances.
You're talking about ancient religious writings. Religious people have a marked tendency of incorrectly conflating faith and reason. Clearly, whatever they meant by "faith," it had a strong mystical (i.e. otherworldly, not based in this reality) slant.
The exact same point goes for Kant. You call his "reason" the following: "reason that also includes elements of willpower and morality" (which does hint at Nazi ideology). I call Kant's "reason" the following: a flawed, mistaken, weakened account of reason, plus some mystical aspects to compensate.
> "Clearly, whatever they meant by "faith," it had a strong mystical (i.e. otherworldy, not based in this reality) slant."
A small but vocal subset of religious people within the last 100 years (particularly fundamentalist evangelicals) have a marked tendency of treating faith as a strongly mystical thing which is opposed to, and better than, reason. But within most of the rest of Judeo-Christian history and tradition (including most modern Jewish and Christian intellectuals), faith has been thought of in the way I described it: the triumph of reason and experience and willpower, in opposition to fickleness and forgetfulness and emotion.
The only ancient mysticism I know of surrounding Jewish or Christian ideas of faith is the idea that a person might have a supernaturally strong will, and therefore may be able to continue acting according to reason through remarkably dire circumstances. The concept of faith itself, within those traditions, is not mystical or supernatural.
Kant's morality has mystical aspects, and therefore, by association, so does his faith -- as I said, it was a step toward the modern definition (and it may very well be flawed.) Ayn Rand's definition is particularly 20th-century influenced: "blind acceptance of a certain ideational content, acceptance induced by feeling in the absence of evidence or proof." You will not find this mystical, blind conception of faith prior to about a hundred years ago; it is not what was meant by Kant, Augustine, Paul, or Moses.
I strongly believe the analytic/synthetic dichotomy is a false one. There is no such thing as a concept that comes pre-formed in the mind, without any reference to reality.
Kant used the analytic/synthetic dichotomy as part of his attempt to "destroy" reason. No, this isn't the same as a religionist's call to faith, but it has the same implications.
I am familiar with the categorical imperative, but it's not directly related to the discussion at hand. Yes, Kant is also very famous for that, and I also disagree with that (as does most everyone, probably even yourself :-) ).