Isn't this just a rehash of the late 19th/early 20th century intuitionists vs. formalists debate? Even though Gödel had shown that the ultimate goal of the formalist program is unattainable, the formalists still mostly won that debate by producing more meaningful results. The radical intuitionists, with whom the author seems to sympathize, rejected not only the axiom of choice but even things like the law of the excluded middle (i.e., they did not accept that "A or not A" is necessarily true). That makes many proofs considerably harder to carry out, if not outright impossible.
Not surprisingly, the formalist program was also more fruitful for endeavours like computational logic and the theory of computation, which resonate with the formalist tenet that mathematics is just the manipulation of strings of abstract symbols according to specific rules.
I think it's more a "Humans vs Computers" debate. Some background:
Tom Hales cleverly reduced the Kepler Conjecture to the solution of thousands of linear programming problems: if all the solutions are bigger than a certain bound, the Kepler Conjecture is true. Then he had a computer solve the LP problems, and the result was that Kepler was right.
However, his solution was informal in the following sense: he didn't have a rigorous proof that the solutions of the LP problems were accurate, nor were his programs proven correct. Now he is attempting to redo the proof in HOL, a formal theorem-proving system.
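To make the shape of that argument concrete, here is a toy sketch (Python with scipy, made-up numbers, nothing like Hales's actual constraints) of the pattern "solve an LP numerically, then check that the optimum clears the bound the argument needs":

    # Toy version of the verification pattern (illustrative numbers only, not Hales's LPs):
    # minimize x + y subject to x + 2y >= 4, 3x + y >= 6, x >= 0, y >= 0,
    # then check that the optimum exceeds the bound the argument needs.
    from scipy.optimize import linprog

    c = [1, 1]
    A_ub = [[-1, -2], [-3, -1]]   # ">=" constraints rewritten as "<=" for linprog
    b_ub = [-4, -6]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    assert res.success
    print("optimum:", res.fun)              # 2.8, up to floating point
    print("clears the bound 2?", res.fun > 2)

The floating-point "2.8" is exactly the kind of step that was never rigorously certified, and what a redo in HOL has to re-establish.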
More background: no math paper is completely formal. All involve some handwaving, especially the 200 page monsters that appear in Annals of Math. An example from a paper I'm currently writing: "It can be easily seen by elementary calculus arguments that..."
Doron is arguing that the mathematical community is holding Hales's computer proof to a higher standard than it would hold a comparable 200-page human-written paper. Basically, he is claiming that we should accept an informal computer-assisted proof just as we would accept an informal human-written proof.
I don't think the debate was as much between intuitionism and formalism as it was between intuitionists and people who thought infinite objects are really real. In particular, intuitionists don't object to "A or not A" for finite sets.
In many cases theories become simpler, not more complicated, with constructive foundations for mathematics (for instance, synthetic differential geometry is a much more elegant way of doing geometry). The modern categorical foundation for mathematics, using topos theory, has intuitionistic logic as its natural language.
It is exactly the constructive foundations of mathematics that are related to the foundations of programming languages, and this connection is in many cases formalized via the Curry-Howard correspondence, which takes programs to proofs and vice versa.
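To make the correspondence concrete, here is a minimal sketch in Lean 4 (my own illustration, not from the article): the proof term for a simple proposition is literally a small functional program.

    -- Curry-Howard in miniature: a proof of A → A ∧ A is the same object as a
    -- program that takes an input and duplicates it.
    example {A : Prop} : A → A ∧ A := fun a => ⟨a, a⟩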
I can't remember where I originally found this link (I thought it was here, but a quick check of SearchYC didn't reveal it), but I found the following article on intuitionistic mathematics for physics pretty fascinating. While I know a certain amount about intuitionistic mathematics, I'd never really considered that people might actually use it.
One can prove, ab initio, that 1+1=2 in less than ten lines in Coq.
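For the curious, here is roughly what that looks like, written as a one-line sketch in Lean 4 (a comparable proof assistant; the Coq version is similarly short): both sides compute to the same natural number, so reflexivity closes the goal.

    -- 1 + 1 and 2 reduce to the same value by computation,
    -- so the proof is just reflexivity.
    theorem one_plus_one : 1 + 1 = 2 := rfl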
Algorithms are _meaningless_ without accompanying proofs. There should be --- and is --- a continuum of formality. The 20th Century has been unique in recognizing the fallacies and contradictions of completely informal reasoning; why give up when we're finally ahead?
To whoever rated me down -- why do you think that human-written software has so many bugs? It's because we don't take the time (and don't have good strategies yet) to effectively "prove" that our code/algorithms exactly meet the program specs. To write code, we need to make certain idealizations (e.g. thinking about "doubles" as actual real numbers) which actually have tricky corner cases.
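A concrete instance of the doubles-vs-reals corner case (a throwaway Python snippet of my own, not from the thread):

    # Binary floating point cannot represent 0.1 exactly, so the "obvious"
    # identity fails -- exactly the idealization gap mentioned above.
    print(0.1 + 0.2 == 0.3)   # False
    print(0.1 + 0.2)          # 0.30000000000000004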
So many areas of life cannot easily be mathematically analyzed. Hence, our brain resorts to algorithms which work "most" of the time, which is luckily usually sufficient for survival.
I'm half-way into "Simple Heuristics That Make Us Smart", and it's very revelatory on the subject. It turns out that with the "recognition heuristic" alone you get about halfway there, and another, only slightly more complex heuristic covers half of what remains. Also interesting is that very simple heuristics tend to be about as good as, or better than, very complicated algorithms, and noticeably more robust.
An algorithm's meaning: that which is computed when the algorithm runs. Pretty simple. Of course, you need something to run it on. But one could say that proofs are meaningless without humans to run them on as well.
Given that algorithms can have undecidable outcomes, it's hard to tell what a meaningful description of such algorithms in terms of proofs would even entail.
Clearly when developing an algorithm one should have an intention as to what it will do. The idea behind many algorithmic proofs and, I assume, that of the original poster, is to prove that one's implementation in fact does what was intended. For example, proving that a sorting algorithm always outputs a properly sorted list, or something to that effect.
Modern theorem provers are also pretty awesome in that they can often, using the correspondence between formal proofs and programs (the Curry-Howard isomorphism, for those interested), turn a proof that some unspecified function F sorts a list into a Haskell/ML implementation of a function satisfying that specification.
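A toy version of that idea (a Lean 4 sketch of my own, far simpler than the sorting example): a function whose return type carries the specification, so the implementation and the proof that it meets the spec are a single object.

    -- A value packaged with a proof about it: the result type says the output is
    -- an upper bound of both inputs; `omega` discharges the arithmetic in each branch.
    def maxWithProof (m n : Nat) : { k : Nat // m ≤ k ∧ n ≤ k } :=
      if h : m ≤ n then ⟨n, by omega⟩ else ⟨m, by omega⟩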
Even if you prove that a program meets a formal spec, there's no way to know that the formal spec does what you 'intend'. And specs can get complex too.
Nonetheless, you have reduced the uncertainty in the system. At least you know where the system is breaking down when you find bugs if you have proven your algorithms conform to spec.
It's hard to see how a computer would produce a proof of the Banach-Tarski "Paradox"
( http://news.ycombinator.com/item?id=411043 )
without going through effectively the same steps as a human. It doesn't seem likely that it will be reduced to calculations, or simple formula manipulations such as can be performed by Mathematica.
And in case you think that B-T is irrelevant, informal reasoning about infinities produced all sorts of results that caused people problems. Formal reasoning about infinities gave us calculus for real, that works wherever it's known to work. And I use infinities on a daily basis designing and implementing systems to assist with tasks similar to air traffic control and automated target tracking.
Computers can be used to explore and experiment with numbers, geometry and some structures, and will probably one day be able to assist with exploring more abstract concepts, but as things stand today, there's a great deal of mathematics that no one can see how to do with computers.
To say otherwise is to demonstrate ignorance of both fields.
I'd agree with the author that only logicians are interested in formal proofs.
But what the author describes as the Hilbert-Bourbaki method is simply what math is. Any mathematician will tell you that if you're not proving theorems then you're not really doing math.
Anyway, what good is a computer proof if it doesn't provide any insight (which is usually the case)? A good proof usually illuminates why some mathematical statement is true, and that is why computer proofs are usually looked upon with suspicion by many mathematicians.
After all, a computer simply churning out the answer (42?) doesn't advance the state of human knowledge at all.
What the author is disagreeing with, the Hilbert-Bourbaki method, isn't mathematics as the proving of theorems, but rather the proving of theorems as a necessarily formal task; that is, that when one is doing a proof it ought to be done in a (preferably intuitionistic) formal system with no steps omitted. The alternative, and what almost all mathematicians do in practice, is to informally lay out a proof that any professional mathematician (but, perhaps not a computer) can follow, filling in details as necessary.
Further, formal computer proofs are rarely wholly generated by computer. Rather, the user specifies goals and tactics which the computer verifies or refutes. If one is using a reasonably powerful logic (i.e. first-order and not propositional), only relatively small tasks can be automated. Hence, the result is often equivalent, or at least close, to a human-generated proof in terms of what is illuminated about the mathematical statement, but every step is necessarily valid.
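For a flavor of that division of labor, a trivial Lean 4 sketch (mine, not the article's): the human states the goal, a decision procedure constructs the formal steps, and the kernel checks all of them.

    -- The user states the goal; the `omega` decision procedure builds the formal
    -- proof of this linear-arithmetic fact; the kernel checks every step.
    example (n : Nat) : n < 2 * n + 1 := by omega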
Further, there is a field of modern mathematical logic called proof mining which concerns itself with the analysis of fully formal proofs and can often obtain illuminating details from a formal proof that wouldn't be possible from an informal equivalent.
This isn't to say that all mathematics should be formal in the sense indicated, but just to point out that it remains an interesting endeavor that is worth keeping an eye on. Also, note that the author /does/ think that formal verification of this sort can be useful for software development, which is indeed what formal methods are mostly used for these days.
He isn't disagreeing with the goals of math as practiced. All your quotes illustrate is that he thinks the usual methods are woefully inefficient.
More quotes:
"and I believe that mathematicians who continue to do pure human, pencil-and-paper, computer-less, research, are wasting their time." If they learned how to program, in, say, Maple, and used the same mental energy and ingenuity while trying to use the full potential of the computer, they would go much further.
I.e., human + computer >> human.
"The axiomatic method is not even the most efficient way to prove theorems in Euclidean Geometry." Thanks to Rene Descartes, every theorem in Euclidean Geometry is equivalent to a routine identity in high-school algebra,
I.e., verifying algebraic identities is easier than euclidean proofs, especially if done by maple.
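As a toy instance of that Descartes-style reduction (a sympy sketch of my own, standing in for Maple; the parallelogram theorem is just a convenient example), the geometric statement "the diagonals of a parallelogram bisect each other" becomes the algebraic identity that the two diagonal midpoints coincide:

    # Toy instance of the coordinate-method reduction (sympy standing in for Maple).
    # Parallelogram P=(0,0), Q=(a,0), R=(a+b,c), S=(b,c): "the diagonals bisect each
    # other" becomes the identity that the midpoints of PR and QS coincide.
    from sympy import symbols, simplify

    a, b, c = symbols('a b c')
    mid_PR = ((0 + (a + b)) / 2, (0 + c) / 2)
    mid_QS = ((a + b) / 2, (0 + c) / 2)
    assert all(simplify(x - y) == 0 for x, y in zip(mid_PR, mid_QS))
    print("diagonals bisect each other at", mid_QS)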
His main argument is simply that computer-assisted proofs should not be held to a higher standard than human-only proofs.
"I.e., verifying algebraic identities is easier than euclidean proofs, especially if done by maple."
If something like that has been discovered, fine, use it, and solve geometry problems that way in the future. But how would that have been discovered without doing classical mathematical proofs? I don't think the fact that some problems can be solved algorithmically proves that the same holds for all problems.
He isn't arguing against classical mathematical proofs. He is arguing that informal computer assisted proofs should not be held to a higher standard than informal human created proofs.
The maniacal obsession with unassailable formal proof gives mathematics its credibility, above all else. There is a lesson here for all other endeavors that seek truth: Rigor uber alles. Take nothing for granted.
The question I see here is whether you are interested in Mathematics as an end, or in the Application of Mathematics as an end. Certainly, if all you need is an algorithm that works, don't write a formal proof. However, if your mathematical work is foundational for some future algorithm, you need to know that p(x) holds for every x, not just a bunch that you tested.
Most students do not need to be concerned with (and indeed, are not instructed in) formal proofs. But, if we are going to push math forward, we need to lay a solid foundation of axioms and proofs.
It is not true that the best way to do geometry is by converting everything into algebra. Proving simple geometric stuff, like the fact that the three altitudes of a triangle all intersect at a single point, is a pain using coordinates. Just because it's possible to convert problems in domain X into problems in domain Y doesn't mean that domain X is pointless.
"Not surprisingly, the formalist program was also more fruitful for endeavours like computational logic and the theory of computation, which resonate with the formalist tenet that mathematics is just the manipulation of strings of abstract symbols according to specific rules."
This http://en.wikipedia.org/wiki/Brouwer-Hilbert_controversy gives some of the context.