Hacker News

Sigh. We build machines to automate the repetitive, to eliminate the daily drudgery, and to repeat steps with perfection (a perfection we would match ourselves if only we were as focused as a machine). So why do we keep finding ourselves arguing that a lazy compiler, which offloads the work of a machine onto a dev team, is an acceptable compromise?


Meta-comment: I believe the difference in opinion here (which seems to recur, over and over, and has for decades) is because the job title of "software engineer" actually encompasses many different job duties. For some engineers, their job is to "make it work"; they do not care about the thousand cases where their code is buggy, they care about the one case where it solves a customer's problem that couldn't previously be solved. For other engineers, their job is to "make it work right"; they do not care about getting the software to work in the first place (which, at their organization, was probably solved years ago by someone who's now cashed out and sitting on a beach), they care about fixing all the cases where it doesn't work right, where the accumulated complexity of customer demands has led to bugs. The first is in charge of taking the software from zero to one; the second is in charge of taking the software from one to infinity.

For the former group, error checking just gets in their way. Their job is not to make the software perfect, it's only to make it satisfy one person's need, to replace something that previously wasn't computerized with something that was. Oftentimes, it's not even clear what that "something" is - it's pointless to write something that perfectly conforms to the spec if the spec is wrong. So they like languages like Python, Lisp, Ruby, Smalltalk, things that are highly dynamic and let you explore a design space quickly without getting in your way. These languages give you tools to do things; they don't give you tools to prevent you from doing things.

The second group works in larger teams, with larger requirements and greater complexity, and a significant part of their job description is dealing with bugs. If a significant part of the job description is dealing with bugs, it makes sense to use machines to automate checking for them. And so they like languages like Rust, C++, Haskell, Ocaml, occasionally Go or Java.

The two groups do very little work in common (indeed, most of the time they can't stand to work in the opposing culture), but they come together on programming message boards, which don't distinguish between the two roles, and hence we get endless debates.


My point was: tools that prevent you from doing things should not do that without explicit permission. Because thinking is hard and any interruption by a tool or a compiler will impose unnecessary cognitive load and will make it even harder, which may lead to a logical mistake. It is much better to deal with the compiler after all the thinking is done, not during.


I'm pretty sure you've never used a language with a good type system then.

You describe a system where you have to keep everything a program is doing that's relevant in your head at once, and when you're forced out of that state, it's catastrophic. You seem to be assuming that's the only way to get productive work done while programming. I happen to know it's not.

If a language has a sufficiently good type system, it's possible to use the compiler as a mental force multiplier. You no longer need to track everything in your head. You just keep track of minimal local concerns and write a first pass. The compiler tells you how that fails to work with surrounding subsystems, and you examine each point of interaction and make it work. There is no time when you need the entire system in your head. The compiler keeps track of the system as a whole, ensuring that each individual part fits together correctly. The end result is confidence in your changes without having to understand everything at once.

So why cram everything into your brain at once? Human brains are notoriously fallible. The more work you outsource to the compiler, the less work your brain has to do and the more effectively that work gets done.
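The "compiler as force multiplier" workflow described above can be sketched in Rust. The names (`parse_port`, the port string) are illustrative, not from the thread; the point is that when a function's signature changes, every affected call site becomes a compile error, so the compiler enumerates the impact of the change instead of the programmer holding it in their head:

```rust
// A minimal sketch of letting the compiler track cross-cutting impact.
// The function and names here are hypothetical examples.
fn parse_port(s: &str) -> Option<u16> {
    // parse() can fail; the Option return type forces callers to
    // acknowledge that instead of assuming success.
    s.trim().parse().ok()
}

fn main() {
    // If parse_port's signature later changes (say, to return
    // Result<u16, String>), this match stops compiling, and the
    // compiler points at every other caller too -- no grepping,
    // no keeping the whole system in your head.
    match parse_port(" 8080 ") {
        Some(p) => println!("listening on {}", p),
        None => println!("bad port"),
    }
}
```

The design choice being illustrated: local edits plus exhaustive compiler feedback replace global mental tracking.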


Yes, but "tools that prevent you from doing things you would prefer not to have done in the first place (while still granting you permission to override this when desired)" would be a fairer assessment of what a strict compiler is.

We all agree that a null dereference is a bad thing at runtime. I see no advantage for me as a programmer in being allowed to introduce null dereferences into my code as a side effect of "getting things to work" if, when the code then runs, it doesn't work right. This increases my cognitive load as a programmer, it does not decrease it.
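The null-dereference point can be made concrete with Rust's `Option` type (a stand-in here for any language where absence is tracked in the types; the `score_or_zero` example is hypothetical). A missing value cannot be dereferenced by accident, because the compiler refuses to let the `None` case go unhandled:

```rust
use std::collections::HashMap;

// A minimal sketch: the lookup returns Option, so a missing ("null")
// value can't be used without first handling the missing case.
fn score_or_zero(scores: &HashMap<&str, u32>, name: &str) -> u32 {
    // scores.get returns Option<&u32>. Writing scores[name] instead
    // would panic at runtime on a missing key; the type makes the
    // absent case explicit, and the compiler won't let us ignore it.
    scores.get(name).copied().unwrap_or(0)
}

fn main() {
    let mut scores = HashMap::new();
    scores.insert("alice", 3);
    println!("alice: {}", score_or_zero(&scores, "alice")); // 3
    println!("bob: {}", score_or_zero(&scores, "bob"));     // 0
}
```

The failure mode the previous comment describes, a null dereference surfacing only at runtime, is ruled out at compile time here rather than merely discouraged.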

I would argue that you don't think about the compiler any more when using a language like Haskell than you do when using Python. But you do get more assurances about your program after GHC produces a binary than after Python has finished creating a .pyc -- and that is a win for the programmer.




