
"I see poor understanding of the performance ranges of various components"

I'm totally guilty of this. I write new code on new hardware, and have very little intuition for how fast it should go. Is 10k ops a second good? 1M? I just don't know. Of course, then I pull out the profiler and think about my algorithm, but it takes a lot of second-guessing to decide how close to the limit I am.
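One way to start calibrating that intuition is to time a trivial in-memory loop on your own machine and compare real workloads against it. A minimal sketch (the function names here are mine, purely illustrative):

```python
import time

def ops_per_second(fn, iterations=1_000_000):
    """Time a no-argument callable and return operations per second."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    elapsed = time.perf_counter() - start
    return iterations / elapsed

# A trivial operation: incrementing a counter in pure Python.
counter = 0
def bump():
    global counter
    counter += 1

rate = ops_per_second(bump)
print(f"{rate:,.0f} ops/sec")
```

If your "real" task runs orders of magnitude below a loop like this, something between you and the hardware (I/O, serialization, a driver) is likely the bottleneck rather than the CPU.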

For example, I was writing some Clojure code that wrote to a SQL database. I'm relatively new to the JVM stack. I was writing to the DB at 1 MB/s. I thought "well, that's not great, but not bad. Between network traffic, DB constraints, and writing to a laptop disk drive, I suppose that's alright." Then I replaced the JDBC connection-pooling driver, and the same code wrote at 8 MB/s.
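This kind of gap shows up with most database drivers: per-row round trips or per-row commits, rather than disk speed, often set the ceiling. A hedged sketch using SQLite (not JDBC, but the same principle) comparing one commit per row against a single batched transaction:

```python
import os
import sqlite3
import tempfile
import time

def write_rows(batched, n=500):
    """Insert n rows into a fresh SQLite DB; commit per row or once
    at the end. Returns the elapsed time in seconds."""
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
    start = time.perf_counter()
    for i in range(n):
        conn.execute("INSERT INTO t VALUES (?, ?)", (i, "x" * 100))
        if not batched:
            conn.commit()   # one sync-to-disk per row: slow
    if batched:
        conn.commit()       # one sync-to-disk total: fast
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

slow = write_rows(batched=False)
fast = write_rows(batched=True)
print(f"per-row commits: {slow:.3f}s, single transaction: {fast:.3f}s")
```

The code is identical either way; only the commit strategy changes, which is why swapping a driver or pool can produce a multi-x throughput difference without touching your own code.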

It'd be nice if there were a web resource for general guidelines on what it takes to max out hardware. Basically, benchmarks for real-world tasks.



"It'd be nice if there were a web resource for general guidelines on what it takes to max out hardware. Basically, benchmarks for real-world tasks."

Hear, hear!

I had the same thought when reading the first two pages of the article. I'd love to be able to better intuit performance (or heck, troubleshoot slow systems, which I do more often). The main problem I encounter is a lack of accurate, understandable information about the underlying hardware and the various layers between my program and the hardware (especially important for me lately, as more of my stuff runs in a VM).

It seems like you need to be lucky and find a mentor willing to teach this esoteric material.



