No, that's like saying you need a completely different set of algorithms because you changed the overall speed of your computer components up or down.
Algorithms are not designed with absolute speeds in mind; it's relative. The seek-to-read ratio of RAM is very roughly comparable to that of SSD and to that of HDD. It's just that SSD is faster than HDD and RAM is way faster than SSD.
You may need to tweak a few numbers, such as how aggressive the read-ahead is and how big the buffers are, but none of this necessitates a new file system or a brand-new way of seeing things.
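To put numbers on the "tweak a few numbers" point, here's a toy sketch in Python. The device figures are made-up round numbers and the heuristic is just the usual "read ahead enough to amortize one positioning delay" rule of thumb; the point is that the logic is identical for HDD and SSD, only the constants move:

    # Illustrative only: pick a read-ahead window large enough that the
    # transfer time amortizes the positioning cost. Same rule for both
    # devices, different constants.
    def readahead_bytes(seek_s, bandwidth_bytes_per_s):
        return int(seek_s * bandwidth_bytes_per_s)

    hdd = readahead_bytes(seek_s=8e-3, bandwidth_bytes_per_s=120e6)    # roughly 1 MB
    ssd = readahead_bytes(seek_s=0.1e-3, bandwidth_bytes_per_s=250e6)  # roughly 24 KiB
    print(f"HDD read-ahead ~{hdd // 1024} KiB, SSD read-ahead ~{ssd // 1024} KiB")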
"They're different so they need different file systems" is honestly a bit of magical thinking, hoping for discovering untapped potential, just because of some fuzzy feeling of novelty SSD invokes.
What would you change specifically in a file system optimized for SSD? Be specific, and then we'll have something to talk about.
> No, that's like saying you need a completely different set of algorithms because you changed the overall speed of your computer components up or down.
Yes? Why are you presenting that as a ridiculous scenario?
The way you optimize an algorithm is going to be completely different when you can assume your drive is nearly as fast to access as RAM. It would also completely change the way OSes do paging.
The post you're responding to isn't saying it's impossible to get work done; they're saying it's an opportunity to optimize, which it is. And since a file system has such a dramatic effect on the performance of a system (one of the biggest draws of SSDs is their speed), the idea of designing one specifically with SSDs in mind isn't that outlandish.
Now, whether or not what we have is "good enough" is a different discussion, but presenting that as something that should be ridiculed is just short-sighted and ignorant.
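To put numbers on the "optimize differently" point, here's a toy cost model for sizing B-tree nodes, with purely illustrative constants (nothing measured, and it ignores caching entirely). The optimal node size, an algorithmic design choice rather than a mere tunable, swings by orders of magnitude as positioning latency drops toward RAM territory:

    import math

    N = 1_000_000_000   # keys in the index
    ENTRY = 16          # bytes per key/pointer pair

    def lookup_cost(node_bytes, latency_s, bw_bytes_per_s):
        # One lookup touches one node per level; each touch pays a
        # positioning latency plus the time to transfer the node.
        fanout = max(2, node_bytes // ENTRY)
        height = math.ceil(math.log(N, fanout))
        return height * (latency_s + node_bytes / bw_bytes_per_s)

    candidates = [512, 4096, 65536, 1 << 20]
    for name, lat, bw in [("HDD-ish", 8e-3, 120e6), ("RAM-ish", 100e-9, 10e9)]:
        best = min(candidates, key=lambda s: lookup_cost(s, lat, bw))
        print(name, "-> best node size:", best, "bytes")

With seek-dominated storage you want fat nodes and a shallow tree; with RAM-like latency the optimum collapses toward a page or cache line, which is a structurally different index, not the same one with a knob turned.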
> The seek-to-read ratio of RAM is very roughly comparable to that of SSD and to that of HDD.
Totally untrue. SSDs seek faster than HDDs, but HDDs have higher bandwidth than SSDs, almost across the board. I don't know where you got the idea otherwise.
And with RAM (DDR specifically) you can generally pipeline accesses, which hides the bank-open latency and negates the bandwidth penalty. That's not true of a single HDD.
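Back-of-the-envelope version of that, using a crude Little's-law model and round illustrative numbers (nothing here is a measurement):

    def effective_bw(xfer_bytes, latency_s, peak_bw, in_flight):
        # With `in_flight` independent requests outstanding, completions
        # arrive roughly every (latency + transfer) / in_flight seconds,
        # capped by the peak rate of the bus or platter.
        per_request_s = latency_s + xfer_bytes / peak_bw
        return min(peak_bw, in_flight * xfer_bytes / per_request_s)

    dram_pipelined = effective_bw(64, 50e-9, 12.8e9, in_flight=16)  # banks interleaved
    dram_serial    = effective_bw(64, 50e-9, 12.8e9, in_flight=1)
    hdd_random     = effective_bw(4096, 8e-3, 120e6, in_flight=1)   # one actuator

    print(f"DRAM, 16 requests in flight: {dram_pipelined / 1e9:.1f} GB/s (peak)")
    print(f"DRAM, one at a time:         {dram_serial / 1e9:.1f} GB/s")
    print(f"HDD, random 4 KiB reads:     {hdd_random / 1e6:.2f} MB/s")

Keeping many requests in flight is what lets DRAM run at its line rate; a single HDD actuator stays latency-bound no matter what you do.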
> What would you change specifically in a file system optimized for SSD?
Drop the notion that you gain any benefit from spatial locality. You're then freed from any data layout constraint and can do great things like content-based addressing.
Don't believe me? These guys are building an empire on this idea: http://www.xtremio.com/ (Disclaimer: I work there.)
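For anyone curious what content-based addressing looks like, here is a minimal toy sketch in Python. This is emphatically not XtremIO's actual design, just the general shape of the idea: a block's address is a hash of its contents, so physical placement stops mattering and identical blocks deduplicate for free. It only makes sense when random reads are cheap, which is exactly the SSD property being argued about.

    import hashlib

    class ContentStore:
        """Toy content-addressed block store (illustrative, not production)."""
        def __init__(self):
            self.blocks = {}    # content fingerprint -> block data
            self.logical = {}   # (volume, lba) -> fingerprint

        def write(self, volume, lba, data: bytes):
            fp = hashlib.sha256(data).hexdigest()
            self.blocks.setdefault(fp, data)   # identical content stored once
            self.logical[(volume, lba)] = fp   # logical map points at the hash

        def read(self, volume, lba) -> bytes:
            return self.blocks[self.logical[(volume, lba)]]

    store = ContentStore()
    store.write("vol0", 0, b"\x00" * 4096)
    store.write("vol1", 7, b"\x00" * 4096)     # same content: no second copy
    assert store.read("vol1", 7) == b"\x00" * 4096
    print("unique blocks stored:", len(store.blocks))  # -> 1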
> Algorithms are not designed with absolute speeds in mind.
Entirely incorrect in certain instances, especially with regard to implementation, and specifically in the ones we're talking about: caching schemes are designed with concrete speeds and hardware costs in mind. Also, you seem to be laboring under the misconception that I don't know what an algorithm is.
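Concretely, the standard average-access-time arithmetic is why caching schemes care about absolute speeds. The hit rates and latencies below are illustrative, not measured:

    def amat(hit_s, miss_rate, miss_penalty_s):
        # Average access time: the textbook hit/miss formula.
        return hit_s + miss_rate * miss_penalty_s

    # DRAM cache in front of an HDD: a miss costs a mechanical seek,
    # so even a modest hit rate pays for the hardware.
    print(amat(hit_s=100e-9, miss_rate=0.10, miss_penalty_s=8e-3))    # ~0.8 ms

    # The same cache in front of a fast SSD: the miss penalty shrinks by
    # roughly two orders of magnitude, and so does the cache's value.
    print(amat(hit_s=100e-9, miss_rate=0.10, miss_penalty_s=100e-6))  # ~10 us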