What you're talking about is an overly simplified model that's irrelevant for real computing. My point is that other models exist. If you actually want the fastest algorithm in practice, you need to design it for actual hardware: finite data, caches, a limited number of registers, limited RAM, and sometimes an approximation of a real instruction set.
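To make the hardware point concrete, here is a minimal sketch (all names are my own, not from the discussion): two loops that sum the same 2D array and are identical in the simple RAM model, yet the row-major order is usually noticeably faster on real machines because it walks memory sequentially and plays well with the cache.

```python
import time

N = 2000
grid = [[1] * N for _ in range(N)]

def sum_row_major(g):
    # Contiguous access: each inner row is walked front to back.
    total = 0
    for row in g:
        for x in row:
            total += x
    return total

def sum_col_major(g):
    # Strided access: each step jumps to a different row object.
    total = 0
    for j in range(len(g[0])):
        for i in range(len(g)):
            total += g[i][j]
    return total

t0 = time.perf_counter(); r = sum_row_major(grid); t1 = time.perf_counter()
c = sum_col_major(grid);  t2 = time.perf_counter()
print(f"row-major: {t1 - t0:.3f}s  col-major: {t2 - t1:.3f}s  equal: {r == c}")
```

Both functions are O(N^2), so the asymptotic model can't distinguish them; only a model that accounts for memory access patterns can.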
However, by changing your assumptions you can still use O notation with more complex models; the external-memory model, for example, counts block transfers between cache and RAM instead of individual instructions.