
> ...truly targeting low-latency and high frequency events concurrency is your enemy, not your friend.

This reminds me of the LMAX financial trading platform, where they started with a concurrent model but ended up with a single thread "...that will process 6 million orders per second..."

http://martinfowler.com/articles/lmax.html



Yes, but the concurrency has been shifted to the two disruptors at either end of the business process logic. And I suspect the business logic is fairly simple and lends itself to the straight-through solution. If you want to react to a news event (say), you have to somehow send a signal to the feed requesting recent history (the last 20 minutes, say), have something waiting to look at it, crunch the numbers when they arrive to see if something's up, then interrupt the existing trade decision/risk management schedule with a potentially better trade. Complexity increases quickly, and concurrency helps do all of that at high speed.
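
For a concrete picture of what "concurrency at either end" looks like, here's a rough sketch against the Disruptor 3.x DSL; the handler names and event type are made up for illustration, and the output side would be wired up the same way in reverse:

    import com.lmax.disruptor.EventHandler;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.util.DaemonThreadFactory;

    public class InputPipelineSketch {
        // Illustrative event slot carried through the input ring buffer
        static class InputEvent {
            byte[] rawMessage;
        }

        public static void main(String[] args) {
            Disruptor<InputEvent> input = new Disruptor<>(
                    InputEvent::new, 1 << 14, DaemonThreadFactory.INSTANCE);

            // These three run concurrently, each on its own thread...
            EventHandler<InputEvent> journaller   = (e, seq, end) -> { /* write to disk */ };
            EventHandler<InputEvent> replicator   = (e, seq, end) -> { /* send to replica */ };
            EventHandler<InputEvent> unmarshaller = (e, seq, end) -> { /* decode rawMessage */ };

            // ...while the business logic stays on a single thread, gated
            // until all three have seen each event.
            EventHandler<InputEvent> businessLogic = (e, seq, end) -> { /* match orders */ };

            input.handleEventsWith(journaller, replicator, unmarshaller)
                 .then(businessLogic);

            input.start();
        }
    }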


As your complexity curve goes up, you have to relax your latency expectations. That is what I meant when I said there is no single trading system that can be the "best".

If what you are interested in is complex decision making, then it may make sense to use a different sort of messaging technology than LMAX, but you won't be getting anything remotely "low latency". Nothing wrong with that; it just needs to be a known expectation.


Yep, I've pored over the technology and concepts there and it's awesome. So awesome that I think you can increase complexity without too much of a latency penalty. Just behind the fast guys but way ahead of the straight algo guys is where I think the opportunities lie (I'm kind of answering your previous question about where the dials sit in my mind). I might be wrong, though, and would like to test out all the dial positions at once before locking in. And that's where a flexible haskell version could shine.


I have no opinion about the choice of haskell. That said, you are already making decisions about the dials if you go in with an architecture that relies on massive concurrency, functional languages, etc. You cannot for instance get the latency dial very low with that central architecture decision.

Again there is nothing wrong with that. It's just better to state it up front as an expectation and/or a goal than to assume you are going to be able to pivot easily once you have an architecture in place.


I can't recommend working with the LMAX library enough if you are interested in low latency on the JVM. Lots of people have arrived at the same or a similar place, but LMAX was the first to just let everyone see it.
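
For anyone who hasn't looked yet, the core API is tiny. A minimal producer/consumer sketch against the 3.x DSL looks roughly like this (OrderEvent and the sizes are just placeholders):

    import com.lmax.disruptor.RingBuffer;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.util.DaemonThreadFactory;

    public class DisruptorHelloWorld {
        // Pre-allocated slot, mutated in place so the hot path produces no garbage
        static class OrderEvent {
            long price;
        }

        public static void main(String[] args) throws Exception {
            Disruptor<OrderEvent> disruptor = new Disruptor<>(
                    OrderEvent::new, 1024, DaemonThreadFactory.INSTANCE);

            // Consumer: runs on its own thread, sees events in sequence order
            disruptor.handleEventsWith(
                    (event, sequence, endOfBatch) ->
                            System.out.println("got price " + event.price));

            RingBuffer<OrderEvent> ring = disruptor.start();

            // Producer: claim a slot, fill it, publish it
            for (long i = 0; i < 5; i++) {
                long seq = ring.next();
                try {
                    ring.get(seq).price = i * 100;
                } finally {
                    ring.publish(seq);
                }
            }

            Thread.sleep(100);   // crude: let the handler drain before shutdown
            disruptor.shutdown();
        }
    }

The claim/publish split is the whole trick: the slots are pre-allocated up front, so publishing an event is just a sequence update rather than an allocation plus a lock.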


Yes, it is interesting: once you become highly concurrent, the penalty for context switching starts taking its toll, but that only shows up once the rest of your trading system is already optimized.
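
Within the Disruptor itself that trade-off is exposed as the wait strategy: a busy-spin strategy burns a core but keeps the consumer off the scheduler entirely, while a blocking strategy is kind to the CPU but pays a context switch per wake-up. A quick sketch (3.x API, single producer assumed; Slot is a placeholder event type):

    import com.lmax.disruptor.BlockingWaitStrategy;
    import com.lmax.disruptor.BusySpinWaitStrategy;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.dsl.ProducerType;
    import com.lmax.disruptor.util.DaemonThreadFactory;

    public class WaitStrategySketch {
        static class Slot { long value; }

        public static void main(String[] args) {
            // Lowest latency: the consumer spins on the ring buffer's cursor
            // and never yields the core, so no context switch on the hot path.
            Disruptor<Slot> hot = new Disruptor<>(
                    Slot::new, 1024, DaemonThreadFactory.INSTANCE,
                    ProducerType.SINGLE, new BusySpinWaitStrategy());

            // CPU-friendly: the consumer parks on a lock and is woken when
            // events arrive, paying a context switch (and jitter) each time.
            Disruptor<Slot> cool = new Disruptor<>(
                    Slot::new, 1024, DaemonThreadFactory.INSTANCE,
                    ProducerType.SINGLE, new BlockingWaitStrategy());
        }
    }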



