
A server that gets a spike of load and can't cope with it is pretty normal; it's hard to characterize that as "broken".

When the clients can send more work than the server can handle, there are three options:

  1 - Do nothing; server drops requests. 
  2 - Server notifies the clients (429 in HTTP) and the client backs off (exponentially, with jitter). 
  3 - Put the client requests in a queue.
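Option 2 can be sketched roughly like this (a minimal sketch; `do_request`, the base delay, and the cap are assumptions, not anything from the thread):

```python
import random
import time

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Full jitter: a random sleep in [0, min(cap, base * 2**attempt))."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

def send_with_backoff(do_request, max_attempts=5):
    """Retry while the server signals overload (HTTP 429)."""
    for attempt in range(max_attempts):
        status = do_request()       # hypothetical callable returning a status code
        if status != 429:           # accepted, or failed for some other reason
            return status
        time.sleep(backoff_delay(attempt))
    return 429                      # give up; the caller decides what to do next
```

The jitter matters: without it, all throttled clients retry at the same instants and the spike just repeats.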
The interview question/solution does 2 in a poor way (it just adds a fixed pause), and it does 3 inside the client, when usually that queue lives in an intermediate component (RMQ/Kafka/Redis/DB/whatever).
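Option 3, done properly, decouples producers from the consumer with a bounded queue. A minimal in-process sketch (an in-memory `queue.Queue` standing in for RMQ/Kafka/etc.; all names here are made up for illustration):

```python
import queue
import threading

# Bounded queue standing in for the intermediate component.
work = queue.Queue(maxsize=100)

def client_submit(req):
    """Producer: enqueue, or signal back-pressure instead of hammering the server."""
    try:
        work.put_nowait(req)
        return True          # accepted
    except queue.Full:
        return False         # queue full: the client must back off or drop

def worker(handle, results):
    """Consumer: the server drains the queue at its own pace."""
    while True:
        req = work.get()
        if req is None:      # sentinel: shut down
            break
        results.append(handle(req))
        work.task_done()
```

The bound is the point: an unbounded queue just moves the overload problem into memory, while a full bounded queue gives the client the same "back off" signal as a 429.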

