I once had to track user actions in a pretty active online game - I configured the Apache servers to pipe their logs to a Perl script. The script parsed each request and pushed the data into an in-memory buffer (i.e. an array).
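A minimal sketch of how that pipe-and-parse loop might look, assuming the combined log format (the script path and the exact parsed fields are illustrative):

    #!/usr/bin/perl
    # Wired up in httpd.conf as a piped log, e.g.:
    #   CustomLog "|/usr/local/bin/log_ingest.pl" combined
    # Apache keeps the script running and writes one log line
    # per request to its stdin.
    use strict;
    use warnings;

    my @buffer;

    while (my $line = <STDIN>) {
        # Combined format: host ident user [time] "request" status bytes "referer" "agent"
        my ($host, $ts, $request, $status) =
            $line =~ /^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3})/
                or next;    # skip anything that doesn't parse
        push @buffer, [ $host, $ts, $request, $status ];
        flush_buffer() if @buffer >= 5000;    # flush_buffer() is sketched below
    }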

Then, when the buffer reached 5000 requests, the script would push the data into PostgreSQL using COPY FROM STDIN, running the transaction with synchronous_commit = off.
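Continuing the sketch with DBI/DBD::Pg (the connection details, table name, and columns are made up, and real code would also escape tabs and backslashes in the fields before handing them to COPY):

    use DBI;

    # AutoCommit off, so each 5000-row batch is exactly one transaction
    my $dbh = DBI->connect('dbi:Pg:dbname=gamelogs', 'logger', '',
                           { AutoCommit => 0, RaiseError => 1 });

    sub flush_buffer {
        return unless @buffer;
        # Don't wait for the WAL to hit disk; at worst a crash loses the last batch
        $dbh->do('SET LOCAL synchronous_commit = off');
        # In the real setup the table name carried the day's timestamp
        $dbh->do('COPY user_actions (host, ts, request, status) FROM STDIN');
        $dbh->pg_putcopydata(join("\t", @$_) . "\n") for @buffer;
        $dbh->pg_putcopyend();
        $dbh->commit;
        @buffer = ();
    }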

As far as DB tables go, a new table was created each day with a timestamp in its name; a cron script then took care of tables older than one week, aggregating the data and getting rid of the junk.
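The rotation could be as simple as a nightly cron'd script along these lines (all table and column names are illustrative, and it assumes the summary table already exists):

    #!/usr/bin/perl
    # Nightly cron job: roll a week-old daily table up into a summary, then drop it.
    use strict;
    use warnings;
    use DBI;
    use POSIX qw(strftime);

    my $dbh = DBI->connect('dbi:Pg:dbname=gamelogs', 'logger', '',
                           { AutoCommit => 1, RaiseError => 1 });

    # Daily table from 8 days ago, e.g. user_actions_20091012
    my $old = strftime('user_actions_%Y%m%d', localtime(time - 8 * 86400));

    $dbh->do(qq{
        INSERT INTO user_actions_summary (day, request, hits)
        SELECT current_date - 8, request, count(*)
          FROM $old
         GROUP BY request
    });
    $dbh->do("DROP TABLE $old");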

This setup was handling tens of thousands of writes per second without breaking a sweat on a pretty modest server. Of course, that's less than what Redis can do, but then again I trust PostgreSQL more than I trust Redis.

Out of curiosity, were you doing a high number of reads at the same time? And if so, did the reads go through the same batch-processing method, or were they random reads?
