I think it's a measure of the goodwill in the community for Github (and perhaps, the fact that a lot of us have done something similar in the past) that they won't cop much flak at all for this.
I don't know. I use github, but my paid, private repos are elsewhere. The fact that someone, anyone, can run against the production system and nuke it raises some basic questions about password storage. I don't run a site anything like github, but my production and test databases have different passwords, and none of them are stored in a way that the test environment could get access to the live db, nor could the tests be run on the production environment. There's bad luck and there's asking for trouble.
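The "tests can't be run on the production environment" guard is easy to make mechanical. A minimal sketch (hypothetical helper, not anyone's actual code): a check that destructive test setup refuses to run unless the environment is explicitly "test".

```ruby
# Hypothetical guard for destructive test helpers: bail out unless the
# current environment is explicitly "test". Anything that drops or
# truncates tables would call this first.
def assert_test_environment!(env = ENV.fetch("RAILS_ENV", "development"))
  unless env == "test"
    raise "Refusing to run destructive test setup against #{env.inspect}"
  end
  true
end
```

With a guard like that, an accidental run with RAILS_ENV pointed at production dies with an error instead of wiping the live database.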
To me, that sounds like encouraging a race to the bottom.
When you're in the business of storing other people's data, things like transactions, binary backups, QUICK RESTORES, and separating dev from production shouldn't be afterthoughts. They are core attributes.
I would understand this for a demo, but not after a couple of years and half a million users. They have paying customers.
"Reliable code hosting
We spend all day and night making sure your repositories are secure, backed up and always available."
The half a million users validates their techniques, no matter how much armchair quarterbacking.
The code repos were never in danger, and they've been killing it in the market because they are racing to add awesome extra features, not racing to the bottom.
I don't disagree that they can do more in their db operations, and that it's fine for us paying customers to demand more, but the reality of startups is that it very much is a race, and features, whether for availability, recovery, or anything else, are viciously prioritized; many things don't happen until something breaks.
If you don't think Github cuts the mustard, svn on Google Code probably won't have problems like this...
Disclaimer: I'm personal friends with much of the Github crew.
> The half a million users validates their techniques,
Let me introduce you to my friend, GeoCities.
Funny thing about free hosting.
> no matter how much armchair quarterbacking.
Transactions aren't armchair quarterbacking. Binary backups aren't armchair quarterbacking. Separating development from production isn't armchair quarterbacking. Kindly, you have no idea what you're talking about.
You can't give them credit for the repositories being mostly intact, when the ONLY parts that broke were the parts they mucked with to tie them into their database.
> but the reality of startups is that it very much is a race and features, for availability or recovery
And those are exactly the places where they screwed up.
Are you sitting in a chair? Do you not work at Github? Then you're armchair quarterbacking. We all are. Even if you are a quarterback for another team (I'm a DBA myself).
Anyway, it's a great discussion, so we can learn from other people's mistakes. I will be triple-checking my restores later today, and likely halt the project I'm working on to get cold standbys shored up asap. (But it's hard to prioritize housekeeping over customer-centric features in the race I'm running alongside the Github crew.)
"a pejorative modifier to refer to a person who experiences something vicariously rather than first-hand, or to a casual critic who lacks practical experience"
Ironically the most fitting application would be to say that they're armchair quarterbacking their own database administering.
Fortunately, there is this thing called "science" which means we can understand things about the world regardless of where we live. As Dawkins would say, there is no such thing as "Chinese Science" or "French Science", just science. Similarly, there is no such thing as "Github MySQL" or "Github separation of production and development systems" in that same sense.
If we're going to use hindsight, we might as well look at the cause and the effect.
Yes, they missed some pretty obvious things. And what happened? A few hours of downtime because their restore was slow, plus a tiny bit of inconsequential data loss. Hardly a catastrophe.
The fact is that every site has some sort of problems. Many of them will be completely obvious like this one. And while github could have gone through and attempted to fix them all, I much prefer they spend their time doing things that have more than a few hours of impact on my life.
I wonder if this is a Rails thing. It's an out-of-the-box pattern to put production, development, and testing credentials into one database configuration file.
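For anyone who hasn't seen it: the stock Rails config/database.yml really does keep all three environments side by side in one file (values below are illustrative, not anyone's real credentials):

```yaml
# config/database.yml -- illustrative values only. The point is that one
# file holds credentials for every environment, so a mis-set RAILS_ENV
# is all it takes for the test suite to connect to production.
development:
  adapter: mysql
  database: myapp_development
  username: dev_user
  password: dev_secret

test:
  adapter: mysql
  database: myapp_test
  username: test_user
  password: test_secret

production:
  adapter: mysql
  database: myapp_production
  username: prod_user
  password: prod_secret
```

Nothing in that convention stops you from giving production a different user with credentials that only exist on the production boxes; the file format just doesn't push you toward it.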
Also, from what I know, Github is a flat org where all the devs have the ability to do work in production. My company is like this too, and while it sometimes leads to scary mistakes, it also leads to massive productivity over having to go through a release engineering team.