Hacker News | Sinjo's comments

Fixed! I grabbed that example from the bug tracker issue where the change was discussed, and it wasn't based on a version where the backtick change had been made.


Ooh, I hadn't heard of rbs-inline before. I was always a bit put off by the type definitions having to live in separate files.


Oh nice! I haven't used Sorbet yet, but this is the second time I've heard good things about it outside of Stripe (where it originated). I'll have to give it a proper look.
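
For anyone else who hasn't seen it, a Sorbet signature looks roughly like this (the class and method below are invented purely for illustration):

    # typed: true
    require "sorbet-runtime"

    class PriceFormatter
      extend T::Sig

      # The sig is checked statically by `srb tc`, and sorbet-runtime also
      # verifies it when the method is called.
      sig { params(pence: Integer).returns(String) }
      def format(pence)
        "£%.2f" % (pence / 100.0)
      end
    end

    PriceFormatter.new.format(1050) # => "£10.50"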


I think they direct people to tapioca by default at this point, but that was key to making it a smooth process for us.


I find it to be a mixed bag. Better than nothing, but it doesn't come anywhere close to what something like TypeScript can do, for example.


By coincidence I just ordered my first PC Engines computer to use as a router last week.

Glad I got in while they're still being made. It seems like an underserved niche.


You will love it.


Honestly, state machines are fantastic in Rails too. My last company built Statesman (https://github.com/gocardless/statesman/), and being able to lean on it to prevent you from getting into invalid states is a huge win. You also get the bonus of tracking the history of states your resources went through (which is especially useful when you're dealing with payments).

At some point you'll have to think about query performance on the state transition table, but it'll go further than you think and is firmly in the realm of problems of success.
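
For anyone who hasn't used it, the machine definition is roughly this shape (the states and guard below are made up for illustration; the history tracking comes from the transition model you pair the machine with):

    # Rough sketch of a Statesman machine; state names and the guard are invented.
    class PaymentStateMachine
      include Statesman::Machine

      state :pending, initial: true
      state :submitted
      state :paid_out
      state :failed

      transition from: :pending,   to: [:submitted, :failed]
      transition from: :submitted, to: [:paid_out, :failed]

      # Guards can block a transition even when it's allowed by the table above.
      guard_transition(to: :paid_out) do |payment|
        payment.amount.positive?
      end
    end

Calling transition_to! with a transition that isn't allowed raises, while transition_to just returns false, which is what keeps records out of invalid states.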


In a previous life I implemented a business system in Rails with a core state machine. The staff who used it could fundamentally understand the states and transitions, bringing them closer to understanding how the whole system worked. Not all benefits or costs of a technique are limited to the developer.


Doesn't coiling cables while passing increasing levels of power through them (which is exactly what's happening with new USB-C PD specs) start giving off more and more heat?

EDIT: Nope! See the replies.


Coiling is only relevant to magnetic fields. These cables generally have the same amount of current passing in opposite directions through different wires, so the net current is zero.
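
Sketching that with Ampère's law for a loop drawn around the whole cable:

    \oint \vec{B} \cdot \mathrm{d}\vec{\ell} = \mu_0 I_{\mathrm{enc}} = \mu_0 (I - I) = 0

The equal and opposite currents mean the loop encloses no net current, so coiling the cable doesn't build up a field the way coiling a single conductor would.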


Ah that makes sense! Having current in opposite directions in the two wires in the cable cancels it out. Thanks for explaining.

I guess the warnings about not running loads of current through coiled cables are more to do with heat dissipation then?


No. The amount of heat given off is just going to be I^2R, and coiling the cable doesn't increase R.

If you coil it so tightly that the heat can't get out, that could be problematic, but it is not likely to be a problem with this coiling technique.
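
As a rough worked example (the 0.1 Ω of round-trip conductor resistance is an assumed figure, purely for illustration): 100 W USB-C PD runs at 20 V / 5 A, so

    P = I^2 R = (5\,\mathrm{A})^2 \times 0.1\,\Omega = 2.5\,\mathrm{W}

and that 2.5 W is the same whether the cable is straight or coiled, because R depends on the conductor, not on its shape.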


Coiled extension cords are definitely a hazard at sufficiently high power, though.


Coiled AC extension cords.


We stopped using HDDs and CRTs, and started using SSDs and LCDs.


Or more broadly, we stopped using magnetism-based storage and display technology. That's why random external magnets were bad for those things: they used magnetic fields for their operation, so other magnets would corrupt things.


Pretty much number 1 on my Postgres wishlist.


It's a little lost in another comment thread (https://news.ycombinator.com/item?id=15862584), but I'm definitely excited about solutions like Patroni and Stolon that have come along more recently.


Well, you should definitely look into them. In the past we used Corosync/Pacemaker a lot (even for things other than database HA) but, trust me... it was never a sane system. If it ain't broke, it worked. If something broke, it was horrible to actually get back to any sane state at all.

We migrated to Patroni (yeah, Stolon is cool as well, but since it's a little bit bigger than we need, we went with Patroni). The hardest part with Patroni is actually creating a script which creates service files for Consul (Consul is a little bit weird when it comes to services) or somehow updates DNS/HAProxy/whatever to point to the new master (this is not a problem with Stolon).

But since then we've tried all sorts of failures and never had a problem. We pulled plugs (hard drive, network, power cord) and nothing bad happened, no matter what we did. The watchdog worked better than expected in some cases where we tried to throw bad stuff at Patroni / overload it. And since it's in Python, the characteristics (memory/CPU usage) are well understood. (The code is also easy to reason about, at least more so than Corosync/Pacemaker.)

etcd/ZooKeeper/Consul are battle tested and kept working even though we have way more network partitions than your typical network (this was bad for Galera... :( :( ). We never autostart a failed node after a restart/clean start; we always look into the node and manually start Patroni. We also use the role_change/etc. hooks to create/delete service files in Consul and to ping us if anything happens on the cluster.


I am currently using Stolon with synchronous replication for a setup, and overall it's great.

It gives me automated failover, and -- perhaps more importantly -- the opportunity to exercise it a lot: I can reboot single servers willy-nilly, and do so regularly (for security updates every couple of days).

I picked the Stolon/Patroni approach over Corosync/Pacemaker because it seems simpler and more integrated; it fully "owns" the Postgres processes and controls what they do, so I suspect there is less chance of accidental misconfiguration of the kind the article describes.

I currently prefer Stolon over Patroni because statically typed languages make it easier to have fewer bugs (Stolon is Go, Patroni is Python), and because the proxy it brings out of the box is convenient: on any machine I connect to localhost:5432 to get to Postgres, and if Postgres fails over, it makes sure to disconnect me so that I'm not accidentally connected to a replica.
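
As a hedged client-side illustration of that last point (Ruby's pg gem; the connection details are assumptions about my setup rather than anything Stolon mandates):

    # Connect through the local stolon-proxy and check we're really on the primary.
    require "pg"

    conn = PG.connect(host: "localhost", port: 5432, dbname: "postgres", user: "postgres")

    # pg_is_in_recovery() is true on a streaming replica and false on the primary.
    on_replica = conn.exec("SELECT pg_is_in_recovery()").getvalue(0, 0) == "t"

    puts(on_replica ? "connected to a replica" : "connected to the primary")
    conn.close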

In general, the Stolon/Patroni approach feels like the "right way" (in the absence of failover being built directly into the DB, which would be great to have in upstream Postgres).

Cons:

Bugs. While Stolon works great most of the time, every couple of months I get some weird failure. In one case it was that a stolon-keeper would refuse to come back up with an error message, in another that a failover didn't happen, and in a third that Consul stopped working (I suspect a Consul bug: the create-session endpoint hung even when used via plain curl) and as a result some stale Stolon state accidentally accumulated in the Consul KV store, with entries existing that should not be there and thus Stolon refusing to start correctly.

I suspect that, as with other distributed systems that are intrinsically hard to get right, the best way to get rid of these bugs is if more people use Stolon.


> I currently prefer Stolon over Patroni because statically typed languages make it easier to have fewer bugs (Stolon is Go, Patroni is Python)

Sounds like a holy-war topic :) But let's be serious: how does a statically typed language help you avoid bugs in the algorithms you implement? The rest is about proper testing.

> and because the proxy it brings out of the box is convenient: on any machine I connect to localhost:5432 to get to Postgres

It seems like you are running a single database cluster. When you have to run and support hundreds of them, you will change your mind.

> if Postgres fails over, it makes sure to disconnect me so that I'm not accidentally connected to a replica.

HAProxy will do exactly the same.

> Bugs. While Stolon works great most of the time, every couple of months I get some weird failure. In one case it was that a stolon-keeper would refuse to come back up with an error message, in another that a failover didn't happen, and in a third that Consul stopped working (I suspect a Consul bug: the create-session endpoint hung even when used via plain curl) and as a result some stale Stolon state accidentally accumulated in the Consul KV store, with entries existing that should not be there and thus Stolon refusing to start correctly.

Yeah, it proves one more time:

* Don't reinvent the wheel: HAProxy vs. stolon-proxy.
* Using a statically typed language doesn't really help you have fewer bugs.

> I suspect that, as with other distributed systems that are intrinsically hard to get right, the best way to get rid of these bugs is if more people use Stolon.

As I've already said before: we are running a few hundred Patroni clusters with etcd and a few dozen with ZooKeeper. We've never had such strange problems.


> > if Postgres fails over, it makes sure to disconnect me so that I'm not accidentally connected to a replica.

> HAProxy will do exactly the same.

Well, I think that is not the same as what stolon-proxy actually provides (I actually use Patroni). But if your network gets split and you end up with two masters (one application writing to the old master), there would be a problem if an application were still connected to the split-off master.

However, I don't really get the point, because etcd/Consul would not allow the old node to keep the master role, which means the split-off master would lose that role and thus either die (because it can't connect to the new master) or just become a read replica, and the application would then probably throw errors if users were still connected to it. It highly depends on how big your etcd/Consul cluster is and how well your application detects failures. (Since we are highly dependent on our database, we actually kill HikariCP (Java) in case of too many master write failures and just restart it after a set amount of time. We're also looking at creating a small, lightweight async driver based on Akka, where we'd do this in a slightly more automated fashion.)


> Well, I think that is not the same as what stolon-proxy actually provides (I actually use Patroni). But if your network gets split and you end up with two masters (one application writing to the old master), there would be a problem if an application were still connected to the split-off master.

On a network partition, Patroni will not be able to update the leader key in etcd and will therefore restart Postgres in read-only mode (create recovery.conf and restart). No writes will be possible.
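
A heavily simplified sketch of that leader-lock pattern (written in Ruby purely for illustration; Patroni itself is Python and this is not its actual code, and the DCS client here is a stub):

    # Toy illustration of the DCS leader-lock pattern described above.
    class StubDcs
      def initialize
        @renewals_left = 3 # pretend the DCS becomes unreachable after 3 renewals
      end

      # A real client would talk to etcd or Consul; false means "couldn't renew".
      def renew_leader_key(node:, ttl:)
        (@renewals_left -= 1) >= 0
      end
    end

    def leader_loop(dcs, node)
      loop do
        if dcs.renew_leader_key(node: node, ttl: 30)
          # We still hold the lock, so Postgres stays a writable primary.
          puts "#{node}: leader key renewed, staying primary"
        else
          # We can no longer prove we're the leader, so demote to read-only
          # (e.g. write a recovery.conf and restart) to avoid split-brain writes.
          puts "#{node}: lost the leader key, demoting to read-only"
          break
        end
        sleep 1 # a real loop would pace itself against the lock TTL
      end
    end

    leader_loop(StubDcs.new, "pg-node-1")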


It would be interesting to know how Stolon/Patroni deal with the failover edge cases and how this impacts availability. For example, if you're accessing the DB but can't contact etcd/Consul, then you should stop accessing the DB because you might start doing unsafe writes. But this means that Consul/etcd is now a point of failure (though it usually runs on multiple nodes, so that shouldn't happen!). And you can end up in a situation where bugs/issues with the HA system cause you more downtime than manual failover would.

You also have to be careful to ensure there are sufficient time gaps when failing over, to cover the case where the master is not really down and connections are still writing to it. For example, the default Patroni HAProxy config doesn't even seem to kill live connections, which seems kind of risky.


> if you're accessing the DB but can't contact etcd/Consul, then you should stop accessing the DB because you might start doing unsafe writes.

If Patroni can't update the leader lock in etcd, it will restart Postgres in read-only mode. No writes will happen.

> For example, the default Patroni HAProxy config doesn't even seem to kill live connections, which seems kind of risky.

That's not true: https://github.com/zalando/patroni/blob/master/haproxy.cfg#L...


Ah, thanks. I was looking at the template files, but I guess that isn't used, or is used for something else.


Thanks for the extra info, and the insight into how you're using Patroni. Always helpful to hear about someone using it for real, especially someone who's come from Pacemaker. :)


Right!

A thing that scares me is anyone saying they're running their own HA cluster (not single instance) for cost reasons. Infra people are not cheaper than the hosted solutions (Amazon RDS, Google Cloud SQL, Heroku Postgres).


That’s a blanket statement that has very little basis in reality. Hosted Postgres is never going to give you the performance you need for low latency deployments.


RDS i3 on bare metal is in preview now, so it's not too far off.

https://aws.amazon.com/blogs/aws/new-amazon-ec2-bare-metal-i...


Oh for sure!

My claim is that you need to hire some expensive people if you want that performance, not that there aren't reasons to run your own database instances!


Then you are running it yourself for latency reasons, not (just) the cost reasons in the GP's scenario.


Cause AWS infra is managed by magic monkeys.

