The issue is that the API itself is, I assume, badly designed.

Equivalent delete queries in REST / GraphQL would be

  curl -X DELETE 'https://api.example.com/users/123'
vs

  curl -G 'https://api.example.com/graphql' \
    --data-urlencode 'query={ deleteUser(id: 123) { id } }'


I've been surprised to see that several cycling products have gotten better over time.

For example, I have bought these bottles https://www.zefal.com/en/bottles/545-magnum.html three times over ten years:

- The first time, the mouthpiece was attached by two plastic prongs. The prongs eventually failed

- The second time I bought them, the mouthpiece was attached by four prongs

- The last time I bought them, the hard plastic mouthpiece was replaced by a more comfortable one

I also bought these pedals three times: https://www.lookcycle.com/fr-en/products/pedals/road/race/ke...

- With the first version, small rocks got stuck between the carbon spring and the body of the pedal, making it impossible to clip in and eventually dislodging the spring

- The second version fixed that by adding a plastic cover over the spring, and also improved the bearing seals (which was also a problem with the first version)

- The third version made the angles on the outside of the pedal less acute, making it harder to damage the pedals in a fall


How easy would it be to discover this info without personal experience?


An analytics database is better (ClickHouse, BigQuery...).

They can do aggregations much faster and can deal with sparse/many columns (the "paid" event has an "amount" attribute, the "page_view" event has a "url" attribute...).
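
For example, a single wide events table works well here (a sketch in ClickHouse-flavored SQL; the table and column names are made up):

  -- one wide table; attributes unused by an event type stay NULL,
  -- which costs almost nothing in a column store
  CREATE TABLE events (
      event_type String,                    -- 'paid', 'page_view', ...
      ts         DateTime,
      amount     Nullable(Decimal(18, 2)),  -- only set for 'paid'
      url        Nullable(String)           -- only set for 'page_view'
  ) ENGINE = MergeTree ORDER BY (event_type, ts);

  -- aggregations only scan the columns they reference, so they stay fast
  SELECT toStartOfDay(ts) AS day, sum(amount) AS revenue
  FROM events
  WHERE event_type = 'paid'
  GROUP BY day;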


> Cloud SQL Postgres

Although it works and is solid, I wouldn’t say it’s fantastic. My impression is that Google makes limited investment in it to steer customers towards their own services such as Cloud Spanner.

- Major versions are 6 months late

- Small instances are horribly slow

- Integration with other services is poor (e.g. the Cloud Run integration doesn’t work with a database private IP, so you have to fall back to configuring a VPC and connecting the standard way)

- IAM authentication, although great when it works, is complicated and poorly documented

- The UI has very few features; for example it isn’t possible to query the database from it

- Although I’ve never seen any provider offer it, automatic upgrades between major versions would have been nice


¯\_(ツ)_/¯ I started with a small instance, moved up to the next size so I could get more connections and it ran flawlessly for over a year with 50-60 RPS from Cloud Functions, hitting it 24/7. Total price was under $40 a month. Zero regrets and would do it again in a heartbeat.


You can count to an arbitrary number (up to the recursion limit) by using tuples of length N, and go back and forth to number literals by using the type tuple['length'] (see the round trip after the example).

e.g.:

  type zero = []

  type Inc<N extends string[]> = [...N, '']
  type Dec<N extends string[]> = N extends [...infer T, ''] ? T : never

  type one = Inc<zero>   // ['']
  type two = Inc<one>    // ['', '']
  type oneb = Dec<two>   // ['']
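
The round trip to number literals then looks like this (a sketch; ToNumber and ToTuple are names I made up):

  // tuple -> number literal: read off the length
  type ToNumber<N extends string[]> = N['length']

  // number literal -> tuple: grow an accumulator until its length matches
  type ToTuple<N extends number, Acc extends string[] = []> =
    Acc['length'] extends N ? Acc : ToTuple<N, [...Acc, '']>

  type three = ToTuple<3>        // ['', '', '']
  type threeN = ToNumber<three>  // 3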


A tuple does have an upper limit of 10 000 elements, which means that with this approach we can count to 10 000 at most.

Another approach I tried is to do arithmetic on decimal digits directly, storing the digits in a tuple, but the code is not as elegant as the unary version:

https://github.com/dqbd/ts-math-evaluate/blob/main/src/math/...


> At places with basically no platform team, no advanced cloud setup etc, I as a dev could understand everything, and deploying mostly meant getting my jar file or whatever running on some ec2 instance.

With an EC2 instance, how do you, for example, update the Java version? Store the database password? Add URLs the service is served at? If it’s done manually, how do you add a second instance or upgrade the OS?

Though I agree the infra setups are usually overly complicated, and using a “high-level” service such as Heroku or one of its competitors for as long as possible, and even longer, is usually better, especially for velocity.


You stop your service, run apt-get to upgrade Java, and then start it again? New URLs? Update your nginx config file and restart nginx. Second instance? Dunno, provision a VM, ssh into it, FTP the jar over and stick a load-balancer in front of the two. When you get to 3 instances, we can maybe talk about a shell script to automate it. Heck, before we do that, we can just flash an image of the VM and ask EC2 to start another one up.

Literally 100's of ways to do it.

All this IaC and YAML config and K8s is exactly like DI and IoC. You get sold on "simple", you start implementing it, and every single hurdle only has one answer: add more of it, or add more of this ecosystem into your stack (the one you just wanted to dip your toes into).

Before you know it, everything is taken over and your whole stack is now complicated, run by 50 different JSON/YAML configs, and you now need tooling and templating to get it all working or to make one tiny change.


> Literally 100's of ways to do it.

And if you have 3 services with 3 different people you'll have 3 different ways of doing it in your team. Suddenly you need 15 different tools at the right versions with the right configs to update a URL.

> Before you know it, everything is taken over and your whole stack is now complicated, run by 50 different JSON/YAML configs, and you now need tooling and templating to get it all working or to make one tiny change.

I'm a developer as opposed to an "ops" person, but in my career I've had far more issues with "well, the machine for X has a very specific version of Y installed on it; we've tried upgrading it before but we had to manually roll it back" than I have had with this. Those configs exist _somewhere_ if you're using AWS or something similar. If you want to avoid the complexity, use IaC (Terraform) and simple managed services (DigitalOcean is the sweet spot for me).


If you don't have the problem it solves, don't use it. It's for when you need clusters of services that scale up and down quickly. It's not the best way to deploy one server; it's the best way to deploy 10k servers, turn them off, and deploy them again. And that's not even mentioning monitoring etc.


    -- expand the jsonb array into one row per element,
    -- then aggregate the rows back into a text[]
    select array_agg(e)
    from jsonb_array_elements_text('["a","b"]'::jsonb) e;

?


The biggest advantage of SQL is that it's so common that, if you deal with data a lot, you tend to know it well enough. Sure, there are small differences between databases, but joins/grouping/window functions tend to work similarly enough (see the example below).

On the other hand, when I have to do a somewhat complex query in Elasticsearch, MongoDB, GORM, or the Django ORM, I have to check in the docs each time how it's done.
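
For instance, a per-user running total is written essentially the same way in Postgres, MySQL 8+, SQL Server, BigQuery... (hypothetical payments table):

  SELECT user_id,
         amount,
         sum(amount) OVER (PARTITION BY user_id ORDER BY ts) AS running_total
  FROM payments;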


In most tooling, you can write SELECT FROM table, then go back to the select list and have autocomplete work.

The situation could certainly be better, but at least this works today.


> GraphQL doesn't allow for recursive types.

It does, this schema works:

  type Item {
    id: ID!
    children: [Item!]!
  }

  type Query {
    root: Item!
  }

GraphQL queries describe the shape of the response, so with this schema it's not possible to ask recursively for "the full tree up to an arbitrary depth". One way to solve this would be to add a "descendants" field that returns a list of all the children, grand-children...
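
In the schema language that could look like this (a sketch; the server would resolve the field by walking the tree):

  type Item {
    id: ID!
    children: [Item!]!
    descendants: [Item!]!  # flattened: children, grand-children, ...
  }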


It is not infinitely recursive. It supports nested structures, as you show, but only up to a predefined depth.

I.e.:

    # assumes a SubcategoryFields fragment defined elsewhere; the nesting
    # must be written out by hand, one level per supported depth
    fragment CategoriesRecursive on Category {
      subcategories {
        ...SubcategoryFields
        subcategories {
          ...SubcategoryFields
          subcategories {
            ...SubcategoryFields
          }
        }
      }
    }

So you have to write your queries with a maximum supported depth. They are not infinitely recursive, which is a limitation of the GraphQL query language.

