
I'm very happy with GraphQL. We use it mainly for APIs that are consumed by several clients (web app, mobile app, other services).

The main benefit to us is that it's a huge time saver. There is almost no similar or duplicated code across similar operations (for example search, list, and get). It's also very easy to write a generic API once, without thinking too much about how the clients will use it, and have it used in ways that weren't anticipated when it was written.

It's also pretty easy to have the API made of entities that are always the same, rather than having routes that return slightly different data.

Another benefit is that the default/basic tooling works well with no setup. The playground works out of the box, and it's easy to generate idiomatic clients in many languages, which is not really the case with OpenAPI or gRPC.

However, although I have no data to back this up, I feel that adoption is not that high, and more advanced or unusual tooling either doesn't exist, isn't very good, or isn't progressing.

Another problem is that developers don't seem to grasp the best practices intuitively, and the docs don't make it very clear what they are. Yet applying them is necessary to get a useful API rather than a slow, more complicated version of REST.
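The best practice I see missed most often is batching in resolvers. Below is a minimal stand-in for the pattern that DataLoader implements (not DataLoader's actual API; all names are illustrative): individual load() calls made during one microtask tick are collected and served by a single batched fetch.

```typescript
// Minimal sketch of the resolver-batching pattern (what DataLoader
// popularized): keys requested during one microtask tick are collected,
// then fetched together with a single batched call.
class BatchLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: (keys: K[]) => Promise<V[]>) {}

  load(key: K): Promise<V> {
    return new Promise(resolve => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    const values = await this.batchFn(batch.map(b => b.key));
    batch.forEach((b, i) => b.resolve(values[i]));
  }
}

// Three resolvers asking for users independently still cause ONE query:
let queries = 0;
const userLoader = new BatchLoader<number, string>(async ids => {
  queries++; // stand-in for a single `SELECT ... WHERE id IN (...)`
  return ids.map(id => `user-${id}`);
});

(async () => {
  const names = await Promise.all([1, 2, 3].map(id => userLoader.load(id)));
  console.log(names, queries); // ["user-1", "user-2", "user-3"] 1
})();
```

Without something like this, a query that selects a list of entities plus a nested field runs one sub-query per list item, which is exactly the "slow, more complicated version of REST" failure mode.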


> That’s so weird, I wonder how that works. If I’m the first to make a map or a street or any geographic object in France, it can’t be featured on any other map unless I so license it?

A GR isn't a geographic object, it is a sequence of trails that has been arbitrarily chosen by the author among all the possible sequences of trails.


Although the problem will be a lot less severe than with remote servers, this is still sub-optimal:

- the data passed from one query to the next still needs to move from the database process to the service process and back

- the queries will always be executed in the order they are in the code, denying the optimizer the opportunity to execute the full query in the best order
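The first point can be made concrete by counting round-trips with an in-memory stand-in for the database (all names here are hypothetical, for illustration only):

```typescript
// Toy data and a fake "database" call that counts round-trips.
type Post = { id: number; authorId: number };
type User = { id: number; name: string };

const users: User[] = [{ id: 1, name: "a" }, { id: 2, name: "b" }];
const posts: Post[] = [
  { id: 10, authorId: 1 }, { id: 11, authorId: 1 }, { id: 12, authorId: 2 },
];

let queryCount = 0;
function queryUsersByIds(ids: number[]): User[] {
  queryCount++; // each call is one hop to the database and back
  return users.filter(u => ids.includes(u.id));
}

// N+1 style: one query per post.
queryCount = 0;
const naive = posts.map(p => queryUsersByIds([p.authorId])[0]);
console.log(queryCount); // 3 round-trips for 3 posts

// Batched: one query for all ids (what SQL `IN (...)` or a join does).
queryCount = 0;
const ids = [...new Set(posts.map(p => p.authorId))];
const byId = new Map(queryUsersByIds(ids).map(u => [u.id, u]));
const batched = posts.map(p => byId.get(p.authorId)!);
console.log(queryCount); // 1 round-trip
```

The batched version also hands the database the whole question at once, which is what gives the optimizer room to pick the best execution order.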


With SQLite there is no "database process", unless you explicitly route all queries through a dedicated process, which isn't necessary with SQLite anyway.

That being said, the problem here is not whether N+1 is a problem or not, but rather if, given the immense amount of unnecessary complexity that using an ORM brings, it is appropriate to use an ORM.


There is still also non-zero overhead associated with making queries in general, in both the querying and query-answering process.

The ceiling of the range where you can get away with this without user-visible performance impact will be much higher, and the relative performance difference may be smaller, but fewer queries for the same data will still generally be better.

Even with an in-process DB, you're still essentially making a sort of context switch.


Piling on about overhead (and SQLite), many high-level languages take some hit for using an FFI. So you're still incentivized to avoid tons of SQLite calls.

https://github.com/dyu/ffi-overhead
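A back-of-the-envelope model shows why the number of calls dominates. The constants below are made up purely for illustration; the point is the shape of the arithmetic, not the absolute numbers:

```typescript
// Toy cost model: every call pays a fixed overhead (FFI/IPC/context
// switch), plus a small marginal cost per row returned. Numbers are
// assumptions for illustration, not measurements.
const PER_CALL_OVERHEAD_US = 2;  // fixed cost per call (assumed)
const PER_ROW_COST_US = 0.1;     // marginal cost per row (assumed)

function cost(calls: number, rowsPerCall: number): number {
  return calls * (PER_CALL_OVERHEAD_US + rowsPerCall * PER_ROW_COST_US);
}

// Fetching 1000 rows one at a time vs. in a single query:
console.log(cost(1000, 1)); // 2100 µs — the fixed overhead dominates
console.log(cost(1, 1000)); // 102 µs — the overhead is paid once
```

Whatever the real constants are for a given language's FFI, the many-small-calls case scales with the per-call overhead while the batched case pays it once.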


Sub-optimal in one regard, but if segmenting queries makes for simpler, easier-to-read and easier-to-debug code, then you're optimising dev time. Often this is the right tradeoff to make.


I don't know ActiveRecord either, but it appears so https://guides.rubyonrails.org/active_record_querying.html#j...


> If Kubernetes does indeed provide the best solution to provide scalability and availability, one can argue that this would result in a decreased demand for dev ops engineers, as they "would just have to use Kubernetes".

I'd say it would result in either:

- the same scalability and availability with fewer DevOps engineers

- better scalability and availability with a similar number or more DevOps engineers

In my experience, it's almost always the second case that happens. For example, a service would be moved from a few (virtualized or physical) servers that can only be scaled manually, to a k8s cluster with either autoscaling or at least scaling by changing a configuration file.


Right. Most companies aren't content to settle for doing the same thing they were doing but with fewer engineers when they realize they could be using those other engineers to automate even more things (via Kubernetes operators).


In the ecosystem I work in, it's normal to have automated tests, it's normal to have a main branch that builds, and it's normal to be able to develop a full feature locally. Plenty of other things that make coding and deployment easier, faster and more reliable, such as using containers, CI/CD, or error reporting are normal too.

However, as you have noticed, all of these things are far from the standard. No tests, code that barely works, devs that are barely able to code, etc. is the standard in many companies.

The people I know who are competent and motivated go down one of two paths:

- either work for a company that values and applies best practices

- or work for a company that doesn't, in a role such as architect, where they have a lot of freedom in what they do, and spend most of their time finding simple ways to make poorly written applications work, writing POCs, or starting new projects

To answer your questions:

> I wonder if I'm just holding my peers to standards that are too high? Is it too much to expect tests? Is it too much to expect to be able to test full stack locally?

All of the standards you mention are best practices and are uncontroversial in the sense that they require little engineering effort compared to the benefits they bring.

Whether it's too much to expect or not depends on the context: if management or the more senior developers don't value it, I'd say it's too much to expect.

However, if your colleagues aren't opposed to it, or it's a school project, I'd try to introduce improvements, starting with small things. For example, for the main branch that doesn't build, you can explain why it's a problem (e.g. your colleagues will pull main before starting to develop something, and it's not nice to leave the project in a state where they have to fix things first), and introduce a solution (e.g. a CI job or a pre-commit hook that checks that the project builds).

> Am I an arrogant or am I surrounded by incompetent people?

Some people value other things more than software development, don't mind repetitive work, are less driven, or even less intelligent, and it's perfectly OK for them to be this way. So I'd say you're being arrogant if you judge them too harshly or expect them to be different. However, that's not the important part (though it's still a life lesson that I learnt much too late).

The most important part is that you should surround yourself with competent people so you can have more satisfaction in your work and learn new things.


> I think it looks fine

_.chain and .value() could be removed

> works fine.

- it isn’t possible to tree-shake the package and only include the lodash functions that are used

- it isn’t possible to have non-lodash functions in the pipeline (e.g. date-fns)
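The alternative both points hint at is a plain function pipeline built from per-function imports (e.g. `import map from "lodash/map"` or anything from date-fns), so the bundler keeps only what's used and non-lodash functions slot right in. A dependency-free sketch, with a tiny `pipe` helper standing in for lodash's `flow`:

```typescript
// Minimal pipe helper: threads a value through a list of functions.
// Stand-in for lodash's `flow`/`pipe`; any function fits, regardless
// of which library (if any) it comes from.
const pipe = <T>(x: T, ...fns: Array<(v: any) => any>): any =>
  fns.reduce((acc, fn) => fn(acc), x as any);

const result = pipe(
  [1, 2, 3, 4],
  (xs: number[]) => xs.filter(n => n % 2 === 0), // could be lodash's filter
  (xs: number[]) => xs.map(n => n * 10),         // or date-fns, or your own fn
);
console.log(result); // [20, 40]
```

Because each step is just a function reference, a bundler can drop every helper the pipeline doesn't mention, which `_.chain` makes impossible (it hangs all methods off one wrapper object).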


There is no mess.

The 99% case is a pretty string. The 1% case is a string with escaped delimiters.


> Chalk maintainers basically discard the performance improvement as micro-benchmarking (i.e. "doesn't matter in the real world"). Chalk maintainers also say tree shaking doesn't matter in the real world.

Chalk’s purpose is to color things printed to the terminal. Unless its performance is atrocious, which I doubt, it doesn’t matter unless a ridiculous amount of data is printed.

> Chalk maintainers also say tree shaking doesn't matter in the real world

No, they say it doesn’t matter for chalk’s use case. Which is command-line tools that are almost never tree-shaken.


> Chalk’s purpose is to color things printed to the terminal. Unless its performance is atrocious, which I doubt, it doesn’t matter unless a ridiculous amount of data is printed.

Chalk is downloaded 15 million times per day.

We can probably multiply out how much time is wasted across 15 million runs per day. It matters collectively.

> No, they say it doesn’t matter for chalk’s use case. Which is command-line tools that are almost never tree-shaken.

Are you saying people don't bundle terminal apps when they're written in Node.js?

It matters in some cases, and it doesn't matter in some.

Chalk's maintainers try to brush off these improvements, which seems like bad faith, tbh.


I don't have a horse in this race, but I am a bit bothered by two arguments, because I see variations on them so often (so this is less of a reply to you personally).

> Chalk is downloaded 15 million times per day.

> We can probably multiply out how much time is wasted across 15 million runs per day. It matters collectively.

If I give you back 5 seconds each day, will it matter to you? Will you be able to enjoy or do something that you otherwise wouldn't? I doubt it. I am certain that is true for practically every person in those 15 million.

The cumulative loss of something can be huge and still not matter, because it's distributed to the point where it is barely a rounding error.

> Are you saying people don't bundle terminal apps when they're written in Node.js?

Details matter. Chalk is ~100K and nanocolors is 16K. Yes, 90K is a meaningless saving for a terminal app.


> The cumulative loss of something can be huge and still not matter, because it's distributed to the point where it is barely a rounding error.

Are you claiming it doesn't matter to anyone or just you?

> If I give you back 5 seconds each day, will it matter to you? Will you be able to enjoy or do something that you otherwise wouldn't?

I'd welcome the time back since I don't lose anything in return anyway.

So, a big YES here.

Generally I would count a faster library as a plus.

Working in a big company, when choosing between two open-source libraries, we need to list the pros and cons.

Among other things, being 4x smaller is definitely one of the plus considerations. I try not to exaggerate this; it's not a major plus, but a plus nonetheless.

Saying these don't matter is disingenuous.

To you, maybe, but not for most. If everything else were equal, would you still choose chalk despite it being 4x bigger and slower? Most will choose the smaller and faster library.


Yes, I am claiming that saving 5 seconds out of 86400 in a day does not matter to any healthy person.

As a web developer I generally prefer things being faster and/or smaller too, but I would not swap out a working, field-tested library for a negligible effect. The problem in the case being discussed is not which library to choose when none is in use yet, but whether it makes sense to replace an existing one to save 90K of disk space and maybe a few seconds per day. To me this looks like the very definition of bike-shedding, and yes, I would definitely not switch libraries if these are the only compelling reasons.

4x smaller than the other one doesn't really tell you anything without considering how big the whole app that uses the library is. On our fairly basic Nuxt project the node_modules directory takes 250M. Do you really think any of us should care about 90K?

And lastly, you don't know me. Save your "disingenuous" remarks for people you do.


I guess to lose 5 seconds it would have to print many, many "green success" or "red failure" messages. Meanwhile, a single slow network hop would eat those 5 seconds anyway.


> We probably can multiply how much time has wasted from 15 millions run per day. It matters collectively.

The benchmark measurements they're discussing are on the order of tens-of-millions of operations per second: https://github.com/ai/nanocolors#benchmarks

Are the contents of your terminal changing that frequently?


> The fact that GraphQL allows all these permutations in the queries is the root of the problem. It's not something which can be solved or optimized within GraphQL.

Common ways to solve that are to whitelist the allowed queries or to cache at the resolver level instead of the query level.
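The whitelisting approach is usually done with "persisted queries": clients send a short id (often a hash of the query) instead of arbitrary query text, and the server only executes queries it already knows. A dependency-free sketch, assuming a simple in-memory registry (names are illustrative, not any specific library's API):

```typescript
// Sketch of query whitelisting via persisted queries: the server maps a
// hash of each approved query to its text and rejects everything else.
import { createHash } from "node:crypto";

const allowed = new Map<string, string>();

// Done at build/deploy time: register an approved query, get its id.
function persist(query: string): string {
  const id = createHash("sha256").update(query).digest("hex");
  allowed.set(id, query);
  return id; // clients ship this id instead of raw query text
}

// Done at request time: only whitelisted ids resolve to a query.
function lookup(id: string): string {
  const query = allowed.get(id);
  if (!query) throw new Error("unknown query: not in the whitelist");
  return query;
}

const id = persist("{ user(id: 1) { name } }");
console.log(lookup(id)); // "{ user(id: 1) { name } }"
```

Since the set of executable queries is now finite and known in advance, each one can be analyzed, cached, or rate-limited like a classic REST endpoint, which removes the "arbitrary permutations" problem.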


I'd like to argue against that. Yes, whitelisting is a solution, but caching at the query level can be extremely efficient. I'm the founder of WunderGraph and we do it like this: we turn GraphQL operations into REST/JSON-RPC endpoints, allowing them to be cached by CDNs, browsers, etc... https://wundergraph.com/docs/overview/features/edge_caching

