Personally, I would say that if you are doing DOM updates via AJAX calls that return HTML, HTMX is just the correct way to do it. (Unless you're using some other solution already for other reasons and it also includes that functionality.)
Sure, you could do the basics of that workflow with twenty lines of your own JS, and save a dependency. But that's the kind of thing that generally scales very poorly, because unless you're very disciplined it quickly becomes a mass of spaghetti. The virtue of HTMX is more in how it channels and limits your code than in the new capabilities it grants you (which were all in common use as of 2005 or whatever).
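For illustration, a minimal sketch of the pattern; the endpoint and ids here are made up:

    <!-- assumes htmx is loaded, e.g. via
         <script src="https://unpkg.com/htmx.org"></script> -->
    <!-- clicking the button GETs an HTML fragment from the server and
         swaps it into #results; no hand-written fetch/DOM code needed -->
    <button hx-get="/search?q=htmx" hx-target="#results" hx-swap="innerHTML">
      Search
    </button>
    <div id="results"></div>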
Why is it better to render JSON on the server, read that JSON in a separate client app that you also have to write, and then do a bunch of manual DOM calls in Javascript, rather than rendering HTML on the server and letting the browser's blazing-fast compiled HTML parser turn it into DOM for you?
Because you want to offload all the work of rendering onto the client. And in fact you'd often want the rendering to be sensitive to the browser context. Different types of devices may require a different UI or arrangement of content. All that logic would be client logic. Why would you treat a browser differently from any other client?
The presentation aspect is simply not the concern of the server in the first place. Data is not presentation, and servers should be able to support more than one client. Data should not be encapsulated in several layers of presentation. Having to scrape data from a web page is ugly and dumb.
Web technology is heavily invested in the goal of rendering content on the server and shipping markup and styling. The client browser still has to take the markup and render it properly, but the whole point of web architecture to begin with was to render data into markup on the server so the browser can be a pretty thin, standards-based rendering engine.
There are absolutely situations where web apps need to render most or all of the data into UI in the client, but that should be the outlier rather than the norm. Why do you assume that there are no situations when rendering HTML from the server is a valid, or even ideal, approach?
Even with native applications, the UI is a combination of content rendered on and off the client. An iOS app will end up with screens that are designed and rendered by the developer: the navigation menu and other chrome elements are likely rendered in the build, while some screens in the app fetch data and render it to the screen. Again, it's very much the outlier to have a native app that fetches all of its data and renders all of its content in the client.
The cost of launch is small-ish compared to the total program cost, but the limitations on launch condition the engineering requirements in ways that inflate the engineering costs. JWST had to be built as an insane on-orbit autonomous origami project because its mirror couldn't fit in a fairing unfolded. Repeat for ten thousand other decisions that are made in order to optimize weight or volume.
If you can launch a hundred tons to orbit for $5M, you can just make a huge dumb cheap telescope and throw a dozen of them up there. Quantity covers a multitude of sins.
> The thing is there are just vanishingly few places where you only need a "sparkling of interactivity on top".
I would say it's precisely the opposite.
Say 97% of work done by web pages and web apps in practice boils down to "render some data available on the server as HTML, then show it to the user". For these cases, putting what amounts to an entire GUI framework written in Javascript on the frontend is massive, bandwidth-sucking, performance-killing overkill.
There are absolutely exceptions. Google Sheets exists. But your project is probably not Google Sheets.
What is this “entire GUI framework written in JavaScript”? React isn’t a GUI framework, even Angular despite being quite batteries included is not that.
I swear the people writing these comments aren’t working in web development?
I assume "jQuery" here is being used as a metonym for the old frontend style where you would use the jQuery AJAX and DOM functions to query HTML fragments and swap them in. This is only really related to jQuery in that it used jQuery utility functions; under modern conditions you would just use fetch() and querySelector() etc to do the same thing.
It's true that the core concept of HTMX is very simple, and you could probably reimplement any given particular use case in a few lines. However, it is in fact a big advance over the manual HTML-fragment style, precisely because it abstracts over the particulars. Standardizing on a particular, good design is an important benefit even if the functionality of the code is easy.
Yes - I can see why that particular comparison makes sense. However, I'm not sure that was the standard (or even most common) way of using jQuery, whereas with HTMX it's pretty much the only option you have.
Alpine is primarily designed to be reused via server templating. You use a single template per component to do the in-HTML side, using the server template's facilities to handle variations as necessary. Then you can factor out complex common behaviors into Javascript using Alpine.data.
It definitely does have a maximum size of project it's suitable for. In particular, it's thoroughly individual-component-based; changing anything outside the component requires tacking on non-native hacks, and doing a full interactive app with it would be a painful exercise. But for adding simple interactivity to a primarily server-rendered web page, I've found it to be quite useful.
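As a rough sketch of that division of labour (the component name and markup are invented):

    <!-- assumes Alpine is loaded, e.g. via
         <script defer src="https://unpkg.com/alpinejs"></script> -->
    <!-- the server template stamps out one instance of this per item... -->
    <div x-data="toggler">
      <button @click="toggle">Details</button>
      <div x-show="open">...server-rendered details here...</div>
    </div>

    <script>
      // ...while the shared behavior is factored out once via Alpine.data
      document.addEventListener('alpine:init', () => {
        Alpine.data('toggler', () => ({
          open: false,
          toggle() { this.open = !this.open },
        }))
      })
    </script>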
I should have mentioned that I made a generic Blade component from it (it was a Laravel project).
Copying still happened, take that as you will - in this case that was the problem :) It might be that my implementation was not generic enough, but tbh the colleague was not especially proficient at JS.
We had a productive conversation about this, but this particular project ultimately was lost by my former employer.
The reason was not this autocomplete component though :)
For a topic that you never come into personal contact with, your entire knowledge of it throughout your life will be media-mediated. How, then, would you ever learn what you had been misinformed about?
How often do you learn new things later that, even as transmitted through untrustworthy media, allow for clear falsification of the former media presentation?
This isn't zero but it's not very common either. Usually, the domain in question is sufficiently subtle that you can't make a rigorous prediction from an untrustworthy media presentation at all; thus, the media accounts are effectively unfalsifiable (unless you go out and seek personal experience).
> Usually, the domain in question is sufficiently subtle that you can't make a rigorous prediction
Is it really a big deal then? Experts really care about the details so they will notice inaccuracies, but does that mean that they really matter to the point that the entire notion of journalism and media itself should be discredited?
I can't speak to any kind of general principle here, but viewing any standard modern web page (e.g. Twitter, Discord) in a web browser reliably takes more CPU and memory than running a late-game Factorio save.
I will allow that it is probably theoretically possible to do client-side rendering in a CPU-efficient way, but it sure isn't the standard.
What exactly is the resource-intensive request here? Loading an E-mail, or list of E-mails? I don't see why that should be any more resource-intensive than any other CRUD app.
A list of emails. That's essentially a database query that takes X items and sorts them by the date field, most commonly, except that the average user can have thousands, or even tens or hundreds of thousands, of items unique to them in that dataset that need to be sorted and returned.
Sure, gmail optimizes for this heavily so it's fast, but it's still one of the most intensive things you can do for an app like that, so reducing the number of times you need to do it is a huge win for any webmail. If you've ever used a webmail client that's essentially just an IMAP client for a regular IMAP account, you'll notice that opening a large inbox or folder is WAY slower than viewing an individual message. The reasons are obvious if you think of a mailbox as a database of email (which it is) and consider the operations that need to happen on that database.
If clicking on an individual message is a new page, that's fine, but if going back to the main view is another full IMAP inbox query, that's incredibly resource-intensive compared to having it cached in the client already (even if the second request is cached on the server, it's still far more wasteful than not having to request it again).
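A minimal sketch of that client-side caching; the names (loadInbox, /inbox) are invented for illustration:

    // keep the inbox listing in memory so that navigating back from a
    // message doesn't trigger another full mailbox query on the server
    let inboxCache = null;

    async function loadInbox({ force = false } = {}) {
      if (inboxCache && !force) return inboxCache; // back-navigation: no server hit
      const res = await fetch('/inbox?limit=50');  // the expensive sorted query
      inboxCache = await res.json();
      return inboxCache;
    }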