The author of the article also wrote a CLI parser library for TypeScript, called Optique. I really appreciate them including a "When Optique makes sense" section in the docs. It would be great if more projects did that.
In this case, both 'fair' and 'fare' are words in English, which shows that spell checking needs to know a lot about grammar and context to work in general. Basically you need an LLM, or if not a 'large' language model, then perhaps at least a small language model.
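To make that concrete, here's a toy sketch of the "small language model" idea (the corpus and all names are mine, not any real checker's internals): even plain bigram counts can tell that 'fare' fits after 'bus' while 'fair' doesn't, which a dictionary-only check can't.

```python
from collections import Counter

# Toy corpus; a real checker would train on a large text collection.
corpus = (
    "the bus fare was high . that is a fair price . "
    "pay the fare before boarding . a fair trial for all ."
).split()

# Count how often each word pair occurs in sequence.
bigrams = Counter(zip(corpus, corpus[1:]))

def score(prev_word: str, candidate: str) -> int:
    """How often `candidate` follows `prev_word` in the corpus."""
    return bigrams[(prev_word, candidate)]

# "the bus ___": both candidates are real words, but only one fits.
for cand in ("fair", "fare"):
    print(cand, score("bus", cand))  # fair 0, fare 1
```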
I wonder how it does work. I remember MS Word having a fairly decent grammar checker when I was using it in school, which predated LLMs by many years!
I suspect an LLM wouldn’t be the optimal choice.
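For what it's worth, pre-LLM checkers got surprisingly far with hand-written rules over known confusable pairs. A hedged sketch of that style (purely illustrative, not how Word actually implemented it):

```python
import re

# Each rule pairs a suspicious pattern with a suggestion.
RULES = [
    (re.compile(r"\b(?:bus|train|taxi) fair\b", re.I),
     "did you mean 'fare' (the price of a ride)?"),
    (re.compile(r"\ba fare (?:trial|price|share)\b", re.I),
     "did you mean 'fair' (just, reasonable)?"),
]

def check(text: str) -> list[str]:
    """Return a suggestion for every rule that matches the text."""
    return [msg for pattern, msg in RULES if pattern.search(text)]

print(check("The bus fair went up again."))
# ["did you mean 'fare' (the price of a ride)?"]
```

Scale that up to a few thousand rules plus a part-of-speech tagger and you're in the ballpark of 1990s-era grammar checkers.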
> Remember Semantic Web? The web was supposed to evolve into semantically structured, linked, machine-readable data that would enable amazing opportunities. That never happened.
I think the lesson to be learned is in answering the question "Why didn't the semantic web happen?"
I have literally been doing web development since there was a web, and the companies I developed for are openly hostile to the idea of putting their valuable, or perceived valuable, information online in a format that could be easily scraped. Information doesn't want to be free, it wants to be paid for. Unless the information shared pulls visitors to the site it doesn't need to be public.
> Information doesn't want to be free, it wants to be paid for. Unless the information shared pulls visitors to the site it doesn't need to be public.
That's a cultural and societal problem, not a technology problem. The motivations (profit) are wrong, and don't lead to true innovations, only to financialization.
So long as people need to pay to eat, information will continue to want to be paid for, and our motivations will continue to be misaligned with true innovations, especially if said innovations would make life easier but wouldn't result in profit.
I'd argue that resource availability is already high enough to alleviate scarcity for most people, and that most scarcity today is artificially generated, because of profit.
We won't achieve post-scarcity, even with widespread automation (if AI ever brings that to fruition), because we haven't yet fixed the benefits that wealth brings, so the motivation to work toward a post-scarcity society just doesn't exist.
I've encountered a similar issue in academia: PIs don't want to make their data available to be scraped (or at least not easily) because the amount of grant funding is limited, and a rival who has scraped one's data could use it to bolster their application and get the grant money instead.
To a degree re ads on pages, but why didn't big business end up publishing all of their products in JSON-LD or similar?
A lot did, to get aggregated, but not all.
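For reference, "publishing products in JSON-LD" means embedding a schema.org block like this in the page (the product values here are invented; the vocabulary is real):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "sku": "EW-1234",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
```

Search engines reward exactly this markup with rich results, which is a big part of why "a lot did."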
But also because companies that produce web content wanted it to be seen by humans who would look at ads, not consumed by bots and synthesized with other info into a product owned by some other firm.
And yet today most websites are being scraped by LLM bots, which don't look at ads and which synthesize the content with other info into a product owned by some other firm.
Optimistically, the semantic web is going to happen. Just that instead of the original plan of website owners willingly making data machine-readable, LLMs will be the ones turning non-machine-readable data machine-readable (which can then be processed by user agents), even if the website owner prefers you looked at the ads instead.
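A hedged sketch of that inversion, with `complete` standing in for whichever LLM API the agent would actually call (everything below is hypothetical and stubbed so it runs standalone):

```python
import json

def complete(prompt: str) -> str:
    """Stand-in for a real LLM call; canned so the sketch runs as-is."""
    return '{"@type": "Product", "name": "Example Widget", "price": "19.99"}'

def extract_structured(html: str) -> dict:
    """Recover machine-readable data the publisher never provided."""
    prompt = ("Extract the product on this page as JSON with @type, "
              "name, and price fields. Page HTML:\n" + html)
    return json.loads(complete(prompt))

page = "<h1>Example Widget</h1><span>$19.99</span><div id='ad'>buy now!</div>"
print(extract_structured(page))
# {'@type': 'Product', 'name': 'Example Widget', 'price': '19.99'}
```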
The semantic web was theoretically great for data scientists and metadata scrapers, but offered close to zero value for ordinary humans, both on the publishing side and the consumption side. Also, nobody did the hard work of defining all of the categories and protocols in a way that was actually usable.
The whole concept was too high-minded and they never got the implementation details down. Even if they had, it would have been horrendously complex and close to impossible to manage. Asking every single publisher to neatly categorize their data into this necessarily enormous scheme would have resulted in countless errors all over the web that would have seriously undercut the utility of the project anyway. Ultimately the semantic web doesn't scale very well. It failed for the same reason command economies fail: it's too overwhelming for the people in control to manage, and it drowns in its own bureaucracy.
Semantic web never existed. There was Google and Google had an API to get breadcrumbs to show on search results. And that's what people called "semantic web." A few years later they gave up and made everything look like a breadcrumb anyway. And that sums up the whole semantic web experience.
I've navigated burnout and some slightly toxic environments.
I bailed on a job in 2022 when I was tired of it and the hiring market was still good. The hiring market turned and it took me a long time to get something new once I was ready. Fortunately, I had a significant cushion.
Prerequisite to taking up any of my suggestions that follow:
How long do you think it would take you to get a new job? Triple that estimate. Can you go that long without pay?
If the answer is no, stop reading here.
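Putting rough numbers on that test (all figures invented for illustration):

```python
estimated_search_months = 4                       # your gut estimate
planning_horizon = estimated_search_months * 3    # triple it, per above

savings = 45_000          # cushion, in dollars
monthly_burn = 5_000      # rent, food, insurance, ...
runway_months = savings / monthly_burn

can_go_that_long = runway_months >= planning_horizon
print(planning_horizon, runway_months, can_go_that_long)  # 12 9.0 False
# False means the answer is "no": stop reading here.
```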
Regarding prepping / searching while working - have you considered dialing back your hours to give yourself more energy for the search? If they don't like it... well, if you are still comfortable with your financial cushion given my first question, then at least you will have made some progress toward your next thing while still getting paid.
Another way to assess how you feel about staying: Take at least a week and a half off. Several of my job changes have occurred after some time off. It made the suckitude of the current job so obvious that I quit waffling about it and started looking.
> it’s not all that much effort after the first two or three. There’s no reason to think the language you start with should be one that you could get paid to use.
Definitely!
This inspired me to list out languages I've learned to at least some minimal level and written programs in. Here they are, with the contexts in which I used them:
BASIC - middle school
Fortran - high school
C++ - self-taught & later some work
Pascal - university
Motorola 68k assembly - university
Miranda - university (Programming Languages class only)
Ada - university (Programming Languages class only)
Lisp - university & work (CLOS)
sh - work
ColdFusion - work
bash - work and personal
Java - work
Atmel microcontroller assembly - hobby
C - hobby (to replace Atmel assembly)
Ruby - work and hobby
Scala - work
Elixir - work
Solidity - work
TypeScript - work and hobby
Elm - work
Clojure - work
JavaScript - work
Python - hobby (with a tiny smattering of work usage over the years)
I think the author's ideas are likely too complex for a wide audience, but they could be a game changer for those who can handle that kind of complexity.
TLDR: "According to a preprint posted to the bioRxiv server this month, nearly all the dead colonies tested positive for bee viruses spread by parasitic mites. Alarmingly, every single one of the mites the researchers screened was resistant to amitraz, the only viable mite-specific pesticide—or miticide—of its kind left in humans’ arsenal."
I’m an agronomist, and ~ten years ago I attended a yearly industry meeting where there are various presentations that we sit in on to gain “credits” to maintain various state licenses used to legally recommend and/or apply fertilizers and pesticides.
The one presentation I recall from that far back was from a bee researcher who basically said exactly what you posted: whenever his team investigated colony collapses from varroa mites (as opposed to poor treatment from being moved to California), they’d find markers for multiple previously unknown viruses. Honeybees were basically having to contend with previously isolated viruses they never evolved to resist, all at once.
I also remember the Xerces Society trying to protest and interrupt his talk because they wanted to blame (and therefore ban) pesticides only, specifically neonicotinoids. I generally really appreciate the work they do, but in this case they came across as dogmatic instead of helpful.
What gets less attention, though, are the many dozens of native pollinator bees that also were/are hard hit and driven to full or near extinction. These species also have to contend with food source loss: they are very selective about the flowers they will pollinate because they require a certain nutritional profile. I can’t stop viruses or varroa mites, but I can at least recommend planting wildflower mixes native to your local area.
edit: Rediscovered some old blog posts I came across while looking into the issue at the time and found enlightening. They’re a great example of the observation work that makes a good agronomist. Bear in mind these are from 2012, so no idea if they’ve updated their thoughts since.