Hacker News | unshavedyak's comments

My problem with those abstractions is:

1. The error messages for them, I find, are awful. They really need to take inspiration from Rust and help users know exactly where the API error is happening relative to their code. I've spent quite a lot of time toggling things on and off trying to find which part of my code, or which package, is causing a specific failure.

2. I've not found the APIs the abstractions provide to be stable in the way my Rust crates/etc. are. Something can randomly break between updates and I'm not sure what caused it. It takes me quite a while of digging through source to find the root cause, and digging through Nix source I find extra painful.

All around, I hate this side of Nix. A lot. However, the rest of it is enough of "another level" that I stay with it and don't switch away. I use it for my Linux machines and my Mac laptop. It makes so many things so easy the majority of the time.. but when something goes wrong, it's super painful.


As someone who up until recently would have agreed with you, both of these are fundamentally familiarity issues. Nix's error messages really aren't that bad, but chances are there's exactly one line out of 200 that tells you what's wrong. Learning to read stack traces for a new language is part of learning that language.

I've not had issues with Nix APIs, at least not Nixpkgs or the language builtins. When something does break for me, it's usually some random JavaScript package that had an external dependency change. Nixpkgs is pretty well organized, and I find navigating it not that hard once you read the packaging guidelines. find / fzf / ripgrep / etc. are all great at this, as file and folder names are critical to the organization of Nixpkgs.

The big turning point for me was trying to build and package a non-trivial application and build a NixOS module for it.


Package errors aren't that bad, but NixOS config errors are a nightmare, because the culprit is often a value computed from a value computed from what you wrote, triggering a missing field, type mismatch, or infinite recursion deep in very generic framework code.

Agreed on 1; that's part of the cost you pay when choosing Nix.

Disagree somewhat on 2. Yes, there are frequently small breaking changes, but the massive upside of Nix is that I can just roll back, or pin a specific package at a specific version, and move on.


> There probably needs to be some settled discussion on what constitutes "vibe coding." I interpret this term as "I input text into $AI_MODEL, I look at the app to see my change was implemented. I iterate via text prompts alone, rarely or never looking at the code generated."

Agreed. I've seen some folks say that it requires absolute ignorance of the generated code to be considered "vibe coded", though I don't agree with that.

For me it's more nuanced. How "vibed" something is relates to how little of it you reviewed. Considering LLMs can do some crazy things, even a few ignored LOC can leave a feature feeling pretty "vibe coded", despite being mostly reviewed outside of those ignored lines.


Maybe read the original definition: https://x.com/karpathy/status/1886192184808149383

Or here: https://en.wikipedia.org/wiki/Vibe_coding

Not looking at the code at all by default is essential to the term.


I agree; I'm saying any code it produces counts. E.g. if you ignore 95% of the LLM's PR, are you vibe coding? Some would say no, because you read 5% of the PR. I would say yes, you are vibe coding.

I.e. you could say you vibed 95% of the PR, and I'd agree with that, but are you vibe coding then? You looked at 5% of the code, so you're not ignoring all of it.

Yet in the spirit of the phrase, it seems silly to say someone is not vibe coding despite ignoring almost all of the code generated.


The question is, to what purpose are you looking at those 5%? I reckon it’s because you don’t really trust the vibes. In that sense, you’re not vibe-coding.

If that's the case, I feel like we need some other term, since you're saying that someone who ignores 95% of the written code is not vibe coding. That tells me we need a term that describes the "I ignored almost all the code" type of "coding" that LLMs provide.

I don't care about the 5% difference. I care about the bulk: the amount of bugs and poor logic that can slip in, etc. I have no attachment to the term "vibe coded", but it's useless to me if it doesn't describe this scenario.


Wish I could remember my issues with jj. I tried it, and I wanted to stick with it because I loved that I could reorder commits while deferring the actual conflicts.. but something eventually prevented me from switching. Searching my Slack history from when I talked about this with a coworker who actually used jj:

1. I had quite a bit of trouble figuring out a workflow for branches. Since my company's unit of work is the branch, with specifically named branches, my `jj ls` output was confusing as hell.

`jj st` might have helped a bit, but there were scenarios where creating a commit would abandon the branch... if I'm reading my post history correctly. My coworker, who was more familiar, explained my jj problems away with "definitely pre-release software", so at the time neither of us was aware of a workflow that treated branches as more core.

Fwiw, I don't even remember where branches came into play in the jj workflow.. but I was not happy with the UX around them.

2. IIRC I didn't like how it auto-stashed/committed things. I found random `dbg!` statements could slip in more easily, and I had to be on guard about what was committed, since everything was committed automatically. My normal workflow has me purposefully staging chunks when I'm satisfied with them, and I use that as the visual metric. That felt less solid with jj.

Please take this with a huge grain of salt; this is a 10-month-old memory I scavenged from Slack history. Plus, as my coworker was saying, jj was changing a lot.. so maybe my issues are less relevant now? Or just flat-out wrong. Nonetheless, I bounced off of jj despite wanting to stick with it.


"creating a commit would abandon the branch" is certainly something lost in translation. There are other reasons you may have not liked the UX, largely that if you create branches and then add a bunch of commits after it, the branch head doesn't automatically move by default. There is a config setting you can change if you prefer that, or the `jj tug` alias some people set up.

Auto-commit is still a thing, but you can regain the stuff you like with a workflow change, this is called the "squash workflow" and is very popular: https://steveklabnik.github.io/jujutsu-tutorial/real-world-w...


I more or less use the method described [here](https://steveklabnik.github.io/jujutsu-tutorial/advanced/sim...) for branches. One thing I do change is that I set the bookmark to an empty commit that serves as the head of each branch. When I am satisfied with a commit on head and want to move it to a branch, I just `jj rebase -r @ -B branch`. When I want to create a new branch, it's just `jj new -A main -B head` and `jj bookmark set branch_name -r @`.


_Every time I see one of these nifty jj tricks or workarounds I find myself wondering, “why not just use git?”_


How would you do this in stock git?


I might just not be following correctly but committing in git just carries the branch along for the ride, so there's nothing to do in git for this scenario.

IIRC forcing some specific branch name to point to my changes with `jj` was non-obvious and what made me give up and go back to git when I tried it last year.


You are mistaken. In the workflow I described, I am making changes on top of all branches at once and then deciding which branch to send the new commit to. This allows me to make changes simultaneously to both branches without friction.


Rust is hard in that it gives you a ton of rope to hang yourself with, and some people are just hell bent on hanging themselves.

I find Rust quite easy most of the time. I enjoy the hell out of it and generally write Rust not too differently than I'd have written my Go programs (I use fewer channels in Rust, though). But I do think my comment about rope is true. Some people just can't seem to help themselves.


That seems like an odd characterization of Rust. The borrow checker and all the other type-safety features, as well as features like Send/Sync, are all about not giving you rope to hang yourself with.


The rope in my example is complexity. I.e. choosing to use all the features when you don't need to, or perhaps don't even want to. E.g. sometimes a simple clone is fine. Sometimes you don't need to opt for every generic and performance-minded feature Rust offers, and they are numerous.

Though I think my statement is missing something. I moved from Go to Rust because I found that Rust gave me better tooling to encapsulate and reuse logic. E.g. iterators are more complex under the hood, but my observed complexity was lower in Rust than in Go by way of better, more generalized code reuse. So in this example I actually found Go to be more complex.

So maybe a more elaborate phrase would be something like "Rust gives you more visible rope to hang yourself with".. but that doesn't sound as nice. I still like my original phrase, heh.
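A minimal sketch of the "sometimes a simple clone is fine" point, with made-up `EventRef`/`EventOwned` types: the borrowed version is zero-copy but drags a lifetime parameter into every type that holds it, while the owning version pays for one clone and declines that complexity entirely.

```rust
// Zero-copy, but now every holder of EventRef carries a lifetime parameter.
struct EventRef<'a> {
    name: &'a str,
}

// One clone at construction, and no lifetimes anywhere downstream.
#[derive(Clone)]
struct EventOwned {
    name: String,
}

fn main() {
    let raw = String::from("deploy");
    let borrowed = EventRef { name: &raw };
    let owned = EventOwned { name: raw.clone() };
    assert_eq!(borrowed.name, owned.name);
}
```

Neither choice is wrong; the "visible rope" is that Rust lets you reach for the lifetime-heavy version even when the clone would have been fine.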


I would love to see a language that is to C what Rust is to C++. Something a more average human brain like mine can understand. Keep the no-gc memory safety things, but simplify everything else a thousand times.

Not saying that should replace Rust. Both could exist side by side like C and C++.


I'm curious about what you'd want simplified. Remove traits? What other things are there to even simplify if you're going to keep the borrow checker?


I'm the last person to be able to answer that. There would be Chesterton's fences everywhere for one thing.

The better question is what to add to something like C: the bare minimum to make it memory safe. Then stop there.


I feel like it's the opposite: Go gives you a ton of rope to hang yourself with, and hopefully you will notice that you did. Error handling is essentially optional, there are no sum types and no exhaustiveness checks, the stdlib does things like assume filepaths are valid strings, if you forget to assign something it just becomes zero regardless of whether that's semantically reasonable for your program, there's no nullability checking for pointers, etc.

Rust, OTOH, is obsessively precise about enforcing these sorts of things.

Of course Rust has a lot of features and compiles slower.


> error handling is essentially optional

Theoretically optional, maybe.

> the stdlib does things like assume filepaths are valid strings

A Go string is just an array of bytes.

The rest is true enough, but Rust doesn't offer just the bare minimum features to cover those weaknesses, it offers 10x the complexity. Is that worth it?


What do people generally write in Rust? I've tried it a couple of times but I keep running up against the "immutable variable" problem, and I don't really understand why they're a thing.


> but I keep running up against the "immutable variable" problem

...Is that not what mut is for? I'm a bit confused about what you're talking about here.
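For reference, a minimal sketch of what `mut` does (the `bump` function is made up): bindings are immutable by default, and `mut` is the opt-in, in both `let` bindings and parameter positions.

```rust
// `mut` on a by-value parameter gives the function its own mutable copy.
fn bump(mut n: i32) -> i32 {
    n += 1;
    n
}

fn main() {
    let x = 1; // immutable by default; `x += 1` here would not compile
    let mut y = x; // opt in to mutation with `mut`
    y += 1;
    assert_eq!(y, 2);
    assert_eq!(bump(y), 3);
}
```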


I don't really get immutable variables, or why you'd want to make copies of things so now you've got an updated variable and an out-of-date variable. Isn't that just asking for bugs?


As with many things, it comes down to tradeoffs. Immutable variables have one set of characteristics/benefits/drawbacks, and mutable variables have another. Different people will prefer one over the other, different scenarios will favor one over the other, and that's expected.

That being said, off the top of my head I think immutability is typically seen to have two primary benefits:

- No "spooky action at a distance" is probably the biggest draw. Immutability means no surprises due to something else you didn't expect mutating something out from under you. This is particularly relevant in larger codebases/teams and when sharing stuff in concurrent/parallel code.

- Potential performance benefits. Immutable objects can be shared freely. Safe subviews are cheap to make. You can skip making defensive copies. There are some interesting data structures which rely on their elements being immutable (e.g., persistent data structures). Lazy evaluation is more feasible. So on and so forth.

Rust is far from the first language to encourage immutability to the extent it does: making objects immutable has been a recommendation in Java for over two decades at this point, for example, to say nothing of its immutable strings from the start, and functional programming languages have been working with immutability even longer. Rust also has one nice feature that helps address this concern:

> or why you'd want to make copies of things so now you've got an updated variable and an out-of-date variable

The best way to avoid this in Rust (and other languages with similarly capable type systems) is to take advantage of Rust's move semantics to make the old value inaccessible after it's consumed. This completely eliminates the possibility that old values are accidentally used. Lints that catch unused values provide additional guardrails.

Obviously this isn't a universally applicable technique, but it's a nice tool in the toolbox.

In the end, though, it's a tradeoff, as I said. It's still possible to accidentally use old values, but the Rust devs (and the community in general, I think) seem to have concluded that the benefits outweigh the drawbacks, especially since immutability is just a default rather than a hard rule.
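The move-semantics point above can be sketched with a made-up `Config` type: a consuming method takes `self` by value, so after the update the pre-update value no longer exists to be used by mistake.

```rust
#[derive(Debug, PartialEq)]
struct Config {
    retries: u32,
    verbose: bool,
}

impl Config {
    // Takes `self` by value: the old Config is consumed, not copied.
    fn with_retries(self, retries: u32) -> Config {
        Config { retries, ..self }
    }
}

fn main() {
    let cfg = Config { retries: 1, verbose: false };
    let cfg = cfg.with_retries(3); // the original `cfg` is moved and gone
    // Referring to the pre-update value here would be a compile error, not a bug.
    assert_eq!(cfg, Config { retries: 3, verbose: false });
}
```

There is no "out-of-date variable" left over: the borrow checker rejects any use of the moved-from binding at compile time.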


I think their skills have the ability to dynamically pull in more data, but so far I've not tested it too much, since it seems more tailored towards specific actions. E.g. converting a PDF might translate nicely to the agent pulling in the skill doc, but I'm not sure it will translate well to pulling in some rust_testing_patterns.md file when it writes Rust tests.

E.g. I toyed with the idea of thinning out various CLAUDE.md files in favor of my targeted skill.md files. In doing so, my hope was to have less irrelevant data in context.

However, the more I thought through this, the more I realized the agent is doing "everything" I wanted to document each time. E.g. I wasn't sure that creating skills/writing_documentation.md and skills/writing_tests.md would actually result in less context usage, since both of those would be in memory most of the time. My CLAUDE.md is already pretty hyper-focused.

So yeah, anyway, my point was that skills might have the potential to offload irrelevant context, which seems useful. Though in my case I'm not sure it would help.


They added a "How is claude doing?" rating a while back which backs this statement up imo. Tons of A/B tests going on i bet.


Every time I type in all caps or use a four-letter word with Claude, I'll get hit with the one-question survey.

More often than not the answer is 1 (bad, IIRC). Then it's 2 for fine. I can only ever remember hitting 3 once.


Briefly turned off? Oh it's back on? That's good at least. I thought he permanently turned it off the moment it didn't align with his goals.


tbh, that's just what I read yesterday, it could be turned off again. I haven't looked at that cesspit in well over a year.


Is using CC outside of the CC binary even needed? CC has an SDK; could you not just use the proper binary? I've debated using it as the backend for internal chat bots and whatnot unrelated to "coding". Though maybe that's against the ToS, as I'm not using CC in the spirit of its design?


That's very much in the spirit of Claude Code these days. They renamed the Claude Code SDK to the Claude Agent SDK precisely to support this kind of usage of it: https://www.anthropic.com/engineering/building-agents-with-t...


Oh man, PRQL looks so good.

I just wish they had mutation in there too. I don't like the idea of swapping between PRQL and SQL, especially for complex update statements where I'd rather write the query in PRQL. .. Yeah, you could argue updates shouldn't be that complex, though, heh.


Yeah, we deliberately left out DML to focus exclusively on DQL. I also find that appealing from a philosophical angle, since it allows PRQL to remain completely functional.

I haven't thought about DML too much but what I could envision is an approach like the Elm or React architecture where you specify a new state for a table as a PRQL query and then the compiler computes the diff and issues an efficient update.

For example

    DELETE FROM table_name WHERE id = 1;
would be something like

    table_name = from table_name | filter id != 1
SQL:

    INSERT INTO table_name (id, name) VALUES (1, 'Jane');
PRQL:

    table_name = from table_name | append [{id=1, name='Jane'}]
Update is the trickiest to not have clunky syntax. For example what should the following look like?

SQL:

    UPDATE table_name SET name = 'John' WHERE id = 1;
I can think of `filter` followed by `append` or maybe a case statement but neither seems great.

Any ideas?


Dumb question, but is this Claude for Excel the.. app? The webapp? Does it work on Google sheets? etc

There are quite a few spreadsheet apps out there; I'm just curious what their implementation is, or how it's implemented to work across multiple apps.

I always find Excel (and the Office ecosystem) confusing heh.


Modern Excel add-ins work on desktop Windows, macOS, and the web. They're just a bit of XML that Excel reads to call whatever web endpoint is defined in the XML.

