
"I automatically programmed it" doesn't really roll off the tongue, nor does it make much sense - I reckon we need a better term.

It's certainly quicker (and at times more fun!) to develop this way, that's for sure.


You will just say "I programmed it"; there is no longer a need for this distinction. You can add that you used automatic programming in the process, but soon there will be no need to refer to this term at all, similarly to how today you don't specify that you used an editor...

Yes, but the editor isn't claiming it will take your job in 5 years.

Also, I do feel like this is a very substantial leap.

This is sort of like the difference between some and many.

Your editor has some effect on the final result, so crediting or mentioning it doesn't really matter much (though people still mention their editor choices, and I know some git repos with a .vscode folder that shows the creator used VS Code; I'm not sure whether the same is true for other editors).

But with AI in particular, the difference is that I personally feel like it's doing most of the work. It's literally writing the code that turns into the binary that runs on the machine, all while being a black box.

I don't really know, because it's something I'm conflicted about myself, but I just wanted to speak my mind, even if it's a little contradictory on the whole AI-distinction thing, which is why I wish to discuss it with ya.


LLMs translate specs into code. If you master computational thinking like Antirez, you basically reduce LLMs to intelligent translators of your stated computational ideas and specifications into a(ny) formal language, plus the typing. In that scenario LLMs are a great tool and speed up the coding process. I like how the power is in the semantics, whereas syntax becomes more and more of a detail (and rightfully so)!

I like to think that the prompt is dark magic and the outputs are conjured. I get to feel like a wizard.

I coined the term "lite coding" for this after reading this article, and now my ChatGPT has convinced me that I am a genius.

"Throwaway prototype" - that's the traditional term for this.

They cornered the market, drove everyone out of it, and are now rent-seeking. Can't say you have much of a choice between YouTube and any other video provider that has the same content on it.

> They cornered the market, drove everyone out of it, and are now rent-seeking.

It's almost dumping [1]: they gave the service away for free (even though they were losing a lot of money) just to make it unfeasible for any other company to start a competing service.

Vimeo could have been a competitor, but then they pivoted to the professional market, and now that Bending Spoons has bought them [2], I'm not sure they even have a future.

[1] https://en.wikipedia.org/wiki/Dumping_(pricing_policy)

[2] https://news.ycombinator.com/item?id=45197302


It is dumping. The whole YCombinator VC Silicon Valley model is entirely based on dumping. They call it "burning VC cash", which is just a wordier synonym meant to muddy the waters, and it would be positive for the world if everyone installed a browser script that ran `s/burning vc cash/dumping/` on all text elements.

Using your CC Max account for this seems like a good way to get your account banned, as it's against the ToS and Anthropic has started enforcing this.

Correct me if I'm wrong, but the only legal way to use pi is to use an API, and that's enormously expensive.


Sure, I'm not using it with my company/enterprise account for that reason. But for my private sub, it's worth the tradeoff/risk. Ethically I see no issue at all, because those LLMs are trained on who knows what.

But you can use pi with z.ai or any of the other cheap Claude-distilled providers for a couple bucks per month. Just calculate the risk that your data might be sold I guess?


Really curious, what paragraph of the ToS is being violated?

I don't have the paragraph, but here's the news about it for you: https://venturebeat.com/technology/anthropic-cracks-down-on-...

Look it up. They have banned people over this, and it was all over the news, with some people cancelling their accounts, etc.

So the same is true if people use OpenCode with Claude Pro/Max?

Yes, only the plan OpenCode themselves sell is "legal" Opus.

Is the app legitimate though? A few of these apps that deal with LLMs seem too good to be true and end up asking for suspiciously powerful API tokens in my experience (looking at Happy Coder).

It's legitimate, but it's also extremely powerful, and people tend to run it in very insecure ways, or in ways where their computer gets wiped. There are numerous examples and stories on X.

I used it for a bit, but it burned through tokens (even after the token fix), and it uses tokens for stuff that could be handled by if/then statements and API calls without spending any tokens at all.

But it's a very neat, if imperfect, glimpse of the future.


They've recently added "lobster", an extension for deterministic workflows outside of the LLM, which at least partially solves that problem. They also fixed a context-caching bug that resulted in it using far more Anthropic tokens than it should have.

> It's legitimate

How do you know?

> it burned through tokens (even after the token fix), and it uses tokens for stuff that could be handled by if/then statements and API calls without spending any tokens at all.

Sponsored by the token seller, perhaps?


> How do you know?

I looked at the code, and I've followed Peter, its developer, for a long time; he has a good reputation.

> Sponsored by the token seller, perhaps?

I don't know what this means. Peter wasn't sponsored at the time, but he may or may not have some sort of arrangement with Minimax now. I have no clue.



Inspired by https://news.ycombinator.com/item?id=46754944, I built a Linux clone of the app. I'd always had it in the back of my mind to create something like this, and that post was the impetus I needed.

Install using:

    pip install postured

or:

    uv pip install postured

Disclaimer: built with Claude Code, tested by a human.


We fix this issue by distributing ours in a tar file with the executable bit set. Linux novices can just double-click the tar to extract it and then double-click the actual AppImage.

Been doing it this way for years now, so it's well battle-tested.
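For what it's worth, here's a minimal Python sketch of that packaging step (MyApp.AppImage is just a made-up name): tar records POSIX file modes, so an executable bit set before archiving survives the user's download.

    import os
    import stat
    import tarfile

    appimage = "MyApp.AppImage"  # hypothetical file name

    # Set the executable bit before archiving; tar stores POSIX modes,
    # so the bit is still set after the user extracts the archive.
    mode = os.stat(appimage).st_mode
    os.chmod(appimage, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

    with tarfile.open("MyApp.tar", "w") as tar:
        tar.add(appimage)

A plain `chmod +x MyApp.AppImage && tar -cf MyApp.tar MyApp.AppImage` does the same thing.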


That kind of defeats the point of an AppImage though - you could just as well ship a tar archive with a classic collection of binaries + an optional launcher script.

A single file is much easier on the eyes and easier to manage than a whole bunch of them, plus AppImages can be integrated into the desktop.

How can you tell if a short person is slouching? Or a tall person?

I'm not the author, but I assume it takes the highest position your head has reached as the benchmark, blurs from there, and updates that baseline if you ever appear higher.

Meaning that the way to have "perfect posture" is never to sit up straight in the first place :-)
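Purely as an illustration of that guess (the 5% tolerance is made up), the rule would be something like:

    def update_baseline(baseline_y, head_y, tolerance=0.05):
        # Image y-coordinates grow downward, so a smaller head_y means sitting taller.
        if baseline_y is None or head_y < baseline_y:
            baseline_y = head_y  # a new "personal best" becomes the baseline
        slouching = head_y > baseline_y * (1 + tolerance)
        return baseline_y, slouching

Track the highest head position ever seen and flag any frame that falls noticeably below it.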


It has a calibration step

If you assume a person’s chair height and desk height are both set optimally, then I guess the person’s height doesn’t matter for this detection.

Very cool, thanks! I will try this with Mudlet.


Would love to see a native Linux application for this - after all, a lot of folks are using Linux more and more these days.

