These personal blogs are starting to feel like LinkedIn Lunatics posts, kinda similar to the optimised floor-sweeping blog: “I am excited to provide shareholder value, at minimum wage.”
What does it tell you that programmers with the credibility of antirez - and who do not have an AI product to sell you - are writing things like this even when they know a lot of people aren't going to like reading them?
What it tells me is that humans are fallible, and that being a competent programmer has no correlation with having strong mental defenses against the brainrot that typifies the modern terminally-online internet user.
I leverage LLMs where it makes sense for me to do so, but let's dispense with this FOMO silliness. People who choose not to aren't missing out on anything, any more than people who choose to use stock Vim rather than VSCode are missing out on anything.
People higher up the ladder aren't selling anything, but they also don't have to worry about losing their jobs. We're worried that execs will see these advances and quickly clear the benches. That might not be true, but every programmer believing they've become a 10x programmer pushes us further toward that reality.
That is an argument from authority. There is a large enough segment of folks who like having their beliefs confirmed, in either direction. That doesn't make the argument itself correct or incorrect. Time will tell, though.
I never connected my smart TVs to the internet. I buy the cheapest TV (at the size I want) and connect an old laptop, lid closed, plus a cheap mini keyboard. It does everything I want, never updates itself with unwanted features, and never shows me ads. I've been doing this for 10 years; why would anyone actually want a smart TV?
The problem is this doesn't really work anymore with Widevine-protected content. You're not getting Widevine L1 content through Windows or any other home desktop operating system. Even without L1 content, platforms like YouTube won't serve 5.1 surround unless it's through an app rather than the browser.
I'm not saying you need a smart TV, but if you want the content you're actually paying for via Netflix, HBO, etc. in the highest quality they offer, you'll need to fork over money for a device with dedicated hardware.
I’ve never had those issues. The most I’ve ever had to do is enable DRM in my browser. Yeah, I don’t use surround sound in my living room, but my Linux box in my office supports it, so I don’t see why not.
Even if that were unsupported, I would just get an external Apple TV or Chromecast. There is still no reason to have a smart TV.
Code has always been nondeterministic. Which engineer wrote it? What was their past experience? This just feels like we're accepting subpar quality because we have no good way to ensure the code we generate is reasonable and won't mayyyybe rm -rf our server as a fun easter egg.
Code written by humans has always been nondeterministic, but generated code has always been deterministic before now. Dealing with nondeterministically generated code is new.
Determinism vs. nondeterminism is not, and has never been, an issue. Also, all LLMs are 100% deterministic; what is nondeterministic is the sampling done by the inference engine, which, by the way, can easily be made 100% deterministic by simply turning off things like batching. This only matters for cloud-based API providers, since you as the end user don't have access to the inference engine; if you run any of your models locally in llama.cpp, turning off some server startup flags will get you deterministic results. Cloud-based API providers have no choice but to keep batching on: they are serving millions of users, and wasting precious VRAM slots on a single user is wasteful and stupid. See my code and video as evidence if you want to run any local LLM 100% deterministically: https://youtu.be/EyE5BrUut2o?t=1
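For a rough illustration, here is a minimal sketch of the same idea (using HuggingFace transformers rather than llama.cpp for brevity; the model and sampling parameters are illustrative, and this is not the code from the video): one unbatched request, fixed sampling parameters, and a pinned RNG seed give identical output on every run.

    # Minimal determinism sketch: no batching, pinned seed, fixed sampling params.
    # Assumes the `torch` and `transformers` packages; model choice is arbitrary.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def sample(prompt, seed):
        torch.manual_seed(seed)  # pin the sampler's RNG
        ids = tok(prompt, return_tensors="pt").input_ids
        out = model.generate(ids, do_sample=True, temperature=0.8, top_p=0.95,
                             max_new_tokens=32, pad_token_id=tok.eos_token_id)
        return tok.decode(out[0])

    # Two runs, same seed: identical output (on CPU; GPU kernels can vary).
    assert sample("The weights are frozen;", 42) == sample("The weights are frozen;", 42)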
That's not an interesting difference, from my point of view. The black box we all use is nondeterministic, period. It doesn't matter where inside the system it stops being deterministic: if I hit the black box twice, I get two different replies. And that doesn't even matter, which you also said.
The more important property is that, unlike the output of compilers, type checkers, linters, verifiers, and tests, the output is unreliable. It comes with no guarantees.
One could be pedantic and argue that bugs affect all of the above. Or that cosmic rays make everything unreliable. Or that people are nondeterministic. All true, but the rates of failure, measured in orders of magnitude, are vastly different.
My man, did you even check my video? Did you even try the app? This is not "bug-related"; nowhere did I say it was a bug. Batch processing is a FEATURE that is intentionally turned on in the inference engine by large-scale providers. That does not mean it has to be on. If they turned off batch processing, all LLM API calls would be 100% deterministic, but it would cost them more money to provide the service, since now you are stuck with one API call per GPU. "If I hit the black box twice, I get two different replies": what you are saying here is 100% verifiably wrong. Just because someone chose to turn on a feature in the inference engine to save money does not mean LLMs are nondeterministic. LLMs are stateless; their weights are frozen. You never "run" an LLM, you can only sample it, just like a hologram, and the inference sampling settings you use determine the outcome.
Correct me if I'm wrong, but even with batch processing turned off, aren't they only deterministic if you set the temperature to zero? Which also has the side effect of decreasing creativity. But maybe there's a way to pass in a seed for the pseudo-random generator and restore determinism in that case as well. Determinism, in the sense of reproducibility. But even so, "determinism" means more than mechanical reproducibility to most people, including the parent, if you read their comment carefully. What they mean is: predictable, in some important way, for us humans. I.e., no complete WTF surprises, which LLMs are prone to produce once in a while, regardless of batch processing and temperature settings.
You can change ANY sampling parameter once batch processing is off and you will keep the deterministic behavior: temperature, repetition penalty, etc. I have to say I'm a bit disappointed to see this on Hacker News; I expect it from Reddit. I bring you the whole matter on a silver platter: the video describes in detail how any sampling parameter can be used, and I provide the whole code open source so anyone can try it themselves without taking my claims as hearsay. Well, you can lead a horse to water, as they say...
Technically you are right… but in practice, no. Ask an LLM any reasonably complex task and you will get different results. This is because the model changes periodically and we have no control over the host system's source of entropy. It's effectively nondeterministic.
It’s called TDD: ya write a bunch of little tests to make sure your code is doing what it needs to do and not what it’s not. In short, little blocks of easily verifiable code to verify your code.
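For instance, a minimal pytest-style sketch (the slugify function and its tests are made up for illustration): write the little tests first, then make the code pass them.

    # Tests pin down the behavior first; the implementation exists to satisfy them.
    def slugify(title):
        return "-".join(title.lower().split())

    def test_slugify_collapses_whitespace():
        assert slugify("Hello   World") == "hello-world"

    def test_slugify_lowercases():
        assert slugify("MiXeD Case") == "mixed-case"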
But seriously, what is this article even? It feels like we're reinventing the wheel, or maybe it's just humble AI hype?
Jokes and sales pitches aside, we kinda have that already: we have platforms that allow us to run the same code on x86, ARM, WASM, and so on. It's just that there's no consensus on which platform to use. Nor should there be, since that would slow the progress of new and better ways to program.
We will never have one language spanning graphics, full stack, embedded, scientific, high performance, etc. without excessive bloat.
No. I work in VC-backed startups. Externally they might say that for investors, but talk to a software engineer: agentic AI is not working. It will regurgitate code from examples online, but if you get it to do something complex or slightly novel, it falls flat on its face.
Maybe it will someday be good enough, but not today, and probably not for at least 5 years.
Meanwhile there are people delivering solutions in iPaaS tools, which is quite far from the traditional programming that gets discussed on HN.
Some of those tools aren't fully there yet, but they also aren't completely dumb; they get more done in a day than trying to build the same workflows with classical programming.
Those are high-level programming tools, nowadays backed by AI, being adopted by the big corps; some are sponsored with the VC money you're sceptical about.
I’m not sure about that. I used to use LabVIEW and its various libraries often. The whole thing felt scattered and ossified. I’d take the Python standard library any day.
I once interned at a lab that used a piece of surely overpriced hardware that integrated with Simulink. You would make a Simulink model, and you’d click something and the computer would (IIRC) compile it to C and upload it to the hardware. On the bright side, you didn’t waste time bikeshedding about how to structure things. On the other hand, actually implementing any sort of nontrivial logic was incredibly unpleasant.
No, not really. Depending on the application, C++ or Python has been the language of choice in the lab. LabVIEW was used because it was seen as easy for making UIs for operators in production facilities, but even that was a regrettable decision. We ended up rewriting the LabVIEW business logic in C# and importing it as a lib into a LabVIEW front end.