
This is obviously beside the point, but I did blindly follow a wiener schnitzel recipe ChatGPT made me and cooked for a whole crew. It turned out great. I think I got lucky though; the next day I absolutely massacred the pancakes.


I genuinely admire your courage and willingness (or perhaps just chaos energy) to attempt both wiener schnitzel and pancakes for a crew, based on AI recipes, despite clearly limited knowledge of either.


Recent experiments with LLM recipes (ChatGPT): it omitted salt from a rice recipe, then flubbed whether the recipe it was supposedly summarizing recommended washing that type of rice (and lied about it, too)…

Probabilistic generation will be weighted toward the means in the training data. Do I want my code looking like most code most of the time, in a world full of Node.js and PHP? Am I better served by rapid delivery from a non-learning algorithm that requires eternal vigilance and critical re-evaluation, or by slower delivery with a single review, filtered through a meatspace actor who will build out trustable modules in a linear fashion with known failure modes already addressed by process (i.e. TDD, specs, integration & acceptance tests)?

I’m using LLMs a lot, but can’t shake the feeling that the TCO and total time shake out worse than they feel as you go.


There was a guy a few months ago who found that telling the AI to do everything in a single PHP file actually produced significantly better results, i.e. it worked on the first try. Otherwise it defaulted to React, 1GB of node modules, and a site that wouldn't even load.

>Am I better served

For anything serious, I write the code "semi-interactively", i.e. I just prompt and verify small chunks of the program in rapid succession. That way I keep my mental model synced the whole time, I never have any catching up to do, and honestly it just feels good to stay in the driver's seat.


Pro-tip: Do NOT use LLMs to generate recipes. Use them to search the internet for a site with a trustworthy recipe, for information on cooking techniques, science, or chemistry, or if you need ideas about pairings and/or cooking theory / conventions. Do not trust anything an LLM says if it doesn't give a source. It seems people on the internet can't cook for shit and just make stuff up about food science and cooking (e.g. "searing seals in the moisture", though most people know this is nonsense now), so the training data here is utterly corrupt. You always need to inspect the sources.

I don't even see how an LLM (or frankly any recipe) that is a summary / condensation of various recipes can ever be good, because cooking isn't something where you can semantically condense or even mathematically combine various recipes together to get one good one. It just doesn't work like that, there is just one secret recipe that produces the best dish, and the way to find this secret recipe is by experimenting in the real world, not by trying to find some weighting of a bunch of different steps from a bunch of different recipes.

Plus, LLMs don't know how to judge quality of recipes at all (and indeed hallucinate total nonsense if they don't have search enabled).


> I don't even see how an LLM (or frankly any recipe) that is a summary / condensation of various recipes can ever be good

It's funny, I actually know quite a few (totally non-tech) people who use (and like using) LLMs for recipes/recipe ideas.

They probably have enough experience to push back when there's a bad idea, or figure out missing steps/follow up.

Thinking about it, it sounds a bit like LLM usage for coding where an experienced programmer can get more value out of it.


If you have lots of experience from years of serious cooking, like I do, almost everything the LLM suggests or outputs re: cooking is false, bad, or at best incredibly sub-par, and you will spend far more time correcting it and/or pushing it toward what you already know than it takes to get anything actually true out of it. I also think it just messes up incredibly basic stuff all the time. I reiterate: it is only good for the things I said.

Whether or not you think you can get "good" recipes out of it will also depend on your experience with cuisine and cooking, and your own pickiness. I am sure amateurs or people who cook only occasionally can get use out of it, but it is not useful for me.

Cooking is a very different world from coding: recipes aren't composable like code (within-recipe ratios need to be maintained, i.e. recipes are written in baker's ratios/proportions, steps are almost always sequentially dependent, and ingredients need to complement each other), and most sources, besides the few good empirical ones, don't actually verify anything they make. That's a problem, because the training data for cooking is far more poisoned.
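(For the non-cooks: the baker's-ratio idea can be sketched in a few lines of Python. Each ingredient is expressed as a percentage of the flour weight, so the whole recipe scales as one ratio instead of editing each quantity independently. The formula numbers below are illustrative, not from any particular recipe.)

```python
def scale_recipe(percentages, flour_grams):
    """Return ingredient weights (in grams) for a given flour weight.

    percentages: ingredient name -> baker's percentage, where flour is 100%.
    """
    return {name: round(flour_grams * pct / 100, 1)
            for name, pct in percentages.items()}

# A typical lean bread formula in baker's percentages (assumed values):
bread = {"flour": 100, "water": 70, "salt": 2, "yeast": 1}

print(scale_recipe(bread, 500))
# e.g. 500 g flour -> 350 g water, 10 g salt, 5 g yeast
```

Change the flour weight and everything else follows; that's exactly the dependency structure that makes it unsafe to splice steps or quantities from one recipe into another.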


I guess different people have different experiences when using those tools :)

(I was talking about people who cook daily for their households and enjoy doing it, I guess they found a way to make LLMs useful for them)


I also cook daily at home, for fun (though I have catered a couple of times for some large 50+ person family events too). Just, in my case, cooking is my passion, and has been more than just a minor hobby for me. I.e. there have been many years of my life where I spent 3-5 hours of every day cooking, and this has been the case for about 15 years now. If "professional home cook" were a thing, I'd be that, but, alas.

So my standards are admittedly probably a bit deranged relative to most...



