Hacker News

I’d like to hear more discussion of AI being applied in ways that are “good enough”. So much of the focus is on it having to be 100% or it sucks. There are use cases where it provides a lot of value and doesn’t have to be perfect to replace tasks done by an imperfect employee who sometimes misses details too. Audit the output with a human, like Taco Bell (Yum) is doing with AI drive-through orders. Are most of the day-to-day questions a person asks so critical in nature that hallucinations cause any more issues than bad advice from a person, an inaccurate Wikipedia or news article, or mishearing? Tolerance of correctness proportional to the importance of the task, I guess. I wouldn’t publish government health policy citing hallucinated research or devise tariff algorithms, but I’m cool with my generated pumpkin bars recipe accidentally having a tbsp/tsp error I’d notice while making them.
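The "tolerance proportional to importance" idea above can be sketched as a simple gating rule. This is purely illustrative: the function name, the risk formula, and the thresholds are my assumptions, not anything Taco Bell or any vendor actually uses.

```python
# Hypothetical sketch of the "audit with a human" pattern: tolerance for
# AI error scales with task importance. All names and thresholds here are
# illustrative assumptions.

def needs_human_review(ai_confidence: float, task_criticality: float,
                       threshold: float = 0.5) -> bool:
    """Flag output for human audit when the stakes outweigh the model's
    confidence. Both inputs are in [0, 1]."""
    # Low-stakes tasks (pumpkin bar recipes) tolerate low confidence;
    # high-stakes tasks (health policy) demand near-certainty.
    risk = task_criticality * (1.0 - ai_confidence)
    return risk > threshold

# A drive-through order: moderately important, model fairly confident.
print(needs_human_review(ai_confidence=0.9, task_criticality=0.6))  # False
# Health policy: so critical that even high confidence gets audited.
print(needs_human_review(ai_confidence=0.9, task_criticality=0.99,
                         threshold=0.05))  # True
```

In practice the "confidence" signal would come from the model or a verifier, but the shape of the policy is the point: the review bar moves with the cost of being wrong.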


I think we see this a lot with software development AI; the tab complete only has to be “good enough” to be worth tweaking. Often a “good enough” first pass from the AI is a few motions on the keyboard away from shippable.

Now with headless agents (like CheepCode[0], the one I built) that connect directly to the same task management apps that human programmers use, you can get “good enough” PRs out of a single Linear ticket with no need to touch an IDE. For copy changes and other easy-to-verify tweaks, this saves developers a lot of the overhead of checking out branches, making PRs, etc., so they can stay focused on more interesting/valuable work. At $1/task, a “good enough” result is well worth it compared to the cost of human time.

[0] https://cheepcode.com
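The ticket-to-PR flow described above could look roughly like the following. To be clear, this is a hypothetical sketch, not CheepCode's or Linear's actual API: every field name and the job structure are assumptions for illustration.

```python
# Hypothetical sketch of a headless-agent flow: a task-tracker webhook
# payload becomes an agent job that ends in a PR for human review.
# All payload fields and job keys are illustrative assumptions.

def handle_ticket_webhook(payload: dict) -> dict:
    """Turn a task-tracker ticket into an agent job description."""
    ticket = payload["data"]
    return {
        "repo": ticket.get("repo", "org/app"),            # assumed field
        "branch": f"agent/{ticket['identifier'].lower()}",
        "instructions": f"{ticket['title']}\n\n{ticket.get('description', '')}",
        "budget_usd": 1.00,  # the $1/task price point from the comment
    }

job = handle_ticket_webhook({
    "data": {"identifier": "ENG-123",
             "title": "Fix typo in pricing page",
             "description": "Change 'recieve' to 'receive'."}
})
print(job["branch"])  # agent/eng-123
```

The human's only touchpoint is reviewing the resulting PR, which is where the "good enough plus audit" economics come from.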


> I’d like to hear more discussion of AI being applied in ways that are “good enough”

Search. AI works about as well as a clueless first-year analyst. You need to double-check its work. But there is still value added in having it compile sources and provide a reasonably accurate summary of, at the very least, some of their arguments.

Also basic web development. Most restaurateurs I know no longer deem it necessary to hire a web developer. (Happy side effect: more PDF menus instead of over-engineered nonsense.)


I think biochemistry might be the biggest beneficiary of "good enough" AI. Much of the expensive, slow work of discovery (like creating a new drug or making sense of some crazy complicated protein structure) is already being dramatically accelerated, and while I'm on the conservative side of estimates, I'm certain we will see groundbreaking new drugs and discoveries come out of biochemistry, not in the distant future but within a few years.

For the other stuff like automating white collar jobs, good enough might not suffice due to the intricate dependencies and implicit contracts formed naturally out of human groups.

Creative jobs will be the most impacted by "good enough", depending on the number of features involved. For 2D art it is almost certainly over (unless you add a text feature to it, like manga). You can see that with each added feature, fields are made redundant overnight: first general photography, then stock photography, and now product photography. E.g., the latest Flux image editor negates the need to hire a photo editor, photographer, camera equipment, lighting, or product artist. Veo 3 isn't quite there, but it handles speech in video generation, which other models did not, and it's getting closer to replacing videographers. I think 3D modeling is the next frontier following this trend, but it is still quite difficult, as it involves mesh generation/texturing/rigging/animation/physics that must also come with shaders and interaction with other 3D models.

Software engineering falls somewhat in the creative field but also shares the complexity of white-collar jobs, for the same reason that will prevent it from being completely automatable with "good enough".

The hallucination issue is less of an issue than people think, and something of an old trope. The truly challenging enemies of "good enough" AI are "not enough context" and "poor context compression and recall". The problems I listed in white-collar and software engineering jobs are context problems. Compression of context cannot be stable while the former isn't solved; fast, efficient recall of context then cannot take place due to poor compression, and so on.
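The "not enough context" failure mode can be shown with a toy example. This is a deliberate oversimplification, my own illustration rather than how any real agent manages context: real systems summarize rather than truncate, but the failure shape is the same.

```python
# Minimal illustration of the context problem: a naive rolling window
# evicts early context, so later recall of an early fact fails.
# Purely illustrative; messages and budget are made up.

def rolling_window(messages: list[str], max_chars: int) -> list[str]:
    """Keep the most recent messages whose combined length fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):
        if used + len(msg) > max_chars:
            break  # budget exhausted; everything earlier is dropped
        kept.append(msg)
        used += len(msg)
    return list(reversed(kept))

history = ["The deploy key lives in vault path X.",  # crucial early fact
           "Discussed lunch options at length.",
           "Refactored the billing module.",
           "Now: rotate the deploy key."]            # needs the early fact
window = rolling_window(history, max_chars=80)
print("deploy key lives" in " ".join(window))  # False: the fact was evicted
```

Compressing the history instead of truncating it helps, but a lossy summary can drop exactly the detail that matters later, which is the compression-and-recall problem the paragraph above describes.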

This is just my observation of how things are progressing. I do feel that we will see something different from LLMs altogether that could solve some of the context issues, but a major misalignment of incentives is what I think would prevent an AGI know-all-see-all type of deal. E.g., you might not have any incentive to share all the essential context with the AI, because you might become irrelevant and want it to stay in the dark; or you might have a union or some social organization legislating a monopoly for human knowledge/skill workers in a field.

But perhaps THE most difficult problem, even after we solve the context problem, is the inability of the god AGI to be awake or conscious, which is absolutely critical in many real-world applications.

I like to focus more on the very near-term impact of what AI is currently doing in the labs, and its impact on humans, than worrying about who is going to address all of the other problems, and when.

Whether we get a UBI-first socialist world order or a continuation of technological feudalism, with the poor still using GPTs while the rich sell the energy and chips (software would be almost worthless on its own by then), is the least of my concerns.

I'm an optimist, and I'm very excited for the immediate impact of our currently available AI tools doing "good enough" work in very positive ways.


I think some more white-collar jobs might be affected, not just creative ones. There is a substantial number of jobs where the end result needs to be of a certain quality, but all context can be inferred or provided up front, and checking and correcting a result is quicker than producing it manually. Think e.g. law or translation. Translators, proofreaders, and others are already feeling the squeeze.

In other cases, like software development, there is a split between tasks of a narrow scope and those of a wide scope. Creating one-shot pieces of software is kind of a solved issue now. Maintaining some relatively self-contained piece of software might soon turn into a task for single maintainers that review AI PRs. The more the bottleneck is context tracking, as opposed to producing code, the less useful the AI. I am uncertain, however, how the millions of devs in the world are distributed on this continuum.

I am also skeptical about legal protections or unionization, as many of these jobs are quite suited to international competition.



