
You can easily ask an LLM to return JSON results, and soon working code, on your exact query and plug those into another system for automation.


If you ask "for JSON" it'll make up a different schema for each new answer, and models get a lot less smart when you make them follow a schema, so it's not quite that easy.
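One way to cope with schema drift (a hypothetical sketch, not something from this thread: the field names and `validate_reply` helper are made up for illustration) is to pin the expected schema on the consumer side and reject replies that don't match it:

```python
import json

# Hypothetical pinned schema: field name -> required Python type.
EXPECTED = {"title": str, "year": int, "tags": list}

def validate_reply(raw: str) -> dict:
    """Parse an LLM reply and check it against the pinned schema.

    Raises ValueError if the model invented a different shape,
    so downstream automation never sees a surprise schema.
    """
    data = json.loads(raw)
    for field, typ in EXPECTED.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], typ):
            raise ValueError(f"wrong type for {field}")
    return data

# A reply that matches the pinned schema passes through...
good = validate_reply('{"title": "Dune", "year": 1965, "tags": ["sf"]}')
print(good["year"])  # 1965

# ...while a drifted schema is rejected instead of silently propagating.
try:
    validate_reply('{"name": "Dune"}')
except ValueError as e:
    print("rejected:", e)
```

In practice you'd retry the prompt (or a repair prompt) when validation fails, which is where the chain-of-prompts approach mentioned below comes in.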


A chain of prompts can deal with that in many cases.

Also, the intelligence of these models will likely continue to increase for some time, based on expert testimony to Congress, which aligns with the evidence so far.


OpenAI recently launched structured responses, so schema following is no longer hard.
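For the curious, a request using that feature looks roughly like this (a sketch built from the public Structured Outputs announcement; no network call is made here, and the model name and JSON Schema contents are just illustrative):

```python
import json

# Illustrative JSON Schema the model must conform to.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "year": {"type": "integer"},
    },
    "required": ["title", "year"],
    "additionalProperties": False,
}

# Shape of the chat-completions request body with strict schema following
# enabled via the `response_format` field.
request_body = {
    "model": "gpt-4o-2024-08-06",
    "messages": [
        {"role": "user", "content": "Extract the book's title and year."}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "book", "strict": True, "schema": schema},
    },
}

print(json.dumps(request_body, indent=2))
```

With `strict` set, the API constrains decoding so the reply is guaranteed to parse against the supplied schema, which removes the "different schema each answer" failure mode.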


Didn't they release a structured output mode recently to finally solve this?


It doesn't solve the second problem (models getting less capable when forced to follow a schema). Though I can't say how much of an issue that is, and chain-of-thought would help.

JSON also isn't an ideal format for a transformer model because it's recursive and transformers aren't, so they have to spend attention on balancing closing brackets. YAML and other indentation-based formats avoid this, IIRC. I also don't know how much this matters.
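The bracket-balancing point is easy to see concretely (a toy illustration; the YAML here is written by hand to keep the example stdlib-only):

```python
import json

data = {"a": {"b": {"c": [1, 2]}}}

# JSON: every opening brace/bracket must be closed later, so a
# left-to-right generator has to track nesting depth for the whole output.
as_json = json.dumps(data)
closers = sum(as_json.count(ch) for ch in "}]")
print(as_json)
print("closing delimiters to balance:", closers)

# The same data in an indentation-based (YAML-style) layout: nesting is
# expressed positionally, with no closing tokens to balance at the end.
as_yaml = "a:\n  b:\n    c:\n      - 1\n      - 2\n"
print("closing delimiters to balance:", sum(as_yaml.count(ch) for ch in "}]"))
```

Whether this measurably affects output quality is, as the comment says, an open question.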




