Hacker News

The mostly static knowledge from sites like Wikipedia is already well represented in LLMs.

LLMs call out to external websites when something isn’t commonly represented in training data, like specific project documentation or news events.



That's true, but the data is only approximately represented in the weights.

Maybe it's better to have the AI only "reason", and somehow instantly access precise data.


Is this Retrieval Augmented Generation, or something different?


Yes, RAG, but with the model specifically optimized for RAG.
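The basic RAG loop being discussed can be sketched in a few lines. This is a toy illustration, not any particular system: the corpus, the word-overlap scoring, and the prompt template are all invented for the example (real systems use embedding similarity and a vector index instead).

```python
# Minimal RAG sketch: retrieve the most relevant document for a query,
# then splice it into the prompt so the model works from exact text
# rather than the approximate representation in its weights.

def score(query, doc):
    """Toy relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, corpus, k=1):
    """Return the top-k documents by the toy score."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query, corpus):
    """Prepend the retrieved context so generation is grounded in it."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The v2.3 release changed the default timeout to 30 seconds.",
    "Wikipedia-style background knowledge the model already has.",
]
print(build_prompt("What is the default timeout in v2.3?", corpus))
```

A model "optimized for RAG" would be trained to lean on the Context block and cite it, rather than answering from parametric memory.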


What use cases will gain from this architecture?


Data processing, tool calling, agentic use. Those are also the main use cases outside "chatting".
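The tool-calling pattern behind those use cases can be sketched as a tiny dispatch loop. Everything here is hypothetical (the tool name, the JSON schema, the returned data); real APIs such as OpenAI's tool calls differ in detail, but the shape is the same: the model emits a structured call, the host executes it, and the result goes back into context.

```python
# Toy tool-calling loop: the "model" emits a structured call as JSON,
# the host parses it, dispatches to a registered function, and would
# feed the result back into the model's context for the next turn.
import json

# Hypothetical tool registry; a real agent would register search, code
# execution, database queries, etc.
TOOLS = {
    "lookup_price": lambda symbol: {"symbol": symbol, "price": 101.5},
}

def run_tool_call(raw):
    """Parse a JSON tool call and dispatch to the matching function."""
    call = json.loads(raw)
    return TOOLS[call["name"]](**call["arguments"])

# A model trained for tool use would emit something like this:
model_output = '{"name": "lookup_price", "arguments": {"symbol": "ACME"}}'
print(run_tool_call(model_output))
```

The "agentic" part is just running this loop repeatedly, letting the model decide which tool to call next based on previous results.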




