I'm definitely giving this a try sometime soon. I had an idea back when it was just GPT-3 out there, to use LLM-generated embeddings as part of a search ranking function. I'm betting that's roughly how Expert mode works, right?
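To make the ranking idea concrete, here's a minimal sketch of blending a traditional keyword score with embedding similarity. Everything here is hypothetical: `rank`, the `alpha` blend weight, and the toy vectors are my own names, and the embeddings themselves would come from whatever model you use.

```python
import math

def cosine(a, b):
    # cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank(query_vec, docs, alpha=0.5):
    # docs: list of (doc_id, keyword_score, embedding)
    # blend the classic keyword score with embedding similarity,
    # weighted by alpha, and sort best-first
    scored = [
        (doc_id, alpha * kw + (1 - alpha) * cosine(query_vec, emb))
        for doc_id, kw, emb in docs
    ]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

In practice you'd get the keyword score from something like BM25 and the vectors from the model's embedding endpoint; the blend weight is the knob you'd tune.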
Edit: Just had another thought. You could use the output of a normal search algorithm to feed the LLM targeted context, which it could then use to come up with a better answer than it would without the extra background. Yeah, I like that.
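The "feed search results to the LLM" idea is basically just prompt assembly. Here's a rough sketch of what I mean; `build_prompt`, the snippet format, and the instruction wording are all made up for illustration, and the actual LLM call is left out.

```python
def build_prompt(question, search_results, max_snippets=3):
    # search_results: list of (title, snippet) pairs from an
    # ordinary keyword search, best matches first
    context = "\n\n".join(
        f"[{i + 1}] {title}: {snippet}"
        for i, (title, snippet) in enumerate(search_results[:max_snippets])
    )
    # ask the model to ground its answer in the retrieved snippets
    return (
        "Answer the question using the sources below; cite them by number.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

You'd pass the returned string to whatever completion API you're using; the point is just that the search step picks the context so the model isn't answering from memory alone.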
Although, I will say I asked it about writing a Lisp interpreter in Python, because I was tooling around with exactly that a little while ago for fun. It essentially pointed me to Peter Norvig's two articles on the subject, which, unfortunately, both feature code that either doesn't run at all or doesn't work correctly. I was disappointed.