
Yes. Start small and build up.

I’ve found it to be very forgetful and have to work function-by-function, giving it the current code as part of the next prompt. Otherwise it randomly changes class names, invents new bits that weren’t there before or forgets entire chunks of functionality.

It’s a good discipline as I have to work out exactly what I want to achieve first and then build it up piece by piece. A great way to learn a new framework or language.

It also sometimes picks convoluted ways of doing things, so regularly asking whether there’s a simpler approach can be useful.
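The function-by-function workflow described above can be sketched roughly like this. The `ask_llm` call is a hypothetical placeholder (not a real API); the point is that every prompt re-sends the current code, so the model can't silently rename classes or drop functionality it never saw:

```python
# Sketch of the "feed it the current code each time" workflow.
# `ask_llm` is a hypothetical stand-in for whatever chat interface you use.

def build_prompt(code_so_far: str, next_step: str) -> str:
    """Combine the full current code with one small, concrete request."""
    return (
        "Here is my current code:\n\n"
        f"{code_so_far}\n\n"
        f"Without renaming anything, please: {next_step}"
    )

def iterate(steps, ask_llm):
    code = ""  # start small and build up
    for step in steps:
        prompt = build_prompt(code, step)
        code = ask_llm(prompt)  # the reply becomes the new "current code"
    return code
```

Working out the list of `steps` up front is the discipline the comment describes: you have to decide exactly what you want before asking for it piece by piece.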



IIRC its "memory" (actually input size, it remembers by taking its previous output as input) is only about 500 tokens, and that has to contain both your prompt and the beginning of the answer to hold relevance towards the end of its answer. So yes, it can't make anything bigger than maybe a function or two with any consistency. Writing a whole program is just not possible for an LLM without some other knowledge store for it to cross reference, and even then I have my doubts.


This isn't quite accurate.

GPT-3.5 has a 4k-token context window with a 16k variant; GPT-4 is 8k with a 32k variant.

You are correct that this needs to account for both input and output. I suspect that when you feed ChatGPT longer prompts, it may switch to the 16k / 32k models when it makes sense.
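That "switch to a bigger window when the prompt is long" idea can be sketched as below. The ~4 characters-per-token ratio is a rough English-text approximation (a real implementation would use an actual tokenizer), and the model names are the commonly cited ones for these context sizes, assumed here for illustration:

```python
# Rough sketch: estimate tokens, then pick a model whose context window
# fits the prompt plus an expected reply. The 4 chars/token ratio is an
# approximation, not an exact count.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def pick_model(prompt: str, reply_budget: int = 1000) -> str:
    """Choose a context size that fits prompt + expected reply."""
    needed = estimate_tokens(prompt) + reply_budget
    if needed <= 4096:
        return "gpt-3.5-turbo"        # 4k context
    if needed <= 16384:
        return "gpt-3.5-turbo-16k"    # 16k variant
    return "gpt-4-32k"                # largest window assumed here
```

The key point either way: input and output share the same budget, so a long prompt leaves less room for the answer.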




