Hacker News

Main problem for me is that the quality tails off on chats and you need to start afresh

I worry that the garbage at the end will become part of the memory.

How many of your chats end with "that was rubbish/incorrect, I'm starting a new chat!"?



Exactly, and that's the main reason I've stopped using GPT for serious work. LLMs start to break down and inject garbage late in a long chat; usually I abandon the prompt before the work is complete and fix things up manually afterward.

GPT then stores the incomplete chat and treats it as truth in memory. It's very difficult to get it to un-learn something that's wrong: you have to layer new context on top of the bad information, and it can still run with the wrong knowledge even after being corrected.


Reminds me of the one time I asked ChatGPT (months ago now) to create a team logo with a team name. Now, anytime I bring something up, it asks me if it has to do with that team name. That team name wasn't even chosen. It was one prompt. One time. Sigh.


You can manually delete memories in your profile settings, just FYI


So a thing with claude.ai chats is that once they run long enough, a lengthy context injection gets added on every single turn.

That injection (for various reasons) essentially eats up a massive amount of the model's attention budget, and most of the extended thinking trace if present.

I haven't really seen lower-quality responses from modern Claudes at long context when using the models directly, but in the web/app, with the long-conversation-reminder (LCR) injections, the conversation goes to shit very quickly.

And yeah, LCRs becoming part of the memory is one (of several) things that's probably going to bite Anthropic in the ass with the implementation here.



