I've noticed that a lot of complaints about model behavior seem to be addressed by simply changing the custom instructions. Is it that people aren't using these features, or are there real issues that can't be resolved this way?
6 Answers
A lot of folks think that custom instructions are the solution, but the problem is the system prompt often overrides them. You can tell it not to do something, and it might listen for a bit, but then it just goes back to its old ways. It's really frustrating!
Maybe some people have tried multiple approaches and are still getting the same responses. Writing very specific instructions can help, but the model still tends to veer off course pretty quickly. It’s hit or miss, honestly.
If custom instructions were more like Claude’s response styles, allowing for more personalized settings, I’d probably use them more. The model’s default behavior is subjective; trying to fix it with broad instructions could backfire and affect the overall quality of the responses.
I usually think the problem lies with the user. I mean, troubleshooting has been my thing since I was using a Commodore 64, and I feel like that mindset applies here too.
Totally agree! It often feels like the complaints come from users who might not be prompting effectively or aren't familiar with the custom instructions.
Custom instructions tend to be a short-term fix. Even simple prompts like 'never use this word' can fail. The model eventually reverts to its foundational training, which can be a bummer.
Sometimes it's about more than just instructions. There are bigger social implications at play here: the people who would benefit most from custom instructions are often the least likely to know the feature exists or how to use it. Also, the model’s sense of balance is pretty shallow, which can lead to mixed results when you try to instruct it.
Yeah, I can relate. It's not always that way for me, though. Generally, the model follows my instructions pretty well, especially for specific tasks.