I just started using GPT Plus and I'm exploring creating some graphic elements with it. However, I've noticed something frustrating: sometimes the images look perfect as they generate, but at the very end, they seem to get messed up. It's like they go through a filter that makes them appear watercolor-like, blurring lines and making them less precise. I'm curious about why this happens at the last moment and why it seems inconsistent. Has anyone else encountered this, or does it feel like the model is trying too hard and overshooting the desired look?
5 Answers
My suspicion is that this happens when the model detects similarity to existing works, and some ethical or copyright-related constraint kicks in at the end. It might also be worth checking whether the output carries any embedded watermark or encoding. It's still murky, but it seems tied to the model's safety mechanisms rather than any deliberate sabotage.
I totally get your pain! It's super annoying when the image is looking great, and then bam—suddenly it adds a bunch of details you didn’t want. I've had that happen to me a lot, too!
That’s an interesting issue! Can you share some of your prompts? I've noticed the API can sometimes produce more stable outputs without as many restrictions. I'd love to test it myself!
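If you want to try the API route, here is a minimal sketch using the official `openai` Python SDK's `images.generate` endpoint. The prompt text, the helper function, and the default parameter choices are all illustrative assumptions, not a known fix for the blurring:

```python
# Hypothetical sketch: calling the Images API directly instead of the ChatGPT UI.
# Assumes the official `openai` Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the prompt is just an example.
import os

def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    """Collect the parameters for an images.generate call in one place."""
    return {
        "model": "dall-e-3",
        "prompt": prompt,
        "size": size,
        "n": 1,
    }

params = build_image_request("a clean, flat-style logo of a fox with sharp vector lines")

# Only hit the network when credentials are actually configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI
    client = OpenAI()
    image = client.images.generate(**params)
    print(image.data[0].url)
```

Whether the API output is actually more stable than the UI is something you'd have to test yourself; the endpoint does expose the size and model explicitly, which at least makes runs easier to compare.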
What you're seeing is likely the final refinement pass. These models rough out the overall composition first and only resolve fine detail at the end, so the last step can visibly change small elements or even larger regions like faces. It reads as "overdone" because it's a big change right at the finish. Honestly, just let it complete, and if you're not happy with the result, tweak your prompts to guide it better; it's not a bug, it's part of the process.
Sure! I'll share a couple of example prompts, though I also wonder whether rewording them would make a difference on its own.