I've been using an AI prompt that extracts line items from work orders, maps them to my price list, and creates invoices, along with a Python script to check the calculations. Lately, however, it's been giving me incorrect outputs. I'm wondering if anyone else is experiencing similar issues and whether there's a fix I can try before throwing in the towel.
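For context, here's a minimal sketch of the kind of arithmetic check my script does. It's not my actual code and the column names are made up, but the idea is simple: each line's quantity times unit price should match the line total, and the line totals should add up to the invoice total.

```python
# Sketch of an invoice arithmetic check (hypothetical CSV column names).
import csv

TOLERANCE = 0.01  # allow for rounding to the cent

def check_invoice(path):
    """Flag lines where qty * unit_price doesn't match line_total, and sum the totals."""
    problems = []
    running_total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            qty = float(row["qty"])
            unit_price = float(row["unit_price"])
            line_total = float(row["line_total"])
            expected = round(qty * unit_price, 2)
            if abs(expected - line_total) > TOLERANCE:
                problems.append(f"{row['item']}: expected {expected}, got {line_total}")
            running_total += line_total
    return problems, round(running_total, 2)

if __name__ == "__main__":
    issues, total = check_invoice("invoice_lines.csv")
    for issue in issues:
        print("MISMATCH:", issue)
    print("Computed invoice total:", total)
```

The script catches the math errors fine; the problem is what the AI extracts in the first place.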
5 Answers
It sounds like you might be running into issues because you're using an ongoing chat thread. Typically, the longer the thread goes, the more likely the AI is to hallucinate or make mistakes. Starting fresh with a new session might help!
Nope, I always start with a new thread!
Yeah, I noticed that too. They seem to keep downgrading the quality of their AI while expecting us to keep paying the same price!
I just canceled mine. I've been finding much better results with other AI models.
Totally agree! Sometimes it feels like we're going backwards instead of forwards.
Could you share an example? Maybe a prompt or something that shows where it's going wrong?
I feel your pain. I've been struggling with similar issues on my basic work orders lately.
I can't share exact details due to confidentiality, but I've got a simple price sheet with about 50 labor items. It used to correctly map items from uploaded work orders, but now it consistently messes up prices and even fabricates line items. It's super frustrating.
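I can describe the shape of the cross-check, though. Something along these lines (made-up item names and a tiny price list, not my real sheet) is what catches the fabricated items and wrong prices:

```python
# Hypothetical sketch: validate AI-extracted line items against a known price list.
price_list = {
    "Labor - Standard": 85.00,
    "Labor - Overtime": 127.50,
    "Service Call": 120.00,
}

extracted_items = [
    {"item": "Labor - Standard", "unit_price": 85.00},
    {"item": "Labor - Overtime", "unit_price": 115.00},  # wrong price
    {"item": "Premium Labor",    "unit_price": 150.00},  # not on the price list
]

for line in extracted_items:
    name, price = line["item"], line["unit_price"]
    if name not in price_list:
        print(f"FABRICATED? '{name}' is not on the price list")
    elif abs(price_list[name] - price) > 0.005:
        print(f"WRONG PRICE: '{name}' should be {price_list[name]:.2f}, got {price:.2f}")
```

A few months ago it almost never tripped these checks; now nearly every upload does.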
I've been having the same struggles. I upload detailed project documents and get back a total mess—wrong information, fabrication, you name it. It's like the AI has taken a step back in capabilities.
It's exhausting, right? Makes you wonder if newer updates are just causing more confusion.
Exactly! I had much better accuracy a year ago. Now, I feel like I'm talking to a child.
It seems like they've changed something under the hood, and these downgrades are the result. I've heard it might be worth trying different settings, or even another model entirely, like Gemini.
Good idea! Maybe a different model would restore some functionality.
I haven't tried changing models yet, but I might just have to.
Is that a common problem? Do all models behave this way?