I'm curious about the current capabilities of AI detectors, particularly against o3 and o4-mini. These models seem almost completely undetectable right now: when a longer piece of text is run through tools like GPTZero or Winston, it scores only a 1-3% probability of being AI-generated. Does anyone have insight into when we might expect these detectors to become more effective at identifying output from these models?
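For reference, this is roughly how I'm scoring the text programmatically rather than pasting it into the web UI. The endpoint path, header, and response field names are my best guess at GPTZero's public v2 API, so treat them as assumptions and check the current docs before relying on them.

```python
# Minimal sketch: send a longer passage to a detector API and read back its
# AI-generated probability. Endpoint and field names below are assumptions
# about GPTZero's v2 API, not a verified interface.
import requests

API_KEY = "your-gptzero-api-key"  # placeholder


def check_text(text: str) -> float:
    """Return the detector's reported probability that `text` is AI-generated."""
    response = requests.post(
        "https://api.gptzero.me/v2/predict/text",  # assumed endpoint
        headers={"x-api-key": API_KEY},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()
    # Response key is an assumption: a document-level probability score.
    return data["documents"][0]["completely_generated_prob"]


if __name__ == "__main__":
    sample = "Paste the longer passage you want to check here."
    print(f"Reported AI probability: {check_text(sample):.1%}")
```

With o3 and o4-mini output, this kind of call is what keeps coming back in the 1-3% range for me.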
2 Answers
I’ve had my fair share of frustrations with these detectors. They sometimes label totally human-written content as AI, which can ruin careers and academic standings. What’s worse is that they might miss actual AI-generated work, and the inconsistency means running the same text can yield different results, which is just maddening. It feels like they do more harm than good.
Honestly, AI detectors are pretty flawed. They're not designed to catch every model right away, and they haven't been updated to recognize output from the newest releases yet, whether that's Gemini 2.5, o3, or o4-mini. It usually takes a while for detectors to be trained on a new model's outputs before they can flag them reliably.