I've been wondering whether OpenAI's ChatGPT or similar systems can actually recognize or remember what people look like. I have some serious concerns after a few experiences I had while using it. Here's the situation: I used an image-sharing feature on another platform for some editing, and later, when I uploaded different photos to ChatGPT for analysis, it specifically said, "Thank you for sharing your picture. You look like..." That got me thinking: it seemed to recognize my face. Even after I deleted my chats and images and opted out of sharing my conversations for training, it still referred to that photo as my own. I even tested it with random pictures first, and every time my own image came up, it responded the same way. It all seems suspicious, especially since OpenAI claims not to use facial recognition algorithms. I'm concerned about how this works and what it means for user privacy. Am I overreacting? Does anyone else feel uneasy about this?
3 Answers
I get where you're coming from, and it's definitely unsettling. But keep in mind that it may just be responding to patterns from your earlier messages or to context you set within the same conversation. It's not like it has access to a database of your images or anything.
Honestly, it only knows what you share with it in the current conversation. If it's commenting on your appearance, it's probably picking up on information you provided or making assumptions from context, not storing or recognizing your face in any traditional sense. Try not to worry too much about it!
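One rough way to picture what "it only knows what you share with it" means: each request to a chat API is typically stateless, so the model only sees the messages included in that specific request. This toy Python sketch is just an illustration of that principle, not the real model or API; the function name and reply strings are made up:

```python
def stateless_reply(messages):
    """Pretend chat model: answers based solely on the supplied context.

    `messages` mimics a chat API's message list: dicts with "role" and
    "content". Nothing persists between calls -- there is no hidden state.
    """
    user_context = [m["content"] for m in messages if m["role"] == "user"]
    if any("photo" in text for text in user_context):
        return "Thank you for sharing your picture."
    return "I have no information about you from this conversation."

# Two independent "conversations": the second has no memory of the first.
first = stateless_reply([{"role": "user", "content": "Here is my photo."}])
second = stateless_reply([{"role": "user", "content": "Who am I?"}])
```

Under this model, once a chat is deleted (and its contents aren't carried into a new request), there is nothing for the model to "remember" — so a recurring comment about your appearance would have to come from context within the same session, not from a stored face profile.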
Facial recognition tech is real and can be concerning, but from what I've seen, ChatGPT's replies are driven by its training data and the current conversation rather than by identifying specific people. It does process the images you upload, but that doesn't mean it knows you personally. Still, privacy should be a big concern!