Hey everyone! I'm building an app that surfaces texts expressing negative emotions or pain points. I'm using a HuggingFace text classification model to filter texts by emotion, but these models seem to work best on short sentences rather than the longer paragraphs I need to analyze. Should I stick with this kind of model, or switch to an LLM for detecting pain points and negative emotions? Specifically, could something like ChatGPT filter the data for me? I have a dataset of about 1000 texts, and I'm looking for a way to get back an array of the ones that show signs of anger or frustration. Any insights on whether an LLM can handle this, and how efficiently it can process longer texts?
2 Answers
For analyzing the sentiment of long paragraphs, traditional models are usually efficient and cost-effective, particularly at your dataset size. If you want to extract specific complaint points, LLMs can provide more depth, though for only ~1000 texts they may be overkill. That said, API costs at that scale would be modest, so if output quality is your priority, the spend is relatively small.
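If you stay with a traditional classifier, the usual workaround for long paragraphs is to chunk them before scoring. Here's a minimal sketch of that idea; `chunk_text`, `filter_negative`, and `toy_classify` are illustrative names I made up, and the toy keyword classifier is just a stand-in for a real HuggingFace pipeline call:

```python
import re

def chunk_text(text, max_words=60):
    """Split a paragraph into sentence-based chunks of at most max_words words."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for s in sentences:
        words = s.split()
        if count + len(words) > max_words and current:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.extend(words)
        count += len(words)
    if current:
        chunks.append(" ".join(current))
    return chunks

def filter_negative(texts, classify, labels=("anger", "frustration"), threshold=0.5):
    """Keep a text if any of its chunks scores above threshold on a target label."""
    hits = []
    for text in texts:
        for chunk in chunk_text(text):
            result = classify(chunk)  # e.g. {"label": "anger", "score": 0.93}
            if result["label"] in labels and result["score"] >= threshold:
                hits.append(text)
                break
    return hits

# Toy stand-in classifier for demonstration; a real one would be a model,
# e.g. transformers.pipeline("text-classification", model=...).
def toy_classify(chunk):
    angry = any(w in chunk.lower() for w in ("terrible", "refund", "broken"))
    return {"label": "anger" if angry else "neutral", "score": 0.9 if angry else 0.8}

docs = [
    "The checkout flow is terrible. I want a refund immediately.",
    "Shipping was quick and the packaging was nice.",
]
print(filter_negative(docs, toy_classify))  # keeps only the first document
```

Scoring chunk-by-chunk and keeping a document on its worst (angriest) chunk sidesteps the input-length limits you noticed.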
Using both traditional NLP and LLMs together can be a good strategy. Traditional models give consistent results and are easier to debug when something looks off, while LLMs excel at understanding context and extracting detailed emotional content. For reliable results, require both systems to agree before finalizing the output.
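One way to implement that agreement step, sketched with hypothetical stand-in predicates (in practice `fast_is_negative` would wrap a HuggingFace pipeline and `llm_is_negative` an LLM API call):

```python
def agreed_negative(texts, fast_is_negative, llm_is_negative):
    """Keep a text only when both classifiers flag it as negative."""
    flagged = []
    for text in texts:
        if fast_is_negative(text):      # cheap model screens everything first...
            if llm_is_negative(text):   # ...the LLM only confirms candidates
                flagged.append(text)
    return flagged

# Keyword stubs standing in for the two real classifiers:
def fast_is_negative(text):
    return "angry" in text.lower() or "broken" in text.lower()

def llm_is_negative(text):
    return "broken" in text.lower()

texts = [
    "This is broken and I am angry.",
    "I am angry but it works.",
    "All good.",
]
print(agreed_negative(texts, fast_is_negative, llm_is_negative))
# → ['This is broken and I am angry.']
```

Running the cheap model first also keeps costs down: the LLM is only called on the subset the fast classifier already flagged.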
I see what you mean! If extraction quality is key, then an LLM might be the better option, but I'm worried about processing speed when handling so many texts. Would sending 1000+ paragraphs in one go slow things down?