Hi all, I currently work for a company in a role somewhere between a data analyst and a data scientist. I've recently been tasked with creating a model/algorithm to help classify our help desk's chat data. The goal is to build a model that can properly identify and label the reason the customer is contacting our help desk (delivery issue, unapproved charge, refund request, etc.).

This is my first time working on a project like this. I understand the overall steps to be: get a copy of a bunch of these chat logs, label the reason the customer is reaching out, train a model on the labeled data, and then apply it to a test set that was held out from the training data. But I'm a little fuzzy on the specifics. This is supposed to be a learning opportunity for me, so it's okay that I don't know everything going into it, but I was hoping those of you with more experience could give me some advice on how to get started, tell me if my understanding of the process is off, point out potential pitfalls, or, perhaps most helpful of all, share any good resources that helped you learn how to do tasks like this. Any help or advice is greatly appreciated!
If you just need to classify them against a few pre-determined labels, you can do that a few ways. LLMs will do a pretty good job out of the box if you structure your prompts clearly, though depending on how many rows of data you have and how long the texts are, this could get a bit costly.
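For example, here's a minimal sketch of the prompt-based route, assuming the OpenAI Python client; the model name and the label set are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

LABELS = ["delivery issue", "unapproved charge", "refund request", "other"]

def classify(chat_log: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify this help desk chat into exactly one of: "
                        + ", ".join(LABELS) + ". Reply with the label only."},
            {"role": "user", "content": chat_log},
        ],
    )
    return resp.choices[0].message.content.strip()

print(classify("Hi, I ordered a week ago and tracking still shows nothing."))
```

Constraining the model to a fixed label list and asking for the label only makes the outputs easy to parse and audit downstream.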
There are a few classical ways to do this too. They take a few more steps to accomplish, but they'll be less costly and you'll learn some fun stuff about NLP along the way. I'd probably start by taking a sample and manually labeling the text per your company's desired labels. Do the usual preprocessing steps (tokenize, lowercase, remove stop words, stem or lemmatize, and strip special characters). Then get some features out of the text via bag of words or TF-IDF. Train a simple model like logistic regression to map your features to your manual labels. If it works reliably enough for your purpose, test the model on a new chunk of data to see how well it predicts the labels for unseen text.
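A minimal sketch of that pipeline, assuming scikit-learn; the texts and labels here are made-up stand-ins for your manually labeled sample:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Stand-in data; replace with your manually labeled chat sample.
texts = [
    "where is my package", "tracking hasn't updated in days",
    "i was charged twice", "there's a charge i never approved",
    "please refund my order", "i want my money back",
]
labels = [
    "delivery issue", "delivery issue",
    "unapproved charge", "unapproved charge",
    "refund request", "refund request",
]

# Hold out part of the labeled data so the test set stays unseen.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=42)

# TfidfVectorizer handles tokenizing, lowercasing, and stop-word removal;
# stemming/lemmatization would need an extra library like NLTK or spaCy.
clf = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Bundling the vectorizer and classifier into one pipeline means the held-out text goes through exactly the same preprocessing at prediction time, which avoids leaking test data into your features.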
Yeah, I'm hoping to accomplish this without using an LLM because I find this topic really interesting and want to actually learn how to tackle it rather than just throwing a pre-built tool at it.
Awesome, sounds like a pretty solid roadmap, appreciate the insight!
We’ve been spoilt by LLMs in many respects. I wish you luck!
An alternative halfway between the two could be to use an LLM embedding model to generate vector embeddings, then do the rest yourself. Nomic AI's embedding models are trained for search, classification, and clustering tasks, so you could implement your own vector search, clustering, and/or classification on top of the outputs, something like the sketch below. You'd lose out on learning the initial classical NLP stuff, but as a trade-off you'd gain a bit more understanding of how LLMs work under the hood.
(Also, quantized nomic embedding models are small and very fast. I run them on my terrible old laptop with no performance issues)
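Here's a rough sketch of that middle path, assuming the sentence-transformers library and the nomic-ai/nomic-embed-text-v1.5 model from Hugging Face; as I recall, Nomic's models expect a task prefix like "classification: " on each input, so I've included that:

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# trust_remote_code is required for the Nomic model architecture.
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5",
                            trust_remote_code=True)

texts = ["where is my package", "i was charged twice",
         "please refund my order"]                      # placeholder data
labels = ["delivery issue", "unapproved charge", "refund request"]

# Nomic embedding models expect a task prefix on every input.
embed = lambda ts: model.encode([f"classification: {t}" for t in ts])

# Train a simple classifier directly on the embedding vectors.
clf = LogisticRegression(max_iter=1000).fit(embed(texts), labels)
print(clf.predict(embed(["i want my money back"])))
```

The nice part is that the embedding model does all the text understanding, so the classifier on top can stay as simple as logistic regression.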
Does your organization have any kind of hardware for inference? If so, a <12B LLM would work well for you. You could also use a BERT model, but you would need to make sure the training data is cleaned, labeled, and organized. You can get decent results from BERT if you have sufficient high-quality data; if not, an LLM will be more dependable on low-quality data. I'd consider BERT or Llama-3.1-8B; a bi-directional LSTM is another option, though I think BERT is the better choice.
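If you go the BERT route, fine-tuning looks roughly like this, assuming the Hugging Face transformers and datasets libraries; the labels, data, and hyperparameters are all placeholders:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

label2id = {"delivery issue": 0, "unapproved charge": 1, "refund request": 2}

# Your cleaned, labeled chats go here; this tiny set is just a placeholder.
ds = Dataset.from_dict({
    "text": ["where is my package", "i was charged twice", "refund my order"],
    "label": [0, 1, 2],
})

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = ds.map(lambda batch: tok(batch["text"], truncation=True,
                              padding="max_length", max_length=128),
            batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(label2id))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-helpdesk",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=ds,
)
trainer.train()
```

You'd want a held-out eval set and far more labeled examples than this in practice, but the structure stays the same.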