Hello! I am attempting Aspect-Based Sentiment Analysis (ABSA) with a variety of models, including RoBERTa and BERT. To determine how similar words are to my three custom categories, I need to connect the ABSA output with those categories and the sentiments using word embeddings. I know I could fine-tune the model to improve its performance and gain more control over it, but is there any way to accomplish this quickly, given that training the pre-trained model is not an option?
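For context, the embedding-similarity idea described above can be sketched without any training: embed each aspect term and each category, then assign the term to the category with the highest cosine similarity. The toy 3-d vectors and category names below are placeholders; in practice you would pull the vectors from RoBERTa/BERT or a sentence-embedding model.

```python
import math

# Toy 3-d embeddings standing in for real model embeddings
# (in practice, take these from BERT/RoBERTa or similar).
EMBEDDINGS = {
    "pizza":   [0.9, 0.1, 0.0],
    "waiter":  [0.1, 0.8, 0.1],
    "cheap":   [0.0, 0.1, 0.9],
    "food":    [1.0, 0.0, 0.0],
    "service": [0.0, 1.0, 0.0],
    "price":   [0.0, 0.0, 1.0],
}
CATEGORIES = ["food", "service", "price"]

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_category(term):
    """Assign an aspect term to its most similar category by cosine similarity."""
    vec = EMBEDDINGS[term]
    return max(CATEGORIES, key=lambda c: cosine(vec, EMBEDDINGS[c]))
```

With the toy vectors above, `nearest_category("pizza")` resolves to `"food"`, since the "pizza" vector is closest to the "food" axis.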
Why limit it to just 3 categories?
Using a large language model (LLM) with an effective prompt could provide faster results without the need for pre-processing, labeling, or other preliminary steps.
Here is one approach:
You can leverage the tool-calling feature. This involves defining a structured schema for the desired output, such as identifying sentiments related to food, service, price, and other aspects in customer reviews.
By providing a prompt and a predefined schema, the LLM can generate responses that fit this structure, ensuring accurate extraction of relevant information.
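A minimal, provider-agnostic sketch of such a schema using only the standard library (the tool name, field names, and enum values are illustrative; tool-calling APIs generally accept a JSON-Schema structure along these lines):

```python
import json

# JSON-Schema-style tool definition the LLM is asked to fill in.
# The category list (food/service/price) matches the example above;
# the rest of the field names are illustrative assumptions.
extract_absa_tool = {
    "name": "extract_aspect_sentiments",
    "description": "Extract aspect-based sentiments from a customer review.",
    "parameters": {
        "type": "object",
        "properties": {
            "aspects": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "category": {
                            "type": "string",
                            "enum": ["food", "service", "price"],
                        },
                        "sentiment": {
                            "type": "string",
                            "enum": ["positive", "neutral", "negative"],
                        },
                        "evidence": {"type": "string"},
                    },
                    "required": ["category", "sentiment"],
                },
            }
        },
        "required": ["aspects"],
    },
}

def validate_response(payload: dict) -> bool:
    """Light sanity check that a model response respects the schema's enums."""
    cats = {"food", "service", "price"}
    sents = {"positive", "neutral", "negative"}
    return all(
        a.get("category") in cats and a.get("sentiment") in sents
        for a in payload.get("aspects", [])
    )

# Example of a response shaped by the schema:
sample = {"aspects": [{"category": "food", "sentiment": "positive",
                       "evidence": "The pizza was amazing"}]}
```

Because the model's output is constrained to this structure, downstream code can parse it with `json.loads` instead of scraping free-form text.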
Frameworks like LangChain abstract over different LLMs and their distinct tool-calling conventions, giving you consistent, structured extraction without the ambiguity of free-form text.
This method also helps reduce hallucination by the LLM, ensuring it sticks to the provided schema.