2024-12-18
2024-09-23

PIRES, T. G. Análise de técnicas de ajuste fino em classificação de texto. 2024. 80 f. Dissertação (Mestrado em Ciência da Computação) - Instituto de Informática, Universidade Federal de Goiás, Goiânia, 2024.
http://repositorio.bc.ufg.br/tede/handle/tede/13751

Natural Language Processing (NLP) aims to develop models that enable computers to understand, interpret, process, and generate text in a way similar to human communication. The last decade has seen significant advances in the field, with the introduction of deep neural network models and the subsequent evolution of their architectures, such as the attention mechanism and the Transformer architecture, culminating in language models such as ELMo, BERT, and GPT. Later, models called Large Language Models (LLMs) further improved the ability to understand and generate text in sophisticated ways. Pre-trained models offer the advantage of reusing knowledge accumulated from vast datasets, although task-specific fine-tuning is still required. However, training and tuning these models consumes substantial processing resources, making it unfeasible for many organizations due to high costs. For resource-constrained environments, efficient fine-tuning techniques such as LoRA (Low-Rank Adaptation) were developed to optimize the model adaptation process, minimizing the number of trainable parameters and helping to avoid overfitting. These techniques allow faster and more economical training while maintaining the robustness and generalization of the models.
This work evaluates three efficient fine-tuning techniques, LoRA, AdaLoRA, and IA3 (in addition to full fine-tuning), in terms of memory consumption, training time, and accuracy, using the DistilBERT, RoBERTa-base, and TinyLlama models on different datasets (AG News, IMDb, and SNLI).

License: Attribution-NonCommercial-NoDerivatives 4.0 International
http://creativecommons.org/licenses/by-nc-nd/4.0/

Palavras-chave: Processamento de Linguagem Natural; Bidirectional Encoder Representations from Transformers; Embeddings from Language Models; Generative Pre-trained Transformer; Low-Rank Adaptation; Adaptive Low-Rank Adaptation; Internet Movie Database; Stanford Natural Language Inference

Keywords: Natural Language Processing; Large Language Models; Bidirectional Encoder Representations from Transformers; Embeddings from Language Models; Generative Pre-trained Transformer; Low-Rank Adaptation; Adaptive Low-Rank Adaptation; Internet Movie Database; Stanford Natural Language Inference

Subject: CIENCIAS EXATAS E DA TERRA::CIENCIA DA COMPUTACAO

Título: Análise de técnicas de ajuste fino em classificação de texto
Title: Analysis of fine-tuning techniques in text classification

Type: Dissertação
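The abstract's central claim, that LoRA minimizes the number of trainable parameters, can be illustrated with a simple count: LoRA freezes a weight matrix W of shape (d, k) and trains only a low-rank update B @ A, with B of shape (d, r) and A of shape (r, k). The sketch below is illustrative only (not taken from the dissertation); the dimensions assume a DistilBERT-sized 768x768 projection and a commonly used rank r = 8.

```python
# Illustrative sketch: trainable-parameter counts for full fine-tuning
# versus LoRA on a single weight matrix W of shape (d, k).
# LoRA trains only the factors B (d x r) and A (r x k); W stays frozen.

def full_ft_params(d: int, k: int) -> int:
    """Parameters updated when fine-tuning W directly."""
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    """Parameters in the low-rank factors B and A."""
    return r * (d + k)

# Hypothetical example: one 768 x 768 attention projection, rank 8.
d = k = 768
r = 8
print(full_ft_params(d, k))                          # 589824
print(lora_params(d, k, r))                          # 12288
print(lora_params(d, k, r) / full_ft_params(d, k))   # ~0.0208, about 2%
```

For small ranks, the trainable-parameter count grows linearly in (d + k) instead of quadratically in d * k, which is what makes the faster and cheaper training described in the abstract possible.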