Doutorado em Ciência da Computação

Recent Submissions

Now showing 1 - 20 of 42
  • Item
    Alocação dinâmica de recursos em fatias de redes IoT não-3GPP envolvendo VANTs
    (Universidade Federal de Goiás, 2025-04-22) Silva, Rogério Sousa e; Oliveira Júnior, Antonio Carlos de; http://lattes.cnpq.br/3148813459575445; Cardoso, Kleber Vieira; http://lattes.cnpq.br/0268732896111424; Cardoso, Kleber Vieira; http://lattes.cnpq.br/0268732896111424; Oliveira Júnior, Antonio Carlos de; http://lattes.cnpq.br/3148813459575445; Abelém, Antônio Jorge Gomes; http://lattes.cnpq.br/5376253015721742; Both, Cristiano Bonato; http://lattes.cnpq.br/2658002010026792; Rocha, Flávio Geraldo Coelho; http://lattes.cnpq.br/5583470206347446
    The exponential growth of the Internet of Things (IoT) has introduced increasing challenges to communication infrastructures, particularly in critical scenarios such as natural disasters and densely populated events, where network overload compromises service continuity and quality. In this context, this thesis presents a hybrid approach for dynamic resource allocation in non-3GPP IoT networks, integrating network slicing (NS), heterogeneous access (Multi-RAT), and unmanned aerial vehicles (UAVs) equipped with LoRaWAN gateways. The proposed hybrid approach synergistically combines the precision of exact optimization methods based on Mixed Integer Linear Programming (MILP), employed for determining the optimal initial positioning of UAVs, with the adaptive flexibility of advanced Deep Reinforcement Learning (DRL) algorithms, which enable dynamic and autonomous repositioning in variable environments. The first stage aims to minimize operational and deployment costs while maximizing Quality of Service (QoS), while the second stage facilitates the autonomous repositioning of UAVs in response to environmental changes and fluctuations in network demand. We develop and assess four DRL algorithms: SR-DQN, DA-DDDQN, NSE-A2C, and RG2E-PPO. The proposed solutions were validated through realistic simulations using the ns-3 network simulator, in customized scenarios with non-3GPP connectivity. Results demonstrated significant improvements in QoS, reduced number of deployed UAVs, enhanced decision robustness, and increased spectral efficiency, with notable performance from the NSE-A2C and RG2E-PPO algorithms. The hybrid approach enables the creation of a mobile, scalable, and resilient communication infrastructure capable of autonomously and efficiently addressing the specific requirements of diverse IoT applications, particularly in urban and emergency environments with critical connectivity constraints.
This thesis contributes to the state of the art by proposing a replicable, sustainable, and service-oriented hybrid architecture for reliable communication in heterogeneous and dynamic networks based on unlicensed technologies. Potential applications include smart cities, disaster response, and temporary connectivity deployment in degraded or non-existent infrastructure scenarios.
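The first (exact) stage described in this abstract is, at its core, a coverage-placement optimization: choose the fewest UAV positions such that every sensor lies within gateway range. A minimal brute-force sketch of that idea, with entirely hypothetical coordinates and coverage radius (the thesis uses a full MILP formulation, which is not reproduced here):

```python
from itertools import combinations
from math import dist

# Hypothetical toy instance: candidate UAV hover points and IoT sensor positions.
candidates = [(0, 0), (4, 0), (2, 3), (5, 4)]
sensors = [(1, 0), (3, 1), (2, 4), (5, 3)]
RANGE = 2.5  # assumed LoRaWAN gateway coverage radius (arbitrary units)

def covers(uav, sensor):
    return dist(uav, sensor) <= RANGE

def min_uav_placement(candidates, sensors):
    """Smallest subset of candidate positions covering every sensor."""
    for k in range(1, len(candidates) + 1):
        for subset in combinations(candidates, k):
            if all(any(covers(u, s) for u in subset) for s in sensors):
                return list(subset)
    return None  # no feasible placement

print(min_uav_placement(candidates, sensors))
```

A real MILP solver scales far beyond this exhaustive search; the sketch only illustrates the first-stage objective of minimizing the number of deployed UAVs subject to coverage.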
  • Item
    Decomposição de tarefas para problemas de linguagem natural: segmentação de hashtags e anotação de texto argumentativo
    (Universidade Federal de Goiás, 2025-04-24) Inuzuka, Marcelo Akira; Silva, Nádia Félix Felipe da; http://lattes.cnpq.br/7864834001694765; Nascimento, Hugo Alexandre Dantas do; http://lattes.cnpq.br/2920005922426876; Nascimento, Hugo Alexandre Dantas do; http://lattes.cnpq.br/2920005922426876; Martins, Wellington Santos; http://lattes.cnpq.br/3041686206689904; Dias, Márcio de Souza; http://lattes.cnpq.br/0095510023252013; Alencar, Wanderley de Souza; http://lattes.cnpq.br/5491185436975801; Rosa, Thierson Couto; http://lattes.cnpq.br/4414718560764818
    Corpus annotation is essential for training Natural Language Processing (NLP) models, yet it faces challenges such as high cognitive complexity, annotator inconsistency, and elevated costs. This thesis proposes task decomposition as a methodological strategy to modularize complex NLP processes, promoting greater conceptual clarity, scalability, and reproducibility. Initially focused on Argument Mapping, the research redirected its scope due to the infeasibility of the original task, concentrating on the identification of reusable patterns applicable to annotation and automation stages. Guidelines, a hierarchical decomposition algorithm, and artifacts such as annotated datasets and the Argmap platform, which supports collaborative annotation with quality control, were developed. The approach was validated through three empirical case studies: hashtag segmentation, keyphrase curation, and annotation of argumentative structures. Results demonstrate that decomposition improves consistency among agents (human or automatic), guideline clarity, and automation feasibility. The thesis also introduces the Recruiter–Selector architectural pattern, which structures tasks into two independent modules, candidate generation and final selection, applicable to both annotation workflows and algorithms based on Large Language Models (LLMs). It concludes that decomposition driven by reusable patterns enhances efficiency and reliability in corpus construction and the development of robust NLP systems, contributing to the systematization of annotation processes and their integration with automatic solutions.
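The Recruiter–Selector pattern named in this abstract separates candidate generation from final selection. A minimal sketch on the hashtag-segmentation case study, with a hypothetical lexicon-based selector (the thesis's actual modules may differ):

```python
# Recruiter–Selector sketch: candidate generation decoupled from selection.
def recruiter(hashtag):
    """Generate the raw form plus every binary split as candidate segmentations."""
    word = hashtag.lstrip("#")
    return [word] + [f"{word[:i]} {word[i:]}" for i in range(1, len(word))]

def selector(candidates, lexicon):
    """Pick the first candidate whose tokens are all known words, else the raw form."""
    for cand in candidates:
        if all(tok in lexicon for tok in cand.split()):
            return cand
    return candidates[0]

lexicon = {"climate", "action", "now"}
print(selector(recruiter("#climateaction"), lexicon))  # "climate action"
```

Because the two modules are independent, either one can be replaced, e.g. swapping the lexicon selector for an LLM-based one, without touching the recruiter.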
  • Item
    Preenchendo a Lacuna da Supervisão Humana na Robótica por Meio da Linguagem: Uma IA Agêntica Multimodal para Escalar a Supervisão Humana em Sistemas Autônomos
    (Universidade Federal de Goiás, 2025-04-22) Santos, Lara Fernanda Portilho dos; Soares, Anderson da Silva; Soares, Anderson Da Silva; Laureano, Gustavo Teodoro; Salazar, Aldo Andre Diaz; Colombini, Esther Luna; Vieira, Flavio Henrique Teles
    While many autonomous systems can navigate and avoid obstacles under predictable conditions, they often rely on a human supervisor (Human-in-the-Loop - HITL) to adapt to adverse obstacles, sudden layout modifications, or partial hardware failures. However, existing HITL strategies frequently leave operators struggling with large volumes of data that demand real-time interpretation. To mitigate these challenges, we propose an agentic AI approach that integrates long-term memory with adaptive reasoning techniques, thereby reducing operator workload and minimizing disruptions in dynamic autonomous robotics operations. The proposed system incorporates hierarchical subagents to systematically integrate historical data, sensor logs, and iterative problem-solving techniques to address frequent challenges in multi-robot deployments, including localization failures, hardware malfunctions, and crowd-induced obstacles. Experimental evaluations comparing memory-augmented and baseline (no-memory) conditions reveal that memory augmentation consistently yields higher solution accuracy and operator satisfaction. In particular, memory retrieval accelerates the resolution of recurring failure modes, while adaptive reasoning enhances real-time decision-making in novel or crowded scenarios. Text-based similarity metrics (Token Overlap and Semantic Alignment) further demonstrate that reusing verified domain language and strategies improves the clarity and maintainability of the recommended actions. The results underscore the viability of a modular, language-based system that combines data-driven diagnostics, robust memory mechanisms, and self-reflective planning for large-scale robot supervision. By uniting flexible LLM capabilities with HITL workflows, our proposal holds considerable promise for improving both efficiency and transparency in real-world autonomous robotics operations.
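Of the two text-based similarity metrics mentioned, Token Overlap admits a simple reading as Jaccard similarity over token sets; a minimal sketch assuming that reading (the thesis's exact formulation may differ):

```python
def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two texts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0  # two empty texts are trivially identical
    return len(ta & tb) / len(ta | tb)

# Hypothetical recommended-action strings from two supervision episodes.
print(token_overlap("restart the lidar driver", "restart the camera driver"))
```

A higher score indicates that the agent is reusing verified domain language rather than rephrasing each recommendation from scratch.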
  • Item
    MDMWare: model-driven domain-specific middleware for smart cities
    (Universidade Federal de Goiás, 2025-03-10) Melo, Paulo César Ferreira; Costa, Fábio Moreira; http://lattes.cnpq.br/0925150626762308; Costa, Fábio Moreira; Sampaio Junior, Adalberto Ribeiro; Lucrédio, Daniel; Carvalho, Sérgio Teixeira de; Graciano Neto, Valdemar Vicente
    Embargoed
  • Item
    Técnicas de reamostragem e super-resolução em imagens de culturas agrícolas
    (Universidade Federal de Goiás, 2025-02-28) Nogueira, Emília Alves; Soares, Fabrízzio Alphonsus Alves de Melo Nunes; http://lattes.cnpq.br/7206645857721831; Soares, Fabrizzio Alphonsus Alves de Melo Nunes; Pedrini, Helio; Cabacinha, Christian Dias; Costa, Ronaldo Martins da; Fernandes, Deborah Silva Alves
    The increasing demand for food, coupled with climate change, has driven the development of agricultural monitoring technologies to increase the efficiency and sustainability of crop production such as sugarcane and corn. However, the low resolution of images captured by Unmanned Aerial Vehicles (UAVs) and satellites limits the detailed analysis of essential agronomic features. This thesis investigates methods to improve the resolution of agricultural images, comparing Traditional Resampling Techniques (TRT) with Super-Resolution with Deep Networks (SRDN) algorithms, such as Real Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN), Multi-Level upscaling Transform (MuLUT) and Learning Resampling Function (LeRF). The aim of this study is to investigate the application of deep learning techniques to improve the resolution of agricultural images. For this purpose, existing methods were reviewed and an agricultural dataset was prepared. The research adopted an experimental approach, evaluating the methods quantitatively using metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), and qualitatively by visual analysis. The experiments demonstrate significant improvements in image resolution using the SRDN algorithms compared to TRT, with gains of 484.34% in sugarcane images, 234.4% in corn, and 58.57% in satellite images. Although the SRDN techniques were developed for other purposes, such as improving the resolution of images of people and anime, they also perform well on agricultural images. The results obtained are significant for precision agriculture, since the increase in image resolution can aid in monitoring plant growth and health, providing faster and more effective interventions. In future investigations, we hope to expand the comparisons with other SRDN algorithms.
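PSNR, one of the two quantitative metrics used in the evaluation, has a standard definition: 10 * log10(MAX^2 / MSE). A minimal sketch over flat pixel lists, with toy values not taken from the dataset:

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equal-sized grayscale images
    given as flat lists of pixel intensities."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Toy 2x2 images: original vs. a super-resolved reconstruction.
a = [52, 55, 61, 59]
b = [54, 55, 60, 61]
print(round(psnr(a, b), 2))
```

Higher is better; SSIM, the study's other metric, additionally accounts for structural similarity rather than raw pixel error.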
  • Item
    Aplicação de CNN e LLM na Localização de Defeitos de Software
    (Universidade Federal de Goiás, 2024-10-16) Basílio Neto, Altino Dantas; Camilo Júnior, Celso Gonçalves; http://lattes.cnpq.br/6776569904919279; Camilo Junior, Celso Gonçalves; Leitão Júnior , Plínio de Sá; Oliveira, Sávio Salvarino Teles de; Vincenzi, Auri Marcelo Rizzo; Souza, Jerffeson Teixeira de
    The increase in the quantity and complexity of computational systems has led to a growth in the occurrence of software defects. The industry invests significant amounts in code debugging, and a considerable portion of the cost is associated with the task of locating the element responsible for the defect. Automated techniques for fault localization have been widely explored, with recent advances driven by the use of deep learning models that combine different types of information about defective source code. However, the accuracy of these techniques still has room for improvement, suggesting open challenges in the field. This work aims to formalize and investigate the most impactful aspects of fault localization techniques, proposing a framework for characterizing approaches to the problem and two solution methodologies: a) based on convolutional neural networks (CNNs) and b) based on large language models (LLMs). Experiments on public Java and Python datasets demonstrated that CNNs are comparable to traditional methods but inferior to other methods in the literature. The LLM-based approach, on the other hand, greatly outperformed heuristics like Ochiai and Tarantula and proved competitive with more recent literature. An experiment in a scenario free from the data leakage problem showed that LLM-based approaches can be improved by combining them with the Ochiai heuristic.
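The Ochiai and Tarantula heuristics cited as baselines have standard spectrum-based formulations over per-element test coverage counts; a minimal sketch with toy counts, not from the thesis's datasets:

```python
import math

def ochiai(ef, ep, nf):
    """Ochiai suspiciousness. ef: failing tests covering the element;
    ep: passing tests covering it; nf: failing tests not covering it."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

def tarantula(ef, ep, total_failed, total_passed):
    """Tarantula suspiciousness from coverage counts and suite totals."""
    if ef == 0:
        return 0.0
    f = ef / total_failed
    p = ep / total_passed if total_passed else 0.0
    return f / (f + p)

# Toy element: covered by 3 of 4 failing tests and 2 of 10 passing tests.
print(ochiai(ef=3, ep=2, nf=1))
print(tarantula(ef=3, ep=2, total_failed=4, total_passed=10))
```

Elements are ranked by suspiciousness; the thesis's LLM-based approach is reported to improve when combined with the Ochiai score.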
  • Item
    Deep Learning aplicado à classificação em nível de pixel de variedades de culturas por imagens multiespectrais
    (Universidade Federal de Goiás, 2024-11-06) Kai, Priscila Marques; Oliveira, Bruna Mendes de; Costa, Ronaldo Martins da; http://lattes.cnpq.br/7080590204832262; Costa, Ronaldo Martins da; Soares, Fabrízzio Alphonsus Alves de Melo Nunes; Leitão Júnior, Plínio de Sá; Arraut, Eduardo Moraes; Costa, Kelton Augusto Pontara da
    The classification of different crop varieties still faces significant challenges due to their similar spectral characteristics. To address this issue, the integration of remote sensing techniques with deep learning methods offers a promising solution by analyzing pixel-level data based on spectral bands, band combinations, and vegetation indices. In this study, we developed a cross-deep neural network methodology, referred to as DCN-S, with a case study focused on the classification of sugarcane varieties. The methodology was applied to remote sensing data from cultivation areas in the state of Goiás, Brazil, collected between 2019 and 2021. The DCN-S model was compared with traditional classifiers, such as k-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Random Forest, as well as other neural network configurations. The results indicated that the DCN-S model achieved competitive accuracy in validation scenarios, including temporal variety considerations when compared to other studies in the literature. Moreover, the model excelled in classifying varieties without requiring the separation of developmental stages, surpassing traditional methods. Performance improvements were further observed after applying a voting process. Finally, this work’s main contributions include developing an approach for classifying agricultural varieties by combining deep learning with remote sensing data and validating this methodology in a practical scenario. The results highlight the potential of the DCN-S model to outperform traditional techniques, offering a tool for automated agricultural monitoring.
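The voting process mentioned in this abstract can be read, in its simplest form, as majority voting over repeated per-pixel predictions; a minimal sketch with hypothetical variety labels (the thesis's actual aggregation scheme may differ):

```python
from collections import Counter

def majority_vote(predictions):
    """Aggregate class predictions for one pixel from several runs by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical model runs predicting the variety of a single pixel.
print(majority_vote(["CTC4", "CTC4", "RB966928"]))  # "CTC4"
```

Voting smooths out per-run noise, which is one plausible reason it improves pixel-level accuracy.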
  • Item
    Métodos para Análise de Dano Foliar e Reconhecimento de Pragas na Agricultura Usando Técnicas Computacionais
    (Universidade Federal de Goiás, 2024-08-02) Vieira, Gabriel da Silva; Soares, Fabrizzio Alphonsus Alves De Melo Nunes; http://lattes.cnpq.br/7206645857721831; Soares, Fabrizzio Alphonsus Alves De Melo Nunes; Cabacinha, Christian Dias; Pedrini, Helio; Laureano, Gustavo Teodoro; Costa, Ronaldo Martins Da
    The application of computer techniques in agriculture has significantly improved rural activities, particularly crop monitoring, plant protection, and overall yield. This thesis emphasizes leaf analysis as a valuable tool for inspecting and continually improving plantations, as well as supporting decision-making and agricultural management interventions. Changes in leaves can lead to irreparable losses in productivity, the delivery of low-quality products, and significant economic impacts. To prevent production failures, it is crucial to efficiently monitor and identify whether pests are affecting productivity or remaining within acceptable levels. However, damage to the leaf silhouette can limit automated analysis, and the diversity in leaf shape and damage levels makes it challenging to delineate the compromised edge regions. This study introduces original computer-based methods for defoliation estimate, damage detection, leaf surface reconstruction, and pest classification that are prepared to address damage to the leaf boundaries. Notable aspects of this study include template matching for pattern recognition and pest classification using only traces of leaf damage. The methodological design of the study consists of a literature review, investigation of digital image processing techniques, computer vision and machine learning, software development, and formulation of experimental tests. The results indicate high accuracy in estimating leaf area loss with a linear correlation of 0.98, damage detection and pest classification with assertiveness above 90%, and visual restoration of regions affected by herbivory with SSIM scores between 0.68 and 0.94.
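Defoliation estimation, the first of the methods listed in this abstract, reduces in its simplest form to comparing leaf areas before and after damage. A minimal sketch over binary masks, assuming an intact-leaf reference is available (the thesis reconstructs the leaf surface to obtain that reference; this shows only the underlying arithmetic):

```python
def defoliation_percent(intact_mask, damaged_mask):
    """Percent leaf area lost, from binary masks (1 = leaf pixel, 0 = background)."""
    intact = sum(sum(row) for row in intact_mask)
    damaged = sum(sum(row) for row in damaged_mask)
    return 100.0 * (intact - damaged) / intact

# Toy 2x4 masks: a full leaf and the same leaf after herbivory.
intact = [[1, 1, 1, 1],
          [1, 1, 1, 1]]
damaged = [[1, 1, 0, 0],
           [1, 1, 1, 0]]
print(defoliation_percent(intact, damaged))  # 37.5
```

The hard part, as the abstract notes, is that damage on the leaf boundary makes the intact silhouette itself unknown, which is why surface reconstruction precedes this area comparison.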
  • Item
    Redes de telecomunicações convergentes: modelagem e implementação de arquitetura para infraestruturas pós-5G
    (Universidade Federal de Goiás, 2024-09-25) Macedo, Ciro José Almeida; Both, Cristiano Bonato; http://lattes.cnpq.br/2658002010026792; Cardoso, Kleber Vieira; http://lattes.cnpq.br/0268732896111424; Cardoso, Kleber Vieira; Martins, Joberto Sérgio Barbosa; Oliveira Júnior, Antonio Carlos de; Klautau Júnior, Aldebaro Barreto da Rocha; Alberti, Antônio Marcos
    The evolution of cellular mobile networks has been guided by directives specified by institutions such as 3GPP, aimed at supporting demanding services. Simultaneously, non-3GPP wireless communication technologies have also evolved and play a significant role in various contexts. These technologies are essential for connectivity in diverse scenarios where long-range communication and low power consumption are crucial. Recent studies have shown that the integration and harmonious coexistence of 3GPP and non-3GPP technologies are vital in the context of post-5G networks, enhancing ubiquitous and seamless connectivity. In this context, the present thesis investigated the feasibility of converging non-3GPP communication technologies with the 5G core, dividing the investigation into two phases. In the first phase, an architecture was proposed to integrate these technologies. In the second phase, a functional prototype of this architecture was built to conduct experiments demonstrating its viability in different use cases. The thesis conducted a detailed technical analysis, offering a comprehensive view of the benefits of convergence for consumers and infrastructure providers. Significant gaps were identified that still need to be addressed in post-5G/6G networks, such as the current inability to monitor non-3GPP networks by the 5G infrastructure operator. Some of these gaps were explored and investigated in the context of the solution proposed in this thesis.
  • Item
    MemoryGraph: uma proposta de memória para agentes conversacionais utilizando grafo de conhecimento
    (Universidade Federal de Goiás, 2024-09-25) Oliveira, Vinicius Paulo Lopes de; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Galvão Filho, Arlindo Rodrigues; Silva, Nadia Felix Felipe da; Pereira, Fabíola Souza Fernandes; Fanucchi, Rodrigo Zempullski
    With the advancement of large language models applied to natural language processing, many proposals have become viable, such as the use of conversational agents applied to various everyday tasks. However, these models still have limitations in both the integration of new knowledge and the representation and retrieval of that knowledge, being constrained by costs, execution time, and training. Furthermore, their black-box nature prevents the direct manipulation of knowledge, mainly due to the vector representation that indirectly represents it, making the control and explanation of their results more difficult. In contrast, knowledge graphs allow for a rich and explicit representation of relationships between real-world entities. Despite the challenges in their construction, studies indicate that these can complement each other to produce better results. Therefore, the objective of this research is to propose a memory system for conversational agents based on large language models through the combination of explicit knowledge (knowledge graphs) and implicit knowledge (language models) to achieve better semantic and lexical representation. This methodology was called MemoryGraph and is composed of three processes: graph construction, graph search, and user representation. Various knowledge graph construction workflows were proposed and compared, considering their costs and influences on the final result. The agent can search for information in this base through various search proposals based on RAG, referred to here as GraphRAG. This search methodology was evaluated by humans in five proposed question scenarios, showing superior average results in all five proposed search approaches (29% in the best approach). In addition, six RAG metrics, evaluated by a large language model, were applied to the proposed application results from two popular datasets and one composed of diabetes guidelines, showing superior results in all datasets.
Furthermore, a method for long-term user representation, called user_memory, was proposed, demonstrating 93% retention of user information. To reinforce this result, case studies were conducted, demonstrating the agent's ability to personalize the user experience based on past experiences, increasing the speed of information delivery and user satisfaction. The results demonstrate that the MemoryGraph paradigm represents an advance over vector representation in environments where richer, temporal, and mutable contextualization is necessary. It also indicates that the integration of knowledge graphs with large language models, especially in the construction of long-term memory and rich contextualization based on past experiences, can represent a significant advance in creating more efficient, personalized conversational agents with enhanced capacity for retaining and utilizing information over time.
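The explicit-knowledge side of a system like MemoryGraph stores facts as subject-relation-object triples and retrieves them by entity. A minimal triple-store sketch with hypothetical relations and facts (not the thesis's actual schema or data):

```python
from collections import defaultdict

class TripleMemory:
    """Tiny knowledge-graph memory: facts are (subject, relation, object) triples."""

    def __init__(self):
        self.out = defaultdict(list)  # subject -> [(relation, object), ...]

    def add(self, subject, relation, obj):
        self.out[subject].append((relation, obj))

    def about(self, entity):
        """All facts whose subject is `entity`."""
        return [(entity, r, o) for r, o in self.out[entity]]

mem = TripleMemory()
mem.add("user", "prefers", "morning appointments")
mem.add("user", "diagnosed_with", "type 2 diabetes")
mem.add("type 2 diabetes", "managed_by", "diet and exercise")

print(mem.about("user"))
```

Unlike a vector store, the retrieved facts are explicit and editable, which is the property the abstract highlights for control and explanation; a GraphRAG-style search would walk these edges before prompting the language model.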
  • Item
    IVF/NSGA-III: Uma Metaheurística Evolucionária Many-Objective com Busca Guiada por Balizas e Fertilização In Vitro
    (Universidade Federal de Goiás, 2024-04-11) Sampaio, Sávio Menezes; Camilo Junior, Celso Gonçalves; http://lattes.cnpq.br/6776569904919279; Camilo Junior, Celso Gonçalves; Lima Neto, Fernando Buarque de; Leite, Karla Tereza Figueiredo; Rodrigues, Vagner José do Sacramento; Oliveira, Sávio Salvarino Teles de
    The In Vitro Fertilization Genetic Algorithm (IVF/GA) demonstrates robust applicability to single-objective optimization problems, particularly those that are complex and multimodal. This work proposes the expansion of the IVF method to many-objective optimization, which deals with more than three simultaneous objectives. The study introduces new activation criteria, selection, assisted exploration, and transfer mechanisms, consolidating innovation through the integration of the IVF method with NSGA-III, here referred to as IVF/NSGA-III. This approach incorporates the Beacon-Guided Search strategy in a Steady State configuration, aiming to overcome the inherent challenges of many-objective optimization. It focuses on dynamic convergence to promising regions of the solution space and adopts an adaptive scale factor within the context of Differential Evolution, providing an alternative methodology to conventional intensification methods. Experiments conducted with the many-objective benchmarks DTLZ, MaF, and WFG show that IVF/NSGA-III significantly enhances performance compared to the standard NSGA-III algorithm across various tested problems, validating its potential as a valuable contribution to the field of Many-Objective Evolutionary Algorithms (MOEAs). The study suggests new directions for the development of many-objective memetic strategies and offers significant insights for researchers seeking more effective and adaptable optimization methods. Goiânia-GO, 2024. 220 p. PhD thesis, Instituto de Informática, Universidade Federal de Goiás.
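The Differential Evolution component mentioned in this abstract builds on the classic DE/rand/1 mutation, v = x_r1 + F * (x_r2 - x_r3), where F is the scale factor that IVF/NSGA-III adapts. A minimal sketch of the base operator only; the beacon-guided search and IVF-specific mechanisms are not reproduced here:

```python
import random

def de_mutant(population, f_scale):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3),
    from three distinct individuals sampled at random."""
    r1, r2, r3 = random.sample(population, 3)
    return [a + f_scale * (b - c) for a, b, c in zip(r1, r2, r3)]

random.seed(0)
pop = [[0.0, 1.0], [2.0, 3.0], [4.0, 0.5], [1.5, 2.5]]
print(de_mutant(pop, f_scale=0.5))  # an adaptive scheme would vary F per generation
```

Making F adaptive, as the thesis does, trades the usual fixed intensification strength for one that responds to the search state.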
  • Item
    Framework para sistemas de recomendação baseados em neural contextual Bandits com restrição de justiça
    (Universidade Federal de Goiás, 2024-06-03) Santana, Marlesson Rodrigues Oliveira de; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Rosa, Thierson Couto; Carvalho, Cedric Luiz De; Araújo, Aluizio Fausto Ribeiro; Veloso, Adriano
    The advent of digital businesses such as marketplaces, in which a company mediates a commercial transaction between different actors, presents challenges to recommendation systems as it is a multi-stakeholder scenario. In this scenario, the recommendation must meet conflicting objectives between the parties, such as relevance versus exposure, for example. State-of-the-art models that address the problem in a supervised way not only assume that the recommendation is a stationary problem, but are also user-centered, which leads to long-term system degradation. This thesis focuses on modeling the recommendation system as a reinforcement learning problem, through a Markovian decision-making process with uncertainty where it is possible to model the different interests of stakeholders in an environment with fairness constraints. The main challenges are the need for real interactions between stakeholders and the recommendation system in a continuous cycle of events that enables the scenario for online learning. For the development of this work, we present a model proposal based on Neural Contextual Bandits with fairness constraints for multi-stakeholder scenarios. As results, we present the construction of MARS-Gym, a framework for modeling, training, and evaluating recommendation systems based on reinforcement learning, and the development of different recommendation policies with fairness control adaptable to Neural Contextual Bandits, which led to an increase in fairness metrics for all scenarios presented while controlling the reduction in relevance metrics.
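The fairness-constrained policy idea can be illustrated, far more simply than the thesis's Neural Contextual Bandits, with an epsilon-greedy bandit that guarantees each arm a minimum share of exposure. A toy sketch with all parameters hypothetical:

```python
import random

class FairEpsilonGreedy:
    """Toy multi-armed bandit with a fairness constraint: every arm
    (e.g. a seller in a marketplace) gets a minimum share of exposure."""

    def __init__(self, n_arms, epsilon=0.1, min_share=0.05):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms   # running mean reward per arm
        self.epsilon = epsilon
        self.min_share = min_share

    def select(self):
        total = sum(self.counts)
        if total:
            # Fairness first: force any under-exposed arm before exploiting.
            for arm, c in enumerate(self.counts):
                if c / total < self.min_share:
                    return arm
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore
        return max(range(len(self.counts)), key=lambda a: self.values[a])  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

random.seed(0)
bandit = FairEpsilonGreedy(n_arms=3)
arm = bandit.select()
bandit.update(arm, reward=1.0)
```

The `min_share` check is the "fairness control" in miniature: exposure for weaker arms is bought at a bounded cost in relevance, which mirrors the trade-off the abstract reports.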
  • Item
    Design de experiência aplicado a times
    (Universidade Federal de Goiás, 2024-10-18) Alves, Leonardo Antonio; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Ferreira, Deller James; Lucena, Fábio Nogueira de; Dias, Rodrigo da Silva; Federson, Fernando Marques
    Despite recent advances, current Gamification methodologies still face challenges in effectively personalizing learning experiences and accurately assessing the development of specific competencies. This thesis presents the Marcta Autonomy Framework (MAF), an innovative framework that aims to overcome these limitations by increasing team members’ motivation and participation while promoting personal development and skills through a personalized experience. The MAF, consisting of six phases (Planning, Reception, Advancement, Feedback, Process Evaluation, and Lessons and Adjustments), guides the development of activities with both intrinsic and extrinsic rewards. The research was applied in two academic case studies: a Software Factory and an Introduction to Programming course for students of the Bachelor’s degree in Artificial Intelligence. Using a qualitative approach, including interviews and observations, the results demonstrate that the MAF significantly enhances the development of personal skills. The analysis suggests that the framework can be applied both within a course and in a specific discipline. The main contribution of the MAF lies in its ability to provide a structured roadmap for planning and evaluating pedagogical actions focused on Personal Skills Development. Furthermore, the framework leverages easily capturable data through observation, context, and evaluations. It is concluded that the MAF stands as a personalized and affective Gamification solution for Experience Design in Learning, promoting Personal Skills Development in both academic and corporate contexts.
  • Item
    Escolha de parâmetros aplicados a modelos inteligentes para o incremento da qualidade do aprendizado de sinais de EEG captados por dispositivo de baixo custo
    (Universidade Federal de Goiás, 2024-07-10) Silva, Uliana Duarte; Felix, Juliana Paula; http://lattes.cnpq.br/3610115951590691; Nascimento, Hugo Alexandre Dantas do; http://lattes.cnpq.br/2920005922426876; Nascimento, Hugo Alexandre Dantas do; Pires, Sandrerley Ramos; Carvalho, Sérgio Teixeira de; Carvalho, Sirlon Diniz de; Melo, Francisco Ramos de
    Since the creation of the first electroencephalography (EEG) equipment at the beginning of the 20th century, several studies have been carried out based on this technology. More recently, investigations into machine learning applied to the classification of EEG signals have started to become popular. In such studies, it is common to adopt a sequence of steps that involves the use of filters, signal windowing, feature extraction, and division of data into training and test sets. The choice of parameters for such steps is an important task, as it impacts classification performance. On the other hand, finding the best combination of parameters is an exhaustive work that has only been partially addressed in studies in the area, particularly when considering many parameter options, the progressive growth of the training set, and data acquired from low-cost EEG equipment. This thesis contributes to the area by presenting extensive research on the choice of parameters for processing and classification of EEG signals, involving both raw signals and specific wave data collected from low-cost equipment. The EEG signals were acquired from ten participants, who were asked to observe a small white ball that moved to the right, moved to the left, or remained stationary. The observation was repeated 24 times randomly and each observation situation lasted 18 seconds. Different parameter settings and machine learning methods were evaluated in classifying EEG signals. We sought to find the best parameter configuration for each participant individually, as well as obtain a common configuration for several participants simultaneously. The results for the individualized classifications indicate better accuracies when using data from specific waves instead of raw signals. Using larger windows also led to better results.
When choosing a common parameter combination for multiple participants, the results indicate a similarity to findings when looking for the best parameters for individual participants. In this case, the parameter combinations using data from specific waves showed an average increase of 8.69% with a standard deviation of 4.02%, while the average increase using raw signals was 7.82% with a standard deviation of 2.81%, when compared to general average accuracy results. Still in the case of the parameterization common to several participants, the maximum accuracies using data from specific waves were higher than those obtained with the raw signals, and the largest windows appeared among the best results.
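The signal-windowing step referred to in this abstract splits the recording into fixed-size, possibly overlapping segments before feature extraction. A minimal sketch with an assumed sampling rate and window size, since the abstract does not state the device's actual values:

```python
def windows(signal, size, step):
    """Split a 1-D signal into fixed-size windows with a given hop length."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

# Assumed values: 18 s at 128 Hz -> 2304 samples; 2 s windows with 50% overlap.
samples = list(range(2304))
w = windows(samples, size=256, step=128)
print(len(w), len(w[0]))  # 17 windows of 256 samples each
```

Window size and overlap are exactly the kind of parameters whose combinations the thesis searches over, per participant and across participants.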
  • Item
    Soluções baseadas em aprendizado de máquina para alocação de recursos em redes sem fio de próxima geração
    (Universidade Federal de Goiás, 2024-05-06) Lopes, Victor Hugo Lázaro; Klautau Júnior, Aldebaro Barreto da Rocha; Cardoso, Kleber Vieira; http://lattes.cnpq.br/0268732896111424; Cardoso, Kleber Vieira; Klautau Júnior, Aldebaro Barreto da Rocha; Rocha, Flávio Geraldo Coelho; Silva, Yuri Carvalho Barbosa; Rezende, José Ferreira de
    5G and beyond networks have been designed to support challenging services. Despite important advances already introduced, resource allocation and management methods remain critical tasks in this context. Although resource allocation methods based on exact optimization have a long history in wireless networks, several aspects involved in this evolution require approaches that can overcome the existing limitations. Recent research has shown the potential of AI/ML-based resource allocation methods. In this approach, resource allocation strategies can be built based on learning, in which the complex relationships of these problems can be learned through the experience of agents interacting with the environment. In this context, this thesis aimed to investigate AI/ML-based approaches for the development of dynamic resource allocation and management methods. Two relevant problems were considered, the first one related to user scheduling and the allocation of radio resources in multiband MIMO networks, and the second one focused on the challenges of allocating radio, computational, and infrastructure resources involved in the VNF placement problem in disaggregated vRAN. For the first problem, an agent based on DRL was proposed. For the second problem, two approaches were proposed, the first one being based on an exact optimization method for defining the VNF placement solution, and the second one based on a DRL agent for the same purpose. Moreover, components adhering to the O-RAN architecture were proposed, creating the necessary control for monitoring and defining new placement solutions dynamically, considering aspects of cell coverage and demand. Simulations demonstrated the feasibility of the proposals, with important improvements observed in different metrics.
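The DRL agents in this abstract build on the temporal-difference update that tabular Q-learning makes explicit, Q(s,a) += alpha * (r + gamma * max over a' of Q(s',a') - Q(s,a)); deep variants replace the table with a neural network. A minimal sketch with hypothetical state and action names, not the thesis's actual state or action spaces:

```python
from collections import defaultdict

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q(s,a) toward the bootstrapped target."""
    best_next = max(q[next_state].values(), default=0.0)
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Hypothetical resource-allocation episode: assigning a radio resource block.
q = defaultdict(lambda: defaultdict(float))
q_update(q, "low_load", "assign_rb", reward=1.0, next_state="ok")
print(q["low_load"]["assign_rb"])
```

In the DRL setting this update becomes a loss over network parameters, but the learning signal, reward plus discounted value of the next state, is the same.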
  • Item
    Reflexo pupilar à luz como biomarcador para identificação de glaucoma: avaliação comparativa de redes neurais e métodos de aprendizado de máquina
    (Universidade Federal de Goiás, 2024-08-22) Pinheiro, Hedenir Monteiro; Costa, Ronaldo Martins da; http://lattes.cnpq.br/7080590204832262; Costa, Ronaldo Martins da; Matsumoto, Mônica Mitiko Soares; Camilo, Eduardo Nery Rossi; Papa, João Paulo; Barbosa, Rommel Melgaço
    The study of retinal ganglion cells, their photosensitivity characteristics, and their relationship with physical and cognitive processes has driven research on the pupillary reflex. Controlled by the Autonomic Nervous System (ANS), dilation (mydriasis) and contraction (miosis) are involuntary reflexes. Variations in pupil diameter may indicate physical or cognitive changes in an individual. For this reason, the pupillary reflex has been considered an important biomarker for various types of diagnoses. This study aimed to improve the automated identification of glaucoma using data from the pupillary light reflex. A comparative analysis between neural networks and classical techniques was performed to segment the pupillary signal. In addition, the performance of various data processing methods was evaluated, including filtering techniques, feature extraction, sample balancing, and feature selection, analyzing their effects on the classification process. The results show an accuracy of 73.90% in the overall classification of glaucoma, 98.10% for moderate glaucoma classification, and 98.73% for severe glaucoma, providing insights and guidelines for glaucoma screening and diagnosis through the signal derived from the pupillary light response.
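    The kind of feature extraction applied to a pupillary-light-reflex signal can be sketched with a few toy descriptors (baseline diameter, constriction amplitude, latency). This is illustrative only and far simpler than the thesis's pipeline of filtering, balancing, feature selection, and multiple classifiers; the sample values are synthetic:

    ```python
    def plr_features(diameters, fps):
        # Toy features from a pupil-diameter time series (mm) sampled at
        # `fps` Hz: baseline, constriction amplitude, and latency to peak
        # constriction after the light stimulus.
        baseline = sum(diameters[:fps]) / fps        # mean of the first second
        minimum = min(diameters)
        return {
            "baseline_mm": baseline,
            "amplitude_mm": baseline - minimum,
            "latency_s": diameters.index(minimum) / fps,
        }

    # Synthetic light-stimulus response sampled at 4 Hz (made-up numbers)
    signal = [6.0, 6.0, 6.0, 6.0, 5.0, 4.0, 3.5, 4.2, 5.0]
    print(plr_features(signal, fps=4))
    # → {'baseline_mm': 6.0, 'amplitude_mm': 2.5, 'latency_s': 1.5}
    ```

    Descriptors like these would then feed the balancing, feature-selection, and classification stages the abstract evaluates.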
  • Item
    Future-Shot: Few-Shot Learning to tackle new labels on high-dimensional classification problems
    (Universidade Federal de Goiás, 2024-02-23) Camargo, Fernando Henrique Fernandes de; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Galvão Filho, Arlindo Rodrigues; Vieira, Flávio Henrique Teles; Gomes, Herman Martins; Lotufo, Roberto de Alencar
    This thesis introduces a novel approach to address high-dimensional multiclass classification challenges, particularly in dynamic environments where new classes emerge. Named Future-Shot, the method employs metric learning, specifically triplet learning, to train a model capable of generating embeddings for both data points and classes within a shared vector space. This facilitates efficient similarity comparisons using techniques like k-nearest neighbors (k-NN), enabling seamless integration of new classes without extensive retraining. Tested on lab-of-origin prediction tasks using the Addgene dataset, Future-Shot achieves top-10 accuracy of 90.39%, surpassing existing methods. Notably, in few-shot learning scenarios, it achieves an average top-10 accuracy of 81.2% with just 30% of the data for new classes, demonstrating robustness and efficiency in adapting to evolving class structures.
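    The retrieval step described above — classes and data points embedded in one space, ranked by similarity — can be sketched as follows. The encoder itself is not shown; the embeddings and class names below are hypothetical stand-ins for what a triplet-trained model might produce:

    ```python
    import math

    def cosine(u, v):
        # Cosine similarity between two embedding vectors
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))

    def top_k_classes(point_emb, class_embs, k):
        # Rank candidate classes by similarity to the point embedding
        # (k-NN over class vectors living in the same space as the data)
        return sorted(class_embs,
                      key=lambda c: cosine(point_emb, class_embs[c]),
                      reverse=True)[:k]

    # Embeddings as a (hypothetical) triplet-trained encoder might emit them
    class_embs = {"lab_a": [1.0, 0.0, 0.1], "lab_b": [0.0, 1.0, 0.0]}
    sample = [0.9, 0.1, 0.2]
    print(top_k_classes(sample, class_embs, k=1))  # → ['lab_a']

    # A new class joins by simply adding its embedding -- no retraining
    class_embs["lab_c"] = [0.5, 0.5, 0.0]
    print(top_k_classes(sample, class_embs, k=2))  # → ['lab_a', 'lab_c']
    ```

    The last two lines show the property the abstract emphasizes: integrating a new class is a dictionary insert, not a training run.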
  • Item
    Abordagem de seleção de características baseada em AUC com estimativa de probabilidade combinada a técnica de suavização de La Place
    (Universidade Federal de Goiás, 2023-09-28) Ribeiro, Guilherme Alberto Sousa; Costa, Nattane Luíza da; http://lattes.cnpq.br/9968129748669015; Barbosa, Rommel Melgaço; http://lattes.cnpq.br/6228227125338610; Barbosa, Rommel Melgaço; Lima, Marcio Dias de; Oliveira, Alexandre César Muniz de; Gonçalves, Christiane; Rodrigues, Diego de Castro
    The high dimensionality of many datasets has led to the need for dimensionality reduction algorithms that increase performance, reduce computational effort, and simplify data processing in applications focused on machine learning or pattern recognition. Given the need for and importance of reduced data, this work investigates feature selection methods, focusing on methods that use AUC (Area Under the ROC Curve). Trends in the use of feature selection methods in general, and of methods using AUC as an estimator, applied to microarray data, were evaluated. A new feature selection algorithm was then developed: the AUC-based feature selection method with probability estimation and Laplace smoothing (AUC-EPS). The proposed method calculates the AUC considering all possible values of each feature, combined with probability estimation and Laplace smoothing. Experiments were conducted to compare the proposed technique with the FAST (Feature Assessment by Sliding Thresholds) and ARCO (AUC and Rank Correlation coefficient Optimization) algorithms. Eight datasets related to gene expression in microarrays were used, all of which were used for the cross-validation experiment and four for the bootstrap experiment. The results showed that the proposed method helped improve the performance of some classifiers, in most cases with a completely different set of features than the other techniques, with some of the features identified by AUC-EPS being critical for disease identification. The work concluded that the proposed method, AUC-EPS, selects features different from those chosen by FAST and ARCO, helps to improve the performance of some classifiers, and identifies features that are crucial for discriminating cancer.
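    The two ingredients the abstract names — per-feature AUC ranking and Laplace-smoothed probability estimation — can be sketched separately as below. How AUC-EPS combines them is not reproduced here; this is only an illustration of the components, with made-up gene names and values:

    ```python
    def feature_auc(values, labels):
        # Per-feature AUC via the Mann-Whitney statistic: the probability
        # that a randomly chosen positive sample outranks a random negative
        pos = [v for v, y in zip(values, labels) if y == 1]
        neg = [v for v, y in zip(values, labels) if y == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    def laplace_prob(count, total, n_classes=2):
        # Laplace (add-one) smoothed probability estimate, avoiding zero
        # probabilities when a class count is empty
        return (count + 1) / (total + n_classes)

    def rank_features(columns, labels):
        # Rank features from most to least discriminative by their AUC
        scores = {name: feature_auc(col, labels) for name, col in columns.items()}
        return sorted(scores, key=scores.get, reverse=True)

    labels = [1, 1, 1, 0, 0, 0]
    X = {
        "gene_a": [5.1, 4.8, 5.0, 1.2, 0.9, 1.1],  # separates the classes well
        "gene_b": [2.0, 1.0, 3.0, 2.5, 1.5, 2.8],  # mostly noise
    }
    print(rank_features(X, labels))  # → ['gene_a', 'gene_b']
    ```

    A filter method of this family keeps the top-ranked features and discards the rest before training a classifier, which is the setting in which AUC-EPS is compared against FAST and ARCO.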
  • Item
    Acelerando florestas de decisão paralelas em processadores gráficos para a classificação de texto
    (Universidade Federal de Goiás, 2022-09-12) Pires, Julio Cesar Batista; Martins, Wellington Santos; http://lattes.cnpq.br/3041686206689904; Martins, Wellington Santos; Lima, Junio César de; Gaioso, Roussian Di Ramos Alves; Franco, Ricardo Augusto Pereira; Soares, Fabrízzio Alphonsus Alves de Melo Nunes
    The amount of readily available on-line text has grown exponentially, requiring efficient methods to automatically manage and sort data. Automatic text classification provides means to organize this data by associating documents with classes. However, the use of more data and sophisticated machine learning algorithms has demanded increasing computing power. In this work we accelerate a novel Random Forest-based classifier that has been shown to outperform state-of-the-art classifiers for textual data. The classifier is obtained by applying the boosting technique to bags of extremely randomized trees (forests) that are built in parallel to improve performance. Experimental results using standard textual datasets show that the GPU-based implementation is able to reduce the execution time by up to 20 times compared to an equivalent sequential implementation.
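    The property that makes the forest GPU-friendly is that each extremely randomized tree is built independently of the others. A pure-Python sketch of that idea, reduced to one-level "stumps" (not the thesis's GPU code; data and ensemble size are arbitrary):

    ```python
    import random

    def train_random_stump(points, labels, rng):
        # Extremely randomized "stump": feature and threshold are drawn at
        # random rather than optimized. Each stump depends only on the data
        # and its own random draws, so a whole forest can be built in
        # parallel (e.g. one tree per GPU thread block).
        f = rng.randrange(len(points[0]))
        lo, hi = min(p[f] for p in points), max(p[f] for p in points)
        t = rng.uniform(lo, hi)
        left = [y for p, y in zip(points, labels) if p[f] <= t]
        right = [y for p, y in zip(points, labels) if p[f] > t]
        majority = lambda ys: max(set(ys or labels), key=(ys or labels).count)
        return f, t, majority(left), majority(right)

    def predict(forest, p):
        # Majority vote across all stumps in the ensemble
        votes = [l if p[f] <= t else r for f, t, l, r in forest]
        return max(set(votes), key=votes.count)

    rng = random.Random(42)
    X = [[0.0, 1.0], [0.2, 0.9], [1.0, 0.1], [0.9, 0.0]]
    y = [0, 0, 1, 1]
    forest = [train_random_stump(X, y, rng) for _ in range(25)]
    print(predict(forest, [0.1, 0.95]), predict(forest, [0.95, 0.05]))
    ```

    The boosting layer described in the abstract would then weight successive bags of such forests; only the embarrassingly parallel tree construction is sketched here.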