Programa de Pós-graduação em Ciência da Computação
Browsing Programa de Pós-graduação em Ciência da Computação by access type "Attribution-NonCommercial-NoDerivatives 4.0 International"
Now showing 1 - 20 of 73
Item Detecção online de dispositivos sem fio intrusos usando o sinal eletromagnético de transmissão (Universidade Federal de Goiás, 2022-09-23) Abreu, Marcos Felipe Barboza de; Vieira, Flávio Henrique Teles; http://lattes.cnpq.br/0920629723928382; Cardoso, Kleber Vieira; http://lattes.cnpq.br/0268732896111424; Klautau Júnior, Aldebaro Barreto da Rocha; Corrêa, Sand Luz; Cardoso, Kleber Vieira; Vieira, Flávio Henrique Teles
The identification of Internet of Things (IoT) devices through their electromagnetic signal is widely investigated in the literature, and several works consider this technique highly accurate. Offline techniques, i.e., those that assume no new devices appear, are well explored, but so far no system has been found that effectively detects unknown devices online; thus one of the greatest potentials of this type of technique has not been investigated. This work presents an online system that distinguishes authentic devices from intruders. For this, the probability matrix produced by classifiers is explored in order to identify unknown devices. In addition to the technique, we also present a system with a modular, extensible and generic architecture, designed to interfere minimally with the normal flow of an Internet of Things application. The system is implemented using the GNU Radio tool, and experiments are presented that aim to show the feasibility of the technique. The entire discussion is based on data collected from real environments, using devices of the LoRa and ZigBee wireless communication technologies. In addition, the work analyzed WiFi data from collections found in the literature.
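The open-set idea described above, flagging a transmission as an intruder when the classifier's probability vector has no confident winner, can be sketched as follows. The threshold value and all function names are illustrative assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch: a device whose classifier probability vector has no
# confident winning class is flagged as an unknown (intrusive) device.

def is_unknown(prob_vector, threshold=0.6):
    """True when no authentic-device class gets a high enough probability."""
    return max(prob_vector) < threshold

def classify_stream(prob_matrix, threshold=0.6):
    """For each transmission (one row of class probabilities), return the
    winning class index, or -1 for a suspected intruder."""
    labels = []
    for probs in prob_matrix:
        if is_unknown(probs, threshold):
            labels.append(-1)  # suspected intruder
        else:
            labels.append(max(range(len(probs)), key=probs.__getitem__))
    return labels
```

For example, `classify_stream([[0.9, 0.05, 0.05], [0.4, 0.35, 0.25]])` labels the first transmission as class 0 and the second as unknown.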
Tests show that it is possible to identify unknown devices in the order of milliseconds, with a low error rate.

Item Aplicação de técnicas de visualização de informações para os problemas de agendamento de horários educacionais (Universidade Federal de Goiás, 2023-10-20) Alencar, Wanderley de Souza; Jradi, Walid Abdala Rfaei; http://lattes.cnpq.br/6868170610194494; Nascimento, Hugo Alexandre Dantas do; http://lattes.cnpq.br/2920005922426876; Nascimento, Hugo Alexandre Dantas do; Jradi, Walid Abdala Rfaei; Bueno, Elivelton Ferreira; Gondim, Halley Wesley Alexandre Silva; Carvalho, Cedric Luiz de
An important category, or class, of combinatorial optimization problems is called Educational Timetabling Problems (Ed-TTPs). Broadly, this category includes problems in which teachers, subjects (lectures) and, eventually, rooms must be allocated in order to build a timetable of classes or examinations to be used in a given academic period at an educational institution (school, college, university, etc.). The timetable to be prepared must observe a set of constraints in order to satisfy, as much as possible, a set of desirable goals. The current research proposes the use of methods and/or techniques from the Information Visualization (IV) area in an interactive approach that helps non-technical users better understand and solve problem instances within their educational institutions. In the proposed approach, human actions and those performed by a computational system interact in a symbiotic way toward the problem resolution, with the interaction carried out through a graphical user interface that implements ideas originating from the User Hints framework [Nas03].
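The kind of hard constraint an Ed-TTP timetable must observe can be illustrated with a minimal check (not taken from the thesis): no teacher may teach two classes in the same timeslot. The data layout is an assumption for the sketch.

```python
# Count hard-constraint violations in a toy timetable: a violation occurs
# whenever the same teacher is assigned twice to the same timeslot.

from collections import Counter

def teacher_clashes(assignments):
    """assignments: list of (teacher, timeslot, room) tuples.
    Returns how many extra assignments double-book a (teacher, timeslot)."""
    counts = Counter((t, slot) for t, slot, _room in assignments)
    return sum(n - 1 for n in counts.values() if n > 1)
```

An interactive tool in the spirit of the one described above would recompute such violation counts as the user drags lectures between slots.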
Among the main contributions are: (1) the recognition and characterization of the most used techniques for presenting and/or visualizing Ed-TTP solutions; (2) the conception of a mathematical notation to formalize the problem specification, including the introduction of a new idea called flexibility applied to the entities involved in the timetable; (3) the proposition of visualizations able to contribute to a better understanding of a problem instance; (4) a computational tool that provides interactive resolution of Ed-TTPs, together with a specific entity-relationship model for this kind of problem; and, finally, (5) a methodology to evaluate visualizations applied to the problem in focus.

Item Preditor híbrido de estruturas terciárias de proteínas (Universidade Federal de Goiás, 2023-08-10) Almeida, Alexandre Barbosa de; Soares, Telma Woerle de Lima; http://lattes.cnpq.br/6296363436468330; Soares, Telma Woerle de Lima; Camilo Júnior, Celso Gonçalves; Vieira, Flávio Henrique Teles; Delbem, Alexandre Cláudio Botazzo; Faccioli, Rodrigo Antônio
Proteins are organic molecules composed of chains of amino acids and play a variety of essential biological functions in the body. The native structure of a protein is the result of the folding process of its amino acids, with their spatial orientation primarily determined by two dihedral angles (φ, ψ). This work proposes a new hybrid method for predicting protein tertiary structures, called hyPROT, which combines Multi-objective Evolutionary Algorithm (MOEA) optimization, Molecular Dynamics, and Recurrent Neural Networks (RNNs). The proposed approach investigates the evolutionary profile of the dihedral angles (φ, ψ) obtained by different MOEAs during dominance-based minimization of the objective function and energy minimization by molecular dynamics. This proposal is unprecedented in the protein structure prediction literature.
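The dihedral angles (φ, ψ) mentioned above are computed from four consecutive backbone atom positions. A standard pure-Python version of that geometric calculation (not the thesis code) can be sketched as:

```python
# Dihedral angle (degrees) from four 3D points, via the atan2 formulation:
# phi and psi are obtained this way from consecutive backbone atoms.

import math

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def _dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def dihedral(p1, p2, p3, p4):
    """Signed dihedral angle defined by the planes (p1,p2,p3) and (p2,p3,p4)."""
    b1, b2, b3 = _sub(p2, p1), _sub(p3, p2), _sub(p4, p3)
    n1, n2 = _cross(b1, b2), _cross(b2, b3)
    x = _dot(n1, n2)
    y = _dot(_cross(n1, n2), b2) / math.sqrt(_dot(b2, b2))
    return math.degrees(math.atan2(y, x))
```

For the four points (0,0,0), (1,0,0), (1,1,0), (1,1,1) the result is a 90-degree twist.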
The premise under investigation is that the evolutionary profile of the dihedrals may conceal relevant patterns about folding mechanisms. To analyze the evolutionary profile of the angles (φ, ψ), RNNs were used to abstract and generalize the specific biases of each MOEA. The selected MOEAs were NSGA-II, BRKGA, and GDE3, and the objective function investigated combines the potential energy from non-covalent interactions and the solvation energy. The results show that hyPROT was able to reduce the RMSD of the best prediction generated by the MOEAs individually by at least 33%. Predicting new series for the dihedral angles allowed the construction of histograms, indicating the formation of a possible statistical ensemble responsible for the distribution of the dihedrals (φ, ψ) during the folding process.

Item Alocação de recursos e posicionamento de funções virtualizadas em redes de acesso por rádio desagregadas (Universidade Federal de Goiás, 2023-08-30) Almeida, Gabriel Matheus Faria de; Pinto, Leizer de Lima; http://lattes.cnpq.br/0611031507120144; Cardoso, Kleber Vieira; http://lattes.cnpq.br/0268732896111424; Cardoso, Kleber Vieira; Pinto, Leizer de Lima; Klautau Júnior, Aldebaro Barreto da Rocha; Silva, Luiz Antonio Pereira da
Jointly choosing a functional split of the protocol stack and the placement of network functions in a virtualized RAN is critical to using the access network resources efficiently. This problem is a current research topic in 5G and post-5G networks, and involves the challenge of simultaneously choosing the placement of virtualized functions, the routes for traffic, and the management of available computing resources. In this work, we present three approaches to solve this problem in the planning scenario and two approaches in the network operation scenario. The first result is a Mixed Integer Linear Programming (MILP) model that considers a generic set of processing nodes and multipath routing.
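The centralization-versus-cost trade-off behind these formulations can be illustrated with a toy exhaustive search, an illustrative stand-in for the MILP objective rather than the actual model: each radio unit's virtualized functions go either to a local node or to a capacity-limited central unit, and we reward centralization while charging for active nodes. All numbers and names are assumptions.

```python
# Toy vRAN placement: enumerate every centralize/distribute choice and keep
# the best score = central_bonus * (centralized functions) - cost of nodes.

from itertools import product

def best_placement(n_units, cu_capacity, node_cost=1.0, central_bonus=2.0):
    best, best_score = None, float("-inf")
    for choice in product([0, 1], repeat=n_units):  # 1 = centralized
        centralized = sum(choice)
        if centralized > cu_capacity:
            continue  # central unit cannot host this many functions
        active_nodes = (1 if centralized else 0) + (n_units - centralized)
        score = central_bonus * centralized - node_cost * active_nodes
        if score > best_score:
            best, best_score = choice, score
    return best, best_score
```

Enumeration is only feasible for tiny instances, which is exactly why the work above turns to MILP, metaheuristics, and learning agents.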
The second approach uses artificial intelligence and machine learning concepts, formulating a deep reinforcement learning agent. The third approach is based on search metaheuristics, through a genetic algorithm. The last two approaches are Markov Decision Process (MDP) formulations that consider dynamic demand on the radio units. In all formulations, the objective is to maximize the centralization of network functions while minimizing positioning cost. Analysis of the solutions and comparison of their results show that exact approaches such as MILP naturally provide the best solution. However, in terms of efficiency, the genetic algorithm has the best search time, finding a high-quality solution in a few seconds. The deep reinforcement learning agent converges well, finding high-quality solutions and showing generalization capacity across different topologies. Finally, the formulations considering the network operation scenario with dynamic demand are highly complex due to the size of the action space.

Item Exploiting parallelism in document similarity tasks with applications (Universidade Federal de Goiás, 2019-09-05) Amorim, Leonardo Afonso; Martins, Wellington Santos; http://lattes.cnpq.br/3041686206689904; Martins, Wellington Santos; Vincenzi, Auri Marcelo Rizzo; Rodrigues, Cássio Leonardo; Rosa, Thierson Couto; Martins, Weber
The amount of data available continues to grow rapidly, and much of it corresponds to text expressing human language, which is unstructured in nature. One way of giving some structure to this data is by converting the documents to a vector of features corresponding to word frequencies (term count, tf-idf, etc.) or word embeddings. This transformation allows us to process textual data with operations such as similarity measures, similarity search, and classification, among others. However, this is only possible thanks to more sophisticated algorithms, which demand higher computational power.
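The tf-idf document vectors mentioned above, and the cosine similarity used to compare them, can be sketched in a few lines. The smoothing constants here are common choices, assumed for the sketch rather than taken from the paper.

```python
# Build tf-idf dicts for tokenized documents and compare them with cosine
# similarity: the basic operations behind document similarity search.

import math
from collections import Counter

def tfidf_vectors(docs):
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log((1 + n) / (1 + df[t]))
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    num = sum(w * v.get(t, 0.0) for t, w in u.items())
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0
```

A kNN search like the one accelerated in this item would score one query vector against every document vector with `cosine` and keep the top k.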
In this work, we exploit parallelism to enable the use of parallel algorithms in document similarity tasks and apply some of the results to an important application in software engineering. Similarity search over textual data is commonly performed through a k-nearest-neighbor search in which pairs of document vectors are compared and the k most similar are returned. For this task we present FaSSTkNN, a fine-grain parallel algorithm that applies filtering techniques based on the most important terms of the query document, using tf-idf. The algorithm, implemented on a GPU, improved the top-k nearest neighbor search by up to 60x compared to a baseline also running on a GPU. Document similarity using tf-idf is based on a scoring scheme that reflects how important a word is to a document in a collection. Recently a more sophisticated representation, the word embedding, has become popular. It creates a vector for each word that encodes co-occurrence relationships between words in a given context, capturing complex semantic relationships. To generate word embeddings efficiently, we propose a fine-grain parallel algorithm that finds the k least similar (farthest) neighbor words to generate the negative samples used to create the embeddings. The algorithm, implemented on a multi-GPU system, scaled linearly and was able to generate embeddings 13x faster than the original multicore Word2Vec algorithm, while keeping the accuracy of the results at the same level as those produced by standard word embedding programs. Finally, we applied our accelerated word embedding solution to the problem of assessing the quality of fixes in Automated Software Repair.
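The k-farthest-neighbor selection for negative sampling described above can be sketched sequentially; the paper's contribution is the fine-grain GPU parallelization of this idea, so this pure-Python version is only an illustration of the selection criterion.

```python
# Pick the k vocabulary words farthest (lowest cosine similarity) from a
# query word vector, to serve as negative samples.

import heapq
import math

def k_farthest(query, vocab_vecs, k):
    def cos(u, v):
        num = sum(a * b for a, b in zip(u, v))
        den = (math.sqrt(sum(a * a for a in u))
               * math.sqrt(sum(b * b for b in v)))
        return num / den if den else 0.0
    return heapq.nsmallest(k, range(len(vocab_vecs)),
                           key=lambda i: cos(query, vocab_vecs[i]))
```

With query (1, 0) and vocabulary vectors [(1, 0), (0, 1), (-1, 0)], the two farthest words are indices 2 and 1.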
The proposed implementation was able to deal with large corpora in a computationally efficient way, making it a promising alternative for processing the millions of source code files needed for this task.

Item Implementação de princípios de gamificação adaptativa em uma aplicação mHealth (Universidade Federal de Goiás, 2023-08-25) Anjos, Filipe Maciel de Souza dos; Carvalho, Sergio Teixeira de; http://lattes.cnpq.br/2721053239592051; Carvalho, Sergio Teixeira de; Mata, Luciana Regina Ferreira da; Berretta, Luciana de Oliveira
This work describes the implementation of a gamified mHealth application called IUProst for the treatment of urinary incontinence through pelvic exercises for men who have undergone prostate removal surgery. The development of the application followed the guidelines of Framework L, designed to guide the creation of gamified mHealth applications. The initial version of IUProst focused exclusively on the self-care dimension of Framework L and was released in November 2022. It was used by hundreds of users seeking the treatment provided by the application. Subsequently, the Gamification dimension of Framework L was employed to gamify IUProst. While implementing the game elements, it was noted that there were no clear definitions of how to implement the components so that gamification could adapt to user profiles. To address this gap, an implementation model for gamification components was developed to guide developers in creating gamification that adapts to the user profile dynamics proposed by the adaptive gamification of Framework L. The contributions of this research therefore include delivering a gamified mHealth application, analyzing usage data generated by the gamified application, and providing an implementation model for the game components that were incorporated into Framework L, enabling their use in the context of adaptive gamification.
The gamified version of IUProst was published in July 2023 and had been in use for 30 days at the time of writing. The results show that during the gamified month, patients performed approximately 2/3 more exercises than in the previous two months, accounting for 61% of the total exercises performed during the three months analyzed. The data confirmed the hypothesis that game components contribute to patient engagement with the application, and also highlighted areas for improvement in the mHealth application.

Item Heurísticas aplicadas à comunicação visual de informação científica: uma pesquisa exploratória (Universidade Federal de Goiás, 2021-07-21) Barbosa, Zanalis Alves; Federson, Fernando Marques; http://lattes.cnpq.br/0513724372523279; Federson, Fernando Marques; Soares, Telma Woerle de Lima; Lucena, Fábio Nogueira de
The result of scientific research needs to be communicated and, at the same time, to be communicative. In recent years, Digitization has affected the Visual Communication of Scientific Information. This exploratory study was carried out in two main stages. In the first stage, bibliographical research characterized the main elements of Visual Communication, the effects of Digitization, and communication through Graphics. The second stage brought together several guidelines and proposals on the process of building good visual communication.
From these stages, it was possible to propose the CVGC Heuristic Model, a question-based heuristic to assist the process of visually communicating scientific information through graphics.

Item Aprendizado de máquina automático aplicado à predição da evasão no ensino superior (Universidade Federal de Goiás, 2022-10-20) Barros, Bruno de Mattos; Nascimento, Hugo Alexandre Dantas do; http://lattes.cnpq.br/2920005922426876; Nascimento, Hugo Alexandre Dantas do; Ferreira, Deller James; Mello, Rafael Ferreira Leite de
Academic dropout is a problem that affects many public and private university students in Brazil and around the world. Machine learning techniques have been used to mitigate the problem, but they still require many manual adjustments. We present in this work a proposal for an automatic machine learning framework to predict academic dropout, with the goal of obtaining good results without the need for human intervention. This data processing framework includes the following stages: pre-processing, feature vector creation, data splitting into testing and training sets, clustering of data from different degrees for training, model selection, model parameter tuning, and explainability. Additionally, we formalize temporal data splitting approaches for the train and test datasets, as this task is not adequately addressed in most previous works.

Item Acelerando a construção de tabelas hash para dados textuais com aplicações (Universidade Federal de Goiás, 2020-11-17) Barros, Chayner Cordeiro; Martins, Wellington Santos; http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4782112U1; Martins, Wellington Santos; Rosa, Thierson Couto; Sousa, Daniel Xavier de
Text mining is characterized by the extraction of information from textual data in the most diverse formats, aiming at knowledge production, classification, clustering, and translation of this information, among other things.
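A minimal open-addressing hash table for strings illustrates the data structure this item accelerates; this sequential sketch is an assumption for exposition, not the GPU implementation of the thesis.

```python
# Open-addressing (linear probing) hash table that counts word occurrences,
# as used in text pre-processing to build vocabularies.

class StringHashTable:
    def __init__(self, capacity=1024):
        self.keys = [None] * capacity
        self.counts = [0] * capacity
        self.capacity = capacity

    def _slot(self, word):
        i = hash(word) % self.capacity
        while self.keys[i] is not None and self.keys[i] != word:
            i = (i + 1) % self.capacity  # probe the next slot on collision
        return i

    def add(self, word):
        i = self._slot(word)
        self.keys[i] = word
        self.counts[i] += 1

    def count(self, word):
        return self.counts[self._slot(word)]
```

The GPU version described above must additionally resolve these probe collisions across thousands of concurrent threads, which is where the engineering effort lies.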
In order for text mining to be efficient, some procedures are performed on the data to ensure that it contains only content relevant to the analysis to be performed, and that it is structured in a format that is easier to manipulate computationally. Several pre-processing tasks must be performed on this data to achieve the desired quality and representation. In this sense, the present work proposes an implementation of a hash table capable of efficiently exploiting the high parallelism available in GPUs, as a way to increase the performance of pre-processing tasks. Beyond presenting more efficient algorithms, this work also demonstrates the feasibility of their use in applications such as the generation of the co-occurrence matrix and the representation of text using embeddings.

Item Mineração de dados de autopsia para determinar as causas de morte na depressão (Universidade Federal de Goiás, 2021-10-08) Campos, Lívia Mancine Coelho de; Salvini, Rogerio Lopes; http://lattes.cnpq.br/5009392667450875; Salvini, Rogerio Lopes; Guimarães, Nilza Nascimento; Soares, Fabrízzio Alphonsus Alves de Melo Nunes
Depression is associated with increased mortality, but the causes of death related to depression, as well as the life expectancy of patients, are still poorly explored and controversial. Identifying diseases associated with death in patients with depression can help in public health policy decision-making and lead to more specific treatments, prevention strategies, and improved life expectancy for these patients. In studies on causes of death in which autopsy examinations are analyzed, it is possible to acquire more accurate information about the diseases related to death, since the autopsy determines the precise cause of death. In this study, we evaluated the causes of death of 1,136 subjects, according to autopsy reports from the Death Verification Service of the Capital (SVOC-USP) in the metropolitan region of São Paulo.
The diagnosis of depression for these subjects was made according to the Structured Clinical Interview for DSM-IV (SCID). Data mining based on the ICDs of the causes related to death was performed, in which eleven machine learning algorithms were applied to search for patterns determining possible causes of death related to depression. In addition to major depression, eight other subgroups of depression were analyzed. Although this study performed a broad investigation in the general population and in specific groups of patients, the results obtained by the generated models do not indicate differences in the patterns of causes of death between individuals with and without depression. This result corroborates previous studies in the literature in which the evidence relating all-cause and cause-specific mortality to depression is not significant.

Item Ensino do pensamento computacional por meio de uma abordagem transversal apoiada por padrões de programação, jogos desconectados e Scratch (Universidade Federal de Goiás, 2021-03-04) Carlos, Cássio Martins; Ferreira, Deller James; http://lattes.cnpq.br/1646629818203057; Ferreira, Deller James; Brandão, Leônidas de Oliveira; Berretta, Luciana de Oliveira
The concept of Computational Thinking (CT) has been widely addressed in research over the past ten years. These studies look for ways to use this concept in formal education. However, learners commonly face difficulties with CT, so there is a need for teaching approaches capable of stimulating its learning. Games emerge as tools capable of improving CT teaching results, student engagement, and motivation. There are also programming patterns, which can be understood as common solutions to recurring problems, allowing inexperienced students to accelerate the development of skills fundamental to CT learning.
There is also visual programming software, such as Scratch, which has an intuitive block programming interface allowing the visualization of the command blocks and their execution. CT is a skill not restricted to information technology; in fact it can and should be applied in other areas, so it must be linked to transversal approaches. To develop a transversal approach to teaching CT using the aforementioned tools, a Design-Based Research methodology was applied. This methodology involves the participation of teachers in the process of developing a set of teaching practices, so that the set is contextualized in a real educational setting and at the same time refined by specialists in each area of knowledge. The interactions with the teachers yielded satisfactory data and produced an efficient and robust set of practices for teaching CT transversally in an elementary school context.

Item Uma estratégia de pós-processamento para seleção de regras de associação para descoberta de conhecimento (Universidade Federal de Goiás, 2023-08-22) Cintra, Luiz Fernando da Cunha; Salvini, Rogerio Lopes; http://lattes.cnpq.br/5009392667450875; Salvini, Rogerio Lopes; Rosa, Thierson Couto; Aguilar Alonso, Eduardo José
Association rule mining (ARM) is a traditional data mining method that provides information about associations between items in transactional databases. A known problem of ARM is the large number of rules generated, requiring approaches to post-process these rules so that a human expert can analyze the associations found. In some contexts the domain expert is interested in investigating only one item of interest; in these cases a search guided by the item of interest can help mitigate the problem. For an exploratory analysis, this implies looking for associations in which the item of interest appears in any part of the rule.
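The selection step just described, keeping only rules in which the item of interest appears on either side, can be sketched directly. The (antecedent, consequent) pair-of-frozensets rule representation is an assumption for the sketch.

```python
# Post-process mined association rules: keep rules mentioning an item of
# interest, then group them by the role the item plays in the rule.

def select_rules(rules, item):
    """rules: list of (antecedent, consequent) frozenset pairs."""
    return [r for r in rules if item in r[0] or item in r[1]]

def group_by_role(rules, item):
    sel = select_rules(rules, item)
    return {
        "antecedent": [r for r in sel if item in r[0]],
        "consequent": [r for r in sel if item in r[1]],
    }
```

Grouping by role is one simple way to expose the "interactions and relationships" of the item that the strategy above visualizes as graphs.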
Few methods focus on post-processing the generated rules targeting an item of interest. The present work seeks to highlight the relevant associations of a given item in order to shed light on its role through its interactions and the relationships it shares with the other items. To this end, this work proposes a post-processing strategy for association rules that selects and groups rules oriented to a certain item of interest provided by an expert in a knowledge domain. In addition, a graphical form is presented so that the associations between rules and groupings of rules are more easily visualized and interpreted. Four case studies show that the proposed method is viable and reduces the number of relevant rules to a manageable amount, allowing analysis by domain experts. Graphs showing the relationships between the groups were generated in all case studies and facilitate their analysis.

Item Redes IoT baseadas em SDN e dirigidas por intenções (Universidade Federal de Goiás, 2022-06-22) Cordeiro, Bruna Michelly de Oliveira Silva; Sene Junior, Iwens Gervasio; http://lattes.cnpq.br/3693296350551971; Rodrigues Filho, Roberto; http://lattes.cnpq.br/3150867071308016; Costa, Fábio Moreira; http://lattes.cnpq.br/0925150626762308; Costa, Fábio Moreira; Sene Junior, Iwens Gervasio; Rodrigues Filho, Roberto; Oliveira Junior, Antonio Carlos de; Ueyama, Jó
Smart cities have a highly volatile operating environment in which frequent changes force systems to deal with new situations. Specifically in IoT networks, which make up the city's device infrastructure, this volatility demands the ability to change the network behavior quickly and correctly at runtime. In light of this, managing IoT networks for smart cities is highly complex.
In order to facilitate the (re)programming of network behavior, the concept of Software-Defined Networking (SDN) for IoT is gaining popularity and has been applied to control this type of network. Another concept in the literature with the potential to abstract some of this complexity is Intent-Driven Networking (IDN). Applying this concept allows the network to be programmed by means of intents, expressed in a high-level declarative language. This work explores the combination of IDN and SDN to abstract and facilitate the programming and adaptation of IoT network behavior according to intents defined by applications at runtime. SDN mechanisms are used to deploy network functions that define and/or change the behavior of the network nodes, while IDN is used to abstract the programming of the network behavior and to allow fine-grained adjustments of that behavior. The behavior to be deployed in the network is chosen at runtime, based on intents coming from the application. This choice is made by a decision-making algorithm that, by means of a metric, determines the best behavior as a function of the current state of the network. Simulation-based experiments were conducted to validate this intent processing in different usage scenarios.

Item Uso e estabilidade de seletores de variáveis baseados nos pesos de conexão de redes neurais artificiais (Universidade Federal de Goiás, 2021-03-19) Costa, Nattane Luíza da; Barbosa, Rommel Melgaço; http://lattes.cnpq.br/6228227125338610; Barbosa, Rommel Melgaço; Lima, Márcio Dias de; Lins, Isis Didier; Costa, Ronaldo Martins da; Leitão Júnior, Plínio de Sá
Artificial Neural Networks (ANNs) are machine learning models used to solve problems in several research fields. However, ANNs are often considered "black boxes", meaning that these models cannot be interpreted, as they do not provide explanatory information.
Connection Weight Based Feature Selectors (WBFS) have been proposed to extract knowledge from ANNs. Most studies using these algorithms are based on just one ANN model. However, the ANN connection weight values vary due to initialization and training, leading to variations in the importance ranking generated by a WBFS. In this context, this thesis presents a study of WBFS. First, a new voting approach is proposed to assess the stability of a WBFS, i.e., the variation in its results. Then, we evaluated the stability of the algorithms based on multilayer perceptrons (MLP) and extreme learning machines (ELM). Furthermore, an improvement to the algorithms of Garson, Olden, and Yoon is proposed, combining them with the ReliefF feature selector; the new algorithms are called FSGR, FSOR, and FSYR. The experiments were performed on 28 MLP architectures, 16 ELM architectures, and 16 datasets from the UCI Machine Learning Repository. The results show that WBFS stability differs significantly depending on the training parameters of the ANNs and on the WBFS used. In addition, the proposed algorithms proved more effective than the classic ones. As far as we know, this study was the first attempt to measure the stability of WBFS, the first to investigate the effects of different ANN training parameters on that stability, and the first to propose a combination of a WBFS with another feature selector.
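Garson's algorithm, one of the connection-weight selectors studied above, can be sketched for a single-hidden-layer network with one output. The weight-matrix layout is an assumption of the sketch: w[i][j] connects input i to hidden unit j, and v[j] connects hidden unit j to the output.

```python
# Garson's connection-weight feature importance: each input's share of each
# hidden unit's absolute input weights, scaled by that unit's absolute
# output weight, summed over hidden units and normalized to sum to 1.

def garson_importance(w, v):
    n_in, n_hid = len(w), len(v)
    contrib = [0.0] * n_in
    for j in range(n_hid):
        col_sum = sum(abs(w[i][j]) for i in range(n_in))
        if col_sum == 0:
            continue  # hidden unit receives no signal
        for i in range(n_in):
            contrib[i] += (abs(w[i][j]) / col_sum) * abs(v[j])
    total = sum(contrib)
    return [c / total for c in contrib]
```

Because `w` and `v` change with every random initialization and training run, rerunning this on retrained networks yields different rankings, which is exactly the stability problem the thesis measures.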
Moreover, the results provide information about the benefits and limitations of WBFS and represent a starting point for improving the stability of these algorithms.

Item Reconhecimento de entidades nomeadas em textos informais no domínio legislativo (Universidade Federal de Goiás, 2023-04-19) Costa, Rosimeire Pereira da; Souza, Ellen Polliana Ramos; http://lattes.cnpq.br/6593918610781356; Silva, Nádia Félix Felipe da; http://lattes.cnpq.br/7864834001694765; Silva, Nádia Félix Felipe da; Souza, Ellen Polliana Ramos; Silva, Sérgio Francisco da; Fernandes, Deborah Silva Alves
Named Entity Recognition (NER) is a challenging task in Natural Language Processing (NLP) for a language as rich as Portuguese. When applied to informal language and short texts, the task acquires a new layer of complexity, manipulating a lexicon specific to the domain in question. In this work, we expand the UlyssesNER-Br corpus for the NER task with Brazilian Portuguese comments on bills. Additionally, we enriched the annotated set with a formal corpus in order to analyze whether combining formal and informal texts from the same domain could improve the performance of NER models. Finally, we conducted experiments with a Conditional Random Fields (CRF) model and a Bidirectional LSTM-CRF (BiLSTM-CRF) model, and subsequently fine-tuned BERT and RoBERTa language models on the NER task with our dataset. We conclude that formal texts aided in identifying entities in informal texts.
The best model was the fine-tuned BERT, which achieved an F1-score of 74.63%, surpassing the benchmark of related works.

Item Construção de dispositivo com software embarcado de avaliação do reflexo pupilar humano para apoio a diagnóstico oftalmológico (Universidade Federal de Goiás, 2021-08-06) Delfino, Higor Pereira; Costa, Ronaldo Martins da; http://lattes.cnpq.br/7080590204832262; Costa, Ronaldo Martins da; Gonçalves, Cristhiane; Laureano, Gustavo Teodoro
Diagnosis based on precise metrics, performed quickly, safely, and efficiently, by non-invasive means and at low cost, brings great advantages to the medical field. Meeting this need are studies of the human pupillary reflex showing that the eyes are more than an organ of the sensory system and can provide accurate and reliable biosignals to aid diagnosis. This work aims to build automated pupillometry equipment to help identify pathologies or disorders, using computer vision techniques. In this sense, this work proposes a pupillometer that can be used on people, capable of stimulating the pupil at various wavelengths, providing a friendly interface and a pre-assessment of the exam. The goal is a low-cost, easy-to-operate, minimally viable piece of equipment for use in an ophthalmological office. The equipment built in this work presents adequate quality, automation, and configuration flexibility, with great potential for use in several studies in the field of automated pupillometry.
Regarding the induction of pupillary reactivity, the equipment works with flexible and dynamic configurations, allowing the intensity of the stimuli to be adjusted and strobe light to be used.

Item Alianças defensivas em grafos (Universidade Federal de Goiás, 2010-03-26) Dias, Elisângela Silva; Barbosa, Rommel Melgaço; http://lattes.cnpq.br/6228227125338610; Barbosa, Rommel Melgaço; Martins, Wellington Santos; Tronto, Íris Fabiana de Barcelos
A defensive alliance in a graph G = (V,E) is a set of vertices S ⊆ V satisfying the condition that every vertex v ∈ S has at most one more neighbor in V − S than in S. With this type of alliance, the vertices in S jointly defend themselves against the vertices in V − S. This dissertation introduces the basic concepts for understanding alliances in graphs, along with a variety of alliances and their associated numbers, and provides some mathematical properties of these alliances, focusing mainly on defensive alliances in graphs. It presents theorems, corollaries, lemmas, propositions and observations, with appropriate proofs, involving the minimum degree δ(G) of a graph G, the maximum degree ∆(G), the algebraic connectivity µ, the total domination number γt(G), the eccentricity, the edge connectivity λ(G), the chromatic number χ(G), the (vertex) independence number β0(G), the vertex connectivity κ(G), the order of the largest clique ω(G), and the domination number γ(G). It also shows a generalization of defensive alliances, called defensive k-alliances, and the definition and properties of secure sets in G.
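The defensive alliance condition above checks directly in code: every vertex of S may have at most one more neighbor outside S than inside it (the vertex itself accounts for the extra defender). The dict-of-neighbor-sets adjacency representation is an assumption of the sketch.

```python
# Verify the defensive alliance condition: for every v in S,
# |N(v) - S| <= |N(v) & S| + 1.

def is_defensive_alliance(adj, S):
    """adj: dict mapping each vertex to its set of neighbors."""
    S = set(S)
    for v in S:
        inside = len(adj[v] & S)    # neighbors that help defend v
        outside = len(adj[v] - S)   # potential attackers of v
        if outside > inside + 1:
            return False
    return True
```

On the path a-b-c, the middle vertex alone is not a defensive alliance (two attackers, no helpers), but {a, b} is.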
A secure set S ⊆ V of a graph G = (V,E) is a set whose every nonempty subset can be successfully defended against an attack, under appropriate definitions of “attack” and “defence”.

Item Um sistema WebGIS para classificação supervisionada de cobertura do solo utilizando inteligência artificial (Universidade Federal de Goiás, 2022-10-21) Fernandes, Yuri Kuivjogi; Costa, Ronaldo Martins da; http://lattes.cnpq.br/7080590204832262; Costa, Ronaldo Martins da; Oliveira, Bruna Mendes de; Cremon, Édipo Henrique
With the advancement of data generation for Earth observation and its free availability, the Remote Sensing (RS) area has advanced significantly. Over the years, RS applications have migrated to the internet environment, facilitating searches for different uses. This work proposes a new approach for collecting and manipulating spatial data for pixel-based spectral classification. A web application was built integrating Google Earth Engine, Google Maps, and Auto Machine Learning services for performance analysis. Experiments using samples from land cover regions in Goiás, Brazil, demonstrate gains in time, processing, and data storage. Such contributions address the large amount of information from satellite images collected in a conventional way that later goes unused. As a final result, the classification process produces an image representing the different land cover classes. Model training achieved an accuracy of 99.85% using the Light Gradient Boosting Machine (LightGBM) model.
In addition to these benefits, the optimization of processes allows the inclusion of research from other major areas, thus promoting greater dissemination of knowledge in RS and pattern recognition applications.

Item Análise multirresolução de imagens gigapixel para detecção de faces e pedestres (Universidade Federal de Goiás, 2023-09-27) Ferreira, Cristiane Bastos Rocha; Pedrini, Hélio; http://lattes.cnpq.br/9600140904712115; Soares, Fabrízzio Alphonsus Alves de Melo Nunes; http://lattes.cnpq.br/7206645857721831; Soares, Fabrízzio Alphonsus Alves de Melo Nunes; Pedrini, Helio; Santos, Edimilson Batista dos; Borges, Díbio Leandro; Fernandes, Deborah Silva Alves
Gigapixel images, also known as gigaimages, can be formed by merging a sequence of individual images obtained from a scene scanning process. Such images can be understood as a mosaic built from a large number of high-resolution digital images. A gigapixel image provides a powerful way to observe minimal details that are very far from the observer, enabling research in many areas such as pedestrian detection, surveillance, and security. Since this image category involves a high volume of data captured sequentially, its generation and analysis pose many challenges, and applying conventional algorithms designed for non-gigapixel images directly can become unfeasible in this context. Thus, this work proposes a method for scanning, manipulating, and analyzing multiresolution gigapixel images for pedestrian and face identification applications using traditional algorithms.
This approach is evaluated on gigapixel images with both low and high density of people and faces, presenting promising results.

Item Escalonamento de recursos em redes sem fio 5G baseado em otimização de retardo e de alocação de potência considerando comunicação dispositivo a dispositivo (Universidade Federal de Goiás, 2021-10-15) Ferreira, Marcus Vinícius Gonzaga; Vieira, Flávio Henrique Teles; http://lattes.cnpq.br/0920629723928382; Vieira, Flávio Henrique Teles; Madeira, Edmundo Roberto Mauro; Lima, Marcos Antônio Cardoso de; Rocha, Flávio Geraldo Coelho; Oliveira Júnior, Antônio Carlos de
In this thesis, a resource scheduling scheme is proposed for 5G wireless networks based on CP-OFDM (Cyclic Prefix - Orthogonal Frequency Division Multiplexing) and f-OFDM (filtered-OFDM) modulations, in order to optimize the average delay and the power allocation for users. In the proposed approach, the transmission rate is calculated and the modulation format is defined so as to minimize the system BER (Bit Error Rate). In addition to the transmission modes determined to minimize the BER, the algorithm considers the calculation of the system's weighted throughput to optimize the users' average delay. Additionally, an algorithm is proposed for uplink transmission in 5G wireless networks with D2D (Device-to-Device) multi-sharing communication, which initially allocates resources to the CUEs (Cellular User Equipments) and subsequently allocates network resources to communication between DUE (D2D User Equipment) pairs based on the optimization of delay and power allocation. The proposed algorithm, namely DMCG (Delay Minimization Conflict Graph), considers the minimization of an estimated delay function, using concepts of Network Calculus, to decide on the allocation of the network CUEs' idle resources to DUE pairs.
In this thesis, the performance of the proposed algorithms for downlink and uplink transmission is verified and compared with other algorithms in the literature in terms of several QoS (Quality of Service) parameters, considering the carrier aggregation and 256-QAM (Quadrature Amplitude Modulation) technologies. The computational simulations also consider scenarios with millimeter-wave propagation and the 5G specifications of 3GPP (3rd Generation Partnership Project) Release 15. The simulation results show that the proposed downlink and uplink algorithms provide better system performance in terms of throughput and delay, in addition to lower processing time compared to optimization heuristics, while other QoS parameters remain compatible with those of the compared algorithms.
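As an illustrative aside on the Network Calculus concept cited in the abstract above (a minimal sketch under textbook assumptions, not the thesis's DMCG algorithm; the pair names and traffic figures are hypothetical), the classic delay bound for a token-bucket arrival curve served by a rate-latency server can be computed and used to rank D2D pairs competing for an idle resource:

```python
def delay_bound(sigma: float, rho: float, rate: float, latency: float) -> float:
    """Network Calculus delay bound for a token-bucket arrival curve
    a(t) = sigma + rho*t crossing a rate-latency server
    beta(t) = rate * max(0, t - latency):
        D <= latency + sigma / rate,
    valid only under the stability condition rho <= rate."""
    if rho > rate:
        raise ValueError("unstable: sustained arrival rate exceeds service rate")
    return latency + sigma / rate

# Hypothetical example: rank two DUE pairs by their estimated delay
# bound and give the idle resource block to the most delay-pressed one.
pairs = {
    "due_pair_1": delay_bound(sigma=1500.0, rho=100e3, rate=500e3, latency=0.002),
    "due_pair_2": delay_bound(sigma=500.0, rho=200e3, rate=500e3, latency=0.001),
}
most_urgent = max(pairs, key=pairs.get)  # pair with the largest delay bound
```

The design choice sketched here (allocating to the pair with the worst deterministic delay bound) is only one plausible greedy rule; the dissertation's actual decision criterion over the conflict graph is not reproduced here.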