INF - Instituto de Informática
Browsing INF - Instituto de Informática by access type "Attribution-NonCommercial-NoDerivs 3.0 Brazil"
Now showing 1 - 12 of 12
Item Reconhecimento de padrões por processos adaptativos de compressão (Universidade Federal de Goiás, 2020-03-02)
Bailão, Adriano Soares de Oliveira; Delbem, Alexandre Cláudio Botazzo; http://lattes.cnpq.br/1201079310363734; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Silva, Nadia Felix Felipe da; Duque, Cláudio Gottschalg; Costa, Ronaldo Martins da; Monaco, Francisco José
Data compression is a process widely used by industry for the storage and transport of information, applied to a variety of domains such as text, image, audio, and video. Compression processes are sets of mathematical operations that aim to represent each data sample in compressed form, that is, with a smaller size. Pattern recognition techniques can use compression properties and metrics to design machine learning models based on adaptive algorithms that represent samples in compressed form. One advantage of adaptive compression models is that dimensionality reduction follows directly from the compression properties. This thesis proposes a general unsupervised learning model (for different problem domains and different types of data) that combines adaptive compression strategies in two phases: granulation, responsible for perceiving and representing the knowledge needed to solve a generalization problem, and coding, responsible for structuring the reasoning of the model based on the representation and organization of the problem objects. The reasoning expressed by the model denotes the ability to generalize data objects in the general context. Generic methods based on lossless compressors lack generalization capacity for some types of data objects, so this thesis also employs lossy compression techniques to circumvent the problem and increase the generalization capacity of the model. Results demonstrate that techniques and metrics based on adaptive compression produce a good approximation of the original data samples in data sources with high dimensionality. Tests point to machine learning models with good generalization capabilities derived from the dimensionality reduction offered by adaptive compression processes.
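The thesis's granulation and coding phases are not reproduced in the abstract, but the core intuition of compression-based pattern recognition can be illustrated with the classic normalized compression distance (NCD), which approximates Kolmogorov-style similarity with an off-the-shelf compressor. A minimal sketch, assuming zlib as a stand-in lossless compressor; the sample data are illustrative only:

```python
import zlib

def csize(data: bytes) -> int:
    """Compressed size of `data` under zlib, standing in for any real compressor."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for similar objects, near 1 for unrelated ones."""
    cx, cy, cxy = csize(x), csize(y), csize(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

phrase = b"the quick brown fox jumps over the lazy dog "
noise = bytes(range(256)) * 4
print(ncd(phrase * 8, phrase * 7))  # small distance: shared structure compresses away
print(ncd(phrase * 8, noise))       # larger distance: nothing shared to exploit
```

As the abstract notes, lossless distances like this one degrade on some object types (notably images), which is the thesis's motivation for bringing lossy compression into the model.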
Item KLTVO: Algoritmo de Odometria Visual estéreo baseada em seleção de pontos chaves pela imposição das restrições da geometria epipolar (Universidade Federal de Goiás, 2020-07-23)
Dias, Nigel Joseph Bandeira; Laureano, Gustavo Teodoro; http://lattes.cnpq.br/4418446095942420; Laureano, Gustavo Teodoro; Colombini, Esther Luna; Costa, Ronaldo Martins da
Self-localization is one of the key tasks for applications such as robotics, self-driving cars, and augmented reality. Cameras have been broadly adopted because of their affordable cost, low energy consumption, rich information, and ability to provide results comparable to more expensive sensors. Among visual localization methods, feature-based Visual Odometry (VO) has attracted substantial attention due to its low computational demand, which makes it suitable for embedded systems. This follows from the nature of the information used, since the pose of the camera is estimated from the geometric consistency of feature matches. On the other hand, these methods tend to be more sensitive to errors resulting from bad correspondences. This work proposes a correspondence methodology based on a circular matching procedure, which fuses well-known Computer Vision strategies to enhance the quality of feature matching. The process combines the INSAD (Illumination Normalized Sum of Absolute Differences) metric for stereo feature matching with the KLT algorithm (Kanade-Lucas-Tomasi feature tracker) for feature tracking between consecutive frames. In both steps, the constraints of epipolar geometry are imposed in order to obtain fast and accurate feature matching. The proposed methodology was evaluated on the KITTI dataset and against other methods. Experimental results demonstrate that the proposed method contributes to faster convergence and achieves high local accuracy. Furthermore, even without global optimizations, the proposed method proved accurate for long-term tracking compared to other methods.
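KLTVO's circular matching procedure itself is not spelled out in the abstract, but the epipolar filter it relies on is standard: a candidate correspondence is kept only if the point in one image lies close to the epipolar line induced by its match in the other. A minimal sketch, assuming a fundamental matrix F is available; for rectified stereo pairs F reduces to the constant matrix shown, and the pixel threshold is illustrative:

```python
import numpy as np

def epipolar_error(F: np.ndarray, p1, p2) -> float:
    """Distance from point p2 (image 2) to the epipolar line F @ p1 of point p1 (image 1)."""
    h1 = np.array([p1[0], p1[1], 1.0])
    h2 = np.array([p2[0], p2[1], 1.0])
    line = F @ h1  # line coefficients (a, b, c): a*x + b*y + c = 0
    return abs(line @ h2) / np.hypot(line[0], line[1])

def filter_matches(F, matches, thresh=1.0):
    """Keep only candidate correspondences consistent with the epipolar geometry."""
    return [(p1, p2) for p1, p2 in matches if epipolar_error(F, p1, p2) < thresh]

# For a rectified stereo pair, epipolar lines are image rows and F takes this form,
# so the error reduces to the row difference |y1 - y2|.
F_rect = np.array([[0.0, 0.0,  0.0],
                   [0.0, 0.0, -1.0],
                   [0.0, 1.0,  0.0]])
print(filter_matches(F_rect, [((10, 20), (4, 20.3)),   # kept: same row
                              ((10, 20), (4, 35.0))]))  # rejected: off the epipolar line
```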
Item Classificação de cenas utilizando a análise da aleatoriedade por aproximação da complexidade de Kolmogorov (Universidade Federal de Goiás, 2020-03-15)
Feitosa, Rafael Divino Ferreira; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Delbem, Alexandre Cláudio Botazzo; Soares, Fabrízzio Alphonsus Alves de Melo Nunes; Laureano, Gustavo Teodoro; Costa, Ronaldo Martins da
In many pattern recognition problems, discriminant features are unknown and/or class boundaries are not well defined. Several studies have used data compression to discover knowledge without feature extraction and selection. The basic idea is that two distinct objects can be grouped as similar if the information content of one explains, in a significant way, the information content of the other. However, compression-based techniques are not efficient for images, as they disregard the semantics present in the spatial correlation of two-dimensional data. A classifier is proposed that estimates the visual complexity of scenes, namely Pattern Recognition by Randomness (PRR). The method operates through data transformations that expand the most discriminating features and suppress details. The main contribution of the work is the use of randomness as a discrimination measure. The approximation between scenes and trained models, based on representational distortion, promotes a lossy compression process. This loss is associated with irrelevant details when the scene is reconstructed with the representation of the true class, or with information degradation when it is reconstructed with divergent representations. The more information preserved, the greater the randomness of the reconstruction. From a mathematical point of view, the method is explained by two main measures in the U-dimensional plane: intersection and dispersion. The results yielded accuracy of 0.6967 for a 12-class problem and 0.9286 for 7 classes. Compared with k-NN and a data mining toolkit, the proposed classifier was superior. The method is capable of generating efficient models from few training samples. It is invariant to vertical and horizontal reflections and resistant to some geometric transformations and image processing operations.

Item Estudo da aplicação de agrupamento hierárquico aglomerativo em dados clínicos de transtorno afetivo bipolar (Universidade Federal de Goiás, 2020-04-22)
Freitas, Fabrício Alves de; Dias, Rodrigo da Silva; http://lattes.cnpq.br/2892758391071617; Salvini, Rogerio Lopes; http://lattes.cnpq.br/5009392667450875; Salvini, Rogerio Lopes; Federson, Fernando Marques; Alonso, Eduardo José Aguilar
Bipolar Affective Disorder (BD) is a mood disorder characterized by recurrent episodes of depression or mania. BD usually causes a drastic reduction in a person's quality of life and can even lead to suicide. The Systematic Treatment Enhancement Program for Bipolar Disorder (STEP-BD) is a long-term outpatient study designed to find out which treatments or treatment combinations are most effective in treating episodes of depression and mania and in preventing recurrent episodes in people with BD. The research group has been working with STEP-BD data in order to support medical decisions. In this work, in particular, the technique known as Hierarchical Agglomerative Clustering (AGNES) was used with the intention of detecting possible psychiatric comorbidities characteristic of two groups: patients with and without a history of attempted suicide. The results of this study indicate that it is possible not only to detect the comorbidities of each group, but also to study individual characteristics, current mood symptoms, and sleep characteristics present in each study group. The results obtained reinforce findings in the literature.
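The abstract does not state which linkage or distance AGNES was run with, so the sketch below shows only one plausible configuration: SciPy's agglomerative clustering with average linkage over Jaccard distance, a common pairing for binary clinical indicators such as comorbidity flags. The data matrix is hypothetical:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical binary comorbidity matrix: rows = patients, columns = comorbidity flags.
X = np.array([
    [1, 0, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [0, 1, 0, 1],
])

# AGNES = bottom-up agglomerative clustering; here with average linkage on
# Jaccard distance, which suits 0/1 clinical indicators.
Z = linkage(X, method="average", metric="jaccard")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # e.g., [1 1 2 2]: two distinct comorbidity profiles
```

Cutting the resulting dendrogram at different heights is what lets a study like this contrast the comorbidity profiles of the two patient groups.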
Item Algoritmos de junção por similaridade sobre fluxo de dados (Universidade Federal de Goiás, 2020-07-21)
Pacífico, Lucas Oliveira; Ribeiro, Leonardo Andrade; http://lattes.cnpq.br/4036932351063584; Ribeiro, Leonardo Andrade; Dorneles, Carina Friedrich; Leitão Junior, Plinio de Sa
In today's Big Data era, data is generated and collected at high speed, which imposes strict performance and memory requirements on processing. Moreover, the presence of heterogeneous data demands similarity operations, which are computationally more expensive. In this context, the present work investigates the problem of performing similarity joins over a continuous stream of data represented by sets. The concept of temporal similarity is employed, where the similarity between two data items decreases with the distance between their arrival times. The proposed algorithms directly incorporate this concept to reduce the comparison space and memory consumption. Moreover, a new technique based on the partial frequency of the data elements is presented to substantially reduce processing cost. Results of the experimental evaluation demonstrate that the presented techniques provide substantial performance gains and good memory usage.
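The pruning and partial-frequency techniques are the dissertation's contribution and are not reproduced here; the sketch below only illustrates the underlying notion of temporal set similarity — plain Jaccard similarity damped by the gap between arrival times — inside a naive windowed stream join. The exponential decay form, threshold, and decay rate are assumptions for illustration:

```python
from math import exp

def jaccard(r: set, s: set) -> float:
    union = r | s
    return len(r & s) / len(union) if union else 0.0

def temporal_similarity(r, s, t_r, t_s, decay=0.01):
    """Set similarity damped by arrival-time distance (one possible decay form)."""
    return jaccard(r, s) * exp(-decay * abs(t_r - t_s))

def stream_join(window, new_record, t_now, threshold=0.6, decay=0.01):
    """Naive baseline: compare the new record against every record still buffered.
    The dissertation's algorithms prune this comparison space; old records whose
    decayed similarity can no longer reach the threshold may also be evicted."""
    matches = [(t, rec) for t, rec in window
               if temporal_similarity(new_record, rec, t_now, t, decay) >= threshold]
    window.append((t_now, new_record))
    return matches

window = []
print(stream_join(window, {"a", "b", "c"}, t_now=0))            # []
print(stream_join(window, {"a", "b", "c", "d"}, t_now=5))       # matches first record
```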
Item BAAnet: an efficient deep neural network for automatic bone age assessment (Universidade Federal de Goiás, 2020-07-14)
Pereira, Lucas Araújo; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Carvalho, Hervaldo Sampaio; Costa, Ronaldo Martins da
The task of bone age assessment is constantly performed by radiologists worldwide to aid in the diagnosis of metabolic and endocrine disorders in children. Currently, the most used methods for this task are manual methods based on image comparison, developed between the 1950s and 1960s. These methods make the evaluation process long, tedious, and costly, besides presenting high variability and uncertainty between examinations and among examiners. This work starts by describing the characteristics of the bone age assessment problem. It then presents some automatic solutions proposed in recent decades. Later it describes the paradigm shift in the field of computer vision brought by convolutional neural networks and automatic feature extraction strategies, but also points out the high computational cost of newly published neural networks when applied to high-resolution images such as those from medical imaging applications. Finally, it proposes a novel convolutional neural network to perform bone age assessment in a computationally efficient manner. This network is named BAAnet, and it introduces a novel convolutional module called the Incremental Convolutional Estimation (ICE) module. We evaluate the module against two standard benchmarks and see an average relative improvement of approximately 10% in performance. We also submitted an ensemble of BAAnet models to a competition organized by the Radiological Society of North America (RSNA) with the goal of obtaining the lowest mean absolute error on a dataset of 14236 images collected for the event. Our submission finished third, with a mean absolute error of 4.38 months (less than 3% behind first place) on a dataset of 200 images used exclusively for the final evaluation of the competing models. BAAnet achieved this performance with a number of parameters and a computational cost an order of magnitude smaller than the models in first and second place.

Item Plano de voo autônomo e arquitetura Drone-as-a-Service para cobertura de redes IoT (Universidade Federal de Goiás, 2020-08-04)
Rodrigues, Lucas Soares; Cardoso, Kleber Vieira; http://lattes.cnpq.br/0268732896111424; Oliveira Júnior, Antonio Carlos de; http://lattes.cnpq.br/3148813459575445; Oliveira Júnior, Antonio Carlos de; Cardoso, Kleber Vieira; Moreira Junior, Waldir Aranha; Both, Cristiano Bonato
This work approaches the vehicle routing problem with a focus on drones as data collectors in a network of IoT sensors that generate useful information for an intelligent environment system. The dissertation presents a model for a single aircraft as well as for multiple ones. The flight plan generated from the model focuses on preventing breakdowns due to lack of battery charge. Thus, in order to maximize the number of visited nodes, several constraints in this model differ from those of conventional vehicle routing models. In addition to the drone's flight autonomy, another limiting aspect is considered: data storage. The work also proposes an architecture for sharing this data collection function among various external applications; the architecture is based on cloud services and components and applies the concept of Drone-as-a-Service. A simulation environment is also presented to visualize the flights generated by the routing algorithms, with the conversion of the flight plan into MAVLink commands, and a tool is defined for generating those flight plans and managing the sensors in a given area.
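The dissertation's routing model and its exact constraints are not given in the abstract; as a baseline for intuition, the sketch below shows the simplest battery-aware heuristic — a nearest-neighbor tour that visits a sensor only if the drone can still return to base afterwards. Coordinates, the battery budget, and distance-as-energy are all illustrative assumptions, not the dissertation's formulation:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(base, sensors, battery):
    """Nearest-neighbor tour under a battery budget: visit the closest unvisited
    sensor only if enough charge remains to fly home afterwards. A toy baseline;
    the dissertation maximizes visited nodes with richer constraints (e.g., storage)."""
    route, pos, remaining = [base], base, battery
    todo = set(sensors)
    while todo:
        nxt = min(todo, key=lambda s: dist(pos, s))
        cost = dist(pos, nxt)
        if remaining - cost < dist(nxt, base):
            break  # visiting nxt would strand the drone away from base
        route.append(nxt)
        remaining -= cost
        pos = nxt
        todo.remove(nxt)
    route.append(base)
    return route

print(greedy_route((0, 0), [(1, 1), (2, 0), (5, 5)], battery=12.0))
```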
Item Localização de defeitos evolucionária baseada em fluxo de dados (Universidade Federal de Goiás, 2020-07-22)
Silva Junior, Deuslirio da; Leitão Junior, Plínio de Sá; http://lattes.cnpq.br/4480334653242457; Leitão Junior, Plínio de Sá; Soares, Telma Woerle de Lima; Chaim, Marcos Lordello
Context: Fault localization is the activity of precisely indicating the faulty commands in a buggy program. It is known to be a costly and monotonous activity, and automating it, the objective of several studies, has proved to be a challenging problem. A common strategy is to associate a suspiciousness value with each command in the code. Most methods using this strategy are heuristics that take the commands executed during software testing as an information source; these approaches are known as being based on the control-flow coverage spectrum. Objective: The present study investigates another source of information about faults, the data-flow, which is expressed by the relationship between the places where variables are defined and the places where they are used. How the data-flow can contribute to fault localization, and how to use it in evolutionary strategies, are the interests of this work. Approach: Two evolutionary approaches are presented. One is based on a genetic algorithm (GA) that seeks to combine different heuristics using control-flow and data-flow as sources of information about faults. The other, based on genetic programming (GP), uses new variables that express the data-flow coverage spectrum to generate new equations better fitted to fault localization. Results: The GA approach was evaluated on 7 small C programs that make up the Siemens Suite, a benchmark widely used in similar approaches, and also on a set of faulty versions of the Java program jsoup. The evaluation metrics used describe effectiveness from an absolute point of view as well as the dependence on tiebreak strategies. In this context, although the approach using only data-flow produces competitive results, the hybrid approach (control-flow and data-flow) stands out for maintaining good effectiveness while being less dependent on tiebreakers. The GP approach, in turn, was investigated for effectiveness using popular metrics in this context, and also for efficiency, by counting the execution cycles (generations) needed to reach competitive results. Again, the hybrid strategy stands out for producing the same results as other methods while requiring fewer generations to do so. Conclusions: The results of both approaches highlight that although data-flow alone has good effectiveness in locating defects, hybrid strategies, using control- and data-flow as sources of information about defects, generally outperform all the methods used for comparison. However, further investigations must be conducted on different sets of programs.
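The evolved GA/GP formulas are the dissertation's contribution; as a reference point, the sketch below computes one of the classic spectrum-based baselines such approaches combine, Ochiai suspiciousness, from coverage counts. The same formula applies unchanged if the covered entities are definition-use pairs (the data-flow spectrum) instead of statements. The spectrum values are hypothetical:

```python
import math

def ochiai(failed_cov: int, passed_cov: int, total_failed: int) -> float:
    """Ochiai suspiciousness for one entity: failed_cov / sqrt(total_failed * (failed_cov + passed_cov)),
    where failed_cov/passed_cov count the failing/passing tests that executed it."""
    denom = math.sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

# Hypothetical spectrum: entity -> (failing tests covering it, passing tests covering it).
# Entities could be statements (control-flow) or def-use pairs (data-flow).
spectrum = {"s1": (3, 10), "s2": (5, 1), "s3": (0, 8)}
total_failed = 5

ranking = sorted(spectrum, key=lambda e: ochiai(*spectrum[e], total_failed), reverse=True)
print(ranking)  # ['s2', 's1', 's3']: s2 runs in all failing tests and almost no passing ones
```

Ties in such rankings are exactly why the abstract emphasizes dependence on tiebreak strategies: entities with identical scores must be ordered by some secondary rule.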
Item Redes neurais profundas para reconhecimento facial no contexto de segurança pública (Universidade Federal de Goiás, 2020-07-29)
Silva Júnior, Jones José da; Laureano, Gustavo Teodoro; http://lattes.cnpq.br/4418446095942420; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Calixto, Wesley Pacheco; Costa, Ronaldo Martins da
Face recognition is an important tool for law enforcement. Being able to compare the face image of a suspect filmed at a crime scene against a database of millions of photos, and thus find his true identity, represents a significant increase in crime resolution rates. Although this task has been researched since the 1970s, it was the use of Convolutional Neural Networks (CNNs) from 2014 onwards that brought a relevant advance, allowing some models to reach 99.63% accuracy on the Labeled Faces in the Wild (LFW) benchmark. Despite different architectures and cost functions, a common feature of the papers published since then is that the models are trained in a supervised manner, thus requiring large collections of previously labeled facial images. Even models that are state of the art on public benchmarks may not achieve the same results in the real world. The main reason is the unrepresentative demographic distribution of these public datasets, which yields models with greater accuracy on specific demographic subgroups and worse accuracy on others, such as Afro-descendant women. This work investigates fine-tuning strategies for deep neural network architectures for facial recognition in the public safety context, using a dataset of Brazilian faces in order to generate a more accurate model for a police investigations department. We managed to improve accuracy on a test set with samples representative of this context by training a model on a private dataset with a very small number of samples compared to the public ones.

Item Predição de internações por condições sensíveis à atenção básica (Universidade Federal de Goiás, 2020-04-16)
Silva, Zilmar Sousa; Salvini, Rogerio Lopes; http://lattes.cnpq.br/5009392667450875; Salvini, Rogerio Lopes; Lucena, Fábio Nogueira de; Soares, Anderson da Silva
One of the main problems, with strategic and financial consequences for the public health system and private health insurance providers, is the occurrence of hospitalizations for Ambulatory Care Sensitive Conditions (ACSC), that is, hospitalizations that could be avoided if certain actions were performed in outpatient care. Health systems hold significant data regarding patients seen in their networks, coming from a range of information systems for primary outpatient and hospital care. These data can be mined for patterns that indicate a patient's risk of hospitalization in advance. The main purpose of this work is to use data mining techniques, in particular machine learning algorithms, to generate models for predicting ACSC in six pathological subgroups that fall into this category: urinary tract infection, heart failure, unspecified bronchitis, chronic obstructive pulmonary disease, diabetes mellitus, and essential hypertension. The data for this project come from patient care in health units in the municipality of Mineiros, GO, Brazil. Among the models generated, those that achieved the best results were Decision Tree and SVM (Support Vector Machine), with accuracy values ranging from 81% (chronic obstructive pulmonary disease) to 92% (essential hypertension) and AUC ROC ranging from 87% (urinary tract infection) to 97% (essential hypertension). The results indicate that machine learning models are promising for the prediction of ACSC and, combined with new studies using temporal windows for forecasting, can contribute effectively to the reduction of hospitalizations, thus sparing patients the negative experience of hospital treatment.
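The study's features and preprocessing are not detailed in the abstract; the sketch below only shows the shape of such an experiment — training one of the reported model families (a decision tree) and scoring it with the same two metrics the abstract reports, accuracy and AUC ROC. The synthetic data stands in for the Mineiros patient records, and all hyperparameters are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in data: in the study, features come from primary-care records and the
# label marks a hospitalization for an ACSC subgroup (e.g., essential hypertension).
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]  # class-1 probabilities, needed for AUC
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("AUC ROC :", roc_auc_score(y_te, proba))
```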
Item Estudo sobre requisitos e automação do teste de acessibilidade para surdos em aplicações Web (Universidade Federal de Goiás, 2020-08-21)
Sousa, Caio César Silva; Rodrigues, Cássio Leonardo; Rodrigues, Cássio Leonardo; Chaveiro, Neuma; Ferreira, Deller James
Accessibility in web environments means that tools and technologies are developed and modeled so that people with disabilities are able to use them. It is also necessary to ensure that the target audience can understand, navigate, interact with, and perceive the proposed environment. Accessibility on the web covers several groups of disabilities: auditory, visual, neurological, cognitive, and others. Over time, numerous studies have shown ways to conduct accessibility tests; however, their authors do not attempt to evaluate accessibility with an emphasis on deaf users, so there is a lack of resources that provide acceptable accessibility tests for deaf people. This dissertation therefore aims to identify accessibility requirements for deaf people and to address automation approaches for accessibility testing. To this end, a study was carried out on accessibility guidelines and standards for deaf people. During this study, a prototype web application called the interpreter center was developed. This application interacts with deaf users through videos in Sign Language, produced in partnership with the Language department of the Federal University of Goiás. The results of this work concern the definition of accessibility requirements based on the accessibility guidelines identified through the literature review. The accessibility requirements were used to elaborate two research questions for carrying out the accessibility tests. Static software analysis was the main approach used to perform the accessibility tests, through two testing strategies: i) metadata testing; and ii) testing by descriptive statistical analysis. The contributions of this dissertation involve assessing accessibility for deaf users in web environments. To automate the evaluation through tooling support, a tool called Web Accessibility Evaluation Testing was developed to address accessibility on web pages. The assessment is based on the accessibility requirements, and warnings report the success or failure of each accessibility check on a web page. We conclude that the presented approaches are effective for testing accessibility for deaf people in web environments.

Item Predição de desempenho no Moodle usando princípios da andragogia (Universidade Federal de Goiás, 2020-05-15)
Trindade, Fernando Ribeiro; Ferreira, Deller James; http://lattes.cnpq.br/1646629818203057; Ambrósio, Ana Paula Laboissière; http://lattes.cnpq.br/0900834483461062; Rodrigues, Cássio; Siqueira, Sean Wolfgand Matsui; Ferreira, Deller James
According to current literature, the teaching skills of tutors are essential to ensure excellence in teaching and, consequently, students' interest in courses. In online teaching environments, students and tutors interact with each other through the various communication resources provided by virtual learning environments (VLEs). As a result, a large amount of educational data is collected by VLEs, making it possible to analyze these data. However, in the academic literature, few studies have collected behavioral data from tutors and used these data to predict students' academic performance. Therefore, in this dissertation a framework of tutoring characteristics correlated with good student performance was elaborated, and this framework was used to guide the collection of tutor data, which was then used to predict student performance. The tutoring characteristics included in the framework were extracted from previous research that investigated each tutoring attribute, and from the tutoring attributes advocated by Andragogy. The prediction of student performance was implemented as an extension of the Moodle Predicta tool, which classifies students according to likely failure or approval based on the behavioral data of students and tutors. The implementation of the prediction was preceded by a performance analysis of the classification algorithms; the classifier implemented was RandomForest, which achieved the best performance according to the AUC metric. Educational data from the Moodle of the Goiás Judicial School (EJUG) was used in a case study. Two exploratory data analyses were conducted to learn about the courses and to investigate the framework's tutoring characteristics among EJUG tutors. The data from EJUG tutors were included in the classification model used to predict student performance, showing that the actions of tutors can impact students' academic achievement.
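The abstract reports that the deployed classifier was chosen by comparing algorithms on the AUC metric, with RandomForest winning. A minimal sketch of that selection step, using cross-validated AUC in scikit-learn; the candidate set and the synthetic stand-in for the Moodle student/tutor features are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Stand-in data: rows would combine student and tutor behavioral features
# extracted from Moodle logs; the label is pass/fail.
X, y = make_classification(n_samples=500, n_features=15, random_state=1)

candidates = {
    "RandomForest": RandomForestClassifier(random_state=1),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "NaiveBayes": GaussianNB(),
}
for name, clf in candidates.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")  # the best scorer is the one deployed
```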