INF - Instituto de Informática
Browsing INF - Instituto de Informática by Defense Type "Tese"
Now showing 1 - 20 of 59
Item Aplicação de técnicas de visualização de informações para os problemas de agendamento de horários educacionais (Universidade Federal de Goiás, 2023-10-20)
Alencar, Wanderley de Souza; Jradi, Walid Abdala Rfaei; http://lattes.cnpq.br/6868170610194494; Nascimento, Hugo Alexandre Dantas do; http://lattes.cnpq.br/2920005922426876; Nascimento, Hugo Alexandre Dantas do; Jradi, Walid Abdala Rfaei; Bueno, Elivelton Ferreira; Gondim, Halley Wesley Alexandre Silva; Carvalho, Cedric Luiz de
An important category, or class, of combinatorial optimization problems is called Educational Timetabling Problems (Ed-TTPs). Broadly, this category includes problems in which it is necessary to allocate teachers, subjects (lectures) and, possibly, rooms in order to build a timetable of classes or examinations to be used in a certain academic period in an educational institution (school, college, university, etc.). The timetable must observe a set of constraints in order to satisfy, as much as possible, a set of desirable goals. The current research proposes the use of methods and techniques from the Information Visualization (IV) area to help non-technical users, through an interactive approach, better understand and solve problem instances in the scope of their educational institutions. In the proposed approach, human actions and those performed by a computational system interact symbiotically toward the problem resolution, with the interaction carried out through a graphical user interface that implements ideas originating from the User Hints framework [Nas03]. Among the main contributions are: (1) recognition and characterization of the techniques most used for the presentation and/or visualization of Ed-TTP solutions; (2) conception of a mathematical notation to formalize the problem specification, including the introduction of a new idea, called flexibility, applied to the entities involved in the timetable; (3) proposition of visualizations able to contribute to a better understanding of a problem instance; (4) provision of a computational tool for the interactive resolution of Ed-TTPs, together with an entity-relationship model specific to this kind of problem; and, finally, (5) the proposal of a methodology to evaluate visualizations applied to the problem in focus.

Item Preditor híbrido de estruturas terciárias de proteínas (Universidade Federal de Goiás, 2023-08-10)
Almeida, Alexandre Barbosa de; Soares, Telma Woerle de Lima; http://lattes.cnpq.br/6296363436468330; Soares, Telma Woerle de Lima; Camilo Junior, Celso Gonçalves; Vieira, Flávio Henrique Teles; Delbem, Alexandre Cláudio Botazzo; Faccioli, Rodrigo Antônio
Proteins are organic molecules composed of chains of amino acids and perform a variety of essential biological functions in the body. The native structure of a protein is the result of the folding process of its amino acids, with their spatial orientation primarily determined by two dihedral angles (φ, ψ). This work proposes a new hybrid method for predicting the tertiary structures of proteins, called hyPROT, which combines Multi-objective Evolutionary Algorithm (MOEA) optimization, Molecular Dynamics, and Recurrent Neural Networks (RNNs). The proposed approach investigates the evolutionary profile of the dihedral angles (φ, ψ) obtained by different MOEAs during the minimization of the objective function by dominance and energy minimization by molecular dynamics. This proposal is unprecedented in the protein prediction literature. The premise under investigation is that the evolutionary profile of the dihedrals may conceal relevant patterns about folding mechanisms. To analyze the evolutionary profile of the angles (φ, ψ), RNNs were used to abstract and generalize the specific biases of each MOEA. The selected MOEAs were NSGA-II, BRKGA, and GDE3, and the objective function investigated combines the potential energy from non-covalent interactions and the solvation energy. The results show that hyPROT was able to reduce the RMSD value of the best prediction generated by the MOEAs individually by at least 33%. Predicting new series for the dihedral angles allowed the formation of histograms, indicating a possible statistical ensemble responsible for the distribution of the dihedrals (φ, ψ) during the folding process.
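A minimal sketch of the Pareto-dominance test that MOEAs such as NSGA-II and GDE3 rely on when minimizing two energy terms at once, as in hyPROT's objective function. The objective tuples below are hypothetical (non-covalent potential energy, solvation energy) values, not the thesis' data.

```python
# Pareto dominance for minimization: a dominates b when a is no worse in
# every objective and strictly better in at least one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front):
    """Keep only the solutions that no other solution dominates."""
    return [p for p in front if not any(dominates(q, p) for q in front if q is not p)]

# Hypothetical (potential energy, solvation energy) pairs for candidate
# conformations; the first two trade off against each other and survive.
candidates = [(-320.0, 50.0), (-310.2, 39.7), (-295.8, 40.3), (-280.0, 52.9)]
print(non_dominated(candidates))  # -> [(-320.0, 50.0), (-310.2, 39.7)]
```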
Item Definitividade de formas quadráticas – uma abordagem polinomial (Universidade Federal de Goiás, 2016-11-18)
Alves, Jesmmer da Silveira; Brustle, Thomas; http://www2.ubishops.ca/algebra/brustleCv.pdf; Castonguay, Diane; http://lattes.cnpq.br/4005898623592261; Castonguay, Diane; http://lattes.cnpq.br/4005898623592261; Centeno, Carmen; Alvares, Edson Ribeiro; Martinez, Fabio Henrique Viduani; Longo, Humberto José
Quadratic forms are algebraic expressions that play an important role in different areas of computer science, mathematics, physics, statistics and others. We deal with rational quadratic forms and integral quadratic forms, with rational and integer coefficients respectively. Existing methods for the recognition of rational quadratic forms have exponential time complexity or use approximations that weaken the reliability of the results. We develop a polynomial algorithm that improves the best case of rational quadratic form recognition to constant time. In addition, new strategies were used to guarantee the reliability of the results, by representing rational numbers as fractions of integers, and to identify linear combinations that are linearly independent, using Gauss reduction. Regarding the recognition of integral quadratic forms, we identified that the existing algorithms have exponential time complexity for the weakly nonnegative type and are polynomial for the weakly positive type; however, the degree of the polynomial depends on the dimension of the algebra and can be very large. We introduce a polynomial algorithm for the recognition of weakly nonnegative quadratic forms. The related algorithm identifies hypercritical restrictions by testing every 9-vertex subgraph of the graph associated with the quadratic form. By adding a Depth-First Search approach, a similar strategy was used in the recognition of the weakly positive type. We have also shown that the recognition of integral quadratic forms can be done by mutations in the related exchange matrix.

Item Design de experiência aplicado a times (Universidade Federal de Goiás, 2024-10-18)
Alves, Leonardo Antonio; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Ferreira, Deller James; Lucena, Fábio Nogueira de; Dias, Rodrigo da Silva; Federson, Fernando Marques
Despite recent advances, current Gamification methodologies still face challenges in effectively personalizing learning experiences and accurately assessing the development of specific competencies.
This thesis presents the Marcta Autonomy Framework (MAF), an innovative framework that aims to overcome these limitations by increasing team members' motivation and participation while promoting personal development and skills through a personalized experience. The MAF, consisting of six phases (Planning, Reception, Advancement, Feedback, Process Evaluation, and Lessons and Adjustments), guides the development of activities with both intrinsic and extrinsic rewards. The research was applied in two academic case studies: a Software Factory and an Introduction to Programming course for students of the Bachelor's degree in Artificial Intelligence. Using a qualitative approach, including interviews and observations, the results demonstrate that the MAF significantly enhances the development of personal skills. The analysis suggests that the framework can be applied both within a course and in a specific discipline. The main contribution of the MAF lies in its ability to provide a structured roadmap for planning and evaluating pedagogical actions focused on Personal Skills Development. Furthermore, the framework leverages easily capturable data through observation, context, and evaluations. It is concluded that the MAF stands as a personalized and affective Gamification solution for Experience Design in Learning, promoting Personal Skills Development in both academic and corporate contexts.

Item Exploiting parallelism in document similarity tasks with applications (Universidade Federal de Goiás, 2019-09-05)
Amorim, Leonardo Afonso; Martins, Wellington Santos; http://lattes.cnpq.br/3041686206689904; Martins, Wellington Santos; Vincenzi, Auri Marcelo Rizzo; Rodrigues, Cássio Leonardo; Rosa, Thierson Couto; Martins, Weber
The amount of data available continues to grow rapidly, and much of it corresponds to text expressing human language, which is unstructured in nature. One way of giving some structure to this data is by converting the documents to vectors of features corresponding to word frequencies (term count, tf-idf, etc.) or word embeddings. This transformation allows us to process textual data with operations such as similarity measurement, similarity search, and classification, among others. However, this is only possible thanks to more sophisticated algorithms, which demand higher computational power. In this work, we exploit parallelism to enable the use of parallel algorithms in document similarity tasks and apply some of the results to an important application in software engineering. Similarity search for textual data is commonly performed through a k-nearest-neighbor search in which pairs of document vectors are compared and the k most similar are returned. For this task we present FaSSTkNN, a fine-grain parallel algorithm that applies filtering techniques based on the most important terms of the query document using tf-idf. The algorithm, implemented on a GPU, improved the top-k nearest neighbor search by up to 60x compared to a baseline also running on a GPU. Document similarity using tf-idf is based on a scoring scheme for words that reflects how important a word is to a document in a collection. Recently a more sophisticated similarity measure, called word embedding, has become popular. It creates a vector for each word that captures co-occurrence relationships between words in a given context, capturing complex semantic relationships between words. To generate word embeddings efficiently, we propose a fine-grain parallel algorithm that finds the k least similar (farthest) neighbor words to generate negative samples for creating the embeddings. The algorithm, implemented on a multi-GPU system, scaled linearly and was able to generate embeddings 13x faster than the original multicore Word2Vec algorithm while keeping the accuracy of the results at the same level as those produced by standard word embedding programs. Finally, we applied our accelerated word embedding solution to the problem of assessing the quality of fixes in Automated Software Repair. The proposed implementation was able to deal with large corpora in a computationally efficient way, being a promising alternative for processing the millions of source code files needed for this task.
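A CPU baseline for the task FaSSTkNN accelerates: k-nearest-neighbor search over tf-idf document vectors. The toy corpus and query are illustrative only; the thesis' fine-grain GPU filtering is not reproduced here.

```python
# Sketch: tf-idf vectors + cosine k-NN, the operation FaSSTkNN speeds up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

corpus = [
    "failed assertion in parser module",
    "null pointer exception in parser",
    "GPU kernel launch configuration",
    "timetable generation heuristics",
]
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(doc_vectors)
query = vectorizer.transform(["exception raised by the parser"])
distances, indices = knn.kneighbors(query)
print(indices[0])  # indices of the two most similar documents
```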
Item Reconhecimento de padrões por processos adaptativos de compressão (Universidade Federal de Goiás, 2020-03-02)
Bailão, Adriano Soares de Oliveira; Delbem, Alexandre Cláudio Botazzo; http://lattes.cnpq.br/1201079310363734; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Silva, Nadia Felix Felipe da; Duque, Cláudio Gottschalg; Costa, Ronaldo Martins da; Monaco, Francisco José
Data compression is a process widely used by industry in the storage and transport of information and is applied to a variety of domains such as text, image, audio and video. Compression processes are sets of mathematical operations that aim to represent each data sample in compressed form, that is, with a smaller size. Pattern recognition techniques can use compression properties and metrics to design machine learning models from adaptive algorithms that represent samples in compressed form. One advantage of adaptive compression models is that they provide dimensionality reduction as a consequence of the compression properties. This thesis proposes a general unsupervised learning model (for different problem domains and different types of data) that combines adaptive compression strategies in two phases: granulation, responsible for the perception and representation of the knowledge necessary to solve a generalization problem, and codification, responsible for structuring the reasoning of the model based on the representation and organization of the problem objects. The reasoning expressed by the model denotes the ability to generalize data objects in the general context. Generic methods based on lossless compressors lack generalization capacity for some types of data objects, so this thesis also uses lossy compression techniques in order to circumvent the problem and increase the generalization capacity of the model. Results demonstrate that techniques and metrics based on adaptive compression produce a good approximation of the original data samples in data sources with high dimensionality. Tests point to machine learning models with good generalization capabilities derived from the dimensionality reduction offered by adaptive compression processes.
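The classic Normalized Compression Distance (NCD) is the simplest instance of the compression-based similarity idea this thesis generalizes: two objects are similar when compressing them together costs little more than compressing the larger one alone. zlib stands in here for the adaptive compressor; the thesis' lossy, granulation-based model is not shown.

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size, the computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

doc_a = b"the quick brown fox jumps over the lazy dog" * 10
doc_b = b"the quick brown fox leaps over the lazy cat" * 10
doc_c = b"0123456789abcdef" * 25
print(ncd(doc_a, doc_b) < ncd(doc_a, doc_c))  # similar texts score lower -> True
```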
Item Aplicação de CNN e LLM na Localização de Defeitos de Software (Universidade Federal de Goiás, 2024-10-16)
Basílio Neto, Altino Dantas; Camilo Júnior, Celso Gonçalves; http://lattes.cnpq.br/6776569904919279; Camilo Junior, Celso Gonçalves; Leitão Júnior, Plínio de Sá; Oliveira, Sávio Salvarino Teles de; Vincenzi, Auri Marcelo Rizzo; Souza, Jerffeson Teixeira de
The increase in the quantity and complexity of computational systems has led to a growth in the occurrence of software defects. The industry invests significant amounts in code debugging, and a considerable portion of the cost is associated with the task of locating the element responsible for the defect. Automated fault localization techniques have been widely explored, with recent advances driven by deep learning models that combine different types of information about defective source code. However, the accuracy of these techniques still has room for improvement, suggesting open challenges in the field. This work formalizes and investigates the most impactful aspects of fault localization techniques, proposing a framework for characterizing approaches to the problem and two solution methodologies: (a) one based on convolutional neural networks (CNNs) and (b) one based on large language models (LLMs). Experimentation involving public datasets in Java and Python demonstrated that CNNs are comparable to traditional methods but inferior to other methods in the literature. The LLM-based approach, on the other hand, greatly outperformed heuristics like Ochiai and Tarantula and proved competitive with the more recent literature. An experiment in a scenario free from the data leakage problem showed that LLM-based approaches can be improved by combining them with the Ochiai heuristic.

Item Future-Shot: Few-Shot Learning to tackle new labels on high-dimensional classification problems (Universidade Federal de Goiás, 2024-02-23)
Camargo, Fernando Henrique Fernandes de; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Galvão Filho, Arlindo Rodrigues; Vieira, Flávio Henrique Teles; Gomes, Herman Martins; Lotufo, Roberto de Alencar
This thesis introduces a novel approach to high-dimensional multiclass classification challenges, particularly in dynamic environments where new classes emerge. Named Future-Shot, the method employs metric learning, specifically triplet learning, to train a model capable of generating embeddings for both data points and classes within a shared vector space. This facilitates efficient similarity comparisons using techniques like k-nearest neighbors (k-NN), enabling seamless integration of new classes without extensive retraining. Tested on lab-of-origin prediction tasks using the Addgene dataset, Future-Shot achieves a top-10 accuracy of 90.39%, surpassing existing methods. Notably, in few-shot learning scenarios, it achieves an average top-10 accuracy of 81.2% with just 30% of the data for new classes, demonstrating robustness and efficiency in adapting to evolving class structures.
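A numpy sketch of the triplet objective behind Future-Shot's metric learning: an embedding is pulled toward its class embedding (positive) and pushed away from another class (negative). The vectors and margin are hypothetical; the thesis trains both data-point and class embeddings in one shared space, so new classes only require new class vectors, not retraining.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss on the gap between positive and negative distances."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

anchor   = np.array([0.9, 0.1, 0.0])  # embedding of a data point
positive = np.array([1.0, 0.0, 0.0])  # embedding of its true class
negative = np.array([0.0, 1.0, 0.0])  # embedding of a competing class
print(triplet_loss(anchor, positive, negative))  # 0.0: anchor already sits near its class
```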
Item Sobre grafos com r tamanhos diferentes de conjuntos independentes maximais e algumas extensões (Universidade Federal de Goiás, 2014-10-01)
Cappelle, Márcia Rodrigues; Barbosa, Rommel Melgaço; http://lattes.cnpq.br/6228227125338610; Barbosa, Rommel Melgaço; Abreu, Nair Maria Maia de; Santos, José Plínio de Oliveira; Longo, Humberto José; Silva, Hebert Coelho da
In this thesis, we present some results concerning the sizes of maximal independent sets in graphs. We prove that for integers r and D with r ≥ 2 and D ≥ 3, there are only finitely many connected graphs of minimum degree at least 2, maximum degree at most D, and girth at least 7 that have maximal independent sets of at most r different sizes. Furthermore, we prove several results restricting the degrees of such graphs. These contributions generalize known results on well-covered graphs. We study the structure and recognition of the well-covered graphs G of order n(G) without an isolated vertex that have independence number (n(G) − k)/2 for some non-negative integer k. For k = 1, we give a complete structural description of these graphs, and for a general but fixed k, we describe a polynomial-time recognition algorithm. We consider graphs G without an isolated vertex for which the independence number a(G) and the independent domination number i(G) satisfy a(G) − i(G) ≤ k for some non-negative integer k. We obtain an upper bound on the independence number in these graphs. We present a polynomial algorithm to recognize some complementary products, which include all complementary prisms. Also, we present results on well-covered complementary prisms. We show that if G is not well-covered and its complementary prism is well-covered, then G has only two consecutive sizes of maximal independent sets. We present an upper bound for the number of sizes of maximal independent sets in complementary prisms, together with other results concerning well-coveredness. We present a lower bound for the number of different sizes of maximal independent sets in Cartesian products of paths and cycles.

Item Uso e estabilidade de seletores de variáveis baseados nos pesos de conexão de redes neurais artificiais (Universidade Federal de Goiás, 2021-03-19)
Costa, Nattane Luíza da; Barbosa, Rommel Melgaço; http://lattes.cnpq.br/6228227125338610; Barbosa, Rommel Melgaço; Lima, Márcio Dias de; Lins, Isis Didier; Costa, Ronaldo Martins da; Leitão Júnior, Plínio de Sá
Artificial Neural Networks (ANNs) are machine learning models used to solve problems in several research fields. However, ANNs are often considered "black boxes", meaning that these models cannot be interpreted, as they do not provide explanatory information. Connection Weight Based Feature Selectors (WBFS) have been proposed to extract knowledge from ANNs. Most studies using these algorithms are based on a single ANN model. However, the ANN connection weight values vary due to initialization and training, which leads to variations in the importance ranking generated by a WBFS. In this context, this thesis presents a study of WBFS. First, a new voting approach is proposed to assess the stability of a WBFS, that is, the variation in its results. Then, we evaluate the stability of the algorithms based on multilayer perceptrons (MLP) and extreme learning machines (ELM). Furthermore, an improvement is proposed to the algorithms of Garson, Olden, and Yoon, combining them with the feature selector ReliefF. The new algorithms are called FSGR, FSOR, and FSYR. The experiments were performed with 28 MLP architectures, 16 ELM architectures, and 16 data sets from the UCI Machine Learning Repository. The results show that there is a significant difference in WBFS stability depending on the training parameters of the ANNs and on the WBFS used. In addition, the proposed algorithms proved more effective than the classic ones. As far as we know, this study was the first attempt to measure the stability of WBFS, the first to investigate the effects of different ANN training parameters on that stability, and the first to propose a combination of a WBFS with another feature selector. The results also provide information about the benefits and limitations of WBFS and represent a starting point for improving the stability of these algorithms.
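One common formulation of Garson's algorithm, a connection-weight feature selector of the kind studied above: input importance is read off the trained MLP weights alone. The weights here are random stand-ins; the thesis' FSGR variant further combines this ranking with ReliefF, which is not shown.

```python
import numpy as np

def garson(W_ih, W_ho):
    """Garson importance for a single-output MLP.
    W_ih: (n_inputs, n_hidden) input-to-hidden weights,
    W_ho: (n_hidden,) hidden-to-output weights."""
    contrib = np.abs(W_ih) * np.abs(W_ho)          # input->hidden->output products
    contrib /= contrib.sum(axis=0, keepdims=True)  # share of each hidden unit
    importance = contrib.sum(axis=1)
    return importance / importance.sum()           # normalized to sum to 1

rng = np.random.default_rng(0)
W_ih = rng.normal(size=(4, 6))  # 4 inputs, 6 hidden units (hypothetical weights)
W_ho = rng.normal(size=6)
print(garson(W_ih, W_ho))       # relative importance of the 4 inputs
```

Because retraining the ANN changes the weights, rankings like this one vary from run to run, which is exactly the stability question the thesis measures.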
Item Reconhecimento polinomial de álgebras cluster de tipo finito (Universidade Federal de Goiás, 2015-09-09)
Dias, Elisângela Silva; Castonguay, Diane; http://lattes.cnpq.br/4005898623592261; Castonguay, Diane; Schiffler, Ralf; Dourado, Mitre Costa; Carvalho, Marcelo Henrique de; Longo, Humberto José
Cluster algebras form a class of commutative algebras, introduced at the beginning of the millennium by Fomin and Zelevinsky. They are defined constructively from a set of generating variables (cluster variables) grouped into overlapping subsets (clusters) of fixed cardinality. Since its inception, the theory of cluster algebras has found applications in many areas of science, especially in mathematics. In this thesis, we study, with a computational focus, the recognition of cluster algebras of finite type. In 2006, Barot, Geiss and Zelevinsky showed that a cluster algebra is of finite type if the associated graph is cyclically oriented, i.e., all chordless cycles of the graph are cyclically oriented, and the associated skew-symmetrizable matrix has a positive quasi-Cartan companion. At first, we studied the two topics independently. Related to the first part of the criterion, we developed an algorithm that lists all chordless cycles (polynomial in the length of those cycles) and another that checks whether a graph is cyclically oriented and, if so, lists all its chordless cycles (polynomial in the number of vertices). Related to the second part of the criterion, we developed some theoretical results and a polynomial algorithm that checks whether a quasi-Cartan companion matrix is positive. The latter algorithm is used to prove that the problem of deciding whether a skew-symmetrizable matrix has a positive quasi-Cartan companion, for general graphs, is in the class NP. We conjecture that this problem is NP-complete. We show that the same problem belongs to the class of polynomial problems for cyclically oriented graphs and, finally, we show that deciding whether a cluster algebra is of finite type also belongs to this class.
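For the positivity check at the heart of the second criterion, a symmetric quasi-Cartan companion is positive exactly when it is positive definite; numerically this can be illustrated with a Cholesky factorization, although the thesis' algorithm is exact and works over the rationals. The example matrix is the Cartan matrix of type A3, which is positive.

```python
import numpy as np

def is_positive(A):
    """Sylvester-style test via Cholesky: succeeds iff A is positive definite."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

A3 = np.array([[ 2, -1,  0],
               [-1,  2, -1],
               [ 0, -1,  2]], dtype=float)  # Cartan matrix of type A3
print(is_positive(A3))  # True
```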
Item Sobre convexidade em prismas complementares (Universidade Federal de Goiás, 2015-04-10)
Duarte, Márcio Antônio; Szwarcfiter, Jayme L.; http://lattes.cnpq.br/2002515486942024; Barbosa, Rommel Melgaço; http://lattes.cnpq.br/6228227125338610; Barbosa, Rommel Melgaço; Yanasse, Horacio Hideki; Oliveira, Carla Silva; Coelho, Erika Morais Martins; Silva, Hebert Coelho da
In this work, we present some results, especially algorithmic and complexity properties, concerning a graph product called the complementary prism. Answering some questions left open by Haynes, Slater and van der Merwe, we show that the clique, independent set and k-dominating set problems are NP-complete for complementary prisms in general. Furthermore, we show NP-completeness results regarding the computation of some parameters of the P3-convexity for complementary prisms in general, such as the P3-geodetic number, the P3-hull number and the P3-Carathéodory number. We show that computing the P3-geodetic number is NP-complete for complementary prisms in general. As for the P3-hull number, we show that it can be computed in polynomial time. For the P3-Carathéodory number, we show that it is NP-complete for complementary prisms of bipartite graphs, but for trees it can be computed in polynomial time and, for the class of cographs, the P3-Carathéodory number of their complementary prisms is 3. We also establish a relationship between the cardinality of a Carathéodory set of a graph and that of any Carathéodory set of its complementary prism. Finally, we establish upper bounds on the geodetic number, hull number and Carathéodory number of complementary prisms of paths, cycles and complete graphs, considering the P3 and geodesic convexities.

Item Uma solução baseada em economia colaborativa para escalar o teste de aplicações Android em dispositivos reais (Universidade Federal de Goiás, 2019-04-02)
Faria, Kenyo Abadio Crosara; Vincenzi, Auri Marcelo Rizzo; http://lattes.cnpq.br/0611351138131709; Vincenzi, Auri Marcelo Rizzo; Leitão Júnior, Plínio de Sá; Soares, Fabrízzio Alphonsus Alves de Melo Nunes; Maldonado, José Carlos; Freitas, Eduardo Noronha de Andrade
Software testing for mobile devices presents additional challenges compared to testing desktop and web applications, especially in fragmented environments such as the Android ecosystem. There are currently over 24 thousand different device models, with different screen sizes and densities, operating system versions, and other configurations that contribute to the instability of applications for this ecosystem. Several frameworks and TaaS platforms assist the validation of this type of application, especially with regard to the construction and execution of UI tests. However, existing architectural limitations result in a high cost and a low diversity of real devices available for validating Android apps. Inspired by this context and by the Collaborative Economy paradigm, this study proposes a disruptive architecture that allows application tests to be executed in a distributed way, using idle devices around the world, reducing the cost of infrastructure with real devices while having the potential to generate a new market. Experiments demonstrated the robustness of the architecture for performing UI tests on geographically distributed devices, and a financial analysis indicates a reduction of 85.67% in the infrastructure cost of physical devices allocated for testing, while showing the viability of the built platform.
Item Classificação de cenas utilizando a análise da aleatoriedade por aproximação da complexidade de Kolmogorov (Universidade Federal de Goiás, 2020-03-15)
Feitosa, Rafael Divino Ferreira; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Delbem, Alexandre Cláudio Botazzo; Soares, Fabrízzio Alphonsus Alves de Melo Nunes; Laureano, Gustavo Teodoro; Costa, Ronaldo Martins da
In many pattern recognition problems, the discriminant features are unknown and/or the class boundaries are not well defined. Several studies have used data compression to discover knowledge without feature extraction and selection. The basic idea is that two distinct objects can be grouped as similar if the information content of one explains, in a significant way, the information content of the other. However, compression-based techniques are not efficient for images, as they disregard the semantics present in the spatial correlation of two-dimensional data. A classifier is proposed that estimates the visual complexity of scenes, namely Pattern Recognition by Randomness (PRR). The operation of the method is based on data transformations that expand the most discriminating features and suppress details. The main contribution of the work is the use of randomness as a discrimination measure. The approximation between scenes and trained models, based on representational distortion, promotes a lossy compression process. This loss is associated with irrelevant details when the scene is reconstructed with the representation of the true class, or with information degradation when it is reconstructed with divergent representations. The more information preserved, the greater the randomness of the reconstruction. From the mathematical point of view, the method is explained by two main measures in the U-dimensional plane: intersection and dispersion. The results yielded an accuracy of 0.6967 for a 12-class problem and 0.9286 for 7 classes. Compared with k-NN and a data mining toolkit, the proposed classifier was superior. The method is capable of generating efficient models from few training samples. It is invariant to vertical and horizontal reflections and resistant to some geometric transformations and image processing operations.

Item Técnicas de otimização multiobjetivo e otimização estocástica para o roteamento de fluxos em redes (Universidade Federal de Goiás, 2019-03-22)
Fernandes, Kátia Cilene Costa; Cardoso, Kleber Vieira; http://lattes.cnpq.br/0268732896111424; Pinto, Leizer de Lima; http://lattes.cnpq.br/0611031507120144; Pinto, Leizer de Lima; Cardoso, Kleber Vieira; Vieira, Flávio Henrique Teles; Bueno, Elivelton Ferreira; Abelém, Antônio Jorge Gomes
In this work we are interested in optimization problems related to network flow routing. Three models and an exact, polynomial algorithm are presented. The first model is a bi-objective integer programming problem in which the objective functions refer to the load balancing of the network and to the length of the paths through which the flows are routed. An exact and polynomial algorithm based on the ε-constraint technique is presented. The second model differs from the first with respect to the weights of the flows and the qualities of the links; in it, these parameters can assume different values. The last model is a stochastic single-objective flow routing problem. It aims to minimize the bottleneck of the network while respecting a limit on the length of the paths through which the flows are routed. In addition, the link qualities are random variables, which can be approximated by a discrete and finite set. Implementations were developed in C++ using the CPLEX solver. Grid topologies and random topologies based on the Barabási-Albert model were used in our computational experiments. The network flow settings adopted here are those commonly used in wireless sensor networks and wireless mesh networks. The analysis of the computational results provides the decision maker with valuable information about which factors most affect the solutions.
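A tiny illustration of the ε-constraint scheme used for the first model: keep one objective, bound the other, and sweep the bound to trace the Pareto frontier. The solution set below is an enumerated toy (f1 = path length, f2 = load imbalance); the thesis solves each bounded subproblem exactly instead of enumerating.

```python
# Each candidate routing is summarized by its two objective values (f1, f2).
solutions = [(3, 9.0), (4, 7.5), (5, 5.0), (7, 4.2), (8, 4.1)]

def eps_constraint(solutions, eps):
    feasible = [s for s in solutions if s[1] <= eps]        # bound f2 by eps
    return min(feasible, key=lambda s: s[0], default=None)  # minimize f1

pareto = []
for eps in sorted({f2 for _, f2 in solutions}, reverse=True):  # sweep the bound
    best = eps_constraint(solutions, eps)
    if best is not None and best not in pareto:
        pareto.append(best)
print(pareto)  # the five candidates are mutually non-dominated here
```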
Item Análise multirresolução de imagens gigapixel para detecção de faces e pedestres (Universidade Federal de Goiás, 2023-09-27)
Ferreira, Cristiane Bastos Rocha; Pedrini, Hélio; http://lattes.cnpq.br/9600140904712115; Soares, Fabrízzio Alphonsus Alves de Melo Nunes; http://lattes.cnpq.br/7206645857721831; Soares, Fabrízzio Alphonsus Alves de Melo Nunes; Pedrini, Hélio; Santos, Edimilson Batista dos; Borges, Díbio Leandro; Fernandes, Deborah Silva Alves
Gigapixel images, also known as gigaimages, can be formed by merging a sequence of individual images obtained from a scene scanning process. Such images can be understood as a mosaic built from a large number of high-resolution digital images. A gigapixel image provides a powerful way to observe minimal details that are very far from the observer, enabling research in areas such as pedestrian detection, surveillance and security. Since this image category comprises a high volume of sequentially captured data, its generation and analysis raise problems that make the direct application of conventional algorithms, designed for non-gigapixel images, unfeasible in this context. Thus, this work proposes a method for scanning, manipulating and analyzing multiresolution gigapixel images for pedestrian and face identification applications using traditional algorithms. The approach is evaluated on gigapixel images with both low and high densities of people and faces, presenting promising results.

Item Problemas de otimização combinatória para união explícita de arestas (Universidade Federal de Goiás, 2018-03-21)
Ferreira, Joelma de Moura; Foulds, Les; http://lattes.cnpq.br/3737395828552021; Nascimento, Hugo Alexandre Dantas do; http://lattes.cnpq.br/2920005922426876; Freitas, Carla Maria Dal Sasso; Paulovich, Fernando; Longo, Humberto José; Soares, Telma Woerle de Lima
Edge bundling is a technique to group, align, coordinate and position the depiction of edges in a graph drawing, so that sets of edges appear to be brought together into shared visual structures, i.e., bundles. The ultimate goal is to reduce clutter and improve how the drawing conveys information. This thesis provides a general formulation of explicit edge bundling as a formal combinatorial optimization problem, which allows edge bundling problems to be defined and compared. In addition, we present four explicit edge bundling optimization problems whose main goal is to minimize the total number of bundles, in conjunction with other aspects. An evolutionary edge bundling algorithm is described. The algorithm was successfully tested by solving three of the related problems on real-world instances. The reported experimental results demonstrate the effectiveness and applicability of the proposed evolutionary algorithm for edge bundling problems formally defined as optimization models.
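A sketch of how an explicit edge-bundling solution can be encoded for evolutionary search: a chromosome assigns each edge a bundle id, and the fitness to minimize counts bundles plus a penalty for grouping incompatible edges. Both the penalty term and the (1+1) loop are placeholders, not the thesis' formulation or operators.

```python
import random

def fitness(assignment, incompatible):
    bundles = len(set(assignment))  # primary goal: few bundles
    clashes = sum(1 for e, f in incompatible if assignment[e] == assignment[f])
    return bundles + 10 * clashes   # hypothetical penalty weight

def mutate(assignment, n_ids):
    child = assignment[:]
    child[random.randrange(len(child))] = random.randrange(n_ids)
    return child

n_edges = 6                 # edges e0..e5 of some drawing
incompatible = [(0, 5)]     # e0 and e5 must not share a bundle
best = [random.randrange(n_edges) for _ in range(n_edges)]
for _ in range(500):        # minimal (1+1) evolutionary loop
    cand = mutate(best, n_edges)
    if fitness(cand, incompatible) <= fitness(best, incompatible):
        best = cand
print(best, fitness(best, incompatible))  # typically 2 bundles, no clash
```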
Item Escalonamento de recursos em redes sem fio 5G baseado em otimização de retardo e de alocação de potência considerando comunicação dispositivo a dispositivo (Universidade Federal de Goiás, 2021-10-15)
Ferreira, Marcus Vinícius Gonzaga; Vieira, Flávio Henrique Teles; http://lattes.cnpq.br/0920629723928382; Vieira, Flávio Henrique Teles; Madeira, Edmundo Roberto Mauro; Lima, Marcos Antônio Cardoso de; Rocha, Flávio Geraldo Coelho; Oliveira Júnior, Antônio Carlos de
In this thesis, a resource scheduling scheme is proposed for 5G wireless networks based on CP-OFDM (Cyclic Prefix - Orthogonal Frequency Division Multiplexing) and f-OFDM (filtered OFDM) modulations, aiming to optimize the average delay and the power allocation of users. In the proposed approach, the transmission rate is calculated and the modulation format is chosen so as to minimize the system BER (Bit Error Rate). Besides the transmission modes determined to minimize the BER, the algorithm considers the system's weighted throughput in order to optimize the users' average delay. Additionally, an algorithm is proposed for uplink transmission in 5G wireless networks with D2D (Device-to-Device) multi-sharing communication, which first allocates resources to the CUEs (Cellular User Equipments) and subsequently allocates network resources to communicating DUE (D2D User Equipment) pairs based on the optimization of delay and power allocation. The proposed algorithm, namely DMCG (Delay Minimization Conflict Graph), minimizes an estimated delay function, using concepts of Network Calculus, to decide on the allocation of idle CUE resources to DUE pairs. The performance of the proposed algorithms for downlink and uplink transmission is verified and compared with other algorithms in the literature in terms of several QoS (Quality of Service) parameters, considering the carrier aggregation and 256-QAM (Quadrature Amplitude Modulation) technologies. The computational simulations also consider scenarios with millimeter-wave propagation and the 5G specifications of 3GPP (3rd Generation Partnership Project) Release 15. The simulation results show that the proposed downlink and uplink algorithms provide better system performance in terms of throughput and delay, with lower processing time than the optimization heuristics, while the remaining QoS parameters stay comparable to those of the other algorithms.
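A hedged sketch of the kind of Network Calculus bound an estimated-delay function can build on: a token-bucket flow (burst b, rate r) crossing a rate-latency server (rate R, latency T) has worst-case delay b/R + T whenever r ≤ R. The numbers are hypothetical; the thesis' DMCG estimator and conflict-graph logic are not reproduced.

```python
def nc_delay_bound(burst_bits, rate_bps, server_rate_bps, server_latency_s):
    """Worst-case delay of a (burst, rate) flow through a rate-latency server."""
    assert rate_bps <= server_rate_bps, "flow must be schedulable"
    return burst_bits / server_rate_bps + server_latency_s

# e.g. a 12 kbit burst at 5 Mbps over a 10 Mbps allocation with 2 ms latency
print(nc_delay_bound(12_000, 5_000_000, 10_000_000, 0.002))  # 0.0032 s
```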
Item Reconhecimento de padrões em imagens radiográficas de tórax: apoiando o diagnóstico de doenças pulmonares infecciosas (Universidade Federal de Goiás, 2023-09-29)
Fonseca, Afonso Ueslei da; Soares, Fabrízzio Alphonsus Alves de Melo Nunes; http://lattes.cnpq.br/7206645857721831; Soares, Fabrízzio Alphonsus Alves de Melo Nunes; Laureano, Gustavo Teodoro; Pedrini, Hélio; Rabahi, Marcelo Fouad; Salvini, Rogerio Lopes
Pattern Recognition (PR) is a field of computer science that aims to develop techniques and algorithms capable of identifying regularities in complex data, enabling intelligent systems to perform complicated tasks with precision. In the context of diseases, PR plays a crucial role in diagnosis and detection, revealing patterns hidden from human eyes, assisting doctors in making decisions and identifying correlations. Infectious pulmonary diseases (IPDs), such as pneumonia, tuberculosis, and COVID-19, challenge global public health, causing thousands of deaths annually, straining healthcare systems, and demanding substantial financial resources. Diagnosing them can be challenging due to the vagueness of symptoms, similarities with other conditions, and subjectivity in clinical assessment. For instance, reading chest X-ray (CXR) examinations is a tedious and specialized process with significant variation among observers, leading to failures and delays in diagnosis and treatment, especially in underdeveloped countries with a scarcity of radiologists. In this thesis, we investigate PR and Artificial Intelligence (AI) techniques to support the diagnosis of IPDs in CXRs. We follow the guidelines of the World Health Organization (WHO) in support of the goals of the 2030 Agenda, which include combating infectious diseases. The research questions involve selecting the best techniques, acquiring data, and creating intelligent models. As objectives, we propose low-cost, highly efficient and effective PR and AI methods, ranging from preprocessing to supporting the diagnosis of IPDs in CXRs. The results so far are in line with the state of the art, and we believe they can contribute to the development of computer-assisted IPD diagnostic systems.
Item SCOUT: a multi-objective method to select components in designing unit testing (Universidade Federal de Goiás, 2016-02-15)
Freitas, Eduardo Noronha de Andrade; Camilo Júnior, Celso Gonçalves; http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4736184D1; Vincenzi, Auri Marcelo Rizzo; http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4763450Y6; Vincenzi, Auri Marcelo Rizzo; Camilo Júnior, Celso Gonçalves; Ferrari, Fabiano Cutigi; Dias Neto, Arilo Cláudio; Leitão Júnior, Plínio de Sá
The creation of a unit test suite is preceded by the selection of which components (code units) should be tested. This selection is a significant challenge, usually made based on the team members' experience or guided by defect prediction or fault localization models. We model the selection of components for unit testing with limited resources as a multi-objective problem addressing two different objectives: maximizing benefit and minimizing cost. To measure the benefit of a component, we use metrics from static analysis (cost of future maintenance), dynamic analysis (risk of fault and frequency of calls), and business value. We tackle gaps and challenges in the literature to formulate an effective method, the Selector of Software Components for Unit testing (SCOUT). SCOUT is structured in two stages: an automated extraction of all necessary data and a multi-objective optimization process. The Android platform was chosen for our experiments, and nine leading open-source applications were used as subjects. SCOUT was compared with two of the most frequently used strategies in terms of efficacy. We also compared the effectiveness and efficiency of seven algorithms in solving the multi-objective component selection problem: a random technique; a constructive heuristic; Gurobi, a commercial tool; a genetic algorithm; SPEA-II; NSGA-II; and NSGA-III. The results indicate the benefits of using multi-objective evolutionary approaches such as NSGA-II and demonstrate that SCOUT has significant potential to reduce market vulnerability. To the best of our knowledge, SCOUT is the first method to assist software testing managers in selecting components at the method level for unit testing in an automated way, based on a multi-objective approach exploring static and dynamic metrics and business value.
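A baseline sketch of the trade-off SCOUT optimizes: each component has a benefit (from static/dynamic metrics and business value) and a testing cost, and a budget limits what can be selected. A greedy benefit/cost ranking gives a single quick reference point, whereas SCOUT searches the whole Pareto front with evolutionary algorithms such as NSGA-II. Component names and numbers are hypothetical.

```python
components = {  # name: (benefit, cost)
    "PaymentService.charge": (9.0, 4.0),
    "Cart.addItem":          (6.0, 2.0),
    "Report.render":         (3.0, 3.0),
    "Auth.login":            (8.0, 5.0),
}

def greedy_select(components, budget):
    """Pick components by benefit/cost ratio until the budget is spent."""
    ranked = sorted(components.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    chosen, spent = [], 0.0
    for name, (benefit, cost) in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

print(greedy_select(components, budget=8.0))
```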