Mestrado em Ciência da Computação (INF)
Browsing Mestrado em Ciência da Computação (INF) by Advisor "Bulcão Neto, Renato de Freitas"
Now showing 1 - 12 of 12
Item Agente para suporte à decisão multicritério em gestão pública participativa (Universidade Federal de Goiás, 2014-09-26) Amorim, Leonardo Afonso; Patto, Vinicius Sebba; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Bulcão Neto, Renato de Freitas; Sene Junior, Iwens Gervásio; Patto, Vinicius Sebba; Cruz Junior, Gelson da
Decision making in public management involves a high degree of complexity, since financial resources are insufficient to meet all the demands emanating from the various sectors of society. Economic activities are often in conflict with social or environmental causes. Another important aspect of decision making in public management is the inclusion of various stakeholders, e.g., public management experts, small business owners, shopkeepers, teachers, representatives of social and professional classes, and citizens. The goal of this master's thesis is to present two computational agents that support decision making in public management as part of the ADGEPA project: the Miner Agent (MA) and the Decision Support Agent (DSA). The MA uses data mining techniques, and the DSA uses multi-criteria analysis to point out relevant issues. ADGEPA (Digital Assistant for Participatory Public Management) is an innovative practice to support participatory decision making in public resource management. The main contribution of this master's thesis is the ability to assist in the discovery of patterns and correlations between environmental aspects that are not obvious and can vary from community to community. This contribution would help public managers make systemic decisions that, in addition to attacking the main problem of a given region, would decrease or solve other problems. The validation of the results depends on actual data and analysis by public managers.
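Multi-criteria scoring of the kind the DSA performs is often implemented as a weighted sum over normalized criteria. The sketch below is illustrative only: the criteria names and weights are hypothetical, and the thesis does not specify this exact scheme.

```python
# Hypothetical weighted-sum multi-criteria ranking of public-management
# issues; criteria names and weights are illustrative only.
def rank_issues(issues, weights):
    """Rank issues by a weighted sum of their normalized criteria values."""
    def score(criteria):
        return sum(weights[name] * value for name, value in criteria.items())
    return sorted(issues, key=lambda i: score(i["criteria"]), reverse=True)

issues = [
    {"name": "sanitation", "criteria": {"urgency": 0.9, "cost": 0.4, "support": 0.7}},
    {"name": "lighting",   "criteria": {"urgency": 0.5, "cost": 0.8, "support": 0.6}},
]
weights = {"urgency": 0.5, "cost": 0.2, "support": 0.3}
ranking = [i["name"] for i in rank_issues(issues, weights)]
```

With these hypothetical weights, "sanitation" scores 0.74 and "lighting" 0.59, so sanitation ranks first.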
In this work, the data were simulated.

Item Um modelo ontológico e um serviço de gerenciamento de dados de apoio à privacidade na Internet das Coisas (Universidade Federal de Goiás, 2019-02-08) Arruda, Mayke Ferreira; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Bulcão Neto, Renato de Freitas; Prazeres, Cássio Vinicius Serafim; Berardi, Rita Cristina Galarraga
In the Internet of Things (IoT) paradigm, real-world objects equipped with identification, sensing, networking, and processing capabilities communicate and consume services over the Internet to perform tasks on behalf of users. Due to the growing popularity of devices with sensing capabilities and the consequent increase in data produced by these devices, the literature states that the design of an ontology-based model is an essential starting point for addressing privacy risks in IoT, since connected devices are increasingly able to monitor human activities. In addition, due to the complexity and dynamicity of IoT environments, we emphasize the need for privacy ontologies that combine an expressive and extensible vocabulary without overloading the processing of privacy data. Facing this problem, this work presents an ontology-based solution for privacy in IoT, composed of: i) IoT-Priv, a privacy ontology for IoT, built as a light layer over IoT concepts imported from an emerging ontology called IoT-Lite; and ii) IoT-PrivServ, a privacy management service that provides functionalities for consumers and/or producers who use the IoT-Priv ontology to model their data, abstracting from them the complexity of performing such tasks. As contributions, the results of the evaluation of IoT-Priv and IoT-PrivServ indicate that we maintained the lightness characteristic of IoT-Lite, which was one of our initial goals.
In addition, we have demonstrated that IoT-Priv is expressive and extensible, since its concepts allow complex scenarios to be modeled and, if necessary, the extension points included in the ontology allow it to be imported and extended to meet more specific needs.

Item Aplicando a técnica de simulação hardware-in-the-loop no desenvolvimento de um simulador híbrido para testbeds de redes de sensores sem fio (Universidade Federal de Goiás, 2014-08-28) Brito Filho, Elio Ribeiro de; Sene Junior, Iwens Gervasio; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Bulcão Neto, Renato de Freitas; Sene Junior, Iwens Gervasio; Silvestre, Bruno Oliveira; Rossetto, Silvana
This work presents an alternative to scale testbeds for wireless sensor networks. The approach was based on the Hardware-In-the-Loop simulation technique to correlate real sensor nodes with virtual sensor nodes. An explanatory literature review provided the fundamentals of this research and allowed the definition of predicates for targeting a functional solution. The proposed approach was materialized through the implementation of a scalable, high-fidelity hybrid simulator. A series of tests/benchmarks was carried out to validate the study, allowing us to evaluate the integrity, operating cost, and limitations present in the infrastructure.
The applied assessment confirmed compliance with the outlined goals and identified some hardware-related limitations.

Item Uma abordagem computacional para predição de mortalidade em UTIs baseada em agrupamento de processos gaussianos (Universidade Federal de Goiás, 2016-09-09) Caixeta, Rommell Guimarães; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Bulcão Neto, Renato de Freitas; Soares, Anderson da Silva; Laureano, Gustavo Teodoro; Oliveira, Marco Antônio Assfalk de
The analysis of a patient's physiological variables can improve death-risk classification in Intensive Care Units (ICU) and help decision making and resource management. This work proposes a computational approach to death prediction through the analysis of physiological variables in the ICU. Physiological variables that compose time series (e.g., blood pressure) are represented as Dependent Gaussian Processes (DGP). Variables that do not represent time series (e.g., age) are used to cluster DGPs with decision trees. Classification is made according to a distance measure that combines Dynamic Time Warping and Kullback-Leibler divergence. The results of this approach are superior to another method already in use, SAPS-I, on the considered test dataset. The results are similar to other computational methods published by the research community.
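The distance measure named above combines Dynamic Time Warping (DTW) with Kullback-Leibler divergence. As a minimal sketch of the DTW component alone, the classic dynamic-programming formulation is shown below; the local cost here is a plain absolute difference, not the KL-based cost over Gaussian-process models that the thesis combines it with.

```python
# Classic dynamic-programming DTW with absolute-difference local cost.
# Illustrative only: the thesis pairs DTW with Kullback-Leibler
# divergence between Gaussian-process representations, not shown here.
def dtw(a, b):
    """Return the DTW alignment cost between two numeric sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip in a
                                 cost[i][j - 1],      # skip in b
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]
```

Identical series align with zero cost, and the warping lets sequences of different lengths match without penalty when values repeat.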
The results comparing variations of the proposed method show that there is an advantage in using the proposed clustering of DGPs.

Item Uma solução baseada em ontologia para a prevenção de erros comuns em modelos de requisitos escritos na linguagem i* (Universidade Federal de Goiás, 2016-03-29) França, Heyde Francielle do Carmo; Bulcão Neto, Renato de Freitas; http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4708369E3; Bulcão Neto, Renato de Freitas; Silva, Marcel Ferrante; Zinader, Juliana Pereira de Souza
The Goal Oriented Requirements Engineering (GORE) approach represents users' needs through goals, focusing on capturing the real intentions of stakeholders. Based on the GORE technique, the i* modeling language represents the goals of systems and organizations and brings several advantages. Despite that, the i* language faces problems regarding the quality of models, which include typical mistakes of misuse of i* constructs, ambiguities in the interpretation of those constructs, and the complexity of the resulting i* models. The aim of this work is to present an ontology-based solution for i* models that reduces the most well-known errors made while constructing such models. To achieve this goal, a literature review was initially carried out, followed by experimental research to produce the proposed solution. This solution includes the extension of an ontology called OntoiStar+ with OWL restrictions that ensure frequent mistakes in i* models do not occur. Besides, the TAGOOn+ tool was also extended to validate i* models in the iStarML language and convert them to an OWL representation. To perform the tests, two different domains were modeled, Media Shop and universities; using these domains, case studies were reproduced and the results measured.
Results demonstrate an approximate coverage of 70% of those common errors with the extension of OntoiStar+ and more than 80% with the extension of the TAGOOn+ tool.

Item Projeto arquitetural de serviços de representação e agregação de informações de contexto baseadas em ontologias (Universidade Federal de Goiás, 2016-09-05) Marques, Guilherme Silva; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Rodrigues, Cássio Leonardo; Sene Junior, Iwens Gervasio
Dey et al. [10] define context as the set of information that can be used to characterize the status of an entity (e.g., a person, place, or object relevant to the user-application interaction). Perera et al. [36] propose a context-information life cycle in four phases: acquisition, modeling, reasoning, and dissemination. Context modeling consists of representing the real world computationally and, according to Bettini et al. [2], good context-information modeling mitigates the complexity of context-aware applications. In the context of this work, a systematic review demonstrated that the literature has focused on context-information modeling, but with less emphasis on architectural design. The main problem addressed in this work is the creation and evaluation of architectural designs that work together in the context modeling phase of a Semantic Context-Aware System (SCAS), which uses ontologies as its main information modeling technique. This work proposes an architectural solution for two relevant services of the ontology-based context modeling phase: the former represents context information collected through sensors, and the latter aggregates the context information represented by the former for the same individual or entity. Both architectural designs were built using well-known architectural styles and design patterns, and were evaluated against the architectural designs of similar projects.
As a result, the main contributions include the architectural designs of ontology-based services for context representation and aggregation as references for the development of SCAS in terms of the context modeling phase.

Item Construção e avaliação empírica de um catálogo de padrões de requisitos para sistemas de registro eletrônico em saúde (Universidade Federal de Goiás, 2022-03-31) Martins, Mariana Crisostomo; Kudo, Taciana Novo; http://lattes.cnpq.br/7044035224784132; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Bulcão Neto, Renato de Freitas; Kudo, Taciana Novo; Vicenzi, Auri Marcelo Rizzo; Oliveira, Juliano Lopes de
[Context] Requirements pattern catalogs (CPR) are a useful reuse approach for dealing with incomplete, incorrect, and omitted requirements. The main benefits of using catalogs are time savings and higher-quality requirements. [Problem] Despite these advantages, there is a scarcity of supported domains and of empirical studies evaluating the use of these catalogs. [Objective] Develop a CPR for Electronic Health Record Systems (SEHR) and evaluate it empirically considering: (a) support for requirements elicitation and specification and (b) the perception of its usefulness and ease of use. [Methods] The requirements modeled in the CPR are based on a certification manual for SEHR marketed in Brazil. The structure and grammar of the CPR are based on a metamodel called Software Pattern Metamodel (SoPaMM), and the CPR was built using the Terminal Model Editor (TMed) tool. In two case studies, the quality of requirements documents built with and without the CPR by undergraduate students was evaluated. A survey was also applied to collect the students' perception of the usefulness and ease of use of the CPR. [Results] In terms of completeness and scope, the catalog helped to increase the quality of the requirements specification document (DER) by approximately 50%.
The catalog also made it possible to eliminate inconsistencies in cases of reuse. In the perception analysis, 98% of the students stated that the use of the catalog enabled faster elicitation and specification. [Conclusions] It was identified that the approach helps to improve the quality of the DER and promotes a perception of faster elicitation and specification.

Item Geração automática de especificações de requisitos e de casos de teste baseada em padrões de requisitos com comportamento (Universidade Federal de Goiás, 2020-10-23) Ribeiro, Pollyana de Queiroz; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Bulcão Neto, Renato de Freitas; Vincenzi, Auri Marcelo Rizzo; Leitão Júnior, Plínio de Sá
Requirements guide all software development activities, and establishing them correctly is essential for the success of a project. For the most part, the specification of software requirements is carried out in a descriptive and informal manner, causing negative impacts on the project. One solution that can improve the quality of the software requirements specification document is the use of Software Requirements Patterns (PRS). However, research indicates that there are limitations to applying PRS, among them the lack of tool support to enable the adoption of patterns. In this context, the present work proposes the development of a tool based on a metamodel that integrates PRS with software testing patterns for the automatic generation of the software requirements specification document and the test case specification document from pattern catalogs. In order to assess the quality of the requirements specification and test case specification documents, a survey was carried out with software market professionals, researchers, and graduate students with different levels of experience in Requirements Engineering and Software Testing.
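As a rough illustration of pattern-based generation of paired requirement and test-case text, the sketch below fills two fixed sentence templates from one set of pattern parameters. The templates and field names are hypothetical and far simpler than the SoPaMM-based catalogs the work describes.

```python
# Hypothetical requirement pattern with a fixed sentence template and a
# derived test-case template; real SoPaMM-based catalogs are far richer.
REQ_TEMPLATE = "The system shall {action} {object} within {limit}."
TEST_TEMPLATE = "Verify that {action} {object} completes within {limit}."

def instantiate(pattern_values):
    """Fill both templates from one set of pattern parameter values."""
    return {
        "requirement": REQ_TEMPLATE.format(**pattern_values),
        "test_case": TEST_TEMPLATE.format(**pattern_values),
    }

spec = instantiate({"action": "register", "object": "a patient record",
                    "limit": "2 seconds"})
```

Generating both artifacts from the same parameter values is what keeps the requirement and its test case consistent by construction.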
The quality characteristics for assessing the artifacts were defined based on ISO/IEC 24766:2009 (requirements) and ISO/IEC 25051:2014 (test cases). The evaluation results show that the documents meet the quality aspects to a considerable extent and that the respondents had a good impression of the documents. There were qualitative contributions from the professionals regarding the quality of the artifacts. In general, the evaluators understand that the automatic generation of documents from PRS streamlines the requirements specification and test case activities.

Item Requisitos em ação: uma arquitetura pedagógica para o ensino de engenharia de requisitos (Universidade Federal de Goiás, 2023-06-12) Santana, Thalia Santos de; Kudo, Taciana Novo; http://lattes.cnpq.br/7044035224784132; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Bulcão Neto, Renato de Freitas; Santos, Davi Viana dos; Ferreira, Deller James
Requirements Engineering Education (REE) is a knowledge area that raises discussions regarding the teaching of Requirements Engineering (RE) topics, aiming to provide quality training for future requirements analysts. [Problem] Despite this concern, the literature describes that the teaching of RE in undergraduate courses often lacks synergy with market demands, which is corroborated by the lack of pedagogical frameworks guiding REE on what and how to teach such content, causing training gaps perceived in academia and industry. [Objective] Thus, this work aimed to contribute to the teaching and learning of requirements through the design of a pedagogical architecture (PA) focused on teaching specification and validation activities, in order to favor the development of hard and soft skills. [Methods] The PA was developed comprising lesson plans, teaching materials, and practical activities, and was subsequently instantiated in four editions of RE courses with undergraduate Computing students.
[Results] The results of the quantitative and qualitative evaluations indicate that the techniques of user stories and acceptance test scenarios have their learning potential enhanced, being perceived by students as useful in professional practice. In addition, teachers mention that the PA contributes to REE by ensuring greater student participation, also highlighting the contribution of the class scripts to the correct execution of the proposed practices. [Conclusions] Therefore, the present PA contributes to RE education by demonstrating a pedagogical framework that can promote alignment between software requirements knowledge and the hard and soft skills desired in industry.

Item Uma solução para qualidade de contexto baseada em ontologia e lógica nebulosa com aplicação em monitoramento de sinais vitais em UTI (Universidade Federal de Goiás, 2016-05-03) Sena, Márcio Vinícius Oliveira; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Bulcão Neto, Renato de Freitas; Patto, Vinícius Sebba; Brasil, Virginia Visconde
More than a decade after the emergence of the Context-Aware Computing paradigm, the term Quality of Context (QoC) appeared in a setting filled with services that provide seamless user-application interactions, highlighting the need for context-information managers. These infrastructures that handle context information, Context Management Systems (CMS), are full of inputs and outputs of data that may or may not be relevant. The relevance of information is studied by many lines of context-aware computing, including QoC. Although there is a large number of QoC metrics, the literature still lacks overall values to characterize the relevance of context information, i.e., producing from those metrics a single value that represents the importance of the information.
The objective of this work is to propose, based on fuzzy logic, a combination of metrics that produces a QoC value for each piece of information received, allowing its disposal if the information does not meet the expected relevance, described in this work by QoC policies. In order to disseminate the produced context information to consumers, this research also proposes, based on a domain-independent QoC ontology, to semantically annotate the data that refer to the quality of information, thus avoiding the need for new measurements in the short term. Finally, this work presents the application of the proposed solutions in a case study of monitoring the vital signs of ICU patients.

Item Semantic enrichment of sensor data: a case study in environmental health (Universidade Federal de Goiás, 2021-08-06) Silva, Lucas Felipe Moreira; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Bulcão Neto, Renato de Freitas; Macedo, Alessandra Alaniz; Sene Júnior, Iwens Gervásio
Indoor air quality is crucial for human health, but over ninety percent of people worldwide breathe air with pollutant levels that exceed the WHO limits, which may trigger or worsen symptoms the longer one stays exposed. Studies in the area face an inherent difficulty: the massive number of interconnected elements and their effects on human health. However, IoT technologies like sensors and actuators are helping the field address this problem by acquiring and processing Environmental Health (EH) data to be used in automation and decision making. Still, although sensor deployment is relatively simple and feasible, raw data is barely usable in practice, requiring preprocessing before usage, and is highly dynamic, meaning sensor data for EH should be handled as data streams. Streams can be enriched with information such as air quality indexes and associated with curated medical knowledge, improving usage.
IoT's regular data life cycle comprises acquisition, modeling, reasoning, and distribution, so a first step to enable an IoT-based EH scenario is a shared common representation for EH data acquired from sensors, which can be met by the expressiveness and reasoning support of ontologies. The organization of the fundamental processes of IoT-based EH systems into a reference architecture can further support the development of such systems; a reference architecture like RAISE, whose central idea is to structure general software components into a reusable design solution for the semantic enrichment of healthcare data, attains this task. That process comprises steps like acquisition, modeling, extraction, preprocessing, semantic annotation, integration, and storage of heterogeneous healthcare information. The problem addressed here is the low number of validation studies investigating the semantic enrichment and integration of EH data through ontologies and medical knowledge. This work's objective was to elaborate an instance of the RAISE architecture that enriches sensor data for the EH domain, contributing with the semantic enrichment of EH sensor-acquired data, the linking of ontologies to address the complete picture, and more.

Item Serviços de representação, agregação e histórico de contexto ontológico em apoio ao desenvolvimento de aplicações (Universidade Federal de Goiás, 2016-07-19) Veiga, Ernesto Fonseca; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Bulcão Neto, Renato de Freitas; Costa, Fábio Moreira; Trinta, Fernando Antonio Mota
Context Management Systems (CMS) are responsible for processing context throughout its life cycle, which is performed in four steps defined in the literature: acquisition, modeling, reasoning, and dissemination. The modeling step has great relevance in this process, being responsible for transforming the data acquired from sensors into high-level context.
However, this step has been impacted by the increasing deployment of sensors and the large volume of available information, creating the need for services that handle such information, providing adequate representation and supporting the development of applications. In this sense, ontology-based representation has major advantages, including high expressiveness, standardization, and interoperability. Given the requirements highlighted in the literature, this work presents the construction of ontology-based services related to the modeling step of the context life cycle: representation, aggregation, and history. These services were designed and implemented as components integrated into a CMS called Hermes. The scope of this work is therefore composed of: i) Hermes Widget, the ontological representation of domain-independent context, based on the Stimulus-Sensor-Observation pattern; ii) Hermes Aggregator, the aggregation of ontologically represented context and its conversion for application domains; and iii) Hermes History, a context history for the different levels of expression produced by the other components. These services were validated and evaluated in a human vital-signs monitoring scenario.
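As an illustration of what an ontological, Stimulus-Sensor-Observation-style representation of a single vital-sign reading might look like, the sketch below builds subject-predicate-object triples. The predicate names loosely echo the SSN vocabulary and are illustrative only, not the actual Hermes Widget schema.

```python
# Minimal triple-based sketch of a sensor observation in the style of
# the Stimulus-Sensor-Observation pattern; predicate names loosely
# follow the SSN vocabulary and are not the Hermes Widget schema.
def observation_triples(obs_id, sensor, prop, value, unit, time):
    """Return subject-predicate-object triples describing one observation."""
    return [
        (obs_id, "rdf:type", "ssn:Observation"),
        (obs_id, "ssn:observedBy", sensor),
        (obs_id, "ssn:observedProperty", prop),
        (obs_id, "ssn:hasValue", f"{value} {unit}"),
        (obs_id, "ssn:observationResultTime", time),
    ]

triples = observation_triples("obs1", "sensor:hr-monitor-42",
                              "prop:heart-rate", 72, "bpm",
                              "2016-07-19T10:00:00Z")
```

A triple store built this way stays domain-independent: aggregation and history services can query by predicate without knowing which vital sign the observation carries.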