INF - Instituto de Informática
Browsing INF - Instituto de Informática by access type "Acesso Aberto" (Open Access)
Now showing 1 - 20 of 239
Item Airetama Um Arcabouço Baseado em Sistemas Multiagentes para a Implantação de Comunidades Virtuais de Prática na Web (Universidade Federal de Goiás, 2010-10-04) ALARCÓN, Jair Abú Bechir Láscar; CARVALHO, Cedric Luiz de; http://lattes.cnpq.br/4090131106212286

The objective of this dissertation is to present the Airetama framework. Based on Multiagent Systems and Semantic Web principles, it provides a semantic, distributed, open-source infrastructure for the creation of Virtual Communities of Practice on the Web. Through the use of agents, it makes it possible to couple resources and tools that use semantic technologies. Integrating semantics into the current Web aims mainly to allow software agents to use Web pages more intelligently, thus offering better services.

Item Item-based-adp: análise e melhoramento do algoritmo de filtragem colaborativa item-based (Universidade Federal de Goiás, 2014-09-02) Aleixo, Everton Lima; Rosa, Thierson Couto; http://lattes.cnpq.br/4414718560764818; Rosa, Thierson Couto; Camilo Júnior, Celso Gonçalves; Pereira, Denilson Alves

Memory-based algorithms are the most popular among collaborative filtering algorithms. They take as input a table containing the ratings given by users to items, known as the rating matrix. They predict the rating given by a user a to an item i by computing similarities among users' ratings or among items' ratings. In the first case, Memory-based algorithms are classified as User-based algorithms and, in the second, as Item-based algorithms. The prediction is computed using the ratings of the k most similar users (or items), also known as neighbors. Memory-based algorithms are simple to understand and to program, usually provide accurate recommendations, and are less sensitive to data changes. However, to obtain the most similar neighbors for a prediction they have to process all the data, which is a serious scalability problem.
Also, they are sensitive to the sparsity of the input. In this work we propose an efficient and effective Item-based algorithm that aims at diminishing the sensitivity of the Memory-based approach to both problems stated above. The algorithm is faster (by almost 50%) than the traditional Item-based algorithm while maintaining the same level of accuracy. Moreover, in environments that have much data to predict and few to train the algorithm, the accuracy of the proposed algorithm significantly surpasses that of traditional Item-based algorithms. Our approach can also be easily adapted for use as a User-based algorithm.

Item Predição de estrutura terciária de proteínas com técnicas multiobjetivo no algoritmo de monte carlo (Universidade Federal de Goiás, 2016-06-17) Almeida, Alexandre Barbosa de; Faccioli, Rodrigo Antonio; http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4710519J5; Soares, Telma Woerle de Lima; http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4717638T6; Soares, Telma Woerle de Lima; Faccioli, Rodrigo Antonio; Martins, Wellington Santos; Leão, Salviano de Araújo

Proteins are vital for the biological functions of all living beings on Earth. However, they only have an active biological function in their native structure, which is a state of minimum energy. Therefore, protein functionality depends almost exclusively on the size and shape of the native conformation. Nevertheless, less than 1% of all known proteins have their structures solved. Thus, various methods for determining protein structures have been proposed, based on either in vitro or in silico experiments. This work proposes a new in silico method called Monte Carlo with Dominance, which addresses the problem of protein structure prediction from an ab initio, multi-objective optimization point of view, considering both energetic and structural aspects of the protein.
The GROMACS software was used for the ab initio treatment, performing Molecular Dynamics simulations, while the ProtPred-GROMACS (2PG) framework was used for the multi-objective optimization problem, employing genetic algorithm techniques as heuristic solutions. Monte Carlo with Dominance is, in this sense, a variant of the traditional Monte Carlo Metropolis method. The aim is to check whether protein tertiary structure prediction improves when structural aspects are taken into account. The energy criterion of Metropolis and the energy and structural criteria of Dominance were compared using the RMSD between the predicted and native structures. Monte Carlo with Dominance obtained better solutions for two of the three proteins analyzed, reaching a difference of about 53% in relation to the prediction by Metropolis.

Item Um Componente para Geração e Evolução de Esquemas de Bancos de Dados como Suporte à Construção de Sistemas de Informação (Universidade Federal de Goiás, 2010-11-22) ALMEIDA, Alexandre Cláudio de; OLIVEIRA, Juliano Lopes de; http://lattes.cnpq.br/8890030829542444

An Information System (IS) has three main aspects: a database containing the data which is processed to generate business information; application functions which transform data into information; and business rules which control and restrict the data manipulated by those functions. An IS evolves continuously to follow corporate changes, and its database must change to meet the new requirements. This dissertation presents a model-driven approach to generate and evolve IS databases. A software component, called Especialista em Banco de Dados (EBD), was developed. There are two mapping sets for database generation: from the Modelo de Meta Objeto (MMO), used to represent IS, to the Relational Model (RM), and from the latter to the PostgreSQL SQL dialect. The EBD component is part of a framework for modeling, building and maintaining enterprise information systems software.
This component provides services to the other framework components. To validate the proposed approach, software engineers developed ISs using the EBD component. The dissertation's main contributions are an approach to support the IS database life cycle, a software architecture to generate and evolve IS database schemas, an IS data representation model (MMO), a mapping specification to generate schemas and stored procedures, and the definition of automated operation sets to evolve IS database schemas.

Item Métodos de visão computacional aplicados a extração de características de ambientes urbanos em imagens de satélite de baixa resolução (Universidade Federal de Goiás, 2018-10-03) Almeida, Dyego de Oliveira; Oliveira, Leandro Luis Galdino de; http://lattes.cnpq.br/5899392002875573; Spoto, Edmundo Sérgio; Sene Junior, Iwens Gervasio

Urban population growth and the deforestation of green areas are among the most critical problems in Brazil today. Due to the migration of rural people to urban areas, high solar irradiation and deforestation, the Government is creating sustainable actions in order to enlarge green and permeable areas. In this perspective, to promote this mapping effectively over large areas, it is necessary to use feature recognition technologies. Low-resolution satellite imagery has low cost and great area coverage, so applying it to feature identification is advantageous over other types of images. However, accomplishing this identification is computationally complex due to the different features present in images of this type.
This work proposes an effective method of digital processing of low-resolution images for feature identification, in particular of green areas, with an average accuracy of 80.5%, and of buildings, with an average accuracy of 63%.

Item Projeto InVision Framework: Um framework para suportar a criação e uso de jogos no ensino (Universidade Federal de Goiás, 2011-05-31) ALVES, Daniel Ferreira Monteiro; ALBUQUERQUE, Eduardo Simões de; http://lattes.cnpq.br/8181318469884254

The number of people joining Computer Science courses has been decreasing in recent years. Among those who enter, just a few are able to graduate, because retention and dropout rates are high, particularly in the introductory courses on algorithms and programming. The use of games as a motivational factor has been much studied in recent years, achieving good results for this problem. However, for the application of games in education using a constructionist approach, students are required to build games. Several tools are available for this job, but there is a big difference in usability between educational tools (which focus on educational programming) and those specific to game creation.
This work proposes a framework for building games, supported by an application extensible through scripts, which allows it to be adapted for use in various disciplines throughout the course, and not only in introductory ones.

Item Definitividade de formas quadráticas – uma abordagem polinomial (Universidade Federal de Goiás, 2016-11-18) Alves, Jesmmer da Silveira; Brustle, Thomas; http://www2.ubishops.ca/algebra/brustleCv.pdf; Castonguay, Diane; http://lattes.cnpq.br/4005898623592261; Castonguay, Diane; http://lattes.cnpq.br/4005898623592261; Centeno, Carmen; Alvares, Edson Ribeiro; Martinez, Fabio Henrique Viduani; Longo, Humberto José

Quadratic forms are algebraic expressions that play an important role in different areas of computer science, mathematics, physics, statistics and others. We deal with rational quadratic forms and integral quadratic forms, with rational and integer coefficients respectively. Existing methods for the recognition of rational quadratic forms have exponential time complexity or use approximations that weaken the reliability of the results. We develop a polynomial algorithm that improves the best case of rational quadratic form recognition to constant time. In addition, new strategies were used to guarantee the reliability of the results, representing rational numbers as fractions of integers, and to identify linearly independent linear combinations, using Gauss reduction. Regarding the recognition of integral quadratic forms, we identified that the existing algorithms have exponential time complexity for the weakly nonnegative type and are polynomial for the weakly positive type; however, the degree of the polynomial depends on the dimension of the algebra and can be very large. We introduce a polynomial algorithm for the recognition of weakly nonnegative quadratic forms. The algorithm identifies hypercritical restrictions by testing every 9-vertex subgraph of the graph associated with the quadratic form.
By adding a depth-first search approach, a similar strategy was used for the recognition of the weakly positive type. We have also shown that the recognition of integral quadratic forms can be done through mutations of the associated exchange matrix.

Item Predição do tempo de durações de processos e de movimentações processuais na esfera trabalhista (Universidade Federal de Goiás, 2019-01-12) Amaral, Ayrton Denner da Silva; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Silva, Nádia Félix Felipe da; Marques, Thyago Carvalho

The prediction of legal proceeding movements is a relevant problem in the juridical context, since predictability is an important variable in sizing the work of attorneys' offices. This work proposes an artificial neural network architecture to predict proceeding movements in the Brazilian labor court. Despite recent advances in the use of machine learning and natural language processing techniques, the problem has its own characteristics in the juridical context, given its geographic and linguistic specificities. As a case study, a database of proceedings from the year 2015 and from a single district of the labor sphere was used, due to the volume of data available.

Item Agente para suporte à decisão multicritério em gestão pública participativa (Universidade Federal de Goiás, 2014-09-26) Amorim, Leonardo Afonso; Patto, Vinicius Sebba; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Bulcão Neto, Renato de Freitas; Sene Junior, Iwens Gervásio; Patto, Vinicius Sebba; Cruz Junior, Gelson da

Decision making in public management is associated with a high degree of complexity due to insufficient financial resources to meet all the demands emanating from various sectors of society. Often, economic activities are in conflict with social or environmental causes.
Another important aspect of decision making in public management is the inclusion of various stakeholders, e.g. public management experts, small business owners, shopkeepers, teachers, representatives of social and professional classes, citizens etc. The goal of this master's thesis is to present two computational agents to aid decision making in public management as part of the ADGEPA project (Digital Assistant for Participatory Public Management), an innovative practice to support participatory decision making in public resource management: the Miner Agent (MA) and the Decision Support Agent (DSA). The MA uses data mining techniques and the DSA uses multi-criteria analysis to point out relevant issues. The main contribution of this thesis is the ability to assist in the discovery of patterns and correlations between environmental aspects that are not obvious and can vary from community to community. This would help public managers to make systemic decisions that, in addition to attacking the main problem of a given region, would decrease or solve other problems. The validation of the results depends on actual data and analysis by public managers; in this work, the data were simulated.

Item Avaliação do comportamento dos pontos fiduciais faciais durante o envelhecimento humano (Universidade Federal de Goiás, 2017-03-04) Aquino, Cleiton Paiva; Oliveira, Leandro Luís Galdino de; http://lattes.cnpq.br/5899392002875573; Oliveira, Leandro Luís Galdino de; Seoliato, Araceli Aparecida; Albuquerque, Eduardo Simões de

Facial aging is known to be a complex process that varies the shape and texture of the facial area. Variations in shape include variation in the craniofacial structure, while texture variation includes skin coloration and the appearance of facial lines and wrinkles. Shape and texture are considered the most common forms of facial aging patterns.
Fiducial points are control points on an object that define characteristic regions with properties of interest to the application. The objective of this work is thus to evaluate the behavior of facial fiducial points during human aging. An evaluation was performed statistically and through classification, using feature vectors obtained from facial fiducial points. This work had a social motivation, namely aiding the search for missing persons, and a technical one, namely the improvement of facial recognition systems. Among the results, we can highlight behaviors of increase, reduction and stabilization among some fiducial points. With regard to classification, we obtained 84.29% accuracy when comparing the class of people under 20 years old with the class of people between 20 and 39 years old from the black men's group.

Item Seleção e geração de características utilizando regras de associação para o problema de ordenação de resultados de máquinas de buscas (Universidade Federal de Goiás, 2014-08-29) Araujo, Carina Calixto Ribeiro de; Rosa, Thierson Couto; http://lattes.cnpq.br/4414718560764818; Rosa, Thierson Couto; Gonçalves, Marcos André; Longo, Humberto José

Information Retrieval is an area of IT that deals with the storage of documents and the retrieval of information from them. With the advent of the Internet, the number of documents produced has increased, as has the need to retrieve information more accurately. Many approaches have been proposed to meet these requirements, one of them being Learning to Rank (L2R). Despite major advances in the accuracy of retrieved documents, there is still considerable room for improvement.
This master's thesis proposes the use of feature selection and generation based on association rules to improve the accuracy of L2R methods.

Item Uma investigação da correspondência entre mutações e avisos relatados por ferramenta de análise estática (Universidade Federal de Goiás, 2015-12-04) Araújo, Claudio Antônio de; Vincenzi, Auri Marcelo Rizzo; http://lattes.cnpq.br/0611351138131709; Vincenzi, Auri Marcelo Rizzo; Valente, Marco Túlio de Oliveira; Lucena, Fábio Nogueira de

Traditionally, mutation testing is used for test set and/or test criteria evaluation, since it is considered a good fault model, while static analyzers, in general, report a substantial number of false-positive warnings. Objective: this work uses mutation testing to evaluate an automated static analyzer, with the intention of defining a prioritization approach for static warnings based on their correspondence with mutations. Method: we used mutation operators as a fault model to evaluate the direct correspondence between mutations and static warnings. The main advantage of using mutation operators is that they generate a large number of programs containing faults of different types, which can be used to decide which ones are most likely to be detected by static analyzers. Results: the results obtained for a set of open-source programs indicate that: 1) correspondence exists when considering specific mutation operators, so that static warnings may be prioritized based on their level of correspondence with mutations; 2) correspondence exists when considering specific warning categories, so that, assuming static analysis is performed considering these categories, mutation operators may be prioritized based on their level of correspondence with warnings. Conclusion: it is possible to provide an incremental testing strategy aiming at reducing the cost of both static analysis and mutation testing using the correspondence information.
On the other hand, knowing that mutation testing has a high application cost, we identified mutations of some specific mutation operators which an automatic static analyzer is not able to detect. This information can therefore be used to prioritize the order in which mutation operators are applied incrementally, considering first those with no correspondence with static warnings.

Item Análise dos microdados do Enade: proposta de uma ferramenta de exploração utilizando mineração de dados (Universidade Federal de Goiás, 2019-12-20) Araújo, Rodrigo Alexandrino; Brancher, Jacques Duílio; http://lattes.cnpq.br/7909976127880843; Brancher, Jacques Duílio; Sanches, Danilo Sipoli; Camos, Vitor Valério Souza

One way to analyze higher education institutions and student performance is through the National Student Performance Examination (ENADE). From its results it is possible to make informed decisions to improve the teaching-learning process. However, the analysis reports provided by the Anísio Teixeira National Institute for Educational Studies and Research (INEP) offer only descriptive analyses. Although the Institute provides the microdata related to ENADE's exams, advanced knowledge of data analysis and statistics is required to obtain more in-depth information about candidates. Hence, this work uses KDD techniques to develop an exploratory analysis tool for ENADE microdata, together with a classification model capable of predicting student performance. For the elaboration of rules, several decision tree classification algorithms were used, among which CART stood out. The end result is a data analysis tool that allows comparing higher education courses and institutions and provides a better view of this information for the purpose of assisting decision making. Finally, an online questionnaire was distributed so that teachers, students and coordinators could evaluate and validate the developed system.
After this study, the tool proved satisfactory, fulfills what was promised, and serves as motivation to improve the work developed.

Item Um modelo ontológico e um serviço de gerenciamento de dados de apoio à privacidade na Internet das Coisas (Universidade Federal de Goiás, 2019-02-08) Arruda, Mayke Ferreira; Bulcão Neto, Renato de Freitas; http://lattes.cnpq.br/5627556088346425; Bulcão Neto, Renato de Freitas; Prazeres, Cássio Vinicius Serafim; Berardi, Rita Cristina Galarraga

In the Internet of Things (IoT) paradigm, real-world objects equipped with identification, sensing, networking and processing capabilities communicate and consume services over the Internet to perform tasks on behalf of users. Due to the growing popularity of devices with sensing capabilities and the consequent increase in the data produced by these devices, the literature states that the design of an ontology-based model is an essential starting point for addressing privacy risks in IoT, since connected devices are increasingly able to monitor human activities. In addition, due to the complexity and dynamicity of IoT environments, we emphasize the need for privacy ontologies that combine an expressive and extensible vocabulary without overloading the processing of privacy data. Facing this problem, this work presents the development of an ontology-based solution for privacy in IoT, composed of: i) IoT-Priv, a privacy ontology for IoT, built as a light layer on top of IoT concepts imported from an emerging ontology called IoT-Lite; and ii) IoT-PrivServ, a privacy management service which provides functionalities for consumers and/or producers who use the IoT-Priv ontology to model their data, abstracting away the complexity of performing such tasks. As contributions, the evaluation results of IoT-Priv and IoT-PrivServ indicate that we maintained the lightness present in IoT-Lite, which was one of our initial goals.
In addition, we have demonstrated that IoT-Priv is expressive and extensible, since its concepts allow complex scenarios to be modeled and, if necessary, the extension points included in the ontology allow it to be imported and extended to meet more specific needs.

Item CGPlan: a scalable constructive path planning for mobile agents based on the compact genetic algorithm (Universidade Federal de Goiás, 2017-02-16) Assis, Lucas da Silva; Laureano, Gustavo Teodoro; http://lattes.cnpq.br/4418446095942420; Soares, Anderson da Silva; http://lattes.cnpq.br/1096941114079527; Soares, Anderson da Silva; Laureano, Gustavo Teodoro; Camilo Junior, Celso Gonçalves; Osório, Fernando Santos

Path planning consists of finding optimal paths between desired points. These optimal paths can be understood as trajectories that best achieve an objective, e.g. minimizing the distance travelled or the time spent. Most usual path planning techniques assume a complete and accurate environment model to generate optimal paths, but many real-world problems are in the scope of Local Path Planning, i.e. they work with partially known or unknown environments. Therefore, these applications are usually restricted to sub-optimal approaches which plan an initial path based on known information and then modify the path locally, or re-plan the entire path, as the agent discovers new obstacles or environment features. Even though traditional path planning strategies have been widely used in partially known environments, their sub-optimal solutions become even worse when the size or resolution of the environment's representation scales up. Thus, in this work we present CGPlan (Constructive Genetic Planning), a new evolutionary approach based on the Compact Genetic Algorithm (cGA) that pursues efficient path planning in known and unknown environments.
CGPlan was evaluated in simulated environments of increasing complexity and compared with common path planning techniques such as A*, the BUG2 algorithm, RRT (Rapidly-Exploring Random Tree) and evolutionary path planning based on the classic Genetic Algorithm. The results show the efficiency of the proposal and thus indicate a new, reliable approach for path planning of mobile agents with limited computational power and real-time constraints on on-board hardware.

Item Algoritmos baseados em estratégia evolutiva para a seleção dinâmica de espectro em rádios cognitivos (Universidade Federal de Goiás, 2013-11-22) Barbosa, Camila Soares; Cardoso, Kleber Vieira; http://lattes.cnpq.br/0268732896111424; Cardoso, Kleber Vieira; Corrêa, Sand Luz; Camilo Junior, Celso Gonçalves; Santos, Aldri Luiz dos

One of the main challenges in Dynamic Spectrum Selection for Cognitive Radios is the choice of the frequency range for each transmission. This choice should minimize interference with legacy devices and maximize the discovery of opportunities, or white spaces. There are several solutions to this issue, and Reinforcement Learning algorithms are the most successful. Among them, Q-Learning stands out, whose weak point is parameterization, since adjustments are needed in order to successfully reach the proposed objective. In that sense, this work proposes an algorithm based on an evolutionary strategy whose main characteristics are adaptability to the environment and fewer parameters. Through simulation, the performance of Q-Learning and of this work's proposal were compared in different scenarios. The results made it possible to evaluate the spectral efficiency and the adaptability to the environment.
The proposal of this work shows promising results in most scenarios.

Item Planejamentos combinatórios construindo sistemas triplos de steiner (Universidade Federal de Goiás, 2011-08-26) Barbosa, Enio Perez Rodrigues; Barbosa, Rommel Melgaço; http://lattes.cnpq.br/6228227125338610

Intuitively, the basic idea of Design Theory consists of a way to select subsets, also called blocks, of a finite set, so that some properties are satisfied. The more general case is that of block designs. A PBD is an ordered pair (S, B), where S is a finite set of symbols and B is a collection of subsets of S called blocks, such that each pair of distinct elements of S occurs together in exactly one block of B. A Steiner Triple System is a particular case of a PBD in which every block has size exactly 3, the blocks being called triples. The main focus is on techniques for building these systems. Regarding resolvability, we discuss when a Steiner Triple System is resolvable and when it is not. This theory has several applications, e.g. embeddings and even problems related to computational complexity.

Item Estudo e Definição de uma Metodologia de Teste de Software no Contexto de Sistemas Embarcados Críticos (Universidade Federal de Goiás, 2011-07-28) BARBOSA, Jacson Rodrigues; VINCENZI, Auri Marcelo Rizzo; http://lattes.cnpq.br/0611351138131709

Computing is becoming increasingly critical in the embedded applications space and, depending on the software, its malfunction may result in consequences ranging from severe financial loss to the loss of human life. Considering this scenario, we present a systematic literature review investigating the evolution of work related to the testing of critical embedded software, in order to evaluate the level of compliance of those works with the DO-178B standard (Software Considerations in Airborne Systems and Equipment Certification).
This research, in addition to conducting a systematic review of publications on this issue, resulted in the composition of primary studies to define a quality testing process that includes the requirements of DO-178B at its different levels of criticality.

Item Construção de middleware específico de domínio: unificando abordagem dirigida por modelos e separação de interesses (Universidade Federal de Goiás, 2017-10-30) Barbosa, Weider Alves; Costa, Fábio Moreira; http://lattes.cnpq.br/0925150626762308; Costa, Fábio Moreira; Delicato, Flávia; Carvalho, Sérgio Teixeira de

This thesis presents an approach to build model execution machines based on the concept of Domain-Specific Virtual Machines (DSVMs), focusing on the middleware layer that is responsible for the control of model execution. To build this layer, we used techniques derived from Model-Driven Engineering (MDE), taking advantage of the fact that DSVMs can both interpret models directly and be constructed using models. Another concept used in the proposed approach is Separation of Concerns, separating the execution model from the knowledge of the application domain. In this sense, the main objective of this work is to propose an approach that unifies MDE techniques and Separation of Concerns for the construction of DSVMs, thus allowing both the structure and the operational semantics of the middleware to be expressed. As a result, an instance of the control layer of a DSVM for the user-centric communication domain is shown.
We also present the results of a performance evaluation carried out to analyze the impact of the proposed approach on execution time.

Item Uma abordagem ontológica baseada em informações de contexto para representação de conhecimento de monitoramento de sinais vitais humanos (Universidade Federal de Goiás, 2013-10-21) Bastos, Alexsandro Beserra; Sene Junior, Iwens Gervasio; http://lattes.cnpq.br/3693296350551971; Neto, Renato de Freitas Bulcão; http://lattes.cnpq.br/5627556088346425; Carvalho, Sérgio Teixeira de; Vanni, Renata Maria Porto

Monitoring vital signs in intensive care units (ICUs) is an everyday activity of various health professionals, including doctors, nurses, technicians and nursing assistants. In most ICUs, the monitoring and recording of vital signs are performed manually and at predefined time instants. The records of vital sign measurements in ICUs are generally written on preprinted forms, and a health professional has to search through those forms to get information about the clinical state of a patient. Besides, when an abnormal vital sign measurement is detected, a multiparameter monitor triggers audible alarms, and those alarms may not be promptly detected by the medical staff, depending on the workflow within the ICU. In that sense, this work proposes a knowledge representation model for the monitoring of vital signs of patients in ICUs. The proposed model exploits the expressiveness and formality of ontologies, rules and semantic web technologies. This promotes consensual comprehension, sharing and reuse of patients' vital signs. The aim is to develop context-aware applications for monitoring human vital signs, including storage, query support and semantic alarm triggering.
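The last abstract describes rule-based alarm triggering over vital-sign measurements. As a rough illustration of that general idea only, the following sketch encodes a couple of threshold rules in plain Python; the thresholds, names and data structures are invented for the example and are not the ontology-based model the thesis actually proposes.

```python
# Hypothetical sketch of rule-based alarm triggering on vital signs.
# All thresholds and names are illustrative, not taken from the thesis.
from dataclasses import dataclass


@dataclass
class Measurement:
    patient_id: str
    sign: str      # e.g. "heart_rate", "spo2"
    value: float


# Each rule: (vital sign, predicate over the measured value, alarm message)
RULES = [
    ("heart_rate", lambda v: v < 40 or v > 140, "heart rate out of range"),
    ("spo2",       lambda v: v < 90,            "low oxygen saturation"),
]


def triggered_alarms(m: Measurement) -> list[str]:
    """Return the messages of all rules matched by this measurement."""
    return [msg for sign, pred, msg in RULES
            if sign == m.sign and pred(m.value)]


alarms = triggered_alarms(Measurement("p1", "spo2", 85.0))
```

An ontology-based system, as in the thesis, would instead express such conditions as semantic rules over a shared vocabulary, so they can be reasoned about and reused across applications.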