Dissertations/Theses

Click here to access the files directly from the Biblioteca Digital de Teses e Dissertações da UnB

2024
Dissertations
1
  • James Duván García Montoya
  • A Comprehensive Analysis of Model Predictive Control for Lane Keeping Assist System

  • Advisor: EVANDRO LEONARDO SILVA TEIXEIRA
  • COMMITTEE MEMBERS:
  • EVANDRO LEONARDO SILVA TEIXEIRA
  • RENATO VILELA LOPES
  • BRUNO AUGUSTO ANGÉLICO
  • DIEGO COLON
  • Date: 01-Mar-2024


  • Abstract
  • The integration of new technologies into daily routines and activities is an inevitable reality that improves their performance. In this sense, integrating new technologies into current vehicles demonstrates the concern of automotive-industry professionals and governments with safety, performance, and user comfort. For this reason, advanced driver assistance systems (ADAS) have gained popularity. These functions increase vehicle safety by helping to carry out tasks that can be repetitive or difficult for humans. One of these is lane keeping assist (LKAS). As the name suggests, this function was developed to prevent unintentional departure from the center of the lane. It uses sensors such as cameras, GPS, and LIDAR to capture the vehicle's current state on the road and acts through the steering system, changing the angle of the wheels. LKAS uses different algorithms to process sensor information and to make decisions in real time. These control algorithms must guarantee the safety and stability of the vehicle in different risk situations; for this reason, a model predictive controller (MPC) was chosen for this work, since it can handle constraints on different variables of the vehicle system, such as the lateral displacement or the maximum allowable steering angle. However, this controller faces a major implementation challenge, as it requires a high computational load. Therefore, in this research, computational-load reduction strategies were implemented for the MPC controller, which was deployed in model-in-the-loop (MIL) simulation and in hardware-in-the-loop (HIL) on the SpeedGoat real-time simulation platform, after simulations demonstrated the good behaviour of the controller in the LKAS function and its viability for real-time implementation; a comparison was also made against a linear quadratic regulator (LQR) controller. Two formulations were proposed for the vehicle models implemented in the LKAS function: one tracking an already established reference and the other using the current state of the vehicle relative to the track. These models proved to be strongly related to the sensitivity and oscillations of the controller response. Furthermore, the MPC formulations in combination with different QP solvers show that the exponential parameterization strategy allows a significant reduction of the MPC controller's computation time.
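As a rough illustration of the condensed MPC formulation referenced above, the sketch below poses a horizon-N quadratic program for a toy double-integrator lane model and applies it in receding horizon. The model matrices, weights, and horizon are illustrative assumptions, not the vehicle model, constraints, or solver configuration used in the dissertation (the unconstrained case reduces the QP to a linear solve).

```python
import numpy as np

# Toy lateral model: x = [lane offset, lateral velocity], u = steering effort.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([10.0, 1.0])   # state weight
R = np.array([[0.1]])      # input weight
N = 20                     # prediction horizon

# Condensed formulation: stacked predictions X = Phi @ x0 + Gamma @ U.
n, m = A.shape[0], B.shape[1]
Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
Gamma = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)

def mpc_step(x0):
    """Solve the horizon-N QP (unconstrained => linear solve), return u_0."""
    H = Gamma.T @ Qbar @ Gamma + Rbar
    f = Gamma.T @ Qbar @ Phi @ x0
    U = np.linalg.solve(H, -f)
    return U[:m]  # receding horizon: apply only the first input

# Closed-loop simulation from an initial lane offset of 1 m.
x = np.array([1.0, 0.0])
for _ in range(50):
    x = A @ x + B @ mpc_step(x)
```

With input or state constraints added, the same H and f feed a constrained QP solver, which is where the computational load the abstract discusses comes from.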

2
  • Ciro Barbosa Costa
  • Design of a UWB Low Noise Amplifier Integrated in a UHF Passive Tag with Energy Harvesting for Vital Signs Sensing and Monitoring

  • Advisor: DANIEL MAURICIO MUNOZ ARBOLEDA
  • COMMITTEE MEMBERS:
  • ANDRE AUGUSTO MARIANO
  • CARLOS HUMBERTO LLANOS QUINTERO
  • DANIEL MAURICIO MUNOZ ARBOLEDA
  • SANDRO AUGUSTO PAVLIK HADDAD
  • Date: 11-Mar-2024


  • Abstract
  • In this project, we present the design of a low-noise amplifier in 180 nm complementary metal-oxide-semiconductor (CMOS) technology. This amplifier is integrated into a passive Ultra High Frequency (UHF)/Ultra-Wideband (UWB) tag with the purpose of monitoring cardiac and pulmonary blood flow in a patient's residence. Considering the significant advancements in integrated circuits over recent years, spanning a wide range of applications in analog and digital electronics, the UWB standard is a suitable choice for non-invasive biomedical applications and for various Internet of Things (IoT) applications in general. The tag performs measurements, data transmission, and reception of vital information, enabling various forms of control and alarm generation. The circuit is powered through energy harvesting from the UHF band. Additionally, a power management unit is integrated to control battery usage when data reception is active, also powered by energy harvesting. The primary function of the low-noise amplifier (LNA) in the UWB receiver is to increase the signal amplitude with minimal distortion and control the noise figure of the entire receiver. Thus, the LNA design was evaluated based on input and output reflection losses, gain, noise figure, stability, and consumed area. Corner simulations were conducted considering extreme variations in temperature and supply voltage, along with post-layout simulations after parasitic extraction.
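The abstract notes that the LNA controls the noise figure of the entire receiver. Friis' cascade formula makes the reason concrete: each later stage's noise contribution is divided by the gain ahead of it, so the first stage dominates. A minimal sketch with hypothetical stage values (linear units, not the tag's measured figures):

```python
def cascaded_noise_factor(stages):
    """Friis formula: stages is a list of (noise_factor, gain) in linear units,
    ordered from the antenna inward. Returns the total noise factor."""
    total_f, total_gain = 1.0, 1.0
    for f, g in stages:
        total_f += (f - 1.0) / total_gain  # later stages divided by prior gain
        total_gain *= g
    return total_f

# Hypothetical chain: LNA (F = 2, i.e. NF = 3 dB, gain 100) then mixer (F = 10, gain 10).
# The mixer's large noise factor contributes only (10 - 1) / 100 = 0.09.
total = cascaded_noise_factor([(2.0, 100.0), (10.0, 10.0)])
```

This is why the LNA's own noise figure and gain were among the key evaluation metrics.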

3
  • Arthur da Costa Aguiar
  • 3D Computational Model for Simulating Blood Flow during Radiofrequency Cardiac Ablation: Exploring Chaotic Cardiac Behavior with Translational Hydrodynamics

  • Advisor: SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • COMMITTEE MEMBERS:
  • SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • ADSON FERREIRA DA ROCHA
  • ALLISSON LOPES DE OLIVEIRA
  • FÁTIMA MRUÉ
  • Date: 12-Mar-2024


  • Abstract
  • Cardiac ablation, an invasive procedure to treat cardiac arrhythmias, may present complications such as neighboring structure injuries, induced arrhythmias, and the risk of developing cardiac fistulas. Postoperatively, patients may experience chest pain, fatigue, possible arrhythmia recurrences, and medication-related complications. Knowing this, this dissertation aims to develop a 3D computational model to simulate blood flow during Radiofrequency Cardiac Ablation (RFCA) and investigate chaotic cardiac behavior through translational hydrodynamic analysis. The proposed model will allow for a better understanding of interactions between the electrode, cardiac, connective, and esophageal tissues during RFCA, providing valuable insights to enhance treatment efficacy and safety. Specifically, the thesis will involve using COMSOL software to simulate blood flow dynamics, applying hydrodynamics and chaos theory. Geometry and mesh will be defined, and relevant physical properties determined. Adequate boundary and initial conditions will be established. The RFCA process will be modeled, considering interactions between the electrode and cardiac, connective, and esophageal tissues. Additionally, chaos theory will be integrated into the model to analyze complex patterns and dynamic blood flow behavior during RFCA. Furthermore, a bibliometric analysis using VOSviewer will be conducted to identify the lack of correlation between relevant themes such as atrial fibrillation, chaos theory, computational simulation, translational research, fluid mechanics, hemodynamics, and RFCA in scientific literature. VOSviewer will be used to map the co-authorship network and visualize interconnections between different themes, emphasizing the lack of connections between areas like computational simulation, chaos theory, and RFCA. 
Co-occurrence analysis of keywords using VOSviewer will identify gaps and trends in research related to the mentioned themes, highlighting the scarcity of studies addressing the intersection between computational simulation, translational research, chaos theory, fluid mechanics, hemodynamics, and RFCA. Additionally, major journals and conferences publishing articles on these themes will be identified through VOSviewer data analysis, aiming to understand the distribution of scientific production in these areas and identify opportunities for collaboration and knowledge exchange. The bibliometric analysis conducted in VOSviewer supported by Excel software revealed a variety of keyword combinations, some with a significant number of associated articles, while others showed a limited count of interconnections, suggesting unexplored key areas. This knowledge gap underscores the need for further investigations in the field of RFCA and computational modeling of nonlinear systems. Simulation of temperature behavior during RFCA provided insights into research distribution and fundamental considerations for procedure success, highlighting the importance of temperature monitoring and blood flow visualization. In summary, the results underscore the ongoing need for interdisciplinary investigations and strategies that can advance the understanding and practice of RFCA, offering opportunities to improve clinical outcomes and patient care, including exploring the potential of simulation tools to replace live animal testing. My impressions on future studies were shaped by data analyses that emerged alongside unforeseen limitations during the research. Examining the results obtained in bibliometric research and computational simulation, it became clear that specific areas lack attention and development. 
The complexity of hydrodynamics, the nonlinear nature of cardiac systems, and gaps in the scientific literature highlight the importance of future studies to fill these gaps and advance the field of RFCA. Additionally, the need for more advanced monitoring techniques and prospective clinical studies emerged as crucial points for improving treatment efficacy and safety. These insights underscore the need for interdisciplinary collaboration to drive progress in the area, aiming not only for scientific advances but also for better clinical outcomes and quality of life for patients.

4
  • Bruno Pinheiro de Melo Lima
  • Automatic detection and counting of Euschistus heros (Brown stink bug) in soybean crops using images and deep learning

  • Advisor: DIBIO LEANDRO BORGES
  • COMMITTEE MEMBERS:
  • DANIEL MAURICIO MUNOZ ARBOLEDA
  • DIBIO LEANDRO BORGES
  • HELIO PEDRINI
  • JOSE MAURICIO SANTOS TORRES DA MOTTA
  • Date: 15-Mar-2024


  • Abstract
  • In Brazil, soybean production has increased considerably in recent decades, driven by advanced agricultural technologies. However, the excessive use of pesticides, which represents an important share of production costs, presents economic and environmental challenges. Pest detection and counting methods using computer vision appear promising, since traditional approaches are laborious and time-consuming. This study introduces a method tailored for real-time detection and counting of insects in soybean fields, built upon an improved YOLOv8 model designed for this task. To enhance accuracy in detecting small insects and reduce the complexity of YOLOv8, we conducted ablation experiments to assess the impact of integrating a deeper feature level and a proposed C2f2 layer into the insect detection model. Through these ablation experiments, the new algorithm underwent training and testing on a public research dataset composed of samples of Euschistus heros (NBSB), a pest of interest in soybean crops in Brazil, and its performance was compared against three configurations under identical conditions: A) YOLOv8n, B) YOLOv8n with C2f2 only, and C) YOLOv8n with P2 only. Our evaluation employed various metrics, including Precision, mAP0.5, and mAP0.95. We also considered model complexity as an essential factor in assessing the efficiency of YOLO models for specific applications, comparing parameter count (Params), FLOPs, and inference time. The proposed model was then integrated into a framework capable of tracking and counting NBSBs from a video stream. This video stream was created by animating 42 new images captured under diverse lighting and background conditions to address potential challenges in practical applications, testing both the generalization capacity of the model and its performance in video applications. The model demonstrates a satisfactory measure of accuracy, represented by the MOTA metric, of around 62%. This indicates that the proposed technique is effective in tracking stink bugs throughout the presented scenes, which is corroborated by the high hit rate in the final insect count (upward deviation of 5.3%). These results point to the potential of this type of framework and signal promise for future pest-mapping applications based on real-time tracking.
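The MOTA figure quoted above follows the standard multiple-object-tracking definition, MOTA = 1 − (FN + FP + IDSW) / GT. A minimal sketch, with hypothetical event counts chosen only to illustrate a ≈62% score (not the study's actual counts):

```python
def mota(false_negatives, false_positives, id_switches, num_ground_truth):
    """Multiple Object Tracking Accuracy: 1 - (FN + FP + IDSW) / GT,
    where GT is the total number of ground-truth object instances."""
    errors = false_negatives + false_positives + id_switches
    return 1.0 - errors / num_ground_truth

# Hypothetical: 50 ground-truth instances, 12 misses, 5 false alarms, 2 identity switches.
score = mota(12, 5, 2, 50)  # = 1 - 19/50 = 0.62
```

Note that MOTA can be negative when errors exceed ground-truth instances, which is why it is usually read alongside the raw count deviation, as done above.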

5
  • Gabriel da Silva Lima
  • Semianalytical Method for Determination of Optimal Strategy for Miscible WAG Injection

  • Advisor: EUGENIO LIBORIO FEITOSA FORTALEZA
  • COMMITTEE MEMBERS:
  • EUGENIO LIBORIO FEITOSA FORTALEZA
  • DANIEL MAURICIO MUNOZ ARBOLEDA
  • JONES YUDI MORI ALVES DA SILVA
  • ANDRÉ BENINE NETO
  • Date: 20-Mar-2024


  • Abstract
  • Water-alternating-gas (WAG) injection is an established Enhanced Oil Recovery (EOR) technique involving alternating water and gas injections to enhance sweep efficiency and maximize hydrocarbon recovery. Recently, WAG has been recognized for its effectiveness in reinjecting gas into offshore reservoirs, especially those with high CO2 content like Brazil's pre-salt reservoirs.

    WAG performance is influenced by reservoir properties and injection parameters, with the WAG ratio being a critical variable. Poorly designed WAG ratios can result in suboptimal oil recovery, making its optimization essential for successful oil recovery. Various studies have employed techniques such as artificial neural networks and bio-inspired algorithms to determine the optimal WAG ratio, but the high computational costs associated with these methods limit their practical application in real models. In this context, an analytical or semi-analytical methodology of low computational cost is required to find the WAG ratio for miscible WAG injection.

    This study presents a semi-analytical method to determine the WAG ratio based on the reservoir's solution gas-oil ratio (Rs). The Rs is used to determine the ideal injection quantities of water and miscible gas, aiming to reduce oil viscosity and enhance sweep efficiency, all while avoiding the formation of continuous gas phases under reservoir conditions. The primary methodology goal is to optimize total oil production. A comparative analysis is performed to assess the effectiveness of the proposed method against conventional injections such as waterflooding and gas flooding, as well as different WAG ratios found in the literature, such as 1:1, 1:2, 2:1, 4:1, and 1:4. The reservoir simulation is carried out using the Olympus and EGG models, employing two modified fluids to simulate conditions of the Brazilian pre-salt.

    The results showed that the Rs method required only two simulations to determine the WAG ratio, which was 2.6:1 for fluid 'A' and 3.2:1 for fluid 'B', and managed to reduce oil viscosity and improve oil production, in the worst case, by 30% compared to waterflooding. Although the method did not yield the exact optimal oil production across all scenarios, its outcome demonstrates its potential to compete with the common WAG ratios found in the literature and also to serve as a practical guide for improving oil recovery without relying on algorithms with high computational costs.

6
  • Conceição Jane Guedes de Lira
  • RELIABILITY STUDY FOR CHARACTERIZING MAINTENANCE STRATEGIES FOR INDUSTRIAL CONDITIONING EQUIPMENT IN THE FOOD BUSINESS.

  • Advisor: ANTONIO PIRATELLI FILHO
  • COMMITTEE MEMBERS:
  • ANTONIO PIRATELLI FILHO
  • CARLOS HUMBERTO LLANOS QUINTERO
  • SANDERSON CESAR MACEDO BARBALHO
  • SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • Date: 29-Apr-2024


  • Abstract
  • In a competitive global environment, maintenance plays a fundamental role in equipment availability in the food industry. This study seeks to balance maintenance strategies to optimize equipment uptime, developing a preventive plan based on quantitative analyses, including MTBF (Mean Time Between Failures), MTTR (Mean Time To Repair), and equipment and system availability. In the methodological approach adopted in this study, 11 models of industrial air conditioners were analyzed based on performance and efficiency criteria. Historical data from this equipment were used to calculate MTTR, MTBF, and availability, and a preventive maintenance plan was developed from these results. The approach followed statistical modeling using Minitab® version 16 software and the phases of the data analysis reference model. The Anderson-Darling test was used to evaluate the adherence of the data to different probability distributions. The sample consisted of 399 observations of repair-time and time-between-failure data. Probability distributions such as the Exponential, Gamma, Lognormal, and Weibull, and the approach to evaluating the preventive maintenance strategy, were detailed. The study thus explores the importance of preventive maintenance in the food industry, focusing on the reliability and availability of air-conditioning equipment: historical data and statistical modeling were used to analyze the mean time between failures (MTBF), the mean time to repair (MTTR), and equipment availability, and a preventive maintenance plan was developed based on these average times and availability, aiming to optimize production efficiency and effectiveness. Given the analyses carried out, it is clear that implementing preventive maintenance strategies is essential to guarantee the reliability and availability of equipment in the food industry.
Statistical modeling of the data made it possible to evaluate the performance of the equipment over time, enabling the creation of an effective maintenance plan. It is recommended that these preventive practices continue, to mitigate the impacts of failures in production processes and guarantee the quality and integrity of food products.
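The availability figure central to this study follows directly from the two mean times: steady-state availability is the fraction of time the equipment is up. A minimal sketch with hypothetical values (not the study's data):

```python
def availability(mtbf, mttr):
    """Steady-state (inherent) availability from mean time between failures
    and mean time to repair: uptime / (uptime + downtime)."""
    return mtbf / (mtbf + mttr)

# Hypothetical air conditioner: fails on average every 450 h, repaired in 50 h.
a = availability(450.0, 50.0)  # 450 / 500 = 0.90, i.e. 90% availability
```

The same ratio is computed per equipment model, which is how the study compares the 11 air-conditioner models and prioritizes preventive actions.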

Theses
1
  • ELENA JAVIDI DA COSTA
  • Array Signal Processing Applications on Biomedical Engineering

  • Advisor: JOSE ALFREDO RUIZ VARGAS
  • COMMITTEE MEMBERS:
  • EBRAHIM SAMER EL YOUSSEF
  • EDISON PIGNATON DE FREITAS
  • JOSE ALFREDO RUIZ VARGAS
  • JOSE FELICIO DA SILVA
  • RICARDO ZELENOVSKY
  • Date: 22-Jan-2024


  • Abstract
  • The present thesis project seeks to propose hardware and software solutions for biomedical engineering applications, in particular using array signal processing techniques. First, a multi-sensor wearable health device (MWHD) is proposed, including high-resolution signal processing algorithms to measure heart rate (HR) and step count. Next, a novel modification of the traditional eigenvalue-based Information Theoretic Criteria (ITC) is presented in order to estimate the number of components in Magnetoencephalography (MEG) and Electroencephalography (EEG) data. Finally, an unsupervised framework for the identification of Visual Evoked Potentials (VEP) in MEG measurements is developed. The proposed approaches are validated using measurements.
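As a rough illustration of eigenvalue-based Information Theoretic Criteria (the classical baseline that the thesis modifies), the sketch below applies the standard MDL criterion of Wax and Kailath to synthetic multichannel data. The data model, dimensions, and noise level are assumptions for illustration only, not MEG/EEG recordings:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 6, 1000, 2          # sensors, snapshots, true number of components

# Synthetic data: d random sources mixed onto M channels plus white noise.
mixing = rng.standard_normal((M, d))
sources = rng.standard_normal((d, N))
X = mixing @ sources + 0.1 * rng.standard_normal((M, N))

# Eigenvalues of the sample covariance, sorted in descending order.
eigs = np.sort(np.linalg.eigvalsh(X @ X.T / N))[::-1]

def mdl(k):
    """MDL score for the hypothesis of k components: likelihood term on the
    M-k smallest eigenvalues plus a model-complexity penalty."""
    tail = eigs[k:]
    geo = np.exp(np.mean(np.log(tail)))  # geometric mean of the tail
    arith = np.mean(tail)                # arithmetic mean of the tail
    return (-N * (M - k) * np.log(geo / arith)
            + 0.5 * k * (2 * M - k) * np.log(N))

d_hat = min(range(M), key=mdl)  # estimated number of components
```

The criterion selects the k at which the remaining eigenvalues look equal (pure noise), which is the quantity being estimated for MEG/EEG component counts.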

2
  • Rogerio Rodrigues dos Santos
  • Secure Communication Based on Subactuated Hyperchaos: Proposal Using Lyapunov Methods

  • Advisor: JOSE ALFREDO RUIZ VARGAS
  • COMMITTEE MEMBERS:
  • ELMER ROLANDO LLANOS VILLARREAL
  • GIOVANNI ALMEIDA SANTOS
  • GUILLERMO ALVAREZ BESTARD
  • JOSE ALFREDO RUIZ VARGAS
  • MAX EDUARDO VIZCARRA MELGAR
  • Date: 29-Feb-2024


  • Abstract
  • This thesis explores essential concepts to justify an innovative research proposal in chaotic secure communication. Initially, a comprehensive review of articles and surveys related to secure communication was conducted, focusing on chaotic systems, actuated control, robustness, and hyperchaotic systems. This provided an understanding of the current state of research in communication security. Next, a statistical analysis was conducted to evaluate recent communication system models, considering simplicity, robustness, security, and reliability in encoding and decoding encrypted messages, as well as the ability to handle disturbances. The methodology involved collecting data from reliable sources such as Google Scholar and Scopus. The results highlight a growth in publications, with significant leadership from China. Additionally, trends in hyperchaotic secure communication will be analyzed. Following this, an innovative secure communication system is proposed, using chaos-based analog cryptography to transmit signals in a public range. The main contributions include the introduction of a subactuated control law in synchronization, demonstrating system robustness against disturbances, and exploring unpredictability in hyperchaotic systems to enhance cryptography security. These contributions consolidate the proposed system as a promising approach to improving security in communication within public environments, providing a solid foundation for future research and practical applications in secure telecommunications.

3
  • Emanuel Pereira Barroso Neto
  • Constructive Optimization Methodologies for Oil Reservoirs with Modified Cost Function

  • Advisor: EUGENIO LIBORIO FEITOSA FORTALEZA
  • COMMITTEE MEMBERS:
  • ALEXANDRE ANOZÉ EMERICK
  • MARCELO SOUZA DE CASTRO
  • DANIEL MAURICIO MUNOZ ARBOLEDA
  • EUGENIO LIBORIO FEITOSA FORTALEZA
  • MANOEL PORFIRIO CORDAO NETO
  • Date: 08-Mar-2024


  • Abstract
  • Among the problems to be solved in the field of reservoir engineering, there is the challenge of obtaining the best possible operating conditions so that the economic return from oil and gas exploration is maximized. In this context, there is a significantly large set of solutions in the literature, both in terms of model adjustment and production optimization; among the algorithms used, one can mention the gradient methods and even the use of artificial intelligence, using the Net Present Value as a cost function. However, such solutions consider a long life cycle for the reservoir, which is not desirable from an economic point of view, and also require, in general, a high computational power.

    Considering the limitations of algorithms already consolidated in the literature, this research aims to establish a production optimization methodology for a case in which the well settings and the processing capacity of the produced and injected fluids have already been defined. The methodology is constructive, that is, it obtains the controls progressively over time, and it results in economic values that are competitive with, and better than, uncontrolled cases, as well as in shorter exploration time and low computational cost.

    In order to achieve the proposed objective, the objective function is modified to directly consider the intrinsic parameters of the reservoir, and simple methods of finding solutions are used. The algorithms developed with this methodology are then used in commonly used reservoir models, both in the case of a deterministic model and in a set of realizations, in which parameters with uncertainties are present, and the results obtained are compared with those coming from benchmark cases: constant control and reactive control. It is expected that the search algorithms for the optimal control of wells present better economic results than those of the benchmarks, showing the contribution of this research in the area of production optimization.

4
  • Mayla dos Santos Silva
  • Reliability Assessment of Hospital Infusion Pumps
  • Advisor: ANTONIO PIRATELLI FILHO
  • COMMITTEE MEMBERS:
  • ADSON FERREIRA DA ROCHA
  • ANTONIO PIRATELLI FILHO
  • ALLISSON LOPES DE OLIVEIRA
  • FÁTIMA MRUÉ
  • GLECIA VIRGOLINO DA SILVA LUZ
  • Date: 15-Mar-2024


  • Abstract
  • Infusion pumps (IP) are medical devices (MD) used to administer medication or nutrition continuously and precisely. Over the years, their use has expanded; they are currently employed in emergency rooms, ICUs, pediatrics, and several other hospital sectors. In the recent COVID-19 pandemic, use of this device intensified due to the demand for administering high doses of sedatives to ventilated patients. As they are invasive medical equipment, maintenance must be carried out with greater attention and frequency in order to maintain good functioning and infusion precision. If IPs fail, they can cause adverse events that harm the patient's health. In addition to the problems associated with the patient, the IP is the MD with the highest number of adverse event reports in Brazil, which increases the need to verify the conditions under which the device is operating, aiming to minimize failures during use.

    Given this panorama, the objective of this research is to analyze the reliability of IPs operating in a Brazilian hospital, using an internal database built from clinical engineering software. Probability distributions for repair time and time between failures were modeled; aspects such as reliability and availability of the hospital's IPs were calculated through failure analysis, in accordance with national and international recommendations; and the hospital sectors with recurrent failures and the services performed were investigated. Furthermore, we carried out a systematic literature review to establish how the reliability of pumps and other metrological parameters have been studied in recent years, in order to finally propose a plan for checking the pumps.

    The review demonstrated that the gravimetric method brings positive results in flow assessment, being an alternative for those responsible for clinical engineering. It became evident that there is a lack of studies addressing reliability concepts applied to IPs, especially those involving bench and on-site tests. In the evaluation of the operating equipment, it was observed that the records lacked details regarding the reason for the failure and the opening of the work order, which could lead to inaccuracy in maintenance planning. The longest repair time was identified in the (Neurological) Intensive Care Unit, the sector with the majority of the pumps. Graphical analyses and tests showed that the Weibull distribution satisfactorily models both the time between failures and the time to repair, and model A demonstrated better results in terms of availability and reliability. Finally, the study proposes a plan that combines national recommendations with a reliability cycle for MDs aimed at verifying IPs, so that it can be included in the predictive and preventive maintenance routine.
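The Weibull model that the study fits has a closed-form reliability function and mean life. A minimal sketch with hypothetical shape and scale parameters (not the values fitted to the hospital's data):

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability that the pump survives past time t for a
    Weibull(shape=beta, scale=eta) time-between-failures model."""
    return math.exp(-((t / eta) ** beta))

def weibull_mean_life(beta, eta):
    """Mean time between failures: eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

# Hypothetical fit: beta = 1.5 (wear-out trend), eta = 200 h characteristic life.
r_100h = weibull_reliability(100.0, 1.5, 200.0)
mtbf = weibull_mean_life(1.5, 200.0)
```

With beta = 1 the model reduces to the exponential (constant failure rate); beta > 1 indicates wear-out, which is the case that motivates preventive rather than purely corrective maintenance.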

5
  • Pablo Henrique Ursulino de Pinho
  • Semi-blind Receivers for two-hop MIMO relay systems with Multiple Khatri-Rao and Kronecker Codings.

  • Advisor: JOAO PAULO JAVIDI DA COSTA
  • COMMITTEE MEMBERS:
  • JOAO PAULO JAVIDI DA COSTA
  • JOSE ALFREDO RUIZ VARGAS
  • RICARDO ZELENOVSKY
  • TARCISIO FERREIRA MACIEL
  • WALTER DA CRUZ FREITAS JUNIOR
  • Date: 30-Apr-2024


  • Abstract
  • In recent years, semi-blind receivers based on tensor models for MIMO communication systems have been widely used, as they allow better parameter estimation without prior channel knowledge. This thesis presents developments in the scope of new semi-blind receivers applied to point-to-point and cooperative MIMO communication systems to perform symbol and channel estimation. More specifically, the theoretical contributions of this thesis are linked to the extension of the codings that introduce spatial diversity to the symbol matrices. These codings allow the proposition of semi-blind receivers to estimate the symbol and channel matrices without prior channel knowledge. In the first part of this thesis, a particular case of the multiple Khatri-Rao space-time (MKRST) coding is considered for a point-to-point MIMO system. For the MKRST coding, one symbol matrix is assumed known and can be considered a pre-coding matrix. For this coding scheme, a received-signal tensor model is proposed, and new semi-blind receivers are presented to jointly estimate the symbols and the channel. In the second part of this thesis, a new coding extension based on the multiple Kronecker product of the symbol matrices is applied to a two-hop MIMO relay system. This system considers a single relay, and the coding scheme is combined with a tensor space-time-frequency (TSTF) coding at both the transmit and relay nodes, denoted TSTF-MSMKron coding. For the proposed system, the tensor model is exploited to obtain receivers that jointly estimate channels and symbols in a semi-blind way. In each part of the thesis, conditions related to the uniqueness of the tensor decompositions and the identifiability of the proposed algorithms are discussed. Simulation results are provided to evaluate the performance of the proposed coding schemes and semi-blind receivers.

6
  • DIOGO DE OLIVEIRA COSTA
  • Hardware-In-the-Loop Testing Platform for Embedded Controller in a Radiofrequency Ablation Device

  • Advisor: SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • COMMITTEE MEMBERS:
  • ADSON FERREIRA DA ROCHA
  • ALLISSON LOPES DE OLIVEIRA
  • FÁTIMA MRUÉ
  • SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • Date: 30-Apr-2024


  • Abstract
  • The main objective of this work is the development of a protocol and the modification of the ARFACTA radiofrequency ablation equipment to obtain the values of the components of the Cole-Cole model, which will be used in the hardware-in-the-loop (HIL) module and in the controller. From this modification, a test bench will be created that uses HIL technology to emulate the behaviour of an organ during the ablation procedure.

    This test bench will be able to represent the impedance variation of the organ undergoing treatment, making it possible to verify the behaviour and evaluate the effectiveness of the process. The device will enable controller testing, seeking to maximise treatment time by delaying the roll-off of the frequency response.

    The development includes the creation of a frequency controller with the purpose of optimising the ablation parameters, enabling the modification of the output frequency in a pre-established range. This dynamic controller has the ability to cover different treatment regions, covering the variables inherent to the organ in question and to the treatment process itself. This advancement will allow maximising the treatment time by delaying the roll-off of the frequency response, aiming to improve the effectiveness of the medical procedure and achieve more satisfactory outcomes for patients.

    This set of developments will contribute to the advancement of ablation technology, providing greater efficiency and control in organ treatment, as well as enabling more accurate evaluation of the performance indicators of the controllers used. With this, it is expected to improve the effectiveness of medical procedures, ensuring more satisfactory results for patients.
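The Cole-Cole model mentioned above has a standard single-dispersion form, Z(ω) = R∞ + (R0 − R∞) / (1 + (jωτ)^α), whose parameters are what the modified equipment extracts. A minimal sketch, with illustrative parameter values rather than those identified for ARFACTA:

```python
def cole_cole_impedance(omega, r_inf, r0, tau, alpha):
    """Single-dispersion Cole-Cole tissue impedance:
    Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)^alpha), 0 < alpha <= 1."""
    return r_inf + (r0 - r_inf) / (1.0 + (1j * omega * tau) ** alpha)

# Illustrative parameters: low-frequency resistance 500 ohm, high-frequency
# resistance 100 ohm, relaxation time 1 us, dispersion broadening alpha = 0.8.
z_low = cole_cole_impedance(1e-3, 100.0, 500.0, 1e-6, 0.8)   # approx R0
z_high = cole_cole_impedance(1e9, 100.0, 500.0, 1e-6, 0.8)   # approaches R_inf
```

During ablation, tissue changes shift these parameters, which is the impedance variation the HIL bench needs to reproduce for the organ being emulated.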

2023
Dissertations
1
  • Rodrigo Bonifácio de Medeiros
  • Design of a Fuzzy PID controller in embedded hardware for a lower limb exoskeleton

  • Advisor: DANIEL MAURICIO MUNOZ ARBOLEDA
  • COMMITTEE MEMBERS:
  • ABEL GUILHERMINO DA SILVA FILHO
  • DANIEL MAURICIO MUNOZ ARBOLEDA
  • GUILLERMO ALVAREZ BESTARD
  • RENATO CORAL SAMPAIO
  • Date: 30-Jan-2023


  • Abstract
  • Robotic manipulators are multiple-input/multiple-output (MIMO) systems with multiple points of nonlinearity, affected by numerous uncertainties and disturbances. PID controllers are widely used in industry for kinematic and dynamic control; however, when applied to MIMO systems, it is not easy to tune them and achieve performance improvements. In this work, a standard PID controller is combined with a fuzzy precompensator (FP-PID), both tuned using the mono-objective PSO algorithm and the multi-objective NSGA-II and MODE algorithms for a two-degree-of-freedom (2-DOF) robotic manipulator representing an exoskeleton. To validate the system, two sets of real human gait data were used, normal walking and stair climbing, to estimate the trajectory error of the manipulator. The statistical analyses of the algorithms over 16 experiments were satisfactory, and adding the fuzzy precompensator to the conventional PID reduced the mean squared error of one of the manipulator's links by up to 73 percent, in addition to smoothing the torque in the multi-objective results. Another focus is the development of a hardware-software co-design for the FP-PID model of the exoskeleton, embedding the system on an ARM processor for the PID and the plant of the robotic manipulator, and an FPGA architecture for the fuzzy controller module. The results show that the control loop keeps the response time within the expected range for the application.
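As a baseline for the FP-PID scheme described above, the sketch below implements a plain discrete PID loop on a toy first-order plant; the gains, plant, and setpoint are illustrative assumptions, not the exoskeleton dynamics or the tuned values (the precompensator would additionally reshape the reference before this loop):

```python
# Discrete PID loop driving a toy first-order plant y' = -y + u to a setpoint.
kp, ki, kd = 2.0, 1.0, 0.05   # illustrative gains (the work tunes these via PSO/NSGA-II/MODE)
dt, setpoint = 0.01, 1.0
integral, prev_error, y = 0.0, 0.0, 0.0

for _ in range(3000):
    error = setpoint - y
    integral += error * dt                     # integral term accumulates error
    derivative = (error - prev_error) / dt     # derivative term damps overshoot
    u = kp * error + ki * integral + kd * derivative
    prev_error = error
    y += dt * (-y + u)                         # explicit-Euler plant update
```

In the dissertation's setup, the mean squared error of a loop like this (per joint, over a gait trajectory) is the cost that the bio-inspired optimizers minimize.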

2
  • Leonardo Bezerra Libanio
  • PID CONTROLLER TUNING APPLIED TO A MOTOR-GENERATOR SET USING BIOINSPIRED ALGORITHMS
  • Advisor: DANIEL MAURICIO MUNOZ ARBOLEDA
  • COMMITTEE MEMBERS:
  • CARLOS HUMBERTO LLANOS QUINTERO
  • DANIEL MAURICIO MUNOZ ARBOLEDA
  • MAURICIO FIGUEIREDO DE OLIVEIRA
  • RUDI HENRI VAN ELS
  • Date: 13-Feb-2023


  • Abstract
  • This work presents the tuning of a proportional-integral-derivative (PID) controller for a motor-generator set using bioinspired algorithms, proposing an alternative for tuning in offshore environments. The controller tuning was formulated as an optimization problem whose performance is analyzed by tracking the time-domain response while disturbances are applied by increasing and decreasing loads in the power system, providing the data needed to optimize the system, raise the mechanical potential, reduce instabilities and maintenance, and increase energy efficiency. A preliminary analysis of the components and variables involved in the motor-generator system is presented, namely: internal combustion engine, electric generator, controller, and data acquisition system. A simplified nonlinear model was derived from the transfer function of a servo system. The obtained model was used to extract a set of controller performance parameters that guide the search process of particle swarm optimization (PSO) and differential evolution (DE) algorithms, as well as their variations, using Matlab/Simulink. For comparison, the PIDTune function was used to evaluate the performance of the bioinspired algorithms. With the obtained PID parameters, design simulations and discussions were carried out. Subsequently, the PID parameters were implemented in the controller, followed by trajectory-tracking tests with a Wartsila engine coupled to a Leroy Somer electric generator from the Petrobras platform 65 offshore installation. Finally, the effectiveness of the proposed controller optimization with the selected tuning is discussed, an analysis of the motor-generator set parameters is presented, and conclusions and suggestions for further research are given.
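The optimization formulation above can be sketched with a minimal differential evolution (DE) loop. Everything below is illustrative: a toy first-order plant stands in for the servo-system model, and an ITAE cost stands in for the thesis's performance parameters.

```python
import numpy as np

def step_cost(gains, dt=0.01, steps=300):
    """Integral of time-weighted absolute error (ITAE) for a unit step on a
    toy first-order loop standing in for the motor-generator set."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 0.0, 0.0
    for n in range(steps):
        err = 1.0 - y                      # unit step in the speed reference
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        y += dt * (-y + 0.8 * u)           # hypothetical plant dynamics
        cost += n * dt * abs(err) * dt     # ITAE quadrature
    return cost

def differential_evolution(cost, bounds, pop=15, gens=80, F=0.6, CR=0.9, seed=1):
    """Minimal DE/rand/1/bin optimizer over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (pop, len(bounds)))
    C = np.array([cost(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(len(bounds)) < CR
            cross[rng.integers(len(bounds))] = True   # at least one mutant gene
            trial = np.where(cross, mutant, X[i])
            tc = cost(trial)
            if tc <= C[i]:                 # greedy selection
                X[i], C[i] = trial, tc
    best = int(np.argmin(C))
    return X[best], float(C[best])

gains, itae = differential_evolution(step_cost, [(0, 30), (0, 20), (0, 0.5)])
```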

3
  • Carlos Alberto Alvares Rocha
  • Deep Learning Applied to Long Text Classification with Few Data: The Case of PPF

  • Advisor: LI WEIGANG
  • COMMITTEE MEMBERS:
  • LI WEIGANG
  • DANIEL MAURICIO MUNOZ ARBOLEDA
  • VICTOR RAFAEL REZENDE CELESTINO
  • MARCELO XAVIER GUTERRES
  • Date: 16-Mar-2023


  • Abstract
  • Natural language processing (NLP) is an area of artificial intelligence that has gained considerable attention in recent years. Recent advances attracted the attention of the Ministry of Science, Technology and Innovations (MCTI) to a project aimed at locating international funding for research and development accessible to Brazilian researchers. Classification is a challenge for this solution due to the absence of high-quality labeled data, a requirement for most state-of-the-art implementations in the field. This work explores different machine learning strategies to classify long, unstructured and irregular texts, obtained by scraping the websites of funding institutions, in order to find, through an incremental approach, a suitable method with good performance. Due to the limited amount of data available for supervised training, pre-training was employed to learn the context of words from other, larger and highly similar datasets. Then, using the acquired information, transfer learning associated with deep learning models was applied to improve the understanding of each sentence. To reduce the impact of text irregularity, pre-processing experiments were carried out to identify the best techniques for this type of content. Compared to the work's baseline, a new level of results was reached, exceeding 90% accuracy in most of the trained models. The Longformer + CNN model stands out, reaching 94% accuracy with 100% precision, followed by the Word2Vec + CNN model with 93.55% accuracy. The study's findings represent a successful application of artificial intelligence in public administration.
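The pre-training and transfer idea, representing a document with word vectors learned elsewhere and then classifying in that space, can be shown in miniature. The sketch below is hypothetical: hand-made 2-D "pretrained" vectors replace Word2Vec, and a nearest-centroid rule replaces the CNN classifier.

```python
import numpy as np

# Hypothetical "pretrained" word vectors (Word2Vec would supply these).
EMB = {
    "grant":   np.array([0.9, 0.1]), "funding": np.array([0.8, 0.2]),
    "call":    np.array([0.7, 0.3]), "budget":  np.array([0.8, 0.1]),
    "robot":   np.array([0.1, 0.9]), "sensor":  np.array([0.2, 0.8]),
    "control": np.array([0.1, 0.8]),
}

def doc_vector(text):
    """Mean-pool the pretrained vectors of known words (out-of-vocabulary
    words are ignored); document length then becomes irrelevant, which helps
    with long, irregular texts."""
    vecs = [EMB[w] for w in text.lower().split() if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def train_centroids(docs, labels):
    """One centroid per class in the embedding space."""
    cents = {}
    for lab in set(labels):
        cents[lab] = np.mean(
            [doc_vector(d) for d, l in zip(docs, labels) if l == lab], axis=0)
    return cents

def classify(text, cents):
    v = doc_vector(text)
    return min(cents, key=lambda lab: np.linalg.norm(v - cents[lab]))

docs = ["funding call with grant budget", "robot sensor control loop"]
cents = train_centroids(docs, ["funding", "other"])
print(classify("new grant funding opportunity", cents))  # -> "funding"
```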

4
  • ANDREA HENRIQUE CAMPOS DA FONSECA
  • Case Study: Application of Information Systems and Insertion of Mechatronic Products in Improving SUS User Service and Combating COVID-19
  • Advisor: SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • COMMITTEE MEMBERS:
  • SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • ANTONIO PIRATELLI FILHO
  • EUGENIO LIBORIO FEITOSA FORTALEZA
  • Paulo Roberto dos Santos
  • Date: 02-Jun-2023


  • Abstract
  • In Brazil, information systems aimed at meeting social demands such as public health services have received closer attention during the global health crisis of COVID-19, but limitations remain in the field of information technology in public health services: primary care, verification of scheduled exams, consultations, and care in vaccination rooms. This work deals with the 3TS Research and Innovation Project, which comprises the SUS+ module, enabling patient self-service, and the Imuna SUS module, which tracks received vaccines. This set of Information and Communication Technology (ICT) systems covers steps that go from the original conception of the information system to its arrival at SUS, adding value to the services provided by SUS.

5
  • Elpídio Cândido de Araújo Bisneto
  • Simultaneous Wireless Information and Power Transfer for Sensor Networks

  • Advisor: DANIEL MAURICIO MUNOZ ARBOLEDA
  • COMMITTEE MEMBERS:
  • OLYMPIO LUCCHINI COUTINHO
  • DANIEL COSTA ARAUJO
  • DANIEL MAURICIO MUNOZ ARBOLEDA
  • LEONARDO AGUAYO
  • Date: 14-Jun-2023


  • Abstract
  • The Simultaneous Wireless Information and Power Transfer (SWIPT) technology has emerged as a promising solution for simultaneous stable data and power transmission. SWIPT uses radio frequency (RF) signals to transport both data and energy, making it an attractive option for applications in challenging environments such as unmanned aerial vehicles (UAVs) and smart agriculture.

    However, SWIPT faces obstacles such as efficient long-distance power transmission and power constraints in the ISM frequency range. Additionally, a balance needs to be struck between data and energy transmission, considering energy consumption and robustness in data transmission and reception.

    In this context, this work aims to develop a SWIPT circuit based on Amplitude Shift Keying (ASK) modulation coupled with a Dual-Polarized (DP) architecture. Aspects such as energy harvesting efficiency, data transmission rate, and the trade-off between the two are considered. Furthermore, an optimization in the antenna array is proposed using bio-inspired algorithms to reduce side lobes.

    The obtained results show an RF-DC converter efficiency of 78.94% at 17 dBm input, and a data transmission rate of 7 kbps with a bit error rate (BER) of 10⁻³ at a signal-to-noise ratio (SNR) of 13 dB. The antenna array achieved a gain of 12.55 dBi in the vertical polarization and an average gain of 7.16 dBi in the horizontal polarization.

    Experimental data demonstrates that the proposed system is a viable SWIPT receiver for various monitoring applications using wireless sensor networks. The solution achieved good energy efficiency and the ability to receive high input powers. The antenna array allows for simultaneous data and energy transmission while ensuring isolation between the systems. This work highlights the Dual-Polarized architecture as promising, with potential optimizations and improvements in future works.
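The reported efficiency figure is a ratio of harvested DC power to incident RF power, with the input given in dBm. A small sketch of that arithmetic (the implied DC output below is derived from the two reported figures, not stated in the abstract):

```python
def dbm_to_mw(p_dbm):
    """Convert power in dBm to milliwatts: P[mW] = 10**(P[dBm] / 10)."""
    return 10 ** (p_dbm / 10)

def rf_dc_efficiency(p_in_dbm, p_dc_mw):
    """RF-DC conversion efficiency: harvested DC power over incident RF power."""
    return p_dc_mw / dbm_to_mw(p_in_dbm)

p_in_mw = dbm_to_mw(17)        # 17 dBm input is about 50.12 mW
p_dc_mw = 0.7894 * p_in_mw     # DC output implied by the reported 78.94%
```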

6
  • Rafael Pissinati de Souza
  • DEVELOPMENT OF A MEDICAL ASSISTIVE EQUIPMENT CAPABLE OF IMPROVING POWER CONTROL THROUGH FREQUENCY VARIATION WITH COMPLEX BIOIMPEDANCE FEEDBACK

  • Advisor: SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • COMMITTEE MEMBERS:
  • DANIEL MAURICIO MUNOZ ARBOLEDA
  • ICARO DOS SANTOS
  • Paulo Roberto dos Santos
  • SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • Date: 18-Jul-2023


  • Abstract
  • Problem: Despite the great advances in radiofrequency ablation modeling, a major difficulty stems from the intrinsic characteristics of human body tissues. Variations in the amount of water and fat in patients' body composition generate significant discrepancies between the models and the results obtained in ablations.

    Objective: To develop a general-purpose radiofrequency ablation device, capable of reading complex bioimpedance and adjusting the frequency and amplitude of the wave to improve impedance matching and increase the efficiency of the ablation process.

    Methodology: Development started from the circuit of the equipment already under development, but all circuits were analyzed and simulated, and some were redesigned. The equipment has a main power supply with voltage regulator circuits for 5 V and 12 V DC, a variable buck converter responsible for feeding the power circuit, a push-pull circuit to generate the alternating signal, current and voltage measurement circuits, and a phase-shift measurement circuit, so that the complex impedance of the tissue can be determined. To validate the operation of the equipment, ablation was performed in porcine and bovine liver tissue with power variation.

    Results: The ablation performed in bovine liver yielded a curve relating power variation to ablation area, showing that there is an optimal time for ablation at constant power. The tests on porcine liver showed consistency in the ablation area even though they were performed in different sectors. It was also possible to determine that implementing a controller can significantly increase the ablation area.

    Conclusion: The equipment was able to accommodate voltage and frequency control techniques to maximize power transfer and increase the ablation area. Because the equipment can communicate via Wi-Fi, complex control systems can be implemented without compromising its local processing.
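The phase-shift measurement described above yields the complex impedance directly from the voltage and current readings. A sketch with made-up measurement values (the equipment's actual scaling and calibration are not reproduced):

```python
import cmath
import math

def complex_impedance(v_peak, i_peak, phase_deg):
    """Tissue impedance from measured amplitudes and the voltage-current
    phase shift: Z = (V / I) * exp(j * phase)."""
    return (v_peak / i_peak) * cmath.exp(1j * math.radians(phase_deg))

# Hypothetical readings from the measurement circuits:
Z = complex_impedance(v_peak=50.0, i_peak=0.5, phase_deg=-30.0)
```

The real part is the resistive load seen by the generator; adjusting the drive frequency to shrink the reactive part is what improves the impedance matching described in the objective.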

7
  • KLÉRISTON SILVA SANTOS
  • ARFACTA - PROJECT FOR THE DEVELOPMENT AND IMPLEMENTATION OF A SIMULATION-BASED TUMOR ABLATION DEVICE WITH EX VIVO TESTING

  • Advisor: SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • COMMITTEE MEMBERS:
  • DANIEL MAURICIO MUNOZ ARBOLEDA
  • ICARO DOS SANTOS
  • Paulo Roberto dos Santos
  • SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • Date: 01-Aug-2023


  • Abstract
  • Radiofrequency Ablation (RFA) is a minimally invasive and effective procedure to treat Hepatocellular Carcinomas (HCC). During the procedure, an electrode is inserted into the affected area of the liver and radiofrequency generates intense heat at the tip of the electrode. This heat can destroy the tumor, leading to necrosis of the affected region. RFA has been widely used as an established technology in the medical field for the efficient treatment of HCC, offering advantages such as reduced hospital stays and fewer post-surgical complications. The procedure has some limitations, however, including an inadequate response for tumors larger than 3 cm, lack of customization of energy application protocols for each patient, low standardization inherent to the generating equipment used in the procedures, and lack of standardization in describing the dynamic behavior of the tissue during RFA. This work therefore determines parameters that support the design of new hardware capable of addressing these gaps. New hardware was developed based on current market standards, widely adjustable and dynamic, providing satisfactory control conditions and reliability in the power circuit. During the development, several aspects were considered, including the behavior of the output signal demand, control of the push-pull circuit through the duty cycle, and power supply through the DC-DC regulators, among others. The physical effects of radiofrequency on the liver were also studied. Electrical circuit simulations were performed to analyze the energy demand of each block of the RFA circuit, with the purpose of ensuring the reliability of the signals required by the electronic components in the new equipment.
These simulations allowed a better understanding of the circuit behavior while saving time and material resources, which helped in proposing a more efficient and precise design for the RFA. To confirm the simulated data, a prototype was proposed and assembled, enabling its use in tests and data collection with porcine liver ex vivo.

8
  • Pedro Aurelio Coelho de Almeida
  • Unsupervised learning model based on visual saliency for automatic segmentation of the lung region in X-ray images

  • Advisor: DIBIO LEANDRO BORGES
  • COMMITTEE MEMBERS:
  • DIBIO LEANDRO BORGES
  • HELIO PEDRINI
  • SANDERSON CESAR MACEDO BARBALHO
  • SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • Date: 25-Aug-2023


  • Abstract
  • Automatically dividing an image into regions of similar properties, known as segmentation, is a challenging task for computers, and it can avoid human errors induced by fatigue. One area that may greatly benefit from automatic segmentation methods is medical image analysis. Within it, chest X-rays are among the cheapest and most widely available types of medical images. Since they can be used for diagnosing lung-related diseases, they are an excellent target for automatic image segmentation methods. The current state-of-the-art in image segmentation relies on manual labels defined a priori to 'learn' the necessary features for this task. Deep unsupervised learning stands as an interesting alternative to supervised methods, since it only requires the input (e.g. the X-ray image) for training. Due to the visual nature of image segmentation and the standout aspect of the lungs on an X-ray, the combination of unsupervised learning and visual saliency (i.e. the attempt to model human visual attention) is tested for lung segmentation on X-ray images. The saliency method is compared to state-of-the-art supervised and unsupervised models designed for grayscale medical image segmentation. Results using the Dice, Jaccard, precision and recall scores on the JSRT and MC datasets indicate that the saliency method's performance gain over other unsupervised approaches is statistically significant. When compared to supervised models, the saliency method appears to be an adequate substitute given the flexibility achieved by independence from manual labels. Future work includes segmenting the cardiac area and identifying anomalies on X-ray images in an unsupervised fashion.
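The overlap scores used for evaluation are short calculations over binary masks. A minimal sketch on a hypothetical 2x3 mask pair:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

def jaccard(pred, truth):
    """Jaccard index (intersection over union): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, truth).sum()
    return inter / np.logical_or(pred, truth).sum()

pred  = np.array([[1, 1, 0], [0, 1, 0]])   # hypothetical predicted mask
truth = np.array([[1, 0, 0], [0, 1, 1]])   # hypothetical ground-truth mask
```

Precision and recall follow the same counting: true positives over predicted positives, and over actual positives, respectively.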

9
  • MARIANNYS RODRIGUEZ GASCA
  • ANALYSIS OF THE IMPACT OF TEAM SENIORITY ON COLLABORATIVE WORK PERFORMANCE: CASE STUDY OF A MECHANICAL LUNG VENTILATOR DEVELOPMENT TEAM

  • Advisor: SANDERSON CESAR MACEDO BARBALHO
  • COMMITTEE MEMBERS:
  • SANDERSON CESAR MACEDO BARBALHO
  • CARLOS HUMBERTO LLANOS QUINTERO
  • LUIS ANTONIO PASQUETTI
  • MARLY MONTEIRO DE CARVALHO
  • Date: 28-Aug-2023


  • Abstract
  • Mechatronics is a field in constant evolution, driven by advanced technologies, which combines precision mechanical engineering, electronic control and systems thinking to design products and manufacturing processes. This approach is applied in various areas, such as medicine, industry, automation and control systems. In this sense, work in an interdisciplinary team is fundamental to guarantee the integration of these disciplines and, therefore, high performance in mechatronic products. The characteristics of the team's composition are thus essential to optimize this process. In this context, this research proposed to evaluate a work-performance framework in virtual product development teams mediated by seniority (education and practical development experience). The framework was based on the input-process-output factor structure. Furthermore, the research analyzes a case study to evaluate how members with different seniority profiles performed during the project and how this impacted the project's results. The results show interesting trends regarding the positive influence of the senior profile on the interactions and performance of the input-process-output factors, with special emphasis on the strengthening of the processes throughout the project. As for the junior-profile members, the results show that they faced greater challenges and therefore performed at a lower level in the project.

10
  • TIAGO MARTINS DE BRITO
  • DEVELOPMENT OF A RADIOFREQUENCY ABLATION SYSTEM FOR CHARACTERIZING THE ROLL-OFF CURVE IN HEPATIC, PULMONARY, AND CARDIAC TISSUES

  • Advisor: SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • COMMITTEE MEMBERS:
  • SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • ANTONIO PIRATELLI FILHO
  • ADSON FERREIRA DA ROCHA
  • ALLISSON LOPES DE OLIVEIRA
  • Date: 07-Nov-2023


  • Abstract
  • In this master's study, we address the complexities and implications of cancer, an intricate disease with a significant impact on global health. Our focus is directed towards three critical medical conditions: liver cancer, lung cancer, and cardiac arrhythmia. The core of this project lies in the development of ARFACTA, an innovative biomedical device that uses radiofrequency ablation as a treatment. Additionally, we investigate the characterization of roll-off curves in post-mortem bovine and porcine tissue maintained in an ex-vivo state. This research meets the growing demand for affordable treatments within the Unified Health System (SUS), based on data that highlight the high cost of imported alternatives. The primary goal is to improve the quality of life of patients, reduce the costs associated with treatment, and alleviate the strain on healthcare systems, both public and private.

11
  • Angélica Kathariny de Oliveira Alves
  • Bond Graph modelling of the biophysical characteristics of liver tissue applied to the control of the radiofrequency ablation procedure

  • Advisor: SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • COMMITTEE MEMBERS:
  • SUELIA DE SIQUEIRA RODRIGUES FLEURY ROSA
  • ADSON FERREIRA DA ROCHA
  • ALLISSON LOPES DE OLIVEIRA
  • FÁTIMA MRUÉ
  • Date: 09-Nov-2023


  • Abstract
  • In this study, the challenge of treating Hepatocellular Carcinoma (HCC), a fatal liver cancer, is addressed. Recognising the limitations of radiofrequency ablation (RFA) due to biophysical variations in HCC-related liver tissue, the main objective is to develop an adapted Cole-Cole model using the Bond Graph (BG) modelling technique. This adaptation will incorporate the complex characteristics of liver tissue affected by HCC, including variations associated with liver cirrhosis, fat and blood vessels. The expected contributions of this study include improved understanding of the interaction between RFA and the target tissue, enabling detailed analyses of the system's stability and efficacy, potentially resulting in significant advances in treatment strategies for HCC patients, as well as providing a solid foundation for future multidisciplinary research.
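For reference, the standard Cole-Cole impedance that the Bond Graph adaptation starts from can be evaluated directly. The parameter values below are illustrative, not fitted liver values:

```python
import numpy as np

def cole_cole_impedance(freq_hz, r_inf, r0, tau, alpha):
    """Cole-Cole model: Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)**alpha),
    with alpha in (0, 1] broadening the dispersion relative to a pure Debye/RC
    relaxation."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    return r_inf + (r0 - r_inf) / (1 + (1j * w * tau) ** alpha)

# Hypothetical parameters for illustration:
freqs = np.array([1e2, 1e4, 1e6])
Z = cole_cole_impedance(freqs, r_inf=80.0, r0=500.0, tau=1e-5, alpha=0.8)
```

The impedance magnitude falls from about R0 at low frequency towards R_inf at high frequency, which is the dispersive behavior the BG adaptation must reproduce for cirrhotic, fatty and vascularized tissue.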

12
  • Yasmin Yunes Salles Gaudard
  • Using Model-Based Systems Engineering to implement Digital Twins in Manufacturing Systems.

  • Advisor: JONES YUDI MORI ALVES DA SILVA
  • COMMITTEE MEMBERS:
  • ANDREA CRISTINA DOS SANTOS
  • JONES YUDI MORI ALVES DA SILVA
  • REGIS KOVACS SCALICE
  • SANDERSON CESAR MACEDO BARBALHO
  • Date: 15-Dec-2023


  • Abstract
  • Historically, each Industrial Revolution emerged supported by technologies that increased efficiency and reduced costs. Industry 4.0, acclaimed by many as the 4th Industrial Revolution, is supported by a set of Enabling Technologies, such as Big Data, Robotics, Artificial Intelligence, Cloud Computing, Additive Manufacturing, Internet of Things, and Virtual/Augmented Reality, among others. When associating these technologies, there is a convergence to the Cyber-Physical Systems concept, and one of its implementations is the Digital Twin (DT). A DT is commonly defined as a virtual representation of a physical object or process, allowing ultra-realistic simulations of physical models, capturing historical data, and real-time processing/monitoring. Different levels of abstraction and detail offer varied views of industrial systems, equipment, processes, and products. Given its complexity and multi-technological composition, its implementation is not smooth. In this context, we propose using a methodology based on Model-Based Systems Engineering (MBSE) to support a model industry's technological evolution, aiming to implement its DT. Our model industry is an innovation laboratory comprising several sectors: Management, Design Office, Manufacturing (several processes), Stock, and Maintenance, among others. In this work, we focus on the area of Additive Manufacturing for external customers (private companies) and internal ones (academic projects), providing the modelling of the optimal path from the arrival of a work order to 3D printing and final delivery of parts. Using the Reference Architecture Model for Industry 4.0 (RAMI-4.0) and the ISO Digital Twin Framework for Manufacturing (ISO 23247), we modelled the system at several levels with the MBSE Arcadia method through the free tool Capella. As a result, we reached the Final System Architecture, indicating the actors involved in the scenario, their functionalities and the interaction flow among all the components.
This model allows the simulation and analysis in a single framework considering the complete system and not only isolated subsystems. It provides us with correlations among indicators of different abstraction levels, generating the basis for implementing a multi-scope DT. We can highlight that MBSE systems, even supported by formal methods of Systems Engineering, still cannot be considered mature enough. Consequently, its tools constantly evolve, and new applications favour its consolidation. Therefore, the principles used to construct this work result in an innovative methodology by allying MBSE's tooling support to implement Digital Twins for Manufacturing Systems efficiently.

Theses
1
  • Bruna Felippes Corrêa
  • PROPOSAL OF THE CUBE-4.0 READINESS MODEL FOR ENGINEERING COMPANIES THROUGH DIGITAL TRANSFORMATION CONTEXT

  • Advisor: SANDERSON CESAR MACEDO BARBALHO
  • COMMITTEE MEMBERS:
  • CARLOS HUMBERTO LLANOS QUINTERO
  • INA HEINE
  • JONES YUDI MORI ALVES DA SILVA
  • MARIA ELIZETE KUNKEL
  • SANDERSON CESAR MACEDO BARBALHO
  • Date: 16-Jan-2023


  • Abstract
  • This thesis proposes a new readiness model, called CUBE-4.0, to assess the current state of readiness and guide improvement strategies, in an innovative way, in engineering companies (industries) of any size, type, and level of readiness, in the digital transformation context. A systematic literature and theory review was conducted to select, with a Bibliographic Synonym Test (BST) and a specific 8-Step Search Flow (both created by the author), concrete information from 486 relevant studies found in 10 renowned databases, considering 63 existing maturity and readiness models and the scientific literature on this subject worldwide. Based on the shortcomings of the existing maturity and readiness models, and after pre-design and systematization, the CUBE-4.0 Readiness Model was developed as an essential contribution to this research stream. This includes its framework (dimensions, sub-dimensions, elements, readiness levels, radar chart, score calculation and data collection methodology), questionnaire and roadmap. Besides, the model provides a practical and easily applicable methodology, with 3 dimensions (X = Organizational Enabler, Y = Technological Enabler, and Z = Processes Maturity Enabler), 6 sub-dimensions, and 21 elements. Furthermore, it has a scale from 0 to 5 to assess the company's readiness level, defined and structured in an unprecedented way, besides considering, for the first time, maturity as an "input" enabler for the company readiness evaluation, rather than as an "output" as in all other existing models. Also, a CUBE-4.0 Questionnaire was developed, based on these concepts, to collect data and survey engineering firms about their readiness for digital transformation. Finally, with a CUBE-4.0 Roadmap based on the questionnaire results, the model can also help corporate boards guide strategies and plan improvements in their companies in this Industry 4.0 (I4.0) age.
After presenting some deductive hypotheses, a pre-test with the CUBE-4.0 Questionnaire and CUBE-4.0 Roadmap was applied in six steps, whose satisfactory results are presented in this thesis. The CUBE-4.0 Model was then reviewed and applied in three renowned engineering companies, enabling its complete validation using theoretical and practical methods. Finally, this thesis presents the main discussion of the results, including the falsifiability of the hypotheses, concluding that the CUBE-4.0 Model is complete, useful, inexpensive, and efficient, and could help companies improve their readiness through the digital transformation context.
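As a purely hypothetical illustration of a cube-style score aggregation (the thesis defines its own score-calculation methodology, which is not reproduced here), element scores on the 0-5 scale could be averaged per dimension and combined into an overall position:

```python
def dimension_score(element_scores):
    """Average of the 0-5 element scores within one dimension (a hypothetical
    aggregation, not the thesis's actual formula)."""
    for s in element_scores:
        assert 0 <= s <= 5, "elements are scored on the model's 0-5 scale"
    return sum(element_scores) / len(element_scores)

# Hypothetical questionnaire results for the three CUBE-4.0 dimensions:
scores = {
    "X_organizational":   dimension_score([3, 4, 2, 3]),
    "Y_technological":    dimension_score([2, 2, 3]),
    "Z_process_maturity": dimension_score([4, 3, 3, 4]),
}
overall = sum(scores.values()) / len(scores)
```

The (X, Y, Z) triple locates a company inside the cube; the roadmap step would then target the lowest-scoring sub-dimensions.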
2
  • Maria de Fátima Kallynna Bezerra Couras
  • Coupled Nested Tensor Decomposition applied to Dual-Polarized MIMO Communication Systems

  • Advisor: JOAO PAULO JAVIDI DA COSTA
  • COMMITTEE MEMBERS:
  • JOAO PAULO JAVIDI DA COSTA
  • JOSE ALFREDO RUIZ VARGAS
  • RICARDO ZELENOVSKY
  • WALTER DA CRUZ FREITAS JUNIOR
  • TARCISIO FERREIRA MACIEL
  • Date: 27-Jun-2023


  • Abstract
  • In recent years, massive Multiple-Input-Multiple-Output (MIMO) systems have been the subject of intense research due to their great potential to provide energy efficiency, data rate gains and diversity through the transmission and reception of multiple versions of the same signal for multiusers. However, its performance benefits depend heavily on the channel estimate at the base station and the way the symbol arrays are transmitted. For modeling and estimating received signals, channel modeling and information transmission (symbol matrices) are very important. In estimating the MIMO channel, it is important to accurately estimate all the parameters that model the channel, such as azimuth and elevation angles, path directions, polarization and amplitude parameters on both sides of the link. For the transmitted symbols it is important to estimate the information received as accurately as possible, for this, in the transmission of the symbols in most cases we use spatial, temporal or frequency coding to increase the diversity of the systems. These codings generally improve the performance, the reliability of the systems and a better estimate of the received signals. In this context, in recent years, semi-blind receivers based on tensor decompositions for MIMO massive systems have been extensively studied. These receivers allow us a better estimate of the channel and symbols without any information about the channel. In some cases, in addition to estimating the channel, it is possible to estimate the channel parameters. This thesis presents received signal model based on tensor decompositions that combine a extension of the MKronST coding and 5th-order channel tensor to transmit the symbols. The coding extension is based on the combination of the tensor space time (TST) coding and the multiple Kronecker product of symbol matrices, called TST-MSMKron coding. 
The channel assumes a uniform rectangular array (URA) at both the transmitter and the receiver, which allows us to model the channel as a tensor. More specifically, the theoretical contributions of this thesis center on the proposal of new semi-blind receivers to jointly estimate the symbol matrices, the channel and the channel parameters without prior knowledge of the channel or its parameters. What makes this semi-blind estimation possible is the use of the TST-MSMKron coding to transmit the signal. In the first part of this thesis, a multidimensional CX decomposition for tensors is proposed and applied to data reconstruction; based on the CX model, an algorithm is proposed to estimate and reconstruct the data tensor. In the second part, the TST-MSMKron coding is presented for massive MIMO systems, where a model of the received signal is proposed that combines a 5th-order channel with the TST-MSMKron coding. This system allows us to model the received signal as a coupled nested-Tucker-PARAFAC decomposition. In addition, two-step semi-blind receivers are proposed to jointly estimate the symbols, the channel and the channel parameters. Conditions related to the identifiability and the computational complexity of the proposed algorithms are also discussed in both parts of the thesis, and in each part results from Monte Carlo simulations are provided to evaluate the performance of the proposed algorithms. The results show the efficiency of the algorithms in the reconstruction of the data and in the joint estimation of the symbols, channel and channel parameters of the system, respectively.
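The multiple Kronecker product of symbol matrices at the heart of the TST-MSMKron coding can be illustrated with a short sketch (a toy NumPy example with hypothetical matrix sizes and a QPSK-like alphabet, not the thesis' transmitter):

```python
import numpy as np
from functools import reduce

def msmkron(symbol_matrices):
    """Multiple Kronecker product of symbol matrices.

    Illustrative only: the effective symbol matrix is
    S_1 (x) S_2 (x) ... (x) S_K, and it is this Kronecker structure
    that semi-blind receivers exploit to separate the factors.
    """
    return reduce(np.kron, symbol_matrices)

rng = np.random.default_rng(0)
# Three small symbol matrices drawn from a QPSK-like alphabet (toy sizes)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
S = [rng.choice(qpsk, size=(2, 3)) for _ in range(3)]

S_eff = msmkron(S)
print(S_eff.shape)  # (8, 27): row and column counts multiply across factors
```

The factor dimensions multiply, so a few small symbol matrices generate a large structured effective codeword, which is what gives the coding its diversity.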

3
  • Marlon Marques Soudré
  • GPU-Based Embedded Monitoring for Fault Detection and Diagnosis in Time-Varying Nonlinear Systems - A Wind Turbine Case Study

  • Advisor: CARLOS HUMBERTO LLANOS QUINTERO
  • COMMITTEE MEMBERS:
  • ALVARO BERNAL NOROÑA
  • CARLOS HUMBERTO LLANOS QUINTERO
  • EDWARD DAVID MORENO ORDONEZ
  • JONES YUDI MORI ALVES DA SILVA
  • LEANDRO DOS SANTOS COELHO
  • Date: 06-Dec-2023


  • Abstract
  • Monitoring nonlinear, time-varying systems to detect and diagnose failures is not a trivial task, especially when applied to embedded systems and their constraints. Despite the challenges, the development and popularization of technologies such as the Internet of Things (IoT), machine learning tools and the edge computing paradigm have increased interest in the topic. However, there are still few works in this scenario, leaving a gap in terms of solutions that are viable to implement. In this sense, the present work proposes GPU-based contributions to help fill this gap in the literature. Firstly, a GPU-based strategy for embedded systems is proposed to identify nonlinear, black-box systems. More precisely, a parallelization strategy for the Forward Regression Orthogonal Least Squares (FROLS) algorithm was developed to select parsimonious linear and nonlinear autoregressive models. Then, solutions for fault detection and classification are presented, both mapped onto GPUs and based on the analysis of the parameters of the identified model, implemented with control-chart tools and Support Vector Machines (SVM), respectively. Finally, the aforementioned solutions were combined with a recursive estimation strategy for the model parameters within a moving window (SWRLS), in order to establish a monitoring algorithm for fault detection and diagnosis (SWRLS/SVM/FROLS). The solutions proved to be viable when validated with real data from a wind turbine blade subjected to temperature variation and to failures resulting from ice accumulation and crack formation at different scales. It is worth highlighting that this case study is current and relevant, combining the characteristics of a time-varying nonlinear system, due to failures and environmental factors, with the edge computing paradigm, essential especially for offshore installations.
    In this sense, the chosen case study exercises the proposed solution in its entirety, showing the feasibility of GPU-based embedded monitoring for detecting and diagnosing faults in time-varying nonlinear systems.
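As a rough sketch of the kind of term selection FROLS performs, the following is a simplified serial version (not the GPU parallelization developed in the work): at each step the candidate regressor with the largest Error Reduction Ratio after Gram-Schmidt orthogonalization is added to the model.

```python
import numpy as np

def frols(Phi, y, n_terms):
    """Greedy FROLS term selection (simplified sketch).

    At each step, pick the candidate regressor with the largest Error
    Reduction Ratio (ERR) after orthogonalizing it against the
    regressors already selected.
    """
    selected, err, Q = [], [], []
    for _ in range(n_terms):
        best_i, best_err, best_q = -1, -1.0, None
        for i in range(Phi.shape[1]):
            if i in selected:
                continue
            q = Phi[:, i].astype(float).copy()
            for qs in Q:                      # Gram-Schmidt step
                q -= (qs @ Phi[:, i]) / (qs @ qs) * qs
            denom = (q @ q) * (y @ y)
            e = (q @ y) ** 2 / denom if denom > 0 else 0.0
            if e > best_err:
                best_i, best_err, best_q = i, e, q
        selected.append(best_i)
        err.append(best_err)
        Q.append(best_q)
    return selected, err

# Toy identification problem: y depends on columns 0 and 2 only
rng = np.random.default_rng(1)
Phi = rng.standard_normal((200, 5))
y = 2.0 * Phi[:, 0] - 1.5 * Phi[:, 2]
sel, err = frols(Phi, y, 2)
print(sorted(sel))  # expected to recover columns 0 and 2
```

The inner loop over candidate regressors is independent per candidate, which is exactly the structure a GPU parallelization can exploit.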

2022
Dissertations
1
  • LAIS SOARES VIEIRA
  • Production and characterization of hydroxyapatite porous ceramic scaffolds doped with CoFe2O4

  • Advisor: ALYSSON MARTINS ALMEIDA SILVA
  • COMMITTEE MEMBERS:
  • ALYSSON MARTINS ALMEIDA SILVA
  • DANIEL MONTEIRO ROSA
  • EDSON PAULO DA SILVA
  • PRISCILA LEMES
  • Date: 29-Aug-2022


  • Abstract
  • The general objective of this work is to produce ceramic scaffolds based on hydroxyapatite doped with cobalt ferrite and to evaluate the influence of the dopant on their microstructural characteristics and porous structure. The ceramic scaffolds were produced by the freeze-casting process, which creates a matrix of interconnected, unidirectional pores. The solids concentration used for the production of the scaffolds by freeze-casting was 12.5 vol%, with camphene as the solvent. Cobalt ferrite was produced by the sol-gel route and was later used to dope the hydroxyapatite at 2, 6 and 10 vol%. The cooling rate, the dopant content, the sintering temperature and the compressive strength of the scaffolds were analyzed. The porosity was measured by Archimedes' principle. X-ray diffraction analysis showed the formation of β-TCP for pure hydroxyapatite and at lower cobalt ferrite contents. SEM images showed that higher concentrations of cobalt ferrite favored pore growth. Cobalt ferrite contributed to a greater densification of the material, raising the compressive strength from 0.64 MPa to 2.07 MPa in scaffolds cooled to 35 °C and from 0.85 MPa to 2.50 MPa in scaffolds cooled in liquid nitrogen.
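The Archimedes porosity measurement reduces to a simple ratio of three weighings; a minimal sketch with hypothetical figures (not measurements from the study):

```python
def archimedes_porosity(dry_g, submerged_g, saturated_g):
    """Apparent (open) porosity via Archimedes' principle.

    porosity = (saturated - dry) / (saturated - submerged),
    i.e. open-pore volume over bulk volume; the density of the
    immersion liquid cancels out of the ratio.
    """
    return (saturated_g - dry_g) / (saturated_g - submerged_g)

# Hypothetical scaffold weighings in grams
p = archimedes_porosity(dry_g=1.20, submerged_g=0.70, saturated_g=1.80)
print(f"{p:.1%}")  # 0.60 / 1.10 ≈ 54.5%
```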

2
  • MARIO ANDRÉS PASTRANA TRIANA
  • Behavior Based Control Applied to Mobile Robotics Using Reconfigurable Hardware

  • Advisor: DANIEL MAURICIO MUNOZ ARBOLEDA
  • COMMITTEE MEMBERS:
  • DANIEL MAURICIO MUNOZ ARBOLEDA
  • CARLOS HUMBERTO LLANOS QUINTERO
  • JONES YUDI MORI ALVES DA SILVA
  • LEANDRO DOS SANTOS COELHO
  • Date: 15-Sep-2022


  • Abstract
  • The navigation of mobile robots is a research area with diverse technological challenges, from mechanical development and sensing to computational processing. In particular, the control of mobile robots involves complex systems whose modeling requires experience in control systems.
     
    The learning-from-demonstration methodology allows controllers to be designed by imitating behaviors. This area is gaining strength because of its ability to solve complex control problems and its ease of implementation. However, the methodology requires embedded systems with good computational performance and low energy consumption. In this sense, the community has not yet explored reconfigurable hardware to accelerate the algorithms involved in the training process.
     
    This work describes the development of the learning-from-demonstration methodology on embedded hardware applied to mobile robotics. An adaptive single-layer neural network, trained with the particle swarm optimization algorithm, was used for the learning process. This research also includes the development of a mobile robotic platform, called Maria. Three micro-behaviors were taught (move forward, rotate clockwise, and rotate counter-clockwise) and 16 experiments were performed for each one, in order to obtain statistical results for each behavior. In addition, an experimental protocol was carried out to test the robot in unknown scenarios, collecting further statistical results.
     
    The trajectory error for micro-behavior 2 was 4830 cm² with 100% success; for micro-behavior 3 it was 6872.4 cm² with 75% success and 25% failure; and the accumulated distance error for micro-behavior 1 was 202 cm with 81.25% success and 18.75% failure.
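A minimal sketch of the particle swarm optimizer used in the learning stage, applied to fitting the weights of a tiny single-layer network (illustrative parameters and a toy target; not the embedded implementation described above):

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO (sketch of the optimizer family used
    for training; the coefficients here are conventional defaults)."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration weights
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()

# Toy use: recover the weights of a 1-layer net y = tanh(w . x)
target_w = np.array([0.5, -0.3])
X = np.random.default_rng(1).standard_normal((50, 2))
y = np.tanh(X @ target_w)
best_w, best_c = pso(lambda w_: np.mean((np.tanh(X @ w_) - y) ** 2), dim=2)
print(best_c)  # should be near zero
```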
3
  • Jacó Cirino Gomes
  • A few-shot learning framework for classifying insects in agriculture using few samples

  • Advisor: DIBIO LEANDRO BORGES
  • COMMITTEE MEMBERS:
  • DIBIO LEANDRO BORGES
  • FABRIZZIO ALPHONSUS ALVES DE MELO NUNES SOARES
  • FLÁVIO LEMES FERNANDES
  • JOSE MAURICIO SANTOS TORRES DA MOTTA
  • Date: 16-Sep-2022


  • Abstract
  • The ability to learn from few visual samples is one of the main challenges in the field of machine learning. Deep learning has been successful in large-dataset applications, but models tend to degrade drastically when dealing with small datasets. Few-shot learning (FSL) is a recent learning approach suited to dealing with few samples; however, current solutions still seek improvements in the image feature extraction process to differentiate images effectively. In this work, we conducted a study to provide an improved metric-based FSL classification model. Starting with a literature review, we distinguish the main state-of-the-art models and describe the main approaches with application to insect classification. We explored the use of Bregman divergences as similarity measures in FSL and investigated the parametric training conditions that minimize classification errors at test time. The proposed model includes two main modules: 1) a low-, middle-, and high-level image feature extraction and fusion mechanism (FMC), with an exclusive method of merging these features to generate information-rich representative class vectors, and 2) relative entropy as a promising alternative for comparing the vectors generated by the FMC in tasks with few data. The model was tested on two challenging agricultural insect datasets proposed in this work: the first containing insect pests separated by stage of maturity, and the second covering pests and beneficial insects of maize crops. An extensive set of experiments was carried out for the divergence analysis and validation of the proposed model. Validation was performed by comparing our results to those of convolutional neural networks consolidated in the literature. The results showed that the presented model improved accuracy by up to 3.62% and 2% in 1-shot and 5-shot classification tasks, respectively, compared to the traditional global image feature extraction model. Furthermore, the model outperformed the ResNet50, VGG16, and MobileNetV2 networks, with the advantage of reducing the number of learnable parameters by up to 99%. Few-shot learning is thus a relevant approach for research on insect classification in agriculture, serving as an alternative for developing fast and accurate solutions for application in the crop field.
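The use of relative entropy as the comparison measure can be sketched as a nearest-prototype rule over normalized feature vectors (toy vectors and hypothetical class names for illustration; not the FMC features themselves):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy D_KL(p || q) for probability-like vectors."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def nearest_prototype(query, prototypes):
    """Assign the class whose prototype minimizes D_KL(query || proto).

    Sketch of metric-based few-shot classification with relative
    entropy as the similarity; in an FSL setting the prototypes would
    be per-class means of (normalized) support-set feature vectors.
    """
    return min(prototypes, key=lambda c: kl_divergence(query, prototypes[c]))

# Hypothetical 3-dimensional class prototypes
protos = {
    "aphid":  np.array([0.7, 0.2, 0.1]),
    "beetle": np.array([0.1, 0.2, 0.7]),
}
q = np.array([0.6, 0.3, 0.1])
print(nearest_prototype(q, protos))  # aphid: smaller KL divergence
```

KL divergence is one member of the Bregman divergence family; swapping in another Bregman divergence only changes `kl_divergence`, not the decision rule.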

4
  • Antonio Henrique Duarte
  • LIFE CYCLE COST ANALYSIS APPLIED TO THE DEFENSE SECTOR: A MULTIPLE CASE STUDY TO IDENTIFY THE USE OF LCC IN THE ARMY STRATEGIC PROGRAMS

  • Advisor: SANDERSON CESAR MACEDO BARBALHO
  • COMMITTEE MEMBERS:
  • ALENCAR SOARES BRAVO
  • CARLOS HUMBERTO LLANOS QUINTERO
  • JOSÉ LUÍS GARCIA HERMOSILLA
  • SANDERSON CESAR MACEDO BARBALHO
  • Date: 25-Nov-2022


  • Abstract
  • Life cycle cost (LCC) is considered internationally one of the best instruments for evaluating investments in military equipment because, according to its holistic precepts, it estimates all the costs associated with a product/system across its stages of concept, development, production, usage, support and deactivation. Despite this recognition, however, its application is still poorly studied. Seeking to understand the reasons for this and to contribute to the existing knowledge base on the subject, this dissertation initially carried out a quantitative and qualitative bibliometric study with data from the Scopus and WoS databases, identifying the current stage of research related to the theme, with emphasis on the defense sector worldwide. It revealed that, although knowledge is still incipient worldwide, a few developed countries apply LCC in practice in their armed forces; there, LCC is applied not only in the initial phase of development or acquisition of a product/system, for selecting the best investment proposal, but also strongly in integrated development, to ensure the inclusion of factors that will provide competitiveness, reliability, low-cost operation and sustainability throughout the life cycle of future military products/systems. Finally, to deepen the study of the bibliometric findings, a case study was carried out with six Strategic Programs of the Army, which identified a repetition of the pattern observed in the world context, consistent with Brazil's position in the international research.
    The case study also allowed a better understanding of the possible factors behind the incipient application of LCC to the Strategic Programs, which, by induction, may explain the repetition at the world level: the complexity of understanding and calculating LCC, the need for continuous investment in training, and the need for specific structures with specialized personnel to support the LCC calculation.
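A textbook net-present-value formulation of LCC across the stages listed above can be sketched as follows (hypothetical figures; the specific cost model used by the Army programs in the study is not specified here):

```python
def life_cycle_cost(acquisition, annual_operation, annual_support,
                    disposal, years, discount_rate):
    """Net-present-value life cycle cost (textbook formulation).

    Sums the acquisition cost, the discounted yearly operation and
    support costs, and the discounted end-of-life disposal cost.
    """
    pv = acquisition
    for t in range(1, years + 1):
        pv += (annual_operation + annual_support) / (1 + discount_rate) ** t
    pv += disposal / (1 + discount_rate) ** years
    return pv

# Hypothetical figures (in millions): acquire for 100, operate for
# 10/yr plus 5/yr support over 20 years, dispose for 8, discounted at 5%
lcc = life_cycle_cost(100, 10, 5, 8, years=20, discount_rate=0.05)
print(round(lcc, 1))
```

Even this toy version shows why usage and support usually dominate the total: twenty discounted years of operation outweigh the acquisition price.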

Theses
1
  • Ana Paula Gonçalves Soares de Almeida
  • AN EXPLORATORY ASSESSMENT OF MULTISTREAM DEEP NEURAL NETWORK FUSION: DESIGN AND APPLICATIONS

  • Advisor: FLAVIO DE BARROS VIDAL
  • COMMITTEE MEMBERS:
  • FLAVIO DE BARROS VIDAL
  • ALEXANDRE RICARDO SOARES ROMARIZ
  • LUCIANO REBOUÇAS DE OLIVEIRA
  • RICARDO DA SILVA TORRES
  • Date: 01-Jul-2022


  • Abstract
  • Machine-learning methods depend heavily on how well the selected feature extractor can represent the raw input data. Nowadays, we have more data and more computational capacity to deal with it. With Convolutional Neural Networks (CNNs), we have networks that are easier to train and generalize much better than usual. However, a good amount of essential features is discarded in this process, even when using a powerful CNN. Multistream Convolutional Neural Networks (M-CNNs) can process more than one input using separate streams and can be designed using any classical CNN architecture as a base. The use of M-CNNs generates more features and thus improves the overall outcome. This work explored M-CNN architectures and how the stream signals behave during processing, arriving at a novel M-CNN cross-fusion strategy. The new module is first validated on a standard dataset, CIFAR-10, and compared with the corresponding networks (single-stream CNN and late-fusion M-CNN). Early results in this scenario showed that our adapted model outperformed all the abovementioned models by at least 28%. Expanding the tests, we used the backbones of former state-of-the-art image classification networks and additional datasets to investigate whether the technique can put these designs back in the game. On the NORB dataset, we showed that accuracy can increase by up to 63.21% compared to basic M-CNN structures. Varying our applications, the mAP@75 on the BDD100K multi-object detection and recognition dataset improved by 50.16% compared to the unadapted version, even when trained from scratch. The proposed fusion demonstrated robustness and stability, even when distractors were used as inputs. While our goal is to reuse previous state-of-the-art architectures with few modifications, we also expose the disadvantages of the explored strategy.

2
  • Ana Paula Gonçalves Soares de Almeida
  • AN EXPLORATORY ASSESSMENT OF MULTISTREAM DEEP NEURAL NETWORK FUSION: DESIGN AND APPLICATIONS

  • Advisor: FLAVIO DE BARROS VIDAL
  • COMMITTEE MEMBERS:
  • ALEXANDRE RICARDO SOARES ROMARIZ
  • CAMILO CHANG DOREA
  • FLAVIO DE BARROS VIDAL
  • LUCIANO REBOUÇAS DE OLIVEIRA
  • RICARDO DA SILVA TORRES
  • Date: 01-Jul-2022


  • Abstract
  • Machine-learning methods depend heavily on how well the selected feature extractor can represent the raw input data. Nowadays, we have more data and more computational capacity to deal with it. With Convolutional Neural Networks (CNNs), we have networks that are easier to train and generalize much better than usual. However, a good amount of essential features is discarded in this process, even when using a powerful CNN. Multistream Convolutional Neural Networks (M-CNNs) can process more than one input using separate streams and can be designed using any classical CNN architecture as a base. The use of M-CNNs generates more features and thus improves the overall outcome. This work explored M-CNN architectures and how the stream signals behave during processing, arriving at a novel M-CNN cross-fusion strategy. The new module is first validated on a standard dataset, CIFAR-10, and compared with the corresponding networks (single-stream CNN and late-fusion M-CNN). Early results in this scenario showed that our adapted model outperformed all the abovementioned models by at least 28%. Expanding the tests, we used the backbones of former state-of-the-art image classification networks and additional datasets to investigate whether the technique can put these designs back in the game. On the NORB dataset, we showed that accuracy can increase by up to 63.21% compared to basic M-CNN structures. Varying our applications, the mAP@75 on the BDD100K multi-object detection and recognition dataset improved by 50.16% compared to the unadapted version, even when trained from scratch. The proposed fusion demonstrated robustness and stability, even when distractors were used as inputs. While our goal is to reuse previous state-of-the-art architectures with few modifications, we also expose the disadvantages of the explored strategy.

3
  • Gustavo Silva Vaz Gontijo
  • Simulation of water cone and gas cone in horizontal and vertical oil wells using the Boundary Element Method

  • Advisor: EUGENIO LIBORIO FEITOSA FORTALEZA
  • COMMITTEE MEMBERS:
  • ANDRES FELIPE GALVIS RODRIGUEZ
  • ANTONIO PIRATELLI FILHO
  • EUGENIO LIBORIO FEITOSA FORTALEZA
  • LUCAS SILVEIRA CAMPOS
  • MANOEL PORFIRIO CORDAO NETO
  • Date: 19-Aug-2022


  • Abstract
  • This thesis presents the application of the Boundary Element Method (BEM) to the development of simulators for the study of water and gas coning phenomena in horizontal and vertical oil wells. The scope is the two-dimensional and axisymmetric numerical modeling of the phenomena using the Boundary Element Method with sub-regions and with the interface between fluids modeled as a moving boundary. The text provides a review of these phenomena and their impact on the petroleum industry, in order to situate the reader regarding the motivation for the choice of application. Two potential flow models are used, single-phase and two-phase. A third model, single-phase diffusive flow, is also applied. In addition to these, single-phase and two-phase models are adopted for potential flow in axisymmetric domains. All the mathematical treatment involved in deriving the governing equations of the models is presented, along with the mathematical development of several Boundary Element Method formulations: the formulation for two-dimensional potential problems, the Dual Reciprocity formulation for problems governed by the diffusivity equation in a two-dimensional domain, the formulation for potential problems in a three-dimensional axisymmetric domain, and the sub-region formulation for problems over piecewise homogeneous domains. An alternative for modeling the horizontal oil well in a two-dimensional domain is proposed, using a set of point sinks to represent its perimeter. This approach allows a satisfactory analysis of the behavior of the fluid interface, even after it has touched the well. The main aspects of the simulators' implementation are documented so that this text can be used as a reference for similar implementations. The determination and application of the boundary conditions for all models, including the compatibility and equilibrium conditions at the fluid interface, are presented.
    The proposed simulators have been developed and validated against analytical, numerical, and experimental results.

4
  • Ahmed Abdelfattah Saleh Sherif
  • Language Independent Text Summarizer and Deep Self-Organizing Cube

  • Advisor: LI WEIGANG
  • COMMITTEE MEMBERS:
  • DANIEL MAURICIO MUNOZ ARBOLEDA
  • DANIEL OLIVEIRA CAJUEIRO
  • LI WEIGANG
  • PAULO CESAR GUERREIRO DA COSTA
  • PENG WEI
  • Date: 01-Dec-2022


  • Abstract
  • The rapid development of the Internet and the massive, exponential growth of web textual data have brought considerable challenges to tasks related to text management, classification and information retrieval. In this thesis, we propose two novel domain-agnostic models, aiming to improve generalization performance in the fields of Natural Language Processing (NLP) and Deep Learning (DL), to address the challenges imposed by the massive growth in data and the need for proper information retrieval and knowledge inference. Both models adopt straightforward, yet efficient, approaches that depend on extracting intrinsic features of the modeled data in order to perform their intended task in a totally domain-agnostic manner. The performance evaluation strategy applied in this thesis tests each model on benchmark datasets and compares the results against those obtained by standard models; the proposed models are also challenged against state-of-the-art models presented in the literature for the same benchmark datasets. In the NLP domain, the majority of text summarization techniques in the literature depend, in one way or another, on language-dependent pre-structured lexicons, databases, taggers and/or parsers. Such techniques require prior knowledge of the language of the text being summarized. In this thesis, we propose a novel extractive text summarization tool, the UnB Language Independent Text Summarizer (UnB-LITS), which is capable of performing text summarization in a language-agnostic manner. The proposed model depends on intrinsic characteristics of the text being summarized rather than its language and thus eliminates the need for language-dependent lexicons, databases, taggers or parsers.
    Within this tool, we develop an innovative way of coding the shapes of text elements (words, n-grams, sentences and paragraphs), in addition to proposing language-independent algorithms that are capable of normalizing words and performing relative stemming or lemmatization. The proposed algorithms and the Shape-Coding routine enable UnB-LITS to extract intrinsic features of document elements and score them statistically to extract a representative summary independent of the document language. The proposed model was applied to English and Portuguese benchmark datasets, and the results were compared to twelve state-of-the-art approaches presented in recent literature. Moreover, the model was applied to French and Spanish news datasets, and the results were compared to those obtained by standard commercial summarization tools. UnB-LITS outperformed all the state-of-the-art approaches as well as the commercial tools in all four languages while maintaining its language-agnostic nature. On the other hand, the multi-dimensional classification (MDC) task can be considered the most comprehensive of all classification tasks, as it joins multiple class spaces and their multiple class members into a single compound classification problem. The challenges in MDC arise from possible class dependencies across different class spaces, as well as from the imbalance of labels in training datasets due to the lack of all possible combinations. In this thesis, we propose a straightforward, yet efficient, MDC deep learning classifier, named the "Deep Self-Organizing Cube" (DSOC), that can model dependencies among classes in multiple class spaces while consolidating its ability to classify rare combinations of labels. DSOC is formed of two n-dimensional components, namely the Hypercube Classifier and the multiple DSOC Neural Networks connected to the hypercube.
    The multiple-neural-network component is responsible for feature selection and the segregation of classes, while the Hypercube Classifier is responsible for creating the semantics among multiple class spaces and accommodating the model for rare-sample classification. DSOC is a multiple-output learning algorithm that successfully classifies samples across all class spaces simultaneously. To challenge the proposed DSOC model, we conducted an assessment on seventeen benchmark datasets covering the four types of classification tasks: binary, multi-class, multi-label and multi-dimensional. The obtained results were compared to four standard classifiers and eight competitive state-of-the-art approaches reported in the literature. DSOC achieved superior performance over the standard classifiers as well as the state-of-the-art approaches in all four classification tasks. Moreover, in terms of the Exact Match accuracy metric, DSOC outperformed all state-of-the-art approaches in 77.8% of the cases, which reflects its superior ability to model dependencies and successfully classify rare samples across all dimensions simultaneously.
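The idea of coding the "shape" of text elements and scoring sentences purely statistically can be illustrated with a toy sketch (a hypothetical scheme for illustration; not the actual UnB-LITS Shape-Coding routine):

```python
from collections import Counter

def shape_code(word):
    """Toy language-agnostic 'shape' code: map each character to
    uppercase (U), lowercase (l), digit (d) or other (o), then collapse
    consecutive repeats. No lexicon, tagger or parser is involved."""
    classes = "".join(
        "U" if c.isupper() else "l" if c.islower() else
        "d" if c.isdigit() else "o"
        for c in word
    )
    collapsed = []
    for c in classes:
        if not collapsed or collapsed[-1] != c:
            collapsed.append(c)
    return "".join(collapsed)

def score_sentences(sentences):
    """Score each sentence by how frequent its word shapes are across
    the whole document: a purely statistical, language-independent
    signal of representativeness."""
    shapes = Counter(shape_code(w) for s in sentences for w in s.split())
    return [sum(shapes[shape_code(w)] for w in s.split()) for s in sentences]

doc = ["Brasília hosts UnB.", "The summarizer needs no lexicon.",
       "It scores 3 sentences statistically."]
print(score_sentences(doc))
```

An extractive summarizer in this spirit would keep the highest-scoring sentences, with no knowledge of the document's language.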
