Dissertations/Theses

Click here to access the files directly from the UnB Digital Library of Theses and Dissertations (Biblioteca Digital de Teses e Dissertações da UnB)

2024
Dissertations
1
  • Helena Santos Brandão
  • A study on non-parametric estimation of differential entropy for the analysis of financial data

  • Advisor: RAUL YUKIHIRO MATSUSHITA
  • COMMITTEE MEMBERS:
  • RAUL YUKIHIRO MATSUSHITA
  • ANTONIO EDUARDO GOMES
  • EDUARDO YOSHIO NAKANO
  • ERALDO SERGIO BARBOSA DA SILVA
  • Date: 27-Mar-2024


  • Abstract
  • In financial risk, the conventional approach has typically linked risk to the variance of a variable, such as the return of a stock or portfolio. By recognizing the constraints of this conventional method and the need for various risk metrics, alternative measures have been developed to address downside risk or extreme outcomes specifically. One such complementary metric is the uncertainty measure, which enables us to capture and describe different aspects of risk, going beyond traditional notions of variability alone. Obtaining a robust estimator with desirable properties for entropy is crucial for its practical application. In particular, our study aims to conduct a comprehensive review of non-parametric differential entropy estimators and then propose adjustments regarding the choice and optimization of their use to find an estimator with convenient properties for application in financial data, which are often characterized by distributions with heavy tails. We also conducted real-data applications to illustrate the use of the proposed measures.
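
The dissertation surveys non-parametric differential entropy estimators. As an editorial illustration only (not the estimator proposed in the work), the sketch below implements Vasicek's classical spacing estimator in Python; the sample, the spacing parameter m, and the function name are illustrative choices.

```python
import numpy as np

def vasicek_entropy(x, m=None):
    """Vasicek's (1976) spacing estimator of differential entropy.

    x : 1-D sample; m : spacing window (defaults to round(sqrt(n))).
    Boundary order statistics are clamped, as in the original proposal.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if m is None:
        m = max(1, int(round(np.sqrt(n))))
    upper = x[np.minimum(np.arange(n) + m, n - 1)]  # X_(i+m), clamped at X_(n)
    lower = x[np.maximum(np.arange(n) - m, 0)]      # X_(i-m), clamped at X_(1)
    return np.mean(np.log(n / (2 * m) * (upper - lower)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.standard_normal(10_000)
    # True differential entropy of N(0,1) is 0.5*log(2*pi*e) ~ 1.4189
    print(vasicek_entropy(sample))
```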

2
  • Pedro Henrique Monteiro Moreira
  • "Ordinal Logistic Regression Models Application to Convalescent plasma therapy in COVID-19"

  • Advisor: JOANLISE MARCO DE LEON ANDRADE
  • COMMITTEE MEMBERS:
  • JOANLISE MARCO DE LEON ANDRADE
  • ANDRE LUIZ FERNANDES CANCADO
  • EDUARDO YOSHIO NAKANO
  • CARLA ALMEIDA VIVACQUA
  • Date: 24-May-2024


  • Abstract
  • Objectives: To evaluate the response to COVID-19 treatment through monitored convalescent plasma using ordinal logistic regression models. To determine whether the effectiveness of convalescent plasma treatment in combating COVID-19 depends more on donor or recipient characteristics, and to identify the most important variables. Methods: To assess the performance of convalescent plasma treatment, ordinal logistic regression models were used, including proportional odds models (POM), partial proportional odds models (PPOM), continuation ratio models (CRM), and stereotype models (SM) to determine which best describes the studied dataset. Results: The dataset included 2,369 patients and was divided into 6 smaller groups according to preliminary results from other studies. In all the groups analyzed, there was significance, but the fit was not satisfactory. Conclusion: Statistical evidence to prove the effectiveness of the treatment using POM, PPOM, CRM, and SM was not found. All models classified at least 98.7% of cases in the lowest severity category, due to the higher proportion of this category in the database, highlighting a considerable imbalance in category distribution. Considering the dataset, donor characteristics were not as relevant to the models as recipient
    variables. In this scenario, even without satisfactory predictive results, some variables such as "Severity level upon hospitalization" and "WHO score upon hospitalization" were included in almost all models. Therefore, future investigations may consider alternative approaches to better explore these variables, or include additional variables, to better understand the factors influencing the outcomes of patients subjected to this type of treatment.
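
For readers unfamiliar with the models compared above, the sketch below fits a proportional odds model (POM) on synthetic data with statsmodels; the covariate names (age, who_score), the cut points, and the generated outcome are hypothetical and unrelated to the study's dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500
# Hypothetical recipient-side covariates (illustrative names only)
age = rng.normal(55, 15, n)
who_score = rng.integers(2, 8, n)
# Latent severity driving a 3-level ordinal outcome (mild < moderate < severe)
latent = 0.03 * age + 0.4 * who_score + rng.logistic(size=n)
outcome = pd.Series(pd.cut(latent, bins=[-np.inf, 3.5, 5.0, np.inf],
                           labels=["mild", "moderate", "severe"], ordered=True))

X = pd.DataFrame({"age": age, "who_score": who_score})
mod = OrderedModel(outcome, X, distr="logit")   # proportional odds model
res = mod.fit(method="bfgs", disp=False)
print(res.summary())
```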


3
  • Yasmin Lírio Souza de Oliveira
  • "Distribuição Gumbel bivariada multimodal"

  • Advisor: CIRA ETHEOWALDA GUEVARA OTINIANO
  • COMMITTEE MEMBERS:
  • CIRA ETHEOWALDA GUEVARA OTINIANO
  • FELIPE SOUSA QUINTINO
  • RAUL YUKIHIRO MATSUSHITA
  • VERONICA ANDREA GONZALEZ LOPEZ
  • Date: 29-May-2024


  • Abstract
  • TO BE DEFINED.

4
  • Vívia de Alencar Seabra
  • An Efficient Ensemble of Geographically Fine-Tuned Positional Encoder Graph Neural Networks.

  • Advisor: GUILHERME SOUZA RODRIGUES
  • COMMITTEE MEMBERS:
  • FABRICIO AGUIAR SILVA
  • ALAN RICARDO DA SILVA
  • GUILHERME SOUZA RODRIGUES
  • JOSE AUGUSTO FIORUCCI
  • Date: 07-Jun-2024


  • Abstract
  • TO BE DEFINED.

5
  • Lucas de Moraes Bastos
  • Deep IRT: an application of deep learning methods to Item Response Theory.

  • Advisor: GUILHERME SOUZA RODRIGUES
  • COMMITTEE MEMBERS:
  • ANTONIO EDUARDO GOMES
  • DALTON FRANCISCO DE ANDRADE
  • GUILHERME SOUZA RODRIGUES
  • JOSE AUGUSTO FIORUCCI
  • Date: 12-Jun-2024


  • Abstract
  • TO BE DEFINED.

6
  • Joao Gabriel Rodrigues Reis
  • Multiobjective Bayesian Optimization To Enhance Computational Efficiency In Neural Network models.

  • Advisor: GUILHERME SOUZA RODRIGUES
  • COMMITTEE MEMBERS:
  • ANDRE LUIZ FERNANDES CANCADO
  • ELIZABETH FIALHO WANNER
  • GUILHERME SOUZA RODRIGUES
  • JOSE AUGUSTO FIORUCCI
  • Date: 13-Jun-2024


  • Abstract
  • TO BE DEFINED.

2023
Dissertations
1
  • PEDRO CARVALHO BROM
  • Relation between Variance and Range of Financial Returns

  • Advisor: RAUL YUKIHIRO MATSUSHITA
  • COMMITTEE MEMBERS:
  • RAUL YUKIHIRO MATSUSHITA
  • ALAN RICARDO DA SILVA
  • ROBERTO VILA GABRIEL
  • REGINA CÉLIA BUENO DA FONSECA
  • Date: 31-Jan-2023


  • Abstract
  • This work, organized as a collection of three articles, proposes a solution to the truncation problem, reconciling past-bounded information and future-unbounded events. We show that this is possible by applying a power law relating the length of the truncation (ℓ) and the standard deviation of the data (σ), given by ℓ = ζσ^β, where ζ and β are positive coefficients. This approach is applicable to a wide class of symmetric distributions, including truncated Lévy flights, as it does not require the exact form of the probability distribution function. In addition, distributional moments may vary over time. In particular, we applied the proposed methodology to intraday financial returns of exchange rates for different currencies, totaling more than 32 million observations. In this case, we propose a non-Gaussian standardization in the form z = r/σ^β, where r is a financial return (typically subject to volatility clusters) and z is the standardized return without volatility clusters.
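
A minimal sketch of the two steps described in the abstract, on synthetic data: fitting the power law ℓ = ζσ^β on a log-log scale and applying the standardization z = r/σ^β. Using the block range as a stand-in for the truncation length and ordinary least squares for the fit are assumptions made for illustration, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic intraday-like returns with slowly varying volatility (illustration only)
sigma_t = np.exp(0.5 * np.sin(np.linspace(0, 20, 200)))
returns = [rng.standard_t(df=4, size=500) * s for s in sigma_t]

# For each block, pair the sample standard deviation with the observed range length
sig = np.array([np.std(r) for r in returns])
ell = np.array([r.max() - r.min() for r in returns])

# Fit the power law ell = zeta * sigma**beta by OLS on the log-log scale
beta, log_zeta = np.polyfit(np.log(sig), np.log(ell), deg=1)
zeta = np.exp(log_zeta)
print(f"zeta ~ {zeta:.3f}, beta ~ {beta:.3f}")

# Non-Gaussian standardization z = r / sigma**beta, block by block
z = np.concatenate([r / s**beta for r, s in zip(returns, sig)])
print(z.mean(), z.std())
```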

2
  • Rodrigo Marques dos Santos
  • A Bayesian method for checking the fit of the three parameter logistic model in item response theory

  • Advisor: ANTONIO EDUARDO GOMES
  • COMMITTEE MEMBERS:
  • ANTONIO EDUARDO GOMES
  • ANDRE LUIZ FERNANDES CANCADO
  • RAUL YUKIHIRO MATSUSHITA
  • DALTON FRANCISCO DE ANDRADE
  • Date: 27-Feb-2023


  • Abstract
  • Item Response Theory has been increasingly used in studies that aim to estimate the latent trait and, among the existing models, the logistic ones are the most used. However, more and more studies show that the assumption that Item Characteristic Curves (ICCs) follow the logistic form is not valid, making it increasingly important to check this assumption. Therefore, estimating the ICC in alternative, nonparametric ways can be a powerful tool to compare with the ICC generated by the logistic model and thus allow inference about the validity of this assumption. This study proposes a nonparametric test that uses Bayesian inference, more specifically the Posterior Predictive Model Checking (PPMC) method, to test this hypothesis. To compare with the ICC calculated by the logistic model, isotonic and Nadaraya-Watson regressions were used to create 6 test statistics. Two analyses were done, one using a simulated data set and the other applying the test to real data from a SARESP application. The simulation results were satisfactory, with the test indicating significant differences in very few items that actually followed the 3-parameter logistic model, and managing to recognize well those items that had a non-monotonic ICC. Despite this, the test recognized only one of the items that were mixtures of distributions. For the real data, the isotonic regression estimators indicated different values than those indicated by the Nadaraya-Watson regression for most of the items.
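
As an illustration of comparing a parametric ICC with a non-parametric monotone estimate (one ingredient of the PPMC test described above), the sketch below fits an isotonic regression to simulated 3PL responses; the item parameters and the squared-distance discrepancy are illustrative choices, not the six statistics used in the dissertation.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def icc_3pl(theta, a, b, c):
    """Three-parameter logistic item characteristic curve."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(3)
n = 2_000
theta = rng.standard_normal(n)                        # abilities
a, b, c = 1.2, 0.3, 0.2                               # illustrative item parameters
responses = rng.binomial(1, icc_3pl(theta, a, b, c))  # simulated 0/1 responses

# Non-parametric (monotone) estimate of the ICC via isotonic regression
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
icc_hat = iso.fit_transform(theta, responses)

# A discrepancy statistic of the kind used in PPMC: distance between the two curves
discrepancy = np.mean((icc_hat - icc_3pl(theta, a, b, c)) ** 2)
print(discrepancy)
```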

3
  • Arthur Canotilho Machado
  • Approximate Bayesian Computation via factorisation of the posterior distribution

  • Advisor: GUILHERME SOUZA RODRIGUES
  • COMMITTEE MEMBERS:
  • GUILHERME SOUZA RODRIGUES
  • RAUL YUKIHIRO MATSUSHITA
  • THAIS CARVALHO VALADARES RODRIGUES
  • KELLY CRISTINA MOTA GONÇALVES
  • Date: 01-Mar-2023


  • Abstract
  • It is common in modern Bayesian inference problems to come across complex and/or high-dimensional models, such as those that arise in the field of population genetics (Beaumont, Zhang, & Balding, 2002), where the likelihood function and marginal distributions are difficult or even intractable to compute, leading to problems in obtaining the posterior distribution. There are several methods for approximating the posterior distribution in these types of cases, including the Approximate Gibbs Sampler proposed by Rodrigues, Nott, and Sisson (2019), which allows the generation of samples from an approximate posterior distribution using principles of Approximate Bayesian Computation (ABC) and Gibbs Sampling. Santos (2021) proposed an improvement to the technique by previously decorrelating the parameters of interest and using quantile regression models via neural networks in the process of approximating the complete conditional distributions. In this work, we suggest replacing the Approximate Gibbs Sampler with an algorithm that approximates the terms of a convenient factorisation of the posterior distribution. We present a review of the theory and practical applications comparing the methods of Rodrigues, Nott, and Sisson (2019), of Santos (2021), and the one proposed in this work. Synthetic datasets were generated to compare the methods. The algorithm proposed in this work showed good performance compared to its peers.
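
For context, the sketch below shows plain ABC rejection sampling, the building block that the Approximate Gibbs Sampler and the factorised algorithm refine; it is not the proposed method. The Gaussian toy model, the prior, and the mean summary statistic are assumptions for illustration.

```python
import numpy as np

def abc_rejection(observed, prior_sampler, simulator, summary,
                  n_draws=20_000, quantile=0.01):
    """Plain ABC rejection: keep the prior draws whose simulated summary is closest
    to the observed summary (a building block, not the factorised sampler itself)."""
    obs_s = summary(observed)
    thetas = np.array([prior_sampler() for _ in range(n_draws)])
    dists = np.array([np.abs(summary(simulator(t)) - obs_s) for t in thetas])
    keep = dists <= np.quantile(dists, quantile)
    return thetas[keep]

rng = np.random.default_rng(4)
true_mu = 1.5
data = rng.normal(true_mu, 1.0, size=200)

posterior_sample = abc_rejection(
    observed=data,
    prior_sampler=lambda: rng.normal(0.0, 5.0),          # vague prior on mu
    simulator=lambda mu: rng.normal(mu, 1.0, size=200),  # model with known sd
    summary=np.mean,                                     # sufficient statistic here
)
print(posterior_sample.mean(), posterior_sample.std())
```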

4
  • Ricardo Torres Bispo Reis
  • Quantile-based Recalibration of Artificial Neural Networks

  • Advisor: GUILHERME SOUZA RODRIGUES
  • COMMITTEE MEMBERS:
  • GUILHERME SOUZA RODRIGUES
  • JOSE AUGUSTO FIORUCCI
  • THAIS CARVALHO VALADARES RODRIGUES
  • RAFAEL IZBICKI
  • Date: 01-Mar-2023


  • Abstract
  • Artificial neural networks (ANN) are powerful tools for prediction and data modeling. Although they are becoming ever more powerful, modern improvements have compromised their calibration in favor of enhanced prediction accuracy, thus making their true confidence harder to assess. To address this problem, we propose a new post-processing quantile-based method of recalibration for ANN. To illustrate the method's mechanics we present two toy examples. In both, recalibration reduced the Mean Squared Error over the original uncalibrated models and provided a better representation of the data generative model. To further investigate the effects of the proposed recalibration procedure, we also present a simulation study comparing various parameter configurations; the recalibration successfully improved performance over the base models in all scenarios under consideration. Finally, we apply the proposed method to a problem of diamond price prediction, where it was also able to improve the overall model performance.
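
A minimal sketch of one common quantile-based recalibration idea (remapping nominal levels through the empirical distribution of probability integral transform values on a calibration set); it is offered as context and is not necessarily the exact procedure proposed in the dissertation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Calibration set: the model reports mean/sd, but is overconfident (sd too small)
y_cal = rng.normal(0.0, 2.0, size=1_000)
mu_cal, sd_cal = np.zeros_like(y_cal), np.ones_like(y_cal)   # miscalibrated predictions

# Probability integral transform of the calibration targets under the model
pit = stats.norm.cdf(y_cal, loc=mu_cal, scale=sd_cal)

def recalibrated_quantile(mu, sd, level):
    """Map a nominal level to the empirical PIT quantile, then invert the
    predictive CDF; a simple quantile-based recalibration."""
    adjusted = np.quantile(pit, level)
    return stats.norm.ppf(adjusted, loc=mu, scale=sd)

# 90% central interval for a new prediction, before vs after recalibration
lo, hi = recalibrated_quantile(0.0, 1.0, 0.05), recalibrated_quantile(0.0, 1.0, 0.95)
print((stats.norm.ppf(0.05), stats.norm.ppf(0.95)), (lo, hi))
```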

5
  • Lucas José Gonçalves Freitas
  • Text clustering applied to the treatment of unbalanced legal data

  • Advisor: THAIS CARVALHO VALADARES RODRIGUES
  • COMMITTEE MEMBERS:
  • THAIS CARVALHO VALADARES RODRIGUES
  • ANDRE LUIZ FERNANDES CANCADO
  • NÁDIA FELIX FELIPE DA SILVA
  • RAFAEL BASSI STERN
  • Date: 02-Mar-2023


  • Abstract
  • The Federal Supreme Court (STF), the highest instance of the Brazilian judicial system, produces, as do courts of other instances, an immense amount of data organized in text form, through decisions, petitions, injunctions, appeals and other legal documents. Such documents are classified and grouped by public employees specialized in the cataloging of judicial processes, who in specific cases use technological support tools. Some processes in the STF, for example, are classified under one or more sustainable development goals (SDGs) of the United Nations (UN) 2030 Agenda. As this is a repetitive task related to pattern recognition, it is possible to develop tools based on machine learning for this purpose. In this work, Natural Language Processing (NLP) models are proposed for clustering processes, in order to augment the database for certain sustainable development goals (SDGs) that naturally have few entries. The activity of clustering, which is of enormous importance in its own right, is also able to gather unlabeled entries around cases already classified by court officials, thus allowing new labels to be allocated to similar cases. The results of the work show that cluster-augmented sets can be used in supervised learning flows to aid in the classification of legal texts, especially in contexts with unbalanced data.
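
A minimal sketch of the clustering step on a tiny invented corpus (TF-IDF features plus k-means); the documents, the number of clusters, and the vectorizer settings are illustrative and do not reproduce the STF pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Tiny illustrative corpus standing in for legal documents (not the STF data)
docs = [
    "extraordinary appeal on the right to health and the supply of medicines",
    "direct action of unconstitutionality on environmental policy and deforestation",
    "habeas corpus in a criminal matter and execution of sentence",
    "writ of mandamus on access to high-cost medicines",
    "claim of non-compliance concerning the protection of forests",
]

tfidf = TfidfVectorizer(lowercase=True)      # bag-of-words with TF-IDF weights
X = tfidf.fit_transform(docs)

km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(X)
# Unlabeled documents sharing a cluster with labeled cases can inherit their label
print(labels)
```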

6
  • Gustavo Martins Venancio Pires
  • A hybrid model for hierarchical time series with multiple seasonality

  • Advisor: JOSE AUGUSTO FIORUCCI
  • COMMITTEE MEMBERS:
  • DIEGO CARVALHO DO NASCIMENTO
  • EDUARDO YOSHIO NAKANO
  • JOSE AUGUSTO FIORUCCI
  • PAULO HENRIQUE FERREIRA DA SILVA
  • Date: 14-Mar-2023


  • Abstract
  • This Master's thesis proposes a hybrid model capable of forecasting hierarchical time series with multiple seasonality. The hybrid methodology consists of using a machine learning model whose features include time series statistical methodologies to generate cohesive forecasts. This methodology was applied to the M5-Forecasting (2020) competition, available through Kaggle, in which the objective was to predict, as accurately as possible, the daily sales of 3,409 products distributed in 5 levels of hierarchy over 28 days. In the dissertation, 5 different approaches were compared, and the Light Gradient Boosting Machine (LGBM) model containing a variable based on TBATS (Trigonometric seasonality, Box-Cox transformation, ARMA errors, Trend and Seasonal components) obtained an accuracy gain of 27% compared to the LGBM models without the aforementioned variable. This model would have obtained 318th place in the competition, placing among the top 6% of competitors.
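
A minimal sketch of the hybrid idea on a synthetic daily series, assuming the lightgbm package is available: a LightGBM model whose features include a statistical-forecast column. A seasonal-naive rolling forecast stands in for the TBATS-based variable used in the thesis; the series, lags, and hyperparameters are illustrative.

```python
import numpy as np
import pandas as pd
import lightgbm as lgb

rng = np.random.default_rng(6)
n = 730
t = np.arange(n)
# Synthetic daily sales with weekly seasonality (stand-in for one M5-style series)
y = 10 + 3 * np.sin(2 * np.pi * t / 7) + 0.01 * t + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y})
df["dow"] = t % 7
df["lag_7"] = df["y"].shift(7)
df["lag_28"] = df["y"].shift(28)
# Stand-in for the thesis' TBATS-based variable: a seasonal-naive "statistical forecast"
df["stat_forecast"] = df["y"].shift(7).rolling(4).mean()
df = df.dropna()

train, test = df.iloc[:-28], df.iloc[-28:]
features = ["dow", "lag_7", "lag_28", "stat_forecast"]

model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(train[features], train["y"])
pred = model.predict(test[features])
print(np.mean(np.abs(pred - test["y"])))   # MAE over the 28-day horizon
```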

7
  • Roberto de Souza Marques Buffone
  • Analysis of the traffic accidents rate with victims using Geographically Weighted Beta Regression.

  • Advisor: ALAN RICARDO DA SILVA
  • COMMITTEE MEMBERS:
  • ALAN RICARDO DA SILVA
  • ANDRE LUIZ FERNANDES CANCADO
  • TEREZINHA KESSIA DE ASSIS RIBEIRO
  • FLÁVIO JOSÉ CRAVEIRO CUNTO
  • Date: 14-Jun-2023


  • Abstract
  • Classical linear regression allows, in a simple way, a continuous quantitative variable to be modeled from other variables. However, this type of methodology has certain assumptions, such as independence between observations, which if ignored can lead to methodological issues. Additionally, not all data follow a normal distribution, which leads to alternative methods for modeling. In this context, Geographically Weighted Beta Regression (GWBR) is presented with the aim of incorporating spatial dependence into the modeling, along with the analysis of rates and proportions using the beta distribution. The beta distribution, with its support on the unit interval and its flexible nature, easily adapts to the analyzed data. In this study, GWBR was applied to the rate of traffic accidents with victims in Fortaleza-CE, Brazil, from 2009 to 2011, comparing its results to global and local models of classical regression, classical regression with logit transformation of the response variable, and global beta regression. Additionally, the ‘gwbr’ package was developed in R software, providing the necessary algorithms for GWBR application. In conclusion, it was found that the local approach using the beta distribution is a viable model for explaining the rate of traffic accidents with victims, given its suitability to both asymmetric and symmetric distributions. Therefore, when analyzing rates, the use of the beta distribution is always recommended.
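
A minimal sketch of the geographic weighting at the core of GWR-type models: Gaussian kernel weights around a focal point followed by a local weighted fit. For simplicity the local fit is weighted least squares on the logit of the rate, a stand-in for the local beta likelihood used in GWBR (the thesis' ‘gwbr’ package is in R); the coordinates, bandwidth, and data are invented.

```python
import numpy as np

def gaussian_kernel_weights(coords, focal, bandwidth):
    """Gaussian spatial kernel: w_i = exp(-0.5 * (d_i / bandwidth)**2)."""
    d = np.linalg.norm(coords - focal, axis=1)
    return np.exp(-0.5 * (d / bandwidth) ** 2)

rng = np.random.default_rng(7)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))                 # synthetic locations
x = rng.normal(size=n)
# Spatially varying coefficient generating a rate in (0, 1)
beta1 = 0.5 + 0.1 * coords[:, 0]
rate = 1 / (1 + np.exp(-(-0.2 + beta1 * x + rng.normal(0, 0.3, n))))

# Local fit at one focal point via weighted least squares on the logit scale
# (a simplified stand-in for the local beta likelihood used in GWBR)
focal = np.array([5.0, 5.0])
w = gaussian_kernel_weights(coords, focal, bandwidth=2.0)
X = np.column_stack([np.ones(n), x])
z = np.log(rate / (1 - rate))
beta_hat = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * z))
print(beta_hat)    # local intercept and slope at the focal point
```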

8
  • Matheus Stivali
  •  Two essays on yield curve modelling

  • Advisor: JOSE AUGUSTO FIORUCCI
  • COMMITTEE MEMBERS:
  • JOSE AUGUSTO FIORUCCI
  • EDUARDO YOSHIO NAKANO
  • RAUL YUKIHIRO MATSUSHITA
  • GERALDO NUNES SILVA
  • Date: 12-Dec-2023


  • Abstract
  • The dissertation undertakes two distinct lines of statistical analysis on the yield curve for Brazil: the first involves the interpolation of daily observed data to estimate the complete curve. In contrast, the second focuses on extrapolating past information to forecast the yield curve. These analyses aim to model the behaviour of interest rates in Brazil, offering insights for improved macroeconomic management and supporting investment decisions. The analysis utilizes data from interest rate futures contracts traded in Brazil between January 2018 and April 2023. The second chapter is dedicated to estimating empirical models of the Term Structure of Interest Rates. Despite B3 periodically releasing yield curve estimates for monitoring the Brazilian market, various estimation techniques are considered for alternative purposes due to inherent trade-offs. The interest rate and maturity relationship holds for all terms, but daily observations are limited to specific maturities corresponding to traded securities or derivatives. Therefore, estimating the entire curve from these observed data points is crucial. This chapter evaluates empirical models, which do not impose restrictions derived from theoretical term structure models during the estimation process. These models are focused on obtaining a smooth function from observed data while adhering to specific constraints, such as the non-negativity of interest rates. The evaluation criteria include the quality of fit, robustness to outliers, and smoothness of the estimated function. This chapter contributes to the literature by assessing models not previously applied to yield curve estimation and utilizing the multiple comparison procedure. Results highlight the strong fit of spline models, emphasize the greater smoothness of Nelson-Siegel family models, and recognize the noteworthy performance of the previously overlooked Loess model. The third chapter delves into modelling the yield curve dynamics through a factor model perspective to generate curve predictions. The analysis incorporates Brazilian data by implementing the Dynamic Nelson-Siegel model proposed by Diebold and Li (2006) and further developed in Diebold et al. (2006). Both original estimation procedures, two-step and one-step, are considered, focusing on the latter using the Kalman filter. Out-of-sample predictive capacity is assessed through the Diebold-Mariano test, comparing the performance of these implementations against simpler models.
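
For context on the curve families discussed above, the sketch below fits a static Nelson-Siegel curve to a handful of hypothetical yields by non-linear least squares; the maturities, quotes, and starting values are invented, and the dynamic (Diebold-Li) version adds time-varying factors not shown here.

```python
import numpy as np
from scipy.optimize import least_squares

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield curve: level, slope and curvature factors."""
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    curvature = slope - np.exp(-x)
    return beta0 + beta1 * slope + beta2 * curvature

# Hypothetical maturities (in years) and yields, standing in for futures-implied quotes
maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 10])
yields = np.array([0.1320, 0.1295, 0.1250, 0.1210, 0.1195, 0.1190, 0.1200])

def residuals(params):
    b0, b1, b2, lam = params
    return nelson_siegel(maturities, b0, b1, b2, lam) - yields

fit = least_squares(residuals, x0=[0.12, -0.01, 0.0, 1.0],
                    bounds=([-1.0, -1.0, -1.0, 0.05], [1.0, 1.0, 1.0, 10.0]))
print(fit.x)                                        # beta0, beta1, beta2, lambda
print(nelson_siegel(np.array([7.0]), *fit.x))       # interpolated 7-year yield
```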

9
  • Gabriel Ângelo da Silva Gomes
  • Essays on fingerprint data statistical analysis

  • Advisor: RAUL YUKIHIRO MATSUSHITA
  • COMMITTEE MEMBERS:
  • RAUL YUKIHIRO MATSUSHITA
  • GLADSTON LUIZ DA SILVA
  • ROBERTO VILA GABRIEL
  • REGINA CÉLIA BUENO DA FONSECA
  • Date: 13-Dec-2023


  • Abstract
  • This dissertation is organized as a collection of five articles on the application of statistical tools in fingerprint studies. The first applies convolutional neural networks to fingerprint data to predict human attributes such as sex, hand type (left or right), and finger position (right index finger, for example). The second presents a bibliometric review, from 2018 to 2023, of automated minutiae-counting initiatives; we note that most involve convolutional neural networks. The third deals with a statistical analysis of the distribution of Level 2 details with respect to Levels 1 and 3, in addition to considering sex and type of finger. The fourth describes an initiative to disseminate 1,000 fingerprints sampled from Brazilians (50 males and 50 females) for ethical, non-profit academic and scientific research. This initiative aims to promote fingerprint identification studies. Finally, the fifth essay suggests Rényi’s divergence as an alternative to the traditional chi-square test to evaluate goodness-of-fit, homogeneity, and independence in contingency tables involving rare events. We illustrate this method using fingerprint minutiae data sampled from the Brazilian Federal Police records.
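
A minimal sketch of the Rényi divergence mentioned in the fifth essay, for discrete distributions; the count vectors are hypothetical and do not come from the Federal Police data.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P||Q) = log(sum p_i^alpha * q_i^(1-alpha)) / (alpha - 1)
    for discrete distributions; alpha -> 1 recovers the Kullback-Leibler divergence."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    if np.isclose(alpha, 1.0):
        mask = p > 0
        return np.sum(p[mask] * np.log(p[mask] / q[mask]))
    return np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)

# Illustrative category counts including rare cells (hypothetical numbers)
observed = [180, 12, 5, 2, 1]
expected = [170, 15, 8, 5, 2]
print(renyi_divergence(observed, expected, alpha=0.5))
```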

10
  • Aitcheou Gauthier Zountchegnon
  • Time series forecasting applied to data sale of a large retailer in Brazil.

  • Advisor: JOSE AUGUSTO FIORUCCI
  • COMMITTEE MEMBERS:
  • JOSE AUGUSTO FIORUCCI
  • EDUARDO YOSHIO NAKANO
  • GUILHERME SOUZA RODRIGUES
  • MARINHO GOMES DE ANDRADE FILHO
  • Date: 19-Dec-2023


  • Abstract
  • Retail trade plays a crucial role in the Brazilian economy, and planning for sales volume and other factors related to the retail sector is of great importance for its growth. To effectively forecast and plan sales quantities, methodologies related to time series can be employed. This study focuses on the development and evaluation of predictive models, which should take into account typical characteristics of such data, such as hierarchical structure, the presence of multiple seasonalities in higher-level series, and intermittent behavior in lower-level series.

2022
Dissertations
1
  • Matheus Gorito de Paula
  • Cross learning for univariate time series forecasting.

  • Advisor: JOSE AUGUSTO FIORUCCI
  • COMMITTEE MEMBERS:
  • JOSE AUGUSTO FIORUCCI
  • EDUARDO YOSHIO NAKANO
  • GUILHERME SOUZA RODRIGUES
  • FLÁVIO LUIZ DE MORAES BARBOZA
  • Date: 19-Sep-2022


  • Abstract
  • Machine learning refers to the process by which computers develop pattern recognition, or the ability to continually learn and make predictions based on data, and then make adjustments without being specifically programmed to do so. Within machine learning methods, this work focuses on the stacking technique. Time series forecasting competitions aim to evaluate and compare the accuracy of time series forecasting models. In this project we use the time series database from the M3 competition to make predictions using the reference time series models. Afterwards, we train a boosting model on the results of these predictions, seeking to obtain more efficient results in competitions.
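
A minimal sketch of the stacking idea on a synthetic series: simple reference forecasts are used as features and a boosting meta-model learns to combine them. The base forecasts and the gradient boosting implementation are illustrative stand-ins for the M3 reference methods and the boosting model used in the work.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(8)
n = 400
t = np.arange(n)
y = 20 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, n)

# Simple one-step "reference" forecasts standing in for the M3 benchmark methods
naive = np.roll(y, 1)                                     # last observation
drift = np.roll(y, 1) + (y[1:] - y[:-1]).mean()           # naive plus average drift
seasonal_naive = np.roll(y, 12)                           # value one season earlier

X = np.column_stack([naive, drift, seasonal_naive])[12:]  # drop the warm-up period
target = y[12:]

split = len(target) - 50
meta = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05)
meta.fit(X[:split], target[:split])            # learn how to combine the base forecasts
pred = meta.predict(X[split:])
print(np.mean(np.abs(pred - target[split:])))  # out-of-sample MAE of the stacked forecast
```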

2
  • Marcos Douglas Rodrigues de Sousa
  • Geographically weighted zero inflated negative binomial regression.

  • Advisor: ALAN RICARDO DA SILVA
  • COMMITTEE MEMBERS:
  • ALAN RICARDO DA SILVA
  • ANDRE LUIZ FERNANDES CANCADO
  • FRANCISCO JOSÉ DE AZEVEDO CYSNEIROS
  • THAIS CARVALHO VALADARES RODRIGUES
  • Date: 21-Sep-2022


  • Abstract
  • The goal of this work is to bring an approach to the modeling of count data, considering the existence of zeros in the distribution. Assuming the use of spatial data, in which the phenomenon under analysis does not present stationarity, geographically weighted regression appears as a solution to this problem. Therefore, this work presents an extension of geographically weighted negative binomial regression (GWNBR) to include a zero-inflated negative binomial distribution, entitled geographically weighted zero-inflated negative binomial regression (GWZINBR).

    To verify the goodness of fit of the GWZINBR model, simulated data from zero-inflated Poisson and zero-inflated negative binomial distributions, without spatial variation, were used. Finally, the model was fitted to real data on COVID-19 cases in South Korea, analyzed by Weinstein et al. (2021).

    The simulation results showed that the GWZINBR model was able to model data with Poisson, negative binomial, zero-inflated Poisson and zero-inflated negative binomial distributions, without spatial variation, by means of a large bandwidth. In the real case study, the results showed that, locally, the adjusted models could be Poisson or negative binomial, thus refining the analysis and showing the flexibility of the GWZINBR model.
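
For context, the sketch below fits the global (non-spatial) zero-inflated negative binomial model with statsmodels on simulated data; GWZINBR re-fits this kind of model locally with spatial kernel weights (as in the weighting sketch shown for GWBR above). The simulated design and the intercept-only inflation part are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(9)
n = 1_000
x = rng.normal(size=n)
X = sm.add_constant(x)

# Simulate zero-inflated negative binomial counts (structural zeros with prob 0.3)
mu = np.exp(0.5 + 0.8 * x)
nb = rng.negative_binomial(n=2, p=2 / (2 + mu))   # NB with mean mu and size 2
counts = np.where(rng.random(n) < 0.3, 0, nb)

# Global (non-spatial) ZINB fit; GWZINBR would re-fit this locally with kernel weights
model = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=np.ones((n, 1)),
                                      inflation="logit")
result = model.fit(method="bfgs", maxiter=500, disp=False)
print(result.params)
```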

3
  • Monique Lohane Xavier Silva
  • A Bayesian credit risk model for classification of default customers

  • Advisor: EDUARDO YOSHIO NAKANO
  • COMMITTEE MEMBERS:
  • EDUARDO YOSHIO NAKANO
  • HELTON SAULO BEZERRA DOS SANTOS
  • JOSE AUGUSTO FIORUCCI
  • MARCELO ANGELO CIRILLO
  • Date: 03-Nov-2022


  • Abstract
  • The aim of this work was to propose a Bayesian credit risk model for classifying customers in terms of their default risk. The differential of the proposed methodology is the possibility of incorporating a priori information in the customer classification process, and not just in the estimation of the customers' evaluation parameters. The main advantage of this procedure is the simplicity of incorporating the expert's opinion in the classification process, something that does not occur in traditional Bayesian modeling, where the a priori information falls on the parameters of the models, which are usually abstract quantities and/or associated with covariates with multicollinearity problems. To illustrate the proposed methodology, a dataset from the literature was used, and the results obtained showed that the model is useful for classifying customers in terms of their probability of default.
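
As a toy illustration of incorporating an expert's a priori opinion directly into default classification (not the model proposed in the dissertation), the sketch below uses a Beta prior on the default probability of a customer segment and a conjugate update; all numbers are invented.

```python
import numpy as np
from scipy import stats

# Expert prior: default probability for this segment believed to be around 20%,
# with weight equivalent to 50 past observations (illustrative numbers)
prior_mean, prior_weight = 0.20, 50
a0, b0 = prior_mean * prior_weight, (1 - prior_mean) * prior_weight

# Observed data for the segment: 12 defaults out of 80 similar customers
defaults, customers = 12, 80
a_post, b_post = a0 + defaults, b0 + (customers - defaults)

posterior = stats.beta(a_post, b_post)
print(posterior.mean())                 # posterior default probability
print(posterior.ppf([0.025, 0.975]))    # 95% credible interval

# Probability that the default rate exceeds a 15% policy threshold
print(1 - posterior.cdf(0.15))
```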

4
  • Beatriz Leal Simões e Silva
  • A new invertible bimodal Weibull model

  • Advisor: CIRA ETHEOWALDA GUEVARA OTINIANO
  • COMMITTEE MEMBERS:
  • CIRA ETHEOWALDA GUEVARA OTINIANO
  • ANTONIO EDUARDO GOMES
  • HELTON SAULO BEZERRA DOS SANTOS
  • MARCELO BOURGUIGNON PEREIRA
  • Date: 03-Nov-2022


  • Abstract
  • The Weibull distribution is one of the most used models in statistics and applied areas, as it has simple expressions for the probability density function, survival function, and moments. However, the Weibull distribution is not able to fit bimodal data. In this work, we propose a new generalization of the three-parameter Weibull distribution, a new invertible bimodal Weibull model (NIBW), which can be bimodal and whose cumulative distribution function and quantile function have a simple, closed form, which makes it very interesting in simulation procedures and for the calculation of risk measures in applied areas. Several properties of the model were studied, and for the non-negative version of the model (NNIBW) the performance of the maximum likelihood estimates of the parameters was tested using Monte Carlo simulation. Furthermore, using four sets of temperature data, we fitted and compared our model with another bimodal distribution, calculated the return time, and also fitted a regression model for one chosen dataset.

5
  • Ana Lívia Protázio Sá
  • Bivariate Log-Symmetric Models: Theoretical Properties and Parametric Estimation

  • Advisor: ROBERTO VILA GABRIEL
  • COMMITTEE MEMBERS:
  • ROBERTO VILA GABRIEL
  • CIRA ETHEOWALDA GUEVARA OTINIANO
  • JOSE AUGUSTO FIORUCCI
  • JEREMIAS DA SILVA LEÃO
  • Date: 17-Nov-2022


  • Abstract
  • The bivariate Gaussian distribution has been the basis of probability and statistics for many years. Nonetheless, this distribution faces some problems, mainly due to the fact that many real-world phenomena generate data that follow asymmetric distributions. Bivariate log-symmetric models have attractive properties and can be considered good alternatives to solve this problem. In this dissertation, we propose new characterizations of bivariate log-symmetric distributions and their applications. This dissertation aims to develop important contributions to probability and to theoretical and applied statistics due to the flexibility and interesting properties of the outlined models. We implemented maximum likelihood estimation for the parameters of the distributions. A Monte Carlo simulation study was performed to evaluate the performance of the parameter estimation. Finally, we applied the proposed methodology to a real data set.

6
  • Gustavo Maia Rodrigues Gomes
  • A Multi-Armed Bandit Framework for Portfolio Allocation

  • Advisor: RAUL YUKIHIRO MATSUSHITA
  • COMMITTEE MEMBERS:
  • RAUL YUKIHIRO MATSUSHITA
  • JOSE AUGUSTO FIORUCCI
  • ERALDO SERGIO BARBOSA DA SILVA
  • REGINA CÉLIA BUENO DA FONSECA
  • Date: 22-Nov-2022


  • Abstract
  • For over a century, the academic community has studied the financial market in an attempt to understand its behavior and maximize profits. This work looks for ways to maximize results in the financial market by creating a two-phase procedure that we call MAB-MMAR. First, individual generative models are established for each asset to simulate future returns via Monte Carlo, using the Multifractal Model of Asset Returns, which is able to capture the multiscaling of the moments of the return distribution across time scales, being an alternative to ARCH-type representations, which have been the focus of empirical research on the distribution of prices in recent years. Second, a Multi-Armed Bandit (MAB) structure is built by applying the Upper Confidence Bound (UCB)-Tuned algorithm on the simulated paths, in order to make choices between assets that optimize the allocation of resources. Furthermore, as a layer of protection for operations, we propose the Double Barrier Method, in which the operation is terminated if a lower barrier is touched. As a performance comparison, the One-Asset, 1/n, Modern Portfolio Theory (MPT) and Axiomatic Second-order Stochastic Dominance Portfolio Theory (ASSDPT) models were tested. Our results are promising, revealing that, in general, MAB-MMAR performed best in the most varied scenarios.
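
A minimal sketch of the UCB1-Tuned selection rule applied to pools of simulated returns, standing in for the Monte Carlo paths generated by the MMAR models; the two Gaussian pools, the horizon, and the reward scale are illustrative assumptions, and the barrier layer is not shown.

```python
import numpy as np

def ucb_tuned(sim_returns, horizon, rng):
    """UCB1-Tuned over assets whose rewards are drawn from pools of simulated returns.
    sim_returns: list of 1-D arrays, one pool per asset."""
    k = len(sim_returns)
    counts, sums, sq_sums = np.zeros(k), np.zeros(k), np.zeros(k)

    def draw(i):
        return rng.choice(sim_returns[i])

    # Play each arm once to initialise the statistics
    for i in range(k):
        r = draw(i); counts[i] += 1; sums[i] += r; sq_sums[i] += r ** 2

    choices = []
    for t in range(k + 1, horizon + 1):
        means = sums / counts
        variances = sq_sums / counts - means ** 2 + np.sqrt(2 * np.log(t) / counts)
        index = means + np.sqrt(np.log(t) / counts * np.minimum(0.25, variances))
        i = int(np.argmax(index))
        r = draw(i); counts[i] += 1; sums[i] += r; sq_sums[i] += r ** 2
        choices.append(i)
    return np.array(choices)

rng = np.random.default_rng(10)
# Stand-ins for Monte Carlo return paths from the MMAR generative models
pools = [rng.normal(0.0005, 0.01, 5_000), rng.normal(0.0010, 0.02, 5_000)]
picks = ucb_tuned(pools, horizon=2_000, rng=rng)
print(np.bincount(picks, minlength=2) / picks.size)   # share of picks per asset
```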

7
  • Ana Carolina Souto Valente Motta
  • Association between Cardiovascular Health and Socioeconomic determinants: An application of multinomial and ordinal logistic regression models.

  • Advisor: JOANLISE MARCO DE LEON ANDRADE
  • COMMITTEE MEMBERS:
  • JOANLISE MARCO DE LEON ANDRADE
  • ANDRE LUIZ FERNANDES CANCADO
  • EDUARDO YOSHIO NAKANO
  • JULIA MARIA PAVAN SOLER
  • Date: 28-Nov-2022


  • Abstract
  • Objective: To estimate the prevalence of Ideal Cardiovascular Health (CVH) in the Brazilian adult population and to evaluate the association between CVH and social determinants based on the 2019 National Health Survey. Methods: Nationwide health survey (n=77,494). The CVH score proposed by the American Heart Association includes 4 behavioral metrics (smoking, body mass index, exercise, and diet) and 3 biological metrics (cholesterol, blood pressure, and glucose). Prevalence (and 95% confidence intervals) of ideal CVH and its individual metrics were estimated using sample expansion. Associations between CVH and socioeconomic determinants (Education, Wealth and Occupation Index) were evaluated by logistic, ordinal and multinomial regression models, adjusting for sociodemographic variables. Results: Only 0.5% (95% CI 0.4;0.6) of the population presented Ideal CVH (7 favorable metrics) and 8.9% (95% CI 8.5;9.3) presented superior CVH (6-7 favorable metrics), with worse performance in behavioral metrics. Education, wealth index and occupation status, in addition to the covariates age group, marital status, presence of chronic diseases, region, and urban-rural classification, were significantly associated with CVH. Binary, multinomial and ordinal logistic regression models identified practically the same significant independent variables, with the multinomial being more interesting clinically and the ordinal being difficult to interpret and evaluate in the context of complex sampling. Conclusion: The very low prevalence of Ideal CVH and the associations between CVH and sociodemographic characteristics observed in the Brazilian adult population highlight the need for public policies to promote, monitor and care for CVH, with more targeted and effective interventions to increase the prevalence of ideal CVH.

8
  • Lídia Almeida de Carvalho
  • A proposal to control item exposure rates in computerized adaptive tests

  • Advisor: ANTONIO EDUARDO GOMES
  • COMMITTEE MEMBERS:
  • ANTONIO EDUARDO GOMES
  • GUILHERME SOUZA RODRIGUES
  • RAUL YUKIHIRO MATSUSHITA
  • CAIO LUCIDIUS NABEREZNY AZEVEDO
  • Date: 29-Nov-2022


  • Abstract
  • The development of computerized adaptive tests was only possible due to the technological advances of the last decades, allowing this methodology to obtain estimates of the examinees' ability based on a reduced number of items selected specifically for each respondent from their estimated latent trait. Difficulties arise when a small group of items is exposed frequently, jeopardizing the security of the test. Thus, this research aims to propose a method for the item selection step, based on item information weighted by a power of order alpha of the current proportion of respondents not exposed to each item, in order to reduce the exposure rate of the items, so that there are neither items with very high exposure rates nor items that are never exposed even when their degree of difficulty is close to the respondent's real ability theta. The results demonstrate the advantages of the proposed methodology in relation to those already in use, presenting better performance in the proportion of overexposed items for all values of alpha in the random weighted information method and increasing the proportion of exposed items for higher values of alpha in the weighted maximum information method, for the simulated item bank. The weighted maximum information method with random alpha presented the best performance among all the methods discussed here when applied to the real item bank. Other advantages related to the choice of alpha values are also mentioned.
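
A minimal sketch of the selection rule described above: item information weighted by a power alpha of the proportion of respondents not yet exposed to the item. The 3PL information formula is standard; the item bank, exposure counts, and function names are illustrative.

```python
import numpy as np

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
    q = 1 - p
    return a ** 2 * (q / p) * ((p - c) / (1 - c)) ** 2

def select_item(theta, bank, exposure_counts, n_respondents, alpha, administered):
    """Pick the item maximising information weighted by (proportion not yet exposed)**alpha,
    in the spirit of the exposure-control rule described in the abstract."""
    not_exposed = 1.0 - exposure_counts / max(n_respondents, 1)
    info = np.array([item_information(theta, *item) for item in bank])
    score = info * not_exposed ** alpha
    score[list(administered)] = -np.inf        # never repeat an item within a test
    return int(np.argmax(score))

rng = np.random.default_rng(11)
bank = np.column_stack([rng.uniform(0.8, 2.0, 200),   # discrimination a
                        rng.normal(0, 1, 200),         # difficulty b
                        rng.uniform(0.1, 0.25, 200)])  # guessing c
exposure = np.zeros(200)
item = select_item(theta=0.3, bank=bank, exposure_counts=exposure,
                   n_respondents=150, alpha=2.0, administered=set())
print(item)
```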


