Abstract: Over the last two decades, observational cosmology has established itself as a field of large surveys producing vast amounts of data across different bands of the electromagnetic spectrum. Surveys such as the Legacy Survey of Space and Time (LSST) and the Euclid satellite, for example, will raise the volume of high-quality data to a new level over the next ten years. This has generated a demand for efficient and accurate algorithms. Establishing this accuracy and efficiency requires independent codes that can be compared against one another for validation, a fundamental step in ensuring that cosmological and astrophysical analyses are free of errors and biases in their numerical results. At the same time, we cannot focus only on the accuracy of the calculations, since statistical analyses are, in general, time-consuming and computationally expensive. Code efficiency must therefore be achieved with knowledge of the numerical precision of each calculation performed. In this work, we compare and validate parts of two programming libraries, Numerical Cosmology (NumCosmo) and COsmology, haLO, and large-Scale StrUcture toolS (Colossus). We cover quantities ranging from the basics of cosmology, such as the Hubble function and cosmological distances, to halo matter density profiles and the excess surface mass density ΔΣ(𝑅). It is known that the ΔΣ(𝑅) calculation must take into account different effects, such as contributions from the large-scale structure and miscentering. We therefore complete this work by developing a code to compute the miscentering term and comparing it with the cluster-lensing library.
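To illustrate the kind of basic quantities being cross-validated, here is a minimal, self-contained sketch of the dimensionless Hubble function E(z) = H(z)/H0 and the line-of-sight comoving distance for a flat ΛCDM model. This is not the code of either library; the parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3) and the midpoint integration scheme are illustrative choices of our own:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s


def E(z, Om=0.3):
    """Dimensionless Hubble function H(z)/H0 for flat LambdaCDM
    (illustrative: radiation and curvature are neglected)."""
    return np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))


def comoving_distance(z, H0=70.0, Om=0.3, n=10000):
    """Line-of-sight comoving distance D_C = (c/H0) * int_0^z dz'/E(z'),
    evaluated with a simple midpoint rule on n subintervals."""
    dz = z / n
    zs = (np.arange(n) + 0.5) * dz  # midpoints of each subinterval
    return (C_KM_S / H0) * np.sum(dz / E(zs, Om))
```

A cross-validation between two libraries would then amount to evaluating such functions on a common redshift grid and comparing the relative differences against a numerical-precision target.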
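For the miscentering term, a minimal sketch can be built from the standard azimuthal-average form, Σ_mis(R | R_mis) = (1/2π) ∫ Σ(√(R² + R_mis² + 2 R R_mis cos θ)) dθ, where R_mis is the offset between the true and the assumed halo center. The toy surface-density profile below is an illustrative assumption, not one of the profiles used in this work:

```python
import numpy as np


def sigma_mis(sigma, R, R_mis, n_theta=4096):
    """Azimuthal average of the surface density sigma(r) around a center
    offset by R_mis from the true one (standard miscentering integral).
    The integrand is periodic in theta, so a plain mean over a uniform
    angular grid approximates (1/2pi) times the integral."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    r = np.sqrt(R**2 + R_mis**2 + 2.0 * R * R_mis * np.cos(theta))
    return sigma(r).mean()


# toy profile for demonstration only (not an NFW profile)
def toy_sigma(r):
    return 1.0 / (1.0 + r**2)
```

With zero offset the average reduces to Σ(R) itself, which provides a simple sanity check before comparing against an independent implementation such as cluster-lensing.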