Here we provide a selection of academic journal templates for articles and papers which automatically format your manuscripts in the style required for submission to that journal. Thanks to the partnerships we're building within the publishing community, you can also now submit your paper directly to a number of journals and other editorial and review services via the publish menu in the editor.
The efficiency of a query execution plan depends on the accuracy of the selectivity estimates given to the query optimiser by the cost model. The cost model makes simplifying assumptions in order to
produce said estimates in a timely manner. These assumptions lead to selectivity estimation errors that have dramatic effects on the quality of the resulting query execution plans. A convenient assumption that is ubiquitous among current cost models is that attributes are independent of one another. This assumption ignores potential correlations, which can have a huge negative impact on the accuracy of the cost model. In this paper we attempt to relax the attribute value independence assumption without unreasonably deteriorating the accuracy of the cost model. We propose a novel approach based on a particular type of Bayesian network called a Chow-Liu tree to approximate the distribution of attribute values inside each relation of a database. Our results on
the TPC-DS benchmark show that our method is an order of magnitude more precise than other approaches whilst remaining reasonably efficient in terms of time and space.
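As an illustration of the technique the abstract names (not the paper's own implementation), a Chow-Liu tree can be built by computing the empirical mutual information between every pair of attributes and keeping a maximum-weight spanning tree over those pairwise scores. A minimal Python sketch, assuming discrete attribute values stored as tuples:

```python
# Illustrative sketch of Chow-Liu tree construction over the discrete
# attributes of a relation; the paper's actual implementation may differ.
import math
from collections import Counter
from itertools import combinations

def mutual_information(rows, i, j):
    """Empirical mutual information between columns i and j of `rows`."""
    n = len(rows)
    pi = Counter(r[i] for r in rows)
    pj = Counter(r[j] for r in rows)
    pij = Counter((r[i], r[j]) for r in rows)
    mi = 0.0
    for (a, b), c in pij.items():
        mi += (c / n) * math.log((c / n) / ((pi[a] / n) * (pj[b] / n)))
    return mi

def chow_liu_tree(rows, n_cols):
    """Maximum spanning tree over pairwise mutual information (Kruskal)."""
    edges = sorted(
        ((mutual_information(rows, i, j), i, j)
         for i, j in combinations(range(n_cols), 2)),
        reverse=True)
    parent = list(range(n_cols))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:  # keep edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

The resulting tree keeps only the strongest pairwise dependencies, so the joint distribution can be approximated as a product of one parent-conditional per attribute rather than assuming full independence.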
Optimization is a crucial step in the development of algorithms, where different levels of optimization can be applied depending on the purpose. The network Dijkstra algorithm was therefore chosen, compiling and executing it at several GCC optimization levels and measuring its execution time and its numbers of cycles and instructions. The present work also discusses how the front-end and middle-end analyses are performed in GCC.
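The algorithm being benchmarked above is classical Dijkstra shortest paths; the study compiles a native implementation with GCC's standard optimization levels (-O0 through -O3). As a language-neutral sketch of the algorithm itself (not the study's benchmarked code), a priority-queue formulation in Python:

```python
# Illustrative Dijkstra sketch; the study benchmarks a compiled
# implementation, which this Python version does not reproduce.
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` in a weighted digraph
    given as {node: [(neighbour, weight), ...]}."""
    dist = {source: 0}
    pq = [(0, source)]  # (distance, node) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

With a binary heap this runs in O((V + E) log V); the compiled variant's constant factors are exactly what the different GCC optimization levels change.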
In this practical we measured and quantified some of the properties of different rocks, such as density and mass, using the necessary equipment and comparing the rocks with one another to understand the behaviour of the samples we collected. After finishing the experiments with the different rocks, we compared our results with those of another team and observed that both sets of results were similar, which indicates that each type of rock has characteristics that distinguish it from the others, such as its porosity or density, independently of its mass or volume.
Electricity theft is an economic problem for the electricity company because of the unbilled revenue from consumers who commit such actions. In a regulated scenario the company must operate within the rules of a regulatory agency (ANEEL in Brazil), and the loss of revenue can compromise compliance with regulatory targets and business efficiency. The objective of this article is to analyze how energy theft affects the economics of the regulated company, consumers, and society as a whole. Through the economic model Tarot (Optimized Tariff) it was possible to analyze the regulated electricity market in a concise and comprehensive manner using simulations, to discover at which points the company operates optimally, and from this to determine the economic indicators.
How to conceal objects from electromagnetic radiation has been a hot research topic. Radar is an object-detection system that uses radio waves to determine the range, angle, or velocity of objects. A radar transmits radio waves or microwaves that reflect off any object in their path; the receiving radar, typically part of the same system as the transmitter, receives and processes these reflected waves to determine properties of the object. Different organizations are working on hiding objects from radar in outer space, so that a confidential object could be carried through space without being detected by adversaries. This calls for the necessity of devising new methods to conceal an object electromagnetically.
We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for a hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempt provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than in the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
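To make the quantity in the abstract above concrete: a Bayes factor is the ratio of the marginal likelihoods of the data under two hypotheses. A minimal worked example (a textbook binomial test, not the paper's actual analysis) compares H0: θ = 0.5 against H1: θ ~ Uniform(0, 1), for which both marginal likelihoods are available in closed form:

```python
# Illustrative Bayes factor for k successes in n Bernoulli trials;
# this is a standard textbook example, not the paper's computation.
from math import comb

def bf01_binomial(k, n):
    """BF01 for H0: theta = 0.5 vs. H1: theta ~ Uniform(0, 1).

    p(data | H0) = C(n, k) * 0.5**n
    p(data | H1) = integral of C(n, k) * t**k * (1-t)**(n-k) dt = 1 / (n + 1)
    """
    return comb(n, k) * 0.5**n * (n + 1)
```

For instance, 50 successes in 100 trials gives BF01 ≈ 8: evidence favouring the null, but below the "strong" threshold of 10 used in the abstract, which illustrates how typical sample sizes often leave the evidence inconclusive in either direction.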