We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors, a quantity that can be used to express comparative evidence for a hypothesis as well as for the null hypothesis, for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempt provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than in the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by the overestimation of effect sizes (or of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
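The abstract's Bayes factors are computed with carefully chosen priors; as a rough illustration of the quantity itself, the BIC approximation (Wagenmakers, 2007) gives a quick estimate of BF10 for a two-group comparison. This is a minimal sketch, not the authors' method, and the data in the test are invented:

```python
import math

def bic(rss, n, k):
    # BIC for a Gaussian model with k free mean parameters
    # (constant terms shared by both models are dropped).
    return n * math.log(rss / n) + k * math.log(n)

def bf10_bic(group_a, group_b):
    """Approximate Bayes factor BF10 (alternative over null) for a
    two-sample mean comparison via the BIC approximation:
    BF10 ~ exp((BIC_null - BIC_alt) / 2)."""
    data = list(group_a) + list(group_b)
    n = len(data)
    grand = sum(data) / n
    rss0 = sum((x - grand) ** 2 for x in data)          # null: one common mean
    ma = sum(group_a) / len(group_a)
    mb = sum(group_b) / len(group_b)
    rss1 = (sum((x - ma) ** 2 for x in group_a)
            + sum((x - mb) ** 2 for x in group_b))      # alt: two group means
    return math.exp((bic(rss0, n, 1) - bic(rss1, n, 2)) / 2)
```

A BF10 above 1 favors the alternative; the abstract's "strong evidence" threshold corresponds to a Bayes factor above 10.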
We all have a good reason to learn a new language: discovering our roots, a passion for travel, academic purposes, pure interest, and so on. However, most of us find it hard to become conversationally fluent in a new language using traditional learning resources such as textbooks and online tutorials. In this paper we propose a novel approach to learning a new language. We aim to develop an intelligent browser extension, LanGauger, that will help users learn foreign languages. This application will allow users to look up words while they are browsing, by highlighting the text to be learned. The application will then provide a translation of the word, its pronunciation, and its usage context in sentences. In addition, this intelligent tutor will remember which words the user has seen and quiz them on these words at appropriate times. Besides testing the user's recall, this feature encourages users to think about and use the language frequently.
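The abstract does not specify how "appropriate times" for quizzing are chosen; one common approach is spaced repetition. The sketch below is a hypothetical Leitner-style scheduler (the `Card` class, field names, and intervals are all assumptions, not part of the proposed system):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Card:
    word: str
    translation: str
    interval_days: float = 1.0
    due: datetime = field(default_factory=datetime.now)

def review(card, correct, now=None):
    """Update a card after a quiz: double the review interval on a
    correct answer, reset it to one day on a miss."""
    now = now or datetime.now()
    card.interval_days = card.interval_days * 2 if correct else 1.0
    card.due = now + timedelta(days=card.interval_days)
    return card
```

Words answered correctly are thus seen less and less often, while missed words return the next day.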
Optimization is a crucial step in the development of algorithms, where, depending on the purpose, different levels of optimization can be applied. The network Dijkstra algorithm was therefore chosen to be compiled and executed with several of GCC's optimization levels, measuring its execution time, number of cycles, and number of instructions. The present work also discusses how the front-end and middle-end analyses are performed in GCC.
In this practical exercise we measured and quantified some of the properties of different rocks, such as density and mass, using the necessary equipment and comparing the rocks with one another to understand the behavior of the samples we collected. After finishing the experiments with the different rocks, we compared our results with those of another team and observed that both sets of results were similar, which indicates that each type of rock has characteristics that distinguish it from the others, such as its porosity or density, independently of its mass or volume.
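As an illustration of the kind of calculation such measurements involve (the function and the numeric values below are invented for illustration, not taken from the report), density can be computed from mass and water-displacement volume:

```python
def density(mass_g, vol_initial_ml, vol_final_ml):
    """Density in g/cm^3 from a sample's mass and the water level
    before and after submerging it (1 ml of displacement = 1 cm^3)."""
    volume_cm3 = vol_final_ml - vol_initial_ml
    return mass_g / volume_cm3

# e.g. a 54 g sample raising the water level from 100 ml to 120 ml
# has a density of 54 / 20 = 2.7 g/cm^3
```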
The efficiency of a query execution plan depends on the accuracy of the selectivity estimates given to the query optimiser by the cost model. The cost model makes simplifying assumptions in order to produce said estimates in a timely manner. These assumptions lead to selectivity estimation errors that have dramatic effects on the quality of the resulting query execution plans. A convenient assumption that is ubiquitous among current cost models is that attributes are independent of each other. However, this ignores potential correlations, which can have a strongly negative impact on the accuracy of the cost model. In this paper we attempt to relax the attribute value independence assumption without unreasonably deteriorating the efficiency of the cost model. We propose a novel approach based on a particular type of Bayesian network, the Chow-Liu tree, to approximate the distribution of attribute values inside each relation of a database. Our results on the TPC-DS benchmark show that our method is an order of magnitude more precise than other approaches whilst remaining reasonably efficient in terms of time and space.
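The core Chow-Liu construction is a maximum spanning tree over the pairwise mutual information between attributes. The sketch below is a simplified illustration of that idea on rows of discrete attribute values, not the paper's implementation:

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(rows, i, j):
    """Empirical mutual information (in nats) between attributes i and j."""
    n = len(rows)
    ci = Counter(r[i] for r in rows)
    cj = Counter(r[j] for r in rows)
    cij = Counter((r[i], r[j]) for r in rows)
    return sum((c / n) * math.log((c / n) / ((ci[a] / n) * (cj[b] / n)))
               for (a, b), c in cij.items())

def chow_liu_tree(rows, n_attrs):
    """Edges of the Chow-Liu tree: a maximum spanning tree over
    pairwise mutual information (Kruskal's algorithm with union-find)."""
    edges = sorted(((mutual_information(rows, i, j), i, j)
                    for i, j in combinations(range(n_attrs), 2)),
                   reverse=True)
    parent = list(range(n_attrs))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # keep the edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

Strongly correlated attribute pairs end up adjacent in the tree, so the joint distribution can be factored into one marginal and a chain of pairwise conditionals rather than assuming full independence.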