Information available before unblinding about the success of confirmatory clinical trials is highly uncertain. Estimates of expected future power that use this information for sample size adjustment at a given interim point need to reflect this uncertainty, and estimates of future power at later interim points need to track the evolution of the trial. We employ sequential models to describe this evolution. We show that current techniques using point estimates of auxiliary parameters for estimating expected power: (i) fail to describe the range of likely power obtained after the anticipated data are observed, (ii) fail to adjust to different kinds of thresholds, and (iii) fail to adjust to the changing patient population. Our algorithms address each of these shortcomings. We show that the uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing the resulting posterior distribution to estimate power. We devise MCMC-based algorithms to implement sample size adjustments after the first interim point. Bayesian models are designed to implement these adjustments in settings where both hard and soft thresholds for distinguishing the presence of treatment effects are present. Sequential MCMC-based algorithms are devised to implement accurate sample size adjustments for multiple interim points. We illustrate the suggested algorithms on a depression trial.
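The contrast between a point-estimate plug-in and a posterior-averaged ("expected") power can be sketched in a few lines. This is a minimal illustration, not the abstract's actual MCMC algorithm: the posterior for the treatment effect is assumed normal with hypothetical interim values, and a two-arm z-test approximation stands in for the trial's real analysis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def expected_power(post_mean, post_sd, n_per_arm, sigma=1.0, alpha=0.025, draws=10_000):
    """Average conditional power over posterior draws of the effect size,
    instead of plugging in a single interim point estimate."""
    delta = rng.normal(post_mean, post_sd, size=draws)   # posterior draws of the effect
    se = sigma * np.sqrt(2.0 / n_per_arm)                # SE of the two-arm mean difference
    z_crit = norm.ppf(1 - alpha)                         # one-sided critical value
    power = 1 - norm.cdf(z_crit - delta / se)            # power at each drawn effect
    return power.mean()

# Hypothetical interim posterior: effect 0.3 with posterior SD 0.15
print(expected_power(0.3, 0.15, n_per_arm=100))
```

Because power is a concave-then-convex function of the effect, averaging over the posterior typically gives a different (often less optimistic) answer than the plug-in estimate, which is the gap the abstract's approach is designed to expose.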
Image reconstruction is one of the most demanding problems in medical imaging because of its large data movement and non-trivial data dependencies. In the past, these problems were tackled with high-performance hardware such as FPGAs and GPGPUs, which reflects the investments required in such machines for real-time reconstruction in clinical computed tomography (CT) applications. Medical imaging systems are employing high-performance computing (HPC) technology to meet their time constraints. This paper presents different optimizations of volume reconstruction and implements them on commodity hardware, namely an x86-based multicore system: an Intel Xeon X5365 multicore processor. We apply different levels of parallelization, analyse each of them, and report the results relative to a serial implementation. The objective of this paper is to understand the constraints of volume reconstruction on multicore architectures and to optimize for them while preserving the quality of the reconstructed image.
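One level of parallelization mentioned above, slice-level decomposition, can be sketched generically: each z-slice of the volume is independent, so slices map directly onto a worker pool. The sketch below is illustrative only (a toy per-slice kernel, not the paper's reconstruction code); in a native x86 implementation the same loop would be a parallel `for` over cores, e.g. with OpenMP.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def reconstruct_slice(sinogram):
    """Stand-in per-slice kernel: a real FBP pipeline would ramp-filter each
    projection and backproject it across the slice. Here we just average the
    projections to keep the example short while preserving the data flow."""
    n_angles, _ = sinogram.shape
    return sinogram.sum(axis=0) / n_angles

def reconstruct_volume(sinograms, workers=4):
    """Slice-level parallelism: iterate over the z-axis and hand each slice's
    sinogram to a pool worker (NumPy kernels release the GIL, so threads can
    overlap; a process pool or OpenMP threads would be used natively)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return np.stack(list(pool.map(reconstruct_slice, sinograms)))

# 8 slices, each with 16 projection angles and 32 detector bins
vol = reconstruct_volume(np.ones((8, 16, 32)))
print(vol.shape)  # (8, 32)
```

The key property exploited is that slices share no data dependencies, so speedup is limited mainly by memory bandwidth rather than synchronization, which matches the data-movement constraint the abstract highlights.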
The parameters of a fuel-optimal mission from Kerbin via Eve to an outer planet with an arbitrary orbit radius were calculated. The $\Delta v$ gain of performing the Eve flyby instead of a direct Hohmann transfer to the outer planet is on the order of 50 m/s. Eccentricity and inclination of the planets' orbits were not considered.
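The direct-transfer baseline against which the flyby gain is measured is the standard two-impulse Hohmann transfer between coplanar circular orbits. A minimal sketch of that baseline follows; the Kerbol-system constants are assumed values for illustration (gravitational parameter and Kerbin's orbit radius), not figures taken from the abstract.

```python
import math

# Assumed Kerbol-system constants (illustrative; verify against game data)
MU_KERBOL = 1.1723328e18      # Sun's gravitational parameter, m^3/s^2
R_KERBIN = 13_599_840_256.0   # Kerbin's circular orbit radius, m

def hohmann_dv(mu, r1, r2):
    """Total heliocentric delta-v of a two-impulse Hohmann transfer
    between coplanar circular orbits of radii r1 and r2."""
    dv1 = math.sqrt(mu / r1) * abs(math.sqrt(2 * r2 / (r1 + r2)) - 1)  # departure burn
    dv2 = math.sqrt(mu / r2) * abs(1 - math.sqrt(2 * r1 / (r1 + r2)))  # arrival burn
    return dv1 + dv2

# Example: transfer to a hypothetical outer planet at 1.5x Kerbin's radius
print(hohmann_dv(MU_KERBOL, R_KERBIN, 1.5 * R_KERBIN))
```

An Eve flyby replaces part of the departure burn with a gravity assist; the quoted ~50 m/s gain is the difference between this baseline and the flyby trajectory's total $\Delta v$.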
Although individual ants have extremely basic intelligence and are completely incapable of surviving on their own, colonies of ants can develop remarkably sophisticated and biologically successful behavior. This paper discusses a set of experiments that attempt to simulate one of these behaviors: the ability of ants to deposit pheromones as a means of communication. These experiments involved a variety of different environments, and tested two variants of the genetic algorithm NEAT: the standard offline version, and its online counterpart rtNEAT. Since the experimental environment did not seem to offer any benefit to continuous learning, we had expected NEAT and rtNEAT to have roughly similar learning curves. However, our results directly contradict this hypothesis, showing much more successful learning with rtNEAT than with standard NEAT.
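The pheromone mechanism being simulated can be sketched abstractly: ants deposit a scalar onto a shared grid, the field decays each timestep, and neighboring ants read the field as a sensory input. This is a minimal illustration under assumed dynamics (exponential evaporation, one deposit per step), not the paper's actual simulator; the class and parameter names are hypothetical.

```python
import numpy as np

class PheromoneGrid:
    """Minimal shared pheromone layer: ants write to their cell,
    the whole field evaporates, and controllers read it back."""
    def __init__(self, shape, evaporation=0.1):
        self.field = np.zeros(shape)
        self.evaporation = evaporation

    def deposit(self, pos, amount=1.0):
        self.field[pos] += amount        # an ant marks its location

    def step(self):
        self.field *= 1.0 - self.evaporation  # exponential decay per tick

    def sense(self, pos):
        """The value a NEAT-evolved controller would take as a network input."""
        return self.field[pos]

g = PheromoneGrid((5, 5))
g.deposit((2, 2))
g.step()
print(g.sense((2, 2)))  # 0.9
```

Evaporation is what turns the grid into a communication channel with a time horizon: trails persist long enough to guide other ants but fade once they stop being reinforced.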