Browsing by FOR 2008 "010406 Stochastic Analysis and Modelling"
Now showing 1 - 7 of 7
Journal Article · Publication
Adjusting age at first breeding of albatrosses and petrels for emigration and study duration
The age at first breeding is an important demographic parameter in determining maximum growth rate, population size and generation time, and is a key parameter in calculating the potential biological removal of birds. Albatrosses and petrels do not begin breeding for many years, with some first breeding in their teens. This means that even long-term studies of birds banded as chicks may not last long enough to observe the entire process of recruitment to breeding. Estimates based only on observed data (the naive estimate) may be biased by imperfect observation, emigration, and study duration. Instead, modelling approaches should be used to estimate the mean age at first breeding, but these must be used carefully. Here, we show the large negative bias that may be caused by limited study duration and emigration when the naive estimate is used. Capture-mark-recapture methods combined with additional assumptions about emigration can alleviate the bias, provided that an appropriate model is used. Using these methods, we analysed data collected between 1991 and 2006 on 1,246 Gibson's albatrosses ('Diomedea gibsoni') banded as chicks (mostly banded from 1995 onwards) and 1,258 birds banded as adults. While 402 birds banded as chicks were observed returning to the study area, only 42 were observed breeding. With limited data, model-based approaches must be used, and assumptions about recruitment to breeding play an additional role in the estimate of the age at first breeding. In particular, the function chosen for recruitment to breeding for older age classes cannot be compared to data. Three recruitment functions are compared to show the large sensitivity of the estimated mean age at first breeding to the assumed functional form.
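The negative bias of the naive estimate under limited study duration can be illustrated with a small simulation. All population values below (true age distribution, banding schedule, study length) are invented for illustration; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: true age at first breeding is 8 + Poisson(4),
# so the true mean is 12 years.
n_chicks = 10_000
true_ages = 8 + rng.poisson(4.0, size=n_chicks)

# Chicks are banded uniformly over the first 5 years of a 16-year study,
# so a bird is only observable breeding if it recruits before the study ends.
banding_year = rng.integers(0, 5, size=n_chicks)
study_duration = 16
observable = true_ages <= (study_duration - banding_year)

naive_estimate = true_ages[observable].mean()
print(f"true mean age at first breeding: {true_ages.mean():.2f}")
print(f"naive estimate (observed only):  {naive_estimate:.2f}")
```

Because only birds that recruit young are observed before the study ends, the naive mean is pulled well below the true mean, which is the bias the abstract describes.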
Journal Article · Publication · Open Access
Bayesian Parametric Bootstrap for Models with Intractable Likelihoods
(International Society for Bayesian Analysis, 2019-03); Drovandi, Christopher C.; Pettitt, Anthony N.
In this paper it is demonstrated how the Bayesian parametric bootstrap can be adapted to models with intractable likelihoods. The approach is most appealing when the computationally efficient semi-automatic approximate Bayesian computation (ABC) summary statistics are selected. The parametric bootstrap approximation is used to form a proposal distribution in ABC algorithms to improve the computational efficiency. The new approach is demonstrated through the sequential Monte Carlo and the ABC importance and rejection sampling algorithms. We found efficiency gains in simulation studies of the univariate g-and-k quantile distribution and a toggle switch model in dynamic bionetworks, and in a stochastic model describing expanding melanoma cell colonies.
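The semi-automatic summaries and bootstrap proposal are specific to the paper, but the basic ABC rejection step they accelerate can be sketched on a toy problem. The Gaussian model, uniform prior, and tolerance below are illustrative assumptions, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data from an assumed true mu = 3.0 (illustrative only).
n = 200
y_obs = rng.normal(3.0, 1.0, size=n)
s_obs = y_obs.mean()  # summary statistic

# ABC rejection: draw mu from the prior, simulate the summary under the
# model, and keep draws whose simulated summary is within a tolerance
# of the observed summary.
n_draws, tol = 50_000, 0.05
mu_prior = rng.uniform(-10, 10, size=n_draws)
s_sim = rng.normal(mu_prior, 1.0 / np.sqrt(n))  # sampling dist. of the mean
accepted = mu_prior[np.abs(s_sim - s_obs) < tol]

print(f"accepted draws:          {accepted.size}")
print(f"posterior mean estimate: {accepted.mean():.2f}")
```

The acceptance rate here is tiny, which is exactly the inefficiency the paper's bootstrap proposal distribution is designed to reduce: proposing parameters from a distribution already concentrated near the data wastes far fewer simulations.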
Journal Article · Publication
A comparison of individual patient analysis versus pooled study meta-analysis methodologies of exercise training trials in heart failure patients
'Background': A fixed effects meta-analysis of ten exercise training trials in heart failure patients was conducted. The aim of this current work was to compare different approaches to meta-analysis using the same dataset from the previous work on ten exercise training trials in heart failure patients.
'Methods': The following meta-analysis techniques were used to analyse the data and compare the effects of exercise training on BNP, NT-pro-BNP and peak VO2 before and after exercise training:
(1) Trial-level (traditional) meta-analysis: i) follow-up (post-exercise training intervention) outcome only; ii) baseline to follow-up difference.
(2) Patient-level meta-analysis by post-stage ANCOVA: i) naive model (does not take trial level into account); ii) single stage; iii) two stage.
(3) Post outcome only: i) single stage; ii) pre-post outcome difference, single stage.
'Results': The individual patient data (IPD) analyses produced smaller effect sizes and 95% confidence intervals compared to conventional meta-analysis. The advantage of the one-stage model is that it allows sub-group analyses, while the two-stage model is considered more robust but limited for sub-analyses.
'Conclusions': Our recommendation is to use one-stage or two-stage ANCOVA analysis; the former allows sub-group analysis, while the latter is considered to be more technically robust.
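A minimal sketch of the two-stage patient-level approach, using invented trial data and a simple mean-difference effect in place of the ANCOVA models: stage one estimates a treatment effect within each trial, stage two pools the estimates with fixed-effects inverse-variance weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical IPD from 5 trials where treatment shifts the outcome by -2.0.
true_effect, trials = -2.0, []
for _ in range(5):
    n = int(rng.integers(40, 120))
    treat = rng.integers(0, 2, size=n)
    y = rng.normal(0.0, 3.0, size=n) + true_effect * treat
    trials.append((y, treat))

# Stage 1: treatment effect and its variance within each trial.
effects, variances = [], []
for y, treat in trials:
    diff = y[treat == 1].mean() - y[treat == 0].mean()
    var = (y[treat == 1].var(ddof=1) / (treat == 1).sum()
           + y[treat == 0].var(ddof=1) / (treat == 0).sum())
    effects.append(diff)
    variances.append(var)

# Stage 2: fixed-effects pooling by inverse-variance weighting.
w = 1.0 / np.array(variances)
pooled = (w * np.array(effects)).sum() / w.sum()
print(f"pooled treatment effect: {pooled:.2f}")
```

Because stage one reduces each trial to a single estimate and variance, sub-group analyses within trials are awkward, which is the trade-off against the one-stage model noted in the conclusions.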
Conference Publication · Publication
Discriminating "Signal" and "Noise" in Computer-Generated Data
(International Group for the Psychology of Mathematics Education (IGPME), 2010); Pratt, David
This paper presents a case study of a group of students (age 14-15) as they use a computer-based domain of stochastic abstraction to begin to view spread or noise as dispersion from the signal. The results show that carefully designed computer tools, in which probability distribution is used as a generator of data, can facilitate the discrimination of signal and noise. This computational affordance of distribution is seen as related to classical statistical methods that aim to separate main effect from random error. In this study, we have seen how signal and noise can be recognised by students as an aspect of distribution. Students' discussion of computer-generated data and their sketches of the distribution express the idea that more variation is centred close to the signal, and less variation is located further away from it.
Dataset · Publication
Impacts of Climate Change and Land Use on Water Resources and River Dynamics Using Hydrologic Modelling, Remote Sensing and GIS: Towards Sustainable Development
The aerial photographs, taken on the 6th of February 1975 at a scale of 1:50 000, were obtained from the Survey of Kenya and were used to generate my original data.
Journal Article · Publication · Open Access
Inference for Reaction Networks Using the Linear Noise Approximation
(Wiley-Blackwell Publishing Ltd, 2014); Fearnhead, Paul; Giagos, Vasileios; Sherlock, Chris
We consider inference for the reaction rates in discretely observed networks such as those found in models for systems biology, population ecology, and epidemics. Most such networks are neither slow enough nor small enough for inference via the true state-dependent Markov jump process to be feasible. Typically, inference is conducted by approximating the dynamics through an ordinary differential equation (ODE) or a stochastic differential equation (SDE). The former ignores the stochasticity in the true model and can lead to inaccurate inferences. The latter is more accurate but is harder to implement as the transition density of the SDE model is generally unknown. The linear noise approximation (LNA) arises from a first-order Taylor expansion of the approximating SDE about a deterministic solution and can be viewed as a compromise between the ODE and SDE models. It is a stochastic model, but discrete time transition probabilities for the LNA are available through the solution of a series of ordinary differential equations. We describe how a restarting LNA can be efficiently used to perform inference for a general class of reaction networks; evaluate the accuracy of such an approach; and show how and when this approach is either statistically or computationally more efficient than ODE or SDE methods. We apply the LNA to analyze Google Flu Trends data from the North and South Islands of New Zealand, and are able to obtain more accurate short-term forecasts of new flu cases than another recently proposed method, although at a greater computational cost.
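The LNA pairs a deterministic ODE for the mean with an ODE for the fluctuation variance. A minimal sketch for the simplest reaction network, an immigration-death process, where the rate constants are assumed values chosen for illustration:

```python
import numpy as np

# Immigration-death process:
#   0 -> X at rate lam,   X -> 0 at rate mu * x.
# The LNA tracks the deterministic mean m(t) and the fluctuation
# variance v(t) via a pair of coupled ODEs, integrated here by Euler.
lam, mu = 10.0, 0.5
m, v = 0.0, 0.0          # start with an empty system
dt, T = 0.001, 30.0

for _ in range(int(T / dt)):
    dm = lam - mu * m                  # macroscopic rate equation
    dv = -2.0 * mu * v + lam + mu * m  # LNA variance equation
    m += dm * dt
    v += dv * dt

print(f"LNA stationary mean:     {m:.2f}")
print(f"LNA stationary variance: {v:.2f}")
```

For this linear network the LNA is exact: both the stationary mean and variance converge to lam/mu = 20, the Poisson stationary distribution of the true jump process. For nonlinear networks the LNA is the approximation that the paper restarts at each observation time.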
Conference Publication · Publication
Student's Causal Explanations for Distribution
(Institut National de Recherche Pédagogique [French Institute of Education] (INRP), 2009); Pratt, Dave
This paper presents a case study of two students aged 14-15, as they attempt to make sense of distribution, adopting a range of causal meanings for the variation observed in the animated computer display and in the graphs generated by the simulation. The students' activity is analysed through dimensions of complex causality. The results indicate support for our conjecture that carefully designed computer simulations can offer new ways for harnessing causality to facilitate students' meaning-making for variation in distributions of data. In order to bridge the deterministic and the stochastic, the students transfer agency to specially designed active representations of distributional parameters, such as average and speed.