One-day short course at NACCB congress in Madison, WI, on July 16th, with Peter Solymos and Subhash Lele.
We aim this training course at conservation professionals who need to understand, and feel comfortable with, the modern statistical and computational tools used to address conservation issues. Conservation science needs to be transparent and credible to make an impact and to translate information and knowledge into action.
Communicating scientific methods and results requires a full understanding of their concepts, assumptions, and implications. However, most ecological data used in conservation decision making are inherently noisy, due both to intrinsic stochasticity in nature and to extrinsic factors of the observation process. We are often faced with the need to combine multiple studies across different spatial and temporal resolutions. Natural processes are often hierarchical. Missing data, measurement error, and soft data provided by expert opinion all need to be accommodated during the analysis. Data are often limited (rare species, emerging threats), so small-sample corrections are important for properly quantifying uncertainty.
Hierarchical models are useful in such situations. Fitting these models to data, however, is difficult. Fortunately, advances in statistical theory and software development over the last couple of decades have made the data analysis easier, although still not trivial. In this course, we introduce statistical and computational tools for the analysis of hierarchical models (including tools for small-sample inference) specifically in the context of conservation issues.
We will teach both Bayesian and likelihood-based approaches to these models using freely available software developed by the tutors. By seeing both approaches side by side, participants will be able to go beyond the rhetoric of statistical philosophy and use the tools with a full understanding of their assumptions and implications. This will help ensure that whichever statistical techniques they use, Bayesian or frequentist, they will be able to explain and communicate the results to managers and the general public appropriately.
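To give a flavour of the likelihood-based workflow taught in the course, here is a minimal sketch using the tutors' dclone R package, which fits hierarchical models by data cloning via JAGS. The Poisson--lognormal model and all variable names below are illustrative assumptions, not the actual course material; running it requires JAGS and the dclone package to be installed.

```r
## a minimal data-cloning sketch (assumed example, not course code)
library(dclone)

## hierarchical model in BUGS syntax: Poisson counts with
## a lognormal random effect on the log scale
model <- function() {
    for (i in 1:n) {
        Y[i] ~ dpois(lambda[i])
        log(lambda[i]) <- beta0 + eps[i]
        eps[i] ~ dnorm(0, tau)   # latent site-level noise
    }
    beta0 ~ dnorm(0, 0.001)      # vague priors
    tau ~ dgamma(0.01, 0.01)
}

## simulated counts standing in for field data
set.seed(1)
dat <- list(Y = rpois(50, 5), n = 50)

## dc.fit clones the data (here 1, 2, then 4 copies);
## "n" is multiplied by the number of clones rather than repeated
fit <- dc.fit(dat, params = c("beta0", "tau"), model = model,
              n.clones = c(1, 2, 4), multiply = "n")
summary(fit)
```

As the number of clones grows, the posterior concentrates on the maximum likelihood estimate and the scaled posterior variance yields frequentist standard errors, which is the core idea behind the data cloning approach.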
Check out previous courses at DataCloning.org.
Congress registration is now closed.
The congress program is out (see here).
Course notes are now on the course website: http://datacloning.org/courses/2016/madison/.
It all started with this paper in Methods in Ecology and Evolution, where we looked at the detectability of many species. We wanted to use life-history traits to validate our results, but we had to cut that part from the manuscript, leaving a leftover with some neat patterns but without much focus. It took a few years, and the most positive peer-review experience ever, and the paper is now in early view in Ecography. This post is a quick summary of the goodies stuffed inside the lhreg R package, which makes the whole analysis reproducible and provides some functions for similar PGLMM (phylogenetic generalized linear mixed) models.