As part of the detection process, a skilled observer counts individual birds at a count station. New individuals are assigned to time and distance categories, and the type of behavior is also registered. During this process, auditory cues travel the distance between the bird and the observer. As the power of the sound fades, the chance of detection also decreases. If the detection process is based on visual detections, vegetation can block the line of sight, etc. In this chapter, we scrutinize how this detection process contributes to the factor \(C\).
library(bSims) # simulations
library(detect) # multinomial models
library(Distance) # distance sampling
## Loading required package: mrds
## This is mrds 2.2.4
## Built: R 4.0.2; ; 2020-12-02 11:32:45 UTC; unix
##
## Attaching package: 'Distance'
## The following object is masked from 'package:mrds':
##
## create.bins
load("data/josm-data.rda") # JOSM data
source("src/functions.R") # some useful stuff
The distance function (\(g(d)\)) describes the probability of detecting an individual given the distance (\(d\)) between the observer and the individual. The detection itself is often triggered by visual or auditory cues, and thus depends on the individuals being available for detection (and of course being present in the survey area).
Distance functions share some characteristics: they take values between 0 and 1, they equal 1 at zero distance (\(g(0)=1\)), and they decrease monotonically with distance.
Here are some common distance functions and the rationale for their use (i.e. mechanisms leading to such distance shapes):
d <- seq(0, 2, 0.01)
plot(d, exp(-d/0.8), type="l", col=4, ylim=c(0,1),
xlab="Distance (100 m)", ylab="P(detection)", main="Negative Exponential")
plot(d, exp(-(d/0.8)^2), type="l", col=4, ylim=c(0,1),
xlab="Distance (100 m)", ylab="P(detection)", main="Half-Normal")
plot(d, 1-exp(-(d/0.8)^-4), type="l", col=4, ylim=c(0,1),
xlab="Distance (100 m)", ylab="P(detection)", main="Hazard rate")
Try different values of \(b\) to explore the different shapes of the Hazard rate function. Write your own code (plot(d, exp(-(d/<tau>)^<b>), type="l", ylim=c(0,1))), or run bSims::run_app("distfunH").
d <- seq(0, 2, 0.01)
tau <- 1
b <- 1
plot(d, exp(-(d/tau)^b), type="l", ylim=c(0,1))
bSims::run_app("distfunH")
As we saw before, the Shiny app can be used to copy the distance function code that we can use later in our simulations.
We will apply this newfound knowledge to our bSims world: the observer is in the middle of the landscape, and each vocalization event is either detected or not, depending on the distance. Units of tau are given in 100 m units, so that the corresponding density estimates will refer to ha as the unit area.
In this example, we want all individuals to be equally available, so we are going to override all behavioral aspects of the simulations with the initial_location argument when calling bsims_animate. We set density and tau high enough to get many detections in this example.
tau <- 2
set.seed(123)
l <- bsims_init()
a <- bsims_populate(l, density=10)
b <- bsims_animate(a, initial_location=TRUE)
(o <- bsims_detect(b, tau=tau))
## bSims detections
## 1 km x 1 km
## stratification: H
## total abundance: 1013
## no events, duration: 10 min
## detected: 128 heard
plot(o)
The distribution of the observed distances is a product of detectability and the distribution of the individuals with respect to the point where the observer is located. For point counts, area increases linearly with radial distance, implying a triangular distribution with respect to the point (\(h(d)=2 \pi d / A = 2 \pi d / (\pi r_{max}^2) = 2 d / r_{max}^2\), where \(A\) is the area of a circular survey with truncation distance \(r_{max}\)). The product \(g(d) h(d)\) gives the density function of the observed distances.
g <- function(d, tau, b=2, hazard=FALSE) {
if (hazard)
1-exp(-(d/tau)^-b) else exp(-(d/tau)^b)
}
h <- function(d, rmax) {
2*d/rmax^2
}
rmax <- 4
d <- seq(0, rmax, 0.01)
plot(d, g(d, tau), type="l", col=4, ylim=c(0,1),
xlab="d", ylab="g(d)", main="Prob. of detection")
plot(d, h(d, rmax), type="l", col=4,
xlab="d", ylab="h(d)", main="PDF of distances")
plot(d, g(d, tau) * h(d, rmax), type="l", col=4,
xlab="d", ylab="g(d) h(d)", main="Density of observed distances")
The object da contains the distances to all the nests in our bSims object; we use this to display the distribution of the available distances:
da <- sqrt(rowSums(a$nests[,c("x", "y")]^2))
hist(da[da <= rmax], freq=FALSE, xlim=c(0, rmax),
xlab="Available distances (d <= r_max)", main="")
curve(2*x/rmax^2, add=TRUE, col=2)
The get_detections function returns a data frame with the detected events (in our case just the nest locations): $d is the distance.
head(dt <- get_detections(o))
## x y t v d i j
## 9 -2.13683299 -1.4639392 0 1 2.5902072 9 9
## 15 -0.22748175 0.3605372 0 1 0.4263039 15 15
## 19 -0.02865112 -0.3528623 0 1 0.3540235 19 19
## 45 -0.71086142 -1.5128051 0 1 1.6714973 45 45
## 47 -2.78899963 0.3554139 0 1 2.8115544 47 47
## 58 -3.08015181 -1.1482640 0 1 3.2872246 58 58
The following piece of code plots the probability density of the observed distances within the truncation distance \(r_{max}\); for this, we need to standardize the \(g(d) h(d)\) function by its integral over \((0, r_{max})\):
f <- function(d, tau, b=2, hazard=FALSE, rmax=1)
g(d, tau, b, hazard) * h(d, rmax)
tot <- integrate(f, lower=0, upper=rmax, tau=tau, rmax=rmax)$value
hist(dt$d[dt$d <= rmax], freq=FALSE, xlim=c(0, rmax),
xlab="Observed distances (r <= rmax)", main="")
curve(f(x, tau=tau, rmax=rmax) / tot, add=TRUE, col=2)
In the case of the Half-Normal, we can linearize the relationship by taking the log of the distance function: \(\log(g(d)) = \log(e^{-(d/\tau)^2}) = -(d/\tau)^2 = x \frac{1}{\tau^2} = 0 + x \beta\). Consequently, we can use GLM to fit a model with \(x = -d^2\) as the predictor and no intercept, and estimate \(\hat{\beta}\) and \(\hat{\tau}=\sqrt{1/\hat{\beta}}\).
For this method to work, we would need to know the unobserved distances as well as the observed ones, which makes the approach of little practical use, because the locations of unobserved individuals are unknown. But we can at least check it against our bSims data:
dat <- data.frame(
distance=da,
x=-da^2,
detected=ifelse(rownames(o$nests) %in% dt$i, 1, 0))
summary(dat)
## distance x
## Min. :0.2264 Min. :-49.33230
## 1st Qu.:2.8758 1st Qu.:-23.73953
## Median :3.9653 Median :-15.72398
## Mean :3.8274 Mean :-16.68895
## 3rd Qu.:4.8723 3rd Qu.: -8.27033
## Max. :7.0237 Max. : -0.05125
## detected
## Min. :0.0000
## 1st Qu.:0.0000
## Median :0.0000
## Mean :0.1264
## 3rd Qu.:0.0000
## Max. :1.0000
mod <- glm(detected ~ x - 1, data=dat, family=binomial(link="log"))
c(true=tau, estimate=sqrt(1/coef(mod)))
## true estimate.x
## 2.000000 2.034018
curve(exp(-(x/sqrt(1/coef(mod)))^2),
xlim=c(0,max(dat$distance)), ylim=c(0,1),
xlab="Distance (100 m)", ylab="P(detection)")
curve(exp(-(x/tau)^2), lty=2, add=TRUE)
rug(dat$distance[dat$detected == 0], side=1, col=4)
rug(dat$distance[dat$detected == 1], side=3, col=2)
legend("topright", bty="n", lty=c(2,1),
legend=c("True", "Estimated"))
The Distance package offers various tools to fit models to observed distance data (see the package's tutorials for an introduction). The following script fits the Half-Normal (key = "hn") without adjustments (adjustment=NULL) to the observed distance data from a truncated point transect. It estimates \(\sigma = \sqrt{\tau}\):
dd <- ds(dt$d, truncation = rmax, transect="point",
key = "hn", adjustment=NULL)
## Fitting half-normal key function
## Key only model: not constraining for monotonicity.
## AIC= 315.502
## No survey area information supplied, only estimating detection function.
c(true=tau, estimate=exp(dd$ddf$par)^2)
## true estimate
## 2.000000 2.175567
To calculate the average probability of detecting individuals within a circle with truncation distance \(r_{max}\), we need to integrate the product of \(g(d)\) and \(h(d)\): \(q(r_{max})=\int_{0}^{r_{max}} g(d) h(d) dd\). This gives the volume of the pie dough cut at \(r_{max}\), compared to the volume of the cookie cutter (\(\pi r_{max}^2\)).
q <- sapply(d[d > 0], function(z)
integrate(f, lower=0, upper=z, tau=tau, rmax=z)$value)
plot(d, c(1, q), type="l", col=4, ylim=c(0,1),
xlab=expression(r[max]), ylab=expression(q(r[max])),
main="Average prob. of detection")
For the Half-Normal detection function, the analytical solution for the average probability is \(q(r_{max}) = \pi \tau^2 [1-e^{-r_{max}^2/\tau^2}] / (\pi r_{max}^2)\), where the denominator is a normalizing constant representing the volume of a cylinder of perfect detectability.
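We can double check this numerically. The following minimal sketch reuses the f() function and the tau and rmax values defined above, and compares the closed form against the numerical integral of \(g(d) h(d)\):
## closed-form average detection probability for the Half-Normal
q_hn <- function(rmax, tau)
  pi * tau^2 * (1 - exp(-rmax^2/tau^2)) / (pi * rmax^2)
## numerical integral of g(d) h(d) over (0, rmax)
q_num <- integrate(f, lower=0, upper=rmax, tau=tau, rmax=rmax)$value
c(closed_form=q_hn(rmax, tau), numerical=q_num)
The two values should agree up to numerical integration error.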
To visualize this, here is the pie analogy for \(\tau=2\) and \(r_{max}=2\):
tau <- 2
rmax <- 2
w <- 0.1
m <- 2
plot(0, type="n", xlim=m*c(-rmax, rmax), ylim=c(-w, 1+w),
axes=FALSE, ann=FALSE)
yh <- g(rmax, tau=tau)
lines(seq(-rmax, rmax, rmax/100),
g(abs(seq(-rmax, rmax, rmax/100)), tau=tau))
draw_ellipse(0, yh, rmax, w, lty=2)
lines(-c(rmax, rmax), c(0, yh))
lines(c(rmax, rmax), c(0, yh))
draw_ellipse(0, 0, rmax, w)
draw_ellipse(0, 1, rmax, w, border=4)
lines(-c(rmax, rmax), c(yh, 1), col=4)
lines(c(rmax, rmax), c(yh, 1), col=4)
The cumulative density function for the Half-Normal distribution (\(\pi(r) = 1-e^{-(r/\tau)^2}\)) is used to calculate cell probabilities for binned distance data (the normalizing constant is the total volume of the integral, \(\pi \tau^2\), instead of \(\pi r_{max}^2\)). It captures the proportion of the observed distances relative to the whole volume of the observed distance density. In the pie analogy, this is the dough volume inside the cookie cutter, compared to the dough volume inside and outside of the cutter (which happens to be \(\pi \tau^2\) for the Half-Normal):
plot(0, type="n", xlim=m*c(-rmax, rmax), ylim=c(-w, 1+w),
axes=FALSE, ann=FALSE)
yh <- g(rmax, tau=tau)
lines(seq(-m*rmax, m*rmax, rmax/(m*100)),
g(seq(-m*rmax, m*rmax, rmax/(m*100)), tau=tau),
col=2)
lines(seq(-rmax, rmax, rmax/100),
g(abs(seq(-rmax, rmax, rmax/100)), tau=tau))
draw_ellipse(0, yh, rmax, w, lty=2)
lines(-c(rmax, rmax), c(0, yh))
lines(c(rmax, rmax), c(0, yh))
draw_ellipse(0, 0, rmax, w)
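As a quick check of that normalizing constant (a minimal sketch, reusing tau from above), the volume under the Half-Normal \(g()\) over the whole plane equals \(\pi \tau^2\), and \(\pi(r)\) is the fraction of that volume within radius \(r\):
## volume under g() over the whole plane vs. pi * tau^2
c(integral=integrate(function(d) exp(-(d/tau)^2) * 2*pi*d,
    lower=0, upper=Inf)$value,
  pi_tau_sq=pi * tau^2)
## pi(r) as a volume ratio vs. the closed form, e.g. at r = 1
r <- 1
c(ratio=integrate(function(d) exp(-(d/tau)^2) * 2*pi*d,
    lower=0, upper=r)$value / (pi * tau^2),
  closed_form=1 - exp(-(r/tau)^2))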
In the case of the Half-Normal distance function, \(\tau\) is the effective detection radius (EDR). The effective detection radius is the distance from the observer where the number of individuals missed within the EDR (the volume of 'air' in the cookie cutter above the dough) equals the number of individuals detected outside of the EDR (the dough volume outside the cookie cutter); EDR is the radius \(r_e\) where \(q(r_e)=\pi(r_e)\):
plot(0, type="n", xlim=m*c(-rmax, rmax), ylim=c(-w, 1+w),
axes=FALSE, ann=FALSE)
yh <- g(rmax, tau=tau)
lines(seq(-m*rmax, m*rmax, rmax/(m*100)),
g(seq(-m*rmax, m*rmax, rmax/(m*100)), tau=tau),
col=2)
lines(seq(-rmax, rmax, rmax/100),
g(abs(seq(-rmax, rmax, rmax/100)), tau=tau))
draw_ellipse(0, yh, rmax, w, lty=2)
lines(-c(rmax, rmax), c(0, yh))
lines(c(rmax, rmax), c(0, yh))
draw_ellipse(0, 0, rmax, w)
draw_ellipse(0, 1, rmax, w, border=4)
lines(-c(rmax, rmax), c(yh, 1), col=4)
lines(c(rmax, rmax), c(yh, 1), col=4)
What would be a computational algorithm to calculate EDR for any distance function and truncation distance?
Try to explain how the code below is working.
Why are EDRs different for different truncation distances?
find_edr <- function(dist_fun, ..., rmax=Inf) {
## integral function
f <- function(d, ...)
dist_fun(d, ...) * 2*d*pi
## volume under dist_fun
V <- integrate(f, lower=0, upper=rmax, ...)$value
u <- function(edr)
V - edr^2*pi
uniroot(u, c(0, 1000))$root
}
find_edr(g, tau=1)
## [1] 0.9999999
find_edr(g, tau=10)
## [1] 10
find_edr(g, tau=1, b=1)
## [1] 1.414208
find_edr(g, tau=1, b=4, hazard=TRUE)
## [1] 1.331326
find_edr(g, tau=1, rmax=1)
## [1] 0.7950578
vtau <- seq(0.001, 5, 0.1)
vrmax <- seq(0.001, 2, 0.025)
vals <- expand.grid(tau=vtau, rmax=vrmax)
edr <- apply(vals, 1, function(z) find_edr(g, tau=z[1], rmax=z[2]))
edr <- matrix(edr, length(vtau), length(vrmax))
image(vtau, vrmax, edr, col=hcl.colors(100, "Lajolla"))
contour(vtau, vrmax, edr, add=TRUE, lwd=0.5)
The function \(\pi(r)\) increases monotonically from 0 to 1:
curve(1-exp(-(x/tau)^2), xlim=c(0, 5), ylim=c(0,1), col=4,
ylab=expression(pi(d)), xlab=expression(d),
main="Cumulative density")
Here are the binned distances for the bSims data, with expected proportions based on the \(\pi()\) cell probabilities (differences within the distance bins). The nice thing about this cumulative density formulation is that it applies equally to truncated and unlimited (not truncated) distance data, and the radius end point of a bin (stored in br) can be infinite:
br <- c(1, 2, 3, 4, 5, Inf)
dat$bin <- cut(da, c(0, br), include.lowest = TRUE)
(counts <- with(dat, table(bin, detected)))
## detected
## bin 0 1
## [0,1] 7 27
## (1,2] 42 53
## (2,3] 111 32
## (3,4] 227 13
## (4,5] 287 2
## (5,Inf] 211 1
pi_br <- 1-exp(-(br/tau)^2)
barplot(counts[,"1"]/sum(counts[,"1"]), space=0, col=NA,
xlab="Distance bins (100 m)", ylab="Proportions",
ylim=c(0, max(diff(c(0, pi_br)))))
lines(seq_len(length(br))-0.5, diff(c(0, pi_br)), col=3)
We can use the bsims_transcribe function for the same effect, and estimate \(\hat{\tau}\) based on the binned data:
(tr <- bsims_transcribe(o, rint=br))
## bSims transcript
## 1 km x 1 km
## stratification: H
## total abundance: 1013
## no events, duration: 10 min
## detected: 128 heard
## 1st event detected by breaks:
## [0, 10 min]
## [0, 100, 200, 300, 400, 500, Inf m]
tr$removal
## 0-10min
## 0-100m 27
## 100-200m 53
## 200-300m 32
## 300-400m 13
## 400-500m 2
## 500+m 1
Y <- matrix(drop(tr$removal), nrow=1)
D <- matrix(br, nrow=1)
tauhat <- exp(cmulti.fit(Y, D, type="dis")$coef)
c(true=tau, estimate=tauhat)
## true estimate
## 2.000000 2.067061
Here are cumulative counts and the true and expected cumulative cell probabilities:
plot(stepfun(1:6, c(0, cumsum(counts[,"1"])/sum(counts[,"1"]))),
do.points=FALSE, main="Binned CDF",
ylab="Cumulative probability",
xlab="Bin radius end point (100 m)")
curve(1-exp(-(x/tau)^2), col=2, add=TRUE)
curve(1-exp(-(x/tauhat)^2), col=4, add=TRUE)
legend("topleft", bty="n", lty=1, col=c(2, 4, 1),
legend=c("True", "Estimated", "Empirical"))
We have ignored availability so far when working with bSims, but we can't continue like that with real data. What this means is that \(g(0) < 1\): detecting an individual at 0 distance from the observer depends on an event (visual or auditory) that triggers the detection. For example, if a perfectly camouflaged bird sits in silence, detection might be difficult. Movement or a vocalization can, however, reveal the individual and its location.
The phi and tau values are at the high end of plausible values for songbirds. The density value Den is exaggerated, but this way we will have enough counts to prove our points using bSims:
phi <- 0.5
tau <- 2
Den <- 10
Now we go through the layers of our bSims world:
set.seed(4321)
l <- bsims_init()
a <- bsims_populate(l, density=Den)
b <- bsims_animate(a, vocal_rate=phi)
o <- bsims_detect(b, tau=tau)
Transcription is the process of turning the detections into a table showing new individuals detected by time intervals and distance bands, as defined by the tint and rint arguments, respectively (ignore any errors in transcription for now).
tint <- c(1, 2, 3, 4, 5)
rint <- c(0.5, 1, 1.5, 2) # truncated at 200 m
(tr <- bsims_transcribe(o, tint=tint, rint=rint))
## bSims transcript
## 1 km x 1 km
## stratification: H
## total abundance: 951
## duration: 10 min
## detected: 254 heard
## 1st event detected by breaks:
## [0, 1, 2, 3, 4, 5 min]
## [0, 50, 100, 150, 200 m]
(rem <- tr$removal) # binned new individuals
## 0-1min 1-2min 2-3min 3-4min 4-5min
## 0-50m 3 2 1 0 0
## 50-100m 4 5 2 0 0
## 100-150m 5 9 7 3 3
## 150-200m 9 2 4 3 1
colSums(rem)
## 0-1min 1-2min 2-3min 3-4min 4-5min
## 21 18 14 6 4
rowSums(rem)
## 0-50m 50-100m 100-150m 150-200m
## 6 11 27 19
The plot method displays the detections presented as part of the tr object.
plot(tr)
The detection process and the transcription (following a prescribed protocol) are inseparable in the field. However, recordings made in the field can be processed in a number of different ways. Separating these processes gives us the ability to make comparisons on the exact same set of detections.
Similarly to the get_events() function, we can visualize the accumulation of detections with time or with distance using the get_detections() function:
tr_dets <- get_detections(tr)
plot(tr_dets, type="time", ylim=c(0, nrow(tr_dets)))
curve((1-exp(-x*phi))*nrow(tr_dets), col=2, add=TRUE)
plot(tr_dets, type="distance", ylim=c(0, nrow(tr_dets)))
curve((1-exp(-(x/tau)^2))*nrow(tr_dets), col=2, add=TRUE)
We now fit the removal model to the data pooled by time intervals. p is the cumulative probability of availability for the total duration:
fitp <- cmulti.fit(matrix(colSums(rem), 1), matrix(tint, 1), type="rem")
phihat <- exp(fitp$coef)
c(true=phi, estimate=exp(fitp$coef))
## true estimate
## 0.5000000 0.3883786
(p <- 1-exp(-max(tint)*phihat))
## [1] 0.8565678
The distance sampling model uses the distance-binned counts and a Half-Normal detection function; q is the cumulative probability of perceptibility within the area defined by the truncation distance rmax:
fitq <- cmulti.fit(matrix(rowSums(rem), 1), matrix(rint, 1), type="dis")
tauhat <- exp(fitq$coef)
c(true=tau, estimate=tauhat)
## true estimate
## 2.000000 2.214674
rmax <- max(rint)
(q <- (tauhat^2/rmax^2) * (1-exp(-(rmax/tauhat)^2)))
## [1] 0.6837211
The known area A, p, and q together make up the correction factor, which is used to estimate density based on \(\hat{D}=Y/(A \hat{p}\hat{q})\):
(A <- pi * rmax^2)
## [1] 12.56637
Dhat <- sum(rem) / (A * p * q)
c(true=Den, estimate=Dhat)
## true estimate
## 10.000000 8.560319
We now change the distance bins to include the area outside of the previous rmax distance, making the counts unlimited distance counts:
rint <- c(0.5, 1, 1.5, 2, Inf) # unlimited
(tr <- bsims_transcribe(o, tint=tint, rint=rint))
## bSims transcript
## 1 km x 1 km
## stratification: H
## total abundance: 951
## duration: 10 min
## detected: 254 heard
## 1st event detected by breaks:
## [0, 1, 2, 3, 4, 5 min]
## [0, 50, 100, 150, 200, Inf m]
(rem <- tr$removal) # binned new individuals
## 0-1min 1-2min 2-3min 3-4min 4-5min
## 0-50m 3 2 1 0 0
## 50-100m 4 5 2 0 0
## 100-150m 5 9 7 3 3
## 150-200m 9 2 4 3 1
## 200+m 18 6 6 6 2
colSums(rem)
## 0-1min 1-2min 2-3min 3-4min 4-5min
## 39 24 20 12 6
rowSums(rem)
## 0-50m 50-100m 100-150m 150-200m 200+m
## 6 11 27 19 38
The removal model is basically the same; the only difference is that the counts can be higher due to detections over a larger area, and thus potentially more individuals being detected:
fitp <- cmulti.fit(matrix(colSums(rem), 1), matrix(tint, 1), type="rem")
phihat <- exp(fitp$coef)
c(true=phi, estimate=phihat)
## true estimate
## 0.5000000 0.4140603
(p <- 1-exp(-max(tint)*phihat))
## [1] 0.8738523
The distance sampling model also takes the extended data set:
fitq <- cmulti.fit(matrix(rowSums(rem), 1), matrix(rint, 1), type="dis")
tauhat <- exp(fitq$coef)
c(true=tau, estimate=tauhat)
## true estimate
## 2.000000 2.034329
The problem is that our truncation distance is infinite, thus the area that we are sampling is also infinite. This does not make much sense, and it is not at all helpful in estimating density (anything divided by infinity is 0).
We can use an arbitrarily large but finite distance (400 or 500 m). But let's think before taking a shortcut like that.
We can use the EDR (tauhat for the Half-Normal) and calculate the estimated effective area sampled (Ahat; \(\hat{A}=\pi \hat{\tau}^2\)). We can also set q to 1, because the logic behind EDR is that the corresponding cylinder volume equals the volume under the distance function; in other words, it is an area that would, on average, give the same count under perfect detection. Thus, we can estimate density using \(\hat{D}=Y/(\hat{A} \hat{p} 1)\):
(Ahat <- pi * tauhat^2)
## [1] 13.00146
q <- 1
Dhat <- sum(rem) / (Ahat * p * q)
c(true=Den, estimate=Dhat)
## true estimate
## 10.000000 8.889783
Remember that we have used a single location so far. We set the density unreasonably high to have enough counts for a reasonable estimate. We can independently replicate the simulation for multiple landscapes and analyze the results to do justice to bSims under idealized conditions:
phi <- 0.5
tau <- 1
Den <- 1
tint <- c(3, 5, 10)
rint <- c(0.5, 1, 1.5, Inf)
b <- bsims_all(
density=Den,
vocal_rate=phi,
tau=tau,
tint=tint,
rint=rint)
B <- 200
set.seed(123)
sim <- b$replicate(B, cl=2)
res <- lapply(sim, get_table)
Ddur <- matrix(tint, B, length(tint), byrow=TRUE)
Ydur1 <- t(sapply(res, function(z) colSums(z)))
Ydur2 <- t(sapply(res, function(z) colSums(z[-nrow(z),])))
colSums(Ydur1) / sum(Ydur1)
colSums(Ydur2) / sum(Ydur2)
fitp1 <- cmulti(Ydur1 | Ddur ~ 1, type="rem")
fitp2 <- cmulti(Ydur2 | Ddur ~ 1, type="rem")
phihat1 <- unname(exp(coef(fitp1)))
phihat2 <- unname(exp(coef(fitp2)))
Ddis1 <- matrix(rint, B, length(rint), byrow=TRUE)
Ddis2 <- matrix(rint[-length(rint)], B, length(rint)-1, byrow=TRUE)
Ydis1 <- t(sapply(res, function(z) rowSums(z)))
Ydis2 <- t(sapply(res, function(z) rowSums(z)[-length(rint)]))
colSums(Ydis1) / sum(Ydis1)
colSums(Ydis2) / sum(Ydis2)
fitq1 <- cmulti(Ydis1 | Ddis1 ~ 1, type="dis")
fitq2 <- cmulti(Ydis2 | Ddis2 ~ 1, type="dis")
tauhat1 <- unname(exp(fitq1$coef))
tauhat2 <- unname(exp(fitq2$coef))
## unlimited correction
Apq1 <- pi * tauhat1^2 * (1-exp(-max(tint)*phihat1)) * 1
rmax <- max(rint[is.finite(rint)])
## truncated correction
Apq2 <- pi * rmax^2 *
(1-exp(-max(tint)*phihat2)) *
(tauhat2^2/rmax^2) * (1-exp(-(rmax/tauhat2)^2))
round(rbind(
phi=c(true=phi, unlimited=phihat1, truncated=phihat2),
tau=c(true=tau, unlimited=tauhat1, truncated=tauhat2),
D=c(Den, unlimited=mean(rowSums(Ydis1))/Apq1,
truncated=mean(rowSums(Ydis2))/Apq2)), 4)
## true unlimited truncated
## phi 0.5 0.5455 0.5366
## tau 1.0 0.9930 1.0087
## D 1.0 0.9823 0.9637
If time permits, try different settings and time/distance intervals.
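For example, here is one variation to try (a sketch only; the interval values are arbitrary choices, and the rest of the workflow is the same as above):
## coarser protocol: two time intervals, two distance bands
b_alt <- bsims_all(
  density=Den,
  vocal_rate=phi,
  tau=tau,
  tint=c(5, 10),
  rint=c(0.5, 1))
sim_alt <- b_alt$replicate(B, cl=2)
res_alt <- lapply(sim_alt, get_table)
head(res_alt[[1]])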
Quickly organize the JOSM data:
## predictors
x <- josm$surveys
x$FOR <- x$Decid + x$Conif+ x$ConifWet # forest
x$AHF <- x$Agr + x$UrbInd + x$Roads # 'alienating' human footprint
x$WET <- x$OpenWet + x$ConifWet + x$Water # wet + water
cn <- c("Open", "Water", "Agr", "UrbInd", "SoftLin", "Roads", "Decid",
"OpenWet", "Conif", "ConifWet")
x$HAB <- droplevels(find_max(x[,cn])$index) # drop empty levels
levels(x$HAB)[levels(x$HAB) %in%
c("OpenWet", "Water", "Open", "Agr", "UrbInd", "Roads")] <- "Open"
levels(x$HAB)[levels(x$HAB) %in%
c("Conif", "ConifWet")] <- "Conif"
x$OBS <- as.factor(x$ObserverID)
## time intervals
yall_dur <- Xtab(~ SiteID + Dur + SpeciesID,
josm$counts[josm$counts$DetectType1 != "V",])
yall_dur <- yall_dur[sapply(yall_dur, function(z) sum(rowSums(z) > 0)) > 100]
## distance intervals
yall_dis <- Xtab(~ SiteID + Dis + SpeciesID,
josm$counts[josm$counts$DetectType1 != "V",])
yall_dis <- yall_dis[sapply(yall_dis, function(z) sum(rowSums(z) > 0)) > 100]
Pick our most abundant species again, and organize the data:
spp <- "TEWA"
Ydur <- as.matrix(yall_dur[[spp]])
Ddur <- matrix(c(3, 5, 10), nrow(Ydur), 3, byrow=TRUE,
dimnames=dimnames(Ydur))
stopifnot(all(rownames(x) == rownames(Ydur)))
Ydis <- as.matrix(yall_dis[[spp]])
Ddis <- matrix(c(0.5, 1, Inf), nrow(Ydis), 3, byrow=TRUE,
dimnames=dimnames(Ydis))
stopifnot(all(rownames(x) == rownames(Ydis)))
colSums(Ydur)
## 0-3min 3-5min 5-10min
## 4262 558 764
colSums(Ydis)
## 0-50m 50-100m 100+m
## 2703 2470 411
We pick a removal model with DAY as a covariate, and calculate \(p(t)\):
Mdur <- cmulti(Ydur | Ddur ~ DAY, x, type="rem")
summary(Mdur)
##
## Call:
## cmulti(formula = Ydur | Ddur ~ DAY, data = x,
## type = "rem")
##
## Removal Sampling (homogeneous singing rate)
## Conditional Maximum Likelihood estimates
##
## Coefficients:
## Estimate Std. Error z value
## log.phi_(Intercept) 0.07841 0.26149 0.300
## log.phi_DAY -2.09097 0.58661 -3.565
## Pr(>|z|)
## log.phi_(Intercept) 0.764274
## log.phi_DAY 0.000365 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Log-likelihood: -3199
## BIC = 6413
phi <- drop(exp(model.matrix(Mdur) %*% coef(Mdur)))
summary(phi)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.3769 0.4015 0.4203 0.4235 0.4476 0.4767
p <- 1-exp(-10*phi)
summary(p)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.9769 0.9819 0.9850 0.9850 0.9886 0.9915
We fit the intercept only distance sampling model next:
Mdis0 <- cmulti(Ydis | Ddis ~ 1, x, type="dis")
summary(Mdis0)
##
## Call:
## cmulti(formula = Ydis | Ddis ~ 1, data = x, type = "dis")
##
## Distance Sampling (half-normal, circular area)
## Conditional Maximum Likelihood estimates
##
## Coefficients:
## Estimate Std. Error z value
## log.tau_(Intercept) -0.482723 0.007526 -64.14
## Pr(>|z|)
## log.tau_(Intercept) <0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Log-likelihood: -3823
## BIC = 7654
Let’s try a few covariates:
forest cover (FOR): sound attenuation increases with forest cover;
habitat (HAB) with 3 levels: open, deciduous forest, and coniferous forest (based on dominant land cover), because broad leaves and needles affect sound attenuation.
Mdis1 <- cmulti(Ydis | Ddis ~ FOR, x, type="dis")
Mdis2 <- cmulti(Ydis | Ddis ~ HAB, x, type="dis")
We can look at AIC to find the best supported model:
aic <- AIC(Mdis0, Mdis1, Mdis2)
aic$delta_AIC <- aic$AIC - min(aic$AIC)
aic[order(aic$AIC),]
## df AIC delta_AIC
## Mdis2 3 7638.842 0.000000
## Mdis1 2 7644.517 5.675140
## Mdis0 1 7648.398 9.556476
Mdis <- get(rownames(aic)[aic$delta_AIC == 0])
summary(Mdis)
##
## Call:
## cmulti(formula = Ydis | Ddis ~ HAB, data = x,
## type = "dis")
##
## Distance Sampling (half-normal, circular area)
## Conditional Maximum Likelihood estimates
##
## Coefficients:
## Estimate Std. Error z value
## log.tau_(Intercept) -0.44975 0.03207 -14.023
## log.tau_HABDecid -0.05826 0.03363 -1.732
## log.tau_HABConif -0.00300 0.03426 -0.088
## Pr(>|z|)
## log.tau_(Intercept) <0.0000000000000002 ***
## log.tau_HABDecid 0.0832 .
## log.tau_HABConif 0.9302
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Log-likelihood: -3816
## BIC = 7657
After finding the best model, we predict tau:
tau <- drop(exp(model.matrix(Mdis) %*% coef(Mdis)))
boxplot(tau ~ HAB, x)
Finally, we calculate the correction factor for unlimited distances, and predict mean density:
Apq <- pi * tau^2 * p * 1
x$ytot <- rowSums(Ydur)
mean(x$ytot / Apq)
## [1] 1.039206
Alternatively, we can use the log of the correction as an offset in log-linear models. This offset is called the QPAD offset:
off <- log(Apq)
m <- glm(ytot ~ 1, data=x, offset=off, family=poisson)
exp(coef(m))
## (Intercept)
## 1.025481
Try distance sampling and density estimation for another species.
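Here is a minimal sketch of one way to start, using intercept-only models for simplicity (the species code "OVEN" is only an assumption; use any name from names(yall_dur)):
spp2 <- "OVEN"
Ydur_spp2 <- as.matrix(yall_dur[[spp2]])
Ydis_spp2 <- as.matrix(yall_dis[[spp2]])
## availability from the removal model
Mdur_spp2 <- cmulti(Ydur_spp2 | Ddur ~ 1, x, type="rem")
p_spp2 <- 1 - exp(-10 * exp(coef(Mdur_spp2)))
## perceptibility (EDR) from the distance sampling model
Mdis_spp2 <- cmulti(Ydis_spp2 | Ddis ~ 1, x, type="dis")
tau_spp2 <- exp(coef(Mdis_spp2))
## unlimited-distance correction and mean density
Apq_spp2 <- pi * tau_spp2^2 * p_spp2
mean(rowSums(Ydur_spp2) / Apq_spp2)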
Fit multiple GLMs with QPAD offsets and covariates affecting density, interpret the results, and visualize the responses.
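A sketch of how one might start (the covariate choices Decid and ConifWet are just examples, assumed to be proportions between 0 and 1; ytot and off are from above):
m_decid <- glm(ytot ~ Decid + ConifWet, data=x, offset=off, family=poisson)
summary(m_decid)
## predicted density along the deciduous gradient
## (offset excluded, so predictions are densities per ha)
nd <- data.frame(Decid=seq(0, 1, 0.01), ConifWet=mean(x$ConifWet))
cf <- coef(m_decid)
D_pred <- exp(cf[1] + cf[2]*nd$Decid + cf[3]*nd$ConifWet)
plot(nd$Decid, D_pred, type="l", xlab="Decid", ylab="Predicted density")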
Use OBS as a predictor for tau and look at the predicted EDRs. What is the practical issue with using observer as a predictor?
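A fitting sketch, assuming the TEWA objects Ydis, Ddis, and x from above (expect one coefficient per observer level):
Mdis_obs <- cmulti(Ydis | Ddis ~ OBS, x, type="dis")
tau_obs <- drop(exp(model.matrix(Mdis_obs) %*% coef(Mdis_obs)))
## EDR in meters for each observer (tau is in 100 m units)
boxplot(100 * tau_obs ~ x$OBS, xlab="Observer", ylab="EDR (m)")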
Use bSims::run_app("bsimsH")
to study the effects of the following factors on EDR (\(\hat{\tau}\)):
We need to talk about conditioning (1st event vs. 1st detection).
Use bSims::run_app("bsimsHER")
to see how habitat related numeric and behavior and detection differences can bias estimates.
Next: Review QPAD, adding ARUs, looking into roadside data, etc.
Estimate \(\tau\) from distance sampling using species traits
library(lhreg)
## Loading required package: DEoptim
## Loading required package: parallel
##
## DEoptim package
## Differential Evolution algorithm in R
## Authors: D. Ardia, K. Mullen, B. Peterson and J. Ulrich
set.seed(1)
yall_dis <- Xtab(~ SiteID + Dis + SpeciesID,
josm$counts[josm$counts$DetectType1 != "V",])
## common species (with life history data)
SPP <- intersect(names(yall_dis), lhreg_data$spp)
SPP <- sample(SPP)[1:20]
n <- 200
i <- sample(nrow(yall_dis[[1]]), n)
yall_dis <- lapply(yall_dis[SPP], function(z) as.matrix(z[i,]))
yall_dis <- yall_dis[sapply(yall_dis, sum)>0]
SPP <- names(yall_dis)
## number of rows
Ystack <- do.call(rbind, yall_dis)
Dstack <- matrix(c(0.5, 1, Inf), nrow(Ystack), 3, byrow=TRUE)
Xstack <- droplevels(lhreg_data[match(rep(SPP, each=n), lhreg_data$spp),])
dim(Ystack)
## [1] 3400 3
## Constant
Mmulti1 <- cmulti(Ystack | Dstack ~ spp - 1, Xstack, type="dis")
summary(Mmulti1)
##
## Call:
## cmulti(formula = Ystack | Dstack ~ spp - 1, data = Xstack,
## type = "dis")
##
## Distance Sampling (half-normal, circular area)
## Conditional Maximum Likelihood estimates
##
## Coefficients:
## Estimate Std. Error z value
## log.tau_sppALFL -0.4248109 0.0991150 -4.286
## log.tau_sppBRCR 0.0247186 0.2924020 0.085
## log.tau_sppDOWO 0.4694089 2.2991589 0.204
## log.tau_sppEVGR 0.0002117 0.5022579 0.000
## log.tau_sppGCKI 0.3536946 0.5980371 0.591
## log.tau_sppHAWO 0.0853224 0.6686424 0.128
## log.tau_sppLISP -0.4552637 0.1057551 -4.305
## log.tau_sppMOWA 0.1225492 0.6374294 0.192
## log.tau_sppOCWA -0.0566148 0.3859966 -0.147
## log.tau_sppRBNU -0.0851356 0.1746977 -0.487
## log.tau_sppRUBL 0.2177284 1.8075774 0.120
## log.tau_sppRUGR 0.1041798 0.2541780 0.410
## log.tau_sppSWTH -0.3536771 0.0459594 -7.695
## log.tau_sppWAVI 0.1009146 0.4421675 0.228
## log.tau_sppWETA 0.0321651 0.1579049 0.204
## log.tau_sppWTSP -0.0820293 0.0513322 -1.598
## log.tau_sppYRWA -0.6383405 0.0412009 -15.493
## Pr(>|z|)
## log.tau_sppALFL 0.0000181884206102 ***
## log.tau_sppBRCR 0.933
## log.tau_sppDOWO 0.838
## log.tau_sppEVGR 1.000
## log.tau_sppGCKI 0.554
## log.tau_sppHAWO 0.898
## log.tau_sppLISP 0.0000167070385646 ***
## log.tau_sppMOWA 0.848
## log.tau_sppOCWA 0.883
## log.tau_sppRBNU 0.626
## log.tau_sppRUBL 0.904
## log.tau_sppRUGR 0.682
## log.tau_sppSWTH 0.0000000000000141 ***
## log.tau_sppWAVI 0.819
## log.tau_sppWETA 0.839
## log.tau_sppWTSP 0.110
## log.tau_sppYRWA < 0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Log-likelihood: -524.8
## BIC = 1152
Mmulti2 <- cmulti(Ystack | Dstack ~ logmass + MaxFreqkHz + Mig2 + Hab2,
Xstack, type="dis")
summary(Mmulti2)
##
## Call:
## cmulti(formula = Ystack | Dstack ~ logmass + MaxFreqkHz +
## Mig2 + Hab2, data = Xstack, type = "dis")
##
## Distance Sampling (half-normal, circular area)
## Conditional Maximum Likelihood estimates
##
## Coefficients:
## Estimate Std. Error z value
## log.tau_(Intercept) -1.87108 0.29023 -6.447
## log.tau_logmass 0.32789 0.05077 6.458
## log.tau_MaxFreqkHz 0.08490 0.03476 2.443
## log.tau_Mig2W -0.06578 0.11009 -0.598
## log.tau_Hab2Open -0.15913 0.07298 -2.181
## Pr(>|z|)
## log.tau_(Intercept) 0.000000000114 ***
## log.tau_logmass 0.000000000106 ***
## log.tau_MaxFreqkHz 0.0146 *
## log.tau_Mig2W 0.5501
## log.tau_Hab2Open 0.0292 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Log-likelihood: -508.1
## BIC = 1046
AIC(Mmulti1, Mmulti2)
## df AIC
## Mmulti1 17 1083.621
## Mmulti2 5 1026.243
tau1 <- exp(coef(Mmulti1))
names(tau1) <- gsub("log.tau_spp", "", names(tau1))
X <- model.matrix(~ logmass + MaxFreqkHz + Mig2 + Hab2,
lhreg_data[match(names(tau1), lhreg_data$spp),])
tau2 <- exp(X %*% coef(Mmulti2))
res <- data.frame(count=sapply(yall_dis[names(tau1)], sum), tau1=tau1, tau2=tau2)
res$diff <- res$tau1 - res$tau2
res
## count tau1 tau2 diff
## ALFL 36 0.6538934 0.5202360 0.13365744
## BRCR 8 1.0250267 0.5890234 0.43600331
## DOWO 1 1.5990487 0.5984179 1.00063078
## EVGR 5 1.0002117 0.7765534 0.22365830
## GCKI 8 1.4243202 0.6270183 0.79730191
## HAWO 2 1.0890681 0.8929306 0.19613748
## LISP 40 0.6342807 0.5976666 0.03661410
## MOWA 3 1.1303747 0.5487603 0.58161445
## OCWA 6 0.9449580 0.6556621 0.28929585
## RBNU 14 0.9183877 0.6009321 0.31745560
## RUBL 1 1.2432493 1.3173532 -0.07410389
## RUGR 7 1.1097999 1.5842298 -0.47442988
## SWTH 141 0.7021017 0.7874567 -0.08535503
## WAVI 6 1.1061822 0.6359309 0.47025131
## WETA 18 1.0326880 0.6689432 0.36374475
## WTSP 146 0.9212450 0.7685436 0.15270144
## YRWA 187 0.5281682 0.5680660 -0.03989782
plot(diff ~ count, res)
abline(h=0, lty=2)
Think about how you’d do leave-one-out cross-validation.
Think about how you would include HAB effects on \(\tau\) (hint: main effect and interactions).
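For the leave-one-out exercise, here is a minimal sketch (assuming the Ystack, Dstack, Xstack, tau1, and res objects from above; refitting for every species can be slow, and predictions may be unstable for species that are the sole carrier of a trait level):
loo_tau <- sapply(names(tau1), function(s) {
  keep <- Xstack$spp != s
  Yt <- Ystack[keep, , drop=FALSE]
  Dt <- Dstack[keep, , drop=FALSE]
  ## refit the trait model without the focal species
  fit <- cmulti(Yt | Dt ~ logmass + MaxFreqkHz + Mig2 + Hab2,
    Xstack[keep, ], type="dis")
  ## predict tau for the held-out species from its traits
  Xs <- model.matrix(~ logmass + MaxFreqkHz + Mig2 + Hab2,
    Xstack[Xstack$spp == s, ][1, , drop=FALSE])
  drop(exp(Xs %*% coef(fit)))
})
## compare with the trait-based estimates that used all species
plot(res$tau2, loo_tau, xlab="tau2 (all species)", ylab="tau (leave-one-out)")
abline(0, 1, lty=2)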