Chapter 1 Introduction

All assumptions are violated, but some are more than others

A comparison of apples and oranges occurs when two items or groups of items that cannot practically be compared are compared (Wikipedia). The way we measure things can have a big impact on the outcome of that measurement. For example, you might say, “I saw 5 robins walking down the road”, while I might say, “I only saw one robin while sitting on my porch”. Who saw more robins? Looking only at the numeric results, you saw more robins than I did. But this seems like an apples-to-oranges comparison.

To compare apples to apples, we need to agree on a comparable measurement scheme, or at least figure out how effort affects the observations.

Effort in our example can depend on, for example, the area of the physical space searched and the amount of time spent searching. The outcome might be further affected by weather, time of year, time of day, location, and the experience and skill level of the observer.

All these factors can affect the observed count, which brings us to the definition of a point count: a trained observer records all the birds seen and heard from a point count station for a set period of time within a defined distance radius.

Point count duration and distance have a profound effect on the counts, as shown in Figure 1.1: a 10-min unlimited-distance count is roughly 300% higher than a 3-min, 50-m count (averaged across 54 species of boreal songbirds; Matsuoka et al. 2014).
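To build intuition for why duration and distance matter, here is a minimal sketch (not the analysis behind Figure 1.1) in which availability grows with count duration and the surveyed area grows with count radius. The cue rate, density, and the effective radius standing in for the “unlimited” distance are all assumed values for illustration.

```python
import math

# Under a simple cue-rate model, the probability that a bird is available
# (gives at least one cue) during a count of duration t minutes is
# p(t) = 1 - exp(-phi * t). All parameter values below are assumptions.
phi = 0.25      # cues per minute (assumed)
density = 1.0   # birds per hectare (assumed)

def expected_count(t_min, radius_m):
    """Expected count = density * surveyed area * availability."""
    area_ha = math.pi * radius_m ** 2 / 10_000  # circle area in hectares
    return density * area_ha * (1 - math.exp(-phi * t_min))

short_near = expected_count(3, 50)   # 3-min, 50-m count
long_far = expected_count(10, 100)   # 10-min count, "unlimited" distance
                                     # approximated by a 100-m effective radius
print(round(long_far / short_near, 2))
```

Even this toy model shows that longer, wider counts yield several times more birds; the exact ratio depends entirely on the assumed cue rate and effective radius, which is exactly why effort must be standardized or modelled.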

Figure 1.1: Effects of duration and distance on mean counts, from (Matsuoka et al. 2014).

Point counts are commonly used to answer questions like:

  • How many? (Abundance, density, population size)
  • Is this location part of the range? (0/1)
  • How is abundance changing in space? (Distribution)
  • How is abundance changing in time? (Trend)
  • What is the effect of a treatment on abundance?

1.1 Design-based approaches

Standards and recommendations can maximize efficiency in the numbers of birds and species counted and minimize extraneous variability in the counts.

But programs started to deviate from the standards: “For example, only 3% of 196,000 point counts conducted during the period 1992–2011 across Alaska and Canada followed the standards recommended for the count period and count radius” (Matsuoka et al. 2014). Figure 1.2 shows how point count protocols vary across the boreal region of North America.

Figure 1.2: Survey methodology variation (colors) among contributed projects in the Boreal Avian Modelling (BAM) database, from (Barker et al. 2015).

Exercise

In what regard can protocols differ?

What might drive protocol variation among projects?

Why have we abandoned following protocols?

1.2 Model-based approaches

Detection probabilities might vary even with fixed effort (we’ll cover this more later), and programs might have their own goals and constraints (access, training, etc.). These constraints can make it almost impossible, and potentially costly, to set up very specific standards.

Labour-intensive methods for unmarked populations have come to the forefront, and the computing power of personal computers opened the door for model-based approaches that can accommodate more variation given enough information in the observed data. These methods often rely on ancillary information and some sort of replication.

Some of the commonly used model-based approaches are:

  • distance sampling,
  • removal sampling,
  • multiple-visit (N-mixture type) methods,
  • multiple-observer methods.

Models come with assumptions, such as:

  • population is closed during multiple visits,
  • observers are independent,
  • all individuals emit cues with identical rates,
  • spatial distribution of individuals is uniform,
  • etc.
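To make the closure assumption concrete, here is a minimal sketch, assuming a fixed number of individuals N and a constant detection probability p (both values made up for illustration), of what repeated-visit counts look like when the population is closed:

```python
import random

random.seed(42)  # for reproducibility

# Closure: the same N individuals are present at every visit, and each is
# detected independently with probability p. N and p are assumed values.
N, p, visits = 10, 0.6, 5

counts = [sum(random.random() < p for _ in range(N)) for _ in range(visits)]
print(counts)  # counts typically vary among visits even though N is fixed
```

The visit-to-visit variation here comes from detection alone; multiple-visit methods exploit exactly this kind of replication to separate abundance from detectability.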

Although assumptions are everywhere, we are really good at ignoring and violating them.

Exercise

Can you mention some assumptions from everyday life?

Can you explain why we neglect/violate assumptions in these situations?

Assumptions are violated because we seek simplicity. The main question we have to ask is: does it matter in practice if we violate the assumptions?

1.3 Our approach

In this book and course, we will critically evaluate common assumptions made when analyzing point count data using the following approach:

  1. we will introduce a concept,
  2. understand how we can infer it from data,
  3. then we recreate the situation in silico,
  4. and see how the outcome changes as we make different assumptions.

It is guaranteed that we will violate every assumption we make. To get away with it, we need to understand how much is too much, and whether it has an impact in practice. If there is a practical consequence, we will look at ways to minimize those effects, so that we can safely ignore the assumption.
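As a preview of the in-silico workflow above, here is a minimal sketch: we simulate counts under a known truth, analyze them with an estimator that assumes a fixed detection probability, and measure what happens when that assumption is wrong. All parameter values (true N, the two detection probabilities, the number of replicate surveys) are assumptions for illustration.

```python
import random
import statistics

random.seed(1)

N_TRUE = 100     # true number of individuals at a point (assumed)
SURVEYS = 1000   # replicate simulated surveys
P_ASSUMED = 0.5  # detection probability the naive estimator assumes

def simulate(p):
    """One survey: each of N_TRUE individuals is detected with probability p."""
    return sum(random.random() < p for _ in range(N_TRUE))

# Scenario A: the assumption holds (true detection probability is 0.5).
counts_a = [simulate(0.5) for _ in range(SURVEYS)]
# Scenario B: the assumption is violated (true p is only 0.35).
counts_b = [simulate(0.35) for _ in range(SURVEYS)]

# Naive estimator: divide the mean count by the assumed detection probability.
est_a = statistics.mean(counts_a) / P_ASSUMED
est_b = statistics.mean(counts_b) / P_ASSUMED
print(round(est_a), round(est_b))  # estimate B is biased low by about 30%
```

When the assumption holds, the estimator recovers the truth; when it is violated, the estimate is off by roughly the ratio of the true to the assumed detection probability, and the simulation tells us exactly how much that matters.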

References

Barker, Nicole K. S., Patricia C. Fontaine, Steve G. Cumming, Diana Stralberg, Alana Westwood, Erin M. Bayne, Péter Sólymos, Fiona K. A. Schmiegelow, Samantha J. Song, and D. J. Rugg. 2015. “Ecological Monitoring Through Harmonizing Existing Data: Lessons from the Boreal Avian Modelling Project.” Wildlife Society Bulletin 39: 480–87.

Matsuoka, S. M., C. L. Mahon, C. M. Handel, Péter Sólymos, E. M. Bayne, P. C. Fontaine, and C. J. Ralph. 2014. “Reviving Common Standards in Point-Count Surveys for Broad Inference Across Studies.” Condor 116: 599–608. https://doi.org/10.1650/CONDOR-14-108.1.