Bird Conservation Impacts

Correlation vs Causation in Bird Incidents: How to Tell What’s Real


When birds start dying near a wind farm, a backyard feeder, or a construction site, people immediately want to know why. That instinct is good. The problem is that the jump from "birds died near X" to "X killed those birds" happens almost automatically, and it is usually wrong, or at least unproven. Correlation just means two things happened together. Causation means one thing actually produced the other. With bird incidents specifically, those two concepts get tangled constantly, and the consequences range from harmless misconceptions to genuinely bad decisions: removing feeders that weren't causing any harm, ignoring a real chemical exposure, or misreading aviation strike data in a way that leaves a runway more dangerous than it needs to be.

What "correlation vs causation" actually means for bird incidents


Correlation means that when one thing changes, another tends to change with it. Causation means the first thing is directly responsible for producing the second. The logical fallacy at the center of this confusion has a formal name: post hoc ergo propter hoc, Latin for "after this, therefore because of this." A bird dies near a cell tower the morning after it was erected, so the tower must have killed it. Ten sparrows disappear from a feeder within a week of a neighbor starting to use pesticides, so the pesticides must be the cause. These feel like obvious cause-and-effect stories, but without additional evidence they are just narratives built around timing or proximity.

Here is a concrete bird example to make this stick. Suppose bird mortality reports spike near a stretch of coastline every October. You could correlate those deaths with the presence of offshore wind infrastructure in that area. But October is also peak migration season. Coastal geography funnels migrants into collision-prone corridors. Autumn storms increase disorientation. Any one of those factors, or all of them together, could explain the spike with no contribution from turbines whatsoever. The EPA's CADDIS causality framework, which environmental scientists use to evaluate exactly these kinds of ecological associations, is explicit on this point: an empirical association is only the starting point, not the conclusion.

Common misleading patterns in bird deaths, attacks, and collisions

Most false conclusions about bird incidents fall into a handful of repeating patterns. Recognizing them does not mean dismissing every reported association, but it does mean slowing down before acting on one.

  • Timing coincidences: A new feeder goes up, and two weeks later a goldfinch is found dead nearby. The timing feels causal, but bird deaths occur constantly in any habitat. The feeder just gave the observer a reason to be watching.
  • Reporting bias: Incidents near visible human infrastructure get reported far more than identical incidents in remote areas. Wind turbines sit in open fields and are regularly monitored. Cats and building collisions happen in backyards and city blocks with almost no systematic documentation. This skews perception of which factors are most dangerous.
  • Confounding variables: Weather, season, migration timing, habitat quality, and food availability all affect bird mortality rates independently. When any of these shift at the same time as a new factor appears, separating their effects requires deliberate study design, not just observation.
  • Location bias: If you only look for dead birds near wind turbines, you will only find dead birds near wind turbines. Studies that compare carcass density at turbine sites to matched control areas consistently paint a more nuanced picture than site-only counts.
  • Small sample size: Three dead birds is not a dataset. Without knowing how many birds passed through that area, how many typically die there, and what killed them, three carcasses can support almost any narrative.
  • Severity salience: Dramatic incidents (a hawk attack caught on video, a mass mortality event near a facility) get disproportionate attention. The emotional weight of a striking event does not make it more statistically meaningful.

These patterns show up across every bird-related risk topic, from questions about how wind turbines affect bird populations to debates over PCBs, persistent chemicals that become more concentrated at higher trophic levels through biomagnification. The mechanism driving the confusion is the same each time: a visible association fills in for an explanation that would actually require controlled investigation.

How to test causality: what real evidence looks like

You do not need a graduate degree to apply better-than-anecdote thinking. You just need to ask a consistent set of questions every time an association is claimed.

The four evidence tests that actually matter

  1. Controlled comparison: Does mortality (or attack rate, or collision rate) rise meaningfully compared to a similar location or time period without the proposed cause? Without a comparison group, you cannot distinguish signal from background noise. (A minimal numeric sketch of this test follows the list.)
  2. Consistency across time and location: If X causes Y, then Y should appear wherever and whenever X is present at sufficient levels, not just once in one place. One-time associations are hypothesis-generators, not conclusions.
  3. Mechanistic plausibility: Is there a known biological or physical pathway by which X could produce Y? The EPA's CADDIS guidance specifically names this as a credibility-booster: observing low cholinesterase activity in birds exposed to organophosphates, for example, provides mechanistic support for a pesticide-mortality link in a way that proximity alone never could. USGS researchers make the same point, arguing that mechanistic knowledge can support a causal interpretation even when the statistical picture is incomplete.
  4. Ruling out alternatives: Have other plausible explanations been actively investigated and found insufficient, rather than simply ignored? A study that finds elevated mortality near infrastructure but does not account for seasonal migration patterns has not ruled out the most obvious alternative.
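To make the controlled-comparison test concrete, here is a minimal numeric sketch (standard-library Python, hypothetical counts). It conditions on the total carcass count, under which the site's share follows a binomial distribution if both locations truly share one mortality rate; a real monitoring program would also correct for scavenger removal and searcher efficiency.

```python
# A minimal sketch of the controlled-comparison test, not a full study design.
# Conditional on the total carcass count, the site count follows a binomial
# distribution under the null hypothesis that both locations share one rate.
from math import comb

def rate_comparison_pvalue(site_deaths, site_surveys, control_deaths, control_surveys):
    """One-sided p-value: chance of seeing at least this many site deaths
    if the site and the matched control share the same underlying rate."""
    total = site_deaths + control_deaths
    p = site_surveys / (site_surveys + control_surveys)  # null expectation for the site
    # P(X >= site_deaths) for X ~ Binomial(total, p)
    return sum(comb(total, k) * p**k * (1 - p)**(total - k)
               for k in range(site_deaths, total + 1))

# Hypothetical counts: 9 carcasses in 20 surveys at the site, 4 in 20 at the control.
print(round(rate_comparison_pvalue(9, 20, 4, 20), 3))  # 0.133: elevated, not decisive
```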

These criteria come directly from the epidemiological tradition, particularly the framework articulated by Austin Bradford Hill in 1965, which has been adapted for ecological causality assessments ever since. They are practical tools, not academic exercises. If a claim about bird risk cannot pass at least two or three of these tests, treat it as an open question rather than a settled fact.

Practical risk decisions for pet owners and backyard birders

If you keep pet birds or run a backyard feeding station, you are likely to encounter alarming claims regularly: a new seed brand is killing finches, a neighbor's lawn treatment is wiping out sparrows, a feeder design is spreading disease. Here is how to think through each situation before acting.

Checklist for pet bird owners

  • Record the timeline precisely: When did symptoms start? What changed in the bird's environment in the 7 to 14 days before? New foods, cleaning products, air fresheners, fumes from overheated nonstick (Teflon) cookware, scented candles, and paint fumes are all documented causes of avian illness. Proximity to a new product matters far less than whether there is a known biological mechanism by which that product could harm a bird.
  • Contact an avian vet, not a general vet, and bring a history of exposures, not just symptoms. Ask them explicitly whether the presentation is consistent with a known toxicological, infectious, or nutritional cause.
  • Request a necropsy if a bird dies unexpectedly. Tissue analysis can identify specific toxins, pathogens, or nutritional deficiencies, converting an anecdotal correlation into an actual diagnosis.
  • Do not make sweeping changes based on one death. If one bird dies but others in the same environment are healthy, a systemic cause is less likely than an individual vulnerability or a random infectious event.

Checklist for backyard birders and feeder managers

  • Track changes systematically: Keep a simple log of species counts, feeder activity, and any deaths or injuries you observe. Patterns over weeks and months are far more meaningful than a single dramatic day.
  • Report unusual mortality events to your state wildlife agency or the USGS National Wildlife Health Center. They collect carcasses, test for disease, and can actually distinguish correlation from causation in ways that individual observation cannot.
  • Before attributing a feeder-area death to your setup, ask whether the bird could have collided with a nearby window, been caught by a cat, or succumbed to a passing illness. Multiple competing explanations deserve equal consideration.
  • Salmonella and other feeder-associated diseases are real and documented, with consistent mechanistic evidence (contaminated seed, crowding, fecal transmission). If you see multiple birds acting lethargic near a feeder, cleaning the feeder with a 10% bleach solution is a well-supported precaution. Removing it entirely based on one dead bird is probably an overreaction.

Aviation and wildlife hazard contexts: what data to trust

For aviation professionals, the stakes of getting this wrong are concrete and serious. The FAA Wildlife Strike Database is the standard reference in the US, containing more than 300,000 reported strikes since 1990. That sounds like a lot of data, and it is, but it is also self-reported and widely acknowledged to be undercounted, with estimates suggesting only 20 to 30 percent of actual strikes are ever filed. What this means practically is that any single-airport or single-species count in the database should be treated as a floor, not a ceiling.
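A back-of-the-envelope way to apply that floor: divide the reported count by the assumed reporting rate. The count below is hypothetical; only the 20 to 30 percent estimate comes from the paragraph above.

```python
# Hedged sketch: converting a reported strike count into a plausible range
# of actual strikes, using the 20-30% reporting-rate estimate cited above.
reported = 120                      # hypothetical annual count at one airport
for rate in (0.30, 0.20):           # assumed fraction of strikes actually filed
    print(f"if {rate:.0%} of strikes are reported -> ~{reported / rate:.0f} actual")
# 30% -> ~400, 20% -> ~600: the reported 120 is a floor, not the total.
```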

The more important causality question in aviation contexts is usually this: does a particular mitigation actually reduce strike rates, or does it just correlate with lower reported incidents? Habitat modification around airports, such as reducing grass height to discourage foraging birds, has mechanistic plausibility and consistent empirical support. A species that prefers short grass will use the runway environment less when the grass is kept short. Pyrotechnics and other hazing methods show more variable results because their effectiveness depends on species, habituation, and consistency of application. Trusting a mitigation because strike reports dropped after its introduction, without a controlled comparison that rules out seasonal changes in bird activity, is exactly the kind of causal overconfidence that this framework helps avoid.
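The minimal design that separates those two possibilities is a before-after comparison that includes a control site. Here is a sketch of the arithmetic with hypothetical seasonal counts; the "control" is assumed to be a comparable, unmitigated airfield.

```python
# Hedged sketch of a before-after-control-impact (BACI) comparison.
# Counts are hypothetical strikes per season; "control" is a nearby,
# unmitigated airfield with similar traffic and habitat.
before = {"treated": 14, "control": 12}
after  = {"treated": 6,  "control": 11}

naive_drop = before["treated"] - after["treated"]    # 8: what press releases cite
background = before["control"] - after["control"]    # 1: change with no mitigation
baci_effect = naive_drop - background                # 7: drop beyond background drift
print(f"naive drop {naive_drop}, background {background}, BACI effect {baci_effect}")
```

If the control site dropped just as much as the treated site, the mitigation gets no credit.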

When evaluating wildlife hazard assessments for a specific airport or flight path, look for studies that include comparison periods, account for seasonal migration patterns, and specify the species involved. A generic claim that "bird activity is higher near wetlands" is not an actionable hazard assessment. A species-specific analysis showing that Canada goose populations within the 5-mile airport buffer have tripled since habitat changes occurred, with mechanistic reasoning for why that increases ingestion-strike risk during take-off and landing, is something you can actually act on.

A quick evidence quality check for strike and hazard data

  • Raw strike count at a location: tells you how often strikes were reported, not whether local conditions caused them or reporting rates changed.
  • Species identified at strike site: tells you which species are present, not whether local management or natural range explains that presence.
  • Strike reduction after mitigation: tells you that fewer strikes were reported after an intervention, not whether the intervention caused the reduction or whether season, traffic, or reporting changed.
  • Controlled before-and-after study with a comparison site: solid evidence that an intervention had a measurable effect, though not the exact mechanism unless paired with biological data.
  • Mechanistic hazard model (species behavior + habitat + flight path analysis): tells you why strikes are plausible and where peak risk falls, not the actual strike rate without empirical validation.

Actionable next steps: investigate first, overreact never

Here is the practical sequence for anyone dealing with a bird incident right now, whether you are a pet owner, a backyard birder, or a wildlife officer at an airport.

Step 1: Document before you do anything else

Write down the date, time, location, species if known, number of birds affected, and any observable symptoms or conditions. Photograph the bird and the surrounding environment. Note any changes in the area in the past two to four weeks. This is not busywork: accurate documentation is the difference between useful data and another anecdote.
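As one way to keep that documentation consistent, here is a minimal logging sketch in Python; the file name and field names are just one reasonable layout, not a standard.

```python
# Hedged sketch of the documentation step: one CSV row per incident.
import csv
import datetime
import pathlib

LOG = pathlib.Path("bird_incident_log.csv")  # hypothetical file name
FIELDS = ["date", "time", "location", "species", "count",
          "symptoms", "recent_changes", "photos_taken"]

def log_incident(**fields):
    """Append one incident row, writing the header on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({"date": datetime.date.today().isoformat(), **fields})

log_incident(time="07:40", location="backyard feeder", species="house finch",
             count=1, symptoms="lethargy, fluffed feathers",
             recent_changes="new seed brand 10 days ago", photos_taken="yes")
```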

Step 2: Contact the right resource

  • Pet bird illness or death: An avian veterinarian, with necropsy requested for unexplained deaths.
  • Backyard or wild bird mortality involving multiple individuals: Your state wildlife agency or the USGS National Wildlife Health Center (nwhc.usgs.gov). They can test carcasses for disease and toxins.
  • Suspected pesticide exposure: The National Pesticide Information Center (NPIC) at 1-800-858-7378 can advise on exposure pathways and testing.
  • Aviation wildlife strike: File a report with the FAA Wildlife Strike Database and notify the airport wildlife biologist if one is on staff. For pattern-level analysis, request a multi-year species-specific strike history from the FAA database.

Step 3: Apply mitigations proportional to evidence strength

Some precautions are worth taking even when causation is uncertain, because they have broad-spectrum safety benefits and low costs. Cleaning feeders regularly, reducing reflective glass surfaces near bird habitat, and keeping cats indoors are all examples. Other responses, like removing a feeder permanently, demanding a wind turbine shutdown, or grounding flights based on a single unusual strike event, require much stronger causal evidence before they are justified. The principle is straightforward: the bigger the cost of the action, the higher the evidentiary bar should be before you take it.

Step 4: Track outcomes and stay skeptical of your own narrative

If you make a change and the problem improves, resist the urge to declare victory immediately. Seasonal changes, natural disease cycles, and simple random variation all improve situations without any intervention. A true causal test asks whether the improvement was larger than what you would expect by chance, was consistent across multiple subsequent time periods, and did not coincide with other changes that could explain it. This sounds demanding, but it only requires that you keep observing after you act, not that you run a randomized trial in your backyard.
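A crude numeric version of that test: compare the post-change average to the baseline average minus roughly two standard deviations, using the Poisson rule of thumb that count data spread about as the square root of their mean. All counts below are hypothetical.

```python
# Hedged sketch: was the improvement bigger than ordinary week-to-week noise?
# Uses the Poisson rule of thumb that count spread is roughly sqrt(mean).
from math import sqrt
from statistics import mean

baseline_weeks = [6, 8, 5, 7, 9, 6, 7, 8]   # deaths/week before the change (hypothetical)
after_weeks    = [1, 2, 0, 1]               # deaths/week after the change

mu = mean(baseline_weeks)                   # 7.0
threshold = max(mu - 2 * sqrt(mu), 0)       # ~1.7: below this looks like real change
verdict = "beyond chance" if mean(after_weeks) < threshold else "within normal noise"
print(f"baseline {mu:.1f}/wk, after {mean(after_weeks):.1f}/wk -> {verdict}")
```

Even then, the result only counts if it persists across subsequent periods and no other change coincided with yours.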

The goal across all of this is not paralysis. It is calibrated confidence: acting on the evidence you actually have, acknowledging what you do not know, and building better information over time rather than locking in a wrong answer because it arrived first. Questions about which factors most reduce bird populations, how industrial pollutants like PCBs move through food chains, and how to honestly compare mortality sources like wind turbines against other hazards are all answerable, but only if the causal reasoning underneath them is honest. Researchers estimate that bird deaths from wind turbines are generally low compared with other causes, though the exact numbers vary by site and species. If you are asking whether windmills are destroying the bird population, start with whether the association holds up once you account for migration timing and other hazards, not just proximity to turbines. To put turbine impacts in context, compare them against the larger baseline of bird deaths attributable to fossil fuels. Start there, and the practical decisions follow.

FAQ

If birds die shortly after a new change (like a pesticide or feeder setup), does that automatically mean the change caused it?

Not necessarily. Timing can fit many explanations, so treat the pattern as a hypothesis. Check whether the association repeats in other seasons, after weather changes, and in locations with similar bird activity but different exposure levels. If multiple independent incidents follow the same timing pattern plus species-specific mechanistic logic, the causal case strengthens.

How can I tell whether an incident is just seasonal or weather-driven rather than caused by something I did?

Seasonal and weather-driven swings are a common trap. Weather and migration can drive large changes in bird counts even when no harm is occurring. Establish multiple weeks of baseline before making any change, then compare to the same weeks in other years or to nearby areas to see whether the pattern exceeds expected natural variation.

What’s the best way to avoid misleading conclusions when incident reports lump different bird species together?

Start with a stratified look, not a single number. If you only compare total bird counts, you can miss that different species respond differently to the same condition. Separate by species, age class when possible, and microhabitat (for example, runway side vs terminal side at an airport), then test whether the change is strongest where the exposure is most plausible.

Should I stop feeding birds immediately if I suspect my feeder is causing harm?

Feeder removal can reduce disease and improve safety, but it can also eliminate useful evidence. If you can do so safely, pause and document first, then switch one variable at a time (cleaning schedule, seed type, feeder location). This helps you distinguish a sanitation issue from an exposure issue like pesticide drift or nearby construction dust.

In aviation or airport mitigation, what would count as strong evidence that an intervention truly reduces strikes?

Look for “causal signatures” that align with the mechanism. For example, if a mitigation works, you often see consistent reductions during the exposure-critical window (like takeoff and landing periods) and across multiple days, not just the first few weeks. Also check whether a related factor changed at the same time, such as grass height, new landscaping, or changes in flight schedules.

Why might strike reports decrease even if birds are not actually less risky near the airport?

Often because reporting and observation effort changed, not the birds. If the number of reported strikes drops, confirm whether reporting practices, staff coverage, or aircraft routing changed at the same time. A useful check is whether direct bird activity measurements (habitat use, grass height, foraging density) also shifted in the direction the mitigation predicts.

How far back should I look for potential causes when toxins or contaminants are involved?

Often much further back than the most recent event, especially with pesticides or pollution. Residues can persist, and exposure can occur through the food chain or water. For chemicals like PCBs, even if the immediate incident timing looks tight, the food-chain pathways and trophic transfer involved should be consistent with the diet and timing of the affected species.

How should I interpret Wildlife Strike Database numbers without overreacting to a small count?

Treat any single count as a floor, as discussed above. You can still use the database well by comparing within the same airport across time and by using confidence ranges rather than one raw count. If possible, pair strike data with operational context (traffic volume, season, runway configuration) so the comparison reflects expected exposure rather than just reporting volume.
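As a sketch of the confidence-range idea, the square-root approximation below turns a small yearly count into a rough 95 percent interval; the counts are hypothetical and the method is deliberately crude.

```python
# Hedged sketch: an approximate 95% range for a small Poisson count, so a
# single year's strike number is read as an interval, not a point estimate.
from math import sqrt

def approx_poisson_interval(count, z=1.96):
    """Square-root (variance-stabilizing) approximation; crude but adequate
    for year-over-year comparisons at one airport."""
    lo = max((sqrt(count) - z / 2) ** 2, 0.0)
    hi = (sqrt(count) + z / 2) ** 2
    return lo, hi

for year, strikes in [(2021, 4), (2022, 9), (2023, 6)]:  # hypothetical reports
    lo, hi = approx_poisson_interval(strikes)
    print(f"{year}: {strikes} reported, plausible underlying range ~{lo:.0f}-{hi:.0f}")
```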

What’s a realistic “minimum standard” for testing cause after I change something in my yard or at a facility?

You do not need a full randomized trial to reduce error, but you do need a structured comparison. If you change one thing, keep observing longer than a single week, and look for whether the improvement stays larger than natural variation. If you cannot keep observing, your best alternative is to compare to a control-like context, such as a nearby similar area without the change.

How should I decide what level of evidence is enough before taking high-cost actions (like shutting down an operation)?

If the cost or irreversible nature is high, use a higher evidentiary bar. For example, permanent feeder removal or major operational changes should be supported by repeated patterns, species-specific reasoning, and evidence that other plausible causes are unlikely. For low-cost steps with broad benefits (cleaning, cat control, reducing reflective glass), you can act sooner because the downside is limited.