After a bit of administrative arm-wrestling the Federal Aviation Administration is releasing numbers today on reported bird strikes – the sort of incident that plunked US Airways Flight 1549 into the Hudson River in January. The concern's obviously a real one – but the figures warrant a great deal of caution.
It’s essential to know the limitations of any data, and here they seem to be numerous. The voluntary nature of the reporting is a very substantial weakness; non-reporting is not the same as the absence of events. You could have a bigger problem at airports with the fewest reported bird strikes, if the reason is not lack of strikes but lack of reporting, possibly indicating generally lax procedures.
One example: US Airways requires its pilots to report strikes. Do all other airlines? If not, those that require reporting could have elevated numbers at their hubs – not because more strikes occur there, but simply because more are reported per airline policy.
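The distortion this creates is easy to see in a toy calculation. A minimal sketch, with entirely hypothetical airports and reporting rates (none of these figures are the FAA's):

```python
# Two hypothetical airports with the SAME true number of strikes but
# different reporting rates -- say, one dominated by an airline that
# requires reports and one that isn't. All numbers are invented.
true_strikes = {"A": 100, "B": 100}
reporting_rate = {"A": 0.40, "B": 0.10}  # assumed, not FAA figures

reported = {k: round(true_strikes[k] * reporting_rate[k]) for k in true_strikes}
print(reported)  # {'A': 40, 'B': 10}
```

Airport A looks four times as hazardous as Airport B in the reported data, even though the underlying risk is identical – the gap is pure reporting policy.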
There also seems to be a strong possibility of double- and triple-counting, since per AP the FAA takes reports from a range of sources. Plane hits a bird, one report. Dead bird found on runway, another report. Mechanic finds feathers in engine, third report. One strike, three reports. Do they adjust for this by comparing individual reports and trying to control for double- or triple-counting? A meaningful tally would require it.
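To make the point concrete, here's a minimal sketch of the kind of deduplication a meaningful tally would require. The record fields and matching rule (same airport, same date) are illustrative assumptions, not the FAA's actual schema or method:

```python
from collections import defaultdict

# Hypothetical report records: one strike can generate reports from
# several sources (pilot, runway crew, mechanic). Field names invented.
reports = [
    {"airport": "LGA", "date": "2009-01-15", "source": "pilot"},
    {"airport": "LGA", "date": "2009-01-15", "source": "runway crew"},
    {"airport": "LGA", "date": "2009-01-15", "source": "mechanic"},
    {"airport": "JFK", "date": "2009-02-03", "source": "pilot"},
]

def naive_count(reports):
    """Raw tally: every report counts as a separate strike."""
    return len(reports)

def deduplicated_count(reports):
    """Collapse reports sharing an airport and date into one presumed strike."""
    groups = defaultdict(list)
    for r in reports:
        groups[(r["airport"], r["date"])].append(r)
    return len(groups)

print(naive_count(reports))         # 4 reports
print(deduplicated_count(reports))  # 2 presumed strikes
```

Even this crude matching halves the count in the example – which is exactly why it matters whether the FAA does anything like it.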
Further, any increase over time could reflect an increase in reporting, not an increase in actual strikes – an artifact of new or different reporting protocols, or higher awareness. If these are unevenly distributed (e.g., one airline adopts a reporting policy, another doesn't), that, too, would call the data into question.
Per AP the FAA says only about 20 percent of strikes are recorded. (Again, without knowing how many are double- or triple-counted.) It would be useful to know how they estimate that – we’ll ask today. But it suggests vast noncoverage – 80 percent – with no apparent reason to have any confidence that the absent incidents are distributed proportionately to the strikes that are reported.
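The arithmetic behind a 20 percent reporting rate is worth spelling out, because the naive extrapolation bakes in precisely the proportionality assumption at issue. A sketch with an invented count:

```python
reported = 1_000        # hypothetical count of reported strikes
reporting_rate = 0.20   # the FAA's estimate, per AP

# Naive scale-up: assumes the missing 80% look just like the
# reported 20% -- the very assumption there's no basis for.
estimated_total = reported / reporting_rate
print(estimated_total)  # 5000.0
```

The division is trivial; the problem is that nothing justifies treating the unreported 80 percent as a simple multiple of what did get reported.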
The AP and others filed a Freedom of Information Act request for these data, and the news service looks to have the bit in its mouth; its latest report says "the FAA has always feared the public can’t handle the full truth about bird strikes, so it has withheld the names of specific airports and airlines involved."
But data are not "the truth" if inadequately collected or if misanalyzed; they can misinform as easily as inform our judgment. In this case it seems clear that the FAA numbers are neither a full count of bird strikes, nor a representative sample. They should not be construed otherwise.
11:20 a.m. update:
The FAA data are now online here – with a disclaimer at the top that underscores some of what I suggested above. It reads as follows:
"The FAA National Wildlife Strike Database contains strike reports that are voluntarily reported to the FAA by pilots, airlines, airports and others. Current research indicates that only about 20% of strikes are reported. Wildlife strike reporting is not uniform as some organizations have more robust voluntary reporting procedures. Because of variations in reporting, users are cautioned that the comparisons between individual airports or airlines may be misleading."
Also worth reviewing is the FAA's latest report on this material from last June. In addition to the fairly gruesome dead-bird pictures, it suggests reasons for increased bird-strike reporting beyond increased awareness and reporting requirements: more birds, more air traffic and quieter airplanes. (It also says bird-strike reports are "edited … to ensure consistent, error-free data," but doesn't say specifically whether or how duplication is identified and corrected.)
A last point, one that almost goes without saying, is that the number of incidents per airport would need to be adjusted for its level of air traffic to mean anything. But without uniform reporting standards, even that adjusted figure would be highly suspect.