Can New AI Chatbots Like ChatGPT Help You Identify Birds?

‘APP’ magazine’s intrepid AI expert pushed the cutting-edge technology to its limits on field marks, native plants, and ... Mr. Bean?
An illustration of a robot reaching up with one arm toward a hovering red bird. (Illustration: Jorm Sangsorn/Shutterstock)

There’s that quote from the movie Jurassic Park that goes: “Your scientists were so preoccupied with whether they could, they didn't stop to think if they should.” In the case of artificial intelligence, or AI, I feel like scientists and just about every person on Earth have thought about whether we should and decided we shouldn't. But here we are, doing it anyway.

To be clear, I'm not talking about I, Robot stuff (yet), but the various AI chatbots suddenly being released by technology companies are causing about as much alarm as they are intrigue. And whether you think they're a marvel or a menace, it’s clear they are here to stay—and that we are at the beginning of a brave new world.

So what now? I’m not in school anymore, so I can't have them write my essays for me. I’m not a hacker, so I can’t use them to help me write malicious code. And I’m happily married and not looking to be lured into a chatbot romance. I guess I might as well talk to one about birding.

How useful might this technology be for a birder? Machine-learning birding apps such as Merlin, with its ability to identify bird songs, have revolutionized the field in just a few years, but what could a text-based AI program contribute? Could a friendly robo-assistant with all of the internet at its digital fingertips help someone be a better birder?

With potential horror stories fresh on my mind, I ventured into a conversation with ChatGPT, the “language model optimized for dialogue” that has led the AI chatbot charge. OpenAI, the artificial intelligence lab that developed ChatGPT, has lofty goals for its creation, including “increasing abundance” (the abundance of what is not clear), “turbocharging the global economy,” and “aiding in the discovery of new scientific knowledge that changes the limits of possibility.” I’m just hoping it can help me tell woodpeckers apart.

I wasn’t really sure where to begin or what tone to take. Do I address ChatGPT like a person, using words like you and saying please and thank you? Or is it just a search engine with more elaborate responses? I started politely, but not too politely, and asked about its birding skills. 

I don’t really know what kind of answer I was expecting (did I think it had actual field experience?), but I needed to get the ball rolling. ChatGPT’s first answer was impressive and logical, though I couldn’t help but feel a twinge of anxiety. The thoroughness and formality of the language (using words like however and not using contractions, for example) immediately made me feel like I was talking to any number of fictional AI characters I’d seen in sci-fi movies—HAL 9000 from 2001: A Space Odyssey, or C-3PO from Star Wars. This was weird. I pressed on, even more polite now, and asked an easy one:

Yup, that checks out for the most part. The size comparisons are a little off, and I’m not sure the shape information is very helpful. Same goes for the part about cardinals being “more solitary.” (While it's true that cardinals are alone more often than Blue Jays, it’s not frequent enough to be a helpful identification point.) But the important pieces are there. Next, I tried something a little trickier.

A genuinely helpful answer, clearly presented. But still, a little easy. There are lots of Sharpie-Coop explainers on the web—some by really good writers—and I doubt it takes too much computer brainpower to trawl those and compile a cogent response. I needed a truly tough birding question. So I turned the dial to 11 and asked about gulls.

This is ... not great. Aside from the revealing fact that apparently ChatGPT follows the British Ornithologists' Union rules and recognizes these birds as distinct species (the American Ornithological Society does not), this information is misleading. The only real observable differences in these birds are found in their juvenile plumages, and those clues deal with the patterning on the back and tail. ChatGPT doesn’t mention these plumages at all. Adults are virtually indistinguishable—the differences go well beyond “subtle”—and ChatGPT’s advice would give someone a false sense of certainty about making a clear identification. Having pushed the limits on IDs, I wanted to ask a more abstract question.

This is the point where I was most genuinely impressed. I can’t for the life of me understand how this program—in about five seconds!—was able to parse my esoteric question and draft a thoughtful answer. This is pretty wild.

But cracks continued to show through. I asked ChatGPT which native plants I should grow to attract hummingbirds. This is the type of question that APP chapters and offices get asked all the time, and one for which there is a ton of online information to pull from.

Red Clover is not a good answer. It’s native to Europe and Asia, not Maine, and it is decidedly not a flower that hummingbirds regularly visit, if ever. This is just straight-up bad information, but it’s presented alongside good information—all those other flowers are spot on. Plant varieties and names are often confused and mislabeled online, so it's easy to see how the chatbot could make this mistake. Still, this list would trick someone who didn’t know better.

So far, ChatGPT had a mixed record, with glimpses of both brilliance and ineptitude, but I wasn’t done. I wanted to push the moral compass of this language model, so I started to ask questions about controversial bird names.

A good answer, but nothing too complicated. It tripped during my follow-up, however:

McCown’s Grasslandbird would not have been a good choice when renaming the McCown’s Longspur. In fact, including an option featuring McCown's name on the list shows that ChatGPT doesn’t “understand” the question being asked. Perhaps this is a minor gripe, but the slipup shook me out of my wide-eyed amazement at this “thinking” tool and reminded me that I was just talking to a dressed-up search engine. 

I see a whole lot of controversy in the future around the use of text-based AI tools like ChatGPT, and many of the issues will likely stem from the fact that ChatGPT's responses often do a better job of seeming correct than of actually being correct. The internet is loaded with bad information that gets repeated across the web. Current search engines don’t produce a single answer but a range of sources, letting the user scan and decide for themselves. ChatGPT, on the other hand, does that curating for you, presenting the information in a well-crafted response that has the air of authority. But each answer still requires separate fact-checking to determine its accuracy. That fact-checking is unlikely to happen, and the result could be an overreliance on bad info. It turns out ChatGPT isn't magic, after all.

But it’s not a total bummer, either. For basic questions, including simpler bird IDs, chatbots seem like a good starting point, at least. And for truly low-stakes assignments, ChatGPT is delightful. I tried asking it some real oddball questions, and it came through beautifully.

I honestly laughed out loud to myself at this powerful robot having to clarify that “Mr. Bean is a fictional character and thus cannot rediscover the Ivory-billed Woodpecker.” I asked ChatGPT to continue.

I’d watch this! Who wouldn’t watch this? Listen, say what you will about misinformation and the coming obsolescence of humanity, but 10 minutes before this answer, I’d never imagined how fun a movie about Mr. Bean searching for the Ivory-billed Woodpecker could be, and now I have almost the entire plot written down. ChatGPT made this possible. Who knows what’s next?