TONYA MOSLEY, HOST:
This is FRESH AIR. I'm Tonya Mosley. America's first AI-fueled war is unfolding right now. Over the last three weeks, the U.S. and Israel have launched strikes against Iran, hitting a thousand targets in the first 24 hours alone - nearly double the scale of the 2003 shock-and-awe campaign in Iraq. The system helping to enable much of this is called the Maven Smart System, and running inside of it is Claude from the company Anthropic, an AI model that millions of people interact with every single day. On the very first day of the war, a U.S. Tomahawk missile struck a girls' elementary school in southern Iran, killing more than 165 people, most of them schoolgirls. A preliminary military investigation found the strike likely resulted from outdated intelligence. And while the role of AI has not been confirmed, the Pentagon is still investigating whether Maven played any part. At the center of this story is a little-known Marine colonel named Drew Cukor, who spent decades fighting to bring AI to the battlefield and whose obsession has quietly changed the future of war. My guest today has been reporting for years on Cukor and on how we got here. Katrina Manson is an award-winning Bloomberg reporter who covers cyber, emerging tech and national security. Her new book is "Project Maven: A Marine Colonel, His Team, And The Dawn Of AI Warfare." Katrina Manson, welcome to FRESH AIR.
KATRINA MANSON: Thanks for having me.
MOSLEY: You have been reporting on this Maven Smart System for a couple of years now, and now you're watching it used in a real-time war. Take us a little bit into how the Maven Smart System actually works and specifically what Claude's role is inside of it. How do those two things work together?
MANSON: If you imagine looking at something like Google Earth, you begin to have an idea of the display that U.S. military operators will be looking at. Some people have described this to me as Windows for war or an operating system for war. It's essentially a digital map. What makes that map special from the U.S. military point of view is the number of intelligence feeds that are coming into it. At one public event, it was made clear that there are more than 160 separate intelligence feeds.
Now, to crunch that data, they're using digital data analytics, but they are also using a few other tools that rely on AI. There's computer vision to analyze some of the objects showing up on the maps that could be potential targets, and also where U.S. forces are. And then Claude is doing something different. That is not computer vision. That is an AI tool based on a large language model that can crunch data. And what I've been told is that Claude and LLMs inside Maven Smart System help speed up processes. So take the sorts of processes you need to get sign-off on a target: everything short of sign-off, Claude can help with. It can also help plan courses of action and help pair weapons to targets. It can assist with everything the U.S. military needs to do when it comes to making a decision, short of actually making the decision.
MOSLEY: On the very first day of the war, this missile struck the girls' school, and there is some reporting on this case suggesting that the United States was likely responsible. There is no indication yet that AI had a role to play here, but the coordinates used were more than a decade out of date. What does that specific incident tell us about some of the lapses in data-keeping, and potentially what could be a challenge for AI models as they are used more often in war?
MANSON: Adherents of AI warfare regularly emphasize to me how important accountability is. In every war, there are bad strikes. Whether the U.S. is prepared to investigate this one and make public what went wrong, if the U.S. is responsible, will be a real test of those claims of accountability. AI is meant to make warfare more auditable. Now, whether this is a case where the school was on a targeting list that predates AI and wasn't updated, and whether AI drew from that targeting list - all of that will be important to reveal. Any system, particularly one that uses AI, will only ever be as good as the data that feeds it. And if they are drawing on a database that is old, the AI, if it's set up that way, can't do anything about that. And on numerous occasions, I've found examples of poor, weak or flagrantly erroneous data feeding these systems.
If this is a U.S. attack, it won't be the first one against a mistaken civilian target. In 1999, the U.S. struck the Chinese Embassy in Belgrade. And in that case, the CIA came out in public and said, we had the map labeled wrong. We don't yet know the final analysis of what happened here, but if that girls' school was labeled wrong in a database, no AI can beat that unless you start using AI in other places. If Google Maps, for example, showed that it was a girls' school, it would be quite simple to draw from that information, potentially, or to analyze other location data that might indicate there were children in the area. And an additional factor will be, where are the checks and balances on an old database, and what role could AI play in checking work and in cross-referencing other data if indeed the girls' school is labeled on something as accessible as Google Maps?
MOSLEY: I want to talk about some news this week that is coming to bear because of a court case. The Pentagon blacklisted Anthropic for refusing to allow Claude to be used in autonomous weapons, and within hours, OpenAI stepped in. They then publicly announced the exact same restrictions Anthropic was punished for holding. Is that an accurate way to describe this?
MANSON: It's one way it's been described, but not in my reporting. The OpenAI deal, I've reported, is slightly different. It's not clear if it maintains exactly the same safeguards as Anthropic's. And Anthropic also, of course - it's really important to frame this - really leaned in to working on classified cloud for the Pentagon. They were the first AI company to decide to offer AI on a classified platform. And from my reporting, it is not possible for them to know every use case, every specific example of the way their AI tool is used in classified operations. And the classified level is where America fights its wars. So that decision to lean into what the American military calls war fighting was already a very significant decision. OpenAI had not taken that decision. It was not on classified cloud. It now will be. It does seem to have allowed a more open acceptance of how its tool could be used. But I think we'll have to see, because it's a very politicized divide when you have the president calling Anthropic left-wing nut jobs, calling them a radical left company, even though they were working on classified cloud. Clearly, there's a technical debate and a policy debate, but there is also a political flavor to this falling out.
MOSLEY: Can you explain, maybe, in layman's terms how a classified cloud actually works?
MANSON: Almost.
(LAUGHTER)
MANSON: If you imagine the cloud that we all use for, let's say, our email or for documents that are loaded up into the cloud, the same can be done for military data. And it can be accessed and shared. Now, for U.S. military or for the intelligence services, they don't want that information to get hacked. And so there are a number of safeguards that are introduced that can uphold a higher classification.
So there is information that the U.S. system deems secret or top secret, and compartmentalized information that only a few people can access, even at that top secret level. And each has its own network that can, in theory, secure that information so that it can't be hacked, penetrated or ruined in some other way. Of course, multiple times in history, that's gone wrong. All the time, those systems are under strain from hackers and potentially also from insider threats. So the U.S. is constantly trying to safeguard its information.
MOSLEY: I was reading about some researchers at King's College London who recently put Claude and ChatGPT and Gemini into simulated nuclear crisis scenarios. And 95% of the time, the AI reached for tactical nuclear weapons as a strategic option. You have spent years inside this world, with the people who are building these systems for war. And I'm just curious, what do you think when you hear a finding like that?
MANSON: Well, I also reach for the word terrifying. I mean, clearly, that kind of tool is one that you really need to put safeguards around. So the U.S. has said it doesn't want to put AI into the nuclear controls, so that's one step. But there will be pressure on that system. And decision-making is already speeding up. But I've certainly spoken to U.S. military advisers who've brought me similar information.
They emphasize that AI can be escalatory, as you just described, and also - almost a more problematic issue - sycophantic. There is a tendency to agree with the person asking the question. So, shall I go to war? Would it be a good idea to launch this missile? If the question is asked in that way, assuming an intent or an action, there is a tendency within AI to buttress that opinion. So as a check on opinion-forming, you need to consider AI in a really careful way.
Now, the U.S. military knows this. This was a very advanced computer scientist telling me this, and he had been an adviser to U.S. Central Command, the very command that is now using these chatbots. What he and others at the National Geospatial-Intelligence Agency have told me is that they're aware of these risks, and they are trying to add in checks and safeguards, as they call it, underneath the hood. So if a commander said, shall I strike this now? Is it a good idea? Even if they were to prompt the chatbot in that way, the claim was made to me that the chatbot runs through a very fast series of checks.
It red teams the question, which is to say it pretends it is an attacker. It checks for escalation bias. It checks for a number of different things, and by the time it spits out the answer, all of those potential problems have been factored in. Now, I haven't seen that happen in real life. And I've certainly come across a lot of people, even within the military, who are very frustrated by the answers that chatbots give, sometimes fabricating attacks that haven't even happened. And the U.S. needs to respond to attacks; if they're responding to an attack that was fabricated, there is constantly this risk of escalation.
And in that sense, it's always about that critical thinking, that framing. What question are they asking of AI? Can I win quickly if I start a war with Iran? Or what are the risks that this could proliferate, that U.S. service members will be harmed, that civilians will get hit? What are the chances of achieving regime change if I seek a quick war? How many quick wars become medium-term wars and long wars? Is there still that human hubris? Wherever AI is put, it will only ever be as good as the data and the question, and there, there may still be a gap.
MOSLEY: And all of this testing is happening during an active war right now.
MANSON: A lot of the testing I've reported on happened before then. But even at the time, in February 2024, I was able to report that the U.S. did use this system to narrow down some of the 85 targets that the U.S. military struck in Iraq and Syria. This was in reprisal for the deaths of three U.S. military personnel. And that is the first large-scale case I know of - up until today's operations - of U.S. Central Command using this system to try and bring speed and scale to war.
It had been used before to assist others. It had been used on a piecemeal scale for U.S. Special Operations Command, but those operations tend to be much smaller. Getting into the big Army, the big military formations - this really is war at a very joined-up and connected scale, involving every service. And as we speak today, CENTCOM has hit more than 9,000 targets. And that has certainly relied on this system, Maven Smart System.
MOSLEY: Let's take a short break. My guest today is journalist Katrina Manson, author of the new book "Project Maven." We'll continue our conversation after a short break. This is FRESH AIR.
(SOUNDBITE OF GAIA WILMER'S "MIGRATIONS")
MOSLEY: This is FRESH AIR. And today, I'm talking with Katrina Manson, whose new book, "Project Maven," traces how the United States built its AI warfare capabilities. We've been talking about the war in Iran and the system at the center of it.
Katrina, there's a man at the center of your book and of this story whom most people have never heard of - a Marine colonel named Drew Cukor. Tell us who he is and why this moment basically exists because of him.
MANSON: Drew Cukor is this very absorbing retired Marine who I met, who was chief of this project called Project Maven. He wasn't the director. He was the doer, the leader of this effort to bring AI to the way that America makes war. And it started publicly, at least, as a very narrow effort. The idea was to bring AI to rifling through drone footage - copious video that the U.S. was taking in various countries around the world as part of what many military operators called their GWOT, the global war on terror.
Now, Drew Cukor had a long and frustrating career inside the Marine Corps as an intelligence officer, and he was repeatedly fed up with the tools he had to go into battle with and to support other military operators. He was in Afghanistan in October 2001, after 9/11, lugging around a large computer. He felt that he couldn't support the U.S. military operators that intelligence was meant to keep safe. And they simply weren't able to get front-line troops the kind of information they needed as these very rudimentary, unsophisticated improvised explosive devices started to maim and kill American service members. And so there was a constant frustration that the U.S. could bring to bear enormous firepower, precision firepower, but couldn't put it in the right place.
And you see, as you see in all wars, what's known as friendly fire, allied fire - the U.S. mistakenly harming their own, harming partners and allies, and also harming and killing civilians by mistake. He began to feel this set of problems could be solved with better intelligence, that there had to be a way to reduce that loss. When he was in Afghanistan, there were Marines dying the whole time; when he was in Iraq, there were hundreds of Marines dying. And he simply felt that the solution was not so much AI as better information. And in the modern world, better information has come to mean AI. And in 2011, he worked on an effort to bring technology from a company named Palantir Technologies to Afghanistan to start to track where these improvised explosive devices had been before.
MOSLEY: So we're 10 years into this 20-year project that Cukor envisions. He has always said that he feels the Department of War, which at the time of your conversations was the Defense Department, needed to function more like a software company than a weapons factory. But looking at Iran right now, the scale and the speed - is this the war he envisioned?
MANSON: There's no doubt that this is an AI-infused war. And the other element, beyond safety, accuracy, scope and scale, is that people are claiming AI makes war more efficient. Often what happens when things are more efficient is that you can simply do more of them. And to hit a thousand targets in the first day - now 9,000 targets - and not yet have finished the war - the Iranians are still continuing, and the Strait of Hormuz is closed. There is a question about overconfidence, about how much you can rely on these systems and whether expanding the pace of war gets you there. And this is a long-term debate.
If you go back to 1899, there was a Polish banker, Ivan Bloch, who brought out a book called "Is War Now Impossible?" He looked at these claims that, with mass-produced rifles, the ways of killing were now so industrialized, at such scale, that no one would dare declare war against anyone else. And instead, he argued, long before World War I started, that the mass production of weaponry would actually lead to stalemate, human harm and long wars. And it raises this idea of, is there ever a way to deliver palatable killing?
MOSLEY: Our guest today is Bloomberg journalist Katrina Manson. We'll be right back after a short break. I'm Tonya Mosley, and this is FRESH AIR.
(SOUNDBITE OF TOMMASO AND RAVA QUARTET'S "MONDO CANE")
MOSLEY: This is FRESH AIR. I'm Tonya Mosley, and my guest today is Bloomberg journalist Katrina Manson. She's written a new book titled "Project Maven: A Marine Colonel, His Team, And The Dawn Of AI Warfare." The book traces how Marine Colonel Drew Cukor became instrumental in the decade-long creation of America's AI warfare capabilities, which are now being used in the active war in Iran.
I want to talk to you a little bit about Cukor's relationship with Palantir. It seems to be one of the most complicated threads in your book. And Palantir, for those who aren't familiar, is a data analytics company. It helps organizations make sense of massive amounts of information. And Cukor became one of the most powerful internal advocates for Palantir. How did that relationship begin? And why was it so controversial?
MANSON: Cukor learned about Palantir in the late 2000s, when it was really quite a young company. And he was looking for a data analytics solution that could bring data together and deliver him a picture of war. As he said to me, it's just a very hard question to know where the enemy is and where your own people are. And this, for him, became a tool that he really believed in. And others in the defense tech world who relied on Palantir in their military service have spoken favorably to me of it as a tool.
He continues this relationship. And he flies over to see them. And he explains his entire vision for what becomes Maven Smart System - a digital map and operating system with white dots with coordinates that ultimately can pair a target to a weapon and shoot it. And at the time, Palantir doesn't really want to do this, because he's asking them to do two things they don't see themselves doing: one, AI, and two, creating a user interface. They didn't see themselves as creating pretty user interfaces. They saw themselves as doing the data analytics, the crunching aspect of it. But they went along with it.
And a very senior person at Palantir, Aki Jain, told me that it really is Cukor himself who convinced him to - as he put it - revisit his priors. He had a bias against AI; so did all of Palantir. And they begin to listen to Drew Cukor to understand how AI might support their data analytics. In addition to that, Palantir was already controversial within the Pentagon. They had actually sued the Army in 2016 to gain access to a contract. This is a time when you really have young, hungry companies beginning to say, give us a contract. There's this sense that contract awards in the Pentagon are very old-fashioned, that things function too slowly.
So Palantir had succeeded in getting a foothold in the Pentagon but was seen as very arrogant by many because it had sued. And it continued to claim its tech was the best. Whether that was true or not, the manner in which they said this irked several people. And Cukor himself guided them not only on AI and what he wanted, but also on the manner in which they should conduct themselves. He said, we think you're great, but you need to tone it down.
MOSLEY: How would you characterize Palantir in this story? Is it an honest actor in it?
MANSON: I think it's really fair to see it as a very divisive company. You have people who cheerlead for it with great passion, who feel that Palantir's tech saved their lives. You also have people who think they are arrogant, risk being monopolistic, charge too much, and simply make tech that is good but not as good as everyone makes out. Even as late as 2023, a senior commander who was using Maven Smart System awarded it a grade of C-plus. So right the way through, you have problems with Palantir. And multiple members of the military lined up to tell me, OK, we're using Palantir, but if something better comes along, we'll switch.
MOSLEY: I want to talk a little bit about some other active wars, particularly the war in Ukraine. It seems, the way that you've been writing about this, that that's where AI warfare kind of became real at scale. When Russia invaded back in 2022, the U.S. deployed Maven in support of Ukrainian forces. But it almost immediately fell apart. What happened and how did they fix it?
MANSON: The computer vision had been trained on the Middle East. Think hot, think sand. And suddenly, it was being asked to identify Russian tanks in the snow in Ukraine. So it wasn't delivering the detections that the U.S. wanted to rely on. Secondly, the system wasn't loading. I found out there were often eight-second delays, which in a war is a lifetime. And after a lot of investigation, it turned out that was because the networking just wasn't up to it.
It was, in fact, crisscrossing the Atlantic, sometimes as many as four times. So that created delays. And sometimes even packets of data could fall off the network, and you might miss crucial information. So they really needed to work on the networking, the sort of arteries of information. And they also needed to very quickly gather up imagery of Russian equipment and retrain the algorithms. And that was going on at a very fast pace. Some people complained about getting phone calls at 2 in the morning; others welcomed them as a way to be part of this effort to support Ukraine.
MOSLEY: You know, in reading your work on that, one of the, maybe, legal lines in warfare is this difference between supporting an ally and fighting their war for them. And you report that the U.S. was passing targeting coordinates directly to Ukraine - sometimes through Signal, sometimes literally printing them on paper and walking them across. By that measure, how close was the United States to actually being in that war?
MANSON: I suppose that becomes a diplomatic question. And certainly, the U.S. wanted to frame itself as a supporter but not a direct participant. And that knife edge is really in the eye of the beholder. Does Russia choose to see it that way? Or does Russia say, you've gone too far? And so the U.S. was very, very, very sensitive to that. And the actual Project Maven operators and those in the Army who were using this system were even more sensitive, because some people among their group said, we're going too far. And others said, we have to help Ukraine with everything we have. And at the time, that debate was not public. There's also some elegant language, which is this term point of interest. So rather than saying, we're sharing targets, we're passing targets to Ukraine, they settled on this language of, we're passing points of interest to Ukraine - everything short of the decision to target, which was a Ukrainian-owned decision. But as even some of the people I spoke to for the book framed it, it was almost a sort of Pinocchio-like relationship - the Americans potentially pulling the strings on Ukrainian decisions - and it got tighter and tighter and tighter. One reason the Pinocchio metaphor isn't fully fair is that both sides have also emphasized to me in interviews that they really developed trust.
And so the Americans ultimately were finding pieces of military equipment that, on Ukrainian information, just looked like a truck. But on U.S. information, they were able to say, trust us - hit it. And it was, in fact, a transporter erector launcher, essentially a mobile missile launcher. And that relationship got faster and faster, until, in one example I'm told about, the U.S. identified a target and, 18 minutes later, the Ukrainians were able to hit it.
MOSLEY: Let's take a short break. If you're just joining us, I'm talking with Bloomberg journalist Katrina Manson, whose new book, "Project Maven," traces how the United States built its AI warfare capabilities and how those capabilities are being used right now in an active war in Iran. We'll be back after a break. This is FRESH AIR.
(SOUNDBITE OF NONAME, ET AL.'S "BALLOONS")
MOSLEY: This is FRESH AIR. I'm Tonya Mosley, and my guest is Katrina Manson, a Bloomberg journalist and author of "Project Maven." We've been talking about how the United States built its AI warfare system.
Let's talk about Gaza for a moment. Israel reportedly used AI targeting systems Gospel and Lavender in its campaign there. What does Gaza tell us about where the guardrails on AI warfare actually are?
MANSON: Some defend the AI, saying the way it is used comes down solely to policy. And others have suggested that the level of collateral damage - meaning civilian harm - that the IDF was prepared to accept, and that speed, would not be palatable to the U.S. It just isn't the way the U.S. currently operates. And I should say, the IDF defends its actions, saying they have not broken the law of war, that they have been proportionate and discriminate. That's their position. There are also these very stark numbers of 70,000 dead.
For me, a key question was to understand whether this defense of AI was fair - whether it was fair to try and separate AI from policy. Those who've expressed concern at the way the IDF pursued targets and at the civilian harm have blamed policy rather than AI. So while several of the experts I've spoken to make the distinction that the tech and the policy are totally separate, many others argue that the more you have an AI-infused killing machine, the more you can use it.
MOSLEY: Which brings up something else for me. You report that the U.S. has already built weapons that can fly and select their own targets and kill without a human making the final call, so autonomous weapons. And you name these two classified programs in the book, Goalkeeper and Whiplash. Can you tell me briefly what they are? And what does it mean that they already exist?
MANSON: These are efforts to bring drones in the air and on the sea to life. And this is for a very different conflict scenario. This is the U.S. thinking about the defense of Taiwan. So if China were ever to attempt an invasion of Taiwan and if - another big if - the U.S. were to decide to help defend Taiwan, there could be a very different scenario from the one Ukraine is facing with Russia, because of jamming. The fear is that China would disrupt U.S. satellite communications such that the U.S. couldn't control its own drones, and the drones that would protect and defend Taiwan against a maritime onslaught would need to be able to function autonomously, without any internet connection. And so the U.S. has been developing these drones in a pursuit of autonomy for several years.
Whiplash is an effort to put weapons on a Jet Ski that can move autonomously. And Goalkeeper is an effort to weaponize drones and have them fly about and be able to select and hit a target under their own steam - exactly what campaigners from Human Rights Watch argued against at the dawn of Project Maven, and what the U.N. secretary-general has called morally repugnant and politically unacceptable: the pursuit of lethal autonomous weapons systems.
MOSLEY: Well, I mean, what is standing in the way then of any meaningful international regulation? Because what does it actually mean that we're already at war while these particular conversations are still happening?
MANSON: That's such a fascinating tension. There have been discussions at the U.N. - a U.N. body - for more than 10 years now, and they are still trying to define what is an autonomous weapons system. And the U.S. position has been, let's make it first, and then let's work out what we need to regulate. That, of course, speaks to a fear that China might get there first. The U.S. has wanted to dominate this technology and to be the ones who could deliver it in a way that they felt they could use it and win. But there is a push now to turn some of that work into a treaty. And a treaty would, by all accounts, not include the likes of the U.S. or China or Israel or Russia.
MOSLEY: Katrina, tell me if I'm right on this. I mean, everything that we've been discussing - Maven, the autonomous weapons, this arms race between tech companies to supply the Pentagon. I mean, all of this exists in large part because the U.S. is preparing for a potential conflict with China over Taiwan. So what does this moment tell us about whether we're actually ready for that?
MANSON: The U.S. has assessed that China wants to be capable of taking Taiwan by 2027 - so next year. So this date has become this sort of drumbeat for the U.S. to make sure that, if it wanted to, it could fend off a Chinese invasion of Taiwan as soon as next year, or any time after that. And there's been an increasing focus since 2018 on the prospect of China being a potential adversary - not just a competitor on the global stage, but also a military adversary. And you see now senior U.S. military commanders saying quite clearly that China is rehearsing for an invasion of Taiwan. And how the U.S. could prevent that, or help partners and allies prevent that, is a subject of some anguish within those quite tight military circles that look at this.
There's a group that has really pushed for autonomy to say there's no way we can defend Taiwan without it - we need to do much more. And I was told that often Pentagon officials reassure allies and say, look, there is nothing inevitable or imminent about a Chinese invasion of Taiwan, and if there is, we'll make sure we're ready. But then they drop their voice in the corridors of the Pentagon and whisper, we're not ready. And so there is this constant concern that the U.S. needs to go faster in developing autonomy that could withstand the sort of onslaught that might be involved in an attempt to take Taiwan.
MOSLEY: One of the other things we're all kind of asking is whether we are the best custodians of this technology. And after everything that you've reported, what is your feeling? Where do you come down? I know you're a journalist, but you're also greatly informed. And you have all of these facts in front of you.
MANSON: When you meet people whose business is the business of war, your perspective changes, because there is so much risk. And there is such a long tail of experience of these forever wars. Many of the people involved in Project Maven were involved in the forever wars in Afghanistan and Iraq and saw their friends die. And they put this trust and belief in AI - that it could save their friends, that it could save them, that it could save America. And it could prevent, if AI were big enough and bad enough, China from ever daring to go to war with America.
So there's this deep belief in AI as some kind of panacea. I think, for me, it raises the question: What is this idea of a costless war? If you can make killing more remote, is that more palatable? We know that drone operators and drone screeners, drone analysts, also experience post-traumatic stress. And AI won't have those same reactions to watching the gore. So there is that argument that you can protect operators.
I question whether you can also protect civilians by pursuing that notion of remote war. And the bigger question I have is, does remote war make war more possible, more likely? Does it mean that someone will press play on the war option, not understanding the long, deep impacts? So, for me, there is a lot more to be done by the people who advocate for using AI in this way to show that it can deliver the better outcome they claim.
MOSLEY: Katrina Manson, thank you so much for your reporting. And thank you for this book.
MANSON: Thank you.
MOSLEY: Katrina Manson is a reporter for Bloomberg. Her new book is "Project Maven: A Marine Colonel, His Team, And The Dawn Of AI Warfare." This is FRESH AIR.
(SOUNDBITE OF BELA FLECK AND BILLY STRINGS' "TENTACLE DRAGON (REVENGE OF THE)")