ChatGPT’s Revenge? Stranded at the Airport

A Spanish influencer couple missed their Puerto Rico flight after relying solely on ChatGPT for travel advice, sparking viral drama about AI, trust, and human responsibility.

When an Influencer Blames ChatGPT for a Missed Flight

On August 16, 2025, a Spanish influencer couple bound for Puerto Rico ended up stuck at the airport. They were not victims of overbooking, forgotten passports, or lost luggage. Their snag was modern and avoidable. They had not secured the travel authorization required for entry.

Instead of faulting the airline or a travel agent, Mery Caldass filmed herself in tears and pointed to a different culprit. ChatGPT.

“I asked if I needed a visa and it said no,” she cried in a TikTok clip that went viral. “This is revenge because I insulted the AI. I’ll never trust it again.”

The video raced beyond Spain, gathered millions of views, and drew headlines across Europe and Latin America. The drama was loud, but the mistake at its center was simple: a mix-up between a visa and an ESTA.

Like most Europeans, Spanish citizens do not need a visa for short visits to Puerto Rico or any other United States territory. They do need an ESTA, the Electronic System for Travel Authorization. It costs 21 dollars, remains valid for two years, and covers stays up to ninety days.

Had Caldass asked a different question, such as "What documents do I need to enter Puerto Rico?", she likely would have received the right advice. ChatGPT does not tell the truth or lie in a human sense. It generates language that fits the prompt. With no context or follow-up, "no visa required" can sound like "no paperwork required at all." That is the trap.

Visa-Free, But Not Document-Free

The ESTA requirement is not optional. Travelers from countries in the United States Visa Waiver Program, including Spain, Japan, and France, must apply online before departure. The form asks for passport details and basic biographical information, plus a short set of questions on past convictions and health. You pay by card. Approval often arrives within minutes, but it can take up to seventy-two hours.

The rule covers the continental United States, Hawaii, and Alaska, as well as U.S. territories such as Guam, Saipan, Puerto Rico, and the United States Virgin Islands. Guam and Saipan allow a narrow exception: under the Guam–CNMI Visa Waiver Program, Japanese visitors and a few other nationalities can stay up to forty-five days without an ESTA. The exception does not cover stopovers on the mainland or in Hawaii.

ESTA Requirements by Destination

| Destination | ESTA Requirement | Special Conditions | Allowed Stay | Notes |
| --- | --- | --- | --- | --- |
| United States (continental) & Washington, D.C. | Required (VWP) | No exceptions | Up to 90 days | An approved ESTA is mandatory for VWP travelers. |
| Alaska | Required (VWP) | No exceptions | Up to 90 days | Same rules as the U.S. mainland. |
| Hawaii | Required (VWP) | No exceptions | Up to 90 days | Same rules as the U.S. mainland. |
| Puerto Rico | Required (VWP) | No exceptions | Up to 90 days | Same requirement as the mainland (VWP + ESTA). |
| U.S. Virgin Islands (USVI) | Required (VWP) | No exceptions | Up to 90 days | U.S. territory: same rules as the mainland. |
| Guam | Generally required | Under the Guam–CNMI Visa Waiver Program, some nationalities may enter without ESTA for up to 45 days (G-CNMI eTA and entry formalities required). | With ESTA: up to 90 days; special program: 45 days | If transiting via the U.S. mainland or Hawaii, ESTA is still required. |
| Northern Mariana Islands (Saipan/Tinian/Rota) | Generally required | Guam–CNMI Visa Waiver Program: entry without ESTA for up to 45 days for certain nationalities (G-CNMI eTA and entry formalities required). | With ESTA: up to 90 days; special program: 45 days | If transiting via the U.S. mainland or Hawaii, ESTA is required. |
| Transit via the United States (connections only) | Required (VWP air/sea transit) | Enter "In Transit" and the final destination in the ESTA address section. | | Even for transit only, an ESTA or a visa is required. |
| American Samoa | Separate entry regime (distinct procedures) | Most flights route via Hawaii; an ESTA is required for that segment (for VWP travelers). | Local entry follows its own rules. | ESTA remains required when the itinerary includes Hawaii. |
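
To make the logic of the table concrete, here is a minimal, illustrative Python sketch that encodes these simplified rules as a lookup and checks whether a Visa Waiver Program traveler needs an ESTA for a given itinerary. The ESTA_RULES table, the destination keys, and the esta_needed function are hypothetical names invented for this example; the rules are condensed from the table above and are no substitute for the official DHS and CBP sources.

```python
# Illustrative sketch only: it condenses the simplified VWP/ESTA rules from the
# table above into a lookup. Entry requirements change, so always confirm with
# official U.S. government sources before traveling.

ESTA_RULES = {
    "us_mainland":       {"esta_required": True,  "notes": "Up to 90 days under the VWP; ESTA mandatory."},
    "alaska":            {"esta_required": True,  "notes": "Same rules as the mainland."},
    "hawaii":            {"esta_required": True,  "notes": "Same rules as the mainland."},
    "puerto_rico":       {"esta_required": True,  "notes": "Same requirement as the mainland (VWP + ESTA)."},
    "us_virgin_islands": {"esta_required": True,  "notes": "Same rules as the mainland."},
    "guam":              {"esta_required": False, "notes": "Guam-CNMI waiver: 45 days without ESTA for some nationalities."},
    "northern_marianas": {"esta_required": False, "notes": "Guam-CNMI waiver: 45 days without ESTA for some nationalities."},
}


def esta_needed(itinerary: list[str]) -> tuple[bool, list[str]]:
    """Return (needs_esta, reasons) for a VWP traveler visiting the given stops.

    Every stop counts, including transit connections: that is why a Guam trip
    routed through the mainland or Hawaii still triggers the ESTA requirement.
    """
    needs_esta = False
    reasons = []
    for stop in itinerary:
        rule = ESTA_RULES.get(stop)
        if rule is None:
            reasons.append(f"{stop}: not in this toy table, check official sources.")
            continue
        if rule["esta_required"]:
            needs_esta = True
        reasons.append(f"{stop}: {rule['notes']}")
    return needs_esta, reasons


if __name__ == "__main__":
    # The couple's case: Spain is a VWP country, destination Puerto Rico.
    needed, why = esta_needed(["puerto_rico"])
    print("ESTA required:", needed)   # -> True
    for line in why:
        print(" -", line)
```

The point of the sketch is that the answer depends on destination and routing, not nationality alone, which is exactly the nuance a one-word "no visa needed" reply hides.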

The Online Pile-On

If Caldass expected sympathy, the internet had other plans. TikTok commenters were unforgiving:

  • “ChatGPT didn’t fail—you did. The government website is free.”
  • “This isn’t AI’s revenge, it’s your negligence.”
  • “If you can’t double-check official sources, you shouldn’t be traveling.”

The incident became a parable of “AI dependence” and generational overconfidence in technology: outsourcing critical details to a chatbot rather than consulting the one authority that matters—the Department of Homeland Security.

From Tears to Reggaeton

Eventually, Caldass and her partner secured their ESTA, rebooked, and landed in Puerto Rico. Her tear-streaked TikTok was soon replaced with another clip, this time of her dancing at a concert by Bad Bunny, Puerto Rico's most famous reggaeton export. The storyline—crying at the airport, then smiling in San Juan—was almost cinematic, a case study in the influencer economy's reliance on melodrama.

How Much Should We Trust AI?

The episode raises an unsettling but increasingly familiar question: how much of our decision-making should we hand over to AI?

ChatGPT is not programmed to punish or retaliate, but its outputs can feel personal because they adapt to user habits. Ask repeatedly for shorter answers, and the system will condense—sometimes past the point of usefulness. Scold it for verbosity, and it may withhold the context you later discover you needed. In Caldass’s case, what she interpreted as “revenge” was more likely a feedback loop of her own making.

The truth is banal but important: artificial intelligence is a tool, not an oracle. It can draft itineraries, summarize regulations, even point you toward resources. But it cannot replace the official line of immigration law. As with Google Maps, the directions are convenient—until you encounter a roadblock. Then you still need the street signs.

The influencer couple’s ordeal is a reminder, at once comic and sobering, that in matters of law, health, or travel, authority belongs to institutions, not algorithms. The chatbot wasn’t wrong. It just wasn’t enough.

When I first saw the video of that Spanish influencer couple sobbing at the airport, my instinctive reaction was: isn’t this a bit theatrical? But then again, they are influencers—attention is part of the job. Add to that a cultural layer: many Spaniards naturally speak with their hands, their voices rising and falling like waves. To viewers outside that context, the scene may have felt overacted. Yet the tears and gestures served their purpose: the clip went viral on TikTok, garnering millions of views. In the currency of influence, that counts as success, even if their credibility as sources of reliable information may have taken a hit.

That, unintentionally, is the paradoxical gift they left us: a cautionary tale about how we use—and misuse—AI. Today’s ChatGPT runs on OpenAI’s newest model, GPT-5, which, according to company reports, reduces so-called “hallucinations”—those confidently delivered factual errors—by about 80 percent compared with the earlier o3 model, and by 45 percent compared with GPT-4o. In practice, that means fewer stray inventions, fewer moments when the system slips a fabricated “fact” into otherwise correct text.

Back in the GPT-4o era, you could ask a simple clarifying question—“Wait, is that really true?”—and the chatbot might cheerfully reply, “I imagined that part.” It wasn’t malicious, but it was unnerving. For casual conversation, perhaps tolerable. For work, it was downright frightening.

GPT-5 is better, no doubt. But improvement is not the same as infallibility. The model still draws on the vast, uneven ocean of information available online. If the internet is wrong, the AI will be wrong, too. Unlike a journalist, it doesn’t interview witnesses or confirm with experts. It cannot deliver absolute certainty.

Conventional wisdom tells us: check official sources. But official doesn’t always mean accurate. I learned that the hard way years ago, waiting at a Paris bus stop for an airport shuttle I had already bought a ticket for. The bus never came. Stranded with other travelers, I checked the company’s official website: “operating as usual.” I called the number listed: only a recorded message. Anxiety mounting, I searched further and stumbled upon a blog post bluntly stating the company had gone bankrupt.

At first, I dismissed it—surely the official site knew better than some blog. But unease pushed me to grab a taxi. It was costly, but I made my flight. Later, the news confirmed the bankruptcy. The buses had indeed stopped running; the website had never been updated, the stops never dismantled. How many passengers, I wondered, had stood there waiting, trusting the silence of official channels?

Had I asked ChatGPT at the time, it almost certainly would have reassured me that the shuttle was running. After all, the AI, like most of us, is inclined to treat government pages and corporate websites as gospel.

The Double-Check Principle

That's why, in the end, the last line of defense is not the AI, not even the official site—it's your own judgment. Cross-check, verify, and trust your instincts when the situation feels off. Technology, like travel, rewards efficiency. But both also punish complacency. The influencer couple may have cried in an airport; I nearly cried at a bus stop. Different scenes, same moral. In travel, as in life, double-checking is not paranoia—it's survival.
