Our AI cybersecurity fears are (mostly) real
Five Cyber Stories - April 19, 2026 - Issue 005
Howdy! Welcome to this week's issue of Five Cyber Stories, where I share five stories, every week, about how cybersecurity affects our non-digital lives. This week I have fascinating stories about snarky crosswalk signals, whether the hype around Anthropic's Mythos model is real, and more.
Though, before we jump in, I wanted to note that I'm switching up the format this week. Most stories will have shorter summaries: a brief description of the story and a quick explanation of why I think it's worth your valuable time. One story (story #1) will still get a more thorough summary. Let me know if you like or dislike the changes by replying to this email!
With that, here we go!
1. Is it really too powerful?
Follow up: Last week, my number one story was Anthropic's release of its reportedly powerful AI model Mythos. The model's release was still in the news ($) at the start of the week, and it remained in the news cycle ($) through the end of the workweek. Even Anthropic's chief rival, OpenAI, joined in the action.
My analysis: With all the press, I personally worried I had fallen for AI marketing hype. Anthropic has incentives ($) to sell their services after all. So, I set out this week to sort through the noise, and I've summarized my conclusions here after reading over two dozen stories, just for you, dear readers.
TLDR: Our fears are warranted but in ways I didn't expect.
Leaders from business and civil society reiterated the concerns about Mythos throughout the week. CEO Jamie Dimon of major bank JP Morgan Chase, notably part of Anthropic's Project Glasswing, seemed to echo the alarm, and the Cloud Security Alliance (CSA) published a paper on mitigation strategies with contributions from major industry figures. On Friday, Anthropic's CEO met with White House Chief of Staff ($) Susie Wiles and Treasury Secretary Scott Bessent to discuss Mythos. This is despite the Trump administration's ongoing feud with the AI company. Politico reported that multiple agencies are trying to "skirt" the White House's restrictions on Anthropic's products. A congressional aide even told Politico that the Pentagon has "shot itself in the foot by giving the middle finger to the most capable AI provider."
Still, there are certainly doubts that Mythos is all that Anthropic says. Many of those voicing concern have a stake in Anthropic's success or are in a position to benefit from the general AI hype. The Trump administration has been friendly to AI interests ($), and JP Morgan Chase's Chief Information Security Officer, Pat Opet, is featured in Anthropic's Project Glasswing announcement. The lack of detail regarding the "thousands of high-severity vulnerabilities" has also fueled questions. Jessica Lyons from The Register pointed out the lack of documentation, and Jon Martindale from Tom's Hardware heaped a hefty amount of skepticism ($) on the hype. Some have rightly pointed out that "...we can't rely..." on AI companies' claims about their products.

Lucky for us, the United Kingdom's AI Security Institute (AISI) analyzed Mythos's capabilities. Their report stated that Mythos "...is at least capable of autonomously attacking small, weakly defended and vulnerable enterprise systems where access to a network has been gained." But they "...cannot say for sure whether Mythos Preview would be able to attack well-defended systems." The Cloud Security Alliance (CSA) put it this way: "AI lowers the cost and skill floor for discovering and exploiting vulnerabilities faster than organizations can patch them." So even though the heavy-hitters of cybersecurity are probably safe for now, it's becoming easier than ever to launch cyber attacks. My favorite quote from the CSA: "Using a coding agent is now easier than using Excel."
While this analysis is something of a relief from my apocalyptic fears of broken power grids, water treatment, and financial systems, there's still a lot of risk here. Mozilla's Chief Technology Officer, Raffi Krikorian, argued in a New York Times op-ed ($) that open source software is particularly vulnerable. He writes,
"...the most valuable software infrastructure in the world continues to be maintained by people working for free, while the companies building fortunes on top of it never had to pay for its upkeep. Now a powerful new capability has arrived — and as we’ve seen repeatedly in tech, there’s the risk that organizations with resources will receive it first and learn to protect themselves, while others are left vulnerable."
Krikorian goes on to exhort society to give these new tools to those freely maintaining open source software, which he considers digital critical infrastructure. Though Anthropic did grant some open source maintainers access to Project Glasswing, that access covers only a fraction of the open source community. The risks are real. As I've written previously, the effects of open source software cyberattacks can be far reaching.
If any doubts remain, consider that older AI models thought to be less powerful than Mythos are already capable of more efficiently finding and exploiting software vulnerabilities. It's also likely that other models will catch up to Mythos within years, if not months. Mohan Pedhapati, Chief Technology Officer of Hacktron (disclaimer: Hacktron is an AI cybersecurity firm), says:
"Whether Mythos is overhyped or not doesn't matter," said Pedhapati. "The curve isn't flattening. If not Mythos, then the next version, or the one after that. Eventually, any [novice hacker] with enough patience and an API key will be able to [hack] unpatched software. It's a question of when, not if."
So the hype seems real, just not in the way I expected. But, in some ways, that scares me more. How confident are any of us that our children's school uses software systems that are "well-defended"? Our doctor or dentist? How about our local government and police department? I hope we don't find out via the local news at 5.

2. Crosstalk at the crosswalk
Reporting by Wired's Paresh Dave
Wait: Last year, a number of crosswalk signals were hacked. The hacks first took place in a number of cities in the San Francisco Bay Area, and they eventually also affected signals in Seattle. The hack happened again just last month in Denver. While the effects were relatively harmless, the hacker did replace the usual voice commands ("Wait!", "Walk!") with "spoofed" voices of Mark Zuckerberg, Elon Musk, and Jeff Bezos.
Walk: While watching videos featuring the spoofed voices is a fun time, I recommend checking out the article for more serious reasons. This is a great example of this newsletter's thesis: cybersecurity affects everyone's non-digital lives. The signals largely still worked even with the hack, but it's easy to imagine an alternate scenario. To top it off, it seems the hack may have happened with the help of unchanged default passwords such as "1234". Best practices for passwords matter even when crossing the street.

3. A head scratcher
Reporting from The Verge's Sean Hollister
Conditions: A few weeks ago, I shared the story of the FCC's ban on future routers made outside the U.S. The Verge's Sean Hollister is back this week with an update on the ban, which unfortunately seems to make less sense than before. The FCC issued "Conditional Approvals" for NETGEAR and Adtran Inc. routers for reasons that remain unclear. The Verge reached out to the FCC and NETGEAR to ask whether the company had followed the steps in the Conditional Approval application process, but NETGEAR has yet to reply.

Approved?: I previously wrote that a number of sources have said the router ban likely won't instantly make us safer. Routers made by American companies are still vulnerable to attacks, and these Conditional Approvals, so far, don't seem to be the result of NETGEAR onshoring its manufacturing; that's based on NETGEAR's own account. So, we're yet again left with more questions. Have NETGEAR and Adtran taken steps to address the stated supply chain risks? If not, what other factors led to their Conditional Approvals?

4. Section 702: Will it stay or will it go?
Reporting from TechCrunch's Zack Whittaker
The act: The United States' Foreign Intelligence Surveillance Act (FISA), along with its Section 702, was set to expire on Monday, April 20th. While the Trump administration seemingly wanted it renewed without changes, Zack Whittaker reports there is bipartisan support on the Hill to make changes. On Friday, Congress voted ($) to punt the deadline to April 30th, delaying the decision.
The motion: This vote will affect the privacy of all Americans. The Fourth Amendment protects against "unreasonable searches", but Section 702 allows intelligence agencies to hoover up Americans' data without a warrant if said data leaves the country. That could apply to you if you've ever messaged someone outside of U.S. borders. Senator Ron Wyden from Oregon has also said that there's an interpretation of the statute that "directly affects the privacy rights of Americans." I believe we should all pay attention to whether this passes.
And to highlight how this is a bipartisan issue even in these polarized United States, Senators Lee (UT) and Durbin (IL) co-wrote an op-ed in The New York Times arguing to reform FISA and Section 702.

5. Seeing privacy differently ($)
Reporting by Wired's Dell Cameron
Demands for privacy: At the start of last week, Wired reported ($) that a number of advocacy organizations were calling on Meta to terminate the Name Tag feature over broad privacy concerns. Meta makes smart glasses in partnership with EssilorLuxottica (owner of Ray-Ban and Oakley), and Name Tag would bring facial recognition to Meta's eyewear. This comes in the context of the New York Times' reporting ($) that Meta thought the current "political environment" was ideal for releasing such a feature, with opposition groups likely distracted. Organizations against this application of facial recognition include the ACLU, the Electronic Privacy Information Center (EPIC), the New York State Coalition Against Domestic Violence, Common Cause, and others.
An epiphany: I originally read this story and moved on, concerned about the implications. Then, my own life collided with it. I was at church in an informal discussion about everyone's personal struggles, and I realized someone was wearing Meta smart glasses. The eyewear is subtle. Despite the glasses' recording indicator light being off, I suddenly felt less comfortable opening up in light of Wired's reporting. Sharing my stress and worries in private is one thing, but I'd rather avoid even the faint possibility of broadcasting my cares to the world. Suddenly the fears of advocacy groups became very relatable.
A few weeks ago, I wrote about self-surveillance and the societal questions surrounding it. Where and when is this tech appropriate? Who gets to decide? These questions feel as pressing as ever.
Wrapping up
That's all our main stories for this week. Some interesting articles I read that didn't make the cut include Signal message extraction by the FBI, bug bounties, World Cup security, and banning the sale of precise geo-location data. Did I miss anything? Let me know here. Until next week, thanks for reading.
Danny