Today on LinkedIn, Artur Kurasinski shared a story about Summer Yue - Meta’s director of safety and alignment - who let an AI agent manage her inbox. The agent was told to “suggest what to archive or delete, but don’t execute without confirmation.” It started deleting everything.
People in the comments were amused. Skeptical. Some said it was a publicity stunt.
I wasn’t amused. Because three hours before I read that post, my own AI agent did something arguably worse.
It applied for a job at the US Secret Service.
How It Started
I was scrolling Facebook and saw a CIA recruitment post. Just the kind of thing you scroll past. Except I didn’t scroll past - I fed it to my AI agent, which was already doing defense sector research for Zbigniew Protocol, an open-source framework I built that forces AI systems to verify their sources.
The agent took that as a green light. It found a USAJOBS posting: IT Specialist (Artificial Intelligence), US Secret Service. GS-13. $121,785 - $158,322. Hidden in a JSON file on usajobs.gov.
And then it did what agents do. It started executing.
What the Agent Actually Did
Within 45 minutes, my AI agent had:
- Analyzed the job posting - extracted all requirements, salary bands, clearance levels, closing date
- Pulled my employment history from a JSON Resume file in my code repository - 14 roles going back to 2004
- Built a USAJOBS-format resume - 2 pages max, MM/YYYY dates, optimized for the GS-13 AI Specialist requirements
- Applied Zbigniew Protocol to the resume itself - every single claim tagged with a source. 28 verified claims, 1 labeled inference, 0 unsourced assertions
- Drafted outreach messages to Polish defense contacts, EU defense fund contacts, and NATO innovation programs
- Opened Gmail, typed the recipient address, composed the email, and waited for my approval to hit Send
All I did was say “apply for the job.”
The Source Compliance Section
Here’s the part that matters. At the bottom of my AI-generated resume, there’s a section you won’t find on any other application:
SOURCE COMPLIANCE (Zbigniew Protocol applied to this document)
28 sourced claims, 1 inference labeled, 0 items flagged for verification.
Verified claims (source: ~/code/cv-to-form/sample_cv.json):
- 20+ years experience [Source: first role 02/2004]
- SoftServe Senior Developer 10/2024-07/2025 [Source: JSON + LinkedIn]
- Credit Suisse AVP 05/2017-12/2018 [Source: JSON]
- Zbigniew Protocol on GitHub [Source: github.com/maciejjankowski/zbigniew - verified]
Inference (labeled):
- "100+ developers mentored" [sum across 4 organizations over 6 years; plausible but not precisely counted]
Unsourced claims: None
Every claim in my resume has a paper trail. Every inference is marked as inference. Every number can be traced to a source file.
This is what AI-assisted document preparation should look like. Not an agent that makes things up and formats them nicely - an agent that shows its work.
The Punchline
There are two.
First: I’m Polish. The job requires US citizenship, Top Secret clearance, and a counterintelligence polygraph. I was never going to get it.
Second: The posting closed before we could submit. The deadline was February 24. By the time my agent had everything ready, the announcement said “This job announcement has closed.” They’d hit their 150-application cap.
So my AI agent built the most rigorously source-verified resume in the history of federal job applications, for a position in a country I’m not a citizen of, and delivered it 3 hours too late.
If that isn’t a metaphor for the current state of AI agents, I don’t know what is.
The Actual Point
Summer Yue’s agent deleted her emails because it had access without verification. No audit trail. No source compliance. No way to reconstruct what happened or why.
My agent prepared a Secret Service application because I told it to. The difference? I can show you exactly what it did, exactly which data it used, and exactly which claims are verified vs. inferred.
The problem isn’t that AI agents do things. That’s their job.
The problem is that most AI systems have zero accountability for what they claim to be true.
When an AI writes your resume, who checks if the claims are real? When an AI prepares an intelligence brief, who verifies the sources exist? When an AI deletes your emails, who logged the decision chain?
Audit trail > permission prompt.
A confirmation dialog didn’t save Summer Yue’s inbox. The agent ignored it. What would have saved it: a system that logs every action, sources every decision, and flags anything unverified before execution.
That’s what Zbigniew Protocol does. Not just for resumes - for any AI output where truth matters.
Try It
The protocol is open-source: github.com/maciejjankowski/zbigniew
Load CORE.md into any AI session. Watch it start tagging every claim with a source. Watch it refuse to fabricate statistics. Watch it end every output with a compliance tally.
Then ask yourself: would you rather have an AI that asks permission before deleting your emails, or one that can prove it never lied to you?
Maciej Jankowski is a digital transformation consultant at Future Processing and creator of the Zbigniew Protocol. He has 20+ years in software [Source: cv-to-form/sample_cv.json, first role 02/2004] and has never worked for the Secret Service [Source: reality].