Disclosure // Mar 1, 2026

AARO and the Reported 2023 Five Eyes UAP Caucus: What It Could Mean


AUTHOR: ctdadmin
EST_READ_TIME: 23 MIN
LAST_MODIFIED: Mar 1, 2026
STATUS: DECLASSIFIED

The reported 2023 “Five Eyes UAP Caucus” matters for a simple reason: it hints that UAP handling is starting to look less like random, made-for-TV “UFO news” moments and more like routine, allied process. If you’re trying to decide whether “UFO disclosure / UAP disclosure” is real progress or just another spin cycle, you’re not alone. The public keeps running into the same wall: gaps in the record, official non-answers, and reversals that make everything feel staged.

That’s where the frustration splits people into camps. Skeptics see noise and attention economics. Believers see suppression and a long-running government cover-up. The tension the story keeps dodging is spectacle vs process: viral clips and big claims on one side, and slow institutional coordination, oversight, and paperwork on the other. The caucus sits right on that fault line, which is why the only disciplined way to look at it is to separate what’s known, what’s inferred, and what’s unverified. What’s known is the U.S. has an official apparatus for UAP now, with AARO leading U.S. government efforts to address UAP. What’s inferred is that allied coordination becomes more likely once a topic has an office, lanes, and repeatable workflows. What’s unverified is the full substance of the reported caucus itself, because “reported” is not the same thing as documented.

Two process beats show how disclosure actually tends to move. First, the DoD Office of Inspector General released an unclassified summary of a previously classified report evaluating DoD actions regarding UAP. That’s not a cinematic reveal. It’s oversight plumbing becoming visible, the kind of paper trail that exists even when nobody is trying to “disclose” anything. See the DoD OIG unclassified summary for the evaluation (Department of Defense Office of Inspector General, Unclassified Summary: Evaluation of DoD Actions to Assess and Mitigate Unidentified Aerial Phenomena, May 2024; dodig.mil).

Second, reporting on FOIA requests indicates that DoD at times denied holding UAP- and AATIP-related records, then acknowledged them after appeal. That pattern is messy and slow, but it’s also how real information surfaces: requests, denials, appeals, corrections, and eventually a clearer record.

You’ll walk away with a practical way to read future UAP and UFO disclosure signals, so hype doesn’t yank your attention around while the actual process grinds forward. To do that, it helps to get the basic cast of characters straight: AARO on the U.S. side, and Five Eyes on the allied side.

Five Eyes and AARO Explained

This isn’t mystery theater; it’s an institutional setup problem. Once you know who the players are and what their mandates are, an allied “caucus” stops reading like a sci-fi reveal and starts reading like bureaucracy doing what it always does: creating a repeatable channel so the same kinds of reports get handled the same way, every time.

The All-domain Anomaly Resolution Office (AARO) exists to run a disciplined process for unidentified anomalous phenomena (UAP), meaning reported observations that can’t be immediately identified and could touch national security or safety. In practical terms, that work breaks into four deliverables you can actually picture: intake of reports, analysis to sort signal from noise, dissemination of guidance so DoD and Intelligence Community personnel know how to report and what to preserve, and a resolution focus that prioritizes cases with national security and personnel protection implications.

AARO’s outputs aren’t just “because the Pentagon felt like it.” Section 1683 of the FY2023 National Defense Authorization Act directs AARO to produce specified deliverables related to UAP reporting and data collection (see FY2023 NDAA, H.R. 7776, section 1683; text and public law at Congress.gov, enacted December 23, 2022). Separately, the FY2022 NDAA required ODNI and DoD to take initial joint actions on UAP reporting (see FY2022 NDAA, H.R. 4350; text at Congress.gov), and later NDAA language in FY2023 and FY2024 adjusted and expanded interagency obligations (see FY2024 NDAA, H.R. 2670; text at Congress.gov). The punchline is simple: when Congress mandates reporting and data collection, agencies build a workflow that can survive staff turnover and shifting headlines, because they have to keep producing the required products.

Here’s the constraint you should keep in your head as you read anything about AARO: the provided excerpts do not specify AARO’s organizational placement, the AARO director’s reporting chain, or which committees receive the mandated reports. Don’t guess. Treat any confident org-chart claims you see elsewhere as unverified unless they’re backed by the statute text or authoritative DoD or ODNI documentation.

Five Eyes (FVEY) is the intelligence partnership among Australia, Canada, New Zealand, the UK, and the US, built for sharing foreign intelligence collection and analysis, with deep roots in signals intelligence cooperation. In practice, that matters for UAP handling because “unidentified” reports don’t respect borders: the same pattern of sightings, sensor anomalies, or air-safety incidents can show up in multiple countries, and shared analytic habits make it easier to compare like with like.

At the human level, FVEY coordination is less about a single dramatic briefing and more about shared expectations: common terminology, a culture of comparing assessments, and routine points of contact so one country’s “we can’t explain this yet” doesn’t stay trapped inside one inbox.

Procedurally, “caucus” is loose language, not a legally precise term. Read it as shorthand for something like a regular working group, a liaison channel between offices, or a shared analytic framework that lets participants ask consistent questions of inconsistent data. It can also signal a decision to standardize how reports get logged, categorized, and elevated when they implicate defense, air safety, or personnel protection.

What it does not give you, by itself, is the actual content of any specific 2023 meeting: who attended, what was said, what was exchanged, or what agreements were made. Treat “caucus” as a process label, not as leaked minutes.

Guardrails matter here. Allied coordination is not proof of aliens. It’s not proof that a cover-up has been “confirmed.” It’s not even proof that the underlying incidents are extraordinary. It’s evidence that governments are treating the reporting pipeline as serious enough to organize, because mandates and risk management push them there.

The actionable takeaway: when you see future mentions of allied coordination or a UAP “caucus,” treat it as a signal of repeatable workflow, then look for the paperwork trail those workflows tend to create, especially the statute-driven deliverables tied to Section 1683 and the ODNI-DoD joint obligations.

Those definitions narrow the question down to something more concrete: not “could allies ever share,” but whether this specific 2023 caucus is actually on-record anywhere. That’s where “reported” versus “documented” becomes the whole game.

What We Know About the 2023 Caucus

The smartest read on intelligence-adjacent topics is always: what can we actually verify? Here, what isn’t in the public record matters as much as what is, so you need a disciplined “Known / Credibly reported / Unknown” lens instead of treating every repeatable claim as confirmed.

A solid starting checkpoint is AARO’s Historical Record Report, Volume I (AARO, March 2024). If you’re evaluating the “Five Eyes UAP caucus” claim, this is one of the first places to check for any explicit allied-coordination language (even one unambiguous line), and to state plainly whether it’s present or absent.

The other checkpoint is the Department of Defense UAP report covering May 1, 2023 to June 1, 2024 (DoD, June 2024). Per the DoD’s own description, this report also includes UAP reports from previous time periods not previously included, which makes it a good place to look for consistent patterns in how the government describes coordination, data sharing, or external partners.

In addition, consult the DoD Office of Inspector General’s public report listings and the OIG’s unclassified summary of its UAP evaluation (see DoD OIG reports, Unclassified Summary: Evaluation of DoD Actions to Assess and Mitigate UAP, May 2024). These are the primary public checkpoints to determine whether any Five Eyes-specific language appears in official materials.

In the material provided for this draft, there is no excerpt from either official document that confirms any Five Eyes-specific language. So, at this stage, those reports function as checkpoints for what’s on-record, not as proof of a Five Eyes-branded caucus.

What people are referring to when they say “the 2023 Five Eyes UAP caucus” is a reported, secondhand-described event: the idea that a Five Eyes-aligned caucus or briefing structure existed in 2023 in connection with UAP-related work. That’s a meaningful claim, but in this section it stays in the “reported/described” bucket because it’s not backed here with a primary document, transcript, attendee list, agenda, or an on-the-record confirmation.

So treat the caucus claim as reported unless and until drafting turns up primary sourcing: an official publication, a directly attributable statement, or a document that unmistakably ties “Five Eyes” to UAP coordination.

The big friction point is that intelligence and counterintelligence policy treats “sources and methods” as deliberately protected, with dissemination restricted by classification constraints. “Sources” can include people, imagery, signals, documents, databases, and communications media, which makes even basic descriptions of collection and partner coordination easy to withhold without any grand mystery involved.

That gap matters because an absence of Five Eyes language in public reports doesn’t disprove behind-the-scenes allied engagement; it just means you can’t responsibly cite the public record as confirming it.

A small confirmation would still be meaningful here, precisely because this domain is so careful with what it prints. One explicit line acknowledging allied coordination, or a consistent official pattern of naming partner frameworks, would move the caucus claim from “reported” toward “documented,” without requiring anyone to reveal operational details.

The safest way to talk about this is: “A 2023 Five Eyes UAP caucus has been reported, but it’s not independently confirmed in the excerpts we have from official publications. Watch for primary sources that explicitly use Five Eyes language.”

If a caucus does exist, it would still have to operate inside the same constraints that keep public reports thin. So it helps to picture what allied sharing would look like when it’s done as a workflow, not as a headline.

How Allied UAP Sharing Would Work

If allied UAP sharing is going to mean anything operational, everyone has to agree on three boring things up front: what gets collected, how it gets packaged, and what can’t be revealed. Without that, you’re not exchanging cases, you’re swapping stories, and every partner re-litigates the same basics from scratch.

The security twist is the part most people miss: the best evidence is often the least shareable, because the “extra context” that makes a clip analyzable is the same context that exposes sensitive collection.

Think of a serious UAP report like trying to troubleshoot a system outage. A screenshot helps, but you fix the problem with the logs: timestamps, configuration, and what the system thought it was doing at the time. Cross-border analysis works the same way. A usable case file needs consistent fields so another country’s analysts can recreate the geometry and test hypotheses, not just react to a narrative.

  • Common reporting formats: time (with timezone), location, altitude bands, observer platform (ship, aircraft, ground), weather, and a plain-language description that distinguishes “what was seen” from “what was inferred.” Consistency changes outcomes because it enables quick correlation against known traffic, scheduled tests, and other sensors, instead of treating each report as a one-off mystery.
  • Sensor context, not just the clip: which sensor captured it (EO/IR video, radar, ESM), the mode it was in, and whether there were supporting tracks. “We saw it on radar too” only helps if the report notes what radar, what track parameters, and the time alignment.
  • Metadata and provenance: who handled the data, when it was copied, and whether the file is first-generation. Analysts care because a video with missing context is easy to misread, and a video with unclear handling is easy to dismiss.
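
To make the “common fields” idea concrete, here is a minimal sketch of what a shareable case record could look like as a data structure. All field names are illustrative assumptions, not an official AARO or Five Eyes schema; the point is that consistent, named fields make missing context machine-detectable.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical case-record sketch; field names are illustrative only,
# not an official AARO or FVEY reporting format.
@dataclass
class CaseReport:
    observed_utc: str             # time with explicit timezone, e.g. "2023-05-01T14:32:00Z"
    location: tuple               # (latitude, longitude)
    altitude_band: str            # e.g. "FL200-FL250" or "surface-1000ft"
    observer_platform: str        # "ship", "aircraft", "ground"
    weather: str
    what_was_seen: str            # plain-language observation
    what_was_inferred: str        # kept separate from the observation
    sensors: list = field(default_factory=list)   # e.g. ["EO/IR", "radar"]
    provenance: Optional[str] = None              # handling chain, first-generation or copy

def missing_fields(report: CaseReport) -> list:
    """Flag empty fields so a partner analyst knows what context is absent."""
    return [name for name, value in vars(report).items() if not value]
```

With a structure like this, a receiving analyst does not have to guess what is absent: an empty `weather` or `provenance` field shows up immediately, instead of surfacing mid-analysis.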

That last point is where sources and methods stops being a slogan and becomes a workflow constraint: the more context you add to make a case analyzable, the more likely you are to expose something sensitive about collection or handling.

Once you have a consistent package, you still need a shared bar for what earns the “anomalous” label. Without analytic thresholds, misidentifications multiply: parallax gets called acceleration, glare gets called “aura,” and a missing altitude estimate turns “small and close” into “large and far.” Standards don’t eliminate uncertainty, they eliminate avoidable disagreement.

DoD counterintelligence definitions treat Sources and Methods as distinct categories, and they explicitly frame access as something to manage within “limits of acceptable risk.” That’s the logic behind partial releases: you might share the conclusion or a sanitized clip, but not the collection geometry, sensor performance details, or processing chain that would let someone reverse-engineer how you see.

The second friction point is classification and compartmentation, meaning the formal classification level plus the need-to-know compartments and release controls that gate access even inside friendly systems. In practice, markings and controls like NOFORN (no foreign release) can override broader “release to” intent, and compartment rules can limit who can even learn a program exists, much less export its raw data.

Put those together and you get the uncomfortable reality: the “most convincing” part of a dataset is often exactly what can’t travel. Allies can still cooperate, but the shareable product is frequently a curated package, not the whole sensor stack.

The U.S. Navy’s UAP guidelines are a good example of what a formal intake culture looks like in practice. The value isn’t that a guideline proves anything extraordinary. The value is that it normalizes reporting, prompts the right contextual details, and creates a repeatable path from “we saw something” to “here’s a case file others can analyze.” The Navy has continued reporting under those guidelines, which is exactly what you’d expect when an organization wants trendable data instead of scattered anecdotes.

On the technical interoperability side, MISB ST 0601 is an illustrative standard because it describes how motion imagery can carry consistent metadata alongside the video. MISB is the designated authority for motion imagery under the GEOINT Functional Manager, and ST 0601 specifies the UAS Datalink Local Set, described as an extensible SMPTE construct. You don’t need to memorize fields to get the point: standard metadata turns “a clip on the internet” into “video tied to time, position, and sensor context,” which is what makes cross-border analysis efficient.
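
To see why embedded metadata changes the analytic picture, consider a sketch of per-frame metadata carried alongside video. The key names below echo ST 0601-style items (a precision time stamp, sensor position) but this is a plain illustrative dict, not a KLV encoder or a real ST 0601 implementation.

```python
# Illustrative only: a few metadata items echoing MISB ST 0601-style names.
# This is a plain dict sketch, not a real KLV encoder/decoder.
frame_metadata = {
    "precision_time_stamp_us": 1684938720000000,  # microseconds since epoch
    "sensor_latitude": 36.1023,
    "sensor_longitude": -75.4899,
    "sensor_true_altitude_m": 6100.0,
    "platform_heading_deg": 274.5,
}

def can_correlate(meta: dict) -> bool:
    """A clip is cross-referenceable only if it carries time and position."""
    required = {"precision_time_stamp_us", "sensor_latitude", "sensor_longitude"}
    return required.issubset(meta)
```

A bare clip scraped off the internet fails `can_correlate` immediately, which is exactly the gap between “a clip on the internet” and “video tied to time, position, and sensor context.”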

Public “UAP sightings” and “UFO sightings” stay ambiguous because the public usually sees the clip, while analysts need the full data package: the original file, the sensor metadata, the chain-of-custody, and the supporting tracks. When governments release less than you want, the clean inference is often sensitivity, not necessarily “nothing there.”

The practical way to watch new “UFO news” is to ask a pipeline question: what sensor context and metadata would another country need to treat this as a real case, not a story? If that information would expose sources and methods or run into classification and compartmentation walls, you’ve learned why the public version stays thin.

All of that explains why information moves slowly even when the underlying work is real. The other piece that determines speed and consistency is external pressure: specifically, what Congress mandates and what agencies are required to brief.

Congressional Pressure and Disclosure Laws

Oversight, not hype, is what changes agency behavior. The biggest driver of “disclosure” momentum isn’t a single leak or one dramatic witness, it’s the grind of Congress forcing repeatable workflows: reports that have to ship on a schedule, briefings that have to happen in a secure room, and tasking that has to be coordinated across agencies year after year.

The frustrating part is baked into the model: Congress can demand more process while the public still sees very little. Most of what gets produced is classified, and that gap between “more oversight is happening” and “nothing new is public” is exactly where cover-up narratives thrive. The practical reality is simpler: if you want durable change, you watch the mandates, not the headlines.

The National Defense Authorization Act (NDAA) UAP provisions matter because annual defense authorization language turns UAP work into assigned duties with deadlines, owners, and required coordination, not a hobby project that can be deprioritized. Those FY2022 and FY2023 NDAA requirements that put ODNI and DoD on joint UAP-related obligations are the quiet engine: once ODNI and DoD have to co-produce outputs, they also have to reconcile definitions, sources, and “who owns what” decisions inside the bureaucracy (see FY2022 NDAA text at Congress.gov and FY2023 NDAA text at Congress.gov).

That coordination pressure spills outward. When the U.S. system has to standardize how it collects and briefs on UAP-related events, alignment with allies becomes more likely and more valuable, not because anyone is staging an international reveal, but because shared problem sets reward shared formats. If U.S. reporting and briefings get more structured, it is easier to compare like-for-like with allied reporting and to ask sharper questions in secure channels.

Briefing requirements are the other lever that actually moves behavior. The FY2024 NDAA directs AARO to provide expanded congressional briefings on UAPs, specifically targeting any UAP intercepts (see FY2024 NDAA text at Congress.gov, enacted into public law in 2023). “Intercepts” is the kind of word that pulls operational detail into formal oversight channels, including timelines, sensor types, rules of engagement constraints, and what was done after the event. And the FY2024 NDAA is not a talking point, it was enacted as the public law authorizing appropriations for FY2024 for DoD activities and military construction. In other words: real law, real oversight hooks.

Public moments like the House Oversight hearing, plus the visible interest from figures such as Tim Burchett, Anna Paulina Luna, and Eric Burlison, function less as plot and more as pressure indicators. They signal that members are willing to spend political oxygen demanding briefings and documentation, which is the part agencies cannot ignore.

Proposals are where Congress test-drives concepts that agencies then have to plan around, even if the text never becomes law. The Senate Congressional Record entry for July 29, 2025 references proposed text described as the “UAP Disclosure Act of 2025” that includes language about “non-human intelligence” (see Senate Congressional Record, July 29, 2025; consult the Senate Congressional Record for the precise page and section). Even when this is proposal language rather than enacted law, its presence in the Record can shape expectations and internal agency preparations. For historical Congressional documents and the daily Congressional Record, see Congress.gov.

That’s how bureaucracy shifts. Even the possibility that Congress will formalize a term can trigger internal lawyering, records management conversations, and pre-brief prep, because nobody wants to be the office that gets caught flat-footed when the next cycle’s NDAA negotiations harden the language.

Whistleblower pathways are the human pipeline that feeds oversight, but the key is process, not verdicts. Intelligence Community whistleblowers face heightened risk of adverse employment actions, security-clearance consequences, or legal actions, which is why protected channels and classification-safe handling are central to whether Congress gets usable information at all.

If you bring up David Grusch, treat him as an oversight catalyst, not a fact pattern. His allegations are allegations. What matters mechanically is that high-profile allegations increase congressional demand for documented briefings and for agencies to show their work inside AARO and other oversight touchpoints.

The takeaway is simple: track what’s mandated and whether it gets stronger every year. Real movement looks like mandated briefings that actually occur, a consistent reporting cadence that survives election cycles, and durable language that stays in enacted text, not just in ambitious proposals.

Mandates and briefings can feel abstract until they show up as day-to-day triage: hundreds of reports coming in, getting sorted, and getting labeled more consistently. That’s where allied coordination – if it’s real and routine – starts to produce visible downstream effects.

What This Could Change Next

This is an inference, not a proven outcome: routine allied coordination would likely make the easy cases disappear faster because standardization and shared triage let analysts knock out common explanations earlier. That inference is consistent with how standardization improves triage in other safety and intelligence contexts (for example, aviation incident reporting and analysis such as NASA’s Aviation Safety Reporting System and ICAO reporting standards help filter routine causes so investigators focus on the anomalies; see NASA’s ASRS site and ICAO).

AARO’s own intake scale shows why that matters. It received 757 new incident reports from May 2023 to June 2024. Of those 757, 485 occurred in that window and 272 were older submissions from 2021 and earlier. When you’re sorting hundreds of reports across different years, basic coordination and shared triage do not create “more UFOs.” Instead, they make the obvious stuff stop clogging the pipe.
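
The intake figures above break down cleanly, which is worth a quick sanity check: the backlog of older cases is a substantial share of the total, not a rounding error.

```python
# AARO intake figures cited above (May 2023 - June 2024 reporting window).
total_new_reports = 757
in_window = 485          # occurred during May 2023 - June 2024
older_submissions = 272  # cases from 2021 and earlier, submitted late

assert in_window + older_submissions == total_new_reports
print(f"older backlog share: {older_submissions / total_new_reports:.0%}")  # prints "older backlog share: 36%"
```

Roughly a third of the intake was historical backlog, which is exactly the kind of pile shared triage is good at clearing.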

AARO has already identified balloons, birds, and unmanned aerial systems (UAS) as explanations for many sightings. If allied partners start treating those categories consistently, “unknown” becomes a harder bucket to stay in.

If coordination becomes routine, you get fewer countries running the same analysis in parallel. That reduces duplicated effort and speeds up debunking and attribution. A single incident that bounces between “balloon” and “drone” depending on who looked at it is exactly the kind of confusion standardization cleans up.

In practice, the most plausible win is faster sorting of common culprits: balloons, birds, UAS, satellites, consumer drones, and occasional adversary tech. The outcome is less chaos online because the first institutional answer arrives sooner, with a more consistent label attached.

You could also see a more consistent public reporting cadence, with comparable coverage windows. Not “what the next report will say,” but the simple ability to compare like with like across periods, the way “May 1, 2023 to June 1, 2024” is a defined slice of time you can line up against the next slice.

The catch is that better coordination can look like more secrecy from the outside. Intelligence organizations are built to protect sources and methods and to restrict dissemination under classification constraints. When multiple allies are aligned, the incentive to sanitize public detail gets stronger, not weaker.

That kind of message discipline keeps capabilities and access “within acceptable risk limits,” but it also feeds the perception loop: fewer specifics in public, louder claims of a “government UFO cover-up.” Both can be true at the same time: better internal clarity, thinner external narrative.

Meanwhile, “UFO sightings 2025” and “UFO sightings 2026” will keep trending. Your best way to read those cycles is to watch the structure, not the hype: are reporting periods comparable, is the cadence consistent, and does the “unknown” bucket shrink while identification buckets (balloons, birds, UAS, and more specific categories) get sharper over time?

A Practical Lens for Future UAP Claims

The real significance of the reported Five Eyes UAP caucus is institutionalization: a repeatable, allied way to collect, triage, and brief on UAP, not a single “smoking gun” moment that settles everything.

That’s why the public picture still looks messy even when the internal machinery gets sharper. The sharing model runs straight into hard limits: protecting sources and methods, plus classification and compartmentation that restrict what can be broadly distributed, even inside government. And the political lever that actually moves behavior is oversight and mandates, not headlines, because reporting requirements and compliance pressure change what gets logged, reviewed, and escalated.

Over time, better filtering should shrink the pile of “mystery” cases by catching mundane explanations earlier. But the perception of secrecy can persist anyway, because the most sensitive cases will stay the least explainable in public.

AARO’s own intake guidance is the best yardstick because it’s built for analysis, not virality. Use it like a scoring rubric:

  1. Identify what sensors detected it (AARO asks for the sensor type, such as visual or radar).
  2. Demand the full sensor package and context: all still images, video, and any sound recordings AARO requests, plus basic situational details.
  3. Verify whether the claim shows multi-sensor correlation and whether the data has time sync, calibration, and clear provenance (who collected it, how, and when).
  4. Classify the source of the claim: official process artifact (report language, briefing phrasing) versus pure rumor.
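
The four-step rubric above can be sketched as a simple scoring function. The thresholds and field names here are assumptions for illustration, not AARO criteria; the point is that each step is a checkable property, not a vibe.

```python
# Illustrative scoring sketch for the four-step rubric; field names and
# weighting are assumptions, not official AARO criteria.
def score_claim(claim: dict) -> int:
    score = 0
    if claim.get("sensor_types"):                              # 1. sensors identified (visual, radar, ...)
        score += 1
    if claim.get("full_package"):                              # 2. stills, video, audio, situational details
        score += 1
    if claim.get("multi_sensor") and claim.get("provenance"):  # 3. correlation plus time sync/provenance
        score += 1
    if claim.get("source") == "official":                      # 4. official process artifact vs rumor
        score += 1
    return score  # 0 = pure rumor, 4 = well-documented process artifact

viral_clip = {"sensor_types": [], "source": "rumor"}
documented_case = {"sensor_types": ["radar", "EO/IR"], "full_package": True,
                   "multi_sensor": True, "provenance": "report language",
                   "source": "official"}
```

A viral clip with no sensor context scores zero; a case backed by official report language and correlated sensor data scores four. Most real claims land somewhere in between, which is the useful part.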

Don’t expect fast public releases, either. The DoD issues formal guidance governing classification and declassification of DoD information requiring protection in the interest of national security, and automatic declassification is declassification based on a specific date or event determined by the original classification authority.

What to watch next is simple: the exact wording that shows up in hearings and closed briefings that later gets summarized, enacted statute requirements that force repeatable reporting, AARO’s public updates as its casework matures, and any credible multi-nation confirmations that align on the same underlying data. If you want, subscribe to my newsletter and I’ll flag the genuinely process-backed updates, not the rumor-cycle spikes.

Frequently Asked Questions

  • What is AARO and what does it do for UAP reports?

    The All-domain Anomaly Resolution Office (AARO) runs a formal U.S. government process for handling unidentified anomalous phenomena (UAP) that could affect national security or safety. The article describes four core deliverables: intake of reports, analysis, dissemination of reporting/preservation guidance, and resolution prioritizing national security and personnel protection.

  • What is the Five Eyes (FVEY) alliance and why would it matter for UAP?

    Five Eyes is the intelligence-sharing partnership among Australia, Canada, New Zealand, the UK, and the US, with deep roots in signals intelligence cooperation. The article’s point is that UAP reports can span borders, so shared terminology and repeatable points of contact make cross-country comparison and triage easier.

  • Did official AARO or DoD reports confirm a 2023 “Five Eyes UAP caucus”?

    No. Based on the excerpts discussed, there is no Five Eyes-specific language shown from AARO’s Historical Record Report, Volume I (2024) or the DoD UAP report covering May 1, 2023 to June 1, 2024. The article treats the caucus as “reported” but not independently documented with primary sources like an agenda, attendee list, transcript, or official statement.

  • What laws or mandates drive AARO and ODNI/DoD UAP reporting workflows?

    Section 1683 of the FY2023 NDAA directs AARO to produce specific UAP reporting and data-collection deliverables. The FY2022 NDAA (as amended by the FY2023 NDAA) also requires ODNI and DoD to perform joint UAP-related obligations, pushing repeatable interagency workflows.

  • What information needs to be in a “serious” UAP case file for allied sharing?

    The article says usable packages include common report fields (time with timezone, location, altitude bands, platform, weather, and what was observed vs inferred), plus sensor context beyond a clip (which sensor, mode, supporting tracks) and metadata/provenance (who handled the data and whether it’s first-generation). It also notes MISB ST 0601 as an example of a motion-imagery metadata standard that ties video to time, position, and sensor context.

  • How many UAP incident reports did AARO receive from May 2023 to June 2024, and how many were older cases?

    AARO received 757 new incident reports in that period. The article states 485 occurred within the May 2023-June 2024 window, while 272 were older submissions from 2021 and earlier.

  • How can I tell if new “UFO disclosure” news is real process or just hype?

    The article’s rubric is to check for process artifacts: identified sensor types (visual, radar, etc.), a full sensor package and context, multi-sensor correlation with time sync/calibration and clear provenance, and whether the claim is tied to official report/briefing language rather than rumor. It also advises watching statute-driven deliverables and mandated briefings (including FY2024 NDAA language requiring expanded AARO briefings, specifically targeting UAP intercepts).
