Disclosure // Mar 1, 2026

AARO’s Record 2024: 757 New UAP Reports, 1,600+ Cases Total in Annual Report


AUTHOR: ctdadmin
EST_READ_TIME: 23 MIN
LAST_MODIFIED: Mar 1, 2026
STATUS: DECLASSIFIED

The headline numbers are real, but your interpretation is where people get misled. Right now you’re seeing “UFO disclosure” and “UAP disclosure” everywhere, and the coverage blurs two very different things: the government learning something new versus the government collecting and organizing more reports. In the FY24 Consolidated Annual Report on UAP, the Pentagon’s All-domain Anomaly Resolution Office (AARO) says it received 757 UAP reports during the reporting period (see the FY24 Consolidated Annual Report on UAP, Department of Defense / AARO: https://www.aaro.mil/FY24_Consolidated_Annual_Report_on_UAP.pdf; see the report’s executive summary and data annexes for the figures cited below).

Here’s the nuance the headlines often skip: of those 757 reports, 485 involved incidents that actually occurred during the same reporting period. The rest are reports of older events that were submitted later, which is exactly why a spike in totals doesn’t automatically mean there are suddenly more objects in the sky. A bigger number can reflect better reporting channels, higher awareness, improved intake, or simply more people choosing to file what they saw instead of keeping it to themselves.

Zooming out makes the “disclosure” narrative feel a lot less mystical and a lot more bureaucratic: as of June 1, 2024, AARO’s cumulative case inventory exceeds 1,600 UAP cases (see FY24 Consolidated Annual Report on UAP, Department of Defense / AARO, executive summary and inventory notes). And the document itself isn’t a leaked bombshell, it’s a Department of Defense report produced to meet a congressional transparency and oversight requirement. This article breaks down what AARO actually tracks, why “reports” and “incidents” aren’t interchangeable, and how oversight and evidence-quality limits shape what the public ends up seeing.

What AARO Does and Tracks

Those top-line counts only make sense once you know what AARO is designed to do. AARO exists to do something that sounds boring but matters a lot: turn messy, inconsistent Unidentified Anomalous Phenomena (UAP) information into something the Defense Department can track, assess, and brief responsibly. “UAP” is a label for things that aren’t identified yet, not a conclusion about extraterrestrials. The office’s value lies in oversight, safety, and intelligence awareness: making sure odd observations are captured, compared, and escalated when they point to a real risk.

AARO sits within the Office of the Secretary of Defense and is the single office tasked with receiving and managing UAP information for DoD and congressional oversight. That placement tells you what problem it’s built to solve: give senior defense leadership and Congress a single, accountable shop that can receive UAP information and manage it as a continuing portfolio, not as a viral headline of the week.

AARO isn’t an open public tip line. It accepts UAP-related submissions from current or former U.S. Government employees, service members, or contractors. That matters because it sets the lane: people who had access to government systems, missions, or data are the intended sources, not random internet sightings.

The scope also includes something the public often misses: submissions can cover allegations about U.S. Government programs or activities related to UAP. In other words, it’s not limited to “I saw something in the sky.” It can also be “I worked around a program I believe relates to UAP, and here’s what I can describe.”

And a “UAP report” is the submitted account or dataset, not the event itself. A report can show up days, months, or years after the underlying incident, especially if someone didn’t know where to send it, didn’t feel comfortable reporting earlier, or only later realized their sensor data or notes might matter.

People tend to read every new entry like it’s a brand-new object. AARO’s tracking reality is more administrative than that. An incident is the underlying event in time and space, like a specific observation during a specific flight, patrol, or sensor window. Multiple people can file separate UAP reports about the same incident, and those reports might arrive at different times and with different attachments.

Once those items hit intake, they don’t stay as a pile of one-offs. They get organized into a case inventory, meaning AARO’s running set of tracked cases across years. As new information arrives, a case can be updated, duplicates can be recognized and merged, and separate submissions can be linked to the same incident. Some cases remain open because the available data never gets past “interesting but not attributable.” Others are closed when enough information supports an explanation or the remaining data is too thin to take further.

The practical takeaway: growth in total tracked cases reflects the ongoing management of incoming reports and legacy items, not a simple one-to-one count of “new craft this year.”

There’s also a built-in visibility gap. NDAA-driven requirements push AARO to provide quarterly reporting to policymakers, and those briefings can include greater detail on analysis and attribution than what can typically be released publicly. Public annual releases are written for broad consumption and shaped by classification and sources-and-methods constraints, so you’re often seeing the outline of the inventory, not the most specific reasoning inside it.

If you keep the terms straight, UAP report (submission), incident (event), and case inventory (the managed set), AARO’s numbers stop looking like a scoreboard for aliens and start reading like what they are: a tracking system for uncertainty that leadership can actually oversee.
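To make the three terms concrete, here is a minimal sketch of how reports, incidents, and a case inventory relate. This is purely illustrative: AARO has not published its internal data model, and every class and field name below is hypothetical, chosen only to mirror the vocabulary in this section.

```python
from dataclasses import dataclass, field

# Hypothetical model, not AARO's actual system: a Report is a submission,
# a Case tracks one underlying incident, and the inventory is the managed set.

@dataclass
class Report:
    report_id: str
    incident_date: str   # when the underlying event happened
    received_date: str   # when the submission arrived (can be years later)

@dataclass
class Case:
    case_id: str
    incident_date: str
    status: str = "open"
    reports: list = field(default_factory=list)

class CaseInventory:
    def __init__(self):
        self.cases = {}  # cumulative across years; nothing resets to zero

    def intake(self, report, incident_key):
        """Link a report to an existing case or open a new one.
        Two reports sharing an incident_key merge into one tracked case."""
        case = self.cases.get(incident_key)
        if case is None:
            case = Case(case_id=incident_key, incident_date=report.incident_date)
            self.cases[incident_key] = case
        case.reports.append(report)
        return case

inv = CaseInventory()
inv.intake(Report("R1", "2024-03-10", "2024-03-12"), "incident-A")
inv.intake(Report("R2", "2024-03-10", "2024-05-01"), "incident-A")  # same event, later filing
inv.intake(Report("R3", "2023-11-02", "2024-04-20"), "incident-B")  # old event, new report

# Three reports, two cases: report counts and case counts diverge by design.
print(len(inv.cases))                          # 2
print(len(inv.cases["incident-A"].reports))    # 2
```

The point of the sketch is the divergence at the bottom: intake volume (three reports) and tracked cases (two) are different numbers answering different questions, which is why a record report year doesn’t translate one-to-one into new objects.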

757 New Reports and 1,600+ Total Cases

With that vocabulary in place, FY24’s counts get a lot easier to read without overreacting. AARO logged 757 reports received during the reporting period, but “reports received” is an inbox metric: it means something got submitted, logged, and entered the workflow, not that 757 brand-new strange incidents suddenly occurred (see FY24 Consolidated Annual Report on UAP, Department of Defense / AARO for the official count and supporting notes: https://www.aaro.mil/FY24_Consolidated_Annual_Report_on_UAP.pdf).

Once you read it as throughput, the number becomes easier to interpret. A big intake year can reflect better awareness, better reporting compliance, delayed submissions finally getting filed, or multiple parties reporting the same underlying event. None of that requires a surge in truly new incidents. It just requires a surge in paperwork reaching the system.

Here’s the tell that intake and real-world timing aren’t the same: of the 757 reports received, 485 involved incidents that occurred during the reporting period (see FY24 Consolidated Annual Report on UAP, Department of Defense / AARO, data annex for the in-period metric). That means a substantial share of the inbox, 272 reports, described incidents that happened earlier.

That “485 in-period” detail does real work because it shows the reporting pipeline is partly a catch-up mechanism. People don’t always report immediately. Some reports get filed after an internal review. Some get routed slowly through a chain of command. Some are triggered by a later realization that an observation should be documented. The net result is simple: a spike in reports can be a spike in submissions, not a spike in incidents.

Practically, that changes how you read headlines. If you see “record reports,” your first question shouldn’t be “what changed in the sky?” It should be “what changed in the reporting flow?” The 485 figure makes clear that part of the FY24 total is historical intake arriving late.
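The split is simple arithmetic, but laying it out explicitly shows how much of the record year is catch-up. The 757 and 485 figures come from the FY24 report; the 272 and the percentage are derived here.

```python
# FY24 intake split, from the report's own figures.
total_reports = 757   # reports received during the reporting period
in_period = 485       # reports whose incidents occurred in-period

late_filings = total_reports - in_period
print(late_filings)                                # 272 reports describe earlier events
print(round(late_filings / total_reports * 100))   # ~36% of the inbox is catch-up
```

Roughly a third of the record intake year is the pipeline digesting history, not the sky getting busier.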

The other big number is the one people tend to treat like a scoreboard: AARO’s cumulative case inventory exceeds 1,600 as of June 1, 2024 (see FY24 Consolidated Annual Report on UAP, Department of Defense / AARO, executive summary and inventory notes). Inventory is not “1,600+ unidentified craft.” Inventory is the total set of cases the office is tracking across time, with different levels of completeness and different statuses.

That inventory can grow even in a world where nothing “new” is happening, because inventories are shaped by backlog dynamics. Three common drivers explain growth without any sci-fi implications:

Accumulation across years: the inventory is cumulative by design. Add new intake each year and you’re stacking layers, not resetting to zero.

Duplicates and merges: multiple reports can describe the same underlying event, especially if more than one unit, sensor, or observer logs it. Those can sit as separate entries until analysts can confidently merge them, and a previously closed entry can be revisited when new context shows two cases belong together.

Open cases waiting on better data: a case can remain open because it lacks the inputs that make attribution possible: better sensor data, a clearer timeline, corroborating tracks, or enough context to rule mundane explanations in or out. “Open” often means “not enough to close cleanly,” not “confirmed extraordinary.”

The most common mistake with a big top-line count is assuming it comes with an implied breakdown. In the source material reviewed for this article, there’s no AARO-issued public table of counts or percentages by attribution bucket (balloons vs. drones vs. aircraft, etc.). Don’t backfill that gap with made-up precision. If you don’t have an official bucket table, you don’t have a bucket story.

What you can say responsibly is narrower: unresolved does not equal extraordinary. “Unresolved” often means a data-quality problem and nothing more: the observation was too brief, the sensor record was incomplete, the geometry was unfavorable, or the metadata needed to reconstruct the event wasn’t captured. Investigations close cleanly when the information is clean. When the information is messy, the case can stall.

If you want grounded examples of how normal stuff gets misread, the same source material points to “airborne clutter” as a real source of confusion, including birds, balloons, recreational drones, and plastic bags. Natural atmospheric phenomena can also trip people up, including weather effects like ice. Those are general examples of misidentification pathways, not an FY24 statistical breakdown, and the material doesn’t provide percentages assigning FY24 reports to any of those buckets.

Here’s the practical way to read “record UAP reports” without over-reading it:

  1. Ask when the incident occurred: was it in-period, or is the system catching up on older events (like the 757 vs 485 split shows)?
  2. Ask what the inventory number represents: is it cumulative across years, and how much of it is open because the data isn’t strong enough to close?

How Cases Get Resolved or Stuck

Big totals naturally lead to the next question: how many of these get explained, and why do others linger? Cases get resolved when the data is good, not when the story is dramatic. That’s the thread running through how AARO talks about closing work: its ability to resolve cases is constrained by a lack of timely and actionable sensor data. When the only input is a quick verbal description after the fact, you often end up in the same place: insufficient data, meaning the information quality is too thin to make a confident call, reach attribution, or close the file.

What makes a report “actionable” is refreshingly common-sense: you need tight time and location fidelity, and you need corroboration across sources. If your timestamp is “around 9-ish” and your location is “off the coast somewhere,” analysts can’t reliably line that up with radar logs, air traffic data, satellite passes, training schedules, or weather artifacts that look exotic when you don’t have the context.

  1. Intake and triage: AARO logs the report, checks what’s actually included (time, position, altitude, platform, sensors involved), and decides what kind of follow-up is realistic based on what can still be recovered.
  2. Request data: The team asks for the “boring” artifacts that make analysis possible: sensor tracks, video in original format, metadata, mission logs, comms, and any system health records that show whether equipment was operating normally.
  3. Cross-check: Analysts line up what witnesses describe with independent records. This is where multi-sensor corroboration matters: independent confirmation using more than one source (for example, a visual report plus radar, or two separate platforms seeing the same thing) shrinks ambiguity fast.
  4. Attempt to resolve: If the pieces align, AARO can move toward attribution, meaning the most-likely identifiable source based on the evidence (a specific object, phenomenon, system artifact, or known activity), and document how they got there.
  5. Close or hold: If the evidence supports a defensible conclusion, the case closes. If the gaps are too large, it stays open in an insufficient data posture until better inputs show up.
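The five steps above can be sketched as a data-quality gate. Everything in this snippet is hypothetical: the field names, the required-fields list, and the two-source threshold are illustrative stand-ins, not AARO’s actual triage criteria.

```python
# Hypothetical triage sketch mirroring the intake-to-resolution flow above.
# Field names and thresholds are illustrative only.

REQUIRED_FIELDS = ("timestamp", "location", "platform")

def triage(report: dict) -> str:
    """Return a case posture based on what the report actually contains."""
    missing = [f for f in REQUIRED_FIELDS if not report.get(f)]
    if missing:
        # Steps 1-2: the basics can't be recovered, so the case stalls.
        return "open: insufficient data (missing " + ", ".join(missing) + ")"
    if len(report.get("sources", [])) >= 2:
        # Steps 3-4: multi-sensor corroboration enables an attribution attempt.
        return "attempt attribution"
    # Step 5: one uncorroborated lane isn't enough to close cleanly.
    return "open: awaiting corroboration"

print(triage({"timestamp": "2024-03-10T21:04Z"}))
print(triage({"timestamp": "2024-03-10T21:04Z", "location": "33.2N 119.5W",
              "platform": "F/A-18", "sources": ["visual", "radar"]}))
```

The design point is that the outcome is driven almost entirely by what was captured at the time of the event, which is why late or vague reports tend to land in the insufficient-data posture regardless of how dramatic the story is.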

The biggest friction point is speed. Sensor data is perishable in practice: logs roll over, files get overwritten, retention rules kick in, and people move on to the next sortie. Report late enough and the best clues are already gone, even if everyone involved is acting in good faith.

Then there are the unglamorous failure modes that stop data capture entirely. A sensor can malfunction. A platform can lose power. A system can fail to record and save generated events. Equipment can get bumped, blocked, or otherwise interfered with by occupants. None of that is a conspiracy. It’s the reality of complex systems operating in the real world, and it’s exactly how a potentially clean data trail turns into “we saw something, but we can’t prove what it was.”

Classification barriers add another drag: sometimes the most relevant context exists, but it can’t be broadly shared, even inside government, without the right access and handling. That slows cross-checks and keeps otherwise solvable cases from getting stitched together quickly.

Even when a case progresses, the public-facing annual report can feel thin because it’s written for release. AARO’s quarterly reports to policymakers can include more detailed analysis and attribution than the public annual report, including context that can’t be published without exposing sensitive capabilities or sources.

The practical takeaway is simple: if you ever report or evaluate a sighting, act like an investigator, not a storyteller. Lock down the exact time (to the minute), the precise location, and the original files with metadata. If there were multiple observers or sensors, capture that consistency. Those “boring” details are what turn a weird moment into a case that can actually be resolved.

Congress Pushes UAP Transparency

All of that casework also runs on someone else’s timetable. If you’re waiting on “UAP disclosure,” Congress is one of the main levers that changes what gets reported and when. The case backlog, the pace of updates, and even what gets packaged for public debate aren’t just internal choices; they’re shaped by oversight pressure that turns UAP work into a scheduled-deliverables problem.

The catch is that oversight can raise expectations faster than the public evidence base can be shared. Congress can demand briefings, reports, and public hearings, but it can’t waive classification rules on the spot. That’s how you end up with two parallel conversations: a public one that feels incomplete, and a classified one that’s necessarily off-limits.

One way to keep your footing here is to assume there’s a calendar behind the numbers. Oversight doesn’t just ask “what do you know,” it asks “when will you publish what you can.”

The UAP Disclosure Act of 2023 passed the U.S. Senate, and its stated aim is the expeditious disclosure of U.S. government records associated with UAP reports. That matters because it reframes the public’s core question from “Is there a program?” to “Where are the records, and when do we see them?”

That’s also where expectation gaps get created. “Records associated with reports” can include material that’s straightforward to release (like administrative paperwork) alongside material that’s hard to declassify (like sources, methods, sensor capabilities, or ongoing investigations). So the Act’s significance isn’t that it guarantees headline answers, it’s that it increases institutional pressure to move records from “closed world” to “releasable” wherever possible.

You can see the same dynamic in how congressionally directed work products feed public-facing outputs: AARO’s public reporting and internal investigations into alleged U.S. government programs draw on findings from its congressionally directed Historical Record Report (see AARO Historical Record Report and summary at https://www.aaro.mil/historical-record-report).

Oversight becomes real when it hits the calendar. Scheduled committee hearings and public briefings create concrete milestones where members can press for timelines, documentation, and clarity on what can be released. For planning and verification purposes, track published committee calendars and hearing notices on committee websites rather than relying on individual news items.

Reporting requirements are the other half of the pressure. For example, the National Defense Authorization Act for Fiscal Year 2022 included Section 1069, which created a reporting mandate to congressional committees; see the FY2022 NDAA text at Congress.gov for the statutory language and section references: https://www.congress.gov/bill/117th-congress/house-bill/4350.

Just don’t confuse “more hearings and more reporting” with “instant public proof.” Oversight increases transparency, but it can also amplify frustration when the most sensitive details still can’t be aired in an open session.

Policymakers have also tied UAP work to broader airspace safety and counter-unmanned aircraft system efforts, which signals something practical: UAP reporting and drone governance overlap on sensors, airspace management, and base and range safety. Requiring coordination helps identification, attribution, and response avoid siloed handling.

  1. Track scheduled oversight events using official committee calendars and hearing notices for what gets asked on the record and what commitments get made publicly.
  2. Compare mandated reporting over time for changes in what’s publishable versus what stays classified; see statutory texts like the FY2022 NDAA for examples of reporting mandates (https://www.congress.gov/bill/117th-congress/house-bill/4350).
  3. Watch policy linkages like C-UAS coordination, because they tell you where the government is spending effort to reduce “unknowns,” even when the most sensitive details stay closed.

Whistleblowers, Media, and the Trust Gap

Oversight is one pressure point; public claims are another. Most UAP fights are really fights about what counts as evidence. A lot of the public expects “alien disclosure” to mean names, programs, hardware, and photos. AARO’s public language is built for a different bar: claims it can support with documentation and verification it’s willing to put on the record.

That mismatch creates a trust gap. If you’re waiting for definitive proof of non-human tech, “we found no verifiable evidence” can sound like stonewalling. If you’re reading it like an investigator, it’s a narrower statement: what they can substantiate to their standard, in the lanes they can discuss publicly.

AARO has stated in its congressionally directed Historical Record Report public summary that it found no verifiable evidence the U.S. government or private industry has ever had access to extraterrestrial technology; see the AARO Historical Record Report summary for that statement: https://www.aaro.mil/historical-record-report. Separately, the FY24 Consolidated Annual Report on UAP reiterates that in the set of cases reviewed for FY24 the office found no evidence of any confirmed sighting of extraterrestrial technology; see FY24 Consolidated Annual Report on UAP, Department of Defense / AARO: https://www.aaro.mil/FY24_Consolidated_Annual_Report_on_UAP.pdf.

Public readers also run into a hard constraint: even when the government has information, it can be legally restricted from releasing parts of it. FOIA Exemption 6, for example, allows withholding personal information in personnel, medical, and similar files. That kind of rule doesn’t explain UAPs, but it does explain why “tell us everything” often collides with privacy and classification boundaries.

Media coverage has described David Grusch alleging a large U.S. government cover-up and retaliation, and alleging that information was illegally withheld from Congress. Those are allegations, not independently verified conclusions, and public reporting often blends first-hand observations with second-hand accounts (what someone was told, or what they believe exists in other compartments).

The key is that AARO’s statements and whistleblower claims can coexist without either side “winning” by default. One is an evidence-status claim (verifiable, confirmed, documented to AARO’s standard). The other is an allegation that can be serious and newsworthy even before the underlying records are produced and independently validated.

It also matters that outsiders rarely see the full paper trail. Research on information access describes multiple legal tactics that can obstruct what journalists and the public can obtain, which keeps some disputes stuck in a “trust me” zone instead of a document-driven one.

  1. Label it: Is the headline describing an allegation, a document, or a verified finding?
  2. Ask “who saw what?”: First-hand observation, second-hand report, or inference?
  3. Look for paperwork: filings, correspondence, inspector general material, or other records you can actually inspect.
  4. Separate access from proof: “No evidence found” tells you the verification status, not that nothing unusual ever happened.

What Comes Next for UAP Disclosure

Whether you put more weight on official language or on allegations, the next wave of “disclosure” is still going to be shaped by what enters real channels and what oversight forces into scheduled updates. The next year of “UAP disclosure” will be shaped more by reporting pipelines and oversight milestones than by one blockbuster clip. The practical shift is simple: more people can put information into official channels, and some of those channels move at aviation-safety speed instead of news-cycle speed.

The Department of Defense has launched a portal inviting past and present military personnel and contractors to report UAP activity; see AARO’s reporting portal and guidance for submitters: https://www.aaro.mil/report-uap. That widens the intake beyond whatever happens to surface through unit reporting or media leaks. The tradeoff is volume: more access means more reports, and the signal-to-noise ratio depends on what details actually get captured.

The FAA’s aviation safety reporting pathways and pilot report procedures are the practical route for civilian sightings to enter official channels. FAA guidance emphasizes timely notification to air traffic control and filing pilot reports (PIREPs) when appropriate; see FAA guidance on pilot reports and ATC procedures: https://www.faa.gov/air_traffic/publications/atpubs/aim_html/chap5_section_2.html. AARO also advises civilian pilots to report UAP sightings to air traffic control as part of established safety reporting processes (see AARO reporting portal: https://www.aaro.mil/report-uap).

Public annual reports are only one window into activity, and they can lag. Oversight can move faster because NDAA-related language has added Pentagon UAP briefings into authorization frameworks, creating requirements for classified updates even when public-facing releases are on a slower cadence. If you’re tracking momentum, watch for the existence of briefings, the scope of what committees ask for, and whether reporting requirements get expanded or tightened, not just whether a document drops.

  1. Pin down basics: Do you get a precise date/time, location, altitude or range, and viewing duration?
  2. Demand more than one lane: Are there multiple independent sources (separate witnesses, ATC audio, official logs), not just reposts of the same clip?
  3. Follow the paperwork: Is there a documentation trail (case number, agency acknowledgment, FOIA-able record, or incident log reference) that proves it entered a real channel?
  4. Check for metadata: Original file, sensor type, lens info, and chain of custody beat “trust me” every time.
  5. Down-rank obvious noise: a single blurry video, no metadata, anonymous “insider” claims, and edits without the raw source.
  6. Ask the right questions: Who recorded it, who else was present, what systems were operating, and what did ATC or safety reporting capture in the moment?
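One way to internalize the checklist above is as a crude scoring rubric. The weights below are arbitrary and for demonstration only; they are not an established evidence standard, just a way of showing that independent corroboration and provenance should count for more than a lone clip.

```python
# Illustrative scoring sketch for the checklist above.
# Check names and weights are invented for this example.

CHECKS = {
    "precise_time_and_location": 2,
    "multiple_independent_sources": 3,
    "documentation_trail": 2,
    "original_metadata": 2,
    "raw_source_available": 1,
}

def evidence_score(signals: set) -> int:
    """Sum the weights of the quality signals a claim actually has."""
    return sum(weight for name, weight in CHECKS.items() if name in signals)

weak = {"raw_source_available"}  # a single clip, nothing else
strong = {"precise_time_and_location", "multiple_independent_sources",
          "original_metadata"}

print(evidence_score(weak), evidence_score(strong))  # 1 7
```

However you weight it, the ordering is the point: a corroborated, documented sighting should outrank a viral video every time.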

Conclusion

FY24’s record UAP numbers are meaningful, but mostly as a transparency and reporting signal, not an “alien disclosure” verdict.

The headline is straightforward: AARO says it received 757 reports, and its cumulative case inventory exceeded 1,600 as of June 1, 2024 (see FY24 Consolidated Annual Report on UAP, Department of Defense / AARO: https://www.aaro.mil/FY24_Consolidated_Annual_Report_on_UAP.pdf). The less-obvious detail matters more: only 485 of the 757 reports involved incidents that actually occurred during the FY24 window, with the rest describing earlier events reported late, which is exactly how totals can jump without the sky suddenly getting busier. That’s the same bureaucratic “collection and case management” story the annual report is built to tell. The same report also draws a hard boundary on what the evidence has supported so far: none of the resolved cases substantiated advanced foreign adversarial capabilities or breakthrough aerospace technologies, and AARO leadership has stated in its Historical Record Report summary that there is no verifiable evidence of extraterrestrial beings or technology in the examined cases. That’s an evidence-status statement, not a claim that every case is fully explained (see AARO Historical Record Report summary: https://www.aaro.mil/historical-record-report).

  1. Track scheduled oversight milestones and official releases, then compare what changes between drafts, briefings, and testimony.
  2. Filter for higher-quality evidence signals: multi-sensor data, clear provenance and chain-of-custody, and corroborated timelines, not louder anecdotes.

Frequently Asked Questions

  • What is AARO and what does it do with UAP reports?

    AARO is the Pentagon’s All-domain Anomaly Resolution Office, a Department of Defense office that receives, organizes, and assesses Unidentified Anomalous Phenomena (UAP) information for oversight, safety, and intelligence awareness. It turns incoming UAP submissions into tracked cases that can be briefed to senior DoD leadership and Congress.

  • How many new UAP reports did AARO receive in FY2024, and how many total cases does it track?

    AARO reported receiving 757 UAP reports during the FY24 reporting period. As of June 1, 2024, its cumulative case inventory exceeded 1,600 cases.

  • What’s the difference between a UAP “report,” an “incident,” and the “case inventory” in AARO’s annual report?

    A UAP report is a submission (an account or dataset) sent to AARO, while an incident is the underlying event in time and space. The case inventory is AARO’s cumulative, managed set of cases across years that can be updated, merged for duplicates, or kept open if data is insufficient.

  • Of AARO’s 757 FY24 UAP reports, how many incidents actually happened during the FY24 reporting period?

    AARO said 485 of the 757 reports involved incidents that occurred during the same reporting period. That means 272 of the reports described older incidents that were submitted later.

  • Who can submit UAP information to AARO, and is it open to the general public?

    AARO is not an open public tip line; it accepts submissions from current or former U.S. Government employees, service members, or contractors. Submissions can include sightings and allegations about U.S. Government programs or activities related to UAP.

  • Why do many AARO UAP cases stay unresolved, and what evidence helps resolve them?

    Cases often remain open due to “insufficient data,” such as missing or late sensor logs, weak time/location fidelity, or lack of corroboration. The article emphasizes that multi-sensor corroboration plus original files with metadata, mission logs, and precise timestamps/location are what enable attribution and case closure.

  • What should you look for to judge whether a UAP story is credible or just noise?

    Prioritize exact time and location, original files with metadata and chain of custody, and multiple independent sources (e.g., visual plus radar or ATC records). Down-rank single blurry videos, missing metadata, anonymous claims without paperwork, and reposted clips without the raw source.
