
PROJECT MAVEN | The Architecture of Algorithmic Warfare

A Comprehensive Strategic, Technical, Historical, and Ethical Analysis (2017-2026)

A Documentary Intelligence Report


Prologue: The Silent Transformation of War

War has always evolved with technology—from steel to gunpowder, from mechanization to nuclear deterrence. Yet no transformation has been as subtle, pervasive, and consequential as the one now unfolding: the transition from human decision-making in warfare to algorithmic mediation of violence.

At the center of this transformation lies Project Maven. What began as a technical solution to an intelligence bottleneck has evolved into something far more consequential: a system that compresses human judgment into machine-speed decision loops, reshaping not only how wars are fought, but how life-and-death decisions are made.

This documentary report examines Project Maven from its origins in 2017 through its global deployment in 2026. It traces the technical evolution, corporate partnerships, battlefield applications, and ethical implications of what has become the foundational architecture for algorithmic warfare in the 21st century. 

Part I: Genesis—The Problem of Too Much Data (2017)

Figure: Transforming raw footage into actionable intelligence

1.1 The Intelligence Crisis

By 2016, U.S. military operations faced a paradox: unprecedented surveillance capability coupled with near-total inability to process collected data. Drone platforms alone generated millions of hours of Full Motion Video (FMV) and continuous ISR (Intelligence, Surveillance, and Reconnaissance) streams across multiple theaters.

The scale of the data challenge was staggering:

Metric | Value | Impact
FMV Generated Daily | Thousands of hours | Overwhelming volume
Human Review Rate | < 5% of data | 95% unanalyzed
Analyst Burnout | High turnover | Critical skill loss
Target Detection Delay | Hours to days | Missed opportunities

Table 1: The Intelligence Data Crisis (2016)

Human analysts could review less than 5% of collected intelligence. This created a strategic blind spot: critical threats existed inside data that no human would ever see. The problem was not a shortage of sensors or firepower, but the need to connect information and act faster than adversaries could respond.
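The arithmetic behind this blind spot is simple. The sketch below uses invented but plausible round numbers (daily FMV hours collected, analyst headcount, per-analyst review throughput) to show why coverage collapses to single digits; none of these figures are official.

```python
# Back-of-envelope sketch of the 2016 analysis bottleneck.
# All figures are illustrative assumptions, not official numbers.

HOURS_FMV_PER_DAY = 10_000       # assumed daily full-motion-video collection
ANALYSTS = 200                   # assumed analysts assigned to FMV review
REVIEW_HOURS_PER_ANALYST = 2     # footage hours one analyst can screen per shift
                                 # (careful review is slower than real time)

reviewable = ANALYSTS * REVIEW_HOURS_PER_ANALYST
coverage = reviewable / HOURS_FMV_PER_DAY

print(f"Footage reviewed: {coverage:.0%}")
print(f"Unanalyzed hours per day: {HOURS_FMV_PER_DAY - reviewable:,}")
```

Under these assumptions only 4% of footage is ever seen; adding analysts moves the ratio only linearly, while collection grows faster, which is the structural argument for automation.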

1.2 The Birth of Project Maven

In April 2017, the Pentagon established the Algorithmic Warfare Cross-Functional Team (AWCFT), codenamed Project Maven. Lieutenant General John N.T. "Jack" Shanahan, then Director for Defense Intelligence (Warfighter Support), was appointed to lead this groundbreaking initiative. The mission was clear: automate the identification of objects and patterns in drone footage using machine learning.

The project was formally established by a memorandum from the Deputy Secretary of Defense dated April 26, 2017, which proposed an "Algorithmic Warfare Cross-Functional Team." With the help of the Defense Innovation Unit, the project drew in leading AI talent from outside the traditional defense contracting base. It was initially funded at $70 million over 36 months through rapid acquisition authorities.

Shanahan's vision was revolutionary. He pioneered the Department of Defense's first operational AI program, advancing the use of artificial intelligence for military operations and intelligence collection and analysis. According to Shanahan in November 2017, Maven was "designed to be that pilot project, that pathfinder, that spark that kindles the flame front of artificial intelligence across the rest of the [Defense] Department."

Initial Framing vs. Actual Trajectory:

  • Initial: "Assist analysts" and "Reduce workload"
  • Actual: Replace core elements of human cognitive labor in targeting

The strategic context was competitive. Drivers included rapid AI military development in China, the need for faster targeting in counterterrorism operations, and pressure to modernize command structures. Maven was not merely reactive; it was framed as an essential response to preserve American military superiority in an era of algorithmic warfare.

1.3 The Google Employee Revolt

When the Department of Defense first explored AI for military use in 2017, its focus was highly specific: reduce the cognitive burden of human drone pilots conducting search-and-kill missions against Middle Eastern insurgents by automating the task of searching through video footage for signs of enemy hideouts. To accomplish this mission, the Pentagon turned to Google to generate the required software.

In 2018, the relationship between Google and the Pentagon became a flashpoint for one of the most significant ethical confrontations in Silicon Valley history. When thousands of Google employees signed a petition opposing the company's involvement in a military-oriented project of this sort, the company's leadership chose to terminate its contract for Maven.

The Google employee protest marked the first major ethical confrontation between Silicon Valley and military AI. It raised fundamental questions about the role of technology companies in warfare and the moral responsibilities of engineers building systems that could be used to take human lives.

Following Google's withdrawal, Shanahan reassigned the work to Palantir, a defense-oriented data analytics company chaired by Peter Thiel. Palantir then developed the algorithms that enabled Maven software to identify potential targets for attack by armed Predator drones. This transition marked a fundamental shift in the corporate landscape of military AI.

Corporate Partnership Evolution:

Year | Company | Role | Outcome
2017-2018 | Google | AI provider | Withdrawal (employee protest)
2018-2024 | Palantir | Core development | MSS platform deployed
2024-2026 | Multiple vendors | LLM integration | Decision-shaping AI

Table 2: Corporate Partnership Evolution

Part II: The Machine—Technical Architecture of Maven

2.1 Phase One: Solving the Visibility Problem

The initial system capabilities focused on three core areas that would transform military intelligence processing:

  • Object Detection: Humans, vehicles, weapons
  • Pattern Recognition: Movement, formation behavior
  • Activity Classification: Suspicious vs. normal patterns

This phase addressed what military planners called the "visibility problem"—making vast quantities of collected data readable and actionable. The output included annotated video feeds, highlighted targets, and analyst alerts that dramatically accelerated the intelligence processing pipeline.

As of December 2017, 150,000 images had been manually labeled to establish the first training data sets, with projections to reach 1 million by January 2018. This massive data labeling effort was essential for training the machine learning algorithms that would power the system's recognition capabilities.
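A minimal sketch of the Phase One flow, annotating frames and surfacing only high-confidence analyst alerts, might look like the following. The detector is stubbed out, and every name here (`Detection`, `ALERT_CLASSES`, the 0.85 threshold) is a hypothetical illustration, not the actual Maven interface.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "vehicle", "person", "weapon"
    confidence: float  # model score in [0, 1]
    box: tuple         # (x, y, w, h) pixel coordinates

ALERT_CLASSES = {"weapon", "vehicle"}   # classes worth an analyst's attention
ALERT_THRESHOLD = 0.85                  # minimum confidence to raise an alert

def detect_frame(frame):
    """Stub for a trained detector; a real system would run a vision model."""
    return frame  # in this sketch a "frame" is already a list of Detections

def screen_footage(frames):
    """Annotate every frame, but surface only high-confidence alerts."""
    alerts = []
    for i, frame in enumerate(frames):
        for det in detect_frame(frame):
            if det.label in ALERT_CLASSES and det.confidence >= ALERT_THRESHOLD:
                alerts.append((i, det))
    return alerts

frames = [
    [Detection("person", 0.60, (0, 0, 10, 20))],    # below threshold: no alert
    [Detection("vehicle", 0.92, (40, 8, 30, 15))],  # alert raised
]
print(screen_footage(frames))
```

The labeled images described above serve exactly the role the stub elides: they are the training data that turns `detect_frame` from a placeholder into a working model.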

2.2 Phase Two: From Tool to Platform

Maven evolved into the Maven Smart System (MSS)—a comprehensive battlefield intelligence platform with four key components that would revolutionize military decision-making:

MSS Architecture Components:

Component | Function | Capability
Data Fusion Engine | Multi-source integration | Satellite, drone, SIGINT, OSINT
Ontology Layer | Structuring reality | Object → Event → Relationship → Context
NLP Command Layer | Natural language queries | LLM-based intelligence synthesis
AI Tasking System | Strike recommendation | Target prioritization, weapon selection

Table 3: Maven Smart System Architecture

The most significant evolution was the shift from merely detecting targets to recommending how to eliminate them. System outputs now include target prioritization, strike options, weapon selection, and timing optimization—transforming AI from an analytical tool into a decision-support system with lethal implications.
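The fusion-and-ontology idea in Table 3 can be sketched in a few lines: observations from different sources about the same entity merge into a single object whose confidence rises with independent corroboration, and objects are ranked for analyst review. This is an illustrative toy with invented source names and scores, not the MSS implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:      # one report from one collection source
    source: str         # "satellite", "drone", "sigint", ...
    entity_id: str
    kind: str
    score: float        # source confidence in [0, 1]

@dataclass
class FusedObject:      # the ontology's "Object" layer
    entity_id: str
    kind: str
    sources: list = field(default_factory=list)
    score: float = 0.0

def fuse(observations):
    """Merge per-source reports; independent corroboration raises confidence."""
    objects = {}
    for ob in observations:
        fo = objects.setdefault(ob.entity_id, FusedObject(ob.entity_id, ob.kind))
        fo.sources.append(ob.source)
        fo.score = 1 - (1 - fo.score) * (1 - ob.score)  # noisy-OR combination
    # rank multi-source, high-confidence objects first for review
    return sorted(objects.values(),
                  key=lambda o: (len(set(o.sources)), o.score), reverse=True)

obs = [
    Observation("satellite", "T-17", "vehicle", 0.6),
    Observation("drone",     "T-17", "vehicle", 0.7),
    Observation("sigint",    "T-42", "radar",   0.5),
]
ranked = fuse(obs)
print([(o.entity_id, round(o.score, 2)) for o in ranked])
```

The design point is that no single sensor needs to be decisive: the entity seen by both satellite and drone outranks the single-source contact even before any human looks at it.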

2.3 GEOINT and the NGA's Role

The National Geospatial-Intelligence Agency (NGA) plays a decisive role in U.S. national security by providing geospatial intelligence (GEOINT) for military, policy, and disaster response needs. At the GEOINT Symposium of 2022, it was announced that Project Maven was transferred from the Office of the Under Secretary of Defense for Intelligence and Security to the NGA, under President Biden's proposed budget for Fiscal Year 2023. It became a Program of Record on November 7, 2023.

NGA's state-of-the-art computer vision and AI capabilities are now integrated into various military analytic workflows to automatically detect, identify, characterize, extract, and attribute features and objects in imagery and video. Maven provides trusted GEOINT at speed and scale for object recognition.

According to NGA, Maven has decreased targeting workflow timelines substantially: during a recent exercise, one fighting element's targeting cell saw intelligence operation timelines drop from hours to minutes, from sensing to target engagement.

According to NGA Director Vice Admiral Frank Whitworth, NGA Maven is now available to all services and all combatant commands, with 20,000 active users across more than 35 service and combatant command tools spanning three security domains. The user base has more than quadrupled since March of the previous year.

NGA Maven Operational Impact:

Metric | Traditional | Maven-Enabled
Targeting Timeline | Hours | Minutes
Active Users | Hundreds | 20,000+
Combatant Commands | Limited | All commands
Target Capacity | Variable | 1,000 targets/hour

Table 4: NGA Maven Performance Metrics (2025)

Part III: The Doctrinal Shift—From Kill Chain to Kill Web


3.1 From Kill Chain to Kill Web

The traditional targeting process followed a linear sequence that reflected industrial-era military thinking:

  1. Sensor detects
  2. Analyst evaluates
  3. Commander decides
  4. Weapon deployed

The Maven-enabled model transforms this into a networked system: multiple sensors feeding AI fusion layer, automated prioritization, and distributed execution. This represents a fundamental shift from sequential to parallel processing of targeting decisions.

Decision Timeline Comparison:

Phase | Traditional | Maven-Enabled | Compression
Detection to Analysis | Hours | Minutes | 60x faster
Analysis to Decision | Hours | Seconds | 100x+ faster
Decision to Strike | Variable | Automated | Near-instant

Table 5: Decision Loop Compression

3.2 CJADC2 and All-Domain Operations

Combined Joint All-Domain Command and Control (CJADC2) was created to operate in the reality of modern warfare, where actions in the air can trigger effects in space, cyberspace, or at sea within seconds. CJADC2 is the U.S. Department of Defense's evolving framework for enabling faster, more effective decision-making by linking sensors, commanders, and shooters across services and coalition partners.

CJADC2 aims to: collect data from sensors across all domains; process and fuse that data into a coherent operational picture; distribute actionable information to the right decision-makers and weapons systems; and enable rapid, synchronized action across services and allies. Rather than a single system or platform, CJADC2 is a concept, architecture, and approach for how future forces share data, coordinate actions, and fight as an integrated whole.

In a CJADC2-enabled environment, a sensor detecting a threat in one domain—such as a Space-Based Infrared System satellite tracking a missile launch, an Aegis-equipped Navy destroyer radar identifying a hostile aircraft, or an Army Sentinel radar spotting incoming rockets—can immediately share that data across the force. This approach decouples sensors from shooters, allowing the most suitable platform and weapon to respond regardless of service or domain.
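A toy broker makes the decoupling concrete: the sensor that publishes a track neither knows nor cares which service responds, and selection logic picks whichever ready effector best fits the engagement. All platform names, ranges, and fields below are invented for illustration; this is not any real CJADC2 interface.

```python
# Illustrative sensor/shooter decoupling: a published track is matched
# to the best-placed ready effector regardless of owning service.

shooters = [
    {"name": "destroyer_sm6", "service": "Navy",      "range_km": 240, "ready": True},
    {"name": "patriot_btn",   "service": "Army",      "range_km": 160, "ready": True},
    {"name": "f35_flight",    "service": "Air Force", "range_km": 90,  "ready": False},
]

def assign_shooter(track, shooters):
    """Pick a ready effector that can reach the track; prefer the
    tightest-fitting engagement envelope to preserve longer-range assets."""
    capable = [s for s in shooters
               if s["ready"] and s["range_km"] >= track["distance_km"]]
    return min(capable, key=lambda s: s["range_km"], default=None)

track = {"id": "hostile-01", "domain": "air", "distance_km": 150}
best = assign_shooter(track, shooters)
print(best["name"], best["service"])
```

Here an Army battery answers a track that could equally have come from a Navy radar or a satellite, which is the "any sensor, best shooter" idea in miniature.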

The Pentagon aims to utilize AI tools like Maven to support its CJADC2 warfighting construct. This initiative seeks to better connect the platforms, sensors, and data streams of the U.S. military and its key international partners under a unified network. Defense officials believe that leveraging AI will help commanders and other personnel make faster and more informed decisions, thereby improving operational effectiveness and efficiency.

Part IV: The Corporate-Military Complex

4.1 Rise of Palantir

Following Google's withdrawal from Project Maven in 2018, Palantir took over core system development, building the Maven Smart System with focus on data integration, battlefield ontology, and operational scalability. Under the leadership of CEO Alex Karp and Chairman Peter Thiel, Palantir has become the dominant player in military AI infrastructure.

Palantir's commercialized Maven Smart System pulls together data of different classification levels from a vast array of sources—satellite intelligence on potential enemy targets, readiness reports from friendly units, social media posts on unfolding crises or misinformation—and puts it into a single, customizable interface for military planners.

On May 29, 2024, Palantir was awarded a $480 million contract by the U.S. Army for its Maven Smart System prototype. This five-year firm-fixed-price contract, running through May 28, 2029, will allow the Defense Department to expand its use to thousands of users at five combatant commands: U.S. Central Command, European Command, Indo-Pacific Command, Northern Command, and Transportation Command. The system will also be available to members of the Joint Staff.

Palantir Defense Contracts (2024-2026):

Contract | Value | Purpose
Army MSS Expansion | $480 million | Expand to 5 combatant commands
Army Research Lab | $100 million | MSS support for all services
Army Licenses (May 2025) | $795 million | New MSS licenses
NGA Expansion | $28 million | MSS for NGA analysts

Table 6: Palantir Major Defense Contracts

According to Shannon Clark, Palantir's head of defense growth, "Users are going to span everyone from intel analysts and operators in some of the remote island chains across the world to leadership at the Pentagon. This is taking what has been built in prototype and experimentation and bringing this to production."

4.2 NATO Adoption

In April 2025, NATO announced it had awarded a contract to Palantir to adopt its Maven Smart System for artificial intelligence-enabled battlefield operations. Through the contract, finalized March 25, the NATO Communications and Information Agency (NCIA) plans to use Maven Smart System NATO to support the transatlantic military organization's Allied Command Operations strategic command.

NATO plans to use the system to provide a common data-enabled warfighting capability to the Alliance, through a wide range of AI applications—from large language models (LLMs) to generative and machine learning—ultimately enhancing intelligence fusion and targeting, battlespace awareness and planning, and accelerated decision-making.

The contract was one of the most expeditious in NATO's history, taking only six months from outlining the requirement to acquiring the system.

Ludwig Decamps, NCIA General Manager, stated that the deal with Palantir is focused on "providing customized state-of-the-art AI capabilities to the Alliance, and empowering our forces with the tools required on the modern battlefield to operate effectively and decisively."

4.3 The Anthropic Dispute

In 2025, Anthropic mustered Claude, its large language model, for national service. Although the military-industrial complex is newly fashionable, Anthropic was not a natural fit. The firm had been founded in 2021 by seven OpenAI defectors who believed that AI safety should be prioritized. The company's CEO, Dario Amodei, wanted Claude to be helpful at the most sensitive level—Claude was the first AI certified to operate on classified systems.

The Pentagon has been using Claude to analyze data, write memos, and help generate battle plans. Intelligence contractors like Palantir offer platforms that synthesize, process, and surface decision-relevant information. As one Palantir employee noted, "Claude is just the best, by far." A human analyst might review signal intelligence to select military targets; Claude can do the same thing, only much faster and more efficiently.

However, tensions emerged when the Pentagon sought to renegotiate the contract to include "all lawful uses" of the product. Anthropic had stipulated that Claude be used neither to drive fully autonomous weaponry nor to facilitate domestic mass surveillance. The Pentagon accepted these stipulations initially, but later sought to remove them.

On February 27, 2025, Defense Secretary Pete Hegseth officially declared Anthropic a supply-chain risk, stating that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." This designation, which had only ever been applied to infrastructure firms with ties to adversarial foreign governments, threatened to extinguish the company.

Anthropic filed two lawsuits challenging the constitutionality of the ban. The company maintains that it cannot manipulate Claude once deployed—there is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. The dispute represents a fundamental clash between Silicon Valley's ethical AI movement and the Pentagon's desire for unrestricted use of AI capabilities.

Part V: Real-World Deployment—From Theory to Practice

5.1 Gaza: AI-Driven Targeting Ecosystem

Gaza represents the most controversial and extensively documented deployment environment for AI targeting systems. Israeli forces have relied heavily on multiple AI tools that together constitute an automation of the find-fix-track-target components of the modern military "kill chain."

The Three Core AI Systems:

The Gospel (Habsora): An AI-powered database that generates targets based on apparent links to Hamas. During Israel's 11-day war with Hamas in May 2021, The Gospel generated 100 targets daily, a significant increase from the previous rate of 50 targets per year in Gaza. The system was developed by Unit 8200, Israel's elite intelligence and cyber technology unit.

Lavender: An AI recommendation system designed to use algorithms to identify Hamas operatives as targets. Lavender scans information on approximately 90% of Gaza's population and gives each individual a rating between 1 and 100, expressing the likelihood that the individual is a member of the Hamas or Islamic Jihad military wings.

Where's Daddy?: A grotesquely named system that tracks targets geographically so they can be followed into their family residences before being attacked. One intelligence officer told +972 Magazine: "We were not interested in killing operatives only when they were in a military building or engaged in a military activity. On the contrary, the IDF bombed them in homes without hesitation, as a first option. It's much easier to bomb a family's home."

AI Targeting Systems in Gaza:

System | Function | Scale
The Gospel | Infrastructure targeting | 100 targets/day (vs. 50/year pre-AI)
Lavender | Individual targeting | 37,000+ people rated
Where's Daddy? | Home location tracking | Family residence targeting

Table 7: Israeli AI Targeting Systems in Gaza

According to intelligence officers who spoke with +972 Magazine, approximately 10% of the people that Lavender marked to be killed were not Hamas militants; some had loose connections to Hamas, while others had no connection at all. The system would flag people who had the exact same name and nickname as a Hamas operative, or people with similar communication profiles, including civil defense workers and police officers in Gaza.

One source said he spent only 20 seconds per target before authorizing the bombing of alleged low-ranking Hamas militants—often civilians—killing those people inside their houses.
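Taking the reported figures at face value (37,000 individuals marked, roughly 10% misidentified, about 20 seconds of review per target), the scale of the problem follows mechanically:

```python
# Figures are those cited in the reporting above; the arithmetic is mechanical.
MARKED = 37_000        # individuals marked by the system
ERROR_RATE = 0.10      # reported share wrongly marked
REVIEW_SECONDS = 20    # reported human review time per target

wrongly_marked = int(MARKED * ERROR_RATE)
total_review_hours = MARKED * REVIEW_SECONDS / 3600

print(f"Expected misidentifications: {wrongly_marked:,}")
print(f"Total human review across all targets: {total_review_hours:.0f} hours")
```

Twenty seconds per name yields only about two hundred hours of total human attention across tens of thousands of lethal decisions, and an expected error count in the thousands; at this scale, a fixed error rate stops being an accident and becomes a predictable output.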

The combination of Lavender and Where's Daddy? led to entire Palestinian families being wiped out inside their houses. According to U.N. statistics cited in the reporting, more than 50% of casualties in the first six weeks came from a smaller number of families, a sign that the family unit itself was being destroyed by AI-enabled targeting.

5.2 Iran 2026: The Minab School Tragedy

On February 28, 2026, the first day of the Iran war, the Shajareh Tayyebeh girls' elementary school in Minab, Hormozgan province, southern Iran, was destroyed by missile strikes. According to witness accounts verified by satellite-based analyses, the school was "triple-tapped," hit by three distinct strikes in succession. The roof collapsed on students, and according to Iranian media, between 175 and 180 people were killed, most of whom were schoolchildren.

Human rights organization Hengaw stated that around 170 students were present in the school at the time, while the Iranian Ministry of Education said 264 students were present, mostly girls between seven and 12 years old. The impact instantaneously killed dozens inside, destroying at least half of the two-story school building.

According to testimony from Red Crescent medics and victims' parents, the initial strike was followed by a "double-tap" strike. The school's principal moved students to a prayer room and called parents; that area was then hit by a second strike, killing most who had taken shelter. According to Minab's mayor, the school was triple-tapped—struck three times in total.

Investigations by The New York Times, CBC, NPR, BBC Verify, and others concluded that the United States was likely responsible for the strike. Sources involved in the US military's internal investigation corroborated that the strike was likely carried out by the US, although the investigation had not yet reached a final conclusion.

The root cause was misclassification by the AI system combined with reliance on outdated targeting data; The New York Times reported that the U.S. preliminary investigation attributed the strike to stale target information. This incident exemplifies the fundamental risk of algorithmic warfare: at scale, errors are not isolated incidents but systemic outcomes with devastating humanitarian consequences.

On March 13, 2026, Congressman Jason Crow and 120 members of Congress demanded answers on the school strike in Iran, writing to Secretary Hegseth to express alarm regarding reports of civilian casualties arising from Operation Epic Fury. The letter specifically cited the Minab school attack as "the deadliest attack on civilians thus far" and demanded clear answers on how the Department plans to investigate these reports and prevent the risk of further civilian harm.

Part VI: The Ethical Collapse Point

6.1 Automation Bias and Human Control

The integration of AI into targeting has created multiple ethical crisis points. Automation bias—the tendency of humans to trust machine outputs and reduce independent verification—creates a dangerous dependency on algorithmic judgment. AI enables thousands of decisions per hour; humans cannot validate at that speed, creating an inherent tension between operational efficiency and moral accountability.
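The throughput mismatch behind automation bias can be made concrete with a rough calculation. The 1,000 targets-per-hour figure is the capacity cited earlier in this report (Table 4); the five minutes assumed for one careful, independent human check is an illustrative guess.

```python
AI_TARGETS_PER_HOUR = 1_000   # machine output rate cited in Table 4
CAREFUL_REVIEW_MIN = 5        # assumed minutes for one independent check

human_capacity = 60 / CAREFUL_REVIEW_MIN               # targets/hour/analyst
analysts_needed = AI_TARGETS_PER_HOUR / human_capacity

print(f"One analyst validates {human_capacity:.0f} targets/hour")
print(f"Analysts needed to keep pace: {analysts_needed:.0f}")
```

If dozens of analysts are not available for every hour of machine output, review time per target must shrink, and the human check degrades from verification to ratification; that is automation bias as a staffing equation rather than a psychological failing.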

Contrary to popular belief, the Department of Defense has never had a policy requiring autonomous weapons to have a "human in the loop." What DoD Directive 3000.09 states is that autonomous weapons "will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." This subtle but crucial distinction has significant implications for how AI warfare is actually conducted.

The problem with the "human-in-the-loop" framing is that it presumes a machine decision loop and asks where the human is relative to that pre-existing loop. Rather than give primacy to the machine's work, we should prioritize and make the human's decision cycle central.

Research from the Hellenic Air Force Academy's War Games Lab found that operators who took AI suggestions into consideration made decisions aligned with International Humanitarian Law and Rules of Engagement 78% of the time. However, 36% of operators discussed the risk of over-trusting AI's suggestions due to time constraints, and 88% emphasized the need for constant training on the platform as well as on ethics and legal constraints.

6.2 Accountability Vacuum

There is no clear responsibility between engineers, commanders, and AI systems when errors occur. The transformation from limited, deliberate strikes to continuous target production pipelines represents a fundamental change in the nature of warfare itself. Targets become data points and probability scores rather than human lives, fundamentally altering the psychological and moral framework of warfare.

AI-enabled targeting systems, even those that retain humans-in-the-loop, generate significant moral challenges. Such systems draw on vast volumes of data, virtually guaranteeing an opaque process of data crunching, analyzing, and target proposition. Human operators are unlikely to have a clear overview of what data such systems have available, what they are trained on, what the specific parameters are for algorithmic calculations, or what frequent updates do to accuracy.

Summary of Ethical Concerns:

Issue | Description | Severity
Automation Bias | Over-reliance on AI outputs | High
Speed vs. Ethics | Cannot validate at machine speed | Critical
Dehumanization | Targets as data points | Critical
Accountability Gap | Unclear responsibility chain | High
Violence Industrialization | Continuous targeting pipeline | Critical

Table 8: Ethical Concerns Summary

Part VII: Strategic Consequences

7.1 The Global AI Arms Race

A military artificial intelligence arms race has emerged between major powers to develop and deploy advanced AI technologies and lethal autonomous weapons systems. The goal is to gain strategic or tactical advantage over rivals, similar to previous arms races involving nuclear or conventional military technologies.

Russian President Vladimir Putin stated that the leader in AI will "rule the world." An AI arms race is sometimes placed in the context of an AI Cold War between the United States and China. Researchers warn that the race toward advanced AI among major powers could reshape geopolitical power, with AI applied to surveillance, autonomous weapons, decision-making systems, and cyber operations.

The competitive pressure to automate creates a destabilizing dynamic. As one nation deploys AI-enabled targeting systems, others feel compelled to follow suit, potentially leading to a destabilizing arms race in autonomous weapons systems. The ethical implications extend beyond the battlefield: when targeting becomes faster than verification and larger than human oversight, errors become structural outcomes rather than isolated accidents.

AI Arms Race Participants:

Nation | AI Military Focus | Key Programs
United States | Decision support, targeting | Maven, CJADC2, JADC2
China | Surveillance, autonomous systems | AI-enabled ISR, drone swarms
Russia | Autonomous weapons, cyber | Lethal autonomous systems
Israel | Target identification | Lavender, Gospel, Where's Daddy?

Table 9: Global AI Military Development

7.2 Transformation of Military Roles

The human role shifts from decision-maker to system supervisor. This represents a fundamental redefinition of military professionalism and command responsibility. AI-enabled warfare reduces time for diplomacy and increases escalation risk. The compression of decision timelines leaves less room for de-escalation and negotiation.


The transformation affects every level of military operations. Intelligence analysts now work alongside AI systems that can process data thousands of times faster than humans. Commanders must make decisions based on AI-generated recommendations with limited time for independent verification. The traditional skills of military judgment, situational awareness, and ethical reasoning are being supplemented—and in some cases replaced—by algorithmic decision-making.

Part VIII: The Future of War

The emerging reality includes AI-integrated battle networks, autonomous targeting assistance, and global real-time surveillance. The next phase will feature AI-coordinated warfare ecosystems where multiple systems operate in concert. The trajectory is clear: war is no longer constrained by human limits, but by algorithmic capability.

The integration of AI in military operations aims to enhance the speed and accuracy of target identification. Positive target identification (PID) is at the forefront of the targeting process. The speed at which a hostile target can be detected is crucial to the remaining steps of the targeting cycle (Decide, Detect, Deliver, Assess). AI assists by filtering specific user-defined parameters, sifting through large amounts of data, extracting what is relevant, and providing analysts with near-real-time data used by the operations community for validation against the commander's objective.
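A minimal sketch of that filtering step, assuming invented record fields and analyst-defined parameters, might look like this:

```python
# Hypothetical sketch of parameter-driven sifting: analysts define what
# matters (object class, area of operations, time window), and the system
# reduces a large detection stream to the subset worth human validation.

def matches(record, params):
    """True if a record satisfies every user-defined parameter."""
    return (record["kind"] in params["kinds"]
            and record["region"] == params["region"]
            and params["after"] <= record["t"] <= params["before"])

stream = [
    {"kind": "vehicle", "region": "AO-North", "t": 100},
    {"kind": "person",  "region": "AO-North", "t": 140},
    {"kind": "vehicle", "region": "AO-South", "t": 150},
    {"kind": "vehicle", "region": "AO-North", "t": 310},
]
params = {"kinds": {"vehicle"}, "region": "AO-North", "after": 0, "before": 200}

relevant = [r for r in stream if matches(r, params)]
print(len(relevant), "of", len(stream), "records passed to the analyst")
```

The validation burden scales with what survives the filter, not with what was collected, which is why parameter definition is itself a consequential human decision in the D3A cycle.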

Emerging Capabilities Roadmap:

Capability | Status | Timeline
AI-Integrated Battle Networks | Deployed | 2024-2026
Autonomous Targeting Assistance | Operational | 2025-2027
Global Real-Time Surveillance | In Development | 2026-2028
AI-Coordinated Warfare Ecosystems | Emerging | 2027-2030

Table 10: Future Warfare Capabilities Roadmap

The question is no longer whether AI will transform warfare. It already has. The question is whether humanity can maintain meaningful control over the machines we have created to kill on our behalf. As one military ethicist observed: ethics is not a constraint on military operations—it is a force multiplier. Targeting decisions that are legally grounded, morally defensible, and procedurally transparent ensure operational legitimacy, effectiveness, and public trust.

Looking ahead, the international community must grapple with urgent questions: How do we regulate algorithmic warfare? What constraints should be placed on AI targeting systems? How do we ensure accountability when machines make lethal decisions? The answers to these questions will determine whether the future of war remains a human endeavor—or becomes something else entirely.

Final Conclusion: The Industrialization of Decision and Death

Project Maven represents a fundamental shift in warfare. It solved the problem of too much data and created a system capable of machine-speed targeting. However, it introduced a new risk: decision-making detached from human cognition.

War is no longer constrained by human limits. It is constrained by algorithmic capability.

The ultimate ethical reality is stark: When targeting becomes faster than verification and larger than human oversight, errors are no longer accidents—they become structural outcomes. The Minab school tragedy, the Gaza targeting operations, and countless other incidents demonstrate this fundamental truth.

The debate about "human in the loop" misses the essential point. What matters is not the presence of humans in the process, but the preservation of human judgment, moral responsibility, and accountability. When AI systems generate thousands of targets per day, when human review is reduced to seconds per target, when family homes are bombed because an algorithm calculated it was "easier" than targeting militants in military contexts—the loop has already been broken.

Can humanity maintain meaningful control over the machines we have created to kill on our behalf? Project Maven has given us the answer: not without a fundamental recommitment to human judgment, ethical constraints, and the recognition that efficiency in killing is not the same as justice in war.


The legacy of Project Maven extends far beyond its technical achievements. It has established the template for algorithmic warfare in the 21st century—a template that is being adopted by nations around the world. The choices we make now about how to govern these systems will shape the nature of conflict for generations to come.


Resources:

https://mwi.westpoint.edu/big-data-at-war-special-operations-forces-project-maven-and-twenty-first-century-warfare/

https://www.researchgate.net/publication/399487363_Algorithmic_Warfare_in_Evolution_A_Cyber-Conflict_Analysis_of_Israel's_AI_Targeting_from_Gaza_2021_to_Gaza_2023-2025

https://www.independent.co.uk/news/world/americas/project-maven-ai-us-airstrike-iraq-anthropic-b2929138.html

https://www.nga.mil/news/GEOINT_Artificial_Intelligence_.html

https://debuglies.com/2025/12/17/beyond-tactical-brilliance-the-decadal-shift-to-human-machine-fusion/

https://www.researchgate.net/publication/392770698_Improving_Arabic_Image_Captioning_with_Vision-Language_Models

https://arxiv.org/html/2503.21910v1

https://aoav.org.uk/2025/kill-codes-and-command-lines-understanding-the-rise-of-algorithmic-warfare/

https://odsc.medium.com/palantir-secures-480-million-dod-deal-for-ai-powered-maven-smart-system-prototype-2869135cbc90

https://fedscoop.com/project-maven-dod-machine-learning/

https://www.army.mil/article/290021/data_centric_command_and_control_unlocking_mercurys_potential_with_c2_next

https://www.aflcmc.af.mil/NEWS/Article/4241273/air-force-battle-lab-advances-the-kill-chain-with-ai-c2-innovation/

https://blog.palantir.com/maven-smart-system-innovating-for-the-alliance-5ebc31709eea

https://www.c4isrnet.com/it-networks/2018/07/27/targeting-the-future-of-the-dods-controversial-project-maven-initiative/

https://www.digitaldividedata.com/blog/geospatial-data-geoint-use-cases-in-defense-tech

https://publications.armywarcollege.edu/News/Display/Article/4361748/mission-commands-asymmetric-advantage-through-ai-driven-data-management/

https://reliefweb.int/report/world/human-loop-how-oversight-turns-ai-humanitarian-ally

https://www.dvidshub.net/news/329789/sensor-shooter-faster

https://www.defenceiq.com/glossary/intelligence-surveillance-target-acquisition-and-reconnaissance

https://www.theguardian.com/technology/2026/mar/13/anthropic-pentagon-artificial-intelligence

https://www.tandfonline.com/doi/full/10.1080/16544951.2025.2540131

https://www.researchgate.net/publication/389355214_AI_Translation_of_the_Gaza-Israel_War_Terminology

https://v45.diplomacy.edu/updates/anthropic-pentagon-military-ai

https://www.war.gov/News/News-Stories/article/article/1356172/project-maven-industry-day-pursues-artificial-intelligence-for-dod-challenges/

https://www.esd.whs.mil/Portals/54/Documents/FOID/Reading%20Room/Other/15-F-0070_DOC_05_FINAL_ODNA_Revolution_in_Military_Affairs_Conference_November_1996.pdf

https://sites.bu.edu/pardeeatlas/research-and-policy/the-effectiveness-of-employee-activism-in-big-tech-dod-partnerships/

https://ndupress.ndu.edu/Media/News/News-Article-View/Article/2054156/the-ethics-of-acquiring-disruptive-technologies-artificial-intelligence-autonom/

https://www.nga.mil/assets/files/170901-038_GEOINT_Basic_Doctrine_Pub_1.pdf

https://www.lawfaremedia.org/article/military-ai-policy-by-contract--the-limits-of-procurement-as-governance

https://www.missiledefenseadvocacy.org/maven-smart-system/

https://www.orfonline.org/english/expert-speak/ai-in-real-time-warfare-lessons-from-project-maven/

https://www.noonpost.com/361260/

https://www.maris-tech.com/blog/what-is-istar-intelligence-surveillance-target-acquisition-and-reconnaissance/

https://lieber.westpoint.edu/ai-based-targeting-gaza-surveying-expert-responses-refining-debate/

https://www.jns.org/behind-the-scenes-of-the-idfs-war-drill-a-digital-revolution/

https://organiser.org/2026/03/09/343310/bharat/maven-smart-system-an-artificial-intelligence-that-is-shaping-us-iran-war-do-we-have-a-similar-system/

https://www.cnas.org/publications/commentary/project-maven-brings-ai-to-the-fight-against-isis

https://emerj.com/big-data-military/

https://www.mitchellaerospacepower.org/app/uploads/2021/02/a2dd91_4892807f169341188b7ebcd2f775671d.pdf

https://news.futunn.com/en/post/70095661/the-real-ai-for-warfare-claude-is-just-the-foundation

https://defensescoop.com/2025/04/14/nato-palantir-maven-smart-system-contract/

https://www.habtoorresearch.com/programmes/maduro-khamenei-artificial-intelligence/

https://journals.law.harvard.edu/nsj/2025/05/on-the-pitfalls-of-technophilic-reason-a-commentary-on-kevin-jon-hellers-the-concept-of-the-human-in-the-critique-of-autonomous-weapons/

https://www.reddit.com/r/ArtificialInteligence/comments/1rqpghu/project_maven_palantir_and_anthropic/

https://sundayguardianlive.com/world/how-israel-is-using-ai-in-its-wars-in-gaza-and-iran-174676/

https://www.faf.ae/home/2026/3/12/x1-1

https://www.brennancenter.org/our-work/research-reports/business-military-ai

https://baptistnews.com/article/ai-goes-to-war-and-schoolchildren-are-dead/

https://www.the-independent.com/news/world/americas/project-maven-ai-us-airstrike-iraq-anthropic-b2929138.html

https://systematic.com/int/industries/defence/products/deep-dives/fire-support/

https://breakingdefense.com/2019/11/secarmys-multi-domain-kill-chain-space-to-cloud-to-ai/

https://projectgeospatial.org/geospatial-frontiers/the-new-battlespace-how-geospatial-ai-is-reshaping-military-intelligence

https://aoav.org.uk/2026/who-commands-the-god-in-the-machine-ai-and-the-future-of-military-authority/

https://www.salesforce.com/blog/ai-and-human-touch/

https://politicalkeys.net/?p=6655

https://www.arabnews.com/node/2624225/amp

https://www.madarcenter.org/

https://en.wikipedia.org/wiki/Project_Maven

https://comptroller.war.gov/Portals/45/Documents/defbudget/fy2020/budget_justification/pdfs/01_Operation_and_Maintenance/O_M_VOL_1_PART_1/Volume_1_Part_1.pdf

https://apps.dtic.mil/sti/tr/pdf/ADA353436.pdf

https://aclanthology.org/2020.alvr-1.1.pdf

https://api.army.mil/e2/c/downloads/361884.pdf

https://apps.dtic.mil/sti/pdfs/AD1179110.pdf

https://dsb.cto.mil/wp-content/uploads/reports/2000s/ADA463361.pdf

https://www.andrewwmarshallfoundation.org/wp-content/uploads/2022/11/AIRMA_FINAL.pdf

https://en.wikipedia.org/wiki/Glossary_of_military_abbreviations

https://www.marines.mil/portals/1/mcrp%205-12a%20with%20ch.%201%20z.pdf

https://en.wikipedia.org/wiki/Intelligence,_surveillance,_target_acquisition,_and_reconnaissance

https://www.refaad.com/Files/JALLS/JALS-6-2-4.pdf

https://en.wikipedia.org/wiki/Anthropic%E2%80%93United_States_Department_of_Defense_dispute

https://irp.fas.org/doddir/army/adp1_02.pdf

https://www.europarl.europa.eu/meetdocs/2004_2009/documents/dv/270/270907/270907nolingeneral_en.pdf

https://www.youtube.com/watch?v=7QTPIY9wRDU

https://library.au.int/fr/modern-military-dictionary-english-arabic-arabic-english-4

https://www.usmcu.edu/portals/218/jams_fall2020_11_2_web2.pdf

https://ieeexplore.ieee.org/iel8/6287639/10820123/11146650.pdf

https://unidir.org/wp-content/uploads/2023/05/Table-Top-Exercises-on-the-Human-Element-and-Autonomous-Weapons-Systems-Summary-Report-UNIDIR-Final.pdf

