Pentagon Expands GenAI.mil Platform with ChatGPT

Washington, February 14, 2026 – The Europe Today: The U.S. Department of Defense (DoD) has expanded its generative artificial intelligence capabilities by integrating OpenAI’s ChatGPT into its GenAI.mil platform, a move experts say could significantly enhance operational efficiency while also introducing new security challenges.

According to media reports citing foreign sources, the Pentagon announced on February 9 that it had added ChatGPT to GenAI.mil, a platform launched in December to provide AI-powered tools for Department of Defense personnel. The system uses machine learning models trained on large datasets to function as a chatbot capable of generating text, images, and software code from unclassified information.

Initially powered by Google’s Gemini for Government, GenAI.mil later incorporated xAI’s government suite based on its Grok model. The platform has already surpassed one million unique users, signaling rapid adoption across the department. With ChatGPT’s inclusion, analysts expect further growth, given its dominant position in the generative AI market: a January web traffic study found that ChatGPT accounted for nearly 65 percent of public generative AI chatbot visits, roughly three times the share of Google’s Gemini.

Gregory Touhill, a retired U.S. Air Force brigadier general and current director of cybersecurity at Carnegie Mellon University’s Software Engineering Institute, underscored the strategic importance of AI adoption within the armed forces.

“I think it’s important for our Airmen today; we want them to be well prepared for the future, and the future is racing toward us now,” Touhill told Air & Space Forces Magazine. “AI is a tool that our Airmen and our Guardians can use to obtain decisive capabilities in the cyber domain.”

Touhill noted that his institute is working with the Pentagon and other government agencies to develop robust risk management frameworks for AI deployment in official settings. He expressed confidence that AI systems could automate routine tasks, enabling service members to focus on higher-order operational responsibilities.

Caleb Withers, a research assistant in the Technology and National Security Program at the Center for a New American Security, also highlighted potential applications. He projected that AI tools would enhance prototyping, wargaming, research, and administrative processes, quickly becoming indispensable across defense operations.

However, both experts cautioned that the rapid expansion of AI use demands vigilance. Touhill warned of the “fusion of hardware, software, and wetware”—the latter referring to human operators—emphasizing the danger of personnel inadvertently inputting sensitive data into systems not designed to handle classified information.

“We don’t want our Airmen and Guardians disclosing information into a system not designed to process that information,” Touhill said, noting that once information is entered into an AI system, it cannot easily be retracted.

Experts advised that the use of secure, official defense applications—rather than open commercial platforms—along with comprehensive training, clear protocols, and a cautious mindset, would be essential to mitigating risks.

“These systems are not yet fully reliable, and in some cases can be quite unreliable or fail,” Withers warned. “There’s a risk of overconfidence in them.”

As the Pentagon accelerates its embrace of generative AI, the balance between innovation and security remains central to ensuring that emerging technologies enhance national defense without compromising sensitive information.