A viral Instagram account featuring a blonde Army service member named Jessica Foster, posing alongside world leaders, racked up more than 1 million followers before it was revealed to be fake.
This is only one example in a growing wave of AI-generated personas using military identity to build audiences and generate income online.
The administrators behind Military Phony, a watchdog group that tracks fraudulent military claims, described “digital stolen valor” as the online equivalent of wearing medals you didn’t earn, using exaggerated or fabricated credentials to gain respect, sympathy or opportunity that would otherwise belong to someone else.
They draw a distinction between violations of the federal Stolen Valor Act, which involve falsely claiming certain military honors, such as the Purple Heart or Silver Star, for tangible benefit, and broader forms of impersonation that may not meet that legal threshold but are still widely referred to as "stolen valor."
The rise of AI-generated influencers and impersonated service members is giving shape to this new form of “digital stolen valor,” in which synthetic personas adopt the credibility of military service, or of other trusted professions such as nursing, to attract followers, drive engagement and, in some cases, generate income.
While impersonation and fraud online are nothing new, advances in artificial intelligence are making these identities easier to create, harder to detect, and more effective at exploiting trust.
The 'Emily Hart' Account
One such account, operating under the name “Emily Hart,” built a large following by pairing political messaging with curated lifestyle content, eventually directing users toward paid adult content subscriptions.
The persona was later revealed to be AI-generated, created by a 22-year-old medical student, according to a report by Wired.
Identified by the pseudonym “Sam,” the creator told the outlet he began experimenting with AI-generated images as a way to earn extra income while in school and save toward a potential move to the United States after graduation.
According to the report, he used Google’s Gemini AI to refine the concept, ultimately developing a fictional persona tailored to a conservative-leaning audience. The chatbot suggested that such audiences, particularly older men in the U.S., tend to be more financially engaged and loyal, influencing the direction of the account.
The Department of Defense declined to comment directly on the trend, but referred questions to federal law enforcement.
“As impersonating a member of the armed forces is a violation of federal law, we refer you to the FBI,” a Pentagon official told Military.com. As of publication, the FBI had not responded to requests for comment.
Legal Precedent
Legal experts say the distinction between protected speech and punishable conduct often comes down to intent and profit. Simply claiming to be a service member online, even falsely, can fall under constitutionally protected speech, according to Eugene Volokh, a senior fellow at the Hoover Institution and professor of law emeritus at UCLA.
But that protection has limits, Volokh explained to Military.com.
"Simply claiming to be a service member, without any commercial dimension, and simply seeking fame or influence, is generally constitutionally protected," Volokh said.
“Where false claims are made to effect a fraud or secure moneys or other valuable considerations … it is well established that the government may restrict speech without affronting the First Amendment,” Volokh told Military.com, citing the Supreme Court’s 2012 decision in United States v. Alvarez. The case involved a man named Xavier Alvarez, who told a crowd that he was a 25-year Marine veteran and a Medal of Honor recipient, claims that were entirely fabricated.
In other words, while an AI-generated persona posing as a service member to gain attention or influence may be protected, using that same identity to solicit money through subscriptions, donations or merchandise could expose the operator to civil or even criminal liability.
That distinction applies regardless of how the persona is created, meaning false claims embedded within AI-generated accounts are treated no differently under the law than those made by real individuals.
"Thus, trying to get money or other valuables through knowing and material falsehoods, including by claiming to be a member of the military, is punishable," he added. "It could lead to lawsuits, civil enforcement and even criminal liability.”
Platforms Struggle to Keep Pace
Despite platform rules requiring disclosure of AI-generated content, enforcement remains inconsistent. Many of the accounts driving engagement appear unlabeled or are removed only after gaining significant traction, allowing them to build large audiences and, in some cases, generate revenue before being taken down.
Meta, which owns Instagram, has policies requiring users to disclose AI-generated or manipulated content, but the company has not publicly detailed how those rules are enforced at scale or how quickly potentially deceptive accounts are identified.
Meta did not respond to inquiries from Military.com.
For watchdog groups, the concern is not just that these accounts exist, but that they are becoming harder to identify.
Administrators behind Military Phony noted that AI-generated images can obscure or distort key details, such as rank insignia or uniform accuracy, that experienced observers often rely on to spot fraudulent claims.
The accounts themselves are often designed to signal authenticity as quickly as possible, pairing visual cues like uniforms or professional settings with messaging tailored to specific audiences. In several recent cases, AI-generated personas adopted politically aligned identities alongside military or healthcare roles, a combination that can accelerate engagement by reinforcing familiarity and trust.
That dynamic may help explain why some accounts continue to attract followers even when questions about authenticity emerge. The appeal is not always rooted in whether the persona is real, but whether it reflects beliefs, identity or values that resonate with an audience.