AI-Assisted Attack? Florida Launches Investigation

Florida’s Attorney General is investigating whether ChatGPT helped an accused mass shooter plan a deadly attack on a state university campus, raising alarming questions about Big Tech’s role in enabling violence while national security vulnerabilities remain unaddressed.

Story Snapshot

  • Phoenix Eichner, accused of killing two people at Florida State University in April 2025, exchanged hundreds of messages with ChatGPT before the attack
  • Court records reveal the suspect asked the AI about busy hours at the campus student union and public reactions to shootings
  • Florida AG launched a probe on April 10, 2026, citing both public safety and national security concerns over OpenAI’s technology
  • Investigation marks the first state-level probe directly linking ChatGPT to a mass shooting suspect’s planning activities

Court Records Expose AI-Assisted Reconnaissance

Court records from the criminal case against Phoenix Eichner revealed extensive communication between the FSU shooting suspect and ChatGPT in the months leading up to the April 2025 attack that killed two people at the campus student union. The suspect specifically queried the AI system about peak traffic times at the student union and about how the public typically responds to mass shootings. These revelations transformed what might have been dismissed as harmless chatbot interactions into potential evidence of premeditated planning, raising serious concerns about how artificial intelligence tools can be exploited for violent purposes despite assurances from their creators.

State Investigation Targets OpenAI Operations

Florida’s Attorney General announced the formal investigation into ChatGPT and its maker, OpenAI, on April 10, 2026, nearly a year after the tragic shooting at Florida State University. The probe examines whether OpenAI’s technology facilitated planning of the attack, as well as broader national security implications of large language models that could be exploited by domestic or foreign adversaries. OpenAI confirmed it is cooperating with authorities while emphasizing that it builds its technology to understand user intent and respond safely. However, this case demonstrates the limitations of those safeguards when determined individuals set out to misuse AI for reconnaissance and the planning of violent acts.

National Security Dimensions Expand Scope

The investigation extends beyond the FSU tragedy itself to wider concerns about artificial intelligence vulnerabilities that could threaten American security. The office of Florida’s conservative Attorney General emphasized that OpenAI’s models present potential risks if accessed by foreign adversaries seeking to gather intelligence or plan operations against U.S. interests. This national security framing elevates the probe from a single criminal case into a referendum on Big Tech’s responsibility for dual-use technologies that can serve both beneficial and harmful purposes. The AG’s office asserts enforcement authority over OpenAI with respect to Florida-based users, setting a precedent for state-level oversight of AI companies at a time when federal regulation remains inadequate.

Implications for Tech Accountability and Public Safety

Short-term consequences may include financial penalties and mandatory data disclosures from OpenAI, along with heightened judicial scrutiny of AI usage in Florida criminal cases. Long-term implications could reshape state-level AI regulation nationwide, particularly regarding violent or threatening queries that existing safeguards fail to prevent. The FSU community continues grieving while victims’ families advocate for stronger restrictions on AI technologies that enabled reconnaissance for the attack. This case exposes a troubling reality: while Big Tech companies profit from powerful AI systems, everyday Americans bear the consequences when those technologies are weaponized against innocent people, and government oversight lags far behind innovation.

The probe takes state attorneys general into unprecedented territory in challenging technology companies over their role in mass violence. In earlier incidents, AI appeared in manifestos or in online planning tools, but no state AG had opened a formal investigation directly naming ChatGPT in connection with a shooting suspect’s documented preparations. The outcome could establish legal precedent for holding AI companies accountable when their products facilitate criminal planning, potentially forcing the entire sector to strengthen the safety audits and intent-detection systems that currently fail to prevent determined misuse.