French cybercrime authorities raided the Paris offices of Elon Musk's X platform on February 3, 2026, and issued a formal summons for the billionaire to appear for questioning, escalating a global investigation into the social media company's handling of child safety and sexual content.
The raid by France's cybercrime unit, conducted with assistance from the European police agency Europol, comes as multiple countries simultaneously open investigations into the platform's content moderation practices and its AI chatbot Grok's creation of sexual deepfakes.
French Investigation Expands to Include Sexual Deepfakes
The Paris prosecutor's office confirmed that the investigation, originally opened in January 2025 to examine allegations of algorithmic manipulation and fraudulent data extraction, has been broadened to include "alleged complicity in the possession and dissemination of images of a child-pornographic nature and the violation of a person's image rights through sexually explicit deepfakes."
According to the prosecutor's statement, the expanded probe was triggered by complaints regarding X's artificial intelligence chatbot Grok, which has been accused of generating non-consensual sexual content and deepfakes of real individuals without their permission.
"The operation involves EU police agency Europol and forms part of an investigation into whether X's algorithm was facilitating the spread of illegal content," the Paris prosecutor's office said.
Musk has been summoned to appear for questioning in April, though the specific date has not been disclosed. The tech billionaire's summons represents one of the most direct regulatory challenges he has faced from European authorities since acquiring the platform formerly known as Twitter.
UK Opens Parallel Investigation into Grok AI
The French action coincides with the UK's Information Commissioner's Office (ICO) launching its own formal investigation into X and Musk's AI company xAI over Grok's production of indecent deepfakes without consent. The ICO is examining whether the companies have complied with data protection law, specifically the UK General Data Protection Regulation (UK GDPR).
The UK investigation follows reports that Grok AI has been used to create sexually explicit images of real people, including public figures, without their knowledge or consent. This has raised serious concerns about the platform's safeguards against the misuse of artificial intelligence for creating non-consensual intimate imagery.
International Regulatory Coordination
The simultaneous investigations across multiple European jurisdictions suggest unprecedented coordination among regulators in addressing concerns about X's platform governance. Monaco's tech sector is closely monitoring the French action, with officials expressing concerns about how the outcome could affect the Principality's emerging digital ecosystem and its regulatory alignment with European Union standards.
The timing is particularly significant given the broader context of Musk's recent $1.25 trillion merger announcement between SpaceX and xAI, creating what would become the world's most valuable private company. The investigations threaten to overshadow this major corporate development and could affect the planned initial public offering.
Platform's Response and Ongoing Challenges
X has not issued a public response to the French raid or the multiple international investigations as of Tuesday evening. The platform has faced increasing scrutiny over its content moderation practices since Musk's acquisition, with critics arguing that policy changes have weakened safeguards against harmful content.
The investigations come amid broader concerns about social media platforms' role in facilitating the spread of illegal content and their responsibility for protecting users, particularly minors. Spain recently announced plans for one of Europe's most aggressive regulatory responses, including a ban on social media access for users under 16 and criminal penalties for algorithmic manipulation.
Previous Regulatory Actions
This latest action builds on a series of regulatory challenges X has faced across multiple jurisdictions. The platform has previously been investigated for its handling of content related to the January 6, 2021, Capitol riot, its temporary suspension of journalists' accounts, and changes to its content moderation policies following Musk's acquisition.
The European Union has been particularly active in pursuing digital platform regulation through the Digital Services Act and Digital Markets Act, which impose significant obligations on large social media companies regarding content moderation and user safety.
Implications for Tech Industry Regulation
The coordinated international response to X's alleged violations could set important precedents for how tech executives and their companies are held accountable across jurisdictions. The case tests whether European authorities can effectively exercise jurisdiction over U.S.-based tech executives and companies.
Legal experts note that the investigation's focus on both algorithmic manipulation and AI-generated content represents a new frontier in tech regulation, addressing emerging concerns about artificial intelligence's potential for harm when inadequately supervised.
The outcome of these investigations could significantly influence future regulatory approaches to social media platforms and AI systems globally. If successful, the cases could embolden other countries to take similar action against tech companies that fail to meet local content safety standards.
Next Steps in Legal Proceedings
The investigation timeline remains unclear, but the conduct under examination could result in significant fines, operational restrictions, or even criminal charges if violations are proven. The breadth of the allegations—spanning child safety, data protection, and AI governance—suggests the potential for comprehensive sanctions that could fundamentally alter how X operates in European markets.
As the investigations proceed, they will likely influence ongoing debates about the balance between digital innovation and user protection, particularly regarding the deployment of AI systems capable of generating realistic but fabricated content.
The case also highlights the growing challenge for global tech companies in navigating an increasingly complex patchwork of national and regional regulations while maintaining their business models and technological capabilities.