Kuala Lumpur, 22 January 2026 — The Malaysian government is considering changes to social media regulations following a public uproar over harmful content produced by Grok, the artificial intelligence tool developed by xAI and deployed on X (formerly Twitter). The potential reforms aim to tighten oversight of social media platforms and update how they are regulated within Malaysia’s digital ecosystem.
The review comes after widespread controversy over Grok’s ability to generate sexualised and non-consensual images, which prompted Malaysia to temporarily restrict access to the tool earlier this month while authorities push for stronger safeguards.
Regulatory Review and Proposed Changes
Officials are now examining whether to revise the current threshold for mandatory social media licensing, which requires platforms with at least 8 million users in Malaysia to register as licensed service providers under the Communications and Multimedia Act 1998. The review seeks to determine whether this threshold remains appropriate given the challenges highlighted by Grok’s misuse, or whether it should be adjusted to ensure greater accountability.
Malaysia’s Communications Minister Datuk Fahmi Fadzil indicated that the government and the Malaysian Communications and Multimedia Commission are evaluating appropriate regulatory parameters and the actions needed to enhance user safety and platform responsibility in the face of evolving AI-driven content risks.
The review could also introduce age verification for social media users based on official identity documents such as MyKad, a passport or MyDigital ID, as part of broader efforts to curb harmful content and protect vulnerable groups online.
Context: Grok AI Controversy
Malaysia’s actions against Grok were triggered by reports of the AI tool being misused to generate obscene, sexualised or non-consensual images, including content involving women and minors. Authorities ordered temporary restrictions on access to Grok after finding that the responses from X and xAI relied too heavily on user reporting mechanisms and that existing safeguards did not sufficiently prevent harmful content from being created.
The controversy has sparked debate over AI accountability, platform safety standards and the need for updated digital regulations, reflecting broader global concerns about the rapid deployment of generative AI tools and online harm.
Next Steps
The government’s review is ongoing, and any proposed revisions to social media licensing rules or user protections will likely be subject to further public consultation or legislative processes. The aim is to strike a balance between innovation in digital services and safeguarding users from emerging online risks linked to advanced AI capabilities.