Kuala Lumpur, 23 January 2026 – Malaysia has lifted its temporary ban on access to the artificial intelligence chatbot Grok after the social media platform X, owned by Elon Musk’s X Corp. and xAI, implemented enhanced safety features and compliance measures to address concerns about harmful content. The announcement came from the Malaysian Communications and Multimedia Commission (MCMC) following discussions with X representatives and confirmation that preventive safeguards were in place.
The restriction on Grok, first imposed on 11 January 2026, followed complaints and regulatory concern over instances in which the AI tool was used to generate and disseminate obscene, sexually explicit, grossly offensive and non-consensually manipulated images, including content involving women and minors. The episode drew international criticism and widened debate about AI safety and platform responsibility.
Following the temporary block, Malaysian authorities engaged with X to ensure compliance with local laws and public safety standards. X introduced a suite of additional preventive and security controls, including technical restrictions aimed at preventing the generation and editing of harmful visual material through Grok across its platform. After verifying these changes, MCMC announced that access to the chatbot would be restored to users in Malaysia, effective today (Jan 23), while emphasising that monitoring and enforcement would continue.
Communications Minister Datuk Fahmi Fadzil reiterated that “user safety remains our priority” and warned that any future breaches of Malaysian laws or failures in compliance mechanisms would be dealt with strictly in accordance with national legal provisions. He noted the importance of ensuring that platforms operating in Malaysia uphold safety norms and mitigate risks associated with AI misuse.
The development highlights a broader global conversation on how advanced AI tools, especially generative systems, should be regulated and integrated safely into digital ecosystems, a discussion that has seen regulators across Europe and Southeast Asia call for stricter oversight and accountability following public backlash over harmful AI outputs.