SAN JOSE, 19 March 2026 – Jensen Huang has called on global technology leaders to avoid spreading fear about artificial intelligence, warning that excessive alarmism could undermine one of the most transformative technologies of the modern era.
Speaking at Nvidia’s flagship GTC conference, the chief executive emphasised that while it is important to communicate the capabilities and risks of AI, leaders must strike a careful balance between awareness and unnecessary fear. He noted that educating the public is constructive, but fearmongering risks slowing adoption of a technology that is “too important” to society’s progress.
Huang’s remarks come at a time when AI is rapidly advancing into a new phase, driven by the rise of autonomous agents and enterprise-scale deployment. As the technology becomes more embedded in daily life and business operations, public concern has intensified, particularly around job displacement, data security, and the ethical use of AI systems.
However, Huang pushed back against the narrative that AI poses an immediate threat to employment. Drawing parallels with past technological revolutions such as the internet and mobile computing, he argued that innovation historically creates more opportunities than it eliminates, even if it reshapes how work is performed.
Instead of focusing on worst-case scenarios, Huang urged industry leaders to guide the conversation responsibly, ensuring that society understands both the potential and the safeguards being built into AI systems. His comments also reflect growing tensions within the tech sector, where companies, governments, and researchers are increasingly vocal about AI risks, sometimes in ways that amplify public anxiety.
At the same time, Huang acknowledged that AI development must remain grounded in strong governance frameworks. He stressed the importance of ensuring that systems operate within legal and ethical boundaries, particularly as AI becomes more autonomous and capable of executing complex tasks across industries.
The debate over AI communication has intensified alongside recent developments in defence-related AI contracts and national security concerns, where questions of transparency, control, and accountability have taken centre stage. Against this backdrop, Huang’s message signals a broader shift within the industry, away from both unchecked enthusiasm and unchecked fear, toward a more balanced, pragmatic narrative.
For investors and the broader technology ecosystem, the implications are significant. As AI adoption accelerates, market sentiment will increasingly be shaped not just by technological breakthroughs, but by how the technology is perceived by regulators, businesses, and the public.
Huang’s stance suggests that the next phase of AI growth will depend as much on trust and communication as it does on innovation itself. In a landscape where narratives can influence regulation and adoption, the tone set by industry leaders may ultimately determine how quickly, and how widely, AI is integrated into the global economy.