Before his death in 2018, world-renowned theoretical physicist Stephen Hawking issued a stark warning: humanity may have less than 100 years left on Earth. He believed that artificial intelligence (AI), nuclear warfare, and genetically engineered viruses posed the biggest threats to human survival, and he urged immediate action to mitigate these dangers.

Hawking argued that technological advancements, while beneficial, also bring unforeseen risks. AI, if left unchecked, could surpass human intelligence and act against human interests. Similarly, the growing number of nuclear-armed nations raises the risk of catastrophic war, while advances in biotechnology could lead to deadly engineered diseases.

Why Time Is Running Out for Humanity

Hawking’s prediction emphasized that Earth’s resources and environmental stability are deteriorating. Climate change, population growth, and ecological destruction may push our planet beyond the point of recovery. He advocated for space exploration as a potential solution, stating that colonizing Mars or other celestial bodies might be necessary for human survival.

“The best hope for our species is to move beyond Earth,” Hawking said. “If humanity is to survive long-term, we must become a multi-planetary species.”

Can We Prevent This Crisis?

While Hawking’s warning paints a bleak picture, experts suggest that responsible technological development and global cooperation could mitigate these existential threats. Ethical AI regulations, nuclear disarmament, and stricter biotechnology safeguards are crucial to ensuring a safer future.