Scientists warn that existing safety measures may be insufficient to prevent harmful AI from dominating the world. According to a comprehensive review, there is currently no evidence that AI can be safely controlled. AI safety expert Dr. Roman V. Yampolskiy emphasizes that without proof of controllability, the development of AI should be halted.

When connected to the internet, AI could potentially access all of humanity’s data, enabling it to replace existing programs and seize control of online systems globally. In a recent study, researchers proposed a theoretical containment algorithm designed to ensure that a super-intelligent AI cannot harm humans under any circumstances. This algorithm would instruct the AI to avoid causing global destruction and could even shut itself down if it detects a potential threat. However, it remains uncertain whether the algorithm successfully prevents catastrophic events or is still assessing the threat, making it difficult to confirm its effectiveness.

The study highlights the limitations of traditional methods for controlling super-intelligent AI and underscores the need for new strategies. One critical step, researchers suggest, is to disconnect AI from the internet to mitigate risks.