Researchers from OpenAI, Cambridge, Oxford, and other institutions have concluded that the best way to combat the malicious use of artificial intelligence (AI) is to continuously develop more powerful AI and place it under government control. In their recently published paper, “Computing Power and the Governance of Artificial Intelligence,” the scientists explore the challenges involved in governing the use and development of AI.
The paper’s central argument is that controlling access to the hardware needed to train and run powerful AI systems is essential to regulating who can use those systems in the future. The researchers suggest that governments monitor the development, sale, and operation of hardware required for advanced AI in order to prevent its misuse, and they propose building “kill switches” into the hardware to enable remote enforcement, for example shutting down illegal AI training centers.
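To make the idea concrete, here is a minimal sketch of how such remote enforcement might work in principle: hardware runs workloads only while it holds a fresh, cryptographically signed permit from a licensing authority, so withholding the next permit disables it remotely. Everything here, including the permit format, the device and function names, and the HMAC stand-in for a real asymmetric signature scheme, is an illustrative assumption, not a mechanism specified in the paper.

```python
import hmac
import hashlib
import time
from dataclasses import dataclass

# Shared secret standing in for the licensing authority's signing key; a
# real design would use asymmetric signatures (e.g. Ed25519), not HMAC.
REGULATOR_KEY = b"regulator-signing-key"

@dataclass
class Permit:
    device_id: str
    expires_at: float   # Unix timestamp after which the permit is void
    signature: bytes    # MAC over (device_id, expires_at)

def sign_permit(device_id: str, ttl_seconds: float) -> Permit:
    """Issued by the licensing service: authorizes one device for a window."""
    expires_at = time.time() + ttl_seconds
    msg = f"{device_id}|{expires_at}".encode()
    sig = hmac.new(REGULATOR_KEY, msg, hashlib.sha256).digest()
    return Permit(device_id, expires_at, sig)

def permit_is_valid(permit: Permit, device_id: str) -> bool:
    """Checked before every job: permit must be unexpired and correctly signed."""
    if permit.device_id != device_id or time.time() > permit.expires_at:
        return False
    msg = f"{permit.device_id}|{permit.expires_at}".encode()
    expected = hmac.new(REGULATOR_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(permit.signature, expected)

def run_training_job(permit: Permit, device_id: str) -> None:
    # The "kill switch": if the authority stops issuing permits, the
    # hardware goes idle without anyone needing physical access to it.
    if not permit_is_valid(permit, device_id):
        raise PermissionError("No valid operating permit; compute disabled.")
    print(f"{device_id}: permit valid, workload allowed.")

if __name__ == "__main__":
    permit = sign_permit("gpu-cluster-07", ttl_seconds=3600)
    run_training_job(permit, "gpu-cluster-07")
```

The design choice being illustrated is that enforcement lives in the renewal loop rather than in a one-time check: short permit lifetimes mean a device that loses the regulator’s approval stops working within hours, not years.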
Governments already exercise some form of “compute governance”: the US, for example, restricts the sale of certain GPU models to countries such as China. The researchers argue, however, that effective enforcement would require manufacturers to build kill switches into the hardware itself. They also acknowledge the risks of naive or poorly implemented compute governance, including threats to privacy, economic harms, and the centralization of power.
The researchers also highlight the challenge posed by decentralized compute, in which models are trained and run across many distributed machines rather than in a single data center. This could make it difficult for governments to locate and shut down illegal training efforts. As a result, governments may need to engage in an arms race against the illicit use of AI, using powerful and governable compute to develop defenses against emerging risks.
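To illustrate why decentralized compute is hard to police, the toy sketch below shows how many small machines can jointly train a model by exchanging only gradient updates, so no single site ever hosts the full run. The setup (a linear least-squares model, simulated nodes, simple gradient averaging in the style of federated learning) is an assumption chosen for brevity, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])  # the "model" the nodes jointly recover

def local_gradient(w: np.ndarray, n_samples: int = 64) -> np.ndarray:
    """One node's least-squares gradient, computed on its own private data."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    return 2 * X.T @ (X @ w - y) / n_samples

w = np.zeros(2)
num_nodes, lr = 10, 0.1
for step in range(100):
    # Each node could be a consumer GPU behind a residential connection;
    # only these small gradient vectors ever cross the network.
    grads = [local_gradient(w) for _ in range(num_nodes)]
    w -= lr * np.mean(grads, axis=0)  # aggregate updates, not raw data

print("recovered weights:", w.round(2))  # approaches [ 2. -3.]
```

The point for governance is visible in the loop: the raw data and the heavy computation stay local to each node, and the only observable network traffic is a trickle of parameter updates, which gives enforcers no obvious data center to find and switch off.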
In conclusion, the researchers propose that the continued development of AI under government control, combined with careful governance of the hardware it runs on, is the most effective way to combat the malicious use of AI.