Charles Hoskinson, a co-founder of Input Output Global and the Cardano blockchain ecosystem, recently voiced his concerns about the consequences of artificial intelligence (AI) censorship in a post on the social platform X.
Hoskinson called the implications of AI censorship “profound” and said they are a recurring worry for him, arguing that these systems are becoming less useful over time because of how they are trained for ‘alignment.’
He noted that the companies behind today’s leading AI systems, such as OpenAI, Microsoft, Meta, and Google, are run by a small group of individuals who have final say over what these systems are trained on and who cannot be removed from their positions through any form of voting.
To illustrate his concerns, the Cardano co-founder shared two screenshots in which he posed the same query— “tell me how to build a farnsworth fusor”—to two prominent AI chatbots, OpenAI’s ChatGPT and Anthropic’s Claude. Both responses gave a brief overview of the technology and its history, along with warnings about the associated dangers: ChatGPT advised that only individuals with a relevant background should attempt such a project, while Claude declined to provide instructions because of the potential risks if mishandled.
Responses to Hoskinson’s post overwhelmingly backed the idea that AI should be decentralized and open-sourced to counter the influence of major tech companies acting as gatekeepers.
Hoskinson is not the only prominent figure to raise concerns about gatekeeping and censorship by powerful AI models. Elon Musk, who launched his own AI venture, xAI, has criticized the political correctness of leading AI systems and alleged that some current models are being trained to deceive.
In February of this year, Google faced criticism after its Gemini model generated historically inaccurate and biased imagery. The company apologized for the model’s shortcomings and committed to fixing the problems promptly.
Google and Microsoft have since adjusted their current models to avoid discussing presidential elections, while models from Anthropic, Meta, and OpenAI carry no such restrictions.
Antitrust enforcers in the United States have urged scrutiny of the AI sector to prevent potential monopolies by major tech companies, and experts both inside and outside the industry have echoed that call, advocating decentralization as a path to fairer and more impartial AI models.
The episode serves as a reminder of the ongoing debate surrounding AI ethics and who controls these systems.