Assisterr is taking aim at Big Tech's monopolization of AI, advocating for user ownership of data and the democratization of AI through community-owned small language models (SLMs) that offer tailored, efficient solutions.
The debate surrounding the potential global threat of artificial intelligence (AI) often overlooks a crucial point: the real danger lies not in AI itself, but in the potential for tech giants, commonly known as Big Tech, and governmental bodies to monopolize it. These powerful entities have the ability to subtly manipulate public perceptions and behaviors for their own gain, whether it be for maximizing profits or exerting political control.
This scenario, far from being a dystopian fantasy, reflects our current reality and requires immediate intervention. At the heart of the problem is data ownership. Big Tech has effectively taken ownership of humanity's collective knowledge, training its large language models (LLMs) on freely available information and then locking access behind monthly subscriptions of $20 or more.
The gap between the value created by community contributions and the compensation those contributors receive is evident in Google’s $60 million annual deal for access to Reddit’s user-generated content, a disparity that raises questions about fair compensation.
Against this backdrop, Assisterr, a data layer for decentralized AI based in Cambridge, positions itself as a catalyst for change. It aims to create an infrastructure that supports decentralized AI data inference and a network of community-owned SLMs, empowering the very people who contribute to the data ecosystem.
SLMs offer a targeted approach to AI, designed to address specific use cases with greater efficiency and lower cost than their larger counterparts. By combining efficiency with high-quality assistance, SLMs excel at automating and enhancing real-time interactions and at supporting developers within the Web3 ecosystem.
Assisterr’s integration of blockchain technology provides a transparent mechanism for tracking community contributions and incentivizes the sharing of previously inaccessible knowledge and data through rewards.
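As a rough illustration of what transparent contribution tracking could look like, the sketch below records data contributions in a hash-linked, append-only ledger and tallies rewards per contributor. The record fields, reward units, and identifiers are assumptions made for illustration, not Assisterr’s actual on-chain design.

```python
# Illustrative sketch only: a hash-linked ledger of data contributions with
# simple reward accounting. Assisterr's actual on-chain mechanism is not
# described in detail here, so all fields and reward logic are assumptions.
import hashlib
import json
import time
from dataclasses import asdict, dataclass, field


@dataclass
class Contribution:
    contributor: str   # hypothetical wallet or user ID of the contributor
    model_id: str      # which community-owned SLM the data targets
    data_hash: str     # hash of the contributed dataset or snippet
    reward: float      # reward units credited for this contribution
    timestamp: float = field(default_factory=time.time)


class ContributionLedger:
    """Append-only ledger; each entry commits to the previous one by hash."""

    def __init__(self):
        self.entries: list[dict] = []
        self.prev_hash = "0" * 64

    def add(self, c: Contribution) -> str:
        record = {"contribution": asdict(c), "prev_hash": self.prev_hash}
        entry_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = entry_hash
        self.entries.append(record)
        self.prev_hash = entry_hash
        return entry_hash

    def rewards_for(self, contributor: str) -> float:
        """Total rewards credited to one contributor across all entries."""
        return sum(
            e["contribution"]["reward"]
            for e in self.entries
            if e["contribution"]["contributor"] == contributor
        )


ledger = ContributionLedger()
ledger.add(Contribution("alice", "devrel-slm", "placeholder-hash-1", reward=5.0))
ledger.add(Contribution("bob", "devrel-slm", "placeholder-hash-2", reward=2.5))
print(ledger.rewards_for("alice"))  # -> 5.0
```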
Community-owned SLMs hold two key advantages over LLMs. First, they are cheaper and more efficient to train and maintain, which makes them well suited to specific business or technical needs. Second, a dynamic data pipeline often matters more than sheer model size: regular data updates are what keep an AI model relevant.
Assisterr addresses data sharing, a key obstacle in developing AI-powered solutions, with an infrastructure that enables quick model setup and rewards individuals and organizations for contributing their data.
At its core, Assisterr enables the creation of SLMs that are specialized for specific domains or business functions. These models can be integrated with user interfaces and improved through community contributions. The SLMs benefit from the expertise and continuous data updates provided by individual contributors, resulting in highly effective models in their designated areas.
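A minimal sketch of what specializing a small open model on community-contributed domain text could involve is shown below, using the Hugging Face Trainer API. The base model, dataset format, and hyperparameters are illustrative assumptions rather than Assisterr’s actual training pipeline.

```python
# Minimal sketch: fine-tuning a small open model on community-contributed
# domain snippets. Model name, data, and hyperparameters are assumptions.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "distilgpt2"  # placeholder small base model

# Community-contributed domain text (toy developer-support examples).
contributions = [
    "Q: How do I create a keypair? A: Use `solana-keygen new` from the CLI.",
    "Q: What is an SPL token? A: A token built on the Solana Program Library.",
]

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style models lack a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Tokenize the contributed snippets into a training dataset.
dataset = Dataset.from_dict({"text": contributions}).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=256),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slm-checkpoint",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # each new batch of contributions can trigger another pass
```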
Assisterr has trained AI-powered developer relations agents (DevRel AI agents) for various platforms, including Solana, Near, Particle Network, and Light Link. These agents, trained using extensive tech documentation and codebases, have improved customer service by handling up to 95% of support requests, reducing wait times, and identifying areas for documentation improvement.
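As one hypothetical view of how such an agent could stay grounded in documentation, the sketch below retrieves the docs chunk most relevant to a support question and folds it into a prompt for a domain SLM. The retrieval method, sample snippets, and function names are assumptions, not Assisterr’s implementation.

```python
# Illustrative sketch of a docs-grounded support agent: retrieve the most
# relevant documentation chunk, then hand it to a domain SLM as context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Documentation chunks (in practice, split from a platform's docs/codebase).
doc_chunks = [
    "Use `solana airdrop 2` on devnet to fund a wallet for testing.",
    "Programs on Solana are deployed with `solana program deploy <path>.so`.",
    "Anchor is a framework that simplifies writing Solana programs in Rust.",
]

vectorizer = TfidfVectorizer().fit(doc_chunks)
doc_vectors = vectorizer.transform(doc_chunks)


def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Return the documentation chunks most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    ranked = sorted(range(len(doc_chunks)), key=lambda i: scores[i], reverse=True)
    return [doc_chunks[i] for i in ranked[:top_k]]


def build_prompt(question: str) -> str:
    """Build a prompt that keeps the SLM's answer grounded in retrieved docs."""
    context = "\n".join(retrieve(question))
    return f"Documentation:\n{context}\n\nSupport question: {question}\nAnswer:"


print(build_prompt("How do I get test SOL on devnet?"))
```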
Assisterr’s model ensures that SLMs maintain their expertise in specific fields and advocates for community data ownership. The project includes an AI infrastructure layer for the interoperability of community-owned models and a mechanism for incentivizing data contribution and verification, ensuring that the models remain up-to-date and efficient.
In addition to launching a contributors program, Assisterr plans to launch its testnet and deploy 100 AI Agents in the second quarter of 2024. This will be followed by the transition to its mainnet, integration with Solana, and the beta release of the AI Lab. The official launch of Assisterr’s AI Lab and Monad integration will come later.
Assisterr’s goal is to keep each AI model’s knowledge base up to date, ensuring an optimal user experience and interface. Beyond supporting the developer community, Assisterr envisions expanding its capabilities in the near future to include building decentralized applications (DApps) and other applications on behalf of users.