The concentration of power in centralised AI

FLock.io
Jun 17, 2024


Until regulations similar to the EU AI Act are adopted globally, centralised corporations will control the trajectory and value alignment of AI. We have to trust initiatives like OpenAI’s ‘superalignment’ team, despite its leader’s recent resignation, to ensure that AI remains beneficial to the public.

Meta came under fire this month for its plans to train AI tools on Facebook and Instagram images, attracting backlash from users and data rights groups.

This raises a crucial question: should AI be governed by businesses, by governments, or by the public? The essence of centralised AI is a central authority that hosts, processes and monetises data. So just how aligned with human intentions and values can it be?

The concentration of power in centralised corporations

Centralised corporations have innovated with a speed that has thrown AI into the limelight. Vast resources, talent and infrastructure have made this scalability possible. Their Achilles’ heel has been letting that speed outrun good judgement. Governance takes place behind closed doors, where corporate goals are drawn up and data rights are narrowly defined.

Retaining control is an imperative part of the business model. Data sent to the central server serves as both a resource for training models and a revenue generator. It is therefore in the interest of their bottom line to leave no seat at the table for users.

These business goals and values do not always align with those of users or the public. This can stifle innovation when commercial interests are prioritised over the broader benefit of collaboration; this was more than likely on Stability AI CEO Emad Mostaque’s mind when he resigned, saying that “[you’re] not going to beat centralized AI with more centralized AI”.

This misalignment problem is compounded by the ‘black box’ nature of centralised AI. The opacity of decision-making and the lack of accountability undermine trust, especially when AI is used in sectors like healthcare and criminal justice.

Decision-making in decentralised AI systems like FLock

In open-source decentralised systems, the community leads AI development democratically and transparently, and co-owns the models. Participants get involved through voting, monitoring, hosting, training and auditing.

The result? Models truly created with users in mind.

Through federated learning and blockchain, data remains local on devices during training. This eliminates the risk of data on a central server being monetised, and thus greatly reduces the influence of commercial goals. Decentralisation offers far greater reassurance of benevolent, user-oriented value alignment than committees like OpenAI’s superalignment team.
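To make the mechanism concrete, below is a minimal sketch of one federated-averaging (FedAvg) round in Python. The function names and the toy linear model are illustrative assumptions, not FLock’s actual implementation; the point is that each device trains on its own data and only model weights ever leave the device.

```python
import numpy as np

def local_train(weights, local_data, lr=0.1, epochs=5):
    """Run gradient steps on a device's private data, which never leaves it."""
    X, y = local_data
    w = weights.copy()
    for _ in range(epochs):
        preds = X @ w
        grad = X.T @ (preds - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w                                # only updated weights are shared

def federated_average(updates, sizes):
    """Aggregate per-device weight updates, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Illustrative setup: three devices, each holding its own private data.
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(10):                         # ten federated rounds
    updates = [local_train(global_w, data) for data in devices]
    global_w = federated_average(updates, [len(y) for _, y in devices])
```

Note that the aggregator only ever sees weight vectors, never the raw `(X, y)` data, which is what removes the incentive to hoard and monetise user data centrally.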

FLock invites the public to participate in incentivised training tasks as a training node, validator or delegator to create a diverse range of AI models required by communities, such as AI assistants, crypto trading bots, and a Web3 search engine.
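As a rough illustration of how those roles might fit together, the sketch below shows validators ranking training-node submissions and a reward pool being split with delegators. Every name and the reward formula here are hypothetical assumptions for illustration only; they are not FLock’s actual protocol.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    node: str       # training node that submitted a model
    score: float    # quality score assigned by validators (assumed 0..1)

def validate(submissions):
    """Validators rank submissions by evaluated model quality."""
    return sorted(submissions, key=lambda s: s.score, reverse=True)

def distribute_rewards(ranked, pool, delegated_stake):
    """Split a reward pool across nodes and their delegators (assumed rule)."""
    total_score = sum(s.score for s in ranked) or 1.0
    rewards = {}
    for s in ranked:
        share = pool * s.score / total_score
        # Assumed rule: delegators take a cut proportional to their stake,
        # capped at half of the node's share.
        cut = share * min(delegated_stake.get(s.node, 0.0), 0.5)
        rewards[s.node] = share - cut
        rewards[f"delegators:{s.node}"] = cut
    return rewards

subs = [Submission("node-a", 0.9), Submission("node-b", 0.6)]
print(distribute_rewards(validate(subs), pool=100.0,
                         delegated_stake={"node-a": 0.3}))
```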

Closing thoughts

FLock.io recently launched the world’s first decentralised AI Arena beta on train.flock.io, where the public can collaboratively train models for equitable incentives. FLock addresses the need for bespoke, on-chain, community-governed AI models. By integrating federated learning and blockchain, FLock trains models without exposing source data.

We invite the public to participate in training models today. Get whitelisted here.

Find out more about FLock.io here, and read our docs.

For future updates, follow FLock.io on Twitter.
