The vulnerabilities of centralised AI

FLock.io
3 min read · Jun 11, 2024


In 2024, ChatGPT is still being jailbroken. “Godmode GPT” recently came into the public eye, bringing us back to the early days of asking chatbots for napalm recipes. System manipulation is a critical vulnerability of centralised AI, alongside data breaches, service outages, and data bias.

The way we see it, there is far less incentive to jailbreak AI if the public already controls it. These hacking attempts are a cry from the AI community for a seat at the decision-making table, one that centralised corporations refuse to offer.

Centralisation has, until now, been an effective architecture for rapidly scaling AI. But that very haste has become its pitfall: the technical, operational and ethical holes are now revealing themselves. Tech leaders are realising that AI needs to be decentralised, and fast.

Centralised systems suffer from technical vulnerabilities

Centralised AI systems are characterised by a central authority storing, processing and monetising data.

Reliance on a central server creates a single point of failure: when it goes down, the whole system can go with it, causing worldwide disruption. A familiar example is the 2021 Instagram and Facebook outage, which lasted roughly six hours, prompted international frustration and drew an apology from Mark Zuckerberg. Users located far from the central server also tend to experience higher latency and slower performance.

Storing large amounts of data in one location also makes that location an attractive target for cyberattacks. Medibank, an Australian health insurance company, is a notorious example from 2022: cybercriminals stole the medical data of nearly ten million customers and released it after the company refused to pay a ransom. See our previous article for more on malicious attacks.

Jailbreaking is one way of exploiting an AI system's weaknesses to bypass its safeguards. Once jailbroken, a model can be manipulated into performing tasks or exposing data it was never intended to, and can be put to unethical uses such as spreading fake news, conducting illegal surveillance, and producing biased outputs.

Decentralised systems like FLock benefit from superior security

Security measures, such as encryption, authentication and regular audits, can help protect centralised AI systems to some extent.

However, decentralisation offers a far more robust architecture that is less susceptible to manipulation. In systems like FLock, this is thanks to a large community of participants who vote, monitor, host, train, and audit democratically and transparently.

The immutable nature of public blockchain ledgers and their consensus mechanisms means that data and transactions cannot be altered without detection by other nodes, allowing any participant to audit the system.
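To make the tamper-evidence idea concrete, here is a minimal, hypothetical sketch of a hash-chained ledger in Python. It is not FLock's implementation and omits consensus entirely; it only illustrates why any participant who recomputes the hashes can detect an altered record.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    """Link records so that each block commits to its predecessor."""
    chain, prev = [], "0" * 64  # genesis sentinel
    for record in records:
        block = {"data": record, "prev_hash": prev}
        prev = block_hash(block)
        chain.append((block, prev))
    return chain

def audit(chain) -> bool:
    """Recompute every hash; a single altered block breaks the links."""
    prev = "0" * 64
    for block, stored_hash in chain:
        if block["prev_hash"] != prev or block_hash(block) != stored_hash:
            return False
        prev = stored_hash
    return True

chain = build_chain(["model update A", "model update B"])
print(audit(chain))             # True
chain[0][0]["data"] = "tampered"
print(audit(chain))             # False: the change is detected
```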

Distributed hosting, such as that provided by decentralised compute marketplaces io.net and Akash Network, operates across multiple nodes. This means there is no single central server for attackers to target, and the overall system can continue to function correctly even when one node is compromised.

With federated learning (FL), data remains on local devices during AI model training; only model updates, never the raw data, are sent to a central aggregator.
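As a rough illustration, the sketch below runs federated averaging on a toy linear-regression task. The client datasets, learning rate and round count are all hypothetical, and real systems like FLock layer incentives, validation and on-chain coordination on top; the point is simply that only weights leave each client.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Each client trains on its own data; raw data never leaves the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """The server aggregates only weight updates, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two hypothetical clients, each holding a private dataset.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (100, 200):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(global_w)  # approaches [2.0, -1.0] without any client sharing raw data
```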

Closing thoughts

FLock.io recently launched the world’s first decentralised AI Arena beta on train.flock.io, where the public can collaboratively train models for equitable incentives. FLock addresses the need for bespoke, on-chain and community-governed AI models. By integrating federated learning and blockchain, FLock trains models without exposing source data, reducing the risk of data breaches.

We invite the public to participate in training models today. Get whitelisted here.

Find out more about FLock.io here, and read our docs.

For future updates, follow FLock.io on Twitter.
