Power & society

The AI Power Shift: Why Control Matters as Much as Capability

The AI story is not only about smarter tools. It is also about who controls the systems, data, infrastructure, defaults, and decisions that shape how those tools reach society.

6 May 2026 · 6 min read

Many AI conversations focus on capability: what the systems can write, draw, summarise, code, plan, or automate.

Capability matters. But it is only half the story.

The other half is control.

Who owns the models? Who controls the infrastructure? Who sets the defaults inside the tools people use every day? Who decides which systems are safe enough, cheap enough, or convenient enough to become normal?

Those questions are not abstract. They shape how AI power moves through society.

AI is becoming infrastructure

A tool becomes infrastructure when people stop thinking about it as optional and start relying on it as a background layer of everyday life.

Search became infrastructure. Smartphones became infrastructure. Cloud computing became infrastructure. Payment networks, app stores, maps, and social platforms became infrastructure for large parts of modern life.

AI may follow a similar path.

If AI becomes the layer through which people search, write, learn, hire, manage customers, analyse documents, produce media, write code, and interact with public services, then the organisations controlling that layer gain enormous influence.

They do not only sell software. They shape the environment in which decisions are made.

Defaults are powerful

Most people do not configure every setting. They use the default tool their employer provides, the assistant built into their phone, the model embedded in their office software, or the answer surfaced by a search engine.

That makes defaults powerful.

A default can influence which answers people see first, which sources they trust, and which ways of writing, searching, and deciding come to feel normal, often without anyone making a deliberate choice.

The power of AI may therefore be less visible than the power of a dramatic robot or a single breakthrough model. It may sit inside the ordinary interface everyone uses.

Concentration and dependence

Advanced AI systems require talent, data, chips, energy, capital, and distribution. That favours large companies and well-funded states.

Smaller organisations can still build useful products on top of AI platforms, but many will depend on a small number of underlying providers. Schools, councils, startups, charities, publishers, and businesses may all adopt tools whose deeper rules they do not control.

Dependence is not automatically bad. Infrastructure always involves dependence. The question is whether that dependence is understood, governed, and balanced by alternatives.

What ordinary readers should watch

The power story can feel remote, but there are visible signals to watch: which assistants come bundled with phones and office software, which providers schools, councils, and employers adopt, how many products depend on the same few underlying models, and how hard it becomes to opt out.

These signals show AI moving from novelty to dependency.

The practical question

The practical question is not “Should AI exist?” It already does.

The better question is: where should control sit?

Some control belongs with individuals: the right to understand, refuse, correct, or appeal automated decisions. Some belongs with organisations: responsible procurement, training, auditing, and human oversight. Some belongs with democratic institutions: rules for safety, competition, privacy, labour, education, and public-sector use.

Capability tells us what AI can do. Control tells us who gets to decide how it is used.

Boiling Frogs will keep returning to both, because the future of AI is not only a technical contest. It is a power shift unfolding through tools that may soon feel ordinary.