
Our Position on AI Regulation

August 27, 2024

There is currently substantial interest in directly regulating the development and availability of AI, rather than its applications. Here is our position.

As applied AI specialists with a focus on humanity verification and security, we have frequently been asked to provide context and make predictions.

Our job is to identify malicious activity, whether automated or human. Managing AI and human abuse at scale over many years has given us a very concrete and pragmatic view on these topics.

Below are answers to questions we are often asked by policymakers and regulators, followed by a brief summary of their policy implications for AI.

Our aim in offering this analysis is to contribute to regulations that acknowledge the realities and tradeoffs of AI development in our interconnected, multipolar world.

Context

Is automated or AI-mediated abuse a new phenomenon?

No. We have seen attempts to automate attacks against online services for decades, and deep learning tools have been used by attackers for many years. As an industry, we have long familiarity with this class of threats.

Does better AI fundamentally change the strategic calculus for defenders?

Not at its current state of development. Defenders must already deal with human-level threats, not just simple automation.

As AI capabilities advance, defensive strategies will continue to evolve in tandem. Defenders are structurally advantaged over attackers in most cases, and modern defenses take advantage of this to continuously and automatically adapt to attacks.

Would enforcing a delay in the public release of improved models that achieve human-level intelligence materially aid defenders?

No. Modern defenses already need to take into account human-level intelligence, e.g. from clickfarms: pools of people in low-cost countries whose labor is harnessed for malicious purposes.

Correctly designed modern systems are not intrinsically weakened by human-level intelligence applied to scaled attacks, and they already see this on a regular basis.

Would enforcing a delay in the public release of improved models that achieve human-level intelligence materially hamper their abuse by attackers?

No. It is already evident that no major AI lab has a dramatic advantage over any other. In the past year every lab has converged on very similar performance.

The commercial incentives at play mean leading-edge labs are currently spending huge amounts of resources because they prioritize speed over efficiency. This should not be interpreted to mean that no one else can reach similar performance without similar expenditure.

"Fast follower" labs spend as little as 1% of the resources to achieve similar results, with the time to reproduction falling from around 24 months in 2022 to 12 months in 2023, and at most 6 months in Q3 2024. We expect this will continue to compress.

Is it possible to guard the secrets of any leading AI lab indefinitely?

No. The leading commercial labs hire internationally, including citizens of China, Russia, Iran, and other nations of special interest to policymakers in Western countries. Some have even helpfully located their research centers in those countries. For example, much of Microsoft's LLM research is done in their Beijing offices.

Employees of these labs rotate between employers on a regular basis, taking their knowledge of the latest results with them. This includes members of Western labs moving to competitors in countries like China, all of whom may act as arms of their sponsor state to a greater or lesser degree.

Even if the human element were more tightly controlled, every company and government sponsoring these labs has also been subject to major security breaches in the recent past. It must be assumed these labs have already been or will be compromised to the extent that doing so has any value.

Is it possible to block any particular country from obtaining enough modern hardware to reproduce any particular breakthrough?

No. Huge quantities of the latest GPUs have already been sold into petrostates and other non-Western nations with no local controls on re-export, and no interest in hampering their local industries.

It is also very likely that reaching general human performance in reasoning and planning is primarily a software problem rather than a hardware problem, and it is not at all clear that tens or hundreds of thousands of GPUs are required to solve it.

Existing models appear to be much better latent reasoners than initial benchmarks showed. This has been demonstrated recently via techniques like multi-sampling, i.e. asking the same question many times and picking the best answer rather than only scoring the first answer. Billions of dollars of GPUs are probably not required to reach the next major milestones in model performance.

Even if they were required, most leading-edge fabs currently in operation are located in the vulnerable territory of Taiwan, and even if nothing changes there, China has already achieved 5nm domestic fab capability. It would be imprudent to assume that a nation state with centrally planned industrial capacity in critical industries could not produce as many chips as needed to be competitive.

Tensor math chips are not very complex, and domestic Chinese AI accelerators are within 30% of NVIDIA's performance already. It is likely that they will match efficiency at the same node within two to three years at most.

Large model training can also be efficiently parallelized, and faster chips are largely an economic optimization rather than a capability limiter. Similar total throughput can be reached by simply using a larger number of slower chips. Hardware availability should not be considered a major constraint on any nation state actor.

Is it likely that regulating development of AI via treaty could succeed?

No. Many nations are capable of operating at a high level in this space, and most would continue to do research in either secret or deniable ways even in the very unlikely event that such a treaty was ever adopted.

In reality, we expect that most weapons will be autonomous in the next decade, as weapons system autonomy greatly simplifies operating in contested environments. 

As we saw when drones were universally adopted by both sides on the battlefield in Ukraine, electronic warfare became increasingly important as a countermeasure. AI automation is the only reliable answer to communications and GPS blackouts, and thus we expect it will soon become universal in autonomous weapons systems.

This means that the odds of regulating AI development are approximately zero. Weapons ban treaties have been widely disregarded by their signatories to the precise extent they saw a benefit to doing so, and there is no reason to expect AI treaties to be any different.

Analysis

Implications for AI Regulation

Effective AI governance must be pragmatic, pursuing safety objectives while avoiding positions that disadvantage and harm those under its jurisdiction.

1. Focus on applications: Regulating specific applications of AI, rather than its development, is more feasible and impactful. Regulating development will have no material impact on progress and will only disadvantage those regions that do so.

2. International realism: Given the global nature of AI development, international collaboration on governance frameworks has extremely limited utility if intended to slow development. International collaboration should instead focus on harmonization of regulations around specific high-risk applications, keeping in mind that regulations which harm national competitiveness tend to be overturned only after lasting damage has been done to the regulating nation.

3. Flexible policies: Regulations should be limited in scope at this stage, and avoid assuming that the current early state of play accurately reflects the final form of a more developed ecosystem. Monitoring is warranted, but it is too early for prescriptive policies in many areas.

4. Transparency and accountability: Encouraging transparency in AI development and clarifying accountability mechanisms can help address risks to a limited extent, but liability must attach to the application of AI, not to its development.

5. Copyright: The current conversation is over-extrapolating from early approaches to building datasets. It is not necessary to train large models on huge amounts of public data to achieve good reasoning performance. The best results will likely be achieved with mostly or entirely synthetic data in the near future. Putting too much energy into mandating complicated licensing regimes may quickly look misguided.

The Role of Industry

As practitioners, we recognize our responsibility in shaping the future of AI. We propose:

1. Increased collaboration between industry, academia, and policymakers to keep abreast of ongoing developments and to develop practical and effective governance approaches as they become necessary, rather than preemptively on the basis of hypothetical futures.

2. Investment in research on AI safety and security to stay ahead of potential threats, and policies that encourage affected industries to invest in defenses against these threats in a timely manner, allowing a market-based response.

3. Development of best practices for responsible AI development and deployment, recognizing the practical impossibility of enforcement on a global scale. This means guidelines based on broad consensus are a better approach than rigid and prescriptive policies. Such policies are unlikely to be followed by other parties, and thus will only disadvantage the regulating entity and its subjects.

Summary

While the challenges of regulating AI are significant, we believe a nuanced, collaborative approach can help harness benefits while mitigating potential risks. 

No matter the regulatory approach, it must be grounded in a realistic understanding of both facts on the ground and the likely evolution of these technologies.

In our opinion, it is very likely that the AI space will radically change in the next few years. Attempting to regulate tomorrow's ecosystem with strict prescriptive legislation based on today's practices is likely to fail, harming those who adopt such legislation and helping competitors and adversaries who do not.

As a recent example, many countries over-regulated the internet during its early commercialization. They caused their local talent to leave, lost out on trillions of dollars of value, and created structural competitive disadvantages for themselves that still persist decades later. The risks of premature AI regulation to future prosperity and national competitiveness should not be underestimated. 

We recognize there are many discordant voices in this debate, and acknowledge our obligation to share our domain expertise with all parties. We invite further dialogue with policymakers, researchers, and other practitioners in industry to refine and implement effective governance strategies in this field.
