The world as we know it runs on software. From schools and hospitals to the financial sector and grocery stores, modern society depends on digital systems that link supply chains and information networks. This is why cybersecurity should be a first-rank issue for policymakers.
Anthropic, the company behind the AI chatbot Claude, has developed a new model that excels at identifying vulnerabilities in software. The model is so proficient, in fact, that Anthropic has not released it to the public. Instead, the company has partnered with major software developers, including Microsoft, Apple, and the Linux Foundation, giving them exclusive access to the model so they can patch these vulnerabilities.
To be clear, the AI is not creating these exploits or risks: many of these loopholes have existed for years but had gone undiscovered by humans. The new model excels at finding previously unknown vulnerabilities that leave users open to attack.
When state legislatures, such as Maine's, attempt to ban data center development, these are the kinds of risks they leave unaddressed. Should the U.S. halt data center expansion and diminish its global AI competitiveness, discoveries like these would likely be made first by nefarious foreign actors. Even if the entire U.S. economy were disconnected from AI models, foreign AI would still eventually identify the cybersecurity risks in our existing software stack and could cripple the U.S. economy.
My full piece exploring the data, published by the Taxpayer Protection Alliance, is available here.