Salt Typhoon Hackers Exploit ‘Backdoor’ Installed by Congress

Regulators shouldn’t complicate an already delicate balance between security, functionality, and privacy. Their bumbling rigidity inadvertently creates new vulnerabilities.

Several weeks ago, the WSJ broke a story that hackers backed by the Chinese government had gained access to highly sensitive communications in the US, including on AT&T and Verizon networks. The full scope of the attack and the data stolen are still unknown but estimated to be substantial, including bank information, text messages, and other personal data. The “Salt Typhoon” incident underscores an ongoing vulnerability in US cybersecurity caused by increased policy involvement in advanced technology.

The 1994 Communications Assistance for Law Enforcement Act (CALEA) required telecommunications companies to build systems with “backdoors” that allow law enforcement to intercept and inspect communications. Backdoors are intentionally designed access points in software systems that allow authorized parties to bypass encryption. But cyber experts have warned for years that no backdoor access can be indefinitely limited to “the good guys.” The US government has a long history of wiretapping digital communications with and without required warrants, much of which was revealed in 2013 by Edward Snowden. The US and other governments still demand backdoor access, but they are also making other transparency demands of US companies that could similarly increase the risk of hacker exploitation and government abuse of private communications.

These include: the Department of Justice’s (DOJ) antitrust case against Google, in which the government is proposing to fragment Google’s software ecosystem and require publication of certain search features and customer data; pressures to open Apple’s tightly controlled software ecosystem; and emerging Artificial Intelligence (AI) transparency and reporting laws. Instead of backdoor policies like CALEA and mandatory open-access laws, policymakers should recognize that they cannot legislate a single national solution to privacy, accept that there is no safe method for creating a backdoor, and instead utilize market forces to balance privacy and security.

The Parallels Between CALEA and New Tech Policy Proposals

The lessons of CALEA are critical to understanding the potential risks posed by new tech policies currently under debate. In its antitrust case, the DOJ argues that Google’s dominance in search and digital advertising stifles competition. Prosecutors in the case say that Google controls too much personal data, so one proposed solution is to force Google to share its search data with competitors. While intended to foster competition, opening up such data poses significant privacy risks. Google’s search engine processes billions of queries daily, including personal and sensitive information. These data are often linked across services and the Android operating system so that Google can move them securely between its products. If access to this data is broadened and services are broken up, the potential for misuse or breaches by hackers increases. Much like the vulnerabilities created by CALEA, government-mandated data access could “give the blueprints” to hackers about how to get into the system.

The US federal government is also suing Apple to open up its “walled garden” approach to software, under which the company tightly controls what apps can be installed on its devices. Proponents of this change argue it will enhance competition and reduce the data one company controls, but it would also reduce Apple’s ability to vet apps for security. Allowing unverified third-party apps could significantly increase the risk of malware and other cyber threats, as many industry experts have warned. Just as CALEA’s backdoors created new opportunities for exploitation, dismantling Apple’s strict controls could expose users to greater security risks.

New AI transparency laws, designed to increase accountability and fairness in artificial intelligence systems, require companies to disclose critical details about how their AI models work. These requirements may lead to the exposure of proprietary algorithms and training data. If malicious actors access these details, they could more easily identify weaknesses and launch attacks. 

Using Emergent Solutions to Strike the Difficult Balance Between Privacy and Security

Apple has famously denied governments backdoor access of the type CALEA creates, specifically to prevent a Salt Typhoon-level hack. In one instance, this encryption prevented Russian spies from accessing American data. Signal, an end-to-end encrypted messaging app, has also refused backdoor access to its messages, vocally supporting the idea that “there’s no way to build a backdoor that only the ‘good guys’ can use.” No matter how attractive its other aims, no legislation can achieve a safe software environment for every service. One flavor of ‘security’ can compromise others.

By pursuing policy goals without regard for the reality of cybersecurity, the US and European governments inadvertently create new opportunities for cybercriminals, foreign adversaries, and other malicious actors to exploit these weaknesses. The key flaw in these policies is that they presume that security and competition can be legislated or imposed through public policy, without recognizing that security is not a static feature but an evolving process. This approach assumes that only the “good guys” will use these backdoors or access sensitive information, but history has repeatedly shown that once a vulnerability exists, it becomes a target for everyone, including malicious actors.

In contrast to government-imposed mandates, the free market is better equipped to balance privacy and security through an emergent, imperfect process of success and failure, marked by iterative improvement. In a competitive market, companies must constantly innovate to offer products that meet consumer demands for both security and functionality. Some companies may prioritize privacy and create products with strong encryption and limited data sharing, while others may offer more-open systems that integrate better with third-party applications. Through this process, consumers can choose the level of privacy and security that best suits their needs, and companies that fail to protect their users will face reputational and financial consequences.

Rather than imposing one-size-fits-all mandates that create systemic vulnerabilities, the government should allow the market to determine the appropriate balance between privacy, functionality, and security. Companies that offer the most secure, user-friendly solutions will thrive, while those that fail to protect their users will gradually be driven out of the market. This process allows for flexibility and innovation, ensuring that security evolves in response to emerging threats and consumer preferences rather than being stifled by rigid regulations that inadvertently create new vulnerabilities. Governments can offer support by studying and sharing best cyber practices, helping to “red team” American cybersystems, and teaching Americans how to be safe online. In short, there are many steps the government can take before it reaches for more extreme policies like backdoors and open-access laws, which create more cyber risks.


