3 areas of implicitly trusted infrastructure that can lead to supply chain compromises

The SolarWinds compromise in December 2020 and the ensuing investigation into their build services put a spotlight on supply chain attacks. This has generated renewed interest among organizations in reevaluating their supply chain security posture, lest they become the next SolarWinds. But SolarWinds is just one of many recent supply chain attacks.


To get a broader understanding of what organizations are up against, let’s look at three major supply chain compromises that occurred during the first quarter of 2021. Each one of these supply chain attacks targeted a different piece of implicitly trusted infrastructure—infrastructure that you may or may not be paying attention to as a potential target in your organization.

1. Package squatting via software package repositories

In February 2021, researcher Alex Birsan exploited the implicit trust developers have in software package repositories. By looking at public software repositories containing the names of private packages, Birsan was able to determine that companies like Apple and Microsoft had internal packages for certain teams hosted on private repositories. By uploading a package with the same name and a higher version number to the public repositories—an attack known as “package squatting”—Birsan was able to run code inside these companies’ networks.

Software development, by its very nature, requires the use of external packages. At the systems level, this takes the form of operating system APIs such as Microsoft Windows’ CreateProcessA.

As we move further down the abstractions from systems programming to scripting languages (such as Node.js, Python, and Ruby), we begin to see many more public, open-source, community-managed packages that can be installed through the respective package repositories (npm, PyPI, and RubyGems).

Most developers innately trust packages from these sources and use them in their automated build processes and when developing software. This provides an opportunity for attackers to use slightly misnamed packages as delivery mechanisms for malicious code.

Prevent package squatting with package signing

The best way to assure that we are receiving the right package from the right people is a cryptographically signed package, verified by the public key of the package maintainer.

Unfortunately, most major language repositories implement nothing of the sort. Only a few have repository-level signing (where the repository itself signs the packages), some offer author-level signing (where the author does the signing), and many lack any package signing at all beyond basic checksums (where the code is run through a hashing algorithm and the download is verified against the resulting hash).
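To illustrate the weakest of those guarantees, here is a minimal sketch of checksum verification before installing a downloaded archive. The filename is a placeholder, and the file contents are a stand-in for a real download; in practice the expected hash would come from the package’s release page.

```shell
#!/bin/sh
# Verify a downloaded package archive against a published SHA-256 checksum.
# The filename and file contents here are illustrative placeholders;
# this is the SHA-256 of the string "hello".
expected="2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

# Stand-in for a real downloaded package archive.
printf 'hello' > package.tar.gz

actual=$(sha256sum package.tar.gz | awk '{print $1}')

if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH - do not install" >&2
  exit 1
fi
```

Note that a checksum only proves the download was not corrupted or swapped after the hash was published; unlike a signature, it says nothing about who built the package.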

Without package signing, the next best place to address these packaging problems is the local environment. You can operate an internal package mirror, verify the code, and pin the versions of packages used in the software development cycle. This eliminates typo-squatting, since packages with incorrect names will not exist in your local mirror, and eliminates the version issue discovered by Birsan, where newer public versions would otherwise be pulled automatically.
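One way to enforce pinning is a small CI check that fails whenever a dependency is not locked to an exact version. A sketch for pip-style requirements files follows; the file contents are illustrative, and a real check would point at your repository’s actual requirements file rather than generating one:

```shell
#!/bin/sh
# Fail the build if any dependency in a pip requirements file is not
# pinned to an exact version with '=='. Example file is illustrative.
cat > requirements.txt <<'EOF'
requests==2.25.1
urllib3==1.26.4
EOF

# Non-comment, non-blank lines that lack an exact '==' pin.
unpinned=$(grep -vE '^[[:space:]]*(#|$)' requirements.txt | grep -v '==' || true)

if [ -n "$unpinned" ]; then
  echo "Unpinned dependencies found:" >&2
  echo "$unpinned" >&2
  exit 1
else
  echo "all dependencies pinned"
fi
```

Combined with an internal mirror (for example, pointing pip at it via `--index-url`), this keeps both the name and the version of every dependency under your control.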

Unfortunately, we will still have the problem of confirming that packages have come from the right source, but at least this eliminates the low-hanging fruit for attackers. Hopefully, in the future, more languages’ package managers will begin to include author-level signing so package builds can be verified.

2. Malicious commits via version control systems

In March 2021, two malicious commits were pushed to the PHP git repository under the guise of coming from Nikita Popov and Rasmus Lerdorf, two recognized contributors to the PHP project. Both developers confirmed that their SSH credentials had not been compromised.
The only remaining explanation was that the self-hosted git server run by the PHP project had been compromised, and that attackers masqueraded as known members to insert malicious code into the code base. It later came out that the attackers most likely managed to dump the server’s database and obtain the usernames and passwords for HTTPS-based git access.

Version control systems are another essential part of modern software development: the location where all code, and every change to that code, is checked in and stored. The most common is Git, but others such as Mercurial and Subversion (SVN) are still in use. These systems are now coupled with automated build systems in continuous integration pipelines, and sometimes with automated deployment as well.

In the case of SolarWinds, the Orion build pipeline was what was targeted by the attackers so they could inject their malicious DLL into the codebase. Automated pipelines, after being written, are only occasionally modified and therefore provide an excellent spot for attackers to inject themselves into the software development lifecycle.

Version control systems, moreover, are where any malicious code must ultimately be checked in, making them an essential dependency for any adversary looking to compromise a company’s software pipeline and leverage the implicit trust of that company’s users.

Stop malicious commits with signed commits

Once the server a software repository is hosted on is compromised, an attacker can do just about anything with the repositories on that machine if the users of the repository are not using signed git commits.

Signing commits works much like author-signed packages from package repositories but brings that authentication down to the individual code change. To be effective, every user of the repository must sign their commits, which adds friction: PGP is not the most intuitive of tools and will likely require some user training to implement, but it’s a necessary trade-off for security.
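Setting this up is a one-time configuration per developer. A sketch of the relevant git commands follows; the key ID is a placeholder, and this assumes each developer already has a GPG keypair:

```shell
# One-time setup: tell git which GPG key to sign with (placeholder key ID)
git config --global user.signingkey 3AA5C34371567BD2
# Sign every commit by default
git config --global commit.gpgsign true

# Or sign a single commit explicitly
git commit -S -m "Add input validation"

# Verify the signature on the latest commit
git verify-commit HEAD
# Show signature status in the history
git log --show-signature -1
```

Verification on the receiving side only works if the verifier’s keyring contains the developers’ public keys, so distributing and maintaining those keys is part of the rollout.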

Signed commits are the most reliable way to verify that commits come from the original developers. The user training and inconvenience of such an implementation are worthwhile if you want to prevent malicious committers from masquerading as developers. This would also have made the HTTPS-based commits to the PHP project’s repository immediately suspicious.

Run auditing and code reviews regularly

Signed commits do not, however, solve every problem: a compromised server hosting a repository can let an attacker inject themselves into several points of the commit process. Git supports server-side hooks that allow all sorts of modifications to incoming commits, via the pre-receive and post-receive hooks.
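Those same server-side hooks can also be used defensively. As a sketch, a pre-receive hook could refuse any push containing a commit that does not verify, assuming the server’s GPG keyring holds every developer’s public key; the logic is wrapped in a function here for clarity:

```shell
#!/bin/sh
# Sketch of a pre-receive hook that rejects pushes containing unsigned
# (or unverifiable) commits. Assumes all developer public keys are in
# the server's GPG keyring. Git feeds "<old> <new> <ref>" lines on stdin.
check_push() {
  zero="0000000000000000000000000000000000000000"
  while read -r oldrev newrev refname; do
    # A brand-new ref has no old revision to diff against.
    if [ "$oldrev" = "$zero" ]; then
      range="$newrev"
    else
      range="$oldrev..$newrev"
    fi
    for commit in $(git rev-list "$range"); do
      if ! git verify-commit "$commit" >/dev/null 2>&1; then
        echo "rejected $refname: commit $commit has no valid signature" >&2
        return 1
      fi
    done
  done
  return 0
}

# In the real hook file, the function would be invoked as: check_push
```

Of course, if the attacker controls the server they can disable the hook, so this is a tripwire against stolen credentials rather than a defense against full server compromise.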

If a server has an automated build process, as the SolarWinds Orion server had, this also presents an excellent place for attackers to implement their changes. When a build server takes checked-in code, runs tests, and automatically builds release artifacts, many intermediate artifacts are generated into which malicious code can be inserted.

Additionally, version control systems often store developer tooling for the project at hand. This can be exploited by attackers, as we saw with the North Korean group that targeted security researchers in January 2021. Their attack used Microsoft Visual Studio build files and the trust placed in them by researchers to con them into running malicious code. An attack of this sort could easily translate to developer tooling stored in your repositories, such as a malicious Rust cargo file that pulls down an attacker-controlled library.

The best way to prevent this is to audit and code review all changes in a repository and to set alerts on changes to any developer tooling or build infrastructure that automates code execution. If a developer hasn’t communicated changes to the team, then it’s time to scrutinize those changes.
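A lightweight version of such an alert is a CI step that flags any change touching build or tooling files. A sketch follows; the protected path patterns and the example change set are illustrative, and in a real pipeline the list of changed files would come from something like `git diff --name-only origin/main...HEAD`:

```shell
#!/bin/sh
# Flag changed files that touch build tooling or CI configuration.
# Reads a list of changed file paths on stdin; patterns are illustrative.
flag_tooling_changes() {
  grep -E '(^|/)(Makefile|Dockerfile|Cargo\.toml|\.github/workflows/)' || true
}

# Example usage with an illustrative change set:
changed="src/main.rs
.github/workflows/release.yml
README.md"

flagged=$(printf '%s\n' "$changed" | flag_tooling_changes)
if [ -n "$flagged" ]; then
  echo "Review required - build/tooling files changed:"
  echo "$flagged"
fi
```

Routing that output to a required reviewer or a chat alert turns silent tooling changes into something a human has to acknowledge.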

3. Man-in-the-middle attacks via TLS certificates

In January 2021, Mimecast was informed by Microsoft that the private key for one of their TLS certificates had been compromised as part of the SolarWinds event. This compromised certificate, according to Mimecast, was used by roughly 10% of its 36,000 customers to establish encrypted connections between their services and Microsoft 365 Exchange.

TLS certificates, colloquially referred to as SSL certificates, provide the backbone for encryption on the Internet. What many people don’t know is that they are also used for authentication as part of the TLS handshake: a server presents its certificate to prove its authenticity to the client, and the certificate is signed by a Certificate Authority (CA) that the client can use to validate it.

In the case of browsers, where most users rely on TLS seamlessly every day, the certificates from these CAs are stored locally and checked whenever a certificate arrives from a server. In the older RSA key exchange, after validating the certificate, the client sends a random secret encrypted with the server’s public key, which only the server can decrypt using its private key (modern TLS versions instead use ephemeral Diffie-Hellman key agreement, with the certificate’s key used for authentication). Either way, these private keys must remain secure to protect the connection; otherwise an attacker could impersonate the server or eavesdrop on the connection between the client and the server.

Compromises of TLS certificates are nothing new and present a large problem, possibly one of the largest in the cryptography supply chain. CAs are a single point of failure and have a history of security compromises. Major attacks like the DigiNotar breach or the unauthorized issuance of certificates for test[.]com by CA Certinomis (which led to their trust being pulled by Mozilla) are unfortunately a yearly occurrence.

Protecting against certificate compromises

Choose a reliable Certificate Authority and make sure that your private keys are only ever generated by you. TLS 1.3 provides perfect forward secrecy for all its sessions, meaning that even if an attacker compromises a certificate’s private key, they cannot decrypt previously recorded sessions encrypted under that certificate. This is a major feature for protecting prior communications.

Unfortunately, there is not much else that can be done beyond more frequent certificate rotation, which ensures that a compromised certificate cannot be used for man-in-the-middle attacks for years at a time. That is why, in 2020, Apple, Google, and Mozilla all announced a one-year maximum age for trusted certificates, forcing server administrators to rotate certificates more rapidly. For the same reason, Let’s Encrypt offers only 90-day certificate lifetimes.

You should follow these short certificate-lifetime guidelines even when implementing self-signed certificates for internal projects and authentication. Tools like HashiCorp Vault can help manage your certificate store and certificate lifecycles.
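As a minimal sketch of that practice, the following generates a 90-day self-signed certificate for an internal service and checks whether it is close enough to expiry to warrant rotation. The hostname is a placeholder, and the 30-day rotation window is an assumption you would tune to your own policy:

```shell
#!/bin/sh
# Generate a 90-day self-signed certificate for an internal service
# (placeholder hostname) and alert when rotation is due.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout internal.key -out internal.crt \
  -days 90 -subj "/CN=internal.example.com" 2>/dev/null

# -checkend succeeds only if the cert is still valid that many seconds
# from now; 2592000 seconds = 30 days.
if openssl x509 -in internal.crt -noout -checkend 2592000; then
  echo "certificate OK - more than 30 days remaining"
else
  echo "rotate now - certificate expires within 30 days" >&2
fi
```

Run on a schedule (cron, CI), a check like this turns short certificate lifetimes from a burden into a routine, automated rotation signal.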


Supply chains are vast, and this is by no means a comprehensive list of potential problems. A threat modeling exercise within your organization can give you a more robust view of vulnerable infrastructure that is often overlooked. Take a concentrated look at the implicit trust relationships that you have with vendors and open-source software used in your build or manufacturing process and you will likely find many areas where trust supersedes security.