The first piece of advice any cybersecurity expert will give you is to install the latest updates for your software and operating system. Updates prevent hackers from exploiting known vulnerabilities to carry out malicious deeds such as spreading malware or stealing information.
But what do you do when the updates themselves contain malware? This is exactly what happened in mid-September, when an infected version of the famous security and maintenance tool CCleaner was widely distributed among its users. What made the attack especially noteworthy was the fact that the attackers pushed their malware through the hacked servers of Avast, the company that owns CCleaner.
One of the main challenges hackers have to solve when conducting attacks is establishing trust with their victims. For instance, in spear phishing scams, attackers might spend weeks or months interacting with a single target before sending their malicious payload over email. Most people with even minimal security awareness will be suspicious of an application that arrives as an email attachment or as a download link from an unofficial website.
But trust is automatically established when an update comes from the original publisher of an application. That’s why millions of users installed the infected version of CCleaner without thinking twice about whether it contained any malicious code.
The CCleaner episode illustrates one of the biggest challenges that software vendors face: securing their supply chain. Failing to protect critical assets such as update and distribution platforms can inflict more damage than any phishing scam. But worse than the immediate damage is the erosion of trust. When users can't trust official updates, who can they trust?
Unfortunately, CCleaner's publisher is not the only company whose supply chain has been compromised. Earlier this year, attackers compromised the servers of a Ukrainian accounting software vendor to distribute the NotPetya ransomware to its customers, from which it subsequently propagated to thousands of other computers.
In another case last year, hackers planted an infected version of the Linux Mint ISO on the distribution's official website. Thousands of users downloaded the file before the maintainers removed it.
For end users, protection against supply chain attacks is a catch-22. On the one hand, if you hold off on installing an update to make sure it's safe, you might miss a critical security fix and leave yourself open to other attacks. On the other, if you rush to install an update as soon as it's released, you might be installing an infected version of the application. Given that supply chain attacks are still relatively infrequent, I think it's safer for the moment to install updates as soon as possible. However, users can improve their defenses against potentially malicious updates by keeping their antimalware solutions up to date, and by using some of the newer security tools that apply behavioral analysis to detect suspicious activity in installed programs.
As far as vendors are concerned, they must do more to protect their production and distribution pipelines from attackers, especially as software continues to eat the world and becomes a more important part of every activity we perform.
Last year, I discussed new approaches to securing software in an article I wrote for TechCrunch. Static and dynamic software testing tools can help find intentionally injected malicious code by incorporating security auditing into the development process. Using code vetting and auditing tools throughout the software development lifecycle will put vendors in a better position to stop supply chain attacks before tainted code ever reaches the distribution stage.
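To make the static-analysis idea concrete, here is a deliberately tiny sketch of the kind of check such tools automate: walking a program's syntax tree and flagging calls that warrant human review. The deny-list and the policy are my own illustrative choices, not a real tool's ruleset; production scanners are far more sophisticated.

```python
import ast

# Illustrative deny-list: dynamic code execution is a common place
# to hide an injected payload, so flag it for manual review.
SUSPICIOUS_CALLS = {"eval", "exec", "compile"}


def flag_suspicious_calls(source: str) -> list:
    """Return (line_number, function_name) for every call to a
    deny-listed function found in the given Python source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in SUSPICIOUS_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings
```

A CI pipeline could run a check like this on every commit and fail the build when new findings appear, forcing a reviewer to sign off before the code moves toward release.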
Another measure that companies can adopt to protect the integrity of their software is to distribute their critical assets. Most companies use signing keys to give their clients a way to verify that a software package truly comes from them. Putting the signing keys and the distribution server application on the same machine or cloud server is bad practice, because once attackers gain access to that server, they'll have everything they need to stage their attack. Storing keys in a separate location and limiting access to them makes it much more difficult for attackers to distribute malicious software that looks legitimate.
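The separation principle can be sketched as follows. This is a toy model only: I use symmetric HMAC from the Python standard library as a stand-in for real asymmetric release signing (GPG, Authenticode, minisign and the like), which means the verification key here is shared, whereas in production the client would hold only a public key. The class and method names are hypothetical.

```python
import hmac
import hashlib


class SigningService:
    """Runs on an isolated, access-controlled machine.
    The secret key never leaves this service."""

    def __init__(self, secret_key: bytes):
        self._key = secret_key

    def sign(self, package: bytes) -> str:
        return hmac.new(self._key, package, hashlib.sha256).hexdigest()


class DistributionServer:
    """Holds only packages and their signatures, never the key.
    Compromising this server alone does not let an attacker
    produce a valid signature for a tampered package."""

    def __init__(self):
        self.artifacts = {}

    def publish(self, name: str, package: bytes, signature: str):
        self.artifacts[name] = (package, signature)

    def fetch(self, name: str):
        return self.artifacts[name]


def client_verify(package: bytes, signature: str, key: bytes) -> bool:
    """Client-side check. With real asymmetric signing this would
    use the vendor's public key instead of a shared secret."""
    expected = hmac.new(key, package, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The point of the split is that an attacker who owns the distribution server can swap the package bytes but cannot forge a matching signature, so clients that verify will reject the tampered download.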
In my own experience, abstraction is a useful tool for software security. A well-designed application breaks its code down into independent software components, each of which is a black box that fulfills a set of contracts (its interface). When managing software development projects, I design the interfaces of the components and assign each component to a developer or team. No single person has access to the full source code of the application. Every team writes the code of its own component and interacts with the rest of the components through the defined interfaces. This approach was initially intended to make the development process easier, but I eventually came to the conclusion that it also improves security by distributing the source code and limiting access to each individual component.
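A minimal sketch of this contract-first structure, using Python's abstract base classes; the component names (`PaymentGateway`, `CheckoutService`) are hypothetical examples, not from any real project of mine.

```python
from abc import ABC, abstractmethod


class PaymentGateway(ABC):
    """The contract for the payments component. The team implementing
    it sees only this interface, not the rest of the code base."""

    @abstractmethod
    def charge(self, account_id: str, amount_cents: int) -> bool:
        ...


class MockGateway(PaymentGateway):
    """A stand-in implementation; another team's real implementation
    could be swapped in without touching the callers."""

    def charge(self, account_id: str, amount_cents: int) -> bool:
        return amount_cents > 0


class CheckoutService:
    """Depends only on the PaymentGateway contract, never on a
    concrete implementation."""

    def __init__(self, gateway: PaymentGateway):
        self._gateway = gateway

    def checkout(self, account_id: str, amount_cents: int) -> bool:
        return self._gateway.charge(account_id, amount_cents)
```

Because each team codes against interfaces like this, no single compromised developer account exposes the whole application, and components can be audited or replaced in isolation.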
Software is complex, and managing complexity is difficult. Securing software will ultimately rest on the collective shoulders of developers, vendors and users.