Dmitri Kaslov

Software supply chain integrity

There is a deluge of supply chain hacks that could leave industry professionals dispirited - from the high-profile SolarWinds hack to other lesser-known and lesser-reported cases. In all honesty, there is a case for the dejection some in the industry may feel; there is also a case for why attackers seem to be focusing more on this attack vector.

Enterprises do not exist in a vacuum. There is only so much vertical integration one can do before having to rely on some third-party software.

Our third-party security due diligence also hasn’t seemed to help in addressing these supply chain attacks. One can be almost certain SolarWinds completed countless “third-party assurance” forms/documents.

Robust Application Security (AppSec) programs are certainly one way to try to address this - but they’re not the whole picture. “Shifting security left”, as it’s fondly referred to, typically consists of performing security scans/steps early in the process and includes static, dynamic and dependency scans.

These sorts of programs have no doubt prevented a few nasties from materializing, but it’s doubtful they would have prevented a SolarWinds-type attack. This software assurance pipeline is great at detecting software vulnerabilities, but it is unlikely to detect a backdoor (an intentional, often covert vulnerability).

Almost everyone in security is aware of, and hopefully comfortable with, checksums. These are used to verify the integrity of downloaded software.

[Image: a SHA-256 checksum published alongside a software download]

That SHA-256 provides assurance that the executable you downloaded hasn’t been tampered with in any way. This cryptographic assurance is important and ensures people download the software the vendor intended, as opposed to some malware.

Problem is, these checksums only cryptographically verify what’s downloaded/distributed - the end result. It’s better than nothing, but more could be done.
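To make that concrete, here is a minimal Python sketch of the verification step; the file name and the published digest are placeholders, not real values.

```python
import hashlib

# Placeholders: the installer you downloaded and the SHA-256 the vendor
# publishes on its download page.
DOWNLOADED_FILE = "vendor-installer.exe"
PUBLISHED_SHA256 = "0123abcd..."  # copied from the vendor's site

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(DOWNLOADED_FILE) == PUBLISHED_SHA256:
    print("Checksum matches - the download is what the vendor published.")
else:
    print("Checksum mismatch - do not run this file.")
```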

Enter in-toto, a framework that does exactly that - it defines a cryptographically verifiable layout of the steps in a software supply chain that are carried out to write, test, package and distribute software. It is essentially a checksum for the entire software supply chain, as opposed to only the end result.

Courtesy of Torres-Arias et al. (2019)

A typical software supply chain is as portrayed above:

Code > Test > Build > Package > Distribute

What we have come to discover is that attackers can, and do, compromise any point of that software supply chain. If the compromise is at the build phase (as with the SolarWinds incident), a checksum of the package will only attest to the integrity of the final version. As you can see, that assurance isn’t complete - especially in today’s cyber-ridden world.
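To illustrate the gap, here is a deliberately toy Python sketch (entirely hypothetical, not how any real build system works): if a backdoor is injected at the build step, the published checksum is computed over the already-tampered artifact, so end-user verification still passes.

```python
import hashlib

def build(source: bytes, compromised: bool = False) -> bytes:
    """Toy 'build' step; in the compromised case a backdoor is slipped in."""
    artifact = source + b"\n# compiled output"
    if compromised:
        artifact += b"\n# covert backdoor injected at build time"
    return artifact

source = b"print('hello world')"
artifact = build(source, compromised=True)

# The vendor publishes the checksum of whatever the build produced...
published_sha256 = hashlib.sha256(artifact).hexdigest()

# ...so the end user's verification succeeds even though the build was tampered with.
assert hashlib.sha256(artifact).hexdigest() == published_sha256
print("Checksum verifies, yet the artifact contains the backdoor.")
```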

Courtesy of Torres-Arias et al. (2019)

Now, some solutions do exist for the various components of the software supply chain, as depicted above.

The aforementioned AppSec pipelines are often included in the CI/CD component.


In-toto provides a layout of each step of the supply chain, including all the signing keys in the chain, as well as the artifacts (inputs & outputs) of every step, all hash-chained and cryptographically signed. It’s like a recipe book that includes all the ingredients needed to make a special dish.

It allows an end user to have assurance, via the cryptographically signed hash chain, that the recipe is indeed prepared according to all the steps in the recipe book - from the shop to your table.
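The sketch below is my own simplified illustration of that idea - not the actual in-toto metadata format or API. Each step records the hashes of its inputs (materials) and outputs (products) and signs that record, so a verifier holding the layout and the public keys can check that every step was performed by the right party and that one step’s outputs are exactly the next step’s inputs.

```python
import hashlib
import json
# Requires the third-party 'cryptography' package for signing.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def record_step(name, materials, products, key: Ed25519PrivateKey) -> dict:
    """Produce a signed record ('link') for one supply chain step - simplified."""
    link = {
        "step": name,
        "materials": {path: sha256_hex(data) for path, data in materials.items()},
        "products": {path: sha256_hex(data) for path, data in products.items()},
    }
    payload = json.dumps(link, sort_keys=True).encode()
    return {"link": link, "signature": key.sign(payload).hex()}

# Hypothetical example: a build step turning source into a package.
build_key = Ed25519PrivateKey.generate()
source = {"app.py": b"print('hello')"}
package = {"app.tar.gz": b"...bytes of the built package..."}
build_link = record_step("build", materials=source, products=package, key=build_key)

# A verifier with the layout (expected steps and public keys) checks every
# signature and that the products of 'build' match the materials of the next step.
print(json.dumps(build_link["link"], indent=2))
```

The real in-toto layout and link metadata carry more than this (key IDs, expiry, expected commands, inspections), but the signed, hash-chained record per step is the core idea.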


I think this is a great step, which many of us should incorporate as our security programs mature. Supply chain attacks will be with us for some time to come, but using frameworks like in-toto adds an additional layer of integrity to our software supply chains.

For critical software deployed in enterprises, could the industry start demanding this sort of software supply chain integrity? I would hope so.

EULAs, contracts, etc. would have to change though, and that may not be easy to do. Adding this sort of integrity to the software supply chain will certainly not stop all attacks (nothing can), but it does add an additional assurance layer. I think it’s a great, much-needed step :-)

The in-toto project has a website that helps create a basic layout for a software project, specifying who does what and how everything fits together, so that clients can be sure the software was produced exactly as you intended; check it out here!

Update (28-April): it appears there is a framework proposal, akin to OWASP ASVS but for the supply chain - Supply-chain Levels for Software Artifacts. Looks interesting and I intend to either contribute or keep my eyes on it. Good times!

Dmitri Kaslov

Static analysis lessons from big tech co’s

“Software is eating the world” - Marc Andreessen

In 2018 and 2019, Google and Facebook published the lessons learnt from scaling static code analysis in their respective companies. Their articles were titled Lessons from Building Static Analysis Tools at Google and Scaling Static Analyses at Facebook, respectively.

These companies build software that is used by hundreds of millions of people daily, with code bases that run to hundreds of thousands, if not millions, of lines of code. The tools mentioned and used at these companies may not yet be open-sourced, but the lessons shared can be applicable to most companies building their own software.

Given my current role working with multiple engineering teams across multiple products, a few points from these Google and Facebook articles resurface in my mind every now and then. These aren’t ground-breaking ideas.

Below are a few points that have stuck with me, even months after last reading those articles:

Static analysis tools can be powerful

Static code scanning is great but can be annoying when it produces a high number of false positives. These false positives are often the justification for inaction by engineering teams - and rightly so.

Work on reducing false positives before findings are surfaced to engineering teams. For teams with well-configured and well-tested static analysis tools, these can prove to be powerful enablers of security.
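One simple pattern - a sketch of my own, assuming your scanner can emit findings as JSON with a severity and a stable fingerprint (the field and file names here are hypothetical) - is to keep a baseline of triaged false positives and only surface new, high-severity findings:

```python
import json

# Hypothetical finding format: {"fingerprint": ..., "severity": ..., "rule": ..., "file": ...}
with open("scan-results.json") as f:
    findings = json.load(f)

# Baseline: a JSON list of fingerprints already triaged as false positives.
with open("false-positive-baseline.json") as f:
    baseline = set(json.load(f))

surfaced = [
    finding for finding in findings
    if finding["severity"] in ("HIGH", "CRITICAL") and finding["fingerprint"] not in baseline
]

for finding in surfaced:
    print(f"{finding['file']}: [{finding['rule']}] {finding['severity']}")

# Fail the pipeline only when there is something worth an engineer's attention.
raise SystemExit(1 if surfaced else 0)
```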

[Image: Facebook’s Zoncolan static analysis tool]

Focus on developer teams and their tool/workflow integrations

This means that static scan dashboards and outputs must feed into engineering workflows, instead of requiring engineers to go out of their way to log in to your separate bug dashboard/system.
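As one illustration (my own sketch, not anything prescribed in the articles), findings can be pushed to where engineers already work - for example, as a comment on the pull request being scanned, here using GitHub’s issue-comment API via the requests library; the repository, PR number and token are placeholders:

```python
import os
import requests

# Placeholders: set these from your CI environment.
REPO = "example-org/example-app"
PR_NUMBER = 42
TOKEN = os.environ["GITHUB_TOKEN"]

summary = "Static analysis flagged new high-severity findings in this change - see the CI job for details."

# Post the summary directly on the pull request so engineers never leave their workflow.
resp = requests.post(
    f"https://api.github.com/repos/{REPO}/issues/{PR_NUMBER}/comments",
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={"body": summary},
    timeout=30,
)
resp.raise_for_status()
```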

Know & focus on bugs that matter

Not all bugs matter. Enable and empower engineers to address bugs that matter by using data from production to determine which those are (e.g. exploitable bugs from pentests and bug bounties). Not every bug is important enough to address as a matter of urgency.

Mental effort of context switching

If a developer is working on one problem and is confronted with a report on a separate problem, they must swap out the mental context of the first problem and swap in the second - and this can be time-consuming and disruptive. Try to minimize this.

Report often, report early

Report bugs to developer teams early to get the optimum fix rate. In one of the articles, the engineering teams, responding to a query from the security team, reported that issues found and flagged at compile time were deemed more important than issues found in already checked-in code.

The lessons learnt from big tech companies, which have more mature processes, tools and experience, make for good reading for almost everyone in security, as well as for developer/engineering teams themselves.
