Our previous case study showed that it is not straightforward to produce accurate SBOMs, nor to compare SBOMs coming from different tools. This somewhat surprising difficulty, as well as the interest in both our FOSDEM talk and our webinar on the topic, motivated us to continue down that road…
Unlike the previous blog post, however, this time we share a simple script that makes it easy to reproduce the results and, more importantly, to run the analysis on other projects, both open source and proprietary.
The script, some documentation, and the data sets used for computing the accuracy of four SBOM generators (CycloneDX Maven Plugin, Eclipse jbom, Syft, and Trivy) can be found in our GitHub repository: https://github.com/endorlabs/sbom-lab.
We would be grateful to receive your feedback, bug fixes and improvements, e.g., regarding the configuration options of the chosen open source SBOM generators (or the addition of new ones).
High-level SBOM requirements
At a high level, according to the DoC/NTIA guidelines on The Minimum Elements For a Software Bill of Materials, an SBOM “captures and presents information used to understand the components that make up software” (p.8). But one does not need to dig much deeper to see that the document leaves a lot of freedom regarding mandatory components and component identification.
For example, the document only requires that “all top-level dependencies must be listed”, which makes transitive dependencies an optional SBOM element (p.12). Regarding identification, a component should have a name, which is determined by the supplier. The “capability to note multiple names or aliases [...] should be supported if possible” (p.9), which acknowledges that components may have many different names. Universal identification schemes like CPEs and Package URLs “should be used if they exist” (p.10).
All this flexibility and careful phrasing presumably stems from the fact that the DoC/NTIA guidelines have to cover diverse software ecosystems, but it does not help SBOM consumers evaluate and compare SBOM generators that exploit this flexibility to different degrees.
It is much easier to define SBOM requirements in a narrower scope, such as Java/Maven-based software development. But even here, people may have different expectations as to which components should be included and which should not, e.g., test dependencies or shaded dependencies. And as shown in the previous blog post and highlighted by Xia et al., an SBOM is not static: it will contain different components depending on when it is generated.
To approximate a ground truth, which is important for evaluation and comparison, we take the following conservative approach in this experiment:
We use the ubiquitous Maven Dependency Plugin to determine all compile and runtime dependencies of a given Java/Maven project, represented as a set of PURLs. This subset of Maven dependencies is required at application runtime and hence must be monitored for known vulnerabilities, which is one of today's primary SBOM use cases.
When comparing Maven dependencies and SBOM components, we rely solely on Maven GAVs and Package URLs. The use of well-defined PURLs facilitates automation and is possible because all tools report them for the majority of components. Other optional fields like CPE or digest are ignored, because they are not consistently provided by all tools.
Correspondence between a Maven dependency and an SBOM component requires that the component have a PURL whose namespace, name, and version equal the groupId, artifactId, and version (GAV) of the Maven dependency. Potential Maven classifiers (uber, with-dependencies, etc.), Maven types (jar, war, etc.), PURL qualifiers, and PURL subpaths are ignored.
Any Maven dependency without a corresponding SBOM component is considered a false-negative of the respective SBOM, which negatively affects its accuracy. On the other hand, SBOM components that do not correspond to a Maven dependency can be either false-positives (e.g., wrongly included test dependencies) or true-positives (e.g., shaded Java archives or runtime components unknown at development time). Thus, the computation of SBOM precision requires manual review (which we skip this time).
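To make the correspondence rule concrete, here is a minimal sketch (not the repository's actual script) that converts one line of `mvn dependency:list` output into a Maven Package URL, dropping everything but group, artifact, and version:

```shell
# Minimal sketch, not the actual sbom-lab script: turn one line of
# `mvn dependency:list` output (groupId:artifactId:type:version:scope)
# into a Maven Package URL. Lines carrying a classifier (six fields)
# would need extra handling, which is omitted here for brevity.
gav_to_purl() {
  old_ifs=$IFS
  IFS=:
  set -- $1                 # $1=groupId $2=artifactId $3=type $4=version $5=scope
  IFS=$old_ifs
  printf 'pkg:maven/%s/%s@%s\n' "$1" "$2" "$4"
}

gav_to_purl "org.springframework:spring-core:jar:5.3.22:compile"
# -> pkg:maven/org.springframework/spring-core@5.3.22
```

Since type, scope, qualifiers, and subpaths are discarded, two artifacts that differ only in those fields map to the same PURL, which is exactly the leniency described above.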
The set of “expected” compile/runtime dependencies of a Maven project is compared to the SBOMs generated at different points in time (lifecycle stages) by four open source SBOM generators according to the following matrix:
The invocation of all tools happens through a shell script, which produces an SBOM in CycloneDX format for every tool and stage. The script extracts all component PURLs contained in those SBOMs, writes them to a simple text file, compares them against the expected PURLs identified through the Maven Dependency Plugin, and prints several metrics to the console (false-negatives, true-positives, and recall).
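The comparison step can be sketched along the following lines; this is a simplified stand-in for the actual script, and the file names are illustrative:

```shell
# Simplified stand-in for the comparison logic (not the actual script):
# given two files with one PURL per line, count false-negatives and recall.
compare_purls() {
  expected_sorted=$(mktemp)
  sbom_sorted=$(mktemp)
  sort -u "$1" > "$expected_sorted"   # PURLs from the Maven Dependency Plugin
  sort -u "$2" > "$sbom_sorted"       # PURLs extracted from the generated SBOM
  fn=$(comm -23 "$expected_sorted" "$sbom_sorted" | wc -l)  # expected but missing
  tp=$(comm -12 "$expected_sorted" "$sbom_sorted" | wc -l)  # present in both
  total=$(wc -l < "$expected_sorted")
  rm -f "$expected_sorted" "$sbom_sorted"
  echo "false-negatives: $fn, recall: $tp/$total"
}
```

Because both inputs are reduced to sorted, deduplicated PURL sets, the comparison is order-independent and works identically for every tool and lifecycle stage.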
Spring PetClinic Sample Application
The sample application used for this case study is version 2.6.2 of the REST version of the Spring PetClinic Sample Application (commit ee236ca). It is a Spring Boot application for which a Docker image has been published on Docker Hub.
As the application is packaged as a self-contained executable JAR, it is also possible to create a runtime SBOM, which can be compared with the SBOMs produced at other stages of the development lifecycle.
The dependency tree includes 99 compile, 6 runtime, and 26 test dependencies. Only the former two dependency scopes are considered; thus, we expect to find the PURLs of 105 compile/runtime dependencies in the generated SBOMs.
The following table reports important metrics for all tool invocations with respect to the identification of the 105 compile/runtime Maven dependencies at different stages of the development lifecycle (as explained above). Each cell shows the number of false-negatives, i.e., components missing from the SBOM, and, in brackets, the ratio of true-positive SBOM components to all expected components (sometimes called recall).
Important: Remember that the computation is done using PURLs only, in order to facilitate the comparison. If we considered other fields like the name, CPE, or digests, the number of false-negatives would likely be below what is reported in the table.
Keeping this in mind, here are a couple of observations regarding the individual SBOMs and tools.
CycloneDX Maven Plugin correctly identifies all components, which is likely due to the fact that it integrates into the Maven build process (as opposed to identifying components “from the outside”). 20 components have the CycloneDX scope “optional”, due to a dependency analysis that aims at identifying declared but unused components.
Eclipse jbom produces the same result when run on a Java archive and when attached to the Java runtime process. 61 out of 95 SBOM components have no PURL, which negatively impacts the accuracy as computed in our study. The missing PURLs are presumably due to problems identifying the group identifier. jbom does include a digest and a link to Maven Central, which would allow one to find the complete GAV and PURL manually or with additional scripting (which we deliberately decided against).
Syft identifies few components when run on a fresh clone, probably because the POM file is not resolved. Transitive dependencies are missing, as are version identifiers declared in the dependencyManagement section of the parent POM. The performance of Syft when run on the JAR and on the image is comparable. Many false-negatives are due to problems identifying the group. While jbom omits the PURL in such cases, Syft seems to reuse the name as groupId, e.g., “pkg:email@example.com”. Just like jbom, Syft also includes the digest, which would allow SBOM consumers to look up the complete GAV and PURL, at least for artifacts published on Maven Central.
Trivy has a relatively low number of false-negatives when run on the Git directory and the Docker image. Example false-negatives are “org.apache.logging.log4j/log4j-to-slf4j” (where it instead determined a wrong version “23”), or “pkg:firstname.lastname@example.org” (where it determined a wrong groupId “org.springframework.data.build”, which does not exist on Maven Central). The single false-negative when run on the Docker image is “pkg:email@example.com” (where it determined a wrong groupId “org.glassfish.external”, which does exist on Maven Central).
The performance of Trivy in correctly identifying group identifiers is probably due to a digest lookup on Maven Central (online by default, or with the help of a dump for air-gapped environments), even though the digests themselves are not included in the SBOM.
The numbers of the above table can be reproduced using the txt files in the GitHub repository, and they are also contained in the console output of the script execution, for example:
You can of course use the script to reproduce the above results or investigate SBOMs generated for another Java/Maven project. The script should work out of the box for other standard, single-module Maven projects.
For example, in order to scan the OWASP WebGoat project, another stand-alone Spring Boot application, it suffices to set the following variables:
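For illustration only, such a configuration could look as follows; the variable names here are hypothetical, so check the script in the sbom-lab repository for the ones it actually expects:

```shell
# Hypothetical variable names for illustration only; the actual script in
# the sbom-lab repository may use different names and may require more
# settings (e.g., a Docker image reference for the runtime stage).
GIT_URL="https://github.com/WebGoat/WebGoat.git"
GIT_TAG="v2023.4"   # hypothetical tag; pick the release you want to analyze
```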
Anything beyond that requires adapting the script.
To this end, we deliberately chose to include all logic in a single bash script, without requiring any other programs or libraries. This not only makes it very transparent how the data sets are produced, but should also allow for easy modification and extension, whether to analyze other projects or to add other SBOM generators.
Establishing ground truth is very difficult, and doing it wrongly can render any evaluation and comparison useless.
For the scope of this case study, we decided to focus on an important but limited set of Maven dependencies, namely compile and runtime dependencies that should be present throughout the whole lifecycle, from development to runtime.
When evaluating SBOMs, we focused on false-negatives and did not manually review potential false-positives. As mentioned above, some SBOM components cannot be known at development time when looking at the Maven project.
Moreover, we use PURLs as the only means to establish correspondence between Maven dependencies and SBOM components. Using other elements like the simple name, the digest, or CPEs would probably allow establishing correspondence in more cases. However, due to the partial unavailability of this information as well as name ambiguities, this approach would have hampered automation, which is required to evaluate tools on a larger number of projects.
In closing, we hope that the script shared in our GitHub repository helps others to systematically evaluate and compare SBOMs. We kept the script very simple to maintain transparency and allow for modifications.