Benchmarking SBOMit against SBOM generators #45

@absol27

Description

@mlieberman85 is working on an SBOM Quality Report: 10 SBOM generators tested against 22 fixtures covering 10 ecosystems and 15 use-case scenarios (link). We want to run those use cases against SBOMit and understand where we stand.

Key questions:

  • Where does SBOMit miss packages that static tools find? (potential gaps in our resolvers)
  • Where does SBOMit report fewer packages than static tools, and is that actually correct? (e.g. unused deps, shadowed packages)
  • How do our PURLs compare against ground truth and other tools?
  • Can we handle the scenarios no tool currently passes (multi-stage builds, native linkage) any better, given that we have runtime trace data?
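
For the first two questions, the comparison boils down to set differences over package identifiers. A minimal sketch of that diff, assuming CycloneDX-style JSON documents with a top-level `components` list whose entries carry a `purl` field (the helper names here are hypothetical, not part of SBOMit or any generator in the report):

```python
def purl_set(sbom: dict) -> set[str]:
    """Collect the PURLs declared in a CycloneDX-style SBOM dict."""
    return {c["purl"] for c in sbom.get("components", []) if "purl" in c}


def diff_purls(sbomit_doc: dict, static_doc: dict) -> tuple[set[str], set[str]]:
    """Return (missed, extra):
    - missed: PURLs a static tool found but SBOMit did not (possible resolver gaps)
    - extra:  PURLs only SBOMit reported (or, reversed, static-tool packages that
              runtime tracing may have correctly excluded as unused/shadowed)
    """
    ours, theirs = purl_set(sbomit_doc), purl_set(static_doc)
    return theirs - ours, ours - theirs


# Illustrative data, not taken from the benchmark fixtures:
sbomit_out = {"components": [{"purl": "pkg:pypi/requests@2.31.0"}]}
static_out = {
    "components": [
        {"purl": "pkg:pypi/requests@2.31.0"},
        {"purl": "pkg:pypi/unused-dep@1.0.0"},
    ]
}
missed, extra = diff_purls(sbomit_out, static_out)
print(missed)  # {'pkg:pypi/unused-dep@1.0.0'}
print(extra)   # set()
```

Each PURL in `missed` would then need manual triage: resolver gap on our side, or a dependency the static tool over-reports.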

Going through the benchmark scenarios, there are several cases we clearly don't handle today; these need to be tracked as separate issues.

@Marc-cn has started working on this, focusing on the use cases we currently handle (not sure whether these are exactly the same projects as in the report).

Metadata

Labels: discussion (Long-term tasks requiring community discussion)
