<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Aspect Build Blog - Insights on Development and Bazel Best Practices]]></title><description><![CDATA[Stay updated with the Aspect Build blog for the latest insights on development workflows, Bazel best practices, and tips to enhance project efficiency.]]></description><link>https://blog.aspect.build</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1714411440299/_KMFcwkiz.png</url><title>Aspect Build Blog - Insights on Development and Bazel Best Practices</title><link>https://blog.aspect.build</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 11:54:49 GMT</lastBuildDate><atom:link href="https://blog.aspect.build/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[rules_js 3.0 - out with the old (and default to the new)]]></title><description><![CDATA[rules_js 3.0 is now available. This release simplifies the internals by dropping support for older Bazel, pnpm, and Node versions while keeping the public rules_js APIs stable, prioritizing maintainab]]></description><link>https://blog.aspect.build/rules-js-3</link><guid isPermaLink="true">https://blog.aspect.build/rules-js-3</guid><category><![CDATA[bazel]]></category><category><![CDATA[turborepo]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[build]]></category><dc:creator><![CDATA[Jason Bedard]]></dc:creator><pubDate>Mon, 09 Feb 2026 16:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/618c0607f991b025366e18c6/f013f19a-35db-4dda-a8aa-98f2fd38bcc1.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><code>rules_js</code> 3.0 is now available. 
This release simplifies the internals by dropping support for older Bazel, pnpm, and Node versions while keeping the public <code>rules_js</code> APIs stable, prioritizing maintainability, performance, and correctness over new surface features.</p>
<p>We've also launched new documentation with this release, at <a href="https://docs.aspect.build/bazel/javascript">https://docs.aspect.build/bazel/javascript</a>.</p>
<p>If you're already on Bazel 7+, loading <code>rules_js</code> via <code>MODULE.bazel</code>, and using pnpm 9+, the upgrade should be straightforward.</p>
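<p>If you are starting fresh, a minimal <code>MODULE.bazel</code> setup looks roughly like this (the version number is illustrative; use the latest 3.x release):</p>
<pre><code class="lang-python"># Sketch: rules_js via bzlmod, following the documented pattern
bazel_dep(name = "aspect_rules_js", version = "3.0.0")

npm = use_extension("@aspect_rules_js//npm:extensions.bzl", "npm")
npm.npm_translate_lock(
    name = "npm",
    pnpm_lock = "//:pnpm-lock.yaml",
)
use_repo(npm, "npm")
</code></pre>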
<h2>What’s Changed?</h2>
<ul>
<li><p>🚫 Remove Bazel 6 support</p>
</li>
<li><p>🚫 Remove pnpm 8 support</p>
</li>
<li><p>🚫 Remove WORKSPACE support</p>
</li>
<li><p>🧹 Remove some rarely used APIs</p>
</li>
<li><p>✅ Enable enhancements that were previously opt-in</p>
</li>
</ul>
<h2>Ecosystem Compatibility</h2>
<p>Related rulesets including <code>rules_ts</code>, <code>rules_swc</code>, <code>rules_jest</code>, and <code>rules_webpack</code> have been tested with <code>rules_js</code> 3.0 and are compatible with their current releases.</p>
<p>The dependency on <code>aspect_bazel_lib</code> 2.x has been replaced with <code>bazel_lib</code> 3.0 (<a href="https://github.com/aspect-build/rules_js/pull/2567">2567</a>). With this change, <code>rules_js</code> ensures any <code>aspect_bazel_lib</code> usage remains forward-compatible with <code>bazel_lib</code>, as long as your repository does not override the <code>aspect_bazel_lib</code> module.</p>
<hr />
<p>The most significant change is the removal of pnpm 8 and WORKSPACE support.</p>
<p>Early versions of <code>rules_js</code> were designed around:</p>
<ul>
<li><p>The pnpm lockfile format at the time</p>
</li>
<li><p>Limitations of Bazel repository rules in WORKSPACE mode</p>
</li>
</ul>
<p>To avoid diverging code paths:</p>
<ul>
<li><p>When pnpm updated its lockfile format, <code>rules_js</code> downgraded it internally.</p>
</li>
<li><p>When Bazel MODULEs (bzlmod) were introduced, <code>rules_js</code> delegated much of the logic back to WORKSPACE-based implementations.</p>
</li>
</ul>
<p>That approach kept behavior consistent, but added complexity and limited adoption of newer pnpm and bzlmod features.</p>
<p>With pnpm 8 and WORKSPACE removed:</p>
<ul>
<li><p>The core architecture is simpler</p>
</li>
<li><p>Legacy compatibility layers are gone</p>
</li>
<li><p>A single modern code path replaces multiple branches</p>
</li>
<li><p>Bugs such as the pnpm lockfile being parsed twice under bzlmod (<a href="https://github.com/aspect-build/rules_js/pull/2480">2480</a>) are eliminated</p>
</li>
</ul>
<hr />
<h2>Notable CHANGELOG</h2>
<p><strong>📦 Platform-specific optional npm dependencies (</strong><a href="https://github.com/aspect-build/rules_js/pull/2538"><strong>2538</strong></a><strong>)</strong></p>
<p>That annoying <code>@esbuild/android-arm64</code> you always see being fetched and know you <em>really</em> don't need will no longer be fetched. Other common examples are the platform-specific <code>@swc/*</code>, <code>@rollup/*</code>, or upcoming <code>@typescript/native-preview-*</code> packages.</p>
<p><strong>🔗</strong> <code>proto_library</code> <strong>support in JS deps (</strong><a href="https://github.com/aspect-build/rules_js/pull/2721"><strong>2721</strong></a><strong>)</strong></p>
<p>Experimental support for directly depending on <code>proto_library</code> targets in <code>js_library(deps)</code> or <code>ts_project(deps)</code>. You register a <code>protoc</code> plugin toolchain for your choice of codegen tooling.</p>
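<p>A sketch of what this enables (target and file names here are hypothetical, and you still need to register a <code>protoc</code> plugin toolchain for your chosen codegen tool):</p>
<pre><code class="lang-python">proto_library(
    name = "greeter_proto",
    srcs = ["greeter.proto"],
)

ts_project(
    name = "client",
    srcs = ["client.ts"],
    # Experimental: depend on the proto_library directly; stubs are
    # generated by the registered protoc plugin toolchain.
    deps = [":greeter_proto"],
)
</code></pre>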
<p><strong>🗂 Package exclusion presets + default exclusion list (</strong><a href="https://github.com/aspect-build/rules_js/pull/2652"><strong>2652</strong></a><strong>)</strong></p>
<p>That 10 MB <code>CHANGELOG.md</code> you always notice? Gone!</p>
<p>The <code>npm_exclude_package_contents</code> API now offers multiple presets instead of the previous single, opt-in <code>use_defaults = True</code>. That option has been replaced with the <code>"yarn_autoclean"</code> preset, which mimics the <code>yarn autoclean</code> command. A new <code>"basic"</code> preset is available and enabled by default; it is less aggressive than the yarn list while still excluding common unnecessary files often shipped in npm packages.</p>
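<p>As a sketch, opting into the more aggressive preset might look like this (exactly where the attribute is set depends on how your npm dependencies are configured; treat this placement as an assumption and consult the docs):</p>
<pre><code class="lang-python">npm.npm_translate_lock(
    name = "npm",
    pnpm_lock = "//:pnpm-lock.yaml",
    # Assumed placement: the "basic" preset is the default;
    # "yarn_autoclean" mimics `yarn autoclean` and excludes more.
    npm_exclude_package_contents = ["yarn_autoclean"],
)
</code></pre>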
<p><strong>⚡ Better Bazel 9 compatibility and performance</strong></p>
<p>Bazel 9 has new features such as the <a href="https://bazel.build/rules/lib/builtins/module_ctx#facts">facts API</a>, which is now used to make things like pnpm downloads reproducible even if you don't manually provide SHAs (<a href="https://github.com/aspect-build/rules_js/pull/2698">2698</a>).</p>
<p>Path-mapping support has also been expanded (<a href="https://github.com/aspect-build/rules_js/pull/2575">2575</a>).</p>
<p><strong>🛣 Initial support for Bazel path mapping (</strong><a href="https://github.com/aspect-build/rules_js/pull/2575"><strong>2575</strong></a><strong>)</strong></p>
<p>Bazel path mapping allows some actions to be cached and reused across compilation modes such as <code>dbg</code> and <code>opt</code>.</p>
<p><strong>🟢 Default Node version bumped to v22 (</strong><a href="https://github.com/aspect-build/rules_js/pull/2649"><strong>2649</strong></a><strong>)</strong></p>
<p>Along with upgrading the minimum <code>rules_nodejs</code> version, the default Node version is now updated to the v22 LTS.</p>
<hr />
<h2>What's Next</h2>
<p><code>rules_js</code> 3.0 is launched and ready for you to upgrade. Check out the <a href="https://docs.aspect.build/bazel/javascript">updated docs</a> or try a new project using the <a href="https://github.com/bazel-starters/js">https://github.com/bazel-starters/js</a> repo which has all the latest versions.</p>
<p>Having problems? Aspect offers a paid support plan with a dedicated Slack channel for your team under an SLA.</p>
]]></content:encoded></item><item><title><![CDATA[Aspect's rules_lint Reaches 2.0]]></title><description><![CDATA[You’ll find the project and examples at https://github.com/aspect-build/rules_lint.
We’ve been incredibly gratified by the more than 60 open-source contributors who have improved the ruleset, and everyone who has filed or answered issues. Thank you! ...]]></description><link>https://blog.aspect.build/rules-lint-2</link><guid isPermaLink="true">https://blog.aspect.build/rules-lint-2</guid><category><![CDATA[bazel]]></category><category><![CDATA[Linter]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Mon, 26 Jan 2026 20:11:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/gLZUc5hVFDE/upload/7050af1ab76ef0194195f7a6126d1783.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You’ll find the project and examples at <a target="_blank" href="https://github.com/aspect-build/rules_lint">https://github.com/aspect-build/rules_lint</a>.</p>
<p>We’ve been incredibly gratified by the <a target="_blank" href="https://github.com/aspect-build/rules_lint/graphs/contributors">more than 60 open-source contributors</a> who have improved the ruleset, and everyone who has filed or answered issues. Thank you! We support 30 languages now and continue to grow.</p>
<h2 id="heading-highlights-of-aspect-ruleslint-release-version-20">Highlights of Aspect rules_lint release version 2.0</h2>
<ul>
<li><p>The Aspect Extension Language (AXL) is now used to provide the <code>lint</code> and <code>format</code> tasks on your command-line. Just install the <a target="_blank" href="https://docs.aspect.build/cli">Aspect CLI</a> and run <code>aspect lint</code>. This is wired in the <code>MODULE.aspect</code> file that sits next to <code>MODULE.bazel</code>.</p>
</li>
<li><p>Most of the <a target="_blank" href="https://github.com/bazel-starters">bazel-starters</a> demonstrate rules_lint 2.0, and give an easy playground to try it out.</p>
</li>
<li><p>The rules_lint <a target="_blank" href="https://github.com/aspect-build/rules_lint/tree/main/examples">examples folder</a> is now broken out into standalone Bazel modules per-language, making it much easier to find the bits you need for your own repo.</p>
</li>
<li><p>Pythonistas will love the new <code>ty</code> linter support. Ty is a fast Python type-checker written in Rust from <a target="_blank" href="https://astral.sh">Astral</a> - and we’re working with them to add incremental type-check support to avoid quadratic runtime. Thank you to <a target="_blank" href="https://github.com/whoahbot">https://github.com/whoahbot</a> for the contribution!</p>
</li>
<li><p>We’ve added Rust’s <code>clippy</code> tool so you can get auto-fixes, such as removing unused imports, applied automatically during code review. We expect <code>rules_rust</code> to decrease scope, possibly removing their Clippy support. Thank you to <a target="_blank" href="https://github.com/blorente">https://github.com/blorente</a> for adding this support!</p>
</li>
<li><p>Full Bazel 9 support, including Bzlmod <code>module_extension</code> support for fetching all tools, obviating the need for any <code>WORKSPACE</code> file or <code>http_archive</code> rules.</p>
</li>
</ul>
<h2 id="heading-about-ruleslint">About rules_lint</h2>
<p><code>aspect-build/rules_lint</code> is a Bazel ruleset that makes <strong>linting</strong> and <strong>formatting</strong> “first-class” in Bazel — so you can run common static analysis tools via Bazel without wrapping your existing BUILD targets or changing the rulesets you already use.</p>
<p>As before, rules_lint v2 is really two rulesets in one:</p>
<h3 id="heading-formatting">Formatting</h3>
<ul>
<li><p>Typically <em>one formatter per language</em>; deterministic output; “just apply the changes.”</p>
</li>
<li><p>Formatting tools run as side-effects outside of Bazel actions, and can be wired up with a <code>git</code> pre-commit hook.</p>
</li>
<li><p>That means it can <strong>run on files not modeled in Bazel’s dependency graph</strong>: formatting runs on the file tree (helpful for scripts, docs, etc.).</p>
</li>
</ul>
<p>Get started at <a target="_blank" href="https://github.com/aspect-build/rules_lint/blob/main/docs/formatting.md">https://github.com/aspect-build/rules_lint/blob/main/docs/formatting.md</a></p>
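<p>As a sketch, a pre-commit hook can format just the staged files (this assumes you have wired up a <code>//:format</code> target as shown in the formatting docs):</p>
<pre><code class="lang-bash">#!/usr/bin/env bash
# .git/hooks/pre-commit: format only the files staged for this commit
set -euo pipefail
files=$(git diff --cached --name-only --diff-filter=ACMR)
[ -z "$files" ] || bazel run //:format -- $files
</code></pre>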
<h3 id="heading-linting">Linting</h3>
<ul>
<li><p>Can run <em>multiple linters per language</em>; may propose fixes; results can be shown as terminal output, failing tests, or code-review feedback.</p>
</li>
<li><p><strong>No BUILD-file clutter</strong>: you lint existing <code>*_library</code> targets rather than adding special wrapper macros.</p>
</li>
<li><p><strong>Incremental &amp; cache-friendly</strong>: lint runs as Bazel actions (works with remote execution/cache).</p>
</li>
<li><p><strong>Practical for legacy repos</strong>: supports “lint only what changed” workflows so you can start without fixing all historic issues. We refer to the “Water Leak Principle” which says to stop the leak before mopping the spill.</p>
</li>
</ul>
<p>Get started at <a target="_blank" href="https://github.com/aspect-build/rules_lint/blob/main/docs/linting.md">https://github.com/aspect-build/rules_lint/blob/main/docs/linting.md</a></p>
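<p>With the Aspect CLI installed, running the linters is a single command (the target pattern here is illustrative):</p>
<pre><code class="lang-bash"># Lint the whole repo, or narrow the pattern to just what changed
aspect lint //...
</code></pre>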
<h2 id="heading-wiring-into-code-review">Wiring into code review</h2>
<p>Aspect’s platform includes Marvin, our mascot and helpful Bazel bot. Marvin comments on your pull requests, and includes lint results displayed as GitHub Checks. Even better, when the linter tool has a <code>--fix</code> mode, Marvin will provide the Suggested Fixes in the GitHub code review so you can just accept the improvements to your code.</p>
<h2 id="heading-next-steps">Next steps</h2>
<p>To learn more about our <a target="_blank" href="https://www.aspect.build/platform">Aspect Workflows developer productivity platform</a> and <a target="_blank" href="https://www.aspect.build/services">expert Bazel support services</a>, talk to us on Bazel Slack or email us at hello@aspect.build.</p>
]]></content:encoded></item><item><title><![CDATA[Bazel 9 Upstream Prebuilt Protobuf]]></title><description><![CDATA[Bazel 9.0 is a major long-term support (LTS) release. It contains new features and backwards incompatible changes.

Bzlmod is now always enabled, and all WORKSPACE logic has been removed from Bazel (#26131). The Bzlmod migration tool is available.

A...]]></description><link>https://blog.aspect.build/bazel-9-protobuf</link><guid isPermaLink="true">https://blog.aspect.build/bazel-9-protobuf</guid><category><![CDATA[bazel]]></category><category><![CDATA[protocol-buffers]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Fri, 16 Jan 2026 13:44:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768408452933/21927abc-3ee1-4142-8840-76a2b15dc1a0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Bazel 9.0 is a major long-term support (LTS) release. It contains new features and backwards incompatible changes.</p>
<ul>
<li><p>Bzlmod is now always enabled, and all WORKSPACE logic has been removed from Bazel (<a target="_blank" href="https://github.com/bazelbuild/bazel/issues/26131">#26131</a>). The <a target="_blank" href="https://bazel.build/external/migration_tool">Bzlmod migration tool is available</a>.</p>
</li>
<li><p>All C++-related <a target="_blank" href="https://github.com/bazelbuild/bazel/issues/26131">rules</a> are <a target="_blank" href="https://bazel.build/external/migration_tool">removed</a> from Bazel and must be loaded from @rules_cc, as part of the <a target="_blank" href="https://github.com/bazelbuild/bazel/issues/23043">Starlark effort</a>.</p>
</li>
<li><p><a target="_blank" href="https://github.com/bazelbuild/bazel/issues/23043">Bazel 9</a> ships with the latest protobuf module (version 33.4), which includes support for a prebuilt protobuf compiler.</p>
</li>
</ul>
<p>This prebuilt protobuf compiler follows up on our blog article <a target="_blank" href="https://blog.aspect.build/never-compile-protoc-again">https://blog.aspect.build/never-compile-protoc-again</a> which ended with the promise “We’re hoping to upstream our <code>toolchains_protoc</code> to the protobuf repository, so that the default Bazel experience will be fast.”</p>
<p>This is now (nearly) true as of the <a target="_blank" href="https://github.com/protocolbuffers/protobuf/releases/tag/v33.4">https://github.com/protocolbuffers/protobuf/releases/tag/v33.4</a> release.</p>
<p>We thank the <a target="_blank" href="https://github.com/protocolbuffers/protobuf">Google Protobuf</a> team for sponsoring this work and reviewing the implementation, including changes to the release process.</p>
<h2 id="heading-enabling-the-protobufs-toolchains-feature">Enabling the Protobufs toolchains feature</h2>
<p>Bazel 9 flips the <a target="_blank" href="https://registry.build/flag/bazel/?flag=incompatible_enable_proto_toolchain_resolution"><code>--incompatible_enable_proto_toolchain_resolution</code></a> flag to true. This means Bazel is responsible for resolving the symbol <code>@protobuf//bazel/private:proto_toolchain_type</code> to a “concrete” toolchain that provides the right <code>protoc</code> binary for your execution platform. The same applies to the <code>toolchain_type</code> for each language's stub generator.</p>
<p>If you’re on Bazel 9, there’s nothing to do, but earlier Bazel versions require you to set the flag yourself.</p>
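<p>On those earlier versions, add the flag to your <code>.bazelrc</code>:</p>
<pre><code class="lang-bash">common --incompatible_enable_proto_toolchain_resolution
</code></pre>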
<h2 id="heading-opting-in">Opting-in</h2>
<p>Until <a target="_blank" href="https://github.com/protocolbuffers/protobuf/pull/25313">https://github.com/protocolbuffers/protobuf/pull/25313</a> lands and is released, you need to opt-in to using the <code>protoc</code> binaries from <a target="_blank" href="https://github.com/protocolbuffers/protobuf/releases">https://github.com/protocolbuffers/protobuf/releases</a>. Add to your <code>.bazelrc</code>:</p>
<pre><code class="lang-bash">common --@protobuf//bazel/toolchains:prefer_prebuilt_protoc
</code></pre>
<blockquote>
<p>Note: if you have a <code>repo_name = "com_google_protobuf"</code>, you’ll have to adapt the <code>@protobuf</code> name accordingly.</p>
</blockquote>
<h2 id="heading-enforcing">Enforcing</h2>
<p>You can enforce that no rules do the “Wrong Thing” of directly referencing the <code>cc_binary</code> target <code>@protobuf//:protoc</code> by following our snippets: <a target="_blank" href="https://github.com/aspect-build/toolchains_protoc#ensure-protobuf-and-grpc-never-built">https://github.com/aspect-build/toolchains_protoc#ensure-protobuf-and-grpc-never-built</a></p>
<p>If this fails to build, it means that there’s a bug you should find or report.</p>
<p>The <a target="_blank" href="https://github.com/aspect-build/toolchains_protoc">https://github.com/aspect-build/toolchains_protoc</a> and <a target="_blank" href="https://github.com/bazelbuild/rules_proto">https://github.com/bazelbuild/rules_proto</a> repositories are now archived, completing the deprecation period.</p>
<h2 id="heading-next">Next</h2>
<p>Need help migrating from WORKSPACE to Bzlmod or other steps to move to Bazel 9? View <a target="_blank" href="http://www.aspect.build/services">Aspect Build services</a> and email us at hello@aspect.build or ping us on Bazel Slack. We’re happy to support you.</p>
]]></content:encoded></item><item><title><![CDATA[Bazel for SONiC: What We've Learned and Contributed]]></title><description><![CDATA[As the Linux Foundation’s SONiC Foundation continues to drive forward an open, standards-based network operating system for the industry, one challenge has become increasingly visible across the community: how to build, test, and release SONiC compon...]]></description><link>https://blog.aspect.build/bazel-for-sonic</link><guid isPermaLink="true">https://blog.aspect.build/bazel-for-sonic</guid><category><![CDATA[SONiC Foundation]]></category><category><![CDATA[bazel]]></category><category><![CDATA[Aspect Build]]></category><dc:creator><![CDATA[Şahin Yort]]></dc:creator><pubDate>Mon, 12 Jan 2026 13:00:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767979042661/9ef51b32-4766-4f81-9857-37be31fca72b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As the <a target="_blank" href="https://www.linuxfoundation.org/">Linux Foundation</a>’s <a target="_blank" href="https://sonicfoundation.dev/"><strong>SONiC Foundation</strong></a> continues to drive forward an open, standards-based network operating system for the industry, one challenge has become increasingly visible across the community: <strong>how to build, test, and release SONiC components with speed, consistency, and confidence</strong>. The project has grown in complexity—diverse hardware platforms, multiple languages and toolchains, distributed teams, and the rising expectation that networking software behave like modern cloud software.</p>
<p>This is exactly where <strong>Bazel</strong>, the open-source, high-performance build and test system originally created at Google, can play a transformative role. Bazel offers <strong>SONiC</strong> (<strong>S</strong>oftware for <strong>O</strong>pen <strong>N</strong>etworking <strong>i</strong>n the <strong>C</strong>loud) contributors, adopters and vendors the reliability, scalability, and repeatability needed to sustain a world-class network OS at global scale.</p>
<p>Below, we explore the case for adopting Bazel across SONiC Foundation projects and how it can meaningfully improve developer productivity, platform compatibility, security posture, and release engineering.</p>
<hr />
<h2 id="heading-1-reproducible-builds-are-no-longer-optional">1. Reproducible Builds Are No Longer Optional</h2>
<p>SONiC today supports an expanding matrix of devices, ASICs, and software packages. That's its strength—but also its build challenge. Makefiles, ad-hoc scripts, and hand-rolled toolchains multiply variability and increase onboarding time, especially for new vendors.</p>
<p>Bazel provides hermetic, reproducible builds, ensuring that:</p>
<ul>
<li><p>The same inputs always produce the same outputs</p>
</li>
<li><p>Dependencies are fetched deterministically and cached</p>
</li>
<li><p>Toolchains and container images are versioned and immutable</p>
</li>
<li><p>Builds can run anywhere—from a developer's laptop to CI runners to cloud build farms</p>
</li>
</ul>
<p>This consistency dramatically reduces "it works on my machine" issues. For SONiC's multi-vendor ecosystem, it means any contributor can build and test with confidence against a shared standard.</p>
<hr />
<h2 id="heading-2-a-universal-build-system-for-a-polyglot-codebase">2. A Universal Build System for a Polyglot Codebase</h2>
<p>SONiC involves C++, Python, Go, Rust, Docker, kernel modules, switch-vendor SDKs, and more. Bazel excels here:</p>
<ul>
<li><p>First-class multi-language support</p>
</li>
<li><p>Extensibility through Starlark rules</p>
</li>
<li><p>Deterministic container and image builds</p>
</li>
<li><p>Cross-compilation support for ARM, x86, PowerPC, and ASIC-specific toolchains</p>
</li>
</ul>
<p>Instead of maintaining fragmented build logic across repositories, SONiC developers can standardize on a single system that handles everything from low-level DPDK components to high-level services.</p>
<hr />
<h2 id="heading-3-cloud-native-cicd-that-scales-with-the-sonic-community">3. Cloud-Native CI/CD That Scales With the SONiC Community</h2>
<p>Bazel was designed for massive scale and parallelism. For SONiC maintainers, this directly translates into:</p>
<ul>
<li><p>Faster builds through fine-grained caching</p>
</li>
<li><p>Incremental rebuilds that recompile only what changed</p>
</li>
<li><p>Remote execution to distribute builds across clusters</p>
</li>
<li><p>Remote caching to avoid duplicated work between developers and CI jobs</p>
</li>
<li><p>Unified pipelines across all repos and languages</p>
</li>
</ul>
<p>As the community continues to expand, this infrastructure lets new contributors ramp up quickly and ensures that CI is fast, stable, and cost-efficient.</p>
<hr />
<h2 id="heading-4-stronger-security-and-compliance">4. Stronger Security and Compliance</h2>
<p>Networking software is increasingly subject to regulatory scrutiny. SONiC vendors must prove the provenance, integrity, and patch level of every component.</p>
<p>Bazel strengthens security by:</p>
<ul>
<li><p>Locking down external dependencies</p>
</li>
<li><p>Ensuring reproducibility (critical for supply-chain audits)</p>
</li>
<li><p>Providing deterministic SBOM generation</p>
</li>
<li><p>Enforcing hermetic builds that eliminate environment drift</p>
</li>
<li><p>Integrating easily with tools for SLSA, sigstore, and in-toto</p>
</li>
</ul>
<p>For organizations integrating SONiC into large-scale infrastructure, this reduces risk and simplifies compliance workflows.</p>
<hr />
<h2 id="heading-5-better-collaboration-between-vendors-and-the-community">5. Better Collaboration Between Vendors and the Community</h2>
<p>One of SONiC's greatest strengths is its vendor-neutral ecosystem. Bazel reinforces this mission:</p>
<ul>
<li><p>All vendors compile with the same rules and toolchains</p>
</li>
<li><p>Contributors can share Starlark extensions for SDKs or hardware targets</p>
</li>
<li><p>Build logic becomes collaborative, reviewable, and testable—just like code</p>
</li>
<li><p>Onboarding new vendors becomes faster and less error-prone</p>
</li>
</ul>
<p>Instead of each party maintaining private build scripts, SONiC can establish a shared, open-standard build vocabulary.</p>
<hr />
<h2 id="heading-6-future-proofing-sonic-for-the-next-decade">6. Future-Proofing SONiC for the Next Decade</h2>
<p>SONiC is evolving quickly—into new form factors, new silicon, new service models, and new deployment patterns. With Bazel, SONiC gains a foundation that:</p>
<ul>
<li><p>Scales horizontally as code volume increases</p>
</li>
<li><p>Easily supports new languages or frameworks</p>
</li>
<li><p>Provides deterministic releases that downstream integrators can trust</p>
</li>
<li><p>Enables advanced workflows like distributed testing or reproducible builds in the cloud</p>
</li>
</ul>
<p>Bazel is not a short-term patch; it is an investment in SONiC's long-term velocity and stability.</p>
<hr />
<h2 id="heading-the-legacy-build-system-complexity-at-scale">The Legacy Build System: Complexity at Scale</h2>
<p>To understand why Bazel is necessary, it's worth examining what SONiC's build process looked like before: a Makefile-based system built around a "slave" container image that encapsulated system dependencies.</p>
<h3 id="heading-arbitrary-commands-arbitrary-problems">Arbitrary Commands, Arbitrary Problems</h3>
<p>The legacy system consists of <strong>317 Makefile rules</strong> (<code>.mk</code> files) and dependency files (<code>.dep</code> files), one for nearly every package, container, and component in SONiC. Each rule is allowed to invoke arbitrary commands: <code>apt-get install</code>, <code>pip install</code>, <code>dget</code>, shell scripts, custom build logic. On the surface, this sounds reasonable—the "slave" container provides isolation. But at SONiC's scale, this approach becomes a liability:</p>
<p><strong>Brittleness</strong>: Each recipe fetches dependencies from the internet on-demand. A network hiccup, a removed package from Debian archives, a deprecated Python module on PyPI, and the entire build fails. There's no guarantee that tomorrow's build will work the same way as today's. No pinning, no snapshot repositories, no explicit version control of dependencies.</p>
<p><strong>Non-Hermeticity</strong>: Because recipes are allowed to fetch and execute arbitrary commands, the build's output depends on what's available on the internet <em>right now</em>. Build two identical SONiC commits a week apart, and they may produce different binaries. The build machine's OS and installed packages are implicit dependencies—if you upgrade Ubuntu on your build server, you risk silently changing SONiC's outputs.</p>
<p><strong>Unmaintainability at Scale</strong>: With 317 separate rule files, coordinating changes is nightmarish. A Debian package update might require changes to multiple recipes. Dependency injection during build (where rules can declare new dependencies at build time) means the full dependency graph is opaque until runtime. You don't know what's actually going to be built until you run <code>make</code>.</p>
<p><strong>Hidden Complexity</strong>: The "slave" container approach obscures the real problem. Developers think they're building in a consistent environment because it's Docker-based, but the container itself is built by the same brittle, non-hermetic process. You're wrapping chaos in a box and calling it reproducibility.</p>
<h3 id="heading-recipes-inject-dependencies-during-build">Recipes Inject Dependencies During Build</h3>
<p>Making matters worse, the legacy system allows rules to <strong>inject dependencies dynamically during the build process</strong>. A recipe isn't just a fixed input-output mapping; it's a script that can decide at runtime what to depend on. This makes the dependency graph fundamentally unknowable until build execution. You can't reason about what the artifact will contain—you have to run it and see.</p>
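<p>A contrived sketch of the pattern (all names are hypothetical) shows why the graph cannot be known before execution:</p>
<pre><code class="lang-makefile"># somepkg.mk: the dependency list itself is computed at build time
$(SOMEPKG)_DEPENDS += $(shell ./scripts/detect-extra-deps.sh)

$(SOMEPKG): $($(SOMEPKG)_DEPENDS)
	apt-get install -y libfoo-dev   # network fetch mid-build, unpinned
	pip install somepkg-buildtool   # whatever PyPI serves today
	./build.sh
</code></pre>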
<h3 id="heading-the-scale-problem">The Scale Problem</h3>
<p>SONiC spans dozens of C/C++ components, multiple Python packages, Go binaries, container images, kernel modules, and ASIC-specific toolchains. The legacy system required maintainers to write and maintain Makefiles for all of this, with no unifying framework. Each component has its own recipe, its own build logic, its own fragile internet-based dependency fetching. When SONiC grew to support ARM architectures, multiple Debian versions (Stretch, Buster, Bullseye, Bookworm), and dozens of ASIC vendors, the number of recipes multiplied.</p>
<p>The build system became a maintenance burden that grew faster than the codebase itself.</p>
<hr />
<h2 id="heading-bazel-is-decades-ahead-as-a-build-systembut-its-not-autopilot-you-still-need-a-good-engineer-to-make-it-fit-the-problem">Bazel is decades ahead as a build system—but it’s not autopilot. You still need a good engineer to make it fit the problem.</h2>
<p>Bazel is uniquely suited to address the intricate build and test challenges faced by the SONiC project. As SONiC evolves to support a growing range of devices, languages, and architectures, the need for a robust build system becomes paramount. Bazel’s hermetic, reproducible builds help ensure that the same inputs consistently yield the same outputs—an essential property in a diverse ecosystem. With deterministic dependency management and strong cross-compilation support, Bazel aligns well with SONiC’s goals of consistency across platforms and contributors.</p>
<p>But Bazel isn’t a silver bullet. It’s decades ahead as a build system, yet its architecture doesn’t solve the problem automagically—real success still depends on good engineering: clear rule boundaries, disciplined dependencies, and a toolchain/packaging strategy that makes Bazel’s guarantees real in practice.</p>
<h2 id="heading-the-deep-technical-challenge-hermetic-package-dependency-resolution">The Deep Technical Challenge: Hermetic Package Dependency Resolution</h2>
<p>Building a polyglot, multi-architecture system like SONiC using Bazel exposed a fundamental tension between how traditional Linux package managers work and how Bazel is designed to operate. The solution has required solving multiple interconnected problems that most build systems never encounter.</p>
<p>The sections below summarize some of the Bazel for SONiC issues that <a target="_blank" href="http://www.aspect.build">Aspect Build</a> has helped to identify and address.</p>
<h3 id="heading-the-deb-package-overlay-problem">The .deb Package Overlay Problem</h3>
<p>Debian packages are designed to be installed sequentially into a shared filesystem. When you run <code>apt-get install</code>, each package unpacks its files into the same root directory—creating a virtual "overlay" of files from potentially hundreds of packages. Circular dependencies are resolved through this overlay: library A might contain a symlink to library B, which is provided by a completely different package. The order of installation handles conflicts, and the running system sees a unified merged filesystem.</p>
<p>Bazel, by contrast, abhors large, opaque target outputs. Each target should be hermetic and reproducible, with explicit dependencies and minimal side effects. Representing "unpack all these packages and overlay them" as a single Bazel target creates an enormous, unwieldy build artifact that defeats caching, incremental builds, and reproducibility. Yet SONiC needs precisely this—a complete, consistent sysroot with hundreds of interdependent Debian packages.</p>
<p>The solution: <em>Bazel needed to compute the transitive closure of package dependencies, resolve symlinks ahead of time, and produce a metadata representation of the merged filesystem without materializing the entire overlay</em>. Rather than creating a giant unpacked sysroot target, <code>rules_distroless</code> generates a "Contents" file—a snapshot mapping every filename to the package that provides it. This allows Bazel to:</p>
<ul>
<li><p>Resolve symlink chains before build time (avoiding "dangling symlink" errors that plague container builds)</p>
</li>
<li><p>Determine the final location of each library or header file despite circular package dependencies</p>
</li>
<li><p>Share the metadata across builds without replicating hundreds of MB of unpacked packages</p>
</li>
<li><p>Enable fine-grained caching at the package level, not the sysroot level</p>
</li>
</ul>
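<p>For a concrete picture, the generated metadata follows the same shape as Debian's own "Contents" index: one line per file, mapping each path to the package that provides it. The entries below are hypothetical, for illustration only:</p>
<pre><code class="lang-plaintext">usr/lib/x86_64-linux-gnu/libyang.so.2.0    libs/libyang2
usr/lib/x86_64-linux-gnu/libssl.so.3       libs/libssl3
usr/bin/python3                            python/python3-minimal
</code></pre>
<p>With this map in hand, Bazel can answer "which package owns this file?" without ever materializing the merged sysroot.</p>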
<h3 id="heading-the-rpath-nightmare">The RPATH Nightmare</h3>
<p>The ELF dynamic linker, <code>ld.so</code>, searches for shared libraries in a precise order:</p>
<ol>
<li><p>Directories in the binary's <code>DT_RPATH</code> attribute (if <code>DT_RUNPATH</code> is absent)</p>
</li>
<li><p><code>LD_LIBRARY_PATH</code> environment variable</p>
</li>
<li><p>Directories in <code>DT_RUNPATH</code> (the modern preferred approach)</p>
</li>
<li><p>System cache (<code>/etc/ld.so.cache</code>)</p>
</li>
<li><p>Default paths (<code>/lib</code>, <code>/usr/lib</code>, and architecture-specific variants)</p>
</li>
</ol>
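<p>You can see which of these entries a given binary carries with <code>readelf</code>; the output below is illustrative:</p>
<pre><code class="lang-plaintext">$ readelf -d ./my_tool | grep -E 'RPATH|RUNPATH'
 0x000000000000001d (RUNPATH)            Library runpath: [$ORIGIN/../lib]
</code></pre>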
<p>By default, binaries built in SONiC expect their dependencies in <code>/usr/lib</code>, <code>/lib</code>, and architecture-specific variants like <code>/usr/lib/aarch64-linux-gnu</code>. But Bazel builds place outputs in <code>bazel-out/</code>, a completely different path structure. The linker has no idea where to find <code>libyang.so.2.0</code> when it's buried in <code>bazel-out/aarch64-linux-gnu/bin/external/com_github_sonic_net_sonic_mgmt_common/lib/libyang.so.2.0</code>.</p>
<p>The fix requires embedding <code>RPATH</code> entries directly into the binary at link time—telling the linker "look in <code>$ORIGIN/../lib</code> for my dependencies." But this introduces new challenges:</p>
<ul>
<li><p><strong>Relocation</strong>: A binary linked with <code>RPATH=$ORIGIN/../lib</code> expects a specific directory structure. If that binary is later used as a tool in another Bazel action (where it's relocated to a different path), the <code>RPATH</code> becomes invalid.</p>
</li>
<li><p><strong>Transitive Dependencies</strong>: When a binary depends on library A, which depends on library B, the linker must find all three—but each may have different <code>RPATH</code> settings. Bazel must ensure the transitive closure is visible to the linker without creating massive, monolithic targets.</p>
</li>
</ul>
<p>SONiC solved this through <strong>Bazel Configurations and Transitions</strong>, a mechanism that applies different compiler flags to different parts of the dependency graph. Some targets (like Python's C extensions) are compiled with <code>-fPIC</code> and packaged into a sysroot. Everything else uses the standard compilation model. Bazel's transitions ensure these different compilation modes never mix, preventing linker errors where position-dependent code tries to reference position-independent code.</p>
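<p>As a rough sketch of the mechanism (names here are hypothetical, not SONiC's actual rules), a Starlark transition that forces <code>-fPIC</code> onto one part of the graph looks like this:</p>
<pre><code class="lang-python"># Illustrative sketch only; attaching the transition to a rule
# attribute (and the required allowlist) is omitted.
def _pic_transition_impl(settings, attr):
    # Append -fPIC for every target configured through this transition.
    return {
        "//command_line_option:copt": settings["//command_line_option:copt"] + ["-fPIC"],
    }

pic_transition = transition(
    implementation = _pic_transition_impl,
    inputs = ["//command_line_option:copt"],
    outputs = ["//command_line_option:copt"],
)
</code></pre>
<p>Because the flag change lives in the configuration, Bazel caches the <code>-fPIC</code> and non-<code>-fPIC</code> builds of the same target separately and never mixes their outputs.</p>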
<h3 id="heading-the-rpath-nightmare-continues-finding-dependencies-in-the-sandbox">The RPATH Nightmare Continues: Finding Dependencies in the Sandbox</h3>
<p>When Bazel runs a test target, it constructs a temporary sandbox directory containing only the files that test explicitly depends on. A test might link against a dynamic C++ library, which depends on <code>libc.so.6</code>, which transitively depends on <code>ld-linux-x86-64.so.2</code>. At test runtime, none of these libraries are in the system <code>/usr/lib</code>—they're scattered across <code>bazel-out/</code> in the test's sandbox.</p>
<p><strong>The breakthrough</strong>: Rather than relying on <code>LD_LIBRARY_PATH</code> (which is fragile and defeats reproducibility), <code>rules_distroless</code> ensures that when a test binary is linked, its <code>RPATH</code> contains the exact paths to all transitive dependencies <em>within that test's sandbox</em>. The binary becomes self-contained—it knows exactly where to find every library it needs, relative to its own location. A test can now be run anywhere, on any machine, and it will find its dependencies without environment variable manipulation. This is genuinely hermetic testing: the test's success or failure depends only on the code and its declared dependencies, not on what happens to be installed on the developer's machine.</p>
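<p>In <code>BUILD</code>-file terms, the underlying pattern looks roughly like the following sketch (<code>rules_distroless</code> computes the real entries automatically; note the <code>$$</code>, since Bazel would otherwise try to expand <code>$ORIGIN</code> as a make variable):</p>
<pre><code class="lang-python">cc_binary(
    name = "my_test_helper",  # hypothetical target
    srcs = ["helper.cc"],
    linkopts = [
        # $$ becomes a literal $, so the linker records $ORIGIN/../lib
        "-Wl,-rpath,$$ORIGIN/../lib",
    ],
)
</code></pre>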
<h3 id="heading-container-runtime-binaries-finding-dependencies-across-layers">Container Runtime: Binaries Finding Dependencies Across Layers</h3>
<p>When a SONiC container starts, thousands of binaries are present—p4rt, swss, gnmi, monitoring agents, and hundreds of utilities. None of them are statically linked. Each binary must find its shared libraries at runtime.</p>
<p>In a traditional Docker/container image built with a Dockerfile, everything is installed into standard locations (<code>/lib</code>, <code>/usr/lib</code>). The dynamic linker searches these paths by default. Simple and naive.</p>
<p>But SONiC containers built with Bazel take a different approach. Multiple packages provide the same functionality (e.g., multiple versions of <code>libprotoc.so</code>), but only one should be "selected" for the final image. Furthermore, if SONiC switches from Debian bookworm to Debian trixie, or moves between Ubuntu LTS versions, the exact libraries available change. Hardcoding <code>/usr/lib/x86_64-linux-gnu/libfoo.so.1</code> in a binary's <code>RPATH</code> would break when the library moves or changes versions.</p>
<p><strong>The solution</strong>: Bazel generates a container image where <em>the RPATH of every binary is already computed to find the exact libraries that will be present in that specific image</em>. When p4rt starts in the container, its <code>RPATH</code> contains the paths where <code>rules_distroless</code> placed <code>libprotoc.so</code>, <code>libyang.so</code>, and everything else. The binary finds its dependencies through the RPATH, not through the system's default search paths. This means:</p>
<ul>
<li><p>The container is self-describing: a binary's dependencies are "baked in" rather than discovered at runtime</p>
</li>
<li><p>Moving from Debian bookworm to trixie doesn't break existing binaries—Bazel recomputes the RPATH for the new distro</p>
</li>
<li><p>Libraries can be placed anywhere in the image without breaking binaries</p>
</li>
<li><p>Container images are reproducible: given the same Bazel build configuration, the same binaries will find the same libraries every time</p>
</li>
</ul>
<h3 id="heading-liberation-from-host-distro-dependencies">Liberation from Host Distro Dependencies</h3>
<p>Perhaps the most profound impact: <code>rules_distroless</code> decouples the build machine's operating system from the built artifact.</p>
<p>Traditionally, if you build SONiC on Ubuntu 24.04 and then try to run the binaries on Ubuntu 22.04, you hit subtle glibc incompatibilities. Some binaries depend on functions only available in Ubuntu 24's glibc version. Others link against the "wrong" version of OpenSSL. Developers spend weeks debugging "works on my machine" failures.</p>
<p>With <code>rules_distroless</code>, the build process explicitly pins every single Debian package using Debian snapshot repositories. When you build SONiC on an Ubuntu 22.04 machine, the build ignores the system's installed packages and fetches exact, pinned versions from the snapshots. The build output contains binaries linked against those exact packages, regardless of what's on the build machine.</p>
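<p>Snapshot repositories make this pinning possible because every historical state of the Debian archive remains addressable by timestamp. A sources entry pinned to a snapshot (the timestamp below is arbitrary) resolves to exactly one immutable package set:</p>
<pre><code class="lang-plaintext">deb https://snapshot.debian.org/archive/debian/20240101T000000Z bookworm main
</code></pre>
<p>(For older snapshots, apt may additionally need <code>check-valid-until</code> disabled, since the archived release files have expired.)</p>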
<p>This means:</p>
<ul>
<li><p>A developer on Ubuntu 20.04 can build SONiC targeting Debian bookworm without installing bookworm libraries locally</p>
</li>
<li><p>CI/CD runners can be any modern Linux distro—the build is isolated from the host</p>
</li>
<li><p>Build results are reproducible across machines because dependency versions are explicit, not implicit</p>
</li>
<li><p>Upgrading the build machine's OS doesn't accidentally change SONiC's binaries</p>
</li>
</ul>
<p>The build machine becomes nearly irrelevant. You're no longer asking "how do I build this on my machine?" You're asking "given a Bazel configuration and a snapshot of the Debian archive, what do I build?" The answer is the same everywhere, because Bazel controls everything.</p>
<h3 id="heading-radical-hermeticity-explicit-dependencies-zero-implicit-assumptions">Radical Hermeticity: Explicit Dependencies, Zero Implicit Assumptions</h3>
<p>The commitment to hermeticity goes far beyond RPATH tuning and package pinning. SONiC's C/C++ compilation is built with compiler flags that reject the very concept of "system libraries":</p>
<pre><code class="lang-plaintext">-nostdinc -nostdinc++ -nostdlib
</code></pre>
<p>These flags tell the compiler: "There are no standard libraries. There are no default include paths. Everything must be explicitly declared." Without these flags, a compiler would silently fall back to the build machine's <code>/usr/include</code> and <code>/usr/lib</code>. A developer on Ubuntu 24.04 might accidentally use headers from Ubuntu 24's glibc, even if the target is Debian bookworm. With <code>-nostdinc -nostdinc++ -nostdlib</code>, that accident becomes impossible—the build fails loudly if a dependency is missing.</p>
<p>Every header file, every standard library function, every bit of C runtime must be explicitly provided as a Bazel dependency. This guarantees that:</p>
<ul>
<li><p>A build on any machine produces identical binaries</p>
</li>
<li><p>Adding a dependency automatically updates the build configuration (nothing hidden)</p>
</li>
<li><p>Relying on an undeclared dependency is impossible: everything a target uses must be explicitly declared</p>
</li>
<li><p>Switching between libc implementations or versions is straightforward—just change the declared dependency</p>
</li>
</ul>
<h3 id="heading-auto-generated-cclibrary-targets-turning-deb-packages-into-bazel-dependencies">Auto-Generated cc_library Targets: Turning .deb Packages into Bazel Dependencies</h3>
<p>But declaring every libc header and every system library explicitly would require thousands of manual <code>cc_library</code> rules. That's where <code>rules_distroless</code> shines.</p>
<p>When <code>rules_distroless</code> processes a Debian package, it doesn't just extract files. It <strong>automatically generates Bazel</strong> <code>cc_library</code> targets that encapsulate:</p>
<ul>
<li><p>All header files from the package (C standard library headers, C++ standard library headers, architecture-specific headers)</p>
</li>
<li><p>All shared object libraries</p>
</li>
<li><p>The correct include paths and linker settings</p>
</li>
<li><p>All transitive dependencies, automatically resolved</p>
</li>
</ul>
<p>From a SONiC developer's perspective, instead of hoping that libc headers and libraries exist somewhere on the system, they explicitly declare:</p>
<pre><code class="lang-python">cc_binary(
    name = <span class="hljs-string">"my_tool"</span>,
    srcs = [<span class="hljs-string">"main.cc"</span>],
    deps = [
        <span class="hljs-string">"@debian//libc6"</span>,
        <span class="hljs-string">"@debian//libstdc++"</span>,
    ],
)
</code></pre>
<p>The Bazel build system now understands exactly which Debian packages this binary depends on. If you want to upgrade libc, you change one line in <code>MODULE.bazel</code>. If you want to use a different libc or switch distributions entirely, the build system tracks it explicitly.</p>
<p><strong>This auto-generation is the key to polyrepo hermetic builds.</strong> In a monorepo, one team can write all the libc rules. But SONiC spans multiple GitHub repositories. Each repo's BUILD files can independently declare which Debian packages it needs. The <code>rules_distroless</code> machinery automatically generates compatible <code>cc_library</code> targets, and Bazel's module system ensures they all use the same versions across all repositories.</p>
<p>No more silent fallback to system libraries. No more "it works on my machine but not in CI." Every dependency is explicit, versionable, and traceable.</p>
<h3 id="heading-the-performance-paradox">The Performance Paradox</h3>
<p>Computing transitive closure of dependencies across a polyrepo, resolving symlink chains, and managing RPATH dynamically sounds expensive — and it is. But doing it at build time, once, and caching the result is far cheaper than doing it at container runtime or — worse — debugging "symbol not found" errors weeks later in production.</p>
<p>The trick is recognizing what can and cannot be cached:</p>
<ul>
<li><p><strong>Package metadata</strong> (which files each package provides, symlink targets) is stable and highly cacheable</p>
</li>
<li><p><strong>Sysroot layout</strong> (which package provides which file when all dependencies are considered) is computed once and reused</p>
</li>
<li><p><strong>Binary relocations</strong> (embedding RPATH in binaries) happens at link time and is reproducible</p>
</li>
<li><p><strong>Container assembly</strong> (final layer selection) happens downstream and doesn't affect upstream caching</p>
</li>
</ul>
<p>SONiC's Bazel infrastructure treats these as separate concerns, allowing massive parallelism and cache reuse even though the final build is complex.</p>
<hr />
<h2 id="heading-solving-the-python-dependency-maze-explicit-versioned-cross-compilable">Solving the Python Dependency Maze: Explicit, Versioned, Cross-Compilable</h2>
<p>Python presents a unique challenge in reproducible builds. Unlike C/C++ where dependencies are system packages installed via a package manager, Python's ecosystem fetches from PyPI—a centralized repository where package availability and versioning can change. The standard approach to managing Python dependencies in Bazel has been <code>rules_python</code>, but it has limitations that made it unsuitable for SONiC's polyglot, multi-architecture environment.</p>
<h3 id="heading-the-pypi-problem-at-scale">The PyPI Problem at Scale</h3>
<p>SONiC uses Python extensively: configuration generation tools, management daemons, testing frameworks, and utilities. Each Python package has transitive dependencies on other packages, many of which are compiled extensions (<code>.so</code> files). The challenge is that <code>rules_python</code>'s traditional <code>pip_parse</code> implementation:</p>
<ul>
<li><p>Assumes a single target architecture (no cross-compilation support)</p>
</li>
<li><p>Doesn't pin package versions deterministically</p>
</li>
<li><p>Struggles with compiled Python extensions that depend on system libraries</p>
</li>
<li><p>Doesn't integrate well with polyrepo ecosystems where multiple repositories have different Python dependency requirements</p>
</li>
</ul>
<p>In SONiC's case, you might need to build the same Python package for x86-64 and ARM64 simultaneously. The standard tooling wasn't built for that.</p>
<h3 id="heading-enter-aspect-rules-python-modern-dependency-management">Enter Aspect Rules Python: Modern Dependency Management</h3>
<p>To solve these problems, <a target="_blank" href="https://github.com/aspect-build/rules_py"><strong>Aspect Rules Python</strong></a> (<code>aspect_rules_py</code>) provides a modern reimplementation of Python dependency management in Bazel. It replaces the traditional <code>pip_parse</code> with a new implementation based on <strong>uv</strong>, a fast, production-grade Python package resolver.</p>
<p>Key improvements:</p>
<p><strong>Explicit Versioning</strong>: Dependencies are pinned in a lock file (similar to <code>requirements.lock</code>), ensuring that every build uses the exact same package versions. No surprises from PyPI changes.</p>
<p><strong>Cross-Compilation Support</strong>: Unlike the original <code>rules_python</code>, <code>aspect_rules_py</code> can build Python packages for different architectures simultaneously. This is critical for SONiC: the same Python source code can be compiled as a wheel for x86-64, then recompiled for ARM64, with all dependencies resolved correctly for each target.</p>
<p><strong>Integration with Hermetic Sysroots</strong>: When a Python package has compiled extensions that depend on system libraries (e.g., a C extension that links against <code>libyang</code>), the sysroot provided by <code>rules_distroless</code> makes the correct headers and libraries available. <code>aspect_rules_py</code> integrates seamlessly with this sysroot, so the package's build automatically uses the right C compiler flags, header locations, and link paths for the target architecture.</p>
<p><strong>Polyrepo-Friendly</strong>: Each SONiC repository declares and resolves its own Python dependency closure independently. <code>aspect_rules_py</code> uses uv to compute the complete transitive dependency graph for each repository, generating a lock file that pins all transitive dependencies. Because each repo has its own lock file, different repositories can use different versions of shared dependencies without conflict—Bazel's module system keeps them isolated.</p>
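<p>The lock-file side of this workflow mirrors uv's standard resolver usage. Conceptually it looks like the command below, though the exact integration with <code>aspect_rules_py</code> may differ; consult its documentation:</p>
<pre><code class="lang-plaintext">$ uv pip compile requirements.in -o requirements.lock   # pin the full transitive closure
</code></pre>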
<h3 id="heading-enabling-container-cross-compilation">Enabling Container Cross-Compilation</h3>
<p>Combined with the Debian package management improvements (<code>rules_distroless</code>), <code>aspect_rules_py</code> enables a powerful capability: <strong>building complete container images for different architectures within a single Bazel build</strong>.</p>
<p>Before, cross-compiling a SONiC container for ARM64 required:</p>
<ol>
<li><p>Running the build on an ARM64 machine (or emulating it, which was slow)</p>
</li>
<li><p>Managing different Python dependency versions for different architectures</p>
</li>
<li><p>Handling the mismatch between the build machine's Python environment and the target architecture</p>
</li>
</ol>
<p>With <code>aspect_rules_py</code> and <code>rules_distroless</code>, Bazel can:</p>
<ol>
<li><p>Resolve Python dependencies for the target architecture (e.g., ARM64)</p>
</li>
<li><p>Build wheels for that architecture using the sysroot's C compiler and libraries</p>
</li>
<li><p>Layer those wheels into the container image alongside the pinned Debian packages</p>
</li>
<li><p>All within a single Bazel invocation on any machine</p>
</li>
</ol>
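<p>The target architecture is expressed through Bazel platforms, so a cross-build can be requested either with an architecture-specific target alias or with an explicit flag (the platform label below is illustrative):</p>
<pre><code class="lang-plaintext">$ bazel build --platforms=//platforms:linux_arm64 //path/to:docker-swss
</code></pre>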
<p>This means a developer on an x86-64 laptop can type <code>bazel build //path/to:docker-swss-arm64</code> and get a fully cross-compiled container image without any special setup, emulation, or native ARM64 hardware.</p>
<hr />
<h2 id="heading-a-call-to-action-for-the-sonic-foundation">A Call to Action for the SONiC Foundation</h2>
<p>The SONiC community is at an inflection point. As deployments reach hyperscale and the contributor base grows more diverse, the project needs a build and test system that matches its ambitions.</p>
<p>Bazel offers exactly that: a modern, reproducible, cloud-scale platform that empowers contributors, accelerates releases, and strengthens the entire ecosystem.</p>
<p>Adopting Bazel across SONiC Foundation projects will:</p>
<ul>
<li><p>Reduce fragmentation</p>
</li>
<li><p>Increase developer productivity</p>
</li>
<li><p>Improve security and auditing</p>
</li>
<li><p>Enable faster and more reliable releases</p>
</li>
<li><p>Provide a consistent experience for every vendor and contributor</p>
</li>
</ul>
<p>The benefits compound over time—and they're aligned with SONiC's vision of an open, interoperable, high-performance network OS. <strong>Let's give SONiC the build system it deserves</strong>.</p>
<h3 id="heading-accelerating-innovation-through-reproducible-scalable-cloud-native-build-systems">Accelerating Innovation Through Reproducible, Scalable, Cloud-Native Build Systems</h3>
<p>In this article, we have explored the case for adopting Bazel across SONiC Foundation projects and how it can meaningfully improve developer productivity, platform compatibility, security posture, and release engineering. We’ve also updated the Bazel and SONiC communities on some of our recent contributions to empower Bazel for SONiC.</p>
<p>As the SONiC Foundation continues to drive forward an open, standards-based network operating system for the industry, one challenge has become increasingly visible across the community: <strong>how to build, test, and release SONiC components with speed, consistency, and confidence</strong>. The project has grown in complexity—diverse hardware platforms, multiple languages and toolchains, distributed teams, and the rising expectation that networking software behave like modern cloud software.</p>
<p>This is exactly where Bazel, the open-source, high-performance build and test system originally created at Google, can play a transformative role. Bazel offers SONiC contributors and vendors the reliability, scalability, and repeatability needed to sustain a world-class network OS at global scale.</p>
<h2 id="heading-next-steps">Next Steps</h2>
<p>Interested in learning more about how to succeed with Bazel for SONiC? <a target="_blank" href="https://calendly.com/aspect-build/intro?back=1&amp;month=2026-01">Schedule time to talk with us</a> or email us at hello@aspect.build.</p>
]]></content:encoded></item><item><title><![CDATA[What's New at BazelCon 2025]]></title><description><![CDATA[Wow, it’s been an exciting year in Bazel-land, and Aspect has made the most of it. BazelCon is our yearly cadence for “conference-driven development”.
Here’s a retrospective of the conference and our highlights including from my Monday morning keynot...]]></description><link>https://blog.aspect.build/bazelcon-2025</link><guid isPermaLink="true">https://blog.aspect.build/bazelcon-2025</guid><category><![CDATA[Aspect Build]]></category><category><![CDATA[Bazel Python]]></category><category><![CDATA[Buildbarn]]></category><category><![CDATA[Bazel Gazelle]]></category><category><![CDATA[bazel]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Thu, 20 Nov 2025 21:10:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763412721239/1948a7af-3831-4294-8906-9ce5e6a1db28.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Wow, it’s been an exciting year in Bazel-land, and Aspect has made the most of it. BazelCon is our yearly cadence for “conference-driven development”.</p>
<p>Here’s a retrospective of the conference and our highlights including from my Monday morning keynote.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763670419529/4db52df0-f894-4122-80cf-d36d7c586a31.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-travel-was-a-pain">Travel Was a Pain</h2>
<p>U.S. flight cancellations and delays brought many attendees to Atlanta in the wee hours of the morning. Either that, or it was a convenient excuse for feeling hungover during the morning sessions.</p>
<p>All told, BazelCon 2025 attracted over 330 in-person attendees in Atlanta. Thankfully everyone arrived safely.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763670493508/4517d93c-cb0d-4112-b64b-55f815828315.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-aspect-extension-language">Aspect Extension Language</h2>
<p>Bazel is great at two things: Loading &amp; Analysis of the Dependency/Action graphs, and using plugins (“rulesets”) to populate the <code>bazel-out</code> folder from your sources.</p>
<p>A broad class of extensibility has been missing, and as a result most teams have written their own local dev scripts wrapping <code>bazel</code> and also some CI/CD YAML for their pipelines.</p>
<p>Our solution: a new Starlark dialect we call “Aspect Extension Language (AXL)”. Similarly to how <code>.bzl</code> files have some Bazel-specific standard libraries available as global symbols in starlark code, <code>.axl</code> files give you extension points to hook your developer workflows!</p>
<p>Check out my BazelCon 2025 <a target="_blank" href="https://www.youtube.com/watch?v=j7-IMZ2q5W4&amp;list=PLak8-7eFSpowmNiR2lhvJEomLA140yban&amp;index=21">10 minute lightning talk about AXL</a> and our <a target="_blank" href="https://cdn.prod.website-files.com/62fe361319fc7d5a70696095/690f4cd660ffe62f2a6e21e9_Marvin%20Saves%20the%20BUILD.pdf">Marvin Saves the BUILD comic</a> (as a pdf file).</p>
<p>We host a collection of extensions at <a target="_blank" href="https://github.com/aspect-extensions">https://github.com/aspect-extensions</a> and you can easily write your own. Start at <a target="_blank" href="https://aspect.build/axl">https://aspect.build/axl</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763671222527/d2bf1aeb-eb4e-4452-b56c-14a9e65daf94.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-notable-bazelcon-2025-content">Notable BazelCon 2025 Content</h2>
<p>Check out the <a target="_blank" href="https://www.youtube.com/playlist?list=PLak8-7eFSpowmNiR2lhvJEomLA140yban">conference session recordings</a>. Our <a target="_blank" href="https://www.youtube.com/watch?v=MB6Txen7rUk&amp;list=PLak8-7eFSpowmNiR2lhvJEomLA140yban&amp;index=2">Bazel 102 course on Python</a> was recorded, and has lots of updated content.</p>
<p>We recorded Aspect Insights video podcasts with Mícheál, Yun, and Xudong from Google. These episodes will drop soon!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763671088170/744597fb-367e-4c76-a554-ddf0de13d87c.jpeg" alt class="image--center mx-auto" /></p>
<p>We hosted a Hackathon the day after the conference. It felt like a hit! Notably Juan from Verkada presented an AXL extension he wrote, which we'll post soon.</p>
<p>We want to continue getting stuff done around Bazel, so we are going to host monthly meetups in the San Francisco Bay Area starting next month at Figma. See <a target="_blank" href="https://luma.com/build-meetup-sf">https://luma.com/build-meetup-sf</a> for events in San Francisco and <a target="_blank" href="https://luma.com/98o72bge">https://luma.com/98o72bge</a> for events in Palo Alto and Silly Valley, aka Silicon Valley.</p>
<p>Next year at BazelCon 2026 we’ll host another Hackathon event like this too.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763674987558/196163fb-36b6-4f59-9bd4-f0c87b5df3fc.jpeg" alt class="image--center mx-auto" /></p>
<p>Malte gave a talk on rules_img. We’ve supported his work and intend to make an easy transition for users of rules_oci.</p>
<p>The BUILD Foundation is forming. Uber, Spotify, and Canva have already committed to be founding members. The first meeting will be on December 4; email foundation@bazel.build if your company would consider participating.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763670545462/68754f4e-e761-4f8e-acc8-a5c767226598.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-workflows-free-open-source-tier">Workflows Free Open-Source Tier</h2>
<p>We’ve opened up a free tier of Aspect Workflows, our CI/CD infrastructure platform. Projects like Buildbarn are now getting our warm CI runners, Buildbarn cache, Web Results UI, and observability. Learn more in <a target="_blank" href="https://blog.aspect.build/free-tier-oss">our blog</a>.</p>
<h2 id="heading-orion-and-gazelle-prebuilt">Orion and gazelle-prebuilt</h2>
<p>Jason Bedard has worked with our awesome partners at Adobe to extract the Starlark BUILD file generator to a standalone Gazelle extension called Orion. You can now include this in your own custom Gazelle binary, along with custom extensions you wrote in Go.</p>
<p>We also extracted the pre-compiled Gazelle out of Aspect CLI to a standalone repo: aspect-gazelle. This way your engineers can skip the slow Go source builds and we can reliably depend on C extensions like tree-sitter.</p>
<h2 id="heading-community-contributions">Community Contributions</h2>
<p>As a leader in the Bazel ecosystem and Aspect’s Developer Evangelist, I’ve been active behind the scenes.</p>
<ul>
<li><p>Suggested that Cloudflare host a mirror of the BCR, since our customers tripped on unreliable hosting from <code>ftp.gnu.org</code> and <code>gitlab.arm.com</code>. They did!</p>
</li>
<li><p>Bazel docs are the #1 user-reported problem. For initial progress, the Bazel Central Registry (BCR) has added features to improve transparency:</p>
<ul>
<li><p>Starlark API docs are now visible in the registry for modules that publish their documentation.</p>
</li>
<li><p>Deprecation flags now clearly mark deprecated or archived modules in the registry.</p>
</li>
</ul>
</li>
<li><p>Next step for documentation: move hosting to a place the community can edit more easily, for example by having a preview of docs PRs (tentatively <code>bazel.online</code>). I encouraged Alan Mond to pick up the torch of improving Bazel’s docsite after he wrote an excellent blog post, and coordinated with the Bazel team to launch it. I arranged for the Rules Authors SIG to pay a third-party doc hosting service, <a target="_blank" href="https://mintlify.com">https://mintlify.com</a>. You can see <a target="_blank" href="https://preview.bazel.build">https://preview.bazel.build</a> for what’s coming!</p>
</li>
<li><p>We donated our bazel-lib library to Linux Foundation. The 3.0 release completes the rename of the module, removing Aspect branding from the name.</p>
</li>
<li><p>I had a LOT of meetings to sell the BUILD Foundation mission to large enterprises and the community, connecting their resources to the OSS projects that need it.</p>
</li>
<li><p>In addition, Jason contributed to upstream Gazelle.</p>
</li>
</ul>
<h2 id="heading-free-marvin-plush">Free Marvin Plush</h2>
<p>Including free U.S. shipping. While supplies last. <a target="_blank" href="https://share.hsforms.com/137ta79VISxWWSrUao8ysyQrr96h">Get yours here</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763670685595/b402518d-6bf6-417f-8018-bf5dfb41b0c2.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-see-you-at-bazelcon-2026">See you at BazelCon 2026!</h2>
<p>We participate in the BazelCon 2026 planning meetings. Little birds chirp that it may be in Europe. My colleague Brett Sheppard suggested Bazel, Belgium. He lived nearby in Flanders as a student. No word yet on the final location for next year’s get together.</p>
<p>Happy coding from the Aspect Build team at BazelCon 2025.  </p>
<p>To learn more about our Bazel support and platform, visit <a target="_blank" href="http://www.aspect.build">www.aspect.build</a> or <a target="_blank" href="https://calendly.com/aspect-build/">pick a time to talk with us</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763670590073/31a3c14e-0947-4188-b9fa-94dcc661ed4d.jpeg" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Starlark linter: Buildifier]]></title><description><![CDATA[Bazel uses its own configuration language called Starlark: https://starlark-lang.org. It’s a Python dialect that allows parallel evaluation to make builds faster.
Linting is the process of using a static code analysis tool, known as a "linter," to id...]]></description><link>https://blog.aspect.build/buildifier</link><guid isPermaLink="true">https://blog.aspect.build/buildifier</guid><category><![CDATA[bazel]]></category><category><![CDATA[formatting]]></category><category><![CDATA[Linting]]></category><category><![CDATA[bazelbuild]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Tue, 14 Oct 2025 18:47:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760467365525/838eb882-f894-468d-ac0f-c18164fe8293.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Bazel uses its own configuration language called Starlark: https://starlark-lang.org. It’s a Python dialect that allows parallel evaluation to make builds faster.</p>
<p><a target="_blank" href="https://docs.aspect.build/workflows/features/lint">Linting</a> is the process of using a static code analysis tool, known as a "linter," to identify and flag potential programming errors, bugs, stylistic issues, and suspicious constructs in source code. It essentially examines code without executing it.</p>
<p>Of course every language needs linting and formatting tools, and Starlark has them too! The formatter was originally created because the Go team at Google wanted to machine-edit <code>BUILD</code> files, but didn’t want to get into code reviews with teams who liked their hand-formatting of files. Read <a target="_blank" href="https://laurent.le-brun.eu/blog/the-story-of-reformatting-100k-files-at-google-in-2011">https://laurent.le-brun.eu/blog/the-story-of-reformatting-100k-files-at-google-in-2011</a> for more on this back-story.</p>
<p>Buildifier is a tool for formatting Bazel BUILD and .bzl files with a standard convention. Buildifier works on the Aspect Extension Language too! Here’s how that looks in VSCode.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760453652854/60548fa4-de61-45ba-a04e-80b108a6fec8.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-setting-it-up">Setting it up</h2>
<p>There are several ways to install <a target="_blank" href="https://docs.aspect.build/workflows/features/buildifier/">Buildifier</a> for developers. After working with over 50 companies on their Bazel config, we encoded our learnings into our Starter repos: <a target="_blank" href="https://github.com/bazel-starters">https://github.com/bazel-starters</a>. To save you a bunch of browsing, here’s a summary of what Aspect recommends:</p>
<ol>
<li><p>Buildifier is written in Go, but most engineers don’t want to wait to compile it when it’s a cache miss. This is why <a target="_blank" href="https://github.com/bazelbuild/buildtools/tree/main/buildifier#setup-and-usage-via-bazel">the “official” instructions</a> look HORRIBLE. We like <a target="_blank" href="https://github.com/keith/buildifier-prebuilt">https://github.com/keith/buildifier-prebuilt</a> as an easy way to get pre-built binaries, along with a build rule to run it. Note that you could fetch binaries directly from the <a target="_blank" href="https://github.com/bazelbuild/buildtools/releases">project releases</a> as well.</p>
</li>
<li><p>Developers will want to run <code>buildifier</code> from their PATH. We recommend https://direnv.net to hook the shell to update PATH as you <code>cd</code> into the workspace folder, then <a target="_blank" href="https://github.com/buildbuddy-io/bazel_env.bzl">https://github.com/buildbuddy-io/bazel_env.bzl</a> to add tools to the PATH. <a target="_blank" href="https://github.com/buildbuddy-io/bazel_env.bzl/blob/85e57cfd869cbbcc412c79db050191d88eef554a/examples/BUILD.bazel#L38">Here’s the spot in the example</a> that sets up Buildifier.</p>
</li>
<li><p>bazel_env.bzl also provides a stable path for the tool that editors can reference. For example in VSCode, add this to your <code>.vscode/settings.json</code>:<br /> <code>"bazel.buildifierExecutable": "./bazel-out/bazel_env-opt/bin/tools/bazel_env/bin/buildifier",</code><br /> You probably also want to enable it on save:<br /> <code>"bazel.buildifierFixOnFormat": true,</code></p>
</li>
</ol>
<h2 id="heading-buildifier-formatting">Buildifier formatting</h2>
<p>We want to make developers productive! What’s not productive? Discussion of whitespace, or waiting to re-run your CI job because of a formatter nit. We can make these basically disappear.</p>
<p>First, set up Aspect rules_lint, following <a target="_blank" href="https://github.com/aspect-build/rules_lint/blob/main/docs/formatting.md">https://github.com/aspect-build/rules_lint/blob/main/docs/formatting.md</a>. You don’t need this for buildifier itself, but assuming you also want to run formatters for other languages, it gives you a single setup that formats all files in the repo.</p>
<p>We set up the editor earlier to run buildifier on save, but engineers have a lot of editor choices. As a fallback, add a <code>pre-commit</code> hook so any unformatted files get fixed at <code>git commit</code> time. A couple of options for this are documented in formatting.md.</p>
<p>Finally, we do want to enforce that files in the repo are formatted. This isn’t because we hate reading code with different whitespace - it’s just to avoid the next engineer who touches the file ending up with spurious deltas in their pull request. You can run the <code>format.check</code> target from rules_lint on your CI. Give developers a nice error message when it fails, guiding them to set up their dev environment so they don’t need to hit a red CI job in the future.</p>
<p>When you first format the repo, it’s good practice to list your commit hash in the <code>.git-blame-ignore-revs</code> file - see <a target="_blank" href="https://git-scm.com/docs/git-blame#Documentation/git-blame.txt---ignore-revs-filefile">https://git-scm.com/docs/git-blame#Documentation/git-blame.txt---ignore-revs-filefile</a>. This way you don’t pollute the blame history.</p>
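<p>As a concrete sketch (assuming <code>git</code> is on the PATH; a throwaway repo stands in for yours so the commands are self-contained):</p>

```sh
# Create a demo repo standing in for your real one:
cd "$(mktemp -d)" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "format all BUILD files with buildifier"

# Record the hash of the big formatting commit so blame skips it:
git rev-parse HEAD >> .git-blame-ignore-revs

# Point git at the file once; later blame runs then ignore that commit:
git config blame.ignoreRevsFile .git-blame-ignore-revs
```

<p>GitHub’s blame view also honors a <code>.git-blame-ignore-revs</code> file at the repository root automatically.</p>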
<h2 id="heading-buildifier-linting">Buildifier linting</h2>
<p>Buildifier has about 100 checks that catch certain coding issues in Starlark files, though many of them are specific to Bazel’s standard library. The list is here: <a target="_blank" href="https://github.com/bazelbuild/buildtools/blob/main/WARNINGS.md">https://github.com/bazelbuild/buildtools/blob/main/WARNINGS.md</a></p>
<p>Using rules_lint only runs linters over the dependency graph, and you probably don’t want to have to add your <code>BUILD</code> files to a <code>filegroup</code> in the <code>BUILD</code> file. Too self-referential! So we recommend running buildifier linting as a standalone step, such as this <a target="_blank" href="https://github.com/aspect-build/rules_jasmine/blob/fdfda3c75ff986c82dbb486d9ed93ee9caa9ff6f/.github/workflows/buildifier.yaml">GitHub Actions workflow</a>. Note that this is not incremental - it runs the linter across all the code in the repository every time it runs.</p>
<p>When you first set up the linter, it will point out a big pile of issues, and this may result in a massive PR that is hard to rebase and disruptive to engineers when it lands. We recommend enabling a single check at a time, then rinsing and repeating, following the “<a target="_blank" href="https://qntm.org/ratchet">Ratchet principle</a>”.</p>
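<p>For example, with <code>buildifier</code> on your PATH, you can ratchet one warning category at a time (illustrative commands; <code>unused-variable</code> is just one warning name from the WARNINGS.md list above):</p>

```sh
# Check a single warning category across the repo; fix findings, then add more:
buildifier --mode=check --lint=warn --warnings=unused-variable -r .

# Many findings are auto-fixable:
buildifier --lint=fix --warnings=unused-variable -r .
```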
<h2 id="heading-its-easier-on-our-platform">It’s easier on our platform!</h2>
<p>The Aspect Workflows developer productivity platform includes buildifier as a first-class task type. This lets you skip some of the setup steps above, and get our recommendations running automatically on your continuous integration (CI) system. Check it out at <a target="_blank" href="https://aspect.build/platform">https://aspect.build/platform</a>.</p>
<p>Or <a target="_blank" href="https://calendly.com/aspect-build/intro">talk with us</a> to learn more.</p>
]]></content:encoded></item><item><title><![CDATA[The best tool for the Bazel job might be older than you]]></title><description><![CDATA[A lot of engineering is tribal: you know a few languages, use the technology that they enable, and follow the ideas and opinions of that community. Bazel is a cross-language build tool, and that means as Bazel experts, we end up cross-pollinating a l...]]></description><link>https://blog.aspect.build/mtree</link><guid isPermaLink="true">https://blog.aspect.build/mtree</guid><category><![CDATA[BSD]]></category><category><![CDATA[bazel]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Wed, 08 Oct 2025 22:08:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/aPxuGCxcCe4/upload/1a9ca72cfa047e1f5e50f374a1656886.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A lot of engineering is tribal: you know a few languages, use the technology that they enable, and follow the ideas and opinions of that community. Bazel is a cross-language build tool, and that means as Bazel experts, we end up cross-pollinating a lot of ideas between tribes. We don’t even make fun of COBOL or Fortran!</p>
<p>Some tribes are dismissive of old technology. However, a lot of old projects are not abandoned: they are stable, and in many cases are the bedrock on which a whole stack of tools have been developed. So in this article I’ll take you back to the turn of the decade - and not the most recent one. I suggest you listen to 🎶 “<em>Poison</em>” by Bell Biv DeVoe while you read this article — here’s the music video <a target="_blank" href="https://www.youtube.com/watch?v=hgnhVcyLy1I">https://www.youtube.com/watch?v=hgnhVcyLy1I</a>. That’s because <em>Poison</em> came out in March 1990, a couple months before 4.3BSD-Reno, which introduced a tool called <code>mtree(5)</code>.</p>
<p>From the archive:</p>
<p><a target="_blank" href="https://man.archlinux.org/man/mtree.5.en">https://man.archlinux.org/man/mtree.5.en</a></p>
<blockquote>
<p>The <code>mtree</code> format is a textual format that describes a collection of filesystem objects. Such files are typically used to create or verify directory hierarchies.</p>
</blockquote>
<p>Well, that sounds useful to use in a build system like Bazel. We love textual formats for the ease of manipulation, and Bazel is obsessed with small files produced in one place being inputs to various tools.</p>
<h2 id="heading-the-mtree-format">The mtree format</h2>
<p>Here’s a simple mtree file:</p>
<pre><code class="lang-bash">usr/bin uid=0 gid=0 mode=0755 time=1672560000 <span class="hljs-built_in">type</span>=dir
usr/bin/bazelisk uid=0 gid=0 mode=0755 time=1672560000 <span class="hljs-built_in">type</span>=file contents=/path/to/download
usr/bin/bazel uid=0 gid=0 mode=0755 time=1672560000 <span class="hljs-built_in">type</span>=link link=bazelisk
</code></pre>
<p>The format includes everything we need for reproducible builds, especially that <code>time</code> attribute. We can describe any arbitrary filesystem structure along with permissions, symlinks, and a bunch more.</p>
<p>Now, following the Unix philosophy, Bazel enables you to compose tools or random bash one-liners with ease. So let’s say we need <code>/usr/bin/bazel</code> to be owned by the right user since we’ll stick this filesystem into a Docker container image. It can be as simple as <code>sed</code>:</p>
<pre><code class="lang-python">genrule(
  name = <span class="hljs-string">"change_owner"</span>,
  cmd = <span class="hljs-string">"sed 's/uid=0/uid=1000/;s/gid=0/gid=500/' &lt;$&lt; &gt;$@"</span>,
  ...
)
</code></pre>
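<p>You can try that same <code>sed</code> expression outside Bazel (assuming a POSIX <code>sed</code>) to see what the genrule would emit:</p>

```sh
printf 'usr/bin/bazel uid=0 gid=0 mode=0755 type=file\n' |
  sed 's/uid=0/uid=1000/;s/gid=0/gid=500/'
# → usr/bin/bazel uid=1000 gid=500 mode=0755 type=file
```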
<h2 id="heading-where-mtrees-are-used">Where mtrees are used</h2>
<p>Since the <code>mtree</code> format is 35 years old, it can run for President of the United States! And with age comes wisdom — it also has a lot of useful utilities; for example, <a target="_blank" href="https://github.com/vbatts/go-mtree">https://github.com/vbatts/go-mtree</a> is a Go utility for interacting with manifests. The most useful one, however, is <code>tar</code> - at least BSD tar (GNU tar is not very reproducible and doesn’t have an intermediate representation of the filesystem being archived).</p>
<p>We wrote a Bazel rule, <a target="_blank" href="https://registry.bazel.build/modules/tar.bzl">https://registry.bazel.build/modules/tar.bzl</a>, which is a simple Starlark wrapper around a hermetic BSD tar toolchain and the mtree format. This rule doesn’t have to be smart, because it just takes an mtree to describe how the inputs should be laid out in the archive.</p>
<p>I gave that <code>sed</code> example earlier - it turns out another Very Old tool is a great way to make more general edits to mtrees - <code>awk</code>. We built this into <code>tar.bzl</code> to provide various mutations of the filesystem layout:</p>
<p><a target="_blank" href="https://github.com/bazel-contrib/tar.bzl/blob/main/tar/private/modify_mtree.awk">https://github.com/bazel-contrib/tar.bzl/blob/main/tar/private/modify_mtree.awk</a></p>
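<p>To get a feel for this style of mutation, here is a toy <code>awk</code> one-liner in the same spirit (not the actual <code>modify_mtree.awk</code> script): it relocates every entry under a new directory prefix:</p>

```sh
# Toy manifest, omitting the #mtree header line for brevity:
printf '%s\n' \
  'usr/bin uid=0 gid=0 mode=0755 type=dir' \
  'usr/bin/bazelisk uid=0 gid=0 mode=0755 type=file' |
awk -v prefix=app/ '{ $1 = prefix $1; print }'
# → app/usr/bin uid=0 gid=0 mode=0755 type=dir
# → app/usr/bin/bazelisk uid=0 gid=0 mode=0755 type=file
```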
<p>And wrapping that with a minimal <code>mutate</code> API lets us replace most of <code>rules_pkg</code> in a simple way:</p>
<p><a target="_blank" href="https://github.com/bazel-contrib/tar.bzl/blob/main/examples/migrate-rules_pkg/BUILD">https://github.com/bazel-contrib/tar.bzl/blob/main/examples/migrate-rules_pkg/BUILD</a></p>
<h2 id="heading-the-philosophy">The philosophy</h2>
<p>This post isn’t just about <code>mtree</code>. It turns out many problems were solved long ago in compilers, optimization, packaging, verification, and so on. It’s easy to be tribal and only look for solutions in the ecosystem you’re familiar with, like GitHub repos with a bunch of social followers and recent commits. Often the best tool for the job is on a website that looks like the Wayback Machine, and will serve you a lot better than The Latest Rewrite in Rust.</p>
<p>Shoutout to <a target="_blank" href="http://github.com/thesayyn">http://github.com/thesayyn</a> for finding all the clever ideas I’ve described in this post!</p>
<h2 id="heading-whats-next">What’s next</h2>
<p>Meet the Aspect Build team at <a target="_blank" href="https://events.linuxfoundation.org/bazelcon/">BazelCon 2025</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Bazel Starlark Docs on the Registry]]></title><description><![CDATA[About a year ago, I wrote about Bazel’s “stardoc” API documentation generator In https://blog.aspect.build/experiment-with-buf-and-starlark-docgen.
I didn’t make any public announcement at the time. This past year I’ve been continuing to evolve the d...]]></description><link>https://blog.aspect.build/stardocs-on-bcr</link><guid isPermaLink="true">https://blog.aspect.build/stardocs-on-bcr</guid><category><![CDATA[bazel]]></category><category><![CDATA[starlark]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Wed, 01 Oct 2025 17:38:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759341125696/c62fbd09-960f-4da4-a419-bcbb1f477caa.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>About a year ago, I wrote about Bazel’s “stardoc” API documentation generator In <a target="_blank" href="https://blog.aspect.build/experiment-with-buf-and-starlark-docgen">https://blog.aspect.build/experiment-with-buf-and-starlark-docgen</a>.</p>
<p>I didn’t make any public announcement at the time. This past year I’ve been continuing to evolve the design, and I’m excited to have finally launched it!</p>
<p>For example here is <a target="_blank" href="https://registry.bazel.build/modules/tar.bzl">https://registry.bazel.build/modules/tar.bzl</a>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759256714659/12365068-6246-4a3a-906a-dac4471f3093.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-documenting-bazel-apis">Documenting Bazel APIs</h2>
<p>Some documentation for the Bazel “core” appears on https://bazel.build. However, Bazel is extended by rulesets, which are published as modules. How do developers find the API documentation for these?</p>
<p>The answer has been quite fragmented, with the community taking several approaches:</p>
<ol>
<li><p>Markdown or ReStructured Text files checked into the repo like <a target="_blank" href="https://github.com/bazel-contrib/rules_go/tree/master/docs/go/core">https://github.com/bazel-contrib/rules_go/tree/master/docs/go/core</a>. These rely on some testing that the checked-in files match the stardoc output. So a <code>stardoc_with_diff_test</code> is commonly included, like <a target="_blank" href="https://github.com/bazel-contrib/rules_go/blob/master/docs/doc_helpers.bzl">this one for the Go example</a>. This is not great for contributors who submit a pull request (PR) with a minor correction to some Starlark code — they are met with a red PR and have to update the codegen. Worse, that process uses a Java program in the stardoc module which has to be built from source. This has a bunch of dependencies like protoc which also builds from source. It can take 10min to update the codegen following a 10sec minor correction. This leads to contributors abandoning the fix.</p>
</li>
<li><p>Using GitHub Pages like <a target="_blank" href="https://bazelbuild.github.io/rules_rust/">https://bazelbuild.github.io/rules_rust/</a> or adding a versioning scheme like <a target="_blank" href="https://bazelbuild.github.io/rules_pkg/">https://bazelbuild.github.io/rules_pkg/</a></p>
</li>
<li><p>Using Sphinx and a <a target="_blank" href="https://rules-python.readthedocs.io/en/latest/sphinxdocs/sphinx-bzl.html">custom Bazel publishing pipeline</a> like <a target="_blank" href="https://rules-python.readthedocs.io/en/latest/">https://rules-python.readthedocs.io/en/latest/</a></p>
</li>
<li><p>Aspect had a legacy docsite where we rendered API docs.</p>
</li>
</ol>
<p>Most of these were not great, missing features like versioning, full-text search, a button to copy a code sample, or nav-to-edit.</p>
<p>And most importantly, there was no one place to look. As a developer, if you don’t know which Bazel Module contains the doc you need, you end up searching around all these places. Bazel Modules are commonly published on the Bazel Central Registry (BCR) at https://registry.bazel.build. So, that’s where I added them!</p>
<h2 id="heading-how-it-works">How it Works</h2>
<h3 id="heading-generating-the-docs">Generating the Docs</h3>
<p>Behind the scenes, Bazel has a <a target="_blank" href="https://bazel.build/versions/8.3.0/reference/be/general#starlark_doc_extract">built-in rule <code>starlark_doc_extract</code></a>, in the Java core code, which runs Bazel’s Starlark interpreter over a given Starlark file. The interpreter is required because <code>.bzl</code> files use a standard library which is Bazel-specific and not part of the Starlark language spec, and the documentation needs to be aware of that. It also needs to read from transitively-loaded files in the <code>deps</code> of a <code>bzl_library</code> target.</p>
<p>As a Starlark author, you don’t really want to think about generating the API docs, it should just work! So writing <code>bzl_library</code> in your BUILD files is sufficient - and there’s a Bazel Gazelle extension to write those for you.</p>
<p>The implementation of <code>bzl_library</code> in bazel-skylib doesn’t actually do any documentation generation, or even <a target="_blank" href="https://github.com/bazelbuild/bazel-skylib/issues/568">validation of the inputs</a>.</p>
<p>So we have an improved one in bazel-lib. Here’s my first opportunity to link you to the documentation on the BCR: <a target="_blank" href="https://registry.bazel.build/modules/bazel_lib#-bzl_library-bzl">https://registry.bazel.build/modules/bazel_lib#-bzl_library-bzl</a></p>
<p>At this point, a rule author can run <code>bazel query 'kind(starlark_doc_extract, //...)'</code> to see which APIs have their documentation available.</p>
<h3 id="heading-publishing-the-docs">Publishing the Docs</h3>
<p>We want to include these docs in the releases, next to the artifact users download. Most rules use the <a target="_blank" href="https://github.com/bazel-contrib/publish-to-bcr">Publish-to-BCR</a> workflow or app, both of which require a <code>release_prep.sh</code> script, so we can just add a snippet there:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Add generated API docs to the release</span>
<span class="hljs-comment"># See https://github.com/bazelbuild/bazel-central-registry/blob/main/docs/stardoc.md</span>
docs=<span class="hljs-string">"<span class="hljs-subst">$(mktemp -d)</span>"</span>; targets=<span class="hljs-string">"<span class="hljs-subst">$(mktemp)</span>"</span>
bazel --output_base=<span class="hljs-string">"<span class="hljs-variable">$docs</span>"</span> query --output=label --output_file=<span class="hljs-string">"<span class="hljs-variable">$targets</span>"</span> <span class="hljs-string">'kind("starlark_doc_extract rule", //...)'</span>
bazel --output_base=<span class="hljs-string">"<span class="hljs-variable">$docs</span>"</span> build --target_pattern_file=<span class="hljs-string">"<span class="hljs-variable">$targets</span>"</span>
tar --create --auto-compress \
    --directory <span class="hljs-string">"<span class="hljs-subst">$(bazel --output_base=<span class="hljs-string">"<span class="hljs-variable">$docs</span>"</span> info bazel-bin)</span>"</span> \
    --file <span class="hljs-string">"<span class="hljs-variable">$GITHUB_WORKSPACE</span>/<span class="hljs-variable">${ARCHIVE%.tar.gz}</span>.docs.tar.gz"</span> .
</code></pre>
<blockquote>
<p>Note: the <code>--output_file</code> flag was added in Bazel 7.5.0</p>
</blockquote>
<p>Next, we want the module's metadata to point to that docs.tar.gz file, by editing the <code>source.template.json</code> to include <code>"docs_url": "https://github.com/{OWNER}/{REPO}/releases/download/{TAG}/{REPO}-{TAG}.docs.tar.gz"</code>.</p>
<blockquote>
<p>Note: you need publish-to-bcr version v0.2.3 or greater to pick up <a target="_blank" href="https://github.com/bazel-contrib/publish-to-bcr/pull/290">a fix</a> for multiple replacements in source.template.json</p>
</blockquote>
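<p>For illustration, here is one way a release script could add that key (a sketch: the <code>{OWNER}</code>-style placeholders stay literal because publish-to-bcr substitutes them at release time, and <code>python3</code> is used here only as a convenient JSON editor):</p>

```sh
cd "$(mktemp -d)"
# A minimal source.template.json standing in for your ruleset's real one:
printf '%s\n' '{"url": "https://github.com/{OWNER}/{REPO}/releases/download/{TAG}/{REPO}-{TAG}.tar.gz"}' > source.template.json

# Add the docs_url key without disturbing the rest of the file:
python3 - <<'PY'
import json
data = json.load(open("source.template.json"))
data["docs_url"] = "https://github.com/{OWNER}/{REPO}/releases/download/{TAG}/{REPO}-{TAG}.docs.tar.gz"
json.dump(data, open("source.template.json", "w"), indent=2)
PY
```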
<p>Now the module just gets published as usual. The docs_url property points to an archive of all the stardocs in their binaryproto format.</p>
<h3 id="heading-rendering-the-docs">Rendering the Docs</h3>
<p>Google maintains <a target="_blank" href="https://github.com/bazelbuild/stardoc">https://github.com/bazelbuild/stardoc</a> which is a Java implementation of a Velocity template renderer for stardoc binaryprotos. However the BCR UI is a TypeScript Next.js application. We don’t want to introduce a Java source dependency. Not only that — but look at the “Legacy WORKSPACE setup” on <a target="_blank" href="https://github.com/bazelbuild/stardoc/releases/tag/0.8.0">https://github.com/bazelbuild/stardoc/releases/tag/0.8.0</a> for an idea of the mass of transitive dependencies it requires at runtime.</p>
<p>As I pointed out in that blog post I linked at the start, we can just use the <code>@buf/bazel_bazel.bufbuild_es</code> NPM package to parse the binaryprotos, so we get a short implementation of data fetching: <a target="_blank" href="https://github.com/bazel-contrib/bcr-ui/blob/main/data/stardoc.ts">https://github.com/bazel-contrib/bcr-ui/blob/main/data/stardoc.ts</a></p>
<p>Now that we have the Stardocs in a TypeScript object, we just need a standard React component to render them: <a target="_blank" href="https://github.com/bazel-contrib/bcr-ui/blob/main/components/Stardoc.tsx">https://github.com/bazel-contrib/bcr-ui/blob/main/components/Stardoc.tsx</a> — this one is a little longer, but it’s mostly written and maintained by AI.</p>
<h2 id="heading-come-use-it-and-contribute">Come Use It and Contribute!</h2>
<p>The BCR-UI project is governed by the Rules Authors SIG under Linux Foundation: <a target="_blank" href="https://github.com/bazel-contrib/bcr-ui">https://github.com/bazel-contrib/bcr-ui</a></p>
<p>Your help is greatly appreciated!</p>
<ul>
<li><p>Update a ruleset to publish its docs</p>
</li>
<li><p>Improve ruleset documentation, like by adding more copy-paste examples</p>
</li>
<li><p>Suggest usability improvements to the Next.js rendering</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Aspect Workflows Free for Open Source Projects]]></title><description><![CDATA[At Aspect, open source is in our DNA. From day one, we've built tooling with and for the Bazel community. We’re passionate about helping open source projects ship faster and with higher quality.
Today, we’re excited to offer Aspect Workflows free for...]]></description><link>https://blog.aspect.build/free-tier-oss</link><guid isPermaLink="true">https://blog.aspect.build/free-tier-oss</guid><category><![CDATA[OSS]]></category><category><![CDATA[bazel]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Fri, 19 Sep 2025 16:16:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750343843473/8794d699-427d-4e48-a6f5-c08d82d90a43.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At Aspect, open source is in our DNA. From day one, we've built tooling with and for the Bazel community. We’re passionate about helping open source projects ship faster and with higher quality.</p>
<p>Today, we’re excited to offer <strong>Aspect Workflows free for a limited number of open-source projects</strong>.</p>
<h2 id="heading-try-our-cicd-platformbuilt-for-bazel-ready-to-support-oss">Try Our CI/CD Platform—Built for Bazel, Ready to Support OSS</h2>
<p>Aspect Workflows provides a modern CI/CD experience powered by <strong>Buildkite</strong>—the same engine we use to run our own projects. It also supports other CI systems like <strong>CircleCI</strong> and <strong>GitHub Actions</strong>, so you can integrate with the workflows you already use.</p>
<p>We’ve already been dogfooding this setup with our own <a target="_blank" href="https://github.com/aspect-build/bazel-examples">bazel-examples</a> repo and many of our Bazel rulesets—including several in the <a target="_blank" href="https://github.com/bazel-contrib">bazel-contrib GitHub org</a>, such as <a target="_blank" href="https://github.com/bazel-contrib/rules_oci"><code>rules_oci</code></a>. The Buildbarn project is also on board, using it for their <a target="_blank" href="https://github.com/buildbarn/bb-storage"><code>bb-storage</code></a> CI. You can see the full list at <a target="_blank" href="https://buildkite.com/aspect-build">https://buildkite.com/aspect-build</a>.</p>
<p>To access the platform, <strong>head over to</strong> <a target="_blank" href="https://docs.aspect.build/">https://docs.aspect.build/</a> and create a free account.</p>
<h2 id="heading-what-you-get">What You Get</h2>
<p>Projects on our OSS tier get access to:</p>
<ul>
<li><p>🚀 <strong>Fast CI</strong> that runs on persistent runners to avoid cold-start latency</p>
</li>
<li><p>🔎 <strong>Detailed UI</strong> that shows exactly which test or build step failed</p>
</li>
<li><p>💬 <strong>Streaming feedback in GitHub</strong> that posts real-time results directly to your PRs</p>
</li>
<li><p>🧰 <strong>Managed Buildbarn</strong> that provides a remote cache and execution backend, fully hosted</p>
</li>
</ul>
<h2 id="heading-a-few-things-to-know">A Few Things to Know</h2>
<p>This program runs in a <strong>shared (co-tenant) environment</strong> with other open-source workloads, which makes it ideal for public projects but not suitable for confidential code. There’s no formal SLA, and because open-source workloads are often idle and Workflows scales down to zero, it can sometimes take a couple of minutes for new machines to spin up. But once your build kicks off, it’s fast and smooth. ⚡️</p>
<h2 id="heading-ready-to-get-started">Ready to Get Started?</h2>
<p>If you're maintaining an open-source project and want to level up your CI/CD, we’d love to hear from you. Start at <strong>https://portal.aspect.build</strong> to begin the sign-up process.</p>
]]></content:encoded></item><item><title><![CDATA[Announcing bazel-starters]]></title><description><![CDATA[Aspect’s command line interface (CLI) has long had an init command to get started with a new full-featured Bazel workspace. It’s a wizard that leads you through some questions, resulting in some languages or features being enabled/disabled in the res...]]></description><link>https://blog.aspect.build/announcing-bazel-starters</link><guid isPermaLink="true">https://blog.aspect.build/announcing-bazel-starters</guid><category><![CDATA[bazel]]></category><category><![CDATA[starter-kit]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Mon, 15 Sep 2025 17:27:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Q-e1-nxrRIc/upload/9349131c2ddcc4e4b327676a2b23f9e5.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Aspect’s command line interface (CLI) has long had an <code>init</code> command to get started with a new full-featured Bazel workspace. It’s a wizard that leads you through some questions, resulting in some languages or features being enabled/disabled in the resulting project.</p>
<p>We think this is a great way to customize your new workspace, but it has some downsides:</p>
<ul>
<li><p>You have to start by installing the Aspect Build CLI (or the <code>scaffold</code> tool that the templating is based on).</p>
</li>
<li><p>It’s hard to point to a link on GitHub that says “here’s why we have Bazel configured this way” or to copy-paste some boilerplate from it.</p>
</li>
<li><p>You don’t get a green “Use this Template” button on GitHub to press that creates a new repo, which is useful for one-off reproductions, playgrounds, or training courses.</p>
</li>
<li><p>We didn’t have a nice README explaining how you can use the new workspace.</p>
</li>
<li><p>It’s Aspect-branded, and so any training courses we might donate to Linux Foundation would likely trigger objections from competitors.</p>
</li>
</ul>
<h2 id="heading-introducing-template-repositories">Introducing template repositories</h2>
<p>The continuous integration (CI) setup for our template has always had “presets” — one for each language, plus a zero-language “minimal” preset and one with everything thrown in (the “kitchen sink”). We now publish the output of the generator for each preset to its own repo under its own org:</p>
<p><a target="_blank" href="https://github.com/bazel-starters">https://github.com/bazel-starters</a></p>
<p>The README here explains a whole bunch of features that are included:</p>
<ul>
<li><p>📦 <strong>Curated</strong> <code>bazelrc</code> flags via <a target="_blank" href="https://github.com/bazel-contrib/bazelrc-preset.bzl"><code>bazelrc-preset.bzl</code></a></p>
</li>
<li><p>🧰 <strong>Developer environment setup</strong> with <a target="_blank" href="https://github.com/buildbuddy-io/bazel_env.bzl"><code>bazel_env.bzl</code></a></p>
</li>
<li><p>🎨 <strong>Formatting and linting</strong> built-in, using <a target="_blank" href="https://github.com/aspect-build/rules_lint"><code>rules_lint</code></a></p>
</li>
<li><p>✅ <strong>Pre-commit hooks</strong> for automatic linting and formatting</p>
</li>
<li><p>📚 <strong>Language-specific package manager</strong> integration (e.g. <code>pip</code>, <code>go.mod</code>, <code>npm</code>, etc.)</p>
</li>
<li><p>🧱 <strong>Latest Bazel version</strong> pinned and working</p>
</li>
<li><p>🐳 <strong>Docker container</strong> support using <a target="_blank" href="https://github.com/bazel-contrib/rules_oci"><code>rules_oci</code></a></p>
</li>
<li><p>🧪 <strong>Code generation tools</strong> (e.g. <a target="_blank" href="https://copier.readthedocs.io/"><code>copier</code></a>, <code>scaffold</code>, <code>yeoman</code>) to help you and your team <strong>stamp out new services or components quickly</strong></p>
</li>
</ul>
<p>We also found a <a target="_blank" href="https://gist.github.com/bwoods/1c25cb7723a06a076c2152a2781d4d49">clever trick</a> to make a Markdown file directly executable as a Bourne Shell script. So each of the README files, such as <a target="_blank" href="https://github.com/bazel-starters/cpp">https://github.com/bazel-starters/cpp</a>, is actually executed as part of our CI process. That means you can trust that those instructions will actually work after you clone the repo.</p>
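<p>The linked gist has the full trick; as a rough illustration of the same idea, the fenced shell blocks in a README can be extracted and executed so CI fails whenever a documented command breaks. This is a hedged sketch, not necessarily the exact mechanism the starter repos use; the <code>README.md</code> path and the <code>sh</code> fence language are assumptions:</p>

```shell
# run_doc_commands FILE: execute every fenced sh block in FILE with `sh -e`,
# so the run fails if any documented command fails.
run_doc_commands() {
  # Build the literal triple-backtick delimiter from octal escapes,
  # so this snippet contains no backticks itself.
  fence=$(printf '\140\140\140')
  # Print only the lines between ```sh and ``` markers, then pipe them to sh.
  sed -n "/^${fence}sh\$/,/^${fence}\$/{/^${fence}/d;p;}" "$1" | sh -e
}
```

<p>Running <code>run_doc_commands README.md</code> in CI then gives the guarantee described above: the README instructions are the same commands CI actually exercises.</p>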
<h2 id="heading-whats-next">What’s next</h2>
<p>We are always improving the state of the art for getting started with Bazel. Look for an Aspect Build announcement in November at BazelCon 2025, where these starters will become a lot more useful!</p>
]]></content:encoded></item><item><title><![CDATA[The 'outside of Bazel' pattern]]></title><description><![CDATA[The Bazel build tool is fantastic for taking a well-defined dependency graph, which is really a tree coming up from the root (the artifact to be built or test to be run), and progressing through increasingly wide branches of direct and transitive dep...]]></description><link>https://blog.aspect.build/outside-of-bazel-pattern</link><guid isPermaLink="true">https://blog.aspect.build/outside-of-bazel-pattern</guid><category><![CDATA[bazel]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Tue, 26 Aug 2025 18:32:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756223473680/92d08107-8263-426b-ae0a-cc6faa6484c9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Bazel build tool is fantastic for taking a well-defined dependency graph, which is really a tree coming up from the root (the artifact to be built or test to be run), and progressing through increasingly wide branches of direct and transitive dependencies, all the way up to the leaves, which are source files that live in your repository, or possibly even third-party sources.</p>
<p>However, I often see developers struggling to model something that’s not a tree. Sometimes it’s a bush, or a chandelier. This is usually a sign that Bazel’s dependency + action graphs won’t work well, due to bad ergonomics and ruined incrementality.</p>
<p>Bazel dogma teaches us that all logic should be in Starlark (Bazel’s extension language) and described in <code>BUILD</code> files. I’ll show a few examples where this isn’t the “Right Tool for the Job”.</p>
<p>However, I’ll make a stronger case: Bazel is the inner core of a wider system. The core really only performs two jobs well:</p>
<ol>
<li><p>Inspect the dependency and action graphs (<code>aquery</code> and <code>cquery</code>)</p>
</li>
<li><p>Populate a subset of <code>bazel-bin</code> and <code>bazel-testlogs</code> (<code>build</code> and <code>test</code> - though the latter can really be thought of as “build text files containing all the test runner exit codes”)</p>
</li>
</ol>
<p>The wider system is a “task runner”. <code>Makefile</code> commonly serves this purpose, surrounding Bazel commands, but it has a trap: it overlaps with Bazel’s capabilities and makes it impossible to ensure you don’t have Build steps sneaking into the outer layer. At BazelCon this year, I’ll present a better task runner that lets you write tasks in Starlark. In the meantime, I’ll just illustrate the task runner layer with some Bash one-liners.</p>
<h2 id="heading-an-archive-of-the-whole-repo">An archive of the whole repo</h2>
<p>Our first example is common in Bazel rulesets. You want an archive that represents the whole source repository, so the shape is “take all the leaves and connect them directly to the root”.</p>
<p>This doesn’t work well in Bazel because packages are encapsulated, so a <code>glob(["**/*"])</code> doesn’t gather up all the sources in subpackages. Instead you need an awkward tree of <code>filegroup</code> targets in every package that are linked together.</p>
<p>The alternative is to use the <strong>Right Tool for the Job</strong>: <code>git archive</code>. It has some lesser-known configuration options you can set in the <code>.gitattributes</code> file (see <a target="_blank" href="https://git-scm.com/docs/git-archive#ATTRIBUTES">https://git-scm.com/docs/git-archive#ATTRIBUTES</a>) that help a bunch:</p>
<ul>
<li><p>Filtering out contents: instead of Bazel <code>glob(excludes=[])</code> you can use <code>export-ignore</code> patterns</p>
</li>
<li><p>Version Stamping the result: use <code>export-subst</code> to configure which file is stamped, then something like the following in that file:</p>
</li>
<li><pre><code class="lang-plaintext">_VERSION_PRIVATE = "$Format:%(describe:tags=true)$"

VERSION = "0.0.0" if _VERSION_PRIVATE.startswith("$Format") else _VERSION_PRIVATE.replace("v", "", 1)
</code></pre>
</li>
</ul>
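<p>Putting it together, a <code>.gitattributes</code> along these lines (the paths are illustrative) drives both behaviors and replaces the <code>filegroup</code> tree entirely:</p>

```plaintext
# .gitattributes -- paths are examples
docs/internal/** export-ignore
.github/**       export-ignore
version.bzl      export-subst
```

<p>Running <code>git archive --format=tar.gz --output=release.tar.gz HEAD</code> then produces the filtered, stamped archive with no Bazel involvement.</p>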
<p>In an earlier post I covered this in more detail, including real-life examples: <a target="_blank" href="https://blog.aspect.build/releasing-bazel-rulesets-rust">https://blog.aspect.build/releasing-bazel-rulesets-rust</a></p>
<h2 id="heading-all-the-something-targets">All the “Something” targets</h2>
<p>This one comes up a lot. Recently I’ve been working on distributing API documentation that is generated for code across the repo. You’ll recognize this pattern whenever it seems like <code>bazel query 'some expression' | xargs bazel build</code> is the model of what you want to ask Bazel for.</p>
<p>It’s tempting to model this as a “collector” target that’s just a long list of <code>deps</code>, and then wonder “how am I going to keep this list of deps up-to-date as we add more <code>something</code> targets?” You can’t, and shouldn’t.</p>
<p>The biggest reason to avoid this one is the shape of dependency graph you end up with. Developers will commonly trip over analyzing this target (just doing a query over the repo, or <code>load</code>ing that package will do it). Then Bazel goes from incremental “only do the minimal work for the targets I requested” to performing a whole-repo step that downloads gigabytes of irrelevant tooling.</p>
<p>This time, the <strong>Right Tool for the Job</strong> is a small workflow outside of Bazel. Create a query expression that matches the targets you care about, and selects the outputs you need from them. We do this for the <code>lint</code> command to select “all the report files”, for example. Here’s a full code listing for the API docgen task (note that we don’t literally use <code>xargs</code> because the length of the command line can exceed <code>ARG_MAX</code> and spill into multiple spawns):</p>
<pre><code class="lang-bash">docs=<span class="hljs-string">"<span class="hljs-subst">$(mktemp -d)</span>"</span>; targets=<span class="hljs-string">"<span class="hljs-subst">$(mktemp)</span>"</span>
bazel --output_base=<span class="hljs-string">"<span class="hljs-variable">$docs</span>"</span> query --output=label --output_file=<span class="hljs-string">"<span class="hljs-variable">$targets</span>"</span> <span class="hljs-string">'kind("starlark_doc_extract rule", //...)'</span>
bazel --output_base=<span class="hljs-string">"<span class="hljs-variable">$docs</span>"</span> build --target_pattern_file=<span class="hljs-string">"<span class="hljs-variable">$targets</span>"</span>
tar --create --auto-compress \
    --directory <span class="hljs-string">"<span class="hljs-subst">$(bazel --output_base=<span class="hljs-string">"<span class="hljs-variable">$docs</span>"</span> info bazel-bin)</span>"</span> \
    --file <span class="hljs-string">"<span class="hljs-variable">$GITHUB_WORKSPACE</span>/<span class="hljs-variable">${ARCHIVE%.tar.gz}</span>.docs.tar.gz"</span> .
</code></pre>
<h2 id="heading-compare-with-another-version-of-the-code">Compare with another version of the code</h2>
<p><a target="_blank" href="https://buf.build/docs/cli/build-systems/bazel/?h=bazel#breaking-change-detection">buf_breaking</a> is a good example. It wants to see the prior state of the output (say at the Base commit of a Pull Request), and compare with the current one. Bazel sees a single snapshot of the source code for a given build. I’ve seen some customers write a repository rule to clone a different commit of the repository, which seems very brittle to me. Checking in the prior output is too hard to automate.</p>
<p>The Right Tool for this Job is to find CI artifacts from the Base commit for a given change, and run a comparison/validation tool after the build runs. Then write an updated artifact from builds on the <code>main</code> branch for subsequent comparisons.</p>
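<p>As a rough sketch of that outer layer (everything here is hypothetical: the <code>fetch_artifact</code>/<code>store_artifact</code> helpers, the artifact naming, the CI variables, and the built image path all depend on your setup):</p>

```shell
# After the Bazel build, compare the freshly built proto image against the
# one published at the PR's base commit.
check_proto_breaking() {
  base_image=$(mktemp)
  if ! fetch_artifact "protos-${BASE_COMMIT}.binpb" "$base_image"; then
    echo "no base artifact found; skipping breaking-change check"
    return 0
  fi
  buf breaking bazel-bin/protos/image.binpb --against "$base_image"
}

# On the main branch, publish the current image for future comparisons.
publish_proto_image() {
  store_artifact bazel-bin/protos/image.binpb "protos-${COMMIT_SHA}.binpb"
}
```

<p>The key property: the comparison lives entirely in the task layer, so Bazel still sees a single snapshot of the source tree.</p>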
<h2 id="heading-gazelle"><code>gazelle</code></h2>
<p>BUILD file generation is in this category, because Bazel has always refused to allow dynamic dependency graphs based on file contents. The team argues that the “no-op” build has to remain fast.</p>
<p>But it doesn’t matter how fast their tool is if every user is then forced to wrap it in something slower. In this case, we always want something to run <strong>before</strong> Bazel’s loading phase: a step akin to how C++ builds run <code>autoconf</code> in a <code>./configure &amp;&amp; make</code> workflow.</p>
<p>Today engineers mostly have to discover their BUILD files are outdated (maybe there’s a compilation error about a missing dependency) and then do a manual <code>bazel run //:gazelle</code> - but if we had a Task Runner layer around Bazel, we’d just set up a step to run ahead of time.</p>
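<p>With a task runner, that manual step becomes automatic. A minimal sketch, assuming the conventional <code>//:gazelle</code> target name:</p>

```shell
# task_build: refresh BUILD files with Gazelle, then run the requested build.
# The //:gazelle target is a convention; adjust to your repository.
task_build() {
  bazel run //:gazelle || return 1   # regenerate BUILD files first
  bazel build "$@"                   # then build with up-to-date BUILD files
}
```

<p>Engineers then invoke <code>task_build //my/app</code> (or the task runner does it for them) and never hit a stale-BUILD-file error.</p>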
<p><em>(By the way, at BazelCon I’ll present two things: we can use Starlark to write the task that invokes Gazelle, and we can also extend Gazelle’s BUILD generation logic in Starlark!)</em></p>
<h2 id="heading-coverage">Coverage</h2>
<p>Bazel has a <code>coverage</code> command, so why is this example here? Well, in my experience, it was a mistake - this command wanted to live in the task runner layer, but Google never wrote one. The coverage system is bad mostly because of things like lcov transformation and merging, and how difficult it is to configure.</p>
<p>Coverage really should have been formulated as a Task Runner that:</p>
<ol>
<li><p>Builds the code under a Transition that enables an Instrumentation Configuration (pokes counters into the executable to track how many times a line or statement executes)</p>
</li>
<li><p>Runs the tests as usual. The coverage data files are configured to be additional outputs (using the <code>TEST_UNDECLARED_OUTPUTS</code> feature my intern added (hi John!))</p>
</li>
<li><p>After the tests are complete, collects the resulting data files. They might be LCOV format, or something else that needs to be transformed.</p>
</li>
<li><p>Presents the results, frequently by consulting the VCS so you can show incremental coverage (how many of the added/edited lines were tested)</p>
</li>
</ol>
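<p>Step 3 is plain file plumbing that needs no Bazel knowledge at all. A sketch, assuming the tests drop LCOV-format <code>coverage.dat</code> files under the test logs directory (as Bazel’s own <code>coverage</code> command does):</p>

```shell
# Gather every per-test coverage.dat under the testlogs tree into one file.
# Plain concatenation is a crude merge; `lcov --add-tracefile` merges properly.
collect_coverage() {
  testlogs="$1"   # e.g. "$(bazel info bazel-testlogs)"
  merged="$2"
  find "$testlogs" -name coverage.dat -exec cat {} + > "$merged"
}
```

<p>Step 4 would then hand the merged file to a presentation tool such as <code>genhtml</code>, or diff it against the VCS to compute incremental coverage.</p>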
<h2 id="heading-run">Run</h2>
<p>Even the <code>bazel run</code> command is a mistake in my opinion. It’s nice “syntax sugar” for building a single target, then spawning it as a subprocess. But it’s missing things like a <code>watch</code> mode, it took a separate <code>rules_multirun</code> project to get multiple servers to start, and it has a ton of bugs around how the working directory is selected.</p>
<p>If we had a Task Runner, it would clearly not be Bazel’s job to do these things.</p>
<h2 id="heading-print"><code>print</code></h2>
<p>Buildozer is a great tool for machine-editing BUILD files, but also for quickly inspecting their contents in a purely syntactic pass that doesn’t trigger Bazel’s fetching, Loading, and Analysis phases. With a Task Runner layer, we can easily expose these Bazel-adjacent tools through the same interface engineers use to request build outputs, instead of making them install and learn about a variety of other tools. So we’d add a <code>print</code> task that doesn’t even invoke Bazel at all!</p>
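<p>Such a task could be a thin wrapper; a sketch (the task name is made up, while <code>print</code> is buildozer’s own command):</p>

```shell
# print_attr <attr> <target>: read an attribute straight out of the BUILD file
# using buildozer's syntactic 'print' command -- no loading or analysis phases.
print_attr() {
  buildozer "print $1" "$2"
}
```

<p>So <code>print_attr srcs //mypkg:mylib</code> answers “what are the srcs?” without starting Bazel at all.</p>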
<h2 id="heading-next">Next</h2>
<p>To learn more about Aspect Build's Bazel developer workflow platform and professional services, visit <a target="_blank" href="http://aspect.build">aspect.build</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Pnpm v10 and rules_js: Better Alignment and Improved Build Determinism]]></title><description><![CDATA[Pnpm v10 was released earlier this year. After some initial issues, release 10.11.1 now fully passes regression tests with rules_js.
This release brings notable improvements in hermeticity, performance, and reliability, aligning pnpm more closely with...]]></description><link>https://blog.aspect.build/pnpm-v10-and-rulesjs</link><guid isPermaLink="true">https://blog.aspect.build/pnpm-v10-and-rulesjs</guid><category><![CDATA[bazel]]></category><category><![CDATA[bazelbuild]]></category><dc:creator><![CDATA[Jason Bedard]]></dc:creator><pubDate>Tue, 22 Jul 2025 17:34:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753195821709/302d943c-16ce-4d77-a5ce-791c04a44796.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Pnpm v10</strong> was released <a target="_blank" href="https://github.com/pnpm/pnpm/releases/tag/v10.0.0">earlier this year</a>. After some <a target="_blank" href="https://github.com/pnpm/pnpm/issues/9531">initial issues</a>, release <a target="_blank" href="https://github.com/pnpm/pnpm/releases/tag/v10.11.1">10.11.1</a> now fully passes regression tests with <a target="_blank" href="https://github.com/aspect-build/rules_js">rules_js</a>.</p>
<p>This release brings notable improvements in <strong>hermeticity</strong>, <strong>performance</strong>, and <strong>reliability</strong>, aligning <code>pnpm</code> more closely with the expectations and requirements of Bazel-based workflows that use <code>rules_js</code>.</p>
<h3 id="heading-only-built-dependencies">(Only) Built Dependencies</h3>
<p>Since pnpm v9, rules_js has required packages with build steps to be manually declared in the <code>pnpm.onlyBuiltDependencies</code> field of <code>package.json</code>. Now pnpm v10 has the exact same requirement; see pnpm <a target="_blank" href="https://github.com/pnpm/pnpm/pull/8897">#8897</a> (as well as <a target="_blank" href="https://github.com/pnpm/pnpm/pull/7710">#7710</a> and <a target="_blank" href="https://github.com/pnpm/pnpm/pull/7715">#7715</a> for more background).</p>
<p>Explicitly declaring which packages require a build step improves determinism and hermeticity in both pnpm and rules_js. This allows <code>rules_js</code> to model Bazel build actions without first needing to download and inspect package contents.</p>
<p>This change was followed up with <code>pnpm.neverBuiltDependencies</code> (<a target="_blank" href="https://github.com/pnpm/pnpm/pull/8958">#8958</a>) in pnpm 10.1 to suppress <code>pnpm install</code> warnings about packages that contain install logic without being listed in <code>pnpm.onlyBuiltDependencies</code>.</p>
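<p>In <code>package.json</code> terms, the two fields look like this (the package names are just common examples of packages with install scripts):</p>

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild", "sharp"],
    "neverBuiltDependencies": ["fsevents"]
  }
}
```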
<h3 id="heading-secure-sha256-hashing">Secure SHA256 Hashing</h3>
<p>Pnpm v10 has switched to more secure <strong>SHA256</strong> hashing of content in the <code>pnpm-lock.yaml</code> file, see <a target="_blank" href="https://github.com/pnpm/pnpm/pull/8530">#8530</a>. Bazel and rules_js already use sha256/512 for integrity checks, and rules_js will continue to align with pnpm lockfiles where pnpm v10 has upgraded to sha256.</p>
<h3 id="heading-configuration-changes">Configuration changes</h3>
<p>Pnpm v10 has made many other configuration-related changes that do not directly affect integration with rules_js, but may affect your experience when upgrading, such as:</p>
<ul>
<li><p>default hoisting has changed <a target="_blank" href="https://github.com/pnpm/pnpm/issues/8378">#8378</a></p>
</li>
<li><p><code>NODE_ENV</code> is now ignored on install <a target="_blank" href="https://github.com/pnpm/pnpm/issues/8827">#8827</a></p>
</li>
<li><p>the <code>@yarnpkg/extensions</code> package was upgraded, which may alter resolved dependencies in edge cases</p>
</li>
</ul>
<h3 id="heading-catalogs">Catalogs</h3>
<p>While actually a pnpm v9.5 feature, <a target="_blank" href="https://pnpm.io/catalogs">pnpm catalogs</a> are worth mentioning again. Catalogs provide a way to stop repeating version numbers throughout your <code>package.json</code> files and declare a single version for a package in a single location, while keeping fine-grained dependencies in your projects.</p>
<p>Catalogs are especially useful in large monorepos where Bazel and rules_js are normally used.</p>
<p>Catalogs are resolved by pnpm when generating the <code>pnpm-lock.yaml</code> file and do not change the underlying lockfile format, so rules_js support comes for free.</p>
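<p>For reference, a catalog is declared once in <code>pnpm-workspace.yaml</code> (the versions here are illustrative):</p>

```yaml
# pnpm-workspace.yaml
catalog:
  react: ^18.3.1
  typescript: ^5.5.4
```

<p>Each project’s <code>package.json</code> then depends on <code>"react": "catalog:"</code>, and pnpm substitutes the catalog version at resolution time.</p>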
<h3 id="heading-final-thoughts">Final Thoughts</h3>
<p>Like pnpm v9 last year, pnpm v10 continues the move toward more deterministic builds that align with Bazel and rules_js, further proving pnpm was the right choice for the Bazel and rules_js ecosystems.</p>
<p>See the <a target="_blank" href="https://github.com/pnpm/pnpm/releases/tag/v10.0.0">pnpm v10 release</a> and <a target="_blank" href="https://github.com/pnpm/pnpm/releases">followup releases</a> for a full list of changes.</p>
]]></content:encoded></item><item><title><![CDATA[Bazel technique for Continuous Delivery]]></title><description><![CDATA[The term "CD" is ambiguous. Some engineers use it to mean "Continuous Deployment", in which changes are automatically released, e.g. into a "dev" environment.
Aspect recommends that Continuous Delivery is modeled as the step of the pipeline where bui...]]></description><link>https://blog.aspect.build/bazel-technique-for-continuous-delivery</link><guid isPermaLink="true">https://blog.aspect.build/bazel-technique-for-continuous-delivery</guid><category><![CDATA[ci-cd]]></category><category><![CDATA[bazel]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Thu, 03 Jul 2025 18:04:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/R_UAaSoAvbo/upload/a26fd83dcc9da2ede6ed1091e5bcc6e2.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The term "CD" is ambiguous. Some engineers use it to mean "Continuous Deployment", in which changes are automatically released, e.g. into a "dev" environment.</p>
<p>Aspect recommends that Continuous Delivery is modeled as the step of the pipeline where built artifacts are uploaded from the build machine to a well-known repository location. This could be a container image registry like Docker Hub, a blob store like AWS S3, or even a database.</p>
<p>This makes a clear separation of responsibilities between CI, CD and Deployment:</p>
<ul>
<li><p>The CI pipeline runs all the tests to confirm the repo is in a shippable state</p>
</li>
<li><p>The CD pipeline should then upload only the artifacts that are:</p>
<ul>
<li><p>configured with <code>BUILD.bazel</code> files. Product engineers don't need to worry about setting up CD</p>
</li>
<li><p>green: it can prove that all relevant tests are passing</p>
</li>
<li><p>changed from a previous build</p>
</li>
</ul>
</li>
<li><p>The deployment system</p>
<ul>
<li>locates and "promotes" artifacts to the next environment, such as "dev", "staging", or "prod".</li>
</ul>
</li>
</ul>
<h2 id="heading-build-vs-buy">Build vs. Buy</h2>
<p>The recommendations in this guide can be applied in two ways:</p>
<ul>
<li><p>DevInfra teams may wish to implement and operate a custom system for their organization, or</p>
</li>
<li><p>Use <a target="_blank" href="https://aspect.build/workflows">Aspect Workflows</a>, which provides this feature out-of-the-box.</p>
</li>
</ul>
<h2 id="heading-what-is-deliverable">What is "deliverable"</h2>
<p>A deliverable artifact is one that contains both the binary or files to push, as well as the "pushing" logic that knows how to perform the upload. It might also send a message to the deployment system to trigger an auto-deployment of the new artifact.</p>
<p>In Bazel terms, this means a deliverable should be an executable program that can be <code>bazel run</code>.</p>
<h3 id="heading-container-images">Container Images</h3>
<p>The rules_oci <a target="_blank" href="https://github.com/bazel-contrib/rules_oci/blob/main/docs/push.md#oci_push"><code>oci_push</code></a> or rules_docker <a target="_blank" href="https://github.com/bazelbuild/rules_docker/blob/master/docs/container.md#container_push"><code>container_push</code></a> rules can both be executed with <code>bazel run</code> to push a Docker image to a registry like Docker Hub.</p>
<p>Therefore these rules are considered "deliverable".</p>
<h3 id="heading-git-push">Git Push</h3>
<p>Sometimes artifacts belong in a separate code repository. For example, an SDK built from the API definitions in a monorepo needs to be published.</p>
<p>See an example <code>git_push</code> executable <a target="_blank" href="https://github.com/aspect-build/bazel-examples/tree/main/git_push">in this repository</a>.</p>
<h3 id="heading-s3-upload">S3 upload</h3>
<p>See <a target="_blank" href="https://github.com/aspect-build/rules_aws/tree/main/examples/release_to_s3">s3_sync</a>.</p>
<h2 id="heading-which-targets-to-deliver">Which targets to deliver</h2>
<p>A <code>bazel query</code> expression is the most convenient way to locate deliverable targets. Users may choose a tagging scheme for their workspace (i.e. "all targets with <code>tags = ['artifact']</code>"), or deliver well-known rule kinds (i.e. <code>oci_push</code>), or both.</p>
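<p>As a sketch, both schemes can be combined in one query (the <code>deliverable</code> tag name is a convention you would choose yourself):</p>

```shell
# List every target tagged "deliverable", plus every oci_push target.
list_deliverables() {
  bazel query --output=label \
    'attr(tags, "\bdeliverable\b", //...) union kind(oci_push, //...)'
}
```

<p>The resulting label list is then fed to the delivery machinery described in the following sections.</p>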
<h2 id="heading-which-changes-to-deliver">Which changes to deliver</h2>
<p>To optimize time and money, it's best to deliver only "changed" targets. This avoids wasted time and resources uploading the same artifact repeatedly. It also means that release engineers won't have to sort through a massive list of duplicates when choosing a release.</p>
<p>There are two approaches for choosing "changed" targets:</p>
<ol>
<li><p>Predict the changes based on a version control delta. For example you could <code>git diff</code> between the hash being delivered and the "prior successful" delivery hash, then use a tool like <a target="_blank" href="https://github.com/Tinder/bazel-diff">bazel-diff</a> or <a target="_blank" href="https://github.com/bazel-contrib/target-determinator">target-determinator</a> to produce a list of targets that might be affected by those changes.</p>
</li>
<li><p>Determine empirically based on what is actually different. This requires determinism, so it must only use <a target="_blank" href="https://github.com/bazel-contrib/bazel-lib/blob/main/docs/stamping.md">unstamped</a> build results (<code>--nostamp</code>). In most cases a green CI run just completed, so these unstamped outputs are easily available.</p>
</li>
</ol>
<p>Aspect recommends following the second approach because the first has some downsides:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/Tinder/bazel-diff">bazel-diff</a> is incorrect and will sometimes miss affected targets, so they aren't delivered.</p>
</li>
<li><p><a target="_blank" href="https://github.com/bazel-contrib/target-determinator">target-determinator</a> is slow and may hurt the "service level indicator" of time between pushing a hotfix and being able to release that fix.</p>
</li>
<li><p>It will over-deliver, because sometimes a source change doesn't actually factor into whether the release binary changes, such as for a comment-only change.</p>
</li>
</ul>
<p>The rest of this section provides more details about the second approach (determine empirically).</p>
<p>To determine whether Workflows should deliver that executable target on a particular commit, it is first hashed using the <a target="_blank" href="https://docs.aspect.build/cli/commands/aspect_outputs/"><code>aspect outputs</code> command</a> with a special pseudo-mnemonic "ExecutableHash", for example:</p>
<pre><code class="lang-sh">$ aspect outputs <span class="hljs-string">'attr("tags", "\bdeliverable\b", //...)'</span> ExecutableHash
//cli:release h1:cj8OUC3l3fIr3Zxnffk6y7gukLOJmiWRCAQoqadg66Y=
//workflows/rosetta:release h1:kjHVajw+Nta2kh3Epcd32DkZxTE1NHA8b5N7hCNFNSM=
</code></pre>
<p>You need a lookup database to store previously delivered hashes. If the hash value matches one previously seen, then skip delivery of that target.</p>
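<p>The database can be very simple. A sketch using a flat file as the store (a real system might use S3 or a key-value service instead):</p>

```shell
# Skip delivery when a target's ExecutableHash was already delivered.
already_delivered() {  # already_delivered <target> <hash> <db-file>
  grep -qxF "$1 $2" "$3"
}
record_delivery() {    # record_delivery <target> <hash> <db-file>
  echo "$1 $2" >> "$3"
}
```

<p>After a successful run of the pusher, <code>record_delivery</code> marks the hash as seen so the next pipeline run skips that target.</p>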
<h3 id="heading-debugging-changes-to-deliver">Debugging changes to deliver</h3>
<p>You can run the <code>aspect outputs</code> command locally to understand whether a given change to a source file results in a new executable. Sometimes the result may be surprising. For example, if a comment in a <code>.go</code> source file is changed, the compiler produces the same <code>.a</code> file as a result, so the hash seen on the uploader executable is unchanged.</p>
<p>Another scenario that won't change the executable is when some production configuration is changed. For example, you may use Helm charts to deploy to Kubernetes. If these aren't included inside the image, then changes to these files won't cause a new delivery.</p>
<h2 id="heading-perform-the-delivery">Perform the delivery</h2>
<p>Run each deliverable target with <a target="_blank" href="https://registry.bazel.build/modules/bazel_lib#lib-stamping-bzl">stamping</a> enabled. You can do this in a script that reads the targets from a manifest file, essentially <code>cat $delivery_manifest | xargs -n1 bazel run --stamp</code>.</p>
]]></content:encoded></item><item><title><![CDATA[Device management: tools on your developers PATH]]></title><description><![CDATA[“Device Management”, or MDM, is that thing which forces your work computer to have security software installed. It has the ability to push tools to developer machines too - however it’s owned by the security team at your company. While it would be co...]]></description><link>https://blog.aspect.build/bazel-devenv</link><guid isPermaLink="true">https://blog.aspect.build/bazel-devenv</guid><category><![CDATA[bazel]]></category><category><![CDATA[mdm]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Tue, 13 May 2025 20:08:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746814801998/d0a89415-feaa-4e5e-bd7d-c5f4322220df.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>“Device Management”, or MDM, is that thing which forces your work computer to have security software installed. It has the ability to push tools to developer machines too - however it’s owned by the security team at your company. While it would be convenient, I’ve found it’s difficult for the Developer Platform team to use that to distribute the "canonical developer environment”.</p>
<p>Some folks use a devcontainer or VDI (Virtual Desktop Infrastructure) to describe the tooling you need installed, but that’s heavy and slow. If we’re using Bazel, it already has features to give a hermetic environment, right?</p>
<p>A year ago, I wrote an article describing how to get tools on your developers local machine using Bazel, and today I have an update.</p>
<p>In the technique from my original post (<a target="_blank" href="https://blog.aspect.build/run-tools-installed-by-bazel">https://blog.aspect.build/run-tools-installed-by-bazel</a>), users have to change their behavior, typing <code>./tools/my-tool</code> rather than just <code>my-tool</code>. Retraining developers to have a different command under their fingers is hard, especially for things they run all the time.</p>
<p>Shortly after I wrote that post, Fabian Meumertzheim created <a target="_blank" href="https://github.com/buildbuddy-io/bazel_env.bzl">https://github.com/buildbuddy-io/bazel_env.bzl</a>. This is an alternative technique I’ll write about today. It puts the tools on the <code>$PATH</code> instead, fixing the ergonomic issue from the first technique — however there are always trade-offs! This one requires engineers to manually install the <code>direnv</code> tool before they are set up.</p>
<h2 id="heading-how-direnv-works">How direnv works</h2>
<p><strong>direnv</strong> is a command-line tool that automatically sets and unsets environment variables when you <code>cd</code> into or out of a directory. It’s especially useful for managing project-specific environment variables, such as secrets, configuration settings, or language versions (like Python or Node.js versions).</p>
<ul>
<li><p>You place a <code>.envrc</code> file in your project directory.</p>
</li>
<li><p>This <code>.envrc</code> file contains shell commands to export environment variables or run setup scripts.</p>
</li>
<li><p>When you enter the directory, <code>direnv</code> loads the <code>.envrc</code> file and applies the environment changes.</p>
</li>
<li><p>When you leave the directory, it automatically reverts those changes.</p>
</li>
</ul>
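<p>A hand-written <code>.envrc</code> can be as small as this (the variable and path are made up; the <code>bazel_env</code> target described in the next section generates a more complete one for you):</p>

```sh
# .envrc -- loaded by direnv on `cd` into the project
export API_ENDPOINT=http://localhost:8080  # example project-specific variable
PATH_add tools/bin                         # direnv stdlib: prepend dir to PATH
```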
<p>Here’s how it looks when I enter Aspect’s monorepo (named “silo”):</p>
<pre><code class="lang-plaintext">alexeagle@aspect-build ~ % cd Projects/silo
direnv: loading ~/Projects/silo/.envrc
direnv: export ~PATH
</code></pre>
<p>Installing <code>direnv</code> isn’t just a matter of getting the program on your machine. It also needs to hook into your shell (it supports <code>bash</code>, <code>zsh</code> and many others). And finally, you must explicitly allow each <code>.envrc</code> to be trusted.</p>
<p>To follow this pattern, you’ll need to instruct your developers to install this first tool manually. Fortunately they’ll get a reminder of the instructions in the next step.</p>
<h2 id="heading-bazelenv-creates-a-envrc">bazel_env creates a .envrc</h2>
<p>After installing <code>bazel_env.bzl</code>, you’ll have a runnable Bazel target, typically <code>bazel run //:_bazel_env</code> or <code>bazel run //tools:bazel_env</code>.</p>
<p>The <code>bazel_env</code> target is defined with a dictionary that maps a tool name to put on the PATH, to some other target that provides it. What kinds of targets can those be? Take a look at the example: <a target="_blank" href="https://github.com/buildbuddy-io/bazel_env.bzl/blob/main/examples/BUILD.bazel">https://github.com/buildbuddy-io/bazel_env.bzl/blob/main/examples/BUILD.bazel</a></p>
<ol>
<li><p>Binary targets for programs you author yourself in the monorepo</p>
</li>
<li><p>Tools provided by a toolchain, like <code>go</code>, <code>node</code>, <code>pnpm</code>, <code>cargo</code>, etc</p>
</li>
<li><p>With <a target="_blank" href="https://github.com/theoremlp/rules_multitool">https://github.com/theoremlp/rules_multitool</a> you can run <code>multitool</code> to update a <code>tools.lock.json</code> file, and all of these tools are installed.</p>
</li>
<li><p>CLI utilities distributed by a package manager, like console scripts from PyPI, <code>bin</code> entries from the <code>package.json</code> of NPM packages, Go utilities (see <code>scaffold</code> example here), and so on.</p>
</li>
</ol>
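<p>The target definition itself is a small dictionary. A sketch, loosely following the upstream example (check the <code>bazel_env.bzl</code> docs for the exact attribute names in your version; the labels here are illustrative):</p>

```python
# tools/BUILD.bazel (sketch)
load("@bazel_env.bzl", "bazel_env")

bazel_env(
    name = "bazel_env",
    tools = {
        # name on the PATH -> target that provides the tool
        "go": "@rules_go//go",
        "node": "@nodejs_toolchains//:resolved_toolchain",
        "buildifier": "@buildifier_prebuilt//:buildifier",
    },
)
```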
<pre><code class="lang-plaintext">% bazel run //tools:bazel_env
INFO: Analyzed target //tools:bazel_env (580 packages loaded, 72963 targets configured).
INFO: Found 1 target...
Target //tools:bazel_env up-to-date:
  bazel-bin/tools/bazel_env_all_tools
INFO: Elapsed time: 9.399s, Critical Path: 0.45s
INFO: Running command line: bazel-bin/tools/bazel_env.sh

====== bazel_env ======

✅ direnv is installed
✅ direnv added bazel-out/bazel_env-opt/bin/tools/bazel_env/bin to PATH

Tools available in PATH:
  * aws:             @aws
  * pnpm:            @pnpm
  * gofumpt:         @@rules_multitool~~multitool~multitool//tools/gofumpt:gofumpt
  * jsonnetfmt:      @@rules_multitool~~multitool~multitool//tools/jsonnetfmt:jsonnetfmt
  * shfmt:           @@rules_multitool~~multitool~multitool//tools/shfmt:shfmt
  * terraform:       //tools:terraform
  * yamlfmt:         @@rules_multitool~~multitool~multitool//tools/yamlfmt:yamlfmt
  * ruff:            @@rules_multitool~~multitool~multitool//tools/ruff:ruff
  * shellcheck:      @@rules_multitool~~multitool~multitool//tools/shellcheck:shellcheck
  * buf:             @@rules_multitool~~multitool~multitool//tools/buf:buf
  * buildozer:       @@rules_multitool~~multitool~multitool//tools/buildozer:buildozer
  * docker-compose:  @@rules_multitool~~multitool~multitool//tools/docker-compose:docker-compose
  * diesel-cli:      @@rules_multitool~~multitool~multitool//tools/diesel-cli:diesel-cli
  * etcdctl:         @@rules_multitool~~multitool~multitool//tools/etcdctl:etcdctl
  * grpcurl:         @@rules_multitool~~multitool~multitool//tools/grpcurl:grpcurl
  * ibazel:          @@rules_multitool~~multitool~multitool//tools/ibazel:ibazel
  * multitool:       @@rules_multitool~~multitool~multitool//tools/multitool:multitool
  * otel-cli:        @@rules_multitool~~multitool~multitool//tools/otel-cli:otel-cli
  * otelcol-contrib: @@rules_multitool~~multitool~multitool//tools/otelcol-contrib:otelcol-contrib
  * promtool:        @@rules_multitool~~multitool~multitool//tools/promtool:promtool
  * pyrra:           @@rules_multitool~~multitool~multitool//tools/pyrra:pyrra
  * tflint:          @@rules_multitool~~multitool~multitool//tools/tflint:tflint
  * tfsec:           @@rules_multitool~~multitool~multitool//tools/tfsec:tfsec
  * buildifier:      @buildifier_prebuilt//:buildifier
  * scaffold:        @com_github_hay_kot_scaffold//:scaffold
  * node:            $(NODE_PATH)
  * cargo:           $(CARGO)
  * rustfmt:         $(RUSTFMT)

Toolchains available at stable relative paths:
  * nodejs: bazel-out/bazel_env-opt/bin/tools/bazel_env/toolchains/nodejs
  * rust:   bazel-out/bazel_env-opt/bin/tools/bazel_env/toolchains/rust

direnv: loading ~/Projects/silo/.envrc
direnv: export ~PATH
</code></pre>
<h2 id="heading-try-it-out">Try it out!</h2>
<p>This is now the pattern we stamp out in new Bazel repositories generated by running our CLI, <code>aspect init</code>.</p>
<p>See <a target="_blank" href="https://docs.aspect.build/guides/getting-started/">https://docs.aspect.build/guides/getting-started/</a></p>
<h2 id="heading-guardrails">Guardrails</h2>
<p>There are a few things that can go wrong, which are worth pointing out.</p>
<p>You’ll note that the tools are actually installed under a named output folder, <code>bazel-out/bazel_env-opt</code> - what if the user runs a <code>bazel clean</code>? This case is well handled, since <code>direnv</code> is able to report errors in the <code>.envrc</code> file with a custom message:</p>
<pre><code class="lang-plaintext">alexeagle@aspect-build silo % bazel clean
INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.
direnv: loading ~/Projects/silo/.envrc
direnv: ERROR[bazel_env.bzl]: Run 'bazel run //tools:bazel_env' to regenerate bazel-out/bazel_env-opt/bin/tools/bazel_env/bin
direnv: export ~PATH
</code></pre>
<p>Any time the set of tools changes, every developer has to run the target again to refresh their PATH.</p>
<p>It’s also not lazy (at least not yet). When you run <code>bazel_env</code> it needs to fetch all the tools, even those you never plan to run. See <a target="_blank" href="https://github.com/buildbuddy-io/bazel_env.bzl/issues/14">https://github.com/buildbuddy-io/bazel_env.bzl/issues/14</a></p>
<p>As a workaround, you can have multiple <code>bazel_env</code> targets, but users have to choose the right one for the work they intend to do.</p>
]]></content:encoded></item><item><title><![CDATA[Never Compile protoc Again]]></title><description><![CDATA[Background
Protocol Buffers needs a code generation step, turning your .proto files into “stub” code in your language of choice which provides marshaling/unmarshaling of proto messages (binary or JSON typically) into language-idiomatic data structure...]]></description><link>https://blog.aspect.build/never-compile-protoc-again</link><guid isPermaLink="true">https://blog.aspect.build/never-compile-protoc-again</guid><category><![CDATA[bazel]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Mon, 28 Apr 2025 15:41:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/BthSqlD2Cso/upload/72aeeb64ffcad65cf0144809bef4a786.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-background">Background</h2>
<p>Protocol Buffers needs a code generation step, turning your <code>.proto</code> files into “stub” code in your language of choice which provides marshaling/unmarshaling of proto messages (binary or JSON typically) into language-idiomatic data structures. If you use <code>services</code>, then you also want “stub” code to implement clients and/or servers that use gRPC.</p>
<p>This code generation needs a “compiler” for the protobuf language, and the idiomatic compiler provided by the Protobuf team at Google is <code>protoc</code>. They provide binary releases of this tool on <a target="_blank" href="https://github.com/protocolbuffers/protobuf/releases">https://github.com/protocolbuffers/protobuf/releases</a>, and that’s where most protobuf users get it (often via a plugin for their build system, such as <a target="_blank" href="https://github.com/google/protobuf-gradle-plugin">https://github.com/google/protobuf-gradle-plugin</a>).</p>
<p>However under Bazel, the common means of getting the tool is to compile it yourself from <a target="_blank" href="https://github.com/protocolbuffers/protobuf/blob/5014f131760523702281f3e05c1e539f08f450c9/BUILD.bazel#L348-L354">this <code>cc_binary</code> target</a>. This is what Googlers expect, having lived in an extreme monorepo where everything is vendored, and security is paramount. See an <a target="_blank" href="https://github.com/bazelbuild/bazel-central-registry/pull/3843#issuecomment-2675948147">interesting comment</a> I got in a discussion:</p>
<blockquote>
<p>Interesting. I wasn't aware there was a contingent of Bazel rules maintainers who are opposed to google3-style source-only builds.</p>
</blockquote>
<p>Yes! I’m one of those maintainers. That’s because most of us don’t have perfect caching. Ideally you don’t notice <code>protoc</code> compilation other than “the first time”. In practice, I wait for <code>protoc</code> to build all the time, and it spams my CI log with <code>gcc</code> warnings that are irrelevant to me. And on some machines, the compilation fails. If you haven’t listened to the Aspect Insights podcast, this topic was all the way back at <a target="_blank" href="https://www.youtube.com/watch?v=s0i_Ra_mG9U&amp;list=PLLU28e_DRwdtpojOqWM5UeFyxad7m9gCF&amp;index=12">episode 1</a>.</p>
<h2 id="heading-avoiding-it">Avoiding it</h2>
<p>The exciting news is that many Bazel projects can stop compiling <code>protoc</code> TODAY.</p>
<p>Aspect engineer Sahin worked with the Bazel team long ago to help <a target="_blank" href="https://github.com/bazelbuild/rules_proto/discussions/213">introduce a new flag</a>, allowing the <code>protoc</code> binary to be registered as a <a target="_blank" href="https://bazel.build/extending/toolchains">toolchain</a>. This gives us the flexibility of registering something other than the <code>cc_binary</code> target. In fact, you’re free to register <code>/bin/false</code> as your protocol buffer compiler, if you want something very fast and very broken.</p>
<p>I then wrote <a target="_blank" href="https://github.com/aspect-build/toolchains_protoc">https://github.com/aspect-build/toolchains_protoc</a> which gives the convenience of downloading the official binary release, and registering that. See the docs on that repo, and especially the <code>examples</code> folder.</p>
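<p>For a sense of what adoption looks like, the <code>MODULE.bazel</code> change is roughly the following. Treat this as an illustrative sketch — the version numbers and the exact extension API here are assumptions; consult the <code>toolchains_protoc</code> docs and its <code>examples</code> folder for the current form:</p>
<pre><code class="lang-plaintext"># MODULE.bazel — illustrative sketch, not authoritative
bazel_dep(name = "toolchains_protoc", version = "0.2.4")

# Register the prebuilt official protoc release instead of compiling it
protoc = use_extension("@toolchains_protoc//protoc:extensions.bzl", "protoc")
protoc.toolchain(
    google_protobuf = "com_google_protobuf",
    version = "v26.0",
)
</code></pre>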
<p>Unfortunately we haven’t made as much progress as we’d like, because there’s an ecosystem-wide change required. Everywhere that references the protobuf compiler needs to use the toolchain to resolve it. As of this writing, <a target="_blank" href="https://github.com/protocolbuffers/protobuf/pull/19679">https://github.com/protocolbuffers/protobuf/pull/19679</a> is pending as a fix for the worst culprit.</p>
<p>Then there is a long, long tail of issues. For example, I was waiting 10 minutes to cut releases of our Bazel OSS rulesets, just because the stardoc documentation generator is a <code>java_binary</code> target with a hard-coded transitive dep on the <code>cc_binary(name = "protoc")</code> target and the CD step wasn’t getting a cache hit for it. So I worked around that with another prebuilt: <a target="_blank" href="https://github.com/alexeagle/stardoc-prebuilt">https://github.com/alexeagle/stardoc-prebuilt</a> and then applied a small patch to our rulesets to override the <code>renderer</code> attribute, for example: <a target="_blank" href="https://github.com/aspect-build/rules_js/pull/2156">https://github.com/aspect-build/rules_js/pull/2156</a>. It’s unfortunate to have to work around the problem, but in this case it’s worth it to fix the slowest step in our releases.</p>
<h2 id="heading-making-it-official">Making it official</h2>
<p>We’re hoping to upstream our <code>toolchains_protoc</code> to the protobuf repository, so that the default Bazel experience will be fast. Of course, you can still choose to register the <code>cc_binary</code> target as your proto compiler toolchain, if you want to build it from source.</p>
]]></content:encoded></item><item><title><![CDATA[Containerizing JavaScript Applications with Bazel]]></title><description><![CDATA[Containerizing JavaScript applications is controversial because they come in so many flavors. They could be bundled into a single file or the original layout of the source tree could be kept intact. There is not a one-size-fits-all approach to creati...]]></description><link>https://blog.aspect.build/containerizing-javascript-applications-with-bazel</link><guid isPermaLink="true">https://blog.aspect.build/containerizing-javascript-applications-with-bazel</guid><category><![CDATA[Docker]]></category><category><![CDATA[OCI]]></category><category><![CDATA[bazel]]></category><category><![CDATA[containers]]></category><category><![CDATA[rules_js]]></category><dc:creator><![CDATA[Şahin Yort]]></dc:creator><pubDate>Mon, 21 Apr 2025 16:32:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/1cqIcrWFQBI/upload/dccef660c7f9bb75b1cfc28e90844519.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Containerizing JavaScript applications is controversial because they come in so many flavors. They could be bundled into a single file or the original layout of the source tree could be kept intact. There is not a one-size-fits-all approach to creating a container out of your JavaScript application.</p>
<p>With Bazel, this story is different. All <a target="_blank" href="https://bazel.build/extending/rules#executable_rules_and_test_rules">*_binary targets</a> have a well-known directory structure called Runfiles, which makes it insanely easy to decide what structure the JavaScript container will have. You just take the Runfiles directory tree, put it into a <a target="_blank" href="https://en.wikipedia.org/wiki/Tar_(computing)">tar</a> archive, add it to your <code>oci_image</code>, and call it a day, right?</p>
<p>Though this is a fine approach for small applications (&lt;50MB), it does not scale well beyond a few gigabytes because of increased build and deploy times. You could take a nap waiting for the whole layer to be uploaded and redeployed after a single line change.</p>
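<p>The naive single-layer approach just described can be sketched with <code>rules_pkg</code> and <code>rules_oci</code> (target and base-image names here are illustrative, not from a real repository):</p>
<pre><code class="lang-plaintext">load("@rules_pkg//pkg:tar.bzl", "pkg_tar")
load("@rules_oci//oci:defs.bzl", "oci_image")

# Tar up the binary together with its entire runfiles tree
pkg_tar(
    name = "app_layer",
    srcs = [":bin"],
    include_runfiles = True,
)

# One big layer: any change re-uploads everything
oci_image(
    name = "image",
    base = "@debian_base",
    tars = [":app_layer"],
    entrypoint = ["/app_layer/bin"],
)
</code></pre>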
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If you don’t know about how the default Docker storage OverlayFS works, give <a target="_self" href="https://jvns.ca/blog/2019/11/18/how-containers-work--overlayfs/">this fun page</a> a try.</div>
</div>

<p>The <a target="_blank" href="https://github.com/aspect-build/rules_js/blob/main/docs/js_image_layer.md"><code>js_image_layer</code> rule</a> from <a target="_blank" href="https://github.com/aspect-build/rules_js/tree/main">rules_js</a> keeps you moving. It is a packaging rule that efficiently creates JavaScript containers using the Runfiles structure.</p>
<p>In the early days, <code>js_image_layer</code> created two layers for the whole container. The <code>node_modules</code> layer contained everything that changed infrequently, such as npm dependencies and the Node.js interpreter (yes, <code>rules_js</code> ships a hermetic Node.js interpreter), and the app layer contained first-party JavaScript code.</p>
<p>This worked fairly well because a single-line change did not cause <code>node</code> and <code>node_modules</code> to be uploaded and redeployed. However, we realized that it was not good enough: the <code>node</code> binary rarely changes, while <code>node_modules</code> changes much more frequently, so it is not economical to bundle them together in a single layer.</p>
<p>That’s exactly what prompted the <code>rules_js</code> maintainers (us) to add more layers, ordered from infrequently changed to frequently changed.</p>
<p>In version 2.0 of <code>rules_js</code>, <code>js_image_layer</code> created more layers for better build and deploy time performance. It worked well for most JavaScript containers, but there is no one-size-fits-all approach. People reached the limit of that optimization too.</p>
<p>$$\[ \begin{array}{ccccc} \text{node} &amp; \text{package_store_3p} &amp; \text{package_store_1p} &amp; \text{node_modules} &amp; \text{app} \\ \uparrow &amp; \uparrow &amp; \uparrow &amp; \uparrow &amp; \uparrow \\ \text{interpreter} &amp; \text{3rd party npm} &amp; \text{1st party npm} &amp; \text{symlinks} &amp; \text{application code} \\ \end{array} \]$$</p><p>What happens if you have 150,000 files from 3rd party npm packages? Changing one npm package led to the whole layer being rebuilt and sent over the network, causing unwelcome flashbacks to the early days of <code>js_image_layer</code>. The problem was even worse if you had npm packages that shipped with prebuilt binaries or .node bindings. In my <a target="_blank" href="https://www.aspect.build/services">consulting work</a> with AI companies, I learned how monstrous <code>pip</code> packages can be (👀 CUDA). I knew what I had to do.</p>
<p>Introducing <code>js_image_layer</code> layer groups, a new <code>rules_js</code> feature that allows fine-grained control over the number of layers created. Users can create additional layers by supplying a dictionary mapping a layer name to a regular expression that is evaluated against each file path.</p>
<p>An example of putting <code>@huge/pkg</code> into its own layer can be written as follows.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744238924291/fb4a4a46-f46f-4042-bdf7-4f16d974c3e0.png" alt class="image--center mx-auto" /></p>
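<p>In BUILD-file form, the shape of that invocation is roughly the following — an illustrative sketch, with the <code>layer_groups</code> attribute taken from the <code>js_image_layer</code> docs and the load label, target names, and regex chosen here for illustration:</p>
<pre><code class="lang-plaintext">load("@aspect_rules_js//js:defs.bzl", "js_image_layer")

js_image_layer(
    name = "layers",
    binary = ":bin",
    layer_groups = {
        # name -> regex matched against file paths; matches get their own layer
        "huge_pkg": "node_modules/@huge/pkg",
    },
)
</code></pre>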
<p>$$\[ \begin{array}{cc} \text{layer_groups} &amp; \text{default layers} \\ \uparrow &amp; \uparrow \\ \text{any number of additional layers} &amp; \text{the layers shown above} \\ \end{array} \]$$</p><div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><code>js_image_layer</code> creates 5 default layers for an easy out-of-the-box experience, even if some are empty due to preceding <code>layer_groups</code>.</div>
</div>

<p>We also took this as an opportunity to optimize how we generate layers. Previously, <code>js_image_layer</code> had a custom Node.js program to create layers (<code>.tar</code> archives). Though it worked great for medium-size archives (&lt;= 200MB), <a target="_blank" href="https://nodejs.org/en/learn/modules/backpressuring-in-streams">streaming backpressure</a> greatly reduced its efficiency. Fixing this was as easy as building the archives with good ol’ <strong>libarchive</strong> (also known as bsdtar).</p>
<p>One of our customers saw a <strong>40%</strong> speed improvement with no additional configuration change. With some additional layers, it became <strong>50% faster</strong> due to better parallelization of build actions.</p>
<p><a target="_blank" href="https://github.com/thesayyn/js_image_layer_bench">A benchmark</a> of cold build times for a <code>js_image_layer</code> target, with <strong>no change</strong> to the BUILD file, demonstrates a <strong>52%</strong> improvement in overall build time.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744237874104/94202ac6-dc36-4ffc-97e1-724e8d8212d9.png" alt class="image--center mx-auto" /></p>
<p>You can now add as many layers as you want based on size, how frequently they change, or any other criteria. You can even override the default layers by using the same keys in the dictionary.</p>
<p>The <a target="_blank" href="https://github.com/aspect-build/rules_js/blob/main/docs/js_image_layer.md#js_image_layer-layer_groups">Layer Groups feature</a> is now available in <code>rules_js</code> version <a target="_blank" href="https://github.com/aspect-build/rules_js/releases/tag/v2.3.5">v2.3.5</a>!</p>
]]></content:encoded></item><item><title><![CDATA[Securing Bazel's Module Registry]]></title><description><![CDATA[As a developer, I view most posts about Supply-Chain Security with skepticism. Fear, uncertainty, and doubt (FUD) are a great way for security vendors to get leads and make sales, whether or not the underlying problem represents a real vulnerability....]]></description><link>https://blog.aspect.build/securing-bcr</link><guid isPermaLink="true">https://blog.aspect.build/securing-bcr</guid><category><![CDATA[bazel]]></category><category><![CDATA[SLSA]]></category><category><![CDATA[supplychainsecurity]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Mon, 14 Apr 2025 15:32:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/pZld9PiPDno/upload/61832271e04ce4048fb755d81b5c6542.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As a developer, I view most posts about Supply-Chain Security with skepticism. Fear, uncertainty, and doubt (FUD) are a great way for security vendors to get leads and make sales, whether or not the underlying problem represents a real vulnerability.</p>
<p>But a couple weeks ago, there was yet another wake-up call: a <a target="_blank" href="https://thehackernews.com/2025/03/cisa-warns-of-active-exploitation-in.html">vulnerability in tj-actions</a> infected an unknown number of open-source projects, whose CI pipelines probably exposed tokens that can be used to hijack their projects.</p>
<p>Like the <a target="_blank" href="https://en.wikipedia.org/wiki/XZ_Utils_backdoor"><code>xz</code> vulnerability</a> last year, this happened in an OSS repo where a tiny number of individuals (often hobbyists) are responsible for holding up the security posture of enterprise users. And many Bazel rulesets are similarly-staffed projects! Google is transferring repositories to the Linux Foundation (under the bazel-contrib GitHub org) and this comes with no resources for maintenance.</p>
<p>What would happen if a Bazel ruleset author leaks a Personal Access Token? Rules do things like download toolchains, and invoke them to produce your build outputs. A malicious release artifact could easily inject code into the binaries you build and ship. This attack vector is outside of your code, and outside of the third-party packages that your security scanner runs on. I don’t know of any companies who are taking steps to prevent this today.</p>
<h2 id="heading-what-can-you-do-about-it">What can you do about it?</h2>
<p>Whenever taking a new dependency, an enterprise security team will scan the sources for exploits, because they cannot trust that the maintainers have a security posture that meets their requirements. When the dependency is updated by Dependabot or Renovate, hopefully they’re reviewing the delta to the sources since the prior trusted version.</p>
<p>It’s convenient that rulesets (and C++ packages) are published on the <a target="_blank" href="https://registry.bazel.build">Bazel Central Registry</a> so users don’t have to vendor the sources of all the modules they depend on. However as a consequence of GitHub’s <a target="_blank" href="https://blog.bazel.build/2023/02/15/github-archive-checksum.html">checksum stability outage</a> two years ago, Bazel modules generally publish a release artifact, which is constructed from the sources. How can an enterprise consumer know whether that artifact is truly built from the sources they’ve reviewed?</p>
<p>This is the assurance we can get from a framework like <a target="_blank" href="https://slsa.dev/">https://slsa.dev/</a>. Thanks to a grant from Google and a bunch of help from <a target="_blank" href="https://github.com/loosebazooka">Appu</a> on the Distroless team, Aspect has upgraded the supply-chain for Bazel rulesets to optionally include an attestation, which is a cryptographic proof-of-trust. It allows an end-user of the module to place their trust in the build system vendor (GitHub in this case) that the build artifacts are provably constructed on a secure machine, given the sources and build scripts that exist in the repository for that release.</p>
<h2 id="heading-attestations-of-bcr-modules">Attestations of BCR modules</h2>
<p>As an example, let’s look at <a target="_blank" href="https://registry.bazel.build/modules/aspect_rules_lint/1.3.4">https://registry.bazel.build/modules/aspect_rules_lint/1.3.4</a>. To be secure, one should follow the “Release Notes” link to navigate to the GitHub repo, and use the “Full Changelog” link to scan through the source code changes since the prior release we trusted. Okay, nothing looks malicious. How can we trust the release artifact on BCR?</p>
<p>That release also has three new files, with a <a target="_blank" href="https://github.com/aspect-build/rules_lint/releases/download/v1.3.4/MODULE.bazel.intoto.jsonl"><code>.intoto.jsonl</code></a> extension. These are attestations of provenance. What does that mean? Since the release was built on GitHub Actions, using GitHub-hosted runners, GitHub is providing a proof that the release artifact is constructed from the sources in the repo, using the workflow definition in the repo.</p>
<p>GitHub CLI provides the <code>attestation verify</code> command. If we download the release artifact we can try it out:</p>
<pre><code class="lang-plaintext">% gh attestation verify ~/Downloads/rules_lint-v1.3.4.tar.gz --owner aspect-build --signer-repo bazel-contrib/.github       
Loaded digest sha256:a7dfbfe0aa2fb960911a1589fa2ebc4f9fd0e25b0090edb54a9dfac73fdd6444 for file:///Users/alexeagle/Downloads/rules_lint-v1.3.4.tar.gz
Loaded 1 attestation from GitHub API

The following policy criteria will be enforced:
- Predicate type must match:................ https://slsa.dev/provenance/v1
- Source Repository Owner URI must match:... https://github.com/aspect-build
- Subject Alternative Name must match regex: (?i)^https://github.com/bazel-contrib/.github/
- OIDC Issuer must match:................... https://token.actions.githubusercontent.com

✓ Verification succeeded!

The following 1 attestation matched the policy criteria

- Attestation #1
  - Build repo:..... aspect-build/rules_lint
  - Build workflow:. .github/workflows/release.yml@refs/tags/v1.3.4
  - Signer repo:.... bazel-contrib/.github
  - Signer workflow: .github/workflows/release_ruleset.yaml@refs/tags/v7.1.0
</code></pre>
<p>A similar verification is built-in to the BCR presubmit process as well, ensuring that the attestations in the repository (<a target="_blank" href="https://github.com/bazelbuild/bazel-central-registry/blob/main/modules/aspect_rules_lint/1.3.4/attestations.json">https://github.com/bazelbuild/bazel-central-registry/blob/main/modules/aspect_rules_lint/1.3.4/attestations.json</a>) may be trusted. Ultimately we expect that the Bazel client itself will transparently verify modules as they are downloaded, which will also support private registries.</p>
<h2 id="heading-rolling-out-to-the-ecosystem">Rolling out to the ecosystem</h2>
<p>We didn’t just make this work for rules_lint. The changes needed have been made to the <a target="_blank" href="https://github.com/bazel-contrib/publish-to-bcr">Publish to BCR</a> helper, which is now a reusable workflow rather than a GitHub App. That means we’ll soon be able to trust releases from all the rulesets that reuse this release-and-publish workflow.</p>
<p>Thanks to <a target="_blank" href="http://github.com/kormide">Derek</a> for doing the engineering work! If you’re a module author, follow that link to the docs for Publish to BCR.</p>
]]></content:encoded></item><item><title><![CDATA[Dagger and Bazel]]></title><description><![CDATA[For those of you in the Bazel ecosystem, when you hear "Dagger", you probably think of the dependency injection framework for Java, Kotlin, and Android. And for good reason. Dependencies are frequently part of a build process. But there's another Dag...]]></description><link>https://blog.aspect.build/dagger-and-bazel</link><guid isPermaLink="true">https://blog.aspect.build/dagger-and-bazel</guid><category><![CDATA[code as infrastructure]]></category><category><![CDATA[bazel]]></category><category><![CDATA[dagger]]></category><category><![CDATA[Infrastructure as code]]></category><dc:creator><![CDATA[Chris Chinchilla]]></dc:creator><pubDate>Mon, 24 Feb 2025 19:14:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1740424608799/a2ffeebf-7260-4aaa-a3ae-9757b732722a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For those of you in the Bazel ecosystem, when you hear "Dagger", you probably think of <a target="_blank" href="https://dagger.dev/">the dependency injection framework for Java, Kotlin, and Android</a>. And for good reason. Dependencies are frequently part of a build process. But <a target="_blank" href="https://dagger.io/">there's another Dagger in the wild</a>, a tool and platform for ephemerally building and testing multi-language projects. Sound familiar?</p>
<p>In this post, I look at what (the new) Dagger is, how it works, and how it compares to Bazel.</p>
<h2 id="heading-dagger-history-and-concepts">Dagger history and concepts</h2>
<p>Dagger, started in 2022 by Solomon Hykes, the creator of Docker, is part of the ecosystem of tools I call "code as infrastructure." In this model, you use your programming language of choice to define your infrastructure and your test and build pipelines.</p>
<p>Under the hood, everything runs in containers, making Dagger pipelines portable and moveable between local runs or running on CI.</p>
<p>Dagger defines every task and workflow (including "infrastructure" setup) as a Dagger Function written in one of the currently available SDKs: Go, TypeScript, and Python. Yes, this means you write functions to create Functions. It's a little confusing.</p>
<p>You can extend the core Dagger functionality with <a target="_blank" href="https://daggerverse.dev/">Daggerverse modules</a>, which include a wide variety of use cases, from using helm charts to spinning up Kubernetes servers and various package managers.</p>
<p>At the center of everything is the Dagger Engine, <a target="_blank" href="https://github.com/dagger/dagger">which is open source</a> and handles maintaining the connections between functions, caching, state, telemetry, and more.</p>
<p>There's also the Dagger Cloud, which currently provides a browser-based interface for tracing and debugging issues with Dagger Functions. This is Dagger's monetization strategy and is free for one user.</p>
<h2 id="heading-example">Example</h2>
<p>This is the TypeScript example Daggerized application pipeline from <a target="_blank" href="https://docs.dagger.io/quickstart/daggerize">the Dagger documentation</a>. It builds two containers, runs tests, and then publishes one of the built containers.</p>
<pre><code class="lang-ts"><span class="hljs-keyword">import</span> { dag, Container, Directory, <span class="hljs-built_in">object</span>, func } <span class="hljs-keyword">from</span> <span class="hljs-string">"@dagger.io/dagger"</span>

<span class="hljs-meta">@object</span>()
<span class="hljs-keyword">class</span> HelloDagger {
  <span class="hljs-comment">/**
   * Publish the application container after building and testing it on-the-fly
   */</span>
  <span class="hljs-meta">@func</span>()
  <span class="hljs-keyword">async</span> publish(source: Directory): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">string</span>&gt; {
    <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.test(source)
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.build(source).publish(
      <span class="hljs-string">"ttl.sh/myapp-"</span> + <span class="hljs-built_in">Math</span>.floor(<span class="hljs-built_in">Math</span>.random() * <span class="hljs-number">10000000</span>),
    )
  }

  <span class="hljs-comment">/**
   * Build the application container
   */</span>
  <span class="hljs-meta">@func</span>()
  build(source: Directory): Container {
    <span class="hljs-keyword">const</span> build = <span class="hljs-built_in">this</span>.buildEnv(source)
      .withExec([<span class="hljs-string">"npm"</span>, <span class="hljs-string">"run"</span>, <span class="hljs-string">"build"</span>])
      .directory(<span class="hljs-string">"./dist"</span>)
    <span class="hljs-keyword">return</span> dag
      .container()
      .from(<span class="hljs-string">"nginx:1.25-alpine"</span>)
      .withDirectory(<span class="hljs-string">"/usr/share/nginx/html"</span>, build)
      .withExposedPort(<span class="hljs-number">80</span>)
  }

  <span class="hljs-comment">/**
   * Return the result of running unit tests
   */</span>
  <span class="hljs-meta">@func</span>()
  <span class="hljs-keyword">async</span> test(source: Directory): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">string</span>&gt; {
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.buildEnv(source)
      .withExec([<span class="hljs-string">"npm"</span>, <span class="hljs-string">"run"</span>, <span class="hljs-string">"test:unit"</span>, <span class="hljs-string">"run"</span>])
      .stdout()
  }

  <span class="hljs-comment">/**
   * Build a ready-to-use development environment
   */</span>
  <span class="hljs-meta">@func</span>()
  buildEnv(source: Directory): Container {
    <span class="hljs-keyword">const</span> nodeCache = dag.cacheVolume(<span class="hljs-string">"node"</span>)
    <span class="hljs-keyword">return</span> dag
      .container()
      .from(<span class="hljs-string">"node:21-slim"</span>)
      .withDirectory(<span class="hljs-string">"/src"</span>, source)
      .withMountedCache(<span class="hljs-string">"/root/.npm"</span>, nodeCache)
      .withWorkdir(<span class="hljs-string">"/src"</span>)
      .withExec([<span class="hljs-string">"npm"</span>, <span class="hljs-string">"install"</span>])
  }
}
</code></pre>
<p>You chain Functions together, meaning other Functions can call them. To run this example, you call the <code>publish</code> Function, passing a local code directory:</p>
<pre><code class="lang-sh">dagger call publish --<span class="hljs-built_in">source</span>=.
</code></pre>
<p>The <code>publish</code> Function, in turn, calls <code>test</code>, which calls <code>build</code>, which calls <code>buildEnv</code>. You can pass variables between them, such as the code directory, and use any other aspect of the programming language you're using should you need them.</p>
<p>The Dagger-specific methods in each Function are fairly self-explanatory and often mirror their name and function in a Dockerfile. Again, they use chaining from the base <code>dag</code> client and any of its core types, such as a <code>Container</code>.</p>
<p>Once the pipeline completes, the Dagger Engine tears everything down besides anything retained for caching. However, as with Docker, you can maintain state by mounting local volumes from the host system.</p>
<h2 id="heading-bazel-andor-dagger">Bazel and/or Dagger</h2>
<p>I initially set out to write this post looking at how to use Bazel and Dagger together. But I almost immediately came across <a target="_blank" href="https://docs.dagger.io/adopting#:~:text=You%20are%20happily%20using%20a%20monolithic%20toolchain%2C%20such%20as%20Gradle%2C%20Nix%20or%20Bazel%2C%20with%20no%20exception%20and%20no%20fragmentation%20within%20the%20team">this line in the documentation</a>:</p>
<blockquote>
<p>Before going any further, you should look for reasons not to adopt Dagger. Your project may not be a good fit for Dagger if: …</p>
<ul>
<li>You are happily using a monolithic toolchain, such as Gradle, Nix or Bazel, with no exception and no fragmentation within the team</li>
</ul>
</blockquote>
<p>There are also many discussions within the community on how the tools compare and contrast, often coming down to "stick with what works for you".</p>
<p>So, instead, what are the key differences and overlaps? In the general spirit of technology and developer tools, there are no hard and fast "best" answers. Often, it depends.</p>
<h3 id="heading-native-verses-containers">Native versus Containers</h3>
<p>Dagger runs tasks in containers, which makes its pipelines portable. However, not everything can run in containers, and containers add their own overhead, such as the container runtime itself.</p>
<p>Bazel runs tasks in native environments, which is more performant, but native environments can vary between platforms and require accommodations for portability.</p>
<h3 id="heading-configuration-language">Configuration language</h3>
<p>Bazel uses Starlark, which is Python-like but requires learning some complex new concepts. Starlark has widespread IDE support through plugins and linters.</p>
<p>Dagger uses Python, Go, or TypeScript with the relevant SDK, making it simpler to start with. The world of "infrastructure as code" tools is interesting and includes tools like <a target="_blank" href="https://www.pulumi.com/">Pulumi</a>. I'm unsure whether the concept has widespread adoption, or whether it's something dev and DevOps teams actually want to mix together.</p>
<h3 id="heading-language-support">Language support</h3>
<p>Support for most aspects of Bazel comes in the form of "rules". Out of the box, Bazel supports C, C++, Java, Objective-C, Protocol Buffers, Python, and shell. Through third-party community rules, Bazel can support many other languages natively.</p>
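<p>For example (illustrative snippets with placeholder versions), adding JavaScript support via a community ruleset looks like this in a Bazel module file and a BUILD file:</p>

```python
# MODULE.bazel -- pull in a community ruleset (version is a placeholder)
bazel_dep(name = "aspect_rules_js", version = "2.0.0")

# BUILD.bazel -- use a rule from that ruleset
load("@aspect_rules_js//js:defs.bzl", "js_library")

js_library(
    name = "lib",
    srcs = ["index.js"],
)
```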
<p>While you can only write pipelines in one of the supported SDKs (or call the Dagger API directly), Dagger can build, test, and run whatever programming language you can run in containers.</p>
<h3 id="heading-reproducibility">Reproducibility</h3>
<p>One of Bazel's main features is its reliable reproducibility no matter when or where you run a build or test.</p>
<p>Dagger relies on the reproducibility of containers for its own guarantees. Because of this reproducibility, it can layer caching on top, speeding up subsequent runs.</p>
<h3 id="heading-community">Community</h3>
<p>Bazel originated at Google, but now has a large community of users and contributors, and a large ecosystem of rulesets and plugins that extend its core functionality. Bazel is about nine years old: relatively new in the grand scheme of similar tools such as Nix or Gradle, but also perhaps considered "old" by some cutting-edge development teams.</p>
<p>Dagger is new and largely directed by one company. However, it is open source and has a healthy mix of contributors. The module ecosystem is reasonable, but it's always hard to predict how well maintainers will keep those modules up to date.</p>
<h3 id="heading-devex">DevEx</h3>
<p>Bazel treats most things as artifacts that depend on each other, which can require some rethinking of your processes, but is similar to Nix in that regard. Bazel often needs a fair number of configuration files and rulesets to get started.</p>
<p>While it uses common language patterns and containers (both fairly established now), Dagger also requires some rethinking, especially if you're coming from "as code" tools. Whilst the practice of chaining was conceptually straightforward, I sometimes found myself getting lost in it and passing lots of variables back and forth. Perhaps the biggest blocker is that if you don't use Python, Go, or TypeScript, writing pipelines is more challenging. It's still possible to call the GraphQL API directly, but then you lose a lot of the advantages. And while containers are fairly flexible, there are still certain tasks that you can't, or don't want to, run in them.</p>
<h2 id="heading-stick-with-what-you-know">Stick with what you know</h2>
<p>Dagger and other "infrastructure as code" tools like it are interesting and tend to attract a lot of tech media attention. However, they are still new and often developed largely by one company funded by venture capital. This can mean a tool won't stick around, no matter how promising it is.</p>
<p>This echoes the statements I read in the Dagger community, and indeed those of any pragmatic developer-tool community. There are always shiny new tools to try. They may offer benefits over what you already use, but if what you have works reliably, and you have staff who understand it, is it worth the effort? That's time you could spend on serving and improving on customers' needs.</p>
]]></content:encoded></item><item><title><![CDATA[Self-hosting your CI/CD infra]]></title><description><![CDATA[Aspect Workflows is the Software-as-a-Service product that runs Bazel developer workflows, such as continuous integration and delivery, with the speed and cost benefits promised by this advanced tool. But it’s not like most Cloud-hosted SaaS that run...]]></description><link>https://blog.aspect.build/self-hosting-your-cicd-infra</link><guid isPermaLink="true">https://blog.aspect.build/self-hosting-your-cicd-infra</guid><category><![CDATA[SaaS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[IAM,MFA,Access key ID,Secret access key]]></category><category><![CDATA[Data Compliance]]></category><category><![CDATA[Cloud Computing]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Tue, 17 Dec 2024 01:49:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1734117305032/eeb1a1b2-009f-4dd2-aae8-052c11555d7a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Aspect Workflows is the Software-as-a-Service product that runs <a target="_blank" href="https://bazel.build">Bazel</a> developer workflows, such as continuous integration and delivery, with the speed and cost benefits promised by this advanced tool. But it’s not like most Cloud-hosted SaaS that runs on an account the vendor provides. Instead we deploy into our customers cloud accounts, sometimes called “Bring Your Own Cloud” (BYOC). In this post I’ll explain why we do it this way, and how our customers benefit.</p>
<h3 id="heading-enhanced-security">Enhanced Security</h3>
<p>Customer cloud accounts have security protocols to isolate networks on Virtual Private Clouds (VPCs), enforce custom IAM roles, and firewall applications to prevent unintended access. CI/CD systems are core to the software supply-chain, so vulnerabilities matter! Self-hosted CI infrastructure is subject to the same policies.</p>
<p>It also requires placing less trust in the vendor. While Aspect is, of course, SOC 2 certified, there are always other security and business risks in relying on a vendor to operate the infrastructure. The reduced risk of self-hosting has advantages in legal and procurement processes as well.</p>
<p>Self-hosted infrastructure-as-code (IaC) also allows your security scanning tools to operate over the vendor’s infrastructure definitions, including any co-maintenance policies granted to the vendor’s on-call engineers.</p>
<h3 id="heading-data-control-and-compliance">Data Control and Compliance</h3>
<p>Data is retained in the customer’s cloud, and is subject to the access auditing, encryption, and retention policies enforced by their platform team. This is especially useful in industries where regulations like GDPR, HIPAA, and FINRA require constraints around data management.</p>
<h3 id="heading-cloud-pricing">Cloud Pricing</h3>
<p>Many companies have a cloud contract that reserves some capacity in exchange for better pricing. Others, like startups, have credits granted by the cloud sales team or by their investors. Sometimes there’s a requirement to use up all the compute credits or reservations.</p>
<p>Hosting CI infrastructure in the customer’s cloud account lets them save on hosting costs, compared with a vendor-hosted model.</p>
<h3 id="heading-honest-billing">Honest Billing</h3>
<p>Running tests can be resource-intensive, and you want to be able to run as many as you need. When a vendor meters their service based on usage, it’s difficult to budget for predicted SaaS costs.</p>
<p>In the case of Build &amp; Test, the vendor should be expected to reduce the cloud compute costs by optimizing use of caching and right-sized, elastic-scaling instances. If the vendor is rewarded for consuming compute, they have a perverse incentive to use more resources rather than less.</p>
<p>Self-hosting infrastructure puts the resource billing under the customer’s control, and allows them to scrutinize the optimizations the vendor provides.</p>
<h3 id="heading-low-latency-high-bandwidth-integrations">Low-latency, high-bandwidth integrations</h3>
<p>Running infrastructure in the customer’s cloud makes it much more straightforward to integrate components developed in-house or by other vendors. For example, build system infrastructure should be network-adjacent to a remote development cluster or remote IDE backends.</p>
]]></content:encoded></item><item><title><![CDATA[Bootstrap Complete: Aspect Build Raises $3.85M to Enable Developers in Massive, Multi-Language Codebases]]></title><description><![CDATA[Today we are thrilled to be featured in TechCrunch, announcing that Aspect Build has raised $3.85M in pre-seed and seed round funding led by FirstMark, with participation from Preston-Werner Ventures and several angel investors. This funding will acc...]]></description><link>https://blog.aspect.build/seed-round</link><guid isPermaLink="true">https://blog.aspect.build/seed-round</guid><category><![CDATA[funding]]></category><dc:creator><![CDATA[Alex Eagle]]></dc:creator><pubDate>Tue, 01 Oct 2024 16:15:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727736775237/8ed3b0aa-ef25-4f12-8e10-d39b0846a1b4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today we are thrilled to be featured in <a target="_blank" href="https://techcrunch.com/2024/10/01/aspect-build-gets-3-85m-to-help-developers-create-software-with-bazel/">TechCrunch</a>, announcing that <strong>Aspect Build</strong> has raised $3.85M in pre-seed and seed round funding led by <strong>FirstMark</strong>, with participation from <strong>Preston-Werner Ventures</strong> and several angel investors. This funding will accelerate the growth of our product, <strong>Aspect Workflows</strong> as we continue to empower software teams to overcome the challenges of modern development, enabling them to ship faster, more reliably, and at larger scale.</p>
<p><strong>Addressing the Productivity Bottlenecks</strong></p>
<p>It’s 2024, and many engineering organizations still struggle with slow, costly, and unpredictable developer workflows! The causes range from unreliable dependency management and brittle IDE configuration to slow builds and the impracticality of automated integration testing. Despite advancements in the DevOps stack like containerization, CI/CD, and infrastructure as code, the increasing complexity of software projects is driving up build times and slowing down teams. As projects grow, particularly with the rise of <strong>monorepos</strong> and reliance on <strong>third-party dependencies</strong>, idiomatic language-specific tooling is becoming a major bottleneck for companies striving to deliver features quickly and securely at scale.</p>
<p>At Aspect, we are uncomfortably excited to solve these problems. By mastering advanced techniques such as <strong>remote execution and caching</strong> and <strong>hermetic dependency management</strong>, we help teams eliminate the delays caused by slow and flaky builds. We bring developers along with us thanks to an unrelenting focus on ergonomics, documentation, and support. Our system optimizes every step of the build process, ensuring that development teams can focus on creating and shipping great products.</p>
<p><strong>Our Journey: bootstrapping from Google</strong></p>
<p><strong>Greg Magolan</strong> and I <a target="_blank" href="https://github.com/gregmagolan/abc-demo-build-with-aot-universal">first worked together</a> at Google on the Angular team adopting <strong>Bazel</strong>, a large-scale multi-language build system that pioneered how massive organizations like Google handled complex software builds. We loved that Google decided to open-source the tool, but it was very clear that companies would have difficulty achieving the benefits promised in the <a target="_blank" href="https://blog.bazel.build/2019/10/17/bazel-reaches-10-milestone.html">Bazel 1.0 blog post</a>.</p>
<p>Rather than start with venture funding, we bootstrapped for three years by providing Bazel consulting services to over 50 companies. This gave us a front-row seat to the painful bottlenecks that processes like dependency management, builds, and Continuous Integration can create, and also our first taste of success in helping early customers overcome them.</p>
<p>Aspect placed a big bet on our <a target="_blank" href="https://github.com/aspect-build">Open Source Software</a>, which for us is both a passion and our code of ethics. We’ve been the authors of Bazel’s JavaScript and TypeScript support from the beginning, and have added Docker (OCI), Python, and more. Just recently, the <a target="_blank" href="https://registry.bazel.build/">Bazel Central Registry</a> got its 2,000th entry, and we’re proud that 23% of those are from Aspect! I was also the catalyst for moving Bazel’s community and yearly conference to the Linux Foundation, setting the stage for meaningful community governance.</p>
<p>With <strong>Aspect Workflows</strong>, we’re layering on top of our open source and bringing the same level of optimization from our consulting days to a wider audience, helping software teams manage scale and complexity in their workflows.</p>
<p>Since launching, Aspect Workflows has been deployed at numerous customers, delivering 40-60% cost savings and accelerating builds 10x and tests 2-3x, all while maintaining the reproducibility and security necessary for complex, large-scale development.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdrvBMFMvzcxMWPJsAp06R-jhxwyrez5jpN-VK0vmJFC_V5K49EF_WKL69Xr-fiDVFmqL-wQfc6Q1zxah9xm87aJ5p0NXgbP1hR9eXVUYL2mczipRw5FpesGSnhXEef6dp4sEiB4Yl2rrCjA-uva_LoPp92?key=fc_55_3Qsg0ITiidEeTDLQ" alt /></p>
<p><strong>Fueling Our Growth with Seed Funding</strong></p>
<p>We met <a target="_blank" href="https://www.linkedin.com/in/davidwaltcher/">David Waltcher</a> over two years ago. His insight was immediately obvious, as he wrote in his introduction to us, “I’ve been spending a lot of time around Bazel… as the Google family has permeated other organizations, everyone in our network has been raving about the multi-language support and caching.” We’re glad he followed us through our bootstrap journey and our gradual conversion to recurring revenue, and we’re proud to partner with <strong>FirstMark</strong> and <strong>Preston-Werner Ventures</strong>, who bring a wealth of experience in developer tools and share our vision for transforming how software teams operate. With their support, we’re excited to accelerate our product roadmap, expand our team, and push the boundaries of what’s possible in large-scale multi-language software development.</p>
<p>We frequently share exciting announcements, so follow us at <a target="_blank" href="https://www.linkedin.com/company/aspect-build/">https://www.linkedin.com/company/aspect-build/</a></p>
<p>-Alex Eagle</p>
<p>CEO, Aspect Build</p>
]]></content:encoded></item></channel></rss>