How to run a bzl file?
This cheatsheet provides quick tips on how to build and test code in our repository using Bazel.
Start here if you're completely new to Bazel.
The original design documents for our Bazel build can be found at the following Golinks:
This section includes steps every engineer should follow to get a consistent development experience.
Bazelisk is a wrapper for Bazel that downloads and runs the version of Bazel specified in //.bazelversion. It serves a similar purpose as nvm does for NodeJS.
Bazelisk is recommended over plain Bazel because the bazel command on our gLinux workstations is automatically updated every time a new version of Bazel is released, which may not match the version pinned in //.bazelversion.
To install Bazelisk, grab the latest binary for your platform from GitHub, then add it to your PATH.
Tips:
If you wish to use RBE to speed up your builds and test runs (see the --config=remote flag below) run the following command:
We use Gazelle to automatically generate BUILD.bazel files for most of our Go and TypeScript code.
Note that we occasionally edit Gazelle-generated BUILD.bazel files by hand, e.g. to mark tests as flaky.
Run make gazelle from the repository's root directory.
TypeScript support is provided via a custom Gazelle extension which can be found in //bazel/gazelle/frontend.
Tip: See here for details on how this extension decides which rule to generate for a given TypeScript file.
Buildifier is a linter and formatter for BUILD.bazel files and other Bazel files (WORKSPACE, *.bzl, etc.).
Run bazel run //:buildifier.
Our Bazel build is tested on RBE via the following tasks:
We regard the above tasks as the source of truth for build and test correctness.
As an insurance policy against RBE outages, we also have the following tasks:
The non-RBE tasks tend to be a bit more brittle than the RBE ones, which is why they are excluded from the CQ.
Use commands bazel build and bazel test to build and test Bazel targets, respectively. Examples:
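For example (the target labels below are illustrative, not a fixed list):

$ bazel build //go/util/...       # Build every target under //go/util.
$ bazel test //go/util:util_test  # Run a single test target.
$ bazel test //...                # Build and test everything in the repository.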
Any build artifacts produced by bazel build or bazel test will be found under //_bazel_bin.
Note that it's not necessary to bazel build a test target before bazel test-ing it. bazel test will automatically build the test target if it wasn't built already (i.e. if it wasn't found in the Bazel cache).
More on bazel build here.
More on bazel test here.
By default, Bazel will build and test targets on the host system (aka a local build). To build on RBE, invoke Bazel with flag --config=remote, e.g.:
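(A sketch; the wildcard pattern is just an example.)

$ bazel test --config=remote //...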
This repository contains some scripted actions that shell out to Bazel, such as certain make targets (e.g. make gazelle, make buildifier) and go generate actions. These actions use the “mayberemote” configuration via the --config=mayberemote flag, e.g.:
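(A sketch; //:buildifier is the target mentioned above, standing in for whatever target the scripted action invokes.)

$ bazel run --config=mayberemote //:buildifier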
By default, the “mayberemote” configuration does nothing. This is to support users that might not have RBE access, or when working offline (e.g. on a plane with no WiFi). To get the benefits of RBE when running scripted actions, please create a //bazel/user/bazelrc file with the following contents:
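(The file's exact contents aren't reproduced here; presumably it simply maps the otherwise-empty configuration onto the remote one, along these lines.)

build:mayberemote --config=remote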
To learn more about the mayberemote configuration:
Use command bazel run to run binary Bazel targets (such as go_binary, sh_binary, etc.), e.g.:
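(The go_binary target below is hypothetical.)

$ bazel run //go/mytool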
Alternatively, you can run the Bazel-built artifact directly, e.g.:
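(Same hypothetical target; the <name>_/<name> path shape is what rules_go typically produces for a go_binary.)

$ _bazel_bin/go/mytool/mytool_/mytool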
The exact path of the binary under //_bazel_bin depends on the Bazel rule (go_binary, py_binary, etc.). As the example shows, this path can be non-obvious, so it's generally recommended to use bazel run.
More on bazel run here.
Our Go codebase is built and tested using Bazel rules from the rules_go repository. The go_test rule documentation is a great read to get started.
As mentioned in the Gazelle section, all Bazel targets for Go code are generated with Gazelle.
Read go/skia-infra-bazel-backend for the full details.
On non-Bazel Go projects, developers typically use locally installed binaries such as go and gofmt for code generation and code formatting tasks. However, our Bazel build aims to be as hermetic as possible. To this end, rather than requiring the developer to install a Go SDK on their system, we provide convenience Bazel targets defined in //BUILD.bazel to invoke binaries in the Bazel-downloaded Go SDK and other Bazel-downloaded tools.
Example invocations:
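(The target names below are illustrative; check //BUILD.bazel for the actual list of convenience targets.)

$ bazel run //:go -- generate ./...
$ bazel run //:gofmt -- -s -w .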
Our CI tasks and Makefiles use these Bazel targets. This prevents diffs that might arise from using locally installed binaries, which might differ from system to system. Developers should always use Bazel-downloaded binaries for any tasks that produce changes in checked-in files.
Note that it might still be desirable to have a locally installed Go SDK. For example, Visual Studio Code's Go extension requires a locally installed Go SDK to enable autocompletion and debugging. It is the developer's responsibility to ensure that their locally installed Go SDK matches the version used by the Bazel build, which is defined in the //WORKSPACE file.
Simply use bazel build (and optionally bazel run) as described earlier.
Tip: Start by reading the General testing tips section.
Our setup differs slightly from typical Go + Bazel projects in that we use a wrapper macro around go_test to handle manual tests. Gazelle is configured to use this macro via a gazelle:map_kind directive in //BUILD.bazel. The macro is defined in //bazel/go/go_test.bzl. Read the macro's docstring for the full details.
To mark specific Go test cases as manual, extract them out into a separate file ending with _manual_test.go within the same directory.
The go_test macro in //bazel/go/go_test.bzl places files ending with _manual_test.go in a separate go_test target, which is tagged as manual.
More on manual tests here.
The go test command supports flags such as -v to print verbose outputs, -run to run a specific test case, etc. Under Bazel, these flags can be passed to a go_test test target via --test_arg, but they need to be prefixed with -test., e.g.:
The following example shows what a typical bazel test invocation might look like while debugging a go_test target locally.
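(A sketch with a hypothetical target and test case name; the flags shown are standard Bazel and Go test flags.)

$ bazel test //go/util:util_test \
    --test_output=streamed \
    --nocache_test_results \
    --test_arg=-test.v \
    --test_arg=-test.run=TestFoo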
Our front-end code is built and tested using a set of custom Bazel macros built on top of rules provided by the rules_nodejs repository. All such macros are either defined in or re-exported from //infra-sk/index.bzl. This section uses the terms macro and rule interchangeably when referring to the macros exported from said file.
As mentioned in the Gazelle section, most Bazel targets for front-end code are generated with Gazelle.
Read go/skia-infra-bazel-frontend for the full details.
Simply use bazel build (and optionally bazel run) as described earlier.
Demo pages are served via a Gazelle-generated sk_demo_page_server rule.
Use bazel run to serve a demo page via its sk_demo_page_server rule, e.g.:
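(Both the module path and the target name are hypothetical.)

$ bazel run //infra-sk/modules/my-element:demo_page_server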
To rebuild the demo page automatically upon changes in the custom element's directory, use the demopage.sh script found in the repository's root directory, e.g.:
This script uses entr to watch for file changes and re-execute the bazel run command as needed. The above demopage.sh invocation is equivalent to:
Install entr on a gLinux workstation with sudo apt-get install entr.
In the future, we might replace this script with ibazel, which requires changes to the sk_demo_page_server rule.
Tip: Start by reading the General testing tips section.
Front-end code testing is done via three different Bazel rules:
Gazelle decides which rule to generate for a given *_test.ts file based on the following patterns:
Use bazel test to run a Karma test in headless mode:
To run a Karma test in the browser during development, use bazel run instead:
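(Hypothetical target; the same label is used with both bazel test and bazel run.)

$ bazel test //infra-sk/modules/my-element:my-element_test   # Headless mode.
$ bazel run //infra-sk/modules/my-element:my-element_test    # Opens a browser for debugging.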
As an alternative to bazel run when debugging tests in the browser, consider using the karmatest.sh script found in the repository's root directory. Similarly to the demopage.sh script mentioned earlier, it watches for changes in the custom element's directory, and relaunches the test runner when a file changes. Example usage:
As with demopage.sh, this script depends on the entr command, which can be installed on a gLinux workstation with sudo apt-get install entr.
Use bazel test to run a Puppeteer test, e.g.:
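(Hypothetical target name.)

$ bazel test //infra-sk/modules/my-element:my-element_puppeteer_test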
To view the screenshots captured by a Puppeteer test, use the //:puppeteer_screenshot_server target:
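(This runs the target named above.)

$ bazel run //:puppeteer_screenshot_server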
To extract the screenshots captured by a Puppeteer test into a directory, use the //:extract_puppeteer_screenshots target:
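(The output flag below is illustrative; point it at whatever directory you want the screenshots extracted into.)

$ bazel run //:extract_puppeteer_screenshots -- --output_dir=/tmp/screenshots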
To step through a Puppeteer test with a debugger, run your test with bazel run, and append _debug at the end of the target name, e.g.:
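(Hypothetical target; note the _debug suffix.)

$ bazel run //infra-sk/modules/my-element:my-element_puppeteer_test_debug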
This will print a URL to stdout that you can use to attach a Node.js debugger (such as the VS Code Node.js debugger, or Chrome DevTools). Your test will wait until a debugger is attached before continuing.
Example debug session with Chrome DevTools:
By default, Puppeteer starts a Chromium instance in headless mode. If you would like to run your test in headful mode, invoke your test with bazel run, and append _debug_headful at the end of the target name, e.g.:
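(Hypothetical target; note the _debug_headful suffix.)

$ bazel run //infra-sk/modules/my-element:my-element_puppeteer_test_debug_headful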
Run your test in headful mode to visually inspect how your test interacts with the demo page under test as you step through your test code with the attached debugger.
Use bazel test to run a NodeJS test, e.g.:
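(Hypothetical target name.)

$ bazel test //infra-sk/modules/my-element:my-element_nodejs_test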
The below tips apply to all Bazel test targets (e.g. go_test, karma_test, etc.).
By default, Bazel omits the standard output of tests (e.g. fmt.Println("Hello")).
Use flag --test_output=all to see the full output of your tests:
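(Hypothetical target.)

$ bazel test //go/util:util_test --test_output=all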
Note that Bazel runs tests in parallel, so it will only print out their output once all tests have finished running.
Flag --test_output=errors can be used to only print out the output of failing tests.
To see the tests' output in real time, use flag --test_output=streamed. Note however that this forces serial execution of tests, so this can be significantly slower.
Bazel caches successful test runs, and reports (cached) PASSED on subsequent bazel test invocations, e.g.:
To disable caching, use flag --nocache_test_results, e.g.:
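(Hypothetical target; the second invocation forces the test to re-run even if cached.)

$ bazel test //go/util:util_test                          # Reports (cached) PASSED on a repeat run.
$ bazel test //go/util:util_test --nocache_test_results   # Re-runs the test.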
Flaky tests can cause the CI to fail (see Bazel CI tasks).
Tests can be marked as flaky via the flaky argument, e.g.:
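(A sketch; the target name and sources are illustrative.)

go_test(
    name = "util_test",
    srcs = ["util_test.go"],
    flaky = True,  # Bazel will retry this test before reporting a failure.
)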
Bazel will execute tests marked as flaky up to three times, and report a test failure only if all three attempts fail.
Using flaky is generally discouraged, but can be useful until the root cause of the flake is diagnosed (see Debugging flaky tests) and fixed.
As a last resort, consider marking your flaky test as manual (see Manual tests).
More on the flaky attribute here.
While --nocache_test_results can be useful for debugging flaky tests, flag --runs_per_test was specifically added for this purpose. Example:
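(Hypothetical target.)

$ bazel test //go/util:util_test --runs_per_test=20 --test_output=errors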
Manual tests are excluded from Bazel wildcards such as bazel test //....
To mark a test target as manual, use the manual tag, e.g.:
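(A sketch using a plain sh_test; the target and file names are illustrative.)

sh_test(
    name = "my_manual_test",
    srcs = ["my_manual_test.sh"],
    tags = ["manual"],  # Excluded from wildcards such as bazel test //...
)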
Note that the instructions to mark go_test targets as manual are different. See Manual Go tests for more.
Note that manual tests are excluded from the Bazel CI tasks.
More on manual tests and Bazel tags here.
By default, Bazel will report TIMEOUT if the test does not finish within 5 minutes. This can be overridden via the --test_timeout flag, e.g.
$ bazel test //go/util:slow_test --test_timeout=20
This can also be overridden via the timeout and size arguments of the test target, e.g.:
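(A sketch; the target name matches the example above, the sources are illustrative.)

go_test(
    name = "slow_test",
    srcs = ["slow_test.go"],
    size = "large",    # Hints at resource usage and adjusts the default timeout.
    timeout = "long",  # 15 minutes.
)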
More on how to handle timeouts and slow tests here.
Use flag --test_arg to pass flags to the binary produced by a test target.
For example, our go_test targets define custom command-line flags such as flag.Bool("logtostderr", ...). This flag can be enabled with --test_arg, e.g.:
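(Hypothetical target.)

$ bazel test //go/util:util_test --test_arg=--logtostderr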
As an alternative, command-line flags can be specified via the args argument of the Bazel test target, as follows:
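(A sketch; the target name and sources are illustrative.)

go_test(
    name = "util_test",
    srcs = ["util_test.go"],
    args = ["--logtostderr"],  # Passed to the test binary on every run.
)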
More on test arguments here.
By default, Bazel isolates test targets from the host system's environment variables and populates the test environment with a number of variables carrying Bazel-specific information that some *_test rules depend on (documented here).
Use flag --test_env to specify any environment variables, e.g.
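(Hypothetical target and variable name.)

$ bazel test //go/util:util_test --test_env=GOLDEN_DIR=/tmp/golden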
To pipe through an environment variable from the host system:
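(Hypothetical target; passing --test_env with no value forwards the variable from the invoking shell.)

$ bazel test //go/util:util_test --test_env=HOME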
More on the --test_env flag here.
By default, Bazel sandboxes every build step. Effectively, it runs the compile command with only the given source files for a particular rule and the specified dependencies visible, to force all dependencies to be properly listed.
For steps that have a lot of files, this can have a bit of I/O overhead. To speed this up, one can use tmpfs (e.g. a RAM disk) for the sandbox by adding --sandbox_base=/dev/shm to the build command. When compiling Skia, for example, this reduces compile time by 2-3x.
Sandboxing can make diagnosing failing rules a bit harder. To see what command got run and to be able to view the sandbox after failure, add --subcommands --sandbox_debug to the command.
Bazel builds are fast and correct because Bazel caches outputs and reuses them whenever the inputs are identical. This can make it hard to debug a slow or non-deterministic build.
To get a detailed log of all the actions your build is taking:
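One option is Bazel's explanation log, which records why each action was (re)executed (the target pattern is illustrative):

$ bazel build //go/util/... --explain=/tmp/explain.log --verbose_explanations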
Bazel has a query feature that lets one extract information from the build graph.
There are two variants, query and cquery: query reports the maximal superset of possible dependencies across all configurations, while cquery reports the dependencies as resolved in one specific configuration.
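For example (hypothetical target):

$ bazel query 'deps(//go/util:util_test)'    # All possible dependencies, across all configurations.
$ bazel cquery 'deps(//go/util:util_test)'   # Dependencies as configured for the current build.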
Bazel is an open-source build and test tool similar to Make, Maven, and Gradle. It uses a human-readable, high-level build language. Bazel supports projects in multiple languages and builds outputs for multiple platforms. Bazel supports large codebases across multiple repositories, and large numbers of users.
Bazel offers the following advantages:
To build or test a project with Bazel, you typically do the following:
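- Set up Bazel. Download and install Bazel.
- Set up a project workspace, which is a directory where Bazel looks for build inputs and BUILD files, and where it stores build outputs.
- Write a BUILD file, which tells Bazel what to build and how to build it.
- Run Bazel from the command line.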
In addition to building, you can also use Bazel to run tests and query the build to trace dependencies in your code.
When running a build or a test, Bazel does the following:
Since all previous build work is cached, Bazel can identify and reuse cached artifacts and only rebuild or retest what's changed. To further enforce correctness, you can set up Bazel to run builds and tests hermetically through sandboxing, minimizing skew and maximizing reproducibility.
The action graph represents the build artifacts, the relationships between them, and the build actions that Bazel will perform. Thanks to this graph, Bazel can track changes to file content as well as changes to actions, such as build or test commands, and know what build work has previously been done. The graph also enables you to easily trace dependencies in your code.
To get started with Bazel, see Getting Started or jump directly to the Bazel tutorials:
Starlark is a Python-like configuration language originally developed for use in Bazel and since adopted by other tools. Bazel's BUILD and .bzl files are written in a dialect of Starlark properly known as the "Build Language", though it is often simply referred to as "Starlark", especially when emphasizing that a feature is expressed in the Build Language as opposed to being a built-in or "native" part of Bazel. Bazel augments the core language with numerous build-related functions such as glob, genrule, java_binary, and so on.
See the Bazel and Starlark documentation for more details, and the Rules SIG template as a starting point for new rulesets.
To create your first rule, create the file foo.bzl:
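A minimal sketch, matching the implementation-function name referenced later in this walkthrough:

def _foo_binary_impl(ctx):
    pass

foo_binary = rule(
    implementation = _foo_binary_impl,
)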
When you call the rule function, you must define a callback function. The logic will go there, but you can leave the function empty for now. The ctx argument provides information about the target.
You can load the rule and use it from a BUILD file.
Create a BUILD file in the same directory:
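For example (the target name is illustrative):

load(":foo.bzl", "foo_binary")

foo_binary(name = "bin")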
Now, the target can be built:
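From the package directory:

$ bazel build :bin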
Even though the rule does nothing, it already behaves like other rules: it has a mandatory name and supports common attributes like visibility, testonly, and tags.
Before going further, it's important to understand how the code is evaluated.
Update foo.bzl with some print statements:
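Something along these lines (the printed strings match the ones discussed below):

def _foo_binary_impl(ctx):
    print("analyzing", ctx.label)

foo_binary = rule(
    implementation = _foo_binary_impl,
)

print("bzl file evaluation")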
and BUILD:
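(Two targets, matching the two analysis calls observed below.)

load(":foo.bzl", "foo_binary")

print("BUILD file")

foo_binary(name = "bin1")
foo_binary(name = "bin2")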
ctx.label corresponds to the label of the target being analyzed. The ctx object has many useful fields and methods; you can find an exhaustive list in the API reference.
Query the code:
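For example, from the package directory:

$ bazel query :all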
Make a few observations:
To analyze the targets, use the cquery ("configured query") or the build command:
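For example:

$ bazel build :all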
As you can see, _foo_binary_impl is now called twice - once for each target.
Notice that neither "bzl file evaluation" nor "BUILD file" are printed again, because the evaluation of foo.bzl is cached after the call to bazel query. Bazel only emits print statements when they are actually executed.
To make your rule more useful, update it to generate a file. First, declare the file and give it a name. In this example, create a file with the same name as the target:
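A sketch:

def _foo_binary_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name)

foo_binary = rule(
    implementation = _foo_binary_impl,
)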
If you run bazel build :all now, you will get an error:
Whenever you declare a file, you have to tell Bazel how to generate it by creating an action. Use ctx.actions.write to create a file with the given content.
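A sketch (the file content is arbitrary):

def _foo_binary_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name)
    ctx.actions.write(
        output = out,
        content = "Hello!\n",
    )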
The code is valid, but it won't do anything:
The ctx.actions.write function registered an action, which taught Bazel how to generate the file. But Bazel won't create the file until it is actually requested. So the last thing to do is tell Bazel that the file is an output of the rule, and not a temporary file used within the rule implementation.
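A sketch; the return statement is the new part:

def _foo_binary_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name)
    ctx.actions.write(
        output = out,
        content = "Hello!\n",
    )
    return [DefaultInfo(files = depset([out]))]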
The DefaultInfo and depset functions are explained later. For now, assume that the last line is the way to choose the outputs of a rule.
Now, run Bazel:
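From the package directory:

$ bazel build :all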
You have successfully generated a file!
To make the rule more useful, add new attributes using the attr module and update the rule definition.
Add a string attribute called username:
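A sketch of the updated rule definition:

foo_binary = rule(
    implementation = _foo_binary_impl,
    attrs = {
        "username": attr.string(),
    },
)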
Next, set it in the BUILD file:
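(The value is illustrative.)

foo_binary(
    name = "bin",
    username = "Alice",
)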
To access the value in the callback function, use ctx.attr.username. For example:
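A sketch that writes the attribute's value into the generated file:

def _foo_binary_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name)
    ctx.actions.write(
        output = out,
        content = "Hello {}!\n".format(ctx.attr.username),
    )
    return [DefaultInfo(files = depset([out]))]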
Note that you can make the attribute mandatory or set a default value. Look at the documentation of attr.string. You may also use other types of attributes, such as boolean or list of integers.
Dependency attributes, such as attr.label and attr.label_list, declare a dependency from the target that owns the attribute to the target whose label appears in the attribute's value. This kind of attribute forms the basis of the target graph.
In the BUILD file, the target label appears as a string object, such as //pkg:name. In the implementation function, the target will be accessible as a Target object. For example, view the files returned by the target using Target.files.
By default, only targets created by rules may appear as dependencies (such as a foo_library() target). If you want the attribute to accept targets that are input files (such as source files in the repository), you can do it with allow_files and specify the list of accepted file extensions (or True to allow any file extension):
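For example, an attrs entry that accepts source files (the .java extension is illustrative):

"srcs": attr.label_list(allow_files = [".java"]),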
The list of files can be accessed with ctx.files.<attribute name> (e.g. ctx.files.srcs).
If you need only one file, use allow_single_file:
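(Again, the extension is illustrative.)

"src": attr.label(allow_single_file = [".java"]),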
This file is then accessible under ctx.file.<attribute name> (e.g. ctx.file.src).
You can create a rule that generates a .cc file based on a template. You could use ctx.actions.write to output a string constructed in the rule implementation function, but this has two problems. First, as the template gets bigger, it becomes more memory-efficient to put it in a separate file and avoid constructing large strings during the analysis phase. Second, using a separate file is more convenient for the user. Instead, use ctx.actions.expand_template, which performs substitutions on a template file.
Create a template attribute to declare a dependency on the template file:
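A sketch along the lines of the upstream Bazel tutorial; the rule name, substitution key, and file extensions are illustrative:

def _hello_world_impl(ctx):
    out = ctx.actions.declare_file("%s.cc" % ctx.label.name)
    ctx.actions.expand_template(
        output = out,
        template = ctx.file.template,
        substitutions = {"{NAME}": ctx.attr.username},
    )
    return [DefaultInfo(files = depset([out]))]

hello_world = rule(
    implementation = _hello_world_impl,
    attrs = {
        "username": attr.string(default = "unknown person"),
        "template": attr.label(
            allow_single_file = [".cc.tpl"],
            mandatory = True,
        ),
    },
)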
Users can use the rule like this:
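(Target names and the template file are illustrative.)

hello_world(
    name = "hello",
    username = "Alice",
    template = "file.cc.tpl",
)

cc_binary(
    name = "hello_bin",
    srcs = [":hello"],
)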
If you don't want to expose the template to the end-user and always use the same one, you can set a default value and make the attribute private:
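(A sketch; attribute names starting with an underscore are private and must have a default.)

"_template": attr.label(
    allow_single_file = True,
    default = "file.cc.tpl",
),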
Enough said. Critically, this ensures that we don't stray outside of the restricted feature-set of the Starlark language (the Buck runtime is currently much more permissive).
Take a look at the doc in antlir/bzl/TARGETS. This is kind of a chore, but it helps kick off the right CI jobs when we edit .bzl files, so it's worth doing.
Ideally, we would just write a linter to do this on our behalf. However, we haven't yet found time.
Note: The vmtest macros have not yet been updated to follow this pattern, help is welcome!
This convention follows fbcode/folly/. One concrete benefit is that it's easier to spot when a python_binary is being used as a library without the -library suffix to reference the implicit library target.
The failure mode here is writing something that is neither clearly a function nor a macro, but a mix.
If you define a module-level mutable variable (e.g. a = []) and mutate it from your macros, this is a sure-fire way to get non-deterministic builds.
The precise reason is that Buck doesn't guarantee order of evaluation of your macros across files, so a macro that updates order-sensitive mutable globals can create non-determinism that breaks target determinators for the entire repo, potentially costing many human-days to triage & fix.
If you're not sure whether some container or traversal is guaranteed to be deterministically ordered in Buck, sort it (or check).
Keep in mind that Buck currently supports at least two frontends for .bzl files: python3 and Starlark (and the default differs between FB-internal and open-source). You must write code that is compatible with both.
To check both, run:
If your macro defines a purely internal target, make sure it's namespaced so that, ideally:
- It does not show up in buck TAB-completion (put your magic in the prefix, not the suffix).
- The magic prefix discourages people from typing it manually into their TARGETS files or .bzl files -- provide an accessor method when this is necessary; see e.g. the FB-internal fetched_layer in fbpkg.bzl.
There are exceptions to this, which are magic target names that we expect users to type as part of a buck command-line on a regular basis. Reference Helper Buck Targets for a list of examples.
There are a lot of failure modes here, from quoting to error-handling, to mis-uses of command substitution via \$(), to mis-uses of $(exe) vs $(location), to errors in cacheability. For now, treat any diff with such code as blocked on a review from @lesha. We need a second domain expert ASAP.
To get a taste of some potential problems, carefully study _wrap_bash_build_in_common_boilerplate and maybe_wrap_runtime_deps_as_build_time_deps. This is not exhaustive.
You know what "$(ls)" does in bash. Now you want this in the bash = field of your genrule. Unfortunately, this is hard. You have to do this two-liner:
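A sketch of the pattern inside a genrule's bash = field (the target label and arguments are hypothetical; the reasons for exe and for the bash array are explained below):

binary_path=( $(exe //some:target) )          # Buck may expand $(exe ...) to multiple shell words.
result=\$( "${binary_path[@]}" --some-arg )   # \$( ) is literal shell command substitution, escaped from Buck.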
Understanding what follows starts with carefully reading the genrule docs.
You have to use exe instead of location because the former will rebuild your genrule if the runtime dependencies of the executable target change, while the latter will only rebuild if the content of the executable changes. Specifically, in @mode/dev, if the executable is a PAR, its content is just a symlink, which never changes, so with location your genrule would never rebuild. Even with C++, you would fail to rebuild on changes to any libraries that are linked into your code, since in @mode/dev those are .sos that are not part of the target's "content".
You have to use a bash array because $(exe) expands to multiple shell words, because Buck (TM). E.g. for PARs, the expansion of $(exe) might look something like python3 "/path to/the actual/binary".
The out field is not user-visible; it is just an implementation detail of the filesystem layout under buck-out. As such, its value does not matter. Unfortunately, Buck requires it. To minimize cognitive overhead and naming discussions, we prefer for it to always say out = "out". Feel free to update legacy callsites as you find them -- there is no risk.
If your macro takes an argument that is a target, and that target might sometimes be an in-repo file, use maybe_export_file.
This shim exists to bridge the differences between the semantics of FB-internal build rules, and those of OSS Buck. If you bypass it, you will either break Antlir for FB-internal users, or for OSS users.
Note that any newly shimmed rules have to follow a few basic practices:
All Buck rules used within Antlir have an antlir_rule kwarg.
You can declare Buck rules in one of three contexts. The context corresponds to the value of the antlir_rule kwarg:
Marking rules "user-internal" is important, since FB on-diff CI only runs builds & test within a certain dependency distance from the modified sources, and "user-internal" targets get excluded from this distance calculation to ensure that the right CI targets get triggered.
To ensure that all user-instantiated ("user-facing" / "user-internal") rules are annotated, un-annotated rules will fail to instantiate from inside a user project. That is, if your rule doesn't set antlir_rule, it defaults to "antlir-private", which triggers _assert_package(), which will fail if the Buck package path does not start with antlir/. This has two desirable effects:
The implementation details and more specific docs can be found in antlir/bzl/oss_shim_impl.bzl.
Shape types should be named with a trailing _t to indicate that they are shape types. Shape instance variable names should conform to the local style conventions.
For example, the type and instance for installing a tarball might look like this:
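A rough sketch, assuming the shape.shape / shape.new helpers in antlir/bzl/shape.bzl (field names and values are illustrative):

tarball_t = shape.shape(
    from_target = str,
    into_dir = str,
)

tarball = shape.new(
    tarball_t,
    from_target = "//some:tarball",
    into_dir = "/some/dir",
)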