This is the submission artifact for “Fun with flags: How Compilers Break and Fix Constant-Time Code”. This archive contains the binaries, libraries, analysis results, tables and table generation scripts used for the paper.

Directory structure:

  • build_setup: folder containing the setup to build the libraries and benchmarks under different combinations of compilers and options, and to run Microwalk
  • clang: benchmark binaries and Microwalk results obtained for Clang (Section 3)
  • gcc: benchmark binaries and Microwalk results obtained for GCC (Section 3)
  • mitigations: benchmark binaries and Microwalk results obtained for our mitigations (Section 5.1)
  • lib: cryptographic libraries used for all our experiments
  • perf_experiment: results for our mitigations’ performance impact evaluation, including the full table for clang-18 O3 (Section 5.2)
  • tables: tables generated for the paper

Building

Libraries and benchmarks can be built in build_setup as follows:

make lib
make bench

params.mk defines the parameters used for building, including the compiler versions, optimization levels, and additional options (for example, using our mitigating set of flags). By default, clang-12, clang-18, gcc-9 and gcc-13 are used and are assumed to be installed.
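For illustration, params.mk could select the build matrix along these lines; the variable names below are hypothetical, so consult the actual file for the names the Makefile expects:

```make
# Hypothetical sketch of params.mk contents -- variable names are illustrative.
COMPILERS  ?= clang-12 clang-18 gcc-9 gcc-13  # compiler versions to build with
OPT_LEVELS ?= O3 Os                           # optimization levels
EXTRA_FLAGS ?=                                # e.g. the mitigating set of flags
```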

Source archives for BearSSL and MbedTLS can be found in build_setup/lib/src/. make lib builds the libraries with the parameters set above: it extracts the source code of each library, applies a configuration, then runs each library’s build system.

The source code for each benchmark can be found in the build_setup/src/benchmark folder. We use only a subset of the original benchmarks for our experiments.

Running Microwalk

The benchmarks can be run using the Python scripts supplied in build_setup.

./run_benchmarks.py -c amd64_clang-18_O3 amd64_clang-18_Os amd64_clang-12_O3 amd64_clang-12_Os -t microwalk -b compilers_study

This runs Microwalk on the binaries built with clang-[12|18] at optimization levels O[3|s]. Microwalk is assumed to be installed as a Docker container, following the procedure in the project’s GitHub repository.
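The configuration names passed via -c follow the amd64_&lt;compiler&gt;_&lt;optlevel&gt; pattern seen above. A minimal Python sketch of how the full default matrix could be enumerated (the compiler and optimization-level lists mirror the defaults mentioned for params.mk; enumerating them this way is our illustration, not part of the repository's scripts):

```python
# Enumerate configuration names for run_benchmarks.py -c.
# The amd64_<compiler>_<optlevel> pattern matches the examples above;
# the itertools-based enumeration itself is purely illustrative.
from itertools import product

compilers = ["clang-12", "clang-18", "gcc-9", "gcc-13"]  # default compilers
opt_levels = ["O3", "Os"]                                # optimization levels

configs = [f"amd64_{cc}_{opt}" for cc, opt in product(compilers, opt_levels)]
print(" ".join(configs))
```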

Performance experiments

Performance experiments use the benchmarks supplied directly with the libraries. They can be run pinned to a specific core as follows (on our device, core 0 is isolated from the scheduler):

taskset 0x1 ./benchmark_perf.sh
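The 0x1 argument to taskset is a CPU affinity bitmask in which bit i selects CPU i. A small Python helper (illustrative only, not part of the repository) shows how a set of cores maps to such a mask:

```python
def taskset_mask(cpus):
    """Build a taskset-style hex bitmask: setting bit i selects CPU i."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return hex(mask)

# Core 0 only, matching the command above:
print(taskset_mask({0}))     # -> 0x1
# Cores 0 and 2 would be:
print(taskset_mask({0, 2}))  # -> 0x5
```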

Generating the tables

The tables for our results can be generated using the supplied Python script:

./generate_table.py

This writes the tables to the tables folder.
