TestProf III: guided and automated Ruby test profiling

Cover for TestProf III: guided and automated Ruby test profiling

The push for a better developer experience (and better developer tools) keeps growing: engineers crave smarter tools and faster feedback loops for maximum productivity. In the Ruby on Rails community, long-running CI/CD builds caused by slow tests are a particularly common pain point. The introduction of TestProf six years ago made Ruby developers' lives easier and their tests much faster. Even though the project has proven its effectiveness and could be considered complete, it has never stopped evolving. Today, we'd like to share our updated test profiling playbook along with new TestProf tools and features!

Other parts:

  1. TestProf: a good doctor for slow Ruby tests
  2. TestProf II: Factory therapy for your Ruby tests
  3. TestProf III: guided and automated Ruby test profiling

We originally began this series more than six years ago, and we've since overhauled the first two installments and updated this one, but the problems we've been discussing regarding slow Ruby/Rails tests (and the solutions we've proposed) are as relevant as ever.

We're two Rails versions and one Ruby version ahead of 2018, but the best way to keep your tests fast is still to use let_it_be instead of let / let!, and before_all instead of before(:each). So, we're not going to repeat ourselves.
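As a one-minute refresher, here's the shape of that change in a typical RSpec group (the `User` model and `:user` factory are hypothetical; `let_it_be` and `before_all` come from TestProf):

```ruby
describe User do
  # Before: `let!` creates (and rolls back) a fresh record for every example
  # let!(:user) { create(:user) }

  # After: `let_it_be` creates the record once per group and reuses it
  let_it_be(:user) { create(:user) }

  # Likewise, `before_all` runs expensive setup once for the whole group,
  # wrapped in a transaction, instead of once per example
  before_all do
    @account = create(:account)
  end
end
```

See the first two parts of this series for the details and the caveats (shared state, transactional wrapping).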

Instead, we'd like to talk about how TestProf and its usage patterns have evolved over time. In this part, we'll introduce our test profiling playbook, show how to automate it with TestProf Autopilot, and go over the latest TestProf features.

Introducing the TestProf playbook

Over the years, we (Evil Martians) have worked on dozens of Rails applications, and, for each of those, the test suite got at least a bit of love (we've even had gigs focused solely on test performance). Unsurprisingly, both the reasons for test slowness and the way we approach these test suites are very similar across the cases we've dealt with, so we've come up with the following framework (or playbook):

  • We start with some eye-profiling: that is, checking the application's configuration and looking through the Gemfile for known perpetrators.
  • We continue by analyzing StackProf samples with Speedscope.
  • Then, we kick into full gear with TestProf’s own profilers: TagProf, EventProf, FactoryProf, and the rest.

The full version of the TestProf playbook is available here: test-prof.evilmartians.io/playbook
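To give a taste of the eye-profiling step: one cheap win we regularly find is verbose logging in the test environment (the playbook covers this and other examples in detail). A minimal sketch of the fix for a Rails app:

```ruby
# config/environments/test.rb
Rails.application.configure do
  # Nobody reads logs during regular test runs, but writing them isn't free.
  # Raising the log level (or silencing the logger) avoids that overhead.
  config.log_level = :fatal
end
```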

Our playbook helps you get started with test profiling. Still, the process requires a bunch of manual steps from the developer. And since we're dealing with slow tests, profiling naturally results in long feedback loops. So, we started looking for ways to minimize human interaction and came up with the idea of putting things on autopilot.


Automating profiling with TestProf Autopilot

Having something described as a formal process is the first step towards automation. So, after formalizing our playbook, we were ready to take this step. This is how the TestProf Autopilot was born.

TestProf Autopilot is a CLI tool (and a gem) that helps execute complex TestProf scenarios. You write a script in Ruby and execute it against your codebase.

For example, this is how you can automate the generation and merging of StackProf reports to see the aggregated results:

# spec/.test-prof/stack_prof_scenario.rb

# First, profile the boot time
run :stack_prof, boot: true, sample: 10

save report, file_name: "stack_prof_boot"

# Then, generate multiple sampled profiles and aggregate the results
aggregate(3) { run :stack_prof, sample: 100 }

save report, file_name: "stack_prof_sample"

And then, you run the script via the auto-test-prof command:

bundle exec auto-test-prof -i spec/.test-prof/stack_prof_scenario.rb

Autopilot performs four runs and saves the reports to the specified locations, all without distracting you from the other tasks you've switched to while profiling runs in the background.

Since scenarios are Ruby scripts, you’re free to use all the power and expressiveness of the language to serve your profiling needs. Let me share another real-life scenario: selecting the files responsible for >50% of the time spent in factories for a given group:

# spec/.test-prof/event_prof_scenario.rb

run :event_prof, event: "factory.create", top_count: 1000

total_run_time = report.raw_report["absolute_run_time"]
total_event_time = report.raw_report["total_time"]

threshold = 0.5 * total_run_time

sum = 0.0
sum_event = 0.0

top_groups = report.raw_report["groups"].take_while do |group|
  next false unless sum < threshold

  sum += group["run_time"]
  sum_event += group["time"]
  true
end

msg = <<~TXT
  Total groups: #{report.raw_report["groups"].size}
  Total time: #{total_event_time} / #{total_run_time}
  Selected groups: #{top_groups.size}
  Selected time: #{sum_event} / #{sum}

  Paths:

TXT

paths = top_groups.map { _1["location"] }

puts msg
puts paths.join("\n")

And this is how you run it:

$ bundle exec auto-test-prof -i spec/.test-prof/event_prof_scenario.rb \
  -c "bundle exec rspec spec/models"

Total groups: 497
Total time: 2210.41 / 2394.49
Selected groups: 26
Selected time: 1118.15 / 1199.34

Paths:

./spec/models/appointment_spec.rb:4
./spec/models/user_spec.rb:3
./spec/models/assessment_spec.rb:3
...

You can see that this scenario helped us narrow down optimization work to only 26 files out of 497—that means that just ~5% of all files were responsible for 50% of the entire run time!
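The selection logic at the heart of that scenario is plain Ruby and easy to reason about in isolation. Here's a standalone sketch with made-up numbers (no TestProf required) showing how take_while accumulates groups until the 50% threshold is crossed:

```ruby
# Hypothetical per-file run times, sorted in descending order
# (the shape EventProf's raw report uses)
groups = [
  { "location" => "./spec/models/a_spec.rb:1", "run_time" => 40.0 },
  { "location" => "./spec/models/b_spec.rb:1", "run_time" => 30.0 },
  { "location" => "./spec/models/c_spec.rb:1", "run_time" => 20.0 },
  { "location" => "./spec/models/d_spec.rb:1", "run_time" => 10.0 },
]

total_run_time = groups.sum { _1["run_time"] } # 100.0
threshold = 0.5 * total_run_time               # 50.0

sum = 0.0
top_groups = groups.take_while do |group|
  # Stop once the files taken so far already cover the threshold
  next false unless sum < threshold

  sum += group["run_time"]
  true
end

puts top_groups.map { _1["location"] }
# the first two files already cover at least 50% of the run time
```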

Although it wasn't part of the original plan, Autopilot became especially useful for dealing with larger codebases, those where running the whole test suite in a single process would take hours. If you take a look at the playbook, you may notice that some steps require full test suite runs, e.g., TagProf analysis.

We already had the functionality to save reports in JSON format and merge them (see the aggregate example above), so we only needed to expose these features to the CLI. As a result, the --merge option was added:

$ auto-test-prof --merge tag_prof --reports tag_prof_*.json

Merging tag_prof reports at tag_prof_1.json, tag_prof_2.json, tag_prof_3.json

[TEST PROF] TagProf report for type

       type          time   factory.create    total  %total   %time           avg

      model     28:08.654        19:58.371     1730   56.44   46.23     00:00.976
    service     20:56.071        16:14.435      808   29.18   28.35     00:01.554
        api     04:48.179        03:54.178      214    7.32    4.78     00:01.346
        ...

But how can we collect the reports? Just run TestProf Autopilot on CI and store the reports as artifacts! Here is an example scenario:

run :tag_prof, events: ["factory.create"]

save report, file_name: "tag_prof_#{ENV["CI_NODE_INDEX"]}"

You may still need to download all the files manually, which is not ideal. One of the possible workarounds we’re considering adding in future releases is the #upload method; this would store reports in the cloud (maybe, even TestProf Cloud 😉).

TestProf Autopilot has been an internal project for quite a while and has become our trusted companion when it comes to test optimization tasks. It’s still in (not-so) active development but nevertheless usable enough to be revealed to the public—so, please, give it a try and share your thoughts (and issues)!

Now, let's see what else we've prepared in this TestProf release.

More features and futures

TestProf hit v1.0 more than three years ago; it's stable and mature software. But maturity doesn't mean we have nothing new to show you. Here are the highlights of the recent releases, v1.3.0 and v1.4.0.

We introduced a brand new profiler—MemoryProf. It helps identify examples and groups responsible for RSS usage spikes or a high number of object allocations.

You may ask, "Why should we care about memory usage in tests?" First, to optimize your CI infrastructure (self-hosted or not, you still pay for it). Second, this kind of profiling can help you identify production memory issues, too. Give it a try and see what insights it reveals!
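To build an intuition for what allocation tracking means, here's a rough, dependency-free illustration of counting the objects allocated by a block of code. This is just the underlying idea, not MemoryProf's actual implementation:

```ruby
# Count objects allocated while a block runs, using Ruby's built-in GC stats
def allocations_during
  GC.disable # avoid objects being collected mid-measurement
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
ensure
  GC.enable
end

allocated = allocations_during do
  10.times.map { "a" * 100 } # allocate some strings and an array
end

puts allocated # some positive number of allocated objects
```

MemoryProf does this kind of bookkeeping per example and per group, so you can pinpoint exactly which tests are responsible for the spikes.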

Another new feature we're excited about is Vernier integration. Vernier is a next-generation Ruby sampling profiler with better multi-threading, GVL, and GC event support. One of my favorite Vernier features is its Active Support Notifications integration: now you can see Rails events alongside stack traces. I recommend taking a look at "load_config_initializer.railties" to see why your test suite boots slowly. Another Vernier feature is its compatibility with the Firefox Profiler UI. Here's how I used it to find the cause of a sudden slowdown in the test suite (spoiler: it was Sentry.capture_message):

Demo of using Firefox Profiler to analyze Vernier reports

So, what's next for TestProf? We don't have big plans for v2 (okay, I do, but no timeline yet). Instead, we're focusing on expanding the TestProf universe and bringing in new test-oriented tools. For example, we released Mock Suey, a toolkit to make mocks more reliable and avoid false positives.

We plan to further automate the TestProf playbook and provide something like test-prof-audit, a tool for analyzing your configuration and dependencies and suggesting optimizations (see the logging and Wisper examples in the playbook).

Finally, speaking of automation, we couldn't resist joining the AI hype train, and we've already started the AI-ification of TestProf—and it's going amazingly well!

Stay tuned and join the faster tests movement! (Or you can always support us!)


Playbook.com, StackBlitz.com, Fountain.com, Monograph.com: we've joined best-in-class startup teams running on Rails to help them speed up, scale up, and win! We solve performance problems, ship fast while keeping applications maintainable, and help teams upskill. Hire the most skilled team for working with Rails startups!