When to test DSP code

Writing solid DSP code is tricky enough. Does writing tests for audio even make sense?

Yes! DSP’s implicit emphasis on optimization can trick programmers into thinking they shouldn’t bother with organized, tested and performant code. But DSP code is absolutely ideal for something like unit tests: it’s a bunch of numbers!

Digital Spaghetti Processing?

Writing performant DSP can feel like a free pass to write some of that good ole’ spaghetti code:

  • Don’t add too much function overhead!
  • Better not refactor, it might change the performance profile!
  • It’s ok to leave it like that, it’ll never be touched again!
  • Don’t name your variables anything longer than two characters!

Ok, that last one has nothing to do with performance and might make sense if you are referencing specific math, but… DSP code tends to rival PHP for the most 1990s low-hygiene code on the internet. I blame the academics! 🧑‍🏫

The problem is that brittle code makes refactoring and adding new features risky. Every change dooms you to manually checking and QAing everything to see what might have broken. That’s a significant deterrent to improving code, maintaining code, building upon code, etc.

Insert <Root of all evil>

Obviously we all care about performance, but that’s no reason to preemptively sacrifice readability, maintainability, or testability. “Let’s hope it performs well and then never touch it” isn’t a sustainable strategy.

If you are anxious about performance, there’s only ever one prescription: measure it. Only then will you have enough information to make an informed decision. Then you can happily keep your code in spaghetti format.

Caveat: remember that DSP is extremely sensitive to memory layout. Going to the other extreme and wrapping everything in nicely named nested classes that require loops within loops to access a few floats is going too far! In other words, design your code to perform well AND be testable (not just the latter).
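
For example, keeping the hot loop a free function over flat, contiguous memory gives you both: cache-friendly access and something a test can call directly. A minimal sketch (the filter and all names here are illustrative, not from any particular library):

```cpp
#include <cstddef>

// Hot loop as a free function over contiguous floats: cache-friendly,
// and a test can call it directly with a small buffer and known state.
// One-pole lowpass: y[n] = y[n-1] + coeff * (x[n] - y[n-1]).
inline void onePoleLowpass (float* samples, std::size_t numSamples,
                            float coeff, float& state)
{
    for (std::size_t i = 0; i < numSamples; ++i)
        state = samples[i] = state + coeff * (samples[i] - state);
}
```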

End-to-end / null testing / integration testing for DSP

Most of this article is about unit tests. But sometimes you just want to feed a whole signal in to verify the output of your algorithm.

In other paradigms, this could be called end-to-end testing or integration testing. In audio, this translates to setting up your parameters in a certain way, running some audio (or playing a note) and verifying the output.

This is also a great way to regression test an entire algorithm. You’ll get yelled at when the output changes for any reason.

The best way to do this with DSP code is null testing: run your test, sum the output with an inverted copy of the expected result, and assert the residual is within some threshold of 0.
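
A rough sketch of that check in plain C++ (the helper name and threshold are mine, not from any library):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Null test: the actual render minus the expected render should be
// (near) silence. Returns true when the peak residual is under threshold.
inline bool nullsAgainst (const std::vector<float>& actual,
                          const std::vector<float>& expected,
                          float threshold = 1.0e-6f)
{
    if (actual.size() != expected.size())
        return false;

    float peakResidual = 0.0f;

    for (std::size_t i = 0; i < actual.size(); ++i)
        peakResidual = std::max (peakResidual, std::abs (actual[i] - expected[i]));

    return peakResidual <= threshold;
}
```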

Marius’ plugalyzer is a great tool to help with this. It’s a CLI DAW that even lets you render automation.

Writing unit tests for DSP

green feels great

A unit test just tests a small piece of code. In an ideal world, this is one function that performs one task.

Examples:

  • Where a write pointer is after X samples (sketched after this list).
  • Whether or not a function returns true under certain conditions.
  • How a piece of the code reacts when there’s no signal.
  • The contents of a buffer after some operation.
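
Here’s what the first example might look like as a Catch2 test. The DelayLine is a made-up, bare-bones circular buffer standing in for your real unit under test:

```cpp
#include <catch2/catch_test_macros.hpp>
#include <cstddef>
#include <vector>

// Hypothetical unit under test: a bare-bones circular buffer.
struct DelayLine
{
    explicit DelayLine (int capacity) : buffer ((std::size_t) capacity, 0.0f) {}

    void pushSample (float sample)
    {
        buffer[(std::size_t) writePosition] = sample;
        writePosition = (writePosition + 1) % (int) buffer.size();
    }

    int writePosition = 0;
    std::vector<float> buffer;
};

TEST_CASE ("Write pointer wraps around the buffer")
{
    DelayLine delay { 8 };

    for (int i = 0; i < 10; ++i)
        delay.pushSample (0.0f);

    // 10 writes into an 8-sample buffer: the pointer wraps around to index 2.
    REQUIRE (delay.writePosition == 2);
}
```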

I personally like writing dsp unit tests because they:

  1. Immediately let me know when and where I broke something (vs. waiting for it to be “discovered” and hunting down the “where”).
  2. Remove refactoring risk, fear and friction.
  3. Provide additional documentation and clarity for Future-Me around choices that would otherwise be implicit or hidden.
  4. Make it trivial to confidently reuse and extend code.

Ok, the rest of the article is about WHEN I like to unit test. Hopefully this will help you identify places to get started writing a test or two, as tests are really helpful in these situations.

I write tests when I catch myself printing numbers to cout

Printing to cout (DBG in JUCE) to manually verify numbers is a pretty common way to sanity check or debug things. However, if I notice that I keep printing out numbers throughout a session, I immediately switch to tests.

20 minutes of test writing will save me hours of poking around now and in the future. Tests are the perfect tool for nailing down numerical requirements and making sure they work for all permutations and edge cases.

Examples:

  1. Converting between float values and UI-displayable strings like “20ms” (some examples; sketched after this list).
  2. A buffer should be filled to sample number 256 and should be empty after that sample (see my JUCE test helpers).
  3. There should never be an audio discontinuity.
  4. A value in the buffer should never go above 1.0.
  5. A filter should reduce a frequency by a specific amount (the “ground truth” here can be arbitrary; you’re just explicitly codifying something you want, like, or think sounds good).
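
A sketch of the first example as a Catch2 test. millisecondsToString is a hypothetical helper, not from JUCE or any other library:

```cpp
#include <catch2/catch_test_macros.hpp>
#include <string>

// Hypothetical helper under test: format milliseconds for the UI.
// A real version might also switch between "ms" and "s" at some cutoff.
static std::string millisecondsToString (float ms)
{
    return std::to_string (static_cast<int> (ms + 0.5f)) + "ms";
}

TEST_CASE ("Milliseconds format for UI display")
{
    CHECK (millisecondsToString (20.0f) == "20ms");
    CHECK (millisecondsToString (19.6f) == "20ms"); // rounds, doesn't truncate
    CHECK (millisecondsToString (0.0f)  == "0ms");
}
```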

I write tests when juggling sample rates and buffer sizes

I manually tried the code on a few sample rates. It should work on the rest, right…? – Me, every time.

Why hope when I can verify? With tests running against every sample rate, buffer size and platform, I can see under which conditions things break. If the tests weren’t able to catch the problem this time, I can sprinkle in some new ones to act as future insurance.
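
Catch2’s GENERATE makes this kind of combinatorial coverage cheap: the test below runs once per sample rate and block size combination. millisecondsToSamples is a made-up helper standing in for whatever conversion your engine actually does:

```cpp
#include <catch2/catch_test_macros.hpp>
#include <catch2/generators/catch_generators.hpp>

// Hypothetical unit under test: convert a duration to a sample count.
static int millisecondsToSamples (double ms, double sampleRate)
{
    return static_cast<int> (ms * sampleRate / 1000.0 + 0.5);
}

TEST_CASE ("Conversions hold at every sample rate and block size")
{
    // Catch2 re-runs this test body once per value of each generator.
    const double sampleRate = GENERATE (44100.0, 48000.0, 88200.0, 96000.0, 192000.0);
    const int    blockSize  = GENERATE (32, 64, 128, 256, 512, 1024);

    // 1000ms must always equal one second of samples...
    REQUIRE (millisecondsToSamples (1000.0, sampleRate) == (int) sampleRate);

    // ...and a single block should never hold more than a second of audio.
    REQUIRE (blockSize < millisecondsToSamples (1000.0, sampleRate));
}
```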

Without tests, I might not catch the problem soon enough to know which change introduced it. Someone else (a user) might catch the problem for me. I might have to work backwards from the UI just to reproduce the conditions.

I write tests when I’m threading the logical needle

Juggling myriad logical requirements in my head is prone to error. Like it or not, trees of conditionals get blurry.

Writing tests helps me get crystal clear about requirements and exposes the edge cases. It encourages me to think about code branching more discretely and cleanly, and lets me thread the logical needle with more ease.

Bonus: the tests act as documentation, so future me (or others) can see how exactly the system is expected to behave. It’s there, in my editor, clear as day, and there’s no need to second guess when revisiting the code 6 months from now.
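
For instance, a few Catch2 SECTIONs can pin every branch down explicitly. The gate logic below is hypothetical, just to show the shape:

```cpp
#include <catch2/catch_test_macros.hpp>

// Hypothetical branchy logic worth documenting in tests.
static bool shouldOpenGate (float level, float threshold, bool bypassed)
{
    if (bypassed)
        return true;            // bypass always lets signal through

    return level >= threshold;  // otherwise, compare against the threshold
}

TEST_CASE ("Gate branching is explicit")
{
    SECTION ("bypass wins regardless of level")
    {
        REQUIRE (shouldOpenGate (0.0f, 0.5f, true));
    }

    SECTION ("a level exactly at threshold counts as open")
    {
        REQUIRE (shouldOpenGate (0.5f, 0.5f, false));
    }

    SECTION ("below threshold stays closed")
    {
        REQUIRE_FALSE (shouldOpenGate (0.49f, 0.5f, false));
    }
}
```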

I write tests when I know I’ll need to refactor

What happens when I need to add another feature to my synth engine, such as pitch modulation?

How do I make sure I didn’t break anything critical while moving code around?

When I do inevitably break something, how will I know which piece of the code actually broke?

Answer: Tests. (The alternative is excessive trial and error!) Being able to rely on the test harness to prove a refactor didn’t break anything is a great feeling.

When I’m too scared to refactor

Feeling safe enough to refactor the hairiest code is something I only experience with good coverage at my back.

The best thing about the fear of refactoring: I know that once I put in the work to add the first few tests, the fear dissipates.

When I want to reliably trigger edge cases

Dropping into a debugger is very “whack-a-mole”, especially for real-time audio. Tossing in temporary lines of code to trigger conditions, recompiling, setting the UI in just the right way, etc… lots of trial and error.

Once edge cases are known, I like to codify them in tests. Not only does this document them, it makes them instantly reproducible.
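
A sketch of what a codified edge case can look like. The soft clipper here is a stand-in for your actual unit under test:

```cpp
#include <catch2/catch_test_macros.hpp>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical unit under test: a soft clipper.
static void processBlock (float* samples, std::size_t numSamples)
{
    for (std::size_t i = 0; i < numSamples; ++i)
        samples[i] = samples[i] / (1.0f + std::abs (samples[i]));
}

TEST_CASE ("Silence in, silence out")
{
    std::vector<float> buffer (512, 0.0f); // the edge case: no signal at all

    processBlock (buffer.data(), buffer.size());

    for (float sample : buffer)
        REQUIRE (sample == 0.0f); // nothing should leak into the output
}
```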

The next time I have a bug or feature in the same area, everything is already set up: I can immediately drop into a debugger session right from that test.

How to get started writing DSP tests

If you aren’t testing audio yet, I recommend trying out Catch2 for C++. You can also come hang out in the #testing-and-profiling channel on the Audio Programmer Discord.

If you are getting started with JUCE, check out my Catch2 on GitHub Actions project template on GitHub and have a peek at the Catch2 matchers I use with JUCE AudioBlocks.

