Writing solid dsp code is tricky enough. Does writing tests for audio even make sense?
Yes! The emphasis on optimization in dsp can trick programmers into thinking it’s not possible to create organized, tested and performant code. But dsp code is absolutely ideal for unit tests.
Writing performant dsp can feel like a free pass to write some of that good ole’ spaghetti code:
Don’t add too much function overhead! Better not refactor, it might change the performance profile! Better safe than sorry! It’s ok to leave it like that, it’ll never be touched again! Don’t name your variables anything longer than two characters!
Ok, that last one has nothing to do with performance and might make sense if you are referencing specific math, but still…
Brittle code makes refactoring and adding new features risky. Each and every change dooms you to manually check everything to see what might have broken. This can be a significant deterrent to improving code, maintaining code, building upon code, etc.
Just to get this out of the way: Being concerned about performance is not a good reason to preemptively sacrifice readability, maintainability, or testability. “Let’s hope it performs well and then never touch it” isn’t a sustainable performance strategy.
If you are anxious about performance, measure it and measure the alternative. Only then will you have enough information to make an informed decision. Perhaps that decision will be to keep your code in spaghetti format. If so, please angrily @ me on twitter.
I personally like writing dsp tests because they:
- Immediately let me know when and where I broke something (vs. waiting for it to be “discovered” and hunting down the “where”).
- Remove refactoring risk, fear and friction.
- Provide additional documentation and clarity for Future-Me around choices that would otherwise be implicit or hidden.
- Make it trivial to confidently reuse and extend code.
Enough preachin’. The rest of the article covers when I personally consider tests mandatory: the contexts where I know from experience I’m doing myself a disservice if I don’t write them.
Printing out values (with DBG in JUCE) to manually verify numbers is a pretty common way to sanity check or debug things. However, if I notice that I keep printing out numbers throughout a session, I immediately switch to tests.
20 minutes of test writing will save me hours of poking around now and in the future. Tests are the perfect tool for nailing down numerical requirements and making sure they work for all permutations and edge cases.
- Converting between float values and UI-displayable strings like “20ms” (some examples).
- A buffer should be filled to sample number 256 and should be empty after that sample (see my JUCE test helpers).
- There should never be an audio discontinuity.
- A value in the buffer should never go above 1.0.
- A filter should reduce a frequency by a specific amount (the “ground truth” here can be arbitrary, just explicitly codifying something you want, like, or think sounds good).
I manually tried the code on a few sample rates. It should work on the rest, right…? – Me, every time.
Why hope when I can verify? With tests running against every sample rate, buffer size and platform, I can see under which conditions things break. If the tests weren’t able to catch the problem this time, I can sprinkle in some new ones to act as future insurance.
Without tests, I might not catch the problem soon enough to know which change introduced it. Someone else (a user) might catch the problem for me. I might have to work backwards from the UI just to reproduce the conditions.
Juggling myriad logical requirements in my head is prone to error. Like it or not, trees of conditionals get blurry.
Writing tests helps me get crystal clear about requirements and exposes the edge cases. Writing tests encourages me to think about code branching more discretely and cleanly. It lets me thread the logical needle with more ease.
Bonus: the tests act as documentation, so future me (or others) can see how exactly the system is expected to behave. It’s there, in my editor, clear as day, and there’s no need to second guess when revisiting the code 6 months from now.
What happens when I need to add another feature to my synth engine, such as pitch modulation?
How do I make sure I didn’t break anything critical while moving code around?
When I do inevitably break something, how will I know which piece of the code actually broke?
Answer: Tests. (Or: excessive trial and error and time.) Being able to rely on the test harness to prove a refactor didn’t break anything is a great feeling.
Feeling safe to refactor the hairiest code is something I only feel with good coverage at my back.
The best thing about the fear of refactoring: I know that once I put in the work to add the first few tests, the fear dissipates.
Dropping into a debugger is very “whack-a-mole”, especially for real time audio. Tossing in temporary lines of code to trigger conditions, recompiling, setting the UI in just the right way, etc… lots of trial and error.
Once edge cases are known, I like to codify them in tests. Not only does this document them, but makes them instantly reproducible.
The next time I hit a bug or need a feature in the same area, everything is already set up: I can immediately drop into a debugger session right from that test.
If you are getting started with JUCE, check out my Catch2 on GitHub Actions project template on GitHub and have a peek at the Catch2 matchers I use with JUCE AudioBlocks.