At the moment we’re in a round of optimising Arva-js, and it’s going almost surprisingly well. We are trying to improve performance to ensure a smooth UX and high-FPS animations. Some changes make a big difference, but of course not all changes are improvements. To see the impact of the changes we push, we often use the Chrome developer tools. But these tools have quite a few limitations.
So we tried to measure FPS by sound.
We have been trying to figure out how hardware acceleration affects performance, but with the built-in Chrome FPS meter it’s not immediately clear which setup is faster when switching between views or scrolling. One annoying aspect is that the Chrome meter actually slows the application down. Another issue is that when no animations are running, the meter drops to 0, falsely reporting the FPS.
To examine two different scenarios you can export your data and compare, but the app scenarios have to be completely identical for the measurement to be accurate. It can also be hard to tell from the graphs which scenario performed better. And the main drawback is still that there is no live feedback that reveals a difference in performance.
There are many possible ways of fixing these problems, but we went with the fun one: measuring FPS with sound. We wanted to experiment with audio feedback for measuring performance, because hearing is inherently bound to time in a different way than vision is. Without going into too much detail: sound needs time to exist. Sound waves depend on time in order to vibrate.
Humans are also quite good at comparing sounds to one another. Speaking and listening, but also playing and enjoying music, all train your hearing. So listening to a sound to measure and compare FPS seemed feasible to us!
But how did we do it?
We started by using the Web Audio API to modulate the frequency of a note with the current FPS.
An oscillator is what produces the note for us. The ‘sawtooth’ type refers to the shape of the sound wave (you can check out the shape of a sawtooth here).
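As a rough sketch of this first step (the function names and the 100–1060 Hz range are our own choices, not from the original code), the idea is to update a running sawtooth oscillator’s frequency on every animation frame:

```javascript
// Pure mapping: scale FPS (0-60) into an audible range, e.g. 100-1060 Hz.
// The exact range is an assumption for illustration.
function fpsToFrequency(fps) {
  return 100 + fps * 16; // 60 FPS -> 1060 Hz
}

// Browser-only wiring, guarded so the pure helper above also runs under Node.
if (typeof AudioContext !== 'undefined') {
  const context = new AudioContext();
  const oscillator = context.createOscillator();
  oscillator.type = 'sawtooth';
  oscillator.connect(context.destination);
  oscillator.start();

  let lastTime = performance.now();
  function onFrame(time) {
    // Derive FPS from the time elapsed since the previous frame.
    const fps = 1000 / (time - lastTime);
    lastTime = time;
    oscillator.frequency.value = fpsToFrequency(fps);
    requestAnimationFrame(onFrame);
  }
  requestAnimationFrame(onFrame);
}
```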
This is the result, and the first time you will hear your FPS expressed in sound!:
To make the sound more bearable, we add some reverb built with soundbank-reverb. Without going into detail about what it does exactly: it gives the sound a bit more echo and simply makes it sound better.
Let’s also try emitting notes separately, to make the rhythm distinguishable as a secondary aid in hearing the pulse of frames being processed. We do this by playing a sound every 3 frames.
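A minimal sketch of the pulsed approach could look like this. The helper names are hypothetical, and the 50 ms note length is our own choice; the only detail taken from the text is that a note fires every third frame:

```javascript
const FRAMES_PER_NOTE = 3; // from the text: one note every 3 frames

// Pure helper: should a note fire on this frame?
function shouldPlayNote(frameCount) {
  return frameCount % FRAMES_PER_NOTE === 0;
}

// Browser-only wiring, guarded so the pure helper also runs under Node.
if (typeof AudioContext !== 'undefined') {
  const context = new AudioContext();
  let frameCount = 0;

  // Fire a short blip at the given frequency.
  function playNote(frequency) {
    const oscillator = context.createOscillator();
    oscillator.type = 'sawtooth';
    oscillator.frequency.value = frequency;
    oscillator.connect(context.destination);
    oscillator.start();
    oscillator.stop(context.currentTime + 0.05); // 50 ms blip
  }

  function onFrame() {
    frameCount++;
    if (shouldPlayNote(frameCount)) {
      playNote(440); // in practice the pitch comes from the measured FPS
    }
    requestAnimationFrame(onFrame);
  }
  requestAnimationFrame(onFrame);
}
```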
We also make it follow the pentatonic scale. The pentatonic scale is a set of notes known for producing catchy melodies in just about any sequence. There’s some boilerplate code for going from a note to a frequency; the mathematical formula for retrieving the frequency of a note comes from equal temperament tuning. Let’s check out the code for makeOscillator:
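The original makeOscillator code isn’t reproduced here, so the following is a reconstruction under our own naming, assuming the major pentatonic scale and the standard equal temperament anchor of A4 = 440 Hz (each semitone step multiplies the frequency by 2^(1/12)):

```javascript
const A4 = 440; // standard equal temperament reference pitch, in Hz

// Equal temperament: f = 440 * 2^(n / 12), with n semitones away from A4.
function semitoneToFrequency(semitonesFromA4) {
  return A4 * Math.pow(2, semitonesFromA4 / 12);
}

// Major pentatonic scale: semitone offsets within one octave.
const PENTATONIC = [0, 2, 4, 7, 9];

// Map a scale degree (0, 1, 2, ...) onto the pentatonic scale,
// wrapping into the next octave every five degrees.
function pentatonicFrequency(degree) {
  const octave = Math.floor(degree / PENTATONIC.length);
  const step = PENTATONIC[degree % PENTATONIC.length];
  return semitoneToFrequency(octave * 12 + step);
}

// makeOscillator as we imagine it: build a sawtooth oscillator at a given
// frequency, ready to be started and stopped for a single note.
function makeOscillator(context, frequency) {
  const oscillator = context.createOscillator();
  oscillator.type = 'sawtooth';
  oscillator.frequency.value = frequency;
  oscillator.connect(context.destination);
  return oscillator;
}
```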
The result looks and sounds like this:
The steady high beep is 60 FPS. We notice that higher notes tend to stick out more, so we reverse the mapping and let higher FPS values produce the lower notes. It’s much harder to hear the difference between really low notes, but that works in our favour: it doesn’t really matter whether we’re at 55 or 60 FPS, while the difference between 10 and 15 FPS is much more crucial.
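The reversal can be sketched as a mapping from FPS to a scale degree, where 60 FPS picks the lowest note and dropping FPS climbs the scale. The number of degrees we spread the range over is our own choice for illustration:

```javascript
const MAX_FPS = 60;
const DEGREES = 20; // how many scale degrees cover the FPS range (our assumption)

// 60 FPS -> degree 0 (lowest, most comfortable note);
// 0 FPS -> the highest degree, so severe drops jump out as shrill notes.
function fpsToDegree(fps) {
  const clamped = Math.max(0, Math.min(MAX_FPS, fps));
  return Math.round((1 - clamped / MAX_FPS) * (DEGREES - 1));
}
```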
60 FPS now sounds like a steady and slightly more comfortable buzz, and FPS drops are easier to identify as spikes. We put this to the test by comparing different layering setups side by side. The first test is switching screens (switch_screens). The first setup uses one single layer, and the second an individual layer for each HTML node.
We hear that the second setup stutters more, reaches higher notes, and takes a few ticks longer to stabilise. We also compare scrolling. Again, the first example uses a single-layer setup, and the latter uses as many layers as possible.
Now we clearly hear the difference. Not promoting any layers makes it a lot smoother, as can be heard from the melody playing in a higher register in the latter half of the video snippet.
Although this experiment showed that avoiding layer compositing is better for performance here, Chrome implemented layers for a reason. The problem is promoting too many layers. Google warns developers not to promote layers excessively, and here we can hear why. The ideal approach is most likely to identify a few key elements to promote, which is something we will look into in upcoming versions of Arva.