In these proceedings we demonstrate some advantages of a top-down approach to the development of hardware-accelerated code.
We start from an autogenerated, hardware-agnostic Monte Carlo generator, which is parallelized along the event axis. This allows us to take advantage of the parallelizable nature of Monte Carlo integrals even when we do not control the hardware on which the computation will run (e.g., an external cluster).
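As a minimal sketch of what parallelization along the event axis means in practice, the toy example below evaluates the integrand on a whole batch of events at once instead of looping event by event, leaving the actual parallelization to the underlying vectorized library. The names (\texttt{integrand}, \texttt{mc\_integrate}) and the toy integrand are purely illustrative and not taken from the generator discussed in the text.
\begin{verbatim}
import numpy as np

def integrand(x):
    # x has shape (n_events, n_dim); one row per event.
    # Toy stand-in for a (much more complex) matrix-element evaluation.
    return np.exp(-np.sum(x**2, axis=1))

def mc_integrate(n_events=10**6, n_dim=4, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random((n_events, n_dim))   # all events generated at once
    weights = integrand(x)              # vectorized over the event axis
    return weights.mean(), weights.std() / np.sqrt(n_events)

result, error = mc_integrate()
print(f"I = {result:.6f} +/- {error:.6f}")
\end{verbatim}
Because the per-event evaluations are independent, the same batched call can be dispatched to a multi-core CPU or a GPU without changing the integration logic.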
The generic nature of such an implementation can introduce spurious bottlenecks or overheads.
Fortunately, said bottlenecks are usually restricted to a subset of operations rather than to the whole vectorized program. By identifying the most critical parts of the calculation one can obtain very efficient code while minimizing the amount of hardware-specific code that needs to be written. We show benchmarks demonstrating how simply reducing the memory footprint of the calculation can increase the performance of a $2 \to 4$ process.
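The sketch below illustrates one generic way a memory footprint can be reduced without touching the physics: processing the events in chunks so that only the running sums, rather than every per-event intermediate for the full sample, are kept in memory. This is an assumption-laden toy example, not the optimization applied to the $2 \to 4$ process benchmarked here; all names (\texttt{integrand}, \texttt{chunk\_size}) are hypothetical.
\begin{verbatim}
import numpy as np

def integrand(x):
    # Toy stand-in for a matrix-element evaluation over a batch of events.
    return np.exp(-np.sum(x**2, axis=1))

def mc_integrate_chunked(n_events=10**7, n_dim=8, chunk_size=10**5, seed=0):
    rng = np.random.default_rng(seed)
    total, total_sq, done = 0.0, 0.0, 0
    while done < n_events:
        n = min(chunk_size, n_events - done)
        # Only one chunk of events lives in memory at any time.
        w = integrand(rng.random((n, n_dim)))
        total += w.sum()
        total_sq += (w**2).sum()
        done += n
    mean = total / n_events
    err = np.sqrt((total_sq / n_events - mean**2) / n_events)
    return mean, err
\end{verbatim}
Keeping the working set small in this way can avoid spilling out of fast device memory, which is one mechanism by which a lower memory footprint can translate into higher throughput.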