Optimization problems in HEP often involve maximizing a measure of how sensitive a given analysis is to one hypothesis relative to another; the latter is referred to as the \emph{null} hypothesis and, in a frequentist framework, is tested against the former, referred to as the \emph{alternative} hypothesis.
In most cases it is desirable to compute the expected frequentist significance in full, accounting for all sources of systematic uncertainty, and to interpret the result as the true sensitivity of the analysis to the sought effect.
Sometimes, however, computational or conceptual reasons can favour the use of different or approximate figures of merit, often collectively called ``pseudosignificances'', whose properties vary depending on the relationship between the hypotheses being tested.
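As an illustration of what such figures of merit look like, the sketch below computes two pseudosignificances that are commonly used for a simple counting experiment with expected signal $s$ and background $b$: the naive $S/\sqrt{B}$ and the Asimov median discovery significance of Cowan et al. The numbers are illustrative and not taken from this work.

```python
import math

def z_simple(s, b):
    """Naive pseudosignificance S/sqrt(B)."""
    return s / math.sqrt(b)

def z_asimov(s, b):
    """Median discovery significance evaluated on the Asimov dataset,
    Z_A = sqrt(2 ((s+b) ln(1 + s/b) - s))."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# In the small-s/b regime the two nearly agree ...
print(z_simple(10, 100))   # 1.0
print(z_asimov(10, 100))   # ~0.984
# ... while for s comparable to b they diverge, S/sqrt(B) being optimistic.
print(z_simple(10, 10))    # ~3.16
print(z_asimov(10, 10))    # ~2.78
```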
This work will review the most common definitions of sensitivity (pseudosignificances) and compare them with fully frequentist significances computed in toy analyses spanning a spectrum of typical HEP use cases. A connection will be made with the concept of Bayes factor, and evidence values from Bayesian significance tests will be studied and evaluated in the same toy cases, in an attempt to build an improved, condition-independent approximate figure of merit. Finally, an attempt will be made to transport to typical HEP cases a Bayesian solution to the on-off problem developed in an astrophysics context.
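For the simplest setting relevant to the comparison above, the Bayes factor can be made concrete: when both hypotheses are simple (fixed $s$ and $b$, no priors over parameters), the Bayes factor for a Poisson counting experiment reduces to the likelihood ratio $B_{10} = P(n \mid s+b)/P(n \mid b)$. The sketch below is a minimal illustration with made-up numbers, not the construction used in this work.

```python
import math

def poisson_logpmf(n, mu):
    """log P(n | mu) for a Poisson distribution, via lgamma for stability."""
    return n * math.log(mu) - mu - math.lgamma(n + 1)

def bayes_factor(n_obs, s, b):
    """B_10 = P(n_obs | s+b) / P(n_obs | b); for simple-vs-simple
    hypotheses this equals the likelihood ratio."""
    return math.exp(poisson_logpmf(n_obs, s + b) - poisson_logpmf(n_obs, b))

# Observing n = 15 with b = 10 expected background and s = 5 hypothesized
# signal gives only modest evidence for the alternative:
print(bayes_factor(15, 5, 10))   # ~2.95
```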