Lattice QCD is notorious for its computational expense. Modern lattice simulations require
large-scale computational resources to handle the large number of Dirac operator inversions
needed to construct correlation functions. Machine learning (ML) techniques that increase, at the
analysis level, the information extracted from these correlation functions would therefore be valuable.
We apply supervised learning to infer two-point lattice correlation functions at different target
masses. We propose a new method for separating the data into training and bias-correction
subsets for efficient uncertainty estimation, and we benchmark our ML models against a simple
ratio method.
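The bias-correction idea mentioned above can be illustrated with a minimal sketch: a model's mean prediction on unlabeled configurations is shifted by the average difference between exact and predicted values on a held-out labeled subset. This is only an illustration under our own assumptions (the function name, the toy data, and the 80/20 split are ours, not the paper's implementation):

```python
import numpy as np

def bias_corrected_estimate(pred_unlabeled, pred_bc, true_bc):
    """Combine ML predictions with a bias-correction term.

    pred_unlabeled : predictions on configurations without exact measurements
    pred_bc        : predictions on the bias-correction subset
    true_bc        : exact measurements on the bias-correction subset
    """
    ml_mean = pred_unlabeled.mean()
    bias = (true_bc - pred_bc).mean()  # average prediction error on labeled data
    return ml_mean + bias

# Toy data: predictions systematically offset from the truth by 0.1.
rng = np.random.default_rng(0)
true_vals = rng.normal(1.0, 0.05, size=1000)
preds = true_vals + 0.1

# Use 800 samples as "unlabeled" predictions, 200 as the bias-correction set.
est = bias_corrected_estimate(preds[:800], preds[800:], true_vals[800:])
```

In this toy case the correction term exactly cancels the constant offset, so the estimate reduces to the mean of the underlying true values on the unlabeled set. In practice the uncertainty of both terms would be propagated together, e.g. via bootstrap resampling.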