Generative models, such as normalizing flows, have been suggested as alternatives to the standard algorithms for generating lattice gauge field configurations. Studies with normalizing flows have demonstrated a proof of principle for simple models in two dimensions.
However, further studies indicate that the training cost can, in general, be very high for large lattices. The poor scaling behavior of current models suggests that moderate-size networks cannot efficiently capture the inherently multi-scale aspects of the problem, especially near critical points. We explore current models, which exhibit limited acceptance rates for large lattices, and examine
new architectures inspired by effective field theories to improve the scaling behavior. We also discuss alternative ways of handling low acceptance rates on large lattices.
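
To make the role of the acceptance rate concrete, the following is a minimal sketch, not taken from this work, of the independence-Metropolis step used in flow-based sampling. An untrained i.i.d. Gaussian density stands in for a trained flow, and a toy two-dimensional scalar phi^4 action stands in for the gauge theories discussed above, so the absolute numbers are illustrative only; the point is how the overlap between proposal and target, and hence the acceptance rate, degrades as the lattice grows.

    # Minimal sketch (not this work's code): independence-Metropolis step
    # used in flow-based sampling.  A trained normalizing flow would supply
    # proposals phi together with their exact log-density log q(phi); here an
    # i.i.d. Gaussian stands in for the flow so only NumPy is needed.  The
    # target is a toy 2D scalar phi^4 lattice action, not a gauge theory.
    import numpy as np

    rng = np.random.default_rng(0)

    def phi4_action(phi, m2=-4.0, lam=8.0):
        # Euclidean action on a periodic L x L lattice (one common convention).
        kinetic = sum(((phi - np.roll(phi, 1, axis=d)) ** 2).sum() for d in (0, 1))
        potential = (0.5 * m2 * phi ** 2 + lam * phi ** 4).sum()
        return 0.5 * kinetic + potential

    def propose(L, sigma=0.4):
        # Stand-in for a flow: independent Gaussian field with known log-density.
        phi = sigma * rng.standard_normal((L, L))
        log_q = (-0.5 * (phi ** 2).sum() / sigma ** 2
                 - L * L * np.log(sigma * np.sqrt(2.0 * np.pi)))
        return phi, log_q

    def acceptance_rate(L, n_steps=5000):
        # Independence Metropolis: accept with prob min(1, w'/w), w = exp(-S)/q.
        phi, log_q = propose(L)
        log_w = -phi4_action(phi) - log_q
        accepted = 0
        for _ in range(n_steps):
            phi_new, log_q_new = propose(L)
            log_w_new = -phi4_action(phi_new) - log_q_new
            if np.log(rng.uniform()) < log_w_new - log_w:
                phi, log_w = phi_new, log_w_new
                accepted += 1
        return accepted / n_steps

    for L in (4, 8, 16):
        print(f"L = {L:2d}   acceptance ~ {acceptance_rate(L):.3f}")

Because the proposal here is untrained, the measured acceptance collapses quickly with the lattice size; a trained flow shifts the curve but, as argued above, the same volume-driven degradation reappears once the network capacity no longer matches the multi-scale structure of the target.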