The workflow is quite simple. Just load a pic into img2img at the same size as the original image and enable the tile ControlNet. Set a high denoising strength. Run it, and maybe feed the result back in for a couple more passes. Then enable Ultimate SD Upscale, set the scale to 2x, and run it again. Then accidentally run it again. Naturally, you put the result of each run back into img2img and update the picture size each time. The model is RPGArtistTools3.
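For reference, here's roughly what that loop looks like outside the WebUI, as a minimal sketch with diffusers. The checkpoint path, prompt, pass counts, and strength values are assumptions standing in for the A1111 settings described above, and the one-shot 2x pass only approximates what Ultimate SD Upscale does with tiling:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "path/to/RPGArtistTools3",  # hypothetical local path to the checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("input.png")  # keep width/height equal to the original
prompt = "detailed fantasy illustration"  # illustrative, not the original prompt

# Re-diffuse at the original size with a high denoising strength,
# feeding each result straight back in for a couple of passes.
for _ in range(2):
    image = pipe(
        prompt,
        image=image,            # img2img init image
        control_image=image,    # tile ControlNet conditions on the same image
        strength=0.6,           # the "high denoise ratio" from the post
        num_inference_steps=30,
    ).images[0]

# Approximate the Ultimate SD Upscale 2x pass: resize, then re-diffuse at the
# new size. (The extension tiles this internally; this one-shot version needs
# enough VRAM for the full resolution.)
w, h = image.size
image = image.resize((w * 2, h * 2))
image = pipe(
    prompt, image=image, control_image=image,
    strength=0.35, num_inference_steps=30,
).images[0]
image.save("upscaled.png")
```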
The final image is actually quite cool!
But yes, I think we could use a LoRA to steer the granularity of the generated details, with its weight progressively scaled down as we upscale, to normalize the generation a bit.
That would let us keep higher denoising strengths while avoiding the "fractal" generation.
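A minimal sketch of that idea, continuing from the pipeline above: the detail LoRA file and the decay schedule are hypothetical, and the per-call scale is passed through `cross_attention_kwargs`, which diffusers supports for SD 1.5 LoRAs:

```python
# Hypothetical "detail" LoRA whose influence decays as the image is upscaled,
# so a high denoising strength stops producing detail-on-detail fractals.
pipe.load_lora_weights("path/to/detail_lora.safetensors")  # hypothetical file

for lora_scale in [1.0, 0.6, 0.3]:  # LoRA weight shrinks as resolution grows
    image = pipe(
        prompt,
        image=image,
        control_image=image,
        strength=0.6,  # the higher denoise we want to keep
        num_inference_steps=30,
        cross_attention_kwargs={"scale": lora_scale},  # scales the LoRA effect
    ).images[0]
    w, h = image.size
    image = image.resize((w * 2, h * 2))  # next pass runs at double size
```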