Can This AI Model Improve on Its Own, or Is Progress Entirely Developer-Driven?
The Perchance dev noted that the new image model "is still training" and "will steadily improve in quality and gain new knowledge over the next few months." Some users read this wording as suggesting self-improvement or ongoing learning. In standard practice, however, most AI models do not learn or evolve after deployment unless their developers retrain them.
Some questioned whether the model is truly learning over time or whether this statement is simply a placeholder for continued dev-side maintenance.
Goal of the Question:
To clarify based on common AI practice:
Is this model capable of continuous or micro-scale learning post-deployment?
If so, can users contribute indirectly (e.g., through use, rating, or feedback) to guide its development?
Or is all improvement strictly the result of developer-side retraining and version updates?
Understanding this helps the community know whether their activity matters in shaping the model, or whether they should simply wait for official updates. It also opens the door to potential participatory development, if such feedback loops are supported.
It is possible (and quite standard) to add additional training data to a model after deployment. It takes a long time and a considerable amount of GPU power. It is also the only way we are likely to get good celebrity support again, because the vast majority of models now ship without having been trained on celebrity datasets.
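For anyone wondering what "adding training data after deployment" means mechanically: continued training just resumes gradient descent from the already-trained weights instead of starting from scratch. Here is a minimal toy sketch of that idea; the single-weight "model" and all numbers are hypothetical, and this is nothing like the scale of a real image model.

```python
# Toy illustration (not the actual Perchance/FLUX pipeline): the "model"
# is a single weight w for y_hat = w * x, and "continued training" means
# resuming gradient descent from the deployed weight on newly added data.

def mse(w, data):
    """Mean squared error of the linear model y_hat = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, lr=0.01, steps=200):
    """Plain gradient descent on the MSE loss, starting from weight w."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# "Pre-training": fit the model on an initial dataset (here, y = 2 * x).
old_data = [(1, 2.0), (2, 4.0), (3, 6.0)]
w = train(0.0, old_data)

# "Continued training" after deployment: new data arrives, and we resume
# gradient descent from the already-trained weight rather than from zero.
new_data = [(4, 8.0), (5, 10.0)]
loss_before = mse(w, new_data)
w = train(w, new_data, lr=0.005)
loss_after = mse(w, new_data)
print(loss_after <= loss_before)
```

The expensive part for a real model is the same loop run over billions of parameters and millions of images, which is where the GPU time goes.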
boo lol.. so what's the last model that's good for celebrities that we can still get somewhere?
edit: sorry don't want to hijack the thread, just curious
User-generated input would be AI-generated images, and if those were fed back to the AI they would poison it instead of improving it, because... I don't know, leave me alone :worried_tiger_emoji:
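There is actually a real effect behind that joke, often called model collapse: repeatedly fitting a model to samples of its own output tends to shrink diversity. Here is a toy statistics sketch of the idea; the Gaussian setup and all numbers are hypothetical and have nothing to do with how image models are actually trained.

```python
import random

# Toy sketch of the "feeding AI output back into training" worry (model
# collapse): each generation fits a Gaussian to a small sample drawn from
# the previous generation's fitted Gaussian. Finite-sample refitting tends
# to lose variance, so the distribution's spread decays over generations.

def fit_gaussian(samples):
    """Fit mean and (biased) variance to a list of numbers."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

random.seed(0)                # deterministic for illustration
mean, var = 0.0, 1.0          # generation 0: the "real data" distribution
variances = [var]
for _ in range(40):           # 40 generations of training on own outputs
    samples = [random.gauss(mean, var ** 0.5) for _ in range(10)]
    mean, var = fit_gaussian(samples)
    variances.append(var)

print(variances[-1])  # typically far below the starting variance of 1.0
```

Real pipelines mitigate this by mixing in fresh human-made data and filtering training sets, which is part of why devs don't just train on whatever users generate.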
That's what CEOs of major AI companies like OpenAI say on YouTube and X. All but one of them are optimistic about continuously learning AIs by mid-2026, and the other one says "within 5 years".
You mean the devs of FLUX or the devs of perchance?
Because they are obviously not the same.
I am not sure about Perchance's resources, but I would guess they are not enough to really train big new models.