Google Co-Scientist AI cracks superbug problem in two days! — because it had been fed the team’s previous paper with the answer in it

pivot-to-ai.com

Perhaps it's not exactly equivalent since this is an LLM, but from what I've learnt in my undergrad machine learning course, shouldn't the test data be separate from the training data?
The train-test (or train-validate-test) split was one of the first few things we learnt to do.
Otherwise, the model can easily score 100% accuracy (or whatever the relevant metric is) simply by regurgitating its training data, which looks like what happened here.
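The point about leakage can be shown with a toy sketch (my own illustration, nothing to do with Google's actual system): a "model" that only memorizes its training data looks perfect when the test items leaked into training, and drops to chance on genuinely held-out data.

```python
import random

random.seed(0)

class Memorizer:
    """A 'model' that memorizes training pairs and guesses on anything unseen."""
    def fit(self, X, y):
        self.table = dict(zip(X, y))

    def predict(self, x):
        # Regurgitate the memorized label if we've seen x; otherwise coin-flip.
        return self.table.get(x, random.choice([0, 1]))

# Toy dataset: label is the parity of the integer input.
data = [(i, i % 2) for i in range(100)]
random.shuffle(data)

train, held_out = data[:80], data[80:]   # proper train/test split
leaky_test = data[:20]                   # "test" items that were in training

model = Memorizer()
model.fit([x for x, _ in train], [y for _, y in train])

def accuracy(m, pairs):
    return sum(m.predict(x) == y for x, y in pairs) / len(pairs)

print(accuracy(model, leaky_test))   # 1.0 — pure regurgitation
print(accuracy(model, held_out))     # roughly chance on unseen inputs
```

The memorizer never learned the parity rule at all, yet the leaked split reports a flawless score — which is why the train/test separation is taught so early.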
but that won't trick investors into funding more of it