A secretive, Google-backed lab is using artificial intelligence to invent new medicines for humanity's worst illnesses. But can we trust drugs designed by a mind that isn't human?
There really needs to be a rhetorical distinction between regular machine learning and something like an LLM.
I think people read this (or just the headline) and assume this is just asking Grok "what interactions will my new drug flavocane have?", whereas these are likely large models built on the mountains of data we have from existing drug trials.
Those models will almost certainly be essentially the same transformer architecture the LLMs use, simply because transformers beat most other architectures in almost any field people have tried them in.
An LLM is, after all, just a classifier with an unusually large set of classes (all possible tokens), applied repeatedly.
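The "classifier applied repeatedly" view can be sketched in a few lines. Everything here is a toy stand-in (the vocabulary and the next-token table are invented for illustration, not a real model), but the loop structure is the point: one classification over the token set per step, with the output fed back in.

```python
import random

# Hypothetical toy vocabulary and next-token distributions,
# standing in for a real model's softmax over its token set.
VOCAB = ["the", "cat", "sat", "on", "mat", "<end>"]
NEXT = {
    "<start>": {"the": 1.0},
    "the": {"cat": 0.5, "mat": 0.5},
    "cat": {"sat": 1.0},
    "sat": {"on": 1.0},
    "on": {"the": 1.0},
    "mat": {"<end>": 1.0},
}

def classify_next(token, rng):
    """One 'classification' step: sample the next token from a distribution."""
    dist = NEXT[token]
    choices, weights = zip(*dist.items())
    return rng.choices(choices, weights=weights, k=1)[0]

def generate(rng, max_len=10):
    """Apply the classifier repeatedly, feeding each output back in."""
    out, tok = [], "<start>"
    for _ in range(max_len):
        tok = classify_next(tok, rng)
        if tok == "<end>":
            break
        out.append(tok)
    return out

print(" ".join(generate(random.Random(0))))
```

A real LLM replaces the lookup table with a neural network conditioned on the whole preceding sequence, but the outer loop is the same.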
A quick search turns up that AlphaFold 3, which is what they are using for this, is a diffusion architecture, not a transformer. It works more like the image generators than the GPT text generators. It isn't really the same as "the LLMs".
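The difference in generation style can be illustrated with a toy iterative-refinement loop: instead of emitting tokens one at a time, a diffusion-style model starts from noise and repeatedly refines the whole output. The "denoiser" below is a hypothetical stand-in (it just nudges coordinates toward a known target), not AlphaFold 3's actual network.

```python
import random

def denoise_step(x, target, strength=0.3):
    # A real diffusion model predicts the noise/clean signal with a network;
    # here we just move each coordinate a fraction of the way to the target.
    return [xi + strength * (t - xi) for xi, t in zip(x, target)]

target = [1.0, -2.0, 0.5]               # stand-in for "true" coordinates
rng = random.Random(0)
x = [rng.gauss(0, 5) for _ in target]   # start from pure noise

for _ in range(30):                     # iterative refinement loop
    x = denoise_step(x, target)

print(x)  # converges toward target after many steps
```

The contrast with the autoregressive loop is structural: here every refinement step updates the entire output at once.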
I'm not talking about the specifics of the architecture.
To the layman, AI refers to a range of general purpose language models that are trained on "public" data and possibly enriched with domain-specific datasets.
There's a significant material difference between using that kind of probabilistic language completion and a model that directly predicts the results of complex processes (like what's likely being discussed in the article).
It's not specific to the article in question, but it is really important for people to not conflate these approaches.
I mean, I hate AI in general... but to be honest, assuming no one is stupid enough to bypass the trials etc., I'm all for it. 90% of these problems already exist in the current system: who owns the drug, whether a corporation can charge us to death.
The only reasonable fear is that if they come out with more candidates than they can run trials for, they lobby to lower trial standards. Even that, honestly, is an acceptable risk in the context of terminal diseases and severe cancers.
Sure, it helps with a bottleneck, but it is not the only one. Until you gain biological and biochemical understanding of the disease, no amount of throwing neural networks at it will help. I am really sick and tired of AI people hyping up their stuff to get more investment. It even feels like all this "secretive" bullshit is part of the show.