Some lawmakers think additional guardrails are needed for future uses. For now, the facility will use AI to comply with regulations.
For now, the artificial intelligence tool named Neutron Enterprise is just meant to help workers at the plant navigate extensive technical reports and regulations — millions of pages of intricate documents from the Nuclear Regulatory Commission that go back decades — while they operate and maintain the facility. But Neutron Enterprise’s very existence opens the door to further use of AI at Diablo Canyon or other facilities — a possibility that has some lawmakers and AI experts calling for more guardrails.
It's just a custom LLM for records management and regulatory compliance. Literally just for paperwork, one of the few things that LLMs are actually good at.
Does anyone read more than the headline? OP even said this in the summary.
It depends on what purpose that paperwork is intended to serve.
If the regulatory paperwork it's managing is designed to influence behaviour, perhaps having an LLM do the work will make it less effective in that regard.
Learning and understanding is hard work. An LLM can't do that for you.
Sure, it can summarise instructions and show you what's most pertinent in a given instance, but is that the same as someone who knows what to do because they've been wading around in the logs and regs for the last decade?
It seems like, whether you're using an LLM to write a business report, a legal submission, or an SOP for running a nuclear reactor, it can be a great tool, but it requires high-level knowledge on the user's part to review the output.
As always, there's a risk that a user just won't identify a problem in the information produced.
I don't think this means LLMs should not be used in high-risk roles; it just demonstrates the importance of robust policies surrounding their use.
I agree with you, but you can see the slippery slope: the LLM returning incorrect/hallucinated data the same way it does in the public space. That might seem trivial when it's just documentation, until you realize the documentation is critical to some process.
If you've never used a custom LLM or a wrapper around regular ol' ChatGPT: a lot of what it could hallucinate gets stripped out, and the corpus it draws from is your own data. Even then, the risk is pretty low here. Do you honestly think a human has never made an error on paperwork?
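These wrappers are basically retrieval-augmented generation: instead of answering from memory, the model is handed the relevant excerpts from your own documents and told to answer only from those. A rough sketch of the pattern (the function names and prompt wording here are mine and purely illustrative; this isn't how Neutron Enterprise is actually built):

```python
# Minimal retrieval-grounded sketch: instead of answering from memory,
# the model only sees excerpts pulled from your own corpus.
# complete() is a stand-in for whatever LLM API you actually call;
# it is NOT a real client library.

from difflib import SequenceMatcher


def complete(prompt: str) -> str:
    """Placeholder for an LLM call; wire this to your model provider."""
    raise NotImplementedError


def search_corpus(query: str, corpus: dict[str, str], k: int = 3) -> list[tuple[str, str]]:
    """Deliberately naive relevance ranking; real systems use vector embeddings."""
    scored = sorted(
        corpus.items(),
        key=lambda item: SequenceMatcher(None, query.lower(), item[1].lower()).ratio(),
        reverse=True,
    )
    return scored[:k]


def answer(query: str, corpus: dict[str, str]) -> str:
    # Build a prompt that constrains the model to the retrieved excerpts.
    passages = search_corpus(query, corpus)
    context = "\n\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    prompt = (
        "Answer using ONLY the excerpts below, citing the [doc id] for each claim. "
        "If the excerpts don't contain the answer, say you can't find it.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
    return complete(prompt)
```

The retrieval here is toy-grade on purpose; the part that does the hallucination-stripping is the prompt contract: cite the source document or say you can't find it, rather than improvising.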
Well, considering it's exclusively for paperwork and compliance, the worst that can happen is someone relies on it too much, files an incorrect, I dunno, license renewal with the DOE, and gets asked to do it again.
When it comes to compliance and regulations, anything with the literal blast radius of a nuclear reactor should not be trusted to an LLM unless it's double- or triple-checked by another party familiar with said regulations. Regulations were written in blood, and an LLM hallucinating a safety procedure or operating protocol is a disaster waiting to happen.
I have fewer qualms about using it for menial paperwork, but if the LLM adds an extra round-trip to a form, it's not just wasting the submitter's time, but other people's as well.
To the people saying it's just paperwork, so it doesn't matter: this is how it begins. They'll save a couple of cents here and there, and then they'll want to expand this.
It's not, actually. There's barely an intermediate step between what's happening now and what I'm suggesting it will lead to.
This is not "if we allow gay marriage, people will start marrying goats". It's "if this company is allowed to cut corners here, it'll cut corners in other places". That's not a slope; it's literally the next step.
The slippery-slope fallacy doesn't mean you're not allowed to connect A to B.