I think there is potential for using AI as a knowledge base. If it saves me hours of having to scour the internet for answers on how to do certain things, I could see a lot of value in that.
The problem is that generative AI can't distinguish fact from fiction, even though it has enough information to do so. For instance, I'll ask ChatGPT how to do something and it will very confidently spit out a wrong answer 9/10 times. If I tell it that the approach didn't work, it will respond with "Sorry about that. You can't do [x] with [y] because [z] reasons."
The reasons are often correct, but ChatGPT isn't "intelligent" enough to recognize, from data it already has, that an approach will fail before suggesting it.
It will then proceed to suggest variations of the same failed approach several more times. Every once in a while it eventually pivots to a workable suggestion.
So basically, this generation of AI is just Cliff Clavin from Cheers. Able to string together coherent sentences of mostly bullshit.