I'd say it's not so much that this tech doesn't have value, but that it gets hyped up and used for things it really shouldn't be used for. Specifically, the way models work currently, they're not suitable for any scenario where you need an exact answer. So, it's great for stuff like generative art or creative writing, but absolutely terrible for solving math problems or driving cars. Understanding the limitations of the tech is key for applying it in a sensible way.
Interesting video, and glad to see open source suggested as a potential solution at the end... yet it does not solve hallucinations (for LLMs), energy consumption (any form of AI), or the fact that the hype itself is an economic and political tool at the service of a few. On the final point about regulators, I believe it's damaging to imply that regulators are ignorant. They are not technical, indeed, but they are not supposed to be. Regulators didn't need to know how to build a plane to dictate rules that improved safety in that industry, nor to be engineers in order to make seatbelts mandatory. Yet they do learn from technical experts, e.g. in Europe the JRC, which informs the European Commission, Parliament, etc.
Open source actually does pave the way toward addressing many of these problems. For example, Petals is a torrent-style system for running models that lets regular people pool their resources to run them.
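To make the torrent-style idea concrete, here's a minimal sketch (not Petals' real API; the `Peer`/`Swarm` names and the scalar "layers" are hypothetical stand-ins): a model's layers are sharded across volunteer peers, and a client pipelines activations through whichever peer hosts each shard.

```python
def make_layer(weight):
    # Stand-in for a transformer block: here just a scalar multiply.
    return lambda x: x * weight

class Peer:
    """Hypothetical volunteer node serving a contiguous range of layers."""
    def __init__(self, layers):
        self.layers = layers

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

class Swarm:
    """Client-side view: route activations peer to peer, in layer order."""
    def __init__(self, peers):
        self.peers = peers

    def run(self, x):
        for peer in self.peers:  # in a real system: network hops, retries
            x = peer.forward(x)
        return x

# A 4-layer "model" split across two peers.
layers = [make_layer(w) for w in (2, 3, 5, 7)]
swarm = Swarm([Peer(layers[:2]), Peer(layers[2:])])
print(swarm.run(1))  # 2*3*5*7 = 210
```

The point is only the topology: no single machine holds the whole model, yet the client still gets a full forward pass.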
Problems like hallucinations and energy consumption aren't inherent either. They're being actively worked on, and people keep finding ways to make models more efficient. For example, by using the same search technique DeepMind used to master Go (Monte Carlo Tree Search, with rewards backpropagated up the tree), Llama-3 8B gets 96.7% on the math benchmark GSM8K. That's better than GPT-4, Claude and Gemini, with roughly 200x fewer parameters. https://arxiv.org/pdf/2406.07394
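For readers unfamiliar with MCTS, here's a toy sketch of its four phases on a tiny made-up problem (find the 3-bit string matching a hidden target). The paper's self-refine loop over LLM answers is far richer; this only illustrates select (UCT), expand, simulate, and backpropagate.

```python
import math
import random

TARGET = (1, 0, 1)  # hypothetical optimum the search should recover

def reward(bits):
    # Fraction of bits matching the target, in [0, 1].
    return sum(a == b for a, b in zip(bits, TARGET)) / len(TARGET)

class Node:
    def __init__(self, state, parent=None):
        self.state = state          # tuple of bits chosen so far
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0            # sum of rewards seen below this node

    def uct(self, c=1.4):
        # Upper Confidence bound for Trees: exploitation + exploration.
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(iterations=500, depth=3, seed=0):
    random.seed(seed)
    root = Node(())
    for _ in range(iterations):
        # 1. Selection: walk down by UCT until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=Node.uct)
        # 2. Expansion: add children unless the node is terminal.
        if len(node.state) < depth:
            node.children = [Node(node.state + (b,), node) for b in (0, 1)]
            node = random.choice(node.children)
        # 3. Simulation: random rollout to a complete string.
        rollout = node.state + tuple(random.randint(0, 1)
                                     for _ in range(depth - len(node.state)))
        r = reward(rollout)
        # 4. Backpropagation: push the reward up to the root.
        while node:
            node.visits += 1
            node.value += r
            node = node.parent
    # Read off the most-visited path as the answer.
    best = root
    while best.children:
        best = max(best.children, key=lambda n: n.visits)
    return best.state

print(mcts())  # most-visited path; should recover (1, 0, 1)
```

Swap the random rollout for an LLM answering and critiquing itself, and the reward for a self-evaluation score, and you have the rough shape of what the linked paper does.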
The reality is that we can't put the toothpaste back in the tube now. This tech will be developed one way or the other, and it's much better if it's developed in the open.
Edit: I know of Petals; I even discussed it with some of the people working on it, and I learned about federated AI / federated learning back then, since at least 2019 (proof), so this isn't new to me.