Depending on what you meant by this question, I'd say Perplexity. It has access to a number of different LLMs, and it cites its sources. The biggest concern I've had with LLMs is that they eventually make shit up. If you can verify an answer by checking its sources, you can have much higher confidence in it.
This site lets you test different models (far from all of them, but it's a good list) in various ways. I particularly like the Arena, where you get responses from two random models to your input, pick the one you like best, and then it tells you which model is which.
It's got me considering Claude2 for local projects. I just need to revive the hardware I'd run it on.
Uncensored Llama2 70B has the most flexibility as far as models without extra fine-tuning go, IMO. Mixtral 8×7B is a close second, with faster inference and only minor technical issues compared to the 70B. I don't like the tone of Mixtral's alignment, though.
koboldcpp works fairly well. There are lots of different models to try, and the choice depends a fair bit on your computer specs and what you wanna do with it.
https://github.com/LostRuins/koboldcpp
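Once you've got koboldcpp running with a model loaded, it serves a KoboldAI-compatible HTTP API you can script against. Here's a minimal sketch in Python using only the standard library; the port (5001 is koboldcpp's default), the `/api/v1/generate` endpoint, and the exact field names are assumptions based on that API, so check against your local instance.

```python
import json
import urllib.request

# Assumed default koboldcpp address/port -- adjust if you launched it differently.
API_URL = "http://localhost:5001/api/v1/generate"

def build_payload(prompt, max_length=80, temperature=0.7):
    """Build the JSON body for a generate request (field names per the KoboldAI API)."""
    return {
        "prompt": prompt,
        "max_length": max_length,
        "temperature": temperature,
    }

def generate(prompt, **kwargs):
    """Send a prompt to a locally running koboldcpp server and return the completion."""
    data = json.dumps(build_payload(prompt, **kwargs)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Assumed response shape: {"results": [{"text": "..."}]}
        return json.loads(resp.read())["results"][0]["text"]
```

With the server up, `generate("Once upon a time,")` should return the model's continuation as a string. Nice thing about scripting it this way is you can swap GGUF models out underneath without touching the client code.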