I think it's a good idea to share experiences about LLMs here, since benchmarks can only give a very rough overview of how well a model performs.
So please share how much you're using LLMs, what you use them for, and how well they perform at those tasks. For example, here are my answers to these questions:
Usage
I use LLMs daily, both for work and for random questions I would previously have used web search for.
I mainly use LLMs for reasoning-heavy tasks, such as assisting with math or programming. Other frequent tasks include proofreading, helping with bureaucracy, and assisting with writing when it matters.
Models
The one I find most impressive at the moment is TheBloke/airoboros-l2-70B-gpt4-1.4.1-GGML (the airoboros-l2-70b-gpt4-1.4.1.ggmlv3.q2_K.bin file). It often manages to reason correctly on questions where most other models I tried fail (even though most humans wouldn't). I was surprised that something using only 2.5 bits per weight on average could produce anything but garbage. The downside is that loading times are rather long, so I only ask it a question when I'm willing to wait: time to first token is almost 50s! I'd love to hear how the bigger quantizations or the unquantized version perform.
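In case it helps anyone reproduce this, here's roughly how I run it: a minimal sketch with llama-cpp-python, where the thread count, context size, and prompt format are illustrative, not tuned recommendations. Note that GGMLv3 files need an older llama-cpp-python release; current versions only load GGUF, so you'd have to convert the file first.

```python
# Minimal sketch: running a q2_K GGML model with llama-cpp-python
# (pip install llama-cpp-python; GGMLv3 support depends on the version).
from llama_cpp import Llama

llm = Llama(
    model_path="airoboros-l2-70b-gpt4-1.4.1.ggmlv3.q2_K.bin",
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads; tune for your machine
)

# Prompt format is approximate; check the model card for the exact template.
out = llm("USER: Why is the sky blue?\nASSISTANT:", max_tokens=200)
print(out["choices"][0]["text"])
```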
Another one that made a good impression on me is Qwen-7B-Chat (demo). It manages to correctly answer some questions where even some llama2-70b finetunes fail, but I was getting memory leaks when running it on my M1 Mac in fp16 mode, so I haven't used it much. (This seems to have been fixed!)
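For reference, this is roughly what running it looks like: a sketch via transformers, where trust_remote_code is required because Qwen ships custom modeling code, and the chat() helper comes from that remote code rather than from transformers itself.

```python
# Sketch: Qwen-7B-Chat in fp16 on Apple silicon via the "mps" backend.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    trust_remote_code=True,
    torch_dtype=torch.float16,  # fp16 mode, as mentioned above
).to("mps").eval()

# chat() is provided by Qwen's remote code, not by transformers itself.
response, _history = model.chat(tokenizer, "Why is the sky blue?", history=None)
print(response)
```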
All other models I briefly tried were not very useful. It's nice to be able to run them locally, but they were so much worse than ChatGPT that it's often not even worth considering them.
I really enjoy WizardCoder for coding tasks; I use it specifically at work for code review and unit-test writing. I also generally like WizardLM.
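For the unit-test use case, here's a sketch of how I prompt it. The instruction template is the Alpaca-style format the WizardCoder releases were trained on, the model id is the 15B release on Hugging Face, and the generation settings are just illustrative:

```python
# Sketch: asking WizardCoder for pytest unit tests via transformers.
from transformers import pipeline

generate = pipeline("text-generation", model="WizardLM/WizardCoder-15B-V1.0")

def make_prompt(code: str) -> str:
    # Alpaca-style instruction template used by the WizardCoder releases.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"Write pytest unit tests for the following function:\n\n{code}\n\n"
        "### Response:"
    )

snippet = "def add(a, b):\n    return a + b"
result = generate(make_prompt(snippet), max_new_tokens=256)
print(result[0]["generated_text"])
```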
Outside of that, I'll say that I haven't had a great experience with llama-2 yet, no matter how hard I've tried 🤣
I've primarily used WizardLM as well, but I've found that it tends to follow the same format for nearly every answer:
<paragraph explaining/answering/arguing the question>
<paragraph beginning with "However" that gives a counterargument>
<sentence beginning with "In conclusion, it is important to remember" that gives a lecture about how both sides of the argument must be considered and there are multiple factors that might influence the answer>
Not only is this repetitive, boring, and belittling to converse with, but it means that the model often won't directly answer a question or give an actual argument/justification for something. It feels vaguely like it's refusing to commit to a side and telling me off for trying to talk in absolutes rather than actually giving an answer.
Additionally, in cases where there isn't a counterargument to be made, it will make up nonsense to fill the counterargument section. e.g. "Explain your reasoning for the above answer" tends to result in:
<"You can arrive at the above answer by doing ..." followed by mostly sensible reasoning>
<"Alternatively, you could do ..." followed by either a made up illogical reasoning or the exact same reasoning as before presented as if it was a different thing>
<lecture about how it is important to consider multiple approaches depending on external factors, or a brief summary of the similarities and differences between the two strategies presented>
When I can get it to break out of this pattern, e.g. by following a "thought, action, observation" loop script (ReAct-style prompting), it seems to perform marginally better than the other models I have tried.
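For anyone curious, here's a minimal sketch of the driving loop I mean. The llm callable and the toy calculator tool are placeholders; any prompt-in, text-out backend (like the llama-cpp call above) would work:

```python
# Sketch of a thought/action/observation (ReAct-style) loop harness.
import re

# Toy tool registry; eval() is for illustration only, never use it on real input.
TOOLS = {"calculator": lambda expr: str(eval(expr))}

PROMPT = """Answer the question by repeating this loop:
Thought: reason about what to do next
Action: tool_name[tool input]
Observation: (the harness fills this in)
When you know the answer, write: Final Answer: <answer>

Question: {question}
"""

def react(llm, question, max_steps=5):
    """llm: callable taking (prompt, stop) and returning the model's continuation."""
    transcript = PROMPT.format(question=question)
    for _ in range(max_steps):
        out = llm(transcript, stop=["Observation:"])  # model emits Thought + Action
        transcript += out
        if "Final Answer:" in out:
            return out.split("Final Answer:")[-1].strip()
        match = re.search(r"Action:\s*(\w+)\[(.+?)\]", out)
        if not match:
            break  # model didn't follow the format
        tool, arg = match.groups()
        observation = TOOLS.get(tool, lambda _: "unknown tool")(arg)
        transcript += f"\nObservation: {observation}\n"
    return None  # no final answer within the step budget
```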