Natural Language Programming | Prompting (chatGPT) @lemmy.intai.tech circle @lemmy.world
kNN using a gzip-based distance metric outperforms BERT and other neural methods for OOD sentence classification

intuition: two texts are similar if concatenating (cat-ing) one onto the other barely increases the gzip-compressed size

no training, no tuning, no params — this is the entire algorithm

https://aclanthology.org/2023.findings-acl.426/
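The distance the paper uses is the Normalized Compression Distance (NCD), computed from gzip-compressed lengths, followed by plain kNN voting. A minimal sketch with Python's stdlib `gzip` (the space-joined concatenation and the `k` default here are my assumptions; the paper's exact tie-breaking may differ):

```python
import gzip
from collections import Counter

def ncd(x: str, y: str) -> float:
    """Normalized Compression Distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(gzip.compress(x.encode()))
    cy = len(gzip.compress(y.encode()))
    cxy = len(gzip.compress((x + " " + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def knn_classify(query: str, train: list[tuple[str, str]], k: int = 3) -> str:
    """train is a list of (text, label); return the majority label of the k nearest by NCD."""
    nearest = sorted(train, key=lambda tl: ncd(query, tl[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

If two texts share long substrings, gzip's LZ77 pass encodes the second occurrence as a back-reference, so the concatenation compresses to barely more than either text alone; that is the whole "compression ≈ similarity" trick.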

Is there a resource to benchmark LLM performance on specific hardware?

As the title suggests, I have a few LLM models and want to see how they perform on different hardware (CPU-only instances; GPUs: T4, V100, A100). Ideally I'd like to get an idea of performance versus overall price (VM hourly rate / efficiency).

Currently I've written a script to calculate ms per token, RAM usage (memory profiler), and total time taken.

Wanted to check if there are better methods or tools. Thanks!
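For the hand-rolled approach described above, a minimal timing harness might look like the following. The `generate` callable and its return type (a list of tokens) are assumptions to keep the sketch model-agnostic; note that `tracemalloc` only sees Python-heap allocations, not model weights held in native buffers or GPU VRAM (for CUDA memory you'd query the framework, e.g. PyTorch's `torch.cuda.max_memory_allocated`, or poll `nvidia-smi`):

```python
import time
import tracemalloc

def benchmark(generate, prompt: str, n_runs: int = 3) -> dict:
    """Time a generate(prompt) -> list-of-tokens callable.

    Returns averages over n_runs: total seconds, ms per token,
    and peak Python-heap usage in MiB (tracemalloc only).
    """
    results = []
    for _ in range(n_runs):
        tracemalloc.start()
        t0 = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - t0
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        results.append({
            "total_s": elapsed,
            "ms_per_token": 1000 * elapsed / max(len(tokens), 1),
            "peak_heap_mib": peak / 2**20,
        })
    # average over runs to smooth warm-up and scheduler variance
    return {k: sum(r[k] for r in results) / n_runs for k in results[0]}
```

Cost-efficiency then falls out directly: tokens per second from the harness divided into the VM's hourly rate gives a price per million tokens, which is comparable across CPU and GPU instances.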
