pogo stick high jump
jet pack racing!
Speedrun @lemmy.sdf.org
[WR] Hydro Thunder (Arcade) The Far East 1:19.46
Speedrun @lemmy.sdf.org
[WR] Super Mario Odyssey Any% (2 Player) in 56:54
www.twitch.tv Twitch

Twitch is the world's leading video platform and community for gamers.

Underwater rugby
www.bbc.com Man v horse: Runner becomes only fourth to beat horse

The winner outran the fastest horse by over 10 minutes in the 22-mile race held in mid Wales.

open source math textbooks
github.com GitHub - rossant/awesome-math: A curated list of awesome mathematics resources

A curated list of awesome mathematics resources.


Some links are broken, but it's otherwise good. Post your open-source math textbooks here.

After state board approves first taxpayer-funded Catholic school, Hindus seek same | KGOU
www.kgou.org After state board approves first taxpayer-funded Catholic school, Hindus seek same

As Oklahoma pushes ahead with plans for the first-ever taxpayer-funded Catholic public charter school, some say other religions should be included.


cross-posted from: https://lemmy.sdf.org/post/43170

"Prompt Gisting:" Train two models such that given inputs "Translate French<G1><G2>" and "<G1>G2>The cat," then G1 and G2 represent the entire instruction.
arxiv.org Learning to Compress Prompts with Gist Tokens

Prompting is the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and repeatedly encoding the same prompt is computationally inefficient. Finetuning and distillation methods allow for specialization of LMs with...


cross-posted from: https://lemmy.sdf.org/post/36227

> Abstract: "Prompting is now the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and re-encoding the same prompt is computationally inefficient. Finetuning and distillation methods allow for specialization of LMs without prompting, but require retraining the model for each task. To avoid this trade-off entirely, we present gisting, which trains an LM to compress prompts into smaller sets of "gist" tokens which can be reused for compute efficiency. Gist models can be easily trained as part of instruction finetuning via a restricted attention mask that encourages prompt compression. On decoder (LLaMA-7B) and encoder-decoder (FLAN-T5-XXL) LMs, gisting enables up to 26x compression of prompts, resulting in up to 40% FLOPs reductions, 4.2% wall time speedups, storage savings, and minimal loss in output quality."
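
The restricted attention mask is the core mechanism: every token after the gist span is blocked from attending to the raw prompt before it, so the gist tokens must absorb the prompt's content. A minimal PyTorch sketch of that idea (the helper name and token layout are my own assumptions, not code from the paper):

```python
import torch

def gist_attention_mask(seq_len: int, gist_start: int, gist_end: int) -> torch.Tensor:
    """Boolean attention mask for gisting (True = may attend).

    Illustrative only: assumes a decoder-only LM with the layout
    [prompt | gist tokens | input] left to right.
    """
    # Standard causal mask: position i attends to positions <= i.
    mask = torch.ones(seq_len, seq_len).tril().bool()
    # Restriction: positions after the gist span cannot see the raw prompt,
    # so the gist tokens must carry the instruction's information.
    mask[gist_end:, :gist_start] = False
    return mask

# Example: "Translate French" = tokens 0-3, <G1><G2> = tokens 4-5, "The cat" = 6-9.
print(gist_attention_mask(10, gist_start=4, gist_end=6).int())
```

At inference the prompt can then be dropped and the cached gist-token activations reused, which is where the context-window and FLOPs savings come from.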

NVIDIA's everything 2 anything
goosethe @lemmy.sdf.org
Posts 20
Comments 1