It's venture capital. Eventually it will stop being open source and will enshittify just like every other platform. So nothing is changing long term, in my opinion.
I remember using an app that blocked spam calls using a collaborative database. The one I use now is Truecaller, but it's always trying to get me to subscribe; I liked the one I used before better. What's the best spam-blocking caller ID app you know of?
The ones I buy contain lemon as a preservative, but I don't like the acidic taste of lemon in tomato sauce.
Oh, so it’s mostly a side effect, but they are still primarily being trained to predict the next word.
And the only solution to the dead internet theory is scanning our eyeballs for Worldcoin. There don’t seem to be any non-dystopian timelines in our future.
I've been reading about recent research on how the human brain processes and stores memories, and it's fascinating! It seems that our brains compress and store memories in a simplified, low-resolution format rather than as detailed, high-resolution recordings. When we recall these memories, we reconstruct them based on these compressed representations. This process has several advantages, such as efficiency, flexibility, and prioritization of important information.
Given this understanding of human cognition, I can't help but wonder why AI isn't being trained in a similar way. Instead of processing and storing vast amounts of data in high detail, why not develop AI systems that can compress and decompress input like the human brain? This could potentially lead to more efficient learning and memory management in AI, similar to how our brains handle information.
Are there any ongoing efforts in the AI community to explore this approach? What are the challenges and benefits of training AI to mimic this aspect of human memory? I'd love to hear your thoughts!
Hey fellow Lemmings,
I've been thinking about how we measure the liveliness of our communities, and I believe we're missing the mark with Monthly Active Users (MAU). Here's why I think Posts + Comments per Month (PCM) would be a superior metric:
Why PCM is Better Than MAU
- Quality over Quantity: MAU counts lurkers equally with active participants. PCM focuses on actual engagement.
- Spam Resistance: Creating multiple accounts to inflate MAU is easy. Generating meaningful posts and comments is harder.
- True Reflection of Activity: A community with 1000 MAU but only 10 posts/comments is less vibrant than one with 100 MAU and 500 posts/comments.
- Encourages Participation: Displaying PCM could motivate users to contribute more actively.
- Easier to Track: No need for complex user tracking. Just count posts and comments.
Implementation Ideas
- Show PCM in the community list alongside subscriber count
- Display PCM in each community's sidebar
- Use PCM for sorting "hot" communities
What do you think? Should we petition the Lemmy devs to consider implementing this? Let's discuss!
There are 16M comments per day according to the observer website.
30k communities and 9M posts per day. I find the number of posts per day very hard to believe. Each community would have an average of 300 posts per day, and most communities are abandoned. Maybe it's the bot communities that repost all the Reddit posts that inflate the number so high.
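For anyone who wants to sanity-check numbers like these for a given community, here's a rough sketch of where the raw counts live. It assumes Lemmy's `/api/v3/community` endpoint and field names as I remember them from recent Lemmy versions, so treat the details as unverified:

```bash
# Hedged sketch: fetch one community's aggregate counts from a Lemmy instance.
# Endpoint and field names are assumptions based on Lemmy ~0.19 and may differ.
curl -s "https://lemmy.world/api/v3/community?name=asklemmy" \
  | jq '.community_view.counts | {subscribers, posts, comments, users_active_month}'
```

These are lifetime totals plus monthly active users rather than a true per-month PCM; getting posts+comments for the last 30 days would still mean paging through something like `/api/v3/post/list` and filtering by date, but the totals are enough for a quick plausibility check.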
> Yeah because first of all, content had to be spread out across 562826 different communities for no reason other than that reddit had lots of communities, after growing for many many years. It started with just a few.
>
> Then 99% of those were created on Lemmy.world, and every new user was directed to sign up at Lemmy.world.
>
> I guess a lot of people here are younger than me and didn’t experience forums, but we had like 30 forum channels. That was enough to talk about anything at all. And I believe it’s the same here, it would have been enough. And then all channels would have easy to find content.
>
> source
Hey everyone! I'm curious about the number of communities on Lemmy and the activity levels within them. Specifically, is there a reliable source where I can check the total number of communities and the average number of posts per month? It seems like the number of communities might be quite high, but I wonder how low the post activity is across most of them. Any insights or links to resources would be greatly appreciated!
I often find myself browsing videos on different invidious instances or posts on various lemmy instances, and I would love to be able to create a "watch later" list or a "favorite" list that works across all of them. I don't want to have to manually import and export these lists between different instances, either, like I have to do on lemmy, invidious, etc.
I'm currently using a single bookmarks folder to keep track of everything, but I don't like this because it's a mess. I'd like to be able to create two or three different lists for different groups of websites, so that I can easily find what I'm looking for. For example, a favorites list for reddit, tumblr, etc., another favorites list and a watch-later list for invidious instances, and other lists for other sites.
Is there any way to achieve this? I'm open to using browser extensions, third-party apps, or any other solutions that might be out there. I would prefer a free solution, but I'm willing to consider paid options as well.
A bookmark can only exist in one folder at a time, whereas I want to be able to add a single item to multiple lists (e.g., both "favorites" and "watch later").
I believe the closest to what I'm looking for are Raindrop.io, Pocket, Wallabag, Hoarder, etc.
https://github.com/hoarder-app/hoarder?tab=readme-ov-file#alternatives
I use Manjaro Linux and Firefox.
The technology, which marries Meta’s smart Ray-Ban glasses with the facial recognition service Pimeyes and some other tools, lets someone automatically go from a face to a name, phone number, and home address.
I want to create a collage of 20 screenshots from a video, arranged in a 5x4 grid, regardless of the video’s length. How can I do this efficiently on a Linux system?
Specifically, I’d like a way to automatically generate this collage of 20 thumbnails from the video, without having to manually select and arrange the screenshots. The number of thumbnails should always be 20, even if the video is longer or shorter.
Can you suggest a command-line tool or script that can handle this task efficiently on Linux? I’m looking for a solution that is automated and doesn’t require a lot of manual work.
Here's what I've tried but I only get 20 black boxes:
```bash
#!/bin/bash

# Check if input video exists
if [ ! -f "$1" ]; then
    echo "Error: Input video file not found."
    exit 1
fi

# Get video duration
duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$1")

# Calculate interval between frames
interval=$((duration / 20))

# Extract 20 frames from the video
for i in {1..20}; do
    ffmpeg -ss $((interval * ($i - 1))) -i "$1" -vf scale=200:-1 -q:v 2 "${1%.*}_frame$i.jpg"
done

# Create collage
montage -mode concatenate -tile 5x4 -geometry +2+2 "${1%.*}"_frame*.jpg output_collage.jpg

# Clean up temporary files
rm "${1%.*}"_frame*.jpg

echo "Collage created: output_collage.jpg"
```
A collection of modern/faster/saner alternatives to common unix commands. - ibraheemdev/modern-unix
DeepSeek Coder: Let the Code Write Itself. - deepseek-ai/DeepSeek-Coder
BleachBit, the popular free system cleaner, has just released a major update, its first since 2021.
CogVLM: Visual Expert for Pretrained Language Models
Presents CogVLM, a powerful open-source visual language foundation model that achieves state-of-the-art performance on 10 classic cross-modal benchmarks.
repo: https://github.com/THUDM/CogVLM
abs: https://arxiv.org/abs/2311.03079
bitmagnet-io/bitmagnet: A self-hosted BitTorrent indexer, DHT crawler, content classifier and torrent search engine with web UI, GraphQL API and Servarr stack integration.
This is a significant release with lots of major and long-requested features. Here's a rundown: Session Resurrection: this version adds a built-in capability to resurrect sessions. Attaching to "ex...
A terminal workspace with batteries included
article: https://x.ai
xAI trained a prototype LLM (Grok-0) with 33 billion parameters. This early model approaches LLaMA 2 (70B) capabilities on standard LM benchmarks but uses only half of its training resources. In the last two months, we have made significant improvements in reasoning and coding capabilities leading up to Grok-1, a state-of-the-art language model that is significantly more powerful, achieving 63.2% on the HumanEval coding task and 73% on MMLU.