I'm not referring to the amount of content but to how it is curated. If it showed content sorted by votes from the local instance instead of an aggregate of all instances, the content would differ from instance to instance.
I've noticed that the "All" feed on Lemmy is pretty much the same across all instances, showing posts from every instance regardless of the specific focus or community vibe of the instance you're on. This seems like a missed opportunity to make the experience more tailored and engaging for each instance's unique audience.
For example, if there were an instance dedicated to literature lovers, wouldn't it make sense for the "All" feed on that instance to prioritize content that's more relevant to people who enjoy books, poetry, and writing? Instead of being a global feed that shows everything from memes to tech news, it could reflect the interests and values of the instance's community.
I feel like making the "All" feed more tailored to each instance would not only improve user experience but also strengthen the sense of community within each instance. What do you think? Would love to hear everyone's thoughts!
Available online as in: you just log in to a website and use it, not on Hugging Face or GitHub, where you need to download, install, and configure it yourself.
LLMs are already made so "safe" that they won't even describe an erotic or crime story - content you would easily find visually represented in all its detail on Netflix, Amazon, HBO, YouTube, etc. I.e., writing "Game of Thrones" with an AI is not possible in most chatbots anymore.
YouTube Video
I would love to be more active in posting links to articles and websites I find interesting to the fediverse, but I find that searching for the appropriate community can be a hassle. With so many different instances hosting the same communities, it can be difficult to know where to post. Is there a Firefox extension that would allow me to quickly and easily post links to a single Lemmy community (for example https://reddthat.com/c/random)? I'm envisioning something like a bookmarking tool that lets me post the website I'm viewing with a single click. If there isn't an existing extension that does this, I'd be interested in finding a similar program that I could use for inspiration to create one myself.
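If you end up building something yourself, the underlying action is just one HTTP request to the instance. Here is a rough sketch (assuming Lemmy's v3 HTTP API and the reddthat.com community mentioned above; endpoints and field names can differ between Lemmy versions, so treat this as a starting point rather than a reference):

```bash
# Hypothetical sketch: share a page to a single Lemmy community via the HTTP API.
# Assumes Lemmy's v3 API; verify endpoints and field names against your instance's version.
INSTANCE="https://reddthat.com"

# 1. Log in and obtain a JWT
JWT=$(curl -s -X POST "$INSTANCE/api/v3/user/login" \
  -H "Content-Type: application/json" \
  -d '{"username_or_email": "myuser", "password": "mypassword"}' | jq -r '.jwt')

# 2. Resolve the numeric id of the target community (here: /c/random)
COMMUNITY_ID=$(curl -s "$INSTANCE/api/v3/community?name=random" | jq -r '.community_view.community.id')

# 3. Create the post with a title and the URL of the page being shared
curl -s -X POST "$INSTANCE/api/v3/post" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $JWT" \
  -d "{\"name\": \"Interesting article\", \"url\": \"https://example.com/article\", \"community_id\": $COMMUNITY_ID}"
```

A browser extension would essentially just wrap this call, filling in the title and URL from the active tab.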
Latest move to tighten regulation comes amid soaring use of algorithms for content recommendation, e-commerce and gig work distribution.

As long as I'm not looking at it, I'd feel more comfortable with it than being surrounded by mosquitoes. Would you rather be surrounded by mosquitoes than be in the same room as that thing?
You are overcomplicating the issue by suggesting a "favorite" option when there is already a "subscribe" option. At the very least, consider proposing something distinct that helps users discover more of the small communities they are subscribed to, rather than suggesting something that has already been implemented.
Although there were some proposed solutions for this issue, when scaled sort was implemented, @nutomic@lemmy.ml closed all related issues, even when they weren't being solved by scaled sort. So, it's clear that since there are no longer any open issues about this, no one is going to care about solving it. Therefore, it seems like the only option is to accept this fact and learn to cope with it. At this point, I've come to terms with the fact that Lemmy is mainly a platform for shitposts, while Reddit is for everything else. When I look at the feed, I mostly see memes, US politics, and some tech.
Custom feeds may not be the most efficient solution due to scalability concerns. However, an alternative approach could be to make the metadata about the posts (votes, comments, etc) available through an API call. This would enable users to develop their own algorithms for content discovery and potentially create a more personalized experience. Users could then implement, share and install these algorithms using tools like Tampermonkey or other userscript managers.
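Much of that metadata is in fact already reachable. As a rough illustration (assuming Lemmy's `GET /api/v3/post/list` endpoint and its usual response shape; field names may vary between versions), a userscript could fetch recent posts and re-rank them with its own formula, for example favouring discussion-heavy posts:

```bash
# Hypothetical sketch: fetch recent posts and re-rank them by comments-per-point,
# favouring discussion-heavy posts. Assumes Lemmy's v3 API; adjust instance and fields as needed.
curl -s "https://lemmy.ml/api/v3/post/list?type_=All&sort=New&limit=50" |
  jq -r '.posts
         | map({title: .post.name, score: .counts.score, comments: .counts.comments})
         | map(. + {rank: (.comments / ([.score, 1] | max))})
         | sort_by(-.rank)
         | .[] | "\(.rank)\t\(.title)"'
```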
The video gives the impression that the mosquitoes can be tracked with the ultrasonic sensor used and killed with the laser, but this is not the case.

Something like this:
Viral video shows ‘radar air defense system’ claimed to shoot mosquitos with lasers
YouTube Video
Over the past few days, I’ve witnessed a remarkable surge in the number of communities on browse.feddit.de. What started with 2k communities quickly grew to 4k, and now it has reached an astonishing 8k. While this exponential growth signifies a thriving platform, it also brings forth challenges such as increased fragmentation and the emergence of echo chambers. To tackle these issues, I propose the implementation of a Cross-Instance Automatic Multireddit feature within Lemmy. This feature aims to consolidate posts from communities with similar topics across all federated instances into a centralized location. By doing so, we can mitigate community fragmentation, counter the formation of echo chambers, and ultimately foster stronger community engagement. I welcome any insights or recommendations regarding the optimal implementation of this feature to ensure its effectiveness and success.
I think it’s because it’s just memes, plus quite harsh moderation and downvotes. It feels like a Reddit clone that has the exact same mindset as Reddit. I get annoyed when I see people being moderated for having an opinion that is not popular.
I saw a post being locked yesterday for asking about moderation. Doesn’t anyone else see the problem with that? Your channel’s rules are not more important than making people feel they can talk and express what’s on their mind.
I hate that so much. Stop treating people like they are just resources to moderate.
I don’t see many discussions. But I’m sure there are a few here and there.
Yeah, because first of all, content had to be spread out across 562826 different communities for no reason other than that Reddit had lots of communities after growing for many, many years. It started with just a few.
Then 99% of those were created on Lemmy.world, and every new user was directed to sign up at Lemmy.world.
I guess a lot of people here are younger than me and didn’t experience forums, but we had like 30 forum channels. That was enough to talk about anything at all. And I believe it’s the same here, it would have been enough. And then all channels would have easy to find content.
It certainly doesn't help that Lemmy had and still has absolutely no sensible way to actually surface niche communities to its subscribers. Unlike Reddit, it doesn't weigh posts by their relative popularity within the community but only by total popularity/popularity within the instance. There's also zero form of community grouping (like Reddit's multireddits) - all of which effectively eliminates all niche communities from any sensible main view mode and floods those with shitty memes and even shittier politics only. This pretty much suffocated the initially enthusiastic niche tech communities I had subscribed to. They stood no chance to thrive and their untimely death was inevitable.
There are some very tepid attempts to remedy this in upcoming Lemmy builds, but I fear it's too little too late.
I fear that Lemmy was simply nowhere near mature enough when it mattered and it has been slowly bleeding users and content ever since. I sincerely hope I'm wrong, though.
Visibility-Based Ranking: Factor in how often a post is shown to users by tracking the number of times a post appears in users' feeds and calculating an "engagement rate" by dividing votes by views. Rank "Top of All Time" posts using this engagement rate. This option cannot be implemented as the software does not keep track of post views or the number of times a post appears in users' feeds.
Community-Specific Normalized Scoring: Adjust post scores based on each community's monthly active user count at the time of posting. Unfortunately, this option cannot be implemented as the software does not keep track of the monthly active user count for each community over time.
Normalized Scoring: Adjust post scores based on the instance's monthly active user count at the time of posting. However, this option cannot be implemented as the software does not keep track of the monthly active user count over time.
The "Top of All Time" lists on Lemmy are currently dominated by posts from the exodus period, potentially overshadowing excellent content from both before and after this event.
Unfortunately, none of the suggested solutions can be implemented as the required data hasn't been tracked over time by the software.
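Just to make the proposals concrete, the first and third options above boil down to formulas like these (hypothetical only, since neither view counts nor historical monthly-active-user numbers exist in the data):

$$
\text{engagement rate} = \frac{\text{votes}}{\text{views}}, \qquad
\text{normalized score} = \frac{\text{post score}}{\text{monthly active users at posting time}}
$$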
Makes me wonder how much time people will waste debating with AIs on the internet in the future.
YouTube Video
Hello, I'm looking for a new distro that aligns with my privacy preferences and offers a wide range of packages without requiring me to search for PPAs, similar to Manjaro. I've grown uneasy about Manjaro's decision to collect unique data like MAC addresses and disk serial numbers by default, even if it's for diagnostic purposes.
In light of this, I'd like to ask for your recommendations on a Linux distro that meets the following criteria:
- No opt-out telemetry: I'm looking for a distro that doesn't collect any unique data by default.
- Access to a wide range of packages: I prefer a distro that offers a vast repository of packages, so I don't have to search for PPAs or third-party repositories.
- User-friendly: I'm not a fan of complicated configurations or steep learning curves, so a distro with a user-friendly approach would be ideal.
I'm curious to hear any recommendations you might have. Thanks!
To automatically and recursively download subtitles for all videos in a directory on Arch Linux, you have several options. Here's a comprehensive approach using some of the tools mentioned in the search results:
Using Subliminal
Subliminal is a powerful command-line tool that can recursively search for video files and download subtitles for them[1].
- Install Subliminal:
```bash
sudo pacman -S subliminal
```
- Use the following command to download subtitles recursively:
```bash
subliminal download -l en /path/to/your/video/directory
```
Replace "en" with your preferred language code and adjust the directory path as needed.
Using QNapi
QNapi is another excellent option for downloading subtitles[5].
- Install QNapi:
```bash
sudo pacman -S qnapi
```
- Use QNapi in console mode to download subtitles recursively:
```bash
find /path/to/your/video/directory -type f \( -name "*.mp4" -o -name "*.mkv" -o -name "*.avi" \) -exec qnapi -c {} +
```
This command finds all video files with .mp4, .mkv, or .avi extensions and passes them to QNapi for subtitle download.
Using yt-dlp
While primarily used for downloading videos, yt-dlp can also download subtitles for local video files[2].
- Install yt-dlp:
```bash
sudo pacman -S yt-dlp
```
- Use the following command to download subtitles recursively:
```bash
find /path/to/your/video/directory -type f \( -name "*.mp4" -o -name "*.mkv" -o -name "*.avi" \) -exec yt-dlp --write-sub --sub-lang en --skip-download {} +
```
Replace "en" with your preferred language code.
Using OpenSubtitlesDownload
OpenSubtitlesDownload is a Python script that can be used to download subtitles[3][4].
- Install OpenSubtitlesDownload:
```bash
yay -S opensubtitlesdownload
```
- Use the following command to download subtitles recursively:
```bash
find /path/to/your/video/directory -type f \( -name "*.mp4" -o -name "*.mkv" -o -name "*.avi" \) -exec OpenSubtitlesDownload.py {} +
```
Additional Tips
- For all these methods, you may need to adjust the file extensions in the `find` command to match your video file types.
- Some of these tools may require you to create an account on the subtitle service they use (e.g., OpenSubtitles.org).
- If you encounter rate limiting issues, you may need to add delays between downloads or use a tool that handles this automatically.
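For example, a crude way to add such a delay (a sketch only, reusing the Subliminal command from above with an arbitrary 10-second pause between files):

```bash
# Download subtitles one file at a time, pausing between requests to stay under rate limits
find /path/to/your/video/directory -type f \( -name "*.mp4" -o -name "*.mkv" -o -name "*.avi" \) -print0 |
while IFS= read -r -d '' video; do
    subliminal download -l en "$video"
    sleep 10  # arbitrary delay; tune it to the provider's limits
done
```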
Remember to respect copyright laws and the terms of service of the subtitle providers when downloading subtitles.
Citations: [1] https://www.tecmint.com/best-linux-movie-subtitles-player-software/ [2] https://wiki.archlinux.org/title/Yt-dlp [3] https://aur.archlinux.org/packages/opensubtitlesdownload [4] https://bbs.archlinux.org/viewtopic.php?id=162416 [5] https://man.archlinux.org/man/qnapi.1.en
To synchronize your home directory between two Manjaro systems using rsync, you can follow these steps:
Preparation
- Ensure both systems are connected to the same network.
- Install rsync on both systems if it's not already installed:
```bash
sudo pacman -S rsync
```
- Determine the IP address of the destination system:
```bash
ip addr show
```
Syncing the Home Directory
To sync your home directory from the source system to the destination system, use the following command on the source system:
```bash
rsync -av --update ~/ username@destination_ip:/home/username/
```
Replace `username` with your actual username on the destination system, and `destination_ip` with the IP address of the destination system[1][2].
Explanation of the Command
- `-a`: Archive mode, which preserves permissions, ownership, timestamps, etc.
- `-v`: Verbose mode, which provides detailed output of the sync process.
- `--update`: This option skips files that are newer on the receiver side.
- `~/`: This is the source directory (your home directory on the current system).
- `username@destination_ip:/home/username/`: This is the destination, specifying the user, IP address, and path on the remote system[1][3].
Additional Considerations
- SSH Key Authentication: For a smoother experience, set up SSH key authentication between the two systems. This eliminates the need to enter a password each time you run rsync[4] (see the sketch below).
- Exclude Files: You might want to exclude certain directories or files. Use the `--exclude` option:
```bash
rsync -av --update --exclude '.cache' --exclude '.local/share/Trash' ~/ username@destination_ip:/home/username/
```
- Dry Run: Before performing the actual sync, you can do a dry run to see what would be transferred:
```bash
rsync -av --update --dry-run ~/ username@destination_ip:/home/username/
```
- Bandwidth Limit: If you're concerned about network usage, you can limit the bandwidth:
```bash
rsync -av --update --bwlimit=1000 ~/ username@destination_ip:/home/username/
```
This limits the transfer to 1000 KB/s[3].
- Incremental Backups: The `--update` flag ensures that only newer files are transferred, making subsequent syncs faster.
Remember to run this command from the source system, and ensure you have the necessary permissions on both systems. Always double-check your command before running it to avoid unintended data loss or overwriting[2][5].
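For the SSH key authentication mentioned in the list above, a minimal sketch using the standard OpenSSH tools looks like this:

```bash
# On the source system: generate a key pair (skip if one already exists)
ssh-keygen -t ed25519

# Copy the public key to the destination system so rsync over SSH stops asking for a password
ssh-copy-id username@destination_ip
```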
Citations: [1] https://www.bleepingcomputer.com/forums/t/748252/a-guide-to-backing-up-your-home-directory-using-rsync/ [2] https://www.reddit.com/r/linux4noobs/comments/qtu0ww/backup_and_restore_home_directory_with_rsync/ [3] https://www.cherryservers.com/blog/how-to-use-rsync-on-linux-to-synchronize-local-and-remote-directories [4] https://www.tecmint.com/rsync-local-remote-file-synchronization-commands/ [5] https://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories [6] https://stackoverflow.com/questions/9090817/copying-files-using-rsync-from-remote-server-to-local-machine/9090859 [7] https://www.heficed.com/tutorials/vps/how-to-use-rsync/
You can replicate pacman's --needed behavior with yay by using a loop, with the added option of skipping individual packages interactively. Here's how you can install packages from a list, skipping those that are already installed:
- First, export the list of installed packages from the source system:
```bash
yay -Qqe > installed_packages.txt
```
- Transfer the "installed_packages.txt" file to the new Manjaro system.
- On the new system, use the following bash script to install packages:
```bash
#!/bin/bash

# Function to print a separator line
print_separator() {
    echo "========================================"
}

# Function to handle Ctrl+C
ctrl_c() {
    echo
    echo "Skipping current package..."
    return 1
}

# Set up trap for Ctrl+C
trap ctrl_c INT

# Read the file line by line
while IFS= read -r package; do
    print_separator
    echo "Processing package: $package"
    print_separator
    if ! yay -Qi "$package" &> /dev/null; then
        echo "Installing $package..."
        echo "Press Ctrl+C to skip this package."
        print_separator
        if yay -S --noconfirm "$package"; then
            echo "Installation of $package completed."
        else
            if [ $? -eq 130 ]; then
                echo "Installation of $package skipped."
            else
                echo "Installation of $package failed."
            fi
        fi
    else
        echo "$package is already installed. Skipping."
    fi
    echo # Print an empty line for better readability
done < installed_packages.txt

# Remove the trap
trap - INT

print_separator
echo "All packages processed."
print_separator
```
This script does the following:
- It reads each package name from the "installed_packages.txt" file.
- For each package, it checks if it's already installed using `yay -Qi`.
- If the package is not installed, it installs it using `yay -S --noconfirm`.
- If the package is already installed, it skips it and prints a message.
Save this script as "install_packages.sh" and make it executable:
```bash
chmod +x install_packages.sh
```
Then run the script:
```bash
./install_packages.sh
```
Important considerations:
- This method will install packages one by one, which may be slower than installing them all at once.
- The `--noconfirm` option is used to avoid prompts, but be cautious as it will automatically accept all default options.
- Some AUR packages might still require manual intervention during the build process.
- Be aware that not all packages may be compatible with the new system, especially if there are significant differences in hardware or software versions.
- This method only installs packages, not their configurations. You'll need to transfer configuration files separately.
By using this approach, you can replicate the functionality of the `--needed` option while using yay to install packages from both official repositories and the AUR[2].
Citations: [1] https://www.hostzealot.com/blog/how-to/installing-yay-aur-helper-on-arch-linux-a-step-by-step-guide [2] https://www.reddit.com/r/archlinux/comments/jtaraj/is_there_a_way_to_automate_pkg_install_with_yay/ [3] https://linuxcommandlibrary.com/man/yay [4] https://stackoverflow.com/questions/53921707/loop-in-order-to-load-or-install-packages-does-not-work-what-am-i-doing-wrong [5] https://forum.manjaro.org/t/force-yay-to-rebuild-aur-package-s/141726 [6] https://es.hostzealot.com/blog/how-to/instalacion-de-yay-aur-helper-en-arch-linux-guia-paso-a-paso [7] https://xerolinux.xyz/posts/install-yay-paru/ [8] https://bbs.archlinux.org/viewtopic.php?id=281104
To install all programs from one Manjaro system on another, you can follow these steps:
Export Package List
On the source Manjaro system:
- Open a terminal
- Run the following command to export a list of explicitly installed packages:
```bash
pacman -Qqe > packages.txt
```
This will create a file called "packages.txt" containing the names of all explicitly installed packages[1].
Transfer the Package List
Transfer the "packages.txt" file to the target Manjaro system. You can use various methods like USB drive, network transfer, or cloud storage.
Install Packages on Target System
On the target Manjaro system:
- Open a terminal
- Navigate to the directory containing the "packages.txt" file
- Run the following command to install all packages from the list:
```bash
sudo pacman -S --needed - < packages.txt
```
This command will install all packages listed in the file, skipping any that are already installed[1].
Additional Considerations
- AUR Packages: The above method only covers official repository packages. For AUR packages, you'll need to install them manually or use an AUR helper like `yay`[2] (a sketch for splitting the lists follows below).
- Configuration Files: Remember that this process only installs packages, not their configurations. You may need to transfer configuration files separately.
- System Differences: Be aware that some packages might not be compatible if the two systems have different architectures or Manjaro versions.
- Updates: After installation, run a system update:
```bash
sudo pacman -Syu
```
- Cleaning Up: You might want to remove unnecessary packages on the target system:
```bash
sudo pacman -Rns $(pacman -Qtdq)
```
This process should help you replicate most of the software environment from one Manjaro system to another. However, always review the package list before installation to ensure it's appropriate for the target system[1][2].
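For the AUR caveat above, one way to handle it (a sketch, assuming you want separate lists for official and AUR packages; the file names are arbitrary) is to split the export by package origin:

```bash
# On the source system: split explicitly installed packages by origin
pacman -Qqen > packages_repo.txt   # explicit + native (official repositories)
pacman -Qqem > packages_aur.txt    # explicit + foreign (AUR, locally built)

# On the target system: install the official-repo list as before
sudo pacman -S --needed - < packages_repo.txt
# packages_aur.txt can then be handed to an AUR helper such as yay
```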
Citations: [1] https://www.reddit.com/r/ManjaroLinux/comments/ifowrz/how_to_install_software_in_manjaro/ [2] https://github.com/manzurahmed/manjaro-software-installation-guide [3] https://www.youtube.com/watch?v=82YinI2Cgbc [4] https://www.youtube.com/watch?v=VTE4vtrvIM4 [5] https://forum.manjaro.org/t/install-software-from-terminal-cd-downloads-in-manjaro/88674 [6] https://forum.manjaro.org/t/how-to-install-software-over-applications-in-manjaro-hello/110060 [7] https://forum.manjaro.org/t/how-to-install-apps-on-manjaro/84114 [8] https://forum.manjaro.org/t/how-to-install-new-software/141060
To automatically open output in a pager when it exceeds the screen height in Zsh, you can use a shell function that wraps commands and pipes their output to a pager if necessary. Here's how you can implement this:
- Add the following function to your `.zshrc` file:
```zsh
auto_pager() {
    local cmd="$1"
    shift
    # Run the command and pipe its output to a subshell that decides how to display it
    $cmd "$@" | {
        # Buffer the output so its length can be measured before it is shown
        local output
        output="$(cat)"
        if [ "$(tput lines)" -lt "$(print -r -- "$output" | wc -l)" ]; then
            print -r -- "$output" | "${PAGER:-less}"
        else
            print -r -- "$output"
        fi
    }
}
```
- Create aliases for commands you want to automatically page:
```zsh
alias ls='auto_pager ls'
alias grep='auto_pager grep'
alias tree='auto_pager tree'
# Add more aliases as needed
```
This solution works as follows:
- The `auto_pager` function takes a command and its arguments.
- It runs the command and pipes its output to a subshell, where the output is buffered.
- It compares the number of lines in the output to the terminal height.
- If the output exceeds the terminal height, it pipes the buffered output to the pager defined in the `$PAGER` environment variable (usually `less`).
- If the output fits within the terminal, it simply prints it directly.
To ensure the pager behaves correctly, you can set the following options in your `.zshrc`:
```zsh
export LESS="-FXR"
```
This configures `less` to:
- Exit if the entire file fits on one screen (-F)
- Not clear the screen when exiting (-X)
- Display ANSI colors (-R)
By using this approach, you can automatically page output that exceeds the screen height while still displaying shorter output directly[1][2]. Remember to restart your Zsh session or source your `.zshrc` file after making these changes.
Citations: [1] https://stackoverflow.com/questions/15453394/would-it-be-possible-to-automatically-page-the-output-in-zsh/15488779 [2] https://www.reddit.com/r/bash/comments/jbcp5x/how_to_automatically_display_the_output_in_a/ [3] https://github.com/ohmyzsh/ohmyzsh/issues/3016 [4] https://github.com/sharkdp/bat/issues/749
YouTube Video
YouTube Video
A new AI horror film competition has been announced. Here is the one from last year: https://youtu.be/tCa-9ik5ffA
YouTube Video
Yogurt and Apple Tart
Ingredients:
- 2 apples
- 5 eggs
- 80 g of cornstarch (about 6.5 tablespoons)
- 500 g of Greek yogurt (unsweetened)
- 150 g of sugar (you can use brown sugar or your preferred sweetener)
- 200 ml of cream (heavy cream)
- Puff pastry (store-bought or homemade) for a 24 cm pan
- Butter (optional, for greasing the pan)
Instructions:
- Prepare the pan: Lightly grease the silicone pan (if necessary) and line the base with the puff pastry. Trim the excess dough and prick the base with a fork so it doesn't puff up during baking. Put the pan in the refrigerator while you prepare the filling.
- Prepare the filling: In a saucepan over medium heat, combine the Greek yogurt and the sugar. Stir well until fully blended.
- Add the apples: Wash the apples and cut them into medium pieces (you can leave the skin on for color). Add them to the yogurt and sugar mixture.
- Add the eggs and cornstarch: In a separate bowl, beat the 5 eggs with the cornstarch. Then add this mixture to the saucepan with the yogurt and apples. Stir constantly until the mixture thickens (about 5 minutes).
- Add the cream: Once the mixture has thickened, remove it from the heat and add the cream. Mix well until fully incorporated.
- Bake: Pour the mixture into the pan prepared with the puff pastry. Preheat the oven to 175 °C (347 °F) and bake for 45 minutes, or until the top is golden and the filling is set.
- Cool: Once baked, remove the tart from the oven and let it cool to room temperature. Then refrigerate it for at least 2 hours before unmolding.
- Unmold and serve: Carefully unmold the tart using a plate or a suitable utensil. Serve cold and enjoy this delicious yogurt and apple tart.
I came across a statement that claims, "Biden/Harris have just pushed through DoD Directive 5240.01 giving the Pentagon power — for the first time in history — to use lethal force to kill Americans on U.S. soil who protest government policies." This sounds incredibly alarming and reminiscent of the dystopian government tactics depicted in V for Vendetta.
I always thought the CIA was already involved in actions against Americans, with or without permission. So, is there any truth to this claim about the Pentagon's new authority? What does DoD Directive 5240.01 actually say, and does it really grant such power? I’d appreciate any insights or clarifications on this topic!
https://rumble.com/v5jxz85-world-war-3-incoming-north-korea-send-troops-to-russia-as-brics-prepares-to.html?start=2444
This is golden.
Raspberry Pi 5 with 8 GB of RAM