says AI wasn’t the main factor
That's exactly what an AI would say.
It would be so nice if Meta were declared hostile and blocked Europe-wide. The problem is, this can be interpreted as blocking free speech. But is it really free speech to misuse people's rights and steal their privacy, which is against the law?
In the 60s it was the atomic bomb regulation they had to sign. A "few hours later", in 2025, it's the AI regulation. My prediction: 60 years from now there will be a regulation about who and what you can do on Mars. But I'm getting ahead of myself.
I know it's user-produced content. But there are still rules that are enforced by the Arch team, and they host and link to it. And why not tell people about security issues? They could at least mention it in the news, so we can act accordingly. This is super disappointing. Is there any trustworthy RSS feed that covers this?
Why is this not found in the official Arch Linux news? https://archlinux.org/news/
Another possibility is that they outsource some of the work-heavy stuff.
There are a couple of factors that make this a confusing topic.
Vim: At a high level, Vim (and Neovim) normally have their own clipboard system. Vim has multiple internal clipboards (registers) that can be used like variables and accessed with dedicated commands. So it's kind of sandboxed from your system. But you can explicitly use commands to access the system clipboard, and there is a configuration option to make Vim use the system clipboard by default.
Linux: Unlike Windows, on Linux we also have two kinds of clipboards: the "system" clipboard as you know it, and the "primary" clipboard. This has nothing to do with Vim and is a feature of Linux systems themselves. If you, for example, select text in your browser without copying it, it is automatically placed into the "primary" clipboard. Then you should be able to paste it with the middle mouse button, for example. The system clipboard, where you explicitly copy stuff, is not affected by this.
You should read the following documentation: 09.3 The clipboard - Neovim
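If you want to see the two Linux selections side by side, here is a minimal sketch, assuming the external xclip tool is installed and an X11/XWayland session is running (this is only to illustrate the Linux selections, it is unrelated to Vim itself):

```python
#!/usr/bin/env python3
# Sketch: print the content of both Linux selections via the external
# xclip tool. Assumes xclip is installed (X11 or XWayland session).
import subprocess


def read_selection(name: str) -> str:
    # "xclip -selection <name> -o" prints the current content of that selection.
    result = subprocess.run(
        ["xclip", "-selection", name, "-o"],
        capture_output=True, text=True,
    )
    return result.stdout


print("primary  :", read_selection("primary"))    # text you only highlighted
print("clipboard:", read_selection("clipboard"))  # text you explicitly copied
```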
Unless it's coded with JavaScript (me mocking the `date` function). Not that I am a JavaScript programmer, I just happen to know it's totally broken.
Could someone explain what's going on with this comment and why it's upvoted so much? It's a genuine question. What's wrong with his phone (and how do you know what he uses)?
I just hope they don't get burned out and there is no crunch. The only other company I can think of that did so much in such a short time is Insomniac Games. It's actually how it used to be, with how frequently we got games from single studios. But since games are so big and expensive nowadays, it seems a bit unreal how fast some companies are able to pump out so many high-quality games in a relatively short time.
YouTube Video
- YouTube video (first 7 minutes): https://youtu.be/rIR3PpQ82yE
- SkipVids (same video, but without ads): https://skipvids.com/?v=rIR3PpQ82yE
- New site: https://www.thisweekinvideogames.com/ (at the time of posting the website loads extremely slowly; they might be getting hit with a lot of visitors)
The first 7-minute segment explains it. It's kind of self-advertisement, but I think this is important. One of my favorite gaming YouTube channels, "Skill Up", launched a new website for gaming articles. The goal is to have articles without AI, no advertisements, no sponsored articles, and no SEO-optimized content, in order to maintain high quality. I think this is really, really important and a good step.
> I know the title itself will result in a ban
We are not on Reddit, my dude. Criticizing something in a productive way without being toxic should never be banned, even if the moderators disagree with your opinion (I assume).
I have been a diehard Firefox user since its inception at version 1! And admittedly there are a couple of problems, especially with the company itself. Even I think about switching to a fork, but I still want to keep using the Firefox ecosystem and dislike the idea of a Chrome-based browser.
I am responding to your points. These are not personal attacks or anything like that, and I hope you take my points to heart as I did yours and treat them with respect. Just because you encountered some troublesome people in the past does not mean I am one too. I am criticizing some of your critique (and agreeing with some of it).
> the design is shittier every iteration.
It's not that bad for me, and it looks similarly good to all other browsers. But I am also a person who makes modifications to the default configuration and even goes so far as to change settings via userChrome.css (no, this has nothing to do with the Chrome browser, it's just named like that). But your argument is understandable here.
> the tech behind the design is shittier every iteration.
I don't understand this point much. On my system the theme is set to "System theme -- auto", which makes it look exactly like whatever the operating system decides all applications should look like. If you make changes to your operating system's theme, this should be reflected in Firefox. And in my opinion this is the correct default configuration for every browser. If you want a specialized look that differs in your browser, you have the ability to use other themes.
How does the system theme make Firefox unusable? Are all other applications that use the system theme unusable too? So, as said, I don't get the problem here.
> Moz again went full god mode and make all decisions for you
Not really. It uses what YOU have decided to use for your operating system.
> the people behind the tech behind the design are shit.
This is just an insult without explaining anything. This is one of those points that could lead to a ban, and for a good reason! If you do not criticize with respect, then you won't get respect either. Do not insult people and then complain that you got banned.
> the company behind the people behind the tech behind the design is shit by definition.
By your definition. There are good and bad things about the design people. And we are talking about the designers here, right? Not the management. The management of the main company, Mozilla, is awful, I agree. And the decisions the Firefox team makes are sometimes bad too. But I don't think putting all of them into one bucket and insulting them is a productive comment.
2 or 3 sentences in a row might look intimidating to some (I still don't buy this reasoning; did they ever go to school?), but I think it actually hinders readability rather than improving it. Because every sentence is its own paragraph, the brain won't find the connected sentences that build up a single thought process. Breaking up text by logic makes more sense for readability and is even better for quickly scanning through text.
I don't believe breaking up text at every sentence makes it more readable, even for those who say it does, because I think they are "wrong". Sorry if I come across as argumentative, but this is a thing that bothers me a lot when reading, and I just wanted to explain why I think it's wrong.
I know, my question was not directly about why this news article is written this way. It's a more general question from me. I don't get it. Lots of personal blogs do this too, BTW. In my opinion this is "wrong". A paragraph should consist of multiple sentences that contain a single thought process, or some other grouping. It's like putting a blank line between every single statement in programming code. Instead, an idea should be grouped together into a block. Just an analogy with code. /My Opinion
My belief is that they intentionally want to make the post look bigger. Instead of 3 or 4 paragraphs (which would be the entire article in this case) they spread it out like this. So you have to scroll and see more ads and links, and they have more opportunities to put additional ads and links between every paragraph. /My Conspiracy
Edit: Maybe there is something else to it. Lots of people read news articles on smartphones, where text wraps very quickly and paragraphs look longer than they are. So maybe using more paragraphs gives more room between each sentence. As I don't read much on smartphones, I can't judge this. But I still don't like it. /My Edit
Why is every sentence in its own paragraph??
One should not make the mistake of judging just a single photograph for a role in a film. It's also important how they move and talk, and what the perception of the person is based on their existing films. I'm not in a position to judge any of these castings or your suggestion; I just wanted to bring this point into the discussion.
> You'll be chippin' into the dark future 52 years early,
Still 5 years late. But better late than never. :D
Removing an extension should not have a huge impact on RAM usage. I assume Outlook in the browser just requires a lot of RAM. Browsers and huge browser apps require tons of RAM; that sounds normal to me. Especially if you have a lot of RAM, Firefox will make more use of it, because it is available, and that ensures fast operation. Reducing the RAM usage might cause Firefox to cache data on the filesystem instead, which will make it slower if it really needs that much RAM.
For testing purposes, create a fresh Firefox profile, which is basically what you get when you install Firefox from scratch, without your personal configuration and extensions. Try using Outlook in that fresh profile. Also try a different browser that is not based on Firefox (for example Brave) to see if it requires that much RAM too. This way we know whether it's because of the app, because of your profile, or because of Firefox.
Dolphin file manager from KDE. Nowadays I default to "Compact" view without "Preview" enabled. This is similar to "Icons" view, but the icons are small, and long file lists scroll horizontally instead of vertically.
- filenames in compact mode can be longer on one line, which looks kind of similar to "Details" view, but entries are laid out in multiple columns instead of one long list
- preview disabled, because this is extremely fast, as I have a ton of files that do not even have a preview image
That's my default. Occasionally I enable previews and switch to the bigger "Icons" view when I look at images or videos. Or sometimes I enable "Details" view when needed. In normal usage I don't need the details anyway.
I'm so drunk from all the WINE news recently. Not complaining though.
YouTube Video
Watch on the SkipVids platform, an alternative that plays YouTube videos indirectly, but without ads: https://skipvids.com/?v=-cTsFt-j7rk
---
I just found this creator, who is super excited about the new Bash version. He goes through some of the new changes and features. There is something funny about a guy getting so excited about a new Bash version, and that is why I wanted to share it. :D
Also, it's nice to see the changes in action and get an explanation from someone who (seemingly) knows what he is doing.
Video (partial) description:
---
Source Code: github.com/bahamas10/bash-changes
$ whoami
Yo what's up everyone my name's dave and you suck at programming! Connect with me on my socials below and if you're reading this you're legally required to subscribe to my channel.

$ cat source-code
The source code for my YSAP series (or related videos) is available for free under the MIT License on GitHub: Source Code → github.com/bahamas10/ysap
Python Tutorial: argparse advanced-help with additional options - thing.py

Example script: https://gist.github.com/thingsiplay/ae9a26322cd5830e52b036ab411afd1f
Hi all. I just wanted to share a way to handle a so-called advanced help menu, where additional options are listed that are otherwise hidden in the regular help. Hidden options should still function; this is just to have less clutter in the normal view.
I've searched the web to see how people do it, and this is the way I like most so far. If you think this is problematic, please share your thoughts. This is for a command-line terminal application that could also be automated through a script.
How it works at a high level
Before `ArgumentParser()` is called, we check `sys.argv` for the trigger option `--advanced-help`. Depending on that, we set a variable to true or false. Then, during the setup of the parser after the `ArgumentParser()` call, we add the `--advanced-help` option to the list shown in the regular help.
```python
import argparse
import sys

advanced_help = False
for arg in sys.argv:
    if arg == "--":
        break
    if arg == "--advanced-help":
        advanced_help = True

parser = argparse.ArgumentParser()
```
Continue setting up your options as usual. But for the help description of those you want to exclude from the regular `-h`, add an inline if-else (ternary) expression. This expression supplies the help description only if the `advanced_help` variable is true; otherwise it supplies `argparse.SUPPRESS` to hide the option. Do this for all the options you want to hide.
```python
parser.add_argument(
    "-c", "--count",
    action="store_true",
    default=False,
    help="print only a count of matching items per list, output file unaffected"
    if advanced_help
    else argparse.SUPPRESS,
)
```
At last we need to actually parse what was just set up. For this we build a custom argument list based on `sys.argv`, plus the regular `--help` option. This way we can use `--advanced-help` to show the full help message, without additionally needing `-h` or `--help`.
```python
if advanced_help:
    args = parser.parse_args(sys.argv[0:0] + ["--help"] + sys.argv[1:])
else:
    args = parser.parse_args()
```
Run the program once with `./thing.py -h` and once with `./thing.py --advanced-help`.
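For reference, here is a minimal self-contained sketch that puts the pieces above together, reusing the `-c/--count` example (the full version is in the linked gist; this is just an illustration, not the gist code):

```python
#!/usr/bin/env python3
import argparse
import sys

# Pre-scan sys.argv (skipping the program name) before building the parser;
# stop at "--" so positionals after it can never trigger the advanced help.
advanced_help = False
for arg in sys.argv[1:]:
    if arg == "--":
        break
    if arg == "--advanced-help":
        advanced_help = True

parser = argparse.ArgumentParser()

# The trigger option itself is always listed in the regular help.
parser.add_argument(
    "--advanced-help",
    action="store_true",
    help="show help message including advanced options",
)

# A "hidden" option: its description only exists when --advanced-help
# was given, otherwise argparse.SUPPRESS hides it from the help output.
parser.add_argument(
    "-c", "--count",
    action="store_true",
    help="print only a count of matching items"
    if advanced_help
    else argparse.SUPPRESS,
)

# If the trigger was seen, inject --help so the now-complete help text
# is printed (argparse exits after printing it).
if advanced_help:
    args = parser.parse_args(["--help"] + sys.argv[1:])
else:
    args = parser.parse_args()

print(args)
```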
YouTube Video
Watch on YouTube: https://youtu.be/_Pqfjer8-O4
Watch on SkipVids: https://skipvids.com/?v=_Pqfjer8-O4 (watch YouTube without using YouTube directly, and without ads)
Video Description:
---
Inside your smartphone, there are billions of transistors, but have you ever wondered how they actually work and how they can be combined to perform tasks like multiplying two numbers together? One rather interesting thing is that transistors are a lot like Lego Bricks assembled together to build a massive Lego set, which we’ll explore further. In this video, we dive into the nanoscopic world of transistors. First, we'll see how an individual transistor works, then we’ll see how they are connected together and organized into logic gates such as an inverter or an AND gate. Finally, we’ll see how logic gates are connected together into large Macrocells capable of performing arithmetic.
Table of Contents:
00:00 - Inside your Desktop Computer
00:26 - Transistors are like Lego Pieces
01:09 - Lego Bricks vs Transistors and Standard Cells
02:12 - Examining the Inverter Standard Cell
03:24 - How do Basic Transistors work?
09:09 - Schematic for an Inverter Standard Cell
10:45 - Exploring the Macrocell
13:20 - Conceptualizing how a CPU Works
15:11 - Brilliant Sponsorship
16:55 - The NAND Standard Cell
20:35 - A Surprisingly Hard Script to Write
21:42 - The AND Standard Cell
23:16 - The Exclusive OR Standard Cell
23:54 - CMOS Circuit
24:27 - Understanding Picoseconds
25:51 - Special Thank You and Outro
I desperately need some Python help. In short, I want to use multiple keys at once for sorting a list of dictionaries. I have a list of keys and don't know how to convert it into the required form.
This is a single key. `self.items` is a list of dictionaries, and in `d[key]` the `key` resolves to an actual key name such as "core_name", which the list is then sorted by. This works as expected for a single sort, but not for multiple keys.
key = "core_name" self.items = sorted(self.items, key=lambda d: d[key]) key = "label" self.items = sorted(self.items, key=lambda d: d[key])
The problem is, sorting multiple times gives me the wrong result. The keys need to be applied in one go. I can do that manually like this:
```python
self.items = sorted(self.items, key=lambda d: (d["core_name"], d["label"]))
```
But I need to assign a list of keys programmatically. The following does not work (obviously), and I don't know how to convert it into the required form:
```python
# Not working!
keys = ["core_name", "label"]
self.items = sorted(self.items, key=lambda d: d[keys])
```
I somehow need something like a map function, I guess? Something where `d[keys]` is replaced by "convert each key in keys into a list of d[key]". This is needed inside the lambda, because the key/value pairs are read dynamically from `self.items`.
Is it understandable what I am trying to do? Does anyone have an idea?
---
Edit: Solution by Fred: https://beehaw.org/post/20656674/4826725
Just use a comprehension and create a tuple in place: `sorted(items, key=lambda d: tuple(d[k] for k in keys))`
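A tiny self-contained example of that solution (the dictionaries and key names here are just made up for illustration, not my real data):

```python
items = [
    {"core_name": "snes", "label": "Super Metroid"},
    {"core_name": "gba", "label": "Metroid Fusion"},
    {"core_name": "snes", "label": "Chrono Trigger"},
]

keys = ["core_name", "label"]
# Build the sort key dynamically: one tuple per dictionary, containing
# the values of all requested keys in order.
items = sorted(items, key=lambda d: tuple(d[k] for k in keys))

for item in items:
    print(item["core_name"], "-", item["label"])
# gba - Metroid Fusion
# snes - Chrono Trigger
# snes - Super Metroid
```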
A regular call to fzf, but it outputs the index number of the selected entry instead of the text itself. It's a pretty niche use case, but there were a few times in the past when I needed it. You can pass options to fzf as usual too.
```bash
fzn() {
    nl | fzf --with-nth 2.. "${@}" | awk '{print $1}'
}
```
Usage:
```bash
find . -maxdepth 1 -type d | fzn -e -m
```
I always forget how to do this manually, so I made this simple function for Bash. Just copy it like an alias into your .bashrc and use it like any other command in a pipe.
It only picks up the first command on each line of the recorded history, not subshells or chained commands.
```bash
#!/usr/bin/env bash

# 1. history and $HISTFILE do not work in scripts. Therefore cat with a direct
#    path is needed.
# 2. awk gets the first word of each command line, the command name.
# 3. The list is then sorted and duplicate entries are removed.
# 4. type -P will expand command names to paths, similar to which. It forces a
#    PATH search even for names that are aliases, builtins or functions.
# 5. The final output is then sorted again.
type -P $(cat ~/.bash_history | awk '{print $1}' | sort | uniq) | sort
```
After reading a blog post, I had this script in mind and wanted to see if it's possible. This is just for fun and I don't have an actual use for it. Maybe some parts of it will inspire you to do something too. So have fun.
Edit 1:
After some suggestions from the comments, here is a slightly shorter version. `sort | uniq` can be replaced by `sort -u`, as their output should be identical in this case (in certain circumstances they can have a different effect!). Also, someone pointed out my useless use of `cat`, as the file can be given directly to `awk`. And for good reason. :D Enjoy, and thanks to everyone.
```bash
type -P $(awk '{print $1}' ~/.bash_history | sort -u) | sort
```
I still have no real use case for this one-liner; it's mainly just for fun.
The multinational scientific collaboration COSMOS releases the largest map of the universe, going back to almost the beginning of time
Direct link to the image in the browser: https://cosmos2025.iap.fr/fitsmap/?ra=150.1203188&dec=2.1880050&zoom=2
Article copied:
---
In the name of open science, the multinational scientific collaboration COSMOS on Thursday has released the data behind the largest map of the universe. Called the COSMOS-Web field, the project, with data collected by the James Webb Space Telescope (JWST), consists of all the imaging and a catalog of nearly 800,000 galaxies spanning nearly all of cosmic time. And it’s been challenging existing notions of the infant universe.
“Our goal was to construct this deep field of space on a physical scale that far exceeded anything that had been done before,” said UC Santa Barbara physics professor Caitlin Casey, who co-leads the COSMOS collaboration with Jeyhan Kartaltepe of the Rochester Institute of Technology. “If you had a printout of the Hubble Ultra Deep Field on a standard piece of paper,” she said, referring to the iconic view of nearly 10,000 galaxies released by NASA in 2004, “our image would be slightly larger than a 13-foot by 13-foot-wide mural, at the same depth. So it’s really strikingly large.”

(Video: An animated zoom-out from the center of the COSMOS-Web field to a full-size comparison between COSMOS-Web and the Hubble Ultra Deep Field)
The COSMOS-Web composite image reaches back about 13.5 billion years; according to NASA, the universe is about 13.8 billion years old, give or take one hundred million years. That covers about 98% of all cosmic time. The objective for the researchers was not just to see some of the most interesting galaxies at the beginning of time but also to see the wider view of cosmic environments that existed during the early universe, during the formation of the first stars, galaxies and black holes.
“The cosmos is organized in dense regions and voids,” Casey explained. “And we wanted to go beyond finding the most distant galaxies; we wanted to get that broader context of where they lived.”

A 'big surprise'
And what a cosmic neighborhood it turned out to be. Before JWST turned on, Casey said, she and fellow astronomers made their best predictions about how many more galaxies the space telescope would be able to see, given its 6.5 meter (21 foot) diameter light-collecting primary mirror, about six times larger than Hubble’s 2.4 meter (7 foot, 10 in) diameter mirror. The best measurements from Hubble suggested that galaxies within the first 500 million years would be incredibly rare, she said.
“It makes sense — the Big Bang happens and things take time to gravitationally collapse and form, and for stars to turn on. There’s a timescale associated with that,” Casey explained. “And the big surprise is that with JWST, we see roughly 10 times more galaxies than expected at these incredible distances. We’re also seeing supermassive black holes that are not even visible with Hubble.” And they’re not just seeing more, they’re seeing different types of galaxies and black holes, she added.
'Lots of unanswered questions'
While the COSMOS-Web images and catalog answer many questions astronomers have had about the early universe, they also spark more questions.
“Since the telescope turned on we’ve been wondering ‘Are these JWST datasets breaking the cosmological model? Because the universe was producing too much light too early; it had only about 400 million years to form something like a billion solar masses of stars. We just do not know how to make that happen,” Casey said. “So, lots of details to unpack, and lots of unanswered questions.”
In releasing the data to the public, the hope is that other astronomers from all over the world will use it to, among other things, further refine our understanding of how the early universe was populated and how everything evolved to the present day. The dataset may also provide clues to other outstanding mysteries of the cosmos, such as dark matter and physics of the early universe that may be different from what we know today.
“A big part of this project is the democratization of science and making tools and data from the best telescopes accessible to the broader community,” Casey said. The data was made public almost immediately after it was gathered, but only in its raw form, useful only to those with the specialized technical knowledge and the supercomputer access to process and interpret it. The COSMOS collaboration has worked tirelessly for the past two years to convert raw data into broadly usable images and catalogs. In creating these products and releasing them, the researchers hope that even undergraduate astronomers could dig into the material and learn something new.
“Because the best science is really done when everyone thinks about the same data set differently,” Casey said. “It’s not just for one group of people to figure out the mysteries.”

(Image: Caitlin Casey wears a puffy coat in front of a lake. Photo credit: courtesy photo)
Caitlin Casey is an observational astronomer with expertise in high-redshift galaxies. She uses the most massive and unusual galaxies at early times to test fundamental properties of galaxy assembly (including their gas, stars, and dust) within a ΛCDM cosmological framework.
For the COSMOS collaboration, the exploration continues. They’ve headed back to the deep field to further map and study it.
“We have more data collection coming up,” she said. “We think we have identified the earliest galaxies in the image, but we need to verify that.” To do so, they’ll be using spectroscopy, which breaks up light from galaxies into a prism, to confirm the distance of these sources (more distant = older). “As a byproduct,” Casey added, “we’ll get to understand the interstellar chemistry in these systems through tracing nitrogen, carbon and oxygen. There’s a lot left to learn and we’re just beginning to scratch the surface.”
The COSMOS-Web image is available to browse interactively; the accompanying scientific papers have been submitted to the Astrophysical Journal and Astronomy & Astrophysics.
YouTube Video
I like listening to old-school video game music. Recently I listened to some music from games I never played, and one song in particular blew my mind. It's wonderful, and since it lives rent-free in my head, I keep coming back to it over and over again. I'm loving it.
Listen on:
"Sacred Somnom Woods" in Mario & Luigi - Dream Team for the Nintendo 3DS. The composer is the well known Yoko Shimomura, also known for work on Street Fighter 2, Kingdom Hearts and many more legendary games.
To me this track has Breath of the Wild or Tears of the Kingdom vibes. Because I did not play the actual Mario & Luigi games, I always interpret it as a Zelda song now. Its name contributes to that too! Do you also sometimes have game music that captures you like this?
- Purchases have been restored & the system is working
- If you're still missing items contact Support (support.splitgate.com)
- XP is now active (we've also turned on Double XP)
- Sometimes XP is delayed but it is still going through
- The Beta now supports players on Linux thru Proton
- We're wo...
2 days ago I made a post that the game would not run on a Linux desktop PC (but it would on the Steam Deck). 10 hours ago they released an update that resolves this issue and makes the game run through Proton on a Linux desktop PC.
> - The Beta now supports players on Linux thru Proton
I can confirm it does run, and I just did the short tutorial. I still have to play more, but I wanted to inform anyone who is interested in the game.
I want to share some thoughts that I had recently about YouTube spam comments. We all know these early bots in the YouTube comment section, with those "misleading" profile pictures and obvious bot-like comments. Those comments are often either random remarks about any topic or copied from other users.
OK, why am I telling you this? Well, I think these bots are there to be recognized as bots. Their job is to be seen as a bot, then deleted and ignored. In that case everyone feels safe, thinking all bots have now been deleted. But in reality there are more sophisticated bots among us. So the easy bots' job is to get deleted and basically mislead us, so we don't think any are left, because those were deleted.
What do you think? Sounds plausible, doesn't it? Or am I just paranoid? :D
Splitgate 2 is the only free-to-play shooter with portals, delivering next-level gunplay, fluid movement, and a constantly evolving experience, featuring a new Battle Royale where you can portal between unique worlds. Welcome to the future of sport.
Splitgate 2 opened its public beta today or yesterday. Unfortunately the game does not run on a desktop PC with a Linux operating system. Others have the same problem.
But what's weird is, people claim it works on the Steam Deck, and even the official blog post from the devs says they support the Steam Deck. There is no word about general Linux desktops.
So do the developers treat the Steam Deck like a console and make their games unplayable on general-purpose Linux desktops? It's weird, because the game is otherwise playable on a general desktop with Windows. Even the previous game, Splitgate 1 (which they shut down), worked on desktop Linux. It makes no sense!
I'm totally disappointed right now, because I was excited for this game. It has some hero abilities (I like that) and even a map creator.
YouTube Video
Alternative link: https://skipvids.com/?v=BA_HMsznNKg (Ad-free and does not use YouTube directly)
A technical explanation of why almost all Nintendo 64 games looked so blurry. Kaze Emanuar is an expert in this field; he does a lot of ROM hacks and mods and creates his own Super Mario 64 games with them. So he is quite knowledgeable.
Note: I recommend watching the video at 1.4x speed, or at the very minimum at 1.25x speed.
YouTube Video
Video description:
---
In this video, we'll talk about NVIDIA's last several months of pressure to talk about DLSS more frequently in reviews, plus MFG 4X pressure from the company. NVIDIA has repeatedly made comments to GN that interviews, technical discussion, and access to engineers unrelated to MFG 4X and DLSS are made possible by talking about MFG 4X and DLSS. NVIDIA has explicitly stated that this type of content is made "possible" by benchmarking MFG 4X in reviews specifically, despite us separately and independently covering it in other videos, and has made repeated attempts to get multiplied framerate numbers into its benchmark charts. We will not play those games. In the time since, NVIDIA has offered certain unqualified media outlets access to drivers which actual qualified reviewers do not have access to, but allegedly only under the premise of publishing "previews" of the RTX 5060 in advance of its launch. Some outlets were given access to drivers specifically to publish what we believe are puff pieces and marketing while reviewers were blocked.
TIMESTAMPS
00:00 - Giving Access, Then Threatening It
04:29 - Quid Pro Quo
08:28 - Social Manipulation
09:44 - It's Never Good Enough for NVIDIA
12:08 - NVIDIA is Vindictive
14:28 - Stevescrimination
17:38 - Not The First Time
19:00 - Gamers Are Entitled
https://browseraudit.com/
I just downloaded the Tor Browser (which is a configured Firefox browser, BTW) using torbrowser-launcher, which automatically downloads and manages the browser. And I thought, for fun's sake, I would run some tests from browseraudit and compare them against my current personal Firefox setup. To my surprise, I got more warnings with Tor Browser v14.5 (based on Mozilla Firefox 128.9.0esr) than with my personal setup of Firefox v137.0.2 (custom configuration and plugins installed). Both are the most up-to-date official versions.
I just found this interesting and wanted to share it with you.
(Screenshot: Tor Browser)
(Screenshot: My Firefox Browser)
Print CRC-32 (binary mode) checksums with Python on Linux. - thingsiplay/crc32sum
https://github.com/thingsiplay/crc32sum
```
usage: crc32sum [-h] [-r] [-i] [-u] [--version] [path ...]

crc32sum *.sfc
2d206bf7 Chrono Trigger (USA).sfc
```
Previously I used a Bash script to filter out the checksum from the 7z output. That always felt a bit hacky, and the output was not very flexible. Plus, the Python script does not rely on any external module or program. Also, the underlying 7z call would automatically search all files in subdirectories recursively when a directory was given as input. Fixing that would have required some additional rework, so I decided it was a better idea to start from scratch in a programming language. So I finally wrote this, to have a bit more control. My previous Bash script can be found here, in case you are curious: https://gist.github.com/thingsiplay/5f07e82ec4138581c6802907c74d4759
BTW, believe it or not, the Bash script running multiple commands starts and executes faster than the Python instance. But the difference is negligible, and the programmable control in Python is much more important to me.
---
What is this program for?
It calculates the CRC hash for each given file, using Python's integrated zlib module. It has a similar use to MD5 or SHA, but is way, way weaker and simpler. It's a quick and easy method to verify the integrity of files, for example after downloading from the web, to check for data corruption on your external drives, or when creating expected checksum files.
It is important to know and understand that CRC-32 is not secure and should never be used cryptographically. Its use is limited to very simple use cases.
Linux does not have a standard program to calculate the CRC. This is a very simple program with output similar to what md5sum offers by default.

Why use CRC at all?

Usually, most of the time, CRC is not required at all. In fact, I favor MD5 or SHA when possible. But sometimes only a CRC is provided (it is often used by the retro emulation gaming scene). Theoretically CRC should also be faster than the other methods, but I have not made a performance comparison (frankly, the difference doesn't matter to me).
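The core of the calculation is just zlib.crc32 applied chunk by chunk; a stripped-down sketch of the idea (not the actual crc32sum code, just an illustration) looks roughly like this:

```python
#!/usr/bin/env python3
# Stripped-down sketch: CRC-32 of each file given on the command line,
# read in binary mode, printed in md5sum-like "checksum filename" style.
import sys
import zlib


def crc32_of_file(path: str) -> str:
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)  # update the running checksum
    return f"{crc & 0xFFFFFFFF:08x}"


for path in sys.argv[1:]:
    print(crc32_of_file(path), path)
```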
crc32sum - Calculate CRC32 for each file (Bash using 7z) - crc32sum
Hi all. This is an update on my script that extracts the CRC32 checksum from the 7z command-line tool. The output should be similar to the md5sum tool's output: the checksum and the file name/path.
The initial version of this script was actually broken. It would not output all files if a directory was included (it counted files incorrectly via the number of arguments). Also, for filenames that contained a space, only the first part up to the space character was output. All of these rookie mistakes are solved. Plus, there is now a progress bar showing which files are being processed at the moment, instead of showing a blank screen until the command is finished. This is useful if there are a lot of files or some big files to process.
Yes, I'm aware there are other ways to accomplish this task. I would be happy to see your solution too. And if you encounter a problem, please report it.
(Note: Beehaw does not like the "less than" character and breaks the post completely. So if the heredoc line `cat <<EOF` below shows up as `cat %%EOF`, replace the `%%` with two less-than characters, or copy the script from the GitHub Gist link below.)
```bash
#!/usr/bin/env bash

if [[ "${#}" -eq 0 ]] || [[ "${1}" == '-h' ]]; then
    self="${0##*/}"
    cat <<EOF
usage: ${self} files...

Calculate CRC32 for each file.

positional arguments:
  file or dir    one or multiple file names or paths, if this is a directory
                 then traverse it recursively to find all files
EOF
    exit 0
fi

7z h -bsp2 -- "${@}" |
    \grep -v -E '^[ \t]+.*/' |
    \sed -n -e '/^-------- ------------- ------------$/,$p' |
    \sed '1d' |
    \grep --before-context "9999999" '^-------- ------------- ------------$' |
    \head -n -1 |
    \awk '$2=""; {print $0}'
```
YouTube Video
Marathon looks like something an AI agent would create: art style, gameplay, and story wise.
This is the next game from the Destiny creator Bungie: a multiplayer extraction shooter. It has nothing to do with the original Marathon game it's based on, which was an old single-player game. Those who got hands-on time with the game describe it as having Destiny-like controls and animation, but in an extraction shooter mode.
As for me, I would probably even check the game out if it were free to play (it's a full-price game, like Concord) and if it were playable on Linux. Bungie is anti-Linux, so it's not for me anyway.
- https://thingsiplay.game.blog/
- https://github.com/thingsiplay
I'm here to stay.