Which areas of Linux would benefit most from further standardization?
The diversity of Linux distributions is one of its strengths, but it can also be challenging for app and game development. Where do we need more standards? For example, package management, graphics APIs, or other aspects of the ecosystem? Would such increased standards encourage broader adoption of the Linux ecosystem by developers?
Yeah, I like how a lot of apps have moved to using ~/.config, but Mozilla just moved out of there and now has a ~/.mozilla folder outside of it... wtf. It's insanely sad.
I've actually moved my entire set of "user home" folders out of there just because it's so ugly and unorganized. I now use /home/user/userfolders/ and keep all my stuff like documents, videos, etc. in there.
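If you want the rest of the desktop to follow you there, xdg-user-dirs (preinstalled on most desktop distros) can be pointed at the new locations. A minimal sketch, assuming the layout above:

# Point the standard "well-known" directories at the relocated folders.
# This just rewrites ~/.config/user-dirs.dirs; paths must be absolute.
xdg-user-dirs-update --set DOCUMENTS "$HOME/userfolders/Documents"
xdg-user-dirs-update --set VIDEOS "$HOME/userfolders/Videos"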
It's pretty bad. Steam, for example, has both
~/.steam and
~/.local/share/Steam
for some reason. I'm just happy I moved to an impermanent setup for my PC, so I don't need to worry that something I temporarily install is going to clutter my home directory with garbage.
This would be convenient indeed, but I've learned to be indifferent about it as long as the manual or readme provides helpful and succinct information.
I'm not sure whether this should be a "standard", but we need a Linux distribution where the user never has to touch the command line. Such a distro would be genuinely useful to new users who don't want to learn terminal commands.
And also we need a good app store where users can download and install software in a reasonably safe and easy way.
I really don't understand this. I put a fairly popular Linux distro on my son's computer and never needed to touch the command line. I update it by command line only because I think it's easier.
Sure, you may run into driver scenarios or things like that from time to time, but using supported hardware would never present that issue. And Windows has just as many random "gotchas".
Mint is pretty good, but I found the update center GUI app would always fail to update things like Firefox with some mirror error (regardless of whether you told it to use that mirror or not). It happened on my old desktop (now my dad's main computer), my LG laptop, and a used HP EliteDesk G4. Using "sudo apt update" + "sudo apt upgrade" + Y (to confirm) on the command line was 10x easier and just worked. I do feel better/safer now that they use Linux for internet browsing instead of Windows, too.
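For anyone curious, the command-line path being described is just the two commands, and the -y flag answers the confirmation prompt for you:

sudo apt update && sudo apt upgrade -y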
Each monitor should have its own framebuffer device, rather than only one app controlling all monitors at a time and every app having to implement its own multi-monitor support. I know fbdev is an inefficient, un-accelerated wrapper over DRI, but it's so easy to use!
Want to draw something on a particular monitor? Write to its framebuffer file. Want to run multiple apps on multiple screens without needing your DE to launch everything? Give each app write access to a single fbdev. Want multi-seat support without needing multiple GPUs? Same thing.
Right now, each GPU only gets 1 fbdev and it has the resolution of the smallest monitor plugged into that GPU. Its contents are then mirrored to every monitor, even though they all have their own framebuffers on a hardware level.
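For anyone who hasn't played with it, the classic fbdev smoke test really is just writing bytes to a file (run from a text VT, with write access to the device via root or the video group; fb0 is just the usual first device):

cat /sys/class/graphics/fb0/virtual_size    # current framebuffer resolution
cat /dev/urandom > /dev/fb0                 # fills the visible framebuffer with random noise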
Yes and no. It would solve some problems, but because it has no (non-hacky) graphics acceleration, most DEs wouldn't use it anyway. The biggest benefit would be from not having to use a DE in some circumstances where it's currently required.
While every area could benefit from standardization in terms of stability and ease of development, the whole system and each individual area would suffer in terms of creativity. There needs to be a balance.
However, if I had to choose one thing, I'd say package management. At the moment we have deb, rpm, pacman, flatpak, snap (the latter probably shouldn't count, since the server side is proprietary) and more from some niche distros. This makes it very difficult for small developers to offer their work to all/most users.
Otherwise, I think it is a blessing having so many DEs, APIs, etc.
Domain authentication and group policy analogs. Honestly, I think it's the major reason Linux isn't used as a workstation OS, even though it's inherently better suited for it than Windows in most office/gov environments. But if IT can't centrally manage it the way they can with Windows, it's not going to gain traction.
Linux in server farms is a different beast to IT. They don't have to deal with users on that side, just admins.
Ubuntu Server supports Windows Active Directory. I haven't used it for anything but authentication (and authentication works flawlessly) and some basic directory/share permissions but theoretically it should support group policy too.
It'd be cool if there was a mainstream FOSS alternative though (there might be, I've done literally 0 research), but this works okay-ish in the meantime.
But for management of the actual production servers at work I use a combination of ManageEngine (super great and reasonably priced) and Microsoft's Entra (doesn't work well, don't do it)
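For what it's worth, the usual AD-join path on Ubuntu goes through realmd/sssd; roughly like this, where the domain and user are just placeholders:

sudo apt install realmd sssd sssd-tools adcli
sudo realm discover example.com
sudo realm join --user=Administrator example.com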
An immutable distro would be ideal for this kind of thing. ChromeOS (one example of an immutable distro) can be centrally managed, but the caveat with ChromeOS in particular is that its management has to go through Google, via their enterprise Google Workspace suite.
I don't think anyone was saying it's impossible, just that it needs standardization. I imagine Windows is more appealing to companies when it's easier to find admins for it than for some specific Linux setup that only a few people know how to manage.
i’ve never understood why there’s not a good option for using one of the plethora of server management tools with prebuilt helpers for workstations to mimic group policy
like the tools we have on linux to handle this are far, far more powerful
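Agreed, the building blocks already exist. As a rough illustration, a single Ansible ad-hoc command can push a package to every machine in a (hypothetical) "workstations" inventory group:

ansible workstations -m ansible.builtin.apt -a "name=firefox state=latest" --become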
I've never understood putting arbitrary limits on a company laptop. I've always looked for ways to get around them. Once I ended up using a VM,
with no limits at all...
One of my coworkers (older guy) tends to click on things without thinking. He's been through multiple cyber security training courses, and has even been written up for opening multiple obvious phishing emails.
People like that are why company-owned laptops are locked down with group policy and other security measures.
Generally speaking, Linux needs better binary compatibility.
Currently, if you compile something, it's usually dynamically linked against dozens of libraries that are present on your system, but if you give the executable to someone else with a different distro, they may not have those libraries or their version may be too old or incompatible.
Statically linking programs is often impossible and generally discouraged, making software distribution a nightmare. Flatpak and similar systems made things easier, but it's such a crap solution: it basically involves having an entire separate OS installed in parallel, with its own problems, like shipping a version of Mesa that's too old for a new GPU. Applications should be able to be packaged with everything they need; there's no reason for dynamic linking to be this central on Linux these days.
I'm not in favor of proprietary software, but better binary compatibility is a necessity for Linux to succeed, and I'm saying this as someone who's been using Linux for over a decade and who refuses to install any proprietary software. Sometimes I find myself using apps and games in Wine even when a native version is available just to avoid the hassle of having to find and probably compile libobsoletecrap-5.so
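You can see the problem directly on any dynamically linked binary: the glibc symbol versions it requires have to exist on the target system or it won't even load (./myapp here is just a stand-in for whatever you built):

ldd ./myapp                                                  # the shared libraries it expects to find
objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -uV      # the glibc symbol versions it needs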
Nix can deal with this kind of problem. It does take disk space if you're going to have radically different deps for different apps, but you can 100% install Firefox from 4 years ago and new Firefox on the same system, and they each get the deps they need.
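Roughly, by pinning an old nixpkgs snapshot alongside your current one; a sketch, where the branch name is just an example of an old release:

nix-shell -p firefox --run firefox    # Firefox from your current channel
nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/nixos-21.05.tar.gz -p firefox --run firefox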
I don't think static linking is that difficult. But for sure it's discouraged, because I can't easily replace a statically-linked library, in case of vulnerabilities, for example.
You can always bundle the dynamic libs in your package and put the whole thing under /opt, if you don't play well with others.
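Right, and $ORIGIN-relative rpaths make that reasonably clean. A sketch with hypothetical names, for an app that ends up under /opt/myapp with its libraries in /opt/myapp/lib:

# The binary looks for its bundled .so files in ../lib relative to wherever it's installed
gcc -o myapp main.c -L./lib -lfoo -Wl,-rpath,'$ORIGIN/../lib'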
You'll never get perfect binary compatibility because different distros use different versions of libraries. Consider Debian and Arch which are at the opposite ends of the scale.
And yet, ancient Windows binaries will still (mostly) run and macOS allows you to compile for older system version compatibility level to some extent (something glibc alone desperately needs!). This is definitely a solvable problem.
Linus keeps saying “you never break userspace” wrt the kernel, but userspace breaks userspace all the time and all people say is that there’s no other way.
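For reference, the macOS knob referred to above is clang's deployment target; there's no comparable single switch for glibc, which is exactly the complaint:

clang -mmacosx-version-min=11.0 -o myapp main.c    # target macOS 11.0 as the minimum supported version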
What you describe as a weakness is actually a strength of an open-source system. If you compile a binary for a certain system, say Debian 10, and give it to someone who is also running Debian 10, it is going to work flawlessly, and without overhead, because the target system can provide the dependencies on its own.
Not being able to run a binary built for a different system, say Alpine, is no worse than not being able to run a Windows 10 binary on Windows 98. Alpine to Debian is on the same level as 10 to 98: they are practically different systems, just flying the same flag.
The thing is, everyone would agree that it's a strength, if the Debian-specific format was provided in addition to a format which runs on all Linux distros. When I'm not on Debian, I just don't get anything out of that...
A configuration GUI standard.
Usually there's a config file that I'm supposed to edit as root, typically in a terminal.
There should be a general GUI tool that reads those files and obeys a second file with the rules: say, if you enable this feature you can't have that other one on at the same time, or this number has to be between 1 and 5, no more and no less.
Basic validation. And run the program with --validation so it can decide for itself whether the config looks good.
One that Linux should've had 30 years ago is a standard, fully-featured dynamic library system. Its shared libraries are more akin to static libraries, just linked at runtime by ld.so instead of ld. That means that executables are tied to particular versions of shared libraries, and all of them must be present for the executable to load, leading to the dependency hell that package managers were developed, in part, to address. The dynamically-loaded libraries that exist are generally non-standard plug-in systems.
A proper dynamic library system (like in Darwin) would allow libraries to declare what API level they're backwards-compatible with, so new versions don't necessarily break old executables. (It would ensure ABI compatibility, of course.) It would also allow processes to start running even if libraries declared by the program as optional weren't present, allowing programs to drop certain features gracefully, so we wouldn't need different executable versions of the same programs with different library support compiled in. If it were standard, compilers could more easily provide integrated language support for the system, too.
Dependency hell was one of the main obstacles to packaging Linux applications for years, until Flatpak, Snap, etc. came along to brute-force away the issue by just piling everything the application needs into a giant blob.
That already happened though. Tens of thousands of games on Steam can be played by hitting the install and then the play button. Only a few "competitive multiplayer" holdouts with rootkits and an irrational hatred of Linux don't work.
Yep. Two solid years of steady gaming on various Linux distributions. No issues aside from no more pubg, no more valorant. Oh wait, that’s not an issue at all. Fuck their rootkits.
These are eventually going to be blocked on Windows. Microsoft are making changes to what's allowed to run in the kernel after the Crowdstrike issue last year.
Have you tried recently? We've been pretty much at parity for years now. Almost every game that doesn't run is because the devs are choosing to make it that way.
Not offering a solution here exactly, but as a software engineer and architect, this is not a Linux-only problem. It exists across all software. Very few applications are fully self-contained these days, because it's too complex to build everything from scratch every time. A lot of software depends on the way some poorly documented feature happened to work at the time, which was actually a bug, eventually got fixed, and then broke the applications that depended on it, and so on. Also, any time improvements are made to a library, they can potentially break your application, and most developers don't get time to test against every newer version.
The real solution would be better CI/CD build systems that automatically test the applications with newer versions of libraries and report dependencies better. But so many applications are short on automated unit and integration tests because it's tedious and so many companies and younger developers consider it a waste of time/money. So it would only work in well maintained and managed open source types of applications really. But who has time for all that?
Anyway, it's something I've been thinking about a lot at my current job as an architect for a major corporation. I've had to do a lot of side work to get things even part of the way there. And I don't have to deal with multiple OSes and architectures. But I think it's an underserved area of software development and distribution that is just not "fun" enough to get much attention. I'd love to see it at all levels of software.
Flatpak with more improvements to size and sandboxing could be accepted as the standard packaging format in a few years. I think sandboxing is a very important factor as Linux distros become more popular.
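The sandboxing knobs are already inspectable and adjustable per app, which helps; for example (the app ID here is just an illustration):

flatpak info --show-permissions org.mozilla.firefox                 # what the sandbox currently allows
flatpak override --user --nofilesystem=home org.mozilla.firefox     # take away home-directory access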
Stability and standardisation within the kernel for kernel modules. There are plenty of commercial products that use proprietary kernel modules that basically only work on a very specific kernel version, preventing upgrades.
Or they could just open source and inline their garbage kernel modules…
I’m struggling with this now. There’s an out of tree module I want upstreamed, but the author (understandably) doesn’t want to put in the work to upstream, so I did. The upstream folks are reluctant to take it because I didn’t actually write it.
At this point, package management is the main differentiating factor between distro (families). Personally, I'm vehemently opposed to erasing those differences.
The "just use flatpak!" crowd is kind of correct when we're talking solely about Linux newcomers, but if you are at all comfortable with light troubleshooting if/when something breaks, each package manager has something unique und useful to offer. Pacman and the AUR a a good example, but personally, you can wring nixpkgs Fron my cold dead hands.
And so you will never get people to agree on one "standard" way of packaging, because doing your own thing is kind of the spirit of open source software.
But even more importantly, this should not matter to developers. It's not really their job to package the software, for reasons including that it's just not reasonable to expect them to cater to all package managers. Let distro maintainers take care of that.