

But the attacking line is also much lower. So? 🤷
This is misleading. While Red Hat contributes significantly to Linux and some open source projects, they did not create the Linux kernel, GCC, or glibc - those are GNU or community projects. You can absolutely use Linux without Red Hat software, especially with distros like Alpine, Gentoo, or Guix. Red Hat is influential, but not essential.
Thanks for the awesome news! I really hope more distros follow that move - more independence means more real freedom.
When exploring the libre distributions recommended by GNU.org or broader FOSS communities, I find myself questioning whether being “blob-free” is truly enough. Some suggested distributions - such as Guix - host their code on GitHub, which is owned by Microsoft.
Similarly, systemd is maintained by Red Hat, a company closely tied to IBM and known to collaborate with Microsoft. It’s used in distributions like Parabola and Trisquel. This raises concerns about centralization and corporate influence, which makes me wonder whether these choices truly align with the spirit of software freedom.
That said, maybe I’m misunderstanding what “libre” fully entails.
Thank you for mentioning SourceHut as another option - I didn’t know about it. In my opinion, it doesn’t matter whether Void Linux or other distributions choose Forgejo or another platform, as long as they move away from Microsoft-controlled GitHub. Doing so would reduce the risk of corporate influence and give them greater independence, even if I fully understand that it would also mean more work.
Not only GNU projects, but also entire distributions. Void Linux, for example, is still on GitHub! I really hope they will move to Forgejo, Codeberg, or Gitea.
You could not be more wrong. It is very much alive and still kicking.
Defaults are generally for people who do not want to understand in depth what they are doing (no offence). An example from another sphere: in R (CRAN packages, used to write statistical models), some functions have defaults that either choose a particular algorithm or an optimisation value. I have met almost nobody - among students, PhDs, and even those higher up the ladder - who took the time to understand what is happening under the hood. Instead, these people just took the defaults, it worked (the result was significant), done. However, had they chosen another algorithm, things might have turned out differently, which would open up a box of questions concerning modelling adequacy and understanding of the data. It is the same with defaults in Linux.
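To make that concrete, here is a minimal sketch in Rust (the `fit` routine and its names are invented for illustration, not from any real statistics library) of how a silently applied default starting point can steer an optimiser into the worse of two optima:

```rust
// f has two minima, near x = +1 (local) and x = -1 (global).
fn f(x: f64) -> f64 {
    (x * x - 1.0).powi(2) + 0.1 * x
}

// Analytic gradient of f.
fn grad(x: f64) -> f64 {
    4.0 * x * (x * x - 1.0) + 0.1
}

// Hypothetical fit routine: when no starting point is given,
// it silently falls back to a default, just like many
// statistical functions do.
fn fit(start: Option<f64>) -> f64 {
    let mut x = start.unwrap_or(1.0); // the unexamined default
    for _ in 0..5000 {
        x -= 0.05 * grad(x); // plain gradient descent
    }
    x
}

fn main() {
    let with_default = fit(None);        // converges near +1
    let with_explicit = fit(Some(-1.0)); // converges near -1
    // The default quietly produced the worse optimum:
    assert!(f(with_explicit) < f(with_default));
    println!("default: {:.3}, explicit: {:.3}", with_default, with_explicit);
}
```

Both runs “work” and return a plausible number; only by questioning the default do you notice that a different choice changes the answer.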
IMHO the entire voting thing is useless. If you don’t like a post, don’t read it. If the post is aggressive and truly harmful (racist, fascist), inform the admin to remove it. If the post is interesting, read it and mark it as done. So why voting? On Reddit and even here on Lemmy, I have seen critical comments - which I myself sometimes do not like, but did not downvote - that were heavily downvoted by others, though they merely expressed a critical view. What does this mean? That a user has to play according to the rules of the masses? That he/she cannot express different views? If you don’t like a comment or think it is weird, ask why. Engage the person in a discussion (which might actually be promoted by the lack of a voting system). Perhaps you can convince him/her, or perhaps the other user can show you a different perspective, which may turn out to be a bit extreme, but not that wrong either. Right?
Can you give 1-2 links, please? I would like to see these guys and what they are saying.
How come Vim is proprietary? JetBrains offers community versions which are AFAIK open source too, so you can look at the source code; you do not need to pay or agree to an EULA.
Ask if you can join.
Attention Ubuntu users, in case you haven’t heard about it: there is currently a problem with the update, which is why it has been halted: Release Manager Simon Quigley on Reddit.
It has less to do with people than with jurisdiction. The US administration can demand this or that on US soil, and the maintainer, owner, or programmer has little chance to do otherwise if he/she does not want to end up in prison. Hence my opinion: choose a distro with as little US influence as possible.
No. SUSE has ties to the US. Many on the list are not entirely free of the US, because several servers, maintainers, or their main distro (Arch, Ubuntu, Slackware, Gentoo, Red Hat) are located in the US or have strong ties there. The few on the list that stand out a bit are Void Linux (community based and mainly in Europe), CRUX (community, mainly Europe, but this distro is a tough one), and Alpine (small group, mostly in Europe). With Kali I am not sure. If you want to stay outside the US reach, have safety, but can sacrifice new hardware support, look also at OpenBSD.
AFAIK openSUSE depends on the company SUSE, which - though based in Germany - has partners and hence ties in the US.
IMHO it is at first much more important that the distribution runs well, is safe, and gets the required support, so that it can establish itself among the many distros and remain an entirely European distro for many, many years! I do not care in the beginning whether it is called Donald Duck OS, mc2 Linux, or whatever.
I do not have a Slimbook, but they look really nice on their webpage. However, I miss the option to choose among hardware components, as with Tuxedo Computers, which is also located in Europe.
This, to me, seems like the standardization vs. optimization argument. So much of the tech world could be optimized like crazy, but the more complex it gets, the harder it is to communicate with others and keep things consistent. This complexity actually hinders production overall. Standardization, even if it’s not the most optimized, allows us to create vastly more complex and reliable systems because we can ensure we are all on the same page. Even if that standardization isn’t the best way to do it.
Standardization is the reason why systems collapse or are more prone to attacks. Just think of a monoculture vs. an organic mixed culture. Also, the impact on standardized systems is much bigger, because a single flaw affects the entire system. But on the other hand, yes, diversity requires more time and people. When reading comments from Rust people, I always have the impression that, in the best case, everything gets replaced with Rust code. If this is indeed their intention, I disagree.
I mean, if you want to talk about absolute control over your code, why don’t you write in assembly? Are all programming languages not virtually assembly with training wheels?
Perhaps the difficulty of learning, applying, and making changes? Also the lack of interest, buzz, and coolness among people? Assembly programmers are considered the old nerds, aka the hated boomers, while Rust people are sometimes the hipsters, the new generation. I do not like this attitude of exclusion. BTW, if you want to try out an OS written in assembly, look at Kolibri OS.
Writing code that is not memory safe means you are substantially more likely to make mistakes that lead both to user annoyance and to straight-up security vulnerabilities.
Depends on your skills.
Having applications written in a memory-safe language, especially when worked on by large swaths of people, is absolutely the best route.
I am sorry, but I am unable to connect “safe language”, “large swaths of people”, and “best route” in my brain. I just see “tilt, tilt, tilt”, because it does not make sense to me - there are no connections between the three points.
It provides a secure standard way to write memory safe code. This will reduce security vulnerabilities, decrease program crashes, and allow for more efficient developers.
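As a minimal sketch of what that means in practice (generic Rust, not anyone’s production code): out-of-bounds access becomes a value you must handle, and use-after-move is rejected at compile time:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // The checked accessor turns a would-be wild read into an Option:
    assert_eq!(v.get(1), Some(&20));
    assert_eq!(v.get(10), None); // no undefined behavior, just None

    // Plain indexing out of bounds would panic deterministically instead
    // of silently reading arbitrary memory like an unchecked C array access.

    // Ownership: once `v` is moved into `w`, the compiler rejects any
    // further use of `v`, ruling out use-after-free at compile time.
    let w = v;
    assert_eq!(w.len(), 3);
    // println!("{:?}", v); // does not compile: `v` was moved to `w`

    println!("ok");
}
```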
The “secure” I put a question mark on (aka time will show), and are you serious about efficient developers? If you mean producing a larger program faster, yes, I agree. Memory safer? Very likely (although you can write safe programs in C as well). But more efficient in the sense of more competent? I would not say so.
Changing a bike tire is something for a single person, maybe two at most. Writing code is often a team effort. And the more people that are involved, the more likely mistakes are going to happen.
Does not change my point: either you know the ins and outs, or you are a slave of others - in the case of Rust, a slave of the compiler.
People absolutely can still learn the complexities, and still choose to use Rust because honestly, it’s the smart thing to do.
I haven’t said anything against that, but the smart thing to do is a matter of personal choice, not something dictated by a loud community of followers.
And it doesn’t need to be rust. Any memory safe language would accomplish the same goal.
This is the point I would underline. It is not only Rust; there are many languages out there worthy of regard and time, even for low-level and systems programming.
I would say that Arch is not the best distro to learn the ins and outs of Linux. Arch is comparable to Void in that both are rolling-release distributions and require comfort with the command line.
Gentoo goes a step further by allowing you to tweak CPU-specific and software compile-time options before building packages from source. Then you have PLD Linux, whose installation process demands a strong understanding of the system and its internals.
A step further down is CRUX, which leaves you with the bare essentials - essentially just the kernel. You need to manage repositories yourself to a significant extent.
Finally, we arrive at Linux From Scratch (LFS), which is somewhat similar to CRUX, but with an even more hands-on approach. With LFS, you must manually install virtually everything, including the toolchain, libraries, and basic utilities.
So, from Arch to LFS, there’s still a huge gap in terms of how deeply you engage with the system.
Finally, what does it really mean to “learn Linux”? You can learn Linux with any distro, but when you are using a distro, you are mostly just learning that particular distro.