• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 5th, 2024

  • “Every accusation is a confession” is not a new trend. Right-wing hysteria has always been the Regressivist response to their own fascist fantasies. They scream about the perceived, feared, and fabricated sins of the Left because they’re terrified of being exposed as the evil people they are. Always have been. Always will be.

    They obsess over “family values” because their values are empty shells animated by dogma, delusion, and psychopathy.

    They obsess over purity culture because they are sexually monstrous (or, on the opposite end of the spectrum, aren’t but have been convinced they are).

    They obsess over race and minority success because they fear the inevitable: becoming the minority.

    All covered in veneers of affluence meant to echo Aristocracy and stuffed with petty but insatiable greed for control over every perceived threat. That fear and insecurity manifest as an obsession with power and admiration for those who wield power selfishly without punishment.

    They willed their own nightmare into existence because their worst nightmare and their wettest dream spawn from the same putrid muck. The only difference those broken self-dehumanizing narcissists see between Heaven and Hell is who cracks the whip.


  • Even if so… If this is as effective and safe as it seems, then it will get leaked to the public or reverse engineered and then made public. The original paper’s abstract says “this active exopolysaccharide is ubiquitous among the genus Spongiibacter”, which means it’s accessible.

    The repression of such a boon could not last long. History has proven the human spirit is nothing if not irrepressible. There are plenty of people capable and motivated enough to run what little information we already have all the way to a consistent home manufacturing solution. Its publication and distribution are another game entirely, but I’d bet on the public there as well.

    Take a look at the Four Thieves Vinegar Collective for some tangible encouragement. Knowledge is power. Together we can be powerful enough to create what we need to survive. Government buy-in encouraged but optional.


  • It isn’t just one thing. The big money wants to present a unified front to the public, as if LLMs were a single commodity anyone can use. In reality they’re a collection of complex tools that few can use “correctly” and whose utility is highly specialized for niches those few find valuable.

    So you’re correct in a way. I’m sure model decoherence isn’t helping much either and isn’t as visible in those niche applications as it is for the general public.


  • I’d like to tack on that this point can be used to highlight why this is so. It’s a deep concept that can be explained simply and produces a lasting positive impact.

    Everyone has fantasies. Sometimes we want them to be realized. Most often: we don’t. Many people carry internal shame because of their fantasies and some of those people have difficulty with intimacy because of it.

    Good sex with other people requires our investment in their comfort and pleasure. This can be emotionally complex and fulfilling to navigate. Masturbation is free of those complications but we often make up the difference via fantasy. This is normal and there’s no need to confuse one space for the other. Masturbation and sex may fulfill similar basic needs on the surface but, in practice, they are very different exercises. It’s normal for one’s preferences to be different for each and for those preferences to shift over time.

    Don’t worry about “normal”. Focus on having a healthy, honest, and emotionally aware sex life instead.



  • Sure! That’s an SMTP Relay. A lot of folks jumped on the poopoo wagon. It’s common wisdom in IT that you don’t do your own email. There are good reasons for that, and you should know why that sentiment exists. However, if you’re interested in running your own email: try it! Just don’t put all of your eggs in one basket. Keep your third party service until you’re quite sure you want to move it all in-house (after due diligence is satisfied and you’ve successfully completed at least a few months of testing and SMTP reputation warming).
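
    To make that testing phase concrete: below is a minimal sketch (Python, standard-library smtplib) of the kind of test send you’d fire through a third party relay while warming your reputation. The relay hostname, port, and credentials are hypothetical placeholders, not a recommendation for any particular provider.

        # Minimal test send through an authenticated third-party SMTP relay.
        # relay.example.com and the credentials are hypothetical placeholders.
        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "me@mydomain.example"
        msg["To"] = "me@mydomain.example"  # send to yourself while testing
        msg["Subject"] = "Relay warm-up test"
        msg.set_content("Test message sent via the relay during reputation warming.")

        with smtplib.SMTP("relay.example.com", 587) as smtp:
            smtp.starttls()  # upgrade the connection to TLS before authenticating
            smtp.login("relay-user", "relay-password")
            smtp.send_message(msg)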

    Email isn’t complex. It’s tough to get right at scale, a pain in the ass if it breaks, and not running afoul of spam filtering can be a challenge. It rarely makes sense for even a small business to roll their own email solution. For an individual approaching this investigatively it can make sense, so long as (a) you’re interested in learning about it, (b) the benefits outweigh the risks, and (c) the result is worth the ongoing investment (time and labor to set up, secure, update, maintain, etc.).
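
    A good chunk of the spam filtering battle is just DNS hygiene. As a rough illustration, here’s a small Python sketch (using the third party dnspython package) that looks up a domain’s SPF and DMARC TXT records; the domain is a placeholder, and this only shows whether the records exist, not whether they’re sane.

        # Look up the SPF and DMARC TXT records for a (placeholder) domain.
        # Requires dnspython: pip install dnspython
        import dns.resolver

        domain = "mydomain.example"

        for name in (domain, f"_dmarc.{domain}"):
            try:
                for record in dns.resolver.resolve(name, "TXT"):
                    print(name, record.to_text())
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                print(name, "has no TXT record published")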

    What’ll get you in trouble regardless is being dependent on that in-house email but not making your solution robust enough to always fill its role. Say you host at home and your house burns down. How inconvenient is it that your self-hosted services burned with it? Can you recover quickly enough, while dealing with tragedy, that the loss of common utility doesn’t make navigating your new reality much more difficult?

    That’s why it rarely makes sense for businesses. Email has become an essential gateway to other tooling and processes. It facilitates an incredible amount of our professional interactions. How many of your bills and bank statements and other important communication are delivered primarily by email? An unreliable email service is intolerable.

    If you’re going to do it, make sure you’re doing it right, respecting your future self’s reliance on what present-you builds, and taking it slow while you learn (and document!) how all the pieces fit together. If you can check all of those boxes with a smile then good luck and godspeed says I.


  • > The features would break if they were built in.

    You can’t know that and I can’t imagine it would be true. If the plugins many folks find essential were incorporated into GNOME itself then they’d be updated where necessary as a matter of course in developing a new release.

    > GNOME has clear philosophy and they work for themselves, not for you so they decide what features they care to invest time and what features they don’t care about.

    You’re not wrong! This is an arrogant and common take produced in poor taste though. A holdover from the elitism that continues to plague so many projects. Design philosophy leads UX decision making, and the proper first goal for any good and functional design is user accessibility. This is not limited to accommodations we deem worthy of our attention.

    Good artists set ego aside to better serve their art. Engineers must set pet peeves aside to better serve their projects. If what they find irksome gets in the way of their ability to build functionally better bridges, homes, and software, then it isn’t reality which has failed to live up to the Engineer’s standards. This is where GNOME, and many other projects, fall short. Defending the position by standing stalwart on the technical correctness of a volunteer’s lack of obligation to the people whose needs they ostensibly labor for doesn’t make it right. It exposes the masturbatory nature of the facade.

    Engineers have every right to bake in options catering to their pet peeves (even making them the defaults). That’s not the issue. When those opinions disallow addressing the accessibility needs of those who like and use what they’ve built, there is no justification other than naked pride. This is foolish.

    > Having a standardised method for plugins is in my opinion good enough, nobody forces you to use extensions. And if you don’t want extensions to break, then wait till the extensions are ready prior updating GNOME.

    I agree! Having a standardized method for plugins is good. However, the argument that follows misses the point. GNOME lucked into a good pole position as one of the default GNU/Linux DEs and has enjoyed the benefit of that exposure. Continuing to ignore obvious failures in method elsewhere while enshrining chosen paradigms of tool use as sacrosanct alienates users for whom those paradigms are neither resonant nor useful.

    No one will force Engineers to use accessibility features they don’t need. Not needing them doesn’t justify refusing to build them. Not building them when able is an abdication of social responsibility. If an engineer does not believe they have any social responsibility then they shouldn’t participate in projects whose published design philosophy includes language such as:

    > People are at the heart of GNOME design. Wherever possible, we seek to be as inclusive as possible. This means accommodating different physical abilities, cultures, and device form factors. Our software requires little specialist knowledge and technical ability.

    Their walk isn’t matching their talk in a few areas and it is right and good to take them to task for it.

    Post statement: This is coming from someone who drives Linux daily, mostly from the console, and prefers GNOME to KDE. All of the above is meant without vitriol or ire and sent in the spirit of progress and solidarity.




  • There’s some good advice in the comments already and I think you’re on the right track. I’d like to add a few suggestions and outline how I think about the problem.

    Ask if the vendor has installation and administrator guides, whitepapers, training material, etc. If yes: ask that they send it to you. You may also be able to find these on the vendor’s website, customer portal, or a public knowledgebase / PDF repo.

    I would want to know three things.

    1. How do users authenticate through the application?
    2. What are all of the ways users may access the application (local only, remote desktop, LAN only, full server/client model)?
    3. Does the vendor have any prescribed solutions for defining who has access to the application, at what privilege level, with access to what features?

    i.e. Which parts of the access, authentication, and authorization pipeline do application admins or system admins have control over, and how can we exercise that control?

    Based on some context I assume that the app is reading from Active Directory using RADIUS or LDAP for user auth and that people are physically logging into the machine.

    If this is the only method of authentication, then I would gate the application with a second account for each employee who requires access for business reasons defined in their job description (or as close as you can get to that level of justification - some orgs never get there). You can then control who has access to the machine via group policy. Once logged in, the user can launch the application with their second account (which would have the required admin access) via “Run as…” or whatever other method you’d prefer. No local admins logging in directly, and yet an application which users can launch as admin. Goal achieved.
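
    If that LDAP assumption holds, checking whether one of those second accounts sits in the gating group is a small query. Here is a sketch using the ldap3 Python library; the server, base DN, service account, user, and group names are all hypothetical.

        # Check whether a user's second account is in the group gating the app.
        # Server, base DN, accounts, and group name are hypothetical placeholders.
        from ldap3 import Server, Connection, ALL, NTLM

        server = Server("ldaps://dc01.example.internal", get_info=ALL)
        conn = Connection(
            server,
            user="EXAMPLE\\svc_ldap_read",  # hypothetical read-only service account
            password="change-me",
            authentication=NTLM,
            auto_bind=True,
        )

        conn.search(
            search_base="DC=example,DC=internal",
            search_filter=(
                "(&(objectClass=user)(sAMAccountName=jdoe-admin)"
                "(memberOf=CN=App-Admins,OU=Groups,DC=example,DC=internal))"
            ),
            attributes=["sAMAccountName"],
        )
        print("access allowed" if conn.entries else "access denied")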

    This paradigm lets us attempt to balance security concerns against user pain. The technically literate and daringly curious will either already know or soon discover that they can leverage this privilege to install software and make some changes to the system. The additional friction, logging, and 1:1 nature of the account structure make abusing this privilege less attractive and more easily auditable if someone does choose the fool’s path.

    I can imagine more complex setups within these constraints but they require more work for the same or worse result.

    Ideally you run the app with a service account and user permissions are defined via Security Groups whose level of access is tied to application features instead of system privs. There are other reasonable schemes. This one is bog standard and a decent default sans other pressures.
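
    To illustrate what “tied to application features instead of system privs” might look like, here’s a toy Python mapping from Security Groups to feature-level permissions. Group and feature names are invented; a real deployment would pull this from the application’s own role configuration.

        # Toy illustration: resolve a user's application features from their groups.
        # Group and feature names are hypothetical.
        GROUP_FEATURES = {
            "App-Viewers": {"view_cameras"},
            "App-Operators": {"view_cameras", "export_footage"},
            "App-Admins": {"view_cameras", "export_footage", "manage_users"},
        }

        def features_for(user_groups):
            """Union of the features granted by every group the user belongs to."""
            allowed = set()
            for group in user_groups:
                allowed |= GROUP_FEATURES.get(group, set())
            return allowed

        print(features_for(["App-Operators"]))  # {'view_cameras', 'export_footage'}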

    If other methods of auth are available (local, social, cloud, etc.) then you’ll have more decent options. I would define the security objectives for application access, define the user access objectives from the Organization’s perspective, and then plot each solution against those two axes (napkin graphs - nothing serious). Whichever of the top three is the least administratively burdensome is then selected as my first choice for implementation, with the other two as alternatives.

    An aside: unless there is only one reasonable choice, most folks find one option insufficient, two options difficult to decide between, and four options one too many. Whenever possible, if another party’s buy-in is desired, present either three options or three variations on one option. This succeeds even when the differences are superficial, especially when the subject is technical, and doubly so if the project lead is ignorant of the particulars. People like participating.

    I’d then propose these options to my team/direct report/client, decide on a path forward together, and plan the rest from there. There’s more to consider (again dependent on org maturity) but this is enough to get the project oriented and off the ground.

    Regarding FOSS alternatives: you’re likely locked in with the vendor’s proprietary software for monitoring the cameras. There are exceptions, but most commercial security system companies don’t consider interoperability when designing their service offerings. It might be worth investigating, but I’d be surprised if you find any third party solution for monitoring the vendor’s cameras that doesn’t require either a forklift replacement of hardware, flashing all of the existing hardware, or getting hacky with the gear/software.

    I hope this helps! <3



  • I haven’t experienced what you’re describing. Previous experience suggests exposure is the next step for you. If a cooking class isn’t feasible right now then start with watching some videos online (best if they’re home cooks - you want to watch common cooking of foods you like to eat).

    You’re not trying to memorize anything or learn hard skills during this time. You’re only trying to become more familiar with people working in a kitchen so it doesn’t feel as alien and maybe not quite as scary.

    Do that regularly for a while. If it’s too much for you: dial it back. You do want to push your boundaries but only when you’re feeling ok about it. Small wins will turn into more small wins and eventually you might be interested in trying to cook something.

    If that happens, and I suspect it will, know that it is OK to start cautiously and take your time learning how to use the oven and stove top. Try turning a burner on with no pan or pot on top. Let it get hot. Turn it off. Let it cool down. Repeat that across a few days if the first one helps you.

    Once you’re comfortable you should do that practice again and add water to a pan until it’s half full. Once the burner is hot: place your pan of water on top of the stove burner. Let the water come to a boil. Remove the pan from the stove top. Let the pan and water cool down. Note how much water is missing (some of it will have steamed away while boiling). Add that much water back to the pan and practice this again.

    You can build your experiences, step by step, with safe extensions and new footholds, until you’re feeling confident about cooking something with the boiling water. You’re going to boil an egg!

    Complete your practice again but instead of taking the water off right after it boils: leave it on the burner for 6 minutes. Then remove it and let it cool. Success? Do that again using a pot instead of a pan. Pot half full of water. Grab a serving spoon or similar item. Once the water comes to a boil:

    1. Lower the burner temperature to half / medium. The water should be moving and steamy but the bubbles should be very gentle or cease. Dropping the egg into actively boiling water may cause the egg to crack prematurely.
    2. Use the serving spoon to gently place the egg in the center of the boiling water.
    3. Wait six minutes.
    4. Remove the pot of water from the burner.
    5. Turn the burner off.
    6. Use the serving spoon to lift the egg out of the hot water.
    7. Run the egg under cold water (this keeps it from overcooking and makes peeling easier).
    8. Enjoy your egg.

    You can absolutely boil any kind of pasta, lots of vegetables, and almost all starchy foods. Boiling is very safe because the water regulates the temperature for us. So long as there is water in the pot, the pot can’t meaningfully exceed 100 degrees Celsius (the boiling point of water, ~212F). It is very difficult to burn anything or start a fire while boiling water.

    Best of luck my friend.


  • That’s not true for all sites. If the page is static then it’ll have no clue. If it’s dynamic and running a client-side script to report this info back, and if that information is collected, then I can see how that might be a useful supplement for fingerprinting if the server owner is so inclined. At that point, though, I’m wondering why a security-conscious user is raw dogging the internet and allowing scripts to run in their browser without consent (NoScript saves browsers).

    Even then it’s unclear when/how altering the page to render it differently is commonly communicated back to the server, how much identifying information that talk-back is capable of conveying, and how we might mitigate those collections (wholesale abstinence and/or script control aside). What are the specific mechanisms of action we’re concerned about? This isn’t a faux challenge for the sake of hollow rhetoric. I’m ignorant, find the dialogue interesting, and am asking for help being less dumb. :)
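
    To make my own question concrete, the shape I imagine on the server side is something like the hypothetical talk-back collector sketched below (Python with Flask): a client-side script POSTs whatever it observed about the rendered page (computed colors, injected stylesheets, etc.) and the server folds that into a fingerprint supplement. Everything here is invented for illustration; it isn’t taken from any real site or extension.

        # Hypothetical server-side "talk-back" collector, for illustration only.
        # A client-side script would POST observations about the rendered page here.
        import hashlib
        from flask import Flask, request, jsonify

        app = Flask(__name__)

        @app.route("/collect", methods=["POST"])
        def collect():
            observed = request.get_json(force=True) or {}
            # e.g. {"background": "rgb(24,24,24)", "stylesheets_injected": 2}
            material = "|".join(f"{k}={observed[k]}" for k in sorted(observed))
            supplement = hashlib.sha256(material.encode()).hexdigest()[:16]
            return jsonify({"fingerprint_supplement": supplement})

        if __name__ == "__main__":
            app.run()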

    I found some brief and useful discussion in this Privacy Guides thread. Seems like the concern is valid but minimal for all but the most strict/defensive postures.

    Trying to validate this myself for Dark Reader, without breaking out Wireshark and monitoring some big tech site while I toggle color modes (which I might do later if I think of it and find the time), I see that Dark Reader is open source, an Open Collective member, and seems to engender little hand-wringing. The only public gripe I can find is this misguided Orion Browser feedback thread.

    Thanks for the interesting diversion!



  • This is admittedly a bit pedantic, but it’s not that the risk doesn’t exist (there may be quite a lot to gain from having your info). It’s that the risk is quite low and the benefit is worth the favorable gamble. Not dissimilar to discussing deeply personal health details with medical professionals. Help begins with trust.

    There’s an implicit trust (and often an explicit and enforceable legal agreement in professional contexts; trust, but verify) between sys admins and troubleshooters. Good admins want quiet, happy systems and good devs want to squash bugs. If the dev also dons a black hat occasionally, they’d be idiotic to shit where they eat. Not many idiots are part of teams that build things lots of people use.

    edit: ope replied to the wrong comment