• 0 Posts
  • 160 Comments
Joined 2 years ago
Cake day: June 26th, 2023




  • If it’s a Pi Zero, you’re serving 12 users low latency over wifi? Does it route the actual audio?

    Yes, it’s sufficient. I wouldn’t advise it due to the extra overhead of wireless packet loss, but it’s absolutely technically possible. Don’t overestimate how much bandwidth voice chat really needs: it’s something like 10-50 kB/s per person, and you’re unlikely to ever have more than 2 or 3 people talking at a time.
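
    As a back-of-the-envelope check using the figures above (assumptions: three concurrent speakers at the 50 kB/s upper bound, with the server relaying each stream to the eleven other listeners):

    ```
    # 3 speakers * 50 kB/s each, forwarded to 11 listeners apiece:
    echo "$(( 3 * 50 * 11 )) kB/s"   # 1650 kB/s, roughly 13 Mbit/s upstream
    # still within reach of a Pi Zero W's 802.11n wifi in practice
    ```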


  • Wait until you have to upgrade Zorin to a new release. I still haven’t gotten mine to work. Stick with Mint or Bazzite if you want a Windows alternative.

    I wouldn’t exactly hold up Mint as an example of a smooth upgrade experience…
    Maybe I lack the technical understanding, but it’s absolutely baffling to me why one has to download mintupgrade by hand. It’s a reasonable setup wizard once it’s running, but why on earth is it not part of the Update Manager interface in the first place, downloaded automatically when a new release is available?
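
    For reference, the flow being complained about looks roughly like this (as documented for recent Mint release upgrades, e.g. 21.3 to 22; check the release notes for your version):

    ```
    # The extra step: install the upgrade wizard by hand first
    sudo apt update
    sudo apt install mintupgrade
    # Then launch it and follow the prompts
    sudo mintupgrade
    # Once the upgrade completes, the tool asks you to remove it again
    sudo apt remove mintupgrade
    ```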


  • So, I’ve been having issues with voice chat on Discord and I’m looking for alternatives. In my search I came across Mumble, here. Does anyone have experience with or information about Mumble, or know a lower-latency alternative to Discord? Is it relatively easy to set up? Is it safe? Any advice and help is greatly appreciated.

    Been running a server for my friends for over a decade now. Can recommend. It’s just one apt-get to set up, runs on a Pi Zero for a dozen people, has clients available for pretty much any platform and doesn’t really require any maintenance. Latency will depend on the routing between you and your friends’ ISPs, of course, but the whole purpose of the software itself was to provide a low-latency voice chat server for gaming.
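
    That one apt-get, for the Debian/Raspberry Pi OS case (a sketch; the package and service names are the stock Debian ones):

    ```
    sudo apt-get install mumble-server    # the server daemon (murmur)
    sudo dpkg-reconfigure mumble-server   # set SuperUser password, autostart
    sudo systemctl status mumble-server   # should be listening on port 64738
    ```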

    But: that’s it. You don’t get anything else. It’s a barebones voice chat server. You can set up rooms and have basic text functionality, but you don’t get any fancy user management, no full-fledged chatrooms, no persistence beyond the room setup and only limited backend options. Keep that in mind.
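
    What little backend configuration exists lives in a single ini file. A few representative settings (the keys are the stock mumble-server.ini ones; the values here are illustrative, and the service needs a restart after editing):

    ```
    # /etc/mumble-server.ini
    welcometext="Welcome to our humble server"
    port=64738              # default Mumble port (TCP and UDP)
    serverpassword=secret   # require a password to connect
    users=20                # cap on simultaneous connections
    ```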






  • I do a presentation on the Fediverse for my college students and will soon be giving short workshops to organizations as well. A viable, decentralized alternative to Facebook is IMO the biggest missing piece of the puzzle. We need something that offers some kind of central platform for networking, events and groups.

    Well, if you want decentralised solutions, there’s Mattermost and there’s just a plain old Matrix server. Both are better suited to collaboration projects than Facebook ever was. I’d argue the only reason it ever morphed into that role in the first place was that everyone was on there; it had little to do with features.
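
    If the Matrix route sounds interesting: a minimal sketch of standing up a homeserver with the official Synapse Docker image (example.com is a placeholder for your own domain; see the Synapse docs for TLS and federation specifics):

    ```
    # Generate an initial homeserver.yaml into a named volume
    docker run -it --rm -v synapse-data:/data \
        -e SYNAPSE_SERVER_NAME=example.com -e SYNAPSE_REPORT_STATS=no \
        matrixdotorg/synapse:latest generate
    # Then run the homeserver itself
    docker run -d --name synapse -v synapse-data:/data \
        -p 8008:8008 matrixdotorg/synapse:latest
    ```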



  • Basically what the title says. I know online providers like GPTZero exist, but when dealing with sensitive documents I would prefer to keep it in-house. A lot of people like to talk big about open source models for generating stuff, but the detection side is not discussed as much, I feel.
    I wonder if this kind of local capability could be stitched into a browser plugin. Hell, it doesn’t even need to be a locally hosted service on my home network; a local on-machine app should be fine. But being able to host it as a service to use from other machines would be interesting.
    I’m currently not able to give it a proper search, but the first-glance results are either from people trying to evade these detectors or from people trying to locally host language models.

    In general, it’s a fool’s errand, I’m afraid. What’s the specific context in which you’re trying to apply this?




  • splendoruranium@infosec.pub to Selfhosted@lemmy.world · “Selfhost an LLM” · 2 months ago

    I read about Ollama, but it’s all unclear to me.

    There’s really nothing more to it than the initial instructions tell you. It’s literally just a “curl -fsSL https://ollama.com/install.sh | sh”. Then you’re just an “ollama run qwen3:14b” away from having a chat with the model in your terminal.
    That’s the “chat with it” part done.

    After that you can make it more involved by serving the model via its API, manually adding .gguf quantizations (usually smaller or special-purpose modified bootleg versions of big published models) to your Ollama library with a Modelfile, ditching Ollama altogether for a different environment, or, the big upgrade, giving your chats a shiny frontend in the form of Open-WebUI.
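
    The “serving via API” part is the HTTP server Ollama already bundles. A minimal sketch against its documented REST endpoint (the model name is just whatever you pulled earlier):

    ```
    # Ollama listens on localhost:11434; `ollama serve` starts it
    # manually if it isn't already running as a system service.
    curl http://localhost:11434/api/generate -d '{
      "model": "qwen3:14b",
      "prompt": "Explain what a GGUF quantization is in one paragraph.",
      "stream": false
    }'
    ```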


  • That is a very cool idea! But then how do you counter the fact that money is needed to produce these things, such as art, books etc.? Like, don’t we pay artists directly?

    Digital property is heavily debated; some even believe that carrying copyright over from physical goods to digital copies is not fair.

    So I could dig into digital intellectual property; I will see what I can find.

    Excellent thinking! You can of course transition directly into discussions about things like basic income and the requirement that society cater to the basic needs of all its members before anything like economic growth can even be allowed, but it might be more useful to ask the following questions:

    1. If we removed from existence all commercially produced art - every painting, book, piece of music, movie, game and installation made at a profit - wouldn’t the remaining art still be more than could ever be consumed in hundreds of human lifetimes? And if so, why do we even need commercial art?
    2. Or more nuanced: How much commercial art does the world need?

    Because once you answer that question, you know roughly how much public funding to allocate to art production. Depending on whom you ask, the answer might even be zero, or close to it.