You’d still need to turn it on if it’s in hibernate. Well, you might not need to push the power button; you might have a laptop that can, while off, key off the lid switch. But the laptop’s still off when it’s hibernated.
> It doesn’t work with private DNS servers or forward DNS over VPN.
Like, you want to have it query some particular DNS server?
From `man 5 resolved.conf`:

> DNS=
> A space-separated list of IPv4 and IPv6 addresses to use as system DNS servers. For compatibility reasons, if this setting is not specified, the DNS servers listed in /etc/resolv.conf are used instead, if that file exists and any servers are configured in it.
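For instance, a private resolver could be pinned there like this (the addresses below are placeholders; substitute your own server):

```ini
# /etc/systemd/resolved.conf -- hypothetical example; apply with
#   systemctl restart systemd-resolved
[Resolve]
DNS=192.168.1.53
FallbackDNS=9.9.9.9
```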
If you specify your private server there, it should work. For VPN, I mean, whatever VPN software you’re using will need to plonk it in there. Maybe yours is not aware of systemd-resolved, is modifying `/etc/resolv.conf` after `systemd-resolved` has already started, and it doesn’t watch that file for updates?

In my `/etc/nsswitch.conf`, I have:
hosts: files myhostname mdns4_minimal [NOTFOUND=return] resolve [!UNAVAIL=return] dns
I’m assuming that the “resolve” entry is for `systemd-resolved`.

*kagis*
https://www.procustodibus.com/blog/2022/03/wireguard-dns-config-for-systemd/
> With systemd-resolved, however, instead of using that DNS setting, add the following PostUp command to the [Interface] section of your WireGuard config file:
>
> PostUp = resolvectl dns %i 9.9.9.9#dns.quad9.net 149.112.112.112#dns.quad9.net; resolvectl domain %i ~.
>
> When you start your WireGuard interface up, this command will direct systemd-resolved to use the DNS server at 9.9.9.9 (or at 149.112.112.112, if 9.9.9.9 is not available) to resolve queries for any domain name.
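Putting that together, a minimal WireGuard config using systemd-resolved might look like this sketch (keys, addresses, and endpoint are all placeholders):

```ini
# /etc/wireguard/wg0.conf -- sketch only; every key/address here is made up
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
# wg-quick replaces %i with the interface name (wg0)
PostUp = resolvectl dns %i 9.9.9.9#dns.quad9.net; resolvectl domain %i ~.

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
```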
tal@olio.cafe to linuxmemes@lemmy.world • "there is nothing less than PERPETUAL PERSISTENT SPY-WARE BUILT INTO MY LINUX OS. I am referring to the HIDDEN FILES recently-used.xbel and thumbnails" • English · 18 · 6 days ago

It’s been a long time, but IIRC Windows’s file dialog also remembers your recently-used files for quick access in the file dialog, and I assume that Explorer has a thumbnail cache.
It looks like GTK 3 has a toggle for recently-used files:
https://linux.debian.user.narkive.com/m7SeBwTP/recently-used-xbel
While the guy sounds kinda unhinged, I do think that he has a point — he doesn’t want activity dumping breadcrumbs everywhere, unbeknownst to him. That’s a legit ask. Firefox and Chrome added Incognito and Private Browsing mode because they recorded a bunch of state about what you were doing for History, and that’s awkward if it suddenly gets exposed. There should really be a straightforward way to globally disable this sort of thing, even if logged history can provide for convenient functionality.
Emacs has a lot of functionality, but I don’t think anything I use actually retains state. If Emacs can manage that, so can other stuff. Hmm. Oh, etags will store a cached TAGS file for a source tree.
*thinks*
Historically, bash defaulted to saving ~/.bash_history on disk. Don’t recall if that changed at any point.
There’s ccache, which caches binary objects from gcc compilations persistently.
Firefox can persistently cache data in its disk cache, and stores LocalStorage and cookies.
System logfiles might record some data about the system, though they generally get rotated out.
Most of the time though, I don’t have a lot of recorded persistent state floating around.
Further complicating this, the Threadiverse also has “display names” for communities — something which I think is probably a mistake — and one has to know how to get the actual name for the bang-syntax link. For example, the display name here is “New Communities”, but the actual community name is “newcommunities”.
I’d like standard bang syntax to be able to link to a post and comment as well in a home-instance-agnostic fashion. That doesn’t exist today, and we can’t really get it without Mbin, Lemmy, and PieFed adding support.
> DNS

There’s `systemd-resolved`. I don’t know if you mean that it has some kind of limitation.
Honestly, the bang syntax for home-instance-agnostic community links isn’t very obvious to new users. It’s not a piece of standard Markdown syntax, either, so it’s not like someone can use knowledge from Reddit or elsewhere.
(twitter doesn’t work for me rn)
https://nitter.space/moschino_bunny/status/1457773412957376530
Frankly, this should be implemented with something like a combination of:
https://github.com/QazCetelic/lemmy-know
> Lemmy Know (let me know) is a lightweight CLI application / Docker service that monitors Lemmy for reports on posts and comments and sends notifications. These can be sent to a Discord channel with a webhook or as MQTT messages (schema), which is useful for more complex setups with, e.g., Node-RED.
https://www.home-assistant.io/
> Open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts.
https://www.home-assistant.io/integrations/mqtt/
> MQTT (aka MQ Telemetry Transport) is a machine-to-machine or “Internet of Things” connectivity protocol on top of TCP/IP. It allows extremely lightweight publish/subscribe messaging transport.
https://github.com/DevelopmentalOctopus/ha-buttplug
> Buttplug.io Integration for Home Assistant

> Intiface® Central is an open-source, cross-platform application that acts as a hub for intimate haptics/sensor hardware access
Some collection of hardware devices from:
That’d permit for, say, having message events drive a state machine to control devices or something like that.
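As a sketch of the glue, assuming lemmy-know is configured to publish to a topic like `lemmy-know/reports` (the topic name is invented here), a Home Assistant automation could trigger off the MQTT message:

```yaml
# Hypothetical Home Assistant automation; the MQTT topic is an assumption.
automation:
  - alias: "React to a Lemmy report"
    trigger:
      - platform: mqtt
        topic: "lemmy-know/reports"
    action:
      - service: notify.persistent_notification
        data:
          message: "New report: {{ trigger.payload }}"
```

The `action:` block is where you’d drive whatever device integration you like instead of a notification.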
I haven’t been using instant messaging programs much for some years, but checking https://old.reddit.com/r/xmpp/ I see:
https://www.glukhov.org/post/2025/09/xmpp-jabber-userbase-and-popularity/
This has an estimate of 13–20 million users globally for 2023, but warns that because many servers don’t publish information about their userbase, there’s necessarily uncertainty. According to it, Germany is the country with the largest userbase, followed by Russia, followed by the US.
jabber.org is a major server.
tal@olio.cafe to linuxmemes@lemmy.world • Promised to the Linux god's i would light a candle for them if i'd make it thru the update when the system froze 1385 packets deep into the update • English · 28 · 9 days ago

I’m not familiar with Arch’s updating scheme, but I’d bet that it’s pretty similar to Red Hat’s and Debian’s. If you don’t complete an update, boot it up (even if it’s in a semi-broken state) and just start the update again. Even if the thing dies right in the middle of updating something boot-critical, so that it can’t boot, you can probably just use liveboot media, mount the drives in question, start a chrooted-to-your-regular-root-partition root shell, and restart the update.
Doing that and installing or reinstalling packages is a pretty potent tool to fix a system. It’s not absolutely impossible to hork a system up badly enough to render it still unusable in that situation (I once wiped ld.so from a system, for example, and had to grab another copy and manually put it in place to get dynamically-linked stuff like the package manager working again). But that’ll deal with the great majority of problems you could create.
tal@olio.cafe to Selfhosted@lemmy.world • Using rsync for backups, because it's not shiny and new • English · 1 · 10 days ago

I don’t know if there’s a term for them, but Bacula (and I think AMANDA might fall into this camp, but I haven’t looked at it in ages) are oriented more towards…“institutional” backup. Like, there’s a dedicated backup server, maybe dedicated offline media like tapes, the backup server needs to drive the backup, etc.
There are some things that `rsnapshot`, `rdiff-backup`, `duplicity`, and so forth won’t do.

- At least some of them (`rdiff-backup`, for one) won’t dedup files with different names. If a file is unchanged, it won’t use extra storage, but it won’t identify identical files at different locations. This usually isn’t all that important for a single host, other than maybe if you rename files, but if you’re backing up many different hosts, as in an institutional setting, they likely have files in common. They aren’t intended to back up multiple hosts to a single, shared repository.

- Pull-only operation. I think that it might be possible to run some of the above three in “pull” mode, where the backup server connects and gets the backup, but where the hosts don’t have the ability to write to the backup server. This may be desirable if you’re concerned about a host being compromised, but not the backup server, since it means that an attacker can’t go dick with your backups. Think of those cybercriminals who encrypt data at a company, wipe other copies, and then demand a ransom for an unlock key. The “institutional” backup systems are aimed at having the backup server drive all this, and at having the backup server have access to log into the individual hosts and pull the backups over.
- Dedup for non-identical files. Note that `restic` can do this. While files might not be identical, they might share some common elements, and one might want to take advantage of that in backup storage.

- `rdiff-backup` and `rsnapshot` don’t do encryption (though `duplicity` does). If one intends to use storage not under one’s physical control (e.g. “cloud backup”), this might be a concern.
- “Full” backups. Some backup programs follow a scheme where one periodically does a backup that stores a full copy of the data, and then stores “incremental” backups from the last full backup. `rsnapshot`, `rdiff-backup`, and `duplicity` are all always-incremental, and are aimed at storing their backups on a single destination filesystem. A split between “full” and “incremental” is probably something you want if you’re using, say, tape storage with backups that span multiple tapes, since it controls how many pieces of media you have to dig up to perform a restore.
- Atomic database snapshots. I don’t know how Bacula or AMANDA handle it, if at all, but if you have a DBMS like PostgreSQL or MySQL or the like, it may be constantly receiving writes. This means that you can’t get an atomic snapshot of the database by just copying its files, which is critical if you want to be reliably backing up the storage. I don’t know what the convention is here, but I’d guess either using filesystem-level atomic snapshot support (e.g. `btrfs`) or requiring the backup system to be aware of the DBMS and instructing it to suspend modification while it does the backup. `rsnapshot`, `rdiff-backup`, and `duplicity` aren’t going to do anything like that.

I’d agree that using the more-heavyweight, “institutional” backup programs can make sense for some use cases, like if you’re backing up many workstations or something.
tal@olio.cafe to Selfhosted@lemmy.world • Using rsync for backups, because it's not shiny and new • English · 3 · 10 days ago

> Because every “file” in the snapshot is either a file or a hard link to an identical version of that file in another snapshot. So this can be a problem if you store many snapshots of many files.
I think that you may be thinking of `rsnapshot`, which has that behavior, rather than `rdiff-backup`; both use `rsync`.

But I’m not sure why you’d be concerned about this behavior. Are you worried about inode exhaustion on the destination filesystem?
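If it helps, the hard-link trick rsnapshot uses is easy to demonstrate with GNU `cp -al` (the directory names here are arbitrary):

```shell
# Simulate two rsnapshot-style snapshots that share an unchanged file.
mkdir -p snap0
echo "hello" > snap0/file.txt

# Hard-link every file instead of copying its data.
cp -al snap0 snap1

# Both names point at the same inode, so no extra data is stored:
stat -c '%i' snap0/file.txt snap1/file.txt
```

Both `stat` lines print the same inode number: one file on disk, two directory entries.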
tal@olio.cafe to Selfhosted@lemmy.world • Using rsync for backups, because it's not shiny and new • English · 5 · 11 days ago

> slow

`rsync` is pretty fast, frankly. Once it’s run once, if you have `-a` or `-t` passed, it’ll synchronize mtimes. If the modification time and file size match, by default `rsync` won’t look at a file further, so subsequent runs will be pretty fast. You can’t really beat that for speed unless you have some sort of monitoring system in place (like filesystem-level support for identifying modifications).
tal@olio.cafe to Selfhosted@lemmy.world • Using rsync for backups, because it's not shiny and new • English · 1 · 11 days ago

Most Unix commands will show a short list of the most-helpful flags if you use `--help` or `-h`.
tal@olio.cafe to Selfhosted@lemmy.world • Using rsync for backups, because it's not shiny and new • English · 2 · 11 days ago

`sed` can do a bunch of things, but I overwhelmingly use it for a single operation in a pipeline: the `s//` operation. I think that that’s worth knowing. `sed 's/foo/bar/'` will replace the first text in each line matching the regex “foo” with “bar”.
That’ll already handle a lot of cases, but a few other helpful sub-uses:
`sed 's/foo/bar/g'` will replace all text matching regex “foo” with “bar”, even if there is more than one match per line.
`sed 's/\([0-9a-f][0-9a-f]*\)/0x\1/g'` will take the text inside the backslash-escaped parens and put that matched text back in the replacement text, where one has ‘\1’. In the above example, that’s finding all hexadecimal strings and prefixing them with ‘0x’.
If you want to match a literal “/”, the easiest way to do it is to just use a different separator; if you use something other than a “/” as separator after the “s”, `sed` will expect that later in the expression too. Like this: `sed 's%/%SLASH%g'` will replace all instances of a “/” in the text with “SLASH”.
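The examples above, runnable as a quick sanity check (the input strings are made up):

```shell
# First match only per line:
echo 'foo foo' | sed 's/foo/bar/'      # -> bar foo

# Every match:
echo 'foo foo' | sed 's/foo/bar/g'     # -> bar bar

# Capture group reused in the replacement via \1:
echo 'deadbeef' | sed 's/\([0-9a-f][0-9a-f]*\)/0x\1/g'   # -> 0xdeadbeef

# Alternate separator so literal slashes need no escaping:
echo '/etc/fstab' | sed 's%/%SLASH%g'  # -> SLASHetcSLASHfstab
```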
tal@olio.cafe to Selfhosted@lemmy.world • Using rsync for backups, because it's not shiny and new • English · 25 · 11 days ago

> I would generally argue that rsync is not a backup solution.

Yeah, if you want to use rsync specifically for backups, you’re probably better off using something like `rdiff-backup`, which makes use of rsync to generate backups and store them efficiently, and driving it from something like `backupninja`, which will run the task periodically and notify you if it fails.

- `rsync`: one-way synchronization
- `unison`: bidirectional synchronization
- `git`: synchronization of text files with good interactive merging
- `rdiff-backup`: `rsync`-based backups. I used to use this and moved to `restic`, as the `backupninja` target for `rdiff-backup` has kind of fallen into disrepair.

That doesn’t mean “don’t use `rsync`”. I mean, `rsync`’s a fine tool. It’s just…not really a backup program on its own.
tal@olio.cafe to Today I Learned@lemmy.world • TIL there's a federated Tumblr alternative called WAFRN • English · 3 · 11 days ago

Flash games tended to use vector art. This uses some flat color areas, but I’m pretty sure that that’s hand-drawn raster.
tal@olio.cafe to Selfhosted@lemmy.world • how do I find process that leads to oom? • English · 191 · 13 days ago

OOMs happen because your system is out of memory.
You asked how to know which process is responsible. There is no correct answer to which process is “wrong” in using more memory — all one can say is that processes are in aggregate asking for too much memory. The kernel tries to “blame” a process and will kill it, as you’ve seen, to let your system continue to function, but ultimately, you may know better than it which is acting in a way you don’t want.
It should log something to the kernel log when it OOM kills something.
It may be that you simply don’t have enough memory to do what you want to do. You could take a glance in `top` (sort by memory usage with shift-M). You might be able to get by by adding more paging (swap) space. You can do this with a paging file if it’s problematic to create a paging partition.

EDIT: I don’t know if there’s a way to get a dump of processes that are using memory at exactly the instant of the OOM, but if you want to get an idea of what memory usage looks like at that time, you can certainly do something like leave a `top -o %MEM -b > log.txt` process running to get a snapshot of process memory use every few seconds. `top` will print a timestamp at the top of each entry, and between the timestamped OOM entry in the kernel log and the timestamped dump, you should be able to look at what’s using memory.

There are also various other packages for logging resource usage that provide less information, but also don’t use so much space, if you want to view historical resource usage. sysstat is what I usually use, with the `sar` command to view logged data, though that’s very elderly. Things like that won’t dump a list of all processes, but they will let you know if, over a given period of time, a server is running low on available memory.
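If installing sysstat is overkill, a rough stand-in (the log path and field list here are arbitrary choices) is to periodically append the top memory consumers from `ps`:

```shell
# Append a timestamped snapshot of the ten biggest memory users;
# run this from cron or a systemd timer at whatever interval you like.
log=/tmp/memlog.txt
date >> "$log"
ps -eo pid,%mem,rss,comm --sort=-%mem | head -n 11 >> "$log"
```

After an OOM kill, match the `date` stamps in the log against the kernel log entry to see what was hogging memory just beforehand.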
I suspect that if you mandated human support for unpaid services that the Threadiverse wouldn’t exist.