

Because grocery stores don’t make that data accessible to third-party developers. If they did, someone would build exactly what you’re suggesting, and the stores would risk you shopping elsewhere.
Give NixOS a shot. It’s got a learning curve that may be difficult if you’ve never read code, but it’s my preferred immutable setup.
It even has more packages than Arch.
Here’s the video that got me onto it:
Salient demonstration, but if image proxying were to come to Lemmy I’d hope it was made optional, as it could overburden smaller instances, especially one-person instances (like mine). We also need a simple integrated way of configuring object storage.
I wrote a few scripts to automate this entire process for me:
If you’re able and willing to self-host, I’ve developed a pretty great system that automates my entire process. The app I’m using on mobile is also available on iOS
I welcome any alternatives to the current situation, but unfortunately that’s where we are right now.
The only solution would be a massive effort that requires decades of engineering hours and a few million dollars.
Firefox is actually a bit faster and lighter than Chrome these days. Worth checking it out, or one of its forks, over Chrome.
Lesser of two evils
People should be attacking your idea, not their perception of you based on your choice in browser.
My objection to Brave, Vivaldi, and every other browser that is just Chrome with a different coat of paint is that they all signal an acceptance of Google’s monopoly over the web standards ecosystem.
Mozilla is a shit organization run by a shit CEO, but they’re the only alternative we have to the megalith that is the advertising company known as Google. It really shouldn’t be a hard argument to understand that putting an advertising company at the head of the web standards process is a really bad idea if you care about anything other than Google’s revenue streams, i.e. a free and open web.
Chromium only exists as a way for Google to keep antitrust regulators from coming after them like they did to Microsoft when IE had a monopoly. It’s source-available, not open source; they don’t accept commits from non-Googlers. The moment they feel safe closing down the Chromium repos without losing too much money to fines or blowback, they absolutely will.
We’re literally watching this happen right now with Android, another formerly open source project from Google that is slowly having all of its open source components clawed back so that they can maintain their control over the ecosystem and protect the revenue stream that is their data collection and app store.
When Google inevitably decides to pull the plug on Chromium the collective of forked browser developers is not going to be able to keep up with the massive engineering effort required to keep a modern browser going. Especially when a corporation like Google can and will push forward complex and difficult to implement standards expressly for the purpose of making those forks obsolete. They have the manpower, capital, and control over massive web properties to effectively push out anyone they don’t want.
All it takes is one change to YouTube that hinders alternative browsers, and that will be the death of that open source ecosystem. They’ve pulled this exact move before, hindering Firefox’s performance on YouTube by pushing through their shadow DOM implementation.
All of this has happened before, and all of this will happen again. Trusting an advertising company with control of the open web is the nerd equivalent of “leopards ate my face.”
I surely deserve death for using a browser you don’t like.
I’m not sure how you managed to come to that conclusion. You claimed Firefox is a poor choice, I’m demonstrating why I believe your alternative choice is worse. Nevermind the fact that use of Chromium is effectively an acceptance of Google’s monopoly over the web standards, which is the point we’re all arguing here. If you can’t handle criticism you should reconsider making such hyperbolic remarks.
Nothing about this is recent; those who pay attention to the standards process have been screaming about the Google problem for ages. It’s just that now, between interest rates being what they are and their monopoly on the browser market, they’re cashing in on their investment.
It’s Brave, as evidenced by their history. The browser that peddles crypto ads, has a transphobe CEO, and has been accused of selling copyrighted data.
For sites that rely on XHR via JavaScript, which, let’s be honest, is pretty much all of them, this would not work due to CORS and CSP restrictions.
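To sketch why CORS blocks this kind of client-side scraping: before page JavaScript can read a cross-origin response, the browser checks the `Access-Control-Allow-Origin` header on that response. A simplified model of that check (ignoring credentials, preflight, and the other CORS headers; the function and origins below are illustrative, not a real browser API):

```python
# Simplified model of the CORS read check a browser applies to a
# cross-origin XHR/fetch response. Real CORS also involves preflight
# requests, credentials, and other Access-Control-* headers.

def browser_allows_read(page_origin: str, response_headers: dict) -> bool:
    """Return True if CORS headers permit the page to read the response."""
    allow = response_headers.get("Access-Control-Allow-Origin")
    # No header at all: the read is blocked. This is the default for
    # sites that never intended third-party script access.
    if allow is None:
        return False
    # Otherwise the header must be a wildcard or match the page's origin.
    return allow == "*" or allow == page_origin

# A hypothetical price-comparison page trying to fetch a store site
# that sends no CORS headers gets nothing back:
print(browser_allows_read("https://mygrocerytool.example", {}))  # False
print(browser_allows_read(
    "https://mygrocerytool.example",
    {"Access-Control-Allow-Origin": "*"},
))  # True
```

Since the store controls its own response headers, there is no client-side workaround short of routing every request through your own proxy server.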
My understanding is that the copyright applies to reproductions of the work, which this is not. If I provide a summary of a copyrighted summary of a copyrighted work, am I in violation of either copyright because I created a new derivative summary?
Quoting this comment from the HN thread:
On information and belief, the reason ChatGPT can accurately summarize a certain copyrighted book is because that book was copied by OpenAI and ingested by the underlying OpenAI Language Model (either GPT-3.5 or GPT-4) as part of its training data.
While it strikes me as perfectly plausible that the Books2 dataset contains Silverman’s book, this quote from the complaint seems obviously false.
First, even if the model never saw a single word of the book’s text during training, it could still learn to summarize it from reading other summaries that are publicly available, such as the book’s Wikipedia page.
Second, it’s not even clear to me that a model which only saw the text of a book during training, but not any descriptions or summaries of it, would be particularly good at producing a summary.
We can test this by asking for a summary of a book that is available through Project Gutenberg (which the complaint asserts is Books1 and therefore part of ChatGPT’s training data) but for which there is little discussion online. If the ability to summarize comes from having the book itself in the training data, the model should be able to summarize the rare book just as well as Silverman’s book.
I chose “The Ruby of Kishmoor” at random. It was added to PG in 2003. ChatGPT with GPT-3.5 hallucinates a summary that doesn’t even identify the correct main characters. The GPT-4 model refuses to even try, saying it doesn’t know anything about the story and it isn’t part of its training data.
If ChatGPT’s ability to summarize Silverman’s book comes from the book itself being part of the training data, why can it not do the same for other books?
As the commenter points out, I could recreate this result using a smaller offline model and an excerpt from the Wikipedia page for the book.
Same, did you know the project is still around with active contributors? I have no idea how many active users there actually are but I was surprised to see the codebase is still alive.
Exactly, though I’d like to get a PR in to not show that on the admin screen, or at the very least to make the list collapsed by default. I think I’ll work on that today.
Those are bans where the user has been banned from their home instance. It actually doesn’t make a lot of sense that they show up in our admin panels since a user banned from their home instance won’t be able to authenticate and access remote instances with that account.
Go ahead and try scraping an arbitrary list of sites without an API and let me know how that goes. It would be a constant maintenance headache, especially for anything other than the larger chains with fairly standardized sites.