• 0 Posts
  • 6 Comments
Joined 2 years ago
Cake day: July 5th, 2023

  • Can’t say I’m deep in this space, but I think there’s a lot of sentiment toward going leaner with operations and enabling direct donations to Firefox development (which I don’t believe is presently an option). Seemingly, if Mozilla narrowed to their core (Firefox, MDN), the community would show heavy support. I have my doubts it would fully cover the bill in a sustainable way, but I at least think that’s one of the main sentiments.


  • Interesting points, maybe a book I’ll have to give a read. I’ve long thought that information overload on its own leads to a kind of subjective compression, and that we’re seeing the consequences of this, plus late stage capitalism.

    Basically, if we only know about 100 people and 10 events and 20 things, we have much more capacity to form nuanced opinions, like a vector with lots of values. We don’t just have an opinion about the person, our opinion toward them is the sum of opinions about what we know about them and how those relate to us.

    Without enough information, you think in very concrete ways. You don’t build up much nuance, and you have clear, at least self-evident logic for your opinions that you can point at.

    Hit a sweet spot, and you can form nuanced opinions based on varied experiences.

    Hit too much, and now you have to compress the nuances to make room for more coarse comparisons. Now you aren’t looking at the many nuances and merits, you’re abstracting things. Necessary simulacrum.

    I’ve wondered if this is where we’ve seen so much social regression, or at least being public about it. There are so many things to care about, to know, to attend to, that the only way to approach it is to apply a compression, and everyone’s worldview is their compression algorithm. What features does a person classify on?

    I feel like we just aren’t equipped to handle the global information age yet, and we need specific ways of being to handle it. It really is a brand new thing for our species.

    Do we need to see enough of the world to learn the nuances, then transition to tighter community focus? Do we need strong family ties early with lower outside influence, then melting pot? Are there times in our development when social bubbling is more ideal or more harmful than otherwise? I’m really curious.

    Anecdotally, I feel like I benefitted a lot from tight-knit, largely anonymous online communities growing up: learning from groups of people from all over the world, of different ages and beliefs, engaging in shared hobbies and learning about different ways of life. But eventually the neurons aren’t as flexible for breadth, and depth becomes the drive.


  • Lots of immediate hate for AI, but I’m all for local AI if they keep that direction. Small models are getting really impressive, and if they ship smaller, fine-tuned, specific-purpose AI over “general purpose” LLMs, they’d be much more efficient at their jobs. I’ve been rocking local LLMs for a while and they’ve been a great small complement to language processing tasks in my coding.

    Good text-to-speech, page summarization, contextual content blocking, translation, bias/sentiment detection, clickbait detection, article re-titling — I’m sure there are many great use cases. And purely speculation, but many traditional non-LLM techniques that were overlooked because nobody cared about AI features might be able to be included here, and those could be super lightweight and still helpful.
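    To show what I mean by “super lightweight,” here’s a rough sketch of a non-LLM clickbait detector — the phrase list and threshold are made up for illustration, not from any real browser feature:

```python
import re

# Hypothetical lightweight, non-LLM clickbait heuristic: score a headline
# on a few surface features. Phrases and threshold are illustrative only.
CLICKBAIT_PHRASES = (
    "you won't believe", "what happened next", "this one trick",
    "doctors hate", "will shock you",
)

def clickbait_score(title: str) -> int:
    t = title.lower()
    score = sum(phrase in t for phrase in CLICKBAIT_PHRASES)
    score += t.count("!")                        # breathless punctuation
    score += 1 if re.match(r"^\d+\s", t) else 0  # listicle-style "7 ways..."
    return score

def is_clickbait(title: str, threshold: int = 2) -> bool:
    return clickbait_score(title) >= threshold
```

    Nothing fancy, no model weights at all, and it runs in microseconds — exactly the kind of thing that could hide behind an “AI features” banner without costing anything.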

    If it goes fully remote AI, it loses a lot of privacy cred, and positions itself really similarly to where everyone else is. From a financial perspective, bandwagoning on AI in the browser but “we won’t send your data anywhere” seems like a trendy, but potentially helpful and effective way to bring in a demographic interested in it without sacrificing principles.

    But there’s a lot of speculation in this comment. Mozilla’s done a lot for FOSS, and I get they need monetization outside of Google, but hopefully it doesn’t lead things too far astray.


  • I researched creative AI and how AI can help people be creative; people thought it was a ridiculous and pointless topic. I’m biased.

    Firstly, I think it’s important to see the non-chat applications. Goblin Tools is a great example of code we just couldn’t have written before. Purely from an NLP perspective, these tools are outstanding, if imperfect.

    I’m excited to see new paradigms of applications come up when talented new developers are able to locally run LLMs and integrate them into their everyday programming, and to see what they can cook up in a world where that’s normal.

    I’m interested in LLMs not to generate data on the fly, but to pre-generate and validate far more content or data than we’d otherwise be able to produce, for things like games.
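    The shape of that pipeline is simple: generate offline, keep only what passes hard checks. A toy sketch, with a stand-in function where the LLM call would go (names and constraints are all hypothetical):

```python
import random

def fake_llm_generate(rng: random.Random) -> dict:
    # Stand-in for an LLM call that proposes a game item.
    return {
        "name": rng.choice(["Rusty Sword", "", "Ember Staff", "Cracked Shield"]),
        "damage": rng.randint(-5, 20),
    }

def is_valid(item: dict) -> bool:
    # Hard constraints checked offline, before content ships.
    return bool(item["name"]) and 0 <= item["damage"] <= 15

def pregenerate(n: int, seed: int = 0) -> list:
    # Keep sampling until n items pass validation; rejects never ship.
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n:
        item = fake_llm_generate(rng)
        if is_valid(item):
            accepted.append(item)
    return accepted
```

    Since it all happens before release, hallucinations become a yield problem instead of a player-facing bug.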

    From a chat perspective, I like that it can support fleshing out ideas and parsing lots of data in a usable way.

    And finally, I’m excited for how lightweight LLMs could affect user interface design. I could imagine a future where OSs have swappable LLMs, the way they have swappable shells, that allow natural language interfacing with programs.

    I don’t know, it’s just really accessible NLP, and that’s great.