• pishadoot@sh.itjust.works · 13 days ago

    I agree with you to a point, but you should read the full plaintiff’s court filing: https://storage.courtlistener.com/recap/gov.uscourts.cand.461878/gov.uscourts.cand.461878.1.0.pdf

    It’s crazy to see how a bot like this can throw an insane amount of gas onto the fire of someone’s delusions. You really should look at some of it so you can see the severity of the danger.

    The risk is real. Yes, it's just a piece of mindless software, but the problem is that it wasn't designed with any guardrails to flag conversations like this, shut them down, or redirect the user toward help - and controls like those have REPEATEDLY been iterated out of the product for the sake of promoting "engagement." OpenAI doesn't want people to stop using their bots just because the bot gives an answer someone doesn't want to hear.

    It's 100% possible to bake in guardrails, because all these bots already have them for tons of stuff; the court doc points to copyrighted material as an example: if a user requests anything leaning towards copyrighted material, the chat shuts down. There are plenty of things that will cause the bot to respond and say it can't continue a conversation about _________, but not for this? So OpenAI will protect Disney's interests but won't build basic protective measures for people with mental health issues?
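    And to be clear, I'm not claiming OpenAI's real moderation pipeline looks anything like this - a production system would use trained classifiers, not a keyword list - but just to show that a basic "flag, shut down, redirect" check is trivial to bolt on, here's a rough Python sketch. The phrase list, helpline text, and function names are placeholders I made up:

    ```python
    # Hypothetical sketch of a pre-response guardrail - NOT OpenAI's actual pipeline.
    # A real system would use a trained safety classifier, not a keyword list.

    CRISIS_PHRASES = [
        "kill myself", "end my life", "no reason to live",  # placeholder examples
    ]

    REDIRECT_MESSAGE = (
        "I can't continue this conversation. If you're in crisis, please reach out "
        "to a crisis line such as 988 (US) or talk to someone you trust."
    )

    def guarded_reply(user_message: str, generate_reply) -> str:
        """Flag risky messages and redirect instead of generating a reply."""
        lowered = user_message.lower()
        if any(phrase in lowered for phrase in CRISIS_PHRASES):
            # Shut the conversation down and redirect the user - the same kind of
            # hard stop they already apply to copyrighted material.
            return REDIRECT_MESSAGE
        return generate_reply(user_message)

    # Example usage with a stand-in reply function:
    if __name__ == "__main__":
        echo_bot = lambda msg: f"echo: {msg}"
        print(guarded_reply("tell me about the weather", echo_bot))
        print(guarded_reply("I want to end my life", echo_bot))
    ```

    Even something that crude catches the obvious cases and hands the user a phone number instead of more "engagement."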

    They have Scrooge McDuck vaults of gold coins to roll around in and can't be assed to spend a bit of cash to bake some safety into this stuff?

    I'm with you that it's not going to be possible to prevent every mentally ill person from latching onto a chatbot - or anything else, for that matter - but these things are especially dangerous for people with mental illness, so the designers need to at least TRY. Throwing something like this out there without even making the attempt is negligence.