For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could be categorized as illegal child sexual abuse material (CSAM) in the US.
Seeing people mention dril here makes me think of someone saying “and what does FYAD have to say about this?”
FYAD is more coherent and trustworthy than LLM output.