Vittelius@feddit.org to Fuck AI@lemmy.world · English · 3 months ago
consequences of the current AI induced rise in hardware prices (feddit.org) · image · 21 comments
ClockworkOtter@lemmy.world · 3 months ago
What services can LLM-AI providers offer that would otherwise require high RAM usage at home? I feel like people who do home video editing, for example, aren’t going to be asking ChatGPT to splice their footage.
cron@feddit.org · 3 months ago
Running an LLM on your own hardware.
ClockworkOtter@lemmy.world · 3 months ago
I’m way out of the loop on that. Is that more than just a hobby?
bthest@lemmy.world · edited · 3 months ago
Slow as fuck unless you have a monster rig. Doing a basic job is like rendering a 4K 120 FPS video. Text comes out like an 1890s ticker-tape telegram.
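For context on the "running an LLM on your own hardware" comments above, here is a minimal sketch of what that looks like in practice. It assumes the llama-cpp-python package is installed and a quantized GGUF model has already been downloaded; the model path below is hypothetical.

```python
# Minimal sketch of local LLM inference with llama-cpp-python.
# Assumes a quantized GGUF model file exists at the (hypothetical) path below.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local model file
    n_ctx=2048,  # context window size; larger windows need more RAM
)

out = llm("Q: Why would someone run an LLM at home? A:", max_tokens=64)
print(out["choices"][0]["text"])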