What can an individual do to fight against tech billionaires?
Posted by Allah@lemmy.world to Ask Lemmy@lemmy.world · 20 hours ago · 77 comments
dasenboy@lemm.ee · 14 hours ago
I use ChatGPT a lot. What is the best non-billionaire-funded LLM? I really need to change to one that doesn’t worsen the world…
knatschus@discuss.tchncs.de · 14 hours ago
While DeepSeek is billionaire-funded, it should still be better if run locally. I don’t think FOSS LLMs are at that level yet.
dasenboy@lemm.ee · 6 hours ago
Thanks, another person mentioned it. I’m trying it now; hopefully it suits my needs.
MajorSauce@sh.itjust.works · 13 hours ago
Try hosting DeepSeek R1 locally; for me the results are similar to ChatGPT, without needing to send any info over the internet. LM Studio is a good start.
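For example, once a model is loaded, LM Studio can serve it through a local OpenAI-compatible API (by default at http://localhost:1234/v1). A minimal sketch, assuming that server is running; the model identifier below is hypothetical and should match whatever name LM Studio shows for your loaded model:

```python
# Minimal sketch: query a model served by LM Studio's local server
# (OpenAI-compatible API, default address http://localhost:1234/v1).
from openai import OpenAI

# LM Studio ignores the API key, but the client requires a non-empty string.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",  # hypothetical; use your loaded model's name
    messages=[{"role": "user", "content": "Summarize why local LLMs help privacy."}],
)
print(response.choices[0].message.content)
```

Nothing here leaves your machine: the request goes to localhost, so the privacy point above holds.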
Don’t you need a fast GPU to do so?
MajorSauce@sh.itjust.works · 3 hours ago (edited)
You would benefit from some GPU offloading; it considerably speeds up the answers. But at a bare minimum, you only need enough RAM to load the model.
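In LM Studio this is just a "GPU offload" slider, but the same idea is exposed directly by llama-cpp-python if you want to script it. A minimal sketch, assuming a GGUF model file on disk; the path and layer count are hypothetical, so tune n_gpu_layers to your VRAM (0 = CPU-only, -1 = offload everything):

```python
# Minimal sketch of partial GPU offloading with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="models/deepseek-r1-distill-qwen-7b.gguf",  # hypothetical path
    n_gpu_layers=20,  # number of transformer layers offloaded to the GPU
    n_ctx=4096,       # context window; larger values need more RAM
)

out = llm("Explain GPU offloading in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```

Layers that don't fit in VRAM simply stay in system RAM on the CPU, which is why the model runs either way and the GPU only changes the speed.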
Thanks, I’m trying it now!