cheese_greater@lemmy.world to Ask Lemmy@lemmy.world · edited · 27 days ago
What's a good local and free LLM model for Windows?
occultist8128@infosec.public · 27 days ago
What are the minimum requirements for running it?
Toes♀@ani.social · 27 days ago
Lots of RAM and a good CPU (it benefits from more cores), if you're comfortable with it running on the slow side. There are other versions of that model optimized for lower-VRAM setups too, but for better performance you want 8 GB of VRAM at minimum; a rough way to ballpark what a model needs is sketched below.
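As a rough rule of thumb (my estimate, not the commenter's figures): a quantized GGUF needs about parameters × effective bits-per-weight ÷ 8 bytes of memory, plus room for the KV cache. A minimal Python sketch, assuming Q4_K_S averages around 4.5 effective bits per weight:

```python
# Ballpark memory footprint of a quantized model (rule of thumb, not exact).
def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * bits_per_weight / 8  # billions of params -> gigabytes

# Assumption: Q4_K_S averages ~4.5 effective bits per weight.
print(f"12B @ Q4_K_S ~ {model_size_gb(12, 4.5):.1f} GB")  # ~6.8 GB, plus KV cache
```

Whatever doesn't fit in 8 GB of VRAM spills over to system RAM, which is why it still runs but runs slowly.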
sobanto@feddit.org · edited · 27 days ago
Do you have a recommendation for an Nvidia RTX 3070 Ti (8 GB), Ryzen 5 5600X + 16 GB DDR4? Does it even make sense to try? Last time I tried, the results were pretty underwhelming.
Toes♀@ani.social · 27 days ago
Try it with this model, using the Q4_K_S version:
https://huggingface.co/bartowski/mlabonne_gemma-3-12b-it-abliterated-GGUF
You'll probably need to play with the context window size until you get an acceptable level of performance (likely 4096). Ideally you'd have more RAM, but I'd expect this smaller model to work. Koboldcpp will try to use both your GPU and CPU to run the model; a scripted alternative is sketched below.
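If you'd rather script it than use Koboldcpp, here's a minimal sketch using the llama-cpp-python bindings instead (my substitution, not what the commenter recommended; the GGUF filename is a placeholder for whichever Q4_K_S file you download from the repo above):

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# This is an alternative runner to Koboldcpp; the filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-12b-it-abliterated-Q4_K_S.gguf",  # hypothetical path
    n_ctx=4096,       # context window suggested above; shrink it if you run out of memory
    n_gpu_layers=20,  # layers to offload to the 8 GB GPU; tune up or down for your VRAM
)

out = llm("Explain what a context window is, in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

The n_gpu_layers split is the same idea as Koboldcpp sharing the model between GPU and CPU: whatever doesn't fit in VRAM stays on the CPU side.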