Hi, I don’t have much experience with running local AIs, but I have tried using ChatGPT for psychoanalysis purposes, and the smarter (but limited for free users) model is amazing. I don’t like giving such personal information to OpenAI though, so I’d like to set something similar up locally, if possible. I am running Fedora Linux and have had the best results with KoboldCpp, as it was by far the easiest to set up. My hardware is a Ryzen 7600, 32 GB of RAM, and a 7800 XT (16 GB VRAM).

The two things I mostly want from this setup are the smartest model possible, as I’ve tried some and the responses just don’t feel as insightful or thought-provoking as ChatGPT’s, and the way it handles memory, which I really like. I don’t need “real time” conversation speed if it means I can get the smarter responses I am looking for.

What models/setups would you recommend? Generally, I’ve been going by “newer + takes up more space = better”, but I’m kind of disappointed with the results. Then again, the largest models I’ve tried have only been around 16 GB, so is my setup capable of running bigger ones? I’ve been hesitant to try, as I don’t have fast internet and downloading a model usually means keeping my PC running overnight.

PS, I am planning to use this mostly as a way to grow/reflect, not to deal with trauma or loneliness. If you are struggling and are considering AI for help, never forget that it cannot replace connections with real human beings.

  • DavidGarcia@feddit.nl · 3 days ago

    Q4 will give you something like 98% of the quality of Q8 at roughly twice the speed, plus much longer context lengths in the same VRAM.
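
    To put rough numbers on that, here’s a back-of-the-envelope sketch of model size per quant. The bits-per-weight values are approximate averages for GGUF-style quants and the parameter counts are just examples, so treat the output as ballpark only:

    ```python
    # Rough GGUF size estimate: size_bytes ~= parameter_count * bits_per_weight / 8.
    # The bits-per-weight numbers are approximate averages, not exact spec values.
    BITS_PER_WEIGHT = {
        "Q8_0": 8.5,    # near-lossless, biggest files
        "Q4_K_M": 4.8,  # common 4-bit "sweet spot" quant
    }

    def approx_size_gb(params_billion: float, quant: str) -> float:
        """Approximate on-disk / in-memory size of a quantized model in GB."""
        return params_billion * 1e9 * BITS_PER_WEIGHT[quant] / 8 / 1e9

    for params in (8, 14, 24, 32):  # example parameter counts, in billions
        q4 = approx_size_gb(params, "Q4_K_M")
        q8 = approx_size_gb(params, "Q8_0")
        print(f"{params}B params: ~{q4:.1f} GB at Q4_K_M vs ~{q8:.1f} GB at Q8_0")
    ```

    Concretely for your 16 GB card: a ~24B model at Q4_K_M lands around 14 GB, so it can mostly or fully fit with a modest context, while the same model at Q8 wouldn’t come close.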

    If you don’t need the full context length, you can try loading the model with a shorter context window; the KV cache then takes less VRAM, so you can load more layers on the GPU, which makes it faster.
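
    As a minimal sketch of why that helps: the KV cache grows linearly with context length, so halving the context roughly halves the cache. The layer count, head count, and head dimension below are assumptions for illustration, not any specific model’s real numbers:

    ```python
    # KV cache ~= 2 (K and V) * layers * kv_heads * head_dim * bytes_per_element * tokens
    def kv_cache_gb(context_tokens: int,
                    layers: int = 48,        # assumed layer count
                    kv_heads: int = 8,       # assumed KV heads (grouped-query attention)
                    head_dim: int = 128,     # assumed head dimension
                    bytes_per_elem: int = 2  # fp16 cache
                    ) -> float:
        per_token_bytes = 2 * layers * kv_heads * head_dim * bytes_per_elem
        return per_token_bytes * context_tokens / 1e9

    for ctx in (4096, 8192, 16384, 32768):
        print(f"{ctx:>6} tokens -> ~{kv_cache_gb(ctx):.2f} GB of KV cache")
    ```

    Every gigabyte you don’t spend on cache is a gigabyte you can spend on offloading more layers to the GPU.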

    And you can usually configure your inference engine to keep the model loaded at all times, so you’re not losing so much time when you first start it up.

    Ollama attempts to dynamically pick the right context length for your request, but in my experience that just results in really inconsistent and long times to first token.
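
    If you do end up on Ollama, you can sidestep both problems by pinning the context length and the keep-alive yourself instead of letting it guess. A minimal sketch against its HTTP API; the model tag is only an example, keep_alive: -1 asks it to keep the model loaded indefinitely, and options.num_ctx fixes the context window up front:

    ```python
    import json
    import urllib.request

    # Assumes an Ollama server on the default port; "llama3.1:8b" is only an example tag.
    payload = {
        "model": "llama3.1:8b",
        "prompt": "Give me three reflective journaling questions for today.",
        "stream": False,
        "keep_alive": -1,             # keep the model resident in memory indefinitely
        "options": {"num_ctx": 8192}, # fix the context window instead of per-request guessing
    }

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```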

    The nice thing about vLLM is that your model is always loaded, so you don’t have to worry about that. But then again, it needs much more VRAM.
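
    For completeness, here’s roughly what that looks like with vLLM’s offline Python API. The model name is a placeholder, capping max_model_len and gpu_memory_utilization is how you keep its up-front KV cache reservation in check, and on a 7800 XT you’d need a ROCm build of vLLM:

    ```python
    from vllm import LLM, SamplingParams

    # Placeholder model; swap in whatever you actually downloaded.
    llm = LLM(
        model="meta-llama/Llama-3.1-8B-Instruct",
        max_model_len=8192,           # cap the context so the preallocated KV cache stays small
        gpu_memory_utilization=0.90,  # fraction of VRAM vLLM reserves up front
    )

    params = SamplingParams(temperature=0.7, max_tokens=256)
    outputs = llm.generate(["What should I ask myself to reflect on this week?"], params)
    print(outputs[0].outputs[0].text)
    ```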