• NeilBrü · 2 months ago (edited)

        Oof, ok, my apologies.

        I am, admittedly, “GPU rich”; I have ~48GB of VRAM at my disposal on my main workstation, and 24GB on my gaming rig. Thus, I am using Q8 and Q6_L quantized .gguf files.

        Naturally, at those higher-precision quantizations, my experience with the “fidelity” of my LLM models re: hallucinations would be better.
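
        For reference, here's a minimal sketch of what loading a quantized .gguf with full GPU offload typically looks like, assuming llama-cpp-python and a hypothetical model file name:

        ```python
        from llama_cpp import Llama

        # Hypothetical local path; any Q8_0 / Q6-style quantized .gguf works here.
        llm = Llama(
            model_path="models/my-model.Q8_0.gguf",
            n_gpu_layers=-1,  # -1 offloads all layers to the GPU (VRAM permitting)
            n_ctx=8192,       # context window size
        )

        out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
        print(out["choices"][0]["text"])
        ```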

    • @anus@lemmy.world (OP) · 2 months ago

      I actually think that (presently) self-hosted LLMs are much worse for hallucination.