• 1 Post
  • 3 Comments
Joined 28 days ago
Cake day: January 3rd, 2025


  • Thanks for this tip. I don’t have a lot of VRAM, just 64 GB of regular RAM, but I don’t mind waiting for output :)

    But anyway, none of the non-Llama models were as good at using RAG in plug-and-play mode. I probably should have spent more time on the system prompt and Jinja template, as well as on RAG curation, to squeeze out all the juice, but I wanted something quick and easy to set up, and for those needs Llama 3.2 8B Instruct was the best. I used the default setup and the same system prompt for all models.

    Also, the new Qwen reasoning model was good, and it was faster in my setup, but it was too “independent”, I guess: it tended to ignore instructions from the system prompt and other settings, while Llama was more “obedient”.


  • This is probably true; I don’t have a lot of experience with RAG from the dev side, I was just a user.

    From my attempts with small structured data (under 1,000 words), all Llama-family models were good at “consuming” it without any additional preparation, just plug and play. If you want to feed your AI the whole of Wikipedia, you will most likely need to curate the data first to get reliable results, yes. But for casual usage, making sure the AI won’t forget or ignore some rules and stays aware of the current context, it was enough.

    I was running Llama 3.2 8B Instruct at Q4 and Q8, and I believe this is the family of models that perchance uses for text generation. I was satisfied with the results. They probably weren’t ideal, but they were noticeably better with just a default RAG setup and a plain .txt file containing a markdown-like structured list; .json worked well too. If it were up to me, I would add it as an optional feature and leave it to users to evaluate the results.

    The chat text generators at perchance have a “reminder note” feature, which is basically text that goes right before the AI output. It could have been useful, but the AIs tend to quote directly from it. There are also /mem and /lore, but the UX of using them as a sort of RAG, especially a live one (where you constantly update it based on the AI’s output), is not great, and it is not rare for the AI to just ignore it and make something up.

    The Qwen and Mistral families were not as good with default RAG and simple structured files in my tests, by the way; Llama had the best results.

    Thanks for the tip on Storm, I’ll look into it.
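The “plug and play” RAG described in the comment above (a small markdown-like .txt list fed straight to the model) can be pictured with a minimal sketch: split the file into list items, retrieve the best-matching items by keyword overlap, and prepend them to the prompt. Everything here (function names, the overlap scoring, the sample notes) is an illustrative assumption, not perchance’s or Llama’s actual machinery; a real setup would typically rank chunks with embeddings instead of word overlap.

```python
import re

def tokens(s: str) -> set[str]:
    """Lowercased word tokens, punctuation stripped."""
    return set(re.findall(r"\w+", s.lower()))

def load_chunks(text: str) -> list[str]:
    """Split a markdown-like list into one chunk per bullet item."""
    return [line.lstrip("-* ").strip()
            for line in text.splitlines()
            if line.strip().startswith(("-", "*"))]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by how many word tokens they share with the query."""
    q = tokens(query)
    return sorted(chunks,
                  key=lambda c: len(q & tokens(c)),
                  reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Prepend the retrieved facts so the model stays aware of them."""
    context = "\n".join(f"- {c}" for c in retrieve(query, chunks))
    return f"Use these facts:\n{context}\n\nUser: {query}"

# Hypothetical sample notes, in the markdown-like list style mentioned above.
notes = """
- The tavern keeper is named Marla
- Marla distrusts outsiders
- The town is called Duskmere
"""
print(build_prompt("who runs the tavern?", load_chunks(notes)))
```

For documents under a thousand words, even this crude overlap retrieval tends to surface the right item, which is consistent with the “no additional preparation needed” experience described above.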
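The “reminder note” mechanic mentioned above (text that goes right before the AI output) can be pictured as simple prompt splicing. The template below is a guess for illustration only, not perchance’s actual prompt format; it also suggests why models quote the note so readily: it is the text closest to the generation point.

```python
# Hypothetical sketch of a "reminder note": plain text spliced into the
# prompt immediately before the AI's turn. The exact template is an
# assumption, not perchance's real format.

def assemble_prompt(history: list[str], reminder: str) -> str:
    """Place the reminder note right before the AI's reply."""
    lines = list(history)
    lines.append(f"(Reminder: {reminder})")  # adjacent to the output,
    lines.append("AI:")                      # so models often echo it verbatim
    return "\n".join(lines)

chat = ["User: describe the market square"]
print(assemble_prompt(chat, "Marla distrusts outsiders"))
```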