ylai@lemmy.ml to LocalLLaMA@sh.itjust.works · English · 10 months ago
Meta releases ‘Code Llama 70B’, an open-source behemoth to rival private AI development (venturebeat.com)
amzd@kbin.social · 10 months ago
If you use ollama, you can try the fork I’m using. This is my config to make it work: https://github.com/Amzd/nvim.config/blob/main/lua/plugins/llm.lua
z3rOR0ne@lemmy.ml · English · 10 months ago
Nice, thanks! I’ll save this post in case I use ollama in the future. Right now I use a codellama model and a mythomax model, but I’m not running them via a localhost server, just outputting in the terminal or LMStudio. This looks interesting though. Thanks!
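For context on the exchange above: editor plugins like the linked llm.lua config ultimately talk to Ollama’s local HTTP server. Here is a minimal sketch of that same round trip outside the editor, assuming Ollama is serving its default API on localhost:11434 and a codellama model has already been pulled (the prompt text is just an illustration):

```python
import json
import urllib.request

# Ollama exposes a local HTTP API on port 11434 by default.
# Assumes `ollama pull codellama` has already been run.
url = "http://localhost:11434/api/generate"
payload = {
    "model": "codellama",
    "prompt": "Write a Python function that reverses a string.",
    "stream": False,  # return a single JSON object instead of streamed chunks
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])  # the model's completion text
```

Running a model this way (behind a localhost server) is what distinguishes the Ollama setup from just printing output in a terminal or LMStudio: any tool that can make an HTTP request, including a Neovim plugin, can reuse the same running model.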