The issue I have with LLMs is that while they are great at certain tasks, they are bad at anything, let's call it, factual, due to their nature.
I can, for example, use one to quickly draft an email or a piece of Python code, and I can immediately see whether or not the response it generated is actually what I want.
If I ask it what the hottest day in a given country was, or ask it to explain something, I have absolutely no idea whether it's bullshit or not, and I'll have to double-check it anyway.
I think the learning curve with LLMs as a tool is knowing when to use them and when to rely on other sources instead.