Don't think of LLMs as entities but as simulators. For example, when exploring a topic, don't ask:
"What do you think about xyz?"
There is no "you". Next time try:
"What would be a good group of people to explore xyz? What would they say?"
The LLM can channel/simulate many perspectives but it hasn't "thought about" xyz for a while and over time and formed its own opinions in the way we're used to. If you force it via the use of "you", it will give you something by adopting a personality embedding vector implied by the statistics of its finetuning data and then simulate that. It's fine to do, but there is a lot less mystique to it than I find people naively attribute to "asking an AI".
— Andrej Karpathy, X.com · 6:13 PM · Dec 7, 2025
Recent articles
- I Passed My Driving Test - And I Have Something to Say - 30th March 2026
- Microsoft Copilot and the MCP Integration Experience — A Mess - 16th March 2026
- WeChat -- The Worst Chatting App Ever Made - 7th March 2026