Beginning: Dummy Guide To LoreMateAI


What is an LLM?

Can you eat it? No, but it eats much more than you do.

I know many of you love chatting or roleplaying with bots, and you probably don’t care what’s behind them. That’s fine.

But if you’ve ever wondered “Why does this bot say weird things sometimes?” or “Why is it so obsessed with licking people?”, then it’s probably time we talk about LLMs, aka the brain behind your favorite bot.

LLM stands for Large Language Model. It’s basically a big, fat AI brain trained by reading a LOT of text. For example: novels, articles, fanfics, Reddit posts, Tumblr chats, even scripts. Different chatbot platforms use different LLMs. Some models are trained more on romantic fanfics, while others are trained more on Wikipedia and academic writing. That’s why one bot might act like a soft-spoken therapist… and another like a possessive boyfriend with boundary issues.
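
If you’re curious what all that reading actually turns into, here’s a tiny, hypothetical sketch (my own illustration, using the open-source GPT-2 model via the Hugging Face transformers library as a stand-in, not any of Loremate’s actual models). Under the hood, an LLM just keeps guessing the next chunk of text, over and over, until it has a reply.

```python
# A minimal sketch of what an LLM does: predict the next bit of text, repeatedly.
# GPT-2 here is only a small open-source stand-in, not one of Loremate's models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "You open the tavern door and the stranger says,"
result = generator(prompt, max_new_tokens=30, do_sample=True)

print(result[0]["generated_text"])
```

Run it twice and you’ll get two different continuations, because the model is sampling from everything it has read, which is also part of why your bot sometimes says weird things.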


Parameters:

We’ve already gone over what an LLM is: it’s basically the brain behind your bots. Now let’s take a closer look at the “neurons” in that brain: the parameters.

Loremate currently has three models: “T-Rex” (8B), “Infinity” (250B), and “DeepSeek V3” (671B). That “B” stands for billion, so we’re talking 8 billion, 250 billion, and 671 billion parameters.
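
To get a feel for how heavy those numbers are, here’s a rough back-of-the-envelope sketch (my own illustration, not Loremate’s internal numbers), assuming each parameter is stored as a 16-bit number, i.e. 2 bytes. Real deployments shrink models with tricks like quantization, so treat these as ballpark figures.

```python
# Rough memory needed just to store a model's weights, assuming 2 bytes per
# parameter (16-bit). Real setups often compress further, so this is ballpark.
BYTES_PER_PARAM = 2

models = {
    "T-Rex": 8e9,          # 8 billion parameters
    "Infinity": 250e9,     # 250 billion parameters
    "DeepSeek V3": 671e9,  # 671 billion parameters
}

for name, params in models.items():
    gigabytes = params * BYTES_PER_PARAM / 1e9
    print(f"{name}: ~{gigabytes:,.0f} GB of weights")

# Prints roughly: T-Rex ~16 GB, Infinity ~500 GB, DeepSeek V3 ~1,342 GB
```

The biggest model’s weights alone wouldn’t come close to fitting on a gaming PC, which is a big part of why larger models cost more and take longer to answer.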

In theory, the more parameters a model has, the more it can soak up from its training, and the better it can learn patterns and generate complex responses. But does bigger always mean better? And does smaller mean dumber?


Not necessarily.

Think of parameters like calories! The more you eat, the more energy (and baggage) you carry. Overdo it and you get sluggish. Same with big models: they’re powerful, but also heavier, more expensive, and slower to respond. Like a genius who needs to warm up before saying anything.

Smaller models, in contrast, are like a light meal. Quick, snappy, and good for fast-paced chatting or simple RP. They don’t overthink, and they won’t make a long list or write an essay just to answer “Do you love me?”

But beware! Low-parameter models might not understand complex prompts well, especially if they weren’t trained on rich or diverse data.

In short: want a fast, flirty chatbot? Go with a smaller model. Want deep, layered conversations or immersive RP? Go with a bigger one.