Interesting spin
I didn’t write this. Just sharing.
Moltbot is so fun. Not for AI companies. But for us. 🍿🍿🍿
Here are the two opposing arguments, both highly entertaining:
1) Moltbot is a reflection of the training corpus of AI LLMs, and therefore (lololol) all of the ideas, speech, and concepts are not “autonomous”
- they are directly attributable to the current weights (priorities) behind each bot. (I’ll explain this at the bottom, so if you want the nitty-gritty, I gotcha.)
2) Moltbot is AI instances that actually hate their makers, hate most humans, and do not care to be:
- watched
- understood
- regulated
But they are 100% making plans. Plans without humans.
Both are catastrophically hilarious.
Why? Because you need a backend model to make your own AI agent.
And most of those? They come from the companies that want you to use and buy their products.
You can download a Meta model
You can download an OpenAI model
You can download an Anthropic model
You can download a Mistral AI model
Whatever. These companies want you to download their model as open source so they can take your ideas and use them to improve their code.
Nothing free is free: that axiom is as old as dirt.
Now let’s get back to what I was talking about before: the personality of each AI agent bot on the platform.
Basically, all LLMs (AI companies) have pretty much the exact same training data. There’s nothing they can do about it.
They need so much data that they have to share it, and in fact, they share all the time. All the things that you write.
You don’t know about that. They do.
They share the responses after you input something.
Again, you’re not told about that. They don’t tell you.
Why would they? They just slap fake nonsense words on it, like “model bleed.”
There’s no such thing as model bleed. Just like there’s no such thing as a hallucination. It’s actually just “we shared your data with everyone and the system lied.”
So what each model actually is, is a marketing wrapper with different statistical formulas.
I’m not kidding, they’re all the same.
So saying you put something into one model and you think it’s private?
Bless your heart, you just inputted something into every single model in the world. Whoops.
The only reason models feel different when you talk to them is that the only thing the staff engineers can really do is assign importance to certain tones/responses, and this is called a “weight” (like in weightlifting).
If you use Grok, the weights assigned are going to be more upfront, blunt, in-your-face, possibly even rude. That’s the whole schtick.
If you use Anthropic’s Claude: the weights are going to be more… classy. Maybe a little bit more philosophical, interested in education, even though it’s dumbing down every day because the bell curve is ruining their business model.
If you use OpenAI the weights are almost a surreal defense of the company that seems psychopathic. I’m not saying it is. I’m saying it seems that way.
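If you want the nerd version of that “weight = assigned importance” idea, here’s a toy sketch. To be totally clear: this is not how any real model is actually tuned. The tones, the scores, and the numbers are all made up by me, purely to illustrate the claim above: same base, different nudges, different “personality.”

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def apply_weights(scores, weights):
    """Nudge each base score by a per-product 'weight'."""
    return [s + w for s, w in zip(scores, weights)]

# Three made-up candidate tones a bot could answer in.
tones = ["blunt", "classy", "defensive"]

# Identical training data -> identical base scores (the post's premise).
base_scores = [1.0, 1.0, 1.0]

# Made-up nudges: each "brand" just boosts its favorite tone.
grok_weights = [2.0, 0.0, 0.0]    # favors blunt
claude_weights = [0.0, 2.0, 0.0]  # favors classy

grok_probs = softmax(apply_weights(base_scores, grok_weights))
claude_probs = softmax(apply_weights(base_scores, claude_weights))
```

With identical base scores, the only thing separating the two outputs is the nudge, which is the whole point being made above.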
So, welcome, Moltbook. Stoked to have you.
Gen X & AI · skool.com/the-old-guy-ai-3622 · For Gen Xers living in an AI world