/mlpol/ - My Little Politics


1744606862816.png
/CHAG/ #1 - Lunar Edition
Anonymous
b55e2cc
?
No.383729
383788
Welcome to Chatbot AI General #121, the thread for discussing and improving AI pony chatbots.

▶ MLP Bots
https://mlpchag.neocities.org
Spreadsheet (CAI bots + Old repository):
https://docs.google.com/spreadsheets/d/1J7BeqJVDS51cXF8Pgm2YZaFq-Z6ykSJT
SillyTavern website extension: https://github.com/MLPChag/SillyTavern-Chag-Search
Bunch of Character Cards (Descriptions/No Greetings):
https://docs.google.com/spreadsheets/d/1Y6LNOCqAZAWIX-OBEv55HjlzcEdpUh-XbKCpbpL6v5k
CAI bots converted to Tavern: https://files.catbox.moe/ckurq1.zip
Expression packs: https://rentry.org/ChagExpressions
!!!GALLERY!!!: https://drive.google.com/drive/u/2/folders/1Ao-h5HFGMPllSrzSBKM_BvGSiU9f0c2U

▶ How do I start?
1) Select a Frontend
2) Select an AI model
3) Select Jailbreak
4) Select bots
5) Lovemaking with AI mares!

Starting in this hobby can be confusing and difficult. If it’s your first time and you’re lost,
▶ ASK THE THREAD! ◀

Novice-to-advanced guide: https://rentry.org/onrms

▶ SillyTavern (preferred frontend)
https://github.com/SillyTavern/SillyTavern
On Android: https://rentry.org/STAI-Termux
App that voices pony responses in ST: https://drive.google.com/drive/folders/16Ss26VBmgzcSuTGzhaHqRuyVRceTf-YB

▶ More frontends:
Risu: https://risuai.xyz
Agnai: https://agnai.chat

▶ Locals
https://rentry.org/lunarmodelexperiments
>>>/g/lmg/
Mistral Nemo base model fine-tuned on fimfics: https://huggingface.co/Ada321/Nemo_Pony_2/tree/main

▶ Jailbreaks
MLP JB: https://rentry.org/znon7vxe
More JB and guides: https://rentry.org/jb-listing
Hypebots for Tavern: https://rentry.org/pn3hb

▶ Botmaking
Editors: https://agnai.chat/editor
Guides: https://rentry.org/meta_botmaking_list
Advanced: https://rentry.org/AdvancedCardWritingTricks

▶ Miscellaneous
Chag Arts: https://drive.google.com/drive/u/2/folders/1yfajin_hB5rW_jy6WyFjRaRfDA3EUGiJ
https://rentry.org/ChagMiscellaneous
https://rentry.org/ChagArchive

Previous thread: >>42105276

▶ News
- Grok 3 (beta) released on API https://docs.x.ai/docs/models
- OAI(?) releases new free test models on OpenRouter: Quasar Alpha and Optimus Alpha
- Chatgpt-4o-latest API updated to match frontend version; native image generation released https://openai.com/index/introducing-4o-image-generation
- Gemini 2.5 Pro Experimental released https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/
- Deepseek V3 updated, new weights released https://huggingface.co/deepseek-ai/DeepSeek-V3-0324
Anonymous
b55e2cc
?
No.383730
1694207085907.png
Anchor for bots, lorebooks, scenarios.
Anonymous
b55e2cc
?
No.383731
1695621238233.png
Anchor for technical stuff (Proxies, Updates, Models, etc.)
Anonymous
b55e2cc
?
No.383732
1495968934678.jpg
Anchor for asking for bots, lorebooks, scenarios, etc.
Anonymous
d47550b
?
No.383738
Hello Anon, good to see that /CHAG/ lives on!
Anonymous
b55e2cc
?
No.383739
383752 383795
Just discovered there's also a legitimate /CHAG/ on nhnb. But let this be the one.
https://nhnb.org/fim/res/20712.html
Anonymous
297afa7
?
No.383752
383755
>>383739
I like it here; the nhnb captcha is pretty broken for me, I'm forced to solve it at least five times before my post gets through.
Anonymous
3a65d4a
?
No.383755
383759
7219080.png
>>383752
There is a toggle-able captcha system here, in the event of raiders (one of the reasons soyjack doesn't like to play with us anymore :3), but by default it's turned off.
Weird, almost like we want our posters to feel welcome and shit.
Anonymous
8381234
?
No.383759
383760
>>383755
Can you screenshot it for me? I think I may be retarded since I'm not seeing it.
Anonymous
3a65d4a
?
No.383760
>>383759
It's not something users can toggle, it's a staff function
Anonymous
4acc383
?
No.383783
Kek. There's one on mlpol, Poner, and NHNB too.
The NHNB one seems to be the most active for now.
Anonymous
694fb63
?
No.383788
no light in this dim bulb.png
>>383729
for fuck's sake, at least keep /chag/ to one board on /mlpol/
>>>/poner/361 →
Anonymous
bed396e
?
No.383792
>/CHAG/ has migrated
Excellent.
Anonymous
c5857cd
?
No.383795
383821
image.png
>>383739
Please, let's stick to mlpol. Either this thread or on /poner/. nhnb fucking sucks.
Anonymous
f9bfefc
?
No.383814
383821
Sorry anons, the CHAG circlejerk shadow cabal secret party has chosen NHNB.
Anonymous
c4e5905
?
No.383821
>>383795
Are you sure you want to stick to the place where schizos like >>383814 are allowed to roam free?
Anonymous
f5e719a
?
No.384230
beaultifull.png
It's alive
Anonymous
1c26967
?
No.384447
384481
Are there any models that run locally? I heard deepseek has models that can run locally with 16 gigs of ram.
Anonymous
acb362e
?
No.384481
384484 384491 384531
>>384447
>Are there any models that run locally?
Yeah, plenty, but what you can run depends heavily on your VRAM.
Text generation models require a lot of it, way more than image generation, for example.
They'll also be inferior to the big models in terms of intelligence, instruction-following, and pony knowledge.
>I heard Deepseek has models that can run locally with 16GB of RAM.
Unfortunately, that claim is mostly clickbait.
When people talk about Deepseek, they're usually referring to one of two models:

>Deepseek R1
An open-source "thinking" model. This is the one that made them well-known.
It's technically open-source, but you're not running that locally, unless you're sitting on something absurd.
Even with quantization, you'd probably need 200–300GB of VRAM or RAM to run it.
Not sure about the exact math, but the short version: you're not running it on a normal consumer machine.
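If you want to sanity-check that figure, here's some napkin math in Python (assuming the commonly cited ~671B total parameter count, weights only, no KV cache or runtime overhead):

params = 671e9                   # commonly cited total parameter count for R1
for bpw in (8, 4, 2):            # bits per weight after quantization
    gb = params * bpw / 8 / 1e9  # bits -> bytes -> GB (decimal)
    print(f"{bpw} bpw: ~{gb:.0f} GB")
# ~671 GB at 8 bpw, ~336 GB at 4 bpw, ~168 GB at 2 bpw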

>Deepseek V3
Also a large model, but not a "thinking" one.
Same deal, you're not running this locally either.

The "Deepseek" models people say can run locally aren't really Deepseek models.
They're smaller local models that replicated R1's "thinking" approach and core ideas, but they aren't the same thing.
So if you hear someone say they're running "Deepseek" on a Raspberry Pi, it's actually just some janky 1.5B model that mimics the concept but not the performance.
From what I've heard, those versions aren't great anyway, but I haven't tested them myself.

If you want to try the real R1 or V3, you can use free OpenRouter keys, but obviously, that's not local.
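OpenRouter exposes an OpenAI-compatible API, so calling it from a script looks roughly like this (minimal sketch; the free-tier model ID below is my assumption, check openrouter.ai for the current names):

from openai import OpenAI

# Point the standard OpenAI client at OpenRouter instead of api.openai.com.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter key
)
resp = client.chat.completions.create(
    model="deepseek/deepseek-r1:free",  # assumed free-tier ID, verify on the site
    messages=[{"role": "user", "content": "Describe Twilight Sparkle in one sentence."}],
)
print(resp.choices[0].message.content)

In SillyTavern you don't need any code for this, you just pick OpenRouter as the Chat Completion source and paste the key.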
Anonymous
1c26967
?
No.384484
>>384481
Are there any NLPs (like SmarterChild) that are any good and can run locally?
Anonymous
bedd8bd
?
No.384491
>>384481
I can run R1 locally with the Q2_K_XL dynamic quant on 256 gigabytes of octo-channel DDR4 RAM, but I also have 4x3090s to offload some of it to GPU, and that's good for about 2 tokens per second on a good day. But yeah, you have to really be into this stuff to even bother.
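For anyone curious what that setup looks like in practice, partial offload with llama-cpp-python is roughly this (paths and layer counts are placeholders, not my actual config; splitting R1 across four 3090s takes more tuning than shown):

from llama_cpp import Llama

# Load a GGUF quant and push some layers onto the GPU(s); the rest stays in system RAM.
llm = Llama(
    model_path="DeepSeek-R1-Q2_K_XL.gguf",  # placeholder filename
    n_gpu_layers=20,   # how many transformer layers to offload to VRAM; 0 = CPU only
    n_ctx=8192,        # context window
)
out = llm("Write one sentence about mares.", max_tokens=64)
print(out["choices"][0]["text"])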
Anonymous
f17b0f4
?
No.384531
384616
>>384481
Your description of quantization is wrong. Quantized models are directly based off the original model, and often made by the same people, but the precision of the data is lower, i.e. reduced from FP32 to FP16 or FP8.
Anonymous
ee45b46
?
No.384616
>>384531
Quantization is the process of either naively or selectively reducing the bit width of the parameters of a model after it has already been trained, in order to lower the memory footprint. They aren't really "based off" the original model. They ARE the original model with some of its brains scooped out. It's different from, say, training the model in a lower precision to begin with. For example, DeepSeek is trained natively in FP8 afaik. Most really big LLMs get trained in FP8 these days because, generally speaking, quantizing an LLM down to 8.0bpw (bits per weight) is considered to be lossless. (It's not, but it's damn close in most cases, although directly comparing quantization error of one model to a different model isn't really fair). The quantization error doesn't REALLY start stacking up until you go below 4bpw. Although, oddly enough, selective quants of R1 still perform very well as low as 1.58bpw. I theorize, personally, that it's because A. it's a MoE, and MoEs have naturally high data sparsity, and B. it was trained in native FP8, so theoretically at 2bpw it has 'only' lost 75% of its initial data, which is the same ratio of reduction as an FP16 model quanted down to 4bpw.
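If the "brains scooped out" bit sounds abstract, here's a toy numpy illustration of naive symmetric per-tensor quantization (real schemes like Q2_K_XL work per-group with extra scale data; this is only to show where the precision goes):

import numpy as np

def fake_quant(w, bits):
    # Naive symmetric quantization: snap weights to a small integer grid, then scale back.
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit
    scale = np.abs(w).max() / qmax        # one scale for the whole tensor
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

w = np.random.randn(4096).astype(np.float32)  # stand-in for one weight tensor
for bits in (8, 4, 2):
    err = np.abs(fake_quant(w, bits) - w).mean()
    print(f"{bits}-bit mean abs error: {err:.4f}")
# the error jumps sharply below 4 bits, matching the bpw behavior described above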

My general finding with quantization is that, for the most part, a quantized model still has most of its 'knowledge' intact, but the quantization error manifests in a way that's hard to explain. Early naive 4-bit quantizations had a tendency to do things like reverse possessive clauses. Basically, small, semantically close linguistic concepts get muddled with deep quantization while the overall 'world knowledge' of the model remains intact. It's a pseudoscientific claim on my part, working entirely from experience, but given that there are more possible mathematical configurations of a model's parameters than there are particles in the universe, 10^100000 times over, you aren't going to get a hard scientific explanation of that one any time soon.

Although we are still far from the "hyperfitting" wall; I would assume quantization will become more destructive as we get closer to it. TLDR of hyperfitting is that current models are far from being trained to the point where they can no longer form usable information. But it takes months to do the pretraining on the models we do have, so hyperfitting larger models (versus small lab test models) would probably take years, and nobody wants to start training a model that might be architecturally obsolete by the time it's done.
Anonymous
04dbe6e
?
No.386337
386338 386341 386347 386358
Is this finally the day I can be the first one to say "Lovemaking with AI mares"? I hope so.
Anonymous
87a91cb
?
No.386338
386341
Bot.png
>>386337
You are indeed the first in this specific thread. Well played!
Anonymous
923d92c
?
No.386341
>>386337
>>386338
/lainchain/mlpol/ mashup when?
Anonymous
0d6b647
?
No.386347
386349
>>386337
Make it happen at alogs.space/robowaifu or trashchan.xyz/robowaifu
Anonymous
87a91cb
?
No.386349
PNK.jpg
>>386347
Kek.
We already have three threads; I don't know if we should be colonizing more.
Anonymous
694fb63
?
No.386358
386400
>>386337
Anon.. I >>>/poner/386 →
Anonymous
dfed8aa
?
No.386400
>>386358
damn it