latest · 13B
202 pulls · updated 7 weeks ago
87fc1d305675 · 7.8GB
model · arch llama · parameters 12.9B · quantization Q4_K_M
Readme
Roleplaying focused MoE Mistral model.
One expert is a merge of mostly RP models, the other a merge of mostly storywriting models, so it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.
Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot.
Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot.
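The page doesn't say how the merge was produced. As a sketch only, here is a hypothetical two-expert recipe in the style of mergekit-moe, a tool commonly used for this kind of MoE assembly. The base model name comes from the readme; the two expert source-model names are placeholders, since the actual expert merges aren't published here, and the routing prompts are illustrative guesses.

```python
# Hypothetical mergekit-moe recipe for a two-expert MoE on a 7B base.
# Only the base model name is taken from the readme; everything else
# is a placeholder for illustration.
import yaml  # pip install pyyaml

config = {
    # Shared layers come from the base model named in the readme.
    "base_model": "SanjiWatsuki/Kunoichi-DPO-v2-7B",
    "gate_mode": "hidden",  # route tokens by hidden-state similarity to the prompts
    "dtype": "bfloat16",
    "experts": [
        {
            # Placeholder for the RP-focused expert merge (LimaRP, Limamono, ...)
            "source_model": "your-namespace/rp-expert-merge",
            "positive_prompts": ["roleplay", "in-character dialogue"],
        },
        {
            # Placeholder for the storywriting expert merge (Erebus, Holodeck, ...)
            "source_model": "your-namespace/storywriting-expert-merge",
            "positive_prompts": ["write a story", "narrative prose"],
        },
    ],
}

with open("moe-config.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```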
Prompt template:

```
### Instruction:
{system prompt}
### Input:
User: {prompt}
### Response:
Character:
```
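As a minimal sketch of using this template, the snippet below sends an Alpaca-style prompt to a locally running Ollama server via its `/api/generate` endpoint. It assumes `ollama serve` is running and the model has been pulled; the model tag is a placeholder.

```python
# Minimal sketch: send this model's Alpaca-style prompt to a local Ollama server.
import json
import urllib.request

# Build the prompt following the template above.
prompt = (
    "### Instruction:\n"
    "You are Character, a friendly roleplay partner.\n"
    "### Input:\n"
    "User: Hello! Who are you?\n"
    "### Response:\n"
    "Character:"
)

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "your-model:latest",  # placeholder tag
        "prompt": prompt,
        "raw": True,      # bypass the server-side template so ours is used verbatim
        "stream": False,  # return a single JSON object instead of a stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Since raw mode bypasses the server-side template, it is worth passing stop sequences (e.g. `### Input:`) in the request's `options` so generation doesn't run into the next turn.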