latest · 7B · 4.4GB
from mayflowergmbh/Wiedervereinigung-7b-dpo-laser-GGUF
154 Pulls · Updated 7 months ago
f87eb374b259 · 4.4GB
model
arch llama · parameters 7.24B · quantization Q4_K_M · 4.4GB

params (82B)
{"num_ctx": 8192, "stop": ["<|im_start", "<|im_end", "|im_start", "|im_end"]}
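The params above correspond to Modelfile PARAMETER directives. A minimal Modelfile reproducing them might look like the sketch below (the GGUF filename in the FROM line is an assumption for illustration; the truncated stop strings are kept exactly as published):

```
# Hypothetical filename; substitute the actual GGUF file
FROM ./wiedervereinigung-7b-dpo-laser.Q4_K_M.gguf
PARAMETER num_ctx 8192
PARAMETER stop "<|im_start"
PARAMETER stop "<|im_end"
PARAMETER stop "|im_start"
PARAMETER stop "|im_end"
```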
template (156B)
{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
Readme
This is the quantized GGUF of Wiedervereinigung-7b-dpo-laser: DPO-trained with a German translation of intel-orca-dpo and treated with laserRMT using German datasets.
It is a LazyMergekit merge of:
DiscoResearch/DiscoLM_German_7b_v1
DRXD1000/Phoenix
VAGOsolutions/SauerkrautLM-7b-v1-mistral
malteos/hermeo-7b
Source: https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo-laser
Uploaded for experimenting with the model.
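For experimenting, the model can be queried through Ollama's local HTTP API once pulled. A minimal Python sketch using only the standard library (the model tag below is an assumption — use whatever tag the model was pulled under):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    # Payload for Ollama's /api/generate endpoint; stream=False asks for
    # the full completion in a single JSON response.
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Hypothetical tag; adjust to your local pull
req = build_generate_request(
    "wiedervereinigung-7b-dpo-laser:latest",
    "Was ist die Hauptstadt von Deutschland?",
)
# Requires a running Ollama server:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```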