latest
8823e5a7da58 · 7.8GB
9 Pulls · Updated 3 months ago
model
arch llama · parameters 7.24B · quantization Q8_0
7.8GB
template
{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>
254B
params
{"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>","<|reserved_special_token"],"temperature":0.6}
146B
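These defaults (stop tokens, temperature 0.6) can be overridden per request through the `options` field of Ollama's `/api/generate` endpoint. A sketch of such a request payload — the model tag `ultramerge-7b` is an assumption, substitute whatever tag you pulled:

```python
import json

# Hypothetical request body for POST http://localhost:11434/api/generate.
payload = {
    "model": "ultramerge-7b",  # assumed tag; use your local tag name
    "prompt": "Write a short story about a lighthouse keeper.",
    "options": {
        "temperature": 0.8,      # overrides the model default of 0.6
        "stop": ["<|eot_id|>"],  # overrides the full default stop list
    },
    "stream": False,
}
body = json.dumps(payload)
print(body)
```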
system
You are an intelligent, capable, and friendly AI assistant. Your purpose is to help make the user's life easier and more productive. Engage in natural conversation, provide helpful information, and assist with any tasks to the best of your abilities. Be caring, understanding, and aim to have a positive impact. Keep responses clear and to the point.
350B
Readme
This model is good for writing stories.
UltraMerge-7B
This model is an experimental DPO fine-tune of automerger/YamShadow-7B on the following datasets:
- mlabonne/truthy-dpo-v0.1
- mlabonne/distilabel-intel-orca-dpo-pairs
- mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
- mlabonne/ultrafeedback-binarized-preferences-cleaned
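These are preference datasets (chosen vs. rejected responses), the input format DPO training expects. For reference, a minimal sketch of the DPO loss for a single preference pair, written from the standard formulation (not this model's actual training code):

```python
import math

def dpo_loss(pi_chosen: float, pi_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one pair, given summed log-probs of the chosen and
    rejected responses under the policy (pi_*) and the frozen reference
    model (ref_*)."""
    # Implicit reward margin: how much more the policy has shifted toward
    # the chosen response (relative to the reference) than the rejected one.
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # Negative log-sigmoid of the margin.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When the policy already prefers the chosen response, the loss is small:
print(dpo_loss(-10.0, -20.0, -15.0, -15.0))  # ≈ 0.313
```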
I'm not sure which chat template works best; probably Mistral-Instruct or ChatML.
Source: https://huggingface.co/mlabonne/UltraMerge-7B
Leaderboard: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard