This is the WizardLM-2-7B model with orthogonalized bfloat16 safetensor weights, based on the implementation by @failspy.
191 Pulls · Updated 3 months ago
af1e1e698bc2 · 4.4GB
model: arch llama · parameters 7.24B · quantization Q4_K_M · 4.4GB
template (110B)
{{ if .System }}{{ .System }} {{ end }}{{ if .Prompt }}USER: {{ .Prompt }} {{ end }}ASSISTANT: {{ .Response }}
params (48B)
{"stop":["USER","ASSISTANT"],"temperature":0.6}
system (250B)
You are a highly capable AI assistant designed to help users to the best of your abilities. Your interactions are to be intelligent, kind, efficient, and empathetic. Each response should be aimed at providing thoughtful, helpful, and focused support.
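The template above is a Go-style prompt template: the system prompt (if any) comes first, then the user turn prefixed with `USER:`, then `ASSISTANT:` where generation begins. As a rough sketch, a Python helper mirroring that single-turn logic might look like this (the `build_prompt` name is illustrative, not part of the model or Ollama):

```python
# Sketch of how this model's template assembles a single-turn prompt.
# Mirrors: {{ if .System }}{{ .System }} {{ end }}{{ if .Prompt }}USER: {{ .Prompt }} {{ end }}ASSISTANT: {{ .Response }}

def build_prompt(user_message: str,
                 system: str = "",
                 response: str = "") -> str:
    parts = []
    if system:
        parts.append(system + " ")       # system prompt, if present
    if user_message:
        parts.append(f"USER: {user_message} ")
    parts.append(f"ASSISTANT: {response}")  # model continues from here
    return "".join(parts)

print(build_prompt("Hello!", system="You are a helpful assistant."))
```

Note that the `params` entry stops generation on the literal strings `USER` and `ASSISTANT`, which keeps the model from writing both sides of the conversation under this template.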
Readme
WizardLM-2-7B-abliterated
This is the WizardLM-2-7B model with orthogonalized bfloat16 safetensor weights, based on the implementation by @failspy. For more info:
- Original paper preview presenting the methodology: https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction
- Jupyter notebook containing an implementation of the methodology, by @failspy: https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb
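The core idea in the linked post is that refusal behavior is mediated by a single direction in activation space, so it can be removed by projecting that direction out of the model's weights. A minimal NumPy sketch of that orthogonalization step (not @failspy's actual code, and with a random stand-in for the refusal direction) is:

```python
import numpy as np

# Sketch of weight orthogonalization ("abliteration"): given a unit
# refusal direction r, remove its component from every output of a
# weight matrix W, i.e. W' = W - r r^T W.

def orthogonalize(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    r = r / np.linalg.norm(r)        # normalize the refusal direction
    return W - np.outer(r, r) @ W    # project r out of W's output space

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))      # stand-in weight matrix
r = rng.standard_normal(8)           # stand-in refusal direction
W_abl = orthogonalize(W, r)

# After orthogonalization, W_abl's outputs have no component along r:
print(np.abs((r / np.linalg.norm(r)) @ W_abl).max())
```

In practice the refusal direction is estimated from the difference in mean activations between harmful and harmless prompts, and the projection is applied to the relevant weight matrices across layers; see the notebook above for the full procedure.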
GGUF Files
GGUF files are available at: https://huggingface.co/fearlessdots/WizardLM-2-7B-abliterated-GGUF