Polish LLM - Bielik-11B-v2.3-Instruct ~ by SpeakLeash a.k.a Spichlerz!


! Quants from Q1 to Q8 (imatrix) are here:
https://ollama.com/SpeakLeash/bielik-11b-v2.3-instruct-imatrix

Bielik-11B-v2.3-Instruct-GGUF

This repo contains GGUF format model files for SpeakLeash’s Bielik-11B-v2.3-Instruct.

DISCLAIMER: Be aware that quantized models may show reduced response quality and possible hallucinations!

Available quantization formats:

  • Q4_K_M: (6.7GB) Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
  • Q5_K_M: (7.9GB) Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
  • Q6_K: (9.2GB) Uses Q8_K for all tensors
  • Q8_0: (12GB) Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
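If you just want to use one of the prebuilt quants, it can be pulled directly from Ollama's registry. A minimal sketch (the exact tag names are an assumption -- check the tags listed on the model page):

```shell
# Pull and run a specific quantization tag directly from the registry.
# Tag name Q4_K_M is assumed; see the model page for the available tags.
ollama run SpeakLeash/bielik-11b-v2.3-instruct:Q4_K_M
```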

This model is created using an Ollama Modelfile.

All models have PARAMETER temperature 0.2, set while creating this repo.

The GGUF file can be used with Ollama. To do this, import the model using the configuration defined in a Modelfile. For example, for Bielik-11B-v2.3-Instruct.Q4_K_M.gguf (use the full path to the model location), the Modelfile looks like:

FROM ./Bielik-11B-v2.3-Instruct.Q4_K_M.gguf
TEMPLATE """<s>{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"

# Remember to set a low temperature for experimental models (1-3 bits)
PARAMETER temperature 0.1
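With the Modelfile saved next to the GGUF file, the model can be imported and run with the standard Ollama commands. A sketch, assuming the file is named Modelfile and the local model name `bielik` is just an example:

```shell
# Build a local model from the Modelfile in the current directory
ollama create bielik -f ./Modelfile

# Confirm the model was imported
ollama list

# Start a chat session with the imported model
ollama run bielik
```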

Model description:

Contact Us

If you have any questions or suggestions, join our Discord: SpeakLeash.