Quantized version of DeepSeek Coder v1.5 and Q8_0_L quantization of the v2 model, from bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF and bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF
247 Pulls Updated 4 months ago
d9457e7571b6 · 17GB
model
arch deepseek2 · parameters 15.7B · quantization Q8_0
17GB
template
{{ if .System }}{{ .System }}
{{ end }}{{ if .Prompt }}User: {{ .Prompt }}
{{ end }}Assistant:{{ .Response }}
138B
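
The prompt template above is standard Go text/template syntax. A minimal sketch of how it renders a conversation turn, assuming the field names System, Prompt, and Response shown in the template; the example values are illustrative only:

package main

import (
	"os"
	"text/template"
)

// Mirrors the fields referenced by the prompt template above.
type promptData struct {
	System   string
	Prompt   string
	Response string
}

const tmpl = `{{ if .System }}{{ .System }}
{{ end }}{{ if .Prompt }}User: {{ .Prompt }}
{{ end }}Assistant:{{ .Response }}`

func main() {
	t := template.Must(template.New("prompt").Parse(tmpl))
	// Illustrative values; at inference time the runtime fills these in.
	data := promptData{
		System: "You are a helpful coding assistant.",
		Prompt: "Write a function that reverses a string in Go.",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}

Running this prints the system message, the "User:" line, and a trailing "Assistant:" marker, which is the text the model continues from.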
Readme
No readme