latest · 4.4GB · 7B
pulled from https://huggingface.co/MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF
216 Pulls · Updated 5 months ago
262e17c50448 · 4.4GB
model
arch llama · parameters 7.24B · quantization Q4_K_M · 4.4GB
params · 35B
{"num_ctx": 4096, "temperature": 0.8}
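The params blob above corresponds to Modelfile `PARAMETER` directives. A minimal sketch of a Modelfile that reproduces these settings, assuming the GGUF weights have been downloaded locally (the filename below is a hypothetical path, not one stated on this page):

```
# Hypothetical local path to the downloaded GGUF file; substitute your own.
FROM ./mamba-gpt-7b-v2-mistral-7b-instruct-v0.2-slerp.Q4_K_M.gguf

# Settings taken from the published params blob above.
PARAMETER num_ctx 4096
PARAMETER temperature 0.8
```

A model built from this file can be created and run with `ollama create <name> -f Modelfile` followed by `ollama run <name>`.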
Readme
No readme