It uses a single Q4_K_M imatrix quant (4.89 BPW) for context sizes up to 12288, fitting in less than 8 GB of VRAM.
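A minimal sketch of running this quant with the stated 12288-token context window via the Ollama Python client; the model tag below is a placeholder, since the card does not name the exact tag:

```python
# Assumes the "ollama" Python package is installed and the model has been pulled.
import ollama

response = ollama.chat(
    model="your-model:q4_k_m",  # hypothetical tag; substitute the actual model name
    messages=[{"role": "user", "content": "Hello"}],
    options={"num_ctx": 12288},  # context window the card says fits under 8 GB of VRAM
)
print(response["message"]["content"])
```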