latest · 5.5GB · Vision · 8B
687 Pulls · Updated 4 months ago
44c161b1f465 · 5.5GB
model · arch llama · parameters 8.03B · quantization Q4_K_M · 4.9GB
template · 254B
{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>
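To see what the template produces, here is a minimal Python sketch that mimics the Go-template logic above (an illustration only, not Ollama's actual renderer; the function name is invented):

```python
# Sketch of how the Go template above expands for a single turn.
# Illustration only -- Ollama renders this with Go's text/template.

def render_prompt(system: str = "", prompt: str = "", response: str = "") -> str:
    """Each present field becomes a <|start_header_id|>role<|end_header_id|>
    block terminated by <|eot_id|>, matching the template's {{ if }} guards."""
    out = ""
    if system:
        out += f"<|start_header_id|>system<|end_header_id|>\n{system}<|eot_id|>"
    if prompt:
        out += f"<|start_header_id|>user<|end_header_id|>\n{prompt}<|eot_id|>"
    # The assistant header is always emitted; at generation time .Response is
    # empty, so the model continues from just after this header.
    out += f"<|start_header_id|>assistant<|end_header_id|>\n{response}<|eot_id|>"
    return out

print(render_prompt(system="You are concise.", prompt="What is in this image?"))
```

Note that the stop tokens listed under params below the layer table are exactly the special tokens this template emits, so generation halts cleanly at the end of the assistant turn.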
projector · arch clip · parameters 312M · quantization F16 · 624MB
params · 124B
{"num_ctx":4096,"num_keep":4,"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]}
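These runtime parameters are the kind set via `PARAMETER` directives in an Ollama Modelfile. A sketch of the equivalent Modelfile fragment, assuming a local GGUF file (the `FROM` path is a placeholder, not the actual filename):

```
FROM ./llava-llama-3-8b-v1_1.gguf
PARAMETER num_ctx 4096
PARAMETER num_keep 4
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```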
Readme
Model Source: https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf
Modified for easy use with Ollama.
llava-llama-3-8b-v1_1 is a LLaVA model fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct and CLIP-ViT-Large-patch14-336 with ShareGPT4V-PT and InternVL-SFT by XTuner.
Note: This model is in GGUF format.
Resources:
- GitHub: xtuner
- HuggingFace LLaVA format model: xtuner/llava-llama-3-8b-v1_1-transformers
- Official LLaVA format model: xtuner/llava-llama-3-8b-v1_1-hf
- XTuner LLaVA format model: xtuner/llava-llama-3-8b-v1_1