The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
8x22B
335 Pulls · Updated 5 months ago
9b000033acd8 · 80GB
model: arch llama · parameters 141B · quantization Q4_0 · 80GB
Readme
Mixtral-8x22b
The 4-bit quantization (Q4_0) fits on a single 80GB A100.
Converted from https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1.
Keep in mind that this is a foundation (base) model, not an instruction-tuned chat model, so prompt it with text to complete rather than with instructions.
Rather than:

```
Write me a function in typescript that takes two numbers and multiplies them
```

try something like this, deliberately leaving the code unfinished so the model continues it:

```typescript
/**
 * This function takes two numbers and multiplies them
 * @param arg1 number
 * @param arg2 number
 * @returns number
 */
export function
```
(example taken from: https://www.reddit.com/r/LocalLLaMA/comments/1c0tdsb/mixtral_8x22b_benchmarks_awesome_performance/)
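For context, a base model given that prefix will typically continue the code rather than answer conversationally. A plausible continuation might look like the following; this is an illustrative sketch, not captured model output, and the function name `multiply` is an assumption since the prompt leaves it open:

```typescript
/**
 * This function takes two numbers and multiplies them
 * @param arg1 number
 * @param arg2 number
 * @returns number
 */
// "multiply" is a hypothetical name; the prompt intentionally stops at
// "export function" so the model picks the name and body itself.
export function multiply(arg1: number, arg2: number): number {
  return arg1 * arg2;
}
```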