-
mixtral-8x22b
The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
8x22B · 335 Pulls · 3 Tags · Updated 5 months ago
-
wizardlm-2-8x22
The original version of WizardLM 2, released April 15th.
8x22B · 126 Pulls · 2 Tags · Updated 5 months ago
-
zephyr-orpo-141b-a35b-v0.1
Zephyr is a series of language models that are trained to act as helpful assistants.
8x22B · 84 Pulls · 3 Tags · Updated 5 months ago
-
qra-13b
Qra is a foundation language model trained with a causal language modeling objective on a large corpus of texts.
13B · 68 Pulls · 2 Tags · Updated 5 months ago