Granite mixture of experts models

The IBM Granite 1B and 3B models are long-context mixture of experts (MoE) Granite models from IBM designed for low latency usage.

Trained on over 10 trillion tokens of data, the Granite MoE models are ideal for deployment in on-device applications or situations requiring instantaneous inference.

Parameter Sizes

1B:

ollama run granite3.1-moe:1b

3B:

ollama run granite3.1-moe:3b
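
Once a model has been pulled, it can also be called programmatically. Below is a minimal sketch using the official ollama Python client (pip install ollama); it assumes an Ollama server is running locally on the default port, and the prompt is illustrative only.

import ollama

# Ask the 1B MoE model for a short completion; swap in the 3b tag
# by changing only the model name.
response = ollama.chat(
    model="granite3.1-moe:1b",
    messages=[{"role": "user", "content": "Summarize Ollama in one sentence."}],
)
print(response["message"]["content"])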

Supported Languages

English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)

Capabilities

  • Summarization
  • Text classification
  • Text extraction
  • Question-answering
  • Retrieval Augmented Generation (RAG)
  • Code related tasks
  • Function-calling tasks (see the sketch after this list)
  • Multilingual dialog use cases
  • Long-context tasks including long document/meeting summarization, long document QA, etc.
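
As a hedged illustration of the function-calling capability, the sketch below passes a tool schema to the 3B model through the ollama Python client. The get_weather tool, its parameters, and the prompt are assumptions made for the example, not part of the model card.

import ollama

# Hypothetical tool schema; any JSON-schema function description works.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = ollama.chat(
    model="granite3.1-moe:3b",
    messages=[{"role": "user", "content": "What is the weather in Lisbon?"}],
    tools=tools,
)

# The model may answer directly or emit structured tool calls for the
# caller to execute and feed back as a "tool" message.
print(response["message"].get("tool_calls") or response["message"]["content"])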

Granite dense models

The Granite dense models are available in 2B and 8B parameter sizes and are designed to support tool-based use cases and retrieval augmented generation (RAG), streamlining code generation, translation, and bug fixing.
