Embedding models trained on very large sentence-level datasets.
embedding · 22m · 33m
199K Pulls · Updated 7 months ago

model: archbert · 1b226e2802db · 46MB · parameters 22.6M · quantization F16
params: { "num_ctx": 256 }
license: Apache License, Version 2.0, January 2004
Readme
Note: this model requires Ollama 0.1.26 or later. It can only be used to generate embeddings.
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective.
Usage
REST API
curl http://localhost:11434/api/embeddings -d '{
"model": "all-minilm",
"prompt": "The sky is blue because of Rayleigh scattering"
}'
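The endpoint responds with a JSON object whose single `embedding` field holds the vector as a list of floats. A minimal sketch of parsing that response in Python (the body below is a hard-coded, truncated sample so it runs without a server; a real all-minilm embedding has many more dimensions):

```python
import json

# Shape of the /api/embeddings response (vector truncated for illustration).
sample_response = '{"embedding": [0.1, -0.2, 0.3]}'

vector = json.loads(sample_response)["embedding"]
print(len(vector))  # number of dimensions in this truncated sample
```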
Python library
ollama.embeddings(model='all-minilm', prompt='The sky is blue because of Rayleigh scattering')
JavaScript library
ollama.embeddings({ model: 'all-minilm', prompt: 'The sky is blue because of Rayleigh scattering' })
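Each of these calls returns a plain list of floats, so comparing two texts reduces to a vector similarity measure such as cosine similarity. A minimal sketch in Python (the vectors here are short illustrative stand-ins, not real model output):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-ins for embeddings of two semantically related prompts.
v1 = [0.2, 0.1, 0.9]
v2 = [0.3, 0.0, 0.8]
print(round(cosine_similarity(v1, v2), 3))
```

Values close to 1.0 indicate similar texts; to rank documents against a query, embed each document once and compare the query embedding against all of them.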