molbal/novelstral-7b
latest · 4.4GB
A text completion model, trained on various novels.
7B · 41 Pulls · Updated 5 months ago

47b42f14e485 · 4.4GB
model · arch llama · parameters 7.24B · quantization Q4_K_M · 4.4GB
params · {"num_ctx":8192,"stop":["</s>"],"temperature":0.8,"top_k":60} · 72B
Readme
Model Card for molbal/novelstral-7b
Short-response text completion model trained on various novels.
Model Details
This is a text completion model, designed to advance a story a few lines at a time. 8k context length.
- Developed by: https://huggingface.co/molbal
- Model type: Mistral 7b fine-tune
- Language(s) (NLP): English only
- License: wtfpl
- Finetuned from model: unsloth/mistral-7b-bnb-4bit
- Notes: This model is distributed in 4-bit quants only, as its primary purpose is experimentation, and 4-bit is what runs well locally on my laptop
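If you run the model through a local Ollama server, a story can be advanced a few lines at a time by posting a prompt to the `/api/generate` endpoint. The sketch below is a minimal example, not part of the model card; it assumes a default Ollama install on `localhost:11434` and reuses the sampling options stored with the model (8k context, `</s>` stop token, temperature 0.8, top_k 60). The helper names `build_request` and `continue_story` are illustrative, not part of any API.

```python
import json
import urllib.request


def build_request(prompt: str) -> dict:
    """Build an Ollama /api/generate payload mirroring the model's
    stored params (num_ctx, stop, temperature, top_k)."""
    return {
        "model": "molbal/novelstral-7b",  # adjust to your local tag
        "prompt": prompt,
        "stream": False,
        "options": {
            "num_ctx": 8192,
            "stop": ["</s>"],
            "temperature": 0.8,
            "top_k": 60,
        },
    }


def continue_story(prompt: str, host: str = "http://localhost:11434") -> str:
    """POST the prompt to a local Ollama server and return the
    generated continuation text."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage: `continue_story("The rain had not stopped for three days. ")` returns the next few lines of the story as plain text. Because this is a raw completion model rather than a chat model, the prompt should be story text to continue, not an instruction.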
Framework versions
- PEFT 0.10.0
- Unsloth for training