Llama-3.1-8B-Instruct-Creative
This is a CreativityNeuro (CN)-modified version of meta-llama/Llama-3.1-8B-Instruct.

CreativityNeuro identifies task-specific neurons using Wanda-style importance scoring and selectively upscales the weights associated with creative thinking. The modification formula is:

W_new = W × (1 + α × mask)

where `mask` marks weights that are important for creative tasks but not for routine/associative tasks.
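The card does not spell out how the mask is constructed; the sketch below is a hypothetical reading, assuming Wanda's per-weight score |W_ij| · ||X_j||_2 (weight magnitude times calibration-activation norm) and a mask that keeps the top fraction (`keep_ratio`, cf. the `kr0.2` in the model name) of the creative-minus-routine importance difference. The helper names and calibration inputs are illustrative, not the authors' code:

```python
import numpy as np

def wanda_importance(weight, act_norm):
    # Wanda-style score per weight: |W_ij| * ||X_j||_2,
    # where act_norm[j] is the L2 norm of input feature j
    # over a calibration set.
    return np.abs(weight) * act_norm[None, :]

def creative_mask(weight, creative_norm, routine_norm, keep_ratio=0.2):
    # Hypothetical mask construction: mark the top `keep_ratio`
    # fraction of weights by (creative - routine) importance.
    diff = (wanda_importance(weight, creative_norm)
            - wanda_importance(weight, routine_norm))
    threshold = np.quantile(diff, 1.0 - keep_ratio)
    return (diff >= threshold).astype(weight.dtype)

def apply_cn(weight, mask, alpha=0.05):
    # The card's modification formula: W_new = W * (1 + alpha * mask)
    return weight * (1.0 + alpha * mask)

# Toy example: masked entries are scaled by (1 + alpha), others unchanged.
w = np.array([[1.0, 2.0], [3.0, 4.0]])
mask = creative_mask(w, np.array([2.0, 1.0]), np.array([1.0, 1.0]),
                     keep_ratio=0.25)
w_new = apply_cn(w, mask, alpha=0.05)
```

With `alpha=0.05` the selected weights grow by only 5%, so the intervention is a gentle rescaling rather than a structural edit of the network.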
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "priorcomputers/llama-3.1-8b-instruct-cn-openended-kr0.2-a0.05-creative"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Use like any other model
inputs = tokenizer("Write a short story about a clockmaker.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
If you use this model, please cite:

```bibtex
@misc{creativityneuro2025,
  title={CreativityNeuro: Mechanistic Interpretability for LLM Creativity},
  author={Prior Computers},
  year={2025},
  url={https://huggingface.co/priorcomputers}
}
```