llama-3.1-8b-instruct-cn-openended-kr0.05-a0.05-creative

This is a CreativityNeuro (CN) modified version of meta-llama/Llama-3.1-8B-Instruct.

Model Details

  • Base Model: meta-llama/Llama-3.1-8B-Instruct
  • Modification: CreativityNeuro weight scaling
  • Prompt Set: openended
  • Keep Ratio: 0.05 (top 5.0% of task-specific weights)
  • Alpha: 0.05 (scaling strength)
  • Mode: creative
  • Model Size: 8B parameters (BF16, safetensors)

What is CreativityNeuro?

CreativityNeuro identifies task-specific neurons using Wanda-style importance scoring (weight magnitude × input activation norm) and selectively upscales the weights associated with creative thinking. The modification formula is:

W_new = W × (1 + α × mask)

where mask is a binary mask selecting the weights that are important for creative tasks but not for routine/associative tasks; the keep ratio determines how many weights the mask retains (the top 5% for this model).
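
For intuition, here is a minimal sketch of how such a mask could be built and applied. Wanda scores a weight by |W| times the norm of its input activations; the function name, argument names, and random stand-in data below are illustrative assumptions, not the released implementation.

import torch

def creativity_scale(W, creative_score, routine_score, keep_ratio=0.05, alpha=0.05):
    # Weights that matter for creative prompts but not for routine ones.
    diff = creative_score - routine_score
    # Keep the top keep_ratio fraction of weights by score difference.
    k = max(1, int(keep_ratio * diff.numel()))
    threshold = diff.flatten().topk(k).values.min()
    mask = (diff >= threshold).to(W.dtype)
    # W_new = W * (1 + alpha * mask)
    return W * (1 + alpha * mask)

# Random stand-ins: Wanda scores a weight as |W| * ||X||_2, where ||X||_2
# is the norm of the matching input activation over a prompt set.
W = torch.randn(1024, 1024)
creative_score = W.abs() * torch.rand(1024)  # per-input-channel norms, creative prompts
routine_score = W.abs() * torch.rand(1024)   # per-input-channel norms, routine prompts
W_new = creativity_scale(W, creative_score, routine_score)

Because the mask is binary and alpha is small (0.05 here), each selected weight is nudged up by 5% while all other weights are left untouched.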

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("priorcomputers/llama-3.1-8b-instruct-cn-openended-kr0.05-a0.05-creative")
tokenizer = AutoTokenizer.from_pretrained("priorcomputers/llama-3.1-8b-instruct-cn-openended-kr0.05-a0.05-creative")

# Use like any other causal LM; a complete generation example follows below
outputs = model.generate(...)
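
A complete, end-to-end example (the prompt and sampling settings below are illustrative choices, not prescribed by this card):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "priorcomputers/llama-3.1-8b-instruct-cn-openended-kr0.05-a0.05-creative"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Load in BF16 to match the stored tensor type
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a four-line poem about entropy."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.9)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))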

Citation

If you use this model, please cite:

@misc{creativityneuro2025,
  title={CreativityNeuro: Mechanistic Interpretability for LLM Creativity},
  author={Prior Computers},
  year={2025},
  url={https://huggingface.co/priorcomputers}
}