Intel/DeepSeek-V2-Lite-Chat-BF16-FP8-STATIC-FP8-QKV-TEST-ONLY
Tags: Safetensors · deepseek_v2 · custom_code · compressed-tensors
Branch: main · 16.2 GB · 2 contributors · 2 commits
Latest commit: 26f4d38 ("init model.") by sys-lpot-val, 2 days ago
File                              Size       Storage  Last commit     Updated
.gitattributes                    1.52 kB             initial commit  2 days ago
chat_template.jinja               459 Bytes           init model.     2 days ago
config.json                       3.15 kB             init model.     2 days ago
configuration_deepseek.bk.py      10.3 kB             init model.     2 days ago
configuration_deepseek.py         10.3 kB             init model.     2 days ago
generation_config.json            181 Bytes           init model.     2 days ago
model-00001-of-00004.safetensors  5 GB       xet      init model.     2 days ago
model-00002-of-00004.safetensors  5 GB       xet      init model.     2 days ago
model-00003-of-00004.safetensors  5 GB       xet      init model.     2 days ago
model-00004-of-00004.safetensors  1.15 GB    xet      init model.     2 days ago
model.safetensors.index.json      1.47 MB             init model.     2 days ago
modeling_deepseek.py              78.7 kB             init model.     2 days ago
quantization_config.json          1.43 kB             init model.     2 days ago
special_tokens_map.json           482 Bytes           init model.     2 days ago
tokenization_deepseek_fast.py     1.37 kB             init model.     2 days ago
tokenizer.json                    7.5 MB              init model.     2 days ago
tokenizer_config.json             887 Bytes           init model.     2 days ago
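
The repo is tagged custom_code and compressed-tensors and ships its own configuration_deepseek.py / modeling_deepseek.py alongside a quantization_config.json, so loading it goes through trust_remote_code plus a compressed-tensors-aware runtime. The following is a minimal loading sketch, assuming a recent transformers release with the compressed-tensors package installed; the repo name is marked TEST-ONLY, so end-to-end FP8 inference is not guaranteed here, and vLLM's compressed-tensors support may be the more typical serving route for this kind of checkpoint.

from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Intel/DeepSeek-V2-Lite-Chat-BF16-FP8-STATIC-FP8-QKV-TEST-ONLY"

# trust_remote_code is required because the checkpoint bundles its own
# DeepSeek-V2 configuration/modeling code (custom_code tag above).
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    device_map="auto",  # spread the ~16 GB of weights across available devices
)

# The chat template shipped as chat_template.jinja is applied here.
messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))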