Implicit-Explicit Hate Speech Corpus (IE-HSC)
This dataset accompanies the paper "Causality Guided Representation Learning for Cross-Style Hate Speech Detection" (TheWebConf/WWW 2026).
IE-HSC contains hateful, harassing, and otherwise offensive language, including identity-based attacks and slurs. This content may be distressing and can cause harm if misused.
Use this dataset responsibly. Follow your organization's ethics and safety policies, minimize exposure to raw text, and avoid redistributing offensive examples unless necessary for research.
Dataset Summary
Implicit-Explicit Hate Speech Corpus (IE-HSC) is a dataset of implicit/explicit style hate speech samples from multiple sources. It is designed for research on hate speech detection, especially cross-style generalization between implicit and explicit hate speech.
IE-HSC is based on four open-sourced datasets:
- AbuseEval (Zampieri, Marcos, et al. 2019; Caselli, Tommaso, et al. 2020)
- DynaHate (Vidgen, Bertie, et al. 2021)
- Implicit-Hate-Corpus (ElSherief, Mai, et al. 2021)
- IsHate (Ocampo, Nicolás Benjamín, et al. 2023)
Dataset Access
IE-HSC is intended for academic, non-commercial research use only. It is distributed as a gated dataset on the Hugging Face Hub; request access on the dataset page.
Dataset Overview
| Dataset | Split | Total Count | Hate Ratio | Style Ratio |
|---|---|---|---|---|
| AbuseEval | explicit_test | 589 | 98.1% | 17.8% |
| AbuseEval | explicit_train | 2,353 | 98.8% | 17.4% |
| AbuseEval | implicit_test | 2,060 | 14.0% | 0.5% |
| AbuseEval | implicit_train | 8,238 | 14.7% | 0.6% |
| DynaHate | explicit_test | 2,206 | 65.6% | 100.0% |
| DynaHate | explicit_train | 8,823 | 64.1% | 100.0% |
| DynaHate | implicit_test | 6,021 | 49.6% | 0.0% |
| DynaHate | implicit_train | 24,084 | 50.2% | 0.0% |
| Implicit-Hate-Corpus | explicit_test | 301 | 70.8% | 64.5% |
| Implicit-Hate-Corpus | explicit_train | 1,201 | 72.8% | 64.9% |
| Implicit-Hate-Corpus | implicit_test | 4,028 | 35.5% | 3.5% |
| Implicit-Hate-Corpus | implicit_train | 16,110 | 36.2% | 4.0% |
| IsHate | explicit_test | 2,732 | 86.3% | 51.5% |
| IsHate | explicit_train | 10,928 | 86.0% | 52.8% |
| IsHate | implicit_test | 9,673 | 67.4% | 0.6% |
| IsHate | implicit_train | 38,689 | 67.2% | 0.6% |
- Hate ratio: fraction of examples with `hate_label = 1`.
- Style ratio: fraction of examples with `style = 1` (derived from `avg`, see below).
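The table's two ratio columns can be recomputed directly from a split. A minimal sketch, using a handful of invented records rather than the actual data:

```python
# Recompute "hate ratio" and "style ratio" for a split.
# The records below are invented for illustration only.
records = [
    {"hate_label": 1, "style": 1},
    {"hate_label": 1, "style": 0},
    {"hate_label": 0, "style": 0},
    {"hate_label": 0, "style": 0},
]

hate_ratio = sum(r["hate_label"] for r in records) / len(records)   # fraction with hate_label = 1
style_ratio = sum(r["style"] for r in records) / len(records)       # fraction with style = 1
```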
Data Structure
Each example includes:
- `text_id`: ID for each text sample (string)
- `text`: input text content (string)
- `hate_label`: binary hate label (0 = non-hate, 1 = hate)
- `avg`: average Perspective API toxicity score (float, 0.0-1.0)
- `style`: binary toxicity derived from `avg` (0 = non-toxic, 1 = toxic)
- `true_style`: style from file naming (0 = implicit, 1 = explicit)
- `target`: target demographic group (string)
- `target_conf`: confidence score for the target annotation (float, 0.0-1.0)
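The schema above can be illustrated with one record. The field values below are invented for demonstration, not drawn from the dataset:

```python
# A hypothetical IE-HSC record matching the documented schema
# (all values are invented for illustration).
example = {
    "text_id": "sample_00042",
    "text": "example text content",
    "hate_label": 1,
    "avg": 0.12,
    "style": 0,
    "true_style": 0,
    "target": "immigrants",
    "target_conf": 0.8,
}

def validate(record):
    """Check a record against the documented field types and ranges."""
    assert isinstance(record["text_id"], str)
    assert isinstance(record["text"], str)
    assert record["hate_label"] in (0, 1)
    assert 0.0 <= record["avg"] <= 1.0
    assert record["style"] in (0, 1)
    assert record["true_style"] in (0, 1)
    assert isinstance(record["target"], str)
    assert 0.0 <= record["target_conf"] <= 1.0
    return True
```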
Provenance and Processing
Label standardization
- Columns `label`, `hateful_layer`, or `hate_label` are normalized to `hate_label`.
How `avg` is computed
- Perspective API scores for `TOXICITY`, `SEVERE_TOXICITY`, `IDENTITY_ATTACK`, `INSULT`, `PROFANITY`, and `THREAT` are averaged per example and stored as `avg`.
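The averaging step can be sketched as follows; the attribute names come from the list above, while the score values are invented (in practice they would come from Perspective API responses):

```python
# Average the six Perspective API attribute scores into `avg`.
ATTRIBUTES = ["TOXICITY", "SEVERE_TOXICITY", "IDENTITY_ATTACK",
              "INSULT", "PROFANITY", "THREAT"]

def compute_avg(scores):
    """Average the six Perspective attribute scores for one example."""
    return sum(scores[a] for a in ATTRIBUTES) / len(ATTRIBUTES)

# Invented scores for illustration.
scores = {"TOXICITY": 0.9, "SEVERE_TOXICITY": 0.6, "IDENTITY_ATTACK": 0.7,
          "INSULT": 0.8, "PROFANITY": 0.5, "THREAT": 0.3}
avg = compute_avg(scores)  # ≈ 0.633
```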
Toxicity and style
- If `avg` is missing, it defaults to 0.0.
- `avg >= 0.4` ⇒ `style = 1` (toxic/explicit); `avg < 0.4` ⇒ `style = 0` (non-toxic/implicit).
- `true_style` is taken strictly from the dataset file name (`explicit_*` ⇒ 1, `implicit_*` ⇒ 0).
- For DynaHate, `style` equals `true_style`.
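The rules above amount to a small decision function. A minimal sketch, using the 0.4 threshold and the DynaHate special case as documented:

```python
def derive_style(avg, dataset=None, true_style=None):
    """Derive the binary `style` field per the card's rules."""
    if dataset == "DynaHate":      # DynaHate: style equals true_style
        return true_style
    if avg is None:                # missing avg defaults to 0.0
        avg = 0.0
    return 1 if avg >= 0.4 else 0  # toxic/explicit vs. non-toxic/implicit
```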
Intended Use
IE-HSC is intended for research and educational use only (non-commercial), such as:
- Studying implicit vs. explicit hate speech
- Training and evaluating hate/toxicity classifiers
- Benchmarking robustness, bias mitigation, and safety interventions
Out-of-Scope / Misuse
- Using the data to generate, amplify, or target hateful content
- Deploying models trained on IE-HSC in user-facing or high-stakes settings without additional validation, safeguards, and human oversight
- Treating `target` labels or toxicity scores as ground truth about individuals or communities
Ethical Considerations
- Labels and target annotations may be noisy and reflect biases in source datasets and automated scoring tools.
- If you share examples (papers, demos, model cards), include content warnings and avoid quoting slurs or targeting language unless strictly necessary.
License
IE-HSC is licensed under the CC BY-NC-SA 4.0 license. Because IE-HSC is derived from multiple sources, please also review and comply with the licenses and terms of the underlying datasets.
Citation
If you use IE-HSC in your research, please cite:
@article{zhao2025causality,
title={Causality Guided Representation Learning for Cross-Style Hate Speech Detection},
author={Zhao, Chengshuai and Wan, Shu and Sheth, Paras and Patwa, Karan and Candan, K Sel{\c{c}}uk and Liu, Huan},
journal={arXiv preprint arXiv:2510.07707},
year={2025}
}
@dataset{cadet_dataset_2026,
  title={cadet-datasets},
  author={Zhao, Chengshuai and Wan, Shu and Sheth, Paras and Patwa, Karan and Candan, K. Sel{\c{c}}uk and Liu, Huan},
  year={2026},
  publisher={Hugging Face},
  doi={10.57967/HF/7615},
  url={https://huggingface.co/datasets/Shuwan/cadet-datasets}
}
@software{shu_2026_18331976,
  author = {Shu},
  title = {Shu-Wan/cadet: Initial release (v1.0.0)},
  month = jan,
  year = 2026,
  publisher = {Zenodo},
  version = {v1.0.0},
  doi = {10.5281/zenodo.18331976},
  url = {https://doi.org/10.5281/zenodo.18331976},
  swhid = {swh:1:dir:ceb3fc11806a24207cb0e43fd92b9b6f106f4cc2;origin=https://doi.org/10.5281/zenodo.18331975;visit=swh:1:snp:402265cabd7f46dfbeeadf2a29ed9ed9a17348a8;anchor=swh:1:rel:014aced8c618a22ad8ffd659e14f8c8e29e11440;path=Shu-Wan-cadet-678b13c}
}