---
language:
- en
tags:
- BERTicelli
- text classification
- abusive language
- hate speech
- offensive language
datasets:
- OLID
license: apache-2.0
widget:
- text: "If Jamie Oliver fucks with my £3 meal deals at Tesco I'll kill the cunt."
  example_title: "Example 1"
- text: "Keep up the good hard work."
  example_title: "Example 2"
- text: "That's not hair. Those were polyester fibers because Yoda is (or was) a puppet."
  example_title: "Example 3"
---

[Mona Allaert](https://github.com/MonaDT) •
[Leonardo Grotti](https://github.com/corvusMidnight) •
[Patrick Quick](https://github.com/patrickquick)

## Model description

BERTicelli is an English offensive-language classification model obtained by fine-tuning the [English BERT base cased model](https://github.com/google-research/bert) on the training data of the [Offensive Language Identification Dataset (OLID)](https://scholar.harvard.edu/malmasi/olid).

This model was developed for the NLP Shared Task in the Digital Text Analysis program at the University of Antwerp (2021–2022).
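
If the model is published on the Hugging Face Hub, it can be loaded with the standard `transformers` text-classification pipeline. The repository id below is a placeholder, not a confirmed model id:

```python
from transformers import pipeline

# NOTE: "your-org/BERTicelli" is a placeholder repository id;
# replace it with the actual Hub id once the model is published.
classifier = pipeline("text-classification", model="your-org/BERTicelli")

# Returns a list of {"label": ..., "score": ...} dicts, one per input.
print(classifier("Keep up the good hard work."))
```

The widget examples in the metadata above can be used as quick sanity checks for the deployed model.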