
Commit 8594097

docs: add basic installation instructions
1 parent cfbe660

1 file changed: +59 -2 lines

README.md

Lines changed: 59 additions & 2 deletions
@@ -2,7 +2,64 @@
A simple PoC (Proof of Concept) of a hate-speech (toxic content) detector API server, using a model from [detoxify](https://github.com/unitaryai/detoxify). Detoxify's unbiased model achieves a score of 93.74%, compared to the top leaderboard score of 94.73%, in [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification).

## Requirements

Python 3.9 or 3.10 is required to run the app. There is a known [bug/issue](https://github.com/unitaryai/detoxify/issues/94) affecting the detoxify library on Python 3.11 or higher.
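You can confirm which interpreter version is active before installing anything:

```
python3 --version
```
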
## Getting Started

You can start by cloning this repository to run or modify it locally:
```
git clone https://github.com/atrifat/hate-speech-detector-api
cd hate-speech-detector-api
```
Create a virtual environment using venv, pyenv, or conda. This is an example using venv to create and activate the environment:
```
python3 -m venv venv
source venv/bin/activate
```
Install its dependencies:
```
pip install -U -r requirements.txt
```
and run it using the following command:
```
python3 app.py
```
You can also copy `.env.example` to a `.env` file and change the environment values based on your needs before running the app.
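For example, on a POSIX shell:

```
cp .env.example .env
```
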
If you want to test the API server, you can use GUI tools like [Postman](https://www.postman.com/) or use curl:
```
curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"api_key":"your_own_api_key_if_you_set_them", "q":"hello world good morning"}' \
  http://localhost:7860/predict
```
The classification result will be shown as follows, with each label scored from 0.0 to 1.0 (example using the unbiased-small model):
```
{
  "identity_attack": 0.0,
  "insult": 0.0,
  "obscene": 0.0,
  "severe_toxicity": 0.0,
  "sexual_explicit": 0.0,
  "threat": 0.0,
  "toxicity": 0.0010000000474974513
}
```
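As a convenience, you can pipe the response through [jq](https://jqlang.github.io/jq/) to pull out a single score. A minimal sketch, assuming jq is installed and no API key is configured:

```
# Query the API and print only the overall toxicity score
curl -s --header "Content-Type: application/json" \
  --request POST \
  --data '{"q":"hello world good morning"}' \
  http://localhost:7860/predict | jq .toxicity
```
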
## License

MIT License

@@ -26,6 +83,6 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

## Author

Rif'at Ahdi Ramadhani (atrifat)
