Shield Social Media users from harmful messages
You can try it out here: https://starfish-app-ls8qg.ondigitalocean.app/
Social media has plenty of positives and negatives. Among the most prominent negatives are harmful messages that add nothing constructive to the conversation or the initial post.
We want to let users hide the messages that are considered harmful on their social media, starting with the replies to their most recent tweet.
We use OpenAI's New-and-Improved Content Moderation Tooling to predict how harmful messages are, based on the following categories:
- hate
- hate/threatening
- self-harm
- sexual
- sexual/minors
- violence
- violence/graphic
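Scoring a reply against those categories boils down to one call to OpenAI's moderation endpoint. A minimal TypeScript sketch (not the project's actual code: `classifyReply` and `maxCategoryScore` are illustrative names, and an `OPENAI_API_KEY` environment variable is assumed):

```typescript
type ModerationScores = Record<string, number>;

// Highest score across all categories — a simple overall "harm" measure.
export function maxCategoryScore(scores: ModerationScores): number {
  return Math.max(...Object.values(scores));
}

// Classify one reply's text with OpenAI's moderation endpoint.
export async function classifyReply(text: string): Promise<ModerationScores> {
  const res = await fetch("https://api.openai.com/v1/moderations", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ input: text }),
  });
  if (!res.ok) throw new Error(`OpenAI moderation failed: ${res.status}`);
  const data = await res.json();
  // category_scores maps each category (hate, violence, ...) to a 0..1 score.
  return data.results[0].category_scores as ModerationScores;
}
```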
The app takes two inputs:
- A Twitter username
- The user's tolerance for viewing potentially harmful replies to a tweet
It then uses Twitter's V2 API to:
- Get a user by their username
- Return their most recent tweet (not a reply or retweet)
- Get up to 50 replies to the tweet
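Those three steps map roughly onto Twitter v2 endpoints as in the sketch below, assuming a bearer token in a hypothetical `BEARER` environment variable and Node 18+'s global `fetch` (function names are illustrative, not the project's actual code):

```typescript
const BASE = "https://api.twitter.com/2";

// Thin wrapper over Twitter v2 GET endpoints with a bearer token.
async function twitterGet(path: string, params: Record<string, string>) {
  const url = `${BASE}${path}?${new URLSearchParams(params)}`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.BEARER}` },
  });
  if (!res.ok) throw new Error(`Twitter API error: ${res.status}`);
  return res.json();
}

// Recent-search query matching replies in a tweet's conversation.
export function repliesQuery(conversationId: string): string {
  return `conversation_id:${conversationId} is:reply`;
}

export async function latestTweetAndReplies(username: string) {
  const user = await twitterGet(`/users/by/username/${username}`, {});
  // Exclude replies and retweets so the first result is an original tweet.
  const timeline = await twitterGet(`/users/${user.data.id}/tweets`, {
    exclude: "replies,retweets",
    max_results: "5",
  });
  const tweet = timeline.data[0];
  const replies = await twitterGet("/tweets/search/recent", {
    query: repliesQuery(tweet.id),
    max_results: "50",
  });
  return { tweet, replies: replies.data ?? [] };
}
```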
It uses OpenAI's moderation endpoint to:
- Classify the content of each reply to the tweet
- Hide a reply if its content moderation classification is above the tolerance level provided by the user
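The hide-or-show decision itself is a pure comparison of category scores against the tolerance. A sketch of that rule (names and the exact comparison are illustrative, not the project's actual code):

```typescript
type Scores = Record<string, number>;

// A reply is hidden when any category score exceeds the user's tolerance
// (tolerance near 0 hides almost everything; near 1 shows everything).
export function shouldHide(scores: Scores, tolerance: number): boolean {
  return Object.values(scores).some((score) => score > tolerance);
}

// Keep only the replies the user is willing to see.
export function visibleReplies<T extends { scores: Scores }>(
  replies: T[],
  tolerance: number,
): T[] {
  return replies.filter((r) => !shouldHide(r.scores, tolerance));
}
```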
Built with:
- NodeJS / TypeScript on the backend (tested on Node 14+)
- Basic HTML5/CSS/JS on the frontend
Possible improvements:
- Use a modern JS framework for the frontend
- Use a modern UI library for the frontend
- HTTP error handling for external services
- Decouple the controller logic for username searching and root-tweet finding from the single endpoint (into multiple endpoints)
- Add pagination to Twitter's API calls
- Add more unit and integration tests on the backend
- Make the threshold slider work after the backend API call, not only at the time of the request
- Containerize the app (with Docker)
- Add CI/CD
- Cache the API responses
- Allow users to search by a specific tweet and not only the latest
- ... much more
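One of the improvements listed above is caching the API responses. A minimal in-memory TTL cache sketch of what that could look like (illustrative, not the project's code):

```typescript
// Tiny time-to-live cache: entries expire ttlMs milliseconds after insertion.
export class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      // Entry has expired — drop it and report a miss.
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```

Keying such a cache by username (or tweet id) would avoid re-hitting Twitter's rate-limited endpoints on repeated searches.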
To run the app locally:
- Clone the repo
- Run `npm install` to install the dependencies
- Run `npm run start` to start the application