Toxic Comment Classification
Building a multi-headed model capable of detecting different types of toxicity, such as threats, obscenity, insults, and identity-based hate. Discussing things you care about can be difficult. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions. Platforms struggle to efficiently facilitate conversations, leading many communities to limit or completely shut down user comments. So far, a range of publicly available models is served through the Perspective API, including toxicity. But the current models still make errors, and they don't allow users to select which type of toxicity they're interested in finding.
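The "multi-headed" setup described above amounts to multi-label classification: one binary output per toxicity type, so users can filter on just the categories they care about. A minimal sketch of that idea, using a TF-IDF pipeline with one logistic-regression head per label via scikit-learn's one-vs-rest wrapper; the toy comments, targets, and all hyperparameters here are illustrative assumptions, not the paper's actual data or configuration:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# The six label "heads" mirror the toxicity categories named above.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Tiny illustrative dataset: one 0/1 target column per toxicity type.
texts = [
    "you are a wonderful person",
    "I will hurt you, watch your back",
    "what a stupid, disgusting idiot",
    "thanks for sharing this helpful answer",
    "shut up you moron",
    "have a great day everyone",
]
y = np.array([
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
])

# One logistic-regression head per label, trained one-vs-rest over
# shared TF-IDF features (unigrams and bigrams).
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(texts, y)

# Per-label probabilities let a user choose which toxicity types to surface.
probs = model.predict_proba(["you are an idiot"])[0]
for label, p in zip(LABELS, probs):
    print(f"{label}: {p:.2f}")
```

Because each head is an independent binary classifier, a comment can trigger several labels at once (e.g. both obscene and insult), which single-label toxicity scores cannot express.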
Keyword: toxic comment classification
Volume-3 | Issue-4 | pp. 24-27
Pallam Ravi | Hari Narayana Batta | Greeshma S | Shaik Yaseen