Integrated feedback:
We have integrated an ultra-quick feedback loop within the tool to help improve our detection algorithm.
Detecting method of suicide:
Reporting the details of how a suicide occurred can have a very dangerous "contagion effect". To prevent this, the tool detects such details (i.e., reporting about how the death happened), except where the text describes circumstances around mental illness. For now we detect the most common methods of suicide. Our algorithm is heavily dependent on how the phrasing is structured: it may highlight content by mistake when the content is not written in a common, simple way. We are currently working on an NLP model to improve detection accuracy.
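The phrase-based detection described above can be sketched roughly as follows. This is a minimal illustration, not the tool's implementation; the term list is a placeholder (the actual method-related phrases are deliberately not reproduced here):

```python
import re

# Placeholder term list -- stand-ins that only illustrate the mechanism.
# A real list would be hand-curated and maintained by domain experts.
METHOD_TERMS = ["<method-term-1>", "<method-term-2>"]

def find_method_details(text):
    """Return sorted (start, end) spans of method-related phrases to highlight."""
    spans = []
    for term in METHOD_TERMS:
        for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            spans.append((m.start(), m.end()))
    return sorted(spans)
```

Because this relies on exact phrase matches, unusual wording slips through, which matches the limitation noted above.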
Detecting language and stigmatization:
The tool accurately detects almost all cases of the wrongful association of suicide with a crime. It detects most vocabulary around "committing suicide" (a reference to a crime) and around the notion of "attempting" suicide (a reference to a choice). This is the most frequently triggered rule in articles reporting on suicide.
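A sketch of this vocabulary check, assuming a simple regular expression over inflections of "commit" and "attempt" (the tool's actual pattern set is likely broader):

```python
import re

# Matches inflections of "commit"/"attempt" followed by "suicide",
# e.g. "committed suicide", "attempts suicide".
STIGMA_RE = re.compile(
    r"\b(?:commit(?:s|ted|ting)?|attempt(?:s|ed|ing)?)\s+suicide\b",
    re.IGNORECASE,
)

def flag_stigmatizing_language(text):
    """Return each stigmatizing phrase found, for highlighting."""
    return [m.group(0) for m in STIGMA_RE.finditer(text)]
```

A matched phrase would then be highlighted with the rule's suggested alternative (e.g. "died by suicide").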
Detecting Blame:
Detecting causality and blame is a very complex NLP task. After analyzing the data, we split the problem into Explicit Blame and Implicit Blame. The tool successfully detects most cases of Explicit Blame in proximity to the word "suicide". It highlights Blame as wrongful when the stated root cause is not related to mental illness (e.g., losing money, unemployment). We are actively working on models to detect Implicit Blame.
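The Explicit Blame check could look roughly like the proximity test below. This is an assumed sketch: the cause list, and the size of the proximity window, are illustrative values, not the tool's actual configuration:

```python
import re

# Illustrative non-mental-illness causes that would count as Explicit Blame.
EXPLICIT_CAUSES = ["losing money", "unemployment"]
WINDOW = 100  # characters of context around "suicide" -- assumed value

def find_explicit_blame(text):
    """Return cause phrases found within WINDOW chars of the word 'suicide'."""
    text_low = text.lower()
    hits = []
    for m in re.finditer(r"\bsuicide\b", text_low):
        lo, hi = max(0, m.start() - WINDOW), m.end() + WINDOW
        window = text_low[lo:hi]
        for cause in EXPLICIT_CAUSES:
            if cause in window and cause not in hits:
                hits.append(cause)
    return hits
```

Implicit Blame (causality stated without a trigger phrase) cannot be caught this way, which is why the section above notes that separate models are in progress.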
Detecting resources and education:
The tool looks for a reference to a suicide support helpline phone number or to known support organizations. Such a reference is now common in articles; when present, the tool highlights it as good practice, for positive reinforcement. If the reference is missing, the tool alerts the user and lets them copy the helpline number from the rule description.
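This check can be sketched as a simple presence test. The patterns below are illustrative assumptions (988 is the US Suicide & Crisis Lifeline number); the tool presumably maintains its own list of helplines and organizations:

```python
import re

# Illustrative resource patterns -- a real list would be maintained
# per region and kept up to date.
HELPLINE_PATTERNS = [
    r"\b988\b",                          # US Suicide & Crisis Lifeline
    r"suicide\s+prevention\s+lifeline",  # organization name
]

def has_support_resource(text):
    """True if the article references a known helpline or support organization."""
    return any(re.search(p, text, re.IGNORECASE) for p in HELPLINE_PATTERNS)
```

When this returns False, the tool would surface the alert described above and offer the helpline number for the writer to paste in.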