CSH scientist Hannah Metzler will give her webtalk on November 5, 2021 at 3PM (CET) via Zoom.
If you would like to attend, please send an email to email@example.com.
Full title: “Machine Learning for media effects research on suicide: Detecting potentially harmful and protective content in social media postings”
Research has repeatedly shown that exposure to suicide-related news media content is associated with suicide rates, with some content characteristics likely having harmful and others potentially protective effects. Although good evidence exists for a few selected characteristics, such as reporting on celebrity deaths by suicide, systematic and large-scale investigations of many other characteristics are missing. Moreover, the growing importance of social media, particularly among young adults, calls for studies on the effects of content posted on these platforms. This study used natural language processing and machine learning methods to automatically label large quantities of social media data according to characteristics considered important for media effects research on suicide.
We manually labelled 3,200 English tweets using a novel annotation scheme, which differentiates postings based on their type of topic, underlying problem- vs. solution-focused narrative, and serious vs. non-serious/metaphorical use of suicide-related terms. After splitting this dataset into a training and a test set, we trained different machine learning models, including a more traditional approach (TF-IDF) as well as two state-of-the-art deep learning models (BERT, XLNet), on several classification tasks. Most importantly, we classified postings into six content categories that might differentially affect suicidal behavior: personal stories of either suicidality or coping (i.e., Papageno-related tweets), general messages intending to spread either awareness or prevention-related information, reports of suicide cases (i.e., Werther-related tweets), and other suicide-related or off-topic tweets. In a further task, we separated postings that refer to actual suicide from those that use suicide-related terms in a metaphorical, sarcastic, or otherwise irrelevant way.
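The traditional TF-IDF baseline mentioned above can be sketched in a few lines of scikit-learn. The abstract names TF-IDF features but not the downstream classifier, so the logistic regression, the toy tweets, and the category names below are illustrative assumptions, not the study's actual setup or data.

```python
# Minimal sketch of a TF-IDF baseline for classifying postings into
# content categories. Classifier choice and data are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical mini-dataset: posting text plus an invented category label.
texts = [
    "He shared how therapy helped him cope after his attempt",
    "Call the helpline if you or someone you know needs support",
    "News report: local man died by suicide last night",
    "This homework makes me want to die lol",
]
labels = ["coping", "prevention", "suicide_case", "off_topic"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(texts, labels)

# Predict a category for an unseen posting.
prediction = model.predict(["A hotline number everyone should save"])[0]
print(prediction)
```

In a real pipeline the deep learning models (BERT, XLNet) would replace the TF-IDF step with fine-tuned transformer encoders, but the train/predict interface stays conceptually the same.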
In both tasks, the two deep learning models performed similarly, and much better than the traditional approach. When classifying the six content types, the BERT model correctly classified 74% of tweets in the test set, and F1-scores ranged from 55% to 85% across the categories of interest (above 70% for all but the suicidality category). Furthermore, BERT correctly labelled 88.5% of tweets as about vs. not about suicide in the test set, achieving F1-scores of 92% and 73% for the two categories. These classification performances are comparable to the state of the art on similar tasks and demonstrate the potential of machine learning for media effects research. By making data labelling more efficient, this work will enable future large-scale investigations of the harmful and protective effects of different characteristics of suicide-related content on suicide rates and help-seeking behavior.
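The per-category F1-scores reported above can be computed with scikit-learn's `f1_score` using `average=None`, which returns one score per class. The labels below are invented stand-ins to show the mechanics, not the study's actual test-set predictions.

```python
# Sketch of per-category F1 computation for the binary
# "about suicide vs. not" task; data here is invented.
from sklearn.metrics import f1_score

y_true = ["about", "about", "about", "not_about", "not_about", "about"]
y_pred = ["about", "about", "not_about", "not_about", "about", "about"]

# average=None yields one F1-score per category, in the order given.
scores = f1_score(y_true, y_pred, labels=["about", "not_about"], average=None)
print(dict(zip(["about", "not_about"], scores)))  # → F1 of 0.75 and 0.5
```

Reporting F1 per category, rather than overall accuracy alone, matters here because the classes are imbalanced: a model could score high accuracy simply by favoring the majority class.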