24 September 2024
Bilbao Campus
The American magazine IEEE Spectrum, a publication of the Institute of Electrical and Electronics Engineers (IEEE), ran an interview on Wednesday 18 September with Rubén Sánchez Corcuera, professor and researcher at the Faculty of Engineering of Deusto, on the occasion of the publication in the IEEE Xplore database of an article derived from his PhD dissertation, ‘From Forensic to Preventive: Language Agnostic Approach to Preventive Detection of Malicious Users’. The dissertation, supervised by Deusto researcher Aitor Almeida and Queen Mary University of London professor Arkaitz Zubiaga, aimed to develop an algorithm capable of predicting malicious behaviour on X (Twitter) and reducing the impact of hate and disinformation campaigns on the social network.
In the IEEE Spectrum interview, Rubén Sánchez Corcuera explains that the team built their prediction model on an existing model known as JODIE (Jointly Optimizing Dynamics and Interactions for Embeddings), which forecasts users' future interactions on social networks. The three researchers extended the JODIE model with additional machine learning components so that it could predict whether a user would behave maliciously in the future.
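As a rough illustration of what such an extension might look like, the sketch below outlines, in PyTorch, a JODIE-style recurrent update of a user's dynamic embedding with an added classification head that scores future malicious behaviour. The class name, dimensions and the head itself are assumptions made for illustration, not the researchers' actual implementation.

```python
# Illustrative sketch only: a minimal JODIE-style embedding update plus a
# hypothetical head for scoring future malicious behaviour. Names, sizes and
# the objective are assumptions, not the authors' published code.
import torch
import torch.nn as nn

class UserEmbeddingUpdater(nn.Module):
    """Updates a user's dynamic embedding after each interaction (JODIE-like)."""
    def __init__(self, embed_dim: int = 128, feat_dim: int = 32):
        super().__init__()
        # A GRU cell stands in for JODIE's recurrent update operator.
        self.update = nn.GRUCell(input_size=feat_dim, hidden_size=embed_dim)
        # Hypothetical addition: a head that scores future malicious behaviour.
        self.malicious_head = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, user_embed: torch.Tensor, interaction_feat: torch.Tensor):
        new_embed = self.update(interaction_feat, user_embed)
        risk_logit = self.malicious_head(new_embed).squeeze(-1)
        return new_embed, risk_logit


if __name__ == "__main__":
    model = UserEmbeddingUpdater()
    user_embed = torch.zeros(4, 128)   # 4 users, initial embeddings
    feats = torch.randn(4, 32)         # features of their latest interactions
    user_embed, risk = model(user_embed, feats)
    print(torch.sigmoid(risk))         # per-user probability of future malicious activity
```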
This predictive model was then tested on three datasets: 936 X accounts linked to the People's Republic of China, 1,666 accounts associated with the Iranian government, and 1,152 accounts dedicated to political propaganda backed by the Russian government. On the Iranian dataset, for example, the new model was able to identify up to 75% of the users who would go on to engage in malicious behaviour after analysing just 40% of the data, and it performed 40% better than a comparable state-of-the-art model. Such models for detecting malicious behaviour may be particularly useful for text-based social networks such as X, although platforms such as TikTok, which are based on multimedia content, may require a different approach.
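To make the kind of early-detection measurement described above concrete, the hypothetical snippet below computes the share of eventually malicious users that a model flags after seeing only the first fraction of their activity. The function name, data structures and threshold are illustrative assumptions, not details taken from the published study.

```python
# Illustrative early-detection metric: recall over malicious users when only
# the first `data_fraction` of each user's timeline is available.
from typing import Dict, List

def early_detection_recall(
    scores_over_time: Dict[str, List[float]],  # risk score after each interaction, per user
    malicious_users: set,
    data_fraction: float = 0.4,
    threshold: float = 0.5,
) -> float:
    """Fraction of malicious users already flagged within the first part of their timeline."""
    detected = 0
    for user in malicious_users:
        scores = scores_over_time[user]
        cutoff = max(1, int(len(scores) * data_fraction))
        if max(scores[:cutoff]) >= threshold:
            detected += 1
    return detected / len(malicious_users) if malicious_users else 0.0


if __name__ == "__main__":
    scores = {"a": [0.6, 0.8, 0.9], "b": [0.1, 0.2, 0.9]}
    # With only 40% of each timeline, one of the two malicious users is flagged.
    print(early_detection_recall(scores, {"a", "b"}))  # 0.5
```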
Sánchez Corcuera concludes that models such as the one he and his colleagues have developed can help prevent malicious activity on social networks, reduce the impact of hate campaigns and protect users, contributing to their psychological well-being and to a more positive experience in online spaces.