INTRODUCTION
Suicide remains among the most complex and dire health, social, and psychological problems of the 21st century. Every year, about one million people die by suicide, with depression a leading contributing factor [1, 2]. In the U.S., suicide alone causes far more deaths than traffic accidents or homicides [1, 2]. The resulting financial cost to the U.S. economy was estimated in 2013 at $93.5 billion a year. Particular attention must be paid to young adults and teenagers, among whom suicide is the third most common cause of death [3].
Technological advancements, and most importantly, the development of artificial intelligence (AI), open new perspectives for the prediction and prevention of suicidal ideation. Thanks to the use of advanced algorithms, it has become possible to improve our understanding of risk factors, identify patients who need the help of a psychiatrist and reduce costs associated with the treatment of suicide attempts and related outcomes [4, 5].
Modern AI models, including natural language processing (NLP), neural networks (NN), and machine learning (ML), can analyze vast amounts of data and information, such as social media posts, medical history, and even conversations with chatbots and other platform users. This gives us the ability to personalize treatment methods, which may be a significant factor in suicide prevention. Integrating AI into systems such as Clinical Decision Support Systems (CDSS) or Electronic Health Systems (EHS) would introduce a novel approach into already existing structures [6].
On the other hand, the development of these technologies raises ethical and practical questions. Problems with data privacy and biased algorithms, as well as the need for enormous computing power, require careful reflection. Consequently, the aim of this study was to discuss the possible moral and technological limitations, as well as the possible applications, of AI in suicide prevention. To this end, we present some key information about AI and explain how the algorithms work.
ADVANTAGES OF AI
The strongest argument for doctors to use AI in suicide prevention is that 83% of patients have contact with a doctor or other health service in the year before dying by suicide, and 45% have had such contact within a month of doing so [7]. However, clinicians find the likelihood of a suicide to be unpredictable even among patients met in clinical settings [8, 9].
On the other hand, AI can predict suicide with a high degree of probability, simply because it operates on huge databases which contain documents from different medical specialists or sources [6, 8, 10, 11]. For humans to manage or understand what is significant in such a maze of information would take a great deal of time, whereas an algorithm needs only a few minutes. Moreover, it can also analyze sources of information unavailable to doctors, for example social media. Searches can be made on platforms such as Reddit, Twitter, and Facebook to find sentences which may contain suicidal or depressed statements and determine whether they represent a genuine threat [8]. Furthermore, AI can connect information from social media with medical documents, which increases the probability of accurate prediction.
Another advantage of AI is that it can analyze body movement, including facial expressions and posture. Based on that information, algorithms can tell whether a patient is depressed or has suicidal tendencies [5].
The same algorithms can analyze real-time conversations to differentiate suicidal from non-suicidal individuals based on spoken language, considering language context, slang or sarcasm, and emotions [6, 12]. Furthermore, AI can analyze written notes and say if the writer has suicidal tendencies or is only simulating them [13].
Another advantage of AI is that it is impersonal. Some people are afraid to tell doctors that they are thinking about suicide or are depressed. We can try to reach these patients by using surveys, chatbots or other programs; algorithms can analyze what patients say or write and help them [8].
AI can also simulate humans by imitating human speech. It can adapt to the way patients talk and elicit more information from them [9].
The same technology can be used to train doctors. AI which imitates humans can be used to take the role of a patient, which can help doctors develop skills in speaking and managing patients with depression or suicidal tendencies [14].
TYPES OF ALGORITHMS
NLP
NLP refers to algorithms used to read, understand, and interpret human language. It has many applications, such as generating and understanding natural speech, text summarization, speech recognition, and many others. It is used in chatbot systems, in spam detection, and in translation. Although it performs extremely effectively in simple tasks, detecting and processing the complexity of human language remains challenging; for example, sarcasm and metaphors are not recognized.
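The basic idea of NLP-based screening can be sketched in a few lines. The lexicon, threshold, and function names below are purely illustrative assumptions, not part of any deployed system; real tools use trained language models rather than keyword matching, precisely because keyword matching cannot handle sarcasm or metaphor.

```python
# Minimal sketch of an NLP screening step: tokenize free text and score it
# against a small, hypothetical lexicon of risk-related terms.
# Production systems use trained language models; this only illustrates the idea.
import re

RISK_LEXICON = {"hopeless", "worthless", "suicide", "goodbye"}  # illustrative only

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def risk_score(text: str) -> float:
    """Fraction of tokens that match the risk lexicon (0.0-1.0)."""
    tokens = tokenize(text)
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in RISK_LEXICON)
    return hits / len(tokens)

def flag_post(text: str, threshold: float = 0.1) -> bool:
    """Flag a post for human review if its risk score crosses the threshold."""
    return risk_score(text) >= threshold
```

In this toy form, `flag_post("I feel hopeless and worthless")` is flagged while neutral text is not; the real difficulty, as noted above, lies in the cases a lexicon cannot capture.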
NLP can be applied in detecting and monitoring patients with depression to prevent suicide attempts. To achieve accurate results, continuous monitoring of individuals from high-risk groups has been proposed.
Boamente-type programs (virtual keyboard applications) collect data from messages sent between users of social media platforms, allowing them to identify individuals experiencing suicidal ideation and facilitating detection and access to psychological support, thereby making it easier to prevent suicide attempts. Despite promising results, such operations raise clear concerns, mainly regarding the use of personal data [6, 15].
Pestian et al. [12] studied two groups, each containing 30 people. The first group included teenagers with suicidal tendencies. The second (control) group comprised healthy people without such tendencies. Video recordings of patients, questionnaires, and interviews were used as data collection tools. The researchers then used NLP to determine whether a patient belonged to the first or second group. The accuracy of the AI in that process was 90%.
In another study, Zhong et al. [16] created an algorithm based on NLP. The software collects data from the clinical notes of pregnant women and predicts whether they have suicidal tendencies. In the next step, the researchers compared the algorithm's outcomes with the predictions of doctors who worked on the same database. It was found that the machines detected 11 times more pregnant women with suicidal tendencies than the humans did.
ML
ML comprises algorithms designed to learn from data in order to analyze it and predict certain outcomes. ML may be used in NLP, in scam detection, and even in psychiatric health assessment.
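As a minimal illustration of the supervised-learning idea behind such systems, the sketch below fits a nearest-centroid classifier to synthetic two-dimensional feature vectors. All data, labels, and function names are invented for this example; clinical models use far richer features and more sophisticated learners.

```python
# A minimal sketch of supervised ML: a nearest-centroid classifier on
# toy 2-D feature vectors. All data here is synthetic and illustrative.
import math

def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fit(training_data):
    """training_data: {label: [feature vectors]} -> {label: class centroid}."""
    return {label: centroid(pts) for label, pts in training_data.items()}

def predict(model, x):
    """Assign x the label of the nearest class centroid."""
    return min(model, key=lambda label: euclidean(model[label], x))

# Hypothetical training set with two invented risk classes:
model = fit({"low_risk": [(0.0, 0.0), (1.0, 1.0)],
             "high_risk": [(5.0, 5.0), (6.0, 6.0)]})
```

Here `predict(model, (5.5, 5.0))` lands nearer the "high_risk" centroid; the point of the sketch is only that the model's decision rule is learned from labeled examples rather than hand-written.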
A study by Mentiss et al. [15] focused on predicting chronic stress with the use of AI and ML. The use of these technologies allowed them to detect PTSD with up to 90% accuracy. A new method, recognized as a subcategory of artificial intelligence, has been proposed: swarm intelligence (SI). Its aim is to provide a holistic perspective on the individual, solve complex problems, and detect signs of stress. A key element of SI is ensuring the privacy of the individual being examined, which is crucial in clinical research.
Table 1
Summary of the research reviewed
Machine learning was used to predict suicide in a group of veterans within 26 weeks of visits to a health center. The study found an area under the curve (AUC) of 0.72 for those with a prior hospitalization for psychiatric problems, 0.61 for those without a hospitalization, and 0.66 when the two samples were combined [17]. Two similar studies also used machine learning in the prediction of suicide. The first was a Welsh study in which the AUC reached 0.80 in predicting whether a suicide attempt was likely to occur within the next two years [18]. The other applied algorithms to a group of people with suicidal thoughts; in that research, the AUC was 0.947 and the accuracy 88.9% [19].
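The AUC values reported above have a simple interpretation: the probability that a randomly chosen positive case (a patient who went on to attempt suicide) receives a higher risk score than a randomly chosen negative case, with ties counting as one half. A minimal computational sketch, using synthetic labels and scores, is:

```python
# Sketch of how an AUC like those reported above is computed.
# Labels and scores below are synthetic, for illustration only.

def auc(labels, scores):
    """labels: 1 = event occurred, 0 = it did not; scores: model risk scores.
    Returns the probability that a random positive outranks a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0   # positive correctly ranked above negative
            elif p == n:
                wins += 0.5   # ties count as half
    return wins / (len(pos) * len(neg))
```

On this reading, an AUC of 0.5 is chance-level ranking and 1.0 is perfect ranking, which is why values of 0.61-0.72 represent modest but real predictive signal while 0.947 is very strong.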
NN
NN are data-modelling systems patterned on the activity of the human brain. AI is evolving in various directions, from simple reactive systems to theoretical concepts of self-aware machines. Its applications span everyday technologies, industry, healthcare, and finance, fields in which neural networks enable the resolution of increasingly complex problems. An immense advantage of NN is their ability to solve practical problems without prior mathematical formulae or theoretical assumptions. Such networks are sometimes called a "black box" because we cannot fully understand how they work [20].
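The "black box" computation itself is mechanically simple: each layer takes weighted sums of its inputs and passes them through a nonlinear activation. The sketch below shows a forward pass through one hidden layer with fixed, invented weights; in a real clinical model these weights would be learned from patient data, and it is the learned values, not the arithmetic, that resist interpretation.

```python
# Minimal sketch of a feed-forward neural network forward pass:
# one hidden layer, sigmoid activations, fixed illustrative weights.
import math

def sigmoid(x):
    """Squash any real number into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: weighted sums followed by sigmoid activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(features):
    """Map a two-element feature vector to a single risk-like output in (0, 1).
    All weights here are invented for illustration, not learned."""
    hidden = layer(features, weights=[[0.5, -0.3], [0.8, 0.2]], biases=[0.0, -0.1])
    out = layer(hidden, weights=[[1.2, -0.7]], biases=[0.1])
    return out[0]
```

Because the output always lies in (0, 1), it can be read as a score to be thresholded, exactly the kind of score the AUC studies above evaluate.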
In 2018, Del Pozo-Banos et al. [21] conducted research in which a neural network was used to analyze information to evaluate the risk of suicide in patients who were admitted to hospitals for varied reasons. Using hospital data – which contained information about patients, such as general practice contact and hospital admission, diagnosis of mental health issues, injury and poisoning, substance misuse, various forms of abuse, sleep disorders, and the prescription of opiates and psychotropics, drawn from a period of over 5 years – the algorithm predicted whether a patient would go on to die by suicide with a 73% level of accuracy. Later, the algorithm was trained to differentiate between the group of individuals who experienced suicidal ideation and the control group, using the risk factor data mentioned above [21].
PROBLEMS WITH AI USE IN SUICIDE PREVENTION
The use of AI in the prediction and prevention of depressive and anxiety disorders presents both significant opportunities and notable challenges. Even though complex AI models can diagnose mental health issues, the accurate identification of suicidal ideation remains difficult. To achieve precise analysis, more advanced algorithms are needed.
Another important ethical concern is data security. Data storage must be closely monitored to ensure the confidentiality of personal information is not breached, which is crucial for building trust between patients and new technologies.
International law prohibits discrimination against an individual on the basis of factors such as age, gender, ethnic background, political views, or skin color, and recognizes equal access to medical care and diagnosis as a fundamental human right. Databases used in AI systems are human-made, which makes them vulnerable to the unintentional or intentional biases of their creators. Humans as creators are rarely impartial, which can affect the quality and representativeness of the information in AI systems. Such biases can result from multiple factors, such as upbringing and cultural, social, or even economic differences. These biases can create challenges in identifying certain groups, leading to lower accuracy in predictions of depression and suicide risk and making it harder to reach these populations. The introduction of ethical standards and transparency, as well as the constant monitoring of AI-powered systems, is therefore essential to ensure equality [6].
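One concrete form such monitoring can take is a routine audit of model accuracy broken down by demographic group, so that the lower accuracy for under-represented populations described above becomes visible rather than hidden in an aggregate figure. A minimal sketch, with entirely synthetic records and group labels, is:

```python
# A simple audit sketch: compare a model's accuracy across demographic
# groups to surface the kind of bias described above. Data is synthetic.

def group_accuracy(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns {group: fraction of correct predictions}."""
    totals, correct = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0) + 1
        if truth == pred:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Two hypothetical groups: the model is noticeably less accurate for "B".
records = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
           ("B", 1, 0), ("B", 0, 0)]
```

A gap between the per-group figures is not proof of unfairness on its own, but it is the kind of measurable signal that ethical standards and transparency requirements can act on.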
Moreover, the use of AI, especially in NLP, has its limitations, such as an inability to recognize languages other than those on which it was trained. This leads to the exclusion of groups that do not use the language programmed. Additionally, for a system to work efficiently and accurately it needs to have a vast amount of data [6].
Other issues include the maintenance of such systems and their need for a significant amount of computing power, both of which lead to high costs. Implementing such solutions may require considerable investment and close collaboration between experts in the fields of medicine, technology, finance, and ethics [3].
CONCLUSIONS
As shown above, AI algorithms may play a crucial role as tools for suicide prevention. The integration of information from medical records, social media data, clinical databases, and other sources underscores the potential of AI to shape the future of suicide prediction and prevention.
On the other hand, there are some problems which need to be addressed. It is absolutely essential to prepare effective methods for securing databases, because they contain sensitive personal information. Another problem is that risk factors for suicide such as ethnicity, skin color, and socioeconomic status can also become grounds for discrimination. Algorithms could be used to track people on the basis of, for example, skin color, and the data could be misused. For this reason, sensitive information should only be used for clearly specified purposes, such as suicide prevention, and, even then, it should be accessible only under strict supervision to ensure that these tools are applied responsibly.
To summarize, AI has great potential to help prevent suicide. The advantages of using the technology are such that research into these algorithms should continue, so as to improve them and resolve the remaining problems.