Postępy Psychiatrii i Neurologii
eISSN: 2720-5371
ISSN: 1230-2813
Advances in Psychiatry and Neurology/Postępy Psychiatrii i Neurologii
Review article

Advantages and disadvantages of artificial intelligence in the prediction and prevention of suicide

Zuzanna Wątek 1, Kamil Sokołowski 1, Stefan Modzelewski 2, Napoleon Waszkiewicz 2
  1. Medical University of Bialystok, Poland
  2. Department of Psychiatry, Medical University of Bialystok, Poland
Adv Psychiatry Neurol 2026; 35 (1): 52-56
Online publication date: 2026/03/04

INTRODUCTION

Suicide remains among the most complex and dire health, social, and psychological problems of the 21st century. Every year, 1 million people commit suicide because of depression [1, 2]. In the U.S., the number of deaths caused by suicide alone is much higher than the number of deaths from traffic accidents or murders [1, 2]. This imposes a huge financial cost on the U.S. economy, estimated in 2013 at $93.5 billion a year. The greatest attention must be paid to young adults and teenagers, among whom suicide is the third most common cause of death [3].

Technological advancements, and most importantly, the development of artificial intelligence (AI), open new perspectives for the prediction and prevention of suicidal ideation. Thanks to the use of advanced algorithms, it has become possible to improve our understanding of risk factors, identify patients who need the help of a psychiatrist and reduce costs associated with the treatment of suicide attempts and related outcomes [4, 5].

Modern AI models, including natural language processing (NLP), neural networks (NN), and machine learning (ML), can analyze vast amounts of data and information, such as social media posts, medical history, and even conversations with chatbots and other platform users. This gives us the ability to personalize treatment methods, which may be a significant factor in suicide prevention. The integration of AI into systems such as Clinical Decision Support Systems (CDSS) or Electronic Health Systems (EHS) would introduce a novel approach into already existing structures [6].

On the other hand, the development of these technologies raises ethical and practical questions. Problems with data privacy and biased algorithms, as well as the need for enormous computing power, require careful reflection. Consequently, the aim of this study was to discuss the possible moral and technological limitations, as well as the possible applications, of AI in suicide prevention. To this end, we present some key information about AI and how its algorithms work.

ADVANTAGES OF AI

The most relevant reason for AI to be used by doctors to prevent suicide is that 83% of patients have contact with a doctor or other health service in the year before committing suicide, and 45% have had such contact within a month of doing so [7]. However, clinicians find the likelihood of suicide to be unpredictable even among patients met in clinical settings [8, 9].

On the other hand, AI can predict suicide with a high degree of probability, simply because it operates on huge databases which contain documents from different medical specialists and sources [6, 8, 10, 11]. For humans, managing or understanding what is significant in such a maze of information would take a long time, whereas an algorithm needs only a few minutes. Moreover, AI can also analyze sources of information unavailable to doctors, for example social media. Searches can be made on Internet forums such as Reddit, Twitter, and Facebook to find sentences which may contain suicidal or depressive statements, and to determine whether they represent a genuine threat [8]. Furthermore, AI can connect information from social media with medical documents, which increases the probability of accurate prediction.
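The crudest form of such screening can be sketched as keyword matching, although the systems reviewed here go far beyond it by learning context, slang, and sarcasm. The watch-list and posts below are fabricated for illustration only:

```python
import re

# Hypothetical watch-list; deployed systems learn far subtler signals
# and must weigh context, sarcasm, and quoted song lyrics.
RISK_PATTERNS = [
    r"\bwant to die\b",
    r"\bend it all\b",
    r"\bno reason to live\b",
]

def screen_post(text):
    """Return the watch-list patterns matched in a post, if any."""
    return [p for p in RISK_PATTERNS if re.search(p, text.lower())]

posts = [
    "honestly i just want to die lately",    # should be flagged
    "this traffic makes me want to scream",  # should not
]
flagged = [p for p in posts if screen_post(p)]
```

A real pipeline would route flagged posts to a trained model, and ultimately a clinician, rather than acting on keywords alone.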

Another advantage of AI is that it can analyze body movement, studying facial expressions and posture. Based on that information, algorithms can tell whether a patient is depressed or has suicidal tendencies [5].

The same algorithms can analyze real-time conversations to differentiate suicidal from non-suicidal individuals based on spoken language, considering linguistic context, slang or sarcasm, and emotions [6, 12]. Furthermore, AI can analyze written notes and determine whether the writer has suicidal tendencies or is only simulating them [13].

Another advantage of AI is that it is impersonal. Some people are afraid to tell doctors that they are thinking about suicide or are depressed. We can try to reach these patients by using surveys, chatbots or other programs; algorithms can analyze what patients say or write and help them [8].

AI can also imitate human speech, adapting to the way patients talk and eliciting more information from them [9].

The same technology can be used to train doctors. AI which imitates humans can be used to take the role of a patient, which can help doctors develop skills in speaking and managing patients with depression or suicidal tendencies [14].

TYPES OF ALGORITHMS

NLP

NLP comprises techniques used to read, understand, and interpret human language. It has many applications, such as generating and understanding natural speech, summarizing content, speech recognition, and many others. It is used in chatbot-based systems, spam detection, and translation. Although it performs extremely effectively on simple tasks, detecting and processing the full complexity of human language remains challenging; for example, sarcasm and metaphors are often not recognized.
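As a minimal sketch of the bag-of-words machinery underlying many NLP classifiers (not the models used in the studies cited here; the tiny corpus and its labels are fabricated), a naive Bayes text classifier fits in a few lines of Python:

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase whitespace split; real NLP uses richer tokenization,
    # lemmatization, and context-sensitive models.
    return text.lower().split()

def train_naive_bayes(samples):
    """samples: list of (text, label) pairs."""
    counts, priors = {}, Counter()
    for text, label in samples:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(tokenize(text))
    vocab = {w for c in counts.values() for w in c}
    totals = {label: sum(c.values()) for label, c in counts.items()}
    return counts, totals, priors, vocab

def classify(model, text):
    counts, totals, priors, vocab = model
    n = sum(priors.values())
    def logprob(label):
        # log prior plus Laplace-smoothed log likelihoods
        lp = math.log(priors[label] / n)
        for tok in tokenize(text):
            lp += math.log((counts[label][tok] + 1) / (totals[label] + len(vocab)))
        return lp
    return max(priors, key=logprob)

# Fabricated mini-corpus, for illustration only
samples = [
    ("i feel hopeless and alone", "flag"),
    ("nothing matters anymore", "flag"),
    ("great day at the park", "ok"),
    ("enjoyed dinner with friends", "ok"),
]
model = train_naive_bayes(samples)
```

On this toy corpus, `classify(model, "i feel so alone")` lands on the "flag" class; production systems replace the word counts with learned embeddings, but the decision-by-probability skeleton is the same.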

NLP can be applied in detecting and monitoring patients with depression to prevent suicide attempts. To achieve accurate results, continuous monitoring of individuals from high-risk groups has been proposed.

Boamente-type programs (virtual keyboard applications) collect data from messages sent between users of social media platforms, allowing them to identify individuals experiencing suicidal ideation and facilitating detection and access to psychological support, thereby making it easier to prevent suicide attempts. Despite promising results, there is a clear gap in such operations, mainly concerning the use of personal data [6, 15].

Pestian et al. [12] studied two groups, each containing 30 people. The first group included teenagers with suicidal tendencies; the second (control) group comprised healthy people without such tendencies. Video recordings of patients, questionnaires, and interviews were used as data collection tools. The researchers then used NLP to determine whether a patient belonged to the first or second group. The accuracy of the AI in that process was 90%.

In another study, Zhong et al. [16] created an algorithm based on NLP. This software collects data from the clinical notes of pregnant women and predicts whether they have suicidal tendencies. In the next step, the researchers compared the algorithm's outcomes with the predictions of doctors who worked on the same database. It was found that the machine detected 11 times more pregnant women with suicidal tendencies than the humans did.

ML

ML refers to algorithms designed to learn from data in order to analyze it and predict certain outcomes. ML may be used in NLP, scam detection, and even psychiatric health assessment.
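To make the idea concrete, here is a minimal sketch of the kind of supervised model such systems build on: a logistic regression fitted by gradient descent. The feature rows and labels are invented for illustration, not drawn from any study cited here:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Stochastic gradient descent on the log-loss; X rows are feature vectors."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_risk(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Fabricated feature rows: [prior attempt, recent psychiatric admission]
X = [[1, 1], [1, 0], [0, 1], [0, 0]]
y = [1, 1, 0, 0]   # invented labels, for illustration only
w, b = train_logreg(X, y)
```

Real systems use hundreds of features and stronger models (random forests, gradient boosting, neural networks), plus careful validation, but the mechanics of fitting weights to labelled records are the same in spirit.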

A study by Mentis et al. [15] focused on predicting chronic stress with the use of AI and ML. These technologies allowed the detection of PTSD with up to 90% accuracy. A new method, recognized as a subcategory of artificial intelligence, has been proposed: swarm intelligence (SI). Its aim is to provide a holistic perspective on the individual, solve complex problems, and detect signs of stress. A key element of SI is ensuring the privacy of the individual being examined, which is crucial in clinical research.

Table 1

Summary of the research reviewed

| Source | Methodology | Key findings | Strengths | Limitations |
| --- | --- | --- | --- | --- |
| Pestian et al. (2017) [12] | Analysis of interviews with suicidal vs. control adolescents. Used NLP + SVM and clustering techniques. | SVM achieved 96.7% classification accuracy, clustering 90%. NLP effectively differentiated suicidal speech. | High model accuracy, innovative use of natural language, comparison with clinical tools (C-SSRS). | Small sample size, young age group, limited generalizability. |
| Zhong et al. (2018) [16] | Retrospective analysis of 275,843 EMRs using NLP and ICD codes. Manual validation performed. | NLP identified 71% unique suicide-related cases not captured by codes. | Very large sample, manual chart validation, comparison of methods. | Single health system data, complex case labelling. |
| Kessler et al. (2017) [17] | ML on retrospective veteran data. Groups: with vs. without psychiatric hospitalization. | AUCs: 0.72 (hospitalized), 0.61 (non-hospitalized), 0.66 (combined). | Unique veteran population, predictive modelling across subgroups. | Low generalizability, AUC differences suggest heterogeneity in risk profiles. |
| Walsh et al. (2017) [18] | EHR with ICD-9 codes from patients with suicidal ideation; Random Forest assessed over time windows (7-720 days before attempt); analysis included both first and recurrent attempts. | AUC 0.80; predictive accuracy improved closer to event; high accuracy regardless of attempt history. | High predictive performance; dynamic approach (multiple time horizons); inclusion of repeat attempts. | Single-center dataset; no psychosocial data; retrospective design; no real-time implementation. |
| Lee et al. (2019) [19] | EHR from patients with suicidal ideation; compared attempters vs. non-attempters. Used Random Forest based on medical, demographic, and psychosocial variables. | AUC ≈ 0.947; good sensitivity and precision in distinguishing ideators likely to attempt suicide. | Rich EHR dataset; effective classification algorithm. | No external validation; no dynamic or time-series data. |
| Tran et al. (2018) [21] | Applied multiple NN algorithms on EHR data to predict suicide attempts. Cross-validation used. | GBM (Gradient Boosting Machine) had the best AUC = 0.73, outperforming models based on demographics alone. | Algorithm comparison, large EHR database. | No behavioral or text data included; no testing in real-world clinical settings. |

Machine learning was used to predict suicide in a group of veterans during 26 weeks of visits to a health center. The study found an area under the curve (AUC) of 0.72 for those with prior hospitalization for psychiatric problems, 0.61 for those without such hospitalization, and 0.66 when the two samples were combined [17]. Two similar studies also used machine learning in the prediction of suicide. The first, by Walsh et al., achieved an AUC of 0.80 in predicting whether a suicide attempt was likely to occur within the next two years [18]. Another applied algorithms to a group of people with suicidal thoughts; in that research, the AUC was 0.947 and the accuracy 88.9% [19].
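The AUC figures quoted above have a simple probabilistic reading: the chance that a randomly chosen true case receives a higher risk score than a randomly chosen non-case. A minimal sketch, with invented scores rather than data from these studies:

```python
def auc(scores, labels):
    """Probability that a random positive outranks a random negative,
    with ties counting as half: equivalent to the area under the ROC curve."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented risk scores: three true attempters (1) and three non-attempters (0)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
# auc(scores, labels) -> 8/9, i.e. about 0.89
```

An AUC of 0.5 is chance level and 1.0 is perfect ranking, so the 0.61-0.947 range above spans models from weakly to very strongly discriminative, though a high AUC alone does not settle how to act on any individual score.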

NN

NN are data modelling systems patterned on the activity of the human brain. AI is evolving in various directions, from simple reactive systems to theoretical concepts of self-aware machines. Its applications span everyday technologies, industry, healthcare, and finance, where neural networks enable the resolution of increasingly complex problems. The immense advantage of NN is their ability to solve practical problems without prior mathematical formulae or theoretical assumptions. Such networks are sometimes called a “black box” because we cannot fully understand how they work [20].
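To illustrate the point about learning without a prior formula, here is a toy network (not the architecture of any study cited here) that learns XOR, a function no single linear rule can express, purely from examples:

```python
import math
import random

def init_weights(seed=42, hidden=3):
    rng = random.Random(seed)
    return {"h": [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)],
            "o": [rng.uniform(-1, 1) for _ in range(hidden + 1)]}

def forward(w, x):
    """2 inputs -> tanh hidden layer -> sigmoid output."""
    h = [math.tanh(wj[0] * x[0] + wj[1] * x[1] + wj[2]) for wj in w["h"]]
    z = sum(wo * hj for wo, hj in zip(w["o"], h)) + w["o"][-1]
    return 1.0 / (1.0 + math.exp(-z)), h

def train(data, epochs=4000, lr=0.5, seed=42):
    w = init_weights(seed)
    for _ in range(epochs):
        for x, t in data:
            y, h = forward(w, x)
            d_out = y - t  # log-loss gradient at the sigmoid output
            d_h = [d_out * w["o"][j] * (1 - h[j] ** 2) for j in range(len(h))]
            for j in range(len(h)):
                w["o"][j] -= lr * d_out * h[j]
                w["h"][j][0] -= lr * d_h[j] * x[0]
                w["h"][j][1] -= lr * d_h[j] * x[1]
                w["h"][j][2] -= lr * d_h[j]
            w["o"][-1] -= lr * d_out
    return w

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w = train(XOR)
```

After training, the weights encode the solution, but reading a human-interpretable rule out of them is exactly the “black box” difficulty described above.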

In 2018, DelPozo-Banos et al. [21] conducted research in which a neural network was used to evaluate the risk of suicide in patients admitted to hospital for varied reasons. Using hospital data – which contained information about patients, such as general practice contact and hospital admission, diagnosis of mental health issues, injury and poisoning, substance misuse, various forms of abuse, sleep disorders, and the prescription of opiates and psychotropics, drawn from a period of over 5 years – the algorithm predicted whether a patient would go on to commit suicide with 73% accuracy. Later, the algorithm was trained to differentiate between the group in which individuals experienced suicidal ideation and the control group, using the risk factor data mentioned above [21].

PROBLEMS WITH AI USE IN SUICIDE PREVENTION

The use of AI in the prediction and prevention of depressive and anxiety disorders presents both significant opportunities and notable challenges. Even though complex AI models can support the diagnosis of mental health issues, the accurate identification of suicidal ideation remains difficult, and more advanced algorithms are needed for precise analysis.

Another important ethical concern is data security. Data storage must be closely monitored to ensure the confidentiality of personal information is not breached, which is crucial for building trust between patients and new technologies.

According to international law, it is prohibited to discriminate against an individual on the basis of factors such as age, gender, ethnic background, political views, or skin color, and equal access to medical care and diagnosis is a fundamental human right. The databases used in AI systems are human-made, which makes them vulnerable to the unintentional or intentional biases of their creators. Humans as creators are rarely impartial, which can affect the quality and representativeness of the information in AI systems. Such biases can result from multiple factors, such as upbringing and cultural, social, or even economic differences. They can create challenges in identifying certain groups, leading to lower accuracy in predictions of depression and suicide risk and making it harder to reach these populations. The introduction of ethical standards and transparency, as well as the constant monitoring of AI-powered systems, is therefore essential to ensure equality [6].
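One concrete way to monitor this kind of inequality is to compare a model's detection rate across demographic groups; the group names and numbers below are fabricated for illustration:

```python
def per_group_detection(preds, labels, groups):
    """Fraction of true at-risk cases (label 1) the model flags,
    computed separately for each demographic group; large gaps
    between groups are one warning sign of a biased model."""
    rates = {}
    for g in set(groups):
        pos = [i for i, (l, gi) in enumerate(zip(labels, groups))
               if l == 1 and gi == g]
        rates[g] = sum(preds[i] for i in pos) / len(pos) if pos else None
    return rates

# Fabricated audit: all 8 people are truly at risk; the model finds
# 3 of 4 in group A but only 1 of 4 in group B.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
labels = [1] * 8
groups = ["A"] * 4 + ["B"] * 4
```

Reporting such per-group rates, and investigating any gap, is one concrete form of the transparency and constant monitoring the paragraph above calls for.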

Moreover, the use of AI, especially NLP, has its limitations, such as an inability to recognize languages other than those on which it was trained. This leads to the exclusion of groups that do not use the supported languages. Additionally, for a system to work efficiently and accurately, it needs a vast amount of data [6].

Other issues include the maintenance of such systems and their need for significant computing power, which leads to high costs. The implementation of such solutions may require considerable investment and close collaboration between experts in the fields of medicine, technology, finance, and ethics [3].

CONCLUSIONS

As shown above, AI algorithms may play a crucial role as tools for suicide prevention. The integration of information from medical records, social media data, clinical databases, and other sources underscores the potential of AI to shape the future of suicide prediction and prevention.

On the other hand, there are some problems which need to be addressed. It is essential to prepare effective methods for securing databases, because they contain sensitive personal information. Another problem is that risk factors for suicide such as ethnicity, skin color, and socioeconomic status can also be treated as grounds for discrimination. Algorithms could be used to track people on the basis of, for example, skin color, and the data could be put to the wrong purposes. For this reason, sensitive information should only be used for clearly specified purposes, such as suicide prevention, and, even then, it should only be accessible under strict supervision to ensure that these tools are applied responsibly.

To summarize, AI has great potential to help prevent suicide. The advantages of using technology are such that research into algorithms should be continually developed to help upgrade them and solve the potential problems.

Conflict of interest

Absent.

Financial support

Absent.

References

1. https://wisqars.cdc.gov/ (Accessed: 25.01.2025).
2. Rockett IRH, Regier MD, Kapusta ND, Coben JH, Miller TR, Hanzlick RL, et al. Leading causes of unintentional and intentional injury mortality: United States, 2000-2009. Am J Public Health 2012; 102: e84-e92. DOI: 10.2105/AJPH.2012.300960.
3. Barua PD, Vicnesh J, Lih OS, Palmer EE, Yamakawa T, Kobayashi M, Acharya UR. Artificial intelligence assisted tools for the detection of anxiety and depression leading to suicidal ideation in adolescents: a review. Cogn Neurodyn 2022; 18: 1-22.
4. Bernert RA, Hilberg AM, Melia R, Kim JP, Shah NH, Abnousi F. Artificial intelligence and suicide prevention: a systematic review of machine learning investigations. Int J Environ Res Public Health 2020; 17: 5929. DOI: 10.3390/ijerph17165929.
5. Lejeune A, Glaz AL, Perron P, Sebti J, Baca-Garcia E, Walter M, et al. Artificial intelligence and suicide prevention: a systematic review. Eur Psychiatry 2022; 65: e19. DOI: 10.1192/j.eurpsy.2022.8.
6. Arowosegbe A, Oyelade T. Application of Natural Language Processing (NLP) in detecting and preventing suicide ideation: a systematic review. Int J Environ Res Public Health 2023; 20: 1514. DOI: 10.3390/ijerph20021514.
7. Ahmedani BK, Simon GE, Stewart C, Beck A, Waitzfelder BE, Rossom R, et al. Health care contacts in the year before suicide death. J Gen Intern Med 2014; 29: 870-877.
8. D’Hotman D, Loh E. AI enabled suicide prediction tools: a qualitative narrative review. BMJ Health Care Inform 2020; 27: e100175. DOI: 10.1136/bmjhci-2020-100175.
9. Fonseka TM, Bhat V, Kennedy SH. The utility of artificial intelligence in suicide risk prediction and the management of suicidal behaviors. Aust N Z J Psychiatry 2019; 53: 954-964.
10. Hardy RC, Glastonbury K, Onie S, Josifovski N, Theobald A, Larsen ME. Attitudes among the Australian public toward AI and CCTV in suicide prevention research: a mixed methods study. Am Psychologist 2024; 79: 65-78.
11. Li X, Chen F, Ma L. Exploring the potential of artificial intelligence in adolescent suicide prevention: current applications, challenges, and future directions. Psychiatry 2024; 87: 7-20.
12. Pestian JP, Grupp-Phelan J, Cohen KB, Meyers G, Richey LA, Matykiewicz P, Sorter MT. A controlled trial using natural language processing to examine the language of suicidal adolescents in the Emergency Department. Suicide Life Threat Behav 2016; 46: 154-159.
13. Wicentowski R, Sydes MR. Emotion detection in suicide notes using maximum entropy classification. Biomed Inform Insights 2012; 5 (Suppl 1): 51-60.
14. Martínez-Miranda J. Embodied conversational agents for the detection and prevention of suicidal behaviour: current applications and open challenges. J Med Syst 2017; 41: 135. DOI: 10.1007/s10916-017-0784-6.
15. Mentis AFA, Lee D, Roussos P. Applications of artificial intelligence-machine learning for detection of stress: a critical overview. Mol Psychiatry 2024; 29: 1882-1894.
16. Zhong QY, Karlson EW, Gelaye B, Finan S, Avillach P, Smoller JW, et al. Screening pregnant women for suicidal behavior in electronic medical records: diagnostic codes vs. clinical notes processed by natural language processing. BMC Med Inform Decis Mak 2018; 18: 30. DOI: 10.1186/s12911-018-0617-7.
17. Kessler RC, Stein MB, Petukhova MV, Bliese P, Bossarte RM, Bromet EJ, et al.; Army STARRS Collaborators. Predicting suicides after outpatient mental health visits in the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Mol Psychiatry 2017; 22: 544-551.
18. Walsh CG, Ribeiro JD, Franklin JC. Predicting risk of suicide attempts over time through machine learning. Clin Psychol Sci 2017; 5: 457-469.
19. Ryu S, Lee H, Lee DK, Kim W, Kim CE. Detection of suicide attempters among suicide ideators using machine learning. Psychiatry Investig 2019; 16: 588-593.
20. Walavalkar V. Exploring the frontiers of artificial intelligence: advancements, challenges, and future directions. Int J Res Appl Sci Eng Technol 2023; 11. DOI: 10.22214/ijraset.2023.50361.
21. DelPozo-Banos M, John A, Petkov N, Berridge DM, Southern K, Loyd K, et al. Using neural networks with routine health records to identify suicide risk: feasibility study. JMIR Ment Health 2018; 5: e10144. DOI: 10.2196/10144.

This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, which allows third parties to download and share the work, but not for commercial purposes or to create derivative works.
 
© 2026 Termedia Sp. z o.o.