Medycyna Paliatywna
eISSN: 2081-2833
ISSN: 2081-0016
Medycyna Paliatywna/Palliative Medicine
3/2025
vol. 17
 
Review article

Bereavement support with the use of artificial intelligence – a quasi-systematic review

Jaśmina Joanna Bork-Zalewska 1
  1. Medical University of Gdansk, Gdansk, Poland
Medycyna Paliatywna 2025; 17(3): 130–137
Online publication date: 2025/09/11

Introduction

International and regional standards in palliative care include providing support for caregivers of terminally ill patients not only during the course of the illness but also throughout the bereavement process. Definitions of the International Association for Hospice and Palliative Care (IAHPC), the European Association for Palliative Care (EAPC), and organisational standards for specialist palliative care for adult patients (OSSPCAP) in Poland all contain references to providing help for bereaved loved ones [1-3]. Since bereavement support is an integral part of palliative care, recognised in the literature as a form of family support, it seems valuable to explore and develop the most effective methods of providing such assistance [4, 5]. This is particularly important in light of the risk that the grieving process may not conclude naturally but instead develop into unhealthy, prolonged symptoms that could finally evolve into prolonged grief disorder (PGD). An estimated 7-10% of bereaved adults will experience such unrelenting grief responses, which exceed cultural norms and result in functional impairment [6].
A wide range of support options is available for individuals in mourning, some of which employ cutting-edge solutions to achieve the best possible outcomes. One such technology is rapidly advancing artificial intelligence, which has already found applications in palliative medicine [7]. Artificial intelligence (AI) encompasses the creation of computer systems capable of performing tasks traditionally requiring human intelligence, such as problem-solving and learning. Machine learning (ML), a branch of AI, centres on systems that analyse data, optimise model parameters to achieve accurate approximations, and use these insights to infer new information. Neural networks, computational models widely used in ML, are inspired by the structure and functioning of the human brain and consist of interconnected layers of nodes (neurons) that process and transmit information; they identify patterns, make predictions, and solve complex problems by learning from data [8]. The latest advancements in this field have the potential to become valuable diagnostic, psychoeducational, and therapeutic tools, much as AI is already widely utilised in radiology and pathology [9]. This review sought to examine the literature for studies explicitly employing artificial intelligence to improve bereavement support in palliative care practice. Its goal was to provide a resource for healthcare professionals interested in integrating AI into their clinical work and palliative care research, while also highlighting significant advancements and problems that future research should address.

Methods

A quasi-systematic review was conducted, incorporating certain elements of a systematic review, such as pre-established selection criteria. However, it does not include a critical appraisal of study quality. The articles chosen were mostly original research focused on the application of AI in the broad process of grief, considering various perspectives. Additionally, one publication was a book chapter. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were not fully adhered to [10]. The literature was sourced from the electronic databases PubMed and Scopus.
Search strategy and study selection
A search strategy was implemented using the following Medical Subject Headings (MeSH) terms and keywords in titles or abstracts: artificial intelligence/AI/machine learning/ML/neural network/NN, bereavement/grief/mourning. The operator ‘AND’ was used to combine these terms (Fig. 1).
The searches were guided by pre-established selection criteria, restricting them to (i) original research articles, books, or case studies; (ii) focused on the use of artificial intelligence tools in the grieving process; and (iii) published in English. After duplicates were removed, the titles and abstracts were reviewed and then assessed based on their content. Given the recent rapid advancement of artificial intelligence technology, a date restriction was applied, limiting the selection to articles published within the last decade, specifically from 2014 to 2024.

Results

From the quasi-systematic review of literature, 12 publications published between 2021 and 2024 were thoroughly analysed (Table 1) [11-22]. Five publications contained data from real people experiencing bereavement [11-14, 16], one article was based on a fictitious statistical Brazilian bereaved population [15], and the remaining 6 studies focused on the assessment of AI application in bereavement in a more general and philosophical sense [17-22]. The most frequent function of AI-based tools was interactive dialogue utilising conversational agents and deepfake technologies (n = 6); the others were prediction models (n = 3), identification of factors correlating with grief severity (n = 1), clustering of grief and major depressive disorder (MDD) (n = 1), and assignment of bereaved individuals to adequate therapy (n = 1).
Explainable artificial intelligence and machine learning
Explainable AI refers to artificial intelligence systems designed to provide human-understandable insights into how they make decisions or predictions. Unlike traditional “black box” AI models, which often lack transparency, explainable AI aims to make the inner workings of AI algorithms more interpretable, allowing users to trust, scrutinise, and effectively interact with these systems. It is critical in domains like healthcare, where understanding AI decisions is as important as the decisions themselves.
In the field of machine learning, a subset of AI, 2 primary approaches are employed: supervised and unsupervised learning, which differ based on the type of training data utilised. Supervised learning relies on datasets that include both inputs and corresponding outputs, while unsupervised learning processes only the input data without labelled outcomes.
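The distinction can be made concrete with a minimal scikit-learn illustration; the toy data and model choices below are purely illustrative and not drawn from any of the reviewed studies:

```python
# Supervised vs. unsupervised learning on tiny, made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: inputs X come paired with labelled outputs y.
X = np.array([[0.0], [0.1], [0.2], [0.9], [1.0], [1.1]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression().fit(X, y)
supervised_pred = clf.predict([[0.05], [1.05]])  # labels for new inputs

# Unsupervised: the same inputs without labels; structure is inferred.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
cluster_labels = km.labels_  # group membership discovered from data alone
```

In the supervised case the model learns the input-output mapping from labelled examples; in the unsupervised case it can only group similar inputs together.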
Efficiency metrics used in the assessment of AI systems
The area under the receiver operating characteristic (ROC) curve is a statistical and machine learning tool for assessing classification performance by plotting the true positive rate (sensitivity) against the false positive rate (1 − specificity) across different threshold values. For models producing continuous output, thresholds are used to differentiate between classes. The area under the ROC curve (AUC ROC) serves as an indicator of a model’s ability to distinguish between classes; higher AUC values represent better classification performance. An AUC of 1 signifies a perfect classifier, while 0.5 indicates random guessing. AUC values below 0.5 imply that a model performs worse than random chance and is thus unsuitable for practical use. AUC ROC is widely recognised as a robust measure of diagnostic test validity, also useful in determining cost/benefit outcomes.
The F1 score is another metric frequently used to evaluate models. It is calculated as the harmonic mean of precision and the true positive rate (recall), offering a balanced assessment of a model’s accuracy and reliability.
Additionally, a more traditional statistical metric, the squared Pearson correlation coefficient (denoted as r2), is used. It quantifies the proportion of variance in one variable that is explained by its linear relationship with another variable, and it is derived from the Pearson correlation coefficient (r), which measures the strength and direction of the linear association between 2 variables.
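As a purely illustrative sketch (the numbers below are invented, not taken from any of the reviewed studies), the three metrics can be computed with scikit-learn and SciPy:

```python
# Computing AUC ROC, F1 score, and squared Pearson correlation on toy data.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score
from scipy.stats import pearsonr

# AUC ROC: continuous scores against binary ground truth.
y_true = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.1, 0.7, 0.35, 0.8, 0.6, 0.9])
auc = roc_auc_score(y_true, scores)  # 8/9: one negative outranks one positive

# F1 score: harmonic mean of precision and recall on thresholded predictions.
y_pred = (scores >= 0.5).astype(int)
f1 = f1_score(y_true, y_pred)

# r^2: squared Pearson correlation between two continuous variables.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_cont = np.array([2.1, 3.9, 6.2, 8.0, 9.8])
r, _ = pearsonr(x, y_cont)
r_squared = r ** 2  # proportion of variance explained by the linear fit
```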
Application: Monitoring of prolonged grief disorder and depression after bereavement
Three studies investigated the potential use of AI in PGD or depression prediction and monitoring. She et al. [11] developed a web-based system called “grief inquiries following tragedy” (GIFT), a tool based on a machine learning model trained to classify PGD and inform individual users about the risk factors that could contribute to the development of this condition. Their best-performing model achieved an acceptable AUC of 0.77 (F1 score = 0.73, accuracy = 93.3%) in detecting PGD.
Cherblanc et al. [12] explored key factors determining PGD symptoms during the COVID-19 pandemic, finding 8 variables strongly associated with a possible PGD diagnosis: unexpected causes of death, living alone, seeking professional support, taking medication for anxiety and/or depression, utilising more grief services, employing confrontation-oriented coping strategies, and experiencing higher levels of depression and anxiety. The machine learning algorithm named CatBoost was the best predictive model in measuring PGD symptoms with the Traumatic Grief Inventory-Self Report (TGI-SR) questionnaire, achieving a squared Pearson correlation coefficient (r2) of 0.6479.
Another approach was taken by Schultebraucks et al. [13], who analysed 4 distinct trajectories of depressive symptoms following a major life stressor such as bereavement, using multiple polygenic scores (PGSs) as data for supervised machine learning. PGSs combine genome-wide genetic influences into a single score that represents an individual’s genetic predisposition to a specific trait or disorder, and they have been demonstrated to account for the risk of various psychiatric conditions. Deep neural networks categorised these 4 trajectories with high discriminatory accuracy: multiclass micro-average AUC of 0.88 (95% confidence interval (CI): 0.87-0.89); multiclass macro-average AUC of 0.86 (95% CI: 0.85-0.87). This demonstrated that AI models based on PGS data can identify protective genomic factors conferring resilience after bereavement.
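The micro- and macro-averaged multiclass AUC values reported in such studies can be illustrated on a small hypothetical 4-class example; the score matrix below is invented, and only the computation mirrors the metric definitions:

```python
# Micro- vs. macro-averaged AUC for a 4-class problem (toy data).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

y_true = np.array([0, 1, 2, 3, 0, 1, 2, 3])
# Per-class probability scores; each row sums to 1. The later rows are
# deliberately noisier, so neither average reaches a perfect 1.0.
y_score = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.7],
    [0.4, 0.3, 0.2, 0.1],
    [0.3, 0.4, 0.2, 0.1],
    [0.1, 0.2, 0.3, 0.4],
    [0.1, 0.2, 0.3, 0.4],
])

# Macro average: compute one-vs-rest AUC per class, then average equally.
macro_auc = roc_auc_score(y_true, y_score, multi_class='ovr', average='macro')

# Micro average: binarise the labels and pool all class/sample pairs.
y_bin = label_binarize(y_true, classes=[0, 1, 2, 3])
micro_auc = roc_auc_score(y_bin.ravel(), y_score.ravel())
```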
Application: Determining factors associated with the intensity of grief and grouping individuals
Uchimura et al. [14] examined the importance of worry and secondary stressors for grief severity, using random forest regression modelling within a machine learning framework. Random forest regression is an ensemble algorithm in which many decision trees are trained, each on a randomly drawn subset of participants and predictors, and their outputs are combined during the training phase; a predictor’s importance is judged by its contribution to minimising mean squared error and optimising model performance. The final model is then validated by examining its performance on the remaining, held-out portion of the total sample. The study resulted in a model predicting grief severity, with dependency, worry, secondary stress, insomnia, pandemic-related stress, and satisfaction with the relationship with the deceased as the key factors.
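A hedged sketch of this kind of analysis with scikit-learn, on synthetic data; the predictor names and effect sizes are invented for illustration and do not reproduce the study’s dataset:

```python
# Random forest regression with a held-out validation split.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
# Three hypothetical predictors (e.g. worry, secondary stress, insomnia).
X = rng.normal(size=(n, 3))
# Synthetic "grief severity", driven mainly by the first predictor.
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.3, size=n)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

val_r2 = model.score(X_val, y_val)        # performance on held-out data
importances = model.feature_importances_  # relative importance of predictors
```

Here `feature_importances_` plays the role of ranking predictors of grief severity, and the held-out split guards against overfitting.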
Loula et al. [15], in their mathematical approach, clustered a virtual population to achieve a deeper understanding of the coexistence of bereavement and MDD. Their results indicate that both neural networks and human physicians have difficulty differentiating grief from MDD. Nevertheless, machine learning proves to be a useful auxiliary tool in diagnostics, especially when diagnostic guidelines are blurred for professionals.
Application: A personalised assignment to treatment
Argyriou et al. [16] in their study introduced a personalised treatment rule to optimise assignment for veterans between 2 evidence-based psychotherapies for grief: behavioural activation and therapeutic exposure (BATE-G) and cognitive therapy for grief (CT-G). This bespoke approach is based on machine learning methods. The authors reframe the decision problem as a weighted classification problem and solve it using support vector machines [23] within a statistical causal inference framework [24].
The effectiveness of the rule was evaluated by estimating the expected mean Inventory of Complicated Grief (ICG) [25] score achieved when participants are assigned according to the rule, compared with the mean ICG achieved when all participants are assigned to either BATE-G or CT-G regardless of individual characteristics. The mean ICG score under the estimated personalised treatment rule was 13.14 (95% CI: 1.43-24.84), markedly better than the 31.70 (95% CI: 25.18-38.23) and 34.02 (95% CI: 26.81-41.24) obtained when everyone was assigned to BATE-G or CT-G, respectively.
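The weighted-classification idea can be sketched as follows; the data, the two-treatment setup, and the weighting scheme below are simplified hypothetical stand-ins, not the authors’ actual estimator:

```python
# Personalised treatment rule as weighted classification with an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2))  # baseline characteristics of individuals
# Synthetic benefit of treatment A over treatment B for each individual.
benefit = X[:, 0] + rng.normal(scale=0.2, size=n)
best_treatment = (benefit > 0).astype(int)  # 1 = A better, 0 = B better
weights = np.abs(benefit)  # errors matter more when |benefit| is large

rule = SVC(kernel='linear')
rule.fit(X, best_treatment, sample_weight=weights)

# The learned rule assigns treatment A mainly when the first feature is high.
pred = rule.predict([[2.0, 0.0], [-2.0, 0.0]])
```

The sample weights encode how costly a wrong assignment is for each individual, which is the core of recasting treatment selection as weighted classification.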
Application: Interactions with deepfake and conversational agents
Six articles focused on the application of AI not in prediction or clustering models, but in the novel phenomenon universally called deepfake. A deepfake is a synthetic media creation based on deep learning, specifically leveraging neural networks such as autoencoders or transformer models, to generate highly realistic but artificially altered audio, video, or images, often indistinguishable from authentic content. Digitally reproduced voices of the deceased and conversational agents, sometimes called “deathbots” or chatbots of the dead, have found a place in bereavement support for families, receiving mixed feedback. Lindemann [17] discussed ethical aspects of deathbots, considering both the dignity of the deceased and the dignity and autonomy of bereaved users. The author suggested the need to classify deathbots as medical tools for the potential treatment of PGD, to prevent uncontrolled and potentially harmful development of this technology.
Altaratz et al. [18] analysed one case study of a “digital resurrection” of a deceased 7-year-old girl, whose image, voice, and playground were reconstructed so that she could converse with her mother via a VR headset and haptic gloves, delivering seemingly genuine emotional responses. Nevertheless, the AI technologies were assessed here as limited, since only pre-planned scenarios were possible.
Maria Pizzoli et al. [19] concluded, similarly to Lindemann, that digital stimuli related to grief should be approached cautiously, and the use of audio clips in grief therapy should be carefully evaluated by ethical committees. The videos or audio clips may serve as a compensatory mechanism providing temporary comfort, but they hinder the natural physiological process of adapting to the loss.
Both psychological and legal aspects of deepfake therapy in bereavement were analysed by Hoek et al. [20], who observed the risk of overattachment, leading to complicated grief instead of acceptance of the loss. Disturbing implications can also occur in terms of privacy, when images of the deceased might be used without consent and even spread commercially.
Koukopoulos et al. [21] emphasised that in the shortage of mental health professionals, artificial intelligence mental health chatbots can be extremely useful, but they raise certain ethical and legal concerns. Babushkina et al. [22] strongly excluded artificial agents as suitable conversational partners based on the philosophy of psychology, underlining the limitations of conversational agents (CAs) in therapeutic dialogue: they cannot offer the same deep and adaptable therapeutic experience that emerges from the relationship between a patient and a human therapist.

Discussion

Artificial intelligence has been applied in the grieving process to identify and understand risk factors for PGD, a severe mental health condition characterised by prolonged yearning, emotional numbness, identity disruption, and loss of meaning. Given the severity of this mental disorder, the ability to predict and monitor it appears to be immensely valuable. The standalone online application that She et al. [11] propose as future work, designed to screen for PGD within the first 12 months of bereavement and to offer an intuitive translation of psychological assessments with personalised feedback to grieving users, is a promising direction. A similar analysis of the key factors contributing to the development of PGD was conducted by Cherblanc et al., demonstrating that AI can assist in understanding the complex and dynamic process of grief [12].
Schultebraucks et al. focused on predicting resilience or depression after a major life stressor like bereavement [13]. Emphasising resilience is crucial because it aids in identifying individuals with a lower likelihood of developing stress-related psychiatric conditions over time. This insight is valuable because it can help redirect interventions to those who would benefit most, while also preventing overtreatment or inefficient inclusion in clinical studies. Uchimura et al. used a machine learning approach to provide accurate estimations of the relative importance of secondary stressors for post-loss symptom severity [14]. Identifying the factors that influence the onset and continuation of mental health issues after a loss is crucial for the progress of effective interventions.
A significant challenge for clinicians is distinguishing between MDD and bereavement, which can sometimes coexist. Using a mathematical model, Loula et al. found a way to better cluster these subgroups. As a result, people needing pharmacological or psychological treatment can be more accurately diagnosed [15]. Nevertheless, these outcomes require additional verification in a group of real bereaved individuals, because the simulation was extrapolated from population-wide statistics.
However, not only the diagnosis, but also the selection of appropriate psychotherapy for individuals in mourning can pose a challenge for clinicians. Therefore, the use of AI in creating a personalised approach to therapy selection offers a significant advantage, as Argyriou et al. proved in their study [16]. A machine learning personalised treatment rule to optimise assignment to psychotherapies for grief achieved satisfactory results and might be worth implementing in clinical practice.
The most common application of broadly defined AI in the grieving process involves deepfake-based technologies: conversational agents and other attempts to replicate the characteristics of deceased individuals. The popularity of deathbots is growing, but they remain legally unregulated, which raises ethical controversies, a topic extensively discussed in Lindemann’s work [17]. They undoubtedly influence their users’ affect, sometimes leading to unintended overreliance and emotional overregulation [26]. This means that bereaved individuals may become reliant on the bots to regulate their grief and may form pseudo-bonds, which are by definition technologically externalised, and therefore untrustworthy, being dependent on commercial companies. As Koukopoulos et al. observed, since conversational agents successfully mimic the traits of the deceased loved one, the discontinuation of this service can be a potentially traumatic event for the end user [21].
However, they could prove useful in the treatment of PGD, provided they are thoroughly tested and classified as medical tools, operating under psychological or psychiatric supervision. The use of digital interventions has begun to be applied for complicated grief or bereaved caregivers, and there are some promising preliminary results of an 8-week web-based and virtual grief support program for widows, as well as other protocols [19]. Nevertheless, it appears to be exceptionally challenging to balance therapeutic benefits for living individuals with postmortem privacy where deepfake therapy is concerned. It is essential to approach deepfake therapy responsibly: with the consent of the deceased individual for the use of their personal data, and under the supervision of a human therapist, or in uncontrolled home settings only after establishing clear guidelines [20]. Babushkina et al. primarily focused in their publication on the limitations and potential risks of conversational agents, particularly their epistemic limitations as “a cognition technology”. However, they emphasised their potential in applications where therapy does not rely on intersubjectivity or the therapist’s ability to relate to the patient’s existential situation [22]. For example, with appropriate precautions, they could be useful for psychoeducation, assisting patients in developing specific skills (e.g. monitoring mood fluctuations), applying therapeutic insights to particular contexts, or facilitating relaxation and breathing exercises.

Conclusions

In summary, AI has a broad range of applications in the grieving process, from interactive tools such as deepfake-based conversational agents to auxiliary tools in the prediction, diagnosis, and treatment of grief-related symptoms and conditions. Undoubtedly, assistance in making more accurate diagnoses, distinguishing between depression and grief, and identifying high-risk groups for serious conditions such as PGD is extremely valuable. Similarly, selecting the appropriate form of psychotherapy based on the individual characteristics of mourners is an example of personalised medicine, the development of which should be supported due to its potential for better therapeutic outcomes. However, one must not overlook the widely discussed risks associated with the use of deepfake technology in interactions with bereaved individuals. Deathbots may provide a foundation for the development of overattachment, disrupting the natural grieving process of coming to terms with the loss of a loved one, and have the potential to limit the emotional and psychological wellbeing of their users. Furthermore, they transfer the authority to decide on the potential sudden cessation of conversational agents’ use to external companies providing these technologies. Finally, it is important to mention the legal aspects of using data from the deceased, such as their image or voice, as they may not always be able to consent to such processing. Therefore, the assessment of AI applications in the grieving process should be multidimensional, taking into account both the benefits and the risks. The legal and medical regulation of these aspects should be considered before widespread implementation of AI in this context.

Disclosures

  1. Institutional review board statement: Not applicable.
  2. Assistance with the article: The work examines the current use of AI in supporting families in the grieving process, therefore individual techniques and research results from other publications that based their methods on AI were analyzed.
  3. Financial support and sponsorship: None.
  4. Conflicts of interest: None.
References
1. Radbruch L, De Lima L, Knaul F, Wenk R, Ali Z, Bhatnaghar S, et al. Redefining palliative care – a new consensus-based definition. J Pain Symptom Manage 2020; 60: 754-764.
2. Payne S, Harding A, Williams T. Revised recommendations on standards and norms for palliative care in Europe from the European Association for Palliative Care (EAPC): A Delphi study. Palliat Med 2022; 36: 680-697.
3. Leppert W, Grądalski T, Kotlińska-Lemieszek A, Kaptacz I, Białoń-Janusz A, Pawłowski L. Organizational standards for specialist palliative care for adult patients: recommendations of the Expert Group of National Consultants in Palliative Medicine and Palliative Care Nursing. Palliat Med Pract 2022; 16: 7-26.
4. Kustanti CY, Fang H, Linda Kang X, Chiou J, Wu S, Yunitri N, et al. The effectiveness of bereavement support for adult family caregivers in palliative care: a meta‐analysis of randomized controlled trials. J Nurs Scholarsh 2021; 53: 208-217.
5. Bork-Zalewska J, Jarych W, Pawłowski L. An overview of the role of family support in palliative care: a quasi-systematic review. Palliat Med Pract 2024; VM/OJS/J/102546. DOI: 10.5603/pmp.102546.
6. Szuhany KL, Malgaroli M, Miron CD, Simon NM. Prolonged grief disorder: course, diagnosis, assessment, and treatment. Focus (Am Psychiatr Publ) 2021; 19: 161-172.
7. Bork-Zalewska J. An overview of the role of artificial intelligence in palliative care: a quasi-systematic review. Palliat Med Pract 2024; VM/OJS/J/103020. DOI: 10.5603/pmp.103020.
8. Gero JS. Artificial intelligence in design ’96. Springer, Netherlands, Dordrecht 1996.
9. Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K, et al. A guide to deep learning in healthcare. Nat Med 2019; 25: 24-29.
10. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021; n71.  DOI: 10.1136/bmj.n71.
11. She WJ, Ang CS, Neimeyer RA, Burke LA, Zhang Y, Jatowt A, et al. Investigation of a web-based explainable AI screening for prolonged grief disorder. IEEE Access 2022; 10: 41164-41185.
12. Cherblanc J, Gaboury S, Maître J, Côté I, Cadell S, Bergeron- Leclerc C. Predicting levels of prolonged grief disorder symptoms during the COVID-19 pandemic: an integrated approach of classical data exploration, predictive machine learning, and explainable AI. J Affect Disord 2024; 351: 746-754.
13. Schultebraucks K, Choi KW, Galatzer-Levy IR, Bonanno GA. Discriminating heterogeneous trajectories of resilience and depression after major life stressors using polygenic scores. JAMA Psychiatry 2021; 78: 744.
14. Uchimura KK, Papa A. Examining worry and secondary stressors on grief severity using machine learning. Anxiety Stress Coping 2024; 38: 206-218.
15. Loula R, Monteiro LHA. On the criteria for diagnosing depression in bereaved individuals: a self-organizing map approach. Math Biosci Eng 2022; 19: 5380-5392.
16. Argyriou E, Gros DF, Hernandez Tejada MA, Muzzy WA, Acierno R. A machine learning personalized treatment rule to optimize assignment to psychotherapies for grief among veterans. J Affect Disord 2024; 358: 466-473.
17. Lindemann NF. The ethics of ‘Deathbots.’ Sci Eng Ethics 2022; 28: 60.
18. Altaratz D, Morse T. Digital Séance: fabricated encounters with the dead. Social Sciences 2023; 12: 635.
19. Maria Pizzoli SF, Vergani L, Monzani D, Scotto L, Cincidda C, Pravettoni G. The sound of grief: a critical discussion on the experience of creating and listening to the digitally reproduced voice of the deceased. Omega (Westport) 2024; 00302228231225273. DOI: 10.1177/00302228231225273.
20. Hoek S, Metselaar S, Ploem C, Bak M. Promising for patients or deeply disturbing? The ethical and legal aspects of deepfake therapy. J Med Ethics 2025; 51: 481-486.
21. Koukopoulos A, Danias N. Accessibility in digital health: virtual conversational agents and mental health services. In: Anshari M, Almunawar MN, Ordonez De Pablos P (eds.). Advances in Healthcare Information Systems and Administration [Internet]. IGI Global 2024; 13-28. Available from: https://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/979-8-3693-1463-0.ch002.
22. Babushkina D, De Boer B. Disrupted self, therapy, and the limits of conversational AI. Philos Psychol 2024; 1-27. DOI: https://doi.org/10.1080/09515089.2024.2397004.
23. Awad M, Khanna R. Support Vector Machines for Classification. In: Efficient Learning Machines [Internet]. Berkeley, CA: Apress; 2015; 39-66. Available from: http://link.springer.com/10.1007/978-1-4302-5990-9_3.
24. National Academies of Sciences, Engineering, and Medicine. Introduction to causal inference principles. In: Advancing the framework for assessing causality of health and welfare effects to inform national ambient air quality standard reviews. National Academies Press, Washington (DC) 2022.
25. Prigerson HG, Maciejewski PK, Reynolds CF, Bierhals AJ, Newsom JT, Fasiczka A, et al. Inventory of complicated grief: a scale to measure maladaptive symptoms of loss. Psychiatry Res 1995; 59: 65-79.
26. Krueger J, Osler L. Engineering affect: emotion regulation, the internet, and the techno-social niche. Philosophical Topics 2019; 47: 205-231.
Copyright: © 2025 Termedia Sp. z o. o. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License (http://creativecommons.org/licenses/by-nc-sa/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material, provided the original work is properly cited and states its license.