INTRODUCTION
Mental disorders remain one of the leading causes of disability. In 2019, more than 970 million people worldwide were living with a mental disorder, most commonly depressive and anxiety disorders [1]. One of the more widely used indicators in health policy is the DALY (disability-adjusted life year). This indicator reflects the severity of a particular disease or group of diseases and is measured by the number of years lost due to ill health, disability or early death; one DALY corresponds to the loss of one year of life in full health [2]. According to the Global Burden of Disease study, the number of DALYs attributable to mental illnesses rose from 80.8 million in 1990 to 125.3 million in 2019, while the share of all DALYs attributable to mental disorders increased from 3.1% to 4.9% [3]. To reduce their negative impact on the health of societies, the organized introduction of effective prevention and treatment programmes by governments and the international health community is essential.
The application of artificial intelligence (AI) in the diagnosis and treatment of mental illnesses is a rapidly developing area that can help address this issue. Traditional treatment methods, which include pharmacotherapy and psychotherapy, often prove insufficient or difficult for patients to access. In this context, AI-based tools could represent a breakthrough, offering new possibilities for diagnosis, therapy and the monitoring of patients' health. A diverse community of experts, including researchers, clinicians and patients, must collaborate effectively to fully realize the potential of AI in this field [4]. Among other things, AI is already being used to detect diseases earlier and to understand their progression and treatment options. With its ability to analyse large data sets, recognize patterns and learn from experience, it is becoming possible to diagnose and treat many diseases, including mental disorders, more accurately. It is believed that AI could in the future significantly facilitate the management of people with Alzheimer's disease, depression, schizophrenia, autism spectrum disorders, and many other conditions that have a significant impact on patients' daily functioning [5, 6].
Accurately diagnosing mental disorders in the elderly is a significant challenge in geriatrics. Many conditions, such as late-life depression, often go unnoticed and untreated. Traditional diagnostic methods rely mainly on patients' subjective accounts, which can lead to errors due to unreliable memory. In addition, differentiating one disorder from another is difficult in seniors with multiple comorbidities or overlapping symptoms [7].
AI is widely expected to become an integral part of medicine. At the same time, its development poses a number of challenges, especially in the ethical context. The introduction of AI into healthcare raises questions about patient privacy, accountability for misdiagnoses, dehumanization of the treatment process, and the dangers of over-reliance on technology. Mental health data is extremely sensitive, and its misuse or leakage can lead to serious consequences for patients. In addition, there are concerns about the lack of transparency of many algorithms and the possibility of misinterpretation, which in extreme cases can result in a misdiagnosis or a failure to respond quickly enough [8, 9]. In legal contexts, responsibility refers to the identification of the actor who can be held liable for harm caused by AI systems, such as a developer or clinician. In contrast, accountability in the professional sense involves ethical obligations to ensure the safe and transparent use of AI, even in the absence of direct legal liability. The purpose of this paper is to discuss the ways in which AI is being used in the diagnosis and treatment of mental illness, along with the potential benefits, challenges and ethical considerations involved.
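As a point of reference for the burden figures cited above, the sketch below shows how a DALY estimate is typically composed under the standard Global Burden of Disease decomposition into years of life lost (YLL) and years lived with disability (YLD); the numbers used are purely illustrative and are not taken from the cited studies.

```python
# Illustrative DALY calculation (toy numbers, not GBD data).
# DALY = YLL + YLD
#   YLL = deaths * average remaining life expectancy at age of death
#   YLD = prevalent cases * disability weight (prevalence-based approach)

deaths = 1_000                  # hypothetical deaths attributable to a disorder
life_expectancy_remaining = 30  # hypothetical average remaining life expectancy (years)
prevalent_cases = 200_000       # hypothetical number of people living with the disorder
disability_weight = 0.145       # hypothetical weight between 0 (full health) and 1 (death)

yll = deaths * life_expectancy_remaining
yld = prevalent_cases * disability_weight
daly = yll + yld

print(f"YLL = {yll:,.0f}, YLD = {yld:,.0f}, DALY = {daly:,.0f}")
```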
METHODOLOGY
In this paper, a literature review method was used to present the current state of knowledge on the application of AI in psychiatry and related ethical issues (Table 1). The focus was mainly on papers published between 2015 and 2025, although older papers that made significant scientific contributions were also considered. Sources were selected by searching databases such as PubMed, Scopus and Google Scholar, using the following keywords: “artificial intelligence”, “AI in psychiatry”, “mental health”, “ethics”, “machine learning in mental health”, and their Polish-language equivalents. Inclusion criteria were: reference to AI in the context of diagnosing or treating mental disorders, addressing ethical issues related to the use of AI, and availability of full texts. Publications that had not undergone scientific peer review, press commentaries and papers dealing solely with technical aspects without reference to the mental health context were excluded. In addition, particular attention was paid to the reliability of the sources and their relevance to the current ethical debate.
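As an illustration only, the snippet below shows one way the listed keywords could be combined into a single boolean search string of the kind accepted by databases such as PubMed or Scopus; the exact query syntax actually used for this review is not reported, so the combination shown here is an assumption.

```python
# Hypothetical reconstruction of a boolean search string from the keywords
# listed in the Methodology section; not the query actually executed.
query = (
    '("artificial intelligence" OR "machine learning" OR "AI in psychiatry") '
    'AND ("mental health" OR "psychiatry" OR "mental disorders") '
    'AND ("ethics" OR "ethical")'
)
print(query)
```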
THE USE OF AI IN PSYCHIATRY
The first computer programs of this kind were created as early as the 1960s, among them ELIZA, which mimicked the conversational abilities of a psychotherapist. The tool was designed to simulate a conversation between therapist and patient, responding by reinterpreting the words spoken by the patient. The program was used only for research purposes, but it significantly stimulated the discussion on the use of AI in psychiatry in the following years [10]. In recent years, computational techniques have become very helpful tools for diagnosing and treating many mental illnesses. This is particularly valuable because psychiatry, unlike other specialties, often lacks objective and reliable clinical parameters for confident diagnosis. AI-based tools are used, among other things, to personalize treatment plans and to aid early diagnosis by recognizing patterns in a particular patient's data [11]. Latent semantic analysis, an automated method for finding patterns in the relationships between words spoken by patients, has proven to be an extremely useful tool for clinicians in diagnosing schizophrenia, among other conditions [12, 13]. Machine learning techniques have also been used in the diagnosis of attention deficit hyperactivity disorder (ADHD), where it is possible to distinguish a group of ADHD patients from a control group, as well as between subtypes of the disorder, based on the evaluation of an EEG recording [14]. Recognition of the early stages of Alzheimer's disease and schizophrenia using neuroimaging data in AI techniques is also an emerging field. Vieira et al. [15] demonstrated that, based on patterns found in resting-state functional MRI, patients could be assigned either to a control group or to a schizophrenia group with an accuracy of up to 85.5%.
The use of modern technology, such as smartphones and smartwatches, opens up new possibilities in monitoring the mental health of older people. These devices collect a variety of data, including on sleep, physical activity and social interactions, which can provide valuable information about the user's wellbeing. Analysing these data allows the early detection of behavioural changes, which is key to diagnosing and treating depression in seniors. For example, the MoodCapture app uses the front camera of a smartphone to monitor facial expressions during regular use, which can help identify signs of depression [16]. Implementing such approaches can significantly improve the quality of care for older people, enabling faster intervention and allowing therapy to be tailored to individual patient needs.
Traditional methods for the early recognition and diagnosis of cognitive impairment in older people often struggle to distinguish subtle changes from normal ageing, limiting their clinical effectiveness. The use of AI, especially natural language processing, opens up new possibilities in this respect. By analysing speech features such as pause length or voice modulation, AI can detect early signs of cognitive decline that are difficult to pick up with traditional neuropsychological assessments. Research indicates that AI can identify subtle language patterns indicative of the early stages of Alzheimer's disease, which can significantly aid early diagnosis and intervention [17].
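To make the latent semantic analysis approach mentioned above more concrete, the following is a minimal sketch of how LSA is commonly implemented (TF-IDF weighting followed by truncated singular value decomposition), assuming scikit-learn; it is not the specific pipeline used in the cited studies, and the example utterances are invented.

```python
# Minimal latent semantic analysis (LSA) sketch: TF-IDF + truncated SVD.
# The utterances below are invented placeholders, not clinical data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

utterances = [
    "I went to the shop and then I came home",
    "After coming home I made dinner for my family",
    "The radio sends messages through the walls at night",
    "They put thoughts into my head through the wires",
]

# Represent each utterance as a TF-IDF weighted bag of words.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(utterances)

# Project into a low-dimensional latent semantic space.
lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)

# Cosine similarity between consecutive utterances; in the research literature,
# low similarity between successive statements has been used as one proxy for
# reduced discourse coherence.
for i in range(len(utterances) - 1):
    sim = cosine_similarity(Z[i:i + 1], Z[i + 1:i + 2])[0, 0]
    print(f"similarity({i}, {i + 1}) = {sim:.2f}")
```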
In contrast, an increasing number of young people suffer from mental health problems, such as depression and anxiety disorders, and they often find it difficult to seek traditional treatment because of feelings of shame and long waiting times to see specialists [18]. According to a nationwide survey of more than two thousand students conducted by the Students' Parliament of the Republic of Poland, almost every student rated their day-to-day coping with stress as poor or average. Furthermore, the majority of respondents reported conditions such as difficulty concentrating, depressed mood, sadness, panic attacks, apathy and a lack of motivation to act, both in themselves and among their friends [19]. In addition, many students have limited knowledge of mental health and do not see the need for treatment, because they perceive the symptoms of depression and anxiety as normal study-related stress that does not require intervention [20]. A review by Lattie et al. [18] found that digital mental health interventions can be effective in reducing levels of depression and anxiety disorders in this group.
One of the established forms of treatment for depression and anxiety, among other conditions, is cognitive behavioural therapy. However, a shortage of trained psychotherapists creates access barriers that leave many of those in need without support. Computerised cognitive behavioural therapy can therefore be an effective alternative [21, 22]. Santucci et al. [22] piloted the Beating the Blues (BtB) programme, a fully automated cognitive behavioural therapy programme consisting of eight weekly 50-minute sessions. Following established cognitive behavioural therapy protocols, patients are given tasks to complete between sessions. When completed, the results are entered into a dedicated web interface that provides the participant with automatically generated feedback. The programme is designed so that the individual sessions build on one another, while allowing the therapy to be tailored to the individual patient's needs and providing personalised support. The effectiveness of Beating the Blues is widely documented, and the programme is now recommended by the UK National Health Service as a form of self-help for the treatment of mild-to-moderate depression, as well as for the treatment of anxiety disorders.
The use of digital games as part of treatment, and as a way of helping patients understand their own illness, is also developing steadily. Initially, they were used mainly for physical conditions such as cancer; one example is games in which children with cancer fight against cancer cells, which has been shown to contribute to a better understanding of the disease and to improve adherence to medical advice [23]. In the following years, gamification, i.e. the application of mechanisms and elements known from games to non-gaming contexts such as education, health or personal development, has also been applied to mental illnesses. Currently, various types of mobile apps serve as therapeutic aids for most identified mental disorders. They have the advantage of becoming more attractive with each update, and are widely accepted by patients who wish to maintain relative anonymity [24]. One example is SPARX, which is based on cognitive behavioural therapy and is aimed at patients with depression. In a study by Merry et al. [25], after three months of follow-up the remission rate in the group of adolescents assigned to SPARX was significantly higher than among those treated with standard methods (n = 31, 43.7% vs. n = 19, 26.7%, p = 0.03).
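To make the reported comparison easier to interpret, the following is a rough sanity check of a two-proportion comparison of this kind, assuming per-group denominators of roughly 71, back-calculated from the counts and percentages quoted above; the actual analysis in the trial may have differed.

```python
# Rough illustration of comparing two remission proportions.
# Group sizes are back-calculated assumptions (31/71 ≈ 43.7%, 19/71 ≈ 26.7%),
# not figures taken directly from the trial report.
from scipy.stats import chi2_contingency

remitted_sparx, n_sparx = 31, 71
remitted_usual, n_usual = 19, 71

table = [
    [remitted_sparx, n_sparx - remitted_sparx],
    [remitted_usual, n_usual - remitted_usual],
]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"SPARX remission:      {remitted_sparx / n_sparx:.1%}")
print(f"Usual care remission: {remitted_usual / n_usual:.1%}")
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```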
Table 1
Key ethical issues in the use of AI in psychiatry
AI IN PSYCHIATRY AND THE PATIENT’S PERSPECTIVE
Patients have very heterogeneous attitudes towards the use of AI tools in psychiatry, and whether their opinions are positive or negative depends on many different factors [26]. In a survey of 500 people over the age of 18 in the United States, respondents answered questions about their views on the value, benefits and concerns surrounding the use of AI in mental health care. Of the respondents, 245/497 (49.3%) answered that tools based on machine learning techniques may not have a positive effect, and their responses varied significantly by cultural and socio-demographic factors (p < 0.05). Those most likely to say that the use of AI would bring benefit and improve mental health diagnosis and treatment processes included African Americans (OR 1.76, 95% CI: 1.03-3.05) and people with lower health literacy (OR 2.16, 95% CI: 1.29-3.78) [27]. In contrast, those who perceived the use of AI less favourably and felt more anxiety and concern about accuracy, misdiagnosis, confidentiality of information and many other ethical issues were mainly women (OR 0.68, 95% CI: 0.46-0.99) [27].
The same dataset was subsequently re-analysed, this time considering a larger number of patient characteristics. Not only were socio-demographic characteristics taken into account, but also how people with a history of mental illness relate to the use of AI in psychiatry. The researchers found that people with a history of mental illness showed greater distrust of diagnoses made by AI. They felt less comfortable when an algorithm assessed their health and expressed stronger concerns about the possibility of misdiagnosis. The transparency of the system's operation and the precautions taken to prevent potential misdiagnosis and harm also appeared to be key for this group [28]. This may be due to the previous experiences of these patients, who not infrequently faced misunderstanding or misjudgement even from mental health professionals. In addition, people struggling with mental health problems often experience a sense of loss of control, and placing decision-making in the hands of an impersonal algorithm can heighten this anxiety [29].
Additionally, representatives of the older generation, for example baby boomers, i.e. those born between 1946 and 1964, expressed more anxiety and fear towards the use of AI in this field [30]. Older generations often express concerns about the introduction of AI into psychiatric care, which may be rooted in limited experience with modern technology, leading to difficulties in understanding and trusting AI-based systems. They may also attach more importance to traditional treatment based on face-to-face contact with a clinician, fearing that automation may replace or undermine this element of care [30]. The underdiagnosis of mental illness among older people, especially among racial and ethnic minorities, is a significant problem in the health care system. This is due to a variety of factors, such as atypical presentation of symptoms or limited access to mental health services in these groups. AI offers the potential to overcome these difficulties through advanced methods for the detection of disorders, support for carers, and innovative solutions for reducing loneliness. However, the lack of active involvement of older people in the implementation of these technologies may lead to an exacerbation of existing inequalities in access to mental health care [31].
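For readers less familiar with the odds ratios quoted above, the sketch below shows how an OR and its 95% confidence interval are conventionally derived from a 2x2 table; the counts are invented for illustration and are not taken from the cited survey.

```python
# Odds ratio with a 95% confidence interval from a 2x2 table (Woolf method).
# The counts below are invented placeholders, not data from the cited survey.
import math

a, b = 40, 60   # group of interest: positive view of AI / not
c, d = 30, 90   # comparison group:  positive view of AI / not

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI: {lower:.2f}-{upper:.2f}")
```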
CONTROVERSY OVER THE USE OF AI IN PSYCHIATRY
The controversies surrounding the application of AI algorithms in the field of psychiatry are numerous and highly complex. They concern technical, social and ethical issues.
Conflicts of interest
One of the major problems with the use of AI in psychiatry is that a large part of the research on these solutions is conducted by their developers. These actors often have a direct, for example financial, interest in positive results, which can affect the reliability and objectivity of the conclusions. As a result, the findings of such studies can be one-sided and portray AI in an overly favourable manner. Independent research, conducted by external, unbiased institutions, is needed to build confidence in the effectiveness and safety of these technologies [32].
Diagnostic errors
In psychiatry, making an accurate diagnosis can be difficult. Much depends on the context, the subjective assessment of symptoms and the experience of the specialist. This means that AI may have limitations in accurately diagnosing different mental disorders. Pan et al. [33], in their literature review, analysed six studies on distinguishing people with bipolar disorder from healthy controls. The mean sensitivity of these AI models was 0.88 (95% CI: 0.74-0.95) and the specificity 0.89 (95% CI: 0.73-0.96). Distinguishing bipolar disorder from depression proved more difficult: across eleven studies, the mean sensitivity for bipolar disorder was 0.84 (95% CI: 0.80-0.87) and the specificity 0.82 (95% CI: 0.75-0.88). Although the results look promising, there is still a risk of misdiagnosis, especially in more complex cases [33].
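As a reminder of what the sensitivity and specificity figures above mean in practice, here is a minimal worked example computed from a hypothetical confusion matrix; the counts are invented and are not taken from the cited review.

```python
# Sensitivity and specificity from a hypothetical confusion matrix.
# tp: bipolar cases correctly flagged, fn: cases missed,
# tn: controls correctly cleared,      fp: controls wrongly flagged.
tp, fn = 88, 12   # invented counts
tn, fp = 89, 11   # invented counts

sensitivity = tp / (tp + fn)   # proportion of true cases detected
specificity = tn / (tn + fp)   # proportion of non-cases correctly identified

print(f"sensitivity = {sensitivity:.2f}")  # 0.88, comparable to the pooled figure above
print(f"specificity = {specificity:.2f}")  # 0.89
```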
AI hallucinations and misinformation
AI-based systems, such as ChatGPT, can produce statements that sound plausible but contain false or unverified data. This phenomenon, known as AI hallucination, involves a chatbot generating information that is not actually factual: invented data, erroneous claims or content with no confirmation in the scientific literature. Such situations are particularly dangerous in the context of mental health, where even a minor inaccuracy can lead to serious consequences. People using such tools may unknowingly rely on erroneous information. It is therefore important to use AI with great care and always to verify the content it provides [34, 35].
DEVELOPMENTAL TRENDS IN PSYCHIATRY
The integration of AI into psychiatry is expected to develop significantly in the coming years, with the potential to increase diagnostic accuracy, the quality of therapeutic interventions and access to specialist support. Recent literature highlights this trend, demonstrating a variety of applications of AI technology in clinical and research settings. Key directions include enhancing diagnostic capabilities: advances in machine learning algorithms are enabling more sophisticated data analysis, resulting in personalised treatment plans and better diagnosis of mental illness. AI technologies make it possible to analyse huge data sets to uncover patterns that may be unnoticeable to clinicians, thereby improving the specificity and sensitivity of psychiatric diagnoses [36]. Research indicates that AI models can be used for diagnosis based on electronic medical records and patient histories, which may increase the accuracy of interventions [37]. In addition to diagnostics, AI tools such as chatbots are likely to offer patient support, enabling self-management strategies for mental health [38]. The use of AI may also extend to art therapy and virtual reality interventions in the future. Research indicates that AI-based art therapy interventions can enhance the therapeutic process among older populations with cognitive impairment, effectively improving their cognitive function and emotional well-being [39]. Although AI tools can be applied very widely in psychiatry, their implementation is not without difficulties. Important ethical questions arise, including the use of patient data and the issue of accountability for AI decisions. The creation of clear rules and regulations is crucial, as without them trust in new solutions may be limited [40].
ETHICAL ASPECTS AND THE USE OF AI IN PSYCHIATRY
AI in crisis situations and clinical safety
With the more widespread use of AI in the field of mental illness, there is increasing discussion of the ethical aspects involved. Among other things, many companies are launching chatbots to support people in crisis. One crucial question is how AI responds to user messages that should immediately trigger a call to the appropriate services. Heston et al. [41] evaluated ChatGPT 3.5 for its responses to patient statements indicating worsening depression and suicidal tendencies. Their findings showed that the tool delayed referring a potential patient in crisis to in-person assistance to a potentially dangerous degree. De Freitas et al. [42] demonstrated that patients react negatively to unhelpful and risky AI-generated responses that disregard their messages, which points to possible risks.
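As a purely illustrative sketch of the kind of safeguard such evaluations look for, the snippet below shows a naive, hard-coded escalation rule that short-circuits any generated reply when a message contains crisis-related phrases. The phrase list and helpline text are assumptions made for the example; real systems would require clinically validated risk assessment, human oversight and locally appropriate crisis resources.

```python
# Naive illustration of a crisis-escalation guard placed in front of a chatbot.
# The phrase list and the escalation text are placeholders, not clinical guidance.
CRISIS_PHRASES = (
    "kill myself", "end my life", "suicide", "want to die", "hurt myself",
)

ESCALATION_MESSAGE = (
    "It sounds like you may be in crisis. Please contact your local emergency "
    "number or a crisis helpline right now, or reach out to someone you trust."
)

def respond(user_message: str, generate_reply) -> str:
    """Return an escalation message for crisis content, otherwise defer to the model."""
    text = user_message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        return ESCALATION_MESSAGE           # escalate immediately, do not delay
    return generate_reply(user_message)     # otherwise, pass through to the chatbot

# Example usage with a stand-in generator function:
print(respond("I can't sleep and I want to die", lambda message: "…"))
```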
Empathy and human connection in therapy
In the same study, De Freitas et al. [42] also found that AI tools are often unable to adequately recognize and respond to signs of distress displayed by potential patients. The lack of empathy and compassion that distinguishes an AI tool from a skilled professional is another ethical issue that must not be neglected [43]. The integration of AI into mental health care raises questions about the role and value of human interaction and empathy. While AI-based chatbots and virtual assistants can provide accessible and scalable support, they cannot fully replicate the holistic, empathetic care that mental health professionals offer, or at least should offer [44, 45]. Empathy plays a crucial role in the therapeutic process, as it allows the patient to feel understood, supported and accepted. The relationship with the therapist should be based on trust and emotional contact, which can be difficult, though not always impossible, to achieve when interacting with an impersonal AI system. Indeed, there are situations in which patients may feel distance and a lack of genuine commitment when interacting with a practitioner, which can lead to less effective therapy and the abandonment of treatment [46].
A study by Ayers et al. [47] analysed responses to questions posed by users of a social media forum, generated either by ChatGPT or by doctors. Of the 195 question-and-answer pairs, the chatbot-generated responses were rated significantly better by the evaluators. AI responses were rated as more empathetic than those written by doctors, and the proportion of AI responses classified as empathetic or very empathetic was significantly higher than for the doctors' responses (45.1% vs. 4.6%, p < 0.001). However, it is important to note that AI does not have the ability to share emotional experience or to show genuine concern and interest, and understanding of the patient and the therapist's expression of empathy appear to correlate with the success of treatment [48, 49]. In one study, Lopes et al. [50] revealed that responses produced by chatbots were rated as more practical, authentic and professional than those generated by humans (p < 0.001), while participants were unaware of which responses had been generated by the AI-based system. However, when users are aware of the involvement of AI, or even suspect its presence, responses may be perceived as less authentic and credible and may even evoke a certain emotional distance. This is especially the case in the context of mental health, where a sense of understanding, empathy and a personalized approach are crucial. People often expect their problems to be acknowledged by someone who not only understands their situation but can also empathize, something that algorithms, however sophisticated, still lack. As a result, patients may feel less inclined to have honest conversations, and may even be discouraged from using the technology if they perceive its answers as cold, schematic or lacking a human element [51].
Emotional dependence and social isolation
According to a study by Yew et al. [52], in the long term patients using so-called care robots may come to over-rely on emotional contact with AI, which can translate into isolation from society. Long-term use of AI tools, especially care robots, can lead to overdependence on the technology, which can have negative social consequences or, in extreme cases, even contribute to death [53]. One example is the story of 14-year-old Sewell, diagnosed with mild Asperger's syndrome, who spent months talking to a chatbot on the character.ai app. Although he knew the chatbot was not a real person, an emotional relationship developed between them, and the boy began to isolate himself from society. Just before he took his own life in February 2024, he received messages that may have encouraged him to do so. The boy's mother filed a lawsuit against the company, accusing it of being responsible for her child's death and describing the app as unsuitable and untested [54]. AI should therefore support rather than replace human interaction [52].
Accountability, legal ambiguity and data protection
One of the key ethical dilemmas related to the use of AI in psychiatry is the question of liability for possible system errors. What happens if the algorithm makes an incorrect diagnosis, suggests the wrong treatment, or fails to recognize signals that a patient is in a serious mental crisis? In traditional care, the responsibility lies with the doctor, who makes decisions based on their knowledge and experience [55]. In the case of AI, the issue is much more complicated: does the blame lie with the programmer who created the algorithm, the company that implemented it, or perhaps the patient who trusted its recommendations? The lack of clear legal regulation in this area means that a patient who has been harmed by AI may find it difficult to assert their rights. This raises serious questions about whether AI is ready to act as a viable support in an area as sensitive as mental health [56].
The issue of patient privacy and the proper management of data collected by virtual therapists and AI-based tools should be a priority. In an era of increasing digitization and automation of healthcare, it is crucial that patients can be confident that their data is properly protected and will not fall into the wrong hands. Clear and strict privacy and confidentiality guidelines should be developed so that every user feels safe using modern mental health technologies. Information about emotional state, therapy history or the details of conversations with a virtual therapist is extremely sensitive in nature and requires robust safeguards that not only protect against data leaks but also prevent potential misuse. Without such strict regulations and transparent rules, trust in AI tools in the mental health field could be severely undermined, which could discourage patients from using these solutions in the long run [57, 58].
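By way of illustration only, the sketch below shows one basic technical safeguard of the kind alluded to above: encrypting conversation records at rest before storage, assuming the third-party `cryptography` package. A real deployment would additionally require secure key management, access control, audit logging and compliance with applicable data protection law, none of which is covered here.

```python
# Minimal illustration of encrypting a sensitive conversation record at rest.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key must come from a secure key management system;
# it should never be generated and kept alongside the data as done here.
key = Fernet.generate_key()
cipher = Fernet(key)

record = "2024-02-01 session notes: patient reports low mood and poor sleep."
token = cipher.encrypt(record.encode("utf-8"))    # ciphertext safe to store
restored = cipher.decrypt(token).decode("utf-8")  # decryption for authorised access

assert restored == record
print(token[:32], "...")
```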
Design challenges, transparency and implementation
The use of gamification in the mental health field also raises significant ethical questions, especially for people who, in addition to their mental disorders, struggle with addiction to computer games or the Internet. For this group of patients, engaging with interactive reward mechanisms can not only hinder the therapeutic process, but also exacerbate the existing problem by reinforcing compulsive behaviour and increasing screen time. As a result, instead of promoting recovery, such solutions can inadvertently increase social isolation and worsen well-being, which calls into question their real value in therapy. It is therefore crucial to take into account the individual needs of patients and the potential risks associated with their mental state when implementing gamification elements [59].
Current deep learning technologies learn nonlinear functions using multilayer neural networks that analyse extensive data sets. This approach makes it possible to perform increasingly complex tasks. Despite this versatility, however, the decision-making processes of AI remain largely opaque: its decision-making mechanisms are difficult to trace and understand, which means that justifying its actions is not always possible. As a result, AI is often referred to as a "black box", a term describing the lack of transparency and comprehensibility of AI-based systems, especially those based on deep learning. The lack of clarity about how these systems work raises legitimate concerns about their reliability, especially in an area as sensitive and important as mental health. Patients and professionals need to be assured that the recommendations obtained are reliable, based on sound science and free of errors that could negatively affect diagnosis or therapy [60].
It is also important to remember that successful implementation of AI in the mental health field requires not only careful planning, but also close cooperation with various stakeholder groups. It is crucial that new technologies support the work of professionals, rather than leading to their marginalization or replacement. Care should be taken to ensure that the development of AI-based tools takes into account the ethical and practical aspects of their application, so that they complement professional care rather than undermining its quality or limiting the role of mental health experts [30].
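To make the "black box" point above more tangible, the sketch below trains a small multilayer neural network on synthetic data and then applies permutation importance, one common post-hoc technique for probing which inputs drive an otherwise opaque model. It is a generic scikit-learn illustration under those assumptions, not a description of any system discussed in the cited literature.

```python
# A small multilayer network is accurate but not directly interpretable;
# permutation importance is one post-hoc way to probe it. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", round(model.score(X_test, y_test), 2))

# The raw weight matrices exist but do not explain individual decisions.
print("weight matrix shapes:", [w.shape for w in model.coefs_])

# Permutation importance: how much does shuffling each feature hurt performance?
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:+.3f}")
```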
SUMMARY
The application of AI in psychiatry is a rapidly developing area that can bring significant benefits to the diagnosis and treatment of mental disorders. However, it comes with several ethical challenges, such as the lack of empathy in patient interactions, the threat to data privacy, and the opacity of decision-making processes in AI algorithms. Ethical challenges also include the risk of over-reliance on technology and the reduced role of human specialists in psychiatry. Mental health data is extremely sensitive, and its misuse or leakage can lead to serious consequences for patients. Therefore, collaboration between clinicians and technology developers is needed to ensure that AI is implemented safely and effectively in the care of patients with mental disorders.
PRACTICAL IMPLICATIONS AND RECOMMENDATIONS
With the rapid development of AI-based technologies in psychiatry, this literature review offers some key recommendations for practitioners, decision makers and technology developers.
For mental health practitioners: We recommend careful integration of AI tools as an adjunct to, rather than a replacement for, the classic therapeutic relationship. The key is to use these technologies in a complementary way – as a tool for monitoring, early symptom detection or personalizing therapy – while maintaining an empathetic and trusting relationship with the patient.
For decision-makers and public institutions: It is necessary to develop a clear legal and ethical framework governing the use of AI in psychiatry, including guidelines on liability for misdiagnosis, safeguarding sensitive data, and the transparency of algorithms. It is also worth investing in public education and in training for professionals on how to use AI consciously, ethically and effectively.
For technology developers: When creating mental health applications and systems, work closely with clinicians, patients and representatives of vulnerable groups. Particular emphasis should be placed on the transparency of algorithms, safety mechanisms, and features that quickly redirect patients to in-person support in crisis situations. The diversity of users in terms of age, digital competence, culture and mental health status should also be taken into account.