eISSN: 2084-9885
ISSN: 1896-6764
Neuropsychiatria i Neuropsychologia/Neuropsychiatry and Neuropsychology
1-2/2023
vol. 18
 
Review paper

Application of artificial intelligence for diagnosis, prognosis and treatment in psychology: a review

Mohammad Tahan 1, Tamkeen Saleem 2

1. Department of Psychology and Education of Exceptional Children, University of Tehran, Tehran, Iran
2. Department of Clinical Psychology, Shifa Tameer-e-Millat University, Islamabad, Pakistan
Neuropsychiatria i Neuropsychologia 2023; 18, 1–2: 36–45
Online publish date: 2023/07/03

Introduction

Artificial intelligence (AI) has been introduced in multiple fields, including games, robotics, law, stock trading, remote sensing, scientific discovery, and even diagnostic procedures (Shukla and Jaiswal 2013). Many of these advances have trickled into routine applications to the point that they are no longer recognized as AI: once a technique is deployed as a general-purpose application of broad utility, it tends to stop being labeled AI at all. AI applications now have a near-ubiquitous infrastructural presence across industries. The late 1990s and the beginning of the 21st century saw AI components woven into larger software systems (Shukla and Jaiswal 2013). Research in AI has primarily concentrated on autonomous, self-directed systems that may eventually replace skilled workers in their respective lines of work. Psychological perspectives on cognition may open novel avenues of inquiry for the theory of algorithmic instruction, with realistic implications because they address the complexity of human behavioral data. This focus rests on interdisciplinary, multidisciplinary, and synergistic research that integrates concepts from neuroscience, behavioral psychology, cognitive psychology, and computation and information science (Mozer et al. 2019).
A contrast is often drawn between AI's engineering facet and its theoretical features. On the engineering side, the objective of AI is to design and implement machinery and technology that can perform operations pragmatically. Accuracy, efficiency, competence, flexibility, and consistency are the primary criteria of success for such systems, and details of human routines and activities seem neither essential nor preferred (Tran et al. 2019). The history of AI nonetheless shows that program design and narrative have constantly depended on assumptions about the human psyche. These suppositions about mental processes have typically been extracted from the insight or introspection of AI personnel rather than from experimental and observational studies. Renowned programs, such as Samuel's checker player (Lee et al. 2018), were developed, applied, and tested without reference to any correlative studies of human performance. Part of the rationale for AI's early lack of interest in human research may have been indecisiveness about how fully to assimilate psychology, underscoring disagreement over the constitution of intelligence. Gelernter's remarks on the geometry-theorem proving program affirm this (Gelernter et al. 1963).
Although the human research considered in the AI field is still empirical, psychology is often ignored in favor of hunches or introspection. Even researchers who explicitly acknowledge the significance of human data tend to leave psychological studies out of their work. According to Winston, to develop an AI learning program it is essential to characterize the learning capability to be understood (Winston 1979). Yet his program built around a student-teacher paradigm drew on no studies in educational psychology (Klausmeier et al. 1974); instead, he based it on his own hunches and intuitions about student-teacher interactions and behaviors. The program would have been far more convincing had it been derived from the empirical foundations of psychology (Winston 1979). A stronger bond between AI and psychology could thus improve both aspects, i.e., the principles of intelligent behavior as well as their respective computer applications. Elevating psychological assumptions from intuitions to systematic, scientific observations would improve the quality of AI research over the long term and enable it to join interdisciplinary research.

Methods

This study was conducted as a systematic review from November to December 2020. A short review was carried out to study the use of AI in essential clinical functions of psychology: diagnosis, prediction, and treatment of psychological disorders. It also examined the challenges and limitations of AI in psychological practice. A literature search in the databases Web of Science, PubMed, Elsevier, Scopus, and Google Scholar yielded papers used to derive a primary framework for analysis. The material was then reviewed under the headings of mental health problems and AI; AI for detection and diagnosis, prognosis, and therapeutic interventions of psychological disorders; and challenges and limitations of AI. The article also aimed to make propositions for research on the use of artificial intelligence in psychology. The review was then integrated with evidence from gray literature and scientific information. The exclusion criteria were studies published in languages other than English, cases published after the end of 2020, studies with no full text, review studies and books, qualitative research, and articles not considering AI in psychological practice. Where the study period was unclear, the publication date was used. A total of 1390 articles were found in the initial search. At the first stage, after examination of titles and abstracts, seven articles were removed as duplicates or for lacking full texts. At the second stage, 35 irrelevant articles were excluded based on a review of their titles and abstracts. Finally, 10 research articles on AI in psychological practice were used in this study (Fig. 1).

Compliance with ethical standards

As a theoretical article, this study drew only on data published as evidence in authentic sources: well-reputed, peer-reviewed journals and books. The information sought for the present article was free of distorted realities and formed the foundation of the research. Further, data from previously published studies were used in which informed consent had been obtained by the primary investigators. Only reports and articles permissible for reproduction for research purposes have been used. Because composite narrative writing was used, no person of concern has been directly affected, and no human participants were involved in this study.

Precision of task definition and description

For the past thirty years, AI experts have been developing systems that involve processes such as planning, understanding, logic, problem-solving, decision-making, and concept formation (Luxton 2014). Terminological interpretation often shifts from one project to the next, producing a crazy-quilt of terminology. An example is the term concept, which, as applied in explanations of semantic networks, has more than five main senses. Such loose use of psychological terminology leads to three major issues. First, comparison between different AI projects becomes extremely difficult. The second issue concerns task specification: many AI experts are keen to use psychological terminology to explicitly associate their techniques and programs with characteristics of human mental activity and thereby borrow a pre-established theoretical framework (Hendrix and Lewis 1989).
When investigators conceptualize a program, it is not only because they want to improve the implementation of a well-understood task, but also because they intend to comprehend the first implementation. If researchers characterize their work as understanding, many people might be misled, not least the researchers themselves (McDermott 1980). The program description gives rise to ambiguity because it suggests a psychological model where there is none. This creates incomprehensibility and a lack of accuracy in task specification, and it raises the third problem: an inherent orientation toward related cognitive activities. When psychological terminology is employed heedlessly to describe AI programs, it is easy to surrender to the temptation to attribute a cluster of associated cognitive activities to them. This is largely inescapable, because pre-theoretic psychological terms exist with reference to human cognitive functions. Psychological terms are difficult to understand without assuming an integrated backdrop of cognitive phenomena; when they are used in an AI context, such a backdrop therefore becomes mandatory (Silva and Dwiggins 1980). These limitations and difficulties are among the reasons AI is still not well established in psychological and clinical practice.

Mental health problems and artificial intelligence

Mental health problems are among the foremost causes of disability across the globe, affecting individuals of all ages (Kessler et al. 2007). These illnesses place a heavy load on the individuals suffering from them as well as on society by escalating the cost of health services (Trautmann et al. 2016). Common psychological disorders are depression, bipolar disorder, anxiety, schizophrenia, and drug-related disorders (World Health Organization 2017). Roughly 30% of people may experience one of these psychological problems at some point in their life span (Steel et al. 2014). There is therefore a dire need to identify the early signs and symptoms of mental illness, which may help provide effective treatment to those affected and prevention to those who are vulnerable.
Although AI in psychology and psychiatry is still in its embryonic stages, it has already modernized the mental health sector and has strongly inspired the practices of mental health professionals in diagnosing, predicting, and treating mental health disorders. This article is a review regarding the application of AI in vital functions of mental health care practice: diagnosis, prediction, and treatment of psychological disorders. It also emphasizes the challenges and limitations of AI in mental health practice (Fig. 2).

Artificial intelligence for detection and diagnosis of psychological disorders

Over the years the American Psychiatric Association has worked to safeguard humane care and to provide effective treatment regimens for all people with mental illness. One of its pivotal contributions is the development of the Diagnostic and Statistical Manual of Mental Disorders (DSM) in 1952, which it has been updating ever since. The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), made available for use in 2013, is the product of more than 10 years of effort by hundreds of international experts in all aspects of mental health (American Psychiatric Association 2013). Their devotion and careful work have produced an influential volume that outlines, describes, and classifies mental health disorders for ease of diagnosis, treatment, and research. The manual is used by psychologists, psychiatrists, and other mental health professionals mainly to diagnose clients with psychological disturbances and problems. Despite its availability, DSM-5 is said to have low reliability in clinical settings because it yields false positives and struggles to differentiate risk from disorder (Wakefield 2016). Reliable biomarkers that could effectively distinguish mental disorders from normal mental health are also lacking, although progress in medicine, engineering, genetics, and brain imaging is gradually paving the way for advanced and improved diagnosis.
The literature reveals that diagnostic divisions can be summarized from high-dimensional data via structural and functional brain imaging (Arbabshirani et al. 2017; Orru et al. 2012). There is also support for the use of non-imaging modalities, such as genetics (Pettersson-Yeo et al. 2013), metabolomics data (Setoyama et al. 2016), and proteomic data (Diniz et al. 2016). The existing literature further shows that machine learning can detect individuals with psychological problems from brain measures with an accuracy above 75% (Arbabshirani et al. 2017; Kambeitz et al. 2016).
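To illustrate the kind of classification these studies describe, the sketch below trains a minimal nearest-centroid classifier on synthetic stand-ins for brain-derived feature vectors. All data, group sizes, and effect sizes are invented for the demonstration; the cited studies used real neuroimaging features and considerably more sophisticated models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for brain-derived feature vectors (e.g. regional
# volumes or connectivity measures); the two group means differ slightly.
n_per_group, n_features = 50, 20
patients = rng.normal(loc=0.5, scale=1.0, size=(n_per_group, n_features))
controls = rng.normal(loc=-0.5, scale=1.0, size=(n_per_group, n_features))

X = np.vstack([patients, controls])
y = np.array([1] * n_per_group + [0] * n_per_group)  # 1 = patient, 0 = control

# Random train/test split.
idx = rng.permutation(len(y))
train, test = idx[:70], idx[70:]

# Nearest-centroid classifier: label each test vector by the closer
# class mean learned from the training split.
mu1 = X[train][y[train] == 1].mean(axis=0)
mu0 = X[train][y[train] == 0].mean(axis=0)
d1 = np.linalg.norm(X[test] - mu1, axis=1)
d0 = np.linalg.norm(X[test] - mu0, axis=1)
pred = (d1 < d0).astype(int)

accuracy = (pred == y[test]).mean()
print(f"test accuracy: {accuracy:.2f}")
```

On well-separated synthetic groups like these, even this minimal classifier exceeds the 75% figure reported in the literature; the difficulty in practice lies in the noise and overlap of real clinical data, not in the classification step itself.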
Historically, psychology has relied on interviews and clinical observations, which tend to yield subjective and imprecise assessments. With AI, psychology has a hopeful opportunity to refine and transform the diagnosis of, and interventions for, psychological disorders (Bzdok and Meyer-Lindenberg 2018). AI can bring advanced computerized techniques such as automated language or speech analysis, computerized text analysis, and machine learning algorithms to bear on a client's mental state, going well beyond what can be evaluated through interviews and clinical observation alone (Elvevag et al. 2007).
Latent semantic analysis (LSA) is a high-dimensional computerized technique used mostly to analyze texts and transcribed speech. It is regarded as a flourishing tool that assists mental health professionals in diagnosing people with mental illness, and it has been used productively to help differentiate patients with schizophrenia from healthy people and from those with other mental disorders (Tenev et al. 2014).
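As a rough illustration of the LSA idea, not of any clinically validated tool, the sketch below builds a term-document matrix from a toy corpus, reduces it with a truncated SVD, and compares documents by cosine similarity in the latent space. The example "speech samples" and the rank k = 2 are arbitrary choices made for the demonstration.

```python
import numpy as np

# Toy corpus: two short "speech samples" per hypothetical group.
docs = [
    "sleep poor sleep tired tired low mood",
    "low mood tired sleep poor",
    "voices hear voices outside control thoughts",
    "thoughts control voices hear",
]

# Term-document matrix of raw word counts (documents as rows).
vocab = sorted({w for d in docs for w in d.split()})
tdm = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

# LSA: a truncated SVD projects documents into a low-rank "semantic" space.
U, s, Vt = np.linalg.svd(tdm, full_matrices=False)
k = 2
doc_vecs = U[:, :k] * s[:k]  # document coordinates in k latent dimensions

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents sharing vocabulary land close together in the latent space.
same = cos(doc_vecs[0], doc_vecs[1])
cross = cos(doc_vecs[0], doc_vecs[2])
print(f"same-topic similarity {same:.2f}, cross-topic {cross:.2f}")
```

A diagnostic application would compare a client's latent-space representation against those of reference groups; the principle, however, is exactly this similarity comparison.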
Various machine learning techniques help detect and distinguish people with attention deficit hyperactivity disorder (ADHD) from healthy individuals (Arbabshirani et al. 2017). The integration of machine learning systems with neuroimaging is also growing and provides a superior understanding of psychological disorders such as schizophrenia, autism spectrum disorder, and Alzheimer's disease (Kloppel et al. 2008; Ju et al. 2017). Similarly, deep learning techniques based on auto-encoders with a softmax regression layer have been used to support early diagnosis of Alzheimer's disease (AkhavanAghdam et al. 2018). One such model distinguished healthy individuals from people with autism spectrum disorder, and patients with schizophrenia from healthy controls, with an accuracy of 73.6% (Pinaya et al. 2016; Wunderink et al. 2009).
Therefore, the use of AI holds promise for mental health practice. It may complement clinical diagnosis and thus reduce false-negative and false-positive rates. Correct detection of a mental health disorder opens the way to appropriate treatment and prognosis.

Artificial intelligence for prognostic predictions of psychological disorders

Prognosis is a health care term referring to the predicted course of an illness: whether the condition will improve, worsen, or remain stable over time. A clinician usually writes a prognosis after completing the assessment, evaluation, and diagnosis of a particular client, considering the severity of the client's condition, impairments in various domains of life, level of functioning across settings, participation restrictions, and environmental factors in order to predict the expected gain in activities and reduction in symptomatology, as well as the total time needed to reach the defined level. Determining a client's prognostic outcome is thus analytically significant in the clinical practice of psychology, so that high-quality management, psycho-education, and treatment plans can be developed for the client with mental health problems.
For decades, best practice based on objective clinical experience and research has been that when a client receives a diagnosis of a particular mental health disorder, they are given the overall population-average chance of recovery and likelihood of remission. A further prediction is made on the basis of degree of illness, chronicity, and remissions: a client with chronic mental illness and an unremitting course over many years is expected to continue that course, or to relapse even after full remission, which implies a poor prognosis for these chronic patients. When a prognosis is made for a client presenting with mental illness for the first time, however, during the early stage of illness, it is often inaccurate (Siskind et al. 2016).
Stratified prognostic predictions could therefore assist accurate management and treatment planning by establishing stages along the course of mental health problems, for instance evolution from a severe to a moderate or mild episode, partial or full remission, relapse, changes in severity, levels of functioning across domains, and standard of living across life domains.
Recent research shows encouraging results in predicting the prognosis of various mental health disorders. Some neuroimaging-based studies of patients with Alzheimer's disease have predicted the transition of mild cognitive impairment to psychosis with 70% accuracy (Schmaal et al. 2015). A machine learning approach fusing structural and functional task-based magnetic resonance imaging has been employed to predict the chronicity of depression, improvement, and rapid remission over a period of 2 years, with significant predictive rates (Bertocci et al. 2017). Individuals have also been stratified using models that predict future substance abuse from neuroimaging data (Pestian et al. 2017). Machine learning algorithms based on linguistic and acoustic characteristics have been used to classify high-risk individuals as suicidal, mentally ill but not suicidal, or controls with an accuracy of up to 85% (Bedi et al. 2015). Automated speech analysis combined with machine learning has accurately predicted the development of psychosis among high-risk youth, outperforming classification based on interviews, in which most assessments depend on the client's motivation to report experiences accurately (Walsh et al. 2017). Another study precisely predicted future suicide attempts among adults with a history of self-injury by applying machine learning to patients' electronic health records, with an accuracy of 80% (Poulin et al. 2014).
Computerized text analytics applied to unstructured medical records has been used to predict suicidal behavior in veterans with 65% accuracy, allowing clinicians to better screen apparently healthy individuals, establish their prognosis, and assess their future risk of attempting suicide (Wong et al. 2010).
These studies focus on stratifying individuals into clinical and non-clinical groups to optimize prognostic evaluation. Improving the ability to predict whether a condition will worsen, improve, or remain stable could have a noteworthy influence on the identification of high-risk individuals and give clinicians useful information on which to base treatment and prognostic decisions.

Artificial intelligence for therapeutic interventions of psychological disorders

There are as yet no objective, independent, and personalized approaches for selecting among treatment options when tailoring the best psychotherapeutic and pharmacological interventions for psychological and psychiatric patients. Treatment is often steered primarily by recommendations based on symptom classifications, for instance the symptomatology of mood disorders, anxiety, or psychotic problems, and is converted into a personalized approach over time through trial-and-error learning and experience.
Treatment selection is most critical at the outset, as the initial handling of psychological symptoms shapes the prognosis, the course of treatment, and the treatment outcome. Better methods and practices are therefore needed for choosing among recognized pharmacological and psychotherapeutic modalities and innovative techniques such as noninvasive brain stimulation.
Nowadays, access to biological data such as MRI or genetics, and the availability of objective biomarkers, may support better treatment decisions. Machine learning studies have harnessed large, multi-site databases and sophisticated biological and genetic databases to support the treatment decisions of primary health care providers and clients for depression (Insel et al. 2010), and have predicted clinical remission at a rate of 65% for drug abuse (Khodayari-Rostamabad et al. 2013). Recent studies indicate that electroencephalography-based measures are advantageous in forecasting treatment response to medication in patients with depression and schizophrenia. Similarly, non-invasive functional magnetic resonance imaging (MRI) of the brain has been used to predict responsiveness to cognitive behavioral therapy for anxiety-related disorders with 75% accuracy (Khodayari-Rostamabad et al. 2010; Whitfield-Gabrieli et al. 2016).
AI-assisted interventions have been applied in psychology to various mental health problems. Computer-assisted therapy (CAT) offers stimulating prospects by supplying certain features of psychotherapeutic and behavioral treatments. It can also be delivered via the internet, allowing far more interactivity between clinician and client; this mode is called e-therapy (Carroll and Rounsaville 2010). CAT typically comprises programs with videos and psychometric tools delivered through a computerized platform to help clients deal with their symptoms. For example, the computer-assisted therapy Beating the Blues is endorsed by the National Institute for Health and Clinical Excellence; its efficacy in reducing depressive and/or anxiety-related symptoms has been demonstrated in randomized controlled trials (Proudfoot et al. 2003). The moderated online social therapy (MOST) program has likewise been effective in treating depression and psychosis in young adults (Alvarez-Jimenez et al. 2013; Rice et al. 2018).
Various conventional AI and machine learning techniques have been used in clinical psychology. Deep learning architectures have been used to predict antidepressant treatment response and remission (Lin et al. 2018). The random forest method showed an accuracy of 25% for treatment outcome with an antidepressant. A decision tree is a flowchart-like diagram that maps the possible outcomes of a series of decisions toward a diagnosis and treatment choice (Zhang et al. 2019). Decision trees built on age, mini-mental status examination scores, and structural imaging have reached an accuracy of 89% (Patel et al. 2015). A tree-based ensemble indicated a precision of 59% for remission of depression with antidepressants (Chekroud et al. 2016).
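To make the "flowchart-like" character of a decision tree concrete, the sketch below hard-codes a tiny hypothetical tree over features resembling those mentioned above (age, a mini-mental status examination score, a structural imaging finding). All thresholds and output labels are invented purely for illustration; a real clinical tree would be learned from data, not written by hand.

```python
# The MMSE cutoff and age threshold below are invented for illustration
# only; they are not the values used in any of the cited studies.
def triage(age: int, mmse: int, hippocampal_atrophy: bool) -> str:
    """Flowchart-style decision tree over three hypothetical features."""
    if mmse < 24:                      # low cognitive screening score
        if hippocampal_atrophy:
            return "refer: probable neurodegenerative process"
        return "refer: cognitive impairment, cause unclear"
    if age >= 65 and hippocampal_atrophy:
        return "monitor: imaging finding without cognitive deficit"
    return "no referral indicated"

print(triage(age=72, mmse=21, hippocampal_atrophy=True))
```

Each path from the root to a leaf corresponds to one readable clinical rule, which is the property that makes decision trees attractive for diagnosis and treatment selection compared with opaque deep models.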

Challenges and limitations of artificial intelligence

Despite being a state-of-the-art, fast-growing, and novel technology, AI has its fair share of criticisms, challenges, and concerns. There are potential threats to the privacy of client data, possibilities of medical error, and ethical concerns.
The Nuffield Council on Bioethics highlights the importance of identifying the ethical issues raised by using AI in health care. Apprehensions include erroneous decisions made by AI and who would be held responsible for such errors; the difficulty of validating the outputs of AI systems; and the possibility that AI could be used for malicious purposes, putting privacy and confidential data at stake (Nuffield Council on Bioethics 2018). Clinicians considering AI in their practice must decide where the technology belongs in their workflow and decision-making process; they may treat AI as a consulting tool to eliminate the fear of losing control over the diagnostic and management aspects of client care (Tahan 2023). Mental health professionals also have an ethical accountability to notify other health professionals, significant others, or authorities when a client signals risk of harm to self or others. How this should be handled in AI interventions is a point of concern, especially where no eligible human health care professional monitors and supervises the interaction between the AI therapist and the client. It is likewise unclear how, in case of a threat to life or a need for hospitalization or other protections, chatbots would connect at-risk individuals to suitable services. This issue may be addressed by blended programs for diagnosis, prognosis, and treatment.
AI therapists or applications need the same or similar ethical guidelines for therapeutic relationships as those followed by human therapists. To date, however, the code of practice for reporting harm and other ethical matters is unclear. One suggestion is supervision by a qualified human mental health clinician who evaluates self-harm, harm to others, need for hospitalization, confidentiality of records, and the interpretation of related risks. These concerns and challenges remain a theme for further debate and call for more research and careful planning of AI applications in psychological practice.

Propositions for new research for usage of artificial intelligence in psychology

The role of AI in psychological science is still underestimated among psychological science specialists, despite the innovative wave that has emerged regarding the use of AI in psychology for diagnosis and psychotherapy.
The use of AI in psychology can expand mental health patients' access to treatment and may reduce the cost of travel and treatment for the masses. However, the field has to overcome several challenges before these benefits can be realized.
The outbreak of the COVID-19 pandemic triggered a dynamic change in the health care delivery system: many services moved to the internet or to new digital applications under the label of telehealth. This also affected hospital psychology services, and a new field, telemental health, emerged. Psychological services are among the areas of health care that can be delivered via telehealth without losing their essence. The growth of telemental health services has in turn promoted the use of advanced technology, i.e. artificial intelligence, in health care (Shen et al. 2021).
Future research is needed to develop more applications based on new AI software, and more training is needed to use and empirically test that software across various populations and cultures so that it can benefit the field of psychology. New AI programs based on instructor-led role plays may not only be effective for clients but may also serve as a unique opportunity to produce trained counselors. AI technology can thus help improve the quality and flexibility of counselors, psychologists, and other mental health care practitioners through the training process.
Research is needed to verify the cross-cultural effectiveness of chatbots in perceiving, recognizing, and responding to human emotions, including the various human vocal and facial expressions. Empirically tested and verified AI software may be further extended by translating its content into native languages, as most apps are available only in English and were developed for Western populations.
AI software may be developed for various mental health problems such as depression, drug abuse, anxiety, stress, and loneliness. Psycho-educational sessions may be incorporated to help clients understand the illness and its mode of treatment, alongside assessment and diagnosis.
In psychological services, a therapist manually collects extensive history and information from the client that can support personalized care and treatment, but it is often difficult to unlock and organize these data. The data can yield clinical insights and benefit the process of diagnosis and treatment. Future research may target the application of such data: AI software could help mental health professionals sift through it and extract clinically actionable targets that improve the mental health care system and patient care, including preventive care. Such research may generate evidence for approaching mental health problems in a more targeted way.

Conclusions

In the coming years digitalization will increase, and with it the need for AI in psychology will grow well beyond today's. A detailed understanding of its use in prognosis, diagnosis, and treatment is therefore needed. In the present review we have summarized recent findings and applications that support AI-based interventions and software for preventive care, prognosis, forecasting, diagnosis, and treatment in psychology. In short, the AI framework can help make current mental health care more efficient, accessible, and cost-effective. Collaboration among mental health care professionals, engineers, AI specialists, administrators, and entrepreneurs is needed to achieve the full potential of AI applications in the service of the public.
As the studies reviewed above show, AI promises to deliver novel diagnostic, prognostic, and therapeutic approaches for people with mental illness. Future research may apply and assess the usefulness of AI-assisted therapy in intervention-based studies through randomized controlled trials, comparing it with traditional human-delivered therapy. Future studies may also explore the ethical aspects of AI-assisted therapies along with prognostic factors and diagnostic regimes. Such research would bring new direction and breadth to therapeutic practice and could play an imperative role in ameliorating the mental health of clients with psychological problems.

Acknowledgments

I would like to express my great appreciation to Dr. Tamkeen Saleem for her valuable and constructive suggestions during the planning and development of this research work. Her willingness to give her time so generously is very much appreciated.

Data availability statement

The data supporting this study’s findings are available from the corresponding author upon reasonable request.

Disclosure

The authors declare no conflict of interest.
References
1. Akhavan Aghdam M, Sharifi A, Pedram MM. Combination of rs-fMRI and sMRI data to discriminate autism spectrum disorders in young children using deep belief network. J Digit Imaging 2018; 31: 895-903.
2. Alvarez-Jimenez M, Bendall S, Lederman R, et al. On the HORYZON: moderated online social therapy for long-term recovery in first episode psychosis. Schizophr Res 2013; 143: 143-149.
3. American Psychiatric Association. Diagnostic and statistical manual of mental disorders (DSM-5®). American Psychiatric Pub 2013.
4. Arbabshirani MR, Plis S, Sui J, Calhoun VD. Single subject prediction of brain disorders in neuroimaging: promises and pitfalls. Neuroimage 2017; 145 (Pt B): 137-165.
6. Bedi G, Carrillo F, Cecchi GA, et al. Automated analysis of free speech predicts psychosis onset in high-risk youths. NPJ Schizophr 2015; 1: 15030.
7. Bertocci MA, Bebko G, Versace A, et al. Reward-related neural activity and structure predict future substance use in dysregulated youth. Psychol Med 2017; 47: 1357-1369.
8. Bzdok D, Meyer-Lindenberg A. Machine learning for precision psychiatry: opportunities and challenges. Biol Psychiatry Cogn Neurosci Neuroimaging 2018; 3: 223-230.
9. Carroll KM, Rounsaville BJ. Computer-assisted therapy in psychiatry: be brave-it’s a new world. Curr Psychiatry Rep 2010; 12: 426-432.
10. Chekroud AM, Zotti RJ, Shehzad Z, et al. Cross-trial prediction of treatment outcome in depression: a machine learning approach. Lancet Psychiatry 2016; 3: 243-250.
11. Diniz BS, Lin CW, Sibille E, et al. Circulating biosignatures of late-life depression (LLD): towards a comprehensive, data-driven approach to understanding LLD pathophysiology. J Psychiatr Res 2016; 82: 1-7.
12. Elvevag B, Foltz PW, Weinberger DR, Goldberg TE. Quantifying incoherence in speech: an automated methodology and novel application to schizophrenia. Schizophr Res 2007; 93: 304-316.
13. Gelernter H, Hansen JR, Loveland DW. Empirical explorations of the geometry-proving machine reprinted. In: E. Feigenbaum and J. Feldman (eds.). Computers and Thought. McGraw-Hill, New York 1963; 153-167.
14. Hendrix G, Lewis W. Transportable natural language interfaces to databases. Proc Assoc Comput Linguistics 1989; 159-165.
15. Insel T, Cuthbert B, Garvey M, et al. Research domain criteria (RDoC): toward a new classification framework for research on mental disorders. Am J Psychiatry 2010; 167: 748-751.
16. Ju R, Hu C, Zhou P, Li Q. Early diagnosis of Alzheimer’s disease based on resting-state brain networks and deep learning. IEEE/ACM Trans Comput Biol Bioinform 2017; 16: 244-257.
17. Kambeitz J, Cabral C, Sacchet MD, et al. Detecting neuroimaging biomarkers for depression: a meta-analysis of multivariate pattern recognition studies. Biol Psychiatry 2016; 82: 330-338.
18. Kessler RC, Amminger GP, Aguilar-Gaxiola S, et al. Age of onset of mental disorders: a review of recent literature. Curr Opin Psychiatry 2007; 20: 359-364.
19. Khodayari-Rostamabad A, Hasey GM, Maccrimmon DJ, et al. A pilot study to determine whether machine learning methodologies using pre-treatment electroencephalography can predict the symptomatic response to clozapine therapy. Clin Neurophysiol 2010; 121: 1998-2006.
20. Khodayari-Rostamabad A, Reilly JP, Hasey GM, et al. A machine learning approach using EEG data to predict response to SSRI treatment for major depressive disorder. Clin Neurophysiol 2013; 124: 1975-1985.
21. Klausmeier HJ, Ghatala ES, Frayer DA. Conceptual Learning and Development. Academic Press, New York 1974.
22. Kloppel S, Stonnington CM, Chu C, et al. Automatic classification of MR scans in Alzheimer’s disease. Brain 2008; 131 (Pt 3): 681-689.
23. Lee Y, Ragguett RM, Mansur RB, et al. Applications of machine learning algorithms to predict therapeutic outcomes in depression: a meta-analysis and systematic review. J Affect Disord 2018; 241: 519-532.
24. Lin E, Kuo PH, Liu YL, et al. A deep learning approach for predicting antidepressant response in major depression using clinical and genetic biomarkers. Front Psychiatry 2018; 9: 290.
25. Luxton DD. Artificial intelligence in psychological practice: current and future applications and implications. Prof Psychol 2014; 45: 332-339.
26. McDermott D. Methodological Polemic, presentation to the Artificial Intelligence Society of New England. Yale University 1980.
27. Mozer MC, Wiseheartd M, Novikoffc TP. Artificial intelligence to support human instruction. Proc Natl Acad Sci U S A 2019; 116: 3953-3955.
28. Nuffield Council on Bioethics. The big ethical questions for artificial intelligence (AI) in healthcare. Axt J 2018.
29. Orru G, Pettersson-Yeo W, Marquand AF, et al. Using support vector machine to identify imaging biomarkers of neurological and psychiatric disease: a critical review. Neurosci Biobehav Rev 2012; 36: 1140-1152.
30. Patel MJ, Andreescu C, Price JC, et al. Machine learning approaches for integrating clinical and imaging features in late-life depression classification and response prediction. Int J Geriatr Psychiatry 2015; 30: 1056-1067.
31. Pestian JP, Sorter M, Connolly B, et al. A machine learning approach to identifying the thought markers of suicidal subjects: a prospective multicenter trial. Suicide Life Threat Behav 2017; 47: 112-121.
32. Pettersson-Yeo W, Benetti S, Marquand A, et al. Using genetic, cognitive and multi-modal neuroimaging data to identify ultra-high-risk and first-episode psychosis at the individual level. Psychol Med 2013; 43: 2547-2562.
33. Pinaya WH, Gadelha A, Doyle OM, et al. Using deep belief network modelling to characterize differences in brain morphometry in schizophrenia. Sci Rep 2016; 6: 38897.
34. Poulin C, Shiner B, Thompson P, et al. Predicting the risk of suicide by analyzing the text of clinical notes. PLoS One 2014; 9: e85733.
35. Proudfoot J, Goldberg D, Mann A, et al. Computerized, interactive, multimedia cognitive-behavioural program for anxiety and depression in general practice. Psychol Med 2003; 33: 217-227.
36. Rice S, Gleeson J, Davey C, et al. Moderated online social therapy for depression relapse prevention in young people: pilot study of a ‘next generation’ online intervention. Early Interv Psychiatry 2018; 12: 613-625.
37. Schmaal L, Marquand AF, Rhebergen D, et al. Predicting the naturalistic course of major depressive disorder using clinical and multimodal neuroimaging information: a multivariate pattern recognition study. Biol Psychiatry 2015; 78: 278-286.
38. Setoyama D, Kato TA, Hashimoto R, et al. Plasma metabolites predict severity of depression and suicidal ideation in psychiatric patients: a multicenter pilot analysis. PLoS One 2016; 11: e0165267.
39. Shen YT, Chen L, Yue WW, Xu HX. Digital technology-based telemedicine for the COVID-19 pandemic. Front Med (Lausanne) 2021; 6: 646506.
40. Shukla S, Jaiswal V. Applicability of artificial intelligence in different fields of life. IJSER 2013; 1: 2347-3878
41. Silva G, Dwiggins D. Toward a Prolog Grammar. SIGART 1980; 73: 20-25.
42. Siskind D, McCartney L, Goldschlager R, Kisely S. Clozapine v. first- and second-generation antipsychotics in treatment-refractory schizophrenia: systematic review and meta-analysis. Br J Psychiatry 2016; 209: 385-392.
43. Steel Z, Marnane C, Iranpour C, et al. The global prevalence of common mental disorders: a systematic review and meta-analysis 1980–2013. Int J Epidemiol 2014; 43: 476-493.
44. Tahan M. Robot-based psychological intervention program for the prevention of child sexual abuse: an overview. Neuropsychopharmacol Hung 2023; 25: 18-25.
45. Tenev A, Markovska-Simoska S, Kocarev L, et al. Machine learning approach for classification of ADHD adults. Int J Psychophysiol 2014; 93: 162-166.
46. Tran BX, Vu GT, Ha GH, et al. Global evolution of research in artificial intelligence in health and medicine: a bibliometric study. J Clin Med 2019; 8: 360.
47. Trautmann S, Rehm J, Wittchen HU. The economic costs of mental disorders: do our societies react appropriately to the burden of mental disorders? EMBO Rep 2016; 17: 1245-1249.
48. Wakefield JC. Diagnostic issues and controversies in DSM-5: return of the false positives problem. Annu Rev Clin Psychol 2016; 12: 105-132.
49. Walsh CG, Ribeiro JD, Franklin JC. Predicting risk of suicide attempts over time through machine learning. Clin Psychol Sci 2017; 5: 457-469.
50. Whitfield-Gabrieli S, Ghosh SS, Nieto-Castanon A, et al. Brain connectomics predict response to treatment in social anxiety disorder. Mol Psychiatry 2016; 21: 680-685.
51. Winston PH. Learning by creating and justifying transfer frames. In: P.H. Winston and R.H. Brown (eds.). Artificial Intelligence: An MIT Perspective. MIT Press, Cambridge 1979; 347-376.
52. Wong EHF, Yocca F, Smith MA, Lee CM. Challenges and opportunities for drug discovery in psychiatric disorders: the drug hunters’ perspective. Int J Neuropsychopharmacol 2010; 13: 1269-1284.
53. World Health Organization. Depression and other common mental disorders: global health estimates. Geneva: World Health Organization 2017 (http://apps.who.int/iris/bitstream/10665/254610/1/WHO-MSD-MER-2017.2-eng.pdf?ua=1, accessed 5 March 2018).
54. Wunderink L, Sytema S, Nienhuis FJ, Wiersma D. Clinical recovery in first-episode psychosis. Schizophr Bull 2009; 35: 362-369.
55. Zhang L, Li J, Yin K, et al. Computed tomography angiography-based analysis of high-risk intracerebral haemorrhage patients by employing a mathematical model. BMC Bioinform 2019; 20 (Suppl 7): 193.
Copyright: © 2023 Termedia Sp. z o. o. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License (http://creativecommons.org/licenses/by-nc-sa/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material, provided the original work is properly cited and states its license.