eISSN: 2353-561X
ISSN: 2353-4192
Current Issues in Personality Psychology

vol. 5
Review paper

Data integration levels. Between scientific research and professional practice in clinical psychology

Jerzy M. Brzeziński

Institute of Psychology, Adam Mickiewicz University in Poznan, Poland
Current Issues in Personality Psychology, 5(3), 163–171.
Online publish date: 2017/09/22


There is no need to justify the thesis that psychological practice (here: clinical) carried out by professional psychologists2 is strongly dependent on the state of psychology’s research practice. It is this dependence that makes the actions taken by psychologists (and not only by them) in the sphere of professional practice rational (and at the same time sensible). To put it more strongly: outside the scientific achievements of psychology there is no such thing as psychological practice; going even further, a practice detached from them is unethical. What is it, then? It is “something” in the shape of shamanic practices, or practices that pretend to be (imitate) psychological practice. Such scientifically unacceptable practice relies on colloquial language, which simplifies and distorts the meaning of terms taken from the language of psychological theories sensu proprio (such as emotion, personality or motivation), and it also exploits (quite often out of a desire for profit on the part of representatives of this “auxiliary” trend) the naivety of the recipients of these deceptive practices.
I accept (Brzeziński, 2016a, 2016c) – remaining, of course, in the scientific realm – that clinical practice only makes sense when it refers directly to the scientific knowledge created in the field of psychological research. Psychology, as an empirical discipline, refers to experience as the only criterion for determining the truth of the statements formulated by psychologists. The ethical imperative for clinical psychologists is to build clinical practice on empirically tested scientific knowledge. Such knowledge, however, arises only by constructing empirical theories in a way that respects the requirements of testability and replicability of research results. It is therefore important to focus on the “immersion” of professional practice in the context of empirical psychological theory.
The relationship between the two spheres – research practice (psychology as empirical science) and professional practice (psychology as a scientifically meaningful practical action: diagnostic and therapeutic) – is schematically depicted in Fig. 1.
The state of a specific social practice (here: in the domain of clinical psychology) is primary. Practical actions undertaken by clinicians within it find their justification in scientific knowledge (description and explanation), and their effectiveness is a derivative of the methods of practical action, diagnostic and therapeutic, built on this knowledge. If, however, these methods do not guarantee satisfactory service to the sphere of social practice, a need arises for new, more effective methods of practical action that will remove this inconvenience. This need is addressed to the sphere of research practice, in which new empirical theories are created (or existing ones corrected). These form a leaven for new diagnostic and therapeutic methods, whose effectiveness is constantly checked, leading to results that are not always universally accepted by specialists. As an example, we can point to the “never-ending story”, going on for years, of the debate over the advantages and disadvantages of two approaches to testing the effects of psychotherapy: efficacy vs. effectiveness (Cierpiałkowska, 2016 – see Table 34.2, p. 734). The methods developed (and approved within the professional community) penetrate into the realm of clinical practice and – for some time – satisfy the needs of social practice. I leave aside here the pathological cases in which this demand is directed away from science (towards religious or shamanic practices) and is served by people who merely pretend to be professionals. Nor do I deal with cases in which clinicians build support programmes on pseudoscientific (façade, make-believe) concepts (I hesitate to use the term “theory” for them). Psychoanalysis is particularly abused in this respect. In this article, a scientific foundation means only testable empirical theories (as discussed in the following paragraphs) and the methods developed on their basis.

Empirical theory – test theory – professional practice: diagnosis and therapy

Let us repeat: the scientific theories created by psychologists are empirical theories. This means that they are evaluated by confronting the “predictions” derived from them (by deduction) with the results of experiments and practical applications. They must withstand empirical testing. This testing is conducted according to one of two strategies: (a) the positive strategy – confirmation, or (b) the negative strategy – falsification.
According to the first strategy, the researcher looks for empirical data confirming the predictions derived from a theory’s claims. Let us note, however, that the researcher is not able to search through the whole set of potential results before claiming that a theorem is confirmed. By the nature of things, this number must be limited, and at some point in the process of confirmation the theory must be recognised as sufficiently well supported. Individual psychological subdisciplines have access to data of varying degrees of “hardness”. Unfortunately, it is a weakness of clinical psychology that it uses data from the lower end of this “hardness” scale. The consequence is that clinicians, far more often than, for example, psychophysicists, make false starts, i.e. they treat theories still in their infancy as methodologically mature constructions on which responsible clinical practice can be built. One can therefore speak of a confirmation delusion to which clinicians succumb.
In turn, the second, falsifying strategy, otherwise known as the method of putting forward and criticising hypotheses, was invented by Karl R. Popper, author of the fundamental work The Logic of Scientific Discovery (Popper, 1959/2005). According to Popper, the task of a scholar is not to seek, at all costs, proof of his/her theoretical ideas, but – which from a psychological point of view is difficult to accept by the researcher – to look for data that contradict the predictions derived from the assumptions of a theory. As a consequence, as Karl Popper wrote: “If this decision is positive, that is, if the singular conclusions turn out to be acceptable, or verified, then the theory has, for the time being, passed its test: we have found no reason to discard it. But if the decision is negative, or in other words, if the conclusions have been falsified, then their falsification also falsifies the theory from which they were logically deduced. It should be noticed that a positive decision can only temporarily support the theory, for subsequent negative decisions may always overthrow it. So long as a theory withstands detailed and severe tests and is not superseded by another theory in the course of scientific progress, we may say that it has ‘proved its mettle’ or that it is ‘corroborated’ by past experience” (p. 10).
It is true that methodologists agree on the basic disadvantages of the positive strategy and the advantages of the negative one; in the research practice of psychologists, however, the former prevails. Note also that the confirmation strategy adopted by clinicians favours – in order to defend the tested theory – the formulation of ad hoc hypotheses that modify the theory so that negative empirical data can be counted among the data confirming it. In this way one obtains a rather convoluted description of only those facts which the researcher identified and which – with the help of ad hoc hypotheses – he/she considered not to contradict the defended, elaborate “theory”.
In the way described above, the “candidate” theories accepted by psychologists enter social circulation (see Fig. 1). Contemporary clinical psychology in its scientific dimension likewise assumes, as the methodological starting point for undertaking professional activities, a theory created and tested in the process of intersubjectively controlled scientific research. Such a position on the scientific validation of clinical practice is consistent with the report of the American Psychological Association Presidential Task Force on Evidence-Based Practice (2006) and the model developed on its basis, evidence-based practice in psychology (EBPP): On the basis of its review of the literature and its deliberations, the Task Force agreed on the following definition: Evidence-based practice in psychology (EBPP) is “the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences. […] Best research evidence refers to scientific results related to intervention strategies, assessment, clinical problems, and patient populations in laboratory and field settings as well as to clinically relevant results of basic research in psychology and related fields. APA endorses multiple types of research evidence (e.g. efficacy, effectiveness, cost-effectiveness, cost-benefit, epidemiological, treatment utilization) that contribute to effective psychological practice. Multiple research designs contribute to evidence-based practice, and different research designs are better suited to address different types of questions” (pp. 273-274) (my emphasis).
It is also consistent with Jerzy Brzeziński’s comprehensive model: Scientific Research and Professional Practice in Psychology (SRPPP) (Brzeziński, 2016b). The editors of the most important Polish textbook in clinical psychology, Psychologia kliniczna [Clinical Psychology], Lidia Cierpiałkowska and Helena Sęk (Cierpiałkowska & Sęk, 2016b) wrote in a text describing the current state and developmental trends of clinical psychology (Cierpiałkowska & Sęk, 2016a): “The scientific level of clinical psychology is determined by creating a theory, adhering to methodological assumptions, and conducting modern research. Therefore, interrelations between theory and practice constantly constitute a subject for reflection and an area with new tasks to undertake” (pp. 419-420) (my emphasis).
A properly constructed and tested psychological theory is the basis of professional practice. Both diagnosis and therapy – planned and conducted within it – must, in order not to be excluded, meet (now very strict) methodological and ethical standards: evidence-based assessment (EBA) or evidence-based practice in psychology (EBPP) (Brzeziński, 2016b). The relationships presented above are shown in Fig. 2.
The realm of social practice “imports” from psychology (viewed from a scientific perspective) what is most valuable and what may help to resolve practical (here: clinical) problems (e.g. anxiety, depression, alcoholism, burnout syndrome), namely empirical theories and the scientifically justified methods built on them: diagnostic (cf. the concept of construct validity, obligatory for psychological tests – Cronbach & Meehl, 1955) and therapeutic (e.g. cognitive behavioural therapy [CBT], developed by Aaron T. Beck). It can therefore be said that psychologists do what they know best, that is, they construct and test empirical theories – “...the work of the scientist consists in putting forward and testing theories” (Popper, 1959/2005, p. 7); “The major task in any science is the development of theory” (Schmidt, 1992, p. 1177) – and (in collaboration with professionals) they construct and test diagnostic and therapeutic methods, while professionals use (or creatively modify) them to solve the problems of society and of their individual clients. In a very broad understanding of the subject and tasks of clinical psychology (cf. Cierpiałkowska & Sęk, 2016c), the differences between the tasks of the two spheres – psychologists undertaking basic research (psychology as empirical science) and clinical practitioners applying its results (clinical practice) – become blurred: One of the basic tasks of clinical psychologists is to take care of the theoretical achievements of the field and to reflect on its improvement. These tasks take different forms and rely on the creation of new models and concepts. They apply to a wide content area.
These include the development of the concept of health and its genesis (saluto- and pathogenesis), psychological concepts of disorders, foundations of scientific research and diagnosis based on facts, as well as basic research into various types of psychological help (p. 31) (original emphasis).
Clinical psychology not only draws on basic psychological research (after appropriate transformation), but also uses research findings from other scientific disciplines (e.g. neuroscience and anthropology); these are positive influences enriching both professional practice and research.
In Fig. 2, the box “other applications” refers exclusively to those applications that – in line with the present state of scientific knowledge – are accepted by the scientific community (as testable and intersubjective) and by the professional community (e.g. a diagnosis meeting the Daubert standard).
Professional practice based on clinical psychology is not limited to supportive actions (therapy, counselling, prevention) and diagnosis. Empirical research is also carried out, related to the identification of the sources of mental disorders or the effectiveness of psychotherapy, rehabilitation, counselling, etc. The results of scientific research influence professional practice standards – EBA and EBPP – by raising them to a higher methodological level. Reciprocally, they enrich the scientific knowledge of psychology.
It must also be noted that professional practice is affected by non-scientific, destructive factors (e.g. religion, ideology, business, fashion), which puts into question the mission and ethics of those psychologists who have succumbed to such influences. We can also observe, unfortunately, a belief in pseudoscientific concepts and a fascination with “therapeutic” approaches that have little to do with science (except perhaps misleading linguistic and empirical similarities) (e.g. Bert Hellinger’s systemic family constellations), as well as with pseudo-scientific “diagnostic” methods (e.g. the Koch tree test, Lüscher colour test, Szondi test, and Rosenzweig picture-frustration test).
On the other hand, much greater emphasis than previously is placed on cultural factors (cf. American Psychological Association, 2008; Brzeziński, 2016b). These, too, are positive “other conditions” shown in Fig. 2.

Integration levels

A clinician seeking to identify the causes of the undesirable state of social practice (in an attempt to respond to social needs) must do the following:
a) explain the present state by referring to psychological theory (as understood above). It must be borne in mind that an explanation is always causal (“…the goal in every science is explanation, and explanation is always causal”, Schmidt, 1992, p. 1177),
b) design psychological corrective actions (treatment),
c) carry out the treatment,
d) examine the effectiveness of corrective actions taken.
These tasks may have a “deep” character – the clinician does not have a ready-made empirical theory that has already been used to solve the same or similar problems. He/she first becomes a researcher and only then a practitioner. Of course, these sub-tasks need not be (and generally are not) performed by the same person, and the time of their performance may be measured in years (as was the case with the theory and the diagnostic and psychotherapeutic methods developed by Aaron Beck’s team for solving the problem of depression). The tasks may also have a “shallow” character, when the problem of a single person is solved using commonly known clinical methods. This, in general, is how a professional operates – using tried and tested diagnostic and therapeutic methods (this is the role of clinicians’ professional development). He/she does, in fact, draw (though not uncritically) on the scientific competences of researchers/psychologists and uses their tools. There is only one problem: does the practitioner know how to use these tools competently (and ethically)?
A clinician designing an empirical theory and carrying out empirical research to evaluate it (EBPP standard), designing diagnostic procedures (EBA standard, Daubert standard), and designing a therapeutic model and evaluating its effectiveness (EBPP standard) has to deal with a variety of data arising at different levels. It is here that the problem stated in the title – data integration – appears. Let us take a look at it.
In my opinion, four levels at which this integration takes place can be identified. At each level we are dealing with theories; the empirical data that emerge are mediated by and justified in these theories. The levels are also to some extent dependent on one another. In other words, we are dealing with integration both within each level and between levels.
These levels are:
a) Level I: Constructing variables and building hypothetical relationships between them;
b) Level II: Operationalisation of variables, i.e. giving the variables from Level I an empirical sense;
c) Level III: Quantitative interpretation of the data obtained in empirical research (scientific or diagnostic interpretation, e.g. evaluating the effectiveness of an assistance programme such as therapy). Here the interpretative framework is the theory of the psychological test (or of another tool used in the operationalisation procedure – Level II) and statistical theory;
d) Level IV: Qualitative (clinical) interpretation of the data developed at Level III. Here the interpretative framework is provided by the psychological empirical theory from Level I.
Let us now proceed to describe what is going on at different levels.
Level I. This level can be called “theoretical”, both in scientific empirical research and in clinical diagnostic research, which is modelled on scientific research (“...the diagnostic activity of a clinical psychologist should be defined as a form of scientific research”, Lewicki, 1969, p. 84). The psychologist must define variables and combine them into hypothetical relationships. The variables are defined, of course, in the language of an empirical theory. This theory either already exists in the psychological community (it has undergone the relevant empirical tests) or is only being built and requires empirical verification before it can be used to define a variable. In the language of theoretical variables, the psychologist formulates hypotheses (in scientific research and in diagnostic research).
For example, referring to Strelau’s Regulative Theory of Temperament (Strelau, 1998), the psychologist may refer to a definition of the term “temperament”. J. Strelau’s theory is empirically grounded and well accepted by the worldwide psychological community, and a clinical psychologist who accepts Strelau’s point of view can simply incorporate the variable “temperament” into his/her matrix of variables. There is no need to push an already open door.
Two moments are key at this level of research (scientific or diagnostic): (a) the creation of a hypothetical set of (independent) variables relevant, according to the psychologist’s best knowledge, to a given dependent variable, and (b) the creation of a hypothetical relevance function within this set of variables. It must be stressed that a theory is not built out of occasional, random combinations of variables of universal value. Nor is a theory built by accumulating as large a number of variables as possible, which are then described in terms of their percentage contribution to explaining the variability of the dependent variable (e.g. with the help of effect-size indicators). Sometimes it seems that (creative) psychologists have entrusted their thinking to a computer that performs complex statistical analyses (e.g. factor analysis or regression analysis), then reconstructs the space of variables relevant to Y and establishes a “theory” on that basis. A computer, however, will not replace a creator: it is the creator, not the computer, who in the final instance chooses the starting set of variables (and its size); otherwise the researcher is left to accept whatever the computer has “invented” (sic!).
Variables are generally defined within a single theory, though not uncommonly more than one theory is involved. Theories, in turn, are immersed in certain paradigms (as understood by Kuhn, 1970). The first step a psychologist must perform (though he/she is not always aware of this) is the choice of the paradigm in which a “theory” is “immersed” (and, at the lowest level of theorising, in which the theoretical definition of a given variable is formulated). Choosing a paradigm makes it possible to “descend” to the lower levels of theoretical and empirical analysis. Here, however, it is easy to fall into a trap when we want to build a model with variables taken from different, mutually exclusive paradigms. All the theoretical terms introduced into a theory must be defined in the language of the same paradigm; the principle of paradigmatic consistency should be respected. Clinicians conducting research at the subparadigmatic level are not always aware of the importance of this principle. Breaking the principle of paradigmatic consistency is also evident when they operationalise variables using psychological tests (personality questionnaires) derived from psychological theories created in different paradigms. How, then, are we to reconcile Rorschach’s projective method (agreeing, for a moment, to treat it as science, because it is difficult to accept by current methodological standards) with the MMPI personality questionnaire, if we take seriously the theoretical assumptions (construct validity!) behind psychological tests? A synthetic description of Level I is given in Table 1.
Level II: Every empirical theory requires empirical interpretation. Theoretical terms (e.g. intelligence, health, anxiety) must be related to observational terms. Very important, then, is an operationalisation of variables adequate to the assumptions of the particular psychological theory underlying the hypothetical set of variables considered by the psychologist to be relevant to a given dependent variable (which happens at Level I). Let us recall that operationalisation consists in giving “empirical meaning to theoretical terms” (Hornowska, 1989, p. 5) – but not (!) in the spirit of Bridgman’s operationalism, which in its classical form is in fact passé, though its echoes still appear in journals and brochures about “test-like” products (Bridgman, 1927, p. 5): “…in general, we mean by a concept nothing more than a set of operations; the concept is synonymous with the corresponding sets of operations”. Operationalism was for a time very popular in psychology and set the methodological standards of research work (see Feest, 2005). Its “peak” achievement in the field of psychological tests was the caricatural definition of the term “intelligence” coined by Edwin Boring (1923): “Intelligence is what the tests test”. It seems that in some areas of practical application of psychology it is still accepted. Let us stress, then, that a programme of variable operationalisation must be derived from a particular psychological theory and, in particular, be consistent with it. And it is such a programme that Elżbieta Hornowska, quoted here, proposed.
Methodologically incorrect is a programme of the operationalisation of variables that refers to different theories (and precisely to the definitions of theoretical terms built on the basis of different theories) unrelated among themselves (breaking the principle of paradigmatic consistency). The researcher is then obliged to follow another rule, which could be called the principle of compliance of the operationalisation programme with the theoretical programme.
Too often, clinicians refer in an operationalisation programme (sometimes simplifying it) to psychological tests. The responsible use of psychological tests, however, must take into account the specific theory (model) of psychological tests. Without going into details (above all for lack of space in this highly synthetic text), I will say that the psychological tests in scientific and clinical use refer to the classical theory of tests – the true-score theory developed by Harold Gulliksen (1950). Tests built on the newest test model, Item Response Theory (IRT), have so far not found practical clinical use.
In addition to tests, psychologists refer to specialist diagnostic apparatus, whose construction is based on specific theoretical assumptions. Their knowledge is essential for their proper use.
A significant place in the instrumentation of clinicians is taken by non-test – also called clinical methods sensu proprio – diagnostic methods: clinical interview, observation, pathopsychological experiment (in the sense of B. W. Zeigarnik and S. J. Rubinsztejn – see Brzeziński, 1983) and analysis of products. These should also be derived from a psychological theory.
Let us conclude that only well-performed operationalisation determines the quality of scientific empirical and diagnostic research. As a consequence, it affects the quality of the psychological practice: diagnostic (EBA) and therapeutic (EBPP). What is going on at level II is shown synthetically in Table 2.
Level III. At this level the psychologist, referring to statistical tools, quantitatively processes the empirical data obtained with the tools used in the operationalisation of variables (see Level II). In the case of clinical diagnosis, research on the effectiveness of psychotherapy, or research conducted by psychologists/clinicians, these will be the results of psychological tests (in particular personality questionnaires and intelligence scales), instrumental measurements, and data collected through clinical interviews and observations. The collected raw results are subject to standardisation or aggregation. Clinicians, like other psychologists, refer to the models of null hypothesis significance testing (NHST) and confidence intervals. They also refer to multiple regression models and profile analysis. This is shown synthetically in Table 3.
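As an illustration of the standardisation step mentioned above, the conversion of a raw test result to standard scales can be sketched as follows. This is a hedged example, not taken from the article; the normative mean and standard deviation used below are hypothetical.

```python
# Illustrative sketch of Level III standardisation of raw test scores.
# The norm values (mean = 25, SD = 5) are hypothetical.

def z_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Standardise a raw score against a normative mean and SD."""
    return (raw - norm_mean) / norm_sd

def t_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Convert to the T scale (mean 50, SD 10) common in personality
    questionnaires."""
    return 50 + 10 * z_score(raw, norm_mean, norm_sd)

# A raw score of 32 against the hypothetical norm:
z = z_score(32, 25, 5)
t = t_score(32, 25, 5)
print(z, t)  # 1.4 64.0
```

Only after such standardisation (or aggregation) can the results be passed on to the qualitative, theory-driven interpretation of Level IV.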
The widespread availability of easy-to-use statistical packages such as SPSS, STATISTICA, SAS and STATA means that empirical research is being developed at a significantly higher methodological level than some time ago.
Level IV: Empirical theory was the point of departure in psychological research (diagnostic or scientific) and is also its point of arrival. In the language of the same empirical theories, the variables are defined (cf. Level I), the diagnostic and scientific hypotheses are constructed, and the results of the empirical research are interpreted. When we refer not to one but to several theories, it is important to observe the principle of paradigmatic consistency. Table 4 shows the specificity of Level IV.
So far we have talked about the “horizontal” integration within each of the four levels (see Tables 1-4). One can also talk about “vertical” integration between the levels; in fact, it was already signalled when the individual levels were characterised. This characteristic interlinking (conditioning) is shown in Fig. 3.
The solutions adopted by the psychologist at Level I (the primary one) have a decisive impact on the quality of the whole research, and ultimately on the quality of professional practice: diagnosis and treatment (cf. the standards included in the EBA and EBPP models). Fig. 1 shows that, to achieve coherence of the whole research programme, and consequently coherence of the practical programme, it is necessary to use a cohesive language – sometimes drawn from different empirical theories, but always respecting the primordial principle of paradigmatic consistency.
Thus, at Level II, in the programme of operationalisation of variables, the construct validity of psychological tests is established (or, where a test is specially constructed, built in). Construct validity is considered the most important property of a psychological test (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014).
At Level IV, when interpreting the results of scientific or diagnostic research, or when testing the effectiveness of a new treatment programme, the psychologist also refers to the established language of the theory in which the variables were defined. It cannot be that variables are defined, and hypotheses (research or diagnostic) constructed from them, in the language of one theory (assuming it is correctly tested and is a theory sensu proprio), while the research results are then “sensibly” (sic!) interpreted in another language, created within a different paradigmatic perspective.
The operationalisation (Level II) of the dependent variable and the independent variables (quite often via a psychological test) refers to statistical tools, i.e. to the findings of Level III. The researcher chooses a specific statistical model within which he/she not only constructs the test itself but also interprets its result. An interpretation referring to the concept of confidence intervals (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014) is recommended.
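A minimal sketch of this recommended confidence-interval interpretation, assuming the classical true-score model (the Gulliksen tradition cited above), where the standard error of measurement is SEM = SD · sqrt(1 − reliability). The scale parameters and reliability coefficient below are hypothetical, chosen only for illustration.

```python
# Confidence interval around an observed test score under classical
# test theory: X ± z * SEM, with SEM = SD * sqrt(1 - r_xx).
# The scale (SD = 15) and reliability (0.91) are hypothetical.
import math

def confidence_interval(observed: float, sd: float, reliability: float,
                        z: float = 1.96) -> tuple:
    """Return the (lower, upper) bounds of the CI for an observed
    score; z = 1.96 gives an approximately 95% interval."""
    sem = sd * math.sqrt(1 - reliability)
    return observed - z * sem, observed + z * sem

# An observed score of 100 on an IQ-style scale (SEM = 4.5):
low, high = confidence_interval(100, 15, 0.91)
print(round(low, 1), round(high, 1))  # 91.2 108.8
```

Reporting the interval rather than the point score makes the measurement error of the test explicit in the diagnostic interpretation, which is the substance of the recommendation in the Standards cited above.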
Synchronisation of the actions undertaken either by the clinician-researcher or the clinician-professional as shown in Fig. 3 has an impact on the reliability and appropriateness of actions taken.

End notes

1 On the evolution of the understanding of the term “professionalism” in psychology and the dual understanding of psychology as a “scientific discipline” and as a “field of practice” cf. Kimble (1984) and Bańka (1996).
2 See Note 1.


References

American Psychological Association Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61, 271–285.
American Psychological Association. (2008). Report of the Task Force on the Implementation of the multicultural guidelines. Washington, DC: Author. Retrieved from: http://www.apa.org/pi/
American Educational Research Association, American Psychological Association, National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: Author.
Bańka, A. (1996). O profesjonalizmie psychologicznym i jego związkach z nauką i etyką. Czasopismo Psychologiczne, 2, 81–100.
Boring, E. G. (1923). Intelligence as the tests test it. New Republic, 36, 35–37.
Bridgman, P. W. (1927). The logic of modern physics. New York: MacMillan.
Brzeziński, J. (1983). Wartość eksperymentu patopsychologicznego dla diagnostyki klinicznej. In Wł. J. Paluchowski (ed.), Z zagadnień diagnostyki osobowości (pp. 93–106). Wroclaw: Ossolineum.
Brzeziński, J. (2016a). Etyka postępowania psychologa klinicznego w badaniach naukowych i praktyce. In L. Cierpiałkowska & H. Sęk (eds.), Psychologia kliniczna (pp. 81–98). Warsaw: Wydawnictwo Naukowe PWN.
Brzeziński, J. (2016b). Towards a comprehensive model of scientific research and professional practice in psychology. Current Issues in Personality Psychology, 4, 2–10.
Brzeziński, J. (2016c). On the methodological peculiarities of scientific research and assessment conducted by clinical psychologists. Roczniki Psychologiczne, 19, 453–468.
Cierpiałkowska, L. (2016). Chapter 34. Efektywność poradnictwa psychologicznego i psychoterapii. In L. Cierpiałkowska & H. Sęk (eds.), Psychologia kliniczna (pp. 738–738). Warsaw: Wydawnictwo Naukowe PWN.
Cierpiałkowska, L., & Sęk, H. (2016a). Scientific and social challenges for clinical psychology. Roczniki Psychologiczne, 19, 419–436.
Cierpiałkowska, L., & Sęk, H. (eds.). (2016b). Psychologia kliniczna. Warsaw: Wydawnictwo Naukowe PWN.
Cierpiałkowska, L., & Sęk, H. (2016c). Psychologia kliniczna jako dziedzina badań i praktyki. In L. Cierpiałkowska & H. Sęk (eds.), Psychologia kliniczna (pp. 21–33). Warsaw: Wydawnictwo Naukowe PWN.
Cronbach, L., & Meehl, P. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.
Feest, U. (2005). Operationism in psychology: What the debate is about, what the debate should be about. Journal of the History of the Behavioral Sciences, 41, 131–149.
Gulliksen, H. (1950). Theory of mental tests. New York: Wiley.
Hornowska, E. (1989). Operationalization of psychological quantities. Założenia – struktura – konsekwencje. Wroclaw: Ossolineum.
Kimble, G. A. (1984). Psychology’s two cultures. American Psychologist, 39, 833–839.
Kuhn, T. S. (1970). The structure of scientific revolutions (2nd edition, enlarged). Chicago, IL: The Chicago University Press. Retrieved from http://projektintegracija.pravo.hr/_download/repository/Kuhn_Structure_of_Scientific_Revolutions.pdf
Lewicki, A. (1969). Psychologia kliniczna w zarysie. In A. Lewicki (ed.), Psychologia kliniczna (pp. 10–155). Warsaw: Państwowe Wydawnictwo Naukowe.
Popper, K. (1959/2005). The logic of scientific discovery. London & New York: Taylor and Francis e-Library. Retrieved from http://strangebeautiful.com/other-texts/popper-logic-scientific-discovery.pdf
Schmidt, F. L. (1992). What do data really mean? Research findings, meta-analysis and cumulative knowledge in psychology. American Psychologist, 47, 1173–1181.
Strelau, J. (1998). Temperament: A psychological perspective. New York and London: Plenum Press.
Copyright: © 2017 Institute of Psychology, University of Gdansk This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License (http://creativecommons.org/licenses/by-nc-sa/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material, provided the original work is properly cited and states its license.