eISSN: 2353-561X
ISSN: 2353-4192
Current Issues in Personality Psychology

vol. 4
Review paper

Towards a comprehensive model of scientific research and professional practice in psychology

Jerzy Marian Brzeziński

Current Issues in Personality Psychology, 4(1), 1–10
Online publish date: 2016/03/18

Theory and practice – together, not separately

The activity of psychologists is conducted in two interlocking domains. The first one is the area of scientific research carried out according to the methodological standards of the empirical sciences. The second one is the domain of professional practice. When it comes to the issue of how the two domains are (or should be) treated, the analogy to Hans Reichenbach’s (1938) concept of two contexts comes to mind: the context of discovery and the context of justification. According to this concept, the two contexts should be treated separately. In particular, the analysis of the context of discovery would be the domain of psychology, or sociology, and the analysis of the context of justification would be the domain of methodology. Years later it was demonstrated that this dichotomy of contexts is impossible to maintain and nowadays the accepted thesis is that of a unity of the two contexts. To put it briefly, it is impossible to indicate where the first context ends and the other one starts. When carrying out research activities focused on empirical verification of a hypothesis, we also make some discoveries that can lead to new hypotheses, etc. The contexts intertwine, becoming one.
Similarly, the domains of scientific (empirical) research and psychological practice also intertwine. When conducting professional activity – as dictated by practical directives derived from a given empirical psychological theory – not only do we obtain the desired change of the given status quo (a good example would be the psychotherapeutic or rehabilitative activities of a psychologist), but we also receive feedback that can improve the initial theory. Assessment practice has a similar corrective effect on tools such as psychological tests (leading to an update of the norms – which, in the case of intelligence scales, is also a derivative of the Flynn effect – or to a correction of content validity) and ultimately improves them.
For the accurate reconstruction of the actions that psychologists conduct within those two domains their unity needs to be assumed (rather than their separateness). However, if we ask which party starts this kind of “game”, and leads in it, the answer is: theory. Similarly, theory precedes empirical research; as the prominent biologist François Jacob (1973, p. 15) once said: “In the dialogue between theory and experience, theory always has the first word. It determines the form of the question and thus sets limits to the answer.”

What theory supports (should support) psychological practice?

This consideration only makes sense if we assume that psychology belongs to the group of empirical sciences, that is, those which admit into the body of scientific (rational) knowledge only those claims that have been confronted with empirical data. These, in turn, have been obtained by psychology researchers in the course of controlled empirical research conducted in accordance with methodological standards. Ideally, the research is conducted as a randomized experiment. In any case, the comprehensive methodology used to design a modern experiment in the behavioral sciences (including psychology), known simply as experimental design (e.g., Winer, Brown, & Michels, 1991; Kirk, 1995; Brzeziński, 2008), refers to such statistical models as ANOVA or MANOVA. These, in turn, assume that the researcher applied the randomization principle. A study that resembles an experiment, but does not respect the randomization principle, has the methodological status of a quasi-experiment (see Cook & Campbell, 1979). A “parallel” statistical model that constitutes a strong basis for an empirical study is the multiple regression model combined with structural equation modeling (see Pedhazur, 1997; Tabachnick & Fidell, 2001; Cohen, Cohen, West, & Aiken, 2003).
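The two ingredients named above – random assignment to conditions and the ANOVA model it licenses – can be made concrete in a few lines of code. The following is a minimal illustrative sketch, not taken from the paper; all function names and data are hypothetical:

```python
import random

def randomize(participants, n_groups, seed=0):
    """Randomly assign participants to n_groups equal-size conditions
    (the randomization principle; any leftover participants are dropped)."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    size = len(shuffled) // n_groups
    return [shuffled[i * size:(i + 1) * size] for i in range(n_groups)]

def one_way_anova_f(groups):
    """F statistic of a one-way ANOVA:
    between-group mean square / within-group mean square."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

The point of the sketch is the dependency the text describes: the F test in `one_way_anova_f` is only interpretable as evidence about a causal treatment effect if the groups it receives were produced by something like `randomize`; without that step the same arithmetic yields only a quasi-experimental comparison.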
I agree with the notion that:
“The major task in any science is the development of theory. A good theory is simply a good explanation of the processes that actually take place in a phenomenon. […] But to construct theories, one must first know some of the basic facts, such as the empirical relations among variables. […] Theories are causal explanations. The goal in every science is explanation, and explanation is always causal” (Schmidt, 1992, p. 1177).
It is in line with what Karl Popper thought about the work of the scientist: “[…] the work of the scientist consists in putting forward and testing theories” (Popper, 2005, p. 7).
Empirical theory is therefore the gate to psychological practice. Without it, it is not possible to understand what happens in the psyche of a person whose life problems we try to help solve, much less to design and carry out a rational (i.e., without resorting to pseudoscientific shamanic practices) therapeutic treatment plan. Empirical psychological theory is the foundation on which the model discussed in this paper is built. Outside its context, neither factually correct nor ethical psychological practice exists.
Following Karl Popper’s opinions about science (see in particular Popper, 2005) and also those of Kazimierz Ajdukiewicz (1974), it is expected that an empirical psychological theory that serves as a scientific justification of a professional psychological practice will be testable. Predictions derived by the researchers from theory (by way of deduction) are compared to the results of practical applications and experiments (Popper, 2005, p. 10):
If this decision is positive, that is, if the singular conclusions turn out to be acceptable, or verified, then the theory has, for the time being, passed its test: we have found no reason to discard it. But if the decision is negative, or in other words, if the conclusions have been falsified, then their falsification also falsifies the theory from which they were logically deduced.
It should be noticed that a positive decision can only temporarily support the theory, for subsequent negative decisions may always overthrow it. So long as theory withstands detailed and severe tests and is not superseded by another theory in the course of scientific progress, we may say that it has ‘proved its mettle’ or that it is ‘corroborated’, by past experience.
A theory is a social product, created under certain cultural conditions (to which I will return later in this article). Therefore it must be introduced by scientists into circulation.
An important feature of scientific cognition, whose products are theories and methods, is – as Kazimierz Ajdukiewicz (1974) wrote – its intersubjectivity. Scientific statements have to be – in an intersubjective sense – communicable and verifiable. The intersubjectivity of these statements distinguishes them from statements that are not products of scientific cognition (in the abovementioned sense). As Chava Frankfort-Nachmias and David Nachmias wrote (1996, pp. 15-16):
To be intersubjective, knowledge in general – and the scientific methodology in particular – has to be communicable. Thus if one scientist conducts an investigation, another scientist can replicate it and compare the two sets of findings. If the methodology is correct and (we assume) the condition under which the study was made or the events occurred have not changed, we would expect the findings to be similar. Indeed, conditions may change and new circumstances emerge, but the significance of intersubjectivity lies in the ability of a scientist to understand and evaluate the methods of others and to conduct similar observations so as to validate empirical facts and conclusions.
To sum up, professional psychological practice needs – as the rationale for its technical and ethical functioning – theory (see Spendel, 2014) that meets all of the methodological criteria of empirical theory, in order to be communicable and verifiable.

Methodological awareness

Scientific research (here, in psychology) whose primary objective is to create empirical theory does not arise as an effect of random and uncoordinated actions of psychologists. On the contrary, it is highly standardized (except, of course, for the phase of formulating the problem and the hypotheses), and it leaves little room for spontaneous “reflexes”. That standardization takes the form of the research process. Psychologists give this process different forms (e.g., Spendel, 2005; Brzeziński & Zakrzewska, 2010).
In this paper, methodological awareness (MA) is understood as the set of methodological rules and directives that, for a given phase of development of a scientific discipline (in this case psychology), shapes a particular form of research practice – the research activities undertaken by researchers during the research process. This process, as I currently see it, comprises the following stages:
1. Problems and hypotheses,
2. Operationalization of variables,
3. Design and conduct of research,
4. Quantitative data analysis (statistical conclusion),
5. Interpretation and generalization of research results (research conclusion).
We can differentiate social MA from individual MA. The latter may differ, in some cases significantly, from the former. In particular, social MA does not guarantee a correct execution of an empirical study by a particular researcher. The structure of MA is presented in Figure 1 (based on Brzeziński, 2013). It consists of five blocks. Block 1 covers empirical theories, each assigned to a specific scientific paradigm, as developed by generations of psychologists. It is by using the language of a particular theory that the researcher defines the variables and uses them to formulate the problems and hypotheses. The theory chosen by the researcher (or constructed by the researcher from scratch) has a decisive influence on the other elements of MA. In particular, psychologists interpret the research results (Block 4) and generalize them (Block 5) using the language specific to a given theory. In order to carry out an empirical study, the researcher has to take a very important step: to give empirical sense to the theoretical variables (as defined in the language of a theory from Block 1). This happens during the procedure of operationalization of variables (Block 2). Quite often, psychologists operationalize variables using psychological tests. Nowadays, the majority – almost 100% – of the mental tests (intelligence, abilities, interests) and personality inventories used by psychologists in scientific research and assessment refer to Harold Gulliksen’s classical test theory (the true-score theory; Gulliksen, 1950). Today, the newest psychometric theory, item response theory (IRT; see e.g., van der Linden & Hambleton, 1997), is being developed quite intensively. The results collected in an empirical study – an experimental or a correlational one – planned according to a particular standard are then subjected to statistical analysis (Block 3).
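The classical test theory mentioned above decomposes an observed score into a true score plus error, and its central practical quantity, reliability, is commonly estimated with Cronbach’s alpha. As a hedged, self-contained sketch (hypothetical item data, not from the paper):

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """Cronbach's alpha reliability estimate.
    items: one inner list of scores per test item (same persons in order).
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    persons = list(zip(*items))             # per-person item scores
    totals = [sum(p) for p in persons]      # per-person total test score
    item_var_sum = sum(variance(it) for it in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))
```

When the items rank-order persons identically, alpha reaches its ceiling of 1.0; as item scores become inconsistent with one another, alpha drops, signaling that more of the observed-score variance is error rather than true score.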
To test their hypotheses, researchers refer to statistical tests of significance. This still-dominant approach is carried out in the null hypothesis significance testing (NHST) paradigm. A rival approach uses confidence intervals. A psychologist who does not have everyday contact with statistics can feel a little lost here. The APA published two guides to explain these complex issues (Wilkinson & the Task Force on Statistical Inference, 1999; APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008). They are also useful when writing research reports and empirical articles (Figure 1).

How does science meet the needs of social practice?

A model of the information flow between the domains of social practice and science is presented in Figure 2 (based on Brzeziński, 2013). A dissatisfying state of social practice (Block 1) becomes an impulse to look for new, more effective explanations of what is “going on” there. It also becomes necessary to look for new methods of influence that will allow one to achieve the desired state of affairs effectively. A social need addressed to science is thus created (I set aside the search for new methods outside science, e.g., shamanic activities) (Block 2).
As an answer, science suggests two solutions (Block 3): (1) a new theory with greater explanatory power (or a significant correction of the theory that previously “played” this part in social practice); (2a) a new assessment method and (2b) a new, more effective method of influence, which (in the hands of a psychologist) will help identify better (assessment) and repair more effectively (therapy) the “broken” state of things. Let us take depression as an example from the clinical field: Beck’s cognitive theory of depression, the Beck Depression Inventory BDI-II (authors: A. T. Beck, R. A. Steer, & G. K. Brown) and Beck’s cognitive behavioral therapy (CBT). Note that both methods, the diagnostic (BDI-II) and the therapeutic (CBT), were based on Beck’s empirically verified cognitive theory of depression.
A new solution to a problem originating in social practice – if it is to be based on scientific reasoning – has to pass three additional tests, going through three filters (Blocks 4-6). As for the first one, the Methodological Filter (Block 4), its effectiveness (in the sense of not letting through scientifically questionable or outright bad ideas) depends directly on social and individual MA (cf. Figure 1). High MA in a particular scientific field shows the effectiveness of the Methodological Filter, as it will not allow an individual hypothesis with low explanatory power into circulation. It also determines the degree of maturity of a given scientific discipline. In my opinion, at least potentially, psychology has reached a relatively high level of social MA. I think that it is higher than that of pedagogy or sociology. It is (also) a consequence of the adequate 5-year Master’s degree programs offered by universities to future psychologists, which dedicate considerable room to teaching the methodology of empirical research, statistical methods and psychometrics (Brzeziński, 2012, 2013). Not without significance was the fact that psychologists did not let themselves be infected with the pseudoscientific ideas of so-called postmodern psychology.
If this “candidate theory” successfully passes through the Methodological Filter, a team of psychology practitioners can develop a practical action program. For example, clinicians focus on preparing a therapeutic program. However, before it is introduced into circulation, before it becomes officially “blessed”, it has to go through the Praxiological Filter (Block 5). It must meet methodological criteria similar to those a scientific hypothesis faces in the Methodological Filter. For example, when it comes to research on the quality of psychotherapy, for years there have been two competing research approaches, referred to (in short) as efficacy research and effectiveness research (cf. Mintz, Drake, & Crits-Christoph, 1996; Nathan, Stuart, & Dolan, 2000). The former refers to the experimental paradigm, and thus respects the randomization principle. It is carried out according to a methodological pattern that fits the laboratory experiment model, i.e., the randomized clinical trial (RCT). This type of research is characterized by high internal validity and relatively lower external validity. The latter, on the other hand, is carried out as correlational and field studies. These studies have low internal validity but relatively high external validity. The famous research by Martin E. P. Seligman (1995) on the effectiveness of therapy, conducted using a Consumer Reports (CR) survey, was in line with the second approach. Therefore, let us emphasize that an empirical check of a theory is not enough: there must also be a verified plan of action referring to this (verified) theory.
Another, extremely important obstacle that a project of practical activities must overcome is associated with the ethical standards it needs to fulfill. That is the Ethical Filter (Block 6). My position is that it is not enough that the theory prepared by a psychologist (or psychologists) fulfills the methodological criteria and that the practical action project based on it is effective. Professional actions of a psychologist undertaken in the sphere of social practice relate to specific people or social groups, or even to society. Of course, they cannot violate the law. However, they must also apply the ethical standards developed by generations of psychologists, in particular those written into ethical codes for psychologists. Nonetheless, the most basic ethical requirement that psychologists must respect is people’s rights and dignity as adopted by the United Nations General Assembly on December 10, 1948 in Paris in the Universal Declaration of Human Rights – especially Articles 1 and 2:
Article 1. All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.
Article 2. Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, color, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. […]
Psychologists – which is also stated in the ethical codes of psychologists formulated by professional societies – are obligated, in their contacts with those who receive psychological services as well as during their scientific research, to respect such elementary principles as informed consent, integrity, competency, fidelity and responsibility, privacy and confidentiality. Nothing can justify putting researchers’ interests (because the conducted scientific study is, from their perspective, an important one) above the rights and well-being of human participants in scientific research, and patients and clients in professional practice.
The response “yes” at the outcome of each of the three filters indicates that psychology’s proposal in response to a social need will change social practice in the desired direction. However, the response “no” means that corrections need to be introduced into the proposed solution (dashed lines in the diagram).
In the diagram presented in Figure 2 there is one more negative filter: the Ideological Filter (Block 7). It is characteristic of countries with a totalitarian ideology (e.g., North Korea, and – in the not so distant past – countries dominated by the Nazi and communist ideologies). In an extreme situation (dotted line in the diagram) arguments that were de facto scientific, efficient, and ethical were disregarded. Instead, attempts were made to force pseudoscientific ideas into social practice through authoritarian decisions made by the regime (e.g. putting psychiatry into ideological service in the Soviet Union).
Apart from the extreme violations of the scientist’s ethos described in the previous paragraph, we also encounter scientific misconduct on a smaller scale. “Impatient” researchers, deeply convinced of the legitimacy of their innovative (sometimes even considered revolutionary) ideas, but also crooks chasing society’s recognition and financial gains, deviate from the ethical path and disregard one of the filters (Blocks 4-6) in order to reach their goals more quickly. However, such actions, unethical from the perspective of the standards of scientific work, ultimately – sometimes in the long run – lead to failure and disgrace.
A proper way to act, both scientifically and ethically, is marked in the diagram with a solid, bold line, and with a dashed, bold line.

Scientific research and professional practice in psychology – a comprehensive model

In the previous sections I attempted to demonstrate what important criteria need to be fulfilled by both sides of the “dialogue”: science (in this case psychology) and professional practice (in this case conducted by psychologists). On the one hand, it is expected that psychology will provide an adequate response to the requirements set by social practice. On the other hand, it is expected that psychology’s response will be properly received and practically used. In order for that to happen, for psychology to provide valid scientific justification for undertaking given treatments, a planned scientific study needs to meet sometimes very strict methodological standards. I also reconstructed a very important construct, MA, whose level determines the scientific value of the psychological response to a given societal need.
Now I shall proceed to discuss the Scientific Research and Professional Practice in Psychology (SRPPP) model presented schematically in Figure 3. It is a theory (moreover an empirical theory) that supports – and justifies – actions taken by professionals in the domain of social practice. Its usefulness depends directly on the state of psychologists’ MA (Block 5).
Methodological awareness not only directly determines the quality of empirical psychological theory and shapes the theoretical “foundation” of the practice (Block 4), but also shapes two important standards of psychologists’ professional work. The first one refers to actions taken in the assessment domain (Block 5a). Current assessment standards are marked out by the set of rules for diagnostic proceedings known as evidence-based assessment (EBA).
The core of the EBA standard is to stress that psychologists’ actions in their practice domain (assessment and therapy) need to be supported by empirical theory and that psychologists employ methods that have been empirically verified (see Methodological Filter and Praxiological Filter – Figure 2). These methodological commands were included in the seven Daubert guidelines, all of which put the emphasis on theory and technique. Where does the name come from? It all started in court. In 1993, in the US, Jason Daubert and Eric Schuller, who had been born with physical birth defects, sued a pharmaceutical corporation (Marion Merrell Dow, n.d.). They claimed that their defects were caused by the medication Bendectin, which their mothers had taken during pregnancy. The court called for expert testimony. However, the court could not make sense of the evidence provided, because the testimonies differed both in methodological quality and in their relation to current scientific knowledge. Therefore, the court decided in the proceedings that expert testimony needed to be provided in compliance with the following guidelines (in Ritzler, Erard, & Pettigrew, 2002, pp. 202–203):
1) Is the proposed theory (or technique), on which the testimony is to be based, testable?
2) Has the proposed theory (or technique) been tested using valid and reliable procedures and with positive results?
3) Has the theory (or technique) been subjected to peer review?
4) What is the known or potential error rate of the scientific theory or technique?
5) What standards, controlling the technique’s operation, maximize its validity?
6) Has the theory (or technique) been generally accepted as valid by a relevant scientific community? (Grove & Barden, 1999, p. 226)
7) [Added later] Do the expert’s conclusions reasonably follow from applying the theory (or technique) to this case? (Grove & Barden, 1999, p. 226).
They became guidelines for judges in the American judicial system (more about the ruling of the Supreme Court of the United States in Daubert v. Merrell Dow Pharmaceuticals, 1993; see also Brown, 2014).
Psychologists (but not only they) use psychological tests during the assessment process. Test users should respect the standards included in the fundamental document created by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education, i.e., Standards for Educational and Psychological Testing (see American Educational Research Association, American Psychological Association, National Council on Measurement in Education, 2014). It should be noted that test takers have not only rights but also responsibilities.
Depending on a test’s difficulty (in terms of understanding the test, administering it and interpreting its results), access levels to various test categories were introduced. For example, Pearson (among others, publisher of the Wechsler Adult Intelligence Scale WAIS-IV; “Qualification Policy”, n.d.) uses three qualification levels. The most demanding one, Level C, “[…] requires a high level of expertise in test interpretation”. It also requires holding a “doctorate degree in psychology, education, or closely related field with formal training in the ethical administration, scoring, and interpretation of clinical assessments related to the intended use of the assessment”.
An illustration of how empirical scientific research currently interacts with professional practice – one important for the shape of psychological practice in the health care system and health care policy – is a report prepared for the American Psychological Association (APA) by the APA presidential task force on evidence-based practice: Evidence-Based Practice in Psychology (EBPP). In the report, EBPP is defined in the following way: “Evidence-based practice in psychology (EBPP) is the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” (APA presidential task force on evidence-based practice, 2006, p. 273). Reading the report carefully, one will notice that, in the research dimension, it refers to the EBA standards.
Of course, analogous associations between the two domains, science (psychology as a science) and practice (psychology as a profession), can be reconstructed in other domains of social practice that can benefit from scientific psychological achievements and from the professional experience of practicing experts. However, here I limit myself to this one example, albeit a socially very important one.

Three contexts

In Figure 3, another important element was included (see Block 6). It consists of three contexts in which the elements of the model are “immersed”. These are:
1. The Ethical Context,
2. The Psychological Context,
3. The Cultural Context.
There follows a brief discussion of each of them.
1. The Ethical Context. Of the three contexts, this is the most important one. It decides – as illustrated in Figure 2 (Block 6) – whether a given solution (previously verified on the methodological dimension) can be applied in practice. Psychologists as professionals act unethically when they grossly violate the dignity and privacy of a person (a patient at a clinic, a client in private practice) by using unacceptable pseudo-therapeutic treatments, or when they offer therapeutic services without having the required qualifications confirmed during the supervision process (see American Psychological Association, 2014; Jones et al., 2000).
Similar violations, evaluated negatively from the ethical perspective, can occur during empirical studies; see for example, two controversial – from this very perspective – experiments: the Milgram experiment on obedience and the Zimbardo Stanford prison experiment. Rules of ethical behaviors for psychologists in these and similar situations are included in the abovementioned ethical codebooks, e.g., the American Psychological Association (2010), the European Federation of Professional Psychologists Associations (1995), the International Union of Psychological Science (2008), and the American Psychological Association (1982) and the British Psychological Society (2010). It is hard to argue with the following sentence (British Psychological Society, 2010, p. 4): “Participants in psychological research should have confidence in the investigators. Good psychological research is only possible if there is mutual respect and trust between investigators and participants.”
Ethics are also violated – and psychologists do not always realize this – when their empirical studies are imperfectly planned or carelessly conducted, and when results are obtained through inadequate statistical methods and, in consequence, inaccurately interpreted. The ethical consequences of violating the methodological rules that constitute the content of methodological awareness were pointed out by Robert Rosenthal (1994, p. 128), who wrote: “[…] bad science makes for bad ethics. […] Poor quality of research design, poor quality of data analysis, and poor quality of reporting of the research all lessen the ethical justification of any type of research project”.
Psychology (as a science) is destroyed by such fraudulent actions of researchers and authors of books and articles as data fabrication, data falsification, plagiarism, ghostwriting, and guestwriting.
Moreover, today there are more complex data manipulations, such as HARKing (hypothesizing after the results are known) and p-hacking (hunting for results that are statistically significant at the “sacred” level of p = .05; see Chambers, Feredoes, Muthukumaraswamy, & Etchells, 2014). In the domain of scientific publications, this leads to the disturbing phenomenon of publication bias. Because editors of scientific journals are reluctant to publish articles that demonstrate a lack of effect, these “non-significant” manuscripts remain in authors’ drawers (hence the term “file drawer effect”). This, in turn, negatively affects the results of meta-analyses: they are simply overestimated. One way to fight these pathological phenomena is a new publication format, i.e., pre-registered research (Chambers & Munafo, 2013).
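One variant of p-hacking is easy to demonstrate by simulation: a researcher measures several outcomes and reports whichever one clears p < .05. The sketch below is purely illustrative (all parameters hypothetical); it uses a z test with known variance, so each individual test really does have a 5% false-positive rate, yet the reported rate per study is far higher:

```python
import random

def false_positive_rate(n_studies=4000, n_outcomes=5, n=30, seed=1):
    """Simulate studies in which the null hypothesis is exactly true for
    every outcome, but only the first 'significant' outcome is reported.
    Returns the fraction of studies yielding a false positive."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_studies):
        for _ in range(n_outcomes):
            # z test of a sample mean against 0 (sigma known to be 1,
            # so each single test has the nominal 5% false-positive rate)
            xs = [rng.gauss(0, 1) for _ in range(n)]
            z = (sum(xs) / n) * n ** 0.5
            if abs(z) > 1.96:       # "statistically significant, p < .05"
                hits += 1
                break               # ...and only this outcome gets reported
    return hits / n_studies
```

With five independent outcomes per study, the chance that at least one crosses the threshold is roughly 1 − .95⁵ ≈ .23 rather than the nominal .05, which is exactly the kind of inflation that pre-registration (fixing hypotheses and analyses in advance) is meant to prevent.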
2. The Psychological Context. Many years ago, Saul Rosenzweig (1933) wrote about three peculiarities of experimental research in psychology: (1) the researcher becomes a part of the research situation, (2) the subject’s behavior in the research situation is affected by variables such as the subject’s personality, motivation, etc., (3) a “researcher–subject” interaction develops. On a side note, this seminal article preceded the works of such psychologists as Martin T. Orne and Robert Rosenthal, which first appeared in the 1960s. Research by these psychologists indicated that study participants were able to identify the goal of a study and then modify their behavior accordingly during the study. M. T. Orne (1962) wrote about the “demand characteristics of the experimental situation” variable. R. Rosenthal, on the other hand (1963, 2002; Rosenthal & Rosnow, 2009), demonstrated that the researcher (but also a teacher, therapist, judge, or coach) could influence study results so that they were in line with the researcher’s expectations – hence the term “interpersonal expectation effect”.
3. The Cultural Context. The majority of psychology’s achievements (as a science) were built – according to the authors of a report created for the APA – “[…] upon Anglo Western middle class, Eurocentric perspectives and assumptions”. The world, including the one in which psychologists live and conduct their studies, is global in character. It does not consist solely of people who share European or American values. If psychology is to provide theory and research results that are valid across cultural groups, then it cannot disregard the cultural context. This imperative also applies to psychology as a profession.
This requirement was also noticed by the APA, which prepared an important report on the subject, namely the Report of the task force on the implementation of the multicultural guidelines (see American Psychological Association, 2008). Six multicultural guidelines constitute the pivotal part of this report. In the context of this article I want to highlight two of them:
Guideline 4: Culturally sensitive psychological researchers are encouraged to recognize the importance of conducting culture-centered and ethical psychological research among persons from ethnic, linguistic, and racial minority backgrounds.
Guideline 5: Psychologists are encouraged to apply culturally appropriate skills in clinical and other applied psychological practices (p. 3).
Not long ago, a new approach was proposed (Hardin, Robitschek, Flores, Navarro, & Ashton, 2014) for taking the cultural factor into consideration when analyzing the validity of a psychological theory. The situation is similar in the case of psychological tests. I think it can be said that to the traditional sources of validity evidence (see American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014) one more can be added: the patterns and requirements of the culture in which test takers live.
Finishing this necessarily short profile of the cultural context, I will cite one more extract from the APA’s report (American Psychological Association, 2003, p. 390):
In analyzing and interpreting their data, culturally sensitive psychological researchers are encouraged to consider cultural hypotheses as possible explanations for their findings, to examine moderator effects, and to use statistical procedures to examine cultural variables (Quintana, Troyano, & Taylor, 2001).
References
Ajdukiewicz, K. (1974). Pragmatic logic. Dordrecht-Holland/Boston-USA: Reidel.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: Author.
American Psychological Association. (1982). Ethical principles in the conduct of research with human participants (rev. ed.). Washington, DC: Author.
American Psychological Association. (2003). Guidelines on multicultural education, training, research, practice, and organizational change for psychologists. American Psychologist, 58, 377–402.
American Psychological Association. (2008). Report of the task force on the implementation of the multicultural guidelines. Washington, DC: Author. Retrieved from https://www.apa.org/about/policy/multicultural-report.pdf
American Psychological Association. (2010). Ethical principles of psychologists and code of conduct. Washington, DC: Author. Retrieved from http://www.apa.org/ethics/code/principles.pdf
American Psychological Association. (2014). Guidelines for clinical supervision in health service psychology. Retrieved from http://apa.org/about/policy/guidelines-supervision.pdf
APA Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61, 271–285.
APA Publications and Communications Board Working Group on Journal Article Reporting Standards. (2008). Reporting standards for research in psychology: Why do we need them? What might they be? American Psychologist, 63, 839–851.
British Psychological Society. (2010). Code of human research ethics. Retrieved from http://www.bps.org.uk/sites/default/files/documents/code_of_human_research_ethics.pdf
Brown, A. (2014). Expert Testimony and the Daubert and Frye Standards. Retrieved from http://www.aquilogic.com/pdf/Expert%20Testimony%20and%20the%20Daubert%20and%20Frye%20Standards.pdf
Brzeziński, J. (2012). Jakich kompetencji badawczych oczekujemy od psychologa? [What competencies are expected from a psychologist?]. In: H. J. Grzegołowska-Klarkowska (ed.), Agresja, socjalizacja, edukacja. Refleksje i inspiracje [Aggression, socialization and education. Reflections and inspirations] (pp. 383–409). Warszawa: Wydawnictwo Akademii Pedagogiki Specjalnej.
Brzeziński, J. (2013). Methodological awareness and ethical awareness in the context of university education (on the example of psychology). In: B. Bokus (ed.), Responsibility. A cross-disciplinary perspective (pp. 261–277). Warszawa: Lexem.
Brzeziński, J. (2008). Badania eksperymentalne w psychologii i pedagogice [Experimental research in psychology and education] (2nd ed.). Warszawa: Wydawnictwo Naukowe Scholar.
Brzeziński, J., & Zakrzewska, M. (2010). Metodologia. Podstawy metodologiczne i statystyczne prowadzenia badań naukowych w psychologii [Methodology. Methodological and statistical foundations for scientific research in psychology]. In: J. Strelau & D. Doliński (eds.), Psychologia akademicka. Podręcznik [Psychology. The handbook] (pp. 175–302). Gdańsk: Gdańskie Wydawnictwo Psychologiczne.
Chambers, C. D., Feredoes, E., Muthukumaraswamy, S. D., & Etchells, P. J. (2014). Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond. AIMS Neuroscience, 1, 4–17. Retrieved from http://orca.cf.ac.uk/59475/1/AN2.pdf
Chambers, C., & Munafo, M. (2013). Trust in science would be improved by study pre-registration. Retrieved from http://www.theguardian.com/science/blog/2013/jun/05/trust-in-science-study-pre-registration
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. Mahwah, NJ: L. Erlbaum.
Cook T. D., & Campbell D. T. (1979). Quasi-experimentation. Design & analysis issues for field settings. Boston, MA: Houghton Mifflin Co.
Daubert v. Merrell Dow Pharmaceuticals (1993). (92-102), 509 U.S. 579. Retrieved from https://www.law.cornell.edu/supct/html/92-102.ZO.html
European Federation of Professional Psychologists Associations. (1995). Meta-Code of Ethics. Retrieved from ethics.efpa.eu/meta-code/
Frankfort-Nachmias, Ch., & Nachmias, D. (1996). Research methods in the social sciences (5th ed.). New York, NY: St. Martin’s Press.
Grove, W. M., & Barden, R. C. (1999). Protecting the integrity of the legal system: The admissibility of testimony from mental health experts under Daubert/Kumho analyses. Psychology, Public Policy, and Law, 5, 224–242.
Gulliksen, H. (1950). Theory of mental tests. New York, NY: J. Wiley.
Hardin, E. E., Robitschek, C., Flores, L. Y., Navarro, R. L., & Ashton, M. W. (2014). The cultural lens approach to evaluating cultural validity of psychological theory. American Psychologist, 69, 656–668.
International Union of Psychological Science. (2008). Universal Declaration of Ethical Principles for Psychologists. Retrieved from International Union of Psychological Science website http://www.iupsys.net/about/governance/universal-declaration-of-ethical-principles-for-psychologists.html
Jacob, F. (1973). The logic of life: A history of heredity. New York: Pantheon Books.
Jones, C., Shillito-Clarke, C., Syme, G., Hill, D., Casemore, R., & Murdin, L. (2000). Questions of ethics in counselling and therapy. Buckingham, UK: Open University Press.
Kirk, R. E. (1995). Experimental design: Procedures for the behavioral sciences (3rd ed.). Belmont, CA: Brooks.
Marion Merrell Dow. (n.d.). In: Wikipedia. Retrieved December 24, 2015 from https://en.wikipedia.org/w/index.php?title=Marion_Merrell_Dow&redirect=no
Mintz, J., Drake, R., & Crits-Christoph, P. (1996). Efficacy and effectiveness of psychotherapy: two paradigms, one science. American Psychologist, 51, 1084–1085.
Nathan, P., Stuart, S., & Dolan, S. (2000). Research on psychotherapy efficacy and effectiveness. Between Scylla and Charybdis? Psychological Bulletin, 126, 964–981.
Orne, M. T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776–783.
Pedhazur, E. J. (1997). Multiple regression in behavioral research: Explanation and prediction (3rd ed.). Fort Worth, TX: Harcourt Brace College Publishers.
Quintana, S. M., Troyano, N., & Taylor, G. (2001). Cultural validity and inherent challenges in quantitative methods for multicultural research. In: J. G. Ponterotto, J. M. Casas, L. A. Suzuki, & C. M. Alexander (eds.), Handbook of multicultural counseling (2nd ed., pp. 604–630). Thousand Oaks, CA: Sage.
Popper, K. (2005). The logic of scientific discovery [Adobe Digital Editions version]. Retrieved from http://strangebeautiful.com/other-texts/popper-logic-scientific-discovery.pdf
Qualification Policy. (n.d.). Retrieved from http://www.pearsonclinical.com/talent/qualifications.html
Reichenbach, H. (1938). Experience and prediction. An analysis of the foundations and the structure of knowledge. Chicago, IL: University of Chicago Press.
Ritzler, B., Erard, R., & Pettigrew, G. (2002). Protecting the integrity of Rorschach expert witnesses: A reply to Grove and Barden (1999) re: the admissibility of testimony under Daubert/Kumho analyses. Psychology, Public Policy, and Law, 8, 201–215.
Rosenthal, R. (1963). On the social psychology of the psychological experiment: The experimenter’s hypothesis as unintended determinant of experimental results. American Scientist, 51, 268–283.
Rosenthal, R. (1994). Science and ethics in conducting, analyzing, and reporting psychological research. Psychological Science, 5, 127–134.
Rosenthal, R. (2002). Covert communication in classrooms, clinics, courtrooms, and cubicles. American Psychologist, 57, 839–849.
Rosenthal, R., & Rosnow, R. L. (eds.). (2009). Artifacts in behavioral research: Rosenthal and Rosnow’s classic books (A re-issue of Artifact in behavioral research; Experimenter effects in behavioral research; & The volunteer subject). Oxford, UK: Oxford University Press.
Rosenzweig, S. (1933). The experimental situation as a psychological problem. Psychological Review, 40, 337–354.
Schmidt, F. L. (1992). What do data really mean? Research findings, meta-analysis and cumulative knowledge in psychology. American Psychologist, 47, 1173–1181.
Seligman, M. E. P. (1995). The effectiveness of psychotherapy: The Consumer Reports study. American Psychologist, 50, 965–974.
Spendel, Z. (2005). Metodologia badań psychologicznych jako forma świadomości metodologicznej [Methodology of psychological research as a form of methodological consciousness]. Katowice: Wydawnictwo Uniwersytetu Śląskiego.
Spendel, Z. (2014). O pewnych kontrowersjach i nieporozumieniach wokół „teorii psychologicznej” i „psychologii teoretycznej” [About some controversies and misunderstandings related to “psychological theory” and “theoretical psychology”]. Czasopismo Psychologiczne, 20, 55–64.
Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics. Boston, MA: Allyn and Bacon.
van der Linden, W. J., & Hambleton, R. K. (eds.). (1997). Handbook of modern item response theory. New York, NY: Springer.
Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.
Winer, B. J., Brown, D. R., & Michels, K. M. (1991). Statistical principles in experimental design (3rd ed.). New York: McGraw-Hill.
Copyright: © 2016 Institute of Psychology, University of Gdansk This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License (http://creativecommons.org/licenses/by-nc-sa/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material, provided the original work is properly cited and states its license.