In addition to content knowledge and pedagogical knowledge, pedagogical content knowledge (PCK) is regarded as an essential part of the professional competence of teachers (e.g. Baumert & Kunter, 2013a; Gess-Newsome, 2016; Shulman, 1986). Results of the COACTIV study show that the PCK of teachers in mathematics has a substantial influence on the quality of their teaching and on the learning progress of students (Baumert & Kunter, 2013b). Prompted by these results, current teacher education research in Germany has a strong focus on the question of how the acquisition of PCK can be fostered in teacher education and how PCK can be assessed. This question is considered to be relevant for practically all school subjects as interdisciplinary projects like FALKO demonstrate (Krauss et al., 2017).
Against this background, the current paper presents two newly developed PCK assessment tools for the subjects mathematics and German. Additionally, we report the findings from a study with preservice teachers of both subjects in which these instruments were piloted. The specific approach of this paper is that conclusions from the pilot study results are drawn from an interdisciplinary point of view. Thus, we do not primarily aim at answering domain-specific questions concerning preservice teachers’ PCK but rather investigate cross-disciplinary aspects of PCK. To this end, we integrate findings on particularities and similarities of preservice teachers’ PCK by comparing their answers, gathered by means of the newly developed instruments, in the two fields of mathematics and German literature education. In this way, we also contribute to a discussion currently under way in German research on subject-matter teaching and learning, namely the question of whether there is a general, overall nucleus in domain-specific concepts of education (Rothgangel & Vollmer, 2017). This paper offers a new contribution to this discussion insofar as we combine comparative perspectives on subject-matter education with a data-driven research process.
An interdisciplinary look at the pilot findings reveals that preservice teachers in both subjects show abilities and difficulties in coping with subject-specific problems which, however, go beyond the respective subject and can be generalised to preservice teachers in all subjects (Rothgangel & Vollmer, 2017). In terms of their potential transferability, these findings allow not only subject-specific but also interdisciplinary conclusions to be drawn for the design of learning environments in teacher education.
2 Theoretical Background
2.1 The Concept of PCK
The concept of PCK was introduced by Shulman (1986, 1987) and has since had a major impact on research on teacher competences. Shulman established a framework for the description of teachers’ knowledge relevant for teaching, with a special emphasis on the prerequisites of subject-matter teaching. Among other categories, Shulman differentiates between content knowledge, pedagogical knowledge, pedagogical content knowledge and curricular knowledge. As Shulman (1987, p. 8) points out,
[a]mong those categories, pedagogical content knowledge is of special interest because it identifies the distinctive bodies of knowledge for teaching. It represents the blending of content and pedagogy into an understanding of how particular topics, problems or issues are organized, represented, and adapted to the diverse interests and abilities of learners, and presented for instruction.
Shulman (1986, p. 9f.) already describes facets of PCK, namely knowledge about topic-specific representations and explanations, knowledge about the didactic potential of contents, and knowledge about students’ topic-specific (mis-)conceptions.
Shulman’s concept of PCK strongly influenced international research on the knowledge and competences of teachers. Regarding Germany, the international study TEDS-M (Tatto et al., 2008; Döhrmann, Kaiser, & Blömeke, 2010; Laschke & Döhrmann, 2014) and the national studies TEDS-LT (Blömeke, 2011, 2013), COACTIV (Baumert & Kunter, 2013a, 2013b) and FALKO (Krauss et al., 2017) are prominent examples of teacher education research drawing on Shulman’s PCK framework. TEDS-M and COACTIV focus on (future) teachers of mathematics, whereas TEDS-LT and FALKO include mathematics, German and other subjects. Following Shulman, these studies share the consensus of differentiating between the following constituents of PCK: knowledge or adequate estimation of students’ prior knowledge, knowledge of task characteristics or the potential of content, and knowledge of content representations. TEDS-M additionally considers knowledge of criteria for appropriate and helpful feedback as a facet of PCK. In Germany, the relevance of PCK was underlined by results of COACTIV revealing that the PCK of teachers in mathematics substantially influences both the quality of their teaching and the learning progress of students (Baumert & Kunter, 2013b).
As Shulman (2015) pointed out, one limitation of his PCK concept was that it did not sufficiently consider the relationship between knowledge and action (a limitation empirically corroborated, e.g., by Gvozdic & Sander, 2018). In an enhanced model of teacher professional knowledge, Gess-Newsome (2015) addressed this limitation by taking into account the difference between knowledge as a disposition and classroom performance. In addition, Gess-Newsome sharpened the distinction between normative thinking about what teachers ought to know and the knowledge teachers actually rely on. With respect to the categories in which Depaepe et al. (2013) describe research on PCK, we understand PCK as dynamic, situated knowledge in action.
To bridge the dichotomy between the understanding of competence (knowledge, motivation) as a disposition on the one hand and performance on the other, Blömeke and colleagues suggested modelling competence as a continuum (Blömeke, Gustafsson, & Shavelson, 2015). They pay special attention to the processes connecting disposition and performance, i.e. “the perception and interpretation of a specific job situation together with decision-making” (Blömeke et al., 2015, p. 7). With this idea, they refer to Schoenfeld (2010) and come close to the concept of noticing, which also highlights the importance of teachers’ selective attention, knowledge-based reasoning and interpretation of classroom events (e.g. van Es & Sherin, 2002; Sherin & van Es, 2009). Against this background, professional knowledge including PCK can be regarded as a prerequisite and a filter for the perception and interpretation of classroom situations, resulting in decision-making and performance (Krauss et al., 2017). With respect to the processes connecting disposition and performance, it can be assumed that professional competence depends not only on subject-specific knowledge like PCK but also on generic cognitive aspects such as information-processing abilities.
2.2 Assessment of PCK
The assessment of PCK as a facet of teachers’ competences, i.e. as a disposition that has to prove itself in job-related situations, requires the construction not only of declarative-knowledge tests but also of assessment tools which focus on the application of knowledge to authentic and relevant tasks (see, e.g., Blömeke et al., 2015). Thus, assessment instruments should refer to real-life problems and allow for open responses. Recent PCK assessment tools are based on open-response formats so as not to restrict possible problem solutions and to make knowledge-based reasoning visible (e.g. Krauss et al., 2017; Rosenkränzer et al., 2016). However, “it is harder to define and assess quality of responses in complex situations than with respect to clearly-defined items” (Blömeke et al., 2015, p. 9). Consequently, during the development of an assessment tool, much attention has to be paid to the elaboration of a coding system and to sufficient interrater reliability. Another problem of assessment instruments consisting of authentic tasks is that different task components might not be independent of each other (Blömeke et al., 2015).
2.3 Interdisciplinary Cooperation—Integrating Perspectives
Though there is a broad consensus about facets of teachers’ professional knowledge in general and of PCK in particular, PCK has to be specified for each domain or even topic. Gess-Newsome (2015) uses the term “topic-specific professional knowledge” for knowledge codified by experts that has “a normative function in terms of what we want teachers to know about topic- and context-specific instruction” (Gess-Newsome, 2015, p. 33). From an interdisciplinary perspective and with respect to teachers’ actual use of knowledge, it is interesting to identify the general in the particular when it comes to the question of how (future) teachers deal with problems of subject-matter teaching. This perspective is based on the assumption that some challenges that seem to be topic-specific might have a more general cause.
From this point of view, the objective of this paper is to combine domain-specific perspectives in such a way that not merely an addition but a new perspective on PCK assessment results arises (Boix Mansilla, 2010; Defila & Di Giulio, 2015). Thus, we practice a form of supplementary interdisciplinarity (Heckhausen, 1972), focussing on the overlap in the PCK use of preservice teachers in mathematics and German literature education: “The correspondence is looked for and tentatively established in order to reconstruct life or social processes more fully” (Heckhausen, 1972, p. 89).
3 Method
3.1 Concept of PCK Assessment Tools
The construction of subject-specific knowledge assessment tools in interdisciplinary teams is a widespread approach used in research on the professionalisation of prospective teachers (e.g., Blömeke, 2011, 2013; Lindl & Krauss, 2017). The two pilot versions of PCK assessment tools we refer to in this paper are designed in parallel for the subjects German and mathematics. They can be characterised as performance assessments (Blömeke, 2015), i.e. they aim at central requirements in everyday teaching (Krauss et al., 2017), namely, the analysis of the quality and requirements of tasks, the analysis and knowledge-based interpretation of learner responses and the formulation of helpful feedback to learners. The assessment tools use an open answer format in order to provide insights into preservice teachers’ argumentation, to make visible the aspects of PCK they recall and to see whether and how knowledge is applied to authentic problems by the preservice teachers (cf. Krauss et al., 2017; Rosenkränzer et al., 2016). The two assessment tools have the following components:
(1) domain-specific content of the respective school subject,
(2) authentic tasks for students in the sixth grade and
(3) authentic students’ responses to the task.
In the assessments, the preservice teachers are required (a) to solve the tasks for the students, (b) to analyse the requirements of the tasks, (c) to analyse the answers given by the students and (d) to formulate feedback for the individual students that is conducive to learning. These requirements challenge the preservice teachers’ selective perception and knowledge-based interpretation of a job-related situation (a, b and c) as well as decision-making and performance (a, d) (Blömeke et al., 2015; see Section 2.1). Son (2013) assessed PCK of preservice teachers of mathematics with a similar instrument, asking the preservice teachers to solve a student task, to analyse a student’s answer and to explain how they would react. The assessment design addresses relevant aspects of PCK (see Section 2.1): knowledge of the potential of content and task requirements, knowledge of students’ topic-specific prior knowledge, and knowledge of criteria for appropriate and helpful feedback (Döhrmann, Kaiser, & Blömeke, 2010; Krauss et al., 2017; Laschke & Döhrmann, 2014).
3.2 Sample and Procedure
The pilot study this paper is based on was conducted with preservice teachers who were studying to earn a master’s degree in teaching either German (27 preservice teachers) or mathematics (40 preservice teachers). The preservice teachers of German studied at the University of Jena and attended a final examination seminar in the field of teaching German as a first language. The preservice teachers of mathematics attended preparatory seminars at the University of Oldenburg to prepare them for an internship in mathematics at a school, which is completed during the first semester of the Master of Education course. The participants worked on the assessment tools during a session of the respective seminar without further time restrictions.
3.3 Domain-specific and Interdisciplinary Analysis of Assessment Results
The analysis of the pilot data took place in four steps. First, preservice teachers’ answers were assessed by two experts in each field. The expert rating was conducted by means of an assessment scheme based on sample solutions. The sample solutions had been developed by experts from a normative point of view, i.e. they were based on what experts consider important for future teachers “to know about topic- and context-specific instruction” (Gess-Newsome, 2015, p. 33). For each sample solution, a maximum score was determined. Based on the sample solution, the experts first evaluated the preservice teachers’ responses independently of each other, assigning scores according to the number of aspects mentioned. The experts of each subject then agreed on a joint evaluation and score for each preservice teacher’s solution.
Both in German and in mathematics, the results were widely scattered. In this article, we look at the upper and lower quarters (i.e. the 25 percent best solutions and the 25 percent weakest solutions in both pilot assessments).
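The selection of the upper and lower quarters of the ranked solutions can be sketched as follows. This is a minimal illustration only; the identifiers and scores are hypothetical and do not reproduce the study’s actual data or exact selection procedure:

```python
def quartile_split(scores):
    """Split (id, score) pairs into the weakest and strongest quarters.

    Solutions are ranked by their expert score; the lowest-scoring
    quarter and the highest-scoring quarter are returned.
    """
    ranked = sorted(scores, key=lambda pair: pair[1])
    # Size of one quarter, rounded to the nearest whole solution
    # (at least one solution per group for very small samples).
    k = max(1, round(len(ranked) / 4))
    return ranked[:k], ranked[-k:]

# Hypothetical expert scores for eight preservice teachers
scores = [("A", 2), ("B", 5), ("C", 1), ("D", 4),
          ("E", 3), ("F", 6), ("G", 2), ("H", 5)]
weakest, strongest = quartile_split(scores)
```

With 27 participants, rounding 27/4 to the nearest whole number yields groups of about seven solutions, which is consistent with the “seven weakest solutions” mentioned for German below.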
In a second step, in an iterative top-down and bottom-up process, experts of each field identified subject-specific categories for the description of qualities in preservice teachers’ responses, such as close reading in German and proving in mathematics. The third step had a key function with regard to the objective of this paper. It can be characterised as “integration” of interdisciplinary perspectives sensu Defila and Di Giulio (2015, p. 125):
[T]hose participating in a research project have to develop common answers to their shared research questions by integrating […] the findings from the different disciplines and/or fields of practice involved in the research. To this end, findings and approaches have to be selected in terms of their contribution to the common answers, they have to be reprocessed, related and brought together. The common result is the integrated knowledge produced in this process, the so-called “synthesis”.
That means that, through a discussion among the authors, who are experts in the fields of mathematics education, German literature education and educational psychology, the subject-specific categories were examined with regard to their generalisability. This interdisciplinary comparison served as a method of gaining knowledge, as similarly obtained data from two different fields were examined for similarities and differences. During the discussion process, the assessment results in each field were appraised by experts from the three fields mentioned above. As a result of this knowledge integration, common interdisciplinary categories were synthesised.
Finally, the 25 percent best solutions and the 25 percent weakest solutions in both pilot assessments were analysed by the authors against the backdrop of these synthesised interdisciplinary categories. Thus, we are interested in the nature of preservice teachers’ PCK, aiming at understanding the difference in quality between strong and weak preservice teachers’ performances from an interdisciplinary point of view.
3.4 Description of the Assessment Tools
3.4.1 PCK Assessment Tool in German
The problems in the PCK assessment tool in the field of German refer to the fable The Wolf and the Lamb by Phaedrus. In this fable, a wolf and a lamb meet while drinking at the brook. The wolf makes false accusations against the lamb and eats it before it can refute the last accusation. The moral given by Phaedrus is a warning against people who prosecute others based on unjustified accusations. In the version of the fable presented to the students, the moral is missing, and the learners are asked to formulate a moral themselves. This is a typical task that is set for students in German lessons dealing with fables. To determine the solution, it is necessary to know that animals in fables embody general human characteristics or behaviours and that a respective transfer is required. For this transfer, one also needs knowledge about typical social problems. Knowledge about wolves and sheep appearing in fables, and particularly of their recurring characteristics, facilitates an understanding of the text. Learners in the sixth grade usually have appropriate prior knowledge, as the literary genre “fable” is part of the German curriculum for the fifth and sixth grade.
The difficulty of the student task results from an aspect of the Phaedrus fable that is unusual for the type of text. The text is difficult to understand insofar as the moral winner, meaning the lamb, is the victim, whereas the greedy and unjust wolf is successful and remains without punishment. It is difficult for sixth graders to recognise that this fable, unlike many others, does not directly convey a concrete rule for good interpersonal relations but rather highlights and condemns a complex phenomenon of human coexistence. Thus, it is also difficult to derive an adequate moral from the fable. The fable is likely to be counter to the expectations of the students in two ways: on one hand, its ending contradicts moral concepts conveyed to children (evil is punished), and on the other hand, it is not entirely in line with typical characteristics of the genre.
In the pilot assessment, preservice teachers were asked to write down three possible solutions for the task given to the students. The request to find alternative solutions corresponds with the ambiguity of literary texts as a content characteristic that future teachers will need to deal with. Suitable solutions are, for example, those that point to the immoral enrichment of the stronger at the expense of the weaker, to the concealment of injustice by false pretences or to the erosion of law and justice due to the power of the stronger. The results of the pilot assessment reveal that deriving a moral of the given fable is also difficult for preservice teachers.
In order to assess the task difficulty on the basis of PCK, a thorough text analysis is necessary that points out the specific features of the fable. The results of this text analysis must be related to general knowledge about the difficulty-determining characteristics of text comprehension tasks (e.g., Winkler, 2010) and to knowledge of task-relevant student prerequisites.
In the pilot assessment, the preservice teachers were given three authentic student answers. They were asked to analyse these answers and to formulate constructive feedback for each of the students. As is typical for this age group (Zabka, 2006), the given student answers derived concrete instructions for action from the fable. Two of the answers formulated the moral with regard to the wolf (i.e., “that one should not take revenge on persons although they have done nothing” and “that one should not accuse others who had nothing to do with the matter”). The third answer gave a recommendation regarding the lamb, namely, “that sometimes (when it comes to life and death), you should lie rather than be honest”. The first two answers recognise partial aspects of the fable more or less accurately but reveal problems in fully grasping the wolf’s motives. The third answer shows difficulties in adequately reconstructing the situation of the lamb, for whom lying was not an option to save its life. Feedback given to the students would therefore first have to provide fundamental assistance in understanding the fable’s storyline. In addition, the students require a hint about how to take a step towards an adequate generalisation. Beyond meeting general requirements for good feedback (e.g., Hattie & Timperley, 2007), it is important that the feedback refers to the concrete answer and does not contain only general reading or writing instructions (cf. Sturm, 2016).
3.4.2 PCK Assessment Tool in Mathematics
In mathematics, the PCK problems were presented to the preservice teachers in the following form (Figure 1):
Figure 1. Authentic problem from the mathematics PCK assessment tool.
The given student task can be solved in many different ways: through a representation with dot patterns or towers of boxes, by mathematical induction, through the observation of remainders, or by verbal description. Preservice teachers, but not sixth graders, additionally have the option of algebraic representation and transformation. For preservice teachers who master this, the problem is very easy to solve, as confirmed by our assessment. Kempen and Biehler (2014) show that a similar problem is difficult for beginning preservice teachers to solve, while Kempen and Biehler (2019) reveal that advanced preservice teachers have extended skills in this field.
To assess the requirements with regard to grade six students (problem [b]), the preservice teachers first had to be aware of which knowledge and which ways of solving the problem are available in grade six. It was expected that the preservice teachers would refer to a solution that could be mastered by sixth graders and not merely write that sixth graders are not able to solve the problem with variables. Depending on the solution that is chosen, different requirements can be mentioned. The central difficulty, however, is to use the given relationship between the numbers (“consecutive”) in a purposeful and productive way. In particular, it demands the establishment of a relationship between the given additive property and the claimed divisor property. To serve as an explanation for the claimed phenomenon, the considerations must be presented in such a way that it becomes clear how the given structural conditions cause the divisibility by three. This requires a general representation or at least a representation from which the general validity becomes clear. For sixth graders, to whom algebraic means of representation are not yet available, it is a demanding task both to find a representation and to strive for the general validity of their arguments.
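Since Figure 1 is not reproduced here, we assume for illustration that the task claims that the sum of three consecutive natural numbers is always divisible by three. The algebraic solution available to preservice teachers then reads:

\[
n + (n+1) + (n+2) = 3n + 3 = 3(n+1), \qquad n \in \mathbb{N},
\]

whereas a representation accessible to sixth graders without variables regroups the summands, e.g. by shifting one unit from the largest to the smallest number: \(7 + 8 + 9 = 8 + 8 + 8 = 3 \cdot 8\). Such a regrouping functions as a generic example (cf. Yopp & Ely, 2016), as it makes the structural reason for the divisibility visible independently of the particular numbers chosen.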
In the analysis of students’ responses (problem [c]), emphasis is placed on the difference between the two students’ answers, Monika and Dörthe. It was expected that the preservice teachers would recognise that Monika’s examples do not show any general validity but that Dörthe’s argument is structural and, thus, at least partly general (cf. generic examples in Yopp & Ely, 2016). They should recognise that the two answers reflect different levels of thought (cf. the difficulties in differentiating between the general and the specific described in Mason, 1984).
This difference between the two students’ answers should also be taken into account in the feedback given to the students (problem [d]). Dörthe must receive feedback that she has grasped the structural idea of a proof, and she may be asked to explain her answer in more detail. Monika, however, must be shown that her examples are not explanations, and she needs help in recognising the structure. The feedback should tie in with her approach of using numerical examples. As in German, it is therefore a matter of making concrete reference to the students’ responses and of observing general rules for good feedback (cf. Hattie & Timperley, 2007).
4 Results of the Study from an Interdisciplinary Perspective
The following description of the findings does not focus on aspects that are primarily relevant to a specific subject. Rather, cross-disciplinary points were identified in the comparative interdisciplinary expert discussion. We focussed on points in which good and less good responses differ and on aspects of preservice teachers’ responses that are relevant for designing learning environments for preservice teacher education.
4.1 Creating References Between the General and the Specific
One common feature of the two PCK assessment tools is that the preservice teachers must be able to relate the specifics of the student tasks and the general concepts to each other in such a way that the conclusions are sound and promote knowledge. In the German assessment tool, to solve the student task, it is necessary to deduce from the fable a generally valid moral. In the analysis of the students’ answers, the preservice teachers must assess whether the students successfully drew this conclusion. In the mathematics assessment tool, a mathematical idea of general validity is required above all. This includes knowledge of how it can be represented (see Section 4.2). Both in the German and in the mathematics PCK assessment instrument, preservice teachers must give feedback on the students’ answers containing information on how a generally valid solution can be formulated without neglecting the specifics of the students’ answers. In both assessment instruments, preservice teachers must also consider the specific features of the respective task when analysing the task requirements and compare these features with generally valid statements on the problem difficulties.
Our analyses of preservice teachers’ answers revealed that, in German, strong and weak answers differed markedly in the extent to which preservice teachers succeeded in inferring from the particular to the general and vice versa. The former (inferring from the particular to the general) was reflected in responses concerning an adequate moral of the fable. In particular, weak responses often formulated morals without a transfer from the fable to the world in which we live. This finding can be seen in five of the seven weakest solutions (e.g., in the following answer: this fable shows “that the truth costs the lamb’s life because the wolf is physically superior”). Stronger answers coped with the necessary transfer (e.g., this fable shows that “violence is justified by false pretences which have nothing to do with the truth”).
The analysis of preservice teachers’ answers to problems (b), (c) and (d) gave a different picture. The problem in many responses here was that the explanations remained so general that they could refer to any tasks or students’ answers. This finding occurred in both stronger and weaker solutions in the assessment. However, in the stronger solutions, this problem only existed in parts (e.g., in the analysis of the task requirements), while in other parts of their answers, preservice teachers successfully referred to the specifics related to the student task. In the weaker solutions, preservice teachers more often ignored specific features of the fable, of the student task or of students’ answers. An exemplary weaker solution did not examine whether the students’ answers correspond to the fable text; instead, the preservice teacher commented on the quality of the answers in general: “I personally would not have thought that students would come to these morals of the fable”. His or her feedback did not consider the specifics and problems of the students’ answers at all (e.g., “You understood the fable and learned a good lesson from it”).
It is not clear whether there is a lack of ability or a lack of motivation to go beyond commonplace feedback and to engage with the characteristics of the text and of students’ responses. In any case, the finding that better responses of preservice teachers are characterised by drawing conclusions from the particular to the general and from the general to the particular indicates that this is a facet of competence that should be specifically fostered in teacher training.
As in German, not all preservice teachers in mathematics were able to formulate feedback on students’ responses that considered both the concrete answer and the general problem behind it. In their analysis of a student’s response, preservice teachers must reflect on the extent to which it meets the requirements of a general explanation. If they conclude that it does not, they must help the student to advance to more structural thinking.
A positive example is the following response of a preservice teacher, in which an appropriate conclusion is drawn and substantiated on the basis of the data material:
Dörthe, on the other hand, reveals information that was not explicitly included in the problem, which shows that she has investigated the problem with regard to a general principle.... The general validity of the prose part of her answer ... corresponds to a mathematical, generally valid answer.
The preservice teacher further formulated the following feedback to the student:
You seem to have recognized a principle in this problem, namely that you transform the numbers before adding.... Can you formulate this principle more precisely in words or say why exactly it is always possible?
An example in which a preservice teacher did not help the student to recognise the structure is the following: “Monika should be told that examples are not enough and that general and logical statements have to be made”.
Concerning the problem requirements, the results in mathematics are similar to those in German: the analysis of the preservice teachers’ answers shows that the best answers achieve a productive balance between the general and the example-specific. Three typical answer patterns can be distinguished in the preservice teachers’ problem analyses.
The first gives general features that describe the difficulty of any arithmetic problem demanding proof; it remains completely general. The second answer pattern represents the opposite extreme and mentions individual steps of one special solution (usually the one chosen in problem [a]) without reflecting on them against general criteria. The third and most sophisticated answer pattern takes a middle path between the other two: it characterises the essential, problem-specific difficulties that must be overcome in all solutions. Answers at this highest level were given only by the best preservice teachers.
In their university classes, the preservice teachers had various practice opportunities in which specific tasks were taught in combination with general concepts. The findings show, however, that the majority of preservice teachers still have difficulty adopting this perspective. Establishing this connection should therefore be a greater focus when designing opportunities to learn in teacher education.
4.2 Dealing with Subject-Specific Concepts in a Sophisticated Way
In both subjects, it was assumed that the preservice teachers had acquired subject-related concepts during their studies, which they could use when working on the problems in the PCK assessment tools. These concepts are fundamental and prominent in their respective disciplines: in German, knowledge of the requirements of text comprehension tasks was particularly necessary, and in mathematics, the concept of general validity was needed. In fact, our study shows that preservice teachers in both German and mathematics are able to name appropriate concepts. At the same time, however, it also became clear that many preservice teachers had learned these concepts rather superficially and were not able to apply them to the given tasks.
In German, both strong and weak responses documented that knowledge relevant to the analysis of learner tasks had been acquired. In the task analysis, preservice teachers used technical terms that are common in the analysis of reading comprehension tasks. It can be seen, however, that these terms and the concepts behind them did not serve a better understanding of the task requirements but rather promoted the tendency, outlined above, to remain on a general level instead of referring to task-specific requirements. This phenomenon occurred in both stronger and weaker responses. In weaker answers, the technical terms were sometimes sprinkled into the statements without purpose. The enumeration of general, difficulty-determining task characteristics based on learned terms also occurred in those preservice teachers’ answers which raise doubts as to whether the fable text was understood. Conversely, answers that accurately summed up the specific task requirements largely lacked the corresponding terminology.
In mathematics, it became clear that all preservice teachers were aware that general validity is important when working on a problem such as the given one. Almost all preservice teachers interpreted the instruction “explain why” as an invitation to prove or give reasons for the general validity of the statement. It is also noteworthy that some linked the general term “explain” to the requirement of a formal representation. An explanation for this may be that these preservice teachers translated the instruction “explain”, together with the statement “always”, into the instruction “prove”, which they associate with a formal representation. Thus, they interpreted the term “explain” in a superficially schematic way: a rich term (“explain”) was apparently replaced by another (“prove”) to which some preservice teachers attributed a reduced meaning.
Strong responses showed a broad understanding of the nature of reasoning and of generality, both in their own mathematical problem solutions and in their analyses of problem difficulty and students’ responses. They allowed for various representations and focussed on the core of the mathematical problem. Thus, the required mental operations are essential, not the form of representation. Strong responses also interpreted students’ answers in order to understand the underlying thinking or to make the arguments clearer. They distinguished between the surface of the representation and the imagined or meant arguments or the recognised connections.
Weak answers, however, linked the expression of general validity to the use of variables. This shows that, for weaker preservice teachers, the terms “justification/evidence” and “general validity” are keywords whose meaning they grasp only to a limited extent. These preservice teachers considered the problem not to be generally solvable by sixth graders because the language of formulae is not yet available to them. They looked only superficially at both the problem and the answers given by the students.
On the whole, the static and undifferentiated way in which preservice teachers used subject-specific terms shows that they have mastered these concepts only superficially; their use of these terms may simply indicate an awareness that subject-specific vocabulary and notation are expected at university. In view of the findings outlined above, the fundamental question arises as to how subject-specific terms and concepts can be embedded in learning situations in such a way that they promote preservice teachers’ understanding of these concepts and their ability to use them for authentic problems, such as analysing task requirements and assessing students’ answers.
4.3 Accessing and Using Knowledge
A further remarkable overarching finding concerns the extent to which preservice teachers use relevant knowledge in application-related situations. Both in mathematics and in German, the pilot study shows that preservice teachers who used knowledge in one part of problem solving did not always draw on it in another part where it was equally relevant.
For example, with regard to the concept of general validity in mathematics, different levels of understanding could occur in different parts of the assessment. Some preservice teachers reflected on the concept aptly when working on the student task and discussing the demands it placed on sixth graders, but gave it little consideration in problem parts (c) and (d).
For example, one preservice teacher showed in problem parts (a) and (b) that his/her concept of general validity was not limited to the use of variables, because he/she demonstrated the general structure by means of an example. In problem part (c), however, he/she took a superficial view of the student’s answer, regarding the numerical example only as an individual case without taking the given transformations into account, despite having made a similar argument in problem part (a).
This example shows that the preservice teachers did not reliably build on already documented knowledge or findings when working on the highly application-oriented sub-problems (c) and (d). In this setting, it remains unclear why this is the case. However, the reverse variant also occurred: in some preservice teachers’ responses, after weak answers were given to sub-problems (a) and (b), the discussion of the students’ answers became much more sophisticated.
On the one hand, some preservice teachers did not succeed in producing concrete, well-founded problem analyses when assessing students’ responses. On the other hand, three of the ten preservice teachers in the top group benefitted from the analysis of students’ responses when assessing the problem difficulty. This was not the case for preservice teachers in the weaker group.
The phenomenon that preservice teachers showed well-founded insights only when analysing students’ responses, insights they did not seem to reach in the preceding sub-problems, is also evident in German. For example, some better and some weaker responses in German showed that the preservice teachers made progress in the course of their work on the assessment instrument. The following response makes this acquisition of knowledge explicit by supplementing the preservice teacher’s own suggestions for a moral of the fable and the presented task analysis:
Supplement (after reading the students’ answers): Apparently, for me the task was more difficult to master than for the students. My answers 1 & 2 also correspond less to a moral than to a summary.
The given authentic students’ answers have the potential to help preservice teachers see the requirements of the task more clearly, as is reflected in the remark quoted above. This also applies to the top group of responses. Here, in sub-problem (b) (i.e., the analysis of the student task requirements), only one answer considered the difficulty of recognising the wolf’s motives for action and his false accusations. Five of the seven answers from the top group evidently recognised this difficulty only through the students’ answers.
The preservice teacher’s comment quoted above verbalises how, in the course of working on the given problems, one’s own proposed solutions are re-evaluated, which can lead to new insights. For other preservice teachers’ responses, this can only be assumed (e.g., “I personally would not have thought that the students would come to these morals of the fable”, see Section 4.1). The finding relevant to teacher education courses is that concrete student responses apparently not only illustrate professional challenges but also provide instructional support for the analysis of task requirements.
This article presents findings from a study in which two PCK assessment tools, for mathematics and German, were piloted. The assessment tools were constructed according to the same principles in order to ensure the comparability of the findings. The interdisciplinary comparison of the preservice teachers’ performance in the two subjects makes it possible to focus on more general phenomena concerning the reasoning of future teachers. Of particular interest are findings that indicate problems encountered by preservice teachers in both mathematics and German in coping with the requirements of everyday teaching. From these interdisciplinary results, general conclusions can be drawn for the design of learning environments in teacher education.
The interdisciplinary comparison of the findings focuses on three aspects:
1) Preservice teachers in mathematics and German have difficulties in establishing appropriate links between the general and the specific. Only the best answers adequately related the specifics of a given example to the general concepts. Weaker answers consisted mainly of superficial statements, either without grasping the particularities of the example or without taking a step towards generalisation. From the perspective of competence development, the concentration on either the specifics of the example or on generalities may be interpreted as a variant of complexity reduction before the integration of both perspectives can be achieved. A possible explanation is that the given problem is complex because many variables must be considered and interrelated: inferences must be drawn from the characteristics of the given problem (domain-specific content, student task and student answers) on the one hand and from PCK on the other. Problem complexity, however, affects the difficulty of a problem and, thus, learners’ ability to solve it (Funke, 2013; Jonassen, 2000). In addition, research on the development of reasoning reveals that success in analytic reasoning is influenced by “the demands of the task, the currently available working memory resources, and cognitive ability” (Ricco, 2015, p. 522). The question relevant to teacher education is how preservice teachers can be supported in taking the necessary developmental step of perspective integration and in solving professional problems of increasing complexity.
2) Another variant of complexity reduction can be seen in the second finding. Preservice teachers acquire relevant technical terms during their studies. With regard to the concepts behind the terms, however, this study made apparent that the preservice teachers often lacked a deeper understanding. As a result, preservice teachers can often only name the concepts but cannot adequately apply them to authentic situations. In view of this finding, the interdisciplinary perspective proves to be fruitful. The comparison of preservice teachers’ performance in the two different subjects made clear that the problem is not due to a lack of knowledge of subject-specific concepts; rather, the weaker preservice teachers, in particular, seemed to have built up a superficial and reduced understanding of these concepts.
3) The results of the present study show that preservice teachers do not continuously and consistently apply their knowledge, meaning that their knowledge often remains inert (cf. Gruber & Renkl, 2000). However, when it comes to examining authentic student answers, some preservice teachers applied knowledge which they did not mention before. These findings support the idea of using students’ answers as a means to foster the development of applicable knowledge in teacher education.
The complexity represented in the present assessment tools is omnipresent in classroom situations. Analysing tasks, assessing the quality of students’ responses and providing helpful feedback are central activities in teaching. The participants in our study were not yet capable of coping appropriately with these rather complex requirements, even in paper-and-pencil assessment tools, i.e. in situations that are less complex than real classrooms. In addition to subject-specific difficulties, preservice teachers also display cross-curricular difficulties. Further studies should investigate whether, in some cases, a lack of cross-curricular skills, such as relating the specific to general principles and vice versa or dealing with complexity, is the cause of the difficulties encountered by preservice teachers and whether it prevents them from further developing subject-specific competences. This also applies to the question of how best to promote these general skills in subject-specific and cross-curricular ways.
A promising approach might be an interdisciplinary discourse among teacher educators on the design of learning environments. It is particularly important here to further develop learning environments that involve both subject-specific and cross-curricular perspectives and to test their effectiveness in experimental intervention studies.
The present study provides a first step towards constructing instruments for assessing preservice teachers’ PCK in the fields of German and mathematics. Given the small sample size of the present study, the new instruments should be tested in further studies with larger samples. Nevertheless, the integration of interdisciplinary perspectives on the preservice teachers’ results permits tentative conclusions about the general challenges preservice teachers face when dealing with problems of subject-matter teaching.
Baumert, J. & Kunter, M. (2013a). The COACTIV model of teachers’ professional competence. In M. Kunter, J. Baumert, W. Blum, U. Klusmann, S. Krauss, & M. Neubrand (Eds.), Cognitive activation in the mathematics classroom and professional competence of teachers. Results from the COACTIV Project (pp. 25-48). New York, NY: Springer Science + Business Media. https://doi.org/10.1007/978-1-4614-5149-5_2
Baumert, J. & Kunter, M. (2013b). The effect of content knowledge and pedagogical content knowledge on instructional quality and student achievement. In M. Kunter, J. Baumert, W. Blum, U. Klusmann, S. Krauss, & M. Neubrand (Eds.), Cognitive activation in the mathematics classroom and professional competence of teachers. Results from the COACTIV Project (pp. 175-205). New York, NY: Springer Science + Business Media. https://doi.org/10.1007/978-1-4614-5149-5_9
Blömeke, S. (2011). Teacher Education and Development Study: Learning to Teach (TEDS-LT) – Erfassung von Lehrerkompetenzen in gering strukturierten Domänen [TEDS-LT – Assessment of professional competences of teachers in ill-structured domains]. In S. Blömeke et al. (Eds.), Kompetenzen von Lehramtsstudierenden in gering strukturierten Domänen. Erste Ergebnisse aus TEDS-LT (pp. 7-24). Münster, Germany: Waxmann.
Blömeke, S. (2013). Einleitung: Professionelle Kompetenzen im Studienverlauf [Introduction: professional competencies during university studies]. In S. Blömeke et al. (Eds.), Professionelle Kompetenzen im Studienverlauf – Weitere Ergebnisse zur Deutsch-, Englisch- und Mathematiklehrerausbildung aus TEDS-LT (pp. 7-24). Münster, Germany: Waxmann.
Blömeke, S., Gustafsson, J.-E. & Shavelson, R. J. (2015). Beyond dichotomies: Competence viewed as a continuum. Zeitschrift für Psychologie, 223(1), 3–13. https://doi.org/10.1027/2151-2604/a000194
Boix Mansilla, V. (2010). Learning to synthesize: the development of interdisciplinary understanding. In R. Frodeman, J. T. Klein, C. Mitcham, & J. B. Holbrook (Eds.), The Oxford Handbook of Interdisciplinarity (pp. 288–306). Oxford: Oxford University Press.
Defila, R. & Di Giulio, A. (2015). Integrating knowledge. Challenges raised by the “Inventory of Synthesis”. Futures, 65, 123-135.
Depaepe, F., Verschaffel, L. & Kelchtermans, G. (2013). Pedagogical content knowledge: A systematic review of the way in which the concept has pervaded mathematics educational research. Teaching and Teacher Education, 34, 12-25. https://doi.org/10.1016/j.tate.2013.03.001
Döhrmann, M., Kaiser, G., & Blömeke, S. (2010). Messung des mathematischen und mathematikdidaktischen Wissens: Theoretischer Rahmen und Teststruktur [Measurement of content knowledge and pedagogical content knowledge in mathematics: Theoretical framework and test structure]. In S. Blömeke, G. Kaiser, & R. Lehmann (Eds.), TEDS-M 2008. Professionelle Kompetenz und Lerngelegenheiten angehender Mathematiklehrkräfte für die Sekundarstufe I im internationalen Vergleich (pp. 169-196). Münster, Germany: Waxmann.
Funke, J. (2013). Complex problem solving. In N. M. Seel (Ed.), Encyclopedia of the Sciences of Learning (pp. 682-685). Heidelberg: Springer.
Gess-Newsome, J. (2015). A model of teacher professional knowledge and skill including PCK. In A. Berry, P. Friedrichsen, & J. Loughran (Eds.), Re-examining pedagogical content knowledge in science education (pp. 28-42). New York: Routledge. https://doi.org/10.4324/9781315735665
Gruber, H., & Renkl, A. (2000). Die Kluft zwischen Wissen und Handeln: Das Problem des trägen Wissens [The gap between knowing and acting: The problem of inert knowledge]. In G. H. Neuweg (Ed.), Wissen – Können – Reflexion. Ausgewählte Verhältnisbestimmungen (pp. 155-174). Innsbruck, Austria: Studienverlag.
Gvozdic, K. & Sander, E. (2018). When intuitive conceptions overshadow pedagogical content knowledge: Teachers’ conceptions of students’ arithmetic word problem solving strategies. Educational Studies in Mathematics, 98, 157-175. https://doi.org/10.1007/s10649-018-9806-7
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. https://doi.org/10.3102/003465430298487
Heckhausen, H. (1972). Discipline and interdisciplinarity. In OECD (Ed.), Interdisciplinarity. Problems of teaching and research in universities (pp. 83-89). Paris: OECD, Centre for Educational Research and Innovation.
Jonassen, D. H. (2000). Toward a design theory of problem solving. Educational Technology Research & Development, 48(4), 63-85.
Kempen, L., & Biehler, R. (2014). The quality of argumentations of first-year pre-service teachers. Proceedings of the joint meeting of PME 38 and PME-NA 36, Vol. 3, 425-432.
Kempen, L., & Biehler, R. (2019). Fostering first-year pre-service teachers’ proof competencies. ZDM, 51, 731–746. https://doi.org/10.1007/s11858-019-01035-x
Krauss, S. et al. (2017). Das Forschungsprojekt FALKO – ein einleitender Überblick [The research project FALKO – An introductory overview]. In S. Krauss et al. (Eds.), FALKO: Fachspezifische Lehrerkompetenzen. Konzeption von Professionswissenstests in den Fächern Deutsch, Englisch, Latein, Physik, Musik, Evangelische Religion und Pädagogik (pp. 9-65). Münster, Germany: Waxmann.
Laschke, C., & Döhrmann, M. (2014). Beispielitems zur Erhebung des mathematischen und mathematikdidaktischen Wissens [Exemplary items for the investigation of content knowledge and pedagogical content knowledge in mathematics]. In C. Laschke, & S. Blömeke (Eds.), Teacher Education and Development Study: Learning to teach Mathematics (TEDS-M 2008). Dokumentation der Erhebungsinstrumente (pp. 347-409). Münster, Germany: Waxmann.
Lindl, A., & Krauss, S. (2017). Transdisziplinäre Perspektiven auf domänenspezifische Lehrerkompetenzen. Eine Metaanalyse zentraler Resultate des Forschungsprojektes FALKO [Trans-disciplinary perspective of field specific competencies of teachers. A meta-analysis of central results of the research project FALKO]. In S. Krauss et al. (Eds.), FALKO: Fachspezifische Lehrerkompetenzen. Konzeption von Professionswissenstests in den Fächern Deutsch, Englisch, Latein, Physik, Musik, Evangelische Religion und Pädagogik (pp. 381-438). Münster, Germany: Waxmann.
Mason, J. (1984). Generic examples: Seeing the general in the particular. Educational Studies in Mathematics, 15, 277-289. https://doi.org/10.1007/BF00312078
Ricco, R. B. (2015). The development of reasoning. In L. S. Liben & U. Mueller (Eds.), Handbook of child psychology and developmental science: Vol. 2, Cognitive processes (7th ed., pp. 519-570). Hoboken, NJ: Wiley. https://doi.org/10.1002/9781118963418.childpsy213
Rosenkränzer, F. et al. (2016). Das Fachdidaktische Wissen von Lehramtsstudierenden zur Förderung von systemischem Denken: Konzeptualisierung, Operationalisierung und Erhebungsmethode [PCK of preservice teachers for fostering systemic thinking: Conceptualisation, operationalisation and method of data collection]. Zeitschrift für Didaktik der Naturwissenschaften, 22(1), 109-121. https://doi.org/10.1007/s40573-016-0045-0
Rothgangel, M., & Vollmer, H. J. (2017). Ausgangspunkte [Starting points]. In H. Bayrhuber et al. (Eds.), Auf dem Weg zu einer Allgemeinen Fachdidaktik: Allgemeine Fachdidaktik, Band 1 (pp. 22-30). Münster, Germany: Waxmann.
Schoenfeld, A. H. (2010). How we think: A theory of goal-oriented decision making and its educational applications. New York, NY: Routledge.
Sherin, M. G., & van Es, E. A. (2009). Effects of video club participation on teachers’ professional vision. Journal of Teacher Education, 60(1), 20-37. https://doi.org/10.1177/0022487108328155
Shulman, L. S. (1986). Those who understand. Knowledge growth in teaching. Educational Researcher, 15(2), 4-14.
Shulman, L. S. (1987). Knowledge and teaching. Foundations of the new reform. Harvard Educational Review, 57(1), 1-22.
Shulman, L. S. (2015). PCK: Its genesis and exodus. In A. Berry, P. Friedrichsen, & J. Loughran (Eds.), Re-examining pedagogical content knowledge in science education (pp. 3-13). New York: Routledge. https://doi.org/10.4324/9781315735665
Son, J. (2013). How preservice teachers interpret and respond to student errors: ratio and proportion in similar rectangles. Educational Studies in Mathematics, 84, 49-70. https://doi.org/10.1007/s10649-013-9475-5
Sturm, A. (2016). Beurteilen und Kommentieren von Texten als fachdidaktisches Wissen [Assessing and commenting on texts as part of PCK]. Leseräume, 3(3), 115-132.
Tatto, M.T., Schwille, J., Senk, S., Ingvarson, L., Peck, R., & Rowley, G. (2008). Teacher Education and Development Study in Mathematics (TEDS-M): Policy, practice, and readiness to teach primary and secondary mathematics. Conceptual framework. East Lansing, MI: Teacher Education and Development International Study Center, College of Education, Michigan State University.
van Es, E. A., & Sherin, M. G. (2002). Learning to notice: Scaffolding new teachers' interpretations of classroom interactions. Journal of Technology and Teacher Education, 10, 571-596.
Winkler, I. (2010). Lernaufgaben im Literaturunterricht [Learning tasks in literature classes]. In H. Kiper et al. (Eds.), Lernaufgaben und Lernmaterialien im kompetenzorientierten Unterricht (pp. 103-113). Stuttgart, Germany: Kohlhammer.
Yopp, D., & Ely, R. (2016). When does an argument use a generic example? Educational Studies in Mathematics, 91, 37-53. https://doi.org/10.1007/s10649-015-9633-z
Zabka, T. (2006). Typische Operationen literarischen Verstehens. Zu Martin Luther, Vom Raben und Fuchs (5./6. Schuljahr) [Typical operations in understanding literature. On Martin Luther’s “On raven and fox” (grade 5/6)]. In C. Kammler (Ed.), Literarische Kompetenzen – Standards im Literaturunterricht. Modelle für die Primar- und Sekundarstufe (pp. 80-101). Seelze, Germany: Klett/Kallmeyer.
Dr. Iris Winkler
is a professor of German language and literature education at Friedrich Schiller University Jena, Germany.
Dr. Astrid Fischer
is a professor of mathematics education at Carl von Ossietzky University of Oldenburg, Germany.
Dr. Ulrike-Marie Krause
is a professor of educational sciences at Carl von Ossietzky University of Oldenburg, Germany.
Dr. Birte Julia Specht
is a research assistant in mathematics education at the Institute of Mathematics at Carl von Ossietzky University of Oldenburg, Germany.