Chapter 2. Literature review

Through the window we could see eight children, each working on their own computer. They were spread out across the room as they were all working on individual projects having chosen different topics for their explanatory animations. The teacher was also the researcher but he behaved more like a teacher, partly because he actually was their teacher each week for Performing Arts sessions and he'd known the children for many years. His interactions with the children involved discussions and a critique of the imagery and words that they were using to explain topics for which they claimed no prior expertise. Towards the end of the session, each child began recording their own voice as a reflection about their progress and plans.

 

Introduction to the literature review

The empirical field for the Storyboard project is primary school animation authoring. Surprisingly, in a technologically rich era where electronic screens are a permanent part of our educational landscape, this field is fertile but dormant. The current study sought to investigate what impact the explanatory animation creation task has on the conceptual understanding of the animation author. A survey of the literature about explanatory animation creation would suggest that this multifaceted task is the exclusive domain of professional animators. This assumption runs so deep throughout the literature that it is not even questioned but, rather, accepted as a given. With the possible exception of Hoban, Nielsen and Carceller (2010) whose research investigated pre-service teachers using Claymation, it appears that there is no literature about children making explanatory animations for the sake of their own learning. For this reason, the explanatory animation task, although crucial to the current study, is largely absent from the literature review. This important theme will be revisited in Chapter 5 as a discussion informed by the data analysis and results from the current study.

As the overarching reason for the explanatory animation task in the current study was learning, this literature review begins by looking at constructionism to harness Papert’s (1991, 1993) idea that producing the right kind of knowledge artefacts can be both engaging and generative for learning and knowledge creation. Conceptual change is then explored as a re-evaluation of what concepts are and how they might inform learning. Visualisation and mental models then steer this literature review towards issues pertaining to representation whilst retaining the overarching interest in teaching and learning.

The explanatory potential of representations is then developed through other multimodal contexts such as storyboards and models. This focus on representation includes a discussion on the use of metaphors and analogies as mediating devices. The ability to use metaphors for an explanatory purpose is an example of paraphrasing through the identification and articulation of relevant variables. The importance of this process leads the discussion into a re-evaluation of the abstract and the concrete as the construction of an explanation using metaphors surfaces the designer’s conceptual understanding of their subject matter through their ability to make and justify these connections. Such design choices are further examined as design principles for teaching with animation after an introduction to learning from viewing explanatory animations.

A discussion on concept formation and the dual stimulation method leads into a critique of CHAT and its value to this study to forge strong theoretical links between the conceptual side of this project and the methodology that was devised to implement it.

Vygotsky’s zone of proximal development (ZPD) then brings the dynamics of teaching and learning into focus as “mutual zones” (John-Steiner, 2000, p. 177) of proximal development for the co-construction of meaning as collaborative partners. The ZPD constitutes a major theme for this study that is developed throughout the rest of the thesis.

The final discussion in this literature review involves schematic diagrams as conceptual metaphors, since schematic diagrams and explanatory animations share many of the same design constraints in the interests of communicative clarity. The creation of schematic diagrams places deliberate limits on the graphic imagery involved as, by definition, schematic diagrams are selective rather than exhaustive. Likewise, careful consideration must also be given to decisions regarding essential content information for the purpose of effective communication. Schematic diagrams and constructionism are presented as metaphorical bookends for this literature review to suggest that the creation of explanatory artefacts can be generative for learning.

 

Constructionism

The learning theory that permeates the current study is constructionism as articulated by Harel and Papert in the seminal book Constructionism (1991). The current study sought to investigate connections between the act of making an explanatory animation and the process of conceptual consolidation. Constructionism provides a useful framework to investigate such dynamics due to the central focus on building artefacts and how these, in turn, become mediating tools for learning. Accordingly, from the constructionist point of view, “knowledge is a modelling process, which shapes and edits reality to make it intelligible” (Floridi, 2011, p. 301).

Papert (1991) acknowledged that his formulation of constructionism was built on the foundation laid by Piaget's constructivism. Edith Ackermann (1991) was well qualified to comment on both of these epistemologies having worked closely with both Piaget and Papert for many years:

Because of its [constructionism’s] greater focus on learning through making rather than overall cognitive potentials, Papert’s approach helps us understand how ideas get formed and transformed when expressed through different media, when actualized in particular contexts, when worked out by individual minds (p. 4, original emphasis).

The production of digital artefacts generates multiple sources of data. Reconciling these multimodal sources of data became a research interest for Kafai and Resnick (1996) who theorised that the ideals of constructionism can integrate both design theories and learning theories which have traditionally been seen as emphasising either the product (design) or the process (learning). They note that “both design theorists and learning theorists now view ‘construction of meaning’ as a core process” (Kafai & Resnick, 1996, p. 4). According to Bateman (2008) this cross-pollination of design and learning has paved the way for digital representations and artefacts to become part of the qualitative researcher’s tool kit in instances where conceptual artefacts are constructed. Bereiter (2002) has reconciled even deeper epistemological issues inherent in design and learning by suggesting that improvement is a more fruitful attribute than truth:

You cannot know or justifiably claim that you are getting closer to truth. That would require that you already have an idea of what the truth is. But you can specify ways in which one conceptual artifact is an improvement over its predecessor. You can show how it overcomes faults that were detected in the predecessor, how it accounts for facts that an older theory could not, or merely that it does the same conceptual job more economically or elegantly (p. 429).

Bereiter’s quest for improvement is fundamental to the current study as the children’s explanatory animation creation process demonstrated instances of incremental improvement and showed how these iterations might display the learning that was embedded in the children’s conceptual artefacts. For Papert (1993), working with artefacts also means that learning can take place outside of the learner’s head as an artefact can be “shown, discussed, examined, probed, and admired. It is out there” (p. 142).

Rusk, Resnick and Cooke (2009) described constructionism as “a different model of learning and education, where the focus is on construction rather than instruction” (p. 19, original emphasis). This distinction between construction and instruction was occasioned by Papert’s (1980) famous speech to Japanese educators when he equated teaching with instruction and learning with construction saying “teaching is important, but learning is much more important” (original emphasis). Papert’s dismissal of instructionism (i.e., the idea that improved learning comes from improved instruction) formed the basis of his assertion that attempts to improve instruction alone are misdirected.

Papert and Harel (1991) further explained the contrast between constructionism and instructionism as an epistemological issue that goes “beyond the acquisition of knowledge to touch on the nature of knowledge and the nature of knowing” (p. 8). Papert (1993) continued to draw this distinction between teaching and learning by stating that the goal of constructionism is to “teach in such a way as to produce the most learning for the least of teaching” (p. 139). For Sutter (2001), the challenge is for constructionists to explain how instruction fits in with constructionist learning principles. Hoban, Nielsen and Carceller (2010) described constructionism as a “meta-theory” (p. 434) rather than an explicit learning theory. They also noted that there are very few studies using constructionism as a theoretical framework to “articulate the process of designing and making artefacts and justify why this process is beneficial for student learning” (Hoban, Nielsen, & Carceller, 2010, p. 435). It is Floridi (2011) who has articulated and expanded constructionist principles through his article A defence of constructionism: Philosophy as conceptual engineering. The six principles that Floridi presented are paraphrased in the following list:

  1. The principle of knowledge - Only what is constructible can be known. (Anything that cannot be constructed, at least conceptually, can only be subject to working hypotheses).

  2. The principle of constructability - Working hypotheses can be investigated through conceptual models and simulations.

  3. The principle of controllability - Models must be controllable.

  4. The principle of confirmation - Confirmation or refutation relates to the model itself rather than the system being modelled.

  5. The principle of economy - The fewer resources used in a conceptual model the better.

  6. The principle of context-dependency - Points of correspondence between simulation and simulated are local rather than global (pp. 300-301).

These six principles are applicable to the current study as the primary school children used the conceptual modelling process to enhance and expand their understanding. As Nersessian (2008) has noted, modelling has a “generative” quality (p. 48). Hence, the next section focuses on conceptual change and how this is a generative process that leads towards conceptual consolidation.

 

Conceptual change and conceptual consolidation

A key component of the research question for the current study is conceptual change. Conceptual change is still the most commonly used term to describe the learning of concepts, which according to Chi (2008), is primarily a process of classification. Hewson (1992) discussed how the word “change” has a wide semantic range (pp. 3-7) in relation to conceptual learning, although the assumption within the conceptual change literature is that the change is leading to improvement. Smith, diSessa and Roschelle (1993) used the word “refinement” (p. 150) as a more nuanced term. I have used the term conceptual consolidation (Oliver & Ebers, 1998; Ortiz & Wright, 2010) throughout the current study, as the purpose of this research is to investigate how the explanatory animation creation process might effect conceptual change up to a point that demonstrates consolidation.

The more problematic word in the heading of this section is actually concept, as there is no consensus as to what a concept actually is. Adherence to a classification model of concepts led Jackendoff (1999) to make a distinction between internal and external concepts using the terminology of “I” and “E” concepts (p. 306) based on Chomsky’s (1986) notion of internal and external language. This I-E distinction might be generative for hierarchies and taxonomies of concepts but it is not conducive to defining what a concept is. Laurence and Margolis (1999) discussed a range of conceptual definitions before concluding that, “for most concepts, there simply aren’t any definitions” (p. 14). Medin and Rips (2005) introduced their summary of the cognitive science literature on conceptual change by saying that the word concept itself is “up for grabs” (p. 37). Perhaps the problem of defining a concept is inherent in language itself. As Evans (2006) argued, words don’t have meaning per se, and so their meaning must be inferred as a “function of situated use” (p. 527). In this sense, a concept can be defined according to how it is used. In the current study, the word concept is used as an explanatory concept (Murphy, 2000; Thomas, 1977; Zif, 1983), which is then defined as anything that requires explanation.

There are also different perspectives in the literature about the nature of conceptual change but one common thread is that conceptual change is seen as a phenomenon that occurs throughout a person's life and is therefore developmental. Hence, a central issue within the conceptual change literature concerns the nature and rate of change. Opinion is divided as to whether conceptual change is drastic or incremental. In the context of scientific thought, Kuhn (1970) characterised conceptual change as revolutionary, exhibiting paradigm shifts. However, Toulmin (1972) rejected this view, citing examples of a more gradual, evolutionary nature, such as Darwin’s understanding of biology exhibiting a consistent application of perpetual variation. Perhaps what is most illustrative from this debate is that there is ample support for each view.

What is common to both conceptual change and conceptual consolidation, however, is that they both speak of concept formation. “The main question about the process of concept formation - or about any goal-directed activity - is the question of the means by which the operation is accomplished” (Vygotsky, 1962, pp. 55-56). This issue of means continues to be of interest to science education, and particularly model-based reasoning (Nersessian, 1984, 2002, 2008, 2012) which speaks of an “interplay between concept formation and modelling practices” (2012, p. 222).

Özdemir and Clark (2007) compiled an overview of conceptual change theories in which they organised the literature into two groups according to epistemological differences, namely knowledge as elements and knowledge as theory.

Many researchers affirm the knowledge as elements perspective (Clark, 2006; diSessa, Gillespie & Esterly, 2004; Harrison, Grayson & Treagust, 1999). The knowledge as theory group is also strongly represented in the literature (Ioannides & Vosniadou, 2002; Wellman & Gelman, 1992). Wiser and Amin (2001) suggested that the answer might be somewhere in the middle because there is obvious merit, and evidence, for each view. The process of conceptual change varies in different contexts depending on the age of the person and the intrinsic complexity of each concept. Carey (1999) is insightful here by suggesting that conceptual restructuring is not global, but domain specific.

It is worth noting that the intended contrast in Özdemir and Clark's schema is between knowledge as elements and knowledge as theory, where knowledge, rather than understanding, is the common link. The semantic ranges of common words such as knowledge, understanding and information overlap considerably. The Oxford English Dictionary defines “knowledge” in various ways, from a general “fact” through to “familiarity gained by experience” and even a “person's range of information”. Knowledge as information might explain why the practice of classification is so widespread in the conceptual change literature. By contrast, Vygotsky's insight that concepts develop (Vygotsky, 1987) is more in line with understanding, as conceptual consolidation is clearly a dynamic process where “the concept does not emerge in a static and isolated form but in the vital process of thinking and resolving a task” (p. 128).

There is a long tradition within the conceptual change literature to see conceptual change as a replacement of misconceptions with more accurate conceptions. Smith, diSessa and Roschelle (1993) have called for a more nuanced approach to balance this view:

The goal of instruction should be not to exchange misconceptions for expert concepts but to provide the experiential basis for complex and gradual processes of conceptual change. Cognitive conflict is a state that leads not to the choice of an expert concept over an existing novice conception but to a more complex pattern of system-level changes that collectively engage many related knowledge elements (p. 154).

Terms such as “conceptual conflict” (Smith, diSessa & Roschelle, 1993, p. 154) and “conceptual exchange” (Hewson, 1992, p. 4) revisit the issue of whether concepts are replaced or modified, which is another way of looking at the rate-of-change debate from the 1970s. More recent work has focused on the illusion of explanatory depth (IOED) (Rozenblit & Keil, 2002). Keil (2006) defined the IOED by stating that:

People of all ages tend to be miscalibrated with respect to their explanatory understandings; that is, they think they understand in far more detail than they really do how some aspect of the world works or why some pattern in the world exists (p. 242).

The IOED could then be paraphrased as a phenomenon whereby people often settle for an understanding of what rather than how. Keil (2006) suggested that one reason for the IOED is that identifying causation is often equated with understanding. For example, a person might realise that kidneys clean blood but not know how. Keil (2006) attributed the IOED to an initial surge of insight:

When we learn one of these causal-functional relations, we get an appropriate surge of insight into an explanatory relation that we did not have before. The problem occurs when we attach that surge of insight to an inappropriately low level of analysis (pp. 242-243).

Alter, Oppenheimer and Zemla (2010) are insightful here by noting that this overconfidence is most noticeable in oral language and that it peaks “before attempting to express those explanations in writing” (p. 10). It would then appear that the task of writing conceptual explanations has a generative effect on a person when they are challenged to find the right words for their explanation.

The knowledge as elements versus knowledge as theory issue might suggest a continuum between these two positions, but it also follows that both elements and theory are involved in all cases, as a theory can’t exist without elements and, likewise, a system can’t exist without components. The commonality between concepts and systems is worth exploring, as they both appear to have components and connections between these components. This is highly relevant to the current study because the children’s explanatory animation creation task involved constructing imagery in addition to speaking and writing to enhance their explanation of how these component parts might function together.

 

Concepts as systems and variables

The Oxford English Dictionary defines a system as a “complex whole, set of connected things or parts, organised body of material or immaterial things”. In the most basic sense, a system implies relationships between component parts. The current study uses the term variables (Zimmerman, 2007) for the component parts of a system. A basic outline of scientific terminology is provided in Table 2.1 to define my use of the term ‘variable’, using two examples of how the following four variables impact one another during scientific experiments:

  1. Independent variables

  2. Dependent variables

  3. Constant (control) variables

  4. Extraneous variables

 

Table 2.1

Examples and Types of Variables

| Type of variable | Definition | Scientific experiment example using paper plane designs (Helmenstine, 2014) | Scientific experiment example contrasting two growing plants |
| --- | --- | --- | --- |
| Independent variable | The variable that is changed in an experiment | Design of the plane (length of wings) | Water for one plant |
| Dependent variable | The variable that is measured | Flight of the plane (duration or distance of flight) | Growth rate of the plant receiving water |
| Control variable | A variable that is kept the same during the experiment | Size of the paper | Sunlight for both plants |
| Extraneous variable | A variable that has no effect on an experiment | Colour of the paper | Colour of the pots |

 

Of note from Table 2.1 is that variables need to be identified and articulated before they can be classified according to their interrelationships. Davydov (1990) used the relationships between variables within a conceptual system as a way to explain concepts in other domains such as economics. The complexities of economics, for instance, become more manageable when broken down into variables with corresponding relationships.

Similarly, variables within music can be understood according to their role in the musical system. One musical element, for instance, is melody. A melody can only be expressed through individual notes and yet an individual note only has melodic value in relation to other notes. This point was made by Christian von Ehrenfels in his 1890 essay On gestalt qualities, in which he observed that a transposed melody can be recognised even if it does not contain any of the same notes as the original. Musical transposition is a good example of a holistic structure because it affirms the relationships between variables using a system perspective. This musical example also influenced Langer's (1957) seminal work Philosophy in a new key, where meanings are “understood only through the meaning of the whole, through their relations within the total structure” (p. 97).

Kuhn and Dean (2005) found that they could improve learning for children by explicitly reminding them to focus on one variable at a time. Kuhn and Dean’s method not only simplified the task for their students, but also implied the strategy of articulating what the focus actually was at any given time. Zimmerman (2007) also noted that scientific learning is “multifaceted” (p. 213), and that the isolation of variables is an important part of the scientific method. Kuhn, Iordanou, Pease and Wirkala (2008) have sought to expand this traditional view by asking “what else, beyond control of variables, is involved in the development of skilled scientific thinking?” (p. 436). They suggested that skill in experimentation, skill in argumentation and an epistemological understanding of the nature of science will help induce scientific reasoning processes in children. It is this second area (i.e., skill in argumentation), that most informed the current study as the eight children were given the task of explaining their topics without actually participating in scientific experimentation per se. Instead, their task was largely representational and so the theoretical interest here shifts to visualisation and the existence of mental models.

 

Visualisation and mental models

An area of interest to the same cognitive psychologists who investigate conceptual change is the existence of mental models. According to Rapp (2007):

Mental models are internal representations of information and experiences from the outside world. Indeed, mental models have been discussed beyond psychology proper; they are often invoked by science educators to describe the types of representations that equate with adequate comprehension of educational material (p. 44).

Mental models depict an individual’s understanding of particular concepts. They are “representations that rely on a person's understanding, but are not always valid or reliable” (Rapp, 2007, p. 45). For Rapp, the reliability of a mental model relates to its alignment with the topic. Hence, faulty models are common across various subject areas, ages and demographics for a variety of reasons, ranging from the quality and nature of instruction through to other pedagogical considerations such as a student's prior knowledge (Carey, 1985; Diakidoy & Kendeou, 2001; Osborne & Freyberg, 1985).

A challenge for teachers and researchers is trying to understand what students’ mental models actually look like. Explanations require students to construct mental models of the content presented to them, regardless of whether the explanation includes any diagrams. This is the essence of the ubiquitous phenomenon of visualisation. Gilbert (2007) argued that, “visualization is central to learning, especially in the sciences, for students have to learn to navigate within and between the modes of representation” (p. 9). Hence, visualisation is strongly associated with the development of mental models as mental models “refer to the model of the system actually constructed by the learner” (Mayer, 1993, p. 568). The structure of mental models gives representational form to the given or implied particulars of a concept but a mental model does not necessarily have a physical structure (Caws, 1974). The relevance of this duality for the current study is that the explanatory animation process facilitates tangible expressions of internal, mental models. Gilbert (2007) calls this an expressed model:

By its very nature, a mental model is inaccessible to others. However, in order to facilitate communication, a version of that model must be placed in the public domain and can therefore be called an expressed model (p. 12, original emphasis).

An expressed model is particularly useful to facilitate learning because it is tangible and thus available for further critique and discussion. Veresov (2013) described this as a basic premise pertaining to multimodality and knowledge representation, when he suggested that representations must be visible before they can be observable, and they must be observable before they can be analysable.

 

Multimodality and knowledge representation

A mode is a specific type of communication such as language, imagery or gesture. Kress (2010) has expanded the definition of ‘mode’ to include attributes such as colour. The commonality amongst modes is that they can convey meaning. Wright (2010) has articulated many of the modes, including “gesture, body language, facial expressions, eye contact, dress, writing, speech, narratives, the mass media, advertising, drawing, photography, space, cuisine and rituals” (pp. 11-12), but was careful to note that for children, meaning is constituted by its total effect as semiotic units and should be “understood as a single multimodal act” (Wright, 2010, p. 14).

The format in which a mode might be expressed, such as paper, email or text message, is referred to as the medium. The animation medium is a composite mode containing elements such as imagery, language, colour, movement and music. Eisner (1982), Green (1988), Knobel and Lankshear (2007) and Wright (2010) have noted that each modality “affords a particular type of meaning” (Harste, 2010, p. 29).

A fundamental presupposition that binds all systems of representation is that they have “the power to evoke something else” (Pratt & Garton, 1993, p. 1). Davis, Shrobe and Szolovits (1993) have taken this notion further by describing representations as a “surrogate” (p. 17) for that which they represent. Pratt and Garton (1993, pp. 1-9) provided some general principles of how representations evoke mental associations with objects:

  1. Internal (i.e., mental imagery) and external representations (e.g., pictures, language) have some obvious overlap. For instance, “we cannot think about pictures, if we cannot represent them mentally” (Pratt & Garton, 1993, p. 2). Similarly, we cannot organise mental constructs without either imagery or language.
  2. The system of representation is distinct from the application of that system. An example of this is how language is a system with an entire body of linguistic research and literature but the application of language is communication. Art is also a system with a body of artefacts (e.g., paintings, sculptures, installations) but the application of art is creating and communicating thoughts and feelings through images and objects.
  3. The degree or extent to which representations are related to their objects can be understood as “dimension[s] of arbitrariness” (Pratt & Garton, 1993, p. 3). At the most arbitrary end of this scale is language itself. For example, the symbol ‘5’ could initially have been attached to any number, whereas in Roman numerals the use of a ‘V’ symbol to represent the number 5 is less arbitrary because the V was chosen for its resemblance to an outstretched human hand with five fingers. Photographs and videos are considered to be the least arbitrary because they generally look realistic unless they are heavily stylised.

Whether internal or external, arbitrary or realistic, the representational focus of the current study is on the communicative intent of the person making a representation. Kress and van Leeuwen (2006) have noted how the person who makes a representation (i.e., sign-maker) displays their communicative interest through the “criterial aspects” (p. 7) evident in a depiction. Representations are then value-laden according to these criterial interests. “Representation is never neutral: that which is represented in the sign, or in sign-complexes, realizes the interests, the perspectives, the positions and values of those who make signs” (Kress & Mavers, 2005, p. 173). Jewitt (2008) has also noted that mode and meaning are deliberately aligned when constructing multimodal texts:

How knowledge is represented, as well as the mode and media chosen, is a crucial aspect of knowledge construction, making the form of representation integral to meaning and learning more generally (p. 241).

The alignment of mode and meaning has its origins in the notion of form and function. Form and function are often discussed together as important design issues; indeed, their reciprocal relationship has been discussed for centuries. Sullivan (1896) coined the phrase “form ever follows function” (p. 407) but attributed the idea to the Roman architect Marcus Vitruvius Pollio. According to Andreou (2013), “in every attempt to communicate information the concepts of the medium and the message, form and content takes precedence” (p. 12). The point is that the analysis of form should begin with an analysis of function.

Analysing the function of a representation is an iterative process. Bateman (2008) suggested that a key to understanding multimodality is to understand the genre to which it belongs. He prefers the term “multimodal documents” (p. 9) for static pictures due to the way that additional elements such as arrows and notations can enhance meaning. Wright (2011) has further noted how the function of drawing for young children can extend beyond aesthetics and communication to include both provisional and generative elements. In this way drawing can surface what they “already know, what they are grappling with and what they are motivated to explore further” (pp. 171-172). Engeström and Middleton (1996) also noted these reflexive affordances by stating that, “visual representations serve a reflexive function in that they break down the tight flow of written argument, forcing both the writer and the reader to stop and look, and then to realign the two modalities” (p. 5). It would appear that realigning the two modalities of graphics and language is intrinsic to the process of storyboarding.

 

The affordances of storyboards

A storyboard, in its most basic form, is a linear presentation of ideas using pictures and (usually) words. Using pictures and art to represent scenarios is an ancient practice with countless examples throughout history. The storyboard is a continuance of this tradition but it serves the unique and specific purpose of scripting movement for animations and films. According to Canemaker (1999), “storyboarding was invented at the Disney Studios and is today a worldwide standard procedure for the production of both animation and live-action films and video” (p. ix).

The storyboarding technique is still as central and useful as it was back in the 1930s but the process has now been enhanced through the use of computers. Computer-aided storyboarding facilitates the rendering of the storyboard frames into draft animations called animatics. The purpose of both traditional storyboards and animatics is to improve efficiency by avoiding the time and expense of creating imagery which won't be used. In this sense, storyboards serve as planning tools that help troubleshoot the storytelling process.

As the children in the current study were using storyboarding as a tool to create their animations, the various stages of the storyboarding process were also informed by some of the same techniques employed by filmmakers more generally. As with film production, there are three main stages of the video production process, namely pre-production, production and post-production (Steiff, 2005). Planning is normally done at the pre-production stage. Planning the order of the various scenes makes the process of filming more efficient because the production proceeds in accordance with the storyboard plan. In this sense storyboards afford “an intermediate stage between the script and the editing process; it could, in fact, be seen as a form of pre-editing” (Michelson, 1993, p. 8).

An important new perspective about the affordances of storyboards was found by Mitchell, de Lange and Moletsane (2011) through their work with women in Rwanda to make short documentaries about gender inequality. They found that the group discussions surrounding the creation of storyboards were so fruitful (and time consuming) that the group never proceeded to the filming stage. In this instance, where a final film was never made, “the storyboard became the product” (Mitchell, de Lange, & Moletsane, 2011, p. 224) which led them to conclude that most of the learning occurs during the storyboarding process, regardless of whether the storyboard is actually used to generate a final video artefact.

The affordances of storyboards would then appear to be more than the sum of their component parts (i.e., written text, imagery, movement, narrative, colour and so on) because the storyboard has its own affordance of order. This forces a designer to “break down a concept into its constituent parts and place them in a sequence” (Hoban, Nielsen & Carceller, 2010, p. 439). A storyboard not only implies order, it also implies rearranging order. The affordance of reordering is further enhanced when a storyboard has an explanatory purpose because the structure of a storyboard outlines the key points in summary form. Mitchell et al. (2011) have noted that “the production of the storyboard is itself a way to go deeply into a discussion, but it is also a way to contain it” (p. 229). The available literature illuminating how storyboards might be used for explanatory purposes can be supplemented here through extant research on explanatory models.

 

Explanatory models

According to the Oxford English Dictionary, a model is “a representation in three dimensions of proposed structure etc.” although the common usage of the word ‘model’ has since abandoned any requirement for models to have three dimensions.(2) Caws (1974) discussed how the structure of a model is itself more indicative than the elements within the model. For example, a model house made of matches or cardboard resembles the function of bricks rather than the bricks themselves. This emphasis on structure is an affordance of models in general, because the structure is literally embodied within the model. Caws was careful to note that “a structure is not a set of entities but a set of relations” (1974, p. 3). Caws also coined the term explanatory model as “one of the scientists’ mental structures” (Caws, 1974, p. 5) thus making a distinction between mental constructs and representational models, the latter of which he saw as a description of relationships.(3)

In cognitive science and educational psychology, an explanatory model is simply a representation that has an explanatory purpose (Floridi, 2011; Gilbert, Reiner & Nakhleh, 2008; Hubber, Tytler & Haslam, 2010; Treagust, Chittleborough & Mamiala, 2002; Wiser & Smith, 2008). The current study uses ‘explanatory model’ in this same tradition. A ‘cognitive model’ (Evans, 2006) however, is not an artefact but a conceptual knowledge structure that lists the “semantic potential that lexical concepts provide access to” (p. 496). Gemino and Wand (2004) and others have used the term ‘conceptual model’ as a synonym for explanatory model. Of interest is the way in which the creator of a conceptual model is forced to confront their own understanding of their subject matter. Gemino and Wand (2004) further noted how “discrepancies between a person’s understanding of the system, and the model used to represent the system leads to issues in both creating and interpreting” an artefact (p. 256).

One such issue is that all explanatory models have inherent limitations. Bonini's paradox has some relevance here. The paradox, named after Stanford business professor Charles Bonini, proposes that the more complete the model, the more difficult it is to comprehend (Dutton & Starbuck, 1971). Hence, the strength of any model lies in simplifying the subject matter by omitting non-essential information. Michael Poole (1995) referred to a similar paradox when describing the limitations of language in relation to concepts and models saying that “every comparison has a limp” (p. 49) as comparisons deal with particulars and are therefore only partial. Hutchins (2012) has also noted that models are selective for the sake of clarity. He described this selectivity as a “filter” (p. 319) where certain elements and relationships are deliberately eliminated.

Within the science education literature, there is a renewed focus on representation as evidence of learning, because the creation of models as representations can surface what a child knows (Gilbert, Reiner & Nakhleh, 2008; Hubber, Tytler & Haslam, 2010; Treagust, Chittleborough & Mamiala, 2002). This interest has also sought to understand how student-generated models can facilitate interactions between students and teachers (Chandrasegaran, Treagust & Mocerino, 2011). During these interactions, the learning process relies on the teacher’s ability to interpret “students’ representations as evidence of their understanding” (Waldrip & Prain, 2013, p. 29).

In application, a Representation Construction Approach (RCA) to learning (Tytler, Hubber, Prain & Waldrip, 2013) is a pedagogy based on the central practice of students making representations and then using these representations as catalysts for conceptual consolidation through classroom discussion. Of particular interest is the emphasis that the RCA places on negotiation and co-construction of meaning. When an artefact has an explicit communicative role and the creator of that artefact is present for dialogue during its creation (rather than relying on interpretation after the fact), the co-construction of meaning revolves around the explanatory purpose of that artefact.

An RCA treads consciously around another issue in science education: the tension between the timing and use of established, canonical representations and the exploratory, creative pursuit of letting students develop their own representations. An RCA addresses this issue with a keen interest in representations as a window into mental models as “learning about new concepts cannot be separated from learning both how to represent these concepts and what these representations signify” (Waldrip & Prain, 2013, p. 17). Perhaps the most important premise from the RCA is that representations must be explained and critiqued, as the explanatory purpose of representations is not always self-evident. This resonates with the work of Harrison and Treagust (1996) who made the same point about using metaphors.

 

Metaphors and analogies as mediating devices

The use of metaphors and analogies is a common device for the purposes of explaining one thing by relating it to another. Lakoff and Johnson (1980) have described the source domain as the familiar and the target domain as the subject for which we are seeking to infer a comparison. For example, when the metaphor of water is used to explain electricity, it is assumed that people are familiar with attributes of the source domain, such as water pressure and water flow, and some of these attributes are then taken to illustrate the less familiar attributes of the target domain, electricity. It is important, however, to remember that metaphors are not explanations. Their value is “more heuristic than analytical and more useful in the context of discovery than verification” (Weiner, 1991, p. 929).

Because metaphors are used to make connections, it is not uncommon to find multiple or even contrasting metaphors, as no metaphor provides a complete analogy.

Petrie (1979) has proposed that metaphors are epistemologically necessary. By this he means that it is by comparing and contrasting one thing to another that knowledge is constructed and shared. Andreou (2013) has also noted that the use of metaphors is a creative act by “linking things that are originally unrelated” (p. 14). Lakoff and Johnson (1980) have emphasised that metaphors are not merely a literary device, but rather, an essential part of thinking and reasoning. Yet others such as Green (1993) have cautioned that the common pedagogical practice of using metaphors to enhance explanations can also be problematic as students often “transfer attributes from the teacher’s analogue to the target” in a literal sense (Harrison & Treagust, 1996, p. 511).

According to Zittoun, Gillespie, Cornish and Psaltis (2007), “metaphors have their affordances, their side-effects, and their unexpected consequences” (p. 225). One problem, then, is that students are often not told which parts of a metaphor are relevant and which are not, or how one concept can be mapped onto another. Fichtner (1999) offers insight here, noting that an important aspect of working with metaphors is knowing how to handle them as, “metaphors are not illustrations of empirical facts, but rather visual images of theoretical relationships and, thus, a means of reflection” (p. 323).

Both deliberate instructional metaphors and figurative language are often misunderstood by children as language operates on a “horizontal (sequentially ordered) plane as well as a vertical (associational or metaphorical) plane” (Manning, 2003, p. 1023). Metaphors involving representations of physical objects appear to be particularly problematic for children. Deloache and Burns (1993) found that the ability of children to recognise the symbolic qualities of an object is a skill that develops throughout childhood.

Ackermann (1991) noted that children often interpret metaphors literally to the detriment of their own learning as, “drawings are not analogues of the ideas that they express” (p. 286). This issue is relevant to the current study as it shows how the mediating object (i.e., metaphor, representation or both), which is supposed to be a catalyst for conceptual consolidation, is often a stumbling block without clarification through sufficient dialogue. The following two examples bear this out in terms of how children can be prone to misconceptions when presented with metaphors:

  1. Ackermann’s observation (1991) was in the context of revisiting Piaget's water level experiment. One of her students drew a ribbon around a bottle to depict the water level inside the bottle and then proceeded to treat the drawn ribbon as an actual ribbon rather than as the water level.

  2. Butler (1998) recounted a story about a primary school teacher who boiled a kettle of water in class to show the water changing from a liquid state to vapour. When children were then asked to draw how this could occur in nature (assumedly with lakes and clouds and so on), one child drew a kettle placed out in the wilderness. The misunderstanding only became apparent through the child’s drawing, as the teacher had assumed that the children understood that the kettle was a metaphor.

The examples of the water level ribbon and the kettle in the wilderness show that the problem, in each case, occurred when the representations were taken literally. The line on the water bottle only became a metaphor when it was drawn as a ribbon. The kettle in the wilderness might well have been the result of only presenting the child with a single metaphor. The use of multiple metaphors helps in such instances, as children are orientated to the fact that metaphors are only intended to have particular points of similarity in each instance. The relevance and implication of this for the current study is that in a well-designed explanatory animation, when metaphors and analogies are used, they must be explained and not merely presented, as the intended inferences are usually not self-evident. Furthermore, “the point at which the analogy breaks down must be recognised by students to avoid wrong inferences on, or oversimplification of, the new concepts” (Mason, 1994, p. 289).

Metaphors, images and diagrams are particular types of signs known as icons. Teachers can mediate the understanding of metaphors by discussing icons and the process of iconicity by metaphorically mapping concepts across modes through drawing, sequencing and reflection. Nersessian (2008) noted that this process of mapping need not involve a direct correlation for all of the particulars but, rather, as “sources for constraints” (p. 28). Accordingly, the combination of constraints and direct mapping is then the basis for model-based reasoning (Nersessian, 1984, 2002, 2008, 2012), where models are constructed and critiqued to understand concepts, as “conceptual change involves such reasoning” (2008, p. 16). Nersessian built her theory through dialogue with others such as Clement (1988) who proposed that, when dealing with metaphorical comparisons, both the correlation and constraints of particulars are more than associations, but rather, “transformations” (Nersessian, 2008, p. 211). This transformation is not initially about the content itself, but rather a person’s understanding of that content. Ultimately, the content is also transformed as evidenced in the person’s updated explanatory model.

There would appear to be some commonality between knowledge transformation and transmediation. Transmediation is the process of translating content from one modality to another (Broudy, 1977; Suhor, 1984). Siegel (1995) has suggested that transmediation always involves “an enlargement and expansion of meaning, not a simple substitution of one thing for another” (p. 457). Compared to metaphor, where core elements pertaining to a conceptual topic might be identified and then explained by mapping between the particulars of the target and source domains, transmediation is seen to be a catalyst for conceptual change because the core elements (i.e., variables) are translated from one modality to another. It is this notion of translation that is the hallmark of transmediation rather than the actual modalities that are involved. Vygotsky (1962) made a similar observation about semantics when he noted that thought can change into words and words back into thought, as “word meanings are dynamic rather than static formulations” (p. 124).

The fluidity of these transmediations might appear to imply that conceptual change is an ongoing process that has no final destination point. Yet, as implied in the research question within this study, a final destination might involve the articulation and consolidation of conceptual understanding. However, rather than considering transmediation and conceptual change as infinite versus finite, a more suitable viewpoint might be to theorise conceptual consolidation along a continuum (Wilensky, 1991) between the abstract and the concrete.

 

The abstract and the concrete

In the seminal book Constructionism, Turkle and Papert (1991) proposed a “re-evaluation of the concrete” (pp. 161-192) where they questioned the nature of abstract thinking. This re-evaluation is clearly in reference to Piaget’s stage theory (Piaget & Inhelder, 1969), which suggested that the concrete operational stage (most clearly associated with primary school children) preceded the formal operational stage, which is characterised by abstract reasoning. Piaget saw abstract reasoning as the hallmark of secondary school children coming of age as they grew into adulthood. Piaget’s theory of cognitive development is now widely critiqued but his categories still provide useful vocabulary for discussion.

In his chapter “Abstract Meditations on the Concrete”, Wilensky (1991) took this re-evaluation further by suggesting that everything is abstract until you understand it. The term “concretizing” (Wilensky, 1991, p. 194) was then used as a metaphor for understanding. Difficult concepts gradually set when students are able to represent, manipulate and interact with them. “Concepts that were hopelessly abstract at one time can become concrete” (Wilensky, 1991, p. 198).

An implication of Wilensky's view is that abstract and concrete are not fixed categories to which different types of knowledge intrinsically belong. Instead, these categories could be more accurately described as different ends on a learning continuum across which meaning can be consolidated and represented. The relative movement of ideas from abstract to concrete depends on the comprehension of each concept, by each person, on a case-by-case basis. This notion is not new, as Dewey made the same point over 100 years ago when he commented on the interplay between concrete and abstract concepts as being “relative to the intellectual progress of an individual; what is abstract at one period of growth is concrete at another” (Dewey, 1910/1997, pp. 136-137).

Davydov (1990) also critiqued the notion of a fixed progression from concrete to abstract under the heading “The method of ascent from the abstract to the concrete” (1990, pp. 128-138). Of interest here is not whether abstract and concrete are constituted in a vertical relationship but, rather, that new ideas are metaphorically out there and that they become increasingly concrete as they move towards the learner and become internalised. Similarly, Vygotsky (1987) described the relationship between concrete and abstract as a two-way phenomenon depending on the context. He saw conceptual consolidation as proceeding from the abstract to the concrete whereas theorising was served by the capacity for abstraction:

Concept formation...does not occur through a gradual transition from the concrete to the abstract. The reverse movement, the movement from above to below, from the general to the particular or from the top of the pyramid to its base is as characteristic of this process as is the reverse movement toward the pinnacle of abstract thinking (p. 128).

Having considered some of the modalities and variables involved with conceptual learning, some mention is required about the animation medium itself, as this provided the context for the current study.

 

Learning from viewing explanatory animations

Animation, like multimedia, is not a single mode of representation but a medium or composite mode which is capable of combining imagery with sound and, of course, movement. Explanatory animations (i.e., animations containing a narrated explanation) are part of the visual narrative genre. According to Wright (2010), “it is the integration of three modes in consort - graphic, narrative and embodied - that makes visual narrative a powerful source for children's learning, representational thought and creativity” (p. 20).

The literature on animation in education revolves around the central issue of how effectively the viewing of animations can be used to facilitate learning. Lowe (2001) was convinced that the potential of animation in education is enormous but expressed concerns that poorly designed examples could result in this important medium being dismissed as a mere gimmick. “If animation is poorly designed or applied, it may create more learning problems than it solves” (Lowe, 2001, p. 8). Animation can combine vision and sound with great potential and flexibility, but these elements are often employed as a novelty that can assault our senses (Ellis, 2012). An implication from this is that instructional multimedia should be carefully designed to enhance only relevant elaboration.

The objective of an explanatory animation is for the author to communicate and explain their topic. Hence, the focus must be on the explanation and not simply on the showing. Tversky et al. (2008) have suggested that this emphasis on showing rather than telling might account for the underwhelming impact of explanatory animations as an educational medium. Animation design advocates such as Mayer (2001, 2005, 2009) have sought to remedy this situation by formulating a growing list of design guidelines to improve educational outcomes by improving instruction. However, it must be noted that both the animation critics and the animation advocates assume that professional animators are creating the explanatory animations and that children comprise the audience for such animations.

Furthermore, researchers who have studied the effectiveness of the animation medium, and how this medium might impact learning, have primarily focused on the viewers of animations, rather than their authors. The current study has sought to address this by focusing on the conceptual consolidation of the animation author and how meaning can be mediated through the creation of multimodal texts. Hubscher-Younger and Narayanan (2008) described the idea of learning through the process of making explanatory animations as tantamount to “turning the tables” (p. 235). Clearly, there is a need for more research into how this making process might augment the conceptual growth of the author.

 

Design principles for teaching with animation

One leading exponent of explanatory animation design guidelines is Richard Mayer. As the editor of The Cambridge Handbook of Multimedia Learning (2005), Mayer defined three key terms in his “Introduction to Multimedia Learning” (paraphrased from p. 2):

Multimedia: The presentation of words and pictures(4) together.

Multimedia instruction: Presenting words and pictures with the intention to promote learning.

Multimedia learning: Building mental representations from words and pictures.

Mayer's (2001) multimedia theory is known as the “cognitive theory of multimedia learning” (p. 3). Table 2.2 combines Mayer’s (2001) chapter headings with synopses of his seven multimedia principles.

 

Table 2.2

Mayer's (2001) Multimedia Theory

  1. Multimedia principle: Students learn more from words and pictures than from words alone.

  2. Spatial contiguity principle: Students learn more when corresponding words and pictures are presented close together, rather than far apart, on a page or screen.

  3. Temporal contiguity principle: Students learn more when corresponding words and pictures are presented simultaneously rather than successively.

  4. Coherence principle: Students learn more when extraneous words, pictures, and sounds are excluded.

  5. Modality principle: Students learn more from animation and narration than from animation and on-screen text.

  6. Redundancy principle: Students learn more from animation and narration than from animation, narration and on-screen text.

  7. Individual differences principle: Design effects are stronger for low-knowledge learners than for high-knowledge learners and for high-spatial learners than for low-spatial learners.

Note. Paraphrased from Mayer, R. E. (2001). Multimedia learning. Cambridge University Press.

Mayer's seven principles of multimedia learning were expanded to twelve in the second edition (2009) and there appears to be no end to the number of principles that could be developed. For example, the voice principle states that “People learn better when the narration in multimedia lessons is spoken in a friendly human voice rather than a machine voice” (Mayer, 2009, p. 268). Whilst this new principle was validated through empirical testing, I believe that the explanatory power of Mayer's theory was best expressed in his original (2001) articulation. The new voice principle could have been logically implied from the original coherence principle that aims to eliminate distractions. Hence, for the purposes of this study, the original seven principles will be used.

Mayer has been careful to stress that viewers of an animation must construct their own mental representations of the material that they are viewing. Although the viewer of an animation might appear to be passively watching, Mayer insists that learning can only occur through mental engagement with the words and pictures. This emphasis on constructing mental images, however, should not be equated with constructionism as Mayer’s experiments were measuring participants’ retention and transfer when viewing professionally made animations rather than participants’ building artefacts of any kind.

It is here that the current study deviates sharply from Mayer’s established principles as the participants in the current study were engaged in dialogue with me throughout the project to negotiate meaning. This dialogic approach is consistent with Prain and Tytler’s (2013) finding that when students generate their own representations, teacher and student negotiation of meanings are “evident in verbal, visual, mathematical and gestural representations” (p. 11). Such negotiations of meaning characteristically focus on issues such as those discussed in Lowe’s (2001) guidelines for creating educational animations (see Table 2.3).

 

Table 2.3

Lowe's Animation Creation Guidelines

  1. Analyse the dynamic situation and its events.

  2. Select the graphic entities, relationships and properties.

  3. Determine main events.

  4. Devise a presentation sequence.

  5. Construct a temporal structure.

  6. Cue the critical information.

Note. Paraphrased from Lowe, R. (2001). Beyond "eye-candy": Improving learning with animation. Apple University Consortium. http://auc.uow.edu.au/conf/conf01/downloads/AUC2001_Lowe.pdf

There are some fundamental differences between the principles of Mayer (2001) and the guidelines of Lowe (2001). Mayer focuses on the relationship between modes along with some spatial-temporal principles surrounding these; Lowe focuses more on events, sequences and structures. Lowe’s guidelines focus on practical applications to assist and guide the animation author through the actual stages of animation creation. Mayer’s principles function as a reference tool for making decisions about particular design elements of an animation.

An important assumption that both Mayer and Lowe make is that the animation authors already understand their subject matter. Yet the current study sought to explore whether the process of making explanatory animations itself causes the author to refine and deepen their own understanding of their subject matter. This specific exploration was conceptualised through Vygotsky and Sakharov’s dual stimulation method.

 

Concept formation and the dual stimulation method

Daniels (2012) has defined Vygotsky and Sakharov's dual stimulation method as an experimental approach where people are placed in a situation where “a problem is identified and they are also provided with tools with which to solve the problem or means by which they can construct tools to solve the problem” (p. 822). The first stimulus (i.e., problem) and the second stimulus (i.e., tools) are predetermined and so the point of this method is to understand the effect of the second stimulus on the first. According to Giest (2008), the cornerstone of the dual stimulation method is to investigate “human psychological functions in the process of their development by creating the conditions that mainly cause the development” (p. 103).

Vygotsky’s interest in conceptual change was inspired by one of the pioneers in this field, Narziss Ach (1871-1946). Vygotsky (1987) described Ach’s research as opening up an “entirely new plane” (p. 122) that “created the potential for studying the process of concept formation” (Vygotsky, 1987, p. 122). Vygotsky, however, found flaws in Ach's method and believed that Ach had failed to offer a causal-dynamic explanation for the formation of concepts. Vygotsky (1978) considered the use of tools as mediating devices within the dual stimulation method as being “important because it helps to objectify inner psychological processes” (p. 75, original emphasis). Subsequently, Vygotsky described his rationale for the dual stimulation method as being built on the following six principles (Vygotsky, 1987):

  1. The task is presented fully to the student in the initial moments of the experiment.
  2. The establishment of the task or emergence of the goal is a prerequisite for the development of the process as a whole.
  3. The means are introduced gradually.
  4. The stimulus-sign or word constitutes the variable.
  5. The task is the constant.
  6. Depending on how the word is used, depending on its functional application, we are able to study how the process of concept formation proceeds and develops (p. 128, numbers added).

Using the terminology from Table 2.1, the first stimulus was the dependent variable and the second stimulus was the independent variable. In other words, the second stimulus was introduced to measure or observe the effect on the first variable. The current study deviated slightly from Vygotsky and Sakharov’s use of this method in relation to the timing of the introduction of the second stimulus where the “means are introduced gradually” (i.e., principle three).(5) In the current study, the first stimulus (i.e., the task of explaining a topic) was augmented with the second stimulus (i.e., explanatory animation creation) almost immediately. There were two reasons for this:

  1. Vygotsky’s criterial interest when using the dual stimulation method was always on the second stimulus (Wertsch, 1991).

  2. The children had chosen to be involved in this study because they wanted to make an animation and were eager to get started.

The dual stimulation method, and particularly Vygotsky’s interest in mediation, has inspired others, such as Engeström (1987), to develop more comprehensive models of activity such as Cultural Historical Activity Theory (CHAT). In this study, the dual stimulation method was foundational for posing the research question, and CHAT provided the framework for answering it.

 

Cultural Historical Activity Theory (CHAT)

As the children’s conceptual task involved multifaceted activities, activity itself became an important issue as it provided a context for this research. Engeström conceptualised CHAT with a focus on knowledge creation stating that, “the object of activity is a moving target” (Engeström, 2001, p. 136). Unlike a scientific experiment, where variables could be determined in advance to ensure that the experiment proceeded with clearly defined dependent and independent variables, the children’s task was to determine what these variables actually were.

Activity Theory is based on the work of Lev Vygotsky and his colleagues A. R. Luria and A. N. Leontiev. Vygotsky's founding contribution to CHAT was his understanding of mediation through the use of tools. Kaptelinin (2013) has suggested that in Vygotsky’s psychological framework, perhaps mediation was “the most important concept of all” (p. 206, original emphasis). CHAT was selected as an analytical framework for these data as a dynamic of Engeström’s theory is that, “processes of knowledge creation are intertwined and co-evolve with human practical activities” (Hakkarainen, Palonen, Paavola & Lehtinen, 2004, p. 119).

Figure 2.1 is Engeström’s first-generation CHAT model based on Vygotsky’s notion of tool mediation. The use of a triangle shape is an attempt to capture the system qualities of the activity, also known as an activity system.

Figure 2.1

First-Generation CHAT Model

CHAT 1st generation


Note. From http://www.helsinki.fi/cradle/chat.htm

Used with permission.

Engeström (1987) expanded the first-generation CHAT model into a second-generation model by including Leontiev's (1981a) additional categories of rules, community and division of labour. This new model allowed Engeström to present a framework for studying activity, as the second-generation model was sufficiently sophisticated to contextualise activity as a societal rather than individual phenomenon. Figure 2.2 shows Engeström’s second-generation model and also suggests an outcome for the model in relation to the object.

Figure 2.2

Second-Generation CHAT Model


CHAT 2nd generation


Note. From http://www.helsinki.fi/cradle/chat.htm

Used with permission.

Of additional interest here is that Engeström had substituted ‘instruments’ for ‘mediating artefact’ at the top of the triangle. I would like to address this by presenting my own contextualisation of Engeström’s second-generation model, using categories from the current study, as Figure 2.3.

Figure 2.3

A Revised CHAT Model using Categories from the Current Study

CHAT model using Storyboard categories

Figure 2.3 equates ‘tool’ with ‘computer’, which would appear to be congruent with Engeström’s ‘instrument’ in his second-generation model. Of greater significance is that the ‘object’ in the current study (i.e., a storyboard as the embodiment of the explanatory animation creation process) has become the ‘mediating artefact’. This effectively collapses Engeström’s first-generation CHAT model, as three points have become two (see Figure 2.1). Vygotsky’s notion of tool use was intrinsic to his understanding of mediated action, as a tool only becomes such when it is used. As Dron (2012) has argued, “a tool separated from its use is meaningless: a stick lying in a forest is just a stick” (p. 25). Tools can be tangible or cognitive (Cross, 2010; Vygotsky, 1981) but artefacts can also function as tools if they are used for a purpose.(6) According to Christiansen (1996), an artefact attains its qualities of function when it is integrated into an actual activity. In other words, “to become a tool is to become part of someone's activity” (p. 177).

Henley, Caulfield, Wilson and Wilkinson (2012) attributed the significance of Engeström’s second-generation model as enabling “both individual learning processes and social interaction to be viewed simultaneously” (p. 506) due to the system qualities of the model. The true value of CHAT for the current study is that it provided a sufficiently flexible framework to conceptualise both the activity (i.e., animation creation through discussion, representation and reflection) and the product (i.e., the evolving artefact).

Engeström (2001) evolved his CHAT framework into a third-generation model, which depicts a pair of the second-generation models to represent two interacting activity systems. The theoretical interest in this model (see Figure 2.4) lies in the interaction between the two activity systems, which is shown as a potentially shared object.(7)

Figure 2.4

Third-Generation CHAT Model

CHAT 3rd generation


Note. From http://www.helsinki.fi/cradle/chat.htm

Used with permission.

The main significance of Engeström's third-generation CHAT model for the current study involves defining the unit of analysis as the re-enactment of activity over time that can occur when people collaborate on a project. The unit of analysis is important because, as Newman, Griffin and Cole (1989) have noted, “we can analyse specific aspects of the functional system in any particular investigation, yet the unit of analysis itself retains the critical components for change” (p. 72).

Patchen and Smithenry (2014) have further noted the system dynamics of the CHAT model. They describe the duality of the model as a bifocal lens for understanding activity in context:

Fundamentally, CHAT acts as a “bifocal” analytic lens, keeping an eye on parts, elements, or moments of praxis, while attending to the context in which these moments occur. The strength of this type of lens resides in its capacity to reveal the relationships between the parts and the whole, that is, the interconnected nature of activity in the classroom as it unfolds in real time and real working praxis (p. 607).

A common view within the CHAT literature is that activity itself is the unit of analysis (Hashim & Jones, 2007) and that the purpose of analysis is to understand human actions. Vygotsky (1962) understood a unit as “a product of analysis which, unlike elements, retains all the basic properties of the whole and which cannot be further divided without losing them” (p. 4). According to Blunden (2009) “The rest of Vygotsky’s work testifies to the fact that the shared use of cultural tools of any kind was Vygotsky’s unit of analysis” (p. 10).

Engeström and Miettinen (1999) made a similar claim to Vygotsky by suggesting that “culturally mediated human activity” (p. 9, original emphasis) or even the “activity system” itself (Engeström & Miettinen, 1999, p. 9, original emphasis) are strong candidates for the unit of analysis in Activity Theory. Engeström and Miettinen then proceeded to list the same six categories from the second-generation CHAT model (i.e., tools, subject, object, rules, community and division of labour) as the minimum elements. Blunden (2009) brings Engeström and Miettinen's position into question, asking how any one of these elements can be understood independently of the activity in which it is expressed, and arguing that if “the ultimate reality we are dealing with is activity, then every one of these concepts is derivative of the concept of activity” (p. 16).

Blunden’s (2009) preferred unit of analysis is project collaboration. He emphasised the interrelationships through an analogy where “the atom is the system for particle physics, but unit for molecular physics” (p. 24) and where “project collaboration is the system for psychology, but the unit for social science” (Blunden, 2009, p. 24). Blunden (2009) sees projects as being both the means and the ends of activity as projects have teleological properties due to their objectives:

A project mediates collaborative activity, but it is not an artifact. All activity is artifact- mediated, but people can cooperate in a project by pursuing the common aim even if they are not in direct communication. The use of artifacts remains a part of collaborative projects, but the key mediator is the project itself, however it is represented (p. 18).

The following list summarises these various positions about the unit of analysis and shows an expanding concept of unit according to different perspectives:

  1. The shared use of cultural tools of any kind (Vygotsky, as interpreted by Blunden, 2009).

  2. Culturally mediated human activity, or the activity system itself (Engeström & Miettinen, 1999).

  3. Project collaboration (Blunden, 2009).

These different perspectives, however, need not be seen as an evolution of thought. Rather, the unit of analysis can be whatever is appropriate for the research being conducted. This is in keeping with Thorne’s (2004) view that CHAT is not so much a theory per se but a conceptual framework for understanding human activity.

A final point about CHAT is that the activity system, like all systems, affords various perspectives on that system according to the focus of the researcher. As noted by Patchen and Smithenry (2014), “CHAT directs attention to the generative process of activity - one element at a time, but never just one element alone” (p. 610). The system perspective helps define the scope of a project but the unit perspective allows for the artefact, or parts thereof, to be seen in terms of their generative, mediating role between existing knowledge and new knowledge creation.

According to Dang (2013), “the driving force of change and development in activity systems, is internal contradiction”(8) (p. 48). Understanding these contradictions requires the researcher to have a close proximity to the activity system, indeed, to become part of the activity system. Activity itself is then brought back into focus in the context of collaboration and co-authorship. In the current study, activity and the co-construction of artefacts were conceptualised through Vygotsky’s zone of proximal development.

 

Vygotsky's Zone of Proximal Development

Vygotsky's (1978) Zone of Proximal Development (ZPD) is usually defined as a situation where a learner can extend their learning through interaction with a more capable assistant. In my discussion of the ZPD I commence with Wood, Bruner and Ross's (1976) notion of scaffolding and how it might function as a mediating device.

The notion of scaffolding was introduced as a metaphor for guidance and assistance within the ZPD. As with most metaphors, there can be unintended implications that must be identified (Pimm, 1981). There appear to be at least three issues pertaining to scaffolding which have been abstracted from the metaphor itself rather than from the ZPD:

  1. The first abstraction is that scaffolding is temporary, because scaffolding is only in place during construction. According to Holton and Clarke (2006), “when the building is finished or the renovation complete, the scaffolding is removed. It is not seen in the final product” (p. 129). Yet the capacity for growth through a more capable peer will always be present, as the phrase more capable peer allows the more competent person to be a mentor rather than restricting assistance to a child/adult relationship. Although Vygotsky focused the ZPD on children, “the concept can be elaborated throughout childhood” (Moran, 2010, p. 143). John-Steiner (1985) also shares this view and recounts a story of the composer Stravinsky being mentored as a young adult as an example of the ZPD.

  2. The second abstraction is the notion of “closing the gap” (Shepard, Hammerness, Darling-Hammond & Rust, 2005, p. 279) within the ZPD, which is a common expression used to describe a child's growth or progress in ability. Venn diagrams of the ZPD show the child's ability as the smaller shape within the larger shape that represents their potential for learning. It follows that any growth experienced by the student does not close the gap but rather expands the boundaries.(9)

  3. The third abstraction involves self-scaffolding (Holton & Thomas, 2001). Holton and Clarke's (2006) notion of the epistemic self takes self-scaffolding even further by suggesting that self-scaffolding is “essentially equivalent to metacognition” (p. 128). Self-scaffolding implies that the scaffolding originates with the child. According to Connery, John-Steiner and Marjanovic-Shane (2010), children become increasingly competent learners by independently applying the scaffolds set up by others. This notion also affirms the inherent unity between the act of creating the zone and the zone itself. “The activity of creating the ZPD, of creating the environment for development, is inseparable from the development that occurs” (Connery, John-Steiner, & Marjanovic-Shane, 2010, p. 203).

Development through spontaneous interaction within the ZPD is what Saye and Brush (2002) have termed soft scaffolding, as distinct from prepared interventions which they term hard scaffolding. There is also a mediating role for artefacts within the ZPD (Thompson, 2013). The mediation of learning through artefacts can also be applied to adult interactions, such as when a university lecturer provides written feedback to a doctoral student on a draft of their thesis. When the student edits their thesis under the guidance of their supervisor's notes, they are working within the ZPD through the mediating device of the notes.

The ZPD is a learning situation where students can experience profound growth because their needs have come into sharp focus in ways which they can enact and understand. Vygotsky (1978) wrote:

We propose that an essential feature of learning is that it creates the zone of proximal development; that is, learning awakens a variety of internal developmental processes that are able to operate only when the child is interacting with people in his environment and in cooperation with his peers. Once these processes are internalized, they become part of the child's independent developmental achievement (p. 90).

Such learning dynamics often result in shared learning for both the teacher and student. When both the student and the teacher are learning they could be said to enter a “mutual zone of proximal development” (John-Steiner, 2000, p. 177) as collaborative partners. Sutter (2001) has captured this dynamic with his term mutual performance:

It is like a dance, experienced people are leading, and everyone is taking part. Interpersonal processes of the dance are transformed into psychological processes also for the allegedly more knowledgeable persons; also they will learn and develop (p. 64).

Another way to harness the potential of the ZPD is by personalising learning activities to match the interests and potentials of the child as suggested by Wright (2011):

A serious interest in children’s semiotic dispositions and their sign-making processes and messages can lead to pedagogies, curricula and momentous projects that evolve from and are matched with the potentials and abilities of the child (p. 172).

Newman, Griffin and Cole (1989) speak of a construction zone within the ZPD where, “children’s actions get interpreted within the system being constructed with the teacher” (p. 63). Sutter (2001) credits Newman, Griffin and Cole’s statement as solving a learning paradox about conceptual structures. The paradox is about how a child’s simple psychological structure can ever get transformed into a more complex structure. The resolution of the paradox is that there is good reason to believe that “psychology does not reside only within a person’s skull, it take[s] place between people too, and between people and their artifacts” (Sutter, 2001, p. 43).

The current study used storyboarding and explanatory animation creation as a context for the ZPD. It was hoped that the creation of the ZPD might enhance the process of conceptual consolidation for the young animators through their active participation in this multifaceted task. The ZPD is often depicted as a Venn diagram where the helper’s knowledge is the outer shape encompassing the child’s knowledge as the smaller, inner shape. Venn diagrams are useful for making such comparisons but the following section discusses schematic diagrams and their affordances as conceptual metaphors for understanding specific topics.

 

Schematic diagrams as conceptual metaphors

A recurring theme throughout much of the conceptual change literature (from both cognitive science and educational psychology) is that the conceptual change process can be characterised as an act of classification using perceptible attributes and features (Chi, 2008; Virkkunen & Ristimäki, 2012). This emphasis on classification is analogous to the use of Venn diagrams. Tversky, Heiser, Mackenzie, Lozano and Morrison (2008, pp. 280-281), however, suggested a more useful way to understand concepts through the metaphor of a schematic diagram. They used a map of a train network as a good example of how only the salient points are included, thus providing a conceptual focus. Each train stop on the diagram is represented as equidistant from its neighbours, and the lines are all drawn as if they were straight, because minor deviations in distance and position are not important to the schema. “Concepts are given meaning according to the image schematic structures with which they are associated” (Andreou, 2013, p. 16). Hence, in the train network example, identifying the relevant variables and the relationships between them constitutes a system perspective.

Conceptually (in terms of content), schematic diagrams are deliberately selective rather than exhaustive because they are models and function according to their ability to convey essential information. Multimodally (as representations), schematic diagrams are the result of representational choices to include only essential information, as meaning is determined and articulated at the design stage and then conveyed accordingly “to make abstract or complex information graphically communicative” (Andreou, 2013, p. 12). In other words, an affordance of schematic diagrams is that both the conceptual content and the representation of that content are subordinated to the intent of providing clear and effective communication.

The methodological implications from the schematic diagram metaphor would appear to be immediately applicable to the process of explanatory animation creation as a mandate to keep it simple. Prioritising communication also resonates with the guideline, commonly attributed to Einstein, to “make everything as simple as possible, but not simpler”. For primary school children attempting to understand and explain novel topics using unfamiliar animation techniques, this is good news.

 

Summary

Conceptual consolidation was a key theme in this literature review. Although the emphasis of the conceptual change literature was on classification, the importance of mental models established some common ground with the current study as the emphasis shifted towards the creation of imagery. A continuum between concrete and abstract was also discussed, where new ideas were seen to begin as abstract notions that become increasingly concrete as they are understood.

There was also a consideration of the system qualities of concepts and how the essence of a concept might exist within the core relationships between the constitutive variables. Likewise, it was also suggested that meaning is afforded greater clarity by excluding non-essential and distracting information. Both of these guidelines require the communicator to become a pedagogical decision maker and thus determine the essential nature of their topic, much like the creation of a schematic diagram.

The multimodality literature provided the vocabulary for the construction of artefacts that find meaning in contexts beyond the written word. An investigation of multimodal affordances led to the conclusion that storyboards are semiotic tools for cross-modal cognition using tangible resources. This conclusion has support from the literature, but this support was inferred from other practices involving student-constructed representations for specific conceptual and pedagogical tasks (e.g., information modelling in secondary-school science curricula).

Storyboarding and explanatory animation creation is clearly a multifaceted task due to the inherent conceptual and technical design issues. It would appear that student-generated explanatory animation creation, however, is not a common task, particularly in primary school settings and certainly not as represented in the literature. The literature about such a task, or lack thereof, is a blind spot that can be accounted for by the assumption, shared by both the animation design advocates and the animation effectiveness sceptics, that the explanatory animation creation task is the exclusive domain of professional animators. In the current study, Vygotsky’s (1978) ZPD was seen as a way to extend the children’s ability to participate in the explanatory animation task, but also as a way to afford me, as the researcher, additional insights into the children’s activity by entering this zone with them.

Lowe and Schnotz (2014) have noted that “animation is distinct from video in that it is not the result of merely capturing images of the external world - rather, it is the product of deliberate construction processes such as drawing” (p. 515). These construction processes were seen as continuing in the tradition of constructionism due to their conceptual content and use of computers as mediating tools. Vygotsky and Sakharov’s dual stimulation method provided a rationale to link the anticipated conceptual gains of this project to the animation task itself. CHAT provided both the vocabulary and perspective to understand conceptual change and artefact creation as recursive elements within a collaborative project environment. Chapter 3 will now address the methodological issues that shaped the explanatory animation creation process throughout the evolution of this case study.

----------------------------------------------------------------------

(1) Interestingly, as I’m finishing this section in August 2014, the 2014 Constructionism conference is being held in Vienna. According to the program, many of the sessions are about the use of model-based reasoning as conceptual models. This theme is developed later in this literature review under “Explanatory models”.

(2) Marr (1982) also introduced the term 2½ dimensions to describe instances where two-dimensional surfaces are rendered in such a way as to assist the perception of depth.

(3) Since Kleinman (1980) however, the most common use of the term explanatory model is found in healthcare where a patient’s understanding of their condition is often compared and contrasted with a physician’s explanation. Kleinman defined explanatory models as “the notions about an episode of sickness and its treatment that are employed by all those engaged in the clinical process” (Kleinman, 1980, p. 105).

(4) Mayer’s definition of ‘multimedia’ implies that he restricted his model to these two modes of words and pictures but he also included audio (as both sound and narration), movement, colour, proximity and temporality as shown in Table 2.2.

(5) Vygotsky’s (1987) third principle was illustrated through an example with Leontiev and Luria (reported by Radzikhovskii, 1979; Wertsch, 1991) where patients with Parkinson’s disease were asked to walk across a room. When pieces of paper were placed on the floor as mediating devices to approximate footsteps, the patients were able to attempt their passage more easily. Delaying the introduction of the paper was necessary to show that the patients were unable (or reluctant) to attempt the journey without the introduction of the second stimulus.

(6) The notion of an artefact functioning as a tool is discussed further in relation to data from the current study in Chapters 4, 5 and 6.

(7) One possible interpretation and application of the potentially shared object being time will be discussed in Chapter 5 in relation to the current study.

(8) Internal contradictions are also an important part of the change laboratory technique developed by Engeström, Virkkunen, Helle, Pihlaja and Poikela (1996). The current study, however, did not utilise the change laboratory technique as that technique is used primarily to effect change at the institutional or organisational level.

(9) The idea of expanding the boundaries within the ZPD is discussed further in Chapter 4 and represented diagrammatically as Figures 4.1 and 4.2.
