They [the children] became software designers, and were representing knowledge, building models, and teaching concepts on their computer screens. They were thinking about their own thinking and other people's thinking - simultaneously - to facilitate their own learning (Harel & Papert, 1991, p. 45).
The data collection phase of the current study was conducted twenty years after Harel and Papert’s description of children using computers to build explanatory models as teaching artefacts. However, it should be noted that, although the technological environment has changed dramatically in the past twenty years, the actual software used in the current study has been around since Microsoft PowerPoint was released in 1990. This study was clearly not about technology but about learning. However, technology enabled the efficient construction of imagery and models to the extent that animation creation is now primarily a digital enterprise, whether it includes an explanatory role or not. Constructionism is also not about technology but about learning.
The current study was about conceptual consolidation and how this might be achieved and enhanced through the multifaceted process of explanatory animation creation. This chapter begins with three complementary theories of learning which deal specifically with conceptual consolidation. These three theories of conceptual consolidation are claims to new knowledge. The claims are tangential to the research question but are relevant as unintended outcomes of this research.
Issues more directly related to the research question are then addressed with regard to flexible models and the learning affordances of the explanatory animation artefacts. Causal links are shown to exist between explanatory animation creation and conceptual consolidation. Although many qualitative researchers generally avoid claims of causality (see Denzin & Lincoln, 2011), Maxwell (2012) believes that causation can be regarded as a “generative” process that conceptualises “causal explanation as fundamentally a matter of identifying the actual processes that resulted in a specific outcome in a particular context” (p. 656).
Conceptual change is now widely accepted as a process (Merenluoto & Lehtinen, 2004), but generalisations about the conceptual change process cannot be applied to this study because generalisations “don’t apply to particulars” (Lincoln & Guba, 1985, p. 110). The following three theories of conceptual consolidation are therefore deliberately situated within the current study, for consideration in future studies where these constructs might be investigated as cognitive tools for conceptual consolidation.
Three complementary theories of conceptual consolidation
Three different aspects of conceptual consolidation were explored in this study. The first, entitled concepts as systems and variables, offers a definition and discussion of the meaning of ‘concept’. The second, entitled processes of conceptual consolidation, summarises the evidence from each of the participants in the study. The final section, entitled paraphrase and vector-based learning, suggests a new analogy for recognising evidence of conceptual consolidation.
Because these three theories describe different phases of conceptual consolidation they are both complementary and chronological. According to Smith (1989), any claims about a theory of concepts need to specify “just what kind of concepts the theory is intended to apply to” (p. 60, original emphasis). Accordingly, the three theories featured here are applicable to concepts that require explanation, which are also known as explanatory concepts (Murphy, 2000; Thomas, 1977; Zif, 1983). These three theories are described below.
1. Concepts as systems and variables
In the Literature review, in which this same topic was addressed, I presented literature that described the system qualities of concepts as a collective of variables. The writers cited, such as Dewey (1910) and Davydov (1990), primarily described the system qualities of concepts. It should also be noted that a system could not exist without components or variables, as a component is, by definition, a part of a system. The word variable was therefore the preferred term because it implies change and affords further insight when seeking to classify a variable as dependent, independent, fixed and so on.
Participants in the current study were provided with the definition that a concept is a “system containing variables”. After reading Michael Cole’s (2011) comment that “theories have a way of masquerading as definitions” (p. 49), I realised that I had inadvertently presented the children with my own theory about what a concept is. Nonetheless, definitions help establish the basis upon which theory may be elaborated. This definition functioned as a cognitive tool because it guided the children to look at their conceptual topics in a very specific way, where they consciously looked for connections between variables. The notion of a theory having practical application was proposed by the action research pioneer Kurt Lewin, who said, “there is nothing more practical than a good theory” (1952, p. 169). The “system containing variables” definition became a lens for the children to determine the system qualities, or lack thereof, of their conceptual topics.
As described in Chapter 4, all but one child identified the relevant variables required to explain their topic. In Harriet’s case we did not realise this until too late in the project, so rather than using the labels ‘choice of metaphor’ and ‘implementation of metaphor’, Harriet’s summary table used other terms, namely ‘choice of imagery’ and ‘enhanced use of text’, as seen in Table 6.1.
Excerpt from Harriet’s summary table
Choice of imagery
Enhanced use of text
However, all other participants in the study worked at a metaphoric level, which assisted them to develop correct terminology for the variables that were relevant to their animation. A key issue is that the visual mode helped inform and enrich the children’s understanding of these terms and how they functioned as variables. This was because the storyboarding process was a multimodal task that encouraged synthesis leading to conceptual consolidation. Table 6.2 shows how the other children were able to identify relevant variables, which were then used to construct suitable metaphors.
Excerpts of relevant variables from the children’s summary tables
Choice of metaphor
Implementation of metaphor
2. Processes of conceptual consolidation
The data that the children and I generated in this study displayed a striking commonality about the nature of conceptual consolidation that I had not anticipated. From my prior research (Jacobs, 2007) I was confident that the explanatory animation creation task would be beneficial for the animation author’s own understanding of their chosen topic. Extending upon this earlier work, the research question sought to uncover the specifics of how conceptual understanding is consolidated through the explanatory animation process.
The conceptual consolidation rubric (Table 3.2) proved to be more useful than originally anticipated in describing the children’s conceptual change over time. This rubric was revisited each week, and what emerged was a consistent pattern. Progress across the rubric started with the same order of categories for all eight participants:
Each child’s growth was first evident in their use of correct terminology.
The discussion and adoption of correct terminology eventually led to the articulation of relevant variables.
Ultimately, this led to an understanding of the relationships between the variables.
This pattern is best described as a synopsis as it was clearly evident in the data. It must be noted, however, that each child’s progress was also consistent with Vygotsky’s (1978) notion of development having spiral properties where progress occurred “through the same point at each revolution, while advancing to a higher level” (p. 56). In other words, each child’s spiral was in the order of correct terminology, identifying relevant variables and then understanding the relationship between the variables, but these three criteria were not stage-gates in, and of, themselves.
As each row in the conceptual consolidation rubric contained general categories, I conceptualised that this pattern warranted restatement as a theory of conceptual consolidation. As Hewitt (2007) noted, “a theory is true to the extent that and so long as it continues to make sense of the data” (p. 241). This theory can be summarised as follows:
Conceptual consolidation is a complex process that can be simplified and managed by using the definition of a concept being a “system containing variables”.
Initial research for each topic begins by first identifying, and then using, correct terminology.
An eventual outcome of investigating correct terminology is the identification of relevant variables.
The pinnacle of conceptual consolidation involves understanding the dynamic relationships that exist between the different variables.
Conceptual consolidation itself must be understood on a case-by-case basis because, regardless of any similarities, every concept is different.
3. Paraphrase and vector-based learning
The analogy of vector-based learning is the last in these three sequential theories. It builds upon the above statement that the pinnacle of conceptual consolidation involves understanding the dynamic relationships that exist between the different variables. The vector-based learning analogy shifts the theoretical focus onto evidence for how conceptual consolidation might be demonstrated.
In Chapter 2, conceptual consolidation was theorised as moving between the abstract and the concrete. The question “What is conceptual consolidation?” could then be rephrased as “How do you know when conceptual understanding has become concrete?” My answer to this is that a person who has a consolidated understanding of a topic has obtained sufficient perspective on that topic that it could be represented and described in multiple ways (i.e., paraphrased).
Teachers often paraphrase their content, presenting the same information from a different angle to offer another perspective. They can do this because they have a consolidated understanding of their topic: they can look at it in different ways, and personalise or contextualise the essential elements in meaningful and relevant ways. As children develop, they too learn how to paraphrase their understanding of concepts. Bruner (1966) saw early childhood as a critical period when the opportunity to paraphrase verbally with adults was a determining factor in successful learning in later life.
An analogy for how ideas can be paraphrased involves vector-based graphics as contrasted with bitmap graphics. Bitmap graphics refer to a screen being mapped out as a grid of pixels, or a page being mapped out as a grid of ink dots. Bitmap image files contain the information about where the dots or pixels go and which colours they are. Vector-based graphics instead contain geometric information about how and where to position the pixels or dots; they have the distinct advantage of being scalable without any loss of clarity or detail, thus avoiding distortion or pixelation.
Applying this graphics-oriented analogy to the verbal language system, a paraphrase uses different words or a different order of words to convey the same information. The actual pixels in a bitmap image can be likened to facts as discrete units of information. By contrast, a vector image contains this same information but as a geometric shape.
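The contrast between the two graphic formats can be sketched in a few lines of code. This is purely an illustrative toy, not drawn from the study: the function names and the nearest-neighbour scaling are my own assumptions about the simplest possible demonstration. A vector description scales by transforming its coordinates exactly, while a bitmap can only be enlarged by duplicating existing pixels, which is what produces visible blockiness.

```python
# Toy contrast between vector and bitmap scaling (illustrative only).

def scale_vector(points, factor):
    """A vector image stores geometry; scaling transforms coordinates exactly."""
    return [(x * factor, y * factor) for (x, y) in points]

def scale_bitmap(pixels, factor):
    """A bitmap stores a pixel grid; scaling can only duplicate existing
    pixels (nearest-neighbour), producing the familiar 'staircase' effect."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in pixels for _ in range(factor)]

# A diagonal line described as geometry (two endpoints)...
line = [(0, 0), (2, 2)]
print(scale_vector(line, 10))  # still a perfectly sharp line: [(0, 0), (20, 20)]

# ...versus the same line rasterised into a 3x3 grid of pixels.
grid = [[1, 0, 0],
        [0, 1, 0],
        [0, 0, 1]]
big = scale_bitmap(grid, 2)    # each pixel becomes a 2x2 block of pixels
```

No matter how large the scaling factor, the vector version remains a description of the same sharp line, whereas the bitmap version only ever contains enlarged copies of its original pixels.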
Yet another analogy to reinforce the utility of paraphrasing is solfège, the musical convention for describing melodies (as depicted in Maria’s portrait). A melody is constructed from its constituent notes. Solfège allows a melody to be transposed into another key, as it depicts the relationships between the notes rather than the particular notes in a single key. Solfège, like vector graphics, retains the shape, which I am suggesting is more important than the particulars. Of course, the shape is constructed through the particulars, but the point here is that it is recognised holistically as a shape. The vector-based learning analogy would then suggest that concepts have a metaphorical shape depending on the key variables and how they relate to one another. This also brings the whole concrete/abstract notion full circle, as consolidated understandings can be abstracted with full clarity, much like vector graphics.
A map analogy for giving directions may further reinforce this notion of conceptual consolidation. A person with only limited knowledge of an area, when asked for directions, might be able to offer just a single route. By contrast, a local resident who is more familiar with the area could suggest several different routes, and even offer alternatives such as a scenic route or the most direct and quickest route. Again, this analogy can be likened to a paraphrase, where information can readily be recast into different contexts.
Paraphrasing is also a creative act as the paraphrase itself is not predetermined. Vygotsky (1962) noted that a verbal paraphrase involves movement from word to thought, and then back from thought to different words as, “word meanings are dynamic rather than static formations” (p. 124). This link between transmediation and paraphrase is important, because, like the ability to paraphrase, only essential features need make the transition between modes.
Transmediation as a catalyst for understanding
The creation of multimodal texts in general, and explanatory animations in particular, provides a fruitful context for conceptual consolidation. This is because the resolution of the pedagogical task is a reflexive process that can help make abstract ideas concrete through the core processes of representation and transmediation across various modes. Figure 6.1 depicts this process as a catalyst for understanding.
Figure 6.1. Transmediation as a catalyst for understanding.
The two-way arrow in Figure 6.1 is based on Vygotsky’s idea that abstraction and abstract thinking arise from understanding. A person’s understanding of a topic can then be linked to their ability to transmediate ideas as a cross-modal paraphrase. In the current study, the essential features that constituted the particulars of the paraphrase were the variables pertaining to each topic.
The focus of this conclusion chapter will now shift to the explanatory animation task and the use of storyboards.
Explanatory animation creation as a learning tool
Mental models are usually hidden. But animations that have been carefully designed to communicate conceptual topics provide digital depictions of the organisational structure of the author’s logic. Fjeld et al. (2002) were amongst the first to note that mental models play an important diagnostic role, informing and improving interactions within the ZPD:
This process of turning mental activity into an object or objectification is what Leontiev called exteriorization. While it is obvious that for any individual the moment of exteriorization is an important step in his or her creative design activity, it is perhaps less obvious that this is a crucial step in making ideas accessible to others (p. 154, original emphasis).
The current study encouraged the participants to develop imagery and voice-over scripts simultaneously by creating storyboards. The representational imagery that the children constructed in their storyboards provided a digital window into their conceptual consolidation and enabled me to diagnose where students might need assistance or clarification. In this sense, the explanatory animations were not only multimodal texts but also diagnostic tools. As shown throughout the numerous examples of misunderstandings from the children’s portraits, the most insightful data generated throughout this project related to areas where the children's knowledge about their concepts was incomplete or incorrect.
The learning affordances of the explanatory animation creation task can be evidenced in any modality. It is therefore not my intention to create a hierarchy amongst the various modes, as learning was evident across these modes at various times. An example involving sequence was Maria’s “Sol Feige” grid, which surfaced her misconception about the order of musical notes, as constructing the grid required Maria to put the notes into the correct order. Ryan’s Pong metaphor in his “Stadium design” animation surfaced the issue of movement, as he needed to differentiate wind from the transfer of sound energy. There were also examples where the terminology in the voice-over scripts provided fruitful contexts for discussion, such as Sunny’s p-n junction, where we devised a graphical metaphor to emphasise the notion of a gap by stretching the words ‘band gap energy’ in his “Solar cell efficiency” animation. Neil’s “Satellites” animation also utilised on-screen text for his transponder portmanteau, where the words ‘transmitter’ and ‘responder’ were morphed together. It should be noted that these four examples were all visual. Figure 6.2 uses the CHAT triangle as a prism to identify some of the modalities that were evident within the children’s storyboards.
Figure 6.2. Multimodality within the CHAT triangle.
The process of explanatory animation creation yielded a context for conceptual consolidation on a case-by-case basis. Much of the actual growth came from discussions with me as the teacher. As noted by Ford and Forman (2006), the teacher’s role in instruction is central because “without the constraints and guidance that the teacher provides, students would not encounter focused workable issues or engage experimental practices to address them” (p. 144).
The co-construction of meaning through my own participation in the children’s work afforded me a privileged insider position that was conceptualised as mutual zones of proximal development. This enhanced the diagnostic potential of our collaboration, as I was literally involved in each conceptual scenario. It also afforded me both insight and clarity to witness the children’s thinking in progress, and confirm that all eight of the participants developed deeper understandings of their chosen topics.
The creation of each explanatory animation required the children to represent and re-represent their topic through the process of transmediation. This transmediation process both concretised (Wilensky, 1991) the children's mental models and made them visible for critique and discussion. The children also learnt to look at their topics from several different angles as they sought to paraphrase their topics through the selection and critique of suitable metaphors.
Explanatory animation creation was enacted in a process of dual stimulation to facilitate conceptual consolidation. The explanatory animation creation task was initially the object of activity but the project itself became the unit of analysis. In the final analysis of the whole project, the task of explanatory animation creation functioned as a transmediating tool for cross-modal cognition.
Digital storyboards as flexible models
The dual stimulation method was used in the current study to harness the intrinsic unity between explaining a topic (as an initial stimulus) and using the explanatory animation creation process as a mediating tool to do so (as the second stimulus). Each child’s storyboard was shown to be both a mediating multimodal artefact and a cognitive tool at various times. This would suggest that tool and artefact can merge into one but it appears that convergence is a more accurate way to describe this dynamic. As Garton (1993) has noted, “representation and concept formation are regarded as fundamental to mental organisation and systemisation” (p. 264). The result of applying the dual stimulation method was that the explanatory animation creation task provided a multimodal chronology of the children’s conceptual consolidation.
When the author of an explanatory animation is attempting to represent their conceptual understanding, their mental models are given tangible expression through the words, images, sounds and movements that are used to construct the animation. This does not automatically lead to conceptual consolidation but it does provide a privileged level of insight into what the author of an animation is actually thinking. In the current study, explanatory animation creation and conceptual consolidation were seen as complementary processes with the evolving storyboards functioning as semiotic mediating tools.
The evolving storyboards became mediating tools between each child and their conceptual understanding, but these same storyboards also provided a mediating role between each child and me. My interactions with the children were not unlike regular classroom interactions, where I was available for assistance as required. In this information era, Drotner (2008) sees the teacher's role as more important than ever and she encourages educators to embrace the transition from information authority into knowledge facilitator. Knowing when to intervene is an issue that confronts teachers on a daily basis. Intervention, for researchers however, is a continuum on which the researcher must tread consciously. Wells (2007) discussed this issue as one common to both teachers and researchers where a “Vygotskian interpretation of the dual roles of the teacher as planner of appropriately challenging activities and provider of assistance in students’ zones of proximal development” are merged (p. 266).
Explanatory animation creation is primarily a pedagogical design process. As such, incremental refinements are anticipated throughout the animation creation process and are easily implemented due to the digital nature of the unfolding representations. Sullivan (2005) captures this dynamic well with his term create to critique. The intended contrast here goes beyond a dichotomy between process (storyboard technique) and product (completed animation). The actual storyboards, which the participants in the current study constructed, were more like the prototypes that Resnick (2007) has referred to:
We never expect to get things right on the first try. We are constantly critiquing, adjusting, modifying, revising. The ability to develop rapid prototypes is critically important in this process. We find that storyboards are not enough; we want functioning prototypes (p. 5).
The terms “functioning prototype” (Resnick, 2007), “flexible models” (Clement, 2008) and “mental models” (Mayer, 1993; Rapp, 2007) are not synonymous, but, in this current study, they converged at various times. Unlike most art and design, where gradual transformations produce the finished artefact, the component parts of the children's animations were physically distinct from the completed work, as the PowerPoint files were routinely duplicated as new date-based files. As such, they provided a digital trail leading to the finished artefacts.
Jonassen (2008) has noted the link between conceptual models and learning, and how this connection “provides rich research opportunities in the effects of knowledge representation on conceptual change” (p. 690). The current study seized this opportunity by capturing tangible mental models of the authors' thinking, as animation creation has evolved from a hand-drawn art form into a primarily digital process. Digital art forms have one important affordance over tangible mediums such as paper: they can be easily edited and reproduced, which enhances creativity because revisions are easy to make. The digital domain also facilitated experimentation, as changes were reversible. My claim here is that explanatory animations and storyboards, at any stage of development or completion, are also “flexible models” (Clement, 2008, p. 417).
Could similar learning outcomes have been achieved by the participants in the current study if they had been using a medium other than animation? It would seem not. The explanatory animation creation process provided two additional benefits over more common or traditional tasks, such as designing a PowerPoint presentation:
The process was sufficiently engaging and complex to sustain the project for the whole seventeen weeks.
The insight into the children's mental models, which was so readily apparent through this process, provided opportunities for researcher input and diagnosis because the children needed and sought assistance due to the complexity of their task.
I would now like to change the tone of this thesis by presenting some final thoughts on this research as a coda (a musical term for a concluding passage).
This final section is presented as three encores to continue the musical analogy. It is common for a concert to include at least one encore, for which an artist has reserved one or more of their best songs to ensure that the audience is satisfied. Indeed, having reserved such material, an artist would be disappointed if the audience did not call them back out. This first encore is a summary of what I did, as depicted in Figure 6.3, which presents an application of CHAT as a synthesis model.
The second encore is about things that I would have done differently if I had the chance. I have structured this second encore so that it might prove useful for anyone who wishes to build on this research in the future.
The third and final encore involves a discussion around digital artefacts including digital theses. These issues could have safely been omitted without compromising the current study and so this third encore is a risky enterprise. The performers returning to the stage for a third time might be further pushing their collective luck if they present new, unfamiliar material (i.e., issues that haven’t been raised in this thesis thus far). In spite of this risk, the affordance of digital theses might also prove useful for future researchers who are working with multimodal data sources. Let us begin with the first encore, which involves a synthesis of the children’s learning as conceptualised and enacted through an expanded CHAT model.
Application of CHAT as a synthesis model
The semiotic tool of explanatory animation creation helped the children conceptualise and communicate their understanding in powerful and complex ways that might not have been possible in more traditional forms of meaning-making. This is due to the ways that the modes enriched and informed understanding as that which was abstract or concrete in one mode might have been different in another mode. ‘Seeing’ through multiple ‘lenses’ or modes of understanding took the children (and me) to a higher plane. This is what Vygotsky (1978) referred to as children standing “a head taller” (p. 102). In other words, their understanding became deeper and more complex and they could demonstrate their sophisticated conceptual understanding at a higher level than one might expect for their developmental ‘level’.
Figure 6.3 is a synthesis model containing the main ideas from Figures 5.2, 5.3 and 6.2. It brings these three CHAT components together into a single model in order to revisit three key aspects of this thesis.
Figure 6.3. Application of CHAT as a synthesis model.
Figure 6.3 seeks to present the multifaceted nature of the children’s explanatory animation task as having an intrinsic unity where one aspect could not be separated from the other aspects without destroying the unified meaning. Figure 6.3 also takes the dynamics of explanatory animation creation ‘beyond Mayer’ as it explicitly links design with learning whilst still giving full consideration to activity through the division of labour. This is largely because the focus of this research enabled the child participants to become the creators of the explanatory animations rather than the viewers. It was also due to the way in which they were making animations for the sake of their own learning.
Implications for future research
I believe that the methodology devised in the current study could be implemented as is to uncover additional perspectives on multimodal learning, as it provided sufficient opportunities for deep and prolonged engagement with the participants. I do, however, have a suggestion for future researchers that involves amending the methodology so that a group of participants creates a single animation, rather than each participant making their own. The utility of the explanatory animation creation process could then be applied as a group-work project to research collaboration and to further understand the role of community within the CHAT triangle.
My suggestion for how to approach this would be to have each participant take responsibility for animating one scene. For example, a group of six participants could animate their own scene and then record their own director’s commentary. (Most of the children in the current study required around six sentences in their voice-over script to adequately convey the explanation of their topic). This would result in seven artefacts for that group (i.e., collective explanatory animation plus six different directors’ commentaries). Of course, a smaller group of three could have the participants animate two scenes each. The decision-making process around the delegation of each scene could provide rich research opportunities in addition to the learning affordances already identified for the explanatory animation creation process.
A single teacher or researcher could conceivably run an entire class this way, with the children arranged into animation groups of, say, five groups of five. A small group of researchers could also structure a classroom in this manner.
It also might be interesting to use screen capture technology (such as Camtasia) throughout the animation sessions. The researcher would not necessarily need to watch all of this footage but, in instances where children experienced breakthroughs, this footage would then be available to provide more insight into each child’s creative process as it unfolded. This last suggestion is also a reminder that there are a whole range of technological enhancements that could be used to further augment multimodal research using digital artefacts.
Affordances of digital artefacts
I am writing these final words from the beautiful Fondren Library at Rice University in Houston, Texas. From my elevated table on the second floor I can see down to the thousands of books that line the shelves below. Of the hundreds of students in the library today, I haven’t seen any of them go anywhere near these books, as everyone is on a computer. Is this another ‘books are dead’ observation? Certainly not! Many of the students are, of course, accessing these same books and other media from the convenience of their tables. Writing books was never about paper but, rather, about the ideas and stories contained therein.
The current study was a multimodal undertaking. Brooks (2002) noted the affordances of directly embedding digital data in a hypermedia document (i.e., web page):
By allowing the viewers access to some of the source data, they are offered the possibility of closely examining original data. Such transparency and reflexivity establishes a strong foundation upon which an informed discussion in relation to both the artefacts and their analyses can take place.
Brooks’s quote inadvertently raises citation issues for electronic resources. The quote is from 2002, but no page numbers were available amongst the various sections of her hybrid thesis (a hybrid thesis is one where a hard-copy manuscript is supplemented with a disc and/or a web presence). The American Psychological Association’s guidelines for referencing websites (APA 6th edition) recommend including the paragraph number where page numbers are not available. Citation issues might improve when a digital thesis author publishes their work on a dedicated website using logical naming conventions such as www.nameofthesis/chaptername. Embedding paragraph numbers might also be helpful, saving readers the trouble of counting them individually. Including these numbers could be likened to the verse numbers that have been added to biblical texts to assist referencing.
Beyond these somewhat mechanical citation issues is the issue of how to navigate through a hypertext document. Dicks, Mason, Coffey and Atkinson (2005) have expressed concerns about people getting lost in a seemingly endless chain of links:
Hypertext opens up the text through multiple linking, allowing the reader the opportunity to generate unpredictable reading paths. Given this, how does an author, especially one dealing with academic argumentation, simultaneously orientate a reader towards intended readings as well as allow a reader to discover his or her own pathways through the hypertext? (p. 64).
Definitions of hypertext as being ‘non-linear’ are problematic, as readers can still be encouraged to follow the same linear path in spite of the options they are presented with. Of course, books have always had this affordance too, as people are free to turn to any page; indeed, some books, such as textbooks, are specifically designed to encourage this.
Perhaps we have not adequately considered the genre with which multimodality might be presented. In his book Multimodality and genre: A foundation for analysis, Bateman (2008) chose to focus on static artefacts, also known as “multimodal documents” (p. 9). Animation was excluded from Bateman’s book because of his assumption that there was already sufficient complexity in static artefacts and that we must “learn to walk before we can run” (Bateman, 2008, p. 9). In other words, the added variable of movement was considered to be problematic for interpreting and analysing multimodal documents. Although Bateman is correct in asserting that our understanding of multimodality is informed by an understanding of the various modes, I would suggest that animation does not belong in the ‘too hard’ basket, or we might never make this transition.
Dicks et al. (2005) also noted that the publication of a multimodal thesis is an issue that universities have not adequately considered. Accordingly, they suggest that “perhaps the biggest obstacle to academic hypermedia authoring is likely to be academic institutions” (p. 67). I would like to propose that, until such time as purely digital theses are commonplace, the current hybrid format should be reversed: print options should be linked as PDFs from the web pages, rather than hard copies being augmented by web pages and/or discs.
The current hybrid approach is a compromise for both the reader and the author. For the reader it is inconvenient not to have the relevant media embedded at the point of discussion. For the author, the hybrid approach fails to recognise that the navigational structure of a digital thesis can also function as a map of intention as the architecture of a hypermedia framework reveals and even reinforces the author's thinking (Chen & Dwyer, 2003; Jonassen, 1988; Kearsley, 1988). Embedding PDF files would then be a step in the right direction as the multimodal affordances of the digital medium would not be restricted whilst still allowing access to hard copy text. As Moss (2008) noted:
How do we take up the potential of new data sources and their analytic representations? These issues are critical to understanding how we might craft original questions but also how we might develop the textual forms of ‘writing up’ research (p. 232).
My rationale for wanting to construct a digital thesis using hypermedia is to acknowledge and respect the digital nature of the data generated by the eight Storyboard participants, giving full recognition to the richness of multimodality. In no way am I suggesting that videos or images should replace written text, as written text can actually enhance the functionality of other media such as audio and video. For example, it would have been easier to have left the spoken words from the children's reflections as audio files but, instead, they were also transcribed for closer analysis.
The current study was a purely digital thesis (i.e., web page) from its conception in 2008 up until 2013. During this time I sought to tread cautiously within the hypertext environment by retaining formal academic structures using traditional chapter headings. In 2013 it became clear that it would be easier for my supervisors to provide feedback if they had written text in chapters rather than a web page. Transferring the web page sections into a single, linear Word document in 2013, however, had some affordances of its own: it was only then that I was able to get a sense of where content was unintentionally repeated and what the overall structure of the argument was. If I were to start again in 2015, I would plan to enable online comments using a wiki (i.e., a web application that allows people to add, modify or delete content in collaboration with others), with access restricted by password protection during the various draft stages.
Perhaps all of these issues are best understood in terms of using the most appropriate medium for the content. I will conclude this discussion by reiterating some examples of digital affordances from the current study. In other words, what elements within this thesis are unique to the digital medium?
The children’s explanatory animation creation task was intrinsically multimodal as they were simultaneously dealing with various modes.
The inclusion of all of the children’s data through the links to their PowerPoint files in the Weekly reviews would not have been possible on paper due to the restriction of space.
Likewise, digital appendices such as the Researcher’s reflexive journal would not have been practical to include as a paper appendix, as this would have constituted an additional 40,000 words.
The children's imagery was qualitatively different from their text, and so these images had to be handled differently. If words were used to describe the imagery without actually showing it, the reader would be forced to vicariously generate their own mental representations, which would invariably differ from the children's own art.
In addition to multimodality, perhaps the most important part of the current study was that the children became teachers to enhance their own learning. The concept of learning by teaching, however, has been around for thousands of years. Papert (1991) saw the potential of technology to enhance this practice:
‘Learning by doing’ is an old enough idea, but until recently the narrowness of range of the possible doings severely restricted the implementation of the idea. The educational vocation of the new technology is to remove these restrictions (p. 22).
Education and schooling have a long tradition of embracing technology, but often only to do the same old things with new tools. If we truly embrace the semiotic affordances of multimodality, we should also embrace the various mediums which support these modes and, more importantly, seize opportunities to create new multimodal artefacts.
The dual stimulation method can also provide powerful opportunities for learning using digital technologies. Sannino (2014) has described dual stimulation as both a method and a principle, with the principle being a “path to volitional action” (p. 1). Other commentaries on Vygotsky’s work, such as Engeström (2011) and Valsiner (1988), point to a further expansion of dual stimulation to give “freedom to participants to construct the task itself, not only the means to solve it” (Sannino, 2014, p. 6). The explanatory task, which is ubiquitous in schools of all descriptions, could then be likened to the first stimulus of the dual stimulation method. Creating a whole range of second stimuli based on the affordances of both technology and multimodality could then enrich the learning process in ways that were not previously possible.