The Impact of Portfolio Assessment on the Grammatical Accuracy and Writing Complexity of TEFL Postgraduate Students

Article Type: Research Article

Author

Department of English Language, Tabriz Branch, Islamic Azad University, Tabriz, Iran

Abstract

The present quasi-experimental study investigated the effect of portfolio assessment on the grammatical accuracy and writing complexity of postgraduate TEFL students. It rests on the hypothesis that taking part in the assessment of one's own performance draws learners' attention to the formal and textual features of written discourse and thereby fosters their development. The participants, 40 postgraduate TEFL students, were selected from a population of fifty students who had taken the Advanced Writing course; after their homogeneity in reading comprehension and writing had been verified, they were randomly assigned to an experimental and a control group. Both groups received twelve sessions of instruction based on identical teaching materials and a process-oriented teaching method complemented by interactive feedback, self-editing, and teacher feedback. The experimental group additionally took part in weekly portfolio assessment of their writing assignments. Statistical analysis of the post-test data revealed significant gains for the experimental group in grammatical accuracy and writing complexity. The findings underscore the need to involve postgraduate TEFL students in the processes of instruction and of assessing their own progress, and carry numerous pedagogical implications.

Article Title [English]

The Impact of Portfolio Assessment on the Accuracy and Complexity of TEFL Postgraduate Students’ Writing

Author [English]

  • Zohreh Seifoori
Department of English Language, Tabriz Branch, Islamic Azad University, Tabriz, Iran
Abstract [English]

This quasi-experimental study examined the impact of portfolio assessment (PA) on the accuracy and complexity of postgraduate TEFL students’ writing. It was hypothesized that engaging learners in the process of assessing their own performance can focus their attention on formal and textual features and thereby promote their writing. Forty intermediate TEFL postgraduate students, selected from a population of 50 taking the “Advanced Writing Course” at Islamic Azad University, Tabriz Branch, participated in the study and were randomly assigned to a control and an experimental group after their homogeneity had been assessed with a Preliminary English Test (PET) and a writing test. Both groups received 12 sessions of process-oriented instruction based on the same teaching materials, complemented by interactive feedback, peer-editing, and teacher feedback on their writing samples. The experimental group was additionally engaged in weekly PA of their writing. The independent-samples t-test analysis of the writing post-test revealed that the experimental group surpassed the control group and produced more accurate and complex texts. The results accentuate the significance of engaging postgraduate TEFL students not only in the learning process but also in the process of evaluating their own progress over time, and offer pedagogical implications.

Keywords [English]

  • Accuracy
  • Complexity
  • Interactive feedback
  • Portfolio assessment
  • Writing

1. Introduction

Writing is a complex communicative process of converting thoughts to written language. It usually starts off as a chance to practice vocabulary and grammar in pursuit of linguistic accuracy, but soon evolves into an opportunity to develop organizational skills with a focus on meaning. The complexity involved in forming original thoughts and organizing them, on the one hand, and the grammatical knowledge and organizational skill required for coherently transforming those thoughts into written language, on the other, have made learning writing as an academic communicative skill a daunting task for many language learners in ESL and EFL contexts. Many language teachers and learners find it difficult to advance in writing by leaps and bounds, partly owing to the inadequacy of the prerequisite sub-skills writing requires, as suggested by Wolf (2000, as cited in Topuz, 2004). In practice, teachers in EFL contexts rarely accomplish the objective of addressing multiple features of writing and instead highlight the linguistic component. For the same reason, EFL learners mostly treat writing as a support technique for reinforcing handwriting, grammar, and vocabulary, and rarely succeed in developing a communicative command of this productive skill in its own right.

   The focus on formal features of the language and the neglect of meaning and compositional skills in writing and its assessment characterized the traditional product-oriented approach that dominated the undergraduate, graduate, and postgraduate levels of many educational systems. The pedagogical pendulum, however, gradually swung away from the product toward the process of constructing meaning and culminated in a more balanced learner-oriented, if not learner-centered, pedagogy even in EFL contexts, where the paramount significance of engaging learners in the process of writing is now widely acknowledged. The reformist process-oriented approach is fully in line with Progressive Educational Philosophy (PEP) (Clarke, 1987) and purports to extricate learners from the tyranny of teachers by teaching them how to manage the process of writing and how to evaluate their own performance.

   The new trend called into question the static nature of evaluative techniques as well. Alternative assessment techniques (AAT) such as self- and peer-assessment, journal writing, and portfolio assessment (PA) were thus introduced to counterbalance traditional summative, teacher-led, unidirectional evaluation (Belanoff & Dickson, 1991). A portfolio has been defined as a purposeful collection of students' work reflecting their gradual progress, weaknesses, and achievements in various areas (Genesee & Upshur, 1996). Depending on the type of data collected, PA can serve a wide range of pedagogical purposes in second language learning contexts of all kinds (Shohamy & Walton, 1992). According to Genesee and Upshur (1996), such analysis and evaluation might focus on one specific aspect of the language, such as writing, or take a broad focus that includes examples of all aspects of language development. In EFL contexts, however, this technique, as proposed by Mousavi (1999), is most widely applied to the evaluation of writing, where teachers provide suggestions on how to revise the product and comment on the individual’s progress in writing.

Unlike traditional evaluation techniques such as tests, which offer a unidirectional flow of feedback from the teacher to the learner, PA, as postulated by Song and August (2002), enables learners to participate in the evaluation of their own progress over time and to take responsibility for their own learning. Admittedly, the interactive nature of PA increases the complexity and dynamism of the evaluation process, which, as posed by Karrol (1998, cited in Reid, 2000), originates from the difficulty of theoretically defining writing in the first place and the challenge of simultaneously controlling the various factors involved in writing and assessing it. Through this dynamic process, student learning is monitored while students themselves are involved in deciding the degree to which their performance matches their ability. For this very reason, the assessment can be regarded as formative rather than summative (Bachman, 1990), and, as postulated by Spolsky (1992), it can rightly be considered as curriculum-driven as any other formative assessment because it shadows the curriculum and provides feedback to students and teachers. Coombe, Folse, and Hubley (2007) highlighted the compatibility of PA with the process-oriented approach and its capacity to engage learners in self-assessment of multiple pieces of writing produced over time. Likewise, O'Malley and Valdez Pierce (1996) advocated this ongoing self-evaluation of writing samples as a motivating technique that encourages learners to reflect on their work.

2. Review of the Related Literature

Empirical studies have addressed various aspects of the teaching and learning processes in EFL/ESL contexts in an attempt to find ways of facilitating the development of this skill. One fertile area for such research has been the use of portfolios, with two major lines of inquiry. The first line is mainly concerned with the effect of portfolio assessment on learners' achievement in writing (Qinghua, 2010; Barootchi & Keshavarz, 2002; Elahinia, 2004; Nezakatgoo, 2011; Song & August, 2002), the second with learners' reflections, comments, and attitudes toward portfolio assessment (Starck, 1999; Kear, Coffman, McKenna, & Ambrosio, 2000; Spencer, 1999). More recently, PA has been merged with individual multiple intelligences (Faravani & Atai, 2015) to promote EFL learners’ reflective thinking skills. The present study belongs to the first type, so it would be enlightening to review some empirical studies focused on the impact of portfolio assessment on language learning in EFL contexts.

   In a longitudinal study, Song and August (2002) investigated the effect of portfolio assessment on two groups of advanced ESL learners’ composition writing. They found that the success rate of the PA group was higher than that of the control group not only in writing but also in the college exit exam. More recently, Qinghua (2010) conducted a quasi-experimental study of the impact of PA on the writing development of 34 EFL Chinese learners, aged 18 to 22, in two sophomore English-major classes of the same size, gender distribution, and writing proficiency. The findings revealed significant differences between the two groups in terms of accuracy and coherence, with the PA group surpassing their counterparts in the control group.

   In the Iranian context, Barootchi and Keshavarz (2002) examined the effect of PA on 60 Iranian female high-school sophomores’ achievement of course objectives and their sense of responsibility for monitoring their progress. The analysis of scores obtained from a teacher-made achievement test and a satisfaction questionnaire verified the significant effect of PA. The portfolio assessment scores correlated significantly with those of the teacher-made achievement test, and high inter-rater reliability was also achieved. They thus suggested PA as a promising testing and teaching tool that can be deployed in conjunction with teacher-made tests to promote educational outcomes and learner engagement.

   Elahinia (2004) investigated the effect of portfolio assessment on the writing achievement of 34 male and female Iranian EFL learners majoring in English Translation. Analysis of their scores in a final essay-writing exam displayed the positive effect of PA on the participants’ writing performance and attitudes. The effect of PA on writing was likewise explored by Tabatabaei and Assefi (2012), who reported significant impacts on the focus, vocabulary, organization, and conventions of the writing of 40 male and female upper-intermediate EFL learners. In yet another study, Nezakatgoo (2011) suggested that evaluating learners’ work through portfolio assessment could improve students’ test performance.

   Beyond the impact of PA on EFL learners' writing skill, however, more innovative applications of this alternative assessment technique reveal its wide-ranging applicability in language pedagogy and its far-reaching impact on various features of the learning process. Oakley, Pegrum, and Johnston (2014) explored the integration of Wi-Fi-based e-portfolios into a university-level Master of Teaching program, in the form of personal learning environments designed to accentuate reflective practice. Discussing the issue from different perspectives, they proposed that the e-portfolio helped the pre-service teachers acknowledge the value of reflection and of a guiding structure, and that they achieved a medium level of reflection during the first year of study.

   In the context of Iran, Jafarigohar and Mortazavi (2013) examined the effects of reflective individual, peer-collaborative, and teacher-collaborative journal writing on 60 upper-intermediate female English learners’ self-regulated learning, as measured by the Self-Regulated Learning Scale (ASRL-S). The findings revealed that journal writing contributed significantly to the participants' self-regulatory skills and that sharing the journal with peers and teachers could boost the impact. Self-assessment was also found to affect EFL learners’ goal orientation (Baleghizadeh & Masoun, 2014).

   In another study, Faravani and Atai (2015) merged dialogic PA with individual multiple intelligences to promote the reflective thinking skills of EFL participants. This innovative application of PA was found to accelerate the growth of higher-order thinking skills among the forty EFL participants, and the impact was greater among intellectually homogeneous participants in the experimental group.

   Very few studies, to the best of my knowledge, have yet delved into the impact of PA on the accuracy and complexity of postgraduate TEFL students’ writing. It is normally assumed that these learners have already mastered the conventions of the target writing system and should be more concerned with the conceptual content of their courses, concentrating on higher-level facets such as content, organization, coherence, and other features of academic writing. A cursory examination of doctoral dissertations and master's theses, however, is sufficient to uncover the full scale of the problems postgraduate students still have with accuracy and complexity, which are among the basic requirements of academic writing, despite years of schooling. Their writing is highly restricted in terms of grammatical and lexical accuracy, reflects traces of L1 influence, and is far from standard natural written discourse. Most importantly, the excessive simplicity of the expressions they employ fails to convey the maturity and complexity of the intellectual content required at this level. Such inadequacies are evident in the written assignments produced even by advanced learners who have already passed various English grammar, vocabulary enrichment, and composition writing courses involving process-oriented instruction in writing and various types of feedback. Surprisingly, they know most of the rules of writing and even spend time generating ideas and editing their first drafts, but still fail to approximate native-like standards of writing.

   Despite the prevalence of process orientation in teaching writing to graduate and postgraduate students, they are rarely engaged in evaluating the products they produce, owing to various practical constraints. It is my conviction that although any serious treatment of the problem calls for radical improvements in teaching conditions, it might still be viable to help postgraduate TEFL students attend to the accuracy and complexity of their writing through engagement in PA. Hence, this quasi-experimental study was designed to examine the effect of involving Iranian TEFL learners in PA on the development of their writing skill, and the following research question was formulated:

 

 

  • Does PA impact the accuracy and complexity of postgraduate TEFL students' writing in a process-oriented writing course?
3. Method

3.1. Participants

The participants in the current quasi-experimental study were 40 male and female Iranian postgraduate TEFL students at Islamic Azad University, Tabriz Branch, within the age range of 24 to 36. All participants were taking the “Advanced Writing Course” and attended two intact classes once a week during a fifteen-session semester. Most of them had learned English as a third language, with Turkish and Farsi as their first and second languages. The classes were randomly assigned as the experimental group and the control group, each including mostly female participants. The initial homogeneity of the groups was assessed using a Preliminary English Test (PET) and a writing test.

3.2. Instruments

Two instruments were used to collect the research data. First, a modified 40-item version of the PET was administered to verify the homogeneity of the participants in reading comprehension (35 items) and basic grammar (15 items). In addition, a writing test on the topic of “Rural Life and Urban Life” was administered to establish whether the participants were homogeneous in terms of the accuracy and complexity of their writing. The same topic was assigned as the post-test, and the practice effect was controlled in two ways. First, I informed the participants that the purpose of the initial writing was to delineate their entry level in writing and verify the suitability of the syllabus, and that they should not expect any kind of feedback. Further, I administered the post-test after a 12-week interval at the end of the treatment, during which the participants worked on eight different writing genres and assignments. To relieve the burden on the participants and alleviate their stress, both writing tests were administered during class time as class work.

3.3. Materials and Procedure

The course book selected as the teaching material was “Advanced Writing” (Birjandi, Alavi & Mosalanejad, 2004), which comprises twelve chapters: four on preliminary information about paragraph writing and eight on the genres of enumeration, chronology, process, description, definition, cause and effect, comparison/contrast, and argumentation. The eight genres were presented in class during the 12-session treatment, and each week the participants were asked to write compositions based on the genre presented that session.

   As rightly argued by many experts in language testing and evaluation, a major concern in PA is maintaining the fairness of the evaluation, which might be threatened by growing subjectivity on the part of the teacher. To minimize the impact of this factor, I followed O’Malley and Valdez Pierce’s (1996) suggestion and based the evaluation on a common rubric with numerical descriptors. The rubric focused on four major categories, namely unity, organization, accuracy, and complexity, each with varying levels of specification, rated on a scale from 1, the weakest performance, to 4, near native-like performance (see Appendix). Descriptions of the levels were based on a modified version of the writing scale in Jacobs, Zingraf, Wormuth, Hartfiel, and Hughey (1981).
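The rubric just described can be sketched as a small data structure. This is only an illustrative sketch: the four category names and the 1–4 scale come from the text, while the equal weighting (a plain sum over categories) is an assumption made here for illustration, not necessarily the study's actual scoring method.

```python
# Hypothetical sketch of the analytic rubric described above.
# Categories and the 1-4 scale come from the text; equal weighting
# (a plain sum) is an illustrative assumption.
RUBRIC_CATEGORIES = ("unity", "organization", "accuracy", "complexity")

def score_writing(ratings):
    """Sum the 1-4 ratings assigned for each rubric category."""
    for category in RUBRIC_CATEGORIES:
        rating = ratings[category]
        if not 1 <= rating <= 4:
            raise ValueError(f"{category}: rating {rating} outside the 1-4 scale")
    return sum(ratings[c] for c in RUBRIC_CATEGORIES)

# A paragraph rated 3 on every category scores 12 out of a maximum of 16.
sample = {"unity": 3, "organization": 3, "accuracy": 3, "complexity": 3}
total = score_writing(sample)
```

Making the rubric explicit in this way is also what allows two raters to produce comparable score sets, which matters for the reliability checks reported later.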

   To direct the participants’ self- and peer-assessment attempts, I introduced the rubric as an explicit evaluation criterion during the first session, when its content was fully and interactively discussed. A sample paragraph was also evaluated against the rubric to help learners understand how their writings would be quantified and evaluated, and they were encouraged to employ it in self- and peer-assessment of their writings.

   During the course, the participants in both groups were first introduced to the basic concepts of the writing genre in question and followed a recognition-to-production sequence of learning based on the model paragraphs presented in the book. Their attention was drawn to the structural and organizational characteristics of each paragraph type, and finally they were asked to produce the first draft of a sample paragraph on one of the topics suggested in the book. The writing process would start in class and was to be completed and self-edited at home.

   The following session, the teacher would display a sample of the students’ writings on the projector screen, asking all of the participants to read and edit the text in 15 minutes while it was evaluated and revised interactively in class. The teacher’s role during this interactive feedback session was to draw the participants’ attention to defective features through general questions. It was assumed that the text, along with the teacher’s scaffolding questions, would function as object-regulation and other-regulation in the zone of proximal development and finally lead learners to self-regulation (Lantolf, 2000).

   Then, the participants in both groups were required to peer-edit one of their classmates’ writings in class, leaving comments and notes for the writers to consider. The writers subsequently revised their texts based on the feedback offered by their peers, and the third drafts were collected for final teacher assessment.

   In the experimental group, the participants had to revise their texts once more based on the teacher’s comments and to collect four samples of their selected writings, along with the three revisions of each, in a portfolio. Meanwhile, weekly meetings were arranged with the teacher to assess and negotiate randomly selected, recently added content of the portfolios. Owing to the wide range of grammatical and organizational problems the participants had in writing, the assessment focus in portfolio sessions was inevitably restricted to the grammatical accuracy and complexity of the teaching points peculiar to each writing genre.

   No portfolio meetings, however, were arranged for the participants in the control group, nor did they trace their gradual development by collecting and comparing versions of their work and evaluating various features of their writing. The last session was allocated to collecting the final sample of the participants’ writing in the form of a long paragraph on the same topic as the pre-test.

3.4. Measures

Grammatical accuracy may be measured as the ratio of error-free terminal units (t-units) or, in terms of inaccuracy, as the ratio of errors per t-unit (Larsen-Freeman, 2006). A t-unit is defined as an independent utterance providing referential or pragmatic meaning (Foster & Skehan, 1999) and may consist of one simple independent finite clause or an independent finite clause plus one or more dependent finite or non-finite clauses. In this study, accuracy was quantified as the percentage of error-free clauses in the overall writing (Ellis & Yuan, 2004; Foster & Skehan, 1999; Tavakoli & Skehan, 2005; Yuan & Ellis, 2003).

   Following Foster and Skehan (1999), complexity was measured as the ratio of subordinate clauses to the total number of t-units produced. These two measures replaced the levels of the EFL composition profile in order to address the reliability concern posed by Song and August (2002), who underscored increased subjectivity as a serious problem in PA. It was assumed that these quantification methods would yield more objective measures of accuracy and complexity for estimating the extent to which the treatment was effective.
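Under these operationalizations, both indices reduce to simple ratios over hand-coded counts. A minimal sketch, assuming the clause and t-unit counts for a text have already been annotated by a rater (the example counts are invented and merely chosen to fall near the pre-test means in Table 1):

```python
def accuracy(error_free_clauses, total_clauses):
    """Accuracy as the proportion of error-free clauses (Foster & Skehan, 1999)."""
    if total_clauses == 0:
        raise ValueError("a text must contain at least one clause")
    return error_free_clauses / total_clauses

def complexity(subordinate_clauses, t_units):
    """Complexity as the ratio of subordinate clauses to t-units."""
    if t_units == 0:
        raise ValueError("a text must contain at least one t-unit")
    return subordinate_clauses / t_units

# Invented counts for one text: 12 error-free clauses out of 30,
# and 6 subordinate clauses spread over 25 t-units.
acc = accuracy(12, 30)    # 0.4
comp = complexity(6, 25)  # 0.24
```

Both functions deliberately take pre-annotated counts rather than raw text: clause segmentation and error identification remain human judgments, which is why the study reports inter-rater reliability for these measures.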

   Moreover, two independent scorers rated the participants’ pre-test and post-test writings to ensure the reliability of the accuracy and complexity measures. The inter-rater reliability of the score sets was found to be .78 and .83 for the accuracy measures and .86 and .91 for the complexity measures of the pre-test and post-test scores, respectively.
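Inter-rater reliability coefficients of this kind are conventionally computed as a Pearson correlation between the two raters' score sets. A stdlib-only sketch, using invented rater scores purely for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two raters' score sets."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented accuracy ratios assigned by two hypothetical raters
# to the same five texts; high agreement yields r close to 1.
rater1 = [0.40, 0.35, 0.50, 0.42, 0.38]
rater2 = [0.42, 0.33, 0.48, 0.45, 0.36]
r = pearson_r(rater1, rater2)
```

The document does not state which coefficient was used, so Pearson's r is an assumption here; other agreement indices (e.g. intraclass correlation) are also common for rater data.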


4. Results and Discussion

4.1. Results

The research data obtained from the pre-tests were analyzed to assess the initial homogeneity of the groups while the post-test scores were analyzed to find out the impact of PA on the dependent research variables.

 

4.1.1. The Pre-test

To ascertain the groups’ initial homogeneity in reading, grammar, and the accuracy and complexity of writing, I first computed descriptive statistics for the pre-test scores, as shown in Table 1.

 

 

Table 1. Descriptive Statistics of the Groups’ PET Scores and Pre-writing Accuracy and Complexity Measures

 

 

| Measure             | Group   | N  | Mean  | Std. Deviation | Std. Error Mean |
|---------------------|---------|----|-------|----------------|-----------------|
| PET                 | Con. G. | 20 | 23.40 | 6.30           | 1.40            |
|                     | Exp. G. | 20 | 26.15 | 5.16           | 1.15            |
| Pre-test Accuracy   | Con. G. | 20 | .41   | .075           | .01             |
|                     | Exp. G. | 20 | .39   | .09            | .02             |
| Pre-test Complexity | Con. G. | 20 | .20   | .04            | .01             |
|                     | Exp. G. | 20 | .23   | .05            | .01             |

             

   

   Table 1 shows differences in the groups’ pre-test mean scores; hence, to verify whether these apparent differences were significant, I ran three independent-samples t-tests, the results of which are displayed in Table 2.
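An independent-samples t statistic can be reproduced from summary statistics alone. A stdlib sketch assuming equal variances (the pooled, Student formula), fed with the PET figures from Table 1; the equal-variance assumption is mine for illustration, since the Levene's test results are reported separately in Table 2:

```python
import math

def independent_t(m1, s1, n1, m2, s2, n2):
    """Student's t for two independent groups, pooling the variances."""
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se

# PET means, SDs, and group sizes from Table 1 (n = 20 per group).
t = independent_t(23.40, 6.30, 20, 26.15, 5.16, 20)
df = 20 + 20 - 2  # 38
```

For these figures |t| is about 1.51, below the two-tailed .05 critical value of roughly 2.02 for df = 38, which is consistent with the claim of initial homogeneity on the PET.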


Table 2. Independent Samples t-test of the Groups’ PET Scores and Pre-writing Accuracy and Complexity Measures

 

 

|   | F (Levene's Test) | Sig. (Levene's Test) | t | df | Sig. (2-tailed) | Mean Difference | Std. Error Difference | 95% CI of the Difference, Lower | Upper |
|---|-------------------|----------------------|---|----|-----------------|-----------------|-----------------------|---------------------------------|-------|

PET

Equal Varia