SALRC Assessment Workshop

University of California, Berkeley
October 27-28, 2006

Location: 33 Dwinelle Hall


Introduction

How should student learning be measured?

The workshop will provide participants with an opportunity to review current assessment practices and to explore the theoretical and research bases of assessment, ensuring that the assessments we use provide useful information on student progress.

Through discussion, presentations, and group work, we will:

  1. gain a theoretical and practical foundation in learner-centered and performance-based approaches to effective assessment;

  2. explore a variety of assessment frameworks and models;

  3. discuss and identify standards and guidelines to align assessment with curricular goals; and

  4. develop, expand, or adapt effective and practical assessments.

Ursula Lentz of CARLA, University of Minnesota, will lead this workshop.


Schedule

Friday, October 27

9 a.m. - 9:30 a.m.

  • Coffee and Breakfast

9:30 a.m. – 12 p.m.

  • Assessment Concepts and Theory
    • Matching assessment type to purpose
    • Guidelines and Standards, course level expectations
  • Samples of assessment types and use

(Lunch to be provided)

1 p.m. – 5 p.m.

  • Assessing for Proficiency
    • Models for Speaking and Writing Assessment
    • Portfolio Assessment
  • Rating Criteria: Checklists, Scales and Rubrics
    • Developing speaking and writing assessment and rating criteria
      • Work in groups

7 p.m. Dinner at a Chinese Restaurant (Please RSVP to Steven Poulos by Friday, October 20)

Saturday, October 28

8 a.m. - 8:30 a.m.

  • Coffee and Breakfast

8:30 a.m. – 12 p.m.

  • Developing Reading and Listening Assessments
  • Backward Design for assessment (Wiggins & McTighe, 1998)
  • Integrated Performance Assessments – performance assessment units

(Lunch on your own)

1 p.m. – 3:30 p.m.

  • Topically based assessments
  • Developing a topically based performance assessment unit
  • Planning for comprehensive assessment

Reading List

Before the workshop, please read:

  • ACTFL (American Council on the Teaching of Foreign Languages) Proficiency Guidelines. If you are not already familiar with them, please review them before the workshop. The proficiency guidelines for writing are found here: PDF (10 pages). The proficiency guidelines for speaking are found here: PDF (5 pages).
  • ACTFL Reading and Listening Guidelines. The 1985 ACTFL Proficiency Guidelines for reading and listening can be found here: HTML. (~10 pages)
  • A summary of the ACTFL guidelines can be found here: PDF (1 page)
  • The Hindi ACTFL guidelines can be found here: PDF (24 pages)
  • ILR (The Interagency Language Roundtable) Scale (~6 pages)
  • Boston, Carol (2002). The concept of formative assessment. Practical Assessment, Research & Evaluation, 8(9). (5 pages)
    • This article discusses formative assessment (diagnostic assessment that provides feedback to teachers and students throughout a course) and provides examples and resources to support its implementation.
  • Furger, Rebecca (2002). Assessment for understanding. Edutopia, January 2002. (4 pages)
  • Glossary of Assessment terms found below. (2 pages)
  • Wiggins, Grant (2006). Healthier testing made easy. Edutopia, April, 2006. (4 pages)
    • The article discusses the implications of assessment techniques.
  • Explore the Virtual Assessment Center on the CARLA (Center for Advanced Research on Language Acquisition) web site

Glossary of Assessment Terminology

Articulation: The smooth transition from one level of proficiency to the next along the continuum of language learning.

Assessment: “all activities undertaken by teachers and by their students which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged” (Black & Wiliam, 1998).

Assessment techniques/methods: include tests, exhibits, interviews, surveys, observations, etc. Good assessment requires a balance of techniques because each technique is limited and prone to error.

Assessment instrument: a specific device or way to measure student learning, such as an examination, senior project, portfolio, etc.

Authentic assessment: Assessment tasks that require demonstration of knowledge and skills in a “real world” context and purpose.

Formative assessment: Assessment for Learning; assessment for improvement that provides feedback to the student.

Summative assessment: Assessment of Learning; assessment of what students have learned at a particular point in time. Its purpose is accountability.

Achievement: tests that measure how well students have learned what was taught.

Criterion-referenced: measures student performance against set criteria to determine whether the criteria or goals have been met.

Discrete Point: Tests a single set of linguistic features.

Efficiency: the practicality and cost of designing and administering the assessment.

Evaluation: 1) Process of obtaining information that is used to make educational decisions about students, to give feedback about their progress/strengths/weaknesses, and to judge instructional effectiveness/curricular accuracy.
2) A value judgment about the results of assessment data. For example, evaluation of student learning requires that educators compare student performance to standards to determine how the student measures up. Depending on the result, decisions are made regarding whether and how to improve student performance.

Integrative: assesses a variety of language features simultaneously.

Norm-referenced: tests that compare the results of students to those of other students. The SAT is a norm-referenced test.

Objective: can be scored with a scoring key; e.g., true/false or multiple-choice questions.

Subjective: scored by impression and judgment (measured against criteria).

Portfolio: A purposeful, varied collection of evidence that shows student learning over time; it documents a range of student knowledge and skills and involves student selection of work included.

Proficiency: “a goal for teaching rather than a methodology. It focuses on communication and allows teachers to take into consideration that learners may show proficiency at different levels in different modalities (skills) at any given time.” (http://www.carla.umn.edu/articulation/MNAP_ploa.html)

Self-assessment: students evaluate their own progress.

Standards and guidelines: A set of descriptors of expectations or abilities at a certain level in a certain skill. ACTFL language ability descriptions are guidelines. States have standards students must meet.

Traditional: refers to multiple-choice, true/false, or fill-in-the-blank tests in which students provide a short or one-word response.

Reliability: an essential quality of any assessment. It refers to the dependability of the test and the degree to which the scores of test takers are consistent over repeated test administrations; i.e., test results are replicable. (Inter-rater reliability, internal consistency, and parallel-forms reliability are different types of reliability.)

Rubric: A scoring guide consisting of a set of general criteria used to evaluate a student’s performance on a given task. Rubrics consist of a fixed measurement scale and a list of criteria that describe the characteristics of products or performances.

Validity: the degree to which a test measures what it is intended to measure. There are several types of validity:

  • Construct validity – the test measures the construct it is intended to measure
  • Concurrent validity – the test correlates with another measure
  • Predictive validity – test scores predict future performance
  • Face validity – the test appears valid to the test taker
  • Washback validity – a close relationship between assessment and instruction

Washback: the effect that testing has on teaching.


Slide Show

The slide show from this workshop can be downloaded here: PDF


Participants

The participants at this workshop included:

Aftab Ahmad - University of California, Berkeley
Jameel Ahmad - University of Washington
Christine Everaert - University of Colorado, Boulder
Kausalya Hart - University of California, Berkeley
S. Akbar Hyder - University of Texas, Austin
Usha Jain - University of California, Berkeley
John Mock - University of California, Santa Cruz
Herman van Olphen - University of Texas, Austin
Abhijeet Paul - University of California, Berkeley
Gibb S. Schreffler - University of California, Santa Barbara
Prithvi Chandra Datta Shobhi - San Francisco State University
Upkar Ubhi - University of California, Berkeley