SALRC Workshop: Assessments that Matter

University of Chicago
March 30-31, 2007

Location: Cobb 201


Introduction

Good assessments are essential to teaching and learning. This workshop provides an opportunity to explore the theoretical and research bases of assessment and to examine current assessment practices and models of performance assessment, so that assessments provide useful information on student progress. Participants will also work to develop an integrated assessment unit for use in their courses, and they are asked to bring a current assessment or course materials (curricular goals, a lesson or unit, a text, or a supplementary/literary text) for use in creating the unit.

Through discussion, presentations, and group work, we will:

  1. gain a theoretical and practical foundation in learner-centered and performance-based approaches to effective assessment;
  2. explore a variety of assessment frameworks and models;
  3. discuss and identify standards and guidelines for aligning assessments with curricular goals;
  4. review rubric development;
  5. develop an integrated assessment and rubric; and
  6. explore common assessment issues.

Ursula Lentz of CARLA (Center for Advanced Research on Language Acquisition), University of Minnesota, will lead this workshop.


Schedule


Friday, March 30 (8:00 a.m. – 4:30 p.m.)

  • 8:00 a.m.

    • Registration and Breakfast

  • 8:30 – 9:45 a.m.

    • Assessment Fundamentals

      • Teaching, tests, and assessment

      • Principles of language assessment

  • 9:45 – 10:30 a.m.

    • Defining expectations, guidelines and standards

  • 10:30 – 10:45 a.m.

    • Break

  • 10:45 a.m. – 12:00 p.m.

    • Samples of assessment types and use

    • Assessing for Proficiency

  • 12:00 – 1:00 p.m.

    • Lunch (will be provided)

  • 1:00 – 2:00 p.m.

    • Proficiency Models: Large-Scale and Classroom

    • Models for Speaking and Writing Assessment

      • CoWA and CoSA (CARLA's Contextualized Writing and Speaking Assessments)

      • Portfolio Assessment

      • STAMP (Standards-based Measurement of Proficiency)

  • 2:00 – 2:15 p.m.

    • Break

  • 2:15 – 3:20 p.m.

    • Rating Criteria: Checklists, Scales and Rubrics

      • Group work to develop and apply a speaking rubric

      • Work individually or in groups to develop a speaking or writing assessment and rubric

  • 3:20 – 4:30 p.m.

    • Challenges for assessing reading and listening

    • Methods for rating reading and listening assessments

    • Sample listening assessments and rubrics

  • 6:00 p.m.

    • Dinner (location TBA)

Saturday, March 31 (8:00 a.m. – 1:00 p.m.)

  • 8:00 – 8:30 a.m.

    • Review, Questions, and Breakfast

  • 8:30 – 9:45 a.m.

    • Using Backward Design (Wiggins & McTighe, 1998) and essential questions to create assessments

    • Integrated Performance Assessments (IPAs): a performance assessment unit framework

  • 9:45 – 10:20 a.m.

    • Group/pair work per level on computers to explore IPAs

  • 10:20 – 10:35 a.m.

    • Break

  • 10:35 – 12:15 p.m.

    • Work in groups or individually to write an IPA using your text or course materials

  • 12:15 – 12:25 p.m.

    • Break

  • 12:25 – 12:45 p.m.

    • Unit demonstration and feedback

  • 12:45 – 1:00 p.m.

    • Questions and wrap-up


Reading List

Before the workshop, please read and explore:


Glossary of Assessment Terminology

Articulation: The smooth transition from one level of proficiency to the next along the continuum of language learning.

Assessment: all activities undertaken by teachers and by their students that provide information to be used as feedback to modify the teaching and learning activities in which they are engaged (Black & Wiliam, 1998).

Assessment techniques/methods: include tests, exhibits, interviews, surveys, observations, etc. Good assessment requires a balance of techniques because each technique is limited and prone to error.

Assessment instrument: a specific device or way to measure student learning, such as an examination, senior project, portfolio, etc.

Authentic assessment: Assessment tasks that require demonstration of knowledge and skills in a “real world” context and purpose.

Formative assessment: Assessment for learning. It aims at improvement and provides feedback to the student.

Summative assessment: Assessment of learning; assessment of what students have learned at a particular point in time. The purpose is accountability.

Achievement: tests that assess what students have learned from instruction.

Criterion-referenced: measures student performance against set criteria to determine whether the criteria or goal have been met.

Discrete Point: Tests a single set of linguistic features.

Efficiency: the practicality and cost of designing and administering the assessment.

Evaluation: 1) The process of obtaining information that is used to make educational decisions about students, to give feedback about their progress, strengths, and weaknesses, and to judge instructional effectiveness and curricular accuracy. 2) A value judgment about the results of assessment data. For example, evaluation of student learning requires that educators compare student performance to standards to determine how the student measures up. Depending on the result, decisions are made regarding whether and how to improve student performance.

Integrative: assesses a variety of language features simultaneously.

Norm-referenced: tests that compare a student's results to those of other students. The SAT is a norm-referenced test.

Objective: can be scored with a key, e.g., true/false or multiple-choice questions.

Subjective: scored by impression and judgment (measured against criteria).

Portfolio: A purposeful, varied collection of evidence that shows student learning over time; it documents a range of student knowledge and skills and involves student selection of work included.

Proficiency: "a goal for teaching rather than a methodology. It focuses on communication and allows teachers to take into consideration that learners may show proficiency at different levels in different modalities (skills) at any given time" (http://www.carla.umn.edu/articulation/MNAP_ploa.html).

Self-assessment: students evaluate their own progress.

Standards and guidelines: A set of descriptors of expectations or abilities at a certain level in a certain skill. The ACTFL language ability descriptions are guidelines; states have standards that students must meet.

Traditional: refers to multiple-choice, true/false, or short fill-in-the-blank tests where students provide a short or one-word response.

Reliability: an essential quality of any assessment. It refers to the dependability of the test and the degree to which the scores of test takers are consistent over repeated test administrations; i.e., test results are replicable. (Inter-rater reliability, internal consistency, and parallel-forms reliability are different types of reliability.)
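
As a concrete illustration of inter-rater reliability, the short Python sketch below computes Cohen's kappa, one common agreement statistic, for two raters scoring the same ten samples. The scores are invented for illustration, and the scikit-learn library is assumed to be available:

    # Illustrative only: two raters score the same ten speaking samples
    # on a 1-4 rubric scale. Cohen's kappa measures how far their
    # agreement exceeds what chance alone would produce.
    from sklearn.metrics import cohen_kappa_score

    rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]  # hypothetical scores
    rater_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]

    kappa = cohen_kappa_score(rater_a, rater_b)
    print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level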

Rubric: A scoring guide consisting of a set of general criteria used to evaluate a student's performance on a given task. Rubrics consist of a fixed measurement scale and a list of criteria that describe the characteristics of products or performances.
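
To make that structure concrete, here is a minimal Python sketch of an analytic rubric as a fixed scale plus a list of criteria; the scale labels, criteria, and total_score helper are invented for illustration, not part of any standard rubric:

    # Illustrative only: an analytic speaking rubric as a fixed 1-4 scale
    # and a list of criteria, with a helper that totals one rating per criterion.
    SCALE = {1: "beginning", 2: "developing", 3: "proficient", 4: "exemplary"}
    CRITERIA = ["comprehensibility", "vocabulary", "fluency", "accuracy"]

    def total_score(ratings):
        """Sum the 1-4 rating assigned for each criterion."""
        missing = set(CRITERIA) - set(ratings)
        if missing:
            raise ValueError(f"unrated criteria: {sorted(missing)}")
        return sum(ratings[c] for c in CRITERIA)

    print(total_score({"comprehensibility": 3, "vocabulary": 2,
                       "fluency": 3, "accuracy": 4}))  # 12 out of a possible 16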

Validity: the degree to which we are testing what we think we are testing. There are several types of validity:

  • Construct validity – the test measures what it is intended to measure
  • Concurrent validity – the test correlates with another measure
  • Predictive validity – test scores predict future performance
  • Face validity – the test appears valid to the test taker
  • Washback validity – a close relationship between assessment and instruction

Washback: the effect that testing has on teaching.


Slide Show

The slide show from this workshop is available for download as a PDF.


Participants

The participants at this workshop included:

Elena Bashir, University of Chicago
Phillip Engblom, University of Chicago
Pinderjeet Gill, University of Michigan
Xi He, University of Chicago
Sungok Hong, Indiana University
Vimaladevi Katikaneni, University of Chicago
Nisha Kommattam, University of Chicago
Hajnalka Kovacs, University of Chicago
James Lindholm, University of Chicago
Wasantha Liyanage, Cornell University
Rebecca Manring, Indiana University
Rakesh Ranjan, Emory University
Valerie Ritter, University of Chicago
Jishnu Shankar, Syracuse University