
Criterion referenced assessment (CRA) studies (sometimes known as “standard setting studies”) are used in the training industry to determine passing scores in training programs. Time, effort, and no small amount of attention to detail go into running these studies.
Documenting evidence to support CRA studies is crucial, especially when that evidence is used to support accreditation or during a program review. This article discusses the primary elements of a solid criterion referenced assessment study provided as evidence for ASTM E2659-18 certificate program accreditation.
What is a Criterion Referenced Assessment Study?
First, let’s define what a Criterion Referenced Assessment study is.
- A broad definition: Criterion referenced assessment (CRA) is the process of evaluating (and grading) the learning of students against a set of pre-specified qualities or criteria, without reference to the achievement of others (Brown, 1998; Harvey, 2004). (UTAS, 2018)
- The ASTM E2659-18 definition: criterion-referenced assessment, n—an assessment intended to measure a learner’s performance through items linked to intended learning outcomes, with the goal of identifying those who do and do not meet a defined performance standard. (ASTM E2659-18, para. 3.1.18)
These definitions differ from a “norm-referenced” assessment, in which learner results are compared with those of their peers.
“Passing Score” Defined
The passing score (also known as the passing point, the cutoff score, or the cut score) is used to classify examinees as either masters or non-masters. (PTI, 2006)
Care should be taken so that management decisions don’t drive the passing score. Training organizations tend to like passing scores that are nice, clean numbers (70% is a favorite). The reasons may be financial (“I can get more learners to buy my course if the score is not too high”), regulatory (the score is established in law or regulation), or even historical (“We use 70% because we always have”).
In a criterion referenced assessment, the questions are designed around content driven by the intended learning objectives. The study convenes a group of subject matter experts who review the questions and determine the expected score of the “minimally competent” learner (a learner who is just able to pass the test).
Often, the resulting cut score is not a round number (for example, 83%). In this case, the certificate issuer may round the cut score to the nearest acceptable score, either down (e.g., 80%) or up (e.g., 85%); leave it as is (83%); or revise the questions and rerun the study. If the cut score is changed, the certificate issuer should document why it was adjusted. The rationale should be reasonable, informed by the results of the cut score study, and within generally accepted industry practice.
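To make the arithmetic concrete, here is a minimal sketch of how a cut score might be calculated under the modified Angoff method (one of the methods named in the outline below). Each SME rates, for every question, the probability that a minimally competent learner would answer it correctly; the item averages are then summed. All ratings, SME labels, and the rounding policy shown here are hypothetical.

```python
# Minimal sketch of a modified Angoff cut score calculation.
# All ratings and labels are hypothetical; an actual study follows
# the certificate issuer's documented CRA methodology.

# Each SME rates every item: the probability (0.0-1.0) that a
# minimally competent learner answers that item correctly.
ratings = {
    "SME_1": [0.80, 0.90, 0.70, 0.95, 0.85],
    "SME_2": [0.75, 0.85, 0.80, 0.90, 0.80],
    "SME_3": [0.85, 0.90, 0.75, 0.95, 0.90],
}
num_items = 5

# Average the SME ratings for each item, then sum the item averages
# to estimate the raw score of a minimally competent learner.
item_means = [
    sum(sme[i] for sme in ratings.values()) / len(ratings)
    for i in range(num_items)
]
raw_cut = sum(item_means)                  # expected number correct
percent_cut = 100 * raw_cut / num_items    # as a percentage

print(f"Raw cut score: {raw_cut:.2f} of {num_items} items")
print(f"Percent cut score: {percent_cut:.1f}%")

# If issuer policy rounds to the nearest acceptable score (here,
# hypothetically, multiples of 5%), the adjustment and its rationale
# should be documented as described above.
rounded_cut = 5 * round(percent_cut / 5)
print(f"Rounded cut score (policy-dependent): {rounded_cut}%")
```

With these hypothetical numbers the study yields a cut score of about 84.3%, which a documented rounding policy might adjust to 85%; whatever the decision, the rationale belongs in the study record.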
Quality Documentation for Criterion Referenced Assessment Studies
“Quality documentation” with regard to a CRA study means having all the pieces and being able to show that the organization carried out the process correctly to produce credible results… and that it could reproduce the process with high fidelity. Here is an outline that captures the key information that helps communicate how the study was conducted.
Criterion Referenced Assessment (CRA) Study Outline
- Purpose: CRA studies are used to determine the passing score.
- Criterion Referenced Assessment Method: Angoff, Beuk, etc.
- Specific certificate program and exam version(s) that are the focus of the study
- Description of the alignment of the exam questions with learning objectives
- When – Dates when the team met, and when the study was completed (report issued)
- Versioning the study is also helpful. Programs revise their studies regularly, usually each time a new test form is created or the current form is revised. The factors that trigger the need for a new cut score study should be identified in the certificate program instructional design plan (CPIDP).
- Who – Staff, Subject Matter Experts (SMEs), and Contractors involved (including qualifications and experience working with minimally qualified learners)
- How – How the study was conducted (process), how data were captured (SME surveys and meetings, whether virtual or in person), and how findings were calculated (including the rationale for any change to the score from the study’s outcome)
- Findings – the resulting cut score
Supporting Evidence:
- Certificate Program Instructional Design Plan (CPIDP), with the section and page number where the CRA policies, procedures, and methodology used for determining the passing score are located. Policies should include the rationale for when a new CRA study will be needed and the statistical deviation allowed when determining the passing score.
- Documents showing how the study was planned and completed, and how item weighting was determined (when differential item weighting is used).
- Approved, finalized, and version-controlled Intended Learning Objectives (ILO) Matrix
- Approved, finalized, and version-controlled documentation of the questions (exam) used in the study
- Minutes from meetings, including:
- SME meetings and instructions/training (spreadsheets, slides, email communications, etc.) provided to them;
- SME definition of a minimally qualified candidate;
- Data collected from each “round” (normally a spreadsheet is used; a sketch of one way to summarize round data appears after this list);
- Advisory Group review of the report and findings (and, where mandated in certificate issuer policy, comment on and approval of the process, report, and findings);
- Documents describing how SME qualifications were defined (qualifications may be defined in a number of places, such as under Advisory Member Composition or Contractor Monitoring)
- Staff, SME, and Contractor resumes/CVs
- SME and Contractor Agreements (contracts, whether paid or unpaid, NDAs, etc.)
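Where the study uses multiple rating rounds, the round-by-round spreadsheet data can also be summarized to show how SME ratings converged. The sketch below illustrates one common approach (not a requirement of the standard): items on which SME ratings disagree beyond a chosen threshold are flagged for discussion before the next round. The ratings and the 0.10 threshold are hypothetical.

```python
# Illustrative summary of round 1 Angoff ratings, flagging items
# with high SME disagreement for discussion before round 2.
# The data and the 0.10 threshold are hypothetical examples.
from statistics import mean, stdev

round1 = {
    "SME_1": [0.80, 0.90, 0.70, 0.95, 0.85],
    "SME_2": [0.75, 0.85, 0.80, 0.90, 0.80],
    "SME_3": [0.85, 0.90, 0.45, 0.95, 0.90],
}
DISAGREEMENT_THRESHOLD = 0.10  # policy-defined standard deviation cutoff

num_items = len(next(iter(round1.values())))
for item in range(num_items):
    item_ratings = [sme[item] for sme in round1.values()]
    m, s = mean(item_ratings), stdev(item_ratings)
    flag = "discuss in round 2" if s > DISAGREEMENT_THRESHOLD else "ok"
    print(f"Item {item + 1}: mean={m:.2f}, sd={s:.2f}  [{flag}]")
```

Initial ratings should still be collected independently (see the tips below); flagged items are discussed as a group and then re-rated, with each SME remaining free to keep or change their rating.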
A solid criterion-referenced assessment study is a critical part of certificate program documentation. It is the culmination of the training development process and establishes the cut score used to determine whether your learners are able to perform the intended learning outcomes stated on their certificates. As mentioned above, these studies take time, effort, and no small amount of attention to detail to fulfill a key requirement of accreditation. More importantly, they reflect the quality and maturity of the certificate program.
Tips for Preparing Criterion Referenced Assessment Studies
- Spreadsheets should be able to stand alone (including date completed, linkage to the main report, etc.).
- Where SME initials are used, a legend identifying each SME is helpful (for example: JD – Jane Doe).
- The procedure should ensure that SMEs make their ratings independently of one another. In other words, each SME should make their ratings without knowing the ratings of the other SMEs.
- An index is often used to direct assessors to where the evidence (documentation) supporting the criterion referenced assessment is contained in certificate program documentation. The index lists the document name, section, and paragraph where specific evidence may be found (for example: “SME qualifications – CPIDP, Section 4.2”).
- Finalized criterion-referenced assessment studies should be part of the certificate issuer’s historical record.
References
University of Tasmania (2018, May 2). Teaching & Learning Criterion Referenced Assessment.
ASTM International (2018). ASTM E2659-18 Standard Practice for Certificate Programs.
Professional Testing Inc. (PTI) (2006). Building High Quality Examination Programs: Test Topics, Step 8: Conduct the Standard Setting.
Contributing Authors: Kevin Swartz and Kathy Tuzinski
Kevin Swartz owns and operates KS2 Consultants LLC, which provides curriculum and instructor/teacher development program assessments and training to improve education and training in government, corporate, and private/public education. These assessments are based on Instructional Systems Design (ISD) and Certified Technical Trainer (CTT) methods in addition to published assessment criteria. Kevin can be reached at jks_2006@yahoo.com or on LinkedIn at https://www.linkedin.com/in/Kevin-Swartz-KS2-Consultants.
Kathy Tuzinski is a Contract Assessor for ANAB’s Certificate Accreditation Program. She has a scientist-practitioner background in the testing industry as a psychometrician, consultant, and industrial/organizational psychologist. She started Human Measures to provide the testing industry with support in developing testing programs that are based on science – advising companies on job analysis and competency modeling, test development and test validation, legal requirements for employee selection, and relevant testing standards, including the AERA, APA, and NCME Standards for Educational and Psychological Testing and the SIOP Principles for the Validation and Use of Personnel Selection Procedures. She is a member of the American Psychological Association and the Society for Industrial and Organizational Psychology, has published several articles in peer-reviewed journals, and is co-editor of the book Simulations for Personnel Selection. She holds a Master of Arts degree in Industrial/Organizational Psychology from the University of Minnesota. You can reach her at https://www.human-measures.com/.