Glossary of Assessment Terms
Academic Assessment: The systematic collection, review, and use of information about educational programs and courses, undertaken for the purpose of improving both student learning and instructional delivery.
Administrative Assessment: The systematic collection, review, and use of information about an administrative unit (or units) for the purpose of increasing the efficiency and effectiveness with which the institution attains its mission.
Assessment Opportunity: An opportunity to observe and assess students’ behaviors or competencies. Such opportunities include exams, papers, presentations, performances, exhibits, portfolios, surveys, homework, group projects, etc.
Bloom's Taxonomy of Cognitive Objectives: Six general levels of educational learning objectives (knowledge, comprehension, application, analysis, synthesis, and evaluation), arranged in order of increasing complexity.
Course-Embedded Assessment: The process of reviewing materials generated in the classroom. In addition to providing a basis for grading students, such materials allow faculty to evaluate approaches to instruction and course design.
Curriculum Maps: Tools that can be used at any stage in the curriculum cycle, whether developing, reviewing, or revising curriculum. They provide a graphical description or synopsis of curriculum components that can be used to encourage dialogue and to help faculty ensure that learning experiences are aligned and lead to the achievement of program learning outcomes.
Direct Measures of Learning: Students display knowledge and skills as they respond directly to the instrument itself. Examples include objective tests, essays, presentations, and classroom assignments.
External Assessment: Use of criteria (e.g., a rubric) or an instrument developed by an individual or organization external to the one being assessed. This kind of assessment is usually summative, quantitative, and often high-stakes, such as the SAT or GRE exams.
Formative Evaluation: Improvement-oriented assessment. The use of a broad range of instruments and procedures during a course of instruction or during a period of organizational operations in order to facilitate mid-course adjustments.
Goals: Statements of intended results expressed in general terms. Goals describe broad learning concepts and, on their own, cannot be directly measured.
Indirect Measures of Learning: Students are asked to reflect on their learning rather than to demonstrate it. Examples include: exit surveys, student interviews, and alumni surveys.
Institutional Effectiveness: The measure of what an institution actually achieves.
Institution-Level Learning Assessment: Assessment aimed at understanding and improving student learning across the institution as a whole.
Learning Outcomes: Observable behaviors or actions on the part of students that demonstrate that the intended learning has occurred. Learning outcomes are defined at both the program and course levels.
Measurements: Strategies, techniques, and instruments for collecting evidence of the extent to which students demonstrate the desired behaviors or competencies.
Methods of Assessment: Techniques or instruments used in assessment.
Mission Statement: A statement that explains why an organization exists and what it hopes to achieve in the future. It articulates the organization’s essential nature, its values, and its work.
Modifications/Improvement Plans: Recommended actions or changes for improving student learning, service delivery, etc., made in response to assessment findings.
Performance Assessment: The process of using student activities or products, as opposed to tests or surveys, to evaluate students' knowledge, skills, and development. Methods include: essays, oral presentations, exhibitions, performances, and demonstrations. Examples include: reflective journals (daily/weekly); capstone experiences; demonstrations of student work (e.g. acting in a theatrical production, playing an instrument, observing a student teaching a lesson); products of student work (e.g. Art students produce paintings/drawings, Journalism students write newspaper articles, Geography students create maps, Computer Science students generate computer programs, etc.).
Portfolio: An accumulation of evidence about individual proficiencies, especially in relation to learning standards. Examples include, but are not limited to, samples of student work such as projects, journals, exams, papers, presentations, and videos of speeches and performances.
Qualitative Methods of Assessment: Methods that rely on descriptions rather than numbers, very often including a grading rubric. Examples of opportunities to use qualitative assessment are ethnographic field studies, logs, journals, participant observation, and open-ended questions on interviews and surveys.
Quantitative Methods of Assessment: Methods that rely on numerical scores or ratings. Examples of opportunities to use quantitative assessment are surveys, inventories, institutional/departmental data, and departmental/course-level exams (locally constructed, standardized, etc.).
Reflective Essays: Generally brief (five- to ten-minute) essays on topics related to identified learning outcomes, although they may be longer when assigned as homework. Students are asked to reflect on a selected issue. Content analysis or a rubric is used to analyze results.
Reliability: The ability of an instrument or other assessment method to produce consistent responses over time.
Rubrics: Assessment instruments that indicate the qualities by which levels of performance can be differentiated and anchor judgments about the degree of achievement.
Student Learning Assessment: The act of assembling, analyzing and using both quantitative and qualitative evidence of teaching and learning outcomes in order to examine their alignment with stated purposes and educational objectives and to provide meaningful feedback that will stimulate self-renewal.
Summative Evaluation: Accountability-oriented assessment. The use of data assembled at the end of a particular sequence of activities to provide a macro view of teaching, learning, and institutional effectiveness.
Validity: As applied to a test, a judgment concerning how well the test in fact measures what it was designed to measure.
References:
Adapted from the Assessment Glossary compiled by American Public University System (2005): http://www.apus.edu/community-scholars/learning-outcomes-assessment/university-assessment/glossary.htm
Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus Publishing.
Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass Publishers.
Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco: Jossey-Bass Publishers.