Assessment Glossary

Aggregated Data: Describes data combined from several measurements.
Alignment: A logical connection between the curriculum and the expected outcomes. Example: Curriculum Mapping.
Alternative Assessment: An assessment that requires students to generate a response to a question rather than choose from a set of provided answers.
Anonymity: Data elements cannot be associated with individual respondents.
Artifact: Evidence of student learning (a paper, project, or test) that demonstrates students' abilities and is collected for the purpose of assessing student learning outcomes.
Assessment Cycle: A seven-step cycle used for continuous improvement; it treats assessment as an ongoing, cyclical process rather than one practiced only periodically.
Assessment: The systematic process of collecting, analyzing, sharing, and using information to improve student learning and development and the quality and efficiency of programs and services provided to students.
Assessment Measure: Describes, in general terms, how the information/data will be collected; may involve direct or indirect measurement and qualitative and/or quantitative measurement.
Authentic Assessment: Determining the level of student knowledge or skill in a particular area by evaluating the student's ability to perform a "real world" task the way professionals in the field would perform it. Example: Students create a business plan and are assessed on the characteristics of an effective business plan.
Benchmark: Performance data used for comparative purposes. A program can use its own data as a baseline benchmark against which to compare future performance, or it can use data from another program or institution.
Bloom’s Taxonomy: The original model developed by Benjamin Bloom, establishing six levels of cognitive learning: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation.
Classroom Assessment Techniques (CATs): Typically ungraded activities conducted in the classroom setting. Their purpose is to give the instructor feedback on whether students understand course material so that adjustments can be made before the end of the term.
Closing the Loop: Using assessment results for improvement and/or evolution; in essence, following the assessment cycle through to action.
Confidentiality: The person conducting the assessment study knows who participated but does not disclose that information.
Criterion-Referenced Assessment: Assessment instruments intended to measure student performance against a set of predetermined criteria or learning standards.
Culture of Assessment: An environment in which continuous improvement through assessment is expected and valued.
Curriculum Map: A chart showing where and how program outcomes are addressed in the curriculum, used to ensure completeness and avoid excessive overlap.
Descriptive Statistics: Measures of central tendency (mean, median, mode) and dispersion (spread or variation from the central tendency); the aim is to summarize data.
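
As a minimal illustration (the scores below are hypothetical), these summary measures can be computed with Python's standard library:

```python
# Descriptive statistics for a small set of hypothetical exam scores.
import statistics

scores = [72, 85, 85, 90, 68, 77, 85, 93]

print("mean:  ", statistics.mean(scores))    # central tendency: average (81.875)
print("median:", statistics.median(scores))  # central tendency: middle value (85)
print("mode:  ", statistics.mode(scores))    # central tendency: most frequent value (85)
print("stdev: ", statistics.stdev(scores))   # dispersion: spread around the mean
```
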
Direct Measures: Student products or performances that demonstrate that specific learning is taking place. Examples: exams, course work, essays, oral performances (subjective items would need to be assessed with a rubric).
Disaggregated Data: Describes data that have been separated based on certain characteristics.
Embedded Assessment: A means of gathering information about student learning that is built into and a natural part of the teaching-learning process.
Formative Assessment: Assessment conducted during the program to provide feedback and used to shape, modify, or improve the course, event, or program.
Goals: Very broad statements that describe the skills, attitudes, and knowledge students should attain as a result of the program and/or what the program will accomplish.
Indirect Measures: Measures that ask students to reflect on their learning rather than demonstrate it (e.g., student perceptions of learning). Examples: surveys, interviews, focus groups.
Inferential Statistics: Statistics that examine the relationships between variables within a sample and then make generalizations or predictions about how those variables will relate in a larger population (e.g., t-test, ANOVA, chi-square test).
Institutional Effectiveness: The extent to which an institution achieves its mission and goals.
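
As a sketch of one test named in the Inferential Statistics entry above, the following compares hypothetical exam scores from two course sections with an independent two-sample t-test using SciPy:

```python
# Independent two-sample t-test: do the two sections differ in mean score
# by more than chance would suggest? Scores are hypothetical.
from scipy import stats

section_a = [78, 85, 90, 72, 88, 95, 81]
section_b = [70, 75, 82, 68, 77, 74, 80]

t_stat, p_value = stats.ttest_ind(section_a, section_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a small p suggests a real difference
```
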
Instrument: A tool, such as a survey or a rubric, used to systematically assign a value to a variable (also called a measure, tool, or method).
Key Performance Indicator/Target: A specific measurement of an intended outcome.
Latitudinal Study: Compares different population groups at a single point in time.
Learning Outcomes: Specific statements of what students will be able to think, know, do, or feel as a result of a given educational experience. They define the goal more specifically, including how it will be measured. A learning outcome consists of four components: Audience, Behavior, Condition, and Degree.
Longitudinal Study: Data collected on the same individuals over time.
Mixed Methods: Procedures for utilizing both quantitative and qualitative assessment methods.
Mission Statement: The main purpose that identifies the path and target for a program's endeavors. It describes whom the program serves, its main functions or activities, and its primary intention.
Needs Assessment: The process of determining the things that are necessary or useful for a particular purpose.
Norm-Referenced Assessment: The process of evaluating and ranking students' knowledge, skills, and behaviors relative to those of their peers.
Outcomes: Specifically what you want the end result of your efforts to be: the changes you want to occur. Outcomes are how you measure the goals.
Population: All members of a defined group that we are studying or collecting information on for data-driven decisions (e.g., the whole student population).
Program Outcomes: Outcomes focused on what the program should accomplish each year with students in terms of program quality, efficiency, and productivity (e.g., retention and graduation rates), but not in terms of actual student learning. A program outcome consists of five components: Specific, Measurable, Achievable, Relevant, and Time-Sensitive.
Qualitative Method: Methods that rely on and evaluate descriptions rather than numeric data. Examples of qualitative data include responses to open-ended survey or interview questions; evaluations of writing samples, portfolios, or formal recitals; participant observations; and ethnographic studies.

Quantitative Method: Methods that assess outcomes by collecting numeric data and analyzing the data using statistical techniques. Examples of quantitative data include GPA, grades, and exam scores; forced-choice survey responses; demographic information; and standardized teaching evaluations.

Rater Calibration: Conducted to help ensure assessment rubrics are used consistently and similarly by various raters completing them.
Rating Scale: A scale based on descriptive words or phrases that indicate performance levels.
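
One simple calibration check (an illustrative choice, not the only method) is the exact-agreement rate between two raters scoring the same artifacts; the rubric scores below are hypothetical:

```python
# Exact-agreement rate between two raters using the same 4-point rubric
# on the same eight artifacts. Low agreement signals a need to recalibrate.
rater_1 = [3, 4, 2, 4, 1, 3, 2, 4]
rater_2 = [3, 4, 2, 3, 1, 3, 2, 4]

matches = sum(a == b for a, b in zip(rater_1, rater_2))
agreement = matches / len(rater_1)
print(f"Exact agreement: {agreement:.0%}")  # 88% here
```
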
Reliability: The degree to which an assessment tool produces stable and consistent results.
Research: Any effort to gather evidence which guides theory by testing hypotheses.
Response Rate: The proportion of people who complete an assessment relative to the total number invited (sample or population).
Rubric: A set of criteria or a scale developed to evaluate students' knowledge, skills, and/or behaviors.
Sampling: A way to obtain information about a large group by examining a smaller, randomly chosen selection (the sample) of group members.
Saturation: The point in qualitative research at which a researcher determines that more data will not provide any new information on the topic under study.
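
Tying the Sampling and Response Rate entries together, the following sketch draws a random sample from a hypothetical population and computes the response rate for an invited survey; all figures are invented:

```python
# Draw a random sample of invitees, then compute the response rate.
import random

population = [f"student_{i}" for i in range(1, 501)]  # hypothetical population of 500
sample = random.sample(population, 100)               # random sample of 100 invitees

completed = 43                                        # suppose 43 finish the survey
response_rate = completed / len(sample)
print(f"Response rate: {response_rate:.0%}")          # 43%
```
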
Standards: A level of accomplishment all students are expected to meet or exceed.
Statistics: A branch of mathematics dealing with the collection, analysis, interpretation, and presentation of masses of numerical data.
Student Engagement: Represents two critical features of collegiate quality: first, the amount of time and effort students put into their studies and other educationally purposeful activities; second, how the institution deploys its resources and organizes the curriculum and other learning opportunities to get students to participate in activities that decades of research show are linked to student learning. Measured by the National Survey of Student Engagement (NSSE).
Summative Assessment: Assessment conducted after the program or class that provides the opportunity to judge quality or worth, compare results to standards, and incorporate findings into future plans.
Triangulation: The collection of data via multiple methods in order to determine whether the results show a consistent outcome.
Validity: Pertains to whether we are measuring the correct variable rather than accidentally measuring something else, and whether we are reaching appropriate conclusions based on our findings.
Value Added: Determining the impact or increase in learning that participation in higher education had on students during their programs of study. The focus can be on the individual student or a cohort of students.