Title Page





Our Audience and Our Goal for This Book

Our Experience and Backgrounds


About the Authors

Chapter 1 The Need for Quality Benchmarks in Undergraduate Programs

An Example: Using Benchmarks for Program Advocacy

Benchmarking and Program Assessment for Educational Renewal

(Recent) Past as Prologue to the Emerging Focus on Undergraduate Education

Looking Ahead: Using Quality Benchmarks for Your Undergraduate Program Needs

How to Use This Book

Guiding Questions

Part One: Benchmarking for Eight Key Program Domains

Chapter 2 The View from the Top: Checking the Climate and the Leadership of a Program

Pride Reflected in Environment

Campus Reputation


Respect for Individual and Cultural Differences

Equitable Problem Solving

Program Leadership

Relationship with University Community

Program Involvement in Local Community


Guiding Questions

Chapter 3 First Things First: Attending to Assessment Issues, Accountability, and Accreditation with George B. Ellenberg, Claudia J. Stanny, and Eman El-Sheikh

Understanding Faculty Responses to Assessment

Why Foster Strong Assessment and Accountability Practices?

Evolving a Program-Based Assessment Culture

A Rubric for Evaluating the Quality of a Program's Assessment Practices


Guiding Questions

Chapter 4 The New Architecture of Learning Design: Focusing on Student Learning Outcomes

Brief Background on Assessing Student Learning Outcomes

General Principles Related to Student Learning Outcomes

Communication Skills


Research and Information Literacy

Discipline-Specific Learning Outcomes

Using Student Learning Outcomes

Student Learning Outcomes as a Defining Feature of a Distinguished Program

Guiding Questions

Chapter 5 Evaluating Curricula

Avoiding Common Pitfalls

The Benchmarks Framework and Curriculum Considerations

Content and Skills Core

Disciplinary Standards and Guidelines

Curricular Structure and Sequence

Flexibility and Course Variety

Disciplinary Perspectives


Cultural Diversity

International Perspectives


Faculty Engagement

Review Process

Tailoring the Framework

Guiding Questions

Chapter 6 Student Development: Solving the Great Puzzle

Student Development Theories

Emerging Practices in Student Development

How Departments Contribute Pieces to the Great Puzzle


Guiding Questions

Chapter 7 Constructively Evaluating Faculty Characteristics

The Rationale for Faculty Evaluation

The Faculty Review Process

The Evaluation Letter

Committee Review and Decision

Beyond Tenure

Evaluating Faculty Contributions Using Quality Benchmarks

A Brief Note on Program Flexibility and Work/Life Balance

Promoting a Program's Future: Closing Words on Faculty Development

Guiding Questions

Chapter 8 Back to Basics: Program Resources

Physical Facilities

Equipment, Supplies, Materials


Administrative Support

Resource Use

Resource Planning

External Resource Acquisition


Guiding Questions

Chapter 9 The Art, Science, and Craft of Administrative Support

The Role of the Mission

Encoding Expectations in Bylaws and Procedures

Providing Equitable Evaluation Systems

Example of Evaluation Criteria: Performance Indicators for Teaching

Teaching Performance Indicators

Determining Equitable Teaching Assignments

Scholarship Support

Recognition of Achievements

Forging Distinguished Administrative Practices


Guiding Questions

Part Two: Benchmarking in Practice

Chapter 10 Benchmarking Quality in Challenging Contexts: The Arts, Humanities, and Interdisciplinary Programs

Curriculum Development

Assessment Planning and Student Learning Outcomes

Program Resources and Administrative Support

Faculty Characteristics and Program Climate

Student Development


Chapter 11 Special Issues in Benchmarking in the Natural Sciences

Curriculum Development

Student Learning Outcomes and Assessment

Program Resources and Administrative Support

Faculty Characteristics and Program Climate

Student Development


Chapter 12 Conducting a Self-Study

Why Do a Program Review?

Where to Begin?

Writing a Self-Study

A Practical Matter: Timing and Time Frame Issues

Special Issues for Small Departments and Community College Programs

Selecting a Reviewer

Implications of Quality Benchmarks for Self-Study

Hosting an External Review

Last Words on Program Reviews

Guiding Questions

Chapter 13 Serving Our Students and Our Institutions

Some Caveats Regarding Benchmarking

Final Thoughts

Appendix A Using Benchmarking to Serve as an External Reviewer

Serving as an External Reviewer

Listening and Understanding a Particular Campus Culture

Meeting with the Chief Academic Officer

Using Performance Benchmarks as a Visitor

Writing a Program Review

The Matter of Compensation

Guiding Questions

Appendix B Sources of Data

Appendix C Assessment Materials for the Arts

Item 1: Musical Theatre Voice Performance Rubric

Item 2: Holistic Writing Rubric

Item 3: Interdisciplinary Social Sciences Assessment Plan

Item 4: Statement on Scholarly and Creative Activity from Music

Appendix D Disciplinary Accrediting Organizations for Bachelor's Degree Programs Currently Recognized by the Council for Higher Education Accreditation



Title Page

The Jossey-Bass Higher and Adult Education Series

For Virginia Andreoli Mathie and Rob McEntarffer, old friends who helped us begin our work


Peggy Maki

Surrounded by and bombarded with demands to demonstrate accountability, institutional effectiveness, fulfillment of quality indicators, and student achievement levels, colleges and universities rhythmically churn out form after form, day after day. Each day brings the possibility of a new form or a proposed new way of demonstrating accountability, as external or internal decision-making bodies mull over what they “really think” they want. Never before in the history of higher education have institutions been under so many microscopes at one time. Yet with all the documents and forms that institutions fill out, which ones really get at what matters?

Along come Dunn, McCarthy, Baker, and Halonen—four seasoned psychology educators whose combined professional experiences as faculty, administrators, program reviewers, and contributors to national education organizations have positioned them to cut to the chase to identify a model for undergraduate program review. These four authors identify eight domains that define quality undergraduate academic programs and departments in the arts, humanities, social sciences, and natural sciences: program climate; assessment, accountability, and accreditation; student learning outcomes; student development; curriculum; faculty characteristics; program resources; and administrative support.

These domains immediately resonate with faculty and administrators; that is, they reflect the contemporary professional context within which faculty and administrators work and the issues that define their lives, such as the current focus on assessing student learning.

Laying out a rationale for a new model that assesses the quality of undergraduate programs and departments—our students’ academic homes—the authors identify (1) benchmarks for programs and departments to assess themselves formatively, represented in performance scales; (2) guidelines for how to apply these scales; and (3) representative disciplinary case studies that demonstrate how to apply performance scales. By design and example, then, readers immediately understand the usefulness and relevance of the authors’ proposed framework, not only for periodic program review but also for formative review as a chronological way to identify patterns of strength and weakness in departments or programs. That is, programs and departments can routinely examine themselves across the eight domains, using the benchmarks to guide behaviors and decision making and to judge how well those behaviors, decisions, and practices are improving the quality of a program or department.

An irresistibly authentic model for periodic program review as well as ongoing review aimed at taking stock of how well a program or department is working, Using Quality Benchmarks for Assessing and Developing Undergraduate Programs represents a refreshing break from the world that educators typically inhabit in traditional program review. Provosts, deans, chairs, and faculty will find that this new model for program review grounds educators in what matters. The domains and benchmarking scales provide programs and departments with essential information and data that are unquestionably relevant to educational practices and the decisions that surround those practices.


Higher education is slow to change, but change it must. Social, economic, and accountability pressures are challenging college faculty members to rethink how they teach their courses; advise students; contribute to the intellectual lives of their departments or programs, as well as wider institutions; and advance the interests and knowledge of their disciplines. Academic leaders—department or area chairs, deans, provosts, even presidents—want the programs at their institutions to succeed by attracting, retaining, and educating students who will become supportive alumni and alumnae whose futures reflect well on their alma mater. And, of course, what about the students themselves—the key constituents of any institution—as well as their families? After decades in which energy has been largely focused elsewhere, undergraduate education is recognized as a core issue of concern. Teaching matters. How well students learn is as important as what they learn. The assessment movement is no longer a nascent concern or one that is going to disappear anytime soon.

With the energy and push of all these educational trends, we wrote this book to argue for quality benchmarks—selected performance criteria—for delivering a meaningful educational experience to undergraduate students in the arts, humanities, social sciences, and natural sciences. Such benchmarks provide a set of standard reference points for internal self-studies and external reviews. Benchmarks have been used in higher education as a way to improve the climate for learning within departments. Their use is not limited to assessing student learning, however; our goal is a more holistic one. We are interested in helping department faculty and their administrators think seriously about evaluating program quality broadly. By broadly, we mean examining student learning, creating meaningful faculty scholarship, promoting and rewarding quality teaching, and connecting vibrantly to the community where a college or university resides.

Our framework explores the attributes of undergraduate programs by focusing on educationally related activities in eight domains: program climate; assessment, accountability, and accreditation issues; student learning outcomes; student development; curriculum; faculty characteristics; program resources; and administrative support. We conceptualize a continuum of performance for each attribute in each of the domains to characterize underdeveloped, developing, effective, and distinguished achievement for undergraduate programs. Our goal is to encourage individual departments at various types of institutions to evaluate what they currently do formatively while identifying areas for refinement or future growth. We believe that our recommended benchmarks can improve program quality, encourage more effective program reviews, and help optimally functioning programs compete more successfully for resources as well as protect their resources in economically challenging times based on their distinguished achievements.

Our experiences as program reviewers, faculty members, and administrators inform us that departments, department heads, and deans all want reasonable, valid, and professionally respectable methods for evaluating the performance of undergraduate programs. We believe our developmental framework will satisfy all parties because we emphasize formative assessment (meaning that evaluation should be used to identify areas of existing quality and those for future growth—not for summative or punitive reasons, such as resource cuts, reduction in faculty lines, or the like). Furthermore, we designed our framework to help programs identify and tout what they already do well even in situations of seriously constrained resources. Finally, using performance benchmarks to identify areas of program strength can, in turn, be used to recruit and retain students, to seek funding via grants or alumni support, and to enhance the perceived rating of an institution. When benchmarks reveal that a program or areas within it are underdeveloped or developing, faculty and administrators can then plan for where they can best place subsequent efforts and resources to improve a program's performance and ability to serve students.

Our Audience and Our Goal for This Book

The primary audience for this book is department chairs and program heads (especially new ones) representing disciplines in the arts, natural sciences, humanities, and social sciences, as well as faculty members and administrators (chiefly deans and provosts) who want a convenient collection of formative tools for examining the quality of their programs and the educational experiences of undergraduates in those programs. This book can help leaders in college and university communities evaluate a current undergraduate program by identifying areas of existing strength (including distinguished qualities) and areas for growth and improvement. Thus this book is ideal for internal program reviews. Our work provides a vehicle for discussion within programs about where their strengths lie and which areas they would like to highlight for future growth and development. At the same time, the work will help external evaluators, including external review committees, effectively assess departments and programs during site visits.

We wrote this book's chapters with flexibility in mind. Although all institutions of higher education have some common features, each also has unique or local qualities. We crafted each chapter so that readers can shape general principles to fit their own circumstances, local conditions, and institutional folkways. We anticipate that some readers will be interested in our book so that they can do a targeted or focused review (e.g., a critical examination of student learning outcomes or the curriculum). Thus, one or two chapters will draw their attention. Other readers will want to evaluate an entire program, which means they will be drawing on the content of all the chapters in the book.

Each chapter concludes with a set of “guiding questions” designed to help readers think about the current strengths and challenges faced by an existing program. Whether a review is internal (a self-study) or external (routine scheduled review), these questions are written to encourage reflective and constructive discussion at the program or department level.

Our Experience and Backgrounds

We four have extensive experience serving as external reviewers for psychology programs. Halonen is currently a dean of arts and sciences; McCarthy has served in the Education Directorate of the American Psychological Association; Dunn is the director of his college's general education curriculum; Dunn and Halonen have served as department chairs on repeated occasions; and Baker is an assistant head of a large department. All of us have been active in the assessment movement in our discipline; this is not the first book we have produced related to the topic (see Dunn, Mehrotra, & Halonen, 2004). Two of us (Baker and Halonen) have been active participants in Project Kaleidoscope, a national organization that explores curriculum reform in the sciences. Finally, all four of us are active members of the Society for the Teaching of Psychology (STP), the national organization devoted to advancing the teaching and learning of the discipline's knowledge base, and serve as regular speakers at national, international, and regional conferences. Three of us (Dunn, Halonen, and McCarthy) have served as president of STP.

We look forward to hearing reactions from users of our book—college and university leaders at all levels who care deeply about their institutions and higher education more generally. We believe that academic program review can be an exciting opportunity for all parties concerned. We look forward to learning about your experiences using quality benchmarks to improve the teaching and learning of your students.


This book represents the completion of a long-harbored desire to work with Jossey-Bass, a publisher without peer in the study of trends in higher education. We were delighted to meet and work with our editor, David Brightman, on this project. We thank our good friend and editor, Chris Cardone, for introducing us to David. We owe a special debt of gratitude to our frequent coauthor and close friend, Bill Hill, who was unable to join us on this project. We believe he is still a contributor to the final result, as many of our ideas developed through our discussions with him on matters of teaching, learning, assessment, and administrative issues. Finally, we thank Virginia Andreoli Mathie and Rob McEntarffer. Ten years ago, Ginnie's leadership on the Psychology Partnerships Project (P3) introduced us to one another; we cannot imagine what work we would each be doing if our own partnership had not resulted from that fateful week in June 1999. Happily, Rob accepted the leadership of the “Assessment All-Stars,” and he helped us forge a bond that has been our scholarly wellspring for over a decade.

We are also grateful to the American Psychological Association (APA), which published our first ideas on quality benchmarks, and to the Society for the Teaching of Psychology (STP; APA Division 2), which has been a willing crucible for much of what we discuss in this book. Deans Gordon Weil and Carol Traupman-Carr of Moravian College provided helpful comments on an early draft of Chapter 7.

We invited guest scholars to assist us with specific chapters. We thank Claudia Stanny, George Ellenberg, Eman El-Sheikh, and Greg Lanier from the University of West Florida for their contributions to this work. They particularly helped us move beyond our home discipline of psychology to see the implications of our work for other fields of study.

Dana is grateful to his wife, Sarah, and to his children, Jake and Hannah, who are unfailingly supportive of his writing and research efforts. He is also grateful to Moravian College, as portions of this work were written during Dana's sabbatical leave in spring 2009. He feels lucky to work with his talented teacher-scholar coauthors, Suzie, Maureen, and Jane.

Maureen is grateful to Brenda Karns and her family for their support of her writing efforts. She is particularly thankful to her parents, Dennis and MaryAnn McCarthy, for providing her with the foundation that made this book possible. She would also like to thank the American Psychological Association for providing the support for the genesis of this project.

Suzie would like to thank Marshall Graham for his unwavering encouragement, patience, and good humor as she spends hours staring at a computer screen working on various projects. She would also like to thank her wonderful colleagues in the Department of Psychology at James Madison University. When she thinks about the characteristics of a distinguished program, she thinks of them.

Jane feels blessed to have had a teaching career that has involved so many assessment-friendly colleagues at University of West Florida, James Madison University, and Alverno College, and a devoted husband, Brian, who, with predictable good nature, tolerates partner deprivation so she can get her projects across the finish line.

We hope readers find this book to be helpful as they think about ways to assess, evaluate, and subsequently improve undergraduate education in their departments and programs. We welcome comments and suggestions for future editions of this work.

Dunn, D. S., Mehrotra, C., & Halonen, J. S. (Eds.). (2004). Measuring up: Educational assessment challenges and practices for psychology. Washington, DC: American Psychological Association.

About the Authors

Dana S. Dunn is currently professor of psychology and director of the Learning in Common Curriculum at Moravian College in Bethlehem, Pennsylvania. He has chaired Moravian's psychology department as well as served as acting chair of its philosophy department. Dunn is the author or editor of eleven books and over one hundred articles, chapters, and book reviews. He frequently speaks on assessment matters, issues facing higher education, and psychological topics at professional conferences.

Maureen A. McCarthy is professor of psychology at Kennesaw State University. She formerly served as the Associate Executive Director of the Office of Precollege and Undergraduate Programs for the American Psychological Association. While serving at the American Psychological Association, McCarthy initiated efforts to identify profiles of undergraduate psychology programs, and she served as the APA liaison for the American Council on Education project to Internationalize the Disciplines. McCarthy is the coauthor of numerous articles and regularly speaks on the topics of assessment, pedagogy, and ethics in undergraduate psychology.

Suzanne C. Baker is professor of psychology at James Madison University in Harrisonburg, Virginia, where she also currently serves as assistant department head in psychology. Baker is the author of numerous articles and book chapters on topics related to teaching and curriculum. She frequently speaks at conferences on topics such as curriculum development in psychology, engaging undergraduate students in research, and the use of technology in teaching. She teaches a wide range of courses, including introductory psychology, animal behavior, and cyberpsychology.

Jane S. Halonen is professor of psychology and the dean of the College of Arts and Sciences at the University of West Florida in Pensacola, Florida. She formerly served as the director of the School of Psychology at James Madison University and the chair of the behavioral sciences division and coordinator of psychology at Alverno College. Halonen has authored and collaborated on textbooks as well as articles and books on faculty development and curriculum. Halonen's work has been instrumental to benchmarking efforts in various curriculum initiatives of the American Psychological Association, which also honored her with the American Psychological Foundation Distinguished Teaching Award in 2000.