2015 Presenter Bios and Workshop Descriptions 

Tarek Azzam, Dale Berger, Tiffany Berry, Katrina Bledsoe, Kendall Bronk, Wanda Casillas,
Tessie Catsambas, Thomas Chapel, Huey Chen, Tina Christie, Andy Conway, William Crano,
Stewart Donaldson, Rebecca Eddy, Leslie Fierro, John Gargani, Michael Q. Patton, Meghana Rao,
Becky Reichard, Maritza Salazar, Michael Scriven, Norbert Semmer, Jason Siegel, Scott Thomas

Workshop Descriptions

Thursday, August 20

 
 

Basics of Evaluation and Applied Research Methods

Stewart I. Donaldson & Tina Christie

This workshop will provide participants with an overview of the core concepts in evaluation and applied research methods. Key topics will include the various uses, purposes, and benefits of conducting evaluations and applied research, basics of validity and design sensitivity, strengths and weaknesses of a variety of common applied research methods, and the basics of program, policy, and personnel evaluation. In addition, participants will be introduced to a range of popular evaluation approaches including the transdisciplinary approach, program theory-driven evaluation science, experimental and quasi-experimental evaluations, empowerment evaluation, fourth generation evaluation, inclusive evaluation, utilization-focused evaluation, and realist evaluation. This workshop is intended to provide participants with a solid introduction, overview, or refresher on the latest developments in evaluation and applied research, and to prepare participants for intermediate and advanced level workshops in the series.

Recommended background readings include:

Copies are available from Amazon.com and from the Claremont Evaluation Center for $20 each. Checks should be made out to Claremont Graduate University and addressed to: John LaVelle, Claremont Graduate University/SSSPE, 150 E. 10th Street, Claremont, CA 91711.


Questions regarding this workshop may be addressed to Stewart.Donaldson@cgu.edu.

 

Assessment of Cognitive Abilities: The Science, Philosophy, and Art of Latent Variable Models

Andy Conway

A latent variable model is a statistical model that relates observed (manifest) variables to a set of unobserved (latent) variables. A common example is confirmatory factor analysis. In psychology, the standard approach is to assume that latent variables, or factors, are reflective. That is, we assume that there is something out there, represented by the factor, and the manifest variables are indicators of this something. For example, in the study of cognitive abilities, it is often assumed that a general factor, g, causes the outcomes on manifest variables. An alternative approach is to assume that the latent variables are formative. In formative models the chain of causation is reversed: the latent variable emerges from the manifest variables, not the other way around. For example, in the case of cognitive abilities, g is the result, rather than the cause, of the correlations among manifest variables. Similar formative latent variables are socioeconomic status (SES) and general health, each of which taps common variance among measures but does not explain it. I will discuss how latent variable models are specified and evaluated and then discuss the implications for interpretation when factors are assumed to be formative rather than reflective.
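To make the contrast concrete, the two specifications can be sketched for manifest variables x_1, ..., x_p as follows (a generic illustration, not material from the workshop):

  Reflective:  x_j = λ_j·η + ε_j                    (the latent factor η is a common cause of every indicator x_j)
  Formative:   η = γ_1·x_1 + ... + γ_p·x_p + ζ      (the indicators jointly compose the latent variable η)

In the reflective case the indicators are expected to be intercorrelated because they share the common cause η; in the formative case η simply summarizes what the indicators contribute, so their intercorrelations carry no special causal meaning.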

   

Survey Research Methods

Jason Siegel


The focus of this hands-on workshop is to teach attendees how to create reliable and valid surveys for use in applied research. A bad survey is very easy to create. Creating an effective survey requires a complete understanding of the impact that item wording, question ordering, and survey design can have on a research effort. Only through adequate training can a good survey be distinguished from a bad one. The day-long workshop will focus specifically on these three aspects of survey creation. The day will begin with a discussion of Dillman's (2007) principles of question writing. After a brief lecture, attendees will be asked to use their newly gained knowledge to critique the item writing of selected national surveys. Next, attendees will work in groups to create survey items of their own. Using Sudman, Bradburn, and Schwarz's (1996) cognitive approach, attendees will then be informed of the various ways question order can bias results. As practice, attendees will work in groups to critique the item ordering of selected national surveys. Next, attendees will propose an ordering scheme for the questions created during the previous exercise. Lastly, drawing on several sources, the presenter will outline the keys to optimal survey design. As practice, the design of national surveys will be critiqued. Attendees will then work with the survey items created, and properly ordered, in class and propose a survey design.

Questions regarding this workshop may be addressed to Jason.Siegel@cgu.edu.

 

 

Exemplar Method

Kendall Bronk

A comprehensive understanding of human development requires investigations of deficient, typical, and exemplary forms of constructs of interest. However, to date, most studies examine only poor functioning and average states. As a result, we have a variety of effective methods for assessing deficient and typical forms of development, but very few tools for assessing exceptional states. The exemplar method fills this gap. The methodology, which has been used to study creative geniuses, brave heroes, and moral exemplars, requires that researchers intentionally select samples of individuals who exemplify the construct of interest. These individuals are then studied to understand not what is common, but what is possible with regard to development in a particular realm. This workshop will provide an overview of the exemplar methodology, discuss its application to both qualitative and quantitative studies, and offer guidelines for applying it.

 

 

Occupational Health Psychology: Concepts and Findings on Stress and Resources at Work

Norbert Semmer

Stress and resources at work are important for individual health as well as for organizational productivity. The workshop is intended to give an overview of stress and resources at work, how they relate to important outcomes, most notably health and well-being, and what challenges we face when doing research on these issues and when trying to foster health and well-being on an individual and on an organizational basis.
The workshop will have three parts: a) basic mechanisms, b) stressors and resources at work and their effects, and c) interventions.
a) Basic Mechanisms

  • What is stress, what are resources, and what basic mechanisms (e.g., appraisal, coping, action tendencies, physiological activation) are involved

b) Stressors and Resources at work

  • What factors in the (working) environment and in the person are likely to induce stress
  • What factors are likely to prevent or attenuate stress or its consequences (resources)
  • How can all these factors be measured, and what methodological problems have to be dealt with
  • What are major models of stress and resources at work
  • Research designs employed in occupational health psychology
  • Issues of time: How do stressors and resources affect well-being, health, and productivity in the short term (e.g., over days and weeks), and under what circumstances do they affect health, well-being, and productivity in the long run
  • What are the effects of new economic and technological developments (e.g., new communication technologies)

c) Interventions

  • What methods can be used to support people in dealing with stress in a better way, and what do we know about their effectiveness (e.g., relaxation, physical activity; cognitive-behavioral methods)
  • What approaches exist to help organizations prevent undue amounts of stress and to promote individual and organizational functioning, relating to job design, the work environment, social and organizational factors
  • What do we know about the effectiveness of these approaches, which factors support, or impede, success of these approaches, and what methodological problems are involved


The workshop is intended to combine lecturing with group discussions and exercises. Active participation and an active attempt to connect theory and research to everyday problems and experiences are expected.

Suggested readings:

Stress at work: Overview

Sonnentag, S., & Frese, M. (2013). Stress in organizations. In N. W. Schmitt & S. Highhouse (Eds.), Industrial and organizational psychology (pp. 560-592). Hoboken, NJ: Wiley.
Warr, P. (2005). Work, well-being, and mental health. In J. Barling, E. K. Kelloway, & M. R. Frone (Eds.), Handbook of work stress (pp. 547-573). Thousand Oaks, CA: Sage.

Basic psychological mechanisms

Semmer, N. K., McGrath, J. E., & Beehr, T. A. (2005). Conceptual issues in research on stress and health. In C. L. Cooper (Ed.), Handbook of stress and health (2nd ed., pp. 1-43). New York: CRC Press.

Individual differences

Semmer, N. K., & Meier, L. L. (2009). Individual differences, work stress, and health. In C. L. Cooper, J. Campbell Quick, & M. J. Schabracq (Eds.), International handbook of work and health psychology (3rd ed., pp. 99-121). Chichester, UK: Wiley.

Intervention

Murphy, L. R. (2003). Stress management at work: Secondary prevention of stress. In M. J. Schabracq, J. A. Winnubst, & C. L. Cooper (Eds.), Handbook of work and health psychology (2nd ed., pp. 533-548). Chichester, UK: Wiley.
Semmer, N. K. (2006). Job stress interventions and the organization of work. Scandinavian Journal of Work, Environment and Health, 32, 515-527.

     

Friday, August 21

 
 

Introduction to Qualitative Research Methods

Maritza Salazar


This workshop is designed to introduce you to different types of qualitative research methods, with a particular emphasis on how they can be used in applied research and evaluation. Although you will be introduced to several of the theoretical paradigms that underlie the specific methods that we will cover, the primary emphasis will be on how you can utilize different methods in applied research and consulting settings. We will explore the appropriate application of various techniques, and review the strengths and limitations associated with each. In addition, you will be given the opportunity to gain experience in the use of several different methods. Overall, the workshop is intended to provide you with the basic skills needed to choose an appropriate method for a given project, as well as primary considerations in conducting qualitative research. Topics covered will include field observation, content analysis, interviewing, document analysis, and focus groups.

Questions regarding this workshop may be addressed to Maritza.Salazar@cgu.edu.

 

   

Logic Models as a Practical Tool in Evaluation and Planning

Thomas Chapel

The logic model, as a map of what a program is and intends to do, is a useful tool in both evaluation and planning and, as importantly, for integrating evaluation plans and strategic plans.  In this session, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining and validating models. We will then examine how to use these models both prospectively for planning and implementation as well as retrospectively for performance measurement and evaluation.   We will illustrate the value of simple and more elaborate logic models using small group case studies. While the cases will be about public health and health programs, the teaching points are generalizable to most programs.

You will learn:
  • To construct simple logic models
  • To use program theory principles to improve a logic model
  • To employ a model to identify and address planning and implementation issues

Audience: This course is intended for advanced beginners.

Tom Chapel is the first Chief Evaluation Officer at the Centers for Disease Control and Prevention. He serves as a central resource on strategic planning and program evaluation for CDC programs and their partners. Before joining CDC in 2001, Tom was Vice President of the Atlanta office of Macro International (now ICF International), where he directed and managed projects in program evaluation, strategic planning, and evaluation design for public and nonprofit organizations. He is a frequent presenter at national meetings and a frequent contributor to edited volumes and monographs on evaluation, and he has facilitated or served on numerous expert panels on public health and evaluation topics. In 2013, he won AEA's Myrdal Award for Government Evaluation.

 

   

Multilevel Modeling

Scott Thomas

The goal of this workshop is to develop an understanding of the use, application, and interpretation of multilevel modeling in the context of educational, social, and behavioral research. The workshop is intended to acquaint students with several related techniques used in analyzing quantitative data with nested data structures. The workshop will employ the IBM SPSS statistical package. Emphasis in the workshop is on the mastery of concepts and principles, development of skills in the use and interpretation of software output, and development of critical analysis skills in interpreting research results using the techniques we cover.
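For participants who would like a concrete preview of what such an analysis looks like, here is a minimal sketch of a two-level random-intercept model. It is written in Python with statsmodels rather than in SPSS (which the workshop itself uses), and the file and variable names are hypothetical:

  import pandas as pd
  import statsmodels.formula.api as smf

  # Hypothetical data: one row per student, with an identifier for the school
  # each student is nested in (columns assumed: school_id, ses, achievement).
  df = pd.read_csv("students.csv")

  # Two-level random-intercept model: achievement predicted by student SES,
  # with intercepts allowed to vary across schools.
  model = smf.mixedlm("achievement ~ ses", data=df, groups=df["school_id"])
  result = model.fit()
  print(result.summary())

The fixed-effect estimate for ses describes the average within-school relationship, while the group variance component summarizes how much schools differ in their baseline level of the outcome.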

Questions regarding this workshop may be addressed to Scott.Thomas@cgu.edu.

 
 

Practical Evaluation: A New Approach

Michael Scriven

Our species has been doing evaluation for somewhere between 1 million and 3 million years. Every child in kindergarten and before does evaluation. Many of our clients don't want to hear about theories of evaluation, or schools, or models for it. In this workshop we are going to walk through the steps from amateur evaluation, which everybody does, to an evaluation discipline that raises it to the level of a science. We'll use the loopholes in Consumer Reports and the way you choose a job or a place to live as examples of amateur approaches that we can upgrade.

 

 

Introduction to Educational Evaluation

Tiffany Berry & Rebecca Eddy

 

This workshop is designed to provide participants an overview of the key concepts, issues, and current trends in contemporary educational program evaluation. Educational evaluation is a broad and diverse field, covering multiple topics such as curriculum evaluation, K-12 teaching/learning, institutional research and assessment in higher education, teacher education, Science, Technology, Engineering, and Mathematics (STEM), out of school time (OST), and early childhood education. To operate within these varied fields, it is important for educational evaluators to possess an in-depth understanding of the educational environment as well as implement appropriate evaluation methods, procedures, and practices within these fields. Using lecture, interactive activities, and discussion, we will provide an overview of key issues that are important for contemporary educational evaluators to know, such as (1) differentiating between assessment, evaluation and other related practices; (2) understanding common core standards and associated assessment systems; (3) emerging research on predictors of student achievement; and (4) development of logic models and identification of program activities, processes and outcomes. Case studies of recent educational evaluations with a focus on K-12 will be used to introduce and discuss these issues.

Questions regarding this workshop may be addressed to Tiffany.Berry@cgu.edu.

 

     

Saturday, August 22

 
 

Conducting Responsive Community-based Evaluations

Katrina Bledsoe

The beauty of the field of evaluation is in its potential responsiveness to the myriad contexts in which people (and programs, policies, and the like) exist. As the meaning and construction of the word community expands, the manner in which evaluation is conducted must parallel that expansion. Evaluations must be less about a community and more situated and focused within the community, thereby increasing their responsiveness to the uniqueness of the setting/system. To do this, however, requires an expanded denotative and connotative meaning of community. Moreover, it requires us to think innovatively about how we construct and conduct evaluations, and to consider broadly the kinds of data that will be credible to stakeholders and consumers. The goal of this workshop is to engage attendees in thinking innovatively about what evaluation looks like within a community, rather than simply about a community. We will engage in a process called "design thinking" (inspired by the design innovation consultants IDEO and Stanford's Design School) to help us consider how we might creatively design responsive and credible community-based evaluations. This interactive course includes some necessary foundation-laying, plenty of discussion, and of course, opportunities to think broadly about how to construct evaluations with the community as the focal point.

   

Enhancing Evaluation Capacity - A Systematic and Comprehensive Approach

Leslie Fierro

Evaluation capacity building (ECB) is a phenomenon that has gained attention in the discipline of evaluation over the past decade as the field has struggled to meet the high demand for evaluation services. Although many definitions of ECB currently exist, the most widely recognized is the following: "…the intentional work to continuously create and sustain overall organizational processes that make quality evaluation and its uses routine" (Stockdill et al., 2002, p. 14). In this workshop we will examine what it really means to build evaluation capacity in organizations and broader systems. The workshop will include an orientation to ECB (existing definitions, frameworks, approaches) as well as ample time for highly interactive sessions in which attendees will work together to consider how to build evaluation capacity within their own organizations. Attendees will walk away from this training with an understanding of how to build the knowledge, skills, and attitudes individuals need to do, use, and promote evaluation, as well as organizational strategies for creating an infrastructure that can support ongoing healthy evaluation practices (e.g., organizational evaluation policies).

   

Evaluating Program Viability, Effectiveness, and Transferability: An Integrated Perspective

Huey-Tsyh Chen


Traditionally, an evaluation approach argues for and addresses one high-priority issue (e.g., internal validity for Campbell, external validity for Cronbach). But what happens when stakeholders prefer an evaluation that addresses both internal and external validity, or, more comprehensively, viable, effectual, and transferable validity? This workshop is designed to introduce an integrated evaluation approach, developed from the theory-driven evaluation perspective, for addressing multiple or competing values of interest to stakeholders.

Participants will learn:

  • Contributions and limitations of the Campbellian validity typology (e.g., internal and external validity) in the context of program evaluation
  • An integrative validity model with three components as an alternative for better reflecting stakeholders’ view on evaluative evidence: viability, effectuality, and transferability
  • How to apply sequential approaches (top-down or bottom-up) for systematically addressing multiple types of validity in evaluation
  • How to apply concurrent approaches (maximizing or optimizing) for simultaneously addressing multiple types of validity in an evaluation
  • How to use the innovative framework for reconciling major controversies and debates in evaluation

Concrete evaluation examples will be used to illustrate ideas, issues, and applications throughout the workshop.

Questions regarding this workshop may be addressed to hueychen9@gmail.com.

 

   

Applied Multiple Regression: Mediation, Moderation, and More

Dale Berger


Multiple regression is a powerful and flexible tool that has wide applications in evaluation and applied research. Regression analyses are used to describe relationships, test theories, make predictions with data from experimental or observational studies, and model linear or nonlinear relationships. Issues we’ll explore include preparing data for analysis, selecting models that are appropriate to your data and research questions, running analyses, interpreting results, and presenting findings to a nontechnical audience. The facilitator will demonstrate applications from start to finish with live SPSS and Excel. Detailed handouts include explanations and examples that can be used at home to guide similar applications.
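As a rough preview of two of these analyses (the workshop itself demonstrates them live in SPSS and Excel), a moderation model and a simple indirect-effect calculation might be sketched in Python as follows, using hypothetical file and variable names:

  import pandas as pd
  import statsmodels.formula.api as smf

  df = pd.read_csv("study.csv")  # assumed columns: outcome, predictor, moderator, mediator

  # Moderation: the predictor x moderator interaction term tests whether the
  # predictor's slope depends on the level of the moderator.
  moderation = smf.ols("outcome ~ predictor * moderator", data=df).fit()
  print(moderation.summary())

  # A simple mediation sketch: path a (predictor -> mediator) and
  # path b (mediator -> outcome, controlling for the predictor).
  path_a = smf.ols("mediator ~ predictor", data=df).fit()
  path_b = smf.ols("outcome ~ mediator + predictor", data=df).fit()
  print("Estimated indirect effect (a*b):",
        path_a.params["predictor"] * path_b.params["mediator"])

In practice the indirect effect is usually reported with a bootstrap confidence interval rather than this point estimate alone.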

You will learn:

  • Concepts important for understanding regression
  • Procedures for conducting computer analysis, including SPSS code
  • How to conduct mediation and moderation analyses
  • How to interpret SPSS REGRESSION output
  • How to present regression findings in useful ways

Questions regarding this workshop may be addressed to Dale.Berger@cgu.edu

 

 

Learning from Success: Incorporating Appreciative Inquiry in Evaluation

Tessie Catsambas

In her blog post "Value-for-Money: Value-for-Whom?", Caroline Heider, Director General of the Independent Evaluation Group of the World Bank, pushes evaluators to make sure that "the questions we ask in our evaluations hone in on specifics that deepen the understanding of results and past experience," to ask what difference our recommendations will make once implemented, and to consider what added value they will create. Applying Appreciative Inquiry to evaluation provides a way to drive an evaluation by vision and intended use, builds trust to get more accurate answers to evaluation questions, and offers an avenue to increase inclusion and deepen understanding by incorporating the systematic study of successful experiences into the evaluation.
Appreciative evaluation is just as serious and systematic as problem analysis and problem solving, and it is probably more difficult for the evaluator because it requires continuous reframing of familiar problem-focused language.
In this one-day workshop, participants will be introduced to Appreciative Evaluation and will explore ways in which it may be applied in their own evaluation work. Participants will use appreciative interviews to focus an evaluation, to structure and conduct interviews, and to develop indicators. Participants will practice “reframing” and then reflect on the power of appreciative and generative questions. Through real-world case examples, practice case studies, exercises, discussion and short lectures, participants will learn how to incorporate AI into their evaluation contexts.

Workshop Agenda
  • Introduction: Theoretical Framework of Appreciative Inquiry (lecturette)
  • Logic and Theory of Appreciative Inquiry (lecturette)
  • Imagine phase: Visions (case study: small-group work)
  • Lunch
  • Reframing deficits into assets (skills building)
  • Good questions exercise (skills building)
  • Innovate: Provocative propositions/possibility statements, links to developing indicators (case study: small group work)
  • Applications of AI—tying things together (lecturette and discussion)
  • Questions and Answers
  • Evaluation
Presenter Bio

Tessie Tzavaras Catsambas, president of EnCompass LLC, is an evaluation and organizational development expert with more than 25 years' experience in evaluation, organizational development, and innovation. Catsambas has created and implemented an appreciative model for evaluating organizational and program performance, and is co-author of the first text on this topic, Reframing Evaluation Through Appreciative Inquiry (Sage Publications, June 2006). She served as AEA's representative on the board of the International Organization for Cooperation in Evaluation (board secretary, 2012-2013) and on the EvalPartners Executive Committee, and she continues to serve as co-chair of the task force promoting an Enabling Environment for Evaluation. She is deeply committed to promoting inclusion, equity, and gender equality. Catsambas holds an MPP from Harvard University.

     

Sunday, August 23

 
 

Cultural Responsiveness in Applied Research and Evaluation

Wanda Casillas


The dynamic cultural demographics of organizations, communities, and societies make it imperative to understand the importance of cultural sensitivity and cultural responsiveness in applied research and evaluation settings. Responding to culture is not easy; the researcher/evaluator must understand how culture underlies the entire research process from conceptualization to dissemination, use, and impact of results.

In this workshop several questions will be considered. How does culture matter in evaluation theory and practice? How does attention to cultural issues make for better evaluation practice? Does your work in an agency or organization require you to know what culturally responsive evaluation looks like? What issues do you need to consider in building culturally competent and responsive evaluation approaches? How do agencies identify strategies for developing and disseminating culturally responsive evaluation information? We articulate how these questions and considerations are essential when working with organizations and communities with hard-to-reach populations (e.g., marginalized groups), and where evaluations, if not tailored to the organization's or community's cultural milieu, can easily overlook the mores of its members.

This workshop is multifaceted and will rely on various interdisciplinary social science theoretical frameworks to both situate and advance conversations about culture in evaluation and applied research. In particular, participants will receive information and materials that help them to develop expertise in the general topics of culture in evaluation, including understanding the value-addedness for the evaluation researcher or program specialist who needs to develop a general understanding of the topic itself. Workshop attendees will also be encouraged to understand cultural barriers that might arise in evaluative settings between evaluators, key stakeholders, and evaluation participants that can hamper the development and execution of culturally responsive evaluations (e.g., power dynamics; and institutional structures that may intentionally or unintentionally promote the "isms"). We will also discuss how cultural responsiveness extends to institutional review board criteria and research ethics, and the development of strategies to garner stakeholder/constituent involvement, buy-in, and trust.

The presenters will rely on real world examples from their evaluation practice in urban communities, in school districts, and in a large national multi-site federally funded community-based initiative. This workshop assumes participants have an intermediate understanding of evaluation and are interested in promoting ways to build culturally competent and responsive practices.

Questions regarding this workshop may be addressed to Wandadcasillas@gmail.com.

 

 

 

An Introduction to Social Return on Investment (SROI)

John Gargani

Social return on investment (SROI) is a new and controversial evaluation method. It is widely applied in the UK, Europe, and many international development settings. Now demand for it is growing in the US. What is SROI? It is one application of valuation—representing the value of program impacts in monetary units. Specifically, SROI compares the value of impacts to the cost of producing them. It is strongly associated with social enterprise, impact investing, social impact bonds, value-for-money initiatives, and other efforts that combine business thinking with social betterment. In this hands-on workshop, you will learn the basics of how to conduct an SROI analysis. We will approach the method with a critical eye in order to plan, use, and interpret SROI effectively. You will leave the workshop with a better understanding of how to incorporate SROI into your practice, and how to engage clients and stakeholders in its implementation.
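As a back-of-the-envelope sketch of that core comparison (the figures and names below are purely illustrative, not from the workshop), an SROI ratio divides the discounted value of monetized outcomes by the investment required to produce them:

  # Hypothetical SROI sketch: discount monetized outcomes to present value
  # and divide by the investment. All figures are illustrative.
  investment = 100_000                  # program cost
  annual_outcome_value = 40_000         # monetized social value per year (assumed proxy values)
  years = 5
  discount_rate = 0.035

  present_value = sum(annual_outcome_value / (1 + discount_rate) ** t
                      for t in range(1, years + 1))
  print(f"SROI ratio: {present_value / investment:.2f} : 1")

A ratio above 1 is typically read as "each dollar invested produces more than a dollar of social value," though, in keeping with the critical eye the workshop recommends, the credibility of that reading rests on how the impacts were monetized.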

 

 

Data Visualization

Tarek Azzam


The careful planning of visual tools will be the focus of this workshop. Part of our responsibility as evaluators is to turn information into knowledge. Data complexity can often obscure main findings or hinder a true understanding of program impact. So how do we make information more accessible to stakeholders? Often this is done by visually displaying data and information, but this approach, if not done carefully, can also lead to confusion. We will explore the underlying principles behind effective information displays. These are principles that can be applied in almost any area of evaluation, and we will draw on the work of Edward Tufte, Stephen Few, and Jonathan Koomey to illustrate the breadth and depth of their applications. In addition to providing tips to improve most data displays, we will examine the core factors that make them effective. We will discuss the use of common graphical tools, and delve deeper into other graphical displays that allow the user to visually interact with the data.
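As a small, hypothetical illustration of one such principle (reducing non-data ink and labeling series directly instead of relying on a legend), a basic chart might be built like this in Python with matplotlib; the data are invented:

  import matplotlib.pyplot as plt

  years = [2011, 2012, 2013, 2014, 2015]
  program_a = [52, 58, 61, 67, 70]
  program_b = [48, 47, 50, 49, 51]

  fig, ax = plt.subplots()
  ax.plot(years, program_a, color="steelblue")
  ax.plot(years, program_b, color="gray")
  # Label the lines directly at their endpoints rather than using a legend.
  ax.text(years[-1] + 0.1, program_a[-1], "Program A", va="center", color="steelblue")
  ax.text(years[-1] + 0.1, program_b[-1], "Program B", va="center", color="gray")
  # Remove the box around the plot area to reduce non-data ink.
  for spine in ("top", "right"):
      ax.spines[spine].set_visible(False)
  ax.set_xlabel("Year")
  ax.set_ylabel("Outcome score")
  plt.tight_layout()
  plt.savefig("trend.png", dpi=150)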

Questions regarding this workshop may be addressed to Tarek.Azzam@cgu.edu.

 

   

Leadership Assessment

Becky Reichard



Leadership assessment is commonly used by organizations and consultants to inform selection, promotion, and development of leaders. This experiential workshop will provide participants with an overview of the three main methods of leadership assessment – self-assessment, 360-degree assessment, and assessment centers. An assessment center is a method of evaluating leaders’ behaviors during simulated scenarios, or various life-like situations that leaders encounter. Leadership assessments discussed will include leadership competency models, personality, strengths, and skills. In the process of this workshop, participants will be provided with feedback on their leadership strengths, skills, and styles. To receive feedback, please complete the following in advance of the session.

(1) Complete the StrengthsFinder assessment
Purchase and review the Strengths-Based Leadership book by Rath & Conchie (2008); in the book, you will be provided with an access code. To complete the StrengthsFinder assessment, visit strengths.gallup.com and enter your access code. The StrengthsFinder assessment is based on 35 years of Gallup research and has been taken by 4 million people in 17 languages. The assessment consists of 180 forced-choice questions that you must respond to within 20 seconds each. In total, you should allot 40 minutes to complete the StrengthsFinder. When you complete this online self-assessment, Gallup will automatically email you a detailed feedback report, which you should review in detail. The feedback report will contain your top five strengths themes organized across Gallup's four domains of leadership strength. To learn more about how to develop your strengths as a leader and about your specific strengths, read parts 1-3 (pages 1-95) of Strengths-Based Leadership (Rath & Conchie) along with the subsequent chapters associated with your strengths. Bring a printout of your Strengths report to the workshop. (Make sure you buy a new copy of the book, since it includes a code for completing the StrengthsFinder survey.)

(2) LeAD360
  • Step 1: Self-assessment
After registering for this workshop, you will be emailed a link to the LeAD360 self-report online assessment. Follow the online instructions for completing the self-assessment. This survey will take you approximately 30 minutes and will ask you questions about your leadership. It is important that you are as honest as possible in your responses so that we can provide you the best available feedback. Your responses will be kept confidential. Please complete the self-assessment no later than August 7th to allow time for others’ assessments to be collected before the workshop.
    Nominate others to complete your LeAD360. At the end of the self-report survey, you will be asked to enter an online ‘roster.’ This roster is where you can input the names and email addresses of up to 20 individuals in your work circle (i.e., peers, supervisor(s), subordinates), who will rate you on the LeAD360. Prior to beginning the self-assessment, consider who you would like to nominate to assess you and identify the appropriate email addresses for each person. Please nominate others who have had an opportunity to observe your leadership in action. Avoid nominating people that you only know through casual or brief interactions. Please complete the roster as part of the self-assessment no later than August 7th to allow time for others’ assessments to be collected.
  • Step 2: Email those rating you
It would be helpful to gain approval from those you'd like to rate your leadership and to let them know to expect an email request from LeAD Labs. We have provided the email template below for your convenience so you can send it to those you'd like to rate you. We find this speeds up responses, which is essential to ensure you receive your feedback during your workshop in Claremont. Be sure to send this email to those in your work circle no later than August 7th. In order for us to compile your LeAD360 feedback report, others' assessments of you are due by August 21st.

EMAIL TEMPLATE
Dear [RATER NAME],
I am writing to ask for your assistance in assessing my leadership behavior by participating in the LeAD360. This assessment is part of my participation in the Claremont Evaluation Center's PDW on Leadership Assessment. I have invited several co-workers to help me complete this assessment and all responses will be presented anonymously in a report that averages scores across respondents. The assessment will be a valuable source of feedback about strengths and gaps in my leadership behavior, and will be used to assist me in creating a personal leadership development plan.
 
The assessment will involve completing an online survey that will take about 30 minutes. The link to the survey will be sent out in a separate email. Because survey links are sometimes flagged as “spam” in email filters, please check your spam folder for an email with the subject line containing ‘LeAD360’ if you do not receive the survey link by August 10th. You can also try adding ‘noreply@qemailserver.com’ to your contact or accepted email list. The assessment deadline is August 21st.

Please respond to let me know if you are willing to participate in the LeAD360. I appreciate your help and genuinely thank you for your time.
Sincerely,
[Your NAME]

Questions regarding this workshop may be addressed to Becky.Reichard@cgu.edu

 

   

Introduction to Positive Human Resources Development

Meghana Rao

This workshop will provide an introduction to the positive psychological and strengths-based perspectives, theories and methods that have been revolutionizing HR practice over the last few years. While historically, scholars and practitioners have been primarily concerned with what goes wrong in organizations and how to remedy problems, the positive approach focuses on what works, and how to capitalize on strengths. Key topics will include strengths-based and positive approaches to talent management, performance management, training and development, fostering high quality work relationships, designing jobs for flow and job crafting, employee empowerment and job satisfaction. Finally, this workshop will describe the positive lens, and provide a handy tool to use the positive lens in any HR-related research topic or practical area of concern. This workshop is expected to be useful for all levels of proficiency in positive psychology and/or HRD.

 

Quasi-Experimental Methods

William Crano

Conducting, interpreting, and evaluating research are important aspects of the social scientist's job description. To that end, many good educational programs provide opportunities for training and experience in conducting and evaluating true experiments (or randomized controlled trials [RCTs], as they sometimes are called). In applied contexts, the opportunity to conduct RCTs often is quite limited, despite the strong demands on the researcher/evaluator to render "causal" explanations of results, as they lead to more precise understanding and control of outcomes. In such restricted contexts, which are far more common than those supporting RCTs, quasi-experimental designs sometimes are employed. Though they usually do not support causal explanations (with some noteworthy exceptions), they sometimes provide evidence that helps reduce the range of plausible alternative explanations of results and thus can prove to be of real value. This workshop is designed to impart an understanding of quasi-experimental designs. After some introductory foundational discussion focused on "true" experiments, we will consider quasi-experimental designs that may be useful across a range of settings that do not readily lend themselves to experimentation. These designs will include time series and interrupted time series methods, nonrandomized designs with and without control groups, case control (or ex post facto) designs, regression-discontinuity analysis, and other esoterica. Participants are encouraged to bring to the workshop design issues they are facing in real world contexts.
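As one concrete, hypothetical illustration of a design named above, a basic sharp regression-discontinuity analysis can be sketched in Python as follows (the file, variable names, and cutoff are assumptions made for the example):

  import pandas as pd
  import statsmodels.formula.api as smf

  # Units receive the treatment when a known assignment score crosses a cutoff;
  # the treatment effect is estimated as the jump in the outcome at that cutoff.
  df = pd.read_csv("rd_data.csv")        # assumed columns: score, outcome
  cutoff = 50
  df["treated"] = (df["score"] >= cutoff).astype(int)
  df["score_c"] = df["score"] - cutoff   # center the running variable at the cutoff

  # Allow separate slopes on each side of the cutoff; the coefficient on
  # "treated" estimates the discontinuity (the local treatment effect).
  rd = smf.ols("outcome ~ treated + score_c + treated:score_c", data=df).fit()
  print(rd.summary())

Restricting the sample to scores near the cutoff, and checking that the estimate is stable across bandwidths, is the usual next step before treating the result as credible.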

Questions regarding this workshop may be addressed to William.Crano@cgu.edu.

     

Monday, August 24

 
   

Principles-Focused Evaluation

Michael Quinn Patton

Online Workshop

Evidence about program effectiveness involves systematically gathering and carefully analyzing data about the extent to which observed outcomes can be attributed to a program’s interventions. It is useful to distinguish three types of evidence-based conclusions:

  1. Single evidence-based program. Rigorous and credible summative evaluation of a single program provides evidence for the effectiveness of that program and only that program.
  2. Evidence-based model. Systematic meta-analysis (statistical aggregation) of the results of several programs all implementing the same model in a high-fidelity, standardized, and replicable manner, and evaluated with randomized controlled trials (ideally), to determine overall effectiveness of the model. This is the basis for claims that a model is a “best practice.”
  3. Evidence-based principles. Synthesis of case studies, including both processes and outcomes, of a group of diverse programs or interventions all adhering to the same principles but each adapting those principles to its own particular target population within its own context. If the findings show that the principles have been implemented systematically, and analysis connects implementation of the principles with desired outcomes through detailed and in-depth contribution analysis, the conclusion can be drawn that the practitioners are following effective evidence-based principles.

Principles-focused evaluation treats principles as the intervention and unit of analysis, and designs an evaluation to assess both the implementation and the consequences of principles. Principles-focused evaluation is a specific application of developmental evaluation because principles are the appropriate way to take action in complex dynamic systems. This workshop will be the worldwide premiere of principles-focused evaluation training. Specific examples and methods will be part of the training.

Participants will learn:

  • What constitutes a principle that can be evaluated

  • How and why principles should be evaluated

  • Different kinds of principles-focused evaluation

  • The relationship between complexity and principles

  • The particular challenges, strengths, and weaknesses of principles-focused evaluation.

Questions about this workshop may be addressed to mqpatton@prodigy.net.