Faculty who consider themselves primarily researchers can be difficult to engage in faculty development activities. However, as agencies such as the National Science Foundation now require educational activities in research grants, proposal writing may represent a new avenue for engaging research faculty in their teaching. In this chapter, we outline an innovative workshop on writing the pedagogical component of a grant proposal that was developed for faculty at Northwestern University. During the workshop, while learning how to structure an education plan for their grant, faculty engaged in a lively discussion about formulating learning objectives and aligning them with pedagogical methods and activities, assessments, and evaluation strategies.

Because teaching and research are often viewed as distinct academic practices in higher education (Brew & Boud, 1995; Colbeck, 1998; Wolverton, 1998), many faculty developers may find it difficult to reach beyond the “converted”—faculty who value teaching and learning—to reel in the “unconverted”—faculty who view themselves primarily as researchers (Brew, 2003; Light, 2003). Over the last few years, funding agencies such as the National Science Foundation (NSF) and the National Institutes of Health (NIH) have begun to require that research grant proposals include educational activities and an evaluation of those activities. Yet although many funding agencies and institutions offer extensive tutorials and workshops devoted to grant writing generally, there are few readily available resources that faculty can turn to for help in developing the educational components of a grant. Handbooks and manuals devoted to grant writing abound, and some touch on writing the pedagogical component of a grant (Fraenkel & Wallen, 2000), but little information is readily available on the Internet or even in scholarly databases. Indeed, a great deal of the Web-based information focuses on common grant-writing tips (start early, follow instructions, pay attention to deadlines, and so on) that, though undoubtedly useful in their own right, will do little to help faculty struggling to put together the pedagogical component of a grant. Bordage and Dawson (2003), for example, offer eight steps and twenty-eight questions designed to help a researcher in the health sciences construct a strong study and viable proposal, but they do not explicitly advise on how to structure the evaluation, let alone a pedagogical component.

In response to these new grant requirements and the lack of available resources, the Searle Center for Teaching Excellence at Northwestern University has received many requests from faculty to help them develop the pedagogical components of their grants. Demand for this service has increased over the past five years, a trend that has opened the door to rich discussion on teaching and learning with faculty who otherwise would not request a consultation about teaching.

Recognizing grant-writing assistance as an alternative means of engaging faculty in faculty development, and to meet the demand, we developed a faculty workshop focused on designing the pedagogical component of a grant proposal. Much has been written in the faculty development literature about engaging faculty in significant discussions about teaching and learning as it relates to a specific learning context (Gillespie, Hilsen, & Wadsworth, 2002; Sorcinelli, Austin, Eddy, & Beach, 2006) and through the Scholarship of Teaching and Learning, but it seems that no one has addressed the topic through grant writing. We saw this as a novel way to bring new faculty through our doors and to enthuse the “unconverted” about their teaching. A workshop on writing the pedagogical components of grants may thus represent a new means of engaging research faculty in meaningful conversation about teaching and learning.

The Design and Activities of the Workshop

Northwestern enrolls about fifteen thousand students, half of whom are undergraduates, and employs approximately twenty-five hundred full-time faculty, close to two-thirds of whom are drawn from the medical school. Research and scholarly publications weigh heavily in promotion and tenure decisions, and in many fields securing grants is considered essential for continued employment.

The workshop was one of eight faculty development workshops offered monthly by the Searle Center throughout the academic year. Like the other workshops in the series, it was two hours long; was facilitated by two center staff members; was open to faculty, staff, and postdocs across the university; and featured both interactive presentations and hands-on activities. Unlike other faculty workshops, however, its content did not focus directly on instructor–student interactions, such as engaging students in large class settings, using clickers in class, or facilitating discussion.

We offered the workshop twice in the same calendar year but in two separate academic years. The workshop was well attended and received positive evaluations from faculty (discussed more fully below). Across both offerings, we had twenty-seven participants (fourteen men, thirteen women) from a range of disciplines and units across campus. Fourteen faculty members came from medicine and engineering; six came from the social sciences, music, and the humanities; four staff members came from various administrative units; and three were graduate students (two from engineering and one from the humanities). A handful (seven participants) had attended other workshops or sessions offered by our center, but most had not attended a session before, or at least not in the four years since we began keeping individual participation records.

Workshop Goals and Objectives

We designed the workshop to furnish participants with (1) detailed information on how to structure an educational plan and its evaluation within a grant proposal, (2) tips on avoiding common mistakes in educational grant writing, and (3) information on additional grant-writing resources at our university and elsewhere. In terms of workshop objectives, we wanted our participants to be able to examine and revise the objectives, rationale, and methods described in a grant proposal and to gauge their overall alignment, clarity, and precision. We also wanted them to be able to design an evaluation plan that was constructively aligned with the educational objectives of the grant (the specific, concrete aims the grant is meant to achieve) and that used appropriate quantitative and qualitative research methods (Light, Cox, & Calkins, 2009).

Description and Structure of the Workshop

Both offerings of the workshop, titled “Developing an Effective Pedagogical Component for Your Grant Proposal,” followed the same facilitator plan, displayed in Table 15.1. We opened each session by asking participants to share their reasons for attending and their experience writing the pedagogical sections of a grant. Their experience with grant writing varied considerably: some had never pursued a grant at all, while others had written grants before, though not necessarily with an educational component, nor had they necessarily been funded. Although “getting funding” was commonly expressed as an important reason for attending, the seriousness of this pursuit seemed to vary considerably among our participants. Some faculty, particularly those in engineering and science fields, were actively planning to submit, or had already submitted, a proposal with a pedagogical component, viewing it as essential to their research and their position at the university. Other participants were simply gathering information for future grant opportunities.

Table 15.1. Grant Writing Workshop Plan

Session I (20 min.)
  • Introductions and icebreaker: In small groups, participants discuss two questions: What brought you to the workshop today? What experiences have you had writing the pedagogical sections of a grant?
  • Overview: Facilitators outline the workshop’s purpose, learning goals, structure, and activities.

Session II (30 min.)
  • Developing the pedagogical section of your grant: Facilitators focus on aligning project objectives, methods, and assessment in the proposal.
  • Case study: In small groups, participants clarify pedagogical objectives and activities drawn from a real proposal; groups then share their suggestions for improvement with the large group.

Session III (60 min.)
  • Key aspects of evaluation: Facilitator identifies common evaluation myths and describes a common evaluation structure.
  • Case study exercise: In small groups, participants identify formative and summative assessments in different cases.
  • Broader impact statement: Faculty expert or facilitator explains the components of the broader impact statement and identifies relevant resources.

Session IV (10 min.)
  • Final thoughts: How Searle Center staff can help; workshop evaluation.

We then explained the general structure of the workshop, which consisted mainly of our interactive presentations and two case-study activities during which the participants would work together in small groups. In the first case study, we asked our participants to analyze the pedagogical section of a grant proposal, focusing their critique on learning objectives and pedagogical activities. In the second case study, we homed in on the evaluation plan, asking our participants to generate formative and summative assessments for specific educational activities. We concluded the workshop by discussing the “broader impact” section of proposals and educational activities that could be used to demonstrate the far-reaching benefits of research programs. In the first workshop offering, we also asked a faculty member with expertise in writing about broader impacts to give advice on the process.

Aligning Objectives and Activities

Before we began the first case study, we explained the notion of alignment to our participants—that is, the fit among the funding agency goals, project goals, project objectives, project methods and activities, and project assessment and evaluation, as shown in Figure 15.1. Just as alignment is crucial in designing a course (Biggs, 2003; Light et al., 2009), so too is the alignment of educational objectives, educational activities, and evaluation.

The overall goals of the funding agency and the project lay outside the scope of our workshop, but we did spend a few minutes clarifying the difference between goals and objectives. Goals, we explained, tend to be large statements of what the grant seeker hopes to accomplish; they create the setting for the proposal and must match the goals of the funding organization. They are not necessarily measurable. Objectives, on the other hand, are operational, measurable, and specific as to the outcomes that the grant seeker intends to accomplish (Light et al., 2009). We asked our participants to consider several sample goals and objectives, such as those for an imagined grant on doctoral training in the biosciences (Table 15.2).

Figure 15.1. Aligning the Pedagogical Component of a Grant Proposal

The First Case Study: Educational Objectives and Activities

We asked our participants, working in small groups of three or four, to read over and comment on the case study, an activity known to facilitate active learning and critical reflection (Bligh, 2000; Light et al., 2009). The case study had been adapted from the educational component of a real, though heavily disguised, proposal for a major science foundation grant written by a faculty member (see Exhibit 15.1 for a condensed version of the case study).

In their small groups, participants considered whether the educational objectives identified in the grant were clear and whether the rationale clearly stated the significance of the problem. They further examined the proposed activities for their alignment with the proposed objectives. Lastly, we asked them how the proposal could be improved.

Our participants identified a number of crucial problems: lack of definition of the problem to be addressed, lack of specificity in the rationale for addressing the problem and the stated objectives, lack of alignment between objectives and activities, and lack of support for methods.

Table 15.2. Sample Goals and Objectives: “Doctoral Training Program in the Biosciences”

Goal 1: To increase the number of students from underrepresented groups admitted to doctoral programs in the biosciences at the university
  • Objective 1.1: Revise current admissions criteria
  • Objective 1.2: Conduct a workshop on the revised admissions criteria for admission committees at the university

Goal 2: To increase the number of students from underrepresented groups retained in doctoral programs in the biosciences at our university
  • Objective 2.1: Implement a mentoring program in which faculty serve as mentors for doctoral students
  • Objective 2.2: Implement a peer-led collaborative learning program for doctoral students enrolled in the biosciences

They also viewed the language as imprecise and even misleading in parts. For example, several participants suggested that an expression such as “raising public awareness” lacks clarity and is virtually meaningless. Others suggested that a reviewer would stop reading when the grant seeker stated he wished to “continue the university’s successful practice” because it wouldn’t be necessary to fund successful practices.

Exhibit 15.1. Clarifying Pedagogical Objectives and Activities: A Case Study

3. Education and training. The education objective of the proposed program is to train future Ph.D.s to become leaders in research and industrial development in related fields who will help raise public awareness about environmental issues, and who will act as liaisons to the public to explain these issues.

3-1. Leaders on research and industrial development. We will continue the university’s successful practice of integrating multidisciplinary research and application, using internships, international collaboration, teamwork, and mentoring. We will introduce a new concept of Academic Task Squads in Ph.D. training. Each academic squad will comprise several teams, made up of members from different engineering disciplines, who will each work on a relevant engineering problem involving theory, experiment, and product design. Each multidisciplinary academic squad will develop one line of products. To ensure our success, we will:

  1. Create and modify new and existing courses to ensure a multidisciplinary multiscale engineering curriculum that effectively includes new knowledge.

  2. Create teams with at least three members possessing the required expertise for the problem, with at least one member from outside the university.

  3. Obtain summer internships for graduate students in our Engineering Fellows (EF) program working in industrial or government labs. Through such experience, our EFs will better appreciate the significance of their work and acquire the skills to be leaders in research and development. Each student working with his or her advising committee will develop an improvement plan, which will include (1) presenting research in the weekly EF seminars; (2) maintaining regular contact with his or her industrial advisor/mentor; and (3) designing a website that offers a means to exchange and search for research information.

3-2. Public liaisons. Even though this environmental issue is important, the public has been slow to act. Our Engineering Fellows, as scientists trained to explore such environmental issues, will also raise the awareness of the general public, by visiting local schools. This will attract high school students and teachers to work in our labs, which in the long run will help society welcome environmental reforms in the future.

Despite the many problems they found in the proposal, participants also identified ways to strengthen it. They recommended that the faculty member provide more specifics (What kinds of courses would be created or modified? What are the nature and scope of the problem being addressed? How, specifically, can the program raise public awareness?); strengthen the rationale by more fully describing the problem and its importance (Why is it necessary for future Ph.D.s to become leaders in research and industrial development and to raise public awareness about environmental issues?); furnish a rationale for the educational activities chosen to achieve the educational objectives; and formulate measurable objectives and a plan with a defined timeline (for example, identify expectations and guidelines for academic squad leaders within six months; complete a preliminary product line within one year). Several participants also suggested that the faculty member scale back expectations and make the overall goals and objectives more manageable. Almost everyone agreed, too, that the writing should be more concise and free of jargon.

The Second Case Study: Developing an Evaluation Plan

The second part of our workshop focused on evaluation. Before turning to the second case study, we identified some common myths about evaluation. First, many proposal writers assume there is one standard evaluation format. We receive many calls from faculty, often within a few days of a grant deadline, asking whether we can send them a paragraph about evaluation that they can simply insert into their proposals. As we explain, every project has its own objectives and the evaluation must be aligned with those objectives, so no magic, one-size-fits-all paragraph will work; each project requires a uniquely tailored evaluation plan. Second, many faculty assume that evaluation costs very little, and they consistently underestimate the budget allotment. We try to disabuse them of this notion: reviewers may not take seriously proposals that do not allot sufficient resources for conducting the evaluation, and we suggest allocating 5 to 10 percent of the total budget to it. Third, many faculty dismiss the evaluation as a “hoop-jumping” exercise without practical value; we counter that an evaluation aligned with the project’s objectives yields formative feedback that can strengthen the project itself.

In framing the second case study, we described a simple evaluation structure in which each objective, together with the activities designed to achieve it, is assessed in terms of process (formative assessment) and outcomes (summative assessment). Assessing process—that is, the implementation of the educational activity—might include examining the overall integrity of the program (the number of sessions attended by participants, group size in a small-group learning program) using administrative data, observations (of participants, classrooms), or feedback from key participants (surveys, focus groups, interviews, reflective journals). Assessing outcomes (impact) would look at products (materials created, courses delivered, curricula developed, evidence of research activity) or student outcomes. Such outcomes might emphasize learning as measured in multiple ways (course grades and retention; learning inventories; purpose-built instruments; think-aloud protocols), or they might focus on attitudes, motivation, or other behaviors or skills.

We asked our participants to focus on one of four cases. Each case had a goal, an objective, and an educational activity designed to achieve the objective. We asked participants to identify two or three formative and summative ways in which they could assess the process (implementation) and the outcomes (impact) of the program. To save space, we describe only three of the four cases in Table 15.3.

Initial Assessment of Workshop Outcomes

Our initial assessment of workshop outcomes, conducted immediately after each session concluded, indicated that our participants were very satisfied with the workshop and felt they had taken away something useful. We had participants complete a questionnaire at the end of the workshop asking what they had found most valuable and what they expected to use in their grant writing. Although only fifteen completed the survey, fourteen of them (93 percent) rated the workshop a 4 or a 5 for overall helpfulness on a 1-to-5 scale. More significantly, they identified a range of components that they found valuable, including the evaluation framework, the practical examples, the explanation of goals and objectives, and various evaluation resources. We have already consulted individually with several participants on their grants, and we anticipate more of these consultations as the major funding agency deadlines draw closer.

We plan to conduct a more formal follow-up survey to gauge the effectiveness of the workshop in helping participants write successful pedagogical components of their grants, and we will also track their submissions and funding outcomes. In an initial follow-up study, conducted eleven months after the first offering and five months after the second, we found that at least ten participants were working on or had submitted a grant since participating in the workshop, most of them major NSF, NIH, or comparable grants, and at least three had been funded so far. Three participants indicated they had attended the workshop as part of their job of helping faculty secure grants. When asked which aspects of the workshop had helped them, most indicated that they found it helpful simply to get a better grasp of pedagogical language and of the structure associated with the evaluation component of education-related grants.

Table 15.3. Evaluation Case-Study-Based Activity and Participant Response

Case Study 1
Specific aim: To maximize opportunities for successful Ph.D. completion and competition for training grant awards
Objective: To enhance students’ research skills
Activity: Create a research skills course that allows students to write proposals and present research findings
Examples of formative assessment (implementation):
  • Track participant numbers
  • Track level and quality of participation
Examples of summative assessment (impact):
  • Count number of proposals submitted by students
  • Track papers given by students
  • Use questionnaires (e.g., standard self-efficacy measures to gauge attitudes)

Case Study 2
Specific aim: To maximize opportunities for successful Ph.D. completion and competition for training grant awards
Objective: To provide students with a mentor to guide their professional development
Activity: Create a mentoring program in which students meet monthly with a faculty mentor from their department
Examples of formative assessment (implementation):
  • Compare the group with mentors to those without mentors
  • Track participation of mentors
  • Determine the number and overall quality of mentoring meetings
  • Use surveys and focus groups to identify what students gained from being mentored
Examples of summative assessment (impact):
  • Program retention
  • Time to graduation
  • Grades
  • Classes taken by students

Case Study 3
Specific aim: To prepare students to transition into competitive postdoctoral research positions and to participate successfully in academic research, administrative, and leadership positions beyond graduation
Objective: To give students experience in presenting and publishing their research
Activity: Require students to present research at meetings and submit work to be peer reviewed
Examples of formative assessment (implementation):
  • Observe students to gauge understanding and misconceptions
  • Interview a subset of students (e.g., a random sample of 10 percent)
Examples of summative assessment (impact):
  • Student presentations
  • Pre- and postsurveys

Critical Reflections

With so few resources available on the Web or in scholarly databases, faculty are hard-pressed to figure out on their own how to structure the educational component of a grant. Yet even though many universities offer their faculty support in the grant-writing process, we have seen little evidence that other institutions offer the type of workshop we did. Given the high interest generated by a similar workshop we conducted at the last Professional and Organizational Development Network conference, we believe that this workshop is easily transferable to other campuses and should prove just as useful elsewhere. It has given us an innovative way to reach faculty with whom we do not otherwise connect and to engage them in conversation about teaching and learning, including formulating educational goals and learning objectives, articulating rationales for the pedagogical methods and activities chosen to achieve those objectives, and ensuring alignment among the objectives, the pedagogy, and the assessment techniques. We have also been able to acquaint faculty with our other teaching-related programs and sessions.

For these reasons, we want to share what we have learned from doing this innovative pilot program to help other faculty developers:

  1. Faculty working on grants related to teaching and learning have not necessarily thought much about teaching and learning. As we suspected when we first decided to put the workshop together, it strongly appealed to faculty wanting help with the pedagogical component of their grant. But we were surprised by how little these same faculty had reflected on teaching and learning. This workshop promised, to borrow a phrase, the start of a wonderful friendship with “unconverted,” research-focused faculty who had never set foot through our doors. Indeed, the great majority of the participants had either never attended a workshop or not attended one in the last four years.

  2. Faculty have widely varying experience with and knowledge of evaluation. Although we designed the case-based activities to help the participants think about different methodologies for evaluation, we spent more time than we anticipated explicating qualitative, quantitative, and mixed methods. We also explained how multiple data sources can be used for triangulation, particularly because some faculty mistakenly think that funding agencies consider only quantitative assessments as valid. In addition, we spent more time than expected responding to faculty concerns about validity, specifically how to do valid assessments with small sample sizes that may be unable to produce statistically significant results.

  3. The language of assessment needs to be clearly articulated. Some of our participants were confused about the meaning and goals of formative and summative assessment. We found that using “process assessment” for formative assessment and “outcome assessment” for summative assessment helped them understand the difference.

  4. “Broader impact” requires more time. A two-hour workshop allowed too little time to address the broader impacts of the participants’ proposed research. We provided a handout on what the broader impact statement entails, but the questions and discussion surrounding the topic suggested that it could fill a workshop of its own.

  5. The workshop crosses disciplines. Although many of the workshop examples and terms came from NSF and NIH grants, faculty from the social sciences and humanities reported they were able to draw out the principles and still found the workshop helpful. Because we strive to open dialogue about teaching to faculty from all disciplines, we were gratified when participants told us how much they had learned from talking to colleagues in other disciplines.

Ultimately, the changing expectations for major grants have given us a novel opportunity to attract and converse with faculty who do not typically focus on their teaching. The workshop offered an innovative way to reach faculty whom we do not usually encounter on our campus, and it gave us a way to encourage a dialogue about teaching and learning that transcends disciplines.

References

  • Biggs, J. B. (2003). Teaching for quality learning at university. London: SRHE/Open University Press.
  • Bligh, D. (2000). What’s the use of lectures? San Francisco: Jossey-Bass.
  • Bordage, G., & Dawson, B. (2003). Experimental study design and grant writing in eight steps and 28 questions. Medical Education, 37(4), 376–385.
  • Brew, A. (2003). Teaching and research: New relationships and their implications for inquiry-based teaching and learning in higher education. Higher Education Research and Development, 22(1), 3–18.
  • Brew, A., & Boud, D. (1995). Teaching and research: Establishing the vital link with learning. Higher Education, 29(3), 261–273.
  • Colbeck, C. L. (1998). Merging in a seamless blend: How faculty integrate teaching and research. Journal of Higher Education, 69(6), 647–671.
  • Fraenkel, J. R., & Wallen, N. E. (2000). How to design and evaluate research in education (4th ed.). New York: McGraw-Hill.
  • Gillespie, K. H., Hilsen, L. R., & Wadsworth, E. C. (Eds.). (2002). A guide to faculty development: Practical advice, examples, and resources. Bolton, MA: Anker.
  • Light, G. (2003). Realizing academic development: Embedding research practice in the practice of teaching. In H. Eggins & R. MacDonald (Eds.), The scholarship of academic development (pp. 152–162). London: SRHE/Open University Press.
  • Light, G., Cox, R., & Calkins, S. (2009). Learning and teaching in higher education: The reflective professional. Thousand Oaks, CA: Sage.
  • Sorcinelli, M. D., Austin, A. E., Eddy, P. L., & Beach, A. L. (2006). Creating the future of faculty development: Learning from the past, understanding the present. Bolton, MA: Anker.
  • Wolverton, M. (1998). Treading the tenure-track tightrope: Finding balance between research excellence and quality teaching. Innovative Higher Education, 23(1), 61–79.