Abstract

The program assessment process combines assessments from individual courses to generate a final program assessment that is measured against accreditation benchmarks. In developing countries, the industrial environment is often not diversified enough to allow graduating engineers to seek jobs in all disciplines or specializations of an engineering program. It therefore seems necessary to evolve engineering program assessment around the specialized requirements of the industry. This paper describes how the assessments of specialization-specific courses are grouped per requirement and then integrated towards the overall program assessment. A software application is developed to automate this process, reducing assessment workload and presenting the program assessment, equivalently, as the integration of specialization-specific assessments per outcome per term. The implementation also shows how outcomes are integrated across specialization-specific courses so that the implementation of the program assessment can be judged. This effort is expected to help stakeholders of the program judge the evolution and quality of specialization tracks vis-à-vis the expectations of the local industry.

1. Introduction

Universities in different countries follow their own curriculum development philosophies and processes with the intention of producing “environment-ready” engineers. ABET’s requirement that accredited programs implement outcomes-based assessment models has stimulated the growth of formalized program assessment within the engineering and engineering technology communities. In fact, a systematic process must be in place to assess the achievement of both the program outcomes (before students graduate) and the program educational objectives (after graduates leave the program). This process needs to be ongoing to ensure the continuous improvement of each program, with input from constituencies, a process focus, and outcomes and assessment linked to objectives [1, 2]. The objective is to employ an assessment process that provides meaningful input to future program development and allows the institution to strengthen its educational effectiveness [3].

The authors in [4] describe the methodology used and the results obtained in a systematic web and literature search for instructional and assessment materials addressing the EC2000 program outcomes. In [5], the authors describe the development of several innovative tools to automate the processes of assessment and evaluation of outcomes of the College of Engineering and of individual ABET engineering programs. These tools depict quantitative information that is automatically plotted, and users can view the most recent results displayed relative to historical trends. A sustainable set of assessment tools has also been reported in [6] to measure the achievement of stated objectives and outcomes related to the ABET EC 2000 criteria. In [7], the authors investigate the effectiveness of teaching by working out an automated system for monitoring academic activities in order to control and diagnose the actions necessary for a goal-oriented education process. The authors in [8] have developed custom-designed assessment tools, user-friendly to lecturers/instructors and built on off-the-shelf technology, to measure students’ knowledge; they describe the utilization and evaluation of the tools for an undergraduate course. The web has also been exploited [9] to develop an outcome-based assessment tool: the authors describe a web-based tool used for skills practice and summative assessment, the reported objective behind its development being to assess student learning in spite of rising class sizes. In [10], the authors describe in detail the innovative course outcomes’ assessment tools used in an engineering department; there, a course outcomes’ assessment program, developed in-house, is used to evaluate the contributions of a given course, training, or any other item in the curriculum to the design, delivery, and achievement of the program outcomes. The authors in [11] report the results obtained in a pilot study of e-portfolio implementation in two different subjects; the tool is viewed as a complement for students’ assessment and as a perfect follow-up device to check students’ competency development throughout their degree studies. In summary, various methods and tools have been developed, tested, and reported in the literature for program assessment measurement, but, at least to our knowledge, specialization-specific course assessments have not been singled out and then integrated towards an overall program assessment. With respect to a typical electrical engineering (EE) program, its program assessment comprises courses from its specialization tracks, such as electronics, power, communications, and computer, in addition to science and mathematics courses.

Typically, departments embrace the general philosophy of outcome-based education and engage in an assessment process put forth to establish and review the program educational objectives and program outcomes, utilizing a set of tools that perform direct and indirect measurement of the program assessment components. In this work, in order to generate the program assessment, the courses are first grouped per specialization track and then integrated in the next layer to generate the final program assessment result. In Section 2, the program assessment process and its components, as typically adopted at the EE Department of the United Arab Emirates University (UAEU), are briefly described. Section 3 highlights an application showing how a course assessment is implemented. Section 4 shows the implementation of an application developed to compute the assessment for a specialization track, for example, the computer area. In Section 5, the integration of specialization tracks towards the overall program assessment is discussed. The discussion is presented in Section 6, followed by conclusions in Section 7.

2. Program Assessment Process

The Electrical Engineering Department at UAE University has established an assessment process that is used for the continuous improvement of the undergraduate program and the department’s educational operations. The program assessment consists of two parts: an annual report prepared by the assessment committee and the input from the accreditation body from its previous visit. The annual report uses feedback from the program stakeholders and the external accreditation body to generate the program assessment and is prepared using various tools and measurement results from the current year and from at least the previous two years. The annual program assessment, followed by its local (college and department) evaluation and the input from the (external) accreditation body, sets the benchmark performance limits of the program assessment. The continuous improvement of the program assessment requires that the respective feedback be used to refine the educational objectives and corresponding program outcomes, together with the improvement of the corresponding assessment tools. The (local) college evaluation is typically generated within the same year, but that from the external accreditation body is usually available only after a couple of years, during their visit to the college. This process is briefly illustrated in Figure 1.

Various tools (i.e., direct and indirect approaches) are implemented to assess the program outcomes. The set of tools includes curriculum quantitative assessment, the capstone course (graduation project), an exit exam, an exit interview, an internship survey, an alumni survey, an employer survey, and so forth. In the case of curriculum quantitative assessment, the EE Assessment Committee selects a set of courses (the most relevant for the targeted outcome) related to each program outcome to be assessed during that semester. Curriculum assessment is done collectively with the participation of all faculty members of the department. The curriculum assessment process involves three performance measures: student assessment (S), instructor assessment (I), and quantitative assessment (Q). The qualitative data collected from the instructor and student surveys on the outcomes of each course are analyzed and mapped to the corresponding program outcomes. The average over all courses is then calculated to obtain the program outcome achievements.

The program assessment tools mentioned above are used to assess how well the curriculum prepares graduates to perform in a manner consistent with the stated objectives and outcomes and to fulfill the EE Department mission. These assessment tools utilize information from faculty, students (current and graduated), and industry (employers and internship supervisors). Assessment data are used to improve the curriculum and the courses; they are also used by the faculty to update the outcomes, instruments, and metrics. Additionally, there exist two industrial advisory boards, one at the departmental level and the other at the college level, to evaluate the educational objectives, program outcomes, annual program assessment, and the various assessment tools used.

Some of the program outcomes are best assessed qualitatively through the capstone course, the internship, and extracurricular activities. For example, the participation of students in the IEEE student branch and in extracurricular activities (such as open day, activities day, workshops, field trips, and seminars) has a direct impact on some of these outcomes. Likewise, the internship is measured through an internship survey accumulated twice a year over a number of students before being included in the program assessment. In the alumni survey, an attempt is made to measure success by directly asking alumni for their opinion of how well they are fulfilling the objectives. Similarly, in the employer survey, the employers of our graduates are directly asked for their opinion about the skills of our graduates and how well they are fulfilling the program objectives. As regards the exit interview, its form is developed to poll the opinion of graduating EE students on the EE program outcomes and on the way they feel the program has helped them acquire the corresponding skills and abilities.

An assessment report is prepared each academic year for the sake of evaluating the program outcomes and objectives. The report includes, among others, the assessment results analysis, observations, and recommendations. The evaluation of this report, together with the corresponding feedback based on the program assessment results and the effectiveness of the tools used, drives the continuous improvement of the assessment process. These components are shown graphically in Figure 1. Additionally, a cyclic feedback to the departmental curriculum committee ensures the respective changes in the program curriculum. The details of the EE program assessment implementation at UAE University can be found in [12].

Let us look at the program assessment in a different way. As mentioned previously, in order to calculate the program outcomes, an averaging method is used across the outcomes of all courses. For example, if a particular outcome “K” maps strongly to courses such as Circuits Lab, Power Lab, Control Lab, Logic Design Lab, Electronics Lab, Communications Lab, and Microprocessors Lab, then the average of outcome “K” over all these courses is computed to generate the overall outcome “K.” This gives an overall view of outcome “K” in the EE program assessment. Because of the averaging, however, it does not tell how much of the contribution is made by courses related to, for example, the communications area, and likewise how much is made by computer-related courses. In order to highlight the contribution of each specialization track, the courses may be grouped in baskets, each basket consisting of courses taken from a specific area such as science and mathematics, circuits and electronics, computer, communications, or power and control. This way, the computer area, for example, would contain core courses such as computer programming, digital logic design, microprocessors, and computer architecture, together with electives such as embedded systems, Java programming, and special topics in computer. Once the outcomes are calculated for all baskets, averaging across the baskets may be used to generate, equivalently, the overall program outcome values. For a particular outcome “m,” this can be viewed mathematically as

\[
\text{Outcome}_m = \frac{1}{N}\sum_{i=1}^{N}\left(\text{outcome } m \text{ of course } i\right), \tag{1}
\]

\[
\text{Outcome}_m = \frac{1}{L}\left[\frac{1}{M}\sum_{i=1}^{M}\left(\text{outcome } m \text{ of course } i,\ \text{basket } 1\right) + \cdots + \frac{1}{M}\sum_{i=1}^{M}\left(\text{outcome } m \text{ of course } i,\ \text{basket } L\right)\right], \tag{2}
\]

where N, L, and M represent the total number of courses of the program, the number of baskets created for the program, and the number of courses per basket, respectively. For simplicity, it is assumed that the number of courses per basket is the same. Equation (2) is the focus of our program assessment implementation. As implied before, in addition to generating the program assessment, this would help program curriculum analysts and industry experts relate the knowledge and skills learned by students in a specific area of the program to the local industry requirements. The feedback thus obtained can be used to evolve the courses of that specific area of the program.
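For illustration, a minimal Visual Basic sketch of (2) is given below; the function name and the assumption that the scores are held in a 1-based two-dimensional array are ours and not part of the actual workbooks.

Function ProgramOutcome(scores() As Double, L As Integer, M As Integer) As Double
 ' Minimal sketch of (2): average outcome "m" within each basket,
 ' then across baskets. scores is assumed 1-based: rows are baskets,
 ' columns are the courses of each basket.
 Dim i As Integer, j As Integer
 Dim basketSum As Double, total As Double
 For i = 1 To L
  basketSum = 0
  For j = 1 To M
   basketSum = basketSum + scores(i, j)
  Next j
  total = total + basketSum / M  ' inner average: one basket
 Next i
 ProgramOutcome = total / L  ' outer average: across baskets
End Function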

In order to develop an application tool for such a requirement, the assessment hierarchy, in the form of a data flow, may look as shown in Figure 2. Figure 2 shows how the assessment data and course tables are used at the lowest level to generate the course assessment view. At the next level, different courses may be combined to generate the basket (cluster) view of the courses, termed here the specialization area of the program. Finally, all baskets (including science, mathematics, etc.) are combined to generate the program assessment. In Section 3, the individual course assessment implementation is discussed, followed in Section 4 by the integration of various specialization-specific courses to generate the assessment at the next (basket) level.

3. Course Assessment Implementation

In this section, a course assessment implementation is described that follows the procedure of Section 2. This implementation eases the assessment process, provides more accurate and clearer feedback that helps improve the course delivery in subsequent terms/years, and forms the basis for the upper levels of the program assessment process. Though the implementation may take different forms and use various technologies, an Excel workbook is chosen, for the simplicity most liked by instructors, to contain all the implementation steps and details that a course assessment may require.

The Excel workbook contains a number of sheets:

(a) a settings sheet that contains information about the course, its credit hours, the number of course outcomes and their descriptions, the number of program outcomes it assesses and their descriptions, and the number of quantitative assessment activities in the course along with their descriptions and weights;

(b) an activity sheet for each quantitative assessment activity, such as homework, quiz, test, exam, or project;

(c) a qualitative student “S” assessment sheet that records the input from each student, calculates the average of each course outcome, and maps it to the program outcomes being assessed in the course;

(d) a qualitative instructor “I” assessment sheet that records the opinion of the instructor on each course outcome, calculates the average of each course outcome, and maps them to the program outcomes being assessed in the course;

(e) a calculation sheet that calculates the quantitative score of each student per course outcome in each activity;

(f) a student assessment summary sheet that shows a summary of the values for each course outcome and program outcome assessed in the course;

(g) a dashboard sheet that calculates and shows the score and grade earned by each student taking the course;

(h) a final course assessment sheet that retains the last two years’ assessment scores along with comments and fills in the current final assessment scores “Q,” “S,” and “I” from the various sheets just described.

For illustration purposes, the settings sheet is shown in Figure 3.

All of these course assessment steps are fully automated except when modifications are needed to the course assessment, for example, changes in the course outcomes and corresponding program outcomes or a change in the number of assessment activities for the course. In summary, for a normal course without any changes, only the student information and assessment activities are to be recorded in the dashboard and activity sheets; all other sheets remain unchanged. The final course assessment sheet, however, always requires comments to be recorded for the current course assessment along with suggestions from the last offering of the course. For illustration purposes, the course assessment report sheet is shown in Figure 4. At the completion of the individual course assessment, (2) is partially implemented, as each course belongs to one basket only and contributes to a few course outcomes. The work that still remains is averaging across the course assessments within a basket, followed by averaging over the baskets.
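For illustration, the kind of calculation the calculation sheet performs may be sketched in Visual Basic as follows; in the actual workbook this is encoded in sheet formulas, and the function and argument names here are ours:

Function QuantScore(scores() As Double, weights() As Double, n As Integer) As Double
 ' Sketch: weighted quantitative score "Q" for one course outcome.
 ' scores(i) is a student's normalized score in activity i;
 ' weights(i) is that activity's weight from the settings sheet.
 Dim i As Integer
 Dim weighted As Double, totalWeight As Double
 For i = 1 To n
  weighted = weighted + scores(i) * weights(i)
  totalWeight = totalWeight + weights(i)
 Next i
 If totalWeight > 0 Then QuantScore = weighted / totalWeight
End Function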

4. Integrating Course Assessments

In this section, the grouping and integration of individual course assessments are discussed. As each course assessment is implemented in an Excel workbook, all course assessments follow the same assessment style and format, and they need to be kept in the same folder as well, since all the individual course outcome results are to be read automatically from each workbook. These values are then averaged and placed in the area-specific (e.g., computer basket) Excel workbook.
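A minimal sketch of this folder-based reading, assuming each course workbook stores the outcome value in the same fixed cell, may look as follows; the folder path and cell addresses are illustrative only:

Sub AverageBasketOutcome()
 ' Sketch: average one outcome value across all course workbooks in a folder.
 ' Assumes each workbook stores the outcome in the same fixed cell (here I19).
 Dim folder As String, f As String
 Dim wb As Workbook
 Dim total As Double, n As Integer
 folder = "C:\Assessment\ComputerBasket\"  ' illustrative path
 f = Dir(folder & "*.xlsx")
 Do While f <> ""
  Set wb = Workbooks.Open(folder & f, ReadOnly:=True)
  total = total + wb.Sheets(1).Range("I19").Value
  n = n + 1
  wb.Close SaveChanges:=False
  f = Dir()
 Loop
 If n > 0 Then Range("C6").Value = total / n  ' place average in the basket sheet
End Sub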

At the EE Department of the United Arab Emirates University, the courses within a basket are grouped, for each outcome (A–K), based on the respective (higher-weighted) targeted outcome. The primary objective of this grouping is to remove unnecessary calculations, given the insignificant impact of the majority of the courses on a particular outcome, since each course may not target all program outcomes with the same weight.

This simplifies the calculations without much loss of accuracy and helps reduce the course assessment load on each faculty member and on the assessment committee. For the computer basket, this situation is illustrated as follows.

The computer basket includes courses such as Computer Programming (ELEC 330), Digital Logic Design (ELEC 335), Digital Logic Design Lab (ELEC 345), Microprocessors (ELEC 451), Microprocessors Lab (ELEC 461), Computer Architecture and Organization (ELEC 462), Embedded Systems (ELEC 562), Java Programming Techniques and Applications (ELEC 561), and Special Topics in Computer (ELEC 570). For all these courses, the grouping per program outcome is shown in Table 1, which shows the courses that map against each outcome. Table 1 also shows that the computer basket courses do not contribute towards program outcomes D, F, G, H, and I. The shaded boxes indicate the courses whose mapping value towards that particular outcome is lower than the others’. This mapping requires that, for outcome “A” for example, the computer basket Excel workbook read from the ELEC 330, ELEC 335, and ELEC 570 workbooks. Let us have a detailed look at this mapping process and weight assignment.

As an example, consider the course group related to outcome “E”; Table 1 shows that there are three courses. When the course outcomes (COs) of each course were examined, it was found that Computer Programming (ELEC330) maps to outcome “E” by 85% of its COs and to outcome “K” by 15%; Microprocessors (ELEC451) maps to outcome “E” by 100% of its COs; Java Programming Applications (ELEC561) maps to outcome “E” by 75% of its COs and to outcome “C” by 25%; and Special Topics in Computer (ELEC570) maps to outcome “C” by 75% of its COs, to outcome “A” by 5%, to outcome “B” by 15%, and to outcome “E” by 5%. The question raised here is “Shall we include all four of these courses in evaluating outcome “E” from the computer basket?” A similar question can be raised for the remaining baskets. In order to judge which courses may effectively be included in a basket when calculating a particular outcome “m,” we rewrite (2) for one basket as follows:

\[
\text{Outcome}_m = \frac{1}{2}\left[\frac{1}{M_1}\sum_{i=1}^{M_1} a_i\,\mathrm{FC}_i + \frac{1}{M_2}\sum_{i=1}^{M_2} b_i\,\mathrm{SC}_i\right] = \frac{1}{2}\left[\frac{a_1\mathrm{FC}_1 + a_2\mathrm{FC}_2 + a_3\mathrm{FC}_3 + \cdots + a_{M_1}\mathrm{FC}_{M_1}}{M_1} + \frac{b_1\mathrm{SC}_1 + b_2\mathrm{SC}_2 + b_3\mathrm{SC}_3 + \cdots + b_{M_2}\mathrm{SC}_{M_2}}{M_2}\right], \tag{3}
\]

where \(a_1, a_2, \ldots, a_{M_1}\) represent the mapping weight of each course offered to students for outcome “m,” and \(M_1\) is the number of courses (\(\mathrm{FC}_i\)) offered from a basket in the Fall Semester. Thus, the first term of (3) represents the weighted average of outcome “m” in the Fall Semester. Similarly, \(b_1, b_2, \ldots, b_{M_2}\) represent the weight of each course offered to students for outcome “m,” and \(M_2\) is the number of courses (\(\mathrm{SC}_i\)) offered in the Spring Semester. The two terms are then averaged to calculate the impact of a basket on outcome “m” for a year. This exercise is repeated for each basket of the program and then averaged over all baskets. Let us analyze this from a calculation perspective. As a simple case, consider that \(M_1\) and \(M_2\) each equal four and that there are five (\(L = 5\)) baskets per program; this means that each semester each outcome would be averaged over (\(L \times M = 20\)) twenty courses, and thus over forty courses for the two semesters of the year, for the annual assessment of the program. This amounts to a great many calculations, which can be reduced if we look carefully at (3). This equation includes all courses that map to outcome “m,” whether with a low or a high mapping value. The impact of the courses with a low mapping to an outcome “m” is obviously insignificant when the outcome calculation is averaged over all baskets and then over a year. In other words, if we consider in (3) only those courses that map by more than (for example) 75% to outcome “m,” the expected results per outcome per year will not change drastically. Based on this analysis, the terms \(a_i\) and \(b_i\) in (3) will all be greater than 0.75, and the multiplication terms can thus be treated as equal, reducing (3) to a simple averaging process over fewer terms than \(M_1\) or \(M_2\). It was thus concluded that a 25% threshold may be used for course mappings to be included in the group. Thus, the courses mapped to program outcome “E” by less than 25% were not added to this group and are highlighted as shaded in Table 1. This weight setting is decided by the faculty members within the computer focus group and is modified only when the course outcomes or program outcomes change, for example, due to a change in the educational objectives of the program. For the purpose of implementation, these weight mappings are inserted in the “settings” sheet of the Excel workbook (see Figure 3).
As the Microprocessors (ELEC451) course maps 100% to program outcome “E,” its mapping to all course outcomes is weighted as value “3” on a scale of 1–3, where 1 is weak, 2 is medium, and 3 is strong. In Figure 3, this is clearly shown in the “course outcomes” portion and is also reflected in the description of each course outcome, tailed by a specific program outcome, in the “ABET program outcomes” portion.
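The thresholded grouping just described may be sketched as follows; the 25% threshold follows the text, while the function and array names are illustrative:

Function BasketOutcome(weights() As Double, scores() As Double, n As Integer) As Double
 ' Sketch: include only the courses whose mapping weight to outcome "m"
 ' meets the 25% threshold, then average their outcome scores (cf. (3)).
 Dim i As Integer, included As Integer
 Dim total As Double
 Const THRESHOLD As Double = 0.25
 For i = 1 To n
  If weights(i) >= THRESHOLD Then
   ' weights above the threshold are treated as equal (see text),
   ' so this reduces to a simple average over the included courses
   total = total + scores(i)
   included = included + 1
  End If
 Next i
 If included > 0 Then BasketOutcome = total / included
End Function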

In order to enable this implementation, separate Visual Basic macros were written and framed as application buttons on the computer basket Excel workbook. Visual Basic was chosen as it is simple to implement and works well with the Microsoft Excel program [13]. The buttons read the required values and comments from specific rows and columns of each concerned course workbook for each outcome, then average the outcome values and place the results and comments in the specified fields of the outcome-specific sheets of the computer basket workbook. A sample Visual Basic macro for the outcome “J” button is shown in Algorithm 1. It can be seen from Algorithm 1 that the area to be read from each course workbook is the same; this allows uniformity and helps in code reuse. This completes the automation of the assessment at the basket level, as represented by (3). This procedure is repeated for all outcomes of the program per basket. Like outcome “E,” buttons are created and placed within the computer basket workbook. From the implementation perspective, the required course workbook files are placed in the same directory, and the buttons, when executed, perform the required function for the computer basket assessment. A sample look at the computer basket is shown in Figure 5. This figure shows the analysis of two courses for outcome “E” in the computer basket. Compared to Table 1, the course ELEC570 was not included, as it weighs less than 25% towards outcome “E,” and ELEC561 was not offered in the Spring 2010 Semester. The figure also shows the previous year’s outcome values to provide an accumulative view of outcome “E” in the computer basket over the last two years. Also shown is the qualitative analysis of both courses, which is looked at by the teaching faculty and the program management in succeeding semesters. The bottom of Figure 5 shows links to the individual course assessments per basket and outcome (A through K); these are sheets in the computer basket workbook and show the required details once clicked. As a whole, the workbook enables a complete quantitative view of the basket for the program and may be used by stakeholders vis-à-vis the specialization requirements of the industry. The final program assessment combines the assessment from each basket workbook; in terms of implementation, it simply reads from each basket in a similar fashion, since each area (basket) assessment is itself an Excel workbook.

Sub ButtonJ_Click()
 ' Read outcome "J" values from the first course workbook (ELEC462)
 Dim xl As New Excel.Application
 Dim xlw As Excel.Workbook
 ' Open the Excel file
 Set xlw = xl.Workbooks.Open("C:\Users\ELEC462.xlsx")
 xlw.Sheets("ELEC462").Select
 ' Copy the outcome values and comment from fixed cells of the course sheet
 Range("C6").Value = xlw.Application.Cells(19, 9).Value
 Range("D6").Value = xlw.Application.Cells(20, 9).Value
 Range("E6").Value = xlw.Application.Cells(21, 9).Value
 Range("F6").Value = xlw.Application.Cells(25, 7).Value
 ' Close the workbook without saving changes
 ' (to save changes, replace the False below with True)
 xlw.Close False
 ' Free memory
 Set xlw = Nothing
 Set xl = Nothing
 ' Read outcome "J" values from the second course workbook (ELEC562)
 Dim xl2 As New Excel.Application
 Dim xlw2 As Excel.Workbook
 ' Open the Excel file
 Set xlw2 = xl2.Workbooks.Open("C:\Users\ELEC562.xlsx")
 xlw2.Sheets("ELEC562").Select
 ' Copy the outcome values and comment from the same fixed cells
 Range("C7").Value = xlw2.Application.Cells(19, 9).Value
 Range("D7").Value = xlw2.Application.Cells(20, 9).Value
 Range("E7").Value = xlw2.Application.Cells(21, 9).Value
 Range("F7").Value = xlw2.Application.Cells(25, 7).Value
 ' Close the workbook without saving changes
 xlw2.Close False
 ' Free memory
 Set xlw2 = Nothing
 Set xl2 = Nothing
End Sub
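
Since both halves of Algorithm 1 read the same cell block, the duplication may, as a sketch, be factored into a single helper that takes the file name and destination row; this is a refactoring suggestion, not part of the deployed workbook:

Sub ReadCourseOutcome(xl As Excel.Application, fileName As String, destRow As Integer)
 ' Sketch: factor the repeated read into one helper (the file name and
 ' destination row vary; the source cells are the same in every workbook).
 Dim xlw As Excel.Workbook
 Set xlw = xl.Workbooks.Open(fileName, ReadOnly:=True)
 Cells(destRow, 3).Value = xlw.Application.Cells(19, 9).Value  ' column C
 Cells(destRow, 4).Value = xlw.Application.Cells(20, 9).Value  ' column D
 Cells(destRow, 5).Value = xlw.Application.Cells(21, 9).Value  ' column E
 Cells(destRow, 6).Value = xlw.Application.Cells(25, 7).Value  ' column F
 xlw.Close False
 Set xlw = Nothing
End Sub

Sub ButtonJ_Click()
 ' Usage: one Excel instance serves both reads
 Dim xl As New Excel.Application
 ReadCourseOutcome xl, "C:\Users\ELEC462.xlsx", 6
 ReadCourseOutcome xl, "C:\Users\ELEC562.xlsx", 7
 Set xl = Nothing
End Sub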

5. Overall Program Assessment

In the previous section, the integration of outcome “m” per basket was demonstrated. As there are four specialization baskets in the program, four such workbooks are developed, one for each basket. The concerned faculty in each focus group (per basket) meet regularly at the end of each term and year to assess all the program outcomes addressed by the basket. In effect, the annual meeting is the more effective one, as it includes all the year’s assessments.

A focus group report is generated based on Figure 5 to address the outcome achievements per year. Similar reports are generated by all focus groups. The accumulation of these focus reports leads to the annual assessment of the program, which is prepared every year by the assessment committee. This is what has been implemented in each workbook: the formulation of each program outcome assessment per basket, along with subjective comments. This helps in accumulating the overall program assessment from the baskets. These subprocesses can be visualized in the program assessment process shown in Figure 6; equivalently, this figure provides a detailed view of the program assessment component of Figure 1. At the top of Figure 6, the process shows the outcome data collection and analysis made available from each focus group (per basket). The annual program assessment (usually accumulated over a couple of years) describes the effectiveness of the tools (shown on the right side of the figure) used in doing the course assessments. The continuous refinement of this process is made once feedback from each stakeholder and from the ABET accreditation body is available to the department.

6. Discussion

The implementation of the main components of the EE program assessment at UAE University has been described, with emphasis on how and which tools are used to integrate the various assessment results. In this work, it involved a set of spreadsheets at each level of the assessment hierarchy: at the course level, at the area (basket) level, and then at the program level. At each level, Excel workbooks interact, some providing raw data for calculation and presentation to the others. The tool produces a set of reports that indicate how well students performed for each course outcome (in the course assessment report sheet), what the impact of the course is at the basket level (in the group assessment report), and finally the program outcomes in general. This implementation has been in practice at the EE Department of UAEU for the last two years (since 2009) at the basket level, specifically applied to the computer area only. The ease of, and reduction in time for, assessment work attracted all the computer area faculty members to adopt it for course assessment and then watch the automatic integration at the basket level.

This development has encouraged the basket-specific faculty to focus better on the basket-specific assessment for its effective use in meeting the expectations of the electrical engineering program stakeholders in general. In view of its successful adoption in the computer area, the faculty members in the other areas of the EE program have developed an interest in using it for assessment purposes.

The main problem faced in use is the time consumed in opening an Excel workbook, as each one contains a number of spreadsheets. This difficulty is affordable at the course assessment level. At the basket assessment level, the tool opens the various course assessment files one by one to read the required data, copies the text and required data, closes the files one by one, does the averaging, places the results at the required places in another Excel workbook, and then closes the output file. The opening and closing of the files and the data processing were implemented in the form of Visual Basic macros, framed as buttons, and placed in the appropriate workbooks. It was noticed that the time consumed was considerably high (about 4-5 minutes, depending on the speed of the computer used) for the button “K” macro. This is justifiable, as the macro for button “K” operated on seven course assessment workbooks, opening and closing each file, doing the calculation, and placing the result in another workbook. It should be noted, however, that this activity is done only once, at the end of the department’s academic term, when course and program assessments are due. The time consumption is in fact inherent in Microsoft Excel, as the database and the calculation are embedded in the same tool. The reason behind the use of Microsoft Excel is that it is popular with instructors for being simple, easy to reuse, and flexible, with simple-to-make changes.
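Part of this cost may be trimmed within Excel itself, for example, by cycling all course files through a single hidden Excel instance with screen updating suppressed; the following is an optimization sketch, not part of the deployed macros, and the file list is illustrative:

Sub FastBasketRead()
 ' Sketch: cycle the course workbooks through one hidden Excel instance
 ' instead of creating a new instance per file, with updates suppressed.
 Dim xl As New Excel.Application
 Dim xlw As Excel.Workbook
 Dim files As Variant, i As Integer
 xl.Visible = False
 xl.ScreenUpdating = False
 files = Array("C:\Users\ELEC462.xlsx", "C:\Users\ELEC562.xlsx")  ' illustrative
 For i = LBound(files) To UBound(files)
  Set xlw = xl.Workbooks.Open(files(i), ReadOnly:=True)
  Cells(6 + i, 3).Value = xlw.Application.Cells(19, 9).Value
  xlw.Close False
 Next i
 xl.Quit
 Set xl = Nothing
End Sub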

Alternatively, a database such as Microsoft Access or Oracle may be selected to store all the required data in the form of tables, serving as the raw data for the course, basket-specific, and program assessment levels. Next, an engine such as Java or Visual Basic may be selected to read, calculate, and present the data in the required format. This application may then be stored on a server so as to be accessible to all faculty and instructors for data access and use.

7. Conclusions

The objectives of this work included the evaluation of the program assessment process at the basket level, a detailed view of the course assessment, ease of use at the faculty/instructor level, and so forth. The approach also provided a structured and detailed view of the program assessment without loss of accuracy, as shown in (2). This structured view is expected to meet the expectations of the program stakeholders in investigating the strengths of the program vis-à-vis industry requirements. As this application was developed for an ABET-accredited program, it can be concluded that, for the purpose of uniformity, such an approach can be applied to all the engineering programs of the college.