Use of Data for Program Improvement
Summary: The unit has fully developed evaluations and continually seeks stronger relationships in the evaluations, revising both the underlying data systems and analytic techniques as necessary. The unit has a system in place to use data to make decisions regarding changes in programs and to continue to collect and analyze data on the effects of those changes. Candidates and faculty review data on their performance regularly and develop plans for improvement based on those data.


Data Collection

Data are collected at the various transition points presented earlier and are analyzed regularly. They are first reviewed by the faculty and the coordinator of a given program. Program coordinators regularly review candidate GPAs and scores on various rating instruments, including those used during clinical experiences (e.g., PDI, PPI, CPA). As these data are reviewed and examined, the program faculty and coordinator can identify areas of weakness in the program and determine what should be changed, and how, to improve it at the program level. Program coordinators are expected to share a summary of their review and any plans for improvement at the meetings of deans, associate deans, department heads, and program coordinators.

As for clinical experiences, the unit values input from clinical supervisors in K-12 settings in addition to college supervisors. Near the end of each clinical experience semester, mentors and college supervisors will be evaluated through surveys: candidates rate both mentors and supervisors, mentors rate supervisors, and supervisors rate mentors. The program coordinators will individually and confidentially share the results with college supervisors as a tool for reflection.

A summary of the various data collected will be shared at the Education Partners Committee meetings twice a year for stakeholders' input and discussion of improvements. It is important to involve the stakeholders as the unit works to balance the supply of and demand for educators (see Data Flow).

Data-Driven Changes

In 2007, the unit operated three diploma programs and one master's program. The diploma programs required serious review in many respects. At that time there were Early Childhood, Special Education, and Primary Education programs, and the Primary Education Program was operated in collaboration with Texas A&M University under a three-year contract with a one-year extension. In this program, Texas A&M University sent a few faculty members to Doha for two months at a time to co-teach program courses with faculty from the College of Education.

During this period, the three diploma programs functioned completely independently of each other. Each program had its own admission criteria (e.g., GPA, TOEFL requirement, and prerequisite courses in computer skills and English for Teachers), each had different field and internship credit hours, and each varied in the total credit hours necessary to complete the program: 29 for Early Childhood, 27 for Special Education, and 30 for Primary Education.

In addition, although all three programs offered courses that were similar in content (e.g., Educational Psychology, Human Growth), each course was taught independently for a single program; the focus was on individual programs rather than on the unit as a whole. This lack of consistency in requirements across the programs presented serious problems as the unit set out to develop an assessment system aligned across programs at each level. Because the contract with Texas A&M was to end in Spring 2009, the College of Education was to take over the program, and a new Diploma Program in Secondary Education was to be added in Fall 2008, the decision was made in early 2008 to review and revise all diploma programs in preparation for Fall 2009.

To develop a coherent assessment system for all diploma programs, the unit made significant changes to streamline program requirements, including admission criteria, total credit hours, core education courses taken by candidates across programs rather than within separate departments, and the instruments used by candidates and for data collection during field experience and clinical practice. These revisions were necessary for the unit to build a coherent, consistent assessment system in which data are aggregated and disaggregated for evaluation purposes. The original document explaining the curriculum changes, submitted to the Vice President of Academic Affairs in May 2008, is available onsite.

Although most programs are new and data collection and analysis are in the beginning stages, the unit has both a process and a history of data-driven decision making. For example, data related to educational needs in Qatar determined which programs the unit would initiate and helped structure those programs. All new programs were based upon research to determine whether there were interested and qualified potential applicants. The choice to limit the B.Ed. program to males was based on university demographics and identified (prioritized) needs in Independent Schools. More importantly, a process has been established within the unit so that findings from data will inform future decisions.

Faculty Member Access

Each faculty adviser has access to his/her candidates' grades and other evaluation results throughout the transition points. At the end of each semester, each program coordinator calls a meeting with the program faculty to review the assessment results for the unit learning outcomes and a summary of candidate performance at the various checkpoints. Faculty are also responsible for submitting course-level scores on pre-identified unit learning outcomes; these scores are reviewed collectively by each program at the end of each semester for program improvement and reported to the university's Director of Academic Programs and Learning Outcomes Assessment.

Sharing Data with Stakeholders

The unit assessment system provides multiple opportunities for candidates to reflect on their own performance. A set of requirements at each checkpoint requires each candidate to review and reflect on his/her own performance level before entering the next phase. In their coursework, candidates receive feedback from their instructors on assignments and exams. During clinical practice, both clinical supervisors and college supervisors share the results of ratings on the CPA, PPI, and PDI with candidates. Candidates are encouraged to reflect on the ratings they receive in order to improve their future performance in knowledge, skills, and dispositions. Candidates are not only rated by their clinical and college supervisors; they are also given opportunities to rate themselves using the same observation instruments for comparison.

The unit believes in modeling reflective practice by using collected data to review programs and make sound decisions about improvements. Assessment of candidates is ongoing throughout the year, and it is critical for faculty to be aware of candidates' performance in order to reflect on the efficacy of courses in preparing competent candidates. In addition, reviewing candidates' scores on instruments used during clinical practice in K-12 settings, such as the CPA, PPI, and PDI, helps faculty locate possible gaps in candidates' knowledge and skills and improve teaching and learning in college classrooms.

At the Education Partners Committee meetings held twice a year, a summary of candidate performance will be presented to the unit's stakeholders, beginning with the Fall 2010 meeting. The stakeholders will have opportunities to provide feedback, and any plans for improvement will be discussed.