

  1. COACHE Faculty Satisfaction Survey Results, presented to Faculty Assembly on September 9, 2020 • John Wallace, Vice Provost for Faculty Diversity and Development • Amanda Brodish, Director of Data Analytics & Pathways for Student Success • Lu-in Wang, Vice Provost for Faculty Affairs

  2. Why Survey Faculty? • Aligns with the Plan for Pitt • Supports efforts to recruit, develop, and retain a diverse and excellent faculty • Informs a roadmap for implementing data-driven changes to increase faculty satisfaction

  3. The COACHE Survey • Collaborative on Academic Careers in Higher Education • Based at the Harvard Graduate School of Education • Consortium of over 300 institutions • Survey of faculty satisfaction • Pitt first participated in 2016

  4. Survey Themes • Nature of Work (Research, Teaching, Service) • Resources & Benefits • Tenure & Promotion • Collaboration & Mentoring • Leadership & Governance • Department Culture

  5. Methodology • Survey open from February 12 to April 7, 2019 • Most full-time faculty eligible to participate • Newly hired faculty excluded • Some faculty with administrative roles excluded • Clinical faculty in the School of Medicine (SOM) excluded • Pitt's response rate was 42% (similar to the 46% response rate across other participating institutions)

  6. Response Rates [bar charts] • By tenure status: Tenured, Tenure Stream, Appointment Stream • By rank: Full, Associate, Assistant, Instructor/Lecturer/Other • Group response rates ranged from 39.6% to 47.7% (respondent counts from 186 to 591)

  7. Response Rates [bar charts] • By gender: Women, Men • By race/ethnicity: Asian, Black, Latinx, White, Other • Group response rates ranged from 25.8% to 71.3%

  8. Key Outcomes • 25 Key Benchmarks • Each benchmark assessed with multiple questions • Gives a general sense of how faculty feel about that aspect of their work/life • Nested within 7 broad areas (e.g., Nature of Work, Tenure & Promotion, Leadership)
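
A rough sketch of how a multi-item benchmark score could be assembled is shown below (in Python). The item names, response values, and equal-weight averaging are illustrative assumptions, not COACHE's actual scoring method.

    # Illustrative only: item wording and simple averaging are assumptions,
    # not COACHE's actual scoring method.
    from statistics import mean

    # Hypothetical 1-5 responses from one faculty member to the items behind a single benchmark.
    teaching_items = {
        "satisfaction_with_number_of_courses": 4,
        "satisfaction_with_level_of_courses": 3,
        "satisfaction_with_quality_of_students": 4,
    }

    # One simple benchmark score: the average of that respondent's item responses.
    teaching_benchmark = mean(teaching_items.values())
    print(f"Teaching benchmark score: {teaching_benchmark:.1f}")  # 3.7 on the 1-5 scale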

  9. Comparisons • Cohort: 103 research universities surveyed within the past 3 years • Peers: 5 universities of our choosing from the cohort: 1. Indiana University, 2. Purdue University, 3. University of Texas, 4. University of North Carolina, 5. University of Virginia • Comparisons will focus on Peers

  10. Results Outline • General satisfaction • Pitt relative to peers on key benchmarks • Within Pitt variation on key benchmarks • Variation by subgroups • Pitt 2016 vs. Pitt 2019

  11. General Satisfaction • 73% said that, if they had to do it again, they would select Pitt (Peers avg: 69%) • 74% satisfied with their department as a place to work (Peers avg: 72%) • 75% satisfied with Pitt as a place to work (Peers avg: 67%)

  12. Pitt Benchmark Scores (1 = low, 5 = high)
  • Nature of Work: Research 3.4, Service 3.3, Teaching 3.9
  • Resources & Support: Facilities & Work Resources 3.6, Health & Retirement 4.2, Personal & Family 3.4
  • Collaboration & Mentoring: Interdisciplinary Work 2.8, Collaboration 3.7, Mentoring 3.2
  • Tenure & Promotion: Promotion to Full 3.5, Tenure Expectations 3.3, Tenure Clarity 3.5
  • Leadership: Departmental 3.6, Divisional 3.4, Faculty 3.2, Senior 3.4
  • Governance: Adaptability 3.0, Productivity 3.1, Purpose 3.3, Trust 3.1, Understanding 3.1
  • Appreciation & Departmental Relations: Appreciation & Recognition 3.4, Departmental Collegiality 3.9, Departmental Engagement 3.6, Departmental Quality 3.6

  13. Pitt Compared to Peers [benchmark-by-benchmark comparison chart; legend: Pitt in Top 2, Pitt in Middle 2, Pitt in Bottom 2 among Pitt and its five peers]

  14. Within Pitt Variation: Effect Size • Strength of a phenomenon • Not a test of statistical significance • Emphasizes the size of an effect • Formula: d = (M1 - M2) / SD • Interpretation: Small = 0.10, Medium = 0.30, Large = 0.50
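
A minimal sketch of the effect size calculation on this slide, in Python; the sample scores, group labels, and the pooled standard deviation convention are assumptions for illustration, not the presenters' actual computation.

    # d = (M1 - M2) / SD, as on the slide, using a pooled SD as one common convention.
    from math import sqrt
    from statistics import mean, variance

    def effect_size(group1, group2):
        """Standardized mean difference between two groups of benchmark scores."""
        m1, m2 = mean(group1), mean(group2)
        n1, n2 = len(group1), len(group2)
        # Pooled standard deviation of the two groups (sample variances weighted by df).
        sd = sqrt(((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2))
        return (m1 - m2) / sd

    # Hypothetical benchmark scores (1-5 scale) for two faculty subgroups.
    group_a = [3.8, 3.5, 4.0, 3.6, 3.9]
    group_b = [3.4, 3.2, 3.7, 3.3, 3.5]
    print(f"d = {effect_size(group_a, group_b):.2f}")
    # Compare against the slide's thresholds: 0.10 small, 0.30 medium, 0.50 large.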

  15. Within Pitt Variation: Rank & Tenure Status

  16. Within Pitt Variation: Gender

  17. Pitt Change from 2016 to 2019 [benchmark change chart; legend: Small Effect Size, Medium Effect Size, Large Effect Size]

  18. Caveats & Limitations • Response bias and small cell sizes call some results into question, especially within-group comparisons • Averaging across groups may mask variation in satisfaction by school and/or department • Quantitative results tell only part of the story

  19. Next Steps ✓ Share interactive dashboards with Deans, Directors, and Campus Presidents ✓ Share results with faculty community ✓ www.provost.pitt.edu/COACHE ✓ Letter to faculty ✓ Presentation to Faculty Assembly

  20. Next Steps ✓ Share interactive dashboards with Deans, Directors, and Campus Presidents ✓ Share results with faculty community • Engage specific groups/committees on using these results for data-driven decision-making

  21. QUESTIONS?
