Ad Hoc Analysis: Support Team Staffing Forecast Model
An exploratory data analysis (EDA) project aimed at predicting demand for VEText and AVS training and help desk support over a six-month horizon. The study applied statistical modeling and time series analysis to assess trends, measure variability, and provide staffing recommendations for future program growth.
Context
VEText and AVS are core support programs that require careful resource planning to maintain service quality. Leadership needed insight into whether historical training and help desk data could be used to forecast future demand, particularly in anticipation of program migrations and potential surges in support requests.
Objective
Analyze historical training and help desk data for VEText and AVS.
Identify trends, stability, and variability in volume.
Build predictive models to forecast demand over the next six months.
Provide actionable staffing and monitoring recommendations.
Role
Lead Data Analyst (designed and implemented all reporting and analysis)
Duration
3 weeks (data exploration, analysis, and report deliverable)
Tools & Methodologies
Python, Excel
Statistical Models:
Time-Series Analysis: to identify trends and seasonality
Logistic Regression: to classify and predict support volume patterns
Linear Regression: to quantify variable relationships and forecast demand
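To make the approach concrete, here is a minimal Python sketch of the linear-regression forecasting step, using scikit-learn and a synthetic monthly series; the data values and feature choice are placeholders, not the project's actual VEText/AVS figures.

```python
# Minimal sketch only: fit a linear trend to monthly counts and project
# six months ahead. Values are synthetic, not the VEText/AVS data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly help desk ticket counts (12 months)
tickets = np.array([190, 240, 205, 310, 180, 225, 260, 198, 215, 270, 233, 208])

X = np.arange(len(tickets)).reshape(-1, 1)   # month index as the single feature
model = LinearRegression().fit(X, tickets)

# Project the fitted trend six months ahead
future = np.arange(len(tickets), len(tickets) + 6).reshape(-1, 1)
print(np.round(model.predict(future), 1))
```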
Training Session Analysis
Predictions showed relatively stable training demand (1–9 sessions/month for VEText, 1–3 sessions/month for AVS).
Standard deviation remained close to the mean, signaling stability and reasonable predictive accuracy.
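As a rough illustration of this stability check, the snippet below computes the mean, standard deviation, and observed range for a hypothetical monthly session series; the values are placeholders only.

```python
# Rough illustration of the stability check: compare the standard
# deviation of monthly training sessions to the mean. Placeholder data,
# not the actual VEText/AVS counts.
import statistics

monthly_sessions = [3, 5, 2, 7, 4, 6, 1, 5, 8, 3, 4, 6]

mean = statistics.mean(monthly_sessions)
sd = statistics.stdev(monthly_sessions)
print(f"mean={mean:.1f}, sd={sd:.1f}, "
      f"observed range={min(monthly_sessions)}-{max(monthly_sessions)} sessions/month")
```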
Help Desk Ticket Analysis
VEText mean: 217 tickets/month (SD 74.8).
AVS mean: 64.7 tickets/month (SD 44.2).
Logistic regression accuracy was low (20% for VEText, 25% for AVS) due to high month-to-month variability.
Outlier removal improved interpretability but reduced forecast confidence.
Forecasts suggested monthly ticket volumes of 152–277 for VEText and 47–115 for AVS.
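The classification step might look roughly like the sketch below: each month is labeled high or low volume relative to the median, a logistic regression is fit on the month index, and accuracy is scored on a held-out split. The data, labeling rule, and feature choice are illustrative assumptions, not the project's actual setup.

```python
# Sketch of the classification step on synthetic monthly ticket counts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic monthly ticket counts; not the actual VEText/AVS data
tickets = np.array([152, 277, 210, 195, 240, 188, 265, 173, 221, 205, 249, 181])

X = np.arange(len(tickets)).reshape(-1, 1)           # month index
y = (tickets > np.median(tickets)).astype(int)       # 1 = high-volume month

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```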
Key Findings
Training Demand is Stable
VEText and AVS training sessions showed relatively low variability, with standard deviations close to their means.
Predictive models achieved 53% accuracy for VEText and 67% for AVS, indicating moderate reliability.
Forecasts suggest training volumes will remain steady (1–9 monthly sessions for VEText; 1–3 for AVS).
Help Desk Demand is Highly Variable
VEText averaged 217 tickets/month with wide fluctuations (SD ~75).
AVS averaged 65 tickets/month with even higher proportional variability (SD ~44).
Predictive models had low accuracy (20–25%) due to noise and outliers, requiring careful interpretation.
Forecast Reliability Differs by Data Type
Training data supports reasonable forecasting confidence.
Help desk data is less predictable, so staffing decisions are best informed by ongoing monitoring rather than long-term projections.
Operational Implications
Staffing for training may need modest growth (1 additional staff member) if user-facing programs expand.
Help desk staffing should remain flexible, with potential increases tied to post-Cerner migration surges.
Data Tracking Gaps
Current metrics focus on volume but don’t capture user impact or service quality.
Enhanced tracking (e.g., resolution time, user reach, satisfaction) would provide a fuller picture of program effectiveness.
Challenges & Solutions
Challenge: Low accuracy in help desk forecasting due to outliers and variability.
Solution: Outlier removal, cautious interpretation, and a recommendation to monitor rather than over-staff prematurely.
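The exact outlier rule used in the project isn't documented in this summary; a common approach is an IQR-based filter like the sketch below, shown with synthetic counts purely as an example.

```python
# Illustrative IQR-based outlier filter for monthly ticket counts.
# The 1.5*IQR cutoff is a common default, not the project's documented rule.
import numpy as np

# Synthetic monthly ticket counts with a couple of spikes
tickets = np.array([152, 277, 210, 650, 240, 188, 265, 47, 221, 205, 249, 181])

q1, q3 = np.percentile(tickets, [25, 75])
iqr = q3 - q1
mask = (tickets >= q1 - 1.5 * iqr) & (tickets <= q3 + 1.5 * iqr)

filtered = tickets[mask]
print(f"kept {mask.sum()} of {len(tickets)} months; filtered mean = {filtered.mean():.1f}")
```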
Challenge: Logistic regression only moderately accurate on training data.
Solution: Cross-validated results against linear regression and stability metrics to strengthen confidence.
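One way such a cross-check could be run is sketched below: k-fold cross-validation of the logistic classifier alongside a simple linear trend fit on the same series. The data are synthetic and this is not the project's actual validation code.

```python
# Sketch of cross-checking the classifier with 3-fold cross-validation
# and a linear trend fit on the same synthetic monthly series.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic monthly training session counts
sessions = np.array([3, 5, 2, 7, 4, 6, 1, 5, 8, 3, 4, 6])
X = np.arange(len(sessions)).reshape(-1, 1)              # month index
y_class = (sessions > np.median(sessions)).astype(int)   # high vs. low month

# Classification accuracy under 3-fold cross-validation
cv_acc = cross_val_score(LogisticRegression(), X, y_class, cv=3, scoring="accuracy")

# In-sample fit of a simple linear trend on the raw counts
trend_r2 = LinearRegression().fit(X, sessions).score(X, sessions)

print(f"mean CV accuracy: {cv_acc.mean():.2f} | linear trend R^2: {trend_r2:.2f}")
```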
Recommendations
Add 1 training staff member if expanding user-facing programs.
Monitor post-Cerner migration and plan for 1 additional staff if volumes increase.
Improve tracking to capture impact (e.g., number of users reached, resolution outcomes).
Immediate Actions (Next 30 Days)
Add 1 training staff resource if programming expands.
Closely monitor the help desk post-migration and scale the team reactively.
Implement better tracking to measure user impact beyond volume.
Conclusion
This project demonstrated that predictive modeling is viable for training demand but less reliable for help desk forecasting due to variability. By combining statistical insights with operational recommendations, the analysis provided leadership with clear next steps for staffing and tracking improvements to ensure program sustainability.