A focused collection of questions on Model Evaluation to sharpen your skills for technical interviews. Minimal code sketches for several of the topics follow the list.
Classification Metrics (Accuracy, Precision, Recall, F1-Score)
Regularization Techniques (L1, L2, Dropout)
Overfitting and Underfitting in Models
The Purpose of Cross-Validation
Evaluation Metrics for NLP (e.g., BLEU, ROUGE)
The Importance of Explainable AI (XAI) and Methods like LIME or SHAP
The Bias-Variance Trade-off
The AUC-ROC Curve
Regression Metrics (MAE, MSE, RMSE)
Data Drift and Concept Drift
Loss Functions (e.g., Cross-Entropy, MSE)
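To ground the classification-metrics topic, here is a minimal sketch using scikit-learn's metric functions; the `y_true`/`y_pred` values are made up for illustration:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical ground-truth labels and model predictions for a binary task
y_true = [0, 1, 1, 0, 1, 1, 0, 0]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]

print("Accuracy: ", accuracy_score(y_true, y_pred))   # fraction of correct predictions
print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1-Score: ", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```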
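For L1 vs. L2 regularization, a quick way to see the difference is to compare the coefficients of Lasso (L1) and Ridge (L2) fit on the same data; a sketch with scikit-learn, where the synthetic dataset and `alpha` values are arbitrary choices:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Synthetic regression data where only a few features are informative
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=5.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)  # L1: drives some coefficients exactly to zero
ridge = Ridge(alpha=1.0).fit(X, y)  # L2: shrinks coefficients but keeps them nonzero

print("Lasso zero coefficients:", np.sum(lasso.coef_ == 0))
print("Ridge zero coefficients:", np.sum(ridge.coef_ == 0))
```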
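Overfitting is easiest to diagnose by comparing train vs. held-out accuracy; a sketch contrasting an unconstrained and a depth-limited decision tree, where the dataset and depth limits are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

deep = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X_tr, y_tr)
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# A large train/test gap is the classic overfitting signature
print("deep tree:   ", deep.score(X_tr, y_tr), deep.score(X_te, y_te))
print("shallow tree:", shallow.score(X_tr, y_tr), shallow.score(X_te, y_te))
```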
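For cross-validation, scikit-learn's `cross_val_score` handles the splitting and scoring in one call; a minimal sketch on the built-in iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation: train on 4 folds, evaluate on the held-out fold, rotate
scores = cross_val_score(model, X, y, cv=5)
print("fold accuracies:", scores)
print("mean / std:     ", scores.mean(), scores.std())
```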
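For the NLP metrics, BLEU scores n-gram overlap between a candidate and its references; a sketch using NLTK, where the sentences are invented and smoothing is applied because short sentences often have zero higher-order n-gram matches:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]  # list of tokenized references
candidate = ["the", "cat", "is", "on", "the", "mat"]     # tokenized candidate sentence

smooth = SmoothingFunction().method1
print("BLEU:", sentence_bleu(reference, candidate, smoothing_function=smooth))
```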
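The AUC-ROC topic pairs naturally with `roc_auc_score`, which operates on predicted probabilities rather than hard labels; a minimal sketch with made-up scores:

```python
from sklearn.metrics import roc_auc_score, roc_curve

y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]  # hypothetical predicted probabilities for class 1

# AUC summarizes the ROC curve: 1.0 is perfect ranking, 0.5 is random
print("AUC:", roc_auc_score(y_true, y_score))

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("FPR:", fpr)
print("TPR:", tpr)
```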
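For regression metrics, MAE, MSE, and RMSE all measure error magnitude but penalize large errors differently; a minimal sketch, computing RMSE as the square root of MSE to stay compatible across scikit-learn versions:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([3.0, -0.5, 2.0, 7.0])  # hypothetical targets
y_pred = np.array([2.5,  0.0, 2.0, 8.0])  # hypothetical predictions

mae = mean_absolute_error(y_true, y_pred)  # average absolute error
mse = mean_squared_error(y_true, y_pred)   # squaring penalizes large errors more
rmse = np.sqrt(mse)                        # back in the target's original units
print("MAE:", mae, "MSE:", mse, "RMSE:", rmse)
```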
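Data drift can be flagged by comparing a feature's training distribution with its live distribution; a sketch using SciPy's two-sample Kolmogorov-Smirnov test, where the simulated shift and the 0.05 threshold are illustrative choices:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)  # training-time distribution
live_feature = rng.normal(loc=0.5, scale=1.0, size=1000)   # simulated drifted live data

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:  # common, if arbitrary, significance threshold
    print("Possible data drift detected, p =", p_value)
```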
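Finally, implementing binary cross-entropy and MSE by hand is a common interview exercise for the loss-functions topic; a minimal NumPy sketch, where the epsilon clip is a standard numerical-stability trick:

```python
import numpy as np

def binary_cross_entropy(y_true, y_prob, eps=1e-12):
    """Average negative log-likelihood of the true labels under predicted probabilities."""
    y_prob = np.clip(y_prob, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

def mse_loss(y_true, y_pred):
    """Mean squared error, the standard regression loss."""
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

print(binary_cross_entropy(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.7])))
print(mse_loss([3.0, 2.0], [2.5, 2.5]))
```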