Page 313 - Ai Book - 10
5. When Precision is high and Recall is high, the F1 Score is ________________.
6. Accuracy alone may not be enough to ensure the model’s performance on data that has never been
________________.
7. In model evaluation, True Negative is an outcome where the model predicts ________________.
C. State ‘T’ for True or ‘F’ for False statements.
1. A True Negative represents the model predicting water shortage, and there is no water shortage.
2. A False Negative indicates the model predicting rain, but there is no rain.
3. The formula for Precision is True Positive divided by the sum of True Positive and True Negative.
4. F1 Score’s perfect value is 0.
5. When Precision is high and Recall is low, the F1 Score is low.
6. A confusion matrix is a summary of training data.
7. F1 Score is the harmonic mean of Precision and Recall.
8. If Precision is high and Recall is high, the F1 Score is high.
9. Accuracy alone may be insufficient when there is an imbalance in the dataset.
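The formulas tested in the statements above can be sketched in a few lines of Python; the counts below (tp, fp, fn, tn) are hypothetical example values for a rain-prediction model, not data from the chapter.

```python
# Hypothetical confusion-matrix counts for a rain-prediction model:
# tp = predicted rain, it rained; fp = predicted rain, no rain;
# fn = predicted no rain, it rained; tn = predicted no rain, no rain.
tp, fp, fn, tn = 40, 10, 5, 45

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)   # note: denominator is TP + FP, not TP + TN
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(round(accuracy, 2), round(precision, 2), round(recall, 2), round(f1, 2))
```

With these counts, Precision and Recall are both high, so the F1 Score is also high; the perfect F1 Score is 1, not 0.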
D. Very short answer type questions.
1. Why is Precision considered an important evaluation criterion?
2. What is the perfect value for an F1 Score?
3. When is the F1 Score considered high?
4. How does the confusion matrix contribute to model evaluation?
5. What does a False Positive outcome signify in model evaluation?
6. In the case of predicting unexpected rain, what does a False Positive indicate?
7. Why is it important to use a test dataset that has never been used for training during evaluation?
8. How does Precision contribute to model evaluation?
E. Short answer type questions.
1. Why is Precision considered a crucial evaluation criterion for models?
2. What is the significance of a perfect F1 Score being 1?
3. How does a False Positive outcome impact model evaluation?
4. Why might Accuracy be misleading in certain model evaluation scenarios?
5. In the case of predicting unexpected rain, explain the significance of a False Positive.
6. When is F1 Score considered low in model evaluation?
7. Why is it crucial to use a test dataset that has never been used for training during evaluation?
8. How does Precision contribute to model evaluation, especially in tasks such as flood predictions?
9. What key insight does a confusion matrix provide in model evaluation?
F. Long answer type questions.
1. Explain the concept of a False Negative and its implications in evaluating the performance of an AI model.
2. Define F1 Score and discuss why it is considered a balanced metric in model evaluation.
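Several questions above ask why Accuracy alone may be misleading on imbalanced data. A minimal sketch, using made-up class proportions (95 "no flood" cases to 5 "flood" cases), shows how a model that never predicts the rare event can still score a high Accuracy:

```python
# Hypothetical imbalanced test set: 0 = no flood, 1 = flood.
actual = [0] * 95 + [1] * 5
predicted = [0] * 100  # a trivial model that never predicts a flood

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
accuracy = (tp + tn) / len(actual)

print(accuracy)  # high Accuracy even though every flood was missed
```

Here Accuracy is 0.95 while Recall for the flood class is 0, which is exactly the imbalance problem the questions refer to.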
187