Page 293 - AI Book - 10
3. Describe the process of TF-IDF and its significance in transforming text into numeric form.
Ans: TF-IDF, or Term Frequency-Inverse Document Frequency, assigns numerical weights to words based on
their frequency and rarity. It plays a vital role in transforming text into a numeric format, indicating the
importance of specific words in a document or corpus. This method is commonly used in tasks such as
document classification and topic extraction.
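The weighting described above can be sketched in a few lines of Python. This is a minimal illustration using a toy three-document corpus (the sentences are invented for the example, not taken from the book): term frequency is the share of a document occupied by the term, and inverse document frequency down-weights terms that appear in many documents.

```python
import math

# Toy corpus: each document is a list of tokens (illustrative data only).
docs = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "log"],
    ["cats", "and", "dogs", "are", "pets"],
]

def tf(term, doc):
    # Term frequency: how often the term occurs in this document.
    return doc.count(term) / len(doc)

def idf(term, corpus):
    # Inverse document frequency: terms found in fewer documents
    # (rarer terms) receive a higher weight.
    df = sum(1 for d in corpus if term in d)
    return math.log(len(corpus) / df)

def tfidf(term, doc, corpus):
    return tf(term, doc) * idf(term, corpus)

# "the" appears in two of the three documents while "cat" appears in
# only one, so "cat" gets the higher TF-IDF weight in the first document
# even though "the" occurs more often there.
print(tfidf("the", docs[0], docs))
print(tfidf("cat", docs[0], docs))
```

This is why TF-IDF is useful for document classification and topic extraction: common function words score near zero, while words distinctive to a document score highly.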
4. How does the development of a natural language processing project follow a five-stage lifecycle, and
what is the significance of each stage?
Ans: The NLP project lifecycle consists of problem scoping, data acquisition, data exploration, modeling,
and evaluation. Problem scoping involves identifying and defining the problem, while data acquisition
focuses on collecting relevant data. Data exploration helps in understanding and cleaning the collected
data. Modeling involves feeding normalized text into an NLP-based AI model, and evaluation assesses
the model’s accuracy in generating relevant answers.
G. Application based questions.
1. In a business setting, how can a Conversational User Interface, such as a chatbot, improve customer
engagement and satisfaction?
Ans: A Conversational User Interface, such as a chatbot, can enhance customer engagement by providing
instant and personalized support. It allows businesses to interact with customers 24/7, addressing
queries promptly and improving overall customer satisfaction.
2. Consider a scenario where a company wants to develop a chatbot for a specific industry. What steps
would you suggest in the development lifecycle of the natural language processing project to ensure its
effectiveness in understanding industry-specific language and nuances?
Ans: In this scenario, the project development should begin with a thorough problem scoping phase,
understanding the industry’s unique language and challenges. Data Acquisition should focus on gathering
industry-specific conversational data, and during the data exploration stage, attention should be given
to cleaning and normalising data based on industry jargon. The subsequent stages such as modeling and
evaluation should also prioritise industry relevance.
3. Imagine a world without the spell check functionality in applications such as Grammarly. How would it
impact written communication, especially in professional settings?
Ans: The absence of spell check would likely result in more spelling errors in written communication. In
professional settings, accurate and error-free communication is crucial, and the lack of spell check would
increase the chances of conveying an unprofessional image due to avoidable mistakes.
H. Assertion and reason-based questions.
1. Assertion: Text normalization is a crucial step in Natural Language Processing.
Reason: Text normalization simplifies textual data by transforming it into a standardized form, aiding in
subsequent language processing tasks.
a. Both Assertion and Reason are true, and Reason is the correct explanation of Assertion.
b. Both Assertion and Reason are true, but Reason is NOT the correct explanation of Assertion.
c. Assertion is true, but Reason is false.
d. Assertion is false, but Reason is true.
2. Assertion: Bag of Words is a feature extraction method in Natural Language Processing.
Reason: Bag of Words involves creating a vocabulary and representing text documents as vectors based
on word frequencies.
a. Both Assertion and Reason are true, and Reason is the correct explanation of Assertion.
b. Both Assertion and Reason are true, but Reason is NOT the correct explanation of Assertion.
c. Assertion is true, but Reason is false.
d. Assertion is false, but Reason is true.
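The Reason in question 2 can be illustrated with a minimal Bag of Words sketch (the two sentences are toy data invented for the example): build a vocabulary of unique words across the corpus, then represent each document as a vector of word counts over that vocabulary.

```python
# Toy corpus for the sketch (illustrative data only).
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
]

# Vocabulary: sorted unique words across the whole corpus.
vocab = sorted({word for doc in docs for word in doc.split()})

def bag_of_words(doc, vocab):
    # Vector of counts: one entry per vocabulary word,
    # holding how often it appears in this document.
    tokens = doc.split()
    return [tokens.count(word) for word in vocab]

for doc in docs:
    print(bag_of_words(doc, vocab))
```

Note that word order is discarded, only frequencies survive, which is what makes this a feature extraction method rather than a full representation of the text.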
167