AI Subfields and Technologies: MCQs & Solutions

Machine Learning

  1. Which type of learning involves training a model with labeled data?
    a) Unsupervised Learning
    b) Supervised Learning
    c) Reinforcement Learning
    d) Semi-supervised Learning
    Answer: b) Supervised Learning
  2. In which type of learning does the model learn from the input data without predefined labels?
    a) Supervised Learning
    b) Unsupervised Learning
    c) Reinforcement Learning
    d) Transfer Learning
    Answer: b) Unsupervised Learning
  3. What is the primary goal of reinforcement learning?
    a) To categorize data into different groups
    b) To learn from past experiences and improve performance through rewards and penalties
    c) To predict future values based on past data
    d) To generate new data based on existing patterns
    Answer: b) To learn from past experiences and improve performance through rewards and penalties
  4. Which algorithm is commonly used in supervised learning for classification tasks?
    a) K-Means
    b) Decision Trees
    c) Principal Component Analysis (PCA)
    d) Generative Adversarial Networks (GANs)
    Answer: b) Decision Trees
  5. What is a common use case for unsupervised learning?
    a) Spam email detection
    b) Image classification
    c) Customer segmentation
    d) Speech recognition
    Answer: c) Customer segmentation (see the sketch after this list)
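
Worked sketch for this section. The snippet below is a minimal illustration, assuming scikit-learn and synthetic data (both hypothetical choices, not part of the questions): a decision tree is fit on labeled examples, as in questions 1 and 4 (supervised learning), while K-Means groups unlabeled "customer" records into segments, as in questions 2 and 5 (unsupervised learning).

```python
from sklearn.datasets import make_classification, make_blobs
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

# Supervised learning: a decision tree trained on labeled data (questions 1 and 4).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)                      # learns from (input, label) pairs
print("classification accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: K-Means segments unlabeled "customer" records (questions 2 and 5).
customers, _ = make_blobs(n_samples=200, centers=3, n_features=2, random_state=0)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)
print("first ten segment assignments:", segments[:10])
```

The decision tree needs the label vector y to learn; K-Means never sees labels and discovers the three segments from the structure of the data alone.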

Deep Learning and Neural Networks

  1. In deep learning, what is the role of “batch normalization” in neural networks?
    a) It reduces the size of the network by combining multiple layers.
    b) It helps in regularizing the network to prevent overfitting.
    c) It normalizes the output of a previous activation layer to improve training speed and stability.
    d) It enhances the non-linearity in the activation functions.
    Answer: c) It normalizes the output of a previous activation layer to improve training speed and stability.
  2. How does the “exploding gradient” problem affect the training of deep neural networks, and what is a common mitigation strategy?
    a) It causes gradients to become very small; mitigation includes using dropout.
    b) It causes gradients to become very large, which can destabilize training; mitigation includes gradient clipping.
    c) It leads to convergence issues; mitigation involves using a different activation function.
    d) It slows down convergence; mitigation involves using a learning rate schedule.
    Answer: b) It causes gradients to become very large, which can destabilize training; mitigation includes gradient clipping.
  3. Which neural network architecture is most suitable for tasks requiring attention to different parts of the input sequence?
    a) Long Short-Term Memory (LSTM)
    b) Gated Recurrent Unit (GRU)
    c) Transformer
    d) Convolutional Neural Network (CNN)
    Answer: c) Transformer
  4. What is the significance of the “skip connection” in deep residual networks (ResNets)?
    a) It allows the model to combine different feature scales.
    b) It provides a mechanism for bypassing layers, facilitating the training of very deep networks.
    c) It improves the speed of convolutional operations.
    d) It enhances the non-linearity in the network.
    Answer: b) It provides a mechanism for bypassing layers, facilitating the training of very deep networks.
  5. Which technique is used to address the problem of “overfitting” in deep neural networks?
    a) Increasing the network’s depth
    b) Reducing the amount of training data
    c) Applying dropout during training
    d) Using a higher learning rate
    Answer: c) Applying dropout during training (see the sketch after this list)
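
Several of the ideas above fit in one short sketch. The following is a minimal, illustrative PyTorch example with made-up shapes and data (not a reference implementation): batch normalization (question 1), dropout against overfitting (question 5), a residual skip connection (question 4), and gradient clipping as the standard mitigation for exploding gradients (question 2).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Skip connection: the block's input is added back to its output (question 4)."""
    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x):
        # The identity path "skips" the two linear layers and is added back in.
        return torch.relu(x + self.fc2(torch.relu(self.fc1(x))))

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.BatchNorm1d(32),   # batch normalization stabilizes and speeds up training (question 1)
    nn.ReLU(),
    nn.Dropout(p=0.5),    # dropout randomly zeroes units to reduce overfitting (question 5)
    ResidualBlock(32),
    nn.Linear(32, 2),
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 16)                 # a toy batch of 8 examples
y = torch.randint(0, 2, (8,))          # toy class labels
loss = criterion(model(x), y)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping (question 2)
optimizer.step()
```

Dropout and batch normalization behave differently at inference time; calling model.eval() switches them to their evaluation behavior.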

Natural Language Processing (NLP)

  1. In the context of NLP, what is “word embedding” and why is it used?
    a) A method for translating words into multiple languages.
    b) A technique for converting words into dense vector representations to capture semantic meaning.
    c) A way to encode words as one-hot vectors to classify text.
    d) A strategy for detecting named entities in text.
    Answer: b) A technique for converting words into dense vector representations to capture semantic meaning.
  2. What is the purpose of the “attention mechanism” in sequence-to-sequence models for NLP?
    a) To enhance the model’s ability to generate text.
    b) To focus on different parts of the input sequence when producing each word in the output sequence.
    c) To preprocess input data more efficiently.
    d) To perform sentiment analysis on the input text.
    Answer: b) To focus on different parts of the input sequence when producing each word in the output sequence. (A sketch of embeddings and attention appears after this list.)
  3. How does “BERT” (Bidirectional Encoder Representations from Transformers) improve NLP tasks compared to previous models?
    a) It uses a single-directional context for understanding text.
    b) It performs better by leveraging bidirectional context and fine-tuning on specific tasks.
    c) It simplifies the network architecture for faster processing.
    d) It reduces the number of layers in the model.
    Answer: b) It performs better by leveraging bidirectional context and fine-tuning on specific tasks.
  4. What does “named entity recognition” (NER) typically involve in NLP?
    a) Identifying and classifying entities such as people, organizations, and locations in text.
    b) Generating new entities based on the context of the text.
    c) Translating entities into different languages.
    d) Summarizing the importance of entities in the text.
    Answer: a) Identifying and classifying entities such as people, organizations, and locations in text.
  5. Which of the following models is specifically designed for extracting key phrases or topics from a document?
    a) Latent Dirichlet Allocation (LDA)
    b) Long Short-Term Memory (LSTM)
    c) Generative Adversarial Network (GAN)
    d) Convolutional Neural Network (CNN)
    Answer: a) Latent Dirichlet Allocation (LDA)
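
Questions 1 and 2 can be made concrete with a small numeric sketch. The snippet below, assuming only NumPy and made-up vectors (the embeddings are invented for illustration), computes cosine similarity between toy word embeddings and a scaled dot-product attention step of the kind used in sequence-to-sequence models and Transformers.

```python
import numpy as np

# Toy 4-dimensional word embeddings (question 1). Real models learn hundreds of
# dimensions from large corpora, but the idea is the same: similar words end up
# with similar vectors.
emb = {
    "king":  np.array([0.8, 0.6, 0.1, 0.0]),
    "queen": np.array([0.7, 0.7, 0.1, 0.1]),
    "apple": np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print("king~queen:", cosine(emb["king"], emb["queen"]))   # high similarity
print("king~apple:", cosine(emb["king"], emb["apple"]))   # low similarity

# Scaled dot-product attention (question 2): each output position is a weighted
# average of the values V, weighted by how well its query matches each key.
def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # softmax over input positions
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 4)), rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
print("attention output shape:", attention(Q, K, V).shape)  # (3, 4): one vector per query
```

Each row of the attention output is a weighted average of the value vectors, and the weights show which input positions the corresponding query attends to.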

Computer Vision

  1. What is the purpose of “transfer learning” in the context of computer vision?
    a) To transfer data between different types of models.
    b) To utilize pre-trained models on large datasets and adapt them to specific tasks with limited data.
    c) To improve the training speed of neural networks.
    d) To convert image data into textual descriptions.
    Answer: b) To utilize pre-trained models on large datasets and adapt them to specific tasks with limited data. (See the sketch after this list.)
  2. Which concept in computer vision helps in localizing and classifying objects within an image by drawing bounding boxes?
    a) Semantic Segmentation
    b) Object Detection
    c) Image Classification
    d) Feature Extraction
    Answer: b) Object Detection
  3. In the context of Convolutional Neural Networks (CNNs), what does “stride” refer to?
    a) The number of layers in the network
    b) The step size by which the convolutional filter moves across the image
    c) The size of the pooling operation
    d) The size of the convolutional kernel
    Answer: b) The step size by which the convolutional filter moves across the image
  4. What is “image captioning” and which type of model is typically used to accomplish it?
    a) Generating textual descriptions for images using a combination of CNNs and RNNs
    b) Converting images into numerical vectors using PCA
    c) Categorizing images into predefined classes using SVM
    d) Detecting objects within images using GANs
    Answer: a) Generating textual descriptions for images using a combination of CNNs and RNNs
  5. How does “region-based CNN” (R-CNN) improve upon traditional CNNs for object detection tasks?
    a) By integrating fully connected layers directly into the CNN
    b) By applying CNNs to different regions of the image to identify objects
    c) By reducing the number of convolutional layers
    d) By focusing on pixel-level classification only
    Answer: b) By applying CNNs to different regions of the image to identify objects
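
Question 1 (and the stride of question 3) can be illustrated with a short transfer-learning sketch. The snippet below is a minimal example assuming PyTorch and a recent torchvision; the 5-class target task is an arbitrary, hypothetical choice. The ImageNet-pre-trained backbone is frozen and only a newly attached classification head would be trained.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet; its convolutional layers already encode
# generic visual features (edges, textures, shapes).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pre-trained weights so only the new head learns from the small dataset.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new head for a 5-class task
# (the class count is purely illustrative).
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

# The very first convolution uses stride 2 (question 3): the 7x7 filter moves two
# pixels at a time, halving the spatial resolution of its output.
print(backbone.conv1.stride)           # (2, 2)

dummy = torch.randn(1, 3, 224, 224)    # one fake RGB image
print(backbone(dummy).shape)           # torch.Size([1, 5])
```

In practice the new head is then trained on the small task-specific dataset, optionally unfreezing some backbone layers later for fine-tuning.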
