Q1. The ‘perceptron’ is the fundamental unit of:
A) Database
B) Neural Network
C) Operating System
D) CPU
✅ Answer: B) Neural Network
💡 Explanation: A perceptron is a single artificial neuron, the basic computational unit of neural networks.
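A minimal pure-Python sketch of a perceptron; the weights and bias are illustrative, hand-picked values rather than learned ones:

```python
def perceptron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, followed by a step activation
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# Hand-picked weights that make the neuron behave like an AND gate
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # 1
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # 0
```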
──────────────────────────────────────────────────────────────────────────────────────────
Q2. Supervised learning requires:
A) Unlabeled data only
B) Labeled training data with input-output pairs
C) No training data
D) Only test data
✅ Answer: B) Labeled training data with input-output pairs
💡 Explanation: In supervised learning, the model trains on labeled data (input + correct output) to learn the mapping function.
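A tiny supervised-learning sketch in pure Python: least-squares linear regression learns the mapping from labeled (input, output) pairs. The data and the hidden rule are illustrative:

```python
# Labeled (input, output) pairs generated by the hidden rule y = 2x + 1
pairs = [(1, 3), (2, 5), (3, 7)]
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
# Least-squares estimates of slope a and intercept b
a = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x, _ in pairs)
b = my - a * mx
print(a, b)  # 2.0 1.0 -- the model recovered the rule from labeled examples
```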
──────────────────────────────────────────────────────────────────────────────────────────
Q3. Which algorithm is used for classification and regression using decision boundaries?
A) K-Means
B) Apriori
C) Support Vector Machine (SVM)
D) PCA
✅ Answer: C) Support Vector Machine (SVM)
💡 Explanation: SVM finds the optimal hyperplane (decision boundary) that maximizes margin between classes for classification.
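A sketch of how a trained SVM classifies: points are labeled by which side of the hyperplane w·x + b = 0 they fall on, and margin is distance to that boundary. The weights below are hypothetical, not produced by actual SVM training:

```python
# Hypothetical learned hyperplane w.x + b = 0 (illustrative values)
w, b = [1.0, -1.0], 0.0

def predict(x):
    # Class is the side of the decision boundary the point falls on
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

def distance_to_boundary(x):
    # |w.x + b| / ||w||: the geometric distance SVM training maximizes
    norm = sum(wi * wi for wi in w) ** 0.5
    return abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

print(predict([3.0, 1.0]), predict([1.0, 3.0]))  # 1 -1
```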
──────────────────────────────────────────────────────────────────────────────────────────
Q4. ‘Overfitting’ in machine learning means:
A) Model performs well on training data but poorly on new data
B) Model is too simple
C) Training data is insufficient
D) Model runs too slowly
✅ Answer: A) Model performs well on training data but poorly on new data
💡 Explanation: Overfitting: model memorizes training data (including noise) but fails to generalize to unseen data.
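An extreme toy illustration of memorization versus generalization (both "models" are deliberately trivial):

```python
train = {1: 2, 2: 4, 3: 6}   # toy training set following the rule y = 2x

def memorizer(x):
    # "Overfit" model: a lookup table that memorizes training pairs exactly
    return train.get(x)

def rule(x):
    # Model that learned the underlying pattern
    return 2 * x

print(memorizer(2), rule(2))  # 4 4     -> both perfect on training data
print(memorizer(5), rule(5))  # None 10 -> memorizer fails on unseen input
```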
──────────────────────────────────────────────────────────────────────────────────────────
Q5. NLP stands for:
A) Neural Learning Process
B) Natural Language Processing
C) Network Layer Protocol
D) Numerical Linear Programming
✅ Answer: B) Natural Language Processing
💡 Explanation: NLP is the AI field focused on enabling computers to understand, interpret, and generate human language.
──────────────────────────────────────────────────────────────────────────────────────────
Q6. Which of the following is a reinforcement learning algorithm?
A) Linear Regression
B) K-Nearest Neighbor
C) Q-Learning
D) Naive Bayes
✅ Answer: C) Q-Learning
💡 Explanation: Q-Learning is a model-free reinforcement learning algorithm where an agent learns by receiving rewards/penalties.
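The Q-Learning update rule can be sketched on a hypothetical 4-state corridor where reaching the last state pays reward 1; all hyperparameters and the environment are illustrative:

```python
import random

random.seed(0)  # fixed seed so the illustrative run is reproducible

# Hypothetical corridor: states 0-2 are ordinary, state 3 is terminal (reward 1).
# Actions: 0 = left, 1 = right.
alpha, gamma, eps = 0.5, 0.9, 0.2
Q = {s: [0.0, 0.0] for s in range(4)}  # Q[state] = [value(left), value(right)]

for _ in range(500):  # episodes
    s = 0
    while s != 3:
        # Epsilon-greedy action selection
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else min(3, s + 1)
        r = 1.0 if s2 == 3 else 0.0
        # Q-learning update: move Q[s][a] toward r + gamma * max_a' Q[s'][a']
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

After training, "go right" has the higher Q-value in every state, so the greedy policy walks straight to the reward.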
──────────────────────────────────────────────────────────────────────────────────────────
Q7. The ‘Transformer’ architecture in AI is mainly used for:
A) Image compression
B) Natural language processing tasks (translation, text generation)
C) Hardware design
D) Database optimization
✅ Answer: B) Natural language processing tasks (translation, text generation)
💡 Explanation: The Transformer architecture (introduced in the 2017 paper ‘Attention Is All You Need’) revolutionized NLP and forms the basis of models such as GPT and BERT.
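The Transformer's core operation, scaled dot-product attention, can be sketched with NumPy; the toy query/key/value matrices are illustrative:

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V, w

# Two toy 2-dimensional token embeddings (illustrative values)
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
out, weights = attention(Q, Q, np.array([[1.0, 2.0], [3.0, 4.0]]))
```

Each output row is a weighted mix of the value rows, with the attention weights in each row summing to 1.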
──────────────────────────────────────────────────────────────────────────────────────────
Q8. K-Means clustering is an example of:
A) Supervised learning
B) Reinforcement learning
C) Unsupervised learning
D) Semi-supervised learning
✅ Answer: C) Unsupervised learning
💡 Explanation: K-Means groups unlabeled data into K clusters based on similarity — no labeled output is required.
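A pure-Python sketch of K-Means on 1-D data; the naive initialization and toy points are illustrative:

```python
def kmeans_1d(points, k=2, iters=10):
    # Naive initialization: first k points as centers (illustrative only)
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

print(sorted(kmeans_1d([1, 2, 3, 10, 11, 12])))  # [2.0, 11.0]
```

Note that no labels were given: the two groups emerge from the data's own structure.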
──────────────────────────────────────────────────────────────────────────────────────────
Q9. Which activation function is most commonly used in hidden layers of deep neural networks?
A) Sigmoid
B) ReLU (Rectified Linear Unit)
C) Tanh
D) Linear
✅ Answer: B) ReLU (Rectified Linear Unit)
💡 Explanation: ReLU (f(x) = max(0,x)) is preferred in deep networks because it mitigates the vanishing-gradient problem and is computationally efficient.
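The definition is a one-liner:

```python
def relu(x):
    # f(x) = max(0, x): passes positives through, zeroes out negatives
    return max(0.0, x)

print([relu(v) for v in [-2.0, -0.5, 0.0, 1.5, 3.0]])  # [0.0, 0.0, 0.0, 1.5, 3.0]
```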
──────────────────────────────────────────────────────────────────────────────────────────
Q10. ‘Training data’, ‘validation data’, and ‘test data’ serve what purposes respectively?
A) All are used for training
B) Train the model / tune hyperparameters / evaluate final performance
C) Train / deploy / backup
D) Input / process / output
✅ Answer: B) Train the model / tune hyperparameters / evaluate final performance
💡 Explanation: Training set: trains model. Validation set: tunes hyperparameters. Test set: evaluates final unbiased performance.
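A common 60/20/20 split can be sketched in a few lines (the dataset and ratios are illustrative; real splits are usually shuffled first):

```python
data = list(range(10))                     # toy dataset
n = len(data)
train = data[: int(0.6 * n)]               # 60% to fit model weights
val = data[int(0.6 * n): int(0.8 * n)]     # 20% to tune hyperparameters
test = data[int(0.8 * n):]                 # 20% for final, unbiased evaluation
print(len(train), len(val), len(test))     # 6 2 2
```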
──────────────────────────────────────────────────────────────────────────────────────────
Q11. GPT stands for:
A) General Purpose Technology
B) Generative Pre-trained Transformer
C) Graph Processing Transformer
D) Global Prediction Technique
✅ Answer: B) Generative Pre-trained Transformer
💡 Explanation: GPT (Generative Pre-trained Transformer) by OpenAI is a large language model for text generation.
──────────────────────────────────────────────────────────────────────────────────────────
Q12. The ‘confusion matrix’ in ML evaluation shows:
A) How confused the model is
B) True Positives, True Negatives, False Positives, False Negatives
C) Training loss over time
D) Model architecture
✅ Answer: B) True Positives, True Negatives, False Positives, False Negatives
💡 Explanation: A confusion matrix shows the count of correct and incorrect predictions broken down by each class.
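The four counts can be computed directly for a binary problem (toy labels below are illustrative):

```python
def confusion_counts(y_true, y_pred):
    # Counts for a binary confusion matrix (positive class = 1)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

print(confusion_counts([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # (2, 1, 1, 1)
```

Metrics like precision (TP / (TP + FP)) and recall (TP / (TP + FN)) are derived from these four counts.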
──────────────────────────────────────────────────────────────────────────────────────────
Q13. Which technique reduces high-dimensional data to fewer dimensions?
A) Clustering
B) Regression
C) PCA (Principal Component Analysis)
D) Boosting
✅ Answer: C) PCA (Principal Component Analysis)
💡 Explanation: PCA reduces dimensionality by finding principal components that capture maximum variance in fewer dimensions.
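PCA can be sketched with NumPy via the eigendecomposition of the covariance matrix; the perfectly collinear toy points are illustrative, so one component captures all the variance:

```python
import numpy as np

# Four 2-D points lying exactly on the line y = x (illustrative data)
X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
Xc = X - X.mean(axis=0)            # center the data
cov = np.cov(Xc.T)                 # 2x2 covariance matrix
vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
pc1 = vecs[:, -1]                  # direction of maximum variance
reduced = Xc @ pc1                 # project 2-D points down to 1-D
```

Here the top component explains essentially 100% of the variance, so reducing from 2-D to 1-D loses nothing.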
──────────────────────────────────────────────────────────────────────────────────────────
Q14. ‘ChatGPT’ is built on which foundational technology?
A) Convolutional Neural Networks
B) Recurrent Neural Networks
C) Large Language Models (Transformer-based)
D) Decision Trees
✅ Answer: C) Large Language Models (Transformer-based)
💡 Explanation: ChatGPT is built on OpenAI’s GPT family of large language models (initially GPT-3.5, later GPT-4), which use the Transformer architecture.
──────────────────────────────────────────────────────────────────────────────────────────
Q15. In deep learning, CNN (Convolutional Neural Network) is primarily used for:
A) Time series prediction
B) Text generation
C) Image recognition and processing
D) Reinforcement learning
✅ Answer: C) Image recognition and processing
💡 Explanation: CNNs use convolutional layers to automatically learn spatial features from images for classification and detection.
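The convolution at the heart of a CNN can be sketched in pure Python; the tiny "image" and edge-detecting kernel are illustrative:

```python
def conv2d(img, kernel):
    # 'Valid' 2-D convolution (strictly, cross-correlation, as in most CNN libraries)
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

# A 4x4 'image' with a vertical edge at its last column
img = [[0, 0, 0, 1]] * 4
edge_kernel = [[1, 0, -1]] * 3   # simple vertical-edge detector
print(conv2d(img, edge_kernel))  # [[0, -3], [0, -3]]
```

The strong responses in the second output column mark where the kernel found the edge; a CNN learns such kernels automatically.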
──────────────────────────────────────────────────────────────────────────────────────────
Q16. The ‘Turing Test’ evaluates:
A) Computer processing speed
B) Machine’s ability to exhibit human-like intelligent behavior in conversation
C) Network security
D) Database efficiency
✅ Answer: B) Machine’s ability to exhibit human-like intelligent behavior in conversation
💡 Explanation: The Turing Test (proposed by Alan Turing in 1950) checks whether a machine can converse so that a human evaluator cannot reliably tell it apart from a person.
──────────────────────────────────────────────────────────────────────────────────────────
Q17. ‘Transfer Learning’ in AI means:
A) Moving data between computers
B) Applying a pre-trained model’s knowledge to a new, related task
C) Training a model from scratch
D) Transferring AI skills to humans
✅ Answer: B) Applying a pre-trained model’s knowledge to a new, related task
💡 Explanation: Transfer Learning reuses a model trained on one task (e.g., ImageNet) to solve a different but related task faster.
──────────────────────────────────────────────────────────────────────────────────────────
Q18. Which company developed AlphaGo, the AI that defeated world Go champions?
A) OpenAI
B) Meta AI
C) Google DeepMind
D) IBM
✅ Answer: C) Google DeepMind
💡 Explanation: AlphaGo was developed by Google DeepMind; it defeated world champion Lee Sedol in 2016 and top-ranked player Ke Jie in 2017.
──────────────────────────────────────────────────────────────────────────────────────────
Q19. ‘Gradient Descent’ in machine learning is used to:
A) Classify data into gradients
B) Minimize the loss function by iteratively updating model weights
C) Normalize input data
D) Visualize model performance
✅ Answer: B) Minimize the loss function by iteratively updating model weights
💡 Explanation: Gradient Descent updates weights in the direction that reduces the loss function, iteratively improving the model.
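A minimal gradient-descent sketch on a one-weight "loss" function (the function, starting point, and learning rate are illustrative):

```python
# Minimize f(w) = (w - 3)^2, whose gradient is f'(w) = 2 * (w - 3)
w, lr = 0.0, 0.1   # initial weight and learning rate (illustrative)
for _ in range(100):
    grad = 2 * (w - 3)
    w -= lr * grad  # step against the gradient to reduce the loss
print(round(w, 4))  # 3.0
```

Each update moves the weight downhill on the loss surface, converging to the minimum at w = 3.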
──────────────────────────────────────────────────────────────────────────────────────────
Q20. BERT in NLP stands for:
A) Bidirectional Encoder Representations from Transformers
B) Binary Encoded Regression Technique
C) Basic Evaluation of Recurrent Transformers
D) Bidirectional Encoder for Retrieval Tasks
✅ Answer: A) Bidirectional Encoder Representations from Transformers
💡 Explanation: BERT (Google, 2018) reads text bidirectionally using Transformer architecture, improving many NLP benchmarks.