Highest voted questions in Artificial Intelligence
41. What resources can I use to study the Data Warehousing part for the GATE DA paper?
0 votes | 0 answers | 150 views
asked Jan 30 in Artificial Intelligence by Ameya Kulkarni
42. AI Sample Question for DS-AI
Imagine you are guiding a robot through a grid-based maze using the A* algorithm. The robot is currently at node A (start) and wants to reach node B (goal). The heuristic function $h(n)$ is the Euclidean distance between a node and the goal. The ... algorithm explore next based on the A* calculation? (a) Node C (b) Node D (c) Node E (d) Not enough information to decide
0 votes | 1 answer | 379 views
asked Jan 16 in Artificial Intelligence by rajveer43
tags: artificial-intelligence, machine-learning, probability, statistics
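Since the frontier costs in the excerpt above are elided, here is a minimal sketch of the selection rule A* uses, with made-up g-costs and grid coordinates (the node names C/D/E and every number below are hypothetical, not taken from the original post):

```python
import math

def f_cost(g, node, goal):
    """A* expands the frontier node with the lowest f(n) = g(n) + h(n)."""
    h = math.dist(node, goal)  # Euclidean heuristic, as stated in the question
    return g + h

# Hypothetical frontier: node -> (path cost g so far, grid coordinates)
frontier = {"C": (2.0, (1, 1)), "D": (1.0, (2, 3)), "E": (3.0, (0, 2))}
goal = (4, 4)

best = min(frontier, key=lambda n: f_cost(*frontier[n], goal))
print(best)  # with these made-up numbers, "D" has the lowest f-cost
```

With the actual costs from the full question, the same min-over-f comparison decides between options (a)-(d).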
43. UPENN | DS-AI Sample | Decision Tree
When choosing one feature from \(X_1, \ldots, X_n\) while building a decision tree, which of the following criteria is the most appropriate to maximize? (Here, \(H(\cdot)\) denotes entropy and \(P(\cdot)\) denotes probability.) (a) \(P(Y \mid X_j)\) (b) \(P(Y) - P(Y \mid X_j)\) (c) \(H(Y) - H(Y \mid X_j)\) (d) \(H(Y \mid X_j)\) (e) \(H(Y) - P(Y)\)
0 votes | 1 answer | 223 views
asked Jan 16 in Artificial Intelligence by rajveer43
tags: artificial-intelligence, machine-learning, statistics, probability
44. UPENN | ML | Decision Tree
Given the following table of observations, calculate the information gain $IG(Y \mid X)$ that would result from learning the value of $X$.

X      Y
Red    True
Green  False
Brown  False
Brown  False

(a) 1/2 (b) 1 (c) 3/2 (d) 2 (e) none of the above
0 votes | 1 answer | 221 views
asked Jan 16 in Artificial Intelligence by rajveer43
tags: artificial-intelligence, statistics, machine-learning, binary-tree
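The information gain asked for above can be checked numerically with $IG(Y \mid X) = H(Y) - H(Y \mid X)$; a minimal sketch (entropy in bits; the option letters are the ones in the excerpt):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

X = ["Red", "Green", "Brown", "Brown"]
Y = [True, False, False, False]

h_y = entropy(Y)  # H(Y) = -(1/4)log2(1/4) - (3/4)log2(3/4) ~ 0.811
# Conditional entropy H(Y|X): entropy of Y within each X value, weighted by P(X)
h_y_given_x = sum(
    (X.count(v) / len(X)) * entropy([y for x, y in zip(X, Y) if x == v])
    for v in set(X)
)  # each X value here maps to a single Y value, so H(Y|X) = 0
ig = h_y - h_y_given_x
print(round(ig, 3))  # 0.811
```

Since 0.811 matches none of the numeric options, this calculation points to option (e).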
45. UPENN | ML Questions for GATE DA
In fitting some data using radial basis functions with kernel width $\sigma$, we compute a training error of $345$ and a testing error of $390$. (a) increasing $\sigma$ will most likely reduce test set error (b) decreasing $\sigma$ will most likely reduce test set error (c) not enough information is provided to determine how $\sigma$ should be changed
0 votes | 1 answer | 262 views
asked Jan 15 in Artificial Intelligence by rajveer43
tags: machine-learning, statistics, artificial-intelligence
46. DA Practice | UPENN | ML | Naive Bayes
Suppose you have a three-class problem where the class label \( y \in \{0, 1, 2\} \), and each training example \( \mathbf{X} \) has 3 binary attributes \( X_1, X_2, X_3 \in \{0, 1\} \). How many parameters do you need to know to classify an example using the Naive Bayes classifier? (a) 5 (b) 9 (c) 11 (d) 13 (e) 23
0 votes | 1 answer | 394 views
asked Jan 14 in Artificial Intelligence by rajveer43
tags: machine-learning, artificial-intelligence, statistics, probability
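The count follows from the Naive Bayes factorization $P(y)\prod_i P(X_i \mid y)$, counting free parameters (this assumes the usual convention that probabilities summing to 1 cost one parameter less):

```python
num_classes = 3
num_binary_attrs = 3

# Class prior P(y): 3 probabilities constrained to sum to 1 -> 2 free parameters
prior_params = num_classes - 1
# Likelihoods P(X_i = 1 | y): one Bernoulli parameter per (class, attribute) pair
likelihood_params = num_classes * num_binary_attrs
total = prior_params + likelihood_params
print(total)  # 11
```

That yields 11, matching option (c) in the excerpt.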
47. UPENN | ML | Cross Validation
Suppose you have picked the parameter \( \theta \) for a model using 10-fold cross-validation. The best way to pick a final model to use and estimate its error is to (a) pick any of the 10 models you built for your model; use its error estimate on ... a new model on the full data set, using the \( \theta \) you found; use the average CV error as its error estimate
0 votes | 2 answers | 260 views
asked Jan 13 in Artificial Intelligence by rajveer43
tags: machine-learning, artificial-intelligence, statistics
48. Decision Tree | Sample Question
True or False? If decision trees such as the ones we built in class are allowed to have decision nodes based on questions that can have many possible answers (e.g. "What country are you from?") in addition to binary questions, they will in general tend to add the multiple-answer questions to the tree before adding the binary questions.
0 votes | 1 answer | 222 views
asked Jan 13 in Artificial Intelligence by rajveer43
tags: algorithms, artificial-intelligence, machine-learning
49. UPENN | ML | Cross Validation
P1: In the limit of infinite training and test data, consistent estimators always give at least as low a test error as biased estimators. P2: Leave-one-out cross-validation (LOOCV) generally gives less accurate estimates of true test error than 10-fold ... following statements is/are correct? Only P1 is true; Only P2 is true; P1 is true and P2 is false; Both are false
0 votes | 1 answer | 194 views
asked Jan 13 in Artificial Intelligence by rajveer43
tags: machine-learning, artificial-intelligence, statistics
50. UPENN | ML | DA Practice | Regularization
After applying a regularization penalty in linear regression, you find that some of the coefficients of $w$ are zeroed out. Which of the following penalties might have been used? (a) L0 norm (b) L1 norm (c) L2 norm (d) either (a) or (b) (e) any of the above
0 votes | 1 answer | 266 views
asked Jan 13 in Artificial Intelligence by rajveer43
tags: machine-learning, artificial-intelligence, statistics
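The sparsity behaviour this question probes can be seen in the soft-thresholding operator, the proximal step that L1 (lasso) solvers apply coordinate-wise; a minimal sketch with made-up weights (an illustration of why L1 zeroes coefficients, not the exact solver any particular library uses):

```python
def soft_threshold(weights, lam):
    """Proximal operator of lam * ||w||_1: shrinks each weight toward zero
    and sets weights with magnitude below lam exactly to zero."""
    return [
        (1 if w > 0 else -1) * max(abs(w) - lam, 0.0)
        for w in weights
    ]

w = [0.05, -1.3, 0.6, -0.02]
shrunk = soft_threshold(w, 0.1)
print(shrunk)  # the two small coefficients become exactly 0
```

An L2 (ridge) penalty only scales coefficients down and never produces exact zeros, which is why exact sparsity points to the L0 or L1 penalty.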
51. UPENN | ML | DA Practice
Using the same data as above, \( \mathbf{X} = [-3, 5, 4] \) and \( \mathbf{Y} = [-10, 20, 20] \), and assuming a ridge penalty \( \lambda = 50 \), what ratio versus the MLE estimate \( \hat{\mathbf{w}}_{\text{MLE}} \) do you think the ridge regression \( L_2 \) estimate \( \hat{\mathbf{w}}_{\text{ridge}} \) will be? (a) 2 (b) 1 (c) 0.666 (d) 0.5
0 votes | 0 answers | 128 views
asked Jan 13 in Artificial Intelligence by rajveer43
tags: artificial-intelligence, machine-learning, statistics
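For a single-feature model with no intercept, both estimates have closed forms, so the ratio can be checked directly (this assumes the standard ridge objective $\sum_i (y_i - w x_i)^2 + \lambda w^2$):

```python
X = [-3, 5, 4]
Y = [-10, 20, 20]
lam = 50

sxy = sum(x * y for x, y in zip(X, Y))  # 30 + 100 + 80 = 210
sxx = sum(x * x for x in X)             # 9 + 25 + 16 = 50

w_mle = sxy / sxx             # ordinary least squares: 210 / 50 = 4.2
w_ridge = sxy / (sxx + lam)   # ridge: 210 / (50 + 50) = 2.1
ratio = w_ridge / w_mle
print(ratio)  # 0.5
```

The ratio 0.5 corresponds to option (d) in the excerpt.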
52. UPENN | ML | DA Practice
Consider the statements: $P1$: It is generally more important to use consistent estimators when one has smaller numbers of training examples. $P2$: It is generally more important to use unbiased estimators when one has smaller numbers of training examples. Which of the following statement( ... $P1$ and $P2$ are true (C) Only $P2$ is True (D) Both $P1$ and $P2$ are False
0 votes | 1 answer | 137 views
asked Jan 13 in Artificial Intelligence by rajveer43
tags: machine-learning, artificial-intelligence, statistics
53. DA Practice | UPENN | ML | Bias-Variance Trade-Off | Regularization
Suppose we have a regularized linear regression model: \[ \operatorname{argmin}_{\mathbf{w}} \left\| \mathbf{Y} - \mathbf{Xw} \right\|^2 + k \|\mathbf{w}\|_p^p. \] What is the effect of increasing \( p \) ... , decreases variance (c) Decreases bias, increases variance (d) Decreases bias, decreases variance (e) Not enough information to tell
0 votes | 1 answer | 155 views
asked Jan 13 in Artificial Intelligence by rajveer43
tags: machine-learning, artificial-intelligence, statistics
54. UPENN | ML | DA Practice | Bias-Variance Trade-Off
Suppose we have a regularized linear regression model: \[ \operatorname{argmin}_{\mathbf{w}} \left\| \mathbf{Y} - \mathbf{Xw} \right\|^2 + \lambda \|\mathbf{w}\|_1. \] What is the effect of increasing \( \lambda \) ... bias, decreases variance (c) Decreases bias, increases variance (d) Decreases bias, decreases variance (e) Not enough information to tell
0 votes | 1 answer | 124 views
asked Jan 13 in Artificial Intelligence by rajveer43
tags: artificial-intelligence, machine-learning, statistics
55. UPENN | Midterm | K-Fold Validation | DA Practice
Suppose we want to compute the 10-fold cross-validation error on $100$ training examples. We need to compute the error $N1$ times, and the cross-validation error is the average of the errors. To compute each error, we need to build a model with data of size $N2$, and test the ... (c) $N1 = 10, N2 = 100, N3 = 10$ (d) $N1 = 10, N2 = 100, N3 = 10$
0 votes | 1 answer | 123 views
asked Jan 13 in Artificial Intelligence by rajveer43
tags: machine-learning, artificial-intelligence
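The counts follow directly from how k-fold cross-validation partitions the data: one error per fold, training on the examples outside the fold and testing on the fold itself. A minimal sketch:

```python
n_examples = 100
k = 10

n1 = k                   # errors computed: one per fold
n3 = n_examples // k     # test set per fold: 100 / 10 = 10 examples
n2 = n_examples - n3     # training set per fold: the other 90 examples
print(n1, n2, n3)  # 10 90 10
```

So $N1 = 10$, $N2 = 90$, $N3 = 10$; note neither of the (identical) options (c) and (d) shown in the truncated excerpt has $N2 = 90$.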
56. AI Questions | DS-AI Paper | GATE 2024
Given a tree with a branching factor of 3 and a depth of 4, calculate the maximum number of nodes expanded during a breadth-first search.
0 votes | 1 answer | 348 views
asked Jan 1 in Artificial Intelligence by rajveer43
tags: discrete-mathematics, analytical-aptitude, quantitative-aptitude, artificial-intelligence
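Assuming the root sits at depth 0, the worst case is BFS visiting every node of a full ternary tree down to depth 4, which is a geometric series:

```python
b, d = 3, 4  # branching factor, depth

# Levels 0..4 hold 1 + 3 + 9 + 27 + 81 nodes: (b^(d+1) - 1) / (b - 1)
max_nodes = sum(b ** level for level in range(d + 1))
print(max_nodes)  # 121
```

If the question instead counts the root as depth 1, the sum runs over one level fewer and gives 40, so the depth convention matters.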
57. GATE DS-AI Questions | ML
Consider the feature transform $z = [L_0(x)\ L_1(x)\ L_2(x)]^T$ with Legendre polynomials and the linear model $h(x) = w^T z$. For the regularized hypothesis with $w = [-1\ {+2}\ {-1}]^T$, what is $h(x)$ explicitly as a function of $x$? Please write the solution.
0 votes | 0 answers | 333 views
asked Dec 11, 2023 in Artificial Intelligence by rajveer43
tags: artificial-intelligence, machine-learning
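Reading the weight vector as $w = [-1, +2, -1]^T$ and taking the standard Legendre polynomials $L_0(x) = 1$, $L_1(x) = x$, $L_2(x) = (3x^2 - 1)/2$ (both readings are assumptions about the garbled excerpt), $h(x)$ expands to $-1 + 2x - (3x^2 - 1)/2 = -1.5x^2 + 2x - 0.5$; a quick numeric check:

```python
def h(x):
    # Legendre features: L0(x) = 1, L1(x) = x, L2(x) = (3x^2 - 1) / 2
    z = [1.0, x, (3 * x * x - 1) / 2]
    w = [-1.0, 2.0, -1.0]
    return sum(wi * zi for wi, zi in zip(w, z))

def h_expanded(x):
    # Claimed closed form after expanding the dot product
    return -1.5 * x * x + 2 * x - 0.5

for x in (-1.0, 0.0, 0.5, 1.0):
    assert abs(h(x) - h_expanded(x)) < 1e-12
```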
58. Machine Learning Self-Doubt
Please solve this question with a full explanation.
0 votes | 1 answer | 260 views
asked Nov 30, 2023 in Artificial Intelligence by gateexplore
tags: machine-learning, self-doubt
59. UGC NET CSE | June 2016 | Part 3 | Question: 66
A perceptron has input weights $W_1 = -3.9$ and $W_2 = 1.1$ with threshold value $T = 0.3$. What output does it give for the input $x_1 = 1.3$ and $x_2 = 2.2$? (a) $-2.65$ (b) $-2.30$ (c) $0$ (d) $1$
0 votes | 4 answers | 1.3k views
asked May 10, 2021 in Artificial Intelligence by soujanyareddy13
tags: ugcnetcse-june2016-paper3
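Assuming the usual hard-threshold activation (output 1 iff the weighted sum reaches the threshold), the computation is:

```python
def perceptron(x1, x2, w1=-3.9, w2=1.1, threshold=0.3):
    """Threshold perceptron: weighted sum, then hard step."""
    s = w1 * x1 + w2 * x2  # -3.9*1.3 + 1.1*2.2 = -5.07 + 2.42 = -2.65
    return 1 if s >= threshold else 0

print(perceptron(1.3, 2.2))  # 0: the sum -2.65 is below the threshold 0.3
```

The weighted sum $-2.65$ is only the intermediate value; the perceptron's output is the thresholded result, $0$.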
60. UGC NET CSE | June 2016 | Part 3 | Question: 75
A software program that infers and manipulates existing knowledge in order to generate new knowledge is known as: (a) Data dictionary (b) Reference mechanism (c) Inference engine (d) Control strategy
0 votes | 1 answer | 833 views
asked May 10, 2021 in Artificial Intelligence by soujanyareddy13
tags: ugcnetcse-june2016-paper3