
COMP9318 (13S2) ASSIGNMENT 2

DUE ON 23:59 1 NOV, 2013 (FRI)

Q1. (25 marks)

Based on the data in the following table,

(1) Estimate a Bernoulli Naive Bayes classifier (using add-one smoothing).
(2) Apply the classifier to the test document.
(3) Estimate a multinomial Naive Bayes classifier (using add-one smoothing).
(4) Apply the classifier to the test document.

You do not need to estimate parameters that are not required for classifying the test document.

              docID   words in document                        class = China?
training set  1       Taipei Taiwan                            Yes
              2       Macao Taiwan Shanghai                    Yes
              3       Japan Sapporo                            No
              4       Sapporo Osaka Taiwan                     No
test set      5       Taiwan Taiwan Taiwan Sapporo Bangkok     ?
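The question asks for a hand calculation, but, for reference, here is a rough Python sketch of both classifiers with add-one smoothing, following the standard textbook estimates (document frequencies for the Bernoulli model, token counts for the multinomial model); the function and variable names are my own, not part of the assignment.

    import math
    from collections import Counter

    def train_bernoulli(docs):
        # docs: list of (tokens, label) pairs
        vocab = {t for tokens, _ in docs for t in tokens}
        prior, cond = {}, {}
        for c in {label for _, label in docs}:
            class_docs = [set(tokens) for tokens, label in docs if label == c]
            prior[c] = len(class_docs) / len(docs)
            # P(t|c) = (number of docs in c containing t + 1) / (number of docs in c + 2)
            cond[c] = {t: (sum(t in d for d in class_docs) + 1) / (len(class_docs) + 2)
                       for t in vocab}
        return vocab, prior, cond

    def apply_bernoulli(vocab, prior, cond, test_tokens):
        present = set(test_tokens) & vocab          # terms outside the vocabulary are ignored
        scores = {}
        for c in prior:
            s = math.log(prior[c])
            for t in vocab:                         # every vocabulary term contributes
                p = cond[c][t]
                s += math.log(p if t in present else 1 - p)
            scores[c] = s
        return max(scores, key=scores.get)

    def train_multinomial(docs):
        vocab = {t for tokens, _ in docs for t in tokens}
        prior, cond = {}, {}
        for c in {label for _, label in docs}:
            class_tokens = [t for tokens, label in docs if label == c for t in tokens]
            counts = Counter(class_tokens)
            prior[c] = sum(label == c for _, label in docs) / len(docs)
            # P(t|c) = (count of t in class c + 1) / (total tokens in class c + |V|)
            cond[c] = {t: (counts[t] + 1) / (len(class_tokens) + len(vocab)) for t in vocab}
        return vocab, prior, cond

    def apply_multinomial(vocab, prior, cond, test_tokens):
        scores = {c: math.log(prior[c]) +
                     sum(math.log(cond[c][t]) for t in test_tokens if t in vocab)
                  for c in prior}
        return max(scores, key=scores.get)

    # Example usage with the table above:
    training = [(["Taipei", "Taiwan"], "Yes"),
                (["Macao", "Taiwan", "Shanghai"], "Yes"),
                (["Japan", "Sapporo"], "No"),
                (["Sapporo", "Osaka", "Taiwan"], "No")]
    test = ["Taiwan", "Taiwan", "Taiwan", "Sapporo", "Bangkok"]
    print(apply_bernoulli(*train_bernoulli(training), test))
    print(apply_multinomial(*train_multinomial(training), test))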

Q2. (20 marks)

Algorithm 1: k-means(D, k)

Data: D is a dataset of n d-dimensional points; k is the number of clusters.
 1  Initialize k centers C = [c1, c2, . . . , ck];
 2  canStop ← false;
 3  while canStop = false do
 4      Initialize k empty clusters G = [g1, g2, . . . , gk];
 5      for each data point p ∈ D do
 6          cx ← NearestCenter(p, C);
 7          g_cx.append(p);
 8      for each group g ∈ G do
 9          ci ← ComputeCenter(g);
10  return G;


Consider the (slightly incomplete) k-means clustering algorithm as depicted in Algorithm 1.
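For concreteness, the following is a rough Python transcription of the Algorithm 1 skeleton, assuming the points are 2-dimensional tuples; the convergence test is deliberately left as a placeholder (a fixed iteration cap), since supplying it is exactly what part (1) below asks for.

    import math
    import random

    def nearest_center(p, C):
        # index of the center closest to p (Euclidean distance)
        return min(range(len(C)), key=lambda i: math.dist(p, C[i]))

    def compute_center(g):
        # coordinate-wise mean of the points in group g
        return tuple(sum(coord) / len(g) for coord in zip(*g))

    def k_means(D, k, max_iter=100):            # max_iter stands in for the missing test
        C = random.sample(D, k)                 # Line 1: initialize k centers
        for _ in range(max_iter):               # Line 3: while canStop = false
            G = [[] for _ in range(k)]          # Line 4: k empty clusters
            for p in D:                         # Lines 5-7: assign each point
                G[nearest_center(p, C)].append(p)
            for i, g in enumerate(G):           # Lines 8-9: recompute centers
                if g:
                    C[i] = compute_center(g)
            # part (1): the convergence check would be inserted here
        return G                                # Line 10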

(1) Assume that the stopping criterion is that the algorithm runs until it converges to the final k clusters. Insert several lines of pseudo-code after Line 8 of the algorithm to implement this logic.

(2) The cost of k clusters is defined as

        cost(g1, g2, . . . , gk) = Σ_{i=1}^{k} cost(gi),

where cost(gi) = Σ_{p ∈ gi} dist(p, ci) and dist() is the Euclidean distance. Now show that the cost of the k clusters as evaluated at the end of each iteration (i.e., after Line 11 in the current algorithm) never increases. (You may assume d = 2.) A small helper that evaluates this cost is sketched after part (3).

(3) Prove that the cost of the clusters obtained by the k-means algorithm always converges to a local minimum. (Hint: you can make use of the previous conclusion even if you have not proved it.)
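As a companion to part (2), here is a small cost-evaluation helper, assuming the same point representation as the sketch above and that centers[i] is the center of groups[i] at the end of an iteration:

    import math

    def cluster_cost(groups, centers):
        # cost(g1, ..., gk) = sum over i of sum over p in gi of dist(p, ci)
        return sum(math.dist(p, centers[i])
                   for i, g in enumerate(groups)
                   for p in g)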

Q3. (25 marks)

Consider the given similarity matrix. You are asked to perform group average hierarchical clustering on this dataset.

You need to show the steps and the final result of the clustering algorithm. Present the final result as a dendrogram that clearly shows the order in which the points are merged.

        p1      p2      p3      p4      p5
p1      1.00    0.10    0.41    0.55    0.35
p2      0.10    1.00    0.64    0.47    0.98
p3      0.41    0.64    1.00    0.44    0.85
p4      0.55    0.47    0.44    1.00    0.76
p5      0.35    0.98    0.85    0.76    1.00
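The question asks for a hand-worked trace and a dendrogram; purely as a rough cross-check, here is a sketch of group-average agglomerative clustering driven directly by a similarity matrix, merging at each step the pair of clusters with the highest average cross-cluster similarity and printing the merge order (this is one common formulation of group-average linkage; the names are my own).

    from itertools import combinations

    def group_average_clustering(sim, labels):
        # each cluster starts as a single point; keys are concatenated labels
        clusters = {name: [i] for i, name in enumerate(labels)}
        while len(clusters) > 1:
            def avg_sim(a, b):
                # average pairwise similarity between points of the two clusters
                return sum(sim[i][j] for i in clusters[a] for j in clusters[b]) / (
                    len(clusters[a]) * len(clusters[b]))
            a, b = max(combinations(clusters, 2), key=lambda ab: avg_sim(*ab))
            print(f"merge {a} and {b}  (average similarity {avg_sim(a, b):.3f})")
            clusters[a + b] = clusters.pop(a) + clusters.pop(b)

    sim = [[1.00, 0.10, 0.41, 0.55, 0.35],
           [0.10, 1.00, 0.64, 0.47, 0.98],
           [0.41, 0.64, 1.00, 0.44, 0.85],
           [0.55, 0.47, 0.44, 1.00, 0.76],
           [0.35, 0.98, 0.85, 0.76, 1.00]]
    group_average_clustering(sim, ["p1", "p2", "p3", "p4", "p5"])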

Q4. (10 marks)

Play several rounds of the Akinator game at http://au.akinator.com/.

(1) It is not uncommon for users to give completely or partially wrong answers during a game. Assume the site maintains a large table, where each row is about a person, each column is a Boolean-type question, and each cell value is the correct answer ("Yes" or "No") for that person and question; also assume that the core algorithm the site uses is a decision tree. To accommodate possible errors, let's assume the site allows up to one error in a game. That is, a person will still be a candidate if at most one of the answers the user provided does not match the correct answer in the data table. Now describe how you would modify the ID3 decision tree construction algorithm to build a decision tree for the site while allowing up to one error in a game. (For reference, a sketch of the standard, unmodified ID3 question-selection step is given after part (2).)


Figure 1. Example

(2) Assume that you do not think the site uses decision trees as the backbone algorithm. What reason(s) support this conjecture? You may list more than one reason. If you design some experiments and will refer to them, please include the setup and the details of the experiments (e.g., something like Figure 1).
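For reference when answering (1), here is a minimal sketch of a standard, unmodified ID3-style question-selection step for this setting, where each remaining candidate person is treated as its own equally likely class; the table layout and names are assumptions for illustration, not the site's actual schema.

    import math

    def entropy(candidates):
        # each remaining person is its own equally likely class
        n = len(candidates)
        return math.log2(n) if n > 0 else 0.0

    def best_question(candidates, questions):
        # candidates: list of dicts mapping question -> True/False (the stored correct answers)
        def gain(q):
            yes = [p for p in candidates if p[q]]
            no = [p for p in candidates if not p[q]]
            n = len(candidates)
            return entropy(candidates) - (len(yes) / n * entropy(yes)
                                          + len(no) / n * entropy(no))
        return max(questions, key=gain)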

Q5. (20 marks)

We consider the linear counting estimator that estimates the number of distinct elements in a data stream. Using this as a building block, we shall derive methods to estimate the number of distinct elements after some common set operations on several data streams.

Let S1 and S2 be two data streams¹, and let C(Si) be the linear counting estimator for Si using the same hash function h() and the same length of bit array (i.e., using m bits; the bit array is denoted as C(Si).B).
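For concreteness, here is a compact sketch of one possible linear counting estimator, assuming Python's built-in hash stands in for h() and that m is small enough to keep the bit array as a plain list; the standard linear counting estimate is n̂ = −m · ln(V), where V is the fraction of zero bits.

    import math

    class LinearCounter:
        def __init__(self, m, h=hash):
            self.m, self.h = m, h
            self.B = [0] * m                  # the bit array C(S).B

        def add(self, x):
            self.B[self.h(x) % self.m] = 1    # set the bit that x hashes to

        def estimate(self):
            zeros = self.B.count(0)           # the estimate is undefined if no zero bits remain
            return -self.m * math.log(zeros / self.m)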

(1) Prove that C(S1 ∪ S2) = C(S1) ∨ C(S2). Here ∪ is the multiset union operator, and the ∨ operator on two linear counting estimators C1 and C2 returns a new estimator (with the same hash function) with an m-bit bit array whose j-th entry is the bitwise OR of the corresponding bits in C1 and C2, i.e., C1.B[j] | C2.B[j]. (A code sketch of this operator is given after part (3).)

(2) Prove that C(S1 ∩ S2) ≠ C(S1) ∧ C(S2). Here ∩ is the multiset intersection operator, and the ∧ operator is defined similarly to ∨ except that we use bitwise AND instead of bitwise OR, i.e., C1.B[j] & C2.B[j].

(3) Derive a method to estimate the number of distinct elements in S1∩S2, based only on linear counting estimators.
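And here is a sketch of the ∨ operator from part (1), realised as a bitwise OR of the two bit arrays, assuming the LinearCounter class sketched above:

    def union_estimator(c1, c2):
        # requires the same hash function and the same bit-array length m
        assert c1.m == c2.m and c1.h is c2.h
        merged = LinearCounter(c1.m, c1.h)
        merged.B = [b1 | b2 for b1, b2 in zip(c1.B, c2.B)]
        return merged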

Submission

Please write down your answers in a file named ass2.pdf. You must write down your name and student ID on the first page.

¹ Note that an element could appear in both S1 and S2.


You can submit your file by

give cs9318 ass2 ass2.pdf

Late Penalty. -10% for the first two days, and -30% for the following days.