Hierarchical clustering in data mining

by Online Tutorials Library

Hierarchical clustering is an unsupervised learning procedure that builds successive clusters from previously formed clusters, organizing the data into a tree of clusters. It starts by treating each data point as an individual cluster. The endpoint is a set of clusters in which each cluster is distinct from the others, and the objects within each cluster are broadly similar to one another.

There are two types of hierarchical clustering:

  • Agglomerative Hierarchical Clustering
  • Divisive Hierarchical Clustering

Agglomerative hierarchical clustering

Agglomerative clustering is one of the most common types of hierarchical clustering, used to group similar objects into clusters. It is also known as AGNES (Agglomerative Nesting). In agglomerative clustering, each data point starts as an individual cluster, and at each step clusters are merged in a bottom-up fashion until only one cluster remains.

Agglomerative hierarchical clustering algorithm

  1. Consider each data point as an individual cluster.
  2. Compute the similarity between each cluster and all other clusters (build the proximity matrix).
  3. Merge the two most similar clusters.
  4. Recalculate the proximity matrix for the new cluster.
  5. Repeat steps 3 and 4 until only a single cluster remains.
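To make these steps concrete, here is a minimal sketch in Python using SciPy's hierarchical clustering routines. The six 2-D points and the choice of single linkage are assumptions made purely for illustration:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Six hypothetical 2-D data points.
X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0],
              [5.2, 4.8], [9.0, 1.0], [9.1, 1.2]])

# linkage() performs the whole bottom-up procedure: it starts from
# singleton clusters and repeatedly merges the two closest ones.
# 'single' linkage measures cluster proximity by the minimum pairwise distance.
Z = linkage(X, method='single')

# Cut the merge tree to obtain, for example, three flat clusters.
labels = fcluster(Z, t=3, criterion='maxclust')
print(labels)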

Let’s understand this concept with the help of a graphical representation using a dendrogram.

The demonstration below shows how the algorithm actually works. No calculations have been performed here; all proximities among the clusters are assumed.

Let’s suppose we have six data points: P, Q, R, S, T, V.

[Figure: the six data points P, Q, R, S, T, V]

Step 1:

Consider each point (P, Q, R, S, T, V) as an individual cluster and find the distance between each cluster and all the other clusters.

Step 2:

Now, merge the most similar clusters into a single cluster. Say cluster Q and cluster R are closest to each other, and likewise clusters S and T, so we merge each pair in this step. We are left with the clusters [(P), (QR), (ST), (V)].

Step 3:

Here, we recalculate the proximities as per the algorithm and combine the two closest clusters, (ST) and (V), to form the new set of clusters [(P), (QR), (STV)].

Step 4:

Repeat the same process. The clusters QR and STV are now the closest, so they are combined to form a new cluster. We are left with [(P), (QRSTV)].

Step 5:

Finally, the remaining two clusters are merged to form a single cluster, [(PQRSTV)].
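The same merge order can be reproduced in code. The sketch below feeds an invented distance matrix (remember, the proximities in this walkthrough are assumed, not computed) into SciPy and plots the resulting dendrogram for P, Q, R, S, T, V:

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

labels = ['P', 'Q', 'R', 'S', 'T', 'V']

# Hypothetical symmetric distance matrix, chosen so that the merges happen
# in the order described above: (Q,R), (S,T), (ST,V), (QR,STV), then P.
D = np.array([
    [ 0,  9,  9, 10, 10, 10],
    [ 9,  0,  1,  8,  8,  8],
    [ 9,  1,  0,  8,  8,  8],
    [10,  8,  8,  0,  2,  4],
    [10,  8,  8,  2,  0,  4],
    [10,  8,  8,  4,  4,  0],
], dtype=float)

# squareform() converts the square matrix to the condensed form linkage() expects.
Z = linkage(squareform(D), method='single')
dendrogram(Z, labels=labels)
plt.show()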

Divisive Hierarchical Clustering

Divisive hierarchical clustering is exactly the opposite of agglomerative hierarchical clustering. In divisive clustering, all the data points start in a single cluster, and in every iteration the data points that are least similar to the rest are split off into a cluster of their own. The process continues top-down until we are left with N clusters, one for each data point.
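The classic divisive algorithm is DIANA (Divisive Analysis); a common and simpler way to illustrate the top-down idea is bisecting k-means, sketched below. The data and the split-the-largest-cluster rule are illustrative assumptions, not a fixed prescription:

import numpy as np
from sklearn.cluster import KMeans

def divisive(X, n_clusters):
    # Start with one cluster containing every point, then repeatedly
    # split the largest cluster in two until we have n_clusters.
    clusters = [np.arange(len(X))]
    while len(clusters) < n_clusters:
        idx = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        members = clusters.pop(idx)
        km = KMeans(n_clusters=2, n_init=10).fit(X[members])
        for side in (0, 1):
            clusters.append(members[km.labels_ == side])
    return clusters

X = np.random.rand(12, 2)   # hypothetical 2-D data
for cluster in divisive(X, 4):
    print(cluster)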

[Figure: divisive hierarchical clustering splits a single cluster top-down]

Advantages of Hierarchical clustering

  • It is simple to implement and, in some cases, gives the best output.
  • It produces a hierarchy, a structure that contains more information than a flat partition.
  • It does not require us to pre-specify the number of clusters.

Disadvantages of hierarchical clustering

  • It tends to break large clusters.
  • It has difficulty handling clusters of different sizes and non-convex shapes.
  • It is sensitive to noise and outliers.
  • Once a merge or a split has been performed, it can never be undone.
