5 Clustering Algorithms Data Scientists need to know – The key is always to understand the basic approach of any algorithm you want to use

As a data scientist, you have several basic tools at your disposal, which you can also apply in combination to a data set. Here we present some clustering algorithms that you should definitely know and use.

In times of Big Data, not only does the sheer amount of data increase, but so do the relationships between data points: more and more complex dependencies are formed. This makes it all the more difficult to recognize similar properties and to assign the data to so-called clusters in a way that can be evaluated.

You have certainly heard of these algorithms and maybe used one or the other, but do you really know what clustering algorithms are?

What are clustering algorithms?

So let’s first clarify what these algorithms actually are. The goal is clear: you want to identify similar properties between individual data points in a data set and group them in a meaningful way. These properties are often high-dimensional.

With the help of cluster analysis, you want to reduce this high-dimensional information to a low-dimensional representation, for example in 2D space. Clustering is an unsupervised machine learning technique: the algorithm groups the data points without relying on predefined labels.

The approach to clustering differs from technique to technique. All have their advantages and disadvantages, so it makes sense to try several on one set of data, or apply them in combination. Below we will introduce you to some popular clustering methods and explain their grouping approach.

Figure: Clustering Machine Learning Algorithms – Popular clustering algorithms

Mean-Shift Clustering

The first algorithm we want to introduce you to is Mean-Shift Clustering. With it, you can find dense areas of data points based on the concept of kernel density estimation (KDE). The basis of the clustering is a circular sliding window that moves toward higher density at each iteration. The center of each cluster, the so-called centroid, is determined from the points within the window.

The movement is created by shifting the center to the mean of the points within the window; the density within the sliding window is thus proportional to the number of points it contains. This motion continues until there is no direction in which the shift would capture more points within the kernel.

Figure: Clustering Machine Learning Algorithms – Mean-Shift Clustering Principle
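
To try this out, scikit-learn ships a Mean-Shift implementation. Below is a minimal sketch on synthetic data; the quantile used to estimate the bandwidth (the radius of the sliding window) is an illustrative assumption, not a tuned value.

```python
# Minimal Mean-Shift sketch with scikit-learn (synthetic data).
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# The bandwidth is the radius of the sliding window; quantile=0.2
# is an illustrative assumption, not a tuned setting.
bandwidth = estimate_bandwidth(X, quantile=0.2)

ms = MeanShift(bandwidth=bandwidth)
labels = ms.fit_predict(X)

print("estimated clusters:", len(np.unique(labels)))
print("cluster centers:\n", ms.cluster_centers_)
```

A useful property: unlike k-means, Mean-Shift does not require you to specify the number of clusters in advance; it follows from the bandwidth.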

Hierarchical Cluster Analysis (HCA)

With HCA, clusters are formed based on empirical similarity measures of the data points. The two most similar objects are merged one after the other until all objects are in one cluster, which results in a tree-like structure. In contrast to the k-means algorithm, which we will discuss later, similarities between the clusters also play a role; these are represented by a cluster distance. With k-means, objects within a cluster are only similar to each other and dissimilar to objects in other clusters, without any relationship between the clusters.

You can create an HCA in different ways. There are two elementary procedures, the top-down and the bottom-up. If you want to know more about Hierarchical Cluster Analysis, read this article.

Figure: Clustering Machine Learning Algorithms – HCA Principle
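
If you just want to apply the technique before reading the details, scikit-learn provides a bottom-up implementation. A minimal sketch, where n_clusters=3 and Ward linkage are illustrative assumptions:

```python
# Minimal agglomerative (bottom-up) HCA sketch with scikit-learn.
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

# n_clusters=3 and linkage="ward" are illustrative assumptions.
hca = AgglomerativeClustering(n_clusters=3, linkage="ward")
labels = hca.fit_predict(X)

print(labels[:10])  # cluster index of the first ten points
```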

Expectation-Maximization (EM) Clustering using Gaussian Mixture Models (GMM)

GMM assumes that the data points are Gaussian distributed rather than circular, so the clusters are described by their mean and standard deviation. One Gaussian distribution is randomly initialized per cluster, and its parameters are then found using the Expectation-Maximization (EM) optimization algorithm. The probability of belonging to each cluster is calculated for each data point: the closer a point is to a Gaussian's center, the more likely it belongs to that cluster. Based on these probabilities, a new set of parameters for the Gaussian distributions is calculated iteratively, so that the probabilities within a cluster are maximized.
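
In practice, you rarely implement EM yourself. A minimal sketch with scikit-learn's GaussianMixture, where n_components=3 is an illustrative assumption:

```python
# Minimal EM/GMM clustering sketch with scikit-learn.
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, random_state=1)

# n_components=3 is an illustrative assumption.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=1)
gmm.fit(X)

hard_labels = gmm.predict(X)        # most likely cluster per point
soft_labels = gmm.predict_proba(X)  # membership probabilities per point

print(soft_labels[0].round(3))
```

The soft labels are exactly the cluster membership probabilities described above; k-means, by contrast, only ever gives you a hard assignment.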

K-Means Clustering

The k-means algorithm, described by MacQueen (1967), goes back to methods described by Lloyd (1957) and Forgy (1965). Besides cluster analysis, you can also use the algorithm for vector quantization. A data set is partitioned into k groups of equal variance.

The number of clusters must be specified in advance. Each disjoint cluster is described by the average of all contained samples, the so-called cluster centroid.


Each centroid is updated to represent the average of its constituent instances. This is repeated until the assignment of instances to the clusters no longer changes. If you want to learn more about the K-means algorithm, check this out.

Figure: K-Means Principle
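
A minimal k-means sketch with scikit-learn; k=3 simply matches the synthetic data here and is an illustrative assumption:

```python
# Minimal k-means sketch with scikit-learn.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=7)

# The number of clusters k must be specified in advance.
km = KMeans(n_clusters=3, n_init=10, random_state=7)
labels = km.fit_predict(X)

print("centroids:\n", km.cluster_centers_)
```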

Density-Based Spatial Clustering of Applications with Noise (DBSCAN)

DBSCAN is a density-based cluster analysis that explicitly handles noise. Starting from an arbitrary data point, all points within a distance epsilon are collected as its neighborhood. Clustering then begins once this neighborhood contains a certain minimum number of data points.

The current data point either becomes the first point of a new cluster or is labeled as noise. In both cases, it is marked as examined. The neighboring data points are then added to the cluster, and their own neighborhoods are examined in turn. Once all reachable neighbors have been added, a new, unexamined point is picked and processed, from which a new cluster may be formed.

Figure: Clustering Machine Learning Algorithms – How DBSCAN works
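
A minimal DBSCAN sketch with scikit-learn; eps and min_samples are illustrative assumptions that must be tuned for real data:

```python
# Minimal DBSCAN sketch with scikit-learn. Points labeled -1 are noise.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=3)

# eps is the neighborhood radius, min_samples the minimum neighbor
# count that starts a cluster; both are illustrative assumptions.
db = DBSCAN(eps=0.2, min_samples=5)
labels = db.fit_predict(X)

print("clusters:", len(set(labels) - {-1}))
print("noise points:", int(np.sum(labels == -1)))
```

The moon-shaped data also shows a strength of DBSCAN: it can find non-convex clusters that k-means would split incorrectly.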

The field of clustering algorithms is wide, and each takes a different approach. You should be aware that there is no single solution: consider each algorithm as another tool. Not every technique works equally well in every situation.

The key here is to always understand the basic approach of each algorithm you want to use. Build a small portfolio and get to know these techniques well. Once you master them, you should then add new ones. Knowing your own tools is crucial to avoid trial and error and to gain control over your data. Remember: even no result is a result. Even if an algorithm doesn’t work well on your data set, that gives you information about the properties of the data.

Data Mining vs Big Data Analytics – You need the right tools and you need to know how to use them!

Data Mining vs Big Data Analytics – Both data disciplines, but what makes them different? In this article, we introduce you to both fields and explain the key differences.

Data science is an interdisciplinary scientific field that has come more and more into focus in recent decades. Many companies see it as the key to becoming an Industry 4.0 company. The hope is that valuable information can be found in the company’s own data, which can be used to massively increase profitability. Terms such as big data, data mining, data analytics, and machine learning are thrown into the ring, and many people do not realize that these terms describe distinct disciplines. If you want to build a house, you need the right tools, and you have to know how to use them.

Map of Data Disciplines

First of all, you should think of the individual disciplines as nested inside each other like the layers of an onion. There is overlap between all the fields, and when you talk about one discipline, you are also talking about the layers below it.

Figure: Map of data disciplines

Since data analytics sits above data mining in the layer model, it is already clear that mining must be a subdiscipline of analytics. Therefore, we will first describe the broader discipline.

Data Mining vs Big Data Analytics – What is Analytics?

Big data analytics, as a subfield of data analysis, describes the use of data analysis tools without special data processing. In data analytics, you use queries and data aggregation methods, but also data mining techniques and tools. The goal of this discipline is to represent various dependencies between input variables.

The following figure shows the individual overlaps in the use of the tools of the different disciplines.

Figure: Overlaps of the different data disciplines

Data Mining vs Big Data Analytics – What is Data Mining?

Data mining is a subset of data analytics. At its core, it is about identifying and discovering correlations in a large data set. This field is especially useful when you know little about the available data.


But what does a typical data mining process look like and what are typical data mining tasks?

Data Mining Process

You can divide a typical data mining process into several sequential steps. In the preprocessing stage, your data is first cleaned; this involves integrating sources and removing inconsistencies. Then you can convert the data into the right format. After that, the actual analysis step, the data mining, takes place. Finally, your results have to be evaluated; expert knowledge is required here to check the patterns found and whether they fulfill your objectives.

Figure: Data Mining Process
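
Sketched in code, such a process might look as follows; the CSV file and column names are hypothetical placeholders, and k-means stands in for whatever mining technique fits your task:

```python
# Hypothetical sketch of a data mining process; file and column
# names are placeholders, k-means is one possible mining step.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# 1. Preprocessing: integrate sources, remove inconsistencies
df = pd.read_csv("measurements.csv")  # hypothetical source
df = df.drop_duplicates().dropna()

# 2. Transformation: convert the data into the right format
X = StandardScaler().fit_transform(df[["feature_a", "feature_b"]])

# 3. Data mining: the actual analysis step
df["cluster"] = KMeans(n_clusters=3, n_init=10).fit_predict(X)

# 4. Evaluation: inspect the patterns with domain knowledge
print(df.groupby("cluster").mean(numeric_only=True))
```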

The term data mining covers a variety of techniques and algorithms to analyze a data set. In the following we will show you some typical methods.

Data Mining Tasks

Besides identifying unusual data points with outlier detection, you can also group your objects based on similarities using cluster analysis. In this article we have already summarized some popular clustering algorithms that you should know as a data scientist. While association analysis only identifies the relationships and dependencies in the data, regression analysis provides you with the relationships between dependent and independent variables. Through classification, you assign previously unassigned elements to existing classes. You can also summarize the data to reduce the data set to a more compact description without significant loss of information.

Figure: Typical Data Mining Tasks
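
As an illustration of one of these tasks, here is a minimal outlier detection sketch using scikit-learn's IsolationForest, one of several possible algorithms; the contamination rate is an illustrative assumption:

```python
# Minimal outlier detection sketch with IsolationForest.
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest

X, _ = make_blobs(n_samples=300, centers=1, random_state=0)

# contamination=0.05 (expected outlier share) is an illustrative assumption.
iso = IsolationForest(contamination=0.05, random_state=0)
flags = iso.fit_predict(X)  # -1 marks an outlier, 1 an inlier

print("outliers found:", int((flags == -1).sum()))
```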

Data Mining vs Big Data Analytics – Conclusion

Although the two disciplines are related, they are two different disciplines. Data mining is more about identifying key relationships, patterns, or trends in the data, while data analytics is more about deriving a data-driven model. On this path, data mining is an important step in making the data more usable. In the end, it’s not a versus: both disciplines are part of an analytics pipeline.
In this article, we go further into the differences between the various data disciplines and clarify the difference between data analysis and data science.

What does HCA stand for?

What does HCA stand for? What is the difference between Agglomerative and Divisive? When do I use the algorithm and what are its strengths? In this article we will clarify all these questions.

If you don’t know what clustering means, check out this article. Here we also explain four other clustering methods that you as a data scientist must know.

What is an HCA?

Hierarchical Cluster Analysis, or HCA, is a technique for the optimal and compact connection of objects based on empirical similarity measures. The two most similar objects are merged one after another until all objects are finally in one cluster. This results in a tree-like structure.

Figure: What does HCA stand for? Basic principle of an applied HCA to raw data.

So how does a hierarchical cluster procedure work?

Agglomerative vs Divisive Calculation

The basic clustering can be done in two opposite ways: agglomerative and divisive calculation.

Agglomerative clustering:

Agglomerative Nesting, abbreviated AGNES, is also known as the bottom-up method. It first creates a cluster from the two most similar objects and then merges further objects and clusters until all the data has been enclosed in a single cluster.

The divisive cluster calculation follows an opposite concept.

Divisive hierarchical clustering:

Divisive Analysis, also known as DIANA, is a top-down method. All objects initially form a single cluster, which is then successively split into smaller clusters.

In the following figure, the agglomerative process is compared with the divisive process.

Figure: What does HCA stand for? Agglomerative vs Divisive Calculation

Thus, the goal is to represent the common properties of multidimensional raw data in a low dimension. A strength of this machine learning method is the inclusion of cluster relationships. With k-means, objects within a cluster are only similar to each other and dissimilar to objects in other clusters, without any relationship between the clusters. If you want to know more about this other popular clustering method, read this article.

How to calculate the cluster distances?

As mentioned earlier, not only are similarities between data points within a cluster weighted, but also similarities between groups. These similarities are represented by distances between the clusters, which can be determined in different ways. One option is the distance between the centroids of two clusters. Beyond that, single linkage is the shortest distance between two clusters, complete linkage is the largest distance between two clusters, and average linkage is the average distance between two clusters.

The figure below contrasts each cluster distance calculation method.

Figure: Cluster distance calculation methods
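
With SciPy you can compute all of these variants through a single function, changing only the method argument. A minimal sketch on synthetic data:

```python
# Minimal sketch of different linkage criteria with SciPy.
from scipy.cluster.hierarchy import linkage
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=50, centers=3, random_state=4)

Z_single = linkage(X, method="single")      # shortest distance
Z_complete = linkage(X, method="complete")  # largest distance
Z_average = linkage(X, method="average")    # average distance
Z_centroid = linkage(X, method="centroid")  # centroid distance

# Each row of a linkage matrix is one merge: [idx1, idx2, distance, size]
print(Z_single[:3])
```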

In addition to the planar representation, the HCA can also be represented in a dendrogram.

HCA represented in a Dendrogram

Since an HCA describes a tree structure, it can be well represented in a dendrogram. Here the connections between the individual data elements and between the clusters become clearly visible. This diagram can help you choose the optimal number of clusters in the data, depending on where you cut the tree.

In the following figure, for example, such a dendrogram is shown for both agglomerative and divisive calculation.

Figure: HCA presented as a dendrogram for agglomerative and divisive calculation
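
A minimal sketch of building and cutting such a dendrogram with SciPy; the cut height t=10 is an illustrative assumption that you would normally read off the plotted tree:

```python
# Minimal dendrogram sketch with SciPy and matplotlib.
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=30, centers=3, random_state=5)

Z = linkage(X, method="ward")  # agglomerative merge history
dendrogram(Z)
plt.show()

# Cutting the tree at a chosen distance yields flat cluster labels;
# t=10 is an illustrative assumption read off the dendrogram.
labels = fcluster(Z, t=10, criterion="distance")
print(labels)
```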