EXPERT KNOWLEDGE AT A GLANCE

Category: Python

NumPy vs Pandas – Which is used When?

NumPy vs Pandas – Ever larger amounts of data accumulate in every scientific and economic field, and they must be analyzed and managed performantly. Learning a programming language has therefore become indispensable across disciplines.

For many, Python is the first programming language in the classical sense, thanks to its beginner-friendliness and mathematical focus. Through the modular integration of powerful mathematical libraries, Python offers access to ready-made, optimized computational tools.

NumPy vs Pandas – Their place in the Python ecosystem

However, this offer can also quickly become overwhelming. Which library, which framework is suitable for my purposes? Will this tool save me work, or will I quickly reach its limits? Here you can learn more about SciPy and why you should prefer it over MATLAB, and here we compared the two Python visualization libraries matplotlib and seaborn. These Python libraries are fully compatible with each other and together form a very capable data science toolkit. NumPy and Pandas are perhaps the two best-known Python libraries. But what are the differences between them? We get to the bottom of this question in this article.

What actually is NumPy?

NumPy stands for “Numerical Python” and is an open source Python library for array-based calculations. It was first released in 1995 as Numeric, the first implementation of a Python matrix package, and re-released as NumPy in 2006. The library is intended to make it easy to handle vectors, matrices, and large multidimensional arrays in general.

 

NumPy vs Pandas – NumPy's Major Applications

For performance reasons, it is written in C, a low-level, machine-oriented programming language. NumPy is compatible with a wide variety of Python libraries, some of which are themselves based on NumPy and add further useful functions to its feature set, such as minimization, regression, and Fourier transforms.

Python and Science

As mentioned earlier, Python is the programming language most intensively used for data processing and analysis in scientific research across all disciplines. What is very interesting here is that, at the data level, the solution approaches are similar across disciplines. An exchange of ideas has thus become indispensable and increasingly leads to a fusion of the sciences.

This is only mentioned in passing, but it underlines the importance of this programming language and its libraries, which are so often open source and developed further by a community.

NumPy vs Pandas – Scientific Computing with NumPy

NumPy was developed specifically for scientific calculations and forms the basis for many specific frameworks and libraries.

The elementary NumPy data structure

The core functionality of NumPy is based on the “ndarray” data structure.

NumPy vs Pandas – NumPy's fundamental data structure

Such an array can hold only elements of the same data type and always consists of a pointer to a contiguous block of memory together with metadata describing the data stored in it. This layout allows processes to access and manipulate the data very efficiently.

NumPy vs Pandas – NumPy's data structure is manipulable

Thus, the shape can be changed via so-called reshaping, smaller subarrays can be created within a given larger array, and arrays can be split or merged.
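As a rough illustration of these operations, here is a minimal NumPy sketch (the array values are arbitrary toy data):

import numpy as np

# Create a one-dimensional array and reshape it into a 3x4 matrix
a = np.arange(12)
m = a.reshape(3, 4)

# Take a smaller subarray (a view) of the larger array
sub = m[:2, 1:3]

# Split the matrix into two parts along its columns and merge them again
left, right = np.hsplit(m, [2])
merged = np.hstack((left, right))

print(m.shape, sub.shape, merged.shape)  # (3, 4) (2, 2) (3, 4)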

What is Pandas?

Pandas is an open source library for data analysis and manipulation in Python. It was released back in 2008 by Wes McKinney and is written in Python, Cython, and C. Pandas is used in almost all areas and has found worldwide appeal across industries.

NumPy vs Pandas – Pandas' Major Applications

The name Pandas is derived from Panel Data.
Its strength lies in the processing and analysis of tabular data and time series.

NumPy vs Pandas – Pandas Features

Pandas offers a large number of operations, especially for pre-processing data. In addition to high-performance filter functions, very large data volumes with more than 500 thousand rows can be transformed, manipulated, aggregated, and cleaned.

Pandas fundamental data structures

As a basis for the individual functions and tools that Pandas provides, the library defines its own data objects. These objects can be one, two, or even three-dimensional.

The one-dimensional Series object can hold different data types, in contrast to NumPy's ndarrays, and corresponds to a data structure made of two arrays: one array serving as the index and one array holding the actual data.
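A minimal sketch of how a Series wraps an index array and a data array (the values and labels are invented for illustration):

import pandas as pd

# A Series holds an index array and a data array; the values may mix data types
s = pd.Series([3.14, "text", 42], index=["a", "b", "c"])

print(s.index.to_numpy())   # the index array: ['a' 'b' 'c']
print(s.to_numpy())         # the data array:  [3.14 'text' 42]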

The two-dimensional DataFrame object contains an ordered collection of columns. Each column can have its own data type, and each value is uniquely addressed by a row index and a column index.
The eponymous Panel object is a three-dimensional data set consisting of DataFrames. These objects are divided into a major axis, the index rows of each DataFrame, and a minor axis, the columns of each of the DataFrames. (Note that the Panel object has since been deprecated and removed in newer Pandas versions in favor of MultiIndex DataFrames.)
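A small illustrative DataFrame, assuming toy column names and values:

import pandas as pd

# Each column may have its own data type; every value is addressed
# by a row index and a column label
df = pd.DataFrame(
    {"name": ["Ada", "Alan"], "year": [1815, 1912], "score": [9.5, 9.0]},
    index=["r1", "r2"],
)

print(df.dtypes)             # object, int64, float64
print(df.loc["r2", "name"])  # single value via row index and column index -> 'Alan'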

NumPy vs Pandas – Conclusion

Both libraries have their similarities, which stem from the fact that Pandas is built on NumPy. But is it an either-or question? No, clearly not. Pandas builds on NumPy but adds so many features of its own that there is a clear justification for their parallel existence. They simply serve different purposes, and both should be used.


One of the main differences between the two open source libraries is the data structure used. Pandas allows analysis and manipulation of tabular data, while NumPy mainly works with numerical data in arrays whose objects can have up to n dimensions. These data forms can easily be converted into one another via a common interface.
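A hedged sketch of this round trip, using invented toy data:

import numpy as np
import pandas as pd

arr = np.random.rand(4, 3)                       # NumPy array with numerical data
df = pd.DataFrame(arr, columns=["a", "b", "c"])  # wrap it in a labeled DataFrame

# ... tabular preprocessing with Pandas ...
cleaned = df[df["a"] > 0.1]

back = cleaned.to_numpy()  # hand the result back to NumPy-based algorithms
print(type(back), back.shape)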

Pandas is more performant, especially with very large data sets (500K rows and more). This makes data preprocessing and reading from external data sources easier to perform with Pandas; the result can then be transferred as a NumPy array into complex machine learning or deep learning algorithms. If you want to know more about machine learning methods and their fields of application, take a look at this article from us.

Supervised vs Unsupervised vs Reinforcement Learning – The fundamental differences

Supervised vs Unsupervised vs Reinforcement Learning – the three main categories of machine learning. Why these boundaries have been drawn and what they look like is discussed in this article. Knowing them is essential to understanding machine learning correctly and applying it to data in a meaningful way.

Supervised vs Unsupervised vs Reinforcement Learning – Overview

Supervised vs Unsupervised vs Reinforcement Learning – Machine Learning Categories

Machine learning is a branch of artificial intelligence. While AI deals with building artificial intelligence and compares its workings with those of the human brain, machine learning is a collection of mathematical methods for pattern recognition. If you want to know more about the differences between machine learning, AI, and deep learning, read our article on the subject. IT systems should be given the ability to automatically learn from experience and improve. Algorithms play a central role here and can be classified into different learning categories.

In the following figures the three main categories of machine learning methods are shown.

Supervised vs Unsupervised vs Reinforcement Learning – Machine Learning Context

In the meantime, there are many more categories, some of which are hybrids of the individual main categories. One example is semi-supervised learning. This is certainly also a major machine learning topic, but has been left out for the time being for the sake of simplicity.

What is supervised learning?

In supervised learning, the machine learning algorithm iteratively learns the dependencies between data points. The output to be learned is specified in advance, and the learning process is supervised by matching the predictions against it. The optimized algorithm is then meant to apply the learned patterns to unknown data in order to make predictions.

Supervised vs Unsupervised vs Reinforcement Learning – Supervised Learning

Supervised learning methods can be applied to regression problems, i.e., prediction or trend estimation, as well as to classification problems.

What is supervised classification?

In classification, abstract classes are formed in order to delimit and order data in a meaningful way. For this purpose, objects are grouped on the basis of similar characteristics and structured relative to one another.

Decision trees can be used as prediction models to create a hierarchical structure, or the feature values can be assigned class labels in the form of a vector.

In the following figure the most important supervised classification algorithms are listed.

Supervised vs Unsupervised vs Reinforcement Learning – Main Algorithms of Supervised Learning

What is supervised regression?

Supervised regression algorithms, on the other hand, can be used to make predictions and to infer causal relationships between independent and dependent variables.
For example, linear regression can be used to fit a straight line to the data.
We have discussed the exact process of linear regression here in this article.
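As a rough illustration, the following sketch fits a line with scikit-learn (a library choice of ours, not prescribed by the article; the data is synthetic):

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: one independent variable x, one dependent variable y
x = np.arange(10).reshape(-1, 1)
y = 2.0 * x.ravel() + 1.0 + np.random.normal(scale=0.5, size=10)

model = LinearRegression().fit(x, y)   # learn the straight line from labeled data
print(model.coef_, model.intercept_)   # fitted slope and intercept
print(model.predict([[12.0]]))         # prediction for unseen data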

What is unsupervised learning?

In unsupervised learning, patterns are determined in data without initial patterns and relationships being known.
Especially for complex tasks, these methods can be useful for finding solutions that would hardly be attainable by hand. Examples are autonomous driving or large biochemical systems with many interactions.
One key to success is a large data set: the more data available, the more accurate the models that can be created.

Supervised vs Unsupervised vs Reinforcement Learning – Unsupervised Learning

In unsupervised machine learning, two basic principles, which also classify the algorithms used, can be distinguished: clustering and dimensionality reduction.

What is unsupervised clustering?

The main goal of unsupervised clustering is to create collections of data elements that are similar to each other, but dissimilar to elements in other clusters. The figure below shows some of the main clustering algorithms.

Supervised vs Unsupervised vs Reinforcement Learning – Main algorithms of unsupervised learning

The clustering algorithms differ primarily in the cluster creation process, but also in the definition of such clusters. Thus, the relationships between clusters can also be used and hierarchical relationships can be explored.

What is unsupervised dimensionality reduction?

With a high number of features, these transformation methods translate high-dimensional relationships into a low-dimensional representation. The goal is to keep the loss of information as small as possible.
The reduction methods can be divided into two main categories: methods from linear algebra and methods from manifold learning.

Manifold learning is an approach to nonlinear dimensionality reduction. Algorithms for this task are based on the idea that they can learn the intrinsic dimensionality of the data without a given classification and project it into a low-dimensional space.
From the field of linear algebra, for example, matrix factorization methods can be used for dimensionality reduction.
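As an illustrative sketch of the linear-algebra route, the following uses a plain singular value decomposition in NumPy on synthetic, correlated toy data (our own example, not taken from the article):

import numpy as np

# Toy data: 100 samples with 5 correlated features built from 2 latent factors
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
X = base @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(100, 5))

# Center the data and factorize it with a singular value decomposition
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep only the first two components: a 5-D to 2-D reduction
X_2d = Xc @ Vt[:2].T
print(X_2d.shape)  # (100, 2)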

What is reinforcement learning?

In reinforcement learning, a program, a so-called agent, is supposed to independently develop a strategy for performing actions in an environment. For this purpose, positive or negative reinforcements are conveyed that describe the interactions of the agent with the environment; in other words, immediate feedback on an executed action. The program should maximize rewards or minimize punishments. The environment is a kind of simulation scenario that the agent has to explore.
The following figure describes the interactions of all components of a reinforcement learning process.

Supervised vs Unsupervised vs Reinforcement Learning – Main principle of reinforcement learning

There are two basic types of reinforcement learning, depending on whether a model of the environment is available or not.
In model-based RL, the agent uses predictions of the environment response during learning or action.
If no model is available, the data is generated by trial and error.

Things you need to know when you start using Apache Spark

Apache Spark Streaming – Every company produces several million pieces of data every day. Properly analyzed, this information can be used to derive valuable business strategies and increase productivity.
Until now, this data was consumed and stored in a persistence layer. Even today, this is an important step in order to be able to perform analyses on historical data at a later date. Often, however, analysis results are desired in real time, even if it is only to know that reference values have been exceeded.


So-called data streams, i.e. data that is continuously generated from thousands of data sources, can already be consumed before they end up in a persistence layer, without significantly reducing the flow rate. It is even possible to train neural networks using such a stream.


In this article, we’ll tell you why you shouldn’t miss out on Apache Spark and Apache Spark Streaming if you’re planning to integrate stream processing in your organization.

What is Apache Spark?


Apache Spark has become one of the most important and performant unified data analytics engines on the market today. The framework provides a complete solution for data processing and AI integration. It allows companies to easily develop performant data pipelines and train AI methods on massive data streams.


Apache Spark combines several partially interdependent components, so it can be deployed in a modular fashion to a certain extent.
Spark can run in its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes.
The data can come from streaming sources, such as Kafka, as well as from static data sources. So far, the programming languages Java, Scala, Python, and R are supported. These are currently the most commonly used languages across all scientific disciplines for implementing data analysis methods.
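A minimal PySpark sketch of starting such an application locally; the file name and app name are placeholders, and a real deployment would point master at YARN, Mesos, Kubernetes, or a standalone cluster instead:

from pyspark.sql import SparkSession

# Start a local Spark application; on a cluster, "master" would point to
# the cluster manager instead of local threads
spark = (
    SparkSession.builder
    .appName("intro-example")
    .master("local[*]")
    .getOrCreate()
)

df = spark.read.json("events.json")  # static source; Kafka would be a streaming source
df.show()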

What does a Spark cluster look like?

Spark applications run as independent sets of processes on a cluster. The coordinator of a Spark program on a cluster is the so-called SparkContext object, which controls the individual Spark applications.
The coordinator then connects to the central element, a cluster manager, which allocates resources to the individual applications.
The figure below shows an example of a typical Spark cluster with all its components.

Overview of an Apache Spark Cluster

The actual calculations and data storage then take place on the nodes. These processes, also called executors, then execute tasks and hold the data in memory or disk space. The cache can then be accessed by another node.

Apache Spark's underlying technology – the key to high performance

Spark Core is the underlying unified computing engine on which all Spark functions are built. It enables parallel processing even for large datasets and thus ensures very high-performance processes.
The following figure shows how the Apache Spark Core APIs are composed.

Apache Spark Core APIs

The core API consists of low-level APIs, where objects are manipulated via Resilient Distributed Datasets (RDDs), and structured APIs, where all data types are manipulated and batch or streaming jobs take place.

How do the individual Apache Spark APIs work?

In order to properly understand the API structure, its components must be placed in a historical context.

Development history of the Apache Spark APIs

What is the RDD API?

The RDD (Resilient Distributed Dataset) API has been part of Spark since the first release and is based on the Scala collections API.
RDDs are a set of Java or Scala objects that represent data and are thus the building blocks of Spark. They excel in being compile-time type-safe and lazily evaluated.

All higher-level APIs can be decomposed into RDDs. Various transformations can be performed in parallel using this API. Each transformation defines an operation whose execution is triggered by calling an action method and which creates a new RDD representing the transformed data.
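A small illustrative RDD pipeline in PySpark (it assumes a SparkSession named spark, as in the earlier sketch, and uses toy numbers):

# Transformations (map, filter) are lazy and each returns a new RDD;
# only the action (reduce) triggers the actual parallel computation
sc = spark.sparkContext

rdd = sc.parallelize(range(1, 1001))
squared = rdd.map(lambda x: x * x)            # transformation -> new RDD
even = squared.filter(lambda x: x % 2 == 0)   # transformation -> new RDD
total = even.reduce(lambda a, b: a + b)       # action -> result is computed

print(total)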

What is the Dataframe API?

The DataFrame API introduces a higher-level abstraction. Spark DataFrames correspond to the Pandas DataFrame structure. They are built on top of RDDs and represent two-dimensional data together with a schema. A DataFrame contains an ordered collection of columns, each column can consist of a different data type, and each value is uniquely addressed by a row and a column index.


When data is transferred between nodes, only the data itself is transferred; the metadata is managed in a schema registry separate from Spark. This has significantly improved the performance and scalability of Spark.
The API is suitable for creating a relational query plan, so data can now be manipulated using a query language.
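A brief sketch of the same idea in PySpark, first with DataFrame operations and then with the SQL query language (toy data, and again assuming an existing SparkSession named spark):

from pyspark.sql import functions as F

# A DataFrame carries a schema and can be queried relationally
df = spark.createDataFrame(
    [("Ada", 36), ("Alan", 41), ("Grace", 36)],
    ["name", "age"],
)

df.groupBy("age").agg(F.count("*").alias("n")).show()

# The same manipulation expressed through the SQL query language
df.createOrReplaceTempView("people")
spark.sql("SELECT age, COUNT(*) AS n FROM people GROUP BY age").show()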

What is the Dataset API?

When working with DataFrames, compile-time type safety is lost, which is a strength of the RDD API. The Dataset API was created to combine the advantages of both APIs. It is thus the second most important Spark API next to the RDD API.


The basis of this API are integrated encoders, which are responsible for the conversion between JVM objects and the internal Spark SQL representation.

What components does Apache Spark consist of?

Spark is modularly extensible through the use of components. Spark includes libraries for various tasks ranging from SQL to streaming and machine learning. All components are based on Spark Core, the foundation for parallel and distributed processing of large data sets. What this API looks like in detail and what makes it so performant is explained in the section on Spark Core above.
The following figure lists the individual Apache Spark components.
The following figure lists the individual Apache Spark components.

Apache Spark Ecosystem

Apache Spark SQL

With this component, RDDs are converted into so-called DataFrames, i.e. they are provided with metadata information.
This is done by the Catalyst optimizer, which builds and executes a query plan in the form of a tree.

Apache Spark GraphX

This framework can be used to perform high-performance calculations on graphs. These operations can run in parallel.

Apache Spark MLlib/SparkML

With the MLlib component, machine learning pipelines can be constructed very easily. For this purpose, ready-made models and common machine learning algorithms (classification, regression, clustering …) can be used. Thus, data identification, feature extraction and transformation are combined in a unified framework.

Apache Spark Streaming

Apache Spark Streaming enables and controls the processing of data streams. However, it can also process data from static data sources.
In the case of data streaming, the input stream flows from a streaming data source, such as Kafka, Flume, or HDFS, into Apache Spark Streaming.
There, it is broken into batches and fed into the Spark engine for parallel processing. The final results can then be output to HDFS, databases, and dashboards.
The following figure illustrates the principle of Apache Spark Streaming.

Principle of Apache Spark Streaming

All components can consume directly from the stream via Apache Spark Streaming. This component plays a crucial role here: it coordinates the requests via sliding-window operations and regulates the data flow. Since all components are based on the Spark Core API, absolute compatibility is guaranteed. Especially in the big data area, this can deliver a decisive performance bonus.
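The following sketch uses the newer Structured Streaming API rather than the classic DStream API described above; it assumes a reachable Kafka broker, the spark-sql-kafka package on the classpath, a hypothetical topic name, and an existing SparkSession named spark:

# Read a Kafka topic as a stream, decode the message payload and print it
stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "sensor-events")
    .load()
)

values = stream.selectExpr("CAST(value AS STRING) AS value")

query = (
    values.writeStream
    .outputMode("append")
    .format("console")   # in production this could be HDFS, a database or a dashboard
    .start()
)
query.awaitTermination()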

PCA vs Linear Regression – Why you should know the differences

PCA vs Linear Regression – two statistical methods that proceed very similarly, yet differ in one important respect. What the two methods actually are and what this difference is, we explain in the following article.

What is a PCA?

Principal Component Analysis (PCA) is a multivariate statistical method for structuring or simplifying a large data set. The main goal is the discovery of relationships in a 2- or 3-dimensional domain.
This method enjoys great popularity in almost all scientific disciplines and is mostly used when variables are highly correlated.


However, PCA is only a reliable method if the data are at least interval scaled and approximately normally distributed.
Although the variables are adjusted to avoid redundant effects, the error and residual variance of the data are not taken into account.

The following figure shows the basic principle of a PCA. High dimensional data relationships should be represented in a low dimensional way, with as little loss of information as possible.

PCA vs Linear Regression – Basic principle of a PCA

The key point of PCA is dimensionality reduction. The aim is to extract the most important features of a data set by reducing the total number of measured variables while retaining a large proportion of the total variance.
Mathematically, this reduction is done using linear combinations.

What are linear combinations?

PCA works in a purely exploratory way, searching the data for a linear pattern that best describes the data set.
These linear combinations can best be thought of as straight lines between variable values.
In the figure below, the linear combinations have been applied to a data set.

PCA vs Linear Regression – Linear combinations

How does the algorithm work?

In the principal component analysis procedure, a set of fully uncorrelated principal components is first generated.
These contain the main variation in the data and are also known as latent variables, factors, or eigenvectors.
The number of extracted components is determined by the data.

The first principal component is formed by minimizing the sum of squared deviations of all variables from it; in other words, the variance component explained across all variables is maximized during extraction.
Then, the remaining variance is gradually resolved by the second and subsequent components until the total variance of all data is explained by the principal components.

The first factor always points in the direction of the maximum variance in the data.
The second factor must be perpendicular to it and explain the next largest share of variance.
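As a hedged illustration, the following sketch runs a PCA with scikit-learn on synthetic, strongly correlated toy data (our own example, not taken from the article):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Correlated toy data: the second variable largely follows the first
x1 = rng.normal(size=200)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=200)
X = np.column_stack([x1, x2])

pca = PCA(n_components=2).fit(X)

print(pca.components_)                # the orthogonal factors (eigenvectors)
print(pca.explained_variance_ratio_)  # the first component explains most of the variance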

PCA vs Linear Regression – How do they Differ?

We have studied the PCA and how it works in great detail. But what are the differences to linear regression?

The following illustration contrasts the two methods and shows their main difference.

PCA vs Linear Regression – Minimization of the Error Squares to the Straight Line

With PCA, the error squares are minimized perpendicular to the straight line, so it is an orthogonal regression. In linear regression, the error squares are minimized in the y-direction.

Thus, linear regression is more about finding a straight line that best fits the data, depending on the internal data relationships.
Principal component analysis uses an orthogonal transformation to form the principal components, or linear combinations of the variables.

So this difference between the two techniques only becomes apparent when the data are not completely independent, but there is a correlation.

If you want to know more about machine learning methods and how they work, check out our article on the t-SNE algorithm.

t-SNE – Great Machine Learning Algorithm for Visualization of High-Dimensional Datasets

The machine learning algorithm t-Distributed Stochastic Neighborhood Embedding, also abbreviated as t-SNE, can be used to visualize high-dimensional datasets. The high-dimensional information of each data point is reduced to a low-dimensional representation, while the information about existing neighborhoods is preserved.

So this technique is another tool you can use to create meaningful groups in unordered data collections based on the unifying data properties. If you don’t know what cluster algorithms are, check out this article. Here we present 5 machine learning methods that you should know.
As shown in the following figure, the data should be represented grouped in 2-dimensional space.

Data clusters generated by t-Distributed Stochastic Neighborhood Embedding (t-SNE)

But how does the algorithm work and what are its strengths? In order to understand its function, we need to look at the origin of the technology.

What is the Stochastic Neighbor Embedding (SNE) Algorithm?

The basis of the t-Distributed Stochastic Neighborhood Embedding algorithm is originally the Stochastic Neighbor Embedding (SNE) algorithm. This converts high-dimensional Euclidean distances into similarity probabilities between individual data points.
The probability with which an object occurs next to a potential neighbor must be calculated.
The dissimilarities between two high-dimensional data points can be explained with a distance matrix, corresponding to the squared Euclidean distance.
A conditional probability is calculated for the low-dimensional correspondence.
This determines the similarity of the two data points on the low-dimensional map.

In order to achieve the closest possible correspondence between the two distributions pij and qij, a Kullback-Leibler (KL) divergence over all neighbors of each data point is computed as a cost function C. Large costs are incurred for distant data points.

Minimized cost function: sum of the Kullback-Leibler divergences between the original and the induced distribution over the neighbors of an object.
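For reference, this is the cost function from the original SNE formulation, written out in LaTeX; p_{j|i} and q_{j|i} denote the high- and low-dimensional conditional similarities:

C = \sum_i \mathrm{KL}(P_i \,\|\, Q_i) = \sum_i \sum_j p_{j|i} \log \frac{p_{j|i}}{q_{j|i}}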

A gradient descent method is used to optimize the cost function. However, this optimization method converges very slowly. In addition, a so-called crowding problem arises.

If a high-dimensional data set is linearly approximated on a small scale, it cannot be reduced to a lower dimension with a local scaling algorithm.

How does the t-Distributed Stochastic Neighborhood Embedding (t-SNE) algorithm work?

This is where the t-Distributed Stochastic Neighbor Embedding (t-SNE) algorithm comes in. On the one hand, a simplified symmetric cost function is used.

t-SNE: simplified symmetric cost function

Here, a single KL divergence is minimized over a joint probability distribution of all high- and low-dimensional data points.

On the other hand, the similarity of the low-dimensional data points is computed with a Student's t-distribution with one degree of freedom. This can be optimized quickly and is robust against the crowding problem.
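A minimal usage sketch with scikit-learn's TSNE implementation on synthetic toy data (parameter values such as the perplexity are illustrative choices, not recommendations from the article):

import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(2)
# Toy data: two well separated high-dimensional groups
X = np.vstack([rng.normal(0, 1, (100, 50)), rng.normal(5, 1, (100, 50))])

# perplexity roughly controls the effective neighborhood size
emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)
print(emb.shape)  # (200, 2) -- ready for a 2-D scatter plot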

H2O AI – That’s why it’s so great

There is a lot of Big Data software available now. One of them that you should definitely know about is the H2O AI Machine Learning solution.

With this open-source application you can implement algorithms from the fields of statistics, data mining, and machine learning. The H2O AI engine is based on the distributed file system Hadoop and is therefore more performant than other analysis tools. Your machine learning methods can thus be run as parallelized methods.

Software Stack

You can program your algorithms in R, Python, and Java, and thus in the most important mathematical programming languages. H2O provides a REST interface to Python, R, JSON, and Excel. Additionally, you can access H2O directly from Hadoop and Apache Spark, which makes integration into your data science workflow much easier. You already get approximate results while the algorithms are running. A graphical web browser UI helps you analyze the processes and perform targeted optimizations.

How Clients Interact with H2O AI

You can interact with H2O via clients using various interfaces. It is important to know that the data is usually not held in your client's memory: it is located in an H2O cluster, and you only get a pointer to the data when you make a request.

H2O Interaction Flow

H2O Frame

The basic unit of data storage accessible to you is the H2O Frame. This corresponds to a two-dimensional, resizable, and potentially heterogeneous tabular data structure with labeled axes.
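A small sketch of creating such a frame from the Python client (column names and values are invented; it assumes the h2o Python package and a locally startable cluster):

import h2o

h2o.init()  # connects to (or starts) a local H2O cluster

# An H2OFrame lives in the cluster; the Python object is only a handle to it
frame = h2o.H2OFrame(
    {"temperature": [21.5, 22.1, 19.8], "label": ["ok", "ok", "alarm"]}
)

print(frame.dim)    # number of rows and columns
print(frame.types)  # per-column data types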

H2O Cluster

Your H2O cluster consists of one or more nodes. A node corresponds to a JVM process and this process consists of three layers.

H2O Software Stack

H2O Machine Learning Components

Language Layer

The R evaluation layer sits behind the REST client front end, and in the Scala layer you can write native programs and algorithms, which you can then use with H2O machine learning.

Algorithms Layer

This layer is where your algorithms are applied. You can run statistical methods, data import and machine learning here.

Core Layer

In this layer you handle the resource management. You can manage both the memory and the CPU processing capacity.

Array vs Object – The creation of a JSON structure follows some rules you should know

Array vs Object – JSON is one of the most popular data formats. However, the creation of such an object is done according to some rules. These rules depend on the original data type. In this article we will introduce you to the conversion of some JSON data types (Array vs Object).

What is JSON anyway?

With the JavaScript Object Notation, JSON for short, you can structure data compactly and independently of programming languages. The data format is therefore particularly well suited for exchange between your applications, for general data storage (file extension “.json”), and for configuration files. The data is human-readable and encoded in a standardized text format. The data format is specified by RFC 8259 and the JSON syntax by the ECMA-404 standard. Due to its easy integration with JavaScript, you can use it well for transferring data in web applications.

You can best compare the JSON data structure to XML and YAML, only it’s simpler and more compact.

What are the basic rules?

Simple JSON Object

The JSON text structure is based on the JavaScript object syntax, so hierarchical data structures are possible. It contains only properties and no methods. The basis is formed by name-value pairs and ordered lists of values. Objects are enclosed in curly braces, and the whole structure is transmitted as a string, which is especially advantageous if you want to transfer the data over the network. If you want to access the data, you have to convert the text structure into a native JavaScript object.

Data Formats – JSON Array vs Object

Basically, you can have different data types included in JSON.

Value:

Your JSON value can take one of the following allowed types.

JSON value data types

Object:

A JSON object represents the basic form of a JSON text. It can hold any data type that is suitable for inclusion in JSON.

Creation of a JSON object

Array:

JSON Array vs Object – It is also possible to include an array. Arrays can contain objects, strings, numbers, arrays, and booleans. You include arrays as shown schematically below, enclosed in square brackets.

Creation of a JSON array

In this way, you can nest the individual data types within each other and thus easily create any number of hierarchy levels. For example, object attributes can consist of arrays, or arrays can contain multiple objects.
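A short Python sketch of such nesting, using the standard json module and invented field names:

import json

# Nested structure: an object whose attribute is an array of objects
doc = {
    "team": "analytics",
    "members": [
        {"name": "Ada", "skills": ["python", "sql"]},
        {"name": "Alan", "skills": ["scala"]},
    ],
}

text = json.dumps(doc, indent=2)       # serialize to a JSON string
parsed = json.loads(text)              # back to native Python objects
print(parsed["members"][0]["skills"])  # ['python', 'sql']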

5 Clustering Algorithms Data Scientists need to know – The key is always to understand the basic approach of any algorithm you want to use

As a data scientist, you have several basic tools at your disposal, which you can also apply in combination to a data set. Here we present some clustering algorithms that you should definitely know and use.

In times of big data, not only the sheer amount of data increases, but also the relationships between data points; more and more complex dependencies are formed. This makes it all the more difficult to recognize similar properties and to assign the data to so-called clusters in a way that can be evaluated.

You have certainly heard of these algorithms and maybe used one or the other, but do you really know what clustering algorithms are?

What are clustering algorithms?

So let’s first clarify what these algorithms are in the first place. The goal is clear: You want to identify similar properties between individual data points in a data set and group them in a meaningful way. These properties are often high-dimensional.

With the help of cluster analysis, you want to reduce this high-dimensional information to a low-dimensional dependency. So, for example, a representation in 2D space. Clustering is an unsupervised machine learning technique and in the end you classify the data points by using algorithms.

The approach to clustering differs from technique to technique. All have their advantages and disadvantages, so it makes sense to try several on one set of data, or apply them in combination. Below we will introduce you to some popular clustering methods and explain their grouping approach.

Clustering Machine Learning Algorithms – Popular clustering algorithms

Mean-Shift Clustering

The first algorithm we want to introduce you to is Mean-Shift Clustering. With this you can find dense areas of data points according to the concept of kernel density estimation (KDE). The basis of the clustering is a circular sliding window, which moves towards higher density at each iteration. Within the window, the centers of each class are determined, called centroids.

The movement is created by shifting the center to the mean of the points within the window. The density within the sliding window is thus proportional to the number of points inside it. This motion continues until there is no direction in which the window could capture more points within the kernel.

Clustering Machine Learning Algorithms – Mean-Shift Clustering Principle

Hierarchical Cluster Analysis (HCA)

With HCA, clusters are formed based on empirical similarity measures of the data points. This means that the two most similar objects are assigned one after the other until all objects are in one cluster. This results in a tree-like structure. In contrast to the K-means algorithm, which we will discuss later, similarities between the clusters play a role. These are represented by a cluster distance. With K-means, only all objects within a collection are similar to each other, while they are dissimilar to objects in other clusters.

You can create an HCA in different ways. There are two elementary procedures, the top-down and the bottom-up. If you want to know more about Hierarchical Cluster Analysis, read this article.

Clustering Machine Learning Algorithms – HCA Principle

Expectation-Maximization (EM) Clustering using Gaussian Mixture Models (GMM)

GMM assumes that the data points are Gaussian distributed, so clusters are not restricted to being circular. The clusters are described by their mean and standard deviation. Each Gaussian distribution is randomly assigned to a single cluster and found using the Expectation-Maximization (EM) optimization algorithm. The probability of belonging to a cluster is then calculated for each data point: the closer a point is to the Gaussian center, the more likely it belongs to that cluster. Based on these probabilities, a new set of parameters for the Gaussian distributions is iteratively calculated, i.e. the probabilities within a cluster are maximized.

K-Means clustering algorithms

The k-Means algorithm described by MacQueen, 1967 goes back to the methods described by Lloyd, 1957 and Forgy, 1965. You can use the algorithm besides cluster analysis also for vector quantization. Here, a data set is partitioned into k groups with equal variance.

The number of clusters must be specified in advance. Each disjoint cluster is described by the average of all contained samples. The so-called cluster centroid.


Each centroid is updated to represent the mean of its constituent instances. This is repeated until the assignment of instances to the clusters no longer changes. If you want to learn more about the K-means algorithm, check this out.
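An illustrative run with scikit-learn's KMeans on two synthetic blobs (our own example, with arbitrarily chosen parameters):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])

# k must be chosen in advance; here k = 2
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(km.cluster_centers_)  # the two centroids
print(km.labels_[:5])       # cluster assignment of the first samples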

K-Means Principle

Density-Based Spatial Clustering of Applications with Noise (DBSCAN)

DBSCAN is a density-based cluster analysis that handles noise. Starting from an arbitrary data point, neighborhood points are determined within a distance epsilon. Clustering then begins once a certain number of neighborhood data points is reached.

The current data point becomes the first point of a new cluster or is labeled as noise. In both cases, it is marked as examined. The neighboring data points are then added to the cluster. Once all neighbors have been added, a new, unexamined point is selected and processed, and a new cluster is thus formed.
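A hedged sketch with scikit-learn's DBSCAN on synthetic data; epsilon and the minimum neighbor count are arbitrary illustrative values:

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),
               rng.normal(3, 0.3, (50, 2)),
               rng.uniform(-2, 5, (10, 2))])   # a few scattered noise points

# eps is the neighborhood radius, min_samples the required neighbor count
db = DBSCAN(eps=0.5, min_samples=5).fit(X)

print(set(db.labels_))  # cluster ids; -1 marks points classified as noise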

Clustering Machine Learning Algorithms – How DBSCAN works

The field of cluster algorithms is wide and everyone’s approach is different. You should be aware that there is no one solution. You have to consider each algorithm as another tool. Not every technique works equally well in every situation.

The key is always to understand the basic approach of each algorithm you want to use. Build a small portfolio and get to know these techniques well. Once you master them, add new ones. Knowing your own tools is crucial to avoid trial and error and to gain control over your data. Remember: no result is still a result. Even if an algorithm doesn't work well on your data set, that gives you information about the data's properties.

Data Mining vs Big Data Analytics – You need the right tools and you need to know how to use them!

Data Mining vs Big Data Analytics – Both data disciplines, but what makes them different? In this article, we introduce you to both fields and explain the key differences.

Data science is an interdisciplinary scientific field that has come more and more into focus in recent decades. Many companies see it as the key to becoming an Industry 4.0 company. The hope is that valuable information can be found in the company's own data, which can be used to massively increase profitability. Terms such as big data, data mining, data analytics, and machine learning are being thrown into the ring. Many people do not realize that these terms describe different disciplines. If you want to build a house, you need the right tools, and you have to know how to use them.

Map of Data Disciplines

First of all, you should think of the individual disciplines as being layered into each other like an onion. So there is overlap between all the fields and when you talk about a discipline, you are also talking about lower layers.

Map of data disciplines

Since data analytics sits above data mining in the layer model, it is already clear that mining must be a subdiscipline of analytics. Therefore, we will first describe the broader discipline.

Data Mining vs Big Data Analytics – What is Analytics?

Big data analytics, as a subfield of data analysis, describes the use of data analysis tools without requiring special data processing. In data analytics, you use queries and data aggregation methods, but also data mining techniques and tools. The goal of this discipline is to represent various dependencies between input variables.

The following figure shows the individual overlaps in the use of the tools of the different disciplines.

Overlaps of the different data disciplines

Data Mining vs Big Data Analytics – What is Data Mining?

Data mining is a subset of data analytics. At its core, it is about identifying and discovering correlations in a large data set. This field should be used especially if you know little about the available data.


But what does a typical data mining process look like and what are typical data mining tasks?

Data Mining Process

You can divide a typical data mining process into several sequential steps. In the preprocessing stage, your data is first cleaned: sources are integrated and inconsistencies removed. Then you can convert the data into the right format. After that, the actual analysis step, the data mining, takes place. Finally, your results have to be evaluated. Expert knowledge is required here to check the patterns found and whether they fulfill your own objectives.

Data Mining Process

The term data mining covers a variety of techniques and algorithms to analyze a data set. In the following we will show you some typical methods.

Data Mining Tasks

Besides identifying unusual data sets with outlier detection, you can also group your objects based on similarities using cluster analysis. In this article we have already summarized some popular clustering algorithms that you should know as a data scientist. While association analysis only identifies the relationships and dependencies in the data, regression analysis provides you with the relationships between dependent and independent variables. Through classification, you assign elements that were not previously assigned to classes to existing classes. You can also summarize the data to reduce the data set to a more compact description without significant loss of information.

Typical Data Mining Tasks

Data Mining vs Big Data Analytics – Conclusion

Although the two fields are related, they are distinct disciplines. Data mining is more about identifying key data relationships, patterns, or trends in the data, while data analytics is more about deriving a data-driven model. On this path, data mining is an important step in making the data more usable. In the end, it is not a versus; both disciplines are part of an analytics pipeline.
In this article, we go further into the differences between the various data disciplines and clarify the difference between data analysis and data science.

Apache Avro – Effective Big Data Serialization Solution for Kafka

In this article we will explain everything you need to know about Apache Avro, an open source big data serialization solution and why you should not do without it.


You can serialize data objects, i.e. put them into a sequential representation, in order to store or send them independent of the programming language. The text structure reflects your data hierarchy. Known serialization formats are for example XML and JSON. If you want to know more about both formats, read our articles on the topics. To read, you have to deserialize the text, i.e. convert it back into an object.

In times of Big Data, every computing process must be optimized. Even small computing delays can lead to long delays with a correspondingly large data throughput, and large data formats can block too many resources. The decisive factors are therefore speed and the smallest possible data formats that are stored. Avro is developed by the Apache community and is optimized for Big Data use. It offers you a fast and space-saving open source solution. If you don’t know what Apache means, look here. Here we have summarized everything you need to know about it and introduce you to some other Apache open source projects you should know about.

Apache Avro – Open Source Big Data Serialization Solution

With Apache Avro, you get not only a remote procedure call framework, but also a data serialization framework. So on the one hand you can call functions in other address spaces and on the other hand you can convert data into a more compact binary or text format. This duality gives you some advantages when you have cross-network data pipelines and is justified by its development history.

Avro was released back in 2011 as part of Apache Hadoop. Here, Avro was supposed to provide a serialization format for data persistence as well as a data transfer format for communication between Hadoop nodes. To provide functionality in a Hadoop cluster, Avro needed to be able to access other address spaces. Due to its ability to serialize large amounts of data cost-efficiently, Avro can now also be used independently of Hadoop.

You can access Avro via dedicated APIs from many common programming languages (Java, C#, C, C++, Python, and Ruby), so you can integrate it very flexibly.

In the following figure we have summarized some of the reasons that make the framework so powerful. But what really makes Avro so fast?

Features of Apache Avro

What makes Avro so fast?

The trick is that a schema is used for serialization and deserialization. The data hierarchy, i.e. the metadata, is stored separately in a file. The data types and protocols are defined via a JSON format. They are unambiguously assigned to the actual values by ID and can be reused constantly during further data processing. This schema is sent along with the data exchange via RPC calls.
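A minimal sketch of this schema-plus-records idea using the third-party fastavro package (not mentioned in the article; record and field names are invented):

from io import BytesIO
from fastavro import parse_schema, writer, reader

# The schema (the metadata) is defined once as JSON and kept separate from the records
schema = parse_schema({
    "type": "record",
    "name": "Measurement",
    "fields": [
        {"name": "sensor", "type": "string"},
        {"name": "value", "type": "double"},
    ],
})

records = [{"sensor": "s1", "value": 21.5}, {"sensor": "s2", "value": 19.8}]

buf = BytesIO()
writer(buf, schema, records)  # compact binary serialization
buf.seek(0)
print(list(reader(buf)))      # deserialization uses the embedded schema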

Creating a schema registry is especially useful when processing data streams with Apache Kafka.

Apache Avro and Apache Kafka

Here you can save a lot of performance if you store the metadata separately and call it only when you really need it. In the following figure we have shown you this process schematically.

Avro and Kafka – schematic process

When you let Avro manage your schema registration, it provides you with comprehensive, flexible, and automatic schema evolution. This means that you can add and delete fields; even renaming is allowed within certain limits. At the same time, an Avro schema is backward and forward compatible, which means that the schema versions of the reader and writer can differ. Alternative serialization solutions with schema management exist, among others Google Protocol Buffers and Apache Thrift. However, its JSON-defined schema structure makes Avro the most popular choice.
