== collective term for several software platforms published by Microsoft (2000) – for development and execution of application programs
Mono from Xamarin
– is a run-time execution environment that manages apps that target .NET
– Command line compilers (for Visual Basic .NET, C#, JScript .NET)
– Software Development Kit (SDK) with tools, documentation and examples
Common Language Runtime (CLR)
– Runtime environment for the execution of a .NET application
– provides the just-in-time compiler
– provides numerous other basic services (garbage collector, exception handling, a security system and interoperability with non-.NET applications)
– When a .NET application is started, Windows does not call the CLR directly, but first calls a so-called Runtime Host. This host loads the CLR and passes the application's entry point to it.
Runtime Hosts: – Shell Runtime Host – Internet Explorer Runtime Host – ASP.NET
– enables the cooperation of different languages → calling program code written in another programming language → object-oriented languages can inherit from classes written in another object-oriented language
Uniform class library
– .NET Framework Class Library (FCL)
– extensive class library that can be used from all .NET languages
– implemented as a set of DLLs (Managed Code)
– not only object-oriented, but also component-oriented
– At the center of the component concept are the so-called Assemblies (EXE, DLL)
== a composite of one or more MSIL files, where at least one of the files is a DLL or EXE, plus non-MSIL files, so-called resource files (database, graphic or sound files)
DLL assembly:
– a reusable software component that can be used by another assembly
EXE assembly:
– can be started as an independent application
– but can also provide services for others
== universal open source development platform
– replaces the old .NET Framework (mixture of new implementation and redesign/refactoring of .NET Framework 4.x)
– faster than .NET Framework
– modularly constructed
– allows you to create .NET Core apps cross-platform, for Windows, macOS and Linux, for x64, x86, ARM32 and ARM64 processors
→ different programming languages are supported (C#, F#, or Visual Basic)
→ frameworks and APIs for the cloud, for IoT, for client user interfaces and for machine learning
– modern language constructs like generics, Language Integrated Query (LINQ), and asynchronous programming
Language Integrated Query (LINQ)
Query == an expression that retrieves data from a data source
– queries are usually written in a specialized query language, so a new query language had to be learned for each type of data source or data format to be supported
→ LINQ provides a consistent model for working with data across different types of data sources and formats
– queries are always object based
LINQ query operations consist of three actions: – Retrieving the data source – Creating the query – Executing the query
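These three actions map naturally onto deferred iteration. In C# this would be a LINQ expression; as a purely illustrative, language-neutral sketch in Python, the same three steps look like this:

```python
# The three LINQ query actions, sketched in Python (example data):
numbers = [0, 1, 2, 3, 4, 5, 6]             # 1. retrieve the data source
query = (n for n in numbers if n % 2 == 0)  # 2. create the query (not yet run)
evens = list(query)                         # 3. execute the query by iterating
print(evens)  # [0, 2, 4, 6]
```

As in LINQ, the query in step 2 is only a description; no data is touched until step 3 iterates over it.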
== alternative, open source implementation of Microsoft’s .NET Framework
→ enables the development of platform-independent software based on the Common Language Infrastructure standards (ECMA) and the programming language C#
Additional functions for .NET
– Interfaces for operating system related functions under Unix
– Comprehensive Technology Coverage
– Binary Compatible
– Multi-Language (VB 8, Java, Python, Ruby, Eiffel, F#, Oxygene …)
– Changes can be made to already compiled code
– easy generation of native code
– C# Compiler
– Mono Runtime
– .NET Framework Class Library
– Mono Class Library
→ In autumn 2020 all three platforms (.NET Core, .NET Framework, Mono) will be merged (into the unified .NET 5)
Modbus Overview – In this article we introduce you to the industrial communication protocol, its function and its individual characteristics.
What is Modbus and how does it work?
Thanks to its simple usability, Modbus has become a standard in many automation areas for coupling intelligent machines in a client/server architecture, also called master/slave. Each bus participant is assigned a unique address; address zero is reserved for broadcast messages. Usually the master initiates a message and the addressed slave responds. On the shared bus, a message is physically sent from one point to all participants, but only the addressed slave answers.
Data can be exchanged either via a serial interface, bit by bit (RTU, ASCII), or via Ethernet using data frames (TCP). The bus variants are distinguished according to this data format.
Modbus Overview – RTU-Modbus
The Remote Terminal Unit (RTU) variant is best understood as a remote control system. Transmission is in binary form and is therefore very fast. To read the data, however, it must be decoded again.
The length of the transmission pause between frames depends on the transmission speed. The data field specifies which registers the slave is to read. The slave then inserts the read data here and sends it back to the master. The master then performs an error check via a cyclic redundancy check (CRC), computed over the frame bytes.
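The CRC used by Modbus RTU can be sketched in a few lines. This is the standard CRC-16/MODBUS algorithm (reflected polynomial 0xA001, initial value 0xFFFF), shown here in Python for illustration:

```python
def crc16_modbus(frame: bytes) -> int:
    """CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Standard check value of this CRC variant for the ASCII string "123456789"
print(hex(crc16_modbus(b"123456789")))  # 0x4b37
```

In a real RTU frame the 16-bit CRC is appended to the message, low byte first.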
Modbus Overview – ASCII-Modbus
Instead of a binary sequence, an ASCII code, i.e. a 7-bit character encoding, can also be transmitted. This can be read immediately, but has a lower data throughput in direct comparison to RTU.
Error checking is done by a longitudinal redundancy check (LRC). An error is usually signalled when a pause in the frame transmission exceeds one second; this period is configurable, however.
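The LRC of Modbus ASCII is even simpler than the RTU CRC: it is the two's complement of the 8-bit sum of the message bytes. A minimal Python sketch with an example frame:

```python
def lrc(frame: bytes) -> int:
    """Modbus ASCII LRC: two's complement of the 8-bit sum of the bytes."""
    return (-sum(frame)) & 0xFF

# Example frame: slave 0x01, function 0x03, start 0x0000, count 0x000A
print(hex(lrc(bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x0A]))))  # 0xf2
```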
Modbus Overview – Modbus/TCP
Data transmission can also take place via Transmission Control Protocol/Internet Protocol (TCP/IP) packets. Here, identification takes place via IP addresses.
Transmission security can be ensured by certificate-based authentication of server and client via Transport Layer Security (TLS).
Modbus Overview – Client/Server Model
What does the client/server model look like?
The following figure shows the individual steps of both bus participants.
A client sends a request to the network to initiate a transaction. The arrival of this request on the server side is called the indication. The server then processes the request, creates a response and returns it to the network. The receipt of the response on the client side is called the confirmation.
General MODBUS frame
The MODBUS protocol defines a Protocol Data Unit (PDU), the application-layer message that is independent of the underlying communication layers.
Mapping the protocol onto specific buses or networks can introduce additional fields in the Application Data Unit (ADU), the combined command/data block.
MODBUS on TCP/IP Application Data Unit
A dedicated header, the MBAP header, is used to identify the Data Unit → it contains fields for several pieces of identification information.
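A Modbus/TCP ADU is the MBAP header (transaction id, protocol id, length, unit id) followed by the PDU. A minimal Python sketch, with example values, builds a Read Holding Registers request:

```python
import struct

def build_adu(transaction_id: int, unit_id: int, pdu: bytes) -> bytes:
    """MBAP header + PDU. Length counts the unit id plus the PDU bytes."""
    length = len(pdu) + 1
    mbap = struct.pack(">HHHB", transaction_id, 0, length, unit_id)
    return mbap + pdu

# PDU: function 0x03 (Read Holding Registers), start address 0, quantity 10
pdu = struct.pack(">BHH", 0x03, 0, 10)
adu = build_adu(1, 0x11, pdu)
print(adu.hex())  # 00010000000611030000000a
```

The protocol identifier field is always 0 for Modbus; the transaction identifier lets the client match responses to outstanding requests.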
MODBUS vs OPC UA
OPC UA may become one of the most important unified data protocols. For years, umbrella organizations have been driving the project worldwide. Originally developed for the injection molding and rubber processing industries, OPC UA is gradually being extended to other industries. Thanks to its standardized tree structure, OPC UA can represent data very flexibly as hierarchical objects. This means that a lot of relationship and structure information is also transmitted.
How can OPC UA devices now be interconnected via Modbus? The first problem is hardware-based: OPC UA is usually transmitted via Ethernet, Modbus mostly via RS-485, so a first conversion is necessary. The second issue is how to represent the registers and coils of the Modbus device in OPC UA.
OPC UA Native Representation
The first option is to remap the entire Modbus data space to objects. Each register and coil is thus represented as a variable attribute and given a descriptive name. Metadata can then be added, for example the data type, a maximum value or the time of origin.
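Such a remapping can be sketched as a small data structure. All names, register numbers and metadata fields below are invented examples, not part of any real OPC UA SDK:

```python
from dataclasses import dataclass

@dataclass
class UaVariable:
    """OPC UA-style variable backed by a Modbus register (illustrative)."""
    name: str
    register: int     # source Modbus register number
    data_type: str    # metadata: data type
    max_value: float  # metadata: maximum value
    value: float = 0.0

# Hypothetical address space mapping two holding registers to named variables
address_space = {
    var.name: var
    for var in (
        UaVariable("BoilerTemperature", 40001, "Float", 120.0),
        UaVariable("PumpSpeed", 40002, "UInt16", 3000.0),
    )
}
address_space["BoilerTemperature"].value = 72.5
print(address_space["BoilerTemperature"].register)  # 40001
```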
Modbus Native Representation
Instead of building a full OPC UA address space, individual objects can be represented with Modbus registers and coils as attributes. Each register and coil is mapped by its current register or coil number, and no metadata is added. A client can then access the new data space using UA Read and UA Write requests.
Modbus Data Transport
How can Modbus use OPC UA for data transport? The actual Modbus message packet is sent over the network, embedded in the OPC UA transport. OPC UA wraps the standard message and adds a standard encoding and security layer.
However, this requires that the OPC UA server recognizes that the content of a message is not a standard read or write of OPC UA attributes, but a Modbus message. An OPC UA server must therefore be attached to the front end of the device. This can be done in different ways.
The OPC UA server function acts as a mechanism to establish a secure and reliable connection between a Modbus client and a Modbus server. A string attribute is exposed which then contains the entire Modbus message. For a Modbus RTU client device, an OPC UA client device can be used to write the send attribute to the target device; the receive attribute is read back. Nothing really changes for the Modbus devices, except that the messages are passed to OPC UA instead of being put on the line.
Another possibility is to create a data provider/manager for processing the Modbus message. A message is processed through the low-level transport and the application services look at the attribute. The application service manager would notice that the namespace index points to a dedicated application process.
Where else does Modbus make sense?
Modbus is still the best solution for certain applications. It is a relatively inexpensive way to build a reliable information flow. It is a “slow response network” which is especially advantageous for temperature recording. For complex data sets, however, there are far better solutions with EtherNet/IP, PROFINET IO and EtherCAT. OPC UA, for example, can be transferred here without major transformations and detours.
Increasingly large volumes of data can now be processed faster and faster thanks to ever more efficient hardware performance. Large information networks for monitoring and analyzing almost all business processes are becoming more and more standard. Uniform, fast and resource-saving data protocols are the key to Industry 4.0. If you want to know more about Industry 4.0, take a look at our article here.
Data warehouse systems offer a way to create data truth in a company. In such an information system, data is not only stored and sorted, but also cleansed and analyzed. If you haven’t heard of these information systems, check out our article on the subject. Here we explain the features of such a system and how to provide it with data. All systems follow the same basic structure, which we explain in this article, but can consist of different components. Accordingly, they are typified. This article is about this classification. In the following, we will introduce you to the functionalities of the most popular data warehouse systems.
Host-Based mainframe warehouse
The Host-Based mainframe warehouse resides on a large-volume database. In addition to this database, metadata is managed in a central metadata repository. Within this metadata, for example, the information for documenting data sources or data translation rules is stored.
In general, this information system runs through three phases. In the unload phase, selection and scrubbing take place: the appropriate data types and data sources are determined, and the data is subsequently error-corrected. In the following transform phase, the data is translated into a suitable form; here the rules for access and storage are also specified. In the final load phase, the preprocessed data set is moved into tables.
Host-Based LAN data warehouse
• extract information from a variety of sources and support multiple LAN based warehouses • data delivery can be handled either centrally or from the workgroup environment • size depends on the platform
Multi-Stage Data Warehouses
• staging of the data multiple times before the loading operation into the data warehouse
→finally to departmentalized data marts
Stationary Data Warehouse
• data is not changed from the sources • customer is given direct access to the data
Distributed Data Warehouses
• two types of distributed data warehouses (and their modifications): local enterprise warehouses, which are distributed throughout the enterprise, and a global warehouse
• activity appears at the local level
• the bulk of the operational processing happens locally
• each local site is autonomous
• local warehouses also include historical data and are integrated only within the local site
Virtual Data Warehouses
Created in the following steps
• Installing a set of data access, data dictionary, and process management facilities • Training end clients • Monitoring how the DW facilities will be used
• Based upon actual usage, a physical data warehouse is created to provide the high-frequency results
Need to define four kinds of data
• A data dictionary including the definitions of the various databases • A description of the relationship between the data components • A description of how the user will interface with the system • The algorithms and business rules that describe what to do and how to do it
Three Tier Data Warehouse Architecture – In this article we will introduce you to the most common data warehouse architecture.
Nowadays, business processes are increasingly supported by digital assistance systems and recorded for further analysis and optimization. This generates a lot of structured, unstructured and semi-structured data from many different sources.
What is a Data Warehouse System?
In order to create a unified view of the data for improved BI and thus enable comprehensive evaluations, all information from the diverse data sets must be centralized.
This integration is the first basic function of a data warehouse system. However, this information system also assumes the task of data separation. In this way, data that is used for operational business, i.e. that is regularly queried, can be separated from data that is only used for analyzing business processes in controlling. You can read more about the different types of data warehouses here.
Data centralization ensures that there is only one version of the truth for a company to use for decision making and forecasting.
What does the Typical Data Warehousing Architecture look like?
The complexity of this system increases exponentially with the complexity of the business. Many distinct data sources, i.e. business processes, provide cumulative and historical data. Therefore, basic approaches have been defined according to which every data warehouse system should be structured: single tier, two tier and three tier.
Two Tier vs Three Tier Data Warehouse Architecture
In the following we will work out the three tier architecture. This most commonly used structure completely decouples the application logic from the data and the user interface by moving it to a middle tier. In a two-tier architecture, the application logic resides either in the user interface on the client or in the database on the server. Without a middle tier, such a system is less scalable and less flexible, and integrating other data sources is more difficult.
Three Tier Data Warehouse Architecture
The Three Tier Data Warehouse Architecture is the design on the basis of which a data warehouse with three tiers is then built. The figure below shows this structure with common components.
However, the individual components can vary and depend on the project framework. As a rule, however, these changes do not alter the basic structure.
The lowest layer is persistence, which is usually located on a server. The data from various data sources is prepared and stored here using an ETL (extract, transform and load) process. Tools and other external resources can be used to feed in the data. This persistence can consist of a relational or a multidimensional database system.
One or more OLAP (Online Analytical Processing) servers reside in the middle data warehouse layer. This technology can be used to create complex budget plans and perform analyses cost-effectively. So in the three tier data warehouse architecture, jobs are generated in the top tier and sent to this middle tier, which accesses the data in the bottom tier and performs the analyses. The result is then sent to the top tier, and thus made available to the user, and/or forwarded to the bottom tier to store the analysis results in persistence.
What is an OLAP Server?
Basically, three OLAP server models are distinguished. In Relational OLAP (ROLAP) the operations on multidimensional data are based on standard relational operations. The Multidimensional OLAP (MOLAP) directly implements the multidimensional data operations. A mixture of relational and multidimensional processing can be handled by Hybrid OLAP (HOLAP). The choice of the server model always depends on the data composition in the lowest layer.
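The ROLAP case can be made concrete: multidimensional operations are mapped onto standard relational ones, so a "roll-up" from (region, product) granularity to region totals becomes a plain GROUP BY. A minimal sqlite3 sketch over a hypothetical sales fact table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EU", "A", 10.0), ("EU", "B", 5.0), ("US", "A", 7.0)],
)
# Roll-up: aggregate away the product dimension, keeping region totals
rows = list(con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
))
print(rows)  # [('EU', 15.0), ('US', 7.0)]
```

MOLAP would instead precompute such aggregates in a dedicated multidimensional store; HOLAP combines both.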
The top tier of the three tier data warehouse architecture is the front-end client layer. It contains query and reporting tools, analysis tools, and data mining tools, thus providing the interface to the user. Here the user can generate analyses and take a look at the data.
Data Warehousing – In today’s flood of data, it is becoming increasingly difficult to maintain a clear data management system. More and more data sources are recorded via different software systems. A unified, centralized system can facilitate analysis and ensure that only one data truth exists in an organization.
What is a Data Warehouse System?
Data warehouse systems are built by integrating data from multiple heterogeneous sources and, in addition to centralization, perform the tasks of structuring data, supporting analytical reporting and structured decision-making. The system can perform data cleansing as well as data integration and data consolidation, and does not require transaction processing or recovery.
It is thus a powerful Big Data information system that can centrally handle everything related to data processing.
What does a Data Warehouse structure look like?
The term data warehouse is used to describe various architectures and systems. However, multi-layer architectures are typical. In this article, we will introduce you to the most commonly used three-tier architecture. If you are interested in the different types, you should read this article from us on the topic. Here we present the individual types in detail.
This article is primarily about what the advantages of the system actually are and how the data communication works.
Data Warehousing Features
Data warehousing offers several features. Such an information system is subject oriented. It does not focus on current operations, as this data is kept separate. This means that frequent changes in the operational database are not reflected in the data warehouse; the focus is thus on modeling and analysis of data. The system is time-variant, which means that the collected data is identified with a certain period of time, and previous data is not deleted when new data is added.
However, some terms that often come up in connection with this system need to be clarified. When metadata is mentioned, a kind of roadmap to the data warehouse is meant. Here the warehouse objects are defined and it acts as a directory. This means that the decision support system finds the contents via the metadata.
The metadata is stored in the metadata repository, an integral directory that manages both the business metadata (data ownership information, business definitions and change policies) and the operational metadata. Operational metadata covers the currency of the data (is it active, archived or cleansed?) and data lineage, i.e. the history of the data. It also includes the data used to map the operational environment, the source databases and their contents, data extraction, data partitioning, cleansing and transformation rules, data refresh and purge rules, as well as the algorithms for summarization, dimension algorithms, and data on granularity and aggregation.
The so-called data cube represents data in multiple dimensions and the data mart contains only the data specific to a certain group.
Load Data into Warehouse
In addition to the different components and architectures, data can also be transmitted to the information system in different ways.
As shown in the figure, a basic distinction is made between two elementary processes.
What is ELT?
Extract, Load, and Transform, or ELT for short, means extracting information from the source system and loading it into the target system, where any transformation then takes place.
The following figure shows such an example system. In this case, the Hadoop framework handles the central data management, while applications and analysis tools access the untransformed data.
What is ETL?
In Extract, Transform and Load, or ETL for short, the data set is first extracted from the sources into a staging area, then transformed or reformatted, with business manipulation performed on it, and only then loaded into the target database or data warehouse.
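The three ETL steps can be sketched end to end in a few lines of Python. The source records and table name are invented examples; sqlite3 stands in for the target warehouse:

```python
import sqlite3

# Extract: raw rows from a hypothetical source system
raw_rows = ["  Alice ,42", "Bob,17", "  ,99"]

# Transform (staging area): clean field values and scrub invalid rows
staged = []
for row in raw_rows:
    name, value = row.split(",")
    name = name.strip()
    if name:  # drop rows with an empty name
        staged.append((name, int(value)))

# Load: move the preprocessed data set into the target database
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE facts (name TEXT, value INTEGER)")
warehouse.executemany("INSERT INTO facts VALUES (?, ?)", staged)
print(warehouse.execute("SELECT COUNT(*) FROM facts").fetchone()[0])  # 2
```

In ELT, by contrast, the raw rows would be loaded first and the cleaning step would run inside the target system.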