Real-time DataLogger

Available from firmware release 2019.6

Use Case

As soon as you record consumption values, need to prove energy feed-in, or have to log plant states, you need reliable data acquisition. With the PLCnext Real-Time DataLogger, you have a convenient way to cover all these use cases without any programming effort. Simply parameterize the required variables, and the DataLogger starts a task-synchronous recording. The data is available directly on the SD card of the controller, or you can retrieve it via OPC UA Historical Access.

In addition, you can decide whether every value or only changed values are recorded. This is how a simple data logger becomes a powerful event recording system ("Sequence of Events").

Concept

The DataLogger is a service component of the PLCnext Technology firmware. It transfers real-time data from the GDS to a database for recording and storage purposes. When the PLCnext Technology firmware is started and stopped, a configured DataLogger session is started and stopped automatically. The DataLogger then collects the task-synchronous values of the configured GDS ports at a given sampling rate and stores them with a timestamp (exact to 1 µs) on a RAM disk.

With the default settings, the database is stored as an SQLite-compliant file on the SD card of the PLCnext Control. You can copy the database at any time, read it out via OPC UA Historical Access, or read out entries via a C++ interface (Remote Service Calls). All essential settings are made in an XML configuration file; no further programming is necessary.

How to


Note: Tutorial videos are embedded from the Phoenix Contact Technical Support YouTube channel.  When you start playing an embedded YouTube video, you accept the YouTube Terms & Conditions. That includes digital "cookies" for marketing purposes which will remain on your device. The data gained through this will be used to provide video suggestions and advertisements based on your browsing habits, and may be sold to other parties. 

Getting used to DataLogger functions

Duration:  04m:00s   Audio Language: English   Subtitles: English   Resolution: max. 1280 x 720px (HD)

Frank Walde tells you all you need to know to get started with the DataLogger. Frank is Senior Project Manager in Product Management, PLCnext Runtime System, at Phoenix Contact Electronics in Bad Pyrmont, Germany.
If you are new to the topic, please don't skip this introduction, so the following two videos can go deeper without losing you.

Configuring the DataLogger for the "Record on time" feature

Duration:  08m:31s    Audio Language: English   Subtitles: English   Resolution: max. 1280 x 720px (HD)

In this video, we're setting up a program that logs data from a set of ports into an SQLite database, so we can access those logs from our host computer. 

Prerequisites: Download the DataLogger Service configuration file from the PLCnext repository on GitHub. We will show how to enter the settings for the data generator and deploy it on the PLCnext Control. To follow along with this instruction, you have to connect to an OPC UA server. In case you need help with that, here's how to set up an OPC UA server connection with PLCnext Engineer in general.

Real-time DataLogger with OPC UA Historical Access data

Duration: 04m:41s    Audio Language: English   Subtitles: English   Resolution: max. 1280 x 720px (HD)

Now it's getting historical! Using the embedded OPC UA server of the PLCnext Technology runtime, we will set up our data ports to historize all values. We will be using the UaExpert client from Unified Automation, which you can download here.

DataLogger Reference

Sources and data types

The DataLogger can record data from any IN or OUT ports and variables. The following data sources are available:

Global Data Space IN and OUT ports:

  • Type real-time program (C++, IEC 61131-3 and MATLAB® Simulink®) - task-synchronous mode
  • Type component
  • Global IEC 61131-3 variables

The following data types are supported:

  • Elementary data types

Recording mode

The recording mode is set in the configuration file (attribute storeChangesOnly).
There are two recording modes available:

  • Endless mode
    The DataLogger records the data continuously. All the ports and variables configured for recording are recorded without interruption (storeChangesOnly="false").
  • Save on change
    The DataLogger only records data when it changes. If a value stays the same, it is displayed in the database as NULL (storeChangesOnly="true").

Timestamp

The DataLogger provides a timestamp for each value of a port. For ports from the same task, only one timestamp is generated, because this timestamp is identical for all the values of the task. The time resolution has a precision of 100 ns.

The timestamp is displayed as a raw 64-bit integer value. From a later firmware version, the format of the timestamp inside the database can be configured: it can be displayed in ISO 8601 format or as a raw 64-bit integer value.

Regardless of the format, all timestamps are reported in the UTC timezone. The implementation and internal representation comply with the Microsoft® .NET DateTime class; see the documentation of the DateTime struct on docs.microsoft.com.

The timestamp is created in the task cycle from the system time of the controller.
It is set at the start of the task (task executing event) and corresponds exactly to the cycle time of the task, so the values of successive task cycles are always exactly one interval apart.
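Since the raw timestamp follows the .NET DateTime representation (100 ns "ticks" counted from 0001-01-01, UTC), it can be converted to a readable time on the host. The following Python sketch shows one way to do this; the example tick value is illustrative:

```python
from datetime import datetime, timedelta, timezone

# .NET DateTime epoch: 0001-01-01 00:00:00 UTC, counted in 100 ns "ticks"
DOTNET_EPOCH = datetime(1, 1, 1, tzinfo=timezone.utc)

def ticks_to_datetime(ticks: int) -> datetime:
    """Convert a raw 64-bit DataLogger timestamp (.NET ticks) to a datetime."""
    # 1 tick = 100 ns = 0.1 µs, so divide by 10 to get microseconds
    return DOTNET_EPOCH + timedelta(microseconds=ticks // 10)

print(ticks_to_datetime(637134336000000000))  # → 2020-01-01 00:00:00+00:00
```

Note that the integer division discards the sub-microsecond part of the tick value, which matches Python's datetime resolution.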

Data sink

A database can be configured as the target location for data recording
(e.g. <Datasink type="db" dst="/opt/plcnext/projects/Services/DataLogger/yourDB.db"/>).

In each cycle, the values of all ports of a task are stored in a ring buffer. Therefore, the capacity of the ring buffer determines the maximum number of task cycles that can be recorded before data must be forwarded to the data sink.
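The relationship between buffer capacity and publishing can be sketched with a bounded buffer. This is an illustrative model only, not the firmware's actual implementation:

```python
from collections import deque

# Model of the DataLogger ring buffer: bufferCapacity="10" means at most
# 10 task cycles of data can wait for publishing; if publishing is delayed
# longer than capacity * sampling interval, the oldest entries are lost.
ring = deque(maxlen=10)

# Simulate 25 task cycles (100 ms sampling) with no publish in between
for cycle in range(25):
    ring.append((cycle * 100, cycle))  # (time in ms, sampled value)

# Only the 10 newest cycles survive: cycles 15..24
print(list(ring)[0], list(ring)[-1])  # → (1500, 15) (2400, 24)
```

This is why the publishing interval must be shorter than bufferCapacity times the sampling interval, or data loss occurs.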

The data to be archived is written to an SQLite database.  For each configured DataLogger instance, a separate SQLite database is created.

Database layout

The values of the configured variables are saved in a table inside the SQLite database. The default path for the database files on your controller is /opt/plcnext. The database files are saved as *.db files. The file system of the controller is accessed via the SFTP protocol. Use suitable SFTP client software for this, e.g., WinSCP.

Copy the *.db files to your PC and use a suitable software tool to open and evaluate the *.db files (e.g. DB Browser for SQLite). 

Depending on your configuration, a table that is created by the DataLogger can consist of the following columns:

  • Timestamp:
    Timestamp for the logged variable value (see the Timestamp section above).
  • Consistent data series:
    This index shows whether there is an inconsistency in the logged data (column ConsistentDataSeries).
  • Task/Variable:
    One column for each variable that is configured for data logging. The column name consists of the task name and the variable name.
  • Task/Variable_change_count:
    In case of storeChangesOnly="true", this column serves as a change counter. There is a change counter for every configured variable.

Examples for storeChangesOnly configuration

Note: In these examples the timestamp is displayed in a readable format. In a *.db file generated by the DataLogger, the timestamp is UTC of type Arp::DateTime. It is displayed as 64 bit value in the database. The implementation and internal representation complies to the .NET DateTime class, refer to Documentation of DateTime Struct on https://docs.microsoft.com to convert the timestamp into a readable format.

Attribute storeChangesOnly="false"

In this example the logged variables are from the same task. Therefore there are values for every timestamp.

Timestamp   ConsistentDataSeries   Task10ms/VarA   Task10ms/VarB
10 ms       1                      0               0
20 ms       1                      1               0
30 ms       1                      2               2
40 ms       1                      3               2
50 ms       1                      4               4
60 ms       1                      5               4

Attribute storeChangesOnly="true"

In this example, the logged variables are from the same task, so there is a row for every timestamp. When the value of a variable has not changed in relation to the value at the preceding timestamp, it is displayed as NULL, meaning that the value has not changed.

Timestamp   ConsistentDataSeries   Task10ms/VarA   Task10ms/VarA_change_count   Task10ms/VarB   Task10ms/VarB_change_count
10 ms       1                      0               0                            0               0
20 ms       1                      1               1                            NULL            0
30 ms       1                      2               2                            2               1
40 ms       1                      3               3                            NULL            1
50 ms       1                      4               4                            4               2
60 ms       1                      5               5                            NULL            2

Attribute storeChangesOnly="false" and variables from different tasks

In this example, the logged variables are from different tasks (Task10ms and Task20ms). Different tasks usually have different timestamps, which affects the layout of the table. When the variable values of one task are added to the table, the variable values of the other task are displayed as NULL.

Timestamp   ConsistentDataSeries   Task10ms/VarA   Task20ms/VarB
10 ms       1                      0               NULL
20 ms       1                      1               NULL
21 ms       1                      NULL            1
30 ms       1                      2               NULL
40 ms       1                      3               NULL
41 ms       1                      NULL            2
50 ms       1                      4               NULL
60 ms       1                      5               NULL
61 ms       1                      NULL            3
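When evaluating a storeChangesOnly="true" recording like the tables above, the NULL cells can be forward-filled to reconstruct the full value series. A minimal Python sketch; the table name "log" and the column names are illustrative placeholders, not the names the DataLogger generates:

```python
import sqlite3

def forward_fill(rows):
    """Replace NULL (None) cells with the last value seen in each column."""
    if not rows:
        return []
    last = [None] * len(rows[0])
    filled = []
    for row in rows:
        last = [cell if cell is not None else prev
                for cell, prev in zip(row, last)]
        filled.append(tuple(last))
    return filled

# Demo with an in-memory database mimicking the example table above
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE log (ts INTEGER, "Task10ms/VarA" INTEGER, '
            '"Task10ms/VarB" INTEGER)')
con.executemany("INSERT INTO log VALUES (?, ?, ?)",
                [(10, 0, 0), (20, 1, None), (30, 2, 2),
                 (40, 3, None), (50, 4, 4), (60, 5, None)])
rows = con.execute("SELECT * FROM log ORDER BY ts").fetchall()
print(forward_fill(rows))
# → [(10, 0, 0), (20, 1, 0), (30, 2, 2), (40, 3, 2), (50, 4, 4), (60, 5, 4)]
```

The same approach works on a real *.db file copied from the controller; open it with sqlite3.connect("yourDB.db") and adapt the table and column names.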

 

Intervals

Sampling rate

The sampling rate defines the interval the DataLogger uses for recording. It can be set freely for each recording process, e.g., samplingInterval="50ms".

Publishing rate

Use the publishing interval to specify the frequency at which the collected data is forwarded from the ring buffer to the data sink. The publishing interval is specified in the DataLogger configuration file via the publishInterval attribute in the General XML element.

A configuration for a publishing interval of 1 s therefore is: <General publishInterval="1s"/>. Data publishing from the DataLogger to the data sink is not performed within the real-time context. 

Writing rate

You can use the writeInterval attribute in the configuration file to specify how many data records are reported to the data sink before these values are written to the file on the SD card (default value: 1000). When the data sink or the firmware is closed, all values that have not yet been transferred are written to the SD card.

Note: If the value of the writeInterval attribute is low, the resulting high number of write operations to the SD card might cause performance problems. It is possible that the data cannot be written to the SD card at the required speed. This may result in the loss of data.
If a faster writeInterval is required, Phoenix Contact recommends creating the database in RAM (described in examples 3 and 4 below).

Data consistency check

If recording gaps occur due to performance problems or memory overflow, this information is saved in the data sink. A loss of data is indicated in the database in the column ConsistentDataSeries. This column can contain the values 0 or 1:

  • Value 0:
    A data gap occurred during the recording of the preceding data series. The first data series always has the value 0 because there is no preceding data series to reference.
  • Value 1:
    The data was recorded without a gap relative to the preceding data series. Therefore, a data series tagged with 1 is consistent with the preceding data series.

Example from a database with an indicated data gap:

RowId   ConsistentDataSeries   VarA
1       0                      6
2       1                      7
3       1                      8
4       1                      9
5       1                      10
6       1                      11
7       1                      12
8       1                      13
9       0                      16
10      1                      17
11      1                      18

In this recording, rows 1 to 8 are consistent and without gaps caused by data loss (row 1 carries ConsistentDataSeries=0 only because it is the first data series). Between rows 8 and 9, a data gap is indicated (ConsistentDataSeries=0 in row 9). Rows 9 to 11 are consistent again.

Note: Phoenix Contact recommends evaluating the ConsistentDataSeries flag to ensure that the data is consistent.
If ConsistentDataSeries=0 appears in any row other than row 1, an inconsistency occurred during recording.
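The recommended evaluation of the flag can be automated on the host. A minimal Python sketch; the table name "log" and the column VarA are placeholders mirroring the example above:

```python
import sqlite3

def find_gaps(con, table="log"):
    """Return row ids (beyond the first row) flagged ConsistentDataSeries=0."""
    rows = con.execute(
        'SELECT rowid, ConsistentDataSeries FROM "%s" ORDER BY rowid' % table
    ).fetchall()
    # The very first data series is always flagged 0; skip it.
    return [rid for rid, flag in rows[1:] if flag == 0]

# Demo mirroring the example table above
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (ConsistentDataSeries INTEGER, VarA INTEGER)")
con.executemany("INSERT INTO log VALUES (?, ?)",
                [(0, 6), (1, 7), (1, 8), (1, 9), (1, 10), (1, 11),
                 (1, 12), (1, 13), (0, 16), (1, 17), (1, 18)])
print(find_gaps(con))  # → [9]  (a gap precedes row 9)
```

An empty result list means the recording is gap-free.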

DataLogger Configuration file

Configuration of a DataLogger instance is done via an XML configuration file. You can download an example DataLogger configuration file and folder from GitHub.com/plcnext and use it as a template.

The XML configuration file can be edited using a text editor. After editing, save the configuration file under /opt/plcnext/projects/Services/DataLogger/ on your controller. The file system of the controller is accessed via the SFTP protocol. Use a suitable SFTP client software for this, e.g., WinSCP.

Each configuration file generates a separate DataLogger session, and you can configure several DataLogger sessions running in parallel.

When the PLCnext Technology firmware is started and stopped, each configured DataLogger session is started and stopped automatically. Changes in the configuration file become active after restarting the firmware (sudo /etc/init.d/plcnext restart).

A configuration file for the DataLogger is structured as shown in the following example:

<?xml version="1.0" encoding="utf-8"?>
<DataLoggerConfigDocument
   xmlns="http://www.phoenixcontact.com/schema/dataloggerconfig"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.phoenixcontact.com/schema/dataloggerconfig.xsd">
<General name="data-logger" samplingInterval="100ms" publishInterval="500ms" bufferCapacity="10"/>
<Datasink type="db" dst="test.db" rollover="true" tsfmt="Iso8601" maxFiles="3" writeInterval="1000" maxFileSize="4000000"
storeChangesOnly="false"/>
<Variables>
  <Variable name = "Arp.Plc.ComponentName/GlobalUint32Var"/>
  <Variable name = "Arp.Plc.ComponentName/PrgName.Uint32Var"/>
  <Variable name = "Arp.Plc.ComponentName/PrgName.StuctVar.IntElement"/>
  <Variable name = "Arp.Plc.ComponentName/PrgName.IntArrayVarElement[1]"/>
</Variables>
</DataLoggerConfigDocument>

DataLogger configuration attributes

<General>
Attribute Description
name Unique name of the logging session. 
samplingInterval Interval at which the data points of a variable are created. The default value is 500 ms.
The following suffixes can be used: ms, s, m, h.
publishInterval Interval at which the collected data is transferred to the data sink. The default value is 500 ms.
The following suffixes can be used: ms, s, m, h.
bufferCapacity Capacity of the internal buffer memory, in data sets. The default value is 2.

 

<Datasink>
Attribute Description
type Configuration of the data sink:
db: Database (SQLite)
dst

File path and name under which the data sink is to be stored.

Note: The DataLogger does not create folders. If you want to store a data sink under a specific path, you have to create it first. 

rollover

true: Once the maximum file size is reached, the file is closed and renamed with an index starting from 0 (e.g. database.db.0). Then a new file is created. Every file with an index is closed and can be copied for evaluation. 
The current data is always logged in the database that is defined in the attribute dst.

false:
When rollover is set to false and the maximum file size is reached, a configurable amount of the oldest data is deleted before the recording proceeds. The amount of data to be deleted is configured with the attribute deleteRatio.

deleteRatio

Available from 

Percentage of maximum memory size to be deleted for the logging of new data.
This attribute defines the amount of data that is deleted before new data is written into the database. The old data is deleted when the value that is defined in maxFileSize is reached and the attribute rollover is set to false.
The value for deleteRatio must be provided as an unsigned integer value (16 bit). It must range from 1 to 100. The value corresponds to the percentage of old data to be deleted. 
Example:
1 = 1 % of old data is deleted.
100 = 100 % of old data is deleted.

tsfmt

Available from 

Configuration of the timestamp format

Raw:
The timestamp is stored as 64 bit integer value.

Iso8601:
The timestamp is stored in the ISO 8601 format with microsecond accuracy.

maxFiles

Maximum number of rolling files (default value: -1).
The rollover attribute must be set to true. When the maximum number of files is reached, the oldest file is deleted. The file index of the closed files continues to count up.

If the maximum number of files is set to 0 (maxFiles="0") the behaviour corresponds to a deactivated rollover (rollover="false").

If the maximum number of files is set to a negative number (e.g., -1), the file number limitation is deactivated. This results in logging activity until the memory is full. The default value is -1.

Note:
When the value for maxFiles is 1, rollover is set to true and the maximum file size is reached, a configurable amount (attribute deleteRatio) of the oldest data in the database is deleted. The deleteRatio is related to the maximum file size that is defined with the attribute maxFileSize.

maxFileSize Maximum memory size of the log file in bytes.
storeChangesOnly

true: The values are only stored if they change. If a value stays the same, it is defined as NULL in the database.

false: The values are always stored, even if they do not change.

writeInterval

Number of data records the DataLogger collects and writes to the SD card.
The default value is 1000 to keep write access operations to the SD card as low as possible. In other words, as soon as 1000 data records have been transferred to the data sink, they are grouped in a block and written to the SD card.

Note:
If the value of the writeInterval attribute is low, the resulting high number of write operations to the SD card might cause performance problems. It is possible that the data cannot be written to the SD card at the required speed. This may result in the loss of data. A loss of data is indicated in the database in the column ConsistentDataSeries.
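As a rough illustration of how deleteRatio relates to maxFileSize (assuming, as described above, that the ratio applies to the maximum file size; the deleteRatio value here is an arbitrary example):

```python
# Values based on the sample configuration above; deleteRatio is assumed
max_file_size = 4_000_000   # maxFileSize="4000000" (bytes)
delete_ratio = 25           # deleteRatio="25" → delete 25 % of old data

# Amount of oldest data removed when the file is full and rollover="false"
bytes_deleted = max_file_size * delete_ratio // 100
print(bytes_deleted)        # → 1000000
```

In other words, with these values a quarter of the database is freed each time the size limit is hit, before new records are written.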

 

<Variables>
Attribute Description
Variable name

Complete name (URI) of a variable or a port whose values are to be recorded.

Example: 
Arp.Plc.ComponentName/PrgName.Uint32Var

Configuration examples

The following examples show various application scenarios with the associated DataLogger configurations.

In examples 1 and 2, the data sink first collects the data in a RAM database. The attribute maxFileSize determines the size of this database and therefore how much RAM is used by the DataLogger session. When the value defined in maxFileSize is reached, a copy of the database is written from RAM to the SD card. The duration of the writing process depends on the system load and the size of the file. Consider this when configuring maxFileSize. Phoenix Contact recommends a file size of 1 MB. If more historical data is necessary, Phoenix Contact recommends splitting the data into several files with the attribute rollover="true".

 

  1. Logging of 10 variables in endless mode from a task with 100 ms.
    Guaranteed storage of 1000 collected data records.
    The last seconds are of interest for evaluation.
    Configuration: rollover="false", maxFileSize="1000000", dst="/opt/plcnext/log.db"
  2. Logging of 10 variables in endless mode from a task with 100 ms.
    Guaranteed storage of 1000 collected data records.
    The last minutes are of interest for evaluation.
    Configuration: rollover="true", maxFileSize="10", dst="/opt/plcnext/log.db"
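A back-of-the-envelope check of how long a 1 MB database lasts in a configuration like example 1. The bytes-per-row figure is an assumption for illustration; the actual row size depends on the data types and SQLite overhead:

```python
# Assumptions (illustrative only): one row per 100 ms task cycle,
# about 100 bytes per database row including timestamp and flags
rows_per_second = 1000 / 100       # 100 ms task → 10 rows/s
bytes_per_row = 100                # assumed average row size
max_file_size = 1_000_000          # maxFileSize="1000000" (1 MB)

seconds_until_full = max_file_size / (rows_per_second * bytes_per_row)
print(seconds_until_full)          # → 1000.0
```

With these assumptions, the 1 MB database covers roughly the last quarter hour of data; adjust maxFileSize, rollover, and maxFiles to match the history depth you actually need.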

 

Examples 3 and 4 show a configuration for high-speed data logging. Data logging in the low millisecond range requires fast write access, which cannot be realized reliably with an SD card. If data logging at intervals of 5 ms or faster is required, Phoenix Contact recommends storing the data on the RAM disk (/tmp/) for performance reasons. Note that in case of a voltage failure or reset, all data stored on the RAM disk is lost. Phoenix Contact recommends using a UPS (uninterruptible power supply) to prevent the loss of data. Note: Ensure that the required RAM is available for the database.

 

  3. High-speed data logging of variables in endless mode from a task with 5 ms.
    The last seconds are of interest for evaluation.
    Configuration: rollover="false", maxFileSize="1000000", dst="/tmp/log.db"
  4. High-speed data logging of variables in endless mode from a task with 5 ms.
    The last minutes are of interest for evaluation.
    Configuration: rollover="true", maxFileSize="10", dst="/tmp/log.db"


 • Published/reviewed: 2020-03-29 •  Rev. 24