Fun with Cybersecurity – Weekly Blog Posts on Cybersecurity

The purpose of these weekly blog posts is to demystify Cybersecurity concepts and present them in a demonstrable way. The approach is to present what and how; allowing the reader to think about why!

Sam Analytic Solutions provides Cybersecurity Solutions! We have a homegrown process that saves time and effort for customers who need help meeting compliance objectives in ICS Cybersecurity, developing an internal security program or practice, or conducting routine Cybersecurity vulnerability assessments. Our Cybersecurity services are packaged (customized and personalized) to expedite adoption and be cost effective, offering a higher ROSI (Return on Security Investment).


It Happens! Events in Cybersecurity


If there is one concept that has shaped our thinking about Cybersecurity, it would undoubtedly be “Events”. The concept of “Events” underpins our perspective of Cybersecurity. In this blog post we look at the what and how of Events and their ramifications in the context of Cybersecurity.


Events as a Concept

An Event is generally defined as “an occurrence in time and space”. Our understanding of Cyberspace is constructed using an Events based model. There are two key concepts to be understood in the Events based model – Events and States. Every object in Cyberspace has a State (a condition described as a set of variables and corresponding values). Events occur when objects’ states change and vice-versa. This is a very significant aspect of Cyberspace. No State change, no Events, and vice-versa! In other words, every Event is evidenced by a State change. As Cybersecurity professionals we are interested in those occurrences (Events) that can be significant to the behavior of Cyber Assets (objects) under our watch!
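The Event–State relationship can be sketched in a few lines of code (an illustrative model only; the object and variable names are hypothetical):

```python
# Minimal illustration: an Event is recorded whenever an object's State changes.
class CyberObject:
    def __init__(self, name, state):
        self.name = name
        self.state = dict(state)   # State: a set of variables and their values
        self.events = []           # log of state-change Events

    def set_variable(self, key, value):
        old = self.state.get(key)
        if old != value:           # no State change, no Event
            self.state[key] = value
            self.events.append((self.name, key, old, value))

fw = CyberObject("firewall-01", {"port_22": "closed"})
fw.set_variable("port_22", "open")   # State change -> Event recorded
fw.set_variable("port_22", "open")   # no change -> no Event
print(fw.events)  # [('firewall-01', 'port_22', 'closed', 'open')]
```

Note how the second call produces no event: the state did not change, so there is nothing to evidence.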


Past, Present and Future

Cybersecurity professionals use the above idea to deduce what occurred, predict what is likely to happen and, most often, observe what is happening within a given security perimeter of Cyberspace (or system). The changing States of objects (typically variables with their respective values) are continually stored (logged), and these stored datasets (logs) are then analyzed to determine the nature of the Events that occurred. The patterns that are discovered are documented and shared with the community, forming Cybersecurity Intelligence.


Sensors, Listeners and Logs

Programs that help us capture Event Information are called Listeners or Sensors. Sensors can be hardware or software. The nature of a sensor is based on the nature of the Event Information that needs to be captured. For example, motion sensors are used to detect movement, which in turn triggers a light and video camera or notifies a responsible person.

Listeners, on the other hand, are receivers of Information (Data). Information that is relayed from a sensor is collected by a Listener program and stored in a log file. Listeners, as the name suggests, continually wait for data; when data is received, they faithfully log it. Data in the log files is then parsed and analyzed to understand the nature of the events that took place.
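A simplified Listener can be sketched as follows (an illustration only: records are fed in-process here rather than arriving over a real network, and the file name is hypothetical):

```python
import datetime

# Simplified Listener: waits for data and faithfully logs it with a timestamp.
class Listener:
    def __init__(self, log_path):
        self.log_path = log_path
        open(self.log_path, "w").close()   # start with a fresh log file

    def receive(self, data):
        # Each received record is timestamped and appended to the log.
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        with open(self.log_path, "a") as log:
            log.write(f"{stamp} {data}\n")

listener = Listener("sensor.log")
listener.receive("motion detected: zone 3")
listener.receive("door opened: dock 7")

# Parsing the log back recovers the event records for analysis.
with open("sensor.log") as log:
    entries = log.readlines()
print(len(entries))  # 2
```

In practice the "receive" step would be a network socket, syslog endpoint or similar; the listen-then-log pattern is the same.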


Cybersecurity professionals rely heavily on logs, and it is important that logs be maintained for optimal periods of time. It is also critical to plan the information (datasets) that will be captured in the logs. Often, irrelevant data gets captured in the logs and does not add value to analyses and pattern finding. This kind of irrelevant data within a given context is termed noise. Data relevance is a matter of experience and professional judgement in a given environment. It is also possible to lose sight of details because too much data is being captured. Determining the optimal data for capture is termed setting “clipping levels”.
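One common form of clipping level is a threshold: an entry is raised only after an event repeats a set number of times, so one-off noise is ignored. A small sketch (the threshold and addresses are purely illustrative):

```python
from collections import Counter

# Clipping level: flag a source only once it crosses a threshold
# (e.g. 3 failed logins), treating isolated occurrences as noise.
CLIPPING_LEVEL = 3

failed_logins = ["10.0.0.9", "10.0.0.9", "10.0.0.4", "10.0.0.9", "10.0.0.4"]

counts = Counter(failed_logins)
flagged = [src for src, n in counts.items() if n >= CLIPPING_LEVEL]
print(flagged)  # ['10.0.0.9']
```

The right threshold is exactly the kind of professional judgement call the paragraph above describes.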


Risk Management as the bedrock of Cybersecurity

To secure (protect) Cyberspace (Cyber assets – Networks, Hardware and Software) we need to continually keep track of the states of Cyber assets and ensure that certain kinds of events DO NOT occur – events that have the potential to adversely impact operations and systems (threats). On the other hand, we need to ensure that certain kinds of events DO occur; Events that have a positive impact and help reach goals.

The level of impact on operations and systems, given the potential impact of a threat and the likelihood of that threat occurring, is called Risk. Risk Management provides a quantifiable approach to understanding Risk (events that are likely to have a negative impact on our objectives and information systems).
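A minimal illustration of this quantifiable approach, using simple 1–5 scales (the threats, scales and scores are purely illustrative, not from any standard):

```python
# Quantified sketch: risk score as likelihood x impact on 1-5 scales.
threats = {
    "ransomware":         {"likelihood": 3, "impact": 5},
    "insider_error":      {"likelihood": 4, "impact": 3},
    "physical_intrusion": {"likelihood": 1, "impact": 4},
}

# Score each threat, then rank from highest risk to lowest.
risk = {name: t["likelihood"] * t["impact"] for name, t in threats.items()}
ranked = sorted(risk, key=risk.get, reverse=True)
print(ranked[0])  # ransomware (score 15)
```

Real risk assessments (e.g. per NIST SP 800-30) are far richer, but the likelihood-times-impact core is the same.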


FIPS (Federal Information Processing Standards) 200 defines Risk Management as “the process of managing risks to organizational operations (including mission, functions, image, reputation), organizational assets, individuals, other organizations, and the Nation, resulting from the operation of an information system, and includes: (i) the conduct of a risk assessment; (ii) the implementation of a risk mitigation strategy; and (iii) employment of techniques and procedures for the continuous monitoring of the security state of the information system.”


Taxonomy of Events

Events can be classified in several ways. Events occur on our desktops, laptops, cellphones and other devices such as IoT (Internet of Things) and IIoT (Industrial Internet of Things) devices. Events can also occur on network devices like firewalls, routers or storage devices, and on Cloud based infrastructure. Events occur within applications and operating systems.


Infrastructure Events

Events that occur on networks, servers, storage systems and virtual machines can be categorized as Infrastructure Events. In an ICS (Industrial Control System) or IoT/IIoT environment, any event that occurs in hardware related components is an Infrastructure Event.


Platform Events

Events that occur in Operating Systems, Middleware and Runtimes can be classified as Platform Events. In an ICS scenario, events within the firmware on a PLC would be Platform Events. On a cellphone, an iOS or Android update would be a Platform Event.


Application Events

Events in Software Applications and Data related to applications fall into this category. A Web Browser crashing on an Ubuntu laptop or a Microsoft Excel file becoming corrupted are examples of Application Events. In an ICS scenario, the receipt of data by a PLC from a field device is an Application Event.


Internal and External Events

Another simple classification could be events that occur within a security perimeter and those that occur outside.


Event Information

Keeping track of events is an important task in Cybersecurity. Routine review of Cyber Asset logs and collection of event information from sources like employees, news agencies, social media and others is central to good Cybersecurity practices. To be useful, event information should, at the least, have the following attributes:

Event timestamp – when the event occurred. It is important that timestamps have a standard format across all Cybersecurity/Information collection systems within and outside the enterprise. Further, time settings on all devices should be checked regularly and clocks synchronized. Incorrect timestamps cause a lot of confusion and produce erroneous results during analyses!

Event Source – where the event occurred. This should include an identification that helps identify the device uniquely. Example: IP address.

Event Type – based on an agreed upon classification.

Cause of the Event – Who or What caused the event, the person, program or other action that resulted in this event.

Event brief description – a couple of lines describing the event – what happened.
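The attributes above can be expressed as a simple event record (the field names and values are illustrative; real log schemas such as syslog or CEF differ in detail):

```python
import datetime

# One event record with the five minimum attributes discussed above.
event = {
    "timestamp": datetime.datetime(
        2024, 5, 1, 14, 30, 0, tzinfo=datetime.timezone.utc
    ).isoformat(),                          # when the event occurred
    "source": "192.168.1.25",               # unique device identification
    "type": "authentication_failure",       # from an agreed-upon classification
    "cause": "user jdoe via ssh client",    # who or what caused the event
    "description": "Third consecutive failed SSH login within 60 seconds.",
}
print(event["timestamp"])  # 2024-05-01T14:30:00+00:00
```

Using a timezone-aware ISO 8601 timestamp addresses the standard-format and clock-synchronization concerns raised above.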


Concept of Alerts

An Alert is a warning of an adverse event or threat. When a threat is detected in Cyberspace, based on the configuration of Cyber Assets, a warning is sent to persons and systems that are registered to receive the warnings. Alerts play a key role in safeguarding Cyberspace and Cyber Assets. Alerts are mapped to events. When specific events occur, corresponding alerts are triggered. How alerts are triggered, communicated and responded to, is an important aspect of any Cybersecurity Plan or Policy.
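The mapping of events to alerts can be sketched as follows (the rule names and alert actions are hypothetical):

```python
# Alerts are mapped to events: when specific event types occur,
# the corresponding alerts are triggered for registered recipients.
ALERT_RULES = {
    "authentication_failure": "notify_soc",
    "malware_detected": "page_oncall",
}

def triggered_alerts(events):
    # Return (event type, alert action) pairs for events that have a rule.
    return [(e["type"], ALERT_RULES[e["type"]])
            for e in events if e["type"] in ALERT_RULES]

events = [
    {"type": "heartbeat"},          # routine event, no alert mapped
    {"type": "malware_detected"},   # threat event, triggers an alert
]
print(triggered_alerts(events))  # [('malware_detected', 'page_oncall')]
```

How each alert action is communicated and responded to is then a matter for the Cybersecurity Plan or Policy.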


Preparing and Responding to Events

Bad things happen! Systems fail, humans make mistakes, and sometimes the bad folks win. An event that causes a negative impact is called an Incident. Every Incident is an event, but not every event is an Incident! Incident response is a critical function and is handled by a team of professionals – the incident response team, for example a Computer/Cybersecurity Incident Response Team (CIRT). A CIRT tries to minimize the impact of the incident or, in a best-case scenario, avoid all negative impacts.


When an Incident occurs (for example, a data breach), the first step is called Triage. During triage the Incident Responders prepare an action plan – a list of things to do – and prioritize the list. Having a prioritized, agreed upon action list signals the end of the triage. Certain types of Cybersecurity incidents have to be reported to law enforcement and sometimes made public. Please take the advice of a Cybersecurity Consultant regarding the appropriate action for your nature of business/industry.



An Event based model for Cybersecurity provides several benefits. An Event is an occurrence that can be significant to the behavior of objects within a security perimeter or boundary in Cyberspace. There are two key concepts in an Event based model – Events and States. The state of an object (or Cyber Asset) is a dataset of variables and corresponding values. Recording the states of objects over time is called Logging. Logs are critical for Cybersecurity professionals to understand what happened and to derive patterns for predicting future events (threats in particular) in Cyberspace. Risk Management, the bedrock of Cybersecurity, is a quantifiable extension of the Event based model. Collecting event information, classifying events, communicating, and building Cybersecurity Intelligence for the community is a daunting task facing today’s Cybersecurity professionals.

Understanding Baseline Security Controls and ICS Cybersecurity

ICS (Industrial Control Systems) owners and operators alike want to know the state of Cybersecurity within their ICS. Further, understanding regulatory and compliance needs for ICS Cybersecurity requires an awareness of the underlying concepts. In simple terms, ICS Cybersecurity is the state of the Security Controls of the ICS. This blog explains what a baseline is and goes on to illustrate the concept of Baseline Security Controls in Cybersecurity.

Change is a fact of life, and Change Management is an important idea in any ICS. The Baseline is a very interesting concept in Change Management. In the context of ICS Cybersecurity, a Security Controls Baseline provides a simple yet powerful mechanism to implement and manage change to Security Controls without disrupting ICS operations and functions.

Understanding Change to a Single Object

Though everything is changing in relation to something, we limit our scope by first considering a single isolated object. An object Obj-1 has a state S1 at time t1. It then changes to a state S2 at time t2. This is illustrated in Fig.1.

Understanding Change to a Collection of Objects

We now look at change to a collection of objects – objects that are working together as a cohesive whole. This is illustrated in Fig.2. We consider the collection as having an initial state C-S1, which is distinct from the states of the objects (O1-S1 … O3-S1) that make up the collection. A change in the state of one or more objects changes the state of the collection as a whole! Each state of an object or collection is called a version! For every change, we increment the version number assigned to the individual objects and the collection. Henceforth, we will use the word “version” followed by a number to indicate the state of artifacts (software, documents, concepts, etc.).

Change & Configuration Management – Art or Science

It is important to understand that a collection is not objects thrown together at random; objects in the collection share specific relationships with one another; objects in the collection work together as an integrated unit! Random changes to one or more objects within the collection may render the whole collection dysfunctional. The art and science of change management hinges on making changes to individual components without breaking the collection’s operation/function or integrity. The discipline in computer science and information technology that practices this is referred to as Change and Configuration Management. A list of components in a “working collection”, along with their respective version numbers, is called a “working configuration”.

Change Management in ICS Cybersecurity

The Cybersecurity of an ICS is a “working collection” of Security Controls. For the purposes of this example we consider five security controls that comprise the Cybersecurity posture (state) of an ICS. Technically speaking, the “version” of the Cybersecurity of the ICS is distinct from the “versions” of the individual Security Controls.

Fig.3.a shows an example of 5 security controls, SeCo1 to SeCo5, which collectively make up the Cybersecurity of an ICS. The initial version of the Cybersecurity of the ICS is distinctly identified by CybSec1 – the state of the collection. Fig.3.b shows the versions of the 5 controls at a time t2, after changes were made to some of them. The changes were warranted because the collection was not working. If the version of the collection at t2 – CybSec2 – works, we draw a line over the versions as shown in Fig.4.b. This line is called a Baseline!

We can make a list of Security Controls and their respective version numbers that comprise the baseline. Any changes henceforth will be made on the baseline versions.

Configuration at Baseline (Ref. Fig.4.b)
Component    Version Number
SeCo1        V1
SeCo2        V4
SeCo3        V2
SeCo4        V3
SeCo5        V2

Build and Baselines

When we make changes to various components and bring them together to check whether the collection works, we call that a build. At times, builds may not work; in that case we discard the build and continue making changes to fix issues. If a build works, we baseline it! Further, most projects and organizations give each build/baseline a number or name. All baselines are builds, but not all builds become baselines!


We can understand the Cybersecurity posture of an ICS by examining the state of the Security Controls within the ICS. The state of the “collection of Security Controls” is distinct from the state of individual Security Controls. A “working collection” of security controls is called, the Baseline Security Controls. The list of security controls and their respective versions (states) at a particular time indicates the Cybersecurity posture of the ICS at that time. Fig.5 shows the state of security controls at two points in time – one point is in the future and baseline at that point is called, the Target Profile; the other point is in the present and is referred to as, the Current Profile.

Balakrishna Subramoney (Balu) is a Lead Analyst – Cybersecurity at Sam Analytic Solutions, in Durham, NC. Sam Analytic Solutions provides Services to make your ICS Cybersecurity Compliance Journey effortless and easy – we believe Cybersecurity concepts are easy to understand and adopt.

Cybersecurity is not about building impregnable barriers; it is about timely response!

This week’s post is about Cyber Security Services. What do Cyber Security Practitioners within an Industrial Control System (ICS) do? ‘Learning through analogy’ is used to illustrate the concepts.

Data is the life blood of Enterprises in today’s world

If doctors want to understand what is going on within the human body, they take a blood sample and study it – this is a passive way of understanding the functioning and condition of the human body. When Cyber Security practitioners want to know what is going on within an ICS, they study “data samples” collected from various key processes within the ICS.

This approach can be used to study an entire factory, a single production line or a single aspect of a complex system. The important artifact is the “data sample”.

Medical practitioners study the condition of the human body using blood samples. They look for the presence or absence of specific substances in blood samples. Cyber Security practitioners look for the presence or absence of specific information in “data samples” to understand the Cyber Security of the system under consideration.

What data samples need to be collected? What is the best way to collect them? How are the samples analyzed? What are the key insights from the analyses? Cyber security services from Sam Analytic Solutions help customers with these kinds of questions and much more. This post is NOT about business development!

Cyber Security services are akin to medical services in that organizations need them only when they are faced with issues! (Just kidding 😊) However, in the case of ICS, “prevention is better than cure”. A plant owner or operator does not want the plant shutting down due to Cyber Security issues!

Data Driven Decision Making

Blood carries nutrients and signals to various parts of the human body. “Data” is the carrier of information that moves within and outside the ICS and is also used for decision making by managers (and controllers). Simple decisions may involve a few parameters, but complex decision making involves thousands of parameters. Decisions taken invariably affect organizational and production processes, people, the economy and the environment. Good data driven decision making presupposes high quality data made available in a timely fashion in the required formats.

Cyber Security Data Collection Services (A Phlebotomist on the Plant floor!)

Collecting samples (data samples) is not a one-time activity in Cyber Security; it is a periodic activity, and the periods can be as short as a few microseconds or as long as a few months. This service is specialized and requires an understanding of the production process and the equipment, plus lots of planning and collaboration. Further, different data samples are collected for different types of analyses (similar to different blood tests). This service is typically delivered in two phases. Phase one, the planning phase – preparing a document that models the dataflow within the ICS network, descriptions of the sample dataset(s) to be collected and the collection strategy. Phase two, the execution as per plan and collection of the required datasets. (Note: Sometimes this service becomes trivial if reliable logs are available within the ICS.)

Cybersecurity Data Analyses Services (A Lab technician on the Plant floor!)

While a blood sample is sent to a lab technician for analysis, “data samples” are sent to a Cyber Security Analyst. The analyst employs various tools and techniques to draw inferences from the sample data provided. The deliverable from this service is Cyber Security feedback, typically a report, which contains details of the condition and/or functioning of the system from a Cyber Security perspective. Technically, the steps involved are parsing and analyses. Analyses include visualization and statistical data analysis.

The faster the better! Since feedback needs to be provided quickly to help control critical production processes and limit damages, Cyber Security Data Analyses services are provided close to the points of collection. These services are rendered alongside DCS (Distributed Control Systems) and SCADA (Supervisory Control And Data Acquisition) systems.

Vulnerability Assessment and Penetration Testing (Health Checkups and Fitness)

Oftentimes, when we feel weak, we are prone to illness. Weaknesses are called Vulnerabilities in Cyber Security parlance. ICS systems can have weaknesses (vulnerabilities) that can act as points of entry for attackers (much like germs getting into our bloodstream through open wounds). A health checkup helps spot vulnerabilities in our body.

Vulnerability Assessment of the ICS helps spot weaknesses. Further, launching an attack to prove that a vulnerability can be exploited is the goal of Penetration Testing. Vulnerability Assessment may not be a passive exercise like the “data sample” collection and analyses mentioned earlier. In terms of our analogy with the human body, it is akin to an injection! Penetration Testing is always an active process and could result in shutting down a production line or worse (similar to stress tests like the treadmill test!). These Cybersecurity services are rendered under controlled conditions and always require precise contractual obligations.

Cyber Security Standards (Avoid the Epidemics and Pandemics)

Public health is important to the community. Governments set standards and provide expert guidance to ensure public health. Cyber Security is an important aspect of the times we live in, and ICS Cyber Security incidents can impact entire industry sectors, much like an epidemic. To avoid Cyber Security incidents of epidemic or pandemic proportions, a number of ICS Cyber Security standards exist. In the past, there have been incidents all over the world that have impacted nuclear plants, electric grids, manufacturing facilities and other ICS systems.

Cyber Security Standards provide guidance to ensure that “enterprise data” can be protected in a uniform way and that protection mechanisms work seamlessly across enterprises. Examples of Cyber Security Standards for ICS include NIST SP 800-82r2 (National Institute of Standards and Technology Special Publication 800-82 revision 2). The ISA/IEC 62443 series is an important global standard for ICS. Following these Cyber Security standards provides assurance to stakeholders within and outside the organization, including customers, suppliers and the public. It further enhances everyone’s confidence in having a successful business relationship with the organization.

Further, regulatory standards like NERC-CIP (North American Electric Reliability Corporation – Critical Infrastructure Protection) require mandatory compliance. This ensures ICS systems that fall into the scope of NERC-CIP have an assured level of protection and can guarantee service delivery.

Compliance Testing services, Assessment services and Third Party Security Assessment are Cyber Security services scoped around Cyber Security Standards.

Cyber Security Practice (Your Neighborhood Primary Care)

Cyber Security Services can ensure that data within an ICS system is not lost, modified or mis-located, and that events occur as anticipated. These services are packaged in multiple ways and are always dependent on the needs of the ICS system under consideration and the service scope. (In the analogy, a dietitian’s services are quite distinct from those of a lab technician or phlebotomist.)

The study (collection, analyses and reporting) of Events that occur within the enterprise and cyberspace in particular, is the cornerstone of any Cyber Security practice.

Data moves in Cyberspace. The dynamic nature of data is modelled as Events. We say Events occur; they are evidenced by the flow or movement of data. No events, no data flow, and vice-versa! When data moves from location A to location B, several Events take place. Each Event is associated with data (an event-dataset). The successful occurrence of a sequence of Events is required for data to move from location A to location B; likewise, if data has moved from location A to location B, then it is reasonable to assume that a specific sequence of events has occurred! The occurrence or non-occurrence of an event can impede the movement of data. Further, data can get lost, modified or reach incorrect locations.

When events are collected and analyzed as Threats (threat events), the services are called Threat Management Services. A Threat is “any circumstance or event with the potential to adversely impact agency operations (including mission, functions, image, or reputation), agency assets, or individuals through an information system via unauthorized access, destruction, disclosure, modification of information, and/or denial of service” – NIST SP 800-53.

When the primary consideration is Risk, they are called Risk Management Services. Risk is “the level of impact on agency operations (including mission, functions, image, or reputation), agency assets, or individuals resulting from the operation of an information system, given the potential impact of a threat and the likelihood of that threat occurring” – NIST SP 800-30.

ICS Cyber Security Program (A Cyber Security Practice for ICS)

Staying healthy and fit requires constant attention and healthy routines. Good Cyber Security requires, Cyber security hygiene – best practices that owners and operators can use to protect ICS data. The ICS Cyber security Program development and deployment service provides a comprehensive solution to manage and monitor ICS Cyber security in a holistic way. This service is akin to having a family practice for your health care needs.


Cyber security, the ability to protect or defend the use of cyberspace from cyber-attacks, can be provided as a suite of services – Cyber security Services. The analogy of a medical practice providing health care can explain many Cyber security services. Data is the lifeblood of any enterprise, and cyber security practices and services ensure protection of that data. Data-sample collection and analyses is analogous to blood sample testing. Vulnerability assessment is like an annual health checkup. The blog addresses services that can be of use to ICS (Industrial Control System) owners and operators. These services include: Data Collection and Analyses, Vulnerability Assessment, Compliance Assessment, Threat and Risk Management, and developing and deploying ICS Cyber security programs.


Balakrishna Subramoney (Balu) is a Lead Analyst – Cyber Security at Sam Analytic Solutions, in Durham, NC. As a Cyber Security Practitioner he is a staunch supporter of Blue Team practices.

The demonstrations use Windows based systems; however, most of these tasks can also be performed on Linux and Mac based systems. If you would like to know “what and how” on non-Windows systems, please mention that in the comments.

Integrity in Cybersecurity – Files and Fingerprints

Integrity is a central concept in Cybersecurity. Cybersecurity is the ability to protect or defend the use of cyberspace from cyber-attacks. Integrity is defined as, guarding against improper information modification or destruction, and includes ensuring information nonrepudiation and authenticity.

The focus of this post is to help readers understand modification of information.

Files have Fingerprints

A file is the basic unit of information storage in cyberspace; protecting information at rest often implies protecting files. For the purposes of this post, files and information will be used interchangeably.
An algorithm (a process or set of rules) is used to generate a unique alphanumeric string for a given file based on its contents. This alphanumeric string is called the File Hash, and it serves as a fingerprint of that file. SHA-256 is the name of one such algorithm used to fingerprint files.

The following illustrates the use of SHA-256. It is possible to spot the smallest of changes using this concept – even if a single character within the file is modified, the file gets a new fingerprint!
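Before walking through the PowerShell activities below, the same idea can be demonstrated with Python’s standard hashlib module, independent of the operating system:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 produces a 64-character hexadecimal fingerprint of the contents.
    return hashlib.sha256(data).hexdigest()

original = b"This is line one.\nThis is line two."
modified = b"This is line one.\nThis is line two!"  # only the final '.' changed

h1, h2 = fingerprint(original), fingerprint(modified)
print(h1 == h2)  # False: a one-character change yields a completely new hash
```

Note that both inputs are the same length; the fingerprint changes even when the file size does not, which is exactly what Activity 2 demonstrates.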

Activity 1 – Fingerprinting a file using the Get-Filehash cmdlet in Windows PowerShell

Step 1
Open an instance of Windows PowerShell and navigate to the folder which has the file, as shown. (in my case it is the illustration1 folder)

Step 2
Copy/create the file whose hash you want to determine. Alternatively, you can navigate to the folder that contains the file!
I am creating a file called temp1.txt in the illustration1 folder to demonstrate the concept. You can do the same using the Notepad program. (If you type the same text you will obtain the same hash, provided the contents are identical!)

Step 3
Use the Get-Filehash cmdlet to get the SHA-256 hash of the file.

Activity 2 – Fingerprinting the File After Changing the Contents

Step 1
Change the last period (.) on the second line into an exclamation mark (!), save the file.

Step 2
Note the size of the file – it is unchanged. Find the file hash.

Activity 3: Compare the file hashes before and after the modification.

Step 1

It is advisable to copy the strings into a Notepad file and compare them by pasting them one below the other. To copy the strings from the PowerShell window into Notepad, select the string with the mouse, when the full string is highlighted, press CTRL+c to copy it to the clipboard; paste it into Notepad by right-clicking the mouse and selecting Paste or by using CTRL+v on the keyboard.

Step 2
A visual examination of the two file hashes (before and after the change, respectively) indicates that they are different. This establishes that different file contents produce different SHA-256 file hashes!

Activity 4: Find the SHA-256 of a file using Windows Explorer

It is often easier to get the SHA-256 using Windows Explorer. Navigate to the file and right-click on it. Select CRC SHA from the context menu, then click the SHA-256 option. (Note: the CRC SHA menu entry is added by tools such as 7-Zip; it is not built into Windows.)

Activity 5: The Contents of Two Files are Identical if and only if their File Hashes are Identical

Step 1
Copy the file temp1.txt to temp2.txt
Step 2
Find the file hashes of temp1.txt and temp2.txt using Get-FileHash. Compare the hashes of temp1.txt and temp2.txt. Since the contents are identical, the hashes are found to be the same. (Note: You can get the File Hashes of multiple files using a single call to Get-Filehash!)
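The “identical contents, identical hashes” property can also be checked with a few lines of Python (the file names follow the activity above; the text content is illustrative):

```python
import hashlib
import pathlib
import shutil

def file_hash(path):
    # SHA-256 fingerprint of a file's contents.
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

# Create temp1.txt, then copy it so both files have identical contents.
pathlib.Path("temp1.txt").write_text("This is line one.\nThis is line two.\n")
shutil.copy("temp1.txt", "temp2.txt")

print(file_hash("temp1.txt") == file_hash("temp2.txt"))  # True
```

Changing even one byte of temp2.txt after the copy would make the comparison print False.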


Every file on a computer, irrespective of location, has a unique fingerprint based on its contents. The fingerprint is an alphanumeric string generated by an algorithm from the contents of the file. The fingerprints of two files are identical if and only if the contents of those files are identical. SHA-256 is one algorithm used to fingerprint files (there are others, like MD5, SHA-1, etc.). Fingerprinting with SHA-256 is used extensively to check whether the contents of files have been modified (or are identical). Installation programs, antivirus programs, and backup and restore programs are some examples of software that use the concept of fingerprinting and SHA-256.

Stay safe by verifying file integrity when in doubt! Make a note of fingerprints of important files on your computer or reference documents in your project. You can always check, if they have been modified accidentally or intentionally.

* This blog follows from – Leveraging the Cloud for WinCC OA. Please check it out for an introduction to using the Cloud capabilities for WinCC OA.

** The tutorial in this blog is only meant as an introduction to using Websockets in WinCC OA and does not cover all function types. Please refer to documentation for additional details or reach out to SAM IT Industrial Automation team for further assistance by calling +1-919-800-0044 or e-mail us at [email protected]

Starting from WinCC OA version 3.15, the provided HTTP server supports Websockets. WebSocket is a computer communications protocol providing full-duplex communication channels over a single TCP connection. Websockets do not carry the per-request overhead of HTTP and hence require significantly less bandwidth. Moreover, once a connection is established, the server can send live updates and notifications to the client (in this case an Internet browser) without waiting for a client request. Similarly, the client can send requests to the server, which are queued until they can be served.

Let us look at some advantages of using WebSockets over the OLE DB Provider mentioned in this post:

  • It can access not only archived values in HDB, but also real-time data and alerts.
  • Data can be read as well as edited.
  • It runs using a simple CTRL manager.
  • There are no limitations on accessing data from distributed systems.
  • There are no compatibility issues between Linux and Windows, or between 32-bit and 64-bit versions.
  • The only requirement is that the client browser supports the WebSocket protocol.

Now I will describe how to get started with Websockets. To use Websocket, first create a control file and follow the steps below:

Step 1 – Using httpConnect() with the Websocket flag

Import the "CtrlHTTP" library and call the httpServer() and httpConnect() functions within the main body as follows:

#uses "CtrlHTTP"

void main()
{
  httpServer(false, 8080);
  // Activates an HTTP server listening on port 8080. false means authentication is not used.

  httpConnect("websocket", "/websocketurl", "_websocket_");
  // Here "_websocket_" is the flag. This flag defines the "/websocketurl" URL as a WebSocket.
  // The first argument "websocket" registers a function called websocket() as the web resource.
}

We will define the Websocket function next.

Step 2 – Define the function websocket()

When a web browser sends a request to the WinCC OA HTTP server, the server starts the callback in a thread and passes it a mapping. Mappings are simply associative arrays (dictionaries). Hence, define the function as follows:

void websocket(mapping map)
{
  mixed any;  // any will hold the data we read from the WebSocket request next
}

Step 3 – Add the httpReadWebSocket() function

The httpReadWebSocket() function is a waiting CTRL function that blocks until a WebSocket message arrives from the client. It returns 0 when a message (or several messages) has been sent and is ready to read. If the socket is closed on the client side, the function returns 1.

So now, let’s define the logic to read a request:

while ( httpReadWebSocket(map["idx"], any) == 0 )
{
  // map["idx"] references the internal file descriptor of the WebSocket connection;
  // the second argument, any, is overwritten with each message that is received.
  DebugTN("Received Message", any);  // read and display the message in the Log Viewer
}

*Note that the received message is stored in a variable of the mixed data type, any. This is because WebSockets support only text or binary messages, both of which can be held in an anytype/mixed variable.

Step 4 – Building Logic to respond to requests

If you are using JavaScript to send requests to the WinCC OA HTTP server, it is advisable to send the messages as JSON converted to a string; you can use JSON.stringify(JSON_Object) for the conversion. The reason for using the JSON format is that data parsing becomes very easy: as you would expect, WinCC OA ships a script "json.ctl" in its library that we can leverage to parse the received message.

As an example, let's say you define a request in the following format using JavaScript and send it to the HTTP server as shown below:

var websocket_object = new WebSocket("ws://localhost:8080/websocketurl");
// websocket_object is a JS variable of type WebSocket that communicates with the
// HTTP server; /websocketurl is the URL used to initiate a WebSocket connection on localhost.

var req_msg = {
  type: "dpGetPeriod",
  dpe: "System1:Site.Site1.Total_Energy",
  T1: "2018.02.17 10:30:05.000000000",
  T2: "2018.02.18 11:30:05.000000000"
};
websocket_object.send(JSON.stringify(req_msg));

After the message is read on the server side, it can be decoded from its JSON string and parsed as follows:

mixed any;
anytype json_data;

while ( httpReadWebSocket(map["idx"], any) == 0 )
{
  DebugTN("Received Message", any);

  json_data = json_strToVal(any);
  if (json_data["type"] == "dpGetPeriod")
  {
    custom_function(map["idx"], json_data);
  }
}

Here, you can write custom_function() to execute the actual dpGetPeriod() call and add any extra logic you like.

Step 5 – Send Response using httpWriteWebSocket()

Based on the variables mentioned above, your write function should look like the following:

int custom_function(int idx, const mapping &json_data)
{
  mapping response_data;

  // Code containing backend logic

  response_data["type"] = json_data["type"];
  response_data["dpe"] = json_data["dpe"];
  response_data["values"] = values_from_logic;
  response_data["times"] = times_from_logic;
  // The variables values_from_logic and times_from_logic are derived from the
  // actual code you have to implement using dpGetPeriod().

  return httpWriteWebSocket(idx, jsonEncode(response_data));
}

After the response is sent, the JSON-encoded response_data can be parsed on the client side in the web browser using JavaScript or any other relevant tool.

That's it! This small tutorial should get you started with WebSockets in WinCC OA. Sending and receiving messages on the client side can easily be done using JavaScript, and has only been touched on here. You can also use Python to send and receive WebSocket messages. Whichever method you use, once you are able to get the underlying data, you can connect a dashboard directly and display real-time values, or push the data to a database for analytics.
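For the Python route, a minimal client sketch might look like the following. It assumes the third-party websockets package (pip install websockets); the URL, port, and datapoint element are the illustrative values from this tutorial, not fixed names:

```python
# Client-side sketch in Python. Assumes the third-party "websockets" package;
# the URL and datapoint element mirror the illustrative values in this tutorial.
import asyncio
import json

# The same request the JavaScript example builds with JSON.stringify().
REQUEST = {
    "type": "dpGetPeriod",
    "dpe": "System1:Site.Site1.Total_Energy",
    "T1": "2018.02.17 10:30:05.000000000",
    "T2": "2018.02.18 11:30:05.000000000",
}

async def fetch_period(url="ws://localhost:8080/websocketurl"):
    import websockets  # imported lazily so the sketch parses without the package
    async with websockets.connect(url) as ws:
        await ws.send(json.dumps(REQUEST))   # send the JSON request as a string
        reply = json.loads(await ws.recv())  # decode the server's JSON response
        return reply["values"], reply["times"]

# values, times = asyncio.run(fetch_period())
```

The returned values and times could then drive a dashboard directly or feed a database insert.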

For completeness, here is the complete pseudocode:

#uses "CtrlHTTP"
#uses "json.ctl"

void main()
{
  httpServer(false, 8080);

  httpConnect("websocket", "/websocketurl", "_websocket_");
}

void websocket(mapping map)
{
  mixed any;
  anytype json_data;

  while ( httpReadWebSocket(map["idx"], any) == 0 )
  {
    DebugTN("Received Message", any);

    json_data = json_strToVal(any);
    if (json_data["type"] == "dpGetPeriod")
    {
      custom_function(map["idx"], json_data);
    }
  }
}

int custom_function(int idx, const mapping &json_data)
{
  mapping response_data;

  // Code containing backend logic

  response_data["type"] = json_data["type"];
  response_data["dpe"] = json_data["dpe"];
  response_data["values"] = values_from_logic;
  response_data["times"] = times_from_logic;
  // The variables values_from_logic and times_from_logic are derived from the
  // actual code you have to implement using dpGetPeriod().

  return httpWriteWebSocket(idx, jsonEncode(response_data));
}

Should you need help implementing any of this for your environment, feel free to reach out to the Industrial Automation experts at SAM IT Solutions; we are just a phone call away. Call +1-919-800-0044 or e-mail us at [email protected]

Control Infotech, our Industrial Automation partner, applies substation-automation domain expertise to utility grid-tie solar generation plants. Complete solutions from grid-tie engineering, protection and control panel builds, and relay programming to PV plant asset monitoring are among the solutions they offer. Their SCADA system offers user-friendly features on a non-proprietary, commercially available platform. Customers benefit from a stable and powerful monitoring and control platform that can be seamlessly expanded and deployed on a cloud platform.

Suyash Kanungo, BTech, MS
Computer Engineer
SAM Analytic Solutions

*This blog follows from – Leveraging the Cloud for WinCC OA. Please check it out for an introduction to using the Cloud capabilities for WinCC OA.

In the last blog, we explained the advantages of using platforms like the Elastic Stack, web frameworks, Python, etc. with WinCC OA. In this blog, we discuss how to actually access data from WinCC OA for use with an analytics or Big Data platform, or a database of your choice. We will focus on the data archived in the History DB (HDB) and how to pull it using the OLE DB Provider.

The OLE DB Provider ships with WinCC OA. OLE DB is a Microsoft specification for accessing data on different computers. It is based on Microsoft's COM technology and is the successor to the older and more limited ODBC technology. While ODBC uses static APIs for data access and is limited to SQL, OLE DB uses ADO (ActiveX Data Objects) to provide a quick and easy facility for programming applications.

The OLE DB Provider gives access to the underlying HDB. It uses its own SQL queries to retrieve data points; examples are provided in the Help documentation that ships with the WinCC OA software.

Let’s check out the requirements and limitations for using OLE DB with WinCC OA:

  1. WinCC OA version 2.12.1 or higher must be installed.
  2. It can access only archived values and alerts that exist in the History DB.
  3. Data can only be read, not edited.
  4. There is no direct support for Microsoft Excel 2000 or earlier versions.
  5. The Data Manager must be running when the OLE DB Provider is started. If the Data Manager is stopped, queries using OLE DB are no longer possible.
  6. For distributed systems, each WinCC OA instance must run its own OLE DB driver to provide access to external applications.
  7. Under 64-bit Windows, only single-server systems are supported, as opposed to distributed systems with multiple clients.
  8. The OLE DB Provider is a 32-bit driver, so it might not interact well with 64-bit applications.

Based on these points, if your requirements are not met, you can refer to – Accessing HDB via Websocket using C# API. If you can work with these requirements, read on.

Let’s get started on how to set up OLE DB Access. The following steps can be found in the Help documentation provided in the WinCC OA software too:

Step 1 – Add WinCCOAoledb manager to the WinCC OA Project

Go to your project directory and navigate to the config folder within it. Open the file called progs and add the following line to the end:

windows/WCCOAoledb | manual |      30 |        2 |        2 |


*Make sure there are no trailing blanks at the end of the line.

Step 2 – Register the OLE DB drivers and executable

Open a command prompt as Administrator, and navigate to the WinCC_OA_installation_directory/bin/windows directory. Here, run these 3 commands:

WCCOAoledb.exe /regserver

regsvr32 WCCOAOleDbExeps.dll

regsvr32 WCCOAoledb.dll


Step 3 – Start WCCOAoledb Manager from the console

Now, when you start or restart your project, you'll see the OLE DB Manager in the console. You can start it directly from the console.

[Screenshot: the WCCOAoledb manager running in the WinCC OA console]
We have been able to get access using MS Excel as well as a Python script with the win32com library. For instructions on how to access HDB data using MS Excel, please refer to the help documentation provided with WinCC OA. Moreover, MS Access can also reach the underlying data. An example of the output using Python is shown below:

[Screenshot: example output of an HDB query from a Python script]
To use the OLE DB Provider with Python, a 32-bit version of Python must be installed, and you need the win32com client for the communication. The win32com client is distributed as part of the pywin32 package, which you can install with pip (pip install pywin32).
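To sketch what such a script might look like: the ADO calls below use the pywin32 COM client, but the provider name "WinCCOAoledb" and the query text are illustrative assumptions only; the actual connection string and HDB SQL syntax are documented in the WinCC OA Help.

```python
# Sketch: reading HDB data from 32-bit Python through the OLE DB Provider.
# fetch_archive() runs only on Windows with pywin32 installed; the provider
# name and the query shape are illustrative assumptions -- consult the
# WinCC OA Help for the exact connection string and HDB SQL syntax.

def build_query(dpe, t1, t2):
    """Build an illustrative time-range query for one datapoint element."""
    return ("SELECT * FROM {dpe} WHERE time >= '{t1}' AND time <= '{t2}'"
            .format(dpe=dpe, t1=t1, t2=t2))

def fetch_archive(dpe, t1, t2):
    import win32com.client  # pywin32; requires 32-bit Python on Windows
    conn = win32com.client.Dispatch("ADODB.Connection")
    conn.Open("Provider=WinCCOAoledb")  # provider name: assumption
    rs = win32com.client.Dispatch("ADODB.Recordset")
    rs.Open(build_query(dpe, t1, t2), conn)
    rows = []
    while not rs.EOF:                       # walk the ADO recordset
        rows.append([f.Value for f in rs.Fields])
        rs.MoveNext()
    rs.Close()
    conn.Close()
    return rows
```

From here, the returned rows can go straight into Excel, a dashboard, or another database.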

If you have any questions, or need help implementing any of the Industrial Automation tools mentioned in this and other blog posts, please feel free to contact us at +1-919-800-0044 or e-mail us at [email protected]


Suyash Kanungo, BTech, MS
Computer Engineer
SAM IT Solutions

SIMATIC WinCC Open Architecture (WinCC OA) is a versatile SCADA system that can be used to control, monitor, and supervise plants and operations in almost any line of business. It can be used as a standalone system, or scaled to a distributed system connecting up to 2,048 standalone systems. It can also be connected to databases to archive process data from machines and production flows. Having seen numerous companies leverage the flexibility of WinCC OA to fit their custom needs, we at SAM IT Solutions have taken the initiative to help them further leverage modern cloud-platform capabilities that might help them stand out in the industry.

With all modern technologies drifting toward the cloud, we have identified a few areas within WinCC OA that might benefit from the cutting-edge technologies being used worldwide. Here are three key areas of improvement:

  1. Data Analytics – WinCC OA already includes a powerful analytics tool, SmartSCADA. However, with so much advancement made in Big Data, platforms like Elasticsearch and Kibana might help your company save valuable time and resources. Logstash can be used to gather data from all alerts. Once your data is migrated to Elasticsearch, graphs and visualizations can be created very rapidly, and you can easily see trends for key indicators from your WinCC OA setup.
  2. Reporting – WinCC OA provides a SOAP (Simple Object Access Protocol) reporting interface to facilitate creating reports from third-party tools. Some tools to integrate with are BIRT, the reporting tools from the Elastic Stack X-Pack, or simply a custom Python script built on the ReportLab, Jinja, and/or WeasyPrint libraries.
  3. Dashboards and UI – Web frameworks like Django, Flask, Ruby on Rails, Express, etc. can give your web interface the look and feel of a modern application. Using these frameworks, you can create and host your UI in the cloud. One example is a dashboard we created for a customer using WinCC OA for their solar plant; it connects directly to the WinCC OA server and displays real-time data and trends.

Now you might be wondering how to connect the WinCC OA system to the tools and frameworks mentioned above. The answer is simple: transfer the relevant data points from the underlying database to a modern DB of your choice. If you use Oracle DB with WinCC OA, your task is even simpler: some tools integrate with Oracle directly, and it is in any case easy to migrate data from Oracle to another database such as MySQL, MongoDB, or PostgreSQL. However, if you use the History DB in your WinCC OA application, you have to use some programming tricks to leverage the packaged WinCC OA interface libraries. To find out more, you can refer to our blogs below:
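As a minimal sketch of that transfer step, the snippet below pushes a batch of datapoint values into an external database. SQLite (from Python's standard library) stands in for MySQL or PostgreSQL here, and the table layout and sample values are illustrative assumptions:

```python
# Sketch: pushing WinCC OA datapoint values into an external database.
# SQLite (standard library) stands in for MySQL/PostgreSQL/MongoDB; the
# table layout and the sample values are illustrative assumptions.
import sqlite3

def push_datapoints(db_path, rows):
    """Insert (dpe, timestamp, value) rows and return the table's row count."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS datapoints (
                        dpe   TEXT,
                        ts    TEXT,
                        value REAL)""")
    conn.executemany("INSERT INTO datapoints VALUES (?, ?, ?)", rows)
    conn.commit()
    count = conn.execute("SELECT COUNT(*) FROM datapoints").fetchone()[0]
    conn.close()
    return count

# Example: two archived values for an illustrative datapoint element.
sample = [
    ("System1:Site.Site1.Total_Energy", "2018-02-17 10:30:05", 1523.4),
    ("System1:Site.Site1.Total_Energy", "2018-02-17 10:31:05", 1525.1),
]
print(push_datapoints(":memory:", sample))  # 2
```

Swapping sqlite3 for a MySQL or PostgreSQL driver changes only the connect call and the parameter style; the rest of the flow stays the same.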

The possibilities of integrating a cloud infrastructure are endless and not at all limited to the points above; maybe you just want to use the programming capabilities of the latest Python libraries on your data points. If you have any ideas you would like to share with us, or if you are curious about how we can help with your custom WinCC OA architecture and needs, please do not hesitate to give us a call at +1-919-800-0044 or e-mail us at [email protected].


Suyash Kanungo, BTech, MS
Computer Engineer
SAM IT Solutions