As a process engineer, I would like to be able to integrate complex, domain-specific calculations, such as those from a 3rd-party analytics engine, so that I can use Asset Analytics, with its context, available data, and scheduling, to run them without needing to do heavy custom development.
Reopening this suggestion to collect additional feedback.
We have a status tag with various statuses, and we have to generate event frames as they occur for different status transitions: 'shutdown' to 'online', 'online' to 'trip', 'online' to 'force shutdown', and so on. Because of duplicate events, the various functions available out of the box were just not cutting it, and OSIsoft support couldn't get the event frames working either. A simple Python function that fetches data via the Web API and dedupes events did it. Some kind of extensibility will go a long way toward plugging the gap that ACE left.
Sample events: 15, 15, 15, 40, 55, 10, 5, 5, 5, 5, 5, 5, 25, 25, 2, 5, 25, 25, 25, 25, ...
We need 15, 40, 55, 10, 5, 25, 2, 5, 25; that is, look at the previous value, compare, and generate an event frame only on a change.
unique_dict = {}
items = data['2']['Content']['Items']
for idx, x in enumerate(items):
    # Keep the first value, then only values that differ from the previous one
    if idx == 0 or x['Value'] != items[idx - 1]['Value']:
        unique_dict[str(get_pst_time(x['Timestamp']))] = x['Value']
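For context, here is a minimal sketch of the pieces the snippet above assumes: a PI Web API batch request whose second sub-request returns the recorded status values (hence the data['2']['Content']['Items'] shape) and a get_pst_time helper. The server URL, tag path, and credentials are placeholders, and the authentication details will differ per installation.

import requests
from dateutil import parser, tz

PACIFIC = tz.gettz('America/Los_Angeles')

def get_pst_time(timestamp):
    # Convert the ISO-8601 timestamp string returned by the PI Web API to Pacific time
    return parser.isoparse(timestamp).astimezone(PACIFIC)

# Batch request: sub-request 1 resolves the status tag, sub-request 2 reads its recorded values
base = 'https://piwebapi.example.com/piwebapi'          # placeholder server
batch = {
    '1': {'Method': 'GET',
          'Resource': base + r'/points?path=\\PISERVER\Unit1.Status'},
    '2': {'Method': 'GET',
          'Resource': base + '/streams/{0}/recorded?startTime=*-7d&endTime=*',
          'Parameters': ['$.1.Content.WebId'],
          'ParentIds': ['1']},
}
data = requests.post(base + '/batch', json=batch, auth=('user', 'password')).json()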
I am updating older custom process-control logic from a former, now-obsolete SCADA system. I have found that PSE does not meet my needs in the case of proportional-integral-derivative (PID) control, where the output action should be taken at uneven time intervals, such as only when new values change by a certain amount (i.e. exception-based execution triggering), and/or only if there is a long delay between these exception-based trigger points. The PID functionality also requires saving at least 3 sets of intermediate data from previous runs of the function.
This was possible with ACE, but now that ACE is no longer fully supported, I have not found another way to trigger AF SDK routines directly from any PI or PSE system.
In the use case for implementing PID controls using AF SDK, a control program relies on saving intermediate values of past data at intervals that cannot be predicted in advance by the programmer, so the intervals cannot be programmed ahead of time. Recording the last three saved values of the process variable for the Integral (I) and Derivative (D) control contributions is not supported within PSE, except by using PI tags to hold values that are only useful as intermediate calculation results. These intermediate values are of no use after the control action is taken, so there is very little justification for saving them in PI tags, except to leverage the data storage provided by PI Archive tags.
And even though PSE supports data array functions, using them is cumbersome, and this has caused difficulties when programming the startup of the PID from a mode change. It has also made it difficult to determine the data quality associated with the last few values recorded at uneven time intervals, such as a manually entered lab reading or an online analysis with uneven output timing. Also, during a mode change to supervisory or remote cascade control, these programs have been hard to program on startup when only 1 or 2 of the 3 required historical process-variable values are applicable for control use, unless an internal counter is also provided. Saving an internal counter to a PI tag is likewise not a justifiable use of a PI tag, because it is only useful temporarily, except possibly for later troubleshooting.
PSE will allow this kind of intermediate data to be saved to PI tags, but saving intermediate program data is generally a wasteful use of expensive PI tags. Instead of saving intermediate program data for the past 2-3 iterations of a program to PI tags, PSE needs to allow for some program memory, as in a continuously running C# or VB.NET program that can be used with AF SDK. Unfortunately, AF SDK routines cannot be scheduled to run on a triggered basis from PSE, and even if they could, the programs would also need to be scheduled in a way that preserves the memory from the previous run.
Therefore, in this use case, it would not be sufficient to simply support functions that run once and then exit. Support would be needed to call a subroutine within an AF SDK calculation engine, similar to what PI ACE formerly provided. Single, standalone, one-time execution of external functions would not meet my needs, but triggering a function that runs continuously in the background within AF SDK, as ACE formerly did with the PI SDK, would meet the needs for this type of application.
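To illustrate why in-memory state matters here, the following is only a sketch (in Python, not an existing product capability) of the kind of logic a continuously running, triggered calculation host would need to support: a PID that keeps its last few samples between exception-based triggers instead of parking them in PI tags. The gains, history depth, and triggering mechanism are all assumptions.

from collections import deque

class ExceptionTriggeredPID:
    # Discrete PID that keeps its own history between triggered runs,
    # instead of writing intermediate values to PI tags
    def __init__(self, kp, ki, kd, history=3):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.samples = deque(maxlen=history)   # (timestamp, error) pairs from earlier triggers
        self.integral = 0.0

    def reset(self):
        # Call on a mode change to supervisory or remote cascade control
        self.samples.clear()
        self.integral = 0.0

    def update(self, timestamp, setpoint, pv):
        # timestamp is a datetime; called only when an exception-based trigger fires
        error = setpoint - pv
        self.samples.append((timestamp, error))
        if len(self.samples) < 2:
            # Not enough history yet (e.g. right after a mode change): proportional-only action
            return self.kp * error
        t_prev, e_prev = self.samples[-2]
        dt = (timestamp - t_prev).total_seconds()
        if dt <= 0:
            return self.kp * error
        self.integral += e_prev * dt            # rectangle approximation of the I contribution
        derivative = (error - e_prev) / dt
        return self.kp * error + self.ki * self.integral + self.kd * derivative

Because the controller object lives for the lifetime of the hosting process, the uneven intervals between triggers fall out of the timestamps naturally, and only the final control output ever needs to be written back to a PI tag.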
Asle Frantzen commented
In addition to being able to call 3rd party code, it would be great to be able to create user-defined functions that you could reuse.
The screenshot shows an expression I have, and I have to copy it 8-12 times in order to accommodate the number of product flows I need to calculate. This could easily have been done by creating a user-defined function with input parameters and calling it 8-12 times instead.
We want to be able to execute Python scripts from AF analyses.
I support this feature request. In 2015, I developed a PI System-MATLAB integration layer and wrote a generic analysis framework that could accommodate not only PI System but any other such asset framework that had a .NET code base.
The integration layer allowed the user to define an AF Analysis with AF Attributes as inputs and outputs. However, the actual calculation would be left out and done in MATLAB instead. I wrote a parser in MATLAB for the AF Analysis definition, which would isolate the input AF Attributes and read their data with AF SDK calls over the MATLAB .NET API.
The framework allowed any number of analyses to be implemented in MATLAB as a library of analyses, using the MATLAB OOP semantics. The AF tree would store the name of the MATLAB analysis in an AF Attribute, which could be passed into the AF Analysis, parsed by the MATLAB framework and dispatched to the delegated analysis object in MATLAB. Results would be displayed in MATLAB figures.
The framework entry point was from the MATLAB side, since there was no facility to spawn the MATLAB framework from an AF Analysis. Ideally, one would want to spawn the MATLAB code from a custom user AF Analysis. It would have saved me many lines of MATLAB framework code, although it was fun creating the bridge pattern and object factories in MATLAB OOP semantics.
A similar thing may be done with Python.
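A rough Python analogue of that dispatch idea might register analysis implementations by name and look them up from the name stored in the AF attribute. Everything below (the registry, the example function, and the input layout) is hypothetical.

# Hypothetical registry of analysis implementations, keyed by the name stored
# in an AF attribute, mirroring the MATLAB bridge/factory approach
ANALYSES = {}

def analysis(name):
    # Decorator that registers an analysis implementation under a name
    def register(func):
        ANALYSES[name] = func
        return func
    return register

@analysis('RollingAverage')
def rolling_average(inputs):
    values = inputs['FlowRate']       # values read for the input attribute (via AF SDK or PI Web API)
    return sum(values) / len(values)

def run_analysis(analysis_name, inputs):
    # Dispatch to whichever implementation the AF attribute names
    try:
        return ANALYSES[analysis_name](inputs)
    except KeyError:
        raise ValueError('No analysis registered as %r' % analysis_name)

# The AF tree stores 'RollingAverage' in an attribute; the framework reads the
# input attribute data and dispatches to the matching function
print(run_analysis('RollingAverage', {'FlowRate': [10.0, 12.5, 11.0]}))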
We are huge users of ACE, and we would love not to be anymore. Moving all of this to AF and out of MDB is a huge "pie in the sky" goal for us, but we use a great deal of custom code, making AF Analytics unable to take over at the moment.
Amit Vankudre commented
Would it also be possible to develop the capability of writing array results to a PI point? Currently, array functions are available only for reading data or carrying out operations on it, but the outputs need to be written one by one. It would be nice to have a feature where an array is pushed to a PI point in a single go (or, if it's going to be on a per-event basis, then we at least need some kind of looping functions, as in any standard programming language).
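For what it's worth, outside of Asset Analytics the PI Web API does accept a list of values in a single call to a stream's recorded endpoint, which is roughly the "push an array in one go" behaviour being asked for here. In this sketch the server URL, WebId, credentials, timestamps, and values are all placeholders.

import requests
from datetime import datetime, timedelta, timezone

# Placeholder server URL and WebId of the target PI point
url = 'https://piwebapi.example.com/piwebapi/streams/<webid>/recorded'

start = datetime.now(timezone.utc)
array_result = [101.3, 101.9, 102.4, 103.0]

# One HTTP call writes the whole array, one value per timestamp
payload = [
    {'Timestamp': (start + timedelta(minutes=i)).isoformat(), 'Value': v}
    for i, v in enumerate(array_result)
]
resp = requests.post(url, json=payload, auth=('user', 'password'))
resp.raise_for_status()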
Amit Vankudre commented
I agree with others on this forum. It's important to develop an advanced analytics engine (either integrate PI Analysis with an external engine or develop the capability within PI Analysis itself). I am working on an advanced analytics project for which the customer wants to develop many complex calculations that involve using statistical methods to arrive at an output. Doing this is quite easy in Python or MATLAB, but integrating it with PI Analysis could prove really useful. Currently, the solutions need to be developed using multiple engines, and there is no integrated platform available.
Steve Edwards commented
This is needed today to protect the intellectual property of what is inside the analytic. This would be useful for "community" partners (equipment suppliers, engineering service providers, consultants, etc.) to be able to distribute their expertise to PI System owners in a PI System analytic template. Users need to see only the inputs and outputs; the logic is proprietary. An example case would be vibration monitoring service providers triggering event frames at the plant site for site operators to qualify the incident and interact with the community partner. Community PI AF authors would greatly enhance existing PI System investments and be a welcome boost to site activity, strengthening the data that leaves the site for cloud-based sharing and research.
Rick Davin commented
Another user (Dave Johnson) has posted an interesting discussion on this topic at:
PI AF Extensible Analytics
Jørgen Foss commented
I find Dave Johnson's post excellent.
There are probably several ways to achieve “Extensible analytics”.
To give another customer’s view on this:
We use a Python Flask server hosting our Python code and a custom DR to call the functions. As Dave points out, the CDR has some drawbacks.
We can also call our Python Flask server using the REST option in Notifications, but creating an event frame for each calculation is overkill.
One possible, more generic extension for OSIsoft would be to let the analysis service send web requests with a setup much like the one in Notifications, but where the trigger is the same as for a built-in analysis. Then it would be up to the hosting server to handle the web requests. The hosted functions could essentially do everything that is needed, i.e. retrieving data from PI and writing data back to PI. If a response to an attribute is required, then OSIsoft should define the response format. (MathWorks or other calculation server providers would have to adapt to the OSIsoft format.)
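To make the request/response idea concrete, a hosted function endpoint could be as small as the Flask sketch below. The URL scheme, payload shape, and the specific_energy example are assumptions on my part, since the actual format would be OSIsoft's to define.

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/analyses/<name>', methods=['POST'])
def run_analysis(name):
    # Assumed request body: {"inputs": {"AttributeName": [values...], ...}}
    inputs = request.get_json().get('inputs', {})

    if name == 'specific_energy':
        # Example hosted function: latest power divided by latest throughput
        result = inputs['Power'][-1] / inputs['Throughput'][-1]
    else:
        return jsonify({'error': 'unknown analysis %s' % name}), 404

    # Assumed response format; OSIsoft would define the real one
    return jsonify({'value': result, 'good': True})

if __name__ == '__main__':
    app.run(port=5000)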
Dave Johnson commented
In my opinion, the lack of a robust calculation engine is the largest gap in OSIsoft's current product offering. There is a huge opportunity for OSIsoft to add significant value to what is already otherwise an excellent platform.
Please read the background information below to get up to speed on the incredible usefulness of this feature and then be sure to upvote this user voice idea!
The PI AF/MATLAB architecture (now deprecated) was elegantly designed and provided a glimpse into the possibilities of PI AF extensible analytics. While few companies expressed interest in MATLAB integration per se, many customers, if they truly understood the value proposition, would petition for a more generic offering that would enable the integration of PI AF with a server hosting a catalog of functions using ASP.NET, Python Flask, etc.
Here are the benefits PI AF extensible analytics would provide:
People can leverage PI System Explorer, the standard OSIsoft tool for creating AF templates and elements, to:
- Invoke functions that reside in external DLLs and libraries beyond the built-in PI AF analytics functions.
- Overcome the inherent limitations of PI AF analytics, which does not support programming loops and generally lacks the expressiveness to construct and maintain complex calculations.
- Use external functions residing in DLLs as first-class citizens just like the built-in AF functions. This brings full IntelliSense/autocomplete capabilities so people can create their calculations as AF analyses.
- Manage calculation code using robust software engineering tools including unit testing, source control, etc. (since the code resides in an external DLL)
This PI AF extensibility point would unlock other scenarios that could add value to companies with JSON/HTTP as the lingua franca. For example, a Python Flask server hosting various Python data science packages, R function hosting, etc.
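As a purely hypothetical flavour of that, the calculation server could publish a small catalog of the data-science functions it hosts alongside the functions themselves; the endpoint names and functions below are invented for illustration.

from flask import Flask, jsonify, request
import numpy as np

app = Flask(__name__)

# Published catalog/manifest so analysis authors can discover what the server offers
CATALOG = {
    'percentile_95': 'ninety-fifth percentile of the input values',
    'zscore_last': 'z-score of the most recent value against the input window',
}

@app.route('/catalog')
def catalog():
    return jsonify(CATALOG)

@app.route('/functions/percentile_95', methods=['POST'])
def percentile_95():
    values = np.asarray(request.get_json()['values'], dtype=float)
    return jsonify({'value': float(np.percentile(values, 95))})

@app.route('/functions/zscore_last', methods=['POST'])
def zscore_last():
    values = np.asarray(request.get_json()['values'], dtype=float)
    z = (values[-1] - values.mean()) / values.std()
    return jsonify({'value': float(z)})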
Extensible analytics would provide a decoupled architecture resulting in these benefits:
- Externally hosted ASP.NET calculation server (or other technology platform) makes it less likely that the PI Analysis service will crash and burn if something goes awry in the calculations.
- Clearer separation of responsibilities for OSIsoft to pinpoint the problem as residing with customer code rather than custom data references, PI ACE, etc.
- Easier to maintain and update calculations through a pure functions DLL and a published catalog/manifest on an external calculation server.
- Local hosting of calculations for critical operations so facilities can function as islands rather than relying on a third-party calculation engine in the cloud.
- Happy, productive customers! People can accomplish their complex calculation objectives using one tool (PI System Explorer).
Why not just use a Custom Data Reference (CDR)?
- The CDR and PETools functions run in the same memory space as the PI AF analytics engine, which increases the likelihood of crashing the AF analytics service if something goes awry with the calculations, versus the decoupled extensible analytics design.
- If nested functions are needed (i.e. calling one function inside another function), using the CDR will be challenging. It will require some advanced knowledge for people building calculations to ensure the correct function execution order. This contrasts with the extensible analytics option which enables IntelliSense and nesting of functions in AF analyses just like built-in AF functions.
- Utilizing the functions via the CDR in PI AF templates will be clunkier for the end users who are building AF structures, leading to a suboptimal UX versus extensible analytics.
- When using PI System Explorer (PSE) to build PI AF templates/elements, PSE will download the CDR/engineering functions DLL file to the user’s local machine behind the scenes, presenting an opportunity to reverse engineer the DLL code. This is a concern if the functions in the DLLs contain "secret sauce" and particularly a concern if the DLLs are .NET-based which can be trivially decompiled.
- Calculations are more costly to build and maintain since one must build and deploy CDR .NET “glue” code on the PI AF server versus updating a catalog of functions on a decoupled server.
Richard Armstrong commented
AF Analytics has been a great step forward, but personally I'd like to see the ability to have functions developed and added so that they are available to all AF users.
An addition to the AF Library for function templates was my thought.
I've talked to customers that have needed to write external AF SDK programs to solve problems that have not been overly complex (e.g. requiring loops or arrays).
If we had a function editor/builder that allowed these to be done in AF, they would be context-aware and able to be recalculated within the AF environment. Ideally this would move away from the PE syntax required in AF Analytics and be closer to C, C#, or Python.
Without going into too much detail, I agree with Samuel Clark's comments around the ease of integration. We deployed the MATLAB Production Server and were getting significant value out of the ability to write custom functions to complete the transfer of Excel-based monitoring to AF: everything from simple solutions like Excel's SUMPRODUCT to iterative solvers and ML algorithms. These can all be done individually by sending data out with the PI Integrator, but they cannot be deployed efficiently or easily by average users.
Steven Paine commented
My organization also uses ACE calculations to perform complex calculations (Utilization, Furnace Efficiency) and would welcome the introduction of a third-party analytics engine as an add-on and a means to expand the functionality and robustness of AF Analytics. Our initial thought was that AF Analytics was set to be the heir apparent to ACE; unfortunately, without the infusion of third-party analytics into AF, I don't see that happening.
Steve Edwards commented
Customers doing pattern recognition and prognostics often need multivariable, nonlinear functions to model the physical behavior they see. Extensible analytics keep projects rooted in PI AF and provide feedback to us as to what functions need to be included in our product.
Sasha Krivonosova commented
In many cases customers need to run complex calculations using Python scripts, for example:
K-means clustering is used for tank leak detection to eliminate false positives (see the sketch below);
Drilling engineers run models against (near) real-time data using Python code.
It would be beneficial to have the capability to trigger these calculations (an external library) from Asset Analytics and write results to PI tags natively within the same tool, rather than using two different data egress and ingress technologies (e.g. a combination of the Integrator and an Interface).
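Purely as an illustration of the first example (the feature set, thresholds, and numbers are made up), k-means can separate normal tank behaviour from leak-like windows before an alarm is raised:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per time window, columns =
# [level change rate, flow imbalance] computed from tank instrumentation
features = np.array([
    [-0.02, 0.1],
    [-0.01, 0.0],
    [-0.35, 2.4],   # leak-like window
    [ 0.00, 0.1],
    [-0.33, 2.1],   # leak-like window
    [-0.02, 0.2],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

# Flag the windows assigned to the cluster with the larger flow imbalance;
# confirming membership over several windows helps screen out one-off false positives
leak_cluster = int(np.argmax(kmeans.cluster_centers_[:, 1]))
suspect_windows = np.where(kmeans.labels_ == leak_cluster)[0]
print(suspect_windows)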
Samuel Clark commented
IMO, the biggest advantages of the way the MATLAB Production Server (MPS) integration was implemented were the self-serve user experience of working in PI System Explorer and being able to leverage the existing PI Analysis Service scheduler. Further, the use of function calls greatly simplifies the calculation logic, as contextual info/metadata doesn't need to be handled in the calculation. For example, the context of the calculation is the element whose analysis called the function, and the output of the calculation is the output attribute; there is no need to recursively discover or hard-code the paths of the inputs and outputs. There is no shortage of ways to extract PI data, run calculations, and write the results back to PI, but most of them do not address the above considerations.
It looked like we had plans to support other generic extensibility points in a similar fashion to the MPS integration, and I think it was a very elegant approach that would have provided a true successor to PI ACE.
Tim Carmichael commented
In response to Rick Davin, "Hi Tim Carmichael Data References alwa..."
Thank you! Well explained and I agree with your reasoning.
Tim Carmichael commented
In response to John Messinger, "We would like to be able to add custom a..."
Since I haven't had an opportunity to use MATLAB functions, how does the requested implementation differ from custom data references? And, please excuse my ignorance of the process... I am truly asking to understand.