Would you want to do this per notification (template) - i.e. everyone who gets a notification from this notification template would get that UOM - or per contact? Would you want to do this per UOM or per attribute - i.e. one attribute would be "ft" but another attribute would be "m"?
60 votes · RESEARCHING / EVALUATING · 21 comments · PI Server » Asset Framework (AF)
In response to Jagdish Konathala, "Thanks Steve. We had issues upgrading to..."
Yes, SP2a does fix the installation issue with SQL Server AlwaysOn Availability groups.
AF 2018 SP2a has a new feature for splitting strings. Please see release note:
144299 - Split function was added to Expression and Event Frame Generation analyses to split a string into a string array.
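For readers unfamiliar with the behavior, the idea of splitting a delimited string into a string array works like this small Python sketch (Python is used purely for illustration; the function name and sample data below are invented and are not Asset Analytics expression syntax):

```python
# Illustration of splitting a delimited string into a string array,
# analogous in spirit to the Split function added in AF 2018 SP2a.
# (Python stand-in; not Asset Analytics syntax.)

def split_string(value, delimiter):
    """Return the substrings of `value` separated by `delimiter`."""
    return value.split(delimiter)

parts = split_string("Pump-01|Running|High", "|")
print(parts)  # → ['Pump-01', 'Running', 'High']
```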
In response to Chris Beaupre, "A String Length and Basic Delimiter func..."
Can you help me understand what you're trying to do with tabular alarm data? As you know, an AF data reference such as the string builder is only executed on demand by the client. In order to have a fruitful discussion, it would be great if we can get some details on your use cases.
In response to John Messinger, "Hi David, Some of the additional funct..."
What are you trying to do? Are you parsing alarms and events?
In response to Matt Inglis, "Take for example writing a digital state..."
One of the reasons we currently don't allow editing the AF System State Set is that within a single AF database, you can have PI Point Data References to multiple PI Data Archives. Imagine Attribute1 -> \\PIServer1\sinusoid and Attribute2 -> \\PIServer2\sinusoid; within AF we would then need to handle multiple Digital State Sets with potentially the same name. This causes a lot of complexity.
Having said that, I understand the issue you're experiencing and will keep it in mind for possible future enhancement.
Can you provide some examples of what changes you may have made to the System Digital State Set? Can you also provide the rationale behind changing these System Digital State Sets? What problems were you trying to solve by doing so?
What are you using to enter in the manual data?
Looks to me like there are two specific use cases here: 1) you have analyses that do not need to run more often than weekly - i.e. on an interval longer than what is permitted right now (daily); 2) you have data that is supposed to arrive at a specific time, but sometimes it doesn't, and you want to re-calculate at a later time, at which point you know for certain you have all the data you need for the calculations.
Did I get that right? Are there any more use cases?
Please describe use case in more details. Are you trying to handle late arriving data?
If you're trying to handle late arriving data, we're researching auto-recalculation for the 2017 R2 release.
The choice of scheduling configuration depends on what you're trying to achieve and your business needs. Event-triggered analyses may get expensive if the calculations are performed more often than necessary, but that does not mean it's best practice to always use periodic triggering.
For some users, event triggers are their desired default configuration, so it's a no-win situation.
In your case, if you configure the templates to have a periodic scheduling, then you would minimize the chance of someone forgetting to set the desired scheduling.
73 votes · RESEARCHING / EVALUATING · 12 comments · PI Server » Analytics & Calculations
In response to Asle Frantzen, "I have a 2018 SP2 installation, and I'm ..."
Service Status comes from PI Analysis Service. The remaining filters come from AF. To do what you described, we have to do some work combining these two sources. We just have not gotten to implementing this.
Would you expect the sorting to affect only the page you're looking at in the Management plugin, or all the pages? The reason I'm asking: imagine you have 100,000 analyses. To improve performance of the Management plugin in that case, we implemented paging, which means we get back one page at a time from the server and display it as soon as possible to improve the responsiveness of the user interface. Consequently, if you expect the sorting to affect all the pages together as one, it has a much bigger impact on performance and overall responsiveness for users with many analyses, since we potentially have to retrieve everything before sorting, as opposed to sorting only one page.
With the 2017 release, you can create your own customized search/filtering of analyses in the management plugin. Please provide feedback on whether this suits your needs.
There are difficulties allowing users to sort a very large list. The management plugin loads analyses into the grid by pages. This dramatically improves performance, as we bring back one page of data at a time from SQL Server and display it immediately. With additional sorting options, we would need to find a way to sort potentially millions of analyses and then display them, which makes paging much more difficult. In addition, what we have seen is that there are many elements with the same name, for example "pump". In a very large hierarchy, if you were to sort by name, you would end up with hundreds if not thousands of elements named "pump". I'm not sure that's all that useful.
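To make the cost difference concrete, here is a rough Python sketch of paged retrieval versus a global sort (the page size, row counts, and names below are invented for illustration and have nothing to do with the actual Management plugin implementation):

```python
# Sketch: why per-page retrieval is cheap while a global sort forces us
# to fetch everything first. All numbers here are illustrative assumptions.
PAGE_SIZE = 1000

def fetch_page(all_rows, page):
    """Server returns one page at a time -- cheap and responsive."""
    start = page * PAGE_SIZE
    return all_rows[start:start + PAGE_SIZE]

def global_sort(all_rows, key):
    """Sorting across pages requires retrieving *every* row first."""
    return sorted(all_rows, key=key)  # O(n log n) over the full set

rows = [{"name": f"pump{i % 7}", "id": i} for i in range(100_000)]
page0 = fetch_page(rows, 0)                            # touches 1,000 rows
ordered = global_sort(rows, key=lambda r: r["name"])   # touches all 100,000
print(len(page0), ordered[0]["name"])  # → 1000 pump0
```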
Your feedback is welcome.
Please try out the 2017 release as the management plug-in has been enhanced to cover many of these features.
In response to James Voll, "Another comment . . . that may be the or..."
Good questions James. I'll need to get into more details to answer your questions, but I get the feeling that you do want more details :-).
There are a few things you need to be aware of in terms of how we do data calls under the hood. In PI System Explorer, if you select the "Evaluate" button, the data call under the hood is an InterpolatedValues call based on the client time. Thus, if there is an offset between the client time and the server time, it is possible that when you select "Evaluate", you end up with values that are not the Snapshot. Let's say you have 5-second data in your PI Data Archive and your client clock is 2 seconds behind the server time; your "Evaluate" call could then return a value interpolated between the last two archived values rather than the Snapshot.
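The clock-offset effect can be sketched with plain linear interpolation (this is a Python stand-in with invented timestamps and values, not the actual PI Data Archive call):

```python
from datetime import datetime, timedelta

# Sketch: a client whose clock lags the server asks for "now", and the
# interpolated answer lands *between* the last two archived values rather
# than on the Snapshot. (All values here are illustrative assumptions.)

def interpolate(points, t):
    """Linear interpolation over (timestamp, value) pairs sorted by time."""
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            frac = (t - t0).total_seconds() / (t1 - t0).total_seconds()
            return v0 + frac * (v1 - v0)
    return points[-1][1]  # at or past the Snapshot: latest value

server_now = datetime(2019, 1, 1, 12, 0, 10)
points = [(server_now - timedelta(seconds=5), 10.0), (server_now, 20.0)]

client_now = server_now - timedelta(seconds=2)  # client clock 2 s behind
print(interpolate(points, client_now))          # 16.0, not the Snapshot 20.0
```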
Variable1='attribute1' and Variable1=TagVal('attribute1', '*') should give you the same result. Both should be in the context of the trigger time. So * in your example is the trigger time of attribute2, which is late arriving lab data.
Having said that, you should also consider the concept of * with respect to the client and PI Analysis Service. When you're in PI System Explorer and you perform TagVal('attribute1', '*') or 'attribute1', you get back the InterpolatedValue based on the client's "now". In PI System Explorer, when you select "Evaluate" you get back two values: one is the value at the last trigger time, and the other is the value at evaluation time (basically now). The value at the last trigger time is based on the schedule. We added this feature a long time ago because the last trigger time may be long past and in many cases is not the same as "now". Meanwhile, when you're running the analyses in real time, the basis for time is PI Analysis Service, so if there are time offsets among PI Analysis Service, the PI Data Archive, and PI System Explorer, you may get seemingly inexplicable results.
Lastly, be aware that we put in special handling of auto-backfilling based on the service start time compared to the last evaluation time. Gory details are in the relevant documentation.
Good discussion thread. Let me provide some context around the design concept of Asset Analytics to help with this discussion.
First off, in the PI world, * was used to designate the Snapshot. Since the Snapshot typically holds the latest value that has passed Exception, over time * came to be considered synonymous with "now" by users.
In the world of Asset Analytics, when running in real time (streaming analytics) with event-triggered scheduling, the analyses are typically "triggered" by Snapshot values coming via the PI Data Archive Update Manager. Thus, * = Snapshot = Trigger time in Asset Analytics. It is important to embrace "trigger time" in Asset Analytics because there is a default 5-second wait time before the analyses are actually executed. We call this the Execution time, but the time context used in the analyses is always the Trigger time. That means if your analyses are triggered at 12:00 because a new value came from the Update Manager at 12:00, the analyses are executed at 12:00:05, but the values used are the values at 12:00. This default 5-second delay is to allow for data that may arrive at the PI Data Archive with a slight delay. The delay is user configurable, but I would caution everyone to adjust it only if necessary to accomplish specific use cases - please contact tech support if you feel an adjustment is needed, to avoid unintended side effects. Nevertheless, the time context used is always the Trigger time.
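The trigger-time vs. execution-time distinction can be sketched like this (a Python stand-in with invented timestamps; the 5-second constant mirrors the default delay described above, everything else is illustrative):

```python
# Sketch: the analysis runs ~5 seconds after the trigger, but its inputs
# are always read at the trigger time, not at the execution time.
# (Timestamps and data here are invented for illustration.)

CALCULATION_WAIT_SECONDS = 5  # default wait before execution

def on_new_snapshot(trigger_time, read_input):
    execution_time = trigger_time + CALCULATION_WAIT_SECONDS  # when it runs
    value = read_input(trigger_time)  # time context is the trigger time
    return execution_time, value

archive = {1200: 42.0}  # value that arrived at "12:00" (epoch-style stand-in)
exec_time, value = on_new_snapshot(1200, lambda t: archive[t])
print(exec_time, value)  # → 1205 42.0
```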
Therefore, if you have lab data that arrives late, but with the desired timestamp, you can execute the analyses with event-triggered scheduling, triggering only on the lab data. For example, let's say your lab data arrives 2 days late, but with a timestamp that is 2 days old, and this is your ONLY triggering input; the analyses would then execute with the time context of the lab data (2 days old). However, there is a caveat: there is a difference between late-arriving data and out-of-order data. Late-arriving data is supported as a trigger in real time, but out-of-order data is ignored as a trigger. Out-of-order data is data that arrives with a timestamp older than the Snapshot value. In the case of out-of-order data, you would either need to recalculate or turn on auto-recalculation for the affected analyses. This is because streaming calculations are triggered by Snapshot values from the Update Manager, and out-of-order data bypasses the Snapshot, so we never see it as a trigger.
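The late-arriving vs. out-of-order distinction boils down to whether a value advances the Snapshot. A minimal Python sketch (the class and timestamps below are invented for illustration, not the actual Update Manager logic):

```python
# Sketch: only values that advance the Snapshot are seen by the trigger
# path; anything with an older timestamp bypasses it entirely.
# (Illustrative stand-in, not the actual PI Data Archive behavior.)

class SnapshotTrigger:
    def __init__(self):
        self.snapshot_time = None
        self.triggers = []

    def on_value(self, timestamp, value):
        if self.snapshot_time is None or timestamp >= self.snapshot_time:
            # Late but in-order: becomes the new Snapshot and triggers.
            self.snapshot_time = timestamp
            self.triggers.append((timestamp, value))
        # else: out of order -- bypasses the Snapshot, never a trigger

s = SnapshotTrigger()
s.on_value(100, 1.0)   # normal value
s.on_value(98, 0.5)    # out of order: ignored by the trigger path
s.on_value(105, 2.0)   # arrived late, but newer than the Snapshot: triggers
print([t for t, _ in s.triggers])  # → [100, 105]
```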
In the case of auto-recalculation or manual recalculation, be aware that the inputs are no longer Snapshot values but rather archive values, since typically the Snapshot values are no longer available. Practically, that means depending on your Compression settings, real-time (streaming) calculations may produce different results than backfilling or recalculation.
Hope this helps. (Hope it didn't confuse things further :-))
Can you please provide some details on what you're trying to do? An example would be very useful.
Have you tried using the Parsetime function?
When an analysis triggers, this should return the trigger time.
In response to Calvin La, "Hi Stephen, I think the customer wanted ..."
Ok, in that case, the request is to execute additional analyses "within" the event frame time duration in a streaming fashion. That also means these analyses would need some kind of a triggering/scheduling and I would assume these analyses would only "run" when an event frame is open?
This is a chicken/egg situation. The event frame analysis StartTrigger is executed to determine whether an event frame should be created. Before the EF is created, there is no StartTime. In short, there is no EF StartTime until the EF is created, thus it doesn't make sense to use an EF StartTime to determine whether the EF analysis should create an EF or not.
If I have misunderstood the intent of this request, please provide more details.
Are these high fluctuations real behavior of the data or are they noise from the data source?
Asset Analytics runs on a schedule. At its core, it's either triggered by new input values (event-triggered) or by time (periodic). In either case, the behavior is that calculations are triggered in real time, i.e. as time moves ahead, you either get a new input value or the wall clock changes. Thus, it's important to remember that Asset Analytics was built to be a "streaming" calculation engine and not an ad hoc calculator. While I know what a Cubic Spline does, I would like to know, in your opinion, whether a Cubic Spline function fits the mold of a streaming calculation engine. For example, would you want to calculate a Cubic Spline for every new input value? I would love to hear more about your specific use case.
You can use the array functions, i.e. recordedvalues(), map() and count(), that were introduced with the AF 2018 release to perform countif().
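Conceptually, a countif() is just "take the recorded values, filter with a condition, count the matches". A Python stand-in (the function and sample data below are illustrative, not Asset Analytics expression syntax):

```python
# Conceptual sketch of building countif() from array operations:
# recorded values -> filter by a condition -> count the matches.
# (Python stand-in; names and data are invented for illustration.)

def countif(values, predicate):
    """Count the recorded values for which the condition holds."""
    return len([v for v in values if predicate(v)])

readings = [95, 102, 99, 110, 87]
print(countif(readings, lambda v: v > 100))  # → 2
```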
PSE is modeled similarly to Windows Explorer. In Windows Explorer, you're also not able to multi-select in the left pane.
In PSE, you can do an element search to get the hierarchy onto the right pane. From there you can multi-select.
Based on your description, it is possible that you may have run into a bug that we fixed in AF 2018 SP2. Have you contacted tech support on this?
AF 2018 SP2 will have programmatic access to PI Analysis Service run time status. With a bit of development work, you'd be able to build a dashboard for your analyses based on parameters such as lag. Hopefully that would help you NOT have to look at trace logs.
The current behavior assumes that PI Analysis Service is the "owner" of the event frame that it creates, therefore we assume that other users do not write to an event frame that doesn't belong to them. Similarly, for a PI Point, we assume the interface that is writing to a PI Point to be the "owner" of that data stream. We assume that two users, e.g. two interfaces, would not write to the same PI Point (data stream). Earlier in this discussion, it's mentioned that a custom application is being used to write to event frames. Can you describe what you're writing, the criteria that would lead you to start writing, and how often this is done?
Thank you, this is very helpful.
Can you help me understand what you would do with these event frames? This is so I understand your use case.
Thanks for your idea. I understand what you're looking for in terms of a solution, but I would be leaning towards not adding a PI Point attribute. If we were to do that, it would increase the size of the database, as this would affect all PI Points, impacting overall performance. In addition, the impact on backwards compatibility for clients would be unknown and would require a good amount of testing. Having said that, as I mentioned earlier, I understand your needs (thank you for the clear explanation); we'll look at what we can do to help you and others with similar needs.
In the case of the PI Data Archive UOM, it's a free form text field, which is unlike the AF default UOM database. Can you better describe what you mean by "maintain link"? What behavior do you want to see? An example use case would be beneficial.
Asset Analytics was designed to be a streaming calculation engine, and therefore it supports very well calculations that execute on a schedule, in a streaming fashion. These suggestions on linear interpolation and polynomial calculations seem more suitable for "ad hoc" calculations, ones that run only when needed. Can you provide additional details on your specific use cases - i.e. it would help me if you can provide a high-level description of the problems you're trying to solve. Thanks.
Hi Scott, can I get a screen shot of what you're describing with #1? I'm trying to visualize what this "...kind of drop down..." you're describing.
If you use the "Preview" feature, it gives you all the values corresponding to Variables, Inputs, and Outputs. Does that give you sufficient information?
Can you help me understand how this would be different than the current "Evaluate" button that you can use to evaluate the expressions immediately?
What is the hardware configuration of your machine that's running PI Analysis Service?
It's not practical to limit the time period for backfilling because every user may create different analyses and each analysis may use inputs with different data density.
There are built-in limits on how many resources backfills use. We have relaxed some of these limits in newer versions of the software. In particular, we limit backfilling resources based on the amount of hardware you have (processing cores). With the very first release in 2014, backfilling was limited to 1 thread. Which version are you using?
In the upcoming PI Server 2018 SP2, you will have the ability to bulk cancel backfills.