This is existing capability. You can decide who can read/write analyses by making the appropriate changes using something like PI System Explorer.
Please see attached screenshot
Did you use the LiveLibrary online doc or from PI System Explorer Help?
Sorry, I meant PI Builder user guide, not PI System Explorer user guide.
In response to Bruce McCamant, "Chris - give us the option to just bring..."
That's existing capability.
This is the LiveLibrary URL to Formula Data Reference:
This will help you learn how to create an attribute that is configured to be a Formula Data Reference. Once you have this attribute, you can use it as an input to an Analysis.
I want to emphasize the performance aspect of this per my previous post. Please take great care to ensure you're not overwhelming the system by calculating an excessive amount of data with the Formula Data Reference attribute.
For Example 1, there is a way to do that with the existing shipping product. What you need to do is create another attribute (no PI Point, just an attribute) and configure it as a Formula Data Reference. Configure the Formula to be a*b, with a=FIT-001 and b=DIT-002. Now you can create an Analysis Expression with TagTot('Formula_Attribute', '*-1w', '*') to get the weekly total. Things to keep in mind:
1) The Formula Data Reference is computed on demand. In this case, it's computed when the TagTot is triggered. When the Formula DR is computed, it will retrieve all the values from FIT-001 and also all the values from DIT-002 for the entire time range, then do the computation client side. Therefore you should test and be aware of potential performance issues due to the Formula being computed client side and also the cost of retrieving all the values from the server (network latency).
2) As a way to test this, create this Formula DR attribute in PI System Explorer, then you can perform a "Time Series Data" or "Trend" (via right-click on the attribute) to see what it does.
3) With your Analysis Expression, you should save the output to a PI Point, but in this case it's only 1 PI Point.
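To make the Example 1 setup concrete, here is a sketch of the two pieces. The attribute and variable names are illustrative, and the exact Formula Data Reference ConfigString syntax should be checked against the LiveLibrary documentation for your AF version:

```
Formula Data Reference attribute "Formula_Attribute":
    A=FIT-001; B=DIT-002; [A*B]

Analysis Expression (save the output to a PI Point):
    TagTot('Formula_Attribute', '*-1w', '*')
```

Keep in mind that TagTot treats the source values as a rate in units per day, so depending on the engineering units of FIT-001 and DIT-002 you may need to scale the weekly total accordingly.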
In general, AF supports on-demand type calculations via Formula Data Reference and Analysis Data Reference and streaming (scheduled) calculations via Analyses. There are PROs and CONs to both so you need to decide which works best for you.
As to Example 2, Conditional Expressions, you can also try the same approach as above by creating a Formula DR attribute with a=x, b=y, c=z, then a AND b AND c. Make sure you have a corresponding AF Enumeration Set configured for the Formula DR attribute, and you would want to configure the Formula DR attribute to be "stepped". Once you have that Formula DR attribute, you can then configure an Analysis Expression with TimeEq. As a reminder, the same caveats as in Example 1 above apply here. Check to make sure the performance is adequate for your needs.
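As an illustrative sketch (the attribute name is hypothetical, and this assumes x, y, and z resolve to values usable in a boolean expression):

```
Formula Data Reference attribute "Condition_Attribute" (stepped, mapped to an enumeration set):
    A=x; B=y; C=z; [A and B and C]

Analysis Expression to total the time in the "true" state over the last day:
    TimeEq('Condition_Attribute', '*-1d', '*', "True")
```

TimeEq returns the amount of time (in seconds) within the time range during which the attribute's value equals the specified state; the state name "True" here stands in for whichever state of your enumeration set represents the condition being met.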
Let me know if you need additional information.
I understand what you're trying to do. However, please keep in mind that AF/Analytics was designed to be a streaming calculation engine. We designed it assuming that users would use it as a way to perform calculations with a schedule, e.g. every time there is a new trigger event for the calculation or based on a clock schedule. What you're describing seems to be a manageability need that may not happen very often, i.e. not in a streaming fashion. Therefore, with the information that you have provided, this request would not be high priority for us.
Does the information in the InstrumentTag attribute change with time?
Can you tell me what you would use this information for?
Would you please create a new suggestion in the PI Vision forum? This particular suggestion originally refers to PI System Explorer as the client. Since different people monitor different forums, your suggestion in the PI Vision forum would be most helpful to the relevant OSIsoft personnel.
Are you referring to PI System Explorer or some other display client?
81 votes · RESEARCHING / EVALUATING · 14 comments · PI Server » Analytics & Calculations
When creating predefined Searches within the Management plugin when using PI System Explorer, you can create a search criteria using the Element name with wildcards, e.g. Elem* I just tried this and it seems to work.
Perhaps I'm misinterpreting your question?
In response to Asle Frantzen, "I have a 2018 SP2 installation, and I'm ..."
Service Status comes from PI Analysis Service. The remaining filters come from AF. To do what you described, we have to do some work combining these two sources. We just have not gotten to implementing this.
Would you expect the sorting to affect only the page that you're looking at in the Management plugin or would you expect the sorting to affect all the pages? The reason I'm asking is imagine you have 100,000 analyses. In that case, in order to improve performance of the Management plugin, we implemented paging, which means we get back 1 page at a time from the server and display that as soon as possible to improve responsiveness of the user interface. Consequently, if you expect the sorting to affect all the pages together as one, it's much more impactful to performance and overall responsiveness for users with many analyses since we potentially have to retrieve everything before sorting as opposed to sorting only 1 page.
With the 2017 release, you can create your own customized search/filtering of analyses in the management plugin. Please provide feedback on whether this suits your needs.
There are difficulties in allowing users to sort a very large list. The management plugin loads analyses into the grid by pages. This dramatically improves performance, as we bring back 1 page of data at a time from SQL Server and display it immediately. With additional sorting options, we would need to find a way to sort potentially millions of analyses and then display them, which makes paging much more difficult. In addition, what we have seen is that there are many elements with the same name, for example "pump". In a very large hierarchy, if you were to sort by "pump", you would end up with hundreds, if not thousands, of elements named "pump". I'm not sure that's all that useful.
Your feedback is welcome.
Please try out the 2017 release as the management plug-in has been enhanced to cover many of these features.
In response to Stephen Kwan, "We're investigating why NumOfChange is b..."
Ok, so I know what's going on here. The NumOfChange function uses the user-provided StartTime and EndTime and calls RecordedValues with the Mode of "Inside" on the underlying PI Point. As a result, we get all the individual events within the StartTime and EndTime, then we sort through and figure out how many changes occurred. Consequently, we don't take into consideration the value at the start time (boundary condition). There are other complications depending on whether the underlying PI Point is configured with Step = 1 or not. In the case of a PI Point configured with Step = 0, we need to figure out whether we should interpolate at the start time boundary or extrapolate.
I've created a backlog item to correct this. At this time, I do not have an estimate as to when we would be able to work on this. It needs to be prioritized with all the other backlog items.
Please continue to use your workaround for this issue.
In response to Jürgen Schmidt, "Yes it is."
We're investigating why NumOfChange is behaving this way. Will respond back with more info.
Is your PI Point configured with "Step" on?
In response to Jürgen Schmidt, "Our Controller Monitoring by now is base..."
Is this a digital tag? If it is not a digital tag, is it configured with "Step" = 1?
Ok, so you really need to count only if there are changes.
Do you wish to do this counting on an ad hoc basis (reporting or basically an on-demand query) or on a continuous basis?
In response to Jürgen Schmidt, "I would need a possibility to count dist..."
I think there is a simpler way to accomplish what you need.
Variable1: FilterData(Setpointvalues, $val = "AUTO")
Variable2: ArrayLength(Variable1)
The first row sets your array with values that are "AUTO". The second row gives you a count. The ArrayLength function would handle the different array lengths.
Asset Analytics was designed to be a streaming calculation engine, so it supports very well calculations that execute on a schedule, in a streaming fashion. These suggestions on linear interpolation and polynomial calculations seem more suitable for "ad hoc" calculations, ones that run only when needed. Can you provide additional details on your specific use cases - i.e. it would help me if you could provide a high-level description of the problems you're trying to solve. Thanks.
We're having trouble understanding what you're doing. How many different missing Enumeration Sets do you have? Can you provide some examples of your use case - i.e. a step-by-step description of what you're doing?
Please help me understand how I would differentiate the most-future value. There's no way for me to know whether another value further into the future will arrive some time later. In addition, what if you have multiple future values and they arrive out of order? Should we trigger when the most-future value arrives even though it may not be the last one to arrive? Thanks for any input you may have.
In response to Kenneth Barber, "I don't do this often, and when I do, I ..."
Thank you for your feedback.
Can you help me understand how often you do this and how important this is to your work? Trying to gauge relative priority compared to everything else in the current backlog.
OK, I understand what you're looking to do. Unfortunately the current architecture makes this very awkward to do. In essence you're looking for a way to be alerted to a data quality issue. We'll keep this request in the backlog and prioritize it with the rest of the backlog items. I do not know if/when we would be able to get to this.
Are you describing a condition whereby the "errors" did not exist when you initially created the analyses and then subsequently these "errors" occurred? Or were these "errors" present when you initially created the analyses?
Unfortunately there is no workaround. We have identified this to be a bug. We do not have a forecast at this time on when this bug may be fixed.
Would you want to do this per notification (template) - i.e. everyone that gets a notification from this notification template would get that UOM? or per contact? Would you want to do this per UOM or per attribute - i.e. one attribute would be "ft" but another attribute would be "m"?
71 votes · RESEARCHING / EVALUATING · 21 comments · PI Server » Asset Framework (AF)
In response to Jagdish Konathala, "Thanks Steve. We had issues upgrading to..."
Yes, SP2a does fix the installation issue with SQL Server AlwaysOn Availability groups.
AF 2018 SP2a has a new feature for splitting strings. Please see release note:
144299 - Split function was added to Expression and Event Frame Generation analyses to split a string into a string array.
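As a sketch of how the new function might be combined with the existing array functions (this assumes Split takes the source string and a delimiter string; please check the release documentation for the exact signature):

```
Variable1: Split('Pump01|Station02|Area03', '|')
Variable2: ArrayLength(Variable1)
```

Here Variable1 would hold the resulting string array and Variable2 the number of pieces.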
In response to Chris Beaupre, "A String Length and Basic Delimiter func..."
Can you help me understand what you're trying to do with tabular alarm data? As you know, an AF data reference such as the string builder is only executed on demand by the client. In order to have a fruitful discussion, it would be great if we can get some details on your use cases.
In response to John Messinger, "Hi David, Some of the additional funct..."
What are you trying to do? Are you parsing alarms and events?
In response to Matt Inglis, "Take for example writing a digital state..."
One of the reasons we currently don't allow editing the AF System State Set is that within a single AF database, you can have PI Point Data References to multiple PI Data Archives. Imagine Attribute1 -> \\PIServer1\sinusoid and Attribute2 -> \\PIServer2\sinusoid: within AF we would then need to handle multiple Digital State Sets with potentially the same name. This causes a lot of complexity.
Having said that, I understand the issue you're experiencing and will keep it in mind for possible future enhancement.
Can you provide some examples of what changes you may have made to the System Digital State Set? Can you also provide the rationale behind changing these System Digital State Sets? What problems were you trying to solve by doing so?
What are you using to enter in the manual data?
Looks to me like there are two specific use cases here: 1) you have analyses that do not need to run more often than weekly - i.e. on a longer period than what is permitted right now (daily); 2) you have data that is supposed to arrive at a specific time, but sometimes it doesn't, and you want to re-calculate at a later time, at which point you know for certain you have all the data you need for the calculations.
Did I get that right? Are there any more use cases?
Please describe use case in more details. Are you trying to handle late arriving data?
If you're trying to handle late arriving data, we're researching auto-recalculation for the 2017 R2 release.
The choice of scheduling configuration depends on what you're trying to achieve and your business needs. Event-triggered analyses may get expensive if the calculations are performed more often than necessary, but that does not mean it's best practice to always use periodic triggering.
For some users, event triggers are their desired default configuration, so it's a no-win situation.
In your case, if you configure the templates to have a periodic scheduling, then you would minimize the chance of someone forgetting to set the desired scheduling.
In response to James Voll, "Another comment . . . that may be the or..."
Good questions James. I'll need to get into more details to answer your questions, but I get the feeling that you do want more details :-).
There are a few things you need to be aware of in terms of how we make data calls under the hood. In PI System Explorer, when you select the "Evaluate" button, the data call under the hood is an InterpolatedValues call based on the client time. Thus, if there is an offset between the client time and the server time, it is possible that when you select "Evaluate", you end up with values that are not the Snapshot. Let's say you have 5-second data in your PI Data Archive and your client is 2 seconds behind the server time; your call from the Evaluate button could then be interpolated between the last two values.
Variable1='attribute1' and Variable1=TagVal('attribute1', '*') should give you the same result. Both should be in the context of the trigger time. So * in your example is the trigger time of attribute2, which is late arriving lab data.
Having said that, you should also consider the meaning of * with respect to the client and PI Analysis Service. When you're in PI System Explorer and you evaluate TagVal('attribute1', '*') or 'attribute1', you will get back the InterpolatedValue based on the client time of "now". In PI System Explorer, when you select "Evaluate" you get back two values: one is the value at the last trigger time and the other is the value at evaluation time (basically now). The value at the last trigger time is based on the schedule. We added this feature a long time ago because the last trigger time may be long ago and in many cases is not the same as "now". Meanwhile, when the analyses are running in real time, the basis for time is PI Analysis Service, so if there are time offsets between PI Analysis Service, the PI Data Archive, and PI System Explorer, you may get inexplicable results.
Lastly be aware that we put in special handling of autobackfilling based on the service start time compared to the last evaluation time. Gory details are in the relevant documentation.
Good discussion thread. Let me provide some context around the design concept of Asset Analytics to help with this discussion.
First off, in the PI world, * was used to designate the Snapshot. Since the Snapshot typically holds the latest value that has passed Exception, over time * has come to be considered synonymous with "now" by users.
In the world of Asset Analytics, when running in real time (streaming analytics) with event-triggered scheduling, the analyses are typically "triggered" by Snapshot values coming via the PI Data Archive Update Manager. Thus, * = Snapshot = Trigger time in Asset Analytics. It is important to embrace "trigger time" in Asset Analytics because there is a default 5-second wait time before the analyses are actually executed. We call this the Execution time, but the time context used in the analyses is always the Trigger time. That means if your analyses are triggered at 12:00 because a new value came from the Update Manager at 12:00, the analyses are executed at 12:00:05 but the values used are the values at 12:00. This default 5-second delay is to allow for data that may arrive at the PI Data Archive with a slight delay. The delay is user configurable, but I would caution everyone to adjust it only if necessary to accomplish specific use cases - please contact tech support if you feel an adjustment is needed, to avoid unintended side effects. Nevertheless, the time context used is always the Trigger time.
Therefore, if you have lab data that arrives late, but with the desired timestamp, you can execute the analyses with event-triggered scheduling, triggering only on the lab data. For example, let's say your lab data arrives 2 days late, but with a time stamp that is 2 days old, and this is your ONLY triggering input; then the analyses would execute with the time context of the lab data (2 days old). However, there is a caveat: there is a difference between late-arriving data and out-of-order data. Late-arriving data is supported as triggers in real time, but out-of-order data is ignored as triggers. Out-of-order data is data that arrives with a time stamp older than the Snapshot value. In the case of out-of-order data, you would either need to recalculate or turn on auto-recalculation for the affected analyses. This is because streaming calculations are triggered by Snapshot values from the Update Manager, and out-of-order data bypasses the Snapshot, so we never see it as a trigger.
In the case of auto-recalculation or manual recalculation, be aware that the inputs are no longer Snapshot values but rather archive values, since typically the Snapshot values are no longer available. Practically, that means depending on your Compression settings, real-time (streaming) calculations may produce different results than backfilling or recalculation.
Hope this helps. (Hope it didn't confuse things further :-))
Can you please provide some details on what you're trying to do? An example would be very useful.
Have you tried using the ParseTime function?
When an analysis triggers, this should return the trigger time.
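For example, a minimal sketch (behavior may vary by version, so please verify in your environment):

```
Variable1: ParseTime('*')
```

Within a triggered analysis, '*' resolves to the trigger time, so Variable1 should contain the trigger timestamp as a time value.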
In response to Calvin La, "Hi Stephen, I think the customer wanted ..."
Ok, in that case, the request is to execute additional analyses "within" the event frame time duration in a streaming fashion. That also means these analyses would need some kind of a triggering/scheduling and I would assume these analyses would only "run" when an event frame is open?
This is a chicken/egg situation. The event frame analysis StartTrigger is executed to determine whether an event frame should be created. Before the EF is created, there is no StartTime. In short, there is no EF StartTime until the EF is created, thus it doesn't make sense to use an EF StartTime to determine whether the EF analysis should create an EF or not.
If I have misunderstood the intent of this request, please provide more details.