Reopening this suggestion to collect additional feedback.
Another user (Dave Johnson) has posted an interesting discussion on this topic at:
PI AF Extensible Analytics
In response to Tim Carmichael, "Since I haven't had an opportunity to us..."
Hi Tim Carmichael
Data References always work on-demand, but an Analysis has the benefit of persisting its output to a PI point for history and faster retrieval. The issue is that there are some things that can't be done in Asset Analytics, since it lacks a true looping mechanism or conditional branching. A data reference may access a 3rd party library, but configured analytics cannot.
The reason for the proposed idea is to have standard hooks from Analytics to a calculation service of the customer's choosing (not just MATLAB). In this use case, Analytics is relegated to being the scheduling service, but it shells out the actual calculation to an external service.
I have mixed feelings on this proposed idea. On one hand, I get what you want to do. This works great for assets that have several attributes, as long as each attribute uniquely belongs to a given UOM (or actually UOMClass) for that asset.
How do you see this working with heat exchangers, where you would have 4 different temperature readings? It would be confusing if all of them were green. Or if displaying a pump, there may be an inlet and outlet measurement (perhaps flow or pressure or temperature).
In response to James Voll, "Everything you state is EXACTLY what my ..."
I cannot duplicate your problem. Today I created a new tag on my Data Archive. My examples focus on the tag "Sinusoid Previous Value".
I quickly created an element with it. Keep in mind that, since it takes many seconds or minutes to move from one application to another, SINUSOID will be changing along the way:
Next up is a simple analysis. We will take the current value of SINUSOID and produce the previous value. There are a couple of points to note: (1) the CurrentValue expression is not needed in the long term; I only used it so I could see something when I click Evaluate. And (2) the timestamp for PreviousValue will be the TriggerTime; that is to say, CurrentValue and PreviousValue have the same timestamp.
Obviously, PreviousValue is mapped as an output back to my newly created tag "Sinusoid Previous Value".
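In case the screenshots do not come through, here is a rough sketch of what the two expression variables could look like, assuming the standard Analytics functions TagVal and PrevVal (your exact configuration may differ):

CurrentValue: TagVal('SINUSOID', '*')
PreviousValue: PrevVal('SINUSOID', '*')

TagVal returns the value at the trigger time, and PrevVal returns the recorded value immediately before that time; PreviousValue is the variable mapped to the output tag.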
Next I backfilled for a few days:
Using SMT, here's what I show for SINUSOID:
And here's what I show for Sinusoid Previous Value:
Note there are a couple of extra values for the Previous version compared to SINUSOID. Understand that this analysis is triggered whenever the SINUSOID snapshot changes. However, the SMT archive viewer is showing me archived values, that is, snapshot values that have passed the compression test. So it makes sense that during the time I was messing with this and performing backfills (a little after 11:30 AM my time), the analysis was still running for incoming snapshot data. But not every SINUSOID snapshot makes it to the archives.
This falls in line with Steve's previous (and nice) explanation, and on my system demonstrates that '*' refers to the trigger time and not a wall-clock Now.
In response to James Voll, " "For analytics, '*' does ..."
What version of Analytics are you using? You may send a private message to me with the case number(s) so I may review.
Let's back up and correct your original statement. This is incorrect with regard to Asset Analytics:
'*' refers to now or snapshot values
For analytics, '*' does NOT refer to Now but rather refers to the trigger time. Analytics do not have a notion of a wall clock. Likewise, other PI relative times are relative to that trigger time. So 't' refers to midnight of the day of that trigger time, and 'y' refers to yesterday relative to that trigger time.
Is this really a critical distinction? YES, particularly with recalculations, which can be for several days ago. Each recalculation's '*' refers to its own trigger time (perhaps 2 days ago) instead of Now.
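As a quick illustration (just a sketch using the standard TagAvg summary function and an arbitrary tag), an expression such as:

TagAvg('SINUSOID', 'y', 't')

averages over yesterday relative to the trigger time. On a live evaluation that matches the wall-clock yesterday, but during a recalculation for two days ago it averages the day before that older trigger time, not the day before Now.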
Thus, what you are asking for already exists and is amazingly simple:
Pretty sure the requestor means PI SMT's Point Builder. I am using SMT from the 2018 RC1 and the dialog I see is:
To me the wording is quite clear. I have the option to remove it from a list or delete the PI Point.
To add to the need, currently you may filter on attribute values, such as "|attrPath:10" or "|attrPath:>90". You could filter on 1 given bad state such as "|attrPath:'No Result'". But there is a need to filter on any bad value, not just 1 particular bad value.
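For context, here is a sketch of the kind of attribute value filter that already works (assuming an AFSearch-style element query and a hypothetical attribute named Status):

Template:'Pump' |Status:>90
Template:'Pump' |Status:'No Result'

The first filters on a numeric threshold and the second on one specific bad state; what is missing is a single filter that matches any bad value.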
In response to Stephen Kwan, "Additional details - specifically you ca..."
When the RC comes out, I will check it out and get back to you.
In response to Stephen Kwan, "Is this to insert a value into an EF att..."
Yes, it's to capture a value at start time. However, the value is specifically not coming from a PI point. Users can already capture a PI point's value at start time, but they can only capture non-PI-point values at end time due to the StringBuilder DR.
I am hesitant to upvote this. I certainly believe we should offer something to fill the need. Not so sure that the Formula DR is that something. I would rather see Asset Analytics changed to allow a simple Expression analysis on an event frame.
If the input strings are fixed and well-known, I would consider placing them in an AFEnumerationSet, if possible. You may use AFEnumerationValues in a Formula DR but admittedly it is a bit ugly since you must hard-code the numeric values.
We have started working on this feature that will allow users to define calculations on the fly in PI Vision.
We’d like to learn more from you about what you’d expect when it comes to these types of ad hoc calculations in PI Vision. If you have a few minutes, please consider taking this quick survey:
Your feedback will help determine how this functionality is implemented.
In response to Hahnming Lee, "This seems like a recipe for a lot of da..."
The use case is having a Rollup Analysis defined on an Element Template, but some element instances may be childless, and that messes with other rollups. E.g., a rollup analysis may want to do a count on an element from among its child elements, and that first rolled-up value feeds into its parent's rollup. If the first element has no child elements, its count should be 0 instead of throwing errors that are propagated up to the parent element's rollup.
On the surface this sounds intriguing but I do have some questions. The easy one is: what is an appropriate point limit for small home projects? 25 tags? 50? 100?
Now on to the stickier question: what software components would be included in this Community Edition of PI? Obviously the Data Archive, with its point limit. It would have some PE but no ACE. What interfaces get included? What about the AF Server and Client? I seriously doubt they would make it into the Community Edition, which means I don't get Asset Analytics. No DataLink, ProcessBook, or PI Vision either. So really it's a Data Archive with little else, which is something that could happen. With more things thrown in, it becomes tougher to restrict some of the other components, which just means we've opened the doors wide open for piracy.
If others are thinking a full-blown Data Archive and AF, then an inexpensive home IoT project with less than $100 of hardware will also need a suitable PC (or 2) to run SQL Express, AF, and all the other stuff. So I can get a Raspberry Pi and some sensors dirt cheap, but I also need a big, beefy, more expensive PC to run my Community Edition of PI.
We will make this fix in an upcoming version.
I am neutral on the idea but would like to dispel the notion that these are necessarily spelling mistakes or the result of sloppiness. There is a justification/history behind each spelling. For instance, TimeStamp was the spelling used for PIValue. And Utc is frequently spelled as-is throughout Microsoft's .NET Framework, such as in UtcNow, UtcOffset, UtcTicks, etc. My comment is not to diminish your submitted idea. The vote should be based on preference going forward, but do understand that either spelling is acceptable within a given context.
We are considering how to implement Data Quality in an upcoming release. A subset of that is whether data is stale. If you can help us gather more customer use cases to better document the problem, and help us get a sense of how many customers would benefit from this feature, that would be extremely helpful.
Specifically, what would be very helpful is the requirements customers have to determine when a data point is stale. We are making a serious effort to simplify the management of the PI System, so we would like to be able to avoid asking users to spend time configuring yet another parameter.
I don't think we can truly retire the PISDK until the AFSDK supports methods similar to the PISDK's for accessing the PI Message Logs.