This item is currently planned for an upcoming release of PI Vision.
If you have a moment, we’d like to make sure we understand the requirements and expectations for this feature.
When choosing to display data markers on a trend, is the expectation that all raw data values over the duration of the trend are shown?
Would it be acceptable to show markers for only some of the raw values as long as there are enough to accurately represent the data, or is it critical for your use case that all raw data values are shown?
Thank you for your feedback!
"Would it be acceptable to show markers for only some of the raw values as long as there are enough to accurately represent the data, ...."
Absolutely NOT! This would make 'trace markers' useless!
Show all data points. I would expect, like ProcessBook, that if there were too many data points on the trend to be displayed properly, the markers would essentially turn off and the trace would merely be a line, until the time range was shrunk to a point where ALL data points could be shown.
We are currently researching this item and evaluating it for a future release.
If you have a moment, please let us know if you have a preference for how data item descriptions should be displayed in trends. For example, would you prefer to be able to configure the trend legend to show the item’s description, that the descriptions be shown under the plot area of the trend, or would you like them to be displayed in another way?
Let us know what your preference is by adding a comment below.
configurable options . . . descriptions visible: True/False, legend location: right/bottom.
When hovering the mouse over legend items, it currently shows the name with the root; have that text box show the description as well.
I'm tired of the printer marketing spam on this thread . . . rescinding my vote . . .
I've spent some time today trying to re-create the issue I had originally seen in the Tech Support case that generated this feedback item. I tried creating AF analyses from scratch using existing input tags that I was able to add 'in-order' and 'out-of-order' data into. I had also, previously, attempted to duplicate the original elements/analyses/tags from my tech support case and set the analyses back to the way they were before I had made changes to them.
The end result . . . I was not able to create any scenario that counters your premise . . . that '*' = trigger time. I am happy, because the ramifications of this not being true are . . . ugly . . . but I'm frustrated at having no way to explain the behavior I originally saw (as well as the fact that this premise still contradicts what was communicated to me, supposedly from a Product Specialist via a Tech Support Engineer).
I will again add the caveat that the original tech support case was on a system running AF 2017 R2 and we have since moved to AF 2018 SP2. I know specifically of a handful of items fixed between these two versions that directly concern issues around automatic recalculation and analyses. I would not think any of these issues would have been related . . . but at this point I cannot say for certain whether one of those issues was causing the issue in some way that was not apparent at the time.
I am also still very much keeping a close eye on the usage of automatic recalculations in the new version as we have never been able to fully 'trust' auto recalc and have had continual problems with it performing as it should. I have two cases open now concerning discrepancies between results from backfilling vs results from normal running analyses using auto recalc. FYI
Another comment . . . this may be the original tangent that led to the discussion, during my original case, of what '*' actually means in terms of backfilling, late data, out-of-order data, and normal operations . . .
Is there a difference between having Variable = 'attribute1' and Variable = Tagval('attribute1','*'), or are they the same? Especially in the context of attribute1 being non-late process data and the analysis being triggered on attribute2, late lab data.
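For reference, the two variable forms being compared would look something like this in the analysis editor (a sketch in PI AF expression syntax using the same notation as the question above; 'attribute1' is a placeholder, and whether the two forms evaluate identically for a late-arriving trigger is exactly the open question):

```
// Form 1: bare attribute reference
Variable = 'attribute1'

// Form 2: explicit retrieval at '*'
Variable = TagVal('attribute1','*')
```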
Everything you state is EXACTLY what my assumptions were prior to my original post on April 10th . . . where I was provided with information contradicting these assumptions, and thus turning my world upside down :). I would love to be correct in my original assumption and incorrect in my thinking over the last 1.5 months.
I'm not exactly in the practice of keeping incorrect analyses floating around, thus the analyses that were causing issues have been adjusted. Instead of being event-triggered on the lab data (as you even suggest), I switched them to periodic, and they were structured in such a way that all lab data now becomes out of order in relation to the analysis, thus late-arriving data will cause the analysis to be recalculated.
Anyway, I did, just now, attempt to create a duplicate set of analyses and output tags . . . switching them back to be event-triggered on the lab data. However, based on my previous observations, I do not trust that backfilling is an accurate representation of what is occurring during normal operation . . . thus I will give this a few days to run normally and then compare the two sets of results from these analyses.
I cannot pinpoint any obvious items that may be part of this . . . but the other caveat to throw out is that originally this system was on AF 2017SP2 and as of last week it's on 2018R2. There were some analytics-related bugs fixed that were causing me/us problems, but again, nothing I would think has a direct impact here.
"For analytics, '*' does NOT refer to Now but rather refers to the trigger time"
Your statement is in direct contradiction to what I was told through a Tech Support call concerning issues with analyses giving incorrect results (results that WERE corrected if I manually backfilled . . . thus it's a difference between how backfilling behaves and how real-time analyses behave).
Previously, I had always been under the assumption that '*' represents trigger time. But my analyses were not behaving like they should, and the explanation provided did fit with the observations I was seeing.
I would be happy to revisit the issue . . . however . . . I have no real complaints about tech support engineers with OSIsoft, they're great, but it is clear that they have varying levels of experience and different areas of strength. Previous experience with tech support calls involving intricate issues with AF Analyses like this suggests that it would not be worthwhile to call and roll the dice on who I get.
These issues are hard to identify because they are most evident when backfilling behaves differently than normally running real-time analyses. While automatic recalculation addresses a big part of this, it makes things more confusing as well. At this point it has become common to have 'incorrect' analysis results that are simply fixed by backfilling . . . but that means testing what the problem is becomes difficult, because hitting Evaluate or Backfill on the analysis is not trustworthy.
Process data (not late), like a flow rate, used in an analysis with lab data (late), like a concentration. The lab data could be anywhere between 2 hours late and 14 days late.
One of the more obvious configurations would be to have an analysis event-triggered on the lab data. The trigger time could be 2 hours ago (or 14 days ago), but the process data's snapshot would be <10 seconds ago. The misconception that '*' represents trigger time is the issue . . . because it leads to a very simple analysis that is also incorrect: just using TagVal('process data','*').
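To make the scenario concrete, here is a sketch of the kind of analysis I mean (expression syntax; the attribute names 'flow rate' and 'concentration' are illustrative placeholders for the process data and lab data above):

```
// Analysis event-triggered on 'concentration' (lab data, 2 hours to 14 days late).
// If '*' resolved to the trigger time, this would read the flow rate at the
// lab sample's timestamp; if '*' instead means the snapshot (now), it reads
// the CURRENT flow rate -- giving an incorrect result whenever the lab data
// arrives late.
Result = TagVal('flow rate','*') * 'concentration'
```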
Again, it's confusing that you are suggesting that using TagVal('process data','*') is incorrect but that using '*' outside of a tag retrieval function is correct.
It is also hard to come up with 'on the spot' examples, because this is a situation where backfilling/Evaluate behave DIFFERENTLY than normal running over time. I see a lot of 'solutions' given that are essentially 'well, just backfill'. Backfilling does seem to ALWAYS treat '*' as trigger time and not snapshot time, and the Evaluate button does the same thing. The example I had most recently (from Case 00564622) was adjusted so that the analysis is not event-triggered but periodic, then relying on automatic recalculation to hit when the late data finally arrives.
No, I had not tried that. ParseTime is used for strings . . . the asterisk is usually used with apostrophes ( '*' ). Putting those together ( ParseTime('*') ) gives a Calc Failed. I had not thought until now to try using quotes with the asterisk . . . ParseTime("*") does calculate properly. How do ParseTime("*") and '*' differ? Picture attached.
The attached picture also suggests a further confusing aspect of this problem. Used outside of any tag retrieval functions, '*' means trigger time . . . but used inside of any tag retrieval functions, '*' means snapshot. Is this correct? That's very confusing.
(Side question: I don't recall any other situation in which the usage of "*" is appropriate.)
I do believe this would address the issues I had mentioned . . . however, I would argue that it is far from ideal. I believe the confusion between '*' vs snapshot vs trigger time is likely a common misconception and thus a non-obvious problem. I believe the ParseTime solution is a non-obvious workaround, requiring extra variables on MANY analyses. It would mean that I would need to include ParseTime("*") in nearly every analysis created that contains process data (usually 'not late') and manually entered data (usually 'late'). Non-obvious problem + non-obvious solution + extra overhead on many analyses = :( .
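For anyone following along, the workaround pattern being described adds an extra variable to pin the timestamp explicitly (a sketch; the attribute names are illustrative, and this assumes TagVal accepts a time variable as its second argument, which is how the suggested solution was presented):

```
// Extra variable capturing the trigger time explicitly
TriggerTime = ParseTime("*")
// Retrieve the process data at the trigger time rather than at its snapshot
FlowAtTrigger = TagVal('flow rate', TriggerTime)
Result = FlowAtTrigger * 'lab concentration'
```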
To reiterate a point above . . . one bad data point (coercion failure) will cause the ENTIRE trace to fail, whether it's 1h or 30d. Regardless of whether that one data point should be handled differently, it seems rather problematic that one bad data point can cause the entire trace to fail.