As an engineer using PI View or SQL writers for a report, I would like an option to remove data older than a certain cutoff. For example, a continuous view is used to generate data for a report on the last 8 hours. Data older than that is no longer needed for the report and should be removed to avoid unnecessarily large tables.
One workaround with the Text and Hadoop writers is the Append Timestamp option, but there's no ability to remove data from other targets.
38 votes
For supporters of this idea, what is the downstream client tool that consumes the data prepared by the PI Integrator? Is the concern disk space, or how to display only data for a specified time range?
For the former, this seems to be a different type of “rolling” reporting use case that we do not currently have a solution for. In the meantime, as Rory mentioned below, a stored procedure on SQL would work if this idea is intended for SQL writers.
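As a rough illustration of that workaround, here is a minimal sketch of a scheduled cleanup against a SQL target. The table name, timestamp column, and connection details are placeholders, not the PI Integrator's actual schema; substitute whatever your view writes.

# Minimal sketch of a scheduled cleanup against a SQL target.
# The table name (ViewOutput), timestamp column (TimeStamp), and
# connection string are assumptions -- adapt them to your own view.
import pyodbc

RETENTION_HOURS = 8  # keep only the window the report needs

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=PIIntegratorData;Trusted_Connection=yes;"
)
with conn:
    cursor = conn.cursor()
    # Delete rows older than the rolling cutoff; run this on a schedule
    # (SQL Agent, cron, Task Scheduler) alongside the view's publish runs.
    cursor.execute(
        "DELETE FROM ViewOutput "
        "WHERE [TimeStamp] < DATEADD(hour, -?, SYSUTCDATETIME())",
        RETENTION_HOURS,
    )

The same DELETE could live in a stored procedure invoked by a SQL Agent job, which keeps the cleanup entirely on the database side.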
For the latter, is it possible to create “rolling timespans” for your graphs, reports, tables, etc.? For example, here’s a video on how to create calculated measures for a table based on the last day or month of data: youtube.com/watch?v=NevFWL2JoMU
As a user of PI Integrator for Business Analytics, I'd like to have it support high availability (HA).
18 votes
Please specify what kind of high availability is requested. Should PI Integrator for BA itself run in a highly available fashion?
Or should it support other applications that are highly available? If so, which ones?
As a PII4BA administrator, I would like to specify a logging level so I can filter which logs I want to store and which ones I don't want to store in the database.
18 votes
Is this an issue of limited SQL storage? We have a log retention policy that can be configured by the user in the CAST.UI.WindowsService.exe CONFIG file today. By default it retains at most 2 months of data, no more than 100k rows, and no more than 1000 MB of disk space. Similar retention policies can be created for the PIIntegratorStats database as well.
Let me know if this would satisfy the requirement. Or provide more information on how specifying the logging level would help. If you could, which logging levels would you keep vs. throw away and why?
The current default column size is 4000 characters. It would be nice to give the user control over this parameter.
8 votes
What relational table is this in reference to? A specific target that you’re using? And can you give an example of when/how a column size exceeds 4000 characters?
As a user of PI Integrator for BA, I would like an option to publish data only when the value changes. The use case: a point is calculated via an analysis once a minute, and the value does not change often. I want the data to be published only when the value changes. Right now, PI Integrator for BA publishes the same (unchanged) value at each run with a different timestamp.
8 votes
What writer are you using? PI Integrator for Business Analytics Advanced Edition supports streaming use cases where the PI Integrator signs up for changes on relevant attributes and only sends data when values have changed based on the Snapshot or the Archive. Supported targets today are Apache Kafka, AWS Kinesis, Azure Event Hubs, and Azure IoT Hub.
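In the meantime, the requested behavior can be approximated downstream. Here is a hypothetical client-side filter that drops consecutive samples whose value matches the previously published one; the (timestamp, value) data shape is an assumption for illustration.

# Hypothetical post-processing filter: emit a sample only when its value
# differs from the last emitted value. Input shape is an assumption.
def changed_values_only(samples):
    """samples: iterable of (timestamp, value) pairs in time order."""
    last_sent = object()  # sentinel that compares unequal to any real value
    for timestamp, value in samples:
        if value != last_sent:
            last_sent = value
            yield timestamp, value

# A once-a-minute calculation whose value rarely changes:
data = [("08:00", 42.0), ("08:01", 42.0), ("08:02", 42.0), ("08:03", 43.5)]
print(list(changed_values_only(data)))  # [('08:00', 42.0), ('08:03', 43.5)]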
Add the possibility of publishing to a target in JSON file format.
8 votes
If you voted for this idea, did you mean JSON format in the Text File target, or to support JSON format in a new target or existing target? Currently we support JSON format for the Apache Kafka, Azure Event Hubs, Azure IoT Hub, Amazon Kinesis Data Streams, and Amazon S3 writers.
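Until a native JSON file target exists, one hypothetical workaround is to post-process the Text File writer's CSV output into JSON Lines. The file names and assumption that the output is comma-delimited with a header row are placeholders.

# Hypothetical post-processing step: convert the Text File writer's CSV
# output into JSON Lines. File names and column layout are assumptions.
import csv
import json

with open("view_output.csv", newline="") as src, \
        open("view_output.jsonl", "w") as dst:
    for row in csv.DictReader(src):
        dst.write(json.dumps(row) + "\n")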
As a user, I'd like to utilize the columns in the Statistics tab more effectively by being able to sort on them, so that I can sort by duration, error count, etc.
6 votes
Currently we don’t enable sorting because each statistic, such as duration or error count, is indexed on time. There is a time-series graph showing error count over the last X runs. When there are anomalies, you can click into those runs to see more details, and you can easily click back out and see the trend for error count or duration. How would being able to sort add more or different value than what is currently available?
Ability to set the locale for each text file target individually.
5 votes
The current 'Update' feature backfills all assets matching the asset shape defined in the view.
However, when there is a need to backfill only a small set of assets in a view, there is no option to select those assets.
Adding a selection option would avoid updating all assets in the view; the user could pick only the assets that require a backfill.
5 votes
Thank you for your feedback! We would need more discussion with supporters to understand the problem.
How does the lack of this functionality impact your work? How often does this happen? How many assets are typically in your view? And how many assets are typically affected?
Do the new automatic updates through persistent signups (introduced in 2018) mitigate this problem at all?
Please feel free to email me or post a comment to help us understand the pain point.
I would like to get notified about Unique Output Streams Consumed or Remaining on the PI Integrator Administration page under Licensing.
3 votes
How would you like to be notified? Under what circumstances is a notification important? Please feel free to email me directly to discuss the use case! Thanks!
As a PI Integrator for Business Analytics user, I would like to be able to choose which directory in Azure Data Lake my view is published to, so that it is easier to manage and analyse data once it is there.
Currently this is only possible by setting up additional targets, which is cumbersome.
1 vote
How would this interaction play out? Choose the directory before publishing?
How many directories do you currently need to publish to? What are the different purposes for each directory?
This concept would really apply to all integrators except Esri.
We are careful to interpolate data in evenly spaced samples for large matrix analytics. What happens when some of the data is upstream, the rest of the data is downstream, and a tank with a 2-hour retention sits in the middle? The data will not align. We need to take the asset time retention into account as the process flows from beginning to end. This can easily be calculated by time-shift analytics that shift data at the inlet to match the data at the outlet. Any given plant process could have hundreds of time shifts to align the data, but without data alignment we would be comparing apples with oranges.
1 vote
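As a rough sketch of the time-shift alignment described above, shifting the inlet series forward by the tank's retention lines it up with the outlet series. The 2-hour retention and the sample values are assumptions for illustration only.

# Sketch of time-shift alignment using pandas. The 2-hour retention and
# the sample data are assumptions for illustration only.
import pandas as pd

rng = pd.date_range("2024-01-01 00:00", periods=6, freq="h")
inlet = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], index=rng, name="inlet")
outlet = pd.Series([0.9, 1.9, 2.9, 3.9, 4.9, 5.9], index=rng, name="outlet")

# Shift the inlet forward by the tank's 2-hour retention so each inlet
# sample lines up with the outlet sample it actually produced.
aligned = pd.concat(
    [inlet.shift(freq=pd.Timedelta(hours=2)), outlet],
    axis=1,
).dropna()
print(aligned)

A real plant process would apply one such shift per retention stage, composing hundreds of shifts along the process flow.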
This sounds like a complex problem and it would be really helpful to see the process visually or get more context. Please email me directly so that I can follow up on this use case.