Currently the PI Adapter for OPC UA only collects unsolicited (advised) data when the value differs from the previous one. We would need a polling feature to be able to:
- Refresh slow-moving streams with updates that carry the same value but a new timestamp
- Get the information from the OPC UA Server at a defined timestamp
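The requested polling behavior can be sketched in plain Python; the read callback and node ID below are illustrative stand-ins, not the adapter's actual API. Each node is sampled on a fixed interval and every sample is forwarded, even when the value is unchanged, so only the timestamp advances:

```python
import time
from datetime import datetime, timezone

def poll_streams(read_value, node_ids, interval_s, cycles, sink):
    """Poll each node every interval_s seconds and forward every sample,
    even when the value equals the previous one (only the timestamp
    differs) -- unlike subscription-only collection."""
    for _ in range(cycles):
        stamp = datetime.now(timezone.utc)
        for node in node_ids:
            sink.append((node, read_value(node), stamp))
        time.sleep(interval_s)

# Stubbed read function standing in for an OPC UA read service call:
samples = []
poll_streams(lambda node: 42.0, ["ns=2;s=FlowRate"], 0.01, 3, samples)
# Three samples are recorded even though the value never changed.
```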
This polling feature could pave the way to more features, such as setting up the equivalent of the "Trigger event tags" offered by the old PI OPC DA Interface technology.
49 votes
As with the "Trigger Tags" feature of the PI Interface for OPC DA, the idea is to poll for updates on a series of item streams (tags) only when a certain trigger stream updates.
The benefit is timestamp alignment across a series of data points, for instance when a cycle begins or ends. This is very useful for analysis and visualization of production-cycle efficiencies.
35 votes
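A minimal sketch of the trigger-tag idea, with a hypothetical read_value callback in place of a real OPC UA read: when the trigger stream updates, every polled stream is sampled and stamped with the trigger's timestamp, producing aligned rows.

```python
def on_trigger_update(trigger_ts, read_value, polled_nodes, sink):
    """When the trigger stream updates, read every polled stream and
    record all samples with the trigger's timestamp, so the whole row
    is aligned to the cycle boundary."""
    sink.append({node: (read_value(node), trigger_ts) for node in polled_nodes})

# Illustrative node IDs and values:
values = {"ns=2;s=BatchTemp": 81.5, "ns=2;s=BatchPressure": 2.3}
rows = []
on_trigger_update("2024-05-01T06:00:00Z", values.get, list(values), rows)
```

Every value in the resulting row shares one timestamp, which is what makes cycle-efficiency analysis straightforward downstream.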
The PI Adapter should be able to create a data pipe listener on certain streams (unsolicited reads) that would trigger writes back to the OPC UA Server.
This is useful for providing set points to the equipment without establishing a second (DA-to-UA) data flow in parallel.
33 votes
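The shape of such a listener can be sketched as follows; the write callback is a placeholder for an OPC UA write service call, not an existing adapter API:

```python
class WriteBackListener:
    """Conceptual data pipe listener on setpoint streams: each incoming
    data-change event is written back to the OPC UA server, avoiding a
    second, parallel DA-to-UA data flow."""
    def __init__(self, write_value):
        # write_value would wrap an OPC UA write in a real implementation
        self.write_value = write_value

    def datachange(self, node_id, value):
        self.write_value(node_id, value)

written = {}
listener = WriteBackListener(lambda node, v: written.__setitem__(node, v))
listener.datachange("ns=2;s=OvenSetpoint", 175.0)
```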
As a PI System Administrator, I would like for the PI Adapter for OPC UA to support array data types from the OPC UA Server. A large amount of our data may be contained within arrays on our OPC UA Server, so we would not currently be able to collect that data.
14 votes
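One possible way an adapter could map array values onto scalar streams, sketched with illustrative names (this is an assumption about a design, not the adapter's behavior): fan each element out to an indexed per-element stream.

```python
def expand_array(stream_name, value, timestamp):
    """Fan an OPC UA array value out to indexed per-element streams so
    that scalar-only sinks can store it; scalars pass through unchanged."""
    if isinstance(value, (list, tuple)):
        return [(f"{stream_name}[{i}]", v, timestamp) for i, v in enumerate(value)]
    return [(stream_name, value, timestamp)]

rows = expand_array("ns=2;s=Profile", [1.0, 2.0], "t0")
```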
As a PI System Administrator, I would like to be able to apply conversion factors to the data that is being gathered from our OPC UA server. This would allow us to get data in the format/units we'd like, without requiring extra tags and/or calculations.
4 votes
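The requested conversion is a simple linear scaling applied before data reaches the endpoint; the factor and offset below are hypothetical values for illustration.

```python
def convert(value, factor=1.0, offset=0.0):
    """Linear unit conversion: engineering units = raw * factor + offset.
    Applied in the adapter, it removes the need for extra tags or
    downstream calculations."""
    return value * factor + offset

# e.g. raw counts to degrees C with a hypothetical scaling:
reading_c = convert(100, factor=0.5, offset=-20.0)
```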
As we are using ISO-based tag naming, we would like the PI Adapters to be able to write OPC UA tag values into specific tags in DA, and not only into automatically created tags. Automated DA tag creation should be an available choice, not the default.
3 votes
Thank you for your input. PI Adapter for OPC UA does support specific DA tag naming, as you can supply custom stream names. Please take a look at Kevin Geneva’s comment under this post on how this is handled through PI Web API.
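For reference, a data-selection entry with a custom stream name might look like the fragment below. The field names here (Selected, Name, NodeId, StreamId) are an illustrative sketch and should be checked against the adapter's configuration documentation:

```json
{
  "Selected": true,
  "Name": "Furnace temperature",
  "NodeId": "ns=2;s=Furnace.Temperature",
  "StreamId": "ISO.Site1.Furnace.Temperature"
}
```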
Support ReverseConnect, allowing the connection to be established from the more secure area, where the OPC Server resides, to the less secure area, where the OPC Client / Adapter resides.
This is implemented in many other client tools and SDKs, such as Matrikon, UnifiedAutomation, and Prosys.
2 votes
As of now, PI Adapters and EDS support only Basic authentication for PI Web API OMF endpoints. Basic authentication is less secure because PI Web API retains the user name and password in decrypted form for the duration of the request. To mitigate this risk, all of our PI Vision and PI Web API instances are configured to use Kerberos authentication.
2 votes
Implement a configuration flag that allows one to NOT synchronize the description from the OPC UA server into the PI tag.
In instances where the description is empty (""), the description cannot be set in PI. Similarly, the description on the OPC UA Server side may need to differ from the one used in PI DA.
2 votes
Currently, the PI Adapter for OPC UA sends good quality data to the configured data endpoint with no way to filter on different good quality statuses. There are several OPC status codes that fall under the Good Quality Data classification.
Because both fall under the OPC Good Quality Data classification, when the status code changes from "Good" to "GoodOverload" both values are sent to the configured data endpoint.
The user would like the option to filter good-quality data, for example to send only data with the "Good" status code to the endpoint and filter out "GoodOverload" data.
2 votes
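The requested filter can be sketched on top of the OPC UA status-code layout: the top two bits of the 32-bit code give the severity (00 = Good, 01 = Uncertain, 10 = Bad), and sub-codes such as GoodOverload share the Good severity. The GoodOverload constant below is quoted from the OPC UA status-code tables; treat it as an assumption to verify against the spec.

```python
GOOD = 0x00000000
GOOD_OVERLOAD = 0x002F0000  # a Good-severity sub-code (verify against the spec)

def is_good_severity(status_code):
    """Top two bits of an OPC UA status code encode severity:
    00 = Good, 01 = Uncertain, 10 = Bad."""
    return (status_code >> 30) == 0

def keep(status_code, allowed=frozenset({GOOD})):
    """The requested filter: forward only explicitly allowed
    good-severity codes, e.g. plain Good but not GoodOverload."""
    return is_good_severity(status_code) and status_code in allowed
```

With this filter, a transition from Good to GoodOverload would stop sending data rather than forwarding both, which is the behavior the request asks for.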
Thank you for your input. This feature is handled with the latest version of PI Adapter for OPC UA (184.108.40.206) in the case where you send data to OCS. PI Web API handles only “Good” data (ie. no distinction between other “Good” data types). Could you elaborate further on this use case?
Similar to PI Interfaces (/DB for opcint), the adapters should allow enabling additional debugging to troubleshoot specific issues, instead of only offering a general LogLevel (Information, Trace, Warning, etc.).
This would allow more granular logging, instead of either having no useful information or so much information that it is very difficult to parse and determine what the issue is.
- Additional debugging around discovery / browsing
- Additional debugging to record data flowing for a specific stream / NodeId
- Additional debugging listing all items that were added to a subscription
- Additional debugging around history recovery
1 vote
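The kind of scoped control being requested can be illustrated with Python's standard logging module (the logger names are hypothetical): one component runs at Debug while the rest of the adapter stays at Information, which a single global LogLevel cannot express.

```python
import logging

# Global default corresponds to LogLevel = Information.
logging.basicConfig(level=logging.INFO)

# Per-component override: verbose discovery/browsing diagnostics only,
# without flooding the log with subscription or data-flow messages.
logging.getLogger("adapter.discovery").setLevel(logging.DEBUG)
logging.getLogger("adapter.subscription").setLevel(logging.INFO)
```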