EPICS pvaSrv Specification Ideas

EPICS v4 Working Group, Working Draft, 23-Jan-2013

Editors:
Marty Kraimer, BNL
Ralph Lange, HZB / BESSY II

Abstract

pvaSrv is a pvAccess server running on top of an EPICS V3 database, implemented in C++.

This product is part of the V4 implementation of EPICS (Experimental Physics and Industrial Control System).

Status of this Document

This is the 23-Jan-2013 version of the pvaSrv Specification, as discussed in the EPICS V4 Workgroup Meeting at PSI, Villigen.

Introduction

pvaSrv is a pvAccess server that runs in the EPICS V3 IOC.

It allows you to get, put, and monitor V3 PVs (fields of V3 records) over pvAccess, translating the value and its metadata (graphics limits, alarm status, timestamp) into Normative Type (NT) pvData structures (NTScalar, NTScalarArray). This functionality is implemented as a pvAccess channel provider called "v3Channel".

It also allows you to specify named groups of V3 PVs through an RPC type call, which can then be accessed under the new name as a collection of NT structures. If the records of such a collection are within one V3 database lock set of the IOC, put and get operations are atomic. This functionality is implemented as a pvAccess channel provider called "molecule".

Put, Get and Monitor operations will be supported. Put and Get support the "process=true" request parameter, which processes the record after the Put or before the Get.
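For illustration, a "process=true" request might be expressed with the pvAccess request syntax used by the pvAccess command line tools. The record name "calc01" is purely hypothetical, and the exact request string accepted by pvaSrv is a sketch, not a fixed interface:

```shell
# Get the value, processing the record before the read
pvget -r "record[process=true]field(value)" calc01

# Put a value, processing the record after the write
pvput -r "record[process=true]field(value)" calc01 5.0
```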

In future versions (based on EPICS Base >= 3.15) it is intended to use server-side plug-ins to implement other request and monitor parameters.

V3Channel

Operation

Connections can be made to any V3 PV inside the IOC's database, addressed either by using the full "record.field" name, or the "record" short form that connects to the .VAL field.
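Assuming a hypothetical analog record named "calc01", the two addressing forms could be exercised with the pvget tool like this (a usage sketch, not a normative interface):

```shell
pvget calc01         # short form, connects to calc01.VAL
pvget calc01.VAL     # full record.field name, same PV
pvget calc01.SCAN    # any other field of the record
```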

PV Data Representation

The pvData structure used to represent the PV's data will be an NTScalar (NTScalarArray for array data), with the NT-defined structures filled on a best-effort basis with the data from the corresponding DBR_xxx types.
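As a sketch, the NTScalar for a PV backed by a DBR_DOUBLE might look as follows in pvData structure notation. The field set follows the NT drafts; which substructures are actually filled depends on the metadata available from the corresponding DBR types:

```
structure NTScalar
    double value
    structure alarm
        int severity
        int status
        string message
    structure timeStamp
        long secondsPastEpoch
        int nanoseconds
        int userTag
    structure display
        double limitLow
        double limitHigh
        string description
        string format
        string units
```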

Molecule

Operation

Molecule works in two steps:

  1. Create a named group of V3 PVs (molecule) through an RPC type call.
  2. Operate on this group through its new name.

All PVs whose data types can be held by an NTScalar or NTScalarArray are supported. Types can be mixed.

Pre-Configured vs. On-the-Fly Groups

Both types of configuration will be supported.

Pre-configuration is expected to happen either through the V3 database (e.g. by using info fields), or remotely (e.g. from a service that keeps and persists the supported well-known groups). On-the-fly configuration is expected to be done by a client that first specifies a group, then performs the operation on that group. To facilitate this, the RPC type call is available as a local function call as well as remotely through ChannelRPC.
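Pre-configuration through the V3 database might, for example, tag member records with an info field. The info tag name "pvaMolecule" and the record names below are purely illustrative, not a defined convention:

```
record(ai, "temp:sensor1") {
    info(pvaMolecule, "sensorGroup")
}
record(ai, "temp:sensor2") {
    info(pvaMolecule, "sensorGroup")
}
```

At IOC initialization, pvaSrv could scan these info fields and create the group "sensorGroup" from all records carrying the tag.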

RPC Type Interface

There will be two calls, "create group" and "delete group".

Create Group

In the arguments to the "create group" RPC call, the client will specify:

group name
name of the PV group to be created
PV names
ordered list of the group's member PVs
lifetime
the condition under which the group will be deleted: the lifetime is the time the group is guaranteed to remain accessible and responsive after all clients have disconnected
processMask
bitfield indicating which PVs' records will be processed when an operation specifies "process=true"
maybe special tags to define "first", "last", "all"?
requireAtomic
flag indicating that put and get operations will fail if they cannot be executed atomically for the whole group
time bin (optional)
time that pvaSrv will wait after receiving an update for one of the PVs before sending out the pvAccess update

The call will return success or failure.
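The arguments listed above could be conveyed in a pvData structure along these lines. The field names and types are illustrative only; the specification does not yet fix a wire format:

```
structure
    string   groupName     // name of the group to create
    string[] pvNames       // ordered list of member PV names
    double   lifetime      // seconds the group survives after last disconnect
    boolean[] processMask  // one entry per PV; true = process on "process=true"
    boolean  requireAtomic // fail puts/gets that cannot be atomic
    double   timeBin       // optional: update coalescing time in seconds
```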

Delete Group

In the arguments to the "delete group" RPC call, the client will specify:

group name
name of the PV group to be deleted

The call will return success or failure.

Operation Interface

The client may either send a single pvRequest/pvMonitor structure or an array of pvRequest/pvMonitor structures. A single structure will be used for all PVs, whereas an array will specify a separate structure for each of the PVs.
Unclear: Does pvAccess allow the client to supply an array of those structures for an operation? If not, there is no way to specify individual request/monitor specifications per PV.

PV Data Representation

The PV group data will be represented by a top-level pvData structure that contains the NTScalar (or NTScalarArray) structures of the PVs, each having the name of the PV.
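For a hypothetical group containing the two PVs "temp:sensor1" and "temp:sensor2", the top-level structure might look like this sketch (substructure contents abbreviated to the NTScalar fields shown earlier):

```
structure
    structure temp:sensor1   // NTScalar for PV temp:sensor1
        double value
        structure alarm
        structure timeStamp
    structure temp:sensor2   // NTScalar for PV temp:sensor2
        double value
        structure alarm
        structure timeStamp
```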

Persistence

The group configurations should be persistent, i.e. survive an IOC reboot. It is currently unclear how this can be achieved. A more generic service, similar to the AutoSaveRestore module, would be very helpful.

Other Issues

Authentication and Authorization

Allowing unrestricted access to an IOC's database through pvaSrv would create a serious security hole. To mitigate this risk, pvaSrv will take a "user" argument when started. This user name and the host name of the pvAccess client (which is available through unadvertised calls) will be used as credentials against the Access Security layer of the IOC.
Unclear: Does pvAccess support access right change events sent from the server to the client?

User Tag in Timestamp Structure

There should be a way (using plug-ins?) to configure additional special functionality, e.g. setting the user tag part of the timestamp structure to data taken from some other record or from device support that connects to a timing system.