This is historical information about the device classes implemented in the NXSDataWriter device server.

Use this link to find the current, valid information.

Development status: Released
Information status: Updated

Contact:



Class Description


Families: Acquisition

Key words: NeXDaTaS

Language: Python

License:

Contact:

Class interface


Attributes:

Name         Description
XMLSettings  Scalar: DevString
JSONRecord   Scalar: DevString
FileName     Scalar: DevString
Errors       Spectrum: DevString
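A minimal sketch of how these attributes might be read and written from a PyTango client; the device name below is a placeholder, and the full walk-through is given in the README further down:

    import PyTango

    dpx = PyTango.DeviceProxy("p09/nxsdatawriter/exp.01")   # placeholder device name

    dpx.FileName = "scan_00001.h5"                 # output H5 file name
    dpx.XMLSettings = open("config.xml").read()    # NXDL configuration string
    dpx.JSONRecord = '{"data": {"sample_name": "test"}}'    # global JSON record

    print(dpx.Errors)                              # errors collected by the writer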

Commands:

Name              Input      Output     Description
State             DevVoid    State      This command gets the device state (stored in its device_state data member) and returns it to the caller.
Status            DevVoid    DevString  This command gets the device status (stored in its device_status data member) and returns it to the caller.
OpenFile          DevVoid    DevVoid    Open the H5 file.
OpenEntry         DevVoid    DevVoid    Create a new entry.
Record            DevString  DevVoid    Record the data for one step.
CloseEntry        DevVoid    DevVoid    Close the entry.
OpenEntryAsynch   DevVoid    DevVoid    Create a new entry in asynchronous mode.
RecordAsynch      DevString  DevVoid    Record the data for one step in asynchronous mode.
CloseEntryAsynch  DevVoid    DevVoid    Close the entry in asynchronous mode.
CloseFile         DevVoid    DevVoid    Close the H5 file.
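A sketch of the typical command sequence driven from a PyTango client (the device name and the JSON payload are placeholders; the README below shows the full example):

    import PyTango

    dpx = PyTango.DeviceProxy("p09/nxsdatawriter/exp.01")   # placeholder device name

    dpx.OpenFile()       # open the H5 file given by the FileName attribute
    dpx.OpenEntry()      # create a new entry (stores strategy=INIT data)
    for step in range(3):
        dpx.Record('{"data": {"counter_1": %d}}' % step)    # store strategy=STEP data
    dpx.CloseEntry()     # store strategy=FINAL data and close the entry
    dpx.CloseFile()      # close the H5 file

    print(dpx.state(), dpx.status())   # State and Status via the standard proxy helpers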

Pipes:

Properties:

Name Description
NumberOfThreads  DevLong: maximal number of threads
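NumberOfThreads is a device property, so it is normally configured in the Tango database rather than at run time; a short sketch using the standard PyTango property calls (the device name and the value 8 are only examples):

    import PyTango

    dpx = PyTango.DeviceProxy("p09/nxsdatawriter/exp.01")   # placeholder device name
    dpx.put_property({"NumberOfThreads": ["8"]})            # write the property to the Tango DB
    dpx.Init()                                              # re-initialise so the server re-reads it
    print(dpx.get_property("NumberOfThreads")["NumberOfThreads"])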

README

---------
NexDaTaS
---------

Authors: Jan Kotanski, Eugen Wintersberger, Halil Pasic

------------
Introduction
------------

NexDaTaS is a Tango server which allows one to store NeXus data in H5 files.

The server stores data from other Tango devices and from various databases,
as well as data passed by a user client via JSON strings.

-------------------------
Installation from sources
-------------------------

Install the dependencies:

    pni-libraries, PyTango, numpy

Download the latest NexDaTaS version from

    https://github.com/jkotan/nexdatas/

Extract the sources and run

    $ python setup.py install

----------------
General overview
----------------

All operations carried out on a beamline are orchestrated by the control client (CC),
a piece of software operated by the beamline scientist and/or a user. Although the term client
suggests that it is only a minor component aside from all the hardware control servers,
databases, and whatever other software is running on a beamline, it is responsible for all
the other components and tells them what to do at which point in time. In terms of
an orchestra the CC is the conductor which tells each group of instruments or individual
artist what to do at a certain point in time.

It is important to understand the role of the CC in the entire software system on a beamline,
as it determines who is responsible for certain operations. The CC might be a simple
single script running on the control PC which is configured by the user before the start,
or it might be a whole application of its own like SPEC or ONLINE. Historically it is
the job of the CC to write the data recorded during the experiment (this is true at least
for low-rate data sources). However, with the appearance of complex data formats
like NeXus the IO code becomes more complex.

-------------
Project goals
-------------

The aim of this project is to implement a Tango server that manages data IO
for synchrotron (and maybe neutron) beamlines. The server should satisfy the
following requirements:

 * remove responsibility for data IO from the beamline control client
 * provide a simple configuration mechanism via NXDL
 * read data from the following sources without client interaction:
   - SQL databases (MySQL, Postgres, Oracle, DB2, ...)
   - other TANGO servers
   - JSON records (important for the interaction with the client and SARDANA)
 * the first implementation of the server will be written in Python
 * the communication model of the first implementation will be strictly synchronous
   (future versions most probably will support other communication models too)
 * the control client software has full control over the behavior of the server
   via TANGO commands
 * only low data-rate sources will be handled directly by the server; high data-rate
   sources will write their data independently and additional software will add this data
   to the NeXus file produced by the server once the experiment is finished.

The server should make it easy to implement control clients which write NeXus files,
as the entire NeXus logic is kept in the server. Clients only produce NXDL configurations
or use third-party tools for this job. The first Python implementation of
this server will serve as a proof of concept.

---------------
NXDL extensions
---------------

In order to describe various data sources, the NXDL standard has been extended by the XML tags
listed below. The <strategy /> and <datasource /> tags can be situated inside <field/> or
<attribute/> tags. The other ones are nested inside the <datasource/> tag.

------------------
The <strategy> tag
------------------

The strategy tag defines when and in which way the data is stored.

An example of usage:

<field name="energy" type="NX_FLOAT" units="GeV" >
  <strategy mode="STEP" trigger="trigger1" />
  <datasource type="CLIENT">
    <record name="counter_1"/>
  </datasource>
</field>

The tag can have the following attributes:

 + mode specifies when the data is fetched, i.e.
   - INIT during opening a new entry
   - STEP when the record() command is performed
   - FINAL at the time of closing the entry
   - POSTRUN during the post-processing stage
 + trigger stands for the name of the related trigger in asynchronous STEP mode (optional)
 + grows selects which field dimension grows in STEP mode. The default growing
   dimension is the first one, i.e. grows=1 (optional)
 + compression specifies if data is compressed (optional)
   - true: data is going to be compressed
   - false: data is stored without compression (default)
 + rate compression rate (optional)
   - from 0 to 9
 + shuffle compression shuffle (optional)
   - true: shuffle enabled (default)
   - false: shuffle disabled
 + canfail specifies if an exception should be thrown when reading the data fails (optional)
   - false: on error an exception is raised (default)
   - true: on error a warning is printed and the record is filled with the maximum value
     for the record type

The content of the strategy tag is a label describing data merged into the H5 file by
a post-processing program.

Another example of usage:

<field name="energy" type="NX_FLOAT" units="GeV" >
  <strategy mode="POSTRUN" >
    http://haso.desy.de:/data/energy.dat
  </strategy>
</field>

--------------------
The <datasource> tag
--------------------

The datasource tag specifies the type of the used data source. It can be one of the built-in
types, i.e. CLIENT, TANGO, DB, PYEVAL, or an external one, defined in an external Python package
and registered via JSON data.

The <datasource> tag acquires the following attributes:

 + type related to the type of data source, with possible values:
   - CLIENT for communication with the client via JSON strings
   - TANGO for taking data from Tango servers
   - DB for fetching data from databases
   - PYEVAL for evaluating data from other data sources by a Python script
   - other: the type name of a data source which has been registered via JSON data
 + name datasource name (optional)

CLIENT datasource
-----------------

The CLIENT datasource allows reading data from client JSON strings. It should contain
a <record /> tag. An example of usage:

<datasource type="CLIENT" name="exp_c01">
  <record name="counter_1"/>
</datasource>

<record>

The record tag defines the fetched data by its name. It has an attribute

 + name which for the CLIENT data source type denotes the name of the data in the JSON string

An example of usage:

  <record name="Position"/>

TANGO datasource
----------------

The TANGO datasource allows reading data from other TANGO devices. It should contain <device/>
and <record/> tags. An example of usage:

<datasource type="TANGO">
  <device hostname="haso.desy.de" member="attribute" name="p09/motor/exp.01"
          port="10000" encoding="LIMA_VIDEO_IMAGE"/>
  <record name="Position"/>
</datasource>

<device>

The device tag describes the Tango device which is used to get the data.
It has the following attributes:

 + name corresponding to the name of the Tango device
 + member defining the type of the class member, i.e.
   - attribute: an attribute to read
   - command: a result of a command to take
   - property: a property to read
 + hostname a name of the host with the Tango device server (optional)
 + port a port number related to the Tango device server (optional)
 + encoding a label defining the required decoder for DevEncoded data (optional)
 + group a Tango group name (optional)

If the group attribute is defined, data of the same group is read simultaneously and
only once during one experimental step.

<record>

The record tag defines the fetched data by its name. It has an attribute

 + name which for the TANGO data source type denotes the name of the Tango class member

DB datasource
-------------

The DB datasource allows reading data from accessible databases. It should contain <database />
and <query> tags. An example of usage:

<datasource type="DB">
  <database dbname="tango" dbtype="MYSQL" hostname="haso.desy.de"/>
  <query format="SPECTRUM">
    SELECT pid FROM device limit 10
  </query>
</datasource>

<database>

The database tag specifies the parameters to connect to the required database. It acquires
the attributes

 + dbtype describing the type of the database, i.e.
   - ORACLE: an Oracle database
   - MYSQL: a MySQL database
   - PGSQL: a PostgreSQL database
 + dbname denoting the name of the database (optional)
 + hostname being the name of the host with the database (optional)
 + port corresponding to a port number related to the database (optional)
 + user denoting a user name (optional)
 + passwd being a user password (optional)
 + mycnf defining the location of the my.cnf file with the MySQL database access configuration (optional)
 + node corresponding to a node parameter for the Oracle database (optional)

The content of the database tag defines the Oracle DSN string (optional).

<query>

The query tag defines the database query which fetches the data. It has one attribute

 + format which specifies the dimension of the fetched data, i.e.
   - SCALAR corresponds to 0-dimensional data, e.g. a separate numerical value or string
   - SPECTRUM is related to 1-dimensional data, e.g. a list of numerical values or strings
   - IMAGE describes 2-dimensional data, i.e. a table of values of a specific type,
     e.g. a table of strings

The content of the query tag is the SQL query.
Another example of usage:

<datasource type="DB">
  <database dbname="mydb" dbtype="PGSQL"/>
  <query format="IMAGE">
    SELECT * FROM weather limit 3
  </query>
</datasource>

PYEVAL datasource
-----------------

The PYEVAL datasource allows reading data from other datasources and evaluating it
with a user Python script. An example of usage:

<datasource type="PYEVAL">
  <datasource type="TANGO" name="position">
    <device hostname="haso.desy.de" member="attribute" name="p09/motor/exp.01" port="10000"/>
    <record name="Position"/>
  </datasource>
  <datasource type="CLIENT" name="shift">
    <record name="exp_c01"/>
  </datasource>
  <result name="finalposition">
    ds.finalposition = ds.position + ds.shift
  </result>
</datasource>

<datasource>

The PYEVAL datasource can contain other datasources. They have to have name attributes defined.
Those names, with the additional prefix 'ds.', correspond to the input variable names in the
Python script, i.e. ds.name.

<result>

The result tag contains the Python script which evaluates the input data.
It has the following attribute:

 + name corresponding to the result name. It is related to the Python script variable by ds.name.
   The default value is name="result" (optional).

-----------
Client code
-----------

In order to use the Nexus Data Server one has to write client code. Some simple client codes
are in the nexdatas repository. In this section we add some comments related to the client code.

# To use the Tango server we must import the PyTango module and create a DeviceProxy for the server.

import PyTango

device = "p09/tdw/r228"
dpx = PyTango.DeviceProxy(device)
dpx.set_timeout_millis(10000)

dpx.Init()

# Here device corresponds to the name of our Nexus Data Server. The Init() method resets the state
# of the server.

dpx.FileName = "test.h5"
dpx.OpenFile()

# We set the name of the output HDF5 file and open it.

# Now we are ready to pass the XML settings describing the structure of the output file as well as
# defining the way of storing the data. Examples of the XMLSettings can be found in the XMLExamples
# directory.

xml = open("test.xml", 'r').read()
dpx.XMLSettings = xml

dpx.JSONRecord = '{"data": {"parameterA": 0.2}, ' \
    '"decoders": {"DESY2D": "desydecoders.desy2Ddec.desy2d"}, ' \
    '"datasources": {"MCLIENT": "sources.DataSources.LocalClientSource"}}'

dpx.OpenEntry()

# We read our XML settings from a file and pass them to the server via the XMLSettings
# attribute. Then we open an entry group related to the XML configuration. Optionally, we can also
# set JSONRecord, i.e. an attribute which contains a global JSON string with data to be stored
# during opening the entry and also at other stages of recording. If an external decoder for
# DevEncoded data is needed, one can register it by passing its package and class names in
# JSONRecord, e.g. the "desy2d" class of the "DESY2D" label in the "desydecoders.desy2Ddec" package.
# Similarly, making use of the "datasources" records of the JSON string, one can register additional
# datasources. The OpenEntry method stores data defined in the XML string with strategy=INIT.
# The JSONRecord attribute can be changed while recording our data.

# After finalization of the configuration process we can start recording the main experiment
# data in STEP mode.

dpx.Record('{"data": {"p09/counter/exp.01": 0.1, "p09/counter/exp.02": 1.1}}')

# Every time we call the Record method all nexus fields defined with strategy=STEP are
# extended by one record unit and the data assigned to them is stored. As the method argument
# we pass a local JSON string with the client data. To record the client data one can also use
# the global JSONRecord string. Contrary to the global JSON string, the local one is only
# valid during one record step.

dpx.Record('{"data": {"emittance_x": 0.1}, "triggers": ["trigger1", "trigger2"]}')

# If you mark some fields in your XML configuration string with additional trigger attributes,
# you may ask the server to store your data only in specific record steps. This can be helpful
# if you want to store your data in asynchronous mode. To this end you define in
# the local JSON string a list of triggers which are used in the current record step.

dpx.JSONRecord = '{"data": {"parameterB": 0.3}}'
dpx.CloseEntry()

# After scanning the experiment data in STEP mode we close the entry. To this end we call
# the CloseEntry method, which also stores data defined with strategy=FINAL. Since our HDF5 file
# can contain many entries, we can again open an entry and repeat our record procedure. If we
# define more than one entry in one XML settings string, the defined entries are recorded in
# parallel with the same steps.

# Finally, we can close our output file by

dpx.CloseFile()


Additionally, one can use asynchronous versions of OpenEntry, Record and CloseEntry, i.e.
OpenEntryAsynch, RecordAsynch and CloseEntryAsynch. In this case the data is stored
in a background thread, and during this writing the Tango Data Server has the state RUNNING.

In order to build the XML configurations in an easy way, the authors of the server provide
a specialized GUI tool, Component Designer.
The XML examples attached to the server were created with the XMLFile class defined in
XMLCreator/simpleXML.py.
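For example, a client using the asynchronous commands might poll the server state until the background writing has finished; a minimal sketch, assuming the device name from the example above and an arbitrary polling interval:

    import time
    import PyTango

    dpx = PyTango.DeviceProxy("p09/tdw/r228")

    def wait_until_written(proxy, poll=0.1):
        # the server reports RUNNING while the background thread is still writing
        while proxy.state() == PyTango.DevState.RUNNING:
            time.sleep(poll)

    dpx.OpenEntryAsynch()
    wait_until_written(dpx)

    dpx.RecordAsynch('{"data": {"p09/counter/exp.01": 0.1}}')
    wait_until_written(dpx)

    dpx.CloseEntryAsynch()
    wait_until_written(dpx)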

22 Feb 2018, DS Admin
Updated:
The device class has been updated.
You are looking at this version now.



22 Feb 2018, DS Admin
Updated:
The device class has been updated.
You can see the previous version here.



20 Apr 2017, Piotr Goryl
Updated:
The device class has been updated.
You can see the previous version here.



23 Feb 2017, Piotr Goryl
Created:
The device class has been added to catalogue.
Added by: pgoryl2 on: 22 Feb 2018, 2:45 p.m.