DABC (Data Acquisition Backbone Core)
2.9.9
This is a short instruction on how the dabc/go4/stream frameworks can be installed and used for data taking and online/offline analysis of TRB3 data.
There are two main parts of the software: DABC, used for data acquisition, and the stream framework (run within Go4/ROOT or DABC itself), used for online/offline analysis.
The easiest way to install all necessary software components is to use the repository https://subversion.gsi.de/dabc/trb3. This method describes how DABC, ROOT, Go4 and the stream analysis can be installed with minimal effort.
Several standard development packages should be installed beforehand; the full list of prerequisites for ROOT can be found in the ROOT documentation.
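For Debian/Ubuntu-based systems, the core build prerequisites can typically be installed as follows (package names taken from the ROOT documentation; adjust them for your distribution and ROOT version):
[shell] sudo apt-get install subversion git dpkg-dev cmake g++ gcc binutils libx11-dev libxpm-dev libxft-dev libxext-dev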
It is recommended to use bash (at least during compilation).
Currently ROOT6 is used, which requires at least gcc 4.8; all later gcc versions should work as well. ROOT is compiled with the system default compiler. If there is a strong reason (e.g. other software requires an older/newer gcc version), the default compiler can be changed with the following commands:
$ update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 20
$ update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 20
$ update-alternatives --install /usr/bin/gfortran gfortran /usr/bin/gfortran-5 20
Most of the compilation time is consumed by ROOT; therefore, if ROOT is already installed on your machine, it can be reused. Just configure the ROOTSYS, PATH and LD_LIBRARY_PATH variables before starting, for instance by calling the thisroot.sh script:
[shell] . your_root_path/bin/thisroot.sh
Be aware that at least ROOT version 5-34-32 should be used, compiled with the '--enable-http' flag.
To check out and compile all components, just do:
[shell] svn co https://subversion.gsi.de/dabc/trb3 trb3
[shell] cd trb3
[shell] make -j4
During compilation a makelog.txt file will be created in each sub-directory. In case of any compilation problem, please send me (S.Linev(at)gsi.de) the error message from that file.
There is a login script 'trb3login', which must be called before the software can be used:
[shell] . your_trb3_path/trb3login
It sets all shell variables required for DAQ and analysis.
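To verify that the environment is set up, one can for instance check the two variables referenced throughout this text:
[shell] echo $DABCSYS $STREAMSYS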
To obtain the newest version from the repository, do:
[shell] cd your_trb3_path
[shell] make -j4 update
To run the DAQ, only the DABC installation is required.
An example configuration file can be found in $DABCSYS/plugins/hadaq/app/EventBuilder.xml. Copy it to any suitable place and modify it for your needs.
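For example (the directory name mydaq is just an illustration):
[shell] mkdir mydaq
[shell] cp $DABCSYS/plugins/hadaq/app/EventBuilder.xml mydaq/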
Main configuration parameters:
The memory pool defines the number and size of buffers used in the application. Normally it can remain as is; it should be increased if the queue sizes of the input/output ports are increased.
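For reference, the corresponding memory pool entry in EventBuilder.xml looks roughly like this (element and parameter names as used in the DABC examples; the default values in your copy may differ):
<MemoryPool name="Pool">
   <BufferSize value="200000"/>
   <NumBuffers value="1000"/>
</MemoryPool>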
The Combiner module is the central functional module of the DAQ application. It can have an arbitrary number of inputs, defined by the NumInputs parameter. Each input corresponds to a separate TRB3 board which should be read out. For each input, only the correct UDP port number has to be specified, like:
<InputPort name="Input0" url="hadaq://host:10101"/>
Here only the port number 10101 is relevant; all other parameters can remain as is. Transport parameters are typically specified in an extra XML line for all ports together:
<InputPort name="Input*" queue="10" urlopt="udpbuf=400000&mtu=65507&flush=0.1&observer=false&maxloop=50" resort="false"/>
The following URL parameters can be used for the UDP transport:
Parameter | Description |
---|---|
udpbuf | size of the socket buffer for receiving UDP packets (default 200000) |
mtu | Maximum Transmission Unit (MTU) for a UDP packet (default 64512) |
flush | flush time in seconds; how fast data is delivered to the combiner (default 1 s) |
observer | when true, generates information for the HADES control system (default false) |
maxloop | how many single UDP packets can be read in one loop (default 100); can be reduced for fairer thread resource sharing |
reduce | reduce factor for the output buffer size; may be configured together with the TDC calibration option, where more data can be produced (default 1) |
tdc | array of TDC IDs like [0x1001,0x1002]; activates TDC calibration |
trb | value of the TRB ID, used to verify data for TDC calibration |
hub | value of the HUB ID(s), needed to correctly unpack data for TDC calibration |
trig | trigger type used for calibration (default all, i.e. 0xFFFFF); can be e.g. 0xD |
resort | when specified, packets are resorted according to trigger number order |
udp_queue | buffer queue size used by the UDP transport (use together with the tdc or resort parameters) |
If a parameter (like resort) should be specified only for a particular port, one can write:
<InputPort name="Input2" url="hadaq://host:10101" urlopt2="resort&udp_queue=20"/>
Or, to activate TDC calibration:
<InputPort name="Input3" url="hadaq://host:10101" urlopt2="tdc=[0xC001,0xC002]&trb=0x8010"/>
Events produced by the combiner module can be stored in an HLD file and/or delivered via the online server to the online analysis.
To write HLD files, one should specify the following parameters in the combiner module:
<NumOutputs value="2"/>
<OutputPort name="Output1" url="hld://dabc.hld?maxsize=30"/>
Typically the second output port (named Output1) is used for HLD file storage, but several output files can be opened in parallel. The maxsize parameter defines the maximum size (in MB) of a file; once it is reached, the file is closed and a new file is started.
In case of any I/O error the file is closed, but the DAQ continues to run. This default behavior can be changed. For instance, the application will be stopped immediately if the onerror="exit" property is specified:
<OutputPort name="Output1" url="hld://dabc.hld?maxsize=30" onerror="exit"/>
Or one can try to re-establish the file transport by providing the following parameters:
<OutputPort name="Output1" url="hld://dabc.hld?maxsize=30" reconnect="3" onerror="exit"/>
Here reconnect="3" means that the transport will try to reconnect after a 3-second pause. If 10 attempts fail, the application will exit, as specified with the onerror parameter. The number of attempts can be changed with the numreconn="5" parameter. While reconnecting, buffers will be skipped.
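Combining these settings, the following illustrative configuration retries up to five times, with a 3-second pause between attempts, and exits if all of them fail:
<OutputPort name="Output1" url="hld://dabc.hld?maxsize=30" reconnect="3" numreconn="5" onerror="exit"/>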
A different approach is to keep the transport running in any case and just retry to open a new file after some time period, like:
<OutputPort name="Output1" url="hld://dabc.hld?maxsize=2000" retry="5" blocking="never" thread="FileThread"/>
With such a configuration, the file transport will try to start writing a new file 5 seconds after an error. The parameter blocking="never" tells DABC that the transport should not block event building: if file writing hangs (or is too slow), buffers can be skipped so that the main building process is not blocked. A dedicated thread is assigned because a write operation on a full disk can hang for many seconds, blocking other transports which by default run in the same thread. Such a configuration is good for producing files for debugging purposes: when possible the file is written, and if not, the main DAQ process is not disturbed.
The first output of the combiner module is used for the online server. It is an MBS stream server, which simply adds an MBS-specific header to each HLD event. The configuration for the online server looks like:
<OutputPort name="Output0" url="mbs://Stream:6002?iter=hadaq_iter&subid=0x1f"/>
For instance, the online server can be used to print out raw data with the hldprint command:
[shell] hldprint localhost:6002
Very often the default port for the online server (6002) is already used by VNC. In that case select any other port and use it in hldprint or go4analysis to connect to the server.
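For example, one could move the stream server to port 6789 (an arbitrary free port, chosen here only for illustration) and connect accordingly:
<OutputPort name="Output0" url="mbs://Stream:6789?iter=hadaq_iter&subid=0x1f"/>
[shell] hldprint localhost:6789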
Once the configuration file is adjusted, one should call:
[shell] dabc_exe EventBuilder.xml
Execution can always be stopped regularly with Ctrl-C; all opened files will be closed normally.
One is able to observe and control the running DAQ application via a web browser. After the DAQ is started, one can open an address like http://localhost:8090 in the browser. The port number 8090 can be changed in the configuration of the HttpServer.
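For reference, the web server is enabled in the configuration file with an entry like the following (as found in the DABC example configurations; adjust the port value as needed):
<HttpServer name="http" port="8090"/>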
In the browser one should be able to see a hierarchy with the "EventBuilder/Combiner" folder, containing the parameters and commands of the main combiner module.
One of the reasons for using the web server is the possibility to interactively start/stop file writing. For this, two commands can be used: StartHldFile for starting a file and StopHldFile for stopping it.
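These commands are normally invoked from the web interface. A command item can presumably also be triggered from scripts via the HTTP interface; the following wget call is only a hypothetical illustration of the idea (the exact URL syntax should be checked against the DABC web-server documentation):
wget "http://localhost:8090/EventBuilder/Combiner/StartHldFile/execute?filename=run.hld&maxsize=30" -O reply.txt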
hldprint is a small utility to print out HLD data from different sources: local HLD files, remote HLD files and a running DABC application. It also supports printout of TDC messages. For instance, printing messages from the TDCs selected by mask 0xC003 can be done with the command:
[shell] hldprint file_0000.hld -tdc 0xc003 -hub 0x9000 -num 1
The result is a printout of the event headers together with the decoded TDC messages.
All options can be obtained by running "hldprint -help".
The analysis code is provided with the stream framework, which is dedicated to the synchronization and processing of different kinds of time-stamped data streams. Classes relevant for TRB3/FPGA-TDC processing are located in the $STREAMSYS/include/hadaq and $STREAMSYS/framework/hadaq directories.
In principle, in most cases it is not required to change these classes: all user-specific configuration is provided in a ROOT script, which can be found in the $STREAMSYS/applications/trb3tdc/ directory. It shows how to process data from several TDCs; please read the comments in the scripts for more details. One can always copy such a script to any other location and modify it for specific needs, as shown below.
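For example (the directory name myanalysis is just an illustration):
[shell] mkdir myanalysis
[shell] cp $STREAMSYS/applications/trb3tdc/first.C myanalysis/
[shell] cd myanalysis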
To run the analysis in batch (offline) mode, start from the directory where the first.C script is situated:
[shell] go4analysis -user file_0000.hld
After the analysis has finished, the filled histograms are saved in the Go4AutoSave.root file and can be viewed in ROOT or in the Go4 browser. Just type:
[shell] go4 Go4AutoSave.root
The go4analysis executable has many parameters (run go4analysis -help). For instance, one can process only a specified number of events or change the output file name:
[shell] go4analysis -user file_0000.hld -number 100000 -asf new_name.root
First of all, the online server should be configured in DABC. At any moment one can start the analysis in batch mode, connecting to the DABC server with the command:
[shell] go4analysis -stream dabc_host_name
With Ctrl-C one can always stop the execution and check the histograms in the auto-save file.
A more convenient way is to run the analysis from the Go4 GUI, to be able to monitor all histograms in live mode.
Via the analysis browser one can display and monitor any histogram. For more details about Go4, see the introduction at http://go4.gsi.de.
The core functionality of the stream framework is written without ROOT usage and can be run with different engines; such a run engine is now provided in DABC. The main difference between the Go4/ROOT and DABC engines is that DABC uses a special histogram format, which makes the code ~10-30% faster. Histograms filled in DABC processes can be displayed with normal ROOT graphics in the web browser or in the Go4 GUI, and can be stored in normal ROOT files as TH1/TH2 objects.
DABC provides the possibility to run the code in parallel in several threads, merging the produced histograms at the end and storing them in a ROOT file. Existing first.C and second.C files can be used as they are; only ROOT-specific parts (if any) should be removed.
To run the analysis, one requires a configuration file like $DABCSYS/plugins/stream/app/stream.xml. Just copy it to the directory where the scripts are and run it with the command:
[shell] dabc_exe stream.xml file="pilas_1517816245*.hld" asf=test.root parallel=4
Here one specifies the input HLD file(s) (a wildcard symbol can be used), the auto-save ROOT file asf where the histograms will be stored, and the number of parallel threads used for the analysis (default 0). While the analysis is running, the histogram content can be monitored via the http channel, using a web browser or the Go4 GUI.
With a single process one achieves a ~10-30% gain compared with ROOT histogram filling. When running in parallel on 15 cores (on the lxhadeb06 machine), performance increased by 800% compared with the single-threaded analysis.
One can also run the analysis in the same process where the event builder is running. The analysis will process as many events as possible and produce histograms, which can be monitored via the web browser.
The main benefit of such an approach is that no extra process is required, and quality monitoring is always available via the http channel. An example configuration file can be found in $DABCSYS/plugins/hadaq/app/EventBuilderStream.xml.
The scripts first.C and (optionally) second.C should be copied into the directory where DABC will be started. When the DAQ is running, one can always open in the web browser the address http://localhost:8090, or directly http://localhost:8090/EventBuilder/Analysis/.
Histograms of interest can be shown directly by opening an address like:
http://localhost:8090/EventBuilder/Analysis/HLD/HLD_EvSize/draw.htm
One can also obtain 1-D histogram statistics by submitting requests like:
wget http://localhost:8090/EventBuilder/Analysis/HLD/HLD_EvSize/cmd.json?command=GetEntries -O entries.txt
wget http://localhost:8090/EventBuilder/Analysis/HLD/HLD_EvSize/cmd.json?command=GetMean -O mean.txt
wget http://localhost:8090/EventBuilder/Analysis/HLD/HLD_EvSize/cmd.json?command=GetRMS -O rms.txt
The DABC application can now also be used to calibrate data provided by FPGA TDCs. For this functionality, code from the stream framework is used; therefore DABC should be compiled together with stream, best as the trb3 package described at the very beginning.
All details about TDC calibration in DABC or in Go4 can be found on the TDC calibration page of the stream framework.
hldprint is a simple program with originally about 150 lines of code (now ~1000 due to many extra options). The source code is located in $DABCSYS/plugins/hadaq/hldprint.cxx. There is also an example in the $DABCSYS/applications/hadaq/ directory, which can be copied and modified for the user's needs.
In simplified form, access to any data source (local file, remote file or online server) looks like:
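The following is a minimal sketch following the hadaq API used by hldprint (class and method names should be checked against $DABCSYS/plugins/hadaq/hldprint.cxx for the current state):

#include "hadaq/api.h"

int main() {
   // the source can be a local file, a remote file or host:port of a running online server
   hadaq::ReadoutHandle ref = hadaq::ReadoutHandle::Connect("file_0000.hld");

   hadaq::RawEvent *evnt = nullptr;

   // NextEvent waits up to the given timeout (in seconds) for the next event
   while ((evnt = ref.NextEvent(1.)) != nullptr)
      evnt->Dump(); // print the event header

   ref.Disconnect();
   return 0;
}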
One can use such an interface in any other standalone application.
Any comments and wishes: S.Linev(at)gsi.de