The ARGO DAQ System


A. Aloisio*, A. Anastasio°, F. Barone**, S. Cavaliere**, V. Masone°, S. Mastroianni**, P. Parascandolo°

Abstract

The ARGO-YBJ experiment has been designed to study cosmic rays, mainly cosmic gamma radiation, at an energy threshold of ~100 GeV, by means of the detection of small-size air showers. This goal will be achieved by operating a full-coverage array detector in the Yangbajing Laboratory (Tibet, China) at 4300 m a.s.l. The ARGO DAQ is based on a high-speed, event-driven architecture previously developed for high-energy physics experiments on colliders.

* Università del Sannio, Benevento  ** Università Federico II, Napoli  ° INFN Sezione di Napoli

1. INTRODUCTION

The ARGO-YBJ experiment [1-2] will investigate a wide range of fundamental issues in Cosmic Ray and Astroparticle Physics, including gamma-ray astronomy and gamma-ray burst physics in the range 100 GeV - 500 TeV. The detector (fig. 1) consists of a single layer of RPCs (Resistive Plate Chambers) covering an area of ~6500 m2 and providing a detailed space-time picture of the shower front. The building block of the detector is a 12-chamber module called a cluster (the shaded rectangles in figure 1).

Fig. 1 ARGO detector layout

As shown, the detector consists of 117 clusters in the inner part and 28 clusters in the guard ring, for a total of 1740 RPCs. The proposed layout covers an active area of about 95%. The ARGO DAQ architecture is structured in five layers: Front End Electronics (FEE), Cluster Logic (CL), First-Level Readout (RO-I), Second-Level Readout (RO-II), Farm and Event Builder (EB). The Front End Electronics is placed directly onto the detector and is built around a custom chip designed to amplify and shape the RPC strip signals. The FEE drives the Cluster Logic located seven meters away. The CL boards manage multi-channel, multi-hit TDC custom chips [3-4] to extract timing information of the hits, acquire strip coordinates, generate high- and low-level multiplicities and build data frames. Each CL assembly will serve four groups of three chambers and consists of four Signal Processing Cards plus one Control and Communication Card. Data from the clusters is transmitted to the electronics of the first Readout level (RO-I), which is 75 meters away, at the center of the detector. The RO-I electronics is in charge of collecting data from the clusters, checking data integrity, and formatting data according to their event number. This is made possible by the event-driven data collection scheme, implemented by custom controllers for real-time data processing, and by a purpose-designed buffer memory module, the AMB. At the upper Readout level (RO-II) data is formatted according to the event number and sent to the Farm to perform event building and mass storage. The trigger supervisor, located at the RO-II level, will enable the trigger box to produce a new trigger and broadcast it to all the clusters.

Fig. 2 ARGO DAQ Architecture.

2. SYSTEM ARCHITECTURE

The ARGO DAQ system is based upon a trigger-driven architecture and incorporates a large part of the event-driven DAQ system already developed for the KLOE detector [5], extending its modularity in order to accommodate a higher number of channels. For each trigger issued, an incrementing trigger number word is produced by each CL controller and inserted into the frame to be transmitted to the RO-I electronics. To check that all front-end clusters stay aligned, guaranteeing consistency of the collected data, the ARGO collaboration has chosen to implement a SYNC operation: a special trigger (SYNC) is sent to all front-end clusters over a separate cable. In response, each cluster will issue a special frame indicating the trigger number the cluster is pointing to. The first level of Readout (RO-I) performs data collection from the entire set of clusters. This is done by using custom expansion memory boards, the AMBs (Argo Memory Boards), which provide a new level in the buffering chain of the overall system. Each AMB collects data from 4 clusters and is interfaced to the ROCK Readout controller [6] via a high-speed proprietary bus, the AUXBUS [7-8]. The AMB boards will be housed in VME crates equipped with the proprietary AUXBUS backplane developed for the KLOE DAQ. A controller will initialize the whole system, run diagnostic tests and sample events from time to time to monitor the performance of the system itself. Up to 8 crates can be equipped in this way and chained in a single system: the ROCK board has in fact the ability to transmit data over a high-speed parallel bus (CBUS) to a second buffering level, which, in turn, sends data to mass storage by means of a high-speed standard interface, without any speed penalty. A block diagram is shown in fig. 3.
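The SYNC alignment check described above can be sketched as follows. This is an illustrative model only: the function and dictionary layout are hypothetical, not taken from the ARGO firmware; the note only specifies that each cluster answers a SYNC trigger with the trigger number it is pointing to.

```python
# Hypothetical sketch of the SYNC consistency check: every cluster reports
# the trigger number it is currently pointing to, and any disagreement
# with the reference number flags a misaligned front-end cluster.

def sync_check(reference_trigger, cluster_replies):
    """Return the list of cluster ids whose reported trigger number
    disagrees with the reference trigger number."""
    return [cluster_id
            for cluster_id, trig in cluster_replies.items()
            if trig != reference_trigger]

# Example: cluster 7 has fallen one trigger behind the rest.
replies = {0: 1042, 1: 1042, 7: 1041}
misaligned = sync_check(1042, replies)
```

A non-empty result would correspond to the loss of alignment that the SYNC operation is designed to detect.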


Fig.3 First & Second level Data Readout.

Data from a cluster of crates is collected by the ROCKM (ROCK Manager module) [9] through the CBUS. The ROCKM controller finally transmits data to the farm event builder and then to a mass storage medium, via high-speed standard data concentrators. Both the ROCK and ROCKM modules, as well as the AMB board, have hardwired control, thus achieving high bandwidth in data transfer without software penalties and CPU overheads. For the ARGO experiment five crates will be used: four of them, equipped with more than 10 AMBs each, will directly collect data coming from the CLUSTER logic, while the fifth crate will contain the Trigger Supervisor and will act both as an upper-level concentrator and as a buffer towards the data storage medium. A single CPU will control the entire chain, via a dedicated parallel interface and a VME controller on each crate (CES VIC 8251). In each crate, besides the AMB modules, the rightmost position is reserved for the ROCK while the leftmost position is reserved for the VME controller. The ROCK will read AMB data via the AUXBUS (20 bits at 10 MHz) under the control of specific lines reserved for signal handshaking. The single-crate organization is depicted in fig. 4.

Fig. 4 Single crate organization

3. THE ARGO MEMORY BOARD (AMB)

When a trigger is issued, the CL electronics will transmit data at a peak data rate of 20 Mbyte/s. Each AMB module acts as a frequency buffer between the CL electronics and the central DAQ and therefore incorporates a FIFO memory bank. Each AMB (fig. 5) will serve four clusters KL[3:0]. On each input channel, data is first pre-processed to check for consistency. If the data are found to be consistent, a bit indicating that no failure has been found is set; otherwise it is cleared. The word length at each FIFO input is therefore fixed at 17 bits, while physics constraints require a minimum depth of 8K locations. At the output of the FIFOs, two post-processors add two bits to the FIFO outputs, allowing the DAQ to distinguish between the four input channels. Accordingly, each AMB module will put a 19-bit word onto the AUXBUS.
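The 17-bit and 19-bit word formats above can be sketched with plain bit operations. The exact bit positions are an assumption (the note fixes only the widths: 16 data bits plus one consistency flag at the FIFO input, plus a 2-bit channel tag at the output):

```python
def pack_fifo_word(data16, ok):
    # 17-bit FIFO input word: 16 data bits plus one consistency flag.
    # Placing the flag in bit 16 is an assumed layout, not documented
    # in the note.
    assert 0 <= data16 < (1 << 16)
    return (int(ok) << 16) | data16

def add_channel_tag(word17, channel):
    # Post-processor step: prepend the 2-bit channel number identifying
    # one of the four input channels, yielding the 19-bit AUXBUS word.
    assert 0 <= channel < 4 and 0 <= word17 < (1 << 17)
    return (channel << 17) | word17

# A strip word from channel 2 that passed the consistency check:
w = add_channel_tag(pack_fifo_word(0xBEEF, ok=True), channel=2)
```

The DAQ can then recover the channel number from the top two bits of each word, which is what allows four clusters to share one FIFO output path.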

Fig. 5 AMB block diagram

During a SYNC check operation, the AMB will store the four event numbers issued by the CL electronics and perform a compare check. Via VME, the CPU will verify the functionality of the board by injecting a test pattern at the input of the FIFO and performing a consistency check at the output. The CPU also has the task of initializing the AMB board via VME, setting its control bits for proper operation. Via VME, one or more clusters can be suppressed.

4. THE ROCK READOUT CONTROLLER

The ROCK performs data acquisition from the memory boards in an event-driven fashion, acting as the master of the AUXBUS. The ROCK is a double-height VME board plugging into a standard VME bus and allows control and slow monitoring of the DAQ performance. The block diagram of the ROCK is shown in Fig. 6.

Fig. 6 ROCK block diagram

The board has two ports: on one side it is hooked to the AUXBUS in order to collect data from the slave boards with a VME-like handshake mechanism; on the other side it has a port to the CBUS, a custom "cable bus", in order to feed data to the upper buffering level. The two operations are independent and run asynchronously. The board also handles a VME interface and the diagnostics of the whole system. Normal operation consists of the transfer of data from the AUXBUS to the CBUS via a data frequency buffer, the DATAFIFO. These transactions are done in blocks of data called frames. Each frame has a specific trigger number appended to the frame's header and consists of all the AMB data associated with that trigger number. The start of any AUXBUS/CBUS data transaction occurs when a trigger is received. This produces a trigger word which is stored in the ROCK trigger FIFO (TFIFO) queue. Then a readout operation of the data from the AMB modules begins and continues until the complete frame has been transferred into the DATAFIFO. Likewise, operation on the CBUS starts as soon as the DATAFIFO contains data to be transmitted and the memory of the ROCKM controller is not full. A new frame transaction occurs if another trigger to be served is waiting in the TFIFO queue. The AUXBUS operation is dependent on the CBUS interface only when the DATAFIFO is full (in this case the AUXBUS will temporarily suspend operation and wait for the CBUS to read some data from the DATAFIFO before resuming). Likewise, the CBUS is dependent on the AUXBUS only if the DATAFIFO is empty (in this case the CBUS will wait for the AUXBUS to write further data into the DATAFIFO). When the ROCK is in diagnostic mode, it is isolated from the AUXBUS, allowing the VME to write and read data from the DATAFIFO and TFIFO in order to test the functionality of the board. The ROCK has an on-board programmable timeout counter, a facility to sample at random a whole block of data collected for a single trigger without interfering with normal operations, the capability to provide a mirror image of the main DATAFIFO in order to spy continuously on operations from the VME, and a timer to measure the duration of the transactions, to be used for statistical checks. In any case, the ROCK requires the intervention of local control only for the start-up procedure, for the set-up of the operational mode and for debugging.
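The DATAFIFO back-pressure rules just described (AUXBUS suspends when the FIFO is full, CBUS waits when it is empty) can be modelled with a bounded queue. This is an illustrative sketch, not the ROCK's gate-level behaviour; class and method names are invented:

```python
from collections import deque

class DataFifo:
    """Minimal model of the DATAFIFO decoupling the AUXBUS (writer)
    from the CBUS (reader), with the two flow-control conditions."""

    def __init__(self, depth):
        self.depth = depth
        self.q = deque()

    def auxbus_write(self, word):
        # AUXBUS side suspends (returns False) while the FIFO is full.
        if len(self.q) >= self.depth:
            return False
        self.q.append(word)
        return True

    def cbus_read(self):
        # CBUS side waits (returns None) while the FIFO is empty.
        return self.q.popleft() if self.q else None

f = DataFifo(depth=2)
f.auxbus_write(1)
f.auxbus_write(2)
stalled = not f.auxbus_write(3)   # full: AUXBUS suspends
first = f.cbus_read()             # CBUS drains one word
resumed = f.auxbus_write(3)       # AUXBUS resumes
```

Because each side only blocks on its own boundary condition, the two buses otherwise run asynchronously, which is the point of the design.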

5. THE PROTOCOL ON THE AUXBUS

The main purpose of the AUXBUS is to acquire and manage the data from one to sixteen AMBs. Each module on the AUXBUS is assigned a four-bit hardwired geographical address XA[3:0]. In response to a trigger input, or to a trigger waiting in the TFIFO queue, the ROCK begins an AUXBUS transaction requesting a frame from all the AMB boards. First, an acknowledgement of data for that particular frame is requested from each AMB (trigger cycle). Then a sparse data scan is carried out by the ROCK in order to identify which boards contain data for that specific trigger number.

a) Trigger cycle

The signals over the AUXBUS for this cycle are: XT[7:0], the Trigger Bus, which carries the trigger number; XTRGV (active low, driven by the ROCK), which indicates that the trigger bus data is valid; XSDS (active low, generated by each of the AMBs), which signifies that the AMB has data for the current trigger number; and XBK, a wired-or signal, active high, generated by each of the AMBs towards the ROCK. Operation begins with the ROCK placing a valid trigger number word onto the XT[7:0] lines and then, with the proper set-up time, driving the XTRGV line low.

The AMB modules will decode the XT[7:0] lines and, if there is a non-empty frame to transmit for that particular trigger number, they will drive their Data Ready line XSDS active. After a proper set-up time, all the AMBs (whether they have data or not) must drive their Board Acknowledge line XBK active. The XBK line is an open-collector line shared between all AMB modules, so that it is pulled high only when all modules have answered. Upon seeing the XBK line high, the ROCK will latch the contents of the Data Ready lines and release the XTRGV line, indicating to the AMB modules that the trigger cycle has finished.

Fig.6 Trigger Broadcast Cycle

The low-to-high transition of XTRGV signals to the AMB modules that the trigger cycle has finished and that they can release their XSDS lines and bring their individual XBK lines inactive.
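The trigger cycle above can be summarized in a few lines of Python. Signal names mirror the text, but the data structures and function name are hypothetical:

```python
# Illustrative model of the AUXBUS trigger cycle: the ROCK broadcasts a
# trigger number, every AMB acknowledges on the shared XBK line, and the
# ROCK latches which boards asserted XSDS (Data Ready).

def trigger_cycle(trigger_number, amb_frames):
    """amb_frames: one dict per AMB, mapping trigger number -> has-data.
    Returns the set of AMB addresses that asserted XSDS."""
    xsds = set()
    answered = 0
    for xa, frames in enumerate(amb_frames):
        if frames.get(trigger_number):
            xsds.add(xa)        # this AMB drives its XSDS line active
        answered += 1           # every AMB acknowledges on XBK
    # The shared XBK line goes high only when all boards have answered;
    # only then does the ROCK latch the Data Ready pattern.
    assert answered == len(amb_frames)
    return xsds

ready = trigger_cycle(5, [{5: True}, {5: False}, {5: True}])
```

The latched XSDS pattern is exactly the input to the sparse data scan of the readout cycle that follows.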

b) Readout cycle

The Readout cycle begins immediately after the trigger cycle, with the ROCK driving an address XA[3:0] onto the backplane and then, with a proper set-up time, the XAS (address strobe) line low. The address is chosen among the AMBs identified by the ROCK via the sparse data scan. On receiving the XAS line low, the AMB modules will decode the XA[3:0] lines to see which board has been addressed, and only that AMB module will immediately turn on its output three-state buffers. The connection between the ROCK and the AMB is now established and data transfer can take place over the backplane. A full handshaking protocol XDS⇔XDK then occurs between the ROCK and the AMB. When the ROCK brings XDS low, it indicates to the selected AMB module that it is ready to accept data. The selected AMB module recognizes that the ROCK is requesting data when it sees the XDS line low. In reply, the AMB module places its data onto the three-state buffers and sets the XDK line low, indicating to the ROCK that the data on the backplane are valid. The ROCK will latch the data into an internal register and then raise the XDS line high, indicating to the AMB module that the data has been latched. The AMB module, upon recognizing that the XDS line has been raised, will raise the XDK line so that the cycle can be repeated at the maximum possible speed. Data transfer continues until the ROCK receives an End of Block (XEOB). Upon receipt of the XEOB line low, the ROCK will raise its data strobe line XDS high, as at the end of a normal cycle, but then it will also raise the XAS line high, indicating to that particular AMB that its data are no longer requested on the backplane and that it must shut off its three-state buffers. The ROCK module will then proceed to read data from the next AMB module in exactly the same manner. When the ROCK has finished the readout of the entire frame, it will drive XAS high and start processing the next frame. It is possible that during the trigger cycle the ROCK receives no active Data Ready (XSDS) from any of the AMB modules, indicating that there was no data for that particular trigger number (a completely empty frame). In that case, the trigger cycle is simply followed by another trigger cycle.
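The sparse readout loop can be sketched as follows. This is a purely functional model of the sequence of bus operations, under the assumption that each AMB frame is just a list of words; the handshake and end-of-block signalling are represented by the loop structure rather than by real signal lines:

```python
# Sketch of the readout cycle: only the boards that asserted XSDS are
# addressed in turn, and each board's frame is transferred word by word.

def read_amb(frame_words):
    """Transfer one AMB frame. Each loop iteration stands for one
    XDS/XDK handshake cycle; the end of the list plays the role of
    the XEOB (End of Block) flag."""
    collected = []
    for word in frame_words:
        collected.append(word)   # ROCK latches the word, raises XDS
    return collected

def readout_cycle(ready_ambs, frames):
    """Sparse data scan: address only the boards flagged during the
    trigger cycle, and concatenate their frames into one event."""
    event = []
    for xa in sorted(ready_ambs):    # ROCK drives XA[3:0], then XAS low
        event.extend(read_amb(frames[xa]))
    return event

event = readout_cycle({0, 2}, {0: [10, 11], 2: [20]})
```

Skipping the boards with no data for the current trigger is what makes the readout time scale with the occupancy rather than with the number of installed AMBs.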

Fig. 7 Data Transfer Cycle

All the AMBs share a common XBUSY line over the AUXBUS to flag both data transfer and the status of their FIFOs. Further, each AMB can set an error condition using the XBERR flag. Both XBUSY and XBERR pass through the ROCK and are sent on to the ROCKM and to the Trigger Supervisor. The XBUSY flag is used to suspend trigger production within the trigger box and does not require intervention by the CPU. For the XBERR flag the behavior is quite different: should one of the AMB input processors not recognize the start or the stop word of an incoming block, the XBERR error flag is raised to stop (via ROCK and ROCKM) the data acquisition and to request the intervention of the CPU. Upon receipt of an error condition, the CPU will start diagnostic tests in order to identify the source of the failure (AMB board and channel). Should it be unable to recover, that channel will be suppressed, i.e. its data will no longer be transmitted to the DAQ.


6. SIMULATIONS

The behaviour of a single crate equipped with 10 AMB modules served by one ROCK module is depicted in figure 8. The number of words per channel of the AMB (abscissa) is plotted against the maximum allowable input frequency. Operations on the backplane run at 12 MHz.

[Plot: maximum input frequency, 0-4500 Hz, versus words per channel, 0-500]

Fig. 8 Data input frequency (AUXBUS equipped with 10 AMBs) versus words per channel
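A first-order estimate shows the qualitative trend of fig. 8. This is an illustrative model only: it ignores the per-board protocol overhead of the trigger and readout cycles, so the absolute values differ from the simulated curve, but it reproduces the inverse dependence on the event size:

```python
# Back-of-the-envelope model: the sustainable trigger rate is bounded by
# the backplane word rate divided by the number of words per event.

BACKPLANE_HZ = 12e6        # AUXBUS rate quoted in the simulation section
N_AMB = 10                 # AMB modules in the simulated crate
CHANNELS_PER_AMB = 4       # clusters served by each AMB

def max_trigger_rate(words_per_channel):
    words_per_event = N_AMB * CHANNELS_PER_AMB * words_per_channel
    return BACKPLANE_HZ / words_per_event

# Doubling the event size halves the sustainable trigger rate.
rate_500 = max_trigger_rate(500)
rate_250 = max_trigger_rate(250)
```

The hyperbola this produces matches the shape of the simulated curve; the simulated values sit below it because each frame also costs trigger-cycle and addressing overhead.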

7. THE TRIGGER SUPERVISOR

The Trigger Supervisor of the ARGO experiment does not produce the trigger itself, which is generated in the trigger box. It does, however, dispatch trigger pulses to all the controllers, handle BUSY conditions, set and monitor the dead-time, manage the Sync cycles, detect DAQ faults, generate "soft" triggers for debugging, and mask trigger pulses for a specific DAQ unit (partitioning). A block diagram of the trigger distribution network is depicted in fig. 9.


Fig. 9 Trigger Distribution Network

The Trigger Supervisor interacts with the ROCK/ROCKM via 5 signals: Trig, SyncR, SyncF, Busy and Halt, all of which are NIM-level signals on coax ribbon for best signal integrity. The Trig signal is produced elsewhere and is an input to the Trigger Supervisor; within the trigger box it is possible to choose between different types of triggers. The SyncR pulse is an input produced within the trigger box each time a Sync cycle is issued. The SyncR pulse must not be followed by any trigger pulse until the SYNC cycle is finished. SyncF(ailure) is an output of the Trigger Supervisor and is active low. The SyncF flag stays active low during the Sync cycle on the AUXBUS. In case of errors (i.e. the number of the current trigger pointed to by the ROCKs does not match the one answered by the front-end clusters), SyncF stays low until the CPU performs a reset. The Busy flag is also an output, produced whenever the XBUSY input is driven low by any of the slaves over the AUXBUS, or the ROCK's trigger FIFO goes full, or a Halt is generated. The Busy flag notifies the Trigger Supervisor that trigger generation should be suspended. The Halt signal is generated if the XBERR line is driven low by one of the AMBs, when a timeout condition occurs on the AUXBUS, or when a SyncF(ailure) has been detected.
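The Busy and Halt conditions above reduce to two boolean expressions. The flag names mirror the text; their encoding as Python booleans (active condition = True) is ours:

```python
# Boolean summary of the fault-handling logic: Halt is the OR of the
# three error sources, and Busy is the OR of the three suspend sources,
# one of which is Halt itself.

def halt(xberr_active, auxbus_timeout, sync_failure):
    return xberr_active or auxbus_timeout or sync_failure

def busy(xbusy_active, tfifo_full, halted):
    return xbusy_active or tfifo_full or halted

# A single AMB raising XBERR generates Halt and, through Halt,
# also suspends trigger generation via Busy.
h = halt(xberr_active=True, auxbus_timeout=False, sync_failure=False)
b = busy(xbusy_active=False, tfifo_full=False, halted=h)
```

The asymmetry in the text is visible here: Busy clears by itself when its sources go away, while a Halt caused by XBERR or SyncF additionally requires CPU intervention before triggers can resume.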


Conclusions

All of the modules referred to in this note have already been implemented and fully tested. Several other modules, including a GPS system, have also been implemented and will be described in forthcoming notes.

References

1. "The ARGO full coverage detector", Proc. of the 25th ICRC, Durban (1997), vol. 5, p. 265.
2. "Physics with the ARGO detector", Proc. of the 25th ICRC, Durban (1997), vol. 5, p. 269.
3. M. Passaseo, E. Petrolo, S. Veneziano, "A TDC integrated circuit for drift chamber readout", NIM A 367 (1995) 418-421.
4. M. Passaseo, E. Petrolo, S. Veneziano, "Design of a multichannel TDC integrated circuit for drift chamber readout", Proc. of the International Conference on Electronics for Particle Physics, LeCroy Research Systems, Chestnut Ridge, May 1995.
5. A. Aloisio, S. Cavaliere, F. Cevenini, D. Fiore, M. Della Volpe, "Level 1 DAQ for the KLOE Experiment", Proc. of CHEP 95: Conference on Computing in High Energy Physics, pp. 371-375, Rio de Janeiro, 1995.
6. A. Aloisio, S. Cavaliere, F. Cevenini, D. Fiore, M. Della Volpe, L. Merola, P. Parascandolo, "ROCK: the Readout Controller for the KLOE Experiment", IEEE Transactions on Nuclear Science, vol. 43, no. 1, February 1996, pp. 167-169.
7. A. Aloisio, S. Cavaliere, F. Cevenini, L. Merola, D. Fiore, "AUXbus: a bus-based DAQ system", Proc. of DAQ96, 2nd International Data Acquisition Workshop on Networked Data Acquisition Systems, pp. 19-25, Osaka, Japan, 1996.
8. A. Aloisio, S. Cavaliere, F. Cevenini, D. Fiore, "AUXbus: an event-driven bus-based DAQ system", Proc. of CHEP 97: Conference on Computing in High Energy Physics, B138, pp. 127-130, Berlin, 1997.
9. A. Aloisio, S. Cavaliere, F. Cevenini, D. J. Fiore, L. Merola, "ROCK Manager: the DAQ Chain Controller of the KLOE Experiment", Proc. of CHEP 98: Conference on Computing in High Energy Physics, Washington, 1998.
