Muon Shift Manual and Reference Guide

  • If you need help: Phone numbers and experts on call can be found on the Muon Whiteboard.
  • Muon Shift Summary: A template can be found here. Please submit your shift summary to eLog using Message Type "Shift Summary --> Muon". An example of the type of information to put in the shift summary is given here.
  • Muon DQ Run Summary: A template can be found here. Please submit a Muon DQ summary at the end of each physics run (>30 minutes) with stable beams. In case a run is ongoing when your shift is over, please transfer the information to the next shifter (e.g. an edited text to be submitted at the end of the run). An example of a run summary can be found here. In the eLog, select Message Type "Data Quality" and DQ_Type "Muon Run Summary". For systems affected, please click DataQuality, CSC, MDT, RPC, and TGC.

Tip: When using the manual, you can open links to pictures and screenshots in a separate window by right-clicking on the link and selecting "Open in New Window".

Table of Contents

Introduction

When you are on shift, we assume you are the person booked in OTP for the relevant shift slot. Should you ever swap a shift with a colleague, you must inform the muon run coordinators to make sure you have the correct privileges for operations while on shift. (Privileges are assigned automatically according to the information in OTP.)

Access rights

To do muon shifts you need to have the access right ATL_CR to be able to enter the Atlas control room. If you do not have it yet, request it via edh.cern.ch --> Access request. Please note that it will take a day or two for your access request to be processed.
Muon shift tasks do not require shifters to go underground; a dosimeter is therefore not needed.

Computer accounts and Roles

Accounts:
As muon shifter you need to have a valid CERN NICE account, which implies having completed the Computer Security course on the web https://sir.cern.ch. Your account for Atlas P1, which is separate from the NICE one, will be created automatically a few days before your shift. Please note that P1 authentication uses the CERN single sign-on scheme, which is why your NICE account must nevertheless be valid.
There is no need for you as muon shifter to request any account directly from the Atlas sysadmins, please refrain from doing so!

Roles:
What shifters and experts can do on the P1 network and which applications and machines you can access is governed by the so-called P1 Roles. A role is a set of defined privileges enabling you to do your job as shifter while protecting other parts of the system for which you are not trained.
As muon shifter, your role is MUON:Shifter. Please note that roles are active (enabled) only during the time you are on shift according to OTP as the main muon shifter, not when on shift as a trainee or outside your shift times.
Role assignment is handled automatically according to OTP; shifters shall not request any role themselves, neither from the Atlas sysadmins nor from the muon run coordinators.

Shift Task Overview

When Arriving on Shift

Please come to the control room 10-15 minutes before the actual start of your shift. Check with the previous shifter

  • What are the status and the current conditions of the run?
  • What is the Atlas run plan for the day (projected on the wall)?
  • Are there any special problems?
  • What calibration runs have been done already by the previous shifter?

Then make sure you

  • Close any unneeded windows and applications on the muon desk.
  • Make sure all required tools and overview panels are open and arranged such that you can monitor all muon sub-detectors.
  • Log in to eLog and read the previous shifter's shift summary and other recent muon entries.
  • Log in to the DCS FSM and DCS Alarm Screen panels.
  • Open the Muon Whiteboard page and check it for special instructions and known problems. If the whiteboard is already open make sure you refresh the page to pick up any changes!

Throughout the Shift

  • Monitor the DCS FSM; it should stay in state READY (or STANDBY) with status OK.
  • Monitor the DCS Alarm Screen; it should be empty.

The DCS panels on the right screen should always stay in front in order to catch any problems immediately.

During a Run

In addition to keeping DCS green you need to monitor the following applications:

ATLAS TDAQ IGUI (monitoring partition) (Desktop->TDAQ->DAQPanel->MonitorPartition): Shows information about the current run (run number, trigger rate, etc.) and log messages. Change the number of visible rows from 100 to 10000 and use the message filter given on the Muon Whiteboard.

DQMD (Desktop->TDAQ->DAQPanel->DQM Display): Make sure all Muon detectors are flagged as "Green" (or "Gray" while the run has low statistics).

OHP (Desktop->TDAQ->DAQPanel->OHP): Take a look at all plots from time to time. After a warm start (physics triggers activated) make sure to check Overview->EtaPhi for a first impression.

Firefox with tabs for eLog, Muon Whiteboard, Shifter Assistant, Muon Dashboard and Resources Web View: Usually you should keep the tab with the Shifter Assistant open.

During Calibration Periods

In any calibration period announced by the shift leader, please do (priority TGC>CSC>MDT)

  • the 3 TGC calibration runs, as described here
  • the CSC pedestal run, as described here
  • the MDT test run

  • In case the calibration period is short (less than 1 h), do the TGC calibration runs individually.

At the End of your Shift

At the end of your shift please

  • Fill in the muon shift summary, using the provided template which can be found here, and post it to eLog using the Message Type 'Shift Summary --> Muons'.
  • Inform your replacement shifter about the runs taken, the current status, and any problems encountered during your shift.
  • Do not leave before your replacement arrives.
  • If your replacement is late by more than 20 minutes, inform the shift leader and call the muon run coordinator to arrange for somebody else.

Muon Desk and ACR Environment

Utilities

Taking Screenshots

You can use the tool KSnapshot, which can be opened from the General menu (or by pressing the corresponding key on the keyboard), to take a snapshot of any graphics panel in order to attach it e.g. to eLog.
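
If the menu entry is not at hand, a minimal sketch of doing the same from a terminal on the muon desk (this assumes the ksnapshot binary is in the PATH; the file name is just an illustration):

  ksnapshot &                       # opens the KSnapshot capture window
  # save the capture to a file you can then attach to your eLog entry, e.g.
  #   /tmp/dcs_alarm_screen.png     (hypothetical file name)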

Trouble Shooting

Killing individual unresponsive panels/UIs on the muon desk

You can kill a panel or graphical UI with the command xkill: run it from an open terminal window, then click on the window to be killed.
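
A minimal sketch of the procedure in a terminal (the window id shown is purely illustrative):

  xkill                    # the cursor turns into a cross; click the unresponsive window
  # alternatively, identify the window first and kill it by id:
  xwininfo                 # click the window and note the "Window id" it prints
  xkill -id 0x3a00007      # hypothetical id taken from the xwininfo output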

Muon desk is frozen

In case the muon desk becomes frozen, attempt to kill the X server with Ctrl+Alt+Backspace. If this does not help, ask the run control shifter or shift leader to reboot the machine via IPMI; they have instructions and the privileges to do so. Report it in eLog, ticking CSC+MDT+RPC+TGC+SysAdmins as affected systems. Please avoid calling the sysadmins on call outside normal working hours if possible; if needed, use one of the other muon desks in the meantime.

Muon menu is missing on the muon desk

This usually happens when somebody has started the X session without the muon profile. Log out of the session (from the button in the bottom left of the desktop); when the login screen appears, log in again as user crmuon, with no password, and select Muon:Shifter as profile when asked (if the middle screen does not come up correctly, repeat the procedure).

Muon desk shows the login screen

This can happen after a user logged out or after a reboot. Log in as crmuon (not your personal account!), with no password, and select Muon:Shifter as profile when asked (if the middle screen does not come up correctly, repeat the procedure).

Muon desk is locked and asks for login

In this case you need to use your NICE login and password to free the lock.

Muon desk shows time in UTC

Move the mouse pointer over the clock and turn the mouse wheel to get the display back to local time.

Contacting the Calibration Centers

In case you need to pass information to the calibration centers, related e.g. to problems with the muon calibration stream, write an eLog entry using MDT+RPC+DAQ as affected systems and send an email to

  • atlas-muon-mdtcalib-experts AT cern.ch

DCS

There are 2 main tools used by the shifter: the DCS FSM (Finite State Machine) User Interface and the DCS Alarm Screen.

What to do in case of a DCS Alarm: General Procedures (Muons)

In case of a DCS alarm, stick to the following rules

  • Check for 1 or 2 minutes if the alarm disappears by itself; if yes, still mention it in your shift summary. If not,
  • Check the severity of the alarm (WARNING, ERROR, FATAL) and check if there are specific instructions for this particular alarm by right-clicking on the alarm entry in the alarm screen and selecting Alarm Help. If an alarm help entry exists, follow the instructions given there. If not:
    • if it is a float type value or parameter, e.g. temperature, fan speed or similar, check the recent history by right-clicking on the alarm item in the alarm screen and selecting Trend. If the value just fluctuates around the limit, it's enough to make a note in eLog/your shift summary.
    • if it is a WARNING during the day, you can call the expert on call if in doubt. If it's during the night, make an eLog entry.
    • if it is an ERROR or FATAL, call the expert on call, then document the alarm in eLog.
  • Acknowledge any alarm in WENT state only when instructed by the expert, by clicking on the red exclamation mark in the alarm screen
  • Mask an alarm to temporarily remove it from the alarm screen only when instructed to do so by the expert.

Low Voltage Operations

Dealing with individual LV channel problems

The following procedure holds for individual LV channel problems for the different muon sub-detectors (for an LV failure of a large part of a detector, the cause is normally either a DCS or DSS interlock action or a problem with 48V; additional steps are needed in this case, see below).

CSC
  • Try to switch a failed LV channel back on once. Document in eLog. If the problem persists, call the expert.
  • Please note that a sector whose LV was off intermittently does not deliver any data until the next run, in which the CSC should be reconfigured! Tell the shift leader.
MDT
  • Try to switch a failed LV channel back on once. Document in eLog. If the problem persists, call the expert.
  • JTAG reinitialize affected chambers
  • If this happens during a run, reinclude the chambers into the run; they will have been dropped as a consequence of the LV failure.
RPC
  • Identify which type of LV failed (Vee, Vth, Vpd or Vpad) and post an e-log with the details of the error and of the LV failed.
  • In case of an ERROR or FATAL alarm, if the alarm does not disappear within 3 minutes, call the RPC DCS/detector expert. (You can avoid calling the expert during the nights of LHC technical stop periods; in this case document it in eLog. If in doubt, consult the shift leader.)
TGC
  • Do not try to act on an LV problem yourself; call the TGC expert on-call.

High Voltage Operations

Switching HV ON/OFF

In order to Switch HV OFF, for an individual HV channel or a larger part of a muon sub-detector, navigate in the FSM to the corresponding node, then execute the command SWITCH_HV_OFF / POWER_HV_OFF for it.
You should switch HV off

  • when told to do so by an expert, e.g. in preparation for an intervention;
  • for individual channels if the channel is behaving abnormally, in particular when it is showing a persistent OverVoltage alarm or an unstable output voltage.

In order to Switch HV ON, for an individual HV channel or a larger part of a muon sub-detector, navigate in the FSM to the corresponding node, then execute the command SWITCH_HV_ON / POWER_HV_ON for it.

Please note:

  • You should only switch HV on when told to do so by an expert, except in case of MDT/RPC HV trips. For the procedure for dealing with HV trips see below.
  • There are a number of situations in which HV is switched off automatically by DCS, or HV voltages lowered. See the section on HV Interlocks/DCS Interlocks later in this manual for details.

Transition between HV READY and STANDBY (Question needs review for RPC)

During beam operations, the CSC, MDT and TGC sub-detectors ramp HV to nominal values ("READY") only when stable beams have been declared by LHC, implying that no more manipulations/adjustments are done on the beams. Outside stable beams, HV is set to a lower STANDBY value, which can be the same value for the full sub-detector (CSC, TGC) or depend on the chamber's distance from the interaction point (MDT). Stable beams is indicated to ATLAS both by the beam mode, displayed on LHC page one, and by the so-called stable beams flag, which is a hardware signal sent to the experiments.
With respect to RPC, STANDBY HV is not required by LHC; RPC will most of the time still follow the usual transitions between STANDBY and READY, but may stay at READY for specific tests outside stable beams from time to time. Muons in the new scheme are considered as at STANDBY if CSC, MDT and TGC are at standby voltage and RPC is at either nominal or standby settings.
Current STANDBY settings are:

System  Standby Voltage (V0)  Nominal Voltage (V1)  Detector Region
CSC     1300 V                1800 V
MDT     2500 V                3080 V                BI layer; BEE; EM1,2,3; EO1,2,3; EI1,2
MDT     3080 V                3080 V                remaining chambers
RPC     9000 V                9600 V
TGC     2200 V                2800 V

Understanding if the stable beams flag is present or not

Whether the stable beams flag (hardware signal) is present/received by Atlas or not can be seen from the DCS LHC Widget, always displayed at the top middle of the DCS FSM UI. The widget's last 3 lines give the presence/absence of the stable beams flag, the status of the overall Atlas injection permit sent to LHC, and the DCS-evaluated 'safe for beam' status.

Automatic transitions between Standby and Ready

Normally, all ramping between Standby and Ready depending on beam mode is handled automatically by the DCS. There are 3 DCS initiated actions plus a transition which is directly hardware driven.

  • Injection Handshake (DCS action): Upon reception of the Injection Handshake message from LHC, indicated in the control room by an audible signal, all muon sub-detector HV is ramped to STANDBY values if voltages were still at READY.
  • Adjust Handshake (DCS action): Upon reception of the Adjust Handshake message from LHC, indicated in the control room by an audible signal, all muon sub-detector HV is ramped to STANDBY values if at READY before. This transition is only relevant when going to "Adjust" from "Stable Beams"; during the Adjust phase following the ramp and squeeze the detectors are still at STANDBY anyway.
  • Stable Beams (DCS action): Upon reaching stable beams after the sequence RAMP - FLAT TOP - SQUEEZE - ADJUST muon sub-detector HV is ramped to nominal voltages. The ramp up takes between 30 secs and a few minutes.
  • Loss of Stable Beam Flag, Beam Dump (Hardware action except for RPC): CSC, MDT and TGC ramp down to STANDBY voltages on the loss of the stable beams flag/signal. This is a direct coupling to the hardware signal received from LHC, and thus works even if there is a fault in DCS controls. Please note that on a sudden, unscheduled beam loss (beam abort) from stable beams the ramp down usually occurs a few minutes after the actual beam loss, when LHC operators reset the stable beams flag; in case of a scheduled dump the ramp down happens before the actual beam dump.
  • Loss of Stable Beam Flag, Beam Dump - RPC: As explained above, RPC STANDBY HV is not a requirement for detector safety in non-stable beams situations; RPC HV will be kept at nominal voltage for certain tests outside of stable beams. In particular, RPC HV is kept at READY after a beam loss or dump (from stable beams!) for another 20 minutes to allow the study of after-glow effects; after the configured tests are done or the extra data is taken, the ramp down is automatic. For RPC, shifters only have to check that HV is READY when we are at stable beams.

If the override mode is active, no automatic actions take place for CSC, MDT and TGC; the transition has to be done manually, followed by clearing the override mode for everybody as described here.

Checking Status of Automatic Actions

You can check if all or some of the automatic actions are enabled or disabled by clicking on the MUON FSM top panel on the 'Advanced Panels' drop-down menu, then selecting 'Common' and from there 'Autom. Beam Actions', which opens the panel shown here. Tick marks indicate whether automatic actions are active or not. In case of a malfunction of the automatic actions, shifters can disable them (all together) by clicking on the 'Disable All Actions' button, then report this to the experts/system coordinator.

Manually going from STANDBY to READY

In case automatic transitions are not enabled, or fail, shifters can go to READY manually, if the stable beam flag is present or override mode is selected. To do so, execute in the FSM

  • the command HV_STANDBY_TO_READY on the MUON SYSTEMS top node of the Muon FSM tree; or
  • the commands HV_STANDBY_TO_READY on the MDT and TGC nodes plus the command GOTO_READY on the RPC node. CSC are coupled to MDTs and thus no command is needed here.

Manually going from READY to STANDBY

In case automatic transitions are not enabled, or fail, shifters can go to STANDBY manually. To do so, execute in the FSM

  • the command HV_READY_TO_STANDBY on the MUON SYSTEMS top node of the Muon FSM tree; or
  • the commands HV_READY_TO_STANDBY on the MDT and TGC nodes. CSC are coupled to MDTs and thus no command is needed here.

Please note that neither GOTO_READY/HV_STANDBY_TO_READY nor the corresponding opposite direction commands will switch on any HV channel which is OFF before. For this please refer to the instructions on HV turn on/off.

HV Override mode (allow nominal HV without stable beams)

For details on the override mode, please see the paragraph later in this manual.

HV Interlocks

Muon sub-detector HV, if in OFF state, is prevented from being turned on under certain conditions:

  • CSC HV interlocked in case of no LV: CSC HV is blocked from being turned on if LV for the sector is off.

  • CSC/MDT manually interlocked channels: CSC and MDT HV channels can be manually interlocked by the expert. In this case, the information "disabled" (MDT) or "interlocked" will appear on the FSM HV channel panel. In the Power System FSM panels, disabled channels are indicated by a little square around the status indicator. If you find any manually disabled channels not listed on the whiteboard, check with the expert !

  • MDT channels interlocked due to gas conditions: MDT HV is blocked if either the CO2 concentration, gas flow or gas pressure is not ok, or if there is no information from the gas system for an extended period of time. In the HV channel FSM panel the word "gasInterlock" will be displayed in this case.
  • MDT channels interlocked due to excessive HV board temperature: MDT HV is blocked if a board reports an abnormally high temperature. The word "tempInterlock" is displayed in the HV channel panel in this case. If you encounter this case, call the MDT/MDT DCS expert !
  • RPC voltage set points reduced due to gas conditions: RPC HV set points are set to a lower than normal safe value of 5000V in case of problems with gas flow or mixture. The safe voltage will continue to be enforced for some time, depending on the duration of bad gas conditions, after the gas problem clears !

  • TGC channels in Manual or Disabled Mode: Channels are blocked from being turned on if configured by the expert as in mode Manual or Disabled. The mode can be seen by navigating in the TGC FSM to the corresponding HV channel, an example is shown below. FSM nodes corresponding to channels in manual mode and OFF should normally be excluded (disabled) from the FSM tree.
In addition, HV when ON is automatically switched off for detector safety reasons when one of the non manual conditions above is reached. More details are given in the section on DCS Interlocks later in this manual. If you find any HV going to OFF state without a Trip or other Error, check if it was caused by an interlock action before you call the expert.

Dealing with HV Trips

The following procedure holds with respect to HV trips for the different muon sub-detectors:

CSC
  • Tripped HV channels will be recovered automatically by DCS, shifters need to post an e-log.
  • Only in case you observe the same channel re-tripping constantly, please alert/call the DCS-oncall expert.
MDT
  • Try to switch a tripped channel back on once;
  • If the channel trips again, leave it off; try to clear the trip alarm with the RESET_TRIP command in the FSM on the affected channel; if this does not work, you can mask the alarm from the alarm screen (right click --> Mask).
  • Disable the node in the FSM (by clicking on the red cross next to it).
  • Post an eLog entry. There is no need to call an expert for single channels tripping.
RPC
  • Do not act on a tripped channel yourself. Call the RPC DCS/Detector expert on call.
TGC

TGC HV trips occur relatively often.

  • Recovery is automatic, you should not switch tripped channels back on yourself.
  • The automatic procedure attempts to recover a tripped channel up to 4 times; please note that there is a certain waiting time (currently 20 minutes) between the trip and the recovery attempt; the channel appears in state "RECOVERING" in the FSM during this period.
  • You can check the number of automatic recovery attempts carried out so far from the table which appears in the FSM UI when navigating to the corresponding TGC sector wheel node (e.g. HV C 01 M3); look at the "#tr" column.
  • If self-recovery fails the 4th time, the system will automatically disable the HV channel and post an entry to eLog. The shifter should mask the corresponding "Tripped" alarm in the alarm screen.

Dealing with other HV Errors

Threshold Settings

Front-end electronics thresholds are controlled by DCS for MDT, RPC and TGC.

MDT

MDT thresholds are loaded to the mezzanine card ASD chips as part of JTAG initialization. Threshold values are by default taken from the MDT configuration database; they are different for each ASD chip. In addition, thresholds for each chamber can be set to a user-defined value via the JTAG FSM tree, for special calibration runs. When initialized in this mode, a WARNING status is present in the FSM and 2 alarms appear in the alarm screen, one indicating that custom threshold mode is selected, the second that the chamber is indeed initialized with the custom threshold and not the values from the configuration DB. Please make sure default thresholds from the database are reloaded before the next physics run if not explicitly instructed otherwise!

RPC

RPC thresholds are generated by ADC modules which are part of the RPC LV system. Threshold voltages are referred to as Vth; troubleshooting is covered under Low Voltage Operations.

TGC

TGC front-end electronics thresholds are handled via TGC ELMBs (Embedded Local Monitoring Boards), which in turn are controlled from DCS via CAN bus.

  • Threshold not responding: New instructions are available: TGC-Threshold_reporting_ERROR.pdf. If this does not clear the problem call the expert.
  • Bad thresholds after LV power cycle: After any TGC LV operation thresholds may have to be reset to work correctly, please check with the expert immediately should threshold errors appear after a known operation on LV.

JTAG Initialization (MDT)

JTAG initialization is the loading of parameters to the MDT front end electronics (CSM and mezzanine cards) and their initialization. JTAG init is handled through the MDT FSM tree.

Automatic full JTAG re-initialization at the end of runs

After a long physics run the MDT system will automatically issue a full JTAG re-initialization of all MDT chambers. This process is triggered at most once per day, during the UNCONFIGURE step of the DAQ at the end of a physics run after a beam dump. Experience has shown that there are always a few chambers that fail the automatic re-initialization and thus cause an ERROR state of the FSM. In such a situation the shifter is supposed to follow the instructions described in the next section about dealing with lost JTAG initialization.

Once all MDT chambers are fully initialised again, it is advisable to perform a short MDT standalone run. This is to validate that indeed all chambers are correctly initialised again. Experience has shown that there is a small chance that, although all is READY again, some chamber might fail at the next start of run. There is no need to keep the standalone run up for more than 5 minutes. The aim is to see that all chambers stay in the run or are correctly auto-recovered. Once no chamber is giving any further trouble, the run can be stopped.

Dealing with lost JTAG initialization

JTAG initialization can be lost spontaneously in case of

  • loss of chamber low voltage
  • problems/glitches with the clock signal sent to the front end electronics
  • failures during the automatic full re-initialization

In case JTAG initialization is lost (nodes being NOT_READY/NOT_INITIALIZED) in the FSM:

  • Initialize affected chambers. You can try several times if needed.
  • If a run is currently ongoing, once chambers are initialized make sure you reinclude them into data taking, in the same way as when recovering dropped chambers.
  • If initialization fails, make sure there is no LV problem and no MDT TTC crate that is off.
  • Outside of a run, if a chamber repeatedly fails to initialise correctly, you may attempt once to switch off the LV of the affected chamber, wait 2 minutes, switch the channel back on and try to initialize again (remember that a second chamber will lose its initialization as a result of the LV power cycle).
  • In case initialization persistently fails, call the expert on-call.

The given procedure is for the case of JTAG initialization lost for some chambers. Should you lose a large part of the detector, in any case check with the expert before you reinclude them into an ongoing run!

Muon BIS (Beam Interlock System) Interface. Injection Permit Logic.

The Muon BIS Interface handles the signals provided to LHC (injection permit, via Atlas BIS system) from muons and the signals received by muons from LHC (stable beams flag). Details on the status can be seen from the Beam Interlock Panel, which can be opened by navigating in the FSM to MUON SYSTEMS (FSM top node) --> MUON --> BEAM INTERLOCK.

The panel is split in 2 parts (shown here: left, no stable beams, STANDBY; right, stable beams, HV READY).

  • Injection Permit: The upper part of the Beam Interlock panel shows the status of the muon injection permit logic. The injection permit is a hardware signal common for all muon sub-detectors which must be provided to LHC, via the ATLAS BIS system, to allow injection. This signal is generated by the Muon BIS system when CSC, MDT and TGC HV is at STANDBY and muon HV is thus in a safe state. The signal is present if the left-most indicators (circles) all appear in blue or yellow. The overall muon permit, as received by the Atlas BIS system, is shown in the right part next to the label 'Muon Permit'. Should this not be the case when at STANDBY, please call the relevant sub-detector DCS expert straight away; you are holding up LHC injection.

  • Stable Beams Signal Handling: The lower part of the Beam Interlock panel shows the handling of the stable beams signal. Please note that the signal from the LHC is not directly connected to the muons' CAEN power system mainframes, but goes via a set of DCS-controlled switches shown here. This makes it possible to withhold the stable beams signal from the mainframes by opening the switches, so that they go to STANDBY voltage (V0 set points), e.g. in case of the Adjust handshake.
    The shown situation is the one during beam operations without stable beams (HV STANDBY). When stable beams are reached, the LHC Stable Beams Signal will become TRUE. On the GOTO_READY(_HV) command the 2 switches to the right for MDT and TGC are closed (they will become red); the signal then reaches the mainframes, which switch from V0 to V1 set points. The RPC switch is always closed since RPC HV transitions are handled slightly differently, as described above.

HV Override Mode: Nominal HV without stable beams

Muons sometimes require nominal HV without stable beams, in particular in periods with no beam at all for an extended period of time (at least a few hours). Activating the override mode is an action reserved to the muon run coordinators and protected by corresponding access control. Please call the muon run coordination phone number in case any question on going to override mode comes up.
Please note that while override mode is active for either CSC/MDT or TGC, the muon injection permit is blocked!

Asserting Override Mode (Muon run coordinator instructions!)

To assert the override mode, navigate in the FSM to MUON SYSTEMS (FSM top node) --> MUON --> BEAM INTERLOCK. Click on the State field (reading 'READY') of the Beam Interlock node and execute the command GOTO_OVERRIDE_MODE. Override mode can also be set separately for CSC/MDT and TGC if needed; for this, execute the same action separately on the sub-detector specific nodes.
For RPC go to MUON SYSTEMS (FSM top node) --> RPC then on the right hand side you can find Rpc HV Operation. Clicking on the corresponding square opens the RPC Safe for beam (LCSs) panel. At the bottom left of that panel you can find a button Manual/LHC driven Master Switch. Clicking it will give you the option "MANUAL Mode". This will set the RPC into the override mode disabling all automatic script actions. Please note that activating override mode is a protected action reserved to the muon run coordinators!
Please note that while override mode is active for either CSC/MDT or TGC, the muon injection permit is blocked!

Clearing Override Mode (Shifter instructions)

The override mode can be cleared, i.e. the system made ready for beam injection, by the muon shifter. To do so for CSC/MDT and TGC, please navigate to the Beam Interlock node, click on its state field (showing 'READY') and execute the command GOTO_BEAM_MODE. Verify the injection permit is given; this may take a minute. For RPC open "Advanced Panels->RPC". You will see one button highlighted in yellow, "Revert RPC HV Operation". Click this and then press the big button "Revert To standard Operation". At the end of these actions, all four corresponding warnings on the DCS alarm screen should be gone and all sub-systems should start ramping down HV to STANDBY automatically.

DCS and DSS Interlocks

Gas System

In case of a stop of one of the muon gas systems, you will see an Error in the DCS alarm screen. An audible alert is also triggered at the SLIMOS desk. Bringing back the gas system is not the task of the muon shifter; the SLIMOS will get in contact with the gas piquet, who will take care of it. However, in case of a longer stop of the gas system, the affected muon sub-system might need to switch off the HV. As long as the stop of the gas system lasts less than 1 hour, nothing has to be done. If you are approaching the 1 hour limit, please get in touch with the corresponding sub-system expert on call.

MDT Gas System Pressure Errors

In the past we have experienced some issues with individual pressure sensors in the MDT gas system. It can happen that an individual pressure sensor suddenly reports increasing pressure up to values of 3100 mbar and above, while no changes are actually visible in the trends of the input or output flow. In these cases, please make a screenshot of the pressure and flow histograms and put them in an eLog entry. During daytime, please inform the MDT primary on-call. During night time, you may disable the corresponding gas channel node in the FSM, if the flow really is stable and only the pressure reading is misbehaving. The error in the DCS alarm screen should be left unmasked in order for the expert to check at the next occasion.

Barrel Alignment System

The overall status of the barrel alignment system is monitored as part of the MDT FSM. In case its state becomes NOT_READY or its status goes to ERROR or FATAL, please call the MDT/MDT DCS expert. Intermittent warnings of a high bad line fraction of a few percent can be ignored if they disappear by themselves after a few minutes.
If a NOT_READY state is accompanied by a RasDim Server Alarm, please check with the shift leader whether he/she knows about any check being carried out remotely by an alignment expert; if yes, wait before calling the MDT/MDT DCS expert, as the alarm will normally clear itself in this case.

Endcap Alignment System

The endcap alignment system is monitored as part of the MDT FSM. If the state becomes "NOT READY" due to an endcap alignment ERROR, please call the MDT/MDT DCS expert.
Generally, any problems with the system will automatically issue a command to switch off all of the alignment VME crates. There will be associated FSM error alarms that begin with the term MDT EALIGN.

To check that the VME crates are off, open up MUON DCS FSM and navigate through the following nodes:

 MDT --> INFRASTRUCTURE --> VME CRATES --> ALIGNMENT 

In rare cases where the crates have not been switched off, the MDT/DCS expert may ask you to do so manually. For this, execute the "GOTO_OFF" command by right clicking the "ON" button in green for each of the six crates (labelled as CEM, AEM, EI, CEO, AEO, and EE).
Once the crates have been turned off, either automatically or manually, please make a note in your shift summary with relevant details (time, alarm messages seen, actions taken...).
No further action is required from the shifter, leave the crates off, and do not restart any endcap alignment processes.

VME (ROD, TTC) Crates

VME crates for CSC and MDT system can be power cycled or turned on/off by the shifter. To do so, navigate in the DCS FSM to MDT --> Infrastructure --> VME Crates or CSC --> Infrastructure --> VME Crates correspondingly, then execute a GOTO_ON/GOTO_OFF action on the crate concerned.

Note: In case of MDT, do not power cycle a ROD or TTC crate for a stuck SBC, but execute the SYS_RESET action on the crate concerned from the DCS FSM.

RPC and TGC crates are not under the control of the muon shifter. In case a power cycle is needed for a stuck SBC (Single Board Computer), ask the run control shifter to take care of it; they have the relevant permissions. If a crate needs to be turned on or off for another reason, call the expert.

DAQ

Taking or Monitoring a Run

When a combined run is ongoing, you will monitor the ATLAS partition, spying on errors, log messages and the state of the DAQ. In addition, during calibration periods you will run calibrations/tests in standalone mode. Standalone runs currently are separate for each muon sub-detector (CSC, MDT, RPC, TGC).

Monitoring the ATLAS Partition

In order to monitor what is ongoing in the ATLAS partition, click in the menu on TDAQ --> DAQ Panel.

DAQ Panel default settings for tdaq-07-01-00

Parameter Config
Setup script /det/tdaq/scripts/setup_TDAQ_tdaq-07-01-00.sh
Database File /atlas/oks/tdaq-07-01-00/combined/partitions/ATLAS.data.xml
Partition Name ATLAS
ERS Filter (*) QUAL=TGC or QUAL=CSC or QUAL=RPC or QUAL=MDT
OHP Config Parameters -c /atlas/moncfg/tdaq-07-01-00/muons/ohp/muon.ohp.xml
TRP Config Parameters -c /atlas/moncfg/tdaq-07-01-00/trigger/trp/trp_gui_conf.xml
(*) Verify the selection mode 'Expression' for the filter in the ERS.

After opening the DAQ Panel,

  • Enter the above information (you can copy 'n' paste). You can also use the Browse button to navigate to the specified files.
  • Press Get Partition.
  • Select the partition from the drop down menu.
  • Press Read Info. Wait a little until information is read.
  • Once the buttons on the right of the DAQ panel become active, click on Monitor Partition and wait until the TDAQ IGUI appears. This may take half a minute or so.
  • Once the ATLAS partition is up, load the TGC Integrated Panel into the TDAQ IGUI by selecting it from the Load Panels menu in the TDAQ IGUI. (only available if TGCs are included in the run)

Starting a Standalone Run - General Procedure

Standalone runs are only possible if the corresponding subsystem is not part of the global ATLAS partition (and vice versa, resources are shared). If a subsystem is still part of the ATLAS partition and a standalone run is needed, please discuss with the run control shifter or the shift leader to remove it from ATLAS. For the same reason, after a run is finished, please release the standalone partition to allow ATLAS to continue. To properly do this, you must close the IGUI window via the menu 'File' -> 'Close IGUI & exit partition' and not by clicking the x in the top right corner of the window.

To start a standalone run, open the DAQ panel, then

  • Enter the setup script and database file. You can also use the Browse button to navigate to the specified files. Please refer to the sections below for the configurations needed for standalone running.
  • Press Get Partition.
  • Select the partition from the drop down menu.
  • Press Read Info. Wait a little until information is read.
  • Once the buttons on the right of the DAQ panel become active, click on Start Partition and wait until the TDAQ IGUI appears. This may take half a minute or so.
  • Press INITIALIZE, wait until the CONFIG button is enabled and the state is INITIALIZED.
  • Press CONFIG, wait for the partition state to become CONNECTED. This takes a few minutes.
  • Check the run settings, in particular the run type and recording enabled/disabled. If you do not want to limit the number of events taken, put 0 in the corresponding field. For standalone runs recording should normally be disabled unless instructed otherwise.
  • Press START to start the run and check that there are triggers.

CSC calibration (pedestal) runs

Shifters should take a CSC pedestal run once per day.

CSC and TGC calibration runs should be taken as explained in Muon Calibration Panel. However, if for any reason you are instructed not to use that tool, one can take a CSC pedestal run manually as follows:

DAQ Panel CSC pedestal run settings (tdaq-07-01-00)

Parameter Config
Setup script /det/tdaq/scripts/setup_TDAQ_tdaq-07-01-00.sh
Database File /atlas/oks/tdaq-07-01-00/muons/partitions/part_CSC-Pedestal.data.xml
Partition Name part_CSC-Pedestal
Pedestal run sequence:
  • Start the CSC pedestal run partition from the DAQ panel as described above.
  • Press INITIALIZE, CONFIG. Configuration will take about 1 minute.
  • The number of events should already be set to 125 (if not, set it to 125). Check that recording is enabled, then press START. The run will take ~2 minutes. (Note that there will be ~250000 L1 triggers, while only ~125 events will be recorded.)
  • The run will stop automatically. The Stop transition will take about a minute (it does some calculations and writes some histograms); then press UNCONFIG and SHUTDOWN. Exit the IGUI and answer 'yes' to the popup window to shut down the partition. It is important to shut down the partition to free its resources for the next combined run.
  • It's enough to make a simple eLog entry with the title "CSC Pedestal Run [runnumber]".

CSC standalone runs

DAQ Panel CSC (Standalone) calibration run settings (tdaq-07-01-00)

Parameter Config
Setup script /det/tdaq/scripts/setup_TDAQ_tdaq-07-01-00.sh
Database File /atlas/oks/tdaq-07-01-00/muons/partitions/part_CSC-Standalone.data.xml
Partition Name part_CSC-Standalone

MDT standalone runs

Shifters are asked to do a standalone run as part of recovering from a power cut, for specific investigations as specified by the experts or after an automatic full JTAG re-initialization.

DAQ Panel settings for tdaq-07-01-00

Parameter Config
Setup script /det/tdaq/scripts/setup_TDAQ.sh
Database Files /atlas/oks/tdaq-07-01-00/muons/partitions/part_MDT_all.data.xml
/atlas/oks/tdaq-07-01-00/muons/partitions/part_MDT_Ba.data.xml
/atlas/oks/tdaq-07-01-00/muons/partitions/part_MDT_Ec.data.xml
Partition Names part_MDT_all
part_MDT_Ba
part_MDT_Ec
ERS Filter MDT
OHP Config Parameters -c /atlas/moncfg/tdaq-07-01-00/muons/ohp/muon.ohp.xml
TRP Config Parameters -c /atlas/moncfg/tdaq-07-01-00/trigger/trp/trp_gui_conf.xml
MDT standalone partitions:
A MDT standalone run can be done either for the full MDT system or separately for the Endcap and the Barrel. The 3 different partitions to be used are
  • part_MDT_Ec: MDT Endcap
  • part_MDT_Ba: MDT Barrel
  • part_MDT_all: Full MDT system
The full partition should normally be used for the usual runs performed by the shifter. Endcap and Barrel partitions can run simultaneously if needed.

RPC standalone runs

Currently shifters do not perform RPC standalone runs on a regular (daily) basis.

DAQ Panel RPC standalone settings (tdaq-07-01-00)

Parameter Config
Setup script /det/tdaq/script/setup_TDAQ.sh
Database File /atlas/oks/tdaq-07-01-00/muons/partitions/part_RPC-DAQSlice.data.xml
Partition Name part_RPC-DAQSlice
ERS Filter RPC
OHP Config Parameters -c /atlas/moncfg/tdaq-07-01-00/muons/ohp/rpc/RPC.ohp.xml
TRP Config Parameters -c /atlas/moncfg/tdaq-07-01-00/trigger/trp/trp_gui_conf.xml

TGC calibration runs

Muon shifters are asked to perform 3 types of standalone runs during calibration periods,

  • Random : random trigger run.
  • ASD test : analog test pulse run.
  • Track test : digital test pulse run. Please inform the shift leader when the calibration is completed.

TGC and CSC calibration runs should be taken as explained in Muon Calibration Panel. However if for any reason you are instructed not to use that tool, one can take TGC calibration runs manually as follows:

DAQ Panel TGC calibration run settings

Parameter Config
Setup script /det/muon/TGCFE/installed/bin/setup_partTGC_FillTest.sh
Database File /det/muon/standalone/databases/tdaq-07-01-00/muons/partitions/part_TGC_FillTest.data.xml
Partition Name part_TGC_FillTest
Calibration run sequence:
  1. Prepare DAQ software.
    • Open "TDAQ" panel for calibration test named "part_TGC_FillTest". To do that, need to fill some information on the DAQ Panel.
      • Setup Script : as listed above.
      • Database File : as listed above.
    • Click on "Get Partition".
    • Click on "Read Info". The Part name "part TGC FillTest" is filled automatically.
    • Click on "Start Partition".
    • Now, a new RunControl panel opens. The name is not "ATLAS" but "part_TGC_FillTest".
  2. Open the FEtest panel.
    • From the menu bar on the Desktop: Muon→DAQ_Config→TGC FE Test. The panel will appear.
  3. Start data taking.
    • (1) Before starting, ensure that TDAQ state says "None". If not, click "SHUTDOWN".
    • (2) Click "Random" on the FEtest panel (or "Track test" or "ASD test" for the other two calibration runs).
    • (3) TDAQ transition will show a popup window (message for e-Log, use your login and password).
      Fill information and click OK. Now data taking will be started automatically. TDAQ state changes "None" -> "INITIALIZED" -> "CONFIGURED" -> ... automatically.
      If the transition fails (some box turns red), click the "Stop test and shutdown DAQ automatically" button on the FEtest panel and go back to step (1).
    • (4) When TDAQ is "CONNECTED", TDAQ always shows a warning message. Just click OK and wait for a few minutes more because the FEtest panel is still configuring electronics modules.
      After the configuration is done, run automatically starts. If nothing happens after 5 min, something is wrong. Stop the test by clicking "Stop test and shutdown DAQ automatically" and go back to the step (1) above.
      At the end of the calibration run, another popup window appears for end of run message in e-Log, please press "Ok" here.
      • If a ROD busy occurs, check the ROD status using the TGC Status panel, opened from the TDAQ IGUI by clicking on the "Load Panels" button and selecting "TGC Integrated Panel". The busy should be recovered within a minute. If it is not recovered after 5 minutes, restart the run by clicking the "Stop ..." button.
      • Data taking takes about 15 minutes.
      • The running state will automatically go to "NONE" after data taking finishes.
    • (5) Then check the run number. (After taking the 3 calibration runs, post an e-Log with the run numbers.)
    • (6) Copy data (Please don't forget!!) : in the bottom of the FEtest panel, fill in the run number and click "copy to shared disk".
    • (7) Go back to instruction (1) and do the same for "ASD test" and "Track test" instead of "Random".
    • (8) Having finished all three tests, close the TDAQ Panel and FEtest Panel. When you close the TDAQ IGUI, you will be asked whether to also shut down the partition infrastructure. Answer "yes" here.
    • (9) Finally, post an e-Log with the three calibration run numbers (including TGC to "System affected" on ELisA).
  • HELP What To Do When
    • If you find that recording is 'Disable' in the TDAQ panel, just ignore it and keep it 'Disable'. Since the TGC calibration run does not use the ROS, the recording mode in the TDAQ panel does not affect the calibration data taking.
    • Some tests have not been finished yet, but the shift leader is telling you that the global run will start soon.
      • If you do not have enough time, give up taking data and report it in an e-log. To finish the test, click the "stop test and shutdown DAQ automatically" button on the FEtest panel, and after TDAQ goes to "NONE", close the TDAQ Panel and the FEtest Panel. Then tell the shift leader that TGC is ready for the global run.
    • Waiting for "FEtest" for a long time but it just continues saying "TGCFE-RCD_ECA01_FillTest is still working...".
      • Stop the data taking and try again from the beginning. Close the FEtest panel and open it again. Click "SHUTDOWN" on the TDAQ panel and wait until it goes to the "NONE" state.
    • Other problems
      • Call primary on-call expert (16 1905)

Muon Calibration Panel

CSC and TGC pedestal/calibration runs can be done with one click using the muon calibration panel. To open the panel, click in the menu on Muon -> DAQ_Config -> Muon Calibration Panel. You'll see the following screen.

Recently (June 2018) the MDT have been removed from the muon calibration panel. For MDTs please refer to the instructions given here.

To take the calibrations

  • Make sure CSC and TGC are green next to DetMask. (Otherwise they may be included in another partition)
  • Make sure that boxes on left of CSC and TGC are checked.
  • Click "Do All Selected Calibrations".
  • After each calibration you'll be prompted to enter an eLog; enter your username and password and publish the eLogs. In the eLog, you'll see a summary of the run and any errors that may have happened.
  • Close the panel once everything is finished.

Other Monitoring Tools for the Ongoing Run

RPC Busy and Rate Panel

The "RPC Busy and Rate Panel" shows RPC LVL1 DAQ information obtained through DAQ to DCS Communication (DDC) and is started from the muon menu MUON --> RPC --> INFRASTRUCTURE --> RPC LVL1 --> RPC LVL1 DDC. The same panel can be opened from the MUON or RPC secondary panel clicking on the RPC DDC box or as a standalone independent window by clicking on the neighbouring small circle.

  • What is displayed:
    • The individual trigger rates of the ~400 trigger towers and their busy and kill status
    • The total rate and the number of trigger towers with problems (busy, kill etc.)
    • Information from the ongoing run (Run number, lumi block, status running etc)
    • Overall READY state; OK, WARNING AND ERROR messages

In the main table the rates are organized in 64 lines corresponding to the 64 trigger sectors named in the leftmost column. The corresponding detector sectors (16 in total) are indicated in the rightmost column. Individual rates for all trigger towers are shown together with their sum and a bit pattern indicating error conditions (#).

NOTE: Please note that this panel is active and displays valid information only when the RPC is in data taking (combined or standalone). If neither a combined nor a standalone run is ongoing, the information is invalid (grey) and not relevant. In this case a WARNING message is issued which does not need a follow-up.

  • What to monitor:
    • During a run in the ATLAS partition, shifters should periodically monitor the status of all sectors in the main big table.
    • The title should be green: "DAQ to DCS Communication Active". The rates displayed in the table should be mostly green, with very few exceptions (*). Grey cells indicate disabled towers and should be monitored. A high rate (>1 kHz) in one or more trigger towers could be a potential problem. Only if you are asked to lower the rate of a trigger tower, call the RPC LVL1 expert (at any time) in order to exclude that tower. More than 5 killed trigger towers (#) are considered an error condition, shown in the title of the panel and reported also in the ALARM screen. In case of such an error, call the RPC LVL1 expert at any time.
    • Errors and high rates are also summarized in the bottom line of the panel. In case of single killed or busy trigger towers, try to recover them as described below; if this does not succeed, call the RPC LV1/DAQ expert during day-time or post an eLog during nights. In case of multiple (5 or more) killed towers, and in case of a persisting abnormally high rate for one or more towers, call the RPC LV1 expert.

  • Additional information:
    • The panel allows visualizing history trends of rates, busy and error conditions by right/left-clicking on the appropriate table element.
    • The table can be reordered to visualize highest rates, errors, trigger sector logic or detector sector.
    • DB queries and snapshots of archived data are accessible by clicking on the Online box in the top title.
    • By clicking on the main table border the information displayed flips from mean rate (last 2 minutes) to instantaneous rate (last measurement).
    • (*) Please note that for towers t4 of sectors s21, s22, s25, s26, s53, s54, s57, s58 (corresponding to the ATLAS feet detector sectors 12 and 14) a zero rate (blue) is expected.
    • (#) Busy/Killed towers are summarized in red in the "Killed" column of the main table. A bit pattern is associated with each trigger tower: 0 means no errors, 1 Kill, 2 Pad Busy, 4 SL Board Busy, 8 Rx Fifo Busy. More than 5 pads in error (Busy/Kill non-zero) are considered an error to be followed up at any time; a sketch of how to decode the bit pattern is given below.
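
The "Killed" value is a simple bitmask of the conditions listed above. A minimal sketch of decoding it in a terminal on the muon desk (the value 6 is purely an illustrative example):

  pattern=6                                  # value read from the "Killed" column
  (( pattern & 1 )) && echo "Kill"
  (( pattern & 2 )) && echo "Pad Busy"
  (( pattern & 4 )) && echo "SL Board Busy"
  (( pattern & 8 )) && echo "Rx Fifo Busy"   # for 6, this prints "Pad Busy" and "SL Board Busy"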

TGC FE Monitor (do not request to open since 2012 Sep 09, see eLog https://atlasop.cern.ch/elisa/displayEntryID.htm?messID=218495&display=1)

The TGC FE Monitor panel is started from the muon menu MUON --> DAQ/Config --> TGC FE Monitor and shown here.
What to monitor:
During a run in the ATLAS partition, shifters should monitor the status of all sectors in the left hand part of the panel. The state for all sectors should be either OK or CHECKING , where Checking indicates the sector is currently being tested by the tool.

What to do in case of Errors and Problems: In case of an ERROR reported in the TGC FE Monitor, please call the TGC expert on-call!

TGC Status Panel

The TGC Status panel was refurbished in March 2011. From the standard RC panel, open the TGC IGUI by clicking Load Panels and selecting the TGC panels. In the copious tree you are presented with, you should open the shifter subtree. All sub-panels there are important and you should understand the reason for any red entry. You can of course browse the other panels; they are harmless except the Configuration section, which should not be touched by a non-expert. Please note that there exists a (slow) web interface to this panel, for browsing outside P1: https://atlasop.cern.ch/tgc/tgcStatus.html.

  • The topmost panel (ROD status) shows a simple summary of the 24 sectors. The most important tag is the ROD busy state.
  • Then comes the Recovery/Reconfig panel. Once you know that Recovery means `transparent reconfiguration of the ROD and flushing of SSW pipes' and that Reconfig means `full reconfigure of ROD and FE electronics', it should be simple. If not, either roll the mouse pointer over the buttons in the panel to get online tips, or follow this link.
  • The third most important panel is the Occupancy one. It shows, for each of the 7 or 8 star switches in each ROD, the averaged wire and strip occupancy. A black box indicates that a star switch has been dropped. If the automatic resync mode is not enabled, please refer to the dedicated page.
  • Then come the summaries of the HV and Thresholds read-out monitoring, as written to the conditions database. Disabled indicates a channel which isn't controlled by the DCS. The age of the newest update is an indicator of the health of the process writing to the database: if it is stuck, all values will increase forever. Note that you can produce a formatted detailed print-out of these panels from LXPLUS or any other /afs/cern.ch mounted machine. See the detailed instructions.
  • Finally the last panel shows a summary of the DQ flags, as computed by the DCS calculator.

Busy Panel

The BUSY panel is started from the DAQ panel by pressing the Busy button, which opens a panel as shown here. The panel is usually also among the ones projected on the front wall of the control room, so you don't need to have it open on the muon desk. In case of a TGC problem, see the new instructions for TGC ROD recovery in the dedicated page.

Muon Calibration Stream Panel

The Calibration Stream Panel is used to check the status of the muon calibration stream as explained here. Please note that the panel does not need to be kept always open on the muon desk.

Error Reporting System (ERS)

The ERS is started from the DAQ panel. This will bring up a separate window with the printout of all messages issued by the run control application. The number of messages displayed can be changed from the default (100) at the bottom of the window. A filter can be applied to the messages displayed; as default use 'QUAL=TGC or QUAL=CSC or QUAL=RPC or QUAL=MDT'.

Log Manager Panel

The Log Manager is started from the DAQ panel. It is used to browse through all messages issued by the run control application for any run/partition. The logs can be browsed by searching using the run number (at the top-left of the panel), or by browsing by partition/user using the tree on the left side of the panel. The messages displayed can be filtered by the application that issued the message, the time issued, the level (warning, error, fatal), etc. using the filter at the top of the screen.

Trigger Presenter

SFO Display Panel

Dealing with Stopless Removal (ATLAS partition) UPDATED (11.09.2017)

When some detector component is causing 100% BUSY for too long, the expert system CHIP will kick in and start the procedure of a stopless removal of that offending component. When LHC is in mode SQUEEZE, ADJUST or STABLE, the offending channel is automatically excluded from the run and you will see a message in the Shifter Assistant asking you to contact the corresponding sub-system primary on-call. Outside the beam modes listed above, the stopless removal will not be automatically performed, but a pop-up message on the run control desk is asking for confirmation. The run control shifter will then go to the corresponding detector desk and ask for instructions. If you are asked by the run control shifter what to answer to the stopless removal pop-up for a muon sub-detector, please always answer YES, and then contact the corresponding sub-system primary on-call.

If a stopless removal of a muon segment is not followed up by a TTCRestart, but rather left out for the rest of the ongoing run, its segment must be restarted (or unconfigured/configured) in the DAQ before the start of the next run to have the removed detector parts reincluded. Please make sure the shift leader takes note of this. In particular, the next run should not be started via STOP/START without going to UNCONFIGURE in between.

Recovery Procedures for Dropped/Abandoned/Killed/Removed objects

Here, the recovery mechanisms of each system are described separately. One recovery that is common to all is the TTC Restart, which takes the following amount of time for each system:

CSC 7 sec
MDT 1 min
RPC 5 min
TGC 4 min

CSC

  • When any CSC chamber/ROD goes busy, an automatic recovery (resychronization) takes place, resetting the firmware. For each chamber, it will run up to 3 times during the run, then stop trying to recover the ROD. During the recovery you'll see an ERS error message: CSC-TTC-RCD rc::HardwareSynchronization CSCEndcapA-Controller would like to resynchronize its hardware. , followed by a ERS warning message corresponding to the problematic ROD: CSC-A-RCD-A06 CSC::RCE_Warning Resetting module to recover busy
  • For the CSC, RODs are grouped into structures called COBs, and each COB contains 4 or 6 chambers. If a chamber goes busy and can't be recovered by resynchronization, after 3 attempts the related COB will be requested to be stoplessly removed. In case this happens the Run Control shifter will get a pop-up screen and you'll see a message in the ERS like: rc::HardwareError CSC-C-BusyChannel-14 .
  • If part of the CSC was stoplessly removed during a run due to a busy, the on-call expert must be called; they will probably ask for a TTC Restart of the CSC segment. If no TTC Restart has been performed, the CSC segment should be restarted or unconfigured/configured before the next run.

MDT

For the MDT, 3 different types of objects can become removed or dropped: chambers (dropped), mezzanine cards (dropped) and MRODs (removed). Dropping of chambers and mezzanines happens on the fly without a higher-level action by the DAQ Expert system; there is no stopless removal pop-up window on the run control desk, rather a message of severity WARNING in ERS indicates the chamber or mezzanine 'disabling'. A ROD removal, on the other hand, is initiated by the DAQ Expert System; it is announced by a stopless removal message in the Shifter Assistant and is preceded by the run being busy for several tens of seconds.

How to detect that an MROD has been stoplessly removed

The Shifter Assistant will announce a stopless removal of one or more ROLs (read-out links). The Resources Web View will give details about how many RODs are affected and which ones.

There is no recovery for a stoplessly removed MROD; if this happens during stable beams, please call the CSC/MDT primary on-call phone to discuss how to proceed. To re-enable the removed MROD at the next run, the MDT segment in the DAQ must be unconfigured and reconfigured, or the segment restarted (i.e. a simple STOP-START transition does not do it)!

How to detect chambers/mezzanines have been dropped

You notice when chambers are dropped from a run by

  • an alarm in the DCS alarm screen, giving the number of currently dropped chambers per partition
  • a warning message in ERS stating a CSM channel was disabled and the chamber name.
  • a message in the Shifter Assistant and a note in the Resources Web view
You notice mezzanines being dropped from a run by
  • an alarm in the DCS alarm screen, giving the number of chambers with dropped mezzanines per partition
  • a warning message in ERS stating that one or more mezzanines of a chamber, given as a mezz mask, were disabled

You can find out which chambers or mezzanines are dropped in 2 different ways:

DAQ/Run Status Panel in DCS:

  • in the DCS FSM UI navigate to the Muon Systems top panel. Click on "Advanced panels" and select MDT. Then open the "DAQ/Run Status" panel.
  • Dropped chambers appear in orange or in red. Red chambers have been marked as 'pathologic' by the DAQ: they have dropped too often within too short a time. Such pathologic chambers cannot be re-included with the Reinclude command described below.
  • Chambers with dropped mezzanine cards appear in yellow; mousing over shows the mezzanine mask of the dropped cards as a tooltip.

Dropped Recovery Panel in DCS:

  • in the DCS FSM UI navigate to the Muon Systems top panel. Click on "Advanced panels" and select MDT. Then open the "Dropped Recovery" panel, shown below.
  • By default the table shows chambers which are dropped, have dropped mezzanines, or are in recovery;
  • Optionally, all chambers or chambers filtered by name can be shown by selecting the mode 'All' and specifying a Filter pattern.
  • Note: The column Incl? indicates if a reinclude command is allowed; this is the case if a run is ongoing (in state RUNNING), triggers are active, the run is not BUSY, and the chamber is dropped but not pathologic.
  • Note: "Drop" and "#" after it indicates if the chamber was dropped and how many times from current run
  • Note: "Mezz" indicates if mezzanine(s) was (were) dropped, "#" after "Mezz" is the number time of mezzanine dropped from current run and "Mask" after "#" is the mezzanine dropping mask, each bit represents a mezzanine, for instance : 0x1 means mezzanine 0 was dropped, and 0x50 means mezzanine 4 and 6 were dropped.

Recovering Dropped Chambers and Mezzanines

Automatic Recovery

Recovery for dropped chambers and mezzanines is configured to be handled automatically during a run, with some protections against doing so if a large number of chambers/mezzanines is dropped or if the LV or JTAG state is not compatible with data taking (LV off, JTAG Not Initialized, ...).
Whether Automatic Recovery is enabled or not can be seen from the box titled 'Recovery Control' in the Dropped Recovery Panel. From the panel it is also possible to disable automatic actions in case something behaves abnormally.

Follow up after end of run

  • If a chamber was declared pathologic (described here) during a run, please perform an LV power cycle of that chamber after the end of the run (switch LV off, wait 2 minutes (!), switch LV back on, perform the JTAG initialisation)
  • If a chamber had some dropped Mezzanine cards which were not recovered during a run and no full JTAG re-initialisation ( described here) took place at the end of run, please perform a JTAG reinitialisation of that chamber

Manual Recovery during a run

Automatic recovery usually runs reliably. If the automatic recovery gives up after several attempts, please do not try to perform a manual recovery!
Only if the automatic recovery is for some reason not running as expected should you perform the recovery procedures manually.
Notes:

  • If only a single mezzanine card of an MDT chamber is dropped, do not try to recover it manually during stable beams.
  • If a full MDT chamber or multiple mezzanine cards in one MDT chamber are dropped, follow the instructions below.
  • If the manual recovery fails or the chambers/cards are dropped again after only a few minutes, do not try again (each recovery leads to a small dead time for the whole ATLAS detector).
  • Always write an eLog entry whenever you perform a manual recovery.

To manually recover dropped chambers or mezzanines, from the Dropped Recovery Panel,

  • wait for the table to be loaded
  • in the part of the panel labelled Recovery Control, click on Set Active for Manual; this inhibits automatic actions while you do things manually;
  • Dropped chambers:
    • Select the chamber(s) to recover in the table with a left click with the mouse.
    • Click on Request Reinclude to attempt to reinclude the selected chambers.
    • If a chamber immediately drops again, JTAG Reinitialize it, then request the reinclude once more.
  • Dropped Mezzanines:
    • There is no direct reinclude for dropped mezzanines.
    • Select chambers with dropped mezzanines you want to recover with the mouse in the table.
    • JTAG Reinitialize the chamber(s) and wait for them to become dropped as chambers.
    • Then click on Request Reinclude to reinclude the selected chambers.
  • Once manual recovery is done, release the manual control by clicking the button Release in the 'Recovery Control' part of the panel.

Important Note: In case there was a stopless removal done for one of the MDT MRODs, the chambers connected to that ROD will appear as dropped as well. In this case there is no possibility of recovery and the described procedure fails. Remember that it is imperative that the MDT segment is restarted/reconfigured before the next start of run after any stopless removal!

RPC

You can check if a tower has been killed by looking at the column 'Killed [t0-t6]' in the RPC L1 DDC Panel, where a '1' indicates a killed tower (all zeroes means everything is OK) and the box is marked in red. If individual trigger towers get killed (as can be seen from the RPC Busy and Rates panel), no action is required; an automatic recovery procedure will start. Shifters should take care of monitoring that the killed trigger tower has been successfully recovered. In case you see a large number of killed/busy towers for more than two minutes, call the RPC LVL1/DAQ expert immediately.

TGC

  • TGC recovery for busy RODs and abandoned star switches (SSW) is automatic. In case the automatic recovery fails, do a manual recovery as described in the section about the Recovery and Reconfigure.
Detailed information about the automatic recovery mechanism can be found here.

Changing the Run Configuration: In- and Excluding Detector Parts

Major changes to which parts of the muon detectors are in- or excluded in a run are made by the experts. Shifters may however occasionally be asked to in/exclude individual chambers. Changes to the detector configuration should always be discussed with the expert first. Shifters may further find the following useful to check the detector configuration if there are doubts.

MDT

Disabling/Enabling Chambers

To enable/disable a chamber in the DAQ, or to check which chambers are currently included, please take a look at the "Segments & Resources" tab in the TDAQ IGUI. Open the MuonDetectors -> MDT node. Navigate down to the detector region you are interested in and open the sub-node XXX_Sectors.

Disabling/Enabling Mezzanine Cards

A mezzanine card can be enabled/disabled from the MDTConfigGui panel (Muon->DAQ/Config->MDTConfigGui). Select the chamber from the list/tree in the left-side panel, and select/deselect the individual CSM channel enable bits for the mezzanine (top part of the MDTConfigGui). (Notice: CSM channels = Mezzanine #, Mezzanine channels = Single tube #.)

Disabling/Enabling a MROD

An individual MROD can be enabled/disabled from the TDAQ IGUI. Go to the Segments & Resources tab, and navigate to the MROD to be disabled (for example, MDT->MDT-BA-03->MDT-BA3-RCD->MROD-S10-BA3-07). The changes must be committed to the database (using the 'Commit and Reload' button on the top-left of the GUI). Controllers will accept reloading of the database up to run stage INITIAL.

Disabling/Enabling a MROD crate (experts)

An MROD crate can also be enabled/disabled from the TDAQ IGUI. Go to the Segments & Resources tab, and navigate to the crate to be disabled (for example, MDT->MDT-BC-02->MDT-BA2-RCD). Please note that you must disable also the corresponding ROS in this case. The changes must be committed to the database (using the 'Commit and Reload' button on the top-left of the GUI).

Muon Calibration Stream

The calibration stream is used for extracting muon segment data from the event data flow and sending it to the muon calibration centers in Munich, Rome and Michigan. The calibration stream relies on selecting muon segments in the HLT trigger using the muFast algorithm when running in the ATLAS partition.

Checking the Muon calibration Stream is Active:
To check that the calibration stream is active,

  • Open the Calibration Stream panel (shown below) from the Muon Menu Bar
  • Check the run condition: During proton collision physics, the calibration stream is only active when at stable beams !
  • On the Calibration Stream panel, which is shown here, check the entries "Readers" and "Servers"; both should indicate data traffic, i.e. values different from zero. Please note that if no combined run is active the panel might not open.
  • The rate is also visible on the TDAQ dashboard on a plot on the top right

Troubleshooting:
If the Calibration Stream panel indicates there is no data flow on the calibration stream while at stable beams(!!),

  • Check the trigger rate of MU20 triggers in the Trigger Presenter. The calibration stream requires muon trigger chains to be active at the HLT; it is seeded from MU20, so if MU20 is turned off/vetoed there are no events on the calibration stream. You may ask the trigger shifter for help; they know how to find this information.
  • Check the TDAQ Run Control tree in the TDAQ IGUI panel.
    • TDAQ --> "MuCalServerController" and "MuCalServer" under it must be running.
    • MuonDetectors --> "MuCalMonitoringController" in the run control tree must be running. "MDTCalib" is responsible for producing monitoring histograms from the calibration stream data.
  • If you find the MuCalServer or MuCalMonitoring disabled in the partition, ask the run control shifter to understand why and have this cured. Otherwise, call the MDT primary expert on call for follow-up and make a separate eLog entry stating that the calibration stream appears to be down.
  • If you see any errors related to MuCalServer, MuCalRackServer, MuCalReader, contact MDT primary expert.

  • In case of the calibration stream not running during physics, inform the calibration centers by sending an email to atlas-muon-mdtcalib-experts AT cern.ch.

ERS Messages/Errors at Run Start

Please pay attention to the ERS messages about failed initialization of PAD boards for the RPC. These messages are labelled as 'error', contain the expression 'ROSException RPC PAD init failed', and will easily cause a killed tower or an L1 alignment problem. If such errors appear after the configure, please ask the Run Control shifter BEFORE THE START OF THE RUN to restart the RPC segment in the Run Control tab of the DAQ panel, until there are no more errors. If there are stable beams, the error can be ignored in order to save time and the run will start; however, it is possible that the unconfigured tower will be killed or show L1 and/or BCID alignment errors.

ERS Messages/Errors During a Run

Monitoring and Data Quality

This section is split into an introduction to the different DQ applications used by the muon shifter, providing general instructions on how to start the tools, their configuration, options etc., and a section explaining in detail what should be checked for each muon sub-detector and what to do if data quality appears compromised.

Data Quality Applications and Tools

OHP - Online Histogram Presenter

The Online Histogram Presenter OHP is started by clicking on the button "OHP" on the DAQ Panel. Make sure you are using the muon configuration as specified here. Once the tool is open, you can

  • Navigate between the muon sub-detectors by clicking on the corresponding item in the left hand side column "Plugins", and ticking the box "XXXGlobalView".
  • Navigate between groups of histograms by selecting the corresponding tab
  • Enlarge a histogram by double clicking on it; this will open the histogram in a separate pop-up window. Note that the new window usually starts "minimized", i.e. it appears only in the bottom menu bar of open windows on your desk; click on it there to display it.
  • Zoom, change scale, etc. of a histogram via left and right mouse click operations, the OHP histograms are root objects.
Trouble Shooting
  • Histograms are missing: Check that a run is ongoing. Is the concerned sub-detector in the current partition? Check the DAQ configuration (run control tree from the TDAQ IGUI): are the Gnam applications for the concerned sub-detector enabled and running? If a Gnam application appears absent, i.e. has died, it can be restarted; ask the run control shifter to do so.

DQMD - Data Quality Monitoring Display

From the DAQPanel, make sure the database file, Setup script and partition name are correct for the running partition, and then click on the "DQM Display" button on the "Main" tab on the right.

MDT GnaMon (expert tool, not required for muon shifter)

To use gnaMon, proceed as follows:

  • Start gnaMon from the Muons --> Monitoring/DQ menu via the MDT GnaMon item.
  • Add the current TDAQ partition as data source (ATLAS partition, for a combined run). You can do this either from the gnaMon menu bar via File --> Add data Source --> Online Partition or by clicking on the 'Data Source' button in the 'Data Sources' window. Type in the partition name and hit enter.
  • Verify HitsPerTube_ge_ADC (ADC cut applied) has been chosen as histogram name. This information is displayed at the bottom of the GnaMon main panel. If you find a different setting, change it from the Tools --> Preferences menu. You also have to change the Histogram path at the bottom of the preference menu; the Histogram path should be: /SHIFT/MDT/%CHAMBER%/%HISTO% (a small illustration of the path expansion is given after this list).
  • Start auto-update for histograms clicking on the 'Auto-Update' button
  • Alternatively, you can request histograms to be refreshed/updated by Gnam pressing first the 'Publish histograms' button , then the Process button .
  • Switch to the 'Overview' panel which shows various chamber occupancy parameters, according to the selection in the drop down menu at the top of the panel, as color map. Double-clicking on a chamber opens the 'Details' panel with more information.
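
For illustration, the placeholders in the histogram path are simply substituted by the chamber and histogram names. The sketch below assumes this substitution rule; the chamber name used is a hypothetical example.

    # Sketch of the gnaMon histogram path expansion; the chamber name is a
    # hypothetical example and the substitution rule is an assumption.
    HISTO_PATH_TEMPLATE = "/SHIFT/MDT/%CHAMBER%/%HISTO%"

    def expand_histogram_path(chamber, histogram):
        return (HISTO_PATH_TEMPLATE
                .replace("%CHAMBER%", chamber)
                .replace("%HISTO%", histogram))

    print(expand_histogram_path("BIL1A01", "HitsPerTube_ge_ADC"))
    # -> /SHIFT/MDT/BIL1A01/HitsPerTube_ge_ADC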

For further details, please also refer to the gnaMon online help which can be accessed from the gnaMon menu bar.

What to Check during Data Taking: OHP

Necessary actions:

  • Compare histograms to the references (here below). If there are histograms you do not understand, they should be put in a dedicated eLog entry, and in your Muon DQ Run Summary.
  • If a histogram looks very odd compared to the reference, call the on-call expert for that system.
  • All subsystems have implemented 2D histograms vs LumiBlock. These histograms are important for spotting problems occurring during the run. In particular they might show when detector elements drop from a run and need recovery. Shifters can also use them to verify that an element was successfully re-included. Wherever possible, histograms using different sources (gnam and athena) are displayed in order to distinguish monitoring issues from DAQ issues.
  • ALERT! Note: When comparing histograms with references, please keep in mind that the references have been produced at high luminosity (high mu, 2550 bunches). So, if the current run conditions are different, the histograms might also look different.

CSC - OHP histograms (UPDATED 12.07.2017)

There are six pages with additional tabs.

  • CSC_Occupancy There are three tabs containing various histograms showing the accumulated occupancy per sector. The sectors alternate between Large and Small chambers thus giving rise to the alternating structure in the histograms. The bin at 0 just separates the C-side from the A-side and there is no sector 0. If any new "dead" layers appear, contact a CSC expert (compare to Muon Whiteboard).

    • Overview
      • CSC Signal Occupancy shows hits per sector, layer, and channel. Negative channels are phi (transverse) strip and positive channels are eta (precision) strips.
      • CSC Chamber Occupancy shows hits per sector.
      • Currently there are two layers without HV (C01-L1 and C03-L2)
      • Currently there is one sector with transverse layers fully off (C14)
      • Currently there is one sector with precision layers fully off (C15)

    • PrecisionLayers
      • CSC Precision Layer X - Sector Occupancy shows the number of precision (eta) clusters per sector and layer.
      • Currently there are two layers without HV (C01-L1 and C03-L2)
      • Currently there is one sector fully off (C15)

    • TransverseLayers
      • CSC Transverse Layer X - Sector Occupancy shows the number of transverse (phi) clusters per sector and layer.
      • Currently there are two layers without HV (C01-L1 and C03-L2)
      • Currently there is one sector fully off (C14)

  • CSC_OccupancyVsTime There are three tabs containing various histograms showing the occupancy per sector as a function of time.

    • Overview
      • CSC Chamber Occupancy Side X (Athena) shows the occupancy (2 histograms) for each sector and layer (y-axis) as a function of time (lumiblock). These histograms are filled by athena global monitoring after full event reconstruction.
      • CSC Chamber Occupancy (Gnam) shows the occupancy for each sector and layer (y-axis) as a function of time (lumiblock). This histogram is filled by CSC specific gnam monitoring.
      • Currently there are two layers without HV (C01-L1 and C03-L2)
      • Currently there is one sector with transverse layers fully off (C14)
      • Currently there is one sector with precision layers fully off (C15)

    • PrecisionLayers
      • CSC Precision Layer X - Sector Occupancy vs LumiBlock shows the occupancy in each precision layer (4 histograms) for each sector and layer (y-axis) as a function of time (lumiblock).
      • Currently there are two layers without HV (C01-L1 and C03-L2)
      • Currently there is one sector fully off (C15)

    • TransverseLayers
      • CSC Transverse Layer X - Sector Occupancy vs LumiBlock shows the occupancy in each transverse layer (4 histograms) for each sector and layer (y-axis) as a function of time (lumiblock).
      • Currently there are two layers without HV (C01-L1 and C03-L2)
      • Currently there is one sector fully off (C14)

  • CSC_LinkLosses
    • CSC Sum of Link Losses vs Lumiblock shows the average number of link losses per lumiblock. There should be zero or few (<5) losses.
    • CSC Link Losses vs Lumiblock shows link losses per lumiblock and their location in the sector.
    • CSC Map of Link Losses shows the location of link losses in coordinates of sector and link.

  • CSC_Errors
    • ROS Errors per Sector shows the CSC sectors with ROS errors. If any sectors have significant entries (> 10% of total events) a CSC expert should be contacted.
    • RPU Errors per Sector
    • CSC Status shows the CSC ROD status. The first bin reports the number of sampled events and the second bin reports the number of events with large hits. Other bins: If >10% a CSC expert should be contacted.

  • CSC_Charge
    • CSC Signal Hit Amplitudes shows the charge (in counts) of the highest-charge strip in a cluster per sector and layer.

  • CSC_Timing
    • CSC Cluster Peaking Time shows the peaking time for clusters on the A and C sides. There should be a peak between 50 and 75 ns.

MDT - OHP histograms (UPDATED 03.05.2018)

There are five pages with additional tabs.

  • MDT_Occupancy There are two tabs containing histograms showing the accumulated occupancy per chamber.

    • CalibrationStream
      • MDT X Chamber Occupancy shows the accumulated occupancy for each chamber (3 histograms). Barrel chambers are located in [-10,10] while endcap chambers are located in [-30,-20] and [20,30]. These histograms are filled by gnam running on the calibration stream.
      • Currently there is a problem with almost all endcap chambers not showing up in these plots. This is being followed up.

    • Athena
      • MDT X Chamber Occupancy shows the accumulated occupancy for each chamber (2 histograms). These histograms are filled by athena global monitoring after full event reconstruction.

  • MDT_OccupancyVsTime There are four tabs (one per detector region) containing histograms showing the occupancy per chamber as a function of time. For each ROD crate 2 histograms are displayed, which are based on different data sources (gnam and athena).

    • OccupancyBA Histograms for Barrel side A
    • OccupancyBC Histograms for Barrel side C
    • OccupancyEA Histograms for Endcap side A
    • OccupancyEC Histograms for Endcap side C
      • MDT X Occupancy vs Time (Gnam) shows the chamber occupancy (4 histograms) as a function of time for a given ROD crate. These histograms are filled by gnam running on ROD level.
      • MDT X Occupancy vs Time (Athena) shows the chamber occupancy (4 histograms) as a function of time for a given ROD crate. These histograms are filled by athena global monitoring after full event reconstruction.

  • MDT_ErrorsVsTime There are four tabs (one per detector region) containing histograms showing errors reported by gnam.

    • ErrorsBA Histograms for Barrel side A
    • ErrorsBC Histograms for Barrel side C
    • ErrorsEA Histograms for Endcap side A
    • ErrorsEC Histograms for Endcap side C
      • MDT X High Severity Errors vs Time shows occurrences of high severity errors per chamber for a given ROD crate (4 histograms) reported by gnam. Individual occurrences will not have a large impact on the data quality. In case of persisting errors, please check occupancy. You also might find related information in DQMD, DCS (dropped chambers) or ERS. Please make an eLog entry and report the chamber in the run summary.
      • MDT X Low Severity Errors vs Time shows the occurrences of low severity errors per chamber for a given ROD crate (4 histograms) reported by gnam. Usually all chambers show a certain rate of those minor errors. These histograms are included mostly for overview purpose. No special actions are needed here.

  • MDT_Charge
    • ADC in region X shows the ADC distribution for all chambers connected to a given ROD crate (16 histograms). These Histograms are filled by gnam running on the calibration stream. A reference is shown in grey on top of the histogram.
    • All histograms should be present and each should have a maximum value in the range of 100-140 ADC counts. A pedestal peak is visible below 50 ADC counts. In case of significant deviations from these general descriptions, take a snapshot and report in the shift summary

  • MDT_Timing
    • TDC in region X shows the TDC distribution for all chambers connected to a given ROD crate (16 histograms). These Histograms are filled by gnam running on the calibration stream. A reference is shown in grey on top of the histogram.
    • All histograms should be present and each should have a t0 (maximum) in the range of 100-200 TDC counts and a tmax (drop-off) near 1500 counts.
      If a histogram appears to be noisy or have an atypical t0 or tmax, report it in the shift summary.
      If one of the 16 histograms is empty or missing, cross-check it with the DQ shifter and inform the expert on call.

RPC - OHP histograms (UPDATED 11.05.2018)

Data quality problems, such as corrupted data from a board, can be recovered by the expert by stopping the ATLAS run, re-configuring the RPC and re-starting the run, or, depending on the exact type of problem, by a recovery sequence during the run. During the day (8am-8pm), call the expert in case of problems (i.e. a new trigger tower/pad missing). During nighttime, add an eLog entry addressed to RPC DAQ; in case of large problems please call the RPC LVL1 on-call expert.
Two OHP panels have to be checked regularly by the muon shifter: RPCGlobalView and RPCVsTime.
In RPCGlobalView there are 6 histogram tabs. The data integrity tab should always be checked, the hit maps especially after stable beams have been reached and the HV has ramped to nominal.

  • Data Integrity. The first 4 histograms should be empty. If this is not the case, submit a separate entry to eLog with RPC+LV1+DAQ+Monitoring as affected system and specifying the run number. Please attach a screen shot. By looking into the "DCM - BCID Global Alignment Error Fraction" and the "DCM - LVL1 Global Alignment Error Fraction" histograms it is possible to check whether a whole trigger sector lost synchronization. By looking into the on-line DQM display it is possible to understand which tower(s) is failing. The other two histograms, "DCM - BCID Subfragment Alignment Error Fraction" and "DCM - LVL1 Subfragment Alignment Error Fraction", are instead filled if, inside a sector, there is at least one CM (Coincidence Matrix) out of synch. Please call the expert if the error fraction is above 0.3 and the number of sectors involved is greater than 3 (of course if the problem involves the whole sector you should see entries in the "Global" error histograms cited above). In general one or more towers could be failing (and within each tower one or more pads, one or more CMs).
  • DetailsIntegrity These provide additional information about any errors in the data, and errors should be cross-checked with other histograms as mentioned on the plots.
  • Time. For proton collision runs, hit time distributions are all characterized by one peak centered at 0, possibly with additional minor displaced peaks and/or non-gaussian tails. During standby, where muons come mainly from cosmics, the structure can be different. Plots are reset during the warm start transition.
  • Hits. These histograms are occupancy plots for the 6 individual RPC layers (BO stations confirm 0,1, BM station pivot 0,1 and confirm 0,1) as in the readout. Problematic towers are normally masked from both readout and trigger. Check these plots for holes, especially holes in the same spot for several layers, and for larger regions.
    SL output Hit Map shows the output of the sector logic performed by the off-detector electronics and sent to the MuCTPi. Check for missing towers, report these in your shift summary. If towers are only missing in the SL plot but not the trigger hit ones, there is no need to call the expert, even if multiple towers are affected (can be due to a not centered readout window, but real signals are sent to MuCTPi).
  • TriggerHits . The top 2 plots are trigger hit maps, showing which RPC towers are active in the low-pt and high-pt trigger. You should identify any holes and compare them with known problems as listed on the muon whiteboard. If a tower is missing but not listed there, investigate in the RPC Rate and Busy panel if it was auto-killed and in DCS if there is no HV or similar. At high rate, the procedure assembling hits and trigger hits plots from different RODs can fail, if the MuCTPiTh histograms are fine, all is ok.
  • MuctpiTh. The 6 histograms show, for each P_T threshold, the map of the triggers as seen at the MuCTPi level. The plot can serve as reference of the expected coverage.
    Report the missing tower in your shift summary, with the reason if you could identify it. If a larger region/many sectors are missing instead of individual towers, or the number of un-understood towers is higher than 5, please call the RPC LVL1 expert (during daytime) or add an eLog entry addressed to RPC DAQ (during nighttime).

In RPCvsTime there is 1 histogram tab, where 5 histograms are shown.

  • DCMvsLumiBlock The first 2 histograms show, for each sector, the number of LVL1 (or BCID) errors vs LumiBlock. They should be empty. If this is not the case, submit a separate entry to eLog with RPC+LV1+DAQ+Monitoring as affected system and specifying the run number. Please attach a screen shot. If more than 3 sectors are affected please call the RPC LVL1 expert.
The histograms "DCM - Nr of Trigger hit rates vs LumiBlock" and "DCM - Nr of SL Trigger hit rates vs LumiBlock" shows, respectively, the trigger hits coming from detector and the SL proposal to MuCTPi, vs LumiBlock. If suddenly holes appears in many sectors, please call RPC LVL1 expert (during daytime) or add an eLog entry addressed to RPC DAQ (during nighttime). The last plot DCM EventSampled Rate vs LumiBlock shows the number of events sampled by GNAM for each sector vs Lumiblock. The example shows problems in sectors 2 and 3 due a ROD removed from ATLAS chain. In this case, if some holes appears, please call RPC LVL1 expert and inform the Run Control Shifter, because this means that you have problem with RPC Gnam application or with sampling data from detectors: if 2 Trigger Sector are missing it is possible that a ROD has been excluded

In RPCTimeAlignment there are 4 histogram tabs, where 32 histograms are shown.

  • For each sector, the alignment with the Bunch Crossing Id and Level1 Id numbers is shown for each electronics board. These plots just serve to identify the detector region with misalignment. The amount of the loss of synchronization is quantified in the DetailsIntegrity histograms.

In RPCAthena there are five tabs containing histograms showing the occupancy as a function of time derived from athena-based monitoring. These histograms are not meant to be used to follow up each individual horizontal white band that is visible from the start of the run. Those white bands correspond to detector sub-components currently not in operation for various reasons and might change frequently from one week to another. However, these plots are very helpful for validating the impact of larger actions (e.g. stopless removals or a TTCRestart) within a run. If a detector region (or even a full histogram) does not show hits up to the current lumi block number, it means that no good data are recorded in that particular region.

TGC - OHP histograms (UPDATED 3.10.2018)

There are seven pages with additional tabs.

  • TGC_Occupancy There are four tabs containing various histograms showing the status and the accumulated occupancy.

    • GlobalOverview
      • TGC Global Overview shows the overview of the TGC status. The Busy fraction should always be zero (empty), and the others should be 100. This plot is updated every 20 seconds. There may be empty bins during the auto recovery, but they should be back to normal within 1 minute. If any strange bin entries are observed, please make a report in eLog.

    • HitOverview
      • TGC XY View Side X shows the accumulated occupancy for one TGC side in the x-y-plane. In case of individual regions sticking out significantly, please make a report in eLog.

    • SideA and SideC
      • TGC Wire Occupancy Side X
      • TGC Strip Occupancy Side X
      • The histograms are filled at the chamber level, with the x-axis corresponding to layer and eta name, and the y-axis corresponding to the phi name of the chamber. For example, L1_E1 corresponds to layer 1 (first layer of the triplet M1), and eta station E1 (innermost endcap). A01phi0 corresponds to side A, sector 01 (max 12), and phi station 0 (max 3). If no chambers were dead, E1-E4 would have hits in all phi sectors, while the F chambers (as well as all of layers 8 and 9) would only have hits in even phi sectors. There are no strips in layer 2, so it is normal for this part of the strip occupancy to be empty.
      • Please check for holes in occupancy or hot channels. If there is a region of low or high occupancy different from this reference and that spans more than 2 chambers, report in your shift summary. Otherwise, mention that the coverage is OK.

  • TGC_OccupancyVsTime There are three tabs containing various histograms showing the occupancy as a function of time. Pay attention to suddenly appearing horizontal white (or red) bands. These would indicate that some detector region has stopped recording data (or has started to become noisy). Please make an eLog entry. You can cross-check the plots between Gnam and Athena in order to distinguish real DAQ issues from monitoring issues in only one of the two applications. In case of confirmed holes (also check ERS messages and the ShifterAssistant), please contact the TGC primary on-call.

    • Hits_Gnam
      • TGC Hits Side X (Gnam) shows occupancy as a function of time. This histogram is filled by gnam. In case of a TTCRestart of the TGCs, the histogram content will be reset and will only show entries from after the TTCRestart has completed.

    • Hits_Athena
      • TGC Hits Side X (Athena) shows occupancy as a function of time. This histogram is filled by global athena monitoring after full event reconstruction. This histogram is unaffected in case of a TGC TTCRestart.

    • SLHits_Gnam
      • TGC SL Hits Side X (Gnam) shows the trigger hit occupancy as a function of time. This histogram is filled by gnam. In case of a TTCRestart of the TGCs, the histogram content will be reset and will only show entries from after the TTCRestart has completed.

  • TGC_TriggerDetailes There are six tabs containing various histograms showing information about the level-1 endcap muon trigger status.

    • SLTriggerPTVsLB
      • TGC Side X SL Hits vs LB for PTY shows the trigger hit occupancy for the trigger threshold of PT Y as a function of time. This histogram is filled by gnam.

    • SLTriggerPTVsLB_EITileRegion
      • TGC Side X SL Hits vs LB for PTY (EITileRegion) shows the trigger hit occupancy in the TGC EI/TileCal region (1.05<|eta|<1.3) for the trigger threshold of PT Y as a function of time. This histogram is filled by gnam.

    • SLTriggerPTVsLB_FIRegion
      • TGC Side X SL Hits vs LB for PTY (FIRegion) shows the trigger hit occupancy in the TGC FI region (1.3<|eta|<1.9) for the trigger threshold of PT Y as a function of time. This histogram is filled by gnam.

    • SLTriggerPTVsLB_ForwardRegion
      • TGC Side X SL Hits vs LB for PTY (ForwardRegion) shows the trigger hit occupancy in the TGC Forward region (1.9<|eta|<2.4) for the trigger threshold of PT Y as a function of time. This histogram is filled by gnam.

    • SLTriggerRoIVsSector_Endcap
      • TGC Side X SL Hits vs Sector for PTY (Endcap) shows the trigger hit occupancy in the TGC Endcap region (1.05<|eta|<1.9) for the trigger threshold of PT Y in RoI vs trigger sector. This histogram is filled by gnam.

    • SLTriggerRoIVsSector_Forward
      • TGC Side X SL Hits vs Sector for PTY (Forward) shows the trigger hit occupancy in the TGC Forward region (1.9<|eta|<2.4) for the trigger threshold of PT Y in RoI vs trigger sector. This histogram is filled by gnam.

  • TGC_RODStatus There are two tabs containing various histograms showing information about the ROD status.

    • Overview
      • RODStatusCurrent_Side_X shows the current ROD status. Make a report in eLog if some bin shows values of '2' or '3'.
      • SSWStatusCurrent_Side_X shows the current Star-Switch (SSW) status. Make a report in eLog if some bin shows values of '2' or '3'.

    • StatusVsTime
      • RODStatusPerLB_Side_X shows the ROD status as a function of time. In case of persisting problems, please contact the primary TGC on-call.
      • SSWStatusPerLB_Side_X shows the Star-Switch (SSW) status as a function of time. In case of persisting problems, please contact the primary TGC on-call.

  • TGC_ErrorsVsTime This set of overview histograms shows the number of reported ERS messages from the DAQ per sector and side. No direct action is needed here, since the important messages will be followed up via ERS and ShifterAssistant.

    • TGC ERS Info Messages (Side X)
    • TGC ERS Warning Messages (Side X)
    • TGC ERS Error Messages (Side X)
    • TGC ERS Fatal Messages (Side X)

  • TGC_Timing There are two tabs containing various histograms showing information about the TGC timing.

    • Overview
      • TGC Sector Timing shows the time distribution of trigger hits per sector. The large majority of entries must be in the central BC (current bunch crossing). A significant amount of entries in the following or previous BC, or fully missing statistics in a sector, might indicate a problem in the trigger. Contact the TGC primary on-call in case of problems.

    • Details
      • TGC SL Timing Side X (more than PT1) shows similar information as the overview histogram, but in more detail per sector, layer, and chamber. No direct action is needed here, but it can be used to narrow down problems seen in the overview histogram.

  • TGC_BurstVeto These plots show the activity of the TGC Noise Burst Stopper. So far real noise burst events have only been observed on side C. Vetoes from side C don't need any dedicated action, unless there are larger dense clusters of lumi blocks with activity, which might indicate hyper-activity of the burst stopper module. In case of higher activity on side A (more than three lumi blocks with activity), please make a dedicated eLog entry.
    • TGC Burst Veto (Side X) shows occurrences of L1 vetos from the TGC noise burst stopper as a function of time. The y-axis indicates the duration of the veto in units of bunch crossings.

  • Expert This tab is meant for experts. It holds a full collection of detailed histograms about ERS messages per sector and SSW. No action needed from the shifter.

Central DQ - OHP histograms

The CentralDQ tab is a copy of the histograms presented to the ACR data quality shifter. It is useful for confirming or cross-checking histograms seen at the DQ desk. Please consult the ATLASGlobalHistogramsCollisions twiki for a description of these histograms. A short list of histograms is provided below.

  • CSC
    • Link Losses v. LB & Link Loss Map
    • Precision & Transverse Layer Occupancy
    • Occupancy v. LB
  • MDT
    • Fraction of Active Chambers v. LB
    • Occupancy v. LB
  • TGC
    • Occupancy & Occupancy v. LB
    • Star Switch Fraction
    • ROIs in eta-phi and L1 Triggers v. LB
  • RPC
    • Occupancy for Middle and Outer Layers
    • Timing
    • Errors v. LB and Triggers v. LB
  • Muon Track Monitoring
    • Segments
    • MS Tracks
    • Combined Muons

Overview - OHP histograms (UPDATED 21.07.2017)

The Overview tab contains various histograms to give an overview of the full Muon Spectrometer. These are the first histograms to be checked after the warm start in a stable beam run. Check that there are no largely distorted or asymmetric distributions or huge holes. However, be aware that the plots will show effects of the geometry of the Muon Spectrometer. A slight asymmetry between side A and side C around |eta|=1.2 is expected (an effect of the magnetic field on particles produced in the endcap toroids).

  • Overview
    • MuonStream shows the eta-phi-distribution of L2 muon tracks from the physics stream (no cut on muon pt)
    • CalibrationStream shows the eta-phi-distribution of L2 muon tracks from the muon calibration stream (only high pt muons)
    • L1 input to MuCTPi shows the L1MU region of interest (ROI) input to the MuCTPi
    • MuonSegments shows the eta-phi-distribution of muon segments in the HLT

What to Check during Data Taking: DQMD

CSC - DQMD

  • Outline
    • The DQRegions are laid out according to the following scheme: CSC -> Endcap A, C -> chamber -> DQAlgorithm
    • The flag merging algorithm is SimpleSummary: Takes the weighted average of the flags in a given level and propagates it to the next level. Some algorithms have lower weight, so not all red or yellow flags propagate to the top level.
    • There are currently five DQAlgorithms. They are described below
    • View histograms and results of DQAlgorithms here DQMF Web Display

  • Event Status Flag
    • Histograms: csc_is_ros_map , csc_rod_rpu_status_all_evts
    • Calculates the ratio of bad events (isROS events or RPU status bad events) to total events. Throws a red flag if the ratio of isROS or RPU_status_bad events is > 0.25 for more than 3 sectors; the flag is green otherwise. A separate flag is thrown for each endcap (a minimal sketch of this logic is given after this list).

  • Precision Occupancy
    • checks the occupancy for each layer against the average of all layers of the same chamber size. Flags for lower occupancy only, hot spots are not a problem.
    • Since this histogram needs all layers to compute averages, find the correct bin on the x axis for the specific sector and layer that was flagged.

  • Transverse Occupancy
    • checks the occupancy for each layer against the average of all layers of the same chamber size. Flags for lower occupancy only, hot spots are not a problem.
    • Since this histogram needs all layers to compute averages, find the correct bin on the x axis for the specific sector and layer that was flagged.
    • This flag has currently weight 0, so it is for information, but not propagated.

  • Landau Fit
    • Histograms: csc_F1L.A01_cluster_charge_precision_layer_3clusSeg_0
    • Performs a Landau fit for each individual layer. Compares the fitted MPV value to the expected MPV and the chi_squared of the fit to a threshold.
    • This flag has currently weight 0, so it is for information, but not propagated.

  • ROS Event
    • Checks for the ratio of missing ROD events where the ROS substitutes a dummy event.
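
For illustration, the Event Status Flag logic quoted above can be written as a small numeric test. The sketch below assumes a simple input format for the per-sector counts; it is not the actual DQMF implementation.

    # Sketch of the Event Status Flag logic: red if the ratio of bad events
    # (isROS or bad RPU status) exceeds 0.25 in more than 3 sectors of an endcap.
    # The input format is assumed for illustration only.
    def event_status_flag(bad_events_per_sector, total_events):
        """bad_events_per_sector maps sector number -> number of bad events."""
        if total_events == 0:
            return "green"
        bad_sectors = sum(1 for n_bad in bad_events_per_sector.values()
                          if n_bad / total_events > 0.25)
        return "red" if bad_sectors > 3 else "green"

    # Example: 4 sectors with ~30% bad events -> red flag for this endcap
    print(event_status_flag({1: 30, 2: 31, 5: 35, 9: 40, 12: 2}, 100))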

Aqua LED: please contact the CSC expert in case of problems.

MDT - DQMD

MDT DQMD can be seen in the top left corner of the most general page of Muon DQMD. Each small rectangle represents a MDT chamber. You can click on a given rectangle to see the histograms associated with the chamber.

  • A screen shot of MDT DQMD is shown below.
    • 2015-07-09_MDTDQM.png
  • Ignore red flags if there have not been stable beams for at least 30 minutes. DQMD requires some time to collect data after stable beams to refresh all flags. Ignore red flags during cosmic runs and any other time out of stable beams. The algorithms are configured only for collision conditions.
  • Click on any chambers flagged red or yellow, select the "Histograms" tab for that chamber, and find out which histogram has been flagged red (the square box is red)
  • Note in your Shift Summary eLog the cause of each red or yellow flag, noting
    • The chamber name
    • The histogram name
    • The reason for the red flag (see the "Results" sub-tab under the histo itself) -- please include the numbers!
  • If many histos have red flags, put in a separate eLog immediately, or call expert. (Attaching a screenshot to the eLog can be helpful.)

  • There are 5 types of histograms displayed per MDT chamber:

  • The "ADC" charge histogram
    • ADC is a measure of the amount of charge deposited in the chamber by a muon or other charged particle. The average ADC is a characteristic of the gain of the detector.
    • An example of a good ADC distribution is below.
    • MDT_ADC.png
    • We expect the main characteristic peak around 120. This peak corresponds to the hits caused by muons as they pass through the detector. There will be a smaller second bump above 200; this is caused by neutral particles such as neutrons and photons that sometimes interact with the detector.
    • The algorithm makes a noise cut at 80 and computes the average past 80. If the average ADC is below a certain threshold, the histogram is tagged yellow/red (see the sketch at the end of this section).

  • The "TDC" hit timing histogram
    • TDC is a measure of the time at which the charge from an incident muon was deposited on the central wire of an MDT tube. Muons that pass closer to the central wire will deposit charge quickly and have a smaller TDC. Muons that pass close to the edge of a tube will have a larger TDC. Because the radius vs drift time relation of the MDT tubes is non-linear, the TDC has a distinct shape, shown in the histogram below.
    • An example of a good TDC distribution is below.
    • MDT_TDC.png
    • We expect a sharp first peak followed by a plateau and then a clean drop-off after about 980 "TDC counts". This roughly corresponds to about 760 ns, the time it takes for electrons to drift from the outermost radius of the MDT tube to the central wire.
    • The algorithm finds the initial sharp rise (the t0) and the final sharp fall (tmax). If the difference between tmax and t0 is outside of a certain range, the histogram is tagged yellow/red.

  • The "Occupancy" histogram
    • This histogram shows the number of hits a given multi-layer of a chamber has received throughout the course of a run.
    • You will see a sharp jump at the beginning of collisions and stable beams. Then the hit rate will slowly fall as the instantaneous luminosity decreases over the course of a run.
    • An example of a good occupancy distribution is below.
    • MDTOccupancy.png
    • If an MDT chamber stops taking data for any reason, there will be a series of empty bins. This can be caused by another unrelated system going busy and ATLAS as a whole undergoing a TTC restart. Please check with run control to make sure that this is not caused by another system causing an ATLAS-wide stop in data taking. If no other system is causing the MDT to stop taking data, then this is specifically a problem with the MDT chamber.
    • Drops for a single or a few (< 5) lumi blocks can be due to stopless recoveries and are not a cause for concern. Drops for larger periods of time (> 10 LB), very close to each other, are a problem.
    • The algorithm checks the last 20 LB for any bins with zero entries. If the number of bins with zero entries is greater than 10 in the last 20 LB, the histogram is tagged red. If a recovery is performed and fixes the problem, once the chamber starts receiving hits again, the algorithm should return to green (see the sketch at the end of this section).

  • The N Hits Per Layer Histogram
    • This histogram is used to check for issues with individual layers or multilayers of MDT chambers.
    • During physics, we expect to see that each MDT layer has a similar number of hits.
    • If a layer or multilayer has much fewer hits than expected, this histogram will be flagged red. When a chamber is flagged red for this algorithm, leave a note in the shift summary so that MDT experts can follow up after the run.
    • HitsPerTubePerLayer.png

  • The "Errors" histogram
    • The MDT front-end electronics will output internal errors associated with data taking, data transfer, and the internal functions of the card.
    • We expect to see a certain level of "TSMERR", which occurs when the card loses a bit during the data transfer of the event. This is considered normal detector operation.
    • An example of a good error distribution is below: The Y-axis lists several possible front end electronics errors. The X-Axis shows the relevant lumi-block. Each bin contains the error rate for a given lumi-block, i.e. the total number of times a given error occurred divided by the total number of events in that lumi-block
    • MDT_ErrorRate.png
    • The algorithm has certain thresholds for the error rate. If the fraction of events with a certain error is too high, the histogram will be flagged red. Different errors have different tolerances, mainly due to the severity of the error. Errors are categorized by how much of the chamber is lost due to the error: some errors cause a loss of only 8-24 tubes, others cause the entire chamber to be lost for the event. The algorithm is applied on the last 10 lumi-blocks of a given run.
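
The ADC and Occupancy checks quoted above reduce to simple numeric tests. Below is a minimal sketch of both, assuming the histogram bin contents are available as plain Python lists and collapsing the yellow/red distinction into a single flag; the ADC average threshold used here is a placeholder, the real value lives in the DQ configuration.

    # Sketch of the MDT DQMD checks described above. Bin contents are assumed
    # to be plain Python lists; ADC_AVERAGE_THRESHOLD is a placeholder value,
    # not the one used in the actual DQ configuration.
    ADC_NOISE_CUT = 80
    ADC_AVERAGE_THRESHOLD = 100  # placeholder

    def adc_flag(adc_counts):
        """adc_counts[i] = number of hits with ADC value i.
        Average the spectrum above the noise cut; flag if the mean is too low."""
        total = sum(adc_counts[ADC_NOISE_CUT:])
        if total == 0:
            return "red"
        mean = sum(i * n for i, n in enumerate(adc_counts) if i >= ADC_NOISE_CUT) / total
        return "green" if mean >= ADC_AVERAGE_THRESHOLD else "red"

    def occupancy_flag(hits_per_lumiblock):
        """Red if more than 10 of the last 20 lumi blocks have zero hits."""
        last_20 = hits_per_lumiblock[-20:]
        empty_bins = sum(1 for hits in last_20 if hits == 0)
        return "red" if empty_bins > 10 else "green"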

RPC - DQMD

  • Click on the "RPC" button in the "DQMD Summary" window
  • Look at the "DQMD Details" window, and look for red flagged chambers in the display
  • Click on any chambers flagged red or yellow, select the "Histograms" tab for that chamber, and find out which histogram has been flagged red (i.e. has a red border), follow instruction described in the Description and Troubleshooting tabs of each histogram.
  • Note in your Shift Summary eLog the cause of each red flag.

TGC - DQMD

  • Click on the "TGC" button in the "DQMD Summary" window
  • Look at the "DQMD Details" window, and look for red flagged chambers in the display
  • Click on any chambers flagged red or yellow, select the "Histograms" tab for that chamber, and find out which histogram has been flagged red (i.e. has a red border),
  • Note in your Shift Summary eLog the cause of each red flag.
  • If many histos have red flags, put in a separate eLog immediately, or call expert. (Attaching a screenshot to the eLog can be helpful.)

Recovery from a Power Cut or DSS Action

CSC

After the RODs have been off for at least a few seconds, they will not boot up on the first attempt, but on the second. The recovery requires running a special script. This is described in the Expert Manual.

You can also recover them by running the part_CSC_tst partition (test run) 9 times. Each time, you will recover one more ROD on each endcap. Check the progress by subscribing to information in ERS. Terminate when you get a timeout error and re-configure 9 times until you reach the running state.

Special Situations

Beam Splashes

The following steps have to be done.
Before Beam Splashes:

  • Muon Run Coordination must be contacted
  • ATLAS Run Coordination must mask the Muon injection inhibit signal
  • Muon Run Coordination must set override mode for CSC/MDT/RPC (not TGC)
  • CSC HV settings should be lowered by 150V (check Muon Whiteboard or call CSC DCS on-call)
  • CSC OKS settings should be modified (unsparsified readout mode & latency change); to be done by CSC DAQ on-call
  • HV of CSC/MDT/RPC must be ramped to READY by the muon desk shifter (if not already there)
  • HV of TGC must be ramped to STANDBY by the muon desk shifter (if not already there)
Note: In case the shift leader is asking: nothing needs to be done on muon side concerning the 'ROS page sizes'
After Beam Splashes:
  • ATLAS Run Coordination must unmask the Muon injection inhibit signal (to be validated by the muon shifter)
  • Muon Desk Shifter must revoke the override mode for CSC/MDT and RPC
  • CSC OKS settings must be reverted by the CSC DAQ on-call

Latency Changes

At times when the ALFA experiment joins the ATLAS read-out, the trigger latency needs to be changed to cope with the long distance between the detectors. The full description of what needs to be done for ATLAS can be found here. The short version for muons is:

  • CSC: nothing needs to be done, it's all automatic
  • TGC: nothing needs to be done, it's all automatic
  • MDT: call the MDT primary on-call; he needs to change some database settings; afterwards a full JTAG reinitialisation of all MDT chambers is needed (rough time estimate: 30 minutes)
  • RPC: call Claudio (167260 or you can also try 64441); he needs to upload a new configuration (rough time estimate: 30 minutes)

CSC Side-A Temperature Check

NEW As of June 29th 2018, no further temperature report is needed.

At the moment (May 2018) the temperature readout of the CSC side A is not working reliably. To avoid the automatic protection switching off the LV whenever a bad reading occurs, it has been disabled. Therefore the muon shifter is asked to report the current situation at the end of the shift in a dedicated eLog entry as a reply to the entry here (link from inside P1 or link from outside P1).

Here is what you should do:

  • Go to the DCS FSM and navigate to CSC -> CSC EA -> EA Temp
  • Take a look at the display showing the maximum temperature for each CSC chamber CscATemperatures.png
  • Report how many blue, green, orange, red and yellow entries are displayed
  • Report which sector reports the highest temperature
  • Navigate to the FSM node of the corresponding chamber (e.g. EA Temp -> AS04)
  • Check which of the three sensors reports the highest value and press the corresponding icon DcsHistoButton.png
  • Select a 'Time Range' of '1 day' and zoom on the y-axis (use the mouse wheel on the y-axis) to get a meaningful trend plot
  • Make a screen shot of the 'Window Under Cursor' and attach it to the eLog entry CscATemperatureTrend.png

Please use the following template for your report and modify according to your observations:
Number of blue / green / orange / red / yellow entries: 0 / 15 / 0 / 1 / 0
Sector reporting highest temperature: AS04
Highest temperature currently reported: 37.06 C

Don't forget to attach the screenshot.

Common Synonyms and Abbreviations

Here is a list of common abbreviations and synonyms you will often encounter.

Common to all subdetectors

Synonym Explanation Comments
ACR Atlas Control Room  
CTP Central Trigger Processor handles overall trigger, luminosity blocks, final trigger veto and Busy
DCS Detector Control System previously usually known as Slow Control
DDC DAQ to DCS communication, a way by which data can be exchanged between DAQ and DCS
DSS Detector Safety System, low level PLC based system independent from DCS which will react to certain alarm conditions (fire, smoke, cooling off etc.) and bring the detector into a safe state  
IP Interaction Point  
IS TDAQ Information Service, a process running as part of the DAQ providing information on counters, states, data flow of the ongoing run
ERS TDAQ Message Reporting Service  
MuCTPI Muon Central Trigger Processor Interface  
SCR Satellite Control Room Muon SCR is in barracks 3164, last door to the left
ROD Read Out Driver, VME boards housed in USA15 responsible for event building for data from the chambers/modules/... connected to it  
TTC Trigger and Timing Control.
USA15 Service cavern underground housing most off-detector electronics. Accessible during beam operations
US15 Service cavern underground housing muon 48V CAEN generators. Not accessible during beam operations
UX15 The Atlas experiment cavern  

MDT

Synonym Explanation Comments
AMT The MDT TDC chip of the MDT front end cards, responsible for drift time measurement There is one AMT per mezzanine card/24 tubes
ASD Amplifier-Shaper-Discriminator chip of the MDT front end cards There are 3 ASDs per mezzanine card, each having 8 channels
CSM Chamber Service Module, the electronics card located on chamber which collects data from all mezzanines, builds events and handles trigger and clock distribution. There is 1 CSM per chamber
Mezzanine MDT front end electronics card, located on chamber and containing AMT and ASDs for 24 channels

RPC

Synonym Explanation Comments
SL Sector Logic Board, part of the RPC trigger chain. RODs are connected to the SL.  

TGC

Synonym Explanation Comments
SSW Star Switch, part of the TGC Readout chain  

Previous manuals, separate for the 4 muon subdetectors, can be found here. This information is in many places obsolete.

RPC Shift Manual

TGC Shift Manual

MDT Shift Manual

CSC Shift Manual

(Image attachments: 20110614_csc_rod_status.png, 20110614_csc_layer_map_prec.png, 20110614_csc_layer_map_trans.png, 20110614_csc_occupancy_prec_vs_time.png, 20110614_csc_occupancy_trans_vs_time.png, 20121015_CSC_sampling_time.png (CSC time plot, 2-samples configuration), DAQPanel.png)
Run-1 Shift Manual


Shadow Shifter Instructions (work in progress, under construction)

MuonShadowShifterManual
