Summaries February/March 2008 Run

February 4

 

  • started cooling system, noted external Pt1000 temperatures of all half-staves for reference in cooled state
  • started to power side C to verify the cables and fibres that have been repaired in the last weeks. All but one of the repaired connections are working fine. The remaining one (6C-CH0) shows an HVmax error in the bus power supply, possibly due to a connector problem; this needs to be investigated further.
  • sector 2C showed correct temperatures when cooled, but several half-staves caused a temperature interlock when the bus was switched on; this needs further investigation
  • in the afternoon a DCS integration session took place together with the ALICE DCS team

Plans for 5/2/2008:

 

  • in the morning continue the test of the repaired fibres and cables on side A
  • once all fibres and cables have been verified: power on all sectors on one side and note the temperatures for reference
  • in the afternoon an integration session with the ALICE DAQ team has been scheduled

 

February 5

 

  • the mapping of side C has been finished; on one channel (Sect 6C-CH0) the connection to the bus on the patch panel at the absorber is missing (cannot be fixed now), all other connections are working fine
  • in the afternoon the DAQ integration test scheduled with the ALICE DAQ team started; the busy fan-in board needs to be updated and some work on the top-level FSM is required; a further DAQ integration session will take place in the afternoon of Feb. 18

Plans for 6/2/2008:

 

  • complete the mapping of side A
  • start powering the full detector (first one side, then the other) to provide the temperature mapping for the cooling tuning
  • ongoing software work
  • checking of sensor leakage currents

 

February 6

 

  • The mapping of the cables and fibres on side A has been completed; all connection problems have been resolved;
  • Problems were found on router 12 and 13 and the respective LRx cards - the investigation is ongoing;
  • The complete detector was powered with the exception of sectors 0A, 5A and 6A, for which no CAEN power supplies were available; the temperatures were mapped and a first cooling tuning was carried out; sector 2, which caused a temperature interlock on Monday, was powered with the exception of HS2A-CH1 (requires a special config-file and maybe an adjustment of the limit); the measured temperatures were all around 30C and stable;

Plans for 7/2/2008:

MAGNET TESTS! Tests with the L3 magnet will be carried out all day until approximately 17:00; no access to the magnet and only supervised access to the cavern; all tests will be carried out with the external HS for the duration of the magnet tests;

 

  • Investigation of the problems observed with router 12 and 13 and the LRx cards
  • DCS software tests
  • Test of newly delivered CAEN power supplies in the lab and if possible installation in the racks

 

February 7

 

  • MAGNET TESTS! During the daytime the magnet is being tested by the magnet group.
  • Review and test of the DCS software is ongoing
  • Investigation of the problem observed with the busy card during the DAQ session is underway
  • Investigation of the problems with routers 12, 13 and 3: all problems have been traced to a conflict in the base address and have been resolved

Plans for 8/2/2008:

  • MAGNET TESTS
  • During the morning there will be an intervention in the DSS system to allow commissioning of the DSS units for the SSD. During this time no operation of the detector can be carried out. Once the DSS is back, only the external HS can be operated during the magnet tests.
  • Installation of the new CAEN3009 power supplies; this will complete the remaining sectors (0A, 5A and 6A)
  • Continuation of the review and test of the DCS software and the busy card

Plans for the weekend:

  • In principle the magnet tests could be continued (no confirmation yet).
  • The work on the DCS software will continue, to have a stable system for the remaining time of the run.

The following magnet schedule was proposed by Paul:

Mon 11:

  • 09:00-12:00: +0.2T
  • 12:00-15:00: +0.5T
  • 15:00-18:00: -0.5T

Tue 12:

  • 09:00-12:00: -0.2T
  • 12:00-15:00: -0.5T
  • 15:00-18:00: +0.5T

During these two days we can study the effects of the magnet on the detector under different conditions, i.e. whether there are any noise effects.

 

 

February 9-11

 

  • During the weekend tests of the updated DCS software were carried out in the DSF lab and on Monday the software was installed at Point 2.
  • During Monday the different DCS panels were tested with the external half-stave currently connected to position 9A-0. All tests could be completed successfully.
  • The tests on the magnet continued throughout the day and were completed by the evening.

Plans for 12/2/2008:

 

  • Today the magnet will be controlled by ALICE. The following settings will be used:
    • 9:00-12:00: -0.3T
    • 12:00-15:00: -0.5T
    • 15:00-18:00: +0.5T
  • For each magnet setting we will power one half-stave and test minimum threshold and uniformity response.
  • If time permits we will try to power more than one half-stave.

 

February 12

 

  • Three different magnet settings (-0.3, -0.5, +0.5T) were foreseen throughout the day to measure the possible noise influence on the detectors. In the morning there were problems setting the first value, which was only stable from about 11:00 on. Due to a problem with the DIP server, which resulted in a loss of the temperature reading on the A side, we did not take any data during this field setting.
  • At 12:00 the magnetic field was changed to -0.5T. During this setting we took a few runs with HS5-Sect7A (minimum threshold, uniformity). The minimum threshold was found to be around 220 in pre_vth (a schematic sketch of such a scan follows this list). The corresponding run numbers are 18397 (uniformity) and 18406 (min. th).
  • At 15:00 the polarity of the field was to be changed to +0.5T. The ramp down of the field worked fine, but when ramping up to +0.5T the magnet tripped. Three attempts were made to ramp up the magnet, but it tripped every time and no stable condition could be reached.
  • Several min. threshold scans were carried out on half-staves of the inner layer on the bottom half of the detector to check their functionality after the magnet tests. No problems were found during these tests.
  • The temperature of HS1 sector 7A increased over time and triggered the software interlock - to be investigated.
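For readers unfamiliar with the procedure, here is a minimal sketch of such a minimum-threshold scan. All helper names are hypothetical stand-ins for the real DCS/DAQ calls, and it assumes that raising the pre_vth DAC lowers the effective threshold (consistent with the safer operating value of 180 used for data taking elsewhere in these summaries):

    # Hypothetical sketch of a minimum-threshold scan: raise pre_vth (lowering
    # the threshold) until the matrix starts firing on noise; the last quiet
    # value is the minimum threshold (~220 for HS5-Sect7A here).
    def minimum_threshold_scan(set_pre_vth, matrix_is_quiet,
                               start=180, stop=255, step=5):
        """set_pre_vth / matrix_is_quiet stand in for the real control calls."""
        last_quiet = start
        for dac in range(start, stop + 1, step):
            set_pre_vth(dac)
            if not matrix_is_quiet():   # noise hits appear: threshold too low
                break
            last_quiet = dac
        return last_quiet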

Plans for tomorrow:

 

  • Due to the trips observed the magnet tests have been extended to the working hours of 13/2/2008.
  • We plan to concentrate on DAQ related tests during the day (busy card, DAs, check of scan panels).
  • When a stable field condition has been reached (or after the magnet tests) the heating up of HS1-Sect7A will be investigated.

 

February 13

 

  • Due to the magnet trip that was observed on Tuesday 12/2/2008 during the +0.5T setting it was decided to continue the magnet tests also on 13/2/2008. From 11:00 the field was stable at +0.5T until about 17:00.
  • The test of the busy card was ongoing with one card working by the end of the day. Both cards were installed into the crates in CR4.
  • Sector 7 was fully powered and the cooling was fine tuned in order to avoid the temperature interlock that was caused by HS7A-1. Throughout the evening the sector operated at stable temperatures with the new settings.
  • Noise runs, minimum threshold scans, delay scans and uniformity scans were carried out on two half-staves (7A-0 and 7A-1) to test the DAs. The runs were not yet started from the DCA. All panels worked well and the scans finished correctly. Only during the uniformity scans did some data errors occur in the first events; this has to be investigated. The reference displayer for the DAs was used to look at the data.

Plans for 14/2/2008:

 

  • During the day a general ALICE DCS integration session is scheduled.
  • Continue working on the busy card.

 

February 14

 

  • The general DCS integration session started in the morning. A problem was observed with the general DNS server, which persisted throughout the day. First commands could be sent successfully from the general DCS to our FSM.
  • The problem with the busy card has been resolved and both cards were tested successfully up to 400kHz.
  • A problem with the optical connection between Router 1 and the RORC was seen (error code 313). The DAQ group was informed about this problem and the investigation is ongoing.

Plans for 15/2/2008:

  • The general DCS integration session will continue.
  • In the morning a general DATE upgrade is foreseen - no DAQ is available during this time.
  • In the afternoon we will try to run with the external trigger (ACORDE) to prepare for the delay scans foreseen for the weekend.
  • The DAs have to be tested when started from the DCA.

 

February 15

 

  • Throughout the day the SPD participated in the DCS integration session. Commands were sent from the general DCS to the SPD successfully. In a later stage the SPD was put under ECS control.
  • During the DCS session problems were observed related to the DNS server. This problem is a general one and not specific to the SPD.
  • Further work was done on the control of the digital pilot. Writing to the digital pilot works fine, there are still some issues with the read-back that need to be resolved.
  • Again problems have been observed with the 48V power supply on side A (channels 0-4). After switching on, the power supply is stable for some time before channel 000 on side A trips.

Plan for the weekend 16+17/2/2008:

 

  • The plan is to carry out delay scans with the maximum number of sectors powered. With the strobe set to the maximum possible (15), the delay region (40-50) can be scanned in 4 steps (the resulting settings are sketched below). ACORDE will be used as trigger, with an expected rate of ~1/min in the SPD.
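The arithmetic behind this plan, as a small sketch (pure bookkeeping, no real DAQ calls; the even spacing of the steps is an assumption for illustration):

    # Sketch of the planned delay scan: with the strobe at its maximum (15),
    # the delay-ctrl region 40-50 is to be covered in 4 steps.
    delay_lo, delay_hi, n_steps = 40, 50, 4
    step = (delay_hi - delay_lo) / (n_steps - 1)
    settings = [round(delay_lo + i * step) for i in range(n_steps)]
    print("delay-ctrl settings:", settings)   # -> [40, 43, 47, 50]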

 

 

February 16+17

 

  • Thanks to our volunteers we have been taking runs with different delays continuously from Saturday morning until Sunday evening. The part of the detector that was powered operated stably throughout this time.
  • The following sectors/channels could not be powered due to different problems:
    • SIDE A:
      • Sectors 0A, 1A, 2A, 3A, 4A: 48V power supply trips after short time
      • Sectors 5A, 6A: have never been tested before (new LV power supplies)
      • 7A-2 HS always in busy
      • 8A-2 missing HV recipe
      • 8A-5 missing HV recipe
      • 8A-3 ?
      • 9A-0 external HS
      • 9A-5 problem with external Pt1000 chain
      • 9A-3 HV trip
    • SIDE C:
      • Sector 5C: always busy, probably from one HS
      • Sector 9C: problems with router 19
      • 0C-0 gets hot after some time
      • 2C-1 gets hot after configuration
      • 3C-1 busy problem
      • 3C-3 HV trip after some time
      • 5C-3 gets hot after some time
      • 6C-0 LV PS problem on the bus
      • 6C-3 HV trip
      • 7C-3 problem with configuration
  • RUNS: The delay has been scanned between delay-ctrl 40-50 in steps of 2. The misc was set to 192. All runs lasted about 4 hours with a total of ~1.2 million triggers each (see the consistency check after the run list). The trigger input was ACORDE (~80-100 Hz rate) with run type standalone_pulser. pre_vth was set to 180.

 

    • run 19586: delay 40/192
    • run 19616: delay 42/192
    • run 19625: delay 44/192
    • run 19626: delay 46/192
    • run 19644: delay 48/192
    • run 19653: delay 50/192

We will require offline analysis to see which is the best delay setting. Please keep me and everybody else informed if you have any interesting plots. We will continue the delay scans later this week.
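For reference, the quoted statistics are self-consistent, as a quick check shows (plain arithmetic, no new data):

    # ~80-100 Hz of ACORDE triggers over ~4 h per run indeed gives the
    # quoted ~1.2 million triggers per run.
    rate_hz, duration_h = 85, 4        # mid-range rate, approximate duration
    triggers = rate_hz * duration_h * 3600
    print(f"~{triggers / 1e6:.1f} million triggers per run")  # -> ~1.2 million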

Plans for tomorrow, 18/2/2008:

 

  • DAQ integration session planned to start at 16:00.
  • In the morning the detector will be powered up and prepared for the DAQ run.

 

February 18

 

  • In the morning the detector was powered and prepared for the DAQ run. Approximately 60% of the detector was operational. The other part could not be powered for various reasons (see run summary 16+17/2).
  • Around lunch time we lost the reading of the temperatures via DSS. This also occurred once during the December run. The detector stays powered and is protected, but the error cannot be released from the FSM. The loss of reading was resolved by IT/CO restarting the DIP server.
  • In the afternoon the SPD had a dedicated DAQ integration session, where the detector was controlled via ECS and dead times were measured at various trigger rates. The test was carried out with 3 LDCs. The SPD has now fully passed the DAQ integration session. You can find all details of the test in the electronic logbook under "SPD".
  • In the evening the first DAs were tested successfully, e.g. run 19721 which was a minimum threshold run.

Plan for 19/2/2008:

 

  • The remaining faulty channels will be investigated.
  • The three sectors for which we got the LV power supplies last week will be fully checked (0A, 5A, 6A).
  • Depending on the status the I-V curves will be taken on each HS and the HV settings will be fixed for the READY state - this will probably have to be continued on 20/2/2008.

 

February 19+20

 

  • The last two days were dedicated to solving the remaining cable problems and problems with the power supplies.
  • A replacement for one 48V power supply on side A was installed and now sectors 0A-4A can be powered and operated.
  • Sectors 5A and 6A have been fully tested and verified (these had received new 3009 modules and had not yet been fully tested).
  • Problems on router 15 have been solved and could be traced to one LRx card.
  • The problem with router 19 still persists and is under investigation.

 

  • The small scintillator has been installed in ALICE. The 80 cm x 80 cm scintillator is placed 3.5 m underneath the SPD. The cosmic rate measured is 6Hz. The signal from the scintillator can now be used as trigger.

Plans for tomorrow:

  • The work on router 19 will continue.
  • I-V curves of all half-staves.
  • Delay scans.

 

February 22

 

  • 98 of 120 half-staves were powered this morning. Some of the 22 missing half-staves had temperatures too close to the software interlock limit to be powered. A tuning of the settings of these half-staves (pre_vipreamp) was started to reduce their power consumption without compromising performance (see the sketch after this list).
  • The cooling has been tuned to give some margin to half-staves with temperatures close to the software limit.
  • The problems with router 19 could be resolved. Now all routers are functioning correctly!
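As a schematic illustration of the "too close to the limit" bookkeeping used here, a small sketch (the limit and margin values below are placeholders, not the real settings):

    # Hypothetical sketch of the software-interlock bookkeeping: half-staves
    # are compared against the software temperature limit, and those too close
    # to it are candidates for cooling or pre_vipreamp tuning before powering.
    SOFT_LIMIT_C = 40.0   # placeholder value, not the real limit
    MARGIN_C = 5.0        # placeholder margin defining "too close"

    def classify_half_staves(temps_c):
        """temps_c: mapping of half-stave name -> Pt1000 temperature in C."""
        status = {}
        for hs, t in temps_c.items():
            if t >= SOFT_LIMIT_C:
                status[hs] = "software interlock"
            elif t >= SOFT_LIMIT_C - MARGIN_C:
                status[hs] = "tune cooling / pre_vipreamp first"
            else:
                status[hs] = "ok to power"
        return status

    print(classify_half_staves({"2C-1": 38.0, "7A-1": 31.0}))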

Plans for 23/2/2008:

  • The tuning of the half-stave parameters will continue to find the best configuration settings
  • Calibration runs will be taken to check the detector performance and the DAs

Plans for Monday (start of the global run):

  • All detectors that are in a state to be controlled centrally by DCS will be joined in a global run at around lunch time. Those detectors that do not yet comply with the requirements will get special support from DCS and will be joined at lunch time on Tuesday or Wednesday.
  • During next week the magnetic field will be on.
  • All detectors will be operated continuously and managed centrally by ECS after the first days.
  • Today in the run meeting it was announced that there will be no HV on the TPC for several weeks. Investigations are required after a trip in the field-cage two nights ago.

 

February 25

 

  • Since today we have 24-hour shifts and the detector is operated continuously.
  • Over the weekend all half-staves were tested for their functionality and a map of the detector was produced. Some half-staves still have to be investigated from the readout point of view, and some have temperatures before/after configuration which are too close to the software interlock limit.
  • On Monday morning 82 half-staves have been powered and configured. The power-on process could be done within about one hour thanks to the detector map.
  • At lunch time the SPD joined the general DCS with one half-sector. Several commands were executed successfully on all included detectors. In parallel, the investigation of the readout of some sectors was carried out.
  • After lunch all operational half-staves joined the global ECS test together with 10 other detectors. This test is ongoing, with the aim of taking several hours of stable data until tomorrow morning.
  • During the day the magnet was tested, but after trips it was suggested to continue the magnet tests tomorrow.

Plans for 26/2/2008:

  • Continue investigation of readout channels which give problems (e.g. routers 3 and 4).
  • Check tuning for half-staves with temperatures too close to software limit.
  • For the afternoon another global run is foreseen which should include more detectors than today.

 

February 26

 

  • During the night shift the SPD participated in the global run with 8 detectors. The run number is 21861, with a duration of 30 minutes, using the ACORDE trigger.
  • We participated in the morning in the general DCS test of the DIM-DNS. Only two detectors were able to participate (SPD and HMPID) in this test. From the SPD we participated with sector 7 in this activity.
  • With the rest of the detector we tried to include the half-staves that have temperatures too close to the software limit. The cooling was tuned and 11 out of 17 half-staves that showed this problem could be included with temperatures of about 32 to 34 C. The detector was then configured and prepared for the global run. The complete list of channels that were recovered can be found in the electronic logbook.
  • For several hours we had problems joining the global run, as routers went into busy after some time. A first suspicion was that the temperature on the MCM was too low (16C), which could cause problems with the optical component. However, the temperatures could be read from the MCM, meaning that the communication is working. This might indicate that the temperature is too close to the margin on some half-staves. Detailed investigations will be necessary.
  • The cooling settings were brought to an intermediate setting and 84 half-staves (the list is available in the electronic logbook, entry 27/2 at 10:00) were configured and operating stably. The SPD joined the global run for the rest of the night. Several runs were taken with up to 9 detectors, e.g. 22247 (39 min), 22251 (49 min), 22252 (2 h), 22254 (1 h) and 22255.

Plans for 27/2/2008:

  • investigate the readout channels that gave problems, measure optical output in CR4
  • join the global run in the evening

The following schedule was presented by the RC in the run meeting:

  • 08:00-12:00: DCS sessions (might not include all detectors)
  • 12:00-13:00: standalone runs
  • 13:00-18:00: preparation of the global run
  • 18:00-08:00: global runs

 

February 27

 

  • Until about 8 am the SPD participated in the global run with up to 9 detectors (see run numbers in my summary from 26/2/2008)
  • A trip occurred in one HV channel (7C-3); its current had been increasing continuously since 10 pm on 26/2.
  • After 8 am an investigation of all readout channels started in CR4, measuring the optical power and testing the possibility to configure the MCM. 4 channels were found in routers 3 and 4 which will require further investigation; these two routers are currently excluded from the data taking. All channels were tested with MCM stimuli and the data were recorded on the LDC.
  • The disks of the LDC are already quite full and it is not easy to move the local data to another location. If possible please use the mstream recording, which will put the data onto Castor.
  • 78 half-staves were included in the global run in the evening (trigger ACORDE). The list is available in the electronic logbook.

Plans for tomorrow:

  • Continue investigation of the 4 channels in routers 3 and 4 and try to get them to work
  • Join global run in the evening

 

February 28+29 (+weekend)

 

  • During the night shift of 28/2 the SPD had joined the global run. Several runs were taken, each with a duration of 1 to 3 hours. The runs were stopped and restarted when detectors needed to leave or join the partition. On some occasions the SPD stopped the run because of a loss of communication with the DIP server (DSS system). This results in the loss of the temperature reading via the DSS, and an error is propagated to the top node which stops the run. During such a server time-out the detector is, however, fully protected by the interlock. For the current run a temporary solution has been implemented in the FSM (sketched after this list): the loss of the DSS reading is indicated in the top-node panel (green LEDs that turn red in case of server connection problems) and the error is for the moment not propagated to the top node.
  • The configuration of all operated half-staves (76) was verified during the evening and night shifts of 28/2 and 29/2. For each half-stave the delay was checked (found 34/192 with the test-pulse) and a uniformity scan was carried out. For about 20 half-staves the config-files have to be adjusted to improve the matrix response. All calibration runs executed without problems. A summary of the tests can be found on the Wiki page in the form of an Excel file.
  • During the stable magnet operation on 29/2 the SPD participated in the global run from 12:00 to 13:45. The magnet tests will continue on Monday.
  • In the afternoon of 29/2 a common data header error appeared in the SPD during a global run. At the same time work was ongoing in the trigger partition. A cross-check in standalone mode could not reproduce the error. Later in the afternoon the SPD joined the global run again and no CDH error appeared.
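A minimal sketch of the temporary FSM behaviour described in the first item above (all names are hypothetical; the real implementation lives in the FSM framework):

    # Hypothetical sketch: a lost DIP/DSS connection turns the top-node LED
    # red and is logged, but is deliberately NOT propagated as an FSM error
    # that would stop the run; the hardware interlock keeps the detector safe.
    class TopNodePanel:
        def set_led(self, name, colour):
            print(f"LED {name}: {colour}")
        def log(self, message):
            print("LOG:", message)

    def on_dss_connection_change(panel, connected):
        panel.set_led("DSS", "green" if connected else "red")
        if not connected:
            panel.log("DSS/DIP reading lost - detector protected by interlock")
            # temporary solution: no FSM error raised to the top node here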

Plans for the weekend and the coming week:

 

  • The production run will start on Monday 8am. The magnet tests will continue in parallel on Monday. The data taking will continue until 9/3 at 16:00.
  • During the morning and part of the afternoon on Saturday it is planned to investigate the remaining problems with routers 3 and 4 and to try to recover half-staves that have temperatures too close to the limit.
  • On Saturday the updated configuration files will be generated and used.
  • Before going into the global run on Sunday evening a series of calibration runs should be carried out with all working half-staves (uniformity, minimum threshold...).

 

March 3

 

  • In the morning a general problem with the DIM DNS was discovered, which affected all detectors. It took until about lunch time to restore normal operation.
  • Most of the day was dedicated to the investigation of the busy problem that appeared on the weekend: the busy is raised in different routers on starting a run. This problem has appeared randomly in all routers since Saturday.
  • In the evening of 3/3/2008 the configuration files were changed back to the newly adopted ones from 29/2 and 1/3 (improved APIL and DPIL settings on a few half-staves). The original files are in the folder "ConFile_original".
  • In the evening all channels were investigated to check if they can be powered on correctly and configured. A complete list of power-able and configurable channels can be found in the logbook.
  • There were several attempts to join the global run in the evening which did not work due to busy problems from the SPD. After these attempts the LTU for the SPD was restarted by the trigger expert and standalone runs were carried out. These runs could be started and stopped correctly.

Plans for tomorrow:

  • Include all powerable half-staves in the data-taking and join the global run
  • In total 80 hours of running with the ITS are required in the next days to acquire a sufficient amount of alignment data.

 

March 4

  • The SPD joined the global run in the night shift of 4/3 with 14 routers and 59 half-staves. The reduced number of routers and channels is still related to the busy problem; certain channels had to be excluded to allow stable running. The global run with the ACORDE trigger continued until mid-morning, when the run was interrupted several times by different problems. The SPD stayed in the global partition until about 15:45.
  • At about 15:45 we went into a standalone run to further investigate the busy problem. A series of calibration runs (uniformity matrix) was taken with all 20 routers. After removing router 4 (which was already causing problems last week) the runs could be completed without errors.
  • From 19:15 on the SPD joined the global run with the TOF trigger (2 segments). The trigger rate in this case is ~1Hz. According to the TOF experts the trigger latency is different from the ACORDE settings, thus the delay setting of 50/128 with a strobe-length of 15 has to be verified. One run of 7.5 hours and one of 4 hours have been completed successfully.

Plan for tomorrow:

  • Continue the global run (with the ACORDE trigger) and the ITS to accumulate the equivalent of 80 hours of data for alignment.
  • If possible include routers 6+7 in the global partition.

 

March 5-9

 

  • The SPD has been running continuously in the global partition in the last days (with up to 82 half-staves in the data taking). Occasionally one of the detectors stopped the run and, after a reset, a new run was started. For the SPD, in most cases when a problem occurred an error message about a common data header mismatch was seen in the DAQ monitoring window. A reset of the routers and the detector cured the problem and a new run could be started. This problem will be investigated in detail.
  • 26042 is the last run taken on Sunday with ITS and ACORDE in the ALICE partition. During the last days two other partitions were also running in parallel - TRD, TOF, T0 and MTR, MCH, V0.
  • At 16:00 the runs were stopped and the detectors started to switch off. The SPD was fully powered off, followed by powering off the 48V supplies. After this the cooling plant switched off; this was traced back to the PLC being in stop-mode. The DSS actions connected to the plant being off were triggered correctly. The cooling piquet was contacted to come and reset the PLC.
  • During the last hour of data taking a snapshot of the detector currents, voltages and temperatures was taken. The xls file can be found in the Wiki page.
  • The next cosmic run is scheduled to start on April 28 continuing until first beam. A detailed schedule will be circulated as soon as it becomes available.

 

 

Summaries December 2007 Run

 

December 10

 

  • a standalone DAQ run with 19 routers was carried out successfully
  • the cooling plant seems to work fine and the DCS control to it was checked and found ok
  • all HS temperatures have been measured at ambient temperature and with the cooling switched on. The results show the same channels with higher values as were found in the DSF tests
  • a problem in the cooling plant to DSS connection was found and needs to be investigated. As this interlock does not seem to be correctly in place, we plan not to switch on the detector before this problem is understood.

Access to the cavern is only allowed between 18:00 and 19:30 in the evening due to magnet tests. We need to access the cavern in order to investigate the cooling-DSS problem. Therefore the plan for tomorrow is:

 

  • systematic cooling checks in the morning
  • finalize archiving software for cooling plant
  • switch on external half-stave to test the FECS (front-end control system)
  • get access as soon as possible to check the cooling-DSS connections

 

December 11

Here is a short summary of run day 2. We were granted supervised access during the morning to investigate the cooling-DSS problem. Here are the actions of today:

 

  • The temperature of all half-staves has been re-checked in the thermalized state
  • The cooling-DSS problem has been checked and solved by rewiring the connections at the cooling plant rack. Now the correspondence between cooling line and DSS channel is as expected (1 → 0, 2 → 1, 3 → 2, ...) and the problems observed yesterday were corrected
  • Regenerating the correct connections in the DSS software worked fine with the exception of sectors 8 and 9 (DSS software problem); the expert from IT has been informed and the problem will hopefully be resolved tomorrow
  • All Pt100 sensors (cooling plant) have been checked and found ok, thus all heaters work fine
  • The Freon has been refilled
  • The cooling plant has been switched off to check the recovery rate over night
  • The FECS panels have been finalized and implemented in the user interface
  • Access rights for the operator node have been reviewed and updated

The plans for tomorrow are as follows:

 

  • Switch on the cooling plant and check status
  • Test FECS with external half-stave
  • Switch on first MCMs....?

 

December 12

 

  • Yesterday's problem with the DSS has been resolved, as well as another one that appeared during the day
  • All cooling lines have been checked individually in the DSS and the interlocks were found to work correctly
  • The MCM of the external half-stave was powered, but an alarm from the cooling plant prevented further measurements (the alarm correctly triggered the interlock)
  • The weight of freon in the plant recovered at a slower rate and to a lower value than expected. The cooling lines on the mini-frame and the plant were inspected and a few defective valve end-caps were found, which probably led to freon leaking from the system. The caps were sealed and a recovery was initiated, which showed a faster recovery rate.

Plans for tomorrow:

  • test with external half-stave
  • switch on MCMs, test, ...

 

December 13+14

13/12/2007:

  • the cooling plant was put in run mode at ~4:00 am and has been operating stably since
  • 5 MCMs were powered and configured on sector 5C

14/12/2007:

  • systematic check of the temperatures read by DSS and MCM on all cooling lines, comparing DSS values with and without the MCM powered; the MCM_ON temperature varied between sectors (from ~12C to ~30C); powering the bus on a well-cooled sector resulted in a temperature interlock
  • therefore a tuning of the cooling line was carried out (sector 7)
  • successfully powered all MCMs and buses on sector 7C, all HS stable between 25 and 32C
  • started to configure the HS; the automatic configuration was not always successful and needs investigation
  • pre_vth scan with HS 0, 2, 3 and 5 (run 12555)
  • an attempt to enter the global run was prevented by a busy from router 7 (sector 7A); a reset and reboot of the crate had no effect; busy and FO were disconnected on this router - needs investigation
  • I-V curves were taken on all HS of sector 7C up to 50V; the maximum leakage current was ~2uA at 50V (see the sketch below)
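A minimal sketch of how such an I-V curve is taken (the control calls are hypothetical stand-ins for the real HV interface):

    # Hypothetical sketch of an I-V measurement: ramp the sensor bias in steps
    # up to 50 V and record the leakage current at each point, as done here
    # for all half-staves of sector 7C (max leakage observed: ~2uA at 50V).
    def take_iv_curve(set_bias_v, read_current_ua, v_max=50, v_step=5):
        """set_bias_v / read_current_ua stand in for the real HV calls."""
        curve = []
        for v in range(0, v_max + 1, v_step):
            set_bias_v(v)              # settling time omitted for brevity
            curve.append((v, read_current_ua()))
        return curve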

Plan for tomorrow:

  • systematic check of temperatures (cold, MCM_on, bus_on) on other sectors
  • establish list of half-staves/sectors on side C that can be powered correctly
  • take I-V curves on all power-able half-staves
  • configure power-able half-staves
  • take data (pre_vth scan, ....)

 

 

December 15+16

 

  • A problem with the cooling plant control panel was discovered in the morning; the card with the settings for the SPD plant interface had to be retrieved before stable running could be resumed
  • All sectors which are equipped with power supplies have been powered (MCM and bus) and currents and temperatures were checked. Most half-staves operate at around 30-32C with the bus powered.
  • A few channels with problems (HVMax, HV trip at 4V) were detected and need investigation; I-V curves were taken on two sectors.
  • A problem with the configuration procedure of the half-staves was discovered; an investigation was carried out and the problem was partially fixed - new panels were developed and need to be tested tomorrow.
  • Up to four half-sectors were powered simultaneously and operated stably for about an hour. When powering all buses simultaneously in one half-sector, a small increase in temperature (~a few C) was observed for a few minutes before returning to the previous value.
  • An increase in temperature on one half-stave triggered the software interlock; similar spikes, but with smaller amplitude, were also seen on other half-staves. The reason for this is not understood.
  • Two half-sectors were powered and configured (3C and 4C) and standalone runs with test-pulse (uniformity matrix with one line enabled) were carried out.
  • The SPD joined a global run for the first time (with the SDD), with sectors 3C and 4C, at 1:11 am, with triggers provided by the small scintillator. In a new global run the trigger was changed to ACORDE. >100k triggers were accumulated (run number 13071).

Plan for tomorrow:

  • inspect data files taken this night
  • power on sectors, complete I-V measurements on all sectors and investigate temperature effects when several sectors are powered
  • complete list of half-staves that can be included in data taking (no cable problems, ......)
  • test new configuration panels and configure as many sectors as possible
  • carry out calibration runs and join global runs

 

 

December 17+18

 

  • the cooling has been operating stably since yesterday, after solving the problem of the temperature spikes (wrong connections on the chiller)
  • the SPD participated on Monday night in the DAQ stability test with the other detectors (running for several hours with the pulser)
  • during the night to Tuesday, delay scans were carried out with the ACORDE trigger to find the correct delay; the rate however is very low and the analysis is under way (the correct delay is estimated to be between delay_ctr 35 and 45)
  • on Tuesday sectors 2A and 3A were powered and the cooling was fine tuned
  • an additional 48V PS was inserted into the mainframe
  • the power plug of the mainframe was not connected to the rack distribution, but via an extension cord to a plug in the wall. This was corrected around 17:00
  • the bus on sector 5C has now been connected and the external half-stave has been disconnected. The connections need to be checked before powering half-sector 5C
  • today the SPD participated in the global cosmic run in the evening (ACORDE trigger) - 5 sectors top barrel
  • At midnight (18/12) a global run with the other ITS detectors was started (trigger: small scintillator, rate 3Hz) - a total of 8 sectors (top and bottom, side C) - run 14574 and subsequent runs

Plans for tomorrow:

  • continue ITS global run for as long as possible
  • trigger busy measurement 12.00-16.00 (Federico)
  • planned to run with TPC in unchecked partition in the evening

 

December 19

 

  • stable running during the night shift (all ITS), DAQ problems required starting new runs several times
  • trigger test (Federico), measurement of the busy time carried out
  • standalone run with trigger (required fixing of trigger cable by expert), run no: 14917, small scintillator
  • global run with TPC in unchecked partition

 

December 20

 

  • runs with ITS and TPC continued (small scintillator, 0.3 Hz); several runs with different delays were carried out (delay ctr: 45-50, strobe: 15), see logbook
  • test with a random trigger (trigger rate 40 MHz)
  • test of the new setting for the external trip of the HV modules (on sector 7) worked fine

Plans for switch off:

  • we will start powering down the detector in the evening shift of 20/12/2007. The freon will be recovered over night. During the Christmas shutdown the detector (electronics, power supplies and cooling) will be fully powered off.
  • therefore the shifts for the night and day of 21/12 are cancelled

-- Main.PetraRiedler - 20 Dec 2007