Atlas setup in N221
Last update: 17 May 2022
FPGA kits and FPGA IDs
For some thoughts on SMBUS, see SMBUS_for_FELIX_cards
For instructions on how to program and use the cards / machines below, see FELIX at Nikhef readme
Optical patch panels and their connections are described here: Optical_Patches_Atlas_TDAQ_Lab
Slides with graphical representation of configurations for 1 MHz tests, version of 11 February 2022
Card | Project | Card S/N | JTAG host / Cable ID | Card host | PCIe Slot | Card DNA | Transceiver channels |
BNL-711 | | | rother/xilinx_tcf/Digilent/210249A06135 | agogna | | 0x0105122244b0c205 | 48 |
BNL-712 | | | rother/xilinx_tcf/Digilent/210249A85CD6 | agogna | | 0x0105968534800345 | 48 |
PRIME-712 | ITk | 161 | rother/xilinx_tcf/Digilent/210249A85F02 | gimone | | 0x013a6f281c2143c5 | 24 |
PRIME-712 | | 3 | rother/xilinx_tcf/Digilent/210249A14F75 | seudre | | 0x011822432c808585 | 48 |
VC-709 | ProtoDUNE | | rother/xilinx_tcf/Digilent/210203A103E6A | seudre | | | 4 |
PRIME-712 | | 50 | rother/xilinx_tcf/Digilent/210249A14F61 | gimone | | 0x013a6f281c40c145 | 48 |
PRIME-712 | ITk | 160 | rother/xilinx_tcf/Digilent/210249A85FDE | srvgst002 | | 0x013a6f281d410205 | 24 |
| | | | brembo | 2 | | 6 |
| | | | brembo | 5 | 0xc0000000000000000131a30132a0e142 | 12 + 16 (FMC+) |
BNL-182 | | | rother/xilinx_tcf/Digilent/210308B3596C | brembo | 5 | | 24+4 |
VCU-128 | | | rother/xilinx_tcf/Xilinx/091847100612A | brembo | 7 | | 16 |
BNL-712 | | | rother/xilinx_tcf/Xilinx/000013e80bf801 | turano | | 0x0117ed6124f0e2c5 | 48 |
HTG-710 | | | rother/xilinx_tcf/Digilent/210249854652 | argos | | | 24 |
Fanout rack

Main rack layout, from bottom to top
Host | CPU | RAM | IPMI Address | *Remarks | Chassis | Motherboard | PCIe slots | Ethernet |
agogna | i9-7940X, 3.1 GHz, 14 cores, runs at 3.8 GHz | 32 GB | 192.168.0.46 | 4U | Asus WS X299 PRO_SE | 2 x 16 lanes (PCIEX16_1 and _2), 1 x 8 lanes (PCIEX16_3), 1 x 4 lanes (PCIEX16_4) | 2x 40 GbE, 100 GbE | |
argos | E5-1620v1, 3.6 GHz, 4 cores | 16 GB | 192.168.0.41 | 4U | Supermicro X9SRE-3F | 1 x 16, 1 x 8, 1 x 4 lanes | 2x 10 GbE, 2x 40 GbE | |
canche | Xeon Gold 5118, 2.3 GHz, 12 cores | 48 GB | 192.168.0.36 | 2U | Supermicro X11SPW-TF | depends on riser card: 5 x 8 lanes or 2 x 16 lanes + 1 x 8 lanes | |
calore | E5-1650v2, 6 cores | 16 GB | 192.168.0.44 | ROS PC | 2U | Supermicro X9SRW-F | depends on riser card: 5 x 8 lanes or 2 x 16 lanes + 1 x 8 lanes | 2x 40 GbE |
turano | E5-1660v4, 3.2 GHz, 8 cores | 32 GB | 192.168.0.45 (via en01) | 4U | Supermicro X10SRA-F | 2 x 16 lanes, 1 x 8 lanes | 2x 100 GbE | |
gimone | AMD EPYC Rome 7302P, 3.0 GHz, 16 cores / 32 threads | 8x 16 GB DDR4-3200 ECC REG | 192.168.0.43 | 4U | Supermicro H12SSL-i Mainboard | 5x PCIe 4.0 x16, 2x PCIe 4.0 x8 | |
seudre | E5-1650V4, 3.6 GHz, 6 cores | 32 GB | 192.168.0.42 | 4U | Supermicro X10SRA-F | 2 x 16 lanes, 1 x 8 lanes | 2x 40 GbE | |
brembo | AMD Epyc 7302P, 3.0 GHz, 16 cores | 128 GB | 192.168.0.40 | 4U | ASRock ROMED8-2T | 7x 16-lane Gen4; slot 2 is 8-lane because it is shared with the M.2 SSD (jumper selectable) | |
srvgst002 | Intel(R) Xeon(R) CPU E5-1660 v4 @ 3.20GHz | 32 GB | 192.168.0.39 | On Guestnet, no LDAP login. Ask Frans for a user account. ITk | 2U | SuperMicro, Standard FELIX PC | ||
rother | Intel Core i5-6260U, 1.80GHz | 192.168.0.3 (IPMI gateway) | IPMI gateway and hardware (JTAG) server in cable duct on main rack | NUC |
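Once on the management network (e.g. logged in on rother), the IPMI addresses in the table can also be queried from the command line with ipmitool. A minimal dry-run sketch; the helper name and the credentials (IPMI_USER / IPMI_PASS) are placeholders, not real lab values:

```shell
#!/bin/sh
# Dry-run helper: prints the ipmitool invocation for a given IPMI address
# from the table above. Drop the leading "echo" to actually run it
# (requires ipmitool installed and access to the 192.168.0.x network).
IPMI_USER=ADMIN       # placeholder, not the real lab username
IPMI_PASS=changeme    # placeholder, not the real lab password

ipmi_power_status() {
    echo ipmitool -I lanplus -H "$1" -U "$IPMI_USER" -P "$IPMI_PASS" chassis power status
}

ipmi_power_status 192.168.0.43   # gimone
```

The same pattern works for `chassis power on` / `off` / `cycle`, or `sel list` to read the event log.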
To use the web interface of one of the IPMI devices over an SSH tunnel (gimone in this example), set up the tunnel as follows:
ssh -J login -L 8080:192.168.0.43:443 rother
# A tunnel to the Intellinet power switch:
ssh -J login -L 8080:192.168.0.100:80 rother
Then connect to https://localhost:8080 (plain http://localhost:8080 for the power switch, since its web interface listens on port 80).
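The same tunnel pattern applies to any device on the 192.168.0.x management network; only the target address and remote port change. A small sketch that prints the appropriate command (the helper name and default ports are our own convention, not part of the lab setup):

```shell
#!/bin/sh
# Print the ssh tunnel command for a device on the management network.
# Usage: ipmi_tunnel <device-ip> [remote-port] [local-port]
# Defaults: remote port 443 (IPMI web UIs), local port 8080.
ipmi_tunnel() {
    ip="$1"
    rport="${2:-443}"
    lport="${3:-8080}"
    echo "ssh -J login -L ${lport}:${ip}:${rport} rother"
}

ipmi_tunnel 192.168.0.43       # gimone IPMI (https)
ipmi_tunnel 192.168.0.100 80   # Intellinet power switch (http)
```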
Second rack, MROD rack or other locations
Host | CPU | RAM | IPMI Address | *Remarks | Chassis | Motherboard | PCIe slots | Rack location | MAC | IPv4 | IPv6 |
piedra | i7-5930K, 3.5 GHz, 6 cores | 64 GB | 192.168.0.38 | Used as FW build server | 4U | Asus X99-WS/IPMI | Server rack | ||||
srvgst001 | AMD EPYC | 512 GB | 192.168.0.37 | ET server for simulation, matlab, build, etc | 2U | Server rack | 192.16.192.97 | 2001:610:120:3000::192:97 | |||
orada | i5 | No IPMI | Windows 10, below table, username: daqmuwin | Desktop | Server rack | 192.16.192.167 | 2001:610:120:3000::192:167 | ||||
tarbot | AMD Ryzen 7 3700X, 8 cores, 3.6 GHz | 32 GB | No IPMI | In second rack, runs Ubuntu 20.04, Nikhef network | 4U | Gigabyte Aorus Pro X570 | 1x 16 lanes Gen4 or 2x 8 lanes Gen4 | Server rack | b4:2e:99:3d:0e:56 | 192.16.192.60 | 2001:610:120:3000::192:60
brenta | Atom E3845, 1.91GHz, 4 cores, no HT | 4 GB | No IPMI | Controls VME crate / TTC system | VME SBC | TTC rack in MROD crate | |||||
alhama | Atom E3845, 1.91GHz, 4 cores, no HT | 4 GB | No IPMI | Controls ALTI TTC | VME SBC | ALTI Crate in server rack | |||||
babc-gst-001 | Raspberry Pi 4B | 4 GB | No IPMI | Controlling VLDB+ with lpGBTv1 (I2C=0x71), connected to GuestNet | Fanout rack, next to VLDB+ | DC:A6:32:56:8D:36 | 192.16.192.98 | 2001:610:120:3000::192:98 |
babc-gst-003 | Raspberry Pi 4B | 4 GB | No IPMI | Controlling VLDB+ with lpGBTv0 (I2C=0x73), connected to GuestNet | Table, next to VLDB+ | E4:5F:01:1D:6D:97 | 192.16.192.100 | 2001:610:120:3000::192:100 |
babc-gst-002 | Raspberry Pi 3B | 512 MB | No IPMI | Controlling TTC (TTCvi), connected to GuestNet | TTC Rack, inside topobox | B8:27:EB:FA:FC:C3 | 192.16.192.99 | 2001:610:120:3000::192:99 |
babc-gst-004 | Zynq 7030 SOC PicoZED | 1 GB | No IPMI | Alternative Picozed TTC system | TTC Rack | 54:10:EC:BA:E5:C6 | 192.16.192.101 | 2001:610:120:3000::192:101 | |||
ltittcsys | Enclustra board | No IPMI | LTI-TTC generator | TTC Rack | 20:B0:F7:06:D1:7E | 192.16.192.33 | 2001:610:120:3000::192:33 | ||||
Intellinet 8-port PDU | 192.168.0.100 | Controls power of ALTI VME crate, Tarbot and Orada | Intellinet 8-port PDU | Server rack |
Explanation of machine names
- PIEDRA "stone" in Spanish, and a Spanish river
- CALORE "heat" in Italian, and an Italian river
- SEUDRE French river
- TURANO Italian lake
- BREMBO Italian manufacturer of great automotive brake systems, and a river
- ARGOS Greek city
- AGOGNA Italian river
- GIMONE French river
- CANCHE French river
- ROTHER British river
- ORADA Portuguese river, and a fish
The MROD crate
- Used for MROD testing and development
- Data source for the ROSes
- BRENTA (an ancient unit of measure for liquids; in Turin equivalent to 49.29 l, which is a lot of wine..., and an Italian river):
- VME SBC (VP325)
- VME SBC (VP315) -- returned to CERN Dec 2016
Point-to-point GbE links
!ATLAS_network_setup_06-10-2016.jpg!
100 GbE
gimone: Gen4 (interface of NIC), 192.168.144.10, 192.168.176.10, Mellanox Connect X-5
turano: Gen3, 192.168.208.11, 192.168.144.11, Mellanox Connect X-5
agogna: Gen3, 192.168.160.11, 192.168.176.11, Mellanox Connect X-5
turano <-> gimone 192.168.208.10 <-> 192.168.208.11
agogna <-> gimone 192.168.144.10 <-> 192.168.144.11
turano <-> agogna 192.168.176.11 <-> 192.168.176.10
Temporarily not connected: brembo (was canche) <-> osiris 192.168.160.11 <-> 192.168.160.10
40 GbE
calore: 192.168.48.10, 192.168.32.10, Mellanox Connect X-3
seudre: 192.168.48.11, 192.168.16.10, Mellanox Connect X-3
argos: 192.168.16.11, 192.168.32.11, Mellanox Connect X-3
seudre <-> calore 192.168.48.11 <-> 192.168.48.10
seudre <-> argos 192.168.16.10 <-> 192.168.16.11
argos <-> calore 192.168.32.11 <-> 192.168.32.10
10 GbE
canche (temporarily, was calore): 192.168.192.10, 192.168.176.10, 192.168.224.10, 192.168.240.10, two Intel cards
osiris: 192.168.192.11, 192.168.224.11
alhama: 192.168.176.11, 192.168.240.11
calore <-> osiris and calore <-> alhama
turano: 192.168.240.11, 192.168.192.11, Intel card
argos: 192.168.176.11, 192.168.224.11, Intel card
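The point-to-point pairs listed above can be sanity-checked with a plain ping from one end of each link. A minimal sketch; the helper name and the DRY_RUN switch are our own, and the example pairs are taken from the 40 GbE list:

```shell
#!/bin/sh
# Check a documented point-to-point link by pinging the far end.
# With DRY_RUN=1 only the command is printed (useful off-site);
# set DRY_RUN=0 and run on the named host to actually test the link.
DRY_RUN=1

check_link() {  # usage: check_link <from-host> <peer-ip>
    if [ "$DRY_RUN" = 1 ]; then
        echo "on $1: ping -c 3 $2"
    else
        ping -c 3 "$2"
    fi
}

# 40 GbE pairs from the list above
check_link seudre 192.168.48.10   # seudre -> calore
check_link seudre 192.168.16.11   # seudre -> argos
check_link argos  192.168.32.10   # argos  -> calore
```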
Cables policy
- Copper RJ45 network cable color coding:
- black: Ethernet connection
- white: IPMI connection
- gray: Main Ethernet connection
- blue or green: DEV-Net (normally)
- no color coding for the point to point 10 and 40 GbE cables
- Label coding:
- ETHxx: Ethernet to the main rack
- ETH Txx: Ethernet to the Table(s) or secondary rack
- IPMIxx: IPMI to the main rack
- IPMI Txx: IPMI to the Table(s) or secondary rack
- KVM xx: KVM switch connection
- dedicated numbering on point to point 10 and 40 GbE cables
NOTE: please try to use appropriately coloured cables as much as you can
The Raspberry Pi connected to the VLDB+ board hosts a web server to control the lpGBT. An SSH tunnel can be created to view the web page in a local browser:
alias pigbt='ssh -N -f -J login.nikhef.nl -L 8080:192.168.32.134:8080 pi@192.168.32.134 && firefox localhost:8080'
Configurations for 1 MHz tests
TestlabConfigurationFor1MHzTests-11-Feb-2022.pdf
TestlabConfigurationFor1MHzTests-11-Feb-2022.pptx