Atlas setup in H148

Last update: 23 October 2020

FPGA kits and FPGA IDs

For some thoughts on SMBUS, see SMBUS_for_FELIX_cards
For instructions on how to program and use the cards / machines below, see FELIX at Nikhef readme

| Card | Project | Card S/N | JTAG host / Cable ID | Card host | PCIe slot | Card DNA | Transceiver channels |
| BNL-711 | | | rother/xilinx_tcf/Digilent/210249A06135 | agogna | | 0x0105122244b0c205 | 48 |
| BNL-712 | | | rother/xilinx_tcf/Digilent/210249A85CD6 | agogna | | 0x0105968534800345 | 48 |
| PRIME-712 | ITk | 160 | rother/xilinx_tcf/Digilent/210249A85FDE | turano | 4 | 0x013a6f281d410205 | 24 |
| PRIME-712 | ITk | 161 | rother/xilinx_tcf/Digilent/210249A85F02 | turano | 6 | 0x013a6f281c2143c5 | 24 |
| PRIME-712 | | 3 | rother/xilinx_tcf/Digilent/210249A14F75 | seudre | | 0x011822432c808585 | 48 |
| VC-709 | ProtoDUNE | | rother/xilinx_tcf/Digilent/210203A103E6A | seudre | | | 4 |
| PRIME-712 | | 50 | rother/xilinx_tcf/Digilent/210249A14F61 | brembo | | 0x013a6f281c40c145 | 48 |
| VMK-180 | | | piedra/xilinx_tcf/Xilinx/461944111202A | brembo | | | 6 |
| VCU-128 | | | osiris/xilinx_tcf/Xilinx/091847100612A | brembo | | | 16 |
| BNL-712 | | | rother/xilinx_tcf/Xilinx/00001176835301 | canche | | 0x0117ed6124f0e2c5 | 48 |
| HTG-710 | | | rother/xilinx_tcf/Digilent/210249854652 | argos | | | 24 |
| VC-709 | | | rother/xilinx_tcf/Digilent/210203826227A | | | | 4 |
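
The "JTAG host / Cable ID" column is the target path seen by the Vivado hardware manager through the hw_server running on the JTAG host (port 3121 by default). Below is a minimal sketch of opening one of these targets, driving Vivado in batch mode from Python; it assumes vivado is on the PATH and uses the PRIME-712 ITk 160 cable as an arbitrary example. For the actual programming procedure, see the FELIX at Nikhef readme linked above.

```python
#!/usr/bin/env python3
"""Open a specific JTAG target via the hw_server on rother (sketch)."""
import subprocess
import tempfile

HW_SERVER = "rother:3121"                    # JTAG host from the table above
CABLE = "xilinx_tcf/Digilent/210249A85FDE"   # PRIME-712 ITk 160 (example)

# Standard Vivado hardware-manager Tcl flow (see UG908).
tcl = f"""
open_hw_manager
connect_hw_server -url {HW_SERVER}
current_hw_target [get_hw_targets */{CABLE}]
open_hw_target
puts [get_hw_devices]
close_hw_manager
"""

with tempfile.NamedTemporaryFile("w", suffix=".tcl", delete=False) as f:
    f.write(tcl)
    script = f.name

# Run Vivado in batch mode with the generated script.
subprocess.run(["vivado", "-mode", "batch", "-source", script], check=True)
```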

Main rack layout, from bottom to top

| Host | CPU | RAM | IPMI address | Remarks | Chassis | Motherboard | PCIe slots | Ethernet |
| agogna | i9-7940X, 3.1 GHz, 14 cores, runs at 3.8 GHz | 32 GB | 192.168.0.46 | | 4U | Asus WS X299 PRO_SE | 2 x 16 lanes (PCIEX16_1 and _2), 1 x 8 lanes (PCIEX16_3), 1 x 4 lanes (PCIEX16_4) | 2x 40 GbE, 100 GbE |
| argos | E5-1620v1, 3.6 GHz, 4 cores | 16 GB | 192.168.0.41 | | 4U | Supermicro X9SRE-3F | 1 x 16, 1 x 8, 1 x 4 lanes | 2x 10 GbE, 2x 40 GbE |
| canche | Xeon Gold 5118, 2.3 GHz, 12 cores | 48 GB | 192.168.0.36 | | 2U | Supermicro X11SPW-TF | depends on riser card: 5 x 8 lanes, or 2 x 16 lanes + 1 x 8 lanes | |
| turano | E5-1660v4, 3.2 GHz, 8 cores | 32 GB | 192.168.0.45 (via eno1) | | 4U | Supermicro X10SRA-F | 2 x 16 lanes, 1 x 8 lanes | 2x 100 GbE |
| gimone | E5-1650v2, 3.5 GHz | 16 GB | 192.168.0.43 | | 4U | Supermicro X9SRL-F | only 8-lane slots | |
| seudre | E5-1650v4, 3.6 GHz, 6 cores | 32 GB | 192.168.0.42 | | 4U | Supermicro X10SRA-F | 2 x 16 lanes, 1 x 8 lanes | 2x 40 GbE |
| calore | E5-1650v2, 6 cores | 16 GB | 192.168.0.44 | ROS PC | 2U | Supermicro X9SRW-F | depends on riser card: 5 x 8 lanes, or 2 x 16 lanes + 1 x 8 lanes | 2x 40 GbE |
| brembo | AMD Epyc 7302P, 3.0 GHz, 16 cores | 128 GB | 192.168.0.40 | | 4U | ASRock ROMED8-2T | 7 x 16 lanes Gen4; slot 2 is 8-lane because it is shared with the M.2 SSD (jumper selectable) | |
| piedra | i7-5930K, 3.5 GHz, 6 cores | 64 GB | 192.168.0.38 | Used as FW build server | 4U | Asus X99-WS/IPMI | 2 x 16 + 1 x 8, or 5 x 8 lanes | |
| osiris | 2x Xeon Gold 5115, 2.4 GHz, 10 cores per CPU | 192 GB per CPU | 192.168.0.33 | Fujitsu server | 1U | | | |
| alhama | 2x Xeon Gold 5115, 2.4 GHz, 10 cores per CPU | 192 GB per CPU | 192.168.0.34 | Fujitsu server | 1U | | | |
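
The IPMI addresses above allow remote power control and sensor readout. A minimal sketch, wrapping ipmitool (IPMI-over-LAN) from Python; the credentials are placeholders, not the real ones, and the host selection is just an illustrative subset of the table.

```python
#!/usr/bin/env python3
"""Check chassis power status of rack hosts via IPMI (sketch)."""
import subprocess

# Illustrative subset of the IPMI addresses in the table above.
IPMI_HOSTS = {
    "agogna": "192.168.0.46",
    "canche": "192.168.0.36",
    "turano": "192.168.0.45",
    "seudre": "192.168.0.42",
    "brembo": "192.168.0.40",
}

USER, PASSWORD = "admin", "changeme"  # placeholders, not the real credentials

for host, addr in IPMI_HOSTS.items():
    # "-I lanplus" selects IPMI-over-LAN to the BMC at the given address.
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", addr,
         "-U", USER, "-P", PASSWORD, "chassis", "power", "status"],
        capture_output=True, text=True)
    print(f"{host:8s} {(result.stdout or result.stderr).strip()}")
```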

Second rack, table or other locations

| Host | CPU | RAM | IPMI address | Remarks | Chassis | Motherboard | PCIe slots | Ethernet |
| rother | Intel Core i5-6260U, 1.80 GHz | | 192.168.0.3 (IPMI gateway) | IPMI gateway and hardware (JTAG) server, in rack with VME crate (near IPMI switch) | NUC | | | |
| brenta | Atom E3845, 1.91 GHz, 4 cores, no HT | 4 GB | no IPMI | controls VME crate / TTC system | VME SBC | | | |
| orada | i5 | | no IPMI | Windows 10, below table, username: daqmuwin | Desktop | | | |
| Raspberry Pi 3B | | 512 MB | no IPMI | connected to devnet, access via login server | | | | |
| Raspberry Pi 4B | | 4 GB | no IPMI | controls VLDB+, connected to devnet, access via login server | | | | |
| tarbot | AMD Ryzen 7 3700X, 3.6 GHz, 8 cores | 32 GB | no IPMI | in N127 office, runs Ubuntu 20.04 | Desktop | Gigabyte Aorus Pro X570 | 1 x 16 lanes Gen4 or 2 x 8 lanes Gen4 | |
| brembo | Q9650, 3.0 GHz | | 192.168.0.40 | obsolete, SLC6, Run 1 ROS PC | | | | 2x 10 GbE |

Explanation of machine names

  • PIEDRA "stone" in Spanish, and a Spanish river
  • CALORE "heat" in Italian, and an Italian river
  • SEUDRE French river
  • TURANO Italian lake
  • BREMBO Italian manufacturer of great automotive brake systems, and an Italian river
  • ARGOS Greek city
  • AGOGNA Italian river
  • GIMONE French river
  • CANCHE French river
  • ROTHER British river
  • ORADA Portuguese river, and a fish

The MROD crate

  • Used for MROD testing and development
  • Data source for the ROSes
  • BRENTA (ancient unit of measure for liquids, in Turin equivalent to 49.29 l, which is a lot of wine..., and an Italian river):
    • VME SBC (VP325)
    • VME SBC (VP315) -- returned to CERN Dec 2016

Point-to-point GbE links

!ATLAS_network_setup_06-10-2016.jpg!

100 GbE

agogna: Gen4 (NIC PCIe interface), 192.168.144.10, 192.168.176.10, Mellanox ConnectX-5
turano: Gen3, 192.168.208.11, 192.168.144.11, Mellanox ConnectX-5
brembo: Gen3, 192.168.160.11, 192.168.176.11, Mellanox ConnectX-5
osiris: Gen3, 192.168.160.10, Mellanox ConnectX-4; connected to Xilinx dev. card in brembo
alhama: Gen3, 192.168.208.10, Mellanox ConnectX-4

turano <-> alhama 192.168.208.11 <-> 192.168.208.10
turano <-> agogna 192.168.144.11 <-> 192.168.144.10
brembo <-> agogna 192.168.176.11 <-> 192.168.176.10
Temporarily not connected: brembo (was canche) <-> osiris 192.168.160.11 <-> 192.168.160.10

40 GbE

calore: 192.168.48.10, 192.168.32.10, Mellanox ConnectX-3
seudre: 192.168.48.11, 192.168.16.10, Mellanox ConnectX-3
argos: 192.168.16.11, 192.168.32.11, Mellanox ConnectX-3

seudre <-> calore 192.168.48.11 <-> 192.168.48.10
seudre <-> argos 192.168.16.10 <-> 192.168.16.11
argos <-> calore 192.168.32.11 <-> 192.168.32.10

10 GbE

canche (temporarily replacing calore): 192.168.192.10, 192.168.176.10, 192.168.224.10, 192.168.240.10, two Intel cards

osiris: 192.168.192.11, 192.168.224.11
alhama: 192.168.176.11, 192.168.240.11
calore <-> osiris and calore <-> alhama

turano: 192.168.240.11, 192.168.192.11, Intel card
argos: 192.168.176.11, 192.168.224.11, Intel card
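
A quick way to verify any of the point-to-point links above is to ping the far end of a link from one of its endpoints. Below is a minimal sketch for the 100 GbE pairs (the 40 and 10 GbE pairs can be checked the same way); the peer table and the short-hostname matching are assumptions taken from the lists above, not an existing script.

```python
#!/usr/bin/env python3
"""Ping the far end of each 100 GbE point-to-point link (sketch)."""
import socket
import subprocess

# local host -> [(peer host, peer address), ...], from the 100 GbE list above
PEERS = {
    "turano": [("alhama", "192.168.208.10"), ("agogna", "192.168.144.10")],
    "brembo": [("agogna", "192.168.176.10")],
    "agogna": [("turano", "192.168.144.11"), ("brembo", "192.168.176.11")],
    "alhama": [("turano", "192.168.208.11")],
}

me = socket.gethostname().split(".")[0]  # assumes short hostnames as above
for peer, addr in PEERS.get(me, []):
    # One echo request with a one-second timeout per peer.
    ok = subprocess.run(["ping", "-c", "1", "-W", "1", addr],
                        capture_output=True).returncode == 0
    print(f"{me} -> {peer} ({addr}): {'link up' if ok else 'NO REPLY'}")
```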

Cables policy

  • Copper RJ45 Network cables color coding:
    • black: Ethernet connection
    • white: IPMI connection
    • gray: Main Ethernet connection
    • blue or green: DEV-Net (normally)
    • no color coding for the point to point 10 and 40 GbE cables
  • Label coding:
    • ETHxx: Ethernet to the main rack
    • ETH Txx: Ethernet to the Table(s) or secondary rack
    • IPMIxx: IPMI to the main rack
    • IPMI Txx: IPMI to the Table(s) or secondary rack
    • KVM xx: KVM switch connection
    • dedicated numbering on point to point 10 and 40 GbE cables

NOTE: please use appropriately coloured cables whenever possible
