MOS
Research Center of Modeling, Optimization and Simulation
BBU HPC Infrastructure
http://hpc.cs.ubbcluj.ro
HPC – IBM NEXTSCALE
- Rpeak 62 Tflops, Rmax 40 Tflops
- 68 NX360 M5 nodes, of which:
- 12 nodes with 2 Nvidia K40X GPUs each
- 6 nodes with Intel Xeon Phi coprocessors
- 2 Intel Xeon E5-2660 v3 processors (10 cores each) per node
- 128 GB RAM per node, 2 x 500 GB SATA HDD per node
- 1:1 subscription ratio between nodes, provided by a 216-port Mellanox SX6512 InfiniBand switch
- NetApp E5660 disk storage, 120 SAS HDDs of 600 GB each => 72 TB total
- IBM GPFS 4.x parallel file system
- IBM TS3100 tape library for data archiving
- Operating system on nodes: Red Hat Enterprise Linux 6 with subscription
- HPC management software: IBM Platform HPC 4.2
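As a quick sanity check, the aggregate figures implied by the list above can be recomputed; this minimal sketch uses only the numbers stated in the spec (68 nodes, 2 x 10-core CPUs and 128 GB RAM per node, 120 x 600 GB HDDs):

```python
# Sanity-check the headline numbers from the HPC spec above.
nodes = 68
sockets_per_node = 2
cores_per_socket = 10
ram_per_node_gb = 128

total_cores = nodes * sockets_per_node * cores_per_socket  # 1360 CPU cores
total_ram_tb = nodes * ram_per_node_gb / 1024              # 8.5 TB aggregate RAM

hdd_count = 120
hdd_size_gb = 600
raw_storage_tb = hdd_count * hdd_size_gb / 1000            # 72 TB, as stated

print(total_cores, total_ram_tb, raw_storage_tb)
```

The 72 TB raw-storage figure matches the stated total for the NetApp E5660 array.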
Cloud infrastructure – IBM Flex System
- 10 Flex System x240 virtualization servers
- 128 GB RAM per server
- 2 x Intel Xeon E5-2640 v2 processors per server
- 2 x 240 GB SATA SSD per server
- 1 management server
- Private cloud software: IBM Cloud Manager with OpenStack 4.2
- Monitoring and management software: IBM Flex System Manager software stack
- Virtualization software: VMware vSphere Enterprise 5.1
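A rough capacity estimate for the cloud pool above can be sketched the same way. Note the assumptions: the 8-core count for the Intel Xeon E5-2640 v2 is not stated in the spec (it is the processor's published core count), and the 4:1 vCPU overcommit ratio is purely illustrative:

```python
# Rough capacity estimate for the Flex System cloud pool.
servers = 10
sockets_per_server = 2
cores_per_socket = 8       # E5-2640 v2 core count (assumption, not in the spec)
ram_per_server_gb = 128

physical_cores = servers * sockets_per_server * cores_per_socket  # 160 cores
total_ram_gb = servers * ram_per_server_gb                        # 1280 GB

overcommit = 4             # illustrative vCPU:pCPU ratio, not a site policy
vcpus = physical_cores * overcommit                               # 640 vCPUs

print(physical_cores, total_ram_gb, vcpus)
```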
MOS Local Network:
- Windows Server:
- IBM System x3650 M4 Planar
- Processors: 2 x Intel Xeon E5-2650 v2, 8 cores, 2.6 GHz, 20 MB cache, 1866 MHz, 95 W
- RAM: 64 GB (4 x 16GB (1x16GB, 2Rx4, 1.5V) PC3-14900 CL13 ECC DDR3 1866MHz LP RDIMM)
- Storage: 3 x IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD, 10,000 RPM
- RAID controller: 8 SAS ports, 8 GB/s
- Linux Server:
- IBM System x3750 M4 Planar (8752CTO)
- 4 x Intel Xeon E5-4610 v2 processors, 8 cores, 2.3 GHz, 16 MB cache, 1600 MHz, 95 W
- RAM: 64 GB (4 x 16GB (1x16GB, 2Rx4, 1.35V) PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMM)
- HDD: 8 x IBM 300GB 15K 6Gbps
- Network of Workstations
- Graphical Station