Configuration

University of Memphis Network services supporting research

Infrastructure

The University of Memphis operates a state-of-the-art communications network with over 18,000 nodes, providing connectivity across the main and regional campuses and facilities, on-campus residences, the Internet, and high-performance national research networks (e.g., Internet2). The campus network infrastructure standard provides switched 1 Gbps service to the desktop. Optional switched 10 Gbps service to the desktop is available to support specialized research needs. Wireless data service (currently 802.11ac) is available in all buildings and public areas on the main campus, and in all regional locations. The University of Memphis has dual redundant Internet connections through two different Tier 1 providers, with a total of up to 20 gigabits per second (Gbps) of commodity Internet service.

Research connectivity

The University of Memphis is the State of Tennessee Connector site for the Internet2 research network. Internet2 is a consortium of over 200 U.S. universities working in partnership with industry and government to develop and deploy advanced network applications and technologies to support education and research. The University of Memphis, in partnership with the , , and , provides statewide access into Internet2, through Internet2's U.S. UCAN (United States Unified Community Anchor Network) program, to K-12 school districts throughout the state as well as to other higher education and research institutions in Tennessee.

The University of Memphis is a founding member of the  (MRC). MRC is a regional optical network providing very high speed connectivity at 10 gigabits per second among the researchers and facilities of its local members, including , the University of Memphis, and the .

MRC is dedicated to the development of Tennessee's knowledge and innovation economy by providing reliable and cost-effective very high speed communications for its members. The organization plays an active role in developing research partnerships on behalf of its members. MRC serves education, research, public service, and economic development initiatives. MRC establishes communication connections with other initiatives in Tennessee to form a statewide high speed research backbone. Further, MRC connects the Memphis region with national and international research and scientific networks, giving Memphis and Tennessee a significant competitive advantage in research-driven economic development. MRC is working with other regional research network partners in Arkansas, Louisiana, and Mississippi to promote regional connectivity in the Mid-South.


BigBlue Hardware

The Intel configuration consists of 88 Dell compute nodes with 3520 total CPU cores, 20736 GB total RAM, and 12 NVIDIA V100 GPUs carried over from the old cluster. The AMD configuration consists of 32 Dell compute nodes with 5632 total CPU cores, 27648 GB total RAM, and 8 NVIDIA A100 GPUs. Overall, the cluster has 120 compute nodes with 9152 cores, 48384 GB total RAM, and 20 GPUs.
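Those aggregate figures follow directly from the node inventory listed below. As a quick sanity check, here is a minimal shell sketch, assuming the published per-socket core counts (20 for the Xeon Gold 6148, 96 for the Epyc 9654, 32 for the Epyc 9354):

    # Intel partition: 78 thin + 6 GPU + 2 fat + 2 large-fat = 88 dual-socket Gold 6148 nodes
    intel_cores=$(( 88 * 2 * 20 ))               # 3520
    # AMD partition: 24 compute + 4 fat dual-socket Epyc 9654 nodes, plus 4 dual-socket Epyc 9354 GPU nodes
    amd_cores=$(( 28 * 2 * 96 + 4 * 2 * 32 ))    # 5632
    echo "Total CPU cores: $(( intel_cores + amd_cores ))"   # 9152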

  • Login: 2 PowerEdge R6625 dual socket AMD Epyc Genoa 9124 Login nodes with 384 GB DDR5 RAM and HDR100 Infiniband
  • Head: 1 PowerEdge R7625 dual socket AMD Epyc Genoa 9124 Head node with 384 GB DDR5 RAM and HDR100 Infiniband
  • Intel:
    • Thin: 78 PowerEdge C6420 dual socket Intel Skylake Gold 6148 Compute nodes with 192 GB DDR4 RAM and EDR Infiniband.
    • NVIDIA GPU: 6 PowerEdge R740 dual socket Intel Skylake Gold 6148 GPU nodes with 192 GB DDR4 RAM, 2 x NVIDIA V100 GPU and EDR Infiniband
    • Fat: 2 PowerEdge R740 dual socket Intel Skylake Gold 6148 Fat Memory Nodes with 768 GB DDR4 RAM and EDR Infiniband
    • Large Fat: 2 PowerEdge R740 dual socket Intel Skylake Gold 6148 Nodes with 1.5 TB DDR4 RAM and EDR Infiniband
  • AMD:
    • Compute: 24 PowerEdge R7625 dual socket AMD Epyc Genoa 9654 compute nodes with 768 GB DDR5 RAM, 1.6 TB NVME storage, and HDR100 Infiniband.
    • NVIDIA GPU: 4 PowerEdge R7625 dual socket AMD Epyc Genoa 9354 compute nodes with 768 GB DDR5 RAM, 1.6 TB NVME storage, 2 x NVIDIA A100 GPU and HDR100 Infiniband.
    • Fat: 4 PowerEdge R7625 dual socket AMD Epyc Genoa 9654 compute nodes with 1.5 TB DDR5 RAM, 1.6 TB NVME storage, and HDR100 Infiniband.
  • Parallel File System: Arcastream PixStor (GPFS) with 80 x 8 TB HDD (640 TB total raw storage) providing up to 7.5 GB/s read and 5.5 GB/s write performance, and 8 x 15.3 TB SSD (122.4 TB total raw storage) providing up to 80 GB/s read and write speeds. Total usable storage (after RAID overhead) is 690 TB for home, project, and scratch directories.
  • Backup Storage: 1 PowerEdge R740XD2 with 24 x 20 TB HDD (480 TB total raw storage) for home and project directories.

All compute nodes are connected via HDR100/EDR Infiniband (2:1 Blocking) and 1/10/25GbE for host/OOB management. Head and Login nodes are connected via HDR100 Infiniband and 10/25GbE for host/OOB management.
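If you need to confirm which fabric and management links a given node is actually using, a quick check from a shell on that node might look like the following sketch (assuming the standard ibstat utility from infiniband-diags and the iproute2 tools are available, which may not be the case on every node):

    # Infiniband port state and rate (EDR and HDR100 both report a 100 Gb/s rate)
    ibstat | grep -E 'State|Rate'
    # Brief overview of the Ethernet host/management interfaces
    ip -br link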


BigBlue Software 

The operating system used on the BigBlue cluster is Rocky Linux 8 (a Red Hat-compatible Linux, not Debian-compatible). Scheduling is handled by the cluster's batch scheduler. Many packages are already installed on the cluster and can be listed with the command "module avail"; many more become available after running "module load spack; module avail". We do require users to install their own packages in their own directories, but if help is needed, just submit a request. Similarly, if other software needs to be installed, just submit a request. We also have documentation with more detailed guides. In addition to many traditional Linux tools such as automake, awk, gcc, etc., the following licensed software is installed on the cluster (some packages might require you to purchase a separate license, especially if you need a particular module or extension); a short module usage example follows the list:

Ansys Workbench
Gaussian
MATLAB
Molpro
COMSOL
GAMESS
VASP
AMBER
SAS
Intel Compilers
CUDA Compilers
VMD
MOE
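For example, a typical session for finding and loading one of these packages might look like the sketch below. The exact module names and versions vary, so "matlab" here is only an illustrative name; check the output of "module avail" for what is actually installed:

    # List the modules installed directly on the cluster
    module avail
    # Expose the additional Spack-built packages, then list again
    module load spack
    module avail
    # Load a package for the current session (illustrative name; use one reported by "module avail")
    module load matlab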