author robwwilliams 2020-11-20 09:05:37 -0600
committer GitHub 2020-11-20 09:05:37 -0600
commit 321684176c9138b1114c8d1bb7382f25b9b5bc2e (patch)
tree f3bd87d1d4891d2c608cf72689cf4d344a52f594 /general/help
parent 7bdbd1bf13205bee32f6aab782ef849d3c474ee4 (diff)
download gn-docs-321684176c9138b1114c8d1bb7382f25b9b5bc2e.tar.gz
Update facilities.md
Diffstat (limited to 'general/help')
-rw-r--r--  general/help/facilities.md | 58
1 file changed, 28 insertions, 30 deletions
diff --git a/general/help/facilities.md b/general/help/facilities.md
index 83fcd1b..89771f7 100644
--- a/general/help/facilities.md
+++ b/general/help/facilities.md
@@ -1,30 +1,29 @@
# Facilities
The core GeneNetwork team maintains modern Linux servers and storage
-systems for genomic and genetic analysis. These machines are
-maintained in the main UTHSC machine room in the Lamar Alexander
-Building in Memphis. The whole team has access to this space for
+systems for genetic, genomic, and phenome analyses. Machines are
+located in the main UTHSC machine room of the Lamar Alexander
+Building (Memphis campus). The whole team has access to this space for
upgrades and hardware maintenance. Issues and work packages are
tracked through a Trello board and we use git repositories for
documentation (all available on request).
This computing facility has four computer racks dedicated to
-GeneNetwork related work. Each rack has a mix of Dell PowerEdge
-servers (from a few low-end R610s to high performance Dell R7425
+GeneNetwork-related work. Each rack has a mix of Dell PowerEdge
+servers (a few low-end R610s, high-performance Dell R6515 machines, and two R7425
64-core systems - tux01 and tux02 - running the GeneNetwork web
-services), tux03 with 40 cores, 196 GB RAM and 2x NVIDIA V100 GPU, and
-one Computing Relion 2600GT systems - Penguin2 - with NVIDIA Tesla K40
-GPU which is used for software development and serves outside facing
-less-secure R/shiny and Python web services running in isolated
-containers. Effectively we have three decent outward facing servers
-which are fully utilized for the GeneNetwork project and OPAR with a
-total of 64+64+40+28=196 real cores. Furthermore we have a dedicated
-HPC cluster, named Octopus, consisting of 11 PowerEdge R6515 AMD EPYC
-7402P 24-Core (total 264 cores; 528 hyperthreaded). These machines
-have 128 GB RAM each. The two head nodes are large RAM machines with
-1TB each. All these machines run Debian + GNU Guix and use Slurm for
-batch submission. The racks have dedicated high speed Cisco switches
-and firewalls which are maintained by UTHSC IT staff.
+services). We also support several more experimental systems,
+including a 40-core system with 196 GB RAM and two NVIDIA V100 GPUs
+(tux03), and one Penguin Computing Relion 2600GT system (Penguin2)
+with an NVIDIA Tesla K40 GPU that is used for software development
+and to serve outside-facing, less secure R/shiny and Python services
+that run in isolated
+containers. Effectively, we have three outward-facing servers
+that are fully used by the GeneNetwork teams, with a
+total of 64+64+40+28 = 196 real cores. In late 2020 we set up a small
+HPC cluster (Octopus) consisting of 11 PowerEdge R6515 nodes with
+AMD EPYC 7402P 24-core CPUs (264 cores in total). Most of these machines
+are equipped with 128 GB RAM, but two nodes have 1 TB of memory.
+All Octopus nodes run Debian and GNU Guix and use Slurm for
+batch submission. All racks have dedicated high-speed Cisco switches
+and firewalls that are maintained by UTHSC IT staff.
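As an illustration of the batch workflow on Octopus, here is a minimal sketch of driving Slurm's `sbatch` from Python; the partition name, resource requests, and wrapped command are illustrative assumptions, not the cluster's actual configuration:

```python
"""Minimal sketch: submit a command to the Octopus cluster through Slurm.

Partition name, resource requests, and the example command are
illustrative assumptions, not Octopus' actual configuration.
"""
import subprocess


def submit(command: str, cpus: int = 24, mem_gb: int = 64) -> str:
    """Submit `command` as a Slurm batch job and return its job ID."""
    result = subprocess.run(
        [
            "sbatch",
            "--parsable",              # print only the job ID
            "--partition=octopus",     # hypothetical partition name
            f"--cpus-per-task={cpus}",
            f"--mem={mem_gb}G",
            "--time=02:00:00",
            f"--wrap={command}",       # wrap the shell command in a job script
        ],
        check=True,
        capture_output=True,
        text=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    print("Submitted job", submit("echo hello from Octopus"))
```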
We also run some 'specials' including an ARM-based NVIDIA Jetson and a
RISC-V [PolarFire
@@ -34,18 +33,17 @@ have also ordered two RISC-V
computers.
In addition to the above hardware we have batch submission access to the
-cluster computing resource at the Advanced Computing Facility operated
+cluster computing resource at the ISAAC computing facility operated
by the UT Joint Institute for Computational Sciences in a secure setup
-at the DOE Oak Ridge National Laboratory. We have a 10 Gbit connection
-from the machine room at UTHSC to data transfer nodes at the ACF. The
-ACF has been upgraded in the past year (see [ACF system
+at the DOE Oak Ridge National Laboratory and on the UT Knoxville campus. We have a 10 Gbit connection
+from the machine room at UTHSC to data transfer nodes at ISAAC.
+ISAAC has been upgraded in the past year (see [ISAAC system
overview](http://www.nics.utk.edu/computing-resources/acf/acf-system-overview))
and now has over 3 PB of high-performance Lustre DDN storage and
-contains over 8000 cores with some large RAM nodes and one GPU
-node. Drs. Prins and other team members have used ACF systems to
-analyze genomic and genetic data sets. In recent developments the ACF
-will be moved from ORNL to UT Knoxville in 2021. We note that we can
-not use the ACF compute and storage facilities for public facing web
-services because of its stringent security requirements. The ACF,
-however, may come in useful to precompute genomics and genetics
-analysis results using standardized pipelines.
+contains over 8000 cores with some large RAM nodes and several GPU
+nodes. Dr. Prins and other team members have used ISAAC systems to
+analyze genomic and genetic data sets. Note that we cannot yet use
+ISAAC compute and storage facilities for public-facing web services
+because of stringent security requirements. ISAAC, however, will be
+highly useful for "precomputing" genomics and genetics results using
+standardized pipelines.