# Octopus/Tux maintenance

## To remember

`fdisk -l` to see disk models and sizes
`lsblk -nd` to list every whole disk the kernel sees (no headers, no partitions), mounted or not
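
A quick sketch of how I combine these, using only standard util-linux tools; the extra columns make it easy to match the output against the per-host inventory below:
```
fdisk -l | grep '^Disk /dev'             # device paths and sizes
lsblk -nd -o NAME,SIZE,MODEL,SERIAL      # one line per whole disk, mounted or not
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT     # partitions and where they are mounted
```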

## Status

octopus02
- Devices: 2 3.7T SSDs + 2 894.3G SSDs + 2 4.6T HDDs
- **Status: Slurm not OK, LizardFS not OK**
- Notes:
  - `octopus02 mfsmount[31909]: can't resolve master hostname and/or portname (octopus01:9421)` (see the resolution checks after this entry)
  - **2 physically installed drives are not visible to the OS** (see the rescan sketch under octopus03)
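
A minimal checklist for the mfsmount error, assuming the LizardFS master really runs on octopus01:9421 as the log line says (the IP below is a placeholder):
```
getent hosts octopus01     # does /etc/hosts or DNS resolve the master?
ping -c1 octopus01         # basic reachability
nc -zv octopus01 9421      # is the master port open?
# If resolution fails, pin the master in /etc/hosts, e.g.:
#   192.168.1.11   octopus01
```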

octopus03
- Devices: 4 3.7T SSDs + 2 894.3G SSDs
- Status: Slurm OK, LizardFS OK
- Notes: **2 physically installed drives are not visible to the OS** (see the rescan sketch after this entry)
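
A sketch for chasing installed drives the kernel does not show, assuming SATA/SAS disks behind a standard SCSI host (host0 below is an example):
```
lsblk -nd -o NAME,SIZE,MODEL            # disks the kernel currently sees
dmesg | grep -iE 'ata[0-9]+|sd[a-z]'    # link resets or failed detection
# Force a rescan of one SCSI host without rebooting:
echo "- - -" > /sys/class/scsi_host/host0/scan
```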

octopus04
- Devices: 4 7.3T SSDs (Neil) + 1 4.6T HDD + 1 3.7T SSD + 2 894.3G SSDs
- Status: Slurm not OK (triage sketch after this entry), LizardFS OK (we don't share the HDD)
- Notes: none
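
Where Slurm is not OK (octopus02 and octopus04), a minimal triage with standard Slurm commands; run them on the head node after fixing whatever took the node down:
```
sinfo -R                                       # down/drained nodes and the reason
scontrol show node octopus04 | grep -i state   # detailed state of one node
# Return the node to service once it is healthy again:
scontrol update NodeName=octopus04 State=RESUME
```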

octopus05
- Devices: 1 7.3T SSD (Neil) + 5 3.7T SSDs + 2 894.3G SSDs
- Status: Slurm OK, LizardFS OK
- Notes: none

octopus06
- Devices: 1 7.3T SSD (Neil) + 1 4.6T HDD + 4 3.7T SSDs + 2 894.3G SSDs
- Status: Slurm OK, LizardFS OK (we don't share the HDD)
- Notes: none

octopus07
- Devices: 1 7.3T SSD (Neil) + 4 3.7T SSDs + 2 894.3G SSDs
- Status: Slurm OK, LizardFS OK
- Notes: **1 physically installed device is not visible to the OS**

octopus08
- Devices: 1 7.3T SSD (Neil) + 1 4.6T HDD + 4 3.7T SSDs + 2 894.3G SSDs
- Status: Slurm OK, LizardFS OK (we don't share the HDD)
- Notes: none

octopus09
- Devices: 1 7.3T SSD (Neil) + 1 4.6T HDD + 4 3.7T SSDs + 2 894.3G SSDs
- Status: Slurm OK, LizardFS OK (we don't share the HDD)
- Notes: none

octopus10
- Devices: 1 7.3T SSD (Neil) + 4 3.7T SSDs + 2 894.3G SSDs
- Status: Slurm OK, LizardFS OK (we don't share the HDD)
- Notes: **1 physically installed device is not visible to the OS**

octopus11
- Devices: 1 7.3T SSD (Neil) + 5 3.7T SSDs + 2 894.3G SSDs
- Status: Slurm OK, LizardFS OK
- Notes: none

tux05
- Devices: 1 3.6T NVMe + 1 1.5T NVMe + 1 894.3G NVMe
- Status: Slurm OK, LizardFS OK (we don't share anything)
- Notes: **I don't have a photo to confirm the physically installed devices**

tux06
- Devices: 2 3.6T SSDs (1 from Neil) + 1 1.5T NVMe + 1 894.3G NVMe
- Status: Slurm OK, LizardFS OK (we don't share anything)
- Notes:
  - **The last photo shows 1 7.3T SSD (Neil) that is now missing**
  - **Disk /dev/sdc: 3.64 TiB (Samsung SSD 990): free and usable for LizardFS**
  - **Disk /dev/sdd: 3.64 TiB (Samsung SSD 990): free and usable for LizardFS (see the sketch after this list)**
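
A sketch of how a free disk could be handed to LizardFS, assuming the stock chunkserver layout (mfs user, /etc/mfs/mfshdd.cfg, lizardfs-chunkserver unit); device and mount names come from the note above:
```
# Partition, format, and mount the free disk:
parted -s /dev/sdc mklabel gpt mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdc1
mkdir -p /mnt/sdc
mount /dev/sdc1 /mnt/sdc
# Hand the mount point to the chunkserver and reload it:
echo "/mnt/sdc" >> /etc/mfs/mfshdd.cfg
chown mfs:mfs /mnt/sdc
systemctl reload lizardfs-chunkserver
```
Add the new filesystem to /etc/fstab by UUID as well, so the mount survives a reboot.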

tux07
- Devices: 3 3.6T SSDs + 1 1.5T NVMe (Neil) + 1 894.3G NVMe
- Status: Slurm OK, LizardFS OK
- Notes:
  - **Disk /dev/sdb: 3.64 TiB (Samsung SSD 990): free and usable for LizardFS**
  - **Disk /dev/sdd: 3.64 TiB (Samsung SSD 990): mounted at /mnt/sdb and shared on LizardFS; to check, because it has no partition table (see the checks after this list)**
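
Quick checks for that disk, assuming the device names above are still current:
```
findmnt /mnt/sdb       # which device is actually mounted there?
lsblk -f /dev/sdd      # a filesystem written straight to the raw disk?
fdisk -l /dev/sdd      # confirm whether a partition table exists
```
A filesystem on the bare device is legal, but /dev names can shuffle across reboots (which may explain sdd sitting under /mnt/sdb); mounting by UUID in /etc/fstab avoids that.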

tux08
- Devices: 3 3.6T SSDs + 1 1.5T NVMe (Neil) + 1 894.3G NVMe
- Status: Slurm OK, LizardFS OK
- Notes: none

tux09
- Devices: 1 3.6T SSD + 1 1.5T NVMe + 1 894.3G NVMe
- Status: Slurm OK, LizardFS OK
- Notes: **1 physically installed device is not visible to the OS**

## Neil disks
- four 8TB SSDs on the right of octopus04
- one 8TB SSD in the left slot of octopus05
- six 8TB SSDs, one in the bottom-right slot of each of octopus06,07,08,09,10,11
- one 4TB NVMe and one 8TB SSD on tux06; the NVMe is in the bottom-right of the group of 4 on the left, the SSD in the bottom-left of the group of 4 on the right
- one 4TB NVMe on tux07, on the top-left of the group of 4 on the right
- one 4TB NVMe on tux08, on the top-left of the group of 4 on the right