# Information about lizardfs, and some usage suggestions

On the octopus cluster the lizardfs head node runs on octopus01, with disks contributed mainly by the other nodes. SSDs are attached to the lizardfs-chunkserver.service systemd service and HDDs to the lizardfs-chunkserver-hdd.service. The storage pool is available on all nodes at /lizardfs, with the default storage goal of "slow", which corresponds to two copies of the data, both on HDDs.
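
Since /lizardfs is an ordinary mount point, overall pool usage can be checked with the usual tools; the `info` subcommand of lizardfs-admin also gives a cluster-wide summary (a quick sketch, using the same head node and port as the rest of this page):

```
# capacity and usage of the whole pool, as seen through the mount
$ df -h /lizardfs

# cluster-wide summary straight from the master
$ lizardfs-admin info octopus01 9421
```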

## Interacting with lizardfs

It is possible to query the server for all the available goals:

```
$ lizardfs-admin list-goals octopus01 9421

Goal definitions:
Id      Name    Definition
1       1_copy  1_copy: $std _
2       2_copy  2_copy: $std {_ _}
...
19      slow    slow: $std {HDD HDD}
20      fast    fast: $std {SSD SSD}
21      2ssd    2ssd: $std {SSD SSD}
...
```

To change the replication level:

```
$ lizardfs setgoal slow /lizardfs/efraimf -r

/lizardfs/efraimf/:
 inodes with goal changed:               2
 inodes with goal not changed:           0
 inodes with permission denied:          0
```
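
The same command works for any of the goals listed above; for example, a directory that should live on the SSDs can be switched to the "fast" goal (the path below is only an illustration):

```
$ lizardfs setgoal fast /lizardfs/scratch -r
```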

And to see the replication level:

```
$ lizardfs getgoal /lizardfs/efraimf/

/lizardfs/efraimf/: slow

$ lizardfs getgoal /lizardfs/efraimf/ -r

/lizardfs/efraimf/:
 files with goal        slow :          1
 directories with goal  slow :          1
```

## Checking the health of the pool

There are a couple of commands which can be used to check on the health of the pool. They all follow the syntax `lizardfs-admin <command> <head-node> <port>`.

To find out the overall health of the data on the pool:

```
$ lizardfs-admin chunks-health octopus01 9421

Chunks availability state:
        Goal    Safe    Unsafe  Lost
        slow    202726  26005   2073
        fast    43397   1085    -
        2ssd    7984    -       -

Chunks replication state:
        Goal    0       1       2       3       4       5       6       7       8       9       10+
        slow    95      1870    228839  -       -       -       -       -       -       -       -
        fast    17253   2317    24912   -       -       -       -       -       -       -       -
        2ssd    7984    -       -       -       -       -       -       -       -       -       -

Chunks deletion state:
        Goal    0       1       2       3       4       5       6       7       8       9       10+
        slow    68      15      2081    27598   201022  20      -       -       -       -       -
        fast    12603   720     1880    5377    23902   -       -       -       -       -       -
        2ssd    7984    -       -       -       -       -       -       -       -       -       -
```

To see how the individual disks are filling up and whether any of them have reported errors, list all the disks:

```
$ lizardfs-admin list-disks octopus01 9421 | less
```

Other commands can be found with `man lizardfs-admin`.
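
One that is often useful is `list-chunkservers`, which summarises usage per chunkserver rather than per disk (assuming the subcommand name matches the installed version; check the man page if not):

```
$ lizardfs-admin list-chunkservers octopus01 9421
```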


## Deleted files

Lizardfs also keeps deleted files, by default for 30 days. To recover deleted files (or to delete them permanently), mount the metadata directory with:

```
$ mfsmount /path/to/unused/mount -o mfsmeta
```

For more information, see the lizardfs documentation online:
=> https://dev.lizardfs.com/docs/adminguide/advanced_configuration.html#trash-directory lizardfs documentation for the trash directory
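
As a rough sketch of the recovery workflow (paths and entry names below are only placeholders): deleted files show up under trash/ in the metadata mount, and moving an entry into trash/undel/ restores it to its original location.

```
$ mkdir /tmp/lizardfs-meta
$ mfsmount /tmp/lizardfs-meta -o mfsmeta

# deleted files appear under trash/, named after their original path
$ ls /tmp/lizardfs-meta/trash/

# moving an entry into trash/undel/ restores it
$ mv /tmp/lizardfs-meta/trash/<entry> /tmp/lizardfs-meta/trash/undel/
```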

## Gotchas

It should be noted that any goal using erasure_coding is extremely slow to write to, and defining goals like this should be avoided. Although erasure coding does decrease the amount of space each file takes up in the pool, the write penalty when it is mistakenly applied to data or folders which will still be written to outweighs the space savings.
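
For reference, erasure-coded goals are the ones defined with the $ec(k,m) syntax in the goals configuration (presumably /etc/lizardfs/mfsgoals.cfg on this setup); the id and name below are only illustrative:

```
# 2 data parts + 1 parity part: saves space, but writes are very slow
22 ec21: $ec(2,1)
```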

"speeding up" replication or resilvering of the data can be done in /etc/lizardfs/mfsmaster.cfg. Uncomment the following lines to increase their effect 10-fold from their defaults:

```
# CHUNKS_SOFT_DEL_LIMIT = 100
# CHUNKS_HARD_DEL_LIMIT = 250
# CHUNKS_WRITE_REP_LIMIT = 20
# CHUNKS_READ_REP_LIMIT = 100
```

followed either by restarting lizardfs-master.service or by running the following (probably as root on octopus01):

```
$ lizardfs-admin reload-config octopus01 9421
```

It has not yet been tested how much this change affects reading from and writing to the HDDs or SSDs while it is in effect.
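
While the limits are raised, replication progress can be followed by re-running the health check periodically, for example:

```
$ watch -n 60 lizardfs-admin chunks-health octopus01 9421
```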