# Hanging database
MariaDB occasionally stops responding. We think converting tables from MyISAM to InnoDB is the solution, because InnoDB locks rows rather than whole tables.
It is also better to use mydumper: mydumper has to lock MyISAM tables, but it does not need to lock InnoDB tables, so the dump stays consistent without blocking.
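Conversion itself is a single statement per table. A minimal sketch, assuming we start with the largest candidate from the task list below; the table is rewritten in place, so this takes time and free disk space:
```
-- Convert a MyISAM table to InnoDB (rewrites the whole table).
ALTER TABLE ProbeSetData ENGINE = InnoDB;

-- Verify the engine afterwards.
SHOW TABLE STATUS LIKE 'ProbeSetData';
```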
## Tags
* assigned: pjotrp, zsloan
* type: bug
* keywords: database
* status: unclear
* priority: high
## Tasks
### for Penguin2, Tux01 and Tux02
* First on Penguin2
* Update mariadb to latest
* Convert fulltext tables (see below)
+ ProbeSet
+ GeneRIF_BASIC
+ pubmedsearch
* Good candidates (by size of the MyISAM data file)
+ 2.1G Dec 4 22:15 ProbeSetXRef.MYD
+ 2.3G Dec 18 14:56 ProbeSet.MYD
+ 2.6G Aug 27 2019 ProbeSE.MYD
+ 7.1G Nov 2 05:07 ProbeSetSE.MYD
+ 11G Aug 27 2019 ProbeData.MYD
+ 63G Dec 4 22:15 ProbeSetData.MYD
* Create test for every table that is going to switch
* Convert largest tables to InnoDB (the query after this list shows per-table engine and size)
* After some testing do same for Tux01 and Tux02
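A sketch for finding and tracking candidates, assuming the schema is db_webqtl as in the sessions below:
```
-- List MyISAM tables by size, largest first, to pick conversion
-- candidates and to verify tables after they have been switched.
SELECT TABLE_NAME, ENGINE,
       ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024, 1) AS size_gb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'db_webqtl' AND ENGINE = 'MyISAM'
ORDER BY DATA_LENGTH + INDEX_LENGTH DESC;
```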
### Tux02
# Info
## MariaDB is 'hanging'
In the last 12 hours GN2 monitoring shows the website responding intermittently. A quick check shows the database is blocking. Rather than simply restarting the database - which is known to sort the issue - I can do some checking first, since the US is asleep at this hour. Let's take a look.
MariaDB is using about 4 CPU cores:
```
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
40514 mysql 20 0 37.3g 1.8g 18268 S 394.4 0.7 4855:33 mysqld
```
The ps table shows a backup is ongoing:
```
root 57559 0.0 0.0 2388 756 ? Ss 03:00 0:00 /bin/sh -c /bin/su mysql -c /export/backup/scripts/tux01/backup_mariadb.sh >> ~/cron.log 2>&1
mysql 57588 0.2 0.0 200112 27292 ? Sl 03:00 0:25 mariabackup --backup --target-dir=/home/backup/tux01_mariadb_new/latest/ --user=webqtlout --password=x xxxxxxx
```
Tables in use:
```
MariaDB [db_webqtl]> show open tables where in_use > 1;
+-----------+----------------+--------+-------------+
| Database | Table | In_use | Name_locked |
+-----------+----------------+--------+-------------+
| db_webqtl | InfoFiles | 5 | 0 |
| db_webqtl | GeneRIF_BASIC | 3 | 0 |
| db_webqtl | ProbeFreeze | 4 | 0 |
| db_webqtl | ProbeSet | 4 | 0 |
| db_webqtl | ProbeSetXRef | 4 | 0 |
| db_webqtl | Species | 4 | 0 |
| db_webqtl | Geno | 4 | 0 |
| db_webqtl | Chr_Length | 2 | 0 |
| db_webqtl | Tissue | 4 | 0 |
| db_webqtl | ProbeSetFreeze | 4 | 0 |
| db_webqtl | InbredSet | 4 | 0 |
+-----------+----------------+--------+-------------+
11 rows in set (0.001 sec)
```
```
MariaDB [db_webqtl]> show open tables where in_use > 1;
+-----------+----------------+--------+-------------+
| Database | Table | In_use | Name_locked |
+-----------+----------------+--------+-------------+
| db_webqtl | ProbeFreeze | 191 | 0 |
| db_webqtl | ProbeSet | 6 | 0 |
| db_webqtl | ProbeSetXRef | 6 | 0 |
| db_webqtl | PublishFreeze | 13 | 0 |
| db_webqtl | Species | 106 | 0 |
| db_webqtl | Tissue | 106 | 0 |
| db_webqtl | ProbeSetFreeze | 6 | 0 |
| db_webqtl | InbredSet | 132 | 0 |
| db_webqtl | GenoFreeze | 13 | 0 |
+-----------+----------------+--------+-------------+
9 rows in set (0.001 sec)
```
In the error log we see a lot of:
```
2021-12-21 6:17:57 256179 [Warning] Aborted connection 256179 to db: 'db_webqtl' user: 'webqtlout' host: '128.169.5.59' (Got timeout reading communication packets)
```
```
SHOW FULL PROCESSLIST;
458 rows in set (0.003 sec)
```
with entries like:
```
| 256363 | webqtlout | 128.169.5.59:59120 | db_webqtl | Query | 15 | Waiting for table flush | SELECT Id, Name, FullName, ShortName, DataScale FROM ProbeSetFreeze WHERE public > 0 AND (Name = "CB_M_0305_R" OR FullName = "CB_M_0305_R" OR ShortName = "CB_M_0305_R")
```
Waiting for tables to flush!
At the top of the process list we find:
```
Id User Host db Command Time State Info Progress
1 system user NULL Daemon NULL InnoDB purge coordinator NULL 0.000
2 system user NULL Daemon NULL InnoDB purge worker NULL 0.000
3 system user NULL Daemon NULL InnoDB purge worker NULL 0.000
4 system user NULL Daemon NULL InnoDB purge worker NULL 0.000
5 system user NULL Daemon NULL InnoDB shutdown handler NULL 0.000
227365 webqtlout 127.0.0.1:33950 db_webqtl Sleep 13015 NULL 0.000
245634 webqtlout 127.0.0.1:38098 db_webqtl Sleep 23180 NULL 0.000
```
This is quite informative:
=> https://programmer.group/analysis-of-mysql-process-in-waiting-for-table-flush.html
It suggests that the backup can be the root of the problem.
And then
=> https://www.thegeekdiary.com/troubleshooting-mysql-query-hung-waiting-for-table-flush/
suggests
* Wait for the long-running queries which are blocking the FLUSH TABLE to complete;
* Identify the long-running queries and kill them;
* Restart the server
and that is somewhat amusing.
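Instead of waiting or restarting, the sessions stuck in this state can also be listed directly from SQL; a sketch using information_schema.PROCESSLIST, which carries the same data as SHOW FULL PROCESSLIST:
```
-- Show only sessions waiting on a table flush, longest-running first;
-- the top entry is the one blocking everyone else.
SELECT ID, USER, TIME, STATE, LEFT(INFO, 80) AS query
FROM information_schema.PROCESSLIST
WHERE STATE LIKE '%table flush%'
ORDER BY TIME DESC;
```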
Stripping out all regular queries we get:
```
grep localhost test.out |grep -vi probesetfreeze|grep -vi species
255559 webqtlout localhost NULL Query 13092 Waiting for table flush FLUSH NO_WRITE_TO_BINLOG TABLES 0.000
256351 webqtlout localhost db_webqtl Field List 1588 Waiting for table flush NULL 0.000
256383 webqtlout localhost db_webqtl Query 0 Init SHOW FULL PROCESSLIST 0.000
```
and it appears everyone is waiting for id 255559. Let's kill that.
```
kill 255559;
```
and inspect
```
mysql -u webqtlout -pwebqtlout db_webqtl -A -e "show processlist;"|less
```
It started processing again. To speed up recovery we fell back to the usual remedy anyway:
```
systemctl restart mysql
```
Of course that stopped the running backup, but processing is back in business.
My first conclusion is that this hang was triggered by the backup procedure. Interestingly, it only happens irregularly. We have also seen this issue before this backup procedure was instated, so I figure it has to do with MariaDB itself.
The version on production is old - we should update it soon:
Server version: 10.3.27-MariaDB-0+deb10u1-log Debian 10
We have been running a more recent version of MariaDB on luna. Still, an update alone is unlikely to fix this issue, because I think it really comes down to MyISAM and the locking of large tables. Switching to InnoDB does away with such global table locks, and InnoDB is the default engine on MariaDB anyway (fewer and fewer people use MyISAM).
## MariaDB table locked
Arthur reports: MariaDB is not responding (Saturday, July 24 2021 at 10:48 pm). I tried to enter data into the table ProbeSetXRef.pValue; what normally takes a few seconds has now run for more than 10 minutes without completing/responding.
Zach: before restarting, can you check the table status first?
=> https://mariadb.com/kb/en/show-table-status/
Some ideas here:
=> https://dba.stackexchange.com/questions/98725/mariadb-innodb-what-to-do-on-locks-in-status-log-but-no-locked-table-found
We are still using MyISAM for these tables: a switch to InnoDB may help.
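The status check Zach refers to is one statement; a sketch for the table Arthur was updating (it reports the engine, row counts and update times, among other things):
```
-- Status of the table the stalled update was writing to.
SHOW TABLE STATUS FROM db_webqtl LIKE 'ProbeSetXRef';
```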
## Another round of MariaDB
Arthur complained again that the DB is slow. The standard performance tests are not failing. The slow log shows slow queries:
```
# Thread_id: 1715564 Schema: db_webqtl QC_hit: No
# Query_time: 399.159339 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1647055599;
UPDATE ProbeSet SET description = REPLACE(description, ";", "") WHERE ChipId=11;
# Query_time: 2006.780492 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1647057611;
update ProbeSetXRef set mean = (select AVG(value) from ProbeSetData where ProbeSetData.Id = ProbeSetXRef.DataId) where ProbeSetXRef.ProbeSetFreezeId = 385;
# Query_time: 1158.804376 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
use db_webqtl;
SET timestamp=1647142312;
update ProbeSet set description="ArfGAP with SH3 domain, ankyrin repeat and PH domain 3" where Id=1719426 AND ChipId=11;
```
So we should be able to reproduce that.
* mariadb was restarted 1 week ago
* mariadb is not eating CPU or RAM, so the general state looks healthy.
* Disk state looks healthy - but there are files in /tmp which do not belong there.
* The ps table shows a backup is ongoing (after 3 hours).
* There are no tables in use according to `show open tables where in_use > 1;`
* There are no blocking processes according to `SHOW FULL PROCESSLIST;`
* Number of threads is a healthy 4 according to `show status where variable_name = 'Threads_connected';`
So, no issues. And the same query now runs fast:
```
MariaDB [db_webqtl]> update ProbeSet set description="ArfGAP with SH3 domain, ankyrin repeat and PH domain 3" where Id=1719426 AND ChipId=11;
Query OK, 1 row affected (0.210 sec)
Rows matched: 1 Changed: 1 Warnings: 0
```
There must have been a blocking update at the time. Best to run diagnostics while the system is actually blocking.
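When it does reproduce, one thing worth capturing is the plan of the slowest statement from the log above, the correlated-subquery update. A sketch; MariaDB supports EXPLAIN on UPDATE statements:
```
-- Inspect how the 2000-second update is executed, without running it.
EXPLAIN UPDATE ProbeSetXRef
SET mean = (SELECT AVG(value)
            FROM ProbeSetData
            WHERE ProbeSetData.Id = ProbeSetXRef.DataId)
WHERE ProbeSetXRef.ProbeSetFreezeId = 385;
```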
# Connections
```
MariaDB [db_webqtl]> SHOW STATUS LIKE '%connect%';
+-----------------------------------------------+-------+
| Variable_name | Value |
+-----------------------------------------------+-------+
| Aborted_connects | 0 |
| Connection_errors_accept | 0 |
| Connection_errors_internal | 0 |
| Connection_errors_max_connections | 0 |
| Connection_errors_peer_address | 0 |
| Connection_errors_select | 0 |
| Connection_errors_tcpwrap | 0 |
| Connections | 700 |
| Max_used_connections | 29 |
| Performance_schema_session_connect_attrs_lost | 0 |
| Slave_connections | 0 |
| Slaves_connected | 0 |
| Ssl_client_connects | 0 |
| Ssl_connect_renegotiates | 0 |
| Ssl_finished_connects | 0 |
| Threads_connected | 3 |
| wsrep_connected | OFF |
+-----------------------------------------------+-------+
17 rows in set (0.001 sec)
```
```
MariaDB [db_webqtl]> SHOW VARIABLES LIKE '%timeout%';
+---------------------------------------+----------+
| Variable_name | Value |
+---------------------------------------+----------+
| connect_timeout | 10 |
| deadlock_timeout_long | 50000000 |
| deadlock_timeout_short | 10000 |
| delayed_insert_timeout | 300 |
| idle_readonly_transaction_timeout | 0 |
| idle_transaction_timeout | 0 |
| idle_write_transaction_timeout | 0 |
| innodb_flush_log_at_timeout | 1 |
| innodb_lock_wait_timeout | 50 |
| innodb_rollback_on_timeout | OFF |
| interactive_timeout | 28800 |
| lock_wait_timeout | 86400 |
| net_read_timeout | 30 |
| net_write_timeout | 60 |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_slave_kill_conn_timeout | 5 |
| slave_net_timeout | 60 |
| thread_pool_idle_timeout | 60 |
| wait_timeout | 28800 |
+---------------------------------------+----------+
```
If you set wait_timeout and interactive_timeout at runtime, the values appear to get reset after a while: SET GLOBAL only affects new connections and does not survive a server restart, so a persistent change has to go into the server configuration.
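A minimal sketch of lowering the timeouts at runtime, assuming the goal is to reap idle connections sooner; the value 600 is a placeholder:
```
-- Applies to new connections only; reverts on server restart.
SET GLOBAL wait_timeout = 600;
SET GLOBAL interactive_timeout = 600;

-- Confirm the change.
SHOW GLOBAL VARIABLES LIKE '%timeout%';
```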
# Recent crash
```
| Max_used_connections | 1030 |
| Threads_connected | 1030 |
show open tables where in_use > 1;
+-----------+----------------+--------+-------------+
| Database | Table | In_use | Name_locked |
+-----------+----------------+--------+-------------+
| db_webqtl | ProbeFreeze | 182 | 0 |
| db_webqtl | ProbeSet | 3 | 0 |
| db_webqtl | ProbeSetXRef | 3 | 0 |
| db_webqtl | PublishFreeze | 7 | 0 |
| db_webqtl | Species | 94 | 0 |
| db_webqtl | Tissue | 94 | 0 |
| db_webqtl | ProbeSetFreeze | 3 | 0 |
| db_webqtl | InbredSet | 108 | 0 |
| db_webqtl | GenoFreeze | 7 | 0 |
+-----------+----------------+--------+-------------+
```
```
FLUSH NO_WRITE_TO_BINLOG TABLES
Waiting for table flush SELECT confidentiality, AuthorisedUsers FROM ProbeSetFreeze WHERE Name = 'EL_BXDCDScWAT_0216' 0.000
Waiting for table flush SELECT DISTINCT Tissue.Name FROM ProbeFreeze,ProbeSetFreeze, InbredSet, Tissue, Species WHERE Speci 0.000
```
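Threads_connected equal to Max_used_connections at 1030 suggests the server may have run into its connection ceiling; next time it is worth comparing against the configured limit:
```
-- Compare the configured ceiling against the observed peak.
SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
```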
Another thing to check next time is:
```
SHOW ENGINE INNODB STATUS
```
and running the backup with:
```
mariabackup --backup --kill-long-query-type=SELECT --kill-long-queries-timeout=120
```
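If those mariabackup options behave as documented, the backup will wait up to 120 seconds for long-running SELECT queries and then kill them, instead of letting its FLUSH TABLES step queue up behind them as happened above.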
# Monitor connections
Use the general log, with its output redirected to an indexable table:
```
-- Default engine for the log table is CSV; MyISAM allows adding an index.
-- Run the ALTERs while the general log is still off.
ALTER TABLE mysql.general_log ENGINE = MyISAM;
ALTER TABLE mysql.general_log ADD INDEX (event_time);
-- Send the general log to the table (set this before enabling the log).
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
```
and we start logging all connections and queries! Let this run for a while, then switch it off again for analysis.
To stop logging
```
SET GLOBAL general_log = 'OFF';
```
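Once logging is off, the collected log can be queried like any other table; a sketch of two obvious starting points:
```
-- Most recent activity.
SELECT event_time, user_host, command_type, LEFT(argument, 80) AS query
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 20;

-- Which users/hosts connect most often.
SELECT user_host, COUNT(*) AS n
FROM mysql.general_log
GROUP BY user_host
ORDER BY n DESC;
```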