Everything posted by jithu

  1. Ian, I checked the RAID status for any drive failures (a scripted version of this check is sketched after this list).

     $ cat /proc/mdstat
     Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
     md2 : active raid1 sdb2[1] sda2[0]
           20478912 blocks [2/2] [UU]
     md3 : active raid1 sdb3[1] sda3[0]
           1932506048 blocks [2/2] [UU]
     unused devices: <none>

     $ mdadm -D /dev/md2
     /dev/md2:
             Version : 0.90
       Creation Time : Thu May 15 09:29:16 2014
          Raid Level : raid1
          Array Size : 20478912 (19.53 GiB 20.97 GB)
       Used Dev Size : 20478912 (19.53 GiB 20.97 GB)
        Raid Devices : 2
       Total Devices : 2
     Preferred Minor : 2
         Persistence : Superblock is persistent
         Update Time : Wed Jan 28 22:25:30 2015
               State : clean
      Active Devices : 2
     Working Devices : 2
      Failed Devices : 0
       Spare Devices : 0
                UUID : 23d13fab:22752e43:a4d2adc2:26fd5302
              Events : 0.135
         Number   Major   Minor   RaidDevice State
            0       8        2        0      active sync   /dev/sda2
            1       8       18        1      active sync   /dev/sdb2

     $ mdadm -D /dev/md3
     /dev/md3:
             Version : 0.90
       Creation Time : Thu May 15 09:29:16 2014
          Raid Level : raid1
          Array Size : 1932506048 (1842.98 GiB 1978.89 GB)
       Used Dev Size : 1932506048 (1842.98 GiB 1978.89 GB)
        Raid Devices : 2
       Total Devices : 2
     Preferred Minor : 3
         Persistence : Superblock is persistent
         Update Time : Wed Jan 28 22:45:39 2015
               State : clean
      Active Devices : 2
     Working Devices : 2
      Failed Devices : 0
       Spare Devices : 0
                UUID : 8077f854:59bf935e:a4d2adc2:26fd5302
              Events : 0.582
         Number   Major   Minor   RaidDevice State
            0       8        3        0      active sync   /dev/sda3
            1       8       19        1      active sync   /dev/sdb3

     It looks like the RAID is working just fine, without any failed drives.
  2. Hi Ian, the partitions were not full at the moment of the crash. Unfortunately I don't have any logs from before the crash, but I got something from syslog that indicates what happened on restart: https://gist.github.com/anonymous/8c9645f54c5a787e85ed It looks like the database was not shut down normally. Do you have any hints? Thanks.
  3. Hi, here is some more information.

     df -h
     Filesystem      Size  Used Avail Use% Mounted on
     rootfs           20G   11G  7.4G  60% /
     /dev/root        20G   11G  7.4G  60% /
     devtmpfs         32G     0   32G   0% /dev
     tmpfs           6.3G  308K  6.3G   1% /run
     tmpfs           5.0M     0  5.0M   0% /run/lock
     tmpfs            13G     0   13G   0% /dev/shm
     /dev/md3        1.8T  155G  1.6T   9% /home

     my.cnf
     ...
     [mysqld_safe]
     socket          = /var/run/mysqld/mysqld.sock
     nice            = 0

     [mysqld]
     #
     # * Basic Settings
     #
     user            = mysql
     pid-file        = /var/run/mysqld/mysqld.pid
     socket          = /var/run/mysqld/mysqld.sock
     port            = 3306
     basedir         = /usr
     datadir         = /var/lib/mysql
     tmpdir          = /tmp
     lc-messages-dir = /usr/share/mysql
     ...

     I don't think I'm using tmpfs for MySQL (a quick way to confirm this is sketched after this list).
  4. Hi everyone, we had some data loss with Revive following a server restart. Thanks to our daily database backups we were able to minimize the loss. We are running Revive 3.0.5 at the moment, on MySQL 5.5. It seems like MySQL was not saving anything to disk over the last few months, which is pretty strange. Do you have any idea what could be causing this problem? (Some follow-up checks along these lines are sketched after this list.) Thanks.
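The RAID check in post 1 is done by eye; the snippet below is a minimal shell sketch of the same check in script form. It assumes mdadm is installed, that the commands run as root, and that the array names (/dev/md2, /dev/md3) match the output above; adjust them for other setups.

    #!/bin/sh
    # Hedged sketch: report the state of each mirror and warn if /proc/mdstat
    # shows a degraded two-disk array ([U_] or [_U] instead of [UU]).
    for dev in /dev/md2 /dev/md3; do
        printf '%s: ' "$dev"
        mdadm --detail "$dev" | awk -F' : ' '/ State :/ {print $2}'
    done
    if grep -q '\[U_\]\|\[_U\]' /proc/mdstat; then
        echo "WARNING: degraded array found in /proc/mdstat"
    fi

A healthy mirror reports "clean" (or "active"); anything mentioning "degraded" means a member has dropped out of the array.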
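The tmpfs question in post 3 can be answered directly from the mount table. A minimal sketch, assuming the datadir from the my.cnf excerpt above (/var/lib/mysql); both commands are read-only.

    # Show which filesystem and mount back the MySQL datadir.
    df -hT /var/lib/mysql
    findmnt -T /var/lib/mysql -o TARGET,SOURCE,FSTYPE
    # If FSTYPE comes back as tmpfs, the data lives in RAM and disappears on
    # every reboot; on the layout shown above it should resolve to the root
    # filesystem rather than tmpfs.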
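For the original problem in post 4 (changes apparently never reaching disk), together with the unclean shutdown visible in post 2's syslog excerpt, a few hedged follow-up checks are sketched below. The error-log path, the root credentials, and the "revive" schema name are assumptions rather than details taken from this thread; substitute your own values.

    # Look for InnoDB crash-recovery messages around the restart
    # (log path is a common Debian default and may differ on this host).
    tail -n 200 /var/log/mysql/error.log

    # Confirm where MySQL thinks its data lives and how InnoDB flushes.
    mysql -u root -p -e "SHOW VARIABLES LIKE 'datadir';"
    mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"

    # Check which storage engine the Revive tables use; the schema name
    # 'revive' is a placeholder for the actual database name.
    mysql -u root -p -e "SELECT table_name, engine FROM information_schema.tables WHERE table_schema = 'revive';"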