
Data Loss With A Server Restart


jithu


Hi everyone,

We had some data loss with Revive following a server restart. Thanks to our daily database backups, we were able to minimize the loss.

We are running Revive 3.0.5 at the moment.

It seems like MySQL was not saving anything to disk over the last few months, which is pretty strange. We are on MySQL 5.5.

Do you have any idea what could be causing this problem?
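
In case it helps, this is how one could check when MySQL last actually wrote the table files to disk (just a sketch; "revive" as the database name is an assumption, adjust it to your setup):

$ ls -lt /var/lib/mysql/revive/ | head
# newest files first; for MyISAM tables the .MYD/.MYI timestamps show the last write
$ mysql -u root -p -e "SELECT TABLE_NAME, ENGINE, UPDATE_TIME FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'revive';"
# UPDATE_TIME is populated for MyISAM tables and shows the last modification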

Thanks


Hi,

Here is some more information.

 

$ df -h

Filesystem                                                            Size  Used Avail Use% Mounted on
rootfs                                                                 20G   11G  7.4G  60% /
/dev/root                                                              20G   11G  7.4G  60% /
devtmpfs                                                               32G     0   32G   0% /dev
tmpfs                                                                 6.3G  308K  6.3G   1% /run
tmpfs                                                                 5.0M     0  5.0M   0% /run/lock
tmpfs                                                                  13G     0   13G   0% /dev/shm
/dev/md3                                                              1.8T  155G  1.6T   9% /home

my.cnf

...

[mysqld_safe]
socket          = /var/run/mysqld/mysqld.sock
nice            = 0


[mysqld]
#
# * Basic Settings
#
user            = mysql
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
port            = 3306
basedir         = /usr
datadir         = /var/lib/mysql
tmpdir          = /tmp
lc-messages-dir = /usr/share/mysql

...

I don't think I'm using tmpfs for MySQL.
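
To double-check, one could verify which filesystem actually backs the datadir (a quick sketch, using the datadir path from the my.cnf above; findmnt comes with util-linux):

$ df -h /var/lib/mysql
# should report /dev/root (or an md device), not tmpfs
$ findmnt -T /var/lib/mysql
# shows the mount point and filesystem type containing the datadir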


Hi Jithu,

Your tables are marked as crashed, and that's not without reason ... I'd check the hard disk(s?) and/or whether the RAID set is still error-free. Maybe you have (or had) a mirrored RAID set and after the reboot you came up on the other disk than before? I think that's really the direction you should search in; it doesn't have anything to do with Revive. Nevertheless, I'm curious about the outcome if you find something.
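
If you want to see which tables are flagged, mysqlcheck can check them (and, for MyISAM, repair them) from the shell. A sketch; take a backup before repairing:

$ mysqlcheck -u root -p --check --all-databases
# reports tables that are marked as crashed or corrupt
$ mysqlcheck -u root -p --check --auto-repair --all-databases
# attempts an in-place repair of whatever the check flags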


Ian, I checked the RAID status for any drive failures.

$ cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty] 
md2 : active raid1 sdb2[1] sda2[0]
      20478912 blocks [2/2] [UU]
      
md3 : active raid1 sdb3[1] sda3[0]
      1932506048 blocks [2/2] [UU]
      
unused devices: <none>

$ mdadm -D /dev/md2

/dev/md2:
        Version : 0.90
  Creation Time : Thu May 15 09:29:16 2014
     Raid Level : raid1
     Array Size : 20478912 (19.53 GiB 20.97 GB)
  Used Dev Size : 20478912 (19.53 GiB 20.97 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent


    Update Time : Wed Jan 28 22:25:30 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0


           UUID : 23d13fab:22752e43:a4d2adc2:26fd5302
         Events : 0.135


    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2

$ mdadm -D /dev/md3

/dev/md3:
        Version : 0.90
  Creation Time : Thu May 15 09:29:16 2014
     Raid Level : raid1
     Array Size : 1932506048 (1842.98 GiB 1978.89 GB)
  Used Dev Size : 1932506048 (1842.98 GiB 1978.89 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 3
    Persistence : Superblock is persistent


    Update Time : Wed Jan 28 22:45:39 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0


           UUID : 8077f854:59bf935e:a4d2adc2:26fd5302
         Events : 0.582


    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3

Looks like the RAID is working just fine, with no failed drives.
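
Next I'll also query SMART on the underlying drives, since a clean mdstat only shows that both mirror members are in sync, not that the disks themselves are healthy (a sketch, assuming smartmontools is installed):

$ smartctl -H /dev/sda
# overall health self-assessment for the first member
$ smartctl -A /dev/sda
# attribute table; watch Reallocated_Sector_Ct and Current_Pending_Sector
$ smartctl -H /dev/sdb
# repeat for the second member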

 

