Redis AOF

Let's start with a question: if Redis crashes and all the data in memory is lost, how do we recover it?

Redis provides two persistence mechanisms, RDB and AOF:
RDB saves a snapshot of the database to disk in binary form.
AOF records every write command (and its arguments) executed against the database to the AOF file in protocol-text form, thereby recording the database state.

(1) What is the AOF log

The AOF (Append Only File) log is a write-after log: Redis executes the command and applies it to memory first, and only then records it in the log. Records are appended to the end of the file, hence the name.

This is the opposite of the more familiar WAL (Write-Ahead Log), which records the change in the log file before actually applying it. A write-ahead approach requires the database to do extra work to validate commands before logging them; because Redis logs a command only after it has executed successfully, no such validation is needed.

The AOF log serves two main purposes: (1) recovering data after a Redis crash; (2) feeding master-replica synchronization.

(2) AOF command synchronization

Redis records every write command (and its arguments) executed against the database to the AOF file, thereby recording the database state.

redis> RPUSH list 1 2 3 4
(integer) 4

redis> LRANGE list 0 -1
1) "1"
2) "2"
3) "3"
4) "4"

redis> KEYS *
1) "list"

redis> RPOP list
"4"

redis> LPOP list
"1"

redis> LPUSH list 1
(integer) 3

redis> LRANGE list 0 -1
1) "1"
2) "2"
3) "3"

The four commands among these that modified the database are then synchronized to the AOF file:

RPUSH list 1 2 3 4

RPOP list

LPOP list

LPUSH list 1

(2.1) Redis AOF storage format

For ease of processing, the AOF file stores these commands in the Redis network protocol (RESP) format.

*2      # this command has 2 parts
$6      # the next line's data is 6 bytes long
SELECT  # body
$1      # the next line's data is 1 byte long
0       # body
*6      # this command has 6 parts
$5      # the next line's data is 5 bytes long
RPUSH   # body
$4      # the next line's data is 4 bytes long
list
$1
1
$1
2
$1
3
$1
4
*2
$4
RPOP
$4
list
*2
$4
LPOP
$4
list
*3
$5
LPUSH
$4
list
$1
1

Except for the SELECT command, which the AOF program adds by itself, the other commands are exactly the ones we executed in the terminal earlier.
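As a sketch of how a command becomes the protocol text shown above: Redis does the real encoding in catAppendOnlyGenericCommand() in src/aof.c using sds buffers; the helper below (aof_encode_command is a name of my own) is a simplified stand-in working on a plain char buffer.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for Redis's encoder: serialize one command
 * (argc/argv) into the *N / $len / payload protocol text. Returns the
 * number of bytes written, or -1 if the buffer is too small. */
int aof_encode_command(char *dst, size_t dstlen, int argc, const char **argv) {
    int n = snprintf(dst, dstlen, "*%d\r\n", argc);
    if (n < 0 || (size_t)n >= dstlen) return -1;
    size_t used = (size_t)n;
    for (int i = 0; i < argc; i++) {
        n = snprintf(dst + used, dstlen - used, "$%zu\r\n%s\r\n",
                     strlen(argv[i]), argv[i]);
        if (n < 0 || used + (size_t)n >= dstlen) return -1;
        used += (size_t)n;
    }
    return (int)used;
}
```

For example, encoding RPUSH list 1 this way yields the text `*3\r\n$5\r\nRPUSH\r\n$4\r\nlist\r\n$1\r\n1\r\n`, matching the layout above.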

(2.2) How commands are synchronized to the AOF file

Synchronizing a command to the AOF file happens in three stages:

  1. Command propagation: Redis sends the executed command, its arguments, and the argument count to the AOF program.
  2. Buffer append: the AOF program converts the received command into the network protocol format and appends the protocol text to the server's AOF buffer.
  3. File write and save: the contents of the AOF buffer are written to the end of the AOF file; if the configured AOF save condition is met, fsync or fdatasync is called to actually persist the written data to disk.

(3) AOF save modes

Redis currently supports three AOF save modes:

  1. AOF_FSYNC_NO: never save (leave it to the OS).
  2. AOF_FSYNC_EVERYSEC: save once per second.
  3. AOF_FSYNC_ALWAYS: save after every command.

(3.1) Impact of the save modes on performance and safety

The three AOF save modes block the server's main process as follows:

No save (AOF_FSYNC_NO): both the write and the save are performed by the main process, and both block it.
Save every second (AOF_FSYNC_EVERYSEC): the write is performed by the main process and blocks it; the save is performed by a background thread and does not block the main process directly, but how fast the save completes affects how long the write blocks.
Save after every command (AOF_FSYNC_ALWAYS): same as mode 1.

(4) AOF rewrite

The AOF file records the database state by synchronizing every command the server executes, but this approach has a problem: as time goes by, the AOF file keeps growing.

For example, if the server executes the commands below, then just to record the state of the list key, the AOF file has to store four commands.

RPUSH list 1 2 3 4      // [1, 2, 3, 4]

RPOP list               // [1, 2, 3]

LPOP list               // [2, 3]

LPUSH list 1            // [1, 2, 3]

"Rewrite" is actually a somewhat misleading term: AOF rewrite does not read or analyze the existing AOF file at all; it works from the current values of the keys in the database.
In the example above, the current value of the list key in the database is [1, 2, 3].
To save this list's current state with as few commands as possible, the simplest approach is not to analyze the four previously executed commands in the AOF file, but to read the current value of list directly and replace the four commands with a single RPUSH list 1 2 3.

Lists, sets, strings, sorted sets, hashes, and other key types can all have their state saved this way, and the number of commands needed is far smaller than the number of commands originally used to build up that state.
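The idea can be sketched with a toy helper (rewrite_list_key is illustrative, not Redis's actual code: the real rewrite emits protocol-format text and splits very long collections across multiple commands):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: a rewrite reads the key's CURRENT value and emits
 * one command that recreates it, instead of replaying the command
 * history. Returns the length written, or -1 if dst is too small. */
int rewrite_list_key(char *dst, size_t dstlen, const char *key,
                     int n, const char **items) {
    int used = snprintf(dst, dstlen, "RPUSH %s", key);
    if (used < 0 || (size_t)used >= dstlen) return -1;
    for (int i = 0; i < n; i++) {
        int w = snprintf(dst + used, dstlen - (size_t)used, " %s", items[i]);
        if (w < 0 || (size_t)(used + w) >= dstlen) return -1;
        used += w;
    }
    return used;
}
```

Given the current value [1, 2, 3], this produces the single command `RPUSH list 1 2 3` that replaces the four logged commands.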

(4.1) How AOF rewrite is implemented

The rewrite traverses the databases and, for each key, reads its current value and generates the commands needed to recreate that value, writing them to a new AOF file (in Redis this is done by rewriteAppendOnlyFile() in src/aof.c).

(4.2) Background AOF rewrite

To avoid blocking the main process, the rewrite runs in a forked child process, which writes to a temporary file; this avoids contention over the current AOF file.

While the child process performs the rewrite, the main process has three jobs:
handle incoming command requests;
append write commands to the existing AOF file;
append write commands to the AOF rewrite buffer.
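The dual-append behavior can be modeled with a toy structure (toy_server and its field names are illustrative, not Redis's actual structures): while the rewrite child is running, every write command goes to both buffers, so the new AOF file can later be patched with whatever changed during the rewrite.

```c
#include <assert.h>
#include <string.h>

/* Toy model of the main process's duties during a background rewrite. */
struct toy_server {
    char aof_buf[1024];      /* buffer for the existing AOF file */
    char rewrite_buf[1024];  /* buffer replayed into the new AOF file */
    int aof_child_running;   /* is a rewrite child in progress? */
};

void feed_command(struct toy_server *s, const char *cmd) {
    /* always append to the normal AOF buffer */
    strncat(s->aof_buf, cmd, sizeof(s->aof_buf) - strlen(s->aof_buf) - 1);
    /* while a rewrite child runs, also append to the rewrite buffer */
    if (s->aof_child_running)
        strncat(s->rewrite_buf, cmd,
                sizeof(s->rewrite_buf) - strlen(s->rewrite_buf) - 1);
}
```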

(4.3) When a background rewrite is triggered

An AOF rewrite can be triggered manually by calling BGREWRITEAOF.

In addition, while AOF is enabled the server maintains three variables:

aof_current_size, recording the current size of the AOF file;
aof_rewrite_base_size, recording the size of the AOF file after the last rewrite;
aof_rewrite_perc, the growth-percentage threshold.

Every time the serverCron function runs, it checks whether all of the following conditions hold, and if so triggers an automatic AOF rewrite:

no BGSAVE is in progress;
no BGREWRITEAOF is in progress;
the current AOF file is larger than server.aof_rewrite_min_size (64 MB by default in Redis 6.0; older documentation cites 1 MB);
the ratio between the current AOF file size and the size after the last rewrite is at least the configured growth percentage.

By default the growth percentage is 100%, i.e. if the first three conditions hold and the current AOF file is at least twice as large as it was after the last rewrite, an automatic AOF rewrite is triggered.
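Put together, the automatic trigger can be sketched as a single predicate (should_auto_rewrite is my own name; this is a simplification of the serverCron logic, using the variables named above):

```c
#include <assert.h>

/* Sketch of the auto-rewrite condition. Sizes are in bytes;
 * aof_rewrite_perc is a percentage (default 100). */
int should_auto_rewrite(long long aof_current_size,
                        long long aof_rewrite_base_size,
                        int aof_rewrite_perc,
                        long long aof_rewrite_min_size,
                        int child_in_progress) {
    /* conditions 1 and 2: no BGSAVE / BGREWRITEAOF running */
    if (child_in_progress) return 0;
    /* condition 3: file must have reached the minimum size */
    if (aof_current_size < aof_rewrite_min_size) return 0;
    /* condition 4: growth since the last rewrite, in percent */
    long long base = aof_rewrite_base_size ? aof_rewrite_base_size : 1;
    long long growth = (aof_current_size * 100 / base) - 100;
    return growth >= aof_rewrite_perc;
}
```

With the defaults (perc = 100), a file that doubled since the last rewrite triggers, while 50% growth does not.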

(5) How file I/O works

When we write data to a file, traditional Unix/Linux kernels first copy the data into a kernel buffer, queue it, and write it to disk some time later. This is known as delayed write.

A write to a disk file therefore only updates the in-memory page cache, because the write call does not wait for the disk I/O to complete before returning. If the OS crashes after the write call but before the disk has been synchronized, the data may be lost.

To keep the on-disk filesystem consistent with the buffer cache, UNIX systems provide three functions: sync, fsync, and fdatasync.
sync merely queues all modified block buffers for writing and returns; it does not wait for the actual disk writes to finish. A system daemon, usually called update, calls sync periodically (generally every 30 seconds), which guarantees that the kernel's block buffers are flushed regularly.

fsync acts only on the single file referred to by the file descriptor fd, and waits for the disk writes to finish before returning. fsync is suitable for applications such as databases, which need to make sure modified blocks are written to disk immediately.

fdatasync is similar to fsync, but it only affects the data portion of the file, whereas fsync also synchronizes the file's attributes.

Now consider fsync's performance. Unlike fdatasync, fsync synchronizes not only the file's modified content (dirty pages) but also its metadata (size, access and modification timestamps st_atime and st_mtime, and so on). Because a file's data and its metadata are usually stored in different places on disk, fsync requires at least two write I/Os, as its man page notes. fdatasync does not synchronize metadata and can therefore save one write I/O, as explained in its man page.

To be precise: if the file size (st_size) changes, it does need to be synchronized immediately, otherwise after an OS crash the modified content would be unreadable even though the data itself was synced, because the metadata was not.
The access time (atime) and modification time (mtime), on the other hand, do not need to be synced on every write; as long as the application has no strict requirements on these two timestamps, skipping them has essentially no impact.
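A minimal demonstration of the pattern, assuming a temp-file path; the macro name demo_fsync and the helper write_durably are mine (Redis's equivalent macro, redis_fsync, appears in the config.h snippet below):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#ifdef __linux__
#define demo_fsync fdatasync  /* data only: skips the extra metadata I/O */
#else
#define demo_fsync fsync
#endif

/* write(2) only reaches the page cache; the explicit sync makes the
 * data durable before we return. Returns 0 on success, -1 on error. */
int write_durably(const char *path, const char *data) {
    int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd == -1) return -1;
    ssize_t len = (ssize_t)strlen(data);
    if (write(fd, data, (size_t)len) != len) { close(fd); return -1; }
    if (demo_fsync(fd) == -1) { close(fd); return -1; }
    return close(fd);
}
```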

In Redis's source file src/config.h you can see that, on Linux, Redis actually uses fdatasync() for its flush operation:

/* Define redis_fsync to fdatasync() in Linux and fsync() for all the rest */
#ifdef __linux__
#define redis_fsync fdatasync
#else
#define redis_fsync fsync
#endif

(5.1) How Redis's AOF flush works

always: fsync the AOF file every time a new command is appended to it; highest safety, biggest performance impact.
everysec: fsync once per second; Redis performs the sync in a separate thread.
no: hand synchronization over to the operating system; best performance, weakest data durability.

If the configuration sets appendonly=yes without specifying appendfsync, everysec is used by default; many production Redis clusters run with this option.
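For reference, the AOF-related options discussed in this section live in redis.conf; a representative snippet (values shown are the common defaults):

```conf
appendonly yes                   # enable AOF persistence
appendfsync everysec             # always | everysec | no
no-appendfsync-on-rewrite no     # skip fsync while a rewrite child runs?
auto-aof-rewrite-percentage 100  # growth % that triggers auto rewrite
auto-aof-rewrite-min-size 64mb   # minimum AOF size before auto rewrite
```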

Let's look at how the AOF flush actually works in the Redis code:
https://github.com/redis/redis/blob/6.0/src/server.c#L3014

/* When appendonly=yes is set, i.e. the AOF log is enabled */
if (server.aof_state == AOF_ON) {
    // open the AOF file for appending, creating it if needed
    server.aof_fd = open(server.aof_filename,
                            O_WRONLY|O_APPEND|O_CREAT,0644);
    if (server.aof_fd == -1) {
        serverLog(LL_WARNING, "Can't open the append-only file: %s",
            strerror(errno));
        exit(1);
    }
}

Redis also creates dedicated bio (background I/O) threads at startup to handle AOF persistence: initServer() in src/server.c calls bioInit(), which spawns one background thread per job type (three in Redis 6.0: deferred file close, AOF fsync, and lazy free). The code is as follows:
https://github.com/redis/redis/blob/6.0/src/bio.h#L43

/* Background job opcodes */
#define BIO_CLOSE_FILE    0 /* Deferred close(2) syscall. */
#define BIO_AOF_FSYNC     1 /* Deferred AOF fsync. */
#define BIO_LAZY_FREE     2 /* Deferred objects freeing. */
#define BIO_NUM_OPS       3

https://github.com/redis/redis/blob/6.0/src/bio.c#L123

/* Ready to spawn our threads. We use the single argument the thread
 * function accepts in order to pass the job ID the thread is
 * responsible of. */
for (j = 0; j < BIO_NUM_OPS; j++) {
    void *arg = (void*)(unsigned long) j;
    if (pthread_create(&thread,&attr,bioProcessBackgroundJobs,arg) != 0) {
        serverLog(LL_WARNING,"Fatal: Can't initialize Background Jobs.");
        exit(1);
    }
    bio_threads[j] = thread;
}

When the Redis server executes a write command such as SET foo helloworld, it not only modifies the in-memory dataset but also records the operation, encoded in the format described earlier. The content is appended to the server.aof_buf buffer, which you can think of as a small staging area: all pending updates accumulate there before being written to the file at the right moment (and, during a rewrite, also copied into server.aof_rewrite_buf_blocks). Each write operation thus goes to the buffer first and is fsynced to disk periodically; when certain thresholds are reached (governed mainly by auto-aof-rewrite-percentage and auto-aof-rewrite-min-size), Redis additionally forks a child process to perform a rewrite. To avoid losing too much data in a sudden crash, Redis calls flushAppendOnlyFile to write the buffer out at the following points:
1. before entering the event loop;
2. in the server's timer function serverCron(), which is where most flushes happen while Redis is running;
3. in stopAppendOnly(), when the AOF policy is being disabled.

Because all the code in serverCron runs server.hz times per second, Redis uses a macro, run_with_period(milliseconds) { ... }, to limit how often certain parts execute: the enclosed code runs only once every milliseconds.
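run_with_period can be modeled in isolation like this (hz and cronloops stand in for server.hz and server.cronloops; the macro body mirrors the one in Redis's server.h, and ticks_fired is a test harness of my own):

```c
static int hz = 10;        /* stand-in for server.hz */
static int cronloops = 0;  /* stand-in for server.cronloops */

/* Run the body every `ms` milliseconds, given that serverCron itself
 * runs hz times per second (i.e. once every 1000/hz ms). */
#define run_with_period(ms) \
    if ((ms) <= 1000/hz || !(cronloops % ((ms)/(1000/hz))))

/* Count how often the body would fire over `cycles` cron iterations. */
int ticks_fired(int cycles, int period_ms) {
    int fired = 0;
    for (cronloops = 0; cronloops < cycles; cronloops++) {
        run_with_period(period_ms) fired++;
    }
    return fired;
}
```

With hz=10, serverCron runs every 100 ms, so over 20 cron cycles a 1000 ms period fires twice, while a 100 ms (or shorter) period fires on every cycle.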

Code: https://github.com/redis/redis/blob/6.0/src/server.c#L2026

/* This is our timer interrupt, called server.hz times per second.
 * Here is where we do a number of things that need to be done asynchronously.
 * For instance:
 *
 * - Active expired keys collection (it is also performed in a lazy way on
 *   lookup).
 * - Software watchdog.
 * - Update some statistic.
 * - Incremental rehashing of the DBs hash tables.
 * - Triggering BGSAVE / AOF rewrite, and handling of terminated children.
 * - Clients timeout of different kinds.
 * - Replication reconnection.
 * - Many more...
 *
 * Everything directly called here will be called server.hz times per second,
 * so in order to throttle execution of things we want to do less frequently
 * a macro is used: run_with_period(milliseconds) { .... }
 */

int serverCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {
 
    // ... some code omitted

    /* AOF postponed flush: Try at every cron cycle if the slow fsync
     * completed. */
    if (server.aof_flush_postponed_start) flushAppendOnlyFile(0);

    /* AOF write errors: in this case we have a buffer to flush as well and
     * clear the AOF error in case of success to make the DB writable again,
     * however to try every second is enough in case of 'hz' is set to
     * a higher frequency. */
    run_with_period(1000) {
        if (server.aof_last_write_status == C_ERR)
            flushAppendOnlyFile(0);
    }
 
    // ... some code omitted
 
    return 1000/server.hz;
}

The code below shows that in flushAppendOnlyFile, after the write(2), the flush strategy depends on the appendfsync option: with AOF_FSYNC_ALWAYS the flush is performed immediately, while with AOF_FSYNC_EVERYSEC a background flush task is created instead. bioCreateBackgroundJob() creates the bio background job, and bioProcessBackgroundJobs() executes it.

https://github.com/redis/redis/blob/6.0/src/aof.c#L210

/* Starts a background task that performs fsync() against the specified
 * file descriptor (the one of the AOF file) in another thread. */
void aof_background_fsync(int fd) {
    bioCreateBackgroundJob(BIO_AOF_FSYNC,(void*)(long)fd,NULL,NULL);
}

https://github.com/redis/redis/blob/6.0/src/aof.c#L253

/* Called when the user switches from "appendonly no" to "appendonly yes"
 * at runtime using the CONFIG command. */
int startAppendOnly(void) {
    char cwd[MAXPATHLEN]; /* Current working dir path for error messages. */
    int newfd;

    // open the AOF file
    newfd = open(server.aof_filename,O_WRONLY|O_APPEND|O_CREAT,0644);
    serverAssert(server.aof_state == AOF_OFF);
    if (newfd == -1) {
        char *cwdp = getcwd(cwd,MAXPATHLEN);

        serverLog(LL_WARNING,
            "Redis needs to enable the AOF but can't open the "
            "append only file %s (in server root dir %s): %s",
            server.aof_filename,
            cwdp ? cwdp : "unknown",
            strerror(errno));
        return C_ERR;
    }
    if (hasActiveChildProcess() && server.aof_child_pid == -1) {
        server.aof_rewrite_scheduled = 1;
        serverLog(LL_WARNING,"AOF was enabled but there is already another background operation. An AOF background was scheduled to start when possible.");
    } else {
        /* If there is a pending AOF rewrite, we need to switch it off and
         * start a new one: the old one cannot be reused because it is not
         * accumulating the AOF buffer. */
        if (server.aof_child_pid != -1) {
            serverLog(LL_WARNING,"AOF was enabled but there is already an AOF rewriting in background. Stopping background AOF and starting a rewrite now.");
            killAppendOnlyChild();
        }
        if (rewriteAppendOnlyFileBackground() == C_ERR) {
            close(newfd);
            serverLog(LL_WARNING,"Redis needs to enable the AOF but can't trigger a background AOF rewrite operation. Check the above logs for more info about the error.");
            return C_ERR;
        }
    }
    /* We correctly switched on AOF, now wait for the rewrite to be complete
     * in order to append data on disk. */
    server.aof_state = AOF_WAIT_REWRITE;
    server.aof_last_fsync = server.unixtime;
    server.aof_fd = newfd;
    return C_OK;
}

https://github.com/redis/redis/blob/6.0/src/aof.c#L340

/* Write the append only file buffer on disk.
 *
 * Since we are required to write the AOF before replying to the client,
 * and the only way the client socket can get a write is entering when the
 * the event loop, we accumulate all the AOF writes in a memory
 * buffer and write it on disk using this function just before entering
 * the event loop again.
 *
 * About the 'force' argument:
 *
 * When the fsync policy is set to 'everysec' we may delay the flush if there
 * is still an fsync() going on in the background thread, since for instance
 * on Linux write(2) will be blocked by the background fsync anyway.
 * When this happens we remember that there is some aof buffer to be
 * flushed ASAP, and will try to do that in the serverCron() function.
 *
 * However if force is set to 1 we'll write regardless of the background
 * fsync. */
#define AOF_WRITE_LOG_ERROR_RATE 30 /* Seconds between errors logging. */
void flushAppendOnlyFile(int force) {
    ssize_t nwritten;
    int sync_in_progress = 0;
    mstime_t latency;

    // nothing buffered: usually no write is needed
    if (sdslen(server.aof_buf) == 0) {
        /* Check if we need to do fsync even the aof buffer is empty,
         * because previously in AOF_FSYNC_EVERYSEC mode, fsync is
         * called only when aof buffer is not empty, so if users
         * stop write commands before fsync called in one second,
         * the data in page cache cannot be flushed in time. */
        if (server.aof_fsync == AOF_FSYNC_EVERYSEC &&
            server.aof_fsync_offset != server.aof_current_size &&
            server.unixtime > server.aof_last_fsync &&
            !(sync_in_progress = aofFsyncInProgress())) {
            goto try_fsync;
        } else {
            return;
        }
    }

    // use the bio pending-job counter (bio_pending) to check whether a
    // background fsync is in progress; if so, record it in sync_in_progress
    if (server.aof_fsync == AOF_FSYNC_EVERYSEC)
        sync_in_progress = aofFsyncInProgress();

    // without the force flag, the flush may be postponed rather than done
    // immediately: on Linux write(2) would block behind the background
    // fsync, so forcing the write would stall the server's main thread
    if (server.aof_fsync == AOF_FSYNC_EVERYSEC && !force) {
        /* With this append fsync policy we do background fsyncing.
         * If the fsync is still in progress we can try to delay
         * the write for a couple of seconds. */
        if (sync_in_progress) {
            if (server.aof_flush_postponed_start == 0) {
                /* No previous write postponing, remember that we are
                 * postponing the flush and return. */
                server.aof_flush_postponed_start = server.unixtime;
                return;
            } else if (server.unixtime - server.aof_flush_postponed_start < 2) {
                // less than 2 seconds since we started postponing: return
                /* We were already waiting for fsync to finish, but for less
                 * than two seconds this is still ok. Postpone again. */
                return;
            }

            // a background fsync is still running and the write has been
            // postponed for >= 2 seconds, so perform the write anyway (it
            // will block); a crash here can lose about 2 seconds of AOF data
            /* Otherwise fall trough, and go write since we can't wait
             * over two seconds. */
            server.aof_delayed_fsync++;
            serverLog(LL_NOTICE,"Asynchronous AOF fsync is taking too long (disk is busy?). Writing the AOF buffer without waiting for fsync to complete, this may slow down Redis.");
        }
    }
    /* We want to perform a single write. This should be guaranteed atomic
     * at least if the filesystem we are writing is a real physical one.
     * While this will save us against the server being killed I don't think
     * there is much to do about the whole server stopping for power problems
     * or alike */

    if (server.aof_flush_sleep && sdslen(server.aof_buf)) {
        usleep(server.aof_flush_sleep);
    }

    // write the AOF data buffered in server.aof_buf to the file
    latencyStartMonitor(latency);
    nwritten = aofWrite(server.aof_fd,server.aof_buf,sdslen(server.aof_buf));
    latencyEndMonitor(latency);
    /* We want to capture different events for delayed writes:
     * when the delay happens with a pending fsync, or with a saving child
     * active, and when the above two conditions are missing.
     * We also use an additional event name to save all samples which is
     * useful for graphing / monitoring purposes. */
    if (sync_in_progress) {
        latencyAddSampleIfNeeded("aof-write-pending-fsync",latency);
    } else if (hasActiveChildProcess()) {
        latencyAddSampleIfNeeded("aof-write-active-child",latency);
    } else {
        latencyAddSampleIfNeeded("aof-write-alone",latency);
    }
    latencyAddSampleIfNeeded("aof-write",latency);

    // reset the postponed-flush timestamp
    /* We performed the write so reset the postponed flush sentinel to zero. */
    server.aof_flush_postponed_start = 0;

    // if the write failed, try to record what happened in the log
    if (nwritten != (ssize_t)sdslen(server.aof_buf)) {
        static time_t last_write_error_log = 0;
        int can_log = 0;

        /* Limit logging rate to 1 line per AOF_WRITE_LOG_ERROR_RATE seconds. */
        if ((server.unixtime - last_write_error_log) > AOF_WRITE_LOG_ERROR_RATE) {
            can_log = 1;
            last_write_error_log = server.unixtime;
        }

        /* Log the AOF write error and record the error code. */
        if (nwritten == -1) {
            if (can_log) {
                serverLog(LL_WARNING,"Error writing to the AOF file: %s",
                    strerror(errno));
                server.aof_last_write_errno = errno;
            }
        } else {
            if (can_log) {
                serverLog(LL_WARNING,"Short write while writing to "
                                       "the AOF file: (nwritten=%lld, "
                                       "expected=%lld)",
                                       (long long)nwritten,
                                       (long long)sdslen(server.aof_buf));
            }

            // use ftruncate to strip the incomplete data just appended
            if (ftruncate(server.aof_fd, server.aof_current_size) == -1) {
                if (can_log) {
                    serverLog(LL_WARNING, "Could not remove short write "
                             "from the append-only file.  Redis may refuse "
                             "to load the AOF the next time it starts.  "
                             "ftruncate: %s", strerror(errno));
                }
            } else {
                /* If the ftruncate() succeeded we can set nwritten to
                 * -1 since there is no longer partial data into the AOF. */
                nwritten = -1;
            }
            server.aof_last_write_errno = ENOSPC;
        }

        // handle errors that occurred while writing to the AOF file
        /* Handle the AOF write error. */
        if (server.aof_fsync == AOF_FSYNC_ALWAYS) {
            /* We can't recover when the fsync policy is ALWAYS since the
             * reply for the client is already in the output buffers, and we
             * have the contract with the user that on acknowledged write data
             * is synced on disk. */
            serverLog(LL_WARNING,"Can't recover from AOF write error when the AOF fsync policy is 'always'. Exiting...");
            exit(1);
        } else {
            /* Recover from failed write leaving data into the buffer. However
             * set an error to stop accepting writes as long as the error
             * condition is not cleared. */
            server.aof_last_write_status = C_ERR;

            // a partial write cannot be undone with ftruncate here; instead
            // sdsrange drops the part of aof_buf that did reach the disk
            /* Trim the sds buffer if there was a partial write, and there
             * was no way to undo it with ftruncate(2). */
            if (nwritten > 0) {
                server.aof_current_size += nwritten;
                sdsrange(server.aof_buf,nwritten,-1);
            }
            return; /* We'll try again on the next call... */
        }
    } else {
        /* Successful write(2). If AOF was in error state, restore the
         * OK state and log the event. */
        if (server.aof_last_write_status == C_ERR) {
            serverLog(LL_WARNING,
                "AOF write error looks solved, Redis can write again.");
            server.aof_last_write_status = C_OK;
        }
    }
    // update the recorded AOF file size after the write
    server.aof_current_size += nwritten;

    // if server.aof_buf is small enough, reuse it to avoid frequent
    // allocations; if it has grown large, free it and start fresh
    /* Re-use AOF buffer when it is small enough. The maximum comes from the
     * arena size of 4k minus some overhead (but is otherwise arbitrary). */
    if ((sdslen(server.aof_buf)+sdsavail(server.aof_buf)) < 4000) {
        sdsclear(server.aof_buf);
    } else {
        sdsfree(server.aof_buf);
        server.aof_buf = sdsempty();
    }

try_fsync:
    /* Don't fsync if no-appendfsync-on-rewrite is set to yes and there are
     * children doing I/O in the background. */
    if (server.aof_no_fsync_on_rewrite && hasActiveChildProcess())
        return;

    // perform the fsync
    /* Perform the fsync if needed. */
    if (server.aof_fsync == AOF_FSYNC_ALWAYS) {
        /* redis_fsync is defined as fdatasync() for Linux in order to avoid
         * flushing metadata. */
        latencyStartMonitor(latency);
        redis_fsync(server.aof_fd); /* Let's try to get this data on the disk */
        latencyEndMonitor(latency);
        latencyAddSampleIfNeeded("aof-fsync-always",latency);
        server.aof_fsync_offset = server.aof_current_size;
        server.aof_last_fsync = server.unixtime;
    } else if ((server.aof_fsync == AOF_FSYNC_EVERYSEC &&
                server.unixtime > server.aof_last_fsync)) {
        if (!sync_in_progress) {
            aof_background_fsync(server.aof_fd);
            server.aof_fsync_offset = server.aof_current_size;
        }
        server.aof_last_fsync = server.unixtime;
    }
}
  1. After updating the in-memory data, the main thread performs the write; depending on the configuration, fdatasync then runs either immediately or deferred.
  2. At startup, Redis creates dedicated bio threads to handle AOF persistence.
  3. With appendfsync=everysec, when the time comes a background (bio) job is created.
  4. A bio thread polls the job queue, picks up the job, and performs the fdatasync synchronously.

With always, every write command is followed by a flush, so the least data is lost on a crash. With everysec, roughly 2 seconds of data can be lost: while the bio thread defers the flush, if the background flush gets stuck, each iteration of serverCron (whose frequency depends on the hz parameter; with hz set to 100 it runs 100 times per second) checks whether the last background flush has been pending for more than 2 seconds, and if so forces a flush immediately. So the worst case can be roughly estimated at about 2.01 seconds of lost data.

If a failure occurs during a bgrewriteaof, more data may be lost, because the rewrite blocks the fdatasync flush; this is much harder to quantify.

As for AOF's impact on access latency, the Redis author wrote a dedicated blog post, "fsync() on a different thread: apparently a useless trick." The conclusion is that bio does not improve latency much: with appendfsync=everysec the fdatasync does run in the background, and since the aof_buf handed to write is small, the write itself rarely blocks. The real problem is that the background fdatasync holds on to the file, and write also needs that file, so the main thread's write(2) blocks until the fdatasync completes. This is also why, when the RAID on our Inspur servers had performance problems, most applications were unaffected, yet Redis, which is extremely latency-sensitive, suffered.

References

[1] 04 | The AOF log: when Redis crashes, how does it avoid losing data?
[2] AOF — The Design and Implementation of Redis