Redis AOF Rewrite

# Redis configuration file

# Enable AOF persistence
appendonly yes

# Rewrite the AOF file when it has grown by 100% since the last rewrite
auto-aof-rewrite-percentage 100

# Only rewrite the AOF file when it is larger than 64 MB
auto-aof-rewrite-min-size 64mb

(1) AOF rewrite

The AOF file records the database state by saving every write command the Redis server executes.
The problem with this approach is that, as the server keeps running, the AOF file grows larger and larger.

For example, if the server executes the following commands, the AOF file has to store four commands just to record the state of the list key.

RPUSH list 1 2 3 4      // [1, 2, 3, 4]

RPOP list               // [1, 2, 3]

LPOP list               // [2, 3]

LPUSH list 1            // [1, 2, 3]

An AOF rewrite does not read or write the existing AOF file at all; it works from the current values of the keys in the database.
In the example above, the current value of the list key in the database is [1, 2, 3].
To record this state with as few commands as possible, the simplest approach is not to analyze the four commands already stored in the AOF file, but to read the current value of list directly from the database and replace those four commands with a single RPUSH list 1 2 3 command.

Lists, sets, strings, sorted sets, hashes and other key types can all have their state saved in this way, and the number of commands needed to record those states is far smaller than the number of commands that were originally used to build them.
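
Incidentally, the rewritten command is stored in the new AOF file in the same RESP text format as any other command. The single replacement command from the example above would look roughly like this on disk (each line terminated by \r\n):

*5
$5
RPUSH
$4
list
$1
1
$1
2
$1
3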

(1.1) When is an AOF rewrite triggered?

Two configuration options control when an automatic AOF rewrite is triggered:

  1. auto-aof-rewrite-min-size: the minimum AOF file size required before a rewrite may run; the default is 64 MB.
  2. auto-aof-rewrite-percentage: calculated as the difference between the current AOF file size and the size right after the last rewrite, divided by the size after the last rewrite - in other words, how much the AOF file has grown since the last rewrite, expressed as a percentage of the post-rewrite size.

An automatic AOF rewrite is triggered only when the AOF file exceeds both of these thresholds at the same time; a worked example follows.
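
As a concrete example (the numbers are made up), with the defaults shown at the top of this article:

size after the last rewrite : 64 MB
current AOF size            : 130 MB
growth = (130 - 64) / 64 ≈ 103%   >= auto-aof-rewrite-percentage (100%)  -> met
current size = 130 MB             >= auto-aof-rewrite-min-size (64 MB)   -> met
=> an automatic AOF rewrite is triggered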

(1.2) Does an AOF rewrite affect performance?

The rewrite runs in the background in a forked child process, so the main process does not have to compete with the rewrite for the AOF file.

While the child process is performing the AOF rewrite, the main process has to do the following three things (a sketch of the relevant code follows):
Keep handling command requests.
Append write commands to the existing AOF file.
Append write commands to the AOF rewrite buffer.
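
A simplified sketch of the tail end of feedAppendOnlyFile (based on Redis 6.0; details trimmed, so treat it as illustrative rather than a verbatim quote) shows this double buffering:

/* 'buf' holds the command already translated into RESP text */
if (server.aof_state == AOF_ON)
    /* normal AOF path: flushed to the AOF file before re-entering the event loop */
    server.aof_buf = sdscatlen(server.aof_buf, buf, sdslen(buf));

if (server.aof_child_pid != -1)
    /* a rewrite child exists: also accumulate the diff for the new AOF file */
    aofRewriteBufferAppend((unsigned char*)buf, sdslen(buf));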

(1.3) The AOF rewrite function and when it is triggered

The function that performs the AOF rewrite is rewriteAppendOnlyFileBackground.

There are three ways to trigger an AOF rewrite:

  1. Running the bgrewriteaof command; the corresponding function is bgrewriteaofCommand.
  2. Enabling AOF at runtime (switching appendonly from no to yes); the corresponding function is startAppendOnly.
  3. The periodic check in serverCron, which calls rewriteAppendOnlyFileBackground.

(1.4) Why doesn't the AOF rewrite reuse the existing AOF file?

The AOF rewrite does not reuse the existing AOF log for two reasons.
One is that having the parent and child processes write to the same file would inevitably cause contention, and controlling that contention would hurt the parent's performance.
The other is that if the rewrite failed partway through, the original AOF file would effectively be corrupted and could no longer be used for recovery.

So Redis rewrites into a brand-new file: if the rewrite fails, that file is simply deleted and the original AOF file is unaffected; once the rewrite completes, the new file replaces the old one.
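
Conceptually this is the classic "write a temp file, then rename(2) it over the old one" pattern. A minimal, self-contained C sketch of the idea (not Redis code; the file name scheme and the helper are made up):

#include <stdio.h>
#include <unistd.h>

/* Write the new contents to a temporary file first; only if everything
 * succeeds is the old file atomically replaced with rename(2). On any
 * failure the temp file is removed and the original file is untouched. */
int rewrite_into_new_file(const char *final_path) {
    char tmp_path[256];
    snprintf(tmp_path, sizeof(tmp_path), "temp-rewrite-%d.aof", (int) getpid());

    FILE *fp = fopen(tmp_path, "w");
    if (fp == NULL) return -1;

    /* ... dump the current in-memory state into fp ... */

    if (fflush(fp) != 0 || fsync(fileno(fp)) == -1) {
        fclose(fp);
        unlink(tmp_path);                 /* rewrite failed: drop the temp file */
        return -1;
    }
    fclose(fp);
    if (rename(tmp_path, final_path) == -1) {
        unlink(tmp_path);
        return -1;
    }
    return 0;                             /* old file replaced atomically */
}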

(1.4.1) Manually triggering an AOF rewrite - bgrewriteaofCommand

/**
 * Manually trigger an AOF rewrite.
 *
 * 1. Check whether an AOF rewrite child process already exists.
 * 2. Check whether any other child process (RDB/AOF) is active.
 * 3. Otherwise start the AOF rewrite right away.
 *
 * @param *c the client that issued the command
 */
void bgrewriteaofCommand(client *c) {
    if (server.aof_child_pid != -1) {  // an AOF rewrite child is already running
        // a background rewrite is already in progress
        addReplyError(c,"Background append only file rewriting already in progress");
    } else if (hasActiveChildProcess()) {  // some other child process is active
        // remember to start the rewrite as soon as that child finishes
        server.aof_rewrite_scheduled = 1;
        // tell the client the rewrite has been scheduled
        addReplyStatus(c,"Background append only file rewriting scheduled");
    } else if (rewriteAppendOnlyFileBackground() == C_OK) {  // start the AOF rewrite now
        addReplyStatus(c,"Background append only file rewriting started");
    } else {
        addReplyError(c,"Can't execute an AOF background rewriting. "
                        "Please check the server logs for more information.");
    }
}

(1.4.2) Enabling AOF at runtime - startAppendOnly

/**
 * 
 * Called when the user switches from "appendonly no" to "appendonly yes"
 * at runtime using the CONFIG command. 
 */
int startAppendOnly(void) {
    char cwd[MAXPATHLEN]; /* Current working dir path for error messages. */
    int newfd;

    // open (or create) the AOF file in append mode
    newfd = open(server.aof_filename,O_WRONLY|O_APPEND|O_CREAT,0644);
    // AOF must currently be off
    serverAssert(server.aof_state == AOF_OFF);
    if (newfd == -1) {
        char *cwdp = getcwd(cwd,MAXPATHLEN);

        serverLog(LL_WARNING,
            "Redis needs to enable the AOF but can't open the "
            "append only file %s (in server root dir %s): %s",
            server.aof_filename,
            cwdp ? cwdp : "unknown",
            strerror(errno));
        return C_ERR;
    }
    if (hasActiveChildProcess() && server.aof_child_pid == -1) {
        server.aof_rewrite_scheduled = 1;
        serverLog(LL_WARNING,"AOF was enabled but there is already another background operation. An AOF background was scheduled to start when possible.");
    } else {
        /* If there is a pending AOF rewrite, we need to switch it off and
         * start a new one: the old one cannot be reused because it is not
         * accumulating the AOF buffer. */
        if (server.aof_child_pid != -1) {
            serverLog(LL_WARNING,"AOF was enabled but there is already an AOF rewriting in background. Stopping background AOF and starting a rewrite now.");
            killAppendOnlyChild();
        }
        if (rewriteAppendOnlyFileBackground() == C_ERR) {
            close(newfd);
            serverLog(LL_WARNING,"Redis needs to enable the AOF but can't trigger a background AOF rewrite operation. Check the above logs for more info about the error.");
            return C_ERR;
        }
    }
    /* We correctly switched on AOF, now wait for the rewrite to be complete
     * in order to append data on disk. */
    server.aof_state = AOF_WAIT_REWRITE;
    server.aof_last_fsync = server.unixtime;
    server.aof_fd = newfd;
    return C_OK;
}

(1.4.3) Periodic execution - serverCron

int serverCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {

    // ... (part of the code omitted)

    /* Start a scheduled AOF rewrite if this was requested by the user while
     * a BGSAVE was in progress. */
    if (!hasActiveChildProcess() &&
        server.aof_rewrite_scheduled)
    {
        rewriteAppendOnlyFileBackground();
    }

}

(2) Other details

An AOF rewrite can be triggered manually by the user with the BGREWRITEAOF command.

In addition, while the AOF feature is enabled, the server maintains the following three variables:

aof_current_size, which records the current size of the AOF file.
aof_rewrite_base_size, which records the size of the AOF file right after the last rewrite.
aof_rewrite_perc, the growth percentage.
Every time the serverCron function runs, it checks whether all of the following conditions hold; if they do, an automatic AOF rewrite is triggered (a sketch of the check follows this list):

No BGSAVE is in progress.
No BGREWRITEAOF is in progress.
The current AOF file is larger than server.aof_rewrite_min_size (64 MB by default).
The ratio between the current AOF file size and the size after the last rewrite is at least the configured growth percentage.
By default the growth percentage is 100%; in other words, if the first three conditions are all satisfied and the current AOF file is at least twice as large as it was after the last rewrite, an automatic AOF rewrite is triggered.
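
A sketch of what this check looks like inside serverCron (simplified from Redis 6.0; logging details and unrelated conditions trimmed):

/* Trigger an automatic AOF rewrite if the file has grown too much. */
if (server.aof_state == AOF_ON &&
    !hasActiveChildProcess() &&
    server.aof_rewrite_perc &&
    server.aof_current_size > server.aof_rewrite_min_size)
{
    long long base = server.aof_rewrite_base_size ?
                     server.aof_rewrite_base_size : 1;
    long long growth = (server.aof_current_size * 100 / base) - 100;
    if (growth >= server.aof_rewrite_perc) {
        serverLog(LL_NOTICE,
            "Starting automatic rewriting of AOF on %lld%% growth", growth);
        rewriteAppendOnlyFileBackground();
    }
}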

(2.1) Potential blocking risks when rewriting the AOF file

When Redis forks a child process to rewrite the AOF file, there are two potential sources of blocking: the fork itself, and writes performed by the parent while the rewrite is in progress.
a. The fork call itself always blocks the main thread for a moment (note that fork does not copy all of the memory data to the child at once). fork relies on the operating system's copy-on-write (COW) mechanism precisely to avoid the long pause that copying a large amount of memory to the child would cause. However, fork still has to copy the process's essential bookkeeping structures, one of which is the page table (the index mapping virtual memory to physical memory). Copying the page table consumes a lot of CPU, and the whole process is blocked until the copy finishes; the blocking time depends on the size of the instance - the bigger the instance, the bigger the page table and the longer fork blocks.
Once the page table has been copied, the child shares the same memory address space as the parent; in other words, although a child process now exists, no second copy of the parent's memory has been allocated.
So when do parent and child actually get separate memory? "Copy on write" means exactly what it says: the real data is copied only when a write happens, and it is during this process that the parent may also block, which is the scenario described next.

b. The forked child points at the same address space as the parent, so it can start the AOF rewrite immediately and write all of the in-memory data to the AOF file.
Meanwhile the parent still receives write traffic. If the parent modifies a key that already exists, it must at that moment really copy the memory behind that key and allocate new memory; bit by bit the memory of parent and child diverges, until each has its own independent address space. Because memory is allocated in pages (4 KB by default), if the key the parent happens to modify is a big key, allocating the large new block takes longer and can cause blocking.
In addition, if the operating system has huge pages enabled (Huge Page, 2 MB pages), the probability that the parent blocks while allocating memory rises sharply, so Huge Pages should be disabled on machines running Redis. After every fork for an RDB snapshot or an AOF rewrite, the Redis log shows how much extra memory the parent had to allocate.
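
For reference, a common way to disable transparent huge pages on Linux is shown below (run as root; how you persist it across reboots depends on the distribution, e.g. an init script such as /etc/rc.local):

echo never > /sys/kernel/mm/transparent_hugepage/enabled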

(2.2) The basic flow of an AOF rewrite

/* ----------------------------------------------------------------------------
 * AOF background rewrite
 * ------------------------------------------------------------------------- */

/*
 * This is how rewriting of the append only file in background works:
 *
 * 1) The user calls BGREWRITEAOF.
 * 2) Redis calls this function, that forks():
 *    2a) the child rewrites the append only file in a temp file.
 *    2b) the parent accumulates differences in server.aof_rewrite_buf.
 * 3) When the child finishes '2a', it exits.
 * 4) The parent will trap the exit code, if it's OK, will append the
 *    data accumulated into server.aof_rewrite_buf into the temp file, and
 *    finally will rename(2) the temp file in the actual file name.
 *    The new file is reopened as the new append only file. Profit!
 */
int rewriteAppendOnlyFileBackground(void) {
    pid_t childpid;

    if (hasActiveChildProcess()) return C_ERR;

    // create the pipes used for parent/child IPC during the rewrite
    if (aofCreatePipes() != C_OK) return C_ERR;
    // open the child-info pipe, used by the child to report copy-on-write info back to the parent
    openChildInfoPipe();
    // fork a child process
    if ((childpid = redisFork(CHILD_TYPE_AOF)) == 0) {
        char tmpfile[256];

        // set the child process title to "redis-aof-rewrite"
        redisSetProcTitle("redis-aof-rewrite");
        // pin the child to the CPUs configured in aof_rewrite_cpulist
        redisSetCpuAffinity(server.aof_rewrite_cpulist);
        // build the temp file name from the child's pid
        snprintf(tmpfile,256,"temp-rewriteaof-bg-%d.aof", (int) getpid());
        // rewrite the whole dataset into the temp file
        if (rewriteAppendOnlyFile(tmpfile) == C_OK) {
            // report the child's copy-on-write statistics to the parent
            sendChildCOWInfo(CHILD_TYPE_AOF, "AOF rewrite");
            exitFromChild(0);
        } else {
            exitFromChild(1);
        }
    } else {
        /* Parent */
        if (childpid == -1) {
            // close the child-info pipe
            closeChildInfoPipe();
            // log the failure
            serverLog(LL_WARNING,
                "Can't rewrite append only file in background: fork: %s",
                strerror(errno));
            // close the AOF rewrite IPC pipes
            aofClosePipes();
            return C_ERR;
        }
        // log that the background rewrite has started
        serverLog(LL_NOTICE,
            "Background append only file rewriting started by pid %d",childpid);
        server.aof_rewrite_scheduled = 0;
        server.aof_rewrite_time_start = time(NULL);
        server.aof_child_pid = childpid;
        updateDictResizePolicy();
        /* We set appendseldb to -1 in order to force the next call to the
         * feedAppendOnlyFile() to issue a SELECT command, so the differences
         * accumulated by the parent into server.aof_rewrite_buf will start
         * with a SELECT statement and it will be safe to merge. */
        server.aof_selected_db = -1;
        replicationScriptCacheFlush();
        return C_OK;
    }
    return C_OK; /* unreached */
}
/* Create the pipes used for parent - child process IPC during rewrite.
 * We have a data pipe used to send AOF incremental diffs to the child,
 * and two other pipes used by the children to signal it finished with
 * the rewrite so no more data should be written, and another for the
 * parent to acknowledge it understood this new condition. */
int aofCreatePipes(void) {
    int fds[6] = {-1, -1, -1, -1, -1, -1};
    int j;

    if (pipe(fds) == -1) goto error; /* parent -> children data. */
    if (pipe(fds+2) == -1) goto error; /* children -> parent ack. */
    if (pipe(fds+4) == -1) goto error; /* parent -> children ack. */


    /* Parent -> children data is non blocking. */
    // put both ends of the data pipe into non-blocking mode
    if (anetNonBlock(NULL,fds[0]) != ANET_OK) goto error;
    if (anetNonBlock(NULL,fds[1]) != ANET_OK) goto error;

    // register a read handler on the child->parent ack pipe
    if (aeCreateFileEvent(server.el, fds[2], AE_READABLE, aofChildPipeReadable, NULL) == AE_ERR) goto error;

    // data pipe between the main process and the rewrite child: write end (parent writes commands here)
    server.aof_pipe_write_data_to_child = fds[1];
    // data pipe between the main process and the rewrite child: read end (child reads commands here)
    server.aof_pipe_read_data_from_parent = fds[0];
    // pipe the rewrite child uses to send an ACK to the parent: write end
    server.aof_pipe_write_ack_to_parent = fds[3];
    // pipe the rewrite child uses to send an ACK to the parent: read end
    server.aof_pipe_read_ack_from_child = fds[2];
    // pipe the parent uses to send an ACK to the rewrite child: write end
    server.aof_pipe_write_ack_to_child = fds[5];
    // pipe the parent uses to send an ACK to the rewrite child: read end
    server.aof_pipe_read_ack_from_parent = fds[4];
    server.aof_stop_sending_diff = 0;
    return C_OK;

error:
    serverLog(LL_WARNING,"Error opening /setting AOF rewrite IPC pipes: %s",
        strerror(errno));
    for (j = 0; j < 6; j++) if(fds[j] != -1) close(fds[j]);
    return C_ERR;
}

(3) How file I/O works

When we write data to a file, a traditional Unix/Linux kernel usually copies the data into a kernel buffer first, queues it, and writes it to disk some time later. This is known as delayed write.

A write to a disk file therefore only updates the page cache in memory; the write call does not wait for the disk I/O to complete before returning. If the OS crashes after the write call but before the data has been synchronized to disk, the data can be lost.

To keep the file system on disk consistent with the contents of the buffer cache, UNIX provides three functions: sync, fsync and fdatasync.
sync merely queues all modified block buffers for writing and returns immediately; it does not wait for the actual disk writes to finish. A system daemon, usually called update, calls sync periodically (typically every 30 seconds), which guarantees that the kernel's block buffers are flushed regularly.

fsync acts only on the single file referred to by the file descriptor fd and does not return until the disk writes have completed. fsync is meant for applications such as databases that must make sure modified blocks are written to disk immediately.

fdatasync is similar to fsync, but it only affects the data portion of the file; fsync also synchronizes the file's attributes.

Now consider the performance of fsync. Unlike fdatasync, fsync synchronizes not only the modified file contents (dirty pages) but also the file's metadata (size, access and modification times st_atime & st_mtime, and so on). Because a file's data and its metadata usually live in different places on disk, fsync needs at least two write I/Os, as its man page points out.
fdatasync does not synchronize the metadata, so it can save one write I/O, as explained in its man page.

To be precise: if the file size (st_size) changes, that metadata does have to be synchronized immediately; otherwise, if the OS crashes, the newly written content cannot be read back even though the data pages were synced, because the metadata was not.
The last access time (atime) and modification time (mtime), however, do not need to be synchronized on every write; as long as the application has no strict requirements on these two timestamps, skipping them makes essentially no difference.
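
A minimal standalone C sketch of the two calls (illustrative only; the file name and payload are made up):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("appendonly.aof", O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd == -1) return 1;

    const char *entry = "*1\r\n$4\r\nPING\r\n";
    /* write() only puts the data into the page cache */
    if (write(fd, entry, strlen(entry)) == -1) return 1;

    /* fsync(fd)     : flushes the data plus all metadata (>= 2 device writes)
     * fdatasync(fd) : flushes the data plus only the metadata needed to read
     *                 it back (e.g. the new size), usually one write less */
    if (fdatasync(fd) == -1) return 1;

    close(fd);
    return 0;
}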

In Redis's source file src/config.h you can see that on Linux Redis actually uses fdatasync() for flushing:

/* Define redis_fsync to fdatasync() in Linux and fsync() for all the rest */
#ifdef __linux__
#define redis_fsync fdatasync
#else
#define redis_fsync fsync
#endif

(4) How Redis flushes the AOF to disk

always: sync to the AOF file every time a new command is appended; the safest option, but also the one with the biggest performance impact.
everysec: sync to the AOF file once per second; Redis performs the sync in a separate thread.
no: leave syncing entirely to the operating system; the best performance, but the weakest data durability.

If appendonly yes is set but appendfsync is not specified, everysec is used by default; many production Redis clusters run with this option.
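
For example, to make the choice explicit in redis.conf:

# enable AOF and fsync it to disk once per second
# (everysec is also what you get when appendfsync is left unset)
appendonly yes
appendfsync everysec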

Now let's look at how the AOF flush actually works in the Redis code:
https://github.com/redis/redis/blob/6.0/src/server.c#L3014

/* When AOF is enabled with appendonly=yes, the AOF file is opened at startup */
if (server.aof_state == AOF_ON) {
    // file descriptor of the Redis AOF file
    server.aof_fd = open(server.aof_filename,
                            O_WRONLY|O_APPEND|O_CREAT,0644);
    if (server.aof_fd == -1) {
        serverLog(LL_WARNING, "Can't open the append-only file: %s",
            strerror(errno));
        exit(1);
    }
}

At startup Redis also creates dedicated bio (background I/O) threads for AOF persistence: initServer() in src/server.c calls bioInit(), which creates one background thread per job type - closing files, AOF fsync and lazy freeing - as the constants and the loop below show:
https://github.com/redis/redis/blob/6.0/src/bio.h#L43

/* Background job opcodes */
#define BIO_CLOSE_FILE    0 /* Deferred close(2) syscall. */
#define BIO_AOF_FSYNC     1 /* Deferred AOF fsync. */
#define BIO_LAZY_FREE     2 /* Deferred objects freeing. */
#define BIO_NUM_OPS       3

https://github.com/redis/redis/blob/6.0/src/bio.c#L123

/* Ready to spawn our threads. We use the single argument the thread
 * function accepts in order to pass the job ID the thread is
 * responsible of. */
for (j = 0; j < BIO_NUM_OPS; j++) {
    void *arg = (void*)(unsigned long) j;
    if (pthread_create(&thread,&attr,bioProcessBackgroundJobs,arg) != 0) {
        serverLog(LL_WARNING,"Fatal: Can't initialize Background Jobs.");
        exit(1);
    }
    bio_threads[j] = thread;
}

When the Redis server executes a write command, e.g. SET foo helloworld, it not only modifies the in-memory dataset but also records the operation, encoded as described earlier. Redis appends the content to the server.aof_buf buffer, which can be thought of as a small staging area: all pending updates accumulate there first and are written to the file at specific points in time (and, while a rewrite child is running, also inserted into server.aof_rewrite_buf_blocks). So every write operation goes into the buffer first and is fsynced to disk periodically; when certain thresholds are reached (mainly governed by the auto-aof-rewrite-percentage / auto-aof-rewrite-min-size parameters), Redis additionally forks a child process to perform a rewrite. To avoid losing too much data when the server crashes suddenly, Redis calls the flushAppendOnlyFile function to write the buffer to disk at the following points:
1. Before entering the event loop.
2. In the periodic function serverCron(); while Redis is running, this is where flushAppendOnlyFile is mostly called from.
3. In stopAppendOnly(), when the AOF policy is being turned off.

Because all the code in serverCron runs server.hz times per second, Redis uses the macro run_with_period(milliseconds) { ... } to throttle parts of it: the enclosed code runs only once per the given number of milliseconds.
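
For reference, a sketch of how such a macro can be implemented (this mirrors the definition in server.h, but treat the exact form as an approximation):

/* Run the enclosed block at most once every _ms_ milliseconds, given that
 * serverCron itself is invoked server.hz times per second. */
#define run_with_period(_ms_) \
    if ((_ms_ <= 1000/server.hz) || !(server.cronloops%((_ms_)/(1000/server.hz))))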

代码: https://github.com/redis/redis/blob/6.0/src/server.c#L2026

/* This is our timer interrupt, called server.hz times per second.
 * Here is where we do a number of things that need to be done asynchronously.
 * For instance:
 *
 * - Active expired keys collection (it is also performed in a lazy way on
 *   lookup).
 * - Software watchdog.
 * - Update some statistic.
 * - Incremental rehashing of the DBs hash tables.
 * - Triggering BGSAVE / AOF rewrite, and handling of terminated children.
 * - Clients timeout of different kinds.
 * - Replication reconnection.
 * - Many more...
 *
 * Everything directly called here will be called server.hz times per second,
 * so in order to throttle execution of things we want to do less frequently
 * a macro is used: run_with_period(milliseconds) { .... }
 */

int serverCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {
 
    // ... (part of the code omitted)

    /* AOF postponed flush: Try at every cron cycle if the slow fsync
     * completed. */
    if (server.aof_flush_postponed_start) flushAppendOnlyFile(0);

    /* AOF write errors: in this case we have a buffer to flush as well and
     * clear the AOF error in case of success to make the DB writable again,
     * however to try every second is enough in case of 'hz' is set to
     * a higher frequency. */
    run_with_period(1000) {
        if (server.aof_last_write_status == C_ERR)
            flushAppendOnlyFile(0);
    }
 
    // ... (part of the code omitted)
 
    return 1000/server.hz;
}

The code below shows that in flushAppendOnlyFile, after the buffer has been written to the file, the flush policy is applied according to the appendfsync option: with AOF_FSYNC_ALWAYS the fsync is performed immediately, while with AOF_FSYNC_EVERYSEC a background asynchronous flush task is created. bioCreateBackgroundJob() creates the bio background job and bioProcessBackgroundJobs() executes it.

https://github.com/redis/redis/blob/6.0/src/aof.c#L210

/* Starts a background task that performs fsync() against the specified
 * file descriptor (the one of the AOF file) in another thread. */
void aof_background_fsync(int fd) {
    bioCreateBackgroundJob(BIO_AOF_FSYNC,(void*)(long)fd,NULL,NULL);
}

https://github.com/redis/redis/blob/6.0/src/aof.c#L340

/* Write the append only file buffer on disk.
 *
 * Since we are required to write the AOF before replying to the client,
 * and the only way the client socket can get a write is entering when the
 * the event loop, we accumulate all the AOF writes in a memory
 * buffer and write it on disk using this function just before entering
 * the event loop again.
 *
 * About the 'force' argument:
 *
 * When the fsync policy is set to 'everysec' we may delay the flush if there
 * is still an fsync() going on in the background thread, since for instance
 * on Linux write(2) will be blocked by the background fsync anyway.
 * When this happens we remember that there is some aof buffer to be
 * flushed ASAP, and will try to do that in the serverCron() function.
 *
 * However if force is set to 1 we'll write regardless of the background
 * fsync. */
#define AOF_WRITE_LOG_ERROR_RATE 30 /* Seconds between errors logging. */
void flushAppendOnlyFile(int force) {
    ssize_t nwritten;
    int sync_in_progress = 0;
    mstime_t latency;

    // no buffered data: usually nothing to write
    if (sdslen(server.aof_buf) == 0) {
        /* Check if we need to do fsync even the aof buffer is empty,
         * because previously in AOF_FSYNC_EVERYSEC mode, fsync is
         * called only when aof buffer is not empty, so if users
         * stop write commands before fsync called in one second,
         * the data in page cache cannot be flushed in time. */
        if (server.aof_fsync == AOF_FSYNC_EVERYSEC &&
            server.aof_fsync_offset != server.aof_current_size &&
            server.unixtime > server.aof_last_fsync &&
            !(sync_in_progress = aofFsyncInProgress())) {
            goto try_fsync;
        } else {
            return;
        }
    }

    // use the bio pending-job counter (bio_pending) to check whether a background fsync is still in progress;
    // if it is, remember that in sync_in_progress
    if (server.aof_fsync == AOF_FSYNC_EVERYSEC)
        sync_in_progress = aofFsyncInProgress();

    // if force is not set, the flush may not happen right away but be postponed,
    // because on Linux write(2) would be blocked by the background fsync,
    // and forcing the write would block the server's main thread on write()
    if (server.aof_fsync == AOF_FSYNC_EVERYSEC && !force) {
        /* With this append fsync policy we do background fsyncing.
         * If the fsync is still in progress we can try to delay
         * the write for a couple of seconds. */
        if (sync_in_progress) {
            if (server.aof_flush_postponed_start == 0) {
                /* No previous write postponing, remember that we are
                 * postponing the flush and return. */
                server.aof_flush_postponed_start = server.unixtime;
                return;
            } else if (server.unixtime - server.aof_flush_postponed_start < 2) {
                // if the flush has been postponed for less than 2 seconds, just return
                /* We were already waiting for fsync to finish, but for less
                 * than two seconds this is still ok. Postpone again. */
                return;
            }

            // a background fsync is still running and the write has already been postponed for >= 2 seconds,
            // so perform the write anyway (write() will block);
            // if the machine crashes at this point, roughly 2 seconds of AOF data may be lost
            /* Otherwise fall trough, and go write since we can't wait
             * over two seconds. */
            server.aof_delayed_fsync++;
            serverLog(LL_NOTICE,"Asynchronous AOF fsync is taking too long (disk is busy?). Writing the AOF buffer without waiting for fsync to complete, this may slow down Redis.");
        }
    }
    /* We want to perform a single write. This should be guaranteed atomic
     * at least if the filesystem we are writing is a real physical one.
     * While this will save us against the server being killed I don't think
     * there is much to do about the whole server stopping for power problems
     * or alike */

    if (server.aof_flush_sleep && sdslen(server.aof_buf)) {
        usleep(server.aof_flush_sleep);
    }

    // write the AOF data buffered in server.aof_buf to the file
    latencyStartMonitor(latency);
    nwritten = aofWrite(server.aof_fd,server.aof_buf,sdslen(server.aof_buf));
    latencyEndMonitor(latency);
    /* We want to capture different events for delayed writes:
     * when the delay happens with a pending fsync, or with a saving child
     * active, and when the above two conditions are missing.
     * We also use an additional event name to save all samples which is
     * useful for graphing / monitoring purposes. */
    if (sync_in_progress) {
        latencyAddSampleIfNeeded("aof-write-pending-fsync",latency);
    } else if (hasActiveChildProcess()) {
        latencyAddSampleIfNeeded("aof-write-active-child",latency);
    } else {
        latencyAddSampleIfNeeded("aof-write-alone",latency);
    }
    latencyAddSampleIfNeeded("aof-write",latency);

    // reset the postponed-flush timestamp
    /* We performed the write so reset the postponed flush sentinel to zero. */
    server.aof_flush_postponed_start = 0;

    // if the write failed, try to log what happened
    if (nwritten != (ssize_t)sdslen(server.aof_buf)) {
        static time_t last_write_error_log = 0;
        int can_log = 0;

        /* Limit logging rate to 1 line per AOF_WRITE_LOG_ERROR_RATE seconds. */
        if ((server.unixtime - last_write_error_log) > AOF_WRITE_LOG_ERROR_RATE) {
            can_log = 1;
            last_write_error_log = server.unixtime;
        }

        /* Log the AOF write error and record the error code. */
        if (nwritten == -1) {
            if (can_log) {
                serverLog(LL_WARNING,"Error writing to the AOF file: %s",
                    strerror(errno));
                server.aof_last_write_errno = errno;
            }
        } else {
            if (can_log) {
                serverLog(LL_WARNING,"Short write while writing to "
                                       "the AOF file: (nwritten=%lld, "
                                       "expected=%lld)",
                                       (long long)nwritten,
                                       (long long)sdslen(server.aof_buf));
            }

            // use ftruncate to try to remove the incomplete data just appended to the AOF
            if (ftruncate(server.aof_fd, server.aof_current_size) == -1) {
                if (can_log) {
                    serverLog(LL_WARNING, "Could not remove short write "
                             "from the append-only file.  Redis may refuse "
                             "to load the AOF the next time it starts.  "
                             "ftruncate: %s", strerror(errno));
                }
            } else {
                /* If the ftruncate() succeeded we can set nwritten to
                 * -1 since there is no longer partial data into the AOF. */
                nwritten = -1;
            }
            server.aof_last_write_errno = ENOSPC;
        }

        // handle the error that occurred while writing to the AOF file
        /* Handle the AOF write error. */
        if (server.aof_fsync == AOF_FSYNC_ALWAYS) {
            /* We can't recover when the fsync policy is ALWAYS since the
             * reply for the client is already in the output buffers, and we
             * have the contract with the user that on acknowledged write data
             * is synced on disk. */
            serverLog(LL_WARNING,"Can't recover from AOF write error when the AOF fsync policy is 'always'. Exiting...");
            exit(1);
        } else {
            /* Recover from failed write leaving data into the buffer. However
             * set an error to stop accepting writes as long as the error
             * condition is not cleared. */
            server.aof_last_write_status = C_ERR;

            // data that has already been written cannot be undone with ftruncate,
            // so use sdsrange to drop the part of aof_buf that already made it to disk
            /* Trim the sds buffer if there was a partial write, and there
             * was no way to undo it with ftruncate(2). */
            if (nwritten > 0) {
                server.aof_current_size += nwritten;
                sdsrange(server.aof_buf,nwritten,-1);
            }
            return; /* We'll try again on the next call... */
        }
    } else {
        /* Successful write(2). If AOF was in error state, restore the
         * OK state and log the event. */
        if (server.aof_last_write_status == C_ERR) {
            serverLog(LL_WARNING,
                "AOF write error looks solved, Redis can write again.");
            server.aof_last_write_status = C_OK;
        }
    }
    // update the recorded AOF file size after the write
    server.aof_current_size += nwritten;

    // if server.aof_buf is small enough, reuse it to avoid frequent allocations;
    // if it has grown large, free it instead and start from an empty buffer
    /* Re-use AOF buffer when it is small enough. The maximum comes from the
     * arena size of 4k minus some overhead (but is otherwise arbitrary). */
    if ((sdslen(server.aof_buf)+sdsavail(server.aof_buf)) < 4000) {
        sdsclear(server.aof_buf);
    } else {
        sdsfree(server.aof_buf);
        server.aof_buf = sdsempty();
    }

try_fsync:
    /* Don't fsync if no-appendfsync-on-rewrite is set to yes and there are
     * children doing I/O in the background. */
    if (server.aof_no_fsync_on_rewrite && hasActiveChildProcess())
        return;

    // perform the fsync
    /* Perform the fsync if needed. */
    if (server.aof_fsync == AOF_FSYNC_ALWAYS) {
        /* redis_fsync is defined as fdatasync() for Linux in order to avoid
         * flushing metadata. */
        latencyStartMonitor(latency);
        redis_fsync(server.aof_fd); /* Let's try to get this data on the disk */
        latencyEndMonitor(latency);
        latencyAddSampleIfNeeded("aof-fsync-always",latency);
        server.aof_fsync_offset = server.aof_current_size;
        server.aof_last_fsync = server.unixtime;
    } else if ((server.aof_fsync == AOF_FSYNC_EVERYSEC &&
                server.unixtime > server.aof_last_fsync)) {
        if (!sync_in_progress) {
            aof_background_fsync(server.aof_fd);
            server.aof_fsync_offset = server.aof_current_size;
        }
        server.aof_last_fsync = server.unixtime;
    }
}

  1. After the main thread has modified the in-memory data it calls write; depending on the configuration, fdatasync then happens either immediately or with a delay.
  2. At startup, Redis creates dedicated bio threads to handle AOF persistence.
  3. With appendfsync=everysec, when the time comes an asynchronous bio job is created.
  4. A bio thread polls the job queue, picks the job up and performs the fdatasync synchronously.

With always, every write command is followed by a flush, so a crash loses the least data. With everysec, roughly 2 seconds of data can be lost: if the background flush gets stuck, every iteration of serverCron (the frequency depends on the hz parameter; we set it to 100, i.e. 100 iterations per second) checks whether the previous background flush has been pending for more than 2 seconds and, if so, forces a synchronous flush, so a rough upper bound on the possible loss is about 2.01 seconds of data.

If a failure happens while a bgrewriteaof is in progress, the potential data loss is larger because the rewrite can block the fdatasync flush; that case is much harder to quantify.

As for the impact of AOF on latency, the author of Redis wrote a blog post about it, "fsync() on a different thread: apparently a useless trick". His conclusion is that the bio thread does not improve latency much: although with appendfsync=everysec the fdatasync runs in the background, the aof_buf passed to write is small and rarely causes blocking by itself; rather, the background fdatasync holds the file handle, and the foreground write(2) on the same file has to wait for the fdatasync to finish, so it is the write that ends up blocking the main thread. This is also why, back when the RAID controllers on those Inspur servers had performance problems, most applications were unaffected while Redis, which is extremely latency-sensitive, was.