The Redis INFO command
The info command returns basic information about the Redis server: CPU usage, memory, persistence, client connections, and so on; info memory returns just the memory-related information.
(1) Looking at Redis memory usage via info memory
(1.1) The info memory command
127.0.0.1:6379> info memory
# Memory
used_memory:153492864
used_memory_human:146.38M
used_memory_rss:163676160
used_memory_rss_human:156.09M
used_memory_peak:153510864
used_memory_peak_human:146.40M
used_memory_peak_perc:99.99%
used_memory_overhead:49426784
used_memory_startup:1020736
used_memory_dataset:104066080
used_memory_dataset_perc:68.25%
allocator_allocated:153427264
allocator_active:163639296
allocator_resident:163639296
total_system_memory:34359738368
total_system_memory_human:32.00G
used_memory_lua:36864
used_memory_lua_human:36.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.07
allocator_frag_bytes:10212032
allocator_rss_ratio:1.00
allocator_rss_bytes:0
rss_overhead_ratio:1.00
rss_overhead_bytes:36864
mem_fragmentation_ratio:1.07
mem_fragmentation_bytes:10248896
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:17440
mem_aof_buffer:0
mem_allocator:libc
active_defrag_running:0
lazyfree_pending_objects:0
127.0.0.1:6379>
(1.2) Field descriptions
The field descriptions below follow https://redis.io/commands/info/
Field | Meaning | Value at Redis startup | Value after inserting 1,000,000 keys (libc allocator) | Value after inserting 1,000,000 keys (jemalloc allocator) |
---|---|---|---|---|
used_memory | Total number of bytes allocated by Redis through its allocator (standard libc, jemalloc, or an alternative allocator such as tcmalloc) | 1104032 | 153492864 | 145462944 |
used_memory_human | Human-readable used_memory | 1.05M | 146.38M | 138.72M |
used_memory_rss | Number of bytes that Redis allocated as seen by the operating system (a.k.a. the resident set size), i.e. the number reported by tools such as top(1) and ps(1) | 2342912 | 163676160 | 150941696 |
used_memory_rss_human | Human-readable used_memory_rss | 2.23M | 156.09M | 143.95M |
used_memory_peak | Peak memory consumed by Redis, in bytes | 1162544 | 153510864 | 145483976 |
used_memory_peak_human | Human-readable used_memory_peak | 1.11M | 146.40M | 138.74M |
used_memory_peak_perc | used_memory as a percentage of used_memory_peak (see the peak_perc computation in the source below) | 94.97% | 99.99% | 99.99% |
used_memory_overhead | Sum, in bytes, of all the overheads the server allocated to manage its internal data structures | 1037952 | 49426784 | 49421464 |
used_memory_startup | Initial amount of memory consumed by Redis at startup, in bytes | 1020512 | 1020736 | 1012352 |
used_memory_dataset | Size of the dataset, in bytes (used_memory minus used_memory_overhead) | 66080 | 104066080 | 96041480 |
used_memory_dataset_perc | used_memory_dataset as a percentage of the net memory usage (used_memory minus used_memory_startup) | 79.12% | 68.25% | 66.49% |
allocator_allocated | Total bytes allocated from the allocator, including internal fragmentation; normally the same as used_memory | 1038432 | 153427264 | 145490728 |
allocator_active | Total bytes in the allocator's active pages, including external fragmentation | 2306048 | 163639296 | 145793024 |
allocator_resident | Total bytes resident (RSS) in the allocator; this includes pages that can be released back to the OS (by purging, or simply by waiting) | 2306048 | 163639296 | 151691264 |
total_system_memory | Total amount of memory on the Redis host | 34359738368 | 34359738368 | 34359738368 |
total_system_memory_human | Human-readable total_system_memory | 32.00G | 32.00G | 32.00G |
used_memory_lua | Number of bytes used by the Lua engine | 36864 | 36864 | 36864 |
used_memory_lua_human | Human-readable used_memory_lua | 36.00K | 36.00K | 36.00K |
used_memory_scripts | Number of bytes used by cached Lua scripts | 0 | 0 | 0 |
used_memory_scripts_human | Human-readable used_memory_scripts | 0B | 0B | 0B |
number_of_cached_scripts | Number of cached Lua scripts | 0 | 0 | 0 |
maxmemory | Value of the maxmemory configuration directive | 0 | 0 | 0 |
maxmemory_human | Human-readable maxmemory | 0B | 0B | 0B |
maxmemory_policy | Value of the maxmemory-policy configuration directive | noeviction | noeviction | noeviction |
allocator_frag_ratio | Allocator fragmentation ratio: the ratio between allocator_active and allocator_allocated. This is the true (external) fragmentation metric (not mem_fragmentation_ratio). | 2.22 | 1.07 | 1.00 |
allocator_frag_bytes | Allocator fragmentation: the delta between allocator_active and allocator_allocated. See the note about mem_fragmentation_bytes. | 1267616 | 10212032 | 302296 |
allocator_rss_ratio | Ratio between allocator_resident and allocator_active. This usually indicates pages that the allocator can, and probably soon will, release back to the OS. | 1.00 | 1.00 | 1.04 |
allocator_rss_bytes | Delta between allocator_resident and allocator_active | 0 | 0 | 5898240 |
rss_overhead_ratio | Ratio between used_memory_rss (the process RSS) and allocator_resident. This covers RSS overheads that are not allocator- or heap-related. | 1.02 | 1.00 | 1.00 |
rss_overhead_bytes | Delta between used_memory_rss (the process RSS) and allocator_resident | 36864 | 36864 | -749568 |
mem_fragmentation_ratio | Memory fragmentation ratio: the ratio between used_memory_rss and used_memory. Note that this includes not only fragmentation but also other process overheads (see the allocator_* metrics), plus overheads such as code, shared libraries and the stack. | 2.26 | 1.07 | 1.04 |
mem_fragmentation_bytes | Memory fragmentation in bytes: the delta between used_memory_rss and used_memory. Note that when the total fragmentation is low (a few megabytes), a high ratio (e.g. 1.5 and above) is not an indication of a problem. | 1304480 | 10248896 | 5519768 |
mem_not_counted_for_evict | Used memory that is not counted for key eviction; this is basically transient replica and AOF buffers. | 0 | 0 | 0 |
mem_replication_backlog | Memory used by the replication backlog | 0 | 0 | 0 |
mem_clients_slaves | Memory used by replica clients. Starting with Redis 7.0, replica buffers share memory with the replication backlog, so this field can show 0 when replicas do not trigger an increase in memory usage. | 0 | 0 | 0 |
mem_clients_normal | Memory used by normal clients | 17440 | 17440 | 20504 |
mem_aof_buffer | Transient memory used for the AOF and AOF-rewrite buffers | 0 | 0 | 0 |
mem_allocator | Memory allocator, chosen at compile time | libc | libc | jemalloc-5.1.0 |
active_defrag_running | When activedefrag is enabled, this indicates whether defragmentation is currently active and the CPU percentage it intends to use | 0 | 0 | 0 |
lazyfree_pending_objects | Number of objects waiting to be freed (as a result of calling UNLINK, or FLUSHDB / FLUSHALL with the ASYNC option) | 0 | 0 | 0 |
used_memory_rss = used_memory + other
Memory allocated by the operating system = memory allocated by the Redis memory allocator + everything else (code, shared libraries, fragmentation, ...)
used_memory = used_memory_overhead + used_memory_dataset
Memory allocated by the Redis memory allocator = total overhead the server allocates to manage its internal data structures + dataset size
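A quick sanity check against the sample output above (libc column):
used_memory_overhead + used_memory_dataset = 49426784 + 104066080 = 153492864 = used_memory
used_memory_rss / used_memory = 163676160 / 153492864 ≈ 1.07 = mem_fragmentation_ratio
(The reported mem_fragmentation_bytes is 10248896 rather than 163676160 - 153492864 = 10183296 because, as the source below shows, serverCron() samples the RSS and used-memory figures at a slightly different moment than the INFO call itself.)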
(2) How info memory works and what it is for
(3) Source code walkthrough of the Redis info command
Source: https://github.com/redis/redis/blob/6.0/src/server.c#L4734
(3.1) infoCommand
// filepath: /src/server.c
//
void infoCommand(client *c) {
// Parse the optional section argument (defaults to "default")
char *section = c->argc == 2 ? c->argv[1]->ptr : "default";
// Validate the argument count
if (c->argc > 2) {
addReply(c,shared.syntaxerr);
return;
}
// Build the INFO payload for the requested section
sds info = genRedisInfoString(section);
// Write the result into the client's output buffer
addReplyVerbatim(c,info,sdslen(info),"txt");
// Free the temporary sds
sdsfree(info);
}
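As the argument check above shows, INFO accepts at most one optional section name; with more arguments the server replies with shared.syntaxerr, which redis-cli renders roughly as:
127.0.0.1:6379> info memory clients
(error) ERR syntax error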
(3.2) Building the INFO string: genRedisInfoString()
/*
 * Create the string returned by the INFO command.
 * This is decoupled from the INFO command itself because we also need to
 * report the same information on memory-corruption problems.
 */
sds genRedisInfoString(const char *section) {
// Create an empty sds that will hold the reply
sds info = sdsempty();
// Uptime in seconds since the server started
time_t uptime = server.unixtime-server.stat_starttime;
int j;
struct rusage self_ru, c_ru;
int allsections = 0, defsections = 0, everything = 0, modules = 0;
int sections = 0;
if (section == NULL) section = "default";
allsections = strcasecmp(section,"all") == 0;
defsections = strcasecmp(section,"default") == 0;
everything = strcasecmp(section,"everything") == 0;
modules = strcasecmp(section,"modules") == 0;
if (everything) allsections = 1;
getrusage(RUSAGE_SELF, &self_ru);
getrusage(RUSAGE_CHILDREN, &c_ru);
/* Server section */
if (allsections || defsections || !strcasecmp(section,"server")) {
static int call_uname = 1;
static struct utsname name;
char *mode;
if (server.cluster_enabled) mode = "cluster";
else if (server.sentinel_mode) mode = "sentinel";
else mode = "standalone";
if (sections++) info = sdscat(info,"\r\n");
if (call_uname) {
/* Uname can be slow and is always the same output. Cache it. */
uname(&name);
call_uname = 0;
}
info = sdscatfmt(info,
"# Server\r\n"
"redis_version:%s\r\n"
"redis_git_sha1:%s\r\n"
"redis_git_dirty:%i\r\n"
"redis_build_id:%s\r\n"
"redis_mode:%s\r\n"
"os:%s %s %s\r\n"
"arch_bits:%i\r\n"
"multiplexing_api:%s\r\n"
"atomicvar_api:%s\r\n"
"gcc_version:%i.%i.%i\r\n"
"process_id:%I\r\n"
"run_id:%s\r\n"
"tcp_port:%i\r\n"
"uptime_in_seconds:%I\r\n"
"uptime_in_days:%I\r\n"
"hz:%i\r\n"
"configured_hz:%i\r\n"
"lru_clock:%u\r\n"
"executable:%s\r\n"
"config_file:%s\r\n"
"io_threads_active:%i\r\n",
REDIS_VERSION,
redisGitSHA1(),
strtol(redisGitDirty(),NULL,10) > 0,
redisBuildIdString(),
mode,
name.sysname, name.release, name.machine,
server.arch_bits,
aeGetApiName(),
REDIS_ATOMIC_API,
#ifdef __GNUC__
__GNUC__,__GNUC_MINOR__,__GNUC_PATCHLEVEL__,
#else
0,0,0,
#endif
(int64_t) getpid(),
server.runid,
server.port ? server.port : server.tls_port,
(int64_t)uptime,
(int64_t)(uptime/(3600*24)),
server.hz,
server.config_hz,
server.lruclock,
server.executable ? server.executable : "",
server.configfile ? server.configfile : "",
server.io_threads_active);
}
/* Clients section */
if (allsections || defsections || !strcasecmp(section,"clients")) {
size_t maxin, maxout;
getExpansiveClientsInfo(&maxin,&maxout);
if (sections++) info = sdscat(info,"\r\n");
info = sdscatprintf(info,
"# Clients\r\n"
"connected_clients:%lu\r\n"
"client_recent_max_input_buffer:%zu\r\n"
"client_recent_max_output_buffer:%zu\r\n"
"blocked_clients:%d\r\n"
"tracking_clients:%d\r\n"
"clients_in_timeout_table:%llu\r\n",
listLength(server.clients)-listLength(server.slaves),
maxin, maxout,
server.blocked_clients,
server.tracking_clients,
(unsigned long long) raxSize(server.clients_timeout_table));
}
/* Memory section */
if (allsections || defsections || !strcasecmp(section,"memory")) {
char hmem[64];
char peak_hmem[64];
char total_system_hmem[64];
char used_memory_lua_hmem[64];
char used_memory_scripts_hmem[64];
char used_memory_rss_hmem[64];
char maxmemory_hmem[64];
// Bytes currently allocated through the memory allocator
size_t zmalloc_used = zmalloc_used_memory();
// Total physical memory of the host, in bytes
size_t total_system_mem = server.system_memory_size;
// Eviction policy (maxmemory-policy)
const char *evict_policy = evictPolicyToString();
// Memory used by the Lua interpreter, in bytes
long long memory_lua = server.lua ? (long long)lua_gc(server.lua,LUA_GCCOUNT,0)*1024 : 0;
// Collect the overheads the server allocates for its internal data structures (used_memory_overhead and related fields)
struct redisMemOverhead *mh = getMemoryOverheadData();
/* Peak memory is updated from time to time by serverCron(), so it may
 * happen that the instantaneous value is slightly bigger than the saved
 * peak. This may confuse users, so the peak is updated here if it turns out
 * to be smaller than the current memory usage. */
if (zmalloc_used > server.stat_peak_memory)
server.stat_peak_memory = zmalloc_used;
// Format the byte counts as human-readable strings
bytesToHuman(hmem,zmalloc_used);
bytesToHuman(peak_hmem,server.stat_peak_memory);
bytesToHuman(total_system_hmem,total_system_mem);
bytesToHuman(used_memory_lua_hmem,memory_lua);
bytesToHuman(used_memory_scripts_hmem,mh->lua_caches);
bytesToHuman(used_memory_rss_hmem,server.cron_malloc_stats.process_rss);
bytesToHuman(maxmemory_hmem,server.maxmemory);
// Append the Memory section to the reply
if (sections++) info = sdscat(info,"\r\n");
info = sdscatprintf(info,
"# Memory\r\n"
"used_memory:%zu\r\n"
"used_memory_human:%s\r\n"
"used_memory_rss:%zu\r\n"
"used_memory_rss_human:%s\r\n"
"used_memory_peak:%zu\r\n"
"used_memory_peak_human:%s\r\n"
"used_memory_peak_perc:%.2f%%\r\n"
"used_memory_overhead:%zu\r\n"
"used_memory_startup:%zu\r\n"
"used_memory_dataset:%zu\r\n"
"used_memory_dataset_perc:%.2f%%\r\n"
"allocator_allocated:%zu\r\n"
"allocator_active:%zu\r\n"
"allocator_resident:%zu\r\n"
"total_system_memory:%lu\r\n"
"total_system_memory_human:%s\r\n"
"used_memory_lua:%lld\r\n"
"used_memory_lua_human:%s\r\n"
"used_memory_scripts:%lld\r\n"
"used_memory_scripts_human:%s\r\n"
"number_of_cached_scripts:%lu\r\n"
"maxmemory:%lld\r\n"
"maxmemory_human:%s\r\n"
"maxmemory_policy:%s\r\n"
"allocator_frag_ratio:%.2f\r\n"
"allocator_frag_bytes:%zu\r\n"
"allocator_rss_ratio:%.2f\r\n"
"allocator_rss_bytes:%zd\r\n"
"rss_overhead_ratio:%.2f\r\n"
"rss_overhead_bytes:%zd\r\n"
"mem_fragmentation_ratio:%.2f\r\n"
"mem_fragmentation_bytes:%zd\r\n"
"mem_not_counted_for_evict:%zu\r\n"
"mem_replication_backlog:%zu\r\n"
"mem_clients_slaves:%zu\r\n"
"mem_clients_normal:%zu\r\n"
"mem_aof_buffer:%zu\r\n"
"mem_allocator:%s\r\n"
"active_defrag_running:%d\r\n"
"lazyfree_pending_objects:%zu\r\n",
zmalloc_used,
hmem,
server.cron_malloc_stats.process_rss,
used_memory_rss_hmem,
server.stat_peak_memory,
peak_hmem,
mh->peak_perc,
mh->overhead_total,
mh->startup_allocated,
mh->dataset,
mh->dataset_perc,
server.cron_malloc_stats.allocator_allocated,
server.cron_malloc_stats.allocator_active,
server.cron_malloc_stats.allocator_resident,
(unsigned long)total_system_mem,
total_system_hmem,
memory_lua,
used_memory_lua_hmem,
(long long) mh->lua_caches,
used_memory_scripts_hmem,
dictSize(server.lua_scripts),
server.maxmemory,
maxmemory_hmem,
evict_policy,
mh->allocator_frag,
mh->allocator_frag_bytes,
mh->allocator_rss,
mh->allocator_rss_bytes,
mh->rss_extra,
mh->rss_extra_bytes,
mh->total_frag, /* This is the total RSS overhead, including
fragmentation, but not just it. This field
(and the next one) is named like that just
for backward compatibility. */
mh->total_frag_bytes,
freeMemoryGetNotCountedMemory(),
mh->repl_backlog,
mh->clients_slaves,
mh->clients_normal,
mh->aof_buffer,
ZMALLOC_LIB,
server.active_defrag_running,
lazyfreeGetPendingObjectsCount()
);
// Free the overhead report
freeMemoryOverheadData(mh);
}
/* Persistence */
if (allsections || defsections || !strcasecmp(section,"persistence")) {
if (sections++) info = sdscat(info,"\r\n");
info = sdscatprintf(info,
"# Persistence\r\n"
"loading:%d\r\n"
"rdb_changes_since_last_save:%lld\r\n"
"rdb_bgsave_in_progress:%d\r\n"
"rdb_last_save_time:%jd\r\n"
"rdb_last_bgsave_status:%s\r\n"
"rdb_last_bgsave_time_sec:%jd\r\n"
"rdb_current_bgsave_time_sec:%jd\r\n"
"rdb_last_cow_size:%zu\r\n"
"aof_enabled:%d\r\n"
"aof_rewrite_in_progress:%d\r\n"
"aof_rewrite_scheduled:%d\r\n"
"aof_last_rewrite_time_sec:%jd\r\n"
"aof_current_rewrite_time_sec:%jd\r\n"
"aof_last_bgrewrite_status:%s\r\n"
"aof_last_write_status:%s\r\n"
"aof_last_cow_size:%zu\r\n"
"module_fork_in_progress:%d\r\n"
"module_fork_last_cow_size:%zu\r\n",
server.loading,
server.dirty,
server.rdb_child_pid != -1,
(intmax_t)server.lastsave,
(server.lastbgsave_status == C_OK) ? "ok" : "err",
(intmax_t)server.rdb_save_time_last,
(intmax_t)((server.rdb_child_pid == -1) ?
-1 : time(NULL)-server.rdb_save_time_start),
server.stat_rdb_cow_bytes,
server.aof_state != AOF_OFF,
server.aof_child_pid != -1,
server.aof_rewrite_scheduled,
(intmax_t)server.aof_rewrite_time_last,
(intmax_t)((server.aof_child_pid == -1) ?
-1 : time(NULL)-server.aof_rewrite_time_start),
(server.aof_lastbgrewrite_status == C_OK) ? "ok" : "err",
(server.aof_last_write_status == C_OK) ? "ok" : "err",
server.stat_aof_cow_bytes,
server.module_child_pid != -1,
server.stat_module_cow_bytes);
if (server.aof_enabled) {
info = sdscatprintf(info,
"aof_current_size:%lld\r\n"
"aof_base_size:%lld\r\n"
"aof_pending_rewrite:%d\r\n"
"aof_buffer_length:%zu\r\n"
"aof_rewrite_buffer_length:%lu\r\n"
"aof_pending_bio_fsync:%llu\r\n"
"aof_delayed_fsync:%lu\r\n",
(long long) server.aof_current_size,
(long long) server.aof_rewrite_base_size,
server.aof_rewrite_scheduled,
sdslen(server.aof_buf),
aofRewriteBufferSize(),
bioPendingJobsOfType(BIO_AOF_FSYNC),
server.aof_delayed_fsync);
}
if (server.loading) {
double perc;
time_t eta, elapsed;
off_t remaining_bytes = server.loading_total_bytes-
server.loading_loaded_bytes;
perc = ((double)server.loading_loaded_bytes /
(server.loading_total_bytes+1)) * 100;
elapsed = time(NULL)-server.loading_start_time;
if (elapsed == 0) {
eta = 1; /* A fake 1 second figure if we don't have
enough info */
} else {
eta = (elapsed*remaining_bytes)/(server.loading_loaded_bytes+1);
}
info = sdscatprintf(info,
"loading_start_time:%jd\r\n"
"loading_total_bytes:%llu\r\n"
"loading_loaded_bytes:%llu\r\n"
"loading_loaded_perc:%.2f\r\n"
"loading_eta_seconds:%jd\r\n",
(intmax_t) server.loading_start_time,
(unsigned long long) server.loading_total_bytes,
(unsigned long long) server.loading_loaded_bytes,
perc,
(intmax_t)eta
);
}
}
/* Stats */
if (allsections || defsections || !strcasecmp(section,"stats")) {
if (sections++) info = sdscat(info,"\r\n");
info = sdscatprintf(info,
"# Stats\r\n"
"total_connections_received:%lld\r\n"
"total_commands_processed:%lld\r\n"
"instantaneous_ops_per_sec:%lld\r\n"
"total_net_input_bytes:%lld\r\n"
"total_net_output_bytes:%lld\r\n"
"instantaneous_input_kbps:%.2f\r\n"
"instantaneous_output_kbps:%.2f\r\n"
"rejected_connections:%lld\r\n"
"sync_full:%lld\r\n"
"sync_partial_ok:%lld\r\n"
"sync_partial_err:%lld\r\n"
"expired_keys:%lld\r\n"
"expired_stale_perc:%.2f\r\n"
"expired_time_cap_reached_count:%lld\r\n"
"expire_cycle_cpu_milliseconds:%lld\r\n"
"evicted_keys:%lld\r\n"
"keyspace_hits:%lld\r\n"
"keyspace_misses:%lld\r\n"
"pubsub_channels:%ld\r\n"
"pubsub_patterns:%lu\r\n"
"latest_fork_usec:%lld\r\n"
"migrate_cached_sockets:%ld\r\n"
"slave_expires_tracked_keys:%zu\r\n"
"active_defrag_hits:%lld\r\n"
"active_defrag_misses:%lld\r\n"
"active_defrag_key_hits:%lld\r\n"
"active_defrag_key_misses:%lld\r\n"
"tracking_total_keys:%lld\r\n"
"tracking_total_items:%lld\r\n"
"tracking_total_prefixes:%lld\r\n"
"unexpected_error_replies:%lld\r\n"
"total_reads_processed:%lld\r\n"
"total_writes_processed:%lld\r\n"
"io_threaded_reads_processed:%lld\r\n"
"io_threaded_writes_processed:%lld\r\n",
server.stat_numconnections,
server.stat_numcommands,
getInstantaneousMetric(STATS_METRIC_COMMAND),
server.stat_net_input_bytes,
server.stat_net_output_bytes,
(float)getInstantaneousMetric(STATS_METRIC_NET_INPUT)/1024,
(float)getInstantaneousMetric(STATS_METRIC_NET_OUTPUT)/1024,
server.stat_rejected_conn,
server.stat_sync_full,
server.stat_sync_partial_ok,
server.stat_sync_partial_err,
server.stat_expiredkeys,
server.stat_expired_stale_perc*100,
server.stat_expired_time_cap_reached_count,
server.stat_expire_cycle_time_used/1000,
server.stat_evictedkeys,
server.stat_keyspace_hits,
server.stat_keyspace_misses,
dictSize(server.pubsub_channels),
listLength(server.pubsub_patterns),
server.stat_fork_time,
dictSize(server.migrate_cached_sockets),
getSlaveKeyWithExpireCount(),
server.stat_active_defrag_hits,
server.stat_active_defrag_misses,
server.stat_active_defrag_key_hits,
server.stat_active_defrag_key_misses,
(unsigned long long) trackingGetTotalKeys(),
(unsigned long long) trackingGetTotalItems(),
(unsigned long long) trackingGetTotalPrefixes(),
server.stat_unexpected_error_replies,
server.stat_total_reads_processed,
server.stat_total_writes_processed,
server.stat_io_reads_processed,
server.stat_io_writes_processed);
}
/* Replication */
if (allsections || defsections || !strcasecmp(section,"replication")) {
if (sections++) info = sdscat(info,"\r\n");
info = sdscatprintf(info,
"# Replication\r\n"
"role:%s\r\n",
server.masterhost == NULL ? "master" : "slave");
if (server.masterhost) {
long long slave_repl_offset = 1;
if (server.master)
slave_repl_offset = server.master->reploff;
else if (server.cached_master)
slave_repl_offset = server.cached_master->reploff;
info = sdscatprintf(info,
"master_host:%s\r\n"
"master_port:%d\r\n"
"master_link_status:%s\r\n"
"master_last_io_seconds_ago:%d\r\n"
"master_sync_in_progress:%d\r\n"
"slave_repl_offset:%lld\r\n"
,server.masterhost,
server.masterport,
(server.repl_state == REPL_STATE_CONNECTED) ?
"up" : "down",
server.master ?
((int)(server.unixtime-server.master->lastinteraction)) : -1,
server.repl_state == REPL_STATE_TRANSFER,
slave_repl_offset
);
if (server.repl_state == REPL_STATE_TRANSFER) {
info = sdscatprintf(info,
"master_sync_left_bytes:%lld\r\n"
"master_sync_last_io_seconds_ago:%d\r\n"
, (long long)
(server.repl_transfer_size - server.repl_transfer_read),
(int)(server.unixtime-server.repl_transfer_lastio)
);
}
if (server.repl_state != REPL_STATE_CONNECTED) {
info = sdscatprintf(info,
"master_link_down_since_seconds:%jd\r\n",
(intmax_t)(server.unixtime-server.repl_down_since));
}
info = sdscatprintf(info,
"slave_priority:%d\r\n"
"slave_read_only:%d\r\n",
server.slave_priority,
server.repl_slave_ro);
}
info = sdscatprintf(info,
"connected_slaves:%lu\r\n",
listLength(server.slaves));
/* If min-slaves-to-write is active, write the number of slaves
* currently considered 'good'. */
if (server.repl_min_slaves_to_write &&
server.repl_min_slaves_max_lag) {
info = sdscatprintf(info,
"min_slaves_good_slaves:%d\r\n",
server.repl_good_slaves_count);
}
if (listLength(server.slaves)) {
int slaveid = 0;
listNode *ln;
listIter li;
listRewind(server.slaves,&li);
while((ln = listNext(&li))) {
client *slave = listNodeValue(ln);
char *state = NULL;
char ip[NET_IP_STR_LEN], *slaveip = slave->slave_ip;
int port;
long lag = 0;
if (slaveip[0] == '\0') {
if (connPeerToString(slave->conn,ip,sizeof(ip),&port) == -1)
continue;
slaveip = ip;
}
switch(slave->replstate) {
case SLAVE_STATE_WAIT_BGSAVE_START:
case SLAVE_STATE_WAIT_BGSAVE_END:
state = "wait_bgsave";
break;
case SLAVE_STATE_SEND_BULK:
state = "send_bulk";
break;
case SLAVE_STATE_ONLINE:
state = "online";
break;
}
if (state == NULL) continue;
if (slave->replstate == SLAVE_STATE_ONLINE)
lag = time(NULL) - slave->repl_ack_time;
info = sdscatprintf(info,
"slave%d:ip=%s,port=%d,state=%s,"
"offset=%lld,lag=%ld\r\n",
slaveid,slaveip,slave->slave_listening_port,state,
slave->repl_ack_off, lag);
slaveid++;
}
}
info = sdscatprintf(info,
"master_replid:%s\r\n"
"master_replid2:%s\r\n"
"master_repl_offset:%lld\r\n"
"second_repl_offset:%lld\r\n"
"repl_backlog_active:%d\r\n"
"repl_backlog_size:%lld\r\n"
"repl_backlog_first_byte_offset:%lld\r\n"
"repl_backlog_histlen:%lld\r\n",
server.replid,
server.replid2,
server.master_repl_offset,
server.second_replid_offset,
server.repl_backlog != NULL,
server.repl_backlog_size,
server.repl_backlog_off,
server.repl_backlog_histlen);
}
/* CPU */
if (allsections || defsections || !strcasecmp(section,"cpu")) {
if (sections++) info = sdscat(info,"\r\n");
info = sdscatprintf(info,
"# CPU\r\n"
"used_cpu_sys:%ld.%06ld\r\n"
"used_cpu_user:%ld.%06ld\r\n"
"used_cpu_sys_children:%ld.%06ld\r\n"
"used_cpu_user_children:%ld.%06ld\r\n",
(long)self_ru.ru_stime.tv_sec, (long)self_ru.ru_stime.tv_usec,
(long)self_ru.ru_utime.tv_sec, (long)self_ru.ru_utime.tv_usec,
(long)c_ru.ru_stime.tv_sec, (long)c_ru.ru_stime.tv_usec,
(long)c_ru.ru_utime.tv_sec, (long)c_ru.ru_utime.tv_usec);
}
/* Modules */
if (allsections || defsections || !strcasecmp(section,"modules")) {
if (sections++) info = sdscat(info,"\r\n");
info = sdscatprintf(info,"# Modules\r\n");
info = genModulesInfoString(info);
}
/* Command statistics */
if (allsections || !strcasecmp(section,"commandstats")) {
if (sections++) info = sdscat(info,"\r\n");
info = sdscatprintf(info, "# Commandstats\r\n");
struct redisCommand *c;
dictEntry *de;
dictIterator *di;
di = dictGetSafeIterator(server.commands);
while((de = dictNext(di)) != NULL) {
c = (struct redisCommand *) dictGetVal(de);
if (!c->calls) continue;
info = sdscatprintf(info,
"cmdstat_%s:calls=%lld,usec=%lld,usec_per_call=%.2f\r\n",
c->name, c->calls, c->microseconds,
(c->calls == 0) ? 0 : ((float)c->microseconds/c->calls));
}
dictReleaseIterator(di);
}
/* Cluster */
if (allsections || defsections || !strcasecmp(section,"cluster")) {
if (sections++) info = sdscat(info,"\r\n");
info = sdscatprintf(info,
"# Cluster\r\n"
"cluster_enabled:%d\r\n",
server.cluster_enabled);
}
/* Key space */
if (allsections || defsections || !strcasecmp(section,"keyspace")) {
if (sections++) info = sdscat(info,"\r\n");
info = sdscatprintf(info, "# Keyspace\r\n");
for (j = 0; j < server.dbnum; j++) {
long long keys, vkeys;
keys = dictSize(server.db[j].dict);
vkeys = dictSize(server.db[j].expires);
if (keys || vkeys) {
info = sdscatprintf(info,
"db%d:keys=%lld,expires=%lld,avg_ttl=%lld\r\n",
j, keys, vkeys, server.db[j].avg_ttl);
}
}
}
/* Get info from modules.
* if user asked for "everything" or "modules", or a specific section
* that's not found yet. */
if (everything || modules ||
(!allsections && !defsections && sections==0)) {
info = modulesCollectInfo(info,
everything || modules ? NULL: section,
0, /* not a crash report */
sections);
}
return info;
}
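Every field in the generated string is a field:value pair terminated by \r\n, so a consumer can extract a single metric with plain string scanning. Below is a minimal, self-contained sketch (not Redis code; info_get_llong and the sample reply are hypothetical, for illustration only):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/* Extract a numeric field such as "used_memory" from an INFO-style payload.
 * Returns -1 if the field is not present. */
static long long info_get_llong(const char *info, const char *field) {
    size_t flen = strlen(field);
    const char *p = info;
    while ((p = strstr(p, field)) != NULL) {
        /* Only accept a match at the start of a line, followed by ':'. */
        if ((p == info || p[-1] == '\n') && p[flen] == ':')
            return strtoll(p + flen + 1, NULL, 10);
        p += flen;
    }
    return -1;
}
int main(void) {
    /* Hypothetical excerpt of an "INFO memory" reply. */
    const char *reply =
        "# Memory\r\n"
        "used_memory:153492864\r\n"
        "used_memory_rss:163676160\r\n";
    printf("used_memory=%lld\n", info_get_llong(reply, "used_memory"));
    printf("used_memory_rss=%lld\n", info_get_llong(reply, "used_memory_rss"));
    return 0;
}
Compiled as-is, the sketch prints used_memory=153492864 and used_memory_rss=163676160.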
(3.3) Functions behind INFO memory
(3.3.1) Collecting the overhead data: getMemoryOverheadData()
// filepath: /src/server.c
/*
 * Return a struct redisMemOverhead filled with the memory-overhead
 * information used for the MEMORY OVERHEAD and INFO commands.
 *
 * The returned structure pointer should be freed by calling
 * freeMemoryOverheadData().
 */
struct redisMemOverhead *getMemoryOverheadData(void) {
int j;
// Running total of all structural/metadata overhead
size_t mem_total = 0;
// Scratch variable for the item currently being measured
size_t mem = 0;
// Memory currently allocated through the allocator
size_t zmalloc_used = zmalloc_used_memory();
// Allocate the redisMemOverhead structure that holds the result
struct redisMemOverhead *mh = zcalloc(sizeof(*mh));
// Total memory allocated through the allocator (used_memory)
mh->total_allocated = zmalloc_used;
// Memory allocated when the server started (used_memory_startup)
mh->startup_allocated = server.initial_memory_usage;
// Peak allocated memory (used_memory_peak)
mh->peak_allocated = server.stat_peak_memory;
// cron_malloc_stats is refreshed by serverCron(); total_frag is what INFO
// reports as mem_fragmentation_ratio (process RSS / allocator usage)
mh->total_frag =
(float)server.cron_malloc_stats.process_rss / server.cron_malloc_stats.zmalloc_used;
// mem_fragmentation_bytes: process RSS minus allocator usage
mh->total_frag_bytes =
server.cron_malloc_stats.process_rss - server.cron_malloc_stats.zmalloc_used;
// allocator_frag_ratio: ratio between allocator_active and allocator_allocated
// (the true external fragmentation metric, not mem_fragmentation_ratio)
mh->allocator_frag =
(float)server.cron_malloc_stats.allocator_active / server.cron_malloc_stats.allocator_allocated;
mh->allocator_frag_bytes =
server.cron_malloc_stats.allocator_active - server.cron_malloc_stats.allocator_allocated;
// allocator_rss_ratio: allocator_resident / allocator_active
mh->allocator_rss =
(float)server.cron_malloc_stats.allocator_resident / server.cron_malloc_stats.allocator_active;
// allocator_rss_bytes: allocator_resident minus allocator_active
mh->allocator_rss_bytes =
server.cron_malloc_stats.allocator_resident - server.cron_malloc_stats.allocator_active;
// rss_overhead_ratio: process RSS / allocator_resident
mh->rss_extra =
(float)server.cron_malloc_stats.process_rss / server.cron_malloc_stats.allocator_resident;
// rss_overhead_bytes: process RSS minus allocator_resident
mh->rss_extra_bytes =
server.cron_malloc_stats.process_rss - server.cron_malloc_stats.allocator_resident;
/* Start accumulating the overhead total */
// Add the memory used at server startup
mem_total += server.initial_memory_usage;
mem = 0;
if (server.repl_backlog)
mem += zmalloc_size(server.repl_backlog); // Memory used by the replication backlog (circular buffer)
// Record the replication backlog size
mh->repl_backlog = mem;
// Add the replication backlog to the overhead total
mem_total += mem;
/* Computing the memory used by the clients would be O(N) if done
* here online. We use our values computed incrementally by
* clientsCronTrackClientsMemUsage(). */
// Memory used by replica clients
mh->clients_slaves = server.stat_clients_type_memory[CLIENT_TYPE_SLAVE];
// Memory used by "normal" clients: master, pubsub and regular connections
mh->clients_normal = server.stat_clients_type_memory[CLIENT_TYPE_MASTER]+
server.stat_clients_type_memory[CLIENT_TYPE_PUBSUB]+
server.stat_clients_type_memory[CLIENT_TYPE_NORMAL];
// Add replica-client memory to the overhead total
mem_total += mh->clients_slaves;
// Add normal-client memory to the overhead total
mem_total += mh->clients_normal;
mem = 0;
if (server.aof_state != AOF_OFF) { // AOF is enabled
// AOF buffer size
mem += sdsZmallocSize(server.aof_buf);
// AOF rewrite buffer size
mem += aofRewriteBufferSize();
}
// Record the AOF buffer usage (mem_aof_buffer)
mh->aof_buffer = mem;
// Add the AOF buffer and AOF-rewrite buffer to the overhead total
mem_total+=mem;
mem = server.lua_scripts_mem;
// Overhead of the lua_scripts dict (entries and buckets)
mem += dictSize(server.lua_scripts) * sizeof(dictEntry) +
dictSlots(server.lua_scripts) * sizeof(dictEntry*);
// Overhead of the replication script cache dict
mem += dictSize(server.repl_scriptcache_dict) * sizeof(dictEntry) +
dictSlots(server.repl_scriptcache_dict) * sizeof(dictEntry*);
if (listLength(server.repl_scriptcache_fifo) > 0) { // Replication script cache FIFO is not empty
// Estimate the FIFO overhead from the size of its first node
mem += listLength(server.repl_scriptcache_fifo) * (sizeof(listNode) +
sdsZmallocSize(listNodeValue(listFirst(server.repl_scriptcache_fifo))));
}
// Memory used by Lua script caches (used_memory_scripts)
mh->lua_caches = mem;
// Add the Lua script cache memory to the overhead total
mem_total+=mem;
// Walk every redisDb and account for its hash-table overhead
for (j = 0; j < server.dbnum; j++) {
redisDb *db = server.db+j;
// Number of keys in this db
long long keyscount = dictSize(db->dict);
if (keyscount==0) continue;
// Accumulate the total key count
mh->total_keys += keyscount;
// Grow the per-db report array (reallocated once per non-empty db)
mh->db = zrealloc(mh->db,sizeof(mh->db[0])*(mh->num_dbs+1));
// Record which db this entry describes
mh->db[mh->num_dbs].dbid = j;
// used entries in the main dict * sizeof(dictEntry)
// + total buckets of both hash tables * sizeof(dictEntry*)
// + used entries in the main dict * sizeof(robj)
mem = dictSize(db->dict) * sizeof(dictEntry) +
dictSlots(db->dict) * sizeof(dictEntry*) +
dictSize(db->dict) * sizeof(robj);
mh->db[mh->num_dbs].overhead_ht_main = mem;
// Add to the overhead total
mem_total+=mem;
// entries in the expires dict * sizeof(dictEntry) + total buckets of both hash tables * sizeof(dictEntry*)
mem = dictSize(db->expires) * sizeof(dictEntry) +
dictSlots(db->expires) * sizeof(dictEntry*);
// Record the expires hash-table overhead
mh->db[mh->num_dbs].overhead_ht_expires = mem;
// Add to the overhead total
mem_total+=mem;
mh->num_dbs++;
}
// Total overhead (used_memory_overhead)
mh->overhead_total = mem_total;
// Dataset size = allocator-reported memory minus total overhead
mh->dataset = zmalloc_used - mem_total;
mh->peak_perc = (float)zmalloc_used*100/mh->peak_allocated;
/* Metrics computed after subtracting the startup memory from the total */
size_t net_usage = 1;
// Net usage = allocator memory minus startup memory (guarded against underflow)
if (zmalloc_used > mh->startup_allocated)
net_usage = zmalloc_used - mh->startup_allocated; // allocator memory minus startup memory
// Dataset as a percentage of the net memory usage
mh->dataset_perc = (float)mh->dataset*100/net_usage;
// Average number of bytes per key
mh->bytes_per_key = mh->total_keys ? (net_usage / mh->total_keys) : 0;
return mh;
}
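The per-db overhead computed above can be checked against the sample output. A rough back-of-the-envelope check, assuming a 64-bit build (sizeof(dictEntry) = 24, sizeof(dictEntry*) = 8, sizeof(robj) = 16), one million keys with no TTLs (so the expires dict is empty), and a main dict that has grown to 1048576 buckets:
overhead_ht_main = 1000000*24 + 1048576*8 + 1000000*16 = 48388608
used_memory_overhead = used_memory_startup (1020736) + mem_clients_normal (17440) + 48388608 = 49426784
which matches the used_memory_overhead reported in the libc column exactly. The derived bytes_per_key is then (153492864 - 1020736) / 1000000 ≈ 152 bytes per key.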
(3.3.2) serverCron()
// filepath: /src/server.c
/*
 * This is the timer handler, called server.hz times per second.
 * Here is where a number of things that need to be done asynchronously
 * are performed. For instance:
 *
 * - Active expired keys collection (it is also performed in a lazy way on
 *   lookup).
 * - Software watchdog.
 * - Update some statistics.
 * - Incremental rehashing of the DB hash tables.
 * - Triggering BGSAVE / AOF rewrite, and handling of terminated children.
 * - Clients timeout of different kinds.
 * - Replication reconnection.
 * - Many more...
 *
 * Everything directly called here will be called server.hz times per second,
 * so in order to throttle execution of things we want to do less frequently
 * the macro run_with_period(milliseconds) { .... } is used.
 */
int serverCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {
int j;
UNUSED(eventLoop);
UNUSED(id);
UNUSED(clientData);
/* Software watchdog: deliver the SIGALRM that will reach the signal
* handler if we don't return here fast enough. */
if (server.watchdog_period) watchdogScheduleSignal(server.watchdog_period);
/* Update the time cache. */
updateCachedTime(1);
// Reset server.hz to the configured value
server.hz = server.config_hz;
/* Adapt the server.hz value to the number of configured clients. If we have
* many clients, we want to call serverCron() with an higher frequency. */
if (server.dynamic_hz) {
while (listLength(server.clients) / server.hz >
MAX_CLIENTS_PER_CLOCK_TICK)
{
server.hz *= 2;
if (server.hz > CONFIG_MAX_HZ) {
server.hz = CONFIG_MAX_HZ;
break;
}
}
}
run_with_period(100) {
trackInstantaneousMetric(STATS_METRIC_COMMAND,server.stat_numcommands);
trackInstantaneousMetric(STATS_METRIC_NET_INPUT,
server.stat_net_input_bytes);
trackInstantaneousMetric(STATS_METRIC_NET_OUTPUT,
server.stat_net_output_bytes);
}
/* We have just LRU_BITS bits per object for LRU information.
* So we use an (eventually wrapping) LRU clock.
*
* Note that even if the counter wraps it's not a big problem,
* everything will still work but some object will appear younger
* to Redis. However for this to happen a given object should never be
* touched for all the time needed to the counter to wrap, which is
* not likely.
*
* Note that you can change the resolution altering the
* LRU_CLOCK_RESOLUTION define. */
server.lruclock = getLRUClock();
/* Record the max memory used since the server was started. */
if (zmalloc_used_memory() > server.stat_peak_memory)
server.stat_peak_memory = zmalloc_used_memory();
run_with_period(100) {
/* Sample the RSS and other metrics here since this is a relatively slow call.
 * We must sample zmalloc_used at the same time we take the rss, otherwise
 * the frag ratio calculation may be off (ratio of two samples taken at
 * different times). */
// Resident set size (RSS) of the process as reported by the OS
server.cron_malloc_stats.process_rss = zmalloc_get_rss();
// Memory currently allocated through the allocator
server.cron_malloc_stats.zmalloc_used = zmalloc_used_memory();
/* Sampling the allcator info can be slow too.
* The fragmentation ratio it'll show is potentically more accurate
* it excludes other RSS pages such as: shared libraries, LUA and other non-zmalloc
* allocations, and allocator reserved pages that can be pursed (all not actual frag) */
zmalloc_get_allocator_info(&server.cron_malloc_stats.allocator_allocated,
&server.cron_malloc_stats.allocator_active,
&server.cron_malloc_stats.allocator_resident);
/* in case the allocator isn't providing these stats, fake them so that
* fragmention info still shows some (inaccurate metrics) */
if (!server.cron_malloc_stats.allocator_resident) {
/* LUA memory isn't part of zmalloc_used, but it is part of the process RSS,
* so we must desuct it in order to be able to calculate correct
* "allocator fragmentation" ratio */
size_t lua_memory = lua_gc(server.lua,LUA_GCCOUNT,0)*1024LL;
server.cron_malloc_stats.allocator_resident = server.cron_malloc_stats.process_rss - lua_memory;
}
if (!server.cron_malloc_stats.allocator_active)
server.cron_malloc_stats.allocator_active = server.cron_malloc_stats.allocator_resident;
if (!server.cron_malloc_stats.allocator_allocated)
server.cron_malloc_stats.allocator_allocated = server.cron_malloc_stats.zmalloc_used;
}
/* We received a SIGTERM, shutting down here in a safe way, as it is
* not ok doing so inside the signal handler. */
if (server.shutdown_asap) {
if (prepareForShutdown(SHUTDOWN_NOFLAGS) == C_OK) exit(0);
serverLog(LL_WARNING,"SIGTERM received but errors trying to shut down the server, check the logs for more information");
server.shutdown_asap = 0;
}
/* Show some info about non-empty databases */
run_with_period(5000) {
for (j = 0; j < server.dbnum; j++) {
long long size, used, vkeys;
size = dictSlots(server.db[j].dict);
used = dictSize(server.db[j].dict);
vkeys = dictSize(server.db[j].expires);
if (used || vkeys) {
serverLog(LL_VERBOSE,"DB %d: %lld keys (%lld volatile) in %lld slots HT.",j,used,vkeys,size);
/* dictPrintStats(server.dict); */
}
}
}
/* Show information about connected clients */
if (!server.sentinel_mode) {
run_with_period(5000) {
serverLog(LL_DEBUG,
"%lu clients connected (%lu replicas), %zu bytes in use",
listLength(server.clients)-listLength(server.slaves),
listLength(server.slaves),
zmalloc_used_memory());
}
}
/* We need to do a few operations on clients asynchronously. */
clientsCron();
/* Handle background operations on Redis databases. */
databasesCron();
/* Start a scheduled AOF rewrite if this was requested by the user while
* a BGSAVE was in progress. */
if (!hasActiveChildProcess() &&
server.aof_rewrite_scheduled)
{
rewriteAppendOnlyFileBackground();
}
/* Check if a background saving or AOF rewrite in progress terminated. */
if (hasActiveChildProcess() || ldbPendingChildren())
{
checkChildrenDone();
} else {
/* If there is not a background saving/rewrite in progress check if
* we have to save/rewrite now. */
for (j = 0; j < server.saveparamslen; j++) {
struct saveparam *sp = server.saveparams+j;
/* Save if we reached the given amount of changes,
* the given amount of seconds, and if the latest bgsave was
* successful or if, in case of an error, at least
* CONFIG_BGSAVE_RETRY_DELAY seconds already elapsed. */
if (server.dirty >= sp->changes &&
server.unixtime-server.lastsave > sp->seconds &&
(server.unixtime-server.lastbgsave_try >
CONFIG_BGSAVE_RETRY_DELAY ||
server.lastbgsave_status == C_OK))
{
serverLog(LL_NOTICE,"%d changes in %d seconds. Saving...",
sp->changes, (int)sp->seconds);
rdbSaveInfo rsi, *rsiptr;
rsiptr = rdbPopulateSaveInfo(&rsi);
rdbSaveBackground(server.rdb_filename,rsiptr);
break;
}
}
/* Trigger an AOF rewrite if needed. */
if (server.aof_state == AOF_ON &&
!hasActiveChildProcess() &&
server.aof_rewrite_perc &&
server.aof_current_size > server.aof_rewrite_min_size)
{
long long base = server.aof_rewrite_base_size ?
server.aof_rewrite_base_size : 1;
long long growth = (server.aof_current_size*100/base) - 100;
if (growth >= server.aof_rewrite_perc) {
serverLog(LL_NOTICE,"Starting automatic rewriting of AOF on %lld%% growth",growth);
rewriteAppendOnlyFileBackground();
}
}
}
/* Just for the sake of defensive programming, to avoid forgeting to
* call this function when need. */
updateDictResizePolicy();
/* AOF postponed flush: Try at every cron cycle if the slow fsync
* completed. */
if (server.aof_flush_postponed_start) flushAppendOnlyFile(0);
/* AOF write errors: in this case we have a buffer to flush as well and
* clear the AOF error in case of success to make the DB writable again,
* however to try every second is enough in case of 'hz' is set to
* a higher frequency. */
run_with_period(1000) {
if (server.aof_last_write_status == C_ERR)
flushAppendOnlyFile(0);
}
/* Clear the paused clients flag if needed. */
clientsArePaused(); /* Don't check return value, just use the side effect.*/
/* Replication cron function -- used to reconnect to master,
* detect transfer failures, start background RDB transfers and so forth. */
run_with_period(1000) replicationCron();
/* Run the Redis Cluster cron. */
run_with_period(100) {
if (server.cluster_enabled) clusterCron();
}
/* Run the Sentinel timer if we are in sentinel mode. */
if (server.sentinel_mode) sentinelTimer();
/* Cleanup expired MIGRATE cached sockets. */
run_with_period(1000) {
migrateCloseTimedoutSockets();
}
/* Stop the I/O threads if we don't have enough pending work. */
stopThreadedIOIfNeeded();
/* Resize tracking keys table if needed. This is also done at every
* command execution, but we want to be sure that if the last command
* executed changes the value via CONFIG SET, the server will perform
* the operation even if completely idle. */
if (server.tracking_clients) trackingLimitUsedSlots();
/* Start a scheduled BGSAVE if the corresponding flag is set. This is
* useful when we are forced to postpone a BGSAVE because an AOF
* rewrite is in progress.
*
* Note: this code must be after the replicationCron() call above so
* make sure when refactoring this file to keep this order. This is useful
* because we want to give priority to RDB savings for replication. */
if (!hasActiveChildProcess() &&
server.rdb_bgsave_scheduled &&
(server.unixtime-server.lastbgsave_try > CONFIG_BGSAVE_RETRY_DELAY ||
server.lastbgsave_status == C_OK))
{
rdbSaveInfo rsi, *rsiptr;
rsiptr = rdbPopulateSaveInfo(&rsi);
if (rdbSaveBackground(server.rdb_filename,rsiptr) == C_OK)
server.rdb_bgsave_scheduled = 0;
}
/* Fire the cron loop modules event. */
RedisModuleCronLoopV1 ei = {REDISMODULE_CRON_LOOP_VERSION,server.hz};
moduleFireServerEvent(REDISMODULE_EVENT_CRON_LOOP,
0,
&ei);
server.cronloops++;
return 1000/server.hz;
}
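The run_with_period(ms) macro referenced in the comment above throttles a block inside serverCron() so that it runs roughly once every ms milliseconds instead of server.hz times per second. As a sketch (reconstructed from server.h in the 6.0 tree, so treat the exact form as an approximation):
// Run the body only when the cron period is already >= _ms_, or when the
// current value of server.cronloops falls on the right multiple.
#define run_with_period(_ms_) \
    if ((_ms_ <= 1000/server.hz) || !(server.cronloops%((_ms_)/(1000/server.hz))))
For example, with the default hz of 10 serverCron() runs every 100 ms, so run_with_period(5000) executes its body on one call out of every 50.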
(3.3.3) Converting to a human-readable format: bytesToHuman()
/*
 * Convert an amount of bytes into a human-readable string in the form
 * of 100B, 2G, 100M, 4K and so forth.
 */
void bytesToHuman(char *s, unsigned long long n) {
double d;
if (n < 1024) { // < 1KB: show "xxB"
/* Bytes */
sprintf(s,"%lluB",n);
} else if (n < (1024*1024)) { // < 1MB: show "xx.yyK" (two decimal places)
d = (double)n/(1024);
sprintf(s,"%.2fK",d);
} else if (n < (1024LL*1024*1024)) { // < 1GB: show "xx.yyM" (two decimal places)
d = (double)n/(1024*1024);
sprintf(s,"%.2fM",d);
} else if (n < (1024LL*1024*1024*1024)) { // < 1TB: show "xx.yyG" (two decimal places)
d = (double)n/(1024LL*1024*1024);
sprintf(s,"%.2fG",d);
} else if (n < (1024LL*1024*1024*1024*1024)) { // < 1PB: show "xx.yyT" (two decimal places)
d = (double)n/(1024LL*1024*1024*1024);
sprintf(s,"%.2fT",d);
} else if (n < (1024LL*1024*1024*1024*1024*1024)) { // < 1EB: show "xx.yyP" (two decimal places)
d = (double)n/(1024LL*1024*1024*1024*1024);
sprintf(s,"%.2fP",d);
} else {
/* Let's hope we never need this */
sprintf(s,"%lluB",n);
}
}
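For reference, a small standalone program that mirrors the branches above (only up to the G case; not Redis code) reproduces the used_memory_human figure from the sample output:
#include <stdio.h>
/* Simplified mirror of bytesToHuman() covering the B/K/M/G cases. */
static void bytes_to_human(char *s, unsigned long long n) {
    if (n < 1024)
        sprintf(s, "%lluB", n);
    else if (n < 1024ULL*1024)
        sprintf(s, "%.2fK", (double)n/1024);
    else if (n < 1024ULL*1024*1024)
        sprintf(s, "%.2fM", (double)n/(1024*1024));
    else
        sprintf(s, "%.2fG", (double)n/(1024ULL*1024*1024));
}
int main(void) {
    char buf[64];
    bytes_to_human(buf, 153492864ULL);   /* used_memory from the sample output */
    printf("%s\n", buf);                 /* prints 146.38M */
    return 0;
}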