[TOC]
# 動態調整參數
Redis 為我們提供了一個動態調整配置的命令:
CONFIG SET (官網 https://redis.io/commands/config-set )
`CONFIG SET parameter value`
CONFIG SET 命令可以在不重啟服務的情況下動態調整 Redis 服務器的配置(configuration)。
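下面給出一個簡單的 redis-cli 會話示意(假設本機 6379 端口運行著一個未設密碼的 Redis 實例,輸出僅供參考,實際以所用版本為準):先用 CONFIG GET 查看當前值,再用 CONFIG SET 動態修改 maxmemory,最後可選地用 CONFIG REWRITE 把運行時的修改寫回配置文件。
~~~
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "0"
127.0.0.1:6379> CONFIG SET maxmemory 100mb
OK
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "104857600"
127.0.0.1:6379> CONFIG REWRITE
OK
~~~
注意:並非所有參數都支持在運行時修改;CONFIG REWRITE 只有在服務器啟動時指定了配置文件的情況下才能成功。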
# 4.0.6配置文件
~~~
# Redis configuration file example.
# Redis配置文件示例。
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
# 注意,要讓 Redis 讀取配置文件,必須在啟動時把配置文件路徑作為第一個參數:
#
# ./redis-server /path/to/redis.conf
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
# 關於單位的說明:當需要指定內存大小時,可以使用常見的 1k、5GB、4M 等寫法:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
# 單位不區分大小寫,所以 1GB、1Gb、1gB 都是一樣的。
################################## INCLUDES 包含配置 ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
# 在這裡可以包含一個或多個其他配置文件。如果您有一份應用到所有 Redis 服務器的標準模板,
# 同時又需要為每台服務器定制少量設置,這會非常有用。包含的文件還可以再包含其他文件,請謹慎使用。
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
# 注意,“include”選項不會被來自管理端或 Redis Sentinel(哨兵)的 “CONFIG REWRITE” 命令重寫。
# 由於 Redis 總是以最後處理到的那一行作為配置指令的值,最好把 include 放在文件開頭,以免覆蓋運行時的配置修改。
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
# 反之,如果您想用 include 來覆蓋配置選項,最好把 include 放在最後一行。
#
# include /path/to/local.conf
# include /path/to/other.conf
################################## MODULES 模塊配置 #####################################
# Load modules at startup. If the server is not able to load modules
# it will abort. It is possible to use multiple loadmodule directives.
# 在啟動時加載模塊。如果服務器不能加載模塊,它就會中止。可以使用多個loadmodule指令。
#
# loadmodule /path/to/my_module.so
# loadmodule /path/to/other_module.so
################################## NETWORK 網絡配置 #####################################
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
# 默認情況下,如果沒有指定“bind”配置指令,Redis 會監聽服務器上所有可用網絡接口的連接。
# 也可以使用“bind”配置指令後跟一個或多個 IP 地址,只監聽選定的一個或多個接口。
#
# Examples:
# 例如:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 lookback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
# 警告:如果運行 Redis 的計算機直接暴露在互聯網上,綁定所有接口是危險的,會把實例暴露給互聯網上的所有人。
# 因此,默認情況下我們取消註釋了下面的 bind 指令,強制 Redis 只監聽 IPv4 環回(loopback)接口地址(這意味著 Redis 只能接受來自其所在計算機上的客戶端的連接)。
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# 如果您確定希望您的實例偵聽所有接口,請注釋以下一行。
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
# 保護模式是一層安全防護,目的是避免暴露在互聯網上、未加防護的 Redis 實例被訪問和利用。
#
# When protected mode is on and if:
# 當保護模式開啟時,如果:
#
# 1) The server is not binding explicitly to a set of addresses using the
# "bind" directive.
# 服務器不使用“bind”指令顯式地綁定到一組地址。
# 2) No password is configured.
# 沒有密碼配置。
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
# 服務器只接受來自 IPv4 和 IPv6 環回地址 127.0.0.1 和 ::1 以及 Unix 域套接字的客戶端連接。
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
# 默認情況下保護模式是啟用的。只有在您確定即使沒有配置身份驗證、也沒有用“bind”顯式列出特定接口,仍希望其他主機的客戶端連接到 Redis 時,才應該禁用它。
protected-mode yes
# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
# 接受指定端口上的連接,默認為6379(IANA 815344)。如果指定端口0,Redis將不會監聽TCP套接字。
port 6379
# TCP listen() backlog.
# TCP listen() 積壓
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
# 在高請求率(每秒請求數很高)的環境中,需要較大的 backlog 以避免慢客戶端連接問題。
# 請注意,Linux 內核會悄悄地把它截斷為 /proc/sys/net/core/somaxconn 的值,因此要同時調大 somaxconn 和 tcp_max_syn_backlog 的值,才能達到預期效果。
tcp-backlog 511
# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
# 指定用于偵聽傳入連接的Unix socket的路徑。沒有默認值,所以Redis在沒有指定的情況下不會監聽Unix socket。
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700
# Close the connection after a client is idle for N seconds (0 to disable)
# 在客戶端閑置N秒后關閉連接(0 禁用)
timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
# 如果非零,則在沒有通信時使用 SO_KEEPALIVE 向客戶端發送 TCP ACK。這樣做有兩個用處:
#
# 1) Detect dead peers.
# 檢測已失效(掉線)的對端。
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
# 讓位於中間的網絡設備認為該連接仍然存活。
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
# 在 Linux 上,指定的值(以秒為單位)是發送 ACK 的周期。
# 注意,關閉連接需要兩倍於此的時間。
# 在其他內核上,該周期取決於內核配置。
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
# 這個選項的一個合理值是300秒,這是從Redis 3.2.1開始的新Redis默認值。
tcp-keepalive 300
################################# GENERAL 常規配置 #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
# 默認情況下,Redis 不作為守護進程運行。如果需要,請設為 yes。
# 請注意,以守護進程方式運行時,Redis 會在 /var/run/redis.pid 寫入一個 pid 文件。
daemonize no
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# 如果您從upstart或systemd中運行Redis,Redis可以與您的監督樹交互。選項:
#
# supervised no - no supervision interaction
# 沒有監督互動
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# 通過讓 Redis 進入 SIGSTOP 模式來向 upstart 發送信號
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# 通過向 $NOTIFY_SOCKET 寫入 READY=1 來通知 systemd
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# 根據 UPSTART_JOB 或 NOTIFY_SOCKET 環境變量自動檢測 upstart 或 systemd
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
# 注意:這些監督方式只會通知“進程已經準備好”。
# 它們不會持續向您的監管程序(supervisor)回報存活狀態。
supervised no
# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
# 如果指定了 pid 文件,Redis 會在啟動時把它寫到指定位置,并在退出時刪除它。
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
# 當服務器不以守護進程方式運行時,如果配置中沒有指定 pid 文件,則不會創建。
# 當服務器以守護進程方式運行時,即使沒有指定也會使用 pid 文件,默認為“/var/run/redis.pid”。
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
# 創建 pid 文件是盡力而為的:即使 Redis 無法創建它,也不會有什麼壞處,服務器仍會正常啟動和運行。
pidfile /var/run/redis_6379.pid
# Specify the server verbosity level.
# 指定服務器的日誌詳細級別。
# This can be one of:
# 這可以是:
# debug (a lot of information, useful for development/testing)
# 大量的信息,對于開發/測試很有用
# verbose (many rarely useful info, but not a mess like the debug level)
# 許多很少有用的信息,但不是像調試級別那樣的混亂
# notice (moderately verbose, what you want in production probably)
# 適度詳細,生產環境中通常想要的級別
# warning (only very important / critical messages are logged)
# 只記錄非常重要的/關鍵的消息
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
# 指定日誌文件名。
# 空字符串可以用來強制 Redis 把日誌輸出到標準輸出。
# 注意,如果使用標準輸出記錄日誌但又以守護進程方式運行,日誌將被發送到 /dev/null。
logfile ""
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# 為了使日志記錄到系統記錄器,只需將“syslog-enabled”設置為yes,并可選擇性地更新其他syslog參數以滿足您的需要。
# syslog-enabled no
# Specify the syslog identity.
# 指定syslog的身份。
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# 指定 syslog 的 facility。必須是 USER 或 LOCAL0-LOCAL7 之一。
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
# 設置數據庫的數量。
# 默認數據庫是 DB 0,您可以在每個連接上使用 SELECT <dbid> 選擇不同的數據庫,其中 dbid 是介於 0 和 'databases'-1 之間的數字。
databases 16
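#
# 示意(假設在 redis-cli 中操作):執行 SELECT 1 可切換到 1 號數據庫,編號範圍為 0 到 databases-1;
# 執行 SELECT 0 則切回默認庫。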
# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
# 默認情況下,只有當日誌輸出到標準輸出且標準輸出是 TTY 時,Redis 才會顯示 ASCII 藝術 logo。
# 也就是說,通常只有在交互式會話中才會顯示 logo。
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
# 不過,把下面的選項設為 yes,可以強制沿用 4.0 之前的行為,始終在啟動日誌中顯示 ASCII 藝術 logo。
always-show-logo yes
################################ SNAPSHOTTING 快照配置 ################################
#
# Save the DB on disk:
# 將數據保存在磁盤上:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
# 如果同時滿足給定的秒數(seconds)和給定的對 DB 的寫操作次數(changes),就會保存 DB。
#
# In the example below the behaviour will be to save:
# 在下面的例子中,行為將是保存:
# after 900 sec (15 min) if at least 1 key changed
# 900 秒(15 分鐘)內至少有 1 個鍵被修改
# after 300 sec (5 min) if at least 10 keys changed
# 300 秒(5 分鐘)內至少有 10 個鍵被修改
# after 60 sec if at least 10000 keys changed
# 60 秒內至少有 10000 個鍵被修改
#
# Note: you can disable saving completely by commenting out all "save" lines.
# 注意:你可以通過注釋掉所有的“保存”行來完全禁用保存。
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
# 也可以通過添加一個帶有單個空字符串參數的 save 指令,刪除之前配置的所有保存點,如下例所示:
#
# save ""
save 900 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# 默認情況下,如果啟用了 RDB 快照(至少一個保存點)且最近一次後台保存失敗,Redis 將停止接受寫入。
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
# 這將使用戶意識到(以一種困難的方式)數據不會正確地持久化到磁盤上,否則很可能沒有人會注意到,一些災難將會發生。
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
# 一旦後台保存過程重新正常工作,Redis 將自動再次允許寫入。
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
# 但是,如果您已經對 Redis 服務器和持久化做了適當的監控,可能會想禁用這個特性,這樣即使磁盤、權限等出現問題,Redis 也能照常工作。
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
# 在轉儲 .rdb 數據庫時是否使用 LZF 壓縮字符串對象?
# 默認設為 yes,因為這幾乎總是划算的。如果想為執行保存的子進程節省一些 CPU,可以設為 no,但如果數據中有可壓縮的值或鍵,數據集可能會更大。
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
# 從 RDB 第 5 版開始,文件末尾會放置一個 CRC64 校驗和。
# 這讓該格式更能抵禦文件損壞,但在保存和加載 RDB 文件時會有約 10% 的性能損失,因此您可以禁用它以獲得最佳性能。
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
# 在禁用校驗和的情況下創建的 RDB 文件,其校驗和為零,加載代碼據此跳過校驗。
rdbchecksum yes
# The filename where to dump the DB
# 要轉儲DB的文件名
dbfilename dump.rdb
# The working directory.
# 工作目錄。
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
# DB 將被寫入這個目錄,文件名由上面的“dbfilename”配置指令指定。
#
# The Append Only File will also be created inside this directory.
# AOF(Append Only File)文件也會創建在這個目錄中。
#
# Note that you must specify a directory here, not a file name.
# 注意,您必須在這里指定一個目錄,而不是一個文件名。
dir ./
################################# REPLICATION 復制設置(主從) #################################
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
# 主從複製。使用 slaveof 可以讓一個 Redis 實例成為另一個 Redis 服務器的副本。
# 關於 Redis 的複製,有幾件事需要儘快了解。
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of slaves.
# Redis復制是異步的,但是您可以配置一個主服務,如果它看起來與至少給定數量的從服務沒有連接,就可以停止接受寫操作。
# 2) Redis slaves are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 如果複製鏈接只丟失了相對較短的時間,Redis 從服務可以與主服務執行部分重同步。
# 您可能希望根據自身需要,把複製積壓緩衝區(replication backlog)的大小(見本文件後面的章節)配置為一個合理的值。
#
# 3) Replication is automatic and does not need user intervention. After a
# network partition slaves automatically try to reconnect to masters
# and resynchronize with them.
# 復制是自動的,不需要用戶干預。在網絡分區之后,從服務會自動嘗試重新連接到主服務并與他們重新同步。
#
# slaveof <masterip> <masterport>
# slaveof 主服務ip 主服務端口
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
# 如果主服務設置了密碼保護(使用下面的“requirepass”配置指令),可以讓從服務在開始複製同步之前先進行身份驗證,否則主服務會拒絕從服務的請求。
#
# masterauth <master-password>
# masterauth 主服務密碼
# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
# 當一個從服務失去與主服務的聯系,或者當復制還在進行的時候,從服務可以以兩種不同的方式行動:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
# 如果將slave-serve-stale-data設置為“yes”(默認值),則從服務仍然會回復客戶端請求,可能是使用過期數據,或者如果這是第一次同步,數據集可能是空的。
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
# 如果 slave-serve-stale-data 被設置為“no”,從服務將對除 INFO 和 SLAVEOF 之外的所有命令返回錯誤 "SYNC with master in progress"。
#
slave-serve-stale-data yes
# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
# 您可以配置從服務實例是否接受寫操作。
# 向從服務實例寫入可能有助於存儲一些短暫的數據(因為寫到從服務上的數據在與主服務重同步後很容易被刪除),但如果客戶端因配置錯誤向它寫入,也可能導致問題。
#
# Since Redis 2.6 by default slaves are read-only.
# 從 Redis 2.6 開始,從服務默認是只讀的。
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
# 注意:只讀從服務并不是設計來暴露給互聯網上不受信任的客戶端的,它只是一層防止實例被誤用的保護。
# 默認情況下,只讀從服務仍然會暴露 CONFIG、DEBUG 等所有管理命令。
# 在一定程度上,您可以使用“rename-command”隱藏所有管理/危險命令,從而提高只讀從服務的安全性。
slave-read-only yes
# Replication SYNC strategy: disk or socket.
# 復制同步策略:磁盤或socket。
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
# 警告:無磁盤復制目前是實驗性的
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
# 無法僅通過接收差異數據繼續複製過程的新從服務和重連的從服務,需要進行所謂的“完全同步”:由主服務把一個 RDB 文件傳輸給從服務。
# 這個傳輸有兩種不同的方式:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the slaves incrementally.
# Redis的主服務創建了一個新的進程,在磁盤上寫入RDB文件。稍后,該文件由父進程以增量方式傳遞給從服務。
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to slave sockets, without touching the disk at all.
# Redis的主服務創建了一個新的進程,它直接將RDB文件寫到從服務socket,而不需要觸碰磁盤。
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
# 使用磁盤備份的複製時,在生成 RDB 文件期間,更多的從服務可以排隊,等當前生成 RDB 文件的子進程完成工作後立即複用同一個 RDB 文件。
# 而使用無磁盤複製時,一旦傳輸開始,新到達的從服務只能排隊,等當前傳輸結束後才會開始新的傳輸。
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
# 當使用無磁盤復制時,主服務器在開始傳輸之前等待一個可配置的時間(以秒為單位),希望多個從服務能夠到達,并且傳輸可以并行化。
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
# 使用慢速磁盤和快速(大帶寬)網絡,無磁盤復制效果更好。
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
# 當啟用無磁盤複製時,可以配置服務器等待的延遲,然後才派生出通過 socket 把 RDB 傳輸給從服務的子進程。
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
# 這是很重要的,因為一旦傳輸開始,就不可能為新到的從服務提供服務,這將會排隊等待下一個RDB傳輸,所以服務器等待一個延遲,以便讓更多的從服務到達。
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
# 延遲以秒為單位指定,默認是 5 秒。要完全禁用它,只需將其設置為 0 秒,傳輸就會盡快開始。
repl-diskless-sync-delay 5
# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
# 從服務會按預定義的時間間隔向服務器發送 PING。可以用 repl-ping-slave-period 選項修改這個間隔,默認值是 10 秒。
#
# repl-ping-slave-period 10
# The following option sets the replication timeout for:
# 下面的選項用來設置復制超時:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 從從服務的角度看,SYNC 期間的批量傳輸 I/O。
# 2) Master timeout from the point of view of slaves (data, pings).
# 從從服務的角度看的主服務超時(數據、PING)。
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
# 從主服務的角度看的從服務超時(REPLCONF ACK PING)。
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
# 重要的是要確保這個值大於為 repl-ping-slave-period 指定的值,否則每當主從之間流量較低時都會檢測到超時。
#
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
# 是否在 SYNC 之後禁用從服務 socket 上的 TCP_NODELAY?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
# 如果您選擇“yes”,Redis 將使用更少的 TCP 數據包和更少的帶寬向從服務發送數據。但這可能增加數據出現在從服務端的延遲,在使用默認配置的 Linux 內核上最多可達 40 毫秒。
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
# 如果您選擇“no”,那么在從服務端出現的數據延遲將會減少,但是更多的帶寬將被用于復制。
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
# 默認情況下我們針對低延遲進行了優化,但在流量非常高,或者主從之間相隔很多跳的情況下,把它設為“yes”可能是個好主意。
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
# 設置複製 backlog 的大小。
# backlog 是一個緩衝區,當從服務斷開連接一段時間後,它會累積從服務缺失的數據,這樣當從服務想要重新連接時,通常不需要完全重同步,只需進行部分重同步,把斷線期間錯過的那部分數據傳給它即可。
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
# 復制的積壓越大,從服務斷開連接的時間越長,以后就能夠執行部分重新同步。
#
# The backlog is only allocated once there is at least a slave connected.
# 只有在至少有一個從服務連接的情況下,才會分配 backlog 緩衝區。
#
# repl-backlog-size 1mb
# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
# 當一個主服務不再連接從服務一段時間后,積壓將被釋放。
# 下面的選項配置所需的秒數,從最后一個從服務斷開連接的時間開始,以便釋放積壓的緩沖區。
#
# Note that slaves never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with the slaves: hence they should always accumulate backlog.
# 注意,從服務永遠不會因超時而釋放 backlog,因為它們以後可能被提升為主服務,屆時應該能夠與其他從服務正確地進行“部分重同步”:因此它們應該始終累積 backlog。
#
# A value of 0 means to never release the backlog.
# 0的值意味著永遠不要釋放積壓。
#
# repl-backlog-ttl 3600
# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
# 從屬優先級是由Redis在INFO輸出中發布的整數編號。
# 它被Redis哨兵用來選擇一個從服務,如果主服務不再正常工作,就可以把它提升為主服務。
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
# 優先級編號較低的從服務被認為更適合晉升,例如,如果有三個從服務的優先級分別為 10、100、25,Sentinel 會選擇優先級為 10 的那個,因為它最低。
# 然而,特殊優先級 0 表示該從服務無法承擔主服務的角色,因此優先級為 0 的從服務永遠不會被 Redis Sentinel 選中晉升。
#
# By default the priority is 100.
# 默認情況下,優先級是100。
slave-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
# 如果有少于N個從服務的連接,有一個延遲小于或等于M秒,那么一個主服務就可以停止接受寫。
#
# The N slaves need to be in "online" state.
# N個從服務需要處于“在線”狀態。
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
# 秒的延遲,必須是<=指定的值,是從從服務收到的最后一個ping來計算的,這通常是每秒鐘發送一次。
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
# 這個選項并不能保證N個副本將接受寫入,但是如果沒有足夠的從服務可用,則將限制丟失的寫入的窗口,以達到指定的秒數。
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
# 例如,至少需要3個有延遲<=10秒的從服務:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
# 將一個或另一個設置為0禁用該特性。
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
# 默認情況下,min-slaves-to-write 被設置為 0(功能禁用),min-slaves-max-lag 被設置為 10。
# A Redis master is able to list the address and port of the attached
# slaves in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover slave instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
# Redis 主服務能夠以不同的方式列出所連接從服務的地址和端口。
# 例如,“INFO replication”小節提供了這些信息,Redis Sentinel 等工具用它來發現從服務實例。
# 另一個可以獲得這些信息的地方是主服務的“ROLE”命令的輸出。
#
# The listed IP and address normally reported by a slave is obtained
# in the following way:
# 從服務通常報告的 IP 和端口通過以下方式獲得:
#
# IP: The address is auto detected by checking the peer address
# of the socket used by the slave to connect with the master.
# IP:通過檢查從服務連接主服務所用 socket 的對端地址自動檢測。
#
# Port: The port is communicated by the slave during the replication
# handshake, and is normally the port that the slave is using to
# list for connections.
# 端口:由從服務在複製握手過程中告知,通常就是從服務用來監聽連接的端口。
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the slave may be actually reachable via different IP and port
# pairs. The following two options can be used by a slave in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
# 然而,當使用端口轉發或網絡地址轉換(NAT)時,可以通過不同的IP和端口對來訪問從服務。一個從服務可以使用下面兩個選項來向它的主服務報告一組特定的IP和端口,這樣信息和角色就會報告這些值。
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
# 如果只需要覆蓋端口或 IP 地址其中之一,則不需要同時使用這兩個選項。
#
# slave-announce-ip 5.5.5.5
# slave-announce-port 1234
################################## SECURITY 安全設置 ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
# 要求客戶端在處理任何其他命令之前先執行 AUTH <PASSWORD>。在您無法完全信任其他能訪問運行 redis-server 的主機的人時,這可能很有用。
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
# 對于向后兼容性,這應該被注釋掉,因為大多數人不需要身份驗證(例如,他們運行自己的服務器)。
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
# 警告:由於 Redis 非常快,外部用戶在一台性能好的機器上每秒可以嘗試多達 15 萬個密碼。這意味著您應該使用非常強的密碼,否則很容易被暴力破解。
#
# requirepass foobared
# Command renaming.
# 命令重命名。
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
# 在共享環境中更改危險命令的名稱是可能的。
# 例如,配置命令可能會被重命名為難以猜測的東西,這樣它仍然可以用于內部使用工具,但對于一般客戶來說是不可用的。
#
# Example:
# 例子:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
# 也可以通過把命令重命名為空字符串來徹底禁用該命令:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
# 請注意,更改登錄到AOF文件或傳輸給從服務的命令的名稱可能會導致問題。
################################### CLIENTS 客戶端設置 ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
# 設置同一時間允許連接的客戶端最大數量。
# 默認情況下這個限制是 10000 個客戶端,但如果 Redis 服務器無法把進程的文件描述符限制配置到允許指定的數量,則允許的最大客戶端數會被設置為當前文件描述符限制減 32(因為 Redis 保留了少量文件描述符供內部使用)。
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
# 一旦達到限制,Redis 會關閉所有新連接,并返回錯誤“max number of clients reached”。
#
# maxclients 10000
############################## MEMORY MANAGEMENT 內存管理 ################################
# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
# 將內存使用上限設置為指定的字節數。當達到內存上限時,Redis 會根據所選的驅逐策略(參見 maxmemory-policy)嘗試刪除鍵。
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
# 如果 Redis 無法按照策略刪除鍵,或者策略被設置為“noeviction”,Redis 將對會佔用更多內存的命令(如 SET、LPUSH 等)返回錯誤,但會繼續響應 GET 這類只讀命令。
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
# 當使用Redis作為LRU或LFU緩存時,這個選項通常很有用,或者為一個實例設置一個硬內存限制(使用“noeviction”策略)。
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
# 警告:如果有從服務連接到開啟了 maxmemory 的實例,供給從服務所需的輸出緩衝區大小會從已用內存中扣除,這樣網絡問題或重同步就不會觸發鍵被驅逐的惡性循環——否則從服務的輸出緩衝區會被已驅逐鍵的 DEL 填滿,進而觸發更多鍵被驅逐,如此往復,直到數據庫被完全清空。
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
# 簡而言之……如果有從服務連接,建議為 maxmemory 設置一個略低的上限,讓系統留出一些空閒內存用於從服務的輸出緩衝區(如果策略是“noeviction”則不需要)。
#
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
# MAXMEMORY策略:當到達MAXMEMORY時,Redis將如何選擇移除哪些內容。你可以在以下五種行為中進行選擇:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# 在帶有過期設置的鍵中使用近似LRU。
# allkeys-lru -> Evict any key using approximated LRU.
# 使用近似的LRU將任何鍵驅逐。
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# 使用近似的LFU在帶有過期集的鍵之間進行驅逐。
# allkeys-lfu -> Evict any key using approximated LFU.
# 用近似的LFU清除任何鍵。
# volatile-random -> Remove a random key among the ones with an expire set.
# 在帶有過期設置的鍵中刪除一個隨機鍵。
# allkeys-random -> Remove a random key, any key.
# 刪除一個隨機鍵,任何鍵。
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# 刪除過期時間最近(TTL 最小)的鍵
# noeviction -> Don't evict anything, just return an error on write operations.
# 不要驅逐任何東西,只是在寫操作上返回一個錯誤。
#
# LRU means Least Recently Used
# LRU的意思是最近最少使用的
# LFU means Least Frequently Used
# LFU的意思是最不常用的
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
# LRU、LFU 和 volatile-ttl 都是用近似的隨機算法實現的。
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
# 注意:使用上述任何一種策略,當沒有合適的鍵可供驅逐時,Redis 都會在寫操作上返回錯誤。
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
# 在撰寫本文件時,這些命令是:set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#
# The default is:
# 默認的是:
#
# maxmemory-policy noeviction
# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
# LRU、LFU和最小TTL算法不是精確的算法,而是近似算法(為了節省內存),所以您可以對它進行優化,以獲得速度或準確性。
# 默認情況下,Redis會檢查5個鍵,并選擇最近使用較少的鍵,您可以使用下面的配置指令更改樣本大小。
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
# 默認值 5 已能產生足夠好的結果。10 非常接近真實的 LRU,但會消耗更多 CPU。3 更快,但準確性較差。
#
# maxmemory-samples 5
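#
# 下面是一段示意性的組合配置(非本文件的默認值,數值僅供參考):把 Redis 當作 100mb 的 LRU 緩存使用,
# 對所有鍵做近似 LRU 驅逐,并適當調高採樣數以更接近真實的 LRU:
#
# maxmemory 100mb
# maxmemory-policy allkeys-lru
# maxmemory-samples 10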
############################# LAZY FREEING 惰性釋放 ####################################
# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
# Redis 有兩個用於刪除鍵的原語。一個叫 DEL,是對對象的阻塞式刪除。
# 這意味著服務器會停止處理新命令,以同步方式回收與該對象關聯的所有內存。
# 如果被刪除的鍵關聯的是一個小對象,執行 DEL 所需的時間非常短,與 Redis 中大多數 O(1) 或 O(log_N) 命令相當。
# 然而,如果鍵關聯的是包含數百萬個元素的聚合值,服務器可能會阻塞很長時間(甚至數秒)才能完成操作。
#
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
# 出于以上原因,Redis還提供非阻塞刪除原語,如UNLINK(非阻塞DEL)和FLUSHALL和FLUSHDB命令的異步選項,以便在后臺回收內存。
# 這些命令是在固定時間執行的。另一個線程會以最快的速度在后臺以增量方式釋放對象。
#
# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
# DEL、UNLINK 以及 FLUSHALL 和 FLUSHDB 的 ASYNC 選項由用戶控制,何時使用哪一種取決於應用程序的設計。
# 然而,Redis 服務器有時不得不作為其他操作的副作用去刪除鍵或清空整個數據庫。具體來說,Redis 在以下場景中會獨立於用戶調用而刪除對象:
#
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
# in order to make room for new data, without going over the specified
# memory limit.
# 在被驅逐的情況下,由于maxmemory和maxmemory政策配置,為了為新數據騰出空間,而不需要超過指定的內存限制。
# 2) Because of expire: when a key with an associated time to live (see the
# EXPIRE command) must be deleted from memory.
# 因為過期:當一個帶有存活時間的鍵(見 EXPIRE 命令)必須從內存中刪除時。
# 3) Because of a side effect of a command that stores data on a key that may
# already exist. For example the RENAME command may delete the old key
# content when it is replaced with another one. Similarly SUNIONSTORE
# or SORT with STORE option may delete existing keys. The SET command
# itself removes any old content of the specified key in order to replace
# it with the specified string.
# 因為某個命令會把數據存儲到可能已經存在的鍵上,作為副作用刪除舊值。
# 例如,RENAME 命令在用另一個鍵覆蓋舊鍵時,可能會刪除舊鍵的內容。
# 類似地,SUNIONSTORE 或帶 STORE 選項的 SORT 可能會刪除已存在的鍵。SET 命令本身也會刪除指定鍵的舊內容,以便用指定的字符串替換它。
# 4) During replication, when a slave performs a full resynchronization with
# its master, the content of the whole database is removed in order to
# load the RDB file just transfered.
# 在復制過程中,當一個從服務與它的主服務執行完全的重新同步時,整個數據庫的內容將被刪除,以便加載剛剛傳輸的RDB文件。
#
# In all the above cases the default is to delete objects in a blocking way,
# like if DEL was called. However you can configure each case specifically
# in order to instead release memory in a non-blocking way like if UNLINK
# was called, using the following configuration directives:
# 在上述所有情況下,默認情況下是以阻塞方式刪除對象,就像調用DEL一樣。但是,您可以專門配置每個案例,以便以非阻塞方式釋放內存,就像調用UNLINK一樣,使用以下配置指令:
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
############################## APPEND ONLY MODE 只附加模式(AOF) ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
# 默認情況下,Redis會異步地將數據集轉儲到磁盤上。在許多應用程序中,這種模式已經足夠好了,但是Redis進程或斷電的問題可能會導致幾分鐘的寫入丟失(取決于配置的保存點)。
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
# Append Only File 是一種可替代的持久化模式,能提供好得多的持久性。
# 例如,使用默認的 fsync 策略(見配置文件後文),在服務器斷電這類嚴重事故中,Redis 最多只會丟失一秒的寫入;而如果是 Redis 進程本身出問題、但操作系統仍正常運行,則只會丟失一次寫入。
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
# AOF和RDB持久性可以同時啟用,而不會出現問題。如果在啟動時啟用了AOF,Redis將加載AOF,那就是具有更好的持久性保證的文件。
#
# Please check http://redis.io/topics/persistence for more information.
# 請查閱 http://redis.io/topics/persistence 獲取關於持久化的更多信息。
appendonly no
# The name of the append only file (default: "appendonly.aof")
# 只追加文件的名稱(默認:"appendonly.aof")
appendfilename "appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
# fsync()調用告訴操作系統在磁盤上實際寫入數據,而不是在輸出緩沖器中等待更多的數據。一些操作系統將真正地刷新磁盤上的數據,其他一些操作系統將會盡快地完成它。
#
# Redis supports three different modes:
# Redis支持三種不同的模式:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# no:不執行 fsync,讓操作系統在它想要的時候刷新數據。速度最快。
# always: fsync after every write to the append only log. Slow, Safest.
# always:每次向追加日誌寫入後都執行 fsync。緩慢,但最安全。
# everysec: fsync only one time every second. Compromise.
# everysec:每秒只執行一次 fsync。折衷方案。
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
# 默認是“everysec”,因為這通常是速度和數據安全之間的正確折衷。
# 是否放寬為“no”(讓操作系統在它想要的時候刷新輸出緩衝區,以換取更好的性能)由您自己判斷;如果您能接受丟失部分數據,不妨考慮默認的快照持久化模式。相反,也可以使用“always”,它非常慢,但比 everysec 更安全一點。
#
# More details please check the following article:
# 更多詳情請查看以下文章:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# 如果不確定,使用“everysec”。
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
# 當 AOF fsync 策略被設置為 always 或 everysec,且後台保存過程(後台保存或 AOF 日誌後台重寫)正對磁盤執行大量 I/O 時,在某些 Linux 配置下,Redis 可能會在 fsync() 調用上阻塞過長時間。
# 注意,目前沒有辦法修復這個問題,因為即使在不同的線程中執行 fsync,也會阻塞我們的同步 write(2) 調用。
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
# 為了緩解這個問題,可以使用下面的選項,在 BGSAVE 或 BGREWRITEAOF 進行期間,阻止主進程調用 fsync()。
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
# 這意味著,當另一個子進程在保存的時候,Redis的耐用性和“appendfsync none”是一樣的。在實際操作中,這意味著在最壞的情況下(使用默認的Linux設置)可能會損失30秒的日志。
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
# 如果你有延遲問題,就把這個問題變成“yes”。
# 否則就把它當作“no”,從耐用性的角度來看,這是最安全的選擇。
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
# 自動重寫附加文件。Redis能夠自動重寫日志文件,當AOF日志大小以指定的百分比增長時,它會隱式地調用BGREWRITEAOF。
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
# 這就是它的工作原理:Redis在最新的重寫之后記住了AOF文件的大小(如果重新啟動后沒有重寫,那么在啟動時就會使用AOF的大小)。
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
# 這個基本尺寸與當前的大小比較。如果當前的大小大于指定的百分比,則重寫被觸發。
# 此外,您還需要為將要重寫的AOF文件指定最小的大小,這對于避免重寫AOF文件是很有用的,即使百分比增加了,但它仍然很小。
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
# 把百分比指定為 0 可以禁用自動 AOF 重寫功能。
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
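#
# 舉例說明(數值為假設):若上一次重寫後 AOF 大小為 100mb 且百分比為 100,
# 則當 AOF 增長到 200mb(并且不小於 auto-aof-rewrite-min-size 的 64mb)時觸發自動重寫;
# 若 AOF 始終小於 64mb,即便增長超過 100%,也不會觸發重寫。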
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
# 在 Redis 啟動過程中把 AOF 數據加載回內存時,可能會發現 AOF 文件的末尾被截斷。
# 這通常發生在運行 Redis 的系統崩潰時,尤其是 ext4 文件系統未使用 data=ordered 選項掛載的情況(不過,如果只是 Redis 自身崩潰或中止而操作系統仍正常工作,則不會發生)。
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
# 發生這種情況時,Redis 可以選擇報錯退出,也可以(現在的默認行為)在發現 AOF 文件末尾被截斷時,加載盡可能多的數據後照常啟動。
# 下面的選項控制這種行為。
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
# 如果 aof-load-truncated 設置為 yes,會加載被截斷的 AOF 文件,并由 Redis 服務器輸出日誌通知用戶這一事件。
# 否則,如果該選項設置為 no,服務器會報錯并拒絕啟動。
# 當選項設置為 no 時,用戶需要先用“redis-check-aof”工具修復 AOF 文件,然後再重啟服務器。
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
# 注意,如果 AOF 文件在中間部分被發現損壞,服務器仍然會報錯退出。
# 該選項只適用於 Redis 嘗試從 AOF 文件末尾讀取更多數據但找不到足夠字節的情況。
aof-load-truncated yes
# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
# 在重寫AOF文件時,Redis能夠在AOF文件中使用RDB序言,以獲得更快的重寫和恢復。
# 當這個選項被打開時,重寫的AOF文件由兩個不同的小節組成:
#
# [RDB file][AOF tail]
#
# When loading Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, and continues loading the AOF
# tail.
# 加載時,Redis 會識別出 AOF 文件以“REDIS”字符串開頭,先加載前綴的 RDB 部分,然後繼續加載 AOF 尾部。
#
# This is currently turned off by default in order to avoid the surprise
# of a format change, but will at some point be used as the default.
# 這在默認情況下是關閉的,以避免格式更改的意外,但在某些時候會被用作默認值。
aof-use-rdb-preamble no
################################ LUA SCRIPTING LUA腳本 ###############################
# Max execution time of a Lua script in milliseconds.
# Lua 腳本的最大執行時間,以毫秒為單位。
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
# 如果達到最大執行時間,Redis將會記錄一個腳本在允許的最大時間之后仍然在執行,并且將開始以錯誤的方式回復查詢。
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
# 當一個長時間運行的腳本超過最大執行時間後,只有 SCRIPT KILL 和 SHUTDOWN NOSAVE 命令可用。
# 前者可以用來終止尚未調用過寫命令的腳本。
# 後者是在腳本已經發出寫命令、而用戶又不想等待腳本自然結束時,關閉服務器的唯一辦法。
#
# Set it to 0 or a negative value for unlimited execution without warnings.
# 將其設置為0或負值,以無限制地執行,而不需要警告。
lua-time-limit 5000
################################ REDIS CLUSTER 集群配置 ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# 實驗性警告:Redis 集群被認為是穩定的代碼,但要把它標記為“成熟”,我們還需要等待相當比例的用戶在生產環境中部署它。
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
# 普通的 Redis 實例不能成為 Redis 集群的一部分;只有以集群節點方式啟動的節點才可以。要讓一個 Redis 實例作為集群節點啟動,請取消註釋下面這一行以啟用集群支持:
#
# cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
# 每個集群節點都有一個集群配置文件。
# 這個文件不打算手工編輯。
# 它是由Redis節點創建和更新的。
# 每個Redis集群節點都需要一個不同的集群配置文件。
# 確保在同一個系統中運行的實例沒有重疊的集群配置文件名。
#
# cluster-config-file nodes-6379.conf
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
# 集群節點超時是指節點持續不可達多少毫秒後,才會被認定處於故障狀態。
# 其他大多數內部時間限制都是節點超時的倍數。
#
# cluster-node-timeout 15000
# A slave of a failing master will avoid to start a failover if its data
# looks too old.
# 如果自身的數據看起來太舊,故障主服務的從服務會避免發起故障轉移。
#
# There is no simple way for a slave to actually have an exact measure of
# its "data age", so the following two checks are performed:
# 對于一個從服務來說,沒有一種簡單的方法可以精確地測量它的“數據年齡”,因此執行以下兩項檢查:
#
# 1) If there are multiple slaves able to failover, they exchange messages
# in order to try to give an advantage to the slave with the best
# replication offset (more data from the master processed).
# Slaves will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
# 如果有多個從服務能夠進行故障轉移,他們交換消息,以試圖通過最佳復制偏移(更多來自主處理的數據)給從服務帶來優勢。
# 從服務將試圖通過偏移來獲得他們的等級,并應用到故障轉移的開始,這是與他們的等級成比例的延遲。
#
# 2) Every single slave computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the slave will not try to failover
# at all.
# 每一個從服務計算最后一次與主服務的互動的時間。
# 這可以是最后一次ping或命令(如果主仍然處于“連接的”狀態),或者是與主斷開連接后的時間(如果復制鏈接當前正在關閉)。
# 如果最后一次交互太舊,那么從服務將不會嘗試進行故障轉移。
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
# 第 2 點可以由用戶調優。具體來說,如果自最後一次與主服務交互以來經過的時間超過下面的值,從服務就不會執行故障轉移:
#
# (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
# 例如,如果 node-timeout 是 30 秒,slave-validity-factor 是 10,并假設 repl-ping-slave-period 為默認的 10 秒,那麼如果從服務超過 310 秒無法與主服務通信,它就不會嘗試故障轉移。
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
# 較大的 slave-validity-factor 可能允許數據過舊的從服務去接管主服務,而太小的值可能會讓集群根本選不出可用的從服務。
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
# 為了獲得最大的可用性,可以把 slave-validity-factor 設置為 0,這意味著無論從服務最後一次與主服務交互是什麼時候,它們都會嘗試對主服務進行故障轉移。
#(不過它們仍然會按照自身偏移量排名施加一個成比例的延遲)。
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
# 零是唯一能夠保證當所有分區恢復時,集群將始終能夠繼續的值。
#
# cluster-slave-validity-factor 10
# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
# 集群中的從服務能夠遷移到“孤立”的主服務那裡,即沒有可用從服務的主服務。
# 這提高了集群抵禦故障的能力,否則一個沒有可用從服務的孤立主服務一旦發生故障,就無法進行故障轉移。
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
# 只有當從服務的原主服務仍然擁有至少給定數量的其他可用從服務時,從服務才會遷移到孤立的主服務那裡。
# 這個數量就是“遷移屏障”(migration barrier)。
# 遷移屏障為 1 表示只有當原主服務還剩至少 1 個其他可用從服務時,該從服務才會遷移,依此類推。
# 它通常反映了您希望集群中每個主服務擁有的從服務數量。
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
# 默認值是 1(只有當原主服務至少還保有一個從服務時,從服務才會遷移)。
# 要禁用遷移,只需把它設置為一個非常大的值。
# 也可以設置為 0,但這只在調試時有用,在生產環境中很危險。
#
# cluster-migration-barrier 1
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
# 默認情況下,如果 Redis 集群節點檢測到至少有一個哈希槽未被覆蓋(沒有可用節點為其提供服務),就會停止接受查詢。
# 這樣,如果集群部分不可用(例如有一段哈希槽不再被覆蓋),整個集群最終都會變得不可用。
# 一旦所有槽再次被覆蓋,集群會自動恢復可用。
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
# 然而,有時您希望集群中仍在工作的那部分節點,繼續接受針對仍被覆蓋的那部分鍵空間的查詢。
# 為此,只需把 cluster-require-full-coverage 選項設置為 no。
#
# cluster-require-full-coverage yes
# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.
# 要搭建您的集群,請務必閱讀 http://redis.io 網站上提供的文檔。
########################## CLUSTER DOCKER/NAT support 集群DOCKER/NAT支持 ########################
# In certain deployments, Redis Cluster nodes address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
# 在某些部署環境中,Redis 集群節點的地址發現會失敗,因為地址經過了 NAT 轉換,或者端口被轉發(典型的例子是 Docker 和其他容器)。
#
# In order to make Redis Cluster working in such environments, a static
# configuration where each node knows its public address is needed. The
# following two options are used for this scope, and are:
# 為了使Redis集群在這樣的環境中工作,需要一個靜態配置,其中每個節點都知道它的公共地址。下面兩個選項用于這個范圍,并且是:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
#
# Each instruct the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
# 這三個選項分別告知節點它的地址、客戶端端口和集群消息總線端口。
# 這些信息隨後會發布在總線數據包的頭部,這樣其他節點就能正確映射出發布該信息的節點的地址。
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
# 如果不使用上述選項,則將使用常規的Redis集群自動檢測。
#
# Note that when remapped, the bus port may not be at the fixed offset of
# clients port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usually.
# 注意,經過重新映射後,總線端口可能不再位於客戶端端口 +10000 的固定偏移位置,因此您可以根據映射結果指定任意端口和總線端口。
# 如果沒有設置 bus-port,則照常使用 10000 的固定偏移量。
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-bus-port 6380
################################## SLOW LOG 慢日志 ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
# Redis 慢日誌是一個用於記錄執行時間超過指定閾值的查詢的系統。
# 這裡的執行時間不包括與客戶端通信、發送回復等 I/O 操作,而只是實際執行命令所需的時間(這是命令執行中唯一會阻塞線程、無法同時處理其他請求的階段)。
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# 您可以用兩個參數來配置慢日誌:一個告訴 Redis 執行時間超過多少微秒的命令才會被記錄,另一個是慢日誌的長度。當記錄一條新命令時,最舊的一條會從已記錄命令的隊列中移除。
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
# 下面的時間以微秒表示,所以1000000等于一秒。
# 注意,一個負數禁用慢速日志,而值為0則會強制每個命令的日志記錄。
slowlog-log-slower-than 10000
# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
# 這個長度沒有限制,只需注意它會消耗內存。
# 您可以使用 SLOWLOG RESET 回收慢日誌佔用的內存。
slowlog-max-len 128
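#
# 查看與清理慢日誌的常用命令(在 redis-cli 中執行,以下僅作示意):
#   SLOWLOG GET 10    獲取最近 10 條慢查詢記錄
#   SLOWLOG LEN       查看當前慢日誌條數
#   SLOWLOG RESET     清空慢日誌并回收其佔用的內存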
################################ LATENCY MONITOR 延遲監控 ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
# Redis延遲監視子系統在運行時對不同的操作進行取樣,以便收集與Redis實例可能的延遲源相關的數據。
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
# 用戶可以通過 LATENCY 命令獲取這些信息,用來打印圖表、生成報告。
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
# 該系統只記錄在一個時間內執行的操作,該操作的時間等于或大于通過延遲監控閾值配置指令指定的毫秒數。
# 當它的值被設置為0時,延遲監視器將被關閉。
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
# 默認情況下延遲監控是禁用的,因為如果沒有延遲問題,大多數情況下並不需要它;而且收集數據會帶來一定的性能影響,雖然非常小,但在高負載下是可以測量出來的。
# 如果需要,可以在運行時通過命令 "CONFIG SET latency-monitor-threshold <milliseconds>" 輕鬆啟用延遲監控。
latency-monitor-threshold 0
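#
# 下面是一個補充示例(非原始配置文件內容),展示如何在運行時啟用延遲監控並查看報告;閾值 100 毫秒僅為示例值。
# A supplementary sketch (not part of the original file) showing how latency monitoring might be
# enabled and inspected at runtime; the 100 ms threshold is just an example value.
#
# redis-cli CONFIG SET latency-monitor-threshold 100  # record events taking >= 100 ms
# redis-cli LATENCY LATEST                            # latest spike per event type
# redis-cli LATENCY HISTORY command                   # samples for the "command" event
# redis-cli LATENCY RESET                             # clear the collected samples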
############################# EVENT NOTIFICATION 事件通知 ##############################
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
# Redis 可以將鍵空間(key space)中發生的事件通知給 Pub/Sub 客戶端。
# 該特性的文檔見 http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
# 例如,如果啟用了keyspace事件通知,并且客戶端對存儲在數據庫0中的鍵“foo”執行DEL操作,那么兩個消息將通過發布/訂閱發布:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
# 可以選擇Redis在一組類中通知的事件。每個類都由一個字符來標識:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# 鍵空間事件,用__keyspace@<db>__前綴發布。
# E Keyevent events, published with __keyevent@<db>__ prefix.
# 鍵事件(Keyevent)事件,用__keyevent@<db>__前綴發布。
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# 通用命令(與類型無關),如 DEL、EXPIRE、RENAME 等
# $ String commands
# 字符串的命令
# l List commands
# 列表命令
# s Set commands
# 集合(Set)命令
# h Hash commands
# 散列的命令
# z Sorted set commands
# 有序集合(Sorted set)命令
# x Expired events (events generated every time a key expires)
# 過期事件(每次有鍵過期時生成的事件)
# e Evicted events (events generated when a key is evicted for maxmemory)
# 驅逐事件(當某個鍵因 maxmemory 被驅逐時生成的事件)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
# g$lshzxe的別名,因此“AKE”字符串意味著所有的事件。
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
# “notify-keyspace-events”以一個由零個或多個字符組成的字符串作為參數。空字符串表示通知是禁用的。
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
# 示例:要啟用列表(list)和通用(generic)事件(以事件名的角度),使用:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
# 示例 2:要通過訂閱頻道 __keyevent@0__:expired 獲取過期鍵的事件流,使用:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
# 默認情況下所有通知都是禁用的,因為大多數用戶不需要這個特性,而且它有一定的開銷。
# 注意,如果 K 和 E 一個都不指定,則不會發送任何事件。
notify-keyspace-events ""
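#
# 下面是一個補充示例(非原始配置文件內容),展示如何在運行時開啟過期事件通知並訂閱;其中的鍵名 mykey 僅為示例。
# A supplementary sketch (not part of the original file) showing how expired-key notifications
# might be enabled and consumed at runtime; the key name mykey is illustrative.
#
# redis-cli CONFIG SET notify-keyspace-events Ex      # enable keyevent notifications for expirations
# redis-cli SUBSCRIBE __keyevent@0__:expired          # in one terminal: listen for expired keys
# redis-cli SET mykey hello EX 1                      # in another terminal: key expires after 1 second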
############################### ADVANCED CONFIG 高級配置 ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
# 當哈希(hash)只包含少量條目、且最大條目不超過給定閾值時,會使用一種節省內存的數據結構進行編碼。這些閾值可以通過以下指令配置。
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Lists are also encoded in a special way to save a lot of space.
# 列表也以一種特殊的方式進行編碼,以節省大量的空間。
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# 每個內部列表節點允許的條目數可以指定為固定的最大大小或最大數量的元素。
# For a fixed maximum size, use -5 through -1, meaning:
# 對于一個固定的最大尺寸,使用-5到-1,意思是:
# -5: max size: 64 Kb <-- not recommended for normal workloads
# 不建議正常工作負載
# -4: max size: 32 Kb <-- not recommended
# 不推薦
# -3: max size: 16 Kb <-- probably not recommended
# 可能不推薦
# -2: max size: 8 Kb <-- good
# 好
# -1: max size: 4 Kb <-- good
# 好
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# 正數表示每個列表節點最多恰好存儲該數量的元素。
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
# 性能最高的選項通常是 -2(8 Kb)或 -1(4 Kb),但如果您的使用場景比較特殊,可按需調整這些設置。
list-max-ziplist-size -2
# Lists may also be compressed.
# 列表也可能被壓縮。
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 壓縮深度是指從列表*兩端*算起、*不參與*壓縮的 quicklist ziplist 節點數量。為了保證快速的 push/pop 操作,列表的頭和尾永遠不會被壓縮。可用的設置如下:
# 0: disable all list compression
# 禁用所有列表壓縮
# 1: depth 1 means "don't start compressing until after 1 node into the list,
# going from either the head or tail"
# 深度 1 表示「從頭部或尾部算起,跳過 1 個節點之後才開始壓縮」
# So: [head]->node->node->...->node->[tail]
# [head], [tail] will always be uncompressed; inner nodes will compress.
# [head] 和 [tail] 永遠不會被壓縮;內部節點會被壓縮。
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
# 2 here means: don't compress head or head->next or tail->prev or tail,
# but compress all nodes between them.
# 這裡的 2 表示:不壓縮 head、head->next、tail->prev 和 tail,只壓縮它們之間的所有節點。
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# 集合(set)只在一種情況下使用特殊編碼:當集合僅由恰好是 10 進制整數、且都在 64 位有符號整數範圍內的字符串組成時。
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
# 下面的配置項設置了集合大小的上限,只有不超過該上限才會使用這種節省內存的特殊編碼。
set-max-intset-entries 512
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
# 與哈希和列表類似,有序集合(sorted set)也會進行特殊編碼以節省大量空間。
# 只有當有序集合的長度和元素大小低於以下限制時才使用這種編碼:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
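#
# 下面是一個補充示例(非原始配置文件內容),展示如何用 OBJECT ENCODING 觀察上述閾值對內部編碼的影響;
# 鍵名 myhash、myset、myzset 僅為示例。
# A supplementary sketch (not part of the original file) showing how OBJECT ENCODING can be used to
# observe the effect of the thresholds above; the key names myhash/myset/myzset are illustrative.
#
# redis-cli HSET myhash field1 value1
# redis-cli OBJECT ENCODING myhash        # small hash -> "ziplist"
# redis-cli SADD myset 1 2 3
# redis-cli OBJECT ENCODING myset         # small all-integer set -> "intset"
# redis-cli ZADD myzset 1 a
# redis-cli OBJECT ENCODING myzset        # small sorted set -> "ziplist"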
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
# HyperLogLog 稀疏表示的字節數上限。
# 該限制包含 16 字節的頭部。
# 當使用稀疏表示的 HyperLogLog 超過這個上限時,就會被轉換成密集表示。
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
# 大於 16000 的值是完全沒有意義的,因為到那個大小時密集表示反而更節省內存。
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
# 建議值約為 3000,這樣既能獲得節省空間編碼的好處,又不會讓 PFADD(在稀疏編碼下為 O(N))變得太慢。
# 當不在意 CPU 而在意空間,並且數據集由許多基數在 0 - 15000 範圍內的 HyperLogLog 組成時,該值可以提高到約 10000。
hll-sparse-max-bytes 3000
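#
# 下面是一個補充示例(非原始配置文件內容),展示 HyperLogLog 的基本用法以及在運行時調整該上限;鍵名 hll 僅為示例。
# A supplementary sketch (not part of the original file) showing basic HyperLogLog usage and a
# runtime adjustment of this limit; the key name hll is illustrative.
#
# redis-cli PFADD hll a b c d
# redis-cli PFCOUNT hll                                # approximate cardinality -> 4
# redis-cli CONFIG SET hll-sparse-max-bytes 3000       # can also be changed without a restart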
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
# 主動 rehash 每 100 毫秒的 CPU 時間中使用 1 毫秒,用來幫助對 Redis 的主哈希表(把頂層鍵映射到值的那張表)進行 rehash。
# Redis 使用的哈希表實現(見 dict.c)採用惰性 rehash:對正在 rehash 的哈希表執行的操作越多,執行的 rehash「步驟」就越多;所以如果服務器處於空閑狀態,rehash 永遠不會完成,哈希表也會占用更多內存。
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
# 默認情況下,每秒使用 10 次、每次 1 毫秒來主動對主字典進行 rehash,以便儘可能釋放內存。
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
# 如果不確定:
# 如果您有嚴格的延遲要求,並且無法接受 Redis 偶爾以 2 毫秒的延遲回應查詢,那就使用 "activerehashing no"。
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
# 如果沒有這樣的硬性要求,但希望儘可能快地釋放內存,就使用 "activerehashing yes"。
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
# 客戶端輸出緩衝區限制可以用來強制斷開那些由於某種原因無法足夠快地從服務器讀取數據的客戶端(一個常見的原因是,Pub/Sub 客戶端消費消息的速度跟不上發布者產生消息的速度)。
#
# The limit can be set differently for the three different classes of clients:
# 對于這三種不同類型的客戶,可以設置不同的限制:
#
# normal -> normal clients including MONITOR clients
# 正常的客戶端包括監控客戶端
# slave -> slave clients
# 從節點(slave)客戶端
# pubsub -> clients subscribed to at least one pubsub channel or pattern
# 客戶至少訂閱了一個pubsub通道或模式
#
# The syntax of every client-output-buffer-limit directive is the following:
# 每個客戶-輸出-緩沖區-限制指令的語法如下:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# 一旦達到硬限制,客戶端會被立即斷開;或者在達到軟限制並連續保持達到指定秒數後,也會被斷開。
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
# 例如,如果硬限制是 32 MB、軟限制是 16 MB / 10 秒:當輸出緩衝區的大小達到 32 MB 時,客戶端會被立即斷開;如果達到 16 MB 並連續超過該限制 10 秒,客戶端同樣會被斷開。
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
# 默認情況下普通客戶端不受限制,因為它們不會在沒有請求的情況下(以推送方式)接收數據,而是只在發出請求之後才接收;因此只有異步客戶端才可能出現請求數據的速度快於讀取速度的情況。
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
# 相反,pubsub 和 slave 客戶端有默認限制,因為訂閱者和從節點是以推送方式接收數據的。
#
# Both the hard or the soft limit can be disabled by setting them to zero.
# 硬的或軟的限制都可以通過將它們設置為零來禁用。
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
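#
# 下面是一個補充示例(非原始配置文件內容),展示如何在運行時調整該限制並觀察客戶端的輸出緩衝區;
# 其中 512mb/128mb/120 僅為示例數值。
# A supplementary sketch (not part of the original file) showing a runtime adjustment of this limit
# and how to observe client output buffers; the 512mb/128mb/120 values are illustrative.
#
# redis-cli CONFIG SET client-output-buffer-limit "slave 512mb 128mb 120"
# redis-cli CLIENT LIST           # the obl/oll/omem fields report each client's output buffer usage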
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
# Redis 會調用一個內部函數來執行許多後台任務,比如關閉超時的客戶端連接、清除從未被再次請求的過期鍵,等等。
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
# 並非所有任務都以相同的頻率執行,Redis 會按照指定的 "hz" 值來檢查有哪些任務需要執行。
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
# 默認情況下,“hz”設置為10。
# 當Redis空閑時,提高值將會使用更多的CPU,但同時,當有許多鍵同時到期時,Redis會更有響應性,并且可以更精確地處理超時。
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
# 范圍在1到500之間,但是超過100的值通常不是一個好主意。
# 大多數用戶應該使用默認的10,并且只在需要非常低的延遲的環境中將其提高到100。
hz 10
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
# 當子進程重寫 AOF 文件時,如果啟用了以下選項,則每生成 32 MB 的數據就會對文件執行一次 fsync。這有助於更加增量地把文件寫入磁盤,避免出現大的延遲尖峰。
aof-rewrite-incremental-fsync yes
# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performances and how the keys LFU change over time, which
# is possible to inspect via the OBJECT FREQ command.
# Redis 的 LFU 驅逐(參見 maxmemory 設置)是可以調優的。不過,最好先從默認設置開始,只有在研究過如何改進性能、以及鍵的 LFU 值如何隨時間變化(可通過 OBJECT FREQ 命令查看)之後再去修改它們。
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
# 在Redis LFU實現中有兩個可調參數:計數器對數因子和計數器衰減時間。
# 在改變它們之前,理解這兩個參數意味著什么是很重要的。
#
# The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
# LFU 計數器每個鍵只有 8 位,最大值是 255,因此 Redis 使用一種具有對數特性的概率遞增方式。
# 給定舊計數器的值,當一個鍵被訪問時,計數器按以下方式遞增:
#
# 1. A random number R between 0 and 1 is extracted.
# 在0和1之間的隨機數字R被提取出來。
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 一個概率P被計算為1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
# 計數器只有在R<p時才會增加。
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
# 默認的 lfu-log-factor 是 10。
# 下表展示了在不同對數因子下,頻率計數器隨訪問次數變化的情況:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
# +--------+------------+------------+------------+------------+------------+
# | 0 | 104 | 255 | 255 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 1 | 18 | 49 | 255 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 10 | 10 | 18 | 142 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 100 | 8 | 11 | 49 | 143 | 255 |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
# 注意:上面的表是通過運行以下命令獲得的:
#
# redis-benchmark -n 1000000 incr foo
# redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
# 注2:計數器的初始值為5,以便給新對象一個累積命中的機會。
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if it has a value
# less <= 10).
# 計數器衰減時間是指鍵計數器被除以 2(如果值 <= 10 則改為減 1)所必須經過的時間,以分鐘為單位。
#
# The default value for the lfu-decay-time is 1. A Special value of 0 means to
# decay the counter every time it happens to be scanned.
# lfu-decay-time 的默認值是 1。特殊值 0 表示計數器每次被掃描到時都會衰減。
#
# lfu-log-factor 10
# lfu-decay-time 1
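#
# 下面是一個補充示例(非原始配置文件內容),展示如何在運行時啟用 LFU 策略並查看某個鍵的訪問頻率計數;
# 其中 maxmemory 的 100mb 與鍵名 foo 僅為示例。
# A supplementary sketch (not part of the original file) showing how an LFU policy might be enabled
# at runtime and a key's frequency counter inspected; the 100mb limit and the key foo are illustrative.
#
# redis-cli CONFIG SET maxmemory 100mb
# redis-cli CONFIG SET maxmemory-policy allkeys-lfu
# redis-cli SET foo bar
# redis-cli OBJECT FREQ foo          # returns the LFU counter (only valid under an LFU policy)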
########################### ACTIVE DEFRAGMENTATION 活躍的碎片整理 #######################
#
# WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
# even in production and manually tested by multiple engineers for some
# time.
# 警告:這個特性是實驗性的。
# 不過它即使在生產環境中也經過了壓力測試,並由多名工程師進行了一段時間的手工測試。
#
# What is active defragmentation?
# 活躍的碎片整理是什么?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing to reclaim back memory.
# 主動(在線)碎片整理允許 Redis 服務器整理內存中小塊分配與釋放之間留下的空隙,從而回收內存。
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in an "hot" way, while the server is running.
# 碎片化是每種分配器(幸運的是 Jemalloc 的情況要好一些)在某些工作負載下都會發生的自然過程。
# 通常需要重啟服務器來降低碎片率,或者至少清空所有數據再重新寫入。
# 不過,得益於 Oran Agra 為 Redis 4.0 實現的這一特性,這個過程可以在服務器運行的同時以「熱」方式進行。
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys
# will cause the fragmentation to drop back to normal values.
# 基本上,當碎片率超過一定水平(見下面的配置選項)時,Redis 會利用特定的 Jemalloc 特性(以判斷某次分配是否造成了碎片,並把它重新分配到更合適的位置),在連續的內存區域中創建值的新副本,同時釋放數據的舊副本。
# 對所有的鍵逐步重複這一過程,就能讓碎片率回落到正常值。
#
# Important things to understand:
# 重要的事情要明白:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
# to use the copy of Jemalloc we ship with the source code of Redis.
# This is the default with Linux builds.
# 這個特性默認是禁用的,而且只有在編譯 Redis 時使用了隨 Redis 源碼一起發布的那份 Jemalloc 時才能工作。
# 這在 Linux 構建中是默認情況。
#
# 2. You never need to enable this feature if you don't have fragmentation
# issues.
# 如果沒有碎片問題,您永遠不需要啟用這個特性。
#
# 3. Once you experience fragmentation, you can enable this feature when
# needed with the command "CONFIG SET activedefrag yes".
# 一旦出現碎片問題,您可以在需要時通過命令 "CONFIG SET activedefrag yes" 啟用這個特性。
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.
# 以下配置參數可以精細調整碎片整理過程的行為。
# 如果您不確定它們的含義,保留默認值是個好主意。
# Enabled active defragmentation
# 啟用主動碎片整理
# activedefrag yes
# Minimum amount of fragmentation waste to start active defrag
# 啟動主動碎片整理所需的最小碎片浪費量
# active-defrag-ignore-bytes 100mb
# Minimum percentage of fragmentation to start active defrag
# 啟動主動碎片整理所需的最小碎片率百分比
# active-defrag-threshold-lower 10
# Maximum percentage of fragmentation at which we use maximum effort
# 達到該最大碎片率百分比時,使用最大力度進行整理
# active-defrag-threshold-upper 100
# Minimal effort for defrag in CPU percentage
# 碎片整理所使用的最小 CPU 百分比
# active-defrag-cycle-min 25
# Maximal effort for defrag in CPU percentage
# 碎片整理所使用的最大 CPU 百分比
# active-defrag-cycle-max 75
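#
# 下面是一個補充示例(非原始配置文件內容),展示如何在運行時開啟主動碎片整理並觀察碎片率;閾值數值僅沿用上面的默認示例。
# A supplementary sketch (not part of the original file) showing how active defragmentation might be
# enabled at runtime and the fragmentation ratio observed; the thresholds reuse the defaults above.
#
# redis-cli CONFIG SET activedefrag yes               # requires a build using the bundled Jemalloc
# redis-cli CONFIG SET active-defrag-threshold-lower 10
# redis-cli INFO memory                               # watch mem_fragmentation_ratio before and after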
~~~