# Sidekiq Style Guide
> 原文:[https://docs.gitlab.com/ee/development/sidekiq_style_guide.html](https://docs.gitlab.com/ee/development/sidekiq_style_guide.html)
* [ApplicationWorker](#applicationworker)
* [Dedicated Queues](#dedicated-queues)
* [Queue Namespaces](#queue-namespaces)
* [Idempotent Jobs](#idempotent-jobs)
* [Ensuring a worker is idempotent](#ensuring-a-worker-is-idempotent)
* [Declaring a worker as idempotent](#declaring-a-worker-as-idempotent)
* [Deduplication](#deduplication)
* [Job urgency](#job-urgency)
* [Latency sensitive jobs](#latency-sensitive-jobs)
* [Changing a queue’s urgency](#changing-a-queues-urgency)
* [Jobs with External Dependencies](#jobs-with-external-dependencies)
* [CPU-bound and Memory-bound Workers](#cpu-bound-and-memory-bound-workers)
* [Declaring a Job as CPU-bound](#declaring-a-job-as-cpu-bound)
* [Determining whether a worker is CPU-bound](#determining-whether-a-worker-is-cpu-bound)
* [Feature category](#feature-category)
* [Job weights](#job-weights)
* [Worker context](#worker-context)
* [Cron workers](#cron-workers)
* [Jobs scheduled in bulk](#jobs-scheduled-in-bulk)
* [Arguments logging](#arguments-logging)
* [Tests](#tests)
* [Sidekiq Compatibility across Updates](#sidekiq-compatibility-across-updates)
* [Changing the arguments for a worker](#changing-the-arguments-for-a-worker)
* [Remove an argument](#remove-an-argument)
* [Add an argument](#add-an-argument)
* [Multi-step deployment](#multi-step-deployment)
* [Parameter hash](#parameter-hash)
* [Removing workers](#removing-workers)
* [Renaming queues](#renaming-queues)
This document outlines various guidelines that should be followed when adding or modifying Sidekiq workers.
## ApplicationWorker
All workers should include `ApplicationWorker` instead of `Sidekiq::Worker`, which adds some convenience methods and automatically sets the queue based on the worker’s name.
## Dedicated Queues
All workers should use their own queue, which is automatically set based on the worker class name. For a worker named `ProcessSomethingWorker`, the queue name will be `process_something`. If you're not sure what queue a worker uses, you can find it using `SomeWorker.queue`. There is almost never a reason to manually override the queue name using `sidekiq_options queue: :some_queue`.
After adding a new queue, run `bin/rake gitlab:sidekiq:all_queues_yml:generate` to regenerate `app/workers/all_queues.yml` or `ee/app/workers/all_queues.yml` so that it can be picked up by [`sidekiq-cluster`](../administration/operations/extra_sidekiq_processes.html).
## Queue Namespaces
While different workers cannot share a queue, they can share a queue namespace.
Defining a queue namespace for a worker makes it possible to start a Sidekiq process that automatically handles jobs for all workers in that namespace, without needing to explicitly list all their queue names. If, for example, all workers managed by `sidekiq-cron` use the `cronjob` queue namespace, we can spin up a Sidekiq process specifically for these kinds of scheduled jobs. If a new worker using the `cronjob` namespace is added later on, the Sidekiq process will automatically pick up jobs for that worker too (after having been restarted), without the need for any configuration changes.
A queue namespace can be set using the `queue_namespace` DSL class method:
```
class SomeScheduledTaskWorker
  include ApplicationWorker

  queue_namespace :cronjob

  # ...
end
```
Behind the scenes, this sets `SomeScheduledTaskWorker.queue` to `cronjob:some_scheduled_task`. Commonly used namespaces have their own concern module that can easily be included in the worker class, and that may set other Sidekiq options besides the queue namespace. `CronjobQueue`, for example, sets the namespace but also disables retries.
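As a rough illustration, such a concern might look like the following minimal sketch (the module name and options here are hypothetical; the real `CronjobQueue` concern in the codebase also sets other options, such as disabling retries):
```
# Hypothetical concern illustrating a shared queue namespace.
# Workers are expected to `include ApplicationWorker` first.
module SomeNamespaceQueue
  extend ActiveSupport::Concern

  included do
    queue_namespace :some_namespace

    # Other Sidekiq options can be set here as well,
    # for example disabling retries as CronjobQueue does.
    sidekiq_options retry: false
  end
end
```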
`bundle exec sidekiq` is namespace-aware and will automatically listen on all queues in a namespace (technically: all queues prefixed with the namespace name) when a namespace is provided instead of a simple queue name in the `--queue` (`-q`) option, or in the `:queues:` section of `config/sidekiq_queues.yml`.
Note that adding a worker to an existing namespace should be done with care, as the extra jobs will take resources away from the workers that were already there if the resources available to the Sidekiq process handling the namespace are not adjusted appropriately.
## Idempotent Jobs
It's known that a job can fail for multiple reasons. For example, network outages or bugs. In order to address this, Sidekiq has a built-in retry mechanism that is used by default by most workers within GitLab.
It's expected that a job can run again after a failure without major side-effects for the application or users, which is why Sidekiq encourages jobs to be [idempotent and transactional](https://github.com/mperham/sidekiq/wiki/Best-Practices#2-make-your-job-idempotent-and-transactional).
As a general rule, a worker can be considered idempotent if:
* It can safely run multiple times with the same arguments.
* Application side-effects are expected to happen only once (or the side-effects of a second run do not have an effect).
A good example of that would be a cache expiration worker.
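As an illustration, a minimal (hypothetical) cache expiration worker might look like this; expiring the same cache entry a second time has no additional side-effects:
```
# Hypothetical worker: running it twice with the same arguments
# only expires the same cache entry again.
class ExpireProjectCacheWorker
  include ApplicationWorker

  idempotent! # see "Declaring a worker as idempotent" below

  def perform(project_id)
    project = Project.find_by_id(project_id)
    return unless project

    Rails.cache.delete(['projects', project.id, 'summary'])
  end
end
```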
**Note:** A job scheduled for an idempotent worker is automatically [deduplicated](#deduplication) when an unstarted job with the same arguments is already in the queue.
### Ensuring a worker is idempotent
Make sure the worker tests pass using the following shared example:
```
include_examples 'an idempotent worker' do
  it 'marks the MR as merged' do
    # Using subject inside this block will process the job multiple times
    subject

    expect(merge_request.state).to eq('merged')
  end
end
```
Use the `perform_multiple` method directly instead of `job.perform` (this helper method is automatically included for workers).
### Declaring a worker as idempotent
```
class IdempotentWorker
  include ApplicationWorker

  # Declares a worker is idempotent and can
  # safely run multiple times.
  idempotent!

  # ...
end
```
It's encouraged to only have the `idempotent!` call in the top-most worker class, even if the `perform` method is defined in another class or module.
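For example, when `perform` is pulled in from a module or parent class (names below are hypothetical), the declaration still belongs in the concrete worker class:
```
# Hypothetical module providing the actual perform implementation.
module SomeProcessing
  def perform(record_id)
    # ...
  end
end

class TopLevelWorker
  include ApplicationWorker
  include SomeProcessing

  # Declared in the top-most worker class, even though
  # `perform` is defined in the included module.
  idempotent!
end
```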
**Note:** If the worker class isn't marked as idempotent, a cop will fail. Consider skipping the cop if you're not confident your job can safely run multiple times.
### Deduplication
When a job for an idempotent worker is enqueued while another unstarted job with the same arguments is already in the queue, GitLab drops the second job. The work is skipped because the same work would be done by the job that was scheduled first; by the time the second job executed, the first job would do nothing.
For example, `AuthorizedProjectsWorker` takes a user ID. When the worker runs, it recalculates a user's authorizations. GitLab schedules this job each time an action potentially changes a user's authorizations. If the same user is added to two projects at the same time, the second job can be skipped if the first job hasn't begun, because when the first job runs, it creates the authorizations for both projects.
GitLab doesn't skip jobs scheduled in the future, as we assume that the state will have changed by the time the job is scheduled to execute.
More [deduplication strategies have been suggested](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/195). If you are implementing a worker that could benefit from a different strategy, please comment in the issue.
If the automatic deduplication were to cause issues in certain queues, it can be temporarily disabled by enabling a feature flag named `disable_<queue name>_deduplication`. For example, to disable deduplication for the `AuthorizedProjectsWorker`, we would enable the feature flag `disable_authorized_projects_deduplication`.
From ChatOps:
```
/chatops run feature set disable_authorized_projects_deduplication true
```
From the Rails console:
```
Feature.enable!(:disable_authorized_projects_deduplication)
```
## Job urgency
Jobs can have an `urgency` attribute set, which can be `:high`, `:low`, or `:throttled`. These have the below targets:
| **Urgency** | **Queue Scheduling Target** | **Execution Latency Requirement** |
| --- | --- | --- |
| `:high` | 10 seconds | p50 of 1 second, p99 of 10 seconds |
| `:low` | 1 minute | Maximum run time of 5 minutes |
| `:throttled` | None | Maximum run time of 5 minutes |
To set a job's urgency, use the `urgency` class method:
```
class HighUrgencyWorker
  include ApplicationWorker

  urgency :high

  # ...
end
```
### Latency sensitive jobs
If a large number of background jobs get scheduled at once, queueing of jobs may occur while jobs wait for a worker node to become available. This is normal and gives the system resilience by allowing it to gracefully handle spikes in traffic. Some jobs, however, are more sensitive to latency than others. Examples of these jobs include:
1. A job which updates a merge request following a push to a branch.
2. A job which invalidates a cache of known branches for a project following a push to the branch.
3. A job which recalculates the groups and projects a user can see following a change in permissions.
4. A job which updates the status of a CI pipeline following a state change to a job in the pipeline.
When these jobs are delayed, the user may perceive the delay as a bug: for example, they may push a branch and then attempt to create a merge request for that branch, but be told in the UI that the branch does not exist. We deem these jobs to be `urgency :high`.
Extra effort is made to ensure that these jobs are started within a very short period of time after being scheduled. However, in order to ensure throughput, these jobs also have very strict execution duration requirements:
1. The median job execution time should be less than 1 second.
2. 99% of jobs should complete within 10 seconds.
If a worker cannot meet these expectations, then it cannot be treated as an `urgency :high` worker: consider redesigning the worker, or splitting the work between two different workers, one with `urgency :high` code that executes quickly, and the other with `urgency :low`, which has no execution latency requirements (but also lower scheduling targets).
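A rough sketch of that kind of split, with hypothetical worker names, might look like this:
```
# Hypothetical fast worker handling only the latency-sensitive part.
class QuickUpdateWorker
  include ApplicationWorker

  urgency :high

  def perform(record_id)
    # ... quick, user-visible update ...

    # Hand the heavier work off to a low urgency worker.
    SlowFollowUpWorker.perform_async(record_id)
  end
end

# Hypothetical worker doing the slower part, with no execution
# latency requirement (but a lower scheduling target).
class SlowFollowUpWorker
  include ApplicationWorker

  urgency :low

  def perform(record_id)
    # ... heavier work ...
  end
end
```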
### Changing a queue's urgency
On GitLab.com, we run Sidekiq in several [shards](https://dashboards.gitlab.net/d/sidekiq-shard-detail/sidekiq-shard-detail), each of which represents a particular type of workload.
When changing a queue's urgency, or when adding a new queue, we need to take into account the expected workload on the new shard. Note that, if we're changing an existing queue, there is also an effect on the old shard, but that will always be a reduction in work.
To do this, we want to calculate the expected increase in total execution time and RPS (throughput) for the new shard. We can get these values from:
* The [Queue Detail dashboard](https://dashboards.gitlab.net/d/sidekiq-queue-detail/sidekiq-queue-detail) has values for the queue itself. For a new queue, we can look for queues that have similar patterns or are scheduled under similar circumstances.
* The [Shard Detail dashboard](https://dashboards.gitlab.net/d/sidekiq-shard-detail/sidekiq-shard-detail) has the total execution time and throughput (RPS). The Shard Utilization panel will show if there is currently any excess capacity for this shard.
We can then calculate the RPS * average runtime (estimated for new jobs) for the queue we're changing, to see the relative increase in RPS and execution time we expect for the new shard:
```
new_queue_consumption = queue_rps * queue_duration_avg
shard_consumption = shard_rps * shard_duration_avg
(new_queue_consumption / shard_consumption) * 100
```
If we expect an increase of **less than 5%**, then no further action is needed.
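As a worked example with made-up numbers: suppose the queue being moved runs 3 jobs per second with an average duration of 0.5 seconds, and the target shard handles 300 jobs per second with an average duration of 0.2 seconds:
```
new_queue_consumption = 3 * 0.5    # => 1.5
shard_consumption     = 300 * 0.2  # => 60.0

(new_queue_consumption / shard_consumption) * 100  # => 2.5, i.e. a 2.5% increase
```
Since 2.5% is below the 5% threshold, no further action would be needed in this case.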
Otherwise, please ping `@gitlab-org/scalability` on the merge request and ask for a review.
## Jobs with External Dependencies
Most background jobs in the GitLab application communicate with other GitLab services. For example, PostgreSQL, Redis, Gitaly, and object storage. These are considered to be "internal" dependencies for a job.
However, some jobs will be dependent on external services in order to complete successfully. Some examples include:
1. Jobs which call user-configured web hooks.
2. Jobs which deploy an application to a user-provided Kubernetes cluster.
These jobs have "external dependencies". This is important for the operation of the background processing cluster in several ways:
1. Most external dependencies (such as web hooks) do not provide SLOs, and therefore we cannot guarantee the execution latencies on these jobs. Since we cannot guarantee execution latency, we cannot ensure throughput, and therefore, in high-traffic environments, we need to ensure that jobs with external dependencies are separated from high urgency jobs, to ensure throughput on those queues.
2. Errors in jobs with external dependencies have higher alerting thresholds as there is a likelihood that the cause of the error is external.
```
class ExternalDependencyWorker
  include ApplicationWorker

  # Declares that this worker depends on
  # third-party, external services in order
  # to complete successfully
  worker_has_external_dependencies!

  # ...
end
```
**Note:** A job cannot be both high urgency and have external dependencies.
## CPU-bound and Memory-bound Workers
Workers that are constrained by CPU or memory resource limitations should be annotated with the `worker_resource_boundary` method.
Most workers tend to spend most of their time blocked, waiting on network responses from other services such as Redis, PostgreSQL, and Gitaly. Since Sidekiq is a multi-threaded environment, these jobs can be scheduled with high concurrency.
Some workers, however, spend large amounts of time *on-CPU* running logic in Ruby. Ruby MRI does not support true multi-threading - it relies on the [GIL](https://thoughtbot.com/blog/untangling-ruby-threads#the-global-interpreter-lock) to greatly simplify application development by only allowing one section of Ruby code in a process to run at a time, no matter how many cores the machine hosting the process has. For IO-bound workers, this is not a problem, since most of the threads are blocked in underlying libraries (which are outside the GIL).
If many threads are attempting to run Ruby code simultaneously, this will lead to contention on the GIL, which will have the effect of slowing down all processes.
In high-traffic environments, knowing that a worker is CPU-bound allows us to run it on a different queue with lower concurrency. This ensures optimal performance.
Likewise, if a worker uses large amounts of memory, we can run it on a bespoke low-concurrency, high-memory queue.
Note that memory-bound workers create heavy GC workloads, with pauses of 10-50ms. This has an impact on the latency requirements for the worker. For this reason, `memory` bound, `urgency :high` jobs are not permitted and will fail CI. In general, `memory` bound workers are discouraged, and alternative approaches to processing the work should be considered.
If a worker needs large amounts of both memory and CPU time, it should be marked as memory-bound, due to the above restriction on high urgency memory-bound workers.
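Declaring a memory-bound worker uses the same method; a minimal sketch with a hypothetical worker name:
```
class MemoryHungryWorker
  include ApplicationWorker

  # Declares that this worker uses large amounts of memory,
  # so it can be run on a low-concurrency, high-memory queue.
  worker_resource_boundary :memory

  # ...
end
```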
## Declaring a Job as CPU-bound
This example shows how to declare a job as being CPU-bound.
```
class CPUIntensiveWorker
  include ApplicationWorker

  # Declares that this worker will perform a lot of
  # calculations on-CPU.
  worker_resource_boundary :cpu

  # ...
end
```
## Determining whether a worker is CPU-bound
We use the following approach to determine whether a worker is CPU-bound:
* In the Sidekiq structured JSON logs, aggregate the worker `duration` and `cpu_s` fields.
    * `duration` refers to the total job execution duration, in seconds.
    * `cpu_s` is derived from the [`Process::CLOCK_THREAD_CPUTIME_ID`](https://www.rubydoc.info/stdlib/core/Process:clock_gettime) counter, and is a measure of time spent by the job on-CPU.
* Divide `cpu_s` by `duration` to get the percentage of time spent on-CPU (a rough aggregation sketch follows this list).
* If this ratio exceeds 33%, the worker is considered CPU-bound and should be annotated as such.
* Note that these values should not be used over small sample sizes, but rather over fairly large aggregates.
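A minimal sketch of that aggregation, assuming the structured JSON log lines are available locally as a newline-delimited file (the file path and worker name below are placeholders, and a `class` field is assumed to identify the worker):
```
require 'json'

# Placeholder path; point this at a Sidekiq structured JSON log file.
entries = File.readlines('sidekiq.log').map { |line| JSON.parse(line) rescue nil }.compact

# 'SomeWorker' is a placeholder worker class name.
jobs = entries.select { |e| e['class'] == 'SomeWorker' && e['duration'] && e['cpu_s'] }
abort('no matching jobs found') if jobs.empty?

total_duration = jobs.sum { |job| job['duration'].to_f }
total_cpu      = jobs.sum { |job| job['cpu_s'].to_f }

ratio = total_cpu / total_duration
puts "on-CPU: #{(ratio * 100).round(1)}%" # above ~33% suggests the worker is CPU-bound
```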
## Feature category
All Sidekiq workers must define a known [feature category](feature_categorization/index.html#sidekiq-workers).
## Job weights
Some jobs have a weight declared. This is only used when running Sidekiq in the default execution mode - using [`sidekiq-cluster`](../administration/operations/extra_sidekiq_processes.html) does not account for weights.
As we are [moving towards using `sidekiq-cluster` in Core](https://gitlab.com/gitlab-org/gitlab/-/issues/34396), newly-added workers do not need to have weights specified. They can simply use the default weight, which is 1.
## Worker context
Version history
* [Introduced](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/9) in GitLab 12.8.
To have some more information about workers in the logs, we add [metadata to the jobs in the form of an `ApplicationContext`](logging.html#logging-context-metadata-through-rails-or-grape-requests). In most cases, when scheduling a job from a request, this context is already deduced from the request and added to the scheduled job.
When a job is running, the context that was active at the time of scheduling is restored. This causes the context to be propagated to any job scheduled from within the running job.
All this means that in most cases, to add context to jobs, we don't need to do anything.
There are, however, some instances where there would be no context available when the job was scheduled, or the context that is present is likely to be incorrect. For these instances, we've added RuboCop rules to draw attention and avoid incorrect metadata in our logs.
As with most cops, there are perfectly valid reasons for disabling them. In this case it could be that the context from the request is correct. Or maybe you've specified a context already in a way that isn't picked up by the cops. In any case, leave a code comment pointing to which context will be used when disabling the cops.
When providing objects to the context, make sure that the routes for namespaces and projects are preloaded. This can be done using the `.with_route` scope defined on all `Routable`s.
### Cron workers
The context is automatically cleared for workers in the Cronjob queue (`include CronjobQueue`), even when scheduling them from requests. We do this to avoid incorrect metadata when other jobs are scheduled from the cron worker.
Cron workers themselves run instance-wide, so they aren't scoped to users, namespaces, projects, or other resources that should be added to the context.
However, they often schedule other jobs that *do* need context.
That is why there needs to be an indication of context somewhere in the worker. This can be done by using one of the following methods somewhere within the worker:
1. Wrap the code that schedules jobs in the `with_context` helper:
```
def perform
  deletion_cutoff = Gitlab::CurrentSettings
                      .deletion_adjourned_period.days.ago.to_date

  projects = Project.with_route.with_namespace
               .aimed_for_deletion(deletion_cutoff)

  projects.find_each(batch_size: 100).with_index do |project, index|
    delay = index * INTERVAL

    with_context(project: project) do
      AdjournedProjectDeletionWorker.perform_in(delay, project.id)
    end
  end
end
```
2. Use a batch scheduling method that provides context:
```
def schedule_projects_in_batch(projects)
  ProjectImportScheduleWorker.bulk_perform_async_with_contexts(
    projects,
    arguments_proc: -> (project) { project.id },
    context_proc: -> (project) { { project: project } }
  )
end
```
or, when scheduling with delays:
```
diffs.each_batch(of: BATCH_SIZE) do |diffs, index|
  DeleteDiffFilesWorker
    .bulk_perform_in_with_contexts(index * 5.minutes,
                                   diffs,
                                   arguments_proc: -> (diff) { diff.id },
                                   context_proc: -> (diff) { { project: diff.merge_request.target_project } })
end
```
### Jobs scheduled in bulk
Often, when scheduling jobs in bulk, these jobs should have a separate context rather than an overarching context.
If that is the case, `bulk_perform_async` can be replaced by the `bulk_perform_async_with_contexts` helper, and instead of `bulk_perform_in`, use `bulk_perform_in_with_contexts`.
For example:
```
ProjectImportScheduleWorker.bulk_perform_async_with_contexts(
  projects,
  arguments_proc: -> (project) { project.id },
  context_proc: -> (project) { { project: project } }
)
```
Each object from the enumerable in the first argument is yielded into 2 blocks:
* The `arguments_proc`, which needs to return the list of arguments the job needs to be scheduled with.
* The `context_proc`, which needs to return a hash with the context information for the job.
## Arguments logging
When [`SIDEKIQ_LOG_ARGUMENTS`](../administration/troubleshooting/sidekiq.html#log-arguments-to-sidekiq-jobs) is enabled, Sidekiq job arguments will be logged.
By default, the only arguments logged are numeric arguments, because arguments of other types could contain sensitive information. To override this, use `loggable_arguments` inside a worker with the indexes of the arguments to be logged. (Numeric arguments do not need to be specified here.)
For example:
```
class MyWorker
  include ApplicationWorker

  loggable_arguments 1, 3

  # object_id will be logged as it's numeric
  # string_a will be logged due to the loggable_arguments call
  # string_b will be filtered from logs
  # string_c will be logged due to the loggable_arguments call
  def perform(object_id, string_a, string_b, string_c)
  end
end
```
## Tests
Each Sidekiq worker must be tested using RSpec, just like any other class. These tests should be placed in `spec/workers`.
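A minimal sketch of such a spec (hypothetical worker and arguments), placed under `spec/workers`:
```
# spec/workers/example_worker_spec.rb (hypothetical)
require 'spec_helper'

RSpec.describe ExampleWorker do
  describe '#perform' do
    let(:record_id) { 123 }

    it 'runs without raising for a valid argument' do
      expect { described_class.new.perform(record_id) }.not_to raise_error
    end
  end
end
```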
## Sidekiq Compatibility across Updates
Keep in mind that the arguments for a Sidekiq job are stored in a queue while it is scheduled for execution. During an online update, this could lead to several possible situations:
1. An older version of the application publishes a job, which is executed by an upgraded Sidekiq node.
2. A job is queued before an upgrade, but executed after an upgrade.
3. A job is queued by a node running the newer version of the application, but executed on a node running an older version of the application.
### Changing the arguments for a worker
Jobs need to be backward and forward compatible between consecutive versions of the application. Adding or removing an argument may cause problems during deployment before all Rails and Sidekiq nodes have the updated code.
#### Remove an argument
**Do not remove arguments from the `perform` function.** Instead, use the following approach:
1. Provide a default value (usually `nil`) and use a comment to mark the argument as deprecated.
2. Stop using the argument in `perform_async`.
3. Ignore the value in the worker class, but do not remove it until the next major release.
In the following example, if you want to remove `arg2`, first set a `nil` default value, and then update the locations where `ExampleWorker.perform_async` is called.
```
class ExampleWorker
  def perform(object_id, arg1, arg2 = nil)
    # ...
  end
end
```
#### Add an argument
There are two options for safely adding a new argument to Sidekiq workers:
1. Set up a [multi-step deployment](#multi-step-deployment) in which the new argument is first added to the worker.
2. Use a [parameter hash](#parameter-hash) for additional arguments. This is perhaps the most flexible option.
##### Multi-step deployment
This approach requires multiple merge requests and for the first merge request to be merged and deployed before additional changes are merged.
1. In an initial merge request, add the argument to the worker with a default value:
```
class ExampleWorker
  def perform(object_id, new_arg = nil)
    # ...
  end
end
```
2. Merge and deploy the worker with the new argument.
3. In a further merge request, update `ExampleWorker.perform_async` calls to use the new argument, as sketched below.
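The call-site update in that further merge request would then look roughly like this:
```
# Before: callers do not pass the new argument yet,
# so the default value from the first merge request is used.
ExampleWorker.perform_async(object_id)

# After: callers start passing the new argument.
ExampleWorker.perform_async(object_id, new_arg)
```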
##### Parameter hash
This approach will not require multiple deployments if an existing worker already utilizes a parameter hash.
1. Use a parameter hash in the worker to allow for future flexibility; a caller sketch follows the example:
```
class ExampleWorker
  def perform(object_id, params = {})
    # ...
  end
end
```
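Callers can then pass new, optional data inside the hash without changing the method signature (`some_new_key` below is purely illustrative):
```
# Existing callers keep working unchanged:
ExampleWorker.perform_async(object_id)

# New data travels inside the params hash, so the signature stays the same:
ExampleWorker.perform_async(object_id, { 'some_new_key' => 'some value' })
```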
### Removing workers
Try to avoid removing workers and their queues in minor and patch releases.
During an online update, an instance can have pending jobs, and removing the queue can lead to those jobs being stuck forever. If you can't write a migration for those Sidekiq jobs, consider removing the worker in a major release only.
### Renaming queues
For the same reasons that removing workers is dangerous, care should be taken when renaming queues.
When renaming queues, use the `sidekiq_queue_migrate` helper migration method, as shown in this example:
```
class MigrateTheRenamedSidekiqQueue < ActiveRecord::Migration[5.0]
  include Gitlab::Database::MigrationHelpers

  DOWNTIME = false

  def up
    sidekiq_queue_migrate 'old_queue_name', to: 'new_queue_name'
  end

  def down
    sidekiq_queue_migrate 'new_queue_name', to: 'old_queue_name'
  end
end
```