Migrating to RHEL 8/9

After you create and prepare the Salt infrastructure for the new RHEL 8/9 system, you can perform the following migration steps to complete the upgrade to RHEL 8/9.

Prepare and perform the migration

  1. Stop the RaaS service on both the RHEL 7 and RHEL 8/9 systems.
  2. Copy the gz backup file from the old server to the new server. The gz file must be stored in the /var/lib/pgsql directory, and its ownership must be set to postgres:postgres. (See the command sketch after these steps.)
  3. As the postgres user, run the following commands to remove the database:
    su - postgres
    psql -U postgres
    drop database raas_43cab1f4de604ab185b51d883c5c5d09;
  4. Create an empty database and verify the users:
    create database raas_43cab1f4de604ab185b51d883c5c5d09;
    \du    -- should display users for postgres and salteapi
  5. Copy the /etc/raas/pki/.raas.key and /etc/raas/pki/.instance_id files from the old RaaS server to the new RaaS server.
  6. Run the upgrade commands against the new PostgreSQL database:
    su - raas
    raas -l debug upgrade
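For step 2, the following is a minimal sketch of the copy and ownership change, assuming the backup file is named raas_backup.gz (substitute your actual backup file name and hostnames):
    # On the old RaaS server: copy the backup to the new server (hypothetical file name)
    scp /var/lib/pgsql/raas_backup.gz root@rhel9-raas:/var/lib/pgsql/
    # On the new RaaS server: give the postgres user ownership of the backup
    chown postgres:postgres /var/lib/pgsql/raas_backup.gz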
You can now start the raas service on the new rhel9-raas server. You can also access the Automation Config UI in a browser. Next, you must configure the Master Plugin on the new RHEL 8/9 Salt master.
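For example, a minimal sketch for starting the service, assuming it is managed by systemd under the unit name raas:
    sudo systemctl start raas
    sudo systemctl status raas    # verify the service is active before continuing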

Configure the Master Plugin on the new Salt master

Perform these steps on the new rhel9-master node.
  1. Log in to your Salt master and verify that the /etc/salt/master.d directory exists, or create it.
  2. Generate the master configuration settings.
    If you want to preserve your settings when upgrading your installation, make a backup of your existing Master Plugin configuration file before running this step. Then copy relevant settings from your existing configuration to the newly generated file.
    sudo sseapi-config --all > /etc/salt/master.d/raas.conf
    If you installed Salt using onedir, the path to this executable is /opt/saltstack/salt/extras-3.10/bin/sseapi-config.
  3. Edit the generated raas.conf file and update the values as follows (a consolidated example appears after these steps):
    sseapi_ssl_validate_cert
    Validates the certificate the API (RaaS) uses. The default is True.
    If you are using your own CA-issued certificates, set this value to True and configure the sseapi_ssl_ca, sseapi_ssl_cert, and sseapi_ssl_key settings.
    Otherwise, set this to False to not validate the certificate:
    sseapi_ssl_validate_cert: False
    sseapi_server
    HTTP IP address of your RaaS node, for example, http://example.com, or https://example.com if SSL is enabled.
    sseapi_command_age_limit
    Sets the age (in seconds) after which old, potentially stale jobs are skipped. For example, to skip jobs older than a day, set it to:
    sseapi_command_age_limit: 86400
    Skipped jobs continue to exist in the database and display with a status of Completed in the Automation Config user interface.
    Some environments might need the Salt master to be offline for long periods of time and will need the Salt master to run any jobs that were queued after it comes back online. If this applies to your environment, set the age limit to 0.
    sseapi_windows_minion_deploy_delay
    Sets a delay to allow all requisite Windows services to become active. The default value is 180 seconds.
    sseapi_linux_minion_deploy_delay
    Sets a delay to allow all requisite Linux services to become active. The default value is 90 seconds.
    sseapi_local_cache
    Sets the length of time that certain data is cached locally on each Salt master. Values are in seconds. The example values are the recommended values:
    sseapi_local_cache:
      load: 3600
      tgt: 86400
      pillar: 3600
      exprmatch: 86400
      tgtmatch: 86400
    • load - salt save_load() payloads
    • tgt - SSE target groups
    • pillar - SSE pillar data (encrypted)
    • exprmatch - SSE target expression matching data
    • tgtmatch - SSE target group matching data
  4. OPTIONAL: This step is necessary for manual installations only. To verify you can connect to SSL before connecting the Master Plugin, edit the generated raas.conf file to update the following values. If you do not update these values, the Master Plugin uses the default generated certificate.
    sseapi_ssl_ca
    The path to a CA file.
    sseapi_ssl_cert
    The path to the certificate. The default value is /etc/pki/raas/certs/localhost.crt.
    sseapi_ssl_key
    The path to the certificate's private key. The default value is /etc/pki/raas/certs/localhost.key.
    id
    Comment this line out by adding a # at the beginning. It is not required.
  5. OPTIONAL: Update performance-related settings. For large or busy environments, you can improve the performance of the communications between the Salt master and Automation Config by adjusting the following settings.
    • Configure the Master Plugin engines:
      The Master Plugin eventqueue and rpcqueue engines offload some communications with Automation Config from performance-critical code paths to dedicated processes. While the engines are waiting to communicate with Automation Config, payloads are stored in the Salt master's local filesystem so the data can persist across restarts of the Salt master. The tgtmatch engine moves the calculation of minion target group matches from the RaaS server to the Salt masters.
      To enable the engines, ensure that the following settings are present in the Salt Master Plugin configuration file (raas.conf):
      engines:
        - sseapi: {}
        - eventqueue: {}
        - rpcqueue: {}
        - jobcompletion: {}
        - tgtmatch: {}
      To configure the eventqueue engine, verify that the following settings are present:
      sseapi_event_queue:
        name: sseapi-events
        strategy: always
        push_interval: 5
        batch_limit: 2000
        age_limit: 86400
        size_limit: 35000000
        vacuum_interval: 86400
        vacuum_limit: 350000
      The queue parameters can be adjusted with consideration to how they work together. For example, assuming an average of 400 events per second on the Salt event bus, the settings shown above allow for about 24 hours of queued event traffic to collect on the Salt master before the oldest events are discarded due to size or age limits.
      To configure the rpcqueue engine, verify the following settings in raas.conf:
      sseapi_rpc_queue:
        name: sseapi-rpc
        strategy: always
        push_interval: 5
        batch_limit: 500
        age_limit: 3600
        size_limit: 360000
        vacuum_interval: 86400
        vacuum_limit: 100000
      To configure the tgtmatch engine, ensure that the following settings are present in the Master Plugin configuration file (/etc/salt/master.d/raas.conf):
      engines:
        - sseapi: {}
        - eventqueue: {}
        - rpcqueue: {}
        - jobcompletion: {}
        - tgtmatch: {}
      sseapi_local_cache:
        load: 3600
        tgt: 86400
        pillar: 3600
        exprmatch: 86400
        tgtmatch: 86400
      sseapi_tgt_match:
        poll_interval: 60
        workers: 0
        nice: 19
      To make use of target matching on the Salt masters, the following setting must also be present in the RaaS configuration: target_groups_from_master_only: true
    • Limit minion grains payload sizes:
      sseapi_max_minion_grains_payload: 2000
    • Enable skipping jobs that are older than a defined time (in seconds). For example, use 86400 to skip jobs older than a day. When set to 0, this feature is disabled:
      sseapi_command_age_limit: 0
      During system upgrades, enabling this setting is useful to prevent old commands stored in the database from running unexpectedly.
    Together, event queuing in Salt and the queuing engines, Salt master target matching, the grains payload size limit, and the command age limit in the Salt Master Plugin increase the throughput and reduce the latency of communications between the Salt master and Automation Config in the most performance-sensitive code paths.
  6. Restart the master service.
    sudo systemctl restart salt-master
  7. OPTIONAL:
    You might want to run a test job to ensure the Master Plugin is now enabling communication between the master and the RaaS node.
    salt -v '*' test.ping
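For reference, the following is a minimal sketch of a combined /etc/salt/master.d/raas.conf using the settings discussed in steps 3 through 5. It is not a complete file (sseapi-config --all generates additional settings), and the server URL is a placeholder; the numeric values are the recommended examples shown above:
    # Connection to the RaaS node (placeholder URL; use your own server)
    sseapi_server: https://rhel9-raas.example.com
    sseapi_ssl_validate_cert: False

    # Skip jobs older than one day
    sseapi_command_age_limit: 86400

    # Minion deployment delays (defaults shown)
    sseapi_windows_minion_deploy_delay: 180
    sseapi_linux_minion_deploy_delay: 90

    # Engines and local caching for performance
    engines:
      - sseapi: {}
      - eventqueue: {}
      - rpcqueue: {}
      - jobcompletion: {}
      - tgtmatch: {}
    sseapi_local_cache:
      load: 3600
      tgt: 86400
      pillar: 3600
      exprmatch: 86400
      tgtmatch: 86400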
The RHEL 8/9 master now appears on the Master Keys page.
Do not accept the master key at this point.

Configure the minion agent

Follow these steps to configure the minion agent on the rhel9-master node to point to itself (a consolidated command sketch appears after these steps).
  1. Log in to the rhel9-master node using SSH and browse to the /etc/salt/minion.d directory.
  2. Edit the minion.conf file and change the master setting to master: localhost.
  3. Browse to the /etc/salt/pki/minion directory and delete the minion_master.pub file.
  4. Restart the salt-minion service using
    systemctl restart salt-minion
  5. View and accept the minion key on rhel9-master by running:
    salt-key
    salt-key -A
  6. In Automation Config, navigate to Administration > Master Keys and accept the master key.
    The RHEL 8/9 master should now appear on the Targets page.
  7. Log in to the RHEL 7 master using SSH and delete the key for the rhel9-master minion.
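A minimal command sketch of steps 2 through 5 on rhel9-master, assuming the master setting lives on a single "master:" line in /etc/salt/minion.d/minion.conf (adjust if your configuration differs):
    # Point the local minion at its own master
    sudo sed -i 's/^master:.*/master: localhost/' /etc/salt/minion.d/minion.conf
    # Remove the cached public key of the old master
    sudo rm /etc/salt/pki/minion/minion_master.pub
    # Restart the minion and accept its key on this master
    sudo systemctl restart salt-minion
    sudo salt-key       # list keys; the rhel9-master minion should be pending
    sudo salt-key -A    # accept all pending keys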

Migrate salt-minion systems

There are many ways to migrate managed systems. If you already have a process in place, follow it. If not, follow these instructions to move salt minions from the old Salt master to the new Salt master.
These steps do not apply to multi-master systems.
  1. Create an orchestration file. For example:
    # Orchestration to move minions from one master to another
    # file: /srv/salt/switch_masters/init.sls
    {% import_yaml 'switch_masters/map.yaml' as mm %}
    {% set minions = mm['minionids'] %}

    {% if minions %}
    {% for minion in minions %}
    move_minions_{{ minion }}:
      salt.state:
        - tgt: {{ minion }}
        - sls:
          - switch_masters.move_minions_map
    {% endfor %}
    {% else %}
    no_minions:
      test.configurable_test_state:
        - name: No minions to move
        - result: True
        - changes: False
        - comment: No minions to move
    {% endif %}

    remove_minions:
      salt.runner:
        - name: manage.down
        - removekeys: True

    # map file for moving minions
    # file: /srv/salt/switch_masters/map.yaml
    newsaltmaster: <new_ip_address>
    oldsaltmaster: <old_ip_address>
    minionids:
      - minion01
      - minion02
      - minion03

    # state to switch minions from one master to another
    # file: /srv/salt/switch_masters/move_minions_map.sls
    {% set minion = salt['grains.get']('os') %}

    # name old master and set new master ip address
    {% import_yaml 'switch_masters/map.yaml' as mm %}
    {% set oldmaster = mm['oldsaltmaster'] %}
    {% set newmaster = mm['newsaltmaster'] %}

    # remove minion_master.pub key
    {% if minion == 'Windows' %}
    remove_master_key:
      file.absent:
        - name: c:\ProgramData\Salt Project\Salt\conf\pki\minion\minion_master.pub

    change_master_assignment:
      file.replace:
        - name: c:\ProgramData\Salt Project\Salt\conf\minion.d\minion.conf
        - pattern: 'master: {{oldmaster}}'
        - repl: 'master: {{newmaster}}'
        - require:
          - remove_master_key
    {% else %}
    remove_master_key:
      file.absent:
        - name: /etc/salt/pki/minion/minion_master.pub

    # modify minion config file
    change_master_assignment:
      file.replace:
        - name: /etc/salt/minion.d/minion.conf
        - pattern: 'master: {{oldmaster}}'
        - repl: 'master: {{newmaster}}'
        - require:
          - remove_master_key
    {% endif %}

    # restart salt-minion
    restart_salt_minion:
      service.running:
        - name: salt-minion
        - require:
          - change_master_assignment
        - watch:
          - change_master_assignment
  2. Create the map.yaml file with the following content (see the code example above):
    1. <old salt master> IP address/FQDN
    2. <new salt master> IP address/FQDN
    3. A list of the salt minion IDs to move.
  3. Create the state file that handles the move (see the code example above). For example,
    move_minions_map.sls
  4. Add these files to a directory on the RHEL 7 Salt master (for example, /srv/salt/switch_masters).
  5. Run the orchestration file on the RHEL 7 Salt master (see the command sketch after these steps). This produces some errors as the salt-minion services restart and the minions do not come back online for the RHEL 7 Salt master.
  6. Monitor the progress in Automation Config. Accept the migrated salt minion IDs as they populate in the UI.
  7. After all systems are migrated, run a test.ping job against them to verify that everything is communicating correctly.
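For step 5, a minimal sketch of running the orchestration, assuming the files above were placed under /srv/salt/switch_masters on the RHEL 7 master:
    # On the RHEL 7 Salt master: run switch_masters/init.sls as an orchestration
    salt-run state.orchestrate switch_masters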

Migrate existing files

This process depends entirely on how your organization creates, stores, and manages state and configuration files. The most common use cases are outlined below for reference.
Use case 1: Automation Config file server
In this use case, Automation Config files are stored in the Postgres database and displayed in the Automation Config UI.
These files are recovered and migrated when the Postgres database is restored. No additional steps are required to migrate them to the RHEL 8/9 environment.
Use case 2: GitHub/GitLab file server
In this use case, Automation Config state and configuration files are stored in GitHub, GitLab, Bitbucket, or another code versioning system.
Because the files are stored in a third-party tool, you need to configure the new RHEL 8/9 master to connect to the repository system. This configuration mirrors the RHEL 7 repository configuration.
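For example, if the RHEL 7 master consumes the repository through Salt's gitfs fileserver backend, the matching RHEL 8/9 master configuration might look like the following sketch (the file name and repository URL are placeholders; mirror whatever backend and remotes the RHEL 7 master actually uses):
    # /etc/salt/master.d/fileserver.conf (hypothetical file name)
    fileserver_backend:
      - gitfs
      - roots
    gitfs_remotes:
      - https://github.com/example-org/salt-states.git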
Use case 3: Local file roots on the Salt master
In this use case, Automation Config files are stored in a local file server directory on the Salt master.
To migrate these files to the RHEL 8/9 master, copy the appropriate directories from the RHEL 7 master to the RHEL 8/9 master.
  1. State and pillar files are stored in /srv/salt and /srv/pillar, respectively.
  2. Securely copy these directories from the RHEL 7 master to the RHEL 8/9 master using a secure copy tool (for example, WinSCP) or the command line. (See the sketch after these steps.)
  3. Refresh the pillar data using
    salt '*' saltutil.refresh_pillar