Harbor Master-Slave Dual-Node Deployment on ARM64
Preface
The upcoming production environment runs on ARM64, mainly to support domestic (Xinchuang) platforms. Since Harbor publishes no official ARM images, this deployment is based on the official harbor-offline-installer approach, with the following design:
1. Fully independent deployments: each node runs a complete, standalone Harbor.
2. Harbor replication: Harbor's built-in Replication feature synchronizes images between the two nodes automatically.
3. Automatic VIP failover: when the master node fails, the VIP floats to the slave node.

Environment
OS: Kylin Linux Advanced Server V11 (Swan25)
CPU: Kunpeng 920 (ARM64)
Master node: 192.168.120.58 (harbor-master)
Slave node: 192.168.120.59 (harbor-slave)
VIP: 192.168.120.60
Domain: reg-hub.gzeport.com
Install Docker and Docker Compose
A unified one-click installation package is used for this, so the step is omitted here.
Configure the base environment
Configure hostnames and /etc/hosts
Master node (192.168.120.58):
hostnamectl set-hostname harbor-master
Slave node (192.168.120.59):
hostnamectl set-hostname harbor-slave
Append to /etc/hosts on both nodes:
cat >> /etc/hosts << EOF
192.168.120.58 harbor-master
192.168.120.59 harbor-slave
192.168.120.60 reg-hub.gzeport.com harbor-vip
EOF
# Disable the firewall, or open the required ports (configure as needed in production)
systemctl stop firewalld
systemctl disable firewalld
# Time synchronization
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
# Raise the open-file limit for the current shell
ulimit -SHn 65535
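Note that `ulimit -SHn` only affects the current shell session. To persist the limit across reboots, it can also be written to limits.conf; a minimal sketch (staged in /tmp here; append it to /etc/security/limits.conf as root):

```shell
# nofile limits matching the ulimit above; '*' applies to all users.
cat > /tmp/harbor-nofile.conf << 'EOF'
* soft nofile 65535
* hard nofile 65535
EOF
# As root:
# cat /tmp/harbor-nofile.conf >> /etc/security/limits.conf
```

New login sessions (including the docker daemon's service unit, which has its own LimitNOFILE setting) pick the limit up after re-login.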
Install Redis
An external Redis is used instead of Harbor's built-in one, for two reasons:
1. The built-in Redis runs without password authentication, which vulnerability scanners flag.
2. The built-in Redis fails on ARM:
# The redis container keeps Restarting with this error:
<jemalloc>: Unsupported system page size
Harbor's official redis image is built with jemalloc, compiled without support for 64K pages:
# getconf PAGE_SIZE
65536
Deploy an external Redis instead: either a standalone instance per node (independent of each other), or a master-slave pair.
Standalone mode
# Create directories
mkdir -p /AppHome/docker/redis/{data,conf}
cat > /AppHome/docker/redis/conf/redis.conf << 'EOF'
# Network
bind 0.0.0.0
port 6379
protected-mode yes
# Authentication
requirepass Harbor@Redis123
# Persistence
appendonly yes
appendfsync everysec
dir /data
save 900 1
save 300 10
save 60 10000
# Memory
maxmemory 2gb
maxmemory-policy allkeys-lru
# Logging
loglevel notice
logfile ""
EOF
# Docker Compose configuration
cat > /AppHome/docker/redis/docker-compose.yml << 'EOF'
version: '3.8'
services:
  redis:
    image: redis:6.2-alpine
    container_name: harbor-redis
    restart: always
    ports:
      - "6379:6379"
    volumes:
      - /AppHome/docker/redis/data:/data
      - /AppHome/docker/redis/conf/redis.conf:/etc/redis/redis.conf
    command: redis-server /etc/redis/redis.conf
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "Harbor@Redis123", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3
EOF
# Start the service
cd /AppHome/docker/redis
docker-compose up -d
# Verify
docker exec -it harbor-redis redis-cli -a Harbor@Redis123 ping
Master-slave mode
# Create directories (both nodes)
mkdir -p /AppHome/docker/redis/{data,conf}
# Redis master configuration (on the master node)
cat > /AppHome/docker/redis/conf/redis.conf << 'EOF'
# Network
bind 0.0.0.0
port 6379
protected-mode yes
# Authentication
requirepass Harbor@Redis123
masterauth Harbor@Redis123
# Persistence
appendonly yes
appendfsync everysec
dir /data
save 900 1
save 300 10
save 60 10000
# Memory
maxmemory 2gb
maxmemory-policy allkeys-lru
# Replication
repl-diskless-sync yes
repl-diskless-sync-delay 5
# Logging
loglevel notice
logfile ""
EOF
# Docker Compose configuration
cat > /AppHome/docker/redis/docker-compose.yml << 'EOF'
version: '3.8'
services:
  redis:
    image: redis:6.2-alpine
    container_name: harbor-redis-master
    restart: always
    ports:
      - "6379:6379"
    volumes:
      - /AppHome/docker/redis/data:/data
      - /AppHome/docker/redis/conf/redis.conf:/etc/redis/redis.conf
    command: redis-server /etc/redis/redis.conf
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "Harbor@Redis123", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3
EOF
# Start the service
cd /AppHome/docker/redis
docker-compose up -d
# Verify
docker exec -it harbor-redis-master redis-cli -a Harbor@Redis123 INFO replication
# Redis slave configuration (on the slave node)
cat > /AppHome/docker/redis/conf/redis.conf << 'EOF'
# Network
bind 0.0.0.0
port 6379
protected-mode yes
# Authentication
requirepass Harbor@Redis123
masterauth Harbor@Redis123
# Configure as a replica (master address)
replicaof 192.168.120.58 6379
# The replica is read-only
replica-read-only yes
# Persistence
appendonly yes
appendfsync everysec
dir /data
save 900 1
save 300 10
save 60 10000
# Memory
maxmemory 2gb
maxmemory-policy allkeys-lru
# Logging
loglevel notice
logfile ""
EOF
# Docker Compose configuration
cat > /AppHome/docker/redis/docker-compose.yml << 'EOF'
version: '3.8'
services:
  redis:
    image: redis:6.2-alpine
    container_name: harbor-redis-slave
    restart: always
    ports:
      - "6379:6379"
    volumes:
      - /AppHome/docker/redis/data:/data
      - /AppHome/docker/redis/conf/redis.conf:/etc/redis/redis.conf
    command: redis-server /etc/redis/redis.conf
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "Harbor@Redis123", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3
EOF
# Start the service
cd /AppHome/docker/redis
docker-compose up -d
# Verify replication
docker exec -it harbor-redis-slave redis-cli -a Harbor@Redis123 INFO replication
# Expect role:slave and master_host:192.168.120.58
# Verify master-slave synchronization
# Write test data on the master
docker exec -it harbor-redis-master redis-cli -a Harbor@Redis123 SET test_key "Hello from Master"
# Read it back on the slave
docker exec -it harbor-redis-slave redis-cli -a Harbor@Redis123 GET test_key
# Expect: "Hello from Master"
# Check replication lag
docker exec -it harbor-redis-master redis-cli -a Harbor@Redis123 INFO replication | grep lag
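Beyond spot-checking a key, the master and replica byte offsets can be compared directly; a sketch saved as a script (assumes the two containers above are running, so it is only syntax-checked here):

```shell
# Compare master_repl_offset on the master with slave_repl_offset on the replica;
# a small, shrinking gap between the two numbers means replication is keeping up.
cat > /tmp/redis_repl_check.sh << 'EOF'
#!/bin/sh
PASS=Harbor@Redis123
docker exec harbor-redis-master redis-cli -a "$PASS" INFO replication | grep master_repl_offset
docker exec harbor-redis-slave  redis-cli -a "$PASS" INFO replication | grep slave_repl_offset
EOF
chmod +x /tmp/redis_repl_check.sh
```

Run `/tmp/redis_repl_check.sh` on whichever host can reach both containers.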
Deploy Harbor
This deployment uses Harbor v2.14.
Harbor publishes no official ARM images, and harbor-offline-installer ships only x86_64 images by default, so the ARM images are pulled from ghcr.io instead.
# ARM, online installer:
tar zxvf harbor-online-installer-v2.14.0.tgz -C /AppHome/docker/
# Pull the images
# ghcr.io
docker pull --platform=linux/arm64 ghcr.io/octohelm/harbor/harbor-registryctl:v2.14.0
docker pull --platform=linux/arm64 ghcr.io/octohelm/harbor/nginx-photon:v2.14.0
docker pull --platform=linux/arm64 ghcr.io/octohelm/harbor/registry-photon:v2.14.0
docker pull --platform=linux/arm64 ghcr.io/octohelm/harbor/prepare:v2.14.0
docker pull --platform=linux/arm64 ghcr.io/octohelm/harbor/harbor-portal:v2.14.0
docker pull --platform=linux/arm64 ghcr.io/octohelm/harbor/harbor-log:v2.14.0
docker pull --platform=linux/arm64 ghcr.io/octohelm/harbor/harbor-exporter:v2.14.0
docker pull --platform=linux/arm64 ghcr.io/octohelm/harbor/redis-photon:v2.14.0
docker pull --platform=linux/arm64 ghcr.io/octohelm/harbor/trivy-adapter-photon:v2.14.0
docker pull --platform=linux/arm64 ghcr.io/octohelm/harbor/harbor-core:v2.14.0
docker pull --platform=linux/arm64 ghcr.io/octohelm/harbor/harbor-db:v2.14.0
docker pull --platform=linux/arm64 ghcr.io/octohelm/harbor/harbor-jobservice:v2.14.0
docker tag ghcr.io/octohelm/harbor/harbor-registryctl:v2.14.0 goharbor/harbor-registryctl:v2.14.0
docker tag ghcr.io/octohelm/harbor/nginx-photon:v2.14.0 goharbor/nginx-photon:v2.14.0
docker tag ghcr.io/octohelm/harbor/registry-photon:v2.14.0 goharbor/registry-photon:v2.14.0
docker tag ghcr.io/octohelm/harbor/prepare:v2.14.0 goharbor/prepare:v2.14.0
docker tag ghcr.io/octohelm/harbor/harbor-portal:v2.14.0 goharbor/harbor-portal:v2.14.0
docker tag ghcr.io/octohelm/harbor/harbor-log:v2.14.0 goharbor/harbor-log:v2.14.0
docker tag ghcr.io/octohelm/harbor/harbor-exporter:v2.14.0 goharbor/harbor-exporter:v2.14.0
docker tag ghcr.io/octohelm/harbor/redis-photon:v2.14.0 goharbor/redis-photon:v2.14.0
docker tag ghcr.io/octohelm/harbor/trivy-adapter-photon:v2.14.0 goharbor/trivy-adapter-photon:v2.14.0
docker tag ghcr.io/octohelm/harbor/harbor-core:v2.14.0 goharbor/harbor-core:v2.14.0
docker tag ghcr.io/octohelm/harbor/harbor-db:v2.14.0 goharbor/harbor-db:v2.14.0
docker tag ghcr.io/octohelm/harbor/harbor-jobservice:v2.14.0 goharbor/harbor-jobservice:v2.14.0
# On amd64, the offline images can be loaded directly
tar zxvf harbor-offline-installer-v2.14.0.tgz -C /AppHome/docker/
# Load the images
docker load -i harbor.v2.14.0.tar.gz
# Verify the images
docker images | grep goharbor
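The twelve pull/tag pairs above can also be generated from a single component list; a sketch that writes the commands to a script so they can be reviewed before running:

```shell
# Derive each goharbor/ tag from its ghcr.io/octohelm mirror image.
VERSION=v2.14.0
COMPONENTS="harbor-registryctl nginx-photon registry-photon prepare harbor-portal \
harbor-log harbor-exporter redis-photon trivy-adapter-photon harbor-core harbor-db harbor-jobservice"
for c in $COMPONENTS; do
  src="ghcr.io/octohelm/harbor/${c}:${VERSION}"
  dst="goharbor/${c}:${VERSION}"
  echo "docker pull --platform=linux/arm64 ${src} && docker tag ${src} ${dst}"
done > /tmp/pull-harbor-images.sh
```

Review /tmp/pull-harbor-images.sh, then execute it with `sh /tmp/pull-harbor-images.sh`.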
Deploy Harbor (both nodes)
# Run on both nodes:
mkdir /AppHome/docker/harbor_data
The harbor.yml configuration is identical on both nodes except for hostname:
cd /AppHome/docker/harbor/
cp harbor.yml.tmpl harbor.yml
harbor.yml contents
Edit harbor.yml; note that hostname must be set to each node's own IP.
# On the slave node use: hostname: 192.168.120.59
hostname: 192.168.120.58

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# The registry's public domain; the TLS certificate is configured in Nginx.
# external_url is left unset to avoid CSRF problems.
# external_url: https://reg-hub.gzeport.com

harbor_admin_password: Harbor@Admin123

# Harbor DB configuration
database:
  # The password for the user('postgres' by default) of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 100
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgres of harbor.
  max_open_conns: 900
  # The maximum amount of time a connection may be reused. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's age.
  # The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  conn_max_lifetime: 5m
  # The maximum amount of time a connection may be idle. Expired connections may be closed lazily before reuse. If it <= 0, connections are not closed due to a connection's idle time.
  # The value is a duration string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  conn_max_idle_time: 0

# The default data volume
data_volume: /AppHome/docker/harbor_data

jobservice:
  # Maximum number of job workers in job service
  max_job_workers: 10
  # Maximum hours of task duration in job service, default 24
  max_job_duration_hours: 24
  # The jobLoggers backend name, only support "STD_OUTPUT", "FILE" and/or "DB"
  job_loggers:
    - STD_OUTPUT
    - FILE
    # - DB
  # The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`)
  logger_sweeper_duration: 1 #days

notification:
  # Maximum retry count for webhook job
  webhook_job_max_retry: 3
  # HTTP client timeout for webhook job
  webhook_job_http_client_timeout: 3 #seconds

# Log configurations
log:
  # options are debug, info, warning, error, fatal
  level: info
  # configs for logs in local storage
  local:
    # Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
    rotate_count: 50
    # Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
    # If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
    # are all valid.
    rotate_size: 200M
    # The directory on your host that store log
    location: /var/log/harbor
  # Uncomment following lines to enable external syslog endpoint.
  # external_endpoint:
  #   # protocol used to transmit log to external endpoint, options is tcp or udp
  #   protocol: tcp
  #   # The host of external endpoint
  #   host: localhost
  #   # Port of external endpoint
  #   port: 5140

# This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 2.14.0

# External Redis configuration (adjust host per node / per Redis topology)
external_redis:
  host: 192.168.120.58:6379
  password: Harbor@Redis123
  registry_db_index: 1
  jobservice_db_index: 2
  trivy_db_index: 5
  idle_timeout_seconds: 30

proxy:
  http_proxy:
  https_proxy:
  no_proxy: 127.0.0.1,localhost,core,registry
  components:
    - core
    - jobservice
    - trivy

# Enable purge _upload directories
upload_purging:
  enabled: true
  # remove files in _upload directories which exist for a period of time, default is one week.
  age: 168h
  # the interval of the purge operations
  interval: 24h
  dryrun: false

cache:
  # not enabled by default
  enabled: false
  # keep cache for one day by default
  expire_hours: 24

trivy:
  ignore_unfixed: false
  skip_update: true
  offline_scan: true
  security_check: vuln
  insecure: false

metric:
  enabled: true
  port: 29090
  path: /metrics

ui:
  swagger: false
Run on both nodes:
cd /AppHome/docker/harbor/
# Generate the configuration
./prepare --with-trivy
# Install and start Harbor
./install.sh --with-trivy
# Check service status
docker-compose ps
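Once the containers are up, Harbor's aggregate health endpoint (`/api/v2.0/health` in the Harbor v2 API) can confirm that every component is running; a small probe script (only syntax-checked here, since it requires a running Harbor):

```shell
# Probe Harbor's health endpoint on the local node; non-zero exit on failure.
cat > /tmp/harbor_health.sh << 'EOF'
#!/bin/sh
# "healthy" in the JSON reply means all components (core, registry, database, ...) are up.
curl -fsS http://127.0.0.1:80/api/v2.0/health
EOF
chmod +x /tmp/harbor_health.sh
```

Run `/tmp/harbor_health.sh` on each node after install.sh finishes.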
Deploy Keepalived + Nginx (both nodes)
Install and configure Nginx on both the master and slave nodes. (Caddy may be adopted later, which would remove the need for self-signed certificates.)
Nginx is installed directly from an rpm package; on Kylin V10, the CentOS 8 nginx rpm works as-is:
dnf -y install nginx-1.28.0-1.el8.ngx.aarch64.rpm
Nginx configuration (identical on both nodes)
# Back up the original configuration
cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
# Edit /etc/nginx/nginx.conf, or add a new file /etc/nginx/conf.d/reg-hub.gzeport.com.conf:
server {
    listen 443 ssl;
    server_name reg-hub.gzeport.com;

    # SSL configuration
    ssl_certificate     /etc/nginx/conf.d/reg-hub.gzeport.com.crt;
    ssl_certificate_key /etc/nginx/conf.d/reg-hub.gzeport.com.key;

    # HTTP method restriction (enable in production)
    # add_header Allow "GET, POST, HEAD, PUT, DELETE, OPTIONS, PATCH" always;
    # if ($request_method !~ ^(GET|HEAD|POST|PUT|DELETE|PATCH|OPTIONS)$) {
    #     return 405;
    # }

    location / {
        proxy_pass http://127.0.0.1:80;

        # Proxy headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Original-URI $request_uri;

        # Auth-related headers
        proxy_set_header Authorization $http_authorization;
        proxy_pass_header Authorization;
        proxy_pass_header WWW-Authenticate;

        # WebSocket and long-lived connection support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;

        # Large upload support
        client_max_body_size 0;
        proxy_buffering off;
        proxy_request_buffering off;

        # Method passthrough (enable in production)
        # proxy_method $request_method;
    }
}
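The server block references a certificate pair under /etc/nginx/conf.d/. If none exists yet, a self-signed one can be generated for testing (written to /tmp here; copy the pair to the paths above, and use a CA-issued certificate in production):

```shell
# Self-signed certificate for reg-hub.gzeport.com, SAN covering the domain and the VIP.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout /tmp/reg-hub.gzeport.com.key \
  -out /tmp/reg-hub.gzeport.com.crt \
  -subj "/CN=reg-hub.gzeport.com" \
  -addext "subjectAltName=DNS:reg-hub.gzeport.com,IP:192.168.120.60"
```

Docker clients pulling over HTTPS must trust this certificate (e.g. place the .crt under /etc/docker/certs.d/reg-hub.gzeport.com/) or be configured with the registry as insecure.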
Start Nginx and enable it at boot
# rm -f /etc/nginx/conf.d/default.conf
# Test the configuration
nginx -t
# Start Nginx
systemctl start nginx && systemctl enable nginx
Install keepalived
yum install -y keepalived
Edit the configuration file
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
vim /etc/keepalived/keepalived.conf
# Master node (192.168.120.58):
cat > /etc/keepalived/keepalived.conf << "EOF"
! Configuration File for keepalived
global_defs {
    router_id HARBOR_MASTER
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface enp0s11        # change to the actual interface name
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Harbor@123 # keepalived uses only the first 8 characters
    }
    virtual_ipaddress {
        192.168.120.60
    }
    track_script {
        check_nginx
    }
}
EOF
# Slave node (192.168.120.59):
cat > /etc/keepalived/keepalived.conf << "EOF"
! Configuration File for keepalived
global_defs {
    router_id HARBOR_SLAVE
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface enp0s11        # change to the actual interface name
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Harbor@123 # keepalived uses only the first 8 characters
    }
    virtual_ipaddress {
        192.168.120.60
    }
    track_script {
        check_nginx
    }
}
EOF
Create the Nginx check script (both nodes)
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
# If nginx is down, restart it once; if it is still down after 10s,
# stop keepalived so the VIP fails over to the other node.
num=$(ps -C nginx --no-header | wc -l)
if [ "$num" -eq 0 ]; then
    systemctl restart nginx
    sleep 10
    num=$(ps -C nginx --no-header | wc -l)
    if [ "$num" -eq 0 ]; then
        systemctl stop keepalived
    fi
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
Start keepalived and enable it at boot
systemctl restart keepalived
systemctl enable keepalived
systemctl status keepalived
# Check the VIP on the master node
ip addr show | grep 192.168.120.60
# Ping the VIP from another machine
ping 192.168.120.60
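It is worth drilling an actual failover to confirm the VIP moves; a hypothetical checklist saved as a script (run step 1 on the master, step 2 on the slave; only syntax-checked here):

```shell
cat > /tmp/vip_failover_drill.sh << 'EOF'
#!/bin/sh
# 1. On harbor-master: check_nginx.sh restarts a stopped nginx once, so to force
#    the drill stop both services, releasing the VIP:
#    systemctl stop nginx keepalived
# 2. On harbor-slave: the VIP should appear within a few seconds:
ip addr show | grep -q 192.168.120.60 && echo "VIP is on this node"
# 3. Restore the master; with priority 100 > 90 it preempts the VIP back:
#    systemctl start nginx keepalived
EOF
chmod +x /tmp/vip_failover_drill.sh
```

During the drill, `docker login reg-hub.gzeport.com` against the VIP should keep working throughout.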
Configure image replication and push
Configure Harbor-01
Administration -> Registries -> New Endpoint, pointing at Harbor-02

Administration -> Replications -> New Replication Rule
harbor01-pull-harbor02: pulls data from harbor02

harbor01-push-harbor02: pushes data to harbor02

Configure Harbor-02
Configure the same rules in the opposite direction.
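The UI steps above can also be scripted against the Harbor v2 REST API; a sketch that registers Harbor-02 as a replication endpoint on Harbor-01 (the credentials are the placeholders used earlier in this document):

```shell
# Registration payload for the Harbor-02 endpoint (type "harbor", basic auth).
cat > /tmp/registry-harbor02.json << 'EOF'
{
  "name": "harbor-02",
  "type": "harbor",
  "url": "http://192.168.120.59",
  "credential": {
    "type": "basic",
    "access_key": "admin",
    "access_secret": "Harbor@Admin123"
  },
  "insecure": true
}
EOF
# Create the endpoint on Harbor-01 (POST /api/v2.0/registries):
# curl -u admin:Harbor@Admin123 -H "Content-Type: application/json" \
#      -X POST --data @/tmp/registry-harbor02.json http://192.168.120.58/api/v2.0/registries
```

Replication rules can likewise be created via POST /api/v2.0/replication/policies once the endpoint exists.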



References
1. https://github.com/goharbor/harbor/pull/22311
2. https://github.com/octohelm/harbor