
Performance testing of different erasure-code configurations on a multi-node, multi-drive MinIO cluster over IPoIB

 ·  ☕ 7 min read

The previous round of testing found the NIC to be the bottleneck, so this post runs follow-up tests over IPoIB.

1. MinIO cluster environment

1.1 Creating the MinIO cluster

export CONTAINER_CLI=nerdctl
export IMAGE=minio/minio:RELEASE.2025-04-22T22-12-26Z
export ROOT_USER=minioadmin
export ROOT_PASSWORD=minioadmin
export MINIO_ERASURE_SET_DRIVE_COUNT=16
export MINIO_STORAGE_CLASS_STANDARD=EC:4
export POOL_0="http://minioib{1...4}/mnt/data{0...3}"

$CONTAINER_CLI run -d \
  --net host \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  --ulimit nofile=1048576:1048576 \
  --memory-swappiness=0 \
  --name minio \
  -v /mnt/data0:/mnt/data0 \
  -v /mnt/data1:/mnt/data1 \
  -v /mnt/data2:/mnt/data2 \
  -v /mnt/data3:/mnt/data3 \
  -e "MINIO_ROOT_USER=$ROOT_USER" \
  -e "MINIO_ROOT_PASSWORD=$ROOT_PASSWORD" \
  -e "MINIO_ERASURE_SET_DRIVE_COUNT=$MINIO_ERASURE_SET_DRIVE_COUNT" \
  -e "MINIO_STORAGE_CLASS_STANDARD=$MINIO_STORAGE_CLASS_STANDARD" \
  $IMAGE server \
  $POOL_0 \
  --console-address ":9090" \
  --address ":9000"
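As a side note, the settings above fix the usable/raw capacity ratio: with `MINIO_ERASURE_SET_DRIVE_COUNT=16` and `EC:4`, each stripe carries 12 data and 4 parity shards, so only 12/16 of raw space holds data. A quick sketch (the raw pool size here is a made-up example, not a measurement):

```shell
#!/bin/sh
# Sketch: usable capacity implied by the erasure-code settings above.
# RAW_TIB is a placeholder raw pool size, not a measured value.
STRIPE=16   # MINIO_ERASURE_SET_DRIVE_COUNT
PARITY=4    # from MINIO_STORAGE_CLASS_STANDARD=EC:4
RAW_TIB=8

awk -v s="$STRIPE" -v p="$PARITY" -v raw="$RAW_TIB" 'BEGIN {
  data = s - p
  printf "layout %d+%d, usable fraction %.3f, usable %.1f TiB of %.1f TiB raw\n",
         data, p, data / s, raw * data / s, raw
}'
```

Running it prints `layout 12+4, usable fraction 0.750, usable 6.0 TiB of 8.0 TiB raw`.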

1.2 Tearing down the MinIO cluster

export CONTAINER_CLI=nerdctl
$CONTAINER_CLI rm -f minio
for i in {0..3}; do
  rm -rf /mnt/data${i}/*
  rm -rf /mnt/data${i}/.minio.sys
done

2. 2 data + 2 parity shards

2.1 Cluster topology

●  minioib1:9000
   Uptime: 36 seconds
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minioib2:9000
   Uptime: 13 seconds
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minioib3:9000
   Uptime: 55 seconds
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minioib4:9000
   Uptime: 1 minute
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

┌──────┬───────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage          │ Erasure stripe size │ Erasure sets │
│ 1st  │ 0.7% (total: 5.8 TiB) │ 4                   │ 4            │
└──────┴───────────────────────┴─────────────────────┴──────────────┘

16 drives online, 0 drives offline, EC:2
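With 4 erasure sets of stripe 4 (2 data + 2 parity), a 4 MiB object is split into two 2 MiB data shards plus two 2 MiB parity shards, so every PUT writes 8 MiB of physical data. A back-of-the-envelope sketch:

```shell
#!/bin/sh
# Sketch: per-object shard size and write amplification under EC 2+2.
OBJ_MIB=4   # warp --obj.size
DATA=2
PARITY=2

awk -v o="$OBJ_MIB" -v k="$DATA" -v m="$PARITY" 'BEGIN {
  shard = o / k                # MiB per shard
  physical = shard * (k + m)   # MiB actually written per PUT
  printf "shard: %.1f MiB, physical write: %.1f MiB, write amplification: %.2f\n",
         shard, physical, physical / o
}'
```

This prints `shard: 2.0 MiB, physical write: 8.0 MiB, write amplification: 2.00`, which is the 2.0 MiB shard size and 2.00 write amplification that reappear in the summary table at the end.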

2.2 4 MiB objects

  • get
warp get \
  --host minioib1:9000,minioib2:9000,minioib3:9000,minioib4:9000 \
  --access-key minioadmin \
  --secret-key minioadmin \
  --tls=false \
  --obj.size 4MiB \
  --concurrent 32 \
  --duration 5m
Reqs: 564672, Errs:0, Objs:564672, Bytes: 2205.75GiB
 -       GET Average: 1886 Obj/s, 7543.7MiB/s; Current 1837 Obj/s, 7347.5MiB/s, 16.7 ms/req, TTFB: 6.3m


Report: GET. Concurrency: 32. Ran: 4m57s
 * Average: 7543.74 MiB/s, 1885.93 obj/s
 * Reqs: Avg: 16.9ms, 50%: 16.4ms, 90%: 21.0ms, 99%: 28.8ms, Fastest: 6.1ms, Slowest: 277.9ms, StdDev: 3.7ms
 * TTFB: Avg: 6ms, Best: 3ms, 25th: 5ms, Median: 6ms, 75th: 7ms, 90th: 9ms, 99th: 14ms, Worst: 222ms StdDev: 3ms

Throughput by host:
 * http://minioib1:9000: Avg: 1965.69 MiB/s, 491.42 obj/s
 * http://minioib2:9000: Avg: 1984.22 MiB/s, 496.06 obj/s
 * http://minioib3:9000: Avg: 1798.07 MiB/s, 449.52 obj/s
 * http://minioib4:9000: Avg: 1798.16 MiB/s, 449.54 obj/s

Throughput, split into 297 x 1s:
 * Fastest: 8300.2MiB/s, 2075.05 obj/s
 * 50% Median: 7601.3MiB/s, 1900.33 obj/s
 * Slowest: 6087.2MiB/s, 1521.80 obj/s
  • put
warp put \
  --host minioib1:9000,minioib2:9000,minioib3:9000,minioib4:9000 \
  --access-key minioadmin \
  --secret-key minioadmin \
  --tls=false \
  --obj.size 4MiB \
  --concurrent 32 \
  --duration 5m
Reqs: 223940, Errs:0, Objs:223940, Bytes: 874.77GiB
 -       PUT Average: 747 Obj/s, 2989.5MiB/s; Current 733 Obj/s, 2930.1MiB/s, 43.0 ms/req


Report: PUT. Concurrency: 32. Ran: 4m57s
 * Average: 2989.45 MiB/s, 747.36 obj/s
 * Reqs: Avg: 42.8ms, 50%: 41.8ms, 90%: 52.0ms, 99%: 61.7ms, Fastest: 29.2ms, Slowest: 131.0ms, StdDev: 7.2ms

Throughput by host:
 * http://minioib1:9000: Avg: 845.34 MiB/s, 211.34 obj/s
 * http://minioib2:9000: Avg: 838.82 MiB/s, 209.71 obj/s
 * http://minioib3:9000: Avg: 654.48 MiB/s, 163.62 obj/s
 * http://minioib4:9000: Avg: 649.76 MiB/s, 162.44 obj/s

Throughput, split into 297 x 1s:
 * Fastest: 3066.6MiB/s, 766.66 obj/s
 * 50% Median: 2990.6MiB/s, 747.65 obj/s
 * Slowest: 2840.2MiB/s, 710.06 obj/s

3. 12 data + 4 parity shards

3.1 Cluster topology

This is the default topology MinIO chooses for four nodes with four drives each.

●  minioib1:9000
   Uptime: 20 minutes
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minioib2:9000
   Uptime: 16 minutes
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minioib3:9000
   Uptime: 19 minutes
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minioib4:9000
   Uptime: 18 minutes
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

┌──────┬───────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage          │ Erasure stripe size │ Erasure sets │
│ 1st  │ 9.5% (total: 8.7 TiB) │ 16                  │ 1            │
└──────┴───────────────────────┴─────────────────────┴──────────────┘

56 GiB Used, 1 Bucket, 14,404 Objects
16 drives online, 0 drives offline, EC:4

3.2 4 MiB objects

  • get
warp get \
  --host minioib1:9000,minioib2:9000,minioib3:9000,minioib4:9000 \
  --access-key minioadmin \
  --secret-key minioadmin \
  --tls=false \
  --obj.size 4MiB \
  --concurrent 32 \
  --duration 5m
Reqs: 561238, Errs:0, Objs:561238, Bytes: 2192.34GiB
 -       GET Average: 1875 Obj/s, 7500.6MiB/s; Current 1676 Obj/s, 6703.3MiB/s, 17.4 ms/req, TTFB: 7.3m


Report: GET. Concurrency: 32. Ran: 4m57s
 * Average: 7500.64 MiB/s, 1875.16 obj/s
 * Reqs: Avg: 17.1ms, 50%: 16.4ms, 90%: 21.5ms, 99%: 32.5ms, Fastest: 6.0ms, Slowest: 286.1ms, StdDev: 4.5ms
 * TTFB: Avg: 7ms, Best: 3ms, 25th: 6ms, Median: 7ms, 75th: 8ms, 90th: 10ms, 99th: 20ms, Worst: 219ms StdDev: 3ms

Throughput by host:
 * http://minioib1:9000: Avg: 1932.86 MiB/s, 483.22 obj/s
 * http://minioib2:9000: Avg: 1949.82 MiB/s, 487.45 obj/s
 * http://minioib3:9000: Avg: 1807.09 MiB/s, 451.77 obj/s
 * http://minioib4:9000: Avg: 1810.60 MiB/s, 452.65 obj/s

Throughput, split into 297 x 1s:
 * Fastest: 8101.2MiB/s, 2025.30 obj/s
 * 50% Median: 7551.7MiB/s, 1887.93 obj/s
 * Slowest: 6658.3MiB/s, 1664.59 obj/s
  • put
warp put \
  --host minioib1:9000,minioib2:9000,minioib3:9000,minioib4:9000 \
  --access-key minioadmin \
  --secret-key minioadmin \
  --tls=false \
  --obj.size 4MiB \
  --concurrent 32 \
  --duration 5m
Reqs: 209711, Errs:0, Objs:209711, Bytes: 819.18GiB
 -       PUT Average: 701 Obj/s, 2805.6MiB/s; Current 732 Obj/s, 2926.8MiB/s, 45.2 ms/req


Report: PUT. Concurrency: 32. Ran: 4m57s
 * Average: 2805.38 MiB/s, 701.34 obj/s
 * Reqs: Avg: 45.7ms, 50%: 44.6ms, 90%: 54.9ms, 99%: 78.7ms, Fastest: 28.7ms, Slowest: 254.7ms, StdDev: 8.8ms

Throughput by host:
 * http://minioib1:9000: Avg: 750.25 MiB/s, 187.56 obj/s
 * http://minioib2:9000: Avg: 777.20 MiB/s, 194.30 obj/s
 * http://minioib3:9000: Avg: 643.67 MiB/s, 160.92 obj/s
 * http://minioib4:9000: Avg: 633.79 MiB/s, 158.45 obj/s

Throughput, split into 297 x 1s:
 * Fastest: 3039.6MiB/s, 759.91 obj/s
 * 50% Median: 2814.9MiB/s, 703.74 obj/s
 * Slowest: 2500.2MiB/s, 625.06 obj/s

4. 10 data + 6 parity shards

4.1 Cluster topology

●  minioib1:9000
   Uptime: 20 seconds
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minioib2:9000
   Uptime: 16 seconds
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minioib3:9000
   Uptime: 18 seconds
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minioib4:9000
   Uptime: 13 seconds
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

┌──────┬───────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage          │ Erasure stripe size │ Erasure sets │
│ 1st  │ 0.5% (total: 7.3 TiB) │ 16                  │ 1            │
└──────┴───────────────────────┴─────────────────────┴──────────────┘

16 drives online, 0 drives offline, EC:6

4.2 4 MiB objects

  • get
warp get \
  --host minioib1:9000,minioib2:9000,minioib3:9000,minioib4:9000 \
  --access-key minioadmin \
  --secret-key minioadmin \
  --tls=false \
  --obj.size 4MiB \
  --concurrent 32 \
  --duration 5m
Reqs: 544782, Errs:0, Objs:544782, Bytes: 2128.05GiB
 -       GET Average: 1820 Obj/s, 7281.0MiB/s; Current 1426 Obj/s, 5704.3MiB/s, 17.3 ms/req, TTFB: 6.9m


Report: GET. Concurrency: 32. Ran: 4m57s
 * Average: 7280.37 MiB/s, 1820.09 obj/s
 * Reqs: Avg: 17.6ms, 50%: 17.0ms, 90%: 22.1ms, 99%: 32.6ms, Fastest: 5.7ms, Slowest: 290.0ms, StdDev: 4.5ms
 * TTFB: Avg: 7ms, Best: 3ms, 25th: 6ms, Median: 6ms, 75th: 8ms, 90th: 9ms, 99th: 18ms, Worst: 232ms StdDev: 3ms

Throughput by host:
 * http://minioib1:9000: Avg: 1850.52 MiB/s, 462.63 obj/s
 * http://minioib2:9000: Avg: 1869.08 MiB/s, 467.27 obj/s
 * http://minioib3:9000: Avg: 1769.72 MiB/s, 442.43 obj/s
 * http://minioib4:9000: Avg: 1790.80 MiB/s, 447.70 obj/s

Throughput, split into 297 x 1s:
 * Fastest: 7952.3MiB/s, 1988.08 obj/s
 * 50% Median: 7328.1MiB/s, 1832.01 obj/s
 * Slowest: 5704.3MiB/s, 1426.08 obj/s
  • put
warp put \
  --host minioib1:9000,minioib2:9000,minioib3:9000,minioib4:9000 \
  --access-key minioadmin \
  --secret-key minioadmin \
  --tls=false \
  --obj.size 4MiB \
  --concurrent 32 \
  --duration 5m
Reqs: 220069, Errs:0, Objs:220069, Bytes: 859.64GiB
 -       PUT Average: 736 Obj/s, 2945.5MiB/s; Current 747 Obj/s, 2986.1MiB/s, 43.5 ms/req


Report: PUT. Concurrency: 32. Ran: 4m57s
 * Average: 2945.83 MiB/s, 736.46 obj/s
 * Reqs: Avg: 43.5ms, 50%: 42.6ms, 90%: 52.3ms, 99%: 71.8ms, Fastest: 29.3ms, Slowest: 257.5ms, StdDev: 8.2ms

Throughput by host:
 * http://minioib1:9000: Avg: 825.80 MiB/s, 206.45 obj/s
 * http://minioib2:9000: Avg: 815.13 MiB/s, 203.78 obj/s
 * http://minioib3:9000: Avg: 655.48 MiB/s, 163.87 obj/s
 * http://minioib4:9000: Avg: 647.87 MiB/s, 161.97 obj/s

Throughput, split into 297 x 1s:
 * Fastest: 3086.0MiB/s, 771.49 obj/s
 * 50% Median: 2952.5MiB/s, 738.12 obj/s
 * Slowest: 2656.3MiB/s, 664.07 obj/s

5. 8 data + 8 parity shards

5.1 Cluster topology

●  minioib1:9000
   Uptime: 22 seconds
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minioib2:9000
   Uptime: 19 seconds
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minioib3:9000
   Uptime: 25 seconds
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minioib4:9000
   Uptime: 28 seconds
   Version: 2025-04-22T22:12:26Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

┌──────┬───────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage          │ Erasure stripe size │ Erasure sets │
│ 1st  │ 0.4% (total: 5.8 TiB) │ 16                  │ 1            │
└──────┴───────────────────────┴─────────────────────┴──────────────┘

16 drives online, 0 drives offline, EC:8

5.2 4 MiB objects

  • get
warp get \
  --host minioib1:9000,minioib2:9000,minioib3:9000,minioib4:9000 \
  --access-key minioadmin \
  --secret-key minioadmin \
  --tls=false \
  --obj.size 4MiB \
  --concurrent 32 \
  --duration 5m
Reqs: 559138, Errs:0, Objs:559138, Bytes: 2184.13GiB
 -       GET Average: 1868 Obj/s, 7473.3MiB/s; Current 1856 Obj/s, 7425.5MiB/s, 16.9 ms/req, TTFB: 7.0m


Report: GET. Concurrency: 32. Ran: 4m57s
 * Average: 7474.37 MiB/s, 1868.59 obj/s
 * Reqs: Avg: 17.1ms, 50%: 16.5ms, 90%: 21.4ms, 99%: 30.9ms, Fastest: 5.8ms, Slowest: 277.3ms, StdDev: 4.1ms
 * TTFB: Avg: 7ms, Best: 3ms, 25th: 6ms, Median: 7ms, 75th: 8ms, 90th: 9ms, 99th: 18ms, Worst: 221ms StdDev: 3ms

Throughput by host:
 * http://minioib1:9000: Avg: 1914.94 MiB/s, 478.74 obj/s
 * http://minioib2:9000: Avg: 1933.85 MiB/s, 483.46 obj/s
 * http://minioib3:9000: Avg: 1800.13 MiB/s, 450.03 obj/s
 * http://minioib4:9000: Avg: 1824.05 MiB/s, 456.01 obj/s

Throughput, split into 297 x 1s:
 * Fastest: 8090.1MiB/s, 2022.54 obj/s
 * 50% Median: 7503.7MiB/s, 1875.93 obj/s
 * Slowest: 6452.4MiB/s, 1613.10 obj/s
  • put
warp put \
  --host minio1:9000,minio2:9000,minio3:9000,minio4:9000 \
  --access-key minioadmin \
  --secret-key minioadmin \
  --tls=false \
  --obj.size 4MiB \
  --concurrent 32 \
  --duration 5m
Reqs: 87673, Errs:0, Objs:87673, Bytes: 342.47GiB
 -       PUT Average: 293 Obj/s, 1170.8MiB/s; Current 293 Obj/s, 1171.4MiB/s, 109.5 ms/req


Report: PUT. Concurrency: 32. Ran: 4m57s
 * Average: 1170.79 MiB/s, 292.70 obj/s
 * Reqs: Avg: 109.3ms, 50%: 93.2ms, 90%: 161.0ms, 99%: 257.0ms, Fastest: 43.5ms, Slowest: 478.7ms, StdDev: 37.5ms

Throughput by host:
 * http://minio1:9000: Avg: 287.68 MiB/s, 71.92 obj/s
 * http://minio2:9000: Avg: 322.23 MiB/s, 80.56 obj/s
 * http://minio3:9000: Avg: 303.02 MiB/s, 75.76 obj/s
 * http://minio4:9000: Avg: 257.77 MiB/s, 64.44 obj/s

Throughput, split into 297 x 1s:
 * Fastest: 1179.5MiB/s, 294.88 obj/s
 * 50% Median: 1172.9MiB/s, 293.21 obj/s
 * Slowest: 1142.7MiB/s, 285.67 obj/s

6. Summary

6.1 Single-drive throughput

  • single-threaded
fio -numjobs=1 -fallocate=none -iodepth=2 -ioengine=libaio -direct=1 -rw=read -bs=4M --group_reporting -size=100M -time_based -runtime=30 -name=fio-test -directory=/mnt/data0
   READ: bw=2990MiB/s (3135MB/s), 2990MiB/s-2990MiB/s (3135MB/s-3135MB/s), io=87.6GiB (94.1GB), run=30002-30002msec
fio -numjobs=1 -fallocate=none -iodepth=2 -ioengine=libaio -direct=1 -rw=write -bs=4M --group_reporting -size=100M -time_based -runtime=30 -name=fio-test -directory=/mnt/data0
  WRITE: bw=981MiB/s (1028MB/s), 981MiB/s-981MiB/s (1028MB/s-1028MB/s), io=28.7GiB (30.9GB), run=30007-30007msec
  • multi-threaded
fio -numjobs=32 -fallocate=none -iodepth=2 -ioengine=libaio -direct=1 -rw=read -bs=4M --group_reporting -size=100M -time_based -runtime=30 -name=fio-test -directory=/mnt/data0
   READ: bw=3133MiB/s (3286MB/s), 3133MiB/s-3133MiB/s (3286MB/s-3286MB/s), io=91.0GiB (98.8GB), run=30064-30064msec
fio -numjobs=32 -fallocate=none -iodepth=2 -ioengine=libaio -direct=1 -rw=write -bs=4M --group_reporting -size=100M -time_based -runtime=30 -name=fio-test -directory=/mnt/data0
  WRITE: bw=958MiB/s (1005MB/s), 958MiB/s-958MiB/s (1005MB/s-1005MB/s), io=29.1GiB (31.2GB), run=31071-31071msec
Threads  Operation  Throughput
1        read       3023 MiB/s
1        write      981 MiB/s
128      read       3133 MiB/s
128      write      958 MiB/s

6.2 Throughput summary

EC layout (K/M) | Op  | Object throughput (MiB/s) | R/W amplification (physical) | Effective parallelism (quorum) | Per-drive estimate (MiB/s) | Shard size
2/2             | GET | 7543.74                   | 1.50 (3/2)                   | 3                              | 3771.87                    | 2.0 MiB
2/2             | PUT | 2989.45                   | 2.00 (4/2)                   | 3                              | 1992.96                    | 2.0 MiB
12/4            | GET | 7500.64                   | 1.08 (13/12)                 | 13                             | 625.05                     | 341.3 KiB
12/4            | PUT | 2805.38                   | 1.33 (16/12)                 | 12                             | 311.71                     | 341.3 KiB
10/6            | GET | 7280.37                   | 1.10 (11/10)                 | 11                             | 728.03                     | 409.6 KiB
10/6            | PUT | 2945.83                   | 1.60 (16/10)                 | 10                             | 471.33                     | 409.6 KiB
8/8             | GET | 7474.37                   | 1.125 (9/8)                  | 9                              | 934.29                     | 512.0 KiB
8/8             | PUT | 1170.79                   | 2.00 (16/8)                  | 9                              | 260.17                     | 512.0 KiB
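The derived columns follow from the measured numbers: physical throughput is object throughput times the amplification factor, and the per-drive estimate divides that by the effective parallelism. A sketch for the 2/2 PUT row:

```shell
#!/bin/sh
# Sketch: recompute the derived columns for the 2/2 PUT row.
OBJ_MIBS=2989.45   # measured PUT object throughput
AMP=2.0            # write amplification: 4 shards written / 2 data shards
PAR=3              # effective parallelism (quorum) from the table

awk -v t="$OBJ_MIBS" -v a="$AMP" -v n="$PAR" 'BEGIN {
  printf "physical: %.2f MiB/s, per drive: %.2f MiB/s\n", t * a, t * a / n
}'
```

This prints `physical: 5978.90 MiB/s, per drive: 1992.97 MiB/s`, matching the table row up to rounding.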

As a supplementary test, fio single-drive 400 KiB random writes:
  WRITE: bw=905MiB/s (949MB/s), 905MiB/s-905MiB/s (949MB/s-949MB/s), io=106GiB (114GB), run=120001-120001msec

This shows the bottleneck is not disk I/O but MinIO itself.

  • get consumes NIC bandwidth

The test node's IB bandwidth reaches 50+ Gbps, and the MinIO nodes reach 10+ GBps.
Disk I/O utilization stays low.
CPU usage is under 20 cores.

Because the storage is SSD, a single drive can already deliver 3 GiB/s, so the number of data shards has little impact: every layout reaches roughly 7 GiB/s.
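As a sanity check on those figures, the aggregate GET throughput converts to wire rate as follows (1 MiB/s = 8 × 2^20 bit/s; EC overhead and inter-node shard traffic would come on top of this):

```shell
#!/bin/sh
# Sketch: convert aggregate GET throughput to network line rate.
MIBS=7543.74   # section 2 GET average, MiB/s

awk -v t="$MIBS" 'BEGIN {
  gbps = t * 8 * 1048576 / 1e9   # MiB/s -> Gbit/s (decimal gigabit)
  printf "%.1f MiB/s ~= %.1f Gbit/s on the wire\n", t, gbps
}'
```

This prints `7543.7 MiB/s ~= 63.3 Gbit/s on the wire`, i.e. the benchmark alone pushes tens of gigabits, in line with the 50+ Gbps observed on the test node's IB link.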

  • put consumes disk and CPU resources

The test node's bandwidth stays very low, while the MinIO nodes reach 13+ GBps.
Two nodes show disk I/O utilization close to 100%; the other two sit around 20%.
CPU usage is 20+ cores.

  • parity shards consume CPU resources

More parity shards mean more time spent on parity computation during writes; at the same time, the larger shard files may improve write performance and win some of that time back.

With four nodes of four drives each (16 drives in total), 10+6 and 12+4 are configurations worth trying.
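The trade-off behind that recommendation can be made explicit: usable capacity is K/(K+M), and a stripe survives up to M lost shards. A sketch over the four tested layouts (standard erasure-coding arithmetic, not MinIO-specific):

```shell
#!/bin/sh
# Sketch: usable-capacity fraction and fault tolerance per tested EC layout.
for layout in 2/2 12/4 10/6 8/8; do
  k=${layout%/*}   # data shards (K)
  m=${layout#*/}   # parity shards (M)
  awk -v k="$k" -v m="$m" 'BEGIN {
    printf "%2d+%d: usable %.1f%%, tolerates %d lost shards\n",
           k, m, 100 * k / (k + m), m
  }'
done
```

12+4 keeps 75.0% of raw capacity and 10+6 keeps 62.5%, versus 50.0% for both 2+2 and 8+8, which is why the middle layouts look attractive here.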

