
Migrating Elasticsearch cluster data with elasticsearch-dump

Date: 2020-09-27

Background

Recently, one node in a 5-node ES cluster, the elected master, was running at a very high CPU load. Three of the nodes are master-eligible and all five carry the default data role. The first attempt was to restart the elected master so that another node would be elected and take over the pressure, but this did not help.
Analysis with Cerebro then showed that an index named geo_information had only a single shard, more than 20 GB in size. As a rule of thumb, a single shard performs best when kept under roughly 20 GB on SSD and 10 GB on HDD, so this unbalanced sharding concentrated the cluster's load on one node.
The fix is to change the sharding of geo_information. Since the shard count of an existing index cannot be changed, the only option is to create a new index (5 shards by default here), then migrate the mapping and the data into it. This is done below with elasticsearch-dump.

[root@VM-88-87-centos bin]# curl 192.168.88.87:9200/_cat/nodes
192.168.88.39   27 88  4  0.33  0.30  0.26 dim - es-39
192.168.88.135  72 99 99 18.33 18.52 18.45 di  - es-135
192.168.88.40   32 99 98 21.67 21.03 20.73 dim - es-40
192.168.88.33   49 94  3  0.41  0.34  0.27 dim * es-33
192.168.88.87   35 95  0  0.01  0.04  0.08 di  - es-87
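Cerebro surfaced the oversized single shard, but the same information is available from the CAT API; a quick check of where the index's shards live and how big they are:

curl '192.168.88.87:9200/_cat/shards/geo_information?v'

The ?v flag just adds a header row; each line of output shows the shard number, its size, and the node it is allocated to.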

Steps

1. Install the Node.js environment

[root@VM-88-87-centos ~]# wget https://cdn.npm.taobao.org/dist/node/v14.12.0/node-v14.12.0-linux-x64.tar.xz
[root@VM-88-87-centos ~]# xz -d node-v14.12.0-linux-x64.tar.xz
[root@VM-88-87-centos ~]# tar -xvf node-v14.12.0-linux-x64.tar -C /usr/local/
[root@VM-88-87-centos ~]# cd /usr/local/
[root@VM-88-87-centos local]# ln -s node-v14.12.0-linux-x64 node
[root@VM-88-87-centos local]# vim /etc/profile
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
export NODE_HOME=/usr/local/node
export NODE_PATH=$NODE_HOME/lib/node_modules
export PATH=$PATH:${JAVA_HOME}/bin:$NODE_HOME/bin
[root@VM-88-87-centos local]# source /etc/profile
[root@VM-88-87-centos local]# node -v
v14.12.0
[root@VM-88-87-centos local]# npm -v
6.14.8

2. Install elasticsearch-dump

[root@VM-88-87-centos ~]# wget https://codeload.github.com/elasticsearch-dump/elasticsearch-dump/tar.gz/v6.33.4
[root@VM-88-87-centos ~]# tar zxvf v6.33.4
[root@VM-88-87-centos ~]# cd elasticsearch-dump-6.33.4/
[root@VM-88-87-centos elasticsearch-dump-6.33.4]# ls
bin                             Dockerfile       LICENSE.txt   test
docker-compose-test-helper.yml  elasticdump.jpg  node_modules  transforms
docker-compose.yml              elasticdump.js   package.json
docker-entrypoint.sh            lib              README.md
[root@VM-88-87-centos elasticsearch-dump-6.33.4]# cd bin/
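Note that installing from the release tarball leaves you to resolve the runtime dependencies by hand, as the next step shows. On a machine with internet access, a simpler alternative is the global npm install from the project README, which pulls everything in automatically:

npm install elasticdump -g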

Install the dependencies one by one, as elasticdump complains about each missing module:
p-queue delay ini s3urls lodash requestretry request lossless-json big.js aws4 aws-sdk async socks5-http-client socks5-https-client bytes JSONStream s3-stream-upload http-status
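Rather than chasing one MODULE_NOT_FOUND error at a time, as the session below illustrates with p-queue, the whole list can be installed in a single npm invocation (run from the elasticsearch-dump-6.33.4 directory):

cd /root/elasticsearch-dump-6.33.4
npm install p-queue delay ini s3urls lodash requestretry request lossless-json \
    big.js aws4 aws-sdk async socks5-http-client socks5-https-client bytes \
    JSONStream s3-stream-upload http-status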

[root@VM-88-87-centos bin]# ./elasticdump --input=http://192.168.88.87:9200/geo_information --output=http://192.168.88.87:9200/geo_information_new --type=mapping
internal/modules/cjs/loader.js:896
    throw err;
    ^

Error: Cannot find module 'p-queue'
Require stack:
- /root/elasticsearch-dump-6.33.4/lib/processor.js
- /root/elasticsearch-dump-6.33.4/elasticdump.js
- /root/elasticsearch-dump-6.33.4/bin/elasticdump
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:893:15)
    at Function.Module._load (internal/modules/cjs/loader.js:743:27)
    at Module.require (internal/modules/cjs/loader.js:965:19)
    at require (internal/modules/cjs/helpers.js:88:18)
    at Object.<anonymous> (/root/elasticsearch-dump-6.33.4/lib/processor.js:2:29)
    at Module._compile (internal/modules/cjs/loader.js:1076:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
    at Module.load (internal/modules/cjs/loader.js:941:32)
    at Function.Module._load (internal/modules/cjs/loader.js:782:14)
    at Module.require (internal/modules/cjs/loader.js:965:19) {
  code: 'MODULE_NOT_FOUND',
  requireStack: [
    '/root/elasticsearch-dump-6.33.4/lib/processor.js',
    '/root/elasticsearch-dump-6.33.4/elasticdump.js',
    '/root/elasticsearch-dump-6.33.4/bin/elasticdump'
  ]
}
[root@VM-88-87-centos bin]# npm install p-queue
npm notice created a lockfile as package-lock.json. You should commit this file.
+ p-queue@6.6.1
added 4 packages from 2 contributors and audited 5 packages in 5.154s

1 package is looking for funding
  run `npm fund` for details

found 0 vulnerabilities

The final npm dependency tree:

[root@VM-88-87-centos bin]# npm list
elasticdump@6.33.4 /root/elasticsearch-dump-6.33.4
├─┬ async@2.6.3
│ └── lodash@4.17.20 deduped
├─┬ aws-sdk@2.761.0
│ ├─┬ buffer@4.9.2
│ │ ├── base64-js@1.3.1
│ │ ├── ieee754@1.1.13 deduped
│ │ └── isarray@1.0.0
│ ├── events@1.1.1
│ ├── ieee754@1.1.13
│ ├── jmespath@0.15.0
│ ├── querystring@0.2.0
│ ├── sax@1.2.1
│ ├─┬ url@0.10.3
│ │ ├── punycode@1.3.2
│ │ └── querystring@0.2.0 deduped
│ ├── uuid@3.3.2
│ └─┬ xml2js@0.4.19
│   ├── sax@1.2.1 deduped
│   └── xmlbuilder@9.0.7
├── aws4@1.10.1
├── big.js@5.2.2
├── bytes@3.1.0
├── delay@4.4.0
├── http-status@1.4.2
├── ini@1.3.5
├─┬ JSONStream@1.3.5
│ ├── jsonparse@1.3.1
│ └── through@2.3.8
├── lodash@4.17.20
├── lossless-json@1.0.4
├── minimist@1.2.5
├─┬ p-queue@6.6.1
│ ├── eventemitter3@4.0.7
│ └─┬ p-timeout@3.2.0
│   └── p-finally@1.0.0
├─┬ request@2.88.2
│ ├── aws-sign2@0.7.0
│ ├── aws4@1.10.1
│ ├── caseless@0.12.0
│ ├─┬ combined-stream@1.0.8
│ │ └── delayed-stream@1.0.0
│ ├── extend@3.0.2
│ ├── forever-agent@0.6.1
│ ├─┬ form-data@2.3.3
│ │ ├── asynckit@0.4.0
│ │ ├── combined-stream@1.0.8 deduped
│ │ └── mime-types@2.1.27 deduped
│ ├─┬ har-validator@5.1.5
│ │ ├─┬ ajv@6.12.5
│ │ │ ├── fast-deep-equal@3.1.3
│ │ │ ├── fast-json-stable-stringify@2.1.0
│ │ │ ├── json-schema-traverse@0.4.1
│ │ │ └─┬ uri-js@4.4.0
│ │ │   └── punycode@2.1.1
│ │ └── har-schema@2.0.0
│ ├─┬ http-signature@1.2.0
│ │ ├── assert-plus@1.0.0
│ │ ├─┬ jsprim@1.4.1
│ │ │ ├── assert-plus@1.0.0 deduped
│ │ │ ├── extsprintf@1.3.0
│ │ │ ├── json-schema@0.2.3
│ │ │ └─┬ verror@1.10.0
│ │ │   ├── assert-plus@1.0.0 deduped
│ │ │   ├── core-util-is@1.0.2 deduped
│ │ │   └── extsprintf@1.3.0 deduped
│ │ └─┬ sshpk@1.16.1
│ │   ├─┬ asn1@0.2.4
│ │   │ └── safer-buffer@2.1.2 deduped
│ │   ├── assert-plus@1.0.0 deduped
│ │   ├─┬ bcrypt-pbkdf@1.0.2
│ │   │ └── tweetnacl@0.14.5 deduped
│ │   ├─┬ dashdash@1.14.1
│ │   │ └── assert-plus@1.0.0 deduped
│ │   ├─┬ ecc-jsbn@0.1.2
│ │   │ ├── jsbn@0.1.1 deduped
│ │   │ └── safer-buffer@2.1.2 deduped
│ │   ├─┬ getpass@0.1.7
│ │   │ └── assert-plus@1.0.0 deduped
│ │   ├── jsbn@0.1.1
│ │   ├── safer-buffer@2.1.2
│ │   └── tweetnacl@0.14.5
│ ├── is-typedarray@1.0.0
│ ├── isstream@0.1.2
│ ├── json-stringify-safe@5.0.1
│ ├─┬ mime-types@2.1.27
│ │ └── mime-db@1.44.0
│ ├── oauth-sign@0.9.0
│ ├── performance-now@2.1.0
│ ├── qs@6.5.2
│ ├── safe-buffer@5.2.1
│ ├─┬ tough-cookie@2.5.0
│ │ ├── psl@1.8.0
│ │ └── punycode@2.1.1
│ ├─┬ tunnel-agent@0.6.0
│ │ └── safe-buffer@5.2.1 deduped
│ └── uuid@3.3.2 deduped
├─┬ requestretry@4.1.1
│ ├── extend@3.0.2 deduped
│ ├── lodash@4.17.20 deduped
│ └── when@3.7.8
├─┬ s3-stream-upload@2.0.2
│ ├── buffer-queue@1.0.0
│ └─┬ readable-stream@2.3.7
│   ├── core-util-is@1.0.2
│   ├── inherits@2.0.4
│   ├── isarray@1.0.0 deduped
│   ├── process-nextick-args@2.0.1
│   ├── safe-buffer@5.1.2
│   ├─┬ string_decoder@1.1.1
│   │ └── safe-buffer@5.1.2
│   └── util-deprecate@1.0.2
├─┬ s3urls@1.5.2
│ ├── minimist@1.2.5 deduped
│ └─┬ s3signed@0.1.0
│   └─┬ aws-sdk@2.761.0
│     ├── buffer@4.9.2 deduped
│     ├── events@1.1.1 deduped
│     ├── ieee754@1.1.13 deduped
│     ├── jmespath@0.15.0 deduped
│     ├── querystring@0.2.0 deduped
│     ├── sax@1.2.1 deduped
│     ├── url@0.10.3 deduped
│     ├── uuid@3.3.2 deduped
│     └── xml2js@0.4.19 deduped
├─┬ socks5-http-client@1.0.4
│ └─┬ socks5-client@1.2.8
│   └─┬ ip-address@6.1.0
│     ├── jsbn@1.1.0
│     ├── lodash@4.17.20 deduped
│     └── sprintf-js@1.1.2
└─┬ socks5-https-client@1.2.1
  └── socks5-client@1.2.8 deduped

3. Migrate the data
Because the shard settings of the old and new index intentionally differ, the new index can simply be created with the defaults; there is no need to migrate the settings.
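If you would rather not rely on the cluster defaults, the new index can also be created explicitly before dumping into it. A minimal sketch with the 5 shards intended here (the replica count of 1 is an assumption):

curl -XPUT 'http://192.168.88.87:9200/geo_information_new' \
    -H 'Content-Type: application/json' \
    -d '{"settings": {"number_of_shards": 5, "number_of_replicas": 1}}'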
Migrate the mapping:

[root@VM-88-87-centos bin]# ./elasticdump --input=http://192.168.88.87:9200/geo_information --output=http://192.168.88.87:9200/geo_information_new --type=mapping
Sun, 27 Sep 2020 02:45:24 GMT | starting dump
Sun, 27 Sep 2020 02:45:24 GMT | got 1 objects from source elasticsearch (offset: 0)
Sun, 27 Sep 2020 02:45:25 GMT | sent 1 objects to destination elasticsearch, wrote 1
Sun, 27 Sep 2020 02:45:25 GMT | got 0 objects from source elasticsearch (offset: 1)
Sun, 27 Sep 2020 02:45:25 GMT | Total Writes: 1
Sun, 27 Sep 2020 02:45:25 GMT | dump complete
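An optional sanity check that the mapping arrived intact (responses elided; apart from the index name they should be identical):

curl 'http://192.168.88.87:9200/geo_information/_mapping?pretty'
curl 'http://192.168.88.87:9200/geo_information_new/_mapping?pretty'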

Migrate the data:
The --limit flag below raises the batch size to 10000 documents per request (elasticdump's default is 100); the roughly 20 GB of data took about an hour.

[root@VM-88-87-centos bin]# ./elasticdump --input=http://192.168.88.87:9200/geo_information --output=http://192.168.88.87:9200/geo_information_new --limit 10000 --type=data
Sun, 27 Sep 2020 06:19:22 GMT | starting dump
Sun, 27 Sep 2020 06:19:22 GMT | got 10000 objects from source elasticsearch (offset: 0)
Sun, 27 Sep 2020 06:19:24 GMT | sent 10000 objects to destination elasticsearch, wrote 10000
Sun, 27 Sep 2020 06:19:25 GMT | got 10000 objects from source elasticsearch (offset: 10000)
Sun, 27 Sep 2020 06:19:26 GMT | sent 10000 objects to destination elasticsearch, wrote 10000
Sun, 27 Sep 2020 06:19:27 GMT | got 10000 objects from source elasticsearch (offset: 20000)
Sun, 27 Sep 2020 06:19:28 GMT | sent 10000 objects to destination elasticsearch, wrote 10000
Sun, 27 Sep 2020 06:19:28 GMT | got 10000 objects from source elasticsearch (offset: 30000)
Sun, 27 Sep 2020 06:19:30 GMT | sent 10000 objects to destination elasticsearch, wrote 10000
Sun, 27 Sep 2020 06:19:30 GMT | got 10000 objects from source elasticsearch (offset: 40000)
Sun, 27 Sep 2020 06:19:32 GMT | sent 10000 objects to destination elasticsearch, wrote 10000
Sun, 27 Sep 2020 06:19:32 GMT | got 10000 objects from source elasticsearch (offset: 50000)
Sun, 27 Sep 2020 06:19:33 GMT | sent 10000 objects to destination elasticsearch, wrote 10000
Sun, 27 Sep 2020 06:19:34 GMT | got 10000 objects from source elasticsearch (offset: 60000)
Sun, 27 Sep 2020 06:19:35 GMT | sent 10000 objects to destination elasticsearch, wrote 10000
Sun, 27 Sep 2020 06:19:36 GMT | got 10000 objects from source elasticsearch (offset: 70000)
Sun, 27 Sep 2020 06:19:37 GMT | sent 10000 objects to destination elasticsearch, wrote 10000
Sun, 27 Sep 2020 06:19:38 GMT | got 10000 objects from source elasticsearch (offset: 80000)
Sun, 27 Sep 2020 06:19:39 GMT | sent 10000 objects to destination elasticsearch, wrote 10000
.....
Sun, 27 Sep 2020 07:21:47 GMT | sent 10000 objects to destination elasticsearch, wrote 10000
Sun, 27 Sep 2020 07:21:48 GMT | got 10000 objects from source elasticsearch (offset: 19980000)
Sun, 27 Sep 2020 07:21:49 GMT | sent 10000 objects to destination elasticsearch, wrote 10000
Sun, 27 Sep 2020 07:21:50 GMT | got 6094 objects from source elasticsearch (offset: 19990000)
Sun, 27 Sep 2020 07:21:50 GMT | sent 6094 objects to destination elasticsearch, wrote 6094
Sun, 27 Sep 2020 07:21:50 GMT | got 0 objects from source elasticsearch (offset: 19996094)
Sun, 27 Sep 2020 07:21:50 GMT | Total Writes: 19996094
Sun, 27 Sep 2020 07:21:50 GMT | dump complete
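Once the dump completes, verify that both indices report the same document count (19996094 here) before switching traffic over. If applications reach the index through an alias (an assumption; otherwise clients have to be repointed manually), the old index can then be deleted and the alias moved onto the new one:

# both counts should match the Total Writes figure above (19996094)
curl 'http://192.168.88.87:9200/geo_information/_count?pretty'
curl 'http://192.168.88.87:9200/geo_information_new/_count?pretty'

# drop the old index and expose the new one under the old name
curl -XDELETE 'http://192.168.88.87:9200/geo_information'
curl -XPOST 'http://192.168.88.87:9200/_aliases' \
    -H 'Content-Type: application/json' \
    -d '{"actions": [{"add": {"index": "geo_information_new", "alias": "geo_information"}}]}'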

Afterword

Although ES supports scaling the cluster out and in horizontally and resizing nodes vertically, some things are best designed properly up front: mappings, index sharding, master node placement, hot/warm tiering, cluster size, and so on. Changing them after the fact is painful.

Original article: https://blog.51cto.com/jerrymin/2538618