
Tutorial: Deploying a Multi-Host Fabric Cluster with Raft Ordering

Date: 2019-12-23

Raft is the ordering service implementation introduced in Hyperledger Fabric 1.4.1. This tutorial shows how to deploy a multi-host Fabric network that uses the Raft ordering service.


1. Bringing up the BYFN sample with the Raft ordering service

BYFN (Build Your First Network) is a good example for learning Hyperledger Fabric: it contains all the elements of a Fabric network, and the whole flow is captured in byfn.sh. When we run the script without any arguments, it brings up a Fabric network with 2 organizations, 4 peers and 1 orderer (using Solo ordering). The -o option of byfn.sh selects the type of ordering service:

./byfn.sh up -o <kafka | etcdraft>

The latest BYFN network design includes crypto material for 5 orderers. If the Solo or Kafka ordering service is used, only the first orderer is brought up (see the definition in docker-compose-cli.yaml).

One difference when we choose Raft as the ordering service is that, when generating the genesis block with configtxgen, we need to use the SampleMultiNodeEtcdRaft profile defined in configtx.yaml. Everything else stays the same.

Finally, we also need to bring up the other 4 orderers, which are defined in docker-compose-etcdraft2.yaml.
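For reference, the extra work byfn.sh does for the etcdraft case boils down to roughly the following two steps, run from the first-network directory. This is a sketch based on the 1.4.x byfn.sh; check the script itself for the exact flags and environment variables it uses:

# generate the orderer genesis block from the Raft profile (byfn.sh uses SYS_CHANNEL=byfn-sys-channel)
../bin/configtxgen -profile SampleMultiNodeEtcdRaft -channelID byfn-sys-channel -outputBlock ./channel-artifacts/genesis.block

# bring up the base network plus the 4 extra orderers
IMAGE_TAG=latest docker-compose -f docker-compose-cli.yaml -f docker-compose-etcdraft2.yaml up -d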

Below is the BYFN network after it is brought up with Raft ordering; you can see 5 orderers running:

[Figure: fabric-raft-multi-host/p1.png]

Below is a topology diagram of the BYFN network with the Raft ordering service:

[Figure: fabric-raft-multi-host/topology.png]

In the rest of this tutorial we will no longer use the BYFN script; instead we will build a multi-host, Raft-ordered Fabric network from scratch.

2. Options for multi-host Hyperledger Fabric deployment

Hyperledger Fabric components are deployed as containers, and everything works smoothly when all the containers run on a single host. When the containers have to run on different hosts, however, we need a way for them to reach each other.

Although Hyperledger Fabric does not officially recommend one approach, three approaches are commonly used:

Static IPs: by specifying the IP of the host each container runs on, the containers can talk to one another. The host IPs can be listed with extra_hosts in the docker-compose files; once a container is running, these entries appear in its /etc/hosts file. This approach is easy to understand and has no external dependencies, but everything is statically configured, which does not suit scenarios where the configuration has to change dynamically. (A minimal sketch of this option appears right after this list.)

Docker Swarm: Docker Swarm is the native container orchestration service of the Docker environment. In short, it provides an overlay network across hosts, so containers can communicate as if they were on a single host. The advantage is that the original configuration needs only minor changes and no static information such as IP addresses has to be hard-coded; the drawback is the dependency on an external component, Docker Swarm. This is the approach used in this tutorial.

Kubernetes: k8s is currently the most popular container orchestration tool, and its mechanism is similar to Docker Swarm's. Several tutorials attempt multi-host Fabric deployment with k8s, but it is noticeably harder to use than the two approaches above.
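For illustration only (we do not use the static-IP option below), an extra_hosts setup typically looks like the fragment printed by this snippet; the service name and IP addresses are placeholders, not values from this tutorial:

# Static-IP sketch: each service pins the other hosts' IPs via extra_hosts in its compose file.
cat <<'EOF'
  peer0.org1.example.com:
    extra_hosts:
      - "orderer.example.com:10.0.0.11"
      - "peer0.org2.example.com:10.0.0.13"
      - "peer1.org2.example.com:10.0.0.14"
EOF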

3. Setting up the environment for the multi-host Fabric Raft demo

We need to spread the containers across 4 hosts. This tutorial uses 4 EC2 instances on AWS, but no AWS-specific features are used: the instances simply run Ubuntu plus the required software and communicate over their public IPs. You can of course choose another cloud provider.

[Figure: fabric-raft-multi-host/raft-4-hosts.png]

Once the Fabric network is up and running, we will test it with the Fabcar chaincode. The overall flow is:

  • Launch the AWS EC2 instances and install the required images and tools.
  • Build an overlay network and join all 4 hosts to it.
  • Prepare all the material on host 1 (crypto material, channel configuration transactions,
    the docker-compose files for each node, etc.), then copy it to the other hosts.
  • Bring up all components with docker-compose.
  • Create the channel mychannel and join all peers to it.
  • Install and instantiate the Fabcar chaincode.
  • Invoke and query the chaincode.

4. The multi-host Fabric Raft demo

4.1 Launching the hosts

This tutorial uses AWS EC2 t2.small instances and Fabric 1.4.4. Note that for simplicity the demo uses a security group that is open to all traffic; in production you should decide which ports to open according to your own requirements:

[Figure: fabric-raft-multi-host/aws-hosts.png]

4.2 Building the overlay network with Docker Swarm

Now open 4 terminals, one for each host:

ssh -i <key> ubuntu@<public IP>

On host 1, run:

docker swarm init --advertise-addr <host-1 ip address>
docker swarm join-token manager

The result looks like this:

[Figure: fabric-raft-multi-host/host1-swarm.png]

Use the output of the last command to join the other nodes to the swarm as managers.

On hosts 2, 3 and 4, run:

<output from join-token manager> --advertise-addr <host n ip>

The result looks like this:

[Figure: fabric-raft-multi-host/host234-swarm.png]

Finally, we create an overlay network that will be used in the rest of the demo. This step only needs to be performed on one node; if Docker Swarm is working correctly, all nodes will see this overlay network.

On host 1, create the overlay network first-network:

docker network create --attachable --driver overlay first-network
docker network ls

The result looks like this:

[Figure: fabric-raft-multi-host/host1-overlay.png]

On the other hosts we can see this network:

[Figure: fabric-raft-multi-host/host234-overlay.png]

The overlay network is now in place; we will refer to it later in the docker-compose files.
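If you want to double-check the swarm and the network before moving on, these standard Docker commands (run on any manager node) make a quick sanity check:

docker node ls                          # all 4 hosts should be listed, Ready, as managers
docker network inspect first-network    # shows the overlay's scope and subnet (and, later, attached containers)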

4.3 Preparing the material on host 1

A key point is to make sure that all Fabric members use the same crypto material. We will create the crypto material on host 1 and then copy it to the other hosts.

In theory, we only need to make sure that each identity (certificate and signing key) follows the required conventions: an organization's certificates should be issued by the same CA, for example org1's certificates should be issued by ca.org1. For simplicity, in this tutorial we create all the crypto material on host 1 and copy the whole directory to the other hosts.

First go to the fabric-samples directory and create a raft-4node-swarm directory.

On host 1, run:

cd fabric-samples
mkdir raft-4node-swarm
cd raft-4node-swarm

Copy crypto-config.yaml and configtx.yaml directly from first-network:

cp ../first-network/crypto-config.yaml .
cp ../first-network/configtx.yaml .

Then generate the required crypto material and channel artifacts:

../bin/cryptogen generate --config=./crypto-config.yaml
export FABRIC_CFG_PATH=$PWD
mkdir channel-artifacts
../bin/configtxgen -profile SampleMultiNodeEtcdRaft -outputBlock ./channel-artifacts/genesis.block
../bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP

Now we prepare the docker-compose files for all hosts. They are largely based on the BYFN files; we need to create six docker-compose files plus one env file:

  • base/peer-base.yaml
  • base/docker-compose-base.yaml
  • host1.yaml
  • host2.yaml
  • host3.yaml
  • host4.yaml
  • .env

4.3.1 base/peer-base.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:
  peer-base:
    image: hyperledger/fabric-peer:$IMAGE_TAG
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=first-network
      - FABRIC_LOGGING_SPEC=INFO
      #- FABRIC_LOGGING_SPEC=DEBUG
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start

  orderer-base:
    image: hyperledger/fabric-orderer:$IMAGE_TAG
    environment:
      - FABRIC_LOGGING_SPEC=INFO
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_KAFKA_TOPIC_REPLICATIONFACTOR=1
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer

4.3.2 base/docker-compose-base.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:
  orderer.example.com:
    container_name: orderer.example.com
    extends:
      file: peer-base.yaml
      service: orderer-base
    volumes:
      - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
      - orderer.example.com:/var/hyperledger/production/orderer
    ports:
      - 7050:7050

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LISTENADDRESS=0.0.0.0:7051
      - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.example.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
      - peer0.org1.example.com:/var/hyperledger/production
    ports:
      - 7051:7051

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer1.org1.example.com
      - CORE_PEER_ADDRESS=peer1.org1.example.com:7051
      - CORE_PEER_LISTENADDRESS=0.0.0.0:7051
      - CORE_PEER_CHAINCODEADDRESS=peer1.org1.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls:/etc/hyperledger/fabric/tls
      - peer1.org1.example.com:/var/hyperledger/production
    ports:
      - 7051:7051

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org2.example.com
      - CORE_PEER_ADDRESS=peer0.org2.example.com:7051
      - CORE_PEER_LISTENADDRESS=0.0.0.0:7051
      - CORE_PEER_CHAINCODEADDRESS=peer0.org2.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org2.example.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org2MSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls:/etc/hyperledger/fabric/tls
      - peer0.org2.example.com:/var/hyperledger/production
    ports:
      - 7051:7051

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer1.org2.example.com
      - CORE_PEER_ADDRESS=peer1.org2.example.com:7051
      - CORE_PEER_LISTENADDRESS=0.0.0.0:7051
      - CORE_PEER_CHAINCODEADDRESS=peer1.org2.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org2.example.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org2MSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls:/etc/hyperledger/fabric/tls
      - peer1.org2.example.com:/var/hyperledger/production
    ports:
      - 7051:7051

4.3.3 host1.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

volumes:
  orderer.example.com:
  orderer5.example.com:
  peer0.org1.example.com:

networks:
  byfn:
    external:
      name: first-network

services:
  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    networks:
      - byfn

  orderer5.example.com:
    extends:
      file: base/peer-base.yaml
      service: orderer-base
    container_name: orderer5.example.com
    networks:
      - byfn
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer5.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer5.example.com/tls/:/var/hyperledger/orderer/tls
      - orderer5.example.com:/var/hyperledger/production/orderer
    ports:
      - 8050:7050

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org1.example.com
    networks:
      - byfn

  cli:
    container_name: cli
    image: hyperledger/fabric-tools:$IMAGE_TAG
    tty: true
    stdin_open: true
    environment:
      - SYS_CHANNEL=$SYS_CHANNEL
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      #- FABRIC_LOGGING_SPEC=DEBUG
      - FABRIC_LOGGING_SPEC=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
    networks:
      - byfn

4.3.4 host2.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

volumes:
  orderer2.example.com:
  peer1.org1.example.com:

networks:
  byfn:
    external:
      name: first-network

services:
  orderer2.example.com:
    extends:
      file: base/peer-base.yaml
      service: orderer-base
    container_name: orderer2.example.com
    networks:
      - byfn
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/:/var/hyperledger/orderer/tls
      - orderer2.example.com:/var/hyperledger/production/orderer
    ports:
      - 7050:7050

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org1.example.com
    networks:
      - byfn

4.3.5 host3.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

volumes:
  orderer3.example.com:
  peer0.org2.example.com:

networks:
  byfn:
    external:
      name: first-network

services:
  orderer3.example.com:
    extends:
      file: base/peer-base.yaml
      service: orderer-base
    container_name: orderer3.example.com
    networks:
      - byfn
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/:/var/hyperledger/orderer/tls
      - orderer3.example.com:/var/hyperledger/production/orderer
    ports:
      - 7050:7050

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org2.example.com
    networks:
      - byfn

4.3.6 host4.yaml

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

volumes:
  orderer4.example.com:
  peer1.org2.example.com:

networks:
  byfn:
    external:
      name: first-network

services:
  orderer4.example.com:
    extends:
      file: base/peer-base.yaml
      service: orderer-base
    container_name: orderer4.example.com
    networks:
      - byfn
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer4.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer4.example.com/tls/:/var/hyperledger/orderer/tls
      - orderer4.example.com:/var/hyperledger/production/orderer
    ports:
      - 7050:7050

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org2.example.com
    networks:
      - byfn

4.3.7 .env

COMPOSE_PROJECT_NAME=net
IMAGE_TAG=latest
SYS_CHANNEL=byfn-sys-channel
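One detail worth noting: IMAGE_TAG=latest pulls whatever images are currently tagged latest on each host, which can drift between hosts. Since this walkthrough targets Fabric 1.4.4, you may prefer to pin the tag so every host runs the same release; this is a suggested variant, not part of the original files:

# optional: pin the image tag on each host so all hosts pull the same Fabric release
sed -i 's/^IMAGE_TAG=.*/IMAGE_TAG=1.4.4/' .env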

Here is the resulting directory layout:

[Figure: fabric-raft-multi-host/materials.png]

These are the changes made relative to the BYFN files:

  • In base/peer-base.yaml, CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE is changed to
    first-network, the overlay network we created earlier.
  • In base/docker-compose-base.yaml, since the peers now run on different hosts, the
    port mappings are changed back to 7051:7051, with matching changes in each peer's
    environment variables.
  • In every hostn.yaml file, the overlay network first-network is added (a quick
    sanity check of these edits is sketched right after this list).
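Before copying the files around, a couple of greps can confirm the edits landed where expected; these are convenience checks, not required steps:

# every compose file should reference the overlay network by name
grep -n "first-network" base/peer-base.yaml host*.yaml

# every peer should listen on 7051 and map 7051:7051
grep -n "7051" base/docker-compose-base.yaml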

All the material is now ready on host 1, so we copy the directory to the other hosts. Since the EC2 instances cannot copy files to each other directly, we relay through the local machine:

# on Host 1
cd ..
tar cf raft-4node-swarm.tar raft-4node-swarm/

# on my localhost
scp -i <key> ubuntu@<Host 1 IP>:/home/ubuntu/fabric-samples/raft-4node-swarm.tar .
scp -i <key> raft-4node-swarm.tar ubuntu@<Host 2, 3 and 4 IP>:/home/ubuntu/fabric-samples/

# on Host 2, 3 and 4
cd fabric-samples
tar xf raft-4node-swarm.tar
cd raft-4node-swarm
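If you want to be certain every host ended up with identical crypto material, a checksum comparison is a simple optional verification:

# run on each host inside fabric-samples/raft-4node-swarm; the final digest should be identical across hosts
find crypto-config channel-artifacts -type f | sort | xargs sha256sum | sha256sum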

All hosts now have the same crypto material and docker-compose files, and we can bring up the containers.

4.4 Bringing up the containers on each host

We use docker-compose to bring up the components on every host:

# on Host 1, 2, 3 and 4, bring up the corresponding yaml file
docker-compose -f hostn.yaml up -d

The result looks like this:

[Figure: fabric-raft-multi-host/up1.png]
[Figure: fabric-raft-multi-host/up2.png]
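At this point it is worth confirming that the containers are running on each host and that the five orderers have elected a Raft leader. The exact log wording varies between Fabric releases, so treat the grep pattern below as a rough filter rather than an exact match:

docker ps --format "table {{.Names}}\t{{.Status}}"       # run on each host
docker logs orderer.example.com 2>&1 | grep -i "raft"    # on host 1: look for leader-election messages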

4.5 Creating the channel and joining the peers

Since only host 1 has a CLI container, all of the following commands are executed from the host 1 terminal.

Create the channel mychannel; this writes mychannel.block into the CLI working directory:

docker exec cli peer channel create -o orderer.example.com:7050 -c mychannel \
    -f ./channel-artifacts/channel.tx --tls \
    --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

Join peer0.org1 to mychannel:

docker exec cli peer channel join -b mychannel.block

Join peer1.org1 to mychannel:

docker exec -e CORE_PEER_ADDRESS=peer1.org1.example.com:7051 \
    -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt \
    cli peer channel join -b mychannel.block

Join peer0.org2 to mychannel:

docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp \
    -e CORE_PEER_ADDRESS=peer0.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" \
    -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt \
    cli peer channel join -b mychannel.block

Join peer1.org2 to mychannel:

docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp \
    -e CORE_PEER_ADDRESS=peer1.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" \
    -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt \
    cli peer channel join -b mychannel.block
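The only thing that changes between these four commands is the set of environment variables that points the CLI at a given peer (address, TLS root cert and, for org2, the admin MSP path and MSP ID). If the repetition feels error-prone, a small wrapper like the hypothetical as_peer function below captures the pattern; it is a convenience sketch, not part of the original walkthrough:

# usage: as_peer <org> <peer> <peer command...>
# e.g.   as_peer org2 peer1 peer channel join -b mychannel.block
as_peer() {
  local org=$1 peer=$2; shift 2
  local crypto=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto
  docker exec \
    -e CORE_PEER_ADDRESS=${peer}.${org}.example.com:7051 \
    -e CORE_PEER_LOCALMSPID=Org${org#org}MSP \
    -e CORE_PEER_MSPCONFIGPATH=${crypto}/peerOrganizations/${org}.example.com/users/Admin@${org}.example.com/msp \
    -e CORE_PEER_TLS_ROOTCERT_FILE=${crypto}/peerOrganizations/${org}.example.com/peers/${peer}.${org}.example.com/tls/ca.crt \
    cli "$@"
}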

4.6 Installing and instantiating the Fabcar chaincode

From the host 1 terminal, install the Fabcar chaincode on all peers:

# to peer0.org1
docker exec cli peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/fabcar/go/

# to peer1.org1
docker exec -e CORE_PEER_ADDRESS=peer1.org1.example.com:7051 \
    -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt \
    cli peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/fabcar/go/

# to peer0.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp \
    -e CORE_PEER_ADDRESS=peer0.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" \
    -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt \
    cli peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/fabcar/go/

# to peer1.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp \
    -e CORE_PEER_ADDRESS=peer1.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" \
    -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt \
    cli peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/fabcar/go/

Instantiate the Fabcar chaincode on the channel mychannel:

docker exec cli peer chaincode instantiate -o orderer.example.com:7050 --tls \
    --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem \
    -C mychannel -n mycc -v 1.0 -c '{"Args":[]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"
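Instantiation builds and starts a chaincode container on the endorsing peer, so it can take a little while. Before moving on, you can confirm the chaincode is visible on the channel:

docker exec cli peer chaincode list --instantiated -C mychannel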

4.7 Invoking and querying the chaincode

First invoke the initLedger function to load 10 car records into the ledger.

Run this from the CLI; it endorses on peer0.org1 and peer0.org2 to satisfy the endorsement policy:

docker exec cli peer chaincode invoke -o orderer.example.com:7050 --tls true \
    --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem \
    -C mychannel -n mycc --peerAddresses peer0.org1.example.com:7051 \
    --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt \
    --peerAddresses peer0.org2.example.com:7051 \
    --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt \
    -c '{"Args":["initLedger"]}'

Now we can query the car records from each of the 4 peers:

# from peer0.org1
docker exec cli peer chaincode query -n mycc -C mychannel -c '{"Args":["queryCar","CAR0"]}'

# from peer1.org1
docker exec -e CORE_PEER_ADDRESS=peer1.org1.example.com:7051 \
    -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt \
    cli peer chaincode query -n mycc -C mychannel -c '{"Args":["queryCar","CAR0"]}'

# from peer0.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp \
    -e CORE_PEER_ADDRESS=peer0.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" \
    -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt \
    cli peer chaincode query -n mycc -C mychannel -c '{"Args":["queryCar","CAR0"]}'

# from peer1.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp \
    -e CORE_PEER_ADDRESS=peer1.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" \
    -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt \
    cli peer chaincode query -n mycc -C mychannel -c '{"Args":["queryCar","CAR0"]}'

The result looks like this:

[Figure: fabric-raft-multi-host/cc-query.png]

Now we invoke changeCarOwner through orderer3.example.com and query again afterwards:

docker exec cli peer chaincode invoke -o orderer3.example.com:7050 --tls true \
    --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem \
    -C mychannel -n mycc --peerAddresses peer0.org1.example.com:7051 \
    --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt \
    --peerAddresses peer0.org2.example.com:7051 \
    --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt \
    -c '{"Args":["changeCarOwner","CAR0","KC"]}'

# from peer0.org1
docker exec cli peer chaincode query -n mycc -C mychannel -c '{"Args":["queryCar","CAR0"]}'

# from peer1.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp \
    -e CORE_PEER_ADDRESS=peer1.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" \
    -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt \
    cli peer chaincode query -n mycc -C mychannel -c '{"Args":["queryCar","CAR0"]}'

The result looks like this:

[Figure: fabric-raft-multi-host/cc-query2.png]

You can route chaincode invocations through any of the other orderers. Since the ordering nodes form a single Raft cluster, you should get the same result, which shows that the orderer cluster is working correctly.
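As an optional further check, you can stop one or two of the five orderers (a five-node Raft cluster tolerates two failures) and repeat an invoke through one of the surviving orderers. The commands below sketch that experiment and are not a step from the original walkthrough; the changeCarOwner arguments are arbitrary example values:

# on host 3: take orderer3 down
docker stop orderer3.example.com

# on host 1: the invoke should still succeed through a surviving orderer
docker exec cli peer chaincode invoke -o orderer.example.com:7050 --tls true \
    --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem \
    -C mychannel -n mycc --peerAddresses peer0.org1.example.com:7051 \
    --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt \
    --peerAddresses peer0.org2.example.com:7051 \
    --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt \
    -c '{"Args":["changeCarOwner","CAR1","Alice"]}'

# on host 3: bring orderer3 back
docker start orderer3.example.com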

4.8 Cleaning up

To clean up, stop and remove the containers with docker-compose:

# on each host, using that host's yaml file
docker-compose -f hostn.yaml down -v
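If you also want to tear down the overlay network and the swarm created in section 4.2, run the following after the containers are gone; skipping it is harmless if you plan to reuse the swarm:

docker network rm first-network    # on one manager node, once no containers are attached
docker swarm leave --force         # on each host, if the swarm is no longer needed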

5. Summary

In this tutorial we modified the BYFN sample to build a Hyperledger Fabric network backed by a Raft ordering cluster, using Docker Swarm to let containers on multiple hosts communicate with each other.


Original article: Hyperledger Fabric Raft排序多机部署 - 汇智网

Source: https://yq.aliyun.com/articles/740509