
A Hands-On Guide to Building a Kubernetes Cluster on CentOS

Author: ChamPly

Prepare CentOS

1. Install net-tools
[root@localhost ~]# yum install -y net-tools

2. Stop and disable firewalld, and disable SELinux
[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Install Docker

Docker now ships in two editions: Docker-CE and Docker-EE. CE is the free community edition, EE the commercial enterprise edition. We use CE here.

1. Install the yum repository tools
[root@localhost ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

2. Add the official docker-ce yum repository
[root@localhost ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

3. Disable the docker-ce-edge repository. The edge channel is a development build and is not stable; we install from the stable channel instead.
yum-config-manager --disable docker-ce-edge

4. Refresh the local yum cache
yum makecache fast

5. Install docker-ce
yum -y install docker-ce

6. Run hello-world
[root@localhost ~]# systemctl start docker
[root@localhost ~]# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9a0669468bf7: Pull complete
Digest: sha256:0e06ef5e1945a718b02a8c319e15bae44f47039005530bc617a5d071190ed3fc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/

Install the kubelet and kubeadm packages

Before running kubeadm init, the required Docker images must be present on every host. Initialization pulls the images kubeadm depends on and sets up etcd, kube-dns, and kube-proxy; because the GFW blocks direct access to gcr.io, we first download the images in the list below by other means, import them onto each host, and only then run kubeadm init.

1. Use the DaoCloud accelerator (this step can be skipped)
[root@localhost ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://0d236e3f.m.daocloud.io
docker version >= 1.12
{"registry-mirrors": ["http://0d236e3f.m.daocloud.io"]}
Success.
You need to restart docker to take effect: sudo systemctl restart docker
[root@localhost ~]# systemctl restart docker

2. Download the images. You can build them yourself with a Dockerfile and push them to Docker Hub, or pull my copies:
images=(kube-controller-manager-amd64 etcd-amd64 k8s-dns-sidecar-amd64 kube-proxy-amd64 kube-apiserver-amd64 kube-scheduler-amd64 pause-amd64 k8s-dns-dnsmasq-nanny-amd64 k8s-dns-kube-dns-amd64)
for imageName in ${images[@]} ; do
  docker pull champly/$imageName
  docker tag champly/$imageName gcr.io/google_containers/$imageName
  docker rmi champly/$imageName
done

3. Retag the images with the expected versions
docker tag gcr.io/google_containers/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17 && \
docker rmi gcr.io/google_containers/etcd-amd64 && \
docker tag gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 && \
docker rmi gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 && \
docker tag gcr.io/google_containers/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5 && \
docker rmi gcr.io/google_containers/k8s-dns-kube-dns-amd64 && \
docker tag gcr.io/google_containers/k8s-dns-sidecar-amd64
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2 && \docker rmi gcr.io/google_containers/k8s-dns-sidecar-amd64 && \docker tag gcr.io/google_containers/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.7.5 && \docker rmi gcr.io/google_containers/kube-apiserver-amd64 && \docker tag gcr.io/google_containers/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.5 && \docker rmi gcr.io/google_containers/kube-controller-manager-amd64 && \docker tag gcr.io/google_containers/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.6.0 && \docker rmi gcr.io/google_containers/kube-proxy-amd64 && \docker tag gcr.io/google_containers/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.7.5 && \docker rmi gcr.io/google_containers/kube-scheduler-amd64 && \docker tag gcr.io/google_containers/pause-amd64 gcr.io/google_containers/pause-amd64:3.0 && \docker rmi gcr.io/google_containers/pause-amd644.添加阿里源[root@localhost ~]# cat >> /etc/yum.repos.d/kubernetes.repo << EOF[kubernetes]name=Kubernetesbaseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/enabled=1gpgcheck=0EOF5.查看kubectl kubelet kubeadm kubernetes-cni列表 [root@localhost ~]# yum list kubectl kubelet kubeadm kubernetes-cni已加载插件:fastestmirrorLoading mirror speeds from cached hostfile base: mirrors.tuna.tsinghua.edu.cn extras: mirrors.sohu.com updates: mirrors.sohu.com可安装的软件包 kubeadm.x86_64 1.7.5-0 kuberneteskubectl.x86_64 1.7.5-0 kuberneteskubelet.x86_64 1.7.5-0 kuberneteskubernetes-cni.x86_64 0.5.1-0 kubernetes[root@localhost ~]# 6.安装kubectl kubelet kubeadm kubernetes-cni[root@localhost ~]# yum install -y kubectl kubelet kubeadm kubernetes-cni修改cgroups vi /etc/systemd/system/kubelet.service.d/10-kubeadm.confupdate KUBELET_CGROUP_ARGS=--cgroup-driver=systemd to KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs 修改kubelet中的cAdvisor监控的端口,默认为0改为4194,这样就可以通过浏器查看kubelet的监控cAdvisor的web页[root@kub-master ~]# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.confEnvironment="KUBELET_CADVISOR_ARGS=--cadvisor-port=4194" 启动所有主机上的kubelet服务[root@master ~]# systemctl enable kubelet && systemctl start kubelet初始化mastermaster节点上操作[root@master ~]# kubeadm reset && kubeadm init --apiserver-advertise-address=192.168.0.100 --kubernetes-version=v1.7.5 --pod-network-cidr=10.200.0.0/16[preflight] Running pre-flight checks[reset] Stopping the kubelet service[reset] Unmounting mounted directories in "/var/lib/kubelet"[reset] Removing kubernetes-managed containers[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/lib/etcd][reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki][reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf][kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.[init] Using Kubernetes version: v1.7.5[init] Using Authorization modes: [Node RBAC][preflight] Running pre-flight checks[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. 
Max validated version: 1.12[preflight] Starting the kubelet service[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)[certificates] Generated CA certificate and key.[certificates] Generated API server certificate and key.[certificates] API Server serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.100][certificates] Generated API server kubelet client certificate and key.[certificates] Generated service account token signing key and public key.[certificates] Generated front-proxy CA certificate and key.[certificates] Generated front-proxy client certificate and key.[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"[apiclient] Created API client, waiting for the control plane to become ready[apiclient] All control plane components are healthy after 34.002949 seconds[token] Using token: 0696ed.7cd261f787453bd9[apiconfig] Created RBAC rules[addons] Applied essential addon: kube-proxy[addons] Applied essential addon: kube-dns Your Kubernetes master has initialized successfully! To start using your cluster, you need to run (as a regular user): mkdir -p $HOME/.kubesudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/configsudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster.Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:http://kubernetes.io/docs/admin/addons/ You can now join any number of machines by running the following on each nodeas root: kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443 [root@master ~]#kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443这个一定要记住,以后无法重现,添加节点需要 添加节点[root@node1 ~]# kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.[preflight] Running pre-flight checks[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 1.12[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'[preflight] Starting the kubelet service[discovery] Trying to connect to API Server "192.168.0.100:6443"[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.100:6443"[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.0.100:6443"[discovery] Successfully established connection with API Server "192.168.0.100:6443"[bootstrap] Detected server version: v1.7.10[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request[csr] Received signed certificate from the API server, generating KubeConfig...[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" Node join complete: Certificate signing request sent to master and responsereceived. Kubelet informed of new secure connection details. 
Run 'kubectl get nodes' on the master to see this machine join.在master配置kubectl的kubeconfig文件[root@master ~]# mkdir -p $HOME/.kube[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config在Master上安装flanneldocker pull quay.io/coreos/flannel:v0.8.0-amd64kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.ymlkubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml查看集群[root@master ~]# kubectl get csNAME STATUS MESSAGE ERRORscheduler Healthy okcontroller-manager Healthy oketcd-0 Healthy {"health": "true"}[root@master ~]# kubectl get nodesNAME STATUS AGE VERSIONmaster Ready 24m v1.7.5node1 NotReady 45s v1.7.5node2 NotReady 7s v1.7.5[root@master ~]# kubectl get pods --all-namespacesNAMESPACE NAME READY STATUS RESTARTS AGEkube-system etcd-master 1/1 Running 0 24mkube-system kube-apiserver-master 1/1 Running 0 24mkube-system kube-controller-manager-master 1/1 Running 0 24mkube-system kube-dns-2425271678-h48rw 0/3 ImagePullBackOff 0 25mkube-system kube-flannel-ds-28n3w 1/2 CrashLoopBackOff 13 24mkube-system kube-flannel-ds-ndspr 0/2 ContainerCreating 0 41skube-system kube-flannel-ds-zvx9j 0/2 ContainerCreating 0 1mkube-system kube-proxy-qxxzr 0/1 ImagePullBackOff 0 41skube-system kube-proxy-shkmx 0/1 ImagePullBackOff 0 25mkube-system kube-proxy-vtk52 0/1 ContainerCreating 0 1mkube-system kube-scheduler-master 1/1 Running 0 24m[root@master ~]#如果出现:The connection to the server localhost:8080 was refused - did you specify the right host or port? 解决办法:为了使用kubectl访问apiserver,在~/.bash_profile中追加下面的环境变量: export KUBECONFIG=/etc/kubernetes/admin.conf source ~/.bash_profile 重新初始化kubectl 作者:ChamPly来源:https://my.oschina.net/ChamPly/blog/1575888
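A note on the image preparation above (steps 2 and 3 of the kubelet/kubeadm section): the pull loop and the separate retagging commands can be collapsed into a single pass. A minimal sketch, assuming the champly/* mirror repositories used in the article are still available and reusing the same version tags the article applies:

#!/usr/bin/env bash
# Sketch: pull the mirrored kubeadm images and retag them in one pass with
# the versions used in this article. Assumes the champly/* mirrors exist.
set -euo pipefail

declare -A versions=(
  [kube-apiserver-amd64]=v1.7.5
  [kube-controller-manager-amd64]=v1.7.5
  [kube-scheduler-amd64]=v1.7.5
  [kube-proxy-amd64]=v1.6.0
  [etcd-amd64]=3.0.17
  [pause-amd64]=3.0
  [k8s-dns-kube-dns-amd64]=1.14.5
  [k8s-dns-dnsmasq-nanny-amd64]=1.14.5
  [k8s-dns-sidecar-amd64]=1.14.2
)

for name in "${!versions[@]}"; do
  docker pull "champly/${name}"
  docker tag  "champly/${name}" "gcr.io/google_containers/${name}:${versions[$name]}"
  docker rmi  "champly/${name}"
done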


K8S Hands-On Series - 1.1 - Cluster Setup

准备 作为学习与实战的记录,笔者计划编写一系列实战系列文章,主要根据实际使用场景选取Kubernetes最常用的功能进行实验,并使用当前最流行的kubeadm安装集群。本文用到的所有实验环境均基于笔者个人工作站虚拟化多个VM而来,如果读者有一台性能尚可的工作站台式机,推荐读者参考本文操作过程实战演练一遍,有助于对Kubernetes各项概念及功能的理解。 前期准备: 两台VM,笔者安装的OS为ubuntu 16.04 保证两台VM网络互通,为了使网络拓扑尽可能简单,我使用的虚拟化软件为VirtualBox,宿主机为ubuntu 19.04,网络模式为Bridge 先解决网络问题 Ubuntu APT https://opsx.alibaba.com/mirror 搜索 ubuntu Kubernetes APT Repo https://opsx.alibaba.com/mirror 搜索 Kubernetes Docker Image Repo # 此过程需要主机先安装好docker-daemon,参考集群安装部分有说明 1. 安装/升级Docker客户端 推荐安装1.10.0以上版本的Docker客户端,参考文档 docker-ce 2. 配置镜像加速器 针对Docker客户端版本大于 1.10.0 的用户 您可以通过修改daemon配置文件/etc/docker/daemon.json来使用加速器 sudo mkdir -p /etc/docker sudo tee /etc/docker/daemon.json <<-'EOF' { "registry-mirrors": ["https://ft3ykfyc.mirror.aliyuncs.com"] } EOF sudo systemctl daemon-reload sudo systemctl restart docker 集群安装 kubelet kubeadm kubectl apt-get update apt-get install -y kubelet kubeadm kubectl docker # 参考 https://kubernetes.io/docs/setup/cri/ # Install Docker CE ## Set up the repository: ### Install packages to allow apt to use a repository over HTTPS apt-get update && apt-get install apt-transport-https ca-certificates curl software-properties-common ### Add Docker’s official GPG key curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - ### Add Docker apt repository. add-apt-repository \ "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) \ stable" ## Install Docker CE. apt-get update && apt-get install docker-ce=18.06.2~ce~3-0~ubuntu # Setup daemon. cat > /etc/docker/daemon.json <<EOF { "exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file", "log-opts": { "max-size": "100m" }, "storage-driver": "overlay2" } EOF mkdir -p /etc/systemd/system/docker.service.d # Restart docker. systemctl daemon-reload systemctl restart docker 初始化集群 确保swap关闭 swapoff -a vim /etc/fstab ... # comment this #UUID=2746cf1b-d1ab-41e2-8a31-8c1ed2cca910 none swap sw 0 0 kubeadm init ~ kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=stable I0608 11:05:15.863459 9577 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) I0608 11:05:15.863537 9577 version.go:97] falling back to the local client version: v1.14.3 [init] Using Kubernetes version: v1.14.3 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". 
Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' 解决镜像拉取问题 上面的拉取特别慢,所以需要从镜像仓库手工拉取镜像,并打tag替代从官方库拉取 # 查看使用到的镜像 ~ kubeadm config images list k8s.gcr.io/kube-apiserver:v1.14.3 k8s.gcr.io/kube-controller-manager:v1.14.3 k8s.gcr.io/kube-scheduler:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/coredns:1.3.1 # 手工拉取镜像 docker pull docker.io/mirrorgooglecontainers/kube-apiserver:v1.14.3 docker pull docker.io/mirrorgooglecontainers/kube-controller-manager:v1.14.3 docker pull docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.3 docker pull docker.io/mirrorgooglecontainers/kube-proxy:v1.14.3 docker pull docker.io/mirrorgooglecontainers/pause:3.1 docker pull docker.io/mirrorgooglecontainers/etcd:3.3.10 docker pull docker.io/coredns/coredns:1.3.1 # 手工打tag docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.14.3 k8s.gcr.io/kube-apiserver:v1.14.3 docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.14.3 k8s.gcr.io/kube-controller-manager:v1.14.3 docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.3 k8s.gcr.io/kube-scheduler:v1.14.3 docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3 docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1 docker tag docker.io/mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10 docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1 再次执行,终于创建成功,输出如下: ~ kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=stable [init] Using Kubernetes version: v1.14.3 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". 
Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Activating the kubelet service [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [worker01 localhost] and IPs [192.168.101.113 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [worker01 localhost] and IPs [192.168.101.113 127.0.0.1 ::1] [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [worker01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.101.113] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [apiclient] All control plane components are healthy after 18.005322 seconds [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster [upload-certs] Skipping phase. 
Please see --experimental-upload-certs [mark-control-plane] Marking the node worker01 as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node worker01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] [bootstrap-token] Using token: ss6flg.csw4u0ok134n2fy1 [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ Then you can join any number of worker nodes by running the following on each as root: kubeadm join 192.168.101.113:6443 --token ss6flg.csw4u0ok134n2fy1 \ --discovery-token-ca-cert-hash sha256:bac9a150228342b7cdedf39124ef2108653db1f083e9f547d251e08f03c41945 安装网络插件 For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init. Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work, for more information please see here. Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network. see here . Note that flannel works on amd64, arm, arm64, ppc64le and s390x under Linux. Windows (amd64) is claimed as supported in v0.11.0 but the usage is undocumented. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml For more information about flannel, see the CoreOS flannel repository on GitHub . 
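One practical note on the sysctl requirement above: setting net.bridge.bridge-nf-call-iptables=1 on the command line does not survive a reboot. A small sketch of one way to make it persistent (the file name /etc/sysctl.d/k8s.conf is an arbitrary choice):

# Persist the bridge-netfilter settings required by flannel/CNI across reboots.
sudo modprobe br_netfilter
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system   # reload every sysctl configuration file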
安装完成后,查看所有组件已经成功运行 ~ kubectl get all --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system pod/coredns-fb8b8dccf-vmdsj 1/1 Running 0 24m kube-system pod/coredns-fb8b8dccf-xrhrs 1/1 Running 0 24m kube-system pod/etcd-worker01 1/1 Running 0 23m kube-system pod/kube-apiserver-worker01 1/1 Running 0 23m kube-system pod/kube-controller-manager-worker01 1/1 Running 0 23m kube-system pod/kube-flannel-ds-amd64-cgnnz 1/1 Running 0 4m18s kube-system pod/kube-proxy-vfvkp 1/1 Running 0 24m kube-system pod/kube-scheduler-worker01 1/1 Running 0 23m NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 24m kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 24m NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE kube-system daemonset.apps/kube-flannel-ds-amd64 1 1 1 1 1 beta.kubernetes.io/arch=amd64 4m18s kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 4m18s kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 4m18s kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 4m18s kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 4m18s kube-system daemonset.apps/kube-proxy 1 1 1 1 1 <none> 24m NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE kube-system deployment.apps/coredns 2/2 2 2 24m NAMESPACE NAME DESIRED CURRENT READY AGE kube-system replicaset.apps/coredns-fb8b8dccf 2 2 2 24m run a demo pod ~ kubectl create deployment nginx --image=nginx deployment.apps/nginx created ~ kubectl get deploy NAME READY UP-TO-DATE AVAILABLE AGE nginx 0/1 1 0 9s ~ kubectl get pod NAME READY STATUS RESTARTS AGE nginx-65f88748fd-95gkh 0/1 Pending 0 21s ~ kubectl describe pod/nginx-65f88748fd-95gkh Name: nginx-65f88748fd-95gkh Namespace: default Priority: 0 PriorityClassName: <none> Node: <none> Labels: app=nginx pod-template-hash=65f88748fd Annotations: <none> Status: Pending IP: Controlled By: ReplicaSet/nginx-65f88748fd Containers: nginx: Image: nginx Port: <none> Host Port: <none> Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kf45 (ro) Conditions: Type Status PodScheduled False Volumes: default-token-5kf45: Type: Secret (a volume populated by a Secret) SecretName: default-token-5kf45 Optional: false QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 30s default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. 
查看错误原因,写的很清楚,没有可用节点,是因为我们唯一的一个节点worker01是master节点,master节点默认含有taint(污点),默认不可以调度业务pod,我们来去除这个污点,让nginx可以调度上去 ~ kubectl describe node worker01 Name: worker01 Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=worker01 kubernetes.io/os=linux node-role.kubernetes.io/master= Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"86:f6:8f:29:d7:c7"} flannel.alpha.coreos.com/backend-type: vxlan flannel.alpha.coreos.com/kube-subnet-manager: true flannel.alpha.coreos.com/public-ip: 192.168.101.113 kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Sat, 08 Jun 2019 11:56:28 +0800 Taints: node-role.kubernetes.io/master:NoSchedule ... ~ kubectl taint nodes --all node-role.kubernetes.io/master- node/worker01 untainted ~ kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-65f88748fd-95gkh 1/1 Running 0 4m11s 10.244.0.4 worker01 <none> <none> 可以看到pod已经是running状态了,测试一下 ~ curl 10.244.0.4 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 成功!! 加入节点 在worker02上,执行: ~ kubeadm join 192.168.101.113:6443 --token ss6flg.csw4u0ok134n2fy1 \ --discovery-token-ca-cert-hash sha256:bac9a150228342b7cdedf39124ef2108653db1f083e9f547d251e08f03c41945 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Activating the kubelet service [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... This node has joined the cluster: * Certificate signing request was sent to apiserver and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the control-plane to see this node join the cluster. 
查看 ~ kubectl get nodes NAME STATUS ROLES AGE VERSION worker01 Ready master 52m v1.14.3 worker02 Ready <none> 7m12s v1.14.3 将demo的replica设置为2 ~ kubectl scale deployment.v1.apps/nginx --replicas=2 deployment.apps/nginx scaled ~ kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-7cffb9df96-8n884 1/1 Running 0 5m2s 10.244.0.6 worker01 <none> <none> nginx-7cffb9df96-rbvsr 1/1 Running 0 3s 10.244.1.10 worker02 <none> <none> ~ http 10.244.1.10 HTTP/1.1 200 OK Accept-Ranges: bytes Connection: keep-alive Content-Length: 612 Content-Type: text/html Date: Sat, 08 Jun 2019 05:03:57 GMT ETag: "5ce409fd-264" Last-Modified: Tue, 21 May 2019 14:23:57 GMT Server: nginx/1.17.0 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 成功!至此我们安装好了两个节点的集群,并基于Flannel网络插件的方式,网络模式为VXLAN
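A practical note on the join step used above: the bootstrap token printed by kubeadm init expires after 24 hours by default, so the recorded join command eventually stops working. If more workers need to join later, a fresh command can be generated on the control-plane node, for example:

# Run on the control-plane node (worker01 in this article); prints a complete
# 'kubeadm join ...' command with a new token and the current CA cert hash.
kubeadm token create --print-join-command

# List existing bootstrap tokens and their remaining TTL.
kubeadm token list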


2018-04-21 Setting Up a Translation Environment for the Official Python Documentation

Following PEP 545 -- Python Documentation Translations, I forked the build scripts (nobodxbodon/docsbuild-scripts) and added a zh language tag, along with a forked PO-file repository (nobodxbodon/python-docs-ko), for demonstration only (the changed string "测试python入门教程": Update index.po · nobodxbodon/python-docs-ko@00b8073).

As far as my limited understanding goes, the process is roughly this: the PO files contain every translated paragraph; they are applied to the original English documents like a patch to produce reST files, which are then built into HTML (HTML output repository: nobodxbodon/py36zh; demo: "4. 기타 제어 흐름 도구 - Python 3.6.5 文档").

The next problem to solve is how to localize the code samples themselves (strings, identifiers, and so on); none of the existing translations in other languages localize the code. Today this can be done by editing the rst files directly, but the workflow defined in PEP 545 is to edit the PO files. After asking authors from other translation teams, it seems the Sphinx configuration would have to be modified to support this (adding the code portions to the PO files and merging them into the rst files at build time).

Also: translation progress for the official Python documentation (the tutorial is only a small part of it) stands at 86+% for Japanese, 30% for French, and 1.5% for Chinese: The Python 3.6 translation project on Transifex. I later learned of an earlier Chinese translation project for the old 3.2.2 release (https://docspy3zh.readthedocs.io/en/latest/); I do not know whether it is the origin of that 1.5%.
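For readers unfamiliar with how the PO files end up in the HTML output described above, the general gettext + Sphinx mechanism looks roughly like the sketch below. This is illustrative only: the cpython/Doc paths and the locales/zh layout are assumptions, not the actual docsbuild-scripts configuration.

# Illustrative sketch of a gettext + Sphinx translation build; paths are assumed.
cd cpython/Doc

# Compile every translated .po catalog into the .mo form Sphinx loads at build time.
find locales/zh/LC_MESSAGES -name '*.po' \
  -exec sh -c 'msgfmt "$1" -o "${1%.po}.mo"' _ {} \;

# Build HTML with the zh catalogs applied on top of the English sources.
sphinx-build -b html -D language=zh -D locale_dirs=locales . build/html/zh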


Kubernetes: Building Dynamic Storage Volumes with NFS (PV/StorageClass/PVC)

When learning and testing Kubernetes day to day, you often need a PersistentVolume to keep data (databases, logs, and so on) from being lost when a container is deleted. For the relationship between PV, PVC, and StorageClass, see PV/PVC/StorageClass. Storage volumes can be implemented in many ways; here I choose NFS because it is the easiest to set up. I consulted a lot of material online, but none of it was complete and following it verbatim always ran into problems, so after working through many references I decided to write the process down for others to use.

Environment:
centos-7 (3.10.0-957.5.1.el7.x86_64) (node1, node2)
kubernetes v1.13.0
helm v2.12.0

Set up the environment

Install Kubernetes and Helm
Following the reference material, install Kubernetes and Helm on node1 and node2.

Install NFS

Install the NFS server on node1:
$ sudo yum -y install nfs-utils rpcbind

Configure the shared directory on the server (node1):
$ sudo mkdir /var/nfs
$ sudo su    # switch to root
$ echo "/var/nfs 192.168.0.0/24(rw,async,no_root_squash,no_all_squash,sync)" >> /etc/exports    # configure the export
$ exit    # return to the original user
$ exportfs -r    # apply the configuration above

Start the NFS services on the server:
# rpcbind must be started before nfs-server so that the NFS service can register with rpcbind
$ sudo systemctl start rpcbind
$ sudo systemctl start nfs-server

Check that the services started successfully:
$ showmount -e localhost
Export list for localhost:
/var/nfs 192.168.0.0/24

Enable them at boot:
$ sudo systemctl enable rpcbind
$ sudo systemctl enable nfs-server

Install nfs-utils on the client (node2).
Note: every Kubernetes machine needs nfs-utils installed. I had not installed it on the client, which is exactly why none of my PV/StorageClass/PVC tests worked at first.
$ sudo yum install nfs-utils

You can test the NFS share from the client by following the reference material.

NFS as a dynamic storage volume

Following the reference material, install the NFS-Client Provisioner on node1 with Helm:
$ helm install stable/nfs-client-provisioner --set nfs.server=x.x.x.x --set nfs.path=/exported/path --name nfs-client-provisioner

It installs a StorageClass:
$ kubectl get sc
NAME         PROVISIONER                            AGE
nfs-client   cluster.local/nfs-client-provisioner   32h

Set the default StorageClass

When using a PVC you normally have to create and reference a PV; if no PV has been created, the default StorageClass is used to provision one, otherwise the PVC stays Pending forever. Make the StorageClass created above the default:
$ kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ kubectl get sc
NAME                   PROVISIONER                            AGE
nfs-client (default)   cluster.local/nfs-client-provisioner   32h

Now you can use PVCs freely without manually creating PVs and StorageClasses.

References (thanks to all the authors):
https://blog.csdn.net/jettery/article/details/72722324
https://www.kubernetes.org.cn/4956.html
https://www.kubernetes.org.cn/4022.html
https://blog.51cto.com/passed/2160149
https://blog.51cto.com/fengwan/2176889
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
https://k8smeetup.github.io/docs/tasks/administer-cluster/change-default-storage-class/
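As a quick smoke test of the setup above: create a PVC without naming a StorageClass, then check that the default nfs-client class binds it and that a matching directory appears under the exported path (/var/nfs on node1). A minimal sketch:

# The PVC names no storageClassName, so the default StorageClass should provision it.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF

kubectl get pvc nfs-test-claim      # should go from Pending to Bound within a few seconds
ls /var/nfs                         # run on node1: the provisioner creates a subdirectory per claim
kubectl delete pvc nfs-test-claim   # clean up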


Machine Learning Practice with Spark (3) - Hands-On Environment Setup

0 Related source code

1 Installing Spark
◆ Spark is written in Scala, exposes APIs in several languages, and requires a JVM.
◆ The project ships pre-built binaries, so there is no need to compile it by hand.
◆ Installing Spark is not hard; the configuration is what needs attention, and a Hadoop environment is not strictly required.

Download and extract:
tar zxvf spark-2.4.1-bin-hadoop2.7.tgz

2 Configuring Spark
◆ Read the official documentation before configuring, instead of copying configuration tutorials straight from the web.
◆ Set the memory each node may use, otherwise node utilization can be low.
◆ Pay attention to the IP and port settings in Spark to avoid UnknownHostException. [Official configuration]()

Applying the default configuration: copy the two configuration-file templates and edit them yourself. For a single-machine setup, use the local IP, then verify with the shell:
bin/spark-shell

3 Spark shell
◆ The Spark shell is a bash script that


Quickly Building a Prometheus + Grafana Monitoring System with docker-compose

一、说明Prometheus负责收集数据,Grafana负责展示数据。其中采用Prometheus 中的 Exporter含:1)Node Exporter,负责收集 host 硬件和操作系统数据。它将以容器方式运行在所有 host 上。2)cAdvisor,负责收集容器数据。它将以容器方式运行在所有 host 上。3)Alertmanager,负责告警。它将以容器方式运行在所有 host 上。完整Exporter列表请参考:https://prometheus.io/docs/instrumenting/exporters/ 二、安装docker,docker-compose2.1 安装docker先安装一个64位的Linux主机,其内核必须高于3.10,内存不低于1GB。在该主机上安装Docker。 # 安装依赖包 yum install -y yum-utils device-mapper-persistent-data lvm2 # 添加Docker软件包源 yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo # 安装Docker CE yum install docker-ce -y # 启动 systemctl start docker # 开机启动 systemctl enable docker # 查看Docker信息 docker info 2.2 安装docker-compose curl -L https://github.com/docker/compose/releases/download/1.23.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose chmod +x /usr/local/bin/docker-compose 三、添加配置文件 mkdir -p /usr/local/src/config cd /usr/local/src/config 2.1 添加prometheus.yml配置文件,vim prometheus.yml # my global config global: scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute. evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute. # scrape_timeout is set to the global default (10s). # Alertmanager configuration alerting: alertmanagers: - static_configs: - targets: ['192.168.159.129:9093'] # - alertmanager:9093 # Load rules once and periodically evaluate them according to the global 'evaluation_interval'. rule_files: - "node_down.yml" # - "first_rules.yml" # - "second_rules.yml" # A scrape configuration containing exactly one endpoint to scrape: # Here it's Prometheus itself. scrape_configs: # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config. - job_name: 'prometheus' static_configs: - targets: ['192.168.159.129:9090'] - job_name: 'cadvisor' static_configs: - targets: ['192.168.159.129:8080'] - job_name: 'node' scrape_interval: 8s static_configs: - targets: ['192.168.159.129:9100'] 2.2 添加邮件告警配置文件添加配置文件alertmanager.yml,配置收发邮件邮箱vim alertmanager.yml global: smtp_smarthost: 'smtp.163.com:25' #163服务器 smtp_from: 'tsiyuetian@163.com' #发邮件的邮箱 smtp_auth_username: 'tsiyuetian@163.com' #发邮件的邮箱用户名,也就是你的邮箱 smtp_auth_password: 'TPP***' #发邮件的邮箱密码 smtp_require_tls: false #不进行tls验证 route: group_by: ['alertname'] group_wait: 10s group_interval: 10s repeat_interval: 10m receiver: live-monitoring receivers: - name: 'live-monitoring' email_configs: - to: '1933306137@qq.com' #收邮件的邮箱 2.3 添加报警规则添加一个node_down.yml为 prometheus targets 监控vim node_down.yml groups: - name: node_down rules: - alert: InstanceDown expr: up == 0 for: 1m labels: user: test annotations: summary: "Instance {{ $labels.instance }} down" description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minutes." 
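Before wiring these files into docker-compose, it is worth validating them. promtool is bundled in the prom/prometheus image and amtool in prom/alertmanager, so something along these lines should work (paths follow the /usr/local/src/config layout used above):

cd /usr/local/src/config

# Check the Prometheus configuration and the alert rule file it references.
docker run --rm -v "$PWD":/cfg --entrypoint promtool prom/prometheus check config /cfg/prometheus.yml
docker run --rm -v "$PWD":/cfg --entrypoint promtool prom/prometheus check rules /cfg/node_down.yml

# Check the Alertmanager configuration.
docker run --rm -v "$PWD":/cfg --entrypoint amtool prom/alertmanager check-config /cfg/alertmanager.yml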
四、编写docker-composevim docker-compose-monitor.yml version: '2' networks: monitor: driver: bridge services: prometheus: image: prom/prometheus container_name: prometheus hostname: prometheus restart: always volumes: - /usr/local/src/config/prometheus.yml:/etc/prometheus/prometheus.yml - /usr/local/src/config/node_down.yml:/etc/prometheus/node_down.yml ports: - "9090:9090" networks: - monitor alertmanager: image: prom/alertmanager container_name: alertmanager hostname: alertmanager restart: always volumes: - /usr/local/src/config/alertmanager.yml:/etc/alertmanager/alertmanager.yml ports: - "9093:9093" networks: - monitor grafana: image: grafana/grafana container_name: grafana hostname: grafana restart: always ports: - "3000:3000" networks: - monitor node-exporter: image: quay.io/prometheus/node-exporter container_name: node-exporter hostname: node-exporter restart: always ports: - "9100:9100" networks: - monitor cadvisor: image: google/cadvisor:latest container_name: cadvisor hostname: cadvisor restart: always volumes: - /:/rootfs:ro - /var/run:/var/run:rw - /sys:/sys:ro - /var/lib/docker/:/var/lib/docker:ro ports: - "8080:8080" networks: - monitor 五、启动docker-compose #启动容器: docker-compose -f /usr/local/src/config/docker-compose-monitor.yml up -d #删除容器: docker-compose -f /usr/local/src/config/docker-compose-monitor.yml down #重启容器: docker restart id 容器启动如下: prometheus targets界面如下: 备注:如果State为Down,应该是防火墙问题,参考下面防火墙配置。 prometheus graph界面如下: 备注:如果没有数据,同步下时间。 六、防火墙配置6.1 关闭selinux setenforce 0 vim /etc/sysconfig/selinux 6.2 配置iptables #删除自带防火墙 systemctl stop firewalld.service systemctl disable firewalld.service #安装iptables yum install -y iptables-services #配置 vim /etc/sysconfig/iptables *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [24:11326] -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 9090 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 3000 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 9093 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 9100 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT #启动 systemctl restart iptables.service systemctl enable iptables.service 七、配置Grafana7.1 添加Prometheus数据源 2 配置dashboards 说明:可以用自带模板,也可以去https://grafana.com/dashboards,下载对应的模板。 7.3 查看数据我从网页下载了docker相关的模板:Docker and system monitoring,893输入893,就会加载出下面的信息 导入后去首页查看数据 八、附录:单独命令启动各容器 #启动prometheus docker run -d -p 9090:9090 --name=prometheus \ -v /usr/local/src/config/prometheus.yml:/etc/prometheus/prometheus.yml \ -v /usr/local/src/config/node_down.yml:/etc/prometheus/node_down.yml \ prom/prometheus # 启动grafana docker run -d -p 3000:3000 --name=grafana grafana/grafana #启动alertmanager容器 docker run -d -p 9093:9093 -v /usr/local/src/config/config.yml:/etc/alertmanager/config.yml --name alertmanager prom/alertmanager #启动node exporter docker run -d \ -p 9100:9100 \ -v "/:/host:ro,rslave" \ --name=node_exporter \ quay.io/prometheus/node-exporter \ --path.rootfs /host #启动cadvisor docker run \ --volume=/:/rootfs:ro \ --volume=/var/run:/var/run:rw \ --volume=/sys:/sys:ro \ --volume=/var/lib/docker/:/var/lib/docker:ro \ --publish=8080:8080 \ --detach=true \ --name=cadvisor \ google/cadvisor:latest
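Once the containers are up, each component exposes a simple health endpoint, which makes it easy to confirm the stack is answering before opening Grafana. A minimal sketch, using the 192.168.159.129 address from the configuration above:

HOST=192.168.159.129   # the host IP used throughout this article

curl -s "http://${HOST}:9090/-/healthy"        # Prometheus
curl -s "http://${HOST}:9093/-/healthy"        # Alertmanager
curl -s "http://${HOST}:3000/api/health"       # Grafana (returns a small JSON document)
curl -s "http://${HOST}:8080/healthz"          # cAdvisor
curl -s "http://${HOST}:9100/metrics" | head   # node_exporter sample metrics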


An IoT Monitoring Platform Built on Zero-Ice

[P1]项目初始态势 开始接手项目时,领导要求很简单,就是做一个本地服务,手机连接上服务,能控制本地系统内的各种设备,至于设备状态如何采集与控制,数据如何分析和存储这里略过,其通信机制类似于下图: ​ 整个项目,领导只安排了只有我一个项目人员,当然能简单就简单。我将本地数据经过转换汇总,终端采用设备-〉信息点的层级关系显示,当然设备层具有自身属性,信息点层也具有自身属性,终端是瘦客户端,在链接上服务端后,由服务端推送层级结构和相关属性信息给客户端,进行初始化,后续就是实现实时上行推送和下行控制,上行和下行采用独立线程处理。 ​ 一个星期后,安装在项目现场,实现了本地WIFI的手机端远程监控,工程人员和使用人员都觉得OK. 又经历一段时间的功能追加,支持各种类型设备可配置接入,支持蓝牙、串口、网络采集数据,支持实时分析与告警,跨平台部署,自动化调度运维等一系列任务,想来项目应该结题。嗯,领导有了新想法。 [P2]项目变种 采用socket通信,手机端需要设置IP,很麻烦,领导希望工程人员、使用人员只要连上WIFI打开app就能实现远程监控。领导发话,没办法只能照办,为了偷懒,当然采用现成的通信机制改造,最终选型了Zero-ice实现,通信机制类似于: ​ 我将的ice配置文件如下: #pragma once #include <Ice/Identity.ice> module PCS { enum DevTypeICE { UDDev=0,Region, Platform,Entity }; enum PTypeICE { UDPInfo=0,YX, YC, YXS, YCS }; struct DateTimeI { int isec; int imsec; }; struct DateTimeS { short year; short month; short day; short hour; short minute; short second; short msec; }; struct PInfo { long pID; string name; string desc; PTypeICE pType; float pValue; }; sequence<PInfo> PInfos; struct Dev { long devID; DevTypeICE devType; string name; string desc; }; sequence<Dev> Devs; interface ClientAchieve { void PValueChange(long devID,long pID, DateTimeI itime, float val); void addDev(Devs devs); void addPInfo(long devID,PInfos pinfos); }; interface ServerAchieve { void AddClient(::Ice::Identity ident); void AddClientID(::Ice::Identity ident,int type); void setPValue(long devID, long pID, float val); }; }; 至于接口实现代码略过,给出部分服务和客户端的通信连接实现参考: 服务端: void PCSServer::listen() { CacheDataObj *ptr_CacheDataObj = CacheDataObj::getInstance(); iceType = ptr_CacheDataObj->getIceType(); confFile = ptr_CacheDataObj->getIceConfFile(); try { Ice::InitializationData initData; initData.properties = Ice::createProperties(); #ifdef ICE_STATIC_LIBS Ice::registerIceSSL(); #endif CLogger::createInstance()->Log(eTipMessage ,"start load ice config file %s", confFile.c_str()); initData.properties->load(confFile); char ** argv; char *p = NULL; argv = &p; int argc = 0; if (iceType > 0) { communicator_ = Ice::initialize(argc, argv, initData); } else { communicator_ = this->communicator(); } if (NULL == communicator_) { CLogger::createInstance()->Log(eConfigError , "communicator initialize fail! 
[%s %s %d]" , __FILE__, __FUNCTION__, __LINE__); } } catch (const Ice::Exception& ex) { CLogger::createInstance()->Log(eSoftError , "Ice::Exception[1]:%s [%s %s %d]" , ex.ice_id().c_str(), __FILE__, __FUNCTION__, __LINE__); disListen(); } catch (const std::string & msg) { CLogger::createInstance()->Log(eSoftError , "Ice::Exception[2]:%s [%s %s %d]" , msg.c_str(), __FILE__, __FUNCTION__, __LINE__); disListen(); } catch (const char * msg) { CLogger::createInstance()->Log(eSoftError , "Ice::Exception[3]:%s [%s %s %d]" , msg, __FILE__, __FUNCTION__, __LINE__); disListen(); } if (NULL != communicator_) { try { if (iceType > 0) { Ice::ObjectAdapterPtr adapter_pcs = communicator_->createObjectAdapter("SyePcs"); Ice::Identity id_pcs = Ice::stringToIdentity("SyePcs"); PCS::ServerAchievePtr pcs_ = new ServerAchieveI(communicator_); adapter_pcs->add(pcs_, id_pcs); adapter_pcs->activate(); //fprintf(stderr, "ice server listen!\n"); CLogger::createInstance()->Log(eTipMessage, "ice server listen!"); isListen = true; communicator_->waitForShutdown(); } else { Ice::PropertiesPtr properties = communicator_->getProperties(); Ice::ObjectAdapterPtr adapter_pcs = communicator_->createObjectAdapter("SyePcs"); std::string id = communicator_->getProperties()->getPropertyWithDefault("Identity", "PCSIO"); Ice::Identity id_pcs = Ice::stringToIdentity(id); CLogger::createInstance()->Log(eTipMessage , "id_pcs.name=%s", id_pcs.name.c_str()); PCS::ServerAchievePtr pcs_ = new ServerAchieveI(communicator_); adapter_pcs->add(pcs_, id_pcs); adapter_pcs->activate(); CLogger::createInstance()->Log(eTipMessage, "ice server listen!"); isListen = true; communicator_->waitForShutdown(); } } catch (const Ice::Exception& ex) { CLogger::createInstance()->Log(eSoftError , "Ice::Exception[1]:%s [%s %s %d]" , ex.ice_id().c_str(), __FILE__, __FUNCTION__, __LINE__); disListen(); } catch (const std::string & msg) { CLogger::createInstance()->Log(eSoftError , "Ice::Exception[2]:%s [%s %s %d]" , msg.c_str(), __FILE__, __FUNCTION__, __LINE__); disListen(); } catch (const char * msg) { CLogger::createInstance()->Log(eSoftError , "Ice::Exception[3]:%s [%s %s %d]" , msg, __FILE__, __FUNCTION__, __LINE__); disListen(); } catch (...) { CLogger::createInstance()->Log(eSoftError , "Ice::Exception[4]:unkown error for adapter crate and activate! [%s %s %d]" , __FILE__, __FUNCTION__, __LINE__); disListen(); } } else { CLogger::createInstance()->Log(eConfigError , "communicator is NULL! 
[%s %s %d]" , __FILE__, __FUNCTION__, __LINE__); } }; void PCSServer::disListen() { if (communicator_) { try { isListen = false; communicator_->destroy(); } catch (const Ice::Exception& ex) { CLogger::createInstance()->Log(eSoftError , "Ice::Exception:%s [%s %s %d]" , ex.ice_id().c_str() , __FILE__, __FUNCTION__, __LINE__); } } }; 客户端只给出java的,c++实现类似: import PCS.*; import android.annotation.SuppressLint; import Ice.*; import java.io.IOException; import java.io.InputStream; import java.util.Properties; import java.util.UUID; import java.lang.Thread; @SuppressLint("NewApi") public class PcsClient implements Runnable { public PcsClient() { } boolean connectf=false; InputStream inputStream =null; private Ice.Communicator ic = null; private ServerAchievePrx twoway = null; private ClientAchieveI pca = null; private ObjectAdapter adapter = null; private Ice.Identity ident = null; private boolean mWorking = false; int runType = 1; int clientType = 1; private Thread mythread = null; public void start() { mWorking = true; mythread = new Thread(this); mythread.start(); } @Override public void run(){ long utime = System.currentTimeMillis(); while(mWorking){ try { Thread.sleep(10); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); break; } if(twoway != null&& pca!=null) { if(System.currentTimeMillis()>(utime+10000)) { // if(!pca.getUpdateState()) { disconnect(); System.out.println("disconnect and connect again"); if(!connect()) { System.out.println("connect 2 fail"); try { Thread.sleep(10); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } utime = System.currentTimeMillis(); // pca.setUpdateState(false); } }else{ if(!connect()) { System.out.println("connect 1 fail"); try { Thread.sleep(100); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } } } public void stop() { mWorking = false; try { Thread.sleep(1); mythread.interrupt(); //手动停止 mythread.join(); } catch (InterruptedException e) { e.printStackTrace(); } disconnect(); disCommunicator(); } public void disCommunicator() { if (ic != null) { try{ ic.shutdown(); ic.destroy(); ic = null; }catch (Ice.Exception e) { System.out.printf("destroy fail(%s)",e.ice_name()); e.printStackTrace(); } } } public Ice.Communicator getCommunicator() { System.out.println("getCommunicator"); if(ic == null){ Ice.InitializationData initData = new Ice.InitializationData(); System.out.println("InitializationData"); initData.properties = Ice.Util.createProperties(); System.out.println("createProperties"); try { Properties properties = new Properties(); properties.load(inputStream); for (String name : properties.stringPropertyNames()) { String value = properties.getProperty(name); initData.properties.setProperty(name, value); System.out.printf("name=%s,value=%s\n",name,value); } } catch(IOException ex) { String _error = String.format("Initialization failed %s", ex.toString()); System.out.println(_error); try { throw ex; } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } ic = Ice.Util.initialize(initData); System.out.println("initialize"); if(ic==null){ System.out.println("invalid Communicator"); } } return ic; } public boolean connect() { if(!connectf){ try { twoway = null; if(runType>0){ System.out.println("checkedCast SyePcs.Proxy"); twoway = ServerAchievePrxHelper.checkedCast( getCommunicator().propertyToProxy("SyePcs.Proxy").ice_twoway().ice_secure(false)); }else{ try{ System.out.println("checkedCast PCSIO 
1"); twoway = ServerAchievePrxHelper.checkedCast( getCommunicator().stringToProxy("PCSIO").ice_twoway().ice_secure(false).ice_timeout(100)); System.out.println("checkedCast PCSIO 2"); }catch(Ice.NotRegisteredException e){ System.out.println("checkedCast 1 MCSSVCGrid/Query"); Ice.ObjectPrx proxy = getCommunicator().stringToProxy("MCSSVCGrid/Query"); IceGrid.QueryPrx query = IceGrid.QueryPrxHelper.checkedCast(proxy); twoway = ServerAchievePrxHelper.checkedCast( query.findObjectByType("::PCS::PCSIO").ice_twoway().ice_secure(false).ice_timeout(100)); e.printStackTrace(); System.out.println("checkedCast 2 MCSSVCGrid/Query"); } } if(twoway != null) { /*ObjectAdapter*/ adapter = getCommunicator().createObjectAdapter(""); System.out.println("Communicator create adapter"); pca = new ClientAchieveI(); /*Ice.Identity*/ ident = getCommunicator().stringToIdentity("ClientAchieve"); ident = new Ice.Identity(); //System。Guid guid = System。Guid.NewGuid(); //ident.name = guid.ToString(); UUID uuid = UUID.randomUUID(); ident.name = uuid.toString(); //ident.name = java.util.UUID.randomUUID(); ident.category = ""; System.out.println("Communicator get Identity"); adapter.add(pca, ident); adapter.activate(); System.out.println("activate"); twoway.ice_getConnection().setAdapter(adapter); twoway.AddClientID(ident,clientType); System.out.println("AddClient"); connectf = true; pca.setUpdateState(true); }else{ System.err.println("invalid proxy,couldn't find a `::PCS::PCSIO' object"); connectf = false; } } catch (Ice.Exception e) { System.out.println("create Client fail"); e.printStackTrace(); connectf = false; System.out.println("connectf = false"); } } return connectf; } public void disconnect() { if(pca != null){ pca = null; } if(null!=adapter&&null!=ident){ adapter.remove(ident); ident = null; adapter.destroy(); } if(twoway != null){ twoway = null; } connectf = false; } public void setControl(long devID, long pID, float val){ if(twoway != null){ twoway.setPValue(devID, pID, val); } } } 由于本地通信不是很饱和,我只将serverPcs设了一个Node,其简化的application.xml配置如下,serverPcs是按需由节点服务icegridnode启动的: <icegrid> <application name="SyeSys"> <node name="Node01" > <description>测试节点</description> <server id="syePcsSrv" exe="PCS_server" pwd="D:\\SYE_MCS_PRO\\pcs_project\\PCS_server\\x64\\PcsSrv" ice-version="3.6" activation-timeout="60" application-distrib="false" deactivation-timeout="60" activation="on-demand"> <adapter name="SyePcs" id="SyePcs" endpoints="tcp"> <object identity="PCSIO" type="::PCS::PCSIO" property="Identity"/> </adapter> </server> </node> </application> </icegrid> 为了简化与测试,暂时将服务部署本机,后续只要将IP更改即可,并在Host文件将IP与域内自定义站点映射即可,下面以本机地址为例配置了中心服务、节点服务的配置文件,通过节点服务启动serverPcs。 主中心服务配置,一些描述也一并给出,而从中心服务和节点服务只给出配置项: # # The IceGrid instance name. # IceGrid.InstanceName=MCSSVCGrid #1 Ice.Default.Protocol=tcp #Ice.Default.Locator=MCSSVCGrid/Locator:default -p 4061 #2 #1 为这个应用实例指定一个唯一的标识 #2 注册服务的端点信息(主注册服务和所有的从注册服务),节点注册时要用到 # # IceGrid registry configuration. 
#IceGrid 注册表最多创建四组端点,用下面的属性进行配置: #1)IceGrid.Registry.Client.Endpoints #支持Ice::Locator IceGrid::Query 的客户端端点。 #2)IceGrid.Registry.Server.Endpoints #用于对象和对象适配器注册的服务器端端点。 #3)IceGrid.Registry.Admin.Endpoints #IceGrid::Admin 接口的管理端点( 可选)。 #4)IceGrid.Registry.Internal.Endpoints #定义IceGrid节点用于与注册表进行通信的内部端点内部接口的端点。这些内部端点必须能被IceGrid节点用于与注册表进行通信的内部端点节点访问。 # IceGrid.Registry.Client.Endpoints=default -p 4061 #3 #协议default,其值可通过Ice.Default.Protocol设置,默认是tcp,协议可以设置tcp udp ssll ws wss等 #-h <ip>指定网址,-p <port> 指定端口, -t <msec> 指定超时毫秒数, 多个Endpoint采用':'间隔 #IceGrid.Registry.Client.Endpoints=default -h 127.0.0.1 -p 4061 #3 IceGrid.Registry.Server.Endpoints=default #4 #IceGrid.Registry.Admin.Endpoints=default -p 4062 #IcePack.Registry.Admin.Endpoints=default -p 4062 IceGrid.Registry.Internal.Endpoints=default #5 #IceGrid.Registry.ReplicaName=Master #标识服务名称 #IceGrid.Registry.Data=db\\LMDB_master #6 IceGrid.Registry.LMDB.Path=db\\LMDB_master IceGrid.Registry.DynamicRegistration=1 #IceGrid.Node.CollocateRegistry=1 //定义节点是否和注册服务并置在一起,设为1时并置,设为0时不并置,不能有两个节点都配置这个属性只能有一个主Registry配置 # 3 客户端访问注册服务器的端点信息 # 4 服务访问注册服务器的端点信息,通常是default # 5 内部访问端点信息,通常是default,节点用这个端口和注册服务通信 # 6 注册服务的数据目录的路径 # # IceGrid admin clients must use a secure connection to connect to the # registry or use Glacier2. # #IceGrid.Registry.AdminSessionManager.Endpoints=default IceGrid.Registry.PermissionsVerifier=MCSSVCGrid/NullPermissionsVerifier #7 #IceGrid.Registry.CryptPasswords=passwords IceGrid.Registry.AdminPermissionsVerifier=MCSSVCGrid/NullPermissionsVerifier #8 #IceGrid.Registry.SSLPermissionsVerifier=MCSSVCGrid/NullSSLPermissionsVerifier #9 #IceGrid.Registry.AdminSSLPermissionsVerifier=MCSSVCGrid/NullSSLPermissionsVerifier #10 # 7 设定防火墙安全代理,从而控制客户端访问注册表时可用的权限 # 8 设定防火墙安全代理,从而控制注册表管理者可用的权限 # 9 设定SSL安全代理,从而设定客户端访问注册表时的SSL安全访问机制 # 10 设定SSL安全代理,从而设定注册表管理者的SSL安全访问机制 # # IceGrid SQLconfiguration if using SQL database. # #Ice.Plugin.DB=IceGridSqlDB:createSqlDB #11 #IceGrid.SQL.DatabaseType=QSQLITE #12 #IceGrid.SQL.DatabaseName=register/Registry.db #13 # 11 指定Ice对象序列化的机制,如果不设置,默认用Freeze机制 # 12 指定使用数据库的类型 #13 指定使用数据库的名称 # # #Ice Error andStandard output Set # #Ice.StdErr=master/stderr.txt #14 #Ice.StdOut= master/stdout.txt #15 # #14 指定标准错误输出文件 #15 指定标准输出文件 # # Trace properties. # Ice.ProgramName=Master #16 IceGrid.Registry.Trace.Node=2 #17 IceGrid.Registry.Trace.Replica=2 #18 #16 指定主注册服务的名称 #17 指定主注册服务跟踪节点信息的级别(0~3),默认为0 #18 指定主/从热备注册服务的跟踪级别(0~3),默认为0 # # IceGrid nodeconfiguration. # #IceGrid.Node.Name=node_1 #19 #IceGrid.Node.Endpoints=default #20 #IceGrid.Node.Data=node_1 #21 #IceGrid.Node.CollocateRegistry=1 #22 #IceGrid.Node.Output=node_1 #23 #IceGrid.Node.RedirectErrToOut=1 #24 # 19 定义节点的名称,必须唯一 # 20 节点被访问的端口信息,注册服务使用这个端点和节点通信,通常设为default # 21 节点的数据目录的路径 # 22 定义节点是否和注册服务并置在一起,设为1时并置,设为0时不并置 # 23 节点标准输出信息重定向蹈的目录路径,会自动生成输出文件 # 24 节点上的服务程序的标准错误重定向到标准输出 # Traceproperties. # IceGrid.Node.Trace.Activator=1 #25 IceGrid.Node.Trace.Adapter=2 #26 IceGrid.Node.Trace.Server=3 #27 # 25 激活器跟踪级别,通常有0,1,2,3级,默认是0 # 26 对象适配器跟踪级别,通常有0,1,2,3级,默认是0 # 27 服务跟踪级别,通常有0,1,2,3级,默认是0 # # Dummy usernameand password for icegridadmin. # IceGridAdmin.Username=mygrid #28 IceGridAdmin.Password=mygrid #29 # 28 IceGrid管理器登录该应用的用户名 # 29 IceGrid管理器登录该应用的密码 从中心服务配置: # The IceGrid locator proxy. #主从注册表之间访问定位器配置 Ice.Default.Locator=MCSSVCGrid/Locator:default -p 4061 #可指定ip #Ice.Default.Locator=MCSSVCGrid/Locator:default -h 127.0.0.1 -p 4061 # IceGrid registry configuration. 
IceGrid.Registry.Client.Endpoints=default -p 14061 #可指定ip #IceGrid.Registry.Client.Endpoints=default -h 127.0.0.1 -p 14061 IceGrid.Registry.Server.Endpoints=default IceGrid.Registry.Internal.Endpoints=default IceGrid.Registry.ReplicaName=Slave#指定从注册服务的名称 IceGrid.Registry.LMDB.Path=db/LMDB_slave # IceGrid admin clients must use a secure connection to connect to the IceGrid.Registry.PermissionsVerifier=MCSSVCGrid/NullPermissionsVerifier IceGrid.Registry.AdminPermissionsVerifier=MCSSVCGrid/NullPermissionsVerifier # Trace properties. Ice.ProgramName=Slave IceGrid.Registry.Trace.Node=2 IceGrid.Registry.Trace.Replica=2 # Traceproperties. IceGrid.Node.Trace.Activator=1 # Dummy usernameand password for icegridadmin. IceGridAdmin.Username=mygrid #28 IceGridAdmin.Password=mygrid #29 节点服务的配置: # The IceGrid locator proxy. Ice.Default.Locator=MCSSVCGrid/Locator:default -p 4061:default -p 14061 IceGrid.Node.Name=Node01 IceGrid.Node.Endpoints=default IceGrid.Node.Data=db/node_01 IceGrid.Node.Output=db/node_out_01 # Trace properties. Ice.ProgramName=Node IceGrid.Node.Trace.Replica=2 IceGrid.Node.Trace.Activator=1 #log tracing Ice.LogFile=iceserv.log Ice.LogFile.SizeMax=1048576 Ice.PrintStackTraces=2 #Ice.Trace.GC=1 Ice.Trace.Protocol=1 Ice.Trace.Slicing=1 Ice.Trace.Retry=2 Ice.Trace.Network=2 Ice.Trace.Locator=2 #warning Ice.Warn.Connections=1 Ice.Warn.Datagrams=1 Ice.Warn.Dispatch=1 Ice.Warn.AMICallback=1 #Ice.Warn.Leaks=1 配置完成后我们将配置启动脚本,当然也可以通过命令逐个执行,为了便捷,配置数个脚本,以win系统为例: (1)start_center_server.bat,用于启动主从中心服务 start /b /Min "registry01" icegridregistry --Ice.Config=config.master start /b /Min "registry02" icegridregistry --Ice.Config=config.slave (2)start_admin.bat,用于修改更新配置 ::如果更改配置,需要重新映射服务,删除数据目录并重新生成或更新,需先启动中心服务,再调用配置服务更新 ::start_center_server.bat icegridadmin --Ice.Config=config.admin -e "application add 'application.xml'" icegridadmin --Ice.Config=config.admin -e "application update 'application.xml'" (3)start_server.bat,启动节点服务集,本案例只有一个 cd D:\SYE_MCS_PRO\pcs_project\PCS_server\x64\PcsSrv start /b /Min "node01" icegridnode --Ice.Config=config.node cd D:\SYE_MCS_PRO\pcs_project\sye_mcs\mgr (4)stop_server.bat,关闭服务集使用 taskkill /f /im icegridregistry.exe taskkill /f /im icegridnode.exe taskkill /f /im PCS_server.exe 配置完成,先去配置一些服务需要的一些目录, 如 D:\SYE_MCS_PRO\pcs_project\sye_mcs\mgr\db\LMDB_master, D:\SYE_MCS_PRO\pcs_project\sye_mcs\mgr\db\LMDB_slave, D:\SYE_MCS_PRO\pcs_project\PCS_server\x64\PcsSrv\db\node_01, D:\SYE_MCS_PRO\pcs_project\PCS_server\x64\PcsSrv\db\node_out_01, 我手动配置是为了防止尝试配置测试的就数据信息阻隔目录创建或权限受限,完成目录配置后 ,先start_center_server.bat服务,然后启动start_admin.bat(配置有改动时)和start_server.bat。 当服务启动稳定后,serverPcs服务应该是没有启动的,只有客户端连接需求后才会被触发启动。 客户端的ice配置文件config0.client如下: # # The IceGrid locator proxy. 
#"Ice.Default.Locator"的访问地址与"IceGrid.Registry.Client.Endpoints设置"一致 # Ice.Default.Locator=MCSSVCGrid/Locator:tcp -h localhost -p 4061:tcp -h localhost -p 14061 #Ice.Default.Locator=MCSSVCGrid/Locator:tcp -h www.syemcs.com -p 4061:tcp -h www.pytest.com -p 14061 #非本地连接,需指定IP #Ice.Default.Locator=MCSSVCGrid/Locator:tcp -h 192.168.1.102 -p 4061:tcp -h 192.168.1.102 -p 14061 #Ice.Default.Router=MCSSVCGlacier2/router:tcp -h www.syemcs.com -p 4064:ssl -h www.pytest.com -p 4065 现在启动测试终端,通过从服务-〉节点服务-〉serverPcs,终端与serverPcs建立了长链接: ​​ 手机终端类似下图 ​​ 服务部署到工程上使用了一段时间,领导开始又不满足了,嗯,想在家里也能远程监控现场设备,顺便来个短信告警、统计报表啥的。好吧,准备升级。 [P3]升级到互联网 嗯,这次在利用Zero-Ice比较熟络了,我设计了通信机制如下: ​ 我将本地的serverPcs作为客户端,并在阿里云购买了ECS、数据库等服务,并建立了一个控制中转的门户服务serverMcs来支持各地的serverPcs链接进来,另外建立一个serverDcs来支持互联网终端链接进来,同时将终端升级,将每一个serverPcs推送的数据看作一个区域,即显示格式为区域-〉设备-〉信息点的层级结构。 同样,需要为serverPcs<->serverMcs和app<->serverDcs建立Ice通信接口,其接口文件与本地的终端APP通信类似, McsInterface.ice: #pragma once #include <Ice/Identity.ice> module MCS { //area type for ice communitation enum AreaTypeICE { UDArea=0,ShowingRoom,SubwayStation }; //virtual device type for ice communitation enum DevTypeICE { UDDev=0,Region, Platform,Entity }; //piont type for ice communitation enum PTypeICE { UDPInfo=0,YX, YC, YXS, YCS }; //date time describe by second and millsecond struct DateTimeI { int isec; int imsec; }; //date time describe by the struct within year mon day hour min second millsecond struct DateTimeS { short year; short month; short day; short hour; short minute; short second; short msec; }; //ponit desc struct PInfo { long pID; string name; string desc; PTypeICE pType; float pValue; }; sequence<PInfo> PInfos; dictionary<long, PInfo> PInfoMap; //device desc struct Dev { long devID; DevTypeICE devType; string name; string desc; }; sequence<Dev> Devs; dictionary<long, Dev> DevMap; //area desc struct Area { long areaID; AreaTypeICE areaType; string name; string desc; }; sequence<Area> Areas; sequence<long> AreaIDs; //file comunication format struct FileBinary { string filename; int startpos; int filesize; string filebuf; int buflen; }; //point desc from server and set point value struct PValue { long areaID; long devID; long pID; DateTimeI itime; float val; }; //client accomplish it interface MCSClient { void setPValue(PValue pval); int nextFileData(long areaID,out FileBinary filedata); bool FileRefresh(long areaID,int fileType,string filename); }; //server accomplish it interface MCSServer { void AddClient(::Ice::Identity ident,int type,AreaIDs areaids); void PValueChange(PValue pval); //void addArea(Areas areas); void addArea(Area area); void addDev(long areaID,Devs devs); void addPInfo(long areaID,long devID,PInfos pinfos); }; }; DcsInterface.ice: #pragma once #include <Ice/Identity.ice> module DCS { enum AreaTypeICE { UDArea=0,ShowingRoom,SubwayStation }; enum DevTypeICE { UDDev=0,Region, Platform,Entity }; enum PTypeICE { UDPInfo=0,YX, YC, YXS, YCS }; struct DateTimeI { int isec; int imsec; }; struct DateTimeS { short year; short month; short day; short hour; short minute; short second; short msec; }; struct PInfo { long pID; string name; string desc; PTypeICE pType; float pValue; }; sequence<PInfo> PInfos; struct Dev { long devID; DevTypeICE devType; string name; string desc; }; sequence<Dev> Devs; struct Area { long areaID; AreaTypeICE areaType; string name; string desc; }; sequence<Area> Areas; sequence<long> AreaIDs; interface ClientAchieve { //for internet void PValueChangeI(long areaID,long devID,long pID, DateTimeI itime, float val); void addAreaI(Areas devs); void addDevI(long 
The implementation logic is simple: whenever serverPcs (re)connects to the cloud, it pushes the whole device -> information-point hierarchy once, and afterwards reports point values on change or on a schedule, as the business requires, while also accepting downstream control commands. The terminal is likewise a thin client: after connecting to the cloud it first receives an initial push of the devices and information points it is entitled to, based on its subscription and account permissions, builds the area -> device -> information point hierarchy in its display, and from then on simply refreshes the data pushed by the server and sends control commands down.

The concrete implementations of the interface operations are omitted; the connection handling mirrors the local (Java) version. Below is sample C++ code for the client-side communication.

// Main loop: (re)connect, push the hierarchy once, then stream value changes.
int MCSClientV::Run()
{
    while (!exitflag)
    {
        if (!m_bConnect)
        {
#ifdef WIN32
            Sleep(1000);
#else
            usleep(1000000);
#endif
            if (connect())
            {
                newConnectEvent();
            }
        }
        else
        {
            changeUpEvent();
        }
#ifdef WIN32
        Sleep(10);
#else
        usleep(10000);
#endif
    }
    return 0;
}

// Downstream control from serverMcs: forward the set-point to the field layer,
// or write it straight to a virtual point if no physical source is mapped.
void MCSClientV::setPValue(const ::MCS::PValue &pval)
{
    if (areaID != pval.areaID)
    {
        return;
    }
    unsigned long taskID = ptr_ServiceChain->getTaskIDFromDateTime();
    PFrom _pfrom;
    if (ptr_CacheDataObj->getFromInfo(static_cast<unsigned long long>(pval.devID)
        , static_cast<unsigned int>(pval.pID), _pfrom))
    {
        float _val = pval.val;
        ptr_CacheDataObj->getCValue(static_cast<unsigned long long>(pval.devID)
            , static_cast<unsigned int>(pval.pID), _val);
        WDS wd(_pfrom.ipLong, OnSet, _pfrom.pID, _pfrom.pType, _val, 0, "ICE_Control_MCS", taskID);
        ptr_ReceiveData->addWDS(wd);
        CLogger::createInstance()->Log(eControlMessage
            , "TaskID[%lu] and down_node[1] setPValue from ICE MCS,time(%s)"
              ",devID(%ld),pID(%ld),val(%.3f)"
              ",down_control_map, ip[%s],pID(%d),pType(%d),val(%.3f)"
            , taskID
            , PFunc::getCurrentTime().c_str()
            , pval.devID, pval.pID, pval.val
            , _pfrom.ipStr.c_str()
            , _pfrom.pID, static_cast<int>(_pfrom.pType), _val);
        //
        VerificationCache vit;
        vit.execTime = PFunc::getCurrentTime("%04d%02d%02dT%02d%02d%02dZ");
        vit.taskID = taskID;
        vit.taskDesc = "ICE_Control_MCS";
        vit.devID = static_cast<unsigned long>(pval.devID);
        vit.devDesc = _pfrom.devDesc;
        vit.pID = static_cast<unsigned long>(pval.pID);
        vit.pDesc = _pfrom.pDesc;
        vit.pType = static_cast<unsigned int>(_pfrom.pType);
        vit.val = pval.val;
        vit.limitTimeForCheck = static_cast<unsigned int>(time(NULL)) + 5;
        vit.eway_ = _pfrom.eway;
        ptr_VerificationForControlCache->addVerifyData(vit);
    }
    else
    {
        PValueRet pret(pval.val);
        ptr_CacheDataObj->setValue(static_cast<unsigned long long>(pval.devID)
            , static_cast<unsigned int>(pval.pID), pret);
        CLogger::createInstance()->Log(eControlMessage
            , "TaskID[%lu] and down_node[1] setPValue from ICE MCS and down_node[0],time(%s)"
              ",devID(%ld),pID(%ld),val(%.3f)"
              ",direct set val to virtual point control"
            , taskID, PFunc::getCurrentTime().c_str()
            , pval.devID, pval.pID, pret.val_actual);
    }
}

bool MCSClientV::FileRefresh(::Ice::Long areaID, ::Ice::Int filetype, const ::std::string &filename)
{
    return false;
}

::Ice::Int MCSClientV::nextFileData(::Ice::Long areaID, ::MCS::FileBinary &fdata)
{
    return 0;
}

// Lazily create the communicator from the Ice configuration file.
Ice::CommunicatorPtr MCSClientV::communicator()
{
    fprintf(stderr, "MCSClientV::communicator()\n");
    if (m_ic == NULL)
    {
        char **argv;
        char *p = NULL;
        argv = &p;
        int argc = 0;
        Ice::InitializationData initData;
        initData.properties = Ice::createProperties();
#ifdef ICE_STATIC_LIBS
        Ice::registerIceSSL();
#endif
        //fprintf(stderr, "load %s start\n", confFile.c_str());
        CLogger::createInstance()->Log(eTipMessage
            , "load %s start"
            , confFile.c_str());
        initData.properties->load(confFile);
        m_ic = Ice::initialize(argc, argv, initData);
    }
    return m_ic;
}
// Resolve the serverMcs proxy (via a configured proxy, the well-known object, or
// the IceGrid query), then register a callback servant on the same connection
// and announce this area with AddClient.
bool MCSClientV::connect()
{
    if (!m_bConnect)
    {
        try
        {
            fprintf(stderr, "MCS::MCSServerPrx::checkedCast\n");
            if (runType > 0)
            {
                soneway = MCS::MCSServerPrx::checkedCast(
                    communicator()->propertyToProxy("SyeMcs.Proxy")->ice_twoway()->ice_secure(true));
            }
            else
            {
                try
                {
                    fprintf(stderr, "checkedCast MCSIO\n");
                    soneway = MCS::MCSServerPrx::checkedCast(
                        communicator()->stringToProxy("MCSIO")->ice_twoway()->ice_secure(false));
                }
                catch (const Ice::NotRegisteredException&)
                {
                    fprintf(stderr, "checkedCast MCSLGrid/Query\n");
                    IceGrid::QueryPrx query =
                        IceGrid::QueryPrx::checkedCast(communicator()->stringToProxy("MCSLGrid/Query"));
                    soneway = MCS::MCSServerPrx::checkedCast(
                        query->findObjectByType("::MCS::MCSIO")->ice_twoway()->ice_secure(false));
                }
            }
            if (!soneway)
            {
                std::cerr << "couldn't find a `SyeMcs.Proxy' object." << std::endl;
            }
            else
            {
                std::cerr << "find a `SyeMcs.Proxy' object." << std::endl;
                //MCS::MCSServerPrx oneway = twoway->ice_oneway();
                //MCS::MCSServerPrx batchOneway = twoway->ice_batchOneway();
                //MCS::MCSServerPrx datagram = twoway->ice_datagram();
                //MCS::MCSServerPrx batchDatagram = twoway->ice_batchDatagram();
                // Bidirectional connection: the callback servant is reachable by
                // the server over this outgoing connection, so no listening
                // endpoint is needed on the client side.
                Ice::ObjectAdapterPtr adapter = communicator()->createObjectAdapter("");
                Ice::Identity ident;
                ident.name = IceUtil::generateUUID();
                m_strUUID = ident.name;
                ident.category = "";
                MCS::MCSClientPtr crtwoway = new MCSClientI(this);
                adapter->add(crtwoway, ident);
                adapter->activate();
                soneway->ice_getConnection()->setAdapter(adapter);
                ::MCS::AreaIDs aids;
                aids.push_back(areaID);
                //flag client-type client-area-ids-map
                soneway->AddClient(ident, (int)1, aids);
                m_bConnect = true;
            }
        }
        catch (const Ice::Exception& ex)
        {
            //fprintf(stderr, "%s\n", ex.ice_id().c_str());
            CLogger::createInstance()->Log(eSoftError
                , "Ice::Exception[1]:%s, [%s %s %d]"
                , ex.ice_id().c_str()
                , __FILE__, __FUNCTION__, __LINE__);
        }
        catch (const std::string &msg)
        {
            //fprintf(stderr, "%s\n", msg.c_str());
            CLogger::createInstance()->Log(eSoftError
                , "Ice::Exception[2]:%s, [%s %s %d]"
                , msg.c_str()
                , __FILE__, __FUNCTION__, __LINE__);
        }
        catch (const char *msg)
        {
            //fprintf(stderr, "%s\n", msg);
            CLogger::createInstance()->Log(eSoftError
                , "Ice::Exception[3]:%s, [%s %s %d]"
                , msg
                , __FILE__, __FUNCTION__, __LINE__);
        }
    }
    return m_bConnect;
}

void MCSClientV::disconnect()
{
    if (m_bConnect)
    {
        m_bConnect = false;
    }
    if (NULL != m_ic)
    {
        try
        {
            m_ic->destroy();
            m_ic = NULL;
        }
        catch (const Ice::Exception& ex)
        {
            //fprintf(stderr, "%s\n", ex.ice_id().c_str());
            CLogger::createInstance()->Log(eSoftError
                , "Ice::Exception:%s, [%s %s %d]"
                , ex.ice_id().c_str()
                , __FILE__, __FUNCTION__, __LINE__);
        }
    }
}

// On a fresh connection, push the whole area -> device -> point hierarchy once.
void MCSClientV::newConnectEvent()
{
    ::MCS::Devs _devs;
    if (ptr_CacheDataObj->getDevsToSrv(_devs))
    {
        try
        {
            //::MCS::Areas areas;
            ::MCS::Area area;
            area.areaID = areaID;
            area.areaType = (MCS::AreaTypeICE)areaType;
            area.name = areaName;
            area.desc = areaDesc;
            soneway->addArea(area);
            //
            soneway->addDev(areaID, _devs);
            for (::MCS::Devs::const_iterator itdev = _devs.begin(); itdev != _devs.end(); ++itdev)
            {
                ::MCS::PInfos _pinfos;
                if (ptr_CacheDataObj->getPInfosToSrv(itdev->devID, _pinfos))
                {
                    soneway->addPInfo(area.areaID, itdev->devID, _pinfos);
                }
            }
        }
        catch (...)
        {
            //printf("addDev or addPInfo Error:%d\n", static_cast<int>(time(NULL)));
            CLogger::createInstance()->Log(eSoftError
                , "addDev or addPInfo Error:%d, [%s %s %d]"
                , static_cast<int>(time(NULL))
                , __FILE__, __FUNCTION__, __LINE__);
            disconnect();
#ifdef WIN32
            Sleep(1000);
#else
            usleep(1000000);
#endif
        }
    }
}

// Stream queued point-value changes up to serverMcs.
void MCSClientV::changeUpEvent()
{
    WDC wdls;
    //
    if (ptr_ReceiveData->getFirstWDLS(wdls))
    {
        ::MCS::DateTimeI _itime;
        _itime.isec = wdls.evtTimeS;
        _itime.imsec = wdls.evtTimeMS;
        try
        {
            //std::cerr << " PValueChange:" << wdlc.devID << "," << wdlc.pID
            //    << "," << wdlc.val << "," << _itime.isec << "," << _itime.imsec << std::endl;
            ::MCS::PValue pval;
            pval.areaID = areaID;
            pval.devID = wdls.devID;
            pval.pID = wdls.pID;
            pval.itime = _itime;
            pval.val = wdls.val;
            soneway->PValueChange(pval);
        }
        catch (...)
        {
            CLogger::createInstance()->Log(eTipMessage
                , "PValueChange Error:%d [%s,%s,%d]"
                , static_cast<int>(time(NULL))
                , __FILE__, __FUNCTION__, __LINE__);
            disconnect();
#ifdef WIN32
            Sleep(1000);
#else
            usleep(1000000);
#endif
        }
        if (!ptr_ReceiveData->removeFirstWDLS())
        {
            CLogger::createInstance()->Log(eTipMessage
                , "removeFirstWDLS Error[%s,%s,%d]"
                , __FILE__, __FUNCTION__, __LINE__);
        }
    }
}
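For the downstream direction (serverMcs pushing a control command to a serverPcs), the server only needs the MCS::MCSClientPrx it captured in AddClient over the bidirectional connection. This is a hypothetical sketch, assuming the proxy was stored per area as in the servant sketch earlier; the function name pushControl is not from the original project:

#include <ctime>
#include <Ice/Ice.h>
#include "McsInterface.h"

// Hypothetical: push a set-point command down to the serverPcs owning the area.
// The call is dispatched to MCSClientI/MCSClientV::setPValue on the site side.
void pushControl(const MCS::MCSClientPrx& client,
                 Ice::Long areaID, Ice::Long devID, Ice::Long pID, Ice::Float val)
{
    MCS::PValue pval;
    pval.areaID = areaID;
    pval.devID = devID;
    pval.pID = pID;
    pval.itime.isec = static_cast<Ice::Int>(std::time(0));
    pval.itime.imsec = 0;
    pval.val = val;
    client->setPValue(pval);
}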
{ //printf("addDev or addPInfo Error:%d\n", static_cast<int>(time(NULL))); CLogger::createInstance()->Log(eSoftError , "addDev or addPInfo Error:%d, [%s %s %d]" , static_cast<int>(time(NULL)) , __FILE__, __FUNCTION__, __LINE__); disconnect(); #ifdef WIN32 Sleep(1000); #else usleep(1000000); #endif } } } void MCSClientV::changeUpEvent() { WDC wdls; // if (ptr_ReceiveData->getFirstWDLS(wdls)) { ::MCS::DateTimeI _itime; _itime.isec = wdls.evtTimeS; _itime.imsec = wdls.evtTimeMS; try { //std::cerr << " PValueChange:" << wdlc.devID << "," << wdlc.pID // << "," << wdlc.val << "," << _itime.isec << "," << _itime.imsec << std::endl; ::MCS::PValue pval; pval.areaID = areaID; pval.devID = wdls.devID; pval.pID = wdls.pID; pval.itime = _itime; pval.val = wdls.val; soneway->PValueChange(pval); } catch (...) { CLogger::createInstance()->Log(eTipMessage , "PValueChange Error:%d [%s,%s,%d]" , static_cast<int>(time(NULL)) , __FILE__, __FUNCTION__, __LINE__); disconnect(); #ifdef WIN32 Sleep(1000); #else usleep(1000000); #endif } if (!ptr_ReceiveData->removeFirstWDLS()) { CLogger::createInstance()->Log(eTipMessage , "removeFirstWDLS Error[%s,%s,%d]" , __FILE__, __FUNCTION__, __LINE__); } } } 下面给出Zero-Ice云端服务的各个配置信息,其实很类似。 application.xml的简要配置: <icegrid> <application name="SyeMSys"> <server-template id="syeMcsSrv"> <parameter name="index"/> <server id="syeMcsSrv-${index}" exe="MCS_server" pwd="D:\\SYE_MCS_PRO\\pcs_project\\MCS_server\\x64\\McsSrv" ice-version="3.6" activation-timeout="60" application-distrib="false" deactivation-timeout="60" activation="on-demand"> <adapter name="SyeMcs" id="SyeMcs-${index}" endpoints="tcp" replica-group="SyeMcsRe"/> <property name="Identity" value="MCSIO"/> </server> </server-template> <replica-group id="SyeMcsRe"> <load-balancing type="random" n-replicas="2"/> <object identity="MCSIO" type="::MCS::MCSIO"/> </replica-group> <node name="Node21"> <description>本地系统门户服务01</description> <server-instance template="syeMcsSrv" index="1"/> <!--server-instance template="syeMcsSrv" index="2"/--> </node> <node name="Node22"> <description>本地系统门户服务02</description> <server-instance template="syeMcsSrv" index="3"/> <!--server-instance template="syeMcsSrv" index="4"/--> </node> <server-template id="syeDcsSrv"> <parameter name="index"/> <server id="syeDcsSrv-${index}" exe="DCS_server" pwd="D:\\SYE_MCS_PRO\\pcs_project\\DCS_server\\x64\\DcsSrv" ice-version="3.6" activation-timeout="60" application-distrib="false" deactivation-timeout="60" activation="on-demand"> <adapter name="SyeDcs" id="SyeDcs-${index}" endpoints="tcp" replica-group="SyeDcsRe"/> <property name="Identity" value="DCSIO"/> </server> </server-template> <replica-group id="SyeDcsRe"> <load-balancing type="random" n-replicas="2"/> <object identity="DCSIO" type="::DCS::DCSIO"/> </replica-group> <node name="Node11"> <description>终端控制门户服务01</description> <server-instance template="syeDcsSrv" index="1"/> </node> <node name="Node12"> <description>终端控制门户服务02</description> <server-instance template="syeDcsSrv" index="3"/> </node> </application> </icegrid> 云端的主从中心服务的配置,测试时以本地为例, config.master: # # The IceGrid instance name. # IceGrid.InstanceName=MCSLGrid #1 Ice.Default.Protocol=tcp #Ice.Default.Locator=MCSLGrid/Locator:default -p 5061 #2 #1 为这个应用实例指定一个唯一的标识 #2 注册服务的端点信息(主注册服务和所有的从注册服务),节点注册时要用到 # # IceGrid registry configuration. 
Next, the configuration of the cloud master/slave registry services (using the local machine as the example while testing).

config.master:

#
# The IceGrid instance name.
#
IceGrid.InstanceName=MCSLGrid #1
Ice.Default.Protocol=tcp
#Ice.Default.Locator=MCSLGrid/Locator:default -p 5061 #2
# 1: a unique identifier for this application instance
# 2: endpoint information of the registry services (the master and all slaves), used when nodes register
#
# IceGrid registry configuration.
# The IceGrid registry creates at most four groups of endpoints, configured with the following properties:
# 1) IceGrid.Registry.Client.Endpoints
#    client endpoints supporting Ice::Locator and IceGrid::Query
# 2) IceGrid.Registry.Server.Endpoints
#    server-side endpoints used for object and object-adapter registration
# 3) IceGrid.Registry.Admin.Endpoints
#    administrative endpoints for the IceGrid::Admin interface (optional)
# 4) IceGrid.Registry.Internal.Endpoints
#    internal endpoints the IceGrid nodes use to talk to the registry; they must be reachable by the nodes
#
IceGrid.Registry.Client.Endpoints=default -p 5061 #3
# 'default' takes the protocol from Ice.Default.Protocol (tcp if unset); tcp, udp, ssl, ws, wss etc. may be used
# -h <ip> sets the host, -p <port> the port, -t <msec> a timeout in milliseconds; multiple endpoints are separated by ':'
#IceGrid.Registry.Client.Endpoints=default -h 127.0.0.1 -p 5061 #3
IceGrid.Registry.Server.Endpoints=default #4
#IceGrid.Registry.Admin.Endpoints=default -p 5062
#IcePack.Registry.Admin.Endpoints=default -p 5062
IceGrid.Registry.Internal.Endpoints=default #5
#IceGrid.Registry.ReplicaName=Master   # name identifying this registry replica
#IceGrid.Registry.Data=db\\LMDB_master #6
IceGrid.Registry.LMDB.Path=db\\LMDB_master
IceGrid.Registry.DynamicRegistration=1
#IceGrid.Node.CollocateRegistry=1   # whether the node is collocated with the registry (1 = yes, 0 = no); only the one node hosting the master registry may set this
# 3: endpoints clients use to reach the registry
# 4: endpoints servers use to reach the registry, usually default
# 5: internal endpoints, usually default; nodes talk to the registry through this port
# 6: path of the registry's data directory
#
# IceGrid admin clients must use a secure connection to connect to the
# registry or use Glacier2.
#
#IceGrid.Registry.AdminSessionManager.Endpoints=default
IceGrid.Registry.PermissionsVerifier=MCSLGrid/NullPermissionsVerifier #7
#IceGrid.Registry.CryptPasswords=passwords
IceGrid.Registry.AdminPermissionsVerifier=MCSLGrid/NullPermissionsVerifier #8
#IceGrid.Registry.SSLPermissionsVerifier=MCSLGrid/NullSSLPermissionsVerifier #9
#IceGrid.Registry.AdminSSLPermissionsVerifier=MCSLGrid/NullSSLPermissionsVerifier #10
# 7: permissions verifier controlling what clients may do when accessing the registry
# 8: permissions verifier controlling what registry administrators may do
# 9: SSL permissions verifier for client access to the registry
# 10: SSL permissions verifier for registry administrators
#
# IceGrid SQL configuration if using an SQL database.
#
#Ice.Plugin.DB=IceGridSqlDB:createSqlDB #11
#IceGrid.SQL.DatabaseType=QSQLITE #12
#IceGrid.SQL.DatabaseName=register/Registry.db #13
# 11: persistence mechanism for Ice objects; Freeze is used if unset
# 12: database type
# 13: database name
#
# Ice error and standard output settings.
#
#Ice.StdErr=master/stderr.txt #14
#Ice.StdOut=master/stdout.txt #15
# 14: standard error output file
# 15: standard output file
#
# Trace properties.
#
Ice.ProgramName=Master #16
IceGrid.Registry.Trace.Node=2 #17
IceGrid.Registry.Trace.Replica=2 #18
# 16: name of the master registry service
# 17: trace level (0-3) for node information in the master registry, default 0
# 18: trace level (0-3) for master/slave (hot-standby) registry replication, default 0
#
# IceGrid node configuration.
#
#IceGrid.Node.Name=node_1 #19
#IceGrid.Node.Endpoints=default #20
#IceGrid.Node.Data=node_1 #21
#IceGrid.Node.CollocateRegistry=1 #22
#IceGrid.Node.Output=node_1 #23
#IceGrid.Node.RedirectErrToOut=1 #24
# 19: node name, must be unique
# 20: endpoints on which the node is reached; the registry uses them to talk to the node, usually default
# 21: path of the node's data directory
# 22: whether the node is collocated with the registry (1 = collocated, 0 = not)
# 23: directory to which the node's standard output is redirected; output files are generated automatically
# 24: redirect the standard error of servers on this node to standard output
#
# Trace properties.
#
IceGrid.Node.Trace.Activator=1 #25
IceGrid.Node.Trace.Adapter=2 #26
IceGrid.Node.Trace.Server=3 #27
# 25: activator trace level, 0-3, default 0
# 26: object adapter trace level, 0-3, default 0
# 27: server trace level, 0-3, default 0
#
# Dummy username and password for icegridadmin.
#
IceGridAdmin.Username=mygrid #28
IceGridAdmin.Password=mygrid #29
# 28: username for icegridadmin to log in to this application
# 29: password for icegridadmin to log in to this application
The slave registry's config.slave:

#
# The IceGrid locator proxy.
# Locator used between the master and slave registries.
#
Ice.Default.Locator=MCSLGrid/Locator:default -p 5061
# an IP may be specified
#Ice.Default.Locator=MCSLGrid/Locator:default -h 192.168.1.102 -p 5061
#
# The IceGrid instance name.
#
#IceGrid.InstanceName=MCSLGrid
#
# IceGrid registry configuration.
#
IceGrid.Registry.Client.Endpoints=default -p 15061
# an IP may be specified
#IceGrid.Registry.Client.Endpoints=default -h 127.0.0.1 -p 15061
IceGrid.Registry.Server.Endpoints=default
IceGrid.Registry.Internal.Endpoints=default
#IceGrid.Registry.Data=db/LMDB_slave
IceGrid.Registry.ReplicaName=Slave   # name of this slave registry replica
IceGrid.Registry.LMDB.Path=db/LMDB_slave
#
# IceGrid admin clients must use a secure connection to connect to the
# registry or use Glacier2.
#
#IceGrid.Registry.AdminSessionManager.Endpoints=default
IceGrid.Registry.PermissionsVerifier=MCSLGrid/NullPermissionsVerifier #7
IceGrid.Registry.AdminPermissionsVerifier=MCSLGrid/NullPermissionsVerifier #8
#IceGrid.Registry.SSLPermissionsVerifier=MCSLGrid/NullSSLPermissionsVerifier #9
#IceGrid.Registry.AdminSSLPermissionsVerifier=MCSLGrid/NullSSLPermissionsVerifier #10
#
# IceGrid SQL configuration if using an SQL database.
#
#Ice.Plugin.DB=IceGridSqlDB:createSqlDB #11
#IceGrid.SQL.DatabaseType=QSQLITE #12
#IceGrid.SQL.DatabaseName=register/Registry.db #13
#
# Ice error and standard output settings.
#
#Ice.StdErr=slave_1/stderr.txt #14
#Ice.StdOut=slave_1/stdout.txt #15
#
# Trace properties.
#
Ice.ProgramName=Slave
IceGrid.Registry.Trace.Node=2
IceGrid.Registry.Trace.Replica=2
#
# IceGrid node configuration.
#
#IceGrid.Node.Name=node_2 #19
#IceGrid.Node.Endpoints=default #20
#IceGrid.Node.Data=node_2 #21
#IceGrid.Node.CollocateRegistry=1 #22
#IceGrid.Node.Output=node_2 #23
#IceGrid.Node.RedirectErrToOut=1 #24
#
# Trace properties.
#
IceGrid.Node.Trace.Activator=1 #25
#IceGrid.Node.Trace.Adapter=2 #26
#IceGrid.Node.Trace.Server=3 #27
#
# Dummy username and password for icegridadmin.
#
IceGridAdmin.Username=mygrid #28
IceGridAdmin.Password=mygrid #29

serverMcs's config01.node (and config02.node):

#
# The IceGrid locator proxy.
#
Ice.Default.Locator=MCSLGrid/Locator:default -p 5061:default -p 15061
# If the node is not local, specify the IP
#Ice.Default.Locator=MCSLGrid/Locator:default -h 127.0.0.1 -p 5061:default -h 127.0.0.1 -p 15061
#
# IceGrid node configuration.
# The "IceGrid.Node.Endpoints" address must be consistent with the registry's "IceGrid.Registry.Internal.Endpoints" setting (e.g. tcp -h localhost)
#
IceGrid.Node.Name=Node21
IceGrid.Node.Endpoints=default
#IceGrid.Node.Endpoints=default -h 127.0.0.1
IceGrid.Node.Data=db/node_21
IceGrid.Node.Output=db/node_out_21
#IceGrid.Node.RedirectErrToOut=1
#IceGrid.Node.Name = 172.16.14.165   # server address
#
# Trace properties.
#
Ice.ProgramName=Node
IceGrid.Node.Trace.Replica=2
IceGrid.Node.Trace.Activator=1
# log tracing
Ice.LogFile=iceserv.log
Ice.LogFile.SizeMax=1048576
Ice.PrintStackTraces=2
#Ice.Trace.GC=1
Ice.Trace.Protocol=1
Ice.Trace.Slicing=1
Ice.Trace.Retry=2
Ice.Trace.Network=2
Ice.Trace.Locator=2
# warnings
Ice.Warn.Connections=1
Ice.Warn.Datagrams=1
Ice.Warn.Dispatch=1
Ice.Warn.AMICallback=1
#Ice.Warn.Leaks=1

serverDcs's config01.node (config02.node is similar):

#
# The IceGrid locator proxy.
#
Ice.Default.Locator=MCSLGrid/Locator:default -p 5061:default -p 15061
# If the node is not local, specify the IP
#Ice.Default.Locator=MCSLGrid/Locator:default -h 127.0.0.1 -p 5061:default -h 127.0.0.1 -p 15061
#
# IceGrid node configuration.
# The "IceGrid.Node.Endpoints" address must be consistent with the registry's "IceGrid.Registry.Internal.Endpoints" setting (e.g. tcp -h localhost)
#
IceGrid.Node.Name=Node11
IceGrid.Node.Endpoints=default
#IceGrid.Node.Endpoints=default -h 127.0.0.1
IceGrid.Node.Data=db/node_11
IceGrid.Node.Output=db/node_out_11
#IceGrid.Node.RedirectErrToOut=1
#IceGrid.Node.Name = 172.16.14.165   # server address
#
# Trace properties.
#
Ice.ProgramName=Node
IceGrid.Node.Trace.Replica=2
IceGrid.Node.Trace.Activator=1
# log tracing
Ice.LogFile=iceserv.log
Ice.LogFile.SizeMax=1048576
Ice.PrintStackTraces=2
#Ice.Trace.GC=1
Ice.Trace.Protocol=1
Ice.Trace.Slicing=1
Ice.Trace.Retry=2
Ice.Trace.Network=2
Ice.Trace.Locator=2
# warnings
Ice.Warn.Connections=1
Ice.Warn.Datagrams=1
Ice.Warn.Dispatch=1
Ice.Warn.AMICallback=1
#Ice.Warn.Leaks=1
With the configuration done, the next step is the startup scripts. Drawing on the experience from the local setup, I again prepared several scripts, using Windows as the example:

(1) start_center_server.bat, which starts the master and slave registry services:

start /b /MIN "registry11" icegridregistry --Ice.Config=config.master
start /b /MIN "registry12" icegridregistry --Ice.Config=config.slave

(2) start_admin.bat, used to apply or update the deployment descriptor:

:: If the configuration changes, the services must be re-mapped: delete the data
:: directories and regenerate, or update in place. Start the registries first
:: (start_center_server.bat), then run the admin commands to apply the update.
icegridadmin --Ice.Config=config.admin -e "application add 'application.xml'"
icegridadmin --Ice.Config=config.admin -e "application update 'application.xml'"

(3) start_server.bat, which starts the node services; in this case serverMcs and serverDcs each have two nodes:

cd D:\\SYE_MCS_PRO\\pcs_project\\MCS_server\\x64\\McsSrv
start /b /MIN "node11" icegridnode --Ice.Config=config01.node
start /b /MIN "node12" icegridnode --Ice.Config=config02.node
cd D:\\SYE_MCS_PRO\\pcs_project\\DCS_server\\x64\\DcsSrv
start /b /MIN "node21" icegridnode --Ice.Config=config01.node
start /b /MIN "node22" icegridnode --Ice.Config=config02.node
cd D:\\SYE_MCS_PRO\\pcs_project\\sye_mcs\\mgrs

(4) stop_server.bat, used to shut the whole service set down:

taskkill /f /im icegridregistry.exe
taskkill /f /im icegridnode.exe
taskkill /f /im glacier2router.exe
taskkill /f /im MCS_server.exe
taskkill /f /im DCS_server.exe

Before starting anything, create the directories the services need. For the local Windows deployment these are, for example:

D:\SYE_MCS_PRO\pcs_project\sye_mcs\mgrs\db\LMDB_master
D:\SYE_MCS_PRO\pcs_project\sye_mcs\mgrs\db\LMDB_slave
D:\SYE_MCS_PRO\pcs_project\MCS_server\x64\McsSrv\db\node_21
D:\SYE_MCS_PRO\pcs_project\MCS_server\x64\McsSrv\db\node_out_21
D:\SYE_MCS_PRO\pcs_project\DCS_server\x64\DcsSrv\db\node_12
D:\SYE_MCS_PRO\pcs_project\DCS_server\x64\DcsSrv\db\node_out_12

Once the directories exist, run start_center_server.bat first, then start_admin.bat (only when the descriptor has changed) and start_server.bat. When everything is up and stable, the on-demand servers themselves (MCS_server/DCS_server here, just like serverPcs in the local grid) will not actually have been launched yet; they are only activated once a client connection needs them.

serverPcs talks to serverMcs as a client, and the terminal app talks to serverDcs. Since they share the same master/slave registries, their Ice configuration file config0.client looks like this:

#
# The IceGrid locator proxy.
# The "Ice.Default.Locator" address must match the "IceGrid.Registry.Client.Endpoints" setting.
#
Ice.Default.Locator=MCSLGrid/Locator:tcp -h localhost -p 5061:tcp -h localhost -p 15061
#Ice.Default.Locator=MCSLGrid/Locator:tcp -h www.syemcs.com -p 5061:tcp -h www.pytest.com -p 15061
# For a non-local connection, specify the IP
#Ice.Default.Locator=MCSSVCGrid/Locator:tcp -h 192.168.1.102 -p 4061:tcp -h 192.168.1.102 -p 14061
#Ice.Default.Router=MCSSVCGlacier2/router:tcp -h www.syemcs.com -p 4064:ssl -h www.pytest.com -p 4065

Now run the end-to-end test: first launch the local cluster's scripts (start_center_server.bat, start_admin.bat when the configuration has changed, start_server.bat), then the cloud cluster's scripts (the same three), and finally start the terminal app. The display is much the same as before, just with one extra "area" level:

(screenshot omitted)

Once the sample worked end to end, I compiled everything for Linux, deployed it on Alibaba Cloud ECS, did some optimisation and tuning, redesigned the terminal UI, wrote the maintenance and user manuals, and opened the system up to its users. Now every authorised person can monitor the field devices in real time over the internet.

But the project is not over. The boss already has new ideas: remote upgrades, log backup to the cloud with remote viewing, voice control, video publishing and so on. That probably means piling on another batch of microservices.

Hey, no problem, as long as we can hire someone, outsource a bit, or add a little to the project budget.

Development is still on the road, and the hole feels like it keeps getting deeper.
