
Featured Articles

Search results for [搭建] (build/set up): 10,000 articles
An excellent personal blog: 低调大师

Building Test Code for Baidu UNIT 2.0 (Java)

1. Main class: App.java

```java
package haidong.haidong;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.Map;

import org.json.JSONObject;

public class App {

    public static void main(String[] args) {
        System.out.println(utterance("你好"));
    }

    /**
     * Obtain an access token.
     *
     * Example response:
     * { "access_token": "24.460da4889caad24cccdb1fea17221975.2592000.1491995545.282335-1234567",
     *   "expires_in": 2592000 }
     */
    public static String getAuth() {
        // Replace with the API Key from the Baidu console
        String clientId = "**************************";
        // Replace with the Secret Key from the Baidu console
        String clientSecret = "****************************";
        return getAuth(clientId, clientSecret);
    }

    /**
     * Obtain an API access token. The token has a limited lifetime; the caller
     * must manage it and fetch a new one when it expires.
     *
     * @param ak API Key from the Baidu Cloud console
     * @param sk Secret Key from the Baidu Cloud console
     * @return access_token, e.g.
     *         "24.460da4889caad24cccdb1fea17221975.2592000.1491995545.282335-1234567"
     */
    public static String getAuth(String ak, String sk) {
        // Token endpoint
        String authHost = "https://aip.baidubce.com/oauth/2.0/token?";
        String getAccessTokenUrl = authHost
                // 1. grant_type is a fixed parameter
                + "grant_type=client_credentials"
                // 2. API Key from the console
                + "&client_id=" + ak
                // 3. Secret Key from the console
                + "&client_secret=" + sk;
        try {
            URL realUrl = new URL(getAccessTokenUrl);
            // Open a connection to the URL
            HttpURLConnection connection = (HttpURLConnection) realUrl.openConnection();
            connection.setRequestMethod("GET");
            connection.connect();
            // Dump all response headers
            Map<String, List<String>> map = connection.getHeaderFields();
            for (String key : map.keySet()) {
                System.err.println(key + "--->" + map.get(key));
            }
            // Read the response body
            BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            StringBuilder result = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                result.append(line);
            }
            System.err.println("result:" + result);
            JSONObject jsonObject = new JSONObject(result.toString());
            return jsonObject.getString("access_token");
        } catch (Exception e) {
            System.err.println("Failed to obtain token!");
            e.printStackTrace(System.err);
        }
        return null;
    }

    private static String utterance(String say) {
        // Request URL
        String talkUrl = "https://aip.baidubce.com/rpc/2.0/unit/bot/chat";
        try {
            // Request parameters
            String params = "{\"bot_session\":\"\",\"log_id\":\"7758521\",\"request\":{\"bernard_level\":1,"
                    + "\"client_session\":\"{\\\"client_results\\\":\\\"\\\", \\\"candidate_options\\\":[]}\","
                    + "\"query\":\"" + say + "\",\"query_info\":{\"say_hello_satisfy\":[],"
                    + "\"source\":\"KEYBOARD\",\"type\":\"TEXT\"},\"updates\":\"\",\"user_id\":\"88888\"},"
                    + "\"bot_id\":******,\"version\":\"2.0\"}";
            String accessToken = getAuth();
            return HttpUtil.post(talkUrl, accessToken, "application/json", params);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }
}
```

2. Utility classes: Base64Util.java, FileUtil.java, GsonUtils.java, HttpUtil.java

```java
package haidong.haidong;

/**
 * Base64 utility class.
 */
public class Base64Util {
    private static final char last2byte = (char) Integer.parseInt("00000011", 2);
    private static final char last4byte = (char) Integer.parseInt("00001111", 2);
    private static final char last6byte = (char) Integer.parseInt("00111111", 2);
    private static final char lead6byte = (char) Integer.parseInt("11111100", 2);
    private static final char lead4byte = (char) Integer.parseInt("11110000", 2);
    private static final char lead2byte = (char) Integer.parseInt("11000000", 2);
    private static final char[] encodeTable = new char[]{
            'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',
            'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z',
            'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
            'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',
            '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '+', '/'};

    public Base64Util() {
    }

    public static String encode(byte[] from) {
        StringBuilder to = new StringBuilder((int) ((double) from.length * 1.34D) + 3);
        int num = 0;
        char currentByte = 0;
        int i;
        for (i = 0; i < from.length; ++i) {
            for (num %= 8; num < 8; num += 6) {
                switch (num) {
                    case 0:
                        currentByte = (char) (from[i] & lead6byte);
                        currentByte = (char) (currentByte >>> 2);
                    case 1:
                    case 3:
                    case 5:
                    default:
                        break;
                    case 2:
                        currentByte = (char) (from[i] & last6byte);
                        break;
                    case 4:
                        currentByte = (char) (from[i] & last4byte);
                        currentByte = (char) (currentByte << 2);
                        if (i + 1 < from.length) {
                            currentByte = (char) (currentByte | (from[i + 1] & lead2byte) >>> 6);
                        }
                        break;
                    case 6:
                        currentByte = (char) (from[i] & last2byte);
                        currentByte = (char) (currentByte << 4);
                        if (i + 1 < from.length) {
                            currentByte = (char) (currentByte | (from[i + 1] & lead4byte) >>> 4);
                        }
                }
                to.append(encodeTable[currentByte]);
            }
        }
        if (to.length() % 4 != 0) {
            for (i = 4 - to.length() % 4; i > 0; --i) {
                to.append("=");
            }
        }
        return to.toString();
    }
}
```

```java
package haidong.haidong;

import java.io.*;

/**
 * File reading utility class.
 */
public class FileUtil {
    /**
     * Read a file's contents and return them as a string.
     */
    public static String readFileAsString(String filePath) throws IOException {
        File file = new File(filePath);
        if (!file.exists()) {
            throw new FileNotFoundException(filePath);
        }
        if (file.length() > 1024 * 1024 * 1024) {
            throw new IOException("File is too large");
        }
        StringBuilder sb = new StringBuilder((int) (file.length()));
        // Open a byte input stream
        FileInputStream fis = new FileInputStream(filePath);
        // A 10240-byte read buffer
        byte[] bbuf = new byte[10240];
        // Number of bytes actually read
        int hasRead = 0;
        while ((hasRead = fis.read(bbuf)) > 0) {
            sb.append(new String(bbuf, 0, hasRead));
        }
        fis.close();
        return sb.toString();
    }

    /**
     * Read a file into a byte[] given its path.
     */
    public static byte[] readFileByBytes(String filePath) throws IOException {
        File file = new File(filePath);
        if (!file.exists()) {
            throw new FileNotFoundException(filePath);
        } else {
            ByteArrayOutputStream bos = new ByteArrayOutputStream((int) file.length());
            BufferedInputStream in = null;
            try {
                in = new BufferedInputStream(new FileInputStream(file));
                short bufSize = 1024;
                byte[] buffer = new byte[bufSize];
                int len1;
                while (-1 != (len1 = in.read(buffer, 0, bufSize))) {
                    bos.write(buffer, 0, len1);
                }
                return bos.toByteArray();
            } finally {
                try {
                    if (in != null) {
                        in.close();
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
                bos.close();
            }
        }
    }
}
```

```java
/*
 * Copyright (C) 2017 Baidu, Inc. All Rights Reserved.
 */
package haidong.haidong;

import java.lang.reflect.Type;

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonParseException;

/**
 * JSON utility class.
 */
public class GsonUtils {
    private static Gson gson = new GsonBuilder().create();

    public static String toJson(Object value) {
        return gson.toJson(value);
    }

    public static <T> T fromJson(String json, Class<T> classOfT) throws JsonParseException {
        return gson.fromJson(json, classOfT);
    }

    public static <T> T fromJson(String json, Type typeOfT) throws JsonParseException {
        return (T) gson.fromJson(json, typeOfT);
    }
}
```

```java
package haidong.haidong;

import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.Map;

/**
 * HTTP utility class.
 */
public class HttpUtil {

    public static String post(String requestUrl, String accessToken, String params) throws Exception {
        String contentType = "application/x-www-form-urlencoded";
        return HttpUtil.post(requestUrl, accessToken, contentType, params);
    }

    public static String post(String requestUrl, String accessToken, String contentType, String params)
            throws Exception {
        String encoding = "UTF-8";
        if (requestUrl.contains("nlp")) {
            encoding = "GBK";
        }
        return HttpUtil.post(requestUrl, accessToken, contentType, params, encoding);
    }

    public static String post(String requestUrl, String accessToken, String contentType, String params,
            String encoding) throws Exception {
        String url = requestUrl + "?access_token=" + accessToken;
        return HttpUtil.postGeneralUrl(url, contentType, params, encoding);
    }

    public static String postGeneralUrl(String generalUrl, String contentType, String params, String encoding)
            throws Exception {
        URL url = new URL(generalUrl);
        // Open a connection to the URL
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        // Set common request properties
        connection.setRequestProperty("Content-Type", contentType);
        connection.setRequestProperty("Connection", "Keep-Alive");
        connection.setUseCaches(false);
        connection.setDoOutput(true);
        connection.setDoInput(true);
        // Write the request body
        DataOutputStream out = new DataOutputStream(connection.getOutputStream());
        out.write(params.getBytes(encoding));
        out.flush();
        out.close();
        // Establish the connection
        connection.connect();
        // Dump all response headers
        Map<String, List<String>> headers = connection.getHeaderFields();
        for (String key : headers.keySet()) {
            System.err.println(key + "--->" + headers.get(key));
        }
        // Read the response body
        BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream(), encoding));
        StringBuilder result = new StringBuilder();
        String getLine;
        while ((getLine = in.readLine()) != null) {
            result.append(getLine);
        }
        in.close();
        System.err.println("result:" + result);
        return result.toString();
    }
}
```

3. pom.xml configuration

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>haidong</groupId>
  <artifactId>haidong</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>haidong</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>net.sf.json-lib</groupId>
      <artifactId>json-lib</artifactId>
      <version>2.4</version>
      <classifier>jdk15</classifier>
    </dependency>
    <dependency>
      <groupId>org.json</groupId>
      <artifactId>json</artifactId>
      <version>20160810</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/net.sf.ezmorph/ezmorph -->
    <dependency>
      <groupId>net.sf.ezmorph</groupId>
      <artifactId>ezmorph</artifactId>
      <version>1.0.6</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/commons-beanutils/commons-beanutils -->
    <dependency>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils</artifactId>
      <version>1.9.3</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/commons-collections/commons-collections -->
    <dependency>
      <groupId>commons-collections</groupId>
      <artifactId>commons-collections</artifactId>
      <version>3.2.2</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.commons/commons-lang3 -->
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.4</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/commons-lang/commons-lang -->
    <dependency>
      <groupId>commons-lang</groupId>
      <artifactId>commons-lang</artifactId>
      <version>2.6</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/commons-logging/commons-logging -->
    <dependency>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
      <version>1.2</version>
    </dependency>
    <!-- used for JSON parsing -->
    <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>2.4</version>
    </dependency>
    <dependency>
      <groupId>com.google.code.gson</groupId>
      <artifactId>gson</artifactId>
      <version>2.2.4</version>
    </dependency>
  </dependencies>
</project>
```
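The chat request body that `utterance` assembles by string concatenation is easy to get wrong because of the nested, escaped JSON. As a sketch (in Python for brevity; `build_unit_request`, its parameters, and the example `bot_id` value are hypothetical, not part of the original code), the same payload can be built from a dict and serialized:

```python
import json

def build_unit_request(query, bot_id, user_id="88888", log_id="7758521"):
    """Build the UNIT 2.0 /rpc/2.0/unit/bot/chat request body.

    Mirrors the JSON string hand-assembled in App.utterance; building a
    dict first avoids escaping mistakes in the nested JSON string that
    client_session itself must be.
    """
    # client_session is a JSON string embedded inside the outer JSON
    client_session = json.dumps({"client_results": "", "candidate_options": []})
    return {
        "bot_session": "",
        "log_id": log_id,
        "request": {
            "bernard_level": 1,
            "client_session": client_session,
            "query": query,
            "query_info": {"say_hello_satisfy": [], "source": "KEYBOARD", "type": "TEXT"},
            "updates": "",
            "user_id": user_id,
        },
        "bot_id": bot_id,
        "version": "2.0",
    }

# Serialize for the POST body (bot_id 12345 is a placeholder)
body = json.dumps(build_unit_request("你好", bot_id=12345), ensure_ascii=False)
```

The resulting string is what would be sent as `params` in the Java code, with the access token appended as the `access_token` query parameter.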


Building a CMDB System with Django, Part 2 (Configuring CSS/JS/Images)

Create a static directory under cmdb to hold CSS, JS, and images:

```shell
mkdir -p /dj/cmdb/static/{images,scripts,style}
```

Configure settings.py:

```python
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATICFILES_DIRS = [
    ("style", os.path.join(STATIC_ROOT, 'style')),
    ("images", os.path.join(STATIC_ROOT, 'images')),
    ("scripts", os.path.join(STATIC_ROOT, 'scripts')),
]
```

Configure urls.py:

```python
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
from django.views.static import serve

urlpatterns = [
    url(r'^static/(?P<path>.*)$', serve, {'document_root': STATIC_ROOT}),
    url(r'^login/$', login),
]
```

Templates reference the assets like this:

```html
<link href="/static/style/authority/login_css.css" rel="stylesheet" type="text/css" />
<script type="text/javascript" src="/static/scripts/jquery/jquery-1.7.1.js"></script>
```

login_css.css:

```css
@charset "UTF-8";

* {
    margin: 0;
    padding: 0;
    list-style: none;
}

html, body {
    background: #0D1D3E;
    font: normal 15px "Microsoft YaHei";
}

#login_area {
    width: 100%;
    height: 433px;
    position: absolute;
    top: 22%;
}

#login_box {
    margin: 0 auto;
    width: 812px;
    height: 408px;
    background: url('../../images/login/login.png') 0px 0px no-repeat;
    position: relative;
}

#login_form {
    width: 370px;
    height: 320px;
    position: absolute;
    top: 10px;
    right: 20px;
}

#login_tip {
    height: 35px;
    line-height: 35px;
    font-weight: bold;
    color: red;
    padding-top: 15px;
    margin-top: 55px;
}

#btn_area {
    margin-top: 20px;
    margin-left: 80px;
}

.username, .pwd {
    width: 200px;
    height: 30px;
    line-height: 30px;
    margin-top: 20px;
    outline: 0;
    padding: 5px;
    border: 1px solid;
    border-color: #C0C0C0 #D9D9D9 #D9D9D9;
    border-radius: 2px;
    background: #FFF;
    box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1), 0 1px 0 rgba(255, 255, 255, 0.2);
    -webkit-transition: box-shadow, border-color .5s ease-in-out;
    -moz-transition: box-shadow, border-color .5s ease-in-out;
    -o-transition: box-shadow, border-color .5s ease-in-out;
}

.login_btn {
    width: 80px;
    height: 30px;
    line-height: 30px;
    text-align: center;
    border-style: none;
    cursor: pointer;
    font-family: "Microsoft YaHei", "微软雅黑", "sans-serif";
    background: url('../../images/login/btn.jpg') 0px -1px no-repeat;
}

.login_btn:hover {
    width: 80px;
    height: 30px;
    line-height: 30px;
    text-align: center;
    border-style: none;
    cursor: pointer;
    font-family: "Microsoft YaHei", "微软雅黑", "sans-serif";
    background: url('../../images/login/btn_hover.jpg') 0px 0px no-repeat;
    color: #fff;
}
```

login.html:

```html
<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>CMDB-后台系统</title>
    <link href="/static/style/authority/login_css.css" rel="stylesheet" type="text/css" />
    <script type="text/javascript" src="/static/scripts/jquery/jquery-1.7.1.js"></script>
</head>
<body>
{% if form.has_errors %}
    <p>Your username and password didn't match. Please try again.</p>
{% endif %}
<div id="login_center">
    <div id="login_area">
        <div id="login_box">
            <div id="login_form">
                <form id="submitForm" action="." method="post">
                    <div id="login_tip">
                        <span id="login_err" class="sty_txt2"></span>
                    </div>
                    <div>
                        <label for="id_username">用户名:</label>{{ form.username }}
                    </div>
                    <div>
                        <label for="id_password">密码:</label>{{ form.password }}
                    </div>
                    <div id="btn_area">
                        <input type="hidden" name="next" value="/" />
                        {% csrf_token %}
                        <input type="submit" class="login_btn" id="login_sub" value="登 录">
                        <input type="reset" class="login_btn" id="login_ret" value="重 置">
                    </div>
                </form>
            </div>
        </div>
    </div>
</div>
</body>
</html>
```
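How the prefixed STATICFILES_DIRS entries map a template URL such as /static/style/... back to a directory on disk can be sketched without Django at all. `resolve_static` below is a hypothetical helper that mimics the lookup, assuming POSIX paths:

```python
import posixpath

# Stand-ins for the settings above (POSIX paths assumed)
BASE_DIR = "/dj/cmdb"
STATIC_URL = "/static/"
STATIC_ROOT = posixpath.join(BASE_DIR, "static")
STATICFILES_DIRS = [
    ("style", posixpath.join(STATIC_ROOT, "style")),
    ("images", posixpath.join(STATIC_ROOT, "images")),
    ("scripts", posixpath.join(STATIC_ROOT, "scripts")),
]

def resolve_static(url):
    """Map a /static/<prefix>/<rest> URL to a file path, the way the
    prefixed (prefix, directory) entries in STATICFILES_DIRS do."""
    assert url.startswith(STATIC_URL)
    rest = url[len(STATIC_URL):]
    prefix, _, tail = rest.partition("/")
    for p, root in STATICFILES_DIRS:
        if p == prefix:
            return posixpath.join(root, tail)
    raise LookupError(url)

resolve_static("/static/style/authority/login_css.css")
# -> /dj/cmdb/static/style/authority/login_css.css
```

This is why the stylesheet link `/static/style/authority/login_css.css` resolves to a file under the `style` directory created by the `mkdir` step.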


Kubernetes: Running a Stateful Elasticsearch Cluster on a Kubernetes Cluster

Preparation

The Elasticsearch image is built on top of the official Elasticsearch 5.6.10 image.

Dockerfile

```dockerfile
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.10
MAINTAINER leo.lee(lis85@163.com)

WORKDIR /usr/share/elasticsearch
USER root

# copy custom-entrypoint.sh and the configuration (elasticsearch.yml, log4j2.properties)
# to their respective directories in /usr/share/elasticsearch (already the WORKDIR)
COPY custom-entrypoint.sh bin/
COPY elasticsearch.yml config/
COPY log4j2.properties config/

# give the "elasticsearch" user access to the configuration and custom-entrypoint.sh,
# and make sure custom-entrypoint.sh is executable
RUN chown elasticsearch:elasticsearch config/elasticsearch.yml config/log4j2.properties bin/custom-entrypoint.sh && \
    chmod +x bin/custom-entrypoint.sh

# start by running the custom entrypoint (as root)
CMD ["/bin/bash", "bin/custom-entrypoint.sh"]
```

custom-entrypoint.sh

```shell
#!/bin/bash
# This is expected to run as root for setting the ulimits
set -e

# ensure increased ulimits - for nofile - for the Elasticsearch containers
# the limit on the number of files that a single process can have open at a time (default is 1024)
ulimit -n 65536

# ensure increased ulimits - for nproc - for the Elasticsearch containers
# the limit on the number of processes that elasticsearch can create
# 2048 is the minimum to pass the linux checks (default is 50)
# https://www.elastic.co/guide/en/elasticsearch/reference/current/max-number-threads-check.html
ulimit -u 2048

# swapping needs to be disabled for performance and node stability
# in the Elasticsearch config we use [bootstrap.memory_lock=true],
# which additionally requires the "memlock: true" ulimit, set for each container
# -l: max locked memory
ulimit -l unlimited

# start elasticsearch, passing all inputs of this entrypoint script
# to the es-docker startup script
# NOTE: this entrypoint runs as root but executes es-docker as the
# elasticsearch user, passing the root environment variables through
su elasticsearch bin/es-docker "$@"
```

elasticsearch.yml

```yaml
# attach the namespace to cluster.name to differentiate clusters,
# e.g. elasticsearch-acceptance, elasticsearch-production, elasticsearch-monitoring
cluster.name: "elasticsearch-${NAMESPACE}"

# node.name is POD_NAME-NAMESPACE,
# e.g. elasticsearch-0-acceptance, elasticsearch-1-acceptance, elasticsearch-2-acceptance
node.name: "${POD_NAME}-${NAMESPACE}"

network.host: ${POD_IP}

# A hostname that resolves to multiple IP addresses will try all resolved addresses.
# We provide the name of the headless service, which resolves to the IP addresses
# of all live attached pods; alternatively the pod hostnames can be referenced directly.
discovery.zen.ping.unicast.hosts: es-discovery-svc

# minimum_master_nodes must be set explicitly when bound on a public IP
# set to 1 to allow single-node clusters
# more info: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 2

bootstrap.memory_lock: true

#-------------------------------------------------------------------------------------
# RECOVERY: https://www.elastic.co/guide/en/elasticsearch/guide/current/important-configuration-changes.html
# settings to avoid the excessive shard swapping that can occur on cluster restarts
#-------------------------------------------------------------------------------------
# how many nodes must be present to consider the cluster functional;
# prevents Elasticsearch from starting recovery until these nodes are available
gateway.recover_after_nodes: 2
# how many nodes are expected in the cluster
gateway.expected_nodes: 3
# how long to wait after [gateway.recover_after_nodes] is reached before starting recovery (if applicable)
gateway.recover_after_time: 5m

#-------------------------------------------------------------------------------------
# The following settings control the fault detection process (discovery.zen.fd prefix):
# How often a node gets pinged. Defaults to 1s.
discovery.zen.fd.ping_interval: 1s
# How long to wait for a ping response, defaults to 30s.
discovery.zen.fd.ping_timeout: 10s
# How many ping failures / timeouts cause a node to be considered failed. Defaults to 3.
discovery.zen.fd.ping_retries: 2
```

log4j2.properties

```properties
status = error
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
```

Put these four files in the same directory and build the image there:

```shell
docker build -t [image name]:[version] .
```

After the build finishes, push the image to your private registry.

Elasticsearch needs storage, so create the PersistentVolumes (PVs) in advance.

persistent-volume-es.yaml

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-es1
  labels:
    type: local
spec:
  storageClassName: gce-standard-sc
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/usr/share/elasticsearch/data"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-es2
  labels:
    type: local
spec:
  storageClassName: gce-standard-sc
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/usr/share/elasticsearch/data"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-es3
  labels:
    type: local
spec:
  storageClassName: gce-standard-sc
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/usr/share/elasticsearch/data"
  persistentVolumeReclaimPolicy: Recycle
```

Create them with:

```shell
kubectl create -f persistent-volume-es.yaml
```

Deploying the Elasticsearch cluster

elasticsearch.yaml

```yaml
# create the StatefulSet headless service
apiVersion: v1
kind: Service
metadata:
  name: es-discovery-svc
  labels:
    app: es-discovery-svc
spec:
  # the set of Pods targeted by this Service is determined by the label selector
  selector:
    app: elasticsearch
  # expose the elasticsearch transport port (only);
  # this service is used by es-nodes for discovery, and communication
  # between es-nodes happens through the transport port (9300)
  ports:
    - protocol: TCP
      # port exposed by the service (service reachable at)
      port: 9300
      # port exposed by the Pod(s) the service abstracts (pod reachable at);
      # can be a string naming the port at the pod (e.g. transport)
      targetPort: 9300
      name: transport
  # a headless service is declared by setting clusterIP to "None"
  clusterIP: None
---
# create the cluster-ip service
apiVersion: v1
kind: Service
metadata:
  name: es-ia-svc
  labels:
    app: es-ia-svc
spec:
  selector:
    app: elasticsearch
  ports:
    - name: http
      port: 9200
      protocol: TCP
    - name: transport
      port: 9300
      protocol: TCP
---
# create the StatefulSet
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  # the headless service that governs this StatefulSet,
  # responsible for the network identity of the set
  serviceName: es-discovery-svc
  replicas: 3
  # template describes the pods that will be created
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      securityContext:
        # allow read/write access to mounted volumes
        # for users that belong to a group with gid 1000
        fsGroup: 1000
      initContainers:
        # init container that raises the mmap count limit
        - name: sysctl
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
      containers:
        - name: elasticsearch
          securityContext:
            # applying the fix in: https://github.com/kubernetes/kubernetes/issues/3595#issuecomment-287692878
            # https://docs.docker.com/engine/reference/run/#operator-exclusive-options
            capabilities:
              add:
                # lock memory (mlock(2), mlockall(2), mmap(2), shmctl(2))
                - IPC_LOCK
                # override resource limits
                - SYS_RESOURCE
          image: registry.docker.uih/library/leo-elsticsearch:5.6.10
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9300
              name: transport
              protocol: TCP
            - containerPort: 9200
              name: http
              protocol: TCP
          env:
            # environment variables referenced directly from the configuration
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            # elasticsearch heap size (adjust as needed)
            - name: "ES_JAVA_OPTS"
              value: "-Xms2g -Xmx2g"
          # mount the persistent volume on the data dir
          volumeMounts:
            - name: es-data-vc
              mountPath: /usr/share/elasticsearch/data
  # the StatefulSet guarantees that a given [POD] network identity
  # always maps to the same storage identity
  volumeClaimTemplates:
    - metadata:
        name: es-data-vc
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            # elasticsearch mounted data directory size (adjust as needed)
            storage: 20Gi
        storageClassName: gce-standard-sc
        # no label selector defined:
        # claims can specify a label selector to further filter the set of volumes;
        # currently a PVC with a non-empty selector cannot have a PV dynamically provisioned for it
        # no volumeName is provided
```

Deploy with:

```shell
kubectl create -f elasticsearch.yaml
```

After deploying, the pods did not start. The logs showed that the elasticsearch user had no permission on the /usr/share/elasticsearch directory, which was owned by root. The fix is to change the directory's ownership as root from outside the container, granting it to the non-root user.
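elasticsearch.yml above sets discovery.zen.minimum_master_nodes: 2 for this 3-replica StatefulSet. That follows the usual split-brain-safe quorum rule for pre-7.x Elasticsearch, (master-eligible nodes / 2) + 1, which can be sketched as follows (the helper name is ours, not from the manifests):

```python
def minimum_master_nodes(master_eligible_nodes):
    """Quorum rule used to avoid split brain in Zen discovery:
    discovery.zen.minimum_master_nodes = (master-eligible nodes // 2) + 1."""
    if master_eligible_nodes < 1:
        raise ValueError("need at least one master-eligible node")
    return master_eligible_nodes // 2 + 1

# For the 3-replica StatefulSet above this yields 2, matching elasticsearch.yml.
```

Keeping this value in sync with `replicas` matters: too low risks split brain, too high prevents the cluster from electing a master when nodes are down.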


Quickly Building a Visual Ops Monitoring System with Beats

Preface: this example builds a visual ops dashboard for a single PC; the same approach extends to an N-node cluster. For individuals and businesses alike, the integration costs close to nothing and takes you straight into visual operations monitoring.

Background: the Beats platform is a collection of single-purpose data shippers. Installed as lightweight agents, they send data from hundreds or thousands of machines to Logstash or Elasticsearch. Metricbeat is a lightweight metrics shipper that collects metrics from systems and services. From CPU to memory, from Redis to Nginx, Metricbeat delivers system and service statistics in a lightweight way. This article shows how to use Metricbeat to collect metrics from a Mac, ship them to Alibaba Cloud Elasticsearch (hereafter "Alibaba Cloud ES"), and generate the corresponding dashboard in Kibana.


How to Set Up Jenkins X on Alibaba Cloud Container Service

"Jenkins X is a CI/CD solution for modern cloud applications on Kubernetes." 这是Jenkins社区对于Jenkins X 的官方总结和定义。显而易见,它是一套以Jenkins作为核心发动机,以GitOps作为方法论,集成了nexus, docker-registry 和chartmuseum 等一系列交付标准存储组件的持续集成和持续交付解决方案。 下面我们讲介绍如何在阿里云容器服务上快速安装Jenkins X。 首先,需要在 阿里云容器服务控制台 创建一个香港集群,如果创建的集群只有一个worker节点,建议添加一台配置不低于8C16G的ECS。 进入集群管理页面,找到 “Master 节点 SSH 连接地址”,SSH登录Master。 安装 git。yum


Building a Machine-Learning Model with TensorFlow to Predict Disk Performance

Preface: this is a small tool for predicting disk performance, built with some simple TensorFlow machine-learning models during a department code hackathon at my previous company last year. Since it is somewhat removed from my current industry and work, I won't go into the application scenario in detail. At this year's Google developer conference the company declared "AI First" as its overall direction; I believe machine learning will increasingly boost everyone's productivity, and I hope to use it to empower business later on.

Background: enterprise storage arrays are widely used in banks and large enterprises. Mainstream storage today still uses disks as the underlying medium for their cost-effectiveness (cheap, high capacity), although with the strong rise of flash, newer mid-to-high-end arrays increasingly use flash instead. In practice, different businesses write data in different patterns: some write frequently in small batches, some write single large files, others write data of irregular sizes. Across these workloads, performance can vary widely between arrays. As a result, after sales staff gather requirements in the field, the test department has to simulate the matching workload to produce concrete performance numbers, which can take a week.

Goal: collect enough raw disk-performance data and choose a suitable machine-learning model to simulate the performance numbers. Sales staff can then feed the inputs into the system and get a reasonably accurate estimate immediately.

Notes: more than 30 factors affect disk performance; the initial model uses the 9 features with the strongest signal. Disk performance itself is described by more than 10 output dimensions; here 2 target values are used.

The code:

```python
import tensorflow as tf
import numpy as np
import csv
import time
from sklearn.preprocessing import StandardScaler

input_ = []
output1_ = []
output2_ = []
data_lenth = 1000
with open('train_5.csv') as f:
    f_csv = csv.reader(f)
    headers = next(f_csv)
    for row in f_csv:
        num_row = [float(i) for i in row]
        input_.append(num_row[0:9])
        output1_.append(num_row[9:10])
        output2_.append(num_row[10:11])

my_X = np.array(input_[0:data_lenth])
my_Y1 = np.array(output1_[0:data_lenth])
my_Y2 = np.array(output2_[0:data_lenth])

scaler_x = StandardScaler().fit(my_X)
scaler_y1 = StandardScaler().fit(my_Y1)
scaler_y2 = StandardScaler().fit(my_Y2)

trX = scaler_x.transform(my_X)
trY1 = scaler_y1.transform(my_Y1)
trY2 = scaler_y2.transform(my_Y2)
print("* starting normalize *")
time.sleep(2)
print("* normalize input data *")
print(trX)
print("* normalize response_time_rnd *")
print(trY1)
print("* normalize response_time_seq *")
print(trY2)

# Create two placeholders of type tf.float32
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
biases = tf.Variable(tf.zeros(1) + 0.1)

# Create the coefficient variable w (9 weights, learned by training)
w = tf.Variable(tf.zeros([1, 9]), name="weights")
y_model = tf.multiply(X, w) + biases

# Loss function: (Y - y_model)^2
cost = tf.square(Y - y_model)

# Learning rate
learning_rate = 0.01

# Train with gradient descent at learning_rate; the goal is to minimize the loss
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

with tf.Session() as sess:
    # Initialize all variables
    init = tf.global_variables_initializer()
    sess.run(init)
    # Train the model for 100 epochs
    for i in range(100):
        for (x, y) in zip(trX, trY1):
            sess.run(train_op, feed_dict={X: x, Y: y})
    # Read out w
    W = sess.run(w)
    # Read out b
    B = sess.run(biases)

test_input = np.array([50, 100, 5000, 4000, 70, 20, 40, 90, 1000000])
W = np.transpose(W)

print("* testing response_time_rnd *")
print(W)
print(B)
print("testing data: ")
print(test_input)
test_X = scaler_x.transform(test_input)
print(test_X)
test_output = np.dot(test_X, W) + B
print("result data(response_time_rnd): ")
print(scaler_y1.inverse_transform(test_output))

with tf.Session() as sess:
    # Initialize all variables
    init = tf.global_variables_initializer()
    sess.run(init)
    # Train the model for 100 epochs
    for i in range(100):
        for (x, y) in zip(trX, trY2):
            sess.run(train_op, feed_dict={X: x, Y: y})
    # Read out w
    W = sess.run(w)
    # Read out b
    B = sess.run(biases)

W = np.transpose(W)
print("* testing response_time_seq *")
print(W)
print(B)
print("testing data: ")
print(test_input)
test_X = scaler_x.transform(test_input)
print(test_X)
test_output = np.dot(test_X, W) + B
print("result data(response_time_seq): ")
print(scaler_y2.inverse_transform(test_output))
```

Two details are worth calling out. First, the inputs must be standardized; without normalization, training stops converging once the data volume reaches a certain size:

```python
scaler_x = StandardScaler().fit(my_X)
scaler_y1 = StandardScaler().fit(my_Y1)
scaler_y2 = StandardScaler().fit(my_Y2)
```

Second, the transforms (and the transposes around W) keep the matrix arithmetic consistent:

```python
trX = scaler_x.transform(my_X)
trY1 = scaler_y1.transform(my_Y1)
trY2 = scaler_y2.transform(my_Y2)
```

The core of the model is the placeholder/variable/loss/optimizer block annotated above: two placeholders X and Y, a weight vector w plus a bias, the squared-error loss (Y - y_model)^2, and gradient descent with a learning rate of 0.01.

Results: because outliers were cleaned from the training data, more than 80% of the model's predictions matched expectations; when the real data contains many outliers, accuracy drops to 60-70%. The raw data set is also still too small, which makes outliers more common in real scenarios. A multi-layer neural network should fit this scenario better, and with enough source data the results should match expectations more closely.

Attachments: the .py file is the complete program including the Qt UI; the .csv file is the data set.
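The closing note stresses that the inputs must be standardized or training diverges. What StandardScaler does, the z-score transform and its inverse, can be sketched in plain NumPy (the helper names and the toy data are ours):

```python
import numpy as np

def fit_standardizer(X):
    """Return per-column (mean, std), as StandardScaler.fit does."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant columns
    return mu, sigma

def transform(X, mu, sigma):
    # z-score: center and scale each column
    return (X - mu) / sigma

def inverse_transform(Z, mu, sigma):
    # undo the z-score, as scaler.inverse_transform does on predictions
    return Z * sigma + mu

# Toy feature matrix (two columns on very different scales)
X = np.array([[100.0, 5000.0],
              [50.0, 100.0],
              [150.0, 9900.0]])
mu, sigma = fit_standardizer(X)
Z = transform(X, mu, sigma)
# Each column of Z now has mean 0 and unit variance,
# and the round trip recovers the original data.
```

This is also why the predictions are passed through `inverse_transform` at the end of the script: the model is trained in standardized space, so its raw outputs must be mapped back to real response times.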


Building Cloud Storage: How Private and Public Clouds Differ

A metaphor like "cloud storage" may feel overblown, but it still carries the vague notions we associate with the cloud, and the future of the technology remains uncertain; the same goes for the private cloud storage we know. Although every private cloud storage product on the market claims to benefit from adding a firewall in front of a public cloud, there are signs that private cloud storage's reputation is gradually slipping below that of public storage.

Public storage is hard to discuss without thinking of the selective outsourcing of the post-dot-com era, yet it offers simplicity and inherent availability. The name "public storage" also sounds unspecialized, like piling things up somewhere without orderly management. And frankly, "public" just doesn't sound as cool as "cloud".

How do private and public clouds differ?

A change of name does not change the essence: it is still a storage architecture that can deliver better service. Whatever brand is chosen, an enterprise benefits from the private cloud architecture it adopts. In a sense, the private cloud market is a product of public cloud growth. One thing we must acknowledge: public or private, storage is an irreplaceable part of any cloud solution, especially for server virtualization, which is what makes cloud computing practical. Even so, a mature storage strategy is essential to a successful cloud deployment.

Compared with the strengths of public cloud storage, private cloud storage has these characteristics:

- Availability. Space must be allocated promptly when users need it, and reclaimed promptly when they are done with it.
- Quality of service. There must be a detailed service-level description that is strictly followed. Measurable criteria define the response time, recovery time, and uptime support users can expect.
- Fixed cost. Cloud environments usually charge per unit of storage. Users pay only for what they actually use under the service-level agreement, not for allocated space or some flat standard.

Cloud storage price lists often show tempting discounts, but who really benefits? Here public and private clouds differ clearly. In a public cloud architecture, both the users and the contracting organization benefit: users get full application support, while the organization's costs are relatively fixed and perhaps lower than maintaining its own system. The IT department must give the cloud provider a precise definition of platform-as-a-service. Although business units' spending is relatively fixed, the public cloud also has the "pay for what you use" refund advantage and concrete service-level agreements; unfortunately, those standards do not apply to the IT department, which still has to obtain enough storage space, manage it, monitor the systems, and do detailed cost accounting.

Note, too, that none of these benefits mentions "the most advanced storage arrays", "the fastest disks", or "10Gb Ethernet". There are, in fact, no technical criteria involved. The public cloud is chiefly about better operations: service levels, cost control, and responsiveness. Storage vendors, however, rarely focus on operational issues; they sell hardware and software. So what exactly will they sell for the private cloud? Certainly not just a hardware upgrade and wishful thinking, and fortunately this has been borne out: applied to the right environment and configuration, the private cloud is far more than a hardware concept.

When discussing cloud architecture, some vendors emphasize the need for scalability and flexibility. Certainly, a lower-cost cloud architecture model is more attractive. But since nearly every vendor claims these attributes, the labels are not very useful. Moreover, cloud deployment is not just about hardware architecture: in the end, process matters more than product.
