
Featured List

Search: [设置] (settings), 10,000 articles in total
Excellent personal blogs, 低调大师

Recovering unassigned shards on elasticsearch 2.x: replica can be set to...

Recovering unassigned shards on elasticsearch 2.x
Taken from: https://z0z0.me/recovering-unassigned-shards-on-elasticsearch/

I came across this problem when I decided to add a node to the elasticsearch cluster, and that node was not able to replicate the indexes of the cluster. This issue usually happens when there is not enough disk space available, no master available, or a mismatched elasticsearch version. While my servers had more than enough disk space and the master was available, with the help of the elasticsearch discuss forum I found out that the new node was running a different version than the old nodes. Basically, while installing on Debian Jessie I just ran apt-get install elasticsearch, which ended up installing the latest available version. To install a specific version of elasticsearch you pretty much need to append ={version}:

# apt-get install elasticsearch={version}

Now that I had identified the reason for the unallocated shards and successfully downgraded elasticsearch to the required version by running the command above, after starting the node the cluster was still in red state with unassigned shards all over the place:

# curl http://localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "z0z0",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 6,
  "active_shards" : 12,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 8,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 60.0
}

# curl http://localhost:9200/_cat/shards
site-id      4 p UNASSIGNED
site-id      4 r UNASSIGNED
site-id      1 p UNASSIGNED
site-id      1 r UNASSIGNED
site-id      3 p STARTED 0 159b 10.0.0.6 node-2
site-id      3 r STARTED 0 159b 10.0.0.7 node-3
site-id      2 r STARTED 0 159b 10.0.0.6 node-2
site-id      2 p STARTED 0 159b 10.0.0.7 node-3
site-id      0 r STARTED 0 159b 10.0.0.6 node-2
site-id      0 p STARTED 0 159b 10.0.0.7 node-3
subscription 4 p UNASSIGNED
subscription 4 r UNASSIGNED
subscription 1 p UNASSIGNED
subscription 1 r UNASSIGNED
subscription 3 p STARTED 0 159b 10.0.0.6 node-2
subscription 3 r STARTED 0 159b 10.0.0.7 node-3
subscription 2 r STARTED 0 159b 10.0.0.6 node-2
subscription 2 p STARTED 0 159b 10.0.0.7 node-3
subscription 0 p STARTED 0 159b 10.0.0.6 node-2
subscription 0 r STARTED 0 159b 10.0.0.7 node-3

At this point I was pretty desperate, and whatever I tried either did nothing or ended in all kinds of failures. So I set number_of_replicas to 0 by running the following query:

# curl -XPUT http://localhost:9200/_settings?pretty -d '
{
  "index" : {
    "number_of_replicas" : 0
  }
}'

and started to stop the nodes one by one until only one live node was left. At this point I decided to start trying to reroute the unassigned shards, and if that did not work I would just rebuild my cluster from scratch. So I ran the following:

# curl -XPOST -d '
{
  "commands" : [
    {
      "allocate" : {
        "index" : "site-id",
        "shard" : 1,
        "node" : "node-3",
        "allow_primary" : true
      }
    }
  ]
}' http://localhost:9200/_cluster/reroute?pretty

I saw that the rerouted shard became initialized and then running, so I ran the same command on the rest of the unassigned shards. Running curl http://localhost:9200/_cluster/health?pretty confirmed that I was on the right track to fixing the cluster:

# curl http://localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "z0z0",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 10,
  "active_shards" : 20,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

So the cluster was green again but running on a single node, so it was time to bring up the other nodes one by one. When all the nodes were up, I set number_of_replicas back to 1 by running the following:

# curl -XPUT http://localhost:9200/_settings -d '
{
  "index" : {
    "number_of_replicas" : 1
  }
}'

So my elasticsearch cluster is back to running on 3 nodes and still in green state. After a lot of googling and wasted time I decided to write this article, so that anyone who comes across this issue will have a working example of how to fix it.

Reposted from 张昺华-sky's cnblogs blog; original link: http://www.cnblogs.com/bonelee/p/7459391.html. Please contact the original author for reprint permission.
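Rerouting every unassigned shard by hand, as above, gets tedious on bigger clusters. Below is a minimal shell sketch of how that manual step could be automated. It assumes the _cat/shards output format shown above; the helper name build_reroute_commands and the target node node-3 are my own illustrative choices, not from the original post:

```shell
# Turn `_cat/shards` output (columns: index shard prirep state ...) into
# one reroute request body per UNASSIGNED primary shard.
# $1 is the node the shards should be allocated to.
build_reroute_commands() {
  awk -v node="$1" '$4 == "UNASSIGNED" && $3 == "p" {
    printf "{\"commands\":[{\"allocate\":{\"index\":\"%s\",\"shard\":%s,\"node\":\"%s\",\"allow_primary\":true}}]}\n", $1, $2, node
  }'
}

# Against a live cluster (not run here), each emitted body could be POSTed:
#   curl -s http://localhost:9200/_cat/shards | build_reroute_commands node-3 |
#     while read -r body; do
#       curl -XPOST -d "$body" 'http://localhost:9200/_cluster/reroute?pretty'
#     done
```

Each emitted line has the same shape as the manual reroute request above. Note that allow_primary: true can discard data if the shard still had unsynced copies elsewhere, which is why the procedure above only resorts to it after everything else failed.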


[Android] Setting a default value after populating a Spinner with data

The problem is odd. I also found that after the data is bound, the Spinner selects the first item by default, and this default selection is not executed immediately but some time later, at which point it fires the OnItemSelectedListener callback. Straight to the point.

Old code:

spinner.setAdapter(adapter);
spinner.setSelection(2);

New code:

spinner.setAdapter(adapter);
spinner.setSelection(2, true);

Here is the documentation for the two-argument overload of setSelection:

setSelection(int position, boolean animate)
Jump directly to a specific item in the adapter data.

And the source code of the two overloads:

/**
 * Jump directly to a specific item in the adapter data.
 */
public void setSelection(int position, boolean animate) {
    // Animate only if requested position is already on screen somewhere
    boolean shouldAnimate = animate && mFirstPosition <= position
            && position <= mFirstPosition + getChildCount() - 1;
    setSelectionInt(position, shouldAnimate);
}

@Override
public void setSelection(int position) {
    setNextSelectedPositionInt(position);
    requestLayout();
    invalidate();
}

Reposted from over140's 51CTO blog; original link: http://blog.51cto.com/over140/582222. Please contact the original author for reprint permission.


Adding a Swift file to an Objective-C project and setting up ObjC/Swift mixed compilation

Right-click in the project to add a Swift file; I added LearnSwift.swift. After adding it, Xcode shows a prompt asking whether to configure an Objective-C bridging header. Click confirm, and Xcode generates a file named <ProjectName>-Bridging-Header.h; mine is LearnSwift-Bridging-Header.h. [Screenshot: the bridging-header configuration prompt]

LearnSwift-Bridging-Header.h is essentially a header file for Swift's use: add every Objective-C header you want to expose to Swift here, and then your Swift code can use those Objective-C classes and methods directly. Swift itself has no header files, only .swift, while Objective-C has .h and .m files; with these declarations in place, .swift files can use Objective-C's .h directly. [Screenshot: my LearnSwift-Bridging-Header.h]

Next you can write Swift code directly in LearnSwift.swift. Declare a class and print some logs. [Screenshot: my LearnSwift.swift] It compiles straight away, which is Swift using Objective-C classes and methods: all smooth!

To use Swift from Objective-C, I used the Swift class I just created in AppDelegate.m. First import the Swift header, i.e. #import "LearnSwift-Swift.h"; then anywhere in that .m file you can use the classes and methods created in Swift: LearnSwift *learnSwift = [[LearnSwift alloc] init]; [learnSwift logsth:@"code from oc"]; It compiles and runs! Output: this is a log from swift : code from oc

To understand #import "LearnSwift-Swift.h": it is actually <ProjectName>-Swift.h, also auto-generated by Xcode. Based on all the Swift code you write, Xcode generates an Objective-C .h file declaring the classes and methods, so importing this header in Objective-C is equivalent to importing all the Swift declarations, which you can then use directly.

Reposted from 卓行天下's 51CTO blog; original link: http://blog.51cto.com/9951038/1860144. Please contact the original author for reprint permission.


The difference between ES field store yes/no: it can be set to false, as long as _source has the value

store

By default, field values are indexed to make them searchable, but they are not stored. This means that the field can be queried, but the original field value cannot be retrieved.

Usually this doesn't matter. The field value is already part of the _source field, which is stored by default. If you only want to retrieve the value of a single field or of a few fields, instead of the whole _source, then this can be achieved with source filtering.

In certain situations it can make sense to store a field. For instance, if you have a document with a title, a date, and a very large content field, you may want to retrieve just the title and the date without having to extract those fields from a large _source field:

PUT my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "title": {
          "type": "text",
          "store": true
        },
        "date": {
          "type": "date",
          "store": true
        },
        "content": {
          "type": "text"
        }
      }
    }
  }
}

PUT my_index/my_type/1
{
  "title": "Some short title",
  "date": "2015-01-01",
  "content": "A very long content field..."
}

GET my_index/_search
{
  "stored_fields": [ "title", "date" ]
}

The title and date fields are stored. This request will retrieve the values of the title and date fields.

From: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-store.html

Reposted from 张昺华-sky's cnblogs blog; original link: http://www.cnblogs.com/bonelee/p/6428653.html. Please contact the original author for reprint permission.
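For contrast with the stored_fields request above, the same two fields can usually be fetched without store: true by using source filtering, as the passage itself notes. A minimal sketch in the same console style, reusing the my_index example; the match_all query is my own addition for illustration:

```
GET my_index/_search
{
  "_source": [ "title", "date" ],
  "query": { "match_all": {} }
}
```

The trade-off is that this loads and filters the full _source of each hit, which is exactly the cost that storing title and date separately avoids for documents with a very large content field.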


❤️‍🔥 FlyFlow workflow: diversified approval settings and multi-tenant management

About FlyFlow

Official site: www.flyflow.cc
Demo: pro.flyflow.cc

FlyFlow borrows interface design ideas from DingTalk and Feishu, aiming to be a user-friendly workflow tool that is quick to pick up. Compared with traditional workflow engines based on BPMN.js, our solution significantly simplifies the interaction logic, letting users build customized business processes in very little time; even ordinary users without a deep technical background can master it quickly and configure workflows efficiently with essentially no barrier to entry.

This week's updates:

Added: tenant management
Added: flexible approver selection for approval nodes
Added: tag feature
Added: support for administrator intervention in process forms
Improved: button-level and API-level permission control
Improved: flexible assignee configuration for claim nodes
Improved: disabled users can no longer be set as handlers of approval nodes, among other cases
Improved: approval nodes with designated approvers now display and process people in the same order
Fixed: null pointer exception when querying the process initiator
Fixed: wrong people displayed on consecutive CC nodes in an extra branch


Google pays Apple $8-12 billion a year: is being the default search engine really that lucrative?

According to The New York Times, as part of one of the US government's largest antitrust cases, the US Department of Justice is investigating a lucrative deal between Apple and Google. In 2017, Apple renewed an agreement that keeps Google's search engine as the default option on Apple devices. Apple receives an estimated $8 to $12 billion per year for making Google the default search engine for the iPhone and Siri. This is believed to be the largest single payment Google makes to anyone, accounting for 14 to 21 percent of Apple's annual profit. Prosecutors say the deal is emblematic of the illegal tactics used to protect Google's monopoly and stifle competition. Nearly half of Google's search traffic now comes from Apple devices, and the prospect of losing the agreement has been described as "terrifying", a "code red" scenario inside the company. Because of Google's advertising system, search traffic is indispensable to its business model. (MacX News) Original link

Resource Downloads

More resources
Mario

Mario is a hugely popular, many-sided character standing at the top of the gaming world. Mario grows by eating mushrooms and is recognizable by his big nose, cap, overalls, and mustache. Together with his twin brother Luigi, he has long served as Nintendo's flagship character.

Nacos

Nacos /nɑ:kəʊs/ is short for Dynamic Naming and Configuration Service, a dynamic service discovery, configuration management, and AI agent management platform that makes it easy to build AI Agent applications. Nacos helps you discover, configure, and manage microservices and AI agent applications. It provides a set of easy-to-use features for quickly implementing dynamic service discovery, service configuration, service metadata, and traffic management, helping you build, deliver, and manage microservice platforms with more agility and less effort.

Rocky Linux

Rocky Linux is an enterprise-grade Linux distribution launched by Gregory Kurtzer in December 2020, after maintenance of the stable CentOS releases ended, as a community-owned and community-managed open-source replacement fully compatible with RHEL (Red Hat Enterprise Linux); it supports architectures such as x86_64 and aarch64. It provides long-term stability by rebuilding the RHEL source code, adopts modular packaging and the SELinux security architecture, ships the GNOME desktop environment and the XFS file system by default, and offers a ten-year update life cycle.

Sublime Text

Sublime Text has a polished user interface and powerful features such as a code minimap, Python plugins, and snippets. Key bindings, menus, and toolbars are all customizable. Its main features include spell checking, bookmarks, a complete Python API, Goto functionality, instant project switching, multiple selections, and multiple windows. Sublime Text is a cross-platform editor that runs on Windows, Linux, and Mac OS X.
