Spring Boot + Elasticsearch
| Spring Data Elasticsearch | Elasticsearch |
|---|---|
| 2.0.0.RELEASE | 2.2.0 |
| 1.4.0.M1 | 1.7.3 |
| 1.3.0.RELEASE | 1.5.2 |
| 1.2.0.RELEASE | 1.4.4 |
| 1.1.0.RELEASE | 1.3.2 |
| 1.0.0.RELEASE | 1.1.1 |
https://github.com/helloworldtang/spring-data-elasticsearch
1. `None of the configured nodes are available`, or
`org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream`
Cause: the Spring Data Elasticsearch version does not match the Spring Boot and Elasticsearch versions.
Fix: use the version correspondence below.
| Spring Boot Version (x) | Spring Data Elasticsearch Version (y) | Elasticsearch Version (z) |
|---|---|---|
| x <= 1.3.5 | y <= 1.3.4 | z <= 1.7.2* |
| x >= 1.4.x | 2.0.0 <= y < 5.0.0** | 2.0.0 <= z < 5.0.0** |
This is how the versions correspond. Spring Boot 1.3.5 defaults to Elasticsearch 1.5.2, and a client at that level connects fine to any Elasticsearch server up to 1.7.2.
Note: the Java client talks to Elasticsearch on port 9300 by default; 9200 is the HTTP port. Take care to keep the two apart.
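A quick connectivity sketch illustrating the two ports, assuming a local Elasticsearch 1.x node (the class name is made up for illustration):

```java
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class PortCheck {
    public static void main(String[] args) {
        // The Java transport client speaks the binary protocol on 9300;
        // 9200 is the HTTP port that curl or a browser talks to. Pointing
        // the transport client at 9200 fails with
        // "None of the configured nodes are available".
        TransportClient client = new TransportClient();
        client.addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
        System.out.println(client.connectedNodes()); // an empty list means no node was reached
        client.close();
    }
}
```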
2. `Caused by: java.lang.IllegalArgumentException: @ConditionalOnMissingBean annotations must specify at least one bean (type, name or annotation)`
Cause: Spring Boot is on 1.3.x while Elasticsearch is on 2.x. Elasticsearch 2.x removed classes that Spring Boot 1.3.x still uses, which produces this error.
Fix: run an Elasticsearch version that matches the correspondence table in problem 1.
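In Gradle terms (the build tool used below), staying on one row of the table might look like this; a hedged sketch, with 1.3.5 and Elasticsearch 1.x as one valid pairing from above:

```groovy
dependencies {
    // Boot 1.3.x pulls in Spring Data Elasticsearch 1.3.x, which expects
    // an Elasticsearch 1.x server (1.7.2 or below).
    compile 'org.springframework.boot:spring-boot-starter-data-elasticsearch:1.3.5.RELEASE'
    // To target an Elasticsearch 2.x server, move to Boot 1.4+ instead,
    // so that Spring Data Elasticsearch 2.x is pulled in.
}
```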
We often use Elasticsearch to improve application performance, especially for searching and caching, so that the application scales and adapts in real time.
Elasticsearch is a flexible and powerful open source, distributed, real-time search and analytics engine. In this article, I would like to show how to use Elasticsearch in Java applications with Spring Boot and Spring Data Elasticsearch. Spring Boot is now easy and powerful, letting us build fast Java web applications with simple configuration.
By following the steps below, you can start writing your first application.
Source code: https://github.com/herotl2005/spring-data-elasticsearch-sample
Required environment
1. Install Elasticsearch
2. Install Gradle
3. IDE Eclipse or Intellij IDEA
Step by Step Coding
1. Gradle build
```groovy
dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
    compile 'org.springframework.boot:spring-boot-starter-data-elasticsearch:1.2.0.RELEASE'
    compile 'org.springframework.data:spring-data-cassandra:1.1.1.RELEASE'
    compile 'org.springframework:spring-test:4.1.2.RELEASE'
    compile 'org.springframework.boot:spring-boot-starter-logging:1.2.0.RELEASE'
    compile 'org.springframework.boot:spring-boot-starter-web:1.2.0.RELEASE'
    compile 'org.springframework.boot:spring-boot-starter-actuator:1.2.0.RELEASE'
}
```
2. Elasticsearch configuration
```java
@Configuration
@PropertySource(value = "classpath:elasticsearch.properties")
@EnableElasticsearchRepositories(basePackages = "co.paan.repository")
public class ElasticsearchConfiguration {

    @Resource
    private Environment environment;

    @Bean
    public Client client() {
        TransportClient client = new TransportClient();
        TransportAddress address = new InetSocketTransportAddress(
                environment.getProperty("elasticsearch.host"),
                Integer.parseInt(environment.getProperty("elasticsearch.port")));
        client.addTransportAddress(address);
        return client;
    }

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() {
        return new ElasticsearchTemplate(client());
    }
}
```
You put the Elasticsearch host and port in your application properties file.
```properties
# if you use your local elasticsearch host
elasticsearch.host = localhost
elasticsearch.port = 9300
```
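One more connection pitfall: if the cluster's name is not the default `elasticsearch`, the transport client also fails with "None of the configured nodes are available" unless the name is passed in its settings. A sketch of the `client()` bean above extended for the Elasticsearch 1.x API used in this article, where `elasticsearch.clusterName` is a hypothetical extra property (requires imports of `org.elasticsearch.common.settings.ImmutableSettings` and `org.elasticsearch.common.settings.Settings`):

```java
@Bean
public Client client() {
    // The cluster name must match cluster.name in elasticsearch.yml.
    Settings settings = ImmutableSettings.settingsBuilder()
            .put("cluster.name", environment.getProperty("elasticsearch.clusterName", "elasticsearch"))
            .build();
    TransportClient client = new TransportClient(settings);
    client.addTransportAddress(new InetSocketTransportAddress(
            environment.getProperty("elasticsearch.host"),
            Integer.parseInt(environment.getProperty("elasticsearch.port"))));
    return client;
}
```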
3. Data mapping object:
In this application, we map two entities as data objects: Post and Tag.
```java
@Document(indexName = "post", type = "post", shards = 1, replicas = 0)
public class Post {

    @Id
    private String id;
    private String title;
    // @Field(type = FieldType.Nested)
    private List<Tag> tags;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public List<Tag> getTags() { return tags; }
    public void setTags(List<Tag> tags) { this.tags = tags; }
}
```
```java
public class Tag {

    private String id;
    private String name;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```
4. Repository: we extend ElasticsearchRepository
```java
public interface PostRepository extends ElasticsearchRepository<Post, String> {
    Page<Post> findByTagsName(String name, Pageable pageable);
}
```
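Spring Data derives the query from the method name: `findByTagsName` walks the property path `tags.name`, i.e. "posts having a tag with the given name". For comparison, roughly the same lookup written out with `@Query`; a hypothetical sketch, not part of the original sample:

```java
public interface PostRepositoryExplicit extends ElasticsearchRepository<Post, String> {

    // Explicit JSON equivalent of the derived findByTagsName above.
    @Query("{\"bool\": {\"must\": {\"match\": {\"tags.name\": \"?0\"}}}}")
    Page<Post> findByTagName(String name, Pageable pageable);
}
```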
5. Data access service
```java
public interface PostService {
    Post save(Post post);
    Post findOne(String id);
    Iterable<Post> findAll();
    Page<Post> findByTagsName(String tagName, PageRequest pageRequest);
}
```
```java
@Service
public class PostServiceImpl implements PostService {

    @Autowired
    private PostRepository postRepository;

    @Override
    public Post save(Post post) {
        postRepository.save(post);
        return post;
    }

    @Override
    public Post findOne(String id) {
        return postRepository.findOne(id);
    }

    @Override
    public Iterable<Post> findAll() {
        return postRepository.findAll();
    }

    @Override
    public Page<Post> findByTagsName(String tagName, PageRequest pageRequest) {
        return postRepository.findByTagsName(tagName, pageRequest);
    }
}
```
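Since the build already pulls in `spring-boot-starter-web`, the service can be exposed over HTTP. A minimal, hypothetical controller (not in the original sample) for trying the endpoints with curl or a browser:

```java
@RestController
@RequestMapping("/posts")
public class PostController {

    @Autowired
    private PostService postService;

    // GET /posts - every indexed post
    @RequestMapping(method = RequestMethod.GET)
    public Iterable<Post> findAll() {
        return postService.findAll();
    }

    // GET /posts/tag/{tagName} - first page of posts carrying the given tag
    @RequestMapping(value = "/tag/{tagName}", method = RequestMethod.GET)
    public Page<Post> findByTag(@PathVariable String tagName) {
        return postService.findByTagsName(tagName, new PageRequest(0, 10));
    }
}
```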
6. Testing and the result
```java
@Test
public void testFindByTagsName() throws Exception {
    Tag tag = new Tag();
    tag.setId("1");
    tag.setName("tech");

    Tag tag2 = new Tag();
    tag2.setId("2");
    tag2.setName("elasticsearch");

    Post post = new Post();
    post.setId("1");
    post.setTitle("Beginning with spring boot application and elasticsearch");
    post.setTags(Arrays.asList(tag, tag2));
    postService.save(post);

    // Same id as post, so this save overwrites the document above.
    Post post2 = new Post();
    post2.setId("1");
    post2.setTitle("Beginning with spring boot application");
    post2.setTags(Arrays.asList(tag));
    postService.save(post2);

    Page<Post> posts = postService.findByTagsName("tech", new PageRequest(0, 10));
    Page<Post> posts2 = postService.findByTagsName("tech", new PageRequest(0, 10));
    Page<Post> posts3 = postService.findByTagsName("maz", new PageRequest(0, 10));

    assertThat(posts.getTotalElements(), is(1L));
    assertThat(posts2.getTotalElements(), is(1L));
    assertThat(posts3.getTotalElements(), is(0L));
}
```
7. You can find the complete project on GitHub: https://github.com/herotl2005/spring-data-elasticsearch-sample
Reposted from: https://dzone.com/articles/first-step-spring-boot-and
See also: http://blog.csdn.net/hong0220/article/details/50583409
Spring Data Elasticsearch query styles:
1. Derivation from the method name
```java
public interface BookRepository extends Repository<Book, String> {
    List<Book> findByNameAndPrice(String name, Integer price);
}
```

The method name above generates the following Elasticsearch query:

```json
{
  "bool": {
    "must": [
      { "field": { "name": "?" } },
      { "field": { "price": "?" } }
    ]
  }
}
```
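The method-name grammar supports more keywords than `And`; `Like`, `Between`, `OrderBy`, and others derive the same way. A sketch using the same Book entity (a separate interface name so it does not clash with the example above):

```java
public interface BookQueries extends Repository<Book, String> {
    List<Book> findByNameLike(String name);                   // wildcard-style match on name
    List<Book> findByPriceBetween(Integer low, Integer high); // range query on price
    List<Book> findByNameOrderByPriceDesc(String name);       // match on name, sorted by price descending
}
```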
2. The @Query annotation
```java
public interface BookRepository extends ElasticsearchRepository<Book, String> {
    @Query("{\"bool\" : {\"must\" : {\"field\" : {\"name\" : \"?0\"}}}}")
    Page<Book> findByName(String name, Pageable pageable);
}
```
3. Building a filter
Using a filter can speed up the query.

```java
@Autowired
private ElasticsearchTemplate elasticsearchTemplate;

SearchQuery searchQuery = new NativeSearchQueryBuilder()
    .withQuery(matchAllQuery())
    .withFilter(boolFilter().must(termFilter("id", documentId)))
    .build();
Page<SampleEntity> sampleEntities = elasticsearchTemplate.queryForPage(searchQuery, SampleEntity.class);
```
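Note that `boolFilter()` and `termFilter()` belong to the Elasticsearch 1.x API. On the 2.x line (see the compatibility table at the top), filters were folded into queries, so the same intent goes through `QueryBuilders`; a hedged sketch:

```java
// Elasticsearch 2.x style: filters are ordinary query builders.
SearchQuery searchQuery = new NativeSearchQueryBuilder()
    .withQuery(matchAllQuery())
    .withFilter(boolQuery().must(termQuery("id", documentId)))
    .build();
```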
Handling large result sets with scan and scroll
Elasticsearch offers scan and scroll for working through large result sets. In Spring Data Elasticsearch you can drive them through ElasticsearchTemplate, as shown below.
Example: Using Scan and Scroll
```java
SearchQuery searchQuery = new NativeSearchQueryBuilder()
    .withQuery(matchAllQuery())
    .withIndices("test-index")
    .withTypes("test-type")
    .withPageable(new PageRequest(0, 1))
    .build();

String scrollId = elasticsearchTemplate.scan(searchQuery, 1000, false);
List<SampleEntity> sampleEntities = new ArrayList<SampleEntity>();
boolean hasRecords = true;
while (hasRecords) {
    Page<SampleEntity> page = elasticsearchTemplate.scroll(scrollId, 5000L, new ResultsMapper<SampleEntity>() {
        @Override
        public Page<SampleEntity> mapResults(SearchResponse response) {
            List<SampleEntity> chunk = new ArrayList<SampleEntity>();
            for (SearchHit searchHit : response.getHits()) {
                if (response.getHits().getHits().length <= 0) {
                    return null;
                }
                SampleEntity user = new SampleEntity();
                user.setId(searchHit.getId());
                user.setMessage((String) searchHit.getSource().get("message"));
                chunk.add(user);
            }
            return new PageImpl<SampleEntity>(chunk);
        }
    });
    if (page != null) {
        sampleEntities.addAll(page.getContent());
        hasRecords = page.hasNextPage();
    } else {
        hasRecords = false;
    }
}
```
http://www.cnblogs.com/rainwang/p/5725214.html

低调大师中文资讯倾力打造互联网数据资讯、行业资源、电子商务、移动互联网、网络营销平台。
持续更新报道IT业界、互联网、市场资讯、驱动更新,是最及时权威的产业资讯及硬件资讯报道平台。
转载内容版权归作者及来源网站所有,本站原创内容转载请注明来源。
- 上一篇
Spark Shuffle Write阶段磁盘文件分析
前言 上篇写了 Spark Shuffle 内存分析后,有不少人提出了疑问,大家也对如何落文件挺感兴趣的,所以这篇文章会详细介绍,Sort Based Shuffle Write 阶段是如何进行落磁盘的 流程分析。 入口处: org.apache.spark.scheduler.ShuffleMapTask.runTask runTask对应的代码为: val manager = SparkEnv.get.shuffleManager writer = manager.getWriter[Any, Any]( dep.shuffleHandle, partitionId, context) writer.write(rdd.iterator(partition, context).asInstanceOf[Iterator[_ <: Product2[Any, Any]]]) writer.stop(success = true).get 这里manager 拿到的是 org.apache.spark.shuffle.sort.SortShuffleWriter 我们看他是如何...
- 下一篇
Spark Streaming 流式计算实战
这篇文章由一次平安夜的微信分享整理而来。在Stuq 做的分享, 原文内容。 业务场景 这次分享会比较实战些。具体业务场景描述: 我们每分钟会有几百万条的日志进入系统,我们希望根据日志提取出时间以及用户名称,然后根据这两个信息形成 userName/year/month/day/hh/normal userName/year/month/day/hh/delay 路径,存储到HDFS中。如果我们发现日志产生的时间和到达的时间相差超过的一定的阈值,那么会放到 delay 目录,否则放在正常的 normal 目录。 Spark Streaming 与 Storm 适用场景分析 为什么这里不使用 Storm呢? 我们初期确实想过使用 Storm 去实现,然而使用 Storm 写数据到HDFS比较麻烦: * Storm 需要持有大量的 HDFS 文件句柄。需要落到同一个文件里的记录是不确定什么时候会来的,你不能写一条就关掉,所以需要一直持有。 * 需要使用HDFS 的写文件的 append 模式,不断追加记录。 大量持有文件句柄以及在什么时候释放这些文件句柄都是一件很困难的事情。另外使用 HDF...
相关文章
文章评论
共有0条评论来说两句吧...
文章二维码
点击排行
推荐阅读
最新文章
- Docker安装Oracle12C,快速搭建Oracle学习环境
- CentOS6,CentOS7官方镜像安装Oracle11G
- CentOS7设置SWAP分区,小内存服务器的救世主
- Docker使用Oracle官方镜像安装(12C,18C,19C)
- SpringBoot2全家桶,快速入门学习开发网站教程
- CentOS7安装Docker,走上虚拟化容器引擎之路
- CentOS7编译安装Gcc9.2.0,解决mysql等软件编译问题
- Docker快速安装Oracle11G,搭建oracle11g学习环境
- CentOS7编译安装Cmake3.16.3,解决mysql等软件编译问题
- Jdk安装(Linux,MacOS,Windows),包含三大操作系统的最全安装