
Redis from Getting Started to Giving Up (Part 2): Hash

Date: 2019-05-15


Examples in this article are based on Redis 5.0.4. Hash is one of the common data structures in Redis. It is implemented as either a ziplist or a hashtable: a hash is created as a ziplist by default, and Redis converts the ziplist into a hashtable once the hash grows past certain thresholds.

Previous in the series: Redis from Getting Started to Giving Up (Part 1): String

First, let's look at how to use the Hash type in Redis.

    // Set the value of field in the hash stored at key to value
    // If key does not exist, a new hash is created before the HSET is performed
    // If field already exists in the hash, its old value is overwritten
    hset key field value

Code examples:

    // Create a field that does not exist yet
    > hset user:1 id 1
    (integer) 1
    // Overwrite the existing field (returns 0 because the field already existed)
    > hset user:1 id 2
    (integer) 0
    > hget user:1 id
    "2"
    // Get a field that does not exist
    > hget user:1 not_exist
    (nil)
    ----------------------------------
    // hsetnx key field value
    // Sets the field only if it does not already exist: returns 1 if set, 0 otherwise
    > hsetnx user:1 id 1
    (integer) 1
    > hsetnx user:1 id 1
    (integer) 0
    > hget user:1 id
    "1"
    ----------------------------------
    // hmset key field value [field value ...]
    // Set multiple field/value pairs in one command
    > HMSET user:1 id 1 name "黑搜丶D" wechat "black-search"
    OK
    ----------------------------------
    // hget key field
    // Get the value of the given field in the hash stored at key
    > hget user:1 id
    "1"
    ----------------------------------
    // hmget key field [field ...]
    // Values come back in the same order as the requested fields
    > hmget user:1 name wechat id not_exist
    1) "黑搜丶D"
    2) "black-search"
    3) "1"
    4) (nil)
    ----------------------------------
    // hdel key field
    // Returns the number of fields that were actually removed
    > hgetall user:1
    1) "id"
    2) "1"
    3) "name"
    4) "black-search"
    > HDEL user:1 name
    (integer) 1
    > HDEL user:1 name
    (integer) 0
    ----------------------------------
    // HINCRBY key field increment
    // Increment an integer-valued field by increment; returns the value after the increment
    > hset user:1 wechat "black-search"
    (integer) 1
    > HINCRBY user:1 wechat 2
    (error) ERR hash value is not an integer
    > HINCRBY user:1 id 21
    (integer) 22
    > hget user:1 id
    "22"

That wraps up the basic usage of Redis hashes for now.
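For readers driving Redis from Java, here is a minimal sketch of the same commands issued through Spring Data Redis, the client library used in the pipelining test later in this article. The StringRedisTemplate wiring, class name, and key names are illustrative assumptions, not something taken from the commands above.

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;

    import org.springframework.data.redis.core.HashOperations;
    import org.springframework.data.redis.core.StringRedisTemplate;

    // Hypothetical sketch mirroring the redis-cli session above.
    // Assumes a StringRedisTemplate bean is already configured.
    public class HashCommandsDemo {

        private final StringRedisTemplate redisTemplate;

        public HashCommandsDemo(StringRedisTemplate redisTemplate) {
            this.redisTemplate = redisTemplate;
        }

        public void run() {
            HashOperations<String, String, String> hash = redisTemplate.opsForHash();

            hash.put("user:1", "id", "1");                          // HSET user:1 id 1
            Boolean set = hash.putIfAbsent("user:1", "id", "2");    // HSETNX -> false, id exists
            hash.putAll("user:1", Map.of(                           // HMSET user:1 ...
                    "name", "黑搜丶D",
                    "wechat", "black-search"));

            String id = hash.get("user:1", "id");                   // HGET user:1 id -> "1"
            List<String> values = hash.multiGet("user:1",           // HMGET, order preserved,
                    Arrays.asList("name", "wechat", "not_exist"));  // missing field -> null

            Long removed = hash.delete("user:1", "name");           // HDEL -> fields removed
            Long newId = hash.increment("user:1", "id", 21);        // HINCRBY user:1 id 21

            System.out.println(set + " " + id + " " + values + " " + removed + " " + newId);
        }
    }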


debug object key

At the beginning of this article we said that a hash is created as a ziplist by default and converted to a hashtable once it reaches a certain size. So when exactly does that conversion happen?

    # Hashes are encoded using a memory efficient data structure when they have a
    # small number of entries, and the biggest entry does not exceed a given
    # threshold. These thresholds can be configured using the following directives.
    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64

From the configuration above we can tell that a hash keeps the compact ziplist encoding only while both of the following conditions hold; as soon as either one is violated, Redis converts the ziplist into a hashtable:

  1. The number of field/value pairs stored does not exceed 512 (controlled by the hash-max-ziplist-entries directive, default 512)
  2. Every stored field and value is at most 64 bytes long (controlled by the hash-max-ziplist-value directive, default 64)
    // Test the hash encoding when there are at most 512 field/value pairs
    @RequestMapping("/")
    public void test() {
        // executePipelined returns List<Object>, one element per pipelined reply
        List<Object> list = redisTemplate.executePipelined(new RedisCallback<Long>() {
            @Override
            public Long doInRedis(RedisConnection redisConnection) throws DataAccessException {
                redisConnection.openPipeline();
                for (int i = 0; i < 512; i++) {
                    redisConnection.hSet("key".getBytes(), ("field" + i).getBytes(), "value".getBytes());
                }
                return null;
            }
        });
        System.out.println("done");
    }

    // With 512 entries the encoding is still ziplist
    > debug object key
    Value at:0xbc6f80 refcount:1 encoding:ziplist serializedlength:2603 lru:14344435 lru_seconds_idle:17
    // Raise the loop count to 513, and we find the encoding changes
    > debug object key
    Value at:0xbc6f80 refcount:1 encoding:hashtable serializedlength:7587 lru:14344656 lru_seconds_idle:4
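The test above exercises only the entry-count threshold; the value-length threshold behaves the same way. Below is a hypothetical sketch of checking it: writing a single value longer than 64 bytes should flip a fresh key straight to the hashtable encoding. The key names and the OBJECT ENCODING round trip through the low-level connection are assumptions for illustration, not part of the original test.

    import java.nio.charset.StandardCharsets;

    import org.springframework.data.redis.connection.RedisConnection;
    import org.springframework.data.redis.core.StringRedisTemplate;

    // Hypothetical sketch of the hash-max-ziplist-value threshold: one field
    // whose value exceeds 64 bytes is enough to force the hashtable encoding.
    public class ValueThresholdDemo {

        public static void check(StringRedisTemplate redisTemplate) {
            String longValue = "v".repeat(65); // 65 bytes, one past the default of 64

            redisTemplate.opsForHash().put("key:short", "field", "value");
            redisTemplate.opsForHash().put("key:long", "field", longValue);

            // OBJECT ENCODING is a standard Redis command; issue it through
            // the low-level connection since there is no template shortcut
            RedisConnection conn = redisTemplate.getConnectionFactory().getConnection();
            try {
                byte[] shortEnc = (byte[]) conn.execute("OBJECT",
                        "ENCODING".getBytes(StandardCharsets.UTF_8),
                        "key:short".getBytes(StandardCharsets.UTF_8));
                byte[] longEnc = (byte[]) conn.execute("OBJECT",
                        "ENCODING".getBytes(StandardCharsets.UTF_8),
                        "key:long".getBytes(StandardCharsets.UTF_8));
                System.out.println(new String(shortEnc)); // expected: ziplist
                System.out.println(new String(longEnc));  // expected: hashtable
            } finally {
                conn.close();
            }
        }
    }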

Source code walkthrough

    // First, let's look at the structure of dict
    typedef struct dict {
        dictType *type;
        void *privdata;
        dictht ht[2];
        long rehashidx; /* rehashing not in progress if rehashidx == -1 */
        unsigned long iterators; /* number of iterators currently running */
    } dict;

    typedef struct dictType {
        uint64_t (*hashFunction)(const void *key);
        void *(*keyDup)(void *privdata, const void *key);
        void *(*valDup)(void *privdata, const void *obj);
        int (*keyCompare)(void *privdata, const void *key1, const void *key2);
        void (*keyDestructor)(void *privdata, void *key);
        void (*valDestructor)(void *privdata, void *obj);
    } dictType;

    /* This is our hash table structure. Every dictionary has two of this as we
     * implement incremental rehashing, for the old to the new table. */
    typedef struct dictht {
        dictEntry **table;
        unsigned long size;
        unsigned long sizemask;
        unsigned long used;
    } dictht;

    typedef struct dictEntry {
        void *key;
        union {
            void *val;
            uint64_t u64;
            int64_t s64;
            double d;
        } v;
        struct dictEntry *next;
    } dictEntry;

From the definitions above we can see that a dict holds two dictht instances (i.e. hash tables). Normally only one of them contains data; when the dict needs to grow or shrink, a new dictht is allocated and entries are migrated over incrementally. Once the migration completes, the old dictht is released and only the new one is kept. How does dict resolve hash collisions? The same way Java's HashMap does: an array of buckets plus linked lists (separate chaining), as sketched below.
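To make the chaining idea concrete, here is a toy Java sketch in the spirit of dictEntry's next pointer and dictht's sizemask. All names are illustrative assumptions; this is not Redis code.

    import java.util.Objects;

    // Toy separate-chaining table: an array of buckets where colliding
    // entries are linked together through a next pointer, like dictEntry->next.
    public class ChainedDict<K, V> {

        private static final class Entry<K, V> {
            final K key;
            V val;
            Entry<K, V> next; // plays the role of dictEntry->next

            Entry(K key, V val, Entry<K, V> next) {
                this.key = key;
                this.val = val;
                this.next = next;
            }
        }

        private final Entry<K, V>[] table;
        private final int sizemask; // like dictht.sizemask: size - 1, size a power of two

        @SuppressWarnings("unchecked")
        public ChainedDict(int sizePowerOfTwo) {
            table = (Entry<K, V>[]) new Entry[sizePowerOfTwo];
            sizemask = sizePowerOfTwo - 1;
        }

        public void put(K key, V val) {
            int idx = Objects.hashCode(key) & sizemask; // hash & sizemask picks the bucket
            for (Entry<K, V> e = table[idx]; e != null; e = e.next) {
                if (Objects.equals(e.key, key)) { e.val = val; return; } // overwrite old value
            }
            table[idx] = new Entry<>(key, val, table[idx]); // head insert, like dictAddRaw
        }

        public V get(K key) {
            int idx = Objects.hashCode(key) & sizemask;
            for (Entry<K, V> e = table[idx]; e != null; e = e.next) {
                if (Objects.equals(e.key, key)) return e.val;
            }
            return null;
        }
    }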

Incremental rehash

As we know, Redis executes commands on a single thread, so rehashing a large dictionary in one shot would be slow and could leave other requests hanging. Redis therefore uses incremental rehashing to spread this heavy task over many small steps~

    dictEntry *dictAddRaw(dict *d, void *key, dictEntry **existing)
    {
        long index;
        dictEntry *entry;
        dictht *ht;

        // Every call performs one more step of the migration~
        if (dictIsRehashing(d)) _dictRehashStep(d);

        /* Get the index of the new element, or -1 if
         * the element already exists. */
        if ((index = _dictKeyIndex(d, key, dictHashKey(d,key), existing)) == -1)
            return NULL;

        /* Allocate the memory and store the new entry.
         * Insert the element in top, with the assumption that in a database
         * system it is more likely that recently added entries are accessed
         * more frequently. */
        // While the dict is mid-rehash, new elements are hung off the new table
        ht = dictIsRehashing(d) ? &d->ht[1] : &d->ht[0];
        entry = zmalloc(sizeof(*entry));
        entry->next = ht->table[index];
        ht->table[index] = entry;
        ht->used++;

        /* Set the hash entry fields. */
        dictSetKey(d, entry, key);
        return entry;
    }

This way, every client request that touches the hash (hset, hdel, and so on) checks whether a migration step is due. But if clients stop sending requests, could the migration be left unfinished? No no no: Redis also scans dicts that are mid-rehash from a periodic background task and completes the remaining migration~ The code is as follows:

    /* This function handles 'background' operations we are required to do
     * incrementally in Redis databases, such as active key expiring, resizing,
     * rehashing. */
    void databasesCron(void) {
        /* Expire keys by random sampling. Not required for slaves
         * as master will synthesize DELs for us. */
        if (server.active_expire_enabled) {
            if (server.masterhost == NULL) {
                activeExpireCycle(ACTIVE_EXPIRE_CYCLE_SLOW);
            } else {
                expireSlaveKeys();
            }
        }

        /* Defrag keys gradually. */
        if (server.active_defrag_enabled)
            activeDefragCycle();

        /* Perform hash tables rehashing if needed, but only if there are no
         * other processes saving the DB on disk. Otherwise rehashing is bad
         * as will cause a lot of copy-on-write of memory pages. */
        if (server.rdb_child_pid == -1 && server.aof_child_pid == -1) {
            /* We use global counters so if we stop the computation at a given
             * DB we'll be able to start from the successive in the next
             * cron loop iteration. */
            static unsigned int resize_db = 0;
            static unsigned int rehash_db = 0;
            int dbs_per_call = CRON_DBS_PER_CALL;
            int j;

            /* Don't test more DBs than we have. */
            if (dbs_per_call > server.dbnum) dbs_per_call = server.dbnum;

            /* Resize */
            for (j = 0; j < dbs_per_call; j++) {
                tryResizeHashTables(resize_db % server.dbnum);
                resize_db++;
            }

            /* Rehash */
            // This is the key part: the incremental rehash itself
            if (server.activerehashing) {
                for (j = 0; j < dbs_per_call; j++) {
                    int work_done = incrementallyRehash(rehash_db);
                    if (work_done) {
                        /* If the function did some work, stop here, we'll do
                         * more at the next cron loop iteration. */
                        break;
                    } else {
                        /* If this db didn't need rehash, we'll try the next one. */
                        rehash_db++;
                        rehash_db %= server.dbnum;
                    }
                }
            }
        }
    }
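Distilling the mechanism into a toy Java sketch: each write migrates at most one bucket, mirroring _dictRehashStep, and new entries land in the new table while a rehash is in progress. All names are illustrative assumptions, lookups are omitted, and a background completion task like databasesCron is not modeled.

    import java.util.AbstractMap.SimpleEntry;
    import java.util.LinkedList;

    // Toy incremental rehash: oldTable/newTable mimic ht[0]/ht[1], and
    // rehashidx tracks the next old bucket to migrate (-1 = not rehashing).
    public class RehashingDict<K, V> {

        private LinkedList<SimpleEntry<K, V>>[] oldTable; // ht[0]
        private LinkedList<SimpleEntry<K, V>>[] newTable; // ht[1], non-null only mid-rehash
        private int rehashidx = -1;
        private int used = 0;

        @SuppressWarnings("unchecked")
        public RehashingDict(int initialSize) {
            oldTable = new LinkedList[initialSize];
        }

        public void put(K key, V val) {
            if (rehashidx != -1) rehashStep();               // like _dictRehashStep in dictAddRaw
            else if (used >= oldTable.length) startRehash(); // grow once load factor reaches 1

            // Overwrite in place if the key already lives in either table
            // (the real _dictKeyIndex also probes both ht[0] and ht[1])
            if (replaceIfPresent(oldTable, key, val)) return;
            if (newTable != null && replaceIfPresent(newTable, key, val)) return;

            // While rehashing, new entries always go into the new table
            LinkedList<SimpleEntry<K, V>>[] target = (rehashidx != -1) ? newTable : oldTable;
            int idx = bucket(key, target.length);
            if (target[idx] == null) target[idx] = new LinkedList<>();
            target[idx].addFirst(new SimpleEntry<>(key, val)); // head insert, like dictAddRaw
            used++;
        }

        @SuppressWarnings("unchecked")
        private void startRehash() {
            newTable = new LinkedList[oldTable.length * 2];
            rehashidx = 0;
        }

        // Migrate a single bucket from the old table into the new one.
        private void rehashStep() {
            while (rehashidx < oldTable.length && oldTable[rehashidx] == null) rehashidx++;
            if (rehashidx < oldTable.length) {
                for (SimpleEntry<K, V> e : oldTable[rehashidx]) {
                    int idx = bucket(e.getKey(), newTable.length);
                    if (newTable[idx] == null) newTable[idx] = new LinkedList<>();
                    newTable[idx].addFirst(e);
                }
                oldTable[rehashidx] = null; // bucket fully moved
                rehashidx++;
            }
            if (rehashidx >= oldTable.length) { // migration done: ht[1] becomes ht[0]
                oldTable = newTable;
                newTable = null;
                rehashidx = -1;
            }
        }

        private boolean replaceIfPresent(LinkedList<SimpleEntry<K, V>>[] table, K key, V val) {
            int idx = bucket(key, table.length);
            if (table[idx] != null) {
                for (SimpleEntry<K, V> e : table[idx]) {
                    if (e.getKey().equals(key)) { e.setValue(val); return true; }
                }
            }
            return false;
        }

        private int bucket(K key, int len) {
            return (key.hashCode() & 0x7fffffff) % len;
        }
    }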

Use cases

Storing business data. As we have seen, hset is simple to use; recall the use case at the end of the previous article:

    // In the previous article we used a string
    > set user:1 '{"id":1,"name":"黑搜丶D","wechat":"black-search"}'
    // With a hash we can achieve much the same thing
    > HMSET user:1 id 1 name "黑搜丶D" wechat "black-search"
    OK
    // Read a single field of the key
    > hget user:1 wechat
    "black-search"
    // Read every field:value pair of the key
    > HGETALL user:1
    1) "id"
    2) "1"
    3) "name"
    4) "\xe9\xbb\x91\xe6\x90\x9c\xe4\xb8\xb6D"
    5) "wechat"
    6) "black-search"

Compared with the string approach, using a hash to get or set a single field saves a lot of bandwidth, because the whole document no longer has to travel over the network for every read or write~ A hypothetical client-side sketch follows.
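To show where the savings come from, here is a hypothetical Spring Data Redis sketch contrasting the two layouts. The key names, helper class, and the separate JSON key are illustrative assumptions.

    import org.springframework.data.redis.core.StringRedisTemplate;

    // Hypothetical sketch: with the hash layout only the touched field
    // crosses the network; with the string/JSON layout the whole document
    // must be fetched, edited client-side, and written back.
    public class PartialAccessDemo {

        public static void demo(StringRedisTemplate redisTemplate) {
            // Hash layout: read and write a single field directly
            Object wechat = redisTemplate.opsForHash().get("user:1", "wechat");
            redisTemplate.opsForHash().put("user:1", "wechat", "new-wechat");

            // String/JSON layout from the previous article: full round trip
            String json = redisTemplate.opsForValue().get("user:1:json");
            if (json != null) {
                // ... deserialize, mutate the "wechat" attribute, serialize ...
                redisTemplate.opsForValue().set("user:1:json", json);
            }

            System.out.println(wechat);
        }
    }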

Author: 黑搜丶D

Original article: https://my.oschina.net/u/4131421/blog/3050067