Guava Cache Source Code Analysis, plus a Classic PHP Pagination Function

Everyone knows what a cache is for. Over the past few days I interviewed several candidates and noticed that many people who lean heavily on frameworks have forgotten the fundamentals: ask them how something works and the answer is always "the framework handles it", with no idea of what is actually going on. So consider this a reminder to keep accumulating basic knowledge; only then can you handle certain problems with real fluency.

#*********************************************************
#File name:    function.php
#Description:  news add/edit management module
#Author:       留印 (adleyliu)
#QQ:           14339095
#Email:        adleyliu@163.com
#Website:
#copyright (c) 2007-2008 115000.com.cn all rights reserved.
#Last updated: 2006-11-20
#*********************************************************

## 1. The Design of Guava Cache

A previous short post briefly summarized some of the features Guava Cache offers, such as eviction, removal listeners, and cache refresh. This post focuses on how Guava Cache implements those features.
The Guava Cache source is at
https://github.com/google/guava
Guava Cache's design resembles ConcurrentHashMap's: it relies mainly on fine-grained locking to reduce contention, and on hashing to speed up lookups. Unlike ConcurrentHashMap, though, Guava Cache has to support a wide range of cache features, so its design is considerably more complex.

A local cache is an excellent optimization: it can raise throughput many times over and responds extremely well under high request concurrency. If the resources you access are small enough to fit in memory without hurting JVM GC, a local cache is a great fit. In my own projects I mainly use a local cache in front of Redis, with excellent results.
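To make that concrete, here is a minimal sketch of the read path for a Guava Cache sitting in front of Redis. The `RedisClient` interface and its `get` method are hypothetical stand-ins for whatever client you actually use, not a real API; cache classes come from com.google.common.cache:

    // Sketch: Guava Cache as a first-level cache in front of Redis.
    interface RedisClient {
      String get(String key); // hypothetical client method
    }

    class TwoLevelCache {
      private final LoadingCache<String, String> local;

      TwoLevelCache(final RedisClient redis) {
        this.local = CacheBuilder.newBuilder()
            .maximumSize(10_000)                   // bound memory use
            .expireAfterWrite(1, TimeUnit.MINUTES) // keep entries roughly fresh
            .build(new CacheLoader<String, String>() {
              @Override
              public String load(String key) {
                // fall back to Redis on a local miss;
                // note: Guava forbids null loads, so a real version must handle misses
                return redis.get(key);
              }
            });
      }

      String get(String key) throws ExecutionException {
        return local.get(key);
      }
    }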

Some people in the group look down on fundamentals, insisting that ability is all that matters and that quizzing basics proves nothing. I simply don't agree with that view.
But that was just a small digression. On to the main content, starting with a classic PHP pagination function:

#*********************************************************
#Pagination function
#*********************************************************
function yl_list_page($pageurl,$rsnum,$pages,$pagecount,$pagesize){
#$pageurl:   base URL
#$rsnum:     total number of records
#$pages:     total number of pages
#$pagecount: current page number
#$pagesize:  records shown per page
   //$pageurl='?';
   $pcount = $pages;
   $page_info = '<div class=pagenum>';
   $page_info .= '<div class=num>';
   // show a "Prev" link on any page after the first
   if (($pcount > 1) && ($pcount == $pagecount)){
       $page_info .= '<a href = '.$pageurl.'page='.intval($pagecount-1).'>Prev</a>';
   }elseif (($pagecount != 1) && ($pcount != $pagecount)){
       $page_info .= '<a href = '.$pageurl.'page='.intval($pagecount-1).'>Prev</a>';
   }
   $page_info .= '<a href = '.$pageurl.'page=1>First</a>';
   if ($pagecount > 4){
       $page_info .= '<a href = '.$pageurl.'page=1>[1]</a><span class=dot>...</span>';
   }
   if ($pcount > $pagecount+2){
       $endpage = $pagecount+2;
   }else{
       $endpage = $pcount;
   }
   for ($n = ($pagecount-2); $n <= $endpage; $n++){
      if (!($n < 1)){
         if ($n == intval($pagecount)){
             $page_info .= '<span class=normal>'.$n.'</span>';
         }else{
             $page_info .= '<a href = '.$pageurl.'page='.$n.'>['.$n.']</a>';
         }
      }
   }
   if ($pagecount+2 < $pcount){
       $page_info .= '<span class=dot>...</span><a href='.$pageurl.'page='.$pcount.'>['.$pcount.']</a>';
   }
   $page_info .= '<a href = '.$pageurl.'page='.$pcount.'>Last</a>';
   // show a "Next" link on any page before the last
   if (($pagecount == 1) && ($pcount != $pagecount) && ($pcount != 0)){
       $page_info .= '<a href = '.$pageurl.'page='.intval($pagecount + 1).'>Next</a>';
   }else if (($pagecount != 1) && ($pcount != $pagecount)){
       $page_info .= '<a href = '.$pageurl.'page='.intval($pagecount + 1).'>Next</a>';
   }
   $page_info .= '</div></div>';
   $page_info .= '<div class=pagenum>';
   $page_info .= '<div class=num><span class=normal> Total: '.$rsnum.' records / '.$pcount.' pages, '.$pagesize.' per page</span></div>';
   $page_info .= ' <div class=num>';
   //echo '<form name=page action='.$pageurl.'>';
   $page_info .= ' Go to ';
   $page_info .= '<input type=text name=page value=\'1\' class=login_left style=\'width:28px;height:18px;\'>';
   $page_info .= ' page <input type=submit name=submit3 class=login_submit style=\'width:28px;height:18px;padding-top:1px;\' onclick=document.myform.action.value=\'go\'> ';
   //echo '</form>';
   $page_info .= '</div>';
   $page_info .= '</div>';
   return $page_info;
}

## 2. Source Code Analysis

Here we take LoadingCache as the example for analyzing Guava Cache's structure and implementation. First, the example from the wiki:

LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
       .maximumSize(1000)
       .expireAfterWrite(10, TimeUnit.MINUTES)
       .removalListener(MY_LISTENER)
       .build(
           new CacheLoader<Key, Graph>() {
             public Graph load(Key key) throws AnyException {
               return createExpensiveGraph(key);
             }
           });

Guava Cache is built with the builder pattern: every CacheBuilder method returns the same CacheBuilder, until build is called.
So let's first look at CacheBuilder's individual methods:

   /**
   *
   * Sets an upper bound on the cache's size; as the cache approaches the limit,
   * the less frequently used entries are evicted.
   * Specifies the maximum number of entries the cache may contain. Note that the cache <b>may evict
   * an entry before this limit is exceeded</b>. As the cache size grows close to the maximum, the
   * cache evicts entries that are less likely to be used again. For example, the cache may evict an
   * entry because it hasn't been used recently or very often.
   *
   * <p>When {@code size} is zero, elements will be evicted immediately after being loaded into the
   * cache. This can be useful in testing, or to disable caching temporarily without a code change.
   *
   * <p>This feature cannot be used in conjunction with {@link #maximumWeight}.
   *
   * @param size the maximum size of the cache
   * @return this {@code CacheBuilder} instance (for chaining)
   * @throws IllegalArgumentException if {@code size} is negative
   * @throws IllegalStateException if a maximum size or weight was already set
   */
  public CacheBuilder<K, V> maximumSize(long size) {
    checkState(
        this.maximumSize == UNSET_INT, "maximum size was already set to %s", this.maximumSize);
    checkState(
        this.maximumWeight == UNSET_INT,
        "maximum weight was already set to %s",
        this.maximumWeight);
    checkState(this.weigher == null, "maximum size can not be combined with weigher");
    checkArgument(size >= 0, "maximum size must not be negative");
    this.maximumSize = size;
    return this;
  }

After the state checks, it is just an assignment.
Similarly:

 public CacheBuilder<K, V> expireAfterWrite(long duration, TimeUnit unit) {
    checkState(
        expireAfterWriteNanos == UNSET_INT,
        "expireAfterWrite was already set to %s ns",
        expireAfterWriteNanos);
    checkArgument(duration >= 0, "duration cannot be negative: %s %s", duration, unit);
    this.expireAfterWriteNanos = unit.toNanos(duration);
    return this;
  }

  public <K1 extends K, V1 extends V> CacheBuilder<K1, V1> removalListener(
      RemovalListener<? super K1, ? super V1> listener) {
    checkState(this.removalListener == null);

    // safely limiting the kinds of caches this can produce
    @SuppressWarnings("unchecked")
    CacheBuilder<K1, V1> me = (CacheBuilder<K1, V1>) this;
    me.removalListener = checkNotNull(listener);
    return me;
  }
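As a quick illustration of those checkState guards: configuring the same option twice fails fast instead of silently overwriting. A hedged sketch, using the message format from the code above:

    CacheBuilder<Object, Object> builder = CacheBuilder.newBuilder().maximumSize(100);
    // the second call trips the checkState above:
    // java.lang.IllegalStateException: maximum size was already set to 100
    builder.maximumSize(200);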

Now the build method:

  public <K1 extends K, V1 extends V> LoadingCache<K1, V1> build(
      CacheLoader<? super K1, V1> loader) {
    checkWeightWithWeigher();
    return new LocalCache.LocalLoadingCache<K1, V1>(this, loader);
  }

This returns a LocalCache.LocalLoadingCache, an inner class of LocalCache; at this point Guava Cache's real storage structure appears. LocalLoadingCache extends LocalManualCache and implements the LoadingCache interface. On instantiation, a LocalCache is built from the CacheBuilder, and LoadingCache and LocalManualCache are merely proxies over that LocalCache.

LocalLoadingCache(
    CacheBuilder<? super K, ? super V> builder, CacheLoader<? super K, V> loader) {
  super(new LocalCache<K, V>(builder, checkNotNull(loader)));
}

private LocalManualCache(LocalCache<K, V> localCache) {
  this.localCache = localCache;
}

So how is the LocalCache itself constructed?

  LocalCache(
      CacheBuilder<? super K, ? super V> builder, @Nullable CacheLoader<? super K, V> loader) {
    // concurrency level: the number of segments
    concurrencyLevel = Math.min(builder.getConcurrencyLevel(), MAX_SEGMENTS);
    // key reference strength
    keyStrength = builder.getKeyStrength();
    // value reference strength
    valueStrength = builder.getValueStrength();
    // equivalence comparators, analogous to Object.equals
    keyEquivalence = builder.getKeyEquivalence();
    valueEquivalence = builder.getValueEquivalence();
    // maximum weight; when weigher is null, maxWeight == maxSize
    maxWeight = builder.getMaximumWeight();
    // per-entry weigher, used by the eviction policy
    weigher = builder.getWeigher();
    // how long after the last access an entry expires
    expireAfterAccessNanos = builder.getExpireAfterAccessNanos();
    // how long after a write an entry expires
    expireAfterWriteNanos = builder.getExpireAfterWriteNanos();
    // refresh interval
    refreshNanos = builder.getRefreshNanos();
    // listener invoked after entries are removed
    removalListener = builder.getRemovalListener();
    // queue of pending removal notifications
    removalNotificationQueue =
        (removalListener == NullListener.INSTANCE)
            ? LocalCache.<RemovalNotification<K, V>>discardingQueue()
            : new ConcurrentLinkedQueue<RemovalNotification<K, V>>();
    // time source
    ticker = builder.getTicker(recordsTime());
    // factory used to create entries
    entryFactory = EntryFactory.getFactory(keyStrength, usesAccessEntries(), usesWriteEntries());
    // cache statistics counter, used for hit-rate accounting and the like
    globalStatsCounter = builder.getStatsCounterSupplier().get();

    // the loader used to load values
    defaultLoader = loader;

    // initial capacity of the hash table
    int initialCapacity = Math.min(builder.getInitialCapacity(), MAXIMUM_CAPACITY);

    // size-based eviction without a custom weigher: cap the initial capacity at maxWeight
    if (evictsBySize() && !customWeigher()) {
      initialCapacity = Math.min(initialCapacity, (int) maxWeight);
    }

    // Find the lowest power-of-two segmentCount that exceeds concurrencyLevel, unless
    // maximumSize/Weight is specified in which case ensure that each segment gets at least 10
    // entries. The special casing for size-based eviction is only necessary because that eviction
    // happens per segment instead of globally, so too many segments compared to the maximum size
    // will result in random eviction behavior.

    // analogous to ConcurrentHashMap
    int segmentShift = 0; // segment shift
    int segmentCount = 1; // number of segments
    // grow the segment count while it is still below the concurrency level
    // (the default concurrency level is 4; the default maxWeight is -1, i.e. no eviction)
    while (segmentCount < concurrencyLevel && (!evictsBySize() || segmentCount * 20 <= maxWeight)) {
      ++segmentShift;
      segmentCount <<= 1;
    }
    this.segmentShift = 32 - segmentShift;
    segmentMask = segmentCount - 1;

    this.segments = newSegmentArray(segmentCount);

    int segmentCapacity = initialCapacity / segmentCount;
    if (segmentCapacity * segmentCount < initialCapacity) {
      ++segmentCapacity;
    }

    int segmentSize = 1;
    while (segmentSize < segmentCapacity) {
      segmentSize <<= 1;
    }
    // by default there is no eviction
    if (evictsBySize()) {
      // Ensure sum of segment max weights = overall max weights
      long maxSegmentWeight = maxWeight / segmentCount + 1;
      long remainder = maxWeight % segmentCount;
      for (int i = 0; i < this.segments.length; ++i) {
        if (i == remainder) {
          maxSegmentWeight--;
        }
        this.segments[i] =
            createSegment(segmentSize, maxSegmentWeight, builder.getStatsCounterSupplier().get());
      }
    } else {
      // initialize each segment
      for (int i = 0; i < this.segments.length; ++i) {
        this.segments[i] =
            createSegment(segmentSize, UNSET_INT, builder.getStatsCounterSupplier().get());
      }
    }
  }

起头化的时候起初化一些陈设等,能够见到和ConcurrentHashMap基本一致,可是引进了有的其余的定义。

Now for the most important methods, starting with put:

    @Override
    public void put(K key, V value) {
      localCache.put(key, value);
    }

  /**
   * Delegates to the segment's put method.
   * @param key
   * @param value
   * @return
   */
  @Override
  public V put(K key, V value) {
    checkNotNull(key);
    checkNotNull(value);
    int hash = hash(key);
    return segmentFor(hash).put(key, hash, value, false);
  }

    @Nullable
    V put(K key, int hash, V value, boolean onlyIfAbsent) {
      // lock for thread safety
      lock();
      try {
        // read the current time
        long now = map.ticker.read();
        // drain stale entries before writing
        preWriteCleanup(now);
        // the segment's count + 1
        int newCount = this.count + 1;
        // resize if needed
        if (newCount > this.threshold) { // ensure capacity
          expand();
          newCount = this.count + 1;
        }
        // the segment's hash table
        AtomicReferenceArray<ReferenceEntry<K, V>> table = this.table;
        // locate the bucket
        int index = hash & (table.length() - 1);
        // head of the bucket's entry chain
        ReferenceEntry<K, V> first = table.get(index);
        // walk the entry chain
        // Look for an existing entry.
        for (ReferenceEntry<K, V> e = first; e != null; e = e.getNext()) {
          K entryKey = e.getKey();
          if (e.getHash() == hash
              && entryKey != null
              && map.keyEquivalence.equivalent(key, entryKey)) {
            // We found an existing entry.
            ValueReference<K, V> valueReference = e.getValueReference();
            // read the value
            V entryValue = valueReference.get();
            // a null value may have been garbage collected
            if (entryValue == null) {
              ++modCount;
              if (valueReference.isActive()) {
                enqueueNotification( // queue the notification, keeping lock hold time short
                    key, hash, entryValue, valueReference.getWeight(), RemovalCause.COLLECTED);
                // reuse the existing key and set the fresh value
                setValue(e, key, value, now); // store the value and add the entry to both queues
                newCount = this.count; // count remains unchanged
              } else {
                setValue(e, key, value, now); // store the value and add the entry to both queues
                newCount = this.count + 1;
              }
              this.count = newCount; // write-volatile, for memory visibility
              // evict if over capacity
              evictEntries(e);
              return null;
            } else if (onlyIfAbsent) { // the key already exists, so this counts as a read,
                                       // and reads must update the access queue
              // Mimic
              // "if (!map.containsKey(key)) ...
              // else return map.get(key);
              recordLockedRead(e, now);
              return entryValue;
            } else {
              // the value is non-null, so replace it
              // clobber existing entry, count remains unchanged
              ++modCount;
              // enqueue a REPLACED removal notification
              enqueueNotification(
                  key, hash, entryValue, valueReference.getWeight(), RemovalCause.REPLACED);
              setValue(e, key, value, now); // store the value and add the entry to both queues
              // eviction
              evictEntries(e);
              return entryValue;
            }
          }
        }
        // no entry exists for this key, so create a new one
        // Create a new entry.
        ++modCount;
        ReferenceEntry<K, V> newEntry = newEntry(key, hash, first);
        setValue(newEntry, key, value, now);
        table.set(index, newEntry);
        newCount = this.count + 1;
        this.count = newCount; // write-volatile
        // evict surplus entries
        evictEntries(newEntry);
        return null;
      } finally {
        // unlock
        unlock();
        // process the pending removal notifications
        postWriteCleanup();
      }
    }

The code is long and admittedly ugly; the comments above cover the details. A few points deserve special attention:

  1. Locking: as in ConcurrentHashMap, the lock guarantees thread safety.
  2. preWriteCleanup: a cleanup runs before every put. What does it clean? Look at the code:

    @GuardedBy("this")
    void preWriteCleanup(long now) {
      runLockedCleanup(now);
    }
    void runLockedCleanup(long now) {
      if (tryLock()) {
        try {
          drainReferenceQueues();
          expireEntries(now); // calls drainRecencyQueue
          readCount.set(0);
        } finally {
          unlock();
        }
      }
    }
    @GuardedBy("this")
    void drainReferenceQueues() {
      if (map.usesKeyReferences()) {
        drainKeyReferenceQueue();
      }
      if (map.usesValueReferences()) {
        drainValueReferenceQueue();
      }
    }
    @GuardedBy("this")
    void drainKeyReferenceQueue() {
      Reference<? extends K> ref;
      int i = 0;
      while ((ref = keyReferenceQueue.poll()) != null) {
        @SuppressWarnings("unchecked")
        ReferenceEntry<K, V> entry = (ReferenceEntry<K, V>) ref;
        map.reclaimKey(entry);
        if (++i == DRAIN_MAX) {
          break;
        }
      }
    }

This may look a little confusing at first, but all it does is drain two queues: keyReferenceQueue and valueReferenceQueue. What are they? They are reference queues. To support weak and soft references, Guava Cache wraps keys and values as keyReference and valueReference and registers them with these reference queues.
When creating an entry:

    @GuardedBy("this")
    ReferenceEntry<K, V> newEntry(K key, int hash, @Nullable ReferenceEntry<K, V> next) {
      return map.entryFactory.newEntry(this, checkNotNull(key), hash, next);
    }

The entry is created via map.entryFactory. The factory is obtained through

entryFactory = EntryFactory.getFactory(keyStrength, usesAccessEntries(), usesWriteEntries());

keyStrength is the reference strength we specified at construction time. The available factories are:

    static final EntryFactory[] factories = {
      STRONG,
      STRONG_ACCESS,
      STRONG_WRITE,
      STRONG_ACCESS_WRITE,
      WEAK,
      WEAK_ACCESS,
      WEAK_WRITE,
      WEAK_ACCESS_WRITE,
    };

Each factory creates the corresponding entry type; WeakEntry deserves a closer look:

    WEAK {
      @Override
      <K, V> ReferenceEntry<K, V> newEntry(
          Segment<K, V> segment, K key, int hash, @Nullable ReferenceEntry<K, V> next) {
        return new WeakEntry<K, V>(segment.keyReferenceQueue, key, hash, next);
      }
    },

  /**
   * Used for weakly-referenced keys.
   */
  static class WeakEntry<K, V> extends WeakReference<K> implements ReferenceEntry<K, V> {
    WeakEntry(ReferenceQueue<K> queue, K key, int hash, @Nullable ReferenceEntry<K, V> next) {
      super(key, queue);
      this.hash = hash;
      this.next = next;
    }

    @Override
    public K getKey() {
      return get();
    }

    /*
     * It'd be nice to get these for free from AbstractReferenceEntry, but we're already extending
     * WeakReference<K>.
     */

    // null access

    @Override
    public long getAccessTime() {
      throw new UnsupportedOperationException();
    }

    @Override
    public void setAccessTime(long time) {
      throw new UnsupportedOperationException();
    }

    @Override
    public ReferenceEntry<K, V> getNextInAccessQueue() {
      throw new UnsupportedOperationException();
    }

    @Override
    public void setNextInAccessQueue(ReferenceEntry<K, V> next) {
      throw new UnsupportedOperationException();
    }

    @Override
    public ReferenceEntry<K, V> getPreviousInAccessQueue() {
      throw new UnsupportedOperationException();
    }

    @Override
    public void setPreviousInAccessQueue(ReferenceEntry<K, V> previous) {
      throw new UnsupportedOperationException();
    }

    // null write

    @Override
    public long getWriteTime() {
      throw new UnsupportedOperationException();
    }

    @Override
    public void setWriteTime(long time) {
      throw new UnsupportedOperationException();
    }

    @Override
    public ReferenceEntry<K, V> getNextInWriteQueue() {
      throw new UnsupportedOperationException();
    }

    @Override
    public void setNextInWriteQueue(ReferenceEntry<K, V> next) {
      throw new UnsupportedOperationException();
    }

    @Override
    public ReferenceEntry<K, V> getPreviousInWriteQueue() {
      throw new UnsupportedOperationException();
    }

    @Override
    public void setPreviousInWriteQueue(ReferenceEntry<K, V> previous) {
      throw new UnsupportedOperationException();
    }

    // The code below is exactly the same for each entry type.

    final int hash;
    final ReferenceEntry<K, V> next;
    volatile ValueReference<K, V> valueReference = unset();

    @Override
    public ValueReference<K, V> getValueReference() {
      return valueReference;
    }

    @Override
    public void setValueReference(ValueReference<K, V> valueReference) {
      this.valueReference = valueReference;
    }

    @Override
    public int getHash() {
      return hash;
    }

    @Override
    public ReferenceEntry<K, V> getNext() {
      return next;
    }
  }

WeakEntry extends WeakReference and implements ReferenceEntry, which means the key is only weakly reachable: a WeakEntry's key (and likewise a weak or soft value) can be reclaimed at any time. Note the constructor parameter `ReferenceQueue<K> queue`: this is exactly the keyReferenceQueue mentioned above, so when a key is GC'd, its reference is automatically appended to that queue and we can then clean up the corresponding entry. Values work the same way. Pretty clever, isn't it?
Back to the main thread: draining the keyReferenceQueue:

    @GuardedBy("this")
    void drainKeyReferenceQueue() {
      Reference<? extends K> ref;
      int i = 0;
      while ((ref = keyReferenceQueue.poll()) != null) {
        @SuppressWarnings("unchecked")
        ReferenceEntry<K, V> entry = (ReferenceEntry<K, V>) ref;
        map.reclaimKey(entry);
        if (++i == DRAIN_MAX) {
          break;
        }
      }
    }

    void reclaimKey(ReferenceEntry<K, V> entry) {
    int hash = entry.getHash();
    segmentFor(hash).reclaimKey(entry, hash);
  }

    /**
     * Removes an entry whose key has been garbage collected.
     */
    boolean reclaimKey(ReferenceEntry<K, V> entry, int hash) {
      lock();
      try {
        int newCount = count - 1;
        AtomicReferenceArray<ReferenceEntry<K, V>> table = this.table;
        int index = hash & (table.length() - 1);
        ReferenceEntry<K, V> first = table.get(index);

        for (ReferenceEntry<K, V> e = first; e != null; e = e.getNext()) {
          if (e == entry) {
            ++modCount;
            ReferenceEntry<K, V> newFirst =
                removeValueFromChain(
                    first,
                    e,
                    e.getKey(),
                    hash,
                    e.getValueReference().get(),
                    e.getValueReference(),
                    RemovalCause.COLLECTED);
            newCount = this.count - 1;
            table.set(index, newFirst);
            this.count = newCount; // write-volatile
            return true;
          }
        }

        return false;
      } finally {
        unlock();
        postWriteCleanup();
      }
    }

地点就是清理进程了,借使发现key恐怕value被GC了,那么会在put的时候接触清理。
3. What does setValue do? setValue writes the value into the entry; since that is a write, it refreshes the last-write time. But what bookkeeping maintains all of this?

    /**
     * Sets a new value of an entry. Adds newly created entries at the end of the access queue.
     */
    @GuardedBy("this")
    void setValue(ReferenceEntry<K, V> entry, K key, V value, long now) {
      ValueReference<K, V> previous = entry.getValueReference();
      int weight = map.weigher.weigh(key, value);
      checkState(weight >= 0, "Weights must be non-negative");

      ValueReference<K, V> valueReference =
          map.valueStrength.referenceValue(this, entry, value, weight);
      entry.setValueReference(valueReference);
      // record the write (adds the entry to the access/write queues)
      recordWrite(entry, weight, now);
      previous.notifyNewValue(value);
    }

    /**
     * Updates eviction metadata that {@code entry} was just written. This currently amounts to
     * adding {@code entry} to relevant eviction lists.
     */
    @GuardedBy("this")
    void recordWrite(ReferenceEntry<K, V> entry, int weight, long now) {
      // we are already under lock, so drain the recency queue immediately
      drainRecencyQueue();
      totalWeight += weight;

      if (map.recordsAccess()) {
        entry.setAccessTime(now);
      }
      if (map.recordsWrite()) {
        entry.setWriteTime(now);
      }
      accessQueue.add(entry);
      writeQueue.add(entry);
    }

In fact Guava Cache maintains two queues, a write queue and an access queue, and uses them to implement eviction by recency of write and recency of access. We can infer that these queues must preserve order and must also locate elements quickly. Take the access queue as an example:

  /**
   * A custom queue for managing access order. Note that this is tightly integrated with
   * {@code ReferenceEntry}, upon which it relies to perform its linking.
   *
   * <p>Note that this entire implementation makes the assumption that all elements which are in the
   * map are also in this queue, and that all elements not in the queue are not in the map.
   *
   * <p>The benefits of creating our own queue are that (1) we can replace elements in the middle of
   * the queue as part of copyWriteEntry, and (2) the contains method is highly optimized for the
   * current model.
   */
  static final class AccessQueue<K, V> extends AbstractQueue<ReferenceEntry<K, V>> {
    final ReferenceEntry<K, V> head =
        new AbstractReferenceEntry<K, V>() {

          @Override
          public long getAccessTime() {
            return Long.MAX_VALUE;
          }

          @Override
          public void setAccessTime(long time) {}

          ReferenceEntry<K, V> nextAccess = this;

          @Override
          public ReferenceEntry<K, V> getNextInAccessQueue() {
            return nextAccess;
          }

          @Override
          public void setNextInAccessQueue(ReferenceEntry<K, V> next) {
            this.nextAccess = next;
          }

          ReferenceEntry<K, V> previousAccess = this;

          @Override
          public ReferenceEntry<K, V> getPreviousInAccessQueue() {
            return previousAccess;
          }

          @Override
          public void setPreviousInAccessQueue(ReferenceEntry<K, V> previous) {
            this.previousAccess = previous;
          }
        };

    // implements Queue

    @Override
    public boolean offer(ReferenceEntry<K, V> entry) {
      // unlink
      connectAccessOrder(entry.getPreviousInAccessQueue(), entry.getNextInAccessQueue());

      // add to tail
      connectAccessOrder(head.getPreviousInAccessQueue(), entry);
      connectAccessOrder(entry, head);

      return true;
    }

    @Override
    public ReferenceEntry<K, V> peek() {
      ReferenceEntry<K, V> next = head.getNextInAccessQueue();
      return (next == head) ? null : next;
    }

    @Override
    public ReferenceEntry<K, V> poll() {
      ReferenceEntry<K, V> next = head.getNextInAccessQueue();
      if (next == head) {
        return null;
      }

      remove(next);
      return next;
    }
    @Override
    public void clear() {
      ReferenceEntry<K, V> e = head.getNextInAccessQueue();
      while (e != head) {
        ReferenceEntry<K, V> next = e.getNextInAccessQueue();
        nullifyAccessOrder(e);
        e = next;
      }

      head.setNextInAccessQueue(head);
      head.setPreviousInAccessQueue(head);
    }
  }

The key method here is offer, which does three things:
1. Unlink the entry from its previous and next nodes; to support this, each entry keeps its own previous and next references.
2. Append the node at the tail of the queue, locating the tail via head.getPreviousInAccessQueue(); you can see it is a circular doubly linked queue.
3. Make the newly added (or newly re-touched) node the tail.

From this we can conclude that the most recently touched node is always at the tail, the nodes right after head are the least active, and each expiration pass removes expired nodes starting just after head, which poll confirms.

The write queue works the same way. In other words, every write updates the entry's reference and write time and repositions the entry in the access/write queues. Once again, an impressive piece of design. (An LRU analogy follows.)
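The move-to-tail-on-touch idea behind the access queue is classic LRU ordering. Purely as an analogy (this is not how Guava implements it), the JDK exposes the same idea through LinkedHashMap's accessOrder mode:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class LruSketch {
      public static void main(String[] args) {
        // accessOrder=true: iteration order runs from least- to most-recently accessed
        Map<String, Integer> lru = new LinkedHashMap<String, Integer>(16, 0.75f, true) {
          @Override
          protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
            return size() > 2; // keep at most 2 entries, evicting the LRU one
          }
        };
        lru.put("a", 1);
        lru.put("b", 2);
        lru.get("a");     // touching "a" moves it to the most-recent end
        lru.put("c", 3);  // evicts "b", the least recently used
        System.out.println(lru.keySet()); // prints [a, c]
      }
    }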

4. evictEntries(e): entry eviction. This only kicks in when a maximum cache size has been configured:

    /**
     * Performs eviction if the segment is over capacity. Avoids flushing the entire cache if the
     * newest entry exceeds the maximum weight all on its own.
     *
     * @param newest the most recently added entry
     */
    @GuardedBy("this")
    void evictEntries(ReferenceEntry<K, V> newest) {
      if (!map.evictsBySize()) {
        return;
      }

      drainRecencyQueue();

      // If the newest entry by itself is too heavy for the segment, don't bother evicting
      // anything else, just that
      if (newest.getValueReference().getWeight() > maxSegmentWeight) {
        if (!removeEntry(newest, newest.getHash(), RemovalCause.SIZE)) {
          throw new AssertionError();
        }
      }

      while (totalWeight > maxSegmentWeight) {
        ReferenceEntry<K, V> e = getNextEvictable();
        if (!removeEntry(e, e.getHash(), RemovalCause.SIZE)) {
          throw new AssertionError();
        }
      }
    }

This does a few things: first check whether size-based eviction is enabled at all; then drain the recency queue; then check whether the newest entry is by itself too heavy for the segment, and if so remove it immediately. Finally, while the total weight still exceeds the limit, evict the least recently used entries until the weight drops below the cap.

    // TODO(fry): instead implement this with an eviction head
    @GuardedBy("this")
    ReferenceEntry<K, V> getNextEvictable() {
      for (ReferenceEntry<K, V> e : accessQueue) {
        int weight = e.getValueReference().getWeight();
        if (weight > 0) {
          return e;
        }
      }
      throw new AssertionError();
    }

A common point of confusion: what is this Weight? If you configure nothing, every entry's weight is 1 and the weight cap is simply maxSize. But Guava lets you define your own weigher, in which case the cap becomes maxWeight. See the initialization section above.
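A hedged sketch of a custom weigher, turning the cap into total bytes instead of entry count; fetchBytes is a hypothetical loader, not a real API:

    LoadingCache<String, byte[]> blobs = CacheBuilder.newBuilder()
        .maximumWeight(10L * 1024 * 1024) // cap the cache at roughly 10 MB of values
        .weigher(new Weigher<String, byte[]>() {
          @Override
          public int weigh(String key, byte[] value) {
            return value.length; // each entry weighs its byte size
          }
        })
        .build(new CacheLoader<String, byte[]>() {
          @Override
          public byte[] load(String key) throws Exception {
            return fetchBytes(key); // hypothetical
          }
        });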

5. removalListener: as we saw, when an entry is removed a notification event is queued, and in the finally block a cleanup pass delivers them:

    /**
   * Notifies listeners that an entry has been automatically removed due to expiration, eviction, or
   * eligibility for garbage collection. This should be called every time expireEntries or
   * evictEntry is called (once the lock is released).
   */
  void processPendingNotifications() {
    RemovalNotification<K, V> notification;
    while ((notification = removalNotificationQueue.poll()) != null) {
      try {
        removalListener.onRemoval(notification);
      } catch (Throwable e) {
        logger.log(Level.WARNING, "Exception thrown by removal listener", e);
      }
    }
  }

To reduce the cost of put, notification delivery is deferred in a quasi-asynchronous way: the queue is drained only after the lock is released, so listeners never block other puts.
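For completeness, a sketch of what the MY_LISTENER from the opening example might look like; this is an assumption about its shape, not code from the article. If you would rather not run listeners on the draining thread at all, Guava also provides RemovalListeners.asynchronous(listener, executor):

    RemovalListener<Key, Graph> MY_LISTENER = new RemovalListener<Key, Graph>() {
      @Override
      public void onRemoval(RemovalNotification<Key, Graph> notification) {
        // the cause is one of EXPLICIT, REPLACED, COLLECTED, EXPIRED, SIZE
        System.out.println(notification.getKey() + " removed, cause: " + notification.getCause());
      }
    };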

That completes the analysis of Guava's put path, complex indeed. Now for the get side:

    // LoadingCache methods
    // delegates to the local cache
    @Override
    public V get(K key) throws ExecutionException {
      return localCache.getOrLoad(key);
    }

  /**
   * Gets the value for the given key, loading it if it is absent.
   * @param key
   * @return
   * @throws ExecutionException
   */
  V getOrLoad(K key) throws ExecutionException {
    return get(key, defaultLoader);
  }

  V get(K key, CacheLoader<? super K, V> loader) throws ExecutionException {
    int hash = hash(checkNotNull(key)); // hash -> rehash
    return segmentFor(hash).get(key, hash, loader);
  }

  // loading
    // look up the value for the given key; reads take no lock
    V get(K key, int hash, CacheLoader<? super K, V> loader) throws ExecutionException {
      // neither key nor loader may be null
      checkNotNull(key);
      checkNotNull(loader);

      try {
        if (count != 0) { // read-volatile: gives visibility; if the segment is empty, go straight to load
          // don't call getLiveEntry, which would ignore loading values
          ReferenceEntry<K, V> e = getEntry(key, hash);
          // a non-null entry means the value may still be there
          if (e != null) {
            long now = map.ticker.read(); // read the current time to check liveness
            V value = getLiveValue(e, now);
            // a non-null value may not need loading
            if (value != null) {
              recordRead(e, now); // update accessTime and add the entry to the recency queue
              statsCounter.recordHits(1); // record a hit, used to compute the hit rate
              // if a refresh is due, refresh; otherwise return the current value
              return scheduleRefresh(e, key, hash, value, now, loader);
            }
            // value is null; if it is currently loading, wait for the result
            ValueReference<K, V> valueReference = e.getValueReference();
            if (valueReference.isLoading()) {
              return waitForLoadingValue(e, key, valueReference);
            }
          }
        }
        // no usable value: fall through to the single locked get-or-load
        // at this point e is either null or expired;
        return lockedGetOrLoad(key, hash, loader);
      } catch (ExecutionException ee) {
        Throwable cause = ee.getCause();
        if (cause instanceof Error) {
          throw new ExecutionError((Error) cause);
        } else if (cause instanceof RuntimeException) {
          throw new UncheckedExecutionException(cause);
        }
        throw ee;
      } finally {
        postReadCleanup(); // a cleanup runs after every put and get
      }
    }

The get implementation follows the same idea as ConcurrentHashMap in JDK 1.6: puts take a lock, while gets rely on volatile reads for visibility.
It mainly does the following:

  1. First locate the entry. If the entry is non-null, read its value; a non-null value means the data is still live, so decide whether a refresh is due and return. Otherwise, if the value reference is currently loading, wait for the load to finish.
  2. If no entry is found, or the value is null and not loading, fall back to lockedGetOrLoad(), which is a big piece of work:

    V lockedGetOrLoad(K key, int hash, CacheLoader<? super K, V> loader) throws ExecutionException {
      ReferenceEntry<K, V> e;
      ValueReference<K, V> valueReference = null;
      LoadingValueReference<K, V> loadingValueReference = null;
      boolean createNewEntry = true;

      lock(); // lock, because the data structure may be mutated
      try {
        // re-read ticker once inside the lock
        long now = map.ticker.read();
        // drain the reference queues and expire stale access/write queue entries;
        // this counts as a write
        preWriteCleanup(now);

        int newCount = this.count - 1;
        AtomicReferenceArray<ReferenceEntry<K, V>> table = this.table;
        int index = hash & (table.length() - 1);
        ReferenceEntry<K, V> first = table.get(index);
        // locate the target entry
        for (e = first; e != null; e = e.getNext()) {
          K entryKey = e.getKey();
          if (e.getHash() == hash
              && entryKey != null
              && map.keyEquivalence.equivalent(key, entryKey)) {
            valueReference = e.getValueReference();
            // already loading: don't create a new entry
            if (valueReference.isLoading()) {
              createNewEntry = false;
            } else {
              V value = valueReference.get();
              if (value == null) { // the value may have been GC'd: notify the removal listener
                enqueueNotification(
                    entryKey, hash, value, valueReference.getWeight(), RemovalCause.COLLECTED);
              } else if (map.isExpired(e, now)) { // or it may have expired
                // This is a duplicate check, as preWriteCleanup already purged expired
                // entries, but let's accommodate an incorrect expiration queue.
                enqueueNotification(
                    entryKey, hash, value, valueReference.getWeight(), RemovalCause.EXPIRED);
              } else { // already loaded by a concurrent thread: return it
                recordLockedRead(e, now);
                statsCounter.recordHits(1);
                // we were concurrent with loading; don't consider refresh
                return value;
              }
              // remove the stale entry from the queues, since a new one will be created
              // immediately reuse invalid entries
              writeQueue.remove(e);
              accessQueue.remove(e);
              this.count = newCount; // write-volatile
            }
            break;
          }
        }
        // create a new entry, which has no value yet
        if (createNewEntry) {
          loadingValueReference = new LoadingValueReference<K, V>();

          if (e == null) {
            e = newEntry(key, hash, first);
            e.setValueReference(loadingValueReference);
            table.set(index, e);
          } else {
            e.setValueReference(loadingValueReference);
          }
        }
      } finally {
        unlock();
        postWriteCleanup();
      }

      if (createNewEntry) {
        try {
          // Synchronizes on the entry to allow failing fast when a recursive load is
          // detected. This may be circumvented when an entry is copied, but will fail fast most
          // of the time.
          synchronized (e) {
            return loadSync(key, hash, loadingValueReference, loader);
          }
        } finally {
          statsCounter.recordMisses(1);
        }
      } else {
        // The entry already exists. Wait for loading.
        return waitForLoadingValue(e, key, valueReference);
      }
    }

First, why take the lock? There are two reasons:

  1. A load counts as a write, since it mutates the data structure, so it must be locked.
  2. To prevent cache breakdown: the lock ensures only one thread loads a given missing key, at the JVM level of course, not across a distributed system.

Because this counts as a write, preWriteCleanup runs first. The entry is then located by key: if it is found and currently loading, no new entry is created and we wait for the load to finish. Otherwise, if the value is null or expired, a new entry must be created; if neither, the value was loaded concurrently, so the access queue is updated and the value is returned.
Next, the stale entry is removed from the access and write queues and a new entry is created. The interesting part:

   // at most one of loadSync/loadAsync may be called for any given LoadingValueReference
    // synchronous loading
    V loadSync(
        K key,
        int hash,
        LoadingValueReference<K, V> loadingValueReference,
        CacheLoader<? super K, V> loader)
        throws ExecutionException {
      ListenableFuture<V> loadingFuture = loadingValueReference.loadFuture(key, loader);
      return getAndRecordStats(key, hash, loadingValueReference, loadingFuture);
    }

A LoadingValueReference is created here; that is what the earlier isLoading checks look at. While it is in the loading state, exactly one thread is updating the cache for that key, and the other threads simply wait.
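A small demo of that behavior, a hedged sketch with a deliberately slow loader: two concurrent gets for the same key trigger only one load, the second thread blocks in waitForLoadingValue, and both receive the same value:

    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;
    import java.util.concurrent.atomic.AtomicInteger;

    public class SingleLoadDemo {
      public static void main(String[] args) throws Exception {
        final AtomicInteger loads = new AtomicInteger();
        final LoadingCache<String, String> cache = CacheBuilder.newBuilder()
            .build(new CacheLoader<String, String>() {
              @Override
              public String load(String key) throws Exception {
                loads.incrementAndGet();
                Thread.sleep(500); // simulate an expensive load
                return "value-for-" + key;
              }
            });

        Runnable task = new Runnable() {
          @Override
          public void run() {
            cache.getUnchecked("k");
          }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("loads = " + loads.get()); // prints 1, not 2
      }
    }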

You can also see that asynchronous loading is supported:

    ListenableFuture<V> loadAsync(
        final K key,
        final int hash,
        final LoadingValueReference<K, V> loadingValueReference,
        CacheLoader<? super K, V> loader) {
      final ListenableFuture<V> loadingFuture = loadingValueReference.loadFuture(key, loader);
      loadingFuture.addListener(
          new Runnable() {
            @Override
            public void run() {
              try {
                getAndRecordStats(key, hash, loadingValueReference, loadingFuture);
              } catch (Throwable t) {
                logger.log(Level.WARNING, "Exception thrown during refresh", t);
                loadingValueReference.setException(t);
              }
            }
          },
          directExecutor());
      return loadingFuture;
    }

I'll skip the remaining update logic.
From the above we can see that every get updates the access queue; under concurrent access only one thread loads a given key, and a get for a missing key triggers exactly one load. The synchronization thus also solves the cache-breakdown problem. Guava Cache really is elegantly designed.

Guava also offers something interesting: asMap(). Guava Cache feels like a Map but is not quite one, so a method is provided to expose it as a Map view.
Let's look at asMap():

    @Override
    public ConcurrentMap<K, V> asMap() {
      return localCache;
    }

It simply returns the localCache, typed as a ConcurrentMap. So let's look at LocalCache's inheritance:

@GwtCompatible(emulated = true)
class LocalCache<K, V> extends AbstractMap<K, V> implements ConcurrentMap<K, V> {

So it is indeed closely related to Map: LocalCache itself is a ConcurrentMap. Normally we can't call its Map methods, though, because all we hold is a LoadingCache. Through asMap we get at the LocalCache, but only via the Map interface, which means the view offers no automatic loading or any of the other cache features.
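A brief illustration of the view's semantics (a sketch; the loader body is made up): reads through asMap never trigger loading, so they only see what is already cached:

    LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, String>() {
          @Override
          public String load(String key) {
            return "loaded-" + key;
          }
        });

    cache.getUnchecked("a");                    // loads and caches "a"
    System.out.println(cache.asMap().get("a")); // "loaded-a": already cached
    System.out.println(cache.asMap().get("b")); // null: the Map view does not load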
The official wiki makes exactly this point about asMap.


That completes the analysis of the core source code. It is messy going at times, but source code rewards sitting down and reading it slowly and carefully.

Since this article ran long, please point out anything I got wrong. Finally, a merry Christmas to this weary coder.

## 1. Implementing a Local Cache

There are many ways to implement a local cache. The first that come to mind are the JDK's own collection classes; in principle List and Array could both serve as a local cache, but their data structures make lookup inherently inefficient. Hashing is therefore a much better choice. The JDK has several hash table implementations, and we can pick according to the situation:

Hashtable: synchronizes every method for thread safety, but throughput is poor. Not recommended.
HashMap: not thread-safe, resizes automatically; usable when there is no concurrency.
ConcurrentHashMap: a thread-safe hash table; writes lock, reads don't. Lock granularity is reduced by segmenting, which preserves concurrent throughput. Recommended.

Of course, List or Array would also work; their performance just can't keep up.
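As a baseline for the ConcurrentHashMap option recommended above, here is a minimal sketch; loadFromDb is a hypothetical placeholder. Since Java 8, computeIfAbsent gives per-key, load-once semantics reminiscent of Guava's:

    import java.util.concurrent.ConcurrentHashMap;

    class NaiveLocalCache {
      private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<String, String>();

      String get(String key) {
        // computeIfAbsent runs the loader at most once per absent key,
        // blocking other threads that ask for the same key in the meantime
        return cache.computeIfAbsent(key, k -> loadFromDb(k));
      }

      private String loadFromDb(String key) {
        return "db-value-for-" + key; // hypothetical loader
      }
    }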

Beyond the JDK containers there are libraries like Ehcache and Guava Cache. Ehcache is not especially convenient to use, so I haven't used it in my projects; Guava Cache, on the other hand, is impressive here and is my favorite local cache implementation.

Let me paste the code; there is also a download link below. It is quite simple, and the point is to learn from it.

## 2. Using Guava Cache

For using Guava Cache, the official wiki is the best reference, though it is all in English and can be sleep-inducing. The wiki: https://github.com/google/guava/wiki
Here is a brief introduction to using Guava Cache.
Guava Cache comes in two main flavors: LoadingCache and the Callable-based Cache.

The first, LoadingCache, is a Cache implementation with built-in loading; values are loaded by the CacheLoader specified when the LoadingCache is built. For example:

LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
       .maximumSize(1000)
       .expireAfterWrite(10, TimeUnit.MINUTES)
       .removalListener(MY_LISTENER)
       .build(
           new CacheLoader<Key, Graph>() {
             public Graph load(Key key) throws AnyException {
               return createExpensiveGraph(key);
             }
           });

The second, the Callable-based Cache, addresses LoadingCache's inflexibility: each get call may supply its own callback, so when the target value is absent it can be loaded flexibly by that Callable.

Cache<Key, Value> cache = CacheBuilder.newBuilder()
    .maximumSize(1000)
    .build(); // look Ma, no CacheLoader
...
try {
  // If the key wasn't in the "easy to compute" group, we need to
  // do things the hard way.
  cache.get(key, new Callable<Value>() {
    @Override
    public Value call() throws AnyException {
      return doThingsTheHardWay(key);
    }
  });
} catch (ExecutionException e) {
  throw new OtherException(e.getCause());
}

As you can see, this style is more flexible than the previous one, but it is also more complex to use.
