Tomcat Architecture Explained, Part 3: The NIO Connector
The previous article briefly covered the internal structure and message flow of the default Connector, which is a BIO-based implementation.
Besides BIO, an NIO connector can also be brought up quickly through configuration. In server.xml, configure it as follows.
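A typical minimal snippet looks like this (the port and timeout values here are illustrative):

```xml
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000"
           redirectPort="8443" />
```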
Tomcat as a whole is a fairly complete framework: its components interact through interfaces, which makes them easy to extend.
For example, org.apache.coyote.http11.Http11NioProtocol here and the BIO connector's org.apache.coyote.http11.Http11Protocol both implement the common org.apache.coyote.ProtocolHandler interface.
Implementation classes of ProtocolHandler
In overall structure, the NIO implementation stays largely in line with the BIO one.
Internal structure of the NIO connector
The three main components of the Connector are still visible:
- Http11NioProtocol
- Mapper
- CoyoteAdapter
Their basic functions are similar to those of their BIO counterparts.
Let's focus on Http11NioProtocol.
Like JIoEndpoint in the BIO connector, NioEndpoint is the main module inside Http11NioProtocol responsible for accepting and processing sockets.
The main flow of NioEndpoint
Acceptor and Worker each run as thread pools, while Poller is a single thread.
Note that, as with the BIO implementation, the default behavior is governed by server.xml:
- if no <Executor> is configured, the connector runs with its internal Worker thread pool;
- if an <Executor> is configured, it runs with a ThreadPoolExecutor-based pool from java.util.concurrent, as in the snippet after this list.
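A sketch of the <Executor> variant (the pool name and sizes here are illustrative):

```xml
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="150" minSpareThreads="4" />
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           executor="tomcatThreadPool"
           connectionTimeout="20000"
           redirectPort="8443" />
```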
Acceptor
- The socket-accepting thread. Although this is an NIO-based connector, sockets are still accepted through a traditional blocking accept() call, which yields a SocketChannel object.
- The SocketChannel is then wrapped in a Tomcat implementation class, org.apache.tomcat.util.net.NioChannel.
- The NioChannel is in turn wrapped in a PollerEvent object, and the PollerEvent is pushed onto the events queue. This is a classic producer-consumer pattern: the Acceptor and Poller threads communicate through the queue, with the Acceptor producing events and the Poller consuming them (see the sketch after this list).
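A minimal sketch of the producer side, under the simplifying assumption that the queue carries raw SocketChannels rather than Tomcat's pooled PollerEvent wrappers:

```java
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Queue;

class AcceptorSketch implements Runnable {
    private final ServerSocketChannel serverSock; // opened and bound in blocking mode
    private final Queue<SocketChannel> events;    // shared with the Poller thread

    AcceptorSketch(ServerSocketChannel serverSock, Queue<SocketChannel> events) {
        this.serverSock = serverSock;
        this.events = events;
    }

    @Override
    public void run() {
        while (true) {
            try {
                SocketChannel channel = serverSock.accept(); // blocking accept, as in NioEndpoint
                channel.configureBlocking(false);            // the Poller needs non-blocking channels
                events.offer(channel);                       // producer side of the handoff
                // (a real Poller would also be woken here via selector.wakeup())
            } catch (Exception e) {
                // log and keep accepting
            }
        }
    }
}
```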
Poller
The Poller thread maintains a Selector object; the NIO logic revolves around this Selector.
There is more than one Selector in the Connector: another one is used to control timeouts while reading and writing socket data, introduced later in the NioSelectorPool section. For now, call the Selector maintained by the Poller thread the main Selector.
The Poller is the main thread of the NIO implementation. As the consumer of the events queue, it first takes a PollerEvent off the queue and registers that event's channel with the main Selector for OP_READ. The main Selector then performs a select; the Poller iterates over the sockets that are ready to read, obtains an available Worker thread from the Worker pool, and hands the socket over to it. The whole process is a textbook NIO pattern, sketched below.
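A minimal sketch of the consumer side, matching the AcceptorSketch above (again simplified to raw SocketChannels; the Worker hand-off body is left abstract):

```java
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ExecutorService;

class PollerSketch implements Runnable {
    private final Selector selector;           // the "main" Selector
    private final Queue<SocketChannel> events; // filled by the Acceptor
    private final ExecutorService workers;     // the Worker pool

    PollerSketch(Selector selector, Queue<SocketChannel> events, ExecutorService workers) {
        this.selector = selector;
        this.events = events;
        this.workers = workers;
    }

    @Override
    public void run() {
        while (true) {
            try {
                // consumer side: register every queued channel for read readiness
                SocketChannel ch;
                while ((ch = events.poll()) != null) {
                    ch.register(selector, SelectionKey.OP_READ);
                }
                if (selector.select(1000) == 0) continue;
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    key.interestOps(0); // don't re-select this channel while a Worker owns it
                    SocketChannel socket = (SocketChannel) key.channel();
                    workers.execute(() -> {
                        // hand the socket to Worker processing (SocketProcessor in Tomcat)
                    });
                }
            } catch (Exception e) {
                // log and keep polling
            }
        }
    }
}
```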
Worker
After receiving a socket from the Poller, the Worker thread wraps it in a SocketProcessor object, obtains an Http11NioProcessor object from Http11ConnectionHandler, and from the Http11NioProcessor invokes the CoyoteAdapter logic, just as in the BIO implementation. Inside the Worker thread, the HTTP request is read from the socket, parsed into an HttpServletRequest object, and dispatched to the appropriate servlet; the response is then written back to the client through the socket.

Reading from and writing to the socket does not follow the typical non-blocking NIO pattern of registering OP_READ or OP_WRITE with the main Selector. Instead, reads and writes go directly through the socket and block. For timeout control, however, an NIO Selector mechanism is used: not the Poller's main Selector, but the Selector maintained by the BlockPoller thread, which we will call the auxiliary Selector.
NioSelectorPool
The NioEndpoint object maintains a NioSelectorPool object, which in turn maintains a BlockPoller thread; this thread runs the NIO logic against the auxiliary Selector. Take writing the response back to the socket after the servlet has run as an example: the write ultimately goes through NioBlockingSelector's write method.
```java
public int write(ByteBuffer buf, NioChannel socket, long writeTimeout, MutableInteger lastWrite) throws IOException {
    SelectionKey key = socket.getIOChannel().keyFor(socket.getPoller().getSelector());
    if (key == null) throw new IOException("Key no longer registered");
    KeyAttachment att = (KeyAttachment) key.attachment();
    int written = 0;
    boolean timedout = false;
    int keycount = 1; // assume we can write
    long time = System.currentTimeMillis(); // start the timeout timer
    try {
        while ((!timedout) && buf.hasRemaining()) {
            if (keycount > 0) { // only write if we were registered for a write
                // write directly to the socket
                int cnt = socket.write(buf); // write the data
                lastWrite.set(cnt);
                if (cnt == -1) throw new EOFException();
                written += cnt;
                // the write succeeded: reset the timer and loop to keep writing
                if (cnt > 0) {
                    time = System.currentTimeMillis(); // reset our timeout timer
                    continue; // we successfully wrote, try again without a selector
                }
            }
            // cnt == 0: the write failed, usually because of an unstable network
            try {
                // start a countdown latch
                if (att.getWriteLatch() == null || att.getWriteLatch().getCount() == 0) att.startWriteLatch(1);
                // register the socket with the auxiliary Selector; poller here is the BlockPoller thread
                poller.add(att, SelectionKey.OP_WRITE);
                // block until the timeout expires, or until the BlockPoller wakes us up earlier
                att.awaitWriteLatch(writeTimeout, TimeUnit.MILLISECONDS);
            } catch (InterruptedException ignore) {
                Thread.interrupted();
            }
            if (att.getWriteLatch() != null && att.getWriteLatch().getCount() > 0) {
                keycount = 0;
            } else {
                // woken before the timeout: the network recovered, loop and finish the write
                keycount = 1;
                att.resetWriteLatch();
            }
            if (writeTimeout > 0 && (keycount == 0))
                timedout = (System.currentTimeMillis() - time) >= writeTimeout;
        } // while
        if (timedout) throw new SocketTimeoutException();
    } finally {
        poller.remove(att, SelectionKey.OP_WRITE);
        if (timedout && key != null) {
            poller.cancelKey(socket, key);
        }
    }
    return written;
}
```

In other words, when socket.write() returns 0, the network state is unstable. The socket is then registered with the auxiliary Selector for OP_WRITE, and the BlockPoller thread keeps polling that Selector until it finds the socket writable again, at which point it uses the countdown latch to notify the Worker thread to resume writing to the socket.
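The latch handoff can be illustrated in isolation. A minimal sketch, with plain threads standing in for the Worker and the BlockPoller:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchHandoffSketch {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch writeLatch = new CountDownLatch(1);

        // stand-in for the BlockPoller: counts the latch down once OP_WRITE fires
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            writeLatch.countDown(); // "the socket is writable again"
        }).start();

        // stand-in for the Worker: block up to the write timeout, then retry or give up
        boolean wokenInTime = writeLatch.await(5000, TimeUnit.MILLISECONDS);
        System.out.println(wokenInTime ? "resume writing" : "socket write timeout");
    }
}
```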
Now look at the logic of the BlockPoller thread:
```java
public void run() {
    while (run) {
        try {
            // ......
            Iterator iterator = keyCount > 0 ? selector.selectedKeys().iterator() : null;
            while (run && iterator != null && iterator.hasNext()) {
                SelectionKey sk = (SelectionKey) iterator.next();
                KeyAttachment attachment = (KeyAttachment) sk.attachment();
                try {
                    attachment.access();
                    iterator.remove();
                    sk.interestOps(sk.interestOps() & (~sk.readyOps()));
                    if (sk.isReadable()) {
                        countDown(attachment.getReadLatch());
                    }
                    // the socket has become writable again: release the latch to resume the Worker
                    if (sk.isWritable()) {
                        countDown(attachment.getWriteLatch());
                    }
                } catch (CancelledKeyException ckx) {
                    if (sk != null) sk.cancel();
                    countDown(attachment.getReadLatch());
                    countDown(attachment.getWriteLatch());
                }
            } // while
        } catch (Throwable t) {
            log.error("", t);
        }
    }
    events.clear();
    try {
        selector.selectNow(); // cancel all remaining keys
    } catch (Exception ignore) {
        if (log.isDebugEnabled()) log.debug("", ignore);
    }
}
```

The auxiliary Selector exists mainly to reduce thread switching, and it also lightens the load on the main Selector. The above covers the main working logic of the NIO connector; the design is quite elegant. The NIO connector also has a Comet part, which will have to wait for another article.

Note that from the Acceptor onward there is a lot of object wrapping: NioChannel and its KeyAttachment, PollerEvent, and SocketProcessor objects. These are not created from scratch each time; NioEndpoint maintains an object pool for each of them:
```java
ConcurrentLinkedQueue<SocketProcessor> processorCache = new ConcurrentLinkedQueue<SocketProcessor>();
ConcurrentLinkedQueue<KeyAttachment> keyCache = new ConcurrentLinkedQueue<KeyAttachment>();
ConcurrentLinkedQueue<PollerEvent> eventCache = new ConcurrentLinkedQueue<PollerEvent>();
ConcurrentLinkedQueue<NioChannel> nioChannels = new ConcurrentLinkedQueue<NioChannel>();
```

When one of these objects is needed it is taken from the corresponding pool, and when it is no longer in use it is returned to the pool, which cuts the cost of object creation and garbage collection.
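The poll-or-create, reset-and-return pattern behind these pools is simple. A minimal sketch, using StringBuilder as a stand-in for the pooled types:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

class PoolSketch {
    private final ConcurrentLinkedQueue<StringBuilder> cache = new ConcurrentLinkedQueue<>();

    StringBuilder take() {
        StringBuilder sb = cache.poll();   // reuse a pooled instance if one exists
        return (sb != null) ? sb : new StringBuilder();
    }

    void release(StringBuilder sb) {
        sb.setLength(0);                   // reset state before pooling
        cache.offer(sb);                   // make the instance available for reuse
    }
}
```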
Summary

The NIO connector keeps the BIO connector's overall structure (Http11NioProtocol, Mapper, CoyoteAdapter) but splits socket handling across Acceptor, Poller, and Worker threads: the main Selector detects read readiness, the auxiliary Selector in the BlockPoller handles read/write timeout control, and object pools reduce allocation and GC overhead.