JLBH Examples 3 – The Effects of Throughput on Latency
In this post:
- A discussion of the effects of throughput on latency
- How to use JLBH to measure TCP loopback
- Adding probes to test both halves of the TCP round trip
- Watching the effect of increasing throughput on latency
- Understanding that you have to drop throughput to achieve good latencies at high percentiles.
In the last post we saw the effect of accounting for coordinated omission, i.e. of measuring the effect that a delay to one iteration has on subsequent iterations.
Intuitively we understand that throughput affects latency: it seems natural that as we raise throughput we will also raise latency.
Going into a very crowded shop will affect how fast you can select and purchase your goods. On the other hand, consider a shop that is very rarely visited. It could be that in such a shop the shopkeeper is away from the till on a tea break, and your purchase is delayed while you wait for him to put down his cup of tea and make his way to the counter to serve you.
This is exactly what you find when running benchmarks and varying throughput.
In general, latency increases as you raise throughput, but at some point, when throughput drops below a certain threshold, latency can increase as well.
The code below times round trip TCP calls over loopback.
We add two probes:
- client2server – the time taken to complete the first half of the round trip
- server2client – the time taken to complete the second half of the round trip

These probes do not take coordinated omission into account; only the end-to-end time is corrected for coordinated omission.
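Before the full benchmark, here is a stripped-down sketch of how a probe is wired up, assuming the same JLBH API used in the listing below (`addProbe`, `sampleNanos`, `sample`); the class and probe names are purely illustrative and the "work" is a dummy calculation:

```java
package org.latency.sketch;

import net.openhft.chronicle.core.jlbh.JLBH;
import net.openhft.chronicle.core.jlbh.JLBHOptions;
import net.openhft.chronicle.core.jlbh.JLBHTask;
import net.openhft.chronicle.core.util.NanoSampler;

public class ProbeSketch implements JLBHTask {
    private JLBH jlbh;
    private NanoSampler innerProbe;
    private double blackhole; // keeps the JIT from eliminating the dummy work

    public static void main(String[] args) {
        JLBHOptions options = new JLBHOptions()
                .warmUpIterations(10000)
                .iterations(10000)
                .throughput(20000)
                .runs(3)
                .jlbhTask(new ProbeSketch());
        new JLBH(options).start();
    }

    @Override
    public void init(JLBH jlbh) {
        this.jlbh = jlbh;
        innerProbe = jlbh.addProbe("innerStep"); // extra probe, reported alongside the end-to-end numbers
    }

    @Override
    public void run(long startTimeNs) {
        long before = System.nanoTime();
        blackhole += Math.sqrt(before);                      // stand-in for the step being timed in isolation
        innerProbe.sampleNanos(System.nanoTime() - before);  // probe sample: not corrected for coordinated omission
        jlbh.sample(System.nanoTime() - startTimeNs);        // end-to-end sample: corrected for coordinated omission
    }

    @Override
    public void complete() {
        System.exit(0);
    }
}
```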
Here is the code for the benchmark:
```java
package org.latency.tcp;

import net.openhft.affinity.Affinity;
import net.openhft.chronicle.core.Jvm;
import net.openhft.chronicle.core.jlbh.JLBHOptions;
import net.openhft.chronicle.core.jlbh.JLBHTask;
import net.openhft.chronicle.core.jlbh.JLBH;
import net.openhft.chronicle.core.util.NanoSampler;

import java.io.EOFException;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class TcpBenchmark implements JLBHTask {
    private final static int port = 8007;
    private static final boolean BLOCKING = false;
    private final int SERVER_CPU = Integer.getInteger("server.cpu", 0);
    private JLBH jlbh;
    private ByteBuffer bb;
    private SocketChannel socket;
    private NanoSampler client2serverProbe;
    private NanoSampler server2clientProbe;

    public static void main(String[] args) {
        JLBHOptions jlbhOptions = new JLBHOptions()
                .warmUpIterations(50000)
                .iterations(50000)
                .throughput(20000)
                .runs(5)
                .jlbhTask(new TcpBenchmark());
        new JLBH(jlbhOptions).start();
    }

    @Override
    public void init(JLBH jlbh) {
        this.jlbh = jlbh;
        client2serverProbe = jlbh.addProbe("client2server");
        server2clientProbe = jlbh.addProbe("server2clientProbe");
        try {
            runServer(port);
            Jvm.pause(200);

            socket = SocketChannel.open(new InetSocketAddress(port));
            socket.socket().setTcpNoDelay(true);
            socket.configureBlocking(BLOCKING);
        } catch (IOException e) {
            e.printStackTrace();
        }
        bb = ByteBuffer.allocateDirect(8).order(ByteOrder.nativeOrder());
    }

    private void runServer(int port) throws IOException {
        new Thread(() -> {
            if (SERVER_CPU > 0) {
                System.out.println("server cpu: " + SERVER_CPU);
                Affinity.setAffinity(SERVER_CPU);
            }
            ServerSocketChannel ssc = null;
            SocketChannel socket = null;
            try {
                ssc = ServerSocketChannel.open();
                ssc.bind(new InetSocketAddress(port));
                System.out.println("listening on " + ssc);

                socket = ssc.accept();
                socket.socket().setTcpNoDelay(true);
                socket.configureBlocking(BLOCKING);
                System.out.println("Connected " + socket);

                ByteBuffer bb = ByteBuffer.allocateDirect(8).order(ByteOrder.nativeOrder());
                while (true) {
                    readAll(socket, bb);

                    bb.flip();

                    long time = System.nanoTime();
                    client2serverProbe.sampleNanos(time - bb.getLong());

                    bb.clear();
                    bb.putLong(time);
                    bb.flip();

                    writeAll(socket, bb);
                }
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                System.out.println("... disconnected " + socket);
                try {
                    if (ssc != null)
                        ssc.close();
                } catch (IOException ignored) {
                }
                try {
                    if (socket != null)
                        socket.close();
                } catch (IOException ignored) {
                }
            }
        }, "server").start();
    }

    private static void readAll(SocketChannel socket, ByteBuffer bb) throws IOException {
        bb.clear();
        do {
            if (socket.read(bb) < 0)
                throw new EOFException();
        } while (bb.remaining() > 0);
    }

    @Override
    public void run(long startTimeNs) {
        bb.position(0);
        bb.putLong(System.nanoTime());
        bb.position(0);
        writeAll(socket, bb);

        bb.position(0);
        try {
            readAll(socket, bb);
            server2clientProbe.sampleNanos(System.nanoTime() - bb.getLong(0));
        } catch (IOException e) {
            e.printStackTrace();
        }

        jlbh.sample(System.nanoTime() - startTimeNs);
    }

    private static void writeAll(SocketChannel socket, ByteBuffer bb) {
        try {
            while (bb.remaining() > 0 && socket.write(bb) >= 0) ;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void complete() {
        System.exit(0);
    }
}
```

And here are the results when run at a throughput of 20,000 iterations per second:
```
Warm up complete (50000 iterations took 2.296s)
-------------------------------- BENCHMARK RESULTS (RUN 1) ---------
Run time: 2.5s
Correcting for co-ordinated:true
Target throughput:20000/s = 1 message every 50us
End to End: (50,000)          50/90 99/99.9 99.99 - worst was 34 / 2,950  19,400 / 20,450  20,450 - 20,450
client2server (50,000)        50/90 99/99.9 99.99 - worst was 16 / 26  38 / 72  287 - 336
server2clientProbe (50,000)   50/90 99/99.9 99.99 - worst was 16 / 27  40 / 76  319 - 901
OS Jitter (26,960)            50/90 99/99.9 99.99 - worst was 9.0 / 16  44 / 1,340  10,220 - 11,800
--------------------------------------------------------------------
-------------------------------- BENCHMARK RESULTS (RUN 2) ---------
Run time: 2.5s
Correcting for co-ordinated:true
Target throughput:20000/s = 1 message every 50us
End to End: (50,000)          50/90 99/99.9 99.99 - worst was 42 / 868  4,590 / 5,110  5,370 - 5,370
client2server (50,000)        50/90 99/99.9 99.99 - worst was 20 / 27  38 / 92  573 - 2,560
server2clientProbe (50,000)   50/90 99/99.9 99.99 - worst was 19 / 27  38 / 72  868 - 1,740
OS Jitter (13,314)            50/90 99/99.9 99.99 - worst was 9.0 / 16  32 / 96  303 - 672
--------------------------------------------------------------------
-------------------------------- BENCHMARK RESULTS (RUN 3) ---------
Run time: 2.5s
Correcting for co-ordinated:true
Target throughput:20000/s = 1 message every 50us
End to End: (50,000)          50/90 99/99.9 99.99 - worst was 34 / 152  999 / 2,160  2,290 - 2,290
client2server (50,000)        50/90 99/99.9 99.99 - worst was 17 / 26  36 / 54  201 - 901
server2clientProbe (50,000)   50/90 99/99.9 99.99 - worst was 16 / 25  36 / 50  225 - 1,740
OS Jitter (14,306)            50/90 99/99.9 99.99 - worst was 9.0 / 15  23 / 44  160 - 184
--------------------------------------------------------------------
-------------------------------- SUMMARY (end to end)---------------
Percentile   run1         run2         run3      % Variation   var(log)
50:             33.79        41.98        33.79        13.91
90:           2949.12       868.35       151.55        75.92
99:          19398.66      4587.52       999.42        70.53
99.9:        20447.23      5111.81      2162.69        47.62
99.99:       20447.23      5373.95      2293.76        47.24
worst:       20447.23      5373.95      2293.76        47.24
--------------------------------------------------------------------
-------------------------------- SUMMARY (client2server)------------
Percentile   run1         run2         run3      % Variation
50:             16.13        19.97        16.90        10.81
90:             26.11        27.14        26.11         2.55
99:             37.89        37.89        35.84         3.67
99.9:           71.68        92.16        54.27        31.76
99.99:         286.72       573.44       200.70        55.32
worst:         335.87      2555.90       901.12        55.04
--------------------------------------------------------------------
-------------------------------- SUMMARY (server2clientProbe)-------
Percentile   run1         run2         run3      % Variation
50:             16.13        18.94        16.13        10.43
90:             27.14        27.14        25.09         5.16
99:             39.94        37.89        35.84         3.67
99.9:           75.78        71.68        50.18        22.22
99.99:         319.49       868.35       225.28        65.55
worst:         901.12      1736.70      1736.70         0.00
--------------------------------------------------------------------
```

What should happen is this:
client2server + server2client ~= endToEnd

And that is more or less what happens at the 50th percentile.

Taking the second run for the purposes of this demonstration:

19.97 + 18.94 ~= 41.98
If that was all you were measuring, you might say there would be no problem running 20k messages/second through my machine.

However, my laptop clearly can't handle this volume, and if we look again at the second run, this time at the 90th percentile:

27.14 + 27.14 !~= 868.35

And it only gets worse and worse as you move up the percentiles...
If, however, I change the throughput to 5k messages per second, this is what I see at the 90th percentile:

32.23 + 29.38 ~= 62.46

So we see that if you want to achieve low latencies at the high percentiles, you have to drop the throughput to the right level.

This is why it is so important that JLBH lets us vary the throughput.
Translated from: https://www.javacodegeeks.com/2016/04/jlbh-examples-3-affects-throughput-latency.html