
Which one is faster: Java heap or native memory?

11-29-2012 by Sergio Oliveira Jr.

One of the advantages of the Java language is that you do not need to deal with memory allocation and deallocation. Whenever you instantiate an object with the new keyword, the necessary memory is allocated in the JVM heap. The heap is then managed by the garbage collector, which reclaims the memory after the object goes out of scope. However, there is a backdoor to reach off-heap native memory from the JVM. In this article I am going to show how an object can be stored in memory as a sequence of bytes and how you can choose between storing these bytes in heap memory or in direct (i.e. native) memory. Then I will try to conclude which one is faster to access from the JVM: heap memory or direct memory.

Allocating and Deallocating with Unsafe

The sun.misc.Unsafe class allows you to allocate and deallocate native memory from Java as if you were calling malloc and free from C. The memory you allocate this way lives off the heap and is not managed by the garbage collector, so it becomes your responsibility to deallocate it after you are done with it. Here is my Direct utility class that gains access to the Unsafe class.

import java.lang.reflect.Field;

import sun.misc.Unsafe;

public class Direct implements Memory {

    private static Unsafe unsafe;
    private static boolean AVAILABLE = false;

    static {
        try {
            Field field = Unsafe.class.getDeclaredField("theUnsafe");
            field.setAccessible(true);
            unsafe = (Unsafe) field.get(null);
            AVAILABLE = true;
        } catch (Exception e) {
            // NOOP: throw exception later when allocating memory
        }
    }

    public static boolean isAvailable() {
        return AVAILABLE;
    }

    private static Direct INSTANCE = null;

    public static Memory getInstance() {
        if (INSTANCE == null) {
            INSTANCE = new Direct();
        }
        return INSTANCE;
    }

    private Direct() {
    }

    @Override
    public long alloc(long size) {
        if (!AVAILABLE) {
            throw new IllegalStateException("sun.misc.Unsafe is not accessible!");
        }
        return unsafe.allocateMemory(size);
    }

    @Override
    public void free(long address) {
        unsafe.freeMemory(address);
    }

    @Override
    public final long getLong(long address) {
        return unsafe.getLong(address);
    }

    @Override
    public final void putLong(long address, long value) {
        unsafe.putLong(address, value);
    }

    @Override
    public final int getInt(long address) {
        return unsafe.getInt(address);
    }

    @Override
    public final void putInt(long address, int value) {
        unsafe.putInt(address, value);
    }
}
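The Memory interface itself is not reproduced in this repost. A minimal sketch consistent with the methods Direct implements (the exact declaration in the original code may differ) would be:

// Assumed shape of the Memory interface referenced above (not shown in this
// repost); it simply mirrors the methods the Direct class implements.
public interface Memory {

    // allocates size bytes and returns the base address of the block
    long alloc(long size);

    // releases a block previously returned by alloc
    void free(long address);

    long getLong(long address);
    void putLong(long address, long value);

    int  getInt(long address);
    void putInt(long address, int value);
}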

Placing an object in native memory

Let’s move the following Java object to native memory:

public class SomeObject {

    private long someLong;
    private int someInt;

    public long getSomeLong() {
        return someLong;
    }

    public void setSomeLong(long someLong) {
        this.someLong = someLong;
    }

    public int getSomeInt() {
        return someInt;
    }

    public void setSomeInt(int someInt) {
        this.someInt = someInt;
    }
}

Note that all we are doing below is saving its properties in the Memory:

public class SomeMemoryObject {

    private final static int someLong_OFFSET = 0;
    private final static int someInt_OFFSET = 8;
    private final static int SIZE = 8 + 4; // one long + one int

    private long address;
    private final Memory memory;

    public SomeMemoryObject(Memory memory) {
        this.memory = memory;
        this.address = memory.alloc(SIZE);
    }

    @Override
    public void finalize() {
        memory.free(address);
    }

    public final void setSomeLong(long someLong) {
        memory.putLong(address + someLong_OFFSET, someLong);
    }

    public final long getSomeLong() {
        return memory.getLong(address + someLong_OFFSET);
    }

    public final void setSomeInt(int someInt) {
        memory.putInt(address + someInt_OFFSET, someInt);
    }

    public final int getSomeInt() {
        return memory.getInt(address + someInt_OFFSET);
    }
}
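As a quick illustration of how the pieces above fit together, here is a hypothetical usage sketch (the Example class and its values are not from the original article; note that on modern JDKs reflective access to sun.misc.Unsafe may be restricted):

// Hypothetical usage of the classes above: write two fields to native
// memory through SomeMemoryObject and read them back.
public class Example {

    public static void main(String[] args) {
        Memory memory = Direct.getInstance();

        SomeMemoryObject obj = new SomeMemoryObject(memory);
        obj.setSomeLong(123456789L);
        obj.setSomeInt(42);

        System.out.println(obj.getSomeLong()); // prints 123456789
        System.out.println(obj.getSomeInt());  // prints 42

        // the native block is released when finalize() runs; a real
        // application would more likely free it explicitly
    }
}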

Now let’s benchmark read/write access for two arrays: one with millions of SomeObjects and another one with millions of SomeMemoryObjects. The code can be seen here and the results are below:

// with JIT:
Number of Objects:   1,000   1,000,000   10,000,000   60,000,000
Heap Avg Write:        107        2.30         2.51         2.58
Native Avg Write:      305        6.65         5.94         5.26
Heap Avg Read:          61        0.31         0.28         0.28
Native Avg Read:       309        3.50         2.96         2.16

// without JIT: (-Xint)
Number of Objects:   1,000   1,000,000   10,000,000   60,000,000
Heap Avg Write:        104         107          105          102
Native Avg Write:      292         293          300          297
Heap Avg Read:          59          63           60           58
Native Avg Read:       297         298          302          299

Conclusion: Crossing the JVM barrier to reach native memory is approximately 10 times slower for reads and 2 times slower for writes. But notice that each SomeMemoryObject allocates its own native memory space, so the reads and writes are not contiguous; in other words, each direct memory object reads from and writes to its own allocated memory block, which can be located anywhere. Let’s benchmark read/write access to contiguous direct and heap memory to try to determine which one is faster.
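As an aside, one common way to avoid that scattering (not part of the original benchmarks, just an illustration) is to allocate a single contiguous native block and address each record by an offset into it:

// Illustrative only: a pool that packs many SomeMemoryObject-like records
// into one contiguous native block, so record i lives at base + i * SIZE.
public class ContiguousPool {

    private static final int SIZE = 8 + 4; // one long + one int per record

    private final Memory memory;
    private final long base;

    public ContiguousPool(Memory memory, int count) {
        this.memory = memory;
        this.base = memory.alloc((long) count * SIZE);
    }

    public void setSomeLong(int index, long value) {
        memory.putLong(base + (long) index * SIZE, value);
    }

    public long getSomeLong(int index) {
        return memory.getLong(base + (long) index * SIZE);
    }

    public void destroy() {
        memory.free(base);
    }
}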

Accessing large chunks of contiguous memory

The test consists of allocating a byte array in the heap and a corresponding chunk of native memory to hold the same amount of data. Then we sequentially write and read a couple of times to measure which one is faster. We also test random access to arbitrary locations of the array and compare the results. The sequential test can be seen here. The random one can be seen here. The results:

// with JIT and sequential access:
Number of Objects:   1,000   1,000,000   1,000,000,000
Heap Avg Write:         12        0.34            0.35
Native Avg Write:      102        0.71            0.69
Heap Avg Read:          12        0.29            0.28
Native Avg Read:       110        0.32            0.32

// without JIT and sequential access: (-Xint)
Number of Objects:   1,000   1,000,000   10,000,000
Heap Avg Write:          8           8            8
Native Avg Write:       91          92           94
Heap Avg Read:          10          10           10
Native Avg Read:        91          90           94

// with JIT and random access:
Number of Objects:   1,000   1,000,000   1,000,000,000
Heap Avg Write:         61        1.01            1.12
Native Avg Write:      151        0.89            0.90
Heap Avg Read:          59        0.89            0.92
Native Avg Read:       156        0.78            0.84

// without JIT and random access: (-Xint)
Number of Objects:   1,000   1,000,000   10,000,000
Heap Avg Write:         55          55           55
Native Avg Write:      141         142          140
Heap Avg Read:          55          55           55
Native Avg Read:       138         140          138
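The benchmark sources are only linked, not reproduced, in this repost. A minimal sketch of the kind of sequential loop being compared (hypothetical sizes and naive nanoTime timing, not the author's original harness) could look like this:

// Hypothetical sketch of a sequential-access comparison: a heap-backed
// long[] versus one contiguous native block written through Unsafe.
import java.lang.reflect.Field;

import sun.misc.Unsafe;

public class SequentialSketch {

    public static void main(String[] args) throws Exception {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        final int longs = 1_000_000;                 // assumed size
        final long bytes = (long) longs * 8;

        long[] heap = new long[longs];               // heap-backed storage
        long address = unsafe.allocateMemory(bytes); // contiguous native block

        long t0 = System.nanoTime();
        for (int i = 0; i < longs; i++) {
            heap[i] = i;                             // sequential heap writes
        }
        long heapWrite = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        for (int i = 0; i < longs; i++) {
            unsafe.putLong(address + (long) i * 8, i); // sequential native writes
        }
        long nativeWrite = System.nanoTime() - t1;

        System.out.println("heap write (ns):   " + heapWrite);
        System.out.println("native write (ns): " + nativeWrite);

        unsafe.freeMemory(address);
    }
}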

Conclusion: Heap memory is always faster than direct memory for sequential access. For random access, heap memory is a little bit slower for big chunks of data, but not much.

Final Conclusion

Working with native memory from Java has its uses, such as when you need to work with large amounts of data (> 2 gigabytes) or when you want to escape from the garbage collector [1]. However, in terms of latency, direct memory access from the JVM is not faster than accessing the heap, as demonstrated above. The results make sense, since crossing the JVM barrier must have a cost. It is the same dilemma as choosing between a direct and a heap ByteBuffer: the advantage of the direct ByteBuffer is not access speed but the ability to talk directly to the operating system's native I/O operations. Another great example, discussed by Peter Lawrey, is the use of memory-mapped files when working with time series.
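For completeness, the heap vs. direct ByteBuffer choice mentioned above needs no Unsafe at all; a minimal sketch using the public java.nio API:

// Minimal sketch of the heap vs. direct ByteBuffer choice.
import java.nio.ByteBuffer;

public class BufferSketch {

    public static void main(String[] args) {
        ByteBuffer heapBuf = ByteBuffer.allocate(1024);         // backed by a byte[] on the heap
        ByteBuffer directBuf = ByteBuffer.allocateDirect(1024); // backed by native memory

        directBuf.putLong(0, 123L);
        System.out.println(directBuf.getLong(0)); // 123

        // A direct buffer can be handed to channels (files, sockets) without an
        // extra copy into native memory, which is where its real advantage lies.
        System.out.println(heapBuf.isDirect() + " / " + directBuf.isDirect()); // false / true
    }
}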

Reposted from: https://my.oschina.net/fourthmoon/blog/116146
