

Summary of errors when submitting Spark code from a local IDEA to a remote cluster (Part 2)


Once the code could be submitted to the Spark cluster and run, the following error appeared:

Exception in thread "main" java.lang.OutOfMemoryError: PermGen space
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at scala.collection.SeqViewLike$AbstractTransformed.<init>(SeqViewLike.scala:43)
	at scala.collection.SeqViewLike$$anon$4.<init>(SeqViewLike.scala:79)
	at scala.collection.SeqViewLike$class.newFlatMapped(SeqViewLike.scala:79)
	at scala.collection.SeqLike$$anon$2.newFlatMapped(SeqLike.scala:635)
	at scala.collection.SeqLike$$anon$2.newFlatMapped(SeqLike.scala:635)
	at scala.collection.TraversableViewLike$class.flatMap(TraversableViewLike.scala:160)
	at scala.collection.SeqLike$$anon$2.flatMap(SeqLike.scala:635)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:58)
	at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:48)
	at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:46)
	at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:53)
	at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:53)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:56)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:56)
	at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:153)
	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
	at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:829)
	at p.JavaSparkPi.main(JavaSparkPi.java:30)
Exception in thread "Thread-3" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-30" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-33" java.lang.OutOfMemoryError: PermGen space
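This PermGen error is raised in the driver JVM, which in this setup is the IDEA process itself. A minimal sketch of how the PermGen region is usually enlarged, assuming JDK 7 (PermGen no longer exists in JDK 8+); the 512m value is illustrative, not from the original post:

import org.apache.spark.SparkConf;

// Driver side: because the driver is the already-running IDEA JVM, add
// -XX:MaxPermSize=512m to the run configuration's VM options;
// spark.driver.extraJavaOptions cannot affect a JVM that has already started.
// Executor side: the same option can be passed through the Spark configuration.
SparkConf conf = new SparkConf()
        .setAppName("JavaSparkPi")
        .set("spark.executor.extraJavaOptions", "-XX:MaxPermSize=512m");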


Besides the problem above, the following error can also show up. The first reaction on seeing it is that memory has overflowed:

Job aborted due to stage failure: Total size of serialized results of 34 tasks (1033.9 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)

Exception in thread "main"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main"
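If the goal is only to get past the maxResultSize error quoted above, the driver-side limits can be raised. A minimal sketch with illustrative values (as argued further down, this only treats the symptom):

import org.apache.spark.SparkConf;

SparkConf conf = new SparkConf()
        .setAppName("JavaSparkPi")
        // Raise the 1g default that the "bigger than spark.driver.maxResultSize" message refers to.
        .set("spark.driver.maxResultSize", "2g");
// spark.driver.memory only takes effect before the driver JVM starts; when the driver
// is the IDEA process itself, raise the run configuration's -Xmx instead.

On the cluster side, the ApplicationMaster log from the same run showed the driver connection dropping and the application shutting down: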

2018-12-19 10:42:51,599 WARN  [shuffle-client-0] server.TransportChannelHandler : Exception in connection from /10.8.30.108:50610
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
	at java.lang.Thread.run(Thread.java:745)
2018-12-19 10:42:51,610 INFO  [dispatcher-event-loop-1] yarn.ApplicationMaster$AMEndpoint : Driver terminated or disconnected! Shutting down. tc-20024:50610
2018-12-19 10:42:51,614 INFO  [dispatcher-event-loop-1] yarn.ApplicationMaster : Final app status: SUCCEEDED, exitCode: 0
2018-12-19 10:42:51,623 INFO  [Thread-3] yarn.ApplicationMaster : Unregistering ApplicationMaster with SUCCEEDED
2018-12-19 10:42:51,637 INFO  [Thread-3] impl.AMRMClientImpl : Waiting for application to be successfully unregistered.
2018-12-19 10:42:51,743 INFO  [Thread-3] yarn.ApplicationMaster : Deleting staging directory .sparkStaging/application_1545188975663_0002
2018-12-19 10:42:51,745 INFO  [Thread-3] util.ShutdownHookManager : Shutdown hook called

All of these symptoms point to the program running out of memory. Why does it run out of memory? Because when we call collect on the result set, the entire result is gathered as one large collection on the driver side, which in this case is our IDEA process. If that client JVM does not have enough memory, an OutOfMemoryError follows.

You can increase the memory, but that only treats the symptom, not the root cause: you never know how much data later runs will produce, so it is hard to decide how much driver memory is appropriate. It is better to relieve the pressure on the driver as much as possible when processing data, for example by using foreachPartition so that everything is handled on the executor side, as sketched below.
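A minimal sketch of the foreachPartition route in the Java API (Spark 1.6-era, matching the DataFrame/SQLContext in the stack trace above; the SQL text and the per-partition sink are illustrative, not from the original post):

import java.util.Iterator;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;

DataFrame df = sqlContext.sql("SELECT * FROM some_table");   // illustrative query

// Instead of df.collect(), which drags every Row back into the driver (the IDEA JVM),
// let each executor walk its own partition:
df.javaRDD().foreachPartition(new VoidFunction<Iterator<Row>>() {
    @Override
    public void call(Iterator<Row> rows) throws Exception {
        // Open one connection/writer per partition here (JDBC, Kafka, a file, ...).
        while (rows.hasNext()) {
            Row row = rows.next();
            System.out.println(row);   // goes to the executor's stdout log, not the IDEA console
        }
        // Close the per-partition resource here.
    }
});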

Refer to this article for how to do it: https://segmentfault.com/a/1190000005365244?utm_source=tag-newest


Note that all of the rows are printed on the executor side, not in our console.


Reposted from: https://www.cnblogs.com/gxgd/p/10179052.html
