Live Streaming Technology (Server to Client), Part 2

Published: 2025/7/25

Playback


In the previous article we covered the environment setup for live streaming (the server side with nginx and nginx-rtmp-module, ffmpeg, and the Android and iOS builds). Starting with this article we turn to playback. Playback is a key stage of any live-streaming pipeline and involves many techniques: decoding, scaling, time-base selection, buffer queues, video rendering, audio output, and so on. I will walk through the whole playback flow in three parts per platform:

  • Android

    Part one covers video rendering based on NativeWindow: OpenGL ES 2 renders the decoded video data onto a Surface passed in from the Java side. Part two covers audio playback built on OpenSL ES. Part three covers audio/video synchronization. Throughout, we use only Android's native libraries for audio and video rendering.

  • iOS

    iOS is likewise split into three parts. Part one, video rendering: OpenGLES.framework draws the video frames via OpenGL. Part two, audio playback, is built on AudioToolbox.framework. Part three is audio/video synchronization.

Using the platform's native libraries reduces resource usage, lowers memory footprint, and improves performance. Developers who are not deeply familiar with Android and iOS often reach for a single cross-platform library (SDL) that handles both video display and audio playback. That works, but the extra library costs resources and performance.

Android

We start with playback on Android, covered in three parts: 1. video rendering; 2. audio playback; 3. the time baseline (audio/video synchronization).

1、視頻渲染


ffmpeg provides a rich set of codecs (everything ffmpeg decodes or encodes itself is done in software, not hardware; a later article will cover ffmpeg in detail). On the video side it handles flv, mpeg, mov, and more; on the audio side aac, mp3, and others. For playback, the main FFmpeg call sequence is:

```c
av_register_all();            // register all container formats and codecs; the right one is chosen automatically when a file is opened
avformat_network_init();      // initialize networking support
avformat_alloc_context();     // allocate an AVFormatContext
avformat_open_input();        // open the input stream and read the header; paired with avformat_close_input() to close the stream
avformat_find_stream_info();  // read some packets to probe stream info and fill pFormatCtx->streams correctly
avcodec_find_decoder();       // look up the decoder
avcodec_open2();              // initialize the AVCodecContext from the AVCodec
av_read_frame();              // read one packet at a time
avcodec_decode_video2();      // decode the frame data
avcodec_close();              // close the codec context
avformat_close_input();       // close the input stream
```

Let's look at a piece of code:

```c
av_register_all();
avformat_network_init();
pFormatCtx = avformat_alloc_context();
if (avformat_open_input(&pFormatCtx, pathStr, NULL, NULL) != 0) {
    LOGE("Couldn't open file: %s\n", pathStr);
    return;
}
if (avformat_find_stream_info(pFormatCtx, &dictionary) < 0) {
    LOGE("Couldn't find stream information.");
    return;
}
av_dump_format(pFormatCtx, 0, pathStr, 0);
```

This code amounts to initializing FFmpeg: it registers the codec libraries, allocates the AVFormatContext, calls avformat_open_input to open the input stream and read the header, then avformat_find_stream_info to fill in the rest of the AVFormatContext. Finally, av_dump_format dumps the stream info, which looks like this:

```
video infomation: Input #0, flv, from 'rtmp:127.0.0.1:1935/live/steam':
  Metadata:
    Server          : NGINX RTMP (github.com/sergey-dryabzhinsky/nginx-rtmp-module)
    displayWidth    : 320
    displayHeight   : 240
    fps             : 15
    profile         :
    level           :
  Duration: 00:00:00.00, start: 15.400000, bitrate: N/A
    Stream #0:0: Video: flv1 (flv), yuv420p, 320x240, 15 tbr, 1k tbn, 1k tbc
    Stream #0:1: Audio: mp3, 11025 Hz, stereo, s16p, 32 kb/s
```
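In the dump, `1k tbn` means the stream's time base is 1/1000, so a raw PTS value is in milliseconds. A pure-C sketch of that conversion (the helper names are illustrative, not FFmpeg API; FFmpeg itself exposes this as `AVStream.time_base` plus `av_q2d`):

```c
#include <stdint.h>

/* Convert a stream PTS into seconds / microseconds given the stream
 * time base tb_num/tb_den. With tbn = 1k (time base 1/1000), a PTS of
 * 15400 maps to 15.4 s, matching the start value in the dump above. */
static double pts_to_seconds(int64_t pts, int tb_num, int tb_den) {
    return (double)pts * tb_num / tb_den;
}

static int64_t pts_to_micros(int64_t pts, int tb_num, int tb_den) {
    return (int64_t)(pts_to_seconds(pts, tb_num, tb_den) * 1000000.0);
}
```

The synchronization code later in this article performs exactly this PTS-to-wall-clock conversion before comparing against elapsed play time.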

The whole audio playback flow is actually fairly simple. It breaks down into: 1. create and realize the engine; 2. create and realize the output mix; 3. configure the buffer queue and PCM format; 4. create and realize the player; 5. get the player interface; 6. get the buffer queue interface; 7. register the playback callback; 8. get the effects interface; 9. get the volume interface; 10. get the play-state interface.
After these 10 steps the audio engine is fully set up; from then on it simply reads data and plays it.
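Once the engine is running, decoded PCM has to be staged somewhere until the buffer queue asks for it. A minimal sketch of such a staging buffer, as a single-producer/single-consumer byte ring (this is a hypothetical helper around the flow above, not part of the OpenSL ES API; the capacity is an arbitrary assumption):

```c
#include <stddef.h>
#include <stdint.h>

#define PCM_RING_CAP 4096

/* Byte ring buffer: the decoder thread writes PCM in, the playback
 * callback drains it out before enqueueing into the buffer queue. */
typedef struct {
    uint8_t data[PCM_RING_CAP];
    size_t head, tail, count;   /* head = read pos, tail = write pos */
} PcmRing;

static void pcm_ring_init(PcmRing *r) { r->head = r->tail = r->count = 0; }

/* Returns bytes actually written (may be < len when the ring is full). */
static size_t pcm_ring_write(PcmRing *r, const uint8_t *src, size_t len) {
    size_t n = 0;
    while (n < len && r->count < PCM_RING_CAP) {
        r->data[r->tail] = src[n++];
        r->tail = (r->tail + 1) % PCM_RING_CAP;
        r->count++;
    }
    return n;
}

/* Returns bytes actually read; 0 means the callback has nothing to play. */
static size_t pcm_ring_read(PcmRing *r, uint8_t *dst, size_t len) {
    size_t n = 0;
    while (n < len && r->count > 0) {
        dst[n++] = r->data[r->head];
        r->head = (r->head + 1) % PCM_RING_CAP;
        r->count--;
    }
    return n;
}
```

A real implementation would add locking (or atomics) between the decoder and callback threads; it is omitted here to keep the data flow visible.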

```c
void playBuffer(void *pBuffer, int size) {
    // validate the data
    if (pBuffer == NULL || size == -1) {
        return;
    }
    LOGV("PlayBuff!");
    // enqueue the data into bqPlayerBufferQueue
    SLresult result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, pBuffer, size);
    if (result != SL_RESULT_SUCCESS)
        LOGE("Play buffer error!");
}
```

This code shows the playback path: data is enqueued into bqPlayerBufferQueue, from which the playback engine reads and plays it. Recall that when we created the buffer queue we registered a callback; its job is to signal that more data can be added to the queue. Its prototype is:

```c
void videoPlayCallBack(SLAndroidSimpleBufferQueueItf bq, void *context) {
    // feed more data into bqPlayerBufferQueue by calling playBuffer
    void *data = getData();
    int size = getDataSize();
    playBuffer(data, size);
}
```
```c
typedef struct PlayInstance {
    ANativeWindow *window;              // native window, built from the Surface passed in
    int display_width;                  // display width
    int display_height;                 // display height
    int stop;                           // stop flag
    int timeout_flag;                   // timeout flag
    int disable_video;
    VideoState *videoState;
    // queues
    struct ThreadQueue *queue;          // combined audio/video frame queue
    struct ThreadQueue *video_queue;    // video frame queue
    struct ThreadQueue *audio_queue;    // audio frame queue
} PlayInstance;
```
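The ThreadQueue internals are not shown in the article, but the usual shape for such a queue (a decode thread producing packets, render/play threads consuming them) is a mutex-protected linked list with a condition variable. A minimal sketch under those assumptions; all names here are hypothetical:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct QueueNode {
    void *pkt;                  /* would hold an AVPacket* in practice */
    struct QueueNode *next;
} QueueNode;

typedef struct {
    QueueNode *first, *last;
    int size;
    pthread_mutex_t lock;
    pthread_cond_t cond;
} ThreadQueueSketch;

static void tq_init(ThreadQueueSketch *q) {
    q->first = q->last = NULL;
    q->size = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->cond, NULL);
}

/* Producer side: append a packet and wake one waiting consumer. */
static void tq_put(ThreadQueueSketch *q, void *pkt) {
    QueueNode *n = malloc(sizeof(*n));
    n->pkt = pkt;
    n->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->last) q->last->next = n; else q->first = n;
    q->last = n;
    q->size++;
    pthread_cond_signal(&q->cond);
    pthread_mutex_unlock(&q->lock);
}

/* Consumer side: block until a packet is available, then pop it. */
static void *tq_get(ThreadQueueSketch *q) {
    pthread_mutex_lock(&q->lock);
    while (q->size == 0)
        pthread_cond_wait(&q->cond, &q->lock);
    QueueNode *n = q->first;
    q->first = n->next;
    if (!q->first) q->last = NULL;
    q->size--;
    pthread_mutex_unlock(&q->lock);
    void *pkt = n->pkt;
    free(n);
    return pkt;
}
```

A real player would also cap the queue size (blocking the producer when full) so a fast decoder cannot exhaust memory.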

Let's focus on the delay-based synchronization code:

```c
// delay-based synchronization
int64_t pkt_pts = pavpacket.pts;
double show_time = pkt_pts * (playInstance->videoState->video_time_base);
int64_t show_time_micro = show_time * 1000000;
int64_t played_time = av_gettime() - playInstance->videoState->video_start_time;
int64_t delta_time = show_time_micro - played_time;
if (delta_time < -(0.2 * 1000000)) {
    LOGE("dropping video frame\n");
    continue;
} else if (delta_time > 0.2 * 1000000) {
    av_usleep(delta_time);
}
```
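The branch above can be restated as a pure decision function: given the gap (in microseconds) between a frame's scheduled display time and the elapsed play time, either drop the frame, sleep until its time, or render it now. The 0.2 s threshold matches the snippet; the enum names are illustrative:

```c
#include <stdint.h>

enum SyncAction { SYNC_DROP, SYNC_SLEEP, SYNC_RENDER };

/* delta_us = show_time_micro - played_time from the snippet above. */
static enum SyncAction sync_decide(int64_t delta_us) {
    if (delta_us < -200000)   /* frame is more than 0.2 s late: skip it */
        return SYNC_DROP;
    if (delta_us > 200000)    /* frame is more than 0.2 s early: wait */
        return SYNC_SLEEP;
    return SYNC_RENDER;       /* close enough: show it now */
}
```

Keeping the decision separate from the side effects (continue / av_usleep) makes the tolerance window easy to tune and to unit-test.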

The enableAudio entry point is Swift code: on iOS we mix Swift, Objective-C, and C++, which is also a good opportunity to get familiar with how Swift interoperates with the other two. enableAudio creates an audioManager instance, registers the callbacks, and starts or pauses playback. audioManager is a singleton that wraps AudioToolbox. The code below activates the AudioSession (initializing audio) and deactivates it.

```objc
- (BOOL) activateAudioSession
{
    if (!_activated) {
        if (!_initialized) {
            if (checkError(AudioSessionInitialize(NULL,
                                                  kCFRunLoopDefaultMode,
                                                  sessionInterruptionListener,
                                                  (__bridge void *)(self)),
                           "Couldn't initialize audio session"))
                return NO;
            _initialized = YES;
        }
        if ([self checkAudioRoute] &&
            [self setupAudio]) {
            _activated = YES;
        }
    }
    return _activated;
}

- (void) deactivateAudioSession
{
    if (_activated) {
        [self pause];
        checkError(AudioUnitUninitialize(_audioUnit),
                   "Couldn't uninitialize the audio unit");
        /* fails with error (-10851) ?
        checkError(AudioUnitSetProperty(_audioUnit,
                                        kAudioUnitProperty_SetRenderCallback,
                                        kAudioUnitScope_Input,
                                        0, NULL, 0),
                   "Couldn't clear the render callback on the audio unit");
        */
        checkError(AudioComponentInstanceDispose(_audioUnit),
                   "Couldn't dispose the output audio unit");
        checkError(AudioSessionSetActive(NO),
                   "Couldn't deactivate the audio session");
        checkError(AudioSessionRemovePropertyListenerWithUserData(kAudioSessionProperty_AudioRouteChange,
                                                                  sessionPropertyListener,
                                                                  (__bridge void *)(self)),
                   "Couldn't remove audio session property listener");
        checkError(AudioSessionRemovePropertyListenerWithUserData(kAudioSessionProperty_CurrentHardwareOutputVolume,
                                                                  sessionPropertyListener,
                                                                  (__bridge void *)(self)),
                   "Couldn't remove audio session property listener");
        _activated = NO;
    }
}

- (BOOL) setupAudio
{
    // --- Audio Session Setup ---
    UInt32 sessionCategory = kAudioSessionCategory_MediaPlayback;
    //UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
    if (checkError(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                           sizeof(sessionCategory),
                                           &sessionCategory),
                   "Couldn't set audio category"))
        return NO;

    if (checkError(AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange,
                                                   sessionPropertyListener,
                                                   (__bridge void *)(self)),
                   "Couldn't add audio session property listener")) {
        // just warning
    }

    if (checkError(AudioSessionAddPropertyListener(kAudioSessionProperty_CurrentHardwareOutputVolume,
                                                   sessionPropertyListener,
                                                   (__bridge void *)(self)),
                   "Couldn't add audio session property listener")) {
        // just warning
    }

    // Set the buffer size; this affects how many samples get rendered each time
    // the audio callback fires. A small number gives lower-latency audio but
    // makes the processor work harder.
#if !TARGET_IPHONE_SIMULATOR
    Float32 preferredBufferSize = 0.0232;
    if (checkError(AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                                           sizeof(preferredBufferSize),
                                           &preferredBufferSize),
                   "Couldn't set the preferred buffer duration")) {
        // just warning
    }
#endif

    if (checkError(AudioSessionSetActive(YES),
                   "Couldn't activate the audio session"))
        return NO;

    [self checkSessionProperties];

    // ----- Audio Unit Setup -----
    // Describe the output unit.
    AudioComponentDescription description = {0};
    description.componentType = kAudioUnitType_Output;
    description.componentSubType = kAudioUnitSubType_RemoteIO;
    description.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent component = AudioComponentFindNext(NULL, &description);
    if (checkError(AudioComponentInstanceNew(component, &_audioUnit),
                   "Couldn't create the output audio unit"))
        return NO;

    UInt32 size;
    // Check the output stream format
    size = sizeof(AudioStreamBasicDescription);
    if (checkError(AudioUnitGetProperty(_audioUnit,
                                        kAudioUnitProperty_StreamFormat,
                                        kAudioUnitScope_Input,
                                        0,
                                        &_outputFormat,
                                        &size),
                   "Couldn't get the hardware output stream format"))
        return NO;

    _outputFormat.mSampleRate = _samplingRate;
    if (checkError(AudioUnitSetProperty(_audioUnit,
                                        kAudioUnitProperty_StreamFormat,
                                        kAudioUnitScope_Input,
                                        0,
                                        &_outputFormat,
                                        size),
                   "Couldn't set the hardware output stream format")) {
        // just warning
    }

    _numBytesPerSample = _outputFormat.mBitsPerChannel / 8;
    _numOutputChannels = _outputFormat.mChannelsPerFrame;
    LoggerAudio(2, @"Current output bytes per sample: %ld", _numBytesPerSample);
    LoggerAudio(2, @"Current output num channels: %ld", _numOutputChannels);

    // Slap a render callback on the unit
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = renderCallback;   // register the callback that pulls audio data
    callbackStruct.inputProcRefCon = (__bridge void *)(self);
    if (checkError(AudioUnitSetProperty(_audioUnit,
                                        kAudioUnitProperty_SetRenderCallback,
                                        kAudioUnitScope_Input,
                                        0,
                                        &callbackStruct,
                                        sizeof(callbackStruct)),
                   "Couldn't set the render callback on the audio unit"))
        return NO;

    if (checkError(AudioUnitInitialize(_audioUnit),
                   "Couldn't initialize the audio unit"))
        return NO;

    return YES;
}
```

Summary


This article walked through the playback logic built on ffmpeg, split between Android and iOS with platform-specific handling on each side. On Android, video is rendered through NativeWindow (Surface) and audio is played with OpenSL ES; synchronization is driven by an external time baseline against which both audio and video are adjusted. On iOS, video is rendered with OpenGL and audio is played with AudioToolbox, with the same synchronization scheme as Android. The ffmpeg logic is identical on both platforms. The iOS section did not go into how to use OpenGL itself for rendering; interested readers can explore that on their own.
Note: the full project will be published once the code is cleaned up.
Finally, two screenshots of the player in action:

