
Notes on Understanding GPUImage's Filter Processing Chain


The chained texture-processing structure of GPUImage filters

The two most important pieces:

  1. The single most important class, GPUImageOutput: the parent class of all filters. Other classes inherit from it as well, such as GPUImageUIElement, which converts UIKit elements into GL ES textures via Core Graphics.
  2. The protocol (interface) GPUImageInput.

A filter that inherits from GPUImageOutput and conforms to GPUImageInput can, after it finishes processing, hand its output to the next filter as input.

@protocol GPUImageInput
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
- (void)setInputFramebuffer:(GPUImageFramebuffer *)newInputFramebuffer atIndex:(NSInteger)textureIndex;
- (NSInteger)nextAvailableTextureIndex;
- (void)setInputSize:(CGSize)newSize atIndex:(NSInteger)textureIndex;
- (void)setInputRotation:(GPUImageRotationMode)newInputRotation atIndex:(NSInteger)textureIndex;
- (CGSize)maximumOutputSize;
- (void)endProcessing;
- (BOOL)shouldIgnoreUpdatesToThisTarget;
- (BOOL)enabled;
- (BOOL)wantsMonochromeInput;
- (void)setCurrentlyReceivingMonochromeInput:(BOOL)newValue;
@end

GPUImageFramebuffer

A wrapper class around an OpenGL framebuffer. Based on onlyGenerateTexture, it either generates only a texture or a full framebuffer (the logic below is from - (void)generateFramebuffer;):

  1. Texture-only generation is typical of GPUImageUIElement, GPUImageVideoCamera, and similar sources.
  2. When a full framebuffer is generated, it checks whether fast texture upload is supported (in effect, whether CVOpenGLESTextureCacheCreate is available).

If fast texture upload is supported, CVPixelBufferCreate creates the renderTarget, CVOpenGLESTextureCacheCreateTextureFromImage creates the renderTexture from the renderTarget (as the sourceImage), and finally glFramebufferTexture2D binds the framebuffer and the renderTexture together so the framebuffer renders into the texture. (Note: a framebuffer can also be bound to a renderbuffer, often called a colorbuffer, in which case the renderbuffer is displayed directly on a CALayer; binding to a texture is usually used for intermediate results.)

If it is not supported: first generate the texture, then bind it, upload the data to the GPU with glTexImage2D, and finally bind the framebuffer and the texture together with glFramebufferTexture2D.
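Both paths are condensed in the sketch below, modeled on GPUImageFramebuffer's -generateFramebuffer (texture-option plumbing and error handling are omitted, so treat the details as approximate):

glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

if ([GPUImageContext supportsFastTextureUpload])
{
    // Fast path: back the texture with an IOSurface-based CVPixelBuffer so CPU and GPU share memory.
    CVOpenGLESTextureCacheRef coreVideoTextureCache = [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache];
    CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

    CVPixelBufferCreate(kCFAllocatorDefault, (int)_size.width, (int)_size.height,
                        kCVPixelFormatType_32BGRA, attrs, &renderTarget);
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache,
                                                 renderTarget, NULL, GL_TEXTURE_2D, GL_RGBA,
                                                 (int)_size.width, (int)_size.height,
                                                 GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);
    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                           CVOpenGLESTextureGetName(renderTexture), 0);
    CFRelease(attrs);
    CFRelease(empty);
}
else
{
    // Slow path: a plain GL texture whose storage is allocated with glTexImage2D.
    [self generateTexture]; // glGenTextures plus filtering/wrap options
    glBindTexture(GL_TEXTURE_2D, _texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)_size.width, (int)_size.height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _texture, 0);
}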

The method - (CGImageRef)newCGImageFromFramebufferContents; in GPUImageFramebuffer reads the image data back out of the framebuffer to produce a CGImageRef:

CGDataProviderRef dataProvider = NULL;
if ([GPUImageContext supportsFastTextureUpload])
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
    NSUInteger paddedWidthOfImage = CVPixelBufferGetBytesPerRow(renderTarget) / 4.0; // the byte-aligned row width may be larger than the image width
    NSUInteger paddedBytesForImage = paddedWidthOfImage * (int)_size.height * 4;

    glFinish(); // blocking call: forces all previously issued GL commands to complete on the GPU
    CFRetain(renderTarget); // retained here and released in the data-provider callback to prevent a dangling pointer
    // "I need to retain the pixel buffer here and release in the data source callback to prevent
    // its bytes from being prematurely deallocated during a photo write operation"
    [self lockForReading];
    rawImagePixels = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
    dataProvider = CGDataProviderCreateWithData((__bridge_retained void*)self, rawImagePixels, paddedBytesForImage, dataProviderUnlockCallback);

    // The global framebuffer cache keeps a strong reference to self, so the framebuffer
    // survives being swapped out on the filter while the image is in existence.
    [[GPUImageContext sharedFramebufferCache] addFramebufferToActiveImageCaptureList:self];
#else
#endif
}
else
{
    [self activateFramebuffer];
    rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
    glReadPixels(0, 0, (int)_size.width, (int)_size.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels); // blocking call: reads the raw image data straight out of the framebuffer
    dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
    [self unlock]; // don't need to keep this around anymore
}

Finally, CGImageCreate produces the image, which is returned.
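For reference, a hedged sketch of that final step (mirroring the shape of the fast-upload branch in newCGImageFromFramebufferContents; the bitmap-info flags may differ in other configurations):

CGColorSpaceRef defaultRGBColorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImageFromBytes = CGImageCreate((int)_size.width, (int)_size.height, 8, 32,
                                            CVPixelBufferGetBytesPerRow(renderTarget),
                                            defaultRGBColorSpace,
                                            kCGBitmapByteOrderDefault | kCGImageAlphaLast,
                                            dataProvider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider); // the data provider now owns the pixel bytes
CGColorSpaceRelease(defaultRGBColorSpace);
return cgImageFromBytes;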

GPUImageOutput

The class comment (below) already makes this clear: video capture, photo taking, and so on all use GPUImageOutput as their base class, following the same pattern: a source (video or still image) uploads image frames to OpenGL ES as textures, and those textures become the input of the next filter, forming a chained structure for processing textures.

/** GPUImage"s base source object

Images or frames of video are uploaded from source objects, which are subclasses of GPUImageOutput. These include:

- GPUImageVideoCamera (for live video from an iOS camera)
- GPUImageStillCamera (for taking photos with the camera)
- GPUImagePicture (for still images)
- GPUImageMovie (for movies)

Source objects upload still image frames to OpenGL ES as textures, then hand those textures off to the next objects in the processing chain.
*/

Utility functions in this class, such as runSynchronouslyOnContextQueue, use dispatch_get_specific to avoid deadlocks; note that dispatch_get_current_queue should not be used (it is deprecated).
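A minimal sketch of that pattern, close to GPUImage's runSynchronouslyOnVideoProcessingQueue (the real code also has an OS_OBJECT_USE_OBJC branch for older SDKs):

static void runSynchronouslyOnVideoProcessingQueue(void (^block)(void))
{
    dispatch_queue_t videoProcessingQueue = [GPUImageContext sharedContextQueue];
    // The context queue was tagged with dispatch_queue_set_specific at creation;
    // if we are already on it, run the block inline, because dispatch_sync onto
    // the current queue would deadlock.
    if (dispatch_get_specific([GPUImageContext contextKey]))
    {
        block();
    }
    else
    {
        dispatch_sync(videoProcessingQueue, block);
    }
}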

By reading through the roles of three typical classes, (GPUImagePicture) source -> (GPUImageFilter) filter -> (GPUImageView) output, we can see how the processing chain is formed; other pipelines exist as well. A minimal wiring example follows.
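In this sketch the filter choice and asset name are illustrative:

// source -> filter -> output, displayed on screen
UIImage *inputImage = [UIImage imageNamed:@"sample.jpg"]; // hypothetical asset
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];
GPUImageView *outputView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:outputView];

[source addTarget:filter];     // picture feeds the filter
[filter addTarget:outputView]; // filter feeds the view
[source processImage];         // upload the texture and push it down the chain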

  • GPUImagePicture

GPUImagePicture inherits only from GPUImageOutput; it is dedicated to reading input data, uploading it to the GPU, and handing it to the next GPUImageFilter in the chain.

      • Initialization, initWithCGImage: reads the CGImageRef data and decides whether Core Graphics is needed to process the image data. If CG is needed, the fixed pattern (which implicitly decompresses the image) is CGBitmapContextCreate --> CGContextDrawImage; if not, the raw data is obtained via CGImageGetDataProvider --> CGDataProviderCopyData --> CFDataGetBytePtr. Finally, glTexImage2D uploads imageData to the GPU texture of the current outputFramebuffer (see the sketch after this list).
      • Processing, processImageWithCompletionHandler: for each target added via addTarget (that is, the filters in the middle of the chain), setInputFramebuffer is called to set the processed texture in the current outputFramebuffer as the input of the next filter.
      • newFrameReadyAtTime: notifies each added target to process the data.
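As referenced in the initialization item above, here is a hedged sketch of the CG-assisted decode pattern (CGBitmapContextCreate --> CGContextDrawImage): drawing into a bitmap context forces decompression and yields tightly packed RGBA bytes ready for glTexImage2D. Variable names and options are illustrative, loosely following GPUImagePicture:

size_t width  = CGImageGetWidth(newImageSource);
size_t height = CGImageGetHeight(newImageSource);
GLubyte *imageData = (GLubyte *)calloc(1, width * height * 4);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, width, height, 8, width * 4,
                                                  colorSpace,
                                                  kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, width, height), newImageSource);
CGContextRelease(imageContext);
CGColorSpaceRelease(colorSpace);

// Later, on the GL context queue: upload to the outputFramebuffer's texture.
glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)width, (int)height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, imageData);
free(imageData);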
  • GPUImageFilter

GPUImageFilter (in practice you usually use one of its subclasses) inherits from GPUImageOutput and also conforms to the GPUImageInput protocol. Its class description follows:

/** GPUImage"s base filter class

Filters and other subsequent elements in the chain conform to the GPUImageInput protocol,
which lets them take in the supplied or processed texture from the previous link in the chain and do something with it.
Objects one step further down the chain are considered targets,
and processing can be branched by adding multiple targets to a single output or filter. */

setInputFramebuffer in GPUImageFilter (a GPUImageInput protocol method) is a simple assignment:

- (void)setInputFramebuffer:(GPUImageFramebuffer *)newInputFramebuffer atIndex:(NSInteger)textureIndex;
{
    firstInputFramebuffer = newInputFramebuffer;
    [firstInputFramebuffer lock];
}

Then newFrameReadyAtTime is called:

- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
    static const GLfloat imageVertices[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
    }; // vertex data: two triangles covering the texture area

    [self renderToTextureWithVertices:imageVertices textureCoordinates:[[self class] textureCoordinatesForRotation:inputRotation]];
    [self informTargetsAboutNewFrameAtTime:frameTime];
}

This activates the filter's filterProgram (with the vertex and fragment shaders already attached), then binds the input texture and renders:

- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
    if (self.preventRendering)
    {
        [firstInputFramebuffer unlock];
        return;
    }

    [GPUImageContext setActiveShaderProgram:filterProgram];

    // Fetch a reusable outputFramebuffer from the GPUImageFramebufferCache
    outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:[self sizeOfFBO] textureOptions:self.outputTextureOptions onlyTexture:NO];
    [outputFramebuffer activateFramebuffer];
    if (usingNextFrameForImageCapture)
    {
        [outputFramebuffer lock];
    }

    [self setUniformsForProgramAtIndex:0];

    glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
    glClear(GL_COLOR_BUFFER_BIT);

    glActiveTexture(GL_TEXTURE2); // select texture unit GL_TEXTURE2
    glBindTexture(GL_TEXTURE_2D, [firstInputFramebuffer texture]); // bind the texture of the current input framebuffer
    glUniform1i(filterInputTextureUniform, 2);

    // Set the vertex data for the vertex shader, and the texture coordinate data
    // that will be used by the fragment shader
    glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
    glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    [firstInputFramebuffer unlock];

    if (usingNextFrameForImageCapture)
    {
        dispatch_semaphore_signal(imageCaptureSemaphore);
    }
}

informTargetsAboutNewFrameAtTime loops over the current targets twice: the first pass essentially calls the parent class's setInputFramebufferForTarget (from GPUImageOutput), and the second pass calls newFrameReadyAtTime on each target, which brings us back to where the source first added its targets. A condensed sketch follows.
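This sketch is simplified from GPUImageFilter's -informTargetsAboutNewFrameAtTime: (locking details, image-capture handling, and the targetToIgnoreForUpdates checks are omitted):

- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime
{
    // Pass 1: hand the freshly rendered outputFramebuffer to every target.
    for (id<GPUImageInput> currentTarget in targets)
    {
        NSInteger indexOfObject = [targets indexOfObject:currentTarget];
        NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
        [self setInputFramebufferForTarget:currentTarget atIndex:textureIndex]; // inherited from GPUImageOutput
        [currentTarget setInputSize:[self outputFrameSize] atIndex:textureIndex];
    }

    [[self framebufferForOutput] unlock]; // release our hold; the targets hold their own locks now

    // Pass 2: tell every target that its input frame is ready to process.
    for (id<GPUImageInput> currentTarget in targets)
    {
        NSInteger indexOfObject = [targets indexOfObject:currentTarget];
        NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
        [currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndex];
    }
}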

GPUImageView

As the final output target, GPUImageView implements only the GPUImageInput protocol; it can only accept data passed in from a source or a filter and no longer serves as an output.

Its setInputFramebuffer and newFrameReadyAtTime are handled much the same way as in a filter, but with one extra call, shown below. As mentioned at the beginning, a framebuffer can also be bound to a renderbuffer (often called a colorbuffer), and the renderbuffer is displayed directly on a CAEAGLLayer; by using a screen-sized buffer, the result shows up directly on the phone screen.

- (void)presentFramebuffer;
{
    glBindRenderbuffer(GL_RENDERBUFFER, displayRenderbuffer);
    [[GPUImageContext sharedImageProcessingContext] presentBufferForDisplay];
}

The displayRenderbuffer is created in the createDisplayFramebuffer method; it is mostly boilerplate, but the key lines are sketched below for reference.
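A hedged sketch modeled on GPUImageView's -createDisplayFramebuffer: the renderbuffer gets its storage from the view's CAEAGLLayer, which is what lets presentBufferForDisplay put pixels on screen (size queries and error handling omitted):

glGenFramebuffers(1, &displayFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, displayFramebuffer);

glGenRenderbuffers(1, &displayRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, displayRenderbuffer);

// Allocate the renderbuffer's storage directly from the layer backing this view.
[[[GPUImageContext sharedImageProcessingContext] context] renderbufferStorage:GL_RENDERBUFFER
                                                                 fromDrawable:(CAEAGLLayer *)self.layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, displayRenderbuffer);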

Summary

GPUImage's code structure is a model of chained processing and well worth studying. This note only records the data flow of the processing chain (source->filter-->filter...->output); many details will be recorded later.
