GPUImage – GPUImageFramebuffer

GPUImage cannot directly operate on iOS's built-in image classes such as UIImage, CGImage, and CIImage when processing images or videos. So the first step in using GPUImage is to load the image or video data into a carrier defined by the framework, so that the data can flow through GPUImage's processing pipeline. That carrier is GPUImageFramebuffer. Think of an image, with all its information such as pixel data, as a set of liquid pigments, and the most common iOS image class, UIImage, as a colour palette. The essential step of showing an image on screen is pouring the pigments into a palette, and in GPUImage that palette is GPUImageFramebuffer. Once all the pigments have been poured from the UIImage palette into a GPUImageFramebuffer, GPUImage can modify, mix, or otherwise process them. In this post the whole procedure is compared to a multi-stage pipeline: a palette (a GPUImageFramebuffer object) holding the pigments flows through the pipeline, and when the next node receives it, that node processes the pigments and produces a new palette holding the results, which it hands on to the node after it.
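To make the pipeline concrete before diving into GPUImageFramebuffer itself, here is a minimal sketch of the kind of chain this post discusses (standard GPUImage classes; the image name and view setup are placeholders of my own):

#import <GPUImage/GPUImage.h>

// A minimal source -> filter -> view chain; each hand-off along the chain is a
// GPUImageFramebuffer ("palette") being passed from one node to the next.
// Assumes "sample.png" exists in the bundle and this runs in a view controller.
- (void)runSimplePipeline
{
    UIImage *inputImage = [UIImage imageNamed:@"sample.png"];
    // In real code, keep a strong reference to picture so it outlives the render.
    GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:inputImage];
    GPUImageHueFilter *hueFilter = [[GPUImageHueFilter alloc] init];
    GPUImageView *imageView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:imageView];

    [picture addTarget:hueFilter];   // picture's framebuffer flows into the filter
    [hueFilter addTarget:imageView]; // the filter's output framebuffer flows to the view
    [picture processImage];          // start pouring the pigments down the pipeline
}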

GPUImageFramebuffer

The properties and methods declared in GPUImageFramebuffer.h are annotated below.

//The size of the colour palette. When creating a new palette you need to know how large it must be to hold all the pigments.
@property(readonly) CGSize size;

//Settings used when generating the texture
@property(readonly) GPUTextureOptions textureOptions;

//The OpenGL ES name (handle) of the texture object
@property(readonly) GLuint texture;


//This property concerns the relationship between framebuffer and texture; GPUImage makes the texture a property of the framebuffer object to bind the two together. If missingFramebuffer is YES, the object only generates a texture — for example, a GPUImagePicture's image data needs no framebuffer, only a texture. If NO, a framebuffer object is generated first and then bound to the texture generated after it.
@property(readonly) BOOL missingFramebuffer;

//Initialises with the given buffer size and default texture options; creates both a framebuffer and a texture object.
- (id)initWithSize:(CGSize)framebufferSize;

//Initialises with the given size and texture options; if onlyGenerateTexture is YES, only a texture is created, with no framebuffer.
- (id)initWithSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)fboTextureOptions onlyTexture:(BOOL)onlyGenerateTexture;


//Initialises with a texture you created yourself, replacing the one GPUImageFramebuffer would otherwise create during initialisation.
- (id)initWithSize:(CGSize)framebufferSize overriddenTexture:(GLuint)inputTexture;

//Creation is only complete once the frame buffer object is bound; in other words, this method must be called before the FBO is used.
- (void)activateFramebuffer;

//The following methods handle the memory management of framebuffer objects, explained in detail later. You almost never call them manually in application code.
- (void)lock; 
- (void)unlock; 
- (void)clearAllLocks;
- (void)disableReferenceCounting;
- (void)enableReferenceCounting; 

//Generates CGImage-format image data from the framebuffer
- (CGImageRef)newCGImageFromFramebufferContents;

//The following methods manage the CVPixelBufferRef object that the GPUImageFramebuffer creates internally to store processed image data.
- (void)restoreRenderTarget; 
- (void)lockForReading;
- (void)unlockAfterReading; 

//Returns the bytes-per-row of the underlying CVPixelBufferRef, i.e. its actual row stride in memory
- (NSUInteger)bytesPerRow;

//Returns a pointer to the raw byte data of the underlying CVPixelBufferRef
- (GLubyte *)byteBuffer;
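A sketch of how these reading methods might fit together when pulling raw pixels out of a framebuffer (assuming the processing chain has already rendered into it, and that the framebuffer is backed by a CVPixelBufferRef as on iOS):

// Assumes `framebuffer` has already been rendered into by a filter chain.
[framebuffer lockForReading];                       // map the CVPixelBufferRef for CPU access
GLubyte *rawBytes = [framebuffer byteBuffer];       // pointer to the BGRA pixel data
NSUInteger bytesPerRow = [framebuffer bytesPerRow]; // row stride, which may exceed width * 4
for (NSUInteger y = 0; y < (NSUInteger)framebuffer.size.height; y++)
{
    GLubyte *row = rawBytes + y * bytesPerRow;
    // ...inspect or copy this row of pixels...
}
[framebuffer unlockAfterReading];                   // unmap when done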

It can be seen that GPUImageFramebuffer.h covers three major areas:

  1. Generation of the framebuffer and texture.
  2. Memory management of GPUImageFramebuffer objects within GPUImage.
  3. The specific handling of the CVPixelBufferRef object used to store image data.

1. Framebuffer and Texture

· Texture

Here is a definition of texture that I found on a Chinese website (translated into English below):

Generally, a 'texture' is used to represent one or more two-dimensional graphics. An object looks much more realistic when textures are mapped onto it in some specialised manner. Texturing has become an essential rendering method in widely used graphics systems. A texture can be regarded as a group of colours, one per pixel, while in the real world a texture indicates several attributes of an object, including its colour, pattern, and even how it feels to the touch. The texture only represents the coloured pattern on the surface of an object; it cannot affect the object's geometric structure. Moreover, applying it is a computation-heavy operation.

In my memory, when I was a graduate student, the in-class practice for computer graphics was developing 3D scenes with the C-based OpenGL framework. Although I cannot clearly remember the detailed process of building a helicopter, two points still impress me: 1. no shape can be created except triangles — for example, a square is made up of two triangles, and a circle of many triangles; the more triangles there are, the less jagged the circle looks; 2. different 2D patterns can be 'stuck' onto the surface of a 3D object. It is like packaging a can: a freshly produced can is just a cylinder of aluminium or some other material with no pattern on the outside, and the last step of production is sticking on a piece of paper, or spraying the information and images onto the can. Finally, the cans are delivered to warehouses and shops, so that I can buy milk powder in a can printed with a certain brand and its product information. Texture is that piece of paper or printed pattern on the outside of the can.

The following method creates the texture object in GPUImageFramebuffer. The mechanism is plain OpenGL ES.

- (void)generateTexture;
{
    // A texture unit is like a slot in the GPU for holding textures; how many slots
    // there are depends on how expensive the GPU is. This call does not activate a
    // texture — it selects which texture unit is currently active.
    glActiveTexture(GL_TEXTURE1);

    // Generates the texture. The second parameter is the address of _texture; after
    // this call, _texture holds the name of the newly generated texture.
    glGenTextures(1, &_texture);

    // Binds the texture generated above to the active texture unit. I think of this
    // as temporarily naming the texture unit.
    glBindTexture(GL_TEXTURE_2D, _texture);

    // When the texture is displayed smaller than the loaded texture, it is sampled
    // using the GL_TEXTURE_MIN_FILTER setting.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, _textureOptions.minFilter);

    // When the texture is displayed larger than the loaded texture, it is sampled
    // using the GL_TEXTURE_MAG_FILTER setting.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, _textureOptions.magFilter);

    // This is necessary for non-power-of-two textures
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, _textureOptions.wrapS);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, _textureOptions.wrapT);

    // All of the texture parameters above come from the GPUTextureOptions struct;
    // if you have special requirements you can build your own and pass it in
    // through the initialiser.
}

Note: the comment "This is necessary for non-power-of-two textures" above the two wrap-parameter lines means roughly that with these settings the texture dimensions are not restricted to powers of two.

A simple explanation of the texture WRAP parameters can be found here:

http://blog.csdn.net/wangdingqiaoit/article/details/51457675

· Framebuffer

In the OpenGL framework, a frame buffer object is called an FBO. In my view, if the texture is the artwork on the outside of a can, the frame buffer is the piece of paper that the artwork is printed on and wrapped around the can. The frame buffer is used to buffer textures and render them to the screen — the 'render to texture' process. Frame buffers can hold not only the original texture but also textures processed by OpenGL. For instance, besides the brand name, logo and other information on the canned milk in the photo, the can may appear to have special surface texture and reflection effects. These can be produced by taking the original image of the reflection and applying transparency, stretching, fogging, and so on; this yields an image matching the actual reflection, which is finally rendered onto the can.

In this respect, the texture can be seen as part of the frame buffer, which is why GPUImage defines the texture name as a property of GPUImageFramebuffer. There are also situations where no framebuffer needs to be created at all. For example, when an input such as a GPUImagePicture object is being initialised, only a texture is needed to load the image source, and no frame buffer. One of GPUImageFramebuffer's initialisers takes a parameter onlyGenerateTexture; when onlyGenerateTexture is YES, the resulting object has a texture but no frame buffer.
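For illustration, here is a sketch of creating a texture-only framebuffer with custom texture options (the field values shown match GPUImage's defaults except for the clamped wrap modes; the size is an arbitrary example):

// Custom texture options; GPUTextureOptions is the struct used throughout GPUImageFramebuffer.
GPUTextureOptions options;
options.minFilter      = GL_LINEAR;
options.magFilter      = GL_LINEAR;
options.wrapS          = GL_CLAMP_TO_EDGE;
options.wrapT          = GL_CLAMP_TO_EDGE;
options.internalFormat = GL_RGBA;
options.format         = GL_BGRA;
options.type           = GL_UNSIGNED_BYTE;

// onlyGenerateTexture == YES: a texture is created but no FBO, as for GPUImagePicture's input.
GPUImageFramebuffer *textureOnly =
    [[GPUImageFramebuffer alloc] initWithSize:CGSizeMake(640.0, 480.0)
                               textureOptions:options
                                  onlyTexture:YES];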

// Binds a named texture to a texture target. Binding a texture to a target replaces the
// previous binding for that target. When a texture is bound for the first time, it takes
// on the type of that target: a texture first bound to GL_TEXTURE_1D becomes a 1D texture,
// one first bound to GL_TEXTURE_2D becomes a 2D texture. After glBindTexture, the texture
// stays bound until another texture is bound to the same target, or the texture is deleted
// with glDeleteTextures. Details: http://www.dreamingwish.com/frontui/article/default/glbindtexture.html
glBindTexture(GL_TEXTURE_2D, _texture);

// Specifies a two-dimensional (or cube-map) texture image.
// Details: http://blog.csdn.net/csxiaoshui/article/details/27543615
glTexImage2D(GL_TEXTURE_2D, 0, _textureOptions.internalFormat, (int)_size.width, (int)_size.height, 0, _textureOptions.format, _textureOptions.type, 0);

// GL_COLOR_ATTACHMENT0 tells OpenGL ES to attach the texture object to binding point 0 of
// the FBO (an FBO can have several colour attachments at once, each corresponding to one
// binding point). GL_TEXTURE_2D specifies a two-dimensional texture. _texture holds the
// texture name, pointing to a previously prepared texture object. The texture can be
// mipmapped; the last parameter specifies level 0, i.e. the original image.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _texture, 0);

// The last step of creating the framebuffer and binding it to the texture. My understanding
// is that once the attachment is made, the framebuffer holds the texture, so the 2D texture
// binding point can be cleared.
glBindTexture(GL_TEXTURE_2D, 0);

GPUImageFramebuffer also has a private method, - (void)destroyFramebuffer, which is called while the object is being deallocated. Its key lines are:

glDeleteFramebuffers(1, &framebuffer);
framebuffer = 0;

2. Managing Objects of GPUImageFramebuffer

I believe many people who have used the GPUImage framework have at some point hit its framebuffer over-release assertion ("Tried to overrelease a framebuffer"). Let's take a specific example to illustrate what GPUImage does with GPUImageFramebuffer. The scenario: create a GPUImagePicture object pic, a GPUImageHueFilter object filter, and a GPUImageView object imageView; pic is processed by the filter and then displayed on the imageView. Let's look at what happens to the GPUImageFramebuffer objects during the whole process.

· Framebuffer in the pic object of GPUImagePicture

After performing a series of processing steps on the UIImage object passed to its initialiser, pic obtains its own framebuffer.

Notice that the framebuffer is not created by GPUImageFramebuffer's initialiser directly, but obtained by calling a method on the singleton [GPUImageContext sharedFramebufferCache]. sharedFramebufferCache is actually a plain NSObject subclass — it has none of the data-caching behaviour of NSCache — but GPUImage uses this single instance to manage every framebuffer object generated during processing. The fetch method takes two parameters: 1. the texture size — normally the pixel size of the input picture, though other size situations will be mentioned later; 2. whether to return a framebuffer object that has only a texture.
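In GPUImagePicture's initialiser, the fetch looks roughly like this (a sketch from my reading of the GPUImage source; pixelSizeOfImage is the computed pixel size of the input image):

// Fetch a texture-only framebuffer from the shared cache and keep it for the
// lifetime of the picture object.
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:pixelSizeOfImage
                                                                          onlyTexture:YES];
[outputFramebuffer disableReferenceCounting];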

Step 1.

From the texture size, the textureOptions (defaults here), and the onlyTexture flag, sharedFramebufferCache generates a unique lookupHash via - (NSString *)hashForSize:(CGSize)size textureOptions:(GPUTextureOptions)textureOptions onlyTexture:(BOOL)onlyTexture;. Notice that GPUImagePicture objects with the same texture size get the same lookupHash.
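The hash is built by formatting the size and texture options into a string, roughly like this (a sketch, not the verbatim source; the real implementation also marks texture-only framebuffers in the hash):

// Same size + same texture options => same lookupHash string.
NSString *lookupHash = [NSString stringWithFormat:@"%.1fx%.1f-%d:%d:%d:%d:%d:%d:%d",
                        size.width, size.height,
                        textureOptions.minFilter, textureOptions.magFilter,
                        textureOptions.wrapS, textureOptions.wrapT,
                        textureOptions.internalFormat, textureOptions.format,
                        textureOptions.type];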

Step 2.

The lookupHash is used as the key to look up numberOfMatchingTexturesInCache in sharedFramebufferCache's dictionary property framebufferTypeCounts. As the name suggests, this is the number of GPUImageFramebuffer objects in the cache that match the conditions; its integer value is numberOfMatchingTextures.

Step 3.

If numberOfMatchingTextures is less than 1 — that is, no matching GPUImageFramebuffer object is available in the cache — GPUImageFramebuffer's initialiser is called to generate a new framebuffer object. Otherwise, lookupHash and (numberOfMatchingTextures - 1) are concatenated into a key for retrieving the framebuffer object from another dictionary, framebufferCache, and the count stored under lookupHash in framebufferTypeCounts is decremented by 1.

Step 4.

If the framebuffer object retrieved is still nil, a new one is initialised as a fallback.
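Putting the steps together, the fetch method's logic can be sketched like this (simplified; the real implementation also synchronises on the video processing queue):

- (GPUImageFramebuffer *)fetchFramebufferForSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)textureOptions onlyTexture:(BOOL)onlyTexture;
{
    GPUImageFramebuffer *framebufferFromCache = nil;
    NSString *lookupHash = [self hashForSize:framebufferSize textureOptions:textureOptions onlyTexture:onlyTexture];
    NSNumber *numberOfMatchingTexturesInCache = [framebufferTypeCounts objectForKey:lookupHash];
    NSInteger numberOfMatchingTextures = [numberOfMatchingTexturesInCache integerValue];

    if (numberOfMatchingTextures < 1)
    {
        // Step 3a: nothing suitable in the cache, so create a brand-new framebuffer.
        framebufferFromCache = [[GPUImageFramebuffer alloc] initWithSize:framebufferSize textureOptions:textureOptions onlyTexture:onlyTexture];
    }
    else
    {
        // Step 3b: pop the most recently stored matching framebuffer and decrement the count.
        NSString *textureHash = [NSString stringWithFormat:@"%@-%ld", lookupHash, (long)(numberOfMatchingTextures - 1)];
        framebufferFromCache = [framebufferCache objectForKey:textureHash];
        [framebufferCache removeObjectForKey:textureHash];
        [framebufferTypeCounts setObject:[NSNumber numberWithInteger:(numberOfMatchingTextures - 1)] forKey:lookupHash];
    }

    if (framebufferFromCache == nil)
    {
        // Step 4: fall back to a fresh framebuffer if the cached entry is gone.
        framebufferFromCache = [[GPUImageFramebuffer alloc] initWithSize:framebufferSize textureOptions:textureOptions onlyTexture:onlyTexture];
    }

    // Step 5: the caller now holds a reference.
    [framebufferFromCache lock];
    return framebufferFromCache;
}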

Step 5.

The -(void)lock; method of the framebuffer being returned is called, incrementing framebufferReferenceCount by 1. framebufferReferenceCount is a property of the framebuffer that starts at 0 on initialisation and, as the name says, counts how many times the object is referenced. It is incremented here because the framebuffer is about to be referenced by pic and loaded with the image content.

However, unlike with GPUImageFilter-type objects, [outputFramebuffer disableReferenceCounting]; is called after pic obtains the framebuffer. This method sets the framebuffer's referenceCountingDisabled to YES, and that value changes the behaviour of -(void)lock;: if referenceCountingDisabled is YES, framebufferReferenceCount is not incremented.

- (void)lock;
{
    if (referenceCountingDisabled)
    {
        return;
    }

    framebufferReferenceCount++;
}

However, there is a problem here. Before pic obtains the framebuffer, -(void)lock; has already been called while the framebuffer was being fetched from sharedFramebufferCache. At that point the framebufferReferenceCount of the framebuffer obtained by pic has already been incremented to 1, and only afterwards is referenceCountingDisabled set to YES. As a result, when pic deallocates, its outputFramebuffer is never released, causing a memory leak. To address this problem, I wrote a category on GPUImagePicture that overrides the dealloc method, replacing the original [outputFramebuffer unlock]; with [outputFramebuffer clearAllLocks]; to ensure that the framebufferReferenceCount of the outputFramebuffer is reset to 0, so that the outputFramebuffer can be released successfully.
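A rough sketch of that workaround (the category name is hypothetical; note that overriding a method in a category replaces the class's own implementation, so method swizzling would be the safer route in production code):

// Hypothetical workaround category, as described above.
@implementation GPUImagePicture (FramebufferLeakFix)

- (void)dealloc
{
    // The cache's fetch already called -lock before reference counting was disabled,
    // so a single -unlock can never bring the count back to 0. Reset it so the
    // framebuffer can actually be released.
    [outputFramebuffer clearAllLocks];
    outputFramebuffer = nil;
}

@end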

· Framebuffer in the filter object of GPUImageHueFilter

Let's take the simplest structure, a single-input filter, as the example — here a GPUImageHueFilter object. A filter behaves like pic in that it gets a framebuffer from sharedFramebufferCache to store its processed data and passes it to the next object. The difference is that a filter has a firstInputFramebuffer variable, which references the outputFramebuffer of the previous node. A filter that inherits from GPUImageTwoInputFilter additionally has a secondInputFramebuffer. To run an image through a filter, you add the filter as a target of pic and call pic's processImage method. The methods of the filter are then invoked in this order:

1. The input framebuffer is referenced and its lock method is called. Suppose pic's framebuffer was freshly created by the initialiser: its framebufferReferenceCount is 1 before being passed in and 2 after this method runs.

- (void)setInputFramebuffer:(GPUImageFramebuffer *)newInputFramebuffer atIndex:(NSInteger)textureIndex;
{
    firstInputFramebuffer = newInputFramebuffer;
    [firstInputFramebuffer lock];
}

2. The filter does its processing, i.e. renders the texture of the framebuffer. This is fairly involved, so the specific rendering process is not covered here.

- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;

The filter's outputFramebuffer, like pic's, is fetched from sharedFramebufferCache, so its framebufferReferenceCount is already 1 by the time the outputFramebuffer variable is assigned. Then comes a condition: usingNextFrameForImageCapture. Searching the class shows that usingNextFrameForImageCapture is set to YES when -(void)useNextFrameForImageCapture is called. So when is this method called? Anyone who has used GPUImage to write the simplest off-screen rendering code will recognise it: it must be called before exporting the image from a filter. Why? The reason lies in this condition. If usingNextFrameForImageCapture is YES, the outputFramebuffer is locked once more, guaranteeing that it is still referenced after processing completes so that an image object can be generated from it; otherwise it would be recycled into sharedFramebufferCache.
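For reference, the standard off-screen export pattern looks like this (standard GPUImage API; the input image is just an example):

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageHueFilter *stillImageFilter = [[GPUImageHueFilter alloc] init];
[stillImageSource addTarget:stillImageFilter];

// Without this call, the filter's outputFramebuffer may be recycled into the cache
// before the image is read back, and the export will fail or assert.
[stillImageFilter useNextFrameForImageCapture];

[stillImageSource processImage];
UIImage *filteredImage = [stillImageFilter imageFromCurrentFramebuffer];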

After these steps, the unlock method of the input framebuffer is finally called. At this point the framebufferReferenceCount of firstInputFramebuffer ordinarily drops to 0, and firstInputFramebuffer is returned to sharedFramebufferCache.

[firstInputFramebuffer unlock];

3. Next, in this method, the filter passes its own outputFramebuffer to the next node in the chain, just as pic passed its framebuffer to the filter.

- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;

[self framebufferForOutput] returns outputFramebuffer in an ordinary filter. If usingNextFrameForImageCapture is YES, you can roughly read it as: the current object's outputFramebuffer is handed to the next target but still has other uses, so the outputFramebuffer variable is not cleared. If usingNextFrameForImageCapture is NO, outputFramebuffer is set to nil, but the framebuffer it pointed to is not recycled into sharedFramebufferCache, because it has already been passed to the next target, where the corresponding assignment method called lock on it. This repeats until the last node, which either generates a picture or displays it.
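A much-simplified sketch of what happens inside that method under the description above (the real implementation also handles input sizes, rotation, and dispatching on the video processing queue):

// Hand the output framebuffer to every target so each can take a lock on it.
for (id<GPUImageInput> currentTarget in targets)
{
    NSInteger indexOfObject = [targets indexOfObject:currentTarget];
    NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
    [self setInputFramebufferForTarget:currentTarget atIndex:textureIndex];
    [currentTarget setInputSize:[self outputFrameSize] atIndex:textureIndex];
}

// Release this filter's own hold on the framebuffer...
[[self framebufferForOutput] unlock];

if (!usingNextFrameForImageCapture)
{
    // ...and stop pointing at it; the targets' locks keep it alive.
    [self removeOutputFramebuffer];
}

// Finally, tell each target that a new frame is ready so it renders in turn.
for (id<GPUImageInput> currentTarget in targets)
{
    NSInteger indexOfObject = [targets indexOfObject:currentTarget];
    NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
    [currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndex];
}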
