
Previously, I analogised GPUImage processing to a pipeline, and the previous post interpreted GPUImageFramebuffer as the “flowing carrier” moving through that pipeline. In this post, I would like to introduce the ‘sources’ that can act as the input of the pipeline. GPUImage provides five input classes:
- GPUImagePicture
- GPUImageRawDataInput
- GPUImageUIElement
- GPUImageMovie
- GPUImageVideoCamera
Each of them is a subclass of GPUImageOutput and does not adopt the GPUImageInput protocol. An object of a GPUImageOutput subclass can pass its content on to the next node in the pipeline, while only objects of classes that adopt the GPUImageInput protocol can receive content from the previous node, process it, and hand it on. An input source therefore never needs to receive anything from other nodes; it only delivers in one direction. The five input sources are introduced below.
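As a quick illustration of this one-way flow, a minimal chain might be wired up as in the sketch below (the sepia filter, the view frame and the image name are arbitrary choices for illustration; any GPUImageOutput/GPUImageInput pair behaves the same way):
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"sample.jpg"]];
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];
GPUImageView *preview = [[GPUImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 480)];
[source addTarget:filter];   // the source only ever sends downstream
[filter addTarget:preview];  // the filter both receives and sends
[source processImage];       // kick off the one-way delivery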
1. GPUImagePicture
This is one of the classes I use most frequently for processing pictures. GPUImagePicture has five initialisation methods, but their final implementation is identical: each of them obtains a CGImage object and loads it into a texture. In other words, all of them eventually call the following initialisation method.
- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput;
– Variables
CGSize pixelSizeOfImage;
The actual size of the picture is saved in this variable when the input object is initialised. However, if the picture is larger than the maximum texture size the GPU supports, pixelSizeOfImage is clamped to that maximum size.
BOOL hasProcessedImage;
It is set to NO during initialisation and becomes YES once the picture has been processed. When – (void)addTarget: is called while the processing chain is being built, this flag decides whether the newly added target should immediately be handed the already-processed frame; the check only has an effect once processing has actually taken place.
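For reference, the check inside addTarget:atTextureLocation: looks roughly like this (paraphrased from the library source; consult your GPUImage version for the exact code):
- (void)addTarget:(id<GPUImageInput>)newTarget atTextureLocation:(NSInteger)textureLocation;
{
    [super addTarget:newTarget atTextureLocation:textureLocation];

    // If the picture has already been processed, hand the finished frame to
    // the newly attached target straight away.
    if (hasProcessedImage)
    {
        [newTarget setInputSize:pixelSizeOfImage atIndex:textureLocation];
        [newTarget newFrameReadyAtTime:kCMTimeIndefinite atIndex:textureLocation];
    }
}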
dispatch_semaphore_t imageUpdateSemaphore;
This semaphore is used to coordinate execution order across threads; compared with a dispatch group, it offers finer-grained control. The variable is created in the initialisation method and used in the - (BOOL)processImageWithCompletionHandler:(void (^)(void))completion method, where it prevents inconsistent data caused by overlapping calls.
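A simplified sketch of how the semaphore guards processImageWithCompletionHandler: (paraphrased from the library; the real method runs its work on GPUImage's video processing queue, and the target loop it performs is shown later in this post):
- (BOOL)processImageWithCompletionHandler:(void (^)(void))completion;
{
    hasProcessedImage = YES;

    // If a previous update is still in flight, bail out instead of
    // stacking another upload on top of it.
    if (dispatch_semaphore_wait(imageUpdateSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return NO;
    }

    runAsynchronouslyOnVideoProcessingQueue(^{
        // ... push the frame to every target (see the for loop below) ...

        dispatch_semaphore_signal(imageUpdateSemaphore);

        if (completion != nil)
        {
            completion();
        }
    });

    return YES;
}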
– Initialisation
1. Obtain the size of the picture object. If the width or height equals 0, an assertion stops the process. Note that there is one more check here: as the comment explains, if the picture exceeds the maximum texture size, it has to be scaled down to fit within that limit.
// For now, deal with images larger than the maximum texture size by resizing to be within that limit
CGSize scaledImageSizeToFitOnGPU = [GPUImageContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
2. I could not make sense of the shouldSmoothlyScaleOutput property until I understood mipmapping. Its default value is NO, in which case the texture is not processed or stored using mipmaps. If it is YES, the texture width and height must each be a power of two, so they are rounded up to the next power of two.
if (self.shouldSmoothlyScaleOutput)
{
    // In order to use mipmaps, you need to provide power-of-two textures, so convert to the next largest power of two and stretch to fill
    CGFloat powerClosestToWidth = ceil(log2(pixelSizeOfImage.width));
    CGFloat powerClosestToHeight = ceil(log2(pixelSizeOfImage.height));
    pixelSizeToUseForTexture = CGSizeMake(pow(2.0, powerClosestToWidth), pow(2.0, powerClosestToHeight));
    shouldRedrawUsingCoreGraphics = YES;
}
Mario_Q summarises mipmapping as follows. Mipmapping is a powerful texture technique that can improve rendering performance and enhance the visual quality of a scene. It addresses two common problems of plain texture mapping:
- Flickering: when the surface being rendered is very small on screen compared with the texture applied to it, flickering (aliasing) occurs. The effect is especially noticeable when the camera or the object is moving.
- Performance: a large amount of texture data has to be loaded and then filtered down even though only a small part of it ends up on screen. The larger the texture, the greater the performance cost.
Mipmapping solves both problems. Instead of loading a single texture, a whole series of textures from large to small is loaded in the mipmapped texture state, and OpenGL then picks the most suitable level for the size of the geometry being drawn. Concretely, the texture is repeatedly halved until it reaches 1×1, all of these levels are stored, and the appropriate one is chosen at draw time. This costs some additional memory: for a square texture the extra levels add roughly one third of the original size (1/4 + 1/16 + 1/64 + … ≈ 1/3).
3. If the picture size is acceptable and shouldSmoothlyScaleOutput is NO, you still need to check whether the picture matches the storage layout GL expects by comparing the CGImage properties against the required configuration (see the sketch below). If it does not, the image has to be redrawn to produce a new, compatible buffer.
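A rough sketch of that compatibility check, paraphrased from the library source (the exact conditions may differ between GPUImage versions):
// The CGImage can be used directly only if it is 32 bits per pixel,
// 8 bits per component and tightly packed (bytesPerRow == width * 4);
// otherwise it must be redrawn with Core Graphics.
BOOL layoutIsCompatible = (CGImageGetBitsPerPixel(newImageSource) == 32 &&
                           CGImageGetBitsPerComponent(newImageSource) == 8 &&
                           CGImageGetBytesPerRow(newImageSource) == CGImageGetWidth(newImageSource) * 4);
if (!layoutIsCompatible)
{
    shouldRedrawUsingCoreGraphics = YES;
}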
4. The redraw itself works as follows. First, a block of memory large enough for the image data is allocated and its address recorded as imageData. After redrawing, that address holds the image data to be uploaded. If no redraw is needed, the bytes can instead be read directly from the CGImage through its data provider (a sketch of this follows the redraw code below).
// For resized or incompatible image: redraw
imageData = (GLubyte *) calloc(1, (int)pixelSizeToUseForTexture.width * (int)pixelSizeToUseForTexture.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (size_t)pixelSizeToUseForTexture.width, (size_t)pixelSizeToUseForTexture.height, 8, (size_t)pixelSizeToUseForTexture.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// CGContextSetBlendMode(imageContext, kCGBlendModeCopy); // From Technical Q&A QA1708: http://developer.apple.com/library/ios/#qa/qa1708/_index.html
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeToUseForTexture.width, pixelSizeToUseForTexture.height), newImageSource);
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
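For comparison, when no redraw is required, GPUImagePicture reads the bytes straight from the CGImage's data provider, roughly like this:
// No redraw needed: access the existing image bytes directly
dataFromImageDataProvider = CGDataProviderCopyData(CGImageGetDataProvider(newImageSource));
imageData = (GLubyte *)CFDataGetBytePtr(dataFromImageDataProvider);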
5. This step loads the input data into the texture (it involves the encapsulated serial queue and the current EAGLContext object, which will be covered in a separate article). First, the input's outputFramebuffer is prepared. Second, the texture parameters are configured if mipmapping is used. The picture data is then loaded into the texture, and finally a mipmap chain is generated if mipmapping is enabled.
glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
if (self.shouldSmoothlyScaleOutput)
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
}
// no need to use self.outputTextureOptions here since pictures need this texture formats and type
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, format, GL_UNSIGNED_BYTE, imageData);
if (self.shouldSmoothlyScaleOutput)
{
    glGenerateMipmap(GL_TEXTURE_2D);
}
glBindTexture(GL_TEXTURE_2D, 0);
6. Finally, do not forget to release the Core Graphics and Core Foundation objects created during the process.
free(imageData);
CFRelease(dataFromImageDataProvider);
– Image Rendering
- (void)processImage;
This method is called very frequently. Once the entire processing chain is set up, whenever you want to display the final result on a GPUImageView, export it, or even export the image from a filter node in the middle of the chain, you need to call this method. As the name implies, its purpose is to tell the input source to start processing the picture. Looking at the implementation, the actual work amounts to passing the image content and parameters held by the input source to the next node or nodes in the chain.
The following figure shows a multi-branch processing chain. After Filter1 finishes, a picture with only Filter1's effect can be exported; after Filter2 finishes, the result is displayed on a GPUImageView object; after Filter3 finishes, the result is passed on to Filter5 for further rendering. Each arrow is like a fuse, and calling processImage on the GPUImagePicture object is what lights it. A rough code equivalent is sketched below.
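The concrete filter classes here are placeholders chosen only to illustrate the shape of the chain described above; Filter4's downstream nodes are not shown in the description, so it is left dangling:
GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:sourceImage];
GPUImageSepiaFilter *filter1 = [[GPUImageSepiaFilter alloc] init];
GPUImageGrayscaleFilter *filter2 = [[GPUImageGrayscaleFilter alloc] init];
GPUImageGaussianBlurFilter *filter3 = [[GPUImageGaussianBlurFilter alloc] init];
GPUImageSharpenFilter *filter4 = [[GPUImageSharpenFilter alloc] init];
GPUImageContrastFilter *filter5 = [[GPUImageContrastFilter alloc] init];
GPUImageView *displayView = [[GPUImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 480)];

[picture addTarget:filter1];   // branch 1: export the image after Filter1
[picture addTarget:filter2];   // branch 2: display on the GPUImageView
[filter2 addTarget:displayView];
[picture addTarget:filter3];   // branch 3: continue on to Filter5
[filter3 addTarget:filter5];
[picture addTarget:filter4];   // branch 4

[picture processImage];        // "lights the fuse" for every branch at once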

The processImage method basically just calls processImageWithCompletionHandler:. That method loops over the targets that have already been added, such as Filter1, Filter2, Filter3 and Filter4 in the figure above. Note that the loop runs on a dedicated asynchronous serial queue, and the semaphore is used after the loop, so the completion callback is not triggered until the whole operation has finished.
for (id<GPUImageInput> currentTarget in targets)
{
    // The index of the current target matters for filters with two or more
    // input sources: the order in which the inputs were added determines the
    // processing order and therefore the final effect. More on this in the
    // Filter article.
    NSInteger indexOfObject = [targets indexOfObject:currentTarget];
    NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

    [currentTarget setCurrentlyReceivingMonochromeInput:NO];
    // Pass the texture size of this source's framebuffer so that the next
    // target can create or fetch a texture of the same size for its output.
    [currentTarget setInputSize:pixelSizeOfImage atIndex:textureIndexOfTarget];
    // Pass the framebuffer holding this source's content; for an input source
    // that content is the original picture data.
    [currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndexOfTarget];
    [currentTarget newFrameReadyAtTime:kCMTimeIndefinite atIndex:textureIndexOfTarget];
}
– An easier way to obtain the image processed by the filters
GPUImagePicture offers a method that makes it easier to obtain the processed image, although a fully assembled processing chain is still a prerequisite. Because the method wraps the export boilerplate for you, it avoids the crashes that happen when a required call is forgotten, which is convenient. The first parameter is the last filter in the chain whose output you want; the second is a block callback that returns the resulting image object.
- (void)processImageUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(UIImage *processedImage))block;
{
    [finalFilterInChain useNextFrameForImageCapture];
    [self processImageWithCompletionHandler:^{
        UIImage *imageFromFilter = [finalFilterInChain imageFromCurrentFramebuffer];
        block(imageFromFilter);
    }];
}
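A short usage example (the sepia filter and the image name are just placeholders):
GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"sample.jpg"]];
GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
[picture addTarget:sepia];

[picture processImageUpToFilter:sepia withCompletionHandler:^(UIImage *processedImage) {
    // processedImage holds the sepia-filtered result, ready to display or save
    NSLog(@"Finished: %@", NSStringFromCGSize(processedImage.size));
}];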
2. GPUImageRawDataInput
The basic process is the same as GPUImagePicture: import image data. The difference is that a GPUImagePicture object can be created directly from an image object such as a UIImage or a CGImage, whereas GPUImageRawDataInput only accepts the raw binary data of an image as the content to load into the texture. There are two ways I know of (there may be others) to turn an image object into such raw data: 1. UIImage -> NSData -> bytes; 2. redraw a UIImage into a buffer at a known address with Core Graphics (sketched below). This way of loading image data is rarely used, since nobody wants to take a detour to achieve what GPUImagePicture already does. However, if you want to apply some Core Graphics processing to the image before loading it, GPUImageRawDataInput lets you load the processed bytes directly without generating a new image object first.
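A hedged sketch of route 2: redraw a UIImage into a buffer with Core Graphics, then hand the bytes to GPUImageRawDataInput. The image name and variable names are illustrative, and the bitmap flags are chosen to produce BGRA-ordered bytes to match GPUPixelFormatBGRA:
UIImage *image = [UIImage imageNamed:@"sample.jpg"];
CGImageRef cgImage = image.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);

// Allocate a tightly packed 4-bytes-per-pixel buffer and draw the image into it
GLubyte *rawBytes = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawBytes, width, height, 8, width * 4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);

GPUImageRawDataInput *rawInput = [[GPUImageRawDataInput alloc] initWithBytes:rawBytes size:CGSizeMake(width, height) pixelFormat:GPUPixelFormatBGRA type:GPUPixelTypeUByte];
// rawBytes must stay valid until the upload is done; free(rawBytes) afterwards.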
This class has two public variables that are the same as in GPUImagePicture. There are three initialisation methods in total, and all of them ultimately call the last one. Let's first look at the four parameters of this method:
- (id)initWithBytes:(GLubyte *)bytesToUpload size:(CGSize)imageSize pixelFormat:(GPUPixelFormat)pixelFormat type:(GPUPixelType)pixelType;
bytesToUpload
GLubyte is an OpenGL data type: an unsigned single-byte integer holding values from 0 to 255. Converting a picture to binary amounts to storing the RGB or RGBA value of every pixel as raw data, and since each colour component ranges from 0 to 255, a single component (say the R of one pixel) occupies exactly one GLubyte. GLubyte * can therefore be read as a pointer to the memory address where this binary data lives. This parameter ends up as the last argument of OpenGL's glTexImage2D function, the pixels argument, which is the pointer to the image data in memory.
imageSize
It describes the size of the image supplied as binary data. An image can be viewed as a two-dimensional array of pixel data: the first row stores the pixels of the top row of the picture, and so on. In the buffer, however, the data occupies one continuous run of bytes (assuming the simplest storage layout), a one-dimensional form, so the original dimensions cannot be read from the data itself; the length of the first row of pixels is unknown. For example, a 100×50 RGBA image is simply a flat run of 100 × 50 × 4 = 20,000 bytes, and only the supplied size tells OpenGL that each row is 400 bytes long. One purpose of this parameter is therefore to supply the width and height when glTexImage2D writes the texture, determining the texture's dimensions. The other purpose is that GPUImageRawDataInput uses it to request its own framebuffer at the right size.
pixelFormat
This parameter is mainly used as the third argument of the glTexImage2D function. Although the enumeration has four members, only two of them (GPUPixelFormatRGBA and GPUPixelFormatRGB) really matter when GPUImageRawDataInput uploads its data; as the name suggests, the choice expresses whether the incoming image data has an alpha channel. OpenGL documents the third parameter of glTexImage2D as the internal format, which specifies the colour components of the texture; permitted values include GL_ALPHA, GL_RGB, GL_RGBA, GL_LUMINANCE, GL_LUMINANCE_ALPHA and so on. These values correspond directly to the GPUPixelFormat members, so there is not much to think about when setting this parameter: the real choice is simply with or without transparency.
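For reference, the enumeration in GPUImageRawDataInput.h looks roughly like this (check the header of your GPUImage version for the exact definition):
typedef enum {
    GPUPixelFormatBGRA = GL_BGRA,
    GPUPixelFormatRGBA = GL_RGBA,
    GPUPixelFormatRGB = GL_RGB,
    GPUPixelFormatLuminance = GL_LUMINANCE
} GPUPixelFormat;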

Moreover, the default value of the pixelFormat parameter is GPUPixelFormatBGRA, as noted by the comment at the top of the GPUImageRawDataInput header file.
// The default format for input bytes is GPUPixelFormatBGRA, unless specified with pixelFormat:
pixelType
This parameter is also passed to glTexImage2D and specifies the data type of the pixel data; it can roughly be thought of as the storage precision of the binary data. GPUPixelType has only two enumeration members, shown below.
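Roughly, the enumeration is defined as follows (again, consult the header of your GPUImage version):
typedef enum {
    GPUPixelTypeUByte = GL_UNSIGNED_BYTE,
    GPUPixelTypeFloat = GL_FLOAT
} GPUPixelType;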

Similarly, the header comment states that pixelType defaults to GPUPixelTypeUByte.
// The default type for input bytes is GPUPixelTypeUByte, unless specified with pixelType: