
Rendering YUV Images to the Screen (OpenGL)

2017/05/16

YUV is a format we frequently encounter when working with video data. When we decode video with Apple's VideoToolbox framework, what we get back is a CVPixelBuffer in NV12 format. Building on Apple's sample code, this article explains how to render a YUV image to the screen, focusing on how to display a YUV-format buffer with OpenGL ES on iOS.
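As a quick sanity check, a decoded buffer can be verified to be NV12 by inspecting its pixel format. A minimal sketch, using Core Video's bi-planar 4:2:0 format constants:

OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
if (format == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange ||
    format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    // NV12: plane 0 holds Y, plane 1 holds interleaved CbCr (UV).
    size_t planeCount = CVPixelBufferGetPlaneCount(pixelBuffer); // == 2
}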

OpenGL on iOS

iOS uses OpenGL ES, the implementation of OpenGL for mobile devices. To simplify the development of OpenGL ES applications, Apple designed the GLKit framework, which makes tasks such as texture loading, matrix math, and presenting render results easier to implement. GLKit is not the focus of this article, however, so its features will not be covered in detail.
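The view in this article drives OpenGL ES directly through an EAGLContext rather than GLKit. The _context used throughout the code below is never shown being created; a minimal sketch of that step would look like this:

// Sketch: create the OpenGL ES 2.0 context referred to as _context below.
_context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if (!_context || ![EAGLContext setCurrentContext:_context]) {
    NSLog(@"Failed to create or activate the EAGLContext");
}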

1. Initialization

- (void)setupGL
{
    if (!_context || ![EAGLContext setCurrentContext:_context]) {
        return;
    }

    [self setupBuffers];
    [self loadShaders];

    glUseProgram(self.program);

    // 0 and 1 are the texture units of _lumaTexture and _chromaTexture respectively.
    glUniform1i(uniforms[UNIFORM_Y], 0);  // sample the Y plane from texture unit 0
    glUniform1i(uniforms[UNIFORM_UV], 1); // sample the UV plane from texture unit 1
    glUniform1f(uniforms[UNIFORM_ROTATION_ANGLE], 0); // set the rotation angle
    glUniformMatrix3fv(uniforms[UNIFORM_COLOR_CONVERSION_MATRIX], 1, GL_FALSE, _preferredConversion); // set the YUV -> RGB conversion matrix
}

After initializing the OpenGL context, this code does the following:

  1. Sets the texture unit for the Y plane
  2. Sets the texture unit for the UV plane
  3. Sets the texture rotation angle
  4. Sets the YUV -> RGB conversion matrix

A few points deserve attention here. OpenGL supports multiple texture units: you select a unit before binding a texture, and every texture operation applies to the currently active unit until you switch to another one. A uniform is a value that stays the same for every vertex and every fragment within a single render pass; since the textures and the conversion matrix do not change during rendering, uniforms are used to pass them to the shaders.
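To make the pairing concrete, here is a schematic sketch (not part of the sample) of how a texture unit, a bound texture, and a sampler uniform relate; chromaTexture is a hypothetical GLuint texture name, and in the sample the actual binding happens later, in displayPixelBuffer:

glActiveTexture(GL_TEXTURE1);                 // all texture calls now target unit 1
glBindTexture(GL_TEXTURE_2D, chromaTexture);  // bound into unit 1's GL_TEXTURE_2D slot
glUniform1i(uniforms[UNIFORM_UV], 1);         // tell the sampler to read from unit 1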

Setting up the framebuffer

- (void)setupBuffers
{
    glDisable(GL_DEPTH_TEST);

    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), 0);

    glEnableVertexAttribArray(ATTRIB_TEXCOORD);
    glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), 0);

    [self createBuffers];
}

- (void)createBuffers
{
    glGenFramebuffers(1, &_frameBufferHandle); // create the framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, _frameBufferHandle);

    glGenRenderbuffers(1, &_colorBufferHandle); // create the renderbuffer
    glBindRenderbuffer(GL_RENDERBUFFER, _colorBufferHandle);

    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self]; // allocate storage for the renderbuffer
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &_backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &_backingHeight);

    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorBufferHandle); // attach the renderbuffer to the framebuffer
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // check that the framebuffer is complete
        NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
    }
}

This step creates the framebuffer and the renderbuffer. The framebuffer is where OpenGL rendering output is stored. In this article, Apple describes several kinds of rendering destinations; the one used here is a Core Animation-aware renderbuffer, which shares its storage with a CAEAGLLayer. To use it, its storage must be allocated with renderbufferStorage:fromDrawable:.
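For renderbufferStorage:fromDrawable: to accept the view itself as the drawable, the view must be backed by a CAEAGLLayer. That setup is not shown in the excerpt above; a minimal sketch of what it typically looks like (configureLayer is a hypothetical helper name):

// The view's backing layer must be a CAEAGLLayer to serve as an EAGL drawable.
+ (Class)layerClass
{
    return [CAEAGLLayer class];
}

// Typical layer configuration before allocating renderbuffer storage.
- (void)configureLayer
{
    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
    eaglLayer.opaque = YES;
    eaglLayer.drawableProperties = @{ kEAGLDrawablePropertyRetainedBacking : @(NO),
                                      kEAGLDrawablePropertyColorFormat : kEAGLColorFormatRGBA8 };
}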

Setting up the shaders

- (BOOL)loadShaders
{
    GLuint vertShader = 0, fragShader = 0;

    // Create the shader program.
    self.program = glCreateProgram();

    if (![self compileShaderString:&vertShader type:GL_VERTEX_SHADER shaderString:shader_vsh]) {
        NSLog(@"Failed to compile vertex shader");
        return NO;
    }

    if (![self compileShaderString:&fragShader type:GL_FRAGMENT_SHADER shaderString:shader_fsh]) {
        NSLog(@"Failed to compile fragment shader");
        return NO;
    }

    // Attach vertex shader to program.
    glAttachShader(self.program, vertShader);

    // Attach fragment shader to program.
    glAttachShader(self.program, fragShader);

    // Bind attribute locations. This needs to be done prior to linking.
    glBindAttribLocation(self.program, ATTRIB_VERTEX, "position");   // position uses attribute index ATTRIB_VERTEX (0)
    glBindAttribLocation(self.program, ATTRIB_TEXCOORD, "texCoord"); // texCoord uses attribute index ATTRIB_TEXCOORD (1)

    // Link the program.
    if (![self linkProgram:self.program]) {
        NSLog(@"Failed to link program: %d", self.program);

        if (vertShader) {
            glDeleteShader(vertShader);
            vertShader = 0;
        }
        if (fragShader) {
            glDeleteShader(fragShader);
            fragShader = 0;
        }
        if (self.program) {
            glDeleteProgram(self.program);
            self.program = 0;
        }

        return NO;
    }

    // Get uniform locations.
    uniforms[UNIFORM_Y] = glGetUniformLocation(self.program, "SamplerY");
    uniforms[UNIFORM_UV] = glGetUniformLocation(self.program, "SamplerUV");
    // uniforms[UNIFORM_LUMA_THRESHOLD] = glGetUniformLocation(self.program, "lumaThreshold");
    // uniforms[UNIFORM_CHROMA_THRESHOLD] = glGetUniformLocation(self.program, "chromaThreshold");
    uniforms[UNIFORM_ROTATION_ANGLE] = glGetUniformLocation(self.program, "preferredRotation");
    uniforms[UNIFORM_COLOR_CONVERSION_MATRIX] = glGetUniformLocation(self.program, "colorConversionMatrix");

    // Release vertex and fragment shaders.
    if (vertShader) {
        glDetachShader(self.program, vertShader);
        glDeleteShader(vertShader);
    }
    if (fragShader) {
        glDetachShader(self.program, fragShader);
        glDeleteShader(fragShader);
    }

    return YES;
}

This step compiles the vertex and fragment shaders, links them into the program, and looks up the uniform locations that will be needed later when passing uniform data to the shaders.
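The compileShaderString:type:shaderString: and linkProgram: helpers called above are not shown in the article; minimal sketches of what they typically do:

// Sketch: compile a shader of the given type from a source string.
- (BOOL)compileShaderString:(GLuint *)shader type:(GLenum)type shaderString:(const GLchar *)shaderString
{
    *shader = glCreateShader(type);
    glShaderSource(*shader, 1, &shaderString, NULL);
    glCompileShader(*shader);

    GLint status = 0;
    glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);
    if (status == 0) {
        glDeleteShader(*shader);
        return NO;
    }
    return YES;
}

// Sketch: link the program and report success.
- (BOOL)linkProgram:(GLuint)prog
{
    glLinkProgram(prog);

    GLint status = 0;
    glGetProgramiv(prog, GL_LINK_STATUS, &status);
    return status != 0;
}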

Displaying a pixel buffer

- (void)setPixelBuffer:(CVPixelBufferRef)pb
{
    if (_pixelBuffer) {
        CVPixelBufferRelease(_pixelBuffer);
    }
    _pixelBuffer = CVPixelBufferRetain(pb);

    int frameWidth = (int)CVPixelBufferGetWidth(_pixelBuffer);
    int frameHeight = (int)CVPixelBufferGetHeight(_pixelBuffer);
    [self displayPixelBuffer:_pixelBuffer width:frameWidth height:frameHeight];
}

- (void)displayPixelBuffer:(CVPixelBufferRef)pixelBuffer width:(uint32_t)frameWidth height:(uint32_t)frameHeight
{
    if (!_context || ![EAGLContext setCurrentContext:_context]) {
        return;
    }

    if (pixelBuffer == NULL) {
        NSLog(@"Pixel buffer is null");
        return;
    }

    CVReturn err;

    size_t planeCount = CVPixelBufferGetPlaneCount(pixelBuffer);

    /*
     Use the color attachment of the pixel buffer to determine the appropriate color conversion matrix.
     */
    CFTypeRef colorAttachments = CVBufferGetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey, NULL);

    if (CFStringCompare(colorAttachments, kCVImageBufferYCbCrMatrix_ITU_R_601_4, 0) == kCFCompareEqualTo) {
        _preferredConversion = kColorConversion601;
    }
    else {
        _preferredConversion = kColorConversion709;
    }

    /*
     CVOpenGLESTextureCacheCreateTextureFromImage will create a GLES texture optimally from a CVPixelBufferRef.
     */

    /*
     Create Y and UV textures from the pixel buffer. These textures will be drawn on the framebuffer.
     */

    CVOpenGLESTextureCacheRef _videoTextureCache;

    // Create a CVOpenGLESTextureCacheRef for optimal CVPixelBufferRef-to-GLES-texture conversion.
    err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _context, NULL, &_videoTextureCache);
    if (err != noErr) {
        NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
        return;
    }

    // Y plane.
    glActiveTexture(GL_TEXTURE0); // activate texture unit 0

    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                       _videoTextureCache,
                                                       pixelBuffer,
                                                       NULL,
                                                       GL_TEXTURE_2D,
                                                       GL_RED_EXT, // the Y samples all go into the red component
                                                       frameWidth,
                                                       frameHeight,
                                                       GL_RED_EXT,
                                                       GL_UNSIGNED_BYTE,
                                                       0,
                                                       &_lumaTexture);
    if (err) {
        NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }

    glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    if (planeCount == 2) {
        // UV plane.
        glActiveTexture(GL_TEXTURE1); // activate texture unit 1
        err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                           _videoTextureCache,
                                                           pixelBuffer,
                                                           NULL,
                                                           GL_TEXTURE_2D,
                                                           GL_RG_EXT, // the U and V samples go into the red and green components
                                                           frameWidth / 2,
                                                           frameHeight / 2,
                                                           GL_RG_EXT,
                                                           GL_UNSIGNED_BYTE,
                                                           1,
                                                           &_chromaTexture);
        if (err) {
            NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
        }

        // unit 1's GL_TEXTURE_2D slot now holds the chroma texture
        glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }

    glBindFramebuffer(GL_FRAMEBUFFER, _frameBufferHandle);

    // Set the viewport to the entire view.
    glViewport(0, 0, _backingWidth, _backingHeight);

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    // Use the shader program.
    glUseProgram(self.program);
    // glUniform1f(uniforms[UNIFORM_LUMA_THRESHOLD], 1);
    // glUniform1f(uniforms[UNIFORM_CHROMA_THRESHOLD], 1);
    glUniform1f(uniforms[UNIFORM_ROTATION_ANGLE], 0);
    glUniformMatrix3fv(uniforms[UNIFORM_COLOR_CONVERSION_MATRIX], 1, GL_FALSE, _preferredConversion);

    // Set up the quad vertices with respect to the orientation and aspect ratio of the video.
    CGRect viewBounds = self.bounds;
    CGSize contentSize = CGSizeMake(frameWidth, frameHeight);
    // the largest rect that fits inside bounds while preserving the aspect ratio
    CGRect vertexSamplingRect = AVMakeRectWithAspectRatioInsideRect(contentSize, viewBounds);

    // Compute normalized quad coordinates to draw the frame into.
    CGSize normalizedSamplingSize = CGSizeMake(0.0, 0.0);
    CGSize cropScaleAmount = CGSizeMake(vertexSamplingRect.size.width / viewBounds.size.width,
                                        vertexSamplingRect.size.height / viewBounds.size.height); // scale factors

    // Normalize the quad vertices.
    if (cropScaleAmount.width > cropScaleAmount.height) {
        normalizedSamplingSize.width = 1.0;
        normalizedSamplingSize.height = cropScaleAmount.height / cropScaleAmount.width;
    }
    else {
        normalizedSamplingSize.width = cropScaleAmount.width / cropScaleAmount.height;
        normalizedSamplingSize.height = 1.0;
    }

    /*
     The quad vertex data defines the region of the 2D plane onto which we draw our pixel buffers.
     Vertex data formed using (-1,-1) and (1,1) as the bottom left and top right coordinates respectively covers the entire screen.
     */
    GLfloat quadVertexData[] = { // the four corners of the quad
        -1 * normalizedSamplingSize.width, -1 * normalizedSamplingSize.height,
             normalizedSamplingSize.width, -1 * normalizedSamplingSize.height,
        -1 * normalizedSamplingSize.width,      normalizedSamplingSize.height,
             normalizedSamplingSize.width,      normalizedSamplingSize.height,
    };

    // Update attribute values.
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, quadVertexData);
    glEnableVertexAttribArray(ATTRIB_VERTEX);

    /*
     The texture vertices are set up such that we flip the texture vertically. This is so that our top-left-origin buffers match OpenGL's bottom-left texture coordinate system.
     */
    CGRect textureSamplingRect = CGRectMake(0, 0, 1, 1);
    GLfloat quadTextureData[] = { // flip the texture vertically
        CGRectGetMinX(textureSamplingRect), CGRectGetMaxY(textureSamplingRect),
        CGRectGetMaxX(textureSamplingRect), CGRectGetMaxY(textureSamplingRect),
        CGRectGetMinX(textureSamplingRect), CGRectGetMinY(textureSamplingRect),
        CGRectGetMaxX(textureSamplingRect), CGRectGetMinY(textureSamplingRect)
    };

    glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, 0, 0, quadTextureData);
    glEnableVertexAttribArray(ATTRIB_TEXCOORD);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glBindRenderbuffer(GL_RENDERBUFFER, _colorBufferHandle);
    [_context presentRenderbuffer:GL_RENDERBUFFER]; // present the Core Animation-aware renderbuffer

    [self cleanUpTextures];
    // Periodic texture cache flush every frame.
    CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);

    if (_videoTextureCache) {
        CFRelease(_videoTextureCache);
    }
}

This is where most of the work of rendering a pixel buffer happens. The key is to pull the Y-plane and UV-plane data out of the pixel buffer, build two textures from them, and hand those to the shaders. Note the CVOpenGLESTextureCacheCreateTextureFromImage call. The Y-plane texture is created with the GL_RED_EXT format: each Y sample is eight bits, so it is stored in the eight-bit R position of the texel. Creating the UV-plane texture is much the same, apart from a few parameters. The UV plane uses GL_RG_EXT, which places the U and V data in the R and G positions, sixteen bits in total. Also, the width and height here are half those of the Y plane, because the format being handled is NV12, in which every four Y samples share one UV sample pair.
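The half-resolution chroma layout can be confirmed directly with the CVPixelBuffer plane accessors. A minimal sketch:

CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

// Plane 0: full-resolution Y, one byte per sample.
size_t yWidth   = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);   // == frameWidth
size_t yHeight  = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);  // == frameHeight

// Plane 1: interleaved CbCr, one UV pair per 2x2 block of Y samples.
size_t uvWidth  = CVPixelBufferGetWidthOfPlane(pixelBuffer, 1);   // == frameWidth / 2
size_t uvHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1);  // == frameHeight / 2

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);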

Shader processing

const GLchar *shader_fsh = (const GLchar *)"varying highp vec2 texCoordVarying;"
    "precision mediump float;"
    "uniform sampler2D SamplerY;"
    "uniform sampler2D SamplerUV;"
    "uniform mat3 colorConversionMatrix;"
    "void main()"
    "{"
    "    mediump vec3 yuv;"
    "    lowp vec3 rgb;"
    // Subtract constants so the video range starts at 0.
    "    yuv.x = (texture2D(SamplerY, texCoordVarying).r - (16.0/255.0));"
    "    yuv.yz = (texture2D(SamplerUV, texCoordVarying).rg - vec2(0.5, 0.5));"
    "    rgb = colorConversionMatrix * yuv;"
    "    gl_FragColor = vec4(rgb, 1);"
    "}";

const GLchar *shader_vsh = (const GLchar *)"attribute vec4 position;"
    "attribute vec2 texCoord;"
    "uniform float preferredRotation;"
    "varying vec2 texCoordVarying;"
    "void main()"
    "{"
    "    mat4 rotationMatrix = mat4(cos(preferredRotation), -sin(preferredRotation), 0.0, 0.0,"
    "                               sin(preferredRotation),  cos(preferredRotation), 0.0, 0.0,"
    "                               0.0, 0.0, 1.0, 0.0,"
    "                               0.0, 0.0, 0.0, 1.0);"
    "    gl_Position = position * rotationMatrix;"
    "    texCoordVarying = texCoord;"
    "}";

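Because the rotation angle is a uniform, a caller can rotate the whole frame without touching the vertex data. Hypothetical usage for a quarter-turn:

glUniform1f(uniforms[UNIFORM_ROTATION_ANGLE], M_PI_2); // rotate the quad 90 degrees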
Note how the fragment shader samples the textures.

yuv.x = (texture2D(SamplerY, texCoordVarying).r - (16.0/255.0));
yuv.yz = (texture2D(SamplerUV, texCoordVarying).rg - vec2(0.5, 0.5));

These two lines extract the three YUV components from the textures: the Y component comes from the R value of the luma texture, and the U and V components come from the RG values of the chroma texture, matching how the textures were created. The 16.0/255.0 offset shifts video-range luma so that it starts at 0, and the 0.5 offset re-centers the chroma components around zero before the matrix multiply.
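The kColorConversion601 and kColorConversion709 matrices used in that multiply are never shown in the excerpts above; in Apple's sample code they are the standard video-range BT.601 and BT.709 coefficients, reproduced here for reference:

// BT.601 (SDTV) video-range YUV -> RGB, column-major as expected by glUniformMatrix3fv.
static const GLfloat kColorConversion601[] = {
    1.164,  1.164, 1.164,
    0.0,   -0.392, 2.017,
    1.596, -0.813, 0.0,
};

// BT.709 (HDTV) video-range YUV -> RGB.
static const GLfloat kColorConversion709[] = {
    1.164,  1.164, 1.164,
    0.0,   -0.213, 2.112,
    1.793, -0.533, 0.0,
};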

Note the varying variable here:

varying vec2 texCoordVarying

A varying is an output of the vertex shader and an input of the fragment shader. Varying variables are linearly interpolated during rasterization; here the varying carries the texture coordinate, so each fragment receives an appropriately interpolated coordinate. For example, a fragment halfway between two vertices with texture coordinates (0, 0) and (1, 0) receives (0.5, 0).

