The V4L2 API was primarily designed for devices exchanging image data with applications. The v4l2_pix_format and v4l2_pix_format_mplane structures define the format and layout of an image in memory. The former is used with the single-planar API, while the latter is used with the multi-planar version (see ). Image formats are negotiated with the &VIDIOC-S-FMT; ioctl. (The explanations here focus on video capturing and output; for overlay frame buffer formats see also &VIDIOC-G-FBUF;.)
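As a minimal sketch of this negotiation (the device name, resolution and pixel format below are examples only, and a driver is free to adjust all of them), an application could request a format like this:

/* Sketch: request a 640x480 YUYV capture format with VIDIOC_S_FMT. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int request_format(const char *devname)
{
    struct v4l2_format fmt;
    int fd;

    fd = open(devname, O_RDWR);
    if (fd < 0)
        return -1;

    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    fmt.fmt.pix.field = V4L2_FIELD_INTERLACED;
    fmt.fmt.pix.bytesperline = 0;   /* zero: let the driver pick the padding */

    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
        close(fd);
        return -1;
    }

    /* The driver may have adjusted width, height, bytesperline and
       sizeimage; always use the returned values. */
    printf("granted %ux%u, %u bytes per line, %u bytes per image\n",
           fmt.fmt.pix.width, fmt.fmt.pix.height,
           fmt.fmt.pix.bytesperline, fmt.fmt.pix.sizeimage);
    return fd;
}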
Single-planar format structure

struct v4l2_pix_format

__u32 width: Image width in pixels.

__u32 height: Image height in pixels. Applications set these fields to request an image size, drivers return the closest possible values. In case of planar formats the width and height apply to the largest plane. To avoid ambiguities drivers must return values rounded up to a multiple of the scale factor of any smaller planes. For example, when the image format is YUV 4:2:0, width and height must be multiples of two.

__u32 pixelformat: The pixel format or type of compression, set by the application. This is a little endian four character code. V4L2 defines standard RGB formats in , YUV formats in , and reserved codes in .

&v4l2-field; field: Video images are typically interlaced. Applications can request to capture or output only the top or bottom field, or both fields interlaced or sequentially stored in one buffer or alternating in separate buffers. Drivers return the actual field order selected. For details see .

__u32 bytesperline: Distance in bytes between the leftmost pixels in two adjacent lines. Both applications and drivers can set this field to request padding bytes at the end of each line. Drivers however may ignore the value requested by the application, returning width times bytes per pixel or a larger value required by the hardware. That implies applications can just set this field to zero to get a reasonable default. Video hardware may access padding bytes, therefore they must reside in accessible memory. Consider cases where padding bytes after the last line of an image cross a system page boundary. Input devices may write padding bytes; their value is undefined. Output devices ignore the contents of padding bytes. When the image format is planar the bytesperline value applies to the largest plane and is divided by the same factor as the width field for any smaller planes. For example the Cb and Cr planes of a YUV 4:2:0 image have half as many padding bytes following each line as the Y plane. To avoid ambiguities drivers must return a bytesperline value rounded up to a multiple of the scale factor.

__u32 sizeimage: Size in bytes of the buffer to hold a complete image, set by the driver. Usually this is bytesperline times height. When the image consists of variable length compressed data this is the maximum number of bytes required to hold an image.

&v4l2-colorspace; colorspace: This information supplements the pixelformat and must be set by the driver, see .

__u32 priv: Reserved for custom (driver defined) additional information about formats. When not used drivers and applications must set this field to zero.
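The effect of the bytesperline padding can be illustrated with a short sketch. The following helper is not part of the API; the function name and the choice of a packed 3-byte-per-pixel format are assumptions for this example. It copies an image returned by the driver into a tightly packed buffer, skipping the padding at the end of each line:

/* Sketch: remove line padding from a packed 3-byte-per-pixel image.
 * 'src' points to the sizeimage bytes returned by the driver, 'dst'
 * must hold width * height * 3 bytes. */
#include <string.h>

void depad_rgb24(unsigned char *dst, const unsigned char *src,
                 unsigned int width, unsigned int height,
                 unsigned int bytesperline)
{
    unsigned int packed = width * 3;   /* bytes of pixel data per line */
    unsigned int y;

    for (y = 0; y < height; y++) {
        memcpy(dst, src, packed);
        dst += packed;                 /* destination lines are tight  */
        src += bytesperline;           /* skip the driver's padding    */
    }
}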
Multi-planar format structures

The v4l2_plane_pix_format structures define the size and layout of each plane in a multi-planar format. The v4l2_pix_format_mplane structure contains information common to all planes (such as image width and height) and an array of v4l2_plane_pix_format structures, describing all planes of that format.

struct v4l2_plane_pix_format

__u32 sizeimage: Maximum size in bytes required for image data in this plane.

__u16 bytesperline: Distance in bytes between the leftmost pixels in two adjacent lines.

__u16 reserved[7]: Reserved for future extensions. Should be zeroed by the application.
struct v4l2_pix_format_mplane

__u32 width: Image width in pixels.

__u32 height: Image height in pixels.

__u32 pixelformat: The pixel format. Both single- and multi-planar four character codes can be used.

&v4l2-field; field: See &v4l2-pix-format;.

&v4l2-colorspace; colorspace: See &v4l2-pix-format;.

&v4l2-plane-pix-format; plane_fmt[VIDEO_MAX_PLANES]: An array of structures describing the format of each plane this pixel format consists of. The number of valid entries in this array has to be put in the num_planes field.

__u8 num_planes: Number of planes (i.e. separate memory buffers) for this format and the number of valid entries in the plane_fmt array.

__u8 reserved[11]: Reserved for future extensions. Should be zeroed by the application.
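As a sketch of how these structures are used (the device is assumed to be a multi-planar capture device supporting V4L2_PIX_FMT_NV12M; the resolution and function name are illustrative), an application could negotiate a two-plane format like this:

/* Sketch: request a 1280x720 NV12M (two-plane) capture format.
 * The driver fills in plane_fmt[i].bytesperline and sizeimage. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int set_mplane_format(int fd)
{
    struct v4l2_format fmt;
    unsigned int i;

    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
    fmt.fmt.pix_mp.width = 1280;
    fmt.fmt.pix_mp.height = 720;
    fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12M;
    fmt.fmt.pix_mp.field = V4L2_FIELD_NONE;
    fmt.fmt.pix_mp.num_planes = 2;   /* Y plane and interleaved CbCr plane */

    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        return -1;

    for (i = 0; i < fmt.fmt.pix_mp.num_planes; i++)
        printf("plane %u: %u bytes per line, %u bytes total\n", i,
               fmt.fmt.pix_mp.plane_fmt[i].bytesperline,
               fmt.fmt.pix_mp.plane_fmt[i].sizeimage);

    return 0;
}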
Standard Image Formats

In order to exchange images between drivers and applications, it is necessary to have standard image data formats which both sides will interpret the same way. V4L2 includes several such formats, and this section is intended to be an unambiguous specification of the standard image data formats in V4L2.

V4L2 drivers are not limited to these formats, however. Driver-specific formats are possible. In that case the application may depend on a codec to convert images to one of the standard formats when needed. But the data can still be stored and retrieved in the proprietary format. For example, a device may support a proprietary compressed format. Applications can still capture and save the data in the compressed format, saving much disk space, and later use a codec to convert the images to the X Window System screen format when the video is to be displayed. Even so, ultimately, some standard formats are needed, so the V4L2 specification would not be complete without well-defined standard formats.

The V4L2 standard formats are mainly uncompressed formats. The pixels are always arranged in memory from left to right, and from top to bottom. The first byte of data in the image buffer is always for the leftmost pixel of the topmost row. Following that is the pixel immediately to its right, and so on until the end of the top row of pixels. Following the rightmost pixel of the row there may be zero or more bytes of padding to guarantee that each row of pixel data has a certain alignment. Following the pad bytes, if any, is data for the leftmost pixel of the second row from the top, and so on. The last row has just as many pad bytes after it as the other rows.

In V4L2 each format has an identifier which looks like V4L2_PIX_FMT_XXX, defined in the videodev2.h header file. These identifiers represent four character (FourCC) codes which are also listed below, however they are not the same as those used in the Windows world.

For some formats, data is stored in separate, discontiguous memory buffers. Those formats are identified by a separate set of FourCC codes and are referred to as "multi-planar formats". For example, a YUV422 frame is normally stored in one memory buffer, but it can also be placed in two or three separate buffers, with the Y component in one buffer and the CbCr components in another in the 2-planar version, or with each component in its own buffer in the 3-planar case. Those sub-buffers are referred to as "planes".
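The four character codes are built from the four ASCII characters listed with each format; videodev2.h provides the v4l2_fourcc() macro for this. A small sketch (the chosen characters are just an example) shows how such a code maps to the four bytes of the identifier:

/* Sketch: build a FourCC and print it back as four characters. */
#include <stdio.h>
#include <linux/videodev2.h>

int main(void)
{
    __u32 fourcc = v4l2_fourcc('Y', 'U', 'Y', 'V');  /* same value as V4L2_PIX_FMT_YUYV */

    printf("0x%08x = %c%c%c%c\n", fourcc,
           (char)(fourcc & 0xff), (char)((fourcc >> 8) & 0xff),
           (char)((fourcc >> 16) & 0xff), (char)((fourcc >> 24) & 0xff));
    return 0;
}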
Colorspaces

[intro]

Gamma Correction

[to do]

E'R = f(R)
E'G = f(G)
E'B = f(B)

Construction of luminance and color-difference signals

[to do]

E'Y = CoeffR E'R + CoeffG E'G + CoeffB E'B
(E'R - E'Y) = E'R - CoeffR E'R - CoeffG E'G - CoeffB E'B
(E'B - E'Y) = E'B - CoeffR E'R - CoeffG E'G - CoeffB E'B

Re-normalized color-difference signals

The color-difference signals are scaled back to unity range [-0.5;+0.5]:

KB = 0.5 / (1 - CoeffB)
KR = 0.5 / (1 - CoeffR)

PB = KB (E'B - E'Y) = 0.5 E'B - 0.5 (CoeffR / (1 - CoeffB)) E'R - 0.5 (CoeffG / (1 - CoeffB)) E'G
PR = KR (E'R - E'Y) = 0.5 E'R - 0.5 (CoeffG / (1 - CoeffR)) E'G - 0.5 (CoeffB / (1 - CoeffR)) E'B

Quantization

[to do]

Y' = (Lum. Levels - 1) · E'Y + Lum. Offset
CB = (Chrom. Levels - 1) · PB + Chrom. Offset
CR = (Chrom. Levels - 1) · PR + Chrom. Offset

Rounding to the nearest integer and clamping to the range [0;255] finally yields the digital color components Y'CbCr stored in YUV images.

ITU-R Rec. BT.601 color conversion

Forward Transformation

/* Round to the nearest integer and clamp to [0;255]. */
int
clamp (double x)
{
        int r = x + 0.5;        /* round to nearest; negative values are clamped below */

        if (r < 0)         return 0;
        else if (r > 255)  return 255;
        else               return r;
}

/* ER, EG, EB: gamma corrected RGB input [0;255]
   Y1, Cb, Cr: output [0;255] */
void
rgb_to_ycbcr (int ER, int EG, int EB, int *Y1, int *Cb, int *Cr)
{
        double r, g, b;         /* temporaries */
        double y1, pb, pr;

        r = ER / 255.0;
        g = EG / 255.0;
        b = EB / 255.0;

        y1 =  0.299 * r + 0.587 * g + 0.114 * b;
        pb = -0.169 * r - 0.331 * g + 0.5   * b;
        pr =  0.5   * r - 0.419 * g - 0.081 * b;

        *Y1 = clamp (219 * y1 + 16);
        *Cb = clamp (224 * pb + 128);
        *Cr = clamp (224 * pr + 128);

        /* or shorter */

        y1 = 0.299 * ER + 0.587 * EG + 0.114 * EB;

        *Y1 = clamp ( (219 / 255.0)                    *  y1       + 16);
        *Cb = clamp (((224 / 255.0) / (2 - 2 * 0.114)) * (EB - y1) + 128);
        *Cr = clamp (((224 / 255.0) / (2 - 2 * 0.299)) * (ER - y1) + 128);
}

Inverse Transformation

/* Y1, Cb, Cr: gamma pre-corrected input [0;255]
   ER, EG, EB: output [0;255]
   Uses the clamp() helper from the forward transformation. */
void
ycbcr_to_rgb (int Y1, int Cb, int Cr, int *ER, int *EG, int *EB)
{
        double r, g, b;         /* temporaries */
        double y1, pb, pr;

        /* Scale back to the nominal ranges [0;1] and [-0.5;+0.5]. Out-of-range
           values of y1, pb and pr are not limited here; the results are simply
           clamped. */
        y1 = (Y1 -  16) / 219.0;
        pb = (Cb - 128) / 224.0;
        pr = (Cr - 128) / 224.0;

        r = 1.0 * y1 + 0     * pb + 1.402 * pr;
        g = 1.0 * y1 - 0.344 * pb - 0.714 * pr;
        b = 1.0 * y1 + 1.772 * pb + 0     * pr;

        *ER = clamp (r * 255);
        *EG = clamp (g * 255);
        *EB = clamp (b * 255);
}
enum v4l2_colorspace

The coordinates of the color primaries are given in the CIE system (1931). For each colorspace the chromaticities, white point, gamma correction, luminance E'Y and quantization are listed.

V4L2_COLORSPACE_SMPTE170M (1): NTSC/PAL, according to .
Chromaticities: red x = 0.630, y = 0.340; green x = 0.310, y = 0.595; blue x = 0.155, y = 0.070.
White point: x = 0.3127, y = 0.3290 (Illuminant D65).
Gamma correction: E' = 4.5 I for I ≤ 0.018; E' = 1.099 I^0.45 - 0.099 for I > 0.018.
Luminance: E'Y = 0.299 E'R + 0.587 E'G + 0.114 E'B.
Quantization: Y' = 219 E'Y + 16; Cb, Cr = 224 PB,R + 128.

V4L2_COLORSPACE_SMPTE240M (2): 1125-line (US) HDTV, see .
Chromaticities: red x = 0.630, y = 0.340; green x = 0.310, y = 0.595; blue x = 0.155, y = 0.070.
White point: x = 0.3127, y = 0.3290 (Illuminant D65).
Gamma correction: E' = 4 I for I ≤ 0.0228; E' = 1.1115 I^0.45 - 0.1115 for I > 0.0228.
Luminance: E'Y = 0.212 E'R + 0.701 E'G + 0.087 E'B.
Quantization: Y' = 219 E'Y + 16; Cb, Cr = 224 PB,R + 128.

V4L2_COLORSPACE_REC709 (3): HDTV and modern devices, see .
Chromaticities: red x = 0.640, y = 0.330; green x = 0.300, y = 0.600; blue x = 0.150, y = 0.060.
White point: x = 0.3127, y = 0.3290 (Illuminant D65).
Gamma correction: E' = 4.5 I for I ≤ 0.018; E' = 1.099 I^0.45 - 0.099 for I > 0.018.
Luminance: E'Y = 0.2125 E'R + 0.7154 E'G + 0.0721 E'B.
Quantization: Y' = 219 E'Y + 16; Cb, Cr = 224 PB,R + 128.

V4L2_COLORSPACE_BT878 (4): Broken Bt878 extents. The ubiquitous Bt878 video capture chip quantizes E'Y to 238 levels, yielding a range of Y' = 16 … 253, unlike the Rec. 601 range Y' = 16 … 235. This is not a typo in the Bt878 documentation, it has been implemented in silicon. The chroma extents are unclear.
Chromaticities, white point and gamma correction: ?
Luminance: E'Y = 0.299 E'R + 0.587 E'G + 0.114 E'B.
Quantization: Y' = 237 E'Y + 16; Cb, Cr = 224 PB,R + 128 (probably).

V4L2_COLORSPACE_470_SYSTEM_M (5): M/NTSC, according to . No identifier exists for M/PAL, which uses the chromaticities of M/NTSC; the remaining parameters are equal to B and G/PAL.
Chromaticities: red x = 0.67, y = 0.33; green x = 0.21, y = 0.71; blue x = 0.14, y = 0.08.
White point: x = 0.310, y = 0.316 (Illuminant C).
Gamma correction: ?
Luminance: E'Y = 0.299 E'R + 0.587 E'G + 0.114 E'B.
Quantization: Y' = 219 E'Y + 16; Cb, Cr = 224 PB,R + 128.

V4L2_COLORSPACE_470_SYSTEM_BG (6): 625-line PAL and SECAM systems, according to .
Chromaticities: red x = 0.64, y = 0.33; green x = 0.29, y = 0.60; blue x = 0.15, y = 0.06.
White point: x = 0.313, y = 0.329 (Illuminant D65).
Gamma correction: ?
Luminance: E'Y = 0.299 E'R + 0.587 E'G + 0.114 E'B.
Quantization: Y' = 219 E'Y + 16; Cb, Cr = 224 PB,R + 128.

V4L2_COLORSPACE_JPEG (7): JPEG Y'CbCr, see , .
Chromaticities, white point and gamma correction: ?
Luminance: E'Y = 0.299 E'R + 0.587 E'G + 0.114 E'B.
Quantization: Y' = 256 E'Y + 16; Cb, Cr = 256 PB,R + 128. Note JFIF quantizes Y'PBPR in the ranges [0;+1] and [-0.5;+0.5] to 257 levels, however Y'CbCr signals are still clamped to [0;255].

V4L2_COLORSPACE_SRGB (8): [?]
Chromaticities: red x = 0.640, y = 0.330; green x = 0.300, y = 0.600; blue x = 0.150, y = 0.060.
White point: x = 0.3127, y = 0.3290 (Illuminant D65).
Gamma correction: E' = 4.5 I for I ≤ 0.018; E' = 1.099 I^0.45 - 0.099 for I > 0.018.
Luminance and quantization: n/a.
Indexed Format

In this format each pixel is represented by an 8 bit index into a 256 entry ARGB palette. It is intended for Video Output Overlays only. There are no ioctls to access the palette; this must be done with ioctls of the Linux framebuffer API.

Indexed Image Format

V4L2_PIX_FMT_PAL8 ('PAL8'): byte 0, bit 7 … bit 0 = i7 i6 i5 i4 i3 i2 i1 i0 (the palette index).
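As noted above, the palette itself is programmed through the Linux framebuffer API rather than through V4L2. The following sketch only illustrates that ioctl; whether a particular overlay driver exposes its palette through FBIOPUTCMAP, and the grey-ramp palette used here, are assumptions for this example:

/* Sketch: load a 256 entry grey palette through the framebuffer API. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fb.h>

int load_palette(const char *fbdev)
{
    __u16 red[256], green[256], blue[256], transp[256];
    struct fb_cmap cmap;
    unsigned int i;
    int fd, ret;

    for (i = 0; i < 256; i++) {
        red[i] = green[i] = blue[i] = i << 8;   /* 16-bit color components */
        transp[i] = 0;                          /* fully opaque            */
    }

    cmap.start = 0;
    cmap.len = 256;
    cmap.red = red;
    cmap.green = green;
    cmap.blue = blue;
    cmap.transp = transp;

    fd = open(fbdev, O_RDWR);
    if (fd < 0)
        return -1;

    ret = ioctl(fd, FBIOPUTCMAP, &cmap);
    close(fd);
    return ret;
}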
RGB Formats &sub-packed-rgb; &sub-sbggr8; &sub-sgbrg8; &sub-sgrbg8; &sub-srggb8; &sub-sbggr16; &sub-srggb10; &sub-srggb10alaw8; &sub-srggb10dpcm8; &sub-srggb12;
YUV Formats

YUV is the format native to TV broadcast and composite video signals. It separates the brightness information (Y) from the color information (U and V, or Cb and Cr). The color information consists of red and blue color difference signals; this way the green component can be reconstructed from the brightness component and the two difference signals. See for conversion examples. YUV was chosen because early television would only transmit brightness information. To add color in a way compatible with existing receivers a new signal carrier was added to transmit the color difference signals. In addition, in the YUV format the U and V components usually have lower resolution than the Y component. This is an analog video compression technique that takes advantage of a property of the human visual system: it is more sensitive to brightness than to color information. A sketch of how this subsampling affects plane sizes follows the format list below.

&sub-packed-yuv; &sub-grey; &sub-y10; &sub-y12; &sub-y10b; &sub-y16; &sub-uv8; &sub-yuyv; &sub-uyvy; &sub-yvyu; &sub-vyuy; &sub-y41p; &sub-yuv420; &sub-yuv420m; &sub-yvu420m; &sub-yuv410; &sub-yuv422p; &sub-yuv411p; &sub-nv12; &sub-nv12m; &sub-nv12mt; &sub-nv16; &sub-nv24; &sub-m420;
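As a concrete illustration of the chroma subsampling, the following sketch computes the nominal per-plane sizes of a three-plane YUV 4:2:0 image such as V4L2_PIX_FMT_YUV420M; it assumes even width and height and ignores any per-line padding the driver may add through bytesperline:

/* Sketch: nominal plane sizes for a three-plane YUV 4:2:0 image.
 * Real buffers may be larger because of per-line padding. */
struct yuv420_sizes {
    unsigned int y_size;    /* one byte per pixel                  */
    unsigned int cb_size;   /* half the width, half the height     */
    unsigned int cr_size;   /* same subsampling as the Cb plane    */
};

static struct yuv420_sizes
yuv420_plane_sizes(unsigned int width, unsigned int height)
{
    struct yuv420_sizes s;

    s.y_size  = width * height;
    s.cb_size = (width / 2) * (height / 2);
    s.cr_size = s.cb_size;

    return s;
}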
Compressed Formats

Compressed Image Formats

V4L2_PIX_FMT_JPEG ('JPEG'): TBD. See also &VIDIOC-G-JPEGCOMP;, &VIDIOC-S-JPEGCOMP;.
V4L2_PIX_FMT_MPEG ('MPEG'): MPEG multiplexed stream. The actual format is determined by the extended control V4L2_CID_MPEG_STREAM_TYPE, see .
V4L2_PIX_FMT_H264 ('H264'): H264 video elementary stream with start codes.
V4L2_PIX_FMT_H264_NO_SC ('AVC1'): H264 video elementary stream without start codes.
V4L2_PIX_FMT_H264_MVC ('MVC'): H264 MVC video elementary stream.
V4L2_PIX_FMT_H263 ('H263'): H263 video elementary stream.
V4L2_PIX_FMT_MPEG1 ('MPG1'): MPEG1 video elementary stream.
V4L2_PIX_FMT_MPEG2 ('MPG2'): MPEG2 video elementary stream.
V4L2_PIX_FMT_MPEG4 ('MPG4'): MPEG4 video elementary stream.
V4L2_PIX_FMT_XVID ('XVID'): Xvid video elementary stream.
V4L2_PIX_FMT_VC1_ANNEX_G ('VC1G'): VC1, SMPTE 421M Annex G compliant stream.
V4L2_PIX_FMT_VC1_ANNEX_L ('VC1L'): VC1, SMPTE 421M Annex L compliant stream.
V4L2_PIX_FMT_VP8 ('VP8'): VP8 video elementary stream.
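For V4L2_PIX_FMT_MPEG the multiplex type is chosen through the MPEG control class. The sketch below assumes a driver that implements V4L2_CID_MPEG_STREAM_TYPE and simply shows how the extended control could be set to select an MPEG-2 program stream:

/* Sketch: select an MPEG-2 program stream on a driver that produces
 * V4L2_PIX_FMT_MPEG multiplexed data. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int select_mpeg2_ps(int fd)
{
    struct v4l2_ext_control ctrl;
    struct v4l2_ext_controls ctrls;

    memset(&ctrl, 0, sizeof(ctrl));
    ctrl.id = V4L2_CID_MPEG_STREAM_TYPE;
    ctrl.value = V4L2_MPEG_STREAM_TYPE_MPEG2_PS;

    memset(&ctrls, 0, sizeof(ctrls));
    ctrls.ctrl_class = V4L2_CTRL_CLASS_MPEG;
    ctrls.count = 1;
    ctrls.controls = &ctrl;

    return ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);
}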
Reserved Format Identifiers

These formats are not defined by this specification; they are just listed for reference and to avoid naming conflicts. If you want to register your own format, send an e-mail to the linux-media mailing list &v4l-ml; for inclusion in the videodev2.h file. If you want to share your format with other developers, add a link to your documentation and send a copy to the linux-media mailing list for inclusion in this section. If you think your format should be listed in a standard format section, please make a proposal on the linux-media mailing list.

Reserved Image Formats

V4L2_PIX_FMT_DV ('dvsd'): unknown.
V4L2_PIX_FMT_ET61X251 ('E625'): Compressed format of the ET61X251 driver.
V4L2_PIX_FMT_HI240 ('HI24'): 8 bit RGB format used by the BTTV driver.
V4L2_PIX_FMT_HM12 ('HM12'): YUV 4:2:0 format used by the IVTV driver, http://www.ivtvdriver.org/. The format is documented in the kernel sources in the file Documentation/video4linux/cx2341x/README.hm12.
V4L2_PIX_FMT_CPIA1 ('CPIA'): YUV format used by the gspca cpia1 driver.
V4L2_PIX_FMT_JPGL ('JPGL'): JPEG-Light format (Pegasus Lossless JPEG) used in Divio webcams NW 80x.
V4L2_PIX_FMT_SPCA501 ('S501'): YUYV per line used by the gspca driver.
V4L2_PIX_FMT_SPCA505 ('S505'): YYUV per line used by the gspca driver.
V4L2_PIX_FMT_SPCA508 ('S508'): YUVY per line used by the gspca driver.
V4L2_PIX_FMT_SPCA561 ('S561'): Compressed GBRG Bayer format used by the gspca driver.
V4L2_PIX_FMT_PAC207 ('P207'): Compressed BGGR Bayer format used by the gspca driver.
V4L2_PIX_FMT_MR97310A ('M310'): Compressed BGGR Bayer format used by the gspca driver.
V4L2_PIX_FMT_JL2005BCD ('JL20'): JPEG compressed RGGB Bayer format used by the gspca driver.
V4L2_PIX_FMT_OV511 ('O511'): OV511 JPEG format used by the gspca driver.
V4L2_PIX_FMT_OV518 ('O518'): OV518 JPEG format used by the gspca driver.
V4L2_PIX_FMT_PJPG ('PJPG'): Pixart 73xx JPEG format used by the gspca driver.
V4L2_PIX_FMT_SE401 ('S401'): Compressed RGB format used by the gspca se401 driver.
V4L2_PIX_FMT_SQ905C ('905C'): Compressed RGGB Bayer format used by the gspca driver.
V4L2_PIX_FMT_MJPEG ('MJPG'): Compressed format used by the Zoran driver.
V4L2_PIX_FMT_PWC1 ('PWC1'): Compressed format of the PWC driver.
V4L2_PIX_FMT_PWC2 ('PWC2'): Compressed format of the PWC driver.
V4L2_PIX_FMT_SN9C10X ('S910'): Compressed format of the SN9C102 driver.
V4L2_PIX_FMT_SN9C20X_I420 ('S920'): YUV 4:2:0 format of the gspca sn9c20x driver.
V4L2_PIX_FMT_SN9C2028 ('SONX'): Compressed GBRG Bayer format of the gspca sn9c2028 driver.
V4L2_PIX_FMT_STV0680 ('S680'): Bayer format of the gspca stv0680 driver.
V4L2_PIX_FMT_WNVA ('WNVA'): Used by the Winnov Videum driver, http://www.thedirks.org/winnov/.
V4L2_PIX_FMT_TM6000 ('TM60'): Used by the Trident tm6000 driver.
V4L2_PIX_FMT_CIT_YYVYUY ('CITV'): Used by the Xirlink CIT driver, found in IBM webcams. Uses one line of Y then one line of VYUY.
V4L2_PIX_FMT_KONICA420 ('KONI'): Used by Konica webcams. YUV420 planar in blocks of 256 pixels.
V4L2_PIX_FMT_YYUV ('YYUV'): unknown.
V4L2_PIX_FMT_Y4 ('Y04 '): Old 4-bit greyscale format. Only the most significant 4 bits of each byte are used, the other bits are set to 0.
V4L2_PIX_FMT_Y6 ('Y06 '): Old 6-bit greyscale format. Only the most significant 6 bits of each byte are used, the other bits are set to 0.
V4L2_PIX_FMT_S5C_UYVY_JPG ('S5CI'): Two-planar format used by Samsung S5C73MX cameras. The first plane contains interleaved JPEG and UYVY image data, followed by meta data in the form of an array of offsets to the UYVY data blocks.
The actual pointer array immediately follows the interleaved JPEG/UYVY data; the number of entries in this array equals the height of the UYVY image. Each entry is a 4-byte unsigned integer in big endian order and is an offset to a single pixel line of the UYVY image. The first plane can start with either a JPEG or a UYVY data chunk. The size of a single UYVY block equals the UYVY image's width multiplied by 2. The size of a JPEG chunk depends on the image and can vary with each line. The second plane, at an offset of 4084 bytes, contains a 4-byte offset to the pointer array in the first plane. This offset is followed by a 4-byte value indicating the size of the pointer array. All numbers in the second plane are also in big endian order. The remaining data in the second plane is undefined. The information in the second plane makes it easy to find the location of the pointer array, which can be different for each frame. The size of the pointer array is constant for a given UYVY image height. In order to extract the UYVY and JPEG frames an application can initially set a data pointer to the start of the first plane and then add the offset from the first entry of the pointer table. Such a pointer indicates the start of a UYVY image pixel line. The whole UYVY line can be copied to a separate buffer. These steps should be repeated for each line, i.e. as many times as there are entries in the pointer array. Anything in between the UYVY lines is JPEG data and should be concatenated to form the JPEG stream.
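A sketch of the extraction procedure described above follows. The helper names are invented for illustration, and it is assumed that the offsets in the pointer array increase monotonically and that all data between the UYVY lines, up to the pointer array, belongs to the JPEG stream:

#include <stdint.h>
#include <string.h>

/* Read a 4-byte big endian value. */
static uint32_t get_be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8) | p[3];
}

/* Sketch: split one S5C_UYVY_JPG frame into a packed UYVY image and a
 * JPEG stream. plane0/plane1 are the two planes from the driver,
 * 'width' and 'height' describe the UYVY image. The output buffers
 * must be large enough (width * 2 * height bytes for UYVY, up to the
 * size of the interleaved data area for JPEG). Returns the JPEG length. */
static size_t split_s5c_frame(const uint8_t *plane0, const uint8_t *plane1,
                              uint8_t *uyvy, uint8_t *jpeg,
                              unsigned int width, unsigned int height)
{
    uint32_t ptr_array_off = get_be32(plane1 + 4084);   /* offset to pointer array */
    const uint8_t *ptrs = plane0 + ptr_array_off;
    size_t line_len = width * 2;                        /* bytes per UYVY line */
    size_t jpeg_len = 0;
    size_t prev_end = 0;                                /* end of previous UYVY block */
    unsigned int i;

    for (i = 0; i < height; i++) {
        uint32_t line_off = get_be32(ptrs + 4 * i);

        /* Everything between the previous UYVY line and this one is JPEG. */
        memcpy(jpeg + jpeg_len, plane0 + prev_end, line_off - prev_end);
        jpeg_len += line_off - prev_end;

        /* Copy one UYVY pixel line. */
        memcpy(uyvy + (size_t)i * line_len, plane0 + line_off, line_len);
        prev_end = line_off + line_len;
    }

    /* Trailing JPEG data after the last UYVY line, up to the pointer array. */
    memcpy(jpeg + jpeg_len, plane0 + prev_end, ptr_array_off - prev_end);
    jpeg_len += ptr_array_off - prev_end;

    return jpeg_len;
}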