HD Video Transcoding Strategies using Multicore Media Processors: Part 2 – Flexible Architecture

Delivering video across a variety of platforms involving multiple codecs can be efficiently handled by multicore media processors. Part Two explains the architectural requirements for flexible processing.

By Bahman Barazesh, Senior Technical Manager, and George Kustka, Senior Video Architect, LSI Corporation

Video/Imaging DesignWire, April 12, 2010

Multicore Decoder Architecture

Video-decoder structures must handle different encoder options, such as single-NALU or multiple-NALU implementations. The H.264 decoder involves both sequential and parallel operations; the sequential operations are generally handled most efficiently when pipelined.
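
As a rough illustration of the single- versus multiple-NALU case, the sketch below (plain C, with a hypothetical nalu_handler callback) splits an Annex-B byte stream on its 3- or 4-byte start codes, so the same decoder entry point can accept one or many NAL units per input buffer.

/*
 * Sketch: splitting an H.264 Annex-B byte stream into NAL units by
 * scanning for 3- or 4-byte start codes (00 00 01 / 00 00 00 01).
 * A front end of this kind lets one decoder entry point accept buffers
 * carrying either a single NALU or many NALUs per call.
 */
#include <stddef.h>
#include <stdint.h>

/* Hypothetical callback invoked once per NAL unit found in the buffer. */
typedef void (*nalu_handler)(const uint8_t *nalu, size_t len, void *ctx);

/* Return the offset of the next 00 00 01 at or after 'pos', or 'len' if none. */
static size_t find_start_code(const uint8_t *buf, size_t len, size_t pos)
{
    while (pos + 3 <= len) {
        if (buf[pos] == 0 && buf[pos + 1] == 0 && buf[pos + 2] == 1)
            return pos;
        pos++;
    }
    return len;
}

/* Split 'buf' into NAL units and hand each one to 'handler'. */
void split_annexb(const uint8_t *buf, size_t len, nalu_handler handler, void *ctx)
{
    size_t start = find_start_code(buf, len, 0);
    while (start < len) {
        size_t payload = start + 3;                 /* skip the 00 00 01 prefix      */
        size_t next = find_start_code(buf, len, payload);
        size_t end = next;
        /* Drop trailing zero bytes that belong to a 4-byte start code prefix. */
        while (end > payload && next < len && buf[end - 1] == 0)
            end--;
        handler(buf + payload, end - payload, ctx);
        start = next;
    }
}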

The entropy decoder includes sequential operations and local loops that cannot always be divided among parallel tasks running on several cores. Its complexity is relatively low compared with the computation required by the reconstruction block, so with the increasing capabilities of DSP cores this functionality can be implemented on a single DSP core. For systems where this is not the case, other partitions are possible.
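
The sequential nature of entropy decoding can be seen even in the simplest H.264 parsing primitive. The following sketch decodes an unsigned Exp-Golomb code, ue(v), as used throughout CAVLC header parsing; the bit-reader structure and field names are illustrative, not a real API. Because the length of each codeword is only known after the preceding bits have been consumed, symbols cannot be handed to parallel tasks, which is why the partitioning described below keeps entropy decoding on a single core.

/*
 * Sketch: an unsigned Exp-Golomb (ue(v)) reader of the kind used in H.264
 * CAVLC parsing. Each symbol's length is only known once the previous
 * symbols have been consumed, so the parse is inherently sequential.
 * The bit-reader type and field names are illustrative, not a real API.
 */
#include <stdint.h>

typedef struct {
    const uint8_t *buf;   /* NAL unit payload (RBSP)       */
    uint32_t       size;  /* payload size in bytes         */
    uint32_t       pos;   /* current bit offset into 'buf' */
} bitreader_t;

/* Read a single bit, MSB first; returns 0 or 1 (no bounds checks in this sketch). */
static uint32_t read_bit(bitreader_t *br)
{
    uint32_t bit = (br->buf[br->pos >> 3] >> (7 - (br->pos & 7))) & 1;
    br->pos++;
    return bit;
}

/* Decode one unsigned Exp-Golomb code: count leading zeros, then read that
 * many info bits. Value = 2^leading_zeros - 1 + info. */
static uint32_t read_ue(bitreader_t *br)
{
    uint32_t leading_zeros = 0;
    while (read_bit(br) == 0)
        leading_zeros++;

    uint32_t info = 0;
    for (uint32_t i = 0; i < leading_zeros; i++)
        info = (info << 1) | read_bit(br);

    return (1u << leading_zeros) - 1u + info;
}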

Figure 5: H.264 Decoder Block Diagram


Figure 5 depicts one example of a multicore architecture in which a single DSP core implements entropy decoding and a number of DSP cores are assigned to the reconstruction block. This partitioning allows task-to-task communication to remain local on a given core and achieves more efficient cache performance. Data partitioning also helps overall latency because decoding is implemented in a pipelined fashion, where macroblocks are decoded as soon as data from neighboring macroblocks is available.
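
A minimal sketch of the dependency check behind this pipelined, data-partitioned reconstruction is shown below. It assumes a hypothetical per-picture "done" map; a macroblock is ready as soon as its left, top-left, top, and top-right neighbors have been reconstructed, so macroblock rows assigned to different cores advance in a staggered wavefront.

/*
 * Sketch of a macroblock-level dependency check for pipelined,
 * data-partitioned reconstruction. A macroblock can be reconstructed as
 * soon as its left, top-left, top, and top-right neighbors are done, so
 * rows assigned to different cores proceed in a staggered wavefront.
 * The 'done' map and the row-to-core assignment are illustrative only.
 */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    int            mb_width;   /* picture width in macroblocks           */
    int            mb_height;  /* picture height in macroblocks          */
    const uint8_t *done;       /* 1 if a macroblock is already reconstructed */
} mb_grid_t;

static bool mb_done(const mb_grid_t *g, int x, int y)
{
    /* Macroblocks outside the picture impose no dependency. */
    if (x < 0 || y < 0 || x >= g->mb_width || y >= g->mb_height)
        return true;
    return g->done[y * g->mb_width + x] != 0;
}

/* True when the neighbor data needed to reconstruct (x, y) is available. */
bool mb_ready(const mb_grid_t *g, int x, int y)
{
    return mb_done(g, x - 1, y)        /* left      */
        && mb_done(g, x - 1, y - 1)    /* top-left  */
        && mb_done(g, x,     y - 1)    /* top       */
        && mb_done(g, x + 1, y - 1);   /* top-right */
}

/* One simple data partitioning: macroblock row 'y' handled by core (y % n). */
static inline int row_to_core(int y, int n_recon_cores)
{
    return y % n_recon_cores;
}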

Summary

Flexible media processors lie at the heart of a successful multimedia deployment. They enable scalable, real-time, any-to-any IP voice and video communications with a high-quality user experience, and allow multiple instances of video standards to be implemented on a hardware platform where the video software is co-resident with voice, high-quality audio, and other multimedia applications. The result is a flexible, low-cost, and low-power platform that supports customer needs and allows new options and codec features to be added on the same hardware.

About the Authors

Bahman Barazesh is a senior technical manager at LSI Corporation. He has over 25 years of experience in signal-processing system and software development for voice-band modems, ADSL-DMT modems, Voice over IP, and video coding/transcoding applications, and has led the development of many industry-leading signal-processing platforms and products through to high-volume deployment. He came to LSI through the company's acquisition of Agere Systems in 2007. Prior to Agere, Bahman held technical-lead positions at Lucent/AT&T Microelectronics, Apple Computer, and Philips Datacommunications. He has participated actively in ITU standards development for voice-band modem and ADSL standards and holds 14 issued patents in signal-processing techniques and programmable architectures. Bahman received an engineering degree and a doctorate in engineering from the École Nationale Supérieure des Télécommunications, Paris, France.

George J. Kustka is a senior video architect at LSI Corporation. He has been a pioneer of signal processing for data communications and video compression since he joined Bell Laboratories in 1972. His contributions have included hardware and DSP software for voice-band analog modems, modems for digital access, and broadband cable transmission. He played an active role in the development of HDTV technology and has developed numerous video and audio codecs over the years. He has received 16 patents in data transmission and codec technologies.
