Does AMD support built-in video decoding?

mrK

Do any AMD processors support built-in video decoding in the same vein as the Intel Core series? If so, can someone point me to any articles that compare the two?


3 Answers

Rich Homolka

I'm assuming you mean on the package and not on the die. The die is part of chip manufacturing: when you see "on the same die", it means the circuitry is printed on the same piece of silicon.

It really depends on what level you mean. AMD and Intel have had multimedia extensions since the original Pentium days. They help with a lot of math, including video.

AMD does support some video acceleration. It's on the same piece of silicon, though not in the GPU itself. I don't think it's compatible with the way Intel does it, so I'm not sure it matches your "like the Intel Core series".

mrK seems to be talking about CPUs, not GPUs (hence Intel and AMD rather than Nvidia and AMD/ATI). ATI Avivo is a set of technologies for ATI video cards, not for AMD CPUs. – Lèse majesté, 11 years ago
Sourav

AMD APUs [the Vision series] do, via the built-in ATI GPU.

Lèse majesté

There are currently 2 main types of on-die video acceleration: APUs and SIMD instruction set extensions. APUs are simply IGP GPUs that sit on the chip rather than being part of the motherboard chipset. Like other IGPs, they share the main system memory, but they are accessed and operate separately from the CPU itself. Both Intel and AMD have processors with APUs.

The other type of on-die video acceleration is SIMD instruction sets that are part of the CPU architecture itself. These are part of the CPU proper, and they're accessed via CPU instructions. SIMD instruction sets give CPUs the vector processing capabilities usually found only on GPUs, stream processors, and DSPs.

Specifically, SIMD instructions are used to apply a single operation to a large set of data, which is typical of the mathematical operations performed in multimedia processing, 3D modeling, scientific modeling, etc., all problems with a high level of data parallelism. The reason they were historically excluded from CPU ISAs is that they're not useful for most traditional general-purpose computing tasks like running OSes or word processors, surfing the web, or reading email, which rely on SISD or perhaps MISD instructions.
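To make the "single operation applied to many data elements" idea concrete, here is a minimal C sketch (my illustration, not part of the original answer) using the SSE intrinsics from xmmintrin.h. One _mm_add_ps instruction adds four pairs of floats at once, where a plain scalar loop would need four separate additions for the same work:

    #include <stdio.h>
    #include <xmmintrin.h>   /* SSE intrinsics; x86 only */

    int main(void)
    {
        float a[8]   = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8]   = {10, 20, 30, 40, 50, 60, 70, 80};
        float sum[8];

        /* Process four floats per iteration: one SIMD add covers
           what would otherwise be four scalar additions. */
        for (int i = 0; i < 8; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&sum[i], _mm_add_ps(va, vb));
        }

        for (int i = 0; i < 8; i++)
            printf("%.0f ", sum[i]);
        printf("\n");
        return 0;
    }

Real video codecs do the same thing with 8- and 16-bit pixel data and wider registers, but the principle is identical.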

However, as casual computing evolved to include more gaming and multimedia, CPU manufacturers began adding such instructions to their architectures in order to boost performance without requiring a powerful GPU (either an IGP or a discrete video card). This began in mainstream computing with MMX, then SSE, and the latest additions are AVX, introduced by Intel and also supported by AMD in the Bulldozer cores, and AMD's own SSE5 proposal, which lives on in Bulldozer as the XOP and FMA4 extensions.
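As a side note on how software actually uses these extensions: a program has to check at run time which ones the CPU reports (via the CPUID instruction) and fall back to plain scalar code otherwise. A small sketch, assuming GCC or Clang on x86, which expose that check as __builtin_cpu_supports():

    #include <stdio.h>

    int main(void)
    {
        __builtin_cpu_init();   /* populate the feature flags (reads CPUID) */

        printf("MMX:  %s\n", __builtin_cpu_supports("mmx")  ? "yes" : "no");
        printf("SSE2: %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
        printf("AVX:  %s\n", __builtin_cpu_supports("avx")  ? "yes" : "no");
        printf("XOP:  %s\n", __builtin_cpu_supports("xop")  ? "yes" : "no");  /* AMD Bulldozer */
        return 0;
    }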

The one thing that previous GPUs (and other dedicated coprocessors) had over CPU instruction set extensions was specialization: GPU architectures are designed for very specific applications like 2D/3D rendering and video encoding/decoding, whereas CPU architectures have to be generalized to handle all types of applications, so even with their SIMD extensions they're no match for dedicated GPUs in terms of speed. But since Intel introduced Quick Sync on some of their CPUs, this has somewhat changed. Sandy Bridge CPUs with Quick Sync can actually transcode video much faster than even high-end discrete video cards. The downside is that the results are somewhat lower quality than pure software transcoding, but this seems to be true of hardware-accelerated video transcoding in general.

And this is perhaps the main problem with hardware-accelerated video. It's easy for developers to support software encoding/decoding because they're only using the standard x86 instruction sets. For hardware encoding/decoding there are no industry standards, only vendor-specific proprietary extensions. So even comparing one hardware solution to another is difficult, because different video encoders/decoders will be better adapted to a particular hardware solution. CPU and GPU manufacturers recognize this too, so they all form close alliances with specific software vendors to ensure there's a leading video transcoding application that performs best on their technology (Nvidia CUDA, AMD APP, or Intel Quick Sync).
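To illustrate how fragmented this is in practice: even within a single framework like FFmpeg, each vendor's hardware path is exposed as a separately named encoder rather than through one common interface. A small C sketch (assuming the FFmpeg 4.x-or-newer libavcodec development headers; the encoder names are FFmpeg's own labels, not something from this answer) that asks which H.264 encoders a given build knows about:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    int main(void)
    {
        /* Software encoder plus the three vendor-specific hardware paths:
           Intel Quick Sync (qsv), Nvidia NVENC, AMD AMF. */
        const char *names[] = { "libx264", "h264_qsv", "h264_nvenc", "h264_amf" };

        for (int i = 0; i < 4; i++) {
            const AVCodec *enc = avcodec_find_encoder_by_name(names[i]);
            printf("%-10s %s\n", names[i], enc ? "present in this build" : "not built in");
        }
        return 0;
    }

Even when an entry is present, whether it actually works depends on the installed hardware and drivers, which is exactly the portability problem described above.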

If you're interested in comparisons between the leading hardware acceleration technologies for video encoding/decoding, I would suggest this article on Tom's Hardware. But ultimately their conclusion was that (at least in 2011) there's no clear winner. For speed you probably want Quick Sync, but output quality is a different matter, and that's where your chosen transcoder and playback software matter.