Is it true that integrated graphics cards reduce CPU performance?
1749
Vorac
Consider Intel HD Graphics. The graphics unit sits inside the processor and uses ordinary system memory. It therefore consumes Front Side Bus and Northbridge bandwidth. Consequently, access to main memory - the biggest bottleneck in modern computing - becomes even slower.
But this technology gets put on all newer desktop Intel CPUs. What am I missing?
You are missing the price. Adding a discrete graphics card raises the cost, while users with integrated graphics get along just fine.
agtoever 8 years ago
0
No; it is 100% false. Yes; on older generations of the platform it could have had an effect, but that has not been true since the Pentium 4 was released in 2004.
Ramhound 8 years ago
0
2 answers to the question
4
Lunatik
FSB and Northbridge are obsolete technologies. All modern systems work on the basis of a point-to-point interconnect (Intel's QuickPath Interconnect, AMD's HyperTransport etc.) which doesn't have a 'hub' like a Northbridge or a bus with limited global bandwidth as was the case with FSB.
This alone reduces the burden on system resources, but the fact that a lot of the time the GPU and CPU are talking only to each other, via extremely high-bandwidth, low-latency connections and caches on the die, means a lot of the traffic doesn't even leave the socket.
One theoretical way in which an integrated GPU might reduce the performance of the CPU is due to thermal limiting. If the GPU forces the die to reach high temperatures then the CPU may be throttled to reduce temperatures. Other than that I doubt an integrated GPU can measurably reduce the CPU's performance.
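If you want to see whether that happens on your own machine, one rough test is to watch the CPU clock while something GPU-heavy is running. A minimal sketch, assuming the third-party psutil package is installed and that you start a GPU load (a game, glmark2, a WebGL demo) in another window:

```python
# Rough check for thermal throttling: sample the CPU clock while a GPU-heavy
# workload runs in another window. If the clock keeps dropping well below the
# base frequency, the shared package is likely being thermally limited.
import time

import psutil  # third-party: pip install psutil

SAMPLES = 30        # number of samples to take
INTERVAL_S = 1.0    # seconds between samples

for _ in range(SAMPLES):
    freq = psutil.cpu_freq()  # MHz; may be None on unsupported platforms
    if freq is None:
        print("CPU frequency not available on this platform")
        break
    print(f"CPU clock: {freq.current:7.1f} MHz (max {freq.max:.0f} MHz)")
    time.sleep(INTERVAL_S)
```

The same readings with the GPU idle give you a baseline to compare against.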
2
Hennes
On older chipsets (P4 era, some Core 2 Duos) you are right. Those platforms are bandwidth-starved, and an integrated GPU sharing that memory bandwidth will reduce CPU performance.
On more modern systems there is no longer an FSB (see @Lunatik's answer, which is very good in that regard), and any CPU performance decrease due to shared memory bandwidth should be minimal.
However:
It does use on-die space, which means less space for the CPU. In other words, you could have had a faster CPU with the same amount of silicon.
It does produce heat, which might mean less headroom for turbo boosting.
But also: it is cheaper to use an APU, or a die with both CPU and GPU, than a separate CPU plus a dedicated GPU. While this only matters for people who do not add a dedicated GPU, it does save costs. And if the onboard GPU is fast enough for most people, then it makes economic sense to build these.
Two more notes:
But this technology gets put on all newer desktop Intel CPUs. What am I missing?
For most users:
Cheaper to integrate (for the chip builders)
Faster to integrate (hence the whole AMD APU idea).
Sensible for most users (office use, families who just e-mail or use social media).
Possibly not economically sensible to build two lines of CPUs: one with a GPU part and one without (though I guess only Intel can confirm this).
Note that memory access might not limit the CPU too much, thanks to its caches, but it can limit the on-board GPU. Hence the fourth-level cache on Intel chips with Iris graphics. Ditto for AMD, where higher-speed RAM means much better APU gaming performance.
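As a rough illustration of that last point, a main-memory bandwidth micro-benchmark shows what the CPU and the integrated GPU are actually sharing. A minimal sketch, assuming numpy is installed; run it once with the iGPU idle and once while it is busy, and compare (numbers are only indicative):

```python
# Very rough main-memory bandwidth estimate: copy a buffer much larger than
# the CPU caches and time it. The difference between an idle and a busy
# integrated GPU gives a feel for how much bandwidth the two actually share.
import time

import numpy as np  # third-party: pip install numpy

SIZE_MB = 256                                  # well beyond any L3/L4 cache
src = np.ones(SIZE_MB * 1024 * 1024 // 8, dtype=np.float64)
dst = np.empty_like(src)

t0 = time.perf_counter()
np.copyto(dst, src)                            # one read stream + one write stream
elapsed = time.perf_counter() - t0

gib_moved = 2 * src.nbytes / 2**30             # read + write traffic
print(f"~{gib_moved / elapsed:.1f} GiB/s of memory traffic")
```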