Tutorial: Intel vs AMD vs NVIDIA

  • Thread starter Monopolyman
  • Views 27,018
Prefix_NA

Seasoned Member
https://en.wikipedia.org/wiki/List_of_games_with_hardware-accelerated_PhysX_support

That is true. But having said that, it seems more developers are starting to implement GameWorks into their games.

GameWorks is bad for everyone: it makes games run like **** on low/mid-range cards, and then we all suffer glitchy physics systems.


Nvidia has also been sneaky about "accidentally" doing well in benchmarks by not rendering certain shaders. We all saw how well the 1080 did in Ashes of the Singularity despite having no new DX12 features, but then it turned out it had a "bug" that didn't render snow properly, due to skipping a shader.
 
RetroGamers17

Enthusiast
Personally, I know all the major brands are going to do a good, or at least decent, job of keeping up with updates, so I just buy whatever is the best bang for the buck. Unless I'm feeling rich and like a fanboy; then I buy Intel.
 
HYX

Administrator
I would say an Nvidia card would outperform an AMD card, but if you are going for a Hackintosh, AMD is what I would personally recommend.
 
MissyCrissy

Enthusiast
We have to understand that the PC motherboard as we know it was designed at the end of the '80s.
It is too big, rugged, and massive: huge PCI Express sockets, huge high-amperage chipsets, huge RAM sockets. In other words, it's OLD!
Compare your PC motherboard with the motherboard in a mobile phone...

Nvidia is the world leader in graphics; nobody can beat them at the moment. They have research in the military field and in autonomous driving vehicles.
AMD is for gamers: low price, huge energy consumption, and one step behind Nvidia's chipsets.
Intel, on the other hand, is the future. Intel will redesign the PC as we know it.
Intel has already researched a CPU with the GPU and RAM integrated, with the motherboard redesigned and shrunk into a small board. This is the future of the PC: a device no bigger than your smartphone, but as powerful as an i12 9900K with 12 cores, with 64 GB of RAM, an RTX 4080 Ti with 20 GB of VRAM, and 8K support.
 
Prefix_NA

Seasoned Member
Nvidia is the world leader in graphics; nobody can beat them at the moment. They have research in the military field and in autonomous driving vehicles.
AMD is for gamers: low price, huge energy consumption, and one step behind Nvidia's chipsets.
Intel, on the other hand, is the future. Intel will redesign the PC as we know it.
Intel has already researched a CPU with the GPU and RAM integrated, with the motherboard redesigned and shrunk into a small board. This is the future of the PC: a device no bigger than your smartphone, but as powerful as an i12 9900K with 12 cores, with 64 GB of RAM, an RTX 4080 Ti with 20 GB of VRAM, and 8K support.


A few notes:

1) AMD's compute performance per watt destroys Nvidia's, which is why AMD cards are used by miners.

2) Intel is actually not the future of PCs. AMD is redesigning the future, and Intel is actually dying now that they have hit their limit. I don't think x86 will even last 10 more years.

The CPU and GPU integrated with RAM was already done by AMD on the Xbox One, using the ESRAM as a unified cache between the CPU and GPU. This concept was designed by AMD years ago, and it has actually shipped.

Not sure if you are aware, but AMD started huge revolutions in CPUs recently, and this summer the chiplet revolution is starting. Intel will have to follow suit or their monolithic designs will be left in the dust.

AMD already revolutionized the HEDT/server platforms with their four Ryzen dies on one package. (Technically Intel beat AMD to this about a decade ago, but ditched it as unviable at the time.)

AMD is revolutionizing this again with the new I/O die.

Intel is slow to adopt here. AMD already has GPUs with HBM on the interposer.

AMD already has chiplet designs working, with two CPU chiplets and an I/O die on a single package.

There is nothing stopping AMD from releasing a chiplet design with a CPU + GPU + a small amount of HBM for unified cache except testing and time. Expect AMD, in the next 2-3 years, to have chiplet designs using HBM as a large pool of cache (slower than on-die cache, but high capacity).

Intel has made basically no gains on their CPUs since Haswell other than die shrinks and DDR4 support. They are struggling with their die shrinks now, and they cannot out-compete Samsung and TSMC forever, because their fabs are less profitable when they only make Intel chips. I would not be surprised if within five years Intel starts phasing out their fabs and moves to TSMC/Samsung.
 
theroach

Premium
1) AMD's compute performance per watt destroys Nvidia's, which is why AMD cards are used by miners.
Worthless information, given that the mining market has completely crashed, and completely wrong, period.

2) Intel is actually not the future of PCs. AMD is redesigning the future, and Intel is actually dying now that they have hit their limit. I don't think x86 will even last 10 more years.
Again, completely wrong, and/or pure speculation on your part, which is usually biased and wrong.

Not sure if you are aware, but AMD started huge revolutions in CPUs recently, and this summer the chiplet revolution is starting. Intel will have to follow suit or their monolithic designs will be left in the dust.

Sort of true: AMD did push Intel to finally move on from quad-core mainstream CPUs, but the rest is wrong. Intel already responded to AMD's chips with complete dominance. AMD is still a strong player in the field, but your claim that Intel will be left in the dust is completely laughable. The rest of your comments all follow the same suit: not completely true, and biased, but worth little time responding to individually. You are the poster child of an AMD fanboy, and while I know it's hard sitting and waiting for AMD's time to shine, it is not anywhere in the near future. The Radeon VII looks good though, under water cooling and power mods, if that makes you feel better.
 
Prefix_NA

Seasoned Member
Worthless information, given that the mining market has completely crashed, and completely wrong, period.

New coins come and go; people say this all the time. And it's not just mining: other applications, like rendering, can benefit from the compute power.


Also, I want to respond to your mention of the Radeon VII being a good card.

Do you know the Radeon VII is just a Vega 64 on 7nm with some cores disabled? The entire benefit from the Vega 64 to the VII comes from the node shrink's higher clock speeds (and faster memory bandwidth from using twice as many HBM stacks).
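
For a rough sense of the bandwidth side of that claim, here is a minimal back-of-the-envelope sketch in Python. It assumes the commonly quoted HBM2 configurations (two stacks at a 945 MHz memory clock for the Vega 64, four stacks at 1000 MHz for the VII); those figures are assumptions from memory, not from this thread:

```python
# Rough HBM2 bandwidth math: each HBM stack exposes a 1024-bit bus,
# and HBM2 transfers data on both clock edges (DDR).
def hbm_bandwidth_gbs(stacks, mem_clock_mhz):
    bus_bytes = stacks * 1024 / 8                 # bus width in bytes
    transfers_per_sec = mem_clock_mhz * 1e6 * 2   # DDR: 2 transfers per clock
    return bus_bytes * transfers_per_sec / 1e9    # GB/s

print(hbm_bandwidth_gbs(2, 945))    # Vega 64:    ~484 GB/s
print(hbm_bandwidth_gbs(4, 1000))   # Radeon VII: ~1024 GB/s, over 2x
```

Doubling the stacks doubles the bus width, so even at a similar memory clock the VII ends up with roughly twice the bandwidth.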

Technically they had already produced this card as a professional card; they just sent the low-binned ones to the desktop market and sold them as a high-end desktop GPU. It's more of a prosumer GPU, and it's actually impressive as a prosumer GPU in terms of performance per dollar, but nothing special in performance otherwise.

The Radeon VII is not impressive at a technological level at all; it's the same old tech with a shrink.

This was pretty much expected.

Nvidia will get smaller gains going to 7nm than AMD did, for two reasons:

1) They were already using TSMC nodes, while AMD was using (worse) GloFo nodes.
2) They are less restricted on power consumption, due to not having a hardware scheduler.

*Disclaimer: Nvidia may see bigger gains from the 2000 series to the 3000 series than AMD saw from the Vega 64 to the VII, since the 3000 series will be a new architecture, whereas the VII was just a shrink of the 64. But the improvement from the node shrink itself will be smaller.

The VII was a proof of marketability for 7nm. It was a test of whether 7nm is as good as TSMC claimed, and it went above and beyond. The VII is a TSMC success, not an AMD success. But this success will show on the CPU side as well. TSMC created a fantastic node.

Again, completely wrong, and/or pure speculation on your part, which is usually biased and wrong.

You misunderstand me about Intel vs AMD. Intel made fantastic chips, but the chips were not their success point. The success came from their superior nodes. Their chips have not improved much beyond the shrinks.

Over a decade ago shrinks used to be much bigger, but it has become harder and harder to shrink.

Compare the 4770K to the 9900K.

Intel has improved only on these things:
1) Performance per watt
2) Higher clock speeds
3) Higher core count
4) Support for DDR4, plus some chipset changes

The actual architecture is less than 10% faster in IPC over five generations, but the node improvement was fantastic. Sure, DDR4 and some other features help, but actual chip power is not increasing. Ryzen's IPC is not far behind Intel's; they lose on clock speeds, which is mostly a problem with the weaker silicon of the Ryzen 2000-series chips. That will no longer be the case when they swap to TSMC 7nm; clock speeds should be close to Intel's.

Until 2017 Intel had a stupidly large lead on the transistor side. TSMC and others even scrapped their 20nm tech ages ago, then reused it when they swapped to FinFETs around 2017 (TSMC's 16nm reuses their old, unmarketable 20nm, but with FinFETs). AMD was still producing 28nm chips until Ryzen and Polaris.

Intel swapped to FinFETs on their 3rd-gen CPUs and kept making improvements from there. It was revolutionary at the time: before Intel used FinFETs, people assumed Moore's law was dead, but Intel managed to keep it alive a bit longer.

Now, as Intel struggles to make their 10nm profitable and GloFo gives up on cutting-edge nodes, we see a problem: Moore's law is dying again. TSMC has finally caught up to Intel, which it had not come close to doing in years, and TSMC not only has 7nm complete but has multiple 7nm products on the market proving it works. (What TSMC calls 7nm is basically the same as Intel's 10nm, so it's not as if TSMC is far ahead of Intel at the moment.)

There are only two ways to solve the problem all the fabs are having with these small shrinks:

1) A new revolutionary way to build our nodes (like Intel did with FinFETs in the past)

2) Enter the chiplet revolution

You don't need to cram in more transistors if you can just use smaller CPU dies daisy-chained together on a single package with a single I/O die. A single I/O die lets the CPU dies be far smaller, because extra parts like memory controllers don't have to be duplicated on each die.

Another thing about the I/O die is that most of what sits on it does not scale well with shrinks. That means the far cheaper, older 14nm tech works for it with no realistic downsides, while the cutting-edge nodes with higher defect rates work fine for the CPU dies, since those dies are far smaller.

AMD is pushing this out on CPUs in a few months. This is not the end; this is the beginning. 2020 is when the benefits will really show (as DDR5 releases).

Wafer Math
The tiny die on the top right is the Ryzen 2 CPU die. Notice how tiny it is?

The Ryzen 2700X was approximately 213 mm²; the Ryzen 2 chiplet seems to be about 80 mm².
[Image: annotated die shot comparing the Ryzen 2700X die to the Ryzen 2 chiplet]

If we use a die yield calculator for a 300mm wafer and assume a 0.1/cm² defect density:

The Ryzen 2 dimensions would get a 92.19% yield: 656 good dies, 56 defective dies, and 48 partial dies per wafer.

A larger die of, say, 12x17 mm (~200 mm²) would get only an 81% yield: 223 good dies, 50 defective dies, and 26 partial dies.

Intel's 7900X die, at 22mm x 14mm, gives a 74% yield: 133 good dies, 47 defective dies, and 8 partial dies.

https://caly-technologies.com/die-yield-calculator/

The yields improve much further when you factor in that a die with a defect in one core can simply have that core disabled and be sold as a lower-cost chip.
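
To make these numbers reproducible without the online calculator, here is a minimal Python sketch. It assumes the simple Poisson yield model (yield = e^(-die area x defect density)) and the classic dies-per-wafer approximation; the calculator packs rectangles against the wafer edge exactly, so its die counts come out a little different, but the yield percentages land right on the figures above:

```python
import math

# Assumed model: Poisson yield, the simplest defect-density model.
def die_yield(die_area_mm2, defect_density_per_cm2=0.1):
    """Fraction of dies expected to have zero defects."""
    return math.exp(-(die_area_mm2 / 100.0) * defect_density_per_cm2)

# Classic gross-dies-per-wafer approximation (ignores edge exclusion
# and scribe lines, hence the small mismatch with the calculator).
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for name, area in [("Ryzen 2 chiplet (~80 mm^2)", 80),
                   ("12x17 mm die (204 mm^2)", 204),
                   ("22x14 mm die (308 mm^2)", 308)]:
    y = die_yield(area)
    gross = dies_per_wafer(area)
    print(f"{name}: yield {y:.1%}, ~{gross} dies/wafer, ~{int(gross * y)} good")
    # -> yields of ~92.3%, ~81.5%, ~73.5%: in line with 92.19%, 81%, and 74%.
```

The point of the exercise is the scaling: quadrupling the die area roughly quadruples the defect exposure, so small chiplets keep far more of every wafer sellable, and that is before counting the salvaged dies with a disabled core.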


The issue we have today is that each node shrink costs exponentially more while giving smaller benefits, and defect rates are really cutting into the profits. Intel sells monolithic dies, while AMD can daisy-chain all their chiplets.

Sort of true: AMD did push Intel to finally move on from quad-core mainstream CPUs, but the rest is wrong. Intel already responded to AMD's chips with complete dominance. AMD is still a strong player in the field, but your claim that Intel will be left in the dust is completely laughable. The rest of your comments all follow the same suit: not completely true, and biased, but worth little time responding to individually. You are the poster child of an AMD fanboy, and while I know it's hard sitting and waiting for AMD's time to shine, it is not anywhere in the near future. The Radeon VII looks good though, under water cooling and power mods, if that makes you feel better.
It's easy to call everything speculation.

Intel can no longer rely on their transistor advantage and has to join the chiplet revolution. They are already facing the limits of their monolithic chip designs. They cannot hold their edge.

It is possible Intel is silently working on this, but we do know this much: Intel can no longer rely on node improvements, and unless they have some sort of alternative to Infinity Fabric, they will fall behind.

Intel is no slouch in the CPU market. They had dual-die packages before AMD; they just never managed to make them successful, because they had no low-latency interconnect back then.

There are multiple ways to solve this issue today, and Intel actually looks like they are going to have something in this market.

Intel announced years ago that they wanted to look at 3D-stacked memory on package. To many people this seemed like the logical next step; AMD has already used a unified cache for their APUs in practice, with the ESRAM on the Xbox One. Intel is investing big in their GPU side now, and it would make sense to have an HBM pool as a unified cache for the GPU and CPU.

But I think this leads to a bigger picture. If Intel wants on-package RAM, why not put two CPU dies on an interposer or something? And if you are already doing that, why not split off the CPU die and create a separate I/O die, and maybe even a separate GPU die, to decrease the total die size of the CPU?

If Intel could put two 8-core chips, an I/O die, and a GPU die on a single interposer, in theory they could create something even faster than AMD's Infinity Fabric.

The downside of the interposer is the extra silicon, but, as with the I/O die and even more so, you can use lower-density nodes for it; you can use basically junk silicon that will have nearly 100% yields.

Intel has three options:

1) A big interposer for multiple dies, with interconnects between the dies (faster)
2) Adopting something similar to AMD's Infinity Fabric (lower cost)
3) Something no one else is currently thinking about

Chiplets are the future of desktop computing. Intel is going to jump on board or be left in the dust.

Yield rates are the biggest issue Intel is facing, and chiplets solve that issue.
 
FlashwithSymbols

Newbie
It almost always depends on budget. You can get a much better build with AMD, but historically Intel CPUs have performed better (better single-core performance). If the programs you use take advantage of multiple cores, though, AMD may be better; it depends on each person's usage.

AMD GPUs are pretty good, but I prefer Intel any day.
 
Austin

What a random
When it comes to price vs performance against Intel, AMD's new processors only truly shine for game streaming.
 
jessica90rose

Newbie
For any graphics card costing $350 or more, Nvidia typically wins in value and performance, and Nvidia's GPUs are more efficient at each price point. Below that, AMD still loses in overall efficiency, drawing 30-50W more power, but at least you're getting better performance at a lower price.
 
BearMonster15

Newbie
I believe Nvidia is way better for gaming and AMD is way better for work or filmmaking, I dunno.
 
taylorsmithily

Newbie
AMD's Ryzen APU series of processors is great if you're looking for a gaming PC on a low budget. I also used AMD for my rental laptops. AMD pairs a scaled-down version of its Vega graphics with either a 4C/4T or a 4C/8T CPU. While this won't run any games at 4K on ultra settings, AMD did design them to be used for gaming. If you want to buy a high-end laptop or desktop, you should prefer Intel-based processors, because performance in that segment is way better than AMD's.
 
Vegeta

Super Moderators
All I know is I got the 3080 on launch and I’m gonna get the 3080 Ti on launch too.

f*** AMD
 
HYX

Administrator
All I know is I got the 3080 on launch and I’m gonna get the 3080 Ti on launch too.

f*** AMD
Do you have a bot or something to make sure it's a given that you get one? From what I am seeing, the scalpers aren't slowing down, so it may be a tough grab.
 
Vegeta

Super Moderators
Do you have a bot or something to make sure it's a given that you get one? From what I am seeing, the scalpers aren't slowing down, so it may be a tough grab.
Not at all. I live in Canada and we have MemoryExpress, similar to MicroCenter in the USA. ME allows you to "backorder" the card on launch day, and at that point it's a lottery for your spot on the list.

I waited two months for my 3080 after purchasing it on September 25th. I was something like number 5,000 in the queue, and that's because I ordered it three days after launch, because I forgot.

I expect the 3080 Ti to be similar, and I'll be ordering on launch day this time.
 