An Nvidia GeForce GTX 780 Ti costs $700, while a Quadro K6000 costs $4000—yet they use the same underlying GK110 core!
The same can be said for other workstation GPUs from both Nvidia and AMD.
What exactly does this price difference pay for with a workstation GPU? My understanding is that they ship with specially tuned drivers for CAD and other demanding professional applications, sacrificing speed in games for greater accuracy and performance in that software, but this by itself can't explain the cost difference. They may also have more memory, often of the ECC type, but that still can't explain a nearly sixfold price difference.
Would hardware validation explain the difference? I suspect it goes like this: among the GPU chips that test as usable, 30% go into a high-end consumer card, and 68% go into a slightly cheaper consumer card; the other 2% go through even deeper validation, and the few that pass get put into a workstation card. Could this be the case, and is this why they're so expensive?
Answer
It's primarily market segmentation to allow price discrimination. Businesses that make money from work done on these cards have different requirements than gamers do, and Nvidia and AMD take advantage of that by charging them more.
There are some minor differences that create this rate fence. For example, the Quadro / FirePro models use different drivers that prioritize rendering accuracy over speed. On the Tesla models, ECC RAM is a selling point for server farms, and Nvidia claims higher reliability for 24/7 operation.
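As an aside, the ECC distinction is something you can check in software. Here's a minimal CUDA sketch (my own illustration, not anything from Nvidia's documentation for these specific cards) that reports whether each installed GPU has ECC enabled; on a GeForce card the flag is normally 0, while ECC-equipped Tesla/Quadro parts can report 1:

    // Query each CUDA device and report its ECC status.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            std::fprintf(stderr, "No CUDA-capable device found\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            // ECCEnabled is 1 when the device has ECC memory turned on.
            std::printf("Device %d: %s, ECC %s\n",
                        i, prop.name, prop.ECCEnabled ? "enabled" : "disabled");
        }
        return 0;
    }

The same information is available from the command line with nvidia-smi -q -d ECC.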
The company I work for designs GPGPU-accelerated software. Our server suppliers will only sell us Tesla (or GRID) systems. I know that if I buy a 1U server with 3x K40 cards, it won't melt in my client's data center, so I'm willingly paying triple the price for my cards. I imagine anyone buying a Quadro card for business has the same rationale.