The AMD Radeon R9 295X2 Review
Although the days of AMD’s “small die” strategy have long since ended, one aspect of AMD’s strategy that they have stuck with since the strategy’s inception has been the concept of a dual-GPU card. AMD’s first modern dual-GPU card, the Radeon HD 3870 X2 (sorry, Rage Fury MAXX), came at a low point for the company where such a product was needed just to close the gap between AMD’s products and NVIDIA’s flagship big die video cards. However with AMD’s greatly improved fortune these days, AMD no longer has to play just to tie but they can play to win. AMD’s dual-GPU cards have evolved accordingly and these days they are the high-flying flagships of AMD’s lineup, embodying the concept of putting as much performance into a single card as is reasonably possible.
The last time we took a look at a new AMD dual-GPU video card was just under a year ago, when AMD launched the Radeon HD 7990. Based on AMD's then-flagship Tahiti GPUs, the 7990 was a solid design that offered performance competitive with a dual card (7970GHz Edition Crossfire) setup while fixing many of the earlier Radeon HD 6990's weaknesses. However the 7990 also had its share of weaknesses and outright bad timing – it came just 2 months after NVIDIA released their blockbuster GeForce GTX Titan, and it launched right as the FCAT utility became available, enabling reliable frame pacing analysis and exposing the weak points in AMD's drivers at the time.
Since then AMD has been hard at work on both the software and hardware sides of their business, sorting out their frame pacing problems but also launching new products in the process. Most significant among these was the launch of their newer GCN 1.1 Hawaii GPU, and the Radeon R9 290 series cards that are powered by it. Though Tahiti remains in AMD’s product stack, Hawaii’s greater performance and additional features heralded the retail retirement of the dual-Tahiti 7990, once again leaving an opening in AMD’s product stack.
That brings us to today and the launch of the Radeon R9 295X2. After much consumer speculation and more than a few teasers, AMD is releasing their long-awaited Hawaii-powered entry to their dual-GPU series of cards. With Hawaii AMD has a very powerful (and very power hungry) GPU at their disposal, and for its incarnation in the R9 295X2 AMD is going above and beyond anything they’ve done before, making it very clear that they’re playing to win.
**AMD GPU Specification Comparison**

| | AMD Radeon R9 295X2 | AMD Radeon R9 290X | AMD Radeon HD 7990 | AMD Radeon HD 7970 GHz Edition |
|---|---|---|---|---|
| Stream Processors | 2 x 2816 | 2816 | 2 x 2048 | 2048 |
| Texture Units | 2 x 176 | 176 | 2 x 128 | 128 |
| ROPs | 2 x 64 | 64 | 2 x 32 | 32 |
| Memory Clock | 5GHz GDDR5 | 5GHz GDDR5 | 6GHz GDDR5 | 6GHz GDDR5 |
| Memory Bus Width | 2 x 512-bit | 512-bit | 2 x 384-bit | 384-bit |
| VRAM | 2 x 4GB | 4GB | 2 x 3GB | 3GB |
| Transistor Count | 2 x 6.2B | 6.2B | 2 x 4.31B | 4.31B |
| Typical Board Power | 500W | 250W | 375W | 250W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Architecture | GCN 1.1 | GCN 1.1 | GCN 1.0 | GCN 1.0 |
Starting with a brief look at the specifications, much of the Radeon R9 295X2's design, goals, and performance can be gleaned from the specifications alone. Whereas the 7990 was almost a 7970GE Crossfire setup on a single card, AMD is not making any compromises for the R9 295X2, equipping the card with a pair of fully enabled Hawaii GPUs and then clocking them even higher than their single-GPU flagship, the R9 290X. As a result, unlike AMD's past dual-GPU cards, which made some performance tradeoffs in the name of power consumption and heat, AMD's singular goal with the R9 295X2 is to offer the complete performance of an R9 290X "Uber" Crossfire setup on a single card.
Altogether this means we're looking at a pair of top-tier Hawaii GPUs, each with their full 2816 SPs and 64 ROPs enabled. AMD has set the boost clock on these GPUs to 1018MHz – just 2% faster than the 290X – which means performance is generally a wash compared to the R9 290X in CF, but nonetheless offers a bit of extra performance that should offset the latency penalty introduced by the necessary PCIe bridge chip. Otherwise, compared to the retired 7990 the R9 295X2 should be a far more capable card, offering 40% more shading/texturing performance and 2x the ROP throughput of AMD's previous flagship. As with the R9 290X versus the 7970GHz, we're still looking at parts that are fundamentally from the same generation and made on the same 28nm process, so AMD doesn't get the benefit of a generational improvement in architecture and manufacturing, but even within the confines of 28nm AMD has been able to do quite a bit with Hawaii to improve performance over Tahiti-based products.
Meanwhile AMD is taking the same no-compromises strategy when it comes to memory. The R9 290X was equipped with 4GB of 5GHz GDDR5, operating on a 512-bit memory bus, and for the R9 295X2 in turn each GPU is getting the same 4GB of memory on the same bus. The fact that AMD has been able to lay down 1024 GDDR5 memory bus lines on a single board is no small feat (the wails of the engineers can be heard for miles), and while it is necessary to keep up with the 290X we weren’t entirely sure if AMD was going to be able and willing to pull it off. Nonetheless, the end result is that each GPU gets the same 320GB/sec as the 290X does, and compared to the 7990 this is an 11% increase in memory bandwidth, not to mention a 33% increase in memory capacity.
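Those bandwidth figures follow directly from bus width and data rate, so the 11% claim is easy to sanity-check. A quick sketch using the per-GPU specs quoted above:

```python
def gddr5_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/s: (bus width in bits / 8) bytes per transfer
    times the data rate in GT/s."""
    return bus_width_bits / 8 * data_rate_gtps

# Per-GPU figures: R9 295X2/290X vs. the per-GPU config on the HD 7990
hawaii = gddr5_bandwidth_gbs(512, 5.0)
tahiti = gddr5_bandwidth_gbs(384, 6.0)

print(hawaii)               # 320.0 GB/s per GPU
print(tahiti)               # 288.0 GB/s per GPU
print(hawaii / tahiti - 1)  # ~0.111 -> the 11% increase cited above
```

The wider 512-bit bus more than makes up for the lower 5 GT/s data rate, which is the tradeoff Hawaii made versus Tahiti.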
Now as can be expected by any card labeled a “no compromises” card by its manufacturer, all of this performance does come at a cost. Hawaii is a very powerful GPU but it is also very power hungry; AMD has finally given us an official Typical Board Power (TBP) for the R9 290X of 250W, and with R9 295X2 AMD is outright doubling it. R9 295X2 is a 500W card, the first 500W reference card from either GPU manufacturer.
As one can expect, moving 500W of heat is no easy task. AMD came close once before with the 6990 – a card designed to handle up to 450W in its AUSUM mode – but the 6990 was dogged by the incredibly loud split blower AMD needed to use to cool the beast. For the 7990 AMD lowered their sights and their power target to just 375W, and at the same time went to a large open air cooler that allowed them to offer a dual-GPU card with reasonable noise levels. But for the R9 295X2 AMD is once again turning up the heat, requiring new methods of cooling to dissipate 500W while maintaining reasonable noise levels.
To dissipate 500W of heat AMD has moved past blowers and even open air coolers, and moved on to a closed loop liquid cooler (CLLC). We’ll cover AMD’s cooling apparatus in more detail when we take a closer look at the construction of the R9 295X2, but as with AMD’s 500W target AMD is charting new territory for a reference card by making a CLLC the baseline cooler. With two Asetek pumps and a 120mm radiator to dissipate heat, the R9 295X2 is a significant departure from AMD’s past designs and an equally significant change in the traditionally conservative system requirements for a reference card.
In any case, the fact that AMD went this route isn't wholly surprising – there aren't too many ways to move 500W of heat – but the lack of significant binning did catch us off guard. Dual-GPU cards often (but not always) use highly binned GPUs to further contain power consumption, which isn't something AMD has done to the same extent this time around, hence the R9 295X2's doubled power consumption. So long as AMD can remove the heat they'll be fine, and from our test results it's clear that AMD has done some binning, but nonetheless it's interesting that we aren't seeing binning as aggressive as in past years.
Finally, let’s dive into pricing, availability, and competition. Given the relatively exotic cooling requirements for the R9 295X2, it comes as no great surprise that AMD is targeting the same luxury video card crowd that the GTX Titan pioneered last year when it premiered at $1000. This means using more expensive cooling devices, a greater emphasis on build quality with a focus on metal shrouding, and a few gimmicks to make the card stand out in windowed cases. To that end the R9 295X2 will by its very nature be an extremely low volume part, but if AMD has played their cards right it will be the finest card they have ever built.
The price for that level of performance and quality on a single card will be $1499 (€1099 + VAT), $500 higher than the 7990's $999 launch price, and similarly $500 higher than NVIDIA's closest competitor, the GTX Titan Black. With two R9 290Xs running for roughly $1200 at current prices, we've expected for some time that a dual-GPU Hawaii card would come in over $1000, so AMD isn't too far off from our expectations. Ultimately AMD's $1500 price tag amounts to a $300 premium for getting two 290Xs onto a single card, along with the 295X2's much improved build quality and more complex cooling apparatus. Meanwhile GPU complexity and heat density have reached the point where the cost of putting together a dual-GPU card is going to exceed the cost of a pair of single-GPU cards, so these kinds of dual-GPU premiums are here to stay.
As always, the R9 295X2's competition will be a mix of dual video card setups such as dual R9 290Xs and dual GTX 780 Tis, and of course NVIDIA's forthcoming dual-GPU card. Dual video card setups will always be cheaper than a single dual-GPU card, so the difference lies in the smaller space requirements of a single video card and the power/heat/noise savings that such a card provides. In the AMD ecosystem the reference 290X is dogged by its loud reference cooler, so as we'll see in our test results the R9 295X2 will have a significant advantage over the 290X when it comes to noise.
Meanwhile in NVIDIA's ecosystem, NVIDIA has the dual GTX 780 Ti, the dual GTX Titan Black, and the GTX Titan Z. The dual GTX 780 Ti is going to be the closest competitor to the R9 295X2 at roughly $1350, with a pair of GTX Titan Blacks carrying both a performance edge and a significant price premium. As for the GTX Titan Z, NVIDIA's forthcoming dual-GPU card is scheduled to launch later this month, and while it should be a performance powerhouse it's also going to retail at $3000, twice the price of the R9 295X2. So although the GTX Titan Z can be used for gaming, we're expecting it to be leveraged more for its compute performance than its gaming performance. In any case, based on NVIDIA's theoretical performance figures we have a strong suspicion that the GTX Titan Z is underclocked for TDP reasons, so it remains to be seen whether its gaming performance will even be competitive with the R9 295X2's.
For availability the R9 295X2 will be a soft launch for AMD, with AMD announcing the card 2 weeks ahead of its expected retail date. AMD tells us that the card should start appearing at retailers and in boutique systems on the week of April 21st, and while multiple AMD partners will be offering this card we don’t have a complete list of partners at this time (but expect it to be a short list). The good news is that unlike most of AMD’s recent product launches, we aren’t expecting availability to be a significant problem. Due to the price premium over a pair of 290Xs and recent drops in cryptocoin value, it’s unlikely that miners will want the 295X2, meaning the demand and customer base should follow the more traditional gamer demand curves.
Finally, it’s worth noting that unlike the launch of the 7990, AMD isn’t doing any game bundle promotions for the R9 295X2. AMD hasn’t been nearly as aggressive on game bundles this year, and in the case of the R9 295X2 there isn’t a specific product (e.g. GTX Titan) that AMD needs to counter. Any pack-in items – be it games or devices – will be the domain of the board partners this time around. Also, the AMD Mystery Briefcase was just a promotional item, so partners won’t be packing their retail cards quite so extravagantly.
AMD Radeon R9 295X2
| Specification | Value |
|---|---|
| Maximum resolution | 4096 x 2160 |
| Transistor count | 2 x 6.2 billion |
| Die area | 2 x 438 mm² |
| Core clock | 1018 MHz |
| Unified shader units | 2 x 2816 |
| Raster operators (ROPs) | 2 x 64 |
| Texture units (TMUs) | 2 x 176 |
| Pixel fillrate | 2 x 65.2 GPixel/s |
| Texel fillrate | 2 x 179.2 GTexel/s |
| Memory size | 2 x 4096 MB |
| Memory bus width | 2 x 512-bit |
| Memory bandwidth | 2 x 320 GB/s |
| Max power draw (TDP) | 500 W |
| Minimum PSU requirement | 850 W |
| Supplementary power connectors | 2 x 8-pin |
| **Supported APIs and technologies** | |
| SLI / CrossFireX | CrossFireX |
| Other technologies | ATI Stream, ATI Eyefinity, HDCP, AvivoHD, AMD HD3D, AMD PowerPlay, AMD PowerTune, AMD ZeroCore, AMD Mantle, AMD App Acceleration, AMD TrueAudio |
Radeon R9 295X2 8 GB Review: Project Hydra Gets Liquid Cooling
Dreadnought. Perhaps you know the word from Final Fantasy. Or maybe Warhammer. Or Star Trek, even.
But the dreadnoughts I was thinking about during my week locked up in the lab were the 20th-century battleships built by Britain, France, Germany, Italy, Japan, and the U.S. Before the signing of the Washington Naval Treaty in 1922, each of those countries (and several others) poured tons of resources into one-upping each other, commissioning capital ships able to move faster, fire farther, and withstand more damage. Eventually, the exercise became economically exhausting.
But all in the name of claiming superiority, right?
The graphics card market is in the midst of its own arms race. AMD fired a white-hot salvo back in 2011 with the introduction of its Radeon HD 7970, which easily outpaced Nvidia’s GeForce GTX 580. A few short months later, Nvidia shot back with the GeForce GTX 680, hitting harder and for less money. Since then, both companies have traded broadsides, introducing the Radeon HD 7970 GHz Edition, GeForce GTX 690, Radeon HD 7990, GeForce GTX Titan, and Radeon R9 290X, all leveraging relatively similar architectures to push the performance envelope. Higher frame rates offset the increased prices, and affluent gamers willingly paid them.
If those cards are the dreadnoughts of our industry, then we’re about to enter the era of super-dreadnoughts (yes, that’s a thing).
A couple of weeks ago, Nvidia announced its GeForce GTX Titan Z, a dual-GK110-powered, triple-slot behemoth. Jen-Hsun called it the perfect card for those in need of a supercomputer under their desk. And using his 8 TFLOP specification, I worked backward to a core clock rate around 700 MHz per GPU. That’s more than 100 MHz lower than the GK110 on a GeForce GTX Titan. Wouldn’t you be better off building that supercomputer using two, three, or even four Titans? We have to wait and see; the Titan Z isn’t available yet.
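That back-of-the-envelope clock estimate is simple to reproduce: FP32 throughput is conventionally 2 FLOPS (one fused multiply-add) per shader per clock, so the 8 TFLOP claim can be inverted to find the clock. A sketch of the arithmetic:

```python
def implied_clock_mhz(tflops, total_shaders):
    """Invert tflops = 2 FLOPS/shader/clock * shaders * clock to solve for
    the clock rate in MHz."""
    return tflops * 1e12 / (2 * total_shaders) / 1e6

# GTX Titan Z: two GK110s with 2880 shaders each, 8 TFLOPS claimed
titan_z_clock = implied_clock_mhz(8.0, 2 * 2880)
print(round(titan_z_clock))  # 694 -> ~700 MHz per GPU, as estimated above
```

At roughly 694 MHz per GPU, that is indeed more than 100 MHz below a GeForce GTX Titan's 837 MHz base clock.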
Although one GeForce GTX Titan Z appears destined to be quite a bit slower than a pair of Titans, Nvidia plans to ask an astounding $3000 for it – 50% more than a pair of Titans would cost.
In response, AMD is escalating the arms race with its Radeon R9 295X2, another dual-GPU specimen. But this one is quite a bit different. To begin, it sports Hawaii GPUs that run just a bit faster than the single-processor Radeon R9 290X. Also, the 295X2 is a dual-slot board. How is such a feat possible? Closed-loop liquid cooling, of course.
AMD Fires Back With (Relative) Value
The existence of this card wasn’t a carefully-guarded secret. In fact, AMD had a marketing agency shipping out care packages alluding to its arrival. But a lot of the 295X2’s rumored specifications were completely wrong. Let's set the record straight, shall we?
Learn More About Hawaii
For more information on the Hawaii GPU, check out Radeon R9 290X Review: AMD's Back In Ultra-High-End Gaming
Again, AMD starts with two Hawaii processors, each manufactured at 28 nm and composed of 6.2 billion transistors. Those GPUs are unaltered, sporting a full 2816-shader configuration with 176 texture units, 64 ROPs, and an aggregate 512-bit memory bus. Four gigabytes of GDDR5 per processor are attached, yielding a card with 8 GB on-board.
AMD has a respectable track record of keeping its dual-GPU boards almost as fast as two single-GPU flagships. The Radeon HD 6990 ran something like 50 MHz slower than a Radeon HD 6970. But it still managed to accommodate two fully-operational Cayman processors. The Radeon HD 7990 did battle against the GeForce GTX 690 with Tahitis also operating 50 MHz slower than the then-fastest card in AMD’s stable. They too were fully-featured, with all 2048 shaders enabled.
| | Radeon R9 295X2 | Radeon R9 290X | GeForce GTX Titan | GeForce GTX 780 Ti |
|---|---|---|---|---|
| Process | 28 nm | 28 nm | 28 nm | 28 nm |
| Transistors | 2 x 6.2 billion | 6.2 billion | 7.1 billion | 7.1 billion |
| GPU Clock | Up to 1018 MHz | Up to 1 GHz | 837 MHz | 875 MHz |
| Shaders | 2 x 2816 | 2816 | 2688 | 2880 |
| Peak FP32 Compute | Up to 11.5 TFLOPS | 5.6 TFLOPS | 4.5 TFLOPS | 5.0 TFLOPS |
| Texture Units | 2 x 176 | 176 | 224 | 240 |
| Texture Fillrate | Up to 358.3 GT/s | 176 GT/s | 188 GT/s | 210 GT/s |
| ROPs | 2 x 64 | 64 | 48 | 48 |
| Pixel Fillrate | Up to 130.3 GP/s | 64 GP/s | 40 GP/s | 41 GP/s |
| Memory Bus | 2 x 512-bit | 512-bit | 384-bit | 384-bit |
| Memory | 2 x 4 GB GDDR5 | 4 GB GDDR5 | 6 GB GDDR5 | 3 GB GDDR5 |
| Memory Data Rate | Up to 5 GT/s | 5 GT/s | 6 GT/s | 7 GT/s |
| Memory Bandwidth | 2 x 320 GB/s | 320 GB/s | 288 GB/s | 336 GB/s |
| Board Power | 500 W | 250 W | 250 W | 250 W |
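The 295X2's peak throughput figures fall straight out of its unit counts and the 1018 MHz boost clock. A sketch reproducing them (assuming 2 FLOPS per shader per clock, and one texel/pixel per TMU/ROP per clock):

```python
CLOCK_GHZ, GPUS = 1.018, 2
SHADERS, TMUS, ROPS = 2816, 176, 64

tflops  = 2 * SHADERS * CLOCK_GHZ * GPUS / 1000  # FP32 compute, TFLOPS
texrate = TMUS * CLOCK_GHZ * GPUS                # texture fillrate, GTexels/s
pixrate = ROPS * CLOCK_GHZ * GPUS                # pixel fillrate, GPixels/s

print(round(tflops, 1))   # 11.5
print(round(texrate, 1))  # 358.3
print(round(pixrate, 1))  # 130.3
```

All three match the "up to" figures quoted for the card, confirming these are simple clock-times-units peaks rather than measured rates.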
The Radeon R9 295X2's twin Hawaii GPUs go even further. Whereas a reference Radeon R9 290X runs at up to 1000 MHz, the 295X2 gets a small bump to 1018 MHz. Yes, the processors are still subject to the dynamic throttling behavior we illustrated in The Cause Of And Fix For Radeon R9 290X And 290 Inconsistency. But because cooling is better this time around, we’ve been told that throttling shouldn’t be an issue.
Between the two GPUs, their respective memory packages, and a bunch of power circuitry, AMD plants a PEX 8747 switch, the same 48-lane, five-port device found on its Radeon HD 7990 and Nvidia’s GeForce GTX 690. The switch interfaces with each Hawaii processor’s PCI Express 3.0 controller, facilitating a 16-lane connection between the GPUs and platform.
AMD also offers an array of display outputs similar to what we saw on the 7990, including one dual-link DVI-D connector and four mini-DisplayPort interfaces.
For all of that, AMD claims it will charge $1500 (or €1100 + VAT). The Radeon R9 295X2 won’t be available immediately, either. As of right now, the company says you’ll find it for sale online the week of April 21st. Don and I are in agreement here: we’ve seen too many missed price estimates and ship dates from AMD to take this one as gospel. We'll treat $1500 as general guidance for now.
AMD claims its Radeon R9 295X2 is designed for 4K gaming. But I also wanted to run 2560x1440. Not only is that resolution far more common in the high-end space, but it also serves as a good baseline before we get to the Ultra HD numbers.
Arma 3 demonstrates a platform bottleneck at 2560x1440, even with the game's lushest detail settings switched on. Average frame rates from most configurations hover just under 80 FPS, while minimums sit just under 70 FPS.
Only the Radeon HD 7990 and GeForce GTX 690 fall short of the choke point, though both still deliver a readily-playable experience.
Charting frame rate over time shows the ultra-high-end boards in their narrow range up top, as the other two cards trail.
Frame time variance attempts to quantify the smoothness of a given graphics card’s performance. Once upon a time, not long ago, this was a very real issue for AMD in multi-GPU arrays. Its processors would deliver frames as they were made ready, sometimes resulting in runts—frames on-screen for so short of a time that you don’t actually perceive them.
The company first addressed concerns over reported versus experienced frame rates with a special driver that more evenly paced the rate at which output was displayed. And although the Radeon HD 7990 and Radeon R9 290X cards approach CrossFire differently, incredibly low frame time variance in Arma 3 shows that both solutions demonstrate effective pacing to keep variance low.
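The idea behind frame time variance can be sketched in a few lines: take per-frame delivery times and look at how much consecutive frames differ. Evenly paced output keeps that delta near zero; unpaced dual-GPU output produces the long-frame/runt alternation described above. (The sample numbers below are illustrative, not actual FCAT captures.)

```python
def frame_time_deltas(frame_times_ms):
    """Absolute change between consecutive frame times (ms).
    Evenly paced output stays near zero; runts show up as big spikes."""
    return [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]

paced = [16.7, 16.6, 16.8, 16.7, 16.6]  # steady ~60 FPS delivery
runty = [28.0, 3.0, 30.0, 2.5, 29.0]    # alternating long and near-runt frames

print(round(max(frame_time_deltas(paced)), 1))  # 0.2 -> smooth
print(round(max(frame_time_deltas(runty)), 1))  # 27.5 -> visible stutter
```

Note that both samples would report similar average frame rates, which is exactly why averages alone masked the problem for so long.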
This sample of frame times reveals a handful of small spikes, but overall consistent performance.
The bottleneck is gone as we step up to 3840x2160 and watch these cards further differentiate themselves. The Radeon R9 295X2 sits up top, followed by two Radeon R9 290X boards in CrossFire. The GeForce GTX 780 Ti and Titans in SLI take third and fourth place.
Although we haven’t seen any dropped or runt frame issues from Nvidia in the past, two GK110s should easily best a pair of GK104s. However, Fraps and FCAT results seem to agree that Titans in SLI and the GeForce GTX 690 report similar average frame rates. What we’re likely missing is the fact that the 690’s 2 GB of memory per GPU causes quite a bit of stuttering. So, while the frame rate appears high through Fraps, the experience of gaming on a 690 at 4K is not nearly as pleasant.
That same phenomenon isn’t captured in the frame rate over time chart, where the GeForce GTX 690 appears quite quick. More notable is that the Radeon R9 295X2 is faster than the R9 290Xes in CrossFire, which in turn outperforms the two high-end combos from Nvidia.
The frame time variance at 3840x2160 is much higher than it was at 2560x1440, which we’d expect given significantly lower frame rates. However, all the way down to the Titans in SLI, even worst-case variance isn’t all that bad.
The GeForce GTX 690 registers significantly higher variance at Ultra HD. AMD’s Radeon HD 7990 runs into bad worst-case variance, while its average and 75th-percentile numbers are much more reasonable.
AMD's Radeon R9 295 X2 graphics card reviewed
Several weeks ago, I received a slightly terrifying clandestine communique consisting only of a picture of myself in duplicate and the words, “Wouldn’t you agree that two is better than one?” I assume the question wasn’t truly focused on unflattering photographs or, say, tumors. In fact, I had an inkling that it probably was about GPUs, as I noted in a bemused news item.
A week or so after that, another package arrived at my door. Inside were two small cans of Pringles, the chips reduced to powder form in shipping, and a bottle of “Hawaiian volcanic water.” Also included were instructions for a clandestine meeting. Given what had happened to the chips, I feared someone was sending me a rather forceful signal. I figured I’d better comply with the sender’s demands.
So, some days later, I stood at a curbside in San Jose, California, awaiting the arrival of my contacts—or would-be captors or whatever. Promptly at the designated time, a sleek, black limo pulled up in front of me, and several “agents” in dark clothes and mirrored sunglasses spilled out of the door. I was handed a document to sign that frankly could have said anything, and I compliantly scribbled my signature on the dotted line. I was then whisked around town in the limo while getting a quick-but-thorough briefing on secrets meant for my eyes only—secrets of a graphical nature, I might add, if I weren’t bound to absolute secrecy.
Early the next week, back at home, a metal briefcase was dropped on my doorstep, as the agents had promised. It looked like so:
After entering the super-secret combination code of 0-0-0 on each latch, I was able to pop the lid open and reveal the contents.
Wot’s this? Maybe one of the worst-kept secrets anywhere, but then I’m fairly certain the game played out precisely as the agents in black wanted. Something about dark colors and mirrored sunglasses imparts unusual competence, it seems.
Pictured in the case above is a video card code-named Vesuvius, the most capable bit of graphics hardware in the history of the world. Not to put too fine a point on it. Alongside it, on the lower right, is the radiator portion of Project Hydra, a custom liquid-cooling system designed to make sure Vesuvius doesn’t turn into magma.
Mount Radeon: The R9 295 X2
Liberate it from the foam, and you can see Vesuvius—now known as the Radeon R9 295 X2—in all of its glory.
You may have been wondering how AMD was going to take a GPU infamous for heat issues with only one chip on a card and create a viable dual-GPU solution. Have a glance at that external 120-mm fan and radiator, and you’ll wonder no more.
If only Pompeii had been working with Asetek. Source: AMD.
The 295 X2 sports a custom cooling system created by Asetek for AMD. This system is pre-filled with liquid, operates in a closed loop, and is meant to be maintenance-free. As you can probably tell from the image above, the cooler pumps liquid across the surface of both GPUs and into the external radiator. The fan on the radiator then pushes the heat out of the case. That central red fan, meanwhile, cools the VRMs and DRAM on the card.
We’ve seen high-end video cards with water cooling in the past, but nothing official from AMD or Nvidia—until now. Obviously, having a big radiator appendage attached to a video card will complicate the build process somewhat. The 295 X2 will only fit into certain enclosures. Still, it’s hard to object too strongly to the inclusion of a quiet, capable cooling system like this one. We’ve seen way too many high-end video cards that hiss like a Dyson.
There’s also the matter of what this class of cooling enables. The R9 295 X2 has two Hawaii GPUs onboard, fully enabled and clocked at 1018MHz, slightly better than the 1GHz peak clock of the Radeon R9 290X. Each GPU has its own 4GB bank of GDDR5 memory hanging off of a 512-bit interface. Between the two GPUs is a PCIe 3.0 switch chip from PLX, interlinking the Radeons and connecting them to the rest of the system. Sprouting forth from the expansion slot cover are four mini-DisplayPort outputs and a single DL-DVI connector, ready to drive five displays simultaneously, if you so desire.
So the 295 X2 is roughly the equivalent of two Radeon R9 290X cards crammed into one dual-slot card (plus an external radiator). That makes it the most capable single-card graphics solution that’s ever come through Damage Labs, as indicated by the bigness of the numbers attached to it in the table below.
| | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak shader arithmetic (TFLOPS) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s) |
|---|---|---|---|---|---|
| Radeon HD 7970 | 30 | 118/59 | 3.8 | 1.9 | 264 |
| Radeon HD 7990 | 64 | 256/128 | 8.2 | 4.0 | 576 |
| Radeon R9 280X | 32 | 128/64 | 4.1 | 2.0 | 288 |
| Radeon R9 290 | 61 | 152/86 | 4.8 | 3.8 | 320 |
| Radeon R9 290X | 64 | 176/88 | 5.6 | 4.0 | 320 |
| Radeon R9 295 X2 | 130 | 352/176 | 11.3 | 8.1 | 640 |
| GeForce GTX 690 | 65 | 261/261 | 6.5 | 8.2 | 385 |
| GeForce GTX 770 | 35 | 139/139 | 3.3 | 4.3 | 224 |
| GeForce GTX 780 | 43 | 173/173 | 4.2 | 3.6 or 4.5 | 288 |
| GeForce GTX Titan | 42 | 196/196 | 4.7 | 4.4 | 288 |
| GeForce GTX 780 Ti | 45 | 223/223 | 5.3 | 4.6 | 336 |
Those are some large values. In fact, the only way you could match the bigness of those numbers would be to pair up a couple of Nvidia’s fastest cards, like the GeForce GTX 780 Ti. No current single GPU comes close.
There is a cost for achieving those large numbers, though. The 295 X2’s peak power rating is a jaw-dropping 500W. That’s quite a bit higher than some of our previous champs, such as the GeForce GTX 690 at 300W and the Radeon HD 7990 at 375W. Making this thing work without a new approach to cooling wasn’t gonna be practical.
Exotic cooling, steep requirements
AMD has gone out of its way to make sure the R9 295 X2 looks and feels like a top-of-the-line product. Gone are the shiny plastics of the Radeon HD 7990, replaced by stately and industrial metal finishes, from the aluminum cooling shroud up front to the black metal plate covering the back side of the card.
That’s not to say that the 295 X2 isn’t any fun. The bling is just elsewhere, in the form of illumination on the “Radeon” logo atop the shroud. Another set of LEDs makes the central cooling fan glow Radeon red.
I hope you’re taken by that glow—I know I kind of am—because it’s one of the little extras that completes the package. And this package is not cheap. The suggested price on this puppy is $1499.99 (or, in Europe, €1099 plus VAT). I believe that’s a new high-water mark for a consumer graphics card, although it ain’t the three frigging grand Nvidia intends to charge for its upcoming Titan Z with dual GK110b chips. And I believe the 295 X2’s double-precision math capabilities are fully enabled at one-quarter the single-precision rate, or roughly 2.8 teraflops. That makes the 295 X2 a veritable bargain by comparison, right?
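That double-precision estimate is easy to check: at a 1:4 rate, FP64 throughput is simply a quarter of the FP32 peak. A quick sketch, assuming the usual 2 FLOPS per shader per clock for FP32:

```python
SHADERS, GPUS, CLOCK_GHZ = 2816, 2, 1.018

fp32 = 2 * SHADERS * GPUS * CLOCK_GHZ / 1000  # peak FP32, TFLOPS
fp64 = fp32 / 4                               # 1:4 double-precision rate

print(round(fp32, 1))  # 11.5
print(round(fp64, 2))  # 2.87 -- the "roughly 2.8 teraflops" cited above
```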
Well, whatever the case, AMD expects the R9 295 X2 to hit online retailers during the week of April 21, and I wouldn’t be shocked to see them sell out shortly thereafter. You’ll have to decide for yourself whether 295 X2’s glowy lights, water cooling, and other accoutrements are worth well more than the $1200 you’d put down for a couple of R9 290X cards lashed together in a CrossFire config.
You know, some things about this card—its all-metal shroud, illuminated logo, secret agent-themed launch, metal briefcase enclosure, and exploration of new price territory—seem strangely familiar. Perhaps that’s because the GeForce GTX 690 was the first video card to debut an all-metal shroud and an illuminated logo; it was launched with a zombie apocalypse theme, came in a wooden crate with prybar, and was the first consumer graphics card to hit the $1K mark. Not that there’s anything wrong with that. The GTX 690’s playbook is a fine one to emulate. Just noticing.
The Radeon HD 7990 (left) and R9 295 X2 (right)
Assuming the R9 295 X2 fits into your budget, you may have to make some lifestyle changes in order to accommodate it. The card is 12″ long, like the Radeon HD 7990 before it, but it also requires a mounting point for the 120-mm radiator/fan combo that sits above the board itself. Together, the radiator and fan are 25 mm deep. If you’re the kind of dude who pairs up two 295 X2s, AMD recommends leaving a one-slot gap between the two cards, so that airflow to that central cooling fan isn’t occluded. I suspect you’d also want to leave that space open in a single-card config rather than, say, nestling a big sound card right up next to that fan.
More urgently, your system’s power supply must be able to provide a combined 50 amps across the card’s two eight-pin PCIe power inputs. That wasn’t a problem for the Corsair AX850 PSU in our GPU test rig, thanks to its single-rail design. Figuring out whether a multi-rail PSU offers enough amperage on the relevant 12V rails may require some careful reading, though.
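Putting that 50-amp figure into watts makes the requirement concrete. A sketch, assuming all of it is drawn over the two 8-pin connectors at 12V:

```python
AMPS, VOLTS, CONNECTORS = 50, 12, 2

connector_watts = AMPS * VOLTS
print(connector_watts)               # 600 W over the PCIe connectors alone
print(connector_watts / CONNECTORS)  # 300.0 W per 8-pin connector
```

That works out to roughly 300W per connector, double the 150W the PCIe spec nominally allots each 8-pin plug, which is why AMD spells out the amperage requirement rather than just listing a wattage.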
Now for a whole mess of issues
The Radeon R9 295 X2 is a multi-GPU graphics solution, and that very fact has triggered a whole mess of issues with a really complicated backstory. The short version is that AMD has something of a checkered past when it comes to multi-GPU solutions. The last time they debuted a new dual-GPU graphics card, the Radeon HD 7990, it resulted in one of the most epic reviews we’ve ever produced, as we pretty much conclusively demonstrated that adding a second GPU didn’t make gameplay anywhere near twice as smooth as a single GPU. AMD has since added a frame-pacing algorithm to its drivers in order to address that problem, with good results. However that fix didn’t apply to Eyefinity multi-display configs and didn’t cover even a single 4K panel. (The best current 4K panels use two “tiles” and are logically treated as dual displays.)
A partial fix for 4K came later, with the introduction of the Radeon R9 290X and the Hawaii GPU, in the form of a new data-transfer mechanism for CrossFire known as XDMA. Later still, AMD released a driver with updated frame pacing for older GPUs, like the Tahiti chip aboard the Radeon R9 280X and the HD 7990.
And, shamefully, we haven’t yet tested either XDMA CrossFire or the CrossFire + 4K/Eyefinity fix for older GPUs. I’ve been unusually preoccupied with other things, but that’s still borderline scandalous and sad. AMD may well have fixed its well-documented CrossFire issues with 4K and multiple displays, and son, testing needs to be done.
Happily, the R9 295 X2 review seemed like the perfect opportunity to spend some quality time vetting the performance of AMD’s current CrossFire solutions with 4K panels. After all, AMD emphasized repeatedly in its presentations that the 295 X2 is built for 4K gaming. What better excuse to go all out?
So I tried. Doing this test properly means using FCAT to measure how individual frames of in-game animation are delivered to a 4K panel. Our FCAT setup isn’t truly 4K capable, but we’re able to capture one of the two tiles on a 4K monitor, at a resolution of 1920×2160, and analyze performance that way. It’s a bit of a hack, but it should work.
Emphasis on should. Trouble is, I just haven’t been able to get entirely reliable results. It works for GeForces, but the images coming in over HDMI-to-DVI-to-splitter-to-capture-card from the Radeons have some visual corruption in them that makes frame counting difficult. After burning a big chunk of last week trying to make it work by swapping in shorter and higher-quality DVI cables, I had to bail on FCAT testing and fall back on the software-based Fraps tool in order to get reliable results. I will test XDMA CrossFire and the like with multiple monitors using FCAT soon. Just not today.
Fraps captures frame times relatively early in the production process, when they are presented as final to Direct3D, so it can’t show us exactly when frames are reaching the screen. As we’ve often noted, though, there is no single place where we can sample to get a perfect picture of frame timing. The frame pacing and metering methods used in multi-GPU solutions may provide regular, even frame delivery to the monitor, but as a result, the animation timing of those frames may not match their display times. Animation timing is perhaps better reflected in the Fraps numbers—depending on how the game engine tracks time internally, which varies from game to game.
This stuff is really complicated, folks.
Fortunately, although Fraps may not capture all the nuances of multi-GPU microstuttering and its mitigation, it is a fine tool for basic performance testing—and there are plenty of performance challenges for 4K gaming even without considering frame delivery to the display. I think that’ll be clear very soon.
One more note: I’ve run our Fraps results though a three-frame low-pass filter in order to compensate for the effects of the three-frame Direct3D submission queue used by most games. This filter eliminates the “heartbeat” pattern of high-and-then-low frame times sometimes seen in Fraps results that doesn’t translate into perceptible hitches in the animation. We’ve found that filtered Fraps data corresponds much more closely to the frame display times from FCAT. Interestingly, even with the filter, the distinctive every-other-frame pattern of multi-GPU microstuttering is evident in some of our Fraps results.
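The filter described above is, in essence, a short moving average over the frame-time stream. Here's a minimal sketch of the idea (we haven't published our exact filter; this assumes a simple three-sample window):

```python
def low_pass(frame_times, window=3):
    """Smooth a list of frame times (ms) with a simple moving average.

    A sketch of the sort of three-frame filter described above; it damps
    the alternating high/low "heartbeat" that Fraps can record when games
    buffer frames in Direct3D's three-deep submission queue.
    """
    out = []
    for i in range(len(frame_times)):
        lo = max(0, i - window + 1)      # trailing window, clamped at the start
        chunk = frame_times[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# An exaggerated heartbeat pattern: 8/25 ms alternation averaging ~16.5 ms.
raw = [8, 25, 8, 25, 8, 25]
print(low_pass(raw))
```

Notably, a filter this short compresses the heartbeat without erasing a genuine every-other-frame pattern entirely, which is consistent with microstuttering still being visible in our filtered plots.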
The 4K experience
We’ve had one of the finest 4K displays, the Asus PQ321Q, in Damage Labs for months now, and I’ve been tracking the progress of 4K support in Windows, in games, and in graphics drivers periodically during that time. This is our first formal look at a product geared specifically for 4K gaming, so I thought I’d offer some impressions of the overall experience. Besides, I think picking up a $3000 4K monitor ought to be a prerequisite for dropping $1500 on the Radeon R9 295 X2, so the 4K experience is very much a part of the overall picture.
The first thing that should be said is that this 31.5″ Asus panel with a 3840×2160 pixel grid is a thing of beauty, almost certainly the finest display I’ve ever laid eyes upon. The color reproduction, the uniformity, the incredible pixel density, the really-good-for-an-LCD black levels—practically everything about it is amazing and wondrous. The potential for productivity work, video consumption, or simply surfing the web is ample and undeniable. To see it is to want it.
The second thing to be said is that—although Microsoft has made progress and the situation isn’t bad under Windows 8.1 when you’re dealing with the file explorer, desktop, or Internet Explorer—the 4K support in Windows programs generally is still awful. That matters because you will want to use high-PPI settings and to have text sizes scaled up to match this display. Reading five-point text is not a good option. Right now, most applications do scale up their text size in response to the high-PPI control panel settings, but the text looks blurry. Frustrating, given everything, but usable.
The bigger issues have to do with the fact that today’s best 4K displays, those that support 60Hz refresh rates, usually present themselves to the PC as two “tiles” or separate logical displays. They do so because, when they were built, there wasn’t a display scaler ASIC capable of handling the full 4K resolution. The Asus PQ321Q can be connected via dual HDMI inputs or a single DisplayPort connector. In the case of DisplayPort, the monitor uses multi-stream transport mode to essentially act as two daisy-chained displays. You can imagine how this reality affects things like BIOS screens, utilities that run in pre-boot environments, and in-game menus the first time you run a game. Sometimes, everything is squished up on half of the display. Other times, the image is both squished and cloned on both halves. Occasionally, the display just goes black, and you’re stuck holding down the power button in an attempt to start over.
AMD and Nvidia have done good work making sure their drivers detect the most popular dual-tile 4K monitors and auto-configure them as a single large surface in Windows. Asus has issued multiple firmware updates for this monitor that seem to have helped matters, too. Still, it often seems like the tiling issues have moved around over time rather than being on a clear trajectory of overall improvement.
Here’s an example from Tomb Raider on the R9 295 X2. I had hoped to use this game for testing in this review, but the display goes off-center at 3840×2160. I can’t seem to make it recover, even by nuking the registry keys that govern its settings and starting over from scratch. Thus, Lara is offset to the left of the screen while playing, and many of the in-game menus are completely inaccessible.
AMD suggested specifying the aspect ratio for this game manually to work around this problem, but doing so gave me an entire game world that was twice as tall as it should have been for its width. Now, I’m not saying that’s not interesting and maybe an effective substitute for some of your less powerful recreational drugs, because wow. But it’s not great for real gaming.
Another problem that affects both AMD and Nvidia is a shortage of available resolutions. Any PC gamer worth his salt knows what to do when a game doesn’t quite run well enough at the given resolution, especially if you have really high pixel densities at your command: just pop down to a lower res and let the video card or monitor scale things up to fill the screen. Dropping to 2560×1440 or 1920×1080 would seem like an obvious strategy with a display like this one. Yet too often, you’re either stuck with 3840×2160 or bust. The video drivers from AMD and Nvidia don’t consistently expose even these two obvious resolutions that are subsets of 3840×2160 or anything else remotely close. I’m not sure whether this issue will be worked out in the context of these dual-tile displays or not. Seems like they’ve been around quite a while already without the right thing happening. We may have to wait until the displays themselves get better scaler ASICs.
There’s also some intermittent sluggishness in using a 4K system, even with the very fastest PC hardware. You’ll occasionally see cases of obvious slowness, where screen redraws are laborious for things like in-game menus. Such slowdowns have been all but banished at 2560×1600 and below these days, so it’s a surprise to see them returning in 4K. I’ve also encountered some apparent mouse precision issues in game options menus and while sniping in first-person shooters, although such things are hard to separate precisely from poor graphics performance.
In case I haven’t yet whinged enough about one of the coolest technologies of the past few years, let me add a few words about the actual experience of gaming in 4K. I’ve gotta say that I’m not blown away by it, when my comparison is a 27″ 2560×1440 Asus monitor, for several reasons.
For one, game content isn’t always 4K-ready. While trying to get FCAT going, I spent some time with this Asus monitor’s right tile in a weird mode, with only half the vertical resolution active. (Every other scanline was just repeated.) You’d think that would be really annoying, and on the desktop, it’s torture. Fire up a session of Borderlands 2, though, and I could play for hours without noticing the difference, or even being able to detect the split line, between the right and left tiles. Sure, Crysis 3 is a different story, but the reality is that many games won’t benefit much from the increased pixel density. Their textures and models and such just aren’t detailed enough.
Even when games do take advantage, I’m usually not blown away by the difference. During quick action, it’s often difficult to appreciate the additional fidelity packed into each square inch of screen space.
When I do notice the additional sharpness, it’s not always a positive. For example, I often perceive multiple small pixels changing quickly near each other as noise or flicker. The reflections in puddles in BF4 are one example of this phenomenon. I don’t think those shader effects have enough internal sampling, and somehow, that becomes an apparent problem at 4K’s high pixel densities. My sense is that, most of the time, lower pixel densities combined with supersampling (basically, rendering each pixel multiple times at an offset and blending) would probably be more pleasing overall than 4K is today. Of course, as with many things in graphics, there’s no arguing with the fact that 4K plus supersampling would be even better, if that were a choice. In fact, supersampling may prove to be an imperative for high-PPI gaming. 4K practically requires even more GPU power and will soak it up happily. Unfortunately, 4X or 8X supersampling at 4K is not generally feasible right now.
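Supersampling, as described above, just means rendering more samples per output pixel and blending them down. Here's a toy illustration of the simplest possible resolve, a 2x2 box filter on a grayscale buffer (not any driver's actual implementation):

```python
def downsample_2x(buf):
    """Average each 2x2 block of a supersampled grayscale buffer.

    buf is a list of rows (equal-length lists of brightness values) with
    even dimensions; the result has half the width and height. This is
    plain 2x2 box filtering, the simplest supersampling resolve.
    """
    h, w = len(buf), len(buf[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = (buf[y][x] + buf[y][x + 1] +
                     buf[y + 1][x] + buf[y + 1][x + 1])
            row.append(block / 4)
        out.append(row)
    return out

# A hard, jagged edge at 2x resolution becomes a softened edge at 1x.
hi_res = [[0, 0, 255, 255],
          [0, 0, 255, 255],
          [0, 255, 255, 255],
          [0, 255, 255, 255]]
print(downsample_2x(hi_res))
```

The jagged step in the high-res buffer resolves to an intermediate 127.5 value, which is exactly the edge-smoothing (and shader-noise-averaging) effect under discussion.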
Don’t get me wrong. When everything works well and animation fluidity isn’t compromised, gaming at 4K can be a magical thing, just like gaming at 2560×1440, only a little nicer. The sharper images are great, and edge aliasing is much reduced at high PPIs.
I’m sure things will improve gradually as 4K monitors become more common, and I’m happy to see the state of the art advancing. High-PPI monitors are killer for productivity. Still, I think some other display technologies, like G-Sync/Freesync-style variable refresh intervals and high-dynamic-range panels, are likely to have a bigger positive impact on gaming. I hope we don’t burn the next few years on cramming in more pixels without improving their speed and quality.
Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:
| Component | Details |
| --- | --- |
| Chipset | Intel X79 Express |
| Memory size | 16GB (4 DIMMs) |
| Memory type | Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz |
| Memory timings | 9-9-9-24 1T |
| Chipset drivers | INF update 126.96.36.1993, Rapid Storage Technology Enterprise 188.8.131.523 |
| Audio | Integrated X79/ALC898 with Realtek 184.108.40.20671 drivers |
| Hard drive | Kingston HyperX 480GB SATA |
| Power supply | Corsair AX850 |
| OS | Windows 8.1 Pro |
| Card | Driver revision | GPU base clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
| --- | --- | --- | --- | --- | --- |
| GeForce GTX 780 Ti | GeForce 337.50 | 875 | 928 | 1750 | 3072 |
| 2 x GeForce GTX 780 Ti | GeForce 337.50 | 875 | 928 | 1750 | 3072 (x2) |
| Radeon HD 7990 | Catalyst 14.4 beta | 950 | 1000 | 1500 | 3072 |
| XFX Radeon R9 290X | Catalyst 14.4 beta | – | 1000 | 1250 | 4096 |
| Radeon R9 295 X2 | Catalyst 14.4 beta | – | 1018 | 1250 | 4096 (x2) |
Thanks to Intel, Corsair, Kingston, Gigabyte, and OCZ for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.
Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, comprised of a pair of 4TB Black hard drives provided by WD.
Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.
In addition to the games, we used the following test applications:
The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
Click on the buttons above to cycle through plots of the frame times from one of our three test runs for each graphics card. You’ll notice that the lines for the multi-GPU solutions like the R9 295 X2 and two GTX 780 Ti cards in SLI are “fuzzier” than those from the single-GPU solutions. That’s an example of multi-GPU micro-stuttering, where the two GPUs are slightly out of sync, so the frame-to-frame intervals tend to vary in an alternating pattern. Click on the buttons below to zoom in and see how that pattern looks up close.
The only really pronounced example of microstuttering in our zoomed-in plots is the GTX 780 Ti SLI config, and it’s not in terrible shape, with the peak frame times remaining under 25 ms or so. The thing is, although we can measure this pattern in Fraps, it’s likely that Nvidia’s frame metering algorithm will smooth out this saw-tooth pattern and ensure more consistent delivery of frames to the display.
Not only does the 295 X2 produce the highest average frame rate, but it backs that up by delivering the lowest rendering times across 99% of the frames in our test sequence, as the 99th percentile frame time indicates.
Here’s a broader look at the frame rendering time curve. You can see that the 295 X2 has trouble in the very last less-than-1% of frames. I can tell you where that happens in the test sequence: it’s when my exploding arrow does its thing. We’ve seen frame time spikes on both brands of video cards at this precise spot before. Thing is, if you look at the frame time plots above, Nvidia appears to have reduced the size of that spike recently, perhaps during the work it’s done optimizing this new 337.50 driver.
These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
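For the curious, both the 99th-percentile figure and these "time spent beyond X" scores are simple computations over a raw frame-time log. Here's a sketch, assuming times in milliseconds (our actual tooling differs in the details):

```python
def percentile_99(frame_times):
    """Return the frame time (ms) at the 99th percentile of the log,
    using a simple nearest-rank estimate."""
    ordered = sorted(frame_times)
    idx = int(len(ordered) * 0.99) - 1
    return ordered[max(idx, 0)]

def time_beyond(frame_times, threshold_ms):
    """Total milliseconds spent past the threshold -- the 'badness' score.

    Only the portion of each frame beyond the threshold counts, so one
    60 ms frame contributes 10 ms against a 50 ms limit, not 60 ms.
    """
    return sum(t - threshold_ms for t in frame_times if t > threshold_ms)

# 99 fast frames and one 80 ms hitch:
log = [15.0] * 99 + [80.0]
print(percentile_99(log))    # the hitch hides in the last 1%
print(time_beyond(log, 50))  # the badness score still catches it
```

This is why the two metrics complement each other: a single nasty spike can slip past the 99th percentile entirely while still showing up plainly in the time-beyond-50-ms tally.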
Per our discussion above, the GTX 780 Ti SLI aces this test by never crossing the 50-ms threshold. The R9 295 X2 is close behind—and solidly ahead of a single Hawaii GPU aboard the Radeon R9 290X. That’s the kind of real-world improvement we want out of a multi-GPU solution. This is where I’d normally stop and say we’ll want to verify the proper frame delivery with FCAT, but in this particular case, I’ll skip that step and call it good. Subjectively speaking, Crysis 3 on the 295 X2 at 4K is amazingly fluid and smooth, and this game has the visual fidelity to make you appreciate the additional pixels.
Assassin’s Creed 4 Black Flag
Uh oh. Click through the plots above, and you’ll see occasional frame time spikes from AMD’s multi-GPU solutions, both the HD 7990 and the R9 295 X2. Those same spikes are absent from the plots of the R9 290X and the two GeForce configs. The spikes have a fairly modest impact on the 295 X2’s FPS average, which is still much higher than a single 290X card’s, but they’re reflected more clearly in the latency-sensitive 99th percentile metric.
The 295 X2 is still faster than a single R9 290X overall in Black Flag, but its multi-GPU scaling is marred by those intermittent slowdowns. Meanwhile, the GTX 780 Ti SLI setup never breaches the 33-ms barrier, not even once.
Thanks to the hard work put in by Johan Andersson and the BF4 team, this game is now an amazing playground for folks who want to understand performance. I was able to collect performance data from the game engine directly here, without the use of Fraps, and I grabbed much more of it than I can share in the context of this review, including information about the CPU time and GPU time required to render each frame. BF4 supports AMD’s Mantle, where Fraps cannot go, and the game now even includes an FCAT overlay rendering option, so we can measure frame delivery with Mantle.
I’m on board for all of that—and I even tried out two different frame-pacing options BF4 offers for multi-Radeon setups—but I didn’t have time to include it all in this review. In the interests of time, I’ve only included Direct3D results below. Trust me, the differences in performance between D3D and Mantle are slight at 4K resolutions, where the GPU limits performance more than the CPU and API overhead. Also, given the current state of multi-GPU support and frame pacing in BF4, I think Direct3D is unquestionably the best way to play this game on a 295 X2.
Still, we’ll dig into that scrumptious, detailed BF4 performance data before too long. There’s much to be learned.
Check each one of the metrics above, and it’s easy to see the score. The R9 295 X2 is pretty much exemplary here, regardless of which way you choose to measure.
Oddly enough, although its numbers look reasonably decent, the GTX 780 Ti SLI setup struggles here, something you can tell by the seat of your pants when playing. My insta-theory was that the cards were perhaps running low on memory. After all, they “only” have 3GB each, and SLI adds some memory overhead. I looked into it by logging memory usage with GPU-Z while playing, and the primary card was using its RAM pretty much to the max. Whether or not that’s the source of the problem is tough to say, though, without further testing.
Batman: Arkham Origins
Well. We’re gliding through the rooftops in this test session, and the game must be constantly loading new portions of the city as we go. You’d never know that when playing on one of the GeForce configs, but there are little hiccups that you can feel all along the path when playing on the Radeons. For whatever reason, this problem is most pronounced on the 295 X2. Thus, the 295 X2 fares poorly in our latency-sensitive performance metrics. This is a consistent and repeatable issue that’s easy to notice subjectively.
Guild Wars 2
Uh oh. Somehow, the oldest game in our roster still doesn’t benefit from the addition of a second GPU. Heck, the single 290X is even a little faster than the X2. Not what I expected to see here, but this is one of the pitfalls of owning a multi-GPU solution. Without the appropriate profile for CrossFire or SLI, many games simply won’t take advantage of additional GPUs.
Call of Duty: Ghosts
Hm. Watch the video above, and you’ll see that the first part of our test session is a scripted sequence that looks as if it’s shown through a camera lens. This little scripted bit starts the level, and I chose to include it because Ghosts has so many fricking checkpoints riddled throughout it, there’s practically no way to test the same area repeatedly unless it’s at the start of a mission. By looking at the frame time plots, you can see that the Radeons really struggle with this portion of the test run—and, once again, the multi-GPU configs suffer the most. During that bit of the test, the 290X outperforms the 295 X2.
Beyond those opening seconds, the 295 X2 doesn’t perform too poorly, although the dual 780 Ti cards are still faster. By then, though, the damage is done.
Thief
I decided to just use Thief‘s built-in automated benchmark, since Fraps can’t measure performance with AMD’s Mantle API. Unfortunately, this benchmark is pretty simplistic, reporting only average and minimum FPS numbers (as well as a maximum, for whatever that’s worth).
Watch this test run, and you can see that it’s a struggle for most of these graphics cards. Unfortunately, Mantle isn’t any help, even on the single-GPU R9 290X. I had hoped for some gains from Mantle, even if the primary benefits are in CPU-bound scenarios. Doesn’t look like that’s the case.
As you can see, Thief‘s developers haven’t yet added multi-GPU support to their Mantle codepath, so the 295 X2 doesn’t perform at its best with Mantle. With Direct3D, though, the 295 X2 easily leads the pack.
Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.
Yeah, so this is the same test rig in each case; only the graphics card changes. Dropping in the R9 295 X2 raises the total system power consumption at the wall outlet to an even 700W, over 130W higher than with dual GTX 780 Ti cards.
Noise levels and GPU temperatures
The good news here is that, despite its higher power draw and the presence of a water pump and an additional 120-mm fan, the Radeon R9 295 X2 isn’t terribly loud at all. This is progress. A couple of generations ago, the Radeon HD 6990 exceeded 58 dBA in the same basic test conditions. I’m not sure I want to see all future dual-GPU cards come with a radiator appendage hanging off of ’em, but I very much prefer that to 58 dBA of noise.
We couldn’t log the 295 X2’s temperatures directly because GPU-Z doesn’t yet support this card (and you need to log temps while in full-screen mode so both GPUs are busy). However, the card’s default PowerTune limit is 75°C. Given how effective PowerTune is at doing its job, I’d fully expect the 295 X2 to hit 75°C during our tests.
Notice, also, that our R9 290X card stays relatively cool at 71°C. That’s because it’s an XFX card with an excellent aftermarket cooler. The card not only remained below its thermal limit, but also ran consistently at its 1GHz peak clock during our warm-up period and as we took the readings. Using a bigger, beefier cooler, XFX has solved AMD’s problem with variable 290X clock speeds and has erased the performance difference between the 290X’s default and “uber” cooling modes in the process. The performance results for the 290X on the preceding pages reflect that fact.
Let’s sum up our performance results—and factor in price—using our world-famous scatter plots. These overall performance results are a geometric mean of the outcomes on the preceding pages. We left Thief out of the first couple of plots since we tested it differently, but we’ve added it to a third plot to see how it affects things.
As usual, the best values will tend toward the top left of the plot, where performance is high and price is low, while the worst values will gravitate toward the bottom right.
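The geometric mean mentioned above is what keeps one lopsided result from dominating the combined score. Here's a sketch, using hypothetical per-game numbers:

```python
import math

def geomean(values):
    """Geometric mean: the nth root of the product of n values.

    Used to combine per-game FPS results into one overall score; a
    single outlier result moves it less than an arithmetic mean would.
    """
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-game 99th-percentile FPS results for one card:
scores = [60.0, 45.0, 30.0, 52.0]
print(round(geomean(scores), 1))
print(round(sum(scores) / len(scores), 1))   # arithmetic mean, for contrast
```

With these numbers the geometric mean lands a bit below the arithmetic mean, because the weak 30-FPS outing drags it down proportionally rather than by a fixed amount.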
As you can see, the 295 X2 doesn’t fare well in our latency-sensitive 99th percentile FPS metric (which is just frame times converted to higher-is-better FPS). You’ve seen the reasons why in the test results: frame time spikes in AC4 and Arkham Origins, struggles in a portion of our Call of Duty: Ghosts test session, and negative performance scaling for multi-GPU in Guild Wars 2. These problems push the R9 295 X2 below even a single GeForce GTX 780 Ti in the overall score.
AMD’s multi-GPU struggles aren’t confined to the 295 X2, either. The Radeon HD 7990 is, on paper, substantially more powerful than the R9 290X, but its 99th percentile FPS score is lower than a single 290X card’s.
The 295 X2 does somewhat better if you’re looking at the FPS average, and the addition of Thief makes the Radeons a little more competitive overall. Still, two GTX 780 Ti cards in SLI are substantially faster even in raw FPS terms. And we know that the 295 X2 struggles to produce consistently the sort of gaming experience that its hardware ought to provide.
I’ve gotta say, I find this outcome incredibly frustrating and disappointing. I believe AMD’s hardware engineers have produced probably the most powerful graphics card we’ve ever seen. The move to water cooling has granted it a massive 500W power envelope, and it has a 1GB-per-GPU advantage in memory capacity over the GeForce GTX 780 Ti SLI setup. Given that we tested exclusively in 4K, where memory size is most likely to be an issue, I fully expected the 295 X2 to assert its dominance. We saw flashes of its potential in Crysis 3 and BF4. Clearly the hardware is capable.
At the end of the day, though, a PC graphics card requires a combination of hardware and software in order to perform well—that’s especially true for a multi-GPU product. Looks to me like the R9 295 X2 has been let down by its software, and by AMD’s apparent (and, if true, bizarre) decision not to optimize for games that don’t wear the Gaming Evolved logo in their opening titles. You know, little franchises like Call of Duty and Assassin’s Creed. It’s possible AMD could fix these problems in time, but one has to ask how long, exactly, owners of the R9 295 X2 should expect to wait for software to unlock the performance of their hardware. Recently, Nvidia has accelerated its practice of having driver updates ready for major games before they launch, after all. That seems like the right way to do it. AMD is evidently a long way from that goal.
I dunno. Here’s hoping that our selection of games and test scenarios somehow just happened to be particularly difficult for the R9 295 X2, for whatever reason. Perhaps we can vary some of the test scenarios next time around and get a markedly better result. There’s certainly more work to be done to verify consistent frame delivery to the display, anyhow. Right now, though, the 295 X2 is difficult to recommend, even to those folks who would happily pony up $1500 for a graphics card.
I occasionally post pictures of expensive graphics cards on Twitter.
AMD Radeon R9 295X2 review
The R9 295X2 is likely the final throw of the dice for AMD's current spin of Graphics Core Next (GCN) architecture. It takes a pair of the fastest Radeon graphics chips available and squeezes them into one behemoth of a graphics card.
That's a familiar refrain, with both AMD and Nvidia traditionally filling out their top-end lineups with dual-GPU cards based on their finest single GPUs. This time around, though, AMD have done things slightly differently.
Normally these dual-GPU cards use the top graphics chips, but in order to have them running effectively on a single PCB, the engineers will clock down those processors. With AMD's previous dual-GPU card, the Radeon HD 7990, they'd clocked their Tahiti XT chips at 950MHz compared with the 1GHz clockspeed of the chips at the heart of their top HD 7970 GHz cards. Likewise with Nvidia's GTX 690, whose GPUs were clocked at just 915MHz where the comparable GTX 680 was running at a hefty 1,006MHz.
This is one of the reasons I'm left rather cold by Nvidia's announcement of their $3,000 GTX Titan Z. At some $1,000 more expensive than a pair of the GTX Titan Black cards that it's based on, it's also likely to run slower than that SLI pairing. The Titan Black's GK110 GPU runs at 889MHz and the Titan Z is likely to be closer to 800MHz, if it follows tradition.
The R9 295X2 doesn't follow that tradition.
Thanks to the Asetek-designed liquid chip-chiller, AMD can actually run the R9 295X2's twin Hawaii XT chips faster than the Hawaii XT GPUs in the Radeon R9 290X. This is the first time a reference-designed board has turned up either with water-cooling as standard or with the dual-GPU configuration set up to run quicker than the single-GPU cards it's derived from.
On the sample I've been testing that only amounts to some 18MHz faster than the 1GHz GPUs in the R9 290X, but I would have still been impressed if they'd kept the exact same core clock.
My reference R9 290X runs at 95°C when it's going at games full pelt. Trying to cope with two of those chips, at that temperature, with one dual-slot air-cooler would have been almost impossible. The water-cooling route, then, was vital, and the fact that the R9 295X2 is limited to 75°C before throttling back is interesting too. In practice, my R9 295X2 barely runs much above 65°C in-game.
And it performs some impressive gaming feats at those temps too, most impressively at 4K. That's what this dual-GPU monster was designed for - offering genuinely playable gaming performance at resolutions as high as 3840 x 2160.
As well as being the fastest single graphics card available, it's also probably the simplest way to get a decent gaming setup running on a 4K screen.
The simplest, but not the cheapest.
Because of all the extra engineering effort and expense that's gone into putting the R9 295X2 and its water-cooled, Titan-aping shroud together, it's retailing for £1,100 / $1,500. In the UK that's some £250-odd more expensive than buying a pair of R9 290X cards and linking them together in a CrossFire-capable motherboard.
It's also more expensive than buying a pair of Nvidia GTX 780 Ti cards. But thanks to the huge 8GB GDDR5 frame buffer the R9 295X2 is sporting—filling out those twin 512-bit memory buses—the Radeon card is a far better bet for seriously high-resolution gaming.
Taking Battlefield 4 as an example, the AMD card is able to hit 60FPS at Ultra settings in 4K, while the SLI GTX 780 Ti pairing trails at 48FPS, making the Radeon some 25% faster. It reads even worse for Nvidia in the GRID 2 benchmark, though the platform-agnostic Heaven 4.0 synthetic test still gives the win to Nvidia.
But yes, we are still talking about a single graphics card that likely costs more than most of our full gaming PCs. It might make sense if you've already spent that much on a 4K-capable screen and need something to actually run games at that rarefied resolution, but for the rest of us it's an extravagant liquid-luxury.
That's always what these cards are like though. They're produced in such limited numbers that you're more likely to see one shining out of the side of a LAN event show machine than out of the perspex side of your buddies' rigs. These are tech showcase cards, proof-of-concept creations designed to demonstrate the extent to which each company's technology can be pushed. As an example of what AMD's GCN architecture is capable of when it's scaled up and water-cooled, the R9 295X2 is thoroughly impressive. It's a great technical achievement to produce such a good-looking card with such low thermals and such slick performance. But as a card that I could actually recommend anyone buy? Not so much.
The R9 295X2 only makes sense if you're looking for a seriously high-end, 4K-capable miniature gaming machine. That's the only place you need such efficient cooling and use of space. In a desktop rig, with a chassis capable of holding a pair of graphics cards, you can get the same level of performance from a pair of R9 290X cards. With third-party air-cooling solutions from the likes of Sapphire, you can get decent thermals and still hit the same 4K speeds.
I'm happy to applaud its design and performance, but I couldn't recommend it as a sensible purchase; even if you can afford one or find one for sale.
It's the performance of the R9 295X2 at 4K resolutions that really impresses, though the cheaper Nvidia SLI pair does have a few tricks up its sleeve when it comes to Heaven and the Unreal Engine 3's Bioshock Infinite.
Power-wise you can also see where the Nvidia Kepler architecture is more efficient, with a pair of GTX 780 Ti cards drawing some 200W less juice at peak platform operation.
Synthetic 4K tessellation performance:
Heaven 4.0 – (Min) Avg FPS: higher is better
Radeon R9 295X2 – (14) 30
GeForce GTX 780 Ti SLI – (17) 32
Radeon R9 290X – (10) 17
GeForce GTX 780 Ti – (13) 22
DirectX 11 2560×1600 performance:
Metro: Last Light – (Min) Avg FPS: higher is better
Radeon R9 295X2 – (17) 50
GeForce GTX 780 Ti SLI – (12) 47
Radeon R9 290X – (17) 28
GeForce GTX 780 Ti – (20) 32
Battlefield 4 – (Min) Avg FPS: higher is better
Radeon R9 295X2 – (27) 86
GeForce GTX 780 Ti SLI – (64) 89
Radeon R9 290X – (34) 53
GeForce GTX 780 Ti – (42) 60
DirectX 11 4K gaming performance:
Battlefield 4 – (Min) Avg FPS: higher is better
Radeon R9 295X2 – (13) 60
GeForce GTX 780 Ti SLI – (18) 48
Radeon R9 290X – (12) 32
GeForce GTX 780 Ti – (22) 33
Bioshock Infinite – (Min) Avg FPS: higher is better
Radeon R9 295X2 – (14) 58
GeForce GTX 780 Ti SLI – (10) 67
Radeon R9 290X – (16) 30
GeForce GTX 780 Ti – (9) 42
GRID 2 – (Min) Avg FPS: higher is better
Radeon R9 295X2 – (80) 99
GeForce GTX 780 Ti SLI – (58) 72
Radeon R9 290X – (44) 54
GeForce GTX 780 Ti – (45) 55
Max temperature – °C: cooler is better
Radeon R9 295X2 – 75
GeForce GTX 780 Ti SLI – 82
Radeon R9 290X – 95
GeForce GTX 780 Ti – 65
Peak power draw:
100% GPU load – Watts: lower is better
Radeon R9 295X2 – 681
GeForce GTX 780 Ti SLI – 485
Radeon R9 290X – 368
GeForce GTX 780 Ti – 389
“The massive Radeon 295X2 can easily handle modern games, even at 4K, but its high price makes it suitable for only the most die-hard gamers.”
- Fastest video card available today
- Gobs of memory bandwidth
- Impressive specifications and design
- Needs two 8-pin PCIe power connectors
- Large, requires 120mm fan mount for water cooler
- Drivers aren’t mature
There is an eternal conflict that’s been waged for over a decade. Every year the fight continues, each side deals blows to the other, each striving to gain even the slightest foothold. I am talking, of course, about the war between AMD and Nvidia.
The title of “world’s most powerful video card” is the crown that this war is fought over. Though few people actually buy the most powerful card on the market, its existence has a halo effect, and is used as ammo in forum battles about which company is the best. The crown has traded places countless times over the last decade, as each side counters the other.
Since late 2013, however, the war has been tied. Nvidia’s Titan Black and AMD’s Radeon R9 290X offer very similar performance, making it hard for either to definitively claim the title of “most powerful.” Now, AMD has a piece of hardware that it hopes will sway the conflict in its favor: the $1,499 Radeon R9 295X2.
The 295X2 takes two of the company’s fastest graphics cores and slaps them onto a single circuit board. This is a tried-and-true method of creating a new “world’s fastest” video card that has been used several times in the past, but it’s a tactic that has often resulted in performance oddities and graphical glitches. Has AMD managed to avoid these pitfalls and deliver the best video card ever?
Two in one
AMD shipped the Radeon 295X2 in a large briefcase full of foam padding. Absurd as that may seem, it was probably a wise idea. This card is massive, measuring 13 inches long, and it’s water-cooled from the factory. That means a radiator, a fan, and the tubes routing fluid to and from the card were already attached when we received the hardware.
Though impressive, water cooling causes some practical problems. Owners will have to find room not only for the card’s length and height (as this is, of course, a double-wide card that will obstruct any PCI slot below the slot into which it’s installed), but also for the radiator and fan, which requires a 120mm fan mount. Our test rig didn’t happen to have one free, so we had to make room by detaching an exhaust fan.
The cooler is required to handle the twin GPUs, which offer a combined 5,632 stream processors cranking out 11.5 teraflops of raw compute power. In other words, the Radeon 295X2 is more than six times as powerful as a PlayStation 4 (which quotes 1.84 Tflops), and more than twice as powerful as the Nvidia GTX Titan Black (which quotes 5.1 Tflops) – on paper, at least.
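The paper math behind that figure is straightforward; a quick sanity check (assuming the 295X2’s quoted 1,018 MHz boost clock, with each stream processor executing one fused multiply-add, i.e. two FLOPs, per cycle):

```python
# FP32 throughput: stream processors x 2 FLOPs (one FMA) per cycle x clock.
def tflops(stream_processors, clock_ghz):
    return stream_processors * 2 * clock_ghz / 1000.0

r9_295x2 = tflops(5632, 1.018)   # both GPUs combined
ps4 = 1.84                       # Sony's quoted figure
print(round(r9_295x2, 1))        # 11.5
print(round(r9_295x2 / ps4, 1))  # 6.2
```

The same formula applied to the Titan Black’s 2,880 CUDA cores at its ~889 MHz base clock lands right at its quoted 5.1 Tflops.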
Memory performance looks equally impressive. Each of the Radeon 295X2’s GPUs talks to its own 4GB of GDDR5 RAM over a 512-bit memory interface, for 8GB in total. This translates to 640 GB/s of combined memory bandwidth, almost twice the 336 GB/s delivered by the GTX Titan Black. The raw bandwidth provided by the 295X2 is astounding, and should be more than enough to handle 4K.
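That bandwidth number is equally easy to verify. A minimal sketch, assuming a 1,250 MHz memory clock (GDDR5 transfers four bits per pin per memory-clock cycle, so 5 Gbps effective per pin):

```python
# Bandwidth = bus width in bytes x effective data rate per pin.
def bandwidth_gbs(bus_width_bits, effective_gbps):
    return bus_width_bits / 8 * effective_gbps

per_gpu = bandwidth_gbs(512, 5.0)  # one Hawaii GPU: 320.0 GB/s
total = per_gpu * 2                # both GPUs: 640.0 GB/s
print(per_gpu, total)
```

Keep in mind the 640 GB/s is the sum of two independent memory pools; neither GPU can borrow bandwidth (or memory) from the other.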
Extreme performance requires extreme power draw, however. AMD says the card can draw up to 500 watts on its own, which means you’ll need a beefy power supply with two 8-pin PCIe power connectors. AMD also warns that the power supply must be able to deliver at least 50 amps over its 12-volt rail; anything less could cause a crash, or even kill a cheaply constructed unit. Our 1,000-watt Silverstone PSU was up to the challenge, but buyers with an old, inexpensive or low-output PSU will need to upgrade.
Though the 295X2 is a single card, its use of dual GPUs necessitates an internal CrossFireX connection. This means that the card runs in CrossFire mode by default, even though it occupies only a single PCIe slot. This creates some potential problems because, as anyone who has used a dual-GPU graphics card knows, not all games work well with CrossFire (or Nvidia’s SLI). Driver support has come a long way over the years, but there are still situations where you may not be able to take full advantage of the power offered by both GPUs. This is usually the case with very new or very old games that don’t have CrossFire or SLI driver support.
Price and positioning
The AMD Radeon R9 295X2 is expected to hit retailers later this month at a price of $1,499. That’s a lot, but not unexpected. After all, the latest high-end cards from Nvidia, like the Titan and Titan Black, have generally sold for around $1,000 – but they also have a single GPU.
The Titan is not the most obvious performance competitor, however, as it is sold not just as a gaming graphics card, but also as a workstation card capable of significant double-precision performance. The real rival to the 295X2 is not one card, but two: the Nvidia GeForce GTX 780 Ti configured in two-way SLI. These cards sell for around $700 each, which puts the total cost at $1,400.
A pair of 780 Ti cards lines up well with the Radeon 295X2’s on-paper performance. Each provides 5.04 Tflops of compute, for a combined total of roughly 10.1 Tflops – about 1.4 Tflops less than the Radeon. The cards use a narrower 384-bit memory interface as well, and make do with 3GB of RAM on each card, for a total of 6GB; 2GB less than what the 295X2 packs. We’re interested to see whether that has a noticeable negative impact on 4K performance.
The Radeon R9 295X2 looks like it’s tied with the GTX 780 Ti SLI setup, on paper at least. AMD’s hardware should be more powerful, but it also costs $100 more, and it’s impossible to know without testing if its additional memory and bandwidth grant it a significant advantage. With that said, let’s get to the benchmarks.
Our test system
Falcon Northwest’s Talon serves as our test system. The tower boasts a Core i7-4770K processor overclocked to 4.5 GHz, 16GB of RAM and two 240GB SSDs running in RAID 0. These impressive specifications make it unlikely that any part of the system other than the video card will be a bottleneck, and will provide us with accurate benchmark results.
We began our examination of the Radeon R9 295X2 with synthetic performance benchmarks. These are not real games, but instead test loops meant to simulate them. Their advantage is precision, as the settings and conditions of each loop never vary. This makes them great for judging relative performance.
Our testing begins with 3DMark, an extremely popular cross-platform benchmark. We pay attention to two scores, one from the Cloud Gate test loop, which represents a moderately demanding title, and Fire Strike, which emulates cutting-edge 3D graphics. We run this benchmark at default settings.
The competitors are very closely matched here. While the Radeon R9 295X2 wins, its margin of victory is within a few percentage points. That’s hardly enough to be definitive.
Next up is Valley, a benchmark from Unigine which features beautiful sweeping vistas and towering forests. This test loop is a good stand-in for games that feature sprawling outdoor areas.
There’s an interesting back-and-forth here. The Radeon tends to perform better at Medium, but Nvidia tends to work better at Ultra. This is likely due to driver and hardware optimizations that work better with specific effects used at Ultra detail.
The Heaven benchmark focuses on high-polygon buildings and objects rather than scenery, and this generally means that it runs slower than Valley, but it also has higher peaks and lower troughs.
Are you starting to see a trend here? So are we! Once again, the Radeon wins at Medium detail, but loses at Ultra. This time, the difference is more substantial than it was in Valley.
Real-world game performance
Normally, we test games at 2560×1440 with FRAPS, but because of this card’s power we felt it was necessary to also test at 4K resolution.
Total War: Rome 2
We lead off with Total War: Rome 2, Creative Assembly’s controversial strategy game. Though it was released with more than its fair share of bugs, this is one of the most visually enticing strategy titles ever crafted, and it’s very demanding at high detail settings. Let’s see how our dual-GPU wonders fared.
At Medium detail we can see that, no matter the resolution, the average framerate tops out at around 90 FPS. There’s obviously a bottleneck restricting performance, though it’s certainly not the video cards. Realistically, it hardly matters, as 90 FPS is plenty quick.
We see a better comparison at Extreme detail, where the Nvidia GTX 780 Ti SLI configuration earns a small, but consistent lead. The extra frames aren’t noticeable in real-world gaming, but a win is a win.
Battlefield 4
The latest first-person shooter from DICE provides cutting-edge visuals and huge battlefields that can tax even the newest hardware. While we have no doubt that the Radeon R9 295X2 can handle the game at 2560×1440, we wondered how it would tackle 4K. Let’s take a look.
At 2560×1440, the Radeon has an obvious advantage. At Extreme detail, it beats the Nvidia cards by over 10 frames per second. And please note, we are not using AMD Mantle in this benchmark.
At 4k, however, the story changes. Nvidia takes a commanding lead at Medium detail, and a very minor lead at Extreme. Again, it’s only a lead of a few frames per second, but a win is a win.
Still, the Radeon R9 295X2 is indeed capable of playing Battlefield 4 at Extreme detail. The card’s average framerate of 52 FPS is more than enough for enjoyable gameplay.
Borderlands 2
Borderlands 2 is a couple of years old, but it’s a popular title that can still prove demanding when played at high resolution on a mid-range video card. The game also uses Unreal Engine 3, which is incredibly common. Let’s see how our pair of wonder twins handled Borderlands 2.
As with other benchmarks, performance between the Radeon and Nvidia setups proved incredibly similar. In fact, we received the exact same average framerate when we tested the game at 4K resolution with every detail setting maxed out.
The only oddity of note here is the 295X2’s performance at 2560×1440 with detail set to Medium. At that setting, we received an average of 153 FPS, which is 22 FPS less than what the pair of GTX 780 Tis managed. The 295X2 also produced the exact same average at Medium detail when the resolution was kicked up to 4k. We’re not sure what caused this, but we confirmed it by re-testing the game multiple times, re-installing the game, and re-installing the driver, none of which changed the result.
League of Legends
Riot Games’ free-to-play hit is the least demanding game in our test. Still, it’s good to see how less demanding games play on high-end hardware. These titles aren’t always targeted for driver optimizations, so results can vary, and bugs can emerge.
The Nvidia cards clearly win here, though it hardly matters. Both dual-GPU arrangements can blast through the game without breaking a sweat. With V-Sync on, the cards use so little of their maximum grunt that fan speed rarely deviates from idle.
We did run into a hitch while playing using the Radeon card, however. The game has specific graphical effects that appear around the borders of the display to indicate that heavy damage was inflicted. When these appeared, the screen would flicker, as black horizontal bars strobed up and down the screen. This problem always stopped when the effect was over, but it was very distracting.
Crysis 3
Crysis 3, the latest game in the franchise, is also one of the most demanding games on the market. Even Battlefield 4 is a cinch to run compared to this monster. How do these cards handle it at 2560×1440 and 4K?
At 1440p, the pair of Nvidia cards manages a better experience, providing an average of 6 extra FPS at Very High, and over 20 more at Medium. We felt this in-game as well; it seemed that the GTX 780 Tis created smoother gameplay, and we noticed less screen tearing (with V-Sync off).
At 4K, however, the SLI configuration runs out of steam – or, to be more accurate, memory. While it holds its own at Medium detail, upping the settings to Very High brings the framerate to a standstill. This is an obvious sign that the GTX 780 Ti simply lacks the video memory required at this resolution. The Radeon doesn’t really provide a playable experience either, but it’s much closer. Turn down a couple of features in the “Advanced Graphics” section, and you’ll get around 30 FPS.
Need more power!
Our wattmeter caught our test system drawing 103 watts at idle. That’s actually rather impressive, as it’s only 31 watts more than what our system required when equipped with AMD’s significantly less powerful Radeon R7 250X card. Idle power efficiency has greatly improved over the last half-decade.
At full load, however, consumption increased to 617 watts. That’s certainly a lot, and if this were a desktop PC review, we’d have to remark that our test rig is the third most power-hungry gaming PC we’ve ever seen. High power draw is to be expected, however, and physics must be obeyed. Extreme performance demands extreme power draw.
While the 295X2 is efficient, the pair of Nvidia 780 Ti cards drew even less power. They consumed only 76 watts at idle – just 4 watts more than the same rig needed with a single Nvidia GTX 650. The green team needed less power at load too, as we measured 582 watts of draw while gaming.
There’s no doubt that the Radeon R9 295X2 is the fastest single video card on the market right now. While Nvidia has announced a dual-GPU card called the GTX Titan Z, it’s not yet available, and it’s targeting a much higher $3,000 price point. The 295X2 will simply demolish any other single card on the market today, including the GTX Titan Black and the GTX 780 Ti. This advantage translates to the ability to handle demanding games at 4k resolution and respectable detail levels, something single-GPU competitors can’t offer.
Does that mean you should buy the R9 295X2? That depends. While impressive, our tests show that a pair of GTX 780 Tis can trade blows with AMD’s latest at 2560×1440, and they do so without a bulky, awkward water cooler. Nvidia’s hardware consumes less power, and its drivers are more mature because the product has been on the market since last year. The Radeon R9 295X2 also costs $100 more than a pair of GTX 780 Ti cards.
4K performance is the Radeon’s advantage. While it doesn’t always win the framerate battle, it completely destroyed the GTX 780 Ti SLI setup in Crysis 3. That’s an important victory, because the SLI configuration’s slide-show result at 4K and Very High detail shows that video memory could be a limiting factor in future games. The twin GTX 780 Ti cards are competitive at 4K right now, but will they be tomorrow?
The 295X2 only makes sense for gamers with deep pockets who want to game at 4K before anyone else. If that sounds like you, this is the only single video card that can satisfy your needs.