GDDR5 vs HBM2


GDDR5 vs GDDR5X vs HBM vs HBM2 vs GDDR6 Memory Comparison

Graphics cards are equipped with some of the latest and fastest memory around. The most popular and commonly used high-performance memory in budget, mid-range and most high-end graphics cards is GDDR5. GDDR5 has been around for quite some time and is used on many high-performance graphics cards. But technology keeps advancing, and faster memories are now being developed, including GDDR5X, HBM and HBM2. In this post I am going to cover all of these high-performance graphics card memories and compare them on both physical and technical aspects.


GDDR5 Memory

GDDR5 is the most widely used high-speed memory in current-generation graphics cards. It is the successor of GDDR3 and GDDR4 memory. Nowadays GDDR3 or DDR3 is only used in entry-level graphics cards, whereas GDDR4 has disappeared entirely.

GDDR5 is one of the fastest graphics card memories and is used in many graphics cards, from budget and mid-range to high-end models. Some of the latest powerful graphics cards using GDDR5 memory are the GTX 1060, GTX 1070 and Radeon RX 480. Budget and mid-range graphics cards using GDDR5 memory include the GT 730, GT 740, RX 460, GTX 750 Ti and GTX 1050 Ti.

GDDR5 is a high-bandwidth memory with lower power consumption than its predecessors. It can reach transfer rates of up to 8 Gbps per pin. GDDR5 memory is manufactured by Samsung, Hynix, Elpida and Micron, and GDDR5 memory chips are available in densities of 512 Mb, 1 Gb, 2 Gb, 4 Gb and 8 Gb. The bus width of each GDDR5 memory chip is 32 bits. The successor of GDDR5 is GDDR5X memory, which is even faster.

GDDR5X Memory

GDDR5X is an improved version of GDDR5 memory. Both GDDR5 and GDDR5X are high-bandwidth SGRAM (synchronous graphics random access memory) used in graphics cards, high-performance servers and other advanced hardware. GDDR5X is up to twice as fast as normal GDDR5 memory and can achieve speeds in the range of 10 to 14 Gbps, with 16 Gbps possible in the future. Currently GDDR5X memory is manufactured by Micron.

GDDR5X also consumes less power than GDDR5. GDDR5X memory chips are available in densities of 4 Gb, 6 Gb, 8 Gb and 16 Gb. The most popular graphics cards using GDDR5X memory include the GeForce GTX 1080 and Nvidia TITAN X (Pascal). High-end workstation graphics cards such as the Nvidia Quadro P5000 and Quadro P6000 also use high-speed GDDR5X memory. Samsung is planning to launch GDDR6 memory in 2018, which will be the real successor of GDDR5. It will offer speeds up to 16 Gbps, have lower power consumption and run at only 1.35V.

Note: On a PCB you cannot simply replace GDDR5 with GDDR5X memory, because the two have different pinouts. GDDR5 uses 170 pins per chip while GDDR5X uses 190 pins per chip.

GDDR6

GDDR6 is the upcoming GDDR memory and the successor of GDDR5 and GDDR5X. The memory will run at 1.35V and offer per-pin speeds of up to 16 Gbps (eventually up to 18 Gbps), which works out to as much as 72 GB/s of bandwidth per chip. The memory is built on a 10nm process and will reach densities of up to 32 Gb per die. GDDR6 is expected to make its way into the upcoming Volta / Turing based graphics cards from Nvidia. This super-fast, high-bandwidth memory is aimed at high-end gaming, virtual reality, cryptocurrency mining and artificial intelligence (AI). GDDR6 is being manufactured by Samsung, Micron and Hynix. Samsung and Micron GDDR6 will cater to the enthusiast segment with maximum speeds of 16 Gbps (16 Gb and 32 Gb dies), while Hynix will serve the mainstream segment with speeds of 10 to 14 Gbps on 8 Gb dies.

GDDR6 memory is present in the Turing-architecture workstation and gaming graphics cards from Nvidia, which include the Quadro RTX 8000, Quadro RTX 6000, Quadro RTX 5000, GeForce GTX 1660 Ti, GeForce RTX 2080 Ti, RTX 2080, RTX 2070, RTX 2060, Nvidia TITAN RTX, RTX 2060 Super, RTX 2070 Super and RTX 2080 Super. AMD graphics cards that use GDDR6 memory include the Radeon RX 5700 and Radeon RX 5700 XT.

HBM Memory

HBM stands for High Bandwidth Memory and is manufactured by Hynix and Samsung. It is used in graphics cards and other advanced units, though only a few graphics cards use it as of now. HBM is a non-planar memory with a 3D structure in the form of a cube or cuboid: multiple memory dies are stacked on top of one another. This means it occupies far less space on the graphics card PCB and can be placed very close to the GPU. The much shorter signal paths between memory and GPU also allow faster communication with the chip.

(Image: HBM memory placed close to the GPU on the graphics card PCB.)

Each stack of HBM memory is independent of the other stacks, but they work together. HBM is also known as compact memory or stacked memory because of its small form factor. A normal HBM stack consists of four DRAM dies on a base die, with two 128-bit channels per DRAM die. That makes 8 channels in total, for a 1024-bit memory interface per stack, so a graphics card with four 4-Hi HBM stacks has a memory bus width of 4 x 1024 = 4096 bits. The operating speed of HBM is only 1 Gbps per pin, but thanks to its much wider memory bus its bandwidth is far higher than that of GDDR5, reaching up to 128 GB/s per stack. HBM offers 1 GB of capacity per stack, so a four-stack package supports 4 GB.
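
To make that arithmetic concrete, here is a minimal sketch in Python of the stack math quoted above. The figures (four dies per stack, two 128-bit channels per die, 1 Gbps per pin) come straight from this paragraph; the function names are purely illustrative, not from any real library.

```python
# Minimal sketch of the HBM stack arithmetic described above.

def stack_bus_width(dies_per_stack=4, channels_per_die=2, bits_per_channel=128):
    """Memory interface width of one HBM stack, in bits."""
    return dies_per_stack * channels_per_die * bits_per_channel

def stack_bandwidth_gbs(bus_width_bits, pin_speed_gbps=1.0):
    """Peak bandwidth in GB/s: (bus width x per-pin rate) / 8 bits per byte."""
    return bus_width_bits * pin_speed_gbps / 8

width = stack_bus_width()           # 4 x 2 x 128 = 1024 bits per stack
print(width)                        # 1024
print(4 * width)                    # 4096 bits for four 4-Hi stacks
print(stack_bandwidth_gbs(width))   # 128.0 GB/s per stack at 1 Gbps per pin
```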

HBM memory uses less power compared to both GDDR5 and GDDR5X memory. The first graphics card to use HBM memory was AMD Radeon R9 Fury X. It is also used in dual GPU graphics card Radeon Pro Duo.

Advantages of HBM Memory

  • Lower Power Consumption
  • Small Form Factor
  • Higher Bandwidth
  • Less heating
  • Better Performance

Disadvantages of HBM Memory

  • Expensive as of now
  • Low availability as of now

HBM2 Memory

HBM2 is the second-generation HBM memory. It has all the characteristics of HBM but with higher speed and bandwidth. It can have 8 DRAM dies per stack and transfer rates of up to 2 Gbps per pin. With its 1024-bit memory interface it achieves 256 GB/s of bandwidth per stack, double that of the original HBM. The total capacity of HBM2 is also greater, at up to 8GB per stack. The first GPU to utilize HBM2 memory was the Nvidia Tesla P100, and Nvidia's Pascal-series workstation card, the Quadro GP100, also comes with HBM2. HBM2 memory is aimed mainly at VR gaming, AR gaming and other memory-intensive applications.

GPU architectures supporting HBM2 include Vega from AMD as well as Pascal and the latest Volta from Nvidia. The successor of HBM2 is HBM3, which will be launched in 2019 or 2020. Some of the graphics cards that use HBM2 memory include the Nvidia Titan V, Radeon Vega Frontier Edition, Radeon RX Vega 56, Radeon RX Vega 64 and Nvidia Quadro GP100.

Update: The next-generation (second-gen) HBM2 memory from Samsung is known as Aquabolt. It comes in 8GB HBM2 stacks (8-Hi) with a speed of 2.4 Gbps per pin at 1.2V. It is much faster than first-generation HBM2 memory, which offered maximum speeds of 1.6 Gbps at 1.2V and 2.0 Gbps at 1.35V. This means up to 50% more performance than first-gen HBM2. On a per-package basis it is also 9.6 times faster than an 8 Gb GDDR5 chip running at 8 Gbps (307.2 GB/s versus 32 GB/s).

This second-generation Aquabolt HBM2 memory with its 1024-bit memory bus delivers around 307 GB/s of bandwidth per 8GB stack, which is huge. You may see this second-gen HBM2 memory on next-generation high-end workstation graphics cards with up to 32GB of memory, which would offer an enormous 1.2 TB/s of bandwidth. Samsung achieved this performance by using new technologies in its TSV (Through-Silicon Via) design and by tweaking thermal control. A single 8GB HBM2 package has eight 8 Gb HBM2 dies interconnected vertically with over 5,000 TSVs per die. Samsung has also added more thermal bumps between the HBM2 dies for better heat dissipation, along with a protective layer at the bottom of the stack.

HBM3 Memory

HBM3 is an upcoming high-speed memory and the successor of HBM2. HBM3 will be faster, have lower power consumption and offer more capacity than HBM2. It will allow up to 64GB of VRAM on graphics cards and memory bandwidth of up to 512 GB/s per stack. Expect mass production and use of HBM3 memory by 2020.

How to Calculate Memory Bandwidth

Use the formula below to calculate the bandwidth of any type of memory.

Memory Bandwidth (GB/s) = (Effective Memory Clock in Gbps per pin x Bus Width in bits) / 8
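
Plugging the numbers quoted in this article into the formula gives a quick worked example. Here is a small Python sketch; the card configurations are illustrative.

```python
def memory_bandwidth_gbs(effective_speed_gbps, bus_width_bits):
    """Memory Bandwidth (GB/s) = (Effective Memory Clock x Bus Width) / 8."""
    return effective_speed_gbps * bus_width_bits / 8

# GDDR5 at 8 Gbps on a 256-bit bus (eight 32-bit chips):
print(memory_bandwidth_gbs(8, 256))     # 256.0 GB/s

# One first-gen HBM stack: 1 Gbps on a 1024-bit interface:
print(memory_bandwidth_gbs(1, 1024))    # 128.0 GB/s

# One Aquabolt HBM2 stack: 2.4 Gbps on a 1024-bit interface:
print(memory_bandwidth_gbs(2.4, 1024))  # 307.2 GB/s
```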

GDDR5 vs GDDR5X vs HBM vs HBM2 Memory

Here is a head-to-head comparison of the GDDR5, GDDR5X, HBM and HBM2 memory types.

| Memory | GDDR5 | GDDR5X | HBM | HBM2 |
|---|---|---|---|---|
| Manufacturer | Samsung, Hynix, Elpida | Micron | Hynix, Samsung | Samsung, Hynix |
| Appearance | Square / rectangular chip | Square / rectangular chip | Cube / cuboid | Cube / cuboid |
| Maximum Capacity | 8 Gb per die | 16 Gb per die | 1 GB per stack | 4 GB / 8 GB per stack |
| Maximum Speed | 8 Gbps | 10 to 14 Gbps (16 Gbps in future) | 1 Gbps | 2.4 Gbps |
| Bus Width | 32-bit per chip | 32-bit per chip | 1024-bit per stack | 1024-bit per stack |
| Power Consumption | Low | Same or lower than GDDR5 | Lower than GDDR5 and GDDR5X | Lower than HBM |
| Graphics Cards Used In | Many cards from budget and mid-range to high-end, e.g. GT 740, GTX 1070, RX 480 | GeForce GTX 1080, GTX 1080 Ti, GTX 1060 (GDDR5X variant), Nvidia Titan X (Pascal) | Radeon R9 Fury X, Radeon Pro Duo | Nvidia Tesla P100, Nvidia Quadro GP100, Radeon RX Vega 56, Radeon RX Vega 64, Nvidia Titan V, AMD Radeon VII |


Final Words

In closing, all of the memories discussed above are made for high performance and are used in high-performance hardware, including top graphics cards. GDDR5 is the oldest of the lot, while the others are relatively new. HBM2 is certainly the best memory in terms of performance and power consumption, but it is very new and is used in only a handful of graphics cards so far. You can expect to see more HBM2 memory in upcoming high-end graphics cards from Nvidia and AMD in 2018. If you have any queries or suggestions, you can connect with me by leaving a comment below.

graphicscardhub.com

The Cost of HBM2 vs. GDDR5 & Why AMD Had to Use It

Variations of “HBM2 is expensive” have floated the web since well before Vega’s launch – since Fiji, really, with the first wave of HBM – without many concrete numbers on that expression. AMD isn’t just using HBM2 because it’s “shiny” and sounds good in marketing, but because Vega architecture is bandwidth starved to a point of HBM being necessary. That’s an expensive necessity, unfortunately, and chews away at margins, but AMD really had no choice in the matter. The company’s standalone MSRP structure for Vega 56 positions it competitively with the GTX 1070, carrying comparable performance, memory capacity, and target retail price, assuming things calm down for the entire GPU market at some point. Given HBM2’s higher cost and Vega 56’s bigger die, that leaves little room for AMD to profit when compared to GDDR5 solutions. That’s what we’re exploring today, alongside why AMD had to use HBM2.

There are reasons that AMD went with HBM2, of course – we’ll talk about those later in the content. A lot of folks have asked why AMD can’t “just” use GDDR5 with Vega instead of HBM2, thinking that you just swap modules, but there are complications that make this impossible without a redesign of the memory controller. Vega is also bandwidth-starved to a point of complication, which we’ll walk through momentarily.

Let’s start with prices, then talk architectural requirements.

AMD’s Long-Term Play & Immediate Risk

AMD’s pricing structure for Vega uniquely leans on bundle packs to help improve the company’s value argument in a competitive market. MSRP is $400 on RX Vega 56, $500 on RX Vega 64, and an added $100 upcharge in exchange for two games and some instant discounts. AMD’s intention with this is to offer greater value to gamers, but clearly also will help the company increase margins and move more Ryzen parts, thereby recouping potentially low or negative margins on Vega. This is aided particularly with game bundles, where AIB partners pay AMD about $29 for the game codes, though that is often waived or offered in exchange for MDF. AMD also stated desire to stave off some mining purchases with increased bundle prices, as this would offset the value proposition of the card. Since the bundles are sold as standalone SKUs and can’t be broken by consumers (into parts), it seems that this is potentially an effective solution at keeping miners at bay.

This move is also mystified by complex economics surrounding potential long-term plays by AMD, with the company having already lost on a bet that HBM2 pricing would be cheaper by the time Vega rolled-out; in fact, price increased at least once within the past year, and Hynix ultimately failed to deliver on AMD’s demands in a timely fashion. In the meantime, AMD could have its outlook set on increased supply driving down cost to build Vega GPUs. This might mean taking a slight loss or running on slim margins for now, but hoping for a payoff down the line. Part of this hinges on Hynix potentially coming online with HBM2 at some point, which could help reduce the cost figures we’ll go over in this video. Ultimately, AMD also needs to reclaim gaming marketshare, and part of that reclamation process will be aided by gritting teeth through painfully slim margins or even losses at launch.

The Cost of HBM2

There are two major costs with a video card: The GPU die and the memory, with follow-up costs comprised of the VRM and, to a lesser extent, the cooler.

Let’s start with HBM2 and interposer pricing, as that’s what we’re most confident in. Speaking with David Kanter of Real World Tech, the analyst who broke news on Maxwell’s tile-based rasterization and who previously worked at Microprocessor Report, we received the following estimate: “The HBM2 memory is probably around $150, and the interposer and packaging should be $25.” We later compared this estimate with early rumors of HBM2 pricing and word from four vendors who spoke with GamersNexus independently, all of which were within $5-$10 of each other and Kanter’s estimate. This gave us high confidence in the numbers. Taking his $175 combined HBM2 + interposer figure, we’re nearly half-way to the MSRP of the Vega 56 card, with the rest of costs comprised of the VRM, GPU, and dime-a-dozen electrical components. It’d cost a “normal person,” for instance, about $45 to build the VRM on Vega – that’d include the $2.70 per-phase cost of the IRF6894s and IRF6811 hi- and lo-side DirectFETs, about $8.80 for all six of the IR3598 drivers, and roughly $4 on the IR35217 (from public sellers and datasheets). AMD is a large company and would receive volume discounts. Even as individuals, we could order 10,000 of these parts and drive that cost down, so these numbers are strictly to give an idea of what it’d cost you to build the VRM.

We’re not sure how many AMD ordered and aren’t going to speculate on what the company’s discount would be, but those numbers give an idea for what someone might pay if not a major corporation. This primarily helps explain why AMD opted for the same PCB and VRM on Vega: FE, Vega 64, and Vega 56, especially given the BIOS and power lock on V56. Although Vega can certainly benefit from the advanced VRM, the necessity of it lessens as we get to V56. The increased volume from V56 orders could offset cost across the entire product stack, to a point of it being cheaper to overbuild Vega 56 than to order two or three completely different sets of VRM components and PCBs.

Regardless, we’re at about $150 on HBM2 and $25 on the interposer, putting us around $175 cost for the memory system.

We did speak with numerous folks, including Kanter, on estimated GPU die pricing, but the estimates had a massive range and were ultimately just educated guesses. Without something more concrete to work with, we’re going to just stick to HBM2 and interposer pricing, as that’s the figure we know. GPU cost and yield cost are only really known by AMD and GlobalFoundries, at this point, so no point in working with total speculation.

The Cost of GDDR5

The next question is what GDDR5 costs. A recent DigiTimes report pegs GDDR5 at about $6.50 for an 8Gb module, though also shows pricing for August onward at $8.50 per module. With old pricing, that’s around $52 cost for an 8GB card, or $68 with new pricing. We do not presently know GDDR5X cost. This puts us at around 3x the cost for HBM2 which, even without factoring in yields or the large GPU die, shows why AMD’s margins are so thin on Vega. We also know that AMD is passing along its HBM2 cost to partners at roughly a 1:1 rate – they’re not upcharging it, which is what typically happens with GDDR. There’s no room to upcharge the HBM2 with Vega’s price target.
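As a quick sanity check on those figures, here is a back-of-the-envelope sketch in Python. The module prices are the DigiTimes estimates quoted above; nothing else is sourced.

```python
# An 8 Gb GDDR5 module holds 1 GB, so an 8 GB card needs eight modules.
modules = 8
price_old, price_new = 6.50, 8.50   # USD per 8 Gb module (DigiTimes estimates)

print(modules * price_old)  # 52.0 -> ~$52 per card at the older pricing
print(modules * price_new)  # 68.0 -> ~$68 per card from August onward
# Versus the ~$175 HBM2 + interposer estimate, that's roughly a 3x gap.
```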

Ignoring GPU cost and cost of less significant components, like the VRM and cooler, we’re at $100-$130 more than 8GB of GDDR5 cost to build. This is also ignoring other costs, like incalculable R&D or packaging costs. Again: We’re just focusing on memory today.

Why AMD Had to Use HBM2

(Marketing image from SK Hynix)

Now that we know how much HBM2 costs, we need to talk about why AMD decided to use it. Like most of AMD’s hardware, the company is partly trying to make a long-term technological play in the market. This started with Fiji and has progressed through Vega.

There’s more to it, though. HBM2 critically allows AMD to run lower power consumption than GDDR5 would enable, given the Vega architecture.

Speaking with Buildzoid, we know that Vega: Frontier Edition’s 16GB HBM2 pulls 20W max, using a DMM to determine this consumption. This ignores the voltage controller’s 3.3v draw, but we’re still at 20W memory, and no more than an additional 10W for the controller – that’s less than 30W for the entire memory system on Vega: Frontier Edition.

We also know that an RX 480 uses 40-50W for its 8GB of GDDR5, which is already a significant increase in power consumption per-GB over Vega: FE. The RX 480 also has a memory bandwidth of 256GB/s with 8GB GDDR5, versus Vega 64's 484GB/s. The result is increased bandwidth, the same capacity, and lower power consumption, but at higher cost to build. In order for an RX 480 to hypothetically reach similar bandwidth, power consumption would increase significantly. Buildzoid calculates that a hypothetical 384-bit GDDR5 bus on Polaris architecture would push 60-75W, and an imaginary 512-bit bus would do 80-100W. For this reason alone, HBM2 saves AMD from a high power budget that would otherwise be spent solely on memory. This comes down to architectural decisions made years ago by AMD, which are most readily solved with HBM2, as HBM2 provides greater bandwidth per watt than GDDR5. HBM is effectively a necessity to make Vega at least somewhat power efficient while keeping the higher memory bandwidth. Imagine Vega 56, 64, or FE drawing an additional 70-100W – the world wouldn't have it, and it'd be among the hottest cards since the GTX 480 or R9 290X.
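
Putting those estimates side by side makes the gap obvious. This is a rough Python sketch; all numbers are the article's estimates quoted above, not measured specifications.

```python
# Bandwidth-per-watt from the figures above.
vega_bw_gbs, vega_mem_watts = 484, 30     # Vega 64 bandwidth; <30 W HBM2 system (FE)
rx480_bw_gbs, rx480_mem_watts = 256, 45   # RX 480 8 GB GDDR5; midpoint of 40-50 W

print(vega_bw_gbs / vega_mem_watts)       # ~16.1 GB/s per watt
print(rx480_bw_gbs / rx480_mem_watts)     # ~5.7 GB/s per watt
# Roughly a 3x gap, in line with AMD's "3x bandwidth per watt" claim below.
```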

The Vega architecture is clearly starved by memory bandwidth, too: overclocking HBM2 alone shows this, as its gains are greater than equivalent core clock increases. AMD didn't have another choice but to go with HBM2, even though GDDR5 would have cost roughly one-third as much. GDDR5 might be possible, but not without blowing power consumption through the roof or losing on performance by limiting bandwidth.

AMD provided GN with a statement pertaining to choices revolving around HBM2, which reads as follows:

“AMD chose HBM2 memory for Vega because this advanced memory technology has clear benefits on multiple fronts. HBM2 is a second-generation product that offers nearly twice the bandwidth per pin of first-generation HBM thanks to various refinements.

“As we noted in the Vega whitepaper, HBM2 offers over 3x the bandwidth per watt compared to GDDR5. Each stack of HBM2 has a wide, dedicated 1024-bit interface, allowing the memory devices to run at relatively low clock speeds while delivering tremendous bandwidth. Also, thanks to die stacking and the use of an interposer, Vega with HBM2 achieves a 75% smaller physical footprint for the GPU die plus memories versus a comparable GDDR5 solution.

“The combination of high bandwidth, excellent power efficiency, and a compact physical footprint made HBM2 a clear choice for Vega. We have no plans to step back to GDDR5.”

AMD ended up opting for two stacks of HBM2 on the current Vega cards, which limits its bandwidth to Fury X bandwidth (2x 1024-bit Vega vs. 4x 1024-bit Fury X), ultimately, but AMD does benefit in the bandwidth-per-watt category. That’s the crux of this decision.

Battling for Marketshare & Margin

As for cost, knowing that the memory system gets us up to nearly $200 as a starting point, it is inarguable that AMD has lower margins on Vega products than could be had with GDDR5 – but the company also didn’t have a choice but to use HBM2. NVidia forced AMD’s hand by dropping the 1080 Ti in March, followed by 1070 and 1080 MSRP reductions. That’s ignoring the current insane GPU pricing (which inflates the 1070s & V64s into overpriced oblivion) and just looking at MSRP, as that’s ultimately where the two companies battle under normal conditions. AMD and nVidia also do not see a dollar of the upcharge by distributors and retailers, so margins for either company are theoretically unimpacted by the inflated consumer-side pricing. That’s why, ultimately, we’re looking at MSRP – AMD and nVidia sell their product to the AIB partners for a cost which should theoretically be largely unchanging. AMD is able to make back some of these margins with bundle packs, where a pair of games can be sold to AIBs for ~$29, then to consumers for $100, or where Ryzen and motherboard parts help recoup margins. Each motherboard sold is another chipset moved, and Ryzen sales go completely to AMD. Either way, AMD has to increase its GPU marketshare, and fighting through early losses or slim margins is part of that. The long-term play is clearly hoping that increased demand and supply will lower cost to build, so it remains to be seen how that’ll play-out.

AMD’s investing a lot of effort to try and recoup some of the margins: Bundle packs are a major boon, either through direct cost of games sold or through accompanying Ryzen product sales, and reusing the same PCB & VRM further helps slim margins. This is particularly true with a low volume part like Vega: FE, as using the same board will help meet MOQ thresholds for discounts, aided by higher volume V64 and V56 parts. Without immediate production runs of significant quantities on each SKU, it makes more sense for AMD to reuse the VRM and board than to design a cheaper V56 board, as cost across all three SKUs is lowered with higher quantities. This particular move has the upshot of benefitting V56 consumers (though the benefit would be more relevant with unlocked BIOS), as you end up with a seriously overbuilt VRM for what’s needed. The VRM can easily handle 360W through the card and is more than the V56 will ever draw stock or with a 50% offset. We’ve even tested up to 406W successfully (with a 120% offset) through V56, though it’s probably inadvisable to do that for long-term use.

But that gives us a starting point, and helps to better contextualize what people mean when they say “HBM2 is expensive.” It is – about 3x more than GDDR5, give or take $20 and assuming high yield – and that’s rough for AMD’s profitability. That said, the company really couldn’t have reasonably gotten GDDR5 onto Vega’s present design without severe drawbacks elsewhere. It wouldn’t compete, and we’d be looking at another Polaris mid-range GPU or much higher power consumption. At less than 30W for 16GB of HBM2, GDDR5 just can’t compete with that power consumption under command of the Vega architecture. It’d be cheaper to make, but would require significantly higher power consumption or a smaller bus, neither of which is palatable. AMD isn’t using HBM2 to try and be the “good guy” by pushing technology; the company was in a tough spot, and had to call riskier shots out of necessity. Although it’d be nice if every GPU used HBM2, as it is objectively superior in bandwidth-per-watt, both AMD’s architecture and market position pressure the company into HBM adoption. HBM2 would benefit a 1070 by way of lower power consumption, but the 1070 doesn’t need HBM2 to get the performance that it does – the architecture is less bandwidth-hungry, and ultimately, nVidia isn’t in the same market position as AMD. In a 30-to-70 market, AMD has to make these more expensive plays in attempt to claw back marketshare, with hopes of enjoying better margins further down the generation.

Editorial: Steve Burke Video: Andrew Coleman

www.gamersnexus.net

Types of VRAM Explained: HBM vs. GDDR5 vs. GDDR5X

Video RAM: What’s the difference between the types available today?

Some Samsung VRAM

All graphics cards need both a GPU and VRAM to function properly. While the GPU (Graphics Processing Unit) does the actual processing of data to output images on your monitor, the data it is processing and providing is stored and accessed from the chips of VRAM (Video Random Access Memory) surrounding it.

Outputting high-resolution graphics at a quick rate requires both a beefy GPU and a large quantity of high-bandwidth VRAM working in tandem. For most of the past decade, VRAM design was fairly stagnant, and focused on using more power to achieve greater VRAM clock speeds.

But the power consumption of that process was beginning to impinge on the power needed by newer GPU designs. In addition to possibly bottlenecking GPU improvements, the standard sort of VRAM (which is known as GDDR5) was also determining (and growing) the form factor (i.e. the actual size) of graphics cards.

Chips of GDDR5 VRAM have to be attached directly to the card in a single layer, which means that adding more VRAM involves spreading out horizontally on the graphics card. And moving beyond a tight circle of VRAM around the GPU means increasing the travel distance for the transfer process as well.

With these concerns in mind, new forms of VRAM began to be developed, such as HBM and GDDR5X, which have finally surfaced in the past couple of years. These are explained below, as straightforward as possible.

HBM vs. GDDR5:

If you want the differences between these two varieties of VRAM summed up in two simple sentences, here they are:

GDDR5 (SGRAM Double Data Rate Type 5) has been the industry standard form of VRAM for the better part of a decade, and is capable of achieving high clock speeds at the expense of space on the card and power consumption.

HBM (High Bandwidth Memory) is a new kind of VRAM that uses less power, can be stacked to increase memory while taking up less space on the card, and has a wide bus to allow for higher bandwidth at a lower clock speed.

Here is a per-package (one stack of HBM vs. one chip of GDDR5) comparison:[i]

| | HBM (1 stack) | GDDR5 (1 chip) |
|---|---|---|
| Higher Bandwidth | ✓ (~100 GB/s) | ✗ (~28 GB/s) |
| Smaller Form Factor | ✓ (Stackable, Integrated) | ✗ (Single-layer) |
| Higher Clock Speed | ✗ (~1 Gb/s) | ✓ (~7 Gb/s) |
| Lower Voltage | ✓ (~1.3 V) | ✗ (~1.5 V) |
| Widely Available | ✗ (New, Needs Redesigned Cards) | ✓ (Old, Cards Designed Alongside) |
| Less Expensive | ✗ (New, Needs Redesigned Cards) | ✓ (Old, Cards Designed Alongside) |

Again, don’t be fooled by the ✓ that GDDR5 received there for having a higher clock speed; HBM, with its wide bus, still boasts a higher overall bandwidth per Watt (according to AMD, over three times as much bandwidth per Watt). The lower clock speed is related to how HBM attains its energy savings.

A diagram of HBM’s stacked design, by ScotXW

The idea here is that GDDR5, with its narrow channel, keeps being pushed to higher and higher clock speeds in order to achieve the performance that is currently expected out of VRAM. This is very costly from a power perspective. HBM, on the other hand, moves at a lower rate across a wide bus.

With the huge gains in GPU processing power and the increasing consumer appetite for high-resolution gaming (a higher resolution means more visible detail, which means more data, which requires VRAM that is both higher capacity and higher speed), it seemed inevitable that most cards, starting at the top-end and moving down, would be re-designed to feature a version of HBM (such as the already-developed HBM2, or otherwise) in the future. But then, last year, yet another new standard of VRAM came about which called that into question.

GDDR5 vs. GDDR5X:

You may have seen some news in the past year or so regarding a form of VRAM called GDDR5X, and wondered exactly what this might be. For starters, here’s a simple-sentence-summary like the one offered for HBM and GDDR5 above:

GDDR5X (SGRAM Double Data Rate Type 5X) is a new version of GDDR5, which has the same low- and high-speed modes at which GDDR5 operates, but also an additional third tier of even higher speed with reportedly twice the data rate of high-speed GDDR5.

Here is a per-package (one chip of GDDR5X vs. one chip of GDDR5) comparison:[ii]

| | GDDR5X (1 chip) | GDDR5 (1 chip) |
|---|---|---|
| Higher Bandwidth | ✓ (~56 GB/s) | ✗ (~28 GB/s) |
| Smaller Form Factor | Tie (Single-layer) | Tie (Single-layer) |
| Higher Clock Speed | ✓ (~14 Gb/s) | ✗ (~7 Gb/s) |
| Lower Voltage | ✓ (~1.35 V) | ✗ (~1.5 V) |
| Widely Available | ✗ (New, Only in High-end Cards) | ✓ (Old, Cards Designed Alongside) |
| Less Expensive | ✗ (New, Only in High-end Cards) | ✓ (Old, Cards Designed Alongside) |

So, you might be wondering, if a chip of GDDR5X is still operating at just around 60% of the overall bandwidth of a stack of HBM while not even quite making the same power savings or space savings, then why is it a big deal? Isn’t it still just immediately made obsolete by HBM? Well, the answer is no, for two reasons.

The first thing to notice is that it’s not a perfect comparison. After all, one chip is just one chip, whereas a stack has the advantage of holding multiple chip-equivalents. Just because they take up the same real estate on the card, that doesn’t mean they are the same amount of memory. So, in theory, a GDDR5X array with the same VRAM capacity as some HBM VRAM array would come much closer in overall graphics card VRAM bandwidth (perhaps just over 10% slower than the HBM system, as estimated by Anandtech).

And yes, that’s still lower, but there are further advantages to GDDR5X when you consider the development side of things. HBM being an entirely new form of VRAM means that chip developers will need to redesign their products with new memory controllers. GDDR5X has enough similarities to GDDR5 to make it a much easier and less expensive proposition to implement it. For this reason, even if HBM, HBM2, and other HBM-like solutions win out in the long run, GDDR5X is likely to see a wider roll-out than HBM in the short run (and possibly at a lower cost to the consumer).

Which Graphics Cards Use Which VRAM:

Now that you’ve heard about these exciting new developments in VRAM design, you might be wondering what sort of VRAM lies within your card, or else where you can get your hands on some of this new technology.

A Founder’s Edition GTX 1070

Well, for the time being, most of the cards that are available, from the low-end through the mid-range and into the lower high-end (currently including every card from our Minimum tier to our Exceptional tier builds) still feature GDDR5 VRAM. Popular cards in this year’s builds, from the RX 480 to the GTX 1060 to the GTX 1070, all feature this fairly standard variety of high-clock-speed, relatively-space-inefficient, relatively-energy-inefficient VRAM.

NVIDIA’s highest tier of cards, including the GTX 1080 and the Titan X, currently feature GDDR5X. It seems likely (but not guaranteed) that NVIDIA will continue to make use of GDDR5 and GDDR5X in the near future, simply because that is their current trend and the design implementation is less costly.

AMD, meanwhile, has rolled out HBM in some of their high-end cards, including the R9 Fury X and the Pro Duo. Don’t be surprised if you see smaller form factor cards sporting HBM from AMD in the future. Perhaps using HBM and related innovations will be the avenue through which AMD finally breaks free of their reputation for making cards with comparable performance, but worse thermals and power consumption, compared to NVIDIA.

What about GDDR6?

Micron has been teasing yet another new memory technology for over a year now: GDDR6. Their current plan is to have GDDR6 on the market in or before 2018 (though their earlier estimates were closer to 2020). And, while info on it is scarce, they are now claiming that it will provide 16 Gb/s per pin (meaning somewhere in the neighborhood of 64 GB/s of overall bandwidth per chip, compared to 56 GB/s per chip of GDDR5X and 100 GB/s per stack of HBM).
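
The per-chip figure follows directly from the per-pin rate. Here is a quick Python sketch, assuming the usual 32-bit GDDR chip interface:

```python
def chip_bandwidth_gbs(per_pin_gbps, pins=32):
    """Per-chip bandwidth: per-pin data rate x 32 data pins / 8 bits per byte."""
    return per_pin_gbps * pins / 8

print(chip_bandwidth_gbs(16))  # 64.0 GB/s per GDDR6 chip at 16 Gb/s
print(chip_bandwidth_gbs(14))  # 56.0 GB/s per GDDR5X chip at 14 Gb/s
print(chip_bandwidth_gbs(7))   # 28.0 GB/s per GDDR5 chip at 7 Gb/s
```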

Is GDDR6 likely to start showing its face in high-end cards over a year from now? Yes, it is.

It’s a GDDR solution, which means—like GDDR5X—it will be less costly for manufacturers to implement than HBM.

Does that mean you should shelve your planned build until it shows up? Absolutely not.

Three reasons: (1) at the claimed speed of GDDR6, it still has a significantly lower overall bandwidth and likely lower power savings than HBM, let alone HBM2; (2) at the claimed speed of GDDR6, it is less than 15% faster than GDDR5X, which is unlikely to be noticeable to the user; and (3) there is no guarantee that this new standard will be released by Micron on schedule, nor that it will live up to its claimed figures (ancient wisdom you should always heed: benchmarks before buying).

Conclusion:

So, would I say you should pick your card based on its VRAM type? In the current market situation, I would say probably not. Frankly, there just aren’t enough cards out there with HBM or GDDR5X to put together proper apples-to-apples benchmark comparisons. But this information definitely helps to illustrate something that we here at Logical Increments are all about: a well-balanced build is crucial.

Consider: a high amount of VRAM (and VRAM that performs at a high level) is going to be most important in set-ups that run at a high resolution. And if you’re already balancing your build well—by following our guides, for instance—then you are not likely to end up in a situation where you buy a 4K monitor (such as the grandiose Dell Ultrasharp 4K 31.5” LCD Monitor) and pair it with a low-end graphics card (like the respectable yet modest RX 460).

And for those of us who are mid-range builders, don’t despair. As with any new technology in the computer world, what is currently rare and expensive will likely become both commonplace and affordable in the future.

Notes:

[i] This chart features numbers from AMD’s press release infographic concerning HBM and specified by JEDEC’s standard document concerning HBM.

[ii] This chart features numbers specified by JEDEC’s standard document concerning GDDR5X.

Daniel Podgorski is a contributing writer for Logical Increments. He is also the researcher, writer, and web developer behind The Gemsbok, where you can find articles on books, games, movies, and philosophy.

blog.logicalincrements.com

GDDR5 vs GDDR5X vs HBM vs HBM2 vs GDDR6

Last Updated on April 9th, 2019

There is no denying that when looking at the different sorts of graphics card memory, we are reminded of traditional system memory. That is mainly because the lettering is similar, which can cause some confusion.

However, these memories are quite different from the system RAM available on the market. Since we are talking about graphics card memory, the most common types you will see are GDDR5, GDDR5X and GDDR6. Additionally, you will also come across HBM and HBM2.

So what is the purpose of these memories, and how do they differ from each other? That is exactly what we are going to find out in this article. It is written for people who are confused and do not really know what to look for when buying a graphics card.

Therefore, let’s not waste more time and have a look.

GDDR5

The first and perhaps the most common type of memory you will get to see is the GDDR5 memory.

This is something that is used in most of the high-end graphics cards available in the market.

As far as performance is concerned, GDDR5 happens to be one of the fastest memory types available. You can find it in GPUs like Nvidia's GTX 10 series, starting from the GTX 1060 and GTX 1070, as well as AMD GPUs like the RX 480 and RX 580, among others.

Additionally, some mid-tier cards are also using this memory type. Needless to say, as far as being mainstream is concerned, GDDR5 is definitely on top of the food chain.

GDDR5X

In hindsight, you can call GDDR5X an improved or upgraded version of the GDDR5 memory type. Both the predecessor and the successor are high-bandwidth memories, and their use extends not just to graphics cards but to high-end servers as well.

As far as GDDR5X is concerned, it is up to twice as fast as GDDR5. The memory can achieve speeds ranging from 10 gigabits per second all the way up to 14, and eventually even 16, gigabits per second.

The memory is currently being manufactured by Micron, one of the leading names in the memory business.

Another benefit is that GDDR5X consumes less power compared to GDDR5. The memory chips are available in different densities, ranging from 4 Gb all the way to 16 Gb. The most common graphics cards using this type of memory are the Nvidia GTX 1080 and the Nvidia Titan X (Pascal). Additionally, the same memory is found in Nvidia's Quadro P5000 and P6000.

HBM

On the other hand, we have HBM, or High Bandwidth Memory. This is also used in graphics cards, as well as other advanced units available in the market. Currently, Hynix and Samsung manufacture this type of memory. The number of graphics cards using HBM is not that high.

For those who are unaware, HBM is a non-planar memory that comes in a 3D stacked structure shaped a lot like a cube. Multiple memory dies are stacked on top of one another to make HBM smaller in size, so it takes up less space on the graphics card.

The benefits of HBM memory are that it consumes less power, has a smaller form factor, offers higher bandwidth and reduced thermals, and delivers overall better performance. At the same time, there are some downsides, such as a higher price and limited availability.

HBM2

HBM2 is a lot like the original HBM, sharing all of the same fundamentals.

However, there are some improvements, mainly in terms of speed and bandwidth. For starters, HBM2 can have 8 DRAM dies in a single stack and can offer transfer speeds of up to 2 gigabits per second per pin over a 1024-bit memory interface.

That is a serious specification upgrade compared to the original. As of now, HBM2 memory is being used in the Nvidia Titan V, Radeon Vega Frontier Edition, Radeon RX Vega 56 and 64, as well as the Nvidia Quadro GP100.

GDDR6

The last memory on the list is GDDR6, the true successor to GDDR5 and GDDR5X. The memory is definitely faster than both of its predecessors, running at 1.35 volts and offering speeds of up to 16 gigabits per second, with 18 gigabits per second to follow.

The memory is already available in the market and is mainly being used in Nvidia's Turing-based GPUs. That means the GTX 1660 Ti and the RTX 2060, 2070, 2080 and 2080 Ti, as well as the future Titan card based on the same architecture.

Conclusion

In conclusion, the one thing we can say for sure is that the graphics memory market is evolving rapidly. Desktop system memory, by comparison, is still at DDR4, its latest and greatest.

However, it is important to note that graphics memory and system memory differ enormously in design and purpose, which is why a direct comparison is simply not possible.

With that said, the latest memories are doing really well in the market, so it is safe to assume that the future looks good. As for what memories await us further down the road, we cannot say much yet, so let's wait and see what we end up getting.


gadgetsenthusiast.com

The GDDR5X memory standard will fight HBM2 for the graphics card market

News: October 26, 2015


This year AMD used stacked HBM memory in graphics adapters for the first time. This standard should gradually replace GDDR5, but the latter is not giving up without a fight. Its new trump card is the GDDR5X specification, which provides a number of important advantages.

First, GDDR5X offers double the throughput per pin compared to GDDR5 (64-byte versus 32-byte accesses). Overall data rates immediately rise from 1.5 – 6 Gbps (GDDR5) to 10 – 12 Gbps (GDDR5X), and potentially this figure can grow to 16 Gbps. Second, the VDD and VDDQ voltages drop from 1.5 V to 1.35 V, which improves energy efficiency and reduces heat output, both important metrics in modern graphics adapters, especially mobile versions. Third, the entire GDDR5 ecosystem, including the command protocols, the number and placement of pins, and other features, carries over to GDDR5X without substantial changes. This means engineers do not need to make significant revisions to existing designs, which greatly reduces the cost of preparing and launching new graphics accelerators.

As a result, the limited number of HBM2 chips and their higher cost at launch will play into the hands of the GDDR5X standard, which we should see in new mid-range graphics cards from AMD and NVIDIA. Using it will let them increase the performance of the memory subsystem and the graphics adapter as a whole while keeping costs comparatively low. For flagship devices, on the other hand, the HBM2 standard is a perfect fit, since the price factor plays a secondary role in that class of solutions.

Source: http://www.techpowerup.com (Sergei Budilovsky)


ru.gecid.com

GDDR6 VS HBM2 Memory

Both GDDR6 and HBM2 are types of memory that enable processors to perform better in a wide variety of applications thanks to their high memory bandwidth. In this article, we compare the architecture, performance (including bandwidth and speed), and price differences between the two types of memory.

Short Definitions of GDDR6 and HBM2

GDDR6 is an abbreviation for Graphics Double Data Rate type 6. It’s a type of synchronous graphics RAM with high bandwidth, designed for high-performance applications such as graphics cards and game consoles.

HBM2 stands for High Bandwidth Memory (generation 2) and is another type of memory commonly found in graphics cards. It is built from 3D-stacked DRAM dies, which are connected with microbumps and through-silicon vias (TSVs).


Architecture of GDDR6

The architecture of GDDR6 is an interesting combination of features commonly found in GDDR5 and GDDR5X, as well as some from HBM2. However, it does make some notable improvements.

For starters, GDDR5 and earlier versions only supported a single 32-bit channel with one command/address bus and one 32-bit data bus. This was simple and very straightforward. With GDDR5X, there was a single true 32-bit channel, but you could split that into 2 16-bit pseudo-channels. This configuration didn’t come with a lot of flexibility because you had both read and write operations within the same row.

GDDR6’s two-channel architecture

GDDR6 comes as a single chip, but that chip actually behaves like two completely independent DRAMs. Each has its own command/address bus, as well as its own 16-bit data bus. This is a benefit because the more channels a system has, the more opportunities the memory controller has to manage the DRAMs, which helps avoid the large stalls that typically result from page activation limits or page refreshes.

Another excellent benefit of GDDR6's two-channel architecture is that it counteracts the consequences of the 16-cycle burst length. If you have a burst length of 16 with a 32-bit wide bus, the resulting transaction atom is 64 bytes. Numerous architectures, both GPU and CPU, make use of 32-byte transaction atoms. If you split up the DRAM into two independent 16-bit channels, you get a 16n prefetch and still keep a transaction size of 32 bytes.
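
The transaction-atom arithmetic is simple enough to check directly. Here is a small Python sketch of the figures from the paragraph above:

```python
def transaction_atom_bytes(burst_length, bus_width_bits):
    """Transaction atom = burst length x bus width, converted to bytes."""
    return burst_length * bus_width_bits // 8

print(transaction_atom_bytes(16, 32))  # 64 bytes: BL16 on one 32-bit channel
print(transaction_atom_bytes(16, 16))  # 32 bytes: BL16 on each 16-bit channel
```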

QDR and DDR

Up until GDDR5X, DRAM was DDR (double data rate). The data bits would change at the rising and the falling edge of the word clock (WCK). With GDDR5X, we saw the introduction of QDR (quad data rate). Data would toggle at four times the WCK frequency, or twice as fast as DDR. GDDR5X had support for both QDR and DDR, but the DRAM would run at half the speed during the DDR mode. It was only there as a power saving option.

With GDDR6, you now have a choice between QDR and DDR at full speed. The specifications don’t really require a vendor to support both, so you’re basically getting two standards. As an example, if you have a GDDR6 DRAM at 14 Gbps, WCK will run at 3.5 GHz for a QDR device and 7 GHz for a DDR device. In both situations, the command and address clock, as well as the command and address lines themselves, run at 1.75 Gbps. In the specifications, there was also a mention of ODR (octa data rate), but no further details were uncovered, so this may come in the future.
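Expressed as a calculation, the clocking example above looks like this (a Python sketch; the 14 Gbps device is the example from the paragraph):

```python
def wck_ghz(data_rate_gbps, transfers_per_wck):
    """WCK frequency needed to hit a given per-pin data rate."""
    return data_rate_gbps / transfers_per_wck

print(wck_ghz(14, 4))  # 3.5 GHz WCK for a QDR device
print(wck_ghz(14, 2))  # 7.0 GHz WCK for a DDR device
print(14 / 8)          # 1.75 Gbps command/address clock, as stated above
```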

Architecture of HBM2

HBM, or high bandwidth memory, provides higher bandwidth while also utilizing less power. Therefore, it has a different architecture than GDDR. All the DRAM dies (up to eight) are stacked, as well as an optional base die which contains a memory controller. They’re interconnected with through-silicon vias (TSV) and microbumps, as mentioned earlier. The way HBM works is similar to the Hybrid Memory Cube that was developed by Micron Technology but is incompatible with it.

Compared to other DRAM memories, the HBM memory bus is very wide. If you have four DRAM dies in the stack (4-Hi), there are two 128-bit channels for each die, which totals 8 channels and 1024 bits per stack. Therefore, if you have a GPU with four 4-Hi HBM stacks, the total memory bus width is 4096 bits. By comparison, a GDDR chip has a 32-bit bus, so a GPU with a 512-bit memory interface uses 16 channels.

Interposer

Considering there are a lot of connections to the memory, a new method of connecting the memory to the GPU is required. Both AMD and Nvidia have made use of interposers, which are purpose-built chips, to achieve this. The interposer also requires that the memory and processor sit physically close together, which shortens the memory paths. However, fabricating a semiconductor device is much more expensive than manufacturing a PCB, which adds to the cost of the final product.

The DRAM for HBM is tightly coupled to the host computer die, using a distributed interface. This interface is actually divided into channels that are completely independent of one another, and they’re also not necessarily synchronous to each other. To achieve a high-speed operation while keeping the power consumption low, the HBM DRAM makes use of a wide-interface architecture. There’s a 500 MHz differential clock, and each channel interface has a 128-bit data bus which operates at DDR data rates. While the initial version supported transfer rates of 1 GT/s (gigatransfer per second), HBM2 increases this up to 2 GT/s.
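
Those channel figures multiply out to the per-stack bandwidth numbers quoted elsewhere in this article. A short Python check:

```python
def stack_bandwidth_gbs(channels=8, channel_bits=128, transfer_rate_gtps=2.0):
    """Per-stack bandwidth: channels x channel width x transfer rate / 8."""
    return channels * channel_bits * transfer_rate_gtps / 8

print(stack_bandwidth_gbs(transfer_rate_gtps=1.0))  # 128.0 GB/s per HBM1 stack
print(stack_bandwidth_gbs(transfer_rate_gtps=2.0))  # 256.0 GB/s per HBM2 stack
```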

Performance

When we’re only discussing performance, limiting things to only the memory type would be a mistake. Both GDDR6 and HBM2 are made in a different way, and they excel in different areas. For example, the high bandwidth of HBM2 is ideal in situations such as AI and advanced computing. This is why we’re seeing it in data center GPUs, such as Nvidia’s Tesla V100, where the cost consideration is less important. On the other hand, GDDR6 can offer a lot of the same performance, albeit with higher power requirements, but at a lower price. This makes it much more accessible for the everyday user, hence we’re seeing it in commercial graphics cards.

As far as gaming goes, both types of memory found their way in gaming graphics cards. However, we couldn’t say that one type of memory is better than another for gaming because there are a lot of other factors that come into play. Things such as bus width, base clock, and boost clock all matter, and you can find graphics cards with identical memory types performing differently from one another due to other factors being different. There is one advantage that HBM2 has here, even with the higher cost. The fact that it has a lot more bandwidth means that you’ll get better performance and less latency.

Price (Production Cost)

This is a section where GDDR6 takes the win by a large margin. To begin with, HBM2 is only in its second iteration and is nowhere near as widespread as GDDR in terms of fabrication. The additional silicon interposer of HBM2, which provides electrical connectivity to the SoC, is first and foremost a design complexity, and solving it is costly. Then there is the fact that a semiconductor device is much more expensive to produce than a PCB, which adds even more to the cost.

As a result, GPUs that offer comparable results to their GDDR6 counterparts are significantly more expensive if they use HBM2. HBM2 does bring lower power consumption as well, but the production and implementation costs settle the price battle in favor of GDDR6.

Wrapping things up – which one is better?

In their current iterations, we couldn’t say which type of memory wins. There are tradeoffs with both types, and it’s merely a matter of which one is more acceptable to the end user.

Starting things off with the per-pin data rate, GDDR6 comes in with a much higher 16 Gbps, compared to HBM2’s 2 Gbps. The fact that there are no additional chips to manufacture also helps keep the costs down, which is always welcome.

On the other hand, because it is a stacked memory, the relative area of the PHY controller is much smaller for HBM2, with GDDR6 taking up as much as 1.5 to 1.75 times the area of HBM2's controller. GDDR6 also consumes 3.5 to 4.5 times the power of HBM2. At the end of the day, it is up to the GPU manufacturers to choose which kind of memory fits their GPU's intended use better and implement it in their graphics cards.

www.techsiting.com

