NVIDIA GeForce GTX 750 Ti

General information
Developer: NVIDIA
Release year: 2014
Card category: Desktop
Card type: Discrete
Interface: PCIe 3.0
Maximum resolution: 4096 x 2160

Graphics processor
Architecture: Maxwell
Core: GM107
Number of chips: 1
Process node: 28 nm
Transistor count: 1,870 million
Die area: 148 mm²
Core clock: 1020 - 1085 MHz
Unified shader units: 640
Raster operation units (ROPs): 16
Texture units (TMUs): 40
Pixel fill rate: 16.32 GPixel/s
Texture fill rate: 40.8 GTexel/s

Memory
Type: GDDR5
Size: 2048 MB
Clock (effective): 5400 MHz
Bus width: 128 bit
Bandwidth: 86.4 GB/s

Power
Maximum power draw (TDP): 60 W
Maximum allowed temperature: 95 °C
Minimum power supply requirement: 300 W
Supplementary power connectors: None

Supported APIs and technologies
DirectX: 11.2
OpenGL: 4.4
OpenCL: 1.2
Shader Model: 5.0
SLI / CrossFireX: No
Other technologies: NVIDIA G-Sync, NVIDIA Adaptive V-Sync, NVIDIA GameStream, NVIDIA GPU Boost, HDCP, NVIDIA CUDA, NVIDIA PhysX, NVIDIA Surround, NVIDIA 3D Vision, FXAA, TXAA
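The fill-rate and bandwidth figures in the table above follow directly from the ROP/TMU counts, core clock, and memory interface. A minimal Python sketch of that arithmetic (variable names are ours, not part of the original table):

    # Sanity-check of the fill rate and bandwidth figures quoted above.
    base_clock_ghz = 1.02            # 1020 MHz base clock
    rops = 16                        # raster operation units
    tmus = 40                        # texture units
    bus_width_bits = 128             # memory bus width
    effective_mem_clock_mhz = 5400   # GDDR5 effective data rate

    pixel_fillrate = rops * base_clock_ghz                               # GPixel/s
    texel_fillrate = tmus * base_clock_ghz                               # GTexel/s
    bandwidth_gbs = bus_width_bits / 8 * effective_mem_clock_mhz / 1000  # GB/s

    print(f"Pixel fill rate:  {pixel_fillrate:.2f} GPixel/s")   # 16.32
    print(f"Texel fill rate:  {texel_fillrate:.1f} GTexel/s")   # 40.8
    print(f"Memory bandwidth: {bandwidth_gbs:.1f} GB/s")        # 86.4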

www.chaynikam.info

A Review of The Nvidia GeForce GTX 750 Ti - October 2019

When it comes to console gamers, any of them are going to be able to tell you all about the continual competition between PlayStation and Xbox. However, for those who prefer to play their favorite games on their personal computer, there’s a different rivalry to be aware of, one that involves cards like the Nvidia GeForce GTX 750 Ti.

This is a rivalry based on innovation and technology that seems as if it will be never-ending. We are talking about the competition between two graphics card makers: Nvidia and AMD. Many gamers have a favorite and look to their preferred company exclusively for new cards. For those who aren’t loyal to one brand or the other, deciding on the best card is a tall order.


Today, we’re going to be reviewing the Nvidia GeForce GTX 750 Ti graphics card, which is part of the 700-series from the Nvidia brand. It competes mainly with AMD’s R9 and R7 series graphics cards.

The GTX 750 Ti is an inexpensive option that is primarily geared towards gamers on a budget or new to PC gaming. Despite the low price point of less than a hundred bucks, this card offers 1080p gaming, which is impressive, and it is a capable and affordable choice.

Pros:
  • Exceptional performance per watt
  • Impressively low power requirements
  • Nearly nonexistent heat production
  • Quiet and cool even using the stock cooler
  • Affordable price point
  • Support for advanced Nvidia features
  • Slim and compact design

Cons:
  • Extra display outputs aren't available
  • No support is included for SLI

The first thing to be aware of when looking at the 750 Ti is that it features next-generation Maxwell architecture. That’s something rare for Nvidia, which traditionally debuts a new architecture on enthusiast-class hardware rather than on a mainstream card.

This card is entry-level, so it isn’t going to play every game on ultra-quality settings, but it does have other advantages. One of those is that it drops power consumption and boosts efficiency to the ultimate level. This makes it a serious contender for Steam Box builders and home theater PC owners who appreciate both of those things.

This is a savvy move on the part of Nvidia, since the less-expensive 750 Ti is more of an impulse buy than higher-end Nvidia offerings like the Titan Black. When comparing the 750 Ti to other cards in the 700 series, you’ll find that it does come with G-Sync support. This helps prevent the screen tearing and stutter caused by mismatches with the monitor's refresh rate. To take advantage of that, the monitor must be G-Sync capable. These are some of the monitors that provide this:

  • Acer Predator X34 34-inch, 3440 x 1440 ($970)
  • AOC G2460PG 24-inch, 1920 x 1080 ($400)
  • Asus RoG Swift PG278Q 27-inch, 2560 x 1440 ($600)
  • BenQ XL2420G 24-inch, 1920 x 1080 ($400)
  • Dell S2716DG 27-inch, 2560 x 1440 ($450)
  • Philips 272G5DYEB 27-inch, 1920 x 1080 ($400)

Of course, there are various other options available to you. This just gives a bit of reference on what you can expect regarding price, size, and resolution if you are upgrading your monitor to take advantage of the G-Sync feature.

It’s not surprising that the 750 Ti is a tiny card, but it may be even more minuscule than expected. The board is only 5.7 inches long and comes with a single-slot design. What is even more interesting is that this graphics card does not need extra power connectors. Comparing this to the R7 line, specifically the 260X, is eye-opening.

The AMD model is larger by an inch and requires a six-pin power connector to operate. Despite how small the 750 Ti is, it can support up to three monitors and also includes two Dual-Link DVI ports and a port for Mini-HDMI.

In addition to that, the 750 Ti is one of the most efficient video cards out there. It consumes a mere 60 watts. The R7 260X needs almost double that at 115 watts. As for the power supply requirement, you’re good to go at only 300 watts, so it’s unlikely anyone will need to upgrade their PSU to get everything this little GPU can deliver. This makes it a fantastic upgrade option for all sorts of machines.

Moving on to specifications, you won’t be disappointed. It has 640 CUDA cores, a base clock of 1020 MHz and a boost of 1085 MHz. It also comes with two gigabytes of video RAM at an effective 5400 MHz to take advantage of. While another version without the Ti signifier is available, the two-gig Ti is likely to be best for anyone who plans to run more than a single monitor.

As with most video cards out there, the biggest and brightest board partners improve on the reference design to offer extra perks for their loyal customers. For the purposes of this review, we’ll look at the Gigabyte GTX 750 Ti Windforce OC, MSI GTX 750 Ti Gaming OC, and the Zotac GTX 750 Ti.

Gigabyte’s Windforce OC uses the Nvidia printed circuit board as a guide for its version. However, it uses the included solder points as a way to implement a six-pin power connector, which makes it the better choice for overclocking gamers. It features two HDMI, one DVI-I, and one DVI-D connectors as well. 

Unlike the single-slot original GTX 750 Ti, this one will take up two slots in your gaming rig. It’s a beautiful option for someone wanting a little boost on the 750 Ti, and it stays cool and works quietly even when overclocked. The only real disadvantage is that it’s a bit tall due to a heat pipe.

Like the Gigabyte model, MSI uses Nvidia's reference design but does not include a power connector. It does have a massive cooler on deck, though, which is a plus as far as keeping things properly chilly goes. This 750 Ti manages to handle everything thrown at it without getting loud or hot.

The connectors on the MSI include one HDMI, one D-Sub, and one DVI-D. While it is quiet and cool, it suffers in the same way as Gigabyte does by being a little taller than most would like. It’s also a dual-slot model, unlike Nvidia’s original.

Rather than including a six-pin connector like the Gigabyte option, Zotac uses the 16-lane PCI Express slot for power. It’s built on the same reference circuit board that Nvidia shipped and is the only one of the three with a single fan for cooling. As for ports, you’ll receive one DVI-D, one DVI-I, and a mini-HDMI. Unfortunately, this model is a bit louder than the others and can warm up more quickly. That said, it’s compact and short, so it should fit anywhere you need it to go.

Averaging the performance ability of each of the above cards leads to the determination that the GTX 750 Ti is similar to the Radeon HD 7850. That’s nothing to ignore, as that model came out as a GPU costing around $250. It’s even more exciting when you consider the lack of need for a power connector and the new Maxwell-based board.

Testing of this GPU against others shows an average of 90 frames per second per watt. You can set that against the Radeon R7 260X at 65 and the 650 Ti Boost at 47 to see that this is an inexpensive graphics card, but it can put out some real power when it needs to.

When it comes to noise, the tests use a studio microphone placed about 20 inches away from the center of the card. The Nvidia 750 Ti had an idle noise level of 31.5 dB and 34.1 dB while in game. The other models were similar but slightly lower in most cases. You’ll also find that it’s easy to keep the card cool, even if all you add is a fan and a small heatsink. Within a closed chassis, you’re likely to hear next to nothing.

Moving on, we look at average temperatures for the graphics cards listed above. It’s not surprising that these are low, primarily since they use next to no power compared to other GPUs on the market. As expected, the Nvidia model has a temperature around 82 degrees Fahrenheit when idling and tops out at 149 degrees Fahrenheit when in an intensive gaming environment. The other models are a bit cooler, with the lowest temperatures going to the Gigabyte Windforce model.

When the 750 Ti is in an idling state, it has roughly the same level of consumption as the R7 260X and looks increasingly better against powerhouses like the GTX 680, Radeon R9 280X, and others. Things only get better from there when you look at multi-monitor efficiency between cards. In a test case with three monitors attached, the 750 Ti draws less than the GTX 780, Titan, and more. The R7 260X uses over twice the power of the Nvidia GPU. With gaming, we see power consumption of around 60 watts for the 750 Ti, while the R7 260X is at 92 and the Titan is all the way up at 210.

Honestly, most people looking to pick up the 750 Ti probably aren’t doing so for cryptocurrency mining, but having an idea of the possible performance can offer insight into just what this energy-efficient card brings to the table. One perk of this card is that Maxwell architecture handles hashing much better than Kepler, making it the better choice if you are considering the GTX 680 or 770. However, it’s just not made for Bitcoin mining and gets killed by cards like the R7 260X.

When it comes to Litecoin mining, things are significantly better. Litecoin's proof-of-work relies on scrypt, a password-based key derivation function, which makes it more challenging to build dedicated mining hardware. That means GPUs are still useful in this mining operation. However, continuous investment in power and equipment is always something to worry about. When you consider performance per watt, Nvidia kills AMD here: you're going to spend less on the GPU and get better results.
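To illustrate the kind of password-based key derivation involved, here is a minimal sketch using Python's standard hashlib.scrypt. The 80-byte header is a dummy placeholder, and the N=1024, r=1, p=1 parameters are the commonly cited Litecoin proof-of-work values, not something verified in this review:

    import hashlib

    # Dummy 80-byte stand-in for a block header (a real header encodes version,
    # previous block hash, merkle root, timestamp, bits and nonce).
    header = bytes(80)

    # scrypt with the commonly cited Litecoin parameters; the header serves as
    # both the password and the salt, and the output is a 32-byte hash.
    pow_hash = hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32)

    print(pow_hash.hex())

scrypt's memory-hard inner loop is what kept GPUs competitive for this workload far longer than for Bitcoin's SHA-256.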

Looking at benchmarks for the 750 Ti, among others, we’ve come up with some points to look at to determine what you may be gaining and losing by choosing this Nvidia card. We base the comparison on frame rate, which means the higher the number, the better. Each group of three results below comes from a different game test; a short sketch averaging the figures follows the list.

  • AMD R7 260X – 29 average, 18 minimum
  • Nvidia GTX 650 Ti Boost – 35 average, 18 minimum
  • Nvidia GTX 750 Ti – 36 average, 23 minimum

  • AMD R7 260X – 36 average, 18 minimum
  • Nvidia GTX 650 Ti Boost – 48 average, 10 minimum
  • Nvidia GTX 750 Ti – 47 average, 6 minimum

  • AMD R7 260X – 37 average, 31 minimum
  • Nvidia GTX 650 Ti Boost – 31 average, 25 minimum
  • Nvidia GTX 750 Ti – 27 average, 23 minimum

  • AMD R7 260X – 50 average, 40 minimum
  • Nvidia GTX 650 Ti Boost – 52 average, 40 minimum
  • Nvidia GTX 750 Ti – 48 average, 39 minimum

  • AMD R7 260X – 58 average, 45 minimum
  • Nvidia GTX 650 Ti Boost – 69 average, 52 minimum
  • Nvidia GTX 750 Ti – 59 average, 46 minimum

  • AMD R7 260X – 22 average, 10 minimum
  • Nvidia GTX 650 Ti Boost – 23 average, 8 minimum
  • Nvidia GTX 750 Ti – 23 average, 8 minimum
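As a rough aggregate, the listed figures can be averaged per card; the small Python sketch below just computes the mean of the six average frame rates copied straight from the list:

    # Average frame rates per card across the six tests listed above.
    results = {
        "AMD R7 260X":             [29, 36, 37, 50, 58, 22],
        "Nvidia GTX 650 Ti Boost": [35, 48, 31, 52, 69, 23],
        "Nvidia GTX 750 Ti":       [36, 47, 27, 48, 59, 23],
    }

    for card, fps in results.items():
        print(f"{card}: {sum(fps) / len(fps):.1f} fps on average")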

When it comes down to it, the Nvidia GTX 750 Ti is an impressive graphics card that has many uses. Most of the people who choose to purchase it are likely to do so based on its low price and will be impressed by the level of performance that it brings to the table when gaming. It may not have the best performance compared to all the GPUs three times its price, but it sure raises the bar for affordable gaming. 

www.hardwaresecrets.com

GeForce GTX 750 Ti Review: Maxwell Adds Performance Using Less Power

These days, gamers like their graphics cards beefy. Double-slot coolers and fancy fan shrouds are typically what elicit Tim Allen-style grunts and knowing nods of approval. After all, high frame rates require complex GPUs. Billions of transistors cranking away at Battlefield 4 get hot. And all of that heat needs to go somewhere.

So if you're coming to the table with a short, naked PCB, it'd better have a trick or two up its figurative sleeve. 

Yet, Nvidia, perhaps trying to prove a point, shipped out its reference GeForce GTX 750 Ti on a less than six-inch board. With no auxiliary power connector. Sporting a little bolted-on orb-style heat sink and fan. It's pretty much the same size as GeForce GTX 650 Ti. But without the big cooler, GTX 750 Ti is daintier than even a lot of sound cards we've tested.

Nevertheless, Nvidia claims that its first Maxwell architecture-based product targets gaming at 1920x1080 in the latest titles using some pretty demanding settings. Could this be the graphics world’s Prius?

Maxwell In The Middle

Maxwell’s story is intriguing, partly because of what it means to the company’s design approach moving forward, but also because Nvidia is keeping more architectural details to itself than usual. Let’s start with the design.

Back in December of last year, we were in Santa Clara learning about Nvidia’s Tegra K1 SoC. We already knew that K1’s graphics engine was Kepler-based, essentially a single SMX with notable changes to the structures connecting various subsystems in a bid to optimize for power. But Jonah Alben, senior vice president of GPU engineering, also made it clear that every new architecture, from Maxwell onward, would be built with mobile in mind. Engineers would optimize the fabrics between GPU components based on performance targets and power budgets. However, the fundamental building blocks would stay common between segments, and efficiency would guide the important decisions.

This is clearly good news for the Tegra family, which continues clawing around for a more meaningful slice of market share. K1-based devices aren't even here yet and we're already thinking about Nvidia's claim that Maxwell offers two times the performance-per-watt of Kepler, and what such sizable improvements could mean to mobile gaming.

A renewed emphasis on efficiency should be good on the desktop too though, providing the company's retooled architecture continues scaling up from single- to double- and triple-digit power ceilings.

Fortunately, you won't have to wait long for an answer. The GeForce GTX 750 Ti launching today should demonstrate what Maxwell can do (at least at a 60 W TDP). Nvidia says its more effective design pulls power consumption way down and nudges performance up, even in a GPU featuring fewer CUDA cores. Knowing that it wouldn’t have a new process technology node to lean on, Nvidia had to make its improvements to Maxwell with 28 nm manufacturing in mind. In other words, it needed to get its GPUs working smarter, since simply tacking on more resources wouldn’t be an option.

The Maxwell Streaming Multiprocessor

Company representatives tell us that Maxwell’s biggest gains come from a redesign of the Streaming Multiprocessor, now abbreviated as SMM.

In Kepler, each SMX plays host to 192 CUDA cores, four warp schedulers, and a 256 KB register file. There’s also 64 KB serving as shared memory and L1 cache, a separate texture cache, and a uniform cache, plus 16 texture units. The big jump in CUDA core count and control logic helped Nvidia overcome losing Fermi’s doubled shader frequency. But the SMX apparently proved difficult to fully utilize in this configuration.  

Maxwell attempts to address that by partitioning the SMX into four blocks, each with its own instruction buffer, warp scheduler, and pair of dispatch units. Kepler’s 256 KB register file now gets split into four 64 KB slices. And the blocks have 32 CUDA cores each, totaling 128 across the SMM (down from Kepler’s 192). The previous architecture’s 32 load/store and 32 special function units carry over to Maxwell. However, double-precision math is further pared back to 1/32 the rate of FP32; that was 1/24 in the mainstream Kepler-based GPUs.

GM107 SMM (Left) Versus GK106 SMX (Right)
Per SM:                  GM107   GK106   Ratio
CUDA Cores               128     192     2/3x
Special Function Units   32      32      1x
Load/Store               32      32      1x
Texture Units            8       16      1/2x
Warp Schedulers          4       4       1x
Geometry Engines         1       1       1x

Every pair of blocks is tied to a 12 KB texture and L1 cache, adding up to 24 KB per SMM. Block pairs are also associated with four texture units, meaning SMMs come armed with eight. That’s half as many texture units compared to Kepler’s SMX. And the table above makes it look like GM107 actually gives up some ground to GK106. But don’t freak out about bottlenecks quite yet. Remember, the architecture is supposed to get more done using fewer resources.

Lastly, there’s a 64 KB shared memory space for the SMM, which carries over from Fermi and then Kepler, but is no longer called out as L1 cache for compute tasks. It used to be that this space could be configured as 48 KB of shared space and 16 KB of L1 and vice versa. Now that's not necessary, so all 64 KB is used as a shared address space for GPU compute.

As you might imagine, cutting 64 CUDA cores and eight texture units from the SMM means that each building block consumes significantly less die area. Meanwhile, Nvidia claims that it’s able to hold onto ~90% of the multiprocessor’s performance by keeping cores busy in a sustained way. If you’re contemplating what that might mean to a tablet, you’re not alone. But in a desktop application, Nvidia’s simply able to pack more SMMs into a set amount of space. The GeForce GTX 650 Ti this card replaces employed four SMX blocks, while GeForce GTX 750 Ti incorporates five SMMs.
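Given those per-SMM figures, GM107's headline numbers fall out of simple multiplication. A quick arithmetic sketch in Python (the two FLOPs per core per clock for fused multiply-add is the usual convention, not an Nvidia quote):

    smm_count = 5            # SMMs in a full GM107
    cores_per_smm = 128      # four blocks of 32 CUDA cores
    tmus_per_smm = 8
    boost_clock_ghz = 1.085

    cuda_cores = smm_count * cores_per_smm               # 640
    texture_units = smm_count * tmus_per_smm             # 40
    peak_fp32_gflops = cuda_cores * 2 * boost_clock_ghz  # ~1389 GFLOPS
    peak_fp64_gflops = peak_fp32_gflops / 32             # FP64 at 1/32 the FP32 rate

    print(cuda_cores, texture_units)
    print(f"Peak FP32: {peak_fp32_gflops:.0f} GFLOPS, FP64: {peak_fp64_gflops:.0f} GFLOPS")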

Constructing GM107

This is the first time we’ve seen Nvidia introduce a new architecture on a decidedly mid-range graphics card. With Fermi, it was the full-force GF100. Even the Kepler-based GK104 was an impressively fast introduction to that architecture. So, the messaging is quite a bit different with GM107 leading the charge. Of course, that’s because GeForce GTX 750 Ti has to slot into a portfolio still dominated by Kepler, rather than simply ascending a throne.

And so it does so using a fully-enabled implementation of GM107, composed of five SMMs in a single Graphics Processing Cluster with its own Raster Engine. GM107 can set up one visible primitive per clock cycle, which is just behind GK106's primitive rate of 1.25 prim/clock and double GK107's 0.5 prim/clock.

As in Nvidia architectures prior, ROP partitions and L2 cache slices are aligned. And like the GeForce GTX 650 Ti’s GK106 processor, GM107 sports two partitions with eight units each, giving you up to 16 32-bit integer pixels per clock. Where the two GPUs really diverge is their L2 cache capacity. In GK106, you were looking at 128 KB per slice, adding up to 256 KB in an implementation with two ROP partitions. GM107 appears to wield 1 MB per slice, yielding 2 MB of memory used for servicing load, store, and texture requests. According to Nvidia, this translates to a significant load shifted away from the external memory system, along with notable power savings.

Going easy on memory bandwidth is smart, since GM107 exposes a pair of 64-bit memory controllers to which 1 or 2 GB of 1350 MHz GDDR5 DRAM is attached. Peak throughput is, interestingly, exactly what we got from GeForce GTX 650 Ti: 86.4 GB/s. The memory is feeding fewer CUDA cores, but they’re managed more efficiently. So, the big L2 is supposed to play an instrumental role in preventing a bottleneck. 
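As a quick check of that 86.4 GB/s figure, the standard GDDR5 bandwidth arithmetic (quad data rate on the 1350 MHz clock) works out as below; this is an illustrative sketch, not Nvidia's own math.

    controllers = 2
    bits_per_controller = 64
    mem_clock_mhz = 1350                      # GDDR5 command clock
    effective_rate_mts = mem_clock_mhz * 4    # quad data rate -> 5400 MT/s

    bus_width_bytes = controllers * bits_per_controller // 8     # 16 bytes per transfer
    bandwidth_gbs = bus_width_bytes * effective_rate_mts / 1000  # 86.4 GB/s

    print(f"{bandwidth_gbs:.1f} GB/s")  # matches both GTX 650 Ti and GTX 750 Ti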

Indeed, a look at global in-page random cache latencies helps illustrate how Maxwell's memory hierarchy keeps the GPU busy more consistently.

Beyond the pieces of GM107 devoted to gaming and compute tasks, Nvidia also says it improved the fixed-function NVEnc block. That’s the bit of logic responsible for letting ShadowPlay encode your frag fest with minimal performance impact. It’s what enables streaming to the Shield. And it accelerates a few transcoding apps to get big movies onto your portable devices quickly. Whereas Kepler was capable of encoding H.264-based content ~4x faster than real-time, Maxwell purportedly achieves 6-8x real-time. H.264 decode performance is said to be eight to 10 times quicker than it was before, also. Nvidia achieves those gains, it says, by simply speeding up the fixed-function blocks.

                      GeForce GTX 650   GeForce GTX 650 Ti   GeForce GTX 750 Ti   GeForce GTX 660
GPU                   GK107             GK106                GM107                GK106
Architecture          Kepler            Kepler               Maxwell              Kepler
SMs                   2                 4                    5                    5
GPCs                  1                 2                    1                    3
Shader Cores          384               768                  640                  960
Texture Units         32                64                   40                   80
ROP Units             16                16                   16                   24
Process Node          28 nm             28 nm                28 nm                28 nm
Core/Boost Clock      1058 MHz          925 MHz              1020 / 1085 MHz      980 / 1033 MHz
Memory Clock          1250 MHz          1350 MHz             1350 MHz             1502 MHz
Memory Bus            128-bit           128-bit              128-bit              192-bit
Memory Bandwidth      80 GB/s           86.4 GB/s            86.4 GB/s            144.2 GB/s
Graphics RAM (GDDR5)  1 or 2 GB         1 or 2 GB            1 or 2 GB            2 GB
Power Connectors      1 x 6-pin         1 x 6-pin            None                 1 x 6-pin
Maximum TDP           64 W              110 W                60 W                 140 W
Price                 $130 (2 GB)       $150 (2 GB)          $150 (2 GB)          $190 (2 GB)

All told, the GM107 GPU ends up with 1.87 billion transistors in a 148 mm² die. If you keep the comparison to GeForce GTX 650 Ti going, then the first Maxwell-based processor replaces GK106, which packs 2.54 billion transistors into a 221 mm² die. Before we get to our performance results, we have to assume Nvidia’s emphasis on efficiency is significant enough to let the company use fewer transistors on a smaller die, cut out a lot of CUDA cores and texture units, and still improve performance overall. At least, that’s what we’ll be looking for…

Alternatively, you can put GM107 up against the 1.3 billion-transistor, 118 mm² GK107, if your preference is a face-off of thermal ceilings. In that case, the Maxwell-based processor is more complex, larger, significantly faster, and yet it should still use less power.


If you’re building a graphics card, dropping a low-power GPU onto it gives you some options. You don’t get shoehorned into a dual-slot, actively-cooled beast of a board. For its reference design, Nvidia chose a PCB less than six inches long—it pretty much ends with the PCI Express slot connector. There’s a single-slot I/O bracket with two dual-link DVI connectors and a mini-HDMI output. However, Nvidia covers the GM107 processor with an orb-style heat sink and fan that eat up two slots worth of space, so you still have to budget accordingly.

The card’s 60 W ceiling is easily satisfied by a 16-lane PCI Express slot, which is rated for up to 75 W. That means you won’t find an auxiliary power connector on the PCB (even if there are holes for one).  We’ve long been fans of cards fitting this profile because of how flexible they are. Previously, AMD’s Radeon HD 7750 was your best bet for upgrading an old decrepit box with too-little power output or not enough connectors for a decent add-in board. Now the GeForce GTX 750 Ti is gunning for that position.

Unfortunately, there’s also no SLI bridge connector. True to Nvidia’s mainstream approach, the $150 price point is right about where you lose the option to sling two boards together for higher performance. This is a competitive disadvantage; AMD’s alternatives in the same price range allow CrossFire configurations. It’s probable that Nvidia could achieve SLI over PCI Express, but the company says it doesn’t see much demand from enthusiasts looking to link $150 cards. If you feel differently, speak up. We’d be curious to see if a couple of GM107s could beat a GeForce GTX 770, for sure.

Nvidia plans to offer two versions of the GeForce GTX 750 Ti—one with 1 GB of GDDR5, priced at $140, and available later in February, and a 2 GB model that should be selling for $150 by the time you read this. Moreover, there will be a GeForce GTX 750 that ships later in the month at a price point of $120.

The initial round of partner boards includes a mix of cards eating up one and two expansion brackets, but they’re all dual-slot designs. Down the road, though, we’re told to expect dual-slot passively-cooled solutions. Single-slot configurations are also possible, though nobody seems certain that a low-profile fan can yield a pleasant experience.

www.tomshardware.com

Price and specifications of the NVIDIA GeForce GTX 750 Ti


  • Series: GeForce 700.
  • Memory size: 2 GB GDDR5.
  • Memory interface: 128-bit.
  • Bandwidth: 86.4 GB/s.

The price and specifications of the NVIDIA GeForce GTX 750 Ti graphics card, offered by the American company NVIDIA as part of the GeForce 700 series and announced on February 18, 2014. It comes with 2048 MB of GDDR5 memory, delivers up to 86.4 GB/s of bandwidth over a 128-bit interface, and runs at a 1020 MHz base clock, a 1085 MHz boost clock, and a 5.4 Gbps memory clock. The maximum digital resolution is 4096×2160, and the card supports numerous technologies: NVIDIA GameStream, GPU Boost 2.0, 3D Vision, CUDA, PhysX, TXAA, Adaptive VSync, FXAA, NVIDIA Surround, and G-SYNC readiness. Full details of the card follow.

Card capabilities
Memory size: 2048 MB GDDR5
Processing speeds: 640 CUDA cores, 1020 MHz base clock, 1085 MHz boost clock, 5.4 Gbps memory clock
Memory interface width: 128-bit
Bandwidth: 86.4 GB/s
Card type: Desktop
Bus interface: PCI Express 3.0 x16
Announcement date: February 18, 2014
Operating system support: Certified for Windows 7, Windows 8, Windows Vista, and XP
Technology support
DirectX version: DirectX 12 API
OpenGL version: 4.3
OpenCL version: 2.0
Shader Model version: 5.0
Other technology support: NVIDIA GameStream, GPU Boost 2.0, 3D Vision, CUDA, PhysX, TXAA, Adaptive VSync, FXAA, NVIDIA Surround, G-SYNC-ready
Display support
Maximum digital resolution: 4096×2160
Card outputs: One Dual Link DVI-I, One Dual Link DVI-D, One mini-HDMI
Multi-monitor support: 3 displays
Thermal and power specifications
Recommended power supply: 300 W
Maximum temperature: 98 °C
Price in Egypt: starts at 2,000 EGP (last updated 2019/05/11)
Price in Saudi Arabia: not yet announced

www.nologygate.com

Nvidia GeForce GTX 750 and 750 Ti review

In this segment of the article we will look at the reference (original design) specs and architecture. The GeForce GTX 750 and 750 Ti are based on the Maxwell GPU architecture. Both use the GM107 chip, which is built on a 28 nm fabrication process and has 1.87 billion transistors.

Maxwell

NVIDIA’s first-generation “Maxwell” GPU architecture implements a number of architectural enhancements designed to extract more performance per watt consumed. The first Maxwell-based GPU is codenamed “GM107” and designed for use in power-limited environments like notebooks and small form factor (SFF) PCs. The first graphics card that is based on the GM107 GPU is the GeForce GTX 750 Ti. Because of GM107’s architectural efficiency, the GeForce GTX 750 Ti has a TDP of merely 60 watts even when gaming at 1080p.

For Maxwell, Nvidia designed a new Streaming Multiprocessor (SM) with a focus on improving overall performance per watt. The new Maxwell SM architecture allows five SMs in GM107, compared to two in GK107, with only a 25% increase in die area. Maxwell also has a larger L2 cache design: 2048 KB in GM107 versus 256 KB in GK107. With more cache located on-chip, fewer requests to the graphics card's DRAM are needed, thus reducing overall board power and improving performance.

Maxwell implements multiple SM units within a GPC (Graphics Processing Cluster), and each SM includes a Polymorph Engine and Texture Units, while each GPC includes a Raster Engine. ROPs are still aligned with L2 cache slices and Memory Controllers. The GM107 GPU contains one GPC, five Maxwell Streaming Multiprocessors (SM), and two 64-bit memory controllers (128-bit total). 

When we peek at NVIDIA's slide decks we see them denoting that the GeForce GTX 750 Ti has been designed for 1080p gaming with medium settings and FXAA. That's true, but please do understand this remains an entry level graphics card series. As far as we are concerned, mainstream will start at the GTX 760.

  • The GeForce GTX 650 boasts 384 CUDA (shader) cores (GK107)
  • The GeForce GTX 650 Ti  boasts 768 CUDA (shader) cores (GK106)
  • The GeForce GTX 660 boasts 960 CUDA (shader) cores (GK106)
  • The GeForce GTX 660 Ti boasts 1344 CUDA (shader) cores (GK104)
  • The GeForce GTX 750 boasts 512 CUDA (shader) cores (GM107)
  • The GeForce GTX 750 Ti  boasts 640 CUDA (shader) cores (GM107)

GeForce GTX 750

The 1 GB GeForce GTX 750 ships with 4 activated SM units containing 512 shader cores and 32 texture units. The core clock frequency is 1020 MHz and it can boost to 1085 MHz. The memory runs at a 5010 MHz effective data rate, based on a 1252 MHz quad-data-rate GDDR5 clock over a 128-bit memory bus.

GeForce GTX 750 Ti

The more interesting product is the GTX 750 Ti, which has 5 activated SM units containing 640 shader cores, 40 texture units, and 16 ROPs. The core clock frequency is 1020 MHz and it can boost to 1085 MHz. The memory runs at a 5400 MHz effective data rate, based on a 1350 MHz quad-data-rate GDDR5 clock over a 128-bit memory bus.

 

You'll notice that the TDP of these cards is set at a maximum of 60 Watts. Thanks to Maxwell the GeForce GTX 750 and Ti draw relatively little power. Many GeForce GTX 750 boards are capable of hitting speeds in excess of 1100 MHz easily. The idle power of the GeForce GTX 750 series is merely a few watts, actually 3 to 5W representing terrific in-class idle power. In addition, HD video is playable at ~13W, again representing great power consumption. Display outputs include two dual-link DVIs and one mini-HDMI, however this will differ per board partner.

128-bit Memory Interface

The memory subsystem of the GeForce GTX 750 cards is based upon two 64-bit memory controllers (128-bit total) with either 1 GB or 2 GB of GDDR5 memory. The ROP (raster operation) engine was cut down to 16 units. With this release, NVIDIA now has the first Maxwell products on their way. The new graphics adapters are DirectX 11.2 ready.

                              GTX 650   GTX 650 Ti   GTX 660   GTX 660 Ti   GTX 750     GTX 750 Ti   GTX 760
Stream (Shader) Processors    384       768          960       1344         512         640          1152
Core Clock (MHz)              1058      925          980       915          1020        1020         980
Shader Clock (MHz)            1058      925          -         -            -           -            -
Boost Clock (MHz)             -         -            1033      980          1085        1085         1033
Memory Clock (effective MHz)  5000      5400         6008      6008         5010/5400   5400         6008
Memory amount

www.guru3d.com

GeForce GTX 750 Ti


SERIOUS GAMING. INCREDIBLE PRICE. Step up to GeForce® GTX gaming with the new GTX 750 Ti. Built on the ultra-power-efficient next-generation architecture, it delivers 25% higher performance and twice the power efficiency of the GTX 650 Ti. The GTX 750 Ti offers striking performance at an incredible price, making it the graphics card of choice for serious gaming.

GPU SPECIFICATIONS:
NVIDIA CUDA® cores: 640
Base clock (MHz): 1020
Boost clock (MHz): 1085
MEMORY SPECIFICATIONS:
Memory speed: 5.4 Gbps
Standard memory configuration: 2048 MB
Memory type: GDDR5
Memory interface width: 128-bit
Memory bandwidth (GB/s): 86.4

www.nvidia.ru

