
Nvidia GTX 980 Ti Review: Titan X performance at a fraction of the price

Nvidia has launched its next high-end GPU, and it’s a stunner. Just as the GTX 780 followed the original Titan, the GTX 980 Ti will slide in between the Titan X and the upper-end “standard” consumer card, the GTX 980. This new card is based on the same GM200 GPU as the Titan X, but trims the VRAM buffer down to 6GB, from Titan X’s 12GB. The cooler design is outwardly identical to the shroud and fan that Nvidia has deployed since it first unveiled the GTX Titan.

Overall, the GTX 980 Ti is a very modest step down from what the Titan X offers. It has 22 SM clusters as opposed to Titan X's 24, for a total of 2816 GPU cores (a roughly 9% reduction). Trim the texture units by the same ratio (176 as opposed to 192) and keep the total number of ROPs the same (96). Then cut the RAM in half, for a total of 6GB, down from 12GB, and voilà — you have the GTX 980 Ti.
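
Those figures follow directly from Maxwell's building blocks: each SMM contributes 128 CUDA cores and 8 texture units, so the cut works out as

```latex
22 \times 128 = 2816 \text{ cores} \quad\text{vs.}\quad 24 \times 128 = 3072 \text{ cores},
\qquad \frac{3072 - 2816}{3072} = \frac{256}{3072} \approx 8.3\%
```

with the texture units following the same 22:24 ratio (22 × 8 = 176 against 24 × 8 = 192), in line with the roughly 9% figure above.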

The memory clock, base clock, and boost clock on the 980 Ti are all identical to Titan X, as is its pixel fill rate. Texture rate is down slightly, thanks to the decreased number of texture mapping units. Both chips have a 384-bit memory bus. Nvidia has promised that the 980 Ti has full access to its memory pool, and that overall GPU memory bandwidth should be in-line with Titan X. We see no evidence of any memory-related issues, and the 6GB memory buffer on the card gives the chip room to breathe in any case.
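
That bandwidth claim is easy to sanity-check. GDDR5 transfers four bits per pin per memory clock, so the card's 1753 MHz memory clock gives an effective rate of about 7.0 GT/s, and across a 384-bit bus:

```latex
\frac{384\ \text{bits}}{8} \times 7.01\ \text{GT/s} = 48\ \text{bytes} \times 7.01\ \text{GT/s} \approx 336.5\ \text{GB/s}
```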

On paper, the GTX 980 Ti packs virtually all of the Titan X’s punch into a much lower $649 price.

Competitive positioning

If you follow the GPU market with any regularity, you’re likely already aware that AMD has a new, High Bandwidth Memory-equipped graphics card launching in the near future, possibly dubbed the Radeon Fury. As things stand today, however, AMD only has one GPU that seriously plays in the $500+ space — the R9 295X2. At $619, it’s down substantially from its $1500 launch price — cheap enough to be considered potent competition for Nvidia’s GTX 980 Ti and Titan X cards.

Dual-vs-single GPU comparisons are intrinsically tricky. The doubled-up card is almost always the overall winner — it’s exceptionally rare for AMD or Nvidia to have such an advantage over the other that two cards can’t outpace one. The reason dual GPUs don’t automatically sweep such comparisons is twofold: First, not all games support more than one graphics card, which leaves the second GPU effectively sitting idle. Second, even when a game does support multiple cards, it typically takes driver optimizations to fully enable it.

The R9 295X2, Titan X, GTX 980, and GTX 980 Ti were all tested in a Haswell-E system with an Asus X99-Deluxe motherboard, 16GB of DDR4-2667, and Windows 8.1 64-bit with all patches and updates installed. The latest AMD Catalyst Omega drivers and Nvidia GeForce 352.90 drivers were used. Our power consumption figures are going to be somewhat higher in this review than in some previous stories — the 1200W PSU we used for testing was a standard 80 Plus unit, and not the 1275W 80 Plus Platinum unit that we've typically tested with.

V-Sync was disabled in all tests, as was G-Sync.

Source: https://www.extremetech.com/extreme/206956-nvidia-gtx-980-ti-review-titan-x-performance-at-a-fraction-of-the-price

EVGA GeForce GTX 980 4GB SC GAMING ACX 2.0, 26% Cooler and 36% Quieter Cooling Graphics Card 04G-P4-2983-KR

Style: GTX 980 SC ACX 2.0

The new EVGA GeForce GTX 980 is powered by the next-generation NVIDIA Maxwell architecture, giving you incredible performance, unmatched power efficiency, and cutting-edge features. Maxwell is the most advanced GPU architecture ever made, designed to be the engine of next-generation gaming. Inspired by light, it was designed to solve some of the most complex lighting and graphics challenges in visual computing. For the first time, gaming GPUs can dynamically render indirect light using the new VXGI (Voxel Global Illumination) technology. Scenes are significantly more lifelike as light interacts more realistically in the game environment.

Incredible Speed and Power Efficiency: The GTX 980 is the world's fastest GPU and the GTX 970 offers the most advanced performance in its class. Each delivers 2x the performance of previous-generation cards, bringing new gaming experiences to virtual reality, HD, and ultra-resolution 4K displays.

Dynamic Super Resolution Technology: Enables the detail of 4K monitors on a 1080p display. DSR produces smoother images by rendering a game at a high resolution, then downscaling it to the native resolution of the display using advanced filtering.

Super-Smooth and Stutter-Free: GeForce GTX 980 and 970 cards support tear-free, super-fast NVIDIA G-Sync monitor display technology, including 4K. Together, these technologies provide the most immersive and competitive gaming experiences possible.
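
To make the DSR description concrete, here is a minimal sketch of the render-high-then-downscale idea as a CUDA kernel. It uses a plain 2x2 box filter for brevity (Nvidia describes DSR's actual filter as a 13-tap Gaussian), and the buffer layout is an assumption for illustration, not how the driver implements it:

```cuda
#include <cuda_runtime.h>

// Conceptual DSR-style downscale: the scene is rendered at 2x2 the native
// resolution into `hi`, then filtered down into the native-size `lo`.
__global__ void downsample2x(const float3* hi, float3* lo, int loW, int loH) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= loW || y >= loH) return;
    int hiW = loW * 2;                       // high-res buffer is twice as wide
    float3 s = make_float3(0.0f, 0.0f, 0.0f);
    for (int dy = 0; dy < 2; ++dy)           // average the 2x2 source footprint
        for (int dx = 0; dx < 2; ++dx) {
            float3 p = hi[(y * 2 + dy) * hiW + (x * 2 + dx)];
            s.x += p.x; s.y += p.y; s.z += p.z;
        }
    lo[y * loW + x] = make_float3(s.x * 0.25f, s.y * 0.25f, s.z * 0.25f);
}
```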

Source: https://www.amazon.com/EVGA-GeForce-Quieter-Graphics-04G-P4-2983-KR/dp/B00NT9UT3M

GeForce GTX 980 Ti Revisited: How does it fare against the GTX 1070 and RTX 2060?

Today we're revisiting an old friend, the GeForce GTX 980 Ti, and we're doing so with a 36 game benchmark covering the 1080p and 1440p resolutions. We're particularly interested to see how it performs against more modern GPUs like the GTX 1070 and the GeForce RTX 2060. So in a way we suppose you could call this more of a GTX 980 Ti vs. GTX 1070 vs. RTX 2060 test.

About a year ago we reviewed the RTX 2060 for the first time. We happened to miss the official launch because Nvidia "lost" our sample in the mail, but that meant we didn't have to rush our tests, so we went the whole hog and provided a massive 36 game test.

Looking back at that data, we found that the RTX 2060 was able to beat the GTX 1070 Ti by a slim margin, making it 13% faster than the vanilla GTX 1070. We didn’t include older GPUs at the time like the GTX 980 Ti, so the Maxwell-based flagship was absent from that feature. It will be interesting to see if those margins have changed, while also taking a look at how much faster the RTX 2060 is when compared to the GTX 980 Ti.

The GTX 980 Ti was released back in mid-2015 for $650 and the RTX 2060 in early 2019 for $350. That makes the modern Turing GPU three and a half years newer and almost 50% cheaper. The 980 Ti was a beast of a GPU: it shared the same 601 mm² die with the Titan X and although not all SM units were enabled, it still packed an impressive 2816 CUDA cores, 6GB of GDDR5 memory on a 384-bit wide memory bus, and enjoyed a memory bandwidth of 336.5 GB/s.

A year later the GTX 1070 arrived, and although it packed just 1920 CUDA cores -- 32% fewer than the 980 Ti -- performance ended up being very similar thanks to a ~60% increase in clock speed. That was largely due to Nvidia moving from TSMC's 28nm process to what at the time was their latest 16nm manufacturing.
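
The back-of-the-envelope math bears this out. Using the two cards' reference boost clocks (roughly 1683 MHz for the GTX 1070 and 1075 MHz for the 980 Ti, figures not quoted above), theoretical FP32 throughput scales as cores times clock:

```latex
\frac{1920 \times 1683}{2816 \times 1075} \approx 0.68 \times 1.57 \approx 1.07
```

so on paper the 1070 holds only a ~7% edge, which is why the two trade blows in practice.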

The jump to the 12nm process with the RTX 2060 was less extreme, but here we got an entirely new architecture. Notably, the RTX 2060 die is 42% larger than the GTX 1070 and although the core count has only increased by a little over 10%, the cores are much wider and support technologies such as real-time ray tracing.

Despite the massive increase in die size, Nvidia sold the RTX 2060 for a little less than the Pascal-based GTX 1070. In a way they had to, as the newer GPU had more features but was just marginally faster. Turing is the more modern architecture, featuring better support for DX12 and Vulkan. So again, it'll be interesting to see if that margin has grown and where -- in other words, which games are favoring Turing today.

Our test setup consists of the MSI GTX 980 Ti Gaming (graphics card names were a little simpler back then) pitted against the MSI RTX 2060 Gaming Z and GTX 1070 Gaming X. Powering the GPU test rig is the Intel Core i9-9900K overclocked to 5 GHz with 16GB of DDR4-3400 memory. As usual, rather than having 36 individual graphs, we’ll look at about a dozen of the more interesting games and then jump into the performance breakdown graphs. For the discussion, we’ll be focusing on the 1440p results...

Benchmarks

Doom Eternal is an interesting game to start with as it makes heavy use of async compute, a technology neither the Maxwell nor the Pascal architecture was able to utilize at the hardware level.
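
Async compute means feeding the GPU independent compute work on a separate queue so it can execute alongside graphics. As a loose analogy only (not the game's or this article's methodology), here is a sketch using two CUDA streams; the kernel names and workloads are invented stand-ins:

```cuda
#include <cuda_runtime.h>

__global__ void shadeKernel(float* p, int n) {          // stand-in for graphics work
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] = p[i] * 0.5f + 0.5f;
}

__global__ void postFxKernel(float* p, int n) {         // stand-in for compute work
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] = 1.0f - p[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Independent work in separate streams *may* execute concurrently.
    // Hardware that can schedule compute alongside other work (Turing)
    // fills idle units this way; Maxwell largely serializes mixed workloads.
    shadeKernel<<<n / 256, 256, 0, s0>>>(a, n);
    postFxKernel<<<n / 256, 256, 0, s1>>>(b, n);
    cudaDeviceSynchronize();

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```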

At 1440p the key advantage the GTX 1070 has over the 980 Ti is its 8GB VRAM buffer, that and what is likely better driver optimization. The end result is that the 980 Ti was almost 20% slower. However, it's the RTX 2060 and its more modern architecture that really excels in this title, reaching almost 100 fps on average, which left the 980 Ti 31% slower.

It's also worth noting that the 2060 comes out even better at 1080p, since like the 980 Ti it is limited to a 6GB memory buffer. Whereas it was 18% faster than the GTX 1070 at 1440p, it's a whopping 36% faster at 1080p, where the memory constraint is less of an issue.

Moving to Resident Evil 3 we find a situation where the GTX 1070 is no faster than the GTX 980 Ti. If anything, it’s a little slower. This was also the case with Resident Evil 2 -- both games use the same engine and look very similar.

With 68 fps on average at 1440p, the 980 Ti fares very well in this title. The RTX 2060 was faster, but this time by just a 13% margin, so while that is certainly progress, it's not a ton of it.

Where we do see a significant step forward for the Turing based GPU is in Rainbow Six Siege. Here the RTX 2060 was a whopping 35% faster than the GTX 980 Ti and almost 40% faster than the GTX 1070.

Rainbow Six Siege is a compute-heavy title and previously this meant AMD’s 5th gen GCN products walked all over Nvidia’s Pascal GPUs, such as the GTX 1070. However the upgraded Turing cores tackled that weakness and now it’s Nvidia who enjoys an advantage in this title, even when compared to Radeon Navi GPUs.

The leap forward seen here is incredible, though it has to be said, with 79 fps on average the GTX 980 Ti still delivered impressive performance at 1440p.

The 980 Ti also performs very well in Call of Duty: Modern Warfare, spitting out over 60 fps at all times during our test, for an average of 81 fps. It was also just 7% slower than the GTX 1070, though it did trail the much newer RTX 2060 by a 24% margin.

The RTX 2060 does perform well in this title, even at 1440p, and a 23% performance uplift over the GTX 1070 is certainly nothing to sneeze at.

This game also supports ray tracing and is therefore heavily optimized for Turing. We've witnessed Pascal GPUs such as the GTX 1060 performing quite poorly relative to competitors, in this case the RX 580. So while we're sure Turing improvements help here, we'd wager that driver optimization is playing a role as well.

The GTX 980 Ti performed well in F1 2019 using the new DX12 mode: it was just 11% slower than the GTX 1070 and, more importantly, was able to provide smooth, playable performance at 1440p with 76 fps on average. The RTX 2060 was 32% faster than the 980 Ti and 18% faster than the GTX 1070, a reasonable performance uplift in this latest F1 title.

Fortnite also supports DX12 now and using this API we see comparable performance between the GTX 980 Ti and GTX 1070 at 1440p using the maximum quality preset. For competitive-type quality settings either will work just fine.

The RTX 2060 does offer a 23% performance boost, though that's perhaps not the kind of gain you'd expect to see in this title.

Another popular battle royale game where the GTX 1070 and 980 Ti are evenly matched. Both GPUs average just over 70 fps. The RTX 2060 was about 20% faster at 1440p, which is a strong performance lead and certainly not a bad generation-on-generation performance uplift.

Shadow of the Tomb Raider remains a visually impressive game even though it launched 18 months ago. As you can see, it's a demanding one, even at 1440p. Here the 980 Ti failed to achieve a 60 fps average, as did the GTX 1070, both rendering just over 50 fps. This made the 23% boost offered by the RTX 2060 quite noticeable.

World War Z is yet another title where the GTX 1070 and 980 Ti are very evenly matched, and another where both easily push above 60 fps at 1440p using the maximum in-game quality settings. The RTX 2060 was around 18% faster, so a reasonable performance uplift there, but it has to be said the 980 Ti hardly looks outdated.

The Gears 5 results are interesting: the GTX 1070 is clearly faster than the 980 Ti at 1080p, offering a 12% performance upgrade. However, at 1440p the results come together and the 1070 is just 6% faster, verging on a margin-of-error difference. The RTX 2060 is just 13% faster than the GTX 1070, and we see a similar margin between the two at both tested resolutions.

Ghost Recon Breakpoint has recently been updated to support Vulkan and this has led to big performance gains for modern GeForce GPUs -- gains of up to 20%.

It seems clear that Nvidia has not optimized for the 980 Ti and Maxwell, as the GTX 1070 was a whopping 18% faster at 1440p -- a similar result to what we saw with another new title, Doom Eternal.

Last up we have World of Tanks, and as is often the case with older titles, we see very little difference in performance between the three GPUs, particularly at 1440p. We're talking a 10% or lower margin between the slowest and fastest GPU. This can be attributed to better driver optimization for older GPUs and/or the inability to take advantage of modern features supported by newer GPUs.

For the most part the GTX 980 Ti still looks to be handling itself rather well, though there are some titles, such as Ghost Recon Breakpoint, which appear to lack proper driver optimization and therefore will require gamers to reduce graphics settings for smooth performance at 1440p. Since we've only looked at a dozen of the tested games, let's see how these GPUs compare across all 36 games.

Performance Breakdown

At 1080p, the GTX 980 Ti was on average 5% slower than the GTX 1070. The only outlier here was The Division 2, where the 980 Ti was 26% slower for some reason.

Removing that result changes the average by a single percent, making the 980 Ti ~4% slower. For the most part, you're looking at little to no difference between these two GPUs: 21 of the 36 games tested saw a margin of 5% or less, which we typically deem a draw.

Moving to 1440p reduces the margin to 4% and removing The Division 2 doesn’t change anything in that average. 22 of the 36 games saw a margin of 5% difference or less, which confirms these GPUs are very evenly matched.

When compared to the RTX 2060, the GTX 980 Ti was 20% slower on average at 1080p. The Division 2 along with Doom Eternal are weak titles for the Maxwell part.

The old GTX 980 Ti trails the RTX 2060 by a similar margin at 1440p. For this match up it was 19% slower on average. Certainly it's on the more demanding, newer, or more technically advanced games where the RTX 2060 gets away from the GTX 980 Ti.

The margins are much smaller in titles such as World of Tanks, War Thunder, For Honor and Resident Evil 3, for example. Whereas they’re quite considerable in titles such as Doom Eternal, The Division 2, Control, DiRT Rally 2.0, Strange Brigade, Rainbow Six Siege, Red Dead Redemption 2, Assassin’s Creed Odyssey, Call of Duty: Modern Warfare, and well… a few others, you get the point.

What We Learned

While the GTX 980 Ti is starting to show its age, the once mighty flagship GPU still has some fight in it, which is great to see. We have to admit that although the initial intention of this benchmark session was to investigate how well the 980 Ti stacked up against the newer GTX 1070 in 2020, we often found ourselves more interested in the battle between the GTX 1070 and RTX 2060.

But we'll stop ourselves from jumping right to that and talk a little more about the 980 Ti versus the GTX 1070. Last time we compared these two head to head across a large range of games, the 980 Ti was actually 1% faster -- basically identical -- and that's still mostly true today. The games that tipped the results in the 1070's favor include The Division 2, Ghost Recon Breakpoint and Doom Eternal.

None of those titles were tested before and in our opinion the 980 Ti probably shouldn’t be that much slower, so we suspect this is a result of Nvidia failing to optimize for older hardware.

The GTX 980 Ti is 5 years old now and we're pretty confident that Nvidia abandons all optimizations around the six-year mark. This can be seen when looking at Kepler-based GPUs. We suspect we're getting to a point where the GeForce 900 series will start to fall away in newer titles. If you happen to be second-hand shopping, keep that in mind.

Let's now shift gears to the RTX 2060...

Upon release we found it was ~13% faster than the GTX 1070. In today's test the RTX 2060 was 20% faster on average, so which games are responsible for the overall improvement in performance?

The biggest contributor is Control, a newer title that wasn't part of the previous test. Control is also an Nvidia sponsored title and a great deal of time has been invested into making sure RTX series GPUs deliver maximum performance, largely so ray tracing doesn’t result in a complete slideshow.

Interestingly, Strange Brigade performance has also been dramatically improved, and this title does make use of async compute. Although not a popular title, it is often used for benchmarking, so it doesn't surprise us that Nvidia has made an effort to optimize performance for Turing-based GPUs here.

The RTX 2060 was also 39% faster in Rainbow Six Siege, but that's not much different to the 33% win it enjoyed using the older DX11 version. Nvidia has also optimized Turing for Wolfenstein II: The New Colossus: originally the 2060 was 20% faster, but now, with upgraded drivers and multiple game patches, it's 33% faster. Red Dead Redemption 2 is another new game that massively favors the 2060.

Between new games that better utilize modern GPUs and Nvidia's focus on driver optimizations for Turing, the RTX 2060 has been able to further distance itself from the GTX 1070 and consequently the GTX 980 Ti. For those of you still rocking the GTX 980 Ti, overall we'd say it's holding up well and you're probably reasonably satisfied with the experience, but we suspect it'll start to fall away now, and upcoming generations should see it outpaced by GPUs costing around $200.

The GeForce GTX 1070 is still a solid buy but with Nvidia focusing its attention on Turing and future generations supporting ray tracing and DLSS, it'll be interesting to see how well Pascal ages over the next few years. No doubt, we’ll have plenty more benchmark content in the future that will monitor the situation.

Shopping Shortcuts:
  • GeForce RTX 2080 Ti on Amazon
  • GeForce RTX 2080 Super on Amazon
  • GeForce RTX 2070 Super on Amazon
  • GeForce RTX 2060 Super on Amazon
  • GeForce RTX 2060 on Amazon
  • AMD Ryzen 9 3900X on Amazon
  • AMD Ryzen 5 3600 on Amazon


Source: https://www.techspot.com/review/2005-geforce-gtx-980-ti-revisited/

GeForce 900 series

For GeForce cards with a model number of 9XX0, see GeForce 9 series.

The GeForce 900 series is a family of graphics processing units developed by Nvidia, succeeding the GeForce 700 series and serving as the high-end introduction to the Maxwell microarchitecture, named after James Clerk Maxwell. They are produced with TSMC's 28 nm process.

With Maxwell, the successor to Kepler, Nvidia expected three major outcomes: improved graphics capabilities, simplified programming, and better energy efficiency compared to the GeForce 700 series and GeForce 600 series.[6]

Maxwell was announced in September 2010,[7] with the first Maxwell-based GeForce consumer-class products released in early 2014.[8]

Architecture

Main article: Maxwell (microarchitecture)

First generation Maxwell (GM10x)

First generation Maxwell GM107/GM108 were released as GeForce GTX 745, GTX 750/750 Ti and GTX 850M/860M (GM107) and GT 830M/840M (GM108). These new chips provide few consumer-facing additional features; Nvidia instead focused on power efficiency. Nvidia increased the amount of L2 cache from 256 KiB on GK107 to 2 MiB on GM107, reducing the memory bandwidth needed. Accordingly, Nvidia cut the memory bus from 192 bit on GK106 to 128 bit on GM107, further saving power.[9]

Nvidia also changed the streaming multiprocessor design from that of Kepler (SMX), naming it SMM. The structure of the warp scheduler is inherited from Kepler, which allows each scheduler to issue up to two instructions that are independent from each other and are in order from the same warp. The layout of SMM units is partitioned so that each of the 4 warp schedulers in an SMM controls 1 set of 32 FP32 CUDA cores, 1 set of 8 load/store units, and 1 set of 8 special function units. This is in contrast to Kepler, where each SMX has 4 schedulers that schedule to a shared pool of 6 sets of 32 FP32 CUDA cores, 2 sets of 16 load/store units, and 2 sets of 16 special function units.[10] These units are connected by a crossbar that uses power to allow the resources to be shared.[10] This crossbar is removed in Maxwell.[10] Texture units and FP64 CUDA cores are still shared.[9]

SMM allows for a finer-grain allocation of resources than SMX, saving power when the workload isn't optimal for shared resources. Nvidia claims a 128 CUDA core SMM has 86% of the performance of a 192 CUDA core SMX.[9] Also, each Graphics Processing Cluster, or GPC, contains up to 4 SMX units in Kepler, and up to 5 SMM units in first generation Maxwell.[9]

GM107 supports CUDA Compute Capability 5.0 compared to 3.5 on GK110/GK208 GPUs and 3.0 on GK10x GPUs. Dynamic Parallelism and HyperQ, two features in GK110/GK208 GPUs, are also supported across the entire Maxwell product line.
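A quick way to see which of these capability levels a given card reports is CUDA's standard device query; a minimal sketch:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) return 1;  // device 0
    // A GM107 card reports 5.0, GM20x reports 5.2, GK110/GK208 3.5, GK10x 3.0.
    printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
    return 0;
}
```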

Maxwell provides native shared memory atomic operations for 32-bit integers and native shared memory 32-bit and 64-bit compare-and-swap (CAS), which can be used to implement other atomic functions.
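
As an illustration of building "other atomic functions" from CAS, here is a common pattern (an illustrative sketch, not from this article): a float atomic-max composed from 32-bit compare-and-swap, used on a shared-memory accumulator, which is exactly the kind of operation Maxwell's native shared-memory atomics accelerate:

```cuda
#include <cuda_runtime.h>

// Float atomic-max built from 32-bit compare-and-swap. On Maxwell the CAS on
// a shared-memory address executes natively; older chips emulated it with a
// lock/retry loop.
__device__ float atomicMaxFloat(float* addr, float value) {
    int* addrAsInt = reinterpret_cast<int*>(addr);
    int old = *addrAsInt;
    int assumed;
    do {
        assumed = old;
        if (__int_as_float(assumed) >= value) break;   // nothing to do
        old = atomicCAS(addrAsInt, assumed, __float_as_int(value));
    } while (old != assumed);                          // retry if another thread raced us
    return __int_as_float(old);
}

__global__ void blockMax(const float* in, float* out, int n) {
    __shared__ float m;                                // per-block accumulator
    if (threadIdx.x == 0) m = -3.4e38f;
    __syncthreads();
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicMaxFloat(&m, in[i]);              // shared-memory CAS path
    __syncthreads();
    if (threadIdx.x == 0) out[blockIdx.x] = m;
}
```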

While it was once thought that Maxwell used tile-based immediate mode rasterization,[11] Nvidia corrected this at GDC 2017 saying Maxwell instead uses Tile Caching.[12]

NVENC

Main article: Nvidia NVENC

Maxwell-based GPUs also contain the NVENC SIP block introduced with Kepler. Nvidia's video encoder, NVENC, is 1.5 to 2 times faster than on Kepler-based GPUs meaning it can encode video at 6 to 8 times playback speed.[9]

PureVideo

Main article: Nvidia PureVideo

Nvidia also claims an 8 to 10 times performance increase in PureVideo Feature Set E video decoding due to the video decoder cache paired with increases in memory efficiency. However, H.265 is not supported for full hardware decoding, relying on a mix of hardware and software decoding.[9] When decoding video, a new low power state "GC5" is used on Maxwell GPUs to conserve power.[9]

Second generation Maxwell (GM20x)

Second generation Maxwell introduced several new technologies: Dynamic Super Resolution,[13] Third Generation Delta Color Compression,[14] Multi-Pixel Programming Sampling,[15] Nvidia VXGI (Real-Time Voxel Global Illumination),[16] VR Direct,[17][18][19] Multi-Projection Acceleration,[14] and Multi-Frame Sampled Anti-Aliasing (MFAA)[20] (however, support for Coverage-Sampling Anti-Aliasing (CSAA) was removed).[21] HDMI 2.0 support was also added.[22][23]

Second generation Maxwell also changed the ROP to memory controller ratio from 8:1 to 16:1.[24] However, some of the ROPs are generally idle in the GTX 970 because there are not enough enabled SMMs to give them work to do, which reduces its maximum fill rate.[25]
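
To make that point concrete: the GTX 970 ships with 13 SMMs enabled (inferred from its 1664 CUDA cores at 128 per SMM; the count isn't stated in this article), and per the analysis cited above each SMM can hand the ROPs at most 4 pixels per clock, so

```latex
\min(\underbrace{56}_{\text{ROPs}},\ \underbrace{13 \times 4}_{\text{SMM output}}) = 52\ \text{pixels per clock}
```

leaving four of the 56 ROPs with nothing to do.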

Second generation Maxwell also has up to 4 SMM units per GPC, compared to 5 SMM units per GPC in first generation Maxwell.[24]

GM204 supports CUDA Compute Capability 5.2 compared to 5.0 on GM107/GM108 GPUs, 3.5 on GK110/GK208 GPUs and 3.0 on GK10x GPUs.[14][24][26]

Maxwell second generation GM20x GPUs have an upgraded NVENC which supports HEVC encoding and adds support for H.264 encoding resolutions at 1440p/60FPS & 4K/60FPS compared to NVENC on Maxwell first generation GM10x GPUs which only supported H.264 1080p/60FPS encoding.[19]

Maxwell GM206 GPU supports full fixed function HEVC hardware decoding.[27][28]

Advertising controversy

GTX 970 hardware specifications

Issues with the GeForce GTX 970's specifications were first brought up by users when they found out that the cards, while featuring 4 GB of memory, rarely accessed memory over the 3.5 GB boundary. Further testing and investigation eventually led to Nvidia issuing a statement that the card's initially announced specifications had been altered without notice before the card was made commercially available, and that the card took a performance hit once memory over the 3.5 GB limit was put into use.[29][30][31]

The card's back-end hardware specifications, initially announced as being identical to those of the GeForce GTX 980, differed in the amount of L2 cache (1.75 MB versus 2 MB in the GeForce GTX 980) and the number of ROPs (56 versus 64 in the 980). Additionally, it was revealed that the card was designed to access its memory as a 3.5 GB section plus a 0.5 GB one, access to the latter being 7 times slower than to the former.[32] The company then went on to promise a specific driver modification in order to alleviate the performance issues produced by the cutbacks suffered by the card.[33] However, Nvidia later clarified that the promise had been a miscommunication and there would be no specific driver update for the GTX 970.[34] Nvidia claimed that it would assist customers who wanted refunds in obtaining them.[35] On February 26, 2015, Nvidia CEO Jen-Hsun Huang went on record in Nvidia's official blog to apologize for the incident.[36] In February 2015 a class-action lawsuit alleging false advertising was filed against Nvidia and Gigabyte Technology in the U.S. District Court for Northern California.[37][38]

Nvidia revealed that it is able to disable individual units, each containing 256 KB of L2 cache and 8 ROPs, without disabling whole memory controllers.[39] This comes at the cost of dividing the memory bus into high speed and low speed segments that cannot be accessed at the same time, unless one segment is reading while the other is writing, because the L2/ROP unit managing both of the GDDR5 controllers shares the read return channel and the write data bus between the two GDDR5 controllers and itself.[39] This is used in the GeForce GTX 970, which can therefore be described as having 3.5 GB in its high speed segment on a 224-bit bus and 0.5 GB in a low speed segment on a 32-bit bus.[39]
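
The segmented layout is the sort of thing end users originally exposed with simple allocate-and-time tools (the community-made "Nai's benchmark" being the best known). A rough CUDA sketch of that idea, assuming nothing about those tools' internals: allocate VRAM in fixed chunks and time a streaming kernel over each; on a GTX 970 the chunks landing in the last ~0.5 GB report markedly lower bandwidth. Chunk size and kernel are illustrative, and the driver decides allocation order.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void touch(float* p, size_t n) {             // streaming read + write
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) p[i] = p[i] * 2.0f;
}

int main() {
    const size_t chunkBytes = 256u << 20;                // probe in 256 MiB chunks
    const size_t n = chunkBytes / sizeof(float);
    float* chunks[16];
    int count = 0;
    while (count < 16 && cudaMalloc(&chunks[count], chunkBytes) == cudaSuccess)
        ++count;                                         // grab as much VRAM as we can

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    for (int c = 0; c < count; ++c) {
        unsigned blocks = (unsigned)((n + 255) / 256);
        touch<<<blocks, 256>>>(chunks[c], n);            // warm-up pass
        cudaEventRecord(t0);
        touch<<<blocks, 256>>>(chunks[c], n);            // timed pass
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, t0, t1);
        // One read plus one write moves 2x chunkBytes through the memory bus.
        printf("chunk %2d: %6.1f GB/s\n", c, 2.0 * chunkBytes / ms / 1e6);
    }
    for (int c = 0; c < count; ++c) cudaFree(chunks[c]);
    return 0;
}
```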

On July 27, 2016, Nvidia agreed to a preliminary settlement of the U.S. class action lawsuit,[37] offering a $30 refund on GTX 970 purchases. The agreed upon refund represents the portion of the cost of the storage and performance capabilities the consumers assumed they were obtaining when they purchased the card.[40]

Async compute support

While the Maxwell series was marketed as fully DirectX 12 compliant,[2][41][42] Oxide Games, developer of Ashes of the Singularity, uncovered that Maxwell-based cards do not perform well when async compute is utilized.[43][44][45][41]

It appears that while this core feature is in fact exposed by the driver,[46] Nvidia partially implemented it through a driver-based shim, coming at a high performance cost.[45] Unlike AMD's competing GCN-based graphics cards which include a full implementation of hardware-based asynchronous compute,[47][48] Nvidia planned to rely on the driver to implement a software queue and a software distributor to forward asynchronous tasks to the hardware schedulers, capable of distributing the workload to the correct units.[49] Asynchronous compute on Maxwell therefore requires that both a game and the GPU driver be specifically coded for asynchronous compute on Maxwell in order to enable this capability.[50] The 3DMark Time Spy benchmark shows no noticeable performance difference between asynchronous compute being enabled or disabled.[50] Asynchronous compute is disabled by the driver for Maxwell.[50]

Oxide claims that this led to Nvidia pressuring them not to include the asynchronous compute feature in their benchmark at all, so that the 900 series would not be at a disadvantage against AMD's products which implement asynchronous compute in hardware.[44]

Maxwell requires that the GPU be statically partitioned for asynchronous compute to allow tasks to run concurrently.[51] Each partition is assigned to a hardware queue. If any of the queues that are assigned to a partition empty out or are unable to submit work for any reason (e.g. a task in the queue must be delayed until a hazard is resolved), the partition and all of the resources in that partition reserved for that queue will idle.[51] Asynchronous compute therefore could easily hurt performance on Maxwell if it is not coded to work with Maxwell's static scheduler.[51] Furthermore, graphics tasks saturate Nvidia GPUs much more easily than they do AMD's GCN-based GPUs, which are much more heavily weighted towards compute, so Nvidia GPUs have fewer scheduling holes that could be filled by asynchronous compute than AMD's.[51] For these reasons, the driver forces a Maxwell GPU to place all tasks into one queue and execute each task in serial, giving each task the undivided resources of the GPU whether or not it can saturate the GPU.[51]

Products

GeForce 900 (9xx) series

GeForce 900M (9xxM) series

Some implementations may use different specifications.

Model | Launch | Code name | Fab (nm) | Transistors (million) | Die size (mm²) | Bus interface | Core config[a] | Clock speeds: base core (MHz), boost core (MHz), memory (MT/s) | Fillrate: pixel (GP/s)[c], texture (GT/s)[d] | Memory: size (MiB), bandwidth (GB/s), type, bus width (bit) | API support (version): DirectX, OpenGL, OpenCL, Vulkan | Processing power (GFLOPS): single precision[f], double precision[g] | TDP (watts) | SLI support[b]
GeForce 910M[69][70][71]Aug 18, 2015 GF117 28 585 116 PCIe 3.0 x8 96:16:8 775 1550 1800 3.1 12.4 1024 14.4 DDR364 12.0 (11_0)[1][4]4.6 1.1 N/A 297.6 1/12 of SP 33 No
March 15, 2015 GK208 Un­known 87 384:16:8 575 575 5.13 9.2 2048 1.2 1.1 441.6 18.4
GeForce 920M[72][73][74]March 13, 2015 GF117 585 116 96:16:8 775 1550 3.1 12.4 1024 1.1 N/A 297.6 1/12 of SP
GK208 Un­known 87 384:32:16 954 954 7.6 30.5 2048 1.2 1.1 732.7 22.9
GeForce 920MX[75][76]March 2016 GM108 1870 148 256:24:8 1072 1176 8.58 25.7 2048 DDR3 GDDR5 549 1/32 of SP 16
GeForce 930M[77][78]March 13, 2015 384:24:8 928 941 7.4 22.3 2048 DDR3 712.7 22.3 33
GeForce 930MX[79][80]March 1, 2016 Un­known Un­known PCIe 3.0 x8 Un­known 952 1020 2000 Un­known Un­known 2048 Un­known DDR3 GDDR5 Un­known Un­known Un­known Un­known Un­known Un­known Un­known
GeForce 940M[81][82][83]March 13, 2015 GM107 1870 148 PCIe 3.0 x16 640:40:16 1029 1100 2002 16.5 41.2 2048 16 - 80.2 GDDR5 DDR3 128 1.2 1.1 1317 41.1 75 No
GM108 Un­known Un­known PCIe 3.0 x8 384:24:8 8.2 16.5 64 790.3 24.7 33
GeForce 940MX[84][85]March 10, 2016 1870 148 384:24:8 1122 1242 8.98 26.93 2048
4096
16.02 (DDR3)
40.1 (GDDR5)
861.7 Un­known 23
GeForce 945M[86][87][88]2015 GM107 ? 640:40:16 1029 1085 ? 16.46 41.2 ? ? DDR3 GDDR5 128 1,317.1 ? 75 ?
GM108 ? ? PCIe 3.0 x8 384:24:8 1122 1242 8.98 26.93 64 861.7 23
GeForce GT 945A[89][90]March 13, 2015 Un­known Un­known 384:24:8 1072 1176 1800 8.58 25.73 2048 14.4 DDR3 Un­known Un­known Un­known 33 Un­known
GeForce GTX 950M[91][92]March 13, 2015 GM107 1870 148 PCIe 3.0 x16 640:40:16 914 1085 5012 14.6 36.6 2048(GDDR5)
4096(DDR3)
80(GDDR5)
32(DDR3)
DDR3

GDDR5

128 1.2[93]1.1 1170 36.56 75 No
GeForce GTX 960M[94][95]640:40:16 1029 1085 16.5 41.2 2048
4096
80 GDDR5 1317 41.16 65
GeForce GTX 965M[96][97]January 5, 2015 GM204 5200 398 1024:64:32 924 950 5000 30.2 60.4 12.0 (12_1)[1][4]1945 60.78 60 [98]Yes
GeForce GTX 970M[99]October 7, 2014 1280:80:48 924 993 5012 37.0 73.9 3072
6144
120 192[100]2365 73.9 75
GeForce GTX 980M[101]1536:96:64 1038 1127 49.8 99.6 4096
8192
160 256[100]3189 99.6 100
GeForce GTX 980 (Notebook)[102]September 22, 2015 2048:128:64 1064 1216 7010 72.1 144 224 256 4612 144 145
  a. Shader processors : texture mapping units : render output units
  b. A maximum of 2 dual-GPU cards can be connected in tandem for a 4-way SLI configuration, as dual-GPU cards feature on-board 2-way SLI.
  c. Pixel fillrate is calculated as the lowest of three numbers: the number of ROPs multiplied by the base core clock speed, the number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.[25] (See the worked formulas below.)
  d. Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.
  e. Due to the disabling of one L2 cache/ROP unit without disabling all of the memory controllers attached to the disabled unit in the GTX 970, the memory in this GPU has been segmented. One segment must be reading while the other must be writing to achieve the peak speed. Since the peak speed is impossible to reach with pure reads or pure writes, these segments and their associated buses are split in this table.
  f. Single precision performance is calculated as 2 times the number of shaders multiplied by the base core clock speed.
  g. Double precision performance of the Maxwell chips is 1/32 of single-precision performance.[52][53]
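
In formula form, notes [c], [d] and [f] say:

```latex
\text{Pixel fill} = \min\big(N_{\text{ROP}},\; N_{\text{raster}}\,p_{\text{raster}},\; N_{\text{SM}}\,p_{\text{SM}}\big)\times f_{\text{base}},
\qquad \text{Texture fill} = N_{\text{TMU}} \times f_{\text{base}},
\qquad \text{FP32} = 2\,N_{\text{shader}} \times f_{\text{base}}
```

Checking against the GTX 980M row above (1536:96:64 at a 1038 MHz base clock): 2 × 1536 × 1.038 ≈ 3189 GFLOPS and 96 × 1.038 ≈ 99.6 GT/s match the table exactly, while the 49.8 GP/s pixel figure corresponds to the SM-side limit (12 SMMs × 4 pixels/clock = 48 pixels/clock, times 1.038 GHz) rather than the 64 ROPs.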

Chipset table

Main article: Comparison table of GeForce 900 series

Discontinued support

"Driver 368.81 is the last driver to support Windows XP/Windows XP 64-bit".

Nvidia announced that after Release 390 drivers, it will no longer release 32-bit drivers for 32-bit operating systems.[103]

Notebook GPUs based on the Kepler architecture moved to legacy support in April 2019 and stopped receiving critical security updates after April 2020.[104][105] The Nvidia GeForce 910M and 920M from the 9xxM GPU family are affected by this change.

Nvidia announced that after Release 470 drivers, it would transition driver support for the Windows 7 and Windows 8.1 operating systems to legacy status and continue to provide critical security updates for these operating systems through September 2024.[106]

References

  1. Ryan Smith. "Maxwell 2's New Features: Direct3D 11.3 & VXGI - The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2". anandtech.com.
  2. "Maxwell and DirectX 12 Delivered". The Official NVIDIA Blog.
  3. "MSDN Blogs". msdn.com. Microsoft.
  4. Ryan Smith. "Microsoft Details Direct3D 11.3 & 12 New Rendering Features". anandtech.com.
  5. "Vulkan Driver Support". Nvidia. February 10, 2016. Retrieved April 25, 2018.
  6. "Nvidia: Next-Generation Maxwell Architecture Will Break New Grounds - X-bit labs". xbitlabs.com. Archived from the original on June 29, 2013.
  7. Ryan Smith. "GTC 2010 Day 1: NVIDIA Announces Future GPU Families for 2011 And 2013". anandtech.com.
  8. "GeForce GTX 750 Class GPUs: Serious Gaming, Incredible Value". geforce.com.
  9. Smith, Ryan; T S, Ganesh (February 18, 2014). "The NVIDIA GeForce GTX 750 Ti and GTX 750 Review: Maxwell Makes Its Move". AnandTech. Archived from the original on February 18, 2014. Retrieved February 18, 2014.
  10. Ryan Smith, Ganesh T S. "Maxwell: Designed For Energy Efficiency - The NVIDIA GeForce GTX 750 Ti and GTX 750 Review: Maxwell Makes Its Move". anandtech.com.
  11. Kanter, David (August 1, 2016). "Tile-based Rasterization in Nvidia GPUs". Real World Technologies. Retrieved August 16, 2016.
  12. Triolet, Damien (March 3, 2017). "GDC: Nvidia talks about Tile Caching by Maxwell and Pascal". Hardware.fr. Retrieved May 24, 2017.
  13. "Dynamic Super Resolution Improves Your Games With 4K-Quality Graphics On HD Monitors". geforce.com.
  14. "Archived copy" (PDF). Archived from the original on July 21, 2017. Retrieved September 20, 2014.
  15. "NVIDIA - Maintenance". geforce.com.
  16. "Maxwell's Voxel Global Illumination Technology Introduces Gamers To The Next Generation Of Graphics". geforce.com.
  17. "NVIDIA Maxwell GPUs: The Best Graphics Cards For Virtual Reality Gaming". geforce.com.
  18. "How Maxwell's VR Direct Brings Virtual Reality Gaming Closer to Reality". The Official NVIDIA Blog.
  19. Ryan Smith. "Display Matters: HDMI 2.0, HEVC, & VR Direct - The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2". anandtech.com.
  20. "Multi-Frame Sampled Anti-Aliasing Delivers Better Performance To Maxwell Gamers". geforce.com.
  21. "New nVidia Maxwell chips do not support fast CSAA". realhardwarereviews.com.
  22. "Introducing The Amazing New GeForce GTX 980 & 970". geforce.com.
  23. Ryan Smith. "The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2". anandtech.com.
  24. Ryan Smith. "Maxwell 2 Architecture: Introducing GM204 - The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2". anandtech.com.
  25. "Here's another reason the GeForce GTX 970 is slower than the GTX 980". techreport.com. October 2014.
  26. "Maxwell: The Most Advanced CUDA GPU Ever Made". Parallel Forall. September 19, 2014.
  27. Ryan Smith. "NVIDIA Launches GeForce GTX 960". anandtech.com.
  28. Ryan Smith. "NVIDIA Launches GeForce GTX 950; GM206 The Lesser For $159". anandtech.com.
  29. "NVIDIA Discloses Full Memory Structure and Limitations of GTX 970". PCPer. Archived from the original on February 25, 2015. Retrieved January 28, 2015.
  30. "GeForce GTX 970 Memory Issue Fully Explained – Nvidia's Response". WCFTech. January 24, 2015.
  31. "Why Nvidia's GTX 970 slows down when using more than 3.5GB VRAM". PCGamer. January 26, 2015.
  32. "GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation". AnandTech.
  33. "NVIDIA Working on New Driver For GeForce GTX 970 To Tune Memory Allocation Problems and Improve Performance". WCFTech. January 28, 2015.
  34. "NVIDIA clarifies no driver update for GTX 970 specifically". PC World. January 29, 2015.
  35. "NVIDIA Plans Driver Update for GTX 970 Memory Issue, Help with Returns". pcper.com.
  36. "Nvidia CEO addresses GTX 970 controversy". PCGamer. February 26, 2015.
  37. Chalk, Andy (February 22, 2015). "Nvidia faces false advertising lawsuit over GTX 970 specs". PC Gamer. Retrieved March 27, 2015.
  38. Niccolai, James (February 20, 2015). "Nvidia hit with false advertising suit over GTX 970 performance". PC World. Retrieved March 27, 2015.
  39. Ryan Smith. "Diving Deeper: The Maxwell 2 Memory Crossbar & ROP Partitions - GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation". anandtech.com.
  40. "Nvidia settles class action lawsuit". Top Class Actions. July 27, 2016. Retrieved July 27, 2016.
  41. http://international.download.nvidia.com/geforce-com/international/images/nvidia-geforce-gtx-980-ti/nvidia-geforce-gtx-980-ti-directx-12-advanced-api-support.png
  42. "GeForce GTX 980 - Specifications - GeForce". geforce.com.
  43. "DX12 GPU and CPU Performance Tested: Ashes of the Singularity Benchmark". pcper.com. Archived from the original on April 15, 2016. Retrieved August 31, 2015.
  44. Hilbert Hagedoorn. "Nvidia Wanted Oxide dev DX12 benchmark to disable certain DX12 Features? (content updated)". Guru3D.com.
  45. "The Birth of a new API". Oxide Games. August 16, 2015.
  46. "[Various] Ashes of the Singularity DX12 Benchmarks". Overclock.net. August 17, 2015.
  47. "Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12". TechPowerUp.
  48. Hilbert Hagedoorn. "AMD Radeon R9 Fury X review". Guru3D.com.
  49. "[Various] Ashes of the Singularity DX12 Benchmarks". Overclock.net. August 17, 2015.
  50. Shrout, Ryan (July 14, 2016). "3DMark Time Spy: Looking at DX12 Asynchronous Compute Performance". PC Perspective. Archived from the original on July 15, 2016. Retrieved July 14, 2016.
  51. Smith, Ryan (July 20, 2016). "The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation". AnandTech. p. 9. Retrieved July 21, 2016.
  52. Smith, Ryan (September 18, 2014). "The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2". AnandTech. p. 1. Retrieved September 19, 2014.
  53. Ryan Smith. "The NVIDIA GeForce GTX Titan X Review". anandtech.com.
  54. "GeForce GTX 950 - Specifications - GeForce". geforce.com.
  55. "GeForce GTX 950 (OEM) - Specifications - GeForce". geforce.com.
  56. "NVIDIA GeForce GTX 950 OEM Specs".
  57. "GeForce GTX 960 - Specifications - GeForce". geforce.com.
  58. "GeForce GTX 960 (OEM) - Specifications - GeForce". geforce.com.
  59. "NVIDIA GeForce GTX 960 OEM Specs".
  60. "NVIDIA GeForce GTX 960 OEM Specs".
  61. "GeForce GTX 970 - Specifications - GeForce". geforce.com.
  62. Ryan Smith. "GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation". anandtech.com.
  63. "NVIDIA Responds to GTX 970 3.5GB Memory Issue". pcper.com.
  64. Ryan Smith. "Practical Performance Possibilities & Closing Thoughts - GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation". anandtech.com.
  65. "GeForce GTX 980 Ti - Specifications - GeForce". geforce.com.
  66. "GeForce GTX TITAN X - Specifications - GeForce". geforce.com.
  67. "NVIDIA TITAN X GPU Powers "Thief in the Shadows" VR Experience - NVIDIA Blog". The Official NVIDIA Blog.
  68. "NVIDIA GeForce GTX TITAN X". TechPowerUp.
  69. "GeForce 910M - Specifications - GeForce". geforce.com.
  70. "Archived copy". Archived from the original on June 24, 2017. Retrieved October 27, 2016.
  71. https://www.techpowerup.com/gpudb/2764/geforce-910m
  72. "GeForce 920M - Specifications - GeForce". geforce.com.
  73. "NVIDIA GeForce 920M". TechPowerUp. Archived from the original on March 4, 2016. Retrieved March 17, 2015.
  74. "NVIDIA GeForce 920M". TechPowerUp.
  75. "GeForce 920MX - Specifications - GeForce". geforce.com.
  76. https://www.techpowerup.com/gpudb/2826/geforce-920mx
  77. "GeForce 930M - Specifications - GeForce". geforce.com.
  78. "NVIDIA GeForce 930M". TechPowerUp.
  79. "GeForce 930MX - Specifications - GeForce". geforce.com.
  80. https://www.techpowerup.com/gpudb/2825/geforce-930mx
  81. "GeForce 940M - Specifications - GeForce". geforce.com.
  82. "NVIDIA GeForce 940M". TechPowerUp.
  83. "NVIDIA GeForce 940M". TechPowerUp.
  84. "GeForce 940MX - Specifications - GeForce". geforce.com.
  85. "NVIDIA GeForce 940MX". TechPowerUp GPU Database. Retrieved December 16, 2017.
  86. "GeForce 945M - Specifications - GeForce". geforce.com.
  87. https://www.techpowerup.com/gpudb/2773/geforce-945m
  88. https://www.techpowerup.com/gpudb/2836/geforce-945m
  89. "NVIDIA GeForce GT 945A (1GB GDDR5), user-selectable by application via NVIDIA Control Panel". HP Store. http://store.hp.com/us/en/ContentView?catalogId=10051&langId=-1&storeId=10151&eSpotName=Sprout-Pro#!
  90. https://www.techpowerup.com/gpudb/2813/geforce-945a
  91. "GeForce GTX 950M - Specifications - GeForce". geforce.com.
  92. "NVIDIA GeForce GTX 950M". TechPowerUp.
  93. "NVIDIA GeForce GTX 980". TechPowerUp.
  94. "GeForce GTX 960M - Specifications - GeForce". geforce.com.
  95. "NVIDIA GeForce GTX 960M". TechPowerUp.
  96. "GeForce GTX 965M - Specifications - GeForce". geforce.com.
  97. "NVIDIA GeForce GTX 965M". TechPowerUp.
  98. "Eurocom Configure Model". eurocom.com.
  99. "GeForce GTX 970M - Specifications - GeForce". geforce.com.
  100. Triolet, Damien (February 4, 2016). "GTX 970: 3.5 Go et 224-bit au lieu de 4 Go et 256-bit ?". Hardware.FR (in French). Retrieved May 27, 2016.
  101. "GeForce GTX 980M - Specifications - GeForce". geforce.com.
  102. "GeForce GTX 980 (Notebook) - Specifications - GeForce". geforce.com.
  103. "Support Plan for 32-bit and 64-bit Operating Systems". Nvidia.
  104. Eric Hamilton (March 9, 2019). "Nvidia to end support for mobile Kepler GPUs starting April 2019". Techspot.
  105. "List of Kepler series GeForce Notebook GPUs". Nvidia.
  106. "Support Plan for Windows 7 and Windows 8/8.1". Nvidia.

Source: https://en.wikipedia.org/wiki/GeForce_900_series

GeForce GTX 980 Ti GAMING 6G

Predator is a built-in screen and video capturing tool which captures your screen as still images or videos with the push of a button, allowing you to capture and record your coolest, goofiest and most awesome gaming moments on your PC!


The only thing better than gaming with one MSI GeForce® GTX 900 GAMING series graphics card is to have two or more running in SLI. The 2Way SLI Bridge L is the perfect link for your ultimate MSI GAMING SLI setup. It is equipped with a premium LED illuminated GAMING logo that can be synchronized to show the same effects as your MSI GeForce® GTX 900 GAMING graphics cards. Optimized for 4K+ resolutions and 144Hz+ refresh rates, the MSI GAMING SLI bridge is ready for next generation gaming!


Featuring a premium LED illuminated MSI GAMING Dragon to lighten the mood. This brand new function allows you to choose from 5 unique modes to set the right ambience for your gaming moments with just one click.


With every new generation of GPUs comes more performance. With every new generation of MSI Twin Frozr, we give you less noise and heat! We've listened to all your requests and the new Twin Frozr V is smaller, features stronger fans, generates less noise, keeps your graphics card and its components cooler and matches perfectly with your MSI GAMING motherboard including some funky LED lighting. We've spent 18 months on the development of the Twin Frozr V, including field testing in gaming cafés to ensure the cards have the quality and stability to give you the FPS you need!


Smart cooling, stay quiet.
MSI’s Twin Frozr V Thermal Designs are equipped with ZeroFrozr technology which was first introduced by MSI back in 2008. ZeroFrozr technology eliminates fan noise in low-load situations by stopping the fans when they are not needed. Compared to other graphics cards, there is no continuous hum of airflow to tell you there’s a powerful graphics card in your gaming rig. This means you can focus on gaming without the distraction of other sounds coming from your PC.
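
The idea behind fan-stop features like ZeroFrozr is a simple hysteresis loop: fans switch off below one temperature and back on above a higher one, so they never flutter around a single threshold. A hypothetical host-side sketch of such logic (thresholds and fan curve are invented for illustration, not MSI's actual firmware values):

```cuda
#include <cstdio>

// Hypothetical fan-stop controller: fans off below 60 C, back on above 70 C.
// The dead band between the two stops rapid on/off cycling near a threshold.
struct FanStopController {
    bool fanOn = true;
    int update(int gpuTempC) {                 // returns fan duty in percent
        if (fanOn && gpuTempC < 60) fanOn = false;
        else if (!fanOn && gpuTempC > 70) fanOn = true;
        if (!fanOn) return 0;
        int duty = 30 + (gpuTempC - 60) * 2;   // simple linear ramp above idle
        if (duty < 30) duty = 30;
        if (duty > 100) duty = 100;
        return duty;
    }
};

int main() {
    FanStopController ctl;
    int temps[] = {45, 55, 65, 72, 80, 66, 58};
    for (int t : temps)
        printf("%d C -> fan %d%%\n", t, ctl.update(t));
    return 0;
}
```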


The MSI Gaming App allows for one-click performance profiles that unlock extra performance for gaming or keep your card silent during light use. It also features the EyeRest tab, giving you easy access to image quality improving technology. The LED control tab gives you full control over your MSI GAMING LED lights to set the mood.

Download Now!

OC Mode
Maximum Performance through higher clock speeds and increased fan performance

Gaming Mode (Default)
The best balance between in-game performance and thermals

Silent Mode
The best environment for minimal fan noise

MSI GAMING APP
EYEREST TAB

Quickly adjust display settings to your visual needs. Expand the tab by clicking on the “eyecon” and select your preferred setting.

EYE REST MODE

Helps you sleep and rest better by reducing the blue light balance of your screen to let your body acclimatize to the night time.

GAMING MODE

More intense colors and increased contrast let you live the gaming life as the creators meant it.

MOVIE MODE

Dynamically adjusts Gamma and Contrast ratios for the clearest movie viewing experience


Monitor GPU core and DRAM frequencies, GPU & CPU temperatures and much more in real time, in game or in other fullscreen applications, customizable to your preference.


One of the deciding factors in performance is the quality of the components used. That is why MSI only uses MIL-STD-810G certified components for its Gaming cards because only these components have proven to be able to withstand the torturous circumstances of extreme gaming and overclocking.


HI-C CAP
MSI's Hi-c CAPs are tiny, super-efficient capacitors. Their small footprint allows the installation of heat sinks, and their high efficiency (93%) actually reduces the total thermal footprint of the card.


SUPER FERRITE CHOKES
Super Ferrite Chokes use a Ferrite core that is Super-Permeable. This allows the Super Ferrite Chokes to run at a 35 degree Celsius lower temperature, have a 30% higher current capacity, a 20% improvement in power efficiency and better overclocking power stability.


SOLID CAP
With their aluminum core design, Solid CAPs have been a staple of high-end mainboard designs, providing lower Equivalent Series Resistance (ESR) as well as an over-10-year lifespan.


MSI Afterburner is world’s favorite cross-vendor GPU overclocking tool. The easy interface gives access to the most detailed information about your graphics card and allows for tinkering with pretty much anything available on your graphics card.

Compatible with 64-bit apps, available in many languages including Russian, Spanish, Chinese and Korean, and completely customizable with many user-generated skins, everyone feels at home.

You can also run Afterburner on your iOS or Android smartphone and the built-in benchmarking utility Kombustor gives you insight in your Graphics Cards’ true performance.

Get MSI Afterburner Here


Traditional Fan Blade

Maximizes downwards airflow and air dispersion to the massive heat sink below them.

Dispersion Fan Blade

Takes in more airflow to maximize air dissipation to the heat sink.

Enhanced dissipation efficiency
MSI has fitted Twin Frozr coolers with the all new Airflow Control technology which guides more airflow directly onto the heat pipes by using special deflectors on the heat sink. In addition, this exclusive heat sink design increases heat sink surface area, greatly enhancing the dissipation efficiency.


Enhanced dissipation efficiency
SuperSU Architecture is the best cooling solution for graphics cards. The GPU is cooled by a massive nickel-plated copper base plate connected to Super Pipes (8mm heat pipes) on the MSI GAMING series graphics card. Additionally, the new heat pipe layout increases efficiency by reducing the length of unused heat pipe and a special SU-form design.


MSI GAMING Graphics Cards give you more performance out of the box. Whether you use the card pre-overclocked or use the Gaming App to use its full potential, you can just get right into the game and enjoy sublime performance. Overclocking through the MSI Gaming App is covered by warranty to take away your worries. Get in there and start winning!


Advanced Dispersion Blade design generates 19% more airflow without increasing drag for supreme silent performance.


MSI Dragon Eye allows you to watch a YouTube video or Twitch Stream while playing a game simultaneously. Simply add a link or stream to the Dragon Eye application and select the size, position, volume and transparency and start gaming. With a few hotkeys you can start/pause your video or set the volume.

WTFast is the Gamers Private Network; like a global automated army of IT specialists all working together to optimize your game connection from end to end. WTFast reports rich connection stats for your online game, so you can see exactly what is happening with your game connection.

  • Built just for MMO gamers
  • Reduce average ping
  • Greatly reduce connection flux, spikes and packet loss
  • MSI Exclusive 2-month premium license

GET 2-MONTH PREMIUM LICENSE FOR FREE

THE ULTIMATE PC GAMING PLATFORM

Get Game Ready with GeForce® GTX.

GeForce GTX graphics cards are the most advanced ever created. Discover unprecedented performance, power efficiency, and next-generation gaming experiences.


THE ULTIMATE GRAPHICS FOR VIRTUAL REALITY

NVIDIA's unique set of features ensures you get the right level of performance, image quality, and latency so that your VR experience is nothing short of amazing.


A BETTER GAMING EXPERIENCE

The easiest way to update your drivers, optimize your games, and share your victories


STUTTER, TEAR-FREE GAMEPLAY

Synchronizes the display refresh to your GeForce GTX GPU for fast, smooth gaming.

 


Source: https://us.msi.com/Graphics-card/GTX-980-Ti-GAMING-6G.html

Asus GeForce GTX 980 vs Nvidia Geforce GTX 1660 Super

54 facts in comparison

Why is Asus GeForce GTX 980 better than Nvidia Geforce GTX 1660 Super?

  • 3 MHz faster memory clock speed? 1753 MHz vs 1750 MHz
  • 64 bit wider memory bus width? 256 bit vs 192 bit
  • 640 more shading units? 2048 vs 1408
  • 40 more texture mapping units (TMUs)? 128 vs 88
  • 16 more render output units (ROPs)? 64 vs 48
  • 2 more DisplayPort outputs? 3 vs 1

Why is Nvidia Geforce GTX 1660 Super better than Asus GeForce GTX 980?

  • 403 MHz faster GPU clock speed? 1530 MHz vs 1127 MHz
  • 0.41 TFLOPS higher floating-point performance? 5.03 TFLOPS vs 4.616 TFLOPS
  • 13.58 GPixel/s higher pixel rate? 85.68 GPixel/s vs 72.1 GPixel/s
  • 40W lower TDP? 125W vs 165W
  • 13.1 GTexels/s higher texture rate? 157.1 GTexels/s vs 144 GTexels/s
  • 6990 MHz higher effective memory clock speed? 14000 MHz vs 7010 MHz
  • 112 GB/s more memory bandwidth? 336 GB/s vs 224 GB/s
  • 2GB more RAM memory? 6GB vs 4GB

General info

The thermal design power (TDP) is the maximum amount of power the cooling system needs to dissipate. A lower TDP typically means that it consumes less power.

A higher transistor count generally indicates a newer, more powerful processor.

Small semiconductors provide better performance and reduced power consumption. Chipsets with a higher number of transistors, semiconductor components of electronic devices, offer more computational power. A small form factor allows more transistors to fit on a chip, therefore increasing its performance.

Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard for connecting components, such as graphics cards and SSDs, to a motherboard. Newer versions can support more bandwidth and deliver better performance.

The graphics card contains two graphics processing units (GPUs). This generally results in better performance than a similar, single-GPU graphics card.


When covered under the manufacturer’s warranty it is possible to get a replacement in the case of a malfunction.

The graphics card uses a combination of water and air to reduce the temperature of the card. This allows it to be overclocked more, increasing performance.

The width represents the horizontal dimension of the product. We consider a smaller width better because it assures easy maneuverability.


The height represents the vertical dimension of the product. We consider a smaller height better because it assures easy maneuverability.

Performance

The graphics processing unit (GPU) has a higher clock speed.

The number of pixels that can be rendered to the screen every second.

The memory clock speed is one aspect that determines the memory bandwidth.

The number of textured pixels that can be rendered to the screen every second.

Shading units (or stream processors) are small processors within the graphics card that are responsible for processing different aspects of the image.

TMUs take textures and map them to the geometry of a 3D scene. More TMUs will typically mean that texture information is processed faster.

When the GPU is running below its limitations, it can boost to a higher clock speed in order to give increased performance.

The ROPs are responsible for some of the final steps of the rendering process, writing the final pixel data to memory and carrying out other tasks such as anti-aliasing to improve the look of graphics.
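
These definitions are easy to verify against the numbers at the top of this comparison. With the GTX 980's listed 1127 MHz clock, and assuming the GTX 1660 Super's typical 1785 MHz boost clock (only its 1530 MHz base clock is listed above), ROPs × clock gives the pixel rate, TMUs × clock the texture rate, and 2 × shading units × clock the floating-point figure:

```latex
\text{GTX 980:}\ 64 \times 1.127 = 72.1\ \text{GP/s},\quad 128 \times 1.127 \approx 144\ \text{GT/s},\quad 2 \times 2048 \times 1.127 \approx 4616\ \text{GFLOPS}
\text{GTX 1660 Super:}\ 48 \times 1.785 \approx 85.7\ \text{GP/s},\quad 88 \times 1.785 \approx 157.1\ \text{GT/s},\quad 2 \times 1408 \times 1.785 \approx 5026\ \text{GFLOPS}
```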

Memory

The effective memory clock speed is calculated from the size and data rate of the memory. Higher clock speeds can give increased performance in games and other apps.

Random-access memory (RAM) is a form of volatile memory used to store working data and machine code currently in use. It is a quick-access, temporary virtual storage that can be read and changed in any order, thus enabling fast data processing.

A wider bus width means that it can carry more data per cycle. It is an important factor of memory performance, and therefore the general performance of the graphics card.
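
Combining the effective memory clock with the bus width in bytes reproduces the bandwidth figures quoted at the top of this page:

```latex
\text{GTX 980:}\ \tfrac{256}{8}\ \text{bytes} \times 7010\ \text{MT/s} \approx 224\ \text{GB/s}
\qquad \text{GTX 1660 Super:}\ \tfrac{192}{8}\ \text{bytes} \times 14000\ \text{MT/s} = 336\ \text{GB/s}
```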

Newer versions of GDDR memory offer improvements such as higher transfer rates that give increased performance.

Error-correcting code memory can detect and correct data corruption. It is used when is it essential to avoid corruption, such as scientific computing or when running a server.

Ports

Devices with a HDMI or mini HDMI port can transfer high definition video and audio to a display.


Newer versions of HDMI support higher bandwidth, which allows for higher resolutions and frame rates.


More HDMI ports mean that you can simultaneously connect numerous devices, such as video game consoles and set-top boxes.

Features

DirectX is used in games, with newer versions supporting better graphics.

OpenGL is used in games, with newer versions supporting better graphics.

Some apps use OpenCL to apply the power of the graphics processing unit (GPU) for non-graphical computing. Newer versions introduce more functionality and better performance.

The graphics card supports multi-display technology. This allows you to configure multiple monitors in order to create a more immersive gaming experience, such as having a wider field of view.

A lower load temperature means that the card produces less heat and its cooling system performs better.

Ray tracing is an advanced light rendering technique that provides more realistic lighting, shadows, and reflections in games.


This benchmark measures the graphics performance of a video card. Source: PassMark.

This benchmark is designed to measure graphics performance. Source: AnandTech.


OpenGL ES is used for games on mobile devices such as smartphones. Newer versions support better graphics.

Source: https://versus.com/en/asus-geforce-gtx-980-vs-nvidia-geforce-gtx-1660-super
