Here is my process for new cards: pick a price point, head over to Video Card Benchmark, and scroll down until you find the first (i.e. fastest) video card that meets that price point. Also double-check prices at PC Part Picker. For used cards, the chart is still pretty useful; it's just a bit more manual (and money-saving!) to get used prices from eBay/Mercari.
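That process is simple enough to sketch in a few lines. This is just an illustration: the card names, scores, and prices below are made up, and the real numbers would come from the Video Card Benchmark chart and PC Part Picker/eBay listings.

```python
# Hypothetical benchmark scores and prices -- plug in real data
# from videocardbenchmark.net and PC Part Picker yourself.
cards = [
    {"name": "RTX 4060", "score": 19500, "price": 300},
    {"name": "RX 7600",  "score": 17000, "price": 260},
    {"name": "Arc A750", "score": 14500, "price": 200},
    {"name": "RX 6600",  "score": 13000, "price": 190},
]

def best_under(budget, cards):
    """Return the highest-scoring card at or below the budget, or None."""
    affordable = [c for c in cards if c["price"] <= budget]
    return max(affordable, key=lambda c: c["score"]) if affordable else None

print(best_under(250, cards))  # with this toy data: the Arc A750 entry
```

The "first card on the chart under your price" rule and the `max` by score here are the same thing, since the chart is sorted by score.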
I personally have an AMD bias though, since they have pulled way less shit than nVidia. But, that is your decision to make.
I must be in the minority, but I've been digging Intel's Arc GPUs. For their price point, and given that I don't play bleeding-edge AAA games, they've actually done pretty well. Additionally, I'm tired of Nvidia's price gouging (and AMD following suit), and I want to support a disruptive third party. Their driver support gets better with every release, and I can't wait to see their next generation of cards.
They are good, they are cheap, and they're targeting the midrange to low-end hardware segment which is not covered by any other manufacturer.
I have a 3090 in my desktop but I have an Arc card on my server for Moonlight/Sunshine streaming, as well as Plex transcoding. It's the cheapest card to have AV1 encoding built in.
I also keep seeing them increase performance significantly with every driver update, which is pretty cool.
I'm interested in your use of the Arc card for media transcoding. Which one did you get, and how would you say it compares to a GTX 960? The one in my server died, I stuck a spare 2060 in there a while back, and I'm looking to downgrade to something sensible.
Most of my media is 1080p x264 with some 4k HEVC (and growing) if that helps.
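For a mixed library like that, the question is really which files the client can direct-play and which ones hit the GPU's transcoder. Here's a toy sketch of that decision (this is not Plex's actual logic; the codec names and client capabilities are assumptions for illustration):

```python
# Illustrative direct-play vs transcode decision: a media server
# transcodes only when the client can't decode the file's codec.
def needs_transcode(file_codec, client_codecs):
    """True if the client lacks a decoder for this codec."""
    return file_codec not in client_codecs

# Hypothetical library matching the comment above: mostly 1080p x264,
# some 4K HEVC.
library = [("movie_1080p.mkv", "h264"), ("movie_4k.mkv", "hevc")]
old_tv = {"h264"}  # e.g. an older client with no HEVC decoder

for name, codec in library:
    action = "transcode" if needs_transcode(codec, old_tv) else "direct play"
    print(f"{name}: {action}")
```

The practical upshot: the x264 bulk of the library mostly direct-plays, so what matters for the card is HEVC decode plus H.264/AV1 encode throughput for the 4K remuxes.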
Too many edge case issues, especially for someone who plays a lot of indie titles and uses Linux. Also, they kinda just went into the low performance market. If they'd launch something for the upper midrange I'd be more interested (assuming they improved on a lot of fronts of course).
The new dedicated cards are actually very good. They sell them at a competitive price because they are not powerhouses, but they get the job done. If you're targeting 1080p at your top end, it's almost a no-brainer to go with an Arc card. If you're pushing a higher resolution, it's probably better to go with another manufacturer, unless you're fine with higher resolutions and lower framerates.
Supposedly Nvidia has become a lot better on Linux lately. They finally dropped EGLStreams in favor of the standard GBM buffer API (the disagreement behind the horrible Wayland compatibility, and part of what prompted a heated Linus Torvalds moment), and they've also open-sourced their Linux kernel modules.
They do support their driver, yes, but it will never be as good as long as it's proprietary. The open Nvidia kernel module isn't ready yet and is still backed by proprietary userspace blobs.
Historically speaking, Nvidia was always the best for Linux. Nvidia's success with Linux traces back to around 2004, with state-of-the-art 3D capabilities (albeit for arcade machines). At that time, ATI Radeon 3D capabilities on Linux were well below par.
The problem with Linux + Nvidia is that it was never "the Linux way"... it was always "the Nvidia way".
The Linux way is flexibility: it means you can use whatever kind of Linux you want and the drivers work straight out of the box (which basically requires open source drivers). Nvidia, instead, always pushed a fixed binary blob that required a specific kernel and a rigid environment.
The modern Linux support from AMD is mostly "the Linux way", which is why the Linux community loves AMD more than Nvidia.
Given hardware parity between Nvidia and AMD, the Linux crowd will always prefer AMD, because AMD means you can use any kind of Linux distro and still have an uncompromised gaming experience.
I've used a 3090 on Ubuntu and Arch without any issues for things like 3D rendering (Blender, Daz) and most of the Steam games I played. I was also able to run most AI models and tools.
AMD? Well, it works okay for games, I guess, but it's a huge pile of shit for everything else. Linux zealots who pretend to care about "proprietary software!!!" on one hand and then talk about Proton/gaming performance on the other are nothing but hypocrites.
They’re all pretty good. Even the Intel cards are pretty good now. I guess it comes down to what’s most important to you. If you want maximum compatibility with games, go for Nvidia. If you want better price-to-performance, go with AMD or Intel. Although, if I were you, I’d wait for AMD’s and Intel’s next gen. Both are coming (relatively) soon, probably before the end of the year, and will likely be a lot better than what’s out now.
One caveat, if you use or plan to use Linux, Nvidia can present some difficulties, so avoid them.
Actually two caveats, if you plan to use hardware encoding, like you’ll be streaming on Twitch while you play games, avoid AMD. Their hardware encoding is pretty trash. Both Nvidia and Intel are much better.
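If you do end up doing hardware encoding, the vendor mostly determines which encoder backend your streaming software uses. The sketch below maps vendor to the corresponding ffmpeg H.264 encoder name; the encoder identifiers (`h264_nvenc`, `h264_qsv`, `h264_amf`) are real ffmpeg names, but whether each is available depends on your ffmpeg build and OS (AMF is the Windows AMD path; on Linux, AMD typically goes through VAAPI instead):

```python
# Vendor -> ffmpeg hardware H.264 encoder name.
# NVENC = Nvidia, QSV (Quick Sync) = Intel, AMF = AMD (Windows).
ENCODERS = {
    "nvidia": "h264_nvenc",
    "intel":  "h264_qsv",
    "amd":    "h264_amf",
}

def hw_encoder(vendor, fallback="libx264"):
    """Return the vendor's hardware H.264 encoder, else a software fallback."""
    return ENCODERS.get(vendor.lower(), fallback)

print(hw_encoder("Intel"))  # h264_qsv
```

The quality gap the comment describes is between these backends at a given bitrate, not in the API: OBS exposes all three the same way.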
My current lineup (I know I have a lot of machines, but my wife and I both play games, and I do AI workloads as well):
RTX 3090 (mostly for AI)
Radeon RX 6700 XT (great card)
Arc A380 (for transcoding, but I’ve gamed on it, and it’s great)
Radeon RX 6600 (my main card, just because it’s in my living room HTPC, running ChimeraOS)
The amount of self-hosted AI integrations is only going to grow as well. I have a 3090 in a closet PC and I use it for everything from image generation to VSCode/Neovim code completion and code chat. One of the things I'd really like to see in the next few years is a wide variety of local AI driven self hosted Alexa replacements.
On the hardware encoding side, that used to be true before OBS introduced better AMD encoder support. I have a 6800 XT and it works just fine for streaming casually, though I agree that if you stream professionally, Nvidia is the better option.
I've been wondering what would be the smartest choice to upgrade my 1660 Super. CPU is a Ryzen 5 3200 and I've got 16GB of RAM. Dunno if just upgrading the GPU would make a huge difference.
I bought an RTX 4070 Ti Super recently because of its superior ray tracing and AI. If you don't care about those things, just go by price/performance. Tom's Hardware has a benchmark of all cards on the same chart.
On Windows, Nvidia without thinking twice.
On Linux, it depends on RDNA 4 and the next release of Nvidia's drivers, but probably still Nvidia.
Unfortunately, despite how much I would rather buy from someone else, AMD's products are just inferior, especially software.
Examples of AMD being worse:
AMD's implementation of OpenGL is a joke; the open source implementation used on Linux is several times faster and was made for free by volunteers without internal knowledge
AMD will never run PhysX, which gets less relevant every day, but if the AMD of the past had proposed an alternative, we would have a standardized physics extension in DirectX by now, like with DLSS
AMD's ray accelerators are "incomplete" compared to Nvidia's RT cores, which is why ray tracing is better on Nvidia, and why they are changing how they work with RDNA 4
GCN was terrible and very different from Nvidia's architecture; it was hard to optimize for both. RDNA is more similar, but now AMD has a plethora of old junk to keep compatible with RDNA
Nvidia has been constantly investing in new software technologies (nowadays it's mainly AI); AMD didn't, and now it's always playing catch-up
AMD also has its wins, for example:
They often make their stuff open source, mainly because it's convenient for their underdog position
They have a pretty good software stack on Linux (much better than on Windows), partly because it's not entirely done by them
Nvidia has been a bad-faith actor in the Linux space for many years, even if it's now in its redemption arc
Their modern GPUs seem to be catching up in compute performance
AMD is less greedy with VRAM, mainly because they are less at risk of competing with their own enterprise lineup
Nvidia's current prices are stupid
I would still prefer Nvidia right now, but maybe it's gonna change with the next releases.
P.s. I have used a GTX 1060, an RX 480, and a Vega 56
> but if AMD from the past had proposed an alternative we would have a standardized physics extension in DirectX by now, like with dlss
Why the fuck put this on AMD when it was Nvidia who did their usual proprietary bullshit? "AMD is worse than Nvidia because they didn't provide us with a better alternative!" ???
The OpenGL UMD (user-mode driver) was completely re-engineered; this premiered with the 22.7.1 release, nearly two years ago. AMD now has the most performant, highest-quality OpenGL UMD in the industry, which is particularly relevant for workstation use cases (where OpenGL remains the backbone of WS graphics).
PhysX is proprietary; I don't know what can be done about that, but your point is valid here. Though given the rise of other physics engines, I don't really know if this is a big hit. Do we really want further consolidation in game systems?
AMD's approach to ray acceleration has always favoured die-area efficiency up until now, though I can totally understand your disappointment with the performance in that area. That said, the moment I'll really care about RTRT in gaming is when it's no longer contingent on the raster model. Reflections, shadows, and GI are nice and all, but we're still not really there yet.
I don't know how GCN was such a terrible arch, since it was the basis of an entire console generation. An argument could be made that its GPGPU-oriented design may have hindered it for gaming on desktops, but it matured extremely well over time with driver upgrades, despite its price and performance targets at release. Aside from that (and related to point 1), RDNA UMDs are all PAL-based. I'm not sure what you're alluding to with this? Could you please elaborate?
Your final remark is untrue (FMF, AL+, gfx feature interop, mic ANS, a plethora of GPUOpen technologies), but I will forgive you for not keeping up with a vendor's tech if you don't actively use their products.
I'm literally using a full AMD PC right now. I dislike Nvidia as much as the next person; I think they use terrible monopolistic practices, and if the competition were on par I would not buy Nvidia. But it isn't.