GeForce RTX 4090 owner reports melting power connectors

Flanders

Honorary Master
Joined
Nov 20, 2003
Messages
14,726
What's the minimum spec power supply required?

Need to know this as well as upgrade my hamster wheel farm and win the lotto.
 

Fulcrum29

Honorary Master
Joined
Jun 25, 2010
Messages
55,064
Assuming both these cases are genuine, does anyone know which power supplies were used, and whether the respective rail(s) were overloaded?

Nvidia did say that they reached out to one of the users, so I'm keen to see how they respond.
 
  • Like
Reactions: Yuu

WAslayer

Executive Member
Joined
May 13, 2011
Messages
8,938
It's probably that stupid new connector from Nvidia and an element of PEBKAC. For the amount of power this thing draws, if any pins aren't making proper contact, you'll get arcing, which is going to generate enough heat to melt things like this.
 
  • Like
Reactions: Yuu

wizardofid

Executive Member
Joined
Jul 25, 2007
Messages
9,381
It's probably that stupid new connector from Nvidia and an element of PEBKAC. For the amount of power this thing draws, if any pins aren't making proper contact, you'll get arcing, which is going to generate enough heat to melt things like this.
That is exactly what some hardware reviewers are alluding to: improperly seated cables. The cables themselves are also crappy, though, and it wouldn't surprise me if the new ATX spec gets a revision soon.
What's the minimum spec power supply required?

Need to know this as well as upgrade my hamster wheel farm and win the lotto.
Linus managed to run it at default speeds on a 650 watt power supply. But he mentioned it was a brand new power supply, and power supplies derate over time. That 650 also leaves zero overhead: no extra HDDs, RGB, water pumps, nothing. And the tested power supplies were all high-spec units; your average entry or mid-range PSU wouldn't be able to do it.

The average requirements range from 850 to 1200 watts depending on the card, and most manufacturers recommend at least a 1kW unit. But if you can afford a 4090, a PSU is the least of your budgetary concerns, as that GPU will bottleneck all but the best hardware anyway. Don't expect your 4-year-old 1kW PSU to cut it: average derate is between 3-6% for top-quality PSUs and as much as 10% or more for entry and mid-range PSUs.

It is best to buy a new PSU if you're getting a top-of-the-line card, especially if yours is older than 4 years.
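The derating argument above can be put into numbers. A minimal sketch, using the derate percentages quoted in this post (3-6% for quality units, ~10%+ for budget ones) rather than any official figures:

```python
# Rough sketch: how much usable capacity an ageing PSU has left.
# Derate fractions are the figures quoted in the post, not official specs.

def derated_capacity(rated_watts: float, derate_fraction: float) -> float:
    """Usable capacity after losing a fraction of the rating to ageing."""
    return rated_watts * (1.0 - derate_fraction)

# A 4-year-old "1kW" unit at the quoted derate figures:
quality = derated_capacity(1000, 0.06)  # top-quality PSU, 6% total derate
budget = derated_capacity(1000, 0.10)   # entry/mid-range PSU, 10% total derate

print(f"quality PSU effective capacity: {quality:.0f} W")  # 940 W
print(f"budget PSU effective capacity:  {budget:.0f} W")   # 900 W
```

So a budget "1kW" unit can already be down near the 850W lower bound of the recommended range after a few years, which is the point being made.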
 
Last edited:

wizardofid

Executive Member
Joined
Jul 25, 2007
Messages
9,381
It's probably that stupid new connector from Nvidia and an element of PEBKAC. For the amount of power this thing draws, if any pins aren't making proper contact, you'll get arcing, which is going to generate enough heat to melt things like this.

Nvidia didn't create the cable, they actually complained that the cable isn't good enough and showed lab tests where the cables melted and such.
 
  • Like
Reactions: Yuu

Flanders

Honorary Master
Joined
Nov 20, 2003
Messages
14,726
That is exactly what some hardware reviewers are alluding to: improperly seated cables. The cables themselves are also crappy, though, and it wouldn't surprise me if the new ATX spec gets a revision soon.

Linus managed to run it at default speeds on a 650 watt power supply. But he mentioned it was a brand new power supply, and power supplies derate over time. That 650 also leaves zero overhead: no extra HDDs, RGB, water pumps, nothing. And the tested power supplies were all high-spec units; your average entry or mid-range PSU wouldn't be able to do it.

The average requirements range from 850 to 1200 watts depending on the card, and most manufacturers recommend at least a 1kW unit. But if you can afford a 4090, a PSU is the least of your budgetary concerns, as that GPU will bottleneck all but the best hardware anyway. Don't expect your 4-year-old 1kW PSU to cut it: average derate is between 3-6% for top-quality PSUs and as much as 10% or more for entry and mid-range PSUs.

It is best to buy a new PSU if you're getting a top-of-the-line card, especially if yours is older than 4 years.

Comprehensive reply :thumbsup:

Would be a long time before even considering one of these for me. An entire system overhaul would need to happen for me to even consider a 4XXX series card. I'd imagine 6+ to be the current at the time.
 

wizardofid

Executive Member
Joined
Jul 25, 2007
Messages
9,381
Comprehensive reply :thumbsup:

Would be a long time before even considering one of these for me. An entire system overhaul would need to happen for me to even consider a 4XXX series card. I'd imagine 6+ to be the current at the time.
With ray tracing being new and all, you do need a beefier machine, but as the technology improves, the requirements for running games at ultra will come down. It's pretty hard for a mid-range card to run something at max settings now. It used to be able to in standard games, but ray tracing changed that quite a bit. I'm personally in the camp where ray tracing just isn't worth it at the moment; standard shading is more than good enough for the average Joe. Faking it with ambient occlusion has gotten pretty good, and real-time lighting has only improved over the last few years.

That is not to say older games don't benefit from a ray tracing graphics card and ReShade, for example, given they predate the ambient occlusion and physically based rendering shaders we have now.
 

Fulcrum29

Honorary Master
Joined
Jun 25, 2010
Messages
55,064
(image attachment)
 

Fulcrum29

Honorary Master
Joined
Jun 25, 2010
Messages
55,064
Two owners?? How is this even news?

Because less than a month ago,


Melting 12VHPWR Cables

PCI-SIG, the group that governs PCIe standards, has issued an email to all of its members and their suppliers regarding melting 12VHPWR cables. That email was technically private, but so many groups were on the email that it inevitably got passed around and ended up in our hands too.

As a reminder, 12VHPWR is the official name for the new PCIe 5.0 12+4-pin cable used on some current and upcoming high-power graphics cards and is capable of handling 600W sustained on a single cable.

You may have already seen this story covered on some rumor sites citing that the problem is with adapters converting to 8-pin PCIe connectors, but that’s incorrect and may have been based on incomplete information. Here’s what the email had to say:

“Please be advised that PCI-SIG has become aware that some implementations of the 12VHPWR connectors and assemblies have demonstrated thermal variance, which could result in safety issues under certain conditions. Although PCI-SIG specifications provide necessary information for interoperability, they do not attempt to encompass all aspects of proper design, relying on numerous industry best-known methods and standard design practices. As the PCI-SIG workgroups include many knowledgeable experts in the field of connector and system design, they will be looking at the information available about this industry issue and assisting in any resolution to whatever extent is appropriate.”

“As more details emerge, PCI-SIG may provide further updates. In the meantime, we recommend members work closely with their connector vendors and exercise due diligence in using high-power connections, particularly where safety concerns may exist.”

The PDF explains that Nvidia has been testing 12VHPWR connectors to validate that prototype power supplies and cables can meet the specification of 55A continuous. During this testing, Nvidia found certain cable routing conditions led to excess heat and, in some cases, melting.

The conditions required for the excess heat were either subjecting the cables to severe bending or a high number of mating cycles (about 40). Cables tested in these scenarios exhibited hot spots at roughly 2 and a half hours, and melting at 10 to 30 hours. Connectors from multiple suppliers have failed.

This is with a continuous 55A of current (or 660W at 12V), which would not be a typical load condition, especially not in gaming. Nvidia did not observe any failures on connectors with low mating cycles and without any bend.

Photos provided in the PDF show some pretty gruesome melting, and it’s not in the same area each time. The failures occurred on different pins depending on the direction the cable was bent. This could be seriously dangerous.

The document includes per-pin measurements taken during the testing. As the cable was bent in various directions, severe current imbalance resulted from huge swings in resistance. We'll use the last set of data as an example, as it's the most severe. The resistance in pins 3 and 4 measured high, especially pin 3, resulting in a measured 36.4A on a single pin, or 436W on a single pin, leading to a hotspot temperature of 180 degrees Celsius. By the way, the current rating of stranded 16AWG is only 5A to 7A, so we're talking about 5 to 7 times the rated current.

The PDF goes on to hypothesize that the bending and side-loading cause the plug to improperly seat in the receptacle, perhaps deforming it. The testing conditions might seem extreme, with the cable being bent around at full load for hours on end, but this kind of thing is done to ensure a margin of safety as products get used and age in various circumstances.

Group members are encouraged to do independent testing and share the results with Nvidia, who also has volunteered to work with the manufacturers of the connectors to fix this issue. Nvidia and the PCI-SIG are trying to get ahead of a potential problem before it is allowed to become widespread.

Our opinion is that while the situation is serious and should be taken seriously, this isn’t likely to be a problem that you’ll encounter in your gaming PCs. The test conditions are intentionally extreme in a way that you likely won’t have in your own system, especially if you take care not to put too much strain on any of your cables and connectors. It’s never a good practice to shove, bend, or cram any cables.

Source: GamersNexus

This is why I said that these claims need to be validated. The internet is known to stir up trouble where there isn't any. That's not to delegitimize the concerns: two owners could well have been affected, and within a short ownership period.
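The worst-case per-pin figures in the quoted article are easy to sanity-check. A quick sketch using only the numbers given there (36.4A on one pin, a 12V rail, and a 5-7A rating for stranded 16AWG):

```python
# Sanity check of the per-pin figures from the quoted PCI-SIG/Nvidia data.
PIN_CURRENT_A = 36.4          # worst measured current on pin 3
RAIL_VOLTAGE_V = 12.0         # 12V rail
AWG16_RATING_A = (5.0, 7.0)   # typical stranded 16AWG current rating range

# Power dissipated through a single pin at that current:
power_w = PIN_CURRENT_A * RAIL_VOLTAGE_V

# How far over the wire's rating that current is:
overload_low = PIN_CURRENT_A / AWG16_RATING_A[1]
overload_high = PIN_CURRENT_A / AWG16_RATING_A[0]

print(f"power through one pin: {power_w:.0f} W")   # ~437 W (article rounds to 436)
print(f"overload factor: {overload_low:.1f}x to {overload_high:.1f}x")
```

The output matches the article's "about 5 to 7 times the rated current" claim, so the numbers are at least internally consistent.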
 

Markd

Expert Member
Joined
Oct 8, 2009
Messages
1,677
Well, one thing is for sure: there's no way Nvidia can progress to the 5xxx gen without sorting out some key issues. Besides the melting issue, the card is also mammoth in size, so some people would need a new case and a new PSU for the privilege. Never mind the cost.
 

wizardofid

Executive Member
Joined
Jul 25, 2007
Messages
9,381
Two owners?? How is this even news?
Thicc boy, aren't you. When AMD straight up states it won't be using the connector on RDNA 3, it's pretty clear there are major problems. More people have come forward showing melted cables; one user reported it within 3 days of getting his GPU. Cable modders are now supplying thicker-gauge cables, 90-degree connectors, and metal supports to stop the cable bending and putting too much strain on the connector. JayzTwoCents showed last night that the connector reaches a toasty 50 degrees.
 

Voicy

Honorary Master
Joined
Sep 19, 2007
Messages
11,565
Well one thing is for sure, there's no way Nvidia can progress to the 5xxx gen without sorting out some key issues. Other than melting issues the card is also mammoth in size so some people would need a new case and a new PSU for the privilege. Nevermind the cost.
They're going to have to rework some of these next gen systems soon.

The simplest fix I can see is changing the entire system architecture from 12V to 24V to bring down the current usage.
 

Sinbad

Honorary Master
Joined
Jun 5, 2006
Messages
81,152
They're going to have to rework some of these next gen systems soon.

The simplest fix I can see is changing the entire system architecture from 12V to 24V to bring down the current usage.
Agreed. 600W at 12V is 50 amps. If you're wiring a house, you need 10mm² cable to carry that, which is about a 3.6mm diameter core.
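The arithmetic behind the 24V suggestion above, sketched out (the 10mm² figure is for solid household copper, as in the post; automotive-style bundled conductors are sized differently):

```python
import math

# Current drawn for a given power at a given rail voltage (I = P / V):
def current_amps(power_w: float, voltage_v: float) -> float:
    return power_w / voltage_v

# Diameter of a round conductor with a given cross-sectional area:
def core_diameter_mm(area_mm2: float) -> float:
    return 2.0 * math.sqrt(area_mm2 / math.pi)

print(current_amps(600, 12))              # 50.0 A at 12 V
print(current_amps(600, 24))              # 25.0 A at 24 V: half the current
print(round(core_diameter_mm(10.0), 1))   # ~3.6 mm core for 10 mm^2 cable
```

Doubling the rail voltage halves the current for the same power, which is exactly why a 24V architecture would ease the connector problem.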
 