GeForce RTX 4090 owner reports melting power connectors

Itsa Trap

Expert Member
Joined
Jan 22, 2020
Messages
4,110
I mean, when the GPU looks like it should use this kind of connection:

4090.png



and the suggested minimum PSU specs call for a backyard pebble-bed reactor to run it...

That Nvidia candle is gonna burn bright.


No doubt that's the reason our load-shedding schedule is higher: all the fangirls buying 4090s and plugging them in.

At least the economy will be bolstered by the sales... though the savings not so much.
 

wizardofid

Executive Member
Joined
Jul 25, 2007
Messages
9,381
at 220v it is not 50Amps.
The what now. 30% maths strikes again. 600 watts divided by 12 volts = 50 amps, just in case you are unaware. A switching power supply steps the mains voltage down from 220 volts to 12 volts, with the minor 3.3 V and 5 V rails derived from the 12 volt rail. A 600 watt draw at 220 volts only amounts to 2.72 amps, because the maths is 600 watts divided by 220 volts = 2.72 amps.

However, due to efficiency losses, the draw from the mains can be higher than the actual output. In other words, the unit might output 600 watts (12 V) but draw, say, 800 watts (220 V), depending on its rating. That is also the reason you don't mess around with a power supply while it is on: the voltage may be low, but the available current can kill.

P = I × V, or Watts = Amps × Volts.
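That arithmetic can be sketched in a few lines of Python; the 85% efficiency figure is purely an illustrative assumption, not a measured value for any particular PSU:

```python
# Same 600 W load expressed on both sides of a switching PSU.
OUTPUT_WATTS = 600.0
RAIL_VOLTS = 12.0     # GPU side
MAINS_VOLTS = 220.0   # wall side

# I = P / V on each side of the supply
rail_amps = OUTPUT_WATTS / RAIL_VOLTS    # 50.0 A at 12 V
mains_amps = OUTPUT_WATTS / MAINS_VOLTS  # ~2.73 A at 220 V

# Efficiency losses mean the wall draw exceeds the DC output.
EFFICIENCY = 0.85  # illustrative assumption
wall_watts = OUTPUT_WATTS / EFFICIENCY   # ~706 W drawn from the mains
wall_amps = wall_watts / MAINS_VOLTS

print(f"{rail_amps:.1f} A at 12 V, {mains_amps:.2f} A at 220 V")
print(f"~{wall_watts:.0f} W / {wall_amps:.2f} A from the wall at 85% efficiency")
```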
 
Last edited:

WalkWithMe

Senior Member
Joined
Dec 10, 2016
Messages
661
wizardofid said:
The what now. 30% math strike again. 600watt divided by 12volt = 50 amps... [quoted in full above]
Agreed, 600 W at 12 V is 50 amps. If you're wiring a house you need 10 mm² cable to carry that; that's a core about 3.5 mm in diameter.
That was my point in reply to the original post about using 10 mm² cable for 600 W / 50 A: that kind of cable is what you'd run for a 220 V circuit, not 12 V!
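For anyone wanting to sanity-check the cable-sizing argument, here is a rough Python sketch of resistive loss per metre of copper. The resistivity constant is the standard textbook value; the 10 mm² figure is the poster's, not a wiring-code calculation:

```python
# I^2 * R loss per metre of a single copper conductor at a given cross-section.
RHO_COPPER = 1.68e-8  # ohm-metre, resistivity of copper at ~20 degrees C

def loss_per_metre(amps: float, area_mm2: float) -> float:
    """Watts dissipated per metre of conductor."""
    resistance = RHO_COPPER / (area_mm2 * 1e-6)  # ohms per metre
    return amps**2 * resistance

# 600 W delivered at 12 V vs 220 V through the same 10 mm2 core:
print(loss_per_metre(50.0, 10.0))      # 12 V side: 50 A -> ~4.2 W/m
print(loss_per_metre(600 / 220, 10.0)) # 220 V side: ~2.7 A -> ~0.013 W/m
```

The point: same power delivered, but loss scales with the square of the current, so the 12 V side dissipates roughly (50 / 2.73)² ≈ 336 times more heat per metre of cable.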
 

Fulcrum29

Honorary Master
Joined
Jun 25, 2010
Messages
55,064
This is what Igor's Lab said:


Poorly manufactured adapter responsible for melting RTX 4090 cables, report claims

Upon closer inspection, the crew discovered that four 14 gauge wires are distributed across a total of six contacts (each outer contact is tied to one wire, and the two inner contacts each have two wires soldered to them).

"The solder base is a mere 0.2 mm thin copper base with a width of 2 mm per incoming wire, which then results in 4 mm per pair for the middle connections," Igor's added. It is so fragile that "carefully lifting off the enveloping layer causes the thin plate to tear immediately." Similarly, bending the cables at the connector – which can easily happen when plugging them in or unplugging them during normal installation or removal – could also induce damage.

Igor's Lab has contacted Nvidia about the issue and believes a recall of the adapter is the least the GPU maker could do for customers at this stage.

Direct source (pictures inside): https://www.igorslab.de/en/adapter-...hot-12vhpwr-adapter-with-built-in-breakpoint/

The conclusions, as quoted,
  • The problem is not the 12VHPWR connection as such, nor the repeated plugging or unplugging.
  • Standard compliant power supply cables from brand manufacturers are NOT affected by this so far.
  • The current trigger is NVIDIA’s own adapter to 4x 8-pin in the accessories, whose inferior quality can lead to failures and has already caused damage in single cases.
  • Splitting each of the four 14AWG leads onto each of the 6 pins in the 12VHPWR connector of the adapter by soldering them onto bridges that are much too thin is dangerous because the ends of the leads can break off at the solder joint (e.g., when kinked or bent several times).
  • Bending or kinking the wires directly at the connector of the adapter puts too much pressure on the solder joints and bridges, so that they can break off.
  • The inner bridge between the pins is too thin (resulting cross section) to compensate the current flow on two or three instead of four connected 12V lines.
  • NVIDIA has already been informed in advance and the data and pictures were also provided by be quiet! directly to the R&D department.
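To put numbers on those conclusions, a quick Python sketch (the solder-bridge dimensions come from the Igor's Lab quote above; the ~2.08 mm² cross-section for 14 AWG is the standard value):

```python
# 600 W at 12 V entering the adapter over four 14 AWG wires.
TOTAL_AMPS = 600.0 / 12.0  # 50 A total

# If a wire breaks off at the solder joint, the survivors carry the load.
for live_wires in (4, 3, 2):
    print(f"{live_wires} wires attached: {TOTAL_AMPS / live_wires:.1f} A per wire")

# Cross-sections: the adapter's solder bridge vs a single 14 AWG wire.
bridge_mm2 = 0.2 * 2.0  # 0.2 mm thick x 2 mm wide per incoming wire
awg14_mm2 = 2.08        # standard 14 AWG copper cross-section
print(f"bridge {bridge_mm2:.1f} mm2 vs wire {awg14_mm2:.2f} mm2")
```

With only two wires left attached, each remaining path and its thin bridge has to carry 25 A, which is why the cross-section of the bridge matters so much.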

A 'quality' third-party adapter may resolve the issue in the interim, but it seems best to upgrade to a new ATX 3.0 PSU with a 'native' cable.

So the rumours that AMD is avoiding ATX 3.0 power rails on the upcoming 7000 series because of this are nonsense. From what is known now, Nvidia did poor quality control, but the issue may be bigger than that.
 

wizardofid

Executive Member
Joined
Jul 25, 2007
Messages
9,381
Fulcrum29 said:
This is what Igor Labs said... [quoted in full above]
Say what now

saywhat.jpg
 

Sinbad

Honorary Master
Joined
Jun 5, 2006
Messages
81,152
It's not that simple.

Resistance, which causes heat, is proportional to amps AND volts, not just amps.
No, resistance may vary with temperature, but it does not vary directly with voltage.
Increased current causes increased heat if the resistance is not negligible, because dissipation is P = I²R.
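The distinction can be made concrete in a couple of lines of Python; the 0.005 Ω contact resistance is an arbitrary illustrative value, not a measured figure for the 12VHPWR connector:

```python
# Heat dissipated in a conductor is P = I^2 * R: it depends on current and
# resistance, not directly on the supply voltage.
def heat_watts(amps: float, resistance_ohms: float) -> float:
    return amps**2 * resistance_ohms

CONTACT_R = 0.005  # ohms; illustrative assumption for one connector contact

# The same 600 W load delivered at two different voltages:
print(heat_watts(600 / 12, CONTACT_R))   # 50 A  -> 12.5 W heating the contact
print(heat_watts(600 / 220, CONTACT_R))  # ~2.7 A -> ~0.037 W
```

Same power, same resistance, but the low-voltage/high-current path turns hundreds of times more energy into heat at the contact.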
 

wizardofid

Executive Member
Joined
Jul 25, 2007
Messages
9,381
What is your point?

Nvidia's 'Adaptergate' is not the reason why AMD will not use ATX 3.0 power rails, you know, the smart rail.
Never said it was... It wasn't a direct result, but a result of their own testing. ATX 3.0 power rails? You're making it sound like power supplies were designed from the ground up again. It is the same power supply design, just with better OCP, a requirement for larger/more capacitors to handle spike loads, and the connector change; and you don't even necessarily need larger/more capacitors on the 12 volt rail if your over-current protection isn't overzealous with spike loads. Still the same buck, full bridge, half bridge, quasi-resonant and push-pull topologies.

But only push-pull, buck, full bridge and quasi-resonant are of relevance, as they are the only ones that supply 1000 watts or more. Half bridge only supplies up to 500 watts, so we are not likely to see any changes there. There is no immediate redesign of topologies or the way they operate.

Oh, and calling it a smart rail implies the addition of digital controllers, using microcontrollers or programmable gate arrays; that is essentially what a digital supply uses. Consider the "smart" rail as having a type of read-only memory reporting the rail output back; it can't do much more than that. It is fundamentally still a "dumb" rail with the addition of read-back, and it essentially boils down to the GPU not pulling more than the rail limit. Only a digital power supply is able to load-balance rails. Also of note: Intel doesn't specify how to build a power supply, only the specifications it must meet to be compliant. To keep power supplies cheap and affordable, most makers will take the dumb approach with read-back only. To go digital you would need to use full bridge or quasi-resonant, as those two are really the ones that benefit most from going digital; the design efficiency on the others is problematic and generally not worth it. That isn't even speaking of the costs, with full bridge at least 2.5 times standard pricing and 2.8 times for quasi-resonant; add digital control and you can pretty much double the price.

There is no point making it "smart" when the intention is read-back only: essentially letting the GPU know what it can draw.

I also don't see the need to isolate the rail and revert to the old multi-rail design. Unless it is a high-wattage unit that can afford to split off a dedicated rail, it would be suicide on 1 kW or less, as the supply needs to deliver up to 600 watts per the spec. Of course, the specification states that cables must be labelled with how much can be drawn through them, so even if the entire 12 volt rail is good for 600 watts, the information supplied to the GPU can restrict it to 300 watts only. That limit is essentially hard-coded, with no way around it; it is not able to adjust the output dynamically either.

Single-rail power supply designs are more common and have practically replaced the old multi-rail design, and for good reason; not that single rail is any less safe than multi rail.
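The "read-back only" behaviour described above amounts to nothing more than a static clamp on the GPU side. A toy Python sketch, assuming only what the post says (the 300 W / 600 W figures come from the post; the function name is made up for illustration):

```python
# The cable/PSU advertises a fixed power limit; the GPU clamps its draw to it.
# There is no dynamic adjustment: the limit is read once and never changes.
def allowed_draw_watts(gpu_request: float, advertised_limit: float) -> float:
    """The GPU may never pull more than the advertised cable limit."""
    return min(gpu_request, advertised_limit)

# Rail is good for 600 W, but the cable reports a 300 W limit:
print(allowed_draw_watts(600.0, 300.0))  # clamped to 300.0
print(allowed_draw_watts(250.0, 300.0))  # within limit: 250.0
```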
 
Last edited:

Fulcrum29

Honorary Master
Joined
Jun 25, 2010
Messages
55,064
Odd that Igor's Lab has a whole other cable.


Buildzoid is of the view that it might be the pin blades, and not the wires.

 

Fulcrum29

Honorary Master
Joined
Jun 25, 2010
Messages
55,064
New report,


312451908_811377460139219_5756050343671666062_n.jpg


312707150_811377510139214_6972159565417401157_n.jpg


312707144_811381076805524_264535510647563218_n.jpg


MSI GeForce RTX 4090 Gaming Trio X
MSI MEG Ai1300P PCIe5 1300W (80 Plus Platinum)

How credible these reports are, well...
 