What is your point?
Nvidia's 'Adaptergate' is not the reason why AMD will not use ATX 3.0 power rails, you know, the smart rail.
Never said it was... It wasn't a direct result, but a result of their own testing. ATX 3.0 power rails? You're making it sound like power supplies were designed from the ground up again. It's the same power supply design, just with better OCP, a connector change, and the requirement for larger/more capacitors to handle spike loads. And you don't even necessarily need larger/more capacitors on the 12 volt rail if your over-current protection isn't overzealous about spike loads. Still the same topologies: buck, full bridge, half bridge, quasi-resonant and push-pull.
But only push-pull, buck, full bridge and quasi-resonant are relevant, as they are the only ones that supply up to 1000 watts or more. Half bridge only goes up to around 500 watts, so you're not likely to see any changes there. There is no immediate redesign of topologies or the way they operate.
Oh, and calling it a smart rail implies the addition of digital controllers, built around microcontrollers or programmable gate arrays. Well, that is essentially what a digital supply uses. Think of the "smart" rail as having a type of read-only memory reporting back the rail limit; it can't do much more than that. It is fundamentally still a "dumb" rail with read-back added, and it essentially boils down to the GPU not pulling more than the rail limit. Only if it is a digital power supply can it actually load-balance rails. Also of note: Intel doesn't specify how to build a power supply, only the specifications it must meet to be compliant. To keep power supplies cheap and affordable, most makers are going to take the dumb approach with read-back only. To go digital you would need either full bridge or quasi-resonant, as those two benefit the most from digital control; the design efficiency on the others is problematic and generally not worth it. That isn't even speaking of cost, with full bridge at least 2.5 times standard pricing and 2.8 times for quasi-resonant; include digital control and you can pretty much double the price.
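To make the distinction concrete, here is a minimal hypothetical sketch (not any real PSU firmware or API) of the difference: a read-back-only rail can do nothing but report a fixed limit, while a digital supply can actually shift capacity between rails. All class and method names are illustrative inventions.

```python
# Hypothetical model, for illustration only: "dumb" rail with read-back
# versus a digital supply that can load-balance its rails.

class ReadBackRail:
    """A fixed rail that can only report its limit, never change it."""
    def __init__(self, limit_w: int):
        self._limit_w = limit_w  # set at manufacture, like a ROM value

    def advertised_limit(self) -> int:
        return self._limit_w  # read-back is all it can do


class DigitalSupply:
    """A digital supply can rebalance capacity between its rails."""
    def __init__(self, total_w: int, rails: int):
        self.total_w = total_w
        self.limits = [total_w // rails] * rails

    def rebalance(self, demand_w: list[int]) -> list[int]:
        # Give each rail what it asks for, scaled down if the sum
        # would exceed the supply's total capacity.
        scale = min(1.0, self.total_w / max(1, sum(demand_w)))
        self.limits = [int(d * scale) for d in demand_w]
        return self.limits
```

The dumb rail's answer never changes no matter what the GPU asks; the digital supply can move headroom to whichever rail needs it, which is exactly the capability you pay extra for.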
There is no point making it "smart" when the intention is read-back only, essentially letting the GPU know what it can draw.
I also don't see the need to isolate the rail and revert to the old multi-rail design. Unless it is a high-wattage unit that can afford to split off a dedicated rail, it would be suicide on 1 kW or less, as the rail needs to supply up to 600 watts per the requirements of the spec. Of course, the specification states that cables must be labelled with how much can be drawn over them. So even if the entire 12 volt rail is good for 600 watts, the information supplied to the GPU can restrict it to 300 watts only. That limit is essentially hard-coded and there is no way around it; it is not able to dynamically adjust the advertised limit either.
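That "label on the cable" is literally two sideband pins (SENSE0/SENSE1) on the 12VHPWR connector, each either tied to ground or left open, which the GPU decodes into a hard power cap. A minimal sketch of that decode, assuming the pin encoding as publicly documented for ATX 3.0 / PCIe CEM 5.0 (treat the exact table values as an assumption, not spec text):

```python
# Decode the 12VHPWR sideband pins into the cable's advertised power limit.
# Assumed encoding (publicly documented ATX 3.0 / PCIe CEM 5.0 table):
# True = pin tied to ground, False = pin left open.

def cable_limit_watts(sense0_grounded: bool, sense1_grounded: bool) -> int:
    table = {
        (True,  True):  600,  # both grounded: full 600 W permitted
        (False, True):  450,
        (True,  False): 300,
        (False, False): 150,  # both open: safest fallback
    }
    return table[(sense0_grounded, sense1_grounded)]
```

So a supply whose 12 V rail could physically do 600 watts can still cap one cable at 300 watts just by how the sense pins are wired; there is no negotiation path for the GPU to raise it.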
Power supplies with a single-rail design are more common now and have practically replaced the old multi-rail design, and for good reason; not that single rail is any less safe than multi rail.