Eh, isn't it as simple as assuming inputs are sRGB-encoded, decoding them to linear, doing the processing as normal, and encoding all outputs back to sRGB? That way even a screenshot of a processed image is sRGB-encoded, so feeding it back in and decoding it again is still correct. If the input and output conversions are exact inverses of each other you are no worse off than with no conversion at all, but all processing and effects now behave correctly.
To put it differently, the problem is "2+2=10": effects coming out too dark, runaway brightness in add operations, and so on. Correcting for that is the important bit (gamma 2.2 is a decent approximation, and probably what your monitor assumes anyway, but actual sRGB compliance never hurts), and as long as inputs (loaded images, colour pickers etc.) are decoded with the inverse transform, everything stays as consistent from start to finish as if there were no conversion at all. So the problem of double-correcting already-encoded sources exists no more and no less than it did before, and you don't have to care about the input's colour space any more or less than before. (Although honouring the colour space specified in image metadata would surely be a good idea, even if it could be wrong.)
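A minimal Python sketch of that pipeline, using the piecewise sRGB transfer functions (function names are mine, not from any particular editor). Blending black and white this way gives an encoded value around 0.735, not the naive 0.5 — which is exactly the "too dark" effect being corrected:

```python
def srgb_to_linear(c):
    # Decode: sRGB-encoded [0,1] -> linear light (piecewise sRGB EOTF).
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Encode: linear light -> sRGB-encoded [0,1] (exact inverse of the above).
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def average_srgb(a, b):
    # A 50/50 blend done correctly: decode, mix in linear light, re-encode.
    return linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
```

Because encode and decode are exact inverses, a value that is merely passed through (no effects applied) round-trips unchanged, which is the "no worse off" property above.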
The only problem I see is that gradients would now look odd, since a gradient that is linear in light doesn't actually look like an even ramp. But that's fixed by simply having the gradient tool interpolate in the encoded space and emit the compensated (decoded) values, no?
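That gradient fix might look something like this sketch (a simple gamma-2.2 approximation rather than full sRGB, for brevity; the function and constant names are mine): interpolate the stops in gamma-encoded space so the ramp looks perceptually even, then decode each sample back to linear for the rest of the pipeline.

```python
GAMMA = 2.2  # rough approximation to the sRGB transfer curve

def perceptual_gradient(a_lin, b_lin, n):
    # Encode the endpoints, interpolate evenly in encoded space
    # (which is roughly perceptually uniform), then decode each
    # stop back to linear light for compositing.
    a_enc = a_lin ** (1 / GAMMA)
    b_enc = b_lin ** (1 / GAMMA)
    return [(a_enc + (b_enc - a_enc) * i / (n - 1)) ** GAMMA
            for i in range(n)]
```

Note the midpoint of a black-to-white gradient comes out around 0.22 in linear light, not 0.5 — darker in physical terms, but it *looks* like the halfway grey.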