Who is ever really going to be confident signing off on a complicated software system when, in effect, that signature says, "Yes, this system will absolutely not break, be breached, or have anything happen to it"? Given the complexity of modern software, it doesn't seem like a feasible solution.
Even top software companies have things break or go wrong far more frequently than in traditional engineering.
I don't see this as a good solution in the software world.
I don't think it would demand that the software be infallible. If a "new type of earthquake" happens, civil engineers aren't held responsible for their bridges failing. In the same way, if reasonable effort was made to secure the system by the standards of the time, I don't think these hypothetical licensed engineers would be blamed for particularly novel attacks (zero-days) or largely unpreventable ones (social engineering, to at least some extent).