I've tried it. It's truly incredible what it can produce, but it often produces the wrong thing. Furthermore, it cannot do logical inferences, and it cannot do mathematics.
In short, the code it produces has to be reviewed by programmers who know what they're doing. Sometimes having it produce the code for us to review speeds things up, but sometimes it slows things down and it's easier to just write the code yourself.
I see ChatGPT as an extremely powerful tool that can boost a programmer's productivity, but despite its incredible capability it cannot be trusted in ways that we can trust human programmers.