Out of curiosity: if humans have trouble coming up with anything non-trivial, like regexes, why should something that has been trained on the output of humans do much better?
To me it feels like if 90% of the $TASK content out there were bad and people struggled with it, then AI-generated $TASK output would be similarly flawed, whether it concerns a programming language or something else.
As a silly example, consider how much bad legacy PHP code is out there and what the answers to some PHP questions might look like because of that.
But it's still possible to get answers to simple problems reasonably fast, or at least get workable examples to test and iterate on, which can easily save time.
https://regex101.com/r/sbpy8s/1
But this matches for example
as one single word. But I would like it to match separately
and in this case. Likewise, I'd want for example
to be matched as two separate words and. After a bit more reading online I thought that maybe the following regex would do what I want: https://regex101.com/r/1NT5Ie/1
But that does not match
as a word. What I want is a way to include everything after dog that is not \b,
and likewise everything preceding cat that is not \b.
Edit: I think I’ve found it after reading https://stackoverflow.com/questions/4541573/what-are-non-wor...
Seems to behave exactly like I want: https://regex101.com/r/f3uJUE/1
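The regexes themselves sit behind the regex101 links, so here is a minimal Python sketch of the \b vs. \B idea from that Stack Overflow question, with made-up test strings ("hotdog", "catfish") standing in for the original examples:

    import re

    # Made-up test data; the original example strings are not shown here.
    text = "hotdog catfish dog cat"

    # \b asserts a word boundary, so \bdog\b only matches "dog" standing alone:
    print(re.findall(r"\bdog\b", text))            # ['dog']

    # \B asserts the opposite (NOT a word boundary), so \Bdog requires a word
    # character immediately before the "dog":
    print(re.findall(r"\w*\Bdog", text))           # ['hotdog']

    # Pulling in the surrounding word characters matches each word containing
    # "dog" or "cat" as its own separate match:
    print(re.findall(r"\w*(?:dog|cat)\w*", text))  # ['hotdog', 'catfish', 'dog', 'cat']

The last pattern is just one way to phrase it; the point is that \B demands an adjacent word character exactly where \b would demand a boundary.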