You're thinking too concretely. The point of this article isn't to talk specifically about modeling recipes; it's to use the complexity of recipes (something concrete most people know about) as an example of how this "edge case poisoning" affects the system. I'm sure he has specific non-recipe examples in mind, but using those would either require 1) giving you a lot of background knowledge about the system, which wasn't his point, or 2) possibly revealing privileged information about clients and so on.
So here's a concrete example: the config file / structure for a virtual machine. Your basic VM has a number of CPUs, an amount of memory, a virtual disk, and a virtual network card. Oh, but this VM is actually a "service VM" that is providing an emulated device for another VM. And this VM is actually a fast, ephemeral clone of another VM: it has copy-on-write memory and isn't allowed to write to the disk. And this VM is a live-snapshotting clone of a remote VM: it doesn't execute, but just receives memory and disk updates from the remote VM until the heartbeat is lost, and then continues. Oh, and this VM's disk is actually provided over the network by a SAN. Oh, and...
The result being that if, like 95% of people, you just want to make a plain VM, you have to wade through a massive list of who-knows-what options to make it work. Balancing making it simple for those 95%, while functional for the other 5%, is a challenge.
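To make the "edge case poisoning" concrete, here's a minimal sketch of what such a config structure tends to grow into. All field names are invented for illustration, not taken from any real hypervisor:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VMConfig:
    # The fields the 95% case actually needs
    cpus: int = 1
    memory_mb: int = 1024
    disk_image: str = ""
    nic: str = "default"

    # Edge-case options everyone else must wade past
    is_service_vm: bool = False            # provides an emulated device for another VM
    serves_device_to: Optional[str] = None
    is_ephemeral_clone: bool = False       # copy-on-write memory, read-only disk
    clone_parent: Optional[str] = None
    is_live_snapshot_target: bool = False  # receives updates; runs only on lost heartbeat
    snapshot_source_host: Optional[str] = None
    heartbeat_timeout_s: Optional[float] = None
    disk_backend: str = "local"            # or "san": disk served over the network
    san_target: Optional[str] = None

# The common case still reads simply...
plain = VMConfig(cpus=2, memory_mb=4096, disk_image="ubuntu.qcow2")
# ...but the type drags every edge case along with it.
```

The defaults keep the plain case short, but every reader of the type (and every tool that validates it) now has to reason about all the flags and their interactions.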
But to the GP’s point, this VM config is a program, so why are you expressing it as a static config file? The resulting virtual machine will be valid as long as the steps involved in its creation are.
No, it’s not, and writing VM configuration as a Turing-complete program means you can’t perform structured queries over all the VMs you manage without executing arbitrary code, and you can’t make bulk modifications at all.
At least, not without restricting yourself to a tractable subset like dependabot does.
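This is the payoff of keeping configs as static data: you can query and rewrite them without running anything. A quick sketch, with hypothetical VM configs invented for illustration:

```python
import json

# Hypothetical VM configs as plain data (names invented for illustration).
configs = [
    json.loads(s) for s in (
        '{"name": "web-1", "cpus": 2, "memory_mb": 2048}',
        '{"name": "db-1",  "cpus": 8, "memory_mb": 16384}',
        '{"name": "web-2", "cpus": 2, "memory_mb": 2048}',
    )
]

# Structured query: inspect the data, execute nothing.
big_vms = [c["name"] for c in configs if c["cpus"] >= 4]

# Bulk modification: rewrite a field across every config.
for c in configs:
    c["memory_mb"] *= 2
```

If the config were instead an arbitrary program, the only way to answer "which VMs have 4+ CPUs?" would be to run each program and observe the result.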
The article gives one example - find out if a recipe has ingredient X. You could also imagine "find all recipes (from a cookbook) that are vegan", or "find all recipes I could follow given the contents of my fridge", etc.
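All three of those queries are trivial once recipes are plain data. A sketch with invented recipes and an invented (and very incomplete) list of animal products:

```python
# Hypothetical recipes as data; contents invented for illustration.
recipes = [
    {"name": "lentil soup", "ingredients": {"lentils", "onion", "carrot"}},
    {"name": "omelette",    "ingredients": {"egg", "butter", "chives"}},
    {"name": "salsa",       "ingredients": {"tomato", "onion", "lime"}},
]

ANIMAL_PRODUCTS = {"egg", "butter", "milk", "chicken"}  # toy list

def has_ingredient(recipe, x):
    return x in recipe["ingredients"]

def is_vegan(recipe):
    return not (recipe["ingredients"] & ANIMAL_PRODUCTS)

def cookable(recipe, fridge):
    return recipe["ingredients"] <= fridge

vegan_names = [r["name"] for r in recipes if is_vegan(r)]
from_fridge = [r["name"] for r in recipes
               if cookable(r, {"tomato", "onion", "lime", "salt"})]
```

Each query is a pure function over the data; none of them needs to "run" a recipe.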
This is what you'd do in a classic OOP approach. Allows for different behavior across variants by pulling out the shared interface. (I think this is what the author mentions when they speak of "different level of abstraction"?)
The downside of this approach is that for subtrees of shared behavior you either go the multi-level inheritance route (risky if you're not sure the leaves will hold their parent's contract) or accept the extra boilerplate for similar behavior.
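For concreteness, here is what the classic OOP shape looks like: a shared interface pulled out of the variants, so callers never see which variant they hold. The class and method names are invented for illustration:

```python
from abc import ABC, abstractmethod

class Recipe(ABC):
    """Shared interface pulled out of the variants."""
    @abstractmethod
    def steps(self) -> list:
        ...

class BakedRecipe(Recipe):
    def __init__(self, batter_steps, temp_c):
        self.batter_steps, self.temp_c = batter_steps, temp_c
    def steps(self):
        return self.batter_steps + [f"bake at {self.temp_c}C"]

class NoCookRecipe(Recipe):
    def __init__(self, prep_steps):
        self.prep_steps = prep_steps
    def steps(self):
        return self.prep_steps + ["chill and serve"]

# Callers only depend on the interface, never on the variant.
def print_card(recipe: Recipe) -> str:
    return "\n".join(recipe.steps())
```

The inheritance risk mentioned above shows up when, say, a `BakedRecipe` subclass quietly returns steps in a different order than its parent promised; callers of `print_card` have no way to notice.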
It's interesting to me how this happens quite often and polymorphism is still our go-to solution.
In this case, a recipe is data, and programs can be generated from data. In the case of data being equivalent to a program, why complicate things with inheritance or composition? Just repeat the data. We aren't maintaining the code; we are generating it, using it once, and discarding it. If you want your data smaller, you just compress it.
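A rough sketch of that generate-use-discard idea, with an invented recipe format and an invented generator:

```python
# Hypothetical recipe as plain data (format invented for illustration).
recipe = [
    ("combine", ["flour", "water", "salt"]),
    ("rest", 30),
    ("bake", 220),
]

def generate_program(recipe):
    """Compile the data into throwaway instructions; no shared class hierarchy."""
    lines = []
    for op, arg in recipe:
        if op == "combine":
            lines.append(f"Combine {', '.join(arg)}.")
        elif op == "rest":
            lines.append(f"Rest for {arg} minutes.")
        elif op == "bake":
            lines.append(f"Bake at {arg}C.")
    return lines

program = generate_program(recipe)  # use once, then discard
```

The generator is the only thing that has to understand the data format, which is exactly the coupling the next comment pokes at.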
Let's say we find a whole new edge case after this thing has been running for six months. Now we need to update the data structure and the generator code that knows how to interpret the data. So I'm not sure how much is gained.
I think one pitfall that a lot of software designers fall into is assuming they can know the entire problem domain up front. Maybe that works for a super-mature industry like airline reservations or something. But I still tend to doubt it.
In my experience you constantly get stuff that borks your data model after going live. I always assume this will happen continuously throughout the lifecycle of the product, and try to design accordingly.