
You definitely wouldn't want to use Verilog to iterate quickly on a simulation model. It's a lot more work: in a software model, you can (for example) say the equivalent of "this instruction is a divide and takes 32 cycles", then mark the divider busy for the next 32 cycles in a reservation table and increment the "total cycles taken by this benchmark" counter. In the RTL you'd actually have to build a divider and integrate it into the pipeline.
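The shortcut described above can be sketched in a few lines. This is a hypothetical illustration, not any real simulator's code: the names (`DIV_LATENCY`, `TimingModel`) and the 32-cycle latency are assumptions taken from the example in the text.

```python
# Hypothetical timing-model sketch: instead of building a divider, just
# account for its latency in a reservation table and a cycle counter.

DIV_LATENCY = 32  # assumed latency of a divide, per the example above

class TimingModel:
    def __init__(self):
        self.cycle = 0        # total cycles taken by this benchmark
        self.div_free_at = 0  # cycle at which the divider becomes free

    def issue_divide(self):
        # Stall until the divider is free, then mark it busy for 32 cycles.
        start = max(self.cycle, self.div_free_at)
        self.div_free_at = start + DIV_LATENCY
        self.cycle = start + DIV_LATENCY

t = TimingModel()
t.issue_divide()
t.issue_divide()
print(t.cycle)  # two back-to-back divides: 64 cycles
```

No divider logic exists anywhere in this model; the structural hazard and the latency are pure bookkeeping, which is exactly why iteration is fast compared to RTL.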

This shortcut is possible because of a common technique of splitting functional and timing details apart: the functional emulator simply runs the instructions in a big interpreter loop and tracks the machine state as, say, QEMU or Bochs would, while the timing model is just cycle accounting given the instruction stream. In contrast, when you build a model in RTL, you're actually doing all the work that industry microarchitects do: you need to get right all the details of (say) speculative execution, or cache tag matching, or whatever, because your microarchitecture is implementing the code execution directly. That's a lot harder to do!
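The functional/timing split might look like this in miniature. Everything here is illustrative (the opcode set, the `LATENCY` table, the three-register toy ISA are assumptions), but the shape is the point: one function updates architectural state, and a separate pass does nothing but cycle accounting on the resulting instruction stream.

```python
# Illustrative functional/timing split: a functional interpreter tracks
# machine state (QEMU/Bochs-style), while the timing model is just cycle
# accounting per retired instruction.

LATENCY = {"add": 1, "load": 3, "div": 32}  # assumed per-opcode latencies

def functional_step(state, instr):
    """Execute one instruction: update the register file only."""
    op, dst, a, b = instr
    if op == "add":
        state[dst] = state[a] + state[b]
    elif op == "div":
        state[dst] = state[a] // state[b]
    return op

def run(program, state):
    cycles = 0
    for instr in program:
        op = functional_step(state, instr)  # what happened (functional)
        cycles += LATENCY.get(op, 1)        # how long it took (timing)
    return cycles

regs = {"r1": 96, "r2": 3, "r3": 0, "r4": 0}
total = run([("div", "r3", "r1", "r2"), ("add", "r4", "r3", "r2")], regs)
print(regs["r4"], total)  # r4 = 35, total = 33 cycles
```

Notice that correctness never depends on the latencies: you can rewrite the `LATENCY` table, or swap in a whole pipeline model, without touching `functional_step`. In RTL there is no such separation, because the microarchitecture itself is what executes the code.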

People do sometimes write RTL for their proposed microarchitectures, but that's usually done for power or timing (clock speed / critical path) results. And they usually model just whatever new thing (prediction table, synchronization widget, cache eviction logic) they propose, rather than the whole chip.



To be a little pedantic: you could use Verilog to write a high-level model like the one you describe. The language certainly doesn't restrict you to its synthesizable subset.

That being said, it's generally easier and cheaper (good Verilog simulators aren't free) to use a general-purpose language for what you describe.



