Hacker News

My experience with GPU-accelerated development on Windows is quite horrible for anything other than NVIDIA's prepped Docker containers. There was always something missing or some driver was incompatible. In the long run I have always regretted developing Python on Windows, often because whatever was developed was to be deployed on a Linux box anyway.

I do not think Windows is purely to blame here, though. It's only quite recently that NVIDIA started fixing their documentation and instructions for getting all the right CUDA and cuDNN pieces running properly on a system.



Hi. PM on Windows & WSL here.

Imagine if you could run AI/ML apps and tools that are coded to take advantage of DirectML on Windows and/or atop DirectML via WSL.

Now you can run the tools you want and need in whichever environment you like ... on any (capable) GPU you like: You don't have to buy a particular vendor's GPU to run your code.

If you're old like me and remember the dark ol' days when games shipped with specific drivers for (early) GPU cards/chips, but failed to run at all if you didn't have one of the supported cards, you'll understand why this is a big deal.


> If you're old like me and remember the dark ol' days when games shipped with specific drivers for (early) GPU cards/chips, but failed to run at all if you didn't have one of the supported cards, you'll understand why this is a big deal.

Maybe I'm not that old, but I'm old enough to remember the days when Microsoft was intentionally degrading OpenGL performance on Windows ;).


This. Some games would have a handful of different renderers for different setups, while other games would only support one specific card type (and if you were lucky, a software renderer).

Those days sucked. Big time. If we can avoid making the same mistakes for machine learning, then we should.


> Maybe I'm not that old, but I'm old enough to remember the days when Microsoft was intentionally degrading OpenGL performance on Windows ;).

Which is still nonsense, since this only affected the OpenGL driver shipped by Microsoft. In contrast to truly bad actors like Apple, OEMs were free to ship their own OpenGL drivers from day one.

So sorry mate, but I have to call BS on that one.


Probably a different PM, though.


>Now you can run the tools you want and need in whichever environment you like

Isn't the linked post saying you have to be running on Windows, though? It seems like it would make way more sense to either port DirectX to Linux, or ditch DirectX and put those resources into supporting Vulkan.


Whichever environment you like, as long as it's on Windows.


Hi!

Don't you think the effort to achieve this would be absolutely massive? I don't know what kind of resources are being thrown at this project, but I'd estimate the minimum to be 3 dev teams for 2 years just to get a few variations of ResNet working "as is". And that's just for regular models that don't require quantization or (auto-)mixed precision for training.


Neither PyTorch nor TensorFlow supports WinML, so this is still going to be a bit of a stretch, since CUDA remains the toolkit of choice for mainstream ML frameworks.


> horrible on Windows for anything other than the NVIDIA prepped docker container

o-O nvidia-docker does not even support Windows.

I think the only thing you need to know is which CUDA version your cuDNN requires, and it's quite clearly stated on the download page. The same goes for Linux. For nvidia-docker you used to need a specific driver version.
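As a concrete illustration of that version check, here's a small Python sketch (assuming PyTorch as the framework; the function name is mine) that reports which CUDA and cuDNN builds an installed PyTorch was compiled against, so you can match them to the versions NVIDIA's download pages list:

```python
def report_gpu_stack():
    """Report the CUDA/cuDNN versions a PyTorch install was built against.

    Returns a dict; values are None when a component is unavailable
    (e.g. a CPU-only build, or PyTorch not installed at all).
    """
    info = {"torch": None, "cuda": None, "cudnn": None}
    try:
        import torch
    except ImportError:
        return info  # PyTorch not installed
    info["torch"] = torch.__version__
    # torch.version.cuda is None for CPU-only builds
    info["cuda"] = torch.version.cuda
    if torch.backends.cudnn.is_available():
        # An integer such as 8902, meaning cuDNN 8.9.2
        info["cudnn"] = torch.backends.cudnn.version()
    return info

if __name__ == "__main__":
    print(report_gpu_stack())
```

Comparing the reported CUDA version against the one your cuDNN download targets is usually enough to catch the mismatch before anything fails at runtime.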



