It's a widely-used and widely-studied method, so the popular implementations are pretty optimized and the toolkits extensive. Unless you have some fundamental insight or optimization specific to your problem, I don't think it's worth rolling your own. It's easier to start with something open-source and build a small module on top.
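To give a sense of what "start with something open-source" looks like in practice, here's a minimal sketch using scikit-fem (just one example of an off-the-shelf FEM toolkit, my pick rather than anything the parent named): assemble and solve a Poisson problem on a unit square in a handful of lines. Exact imports and names vary a bit between versions.

    # Sketch assuming the scikit-fem Python toolkit; API details may differ by version.
    from skfem import (MeshTri, Basis, ElementTriP1, BilinearForm, LinearForm,
                       condense, solve)
    from skfem.helpers import dot, grad

    # Weak form of the Poisson problem -Δu = 1 with u = 0 on the boundary.
    @BilinearForm
    def stiffness(u, v, _):
        return dot(grad(u), grad(v))

    @LinearForm
    def load(v, _):
        return 1.0 * v

    mesh = MeshTri().refined(4)          # unit-square mesh, refined a few times
    basis = Basis(mesh, ElementTriP1())  # piecewise-linear elements

    A = stiffness.assemble(basis)
    b = load.assemble(basis)

    # Eliminate boundary DOFs and solve the linear system.
    u = solve(*condense(A, b, D=basis.get_dofs()))

The point is that the meshing, assembly, and solver plumbing are already done for you; your "small module on top" is just the problem-specific forms and boundary conditions.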
Nvidia is rolling FEM into PhysX[1] - exciting stuff! GPU-accelerated FEM is going to be a game changer, provided they didn't cut too many corners in the pursuit of speed.
It depends. Compared to traditional engineering software, the question is how much the results deviate from reality and how much deviation you can tolerate. Both of these depend on what exactly you're simulating and what sort of results you're looking for.
For example, Nvidia published a paper a while ago using FleX as a simulation environment for training AI to perform robotic tasks:
I wonder if it can be coupled with FleX...
[1]: https://news.developer.nvidia.com/announcing-nvidia-physx-sd...