They are fundamentally not equivalent: you cannot guarantee that a system respects a safety property with testing. You can prove that certain concrete instances, i.e. your test case inputs, are safe, but you don't know whether there exists a path in your program where the safety property is violated.
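To make the point concrete, here is a hypothetical sketch (the function and its bug are invented for illustration): a function that satisfies its safety property on every tested input, yet still has a path that violates it.

```python
def abs_val(x: int) -> int:
    """Intended safety property: abs_val(x) >= 0 for every int x."""
    # Deliberately planted bug: one input out of billions takes a
    # path that violates the property.
    if x == -123456789:
        return x
    return -x if x < 0 else x

# Testing proves safety only for the concrete inputs we happen to pick:
for x in [0, 1, -1, 42, -42, 10**6, -(10**6)]:
    assert abs_val(x) >= 0  # every test passes

# Yet the property does not hold on all paths -- a verifier that
# considers every path would report this counterexample:
assert abs_val(-123456789) < 0
```

All the assertions above pass, which is exactly the trap: a green test suite tells you nothing about the inputs you didn't try.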
This is incredibly important in critical systems because it might not even be possible to detect that a safety violation has occurred: for example, a file system corrupts data, a distributed consensus algorithm commits the wrong value, or a compiler slightly miscompiles your code.
We want to ensure there exists no path where something "bad" can occur; this is what verification gives us over testing.
There is always the fundamental problem of needing to communicate what we want to the computer, but the goal is to do it in a minimal way, as a specification, and then ensure the program we write exactly matches that minimal specification.