Typically you see p < 0.01 etc. even with an alpha of 0.05. A lot of stats software reports only such inexact threshold values.
But yes, the interpretation of p-values and confidence levels is wildly misunderstood. p > alpha is often taken as "evidence of absence" of an effect, which is just wrong. Or when one quantity has p1 < alpha and another has p2 > alpha, it's often interpreted as showing that the quantities differ.
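A minimal numeric sketch of that last mistake (the numbers here are made up for illustration): two estimates can land on opposite sides of the alpha threshold even though the difference between them is nowhere near significant, assuming normal sampling distributions with known standard errors.

```python
import math

def two_sided_p(estimate, se):
    # Two-sided p-value for H0: true value = 0, assuming a normal
    # sampling distribution; erfc(z / sqrt(2)) = 2 * (1 - Phi(z)).
    z = abs(estimate) / se
    return math.erfc(z / math.sqrt(2))

# Two hypothetical effect estimates with equal standard errors:
p1 = two_sided_p(0.5, 0.2)  # z = 2.5 -> p ~ 0.012, below alpha = 0.05
p2 = two_sided_p(0.3, 0.2)  # z = 1.5 -> p ~ 0.134, above alpha = 0.05

# But the test of the *difference* between the two estimates
# is far from significant:
se_diff = math.sqrt(0.2**2 + 0.2**2)
p_diff = two_sided_p(0.5 - 0.3, se_diff)  # z ~ 0.71 -> p ~ 0.48

print(p1, p2, p_diff)
```

So "A is significant, B is not" does not license "A differs from B"; the comparison itself has to be tested.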
It's a mess.