I suspect that if someone could come up with a reasonable way to single out confounding variables, there would prove to be an inverse correlation between SLOC per function point and code quality.
My rationales:
- More SLOC means more places for a bug to hide.
- More SLOC means more logic to have to reason about.
- More repetition leads to more SLOC.
- More repetition increases the chance that a bug can be fixed in one spot but not in others.
- More repetition increases the chance of regression bugs when an update isn't propagated to every copy (see the sketch after this list).
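To make the repetition point concrete, here's a minimal, hypothetical Python sketch (the function names and the discount rule are invented for illustration): the same calculation is copy-pasted into two functions, a fix lands in only one of them, and the unfixed copy becomes exactly the kind of latent regression described above.

```python
# Hypothetical example: the same discount rule duplicated in two places.
# A bug fix (clamping the discount to 0-50%) was applied in one copy only.

def checkout_total(price: float, discount_pct: float) -> float:
    # Fixed copy: discount is clamped to a sane range.
    discount_pct = max(0.0, min(discount_pct, 50.0))
    return price * (1 - discount_pct / 100)

def invoice_total(price: float, discount_pct: float) -> float:
    # Unfixed copy: the clamp was never propagated here,
    # so a 150% "discount" silently yields a negative total.
    return price * (1 - discount_pct / 100)

# Factored alternative: one shared helper, so any fix lands everywhere at once.
def apply_discount(price: float, discount_pct: float) -> float:
    discount_pct = max(0.0, min(discount_pct, 50.0))
    return price * (1 - discount_pct / 100)

if __name__ == "__main__":
    print(checkout_total(100, 150))   # 50.0  -- clamped
    print(invoice_total(100, 150))    # -50.0 -- the regression hiding in the copy
    print(apply_discount(100, 150))   # 50.0  -- single point of change
```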
Anecdotally, on my team we seem to have the fewest quality issues in the code written by the folks who produce the most factored code.
Sure there is, if you control for functionality. You can't solve complex problems concisely without the right abstractions, and the right abstractions are 90% of the battle for code quality.