I've used R (3 years) and Python (8+ years) in data science, and I much prefer Python: it can do things beyond pure data analysis, and in my opinion pandas is far better than R's data-frame/matrix solutions. I believe the algorithmic trading industry has moved fully to Python and away from R for these reasons.
R has data.table. It is the game changer; I agree base R data.frames don't cut it for performance. tibble will come close once it incorporates more of data.table's performance tricks.
Does R have robust CSV parsing? I remember using the default reader and finding it extremely finicky about getting the header and index flags right; it also wouldn't typecast numeric columns properly (they'd end up as factors and not play nice).
The Python version of data.table has very fast CSV parsing (compared to pandas), and it didn't have issues like those you mention. Even if data.table did have issues with CSV parsing, you could probably use Apache Arrow to parse the CSV into an Arrow table and then convert it to a data.table (though that is probably suboptimal).
It happens, but mostly because upstream tools don't produce usable CSVs. The biggest problem is free-entry text fields (common for customer/business names): if there isn't full quoting around these fields, base R will break.
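To make the quoting issue concrete (a small Python stdlib sketch, not R-specific): a free-text field containing a comma only survives a round trip when it is quoted.

```python
import csv
import io

# A free-entry business name containing a comma and a quote character.
rows = [["id", "name"], ["1", 'Smith, Jones & Co. ("SJ")']]

# Write with full quoting, as a well-behaved exporter should.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL).writerows(rows)

# Read it back: the embedded comma stays inside one field.
parsed = list(csv.reader(io.StringIO(buf.getvalue())))
print(parsed[1])  # ['1', 'Smith, Jones & Co. ("SJ")']

# Without quoting, the same record splits into three fields, so the
# column count no longer matches the header -- the failure mode that
# trips up strict readers like base R's read.csv.
broken = list(csv.reader(io.StringIO("1,Smith, Jones & Co.\n")))
print(len(broken[0]))  # 3
```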
I believe both fread and readr::read_csv do the right thing here, but the base-R perspective is that any data munging needed before read.csv should be done in Perl (the R-core team are pretty old-school, to be fair).
I've been a heavy user of all three, and pandas syntax is a nightmare compared to dplyr or data.table in R. That said, I still use pandas because I prefer Python for non-analysis work.
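As a concrete illustration of the kind of syntax being compared (toy data and column names are my own, purely for example): a grouped aggregation in pandas often needs extra plumbing to get back a flat, well-named table.

```python
import pandas as pd

# Toy data, invented for illustration.
df = pd.DataFrame({
    "group": ["a", "a", "b"],
    "value": [1.0, 2.0, 3.0],
})

# A grouped mean in pandas: as_index=False plus a rename to get a
# flat result. dplyr's group_by() + summarise() and data.table's
# df[, .(mean_value = mean(value)), by = group] express the same
# thing more directly, which is the usual complaint.
out = (
    df.groupby("group", as_index=False)["value"]
      .mean()
      .rename(columns={"value": "mean_value"})
)
print(out)  # mean_value is 1.5 for group "a" and 3.0 for group "b"
```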