The data structures are often nested and change quite often, so spending time normalizing them all into columns and tables wouldn't make much sense.
I suppose on the human level the more important reason is that I often get requests from non-techies in the company to see snippets from this data. They can read JSON just fine (well, most of them... a few still insist on using Excel, so I have to flatten the JSON to CSV with jq).
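For what it's worth, that flatten step can be sketched without jq too. Here's a minimal Python version; the field names (`id`, `status`) are made up for illustration, and nested values are just dumped back out as JSON strings:

```python
import csv
import io
import json

# Hypothetical records standing in for lines of a .jsonl file.
records = [
    {"id": 1, "status": "ok", "meta": {"retries": 0}},
    {"id": 2, "status": "error", "meta": {"retries": 3}},
]

def jsonl_to_csv(objs, fields):
    """Flatten chosen top-level JSON fields into CSV rows.

    Nested dicts/lists are serialized as JSON strings rather than
    exploded into extra columns.
    """
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(fields)
    for obj in objs:
        writer.writerow(
            json.dumps(v) if isinstance(v, (dict, list)) else v
            for v in (obj.get(f) for f in fields)
        )
    return out.getvalue()

print(jsonl_to_csv(records, ["id", "status"]))
```

jq's `@csv` filter does the same job in one line, but this is handy if the flattening rules get more involved than jq comfortably expresses.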
If you're really talking about streaming a JSON Lines file into a SQL DB as a series of binary JSON documents, that might make a lot of sense.
If, however, you're really talking about a single JSON document, then you're going to be generating that 190GB intermediate JSON file either way. Plugging it into a database to query it just seems like an extra step for little benefit. (It's basically a variant on the question of whether you'll get better whole-pipeline latency from a data warehouse, or a data lake — which has no general answer.)
"JSON Lines" isn't "streamable JSON", it's just JSON that's easier to use with existing tools in a streaming fashion. Just like you can stream an HTML/XML file with a SAX parser, you can do the same with JSON.
I don't think there's any reason it's necessary to have a 190GB JSON file, nor anything stopping one from incrementally dumping it into SQLite. Though that would depend on the format of the proprietary file.
But I will add that there's an obvious benefit to dumping the data into the database: it has indexes and querying capability that don't involve full scans of a 190GB file. The I/O of a 190GB scan alone takes time.
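SQLite can even index an expression over the JSON itself, so lookups on a field don't rescan every document. A sketch, with a hypothetical `$.user` path:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (doc TEXT)")
conn.executemany(
    "INSERT INTO events (doc) VALUES (?)",
    [('{"user": "a"}',), ('{"user": "b"}',)],
)

# Expression index: queries filtering on exactly this expression
# can use the index instead of scanning the whole table.
conn.execute("CREATE INDEX idx_user ON events (json_extract(doc, '$.user'))")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events "
    "WHERE json_extract(doc, '$.user') = 'b'"
).fetchall()
print(plan)  # the plan should mention idx_user rather than a full scan
```

Generated columns are another route to the same thing, but an expression index is the smallest change: the raw JSON stays the only copy of the data.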