Yes, I am confident that we have avoided that specific pitfall because, as I mentioned, we never re-serialize the JSON.
However, thinking on it more, there is one thing Groxx mentioned that has given me pause: "Numbers don't have a specified precision". This could become an issue in the future if, say, 128-bit numbers become commonplace and JSON implementations start emitting them, an edge case that could leave clients limited to 64-bit numbers unable to round-trip values. So we'll consider the possibility of changing the serialization format after researching this more.
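For a concrete sense of the precision problem as it already exists today, here's a small TypeScript sketch (the value is illustrative): a JavaScript client silently loses precision the moment a peer emits integers beyond 2^53, and a future 128-bit scenario would just widen that gap.

```typescript
// JavaScript/TypeScript numbers are 64-bit IEEE 754 doubles, so integers are
// only exact up to Number.MAX_SAFE_INTEGER (2^53 - 1 = 9007199254740991).
const wire = '{"id": 9007199254740993}'; // an int64 a non-JS peer could emit

const parsed = JSON.parse(wire);
console.log(parsed.id);                      // 9007199254740992 -- silently rounded
console.log(parsed.id === 9007199254740992); // true: the original value is gone
```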
> So we'll consider the possibility of changing the serialization format after researching this more.
I asked Brendan Eich if JavaScript will ever change the way it represents floating point numbers (from 64-bit IEEE 754 values), and he said no ("don't break the Web"). And to represent wider numeric values in JSON, you're supposed to use a JSON string plus a schema to specify the type. I'm not sure exactly what he means by that, but I assume it means something like using JSON keys like "<BigInt>mykey" with string values instead of number values, then manually parsing the string into whatever native 128-bit (or arbitrary-precision) type you have.
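If that reading is right, the encoding might look something like this sketch (TypeScript; the "<BigInt>" key prefix is just the convention guessed at above, not anything standardized):

```typescript
// Hypothetical convention from the comment above: a "<BigInt>" key prefix
// marks a value that travels as a JSON string but is semantically a big
// integer. Nothing here is standardized; it is one possible reading.
const payload = { "<BigInt>mykey": "340282366920938463463374607431768211456" }; // 2^128

const raw = JSON.stringify(payload);

// On the receiving side, look up the marked key and parse its string value
// into a native arbitrary-precision type (BigInt in modern JS/TS).
const obj = JSON.parse(raw) as Record<string, string>;
const value: bigint = BigInt(obj["<BigInt>mykey"]);
console.log(value + 1n); // exact arithmetic, no 64-bit float rounding
```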
So - JSON still seems to be the favorite here, but I totally get the anxiety around it because of how under-specified it is. If you avoid re-serialization nothing should break.
Avoiding re-serialization does avoid a ton of issues, yeah. Though I would suggest building in validation immediately, e.g. rejecting duplicate keys, because unexpected duplicates are a common source of bugs and security exploits (it's a fairly common issue with HTTP headers when requests cross multiple systems). A rough sketch follows.
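To make that concrete: `JSON.parse` silently keeps the last duplicate, so catching duplicates means scanning the raw text yourself. Here's a minimal TypeScript sketch, assuming the input has already been confirmed to be valid JSON (the function name `assertNoDuplicateKeys` is hypothetical):

```typescript
// Walks raw JSON text and throws on duplicate keys within the same object.
// Assumes the input is already valid JSON (e.g. JSON.parse succeeded on it).
function assertNoDuplicateKeys(json: string): void {
  let i = 0;

  function skipWs(): void {
    while (i < json.length && " \t\n\r".includes(json[i])) i++;
  }

  // Parse a JSON string starting at the opening quote; return its raw contents.
  // Note: keys are compared as raw escaped text, so "\u0061" and "a" are not
  // flagged as duplicates of each other -- acceptable for a sketch.
  function parseString(): string {
    const start = ++i; // skip opening quote
    while (json[i] !== '"') {
      if (json[i] === "\\") i++; // skip the escaped character
      i++;
    }
    const s = json.slice(start, i);
    i++; // skip closing quote
    return s;
  }

  function parseValue(): void {
    skipWs();
    const c = json[i];
    if (c === "{") parseObject();
    else if (c === "[") parseArray();
    else if (c === '"') parseString();
    else {
      // number, true, false, or null: skip to the next delimiter
      while (i < json.length && !",}] \t\n\r".includes(json[i])) i++;
    }
  }

  function parseObject(): void {
    const seen = new Set<string>(); // keys seen in *this* object only
    i++; // skip "{"
    skipWs();
    if (json[i] === "}") { i++; return; }
    for (;;) {
      skipWs();
      const key = parseString();
      if (seen.has(key)) throw new Error(`duplicate key: "${key}"`);
      seen.add(key);
      skipWs();
      i++; // skip ":"
      parseValue();
      skipWs();
      if (json[i] === ",") { i++; continue; }
      i++; // skip "}"
      return;
    }
  }

  function parseArray(): void {
    i++; // skip "["
    skipWs();
    if (json[i] === "]") { i++; return; }
    for (;;) {
      parseValue();
      skipWs();
      if (json[i] === ",") { i++; continue; }
      i++; // skip "]"
      return;
    }
  }

  parseValue();
}

assertNoDuplicateKeys('{"a": 1, "b": [{"a": 2}]}'); // ok: different objects
assertNoDuplicateKeys('{"a": 1, "a": 2}');          // throws
```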
After evaluating the various options, we seem to be settling on the idea that the protocol will specify that duplicate keys must be handled the same way browsers handle them: the last occurrence of a key wins.
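That matches what `JSON.parse` is already specified to do in ECMAScript, e.g.:

```typescript
// "Last key wins": JSON.parse keeps the final occurrence of a duplicate key.
const obj = JSON.parse('{"key": "first", "key": "last"}');
console.log(obj.key); // "last"
```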