Doesn't the OpenAI embedding model support 8191/8192 tokens? That aside, declaring a winner by token size is misleading; there are more important factors, like cross-language support and precision, for example.
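For what it's worth, the 8191-token input limit is easy to check against locally. A minimal sketch using tiktoken, assuming text-embedding-ada-002 (the model name and limit here are my assumption, not stated above):

```python
# Sketch: count tokens before sending text to the embeddings endpoint,
# assuming text-embedding-ada-002 and its documented 8191-token input cap.
import tiktoken

MAX_TOKENS = 8191  # assumed limit for text-embedding-ada-002

enc = tiktoken.encoding_for_model("text-embedding-ada-002")
text = "some long document..."
n_tokens = len(enc.encode(text))
if n_tokens > MAX_TOKENS:
    print(f"Too long: {n_tokens} tokens (limit {MAX_TOKENS}); chunk or truncate first")
```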
Updated the section to refer to the "Retrieval Average" column of the MTEB leaderboard. Is that the right column to refer to? Can someone link me to an explanation of how that benchmark works? I couldn't find a good write-up on it.
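My rough understanding (not authoritative) is that the Retrieval Average is the mean of the main retrieval metric, nDCG@10, across the MTEB retrieval tasks. You can reproduce a single task's score with the `mteb` package; a minimal sketch, assuming a sentence-transformers model and the SciFact retrieval task as an example:

```python
# Sketch: run one MTEB retrieval task; the leaderboard's "Retrieval Average"
# is (as I understand it) ndcg_at_10 averaged over all retrieval tasks.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works here
evaluation = MTEB(tasks=["SciFact"])  # one small retrieval task as an example
results = evaluation.run(model, output_folder="results")
print(results)  # per-task scores, including ndcg_at_10
```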