Thank you! AoE was my first online game ever, and still my favorite RTS of all time (sorry Starcraft, I love you too), so it was especially fun to add that one.
Miners don't necessarily need supercomputers to mine. Sure, miners who have set up huge operations might be in trouble, but anyone with access to moderately strong computational power, including a basic laptop or even a cheap SoC, can mine. It might even be beneficial to have mining spread out among those running less powerful computers: assuming the miners act independently, the risk of a single mining firm or group controlling a dangerously large share of the network's computational power would decrease substantially.
The author also doesn't mention tipping, which will eventually be the dominant source of mining fees. In a drastic case, Bitcoin users could offer tips substantially larger than the current reward for block discovery/vetting. Should this happen, the system could be self-correcting: if the Bitcoin network becomes too dependent on large miners, the tipping system could act as a near-term "bail out."
If you have to pay for the electricity, it's only economical to mine Bitcoin with an ASIC. CPU and GPU mining is now so slow that you will get very few bitcoins on average, and the cost of the electricity to run it will be greater than the value of the bitcoins you can get. (To smooth out the variance you should probably mine in a pool, but this decreases the expected return only slightly.)
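The break-even argument above can be sketched with some arithmetic. All the numbers below (network hashrate, reward, price, power draw) are illustrative assumptions, not real network statistics:

```python
# Hypothetical break-even sketch: every number here is an illustrative
# assumption, not a real network statistic.

def daily_profit_usd(hashrate_ths, network_ths, block_reward_btc,
                     btc_price_usd, power_watts, kwh_price_usd):
    """Expected daily profit: share of block rewards minus electricity cost."""
    blocks_per_day = 144  # one block roughly every 10 minutes
    revenue = (hashrate_ths / network_ths) * blocks_per_day \
              * block_reward_btc * btc_price_usd
    cost = (power_watts / 1000) * 24 * kwh_price_usd
    return revenue - cost

# A CPU (tiny hashrate, ~100 W) vs. an ASIC (~1 TH/s, ~600 W)
# on a hypothetical 1,000,000 TH/s network:
cpu = daily_profit_usd(5e-8, 1e6, 25, 500, 100, 0.12)
asic = daily_profit_usd(1.0, 1e6, 25, 500, 600, 0.12)
```

Under these made-up numbers the CPU's expected revenue is dwarfed by its electricity bill, while the ASIC stays (barely) profitable, which is the point of the comment above.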
The volume of tips would have to increase drastically, and in any case the volume would depend on the price. Slightly off point, it might not be completely unrealistic to imagine a world in which miners do not derive their main source of income from mining. Perhaps miners could run mining programs on SoCs during the day while the miners themselves work other jobs. The number of miners mining in this way would have to be very large for this configuration to be possible, but should that be the case, miners wouldn't be nearly as financially dependent on mining, and they could be more immune to fluctuations in the price.
SELECT to_tsvector(post.title) ||
to_tsvector(post.content) ||
to_tsvector(author.name) ||
to_tsvector(coalesce((string_agg(tag.name, ' ')), '')) as document
FROM post
JOIN author ON author.id = post.author_id
JOIN posts_tags ON posts_tags.post_id = posts_tags.tag_id
JOIN tag ON tag.id = posts_tags.tag_id
GROUP BY post.id, author.id;
The join condition `posts_tags.post_id = posts_tags.tag_id` compares the `posts_tags` table to itself instead of joining it to `post`; the query should be rewritten as:
SELECT to_tsvector(post.title) ||
to_tsvector(post.content) ||
to_tsvector(author.name) ||
to_tsvector(coalesce((string_agg(tag.name, ' ')), '')) as document
FROM post
JOIN author ON author.id = post.author_id
JOIN posts_tags ON posts_tags.post_id = post.id
JOIN tag ON tag.id = posts_tags.tag_id
GROUP BY post.id, author.id;
I think that both Python and Ruby can serve this purpose well, but Ruby's blocks and method_missing can result in highly flexible and concise APIs. Correct me if I'm wrong, but I don't think there's a precise analog in Python to the Proc/block syntax found in Ruby, with the possible exception of lambdas. Furthermore, delegation via method_missing can make APIs and libraries very concise and readable.
In Python, since functions can be defined anywhere, and decorators can execute arbitrary code (including registering and running the functions they decorate) and can be passed variables from the scope in which they're used, you can make some pretty powerful DSLs. And method_missing delegation can largely be replaced by defining classes at runtime, especially now that people are getting used to passing hashes as arguments (in fact, Rails seems to be slowly deprecating find_by_foo_and_bar in favor of where(foo: _, bar: _)?).
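Both techniques can be sketched briefly. The `COMMANDS` registry and `Recorder` class below are made-up names for illustration; `__getattr__` is the closest Python hook to Ruby's method_missing, since it fires only when normal attribute lookup fails:

```python
# 1. A decorator-based mini-DSL: decorators run at definition time,
#    so they can register the functions they wrap into a lookup table.
COMMANDS = {}

def command(name):
    def register(fn):
        COMMANDS[name] = fn  # side effect at definition time
        return fn
    return register

@command("greet")
def greet(who):
    return f"hello, {who}"

# 2. A rough analog of Ruby's method_missing: __getattr__ is called
#    only when normal lookup fails, so it can synthesize methods on
#    the fly, including find_by_foo_and_bar-style names.
class Recorder:
    def __init__(self):
        self.calls = []

    def __getattr__(self, name):
        def method(*args):
            self.calls.append((name, args))
            return self
        return method

r = Recorder()
r.find_by_foo_and_bar("x", "y")
```

The difference the comment points at is real, though: `__getattr__` intercepts attribute access, not message sends, so Python code tends to prefer explicit keyword arguments (the `where(foo=..., bar=...)` style) over dynamically generated method names.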
"There is no persistence layer involved for experiment group allocation, so that we could minimize the latency/load on our production services. We left all of the complicated metric computations to offline data processing." Does that mean that the experiment group to which a user is allocated is recorded in an in-memory log rather than being stored on disk (or on disk and in memory, via logs)? If so, are there frequent trade-offs between memory and CPU usage when running experiments?
Putting ("triggering") someone into an experiment group is mostly a few function calls: do they meet the criteria (e.g., country, ..), do they fall in the right bucket. The fact that they've been triggered is logged to Kafka, an open-source messaging system whose logs we (eventually) push into Hive.
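Stateless allocation like this usually works by hashing, so the same user always lands in the same bucket without anything being persisted. A minimal sketch, assuming deterministic hashing and made-up criteria (the function names and eligibility rules are hypothetical, not the actual implementation):

```python
import hashlib

def bucket(user_id, experiment, n_buckets=100):
    """Deterministic bucket: the same user + experiment always hashes
    to the same value, so no allocation needs to be stored anywhere."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

def trigger(user, experiment, treatment_buckets=range(0, 50)):
    # Eligibility criteria (e.g., country) are plain function calls.
    if user.get("country") not in {"US", "CA"}:
        return None  # not in the experiment at all
    group = ("treatment" if bucket(user["id"], experiment) in treatment_buckets
             else "control")
    # In production, the trigger event would be logged (e.g., to Kafka)
    # for offline analysis; here we just return the assignment.
    return group
```

Because the assignment is a pure function of the user and experiment IDs, the only state the system keeps is the trigger log itself, which matches the "no persistence layer" description above.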
"But with an algorithm that allows them to assemble in parallel, then they can shape up faster." Could a variation of the Paxos algorithm accomplish this? It sounds like the task of achieving agreement among several robots without a centralized leader is analogous to the decentralized parliament described in Lamport's Paxos paper.
I wonder how relocations within towns that straddle state boundaries, such as moving from the Arkansas side to the Texas side of Texarkana, are reflected in overall interstate migration figures.
Great news! It's unfortunate that record labels still pursue a twentieth-century licensing-based business model. While I do believe that what's bad for major record labels isn't necessarily bad for artists, I do understand the point that record labels acted as quasi-VCs for artists prior to the rise of the internet; before the introduction of platforms like Kickstarter, there must have been a fairly substantial void in funding, especially in the early to mid aughts. Hopefully sites like Bop.fm can help fill the funding void for emerging artists, in combination with crowdfunding sites and streaming platforms.