The public transport service in Hannover, Germany once had a screensaver that you could configure to show the next departure from your nearest station. I thought that was clever marketing. Today you could probably implement this as a web service.
In the Windows 95 days (and probably in the Windows 2000 days, and maybe still today), a screensaver was "just" an .exe renamed to (if memory serves) .scr.
There may have been some special interface that the program being run was expected to conform to so the screensaver subsystem would invoke it, but (IIRC) a screensaver could do anything an ordinary program could do. (That was the big reason for being cautious about where/who you got your screensavers from.)
I like the idea mentioned in the article of exploring limited higher-order functions: functions that may take functions as arguments, but only functions that do not themselves take functions as arguments. But what simplification does this buy (in the implementation of such a language) over a fully higher-order language? The article doesn't explain.
If you return a function, the compiler has to bundle up the values of any variables that the function uses from its outer environment and return the whole package. That's known as a closure. You can't just return a function pointer.
Basically, if you don't return functions, you can avoid dealing with them "escaping" the lexical environment in which their code lives.
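To make the closure point concrete, here's a minimal Python sketch. Python performs this capture automatically; a compiler for a lower-level language would have to build the equivalent "package" of captured variables itself. The names here are illustrative, not from the article:

```python
def make_counter(start):
    # 'count' lives in make_counter's local environment...
    count = [start]

    def step():
        # ...but 'step' reads and mutates it after make_counter has
        # returned, so 'count' must be captured in a closure rather
        # than living on make_counter's (now-dead) stack frame.
        count[0] += 1
        return count[0]

    return step  # returning the function forces the capture

c = make_counter(10)
print(c())  # 11
print(c())  # 12
```

A plain function pointer to `step` would be useless on its own: it has no way to find `count` once `make_counter`'s stack frame is gone.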
This makes you appreciate curation in a museum. Why are these objects shown together? Historical period, size, design feature, material, production method? It looks completely arbitrary.
I think it’s more of a “bruh look at all my wing nuts, whoa” type thing rather than a classification and taxonomy chronicling the history of the wing nut.
The main feature of interval parsing appears to be that it can jump over content, so that a later part of a file does not depend on knowing everything that comes before it. Does Dogma have similar expressiveness?
Yes, the `offset` function does this by specifying a bit offset to branch to. For example, the ICO `dir_entry`, which is a directory list of icon resources in the file: https://github.com/kstenerud/dogma/blob/master/v1/examples/i... - It uses image_offset*8 because everything in an ICO file is a byte offset, and the offsets here are in bits (8 per byte).
Filecoin, which is based on IPFS, creates a market for unused storage. I think that idea is great but for adoption it needs to be as simple as Dropbox to store files. But visit https://filecoin.io/ and the dropbox-like app that you could be willing to try is nowhere to be found. So maybe it is an enterprise solution? That isn't spelled out either. So I am not surprised that this has little traction and the article further confirms the impression.
> to be as simple as Dropbox to store files. But visit https://filecoin.io/ and the dropbox-like app that you could be willing to try is nowhere to be found
I agree with this fully. But as said elsewhere, it's kind of far away from that, and also slightly misdirected.
Imagine asking someone to get started with web development by sending them to https://www.ietf.org/rfc/rfc793.txt (the TCP specification). Filecoin is just the protocol, and won't ever solve that particular problem, as it's not focused on solving that particular problem, it's up to client implementations to solve.
But the ecosystem is for sure missing an easy-to-use end-user application like Dropbox for storing files in a decentralized and secure way.
There's clearly much more going on, but take a machine that can serve 10k req/s with [insert 100 things here] without flinching, and watch it maybe, just maybe, do 10 with IPFS.
My understanding of simulated annealing is that solutions that are not improvements are still accepted with some probability in early steps, but that this probability decreases as the "temperature" drops. Looking at your description (but not the code), I did not see that aspect; it looked like you only accept improvements of the cost function. Is this correct, or where does your solution also accept slight regressions with some probability?
Based on the other comments, they are correct: when doing annealing you usually want to accept some moves that make the objective worse, in order to escape local minima early on.
I abused the definition of annealing a lot in the post but I briefly touched on the idea:
"At first, you might want to make moves or swaps over large distances or you might want to accept some percent of moves that don't improve the objective, but as time goes on, you want to make smaller moves and be less likely to select moves that don't improve the objective. This is the "annealing" part of simulated annealing in the context of FPGA placement."
I think I might have made the writing confusing because I mixed the original definition of annealing (accepting moves that don't improve the objective) with the use of "annealing" for other things like action parameters (e.g. swap distance between two nodes). That's something I should edit to clarify.
Note that, yes, the thing I implemented doesn't do any annealing but rather just picks moves that improve the objective. I am working on some extensions to add real annealing, but that turned out to involve a lot of in-depth technical work that is not obvious.
"At first, you might want to make moves or swaps over large distances or you might want to accept some percent of moves that don't improve the objective, but as time goes on ...
However, as it turns out, you technically don't need this annealing part to make FPGA placement work. You can just randomly try different moves and accept or reject them based on whether they improve the objective function. This is what I did in my toy implementation of an FPGA placer just to keep it simple."
"Annealing, as implemented by the Metropolis procedure, differs from iterative improvement in that the procedure need not get stuck since transitions out of a local optimum are always possible at nonzero temperature. A second and more important feature is that a sort of adaptive divide-and-conquer occurs.
Gross features of the eventual state of the system appear at higher temperatures; fine details develop at lower temperatures. This will be discussed with specific examples."
Yes, without a temperature and cooling schedule, how can it be annealing? It's in the name. It may sound harsh, but I'd call it an abuse of the term to do hill climbing and call it annealing. It also seems lazy, since doing it right is an almost trivial addition to the code. Finding the best cooling schedule might require some experimentation, though.
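For what it's worth, a minimal Metropolis-style acceptance loop with a geometric cooling schedule really is only a few lines. This is a generic sketch, not the placer from the post; the cost function and neighbor generator are placeholders:

```python
import math
import random

def anneal(state, cost, neighbor, t0=1.0, alpha=0.95,
           steps_per_temp=100, t_min=1e-3):
    """Minimize `cost` by simulated annealing.

    `neighbor(state)` proposes a random move. Worse moves are accepted
    with probability exp(-delta / T), which shrinks as T cools, so the
    search can escape local minima early and settles down later.
    """
    t = t0
    best = state
    while t > t_min:
        for _ in range(steps_per_temp):
            cand = neighbor(state)
            delta = cost(cand) - cost(state)
            # Always accept improvements; accept regressions with
            # probability exp(-delta / T).
            if delta <= 0 or random.random() < math.exp(-delta / t):
                state = cand
            if cost(state) < cost(best):
                best = state
        t *= alpha  # geometric cooling schedule
    return best

# Toy usage: minimize a 1-D function with several local minima.
f = lambda x: math.sin(5 * x) + 0.1 * x * x
result = anneal(0.0, f, lambda x: x + random.uniform(-0.5, 0.5))
```

Delete the `math.exp` branch and freeze `t` and you are back to plain hill climbing, which is exactly the difference being discussed here.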
So obscure that in a field as important as optimization we still think in terms of "escaping from local minima". Also, (as a total outsider) the progress in general optimization algorithms/implementations appears to be very slow (I was shocked at how old Ipopt is). I wonder if all the low-hanging inductive biases (for real-world problems) have already been exploited, or if we just have no good way of expressing them? Maybe learning them from data in a fuzzy way might work?
Unless you come up with some new take on the P ?= NP problem, there isn't much we can improve in generic optimization.
There are all kinds of possibilities for specific problems, but if you want something generic, you have to traverse the possibility space and use its topology to find an optimum. If the topology is chaotic, you are out of luck, and if it's completely random, there's no hope.
Couldn't there be something between chaotic and completely random, let's call it correlated, where e.g. (conditional) independence structures are similar across real-world problems?
You mean something that is well behaved in practical situations but intractable in general?
There is plenty of stuff like that; things don't even need to be chaotic for that. Anyway, chaotic and random are just two specific categories; there are many others. Nature happens to like those two (or rather, not exactly random, but it surely likes things that look like it), which is why I pointed them out.
And beyond this intuition (escaping local optima), annealing matters because you can show that (under conditions), with the right annealing schedule (it's rather slow, T ~ 1/log(N_epoch) iirc?), you will converge to the global optimum.
I'm not well-versed enough to recall the conditions, but it wouldn't surprise me if they are quite restrictive, and/or hard to implement (e.g., with no explicit annealing guidance to choose a specific temperature).
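If I remember correctly (recalling, not citing, so treat the constants as approximate), the classical guarantee requires a logarithmic cooling schedule of roughly the form

```latex
T_k \;\ge\; \frac{c}{\log(1 + k)}
```

where k is the iteration count and c has to be large enough relative to the depth of the deepest non-global local minimum. A schedule that slow is useless in practice, which is presumably why real implementations use faster (e.g. geometric) schedules and give up the theoretical guarantee.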
For a document covering the subject of writing, the typesetting of the PDF is remarkably poor: the mix of serif and sans serif, font sizes that don't match the structure of the document, and inconsistent indentation in enumerations, to point out just a few blemishes.
What does your business do? Stripe might not have listed your business model explicitly but could still prefer to not have you as a customer if it is an edge case.
IMHO payment processing is an 'essential service', and processors should be required to provide service to any lawful customer, regardless of their preferences.
Maybe they are doing some shady crypto stuff... (Maybe they participate in selling "numbers with certain properties" which are called Bitcoins, but are just numbers in reality...)
We have a traditional SaaS business model. We aren't an edge case with a complex structure.
We looked up their Restricted Businesses List and we are clean.
> Helping you towards a profitable crypto journey with the power of AI.
Those are not signs of a "traditional SaaS business model". Yes, it is a service, but you are leaving out the context of your service: cryptocurrencies. That alone is probably enough to trigger some alarms inside Stripe. I've worked in fintech before, and it's much better for the company to be risk-averse, blocking first and asking later, than the opposite.
Given this context, you should get in touch with Stripe and explain how your business model isn't a risk to them. Unfortunately, you are working in a domain with a lot of scams and other fraudulent activity. You might not be part of it, but Stripe has no way to know that, and won't invest the man-hours to investigate your whole company and confirm that you aren't being sneaky and defrauding people behind a seemingly legitimate business.
Edit: In fintech I worked on fraud detection, analysis, reporting, and tooling for fraud agents to run their investigations. Another red flag a fraud agent would encounter when investigating your company is that it is registered in Estonia; my best guess would be through the e-Residency scheme, which allows businesses to be registered online.
Neither of the co-founders (you included) seems to be from Estonia or resident in Estonia. If I were a fraud agent looking at this case, I'd flag it for further review based on all of these factors: a company working in an industry rife with scams and fraud, using Estonia's e-Residency scheme, while neither founder appears to be a citizen or resident of Estonia. At the company I worked for (a pretty big fintech in the EU), this would be put under "manual review", and further transactions would be blocked immediately until the fraud review was done.