This will not be taught in textbooks. Wildberger works in a math world where "infinity" or even "really really big numbers" don't exist. There is nothing mathematically wrong about that world, but others (like me) do not find this problem context to be a useful framework to work in.
Interesting. So sort of a polar opposite to hyperreal numbers[1]. Instead of postulating the existence of numbers to infinity and beyond, it postulates that there is no infinite.
> There is nothing mathematically wrong about that world
Thank you for acknowledging this. Every time Norm's work comes up on HN there is a subcurrent of comments about how his philosophy of math is wrong or dumb, whose arguments can be summed up as "Lol no infinity wtf".
Do I personally agree with his philosophy? No. But I still watched all his videos because they are entertaining and thoughtful, and he is rigorous in his definitions and examples.
I've been using it as my primary search engine for a couple of months. It's not great as a search engine. I find its local search to be poorly supported (e.g. the search "food near me" works well in Google but not in Ecosia).
Ecosia doesn't emphasize recent events, news, or posts in search results as much as I'm used to --- but I haven't decided if this is good or bad.
It's not bad enough that I've switched away. But I do sometimes use a better search engine when I want better results.
I made a partial replacement that doesn't allow user-submitted content at https://davidlowryduda.com/static/MathShare/. It just stores the content in the URL, and is limited by URL size limits. In practice this means you have approximately one page of text.
I wrote about making this at https://davidlowryduda.com/mathshare/. I was trying out the LLM interaction that made it near the top of HN recently, and it worked very well.
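For context, the store-it-in-the-URL trick can be sketched in a few lines of Python. This is a hypothetical reimplementation, not MathShare's actual code: the function names and the compress-then-base64url scheme are my assumptions.

```python
import base64
import zlib

def encode_to_fragment(text: str) -> str:
    # Compress, then base64url-encode so the payload survives in a URL.
    compressed = zlib.compress(text.encode("utf-8"), level=9)
    return base64.urlsafe_b64encode(compressed).decode("ascii")

def decode_from_fragment(fragment: str) -> str:
    # Inverse: base64url-decode, then decompress back to the original text.
    raw = base64.urlsafe_b64decode(fragment.encode("ascii"))
    return zlib.decompress(raw).decode("utf-8")

page = "Let f(x) = x^2. Then f'(x) = 2x."
url = "https://example.com/view#" + encode_to_fragment(page)
```

Since browsers and servers commonly cap URLs somewhere in the low thousands of characters, the compressed-and-encoded payload bounds you to roughly a page of text, which matches the limitation described above.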
I'm confronted with a similar problem frequently. I have a bash script that's slowly growing in complexity. Once bash scripts become sufficiently long, I find editing them later to be very annoying.
So instead, at some point I change the language entirely and write a utility in python/lua/c/whatever other language I want.
As time goes on, my limit for "sufficient complexity" to justify leaving bash and using something like python has dropped radically. Now I follow the rule that as soon as I do something "nontrivial", it should be in a scripting language.
As a side-effect, my bash scripting skills are worse than they once were. And now the scope of what I consider "trivial" is shrinking!
My problems with python are startup time and packaging complexity (either dependency hell or a full-blown venv with pipx/uv). I’ve been rewriting shell scripts as either Makefiles (crazy, but it works, is rigorous, and you get free parallelism) or rust “scripts” [0], depending on their nature (number of outputs, number of command executions, etc.)
Also, using a better shell language can be a huge productivity (and maintenance and sanity) boon, making it much less “write once, read never”. Here’s a repo where I have a mix of fish-shell scripts with some converted to rust scripts [1].
I've often read that people have a problem with Python's startup time, but that's not at all my experience.
Yes, if you're going to import numpy or pandas or other heavy packages, that can be annoyingly slow.
But we're talking using Python as a bash script alternative here. That means (at least to me) importing things like subprocess, pathlib. In my experience, that doesn't take long to start.
34 milliseconds doesn't seem like a lot of time to me. If you're going to run it in a tight loop then yes, that's going to be annoying, but in interactive use I don't even notice delays as small as that.
As for packaging complexity: when using Python as a bash script alternative, I can mostly get by with using only stuff from the standard library. In that case, packaging is trivial. If I do need other packages then yes, that can be a major nuisance.
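As a concrete example of the stdlib-only style, here's a sketch of the kind of glue I mean: subprocess and pathlib standing in for command substitution and globbing. The helper names are mine, not from any particular script.

```python
import subprocess
from pathlib import Path
from typing import Optional

def newest_log(directory: str) -> Optional[Path]:
    # Roughly `ls -t "$dir"/*.log | head -n1`.
    logs = sorted(Path(directory).glob("*.log"),
                  key=lambda p: p.stat().st_mtime, reverse=True)
    return logs[0] if logs else None

def run(cmd: list) -> str:
    # Roughly `out=$(cmd)` with `set -e` semantics: raises on nonzero exit.
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout

print(run(["echo", "hello"]).strip())  # hello
```

Nothing here leaves the standard library, so the script is a single file you can copy anywhere a Python interpreter exists.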
Not necessarily. Shell scripts often embody the unix “do one thing and do it well” principle. To download a file in a bash script you wouldn’t (sanely) source a bash script that implements an http client; you would just shell out to curl or wget. Same for parsing a json file/response: you would just depend on and defer to jq. Whereas in python you could do the same, but would more likely/idiomatically pull in the imports to do it in python.
It’s what makes shell scripts so fast and easy for a lot of tasks.
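To make the contrast concrete, here is the same tiny task (pulling a field out of a JSON document) written both ways in Python. This is an illustrative sketch; the function names are mine.

```python
import json
import subprocess

def name_via_jq(doc: str) -> str:
    # Shell-script style: depend on and defer to jq (must be on PATH),
    # the equivalent of `echo "$doc" | jq -r .name`.
    return subprocess.run(["jq", "-r", ".name"], input=doc,
                          capture_output=True, text=True,
                          check=True).stdout.strip()

def name_via_stdlib(doc: str) -> str:
    # Idiomatic Python style: pull in the import and parse in-process.
    return json.loads(doc)["name"]

doc = '{"name": "curl", "tags": ["http", "cli"]}'
print(name_via_stdlib(doc))  # curl
```

The jq version mirrors what a shell script does; the stdlib version trades the external dependency for an import, which is exactly the difference being described.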
I have exactly the same issue. I maintain a project called discord.sh which sends Discord webhooks via pure Bash (and a little bit of jq and curl). At some point I might switch over to Go or C.
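For a sense of what the jq-and-curl portion of such a script does, here's a hedged Python sketch of an equivalent webhook post. The payload fields (`content`, `username`) are standard Discord webhook fields, but this is not discord.sh's implementation, just an illustration.

```python
import json
import urllib.request
from typing import Optional

def build_payload(content: str, username: Optional[str] = None) -> str:
    # The part a bash script would assemble with jq: a JSON webhook body.
    body = {"content": content}
    if username:
        body["username"] = username
    return json.dumps(body)

def send(webhook_url: str, payload: str) -> int:
    # The part a bash script would hand to curl: POST the JSON body.
    req = urllib.request.Request(
        webhook_url, data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

payload = build_payload("build finished", username="ci-bot")
# send("https://discord.com/api/webhooks/<id>/<token>", payload)
```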
I've been using it daily for many years now and it does exactly what I expect it to do.
Now I'm a little concerned by the end of your message because it could make its usage a bit trickier...
My main use case is to curl the raw discord.sh file from GitHub in a Dockerfile and put it in /usr/local/bin, so I can run _discord.sh_ anytime I need it.
Mostly used for CI images.
The only constraint is to install jq if it's not already installed on the base image.
Switching to Go or C would make the setup much harder, I'm afraid.
On the concern that it would be harder to set up: I think it would actually be easier. You would simply curl the statically compiled Go or C binary into your path, which would also remove the need for jq or curl to be installed alongside.
I think the reason I haven’t made the switch yet is I like Bash (even though my script is getting pretty big), and in a way it’s a testament to what’s possible in the language. Projects like https://github.com/acmesh-official/acme.sh really show the power of Bash.
That and I think the project would need a name change, and discord.sh as a name gets the point across better than anything I can think of.
From what I can tell, it seems it's possible to run this without installing the Go, Rust, or C toolchains yourself.
To quote from the page:
With scriptisto you can build your binary in an automatically managed Docker container, without having compilers installed on the host. If you build your binary statically, you will be able to run it on the host. There are a lot of images that help you build static binaries, starting from alpine offering a MUSL toolchain, to more specialized images.
Find some docker-* templates via scriptisto new command.
Examples: C, Rust. No need to have anything but Docker installed!
Builds in Docker are enabled by populating the docker_build config entry.
Also, I am watching the video again because I had viewed it a looong time ago!
Why would that make the setup harder? If they provide a statically-linked executable, you can just download and run it, without even the need to install jq or anything else. It's not like they'd provide Go code and ask you to compile it yourself. Go isn't Python.
Yesterday, I had a problem where wget alone could do 98% of what I wanted. I could restrict which links it followed, but the files I needed to retrieve were a url parameter passed in with a header redirect at the end. I spent an hour relearning all the obscure stuff in wget to get that far. The python script is 29 lines, and it turns out I can just target a url that responds with json and dig the final links out of that. Usually though, yeh, everything starts as a bash script.
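For flavor, the JSON-then-download approach can be sketched like this. Everything here is hypothetical: the endpoint, the `url` key, and the helper names are stand-ins for whatever the real site exposes.

```python
import json
import urllib.request

def extract_links(node, key="url"):
    # Walk the decoded JSON and collect every string stored under `key`
    # -- the "dig the final links out" step.
    found = []
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key and isinstance(v, str):
                found.append(v)
            else:
                found.extend(extract_links(v, key))
    elif isinstance(node, list):
        for item in node:
            found.extend(extract_links(item, key))
    return found

def fetch_and_download(index_url):
    # Fetch the JSON index, then grab each file wget was fighting over.
    with urllib.request.urlopen(index_url) as resp:
        doc = json.load(resp)
    for link in extract_links(doc):
        urllib.request.urlretrieve(link, link.rsplit("/", 1)[-1])
```

Once the site hands you a machine-readable index, the obscure wget link-filtering flags stop being necessary at all.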
I definitely agree. Bash is such an unpleasant language to work with, with so many footguns, that I reach for a language like Python as soon as I'm beyond 10 lines or so.
About 5 years ago, StackOverflow messed up and declared that they were making all content submitted by users available under CC-BY-SA 4.0 [1]. The error here is that the user content agreement said that all users' contributions are made available under CC-BY-SA 3.0 (with no "or later" clause). In the middle there were also some confusing licensing problems concerning code vs non-code.
I remember thinking that if any of the super answerers really wanted, they could have tried to sue for illegally making their answers available under a different license. But I thought that without any damages, this probably wasn't likely to succeed.
But now I wonder whether making all content available to AI scrapers, and OpenAI in particular, might be enough to actually base a case on. As far as I can tell, StackOverflow continued being duplicitous about which license applies to which content for the latter half of 2018 and the first few months of 2019. Their current licensing suggests CC-BY-SA 3.0 for things before May 5 2018, and CC-BY-SA 4.0 for things after. Sometime in early 2019 (if memory serves, after the meta post I link to), they made users log in again and accept a new license agreement relicensing content. But those middle months are murky.
My understanding of licensing law is that something like 3.0 -> 4.0 is very unlikely to be a winnable case in the US.
Programmers think like machines. Lawyers don't. A lot of confusion comes from this. To be clear, there are places where law is machine-like, but I believe licensing is not one of them.
If two licenses are substantively equivalent, a court is likely to rule that it's a-okay. One would most likely need to show a substantive difference to have a case.
IANAL, but this is based on a conversation with a law professor specializing in this stuff, so it's also not completely uninformed. And it matches up with what you wrote. If your history is right, the 2019 change is where there would be a case.
The joyful part here is that there are 200 countries in the world, and in many, the 3.0->4.0 would be a valid complaint. I suspect this would not fly in most common law jurisdictions (British Empire), but it would be fine in many statutory law ones (e.g. France). In the internet age, you can be sued anywhere!
> If two licenses are substantively equivalent, a court is likely to rule that it's a-okay. One would most likely need to show a substantive difference to have a case.
Which does exist and can affect the ruling. CC notably didn't grant sui generis database rights until 4.0, and I'm aware of at least one case in South Korea where this could have mattered, because the plaintiff argued that these rights were never granted to and thus violated by the defendant. Ultimately it was found that the plaintiff didn't have database rights anyway, but it could have gone otherwise.
A super literal reading of some bad wording in 3.0 created an effect the authors say they did not intend and fixed in 4.0. Given the authors did not intend this interpretation, a judge is likely to assume people using the licence before it came to light also did not, hence switching to 4.0 is fine. Conversely, now that this is widely known, continuing to use 3.0 could be seen as explicitly choosing the novel interpretation (arguably this would be a substantive change).
> a judge is likely to assume people using the licence before it came to light also did not
Why would the judge have to assume anything? The person suing could simply tell the judge they did mean to use the older interpretation, and that they disagree with the "fix". They're the ones that get to decide, since they agreed to post content using that specific license, not the "fixed" one.
But the people suing aren't trying to choose how the license is interpreted, they're trying to prevent the other party from changing the text. If the change is meant to "fix" how the text should be interpreted (which is what you said), then they're the ones trying to choose the exact interpretation.
I personally write "IANAL", not to reduce my personal legal liability, but rather to give a heads up to those reading that I am not an expert, that I am likely wrong, and that you likely shouldn't listen to me.
I feel there's a common thread that should maybe be some kind of internet law: people who make a point of noting they are not experts are more often correct than people who confidently write as though they are.
You see this particularly with crypto, where "I am not a crypto expert" is usually accompanied by a more factual statement than one from the self proclaimed expert elsewhere in the thread.
One cannot legally practice law without a license. The definition of that varies by jurisdiction. Fortunately, in my jurisdiction, "practicing law" generally implies taking money, and it's very hard to get in trouble for practicing law without a license. However, my jurisdiction is a bit of an outlier here. Yours might differ.
In general, the line is drawn at the difference between providing legal information and legal advice.
Generic legal discussions, like this one, are generally not considered practicing law. Legal information is also okay. If I say "the definition of manslaughter is ...," or "USC ___ says ___," I'm generally in the clear.
Where the line is crossed is in interpreting law for a specific context. If I say "You committed manslaughter and not murder because of ____, which implies ____," or "You'd be breaking contract ____ because clause 5 says ____, and what you're doing is ____," that's legal advice.
The reasons cited for this are manifold, but include non-obvious ones, such as that clients will generally present their case from their own perspective. A non-lawyer is unlikely to have experience with what questions to ask to get a more objective view (or, even if the client is objective, what information they might need to make a determination). Even if you are an expert in the law, it's very easy to accidentally give incorrect advice, which can have severe consequences.
In practice, most of this is protectionism. Bar associations act like a guild. Lawyers are mostly incompetent crooks, and most are not very qualified to provide legal advice either, but c'est la vie. If you've worked with corporate lawyers, this statement might come off as misguided, but the vast majority of lawyers are two-bit operations handling hit-and-runs, divorces, and similar.
In either case, it's helpful to give the disclaimer so you know I'm not a lawyer, and don't rely on anything I say. It's fine for casual conversation, but if tomorrow you want to start a startup which helps people with legal problems, talk to a qualified lawyer, and don't rely on a random internet post like this one.
I always assumed it was the same type of courtesy as IMHO, and someone taking legal advice from random strangers on the internet wouldn't result in any legal liability on the side of the commenters.
Yes, people have been sued before for giving advice that was acted upon.
I remember hearing about a construction engineer who was sued for giving bad advice, whilst drunk, to a farmer about fixing a dam. The dam failed and the engineer was found to be liable.
I can see the reasoning behind the case, as the engineer has plausible expertise in the domain and could credibly give actionable advice.
When it comes to lawyers, there is already a legal framework in which lawyers are responsible when giving legal advice, even when it's not to their clients, the same way medical professionals have specific liabilities regarding the medical acts they can perform.
Non-lawyers giving legal advice don't fit that framework, unless they explicitly pose as one. I'd also exclude malicious intent: whatever the circumstances, if it can be proven and results in actual harm, there's probably no escape for the perpetrator.
That’s possible because the engineer is licensed. A random guy giving bad advice while failing to disclose he’s not an engineer would face no such liability (so long as he didn’t suggest he was an engineer).
It is worth remembering that law professors have a vested interest in making sure the system works as you described. If contract law were straightforward, they'd be out of a job.
That's an admirable goal but if there are any "bugs" in the contract you probably don't want it executed mindlessly. Human language isn't code and even code isn't always perfect so I'd rather not be legally required to throw someone out a window because someone couldn't spell "defederate".
I agreed in the abstract, but not in this specific case (the specific professor was one of integrity, and sufficiently famous that this was not an issue).
However, it's worth noting the universe is a cesspool of corruption. If you pretend it works the way it ought to and not the way it does, you won't have a very good time or be very successful. The entire legal system is f-ed, and if you pretend it's anything else, you'll end up in prison or worse.
> if any of the super answerers really wanted, they could have tried to sue for illegally making their answers available under a different license.
They can plausibly sue people other than StackOverflow if those people attempt to reuse the answers under a different license. But I think it's very difficult to find a use that 4.0 permits that 3.0 doesn't.
The blog illustrates that such assumptions about what counts as sufficient attribution are fraught with danger, so "the smallest professional courtesy" can expose you to a $150k risk.
People put their content on the site for the public to use, and now the public is using it, it's just that "the public" includes AIs. Admittedly, a non-human public, nonetheless ...
The problem is LLMs don't provide attribution/credit which directly violates the license[0]
Search engines were already a "non-human public" that scraped the site, but they linked directly to the answers, which was great. They didn't claim it's their work like these models do. The problem isn't human vs non-human. LLMs aren't magic; they don't create stuff out of thin air. What they're doing is simply content laundering.
The container ship "unluckily" maneuvered between the protective barriers. About 4 more protective barriers would have stopped this collapse.
------
No bridge survives being struck by a container ship. That's why barriers are erected around critical points. There already were barriers, they just weren't complete coverage for some reason. (EDIT: Maybe the older 1970s era design of this particular bridge wouldn't allow more protection to be placed. Obviously this situation calls for a full investigation / lessons learned kind of thing, as part of the new bridge building process)
Older bridges, no, but newer bridges absolutely should. The Bay Bridge was struck in 2007 and came away mostly unscathed due to earlier efforts to prevent catastrophic damage in exactly that scenario.
In my defense, the SF-Oakland Bay Bridge is older and carries 4x more traffic than the "other" Bay bridge! But yeah, given that the Chesapeake one is just down the road from the bridge that collapsed, I get the confusion.
In a Bridge of Theseus sort of way. The entire Eastern span is very new, a lot of the approaches have been rearranged, and major components of the Western span have been replaced over the years. But I guess none of this affects the age of the bridge, at least in Wikipedia’s estimation :)
A lot of bridges have their pilings set on mini islands, terrifically reinforced piles of stone and concrete that extend for quite some distance around the actual support. I don't know why some are built without that, it always weirds me out seeing the spindly legs going straight into the water, and this is why.
Edit to add: Check out Fort Carroll, precisely such an artificial island just a few hundred yards away in the very same harbor. It was built in the 1840's as a military position to defend the harbor, and has fallen into disuse. Now just imagine if the bridge sat on a couple of those, instead of the foundations it had. Ship would've barely dented the wall.
Civil engineering is very complex and doesn’t go off of feelings. I’m sure the type of soil and rock that the bridge is built on inform such decisions.
I would, and furthermore I think there is a massive bias at play: if the exact same disaster happened in China, there would be jokes about bridges made of Chinesium.
There is an expectation that a disaster happening in the west is the result of an unforeseeable act of god, but in China it will be the result of corruption or shoddy workmanship.
Whereas in reality, maintenance standards in the west have fallen while in the east they have improved.
So now this bias protects responsible decision makers from legal consequences: no one went to prison for the Grenfell disaster, the Post Office scandal, or the Boeing debacle.
> Whereas in reality maintenance standard in the west have fallen
In the context of this incident, are you saying that we _previously_ used to go around retrofitting our 50-year-old bridges with more modern defenses, and then at some point since then we stopped doing this? Obviously if we're talking about new construction, it stands to reason that standards have only _increased_, but this was an old bridge built to old standards. So which standards have "fallen" to result in this disaster specifically?
> 46,154, or 7.5% of the nation’s bridges, are considered structurally deficient, meaning they are in “poor” condition. Unfortunately, 178 million trips are taken across these structurally deficient bridges every day
I would also be interested to know how they decide to pick up a site. I was very surprised to learn that a technical note posted only to my website was picked up somehow. (I am a mathematician and so there are other things on my site, but it’s some custom static site generator thing and I’m still astounded).