In the first code example in the readme ("First program"), there's `sdlcls`, `SDLinit`, and `SDLShow`. Is there some significance to the capitalisation?
Ah, it might be nice to mention that before the first code example, then. Or just use consistent case in the first example, to avoid distracting people with details that aren't the thing you're trying to demonstrate.
In 1983, AT&T released the fifth version of Unix, called "System V". Part of the release was an ABI specification for how the different parts of the system would talk to one another. Notably, the main portion of the spec described portable things like the file-format of executables, and the details for each supported platform were described in appendixes.
The SysV ABI is still used to this day, although the specification itself has withered until only two chapters remain[1], and CPU vendors still publish "System V ABI appendix" documents for platforms that System V's authors could not have dreamed of[2].
C as an interface is going to be around for a very long time, like POSIX and OpenGL and the SysV ABI standard. C as an actual language might not - it might wind up as a set of variable types that other languages can map into and out of, like what happened to the rest of the SysV ABI specification.
Steve Wozniak was incredibly foresighted when designing the Apple II, to make sure that expansion cards could disable the default ROMs and even disable the CPU, making this kind of thing possible. The article mentions a chunk of memory "used by peripheral devices"; every expansion card got its own slice of the address space, so you could plug a card in any slot and it would Just Work (maybe you'd have to tell software what slot the card was in). I was very disappointed when I "upgraded" to a 386 and suddenly cards had to be manually configured to non-conflicting IRQs and I/O addresses.
And if something didn't work, he included a complete debugger in ROM, the "Apple II Machine Language Monitor", so you could always just disassemble and poke at things, pipe disassembly to the printer, read memory, change code, add your own macros to CTRL+Y, and rerun stuff. All that without extra software or a massive pile of printed assembly.
from BASIC:
CALL -151 (short for CALL 65385, but BASIC can't handle unsigned INT so that wouldn't work)
F666G
I don't think this is entirely due to Wozniak. Early "home" computer systems were based on connecting cards to a bus (e.g. the S-100 bus), with one card carrying the CPU, another RAM, a third the disk controller, another video, etc. The cards were then memory-mapped; presumably you controlled the mapping by setting jumpers. (I guess you're saying that the Apple II managed this automatically?) Of course the full story might be a bit more complicated: the 6502 and 6800 used memory-mapped I/O, whereas the 8080 (and Z80?) had dedicated I/O pins coming out of the CPU.
You're correct; slot 6 for instance is $C600. If you crashed to the system monitor you could boot a disk by entering C600G (with the 'G' standing for 'go to').
IIRC the disk controller had firmware that loaded the first 256 byte sector from disk into memory.
I wouldn't go so far as to say it was "prophetic". Contemporary DEC PDP-8 (OMNIBUS) and PDP-11 (UNIBUS/QBUS) systems had a similar approach to "interoperability", where cards for peripherals were also mapped into the machine's address space. It was great that Woz saw the utility of this and brought it into homebrew/microcomputer design.
I understand that Steve knew about those designs, but given that he was trying to create an inexpensive computer under hard trade-offs, his decision looks very wise and smart. We can know about complex architectures, yet it can be very difficult to replicate them in cheaper devices.
APL and K are still pretty daunting, but I've recently been dabbling in Lil[1], which is something like a cross between K and Lua. I can fall back on regular procedural code when I need to, but I appreciate being able to do things like:
127 * sin (range sample_rate)*2*pi*freq_hz/sample_rate
This produces a one-second audio clip of a "freq_hz" sine wave at the given sample rate. The "range sample_rate" produces a list of integers from 0 to sample_rate, and all the other multiplications and divisions vectorise to apply to every item in the list. Even the "sin" operator transparently works on a list.
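For comparison, roughly the same computation in Python with NumPy, as a sketch; `sample_rate` and `freq_hz` here are assumed example values, not anything from the Lil program:

```python
import numpy as np

sample_rate = 8000   # assumed example values
freq_hz = 440.0

t = np.arange(sample_rate)  # like "range sample_rate": integers 0..sample_rate-1
clip = 127 * np.sin(t * 2 * np.pi * freq_hz / sample_rate)
```

Every arithmetic operator and `np.sin` broadcast over the whole array at once, much like the Lil expression, though NumPy needs the explicit `np.` namespace where Lil's operators vectorise by default.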
It also took me a little while to get used to the operator precedence (always right-to-left, no matter what), but it does indeed make expressions (and the compiler) simpler. The other thing that impresses me is being able to say:
maximum:if x > y x else y end
...without grouping symbols around the condition or the statements. Well, I guess "end" is kind of a grouping symbol, but the language feels very clean and concise and fluent.
for that matter, i always wonder how people mistake python for numpy :) they have surprisingly little in common.
but enough talking about languages that suck. let's talk about python!
i'm not some braniac on a nerd patrol, i'm a simple guy and i write simple programs, so i need simple things. let's say i want an identity matrix of order x*x.
nothing simpler. i just chose one of the 6 versions of python found on my system, created a venv, activated it, pip installed numpy (and a terabyte of its dependencies), and that's it - i got my matrix straight away. i absolutely love it:
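A minimal sketch of that last step, assuming `x` is the desired order:

```python
import numpy as np

x = 3
m = np.eye(x)  # x-by-x identity matrix (float64, ones on the diagonal)
```

`np.eye` is the standard NumPy one-liner for an identity matrix; `np.identity(x)` is an equivalent spelling.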
assuming you're implying that numpy has anything to do with the python spec, i totally agree with you. only it doesn't. neither do pytorch and pandas (and a good thing too, poor python doesn't need any extra help to be completely f).
> you get an nxn identity matrix by...
no, man, that's how you get it. really advanced technique, kudos!
i get it by:
id:{...} /there are many ways to implement identity in k, and it's fun!
id 3
+1.00 +0.00 +0.00
+0.00 +1.00 +0.00
+0.00 +0.00 +1.00
but if you can keep a secret, more recently we've gotten so lazy and disingenuous in k land, and because we need them bloody matrices so often now, we just do it like so:
(but of course before we do that we first install python4, numpy, pytorch, pandas and polars - not because we need them, just to feel like seasoned professionals who know what they're doing)
this is of course obvious first idea, but the recipe from above is actually from the official k4 cookbook. t=t is less innocent than it seems, i'm afraid.
Pretty much, yeah! The difference is that in Python the function that calculates a single value looks like:
foo(x)
...while the function that calculates a batch of values looks like:
[foo(x) for x in somelist]
Meanwhile in Lil (and I'd guess APL and K), the one function works in both situations.
You can get some nice speed-ups in Python by pushing iteration into a list comprehension, because it's more specialised in the byte-code than a for loop. It's a lot easier in Lil, since it often Just Works.
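A sketch of the speed-up being described; the function and list names here are made up for illustration:

```python
import timeit

def foo(x):
    return x * x + 1

somelist = list(range(10_000))

def with_loop():
    out = []
    for x in somelist:
        out.append(foo(x))  # method lookup + call on every iteration
    return out

def with_comprehension():
    # Compiled to tighter bytecode than the explicit loop-plus-append,
    # so it's usually somewhat faster for the same result.
    return [foo(x) for x in somelist]

loop_time = timeit.timeit(with_loop, number=50)
comp_time = timeit.timeit(with_comprehension, number=50)
```

Both produce identical lists; the comprehension just avoids the per-iteration `out.append` attribute lookup and uses a specialised append opcode. Exact timings vary by interpreter version, so no particular ratio is guaranteed.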
A few more examples in K and Lil where pervasive implicit iteration is useful, and why their conforming behavior is not equivalent to a simple .map() or a flat comprehension: http://beyondloom.com/blog/conforming.html
It isn't really related to the Infocom that released the Zork games, except in a legal sense. Infocom was sold to Activision in 1986, and shut down as a studio in 1989. Circuit's Edge was published in 1990, labelled "Infocom" but just because that's the brand Activision chose to market it under.
Legally, no. But it was written in the spirit of Infocom games — all the gameplay is designed around text narrative, with the classic text parser to drive the action — and Effinger had connections with the actual Infocom team (he wrote their Zork paperback adaptations). And Michael E. Moore from Infocom was the associate producer; they'd phone him for game direction and advice.
I think it's spiritually an Infocom game. If the company had persisted and there were any future in adventure games, Circuit’s Edge is exactly what they would have produced.
Brøderbund was the publisher of "Stunts", but not the developer. The developer was Distinctive Software Inc. who had previously developed the hit games Test Drive and Test Drive II: The Duel for Accolade. For whatever reason, Accolade developed Test Drive III in-house, and DSI developed Stunts on their own.
After Stunts, DSI got bought by Electronic Arts. They were briefly "Pioneer Productions" (or at least, people from DSI were part of that group within EA) and made the original Need For Speed, but eventually became just a part of EA Canada.
> Brøderbund was the publisher of "Stunts", but not the developer. The developer was Distinctive Software Inc.
This is literally the first thing I write in the article :-) I also link to a video about DSI's story which in my opinion deserves more views.
Test Drive III being developed by a different company explains why that game is visionary but fundamentally broken, while DSI's creations are still fun to play.
Maybe you had exactly the right CPU for it? On my old computer TD3 ran ~3x faster than it should have because the devs forgot to implement game clock calibration, an unforgivable sin for a 1990 game. On top of that, the steering sensitivity was completely off.
Would you like me to register you a nicer domain name?
No, thank you. Even if you can find one (most of them seem to have been registered already, by people who didn't ask whether we actually wanted it before they applied), we're happy with the PuTTY web site being exactly where it is. It's not hard to find (just type ‘putty’ into google.com and we're the first link returned), and we don't believe the administrative hassle of moving the site would be worth the benefit.
I wonder if they changed their mind because Google ceased to be a reliable way to find them.
The first link I got when I searched for "putty" was `putty.org` which, according to its footer: "The PuTTY project or its authors have never owned this domain, registered it, or purchased it."
Nevertheless, I can't consider relying on probabilistic algorithms controlled by 3rd parties to be a wise strategy.
Also, these days, after decades of habit building and a rise in awareness about scam-related stuff, I think people expect to see the name of the project early on in the URL, not in 7th position as it is currently.
Google right now lists the title of putty.org as "PuTTY", even though right now this text is only in the footer. Up until August I guess it provided a download link, but the title was not "PuTTY".
I suspect that the recent kerfuffle motivated people to finally clean out bogus hyperlinks that casually listed putty.org as the download site, which would have been contributing to inflated page rank up to that point. I found one on a wiki and fixed it, myself, and I'm sure that I was not the only person who went looking.
Because it's affiliated with _another_ ssh client and there seems to have been various levels of shadyness over time, see previous discussion: https://news.ycombinator.com/item?id=44558328
Your assumption is false, so the question is without proper foundation. GreenEnd's Chiark is owned by Ian Jackson. Simon Tatham is a user on the system, with a home directory. One of a list of such users, including Rachel Coleman and Matthew Garrett.
It seems almost hostile to users. Why should I need to use some third party tool to find your thing? If you're paying for a domain anyway, pay for a meaningful one.
… Well, I guess that's what they've done. Surely nobody could ever have been this naïve, though; it's not as though Google massaging results into unusable mess is anything new.
> Why should I need to use some third party tool to find your thing?
How else would you find it? By typing domain name guesses into your address bar until you hit the right one? How would you be sure you've hit the right one and not a scammer/squatter?
This is not a particularly easy problem to solve, and I agree that relying on Google to accurately and safely deliver you to the correct web site isn't great either, but I think we'd be much worse off without search engines.
Also a weird choice to go with a nuTLD which may or may not price gouge them in the future leaving them with the choice to either pay up or potentially have someone malicious taking over tons of inbound links.
I barely know what SSH keys are, but last week when I was asked to provide one for an SFTP site at work, they said to create a pair using PuTTY.
Well, I googled putty and found a couple of different .org domains, one of which said it was legit but not official, and another which said it was official but looked wildly out of date.
On neither one could I find a download for Mac that worked. The one I tried gave a scary "we no longer allow putty sudo access as it's dangerous" error, and when I googled this error I could find no explanation to assuage me.
And since I wanted to make sure what I was doing was legit, I searched for alternatives.
Eventually I discovered I could use the command line on my Mac to generate the keys I needed. But first I installed Xcode, then ran the command (I used ChatGPT to tell me exactly how to get the type and length I needed). It was easy.
Side note, the whole culture of downloading random software and using it with just a single line in a terminal is always sketchy to me too. But I’m not a coder so I’m not used to it.
The idea is that you will need to put some trust in the project anyway, since you’re trying to install it. Might as well make it easier with a one line install.
Edit: You should only do this if someone reliable tells you to, honestly. Doing this with truly random projects you aimlessly find is not a good idea.
If you hadn’t discovered this already with your Mac CLI commands: OpenSSH's ‘ssh-keygen’ command is a good way to create SSH keys in the CLI, and it ships in many OSes or is a lightweight download. The OpenSSH website name is unambiguous, which is a benefit.
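For anyone in the same spot, a minimal sketch; the file name and comment are arbitrary, and you'd want a real passphrase instead of the empty `-N ""` for anything that matters:

```shell
# Generate an Ed25519 keypair non-interactively into the current directory.
ssh-keygen -t ed25519 -f ./demo_key -N "" -C "work-sftp"

# The private key (demo_key) stays with you; hand out only the .pub file.
cat ./demo_key.pub
```

The `.pub` file is what you'd give to whoever runs the SFTP server; the unsuffixed file is the secret half and never leaves your machine.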
This is helpful (and something I've used Wikipedia for myself) but it's far from ideal, since it wouldn't be too hard for someone to edit that page to point to a malicious domain. Not sure if that's happened before, but I can see it going unnoticed for quite a while as long as the target site looks legit enough.
That’s the outdated-looking website I found that didn’t have a Mac version. I’m guessing I’m supposed to use the Unix version there?
The website I was sketched out by (but tried it anyway, then got the scary error) was puttygen.com which had me install homebrew (whatever that is) and then do “sudo brew install putty”
I think the main reason you couldn't find a mac version to download is that there is none.
The closest I saw was a .tar.gz file (i.e. a gzipped Tape ARchive) of Unix source code, but A) I don't know if their definition of "Unix" includes OS X / macOS; and B) judging from your comments here, you don't seem like the type who would want to install software by downloading, decompressing, and compiling source code.
I'm thinking the people who told you to use PuTTY were assuming that you are a Windows user.
Homebrew is a reputable package manager (a.k.a. software installer, for Unix applications on the Mac). That said, I'm pretty sure the version of ssh shipping with the Mac could do the key generation for you so you wouldn't need putty.
Unfortunately the person who owns putty.org started to use it to spread misinformation about vaccines and the pandemic, as you can see on the site today.
This recently [1][2] got a lot of attention on the web and here on HN, along with a post on Mastodon from the author [3]
I imagine trying to disincentivize this and provide another shorter more official looking link is the hope here.
> Since 2020 I have been speaking out against the fraudulent pandemic and the intentionally dangerous injections and my experience has been to have been censored and smeared. If you have not heard of me before, that's the reason.
One weird trick to make your insignificance seem significant!
Did putty.org once link to the putty software? Or an alternative SSH client? Why did the site ever become popular?
I'm trying to grok this, but all of the posts sort of obliquely refer to things that happened in the past (even the old HN links here), rather than explicitly just explain what the hell happened.
The domain owner seems to feel he was providing a service to PuTTY by supplying the short domain name, and feels slighted that they're moving to their own domain now that he's taking actions they find more objectionable than merely also linking to his competitor. To be honest, though, it always seemed like unethical squatting to me, sustained by the PuTTY devs not having the time to complete a UDRP process.
This seems similar to the Notepad++ team using their platform to promote political viewpoints.
The same thing happened with Facebook "pages", when they became a personal "soap box" by the owner of the page. It was downhill from there... You might as well turn the whole web into FB/Twitter/X/Insta promotional spam at that point.
It's not at all similar, and that doesn't have anything to do with the quality or lack thereof of the viewpoints.
The Notepad++ site is run by the authors and reflects their stance. Putty.org is run by an outside party who hijacks the reputation of the PuTTY project to push their agenda.
That’s not how discourse works imho. Yeadon is making extraordinary claims, so the burden of explaining and backing up those claims should be on them, not on us. Until they do, there’s no point in addressing their concerns.
> Unfortunately the person who owns putty.org
> started to use it to spread misinformation
> about vaccines and[...]
Isn't that rather fortunate in the grand scheme of things? It could have been a landing page monetizing various SSH clients for windows.
Instead it's just some guy's website clearly unrelated to PuTTY. He's even gone out of his way to point people looking for PuTTY in the right direction. Who cares what his opinion is about anything else?
Why do you think it is misinformation? The person seems to have great credentials to be speaking on the topic. This is the video linked from putty.org:
Argument from authority is not particularly strong. The information on putty.org is considered misinformation by the vast majority of professionals in the field of infectious diseases.
This is the modern world that we live in. If being “Vice President and Worldwide Head of Research in allergic and respiratory diseases at Pfizer” with a 25-year career does not qualify someone to talk about vaccines (in the context of Covid, I assume, because I do not know him or the videos), I frankly don’t know what does.
Like being an expert in virology and vaccine therapies for example. Or being boots on the ground rather than a bean counter. Really doesn't take that much imagination now, does it? Or is this "modern world that we live in" this anemic on imagination power?
I'm sure we can then find experts with those kinds of qualifications who also pushed covid misinformation (or to use more old-school terms, straight up fucking lies and unfounded, conspiratorial speculations) and held minority opinions.
Then we can lament on how having a minority opinion means your opinion is definitely being unjustly oppressed, as opposed to justly oppressed, which somehow we'll not be able to produce an example for. Does that really matter though if we can just pretend that we do have an example, or even believe outright we do and just not agree?
Or maybe we can lament on how just blindly trusting either authority or expertise is possibly not the most solid idea in the world. As if we actually had the option to do otherwise at scale, even in the best case scenario, and all people were magically equal and equipped to do so.
Humans and their unattainable reasoning ability. Oh the modern world. Yeah right.
So an expert is exactly the one you want to believe, and no other person, and you tailor the definition just exactly, so only people with your opinion are experts.
At least the reading comprehension monster will never hurt you, that's for sure. Your previous comment makes perfect sense now too, along with why you'd be whinging about oh the modern world.
If truth, reason and wisdom looks like not even being able to copy and paste the guy's job title properly from Wikipedia, or absentmindedly forming a strawman with full confidence due to being abject unable to read, indeed, I shall speed right on. That's not a form of truth, reason and wisdom I ever want reaching me.
Like imagine thinking that parsing this:
> I'm sure we can then find experts with those kinds of qualifications who also pushed covid misinformation (or to use more old-school terms, straight up fucking lies and unfounded, conspiratorial speculations) and held minority opinions.
as this:
> So an expert is exactly the one you want to believe, and no other person, and you tailor the definition just exactly, so only people with your opinion are experts.
resembles any form of intelligence. These two are in direct contradiction!
Is this really that big of a bar? Let's read together!
> I'm sure we can then find experts with those kinds of qualifications
So I recognize that there are experts with the "right qualifications", whatever that means to me, we don't even have to agree.
> who also pushed covid misinformation and held minority opinions.
So no, I do not stop recognizing them as experts, despite them not confirming my beliefs. Instead, what I do is consider them to have pushed covid misinformation, holding minority opinions, despite being experts with the "right qualifications".
Was this really that hard? I even featured multiple paragraphs after this arguing back and forth on your behalf!
Trusting expert or authority opinion is analogous to trusted computing. It works until it doesn't, and when there's debate among the trusted parties, there's two options: unanimous consensus, which humanity is not exactly known for as you can tell, or majority consensus, which yielded that the guy is wrong period. Choose anything else, and you're discarding the trust-based model in favor of something else; there's no trust and/or no consensus.
And what model do people turn to when there's no trust? Verifiability. This is why I brought up that at scale, verifiability is simply not viable, not as far as I can tell, and somehow this wasn't what you latched on to either. Current state of affairs could be improved a lot, I do think that academic research output has a lot of room for improvement in accessibility, and that getting up to speed with a different area to one's own shouldn't be as hard as it is. But just think about our guy and his claims in practical terms. He was claiming things like "nuh-uh, no second wave in the UK". How are you going to hand verify that yourself on your own? Are you going to act a Santa Claus one night and just visit everyone and take samples? Come on.
And so this was never actually about either of these. It was about believing different things and then piling on top whatever is available, reversing what came first: the thought, or the rationale behind that thought.
I can understand if someone, irrespective of the (majority) scientific consensus on mask use, vaccination, distancing, sanitation, and isolation, simply still chooses to not fall in line out of gut feeling or whatever, and owns up to it. That is at least intellectually honest. But this "oh so you're thinking <the exact opposite of what I said>" and this "a handful of experts out of millions claim otherwise so they're right and unjustly oppressed, and everyone else is wrong and complicit" rubbish is pitiful. The putty.org owner could swap the current text out for free infinite energy or flat earth theory and it would be equally believable. You see countless of those with the same sob story of being unjustly oppressed and then the thing somehow turning out to be bollocks or a scam, sometimes both, all the time. With the rare but convenient few experts chiming in being the occasional icing on the cake, much like the phony full time jury-only experts presenting on court in favor of insurance companies.
It is simply not reasonable to believe in what the guy is pushing, unless you've been believing that from the get-go - at which point, there's nothing to argue anyways. This is unlike the trust-based or the verification-based models, which have more going for them than just the sheer belief of individuals, and where there is capacity for arguments. Arguments that we are not having, because you're entirely too busy intentionally(?) misreading the guy's work title and qualifications, and intentionally(?) misreading what I wrote.
It took me a while to figure out that the nice product shots of Mac computers were actually live, interactive copies of the relevant operating system, running under emulation. Even the laptops with the screen at a weird angle from the camera.
And the emulator tracks whether you've done the things mentioned in the article, like open a particular control panel or tried a particular menu option.
There are even Easter Eggs and additional tasks. If you click on the system description button for each emulator it will give you a list.
I couldn't get the later emulators to work correctly, though. My mouse kept flying off to the right of the screen for some reason. Also unfortunately, the scaling and tilting effect makes the screens look really bad on my machine. Just ugly aliasing artifacts everywhere.
The old 68k Macs are emulated with Basilisk II, which shims the mouse driver so it can just take mouse events from the host OS and move the cursor to the corresponding pixel on screen. The PowerPC Macs and NeXT boxes are emulated with a lower-level emulator that wants to get raw deltas from the mouse, not an absolute pixel position. If you just wave the mouse over the emulator, you'll get something approximating the expected movement (but much slower); once you click on the emulator it captures the mouse and you can use it as intended.
I agree it would be nice to have an "untransformed" view of the screen; I suspect the site might have been designed with the expectation of a high-DPI screen.
Unfortunately on the later Macs the mouse was way too slow to be useful (it kept falling well behind where I was pointing, and then my mouse would exit the area). Clicking on the emulator is when the mouse suddenly acted as if I was always moving it right.