OpenVMS on x86 (vmssoftware.com)
93 points by gjvc on Aug 15, 2020 | hide | past | favorite | 80 comments


So VMS has come full circle in terms of target architecture type. CISC (VAX) -> RISC (Alpha) -> VLIW (IA-64) -> CISC (x86).


So... given what’s happened to most of those architectures, should we take it as a bad omen that it’s been ported to x86?


Well, yes, I should have said “a bad omen for x86”. I’ve never been a fan of the ISA, but then the last time I tried it was the 8086...

Let’s just hope they don’t port OpenVMS to ARM and RISC-V :)


Sounds like a great omen to me.


Yes!


My (not quite serious) hope is that this will kill x86. After all, every other architecture it ever supported died a mysterious death.


Does OpenVMS still support logging out via 'DISCODUCK'?

https://www.youtube.com/watch?v=ynWhozyOoZQ


Please, I hope this is a thing. If not, I'll code the implementation.


Looks like it ought to still be a thing:

http://h30266.www3.hpe.com/odl/vax/opsys/vmsos73/vmsos73/648...

> "You can abbreviate a command as long as the abbreviated name remains unique among the defined commands on a system. DCL looks only at the first four characters for uniqueness."

and since max command line length seems to be a few K, the masochistic could try instead:

    $ DISCIPLINEOFPROGRAMMING
(are there any other 1976 things in the equivalence set of DISCONNECT?)


What is this thing? What is this for?


DCL is VMS's command line interface. Commands start with full-on verbs (set, copy, define, print, ...). You can use partial strings in commands as long as they are unique (e.g. "set def" instead of "set default"). Imagine being able to type "pyth" at a bash prompt and it running python.

What I think the previous posts are referring to (and which I don't remember from my DCL days) is that apparently DCL only checks the first four characters for uniqueness, so "disc", "discoduck" and "discgolf" match the "disconnect" command.
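The four-character rule is easy to sketch. Here's a hypothetical Python model of the matching behaviour as described above (not actual DCL code; the command table is a made-up illustrative subset):

```python
# Hypothetical model of DCL's abbreviation rule (not actual DCL code);
# the command table is an illustrative subset.
COMMANDS = ["DISCONNECT", "DIRECTORY", "DEFINE", "SET"]

def resolve(typed, commands=COMMANDS):
    # DCL looks only at the first four characters for uniqueness, so anything
    # typed beyond them is ignored: DISCODUCK and DISCGOLF both mean DISCONNECT.
    prefix = typed.upper()[:4]
    matches = [c for c in commands if c.startswith(prefix)]
    if len(matches) != 1:
        raise ValueError(f"ambiguous or unknown command: {typed}")
    return matches[0]
```

With that table, resolve("discoduck") and resolve("discgolf") both come back as DISCONNECT, while resolve("di") is rejected as ambiguous (DISCONNECT vs. DIRECTORY).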


Skinny hot girls, what's not to like.


You can still occasionally find a Vaxstation 3000 for under $300 on eBay, if you're interested in being a purist.


However, current VMS releases won't run on a VAX, and there's no plan to change that. See their FAQ (which links to the product roadmap for projected future releases):

https://vmssoftware.com/about/faq/#faq5

(I dimly recall reading someplace that they haven't done a VAX build in so long that some of the build chain has suffered bitrot, and some VAX-specific innards might have been misplaced.)


Sure. I'm in the space that doesn't care about that. This whole COVID isolation thing revived an interest in retro computing for me, so I've ordered and repaired/hacked 8+ old CP/M, classic Mac, RISC Unix, etc. computers since March.

It's a little out of control (space/attention wise) now, to where I need to sell the things I've fixed and start a new batch. VAX/VMS might be a good next candidate.


I'm running an AlphaServer DS10L here! It cost me about $275. It feels pretty quick for a ~20 year old machine.


You got a good deal there. All the Alphas I see for sale are $1k+.


This was about a year ago when I bought it. Finding a relatively cheap one did take a couple of months.

There is a DS10L on there for above $500 right now: https://www.ebay.com/itm/Compaq-AlphaServer-DS10L-DH-71AAA-A...

No indication of the exact model, or what it has for RAM or HDs. I would only buy ones that had a screenshot of the console output, since then you have some confidence that it actually boots.


I've noticed a lot of that buying old computers on eBay. No listing of basic info like CPU speed, installed RAM, cards, etc.


It kind of makes sense. A lot of people have this stuff sitting around for 20+ years and just want to get rid of it. Connecting a serial console cable to a headless rack mount server might be a lost art.


Is there a case to be made for developing new software on VMS or is this purely for legacy applications?


> Is there a case to be made for developing new software […]

I think one decent case would be to keep yourself honest.

Linux started out as 80386-only, but someone ported it to DEC Alpha. By deciding to run on both 32- and 64-bit platforms in the 1990s, it kept the kernel developers agnostic. Then when the x86 world went 64-bit with amd64, there was probably a lot less cleaning up to do to port it than if it had stayed focused on pure x86 (see also SPARC and endianness). It's said that NetBSD has a very clean code base because of its reputed high portability.

Similarly, Solaris was very scalable, partly because in 1992 Sun released the SPARCcenter 2000, which could handle 20 CPU sockets. That was a lot, and I'm guessing not many folks bought one, but Sun had to support it in an official capacity. As time went on and more CPUs (and cores) became prevalent, Solaris was good to go as that situation became mainstream.

If you develop for the oddball situations and corner cases, it may force you to be less lazy as a programmer.


You are absolutely correct; anyone who worked on Solaris scalability and performance back in the day will have plenty of Dragon war stories -- and Solaris did so well on Campfire (a.k.a. UltraSPARC Enterprise 4000, shipped 1996) exactly because so much time and energy had been spent on Dragon (a.k.a. SPARCcenter 2000, shipped 1992).


> Campfire (a.k.a. UltraSPARC Enterprise 4000, shipped 1996)

My main gripe with the various 'E-series' systems (IIRC, we had some E3x00s) was boot-up time, especially the RAM check on power up.

I was in an academic environment, and so every so often Facilities scheduled power outages for electrical/fire-code inspections and we had to power down our lab. The first time we rebooted one (with 4GB of RAM?) it kept booting and booting and booting and booting and booting and booting. 17 minutes later we got the login prompt on the console.

We put the 17 minutes as a note in our run book so that we knew not to worry if it took 'forever' and could move on to the next step. Otherwise we'd start freaking out about something being "broken" with the system(s).

(This was circa 2001.)


Yes! I worked at a startup with an E3500, around 1998-99, and those machines took forever to boot. Nice hardware though. I think ours had a whopping 512 meg.


The 2000 was pretty popular AFAIK; I saw a lot of them anyway. Yeah...a lot of stuff got worked out in Solaris on the SC2000, and the journey was painful. But after a year or so of very frequent out-of-cycle patches, some hardware upgrades, and a lot of time with Sun support, they were pretty stable and scalable. Nice machines.

I don't think we ever got the OC-12 ATM cards really stable...fortunately, networking went a different direction and they had a mercifully short lifespan.


I ran a fully populated SPARCcenter 2000. It was a beast. Not like the E10K, but still a beast.


Probably a little bit of both.

Replacing a mainframe with a VM and not losing performance is a huge benefit.

But the software that used to run on that mainframe is now easier to modify, and there's a 10-year backlog of critical bugs that need to be fixed. We were always too scared of breaking something because we couldn't test it; now that we have a VM, we can spin up a second one.


VMS is old and was used to run multiuser systems on relatively expensive machines; but those were called 'minicomputers' because they were already less huge/expensive than mainframes. And after the mini came the 'micro', which is more or less the PC as you know it.


I would call a VAX a "super mini": one of the crop of 32-bit multi-user systems that came out in the '80s. A PDP-11 was a mini; it was multi-user, but the machine was weak. A "super mini" with 16MB of memory, now that was a machine you could do great things with.


The VAX succeeded the PDP, though the PDP was kept in production for many years of overlap. They're different generations of technology filling the same niche. The PDP-11/70 was the beast of its day, and even came in a multi-processor version (the 11/74). Saying the PDP was a mini and the VAX was a supermini is like saying the 286 was a mini but the 386 was a supermini.


Mainframes are completely different beasts, though. A VM on a Xeon machine is not a fair comparison to an app running on a mainframe, because by design the mainframe has many redundant systems to stay online forever. Lots of dedicated hardware and redundant subsystems are the selling point of mission-critical mainframe hardware.

I suspect this is more about not having to pay IBM for the hardware and contracts anymore.


I say the same performance with absolute confidence.

I expected a drop in performance when a particular contract I worked with asked us to see if we couldn't get everything running on a VM instead of the IBM iron. Like you suspect, that was about not paying for the hardware anymore because just the price of electricity was a noticeable impact on the bank's overall budget. It was an R&D project to see if we could avoid that.

What we put together was a QEMU instance, running atop 3 very very cheap commodity servers in duplication, so that if one went down it would switch over without downtime. We chose the cheap servers, because we already had them hanging around. (Probably around $1000 in hardware all up, today.)

We did not see a drop in performance or reliability. But we did see an increase in performance. As in, ~30%. I'd avoid suggesting this is always the case, but it is a significant possibility when changing. As far as I know, the bank in question is now running a similar setup everywhere they used to have a mainframe.


While reading the parent comment ("stay online for ever") I was reminded of VMware lockstep VMs, which nowadays support up to 4 cores. I wouldn't be surprised if a few places do OpenVMS on lockstep.

I am very curious what you mean by "in duplication". Do you mean "3 copies of production and a router" (probably not) - or are you saying you did something like lockstep for QEMU?

Incidentally I just found https://wiki.qemu.org/Features/COLO, which looks like it may be being successfully used privately in one or two places (looking at the email addresses).


It was a little while ago, but it was 3 production servers running with the mirror backend, with a dirty-bitmap [0], and a complicated routing setup (which could hold packets instead of dropping them, depending on their importance). I didn't really have anything to do with the routing though, so I can't say much.

I'd probably use COLO if I was to do it again today.

[0] https://kashyapc.fedorapeople.org/QEMU-Docs/_build/html/docs...
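The commenter doesn't give exact details, but QEMU's documented dirty-bitmap workflow gives a feel for the moving parts. A hedged sketch follows: the node name and target path are made up, and it uses `block-dirty-bitmap-add` plus `drive-backup` with `sync=incremental` (the documented incremental flow), which may differ from the exact mirror invocation they used:

```python
import json

# Hypothetical QMP command sequence for dirty-bitmap-based replication.
# The node/device name ("drive0") and target path are made up; the commands
# themselves (block-dirty-bitmap-add, drive-backup with sync=incremental)
# are QEMU's documented incremental-backup workflow.
def bitmap_add(node, name):
    return {"execute": "block-dirty-bitmap-add",
            "arguments": {"node": node, "name": name, "persistent": True}}

def incremental_backup(device, bitmap, target):
    return {"execute": "drive-backup",
            "arguments": {"device": device, "bitmap": bitmap,
                          "sync": "incremental", "target": target}}

# Each dict would be serialized and written to the QMP monitor socket.
cmds = [bitmap_add("drive0", "bitmap0"),
        incremental_backup("drive0", "bitmap0", "/mnt/replica/drive0.img")]
wire = "\n".join(json.dumps(c) for c in cmds)
```

The bitmap tracks which blocks changed since the last sync, so each incremental pass only copies dirty blocks rather than the whole disk.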


Oooh, disk synchronization. Live migration but just for the block layer! Cool.

I wonder what sorts of scenarios would benefit from using this instead of any of the dozen distributed database architectures it seems are out there.

Incidentally while poking through https://www.qemu.org/docs/master/system/invocation.html I noticed that there are some COLO-related options in there, which is a bit exciting.


Some Arm processors that you can buy today also support dual-core lock step in hardware, such as the Jetson AGX Xavier from Nvidia. (however, not enabled by default for perf reasons, because you halve your cores by design in that mode)

It's there for safety reasons for automotive. (see https://blogs.nvidia.com/blog/2020/05/20/xavier-achieves-ind... )


Could you point us to the source of your information please? The Xavier does not support dual-core lockstep afaik. There are other methods in place to detect random faults, but not that one.

The first widely available ARM cores providing it are fairly new (at least in the automotive domain).

[0] https://developer.arm.com/ip-products/processors/cortex-a/co...


Hello,

As a person who worked with Xavier for quite a while, dual-core lockstep is supported. Nvidia uses their own CPU cores, not Arm's.

See: https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-323... for how to enable it.

> enable_ccplex_lock_step: Boolean; enables or disables CCPLEX dual-core lock step.


Very interesting, thank you very much! I know the lockstepped R5s and the dual execution feature of the Carmels (close but no cigar for ISO26262), but not that one. Gonna ask our FAE about it eventually. ;-D


Not to get your hopes down... but dual-execution in the Carmels and DCLS for CCPLEX should be references to the same thing. :)


Aaaah! Ok, so I did not overlook a big feature but knew it under a different name. Thanks for the clarification!


Fascinating. If the main mainframe use case is database ops for banking or airline scheduling, that doesn't seem very compute-heavy in my head, so throwing hardware at the problem makes sense, especially if you can save millions to invest in development and modernization.


I agree with you there. Most mainframe tasks today aren't that heavy on the compute - most relied on the amazing I/O stack, and that makes it easier to replace them.

Heavily vectorised code with modern instruction sets can also be more performant than a lot of the older compute chips, but that requires more rewriting of code, and someone who intricately understands both the code and the math. Which makes modernising much more expensive.

The VM approach is a simpler way to get you most of the way there, but replacing heavy compute stacks is usually going to require a decent bit of investment.
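The vectorisation point above can be shown with a toy example (NumPy is my choice for illustration, not anything from the thread): the same reduction written as an interpreted loop versus a vectorised call gives identical results, but the vectorised form runs in optimized, SIMD-friendly native code.

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# Scalar loop: one multiply-add per interpreted iteration.
def dot_loop(a, b):
    total = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

# Vectorised: the whole multiply-add runs inside one library call.
def dot_vec(a, b):
    return float(np.dot(a, b))

# Same answer either way; only the execution strategy differs.
assert abs(dot_loop(x[:1000], x[:1000]) - dot_vec(x[:1000], x[:1000])) < 1e-6
```

The catch the parent describes is real, though: rewriting old compute kernels in this style means someone has to understand both the legacy code and the underlying math.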


VMS isn’t an IBM mainframe OS, though. I think typical native VMS machines are much closer to normal enterprise servers than mainframes, so virtualization would make sense.


From my understanding there used to be an OpenVMS hobbyist license available. However, the parent company seems to be abandoning it, and at the time I read about it I had trouble discerning whether there would be some sort of replacement. Does anyone have any better info about this?


The website has a page about a community license [0].

> VMS Software Inc. is excited to announce the availability of the OpenVMS Community License that allows the OpenVMS community members (hobbyists, non-commercial software developers, and others) to obtain an OpenVMS license free of charge.

> OpenVMS x86 licenses will be available later as more stable versions of OpenVMS are released for this architecture.

[0] https://vmssoftware.com/about/news/2020-07-28-community-lice...


Ah nice to see they kept something going.


Interesting that their FAQ doesn't mention that VMS source code is proprietary. The name 'OpenVMS' is ambiguous to say the least.

https://vmssoftware.com/about/faq/

The Wikipedia article about OpenVMS clarifies this in the second paragraph. My point is that the OpenVMS FAQ should address this question because it keeps coming up.


At that time, “open” was applied to things that were based on published standards and portable, as distinguished from systems that were based on proprietary standards or tied to particular processor architectures. VAX/VMS was renamed OpenVMS to signal that it supported POSIX, around the time of the port to the Alpha.


VMS was renamed OpenVMS in 1991[1]

The term “Open Source” was coined in 1998 by a group of people in the free software movement[2]

[1] https://en.m.wikipedia.org/wiki/OpenVMS

[2] https://en.m.wikipedia.org/wiki/Open-source-software_movemen...


Somewhat prior to the renaming, the "Open Software Foundation" was formed by a consortium including IBM, HP, and DEC.

This was done because of an uncomfortably close embrace between Sun Solaris and AT&T System V Release 4.

https://en.m.wikipedia.org/wiki/Open_Software_Foundation

UNIX on DEC Alpha was originally named OSF/1.


Hence the ambiguity.


Well, it's "Open Source" that is ambiguous then, as it came later as a term...

(which, coincidentally, is one argument the free/libre software guys make too)


Open in OpenVMS has the same background as OpenSTEP or the Open Group, meaning open standards, not free beer.


Well, Open Source doesn't mean free beer either, for that matter; it means "copyleft".


Free/libre software generally refers to copyleft. Open Source was coined as a competing term, more friendly to tivoization and proprietary use (e.g. selling a hardware box with BSD/MIT-licensed code as a commercial, closed solution). An extension of "source available".

Free/libre software is defined around the "four freedoms" for end-users (run, study/change, (re)distribute verbatim, distribute modifications).

Open Source is more concerned with the rights connected to the source code, and does not quite see the end-user (who runs the binary) as the same as the developer (who modifies and compiles the source).


Really? Recent events from Mozilla and others seem to prove otherwise.


Really? Any concrete example?


The entitled attitude of open source users not to pay for software; examples abound on this site, especially when someone does a Show HN of a commercial product, only to be flooded by half-working open source alternatives.

But if you prefer something more concrete,

https://news.ycombinator.com/item?id=24159244


FTA:

> Did you sign I Love MDN? Great! Are you willing to pay 50-100 euros/dollars per year to keep MDN afloat? If not, this is all about making you feel better, not the technical writers. You’re part of the problem, not the solution.

Not everyone has that kind of money to spend on charity. I find the statement, quite frankly, condescending and ignorant. There are tons of countries in the world full of web developers who earn [far] less than an average SV wage. Sure, living expenses are also likely lower, but still.


I have travelled across several of those countries; apparently people outside of software development still manage to get hold of their tools, paying for them in some way, without feeling entitled to be given everything as free beer while getting paid for their own work.


> [...] still managed to get hold of their tools

Well, that depends. Software [licenses] are priced differently around the world, adapted to the local market. Hardware as well, but the lowest margin on hardware is simply higher than on software, especially state of the art. So the percentage of people running around with a new iPhone in Serbia or India is simply lower than in the USA. Yet, yes, they all use a browser on their devices, and they all have a legal license. Many people in poorer countries also pay with a different currency: privacy and security. Think Android; think out-of-date devices (older Android and iPhone devices).


Last time I checked, a carpenter in Africa also doesn't pay the same as a carpenter in Europe for their saw set or the wood they need to produce furniture, yet both buy them.


Which was exactly my original point:

> FTA:

> > Did you sign I Love MDN? Great! Are you willing to pay 50-100 euros/dollars per year to keep MDN afloat? If not, this is all about making you feel better, not the technical writers. You’re part of the problem, not the solution.

> Not everyone has that kind of money to spend on charity.

The argument assumes users of MDN have 50-100 EUR to burn on charities; it shouldn't attempt to convince the reader with a fixed amount. As if a web developer from Serbia who donates 20 EUR a year is somehow part of the problem. They're not. Perhaps, if such donations from poorer countries are not recognized as solving the problem, the problem is that the costs are too high as it is.


>I have travelled across several of those countries, apparently people outside of software development still managed to get hold of their tools

You'd be surprised. Business owners and independent workers who need to own their tools get into all kinds of debt to acquire them, globally.

In fact, a classic example is the loans independent seamstresses in developing countries need in order to buy sewing machines (and which few banks would give)...


Yes, and that is something that some software developers feel entitled not to do while expecting others to pay for their services.


Which makes sense. Everybody should have the tools to do their work provided to them (for free), but their work paid.


Except that isn't what many in the FOSS free-beer movement do.

A self-employed carpenter buys their saw; a self-employed FOSS free-beer developer downloads some stuff, never upstreams anything, gets paid, and screams loudly at upstream when the stuff they got as free beer doesn't work.


>The entitled attitude of open source users not to pay for software, examples abound on this site

That's not an argument for what the term means (either casually or by official and legal definition), just a commentary on an attitude.


Sure it is. Or do the FOSS zealots now own the meaning of "open"?


No, but they own the meaning of "open source".


They don't own anything; they just feel entitled to act as if they do. Dictionaries and country-specific language-regulation institutes do.


Will it be faster than in SIMH? Many years ago I tried one of the SIMH VAX simulators on a Windows PC and it was blazingly fast.


It's a bit of an apples and oranges comparison, as the last VAX VMS was released in 2001, and stopped getting support in 2012. I think the main target for x86 VMS is people with Itanium (or Alpha) hardware wanting some sort of supported future path.


Is a hobbyist license generally available?

What are the facts?

Price?

How do I get it?


HPE announced the end of the hobbyist program. IIRC the licenses expire at the end of 2021.


Sad

What a pointless decision. How else shall people learn anything about the system, besides work exposure?


Anyone who wants to experiment with VMS still can. Right or wrong, old OpenVMS releases may as well be free, for anyone who can do a google search.


This comment has info about a community license: https://news.ycombinator.com/item?id=24166978



