
There is one very BIG thing that Cobol pioneered: the requirement that not only the programs, but also the data, must be portable across machines. At a time when machines used different character codes, let alone different numeric formats, Cobol was designed to vastly reduce (though it did not completely eliminate) portability woes.

We take this for granted now, but at the time it was revolutionary. We've gotten there partly by mandating things like Unicode and IEEE 754, but nowadays most of our languages also encourage portability. We think very little of moving an application from Windows on x86_64 to Linux on ARMv8 (apart from the GUI mess), but back when Cobol was being created, you normally threw your programs away (“reprogramming”) when you moved to a new machine.

I haven't used Cobol in anger in 50 years (40 years since I even taught it), but for that emphasis on portability, I am very grateful.



Another fascinating aspect of COBOL is that it's the one programming language that actively rejected ALGOL influence. There are no functions or procedures, no concept of local variables. In general, no feature for abstraction at all, other than labelled statements. Block-structured control flow (conditionals and loops) was only added in the late 80s.


Contemporary COBOL (the most recent ISO standard is from 2023) has all those things and more.

Rather than rejecting such features, COBOL was just slower to adopt them, owing to conservatism, inertia, and its use in legacy systems. But there are 20+ year old COBOL compilers that support full OO (classes, methods, inheritance, etc.).


the other big cobol feature is high precision (i.e. many digest) fixed point arithmetic. not loosing pennies on large sums, and additionally with well defined arithmetics, portably so as you point out, is a killer feature in finance.

you need special custom numerical types to come even close in, say, java or C++ or any other language.
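
for illustration (Python standing in for "any other language", and its decimal module standing in for COBOL's fixed-point types), a minimal sketch of why binary floats won't do for money while a dedicated decimal type will:

    from decimal import Decimal, getcontext

    getcontext().prec = 31  # plenty of significant digits, COBOL-ish fixed point

    # Summing one cent a million times: binary float drifts, Decimal stays exact.
    float_total = 0.0
    dec_total = Decimal("0.00")
    for _ in range(1_000_000):
        float_total += 0.01
        dec_total += Decimal("0.01")

    print(float_total)  # not exactly 10000.0 -- binary rounding error accumulates
    print(dec_total)    # 10000.00, exactly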


>the other big cobol feature is high precision (i.e. many digest) fixed point arithmetic. not loosing pennies on large sums, and additionally with well defined arithmetics, portably so as you point out, is a killer feature in finance.

I guess you mean:

>digest -> digits

>loosing -> losing

Is that the same as BCD? Binary Coded Decimal. IIRC, Turbo Pascal had that as an option, or maybe I am thinking of something else, sorry, it's many years ago.


There are some regulations in bond pricing or international banking or stuff like that that require over 25 decimal places. IIRC, the best COBOL or whatever could do on the IBM 360's was 15 digits. The smaller, cheaper, and older 1401 business machines didn't have any limit. Of course, for nerdy financial applications, compound interest and discounting of future money would require exponentiation, which was damn-near tragic on all those old machines. So was trying to add or subtract two numbers that used the maximum number of digits but had a different number of decimal places, or trying to multiply or divide numbers that each used the maximum number of decimal places, with the decimal point in various positions. And it was suicide-adjacent to try to evaluate any expression that included multiple max-precision numbers in which both multiplication and division each happened at least twice.
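
Just to make the precision point concrete, here's a rough modern sketch (Python's decimal module, not COBOL, and the figures are made up) of carrying a compound-interest calculation to far more digits than the old hardware allowed:

    from decimal import Decimal, getcontext

    getcontext().prec = 40   # well past the 25+ decimal places mentioned above

    # Hypothetical figures, purely illustrative -- not from any real regulation.
    principal = Decimal("1000000.00")
    annual_rate = Decimal("0.0475")
    months = 360

    # Exponentiation: the operation that was so painful on the old machines.
    factor = (1 + annual_rate / 12) ** months
    future_value = principal * factor

    print(factor)        # growth factor to ~40 significant digits
    print(future_value)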


> There are some regulations in bond pricing or international banking or stuff like that that require over 25 decimal places.

Sounds interesting. Is there anywhere you know I can read about it, or is there something specific I can search for? All results I'm getting are unrelated.


Sorry, not that I know of. They never let me near that stuff, and the various really important standards bodies are monopoly providers of information, so they inevitably charge more for copies of their standards than non-subservients like me can afford. I just tried to quickly scan the 25,000+ ISO standards and did not find anything even possibly related under $100. The Securities Industry Association was the maven of bonds when I was trying to figure them out, but, knock me over with a feather, they were dissolved almost 20 years ago. You might start with the Basel Committee on Banking Supervision, the Financial Markets Standards Board, the Bank for International Settlements, the Fixed Income Clearing Corporation, or https://www.bis.org/publ/mktc13.pdf.


Binary Coded Decimal is something else.

1100 in “regular” binary is 12 in decimal.

0001 0010 in BCD is 12 in decimal.

ie: bcd is an encoding.

High precision numbers are more akin to the decimal data type in SQL or maybe bignum in some popular languages. They are different from (say) float in that you are not losing information in the least significant digits.

You could represent high precision numbers in BCD or regular binary… or little endian binary… or trinary, I suppose.
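
To make the encoding point concrete, here's a small Python sketch of packed BCD (one decimal digit per 4-bit nibble) versus plain binary -- purely illustrative, not how any particular machine or compiler lays it out:

    def to_packed_bcd(n: int) -> bytes:
        # One decimal digit per nibble, padded to an even number of digits.
        digits = str(n)
        if len(digits) % 2:
            digits = "0" + digits
        return bytes(int(digits[i]) << 4 | int(digits[i + 1])
                     for i in range(0, len(digits), 2))

    def from_packed_bcd(b: bytes) -> int:
        return int("".join(f"{byte >> 4}{byte & 0x0F}" for byte in b))

    print(bin(12))                   # 0b1100 -- plain binary
    print(to_packed_bcd(12).hex())   # 12     -- one byte 0x12, i.e. the digits 1 and 2
    print(from_packed_bcd(b"\x12"))  # 12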


yes and... bcd arithmetics are directly supported on CPUs for financial applications, like Z:

IBM z Systems Processor Optimization Primer

https://share.google/0b98AwOZxPvDO6k15


thx for the typo fixes

indeed, it's exactly BCD arithmetic that's part of the standard, with fixed digit count and decimal point position

and yes, Turbo Pascal had some limited support for them.

you needed them in the 1990s for data exchange with banks in Germany: "Datenträgeraustauschformat", the data medium exchange format. one of my first coding gigs was automatic collection of membership fees. the checksum for the data file was the sum of all bank account numbers and the sum of all bank ID numbers (and the sum of all transferred amounts)... trivial in Cobol. not so much in Turbo C++ :-)
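
for illustration only (the records and field layout below are invented, not the actual DTA format), this is the kind of checksum being described, written with arbitrary-precision integers and decimals:

    from decimal import Decimal

    # Invented sample records standing in for DTA transactions:
    # (account number, bank ID, amount) -- not the real file layout.
    records = [
        (1234567890, 37040044, Decimal("19.90")),
        (9876543210, 50010517, Decimal("120.00")),
        (1111111111, 10000000, Decimal("7.45")),
    ]

    # The checksums are just sums over the whole file. Python's ints never
    # overflow, and Decimal keeps the amounts exact.
    sum_accounts = sum(r[0] for r in records)
    sum_bank_ids = sum(r[1] for r in records)
    sum_amounts = sum((r[2] for r in records), Decimal("0.00"))

    print(sum_accounts, sum_bank_ids, sum_amounts)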

I wasn't aware of the BCD in turbo Pascal... those were the days :-D


welcome.

those were the days, indeed :)


Instead, now you throw everything away when moving to a new language ecosystem. I would love to see parts of languages become aligned in the same way that CPUs did, so some constructs become portable and compatible between languages.


Great point. But some newer languages do keep compatibility, with the Java (Scala, Groovy, Kotlin, Clojure) and .NET (C#, F#, Visual Basic, PowerShell) “platforms” being examples, but also with systems languages that normally have simple (no bindings required) ABI compatibility with C, like D, Zig, and Nim, I think.

The newest attempt seems to be revolving around WASM, which should make interoperability across many languages possible if they finally get the Component Model (I think that's what they are calling it) ready.


Today, with several of those languages, we really don't care from a code point of view whether the deployment target is Linux or Windows; we know it will work the same. That's an achievement.

And many of them can target WASM now too.


What about Graal and Truffle?


Is Python indentation at some level traced back to Cobol?


I would guess not. Indentation in Python serves a very different purpose from the mandatory column layout found in early COBOL/FORTRAN.

I am not really an expert but here is my best shot at explaining it based on a 5 minute web search.

COBOL and FORTRAN were designed to run off punch cards; specific columns of the punch card were reserved for specific purposes: things like the sequence number, whether the line is a comment, and continuation lines.

https://en.wikipedia.org/wiki/COBOL#Code_format

https://web.ics.purdue.edu/~cs154/lectures/lecture024.htm

In Python, the indentation is a sort of enforced code style guide (sort of like Go's refusal to compile with unused imports): by making the indentation you would normally write anyway part of the block syntax, every Python program has to have consistent indentation, whether the author wants to or not.
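
A trivial sketch of what that means in practice: in Python the indentation is the block structure itself, not a column convention:

    def total(prices):
        t = 0
        for p in prices:   # the indented lines below 'for' are its body...
            t += p
        return t           # ...and dedenting ends the block

    print(total([1, 2, 3]))  # 6

    # Indent the body inconsistently and you get a SyntaxError, not just ugly
    # code -- that's the "enforced style guide" part.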


I don't know much about COBOL, but I did code quite a bit in Fortran. In Fortran, the first five columns were for an optional line number, so these would mostly be blank. The sixth column was a flag indicating that the line was continued from the one before; this allowed for multiline statements, since by default a carriage return was a statement terminator. Like you said, all this came from punch cards.

Columns 7 through 72 were for your code.


Interestingly, punch cards and early terminals in the 80-132 column range reached the limits of readable line length, and early programming languages were obviously pushed to the limits of human comprehension, making the shape of text in old and new programming languages strikingly consistent (e.g. up to 4 or 5 levels of 4-character indentation is normal).


I've heard it was from Haskell?


Block structure as indentation was introduced in Landin's ISWIM. I think the first actual implementation was in Turner's SASL (part of the ancestry of Haskell). Note that Haskell doesn't have Python's ":" and it also has an alternative braces and semicolons block syntax.


Until someone decided we shall have big and little endianness, and that this newfangled internet shall use the correct big endian ordering.
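
For what it's worth, a tiny Python sketch of that difference using the struct module (the '>' and '!' formats mean network byte order, i.e. big endian):

    import struct

    n = 0x0A0B0C0D

    print(struct.pack(">I", n).hex())  # 0a0b0c0d -- big endian (network byte order)
    print(struct.pack("<I", n).hex())  # 0d0c0b0a -- little endian (e.g. x86)
    print(struct.pack("!I", n).hex())  # 0a0b0c0d -- '!' also means network order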



