I "brought" Objective-C to the Amiga sometime in ~1986/1987.
I had recently acquired one of the first Amigas in Germany, still a US NTSC model, and had also seen Objective-C discussed in a BYTE article. The beautiful OO structure of the Amiga Exec kernel and the higher OS levels built on top of those abstractions (except the abomination that was AmigaDOS) was almost certainly an inspiration.
Having also recently purchased my first C compiler, Manx Aztec C, I initially had some fun implementing OOP via some macros and the Amiga shared library mechanism, which was essentially an array of function pointers, i.e. a C++ vtable.
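(For readers who haven't done this by hand: below is a minimal sketch of that general technique, OOP in plain C with a per-class table of function pointers. It is not the original Amiga macros, just an illustration; all names are made up.)

    /* A "class" is a table of function pointers; an object carries a
       pointer to its class's table, and a "message send" is an indirect
       call through that table. */
    #include <stdio.h>

    struct Shape;

    typedef struct ShapeVTable {
        double (*area)(const struct Shape *self);
        void   (*draw)(const struct Shape *self);
    } ShapeVTable;

    typedef struct Shape {
        const ShapeVTable *vtable;   /* per-class dispatch table */
        double w, h;
    } Shape;

    static double rect_area(const Shape *self) { return self->w * self->h; }
    static void   rect_draw(const Shape *self) { printf("rect %gx%g\n", self->w, self->h); }

    static const ShapeVTable rect_vtable = { rect_area, rect_draw };

    /* zero-argument "message send" */
    #define SEND0(obj, msg) ((obj)->vtable->msg(obj))

    int main(void) {
        Shape r = { &rect_vtable, 3.0, 4.0 };
        SEND0(&r, draw);
        printf("area = %g\n", SEND0(&r, area));
        return 0;
    }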
I don't quite remember when I got the crazy idea of actually implementing an Objective-C runtime and preprocessor, before or after getting the book. I do remember that it felt very much "here goes nothing", and it really was nothing. I was in high school, I had no CS training, didn't know about compilers, hadn't programmed in C for more than a couple of months.
So I wrote a weird lexer that used a bitmap and two counts to cover all the regex cases, and a syntactic recognizer that could tell the Objective-C constructs apart most of the time (it had a hard time distinguishing a method send with a cast from indexing into the return of a function call). And a runtime and basic object library. My dictionary implementation turned out to be ordered almost by accident; I have since grown very fond of ordered sets and dicts. It also had automatic serialization ("passivation", as it's called in the book), something Apple has only gotten around to just now, and only for Swift.
It worked and I never looked back. I later ported it to i386 and QNX + MS-DOS for a project, replacing the assembly-language messenger function with a pure ANSI-C one returning the function pointer of the method (a technique later rediscovered for the GNU runtime).
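(A sketch of that "messenger returns the function pointer" idea in portable C, for the curious. The names below, my_msg_lookup and friends, are hypothetical, not the actual GNU or Amiga runtime API; the point is just that the lookup is plain C and the call happens through the returned pointer.)

    #include <string.h>

    typedef struct my_object { struct my_class *isa; } my_object;
    typedef const char *MySEL;
    typedef my_object *(*IMP)(my_object *self, MySEL sel);

    typedef struct my_method { MySEL name; IMP imp; } my_method;
    typedef struct my_class {
        struct my_class *super;
        my_method *methods;
        int count;
    } my_class;

    /* Pure C lookup: walk the class chain and return the method's
       function pointer instead of jumping to it in assembly. */
    static IMP my_msg_lookup(my_object *receiver, MySEL sel) {
        for (my_class *cls = receiver->isa; cls; cls = cls->super)
            for (int i = 0; i < cls->count; i++)
                if (strcmp(cls->methods[i].name, sel) == 0)
                    return cls->methods[i].imp;
        return 0;   /* a real runtime would forward or report an error */
    }

    /* The call site fetches the IMP and calls through it; methods taking
       extra arguments would be called through a cast, as in Objective-C. */
    static my_object *send_description(my_object *obj) {
        IMP imp = my_msg_lookup(obj, "description");
        return imp ? imp(obj, "description") : 0;
    }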
When NeXT came out I knew I had to have one. And ditched my implementation in a heartbeat: SEP!
Interestingly, there was another Objective-C implementation from Germany around the same timeframe, called M:OOP, which actually got a little bit of press coverage:
https://ekartco.com/wp-content/uploads/2012/11/m_oop.pdf
As far as I could tell, it never got the fully generic objc_msgSend() running, with only a limited number of arguments supported.
Fun times, but also valuable lessons:
- Go tackle "crazy ambitious" projects.
- A language that can be implemented by a kid with a C compiler is a gem.
- An architectural approach to language is really valuable.
Many, many years later (~2003 I think, I was barely in high school) my dad brought me a small book, edited by one of the local universities, which was essentially a student's guide to the diploma exam (sort of like a BSc thesis + a written exam, for our American friends -- this system used to be a lot more popular in Europe back then).
It included various study guides, example questions and -- my favourite -- a bunch of recommended subjects for the diploma thesis. "Projects" as in practical things you would study and implement, proposed by every department in the university. I think there were hundreds of them.
One of them was an Objective-C runtime implementation for Linux systems (I got the book in ~2003, but it was older, from around 1996 or 1998, I think, when GNUstep was just becoming a thing). I had read an article about Objective-C and spent a load of cash at the local internet cafe trying to figure out how I could do that.
I lost interest quickly enough when I realized one was already available, and it turned out that writing stuff with GNUstep was a lot more fun than figuring out how it worked.
But I did stumble upon a couple of articles about M:OOP. Unfortunately, I don't speak German at all, but a very kind IRC user (whose name I've long forgotten, sadly) basically translated them for me. I found the whole thing absolutely amazing; I was already interested in low-level programming at the time, and this made me even more interested.
That all sounds too familiar. When I was still a little kid going to school, I was using Linux and found Objective-C. Really liked the language and concepts, but was lacking a framework for Objective-C on Linux. So I started ObjFW a few years later. Seems you had the same experience before I was even born, except on the Amiga :).
It's interesting to see that ObjC has some heritage on the Amiga. That explains why MorphOS (one of the Amiga ecosystem fragments) is doing something similar[1] these days.
This looks fun, considering how much of an influence NeXT seems to have had on OS 2 & 3. And not just the colour schemes either: BOOPSI (https://en.wikipedia.org/wiki/BOOPSI) looks like an attempt to get some ObjC features into plain old C.
This is actually a port to the real Amiga, and not one of those new exotic Amigas. But if you want to run AmigaOS 3, you can just use FS-UAE. In fact, that screenshot is from FS-UAE; as always, copying things over to a real Amiga can be quite annoying during development :).
For AmigaOS proper, you'll need emulation. You can run AROS on run-of-the-mill x86 hardware, though, which is a pretty good alternative. It's source compatible with AmigaOS, and there's an Amiga emulator that lets you run 68k programs more-or-less transparently.
I use the Amiga Forever package to run AmigaOS on my Windows 10 computer. Installation is quick and works flawlessly. I'm even able to remote in using Remote Desktop and access my AmigaOS from my laptop.
There are 3 different versions with varying prices. (http://www.amigaforever.com/)
It uses the WinUAE emulator, which you can download for free. (http://www.winuae.net/) You would need to purchase ROMs to go with this, though. The Amiga Forever distro includes the ROMs.
Edit: Shameless plug: I used to run a CNet BBS back in the day and I got back into that scene by setting up another CNet BBS on my virtual Amiga. You can telnet to it at chksmak.cnetbbs.net:2600
For me the addition of Automatic Reference Counting was key. That took a whole bunch of tedious but essential boilerplate out of the language. Before ARC, Obj-C was a weird but interesting language that I didn’t want to use. With ARC it all came together into a nice expressive high-level language with raw C power available when you need it.
Totally agree. ARC is excellent and almost always completely transparent. When you're doing something unusual (like storing a pointer to an ObjC object in a C struct or something), there are very simple compiler hints that keep everything happy.
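(For anyone who hasn't hit this: a sketch of the kind of hints meant here. The type and field names are made up; __unsafe_unretained and the __bridge casts are the standard ARC annotations for object pointers that live in plain C territory.)

    #import <Foundation/Foundation.h>

    typedef struct CallbackContext {
        __unsafe_unretained id delegate;  // ARC won't retain/release this field;
                                          // the surrounding code keeps it alive
        int tag;
    } CallbackContext;

    // __bridge casts tell ARC how ownership crosses the ObjC/C boundary:
    static void stash(void **slot, id object) {
        *slot = (__bridge void *)object;  // no ownership transfer
    }

    static id unstash(void *slot) {
        return (__bridge id)slot;         // still no ownership transfer
    }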
It's so much nicer than all the faff of manual retain/release before (and probably nicer than ObjC garbage collection, but I started ObjC after that was already discouraged, afair).
Come to think of it, garbage collection was an obvious “me too!” feature to catch up with Java and C#, but proved misguided because it didn’t fit well with Objective-C.
ARC was less obvious, but quickly proved to be a much better approach (edit to add: for the typical use cases of Obj-C )
It makes me wonder whether Swift’s vtables and generics are just “me too!” features, this time chasing after C++ fans... Maybe not a great idea, as they’ve also picked up some of the bad stuff like exploding compile times.
And because Objective-C is a very strict superset of C (much more so than C++), it's really critical that the OOP extensions have a very different syntax from the C subset. There is little confusion about what part of the code is "objective" and what is "C".
As professionals, they work with what is asked, not with what they prefer -- especially when dealing with a platform that might totally stop supporting Obj-C down the line. So what they prefer would be irrelevant job-wise.
They're professionals, not slaves. They might have a say in the choice of tools, and they can turn down jobs otherwise. There's still plenty of Objective-C work around.
Yes, but it’s not officially “condoned”, so it feels “out of place”. This is very individual, of course, but I don’t like bending the system to get patterns that are not “natively” supported.
In Swift, you cannot currently create classes in code, enhance and swizzle, implement proper proxies. You couldn’t even load a nib without the ObjC runtime. Not even reflection. It’s telling what their priorities were with that language. What good is the language when you cannot implement most of the system frameworks (Cocoa, Core Data, etc.) without nasty hacks?
> In Swift, you cannot currently create classes in code, enhance and swizzle, implement proper proxies
In pure Swift, that is. These are all possible if you reach into the Objective-C runtime, as I'm sure you know well. Overall, though, I do agree with you: I'm unsatisfied with the current reflection API in Swift and I really think that this is something that the core team should focus on some time.
Also, I agree you can tell safety was the number one priority. However the positive is basically never getting runtime errors (except when interfacing with IB or Obj-C), even when people are programming in a hurry, which is nice for me.
> However the positive is basically never getting runtime errors
I don't quite think this is the "safety" that Swift aims for: rather, it prefers safety in the sense of failing fast and failing reliably if something goes wrong. Unwrapping nil and out of bounds array subscripting fall into this category of behavior.
"Never getting runtime errors" is as far-fetched as it can be. Consider Xcode suggesting developers use force unwrap ("!"), which leads to many more crashes with inexperienced developers than the ObjC message-to-nil paradigm.
Subjectively, I have seen many more crashes in third-party software written in Swift than in ObjC (usually unwrapping or casting incorrectly), including in Apple's software. I cannot say if this is down to more inexperienced developers, less time allotted to experienced developers, or a worse development model. I'd say a combination of the three.
You’ve made some good points, though I’ve found the reflection in Swift since v2 to be sufficient for my day-to-day. I think the nib thing is a problem, but in reverse: nibs need to be Swift so we can instantiate them with generics; that would be a game changer!
Without reflection, you cannot create a nib system (or any general purpose archiving solution). For Encodable, they used hacks that were tantamount to macros in C, but that requires compile-time knowledge of the class/struct structure. That's not how nibs work.
Or building a house with toothpaste. But this was more about what's more common/supported by the language/approved by the community rather than what's possible.
Those are actually temporary problems. Once ABI stability is implemented, those DLLs will not be needed. The compiler is obviously a WIP and will take years to become more mature and optimized. Likewise for C++ interop.
There are fundamental issues with Swift, but those are not them.
Compile times are not a temporary problem, they are fundamental. Yes, they can fix some of the more egregious specific problems, but the model they have chosen is inherently expensive to compile as it leans so much on the compiler.
And they no longer have Mr. Moore to bail them out by mere passage of time.
In fact, last I checked the compiler has actually gotten slower, overall, in recent versions.
Could you provide documentation on the compiler model of Swift, and why it would be slower? How does that model compare to Clang's C++ compiler model?
The model for Swift is for the compiler to do a lot of work.
Doing a lot of work takes more time than doing less work.
There was an interview/article a while back that explained this for C++/D really well; I think it might have even been by Walter Bright himself. Sadly, I can't find it right now.
In essence, C++ is predicated on the compiler being able to see through any abstractions you might have provided at compile time, in order to be able to optimize that away. That means you effectively lose separate compilation, because the compiler has to look at the implementation of dependencies, not just at their interfaces. Transitively.
So C++ is a lot slower to compile than C/Objective-C, and Swift is very similar, it just doubles down on that model. With this model it doesn't come as a surprise that compiling everything at once is often massively quicker than compiling single source files, the "whole module optimization without optimization" thing.
I also think there's at least some polynomial factor involved here, so as your projects get bigger, per-file compile times get significantly slower.
C and Objective-C really take separate compilation to heart, so you don't run into nearly the same problems (though: header files :-( ). Go is a more modern take on this, compile times are legendary.
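(To make the contrast concrete, here is what separate compilation buys you in C, with made-up file names: the caller only ever sees declarations, so the implementation can be rebuilt and optimized independently and merely relinked. With templates and optimizer-dependent generics, the compiler has to see the bodies too.)

    /* point.h -- the interface; this is all a caller needs to see */
    typedef struct Point Point;            /* opaque type */
    Point *point_new(double x, double y);
    double point_x(const Point *p);
    void   point_free(Point *p);

    /* client.c -- compiles against point.h alone; point.c can change
       without this file being recompiled */
    #include "point.h"
    double use_it(void) {
        Point *p = point_new(1.0, 2.0);
        double x = point_x(p);
        point_free(p);
        return x;
    }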
In addition to that, Swift leans extremely heavily on the optimizer to get common constructs to execute in reasonable time, rather than 100x or more slower than you'd expect. That also costs. A lot. tcc doesn't really optimize much; it's a good example of how fast a C compile can go (I clocked it at several hundred thousand lines of code per second or more; for comparison, numbers from a Swift project were 60 lines of code per second).
And last not least there's pretty wild forward/backward type inference, which will happily go exponential. For example, let's look at the following code:
let a:[Int] = [1] + [2] + [3] + [4] + [5] + [6]
This actually used to fail with "expression too complex", nowadays it compiles in 26 seconds on my machine, and adding one more integer array gets it to fail again, after 59 seconds. Hmmm...
I'm much more hopeful for ABI stability than C++ interoperability. I think it will take a long time before we'll be able to work with C++ code as seamlessly as we currently do with, say, Objective-C or C.
I've been mostly away from the Swift Forums for the last couple weeks, so if they've done something recently I probably missed it. Otherwise, though, AFAIK there's very few user-facing features that we can take advantage of right now. I do know that this is something that is on the roadmap, but so far the only changes related to this feature would be confined to the internals of the compiler, if at all. Here's a relatively recent thread by Doug Gregor where he outlines some of the challenges necessary to surmount if this feature is to ever land: https://forums.swift.org/t/c-objective-c-interop/9989
Sure, by all means they should keep up the good work. I'll be happy to switch when it's ready. But until then, I'll stick to my trusty Objective-C for serious work.
How so? I personally feel that most of the changes Swift made to the language really do solve real problems that Apple and third party developers have had with Objective-C (and other languages, as well).
If I had to implement a compiler or a database, I'd definitely pick a language with a powerful and strict type system like Swift. It's fantastic when the compiler can prevent large classes of errors. I mean, even trivial concepts like an Array<Point2D> are frustrating to express in Objective-C (wrapping C structs with NSValue? Ugh).
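(Concretely, the NSValue dance looks something like this. Point2D is a hypothetical struct standing in for the one mentioned above; the NSValue calls are standard Foundation.)

    #import <Foundation/Foundation.h>

    typedef struct { double x, y; } Point2D;

    static void boxingExample(void) {
        Point2D p = { 1.0, 2.0 };

        // Boxing, so the struct can live in an NSArray:
        NSValue *boxed = [NSValue valueWithBytes:&p objCType:@encode(Point2D)];
        NSArray *points = @[ boxed ];

        // ...and unboxing again on the way out:
        Point2D q;
        [points[0] getValue:&q];
        NSLog(@"%g %g", q.x, q.y);
    }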
But that's not how iOS development works in my experience (YMMV). Most of the logic lives either in the backend or in C/C++ libraries, and Objective-C is just the glue between that and UI frameworks. When this glue layer gets complicated, it's usually because of animations, AutoLayout, UIKit bugs, or workarounds around performance issues. But all of these are issues with the frameworks, not with the programming language.
Objective-C is effectively a domain-specific language for Cocoa. Swift aims to be the next big general-purpose programming language, but it's still only being used for Cocoa. I think this is fundamentally the wrong direction.
There are some legitimate criticisms to be made of Swift, particularly in regards to tooling. But if you feel obliged to downvote somebody for making a factual, if anecdotal, claim about their experience using it, you might want to question how much of your reaction is just resistance to change.
It’s not puke-inducing, but it is verbose. The language designers just took a different approach to how to write code. Just for some background, it came out at a similar time to C++. Both were designed to be a ‘better C’.
I did some iPhone dev a few years ago. In terms of writing code, I didn’t mind the verbosity because Xcode’s autocomplete worked well enough. It’s not like you actually have to type it all out.
In terms of reading code, its verbosity mainly affects the horizontal dimension, which is strongly mitigated by the proliferation of large widescreen displays.
In terms of learning the language, the verbosity combined with the autocomplete arguably reduced the need to continually refer back to documentation, as the verbose method (message) names were oftentimes self explanatory.
While you are looking, Brad Cox’s other book Superdistribution: Objects as Property on the Electronic Frontier is well worth a read to get a feel for where he wanted to go.
In other languages, a function with that many parameters would be close to unusable. Far too easy to get them mixed up. The way Obj-C forces you to name each argument is great.
Yeah, I'm not complaining, I really do like argument labels. Objective-C sometimes does go overboard with sentence-like labels, but usually it's not an issue.
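(A typical Foundation call, for readers who haven't seen the style: every argument is labeled at the call site, so it's hard to pass them in the wrong order.)

    NSMutableString *s = [NSMutableString stringWithString:@"hello world"];
    [s replaceOccurrencesOfString:@"world"
                       withString:@"Amiga"
                          options:NSCaseInsensitiveSearch
                            range:NSMakeRange(0, s.length)];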
No, and if you do, then I suggest you visit a doctor.
Rant:
Objective-C is a wonderful language, once you learn it properly.
There's a lot of Swift fanboiism (is that a word?) and a lot of Objective-C hate. I have an opinion on why.
The vast majority of people that used Objective-C were drawn to the success of iOS. They were developers, used to languages like Javascript, Java or C++. Coming to Objective-C, their immediate reaction was "WUTT??? Brackets!" Instead of properly learning the idioms, they fought the language on a daily basis and cursed Apple for forcing them to use a language unlike the one they were used to, in order to jump on the iOS bandwagon.
These people, once Swift was released, jumped boat immediately because they never really understood Objective-C and Swift is familiar.
Another group of people: the seasoned Objective-C developers, with a clue, went: "hmmm... this doesn't solve any of my real problems," but they weren't stupid and Apple was quite clear that Swift was the future. They learned Swift, got used to the bad, learned how to like the good and went along their business.
Another group of people actively dislike Swift, and really enjoy Objective-C. These are considered dinosaurs and have to either hide their opinion or face hiring difficulties, or even dismissal from companies where they are currently employed.
Hiring managers, usually incompetent and non-technical in nature, ask: "Do you use Swift? Do you like it?"
If you want a job, you must say yes. Disagree with me? Try a no.
All new documentation is Swift oriented. All conferences and talks are in Swift. All new books are in Swift. You would have to be crazy to stick to Objective-C, but not for technical reasons.
The Objective-C vs Swift debate is similar to vi vs emacs, except that vi and emacs would be built by the same company and that company explicitly said "emacs is the future."
Objective-C has warts. It is also fast. And stable. And quick to compile. Did I mention stable? It is also: stable.
My code from 20 years ago still runs.
Objective-C is a marriage of my two favorite programming languages: C and Smalltalk.
Objective-C is message oriented, like Alan Kay envisioned, instead of object oriented. The focus should be messages, and messaging to nil is a FEATURE. I actually quite like it.
In Objective-C you can drop to C at any given time and have access to all the insanely fast libs. You can also write such insanely fast code yourself.
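(For readers new to the language, the nil-messaging behaviour mentioned above looks like this; the calls are ordinary Foundation methods.)

    // Messaging nil is defined behaviour: the send is a no-op and returns
    // zero/nil for the common return types, so chains of calls collapse
    // without explicit checks.
    NSString *name = nil;
    NSUInteger len = [name length];             // 0, no crash
    NSString *upper = [name uppercaseString];   // nil, no crash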
Swift is immature, unstable, bloated, overengineered and extremely complex as a language. Complex as in C++ complex, but without the speed.
Ever heard of Objective-C++? Yup. Everything I said, except exchange C for C++.
Swift sacrifices LOTS of things and introduces new ideas that have yet to be proven efficient, and you get to rewrite your code all the time.
Have something you wrote 2 years ago? Good luck with that.
All productivity "supposedly" gained by Swift is lost to slow compiles, rewriting older code and migrating to the new version when it comes out.
Swift is safe and everything. Right. Except developers with deadlines will just ! all the optionals when they get in the way.
Prototype a new idea? Prepare to fight the compiler the entire time.
Now, Swift does have some cool things, but so could a new version of Objective-C.
In fact, ARC, which is pretty cool, was born out of the development of Swift.
In my opinion, Swift is not a great language. I would even say that it's not good. It's OK. Maybe one day it will be good, but this day is not today.
All the love I hear is based on developers that resist learning something new (like Objective-C's way of doing things) and that's a good thing?
I love Cocoa. There's an impedance mismatch between Swift and Cocoa. Objective-C + Cocoa is wonderful.
Oh well. Objective-C is as good as dead and I'd rather write Go, Kotlin, C, Ruby, Erlang or even C++ than Swift.
People that are aware of Objective-C's limitations and want(ed) an actual improvement, not the 1 step forward (sort of), 3 steps back that we got. And they were pretty close, even got it in the marketing slogan: Objective-C without the C. Or at least without most of C much of the time. Or some. Instead they consistently doubled down on the things that were the least useful, for example structs. With classes and primitives, structs were always an extra, needed only for backwards compatibility and, historically, performance. What do they double down on? Structs. WTF?
The mix of Smalltalk keyword syntax and C syntax was always a bit of a problem. So let's keep both and make the integration between the two even more awkward!
Crucially, the vast majority of this is incidental complexity, not essential complexity. Swift is a crescendo of special cases stopping just short of the general; the result is complexity in the semantics, complexity in the behaviour (i.e. bugs), and complexity in use (i.e. workarounds).
Note that this is coming from the other side, so someone who doesn't like Objective-C much at all, and even from that perspective, Swift falls short.
Of course you can borrow it :) It's not very articulated. Just an early morning brain-to-keyboard dump.
I would count myself in your added category. Objective-C has been mostly unchanged for decades. I wanted an improvement, even if a breaking improvement.
We got properties and dot notation, which I was skeptical of. I saw a lot of misuse of those (especially with slower-than-O(1) accesses). They were supposed to signal intent, but instead were used as a shortcut.
We got ARC, which I must admit is useful for a large segment of developers, even though it creates a false sense of not needing to understand memory management and bites people left and right with retain cycles.
The good thing about ARC is that you can enable it on a per-file basis, so you can still do the performance-sensitive parts with MRC.
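(Concretely, the per-file opt-out is just a compiler flag; the file names below are made up, the flags are clang's standard ones.)

    clang -c -fobjc-arc    ModernCode.m   # compiled with ARC
    clang -c -fno-objc-arc HotPath.m      # compiled with manual retain/release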
The magic isa optimization was good.
But these were either small syntactic sugar/quality of life or internal performance optimizations. None provided significant leaps forward.
Apple could have picked Objective-Smalltalk :) And poached Lars Bak from Google.
Swift's slogan is great, but it's misleading.
I read Rob Rix's rant shortly after he wrote it, and it was on the mark.
No, in a lot of circumstances retain/release worked really well and is really understandable. It is more a point of view from folks who structured their programs in a certain way that made ARC rather unneeded. I look at it more like: I know where the resources need to be, or I have some very specific rules myself. It is more prevalent where you have some type of document life cycle or specific file interactions.
To you, maybe. But that says more about you than the OP.
The tangible benefits of ARC are at best marginal, and there are downsides, which some can reasonably find more significant, and thus ARC not worthwhile.
First off, "manual" reference counting is misnamed. It is at the very least "semi-automatic" and highly automatable.
So how does a property declaration look with "MRC" vs. ARC?
@property (nonatomic,strong) NSString *str;
vs.
@property (nonatomic,strong) NSString *str;
Can you tell which is which?
So how about use?
someObject.str = @"Hello World!";
vs.
someObject.str = @"Hello World!";
Can you tell which is which? In use, ARC and MRC are mostly indistinguishable, as long as you use accessors/properties for all instance variable access. You do that, right? Right?
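(For context, this is roughly what a retaining setter does under MRC -- the same thing @synthesize generates, written out by hand here -- which is why the call sites look identical either way. The property name is just an example.)

    // MRC: the retain/release lives in the (generated or hand-written) setter
    - (void)setStr:(NSString *)newStr {
        if (newStr != _str) {
            [newStr retain];
            [_str release];
            _str = newStr;
        }
    }

    // ARC: the hand-written equivalent is just an assignment; the compiler
    // inserts the retain/release itself
    - (void)setStr:(NSString *)newStr {
        _str = newStr;
    }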
OK, so there is automatic dealloc. That's a nice convenience, but actually not intrinsically tied to ARC. How do I know? I implemented automatic dealloc for a project once. I didn't use it much afterwards, because writing dealloc by hand was so little effort that automating it just didn't seem worth it, especially as you also had other boilerplate to implement anyway: the coder methods, isEqual: and hash.
The one biggie was weak, which was previously handled via non-retained references, so no automatic nilling. This could actually be a pain from time to time, but really not that huge a deal and it was also never really intrinsic to ARC, as shown by Apple recently adding weak for non-ARC code.
So those are/were the upsides. Not really that much.
Among the downsides is a performance penalty that could be both significant and somewhat unpredictable, on the order of 50%-100% slower in realistic scenarios (in extreme cases it can be an order of magnitude or more). You could also get unexpected crashes because of retain/releases added in unexpected places. We had a crash in a method that did nothing but return 0;
However, for me the biggest pain point was the semantic changes in ARCed-C: what used to be warnings about an unknown message are now hard compile errors, and that seriously cramps Objective-C's style as a quick-turnaround exploratory language.
And yes, I have worked with/built both significant ARC and non-ARC codebases.
What's really troubling about these things (GC, ARC, Swift) is the rabid fanboyism. When GC came out, you were a complete idiot and a luddite if you didn't embrace it and all its numerous warts (which were beautiful) wholeheartedly, and buy into all the BS.
Then when GC was dropped and replaced by ARC: same thing. Did anyone say GC? We meant beautiful shiny ARC, you ignorant luddite. And it quickly became an incontrovertible truth that ARC was faster than "MRC", despite the fact that this wasn't so. And when people asked about it, mentioning they had measured significant slowdowns, they were quickly attacked and everything questioned, because everybody "knew" the "truth". Until a nice gentleman from Apple chimed in and confirmed. Oops.
So please ratchet down the fanboy setting. Technologies have upsides and downsides. Reasonable people can differ on how they weigh the upsides and the downsides.
And of course we are seeing the same, just magnified by an extraordinary amount, with Swift.
> What's really troubling about these things (GC, ARC, Swift) is the rabid fanboyism. When GC came out, you were a complete idiot and a luddite if you didn't embrace it
I don’t know how to check this, but that isn’t how I remember it at all.
There was a brief flurry of interest when Apple added GC, but it never caught on. If it had been popular, Apple would have kept it.
Then when ARC arrived, it was genuinely popular, and that’s why it was a lasting success. By automating the exact same retain/release/autorelease protocol that people were already doing by hand, it fit much more neatly into existing Obj-C best practices.
I do partly agree with you, that Swift seems to have some problems that haven’t yet been fully resolved, but Apple are still going full steam ahead with it. It feels more like GC, but Apple are treating it like ARC.
Well, GC was much harder to adopt, as it wasn't incremental. Either your code was GC or not. All of it.
> If it had been popular, Apple would have kept it.
Weeelll...I think the bigger problem with GC was that it didn't work; they never could get all the bugs out. Including the performance issues, but more significantly potentially huge leaks. Well, technically that's also performance.
I also very much liked the idea of ARC, it looked exactly like what I had lobbied for, and it certainly was much better than GC, if more limited (cycles). And then I tried it and noticed (a) for my idiomatic and highly automated use of MRC, the benefits were between minimal and zero and (b) the drawbacks, particularly the stricter compiler errors, were a major PITA, and unnecessarily so.
This becomes noticeable when you write TDD code in an exploratory fashion, because with the errors you have to keep 3 sites in your code up-to-date. That becomes old really fast, but I guess most people don't really do TDD (their loss!), so it's not something that's a major pain point in the community.
Incidentally, someone once mailed me that they had switched some code back from ARC to MRC (partly due to what I'd written), and contrary to their expectations and previous assumptions could confirm that the difference was, in fact, negligible.
> [Swift like ARC when it's more like GC]
That's a good observation. Of course, Swift is a much more dramatic change than even GC ever was, and interestingly the community seems to be much more radical/rabid than Apple. For example, in the community it seems de rigueur that you must use immutable structs whenever possible, whereas Apple's Swift book gives a few conditions where you might consider structs and then says you should use classes for everything else.
What I mean is, ignore the specific technologies: Apple is treating Swift like an ideal incremental improvement (like ARC was) whereas it’s really a major change in direction that may or may not prove to be a good idea (like GC was).
> Can you tell which is which? In use, ARC and MRC are mostly indistinguishable, as long as you use accessors/properties for all instance variable access.
Well, that's the beauty of upgrading to ARC: your declaration syntax doesn't need to change; usually it just means that you can drop a bunch of autoreleases in your codebase.
I read your blog post, where you mention this snippet of code being optimized in an odd way, printing "1 2" in Clang:
#include <stdio.h>
#include <stdlib.h>

int main() {
    int *p = (int*)malloc(sizeof(int));
    int *q = (int*)realloc(p, sizeof(int));
    *p = 1;   /* p is stale after the realloc; using it is undefined behavior */
    *q = 2;
    if (p == q)
        printf("%d %d\n", *p, *q);
}
Of course, when you have odd things like this happen you're reaching down into undefined behavior, and then all bets are off.
Really, I think you're just focusing on performance too much and neglecting the fact that ARC isn't there for increasing the performance of your code: it's there so that you can program in a manner that's more safe. Sure, it's easy to manage memory manually, until that one time you double free and SIGSEGV.
And add a bunch of allocs, which I find less useful because they don't really communicate intent, whereas the autoreleases usually do. Also, in my code-base, autoreleases are less than 0.5% of the total code, and that includes a lot of legacy code.
In fact, after creating a macro for creating class-side convenience creation methods along with initializers in one go, I could probably drop the use to nearly zero. (The convenience creation methods are always +fooXYZ { return [[[self alloc] initXYZ] autorelease]; } so very automatable).
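(The actual macro isn't shown here, but for the argument-less case it could look something like this.)

    #define CONVENIENCE(plusName, initName) \
        + (instancetype)plusName { return [[[self alloc] initName] autorelease]; }

    // Inside an @implementation, CONVENIENCE(stack, init) then expands to:
    //   + (instancetype)stack { return [[[self alloc] init] autorelease]; }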
> undefined behavior
Yes, that's the excuse. It's a bad excuse.
> focusing on performance
Well, performance is my specialty. It also has the advantage of more likely giving you actual data, rather than vague feelings.
> that one time you double free and SIGSEGV
I've not found ARC code to crash less, and if you remember the article, it is about code that cannot possibly crash actually crashing due to ARC.
I'm amazed that you find this to be a bad excuse, since it's what all optimizing compilers rely on to produce performant code.
> I've not found ARC code to crash less, and if you remember the article, it is about code that cannot possibly crash actually crashing due to ARC.
The code crashes because you violated an invariant at some point of program execution. The way the C-style languages work, it's legal to crash (or not) at a place that isn't necessarily the place where the undefined behavior happens (interestingly, not only does this have to be after: it can also happen before the buggy code executes, due to reordering, pipelining and such). This is one of the guarantees you get "for free" by using a safer language such as Swift.
Yes, I am aware that that is the excuse. It still is a terrible excuse.
> because you violated an invariant at some point of program execution.
Not true. The code in question is a callback, so my code is getting called by Apple code, and ARC dereferences a pointer it has no business de-referencing.
May I remind you that the code that crashed due to a segfault was
{
return 0;
}
Also, if you think "You have violated something, for which we will give you no diagnostic, and therefore we feel free to crash you at some random other place in the program that has nothing to do with the place where the alleged violation took place, again with no diagnostics" is reasonable...well, could I interest you in purchasing a bridge in New York? Or some Nevada oceanfront real estate?
And no, you don't need a "safer" language like Swift, you just need to not go for the crazy modifications the optimizer writers pushed into the C standard.
> if you think "You have violated something, for which we will give you no diagnostic, and therefore we feel free to crash you at some random other place in the program that has nothing to do with the place where the alleged violation took place, again with no diagnostics" is reasonable
No, I generally don't, which is why I use a safer language like Swift most of the time (well, that, and the fact that I can take advantage of a nicer standard library). You put "safer" in quotes, because I don't think you quite understand how compiler optimizations are supposed to work. I think Chris Lattner's three part series, "What Every C Programmer Should Know About Undefined Behavior"[1], is a great explanation from a compiler writer for why dangerous optimizations have to exist. It certainly helped me when I was in a similar place as you, not quite understanding why the optimizer did seemingly stupid things.
Really, the crux of the issue is that every language has tradeoffs: you can program in assembly and know exactly what your program is doing, but you lose the portability and convenience of higher level languages. Then you have the C family of languages, where you get access to some higher level concepts at the cost of ceding control to a compiler. The compiler's job is to generate assembly that matches what you are trying to do in the most efficient way possible. Of course, if it did so too literally it would be very slow to account for every single "stupid" thing you could have done, so there are some general rules that are imposed that you must follow in order for the compiler to do what you want. Then, of course, we have the high-level languages which do account for every stupid thing you might do, and so can provide proper diagnostics.
I have programmed in C since ~1986, so please don't try to explain the language to me, and don't assume that my POV comes from a place of ignorance.
The craziness with undefined behavior is a fairly recent phenomenon. In fact, I started programming in C before there even was a standard, so all behavior was "undefined", yet no compiler manufacturer would have dreamed of taking the liberties that are taken today.
Because they had paying customers.
The actual benefits of the optimizations enabled are fairly minimal, and the cost is insane, with effectively every C program in existence suddenly sprouting crazy behavior, behavior that used to not be there.
Yeah, and while I know Chris personally, like and respect him, I am not taking his word for it.
> The compiler's job is to generate assembly that matches what you are trying to do in the most efficient way possible
Exactly: "matches what you are trying to do". The #1 cardinal rule of optimization is to not alter behavior. That rule has been shattered to little pieces that have now been ground to fine powder.
Sad times.
See: Proebsting's law, "The Death of Optimizing Compilers", and "What every compiler writer should know about programmers or 'Optimization' based on undefined behaviour hurts performance".
> I have programmed in C since ~1986, so please don't try to explain the language to me, and don't assume that my POV comes from a place of ignorance.
I apologize for my tone, it was more patronizing than I had intended it to be.
> The craziness with undefined behavior is a fairly recent phenomenon. In fact, I started programming in C before there even was a standard, so all behavior was "undefined", yet no compiler manufacturer would have dreamed of taking the liberties that are taken today.
I feel that the current renewed focus on optimizing compilers has really been born out of the general slowing of Moore's law and stagnation in hardware advances in general, as well as improvements in program analysis taken from other languages. Just my personal guess as to why.
> Exactly: "matches what you are trying to do". The #1 cardinal rule of optimization is to not alter behavior. That rule has been shattered to little pieces that have now been ground to fine powder.
The optimizing compiler has a different opinion than you do of "altering behavior". If you're looking for something that follows what you're doing exactly, write assembly. That's the only way you can guarantee that the code you have is what's being executed. A similar, but not perfect solution is compiling C at -O0, which matches the behavior of older compilers: generate assembly that looks basically like the C code that I wrote, and perform little to no analysis on it. Finally, we have the optimization levels, where the difference is that you are telling the compiler to make your code fast; however, in return, you promise to follow the rules. And if you hold up your side of the bargain, the compiler will hold up its own: make fast code that doesn't alter your program's visible behavior.
> The optimizing compiler has a different opinion than you do of "altering behavior".
Obviously. And let's be clear: the optimizing compilers of today. This rule used to be inviolable; now it's just something to be scoffed at.
> If you're looking for something that follows what you're doing exactly, write assembly.
Er, no. Compilers used to be able to do this, with optimizations enabled. That this is no longer the case is a regression. And shifting the blame for this regression to the programmers is victim blaming, aka "you're holding it wrong". And massively counter-productive and downright dangerous. We've had at least one prominent security failure due to the compiler removing a safety check, in code that used to work.
> Finally, we have the optimization levels, where the difference is that you are telling the compiler to make your code fast;
Hey, sure, let's have those levels. But let's clearly distinguish them from normal operations: cc -Osmartass [1]
The article you linked to in your blog post is most likely not serious; it's a tongue-in-cheek parody of optimizing compilers, though one that's written in a way that brings it awfully close to invoking Poe's Law.
But back to the main point: either you can have optimizations, or you can have code that "does what you want", but you can't have both. OK, I lied, you can have a very small compromise where you do simple things like constant folding and keep with the intent of the programmer, and that's O0. That's what you want. But if you want anything more, even simple things like loop vectorization, you'll need to give up this control.
Really, can you blame the compiler? If you had a conditional that had a branch that was provably false, wouldn't you want the compiler to optimize it out? Should the compiler emit code for
if (false) {
// do something
}
In the security issue you mentioned, that's basically what the compiler's doing: removing a branch that it knows never occurs.
This is simply not true. And it would be horrible if it were true. "Code that does what I want" (or more precisely: what I tell it to) is the very basic requirement of a programming language. If you can't do that, it doesn't matter what else you can do. Go home until you can fulfill the basic requirement.
> very small compromise
This is also not true. The vast majority of the performance gains from optimizations come from fairly simple things, but these are not -O0. After that you run into diminishing returns very quickly. I realize that this sucks for compiler research (which these days seems to be largely optimization research), but please don't take it out on working programmers.
What is true is that you can't have optimizations that dramatically rewrite the code. C is not the language for those types of optimizations. It is the language for assisting the developer in writing fast and predictable code.
> even simple things like loop vectorization
I am not at all convinced that loop vectorization is something a C compiler should do automatically. I'd rather have good primitives that allow me to request vectorized computation and a diagnostic telling me how I could get it.
C is not FORTRAN.
As another example: condensing a loop whose result can be computed at compile time. Again, please tell me about it, rather than leaving it in without comment and "optimizing" it. Yes, I know you're clever, please use that cleverness to help me rather than to show off.
> Really, can you blame the compiler?
Absolutely, I can.
> If you had a conditional that had a branch that was provably false,
"Provable" only by making assumptions that are invalid ("validated" by creative interpretations of standards that have themselves been pushed in that direction).
> wouldn't you want the compiler to optimize it out?
Emphatically: NO. I'd want a diagnostic that tells me that there is dead code, and preferably why you consider it to be dead code. Because if I write code and it turns out to be dead, THAT'S A BUG THAT I WANT TO KNOW ABOUT.
This isn't rocket science.
> security issue you mentioned, that's basically what the compiler's doing: removing a branch that it knows never occurs.
Only for a definition of "knows" (or "never", take your pick) that is so broad/warped as to be unrecognizable, because the branch actually needed to occur and would have occurred had the compiler not removed it!
> The article you linked to in your blog post is most likely not serious
I think I noted that close relationship in the article, though maybe in a way that was a bit too subtle.
Hmm…let's try a simpler question, just so I can get a clearer picture of your opinion: what should the compiler do when I go off the end of an array? Add a check for the bounds? Not add a check, and nondeterministically fail based on the state of the program? How about when you overflow something? Or dereference a dangling pointer?
You seem to not be OK with allowing the compiler to trust the user to not do bad things -- but you do trust them enough to out-optimize the compiler. Or am I getting you wrong?
> In my opinion, Swift is not a great language. I would even say that it's not good. It's OK.
That's where I'll have to disagree. I really think Swift is a great language, though it's great in a different way than Objective-C is (though, who knows? Maybe one day it will gain enough reflection facilities to match Objective-C).
> The vast majority of people that used Objective-C were drawn to the success of iOS. They were developers, used to languages like Javascript, Java or C++. Coming to Objective-C, their immediate reaction was "WUTT??? Brackets!" Instead of properly learning the idioms, they fought the language on a daily basis and cursed Apple for forcing them to use a language unlike the one they were used to, in order to jump on the iOS bandwagon. These people, once Swift was released, jumped boat immediately because they never really understood Objective-C and Swift is familiar.
I used to be a Java programmer, and I started iOS development when Swift came out because it looked familiar. Sure, at first I really thought Objective-C was ugly and outdated, but once I really got its design I really came to enjoy it. I'm not saying that it's perfect, or that it does everything right, but there is a certain charm in its model.
> Another group of people actively dislike Swift, and really enjoy Objective-C. These are considered dinosaurs and have to either hide their opinion or face hiring difficulties, or even dismissal from companies where they are currently employed.
Sure, to each their own, but I really think that a lot of these developers just haven't spent enough time with Swift to really get its benefits. It's the same issue that you've mentioned with Swift developers, but in reverse.
> messaging to nil is a FEATURE
The Swift language designers hear you; that's why there's the optional chaining operator.
> In Objective-C you can drop to C at any given time and have access to all the insanely fast libs. You can also write such insanely fast code yourself.
You can do both in Swift.
> Complex as in C++ complex, but without the speed.
Swift is complex, and it's still a bit slower than C++ for certain things, but that doesn't mean it will always be the case.
> Have something you wrote 2 years ago? Good luck with that.
Swift is still a new language, and it's in active development. However, you do have the guarantee that your code from today will continue to compile in the future.
> Swift is safe and everything. Right. Except developers with deadlines will just ! all the optionals when they get in the way.
I don't think you quite understand the safety that Swift offers. In Swift, an optional unwrap is designed to deterministically fail. The same is not true in the C family of languages, where it would be undefined behavior.
> Now, Swift does have some cool things, but so could a new version of Objective-C
There are some things that would be rather difficult to shoehorn into Objective-C at this point, even assuming that Apple is interested in adding these in. For example, Swift's generic functionality is far ahead of what Objective-C offers.
I find it troubling that you equate somebody not being in love with Swift with trolling. Also, you are questioning my skill in Swift because you disagree with me. That makes perfect sense.
Instead of an ad hominem, you could actually, you know, argue against my points.
You didn't reply to the parent, who stated that Objective-C is puke inducing... I am the one who is a troll. OK :)
I remind you that the OP is about Objective-C being ported to Amiga. Somehow, it attracted people who LIKE Objective-C. The insanity of it all!
Please don't do this. It really drives down the quality of conversation if you engage in ad hominem attacks of the poster. This is especially true in this case because one of the arguments brought up was "rabid fanboyism", which your comment clearly isn't helping disprove.
A few people. I would call them shallow in this regard, but that would imply there was something bad with the code in the first place, even if shallow.