It's more a matter of your personal preference and previous exposure to different languages. The way Rust reads is one of its super strengths in my book. I also really enjoyed Standard ML in university, and Rust picks up some of that (via OCaml).
No doubt there are some things that are hard to read in Rust. In the linked post, everything seems very straightforward and should make sense if you know a C-like language.
It's certainly cleaner, but my immediate thought when reading was "that's a compile error" because it looks like it's declaring a variable with a type mismatch. This is where, in other languages, the "case" keyword (or similar) comes in handy (e.g. Dart's if-case syntax[1]), except Rust doesn't have that because of its headlong pursuit of 'conciseness'.
I think the fact you had to re-consider it shows you're not thinking in patterns. Rust is a language which always had pattern matching so in Rust it feels natural to, for example:
while let Some(work) = inbox.pop() { /* ... */ }
[edited: thanks HN, of course since this is code I don't need to escape the asterisks, unless I do]
It's not that... well, it might be, but I use pattern matching fairly regularly in a number of languages. But Rust has hijacked the variable syntax and changed it beyond recognition: there's no matching operator being used; the same code does different things based on its surroundings:
// This causes a compilation error
let Some(work) = inbox.pop()
// But putting it 1:1 within a while statement is fine?
while let Some(work) = inbox.pop() { /* ... */ }
Whereas, with Java's pattern matching:
while (inbox.poll() instanceof final Work work) { /* ... */ }
It borrows the variable declaration syntax too, which can be extracted without issue, but there's an operator being used to express that pattern matching is happening.
Different languages have different uses, histories, and quirks, yes, and this is by no means to suggest that Java is a better language or that it doesn't have its own flaws. But Rust's co-opting of that syntax makes the language harder to learn and harder to comprehend at a glance, in my opinion. It seems like one of those situations where, once you learn it, it's fine, but that's my point.
First of all, you're right, it does indeed do different things. The stand-alone variant only accepts irrefutable patterns, which is why a refutable pattern there needs an "else" branch.
But just for fun, let's make it compile!
enum Result<T> {
    Some(T),
}

use Result::Some;

fn pop() -> Result<u32> {
    Some(42)
}

fn main() {
    // This doesn't cause a compilation error
    let Some(work) = pop();
}
I will argue that it's not different, it's exactly the same thing. In both cases you are matching a pattern. The only difference is that in `if` and `while`, because they are conditional, the pattern is allowed to not match (refutable), while in bare `let` it must match (irrefutable).
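To make the refutable/irrefutable distinction concrete, here is a small self-contained sketch (the inbox is a hypothetical Vec<u32>, not anything from the thread's codebase) showing all three forms side by side:

```rust
// Hypothetical inbox modeled as a Vec<u32>; pop() yields Option<u32>.
fn drain(mut inbox: Vec<u32>) -> u32 {
    let mut total = 0;
    // `while let`: refutable pattern; the loop ends on the first `None`.
    while let Some(work) = inbox.pop() {
        total += work;
    }
    total
}

// `if let`: the same refutable pattern, with an explicit `else` arm.
fn peek_or_zero(inbox: &[u32]) -> u32 {
    if let Some(&work) = inbox.first() {
        work
    } else {
        0
    }
}

// Bare `let` with a refutable pattern must become `let ... else`,
// and the `else` body has to diverge (return, panic, ...).
fn first_or_panic(inbox: &[u32]) -> u32 {
    let Some(&work) = inbox.first() else {
        panic!("empty inbox");
    };
    work
}

fn main() {
    assert_eq!(drain(vec![3, 4, 5]), 12);
    assert_eq!(peek_or_zero(&[]), 0);
    assert_eq!(first_or_panic(&[9, 8]), 9);
    // Bare `let` with an irrefutable pattern is plain destructuring.
    let (a, b) = (1, 2);
    assert_eq!(a + b, 3);
}
```

Same pattern in all three places; only the context decides whether the non-matching case is allowed and where it goes.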
I'll assume you know you missed a semicolon, so we'll fix that. That still gets us a compiler error, but the diagnostic specifically says pattern `None` not covered, and it suggests:
let Some(work) = inbox.pop() else { todo!() };
You seemed puzzled by the fact we can use this pattern with while let, but of course when our pattern doesn't match (for None) the loop ends; that's what a while-let loop does. The if expression which may not match needs a clause for the case where it doesn't match, and the suggestion here is an else clause.
Remember, Rust is a statically typed expression language. Python-style situations, where maybe the pattern matched or maybe it didn't and something will happen but the language doesn't promise what, aren't OK here, because we have static typing: what would be the type of "Eh, I don't know, whatever"?
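A quick way to see what "statically typed expression language" buys: an `if` used as a value must cover every case with one consistent type, or it won't compile. A minimal sketch (the function name is made up for illustration):

```rust
// Every branch of an `if` expression must produce the same type;
// "maybe it happened, maybe it didn't" has no type the checker could assign.
fn classify(n: i32) -> &'static str {
    if n < 0 {
        "negative"
    } else if n == 0 {
        "zero"
    } else {
        "positive"
    }
}

fn main() {
    assert_eq!(classify(-3), "negative");
    assert_eq!(classify(0), "zero");
    assert_eq!(classify(7), "positive");
}
```

Drop the final `else` and the expression could be valueless on some inputs, which is exactly the "language doesn't promise what" situation Rust rejects.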
Edited: Aha, I realised you wrote "1:1 within a while statement" and now I think I see the problem. That's not a while statement; Rust doesn't have those. It does have while loop expressions, but this isn't one of those either. This is while let, and it's different.
This isn't a while loop where the while condition happens to be a variable assignment; Rust doesn't have that. There's a reason while let has a whole separate entry in the book. This is syntax for a loop which repeatedly performs a pattern match and always exits when it fails.
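That "loop which repeatedly performs a pattern match and exits when it fails" reading can be written out by hand. A sketch (not the compiler's literal desugaring, but equivalent in behaviour; the function names are made up):

```rust
// Summing a queue with `while let`.
fn sum_while_let(mut items: Vec<u32>) -> u32 {
    let mut total = 0;
    while let Some(x) = items.pop() {
        total += x;
    }
    total
}

// The same loop spelled out as the `loop` + `match` it behaves like.
fn sum_desugared(mut items: Vec<u32>) -> u32 {
    let mut total = 0;
    loop {
        match items.pop() {
            Some(x) => total += x,
            // A non-matching result ends the loop.
            None => break,
        }
    }
    total
}

fn main() {
    assert_eq!(sum_while_let(vec![1, 2, 3]), 6);
    assert_eq!(sum_desugared(vec![1, 2, 3]), 6);
}
```

Seen this way, there is no variable-assignment condition anywhere, just a `match` whose failure arm is a `break`.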
This is honestly a pretty perfect example of why criticising programming languages can be so frustrating: people are completely unwilling to meet you where you're at. Your response here is to say that of course while-let does that, because that's what while-let does, that's how while-let looks. I'm not being critical of Rust having pattern matching as the while-loop condition, but of how this is expressed in code, specifically how it looks exactly like an ordinary variable definition but doesn't at all function like one, and how that can be confusing to a non-zero number of people. That's the extent of my complaint. I'm gonna go now.
I think that's one of the things I struggle with in Rust; pattern matching is integral and I'm not thinking in patterns (yet). When I see "while let" I don't think about patterns at all, my brain goes straight to variable assignment.
Short answer: It is difficult because you aren't familiar with it, and your brain is still wired for memory-unsafe or GCed languages. But it is actually easier to read than most other languages when you understand it and how to navigate it.
When I first started learning Rust several years ago, I shared your viewpoint. With a heavy preference for C++ syntax, I thought Rust looked atrocious.
Then I learned it. I got good with it. I got comfortable with it. I switched from VS Code based IDEs to vim, then to IDEs with vim bindings (trying Zed now). Now, Rust reads like a dream -- aside from lifetime specifiers, which I think could be much smarter than they are now.
Anyway, it has superior pattern matching and code searching. snake_case is easier to navigate than camelCase or PascalCase. Keywords are short and easily recognizable. Branching logic is much, much easier to follow. Error handling is explicit and also easier to follow (and I started out really missing try-catch). Code gen with macros eliminates a lot of headaches. A 1st class package manager that is fast & can be easily inspected, that seamlessly integrates into the code...the list goes on.
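On the explicit error handling point: failures travel as ordinary `Result` values, and `?` forwards them up the call chain instead of an invisible throw. A minimal sketch (the function name is made up):

```rust
// Failures are values: the signature says exactly what can go wrong,
// and `?` early-returns the Err instead of throwing.
fn parse_and_double(s: &str) -> Result<i32, std::num::ParseIntError> {
    let n: i32 = s.trim().parse()?; // forwards the ParseIntError on failure
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double(" 21 "), Ok(42));
    assert!(parse_and_double("not a number").is_err());
}
```

Coming from try-catch this feels verbose at first, but every fallible call is visibly marked at the call site.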
The point is, it is a paradigm shift that doesn't hide much of anything away. When you consider there is no GC and you are in full control of memory, that there are strict syntax and styling rules that make the global Rust codebase universally accessible, once you switch to it you find yourself wishing everything was written in Rust. That is why people using it want to rewrite everything, even well established packages. A Rust-only codebase is buttery smooth.
In general, I find Rust very readable. The areas where I do struggle are generics and lifetimes, and I don't think it's Rust's syntax that makes me struggle; it's the fact that people tend to use non-descriptive names for lifetimes and generics. It's hard to pick apart a trait that has four generics named T, U, V, and W, or lifetimes 'a, 'b, and 'c.
Usually people will try to make it the first letter of something meaningful, but it is still much harder to parse.
My own preference is to use more descriptive names except in trivial cases. Having the lifetime mirror a struct member name or a parameter name is really helpful for understanding them, in my opinion.
Yeah, I think it's non-obvious that you can name a lifetime 'frame rather than 'f. Yes, the compiler won't know you named it that because it's only supposed to live for one frame, but that's also true for your variable named timeout: Rust can see it's a Duration, so it has appropriate affordances, but it can't know you meant to call set_timeout(timeout) and not just store it somewhere.
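A hypothetical sketch of that naming idea (the struct and method are made up): the compiler treats `'frame` exactly like it would `'a`, but the name documents what the borrow is tied to.

```rust
// Naming the lifetime after what it represents: data borrowed for
// the duration of one rendered frame.
struct FrameView<'frame> {
    pixels: &'frame [u8],
}

impl<'frame> FrameView<'frame> {
    fn first_pixel(&self) -> Option<u8> {
        self.pixels.first().copied()
    }
}

fn main() {
    let pixels = vec![255u8, 0, 0];
    let view = FrameView { pixels: &pixels };
    assert_eq!(view.first_pixel(), Some(255));
}
```

Reading `&'frame [u8]` as "a slice that lives as long as the frame" is exactly the mirror-a-member-name trick described above.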
I have a backlog item to find Rust docs which use a concise lifetime name but would benefit from a better one. However, there aren't actually that many cases other than the scoped threads, which do indeed name the lifetimes 'scope and 'env, showing that we can give these meaningful names.
Somewhere in between. There actually is a level of demand for languages which are annoying to use; whether this is annoying syntax or annoying semantics isn't much of a distinction, they dovetail. But more than that, there's demand for ML clones (but with curly braces), and neither ML nor generic-curly-brace is the strongest foundation to build readability on, unfortunately.
On my Samsung Galaxy watch, if I get a notification from my Unifi security cameras, for example, I get a little thumbnail image appear on my watch. There's no special app on my watch, just the app on my paired Galaxy phone.
Will it do this? Or would I just get a text notification? I don't understand smart watches well enough to know how much they are doing themselves vs how much of what they do is to be a mindless projection of whatever the paired phone tells them to do.
The Pebble software doesn't have support for images in notifications right now. But it definitely could/should be added. And it's open source, so you could even do it yourself!
I am not familiar with the Pebble SDK or the notification API it has. Smart watches usually will display whatever notification the mobile device instructs them to display.
If you get a push notification on your mobile, I don't see a reason why Pebble won't display it. The thumbnail image might be missing, but all the text content will be shown. And FWIW, the entire thing is open source, so you can go in and add it, open feature requests, etc.
Use the Android pebble app "Notification Center" and it should be able to do that for you.
(Notification Center gives you extreme amounts of control over what to send to the watch and how it gets displayed, etc. It's the reason I'm still daily wearing my Pebble watches)
But Arduino ecosystem is full of superstition and bizarre hacks. It's cargo cult electronics. They will do anything to avoid reading documentation or writing robust code.
Even the power saving recommendation here reeks of it. There is no effort to understand it. Someone on an Arduino forum recommends it, others start to echo it to try to appear like they know what they're talking about, it becomes lore in the Arduino world, and you out yourself as a clueless newbie if you don't know to do esp_wifi_set_ps(WIFI_PS_NONE) without questioning anything, because that's just the way it's done. Power saving merely disables the radio in between AP beacons, so unless there's a bug in the implementation it should have no noticeable impact on a quiet WiFi station other than saving a lot of power.
I used to say things like that, but come on: Arduino is targeted at hobbyists. More specifically, it's targeted at hobbyists who don't want to spend too much time learning hardware. If they did, they would be using a "bare" microcontroller better suited for their needs and costing one tenth the price. But they're not interested in microcontroller programming, they just want to get their art project done.
It's the same thing that happened with computers. Billions of people use them, but most just want to access Facebook or use MS Word, not learn OS internals. It's a different world from where we used to be 30-40 years ago, and that's fine. We design simpler, more intuitive products for them.
If a product meant for that group can't be used effectively by the target audience, I think the fault is with the designer, not with the user.
> If they did, they would be using a "bare" microcontroller better suited for their needs and costing one tenth the price.
Where do you get something like an ESP that's one tenth the price? ESPs are cheap and you can run Arduino, ESP-IDF directly, or fringe environments (I had some ESP8266 running NodeMCU because Lua made more sense to me than Arduino).
You can run Arduino code on anything, since it's mostly just a bit of syntactic sugar around C. But I'm sure you know what I mean.
My point is that people who are attracted to Arduino are, by and large, not the kind of people who want to geek out about the inner workings of the MCU, and there's nothing wrong with that.
I'm pretty familiar with the microprocessor architecture of the 8-bit era that I grew up in, and have done a fair amount of hardware hacking. As things have gotten more complex, I've let some things slide, such as the complexity of pipelined architectures.
Arduino is not even syntactic sugar any more. All it retains of its origins, that I'm aware of, is the weird setup() and loop() schtick. And you have limited control over what happens before your code starts. But with most Arduino compatible boards, you have full access to the vendor supplied libraries, and can go as deep as you want. These days my preferred platform at work is Teensy 4, and at home, the wireless enabled boards. I think Paul Stoffregen is some kind of 100x engineer.
But life is short. Over my 61 years, I've carefully rationed the brain cells that I devote to innards of technologies that will soon be obsolete. I read the Turbo Pascal manuals cover to cover, and The Art Of Electronics, but I never cracked Inside Macintosh. I've decided that I will simply not learn anything about any OS that is not Linux, and superficially at that.
I program desktop computers in high level languages, despite total abstraction of the innards.
I think the relative portability of Arduino code has been a huge boon for hobbyists because it encourages the formation of a community of people who can share code and knowledge, even if they're not all using the same processors, and despite sometimes needing to tweak code when porting it from one platform to another. This was also the case with early FORTRAN. Portability across processors revolutionized scientific computing.
The problem isn't with the artist doing a one-off project involving a microcontroller. It's the Arduino "experts" who write blogs, create videos, and dominate forums with their accumulated nonsense. They posit themselves as authorities in the space, newbies adopt and echo whatever rubbish they make up, and the cycle continues. They get very defensive if you try to correct them, even linking directly to documentation supporting it.
If you're going to write a blog about how the ESP32 doesn't connect to the strongest AP so you need to pin it to a specific BSSID in your router settings, maybe you shouldn't be writing that blog if you haven't taken at least a moment to check the documentation and see that the behaviour you want is already an option, selectable by changing literally one line in your ESP32's WiFi config. Instead this pseudoscience proliferates.
Instead of spending x2 the initial effort to fix the root cause, you spend x1 the initial effort to implement jank and then spend x10 the effort down the line maintaining the jank.
Deal with what? I would argue that if you're going to the effort of writing a blog post on the topic then you should at least go to the effort of skimming the docs to make sure there isn't already a solution for the common problem you're experiencing.
It's literally one word to change in his WiFi config to get the behaviour he wants. It's already implemented. Who can't "deal" with that?
Personally, I don't use multiple APs with overlapping SSIDs, but if I did, then I can see how it would be easier to handle the logic from the AP management side rather than the client. It's also nice to not have to re-connect IoT things if/when you add or change your APs.
I think I understand you. That functionality doesn't exist in ESP32 Arduino tool chain without more work/more code. Their hobby level perspective is valuable to other hobby level engineers who want a solution.
> It disables the radio in between AP beacons, so unless there's a bug in the implementation it should have no noticeable impact to a quiet WiFi station other than saving a lot of power.
Seems safe, but it probably depends on the clock being accurate, so it can wake up on time for the next beacon, and the clock frequency is likely sensitive to temperature and therefore power usage.
If you're plugged into a wall wart, chances are the power savings aren't going to be too much; if it helps reliability (which should be easy to confirm), then it's likely worth paying a cent or two more a month. It's different if you're running from battery, though.
To be fair, the API people typically use in hobbyist contexts is literally a single call to 'WiFi.begin(ssid, password)'. There's not exactly any obvious room for error here, and any details which may or may not have been implemented incorrectly are so deep inside abstraction layers as to be inaccessible. There's little apparent room for making the code more robust (other than "workarounds" like application level health checks + reboot on error), because everything is supposed to have been taken care of by the abstraction.
If I can disable PM and then my ESP stops disconnecting from WiFi, I'm happy. There's not much more I can do without re-implementing what 'WiFi.begin()' does myself, and I usually have better things to do with my time.
> It disables the radio in between AP beacons, so unless there's a bug in the implementation it should have no noticeable impact to a quiet WiFi station other than saving a lot of power.
A) this increases ripple voltage which eventually impacts RX noise floor. As long as you have enough headroom at the input to your regulator power saving is great, but eventually having a more consistent load becomes the limiting factor for many devices.
B) drastically increases typical latency - not an issue for all applications, but the ESP-IDF network stack has Nagle-style buffering that can't always cleanly be disabled, and applications tend to write each little bit of the next layer to the TCP socket.
A) The timing for this is deliberately set to be very conservative in terms of the wakeup window (at the cost of higher power), so the radio is probably powered up for a good 5ms before the beacon arrives. I don't know if you could unintentionally design a 3V3 supply so poor that it takes on the order of milliseconds to adjust to an output current change from about 30mA to 80mA.
B) Yes, this is a fair point, and why I was careful to specify a "quiet" station above. If actively transmitting then there is likely a benefit to disabling power saving, but unlike Arduino bros I will admit at this point that I don't understand the WiFi spec well enough to comment further with any confidence.
Often ESP32 devices at low power can still transmit, but will start to fail to receive acknowledgements.
I have a guess, but no real way to test what's happening. On the scope, the start of a transmission sags the supply hard, but for most of the packet the ramp rate is relatively low. Once the transmission stops and the radio turns over to receive mode, the ramp rate is much faster. On a third device I can record packets and see that they are being sent and acknowledged, but often retransmitted by the ESP, which didn't seem to hear the acknowledgement.
> The timing for this is deliberately set to be very conservative in terms of the wakeup window (at the cost of higher power)
Yes, the minimum interval of when to start listening is determined by both radios' clock accuracy budgets, one of which can be known and the other assumed.
> so the radio is probably powered up for a good 5ms before the beacon arrives.
No, not anywhere near that long. I don't have a board wired out for current measurements, but for reference, 5ms/101ms beacon with DTIM=1 would be a 5% duty cycle without any useful data, unacceptably high for many battery powered devices.