And the Max Headroom style was notably copied for the Back to the Future Part II "80s Cafe" scene, with Max Headroom-style versions of Reagan, Michael Jackson, and Ayatollah Khomeini.
On the night of November 22, 1987, the television signals of two stations in Chicago, Illinois, were hijacked, briefly sending a pirate broadcast of an unidentified person wearing a Max Headroom mask and costume to thousands of home viewers.
And his manner of speaking is an inspiration for the iconic voice of SHODAN from System Shock, though SHODAN removes all the comedy and dials the terror up to 11.
It's the follow-up line that got me on a recent rewatch. For context, Marty is trying to show a video, played back on a futuristic camcorder, and Doc is more amazed by the camera.
"This is truly amazing, a portable television studio. No wonder your president has to be an actor, he's gotta look good on television."
I have a theory that if the homepage didn't show 'Featured on Meta', the reputation of the site would be much higher, because we wouldn't all be getting a window into the bickering and nitpicking, which is far too prominent on the homepage.
The idea behind inertial navigation is to keep track of the missile's position by constantly measuring its acceleration. By integrating the acceleration, you get the velocity. And by integrating the velocity, you get the position.
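Here's a toy sketch of the double integration in Python (made-up numbers, simple rectangle-rule integration, not a real navigation filter), including why sensor bias is the enemy: a constant acceleration error grows quadratically in position.

```python
import numpy as np

# Toy 1-D inertial navigation: integrate acceleration samples twice
# to recover velocity and position.
dt = 0.01                              # 100 Hz sample rate (assumed)
t = np.arange(0, 10, dt)
accel = np.sin(t)                      # stand-in for accelerometer output

velocity = np.cumsum(accel) * dt       # first integral:  v(t)
position = np.cumsum(velocity) * dt    # second integral: x(t)

# A tiny constant sensor bias grows quadratically in position:
v_biased = np.cumsum(accel + 0.001) * dt   # ~0.1 milli-g of bias
x_biased = np.cumsum(v_biased) * dt
print("position error after 10 s:", x_biased[-1] - position[-1])  # ~0.05 m
```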
This sounds like it couldn't possibly work (surely all the little errors compound?) but apparently it's how Apollo navigated
That is how all self-guided weapons systems worked before GPS was viable. Many still retain that capability as a fallback. Notably, the Tomahawks fired during Desert Storm had to transit over Iranian airspace because they needed the mountainous terrain to correct for their inertial drift before turning toward their targets over the flat Iraqi plains.
GPS can be jammed (see the Russia-Ukraine war), so inertial systems are still very important for rockets. For example, some HIMARS rockets start with GPS and then rely only on inertial guidance as they get close to the target.
HIMARS relies on inertial navigation for the entire flight and uses GPS updates to course-correct. If GPS is blocked for a sufficient amount of flight time, even with the inertial navigation, the accuracy can become unusably low.
This is how the Russians have been throwing double digit percentages of launches off course.
Terminal guidance since ~1995 on higher-end weapons has switched to hybrid inertial + scene matching (various sensor types).
F.ex. the 90s Tomahawk used terrain contour matching to orient itself
For more details see https://apps.dtic.mil/sti/tr/pdf/ADA315439.pdf (US translation of a mid-90s Chinese survey of the guidance space, but it covers the material and is publicly available)
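To make the terrain-contour idea concrete, here's a toy 1-D version (invented map and numbers, not the real TERCOM algorithm): slide the measured altitude profile along a stored elevation map and take the offset that fits best. Over flat terrain every offset fits about equally well, which is why the Desert Storm Tomahawks needed the mountains to get a good fix.

```python
import numpy as np

# Toy terrain contour matching: find where a noisy altimeter profile
# best matches a stored 1-D elevation map (minimum mean-squared error).
rng = np.random.default_rng(0)
terrain = np.cumsum(rng.normal(0, 5, 2000))   # fake rugged elevation map

true_offset = 700
measured = terrain[true_offset:true_offset + 200] + rng.normal(0, 1, 200)

errors = [np.mean((terrain[i:i + 200] - measured) ** 2)
          for i in range(len(terrain) - 200)]
print("estimated offset:", int(np.argmin(errors)))   # ~700
```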
Afaik, most modern systems use infrared target matching for final course correction. (Initially developed to allow anti-shipping missiles to autonomously prioritize targets, but now advanced enough to use in land scenarios as well)
I don't think ATACMS or GMLRS missiles have any terminal guidance apart from their aim point. The GMLRS missiles that carry German SMART munitions technically do, since the SMART munition has its own targeting system.
It wouldn't make much sense to me, as most ATACMS warheads are area weapons, not point-target weapons, so they aren't expected to aim at a single target. Also, these systems are relatively cheap compared to things that DO have such guidance.
Almost all. Walleye television-guided glide bombs used edge detection on a television signal to aim themselves in. A human would designate a target at the start but then the bomb would autonomously track the target. An optical fire-and-forget system developed in the 1960s.
Sidewinders are another example. Both developed at China Lake.
When Nintendo Wiimotes first appeared, they were some of the few devices at the time with cheap MEMS accelerometers and gyroscopes that were programmer-friendly.
I remember taping two together back to back and integrating acceleration across them. That's when I learned Kalman filters. It was accurate enough that I could throw it across my desk and measure the desk length :)
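For anyone curious, the scalar version of a Kalman filter is only a few lines. This is a toy (estimating a constant from noisy samples), not the actual Wiimote fusion code, but each step is the same blend of prediction and measurement, weighted by their variances:

```python
import numpy as np

# Minimal scalar Kalman filter: estimate a constant from noisy samples.
rng = np.random.default_rng(1)
true_value = 2.0
meas = true_value + rng.normal(0, 0.5, 500)   # noisy sensor samples

x, p = 0.0, 1.0        # state estimate and its variance
q, r = 1e-5, 0.25      # process noise, measurement noise (0.5**2)
for z in meas:
    p += q                     # predict: uncertainty grows a bit
    k = p / (p + r)            # Kalman gain: trust measurement vs. estimate
    x += k * (z - x)           # update estimate toward measurement
    p *= (1 - k)               # uncertainty shrinks after the update
print(f"estimate: {x:.3f} (true {true_value})")
```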
When the MacBook got the acceleration sensor, I hacked up a little program to estimate velocity, with a button to reset at stoplights. Some friends drove me around; it worked poorly. It did pretty OK on the highway, but awful in the city.
I think if I kept messing with it, it'd get a lot better, but I sorta lost interest. This was more of a fun weekend toy.
I think all phones have them, and they might be reachable through Chrome/Safari. And it is kinda fun to play with, but you'll probably hit sampling-rate errors pretty quickly; you have to guess the shape of the curve between data points.
It is how Apollo navigated, although both the ground (via ground tracking) and the crew (by locating stars through a sextant, with the Apollo computer having a database of the positions of several dozen bright stars) could update their current position throughout the flight.
Apollo used star sightings to check the accuracy of the gyros that measured which way the spacecraft was pointed. The stars could not be used to determine position like a ship at sea could do.
Besides inertial navigation, they had a transponder that would echo back a continuous pseudorandom bit stream, and the delay gave a precise measurement of distance.
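The principle is neat enough to fit in a few lines: correlate the received (delayed) copy of the pseudorandom code against the transmitted one, and the correlation peak tells you the round-trip delay. A toy version (the chip rate and code length here are invented, not Apollo's actual ranging parameters):

```python
import numpy as np

# Toy pseudorandom ranging: the correlation peak gives the round-trip
# delay, and delay * c / 2 gives the one-way distance.
rng = np.random.default_rng(2)
chip_rate = 1e6                            # 1 Mchip/s (made up)
code = rng.integers(0, 2, 4096) * 2 - 1    # +/-1 pseudorandom sequence

delay_chips = 1234                         # unknown delay to recover
received = np.roll(code, delay_chips)      # echoed, delayed copy

corr = [np.dot(np.roll(code, d), received) for d in range(len(code))]
est = int(np.argmax(corr))
distance = est / chip_rate * 3e8 / 2       # seconds -> one-way meters
print(f"estimated delay: {est} chips -> {distance / 1000:.1f} km")
```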
Thank you for the correction, but are you sure that is accurate? I was definitely under the impression that although their position was normally updated by the ground (to the AGC, via its uplink capability), and the sextant was normally used to determine their orientation, the astronauts could use their optical equipment and calculations to determine their position as well as their orientation, albeit with less precision. This NASA website (https://www.nasa.gov/history/afj/compessay.html#:~:text=Opti...) seems to say as much:
"Optical navigation subsystem sightings of celestial bodies and landmarks on the Moon and Earth are used by the computer subsystem to determine the spacecraft's position and velocity and to establish proper alignment of the stable platform."
"The CM optical unit had a precision sextant (SXT) fixed to the IMU frame that could measure angles between stars and Earth or Moon landmarks or the horizon. It had two lines of sight, 28× magnification and a 1.8° field of view. The optical unit also included a low-magnification wide field of view (60°) scanning telescope (SCT) for star sightings. The optical unit could be used to determine CM position and orientation in space."
The errors would be less of a problem than Apollo's when your longest possible flight is only 45 minutes or so. And I'm not sure, but I'd guess the ballistic portion of the flight is uncontrolled (since the steering comes from the rocket motors), so perhaps the first few minutes are all it needs to maintain accuracy for?
The little errors do compound, but the errors have been made progressively littler; a modern ring-laser gyro INS has a drift of one millidegree per hour or less.
Or you can add an external correcting factor, such as the Trident's astronav system, which takes star shots to recalibrate the INS.
Isn't it about finding the time difference between pseudorandom coded signals? Granted, the satellite positions and paths need to be known, which is another part of the puzzle. That involves some calculus, I'm sure.
Yes, but measuring diffs in either the pseudorandom code itself or the underlying carrier wave is basically measuring relative velocities between each satellite and the observer.
It's all summing dx_i/dt + dy_i/dt + dz_i/dt over the i paths between satellites and ground stations (or more receivers for differential, RTK, or VRS-style work). [2]
Which reduces, most of the time, to summing Δx_i + Δy_i + Δz_i + Δt_i (timing errors) over the i paths between each satellite and the ground receiver.
You should recognize that transformation if you've ever taken calculus, even if you don't integrate every time you get a fix.
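As a concrete (if simplified) illustration, here's the classic iterative pseudorange solution: linearize range = |sat - pos| + c*bias around a guess and solve the least-squares update. All satellite and receiver coordinates below are invented.

```python
import numpy as np

# Gauss-Newton pseudorange fix: solve for [x, y, z, c*clock_bias].
c = 299_792_458.0
sats = np.array([[15e6, 10e6, 21e6], [-12e6, 18e6, 17e6],
                 [5e6, -20e6, 18e6], [-8e6, -9e6, 23e6]])   # made-up orbits
truth = np.array([1.2e6, -2.3e6, 5.9e6])                    # receiver position
bias = 1e-4                                                 # clock error, s
pr = np.linalg.norm(sats - truth, axis=1) + c * bias        # pseudoranges

x = np.zeros(4)                       # initial guess: Earth center, no bias
for _ in range(10):
    rho = np.linalg.norm(sats - x[:3], axis=1)              # predicted ranges
    H = np.hstack([(x[:3] - sats) / rho[:, None], np.ones((4, 1))])
    x += np.linalg.lstsq(H, pr - (rho + x[3]), rcond=None)[0]

print("position error (m):", np.linalg.norm(x[:3] - truth))
print("recovered clock bias (s):", x[3] / c)
```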
Part of what I describe as math 'magic' is that you can cancel out most of the unknowns and most of the unsolved calculus if you add a second fixed receiver.
Google and Apple location services 'cheat' and do this by substituting a nearby wifi MAC with known coordinates, which for them is good enough. But augmented GPS from the FAA or DOT or Coast Guard etc. works the same way, with real GPS receivers on the ground in real time, obviously without having to substitute anything.
Either way, the extra known variable greatly simplifies the math by canceling out terms.
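The cancellation is easy to see numerically: errors common to both receivers (satellite clock, ephemeris, most of the atmosphere) vanish when you difference against a reference receiver at a known position. All numbers below are invented.

```python
import numpy as np

# Differential correction: the base station knows its true ranges, so
# its measured-minus-true residuals capture the shared errors, which
# the rover can then subtract out.
rng = np.random.default_rng(3)
common = rng.normal(0, 5.0, 6)        # per-satellite shared errors (m)
rover_noise = rng.normal(0, 0.3, 6)   # receiver-local noise (m)
base_noise = rng.normal(0, 0.3, 6)

true_rover = rng.uniform(20e6, 26e6, 6)          # true ranges to 6 sats
true_base = true_rover + rng.uniform(-1e3, 1e3, 6)

pr_rover = true_rover + common + rover_noise
pr_base = true_base + common + base_noise

corrections = pr_base - true_base                # broadcast per-satellite
corrected = pr_rover - corrections
print("raw range error (m):      ", np.std(pr_rover - true_rover))
print("corrected range error (m):", np.std(corrected - true_rover))
```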
Plus there are both closed and open form solutions developed since initial GPS deployment that allow solving without direct integration.
Chapter 12 of [0] (Surveying) gets into the math, including the transformations, if you want the details.
Or [1] GPS by van Sickle for a good overview of the various methods/ technologies. (Also survey-centric).
[2] despite WGS84 and lat/lon being associated as the default 'GPS coordinates', the 'raw' GPS system data is XYZ Cartesian (ECEF, in meters), then transformed to lat/lon or whatever else.
My assumption was that GPS doesn't use dead reckoning to get a fix (other than the satellite paths). Do receivers use the Doppler effect to directly measure velocity?
Dead reckoning isn't really the right term: there are broadcast and published ephemerides for the satellites, of various qualities (both predicted and observed, and then further levels of correction days or weeks later, for high-precision/accuracy or strictly static observation work).
The doppler mostly comes into play with the small delta-t errors, but again, more math magic cancels most of it out in most cases, or what remains is negligible.
It's more of a signals/sync thing that gets into antenna design and (to simplify) getting all signal cycles from the various satellites working within a single aligned synced cycle, if that makes sense.
One reason the old GPS units needed a long time to get an initial fix was waiting to download the broadcast data, at ~50 bits/sec. This can now be downloaded much more quickly via the internet or other methods.
And there are dozens of other similar shortcuts possible depending on receiver capabilities/connectivity/observation methods.
Which is to say that there's no one 'right' way to get a fix, and the 'most correct' original design was the ~hour-long broadcast download. And no one does that anymore.
But just about every method (I'm aware of) is derived one way or another from the general eqns I gave above.
(But my exposure is almost entirely geodesy, engineering, and surveying; my military (encrypted) knowledge comes from my PLS instructor being ex-Army intelligence, not from hands-on experience. Which is also why I'm at least aware of so many of the missile tie-in issues.)
And there are signals processing and CS tricks also, which I only barely grasp.
But if something says it starts with baseline (propagating signal path) lengths to get position, it's skipping the step of how it measures/estimates those initial baseline lengths.
I also could not believe inertial navigation systems worked as well as they do when I first learned about them. At some point in time the most sophisticated IMUs were actually export-controlled!
Maybe this has changed or is ineffective now that smartphone/quadcopter IMUs have caught up.
Advanced IMUs are still export controlled and the state-of-the-art is classified. The US military considers this a cornerstone technology and has invested heavily in R&D over the years. The IMUs that are widely available commercially have improved significantly over the years but so have the military versions.
> Maybe this has changed or is ineffective now that smartphone/quadcopter IMUs have caught up.
They did not catch up. There are two kinds of IMUs: the kind where you have to account for the rotation of the Earth during signal processing, and the kind where there is no point because it would be lost in the noise anyway. The smartphone/quadcopter IMUs are the second kind. The first kind is still export-controlled.
Consumer-grade IMUs are still well below the performance of even much older military-grade IMUs (which tend to be impressive feats of precision engineering, with price tags to match, but also physically much larger). You'll still find anything that's useful for working out position over any time period is export-controlled (dual-use or stricter).
At least modern ICBMs do a star sight to calibrate at the top of their trajectory but, yes, that's what inertial guidance is. Draper Labs basically pioneered it.
Ground-based launch sites can supply both position and orientation very accurately (to within a few arcseconds), more accurately than stellar correction can, and their gyro platform can keep it within a few arcseconds as well.
Submarines can determine their position accurately enough, but their orientation data can be improved upon using stars.
The MIRV bus takes in the angular fix just before it starts giving the warheads their individual nudges.
I mean, it's a nuclear missile; millimeter accuracy isn't really necessary. Somewhere in the general vicinity is good enough for its purpose of going boom.
Well, accuracy makes a big difference if you're trying to hit a hardened target like a missile silo. Missile guidance has been a constant effort to squeeze out more and more accuracy. Minuteman I started with an accuracy of 2 km, but now Minuteman III is said to have an accuracy of 120 meters. The Peacekeeper (MX) missile, no longer in service, is said to have an accuracy of 40 meters. You can use a much, much smaller warhead if you're 40 meters away compared to 2 kilometers.
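Back-of-the-envelope, assuming blast radius scales roughly with the cube root of yield (a standard rule of thumb), the yield required to hold the same overpressure on target scales with the cube of the miss distance:

```python
# Required yield scales ~ (miss distance)**3 under cube-root blast scaling.
cep_old, cep_new = 2000, 40     # meters: early Minuteman vs. Peacekeeper
print(f"yield factor: {(cep_old / cep_new) ** 3:,.0f}x")   # 125,000x
```

So a 40-meter-accurate missile can threaten a silo with a warhead orders of magnitude smaller than a 2-kilometer-accurate one would need.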
The START II treaty limited Russia's and the US's ICBMs to a single warhead each. The Peacekeeper was optimised as a platform to host multiple independently targetable re-entry vehicles (MIRVs), so when the US agreed to revert to a single warhead per missile, the Minuteman III was much cheaper to maintain than the Peacekeeper.
So even though Russia withdrew from START II almost immediately, the US continued to unilaterally remove the MIRV capability from its ICBM fleet and stick to single warhead Minuteman IIIs.
Random errors (i.e. noise) cancel out in the long run thanks to integration.
You're then only left with systematic offset errors which can presumably be calibrated out to a large extent.
We can assume the error will have a random (whether it's actually truly random or merely pseudo-random doesn't matter here, just assume it's indistinguishable from truly random for this discussion) and a non-random component.
The random component I assume to be gaussian (thermal noise, for example) and therefore symmetrical around the real value.
It's obvious we can remove this type of noise through averaging (of which the core operation is integration).
The non-random component I assume to be a skew that can be calibrated out.
With these two assumptions in mind you can see that yes, it's indeed a random walk, but a very well behaved one.
No, you can't remove the random walk error by integrating. The point is that after integrating, what you're left with is the random walk error. To make this concrete, if you buy a commercial-grade gyroscope for $10, it will have a random walk error of several º/√h. So after summing the errors for an hour, you're left with several degrees of random error, which is bad. If you spend $100,000 on a navigation-grade gyroscope, you'll get a random walk error < 0.002º/√h, which is much better.
As far as calibrating out the skew, of course you can do that to some extent, but it's not a magic bullet. The Minuteman periodically measures skew and even applies equations for the change in skew with acceleration. The problem is that skew is not constant; it changes with time, changes with temperature, changes with position, and changes randomly, so you can't just calibrate it out. That's one reason why missiles use strategic-grade IMUs for a million dollars rather than a commercial-grade gyro for $10: you're getting drift of .0001º/hour instead of .1º/second.
You are correct, I forgot to separate between long-term and short-term random effects.
Short-term random effects (as in, the part of the gyro's random walk error significantly higher in frequency than the inverse of the integration period) will get cancelled out by integration, assuming they're Gaussian.
Long-term random effects (mainly from time and temperature, like you mentioned) will instead tend to accumulate with integration, i.e. worsen with time.
P.S. great fan of your many ventures into retro tech, keep them coming!
that was not what you forgot and your summary is still wrong. kens is correct. i suggest programming some simple simulations using a random number generator to get a better feel for the space
try it: take gaussian white noise with zero mean and integrate it twice. You'll see the signal does not stay close to zero, in fact it will drift arbitrarily far away from it over time (it's only necessary to integrate once for this to be true, but doing it twice as an IMU needs to will make it more obvious).
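a minimal version of the experiment in numpy (toy parameters, unit-variance noise):

```python
import numpy as np

# Integrate zero-mean white noise once and twice; it wanders away from
# zero instead of averaging out.
rng = np.random.default_rng(42)
noise = rng.normal(0, 1, 100_000)   # zero-mean gaussian white noise
once = np.cumsum(noise)             # random walk: spread grows like sqrt(n)
twice = np.cumsum(once)             # grows even faster, like n**1.5

print("noise mean:              ", noise.mean())     # ~0, as expected
print("|single integral| at end:", abs(once[-1]))    # typically ~sqrt(1e5) ~ 300
print("|double integral| at end:", abs(twice[-1]))   # typically enormous
```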
You are correct, I initially did not explicitly separate the noise according to its frequency (my mistake).
Integration only helps with high frequency error and can actually worsen low frequency error, more details in my second reply to kens.
One can't cancel out random errors by integrating. You should take kragen's suggestion and write a quick simulation. To make this concrete, flip a coin 10 times. Take a step to the left for heads and a step to the right for tails. Most of the time you won't end up where you started, i.e. you have residual error.
> One can't cancel out random errors by integrating.
An ideal integrator has a response of 1/s. That's just a 1st order low-pass filter with the pole at 0. Therefore, it will filter out high frequency noise.
> Take a step to the left for heads and a step to the right for tails. Most of the time you won't end up where you started, i.e. you have residual error.
I wrote a quick simulation based on your suggestion [1].
Started by generating 1e6 random points and then applied a high-pass filter.
Calculated the cumulative sum on both the original and the filtered version.
TL;DR: filtered version has small and very fast variations but doesn't feature the much larger amplitude swings seen in the original.
Integration indeed does not help for those large slow swings (I'd call it drift in case of a gyroscope), but that's what I was trying to get at when I started to distinguish between short and long term random effects.
What I was trying to get across originally is that "all the little errors" OP mentioned (which I read to mean tiny fast variations, forgetting that drift is a much bigger issue in gyroscopes) get filtered/canceled out.
I totally failed to explain that this varies with frequency, which was my bad.
yes! but also keep in mind that 1/s is never 0 for any finite s, so even at high frequencies the error resulting from random noise is never zero, it's just strongly attenuated
> if you buy a commercial-grade gyroscope for [us]$10, it will have a random walk error of several º/√h. So after summing the errors for an hour, you're left with several degrees of random error, which is bad. If you spend [us]$100,000 on a navigation-grade gyroscope, you'll get a random walk error < 0.002º/√h, which is much better.
if the slope was anything else, the unit of °/√h wouldn't make sense; it would have to be °/h or °/∛h or something. similarly for noise figures given in nanovolts/√Hz
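you can check the √t scaling numerically: simulate a bunch of random walks and compare the spread after one "hour" against four "hours" (toy unit steps, not real gyro noise):

```python
import numpy as np

# Random-walk spread grows like sqrt(t): the 4-hour spread should be
# about twice the 1-hour spread.
rng = np.random.default_rng(7)
noise = rng.normal(0, 1, (2000, 4000))    # 2000 walks, 4000 unit steps
one = noise[:, :1000].sum(axis=1).std()   # spread after 1000 steps
four = noise.sum(axis=1).std()            # spread after 4000 steps
print(f"ratio: {four / one:.2f}  (sqrt(4) = 2 expected)")
```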
You are completely right. BTW, it wasn't a software update, it was a content update, a 'channel file'.
Someone didn't do enough testing. edit: or any testing at all?
It's an automatic update of the product. The 'channel file vs. binary' semantics don't change anything. If your software's definition files can cause a kernel-mode driver to crash in a boot loop, you have bigger problems, but the outcome is the same as if the driver itself had been updated.
Indeed. It's worse, really: it means there was a bug lurking in their product, waiting for a badly formatted file to surface it.
Given how widespread the problem is it also means they are pushing these files out without basic testing.
edit: It will be very interesting to see how CrowdStrike wriggles out of the obvious conclusion that their company no longer deserves to exist after a f*k up like this.
That's funny, because IIRC McAfee back in the Windows XP days did this exact same thing! They added a system file to the signature registry and caused Windows computers to BSOD on boot.
That's even worse: they should be fuzz-testing with bad definition files to make sure this is safe. Inevitably the definition updates will be rushed out to address zero-days, and the work should be done ahead of time to make them safe.
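The kind of harness meant here fits in a page: mutate a known-good file and assert the parser either accepts it or rejects it cleanly, never crashes. `parse_definitions` below is an invented stand-in, not CrowdStrike's actual format or API.

```python
import random

def parse_definitions(blob: bytes) -> None:
    """Stand-in parser: reject anything without the expected header."""
    if blob[:4] != b"DEFS":
        raise ValueError("bad magic")
    # ... real parsing with bounds checks would go here ...

good = b"DEFS" + bytes(range(256))    # a known-good (fake) definitions file

random.seed(0)
for _ in range(10_000):
    mutated = bytearray(good)
    for _ in range(random.randint(1, 8)):          # flip a few random bytes
        mutated[random.randrange(len(mutated))] = random.randrange(256)
    try:
        parse_definitions(bytes(mutated))
    except ValueError:
        pass    # a clean reject (or a clean parse) is fine;
                # any crash, hang, or memory fault is a bug to fix
```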
Having spent time reverse-engineering Crowdstrike Falcon, a lot of funny things can happen if you feed it bad input.
But I suspect they don't have much motivation to make the sensor resilient to fuzzing, since the thing's a remote shell anyways, so they must think that all inputs are absolutely trusted (i.e. if any malicious packet can reach the sensor, your attackers can just politely ask to run arbitrary commands, so might as well assume the sensor will never see bad data..)
That's a funny thing to say when the inputs contain malware signatures, which are essentially determined by the malware itself.
I mean, how hard would it be to craft a malware that has the same signature as an important system file? Preferably one that doesn't cause immediate havoc when quarantined, just a BSOD after reboot, so it slips through QA.
Even if the signature is not completely predictable, the bad guys can try as often as they want, and there would not even be a way to detect these attempts.
> malware signatures, which are essentially determined by the malware itself.
No, they're not. The tool vendor decides the signature; they pick something characteristic that the malware has and other things don't. That's the whole point.
> how hard would it be to craft a malware that has the same signature as an important system file?
Completely impossible, unless you mean, like, bribe one of the employees to put the signature of a system file instead of your malware or something.
Sure, but they do it following a certain process. It's not that CrowdStrike employees get paid to be extra creative in their job, so you likely could predict what they choose to include in the signature.
In addition to that, you have no pressure to get it right the first time. You can try as often as you want, and by analyzing the updated signatures you even get some feedback about your attempts.
Like, «We require that your employees open only links on the whitelist, and social networks cannot be put on this list, and we require a managed antivirus/firewall solution, but we are OK with this solution having a backdoor directly to a 3rd-party organization»?
It is crazy. All these PCI DSS and SOC 2 audits look like a comedy if they allow such things.
At a former employer of about 15K employees, two tools come to mind that allowed us to do this on every Windows host on our network[0].
It's an absolute necessity: you can manage Windows updates and a limited set of other updates via things like WSUS. Back when I was at this employer, Adobe Flash and Java plug-in attacks were our largest source of infection. The only way to reliably get those updates installed was to configure everything to run the installer if an old version was detected, and then find some other ways to get it to run.
To do this, we'd often resort to scripts/custom apps just to detect the installation correctly. Too often a machine would be vulnerable but something would keep it from showing up in the various tools that limit checks to "Add/Remove Programs" entries or other mechanisms that might let a browser plug-in slip through, so we'd resort to various methods, all the way down to "inspecting the drive directory-by-directory", to find offending libraries.
We used a similar capability all the way back in the NIMDA days to deploy an in-house removal tool[1].
[0] Symantec Endpoint Protection and System Center Configuration Manager
[1] I worked at a large telecom at that time -- our IPS devices crashed our monitoring tool when the malware that immediately followed NIMDA landed. The result was a coworker and I dissecting/containing it and providing the findings to Trend Micro (our A/V vendor at the time) maybe 30 minutes before the news started breaking and several hours before they had anything that could detect it on their end.
Hilariously, my last employer was switching to CrowdStrike a few months ago when my contract ended. We previously used Trellix, which did not have any remote control features beyond network isolation and pulling filesystem images. During the CrowdStrike onboarding, they definitely showed us a demo of basically a virtual terminal that you could access from the Falcon portal, kind of like the GCP or AWS web console terminals you can use if SSH isn't working.
As I understand it, this only manifests after a reboot, and if the 'content update' is tested at all, it is probably in a VM that just gets thrown away after the test and is never rebooted.
Also, this makes me think:
How hard would it be to craft a malware that has the same signature as an important system file?
Preferably one that doesn't cause immediate havoc when quarantined, just a BSOD after reboot, so it slips through QA.
I don't believe this is what's happened, but I think it is an interesting threat.
Nope, not after a reboot. Once the "channel update" is loaded into Falcon, the machine will crash with a BSOD and then it will not boot properly until you remove the defective file.
> How hard would it be to craft a malware that has the same signature as an important system file?
Very, otherwise digital signatures wouldn’t be much use. There are no publicly known ways to make an input which hashes to the same value as another known input through the SHA256 hash algorithm any quicker than brute-force trial and error of every possibility.
This is the difficulty that Bitcoin mining is based on: the work that all the GPUs were doing, the reason for the massive global energy use people complain about, is basically a global brute-force through the SHA256 input space.
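You can see why there's no way to "steer" toward a chosen digest: flipping a single byte of the input produces a completely unrelated hash (the avalanche effect). A tiny demo with Python's standard hashlib:

```python
import hashlib

# One changed byte -> a totally different SHA-256 digest.
a = b"important system file contents"
b = b"important system file contentt"   # last byte changed
print(hashlib.sha256(a).hexdigest())
print(hashlib.sha256(b).hexdigest())    # shares no usable structure with the first
```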
I was talking about malware signatures, which don't necessarily use cryptographic hashes. They are probably more optimized for speed, because the engine needs to check a huge number of files as fast as possible.
Cryptographic hashes are not the fastest possible hash, but they are not slow; CPUs have hardware SHA acceleration: https://www.intel.com/content/www/us/en/developer/articles/t... - compared to the likes of a password hash where you want to do a lot of rounds and make checking slow, as a defense against bruteforcing.
That sounds even harder: Windows Authenticode uses SHA1 or SHA256 over partial file bytes, the AV will use its own hash, likely over the full file bytes, and you need malware that matches both, so the AV will think it's legit and Windows will think it's legit.
AFAIK important system files on Windows are (or should be) cryptographically signed by Microsoft, and the presence of such a signature is one of the parameters fed to the heuristics engine of the AV software.
> How hard would it be to craft a malware that has the same signature as an important system file?
If you can craft malware that is digitally signed with the same keys as Microsoft's system files, we got way bigger problems.
>How hard would it be to craft a malware that has the same signature as an important system file?
Extremely. If it were easy, that would mean basically all cryptography commonly in use today is broken, the entire Public Key Infrastructure is borderline useless, and there's no point in code signing anymore.
Admittedly, I don't know exactly what's in these files. When I hear 'content' I think 'config'. This is going to be very hypothetical, so I ask for some patience, not arguments.
The 'config file' parser is so unsafe that not only will the thing consuming it break, it'll take down the environment around it.
Sure, this isn't completely fair. It's working in kernel space, so one misstep can be dire. Again: testing.
I think it's a reasonable assumption/request that something try to degrade itself, not the systems around it.
edit: When a distinction between 'config' and 'agent' releases is made, it's typically with the understanding that content releases move much faster/flow freely. The releases around the software itself tend to be more controlled, being what is actually executed.
In short, the risk modeling and such doesn't line up. The content updates get certain privileges under certain (apparently mistaken) robustness assumptions. Too much credit, or attention, is given to the Agent!
Sounds like it was a 'channel file', which I think is akin to an AV definition file, that caused the problem rather than an actual software change. So they must have had a bug lurking in their kernel driver that was uncovered by a particular channel file. Still, seems like someone skipped some testing.
How about a try-catch block? The software reading the definition file should be minimally resilient against malformed input. That's like programming 101.
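A sketch of that "fail safe" shape, with invented file names and format (the point is the dry-run parse and the fallback, not the specifics):

```python
import json, logging, shutil
from pathlib import Path

# Validate a new definitions file before activating it; keep the last
# known-good file if anything about the new one looks wrong.
ACTIVE = Path("definitions/active.json")
CANDIDATE = Path("definitions/incoming.json")

def load_validated(path: Path) -> dict:
    data = json.loads(path.read_bytes())        # raises on malformed input
    if not isinstance(data.get("signatures"), list):
        raise ValueError("missing signature list")
    return data

try:
    load_validated(CANDIDATE)                   # dry-run parse first
    shutil.copy(CANDIDATE, ACTIVE)              # only then activate it
except Exception:
    logging.exception("bad definitions update; keeping last known good")
```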
https://www.youtube.com/watch?v=vAEU-Lf60LA