Ouster's new digital Lidar – 128 beams, ultra-wide view (ouster.com)
168 points by derek_frome on Jan 6, 2020 | 118 comments


Does anyone know whether eye safety is de facto maintained when your eye is being continuously bombarded by the 100+ scanning lasers being emitted from each of the 100 cars in the vicinity of an intersection? I'm on board with the case with a handful of lasers scanning by quickly but the energy may really start to add up in certain plausible future scenarios.


At 865 nm, laser eye damage occurs because the laser is focused by your eyeball into a small spot on your retina. This is in contrast to other wavelengths, like 1550 nm, which do not get focused but can instead damage the surface of your eyeball at much higher powers.

However, when there are many 865 nm lidars around, each one gets focused to a different spot in your retina, so it is not any more likely to cause damage than a single lidar. Thanks to the low power of the Ouster lidar, it is Class 1 eye-safe at any distance.

If there were a hundred high-power 1550 nm lidars all pointed at the surface of your eyeball, however, I wonder if it would be more likely to cause damage?


I just imagined everyone walking around with Google-glasses style AR lenses everywhere they go with LIDAR protection built-in!


At 865 nm too, presumably it only passes as Class 1 eye-safe due to a time × power × aperture exposure calculation...

I wonder how much we have to blame ITAR restrictions on >1555nm lasers for things like this not being more eye-safe on a by-wavelength basis.


> I wonder how much we have to blame ITAR restrictions on >1555nm lasers ...

I tried looking up information about this (restriction details, otherwise feasible uses beyond that wavelength, etc) but it's hard for an outsider to quickly make sense of. Any chance you could elaborate?


https://www.bis.doc.gov/index.php/documents/regulations-docs...

The main section on lasers starts on page 47. The rules are very complicated. See 6A995 d and f.

Lasers are also covered in other sections like 6A205. (I'm amused by Raman shifters being covered-- they're literally just pressurized tubes with hydrogen and mirrors at the end-- I can't imagine anyone qualified to use one couldn't have one fabricated pretty much anywhere).


Reading law is always full of weird stuff like that, in between thousands of lines of monotony.


I've been asking this as well, and I've never gotten a satisfactory answer other than "you can't see infrared."

I'm concerned about once these become more prevalent on all cars or whatever else we stick them on. Curious whether there are any studies on whether it affects our eyes in any way.


I swear I get persistence of vision artifacts when I see a LIDAR equipped vehicle, but need to run a double blind test to be sure.


Probably not. You can't see IR lasers.


IR and UV radiation can very much ruin your sight.


It doesn't matter if it is IR, UV, or visible. Only the power and duration matter.

Lidar uses very short pulses and very low power and is very safe even en masse.


This is actually more complex than that.

Due to optical aberration, wavelengths outside the visible range don't get focused the same way as visible light. This means the energy is spread over a larger area.

Also, UV and IR light are each absorbed differently. UV tends to be absorbed much faster than visible light, so little of it reaches the retina, while for IR the opposite is true: the eye is more translucent to IR, so only a portion of it is absorbed at the retina and much of it passes through.

What this means is that, while a source of visible light is focused very well on the retina and absorbed in a very small volume of retinal cells, UV and IR are spread over a larger area, and a large part of the energy is absorbed somewhere else.

Still it doesn't mean IR or UV beams aren't dangerous. It is just that you can't directly compare beams by their power.

The way I understand it, the main danger is that, since people may not see the IR or UV beam very well, there might be no involuntary response to close the eye.


Love these kinds of analyses on HN (still the place where you can find them!) where someone describes how the details matter.

Thank you to all involved.

If anyone can point me to the generalized best studies for safety margins and laser energy, please post links -- it's pretty relevant to a lot of AR designs to use laser projectors, and it is of course relevant to the lidar self driving cars at question.


Actually, most pulsed lidar systems use pretty high peak powers, which result in an eye-safe average power only because they are on for a sequence of very short pulses (~1 ns). It's pretty unlikely OP is actually experiencing vision problems due to lidars, but it's probably possible for some kind of weird interaction to happen in the eyeball due to the high-power pulses. Even so, it's unlikely to be causing any damage, temporary or otherwise. I would hope it doesn't happen to him/her while driving or something, though.


I'm being a bit pedantic - I'm pretty sure the visibility of the wavelength does matter (slightly) for lower power lasers, as visible lasers will allow your eye to react to the laser (by looking away) before serious damage - again only in low power lasers (like laser pointers).

If it's not visible I'm not sure that your eye will react quickly enough to prevent damage.


The near-IR range used by thermal heat lamps is 780 nm – 1.4 μm.

Unless those heat lamps that also produce visible light damage the eye, I don't see how an 865 nm lidar could do it if the power is low.


Laser light is much worse for the eyes because the energy is concentrated. It’s why you can buy 100W light bulbs but a 50mW laser needs a warning label.


This doesn't invalidate your point, but the watt comparison you're making isn't correct. 100W is the power of the electricity input to the lightbulb, while the laser wattage is the power of the light output. The actual comparison would be more like 10W to 50 mW (which of course still means that the concentration of energy is required to explain the difference in eye danger).


It’s just a relatively simple calculation of (power / area) × time. You could have a 100 W laser pointed at your eye, but if it only operated for a femtosecond it would have no effect. The light isn’t worse or better than any other form of light.
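The femtosecond point checks out with a quick fluence (energy per area) calculation. A rough sketch with illustrative numbers; the 10 µm spot size and 0.25 s blink-reflex exposure are assumptions, not measured values:

```python
import math

def fluence_J_per_cm2(power_W, duration_s, spot_diameter_m):
    """Energy per unit area delivered to a spot: (power / area) * time."""
    radius_cm = spot_diameter_m * 100 / 2
    area_cm2 = math.pi * radius_cm ** 2
    return power_W * duration_s / area_cm2

# 100 W for one femtosecond: huge power, negligible energy
burst = fluence_J_per_cm2(100, 1e-15, 10e-6)

# 5 mW pointer for a 0.25 s blink-reflex exposure on the same spot
pointer = fluence_J_per_cm2(5e-3, 0.25, 10e-6)

print(burst)    # ~1e-7 J/cm^2
print(pointer)  # ~1.6e3 J/cm^2, about ten orders of magnitude more
```

With these numbers the "weak" laser pointer deposits about ten billion times more energy per area than the 100 W femtosecond flash, which is the whole point.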


In electronics instantaneous power still needs to be accounted for. You can’t just average it out especially if its many orders of magnitude higher than continuous.

I’d be absolutely shocked if biological systems were somehow different.


> The OS0 lidar sensor: an ultra-wide field of view sensor with 128 lines of resolution

> The OS2 lidar sensor: a long-range sensor with up to 128 lines of resolution

> Two new 32 channel sensors: both an OS0 and OS2 version

> Price: Starts at $16,000 with volume discounts available

I would LOVE to get my hands on this tech. Maybe in 5-10 years when the price comes down to commodity level for hackers to play around with. :-)

Since LIDARs impact airflow over the top of a car, is there a way to make LIDARs less spherical and more triangular or elliptical? How would that impact the scans and can that impact be corrected/recalibrated mathematically?


While this is a ~80% discount on other 128 beam sensors, it's unfortunately still out of reach for the hacker community. We absolutely plan to get prices down to an affordable level for individuals in well under 5 years!

Also, Ouster runs a sponsorship program that gives deeply discounted or free sensors to cool projects. If you have a cool idea, shoot me an email: derek.frome at ouster dot io


> deeply discounted or free sensors to cool projects

This is very nice. If only I had a cool project; I just have lukewarm ones!


Might be interesting to add the Ouster sensor to our sensor simulation [1] to give people the ability to play around with the data even if it's outside the price range?

[1] blensor.org


Oh, this is interesting! I've been putting together a 6-Kinect rig to take a 3D scan of my body as I go on hormone treatment and an exercise routine, monitoring subtle changes over time.

Does it support Kinect v1 and changing the orientation using the built-in motors?

I also have a few projects using photogrammetry reconstruction of convention booths using 2D images. I've been interested in adding in lidar/pointcloud cameras...



How can one apply for a “cool project”?


Email derek.frome at ouster dot io. I'd love to learn about your project!


has someone explored what the state of the art is if you just use two 1080p LG laser projectors and 2+ cameras and clever software

https://www.bhphotovideo.com/c/product/1453431-REG/lg_hf80la...

it's 1080p of resolution in lighting from two angles at 60 fps for like ~100-200 W. if you had some good cameras and some clever and fast software you could use this thing to illuminate anything you need to know about -- a high frequency light probe. i don't think you need coherent light at all or phase control, you'd just like timing control so you can strobe between left and right lights and use the variation to better characterize objects.

whole system seems doable in budgets of ~$8k hardware (total guess...)


Slightly confusing the way you pasted, not sure if intentional. The OS0 starts at $6,000 with volume discounts, while it's the OS2 that starts at $16,000.


Not sure if this fits your project but Intel's new Real Sense Lidar seems pretty cool. Meant for more indoor application though: https://www.intelrealsense.com/lidar-camera-l515/


I'm already thinking of the scrap yards where these might end up when cars with them are totaled by the insurance company :-). But I agree, if they can do exponential improvements in processing then this version will be fairly cheap in roughly 5 years.


About the airflow thing, there's nothing (to my knowledge) preventing you from changing the shape of the enclosure. It's just going to be bigger.


Am surprised no one has yet cited Elon calling this "a fool's errand" because of price, complexity, etc. I think LIDAR will significantly improve the safety of vehicles (autonomous or not) because it can definitely see better and deeper, even in bad weather conditions (rain, fog, haze).

Alas, the price of these devices has to come down at least one order of magnitude. Maybe even two. Still, I am really thankful that other companies (since Tesla has no interest in it) are considering and further developing LIDAR.


Lidar is solidly worse than the human eye in dust, rain etc. Waymo has struggled w/ dust devils in AZ. Cruise has struggled w/ steam vents in SF. I work at a John Deere subsidiary and we are quite interested in dust performance for field work. From our tests Lidar is low on the list for seeing through small particles.

Lots of things could fix this: less beam divergence, custom signal processing on multiple returns. But out of the box, this statement does not hold true.


I work at an industrial plant where we use microwave-radar-based imaging; it can get quite detailed surface profiles in very poor conditions, including inside reactors and other places where dust is a big issue. I'm not an expert in this field, but I think the systems used are continuous-wave based.

For vehicle applications specifically, it's probably worth looking into what they use on autonomous vehicles at mine sites; I imagine that tech is probably useful in agriculture. For example, the Pilbara here in Australia runs large autonomous fleets in very dusty conditions.


Hi, I've previously been at a company that makes the hardware for autonomous mining vehicles. They rely mainly on DGPS systems for positioning and a combination of simple camera vision and radar for obstacle detection. Keep in mind they don't drive that fast (around 25 mph), are supposed to be on clear, unobstructed routes (unless they're in a queue to load or unload), and just stop when they detect a vehicle or a person.


There are classes of vehicles where the cost of LIDAR is less of a factor, e.g. freight trucks, taxis, public buses, which get even more economic benefit from Level 5 autonomy and can serve as a stepping stone to ramping up LIDAR production to bring the costs down.

Just because LIDAR doesn't make sense for the Model 3 today, doesn't mean it should be entirely discounted.


As far as I’m aware, LIDAR cannot see through fog, as the light is dispersed, and even light rain might reduce range significantly.

Tesla cars have radar, which can see through any weather condition and detect transparent surfaces that are invisible to LIDAR.


There are LIDAR units that can see through fog. If you get data from multiple returns, not just the first, you can tell the difference between fog, rain, and solid surfaces. "First and last" is a big win. A solid obstacle in fog looks like a repeatable "last" (furthest) return, while rain and fog look like random disconnected points.

I think Google's own unit has 8 stored returns.


> LIDAR cannot see through fog

Simply depends on what wavelength of light you use.

Water-absorbing frequencies are nice because the atmosphere then shields most ambient light, giving you nice SNR from your laser illumination. But better sensors could work around this, using other frequencies that can 'see' through fog.

It's certainly a technological limitation of current systems, but it's not an inherent limitation.


Resolution of normal radar is much lower unless you have a gigantic receiver. Also, there is a significant delay of 10 to 100 ms. That's why a combination of lidar and radar is desired.
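The receiver-size point is diffraction: angular resolution scales as wavelength over aperture. A rough comparison with illustrative apertures (the 10 cm antenna and 1 cm optic are assumptions, not any specific sensor):

```python
import math

C = 3e8  # speed of light, m/s

def beamwidth_deg(wavelength_m, aperture_m):
    # diffraction-limited beam divergence ~ wavelength / aperture (small-angle)
    return math.degrees(wavelength_m / aperture_m)

radar = beamwidth_deg(C / 77e9, 0.10)  # 77 GHz (~3.9 mm wave), 10 cm antenna
lidar = beamwidth_deg(865e-9, 0.01)    # 865 nm lidar, 1 cm optic

print(radar)  # ~2.2 degrees
print(lidar)  # ~0.005 degrees: ~450x finer with a 10x smaller aperture
```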


> Also there is a significant delay of 10 to 100ms

I'm curious why there is such an apparently long delay?


It really depends on the type of RADAR being used. If it's an FMCW radar, typically you will get a beat signal whose frequency corresponds to the target range. That frequency will vary with range, and in order to be well resolved you have to observe it for something like one period. So that puts a fundamental lower bound on how long you have to integrate. There are lots of tricks to improve things, and there are lots of variants of the standard radar hardware/methods, but I suspect that's what OP was referring to.
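A sketch of that lower bound with made-up FMCW parameters (the 1 GHz sweep over 1 ms and 50 m target are illustrative, not any particular sensor):

```python
C = 3e8  # speed of light, m/s

def beat_frequency_hz(range_m, sweep_bw_hz, chirp_time_s):
    # round-trip delay maps to a frequency offset via the sweep slope
    tau = 2 * range_m / C
    slope = sweep_bw_hz / chirp_time_s
    return slope * tau

fb = beat_frequency_hz(range_m=50, sweep_bw_hz=1e9, chirp_time_s=1e-3)
min_observation_s = 1 / fb  # roughly one beat period to resolve the frequency

print(fb)                 # ~333 kHz
print(min_observation_s)  # ~3 us lower bound on integration
```

Closer (or coarser-resolved) targets give lower beat frequencies and hence longer minimum observation times, which is the "fundamental lower bound" in question.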


That doesn't explain 100ms delays though. Speed of light times 100ms round trip is 15,000 km.


It depends entirely on what configuration of RADAR you've got, and what you're pointing it at. You can build a system that will result in returns with beat frequencies of just about anything, depending on the Tx modulation and the range. The question is whether it will be useful for your application in terms of range/velocity resolution, latency, integration time etc.

Like I said, I was just speculating about why OP specifically mentioned 10-100 ms. The light does indeed travel pretty quickly (although, as anybody in the radar/lidar industry will tell you, not nearly quickly enough!); however, the round-trip time is just the minimum latency you have to eat to get any information about your target. Once you have light coming back, you need to integrate for some amount of time to achieve your desired SNR. That time could be very small, or it could be infinite if there are no photons coming back. Let's randomly say you're using a RADAR where the target range is such that the round-trip time is 1 us, and the Tx modulation is such that the beat frequency of the return is 1 kHz. Your job is to estimate that frequency, so you have to observe the waveform (by integrating samples for an FFT, typically) for at least one cycle of the beat. That would require that you wait 1 us for the light to fly, and then wait another 1 ms for the beat to cycle once. So your measurement latency is ~1 ms. Now that's not 100 ms, but perhaps you need more than one cycle to give a good estimate of the frequency, and then even more because the target is faint and there aren't many photons coming back. You could possibly arrive at some much higher number, like 10-100 ms.

I'm not sure if that was OP's point, but that's all I'm saying ;-)


Also curious, most of what I’ve read about FMCW radar mentions single-digit millisecond latencies.


Yeah, it definitely can be shorter than 100ms. See my sibling comment. It just depends on the type of radar being used and the target range/velocity. Certainly for shortish range targets and mmwave radars on reasonably reflective targets you can get a signal with decent SNR in shorter time frames.


16k each, and Waymo has 4 on its cars? I think Elon was right to not care about LIDAR until the price point is right.


Elon was right for Tesla, but that doesn't mean the economics don't work for Waymo, even as they continue to improve.

A $100k fully autonomous taxi would print money over its service life (right now they have remote safety drivers on standby).

The question is how quickly they can expand their currently tiny geofenced area.


And this seems to prove him right, with the $6k - $16k price range: far too expensive to be practical. Still, the potential is there for it to be a useful component for self-driving vehicles, but only if the price comes down an order of magnitude.


Ouster CEO here.

This is single-unit pricing; volume discounts apply. There's still work to do to get this in every Honda Civic, but it is possible with our technology.


How much is a chauffeur? Or taxi driver salary?


It's a sensor. Not an AI. The competition is cameras and radar, which are very cheap.


If this sensor puts it over the edge and the rest of the cost isn't high, then the competition is paid drivers.


If that were true, sure. But I don't think it's so much better than other sensors for that to be a plausible scenario. Not my field though, so take it with a grain of salt.


Only temperature rated to -4 °F. I hope they can get that lower. Up to ~25 days a year are below 0 °F in Chicago, for instance.


That's just the device temperature, right? If it's the operating device temperature and not the storage temperature, I think it's fine.

-4 °F (-20 °C) is cold, but the cabin of the car is heated to well above that. Just heat the lidar as well.


That's operating temp without external heater. Storage temp is lower.

Even automotive cameras have built-in heaters to allow them to operate below -20 °C. Nothing prevents adding the same functionality to our sensors.


This is the spec for a cold start. If you give it a warm start, you can operate it much lower than -20C! For instance, it's being used in underground mines in Scandinavia without issue.


Scandinavian mines are all well above the freezing point below ground (+2 to +5 °C).

From a pure temperature point of view, mines are very easy, with a known operating temperature and low fluctuations.


The issue will not be at cold but at high temperature. VCSELs have very poor efficiency at high temperature and it’s possible to operate them where increasing current reduces light output. In a vehicle application the temperatures are very high and humidity can also be very high and condensing.


Ouster CEO here.

This couldn't be further from the truth. You can design the VCSEL cavity and top and bottom mirrors for peak efficiency at any temp, including very high temps. I wonder what we did...

Compared to the edge-emitting diode lasers used in legacy spinning lidar, VCSELs are cheaper, more efficient, more reliable, longer-lived, and better-quality light sources to boot.


Unfortunately, the gain falls as a function of temperature, so you also get a lot less light and you have to pump harder (more current). So while it's possible to compensate somewhat with the mirrors, the device still has this behavior at high temperatures as it self-heats. This behavior is widely documented in the literature.

VCSELs have a smaller current aperture and the current density is higher than in an edge emitting laser. As the reliability is a function of the junction temperature and the current density, VCSELs operating at high temperatures have significantly reduced lifetime compared to an edge emitting device due to the high current density.

See for example slide 5 which shows how lifetime scales as a function of temperature and current density. For high reliability your devices need to have low current density.

http://www.ieee802.org/3/NGAUTO/public/adhoc/Kropp_NGAUTO_03...
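The temperature half of that scaling is usually modeled with an Arrhenius term. A sketch; the 0.7 eV activation energy is a placeholder of mine, not a value from the linked deck, and real numbers are device-specific:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=0.7):
    """Arrhenius lifetime ratio between two junction temperatures (degC):
    lifetime(t_use) / lifetime(t_stress) = exp(Ea/k * (1/T1 - 1/T2))."""
    t1 = t_use_c + 273.15
    t2 = t_stress_c + 273.15
    return math.exp(ea_ev / K_B_EV * (1 / t1 - 1 / t2))

# Going from a 50 C junction to a 100 C junction with Ea = 0.7 eV
print(acceleration_factor(50, 100))  # ~29x shorter lifetime
```

Current density enters as a separate multiplicative power-law term in most reliability models, which is why the combination of high junction temperature and small current aperture is the worst case.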


We had some problems trying Robosense lidars at very cold temperatures. We didn't try our Ouster in such conditions, to my knowledge. BUT most of the time we got good results in temperatures much colder than what the lidars were rated for. For the more sensitive lidars, my lab developed a hacky warming apparatus using hand warmers :D


TL;DR: the operating temperature range of these sensors is small, nowhere near automotive spec, and a real problem at high temperatures. Also, watch the fine print on Ouster's thermal specs.

The bigger issue is hot weather; electronics and lasers work well (often more efficiently) at cold temps, and the electrical power running through them self-heats the components. The problem arises when the environment is already hot, and the components still self-heat. This is a particularly large problem for LiDAR, where lasers are very sensitive to temperature and typically use some sort of thermoelectric controller to keep the laser itself at a precise constant temperature. But these thermoelectric devices are inefficient at cooling and lose control (go into thermal runaway) when things get too hot. Automotive component thermal specs (AEC-Q100) require operation (and start-up) at -40C up to anywhere from 70-150C depending on grade. Ouster's -10/-20C to 50C range actually relies on an external base heatsink being used, which they never picture and makes the sensor significantly heavier and larger. These sensors are a far cry from being ready for automotive use.


Ouster CEO here.

We didn't claim these were auto rated parts... that being said, our temp spec is in line with the industry, and we're dead set* on reaching auto temp spec in a future iteration of the product.

Our shock, vibe and ingress specs are far better than the competition and pass most auto specs already though. Ruggedness like this was unheard of in spinning lidar even two years ago.

*I believe our internal thermal design group is "cultofthelavapeople at ouster dot io".


Very cool tech, and as a fan of Hyperion I think the company name is very cool as well.


For those who don't know the reference :) https://en.wikipedia.org/wiki/Hyperion_Cantos


word


Nice to see Velodyne get some market pressure.


The last rollout they did with the 32 channel version of the OS-1 had laughable firmware that output a data set consisting of half zeros. Do they still have the policy of "release now, make it work later?"


Anyone have a LIDAR (or similar) product suggestion?

I'm getting ready to start a hobby project that involves scanning the interior surfaces of a house. Ideally the accuracy would be at least 1/16" (1.5mm), including any scan-stitching required because the sensor had to be moved around.

I've seen a few promising products, but none stands out as a perfect match.


I've had some success with some of the off-the-shelf laser distance measurement modules, like the ones Bosch sell at big-box hardware stores:

https://www.boschtools.com/us/en/boschtools-ocs/laser-measur...

They have 1-2mm accuracy and quite good precision, particularly if your environment is stable (temperature, movement, lighting). These modules use an interferometry approach rather than time of flight to achieve their accuracy.

Several of them have a Bluetooth interface which you could reverse-engineer. The work then would be creating a turret to rotate the unit around a known centerpoint and take a bunch of samples. It'd be slow, but it works.

There are also a bunch of modules available on Alibaba for a few tens of dollars that have serial interfaces and seem to have similar performance - they often have 10s of Hz sample rates, so you could speed up scanning quite a lot. They exhibit similar accuracy to the boxed units I've bought, but require you to be comfortable with things like SPI and soldering.
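The turret math itself is just spherical-to-Cartesian conversion about the known centerpoint. A minimal sketch with a hypothetical pan/tilt/range sample:

```python
import math

def to_xyz(pan_deg, tilt_deg, range_m):
    """Convert one (pan, tilt, range) turret sample to XYZ about the center."""
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = range_m * math.cos(tilt) * math.cos(pan)
    y = range_m * math.cos(tilt) * math.sin(pan)
    z = range_m * math.sin(tilt)
    return (x, y, z)

# Level, straight ahead, 3 m to a wall
print(to_xyz(0, 0, 3.0))  # (3.0, 0.0, 0.0)
```

Sweeping pan/tilt through a grid and collecting these points gives the raw cloud; stitching between tripod positions is the harder part.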


1.5 mm is really ambitious. This is close to metrology-level precision. I wouldn't look into sensors aimed at robotics/autonomous vehicles. Try Creaform, Riegl stuff, etc... Also what you're trying is far from new so maybe get acquainted with what other businesses are doing (e.g. https://www.bentley.com/en/products/brands/contextcapture).


If you just want the result of the scan (not clear if the hobby project is the scanning itself or something else) you should be able to contract it as a service, or just rent the scanner. This is common for architectural scan work.


1.5 mm accuracy isn't doable that easily. There is a company called Photoneo with a structured-light system advertising such accuracy, but the scan volume is rather limited.

I'm working on a time-of-flight stereo camera in my spare time. A few centimeters of error is very normal. It was shocking at the beginning, but I now understand why bin picking is still a hard task.


A project I wanted to play with 10 years ago, and didn't have time or money, was a tool you could set in a room, and it would put laser 'dots' along the ceiling where crews should hang the parts for a drop ceiling, to minimize cuts of both hanger equipment, and the ceiling tiles.


That's closely related to what I'm going for. Ultimately I'm looking to make an A.R. system that guides various house remodeling tasks, including framing and floor-leveling.



You'll need to do a bunch of averaging to get that sort of precision out of one of those, they tend to have an error proportional to distance that's generally a lot higher than the OP's 1.5mm target. But I agree that that's the right starting place.
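For the random part of that error, averaging shrinks the noise as 1/sqrt(N); a quick sketch (the 5 mm per-reading noise figure is an assumption). Note this does nothing for the distance-proportional systematic part:

```python
import math

def samples_needed(sensor_sigma_mm, target_sigma_mm):
    # standard error of the mean of N readings is sigma / sqrt(N),
    # so N must be at least (sigma / target)^2
    return math.ceil((sensor_sigma_mm / target_sigma_mm) ** 2)

print(samples_needed(5.0, 1.5))  # 12 samples to get 5 mm noise down to 1.5 mm
```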


Take a look at Leica RTC 360. It's a survey grade terrestrial lidar scanner.

Angular accuracy: 18″
Range accuracy: 1.0 mm + 10 ppm
3D point accuracy: 1.9 mm @ 10 m, 2.9 mm @ 20 m, 5.3 mm @ 40 m

(disclaimer: I work for Leica)


mm-wave radar + SAR postprocessing? Given you have a 100% static scene, it should work great.


Are you aware of any low-cost, consumer-oriented products for that?

Perhaps my Google-Fu is weak today, but I'm only finding research / military projects.


Is something like this (http://www.ti.com/product/IWR6843) relevant? There are also older products such as this (https://www.digikey.com/en/product-highlight/a/acconeer-ab/a...).


I wonder how well this works if 20 such devices shine on the same object.


Lidars can interfere but somewhat less than other types of active sensors. First of all, the detector only needs to be on for a microsecond or two, and it's unlikely for two sensors to be scanning at the same microsecond. Second, laser spots are fairly small and unlikely to overlap. Finally, there are techniques to further get rid of crosstalk, such as coded random pulses with a matched filter.


> First of all, the detector only needs to be on for a microsecond or two, and it's unlikely for two sensors to be scanning at the same microsecond.

This is subject to the Birthday Paradox, right?


LIDARs for volume use should add a few microseconds of random (not pseudorandom) jitter to the outgoing pulse time. That will prevent multiple interfering scans from different units from all synchronizing. You may still get blinded on one scan, but not all of them.
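A toy Monte Carlo of the collision odds under randomly jittered fire times. All parameters are illustrative (1 ms scan slot, 2 µs detector window, 100 units), not from any real sensor:

```python
import random

def collision_probability(n_lidars=100, slot_s=1e-3, window_s=2e-6,
                          trials=10_000):
    """Fraction of trials where at least one other unit's pulse lands within
    +/- window_s of ours, given uniformly random (jittered) fire times."""
    random.seed(0)  # deterministic for repeatability
    hits = 0
    for _ in range(trials):
        ours = random.uniform(0, slot_s)
        if any(abs(random.uniform(0, slot_s) - ours) < window_s
               for _ in range(n_lidars - 1)):
            hits += 1
    return hits / trials

print(collision_probability())  # ~0.33 with these made-up numbers
```

So with 100 units the birthday-paradox effect is real: roughly a third of windows see a foreign pulse. The jitter's job is to make it a different window each scan rather than a persistent blind spot, so filtering across scans recovers the scene.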


The duty cycles of lidars are generally low enough that interference is rare. The big problem in my experience is objects glinting in sunlight, which isn't a problem if you're in a warehouse but is for rooms with big windows or outside. In any event, you always need to be filtering your inputs rather than taking every reading as a sure thing.


Honestly, the sunlight issue has been variable for me. Velodyne's processing seems to deal with it better than Robosense's.


That's sort of like asking how well cameras would work if 20 of them were looking at the same object.

Lidar sensors can interfere with each other, and a certain industry-standard company is famous for having terrible problems with this issue, but there are engineering solutions to this problem. FMCW is a popular choice, and gives the benefit of providing instantaneous velocity readings. Of course, due to the Heisenberg uncertainty principle, this means you also get worse distance estimation. There are other ways to engineer around the interference problem as well.

https://www.laserfocusworld.com/home/article/16556322/lasers...


When you get dozens of photographers shooting the same subject and all with their own flash, yes you can get interference that wrecks certain images. And the more photographers gather together the more of them will interfere. So indeed your comparison is apt and doesn't help me understand why it isn't a problem. The non-flash scenario is of course not applicable since lidar is actively lighting its subjects.


> like asking how well cameras would work if 20 of them were looking at the same object.

> Lidar sensors can interfere with each other

Then it's not really like 20 cameras looking at the same object. Mostly due to the fact that cameras are passive observers and not really emitting much. And if they do, it's light and it won't really affect the others because they all benefit from it (within reason).

I'm reasonably certain that such issues will still appear in productive use in the future but will get fixed at the time, not in labs today.


Fair enough, I didn't really expand my comment enough to make my point and was about to delete it but now that someone replied I'll just leave it as is since I don't want to write a small essay.

A good lidar sensor won't have issues with interference.


Is it more or less a foregone conclusion that a good LiDAR sensor would not be affected by potentially hundreds of other similar or identical ones shining their beams across the same region?


For FMCW, there should be no interference at all for any number of sensors (there is more information in the link in my original comment). For pulsed lidar, it's an issue of how well you engineer it. I can confidently say a good sensor can regularly withstand dozens of sensors operating simultaneously in a small area because I've personally witnessed it. The devil is largely in the details of specifications and manufacturing quality (that latter one is a much bigger issue in reality), but there is no theoretical reason you couldn't make it work with hundreds of sensors simultaneously. And like I said, I know current off-the-shelf parts that will work with dozens of sensors simultaneously.

Maybe I can add a little color to my original comment this way: most lidar sensors today, including some very expensive ones from supposedly reputable vendors, are not very good. In my experience, it is more often a manufacturing problem than a design problem (this varies more by company than it does by technology).


Imagine every car has an FMCW LIDAR. The chance of interference is very high. Additionally, if those receivers use single-ended PDs, then these sensors will also be "blinded" (the TIAs and/or PDs will saturate) when a nearby lidar system shines its laser into the detector.


I suppose it will be a good while before we can approach an empirical proof for this sort of thing, since FMCW lidars are still very scarce, even more so than pulsed lidars right now. However, even if every car does have an FMCW lidar, the conditions required to get them to interfere with each other are:

a) Have identical laser wavelength. Not just '905nm' or '1550nm', but _precisely_ the same wavelength. This is very hard to do even if you try.

b) Have a coincident beam path. Again, this needs to be very precisely aligned.

c) Have an overlapping coherence area. This is a bit technical, but it is a higher bar than just having spots spatially overlapping.

d) Have coherent, matching phase fronts at the detector. Again, this is a fairly technical subject: these properties vary along the beam path and transversely, and also vary with time, temperature, and many other things. The source lidar is able to 'interfere' with itself (in other words, get a signal) because it compensates for all of these effects with a local copy of the outgoing laser light. Other lidars' outgoing beams will in general, even for 100 cars, not be 'synced up' in this way.

Moreover, those conditions are just the intrinsic interference rejection properties of coherent lidars. Layered on top of that is that two lidars need to be using the same type of modulation, bullseye each other as they scan around the FOV, and provide enough photons to actually contribute to the signal. Then, if you satisfy all of those prerequisites, the interfering lidar also needs to overcome any heuristic/algorithmic rejection of spurious signals. Finally, if all of those conditions match up and you get a signal to punch through, and it's strong enough to overcome the true signal, and you can't tell that it's an erroneous signal, then it will result in one bad/missing point in a frame of thousands of points, present for one frame.

You're correct, however, that there is a saturation issue. If you just DOS the photodiodes with photons you can potentially prevent any signals from getting through. But again, this isn't super easy to do. The detectors will almost certainly be balanced, not single ended, and AC coupled. So you really have to blast the photodiode, effectively bringing it up to its damage threshold so it is just flooded with current and can't do anything, and/or just breaks. The raw laser light doesn't do much, both because the DC signal is rejected and because the balanced detectors will reject common mode signals (clearly you know this already). You also have the same issue with needing to shine into a very narrow field of view, at the right time, for long enough to matter.


Unfortunately, FMCW lidar is sweeping the lasers across the same band. You don’t need the exact frequency, just a beat frequency that’s within your detection BW.

Also, balanced detectors have something called common-mode rejection. This is not infinite. In high-volume applications it’s difficult for this to be >25dB, but you can buy some devices with >35dB.

Given that lidar dynamic range is ~100dB, you will definitely see the DC. I’ve not thought about this too much, but it seems like an issue for the AGC, even if your demodulator won’t be bothered by it.
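A minimal sketch of the beat-frequency point: an interferer chirping at the same slope but with a relative delay produces a constant beat tone at slope × delay, which can land inside the detection bandwidth just like a real return. All the numbers below are illustrative assumptions, not any particular sensor's specs.

```python
# An interfering FMCW chirp with the same slope but a relative delay
# produces a constant beat tone at slope * delay, indistinguishable
# from a real return if it falls in the detection bandwidth.
B = 1.0e9      # sweep bandwidth, Hz (assumed)
T = 10e-6      # sweep duration, s (assumed)
slope = B / T  # chirp rate, Hz/s

detection_bw = 100e6  # receiver detection bandwidth, Hz (assumed)

delay = 0.5e-6        # interfering chirp lags ours by 0.5 us
beat = slope * delay
print(beat / 1e6, "MHz", beat < detection_bw)  # 50 MHz -> inside the band
```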


It's true that the laser frequency is sweeping, but it very well may not be over the same band. The sweep bandwidth in a typical lidar is likely in the 1-10GHz range. The carrier frequency of the laser that this modulation is riding on is probably in the neighborhood of 200THz. Let's say you're using a telecom laser at 1550nm. The actual wavelength of that laser will be centered on some channel in the 1530-1580nm band, with each channel spaced by say 100GHz. So already each laser might intentionally be in a different channel, depending on chance and how many cars are there. But even if they are in the same channel, the chirp bandwidth is small compared to the channel bandwidth, so there will likely be at most only partial overlap, depending on where the respective center frequencies actually are. Unless your lidar is using a very expensive, very fiddly laser system, this center frequency will be drifting around within the channel all the time. It varies with temperature, mechanical stress, output power and a bunch of other stuff, depending on the type of laser. However, even if the lasers are magically in the same channel, and perfectly locked to the same center frequency, you still need the light to be coherent to produce an interfering RF signal. They will not be coherent.
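To put a rough number on the partial-overlap argument: if you model each chirp's position within a 100GHz channel as uniformly random (a simplifying assumption on my part, not a claim about real laser drift statistics), the chance that two chirps overlap spectrally at all is fairly small, and that's before any of the other rejection mechanisms.

```python
# Probability that two chirps, each drifting independently within one
# channel, spectrally overlap at all. Channel width and chirp bandwidth
# are ballpark figures from the discussion; the uniform-drift model is
# a simplifying assumption.
W = 100.0  # channel width, GHz
B = 5.0    # chirp bandwidth, GHz

# Chirp start frequencies uniform on [0, W - B]; the chirps overlap
# iff their starts differ by less than B.
R = W - B
p_overlap = 1 - ((R - B) / R) ** 2
print(p_overlap)  # ~0.10
```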

Certainly the balanced detectors will have finite CMRR. In general you definitely have to make a good detector but it doesn't need to reject to 100dBc. A photodiode might have 100dB of dynamic range, but most likely your RF front end does not, and more importantly for most applications you will be dominated by photon shot noise, so you don't need to push common mode signals all the way to your electronic noise floor. 35dB of rejection works wonders.


> Of course, due to the Heisenberg uncertainty principle, this means you also get worse distance estimation.

This isn't how the Heisenberg uncertainty principle works. For macroscopic objects, the effects are completely dwarfed by other phenomena. Keep in mind that Planck's constant is on the order of 10^{-34} joule-seconds.


I was actually wondering about this the other day. So what equation(s) would you use to determine the variance in distance estimation relative to velocity estimation? And it sounds like you're strongly implying the distance variation is immeasurably small while accurately estimating velocity - is this correct? I'm not sure the macro point makes sense, since you could have a large object with only one point measuring it (or more realistically a dozen points, but still far from what people mean when they say macro). But I'm curious to learn more if you can provide the math.


I'm pretty sure the effect you are discussing has to do with the uncertainty relationship inherent to the Fourier Transform [0]. This is very closely related to the Heisenberg uncertainty principle, and states you cannot simultaneously constrain time and frequency, which are the values you need to measure for position and velocity, respectively. In the context of signal processing applications, I don't think the particle nature of light is typically considered, which is why it may not be exactly correct to refer to it as the Heisenberg uncertainty principle in this context. This is a bit outside my domain though, so take it with a grain of salt.

[0] https://en.wikipedia.org/wiki/Uncertainty_principle#Signal_p...
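For what it's worth, the signal-processing version of the relation is easy to check numerically: the RMS widths of a Gaussian pulse and its Fourier transform multiply to 1/(4π), the lower bound of the inequality. The grid parameters below are arbitrary choices for the sketch.

```python
import numpy as np

# Numerically verify the time-frequency uncertainty relation
# sigma_t * sigma_f >= 1/(4*pi), with equality for a Gaussian pulse.
n, dt = 2**14, 0.01
t = (np.arange(n) - n // 2) * dt
g = np.exp(-t**2 / 2)  # Gaussian, sigma = 1

def rms_width(x, w):
    """RMS width of the probability distribution |w|^2 over x."""
    p = np.abs(w)**2
    p = p / p.sum()
    mean = (x * p).sum()
    return np.sqrt(((x - mean)**2 * p).sum())

f = np.fft.fftshift(np.fft.fftfreq(n, d=dt))
G = np.fft.fftshift(np.fft.fft(g))  # only |G| matters; phase drops out

product = rms_width(t, g) * rms_width(f, G)
print(product, 1 / (4 * np.pi))  # both ~0.0796
```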


So you're correct that there is a Fourier Transform analogy for the uncertainty principle, but in the context of FMCW lidars (which brought up the question of velocity vs position uncertainty), the measurement of frequency actually determines both the position and the velocity. It's actually a problem for most FMCW lidars because you just get 1-2 frequency measurements and somehow need to disentangle what the range frequency is, as well as what the doppler (velocity) frequency is. A massive amount of effort has been put into developing lidar methods and architectures that solve this problem well.

But in summary, the uncertainty principle as encountered in quantum mechanics has ~nothing to do with a trade-off between range accuracy and velocity accuracy. It could conceivably come into play in a very detailed treatment of FMCW lidar SNR, in the context of counting return photons, but it isn't generally necessary even there. The time-frequency uncertainty does play a role, in that the range and velocity resolution both get better the longer you stare at a signal. So for a given amount of reflected light, at a given range/velocity, there is a fundamental lower bound on how long you must integrate to a) get a signal at all and b) achieve a desired precision.
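To make that trade-off concrete, here's a back-of-the-envelope sketch; the chirp bandwidth, stare time, and wavelength below are assumed values, not any particular sensor's specs:

```python
# Range resolution is set by the sweep bandwidth; velocity resolution
# by how long you stare (Doppler frequency bins get finer as T grows).
c = 3.0e8             # speed of light, m/s
wavelength = 1.55e-6  # 1550 nm telecom laser (assumed)
B = 1.0e9             # chirp bandwidth, Hz (assumed)
T = 10e-6             # integration time per point, s (assumed)

range_resolution = c / (2 * B)             # m
doppler_resolution = 1.0 / T               # Hz
velocity_resolution = doppler_resolution * wavelength / 2  # m/s

print(range_resolution)     # 0.15 m
print(velocity_resolution)  # ~0.078 m/s
```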


It's not just an analogy--the underlying math is the same. These course notes have a nice little summary + a proof: http://www.its.caltech.edu/~matilde/GaborLocalization.pdf


Thank you for this! This is exactly what I was looking for.


It seems to be an extract from "Foundations of Field Computation" by Bruce MacLennan, if you want to read the whole thing: http://web.eecs.utk.edu/~bmaclenn/FFC.pdf

He and Dr. Marcolli have a bunch of interesting stuff on their websites if you like this sort of stuff.


The position/momentum pair is just one (important) case of a more generalized Uncertainty Principle. There's a similar one, due to Gabor (1946), which says that a signal can't be simultaneously time-limited and band-limited, which is presumably what the OP is referring to.

The underlying math is the same, and there's a principle of complementarity that describes other pairs of quantities that need to be traded against one another.


I'd imagine that those cameras might have trouble if they were all using long exposures and strobe lights.

As someone outside of the industry, what's the company that has problems with interference?


How many moving parts does this have? What's the difference between solid state lidar, and "digital lidar"? I understand that there are 0 moving parts with solid state lidar.


This has a single moving part - a brushless motor that turns the turntable. It's rated for over 100,000 continuous hours of operation, and passes automotive shock and vibration standards.

There's a good explanation in the post about what we mean by digital lidar, but the tl;dr version is we use silicon CMOS chips for lasers and detectors vs analog components like side emitting lasers and APDs used by legacy lidar providers.

Solid state is a bit of a buzzword, and most "solid state" lidar sensors actually have small, delicate moving parts inside. Solid state sensors are aimed primarily at consumer vehicles, which are still many years away.

The benefit is (at least in theory) easier integration into the vehicle fascia and (again, in theory) higher reliability vs legacy spinning lidar, which are quite unreliable in the real world.

Ouster's digital lidar sensors are much more reliable than the legacy analog spinning lidar sensors, and much more compact - and therefore easier to integrate.


> Ouster’s engineers have paid to ensure every sensor stands up to the wear and tear

IOW, no one is sure if it actually will


We do a tremendous amount of testing to ensure real-world reliability, and our customers' results bear that out. Full functional safety certification is slated for end of this year, which means it's already well underway.

We make a point of this because legacy spinning lidar is unreliable. But it's unreliable because of the analog design, not because spinning is inherently unreliable.


This seems dubious, to be honest. Moving parts break, if only due to mechanical wear. Gyroscopic forces from the spinning motion, for example, are less than ideal for drones.

I realize a solid state lidar may be a very challenging prospect but it would be a huge selling point!


If the device is reliable then you should quote a FIT number. A very good VCSEL-based transceiver in an indoor environment has a FIT of about 100 at Tj~65C and a CI of 60%. If we assume your FIT rate is similar (it won’t be, because your operating conditions are more difficult) and you have 128 of these devices, your system FIT rate is ~12800 (assuming independent failures). This puts your MTBF at around 8.9 years.

Some transceivers have a FIT rate of ~300, so if that’s the case your MTBF will only be ~3 years.
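The arithmetic above checks out, for anyone following along (FIT = failures per 10^9 device-hours; the independence assumption is the parent's):

```python
# MTBF from a per-device FIT rate, assuming independent failures
# (FIT = failures per 1e9 device-hours).
HOURS_PER_YEAR = 8766  # average year, including leap years

def mtbf_years(fit_per_device, n_devices):
    system_fit = fit_per_device * n_devices
    return (1e9 / system_fit) / HOURS_PER_YEAR

print(mtbf_years(100, 128))  # ~8.9 years
print(mtbf_years(300, 128))  # ~3.0 years
```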


Whatever happened to the solid state lidar sensor that Osram was touting back in 2016?



