I believe (hope) I haven't been the most distracted parent - I banned all news apps, Facebook, Twitter, etc. from my smartphone long ago.

Three weeks ago, on a Friday, I broke my iPhone's screen, and we went on a family trip over the weekend, so I had to go without a smartphone. It was great: I felt more connected to my 2-year-old son, but also to my wife.

After that experience, I purchased a dumbphone. Yes, I really do miss some utility apps, but overall I'd say it has improved my life.


I've been working for three years at a growing startup that has been hiring Scala engineers at least every six months since I started.

In the last ~18 months, the number of CVs we're getting has been steadily increasing. Part of that is probably that our startup's hiring is maturing, but the kind of CV we receive is also changing.

I'd say that three years ago, there was an 80% chance that the applicant was highly self-motivated, had learned Scala in their free time, and had tried to (or did) introduce it at their current workplace. Today, there is an 80% chance that the applicant either "had to" learn it at their current workplace, or learned Scala when switching jobs. (Don't get me wrong, they're still motivated, and they took the chance when it was there!)

So there is a shift from the Early Adopters to the Early Majority (where the Early Majority has now worked with Scala for one or two years at their current job, and is confident enough to look for a new one).

One driving force was definitely Spark, but there are also a lot of enterprise apps unrelated to ML (usually ones with higher traffic requirements) - the sort that would most likely have been written in Java or C# five years ago. It seems a lot of enterprises introduce Scala when they try to break up their (Java) monolith into microservices.

So it seems that Scala has been carving out its place in backend/microservices with scalability requirements, and is eating part of Java's cake there.


Their marketing claim of "Full Self-Driving Hardware" is evil-genius - it's very similar to the Halting problem[0]: Turing proved that we cannot develop an algorithm that can predict whether an arbitrary program will eventually halt.

It's very similar with this marketing claim. We can never show that the hardware is not capable of self-driving. Maybe someone, at some point in the future, could pull it off. Even if the rest of the industry uses Lidar, beefier chips, etc., that is not proof that it cannot be done with this hardware. Tesla can keep playing that game until virtually no one owns the current generation of cars anymore.

[0] https://en.wikipedia.org/wiki/Halting_problem


I think it's very different from the halting problem, because there is no perfect solution (like there is with "either the program halts or it does not"). A self-driving car just needs to be good enough, where "good enough" means X% more reliable than the average human driver.

I personally am terrified of human drivers and will not walk close to the edge of a road because I don't trust them to be paying attention. As shown recently, self-driving cars don't appear to be at that level yet (though I don't know the numbers - perhaps they are).

When self-driving cars reliably have the same or fewer accidents per year (by volume) as human drivers, then I think they can claim "Full Self-Driving".


What's really scary about self-driving cars is that defects can be global.

One software bug can make every Tesla go into casual murder mode on a similar piece of road.


Humans write the software that powers those self-driving cars... was it an error that always repeated itself, or one that only happened every billion iterations? I'm all for self-driving, but it's still horrifying to think of the edge cases with software touching the physical world...


It's more the cases where the ML model(s) fail to perform the correct action. Essentially, the inputs, after being passed through the weighted neural network, fail to produce the correct output == RIP pedestrian/passengers.


No, it isn't related to the halting problem. Sigh.


Right, it's just a plain old unfalsifiable claim. Sure, still pretty fishy but not related to the halting problem.


Eh. The phrase "technically correct" comes to mind. Sure, it won't be formally disproven, but if Tesla doesn't deliver FSD while these cars are on the road, that will definitely (and very rightfully) be seen as an inability to deliver.

Also, I think the conversation about Lidar in this article (and perhaps among some readers here) misses a key point. Elon doesn't think merely that one can build a Level 4 car without Lidar; he thinks it's vastly preferable for a Level 4 car not to have any dependency on Lidar -- not only for cost reasons, but also for performance in degraded atmospheric conditions (rain, snow, fog, etc.). It's a difference in strategy, not merely an attempt to do more with less.


Visibility range and picture quality also go down significantly in degraded atmospheric conditions. I'm not sure that works as an argument in the Lidar vs. cameras discussion.


You used the word "marketing". The moment it is called Autopilot, the general population will believe it is so.

A thousand disclaimers and alerts will not be remembered 10 days later, but the word "Autopilot" will stick. People hear "autopilot" and they think of airplanes flying 100% autonomously (which of course is not the case).

As long as that name stands, people will be doing stupid and dangerous things like watching Harry Potter, or anything-but-looking-at-the-road.

I would call it "enhanced cruise control", but hey, that doesn't sell as much as "Autopilot", does it?


If a human can look at the camera and sensor feeds and drive, to my mind that is good proof that the car has hardware capable of full self-driving.


> The main concern with this approach is that the event store is no longer immutable

I think what happens when you delete all events of a (deleted) aggregate root (such as a customer who requested to be forgotten) can be interpreted more charitably, in a way that allows the event store to still be called immutable.

If you look at a functional programming language, you cannot force a data structure to be removed from memory. However, that obviously doesn't mean that your program consumes an infinite amount of memory: if a data structure is not referenced anymore, it'll be removed from memory by the GC. Your program itself didn't mutate the state of the data structure, so from that point of view everything is still immutable.

Now, let's apply the same principle to an event store: a deleted aggregate root (the to-be-forgotten customer) should have been removed from all projections (as required by GDPR). If you replay the events, it shouldn't matter to the final state of a read projection whether it processed the events belonging to this aggregate root or not.

Therefore, one could argue that removing the events of a deleted aggregate root in a GC-like fashion leaves the event store immutable, in the sense that my programs can't mutate the state themselves, and their output doesn't change.
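
To make the GC analogy concrete, here is a minimal sketch in Haskell (the Event and Projection types are hypothetical, not from the article): a read projection is a fold over the event stream, so once an aggregate root has been deleted, dropping its events from the store cannot change what a replay produces.

    import qualified Data.Map.Strict as Map

    -- Hypothetical event type, reduced to what the sketch needs.
    data Event = NameChanged     { customer :: Int, newName :: String }
               | CustomerDeleted { customer :: Int }

    -- A read projection: customer id -> latest name.
    type Projection = Map.Map Int String

    apply :: Projection -> Event -> Projection
    apply p (NameChanged cid n)   = Map.insert cid n p
    apply p (CustomerDeleted cid) = Map.delete cid p  -- forgotten, per GDPR

    replay :: [Event] -> Projection
    replay = foldl apply Map.empty

    -- "GC" of the event store: drop all events of one aggregate root.
    forget :: Int -> [Event] -> [Event]
    forget cid = filter ((/= cid) . customer)

    -- If the stream contains CustomerDeleted 42, then
    --   replay events == replay (forget 42 events)
    -- so dropping those events is unobservable from any such projection.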


Volvo is also manufacturing a car with a pedestrian airbag, the V40: https://support.volvocars.com/uk/cars/Pages/owners-manual.as...

Both Volvo and Uber could have opted to use this car instead. Even choosing an SUV is questionable, as SUVs are known to be more dangerous for pedestrians due to their higher bumper heights: https://en.wikipedia.org/wiki/Criticism_of_sport_utility_veh...

IMO there is at least some negligence on their part for not choosing a car that is more likely to protect pedestrians.


The pedestrian airbag was introduced before the Automatic Emergency Braking (AEB) safety function was available. Volvo no longer offers the pedestrian airbag as they found most pedestrian accidents were avoided by the AEB.

The V40 is also a rather old car and is based on an old platform. The XC90, on the other hand, was their newest car at the time the Uber deal was made and is based on their latest platform. So it is not unusual that both Uber and Volvo would prefer the XC90 over the V40. Besides, given the newer technology in the XC90, it is quite possible that it performs better in pedestrian accidents than the V40 (with the safety systems enabled on both).


Not if you get ulcerative colitis, cancer, or similar.

Or if you have a baby and want parental leave.

Or if you want to send your kids to university.


I wrote some RAML recently and found it very good for creating a machine-readable representation of an API. We also wanted to make it human-readable and use it as our API reference (with the API console). I found it to be mostly good, but had some trouble explaining larger concepts that span several requests. Also, once the API is sufficiently large, it's harder to point to the "important" parts than it was with our "freeform" reference.

I haven't looked into Swagger deeply, but RAML seems better at re-usability. Swagger seems to have way more traction though, and also more tools.


Can I suggest API Blueprint [1]? It is much more human-friendly and makes it easier to work on API design. Apiary has tools for complete API lifecycle management.

Disclaimer: I work on making API Blueprint better.

[1]: https://apiblueprint.org


Ditto - we used RAML on a previous project and everyone involved really enjoyed it. It manages to capture just enough about how things work without getting overbearing. I particularly like the fact that it allows for examples to be specified.

I haven't built anything with Swagger, but I never clicked with it the way I instantly did with RAML. It's a pity - there seems to be a lot more industry and open-source support behind Swagger than there is for RAML, which is mostly backed by MuleSoft.


Swagger added support for examples with version 2.0 of the format, see: https://github.com/swagger-api/swagger-spec/blob/master/vers...


If you're interested in re-use in Swagger, see the guidelines here: https://github.com/swagger-api/swagger-spec/blob/master/guid...


The Soviet gas pipeline explosion - if the whole CIA story is true at all - should not be labelled a bug... The code allegedly did exactly what its creator intended ;-)


Well, typically the users decide what is and isn't a bug. The developers can always say "I intended it to do this". ;)


Seems to be a popular tactic. Just like Stuxnet.


Looks really cool. At first I thought it saved the JSON using the new Postgres JSON support, but saving it as relational data is even more impressive!

I'd say that if OPTIONS returned a JSON Schema (+ RAML/Swagger) instead of the JSON-ified DDL, it would be even more awesome. With a bit of code generation, this would then be super quick to integrate in the frontend.


What is so ugly about it? Swift tries to be somewhat close to the usual C-syntax (it's supposed to be the successor of Obj-C, after all).

For someone used to C syntax, I guess the main deviations are that the return type comes at the end and that there's a func keyword. As for the operators, there is the addition of generics (what is wrong with <>?), -> to describe the signature of a function (a very sensible choice IMO), and ? for Optionals (again, fine for me).

I don't really see how a statically typed language could substantially improve on this, but I'd be happy if someone can prove me wrong :-)


It's not clear to me why it's necessary to include <T, U> at the start of the function definition. In Haskell, unquantified type variables are implicitly universally quantified, i.e. you assume that the signature must be valid for any types T and U.

That's the only thing I think could be substantially improved, though. Personally, I find the style where you separate the type signature from the function definition easier to read.

How about

    <^> : (T -> U) * T? -> U?
    <^> (f, a) {
        return a.map(f)
    }
or even

    <^> : (T -> U) * T? -> U?
    <^> (f, a) -> a.map(f)
which is already pretty close to the Haskell equivalent

    (<^>) :: (t -> u) -> Maybe t -> Maybe u
    (<^>) = fmap
Arguably, Haskell syntax could be improved with more built-in syntax

    <^> : (t -> u) -> t? -> u?
    <^> = fmap
though I haven't thought about what this means for parsing the language.


Explicit is better than implicit. Does Haskell allow functions to have value parameters that aren't declared?


Depends what you mean by "not declared". Haskell doesn't require you to provide type signatures, so if you don't put a type signature on a function, you can have parameters that aren't explicitly declared (though they still have an inferred static type, so you could use ghci to see what all the parameters are). If you do put any type signature on the function, then all parameters would have to be included.
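
For instance (pairWith is a hypothetical function, purely for illustration): leave off the signature entirely, and ghci will still show you the inferred, fully generic type.

    -- No type signature written for pairWith; GHC infers one anyway.
    pairWith f x = (x, f x)

    -- ghci> :type pairWith
    -- pairWith :: (t -> b) -> t -> (t, b)
    -- (the exact type-variable names vary by GHC version)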


If I make a typo when using a value parameter in a function definition, can I accidentally introduce an extra value parameter?

If I make a typo when using a type parameter in a function definition, can I accidentally introduce an extra type parameter?


> If I make a typo when using a value parameter in a function definition, can I accidentally introduce an extra value parameter?

No.

> If I make a typo when using a type parameter in a function definition, can I accidentally introduce an extra type parameter?

You couldn't introduce a new concrete type. You could introduce a new type variable, but the type signature would still have to typecheck, so it would be difficult to come up with a non-contrived example where that would happen.
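
To illustrate with a contrived example (hypothetical functions, not from the thread):

    -- Intended: constant :: a -> a -> a. A typo turned the second 'a'
    -- into 'b', introducing a new type variable. This still typechecks,
    -- because the definition really is that general.
    constant :: a -> b -> a
    constant x _ = x

    -- A typo the definition can't satisfy is rejected instead, e.g.
    --   swap' :: (a, b) -> (b, c)   -- meant (b, a), typo'd as (b, c)
    --   swap' (x, y) = (y, x)
    -- fails: the compiler can't match the rigid type variables 'a' and 'c'.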


How do your examples specify that T and U are generics?


As I said, in Haskell "unquantified type variables are implicitly universally quantified" i.e. type variables (which are always lowercase in Haskell, to distinguish them from concrete types which are uppercase) are always generic.

So it's true that in the Swift examples, you would need a convention to distinguish type variables from concrete types, or else you need to explicitly mark them as generic.


Thank you! This actually explained what is going on perfectly.

