
I always wonder how many zero-days exist on purpose…

I've heard this sentiment a lot, that governments/secret agencies/whoever create zero-days intentionally, for their own use.

This is an interesting thought to me (like, how does one create a zero-day that doesn't look intentional?) but the more I think about it, the more I believe this isn't even necessary. There are enough fallible humans and memory-unsafe languages in the loop that there will always be a zero-day somewhere; you just need to find it.

(this isn't to say something like the NSA has never created or ordered the creation of a backdoor - I just don't think it would be in the form of an "unintentional" zero-day exploit)


I'm not sure that governments actually create them, not prolifically at least. There's been some state actor influence over the years, for sure.

However, exploits known only to a state actor would most definitely be a closely guarded secret. It's only convenient for a state to release information about an exploit once it has become public, or when the consequences of withholding it outweigh the benefits of keeping it.

So yes, exactly what you said. It's easier to find the exploits than to create them yourself. By extrapolation, you would have to assume that each state maintains its set of secret exploits, possibly never getting to use them for fear of the other side knowing of their existence. Cat & Mouse, Spy vs Spy for sure.


The NSA surely has ordered a backdoor.

> In December 2013, a Reuters news article alleged that in 2004, before NIST standardized Dual_EC_DRBG, NSA paid RSA Security $10 million in a secret deal to use Dual_EC_DRBG as the default in the RSA BSAFE cryptography library

https://en.wikipedia.org/wiki/Dual_EC_DRBG


I think you are right that the shady actors can pretty much just use existing bugs.

But you are also right that this is not the only way they work. With the XZ Utils backdoor (2024), we normal nerds got an interesting glimpse into how such a backdoor gets created. It was luckily discovered by a Microsoft engineer who wasn't looking for backdoors, just debugging a performance problem.


It definitely feels like Claude is pulling ahead right now. ChatGPT is much more generous with their tokens but Claude's responses are consistently better when using models of the same generation.

When both decide to stop subsidized plans, only OpenAI will be somewhat affordable.

Based on what? Why would one be more affordable than the other? Substantiating your claim would make for a better discussion.

Modern AI is very good at generating content that on the surface appears substantive but ends up being almost meaningless.

Thanks for sharing. Is your goal to make this a paid service eventually? I ended up doing link shortening via Cloudflare Workers recently but I think Bitly premium would be the direct paid competitor to this?

This was great, thank you for sharing.

The dataset claims there are significantly more Citibank locations than McDonald's worldwide, which I don't think can be correct?

It also lists over 56,000 Wildberries worldwide but a quick Google search shows they are a large online retailer. I wonder what is going on with the brand POIs…


Glad you enjoyed it.

There should be enough SQL in the blog post to re-purpose for extracting the Wildberries locations and seeing what they land on top of. I'd never heard of this firm before you mentioned it.
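
Roughly, something like this should work (a sketch in Python with DuckDB; the release path and column names are assumptions based on recent Overture releases and worth double-checking against the current release notes):

    import duckdb  # pip install duckdb

    con = duckdb.connect()
    con.execute("INSTALL httpfs; LOAD httpfs;")
    con.execute("INSTALL spatial; LOAD spatial;")
    con.execute("SET s3_region = 'us-west-2';")

    # Release path and schema are assumptions; adjust to whatever the
    # current Overture release notes say.
    rows = con.execute("""
        SELECT names.primary AS name,
               ST_X(ST_GeomFromWKB(geometry)) AS lon,
               ST_Y(ST_GeomFromWKB(geometry)) AS lat
        FROM read_parquet('s3://overturemaps-us-west-2/release/2024-09-18.0/theme=places/type=place/*')
        WHERE names.primary ILIKE '%wildberries%'
    """).fetchall()
    print(len(rows), rows[:5])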

From Google:

> Citibank operates over 2,300 ATMs within more than 600 U.S. branches, with a total network of over 65,000 fee-free ATMs

So the 57,163 Citibank locations are probably a combination of their branches and ATMs.

Update: I reviewed Alltheplaces a while back; they scrape company websites for store locations. They reported 68,227 locations for Wildberries. ATP is one of the sources Overture uses, but they seem to take only 1.55M of the records from its 19M-record dataset. https://tech.marksblogg.com/alltheplaces.html


I contribute to ATP and can confirm that the author of the wildberries spider was deliberately trying to collect https://wiki.openstreetmap.org/wiki/Tag:shop%3Doutpost (online order pickup locations). Capturing such features is not a common occurrence within the current set of ATP spiders. A quick search indicates that OSM doesn't appear to have tags designed to capture pickup/dropoff partnerships between retail brands, for example an agreement from a pet supply shop to allow collection of parcels from select fuel stations of a partner brand. Thus I think the author of the wildberries spider used shop=outpost as the closest tag available in OSM, and Overture Maps' filters wouldn't be able to omit these features from their dataset unless Overture Maps adds wildberries to their exclusion list.

Ideally ATP's "located_in" and "located_in:wikidata" fields would be populated for these wildberries pickup locations, making it clear the pickup location is part of a parent feature (e.g. fuel station, supermarket). These fields are specific to ATP and are not OSM fields. OSM would expect features to be merged and a hypothetical field such as "pickup_brands:wikidata=Q1;Q2;Q3" be used instead on the parent feature.
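
To make that concrete, a fully-populated pickup feature might carry properties along these lines (a hypothetical illustration; the host brand and placeholder values are invented):

    # Hypothetical ATP feature properties for a Wildberries pickup point.
    # The located_in values are placeholders for whichever host brand
    # (fuel station, supermarket) the pickup counter sits inside.
    feature_properties = {
        "brand": "Wildberries",
        "brand:wikidata": "Q24933714",
        "shop": "outpost",
        "located_in": "SomeHostBrand",   # ATP-specific field, invented value
        "located_in:wikidata": "Q…",     # wikidata ID of the host brand
    }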

ATP has a much more inclusive set of features it can extract than what Overture Maps, TomTom et al. care about. As Overture Maps is more opinionated about what they aggregate, they will filter out ATP-extracted features such as individual power poles, park bench seats, local-government-managed street and park trees, stormwater drain manholes, cemetery plots, weather stations, tsunami buoys, etc. I think there might be some exceptions if it helps TomTom et al. with their products, such as speed camera locations, national postal provider drop-off/pick-up locations within other branded retail shops, etc.


A quick overpass-turbo search for "brand:wikidata=Q24933714 in Moscow" https://overpass-turbo.eu/s/2kaO (Q24933714 being Wildberries, https://www.wikidata.org/wiki/Q24933714) reveals that almost all locations are tagged shop=outpost https://wiki.openstreetmap.org/wiki/Tag:shop%3Doutpost which identifies them as pick-up locations for goods ordered online. I assume the dataset in the post has mostly the same locations.
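
For reference, the query behind that link is roughly the following, runnable against the public Overpass API (a sketch; the area selector for Moscow is an assumption and may need tweaking):

    import requests  # pip install requests

    query = """
    [out:json][timeout:60];
    area["name:en"="Moscow"]->.a;
    nwr["brand:wikidata"="Q24933714"](area.a);
    out center;
    """
    resp = requests.post("https://overpass-api.de/api/interpreter",
                         data={"data": query})
    elements = resp.json()["elements"]

    # Count how many of the returned features carry shop=outpost.
    outposts = sum(1 for e in elements
                   if e.get("tags", {}).get("shop") == "outpost")
    print(f"{len(elements)} features, {outposts} tagged shop=outpost")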

At least where I live, Citi and Chase have 2x the number of locations that McDonald's does once you count their small branches and standalone ATMs.

In terms of useful AI agents, Siri/Apple Intelligence has been behind for so long that no one expects it to be any good.

I used to think this was because they didn’t take AI seriously but my assumption now is that Apple is concerned about security over everything else.

My bet is that Google gets to an actually useful AI assistant before Apple because we know they see it as their chance to pull ahead of Apple in the consumer market, they have the models to do it, and they aren’t overly concerned about user privacy or security.


Yeah, Oracle's free tier is much more generous than any other cloud provider's. They've offered that amount of RAM for at least a couple of years now, but we'll see.


Thanks for your feedback. You can run local models on the instance. https://medium.com/@viplav.fauzdar/running-multiple-open-sou...
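
For example, with something like Ollama installed on the instance, a small pulled model is one HTTP call away (a minimal sketch; the model name is just an assumption about what you've pulled):

    import requests

    # Ollama serves a local REST API on port 11434 by default.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen2.5:3b",  # assumption: any small model you've pulled
            "prompt": "Summarize these email subjects in one line: ...",
            "stream": False,
        },
    )
    print(resp.json()["response"])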


Technically, you can also put diesel in a regular gas car. Is that a good idea? Nope.

Don’t try to move the goalposts here.

If those local models with 4B params could do anything useful in the broader sense of a "personal AI assistant" (your words), then why use the $20 Claude Pro subscription and not the local models?

We both know the answer.

You wanted to provide a tutorial for a $0-instance "working AI assistant" that could actually do things like "check email, manage files, run scripts", according to your own words.

Now please prove that you can run a useful AI assistant at $0 with those local models.


Local models are quite capable. Obviously a 4B model isn't going to do the job of a trillion-parameter SOTA model, but there are many local models that are both fast and very usable for these agentic flows.

Qwen 30B and GLM Flash (also around 30B), for example, are both very good, and I use them regularly.
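
As a rough illustration of what "usable for agentic flows" means, recent Ollama builds expose tool calling through the same local API (a sketch; the model name and tool are hypothetical):

    import requests

    # A single hypothetical tool the model may choose to call.
    tools = [{
        "type": "function",
        "function": {
            "name": "list_files",
            "description": "List files in a directory",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    }]

    resp = requests.post("http://localhost:11434/api/chat", json={
        "model": "qwen2.5:32b",  # assumption: a tool-capable local model
        "messages": [{"role": "user", "content": "What's in /home/me?"}],
        "tools": tools,
        "stream": False,
    })
    # If the model decided to call the tool, the request shows up here.
    print(resp.json()["message"].get("tool_calls"))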


Works pretty well, though it's probably too resource-heavy to just always keep on. Suggestion: give the user a shortcut key to close the app in case the blur goes haywire on them; a minimal sketch of that idea is below.
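
Something like a global hotkey would do it (a sketch in Python using the keyboard library; the hotkey and library choice are just assumptions, and the real app would wire this into its own event loop):

    import keyboard  # pip install keyboard (may need elevated privileges)
    import os

    def panic_quit():
        # Hard exit so it works even from the hotkey listener thread,
        # and even if the blur overlay has hung the UI.
        os._exit(0)

    keyboard.add_hotkey("ctrl+alt+q", panic_quit)  # hotkey choice is arbitrary
    keyboard.wait()  # keep the process alive; a real app runs its GUI loop here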


Hi - thanks for the feedback. I've improved CPU usage with the latest release. I'll look into a kill switch.


My favorite thing about this project is that it's 100% html and you're hosting it for free on GitHub pages. Thanks for sharing! GH repo: https://github.com/jeisey/stormwatch

