I had a Pixel 6a with GrapheneOS for a year before the phone started to glitch and eventually die. It ran pretty hot; sometimes it was hard to even hold the phone in my hands without burning myself.
I could not get a replacement as I bought the phone in a foreign country (Google doesn’t sell Pixels here in Brazil).
So as much as I love the idea of running a more private phone, I found the hardware extremely fragile and poorly designed, so I will not buy from them again anytime soon.
> I had a Pixel 6a with GrapheneOS for a year before the phone started to glitch and eventually die. It ran pretty hot; sometimes it was hard to even hold the phone in my hands without burning myself.
This sounds like your phone may have been one of the Pixel 6a models with a defective battery[1]. It was a major problem, for which Google pushed out an update that nerfed the battery life. There is a tool online where you can check whether your particular 6a shipped with a battery from the bad production batch[2].
But that unfortunately doesn't help if you are in Brazil where, as you say, Pixels aren't officially sold and import/export controls tend to make tech warranties useless in practice.
Which is (almost) the case during sales. The Pixel 10 was on sale for $599 not long ago, and you could buy a 9a for little more than $300. That is extremely good value compared to any iThing reporting your every move to Apple.
After years of neglect, I updated the theme, translated all pages to Portuguese, and finally posted something new. I hope to keep this up and maybe make it a habit.
Not as generally available as I thought, and from the looks of it, it feels just as "hacky" as the preview with respect to the user experience. For some reason, I was expecting more from them.
> This is not true, if you run Debian / CentOS7 / Ubuntu, out of the box the settings are good. The thing you don't want to do is start to modify the network stack by reading random blogs.
I agree these are good defaults, but they are not meant to work well for all kinds of workloads. And yes, if things are working for you the way they are, that's okay; there's no need to change anything.
On the other hand, I personally don't know anyone who runs production servers of any kind on top of unmodified Linux distros.
> On the other hand, I personally don't know anyone who runs production servers of any kind on top of unmodified Linux distros.
You are so, so, so lucky... lol. I say that as someone who has come across a desktop CentOS install on a server on multiple occasions, complete with a running X.Org and like 3-4 desktop environments to choose from, along with ALL of the extras: KDE's office apps, GNOME's office apps, etc... HORRIBLE.
Sounds interesting! Do you have URLs with more information about this? I would love to read good posts about that! My production servers have been running with standard parameters at every company so far. I feel I might be missing out!
Thank you. So there is a DNAT to get to the Ingress Controller, but from there at least it's direct routing to the service endpoint(s)? Does that mean the Virtual IP given to the Service is basically bypassed when using an Ingress Controller?
And TLS is terminated at the Ingress Controller, with traffic unencrypted by default from there to the service endpoint?
Regarding ways of updating the NGINX upstreams without requiring a reload, I was just made aware of modules like ngx_dynamic_upstream[1]. I'm sure there are other ways to address this less disruptively than reloading everything, so this is probably something that could be improved in the future.
May I ask how you are automating the ELB/TLS configuration and how that ties into the Ingress controller? Do you somehow specify which ELB it should use? We're in a similar situation.
You can annotate any Service of type LoadBalancer in order to configure various aspects[1] of the associated ELB, including which ACM-managed certificate you want to attach to each listener port.
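For illustration, here's a minimal sketch of what that looks like (the certificate ARN, names, and selector are placeholders, not from the original discussion):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  annotations:
    # Placeholder ARN; point this at your own ACM-managed certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/example"
    # Terminate TLS at the ELB and speak plain HTTP to the pods behind it.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # Attach the certificate to the 443 listener only.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx  # hypothetical label for the Ingress Controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 80  # TLS is already stripped at the ELB
```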
> Why would any sane Op/Inf/SRE choose not to have at least account-level isolation - is it only a matter of cost due to under-utilization?
In our particular case, yes, pretty much. We are a small company with a small development team, so even if I wanted to split accounts across teams, we would end up having one account for every 2-3 users, which doesn't make a lot of sense right now.
> This is a great read. I know the single cluster for all envs is something that's sort of popular, but it's always made me uncomfortable, for the reasons stated in the article but also for handling kube upgrades. I'd like to give upgrades a swing on a staging server ahead of time rather than going straight to prod or building out a cluster to test an upgrade on.
I've been doing patch-level upgrades in place since the beginning, and never had a problem. For more sensitive upgrades, this is what I do: create a new cluster based on the current state in order to test the upgrade in a safe environment before applying it to production.
And for riskier upgrades, I go blue/green-style: create a new cluster running the same workloads and gradually shift traffic over to it.
> Could you share which version of NGINX you found the reload issue with? And in which version was the fix released?
I'm using 0.9.0-beta.13. I first reported this issue in an NGINX ingress PR[1], so the last couple of releases no longer suffer from the bug I described in the blog post.
> I find it interesting/brave that you use a single cluster for several environments.
I'm not working for a big corporation, so dev/staging/prod "environments" are just three deployment pipelines to the same infrastructure.
As of now, things are running smoothly as they are, but I may well move to a separate cluster per environment in the future.
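For what it's worth, the usual way to slice a single cluster like that is one namespace per environment, optionally with quotas. A minimal sketch, purely illustrative, since the mechanism isn't spelled out above:

```yaml
# Illustrative namespace-per-environment layout for a shared cluster;
# names and numbers are hypothetical.
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
---
# Optional: cap what the dev environment can request from the shared nodes.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
```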
> OP didn't mention which Linux distro he's using and which OS-level configs he changed at the end of the day.
I'm using Container Linux, and yes, I made a few modifications, but I intentionally left them out of the blog post because someone might be tempted to use them as-is.
I'll share more details in that regard if more people seem interested.