txutxu's comments | Hacker News

I never chose a single thing in my job just because of how it would look on my resume.

After 20+ years of Linux sysadmin/devops work, and because of a spinal disc herniation last year, I'm now looking for a job.

99% of job offers now ask for EKS/Kubernetes.

It's like the VMware of the years 200[1-9], or like the "Cloud" of the years 201[1-9].

I've always specialized in physical datacenters and servers, be it on-premises, colocation, embedded, etc... so I'm out of the market now, at least in Spain (which always runs about 8 years behind the market).

You can try to avoid it, and it's nice when you save your company thousands of operational/performance/security/etc. issues and dollars over the years, and you look like a guru who stays ahead of industry problems in your boss's eyes, but it will make finding a job... 99% harder.

It doesn't matter if you demonstrate the highest level in Linux, scripting, Ansible, networking, security, hardware, performance tuning, high availability, all kinds of balancers, switching, routing, firewalls, encryption, backups, monitoring, log management, compliance, architecture, isolation, budget management, team management, provider/customer management, debugging, automation, full-stack programming, and a long etc. If you say "I never worked with Kubernetes, but I learn fast", with your best sincerity at the interview, then you're automatically out of the process. It doesn't matter if you're talking with human resources, an assistant of the CTO, or the CTO. You're out.


If you say "I never worked with X, but I learn fast", with your best sincerity at the interview, then you're automatically out of the process.

Where X can be not just k8s but any other bullet point on the job req.

It's interesting that the very things people used to say to get the job 20 years ago -- and not as a platitude (it's a perfectly reasonable and intelligent thing to say, and in a rational world, exactly what one would hope to hear from a candidate) -- are now considered red flags that immediately disqualify one from the job.

Very sorry to hear about your current situation - best of luck.


I've never heard of this - has this been your direct experience?


It's somewhat speculative (because no one ever tells you the reason for dropping your application or not contacting you in the first place) but the impression I have, echoed by what many others seem to be saying, is that the process has shifted greatly from "Is this a strong, reliable, motivated person?" (with toolchain overlap being mostly gravy) to "Do they have 5-8 recent years of X, Y and Z?".

As if years of doing anything is a reliable predictor of anything, or can even be effectively measured.


No icons on my desktop (window manager: Fluxbox). It's the UI I've been used to for... too many years to count.

Custom keys to launch the most used apps.

Alt+F2 to launch less used apps.
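
In case it's useful to anyone, a minimal sketch of what that looks like in ~/.fluxbox/keys (the apps and key choices here are just examples, not my real setup):

    # most-used apps on direct shortcuts
    Mod4 Return :Exec xterm
    Mod4 b :Exec firefox
    # Alt+F2 brings up a run dialog for the less-used ones
    Mod1 F2 :Exec fbrun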

And... the big star... one of my reasons for staying with Fluxbox for so many years: the custom menu (right-click anywhere on the root background, left-click on the corner of the toolbar, or a custom key to launch the menu).

A custom menu that can do includes, oh yeah, and can be updated via cron/systemd-timer/scripts, etc., so you can have a hierarchical menu of all your machines, by project, by datacenter, by service, IPMI, ssh, remote desktop... always up to date (e.g. generated from ansible-inventory). Or you can have your browser bookmarks in the menu (with the same folder hierarchy). Or you can implement your own RSS feed in the menu via cron. Anything you can imagine.
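
For the curious, a rough sketch of the idea (paths, hostnames and URLs are made up; the real files are generated by scripts). The main ~/.fluxbox/menu pulls in a file that cron keeps rewriting:

    # in ~/.fluxbox/menu
    [include] (~/.fluxbox/menu.d/hosts)

    # ~/.fluxbox/menu.d/hosts, regenerated from ansible-inventory by a cron job
    [submenu] (datacenter-1)
      [exec] (web01 ssh) {xterm -e ssh web01.example.org}
      [exec] (web01 ipmi) {firefox https://ipmi.web01.example.org}
    [end]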

For me, hierarchy > 2D positioning, and "desktop" != root folder for any "data".

The last step of my weird, non-mainstream setup is that all my data is not in my home, but in /data/myuser (on a partition separate from the OS), in folders like backup, docs, downloads, media, src, vms, which are symlinked into my home. All the dotfiles I care about are under my custom config management system (sh script + git), and all the data is in /data (+ backups). Why do I mention this? It's on topic... some apps try hard to use the "desktop" directory. So I have this bit just for them:

    $ grep DESKTOP .config/user-dirs.dirs
    XDG_DESKTOP_DIR="$HOME/docs/desktop"
Because why the hell does a browser need to mkdir ~/Desktop or ~/Downloads for nothing?

My app-launching shortcuts/menus are very well organized and optimized. My data is in /data. I live without the "desktop is everything/anything/chaos" thing seen on many people's machines, and I'm happy.

When I've used other people's OSes with a desktop full of icons, what I especially dislike is not being able to place things exactly where I want because of auto-alignment.

If I used desktop+icons, I would like something like diagramming apps: "Let me place things exactly where I want, but let me align groups of selected items, horizontally and vertically, via a context menu/shortcut".


Just wanted to pop in as a fellow Fluxbox user to say that I have essentially the same setup.

All the apps that I launch multiple times per day have their own global shortcut keys. And then for everything else I have a setup where pressing the Windows logo key brings up a "start" menu right where the mouse pointer is. From this menu I can then launch the other apps that I use infrequently. Super convenient.
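
For anyone who wants the same, as far as I remember it's a single binding in ~/.fluxbox/keys (if the Super key is also used as a modifier for other shortcuts, this may need tweaking):

    None Super_L :RootMenu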

But really the best part of Fluxbox is that it never changes. I think I've run this setup for about 20 years by now with only occasional minor tweaks.

As an old grumpy developer, things not changing without my consent is the best possible feature. I have absolutely zero interest in learning new shiny things just because someone wanted to shove them down my throat!


Debian can run with SELinux if you like that.

Debian uses AppArmor by default, probably because of Canonical's influence (there are more Debian developers and maintainers paid by Canonical than by Red Hat).

But you can run Debian with SELinux (as well as with other LSMs, MACs, etc like Tomoyo).
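
From what I recall of the Debian wiki (double-check it, this is from memory), switching a box over is roughly:

    # install the SELinux userland and a default policy
    apt install selinux-basics selinux-policy-default auditd
    # set up the bootloader/PAM bits and schedule a relabel on next boot
    selinux-activate
    reboot
    # sanity check afterwards
    check-selinux-installation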

At my last jobs, we disabled all of SELinux, AppArmor and auditd on Debian/Ubuntu, just for the sake of performance, and we never detected any security issue for our usage and requirements. So I'm not an expert in this field.

Not sure what the purpose of the article, or the whole blog, is. Do you want to influence the choice of Debian vs RHEL vs Oracle Linux somewhere? As I'm not sure, I'll stop here.


It reads like either an ad for Red Hat or an ad for SELinux, which this guy is probably familiar with (unlike most Linux users and professionals).


I'm writing a book. A novel. By hand (that is, no AI involved, just me and vim).

Sometimes I feel I'm wasting my time, because nowadays there is a lot of AI-generated content and even more competition in self-published books.

After a long struggle with myself, of about 10 months, it's nearly finished (in my native language, Spanish). It still needs a few more review passes and retouching.

I recently became unemployed after 20+ years as a Linux sysadmin, and my wife is now unemployed too (after 20+ years in HR); fortunately we still have some savings.

I dream that it will work out (economically), but most of the time I suspect there will be little beyond a few sales to family and friends.

Depending on how it goes, I already have the outline for the second and third parts.

In parallel, I'm researching different ways to generate cash flow without working for someone else. I would like to avoid searching for a job in the current market of cloud, Docker and Kubernetes, as I'm more of a hardware/colocation guy, and 99.9% of job offers ask for Docker/Kubernetes.


When you say "by hand" do you mean with a pen and paper rather than typing it out? If so, can you say something more about why?


Sorry, I edited to clarify: I mean avoiding AI.

Mainly I use vim. But yes, sometimes I also use a traditional notepad with a soft ink pen, to save myself some screen time, because I'm away from home, or just to get some use out of it.

I think the output flow is about the same using pen and paper as using vim, at least for me.

The original "By hand" did mean "without AI help/inspiration".


I'd love to read it when it's available or in preview mode! The mental organization that I think develops when writing in a notebook is well worth the trouble of typing it into vim later, especially for fleshing out scenes and redrafting.


Not sure whether to call it mental or physical organization, but there is more crossing out of words and whole sentences, words that I try to fit into small spaces, or arrows to/from outlined areas below with more space. Like:

    +--------------+
    |  abcd abcd a |
    | bcd abcd.--\ |
    |            | |
    |  efgh ijkl | |
    | ijkl m nnn | |
    |            | |
    |      +-------+
    |      | abc a |
    |      | abcd  |
    +------+-------+
A technique that I don't know what to call is the horizontal brace: like ASCII character 123 ({), but turned horizontal, to insert something between two words, above or below the line.

A handicap of the notebook is that you can't find historical data/names/dates, or synonyms, so easily... but as I iterate over the text a few times after the first draft, those gaps can be fixed later while typing it up.


He says he uses Vim, which is on the computer.


He edited his comment to add that detail after I asked


It's wrong.

    3500 in -> go to parents house -> 1000 into bitcoin
                                   ->  100 into speculative coins in bull markets, or going short in bear markets
                                   ->  300 into gold
                                   ->  600 to stocks with dividend growth
                                   ->  400 to real estate or REITs
                                   ->  200 to speculative stocks
                                   ->  500 to your bank account (with some interest)
                                   ->  200 to cash, for those days you go out of your parents home
                                   ->  200 to presents for your parents
If your parents complain, give them the presents money, or part of the bank account or the speculative stuff, to silence them.

Repeat, and reinvest the profits into uncorrelated assets.

Once you reach a balance of ((90 years - your age) x 12 months) * (3500 * N), maybe you can leave your parents' home (not mandatory) and try to race your yearly returns against the *real* inflation. N is a magic number to cover the compound inflation over all those years without penalizing the first years too much.
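
Just to make the arithmetic concrete (age and N picked at random):

    age 30, N = 2:
    (90 - 30) x 12 = 720 months
    720 x (3500 x 2) = 5,040,000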

Maybe next year 3500/month is still OK, but in 30 years it won't be.

If you're over 70, do not follow this advice; take the 3500 and live la vida loca each month. Like in the "latin" song from the country that never ever spoke "Latin".

Have a nice day.

Now seriously: the really important thing when working with budgets is to "see" the "estimate vs real" picture.


Yes, seeing the estimate vs the real thing will depend on how you set up your accounts. I have separate accounts for separate concerns, and automatically transfer the budgeted amounts, so I can see at the end of the month if my estimates are correct, or if I need to change them. But if you only have one bank account for everything, this will become harder.


Reminds me of those monitoring dashboards printing lots of unexpected output from checks which, in turn, don't expect to be unable to interact with their state files on a read-only filesystem, reporting the wrong state (e.g. critical instead of unknown), etc.

I'm more a fan of agents and an HA core for monitoring than of direct push notifications, but still, I used these smartd notifications like 15 years ago on Debian and never noticed this issue (maybe because once the OS disk fails, we cared more about other things than about the smartd notification).
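
For reference (from memory, so check the man page), that old-school setup was basically one line in /etc/smartd.conf, something like:

    # monitor all attributes of the OS disk, mail root on trouble,
    # and send a test mail at startup to prove the path works
    /dev/sda -a -m root -M test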

Those were the years when you could try putting a mechanical disk in the fridge for a few minutes and try again, or, with some Seagate disks, swap the controller board with one from another disk of the same model and recover the data (they used to fail in the electronics more often than in the magnetic platters).


At home/LAN we have LACP, VRRP... I mean, link aggregation and HA needs were solved a long time ago.

With multiple ISPs, or on a complex enough LAN, we can use multiple routing tables + weights too.
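
As an illustration of the weights flavor (addresses made up), with iproute2 it can be as simple as a multipath default route:

    ip route replace default scope global \
        nexthop via 192.168.1.1 dev eth0 weight 2 \
        nexthop via 192.168.2.1 dev eth1 weight 1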

Also, if the ISP at home can do 10 Gbps, 1 Gbps, 300 Mbps, whatever... I want to be able to use that with a single path, so there is no gain in using multiple paths. Occasionally, when I have cable+WiFi connected at the same time, I usually force one of the two; I can't see a reason to prefer using both at the same time.

Maybe the latency thing? I've never had that issue at home, but I could understand that use case: "just use the network segment with the lowest latency to reach $thing".


> Also, if the ISP at home can do 10 Gbps, 1 Gbps, 300 Mbps, whatever... I want to be able to use that with a single path, so there is no gain in using multiple paths. Occasionally, when I have cable+WiFi connected at the same time, I usually force one of the two; I can't see a reason to prefer using both at the same time.

I don't understand why you would want to use them with a single path. The gain would be being able to aggregate them and have individual TCP streams faster than any one link could handle.

Though personally I think the resilience is more appealing. Not having a hard cutover when WiFi degrades as I walk away would be nice.


If my ISP gives me 10 Gbps, I want my PC to have (at least) a 10 Gbps single path to the router.

So, if I already have a 10 Gbps path to the router, I don't want a 300 Mbps flaky wireless path added on my way to the router.

In the context of the parent (home networks), I think most people have two paths... WiFi or RJ45/UTP. And with that multipath setup (WiFi + RJ45; I don't get why other comments are talking about cellular networks "at home"), it's not usual to walk away; right, you could walk as far as the RJ45 cable is long, but...

To keep HA on WiFi while walking around, there are other technologies more battle-tested than MPTCP.


For a long time, enterprise firewalls (and more recently SD-WAN) have allowed load balancing between different links, but unlike MPTCP the traffic of a single TCP connection is not split up. This is in line with the established network-admin wisdom that reordering packets of a TCP connection hurts performance.

https://community.fortinet.com/t5/FortiGate/Technical-Tip-Ho...


Some ISPs in Europe are using MPTCP for people who are too far from the street cabinets. Typically for people in the countryside, with < 50 Mbps. Thanks to a transparent proxy installed in the home gateway, and servers in the ISP's network, they can combine the fixed and cellular networks, and use the fixed one in priority.

MPTCP can also be very interesting for mobility use cases, even when only one network is used at a time, e.g. switching from WiFi to cellular, or between different cellular networks on the train, etc.
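
For anyone who wants to play with it on a recent Linux kernel, a rough sketch (interface name and address are made up):

    # make sure MPTCP is enabled
    sysctl -w net.mptcp.enabled=1
    # allow an extra subflow over the secondary link
    ip mptcp endpoint add 192.168.2.10 dev wwan0 subflow
    # run a plain TCP client over MPTCP via LD_PRELOAD
    mptcpize run curl https://example.org/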


I like to say that Ansible is a CMS (Configuration Management System).

For me, infrastructure as code is another thing: the things you can "touch" with your hands.

How do I get a new physical rack, a new server, a new router with a new IP range, a new switch; how do I plan or segment the network layout(s); how do I connect the cables; how do I add disks; how do I install Debian; etc. As code.

For hosting or cloud providers, IaC (Infrastructure as Code) is easy. They need pools of resources already deployed, waiting for you, and an API.

For home... it doesn't make much sense to have pools of public IP ranges, pools of datacenters, racks, routers and switches with SDN (software-defined networking) capabilities, servers waiting, storage servers waiting, load balancers waiting, and 200 copies of software X/Y waiting...

Maybe the closest thing to IaC you can do at home is a PXE install server.

Or, enter the vIaC concept: Virtual Infrastructure as Code?

That you can do: you can define as code switch VLANs, virtual switches, virtual interfaces, DHCP/NAT/bridges, etc., virtual machines, virtual disks, virtual hosts (in balancers and webservers already running), and of course you can cosplay IaC with containers, qemu, etc.

Ansible excels as a *configuration management system*: I want part X of the operating system (a user, file, package, etc.) to be in this state, and I want to track how it evolves over time, ensure that it stays in that state, or analyze the delta.
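
A trivial example of what I mean by "I want X to be in this state" (names purely illustrative):

    - name: baseline state for all hosts
      hosts: all
      become: true
      tasks:
        - name: admin user exists
          ansible.builtin.user:
            name: operator
            shell: /bin/bash
            state: present
        - name: chrony installed
          ansible.builtin.package:
            name: chrony
            state: present
        - name: chrony enabled and running
          ansible.builtin.service:
            name: chrony
            state: started
            enabled: true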

IaC -> helps with things that were previously done by hand

CMS -> helps with things that nobody can touch with their hands, and that run inside IaC elements


Hey, thanks for bringing this up. I believe it was a miss on my end not to mention that the article was more geared towards infrastructure configuration via code and not infrastructure setup via code.

I have updated the title of the article; the URL remains the same for now. I might update the URL and create a redirect later.


> In short, the CPU is trusted and creates...

Here comes the main bug.

If the data is so confidential, let's say, for example, "the military plans of a country, or ultra-secret technology", I don't know... I could not use a public network, or a provider that mixes it into their infrastructure, and/or that has its own employees.

Recently I got a laptop and saw an option in the BIOS about Intel SGX (enable/disable/auto). After a bit of research, I was terrified: a source of security BUGS, deprecated for Intel Core (but continued for Xeon in the cloud).

We don't need to talk about past and present hardware bugs (or software bugs and attacks), but let's make it clear:

If some kind of data shouldn't go out, do not put it out, in the first place.

If the data should never go out, the network should be physically separated and isolated at physical level, from everything.

Otherwise, I don't know... I don't work with such things, but at least I could use incompatible custom tech; not something so easy for an adversary to get, read, use, study, reverse, fuzz and attack without my knowledge.

Cloud+Enclave "sounds" as secure as those "Third party VPN". Let's say your trust model is thrown to the bin, and start talking from there.


I don't think this tech is targeting TS/SCI data.

> If the data should never go out, the network should be physically separated and isolated at physical level, from everything.

Every company has that kind of data, though, and cybersecurity maximalism is how cybersecurity people get disinvited from architecture discussions. We can't tell users not to turn their computers on, since that invites breaches, and we can't tell IT to air-gap the networks because that slows business to a crawl (and pisses off users, etc.).

Cybersecurity is always about risk management. The risk of doing basically anything has to be balanced against the risk of not doing it at all. Often the cost of not doing it is too high, so the job is to use something like confidential computing (if it makes sense) and then try to mitigate attackers trying to get at it.


> If some kind of data shouldn't go out, do not put it out, in the first place.

This is a really important message. Confidential computing is still maturing among our new-age technologies, but the 'secrecy' of the physical world is a proven primitive we can trust today.


The P in VPN was perverted a long time ago.


Virtually public network

