Microsoft Announces Nano Server (technet.com)
297 points by mwadams on April 8, 2015 | 116 comments


Microsoft has a real hole in their stack in my opinion...

So on Linux you have SSH (for shell) and X11 Forwarding/VNC for GUI remoting. On Windows you have RDP for GUI and nothing for remote shell.

Now, I know what you're going to say: "WMI." But WMI was never designed for use over the internet. You have to forward two fixed ports and a dynamic range (shudder). Plus it isn't security hardened, either by design or through trial by fire.

VPN, you say? In theory, yes. But the reality on the ground is that most SMBs are still using RDP or SSH directly to manage Windows/Linux servers, and if Windows wants to compete (and they are dropping RDP) they need an "answer" to SSH.

Essentially they need to take whatever WMI is, wrap it into a secure protocol, and bind it to just a single port, then harden the heck out of whatever process directly runs on that port (before you hit the WMI interface itself).

Or alternatively write an SSH server (literally), and after login redirect input to a Powershell process. Nobody would complain about that (plus free SFTP support!).

PS - I'm totally going to get attacked by "VPN purists" here. But really, everyone knows that SSH and RDP are extremely common for SMBs/private individuals. Let's quit pretending that they're not, and support the client that actually exists, not the client you wish existed.


You are ignoring Windows Remote Management, Microsoft's implementation of the WBEM standard protocol for remote management of devices.

https://msdn.microsoft.com/en-us/library/aa384426(v=vs.85).a...

"Windows Remote Management (WinRM) is the Microsoft implementation of WS-Management Protocol, a standard Simple Object Access Protocol (SOAP)-based, firewall-friendly protocol that allows hardware and operating systems, from different vendors, to interoperate. The WS-Management protocol specification provides a common way for systems to access and exchange management information across an IT infrastructure. WinRM and Intelligent Platform Management Interface (IPMI), along with the Event Collector are components of the Windows Hardware Management features."


Have you ever tried using wsman? It is not fun.

And, being SOAP-based, it's certainly nothing like a command-line.

But most of all, it's not fun.


I've used winrm for some VM config scripts. The ps script would clone a VM from a base image, boot it, then remote into it to config network and other things.

It was a bit arcane to set up but mainly it was slow, like, really slow. All the SOAP serialization is terrible for performance but it is flexible and it did work.


I'm not an ssh expert (I live in the MS world), but powershell remote sessions (PSSession) seem like what you're talking about here... it's like running a powershell window on a remote machine.


I'm not too confident powershell is really good enough to compete with the ancient unix shell. I've tried using it to automate windows machines, and it's certainly not a smooth experience.

Part of that is the lack of a history of automation - basic things to do with processes and I/O are inconsistent. The basics are clearly unix-inspired (standard streams, processes, redirection), but many windows tools use different conventions to exchange data and control, and powershell adds a real object model on top.

In principle powershell's richer object model is a boon to automation - it's much easier to avoid mistakes when you don't need to massage your data through lots of slightly different text-based hacks - but the object model just isn't pervasive enough. Sometimes you still need plain text/binary streams, and the transition isn't great.

More fundamentally, the object model isn't well thought out. It's built on .NET, but that's clearly a single-process world view, and for an inter-process management tool that poses problems. Even if sending objects between processes worked smoothly, objects don't compose or reflect as fluidly as text. It's hard to put your finger on, but for example, you can use regexes to do fairly fancy pattern matching on strings, and the equivalent for objects just isn't there in powershell. As a result, things work great if you're doing something the underlying processes "want" you to do - but if you're hacking around a limitation, or trying something novel, it's much more painful than in bash.
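For contrast, the text-stream side of that argument can be sketched with a toy pipeline (the data and the "process list" here are hypothetical, not from the comment): any program that emits lines can be filtered, transformed, and recombined with the same generic tools, no shared object model required.

```shell
# Hypothetical line-oriented data: process name and memory in MB.
# The same awk/sort machinery works on *any* such output; the producing
# and consuming tools only have to agree on "lines of text".
printf 'nginx 1024\nsshd 12\nbash 8\n' |
  awk '$2 > 10 { print $1 }' |   # keep names whose second field > 10
  sort
# prints: nginx, then sshd
```

That genericity is exactly what an object pipeline struggles to match when the objects don't compose.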

I think it's also worth mentioning some practical problems: Powershell is really slow and heavy. You can cheaply pipe and juggle data in a unix shell, but by doing that in powershell I've seen machines grind to a halt. A fresh powershell instance has a working set of 100MB on my machine; bash: 5MB. That's a problem. Startup time, similarly, is slow. This isn't a problem if you're working purely interactively - no human is going to notice an extra 10ms here or there - but in scripting you can easily start so many processes that the difference becomes very noticeable.

Powershell isn't terrible, but it just doesn't work as well as the unix shell, despite all the warts the latter has. Maybe some day everything will speak powershell, and it'll be faster, smaller, and have a better story on pattern matching, but right now it's relatively ineffective compared to a plain unix shell, in my (windows-centric) experience.


I recently tried to automate windows machines as well. You are spot on. Dealing with obscure objects requires you to make mock calls and reflect on the underlying types. With bash, you don't have types, just text, which lets you think outside the box.

Powershell reeks of development with C# on Windows. You have specific function calls that do very niche things, such as converting an application to a virtual application in IIS. The problem comes up when you are trying to do something novel, or just outside the scope of what Microsoft thought up. Bash allows for those things by having separate and reusable components that each do their one job well.

Get-Object and Set-Object don't work universally as advertised so you can't just write boilerplate and expect things to work. Instead, you have to google "how do I set up a virtual application?" or "how do I change bindings to an existing virtual application?" You'll get the job done, but you won't feel like you learned anything. I'm glad you wrote this up. It's been something that's been bothering me, but I was having difficulty verbalizing it.


I bet that C# would actually make a better scripting language than powershell - you can use most of .net easily (especially generics), and async support is a real boon in scripting. It's mostly missing a decent library for practical process-glue stuff - the .NET `Process` is verbose, incomplete, and a minefield.


PowerShell is far more usable than sh or even bash as a scripting language. It's more like if perl or python were integrated into a nice little command processor, since you can use all the .net libs (though not without some irritating effort). The problem is that the cmd host is garbage, the utilities aren't burned into people's brains like the unix ones are, and they have weird Windows-specific issues, like not being able to fork off a subprocess.


A scripting language that cannot fork off a subprocess does not sound particularly usable to me. Certainly not more usable than bash, which I find eminently usable.

bash gets a bad rap because there's a ton of horrible shell scripts floating around. For some reason, when people write bash, they decide that since it's not a "real" programming language they don't need to take it seriously. They UPPERCASE all variables (so ugly, and it stems from a lack of understanding - you're supposed to uppercase environment variables only), fail to use functions, fail to use variable scoping, invariably choose the worst available comparison tool ("["), and fail to use appropriate utilities (which is particularly egregious, considering that the utilities - coreutils, moreutils, etc. - are the API of shell programming).

This is rather unfortunate. Well written bash is efficient, and, dare I say it, can be elegant. Of course, there are nicer shells around - zsh comes to mind - but bash is fine.
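A hypothetical sketch of what those conventions look like in practice (the function and its task are made up for illustration, not taken from the thread): lowercase non-environment variables, functions with local scoping, `[[ ]]` rather than `[`, and letting the utilities do the real work instead of looping in bash.

```shell
#!/usr/bin/env bash
# Illustrative example (assumes bash, and GNU/BSD find's -size suffixes):
# count files larger than a given size under a directory.

count_large_files() {
  # local keeps these out of the global namespace; lowercase because
  # they are not environment variables.
  local dir=$1 limit=$2
  [[ -d $dir ]] || { echo "not a directory: $dir" >&2; return 1; }
  # find and wc are the "API": no manual loop needed.
  find "$dir" -type f -size +"$limit" | wc -l
}

# Usage: number of files over 1 MiB under the current directory.
count_large_files . 1M
```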

For the sake of political correctness, I shouldn't say this, but I will say it regardless - Windows Powershell does not come close to the feature set or expressiveness that bash+the Linux userland can have in the hands of someone who knows what they're doing.


> A scripting language that can not fork off a subprocess does not sound particularly usable to me.

See Powershell's Start-Job[0], Get-Job[1], and Remove-Job[2].

> Windows Powershell does not come close to the feature set or expressiveness that bash+the Linux userland can have in the hands of someone who knows what they're doing.

Well, nobody can argue with just under fifty years of back-catalogue. That's obviously a massive advantage and head start for unix, as is the expertise that comes along with it.

As to the "expressiveness," I'd need to know more specifically what you're referring to to address that point.

[0] https://technet.microsoft.com/en-us/library/hh849698.aspx

[1] https://technet.microsoft.com/en-us/library/hh849693.aspx

[2] https://technet.microsoft.com/en-us/library/hh849742.aspx


> fail to make use of appropriate utilities (which is particularly egregious, considering the fact that the utilities (coreutils, moreutils, etc.) are the API of shell programming).

Some people avoid a lot of the things in the coreutils/moreutils toolbox because they need their shell scripts to be portable between *nix types, and not everything in there is POSIX compliant, even if you set POSIXLY_CORRECT, or even when it is, the syntax is different. I could go through the trouble of having the script check what type of system it's running on and then select the proper flags and syntax, or I could just write something that will work everywhere that might not take advantage of some "feature" found in coreutils.
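A concrete instance of that portability point (a hypothetical example, not from the comment): GNU sed accepts `sed -i file` for in-place edits, BSD sed wants `sed -i '' file`, and POSIX sed has no `-i` at all. The portable route is a temp file plus mv, which works everywhere.

```shell
#!/bin/sh
# Portable in-place edit: write to a temp file, then move it over the
# original, instead of relying on the non-POSIX `sed -i`.
f=$(mktemp)
printf 'foo\nbar\n' > "$f"

tmp=$(mktemp)
sed 's/foo/baz/' "$f" > "$tmp" && mv "$tmp" "$f"

cat "$f"   # prints: baz, then bar
rm -f "$f"
```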


> bash gets a bad rap, because there's a ton of horrible shell scripts floating around

I remember when this was the most common defense of VB :D


Agreed on most of your points... PowerShell is by far the most powerful shell I've ever worked with on any platform... that said, I don't like it. It's very verbose.

As a JS guy, these days most of my scripts run via node/iojs and are as platform agnostic as possible. More and more I know it only needs to run in linux, or is guiding dockerization. Even then, bash scripts tend to be easier to reason about.


You could always write JScript. Ha ha ha!

http://blog.idleworx.com/2010/01/windows-scripting-host-and-...

I've seen a few open-source projects actually use this. (IIRC, it was WiX)


Actually, I did write a few JScript WSH scripts back in the day, well before PowerShell was even an option. It works relatively well... I even shoved some initial logic into a few of my node scripts for windows that would, if run from WSH, relaunch themselves in node.

The biggest issue with JScript in general is probably the lack of open options for certain classes of COM controls - that, and COM collections are a beast (the enumeration wrapper you need in JScript is horrible). JScript was my preference just the same, back with Classic ASP scripts: I could re-use my logic client- and server-side, which usually worked out pretty well... though almost everybody used VBScript, which usually meant both engines were loaded on the server in a project. The other issue was that runtime initialization on every request limited you somewhat, though for a couple hundred concurrent users around 1999-2002, it wasn't bad.


SSH access to PowerShell would be a game changer IMHO for Microsoft. Also, have you seen this? http://www.powershellserver.com/download/


I don't know if it exists the other way around or not, but it would be really nice if Windows had a built in SSH client as well. I can't tell you how many times I've been working on a client's Windows Server machine and needed to remote into a Linux server, but neither I nor they had access to download Putty.


Agreed... I generally wind up with git extensions everywhere which includes most of the gnu tools including ssh. The only down side is that ssh via windows command shell, even in conemu doesn't do ansi/drawing properly... never really investigated as I just try to avoid it all.



Are you aware that it is already possible to log in remotely to a powershell session without any add-on on windows server? Just use Enter-PSSession -ComputerName


PowerShell Server "gives users the power to securely manage Windows remotely through PowerShell from any standard SSH client"


Or you can just run OpenSSH (there are a few nicely packaged versions of it available for Windows) and use PowerShell as the shell. :-)


You can't. At least you couldn't the last time I tried. Powershell hangs in SSH.


I got your point. Manage windows boxes from non-windows stations.


You can have remote access to powershell...


http://www.powershellserver.com/

MS should really include something like this in Windows.

Hell, I (or you) should find the time and write it ourselves; node.js has great TLS support and a Windows port.


> We are improving remote manageability via PowerShell with Desired State Configuration as well as remote file transfer, remote script authoring and remote debugging

Is PowerShell not what you're looking for, or is it not implemented similar to SSH?


Remote Powershell exists, but you don't have a great deal of control over the Powershell interface (like you would with an SSH daemon), if you want to "opt-in" users one by one for remote Powershell, well, you cannot.

Also, setting it up to work securely over the internet is a touch complicated: http://blogs.technet.com/b/askpfeplat/archive/2012/09/17/wan...

It is a cool party trick that it can work via a web-browser and IIS. But if that's what it takes to make Powershell remoting secure without a VPN then it isn't workable.


Just getting it to work with a shell and no web-browser access is a lot simpler than that.

A few lines. See my reply at https://news.ycombinator.com/item?id=9343060


Powershell currently only runs on Windows. One can't use powershell remoting from a Mac OS or *nix machine. Until someone adds non-windows support for Powershell, they need to have a built-in ssh server that opens powershell as the shell. Btw, even if that does happen, it's still difficult to use PS. Any errors that occur in the remote environment get displayed as XML blobs on the client :(


PowerShell web access might be a good alternative: http://blogs.technet.com/b/askperf/archive/2012/11/05/window...


You don't need this. You just need Enter-PSSession [ipaddress], which allows you to interact with it like ssh. You need to enable powershell remoting first though.


Sort of. Enter-PSSession is backed by http or https [1] and is not as good as ssh. You can interact with PowerShell, but you can't interact with command-line programs launched from PS.

[1] Grep for "http" on this page: http://ss64.com/ps/enter-pssession.html


Most stuff I use has PS cmdlets, so it's not really an issue for me.


Yes, but only if both PCs are in the same domain. Getting PowerShell remoting to work in other environments requires a lot of configuration.


That's not true. I had it working over the internet, separate domains with public/private certs fairly easily.


I'd be interested in how you set it up, can you maybe share it somewhere?

Guides like https://wprogramming.wordpress.com/2011/07/11/remote-pssessi... don't seem "fairly easy" to me (that's why I said it requires a lot more configuration compared to Enter-PSSession when both parties are in the same domain).


There's a lot of waffle in that. If you actually look at the commands he's asking you to enter, it isn't that much.

It's essentially, Enable-PSRemoting

And commands 1 to 3 in the following guide, after Enable-PSRemoting. Try to ignore the waffle. You probably do want to disable HTTP, though.

http://www.sirchristian.net/blog/2013/03/11/using-powershell...

Trying to get SSH to work with certs can involve just as much or more stuff than that.

Maybe I'll write a super simple guide just listing the bare commands, since the documentation/tutorials for powershell seem non-existent despite it being pretty powerful.


If only that was cross platform like SSH is.



My real concern is that figuring out so many things without the GUI is currently a real PITA. Like trying to make IIS act as a reverse proxy. It's a huge pain, and most of the docs are telling you to click various things. Compared to setting up nginx, ouch. Compounding things is the fact that MS took a cover-our-ass policy to installing things, so even after installing IIS, you've gotta go explicitly install "dynamic" compression as an extra. Then to really make it work, there's a file buried in system32 you've gotta edit, because you can't configure those settings by default at a site level. Overall, it's just way more complicated.

I know they're working on it, and PS is a great shell overall, even if it is missing a lot of simple tools by default.

But MS is way late to the party. OpenVZ was in heavy use what, a decade ago? And MS did nothing to respond, really. But hey, maybe they'll pull it off. I don't have a fundamental objection to Windows.


Windows has always been super-visual. For many tasks there isn't even a CLI way to do it short of editing the registry.


The registry is not very discoverable. However, once you've discovered a setting, a lot of customization can be had from running as little as a single registry script.

As a trivial example, remapping caps-lock to something sane was easier on the last few windows releases I tried (just run a registry script) than on any of the 'nixes I used in the same time period.
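For reference, the caps-lock remap the parent mentions uses the well-known Scancode Map value. A registry script like this (merge it and reboot) maps Caps Lock, scancode 0x3A, to Left Ctrl, scancode 0x1D; the first eight bytes are a fixed header, the next four are the mapping count plus one:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,1d,00,3a,00,00,00,00,00
```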


IIS management is done via WMI which will work for this version of Windows Server: https://technet.microsoft.com/en-us/library/jj635848.aspx


Right, it technically exists, and for some common scenarios, some people may get it to work. Step outside that, and it seems like a world of pain. Like, for instance, IIS's half-baked reverse proxy, ARR (Application Request Routing). Despite being aimed at a very narrow set of applications and overall being cumbersome and annoying, all of the docs I've found on it are screenshot-and-clickhere based.

MS has a ton of work to do if they wanna make a non-GUI Windows viable, and it seems like if they don't have a lot of that work done upfront, they're going to end up providing a very unappealing first taste to customers. I hope they succeed, just that it seems to be a much larger problem than just cutting out parts of Windows.


> I don't have a fundamental objection to Windows

I do. I have a fundamental objection to Microsoft too. I do not know why people forget that MS actually tried very hard to make the internet just a special button in MS Word, and how they fought against anything resembling an open communication protocol. This is a sin, the kind of sin that shall not be forgiven. I respect Mr Gates being a generous guy in his old age, but he led a company that did, and still does, a lot of harm to almost every industry in the world.

Be reminded that Windows is still, by very far, the dominant OS. Almost everyone on earth working at a desk uses MS Windows, Word, IE and Excel. This is not a sane distribution of power.

What would we say if 98% of the oil and gas stations on earth were operated by the same company? We are living in that Orwellian world, where one company drives 98% of the computers and their major software in the world.

Why are we so fast to forget that?


They call it Nano, I call it a server.

Happy to see Microsoft is evolving. Let's hope it will be for everyone's best.


> Let's hope it will be for everyone's best.

Anything making MS better cannot be for everyone's best: they have a de facto monopoly, and of the most dangerous kind.

That's why, despite all the bad that can be said about Google and Android, at the very least they had enough weight (and talent, and luck) to push MS out of mobile. Which was the greatest news for our industry since the Internet.

Also, I do not agree with Paul Graham (http://www.paulgraham.com/microsoft.html): sadly, MS is not dead yet. It cannot be dead for real with 9x% of install on PCs.


Even if the implementation turns out to be lacking, the goal is good:

> As customers adopt modern applications and next-generation cloud technologies, they need an OS that delivers speed, agility and lower resource consumption.

I know the kneejerk reply to this is, "Yeah, Linux!", but I can think of at least 2 Windows-based applications in my company which would benefit from this. And in general, yay competition.


92% fewer security bulletins and 80% fewer reboots. One can hope that's not just a promise.


I believe what they are saying is that they looked at historical security updates and saw what percentage of them would have applied. Because it has less code, there are security updates that wouldn't have been relevant.

That seems like a reasonable approach to me, if that was their methodology.


That's based on real world data. Basically, they look at the reboots caused by 32-bit mode and DWM / GUI and video drivers, as well as security bulletins. Any bulletin that patched something that isn't included in Nano was removed, thus they had 92% fewer based on what's included in Nano.


That is not a promise.


> 80% fewer reboots

Still many times more frequent than with Linux. The 2008 R2 development system we have needs to be rebooted every two weeks, on average, due to Windows Updates that require a reboot. Updates come in almost every day for CentOS, and I only need to reboot once every couple of months for new kernels.

Can't wait for the no-reboot kernel patches planned for a future Linux kernel.


> The 2008 R2 development system we have needs to be rebooted every two weeks, on average, due to Windows Updates that require a reboot.

Uhh they literally don't release updates that often, so you'll have to explain that one to us...

> Updates come in almost every day for CentOS and I only need to reboot once ever couple of months for new kernels.

Windows doesn't support hotpatching, Linux does, it really is as simple as that. Linux has more people working on it than the Windows kernel does, and is just a more advanced kernel in general at this point.

But for what Microsoft has to work with, Windows Server is darn stable and requires very few restarts (approx. 4/year in my experience with 2008 and 2008 R2).


> Windows doesn't support hotpatching, Linux does, it really is as simple as that. Linux has more people working on it than the Windows kernel does, and is just a more advanced kernel in general at this point.

Actually, hotpatching is easier on Linux because its virtual memory subsystem is technically inferior to Windows's.

The Windows kernel is technically superior in numerous areas: virtual memory, thread synchronization primitives, and I/O (specifically, overlapped I/O), just to name a few.


I'm glad somebody pointed that out. It's infuriating listening to people who've only ever worked with Linux at a low level preach about how it is the most modern kernel bar none. The facts are really quite different. Dave Cutler, of Windows NT fame, is no slouch. He is every bit as good as Linus, just without the petty attitude problems his far more popular rival exhibits.


Cutler is a f'n genius. He was 47 when Bill Gates called him up in the late 80s and poached him from DEC. He was one of the core architects of VMS and brought all his A-team with him to Microsoft. NT was engineered to be a high performance OS from day one, and it is evident throughout the architecture of the kernel and executive.

Linus was 22 and implemented just enough system calls that he could run bash, mocking up a UNIX-like system - which, again, VMS ran rings around in its day.

Linux's success has absolutely nothing to do with technical superiority.


Where's the best place to get details on the internals of the Windows kernel regarding things like that?


Honestly, the best set of docs I read were the design documents (about 30 .doc files) released as part of the Windows Research Kernel, which, uh, you may be able to find if you do some creative googling ;-)

From those I was able to appreciate a lot of the why behind the object model, I/O request packets (IRPs), handles, asynchronous procedure calls (APCs), memory section objects... basically all the individual concepts that have no equivalent in UNIX.

....and once you understand those primitives, you can start to appreciate the layered driver model, new thread pool stuff in Vista+, registered I/O in Windows 8+, etc.

Books... my recommendation... Windows Internals is the best place to start: http://www.amazon.com/Windows-Internals-Part-Developer-Refer...

Oh, and basically any article Russinovich has written, check out the list on the articles section on his wiki page: http://en.wikipedia.org/wiki/Mark_Russinovich

(E.g. http://windowsitpro.com/systems-management/nt-vsunix-one-sub...)

Also... my interest in all of this stuff has grown exponentially since I started seeing real world results from PyParallel: https://speakerdeck.com/trent/pyparallel-pycon-2015-language...


@trentnelson Don't forget I/O Completion Ports (IOCP) :)


Good grief how did I not even mention them! They are literally my favorite thing ever. I've got like, 30 slides on them here: https://speakerdeck.com/trent/pyparallel-how-we-removed-the-...

;-)


I've heard really good things about the "Inside..." series: http://www.amazon.com/Inside-Windows-Microsoft-Programming-S...


>Uhh they literally don't release updates that often, so you'll have to explain that one to us...

Microsoft most certainly releases reboot updates more often than the once a month patch Tuesday. I'm not sure if they apply to the OP's specific system, Win2008 R2, but I am rebooting my various computers more than once a month.

My system's update history shows two fixes around Mar 27 (KB2976978 and KB3048778) and one of those required a reboot. I remember because I had to do it. This pattern goes back in time quite far - bunch of fixes for patch Tuesday (requires a reboot), and a handful of off-cycle recommended/optional fixes, which often require a reboot too.

If you apply updates as they appear (and take the optional and recommended ones) you will DEFINITELY be rebooting more than once a month.


You're conflating Windows 8 and Windows Server, the two updates you specifically noted are Windows 8 updates.

The first one is a Windows 10 pre-checker (KB2976978), so zero chance of Windows Server getting it, and the second one (KB3048778) isn't a critical update (solves a minor Explorer bug) so would have to be manually installed.

You likely have "Give me Recommended updates the way I receive important updates" checked in the Windows Update GPO/applet and also just seem to think there is no difference on how Windows Server and Windows Desktop are patched (which there is, massively).

A Server 2012 box I happen to have open, has restart events that are corresponding with patch Tuesdays but months are often skipped. You'd likely see a little more if you weren't using Server Core, but nothing like the number you're describing. The thing doesn't even install updates between monthly pushes.

I'm seeing approx. 4-5 a year. Which is still much higher than Linux, but nothing like Windows Desktop.


I'm not conflating anything - I even said I wasn't sure if they applied to Win2008, did you miss reading that? I was challenging the general claim that releases that require reboots don't occur more often than once a month.

Now if you were specifically addressing this for Win2008 R2, well I'll give you that. I was too lazy to spin that system up and dig through its update history.

The GENERAL assumption that the only time you need to reboot is once a month for patch tuesday isn't really accurate. Now for the SPECIFIC case of Win2008 R2, many years after its release, perhaps that is true.

And FWIW, thanks for your concern about my systems, but it is irrelevant and incorrect because you do not understand my particular operating environment. One side task I do is build VM's for everybody else in my group to use, so while I do not have auto-updates checked, I wind up getting all updates for every language and OS we are interested in.

As for "massive" differences in how server and desktops are patched - what would those be precisely? Perhaps you see fewer reboots than I do because you have less installed and fewer windows features enabled, not because of your supposed deep understanding on the "massive" differences between patching a server and desktop.


It supports hotpatching, but there are nuances - as with everything, what is being updated has to support it, otherwise a reboot will be required.


> The 2008 R2 development system we have needs to be rebooted every two weeks, on average, due to Windows Updates [..]

Windows Updates are published once a month (except for very critical fixes), so I wonder how that is possible in your environment?


What is the use-case for this? Modern Linux distros have all the tooling, libraries, utilities already - is the idea to use this for hosting .Net stuff?


Running stuff that requires Windows.


I think the point is that "Windows" is pretty thin with this server. If there's no GUI available that sounds like there will be no USER32 (good riddance btw); and if there's no USER32, many "Windows" programs won't work.


Well obviously - this just seems to be creeping into the space where Linux is an incumbent anyway and I can't see anyone who's running common OSS stacks wanting to use this in favor of Linux based platforms.


If you have servers that need to run Windows, even if they can't run on Nano, you can have fat Windows Server boxes and Nano boxes and configure them using the same tools, rather than having a mixed Windows/Linux shop?


With microservice architecture being one of the big buzzwords at the moment, I see this as a great way to separate individual microservices (with some clever routing in front). Using Nano would make these services isolated and easy to update without affecting each other. Continuous delivery FTW :-)


I'm interested in what the licensing will be like, compared to "larger" editions.


IMO the licensing needs to be free and very simple. For me, this has always been one of the HUGE benefits of linux. If I have an idea I can spin up a full server stack (OS, SQL, HTTPD, etc.) for $0 (outside of hardware/hosting).

With linux there is no friction you need to overcome in determining if the cost of trying something out is worth the financial side.

The elastic hosting options take away a lot of this, but there is still (to me at least) this mental barrier to Windows in knowing that I might run into licensing costs or administration issues before I determine if the project itself is worth said cost.


I hope it will cost a lot less. It might be interesting for the C# people to use non-Azure clouds more if the costs are more in line with Linux-based systems.


With MVC moving to a Linux compatible infrastructure in the next version, it may be too little too late, and C# people might just use Linux for real.

Literally the only C# "thing" which won't run on Linux is Visual Studio itself. Heck we might even see a Linux Powershell version here in the next few years.


Not quite - WPF and winforms are still out in the cold, along with some other things.

But for server-side code, .net core + asp.net will get the job done.


We never had any issues deploying .NET applications in Amazon.


ASP.NET vNext is targeting Linux. My guess is those who will run on other clouds will likely use Linux/container OSes instead.


If Microsoft would break out the old Interix (http://en.wikipedia.org/wiki/Interix) code and throw it on here I'd be ecstatic. (Interix was pretty great, back in the day. I think MSFT made a mistake deprecating it. A lot of FLOSS tools built on Interix just fine, back in the '99 timeframe, and if it hadn't been put out to pasture it could have definitely served as a sell into the POSIX world.)


I've set up cygwin with an OpenSSH server on Windows before (to run remote MinGW compiles). Echoing Bill Gates, "In a weak sense, it [NT] is a form of Unix."

That doesn't really cover remote management, though.


I'm guessing this is going to be very similar to what Microsoft has in mind for the Raspberry Pi?



I think it leans more toward CoreOS and Kubernetes.


Yeah, I believe their RPi version is a lightweight consumer-facing version of Windows 8, not intended for use as a server.


No, there's a lot of confusion from bad messaging with the Pi 2 and Windows 10 announcement. By all reports Win 10 on the Pi 2 is a system to run services that have no GUI, desktop, or other UI.


Wow, no UI? That radically changes my expectations for Windows+Pi 2, it'll be interesting to see how it plays out.


Is that actually true? MS has demoed Windows IoT with a UI on minimal devices already. Both ARM and x86. I don't know if the dragonboard 410c is in a different class than the pi2 however.

Take a look at Don Box's presentation at WinHEC from last month. Device talk starts at ~40 mins, device demo starts at ~44 mins.

http://channel9.msdn.com/Events/WinHEC/2015/Developing-for-t...


See Ben from the Raspberry Pi foundation's response here: https://news.ycombinator.com/item?id=8983801 Sadly there really hasn't been any update or good info on what Win 10 on the Pi 2 will be yet.


A new shell-only Microsoft operating system in 2015 makes me happy for reasons I can't identify.


The main question is pricing. If it's cheap or free (who knows?), it will be an option for hosting micro-services on a myriad of VPSes. And probably a resurrection of interest in the ASP.NET framework.


Oh. Microsoft reimplemented Unix, at last. Congratulations :-)


Umm... Haven't you ever heard of Xenix?


Yes. Xenix was not a reimplementation, but a port.


https://github.com/andres-erbsen/dename uses a less flexible federated consensus to build a namecoin-like system without proof of work (or stake, or anything).


I think you're on the wrong thread, maybe you're looking for this: https://news.ycombinator.com/item?id=9341687


Have they embraced and extended?

I can't tell from a quick read.


    --92 percent fewer critical bulletins
    --80 percent fewer reboots
Compared to what? A barebones Linux install (say, a Docker instance)?

I can easily shut off a service in Linux. And I can turn it on. The only reboot needed is for the Kernel itself, and that is soon changing.

What I'm reading here is that MS is most of the way to Linux.


net start [service name]

net stop [service name]


Actually I tend to use sc start/stop for that, but .. should be the same.

Important utilities:

- findstr ('grep')

- sc (control services)

- taskkill (kill/control processes)

- netsh (everything network)

- wevtutil (windows event log)


I figure there's probably a reason Microsoft uses net for this on their help page.


The difference is in synchronicity. SC START should properly be followed by SC QUERY in order to check that the service actually started. NET START won't return until the service starts, errors, or times out.

> "SC sends the control to the service and then returns to the command prompt. This typically results in SC START returning the service in a state of START_PENDING. NET START will wait for the service it is starting to come to a fully started state before it returns control at the command prompt."

From http://cbfive.com/command-line-service-management-net-v-sc/
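That asynchronous behavior can be papered over by polling SC QUERY until the service reports RUNNING, which is roughly what NET START does for you. A minimal sketch in Python (the service name and timeout are placeholders, and the STATE-line parsing assumes the usual `sc query` output format; the subprocess calls obviously only work on Windows):

```python
import re
import subprocess
import time

def parse_sc_state(sc_query_output):
    """Extract the textual state (e.g. 'RUNNING', 'START_PENDING')
    from the STATE line of `sc query <name>` output."""
    m = re.search(r"STATE\s*:\s*\d+\s+(\w+)", sc_query_output)
    return m.group(1) if m else None

def start_and_wait(service, timeout=30.0, poll=0.5):
    """Issue `sc start` (returns immediately, typically leaving the
    service in START_PENDING), then poll `sc query` until RUNNING."""
    subprocess.run(["sc", "start", service], check=True)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        out = subprocess.run(["sc", "query", service],
                             capture_output=True, text=True).stdout
        if parse_sc_state(out) == "RUNNING":
            return True
        time.sleep(poll)
    return False
```

Only the parsing is portable, but it shows why SC alone isn't enough for scripts that need to know the service actually came up.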


Which does absolutely nothing on Patch Tuesday updates that require a reboot.


From this announcement:

http://azure.microsoft.com/blog/2015/04/08/microsoft-unveils...

I read:

> Nano Server provides just the components you need – nothing else

And that's exactly what I do not trust MS with.

With a well-documented history of backdoor-ridden bloatware products, it is not quite the company whose non-opensource releases I'd trust to have "just what I need and nothing else".

Though I must admit that the opensource train they are riding lately allows me to look at them from a very new perspective. But still, MS, if you are listening: if it is not opensource, I do not trust you!


That's fine. Then this release (and frankly, any MS product) is not for you.

But some enterprises use MS software, and this is a much needed option.

ObVote: I downvoted; your complaint about imaginary internet points has no bearing on the discussion at hand.


> Then this release (and frankly, any MS product) is not for you.

I've certainly re-considered dotNet after it got open sourced recently! It (finally) seemed like a reasonable proposition -- as I explain in my post BTW.

But indeed, Windows, and especially Windows-closed-source-on-a-server, is not my cup of tea. And I don't understand how it could be anyone's cup of tea.

> But some enterprises use MS software

Sure, and this is a start-up forum. :)

Anyway, I upvoted your post for taking the trouble to explain your downvote. Thanks.


Microsoft is super friendly to startups. I can't say details, but they've given us a ton of support. BizSpark Plus ($5K a month free Azure for a year) is great. They've also got marketing help available.

While Azure is overpriced compared to Google Cloud (and maybe compared to AWS - dunno cause AWS pricing is convoluted), having them comp it is really nice. Azure is also a lot more full service than Google's stuff, if you need more than IaaS.


Just curious--I thought all three platforms were committed to essentially matching each other's prices. Has that not been true in your experience? Or is it something specific that's driving up your perceived price of Azure versus the other options?


No, that's trivially untrue, just go try the pricing calculators.

They make a big deal about being the same price on storage and bandwidth. We talked to MS and determined that even after discounts for committing to Azure, the VMs themselves are 50% more than on Google. Without a commit, the price is 200% of Google's.

And even on storage, Azure isn't competitive, even if the price is the same. Their SSD options that are available now are laughable. (A temp SSD drive that erases on reboot - mostly useless.) Their currently-in-preview SSD option is ... awkward and just plain weird. You have to use special VM instance types, then create special storage accounts, then select from 3 presets in terms of space/perf. Azure actually suggests software striping them together to get more perf.

Google's SSD offering is straightforward and just works and is quite fast. Want more perf? Just get a bigger disk and they scale up the IOPS, no problem, no fuss, no special VM or anything needed.

I'm sort of an MS fanboy, and I really dislike and distrust Google. But after using GCE a little bit, wow, for IaaS I wouldn't ever choose anything else. Everything seems just simpler, easier, cheaper, faster. (Machines boot super quick, the portal is simple/fast, and they have an SSH client in browser as a kicker.)


Please down voter(s), leave a note.. :)


Even my "request for down vote explanation" gets down voted! Hilarious...


"Resist commenting about being downvoted. It never does any good, and it makes boring reading."

https://news.ycombinator.com/newsguidelines.html


Ah thanks. Didn't know that.



