> No details about the planned solar panel array or its potential power output are known at this time
Taking the 171-acre figure, we can at least compute the limits of what it could do.
171 acres is about 700,000 square meters. At a solar incidence of about 1 kW per square meter and an efficiency of 23%, that's 161 MW peak power; an average of 4.7 sun hours per day then gives about 750 MWh of usable electricity every day.
This calculation ignores shading effects and assumes that the panels will be stationary rather than tracking; you could probably deduct another 50% to account for those effects, so say 375 MWh give or take and 80 MW peak. Even these figures are likely still very high. Think of them as the maximum amount of solar power you could extract at that location from incident sunlight, before further accounting for conversion losses (solar panels produce DC, so you'll need inverters if you want AC power; some data center equipment can run on 48V DC, but you'd still need a conversion step for that) and so on.
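As a sanity check on the arithmetic above, here is a back-of-envelope sketch. The acre conversion, irradiance, efficiency, and derating factor are the assumptions from the comment, not measured data:

```python
ACRE_M2 = 4046.86                 # square meters per acre
area_m2 = 171 * ACRE_M2           # ~692,000 m^2; the comment rounds to 700,000
irradiance_kw_per_m2 = 1.0        # assumed peak solar incidence
efficiency = 0.23                 # assumed panel efficiency
sun_hours_per_day = 4.7           # assumed average equivalent full-sun hours

peak_mw = area_m2 * irradiance_kw_per_m2 * efficiency / 1000
daily_mwh = peak_mw * sun_hours_per_day
derated_mwh = daily_mwh * 0.5     # the 50% fudge factor for shading, fixed tilt, etc.

print(round(peak_mw), round(daily_mwh), round(derated_mwh))  # 159 748 374
```

With the exact acre conversion the numbers come out a touch lower than the rounded 161 MW / 750 MWh in the comment, but they agree to within rounding.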
I wonder how big a chunk of the total power requirements for a datacenter that size will be covered by this PV system. To be 100% effective it would have to be sized at least 6 times larger than the total power consumption of the DC, using the grid as a storage system.
WolframAlpha puts an average US household's yearly consumption at 12,000 kWh, or 32.8 kWh per day. The upper bound in the parent comment with the 50% fudge factor, 375 MWh per day, would power (375 MWh / 32.8 kWh) ≈ 11,430 households day-for-day, or about 22,860 households day-for-day without the fudge factor.
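The household arithmetic checks out the same way (a sketch using only the figures quoted above):

```python
daily_supply_kwh = 375_000       # 375 MWh/day, the derated upper bound from the parent
household_kwh_per_day = 32.8     # 12,000 kWh/yr / 365 days, per WolframAlpha

households = daily_supply_kwh / household_kwh_per_day
print(round(households))         # 11433, i.e. the ~11,430 in the comment
```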
You can feed into, and take from, the grid, but it's not a battery.
The grid is like an open water pipe with holes in it: you can put water back in, or take water out, but you can't store water in it. The power stations pump in at one end, and the users take out all along the pipe.
Very large numbers of Amazon servers are used for something other than cranking out HTTP pages. Expect a lot of them to be crunching numbers for bioinformatics problems, physics simulations and so on. That's why there is a CUDA-enabled instance.
Rendering web pages is actually one of the worst use cases for Amazon from a bang-for-the-buck perspective, especially when you factor in bandwidth.
When people are building 30,000-core compute clusters [1] on EC2, presumably with zero publicly available web servers, I'd be very interested in any methodology that provides reasonable estimates of revenue based on public web servers.
Nearly 100 upvotes for a page that fails its stated goal and which spends hundreds of lines to do something you could do with one system call (sys_write) and a string constant?
The stated goal is not the real point. The real point is to learn about how programs are executed on real systems. Unfortunately, the second half which would (hopefully) have completed the example does not exist.
What you linked to is great, but it uses a different approach to explain the subject. That approach presents the correct thing to do, and explains some of it. The benefit of this approach is that it's short and sweet, but the downside is it's easy to miss subtle points. For example, what you linked to never explains why you have to call sys_exit. Someone reading this would know that's the right thing to do, but they may never really understand why that's so.
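One way to make the sys_exit point concrete without writing assembly: Python's os.write and os._exit are thin wrappers over the same two syscalls a minimal hello-world needs. This is only a sketch of the idea; in real assembly, omitting sys_exit means execution falls off the end of your code into whatever bytes happen to follow, typically crashing with a segfault.

```python
import subprocess
import sys

# A child process that does the two syscalls a minimal hello-world needs:
# write(1, buf, len) to print, then _exit(0) to stop cleanly.
child_src = (
    "import os\n"
    "os.write(1, b'hello, world\\n')\n"  # sys_write on fd 1 (stdout)
    "os._exit(0)\n"                      # sys_exit; skip this in asm and you crash
)
result = subprocess.run([sys.executable, "-c", child_src], capture_output=True)
print(result.stdout)      # b'hello, world\n'
print(result.returncode)  # 0
```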
An alternative approach - which the posted article does - is to try the obvious but wrong thing, then explain why it fails. Fix that problem, then try the next obvious but wrong thing, and keep iterating. The problem with this approach is that it can be a long process, particularly if you just want to know how to do something and are less interested in the why. But the benefit is that by seeing how things fail, and then seeing the fix for them, the reader can gain a deeper understanding of what's going on.
In general terms, one is more of a reference, and the other is more of a lesson. I like the try, fail, fix iteration for explaining things. I've used it many times when teaching. I find that it matches well with what we would actually do on our own, and for that reason, tends to stick with people better than simply saying "this is how to do it."
I'd be more inclined to sympathize if it had actually worked. As it stands it is a perfect example of how little knowledge of what goes on under the hood is present among the current generation of programmers. That's sad, because even if you never use that knowledge in your 'day job', I think that such knowledge does make you a better programmer.
If you write a blog post about something like this, at least finish the damn thing rather than leaving a bunch of obviously wrong snippets lying around to confuse whoever lands on that page.
There are plenty of good pages on introductory assembler on the web; this isn't one of them, and I'm really surprised to see it this high on the homepage. Maybe it shows how much the HN crowd would like a little insight into what actually powers their computers; the error is mine in assuming that such knowledge would be commonplace here.
I upvoted it because if the reader reads through it, they will leave with more understanding of the system stack than they started with. As someone who has been deeply involved with teaching a systems class, that's near-and-dear to my heart. I think it succeeds in that goal. I'm far less concerned that it doesn't actually emit "hello, world".
That it fails at being an introduction to assembly is, I think, missing the point. It doesn't try to be. It's just trying to demystify some of the system stack.
As for HN itself, I've known for a while now that there's a pretty wide range of people here. While I think the percentage of people with almost no systems experience is higher than it was a few years ago, the number wasn't zero a few years ago, either.
You're right about the current generation of programmers, but there might be more to it than "kids are lazy these days". Those programmers may not know much about systems programming and assembly, but they know about Java/C#, Python/Ruby/Perl/PHP, JavaScript, CSS, Unix command line tools, Windows, maybe a GUI toolkit or two, etc. The software world has been building layers of abstraction over the years to accelerate development of richer applications, and it's only natural that we find more people working at higher levels of abstraction today than before.
This entire comments thread has a higher than expected level of subtle/downplayed humor. I appreciate subtle humor, but I don't know if Hacker News is the place for it.
You see, that's the problem. His mission was to promote free software, instead of the mission the university hired him for: sending the graduates off on their next journey.
You don't hire a missionary with a vision to pat yourself or your audience on the back.
That's like expecting Sylvester Stallone to do higher mathematics or Mother Teresa to do an arms deal for you.
Some people are what they are and their environment/audience will have to accept them as they are.
The problem lies squarely with the person that hired him; the abstracts of the speeches listed should have adequately explained what they were going to get. That's exactly what that rider exists for in the first place: to avoid misunderstandings like this.
I highly doubt RMS could even tailor his speech to the occasion; he must know it by heart by now, except for the Q&A part.
What I found interesting on reading the 'rider' is that he still refers to the GNU operating system as though it is in daily use. I've yet to see a HURD-based system do anything useful in production, but half the world wide web seems to run on Linux these days. Of course Linux is 'merely a kernel'.
But if you write free software then you also give away the right to name that software; after all, a fork is under no obligation to be named after the parent. So RMS insisting on calling Linux GNU/Linux looks to be against the self-imposed freedoms.
> What I found interesting on reading the 'rider' is that he still refers to the GNU operating system as though it is in daily use. I've yet to see a HURD-based system do anything useful in production, but half the world wide web seems to run on Linux these days. Of course Linux is 'merely a kernel'.
Considering that glib, glibc, gcc, emacs, the vast majority of the Unix utilities, bash, grub, autoconf, make, readline, gzip, tar, screen, wget, and Gnome are all GNU projects[0], I would say that GNU is most definitely in daily use. The Linux kernel isn't much use without the software on top of it, and it's nothing at all without the compiler that turns it into machine code.
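A rough way to see this on a live system: most GNU builds identify themselves in their --version banner. This sketch assumes a Unix-like box with some of these tools on PATH, and just reports whatever banner each one prints:

```python
import shutil
import subprocess

banners = {}
for tool in ["bash", "grep", "tar", "make"]:
    if shutil.which(tool) is None:
        continue  # tool not installed; skip it
    out = subprocess.run([tool, "--version"], capture_output=True, text=True)
    lines = out.stdout.splitlines()
    banners[tool] = lines[0] if lines else "(no version banner)"

for tool, banner in sorted(banners.items()):
    # On a typical Linux distro these banners start with or mention "GNU".
    print(f"{tool}: {banner}")
```

On a BSD or macOS box the same loop shows non-GNU banners (e.g. bsdtar), which is rather the point of the thread.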
I wasn't referring to the name GNU/Linux; I was specifically refuting the comment I quoted. The funny thing is that the comment effectively justifies RMS's insistence on using GNU/Linux. Because so many refer to the entire distribution as Linux, a lot of people fail to realize the hugely-important role that GNU software plays in Linux systems.
Linux is still useful without KDE, but the functionality provided by GNU is critical and would require a large effort to replace.
> the functionality provided by GNU is critical and would require a large effort to replace
This isn't really true. You could just grab your userspace from a BSD, or Plan9Port. Clang and LLVM do well enough to replace GCC on most important architectures.
GNU is, essentially, a clone of Unix except for the kernel, which is supplied by Linux. Calling your computer a GNU system is about as accurate as calling it a Unix system. Calling it a KDE computer is also technically accurate.
You could also form stacks, like KDE/GNU/Linux, or go all the way and just draw a directed graph of the major software dependencies. This isn't a serious proposal, but I would be kind of pleased if someone actually did this.
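That directed-graph idea is easy to sketch. The edges below are illustrative guesses ("A depends on B" drawn as A -> B), not an actual dependency audit:

```python
# Toy dependency graph of the stacks mentioned above (hypothetical edges).
deps = {
    "KDE": ["GNU", "Linux"],
    "GNOME": ["GNU", "Linux"],
    "GNU": ["Linux"],  # userland runs on top of the kernel
    "Linux": [],
}

def topo(graph):
    """Return the nodes ordered so each appears after its dependencies."""
    seen, order = set(), []
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in graph[node]:
            visit(dep)
        order.append(node)
    for node in graph:
        visit(node)
    return order

print(topo(deps))  # ['Linux', 'GNU', 'KDE', 'GNOME']
```

The topological order puts the kernel first, which is one way of reading RMS's naming argument back out of the graph.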
But that's taken from all the software available in the repos, not the software that's actually installed on people's systems. For instance, the pie chart shows slices for both Gnome and KDE. How many people have both installed on their systems? Or neither?
Now compare that to how many people have none of the GNU software on their systems.
I would imagine (purely anecdotally) that if you take just installed software, the GNU percentage increases, as it's generally installed on most systems. I would also imagine that in the Linux/BSD world the number of people running without GNU software is in the very low single digits.
It's really difficult to determine the relative importance of one software project over another at this level when they're reliant on each other. The Linux kernel needs GNU as much as GNU needs the kernel (at the moment, anyway). You can run the OS without KDE (hence not calling it KDE/Linux).
I can see Stallman's point: he set out to create an operating system called GNU and created almost all of the parts required, which were then used by someone else to create a kernel, which was then packaged up under a different name.
Personally I think you can call GNU/Linux whatever you want, as the licence it's released under has nothing in it saying you need to mention GNU in the name. If you package it up you can call it Fred for all I care, and I will refer to it as Fred.
Actually, in the BSD/UNIX world people generally don't use the GNU tools, except maybe for gcc on the BSDs (commercial Unices have their own compilers). We believe GNU tools are of very poor quality.
Are you going to argue that it is? Is GRUB merely a "tool" for booting your computer? Is libc a "tool" for exposing OS-provided functionality to software? Is GNOME a GUI "tool"?
Ok clearly you think the rest of your argument is obvious, but I'm going to be dense and say "yes", those are tools. Just as the Linux kernel is a tool for managing the various resources of the system. Maybe it's the word "merely" that's tripping things up, but I don't see how any of these things fail to fit the word "tool".
It would be easy to argue that most software fits the "tool" label. As far as I know, rms objects to describing GNU as a set of "programming tools" or "development tools" as he (reasonably, in this case, I think) finds those labels unfair. GNU software is required for any operating system using the Linux kernel, as far as I'm aware, and those systems are not limited to programming or development.
Just to offer a counter perspective, I'm sure that RMS honestly believes that the most important thing for your future is the use and advocacy of free software, and that being the case, there would be little point discussing anything else, no? Software increasingly pervades everything we depend on in life. The ownership and control of our futures rests to a large extent on who owns and controls the software we are using. The recent trend has been toward closed platforms behind opaque service interfaces, which is a problem of increasing difficulty for the free software movement.
So, 3 years ago you'd have called it revolutionary.
Technology moves so fast these days that if you don't set the bar for your product sufficiently high, you won't be able to get happy customers even if you give it away.
While I think it's a bit extreme to say they didn't have much choice, it seems to me that making data more open is becoming somewhat trendy, and this sort of peer pressure may well have played a role in the Royal Society's decision. Anyway, the inertia of the status quo shouldn't be underestimated, and although things are really not that fair, it will probably take quite a bit of time before we see widespread democratization of data.