Putting Teeth in Rackspace Public Cloud (querna.org)
79 points by pquerna on July 2, 2014 | hide | past | favorite | 11 comments


IPA is a fantastic idea. Some questions:

1) I am actually curious how IPA is deployed to the ramdisks. Any pointers?

2) The turnaround time for provisioning is now dependent on download speed, etc. When provisioning batches, this could be a problem, right?

3) Did you use any kind of CDN (for image persistence) when dealing with provisioning in different availability zones?

4) Does IPA also implement SSL/Auth?


Great questions!

1) We boot a CoreOS image over PXE. IPA is built using Docker, exported as a filesystem, and runs in a Linux container via systemd-nspawn. It can take config options via the command line or the kernel command line. The build system is here. [1]
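For reference, a PXE boot entry for this kind of setup could look something like the sketch below. The filenames, the Ironic endpoint URL, and the exact options are illustrative, not Rackspace's actual configuration; `ipa-api-url` is the kernel command-line option IPA reads to find its API endpoint.

```
# pxelinux.cfg sketch -- filenames and URL are illustrative
DEFAULT ipa
LABEL ipa
  KERNEL coreos_production_pxe.vmlinuz
  APPEND initrd=coreos_production_pxe_image.cpio.gz ipa-api-url=http://ironic.example.com:6385
```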

2) It could, yes. Images are downloaded directly from Swift, and both the client and the server have 10-gigabit links. We're also investigating multicast and BitTorrent as alternatives for image distribution.
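Pulling an object out of Swift is a plain authenticated HTTP GET, so a minimal stdlib-only sketch looks like this. The function name, arguments, and URL layout here are illustrative (a pre-issued token is assumed rather than doing Keystone auth), not the actual IPA download code.

```python
import urllib.request

def fetch_image_from_swift(swift_url, container, image, token, dest):
    """Stream an OS image object from Swift to a local file.

    Assumes a pre-issued auth token; the X-Auth-Token header and the
    /container/object URL shape follow the standard Swift object API.
    """
    req = urllib.request.Request(
        f"{swift_url}/{container}/{image}",
        headers={"X-Auth-Token": token},
    )
    with urllib.request.urlopen(req) as resp, open(dest, "wb") as out:
        # Stream in 1 MiB chunks so large images don't sit in memory.
        while chunk := resp.read(1 << 20):
            out.write(chunk)
```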

3) Not sure if you mean agent images or OS images... regardless, at Rackspace, each region runs as its own standalone cloud - so there shouldn't be any communication between data centers when provisioning. Does that answer your question?

4) We're working on implementing client certificate checking for communication between IPA and Ironic. The agents also live on an isolated VLAN that is only accessible by Ironic and Swift.
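Server-side client-certificate checking of the kind described can be sketched with Python's standard `ssl` module; the function name and file paths below are illustrative, and this is not the actual Ironic implementation.

```python
import ssl

def make_mutual_tls_context(ca_path, cert_path, key_path):
    """Build a server-side TLS context that rejects clients lacking a
    certificate signed by the given CA (i.e. mutual TLS)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED   # refuse clients without a valid cert
    ctx.load_verify_locations(ca_path)    # CA that signed the agent certs
    ctx.load_cert_chain(cert_path, key_path)  # the server's own identity
    return ctx
```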

[1] https://github.com/openstack/ironic-python-agent/tree/master...


The ramdisk stuff is pretty cool. There's a generic howto on how to do this with CoreOS here: https://coreos.com/docs/running-coreos/bare-metal/booting-wi... -- look under "Adding a Custom OEM".

Of course you can also look at the code we used to implement it, which jroll linked above.


Very insightful. I enjoyed reading the personal view into OnMetal's journey @ Rackspace.


Thanks! Author here, happy to answer any questions.


What did you have to do at the BIOS & firmware level? It's the first time I've heard of being able to customise that kind of thing.


Do you plan to offer lower-end OnMetal nodes? For example, 16 GB of RAM, a quad-core Xeon E3 processor, and a 256 GB SSD? Presumably you could put several such nodes in one chassis to make that kind of configuration economical.


(Avoiding too many forward-looking statements here.)

As a general principle, we will add more form factors.

For your specific example hardware, that kind of specification could be achieved using the Open Compute Micro Server / Server Card designs instead of the full-on two-processor Windmill designs:

http://www.opencompute.org/assets/motherboardandserverdesign...

As you make smaller servers, things like HA networking become a larger portion of the cost, so it might be more feasible if we dropped the 2x10G bond and went with a single 10G port. Would losing HA networking but getting those kinds of specs be interesting to you?


I wonder if 2x2.5G (if you can get it to work) would be better than 1x10G.


Not directly related to the article, but asking here: what did you use to make those diagrams? It looks like Unicode box-drawing characters, but I was wondering if it was from a tool or handcrafted.




