Rackspace: how long does a resize take?
Note: Each server size has a distinct hourly uptime cost, and the new rate starts when the resize process finishes, so you might pay different rates for the same server within a single billing cycle. Warning: the verification step is your last chance to revert the server resize. With a Linux server, you can use Secure Shell (SSH) to connect to either the public or private IP address of the server and run a few commands to verify the changes.
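For a Linux server, the quickest sanity check is to confirm the CPU count, total memory, and disk capacity the resized instance now exposes. Here is a minimal stand-in sketch in Python (standard library only, Linux-specific paths assumed); the exact commands Rackspace's own guide recommends may differ:

```python
import os
import shutil

# Number of vCPUs the resized server exposes.
print("vCPUs:", os.cpu_count())

# The first line of /proc/meminfo is MemTotal, reported in kB (Linux-specific).
with open("/proc/meminfo") as f:
    mem_kb = int(f.readline().split()[1])
print("RAM (MB):", mem_kb // 1024)

# Total size of the root filesystem.
total, used, free = shutil.disk_usage("/")
print("Disk (GB):", total // 2**30)
```

If the numbers match the new size, confirm the resize; if not, this is the point to revert.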

A sibling to this comment refers to some work that compared small nodes to micro nodes and found the small node to be over two times faster than the micro node for large processing jobs. However, micros don't scale well, and on the high end they under-perform relative to comparable EC2 instances.

The results reported by danudey should not come as a surprise: micro instances were not intended to be used for continuous, compute-intensive work.

I think the reason I was 'surprised' is that I expected 'good for sudden bursts of CPU' to be what it was good for, rather than an actual hard limitation on how it works.

Perhaps this is because I'm not terribly familiar with how EC2 is managed behind the scenes, being a new convert from Rackspace. My post was mostly meant to illustrate that Amazon puts hard limits on how your VM operates, which makes it inconsistent over time under load, vs. Rackspace, which gives you a constant amount of CPU capacity all the time.

We ran some benchmarks for our workload on Micros here, but compared them to other EC2 offerings, not Rackspace.

Huh, that's good to know, thanks. That makes my moving choice easier :). Not to mention you could now have an entire year of EC2 micro and a bunch of other services for free.

Not necessarily. Sometimes flexibility is worth more than having the minimum possible cost per clock cycle.

True, but in theory the advantage with Rackspace is that you can mix and match to balance the cost of your known requirements with the flexibility for your unknowns.

The service-cost benefit of going to a dedicated server is a fair bit, but the management cost goes through the roof. We also prepay to save some expense on the monthly bill. I can easily manage those servers, spin up new ones as requirements change, drop ones I don't need anymore, etc., with very little hassle.

We also have a few dedicated servers (one actually with Xen on it for running smaller instances), and the time and effort to manage instances on it makes it pretty much not worth it for us without a dedicated sysadmin. We need the elasticity. I can't wait 24 hours for new boxes to come.

First of all, there are managed hardware providers who can get you hardware online in less than 4 hours. Second of all, capacity planning can save you from "needing elasticity". Third of all, if you were on machines that gave you reasonable performance…

You discard that as if nobody needed elasticity. Have you ever dealt with a large B2C site? Traffic tends to be heavily seasonal there, and in many other genres, too.

Moreover, the interesting question is not which provider can provision a pile of metal within 4 hours. The interesting question is which of them will take those machines back a day later, without charging for a full month.

A cloud will do that. Will your managed ISP?

Capacity planning. It's a thing. Try it out.

No one denies that capacity planning is hard. There are books written on the subject. The points you make are exactly the reason why you need to do capacity planning and plan for mitigating failures. If you aren't planning on 2x (in fact, more) growth, then I'm confused as to what kind of growth you really expect in your service. If you aren't giving yourself room for expected and unexpected loads, you're doing it wrong.

Add capacity and load testing to your process.

If you work on systems where the occasional spike is only 2x, or where planning for 2x capacity requirements in the future is easy, then you don't have the same problems suhail has. I work in advertising, for example.

We could have 10 partners at 1x, add 10 more and still be barely above 1x. There isn't a pattern to when we get partners from any of these groups, but when we get them they need to go live as quickly as possible, and sourcing and prepping hardware in situations like that isn't feasible.

Nor is it feasible to have hardware on standby for the occasional 7x partner, since you don't know when they are coming along, and they could end up being a 10x partner.

You're using that word; I'm not sure it means what you think it means. Over here in the real world, many applications, and notably web applications, have one thing in common: they change all the time.

Your capacity plan from October might have been amazingly accurate for the software that was deployed and the load signature that was observed then. Sadly, now, in November, we have these two new features that hit the database quite hard. Plus, to add insult to injury, there's another old feature that we had basically written off already that is suddenly gaining immense popularity, and nobody can really tell how far that will go. Sound familiar?

Capacity planning isn't just hard, it is costly.

You have to profile every new version of your app, and every new version of the software you depend on. You have to update your planning models with that data, and then you have to provision extra hardware to handle whatever traffic spikes you think you'll be faced with within your planning window. Most of the time, those resources will be idle, but you will still be paying for them.

Plus in the face of an extraordinary event, you'll be giving users a degraded experience. Using "the cloud" doesn't solve all those problems but your costs can track your needs more closely, and with less up-front investment.

Rather than carefully planning revisions to your infrastructure, you can build a new one, test it, cut over to it, and then ditch the old one. You should still profile your app under load so you can be confident that you can indeed scale up easily, but even that is easier: you can bring up a full-scale version to test for a day and then take it down again.

I'm not against capacity planning, but it has its time and place.

The fundamentals of capacity planning do not change based on the magnitude of your data growth.

Why would they? We're mostly talking about looking at your data growth curve and extrapolating points in the future. Why would that become impossible just because the curve is steep?
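For what it's worth, the extrapolation exercise itself is easy to sketch. Assuming roughly exponential growth, you can fit the average month-over-month growth rate on a log scale and project it forward; the figures below are invented purely for illustration:

```python
import math

# Invented monthly load figures, purely for illustration.
requests_per_day = [1.0e6, 1.3e6, 1.7e6, 2.2e6, 2.9e6, 3.8e6]

# Average month-over-month growth rate on the log scale.
rates = [math.log(b / a) for a, b in zip(requests_per_day, requests_per_day[1:])]
avg_rate = sum(rates) / len(rates)

# Project three months past the last observation.
projected = requests_per_day[-1] * math.exp(avg_rate * 3)
print(f"Projected load in three months: {projected:,.0f} requests/day")
```

The steepness of the curve doesn't break the math; what breaks, as the replies below argue, is the economics of provisioning for the projection.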

Because we're not made of gold and we're a startup.

If you weren't paying such an enormous premium for your hardware, you'd have a lot more cash. On a per-dollar basis, you're paying several times the price for computing power on the cloud, depending on which resource you look at (CPU, memory, disk IOPS, etc.).

Do the benchmarks.

It's easy to say "plan in advance for growth", but when there is a lot of variance in your growth, this becomes a problem. You will often find yourself overspending for unused capacity or struggling to meet new capacity. If your growth is a smooth line, then yes, you can say it is easy to figure out.

But not everyone's growth follows such a simple line. The problem is not that it is hard; the math can be done, the necessary capacity can be calculated, servers can be ordered. The problem is that buying 3x the number of servers or data centers that you need as "baseline", to handle the spikes, is a staggering expense.

If they weren't paying a multiple for their power (the "cloud tax"), it wouldn't be so bad.

Yup, planning is hard when the next guy that registers can double your load, but dedicateds give you a lot of bang for your buck, and you can still use VPSs (or cloud stuff, of course) to scale up in a hurry.

This is important to Mixpanel, as websites all over the world are sending us data. A CDN is probably the easiest service provider to switch. We've tried Panther, Voxel, Cloudfront and use Edgecast.

But not physical servers, which is really what I am referring to. Rackspace Cloud has data centers in Texas and Illinois. However, they don't let you choose; your account is assigned to one data center at the time of creation.

I spoke with Rackspace at Interop a few weeks ago, and they told me they are working on expanding to additional data centers, including an international data center, very soon, and on offering a choice.

You can "choose" insofar as you can email support and they can manually change which data center your next VM, and all subsequent ones, will be built in.

That is not the story I got when I contacted support and asked if I could deploy a VM specifically to the Chicago cloud.

Rackspace's support told me this was not possible and I'd have to set up an entirely new account in order to deploy to that data center.

Well, I guess that's one way of doing it. I'd also add that their billing software can't keep things straight. I've had a couple of servers that I spun up for a demo at a meetup that somehow ended up with the same name; this prevented me from deleting them, and it tells me that their control panel has concurrency issues, since you shouldn't be able to create two servers with the same name.

I didn't notice the issue until two months later (I thought I had successfully deleted the servers), when all of a sudden I received a huge bill that contained hundreds of hours of usage for each instance, for that month alone.

Turns out their software had failed to bill me the previous month, so I didn't notice any change. Their response to my ticket about not being able to delete the servers was to tell me the steps I had to take to fix it (renaming the servers). I really wish that when you file a ticket for stuff like that, they'd actually act on it instead of just telling you how to fix it and expecting you to do it yourself.

One potential reason not to move: security. A good friend deep in the security community once told me, offhand, that EC2 was "owned". It is just too convenient to walk away from. Once it matters, I can move the critical stuff to dedicated servers. I'm only noting that, for certain critical services, Amazon themselves do not appear willing to take the risk.

I've worked with Amazon Web Services security people in the past, and while they're not perfect (nobody is), I have always had the impression that they take security seriously. AWS has many very large customers, including the US government and companies handling HIPAA-restricted data; based on the assumption that Amazon employees don't want to be thrown in jail for 10 years, I think it's safe to say that if EC2 is "0wned" as you claim, it's certainly not well known within Amazon.

For what it's worth, accidentally or even negligently violating HIPAA is fantastically unlikely to get you charged criminally.

Yeah, but "0wning" EC2 would most certainly get you charged criminally under a number of laws.

Colin was implying that negligent management of EC2 could leave Amazon employees criminally liable. Obviously anybody who "0wned" EC2 is already a criminal.

It's extremely complex and service quality is paramount, so it takes a while to make it all happen.

My friend from Amazon works in the supply-chain side of things, and he said he really wants to use it, but everything has to be encrypted and some stuff is off limits. I take it you work on the retail side of things? I'd be interested to hear any more details that you can share.

I'm not sure where the confusion lies, but I'm guessing you see "security concerns" as equivalent to "knowledge of ownership"?

It seems to me those are entirely different things, as one can be concerned about a potential threat without knowing whether it is real or not. But I do not work in the security community myself and may be using language sloppily. I would be much obliged if you could show me where the crux of the confusion lies.

To paraphrase what you said: "I didn't take [statement A] seriously until [statement B]."

It's difficult to read it any other way. Rewording your original comment: "It was only when I heard that engineers at Amazon were forbidden from using AWS that I took seriously the comment that EC2 was owned."

Thanks for the reply. There is a connection, of course, but it is not that Amazon knows. Statement B is evidence in the sense that it suggests Amazon does not believe security is sufficiently iron-clad around EC2, which would allow for statement A to be possible in the first place. I honestly did not expect my comment to create such angst. I recognize that the wording was a bit confusing, but it seems the main thing people are upset about is that I am spreading FUD. Of course that would be quite inappropriate if it was completely unfounded, but I have stated exactly where my concerns came from, so it seems perfectly legit to me.

It was one of the few excuses left for me to procrastinate, so at least I should be more productive. Note that this does not address the "nightmare scenario" that Xen, the virtual machine software, is itself vulnerable.

At this point, you can access the file system to recover or troubleshoot any issues. The exact process varies based on the operating system. You can exit rescue mode at any time, returning your VM to its original state with any changes you made preserved. Rescue mode is exited automatically after 24 hours.
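As a concrete sketch of the rescue-mode workflow on a Linux guest: the original system disk typically appears as a secondary block device that you mount by hand. The device name below is an assumption; confirm it with lsblk before mounting.

```python
import subprocess

# Run inside the rescue environment (as root). /dev/xvdb1 is an assumed
# device name for the original system disk; verify with `lsblk` first.
subprocess.run(["mount", "/dev/xvdb1", "/mnt"], check=True)

# ...inspect or repair files under /mnt, e.g. /mnt/etc/fstab...

subprocess.run(["umount", "/mnt"], check=True)
```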

The reboot action performs a soft or hard reboot of the server. A hard reboot power cycles your server, which performs an immediate shutdown and restart. The console action opens a Java web terminal emulator window with a login prompt to the server over a secure HTTPS connection.

It might be necessary to install or update Java in the web browser used to access the console, or to switch web browsers, to ensure proper operation. The console is a backup means of accessing a server and should not be the primary method of access.
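Both actions are also available programmatically. Assuming the OpenStack-compatible generation of the Cloud Servers API, a hard reboot is a POST of a small action document; the endpoint, tenant, server ID, and token below are placeholders:

```python
import json
import urllib.request

# Placeholders: substitute your region endpoint, tenant, server ID, and token.
URL = "https://servers.api.rackspacecloud.com/v2/<tenant>/servers/<id>/action"
HEADERS = {"X-Auth-Token": "<auth-token>", "Content-Type": "application/json"}

# "HARD" power cycles the instance; "SOFT" requests a graceful restart.
body = json.dumps({"reboot": {"type": "HARD"}}).encode()
req = urllib.request.Request(URL, data=body, headers=HEADERS, method="POST")
urllib.request.urlopen(req)  # the API answers 202 Accepted on success
```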

It is important to size VMs as close as possible to what you need because you can easily add resources like CPU, memory, and disk, but removing those resources requires a restart of the VM that results in downtime.

Before resizing a virtual machine, note that over-allocating resources can negatively impact the performance of the VM and of other VMs within your environment. Increasing vRAM consumes the same amount of datastore space, which, in rare cases, can lead to downtime.
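Tying this back to the resize flow at the top of the page: under the same OpenStack-compatible API assumption as the reboot sketch above, a resize is a two-step action, first requesting the new flavor and then, after verification, confirming or reverting it. All identifiers are placeholders:

```python
import json
import urllib.request

URL = "https://servers.api.rackspacecloud.com/v2/<tenant>/servers/<id>/action"
HEADERS = {"X-Auth-Token": "<auth-token>", "Content-Type": "application/json"}

def action(body):
    """POST one action document to the server's action endpoint."""
    data = json.dumps(body).encode()
    req = urllib.request.Request(URL, data=data, headers=HEADERS, method="POST")
    return urllib.request.urlopen(req)

action({"resize": {"flavorRef": "<new-flavor-id>"}})  # start the resize
# ...wait for the server to reach VERIFY_RESIZE, run checks over SSH...
action({"confirmResize": None})  # or action({"revertResize": None}) to roll back
```

This is also where the billing note above kicks in: the new hourly rate starts once the resize completes.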


