
I didn't realize AWS didn't have this already! I've been working exclusively in GCP for the past few years, and I assumed the two platforms were at parity. Is AWS starting to lag behind in new features?


They both have features the other doesn't. If you count up the number of features, AWS is way ahead. But yes, GCP has features AWS doesn't have.


I have less experience with GCP than AWS, but counting the number of features/services on AWS is definitely misleading. So many of the AWS services are effectively abandoned and missing critical features that make them completely unfit for production use, and it can be really hard to find this out before you run into the problems yourself. And then out of nowhere, 3 years later, they'll pick up development on an old thing again, finally fix that critical issue, and make it much better. So I'm not sure I'd call that being way ahead.

My impression is that the stuff that GCP does have tends to be more capable and production ready (although I'm curious if others would disagree with that).


What AWS services do you consider "abandoned"?

The only one I can think of that might be in this category is SimpleDB. AWS recommends you use DynamoDB instead of SimpleDB for new applications.

However, I wouldn't call SimpleDB abandoned. SimpleDB continues to work as it has for many years.

One thing that AWS is amazingly good at is not breaking existing customers and their applications. Did you build an application 10 years ago based on SimpleDB? All the APIs you used 10 years ago are still there and available to your application today. It's really quite amazing how dedicated AWS is to not breaking existing customers.


Abandoned as in not being updated, and therefore falling behind all its competitors.

Fair enough that services often continue to work the way they always have, but I'm thinking more of the case where you're actively developing something, and the AWS service has major bugs or is missing significant features that all the alternatives have which makes your job harder building on top of it.

Elasticsearch was a famous example: it went years without any updates, during which time upstream Elasticsearch itself improved dramatically. Then they picked it back up again once Elastic's own hosted product got good enough that it was a much better alternative.

Another example is ECS, which was out in the wild for a couple of years with a very limited feature set while GKE was completely eating its lunch and upstream Kubernetes got a lot of major improvements. Then AWS released EKS, which sort of seemed to replace ECS for a while, but they have gone back and forth for a bit, with ECS having some features that EKS didn't (e.g. Fargate, for a long time) and vice versa.

There are all sorts of other bugs I've stumbled across in their support forums, too many to recall. Often years-old threads that have never really been addressed.

Edit: another one that comes to mind is CloudWatch, and the entire monitoring and logging stack. It's very basic, not really a viable alternative to something like Splunk. For such an important thing, I always kind of expected it to get better, but it just didn't: you had to export your own logs/events to a separate ES cluster, or to Redshift via S3, or something else. Whereas GCP Stackdriver is a much better solution out of the box.


Your argument seems to be that AWS isn't prioritizing the things you think are most important. I think that's fair. But that doesn't mean the products are abandoned or not being updated.


Yup, today I was looking at some examples of AWS CloudFormation templates and saw version 2010-09-09. I instantly thought the webpage I was reading must be old, so I opened the docs. In the docs, I see: "The latest template format version is 2010-09-09 and is currently the only valid value."

The last version was 10 years ago, and this is one of the core AWS services, so it's definitely not abandoned.
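For the curious, the format version is just a literal field inside the template, pinned to the same date for a decade now. A minimal sketch (the bucket resource is a made-up example), built here as a Python dict just to show where the field sits:

```python
import json

# Minimal CloudFormation template as a Python dict. "2010-09-09" is still
# the only valid AWSTemplateFormatVersion -- it versions the template
# grammar, not the CloudFormation service.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Hypothetical example: a single S3 bucket",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
        }
    },
}

body = json.dumps(template, indent=2)
print(body)
```

Any template you write today still declares that same date, which is why it says nothing about whether the service is maintained.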


That's just the template format version, not the version of the service. Actual features and resources are being continuously added to CloudFormation. See release history: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGui...

Unless I'm confusing your comment.


That's just protocol versioning - it means they haven't made backwards-incompatible breaking changes to the template protocol, not that the features haven't been updated in 10 years.


Please do tell us what changes they should make to their template format such that it would require a version update.


In some circles, this is seen as a mark of stability.


Amazon CloudSearch hasn't had any serious updates in 6 years [1]. It is Apache Solr behind the scenes, but with a proprietary API on top. I suspect everyone with serious search requirements has moved to Elasticsearch or another product by now.

[1] https://docs.aws.amazon.com/cloudsearch/latest/developerguid...


I've worked with Cognito extensively, and it's effectively abandoned. This issue sums up some of the common frustrations: https://github.com/aws-amplify/amplify-js/issues/3495


Data Pipeline is abandoned - it is also miserable to work with.

Otherwise yeah, there might be too many ways to do the same thing, but AWS generally has a stellar track record of supporting everything else they have made.


Data Pipeline has been superseded by Step Functions.


Did AWS announce it, or are you announcing it?

I'm searching just now over *.amazon.com and nothing comes up. The product pages also bear zero indication of that.


No one “announced it”. Why would they? If Data Pipeline meets your needs, it still works. But, if you keep track of AWS and where all of the new features are being added, it’s clear that Step Functions is getting all of the new and shiny and can do everything Data Pipeline can do.

https://aws.amazon.com/blogs/compute/implementing-dynamic-et...
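To give a feel for what the linked post is doing, here's a toy Amazon States Language definition for a two-step ETL flow. All the state names, the Lambda ARN, and the Glue job name are invented for illustration:

```python
import json

# Toy Amazon States Language (ASL) definition for a two-step ETL flow,
# the kind of job people used to model in Data Pipeline. The ARN and
# the Glue job name are made up.
state_machine = {
    "Comment": "Hypothetical ETL flow, Data Pipeline style",
    "StartAt": "ExtractAndTransform",
    "States": {
        "ExtractAndTransform": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:etl-step",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3}],
            "Next": "Load",
        },
        "Load": {
            "Type": "Task",
            # Service integration: run a Glue job and wait for it to finish.
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "hypothetical-load-job"},
            "End": True,
        },
    },
}

definition = json.dumps(state_machine, indent=2)
print(definition)
```

You'd pass that JSON as the `definition` when creating the state machine; scheduling, retries, and branching all live in it, which covers most of what Data Pipeline did.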


SimpleDB is also not officially deprecated, but Amazon never works on it either: https://aws.amazon.com/simpledb/

Amazon are like the anti-Google: rather than killing products, they're happy to have them limp along forever.


CloudFormation. The UI is lacking critical information about the resources that CF operates on and it's been like this for years.


Exactly. It is beyond me how Serverless.com can provide a better experience. I can choose between something slightly broken (hello, existing S3 resources) and the 10x verbosity of JSON programming. I counted the number of lines for a simple Lambda function: 1000 lines of JSON. I don't think I could use CF without Serverless.


CloudFormation isn't abandoned and is continuously updated as AWS adds services and options in existing services. The AWS Management Console UI for CloudFormation might be largely abandoned, but the AWS Management Console component for X, in general, seems often to be only distantly related to X in terms of support; the main focus often seems to be on using the service via the API, SDKs, or (for setting up resources) CloudFormation, instead of the console UI.


CloudFormation is... fine until you realize it's missing that resource that you need. CF is notorious for lagging behind products in terms of support. It's incredibly painful.


CloudFormation is the best tool for AWS infra-as-code. Nothing else is even close. It rolls back, leaves no leftovers on delete, does fast parallel ops, and gives full control of resource properties. Yes, some things could be better, but the above is priceless. 10 years of experience here.


Terraform is much, much better.

I used CloudFormation for a few years and ran up against a lot of its limitations. Maximum file size. Maximum resource count. Automatic rollbacks of THE WHOLE STACK on any subsystem failure, like an unavailable instance. It has no templating built in, so doing ten things with minor differences means copying and pasting 10x or deploying your own templating solution to generate CF. And I did this back in the ROLLBACK_FAILED days, when if you did something it couldn't automatically undo, you were stuck: no way to roll forward or backward, just abandon in place. The "Continue rollback" button (or whatever it's called) that came out 5-10 years ago was huge.

Contrast that with Terraform: all of your points addressed, plus all of mine are non-problems. It lets you do some awesome things that are no doubt inspired by CF, but it takes them to an entirely new level.

There are a few downsides too, but nothing compared to CF. You can build too-complex things much more easily in TF, so you have to be careful not to go overboard. It also bugs me that I can't spec a high-level thing like "compute with 4 cores and 32GB RAM, repeat 10x and put behind a load balancer with DNS name foo" and use it anywhere; I have to say google_compute or azure_load_balancer or aws_dns. That was the biggest disappointment: coming out of CF, hearing how awesome this thing was, and then realizing it still left me vendor-locked.
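To make the "deploy your own templating solution" point concrete, here's a rough sketch of the kind of generator script people ended up writing. The queue resources are hypothetical; Terraform expresses the same repetition with a single count/for_each:

```python
import json

# Stamp out ten near-identical SQS queues as CloudFormation resources --
# the boilerplate CF makes you copy-paste or script yourself, since the
# template language has no loops of its own.
def make_queue_resources(n):
    resources = {}
    for i in range(n):
        resources[f"WorkQueue{i}"] = {
            "Type": "AWS::SQS::Queue",
            "Properties": {"QueueName": f"work-queue-{i}"},
        }
    return resources

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": make_queue_resources(10),
}
print(json.dumps(template, indent=2))
```

Once you're generating CF from a script anyway, you've basically rebuilt a worse Terraform, which was the point.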


> ran up against a lot of its limitations. Maximum file size. Maximum resource count. Automatic rollbacks of THE WHOLE STACK

I can totally see where this is coming from :) CFN is best used as separate templates/stacks for parts of the solution, not the whole solution rolled into a single template. Reusable is the key word here, and Parameters. Let me try a city example. Have separate templates for a school, a fire department, and a house block. Build all Detroit schools using the same school.yml template, just supplying different parameters for each. Don't copy-paste code from school.yml into detroit.yml. In fact, there should be no detroit.yml; leave the city level to a CI/CD job.
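A sketch of that "one template, many stacks" pattern with boto3. All the names here are invented, and the create_stack call is only shown, not executed:

```python
# "One template, many stacks": deploy the same school.yml repeatedly
# with different Parameters per stack. Stack and parameter names are
# invented for illustration.

def to_cfn_parameters(params):
    # Convert a plain dict to CloudFormation's Parameters list shape.
    return [{"ParameterKey": k, "ParameterValue": v} for k, v in params.items()]

def deploy_school(stack_name, params, template_body):
    import boto3  # imported here so the sketch loads without AWS credentials
    cfn = boto3.client("cloudformation")
    return cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Parameters=to_cfn_parameters(params),
    )

# e.g. deploy_school("detroit-school-12",
#                    {"SchoolName": "Northside", "Capacity": "400"},
#                    open("school.yml").read())
```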

> I have to say google_compute or azure_load_balancer or aws_dns

Multicloud? It rarely makes sense. All you get is triple the infra code, triple the monitoring tools, triple the devops competence requirements. A properly designed solution with HA and AZ/regional redundancy is sufficient on a single cloud platform.


I'm not suggesting multicloud as in using more than one at a time; I mean porting to another cloud, or defining some OSS infrastructure in $magic_terraform that any cloud user could deploy without translation.

It would be limited to the least common denominator just because of the vagueness of the objects it would support, but the example I suggested would be incredibly useful.


> It would be limited to least-common denominator ..

That's exactly why it is not useful. You design your solution using all the features of the LB; a solution using only the basic ones will be meh.


> Multicloud? It rarely makes sense.

It tends to be badly implemented, but it makes a ton of sense as a strategy.


> Cloudformation is the best tool for aws infra-as-a-code.

Talk about damning with faint praise.

> It rolls back,

Sometimes.

> no leftovers on delete,

Well, on successful delete, maybe. DELETE_FAILED with partial stack deletion is a thing (and a thing CF could fairly trivially avoid in some common cases by simply querying resources for deletion protection.)

> full control of resource properties

Except the AWS resources it doesn't support, and properties of supported resources it doesn't support, because CF always lags the underlying services and their APIs.


Until a rollback that was itself triggered by a failure fails on a complex stack, and prod is hosed for hours while you try to figure out how to unfuck it.

Maybe CF has gotten better, but I'm not sure since I totally jumped ship for GCP.


Not sure what you're referring to. Wild guess: CFN nested stacks. Yeah, I learned the hard way to never use those. It's a feature that should not exist.


Oh yeah, it lags and it is painful, but it is continuously moving forward (chasing a moving target), even if the UI isn't (it's probably been more than a year since I last looked at the UI).


That is historical. The feature gaps are extremely small today.


I hadn't used AWS in about 4 years, but I recently started using it for a side project. I needed to process a big dataset and wanted to use PySpark, so I gave EMR a try. I was impressed by how easy it seemed to create clusters in the UI and then run jobs using an IPython notebook.

That is, until I realized that nothing worked. You had to use a version that was 5 versions behind the current one, even though the documentation explicitly said not to use that version. Even then, not everything worked out of the box.


We had to do a security review of EMR recently. I’m amazed it works at all. Hop on one of the cluster nodes and take a look at the processes running.


It was probably an issue specific to notebooks.


AWS OpsWorks Stacks is several major versions behind on Chef.


Redshift, for instance, still uses the Postgres 8 wire protocol.


Redshift is definitely not abandoned. AWS just announced a bunch of improvements and new features at re:Invent in the last 2 weeks [0].

It's not obvious to me why supporting a newer wire format would be a high priority for AWS. I think I would rather they work on things like native JSON/semi-structured data support [1] than a new wire format.

[0]: https://aws.amazon.com/redshift/whats-new/

[1]: https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-re...


Amazon didn't touch Redshift for years and years, until suddenly Snowflake became a thing and then they remembered they had Redshift.


Redshift has constantly been improved. One of Amazon’s major initiatives was to get off of Oracle and improving Redshift was a major part of that initiative.


You will eventually run into issues when client interfaces start to drop support for old versions of the wire format to simplify their codebase.


This is already the case: sqlx now has to add specific support for Redshift; it's not enough to use the Postgres driver as is.


I agree. We usually limit ourselves to a small subset of AWS features. The newer services tend to be less reliable, reasonably enough. I think part of it is the APIs being too complicated. Have you tried to use the Kinesis low-level API? Then you know what I am talking about. OK, let's use the high-level API. Well, it is written in Java (and Java only), so you are going to be running a JVM on your end regardless. If you think I am joking:

https://docs.aws.amazon.com/streams/latest/dev/shared-throug...

I would classify this as a critical issue, and this is part of the reason why we stopped using Kinesis entirely.
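For anyone who hasn't touched it, the low-level consumption path looks roughly like this with boto3, and this is for a single shard only. Resharding, checkpointing, and worker coordination, which is what the Java-only KCL handles for you, are all left as an exercise:

```python
# Rough sketch of low-level Kinesis consumption with boto3, one shard
# only. Everything the Java-only KCL provides (lease/worker coordination,
# checkpointing, resharding) you would have to build yourself.

def read_shard(stream_name, shard_id, max_batches=10):
    import boto3  # deferred so the sketch imports without AWS credentials
    kinesis = boto3.client("kinesis")
    it = kinesis.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON",  # start from the oldest record
    )["ShardIterator"]
    records = []
    for _ in range(max_batches):
        resp = kinesis.get_records(ShardIterator=it, Limit=100)
        records.extend(resp["Records"])
        it = resp.get("NextShardIterator")
        if it is None:  # shard was closed by a reshard
            break
    return records

def decode_records(records):
    # Record payloads arrive as raw bytes; decoding is also on you.
    return [r["Data"].decode("utf-8") for r in records]
```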


Is this true? That isn't typically the Google way. Traditionally they build something and then let it linger for years before killing it.

https://killedbygoogle.com/


One of my favorite features of Cloud Shell on GCP is that for pretty much any action in the UI, it will provide the equivalent fully populated cloud shell command. I hope AWS does something similar.


There is a third party extension that does that and also creates CloudFormation code.

https://chrome.google.com/webstore/detail/console-recorder-f...


That extension is also available for Firefox. GitHub link:

https://github.com/iann0036/AWSConsoleRecorder


I've generally seen that AWS produces new services quickly, but adding features to existing ones is a little different.

From my understanding, each service team within AWS is run pretty much as its own little startup, which sometimes makes cross-service features, e.g. tagging, inconsistent. It also explains why the UI seems so fractured.


One of the first things I noticed when trying out GCP after working on AWS for a few months was how clean and consistent the user interface was. Your comment definitely explains the poor UX on the AWS console!


AWS has always made the expert "UI" the first priority, by providing APIs and tooling. The onboarding experience for those who hadn't yet transitioned to devops seems to be a much lower priority for AWS. I find the opposite to be true for Azure and GCE.


This is a confusing sentiment to me. GCP APIs also come first. I don’t even think the UI is possible without them. Can you name a product where this is not the case?


Azure has had this since forever too. You can use either Powershell or Bash (my preference), and it works really well.


AWS has Cloud9 as a separate service, which includes this kind of terminal-in-browser thing. This is basically a lightweight version of that concept.


The question is not whether AWS has this, but whether the feature is really needed. Maybe it is for the GCP user base; maybe it isn't for the AWS user base. I have personally used AWS for ~9 years and never needed such a feature. I can achieve exactly the same thing (and quite often do exactly that) by provisioning a small free-tier instance with an instance profile that uses the same policy as the service or resource (a Lambda function, for example) that I am debugging.

If AWS lacks anything, it's a "why exactly is this API call failing" feature. It is horrendous to debug a resource that uses other resources when you have no means of finding out what _exactly_ is missing. Usually you get an error like "s3 threw a 403, bye". The closest thing to a solution is CloudTrail, with a giant number of JSON entries to go through, or trying to load them into Athena or another database; and because you don't know exactly what you're looking for, it is very hard. I usually just ask support to debug it for me, because they have internal tooling that can. Most of our support tickets fall into this category.
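A sketch of that slog: CloudTrail's LookupEvents API has no errorCode filter, so you pull events and filter client-side on the embedded event JSON. The function names here are my own:

```python
import json

def failed_calls(raw_events):
    # CloudTrail's LookupEvents can't filter on errorCode, so you have
    # to parse the embedded event JSON and filter client-side.
    failures = []
    for ev in raw_events:
        detail = json.loads(ev["CloudTrailEvent"])
        if "errorCode" in detail:
            failures.append((detail["eventName"], detail["errorCode"],
                             detail.get("errorMessage", "")))
    return failures

def recent_failures():
    import boto3  # deferred so this sketch loads without AWS credentials
    ct = boto3.client("cloudtrail")
    events = ct.lookup_events(MaxResults=50)["Events"]
    return failed_calls(events)
```

Even then you're paging through batches of 50 and guessing which denied call is the one that matters, which is exactly the pain described above.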


AWS LightSail has always supported a cloud shell.

However, those instances get used in more of a "pet" than "cattle" context.


Are you talking about the button in the LightSail interface to open up an SSH session in a browser window? Cloud Shell on GCP is slightly different, in that it gives you a preloaded, preconfigured machine to perform tasks on the command line that you would normally do with the GUI.


> AWS LightSail has always supported a cloud shell.

The difference being that you have to launch & maintain the server yourself (and pay for its runtime).


AWS does have the aws CLI, which is essentially what this is, except this shell runs in the cloud rather than in another POSIX shell running locally on your machine.

This probably abstracts away the .aws profile? I can't see much reason to use it, since I use the aws CLI just fine in however many terminal tabs I have.


I have found the Cloud Shell on GCP to be much more convenient for a few reasons:

- You have to install CLI components piecemeal on GCP, and sometimes need to opt into beta features. With the Cloud Shell, it's all preinstalled for you.

- You have to "log into" the CLI if you're running it locally, which can be a minor annoyance. (I know the AWS CLI doesn't have this issue, as it doesn't use OAuth for authentication with the console.)

- All of the data transfers in Cloud Shell are happening between machines in Google's data center, so you get gigabit speed file transfers in the Cloud Shell. For example, this is super useful when you need to download a large bucket to a working directory to make edits to multiple files, or if you need to run scripts that pull and push to/from Cloud Storage.

I think a Cloud Shell for AWS is a net positive! It can make some workloads easier and reduce the amount of configuration you need to do.


What do you mean by posix shell?


Sorry, I just mean a shell running on, presumably, your local computer.


Azure has had a cloud shell for a while as well


Psssst, we've also had transactional consistency in Azure Blob Storage since 2011, but S3 only got it 2 weeks ago!

https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-s3... + https://github.com/gaul/are-we-consistent-yet/blob/master/RE...




