Expecting a 10x drop in prices in 1 year is ludicrous. Even if prices follow a pseudo Moore's law and fall by half every 18 months, you're looking at at least 5 years before they reach parity. And in the meantime HDDs would have gotten cheaper, so expect even more time for parity.
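A rough back-of-the-envelope check in Python, assuming the 10x price gap and the 18-month halving from above, and (unrealistically) a static HDD price:

    import math

    price_gap = 10.0               # assumed: SSD costs ~10x more per GB today
    halving_period_months = 18     # assumed pseudo Moore's law for SSD prices

    halvings_needed = math.log2(price_gap)                      # ~3.3 halvings
    months_to_parity = halvings_needed * halving_period_months
    print(f"{months_to_parity:.0f} months (~{months_to_parity / 12:.1f} years)")
    # ~60 months, i.e. roughly 5 years -- and that's with HDD prices standing still.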
Great point, although we should also talk about the way storage use has shifted.
I used to have some massive programs installed, a lot of digital media and video games at one point. Now Steam / GOG / Blizzard "stores" my games when not in use. Google / Apple / Pandora / Spotify "stores" my music. Amazon or Netflix "stores" my movies.
When I ran an HDD, I wanted a TB. Now that I'm running SSDs, 80 GB seems to meet all my needs (thanks to changes in internet services).
Sure, it won't be true for everyone. (Editing TIFFs for GIS anyone?) I don't run a Chromebook, but its philosophy is not completely insane for a reason...
It's like storage has differentiated into at least two separate purposes. The main system drive for consumer PCs, formerly an HDD-only component, can now feasibly be replaced by an SSD. At the sizes needed for that, the difference in price per GB is already under a factor of two.[1]
I agree to a large extent but I think 80GB is an extreme on the low end. Many modern games (especially MMOs) are 20GB+. Windows 7 alone requires a huge amount of storage. I have a 120GB partition at home (just for OS+programs, not data) and I'm constantly uninstalling steam games in order to install others. If you want a modicum of breathing room, I'd say minimum 200GB.
IIRC, Assassin's Creed Unity required 50GB by itself.
If Windows takes up 30GB, you pretty much need to wipe out everything just to play Unity on 80GB. I think 256GB to 512GB is the best bet for the standard consumer.
I just got a 250GB Samsung EVO SSD. Filled it up in a matter of weeks by installing some of my favorite games from Steam. I'm looking to add a 500GB to my build, but I should have STARTED with a 500GB.
A friend of mine is looking to upgrade from a 256GB MBA to a 512GB or 1TB MBP because he needs to view medical imagery offline (the Retina display also helps). <= 128GB storage is not really adequate for people who work with photos or music. 256GB is better but many people do need more.
Games are literally the most data-intensive software you have in terms of UX. Your movies, music, documents, and even non-served databases are all browsable just fine on low-bandwidth, low-IOPS devices thanks to sophisticated algorithms.
Your load times for a level of Call of Duty are always reflective of how long it takes to get texture data off the disk and into vram, and that is constantly being hammered. It sometimes even causes texture popping on crappy notebook hard drives.
I have 256GB SSDs in my desktop and notebook (with intents to find a nice 1TB drive next year, since we are at the tail end of SATA3) and my Steam library on my desktop is 176GB by itself.
My OS on its own (Arch) discounting pacman provided games (0ad, doom3bfg, darkplaces, doom, etc) is around 6GB, and most of that is Qt doc, Python2 site packages, 300MB of wallpapers, and 600MB of locale.
Not only is load time a reflection of how long it takes to copy texture data into VRAM, but what constitutes a reasonable load time is to a degree set by developers' machines, and those tend to use above-average hardware. If in the past a 10s load time was the accepted maximum, that number will now more likely be based on SSD speeds rather than HDD speeds.
Same reason as you'd want an SSD for anything else? Fast loading times of lots of data? Also, some operating systems make having programs installed to volumes other than the boot volume hard.
You are right, it is a pity that you are downvoted.
Right now, you might put your favorite game on the SSD, not the whole Steam library - and that is what gamers often recommend themselves. FPS isn't influenced by the HDD, only loading times, and those are normally not too long. Many new games have only two load times: loading the game and then loading the save. Afterwards, everything is streamed, and a proper HDD is fast enough for that. And that is what the people responding to your comment here miss completely.
> You are right, it is a pity that you are downvoted.
It is a pity unconditionally. I expressed a subjective impression related to the topic at hand and asked constructively for explanation. There was simply no reason to downvote.
The downvotes and the first-generation replies to my comment show that this is not the place for a discussion about the worthiness of SSDs for games.
If you have the money and the means, why wouldn't you do it?
Your comment is very 'matter of fact' and baseless.
Improved loading times and better texture loading in games where it is heavy (think Fallout 3 or Skyrim) are a great reason to whack an SSD in your system.
Skyrim and Fallout 3 are in my eyes counter-examples for using an SSD, since you basically have only those two load times and afterwards it's streaming. Maybe if fast travel takes too long on the HDD.
> If you have the money and the means, why wouldn't you do it?
Oh, sure then :) It is only a waste in terms of price per gigabyte versus the possible in-game performance improvement.
> Your comment is very 'matter of fact' and baseless.
I have a 500GB SSD which cost ~$200. Sure, 5 years ago I would have agreed with you, but nowadays it's really not that expensive, and in-game load times feel much longer without one. IMO, Skyrim is load-happy: sure, the open world is fine, but go to town, go into your house, leave your house, leave town - that's 4 load screens in ~1 min of gameplay.
That HardOCP article (and HardOCP are not really known for their quality articles anyway) is testing for framerate improvements.
An SSD will not improve your framerate much at all (That's not what you use an SSD for).
You cited main loading times like loading a game and loading a save, which are enough reason themselves to want an SSD in there. Some games are painfully long in these areas (take any Total War game as an example) and an SSD will help.
I gave Fallout 3 and Skyrim as examples of texture loading where an SSD would matter. You claimed it didn't, which is contrary to all available information.
These games (like many open-world games) stutter when new cells/areas are loaded (I'm not talking about regular texture streaming). Again, this is where SSDs will make a difference.
An SSD becomes even more useful when you start installing high-resolution texture mods to games like these. My own Skyrim installation uses nearly 4GB of video memory when wandering around the wilderness. That would cause some pretty heavy thrashing on an HDD.
There is nothing about SSDs being used to store games that is a 'waste' if those things are important to you.
I have two SSDs in my personal machine, one 128GB for the OS and one 500GB for Steam and some games. They didn't cost me much, so why wouldn't I do it? There is literally no reason for me not to do this in a high-end system meant for playing games.
Other games will go on my regular HDDs, because you are right at least in saying that not all games will benefit from it, but some will.
> That HardOCP article (and HardOCP are not really known for their quality articles anyway) is testing for framerate improvements.
I don't like your tone. If texture streaming profited much from an SSD, you would see that in the FPS.
I played Fallout 3 on a very old machine, of course without an SSD, and I remember no noticeable cell loading outdoors. The streaming in those engines is just too good. Fallout: New Vegas I played for more hours than I'm comfortable admitting, heavily modded, on a better machine - same story there.
> My own Skyrim installation uses nearly 4GB of video memory when wandering around the wilderness. That would cause some pretty heavy thrashing on an HDD.
In-game, in the wilderness? Try it out. I doubt it. Initial loading times will be better of course, and loading times when switching locations, but not performance otherwise. You underestimate the performance of a regular HDD that is not a shitty 2.5" model cooked to death in an overheating laptop.
> They didn't cost me much, so why wouldn't I do it?
Like I said: No reason not to, if you have the SSD anyway. But normally, SSDs are a lot more expensive than a HDD, see above.
You did not say that you played Skyrim first on an HDD and then on an SSD…
> Area/Cell transitions are MUCH faster on an SSD.
I think we were not talking about the same thing (anymore?). Like I wrote below, explicit area transitions - entering a city, loading a save game - will of course benefit from an SSD, a lot. That is loading, though. But you also have cell transitions that are streamed - you talked about texture streaming above - when travelling on foot through the game world. If I remember the engine from my Morrowind modding days correctly, that is a special case in those games with their explicit cell system, but it probably applies to basically all 3D games that involve moving through big areas, since they all have to load textures as they come into reach. It would be quite interesting if an SSD had a real effect here, and to my knowledge it so far does not. It would be interesting because those transitions can stutter if the streaming system does not work properly, or if the HDD really is too slow, which is a likely cause of bad minimum fps (and, depending on the benchmark, could lower average fps) or of outright visible stuttering, measured in fps or not.
I'm not saying SSDs do not help with loading stuff, I say that to my knowledge, they do not help with streaming - which are two different things.
I never felt those explicit transitions were too bothersome when I played Fallout/Oblivion/Morrowind, that together with the streaming system outdoors is why I see an SSD as not necessary for those games. But of course, if it bothers you - and maybe loading really takes longer in Skyrim? - an SSD is a good load time minimizer.
I used to have roommates that were intense gamers (blew thousands of dollars on new GPUs and other desktop gaming parts per semester) and all they talked about were SSDs. SSDs are pretty big in gaming.
256GB is indeed the sweet spot for typical users. 512GB if you play large games.
Windows on its own is quite capable of filling up a 128GB drive with restore points, update backups, and all sort of other junk when left in the hands of a typical user for a couple of years. I recently wiped nearly 60GB of pure OS-level junk (i.e. not caused by applications) from a family member's Windows 8.1 PC.
I'm curious to see what kind of space Win10 takes up. I believe one of their focuses was stripping it down to take up less space for devices such as tablets, netbooks, etc.
I recently discovered that a large portion of my Windows 7 partition was due to space set aside for System Restore. Then there are the swap space and the hibernation image, which are enormous if you have a lot of RAM.
I have EVE Online and StarCraft 2 installed, which use about 30GB in total. Add another 6-8GB for the OS, and you've still got about 50GB left (which I actually fill up with even more games).
> I used to have some massive programs installed, a lot of digital media and video games at one point. Now Steam / GOG / Blizzard "stores" my games when not in use. Google / Apple / Pandora / Spotify "stores" my music. Amazon or Netflix "stores" my movies.
How sustainable is this for the average consumer with most national ISPs moving towards bandwidth caps?
I had to switch to business service because Netflix alone was blowing through the data cap - in fact it was blowing through even the data cap they had for their top tier consumer service.
Which nation are you talking about? I suppose I'd guess US but it would help if you said.
I'm still getting uncapped broadband (albeit with vague TOS provisions) in the UK. I think the rest of Europe is probably similar.
For the record - I'd like to move towards metered billing - but with a realistically low price per GB - not a punitive one based on profiting from people that don't have the knowledge to estimate their usage.
Give me 20-40 GB at a good speed for £20/month with prices scaling linearly from there and I'll be fairly happy and no-one will get burnt.
(just checked and most providers are offering uncapped at around 20Mb/s for £20/month)
I'd be curious to know why you would prefer a metered approach when, as you say, for that same price you get uncapped.
It seems to me that most people, in the US anyway, would rather not have caps. I can honestly say this is the first time I've heard anybody say they would, for the same price, prefer a cap.
For the record, 20Mbps for £20 is roughly similar to what our local cable monopoly offers in NC.
I absolutely prefer capped service far, far more than uncapped services. In Singapore, I can get a LTE SIM for $35, and then pay $25 for 7 days of data @14 Gigabytes, with roll over of the amounts I don't use.
In general, I get 50 megabits down/20 megabits up. And, because this is a capped service, I have a reasonably good chance of actually seeing that performance. If it was, "Uncapped" or "Unlimited" - then people would abuse the crap out of it, and all of a sudden that 50 megabits/sec down would quickly drop, making the system far, far less useful.
I've never been a fan of unlimited/uncapped services, they quickly degrade into poor experiences for everyone in a very short period of time.
> And, because this is a capped service, I have a reasonably good chance of actually seeing that performance. If it was, "Uncapped" or "Unlimited" - then people would abuse the crap out of it, and all of a sudden that 50 megabits/sec down would quickly drop, making the system far, far less useful.
Using the service I have paid for is not abusing it, any more than watching too much cable TV or making too many local calls is abuse of those services. If it causes degradation for other users, that's the fault of the ISP that has oversold us. And the top ISPs all have profits in the billion-dollar range, so it's not like they're hurting for money and just can't fix the infrastructure. Caps are just a way to squeeze out higher profits by reducing the necessity of upgrading infrastructure.
That said, not once have I ever seen my speeds drop below what they're rated to be.
So - let's take those one at a time, because they are interesting.
"Watching too much cable TV" - Cable TV is a broadcast Medium, and watching it, like listening to broadcast radio, has no marginal cost or impact. You can leave your TV off for an entire month, or tuned to a channel for 24x7x365 - zero difference in cost to the provider, or impact to other people on the service.
"Making too many local calls" - More than one person has been required to purchase a business line when they overused their local calling privileges. Indeed, on some CO to CO trunks, if enough people (ab)used their local calling privileges, the entire exchange would be blocked and nobody could call in/out. Unlike the case of internet connectivity, there is no graceful fallback - once the trunk reaches capacity, it's dead for anyone else. (Lumby, BC to Vernon, BC - used to happen all the time - 48 people could bring down a town of 1800). That's why those types of deployments require some type of per minute charges to avoid that scenario.
"If it causes degradation for other users, that's the fault of the ISP that has over-overprovisioned us" - We totally agree here. Any ISP selling unlimited service, and not clearly explaining how many gigabytes "Unlimited" equals, is doing everyone a disservice.
"Caps are just a way to squeeze out higher profits by reducing the necessity of upgrading infrastructure." - We disagree here. ISPs should provide Caps to their customers, and clearly communicate what they are - and then compete to provide higher caps while ensuring they maintain the line rate they've committed to their customers.
For example, if I've purchased a 1 Gigabit connection, the ISP darn well better invest in their infrastructure to make sure that none of their ports are blocking. Comcast utterly failed to do that with Netflix recently, and I find that annoying beyond belief. At the same time though, I don't expect Comcast to provision enough capacity to support all of their customers at 1 Gigabit (or whatever a high speed connection is) at the same time - that's cost prohibitive - particularly as it will rarely be happening.
On the big-I Internet, this is solved by what's called 95th percentile billing. This is probably too confusing for the average user, so something simpler like, "You get a 1 Gigabit connection for $75/month, with a 10 Terabyte/month limit during peak hours and unlimited data in off-peak hours, $20 per additional Terabyte in peak, and we make sure that none of our upstream ports are blocked on any of our peering routers" is exactly what I'd be looking for.
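For anyone curious, a rough sketch of how 95th-percentile billing works; the sampling interval, the simplistic percentile rule, and the traffic numbers are all invented for illustration, and real billing systems differ in the details:

    def p95_billable_mbps(samples_mbps):
        ordered = sorted(samples_mbps, reverse=True)
        drop = len(ordered) // 20      # the top 5% of samples are forgiven
        return ordered[drop]

    # A 30-day month sampled every 5 minutes gives 8640 samples; fake a bursty month:
    samples = [100] * 8300 + [900] * 340   # mostly 100 Mbps, ~28 hours of 900 Mbps bursts
    print(p95_billable_mbps(samples), "Mbps")
    # -> 100: the bursts fit inside the forgiven 5%, so they don't raise the bill.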
> That's why those types of deployments require some type of per minute charges to avoid that scenario.
It sounds like those deployments required infrastructure upgrades, not more creative billing to stop people from utilizing it.
> We disagree here. ISPs should provide Caps to their customers, and clearly communicate what they are - and then compete to provide higher caps while ensuring they maintain the line rate they've committed to their customers.
Even this isn't the reality. It's rare for landline Internet to be advertised with the actual data caps in effect. In my case, the data cap is defined nowhere except the Acceptable Use Policy (a long document that nobody is going to read) and in a My Usage part of your account (which you obviously won't see until you're already hooked up.)
At the same time, who are they going to be competing with? How many people are going to lay down fiber side-by-side with someone else's fiber? What incentive is there for them to compete based on the data cap instead of following current practices, which are basically "100 MBPS BLAZING FAST SUPER-SPEED INTERNET!!!"*
(fine print)
* Limited to 200GB/month. Users going over this limit will be automatically upgraded to the next highest tier after three offenses.
I'm not sure why you disagree with the statement "caps are just a way to squeeze out higher profits by reducing the necessity of upgrading the infrastructure." You're basically making the argument that yes, caps do have that effect, but are put in place as a necessary network management strategy, despite the fact that most of these networks worked quite well before data caps and don't work any better with them. While some or most of them are continuing to invest in upgrading infrastructure, most of them are also rolling around in billions of dollars of profit that they aren't investing...because there is no real market incentive to do what you're saying they "should" do.
> "You get a 1 Gigabit connection for $75/month, with a 10 Terabyte/month Limit during Peak hours, and Unlimited Terabyte limit in Off peak hours,
What a world that would be. In reality data caps are usually 300GB on networks that can obviously support much more than this. I could blow through my former data cap in 12 hours. Heck, even in your scenario, I would only be able to use my rated 1 gigabit for 2-3 hours at max utilization. It seems misleading to advertise Internet as 1 Gbps if you can actually only get 1 Gbps for 3/720 (0.4%) hours of the month.
EDIT: I misread that as 1 TB instead of 10TB, but I don't think 4% of the month is much better.
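Rough math behind those percentages (cap and line rate taken from the comment above):

    cap_tb = 10
    link_gbps = 1

    cap_gigabits = cap_tb * 1000 * 8                      # 10 TB -> 80,000 gigabits
    hours_at_line_rate = cap_gigabits / link_gbps / 3600  # ~22.2 hours
    fraction_of_month = hours_at_line_rate / (30 * 24)    # of a 720-hour month

    print(f"{hours_at_line_rate:.1f} h, {fraction_of_month:.1%} of the month")
    # ~22.2 h, about 3% -- so even a 10 TB peak-hour cap covers only a few percent
    # of the month at the advertised 1 Gbps.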
I think to some degree we agree. Caps are put in place to both ensure a profit to the carrier, as well as ensure availability to the users. If the Carrier did not need to make a profit, they could upgrade their infrastructure to be totally non blocking. Alternatively, if the Carrier needed to make a profit, but was okay providing degraded service, they could avoid upgrading infrastructure.
But, if they want to do both - Profit + good service, then caps are needed.
The competition, btw, is definitely fiber (See what happens to Comcast Service when a fiber competitor comes to town - it gets much better) and wireless (I haven't used a wireline carrier in singapore for 2+ months. All LTE, all the time.)
Regarding my limits - the nice thing about Tier-1 ISPs (which Comcast is) is that they don't pay for data use, as they engage in what is called "Settlement Free Peering". What they do pay for is (roughly) trenches + vaults + data center real estate + power + cooling + security + chassis + fiber + interconnect ports + line cards. These are all a function of peak usage, not total usage. So outside of peak usage, which has reliable patterns of behavior, there is no cost to the ISP, or to fellow users of the same resources, from your usage. Peak usage may be as few as 4 hours/day, or 120 hours/month * 1 gigabit = 54 Terabytes. In a month, 1 gigabit could theoretically draw 324 Terabytes, so that's 270 Terabytes off-peak + 10 Terabytes on-peak allowance = 280 Terabytes/month.
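The unit conversion that's skipped above, spelled out with the same assumptions (1 gigabit line, 4 peak hours/day, 30-day month):

    gbps = 1
    tb_per_hour = gbps / 8 * 3600 / 1000             # 1 Gbps ~= 0.45 TB per hour

    peak_hours = 4 * 30                              # 4 peak hours/day for 30 days
    total_hours = 24 * 30

    peak_tb = peak_hours * tb_per_hour               # 54 TB
    theoretical_max_tb = total_hours * tb_per_hour   # 324 TB
    off_peak_tb = theoretical_max_tb - peak_tb       # 270 TB

    print(peak_tb, theoretical_max_tb, off_peak_tb + 10)   # 54.0 324.0 280.0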
By doing just a bit of scheduling, you don't have to pay any extra, the ISPs network isn't overloaded, and you have a reasonable allowance of data to work with.
Of course, if they start selling 1 Gigabit Service, that's a lot of switch ports/chassis/routers/interconnects/etc/etc... they are going to have to upgrade to handle the new peaks.
I think they're talking about home internet connections, as in fibre or ADSL broadband.
I have an uncapped service (Virgin Media) in the UK which I wouldn't trade for anything. So much so that if I were to move house their service availability would be a factor to consider.
I get 152Mbps down/15Mbps up, and I have never dropped below that level by more than a few Mbps, and only for short periods.
I suppose my preference would be that providers simply build the capacity required to support the advertised bandwidth at 100% utilization. This is similar to what Google Fiber does. They don't care if you use 20GB or 20TB in a month from what I've heard. That sounds optimal in my mind. Underestimating usage and failure to anticipate demand are real issues that major ISPs have to deal with over the long run (off the top of my head AT&T's network problems after the first iPhone was released). I've always viewed these caps as short-term fixes until they figure how to increase their network capacity to support increased use.
> Because at the moment s/he is bearing the cost of abusive users.
There is nothing abusive about using that which you paid for. If you sell me a 100Mbps pipe, I should be able to use that pipe 24/7.
Of course, the reality is that it would be expensive to make sure everyone could use their 100Mbps pipe 24/7, so ISPs sensibly oversubscribe to save money.
But ISPs miscalculating their provisioning requirements (or, as it appears in most cases, simply grubbing for money) is not the end user's fault and not the end user's problem, and it does not mean that they are doing anything abusive.
Metered service at reasonable prices would probably actually be better for a lot of consumers, but most people aren't _comfortable_ with metered services.
Man, I'm in the UK and there is no way I would like this arrangement. I'm on Virgin Media fibre (152Mbps for £34 per month) right now and I would never go back to an Openreach infrastructure service.
Here in Australia, where data caps have been the norm for, oh, forever, it's pretty sustainable (for now). Music streaming is notionally the same cost as reading an image-heavy website: negligible, in the scheme of things. Video streaming is again not that bad, because I can only watch so many videos... they take real time to consume. YMMV depending on your chosen resolution.
Games are the main problem. When one of those 60GB bastards comes out, that's nearly one third of my monthly bandwidth, so I have to think carefully about scheduling it. But I regularly re-install 4-8GB from the cloud because it's a marginal impact compared to the entertainment it will bring.
Yeah, when GTA V came out, I waited until 'off-peak' to use the 100GB of data there, rather than waste my daytime allocation. Rarely would we hit our max usage though... thanks to appalling internet speeds. :)
> Video streaming is again not that bad, because I can only watch so many videos... they take real time to consume. YMMV depending on your chosen resolution.
I use Netflix as a replacement for OTA and cable TV. So my girlfriend, who likes to watch a lot of TV, has used 221GB 7 days into the billing cycle.
Are they moving towards bandwidth caps? Comcast in the SF Bay Area has claimed a "suspended" 250GB cap for years and years. I get the impression that we're moving away from bandwidth caps.
Data caps on Comcast service are coming back. They started with Nashville and Tucson in 2012. The program's now expanded to Huntsville and Mobile, AL; Augusta and Savannah, Georgia; Central Kentucky; Maine; Jackson, Mississippi; Knoxville and Memphis, Tennessee and Charleston, South Carolina; and Fresno, California. They're testing two different plans: a 300GB cap with overage charges ($10 per 50GB) and flex-data plans ($5 discount for a 5GB monthly cap, and $1 per GB over the 5GB).
Add Atlanta, GA metro area to your list. They've had me on a 300GB cap for the past year. I've only gone over it once (burning one of three overage allowances) because we stream most of the TV and movies we watch, and I do a lot of OS testing, downloading ISOs and packages on a daily basis.
I really don't get a data cap of 300GB for today's culture of "stream everything". I can see something like a 1TB cap to knock out the heavy torrenters, but 300GB/month is 10GB/day. A decent sized family will bust through that easily (Dad watching a 3GB movie, Mom streaming 1GB of Pandora throughout the day, Daughter downloading 7GB of new Xbox/PS4 game content, Son binging half a season of a Netflix show at 5GB).
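The arithmetic for that hypothetical family, using the figures above:

    daily_budget_gb = 300 / 30            # a 300GB/month cap works out to 10GB/day
    family_day_gb = 3 + 1 + 7 + 5         # movie + Pandora + game content + Netflix binge
    print(daily_budget_gb, family_day_gb) # 10.0 vs 16 -> over budget in a single evening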
I could imagine, at some point if ISPs became militant about caps and municipal broadband and Google Fiber weren't making inroads fast enough, Netflix creating appliances similar to their OpenConnect appliance [+] for users, but with the content encrypted with a key tied to your user account. Netflix would then cache the majority of the content you consume locally.
There would be for me if the price of SSDs comes down enough. With my 2011 MBP and RME Babyface taxing the computer a bit too much with USB 2.0 context switching, streaming my 120GB of samples from an SSD would help while running all of this at 96K.
Recently "The Witcher 3" sold 4M copies in a short amount of time whereas 1.3M are attributed to PC sales [1].
Even Indie games like Cities: Skylines which is PC/Mac only recently sold more than 1M copies in a short amount of time. [2]
I don't really play much, but I follow PC gaming and it has been on the rise for years. Free-to-play MOBA games like League of Legends have over 8M concurrent players in peak hours, with 27M people playing at least one game per day. That is more than the viewership of even the most popular TV series, every day of the week.
However, it's _nothing_ compared to the installed base of PCs. That's probably on the order of 2 billion. Most people don't use their computers to play AAA games, and likely not games at all (Steam only has 125m active users, say).
The majority of computer users don't play modern AAA games. Steam claims 125 million active users (active defined we know not how, but probably generously). Some of them will be people like me; the only Steam games I currently have installed are Papers Please and FTL. It's estimated that there are ~2bn PCs in use worldwide.
Keep in mind that SSD performance tends to degrade as the drive approaches full utilization - even if the actual internal capacity includes margin, you're probably best off avoiding approaching the available capacity. Heck, for the 250GB SSD I put in my primary laptop this past spring I intentionally formatted it to ~80% of the available capacity - that gives me plenty of space (~100GB free after all software & data) but even if I end up with things chewing that capacity I should keep good performance due to the available space for the drive's own data shuffling.
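Roughly what that 80% formatting works out to, with approximate numbers for the 250GB drive mentioned:

    advertised_gb = 250
    formatted_fraction = 0.80                         # partition only ~80% of the drive

    visible_gb = advertised_gb * formatted_fraction   # ~200 GB exposed to the filesystem
    reserved_gb = advertised_gb - visible_gb          # ~50 GB left for the controller to shuffle
    print(f"format ~{visible_gb:.0f} GB, leave ~{reserved_gb:.0f} GB unpartitioned")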
Perhaps 8.1 and Windows 10 are better about what they save, but going with anything less than 128 for Windows + apps + data seems risky to me.
Does the SSD actually use this unformatted space? I know that SSDs from the factory have less available space than their nominal capacity, to account for the eventually diminishing quantity of functional floating-gate transistors (i.e. 240GB is made available on a 256GB drive), and that OSes usually recommend keeping about 10% of the formatted drive free (for paging, IIRC). I didn't know that about unformatted space.
All clean pages (from overprovisioning, unformatted space, and trimmed space) should be treated the same by SSDs. In particular, Intel documents that leaving unformatted space will increase performance on their SSDs.
I may be wrong, but I'm fairly certain that on SSDs the wear leveling is done over the entire flash capacity. The details of where data is stored are an abstraction layer on top of that, unlike on a spinning-metal hard disk, where specific regions of the drive are allocated to particular partitions.
Wear leveling is done in the layer that translates between LBAs and actual physical chip numbers and addresses. Drive partitions are well above all of that in the abstraction stack, and partition tables are totally beyond the scope of a drive's concern (even for spinning rust drives). These days it is even common for a drive to have two different partition tables in two different formats with different partitions listed. Bootloaders and operating systems are the only things that should care about partitions.
This is also true for hard drives. Fragmentation and filesystem layout make performance of a 95% full partition on an HDD much slower than a 50% full partition.
Defragmentation and cleanup on traditional hard drives take longer when they're full for different reasons, including the increased amount of data being managed and, in many cases, the empty space being further away from the data being moved, which slows down clearing and consolidating free space. On a solid state drive the bigger issue is that flash can only be erased in large minimum-size blocks. Having empty space available means the drive can write data into already-clean space without having to do garbage collection first.
I got myself a cheap Acer with 32 gigs of eMMC storage. I removed Windows 8, its restore partition (that took 10 gigs all by itself) and installed Ubuntu. It's not as fast as the "serious" laptop, but it's light, silent, has decent battery life and I won't be devastated if a truck rolls over my backpack. I mean, if I don't have it on my back, that is.
For Python/Django/Twisted development in a couple of LXD containers, it's just the right amount of computing power at an incredibly low price.
No Star Citizen for you (it's estimated to be 100GB in size - and yes, you want to store it on SSD because the load times are quite high even with SSD).
I wonder if those sell simply because of their positions in the market. Is the average consumer knowledgeable enough to know whether they need 240 GB, 2 TB, or 6 TB? Or do they simply compare prices and sizes and choose whatever they think is the best value? A 2 TB drive might just be another example of the second-cheapest-wine trick.
There are 3 reasons people need to buy a new computer: too slow, ran out of HD space, or too slow and ran out of HD space. It was true 10 years ago and even more true now. Back up your 32GB phone a few times and you're wondering where all the space went.
It's more like insurance. You buy as much as you can afford. That's why it's painful to buy a new MacBook Pro these days. 1TB is very expensive compared to the 1TB I could afford 5 years ago.
> It's more like insurance. You buy as much as you can afford.
Somewhat OT, but I have to say something about that. With insurance you are almost always better off buying as little coverage as you can get away with. The more risk you can afford to take on yourself, the better a deal you get on your insurance. This is why a company like Hertz doesn't buy insurance at all for its fleet: it can spread the risk across all its cars, so it self-insures.
In fact, there are some cases where the "better" insurance is always a worse deal. I've seen this in several employer medical insurance plans where there are two or three different levels of PPO plans with varying deductibles and premiums.
The last couple of Blue Shield plans I looked at were like this. There would be two PPOs with the same doctor network and such, so you could simply compare on the numbers. The more expensive plan offered lower copays and deductibles, but unless you have a large number of prescriptions and doctor visits, you'd never make up the difference - with the more expensive plan you'd be paying more every month regardless.
And in any case, the number you really care about on medical insurance - which seemingly few people pay attention to - is the annual out of pocket max. That's your worst case scenario cost-wise. If you hit the out of pocket max, everything is covered after that.
In the plans I looked at, both the cheapest and most expensive plans had the same out of pocket max. So in almost every case, people who sign up for the more expensive plan are simply spending more money for the same coverage.
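To illustrate with completely made-up premiums and copays (none of these figures come from the actual plans discussed):

    def annual_cost(monthly_premium, copay, visits, oop_max):
        out_of_pocket = min(copay * visits, oop_max)   # capped by the out-of-pocket max
        return 12 * monthly_premium + out_of_pocket

    for visits in (2, 10, 40):
        cheap = annual_cost(monthly_premium=300, copay=60, visits=visits, oop_max=6000)
        pricey = annual_cost(monthly_premium=450, copay=25, visits=visits, oop_max=6000)
        print(visits, cheap, pricey)
    # With these invented numbers the cheaper plan wins at every visit count shown,
    # and both plans share the same worst case: the out-of-pocket max.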
Even though location is embedded in EXIF, you can disable the geolocation feature in Flickr's privacy settings, so that it won't appear on the photo displayed on Flickr. But yes, Flickr does keep the EXIF info in its database. For those who are paranoid, run exiftool or ImageMagick to strip all EXIF data before uploading.
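If you'd rather script it than shell out to exiftool, one way to do it with Pillow in Python; note this re-encodes the JPEG, and the filenames are placeholders:

    from PIL import Image

    def strip_exif(src, dst):
        img = Image.open(src)
        clean = Image.new(img.mode, img.size)   # a fresh image object carries no EXIF
        clean.putdata(list(img.getdata()))      # copy pixels only, not metadata
        clean.save(dst)

    strip_exif("photo.jpg", "photo_noexif.jpg")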
If Flickr (or their publicly traded overlords Yahoo) decide arbitrarily for whatever reason one day you violated their TOS with an errant nipple or whatever, kiss a decade of professional archives goodbye.
No thanks. My photos live in four places, and two of them are owned by me.
No problem with nipples - Flickr allows full nudity as long as you mark them correctly for search purposes. Nobody said you should keep the only copy of everything you have on Flickr.
You can also try throwing them in an imgur album. Not sure if there is a data limit per account, but I don't think there is. I have several dozen GB of photos on there already.
I've had a paid imgur account since the day they announced them. In addition to the compression, mass upload and album management are very clunky at best. I stopped attempting to archive many images there years ago...
I should give it a test again though to see if it is better.
> And in the meantime HDDs would have gotten cheaper
Don't be sure about that. Remember that the Thai floods set back HDD prices and made them more expensive for years; they still haven't returned to the pre-flood trendline last I checked. With SSDs dropping, that might deter much more investment into HDDs (and with less demand for HDDs in the first place, there will be fewer of the economies of scale & learning curves that drove previous HDD price decreases - and vice versa for SSDs!).
That statement neither disputes nor supports the one you replied to. The statement you were replying to referred to the trend line - that is, the line representing the rate at which the price was changing, not the price itself. You would need significantly more than two data points to argue for or against that assertion, since you would need enough points to show what the trend was both before the flood and currently.
I only point this out because I assume you misunderstood what was said about the trend line; if you had another purpose in pointing it out which I am missing, I would be interested in knowing it.
> they still haven't returned to the pre-flood trendline last I checked.
They have. I remember buying 2TB drives a year before the floods (2010); I paid about $200 each. Last year (2014), I bought 4TB drives for $130 each. That's about a 3x improvement over 4 years, right?
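Working that out in $/TB (prices from the comment):

    price_2010_per_tb = 200 / 2     # $100/TB
    price_2014_per_tb = 130 / 4     # $32.50/TB
    print(price_2010_per_tb / price_2014_per_tb)   # ~3.1x improvement over 4 years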
Why would prices follow Moore's law rather than the laws of supply and demand? If all of a sudden there is an immense supply of drives whose performance and capacity themselves don't follow Moore's law then prices certainly wouldn't either.
If suppliers can deliver hard drives that follow some cube version of Moore's law (temporarily, long term everything seems to S-Curve IMHO), then prices will fall dramatically, even in a time frame less than 5 years.
Are spinning disks the horse-drawn carriages, and solid state the start of a different, new, less mechanically restricted jet age? I think we all know the answer to that question -- maaaybe ;)
The author commented that they've seen a 50% drop in price over the last 8 months (from 20x HDD to 10x HDD), so it's not outside the realms of probability with the increased yields of 3D NAND.
Just as a test: I looked at the price history for a random 256GB SSD (from Crucial). The highest price in the last year? $115. The current price? $99. Not nearly a 50% drop.
Now you are just cherry-picking to prove a point. In a similar tone, check out the huge drop in prices for the Kingston Digital 240GB SSDNow[1]. It has dropped from about $400 to a mind-blowing $78!!
I have been tracking the prices of HDDs for a while now. Although I do agree that HDDs are currently substantially cheaper than SSDs, HDDs have seen a much smaller drop in price and increase in capacity compared to the pre-flood era. The bigger problem facing HDDs now is that almost all major manufacturers have been bought up by WD and Seagate, and this is starting to show the ill effects of a duopoly. So it remains in their best interest to keep prices high [2].
"The bigger problem facing HDDs now is that almost all major manufacturers have been bought up by WD and Seagate, and this is starting to show the ill effects of a duopoly. So it remains in their best interest to keep prices high [2]."
It's not in the HDD manufacturers' best interest to keep prices inflated, considering price is one of the two big competitive edges HDDs have over SSDs.
What happened during the floods was that WD[0] got lucky - data consumption remained high as ever, and SSDs simply weren't cheap enough at that point to fill the void that the floods left behind.
[0] And to a greater extent, Seagate. From what I remember, their facilities weren't directly affected by the flood, but their supply chains were disrupted.
The price is about the same, as is the performance, but are they still making the MX100? If not, I think the pricing information for it will not be very informative.
While I suspect your original comment is closer to the truth than the article's (as far as price parity goes), picking a single data point doesn't make the point. It's rare for a drive to suddenly drop in price so significantly without manufacturing changes (and thus a new version of the drive). So while you're unlikely to see the exact same drive for 50% less after a year, it's entirely plausible to see a different model of similar or greater capacity drop in price.
In late 2013 I bought an 840 Pro 256GB for around $190. Today, I'd be looking at a 512GB 850 Pro for $140, probably $120 on a good sale (like when I bought the 840 Pro).
It's about 40% in a little over a year and a half. Significant, at least. It's at the point where you cannot recommend that anyone use a mechanical hard drive as an OS volume anymore, at any price.
It's more interesting to me to compare cost/GB on the most frequently shipped or bought capacity for each media. At 250GB, the SSD is at close to (or maybe just ahead of) the most common size. 250GB HDDs are basically close to EOL which I imagine can skew the 'sweet spot' price up or down depending on the situation. Under that definition, SSD sweet spots are probably 128 or 256GB, and HDD is probably 1TB or 2TB?
You can see there's about a 10x difference between the two on the logarithmic scale chart. The trend is slightly faster for SSD, but a quick straightedge-on-monitor projection suggests the intercept is closer to 2015, if ever.
Is that a typo? Did you mean "2025" (i.e. 10 years out?) - I looked at the graph, and that's what it looked like to me based on historical projections.
The author, Jim O'Reilly, is not basing his article on historical analysis, but on an understanding of the impact of 3D NAND + his communication with the major flash providers - some of whom are committing to 8 TB and 16 TB SSD drives in the next 18 months. That, in conjunction with massively more efficient utilization of existing Fabs (both by dropping back a few process generations to increase Yield, plus stacking the NANDs in the Z direction) is going to create a huge disruption in the next 18 months.
Now - the question up for debate - does this disruption result in lower prices, does it result in large and expensive drives, or are the Fabs supply constrained such that they will be able to serve new markets, but not have sufficient capacity to drive down prices on high volume?
Regardless - 2015/2016 are going to be huge years for SSD storage.
For example, on Newegg a vanilla 7200 RPM 1TB HDD from Western Digital will run you $50. The lowest-cost 1TB SSD will run in the mid $300s. A solid 7x difference.
In less than two years we'll see consumer retail 1TB SSDs for $150 to $200. What's actually going to happen, rather than a violent price plunge pushing 1TB down to $50, is that capacity will soar while prices fall more gradually than the article indicates. So you'll be buying 5TB SSDs for $300 in three or four years. The difference is the friction the SSD makers will insert into the market when it comes to falling prices, i.e. the difference between what they could sell them at and what they actually will sell them at. The race will be to higher capacities first; then, as capacity saturation nears, the bottom will fall out of pricing on the lower SSD storage tiers.
Those new "shingled" Hard Drives are 8TB and pretty cheap. However, the hard drives have severe reductions in speed and are really only for archival purposes. IIRC, there's a path for 10TB, 16TB and beyond if you're willing to put up with the weaknesses of shingled hard drives.
Granted, SSDs are moving from MLC(2-bits per cell) to TLC (3-bits per cell) which reduce speed significantly without actually selling any "more" hardware. So both sides are "cheating" extra capacity out of the same hardware.
If you want to keep the same high-quality specs that the 4tb hard drives have (speed, reliability, etc. etc.), I don't think there's a valid upgrade path at the moment. There's some research to push the capacities beyond that while retaining the speed of current hard drives, but they're not ready for commercial use yet.
But the same is true for SSDs. 16nm MLC flash might turn into 10nm or 8nm as process technology improves... but that's maybe a 2x to 4x improvement in capacity. TLC gives another 50% boost.
HAMR may also manifest at some point soon and push HDD densities way up while getting rid of the problems associated with SMR. HAMR will likely be expensive just like Helium is but it will certainly unlock a lot of super high density materials when it comes about (which may be in the next several years).
With the tech we're talking about, there are definitely limits per inch. They'll be able to fit a petabyte in a 2.5" SSD drive case before the SSD sunsets, at a cost of $300. At that time, your 100TB SSD might be $50 to $75.
After SSDs, what technology will take us to 100PB drives at $300? Is that even possible while keeping the small size of the drive? No idea. It's going to require some amazing breakthroughs.
DNA storage[1]? It will be very interesting to watch computers become more "organic" over time. DNA for storage, processors that are correct "most of the time", deeper and more broad neural networks, etc.
Ah, Zeno's paradox. When the SSD matches the HDD price, in 28.5 months, the HDD will have moved, and so on and so on. Although at some point they'll probably just make many, many fewer HDDs, so the price will stop moving much.
For any storage experts, will HDD still be preferable for long-term cold storage or are those even currently an unacceptable solution? Seems like cold storage might end up being more important over time. I guess AWS Glacier is <$1 per TB per month which is pretty cheap and can presumably even fall over time. I have no idea what the reliability of these services is though.
Even if you factor in the performance difference, the cost still doesn't quite work out. On average, I don't think they are ~9x faster (more like 5x). Sure, SSDs max out the disk interface speed and effectively have no seek time, but there are few workloads that are heavy on 4K random I/O (booting and copying small files being the main ones).
Really? For myself and everyone else I've talked to, using an SSD was by far the largest upgrade they've applied to their home computer for many years.
As a desktop user, tasks like booting and copying small files make a hell of a difference to the perceived speed of a system, because they feel like they should be fast. This is where SSDs shine. On the other hand, most tasks that involve sequential I/O with bigger files are expected to take some time. Have you ever tried unzipping Eclipse on different systems?
Who besides Apple is 'forcing' users to use SSDs? Even most so-called 'Ultrabooks' use HDDs in order to be cheaper.
If almost no one understands the performance and reliability difference an SSD can give, and almost no one 'forces' users to buy an SSD, how is this technology getting cheaper at such a fast rate?
I bought a $50 1TB HDD from Newegg for my gaming desktop (which I of course do not use for gaming) about 6 months ago. I've looked again and they're the same price. It appears SSDs are still hundreds of dollars for the same capacity. I won't be buying an SSD until they're under $100/TB.
Yup, pretty much. The only bit of news is that SSDs can achieve hard drive densities with 3D NAND, but it's like the early calls for the demise of all lighting other than LED. That was 10 years ago, and it's getting closer now - I'm putting LED can lights in my living room, for example. So at some point ...
To your LED point, I'm unaware of anyone not putting LED if cost of electricity or labor is of any importance.
Consumers may not have gotten the message yet, but anyone running the numbers is going LED or nothing at all (small exceptions for grow operations or theater lighting).
Better return compared to incandescent, but a worse return compared to CFL.
Comparing to incandescent is quite disingenuous. The bulbs just cost too much today, and the heat problems (i.e. much shorter lifetime) are not fully solved; they work in some fixtures and not in others.
I don't know -- I just picked up a screw-in LED bulb from Walmart for just over a couple of bucks, about the same price as a CFL. And I was surprised at how bright it was. Just wondering how long the electronics in it will last.
LEDs turn on instantly, CFLs take a moment and get slower as time goes by. If I break a CFL I'm potentially exposing myself and my child to mercury vapor, not so with LED. Those are both benefits I appreciate.
CFLs turn on instantly unless they are in freezing weather. If yours don't, then get better ones - Walmart has really good ones.
The mercury vapor in CFLs is not very dangerous; it's elemental mercury, which is not especially toxic - you need continuous exposure over a long time for it to cause any problems. One CFL is not going to do anything bad. How many CFLs do you break anyway? I've never broken a hot one, only a cold one that fell off the shelf, and cold ones don't release any mercury at all.
Buying an LED is an easier sell these days since the price has gone down, and the brightness has gone up, but I hope you did not avoid CFLs before now - that would have been very shortsighted.
I just had some LED bulbs burn out ~9 months after purchase.
Any potential cost savings just became really hard to recoup. I'm what, 6-10 years out now to save $30 or so?
Cheap incandescents give great quality light. Expensive LEDs give good quality light. Cheap LEDs ruin sleep patterns. The race to the bottom (and a lack of consumer awareness or ability to buy high-quality bulbs) means LEDs are going to end up causing quite a negative impact for a non-trivial number of people.
FWIW my HOA put in shiet blueish LEDs everywhere outside. The entire complex now looks like a zombified wasteland. Evidence that I need to attend more meetings.
Exactly. And how long until it achieves parity in the cloud hosting world? Hard drives and memory are so expensive there that it seems like you could buy them outright every other month.
My need for SSD is rather modest. I have an SSD big enough for the OS and things I use often, and a huge spinning secondary drive for things I rarely access.
That's a good pattern, one I use too. But that's not what the article was about - it was making the case that SSD would soon reach similar capacities as spinning HDs, and at similar prices. I'm skeptical too.
I believe we are already at the point where SSDs are large enough and cheap enough for the majority of use cases. It's only going to get better. There has always seemed to be a price floor for hard disks, where making them smaller doesn't make them any cheaper, and I'm wondering if the floor won't be lower for SSD.
I use spinning disks for archival backups. It's been reported that SSDs aren't reliable for that. Hard disks also are questionable for long term storage, which means I rotate them often, but still.
Anecdotally, USB sticks I throw in a drawer tend to go bad after a few years.
I'd still love to replace all the spinning disks with SSDs, though. Faster, smaller, less power, less fragile, silent, what's not to like? Hope the longevity issue gets better.
In a side note in the article, the author mentions that Amazon Glacier runs on tape. Although I'm not sure Amazon ever officially explained the technology, I heard (and read on Wikipedia) that they store data on conventional HDDs that are, however, kept "offline".
I thought degradation of optical media was already pretty well understood? (And happened on timescales generally longer than FB has been in operation.)
I've done accelerated aging on polymers (medical, but I bet it's similar).
We have an accelerated aging experiment, and we sample the part for degradation throughout the test. We also "shelf age" another test group of polymers from the same batch by placing them on the shelf.
Every year or so (or sometimes 5 year spans), we take the shelf samples and look for degradation, and compare it to the accelerated data.
In one polymer's specific case, free radicals in the polymer chain slowly react with oxygen and that's how it breaks down (it takes about 10 years if the polymer isn't stabilized), so our accelerated aging is to put it in a pressure vessel full of high-pressure oxygen and apply a little heat. We can get 10 years of degradation (roughly) within 2 weeks.
This is almost certainly what the archival blu-ray format has done and is doing.
I was too young back when optical media were everywhere to search for actual data, and the public claims were contradictory (5 years, 10 years .. more ..). And that was at CD-ROM density.
I'm probably explaining something most everyone here already knows, but:
I don't know what Glacier uses, but the "offline" storage is much the same as how mainframes use tape - disk-based files (datasets, in mainframe speak) are migrated to tape after a given amount of time to decrease storage costs. These datasets need to be recalled back to disk to be used again. This is nearly identical to how Glacier operates with S3, as explained by Amazon.
The reason tape isn't available immediately is that a tape cartridge, unlike a traditional hard drive, doesn't contain its own reader. So many tapes typically sit in machines with only a few readers, which move back and forth based on requests made to read the tapes. If there are many requests, the amount of time to recall a dataset increases.
Glacier may or may not be using tape, but the behavior is nearly identical to how tape has been used in past systems.
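A toy sketch of that migrate/recall pattern, purely illustrative - not Glacier's or any mainframe's actual implementation, and the 30-day policy is made up:

    from dataclasses import dataclass

    MIGRATE_AFTER_S = 30 * 24 * 3600      # made-up policy: migrate after ~30 idle days

    @dataclass
    class Dataset:
        name: str
        last_access: float
        tier: str = "disk"

    def migrate_idle(datasets, now):
        for d in datasets:
            if d.tier == "disk" and now - d.last_access > MIGRATE_AFTER_S:
                d.tier = "tape"           # cheaper, but no longer immediately readable

    def read(dataset, now):
        if dataset.tier == "tape":
            # In a real system this request would queue for one of a handful of
            # shared readers, which is why recall latency grows with demand.
            dataset.tier = "disk"
        dataset.last_access = now
        return f"contents of {dataset.name}"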
You're referring to https://en.m.wikipedia.org/wiki/Hierarchical_storage_managem.... AFAIK, most systems work at the file level, but if your address space is large enough, you could implement this as a virtual memory system that swaps RAM to disk, disk to slower disk, slower disk to tape, tapes to tapes stored offline, etc.
I don't know what Amazon uses either, but I expect Glacier data to be pushed out to some layer that doesn't need power at all while idle, heavily checksummed and also replicated across the globe. The relatively heavy price tag for accessing it is probably there to keep contention on some relatively thin pipe low. For example, if they use Blu-ray, they may have only a few readers online for every 100 or 1000 customers, both to keep prices low and to differentiate Glacier more from their other products.
> if your address space is large enough, you could implement this as a virtual memory system that swaps RAM to disk, disk to slower disk, slower disk to tape, tapes to tapes stored offline, etc.
Funny to imagine,
- My program has been stuck copying a variable for over half an hour!
- Yeah, there was a page fault and the tape loading guy still hasn't come from lunch
Yes, but the other side of the coin is "why are those pages of the program still on disk? They haven't been touched in years". But such systems tend to keep parts under human control.
You rarely ever need your fire extinguisher, but that doesn't mean the LRU replacement algorithm that keeps your house clean should move it to the back of your attic.
Back in the day, before robotic tape libraries, tapes were stored in silos, like a library, and employees would find the cataloged tapes and manually retrieve and load them into tape readers. My company used to hire co-ops to do this back in the 70s or so.
Nothing authoritative. The links down-thread are almost all Robin Harris speculating about what it might be based on using various back of the envelope calculations. I'm actually pretty amazed that Amazon has been able to keep even minimal details off the web as far as I can tell.
The closest I've seen to a direct statement from Amazon is: " An Amazon statement sent to Ars says only that "Glacier is built from inexpensive commodity hardware components," and is "designed to be hardware-agnostic, so that savings can be captured as Amazon continues to drive down infrastructure costs."" http://arstechnica.com/information-technology/2012/08/for-on...
FWIW, Google's competitive Nearline service almost certainly uses disks:
The difference between Google’s Standard Storage and Nearline Storage slices on its public cloud comes down to a software layer, Tom Kershaw, director of product management for Google Cloud Platform, tells The Platform. We also think there might possibly be extra physical distance between the devices hosting the Nearline Storage and servers residing in the Google datacenters and network links that expose Cloud Platform services to the outside Internet. The Nearline Storage could also be running on slightly older equipment that is heavier on the disk drives but that is not beyond its technical or economic life.
I think a lot of people may have confused Google's tape storage systems for Amazon's. I don't think Amazon is using tape, but the big G most definitely is:
Yes please. I was also intrigued by this statement. I was under the impression that Amazon never told what the "secret sauce" was behind Glacier. Looking at Wikipedia now, it seems there are still multiple contending theories.
Also, is it true that the service is not very popular? Are people here using it or similar?
The big sticking point with Glacier is that the cost model for retrievals is complicated and unintuitive. There's a two-step process where you have to first retrieve data and then download it, and you're billed based on the peak retrieval rate in a month, not the total amount downloaded. It's not so bad now that they let you set a retrieval policy to limit your throughput, but before that, you had to carefully schedule your retrievals in order to avoid unexpected costs.
As far as I can tell, Google's new nearline storage offering is superior to Glacier in nearly every way. The retrieval is much faster, and the retrieval costs are a flat rate per gigabyte that's equal to what Glacier gives you in the best case. Only drawback is the lack of a free tier.
You're not wrong, but if you look at the problem that Glacier is trying to solve then it doesn't look so bad:
1. Glacier is competing with tape. So for potential customers, if it's easier to acquire and maintain than your own tape library then that's a win.
2. In the words of James Hamilton, the use case for Glacier is referred to jokingly as "Write Only Storage". If you're using Glacier for disaster recovery, then paying a one-time cost to retrieve a bunch of data pales in comparison to the potential cost of not being able to recover your data at all.
3. Is nearline actually superior for the case of "oh shit I need to retrieve everything right now"?
4. AWS already has this cloud storage thingy called S3, and it has Reduced Redundancy Storage if you're looking for something slightly cheaper.
I guess you've got a good point as far as #3 goes; the analysis is actually more interesting than I thought. Both Google's Nearline storage and Glacier scale your retrieval capabilities proportionally to how much data you have stored with them. You can express it as a tradeoff between cost/GB and the time taken to retrieve your entire dataset.
Google throttles your downloads to a fixed rate, so they're effectively forcing you to one point in that space: (3 days, $0.01/GB). With Glacier, you get a curve depending on how fast you want it. It's much more expensive at the same speed (3 days, $0.10/GB) but you can go to one extreme of getting all your data for free over the course of 20 months, or getting it all within 4 hours for about $1.80/GB.
So I guess it's fair to say that Glacier gives you more flexibility, but Google's offering is a much better value at the particular point that it optimizes for.
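To put some numbers on that, here's a rough comparison using only the per-GB figures quoted above and a hypothetical 10 TB archive; none of this is official pricing, just the points mentioned in this sub-thread:

    dataset_gb = 10_000  # hypothetical 10 TB archive
    options = {
        "Nearline, ~3 days (throttled)": 0.01,  # $/GB
        "Glacier,  ~3 days":             0.10,  # $/GB at a comparable speed
        "Glacier,  ~4 hours (rush)":     1.80,  # $/GB at the fast extreme
        "Glacier,  ~20 months":          0.00,  # spread within the free allowance
    }
    for label, per_gb in options.items():
        print(f"{label:32s} ${dataset_gb * per_gb:>9,.2f}")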
Honestly it's hard for me to imagine a total emergency restore-EVERYTHING situation that needs the data in under a day.
If it's not that much data you should have a local copy anyway. If it's massive amounts of data is your connection even that fast? And how did you wipe 50 servers at once?
That's destruction, not wiping. If you are buying new servers as part of rebuilding you don't exactly need all the data downloaded in a 10 hour window.
That's an absolutely irrelevant distinction: The data is gone.
And why, in this day of rapid provisioning via cloud providers, would you expect a company to suffer the extended outage of waiting for new servers before bringing the data back?
So you're talking about a company using entirely physical servers and switching to an entirely virtual infrastructure, while also dealing with the other effects of the disaster that hit them.
They're doing this in less than half a week, or, assuming not all of the backups are live data, in less than a day.
I submit that this is an extremely atypical company, and that in practice a company that just had a devastating fire is not going to be impacted more than marginally by the download speed limit.
There are a good few scenarios where there's total data loss. There are much fewer where you need to restore all the data in under a working day.
My home Synology unit does nightly backups to Glacier. It'd be terrible to lose our house to natural disaster, but losing all our digital data (photos, records, etc.) would make it just that much worse. For about $10 a month, I know I can recover all that stuff even if we lose everything else.
I can't comprehend trusting data to a (single) 30TB SSD or even 10TB SSDs given the regularly reported failure modes of "It's not a drive anymore."
Perhaps it's because I'm not doing anything where massive storage is a requirement, but having data striped in RAID6 makes me happy. I'd be happy to use SSD for the underlying medium, but I want something that's less prone to single points of failure. Backups are all well and good, but what's the interface speed of a drive like that? How long to restore a 20+TB backup?
At least with SSDs, there's much less worry about your RAID 5/6 array self-destructing while it's failing over to a hot spare. With hard drives of that size, I'd only feel comfortable with full mirroring.
Who in their right mind still uses RAID5 with spinning disks in a production system? That's like never wearing a seatbelt and constantly driving above the speed limit.
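For anyone who hasn't seen the usual back-of-the-envelope for why big-disk RAID5 scares people, here's a sketch. It assumes the commonly quoted consumer spec of roughly one unrecoverable read error per 10^14 bits read; enterprise drives are rated considerably better, so treat this as illustrative only.

    import math

    ure_per_bit = 1e-14                       # assumed consumer URE spec
    drive_tb = 10
    surviving_drives = 3                      # e.g. a 4-drive RAID5 rebuilding onto a spare
    bits_to_read = surviving_drives * drive_tb * 1e12 * 8
    p_clean_rebuild = math.exp(-ure_per_bit * bits_to_read)
    print(f"Chance of a rebuild with zero UREs: ~{p_clean_rebuild:.0%}")   # roughly 9%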
Thanks, it was an honest question. I'll look into OBR10.
Currently I'm at a small-ish startup and I have ~150 TB of data on a synology nas. It's in 2 raid groups, both raid 6. Seemed safe enough.
Edit: Ah, right. For some reason I didn't make the connection that RAID 10 is RAID 1+0. It doesn't work for us simply because of the sheer amount of data we need to store. Also, although it would be a real bitch if we lost everything (it would set us back a week or more), most of the data is derived data, and we have all the raw data in cold storage, so we could eventually rebuild the array from scratch.
I can't wait. Once I can reasonably replace everything with SSDs I will.
And then I'll never buy spinning rust again.
I dreamed of this eventuality in the 1990s. I didn't know what technology would be used. I never expected EEPROMs would evolve in this direction (now marketed as "FLASH") as they were so fickle and unreliable in those days.
This will allow lots of interesting things.
Imagine a laptop that's got its main RAM backed on a dedicated bit of high speed flash... so it instantly powers off completely and instantly powers back on completely.
Soon we'll have a massive increase in storage capacity, much like the massive increase in compute capacity, to the point where you no longer really think about it: all computers are "fast".
You can cite industry improvements in the manufacturing process that companies say will affect the price at a certain point, along with scientific developments and, in general, claims made by actual corporations.
I mean you can cite predictions but unless there is trend data to back them up I'm not exactly sure how useful they are.
I think wmf is looking for citations of things like trend data and maybe even plans from SSD manufacturers that shows "hey we're working on this cool tech that should help us lower costs by next year!" types of things.
Not killed off so much as delayed due to lack of ability to mass produce at a decent price point so far. It is far from scrapped. It was over hyped and optimistic to begin with. It just hit the reality that it has several decades of work to catch up on to be on the same page as current tech.
While memristors may eventually become an important part of a balanced electrical component diet, I'd say the hype has failed to pan out. Don't hold your breath. Even if it does eventually manifest, it's going to take longer than you can hold your breath.
I believe the original idea for a memristor came about in 1971 [1]. We still have yet to see it in a physical consumer product. I'm with you - sounds great, but I'll believe it when I see it.
Not really. The charges and currents in a flash chip don't interfere with each other for the same reasons that they don't in any other chip. There's a minimum distance you have to keep things apart but luckily it gets smaller every process node. Heat would be a problem if it was like a processor but you're generally only reading from or writing to one part of the chip at a time so heat isn't a particular concern compared to a processor or DRAM.
Yes, it's very susceptible to interference, through coupling of the charge on the floating gate of the cell and through voltage differentials on the interconnect lines during reading and writing.
But 2D NAND storage has essentially hit peak density. The gate is only a few atoms across.
The only way to increase density is to build up. They had to go back to larger transistor sizes because of all the interference; IIRC from 15-16nm to about 60nm.
Complaining about overpriced/limited storage options in the iPad/iPhone is like complaining about overpriced/limited food options at a sporting event, airport or any other venue where you've got no other option than to buy from an approved vendor.
There's no lock-in at play here for most people. You can get a tablet/smartphone from several vendors, including Samsung, Google and Microsoft, and people switch from iOS to Android and back all the time. It's not like apps are a huge investment or that it's hard to move your data across.
These people WANT to buy the iPad/iPhone, instead of something else. So a more accurate response would be "with desirability come higher prices".
We have students who WANT to buy Apple products like crazy at the school I work at. The majority of responses for why they want them is, "They look pretty."
Well, they do. But as a seasoned IT pro who cut his teeth on Sun OS and HP-UX back in the day, having a high end machine, with a great screen, great battery life and quite light to carry around that also has a friendly UI and runs a full blown UNIX underneath but also gets all major proprietary programs is nothing to sneer at either, pretty or not.
There's a reason even Linus Torvalds carried a MacBook Air (though I think he found something even lighter now).
No, they want to squeeze an extra $100 out of you for $5 of flash storage. Most people will pay it too because 16GB is pretty crippling.
It's like buying a new car that comes with a 5 gallon gas tank that costs $40,000 and having to pay $50,000 for the same model except with a 10 gallon tank. Everyone knows it's a rip off but the car is really cool otherwise and there is no way to change the tank yourself.
Apple's storage pricing model would be unsustainable in a competitive environment. Entry level products with 16GB exist solely to establish a 'free' subsidized level while practical use requires a US$750 64GB model.
Ordinarily Samsung and Moto would make a phone of similar quality and charge the market rate of $8 for the 64GB upgrade to undercut Apple.
Unfortunately it's not a competitive market. Apple's 2013 iPhone 5S still has lower power use and more processing power than current Samsung phones, because Apple invested in chip design and shipped an AArch64 chip Samsung can't match even with two years (1.33 Moore's Law cycles) of process upgrades and access to Apple's tapeouts.
Apple's screen process is still brighter with more faithful colors. Apple's case design is still more fashionable (even though I don't like the latest ones, they look slick). Apple's net software and services like maps and backup are catching up to Google and ahead of M'soft or anyone else. Apple still has better control over power consumption and background processing in its OS. Apple's user experience is still clearly better and easier than Samsung or Google.
A lot of that is Google losing interest in Motorola, the Nexus brand, and Android. The Nexus 5 was the last inexpensive high quality pure Android effort and it wasn't profitable. Motorola wasn't worth keeping. Android is good enough to keep Apple and Microsoft from squeezing Google off the net and that's all Google needs. Google hasn't even updated Android to take advantage of AArch64 which cripples all the Android phone makers against Apple.
Ultimately, Google knows that they can't compete against Apple because carriers have contempt for quality. No other maker but Apple has loyal and lucrative customers that will switch carriers to keep using Apple devices. Thus no other manufacturer has leverage to get the best quality devices it can make to users. Carriers want your phone crippled because they dream of controlling you and governments hand them monopolies so they can control your device maker.
The result is that only Apple can make a top flight device. And therefore they can use monopoly price discrimination tactics to bleed extra money from you for 64GB.
It beats the alternative, which is carriers ruining all devices including Apple's. I like Apple robbing me a lot more than AT&T or Telcel or Softbank or NTT or China Mobile or Verizon.
Not to mention the crazy price structure of MacBook Pros. The 13 inch model with a 128GB SSD is $1299, but if you want 256GB, you have to shell out $200 more. I don't think this price difference is justified just by SSD prices. Is this just Apple being stubborn with their price marketing? Is it to preserve the perceived resale value?
They are talking as if 3D NAND is free. The maximum of 64 layers is not yet production-ready AFAIK. Even 32 is too expensive; we are talking about 8 or 16 layers coming soon, which, when you include the two-node step back, amounts to a 2x to 4x improvement. 3D NAND isn't free to manufacture either. When you add up the costs, 3D NAND will be no more than a continuation of SSDs' falling prices along their current trend in the coming years.
P.S. It just means the life of NAND will be prolonged, and SSDs will continue to get faster, higher capacity and cheaper before hitting their limits.
I find great irony in statements like these when the motivation of every party is considered. The idea is that prices are "in a free fall," i.e., they haven't finished falling yet. But this will encourage people to buy SSDs, driving the very free fall of which the article speaks!
Experienced the same principle in my life a few weeks ago - I moved to a place described as "gentrifying." Realized a week into staying there that I was one of those people who was actually contributing to its gentrification! By no means was it gentrified, though.
At least you were told it was "gentrifying", not that it was "gentrified". You probably got a discount for moving into a neighborhood that was gentrifying. Once gentrified, the discount probably largely disappears.
Flash has had a drop in price because it gained mainstream utility, and therefore serious volume production. It's entirely unclear whether this will continue.
The physics of flash cells is not all that promising: there's certainly no Moore's gravy-train. 3D is a decent tweak, but it's not like 4D is coming next. As flash cells shrink, they become flakier (which might not hurt drive-writes-per-day, but is that the right metric?)
SSDs should suffer the same bit-rot other flash does.
JEDEC specs require that a drive that has used 10% of its rewrite capacity hold its data for at least 10 years, decreasing to at least one year for a drive that has used 100% of its rewrite capacity. So, to meet spec, a consumer drive should fall somewhere between those timeframes. However, cheaper drives may well not fully meet this spec; that is common in many other manufacturing processes too.
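If you want a feel for what "somewhere between" might look like, here's a toy interpolation between those two endpoints. The log-linear shape in the middle is purely my assumption for illustration; it is not part of the JEDEC spec.

    # Endpoints from above: >=10 years retention at 10% of rated endurance used,
    # >=1 year at 100% used. The curve between them is an assumed illustration.
    def min_retention_years(wear_fraction):
        wear_fraction = min(max(wear_fraction, 0.10), 1.00)
        t = (wear_fraction - 0.10) / 0.90
        return 10 ** (1.0 - t)

    for wear in (0.10, 0.25, 0.50, 0.75, 1.00):
        print(f"{wear:4.0%} of rated writes used -> >= {min_retention_years(wear):4.1f} years")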
The bit rot also depends on many other factors, the most important being temperature (you cannot outwit statistical mechanics). The next one is a combination of hot neutrons causing bit flips from impurities in the material and from cosmic rays, the latter of which can be reduced by choosing better storage locations.
Better controllers could understand the decay modes and periodically read and rewrite data to maximize expected lifetime, so the drive needs power to achieve maximal expected lifetime.
In practice, all of this is not of much use to everyday people [1]. But if you want to maximize your data retention, study all these issues.
Here's a case where the authors claim bit rot happened within three months offline [2].
It should have no impact. It really is about write capacity. The little cells can only handle being changed so many times before they wear out. The wearing is a chemical/semi-conductor type problem.
It's not like DRAM, where keeping power applied keeps the cells "fresh" (and in DRAM that's done literally by refreshing each cell at a certain frequency).
The simple equation is: the more times a cell is erased, the sooner it fails. Wear leveling and other software efforts work to spread out or eliminate unnecessary writes.
But over the practical life of a drive (5-10 years?) being powered off will not cause them to lose data.
Edit: I've been following FLASH technology since before the term "FLASH" was coined, back when it was called EEPROM (Electrically Erasable Programmable Read Only Memory), an evolution of PROMs (write once) and EPROMs (erasable with UV light). If there's a myth out there that flash drives just forget things, I'd like to point out that I've had flash drives (low quality at that) from the 1990s that were left for 20 years and then read successfully! Show some citations of real evidence before you downvote me.
That's incorrect. Charge leaks very slowly out of cells. It's not a big issue, but it does happen. Powered-on SSD firmware knows about the problem and will maintain the cells.
The warranty is certainly not "it will keep the data for 10 years without being powered on." The question here is how long SSDs retain data when powered off. Intel says that, powered off but stored at just 35°C, it's only 14 weeks, as published by AnandTech:
And Intel is proud to be more reliable than the others. In one of their presentations I've seen, they show that they use some very high-tech setups (like accelerators!) to measure the degradation quickly, and that the other manufacturers fare worse in their tests.
I find it surprising that most people aren't recognizing that SSDs don't even need to come close to price parity to completely replace hard drives. Random access alone makes them so much more valuable, and a good SSD is now worth more than extra memory in terms of increasing the performance of a computer system.
Having had a failure rate above what I was used to with HDDs, I'd prefer to see an increase in quality rather than a fall in price. It gets annoying having to replace my OS SSD every 18 months or so. My most recent phone's memory also died after 30 months.
Uh, what kind of SSDs are you buying that you're having that kind of failure rate (god, please don't say OCZ)? I've been using Samsung and Intel SSDs for years now with failure rates below those of WD drives (and well below Seagate failure rates).
SanDisk. They have always been quick to replace them. The last one they even upgraded to a much more expensive model, which makes me think the issue was being widely experienced. If I were to buy today I would buy Samsung, as I have heard they are much more durable. Incidentally, it was a Samsung phone that died, though I don't know who made the memory for that.
There was a review of SSD lifespan a while back; they found the Samsung 840 Pro was able to survive something on the order of 2 petabytes of writes before failing. The worst of them started to fail at 100TB and died completely at 900TB. Unless you're running something with crazy IO, I can't imagine ever practically running into that limit.
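For scale, here's what 2 PB of write endurance works out to at some daily write volumes. The write rates are my own assumed examples; the 2 PB figure is the one reported above.

    endurance_tb = 2000  # ~2 PB, the figure reported for the 840 Pro above
    for gb_per_day in (20, 50, 200, 1000):
        years = endurance_tb * 1000 / gb_per_day / 365
        print(f"{gb_per_day:>5} GB/day of writes -> ~{years:,.0f} years to exhaust 2 PB")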
Yeah, but the problem is that SSD failures usually have nothing to do with the flash wearing out, and more to do with catastrophic, sudden failures somewhere else (mostly the controller, AFAIK) causing the drive to stop doing anything at all. I had an 840 Pro fail on me; it hadn't seen many writes but just stopped showing up in the BIOS one day.
Same issue here with a Samsung Galaxy Note 2 whose flash memory died. Luckily (or was it luck...) it died about a month before the 2-year warranty ran out and I had it replaced within a week for free. Pretty unheard of service, but still, I've always wondered why that thing died.
Why? I have a 1TB Samsung 840 EVO and I updated the firmware earlier this year as the previous version apparently had some performance problems with data written long ago.
> Additionally, wear-out isn’t an issue with SSDs. Those two node uplifts in the manufacturing process add literally years to the device life, and the economics of 3D NAND allow for extra over-provisioning, making the write life of the drive well beyond its time in a data center. This is especially true for archived storage, where writing is at a much lower rate.
I seem to remember SSD manufacturers rolling out similar perks in their marketing spiel years ago. It would be great if this is for real. The drives I have had issues with have been high write drives and I knew they wouldn't last for ever but they lasted an incredibly short time even when taking this into consideration.
Did they fail as in "some/all data lost", or did they switch to RO and you could at least get your data off them? Thinking of replacing my current home server HDs with SSD but worried about robustness...
The last one, I could secure some but not all of the data before it completely failed. It failed gradually over the course of the day while I was trying to rescue it. It was strange. Can't remember the previous SSD, but the phone was flat-out dead.
>Mechanical failures account for about 60% of all drive failures.[2] While the eventual failure may be catastrophic, most mechanical failures result from gradual wear and there are usually certain indications that failure is imminent. These may include increased heat output, increased noise level, problems with reading and writing of data, or an increase in the number of damaged disk sectors.
SSD technology still has a reliability hurdle: you can nuke one in a month if you write to it constantly. Spinning media has much longer read/write durability. Until there is parity, it's still a bit of an arms race for capacity.
Hybrid drives are pretty awful, so I don't think they'll stick around. I could see Hierarchical Storage Management making a comeback and an entry into the consumer space, but that's been esoteric at best.
True, and I've done it! I accidentally used a consumer-grade drive (Crucial M500) in an application where its endurance would be reached in less than a month.
In fairness to Micron/Crucial, the drives did not lose any data.... but the write bandwidth degraded down to 10MB/s which counts as a failure for most apps. That's the catch in the report you cite: the bandwidth of a used-up drive is so poor that it's hard to actually write enough extra data to fully-fail.
Edit: Burning up an SSD was how I learned that TRIM is unnecessary: even though many parts of the filesystem were stable and never rewritten, the wear was even across the entire drive. The firmware noticed that some cells were underused and moved the stable data off them so it could use up the endurance evenly.
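In case it helps, here's a toy model of that static wear-levelling behaviour. The block counts, threshold and write pattern are invented, and real firmware is far more involved; this only sketches the "move cold data off lightly-worn blocks" idea.

    import random

    # When the spread between the most- and least-worn blocks grows too large,
    # relocate cold (never-rewritten) data off the least-worn block so that
    # block can absorb new writes, keeping wear roughly even across the drive.
    NUM_BLOCKS, GAP_LIMIT, WRITES = 16, 8, 20000
    erase_counts = [0] * NUM_BLOCKS
    cold = set(range(8))                   # these blocks hold data that never changes

    for _ in range(WRITES):
        hot = random.choice([b for b in range(NUM_BLOCKS) if b not in cold])
        erase_counts[hot] += 1             # normal write traffic lands on hot blocks
        coldest = min(range(NUM_BLOCKS), key=erase_counts.__getitem__)
        if coldest in cold and max(erase_counts) - erase_counts[coldest] > GAP_LIMIT:
            dest = max((b for b in range(NUM_BLOCKS) if b not in cold),
                       key=erase_counts.__getitem__)
            erase_counts[dest] += 1        # destination block is erased and reprogrammed
            cold.remove(coldest)           # lightly-worn block rejoins the hot pool
            cold.add(dest)                 # cold data now parks on the heavily-worn block

    print("erase counts:", erase_counts)
    print("max - min   :", max(erase_counts) - min(erase_counts))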
"In fairness to Micron/Crucial, the drives did not lose any data.... but the write bandwidth degraded down to 10MB/s which counts as a failure for most apps. That's the catch in the report you cite: the bandwidth of a used-up drive is so poor that it's hard to actually write enough extra data to fully-fail."
You call that a catch? I would sell it as a feature. It's way better than "looks OK until t minus 1, can't read or write from time t onwards" or lesser variants of the "try reading everything a few times, and you may get 90% of your data back"
Can you please elaborate on what application would basically kill an SSD in a month? It is my understanding that the Crucial M500 drives support something like 72TB of writes.
Please don't take any offense, I'm just interested in how you were using the drive.
While you are correct that you can nuke a normal SSD in a month, you can buy high-endurance drives which will last much longer. For example, you can buy a p400m with 7.5PB of write endurance [1]. It's just a question of making sure you buy a part that matches your need.
If you're writing more than 7.5PB in a month... I'm prepared to be amazed :-)
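Just to spell out how absurd that would be:

    # What writing through 7.5 PB of endurance in a single month would take.
    petabytes = 7.5
    seconds_per_month = 30 * 24 * 3600
    gb_per_second = petabytes * 1e6 / seconds_per_month   # 1 PB = 1e6 GB
    print(f"~{gb_per_second:.1f} GB/s sustained, around the clock, for a month")
    # ~2.9 GB/s: several times what a single SATA3 link (~0.6 GB/s) can even carry.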
Endurance is really not a concern for the vast majority of consumers. The media wearout indicator on my 19-month old cheap TLC drive has dropped 2%, and I'm a fairly heavy user.
The vast majority of SSD failures seem to be a result of firmware faults.
These estimates seem a bit exaggerated at this point. SSDs are still lagging in terms of capacity, and in my household we only have a handful of devices using SSDs. While they've become more widely adopted for sure, HDDs are not "doomed" until SSD capacity goes up and prices plummet. For bulk storage, SSDs are still too expensive.
> Also, with much lower power use, there is a TCO saving to be added to the equation. Power savings work out at around $8/drive-year, so add another $40 to the 5-year TCO balance and the hard-drive doesn't look so good.
And I wonder if Mr. O'Reilly has included the cost of power for cooling.
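The $8/drive-year figure is at least plausible. A quick sanity check, where the power gap and electricity price are my own assumptions rather than anything from the article:

    watts_saved = 7          # assumed idle-power gap between a 3.5" HDD and an SSD
    price_per_kwh = 0.13     # assumed electricity price, $/kWh
    hours_per_year = 24 * 365
    dollars_per_year = watts_saved / 1000 * hours_per_year * price_per_kwh
    print(f"~${dollars_per_year:.2f} per drive-year, before any cooling overhead")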
Consumer SSDs are now inexpensive enough for desktop RAID. On Amazon, decent consumer 240GB-1TB SSDs are available at $0.32-$0.40 per GB. And in my experience, SSDs are so fast that even RAID6 arrays rebuild very quickly after replacing a member. I use Linux software RAID.
Also the noise of the HDDs is killing me.
I have quite a quiet rig (I fiddle with music/audio in my spare time) and the noisiest thing in it was the HDD where my audio samples lived. So I've moved everything to SSD and I'm looking for another 512GB to add.
I bought my 180GB SSDs for under 50 cents per GB about 3 years ago, and it's barely better than that right now. Storage prices in general have more to do with availability and demand than technology; there have been plenty of times when they've gone up.
For me it's like the Osborne effect coming into play. I would like to switch to full SSD, but I'm not going to do it until the price drops, which, if enough people had the same idea, means the demand drops, so the price stays the same... repeat.
Think about the market dynamics. Not just the 3D technology they are talking about, which brings parity within range, imagine what you would do if you were an SSD maker and this was on the horizon.
You'd rush to reach parity, because as soon as that happens there's no reason to buy an HD over an SSD. So you're going to want to cut prices as fast as possible so that when the market switches over (which will be a HUGE tipping point for the industry) you can grab as much market share as you can. Even taking a loss now on a reliable drive may get you customers for life who you can profit from later.
I think the transition is going to be extremely dramatic, and it's a huge opportunity in this industry. I think we'll see HD makers switching their product lines over to more and more flash, and more and more flash drives, while re-positioning hard drives for the particular environments where they still have an advantage (I'm not sure where those are).
But this is going to be a tornado of a market, and some people are going to become very rich as a result.
"By packing 32 or 64 times the capacity per die, 3D NAND will allow SSDs to increase capacity well beyond hard drive sizes. SanDisk, for example, plans 8 TB drives this year, and 16 TB drives in 2016. At the same time, vendors across the flash industry are able to back off two process node levels and obtain excellent die yields."
I have no doubt that we will see devices that big. The question is whether anyone will buy them. So it's really a question of cost. HDs will be dead next year if I can buy a 10TB SSD for $300. If it costs $4000, I'm sure HDs will last another couple of years.
It all really depends on the SSD curve. If SSDs continue their price decline and capacity increases as this article mentions, it'd be difficult for HDDs to compete. Tape costs you no power in storage, but HDDs do unless you're powering them down and spinning them back up (which causes significant wear on the motor). SSDs get you cheap storage and lower power consumption, negating both tape and HDDs simultaneously.
TL;DR: SSD advancement is set to displace HDDs and tape in the near future.
SSDs need to be powered up fairly often to maintain data integrity. So they're great for the drives in computers, but for external or backup drives, which might only be powered up occasionally, spinning magnetic disks will still be a better choice for a while, it seems.
edit: maybe not. Bringing this article up from samcheng's post below:
Note that that "proof" is not proof at all. Ignoring that it's anecdotal evidence on the sample of one, the author didn't have the checksum of the data on his SSD and didn't try to recompute it, he just tried to boot the computer with the SSD after being turned off for some time and then installed the new OS.
Note that the older the SSD (from a technology standpoint), the better the retention you can expect, as the miniaturization of the data cells (which sinks costs and increases capacity) significantly lowers retention. So his old SSD can behave better than a new one. And a few lost bits don't mean that he can't boot, as long as they're in places that don't change the behavior.
Needless to say, that netbook stays shut down for months but it still works when I turn it on. It survived many summers at 30 C. I don't know which kind of SSD it contains but being from 2008 it shouldn't be anything too fancy.
As I've said, the older the SSD (technologically) and the lower its capacity, the better the retention you can expect. My 256 MB USB stick also still works, but one I bought much later, 16 GB, died after only one summer of sitting unused. And that anecdote too really doesn't prove anything. The worst cases are when just a few bits change and you don't notice. Then a few bits more change... etc.
The problem has to do with the temperature you write at and the temperature you store it at. If, for some reason, you're writing in a very cold environment and storing in a very hot one the lifespan is diminished. If it's the opposite, you're golden.
The title is pure irony ("SSD Storage - Ignorance of Technology is No Excuse"). See the other link posted here. Also, from personal experience, I have had no problem with an SSD that wasn't powered for 7 days.
The author of that piece is actually somewhat confused about how SSDs work. Data retention is a matter of the stored charge leaking. An SSD doesn't have any means of refreshing that charge on its own; the same timer is running whether the drive is on or off. In fact it will lose data faster if the drive is on, since the drive temperature will be higher. Luckily this sort of data loss isn't a big deal in practice, for the reasons outlined elsewhere in this thread.
pmontra and Bill_Dimm (edit: and cjensen!) post contradictory blogspammish articles (through no fault of their own - what else is out there?) which both reference the same JEDEC slideshow which has popped up a few times in the last few months.
This is an important enough topic that I wish a better reference existed.
So what if they have to be brought up once in a while? Who cares? They use so little power that it's really not an issue to bring the drive online, even frequently.
I think that we'll just see magnetic disks go away, and tapes pick up the long-term storage market. Pretty much nothing happens when you drop a box of tapes.
> Likewise, DVD-type archive storage will need a magic trick or two to remain in the race. A terabyte of DVDs will cost more than a terabyte SSD and that isn’t including the DVD library unit.
The article thinks that SSDs will be used for archives.
Not enterprise-grade but consumer, and totally anecdotal, but I bought a 500 GB Samsung EVO 840 SSD for less than 200€ a month ago. Given the specs (performance and capacity) of that drive and my experience buying a 120 GB SSD about a year and a half earlier, I expected to pay maybe twice that.
Even if the price fall slows down a bit, we're quickly getting to a point where I don't see why you would buy an HDD at home for anything other than your consumer NAS or similar. Anyone who has tried an SSD surely isn't willing to go back to an HDD for anything other than their "archive/storage" drive. And if this article's prediction is true, soon even that will be taken over by SSDs.
Cheapest 1 TB HDD on Amazon: $40
Expecting a 10x drop in prices in 1 year is ludicrous. Even if the prices follow a pseudo Moore's law and fall by half every 18 months, you're looking at at least 5 years before they reach parity. And in the meantime HDDs would have gotten cheaper; so expect even more time for parity.
In other words: ain't happenin' in 2016.
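The arithmetic behind that "at least 5 years", for anyone who wants it (the halving-every-18-months assumption comes from the comment above):

    import math

    # Closing a 10x price gap at one halving every 18 months.
    halvings_needed = math.log2(10)                # ~3.32 halvings
    print(f"~{halvings_needed * 1.5:.1f} years")   # ~5.0 years, more if HDDs keep getting cheaper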