"feature film at 2K is pretty much 10 TB of data. Even offloading that sort of pipeline from your local machine to a machine down the hall invokes a SAN."

Well yes, that's the point. Instead of building monster desktop machines, you build monster server machines and monster storage arrays, in bulk. 10 TB might be a lot to a desktop, but for AWS, it's a drop in the bucket. My last reference for video work (also a few years ago) was being told that you'd want at least 16 GB for compositing, though more was always better. Let's pull numbers out of my ass and quadruple that for today. I can find one of those on EC2 already. Not only that, it's barely $2/hr. Storing 10 TB? Fire up a couple of EBS volumes and stripe your data across them. Note that I Am Not A Network Engineer (professionally, anyway), so AWS might not be the right service for the job. Chances are, this is not optimal advice. Additionally, you still need a service that provides a low-latency connection to your artists' thin clients, and I don't have a good reference for EC2 instances' latency over time. But hey, it's a start?

Now to work through the rest of your questions: Above entry level, below $1k in final price. Costs are shrinking, though still not non-existent. Ridiculously overpriced from the point of view of an amateur. My impression was always that the numbers were set to what the market could bear. What prices make sense in the context of a $10m production still seemed insane to a teenage thundara wanting to toy around with 3D renderers back in the day.

If the gamers will no longer need high-end computing, then does it matter if the gamers are running on entry level hardware?
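(A back-of-envelope sketch of those cloud numbers, in Python. The ~$2/hr instance and 10 TB figures come from the exchange itself; the storage rate and hours of use are assumptions for illustration only:)

    # Rough monthly cost for the setup described above.
    INSTANCE_PER_HR = 2.00       # USD/hr -- the EC2 figure quoted above
    STORAGE_PER_GB_MONTH = 0.10  # USD -- assumed EBS-style rate, not given in the thread
    DATA_GB = 10_000             # ~10 TB per feature film

    hours = 8 * 22               # assumed: one artist, 8 hr/day, 22 workdays/month
    compute = INSTANCE_PER_HR * hours
    storage = STORAGE_PER_GB_MONTH * DATA_GB
    print(f"compute ${compute:,.0f} + storage ${storage:,.0f} "
          f"= ${compute + storage:,.0f}/month")   # -> $352 + $1,000 = $1,352/month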
"Ridiculously overpriced from the point of view of an amateur. My impression was always that the numbers were set to what the market could bear."

And if you don't understand my world, how do you know it's "ridiculously overpriced?" Are there not economics of production there, too? Or are you turning opinions into facts in order to maintain your worldview?
"Well yes, that's the point. Instead of building monster desktop machines, you build monster server machines and monster storage arrays, in bulk."

The part you're missing is that I need it in my house. I need the throughput. Here's a calculator. Red Raw, at 4K, is 36 MB/sec. If I'm editing, I need a minimum of 2, more like 4, possibly 6 or 8 streams. Now we're up to 72, 144, or 288 MB/s just to pull the media from the drive to my workstation. I'm on a pedestrian movie right now. I've got fifteen tracks of dialog. Each one of those is 48 kHz, 24-bit... except I'm working at 32-bit floating point. That's 23 Mb/s before we even get into the beds, the FX, the foley, the music, or any of the rest. My data pathways are such that if I have the video and audio on the same SATA drive, I get crunches. This is why I said SAN: because I expected you to notice the different varieties of exotic file transport protocols necessary to make this shit work over distances. Have you ever had to seriously consider a Fibre Channel card? I'm pretty much there the minute I start moving this shit into another room... and I'm suddenly in the land of SAS drives. So yes. I know what AWS is. No, it does not work for what we do. "At least 16 GB for compositing?" I don't even know what that means. We're friends. Which is why I say, with affection, "you're talking out of your ass." I'm not. Don't make us both upset by deliberately misunderstanding things I do for a living to make a point you can't support.
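(The arithmetic in that comment, worked through in Python; every figure is the comment's own:)

    # Video: Red Raw 4K streams at 36 MB/s apiece.
    STREAM_MB_S = 36
    for n in (2, 4, 6, 8):
        print(f"{n} streams: {n * STREAM_MB_S} MB/s")   # 72, 144, 216, 288 MB/s

    # Audio: 15 dialog tracks at 48 kHz, processed as 32-bit float (4 bytes/sample).
    tracks, rate, sample_bytes = 15, 48_000, 4
    audio = tracks * rate * sample_bytes                # bytes/sec
    print(f"dialog: {audio / 1e6:.2f} MB/s = {audio * 8 / 1e6:.1f} Mb/s")
    # -> 2.88 MB/s, i.e. ~23 Mb/s: the "23" figure is megabits, not megabytes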
"Red Raw, at 4K, is 36 MB/sec."

Technically under a gigabit, and the same is true up to almost 4 streams, but that's just copy time. Once it's there, the bandwidth between storage and instances would easily meet that requirement; then all you need transferred back to your end is the current image / audio. Still, as much as I hate to say it, residential internet speeds are outrageously slow, variable, and expensive, but I'm just speculating about the future for fun right now. (If you're working in a studio, I'd think it'd be less outlandish to find those speeds out to the wider internet, but I'm unsure of your current setup.)

"'At least 16 GB for compositing?' I don't even know what that means."

16 GB of RAM for video editing / compositing (not trying to talk down, just unsure if the "video" prefix is applicable or redundant to the latter).

"We're friends. Which is why I say, with affection, 'you're talking out of your ass.'"

Fair enough :P
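(Checking the "under a gigabit" claim against the same 36 MB/s figure, plus the one-time copy it mentions:)

    # One Red Raw 4K stream in gigabits/sec, and how many fit down a 1 Gb/s pipe.
    stream_gbps = 36e6 * 8 / 1e9                 # 0.288 Gb/s per stream
    for n in range(1, 5):
        verdict = "fits" if n * stream_gbps < 1.0 else "exceeds"
        print(f"{n} stream(s): {n * stream_gbps:.2f} Gb/s -- {verdict} 1 Gb/s")
    # 3 streams fit; the 4th pushes past 1 Gb/s, hence "up to almost 4 streams"

    # The one-time upload: 10 TB over a saturated gigabit link.
    hours = 10e12 * 8 / 1e9 / 3600
    print(f"initial 10 TB copy: ~{hours:.0f} hours")   # ~22 hours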
"Technically under a gigabit, and the same is true up to almost 4 streams, but that's just copy time. Once it's there, the bandwidth between storage and instances would easily meet that requirement; then all you need transferred back to your end is the current image / audio."

...so you're seriously advocating a workflow where I'm reliant on "the cloud" for monitoring and editing 4K video? Now you're just being ludicrous.

"but I'm just speculating about the future for fun right now."

No, you're wishing and presenting it as fact in order to argue that you have a leg to stand on. You don't. If gigabit ethernet isn't fast enough for data transport for what I do, there will be no WAN fast enough for the foreseeable future. A T3 is 4mbit. You're stating that somehow 1 Gbit is going to happen at a pedestrian level... all so that I can put my 10 TB per movie on someone else's server.

"16 GB of RAM for video editing / compositing (not trying to talk down, just unsure if the 'video' prefix is applicable or redundant to the latter)."

You misunderstand me. I know what compositing is. I'm asking what 16 GB has to do with it. You're now arguing for RAM on an individual machine, while my argument has been (and has been clarified three times now) that even with the skookum fast machine in the sky, the pipe betwixt here and there cannot be made fast enough on an internet backbone. Just to drive home a point: I've got 20 GB of RAM and it isn't enough. 64-bit workflows allow RAM caching, and my next machine will probably have 64 GB or more. And I don't do video.
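(For scale on the links being argued about here: the same 10 TB working set over a few real line rates. The T3 rate below is the actual DS3 spec, roughly 45 Mb/s rather than 4:)

    # Days to move 10 TB at various line rates (best case: a saturated link).
    DATA_BITS = 10e12 * 8
    for name, mbps in (("T3/DS3", 44.736), ("100 Mb/s", 100), ("gigabit", 1000)):
        days = DATA_BITS / (mbps * 1e6) / 86_400
        print(f"{name:>9}: {days:5.1f} days")   # ~20.7, ~9.3, ~0.9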
"You're stating that somehow 1 Gbit is going to happen at a pedestrian level... all so that I can put my 10 TB per movie on someone else's server."

I've been saying from the start that this is a "one day" thing and not a "today" thing. I'm also pointing out that the initial transfer operation is a one-time thing. Yeah, it's slow. But it's a day's worth of copy time on gigabit. (I don't know why you brought a T3 into the equation, and your number is off by an order of magnitude; a T3 is ~45 Mbit, not 4.)

"No, you're wishing and presenting it as fact in order to argue that you have a leg to stand on."

It's absolutely possible, and the intention in the long run would be to lower costs. Instead of rolling your own storage array, instead of building your own monster machine, you buy a little time on another network that has been optimized to do all these things on a massive scale. This would be turning that $5-10k workstation into a cheap front-end plus monthly server costs. Perhaps high-end video editing is too high-bandwidth to be worthwhile. But I'm still not convinced either way that this is inapplicable to all digital work.
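(And the cost thesis, sketched the same way; aside from the thread's $5-10k workstation and ~$2/hr instance, every number is an assumption for illustration:)

    # When does "cheap front-end + monthly server costs" beat the workstation?
    workstation = 7_500                      # USD, midpoint of the $5-10k range
    front_end = 1_000                        # USD, assumed thin client
    monthly = 2.00 * 8 * 22 + 0.10 * 10_000  # instance hours + ~10 TB storage (assumed rates)
    months = (workstation - front_end) / monthly
    print(f"cloud spend ${monthly:,.0f}/month; matches the workstation "
          f"premium after ~{months:.1f} months")   # ~4.8 months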