We take it for granted now, but I think the skimmer is the most useful tool in our toolbox. Being able to skim along the thumbnails in filmstrip view, especially on long takes, means we can very quickly drill down to the exact clip content we need. We used to wear out the spacebar and the J, K, L keys doing this.
Although it is hard to get your brain out of the Final Cut Pro 7 way of thinking of content as living in bins, once you see the power of keywording, it’s hard to imagine working without it. Being able to keyword individual ranges within a clip, and to apply more than one keyword, is a killer feature.
Working with native formats
No longer do we spend half our time batch encoding in MPEG Streamclip; we can now take clips from many different formats and mix and match them on the same timeline with no encoding or initial rendering. Once ingested, we can get to work straight away, which means better value for our clients.
We have default Smart Collections that sort by camera, but also having collections that combine favourites with ‘Interview’ keywords means there is always a dynamic pool of content in the project as soon as any interview selects are done. That’s a huge timesaver when trying to get the initial edit underway.
The Magnetic Timeline & Clip Connections
The magnetic timeline is the feature that’s hardest to get used to, yet once you have to use another track-based NLE again, it suddenly becomes the best feature. Being able to move entire segments of a cut around, with reckless abandon, while the remaining structure stays intact is invaluable. Clip connections are what make all of this work exactly as you want it to.
Sam here… finishing up 4K week on FCPWORKS Workflow Central with one more post.
So… BMD or AJA… the eternal debate. Right now, we’re centering this on the 4K monitoring and I/O products: AJA’s Io 4K or the BMD UltraStudio 4K. Basically, it comes down to this… do you use DaVinci Resolve for color correction? If the answer is yes, you’re going to need to go with Blackmagic. Case closed. Blackmagic devices are the only ones that will work with Resolve.
However, if the answer to that question is no, and you’re doing your color work primarily in FCPX or another program that isn’t Resolve (Scratch, Baselight, Smoke, to name a few), the discussion becomes a lot more complicated. Additionally, the BMD UltraStudio 4K can be loud. The AJA Io 4K is quiet and considerably smaller. If you’re keeping the product in a room with a new Mac Pro as your primary computer, you really start to hear the UltraStudio 4K when it’s on… and if you’re doing serious sound mixing, the noise makes a big difference.
Additionally, it’s a little-known fact that the AJA devices support more monitoring formats for FCPX as well. For whatever reason, your monitoring format choices in the Blackmagic preferences (System Preferences on the Mac) are considerably more limited than when you’re using your UltraStudio 4K in Resolve.
HOWEVER, at the end of the day, price is also a factor and the AJA products are almost universally more expensive than their BMD counterparts. So price vs. performance is definitely a consideration. In my opinion, if you’re more interested in specific features, go AJA. If you’re budget conscious or a heavy Resolve user, go BMD.
P.S. – little-known fact, but the HDMI out on the back of the new Mac Pro can be used as an 8-bit A/V out in FCPX, and if you’re using weird sequence settings and just need to send a 1:1 output over HDMI, it is FAR more configurable for video I/O than what’s available through your BMD or AJA device.
Basically, the long and short of it is that the DP is worried that higher resolutions, and the ability to alter images further down the line in post, are going to take control away from DPs over their images.
On the one hand, I can totally understand where he’s coming from, and he’s totally right. I’ve seen quite a few projects butchered in color correction, and I imagine it must be very difficult to go out and put your heart and soul into shooting/lighting something only to have it completely reworked in a way that’s entirely not what was imagined… and then be credited as if that was how you wanted it. That sucks.
However, this is not the fault of the resolution, RAW, or improvements in technology. The fault lies with the way that departments work together, and it’s my biggest pet peeve in the entire industry.
No one talks to each other.
Departments don’t talk about workflow before the shoot starts. Production rarely asks what post wants. Post rarely checks in with the DP or sound department after the shoot is over. VFX lives on its own island and is expected to push the “make it better” button on whatever production hands them. Everyone is just trying to get through the day, and get through the gig.
There’s no process and no blueprint. There’s no workflow.
Actually… that’s not even really true. There are too many workflows, and every department and individual has their own specific way they think things should be done and delivered to them. Rarely do these different workflows sync up across departments. Even more rarely does one department ask another how it wants things done before production starts. Usually, a vendor’s list of delivery requirements is discovered only after the critical production decisions have been made.
A few examples that illustrate this:
– An anamorphic lens is chosen because production likes the widescreen look. Post is never consulted. However, no one in post knows how to transcode and desqueeze the anamorphic footage correctly. Footage is processed slightly warped and then edited that way. Conform becomes a nightmare. Also, it turns out the distributor needs a full-frame 1080 master (no bars on it), but in many cases the movie wasn’t framed to live in a 16:9 master. Massive pan-and-scan work needs to be done. Post budgets go up.
– LUTs are created for each shot, but no discussion has been had about how these will be applied to the RAW footage when it’s time to do the conform. No one bothered to run this workflow past the editor or colorist, who have no idea how any of this was handled, and the production had a falling out with the DIT who made all of the looks. In the end, LUTs are applied incorrectly or not at all, and no one has any idea which LUT goes with which shot or how to sync all of these LUTs up in a way that isn’t ridiculously time consuming. Post budgets go up.
– RED footage is transcoded down to ProRes at a random resolution with letterboxing baked into the ProRes. No one in post has any idea how to correctly get back to the original R3Ds with the proper transforms from the edit applied to the RAW footage. Post budgets go up.
– No one asks the VFX department how they want their greenscreen shots done. Tracking marks are not used even though the camera is moving during the shot. Post budgets go up.
– VFX works in REDcolor3 and delivers DPX plates. The colorist is using REDlogfilm and grading everything from the RAW. Things don’t match shot to shot. Post budgets go up.
– Editors need to deliver their picture to a Sound house. They’ve never delivered to this sound house before, and the Producer picked them because they had the cheapest bid. Lots of ADR work is expected. Post budgets are about to go up.
Anyway, take things like the above and then throw in the fact that, in most cases, especially on smaller commercial jobs, most of the people involved are working with each other for the first time. Chemistry and trust are non-existent. Bids have gone out to the lowest bidder and not to the most qualified teams. A CYA (Cover-Your-A$$) attitude becomes prevalent. Fingers start getting pointed. Accountability becomes nonexistent. People get angry. People get fired.
Someday, I want to live in a world where the DP knows the editor and both of them know the colorist. They’ve all worked with the director before. Also, before the shoot, each of these people sat down in a meeting with the VFX and sound departments and talked through how the imaging pipeline was going to go from set to edit to VFX to sound to color to mastering. Then, someone would come up with a diagram based on what cameras were being used, how sound was being recorded, what resolution needed to be delivered, and in what color space(s). Then, they’d also write down how metadata would be managed, how VFX would be roundtripped, how sound would be turned over for the mix, how video would be conformed for color, and how, in general, the project would be set up and delivered to the distributor based on pre-agreed sound, color, and mastering specs. The departments would then take this diagram home, decide what needed to be changed based on their needs, and then come back and finalize their process, compromising where necessary for the greater good of the project.
This would all be done before a single frame of footage was shot.
A man can dream.
Anyway, until people start working this way and figuring out their process ahead of time, people will continue to write blogs like the one I linked to above and blame things like resolution and RAW for why their footage doesn’t look right in the end.
Departments need to communicate about workflow more. That’s not technology’s fault.
Those that know me well know that I’m a huge fan of both FCP X and the Panasonic GH series of cameras. Currently, I own and shoot with the GH2, GH3, and yes, even the almighty 4K-capable Panasonic GH4. FCP X handles the different codecs, bit-rates, and frame sizes like a champ. In addition, I always have the option to transcode the footage to ensure that everything runs smoothly on, say, an older laptop.
I decided to keep the older cameras for a variety of reasons, the most practical being that they’re both paid off! However, choosing to shoot with all three cameras at the same time presents some problems.
Chiefly, how do I effectively balance the distinct looks of each camera when those looks are baked into the image? Does FCP X have the tools I need to get the job done? How do I keep my workflow simple without having to do a major color correction at the end of the job? It turns out, the answers are quite simple.
My personal holy grail for camera matching comes in the form of the DSC OneShot Chart. I make absolutely sure to bring it with me on multicam shoots. I allot a few extra minutes to white balance and to shoot the chart. The back side of the chart is split between white and gray. Frankly, I wish they had chosen either white or gray rather than splitting them, since it’s hard to fill your camera’s frame with the space given to each when performing an auto white balance.
The OneShot chart was developed by both DSC and Art Adams. Art has a blog post that fully explains the chart here. I won’t go into too much techno talk about it, but it’s a brilliant chart which has all the basics you need for proper luma and chroma balancing: true black, white, gray, skin tones, plus primary and secondary broadcast colors.
Before I got the chart, I would balance the shots manually using a waveform monitor and vectorscope. The great news is that Resolve 11 can actually understand the color information on the chart using the built-in Color Match feature. In the video below, I show you how I send the shots from FCP X to Resolve, match them, and then send the LUTs back to FCP X.
To summarize, once each camera is balanced in Resolve, I export a 3D LUT of each camera. I name the LUT based on the camera and name of the shoot. Of course, you could add any additional info that you deem necessary, such as scene number.
The next trick is getting all this back into FCP X. I purchased a great plugin from Denver Riddle’s Color Grading Central called LUTutility. This program can actually read the LUTs you export from Resolve and attach them to your shots inside of FCP X.
All you need to do is drag and drop the .cube files into LUTutility’s preference pane, located in System Preferences. The LUTs are then installed and accessible inside of FCP X.
Inside FCP X, it’s simply a matter of adding the LUTutility effect to the footage and choosing the correct LUT for your camera from the pulldown menu in the inspector. The beauty of this workflow is that all you need to do is import the shots with the DSC OneShot chart into Resolve. There’s no need to render anything out of Resolve. All the color grading info is stored in the LUTs and read by the LUTutility effect inside FCP X. I’ll then apply the necessary LUT to all shots within a scene and perform the final grade inside of FCP X.
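For the curious, the .cube files Resolve exports are just plain text: a LUT_3D_SIZE header followed by size³ rows of output RGB values that a tool samples per pixel. Here’s a minimal illustrative sketch in Python of that idea (a simple nearest-neighbour lookup; this is not how LUTutility is actually implemented, and real tools also do trilinear interpolation and honor headers like DOMAIN_MIN/DOMAIN_MAX):

```python
# A minimal sketch of what a .cube 3D LUT contains and how it maps colors.
# Illustrative only: real tools interpolate between grid points, while
# this version does a simple nearest-neighbour lookup.

def parse_cube(text):
    """Read LUT_3D_SIZE and the size**3 'r g b' rows from a .cube string."""
    size, table = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("TITLE"):
            continue
        if line.startswith("LUT_3D_SIZE"):
            size = int(line.split()[1])
        elif line[0].isdigit() or line[0] == "-":
            table.append(tuple(float(v) for v in line.split()))
    assert size is not None and len(table) == size ** 3
    return size, table

def apply_lut(rgb, size, table):
    """Map each 0..1 channel to its nearest grid point and look it up."""
    r, g, b = (min(size - 1, round(c * (size - 1))) for c in rgb)
    return table[r + g * size + b * size * size]  # red index varies fastest

# A 2-point identity LUT: output equals input at every grid corner.
identity = "LUT_3D_SIZE 2\n" + "\n".join(
    f"{r:.1f} {g:.1f} {b:.1f}"
    for b in (0.0, 1.0) for g in (0.0, 1.0) for r in (0.0, 1.0)
)

size, table = parse_cube(identity)
print(apply_lut((1.0, 0.0, 1.0), size, table))
```

In other words, the LUT is just a lookup table of color corrections, which is why passing the .cube file around is so much lighter than rendering media out of Resolve.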
One side note to all of this: after I apply the LUTs, I’ll usually add some minor color correction tweaks, as nothing is ever 100% perfect. But even with the minor tweaks, this process takes so much of the work out of balancing different cameras, especially DSLRs where the look is baked in.
As long as the cameras are white balanced off the same source and are generally shot at the same ISO, the tweaks are very minor compared to having to entirely match by eye. Frankly, I find it amazing that this is now all possible. It speaks to the exciting development that is going on in the FCP X ecosystem.
I hope this tip helps. Now go shoot and edit something awesome!
Guest Blogger Michael Garber from Garbershop.
Michael Garber is a post production professional with over 14 years of experience. He started his company, 5th Wall, in 2004 and has worked with clients such as Discovery Agency, Huell Howser Productions, Automat Pictures, FuelTV, PBS and more. In addition to editorial work, Michael produces corporate documentaries for a Fortune 500 company. When not editing or shooting, Michael is more than likely talking about editing and shooting on his blog, GARBERSHOP.
There’s a lot of talk and a whole lot of hype when it comes to 4K. I’m certainly guilty of a lot of that hype. However, most people know very little about 4K and are pretty intimidated by the subject. Here are some quick hits when it comes to working with it.
First thing you need to know is that there are two flavors of 4K delivery resolutions:
4K UHD – This is the spec for 4K in broadcast and the home. Resolution is 3840×2160. It’s a 16:9 aspect ratio (1.78:1), and it’s really just double the resolution of standard HD (1920×1080) in each dimension. Most 4K displays and televisions will be 4K UHD.
4K DCI – This is the cinema spec; resolution is 4096×2160. The aspect ratio of the full container is roughly 1.9:1 (4096 ÷ 2160 ≈ 1.896), slightly wider than 16:9. Like 4K UHD, this is just double the resolution of the standard 2K spec (2048×1080) in each dimension. You’ll only really see the 4K DCI spec in play if you’re watching a movie in a theater.
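To make the “double in each dimension” point concrete, the numbers above are easy to sanity-check yourself (plain arithmetic, nothing FCPX-specific):

```python
# Quick sanity check on the delivery resolutions described above.
formats = {
    "HD 1080": (1920, 1080),
    "4K UHD":  (3840, 2160),
    "2K DCI":  (2048, 1080),
    "4K DCI":  (4096, 2160),
}
for name, (w, h) in formats.items():
    print(f"{name}: {w}x{h}, aspect {w / h:.3f}:1, {w * h / 1e6:.1f} MP")

# UHD doubles HD in each dimension, which means 4x the pixel count:
assert (3840, 2160) == (1920 * 2, 1080 * 2)
assert 3840 * 2160 == 4 * (1920 * 1080)
# The full 4K DCI container is ~1.9:1, not 16:9:
assert round(4096 / 2160, 2) == 1.9
```

Doubling each dimension quadruples the pixel count, which is why 4K puts four times the load of 1080 on your storage and playback.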
From a traditional viewing distance, 4K only really becomes noticeable once the screen hits 84 inches. However, once you hit that size, and if shot and projected properly, the results are pretty stunning.
As of this writing, most of the 4K TVs being sold are not worth buying. Either the panels are really cheap and the image quality is not great, or the price is just not worth it. If you need something that can monitor at 4K and you’re working on a budget, get one of the cheaper panels and pay very little attention to the color and contrast of the monitor; just watch for sharpness and resolution. In many ways, what’s happening now is like what happened when HD first appeared. Sets were really expensive and only for high-end pros or people with money to blow. Wait a while and you’ll start to see more affordable options appear.
Lastly, you don’t need to monitor 4K while you’re doing color correction. I’d recommend using an HD Broadcast monitor while doing color (with your video I/O set to 1080). Buying an affordable 4K grading monitor is pretty much impossible and won’t make any difference to your color decisions. Color correcting your scaled down 4K images at 1080 is still the way to go. Right now, I think the only useful thing a 4K monitor is really capable of is to check the overall sharpness of your image at a 4K resolution when you’re mastering. Everything else is not ready for prime time yet… at least in my opinion.
Sam here… So… believe it or not, it’s actually easier for the average person, with access to the right projector, to put on a higher-resolution screening than what they typically see when they go out to the theater.
When you go watch a movie at the typical multiplex, you’re almost universally watching a movie that was made from a 2K master… even if the projector is 4K, the movie itself was up-rezzed from a 2K file to fill the screen.
The main reason for this is that Hollywood hasn’t really figured out the whole 4K pipeline thing… especially on the VFX side. It’s far simpler and more practical for them to finish in 2K.
What this means is that if you have a Dragon, Epic, GH4, or 4K BMCC, it’s a pretty straightforward process for you to shoot, finish, and screen at a much higher level than the big guys do… especially if your VFX pipeline is simple.
In fact, if you somehow managed to have access to a nice 4K Projector with an HDMI port on it, you can put on a higher quality screening in your living room than you’ll currently see in the multiplex.
Why? Well, both the Mac Pro and the MacBook Pro will send a 4K signal out of their HDMI ports.
Those 4K HDMI ports will also send a 5.1 signal.
A 4K 5.1 screening is now a pretty straightforward process if you’ve got the right home theater and you know how to plug in an HDMI cable and export a 4K ProRes.
It’s now easy to shoot 4K, post 4K, and then screen it right from your laptop.
I have no idea why film festivals make things so hard for filmmakers with their DCP, Blu-ray, or tape requirements.
Filmmakers should be able to just hand over/dropbox a QuickTime movie and get on with their lives. For some reason, everyone loves to make things complicated.
With my film collective, We Make Movies, we do our annual WMM Fest of our community’s work in LA, and we run all of the screenings (there were 5 this year) right from my laptop. In fact, every screening we’ve ever done has been in 1080 ProRes, using our filmmakers’ QuickTime master files and playing from a laptop through QuickTime or Final Cut. It’s just easier.
The only reason we’re not doing 4K screenings is because most filmmakers are still mastering at 1080, and 4K projectors are still way too expensive. Both of these things will be changing in the not too distant future.
If we had the right files and the right gear, though, our process would still not change at all. ProRes is still ProRes, and we’re still just playing it out of an HDMI port to a projector.
Our screenings look better, sound better, and we have almost no room for technical issues because we do things this way. We work from the masters, and leave as few things to chance as humanly possible. As long as the projector is calibrated, we’re good to go.
And while I explained that it’s a lot easier for filmmakers to make DCPs these days in our blog here… it’s still a very difficult format for the average person to implement on their own, and screening and playing one of those things for an audience is far from a user-friendly experience.
Both the DCP and Blu-ray formats were designed from day one to be difficult to create and hard to pirate. Essentially, as high-end technologies typically are, they were designed to keep people from understanding them, to keep them proprietary, and to maintain established business models… in this case, preserving the studio multiplex and home digital distribution businesses.
Fortunately, there’s a pretty easy way around all of this nonsense… which is good news for the independent filmmaker who isn’t tethered to this process and can figure out how to make and distribute their own content.
Right now, I look at DCPs as a necessary evil, but the truth of the matter is that the safest and easiest way to screen a movie for an audience is to just run your QuickTime master through the HDMI out of your Mac.
Why do people feel the need to make things so hard?
Sam here… so, over the years, I’ve gotten a lot of raised eyebrows when I run into people I used to work with, or editors and people outside my circle, and I tell them I cut everything I do with FCPX and that it’s the best thing out there. Usually, I get back some garbled version of “really? I heard it sucked…” or “I tried it a long time ago and couldn’t get into it…”
We then have a 10 minute conversation about why they switched to Premiere and why I didn’t… and who, in fact, the crazy person really is in this equation.
And when I look back and really think about why I switched to FCPX… I realize that my circumstances were different from pretty much anyone else’s when it came to switching, so it shouldn’t be surprising that my viewpoint on the program is much different from everyone else’s.
Long story short… I downloaded the program day one like everyone else. There were things I liked, and a lot of things I didn’t. Unlike most, I kept playing with it, and cutting small projects, trying to figure out why Apple had done what they had done… and if, in fact, there was something I wasn’t getting with all of this. I was doing all of this on my off days while I worked at my regular freelance gig still using FCP7 and being pretty content with that workflow.
Somewhere along the way, I got invited to come out and work with the Final Cut team and got to ask some of my questions in person… and I got some answers… when I was finished, I came back to LA, and my perspective had changed a bit. I’d been shown a different way of looking at editing, and sort of realized I couldn’t go back to what I was doing and still be happy with that. I had found I liked editing again (I’d become a bit of a robot with FCP7)… and for the first time in a long time, I felt like there was something new and interesting for me to explore.
So… I sort of made the decision that I was just going to run with FCPX, start my own post house, not tell my clients I was cutting with X (I’d just say Final Cut and let them assume I meant FCP7), and see just how far I could get with what I was doing before I ran out of money.
I haven’t run out of money yet.
In fact, I made more. You see, I was still charging what I would normally charge, but I was able to deliver in half the time… time equaled money. So even though I lost a few customers at first, the ones I did keep I was able to take better care of.
That one decision to go out on my own led to a big old giant chain reaction in my career that is still snowballing. It’s been weird, frustrating, cool, and consistently surprising. At the end of the day, it’s been fun. I have a lot more fun than most editors I know, and a lot more control over the projects I choose to do… which is mostly all I ever cared about.
And when I compare it to cutting the same old piece every single day at my old freelance job in the same tired workflow… well, there really is no comparison. You literally couldn’t pay me to go back to that. People have tried.
So what’s the lesson here? The person who was bored with editing at his cushy freelance gig (me before FCPX) had stopped learning and had stopped getting better. I was starting to become less curious, and editing itself had become just a transaction I would do for money. And when that happens, when you stop caring about what you do, and you stop learning, it makes you more likely to want to preserve the status quo and keep collecting checks. Change becomes threatening and learning becomes difficult. Your job becomes less about doing something cool, and it becomes more about protecting your territory from outsiders. It becomes easy to dismiss new ways of working. Eventually, you become the flatbed film editor who wakes up one day to realize their gigs are gone and everyone is editing on video. You blame the world and get really angry and bitter. No one cares that you are angry and bitter. You get more angry and bitter.
If I had stayed that way, I’d be well on my way to being one of those crusty old editors who love to tell everyone else how dumb and unprofessional their workflow is. “Get off my lawn!”
The truth is that you don’t know what you don’t know. I got lucky enough to have some people show me, and it changed the way I looked at what I do. It’s made me a better and more efficient editor, and it has prepared me for the next ten years in this business in a way that many people can’t even see.
At the end of the day, it makes no difference to me what editing platform you cut with. You should use what works for you… but as an editor, it’s part of your job to know enough to know the difference between the different tools, and to continue to adapt to the changing world around you.
I guess my only piece of advice might be that, before you go ahead and dismiss a different idea entirely, decide for yourself, and be willing to occasionally go down the rabbit hole. Don’t stop being curious. Sometimes going down the rabbit hole can change your perspective on things completely. It did for me. It’s why I’m only cutting with FCPX now.
I’m always looking for the next rabbit hole, though.
Sam here… we’re going to talk RED RAW today, because there’s no reason for this to be so hard and complicated. Mostly, this is a public service to DPs everywhere, many of whom seem to be confused by how all of this works. It’s been my experience that a lot of DPs try to capture their LOOK on set… even though they’re shooting RAW. Mostly, this is because of a fear (often justified) that post will screw it up later if they don’t lock in their look now. Unfortunately, this approach is counterintuitive to how the camera is designed to work, and doing things this way will often lead to a lot of finger pointing, anger, and inflated post budgets once the film hits the finishing stage.
The bottom line is that if you’ve ever heard your DP say the following words… show them this post:
“The RED is a noisy camera… I’ve always got to add noise reduction to my footage in post. Also… I always like to save LUTs and looks when I shoot RED.”
With the RED, LUTs are stupid. Sorry. Someone needs to say it. You’re just going to go back to REDlogfilm when you hit the finish line anyway… or you should be using the controls in REDCINE-X to manipulate the RAW the way you want it AFTER YOU’VE SHOT IT.
If exposed and lit correctly, you should NEVER want or need a LUT when you hit the color room. Use the standard REDcolor/REDgamma settings as a baseline while you’re shooting, and then tweak later in REDCINE-X. When it comes to RED, probably the worst thing you can do is try to dial your look in while you’re on set. It defeats the whole purpose of shooting RAW.
The truth is that shooting RAW is not a cure-all. While it provides greater flexibility than traditional codecs, you need to do certain things correctly, and understand a couple of things, in order to get good results.
Fortunately, there isn’t all that much that you need to know. In fact, if you do the following, you’re pretty much guaranteed good results with your Scarlet/Epic/Dragon:
Shoot at 800 ISO – The RED sensor is rated to be shot at this ISO. Start here while on set. While you can shoot at other ISOs, you shouldn’t unless you absolutely have to. Play with that stuff later in REDCINE-X. Shoot and light for 800.
Don’t clip – Look at your histogram. Make sure everything you’re shooting is between the “goal posts”. If it’s not… do a better job with your lighting, or accept certain realities in post. Also, keep in mind you always have HDRX available to you in extreme cases.
Expose your skin tones correctly – For the love of God, don’t underexpose your skin tones. Seriously… just don’t. It’s the number one reason people end up unhappy with their RED footage: things turn out noisy because they try to brighten up underexposed skin tones in the color room. To make sure your skin tones are exposed properly, use the false color mode and make sure your skin tones read “pink”. If they do, you’re good to go. You can always make things darker later… rarely, however, can you make things brighter without introducing unwanted noise. Even if you want things “moody”, EXPOSE YOUR SKIN TONES PROPERLY.
The smaller your resolution, the grainier your footage – Basically, if you shoot with the Dragon/Epic at 4K, 3K, or 2K… you’re using less and less of the sensor, and less and less information is being captured. Many complain that their 2K stuff looks worse than their 4K and 5K stuff… that’s because it does. You’re only using part of your sensor, and depending on your compression rate, you may start to see a lot of problems, noise, and grain introduced… especially when you shoot at 2K.
Lower your compression ratio if you’re going to reframe – For the same reasons discussed in the previous point, the higher the compression ratio you shoot at, the more noise you’re going to see in your punch-ins in post. Once you get past 7:1 compression or so, expect the quality of your punch-ins to decrease and become far more noticeable. While there’s no reason to shoot RED RAW uncompressed (there’s not even really a good reason to go below 5:1), keep in mind that the higher you go, the more noise will be introduced, and this noise will be compounded when you reframe or punch in during the edit. Even though you shot at 4K, it doesn’t necessarily mean all punch-ins are created equal when you come down to 1080.
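To see why resolution matters so much for reframing, it helps to compute the punch-in headroom each source width gives you against a 1920-wide 1080 delivery. A rough back-of-the-envelope sketch (illustrative only; it ignores the sensor and compression effects described above):

```python
# Rough punch-in headroom: how far you can crop into a shot before the
# crop falls below your delivery width and must be scaled UP, which is
# where softness and noise really start to show.

def max_punch_in(source_width, delivery_width):
    """Maximum crop factor before the crop is narrower than the delivery."""
    return source_width / delivery_width

for label, w in [("5K", 5120), ("4K", 4096), ("3K", 3072), ("2K", 2048)]:
    factor = max_punch_in(w, 1920)
    print(f"{label} ({w}px wide) -> 1080 delivery: up to {factor:.2f}x punch-in")

# A 2K source barely covers a 1920-wide master: almost no reframing room,
# while a 4K source gives you better than 2x.
assert max_punch_in(2048, 1920) < 1.1
assert max_punch_in(4096, 1920) > 2.0
```

So a 4K frame gives you roughly a 2x punch-in before you’re scaling up, but only if the compression ratio hasn’t already eaten into the detail you’re cropping down to.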
Seriously, those five things are all you really need to know in order to make you and your colorist happy when you reach the finish line. Why people make this so hard, I’ll never understand.
FCPWORKS’ Noah Kadner will be doing a round table discussion this Thursday evening on the Final Cut Virtual Users Group. Be sure to tune in, as they’ll be answering your questions live. Fellow FCPX whiz kids will include Mike Matzdorff, Chris Fenwick, Mark Spencer, and Steve Martin. Tune in live at 6:00 PM PST on Thursday, August 14th, 2014 at http://www.hazu.io/pixelcorps/fcvug-2
It’s still a bit of a bummer… because what it really means for your average person is that they’ll now think the only option for real, immediate, pro A/V support with FCPX is to go to the Apple Store and talk to an Apple Genius… and I think we all know how that is going to go for them.
*** Shameless self promotion… if you need help with FCPX and the related ecosystem, FCPWORKS is a way better option for you than the Apple store ***
Sales pitch over.
Anyway, what this news also means is that it’s just more ammunition for Apple critics who want to rail about how Apple has abandoned the pro video market… and I was really hoping I was done with that debate.
Here’s my point of view on it, though, for what it’s worth. I don’t think this is a sign that Apple isn’t committed to the pro market. I think it’s more a sign that they have bigger problems to solve, and that maybe the support infrastructure itself has changed a bit from when Apple’s Pro Video Support was necessary.
The truth is that most people find a lot of the answers they need through Google, blogs, and videos nowadays. I know I do.
And for more specialized, higher end cases, they probably weren’t going to be Apple Support clients anyway… those guys are all going to third party specialized consultants who are doing the real work day in and day out.
I think what this move really symbolizes is that Apple woke up one day and realized a $799/year support service wasn’t at all in line with what their average customer needed from them… so that’s why they killed it.
I mean, honestly… would you/were you paying $799 a year (more than twice as much as it costs for a license of FCPX) for Apple’s Pro Video Support?
I know I wasn’t.
To be honest, until I read Alex’s post… I didn’t even know this was still around… which probably says more about why they got rid of it than anything.
However, if you do find that you need this kind of support… well, we do that here at FCPWORKS, and we’ll have you covered. Feel free to reach out to email@example.com with any questions.