Tag workflow

10 Reasons to Switch to FCPX

August 25, 2014

Our colleagues over at Kingdom Creative (who were profiled recently over at FCP.CO) have come up with a great set of 10 reasons to switch to FCPX. The first five are:

  1. The Skimmer
    We take it for granted now, but I think the skimmer is the most useful tool in our toolbox. To be able to skim along the thumbnails in filmstrip view, especially on long takes, means we can very quickly drill down to the exact clip content we need. We used to wear out the spacebar and J,K,L keys doing this.
  2. Keywording
    Although it is hard to break out of the Final Cut Pro 7 mindset of content living in bins, once you can see the power of keywording, it’s hard to imagine working without it. Being able to keyword individual ranges in a clip, and to apply more than one keyword, is a killer feature.
  3. Working with native formats
    No longer do we spend half our time batch encoding in MPEG Streamclip, we can now take clips from many different formats and mix and match them on the same timeline with no encoding or initial rendering. Once ingested, we can get to work straight away, which means better value for our clients.
  4. Smart Collections
    We have default smart collections to sort by camera, but having default collections that sort favourites with the ‘Interview’ keyword also means there is always a dynamic pool of content in the project as soon as any interview selects are done, which is a huge timesaver when trying to get the initial edit underway.
  5. The Magnetic Timeline & Clip Connections
    The magnetic timeline is the feature that is hardest to get used to, yet once you have to use another track-based NLE again, it suddenly becomes the best feature. Being able to move around entire segments of a cut, with reckless abandon, while the remaining structure stays intact is invaluable. Clip connections are what make all of this work exactly as you want it to.

For the rest of the list, check out the Kingdom Creative site at http://kingdom-creative.co.uk/10-reasons-move-final-cut-pro-x/

4K Monitoring Options: BMD or AJA

August 22, 2014

Sam here… finishing up 4K week on FCPWORKS Workflow Central with one more post.

So… BMD or AJA… the eternal debate. Right now, we’re centering this around the 4K monitoring & I/O products, AJA’s IO 4K or the BMD Ultrastudio 4K. Basically, it comes down to this… Do you use DaVinci Resolve for Color Correction? If the answer is yes, you’re going to need to go with Blackmagic. Case closed. Blackmagic Devices are the only ones that will work with Resolve.

However, if the answer to that question is no, and you’re doing your color work primarily in FCPX or another program that isn’t Resolve (Scratch, Baselight, or Smoke, to name a few), the discussion becomes a lot more complicated. Additionally, in the case of the BMD Ultrastudio 4K… it can be loud. The AJA IO 4K is quiet and considerably smaller. If you’re keeping the product in a room with a new Mac Pro as your primary computer, you really start to hear the Ultrastudio 4K when it’s on… and if you’re doing serious sound mixing, the noise makes a big difference.

Additionally, and this is a little-known fact, but the AJA cards support more monitoring formats for FCPX as well. For whatever reason, the monitoring formats in the Blackmagic preference pane (in System Preferences on the Mac) are considerably more limited than what’s available when you’re using your Ultrastudio 4K in Resolve.

HOWEVER, at the end of the day, price is also a factor and the AJA products are almost universally more expensive than their BMD counterparts. So price vs. performance is definitely a consideration. In my opinion, if you’re more interested in specific features, go AJA. If you’re budget conscious or a heavy Resolve user, go BMD.

P.S. – little-known fact, but the HDMI out on the back of the new Mac Pro can be used as an 8-bit A/V out in FCPX, and if you’re using odd sequence settings and just need to send out a 1:1 output over HDMI, it is FAR more configurable for video I/O than what’s available through your BMD or AJA device.

4K Isn’t the Problem

August 21, 2014

Sam here… Came across this article and I think it’s a symptom of a larger problem:

http://www.redsharknews.com/business/item/1831-the-peril-of-4k-our-craft-is-being-threatened

Basically, the long and short of it is that the DP is worried that higher resolutions, and the ability to alter images further down the line in post, are going to take control away from DPs over their images.

On the one hand, I can totally understand where he’s coming from, and he’s totally right. I’ve seen quite a few projects butchered in color correction, and I imagine it must be very difficult to go out and put your heart and soul into shooting/lighting something only to have it completely reworked in a way that’s entirely not what was imagined… and then be credited as if that was how you wanted it. That sucks.

However, this is not the fault of the resolution, RAW, or improvements in technology. The fault lies with the way that departments work together, and it’s my biggest pet peeve in the entire industry.

No one talks to each other.

Departments don’t talk about workflow before the shoot starts. Production rarely asks what post wants. Post rarely checks in with the DP or sound department after the shoot is over. VFX lives on its own island and is expected to push the “make it better” button on whatever production hands them. Everyone is just trying to get through the day, and get through the gig.

There’s no process and no blueprint. There’s no workflow.

Actually… that’s not even really true. There are too many workflows, and every department and individual has their own specific way they think things should be done and delivered to them. Rarely do these different workflows sync up across departments. Rarer still does one department ask another how it wants things done before the production starts. Usually a vendor’s list of delivery requirements is discovered only after the critical production decisions have been made.

A few examples that illustrate this:

– An anamorphic lens is chosen because Production likes the widescreen look. Post is never consulted. However, no one in post knows how to transcode and desqueeze the anamorphic footage correctly, so the footage is processed slightly warped and then edited that way. Conform becomes a nightmare. Also, it turns out the distributor needs a full-frame 1080 master (no bars on it), but in many cases the movie wasn’t framed to live in a 16:9 master. Massive pan-and-scan work needs to be done. Post budgets go up.

– LUTs are created for each shot, but no discussion has been had over how these will be applied to the RAW footage when it’s time to do the conform. No one bothered to run this workflow past the editor or colorist, who have no idea how any of this was handled, and the production had a falling out with the DIT who made all of the looks. In the end, LUTs are applied incorrectly or not at all, and no one has any idea what LUT goes with what shot or how to sync all of these LUTs up in a way that isn’t ridiculously time consuming. Post budgets go up.

– RED footage is transcoded down to ProRes at a random resolution, with letterboxing baked into the ProRes. No one in post has any idea how to correctly get back to the original R3Ds with the proper transforms from the edit applied to the RAW footage. Post budgets go up.

– No one asks the VFX department how they want their greenscreen shots done. Tracking marks are not used, even though the camera is moving during the shot. Post budgets go up.

– VFX works in REDcolor3 and delivers DPX plates. The colorist is using REDlogfilm and grading everything from the RAW. Things don’t match shot to shot. Post budgets go up.

– Editors need to deliver their picture to a Sound house. They’ve never delivered to this sound house before, and the Producer picked them because they had the cheapest bid. Lots of ADR work is expected. Post budgets are about to go up.

Anyway, you take things like the above and then throw in the fact that, in most cases, especially on smaller commercial jobs, most of the people involved are working with each other for the first time. Chemistry and trust are non-existent. Bids have gone out to the lowest bidder and not to the most qualified teams. CYA (Cover-Your-A$$) attitude becomes prevalent. Fingers become ready to be pointed. Accountability becomes nonexistent. People get angry. People get fired.

Someday, I want to live in a world where the DP knows the editor and both of them know the colorist. They’ve all worked with the director before. Also, before they’ve shot, each of these people sat down in a meeting with the VFX and sound departments and talked through how the imaging pipeline was going to go from set to edit to VFX to sound to color to mastering. Then, someone would come up with a diagram based on what cameras were being used, how sound was being recorded, what resolution needed to be delivered, and in what color space(s). Then, they’d also write down how metadata would be managed, VFX would be roundtripped, sound would be turned over for the mix, video would be conformed for color, and how, in general, the project would be set up and delivered to the distributor based on pre-agreed upon sound, color, and mastering specs. The departments would then take this diagram home, decide what needed to be changed based on their needs, and then come back and finalize their process, compromising where necessary for the greater good of the project.

This would all be done before a single frame of footage was shot.

A man can dream.

Anyway, until people start working this way and figuring out their process ahead of time, people will continue to write blogs like the one I linked to above and blame things like resolution and RAW for why their footage doesn’t look right in the end.

Departments need to communicate about workflow more. That’s not technology’s fault.

Panasonic GH4 Tips and Tricks

August 20, 2014

Those that know me well know that I’m a huge fan of both FCP X and the Panasonic GH series of cameras. Currently, I own and shoot with the GH2, GH3, and yes, even the almighty 4K-capable Panasonic GH4. FCP X handles the different codecs, bit-rates, and frame sizes like a champ. In addition, I always have the option to transcode the footage to ensure that everything runs smoothly on, say, an older laptop.

I determined that I would keep the older cameras for a variety of reasons. The most practical of those reasons is that they are both paid off! However, choosing to shoot with all three cameras at the same time presents some problems.

Chiefly, how do I effectively balance the distinct looks of each camera when the looks are baked in to the image? Does FCP X have the tools I need to get the job done? How do I keep my workflow simple without having to do a major color correction at the end of the job? It turns out, the answers are quite simple.

My personal holy grail for camera matching comes in the form of the DSC OneShot Chart. I make absolutely sure to bring it with me on multicam shoots. I allot a few extra minutes to white balance and to shoot the chart. The back side of the chart is split between white and gray. Frankly, I wish they chose either white or gray versus splitting them up since it’s hard to fill the frame of your camera with the space given for each when performing an auto white balance.

The OneShot chart was developed by both DSC and Art Adams. Art has a blog post that fully explains the chart here. I won’t go into too much techno talk about it, but it’s a brilliant chart which has all the basics you need for proper luma and chroma balancing: true black, white, gray, skin tones, plus primary and secondary broadcast colors.

Before I got the chart, I would balance the shots manually using a waveform monitor and vectorscope. The great news is that Resolve 11 can actually understand the color information on the chart using the built-in Color Match feature. In the video below, I show you how I send the shots from FCP X to Resolve, match them, and then send the LUTs back to FCP X.

To summarize, once each camera is balanced in Resolve, I export a 3D LUT of each camera. I name the LUT based on the camera and name of the shoot. Of course, you could add any additional info that you deem necessary, such as scene number.

The next trick is getting all this back into FCP X. I purchased a great plugin from Denver Riddle’s Color Grading Central called LUTutility. This program can actually read the LUTs you export from Resolve and attach them to your shots inside of FCP X.

All you need to do is drag and drop the .cube files into LUTutility’s preference pane, located in System Preferences. The LUTs are then installed and accessible inside of FCP X.

Inside FCP X, it’s simply a matter of adding the LUTutility effect to the footage and choosing the correct LUT based on your camera from the pulldown menu in the inspector. The beauty of this workflow is that all you need to do is import the shots with the DSC OneShot chart into Resolve. There’s no need to render anything out of Resolve. All that color grading info is stored in the LUTs and read by the LUTutility effect inside FCP X. I’ll then apply the necessary LUT to all shots within a scene and perform the final grade inside of FCP X.
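For the curious, a .cube LUT is just a plain-text lattice of RGB samples, which is why a lightweight effect can read it and apply the grade with no rendering required. LUTutility’s internals aren’t public, so this is only a rough sketch of the general idea, assuming a Resolve-style .cube file; the nearest-neighbor lookup is a simplification, since real tools interpolate between lattice points:

```python
def parse_cube(text):
    """Return (size, table) from Resolve-style .cube text. The table is a
    flat list of (r, g, b) tuples with the red axis varying fastest."""
    size, table = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split()
        if parts[0] == "LUT_3D_SIZE":
            size = int(parts[1])
        elif parts[0][0].isalpha():
            continue  # skip TITLE, DOMAIN_MIN/MAX, and other keywords
        else:
            table.append(tuple(float(p) for p in parts[:3]))
    assert size is not None and len(table) == size ** 3, "malformed .cube"
    return size, table

def apply_lut(rgb, size, table):
    """Nearest-neighbor lookup of a 0..1 RGB triple in the 3D table."""
    r, g, b = (min(int(round(c * (size - 1))), size - 1) for c in rgb)
    return table[r + g * size + b * size * size]
```

Conceptually, running every frame’s pixels through a lookup like this is all the effect is doing with the chart-matched .cube files you export from Resolve.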

One side-note to all of this. After I apply the LUTs, I’ll usually add some minor color correction tweaks as nothing is ever 100% perfect. But even with the minor tweaks, this process takes so much work out of balancing different cameras, especially DSLRs where the look is baked in.

As long as the cameras are white balanced off the same source and are generally shot at the same ISO, the tweaks are very minor compared to having to entirely match by eye. Frankly, I find it amazing that this is now all possible. It speaks to the exciting development that is going on in the FCP X ecosystem.

I hope this tip helps. Now go shoot and edit something awesome!

ABOUT

Michael Garber

Guest Blogger Michael Garber from Garbershop.

Michael Garber is a post production professional with over 14 years of experience. He started his company, 5th Wall, in 2004 and has worked with clients such as Discovery Agency, Huell Howser Productions, Automat Pictures, FuelTV, PBS and more. In addition to editorial work, Michael produces corporate documentaries for a Fortune 500 company. When not editing or shooting, Michael is more than likely talking about editing and shooting on his blog, GARBERSHOP.

About 4K Monitoring

August 19, 2014

There’s a lot of talk and a whole lot of hype when it comes to 4K. I’m certainly guilty of a lot of that hype. However, most people know very little about 4K and are pretty intimidated by the subject. Here’s some quick hits when it comes to working with it.

First thing you need to know is that there are two flavors of 4K delivery resolutions:
4K UHD – This is the spec for 4K broadcast and in the home. Resolution is 3840×2160. It’s a 16:9 aspect ratio (1.78:1), and is really just double the resolution of standard HD (1920×1080) in each dimension. Most 4K displays and televisions will be 4K UHD.
4K DCI – This is the cinema spec, and resolution is 4096×2160. The aspect ratio is approximately 1.90:1. Like 4K UHD, this is just double the standard 2K spec (2048×1080) in each dimension. You’ll only really see the 4K DCI spec in play if you’re watching a movie in a theater.
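Since the two specs trip people up, the arithmetic is worth spelling out. A quick sanity check in plain Python, purely for illustration:

```python
# The two 4K delivery specs, alongside their HD/2K counterparts.
specs = {
    "HD":     (1920, 1080),
    "4K UHD": (3840, 2160),
    "2K DCI": (2048, 1080),
    "4K DCI": (4096, 2160),
}

for name, (w, h) in specs.items():
    print(f"{name}: {w}x{h}, aspect {w / h:.2f}:1, {w * h / 1e6:.1f} megapixels")

# "Double the resolution in each dimension" means 4x the pixel count:
assert 3840 * 2160 == 4 * (1920 * 1080)
assert 4096 * 2160 == 4 * (2048 * 1080)
```

Note that 4096 / 2160 works out to roughly 1.90:1, which is why a DCI master doesn’t drop cleanly into a 16:9 frame.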

From a traditional viewing distance, 4K only really becomes noticeable once the screen hits 84 inches. However, once you hit that size, and if shot and projected properly, the results are pretty stunning.

As of this writing, most of the 4K TVs being sold are not worth buying. Either the panels are really cheap and the image quality is not great, or the price is just not worth it. If you need something that can monitor at 4K and you’re working on a budget, get one of the cheaper panels, but don’t count on its color and contrast being accurate. Just watch for sharpness and resolution. In many ways, what’s happening now is like what happened when HD first appeared: sets were really expensive and only for high-end pros or people with money to blow. Wait a while and you’ll start to see more affordable options appear.

Lastly, you don’t need to monitor 4K while you’re doing color correction. I’d recommend using an HD Broadcast monitor while doing color (with your video I/O set to 1080). Buying an affordable 4K grading monitor is pretty much impossible and won’t make any difference to your color decisions. Color correcting your scaled down 4K images at 1080 is still the way to go. Right now, I think the only useful thing a 4K monitor is really capable of is to check the overall sharpness of your image at a 4K resolution when you’re mastering. Everything else is not ready for prime time yet… at least in my opinion.

The Easiest Way to do a 4K Screening

August 18, 2014

Sam here… So… Believe it or not, it’s actually easier for the average person, if they had access to the right projector, to put on a higher resolution screening than they typically see when they go out to the theater.

When you go watch a movie at the typical multiplex, you’re almost universally watching a movie that was made from a 2K master… even if the projector is 4K, the movie itself was up-rezzed from a 2K file to fill the screen.

The main reason for this is that Hollywood hasn’t really figured out the whole 4K pipeline thing… especially on the VFX side. It’s far simpler and more practical for them to finish in 2K.

What this means is that if you have a Dragon, Epic, GH4, or 4K BMCC, it’s a pretty straightforward process for you to shoot, finish, and screen at a much higher level than the big guys do… especially if your VFX pipeline is simple.

In fact, if you somehow managed to have access to a nice 4K Projector with an HDMI port on it, you can put on a higher quality screening in your living room than you’ll currently see in the multiplex.

Why? Well, both the Mac Pro and MacBook Pro will output a 4K signal through their HDMI ports.

Those 4K HDMI ports will also carry a 5.1 audio signal.

A 4K 5.1 screening is now a pretty straightforward process if you’ve got the right home theater and you know how to plug in an HDMI cable and export a 4K ProRes.

It’s now easy to shoot 4K, post 4K, and then screen it right from your laptop.

I have no idea why film festivals make things so hard for filmmakers with their DCP, Blu-ray, or tape requirements.

Filmmakers should be able to just hand over/dropbox a QuickTime movie and get on with their lives. For some reason, everyone loves to make things complicated.

With my film collective, We Make Movies, we do our annual WMM Fest of our community’s work in LA, and we run all of the screenings (there were 5 this year) right from my laptop. In fact, every screening we’ve ever done has been in 1080 ProRes, using our filmmakers’ QuickTime master files, playing from a laptop through QuickTime or Final Cut. It’s just easier.

The only reason we’re not doing 4K screenings is because most filmmakers are still mastering at 1080, and 4K projectors are still way too expensive. Both of these things will be changing in the not too distant future.

If we had the right files and the right gear, though, our process would still not change at all. ProRes is still ProRes, and we’re still just playing it out of an HDMI port to a projector.

Our screenings look better, sound better, and we have almost no room for technical issues because we do things this way. We work from the masters, and leave as few things to chance as humanly possible. As long as the projector is calibrated, we’re good to go.

And while I explained in our blog here that it’s a lot easier for filmmakers to make DCPs these days… it’s still a very difficult format for the average person to implement on their own, and screening and playing one of those things for an audience is far from a user-friendly experience.

Both the DCP and Blu-ray formats were designed from day 1 to be difficult to create and hard to pirate. Essentially, as with most high-end technologies, they were designed to keep people from understanding them, keep them proprietary, and maintain established business models… in this case, preserving the studio multiplex and home digital distribution businesses.

Fortunately, there’s a pretty easy way around all of this nonsense… which is good news for the independent filmmaker who isn’t tethered to this process and can figure out how to make and distribute their own content.

Right now, I look at DCP’s as a necessary evil, but the truth of the matter is that the safest and easiest way to screen a movie for an audience is to just run it through the HDMI out of your Mac from your QuickTime master.

Why do people feel the need to make things so hard?

RED Raw For Your Colorist

August 14, 2014

Sam here… we’re going to talk RED RAW today, because there’s no reason for this to be so hard and complicated.  Mostly, it’s a public service to DPs everywhere, many of whom seem to be confused by how all of this works.  It’s been my experience that a lot of DPs try to capture their LOOK on set… and yet they’re shooting RAW.  Mostly, this is because of a fear (often justified) that post will screw it up later if they don’t lock in their look now.  Unfortunately, this approach runs counter to how the camera is designed to work, and doing things this way will often lead to a lot of finger pointing, anger, and inflated post budgets once the film hits the finishing stage.

The bottom line is that if you’ve ever heard your DP say the following words… show them this post:

“The RED is a noisy camera… I’ve always got to add noise reduction in post to my footage.  Also… I always like to save LUTs and looks when I shoot RED.”

With the RED, LUTs are stupid.  Sorry.  Someone needs to say it. You’re just going to go back to REDlogfilm when you hit the finish line anyway… or you should be using the controls in RCX to manipulate the RAW the way you want it AFTER YOU’VE SHOT IT.

If exposed and lit correctly, you should NEVER want/need a LUT when you hit the color room.  Use the standard REDcolor/Gamma settings while you’re shooting as a baseline, and then tweak later in REDcine-X.  When it comes to RED, probably the worst thing you can do is try and dial your look in while you’re on set.  It defeats the whole purpose of shooting RAW.

The truth is that shooting RAW is not a cure-all.  While it provides greater flexibility than traditional codecs, you need to do certain things correctly and understand a couple of things in order to get good results.

Fortunately, there isn’t all that much that you need to know.  In fact, if you do the following, you’re pretty much guaranteed good results with your Scarlet/Epic/Dragon:

  1. Shoot at 800 ISO – The RED sensor is rated to be shot at this ISO.  Start here while on set.  While you can shoot at other ISOs, you shouldn’t unless you absolutely have to.  Play with that stuff later in REDcine-X.  Shoot and light it for 800.
  2. Don’t clip – Look at your histogram.  Make sure everything you’re shooting is between the “goal posts”.  If it’s not… do a better job with your lighting, or accept certain realities in post.  Also, keep in mind you always have HDRX available to you in extreme cases.
  3. Expose your skin tones correctly – For the love of God, don’t underexpose your skin tones.  Seriously… just don’t.  It’s the number one reason why people end up unhappy with their RED footage and why things turn out noisy, because they find they want to brighten up their skin tones in the color room.  To make sure your skin tones are exposed properly, use the False color mode and make sure your skin tones are “pink”.  If they are, you’re good to go.  You can always make things darker later… rarely, however, can you make things brighter and not introduce unwanted noise.  Even if you want things “moody”, EXPOSE YOUR SKIN TONES PROPERLY.
  4. The smaller your resolution, the grainier your footage – Basically, if you shoot with the Dragon/Epic at 4k, 3k, or 2k… you’re using less and less of your sensor, and less and less information is being captured.  Many complain that their 2k stuff looks worse than their 4k and 5k stuff… that’s because it does.  You’re only using part of your sensor, and depending on your compression rate, you may start to see a lot of problems, noise, and grain introduced… especially when you shoot at 2k.
  5. Up your compression ratio if you’re going to reframe – For the same reasons discussed in #4, the higher the compression ratio you shoot with your RED at, the more noise you’re going to see from your punch ins in post.  Once you get past 7:1 compression or so, expect the quality of your punch-ins to decrease and become far more noticeable.  While there’s no reason to shoot RED RAW uncompressed (not even really a good reason to go below 5:1), keep in mind that the higher you go, the more noise will be introduced, and this noise will be compounded when you reframe/punch in during the edit.  Even though you shot at 4k, it doesn’t necessarily mean that all punch-ins are created equal when you come down to 1080.
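To put rough numbers on points #4 and #5, the reframing headroom you get from shooting above your delivery resolution is simple division. A sketch with illustrative resolutions (compression noise comes on top of this):

```python
def max_punch_in(capture_height, delivery_height):
    """Largest zoom factor before a punch-in crop drops below the
    delivery resolution and has to be upscaled."""
    return capture_height / delivery_height

# Shooting 4K (2160 lines) for a 1080 delivery leaves 2x of clean punch-in:
print(max_punch_in(2160, 1080))  # 2.0
# Shooting 2K (1080 lines) for the same delivery leaves none:
print(max_punch_in(1080, 1080))  # 1.0
```

Past that factor, every punch-in is an upscale, and any noise from a high compression ratio gets magnified right along with the image.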

Seriously, those five things are all you really need to know in order to make you and your colorist happy when you reach the finish line.  Why people make this so hard, I’ll never understand.

For more info on some of the RED exposure tools and how all this works, read these articles –
http://www.red.com/learn/red-101/exposure-with-red-cameras
http://www.red.com/learn/red-101/red-camera-exposure-tools
http://www.red.com/learn/red-101/exposure-false-color-zebra-tools

FCPWORKS’ Noah Kadner on FCVUG

August 13, 2014

FCPWORKS’ Noah Kadner will be doing a round table discussion this Thursday evening on the Final Cut Virtual Users Group. Be sure to tune in, as they’ll be answering your questions live. Fellow FCPX Whiz Kids will include Mike Matzdorff, Chris Fenwick, Mark Spencer, and Steve Martin. Tune in live at 6:00 PM PST on Thursday, August 14th, 2014, at http://www.hazu.io/pixelcorps/fcvug-2

No More Pro Video Support

August 13, 2014

While, selfishly, I’m kind of okay with this because it’s good for my business:

http://alex4d.com/notes/item/apple-discontinues-applecare-provideo-support

It’s still a bit of a bummer… because what it really means for your average person is that they’ll now think the only option for real, immediate, pro A/V support with FCPX is to go to the Apple Store and talk to an Apple Genius… and I think we all know how that is going to go for them.

*** Shameless self promotion… if you need help with FCPX and the related ecosystem, FCPWORKS is a way better option for you than the Apple store ***

Sales pitch over.

Anyway, what this news also means is that it’s just another barrel in the gun for Apple critics who want to rail about how Apple has abandoned the Pro Video market… and I was really hoping I was done with that debate.

Here’s my point of view on it, though, for what it’s worth.  I don’t think this is a sign that Apple isn’t committed to the pro market.  I think it’s more a sign that they have bigger problems to solve, and that maybe the support infrastructure itself has changed a bit from when Apple’s Pro Video Support was necessary.

The truth is that most people find a lot of the answers they need through Google, blogs, and videos nowadays.  I know I do.

And for more specialized, higher end cases, they probably weren’t going to be Apple Support clients anyway… those guys are all going to third party specialized consultants who are doing the real work day in and day out.

I think what this move symbolizes more is probably the fact that Apple woke up one day and realized that a $799/year support service wasn’t really in line at all with what their average customer needed from them… so, I think that’s why they killed it.

I mean, honestly… would you/were you paying $799 a year (more than twice as much as it costs for a license of FCPX) for Apple’s Pro Video Support?

I know I wasn’t.

To be honest, until I read Alex’s post… I didn’t even know this was still around… which probably says more about why they got rid of it than anything.

However, if you do find that you need this kind of support… well, we do that here at FCPWORKS, and we’ll have you covered.  Feel free to reach out to workflow@fcpworks.com with any questions.

Why Aren’t You Using Motion?

August 12, 2014

So… the title of this blog is a bit of a trick question.  If you’re using any of the built in FCPX effects, titles, or generators, or you’re using most of the 3rd party effects or templates, you’re actually using Motion without even being aware of it.

However, because of the new integration between FCPX and Motion and the loss of the “Send To Motion” command, I tend to feel like Motion has become the forgotten app in the Apple Pro Apps ecosystem.  I sort of feel bad about this, because Motion is actually awesome, especially if you’re an editor like me who has no interest in becoming an After Effects/Nuke genius.

No one has the time to know and be good at everything.  I always gravitated towards the edit and color correction ends of the business.  While I had an understanding of and interest in essential GFX and audio techniques, for me, if I couldn’t get something done quickly on that end, it just wasn’t going to happen… and I never had the time or natural inclination to become an After Effects or Pro Tools master.  For whatever reason, those apps just never made sense to my brain the way Final Cut, Color, and Resolve did.  I liked to stay in-app and as integrated as possible when it came to GFX and audio, which is why I bothered to learn Motion and Soundtrack Pro back in the FCP7 days.

The truth is that, even though GFX may not be your thing, especially if you’re a one man band, your clients are still going to expect you to be able to do high quality lower thirds, titles, and other common GFX tasks… especially for lower budget corporate, commercial, and internet projects.  In fact, just about any YouTube video you make for a paying client is going to require you to know how to make some kind of end tag for it.  And many of these tasks end up being repetitive, which makes them best suited to a template you can work with quickly right in your NLE.

This is where, as an editor, getting to know Motion was my best friend.

The reason is that things you make in Motion show up automatically as titles, generators, or effects in FCPX, and understanding the rigging and publishing concepts inherent in Motion can save you RIDICULOUS amounts of time in your edits.  It’s a bit of a different way of working, which may be why a lot of people haven’t gravitated to it… but for 85% of the common tasks asked of an editor, working between FCPX and Motion will save you a ton of time, and often produce higher quality work, because of the time saved and the simplicity of the workflow, than banging your head against the wall with After Effects renders.

Also, Motion becomes a lot more powerful the deeper you get into it.

The good news is that Mark Spencer from Ripple Training created some of the best tutorials for Motion that I think anyone has made for any kind of app.  More specifically, his tutorials for Rigging and Publishing for Final Cut Pro X, Mastering Replicators, Mastering The Camera, and my personal favorite, Mastering Shapes, Paint Strokes & Masks, are just awesome… especially for the average editor who just wants to be a solid B when it comes to GFX, and just wants to know how to make something that looks professional quickly.

Anyway, if you want to get up to speed quickly on Motion, I can’t recommend Mark’s Motion 5: The Complete Series (which has all the tutorials I just mentioned) highly enough.

And if you’re looking for some great free Motion tutorials, you need to stop what you’re doing and check out Simon Ubsdell’s Youtube Channel.

Seriously, even though no one ever talks about it… for a lot of people, Motion is really worth learning.