Why do professional camera-people take single or rapid burst shots of the event they’re photographing, instead of using a high quality video camera and selecting the best frame later in edit?


8 Answers

Anonymous 0 Comments

Because you can better control the quality of a single photo on a photography camera compared to a single frame on a video camera. Additionally, to get a video camera that can record at the same quality as a still camera would cost much more and use huge amounts of memory to save the raw video file compared to even multiple single shots.

Anonymous 0 Comments

Video cameras still have lower resolution than still cameras of equivalent price & portability. Shooting in raw, uncompressed video would also require vastly more data processing capability & digital storage space.

Anonymous 0 Comments

8K video – pretty much the highest video standard out there – is the equivalent of 33.17 megapixels per frame.

A mid-range (~$2,000) mirrorless camera has a resolution of about 45 megapixels per shot – roughly a 35% increase in resolution.

You simply get higher resolution photos using burst photography than video.
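The arithmetic in this answer checks out in a few lines (the 45 MP figure is the answer's round number, not a specific camera's spec):

```python
# 8K UHD frame dimensions in pixels
uhd_8k = 7680 * 4320          # 33,177,600 pixels, i.e. ~33.2 MP

still = 45_000_000            # assumed ~45 MP mirrorless still camera

increase = (still - uhd_8k) / uhd_8k
print(f"8K frame: {uhd_8k / 1e6:.2f} MP")       # 33.18 MP
print(f"Still vs 8K: {increase:.0%} more pixels")  # 36% more pixels
```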

Anonymous 0 Comments

The digital camera shooting quick single shots will produce far higher quality images than an equivalent camera shooting video.

The main issue is simply the amount of data that needs to be handled. A digital camera will have a memory buffer of limited size to which images can be captured when shooting in burst mode. They can be accumulated faster than they can actually be processed and written to long term storage, so this process can’t be continuous as video would require. A video then would need to be a lower resolution and quality.

Anonymous 0 Comments

Video cameras are built to take extremely rapid photos and therefore have to make some compromises. The CCD needs to be built differently so it can transfer the data out fast enough, the shutters have to be electronic instead of mechanical, global shutters are harder to do at high speeds, etc. And this has an impact on image quality. So to get a high-quality image they take single pictures with cameras designed for this. The shutter on these cameras is usually slower than the finger of a fast photographer. And taking images manually gives them the chance to adjust the framing or focus between shots instead of while the shutter is open.

However, today's cameras can help a lot in these situations if they are set up correctly. They can be configured to take bursts automatically and even vary the settings a bit between shots. So the photographer presses the button once and the camera takes a short burst with varying settings as fast as it can. This usually ends when the image buffer fills up and the camera needs some time to transfer the images to its storage.

Anonymous 0 Comments

A lot of reasons, but I can think of three off the top of my head.

First off, a video camera where individual frames look just as good as pictures from a regular camera would cost…probably a few hundred times as much money? They probably don’t even *make* video cameras where each frame matches the quality of a lot of professional picture cameras.

Second, that would take an insane amount of storage. 48 megapixels would be a respectable resolution for a professional (well above 8K video’s ~33 megapixels). Uncompressed raw footage, which they would need to be shooting, works out to about 5 GB per second at 60 fps. That means in about 30 seconds they would record more data than an entire 4K movie.
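That data-rate estimate is easy to reproduce (the 48 MP and 14-bit figures are assumptions, typical for high-end RAW stills):

```python
megapixels = 48_000_000    # assumed still-camera resolution
bits_per_pixel = 14        # typical RAW bit depth
fps = 60

bytes_per_frame = megapixels * bits_per_pixel / 8    # 84 MB per frame
bytes_per_second = bytes_per_frame * fps             # ~5 GB per second
print(f"{bytes_per_frame / 1e6:.0f} MB per frame, "
      f"{bytes_per_second / 1e9:.1f} GB per second")
```

Thirty seconds of that is over 150 GB, which is more than a full-length 4K movie release.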

Third, it’s the nature of photography in general (this is a pretty big oversimplification, btw): the shorter your exposure time, the more light you need. With a video camera you would need a lot more light, either by making the scene way brighter or by using a giant lens to collect lots of light. You would have no option to take really crisp low-light pictures of something that isn’t moving.

Anonymous 0 Comments

The resolution of a still photo that would be blown up for use on, say, a Wheaties box, the cover of SI or on a billboard is SUPER high. Each single frame or image at these resolutions takes up a bunch of megabytes of storage space. For storing a few hundred of these images, yeah, we have GB flash cards etc., no problem. But for video at that resolution, that would be gigabytes per minute. So video gets compressed. Problem is, though, while it’s possible to recreate one particular still frame from a compressed video, it’s a pain in the butt; and it still won’t look as good as the still image.

For example, your typical “pro” camera, the Canon EOS 5D Mark IV – say $3k just for the _body_, no lenses etc. – is a 30 megapixel camera. Each still shot is 30 million pixels. “Oh, 30 MP is just 30 MB, right?” Nope. Each pixel takes more bits to store color and brightness info. If I recall, RAW format is 14 bits per pixel, so 30 MP at 14 bits RAW is 52.5 megabytes. Per image.

Meanwhile, “4k” video resolution = 3840 x 2160 pixels = 8,294,400 pixels ≈ 8.3 megapixels = 14.5 MB per frame (if it were stored in the same 14-bit RAW format).
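Both file-size figures fall out of the same calculation (14 bits per pixel is the answer's recollection of the RAW bit depth, not a confirmed spec):

```python
def raw_megabytes(pixels, bits=14):
    """Uncompressed RAW size in MB at the given bit depth."""
    return pixels * bits / 8 / 1e6

still_5d = raw_megabytes(30_000_000)   # 30 MP still camera
uhd_4k = raw_megabytes(3840 * 2160)    # 8.3 MP 4K video frame
print(f"30 MP still: {still_5d:.1f} MB; 4K frame: {uhd_4k:.1f} MB")
# 30 MP still: 52.5 MB; 4K frame: 14.5 MB
```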

For instant replay purposes on your 4k TV screen, this is great. But even an 8-megapixel raw video frame might not be sufficient if you want to show the concerned look of panic on the quarterback’s face on a magazine cover.

Anonymous 0 Comments

Professional still cameras still beat the quality of video cameras. You could buy a pro video camera with 4K or better quality but it would be stupidly large and expensive, even compared to pro still cameras, and still lag behind in many features. Even 4K is only 8 megapixels.

The best autofocus systems, essential for sports photography, have historically relied on the DSLR mirror being down, though mirrorless cameras are catching up. In contrast, cine cameras come from a tradition of manual focus for everything, with a dedicated (human) focus puller and tape measures involved.

On paper the gap is closing between still and video but there remain many practical differences that make it very annoying to use one type instead of the other: file formats, storage media and software workflow to produce the shots you want.

Shutter speeds provide a concrete example. With cine cameras these are commonly specified in terms of shutter angle. Shooting at 24fps with a 180° shutter angle implies a shutter speed of 1/48s. This is relatively slow and results in motion blur that’s desirable for cinema footage. But a sports photographer will often want 1/1000s shutter speeds to freeze the action, and cine cameras aren’t set up to make this easy. And cine cameras will expose every frame the same for obvious reasons, but a still camera can adjust the exposure of every frame individually.
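The shutter-angle convention converts to an exposure time with one standard formula: the shutter is open for (angle / 360°) of each frame interval.

```python
def shutter_seconds(angle_deg, fps):
    """Exposure time implied by a shutter angle at a given frame rate."""
    return (angle_deg / 360) / fps

t = shutter_seconds(180, 24)
print(f"180 deg at 24 fps -> 1/{round(1 / t)} s")  # 1/48 s
```

The same formula shows why cine cameras resist fast shutters: getting 1/1000s at 24fps would require a shutter angle of under 9°.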

Even lenses are different. Still lenses have F-numbers, the ratio of focal length to aperture diameter. Cine lenses have T-numbers, which are the same idea except that they also account for light lost passing through the glass. F-numbers let you accurately predict depth of field, whereas T-numbers guarantee that the same T-number on any lens gives exactly the same exposure. Also, still lenses have click stops, where the aperture ring actually clicks into place at something like 1/3-stop intervals, but cine lenses have no clicks, allowing you to gradually and smoothly change the exposure during a shot.

Anyway, pro still cameras are becoming mirrorless, some without a mechanical shutter at all. It will probably soon be common for them to be able to take short bursts at 24fps or more, just like cine cameras.