Martin Chab
Aktiv medlem
DSLR cameras are great for general filming, but if you are serious about filming they are not the right tools. Don't get me wrong, I love them and I use them a lot for filming, but even with enhancements like Magic Lantern, DSLRs are very limited in many ways.
Just to name a few: DSLRs have a single Bayer sensor, which means the colours produced are kind of "invented". I mean, the sensor does not have a photosite for each colour but an array of RGB filters over the photosites, where the green ones are duplicated (because the green channel contains most of the luma, and it is well known that the human eye is more sensitive to luma than to chroma). From that, the camera does a series of statistical interpolations over the neighbouring pixels to "figure out" which colour each pixel should be. That is fine for general purposes, but think about what happens at the edges in the image: there is no way for the system to figure out the "right colour".
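To make the idea concrete, here is a toy bilinear demosaic of an RGGB mosaic (a rough sketch of the principle, not what any real camera firmware does): each missing colour is guessed by averaging the nearest photosites that did measure it, which is exactly the kind of guessing that smears sharp edges.

```python
import numpy as np

def box_sum(a):
    """Sum of each 3x3 neighbourhood (zero-padded at the borders)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def bilinear_demosaic(bayer):
    """Toy bilinear demosaic of an RGGB Bayer mosaic.

    `bayer` is a 2-D float array where each element holds only the one
    colour sample its filter lets through:
        R G R G ...
        G B G B ...
    Missing colours are estimated from the neighbours; near an edge those
    neighbours disagree, so the interpolated colour is a guess.
    """
    h, w = bayer.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)
    b_mask = (y % 2 == 1) & (x % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate([r_mask, g_mask, b_mask]):
        plane = np.where(mask, bayer, 0.0)       # keep only measured samples
        weight = mask.astype(float)              # where samples exist
        # Average the known samples of this colour in each 3x3 window.
        rgb[..., c] = box_sum(plane) / np.maximum(box_sum(weight), 1e-9)
    return rgb
```

On a flat field this reconstructs the scene perfectly; run it on a hard black-to-white edge and you will see colour fringes appear where no colour existed, which is the artefact described above.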
On the other hand, since the full colour information of the pixels is so data-consuming, it would be impossible to process and store all that data with today's technology. Knowing that, and knowing that the eye is very sensitive to luma (a small gradation of light is easily perceived by the eye) but not to chroma (a big change in colour is hardly perceived by the eye), the trick is to do what is called chroma subsampling to reduce the bandwidth (the amount of data to be transferred and stored).
How is it done? Simple: the signal is first converted to Y Cb Cr (Y for luma, Cr the colour difference of the red and Cb the colour difference of the blue). You must wonder where the green is; well, it happens that green contributes most of the luma, so the green can be recovered from Y once Cr and Cb are known.
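A quick sketch of that conversion, using the standard BT.601 coefficients (other standards such as BT.709 use slightly different weights); note how heavily green weighs in Y, and how green is never stored as its own channel:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr with BT.601 coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # green dominates the luma
    cb = (b - y) * 0.564                    # scaled blue difference
    cr = (r - y) * 0.713                    # scaled red difference
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform: green falls out of Y, Cb and Cr."""
    r = y + cr / 0.713
    b = y + cb / 0.564
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b
```

The round trip is exact (up to floating-point noise), which is the point: nothing is lost by the change of colour space itself, only by the subsampling that comes next.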
Now we can start to subsample. Each pixel carries a sample of Y, but only every second pixel carries a sample of Cr and Cb. This is called 4:2:2 sampling, and it is used only in high-end video cameras because it is still very data-consuming. Most cameras use 4:2:0, which is an odd way to name it: the chroma is halved both horizontally and vertically, with rows sharing Cr and Cb samples. This way the chroma data is reduced to a quarter of the original, cutting the total roughly in half. On top of that, the camera compresses the signal to reduce the bandwidth even further (through H.264 or other compression schemes).
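The arithmetic behind those ratios is easy to check. Per group of four pixels, 4:4:4 carries 12 samples, 4:2:2 carries 8 and 4:2:0 carries 6, so a small sketch like this gives the uncompressed frame sizes:

```python
def bytes_per_frame(width, height, sampling, bits=8):
    """Uncompressed Y'CbCr frame size for common sampling schemes.

    Samples carried per group of 4 pixels:
      4:4:4 -> 12, 4:2:2 -> 8, 4:2:0 -> 6.
    """
    samples_per_4px = {"4:4:4": 12, "4:2:2": 8, "4:2:0": 6}[sampling]
    return width * height * samples_per_4px / 4 * bits / 8

full = bytes_per_frame(1920, 1080, "4:4:4")
print(bytes_per_frame(1920, 1080, "4:2:2") / full)  # 2/3 of the full data
print(bytes_per_frame(1920, 1080, "4:2:0") / full)  # half of the full data
```

So before any H.264 compression even starts, 4:2:0 has already discarded half the data, and the viewer barely notices because all of the discarded half is chroma.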
Normal professional video cameras not only use more dedicated hardware to deal with the processing, but in general they have three sensors, one for each colour. That means each photosite gets the full colour information, and the camera doesn't have to figure out the colour of each point. Their internal electronics do a better job, processing the information at a higher bit depth (more precision), and they also have functions to manipulate the gamma curves, knee, matrices and so on.
On top of this, DSLRs cannot use the full sensor to generate the video, and most use a technique called line skipping, that is, they don't read all the lines of the sensor and then downsample the final image (this is because of the lack of processing power, and because the sensor would get very hot, which would introduce a lot of noise). This phenomenon causes moiré. To avoid that, the manufacturers are forced to add a low-pass filter (the higher frequencies are eliminated). A low-pass filter is nothing but a blur filter (that's why the image is not crisp and loses a lot of detail; it is a tradeoff). Let's be clear: all cameras have some kind of low-pass filter, but video cameras are less prone to these effects, so their filters can be softer, especially because their resolution doesn't also need to be high enough for still photography.
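You can see the tradeoff in one dimension with a toy stand-in for the optical low-pass filter (a 3-tap box blur, purely illustrative): a hard edge is the highest spatial frequency an image can contain, and the filter turns it into a ramp.

```python
import numpy as np

# A hard edge: the highest spatial frequency content a row of pixels can have.
edge = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

# A tiny stand-in for an optical low-pass filter: a 3-tap box blur.
kernel = np.array([1.0, 1.0, 1.0]) / 3
blurred = np.convolve(edge, kernel, mode="valid")

print(blurred)  # the hard edge becomes a ramp over several pixels:
                # crispness is lost, but so are the frequencies
                # that alias into moiré when lines are skipped
```

That ramp is the "not crisp" look: detail is traded away before the sensor ever sees it, so that line skipping has fewer high frequencies to turn into moiré.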
No doubt, DSLRs are getting better and better and are great tools, but the tradeoffs are too high for them to come close to real video devices.
There is much more to write about it: we should consider MTF, the Kell factor, CMOS sensors vs. CCD, rolling shutter vs. global shutter, etc., etc.
If you are interested, I can take these issues one by one and analyse them to see how we can use them to our advantage.
best regards
Martin Chab