I posted yesterday on the news that Leica was experiencing corrosion issues with the IR filter glass used in its CCD sensor cameras, starting with the M9.
I was pressed for time yesterday, so I didn't post anything beyond reporting the problem. I have some additional thoughts today.
I'm having a hard time understanding how damaging the coating on the IR filter glass could result in corrosion. Glass is an inert material. It normally doesn't corrode, even when subjected to corrosive materials. That's why it's used to contain acids.
The filter does apparently have a coating, but that coating should be inert as well. (The fact that the filter only corrodes when damaged seems to indicate that the coating normally is inert.)
Somehow, a filter made with two inert materials stops being inert if the surface becomes damaged.
That doesn't seem to make sense.
There are a couple of possible explanations that I can come up with.
(Disclaimer: Based on limited information.)
First, the filter might have more than one coating. Having a layer that reacts when exposed to air that is covered by a second protective layer would explain what is happening with the Leica filter. The filter is fine as long as the protective layer is undamaged, but damaging that layer results in the second layer corroding.
Second, there is something in the coating that corrodes the glass when exposed to the environment. There is only one thing I know of that can damage glass this way: hydrofluoric acid. Unlike other acids, hydrofluoric acid has the ability to dissolve glass. (One of its uses is etching glass.)
I suppose it's possible for the glass itself to be damaged if the coating contains a fluorine compound.
Both possibilities have implications.
The first one would suggest that someone overlooked a fairly obvious design flaw. That doesn't bode well for whatever company is responsible for the filter design. (I assume Leica, but that's not necessarily correct.)
The second would have implications when it comes to the use of fluorine compounds in cameras. This would include lenses as well as internal filters. If damaging a fluorine-containing coating can result in the underlying glass being damaged, that would make those coatings ill-suited for use in cameras and lenses.
Nikon recently developed a fluorine coating for its lenses. It's extremely effective at repelling dust, grease and dirt. Hopefully it doesn't include a risk of corrosion as well.
There is another issue that needs to be addressed as well.
That has to do with Leica's fix. They are offering to replace any sensor affected by the corrosion problem. The key here being "replacing the sensor".
The sensor itself is not actually affected by corrosion, only the filter. Simply replacing the filter would seem to be a much easier and less expensive fix. In fact, replacing the filter with one that doesn't use the faulty coating is the only permanent solution to the problem.
Instead, Leica is replacing the entire sensor with one that uses the same faulty filter. That means the corrosion problem could recur on any "fixed" camera.
I'm not sure which is harder to understand, how the problem occurred in the first place or Leica's response.
Sunday, November 16, 2014
More Info on Sony's APCS Sensor
SLR Lounge has some additional information on Sony's new APCS sensor, including diagrams.
Based on the diagrams, it appears that the APCS design uses a movable Bayer filter. There are some obvious questions raised if the design does indeed use a movable filter.
This introduces another moving part that can break or wear out. It is basically a second shutter, one that has to move every time the camera takes a picture. Increasing the number of moving parts in an electronic device also increases the chances that something will go wrong with that device.
This problem is magnified when long exposure times are factored into the equation. Presumably, the only way to prevent color artifacts during long exposures would be to have the filter repeatedly reposition itself, possibly hundreds of times during a single exposure. This would vastly increase the odds of the part failing.
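As a back-of-the-envelope illustration (my own made-up numbers; no reliability figures have been published), even a tiny chance of failure per filter move adds up once the moves number in the millions:

per_move_failure = 1e-7          # assumed chance that any single filter move fails (made up)
for moves in (1, 100, 100_000, 10_000_000):
    survival = (1 - per_move_failure) ** moves
    print(f"{moves:>10} moves -> {survival:.4f} chance of no failure")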
Using moving parts inside a camera introduces another potential issue. Movement while taking pictures results in blurred images. The filter will need to be engineered in such a way that movement in the filter does not result in movement in any other part of the camera. Otherwise the filter could cause "camera shake" even when a tripod is used.
None of these issues are obvious given the original, sketchy description of the technology involved. They become far more obvious after seeing the diagrams.
Wednesday, November 12, 2014
Sony Making News for Image Sensor Innovations, Again
Last week, Sony made news with its patent for an image sensor that could apply multiple exposure times to a single image.
This week, it's an image sensor that can capture Red/Green/Blue information at every pixel. The sensor uses something called "Active-Pixel Color Sensing" to achieve this. Instead of having some pixels detect green, others red and still others blue by use of a color filter array, every pixel in an Active-Pixel Color Sensing (APCS) sensor would detect all three colors by using a moving electronic color filter.
The details are a bit sketchy right now, but rumors have the sensor showing up in products starting late 2015 or early 2016. (With the Xperia smartphone being the first recipient.)
Using each pixel to capture Red/Green/Blue data would offer several advantages.
First, this allows Sony to eliminate the Bayer filter traditionally used to capture color information.
Eliminating the Bayer filter eliminates the need to interpolate color data from several pixels in order to produce color information. This eliminates a great deal of the processing currently needed to produce color images. Eliminating processing should greatly increase the speed at which images can be captured and recorded. It might also lower power consumption.
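To make the interpolation concrete, here is a minimal sketch (my own simplification, not any manufacturer's actual pipeline) of the neighbor-borrowing a Bayer sensor needs in order to fill in the two missing colors at every pixel. A per-pixel RGB sensor skips this step entirely.

import numpy as np

# Stand-in raw data from an RGGB Bayer mosaic: one color sample per photosite.
raw = np.arange(16, dtype=float).reshape(4, 4)

def nearest_demosaic(raw):
    """Crude nearest-neighbor demosaic: copy the closest sample of each missing color."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            # RGGB layout: even row/even column = red, odd/odd = blue, the rest = green
            rgb[y, x, 0] = raw[y - (y % 2), x - (x % 2)]                            # nearest red sample
            rgb[y, x, 1] = raw[y, x] if (y + x) % 2 else raw[y, min(x + 1, w - 1)]  # nearest green
            rgb[y, x, 2] = raw[y | 1, x | 1]                                        # nearest blue sample
    return rgb

print(nearest_demosaic(raw)[0, 0])   # one RGB triple built from three different photosites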
Eliminating the Bayer filter also eliminates the need to deal with moire. This means that a camera equipped with this type of sensor could eliminate the anti-aliasing filter found in many digital cameras. This would help increase image clarity. Sharper images are always a plus.
Second, the pixels used could be larger than those used in Bayer based sensors with no loss of image resolution. Larger pixels are more efficient when it comes to capturing light and less prone to noise at high ISO settings. Fewer pixels would also increase processing speed.
The increase in processing speed actually seems to be one of the largest advantages for the new design. Sony is suggesting 2K video recorded at 16,000 fps.
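Some back-of-the-envelope arithmetic (my numbers, not Sony's) gives a feel for what that frame rate implies just in raw data, before any processing at all:

width, height = 2048, 1080   # one common "2K" frame size (my assumption)
bytes_per_pixel = 3          # assume 8 bits each of R, G and B at every pixel
fps = 16_000
rate = width * height * bytes_per_pixel * fps
print(f"{rate / 1e9:.0f} GB of uncompressed RGB data per second")   # roughly 106 GB/s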
There is one obvious problem with the new sensor: the name. The acronym for the current name would be "APCS". That is far too close to APS-C, which is a common sensor size found in digital cameras. Imagine a camera being described as having an APCS APS-C sensor.
That might be just a tad confusing.
Keep track of developments on this sensor and other Sony camera news at Sony Alpha Rumors.
Friday, November 7, 2014
Lytro Announces Developer Kit
Hat Tip: DP Review
(This has been covered by other outlets as well. DP Review just happens to be the one that caught my attention.)
Lytro is the company that has developed light field technology for camera use. Light field technology allows the camera to record a light ray's direction, intensity and color. (As opposed to regular sensors, which only record intensity and color.) The additional directional information allows light field cameras to be used for applications beyond those that normal digital cameras can handle.
The new Lytro Developer's Kit allows outside companies to develop those applications. NASA and the DoD are apparently already interested in the kit.
Light field technology does not seem to be positioned to compete with traditional digital cameras when it comes to producing still images. The still images produced don't stack up resolution-wise. That means Lytro needs to find another reason for consumers to purchase light field cameras, which makes the development kit a smart move. It will enable other companies to develop the technology in directions other than those aimed at producing still images.
The annual subscription for the kit starts at $20,000.
I'll let you decide whether that's reasonable.
Update: PetaPixel has a link to the Lytro Platform page. It provides specifics on what is included in the kit.
Wednesday, November 5, 2014
Sony Patents Varying Exposure Image Sensor
Hat Tip: PetaPixel
Chalk another one up to engineers realizing there is no need for a digital image sensor to behave exactly the same way film behaves. Sony has now designed a new image sensor that uses variable exposure times. The exposure time for each pixel depends on the amount of light hitting the sensor at that pixel's location.
The sensor works by applying one of two exposure times to each pixel. A short exposure time to the bright areas of the image and a long exposure time to the dark areas. Theoretically, this allows the sensor to capture details in the darker areas of an image without over exposing the lighter areas.
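A toy sketch of the general idea (my own simplification with made-up numbers, not the method described in the patent): record each pixel with one of the two exposure times, then scale everything back to a common exposure before assembling the image.

import numpy as np

# Made-up scene brightness (light per unit time reaching each pixel).
scene = np.array([[5.0, 400.0],
                  [8.0, 350.0]])

short_t, long_t = 1.0, 20.0              # the two available exposure times (arbitrary units)
use_long = scene < 100.0                 # dark pixels get the long exposure

exposure = np.where(use_long, long_t, short_t)
raw = scene * exposure                   # what each photosite would actually record

merged = raw / exposure                  # scale back to a common exposure
print(merged)                            # recovers the scene values in this idealized case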
There are obvious issues with using different exposure times for a single exposure.
The most obvious issue involves movement. Moving objects could conceivably move from "light" areas into "dark" areas (or dark to light). This would result in motion blur with different exposures in different areas. Not necessarily the result the photographer is looking for.
Light emitting objects could produce additional problems. A light emitting object that starts in a "light" area and moves into a "dark" area could result in the dark area being over exposed.
Sony has apparently considered the potential problems associated with using multiple exposure times for a single image and has attempted to address these issues via the software used with the sensor.
The actual patent can be viewed here for those interested.
The patent description includes a link to a pdf file with images and includes a little more detail on the approach used to address blurring/movement.
It's always nice to see digital imaging innovation that comes as a result of diverging from the "image sensor as film" mentality.
Tuesday, October 7, 2014
Scientists Develop Sensor More Sensitive to Color
Hat Tip: Imaging Resource
Researchers at the University of Granada, along with those at the Polytechnic University of Milan (Italy), have developed an image sensing device capable of capturing far more color information.
The sensor is similar to Sigma's Foveon sensor. It detects the wavelength of light associated with a photon based on how far it penetrates into the silicon used in the sensor. Different wavelengths (perceived as different colors) penetrate to different depths. The sensor determines the color associated with a particular pixel based on how far the light penetrates the sensor at that point.
(It's probably a little more complicated than that, but that's close enough for anyone not trying to develop the technology.)
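The physics behind the depth trick is straightforward: light is absorbed in silicon following an exponential law, and longer wavelengths survive to greater depths. A quick illustration using rough, order-of-magnitude absorption lengths (my own approximate figures, not the researchers' numbers):

import math

# Approximate absorption lengths of silicon in microns (order-of-magnitude values only).
absorption_length = {"blue ~450nm": 0.4, "green ~550nm": 1.5, "red ~650nm": 3.5}

depth = 1.0   # microns into the silicon
for color, length in absorption_length.items():
    remaining = math.exp(-depth / length)   # Beer-Lambert: fraction not yet absorbed
    print(f"{color}: {remaining:.0%} of the light is still unabsorbed at {depth} micron")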
The scientists refer to the sensor as a "Transverse Field Detector" (TFD). It utilizes a transversal electric field with varying intensity to modulate the depths at which photons are collected.
(Join the club if that description just goes right over your head. I think it means they can control the sensor so that it only detects certain wavelengths, and do so on a per-image basis based on the strength of the electric field used.)
The sensor is capable of obtaining 36 channels of information. Current imaging technology produces three: red, blue and green.
36 channels of information is far more than what is required to produce a viable photographic image. It's unlikely that any application would require the use of that many channels at the same time. Plus, the file size would be huge: roughly 12 times that of a three-channel image.
There might be times when someone would want all 36 channels in order to view them separately or to compare channels. (I think astronomers might find the 36-channel capability useful when determining the chemical signatures associated with different astronomical bodies.)
36 channels of color information is overkill when it comes to photography. 3 channels at 8 bits results in over 16 million possible colors. Extending that out to 10 bits per channel results in over 1 billion possible colors. This is far more than the human eye is capable of seeing.
If my math is correct, 8 channels at 8 bits would result in over 18 quintillion possible colors. (The number of color combinations increases 256 times every time you add another 8-bit channel.) To put this in perspective, 18 quintillion dollars is roughly a million times more than is needed to pay off the current US national debt.
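For anyone who wants to check the arithmetic, the number of possible colors is simply 2 raised to the total bit count (bits per channel times the number of channels):

def color_count(channels, bits_per_channel):
    return 2 ** (channels * bits_per_channel)

print(f"{color_count(3, 8):,}")    # 3 channels x 8 bits:  16,777,216
print(f"{color_count(3, 10):,}")   # 3 channels x 10 bits: 1,073,741,824
print(f"{color_count(8, 8):,}")    # 8 channels x 8 bits:  18,446,744,073,709,551,616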
The three channels at 10 bits (or more) used by many modern DSLR cameras already produce more colors than the human eye is capable of discerning. Adding additional channels would just result in useless additional information.
The ability to pick 3 different channels out of 36 possibilities might prove useful, though. Especially if those options include infrared and ultraviolet wavelengths. A camera that could switch from normal 3 channel (RGB) mode to one capable of taking infrared and/or ultraviolet pictures simply by changing settings might be of interest to some photographers.
The capability to switch the camera to detecting particular wavelengths might also prove useful under certain lighting conditions.
(The original article can be read here)
Tuesday, July 29, 2014
How Much Does Sensor Size Really Matter?
Zack Arias of Dedpxl.com posted a YouTube video recently on the difference between full frame and APS-C camera sensors.
The title "Crop or Crap".
His take, there is negligible difference between the sensors (and the photographer matters more than the sensor.)
He doesn't actually get into any numbers, so I thought I would. (One of my first posts involves a comparison of sensor size so I have easy access to the numbers involved.)
A full frame digital sensor measures 36 x 24 mm. APS-C sensor size varies based on manufacturer; it's roughly 22 x 15 mm. The full frame sensor has a surface area of 864 sq. mm versus about 330 sq. mm for the APS-C sensor.
Micro 4/3 sensors are 17.3 x 13 mm, for a surface area of 225 sq. mm.
It's the surface area number that's important. That number reflects the actual space available for individual photosites or pixels. There is a limit on how small individual photosites can be shrunk. (Making them too small introduces an unacceptable level of noise.)
Based on the numbers, there is actually a very large difference between APS-C and full frame sensors, with the full frame sensor offering roughly 2.5 times the surface area.
Given the same pixel count, a camera with an APS-C sensor will have to use smaller photosites and have those photosites located closer to each other. This increases the likelihood of noise. Eventually, you get to a point where the photosites are so small and so close together that it is impossible to fit any more of them onto the sensor.
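To put rough numbers on that, here is a quick calculation (my own arithmetic, using the approximate sensor dimensions above and an arbitrary 24MP resolution) of how the pixel pitch shrinks with the sensor when the pixel count stays the same:

import math

sensor_area_mm2 = {"full frame": 36 * 24, "APS-C": 22 * 15, "Micro 4/3": 17.3 * 13}
megapixels = 24   # the same nominal resolution on every format (arbitrary example)

for name, area in sensor_area_mm2.items():
    pitch_um = math.sqrt(area / (megapixels * 1e6)) * 1000   # approximate pixel pitch in microns
    print(f"{name}: {area:.0f} sq. mm, roughly {pitch_um:.1f} micron pixel pitch at {megapixels}MP")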
Neither Micro 4/3 nor APS-C sensors seem to have reached their theoretical maximum pixel count right now. Those systems will eventually reach a point where they can't offer the same resolution as full frame cameras, and any forward-looking photographer should consider this when investing in a new camera.
(Frankly, the theoretical limit on resolution is high enough when it comes to Micro 4/3 and APS-C systems that most photographers won't care. For those that do, the limit for full frame cameras will be roughly 2.5 times that for APS-C cameras.)
For those willing to accept this theoretical limit, systems using the smaller sensors do offer some advantages.
The smaller sensor size equates to smaller camera bodies. It also means smaller lenses. Smaller cameras and lenses mean these systems are lighter and easier to carry. The lenses also tend to cost less as they contain less glass.
The weight and monetary savings make these systems very attractive compared to full frame DSLR cameras. They also happen to be capable of producing high image quality. (There are minor differences when it comes to things like depth of field.)
There seems to be very little reason to avoid these systems right now. That might change in the future if manufacturers start increasing the resolution offered by these systems, especially if they start reaching the theoretical maximum resolution (whatever that turns out to be).
The title "Crop or Crap".
His take, there is negligible difference between the sensors (and the photographer matters more than the sensor.)
He doesn't actually get into any numbers, so I thought I would. (One of my first posts involves a comparison of sensor size so I have easy access to the numbers involved.)
A full frame digital sensor measures 36 x 24 mm. APS-C sensor size varies based on manufacturer. Its roughly 22 x 15 mm. The full frame sensor has a surface area of 864 sq. mm versus about 330 sq. mm. for the APS-C sensor.
Micro 4/3 sensors are 17.3 x 13 mm, for a surface area of 225 sq. mm.
It's the surface area number that's important. That number reflects the actual space available for individual photosites or pixels. There is a limit on how small individual photosites can be shrunk. (Too small results in introducing an unacceptable level of noise.)
Based on the numbers, there is actually a very large difference between APS-C and full frame sensors, with the full frame sensor offering roughly 2.5 times the surface area.
Given the same pixel count, a camera with an APS-C sensor will have to use smaller photo sensors and have those sensor located closer to each other. This increases the likelihood of noise. Eventually, you get to a point where the photosites are so small and so close together that it is impossible to fit photosites onto the sensor.
Neither Micro 4/3" or APS-C sensors seem to have reached their theoretical maximum pixel count right now. Those systems will eventually reach a point where they can't offer the same resolution as full frame cameras, and any future conscious photographer should consider this when investing in a new camera.
(Frankly, the theoretical limit on resolution is high enough when it comes to Micro 4/3" and APS-C systems that most photographers won't care. For those that do, the limit for full frame cameras will be 2.5 times that for APS-C cameras.)
For those willing to accept this theoretical limit, systems using the smaller sensors do offer some advantages.
The smaller sensor size equates to smaller camera bodies. It also means smaller lenses. Smaller cameras and lenses mean these systems are lighter and easier to carry. The lenses also tend to cost less as they contain less glass.
The weight and monetary savings make these systems very attractive compared to full frame DSLR cameras. They also happen to be capable of producing high image quality. (There are minor differences when it comes to things like depth of field.)
There seems to be very little reason to avoid these systems right now. That might change in the future if manufacturers start increasing the resolution offered by these systems, especially if they start reaching the theoretical maximum resolution (whatever that turns out to be).
Friday, July 4, 2014
Image Taken With Sony's New Curved Sensor
Hat Tip: PetaPixel
Sony has apparently posted an image taken using its new curved sensor design. It's just an image of a model/diorama, so you really can't draw many conclusions from it.
The post that included the image is written in Japanese, which I don't read. (I did run it through Bing Translator to get a rough idea of the content.) There doesn't seem to be any information on what type of lens was used to take the picture. The sensor could conceivably require a specially designed lens for best results, and I would like to know whether Sony used a lens designed for the sensor or just a preexisting lens designed for current flat sensors.
The image does suggest that Sony is serious about using the curved sensor in its cameras.
There does seem to be an issue with using zoom lenses with the sensor. This suggests that the sensor might first appear in fixed-lens cameras.
Friday, June 27, 2014
Canon Patents Multi-Layer Sensor
Hat Tip: SLR Lounge / PetaPixel
Canon has patented a digital camera sensor consisting of five different layers.
The sensor appears similar to the one produced by Foveon, with the Canon sensor having the ability to capture infrared and ultraviolet information in addition to recording red, green and blue.
Digital cameras typically include a piece of glass in front of the sensor to screen excess UV light. The Canon sensor (if it is actually produced) would most likely omit this glass. That might be the only way for there to be enough UV information available for the UV layer to function correctly.
Omitting this layer of glass would have implications when it comes to lens design. Lenses are designed with this piece of glass in mind, and using a lens on a camera that uses glass with a different thickness (or no glass at all) will impact image quality.
Canon might be able to address this problem by using glass that doesn't absorb UV light. (I don't have enough information as to whether this would work or not. I only throw it out as a possibility.)
The design would also have implications when it comes to image editing software. Current sensors only record three channels of information. This sensor would add two more channels. The simplest solution to this might be putting the information from the UV and IR layers into a separate file. This would result in an image file that only contained RGB data, with an additional file that could be used for UV or IR photography or to edit the RGB data.
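A rough sketch of that file-splitting idea (purely illustrative; the patent does not describe any particular file format):

import numpy as np

# Pretend five-channel capture from such a sensor: R, G, B, UV and IR planes (made-up data).
capture = np.random.rand(4, 6, 5)

rgb_image = capture[:, :, :3]     # what ordinary editing software would open
uv_ir_data = capture[:, :, 3:]    # the extra channels, written off to a separate file
print(rgb_image.shape, uv_ir_data.shape)   # (4, 6, 3) (4, 6, 2)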
The ability to record UV and IR information might be a selling point for some photographers. The problem might be the additional costs involved in producing the sensor. Those costs would include the design changes to the camera that would come with using the sensor, in addition to the increased cost of producing the more complex sensor itself.
(Image of sensor design from Northlight Images. I didn't link to their article as it isn't on a permanent page.)
Friday, June 13, 2014
Sony's New Curved Sensor
Hat Tip: DIY Photography
Sony patented a curved camera sensor back in April. The sensor has apparently reached the production stage.
The curved design apparently enhances the sensor's sensitivity, making it twice as sensitive at the edges of the sensor and 1.4 times as sensitive in the middle. The benefit of increased sensitivity to light is fairly self-evident. Even better, the increase in sensitivity should not increase noise.
The biggest drawback would seem to be the impact the design change would have on the ability to use existing lenses. Current lenses are designed for use with a flat sensor. Using one with the curved sensor would probably produce an image with only parts of it in focus.
It might be possible to address this issue with an adapter.
Using a current lens with the new curved sensor could produce desirable effects for some photographers (vignetting or soft focus at the edges).
Lenses designed to be used with the curved sensor wouldn't need some of the elements used by current lenses to address problems caused by using a flat sensor. This means lenses designed for this sensor could use fewer elements or a less complex design (or both).
This could allow Sony (or other companies) to produce lenses that cost less than those used by competing cameras without having to compromise on image quality.
Tuesday, April 22, 2014
Lytro Announces New Light Field Camera
Hat Tip: DP Review, Amateur Photographer, ePhotozine
Today's hot news seems to be the announcement of the Illum Professional Light Field Camera by Lytro.
(Photo from DP Review article)
Lytro is currently taking pre-orders for the Illum. Sales are scheduled to start in July. The Illum is listed at $1,599. You can save $100 by pre-ordering (pre-ordering the Illum also provides some additional perks beyond the price drop, including a chance to participate in a photo shoot run by a professional photographer.)
Camera Specs
- 40 mega-ray sensor
- 8x optical zoom lens, 30-250mm equivalent
- Constant f/2.0 aperture
- Macro capability
- 1/4000 of a second high-speed shutter
- 4" tilting touchscreen
Included software can be used to manipulate the images and to create interactive images or animation. It can also export still image files that can be edited with other image editing software. (Apple's Aperture and Adobe's Photoshop/Lightroom support the light-field image files. These programs should be able to edit the light-field file without exporting it as a jpg or similar image file first.)
The images can also be used to produce 3-D images on 3-D capable devices.
Lytro has a page with interactive images produced using the Illum. These images give an idea as to the type of post-processing possibilities that exist when editing the light-field image files.
DP Review also has a brief interview with the Lytro CEO.
The most interesting part of the interview may be the part on the zoom lens used by the Illum. The lens uses 13 elements, which is fairly low for this type of lens. Normally the lens would need to address aberrations by including additional pieces of glass (or more complex pieces).
The ability to track light-ray direction enables the camera to omit those. Aberration correction is done by software instead.
This suggests that light-field cameras could wind up competitively priced in comparison to regular digital cameras. The sensor might be more expensive, but the expense could be offset by less expensive lenses that still produce images of similar quality.
The 40 mega-ray number seems impressive, but that includes a great deal of information not used when exporting a two dimensional image. Those images run about 5 megapixels.
5 megapixel output seems low given the $1,599 price tag.
The resolution produced when exporting still images still needs to improve in order to compete with existing digital cameras.
There is also no information on hardware requirements for the accompanying software. I suspect they will be higher than the requirements for other image editing software.
Overall, the camera does not seem well suited for traditional photography. The image resolution just isn't high enough for still images. That does not mean the camera is useless, though.
It seems tailor made for game and web developers. The ability to capture 3-D information suggests that 3-D game developers might be well served by investing in the camera. It also promises to be useful for creating interactive web sites.
One thing to remember is that this is basically the second generation of light-field technology. While it might come up short when compared to existing digital cameras, the Illum looks very impressive when compared to second-generation digital cameras. (Those didn't even manage 5MP.)
Friday, April 18, 2014
The First Digital Space Camera
Hat Tip: PetaPixel
PetaPixel has a short article on the Kodak Hawkeye II.
This combined a 1.2 MP digital sensor built by Kodak with the body of a Nikon F3. The camera was sent up on the space shuttle in 1991, making it the first digital camera to be used in space.
Thursday, April 17, 2014
Panasonic Patents New Light Field Photography Sensor
Hat Tip: Imaging Resource
Panasonic has patented a new sensor that adds the ability to capture light field information to current digital camera sensor technology.
The sensors used in modern digital cameras record the intensity of the light falling on each point (photosite) of the sensor.
A light-field sensor records both the direction and the intensity of light passing through each point. It's the ability to record directional information that sets a light-field sensor apart from current digital camera sensors.
The ability to capture directional information means that images captured using light-field sensors can be used in ways that standard digital images can't be used. The information allows computer software to determine a subject's distance from the camera. This information can be used to create a 3-D model or to manipulate the image in a way that normal images can't be manipulated.
Among other things, software can be used to refocus the image. No more images ruined due to out-of-focus subject matter.
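The refocusing trick is often described as "shift and add": the views seen from slightly different positions on the lens are shifted by an amount tied to the chosen focal plane and then averaged. A toy sketch of that idea (my own simplification, not Panasonic's or Lytro's actual processing):

import numpy as np

def refocus(sub_aperture_views, shift_per_view):
    """Shift-and-add refocus: each view is offset in proportion to its position on the lens."""
    shifted = []
    for i, view in enumerate(sub_aperture_views):
        offset = int(round(i * shift_per_view))        # larger shifts focus on nearer planes
        shifted.append(np.roll(view, offset, axis=1))  # crude horizontal shift
    return np.mean(shifted, axis=0)

views = [np.random.rand(8, 8) for _ in range(5)]       # stand-in sub-aperture images
print(refocus(views, shift_per_view=2).shape)          # refocused toward a nearer plane
print(refocus(views, shift_per_view=0).shape)          # zero shift keeps the original plane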
Scientific American has a short video on How Light Field Cameras Work.
The best part of the Panasonic patent? The light-field sensor covered by it can be used in any camera using a digital sensor.
Saturday, April 5, 2014
Inside the Samsung Galaxy S5
Imaging Resource has an article covering the tear-down of the Samsung Galaxy S5 done by the people over at Chipworks.
Chipworks engineers disassemble devices like the Samsung Galaxy, analyze the components and then provide an analysis of the device to competitors and institutional investors. (This would include your mutual fund manager if you happen to have a 401(k) account.)
This allows competitors to keep tabs on what other companies are doing and gives investors some insight into a company's strengths and business strategy not available by other means.
The images included in the article are worth viewing. (The article itself is a bit dense in spots. Parts made my eyes glaze over, and I'm fairly tech savvy.)
Here's the image of the phone's image sensor:
There are a large number of photos dedicated to the various parts and identification labels. It's a good opportunity to see what the inside of one of these devices looks like without risking taking one you own apart.
The Imaging Resource article is a little easier to read and includes a brief description of the implications when it comes to using the Galaxy to take photos.
Thursday, March 27, 2014
What Different Megapixel Numbers Mean When Comparing Cameras
Digital Trends has an online article comparing the Canon EOS Rebel T5 to the Nikon D3300.
The fact that the Nikon D3300 sports a 24MP APS-C sensor vs. the 18MP sensor used by the T5 makes this look like a very one-sided comparison, but what does that 6 megapixel difference really mean when comparing the two cameras?
Take a look at the maximum image size each camera is capable of producing.
The T5 can produce a 5184 x 3456 image. The D3300 can produce a 6,000 x 4,000 image. Those numbers appear to be closer than the 30% difference represented by the megapixel number. Applying a little math, the D3300 image is 15% wider and 15% taller.
If the D3300 image is only 15% larger, why is the megapixel count 30% greater?
It's because you have to increase the size in two directions in order to keep the aspect ratio. Increasing both height and width by 15% increases the total area (and therefore the pixel count) by roughly 30%.
That raises the question: which number should be used when comparing the two cameras? The 15% height/width increase or the 30% increase in surface area?
Personally, I'd go with the 15% increase for one simple reason: printing results. Print sizes are usually compared based on width or height. D3300 images can be printed at a size 15% wider than those produced by the T5.
(To estimate the increase in printing width when comparing cameras with different megapixel counts, take the percentage by which the smaller megapixel number must be increased and roughly halve it; the exact factor is the square root of the megapixel ratio.)
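The halving rule is an approximation of a square root; the exact relationship is easy to verify with the two cameras' image sizes:

import math

t5 = (5184, 3456)      # Canon T5 maximum image size
d3300 = (6000, 4000)   # Nikon D3300 maximum image size

mp_ratio = (d3300[0] * d3300[1]) / (t5[0] * t5[1])
width_ratio = d3300[0] / t5[0]
print(f"pixel count ratio: {mp_ratio:.2f}")                      # about 1.34
print(f"width ratio: {width_ratio:.2f}")                         # about 1.16
print(f"sqrt of pixel count ratio: {math.sqrt(mp_ratio):.2f}")   # matches the width ratio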
The 15% number also makes sense when comparing the prices of the two cameras. The T5 with kit lens costs $550. The D3300 with a similar lens costs $650.
That amounts to an 18% increase in price.
Nikon seems to be admitting that its 24MP sensor is only 15% better than the 18MP sensor in the T5.
Footnote: Remember we're talking image size, not sensor size. The two cameras use the same size sensor. The D3300 has a larger pixel count because it uses slightly smaller individual photosites. This allows it to produce images that are larger than those produced by the T5 when viewed at the same resolution.
Friday, March 14, 2014
Samsung Video Explaining Its ISOCELL CMOS Image Sensor
Hat Tip: DP Review
Samsung uploaded a video explaining its ISOCELL image sensor to YouTube a couple days ago.
The design is meant to isolate the individual pixels (photosites/photodiodes) from each other in order to prevent image noise.
Isolating the pixel sites from each other becomes more important as the sensor's size is decreased or the number of pixel sites on the sensor is increased. Both require using smaller pixel sites packed closer together. Image noise increases when pixel sites are moved closer to each other.
Monday, February 10, 2014
New Foveon Sensor, The Latest in Unconventional Sensor Design
PC Mag, Imaging Resource, DP Review and Pop Photo all have articles today on the new Sigma DP Quattro featuring a redesigned Foveon sensor.
I suspect the timing has less to do with overwhelming interest and more to do with a news embargo timed to end today. I have to admit, though, having four articles written on the same day is great for P.R.
How the Foveon Sensor Works
Conventional digital camera sensors utilize a single layer of photodetectors and a mosaic-patterned filter. The filter limits the light hitting each detector to a specific color. Different colors are allowed through at different areas of the sensor. These results are combined to produce a full-color image.
The Foveon sensor takes advantage of one of the properties of silicon. Light wavelengths (colors) penetrate silicon to different depths. Blue light only penetrates slightly, with red penetrating the most. The sensor uses three layers of photodetectors, each capturing light penetrating to a different depth.
The depth is used to tell the camera what color is being captured. The top level yields blue, the middle green and the bottom red.
(Technically, the top layer captures all light. The middle captures all light except blue. The bottom only captures red. Determining blue/green requires a little arithmetic based on the light captured by the layers below.)
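In idealized terms (a simplified sketch with made-up numbers, not Sigma's actual processing), separating the colors is just subtraction between layers:

# Idealized layer readings (made-up numbers): each layer records everything that reaches it.
top = 100.0      # blue + green + red
middle = 60.0    # green + red (the blue was absorbed above)
bottom = 25.0    # red only

blue = top - middle       # 40
green = middle - bottom   # 35
red = bottom              # 25
print(blue, green, red)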
This design allows the sensor to capture all the light hitting the sensor instead of limiting the light to certain wavelengths. The original generation of cameras using the technology did capture very detailed images, largely due to the ability to eliminate the color filter and low-pass filter used by typical camera sensors. Unfortunately, they were also prone to noise at higher ISO settings.
The original sensor had three layers with equal resolutions. This has been altered in the latest version so that the top layer has a much higher resolution than the lower two layers. The top layer's resolution is four times that of the second and third layer. (Top is 20MP, second and third 4.9MP each.)
Lowering the resolution of the bottom two layers may help lower image noise and increase processing speed.
Camera Design
The DP line comes in three fixed-lens versions. The DP1 has a 19mm lens, the DP2 a 30mm lens and the DP3 a 45mm lens. (Equivalent to 28mm, 45mm and 75mm, or wide-angle, standard and short telephoto.)
A large amount of the body has been eliminated. Viewing an image of the 30mm lens model emphasizes just how much of the body has been eliminated. The lens extends above the camera body.
This is from the CNET review. They published a day before the other sites.
The grip faces the opposite direction from what is used on most digital cameras and appears to have been angled slightly. It almost looks as if Sigma decided to flip the typical body over, putting the lens on what originally was the back and the LCD on the front.
Prices have yet to be announced.
Saturday, January 25, 2014
Fujifilm Patents New Color Filter Array for Digital Cameras
Hat Tip: DP Review
Fujifilm has a history of designing (and using) digital camera sensors that diverge from the traditional design used by almost every other digital camera manufacturer.
The sensors used in digital cameras are actually only capable of detecting how much light hits each light receptor (pixel). The light hitting the sensor has to be filtered so that different pixels are hit by different colors of light. Some are hit by red, others by green or blue. These values are then used to render a color image.
Bayer Pattern
Most digital cameras use a Bayer pattern in the filter used to produce color information.
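For a concrete picture of what that means for the raw data, here is a small Python sketch: every photosite records a single number, and a repeating 2x2 R/G/G/B tile determines which color that number represents. (The RGGB ordering is just one common arrangement, used here for illustration.)

import numpy as np

# 0 = red, 1 = green, 2 = blue
def bayer_pattern(height, width):
    tile = np.array([[0, 1],
                     [1, 2]])  # the repeating 2x2 RGGB tile
    return np.tile(tile, (height // 2, width // 2))

def mosaic(rgb_image):
    """Simulate what the sensor records: one color value per pixel."""
    h, w, _ = rgb_image.shape
    pattern = bayer_pattern(h, w)
    rows, cols = np.indices((h, w))
    return rgb_image[rows, cols, pattern]

scene = np.random.rand(4, 6, 3)  # stand-in for the incoming light
raw = mosaic(scene)              # single-channel "sensor" data
print(raw.shape)                 # (4, 6) -- full color has to be rebuilt later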
This regular pattern works well, most of the time. Problems can occur when photographing something that also has a regular pattern, like a window screen. The two patterns interact with each other to form an artificial pattern, called a moire pattern, in the end image. (Diagonal lines of alternating colors, for example.)
Sensors that use this Bayer pattern will use a "low-pass" or "anti-aliasing" filter in front of the sensor to avoid this pattern in the end image. This filter works by slightly blurring the light as it passes through the filter.
This results in slight loss of detail.
X-Trans Filter
This is a color filter developed by Fujifilm to address moire. It replaces the regular pattern found in the Bayer filter with a less regular pattern.
This enables Fujifilm to remove the low-pass filter without increasing the risk of moire patterns occurring when using their cameras, which slightly increases the level of detail the camera can capture.
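Here is a quick sketch of the idea using the 6x6 X-Trans layout found in open-source raw converters (treat the exact layout as illustrative rather than official). The property that matters for moire is that, unlike a Bayer tile, every row and every column of the repeating block contains all three colors:

import numpy as np

# 0 = red, 1 = green, 2 = blue; layout as used by dcraw/LibRaw (illustrative).
XTRANS = np.array([
    [1, 1, 0, 1, 1, 2],
    [1, 1, 2, 1, 1, 0],
    [2, 0, 1, 0, 2, 1],
    [1, 1, 2, 1, 1, 0],
    [1, 1, 0, 1, 1, 2],
    [0, 2, 1, 2, 0, 1],
])

# Every row and every column contains red, green and blue.
assert all(set(row) == {0, 1, 2} for row in XTRANS)
assert all(set(col) == {0, 1, 2} for col in XTRANS.T)
print("greens per 36 pixels:", int((XTRANS == 1).sum()))  # 20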
The Latest Patent
The latest sensor patent filed by Fujifilm combines a sensor array that utilizes different pixel sizes with a filter array that includes clear areas as well as colored ones.
The addition of clear filter areas and larger pixels should make the sensor more sensitive in low light conditions, lowering noise in images taken in low light.
The downside is a sensor that is more complex to manufacture. This should increase its cost when compared to the Bayer based sensors used in other cameras.
It also looks like the sensor may have more open space when compared to the Bayer sensor. (This could just be due to the drawing used in the patent.)
Alternatively, Fujifilm might be able to produce similar results simply by altering the filter used with the current sensor. Adding clear areas and using a pattern like this:
might allow Fujifilm to emulate the results of the patented sensor without having to increase manufacturing costs.
BTW, if you're wondering why the green areas are larger than the blue and red, that's because the human eye is more sensitive to that wavelength (color) of light than the other two.
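For a sense of just how lopsided that sensitivity is, the widely used Rec. 601 luma weights, one standard way of computing perceived brightness from R, G and B, give green almost twice the weight of red and roughly five times the weight of blue:

# Rec. 601 luma weights: green alone counts for more than red and blue combined.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma(1, 0, 0), luma(0, 1, 0), luma(0, 0, 1))  # 0.299 0.587 0.114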
Wednesday, January 22, 2014
New Youtube Video on How Digital Cameras Work
The people over at the Head Squeeze Youtube Channel recently uploaded a video on how digital cameras work. The video features James May of the BBC show Top Gear.
I do have a couple of comments.
First, the video is a little off when it comes to the resolution available 10 years ago. 3 to 4 megapixel cameras were readily available in 2003/2004. The 5 megapixel Olympus E-1 came out at the end of 2003. The more consumer-oriented, 4 megapixel HP Photosmart 850 dates to the fall of that year. (The price was about $500 at the time.)
The 1 megapixel era is closer to 15 years ago.
Oddly enough, this is about the time that the singing bass was popular. (Watch the video)
Secondly, the "My Cat Looks Like Hitler" web forum?
Okay
Just couldn't leave that one alone. Actually found one.
Edit: Forgot the hat tip on this.
The Pop Photo writer had a similar response to the 1 megapixel statement.
Saturday, November 23, 2013
Night Photography: Results Versus Human Perspective
I ran across an interesting article on Space.com on the difference between human vision at night and the results produced by the camera. The article is by Scott Taylor and is titled How Cameras Reveal the Northern Lights' True Colors.
His aurora and other night photographs can be seen at his smugmug portfolio. Some of his aurora photographs are extremely impressive. (Prints can be purchased from the smugmug link if interested.) Taylor also offers photography workshops. Keep an eye on his blog for his 2013 schedule.
Now, back to the article.
It points out that auroras seen by the naked eye lack the strong color often seen when photographed by a camera. The human eye contains two types of structures for capturing light. One captures color and works best in strong light. The other works in low light conditions, but can't capture color, only value.
Digital cameras, on the other hand, are capable of capturing color even in low light conditions.
The sensor used by digital cameras technically isn't capable of capturing colors at all. It can only capture value (dark/light). Color is produced by filtering incoming light to exclude all but certain wavelengths. Some parts of the sensor detect green light, others red, and still others blue. Combined, they produce full color.
It's like having three eyes designed for low light conditions, filtered for color and then combined by the brain into a single image.
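One reason the camera can pull color out of the dark, which the article's advice presumably leans on, is that it can keep collecting light over a long exposure while the eye cannot. A toy illustration with made-up numbers: a faint colored signal that is invisible in any single short frame emerges once many frames are averaged, which is roughly what a long exposure does:

import numpy as np

rng = np.random.default_rng(0)

# A faint greenish signal (think dim aurora) buried in sensor noise.
# These numbers are invented purely for illustration.
faint_rgb = np.array([0.01, 0.05, 0.02])
frames = faint_rgb + rng.normal(0.0, 0.05, size=(400, 3))

print(frames[0])       # one short frame: the color is swamped by noise
print(frames.mean(0))  # the averaged "long exposure": close to faint_rgb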
The article gives some practical advice when taking aurora photographs. Presumably, the advice should apply to other low light conditions.