Hat Tip: Imaging Resource
My immediate reaction to Polaroid's new Zip photo printer was "Why?"
It's a mobile printer that prints 2" x 3" images. It's charged via a micro USB cable and prints roughly 25 images per charge.
The printer can link to Android or iOS phones via Bluetooth/NFC.
So basically, it's a device that turns your phone into a Polaroid camera with a single, 25-image film pack. All for only $129.99.
Oh, and you have to use Polaroid's paper which costs $14.99 for 30 sheets.
Okay, so I'm being a bit facetious.
The "printer" doesn't actual print. It actually activates ink embedded in the specialized paper. Plus, the printer will presumably draw power from the cable if the cable is plugged into a wall outlet instead of using the battery, allowing the device to print more than 25 images as long as a wall outlet is available. (It should also be able to use external portable batteries.)
This does strike me as a very niche product. I don't see that many people being interested in a portable printer capable of printing only 25 images before it has to be recharged. It does appear to be a better option than the current alternatives, though.
You can purchase instant film cameras and film. The cheapest option for film seems to be Fuji's instant film at $8.99 for a 10 pack. That's roughly $27 for 30 photos versus $14.99 when using Polaroid's new printer. Saving about $12 per 30 images means the printer will pay for itself after a little over 300 images when compared to the alternatives.
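The break-even arithmetic is easy to verify. A minimal sketch using the prices quoted above (nothing assumed beyond rounding):

    # Break-even estimate: Zip printer vs. buying instant film
    printer_cost = 129.99
    zip_per_image = 14.99 / 30   # Polaroid's paper
    film_per_image = 8.99 / 10   # Fuji instant film
    savings = film_per_image - zip_per_image
    print(f"Savings per image: ${savings:.2f}")
    print(f"Break-even after ~{printer_cost / savings:.0f} images")

That works out to roughly 40 cents saved per print, putting the break-even point at around 325 images.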
This is a product certain photographers might be interested in, with wedding photographers being the most obvious. Guests could obtain copies of photos taken at the wedding or reception while they were still on location. The couple could also personalize any keepsakes given out to guests with images taken during the ceremony.
The same is true for other events.
This does suggest a possible revenue source for event photographers. The photographer could rent the printer and sell the paper needed to print images at the event.
(For those wondering how someone could make money selling the paper: the $14.99 is the retail price. The photographer should be able to purchase the paper at the wholesale price. The difference is the photographer's profit margin on the paper.)
Friday, April 3, 2015
Run Linux on Your Canon DSLR
Hat Tip: PetaPixel
The developers over at Magic Lantern have announced that they have gotten the Linux OS to successfully boot on Canon DSLR cameras.
The Magic Lantern team wasn't able to get beyond booting Linux. Going further would require experience modifying the Linux kernel, and the developers at Magic Lantern apparently have none.
They have publicly released the details of how they accomplished this. That means developers with Linux kernel experience should be able to develop a version of Linux capable of running on Canon DSLR cameras.
This creates the possibility of a great deal of customization when it comes to the software running on the cameras, including adding features not envisioned by Canon programmers.
Monday, March 16, 2015
Japanese Company Developing Synthetic Fluorite Process
Hat Tip: Imaging Resource
Yes, I realize the story is from Friday. I was busy this weekend. (I'm also busy working during the week at a physically demanding job. That's part of the reason I don't post regularly right now.)
Now, back to the fluorite (calcium fluoride).
Fluorite is a naturally occurring mineral that is mined from the earth. Its optical properties allow it to be used to create lenses that suffer less chromatic aberration than glass-based lenses do. This obviously improves image quality.
The problem?
The vast majority of high-quality fluorite is mined in China.
The country has what amounts to a monopoly on the mineral. Any company that wants to buy pure fluorite in large amounts has to do business with that country.
A Japanese company (Iwatani Corp.) is attempting to develop a method for creating fluorite with a high enough purity for use in optical glass.
Iwatani developed a process for recycling the perfluorocarbon (PFC) gas created during the manufacturing of semiconductors. PFC gas is considered a harmful pollutant (it is a potent greenhouse gas). The end result of this process is fluorite. Unfortunately, the fluorite created by this process lacks the purity necessary for optical uses.
The company recently announced it had been successful in creating highly purified fluorite, but the process is currently cost prohibitive. (The resulting product costs roughly twice as much as mined fluorite.)
It is now working on ways to decrease production costs.
Sunday, March 8, 2015
Major Advance in Flat Lenses From Harvard
Hat Tip: ePhotozine (Original story at Digital Trends.)
Harvard has created a prototype flat lens capable of successfully focusing multiple wavelengths of light at the same point. Harvard refers to the planar lens design as an achromatic metasurface lens.
Planar is just fancy language for flat. Achromatic, when applied to lenses, means the light is not separated into its constituent colors. Metasurface refers to the surface of a metamaterial, which is a material with a structure that produces results that can't be produced by natural materials. It's geek speak describing how the lens works.
Ordinary lenses work by utilizing a curved surface to bend light. The drawback with this method is that different wavelengths of light (perceived as different colors) bend different amounts when passing through these lenses. This forces camera lens manufacturers to utilize multiple components to correct for this splitting of the different colors.
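For the mathematically inclined, the standard thin-lens (lensmaker's) relation shows where the splitting comes from. The refractive index n depends on the wavelength λ, so the focal length does too (R1 and R2 are the radii of curvature of the two lens surfaces):

    \frac{1}{f(\lambda)} = \bigl(n(\lambda) - 1\bigr)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)

Different wavelengths therefore focus at different distances, which is exactly what the extra corrective elements (or Harvard's antennas) have to compensate for.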
(The Harvard news page has an illustration if you want a visual representation.)
Having to use multiple lenses increases the complexity of lens design and increases the amount of glass needed when manufacturing camera lenses. This increases the cost and weight of quality lenses.
Instead of utilizing curved glass to bend light, the Harvard design utilizes a flat lens with "antennas" on its surface. These antennas are what make the lens a metamaterial. Light bends as a result of hitting the antenna.
The initial prototype introduced in 2012 was only capable of bending a single wavelength of light. The research team addressed this limitation by utilizing antennas of different sizes and shapes. Each shape or size targets a specific wavelength of light. The lens is capable of producing a photographic image by targeting the wavelengths corresponding to red, green and blue, as these are the colors recorded by digital sensors. (The other wavelengths are ignored by the sensor and don't need to be affected by the lens.)
The result is a single lens capable of replacing the set of three lenses used in current lens design.
At the very least, this would result in much lighter lenses. It might also result in less expensive lenses as less material needs to be used.
Judging from the images on the Harvard site, the antennas appear to run parallel across the face of the lens. This would result in light being bent in a single direction only. This would not necessarily prevent the technology from being used for camera lenses. The easiest fix would simply be using a second lens set perpendicular to the first.
The lenses would need to be set so the antennas are parallel to the edges of the sensor to ensure light hits the sensor correctly. This is a consideration not required by traditional round lenses.
Wednesday, February 25, 2015
Higher Megapixel Count Not Necessarily Better
Shutterbug has an interesting article today on just what increasing megapixel counts actually means when it comes to digital photography.
The article is largely in response to Canon's introduction of a full-frame camera boasting 50 megapixel resolution. That pixel count puts the full-frame camera in the same league as many medium format cameras, but Canon can offer their camera at a much lower price.
The Canon 5DS and 5DS R have price tags under $4,000. The cheapest medium format camera with at least 50 MP resolution costs just over twice that. As an additional bonus, the lenses should show similar price differences. (The smaller full-frame cameras need smaller lenses. Less glass usually equals lower price. Emphasis on "usually.")
The 50 MP offerings from Canon do look like an attempt to compete with medium format cameras without actually creating a medium format camera. The move makes a certain amount of sense given all the Canon lenses currently available for Canon full-frame cameras.
There is a drawback here, though.
A 50 megapixel full-frame sensor requires smaller individual photoreceptors than a 50 megapixel medium format sensor does. This impacts performance at higher ISO settings. Some photographers will find a camera with a lower pixel count more suitable for the images they capture.
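A rough pixel-pitch calculation makes the trade-off concrete. A minimal sketch, assuming square pixels and the common 44mm x 33mm dimensions for a 50 MP medium format sensor (an assumption; medium format sizes vary):

    # Approximate photosite pitch for two 50 MP sensors
    import math

    def pixel_pitch_um(width_mm, height_mm, megapixels):
        # Spread the pixels evenly over the sensor area
        area_um2 = (width_mm * 1000) * (height_mm * 1000)
        return math.sqrt(area_um2 / (megapixels * 1e6))

    print(f"Full frame (36x24mm):    {pixel_pitch_um(36, 24, 50):.2f} microns")
    print(f"Medium format (44x33mm): {pixel_pitch_um(44, 33, 50):.2f} microns")

That's roughly 4.2 micron photosites on the full-frame sensor versus 5.4 microns on the medium format sensor, and light-gathering area scales with the square of the pitch.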
Monday, February 9, 2015
Corning Upgrading Gorilla Glass Scratch Resistance
This is slightly off-topic when it comes to photography, but I thought it was worth mentioning.
In response to the artificial sapphire used by Apple in its Touch ID sensor, Corning is working to increase the scratch resistance of its Gorilla Glass.
While artificial sapphire is harder than glass (making it virtually scratch-proof), production costs and other concerns have prevented its use in electronic device screens. Glass costs far less to produce and can be given shatter resistance and scratch resistance depending on how it is produced. This gives Corning an advantage over artificial sapphire if it can create a glass that is strong enough and scratch-resistant enough to make sapphire pointless.
The challenge for Corning has to do with the physical properties of glass, including its natural hardness rating.
Glass typically has a hardness between 6 and 7 on the Mohs scale. Quartz particles are one of the main components of dust, and quartz has a hardness of 7. For a material to be scratch-resistant, it needs to exceed the hardness of quartz. This means achieving a hardness over 7.
Silica does not crystallize when melted and then cooled to form glass. Glass is sometimes referred to as a supercooled liquid instead of a solid. There are also other materials added to the silica that can alter the properties of the glass. These additives are used to produce desirable properties in the end product, but can negatively affect the glass's hardness.
Corning can take a couple of approaches when attempting to produce scratch-resistant glass.
The first involves finding additives that increase the hardness of the finished product. (Similar to how adding carbon to iron produces steel.)
The second approach would be to apply a scratch-resistant coating to the exposed glass. A thin layer of aluminum oxide comes to mind.
Thursday, January 22, 2015
Adobe Moving to 64-Bit Only Version of Lightroom
Hat Tip: PetaPixel
Adobe tends to be the go-to platform for digital imaging professionals.
The company recently announced that Adobe Lightroom 6 would no longer support 32-bit operating systems. You'll need a 64-bit OS in order to use the software.
The company officially lists Mac OS 10.8 or higher and 64-bit versions of Windows 7 or later as being supported.
(Microsoft did produce 64-bit versions of Windows Vista. That OS is not listed as being capable of running Lightroom 6.)
There are valid reasons to limit a graphics program like Lightroom to 64-bit operating systems. 64-bit operating systems are capable of accessing much larger amounts of RAM than their 32-bit counterparts. This can greatly increase performance of memory intensive programs like Lightroom.
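If you're not sure whether your machine is already running a 64-bit OS, here's a quick check from Python (a minimal sketch; platform.machine() reports the hardware architecture, while platform.architecture() reports the bitness of the Python build itself):

    # Check whether this system is 64-bit
    import platform
    print(platform.machine())          # e.g. 'AMD64' or 'x86_64' on 64-bit systems
    print(platform.architecture()[0])  # '64bit' or '32bit'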
Adobe has made the announcement in advance of releasing Lightroom 6 in order to give users a chance to switch to a 64-bit OS.
Saturday, December 20, 2014
Would You Want a Camera That Automatically Encrypted Files?
Hat Tip: PetaPixel
A hacker identified as "Doug" has created a firmware update for the NX300 that automatically encrypts images when they are saved by the camera. The files can only be opened by someone with the correct decryption key.
Encrypting files as they are saved by the camera would require some unusual circumstances before the feature was actually useful. The images would probably need to include sensitive information of some kind that needed to be protected from unauthorized access. The encryption would protect that information even if someone somehow got their hands on the image files. (Such as by stealing the photographer's equipment before the images are processed.)
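The underlying idea is simple to illustrate. Here's a minimal encrypt-on-save sketch using Python's cryptography library. This is just an illustration of the concept, not Doug's firmware, and it uses a symmetric key for brevity where a real implementation would likely use public-key encryption so the decryption key never sits on the camera:

    # Encrypt an image file so only the key holder can open it
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # must be stored safely, off-camera
    cipher = Fernet(key)

    with open("IMG_0001.jpg", "rb") as f:
        encrypted = cipher.encrypt(f.read())
    with open("IMG_0001.jpg.enc", "wb") as f:
        f.write(encrypted)

    # Later, anyone holding the key can recover the original:
    # original = Fernet(key).decrypt(encrypted)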
Political activists or journalists taking images in areas controlled by repressive regimes might want this feature. Plastic surgeons might use it when taking pictures of their patients. (No accidental nude photos showing up on social media.)
There are probably other legitimate reasons for encrypting images as they are taken. I can also think of some less savory reasons for doing so (and no, I am not giving anyone any ideas by mentioning them).
Monday, December 15, 2014
New File Format Seeking to Supplant the JPEG
Hat Tip: FStoppers
(Original story at ExtremeTech)
There have been many attempts to supplant the ubiquitous jpeg file format/compression method. None have been successful so far. Even the alternatives backed by tech companies like Mozilla (mozjpeg) and Google (WebP) have failed to take hold.
Jpeg compression results in artifacts and blocky images. The alternatives avoid those issues. So why can't they replace a file format with obvious weaknesses that is twenty years old at this point?
Inertia.
The fact that the jpeg format is twenty years old is part of the problem that developers must overcome when attempting to replace that format. It has been around so long that just about every device, no matter what OS it runs, can recognize the file format and display it correctly. This includes old computers running obsolete operating systems.
New file formats lack that compatibility. That means a new file format/compression method must offer features that ensure widespread adoption. The other major formats (PNG, TIFF and GIF) did that by offering features not provided by the jpeg format.
So, does the new format offer any features that might result in its widespread adoption?
BPG
The new format is BPG (short for "Better Portable Graphics") and is based on the HEVC/H.265 video codec.
There are a couple of features that might result in broad adoption.
First, the format offers similar or better image quality than the jpeg format at smaller file sizes. This alone probably isn't enough for the format to supplant the jpeg. There just aren't that many applications where a modest decrease in file size compared to a jpeg would make much difference.
Second, the BPG format supports 14-bit color and alpha transparency. This is where the BPG becomes interesting. 14-bits of information per channel provides much greater dynamic range of information than is supplied by the 8-bits per channel used by the jpeg format. This makes the BPG much better suited for digital photography. (The ability to save transparency information makes it better suited for certain web applications.)
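The jump in bits per channel is easy to quantify (a quick calculation, ignoring the alpha channel):

    # Levels per channel and total colors at 8 vs. 14 bits
    for bits in (8, 14):
        levels = 2 ** bits
        print(f"{bits}-bit: {levels:,} levels/channel, {levels**3:,} colors")

That's 256 levels per channel (about 16.8 million colors) for jpeg versus 16,384 levels per channel (about 4.4 trillion colors) for BPG.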
Finally, as an adaptation of the H.265 codec, the BPG format can be decoded by any hardware capable of decoding H.265 video. These devices would not need to rely on software to decode a BPG image.
As a side note, BPG images can be decoded using JavaScript. This means that any modern web browser will be able to display the image, even if the computer hosting the browser lacks the necessary codec. This gives the format a leg-up when it comes to widespread web adoption.
Friday, November 21, 2014
3D Printers can now Produce LEDs
Hat Tip: c|net
c|net isn't one of my usual sources when blogging on this site, but the story is interesting enough to deserve mention.
Researchers at Princeton University have developed a 3D printer capable of printing LEDs in layers. The bottom layer is a ring made of silver nanoparticles (used to conduct electricity). This is followed by a couple of polymer layers and then a layer of cadmium selenide nanoparticles in a zinc sulphide casing. The top layer is a eutectic gallium indium cathode.
The cadmium selenide layer is known as a quantum dot, and is what actually produces light. The color of light produced by a quantum dot LED depends on the size of the dot. Quantum dot LEDs are capable of producing any wavelength of light in the visible spectrum. The manufacturer just needs to produce a dot of the correct size.
The ability to produce any color of light means that quantum dot LEDs can be used in devices like computer displays. (They are actually small enough that they could conceivably be used to put a display on a contact lens.)
They also appear to produce better color and brighter light with lower power consumption than current LEDs.
For photographers, that could lead to LED displays on cameras that are easier to see in sunlight while drawing less power than current displays. It could also lead to brighter artificial light sources with better color, again with lower power consumption.
Computer monitors might also benefit from the technology. Better color, brighter display and lower power costs? (And possibly no color management.)
What photographer would turn that down?
Wednesday, November 19, 2014
Use Your Cell Phone for Model Release Form
PetaPixel has an article today on a new Model Release Template available from Shake.
There are accompanying iOS and Android apps. The apps allow the photographer to use the phone for signatures and allow photos of the model that signed the release to be attached to the signed contract.
The contract can also be sent electronically to the other party for their signature.
This is something that any photographer who does a great deal of work requiring model releases should look into, as the template and apps allow the photographer to ensure they always have a contract ready for signature and to keep signed contracts organized.
Head over to PetaPixel's post for download links and a video tutorial on the app.
Tuesday, November 18, 2014
Google Working on Computers That Can Describe Images in Detail
Hat Tip: PetaPixel
Google Research has teamed with Stanford University to improve computer image recognition capabilities.
The software being developed will allow computers to recognize objects in an image, determine context and produce a full description of the image.
For example, an image of two pizzas produces the description: "Two pizzas sitting on top of a stove top oven."
The technology still requires human interaction to "instruct" the computer by providing human-captioned photos. Accuracy increases with each captioned image.
The most immediate impact would probably be on image searches. Having a program that can determine image contents would greatly improve image search results. The search engine would not have to rely on surrounding text or the contents of an image's <alt> tag.
This also holds promise for anyone who needs to produce image descriptions for large numbers of images, including photographers. The caption for every image could be generated automatically by a program instead of having to be applied manually.
There are potential implications beyond those immediate uses. Security cameras, automated drones or cars, and facial recognition software could all benefit from the ability to determine the items contained in an image along with context.
Sunday, November 16, 2014
More Info on Sony's APCS Sensor
SLR Lounge has some additional information on Sony's new APCS sensor, including diagrams.
Based on the diagrams, it appears that the APCS design uses a movable Bayer filter. There are some obvious questions raised if the design does indeed use a movable filter.
This introduces another moving part that can break or wear out. It is basically a second shutter. One that has to move every time the camera takes a picture. Increasing the moving parts in an electronic device also increases the chances that something will go wrong with that device.
This problem is magnified when long exposure times are factored into the equation. Presumably, the only way to prevent color artifacts during long exposures would be to have the filter repeatedly reposition itself. Possibly hundreds of times during a single exposure. This would vastly increase the odds of the part failing.
Using moving parts inside a camera introduces another potential issue. Movement while taking pictures results in blurred images. The filter will need to be engineered so that its movement does not result in movement in any other part of the camera. Otherwise the filter could produce "camera shake" even when a tripod is used.
None of these issues are obvious given the original, sketchy description of the technology involved. They become far more obvious after seeing the diagrams.
Wednesday, November 12, 2014
Sony Making News for Image Sensor Innovations, Again
Last week, Sony made news with its patent for an image sensor that could apply multiple exposure times to a single image.
This week, it's an image sensor that can capture red/green/blue information at every pixel. The sensor uses something called "Active-Pixel Color Sensing" to achieve this. Instead of having some pixels detect green, others red and still others blue by use of a color filter array, every pixel in an Active-Pixel Color Sensing (APCS) sensor would detect all three colors by using a moving electronic color filter.
The details are a bit sketchy right now, but rumors have the sensor showing up in products starting late 2015 or early 2016. (With the Xperia smartphone being the first recipient.)
Using each pixel to capture red/green/blue data would provide several advantages.
First, this allows Sony to eliminate the Bayer filter traditionally used to capture color information.
Eliminating the Bayer filter eliminates the need to interpolate color data from several pixels in order to produce color information. This eliminates a great deal of the processing currently needed to produce color images. Eliminating processing should greatly increase the speed at which images can be captured and recorded. It might also lower power consumption.
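To see how much work Bayer interpolation involves, here is a minimal bilinear demosaicing sketch in Python. It illustrates the standard technique a camera's processor has to perform for every shot, not Sony's actual pipeline:

    # Bilinear demosaic of an RGGB Bayer mosaic
    import numpy as np
    from scipy.signal import convolve2d

    def demosaic_rggb(mosaic):
        h, w = mosaic.shape
        rows, cols = np.indices((h, w))
        # Which color each photosite actually recorded
        r_mask = ((rows % 2 == 0) & (cols % 2 == 0)).astype(float)
        b_mask = ((rows % 2 == 1) & (cols % 2 == 1)).astype(float)
        g_mask = 1.0 - r_mask - b_mask
        kernel = np.array([[0.25, 0.5, 0.25],
                           [0.5,  1.0, 0.5],
                           [0.25, 0.5, 0.25]])
        rgb = np.zeros((h, w, 3))
        for i, mask in enumerate((r_mask, g_mask, b_mask)):
            known = convolve2d(mosaic * mask, kernel, mode="same")
            weight = convolve2d(mask, kernel, mode="same")
            rgb[..., i] = known / weight  # estimate missing values from neighbors
        return rgb

An APCS-style sensor that records all three colors at every pixel would skip this step entirely.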
Eliminating the Bayer filter also eliminates the need to deal with moire. This means that a camera equipped with this type of sensor could eliminate the anti-aliasing filter found in many digital cameras. This would help increase image clarity. Sharper images are always a plus.
Second, the pixels used could be larger than those used in Bayer based sensors with no loss of image resolution. Larger pixels are more efficient when it comes to capturing light and less prone to noise at high ISO settings. Fewer pixels would also increase processing speed.
The increase in processing speed actually seems to be one of the largest advantages for the new design. Sony is suggesting 2K video recorded at 16,000 fps.
There is one obvious problem with the new sensor: the name. The acronym for the current name is "APCS". That is far too close to APS-C, which is a common sensor size found in digital cameras. Imagine a camera being described as having an APCS APS-C sensor.
That might be just a tad confusing.
Keep track of developments on this sensor and other Sony camera news at Sony Alpha Rumors.
Friday, November 7, 2014
Lytro Announces Developer Kit
Hat Tip: DP Review
(This has been covered by other outlets as well. DP Review just happens to be the one that caught my attention.)
Lytro is the company that has developed light field technology for camera use. Light field technology allows the camera to record a light ray's direction, intensity and color. (As opposed to regular sensors, which only record intensity and color.) The additional directional information allows light field cameras to be used for applications beyond those that normal digital cameras can handle.
The new Lytro Developer's Kit allows outside companies to develop those applications. NASA and the DoD are apparently already interested in the kit.
Light field technology does not seem positioned to compete with traditional digital cameras when it comes to producing still images. The still images produced don't stack up resolution-wise. That means Lytro needs to find another reason for consumers to purchase light field cameras, which makes the development kit a smart move. It will enable other companies to develop the technology in directions other than those aimed at producing still images.
The annual subscription for the kit starts at $20,000.
I'll let you decide whether that's reasonable.
Update: PetaPixel has a link to the Lytro Platform page. It provides specifics on what is included in the kit.
Thursday, November 6, 2014
Oh, the Humanity! Canon 7D Mark II Disassembled
Don't worry, it was all in a good cause.
The good people over at LensRentals.com have disassembled a Canon 7D Mark II. You can see all the gory details on their blog.
Why take one apart?
To test Canon's claim of improved weather resistance. (The claim was that weather sealing was "four times better" than on the original 7D.)
Canon certainly seems to have concentrated on improving the build quality of the 7D, including weather resistance. The Mark II has rubber gaskets not present in the previous model, increasing the camera's ability to resist water penetration. (Pretty much any area that could allow water into the camera has been addressed by Canon.)
There are other build improvements, and the LensRentals article goes into those as well. For example, the CF card reader has been moved to its own board instead of being connected directly to the camera's main board. This means that any damage to the CF card reader can be fixed by simply replacing the daughter board.
Check out the article for all the improvements and for lots and lots of pictures.
Hat Tip: Imaging Resource
Update: While Canon has improved weather resistance and build quality, image quality could apparently use some work at least at lower ISO settings.
Wednesday, November 5, 2014
Sony Patents Varying Exposure Image Sensor
Hat Tip: PetaPixel
Chalk another one up to engineers realizing there is no need for a digital image sensor to behave exactly the same way film behaves. Sony has now designed a new image sensor that uses variable exposure times. The exposure time for each pixel depends on the amount of light hitting the sensor at that pixel's location.
The sensor works by applying one of two exposure times to each pixel: a short exposure time in the bright areas of the image and a long exposure time in the dark areas. Theoretically, this allows the sensor to capture detail in the darker areas of an image without overexposing the lighter areas.
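Conceptually, the readout just normalizes each pixel by its own exposure time. A minimal numpy sketch of the idea; the threshold and exposure times here are invented for illustration and are not taken from the patent:

    # Per-pixel dual exposure: bright pixels get t_short, dark pixels t_long
    import numpy as np

    def capture(radiance, t_short=1/1000, t_long=1/60, threshold=0.5, full_well=1.0):
        t = np.where(radiance > threshold, t_short, t_long)
        charge = np.clip(radiance * t, 0, full_well)  # charge saturates at the full well
        return charge / t                             # normalize back to a radiance estimate

    scene = np.random.rand(4, 4) * 2.0  # toy scene mixing bright and dark pixels
    print(capture(scene))

Dark pixels integrate longer (collecting more signal relative to noise) while bright pixels cut off early enough to avoid clipping.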
There are obvious issues with using different exposure times for a single exposure.
The most obvious issue involves movement. Moving objects could conceivably move from "light" areas into "dark" areas (or dark to light). This would result in motion blur with different exposures in different areas. Not necessarily the result the photographer is looking for.
Light-emitting objects could produce additional problems. A light-emitting object that starts in a "light" area and moves into a "dark" area could result in the dark area being overexposed.
Sony has apparently considered the potential problems associated with using multiple exposure times for a single image and has attempted to address these issues via the software used with the sensor.
The actual patent can be viewed here for those interested.
The patent description includes a link to a pdf file with images and includes a little more detail on the approach used to address blurring/movement.
It's always nice to see digital imaging innovation that comes as a result of diverging from the "image sensor as film" mentality.
Tuesday, October 7, 2014
Scientists Develop Sensor More Sensitive to Color
Hat Tip: Imaging Resource
Researchers at the University of Granada, along with colleagues at the Polytechnic University of Milan (Italy), have developed an image sensing device capable of capturing far more color information.
The sensor is similar to Sigma's Foveon sensor. It detects the wavelength associated with a photon of light based on how far the photon penetrates into the silicon used in the sensor. Different wavelengths (perceived as different colors) penetrate to different depths. The sensor determines the color associated with a particular pixel based on how far the light penetrates the sensor at that point.
(It's probably a little more complicated than that, but that's close enough for anyone not trying to develop the technology.)
The scientists refer to the sensor as a "Transverse Field Detector" (TFD). It utilizes a transversal electric field with varying intensity to modulate the depths at which photons are collected.
(Join the club if that description just goes right over your head. I think it means they can control the sensor so that it only detects certain wavelengths, and do so on a per-image basis based on the strength of the electric field used.)
The sensor is capable of obtaining 36 channels of information. Current imaging technology produces three: red, blue and green.
36 channels of information is far more than what is required to produce a viable photographic image. It's unlikely that any application would require the use of that many channels at the same time. Plus, the file size would be huge; roughly 12 times that of a three-channel image.
There might be times when someone would want all 36 channels in order to view them separately or to compare channels. (I think astronomers might find the 36-channel capability useful when determining the chemical signatures associated with different astronomical bodies.)
36 channels of color information is overkill when it comes to photography. 3 channels at 8 bits results in over 16 million possible colors. Extending that out to 14 bits per channel results in roughly 4.4 trillion possible colors (yes, that's a "t"). This is far more than the human eye is capable of seeing.
If my math is correct, 8 channels at 8 bits would result in over 18 quintillion possible colors. (The number of color combinations increases 256-fold every time you add another 8-bit channel.) To put this in perspective, 18 quintillion dollars is roughly a million times more than is needed to pay off the current US national debt.
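The numbers are easy to check (a quick calculation):

    # Total color combinations for various channel counts and bit depths
    print(f"3 channels x 8 bits:  {(2**8)**3:,}")   # ~16.8 million
    print(f"3 channels x 14 bits: {(2**14)**3:,}")  # ~4.4 trillion
    print(f"8 channels x 8 bits:  {(2**8)**8:,}")   # ~18.4 quintillion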
The 3 channels at 10 or more bits used by many modern DSLR cameras already produce more colors than the human eye is capable of discerning. Adding channels would just result in useless additional information.
The ability to pick 3 different channels out of 36 possibilities might prove useful, though. Especially if those options include infrared and ultraviolet wavelengths. A camera that could switch from normal 3 channel (RGB) mode to one capable of taking infrared and/or ultraviolet pictures simply by changing settings might be of interest to some photographers.
The capability to switch the camera to detecting particular wavelengths might also prove useful under certain lighting conditions.
(The original article can be read here)
Researchers at the University of Granada along with those at the Polytechnic University f Milan (Italy) have developed an imaging sensing device capable of capturing far greater color information.
The sensor is similar to Simga's Foveon sensor. It detects the wavelength of light associated with a photon of light based on how far it penetrates into the silicon used in the sensor. Different wavelengths (perceived as different colors) penetrate to different depths. The sensor determines the color associated with a particular pixel based on how far the light penetrates the sensor at that point.
(It's probably a little more complicated that that, but that's close enough for anyone not trying to develope the technology.)
The scientist refer to the sensor as a "Transverse Field Detector" (TFD). It utilizes a transversal electric field with varying intensity to modulate the depths at which photons are collected.
(Join the club if that description just goes right over your head. I think it means they can control the sensor so that it only detects certain wavelengths, and so so on a per-image basis based on the strength of the electric field use.)
The sensor is capable of obtaining 36 channels of information. Current imaging technology produces three: red, blue and green.
36 channels of information is far more than what is required to produce a viable photographic image. It's unlikely that any application would require the use of that many channels at the same time. Plus the files size would be huge; roughly 12 times that of a three-channel image.
There might be times when someone might want all 36 channels in order to view them separately or to compare a channels. (I think Astronomers might find the 36 channel capability useful when determining the chemical signatures associated with different astronomical bodies.)
36 channels of color information is overkill when it comes to photography. Three channels at 8 bits each results in over 16 million possible colors. Extending that to 10 bits per channel results in over a billion possible colors, and 14 bits per channel (a common raw bit depth) yields over 4 trillion (yes, that's a "T"). That's far more than the human eye is capable of distinguishing.
If my math is correct, 8 channels at 8 bits would result in over 18 quintillion possible color combinations. (The number of combinations increases by a factor of 256 every time you add another 8-bit channel.) To put this in perspective, 18 quintillion dollars would be roughly a million times the current US national debt.
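These counts are easy to verify; the number of combinations is just (2 ^ bits per channel) ^ channels. A minimal sketch:

```python
# Total color combinations for a given channel count and bit depth.
def color_combinations(channels: int, bits_per_channel: int) -> int:
    return (2 ** bits_per_channel) ** channels

print(color_combinations(3, 8))   # 16,777,216 (~16.8 million)
print(color_combinations(3, 10))  # 1,073,741,824 (~1.07 billion)
print(color_combinations(3, 14))  # 4,398,046,511,104 (~4.4 trillion)
print(color_combinations(8, 8))   # 18,446,744,073,709,551,616 (~18.4 quintillion)
```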
The 10-bit (or more) three-channel color used by many modern DSLR cameras already yields more colors than the human eye can discern. Adding more channels would just produce useless additional information.
The ability to pick 3 channels out of 36 possibilities might prove useful, though, especially if the options include infrared and ultraviolet wavelengths. A camera that could switch from a normal three-channel (RGB) mode to one capable of taking infrared and/or ultraviolet pictures simply by changing a setting might interest some photographers.
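If such a camera exposed its raw data as a 36-channel array, picking any three channels would be a trivial post-processing step. A hypothetical sketch; the channel indices are invented for illustration, since the real mapping would depend on the sensor:

```python
import numpy as np

# Hypothetical raw capture: height x width x 36 channels, 8 bits each.
raw = np.random.randint(0, 256, size=(480, 640, 36), dtype=np.uint8)

# Invented channel indices -- not the sensor's real layout.
RGB_CHANNELS = [5, 17, 29]    # red, green, blue
IR_UV_CHANNELS = [35, 17, 0]  # infrared, green, ultraviolet "false color"

rgb_image = raw[:, :, RGB_CHANNELS]      # ordinary color photo
false_color = raw[:, :, IR_UV_CHANNELS]  # IR/UV rendered as visible colors

print(rgb_image.shape)  # (480, 640, 3)
```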
The ability to switch the camera to detect particular wavelengths might also prove useful under certain lighting conditions.
(The original article can be read here)
Wednesday, September 3, 2014
Is It Really a Good Idea to Store Images on a Wireless Hard Drive?
Pop Photo has a brief article today on Western Digital's new wireless hard drive.
My question: Is storing your images on a wireless device really a good idea?
Wireless devices like this can be hacked by anyone within communications range. While the dangers are fairly obvious while traveling, such a device also poses a danger within your home or studio: the wireless signals can easily pass beyond your walls.
(As an example, I've occasionally had to reset my laptop's connection to my home wireless router. Doing so shows all wireless networks within range, and I've seen four other networks on that list while sitting in my house. They belong to my neighbors, and I could conceivably attempt to hack into any one of them.)
Keeping the hard drive stationary may actually pose a greater danger than using it while traveling: hacking a wireless network becomes easier when an attacker can analyze large amounts of traffic.
Anyone using a wireless hard drive should keep the potential security threat in mind. This means using strong passwords to protect access to the network the device creates. Otherwise, anyone within range of the device will have access to the network and to all devices connected to it. (Your laptop and smartphone are potential targets.)
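Strong passwords are easy to generate programmatically. A minimal sketch using Python's standard library (the 20-character length is an arbitrary choice):

```python
import secrets
import string

# Draw each character from letters, digits, and punctuation using a
# cryptographically secure random source.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def make_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_password())  # different every run
```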
Any files stored on the device should be protected as well.
Encryption is a good idea for any files that contain sensitive information, including those embarrassing photos of yourself. (Wireless hard drives pose security issues similar to those raised by cloud computing.)
Learn from the recent problems encountered by some Hollywood actresses. Either don't put those photos where someone else can access them or encrypt them so they can't be opened if someone does access them.
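Encrypting a file before it ever touches the drive doesn't take much code. A minimal sketch using the third-party `cryptography` package; the file names are placeholders, and in real use the key must be stored somewhere safe, away from the drive itself:

```python
from cryptography.fernet import Fernet

# Generate a key once and keep it off the wireless drive.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a photo before copying it to the drive.
with open("photo.jpg", "rb") as f:  # placeholder file name
    encrypted = fernet.encrypt(f.read())
with open("photo.jpg.enc", "wb") as f:
    f.write(encrypted)

# Decryption later requires the same key.
original = fernet.decrypt(encrypted)
```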
Wireless hard drives are great when it comes to convenience. They allow files to be easily shared between multiple devices and are one way to address storage issues with certain smartphones.
They're not that great when it comes to security.
Best advice: If you don't want someone else accessing a computer file, don't put it on a wireless hard drive.
Friday, August 29, 2014
New Security Threat You Need to Take Steps Against
Hat Tip: PetaPixel
It's been a very slow news week. The only photography-related stories I've been interested in have all been announcements of new cameras, lenses, or related products.
For stories like those, I just post the links on Sunday.
PetaPixel has one today that I feel obligated to comment on: it turns out the recently released iPhone infrared camera poses a security threat to anyone using a passcode-protected device, like an ATM. Someone photographing the keypad immediately after it is used can easily tell which buttons were pressed from the residual heat left on them.
It is also possible to get some idea of the order in which the buttons were pressed based on how much residual heat remains: the hottest buttons were pressed last, while buttons retaining less heat were pressed earlier. This greatly reduces the effort needed to determine the previous customer's passcode.
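To see why knowing the order matters, here's a toy sketch of the attack's logic; the temperature readings are invented for illustration, and a real attack would pull them from the thermal image:

```python
# Hypothetical per-button temperatures (Celsius) read off a thermal image.
# Warmer buttons were pressed more recently.
button_temps = {"1": 24.1, "3": 26.8, "7": 25.4, "9": 26.1}  # invented values

# Sort coolest-to-warmest to recover the likely press order.
likely_order = "".join(sorted(button_temps, key=button_temps.get))
print(likely_order)  # "1793"
```

With four known digits there are 4! = 24 possible orderings; the heat gradient collapses that to a single likely guess.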
There is a fairly easy way to protect yourself from this threat: just rest your hand across the button panel while entering your code. This heats the buttons evenly, preventing anyone from determining which ones were pressed.