If this was the state of the art when it was published in January 2013, what is the state of the art now, and how is it being applied? 1.8 gigapixels was the biggest camera then.
Argus. Google Earth on steroids!
State of the art in gigapixel camera image acquisition?
Just ask Google. This is the state of the art in 2018: 3.2 gigapixels, the Large Synoptic Survey Telescope.
Beyond gigapixel, what is the next step up? Terapixel. That, however, is not image acquisition but image display. Interesting that the ability to capture images lags so far behind image display resolution. It is like displaying a 720p video signal on a 1080p TV, which is much different than displaying a true 1080p image. Terapixel image acquisition probably can't be very far behind terapixel image display capability.
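To put terapixel in perspective, here is a back-of-envelope comparison using the figures above plus an ordinary 1080p screen (my own illustrative arithmetic, not from the linked articles):

```python
# Back-of-envelope: how big is a terapixel image compared to what a
# camera captures in one frame or a screen can show at once?
# (Illustrative numbers only.)

terapixel = 1e12               # one trillion pixels
gigapixel = 1e9

argus_2013 = 1.8 * gigapixel   # ARGUS-IS, the 2013 state of the art
lsst_2018  = 3.2 * gigapixel   # Large Synoptic Survey Telescope camera
display_1080p = 1920 * 1080    # ~2.1 megapixels visible on screen at once

print(f"Terapixel / ARGUS frame : {terapixel / argus_2013:,.0f}x")   # ~556x
print(f"Terapixel / LSST frame  : {terapixel / lsst_2018:,.0f}x")    # ~313x
print(f"Fraction of a terapixel a 1080p screen shows at once: "
      f"{display_1080p / terapixel:.2e}")                            # ~2e-06
```

In other words, a terapixel "display" is really tiled zoom-and-pan viewing; no single capture or screen comes close to it yet.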
The first link in this entry showed that Argus was essentially a lot of smartphone cameras assembled together. The Large Synoptic Survey Telescope looks like the same idea: many small imaging devices assembled in a frame behind a roughly 1-meter diameter lens.
From Wikipedia:
A terapixel image is an image composed of one trillion (10^12) pixels. Though currently rare, there have been a few instances such as the Microsoft Research Terapixel project for use on the Fulldome projection system,[3] a composite of medical images by Aperio,[4][5] and Google Earth's Landsat images viewable as a time-lapse, which are collectively considered over one terapixel.[6]
Terapixel display is the visual presentation of Big Data.
This is a 3.81 gigapixel picture. Zooming fun!
This was the world record at 45 gigapixels in 2010
This is a 320 gigapixel panorama: one camera, more than 48,000 images taken with a single Canon EOS 7D. To scale, that is the same as 48,000 cameras each taking one picture at a single moment and putting them all together, which is what a giga- or terapixel camera does in one click. The results are the same.
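A quick sanity check on those numbers, assuming the EOS 7D's roughly 18-megapixel sensor (the overlap and cropping loss is my inference, not stated in the source):

```python
# Rough check of the 320-gigapixel panorama numbers.
# Assumption: Canon EOS 7D ~18 MP per frame.

frames = 48_000
mp_per_frame = 18e6
raw_pixels = frames * mp_per_frame            # total pixels captured
final_pixels = 320e9                          # published stitched size

print(f"Raw pixels captured : {raw_pixels / 1e9:.0f} GP")      # ~864 GP
print(f"Final panorama size : {final_pixels / 1e9:.0f} GP")    # 320 GP
print(f"Kept after overlap/cropping: {final_pixels / raw_pixels:.0%}")  # ~37%
```

Most of the raw capture is "spent" on the overlap needed to stitch frames together, which is exactly the work a single-shot gigapixel camera avoids.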
While these are cityscapes, getting closer to the subject when taking the picture reveals more.
This is a stadium picture. Not an image taken over a long period of time like at the dawn of photography, when no one could move. Very revealing. People can log in on Facebook, tag and identify themselves, and see the tag applied to their picture.
Gigapan provides this stadium tagging service for fans to identify themselves. This is a Wikipedia link to Gigapan. The Gigapan privacy policy is at this link.
While searching on Gigapan privacy rights, I found only a couple of hits over a dozen pages that express concern for privacy. They appear in discussion forums, not news article analysis. This link, for example, is at a site where some people would defend themselves and their rights with AR-15s.
Maybe a person gives up the right not to be photographed as a condition of entry to a paid venue, where there is some waiver of privacy rights in small print somewhere, but this Gigapan picture is of the inauguration!
Amazing! How many of these people's images are good enough for facial recognition? Somebody already knows that answer, because they probably used this image, or one like it, to discover that statistic by running it against their facial recognition database. How long until most facial images in a terapixel photo can be identified against a facial recognition database? Answer: a matter of time.
This picture (The Vancouver Canucks Fan Zone along Georgia St. for Game 7 of the 2011 Stanley Cup Final was captured at 5:46 pm on June 15, 2011. It is made up of 216 photos (12 across by 18 down) stitched together, taken over a 15-minute span, and is not supposed to represent a single moment in time. The final hi-res file is 69,394 x 30,420 pixels or 2,110 megapixels. Special thanks to Bonita Howard and CBC Real Estate) was effectively taken with a 2,100 megapixel (2.1 gigapixel) camera. It is at the gigapixel.com site. Individual faces look as good as those that I can match up to names in my own iPhoto collection.
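The quoted dimensions check out, and they say something about the individual cameras doing the work (simple arithmetic on the numbers above; the per-photo figure ignores the extra pixels spent on overlap for stitching):

```python
# Checking the Fan Zone panorama numbers quoted above.
width, height = 69_394, 30_420
total = width * height
tiles = 12 * 18                                # "12 across by 18 down"

print(f"Total pixels     : {total:,}")         # 2,110,965,480 -> ~2,110 MP
print(f"Photos stitched  : {tiles}")           # 216
print(f"Pixels per photo : {total / tiles / 1e6:.1f} MP")  # ~9.8 MP, ordinary DSLR territory
```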
Who makes a 2,100 megapixel camera? Google: "2100 megapixel camera 70000 x 30000 pixels" turns up pages of hits, most saying it is not available to the public. It is not really a single camera, but probably a high-resolution camera on a mount that moves it between shots so the pictures can be stitched together, like the camera at this link from iPconfigure, which also deals in license plate recognition cameras.
Argus uses 368 five-megapixel image sensors, like the one in the iPhone, mounted behind a 1-meter diameter round lens to create a single camera that apparently captures an image at a single moment in time. The holy grail of big-pixel photography is probably to capture the biggest pixel image in one shot. That is still photography; being able to take the next shot 1/30 or 1/60 of a second later gives live big-pixel video.
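The arithmetic on those sensor counts lines up with the "1.8 gigapixel" figure from the start of this post (the 30 fps and 1 byte per pixel in the video line are my own simplifying assumptions, not ARGUS specs):

```python
# ARGUS-IS arithmetic from the figures above.
sensors = 368
pixels_each = 5e6
frame_pixels = sensors * pixels_each
print(f"Single frame: {frame_pixels / 1e9:.2f} gigapixels")   # 1.84 GP -> the "1.8 gigapixel" camera

# "Live big pixel video": raw data rate at 30 fps, assuming an
# uncompressed 1 byte per pixel (my simplifying assumption).
fps = 30
print(f"Raw video rate: {frame_pixels * fps / 1e9:.1f} GB/s")  # ~55 GB/s
```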
Big Data loves Big Pictures and Big Video.
Aware2 takes a similar approach. It uses multiple individual high-pixel-count cameras in a single big box to make one camera that captures each individual camera's image as a tile in a mosaic and then stitches them together for the big picture. Hey, that is aggregation of visual silo data into Big Data (see the prior post on aggregation).
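A minimal sketch of that mosaic idea, assuming perfectly aligned, non-overlapping tiles; real AWARE-style and Gigapan stitching has to handle overlap, lens distortion, and blending, which this toy version ignores:

```python
import numpy as np

def stitch_mosaic(tiles, rows, cols):
    """Paste a grid of equally sized tiles into one big image.

    tiles: list of 2-D (grayscale) arrays in row-major order.
    Assumes perfect alignment and no overlap -- a toy version of what
    AWARE-style cameras and Gigapan rigs do with far more care.
    """
    th, tw = tiles[0].shape
    mosaic = np.zeros((rows * th, cols * tw), dtype=tiles[0].dtype)
    for i, tile in enumerate(tiles):
        r, c = divmod(i, cols)
        mosaic[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tile
    return mosaic

# Toy example: a 12 x 18 grid of fake 100x100-pixel "micro camera" frames.
fake_tiles = [np.random.randint(0, 256, (100, 100), dtype=np.uint8)
              for _ in range(12 * 18)]
big = stitch_mosaic(fake_tiles, rows=12, cols=18)
print(big.shape)   # (1200, 1800)
```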
The Aware2 camera: Box camera! But it does not have Kodak on it.
This is the Aware 40, which evidently is the Aware2 above mounted on a device that can reposition it to a new view so the pictures can be stitched together. Perhaps that makes it an Aware2 with a telephoto lens?
From the Duke website, a look at the future:
Significant improvements have been made to the optics, electronics, and integration of AWARE cameras over the life of this program. Some are described here: Camera Evolution. In the Spring of 2013, DISP will build the AWARE 40 15-microradian ifov systems and will build next generation AWARE 10 and AWARE 2 cameras. The goal of this DARPA project is to design a long-term production camera that is highly scalable from sub-gigapixel to tens-of-gigapixels. Deployment of the system is envisioned for military, commercial, and civilian applications. Ultimately, the goal of AWARE is to demonstrate that it is possible to capture all of the information in the optical field entering a camera aperture. The monocentric multiscale approach allows detection of modes at the diffraction limit. As discussed in "Petapixel Photography," the number of voxels resolved in the space-time-spectral data cube is ultimately limited by photon flux. We argue in "Gigapixel Television," a paper presented at the 14th Takayanagi Kenjiro Memorial Symposium, that real-time streaming of gigapixel images is within reach and advisable.
The same Duke website says:
Second generation color AWARE 2 cameras came on line in April 2013 using the original AWARE 2 mounting dome. AWARE 2 captures a 120 degree circular FOV with 226 microcameras, 38 microradian ifov, and an effective f-number of 2.17. Each microcamera operates at 10 fps at full resolution. The optical volume is about 8 liters and the total enclosure is about 300 liters. The optical track length from the first surface of the objective to the focal plane is 188 mm. Example images taken by these systems can be found on the AWARE image server, at aqueti.com and on gigapan.com
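Taking those quoted numbers at face value, the ifov figure puts a rough upper bound on how much detail the camera can resolve across its circular field (my own back-of-envelope estimate, not a Duke spec):

```python
import math

fov_deg = 120        # circular field of view, from the quote
ifov = 38e-6         # radians per resolvable spot, from the quote

spots_across = math.radians(fov_deg) / ifov
circular_spots = math.pi / 4 * spots_across ** 2   # spots filling a circular field

print(f"Resolvable spots across the field: {spots_across:,.0f}")            # ~55,000
print(f"Rough upper bound on useful pixels: {circular_spots / 1e9:.1f} GP")  # ~2.4 GP
```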
Aha! This is the mysterious "not commercially available" camera that takes those Gigapan pictures?
Wow! Looking into the future where everything can be seen in extreme detail from a distance.
Big Visual Data
This is the Aqueti website, located in Durham, North Carolina. Their QG 250 megapixel camera link is here. Exposure time is 1/500 to 1/6 seconds, but it looks like it takes 15 seconds for the computer to process a shot? Applied Quantum Technologies Inc is also located at the same address.
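One plausible reason for that 15-second wait is simply the amount of data in a single frame (my own rough arithmetic, assuming uncompressed RGB; Aqueti does not publish this breakdown on the linked page):

```python
# Rough size of one QG frame, assuming 3 bytes per pixel (RGB, uncompressed).
pixels = 250e6
bytes_per_pixel = 3
frame_bytes = pixels * bytes_per_pixel

print(f"Raw frame size: {frame_bytes / 1e6:.0f} MB")                          # ~750 MB
print(f"Throughput to finish in 15 s: {frame_bytes / 15 / 1e6:.0f} MB/s")      # ~50 MB/s
```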
This link is a .23 gigapixel (230 megapixel) picture from the Gigapan website taken by Aqueti Inc. with a QG camera at a Duke University basketball game. A news story about that camera is here.
This camera has only 34 micro cameras inside. The Aware has 98 micro cameras. It is a camera that scales well: just add more micro cameras, all the way up to Argus size. There must be a limit somewhere, and it is probably physical size. Making them smaller is maybe a scaling challenge like it was in semiconductors?
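Here is how the "just add more micro cameras" scaling works out, using the 34-micro-camera, roughly 250-megapixel figures above (the targets and per-micro-camera math are my own illustration):

```python
# Scaling by adding micro cameras, using the QG figures quoted above.
mp_per_micro = 250 / 34                  # ~7.4 MP per micro camera

for target_gp in (1.8, 3.2, 10, 50):     # ARGUS, LSST, AWARE-class, and beyond
    needed = target_gp * 1000 / mp_per_micro
    print(f"{target_gp:>5} GP -> about {needed:,.0f} micro cameras")
```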
At the same news story link:
"Brady said they believe the technology will be the future of all camera technology, but right now, they have yet to sell one.
"If you look a few years out, we'll be able to make consumer-grade cameras in the 250 megapixel camera scale," he said.
In the nearer term, they’re using the technology to take contract pictures at basketball games, concerts or festivals since the cameras can take zoomable images in great detail.
He said they’re working on a capability for the camera that would allow people to take real-time “selfies” at events by logging in through their handheld smartphones."
Check out this panoramic image of the Super Bowl where fans have voluntarily identified themselves with the logo of their favorite team. Hey, look! That's me at the Super Bowl! Actually, it is as good as a posed shot: everyone looking at the same focal point, each face positioned where the camera can see it (so they can see the game), well lighted, densely packed. The ultimate group photo. Examine it long enough and I am sure some interesting individual images would be found, like in Google Street View. Knowing the location of a person in sectioned reserved seating, their image could be found. Who got those 50 yard line tickets??? Check out the first cheerleader winking at the camera.
Hmmm... maybe Gigapan was just borrowing one of the cameras? A field test? Aqueti contracts with Gigapan to take pictures?