The image is to modern remote sensing what assembly language is to your laptop. Important, but you shouldn’t (need to) interact with it. — Will Cadell (@geo_will) October 11, 2017
Images have been the currency of remote sensing since we started looking down at our Earth with sensors. Whether we were looking at stereo pairs to classify stands of forest, or using Landsat images to measure our changing landscape, the image has been an infrastructural necessity. Indeed, how on Earth would one do remote sensing without an image?
A typical work-flow would be to order images from a government person on the phone and wait for their delivery for manual interpretation, or you might hit up an FTP site and wait a few hours while images downloaded. This was all quite reasonable because the digital form of images has always been “big” in terms of file size and thus bandwidth. Because of this, imagery has always taken a fair bit of management and, of course, disk space. I am sure most self-identifying GIS* people reading this will have a (large) number of external hard drives lying around with archived projects containing, for the most part, images, or derivatives of images, or derivatives of derivatives of images.
The fact that the image, as a data construct, is painfully inadequate is just irrelevant. Images are a reality, like timezones or taxes. They are just a necessary transaction to access what you really want: the pixels, the data they hold, and the insight they might reveal.
Hard-copy is one thing, but even assuming that our images are in fact digital, we still have enormous difficulty in moving them around. We’ve built mythologies around download times and file sizes: the valiant analyst processing late into the night, crunching data; the abilities of some to build more optimal network appliances for quicker access. This battle was constantly being fought against a panorama of higher and higher resolution imagery, with ballooning file sizes. Moore’s law helped, but in the end bandwidth simply hasn’t kept up. Indeed, now we have cloud computing, which solves much of this problem, except the data must still be moved from one cloud to another, and so often we are moving pixels we don’t even care about.
But file size isn’t the only problem with the image. Another is that an image has an arbitrary size. An image has at least four geospatially relevant resolutions: pixel, spatial, temporal, and spectral resolutions all play their part. Spatial, temporal, and spectral resolutions are all characteristics of the platform capturing the data. A satellite or aircraft has a particular sensor and flies at a particular height. These features determine a platform’s fitness for purpose.
The pixel resolution of an image, however, is completely arbitrary in the face of the image’s purpose. Sure, there is typically a relationship between the pixel width of a sensor and an image’s width. If a sensor is push-broom, then the image length is typically determined by the “convenience of data transfer”; satellite bandwidth is limited, so data is sent down in chunks. If the satellite sensor is a more traditional camera-based system, then length, like width, is determined by the sensor’s dimensional characteristics. But my point is that this characteristic of an image does not support its fitness for purpose; it simply defines an arbitrary area of capture for the convenience of data capture and transfer.
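To make the platform-versus-purpose point concrete: a platform’s spatial resolution really does fall out of its physical geometry. Here is a minimal sketch of the standard nadir ground-sample-distance relation; the numbers are purely illustrative, not drawn from any real satellite.

```python
# Hedged sketch: spatial resolution (ground sample distance, GSD) as a
# function of sensor and orbit geometry, for a nadir-looking optical
# sensor over flat terrain. All figures below are illustrative only.

def ground_sample_distance(pixel_pitch_m: float,
                           altitude_m: float,
                           focal_length_m: float) -> float:
    """GSD = pixel pitch * altitude / focal length."""
    return pixel_pitch_m * altitude_m / focal_length_m

# e.g. a 6.5 micron detector pitch, 500 km orbit, 3.6 m focal length:
gsd = ground_sample_distance(6.5e-6, 500e3, 3.6)
print(f"{gsd:.2f} m per pixel")  # prints "0.90 m per pixel"
```

Spatial, temporal, and spectral resolution are all fixed by physics and engineering like this; pixel resolution, the width and length of the delivered image, is not.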
But once that image is on the ground, why do we insist on it still being an image? The fact that an image has pixel x and y characteristics is largely due to the nature of the sensor itself, and not reflective of the environment we seek to measure.
That an image has a pixel resolution isn’t really the issue. It is that we have to consider an image having an edge at all, or indeed that an image exists at all.
So, the issue is more that we have had to consider an area that is not of our own design, rather than our actual area of interest. The reason we have had to do that is because the pixels we want to look at belong to a construct of somewhat arbitrary size. The image is entirely irrelevant to our work-flow. The fact is we care about the pixels; we care about the reflectance values or sensory measurement of a particular piece of our planet; we care about location.
I feel deeply lucky and humbled to be part of our present geospatial age. Every few weeks we see new advancements in technology, whether it’s machine learning here or micro-sats there; we have been witnessing the renaissance of remote sensing. This domain of dusty old aerospace boffins is emerging as both an increasingly incredible source of information about our planet and one of the “hottest” sectors for technology investment.
One of modern remote sensing’s quiet macro trends is the incremental erosion of the image as the transactional currency of remote sensing, and I am rejoicing!
Both DigitalGlobe’s GBDX platform and Planet’s API are moving towards an image-less society: soon it will be possible to simply request the pixels we want based on a snippet of Well Known Text rather than having to access a strip or image. Accessing the 300 pixels I want rather than the 30 GB of image I don’t want. As we iterate over these ideas, we will see an evolution of images becoming imagery, and imagery becoming a pervasive globe of multi-temporal, multi-spatial, multi-spectral pixels from which we, as analysts, will be able to pick and choose the wavelengths of interest, the dates of interest, the spatial resolution of interest, all based on a location.
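The mechanics of that pixels-not-images request can be sketched in a few lines. This is a toy illustration only: a minimal Well Known Text bounding-box extractor (single-ring POLYGON only) plus the window arithmetic for a hypothetical north-up, square-pixel scene. It does not reflect the actual GBDX or Planet APIs.

```python
# Hedged sketch: turn an AOI given as WKT into the small pixel window
# you would request, instead of downloading the whole scene.
# The WKT "parser" here is a deliberate toy, not a conformant one.
import re

def wkt_polygon_bounds(wkt: str):
    """Extract (min_x, min_y, max_x, max_y) from a simple WKT POLYGON."""
    coords = [tuple(map(float, pair.split()))
              for pair in re.findall(r"-?\d+\.?\d*\s+-?\d+\.?\d*", wkt)]
    xs, ys = zip(*coords)
    return min(xs), min(ys), max(xs), max(ys)

def pixel_window(bounds, origin_x, origin_y, pixel_size):
    """Map geographic bounds to a (col, row, width, height) pixel window
    for a north-up image with square pixels and top-left origin."""
    min_x, min_y, max_x, max_y = bounds
    col = int((min_x - origin_x) / pixel_size)
    row = int((origin_y - max_y) / pixel_size)
    width = max(1, round((max_x - min_x) / pixel_size))
    height = max(1, round((max_y - min_y) / pixel_size))
    return col, row, width, height

# A 300 m x 300 m AOI against a scene with 10 m pixels: 900 pixels,
# not gigabytes of strip.
aoi = ("POLYGON ((500100 4649900, 500400 4649900, 500400 4650200, "
       "500100 4650200, 500100 4649900))")
print(pixel_window(wkt_polygon_bounds(aoi), 500000.0, 4650500.0, 10.0))
# prints "(10, 30, 30, 30)"
```

In an image-less workflow, that window (or the WKT itself) is what you hand to the service; the strip it happens to intersect is the provider’s problem, not yours.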
Say goodbye to the days of picking images from catalogs.
Say goodbye to having to download four massive images because your AOI happens to be in just the wrong place.
Say hello to creating data products based on location.
*But (Oh my word!) what is GIS!?