
How to calculate, pixel by pixel, the mode (statistic) of multiple rasters (GeoTIFF)?


I have 12 land cover maps, and I would like to calculate the mode for each pixel across the 12 maps (they have the same resolution and dimensions). That is, the final map would contain, for each pixel, the mode of the 12 land cover maps.

I already tried this GRASS module (raster calculator): r.mapcalc raster_mode = mode(raster1, raster2, …, raster12), but I just obtained an empty map. I also tried the raster calculator in QGIS, but it does not seem to have a mode function.

I can use GRASS, QGIS, R, Python… (all open source).


I am assuming that by mode you mean the most frequent class? You can use the R function "table" to calculate the frequencies of a vector.

x <- c(1,1,2,3,4,4,4,4)
table(x)

Then use which.max to return the class associated with the most frequent class. To return the actual class name you need to wrap the statement in names.

which.max( table(x) )
names( which.max( table(x) ) )

If you ever need class percents, you can pass the frequency table to the prop.table function.

prop.table(table(x))

Now you ask, how does this apply to rasters? You can vectorize the problem using calc or overlay. In this case, because we do not have to specifically index each raster, the calc option is a far simpler approach for writing a function that returns the majority class.

Here we write a function, using the above example, that returns the most frequent values, at the cell level, across all the rasters. The only addition is as.numeric because we need the resulting values assigned to the raster to be numeric.

maj.class <- function(x) { as.numeric( names( which.max( table(x) ) ) ) }

We then dummy up some example data, 4 rasters each with 4 classes, and pass our function to calc.

library(raster)
r1 <- r2 <- r3 <- r4 <- raster(nrows=50, ncols=50)
r1[] <- round(runif(ncell(r1),1,4),0)
r2[] <- round(runif(ncell(r2),1,4),0)
r3[] <- round(runif(ncell(r3),2,6),0)
r4[] <- round(runif(ncell(r4),4,8),0)
( r <- stack(r1,r2,r3,r4) )
( mclass <- calc(r, maj.class) )
plot(mclass)
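
Applied to the actual 12 land cover GeoTIFFs rather than the dummy data, the same approach would look something like the sketch below; the directory and file names are hypothetical placeholders, and the maps are assumed to share extent, resolution and CRS, as stated in the question.

library(raster)
files <- list.files("c:/landcover", pattern = "\\.tif$", full.names = TRUE)  # hypothetical folder
lc <- stack(files)                                       # the 12 land cover maps

maj.class <- function(x) { as.numeric( names( which.max( table(x) ) ) ) }    # same function as above

lc.mode <- calc(lc, maj.class)
writeRaster(lc.mode, "c:/landcover/landcover_mode.tif", overwrite = TRUE)
plot(lc.mode)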

Jeffrey's answer works perfectly, but the GRASS solution is also an option. I used Jeffrey's rasters r1, r2, r3 and r4 to produce the 'mode' raster in QGIS-GRASS. The GRASS command was:

r.mapcalc mode="mode(r1,r2,r3,r4)"

In the image below, I chose an arbitrary point to illustrate, with the Value Tool plugin, that the pixel value in the resulting map is the 'mode' of the pixel values of r1, r2, r3 and r4.


R: Crop GeoTIFF Raster using packages "rgdal" and "raster"

I'd like to crop GeoTIFF raster files using the two packages mentioned, "rgdal" and "raster". Everything works fine, except that the quality of the resulting output TIFF is very poor and it is in greyscale rather than colour. The original data are high-quality raster maps from the Swiss Federal Office of Topography; example files can be downloaded here.

In order to reproduce this example, download the sample data and extract it to the folder "c:/files/". Oddly enough, with the sample data the quality of the cropped image is fine, but it is still greyscale.

I played around with the options "datatype" and "format", but didn't get anywhere with that. Can anybody point out a solution? Should I supply more information on the input data?

EDIT: Josh's example works superbly with the sample data 2. Unfortunately, the data I have seems to be older and somewhat different. Can you tell me which option to choose, given the following GDALinfo:
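
For reference, a minimal sketch of a crop workflow that preserves colour (not Josh's actual answer; the file name and crop extent below are hypothetical placeholders). Reading with brick() keeps every band of an RGB GeoTIFF, whereas raster() reads only the first band and therefore yields a single grey layer; if the source is instead a single palette-indexed band, its colour table has to be carried over separately (for example with colortable()).

library(raster)

f <- "c:/files/sample.tif"                     # hypothetical file name
e <- extent(611000, 613000, 267000, 269000)    # hypothetical crop window, in map units

b <- brick(f)                                  # brick() reads all colour bands
b.crop <- crop(b, e)

# keep the original pixel type so the colour values are not rescaled on write
writeRaster(b.crop, "c:/files/sample_crop.tif",
            datatype = dataType(b)[1], overwrite = TRUE)

plotRGB(b.crop)                                # quick visual check (expects >= 3 bands)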


What are pixels?

The term pixel is short for "picture element", and pixels are the tiny building blocks that make up all digital images. Much like how a painting is made from individual brush strokes, a digital image is made from individual pixels.

In Photoshop, when viewing an image at a normal zoom level (100% or less), the pixels are usually too small to notice. Instead, we see what looks like a continuous image, with light, shadows, colors and textures all blending together to create a scene that looks much like it would in the real world (image from Adobe Stock):

A closer look at pixels

But like any good magic trick, what we're seeing is really an illusion. And to break the illusion, we just need to look closer. To view the individual pixels in an image, all we need to do is zoom in. I'll select the Zoom Tool from the Toolbar:

Then, I'll click a few times on one of the woman's eyes to zoom in on it. Each time I click, I zoom in closer. And if I zoom in close enough, we start seeing that what looked like a continuous image is really a bunch of tiny squares. These squares are the pixels:

And if I zoom in even closer, we see that each pixel displays a single color. The entire image is really just a grid of solid-colored squares. When viewed from far enough away, our eyes blend the colors together to create an image with lots of detail. But up close, it's pixels that create our digital world:

The Pixel Grid

Notice that once you zoom in close enough (usually beyond 500%), you start seeing a light gray outline around each pixel. This is Photoshop's Pixel Grid, and it's there just to make it easier to see the individual pixels. If you find the Pixel Grid distracting, you can turn it off by going up to the View menu in the Menu Bar, choosing Show, and then choosing Pixel Grid. To turn it back on, just select it again:

Zooming back out to view the image

To zoom out from the pixels and view the entire image, go up to the View menu and choose Fit on Screen:

And now that we're zoomed out, the individual pixels are once again too small to notice, and we're back to seeing the illusion of a detailed photo:



For any electronic measuring system, the signal-to-noise ratio (SNR) characterizes the quality of a measurement and determines the ultimate performance of the system. For the many digital cameras for microscopy that utilize a CCD (charge-coupled device) image sensor, the SNR value specifically represents the ratio of the measured light signal to the combined noise, which consists of undesirable signal components arising in the electronic system and the inherent natural variation of the incident photon flux. Because a CCD sensor collects charge over an array of discrete physical locations, the signal-to-noise ratio may be thought of as the relative signal magnitude, compared to the measurement uncertainty, on a per-pixel basis. The three primary sources of noise in a CCD imaging system are photon noise, dark noise, and read noise, all of which must be considered in the SNR calculation.

The tutorial initializes with the display of a graphical plot of signal-to-noise ratio as a function of integration (exposure) time for a hypothetical CCD system with specifications typical of high-performance cameras used in microscopy imaging applications. Parameters that affect signal-to-noise ratio for a CCD sensor can be varied for the system modeled in the tutorial by using the mouse to reposition any of the sliders located below the display window. As each variable is changed, the calculated value of signal-to-noise ratio is updated in the left-hand yellow box. During image acquisition with electronic sensors, including CCDs, apparently random fluctuations in signal intensity constitute noise superimposed on the signal, and as the magnitude of noise increases, uncertainty in the measured signal becomes greater. Changes made to the factors that directly affect signal level, and to those variables primarily contributing noise to the system, have a direct or inverse effect, respectively, on SNR that is reflected in the displayed value. A large signal-to-noise ratio is important in the acquisition of high-quality digital images, and is particularly critical in applications requiring precise light measurements. The radio buttons labeled Binning Factor can be selected individually to enable a method of signal-to-noise ratio improvement commonly used with scientific CCD cameras, in which the signal-generated charge from groups of neighboring pixels is combined during readout into larger "superpixels". The binning factor represents the number of pixels that are combined to form each larger pixel. When the SNR is recalculated to reflect the binning operation, it is assumed that the signal is the same for each pixel within a group.

The measured signal from a CCD imaging system, utilized in calculating SNR, depends upon the photon flux incident on the CCD (expressed as photons per pixel per second), the quantum efficiency of the device (where 1 represents 100 percent efficiency), and the integration (exposure) time over which the signal is collected (seconds). The product of these three variables determines the signal portion (numerator) of the signal-to-noise ratio, which is weighed against all noise sources that contribute to the denominator term of the ratio. The Quantum Efficiency slider in the tutorial provides an adjustment range of 20 to 98 percent, and the Photon Flux slider allows selecting incident light levels between 0.1 and 10000 photons per pixel per second. The Integration Time slider adjusts the CCD integration time over a range of 0.1 to 100 seconds.

Sliders are provided for varying the CCD specifications for Read Noise (2 to 20 electrons rms per pixel) and for Dark Current (0.01 to 50 electrons per pixel per second). The photon noise contribution to total noise is a function of signal level and is not an independent noise variable that can be reduced through camera design or method of operation, but is accounted for in the SNR calculation. The right-hand yellow numerical field (Detected Photons/Pixel) displays the total number of signal photons read out by the CCD for each pixel over the integration period currently set by the slider. This value represents the product of photon flux, quantum efficiency, and integration time. Manipulation of the five sliders, in conjunction with the radio buttons adjacent to several, produces a range of signal-to-noise ratio values corresponding to most operating conditions likely to be encountered in the use of CCD cameras designed for low light imaging in microscopy. When the tutorial is initially loaded or is reset, the slider positions default to values that are typical for a high-performance scientific grade camera system utilizing a cooled CCD.

Three primary undesirable signal components (noise), which degrade the performance of a CCD imaging device by lowering signal-to-noise ratio, are considered in calculating overall SNR:

Photon noise (sometimes referred to as shot noise) results from the inherent statistical variation in the arrival rate of photons incident on the CCD. Photoelectrons generated within the semiconductor device constitute the signal, the magnitude of which is perturbed by fluctuations that follow the Poisson statistical distribution of photons incident on the CCD at a given location. The photon noise, or measurement variation, is therefore equivalent to the square-root of the signal.

Dark noise arises from statistical variation in the number of electrons thermally generated within the silicon structure of the CCD, which is independent of photon-induced signal, but highly dependent on device temperature. The rate of generation of thermal electrons at a given CCD temperature is termed dark current. In similarity to photon noise, dark noise follows a Poisson relationship to dark current, and is equivalent to the square-root of the number of thermal electrons generated within the image exposure time. Cooling the CCD reduces the dark current dramatically, and in practice, high-performance cameras are usually cooled to a temperature at which dark current is negligible over a typical exposure interval.

Read noise is a combination of system noise components inherent to the process of converting CCD charge carriers into a voltage signal for quantification, and the subsequent processing and analog-to-digital conversion. The major contribution to read noise usually originates with the on-chip preamplifier, and this noise is added uniformly to every image pixel. High-performance camera systems utilize design enhancements that greatly reduce the significance of read noise.

The CCD signal-to-noise ratio calculation in the tutorial uses the following equation:

Formula 1 - Signal-to-Noise Ratio
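
In terms of the variables described above (photon flux P in photons/pixel/second, quantum efficiency Q_e, integration time t in seconds, dark current D in electrons/pixel/second, and read noise N_r in electrons rms/pixel), the standard form of this calculation, which the tutorial's Formula 1 presumably follows, places the detected signal in the numerator and adds the three noise variances in quadrature in the denominator:

SNR = \frac{P \, Q_e \, t}{\sqrt{P \, Q_e \, t + D \, t + N_r^{2}}}

The first term under the square root is the photon-noise variance (equal to the detected signal), the second is the dark-noise variance, and the third is the read-noise variance.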

An additional factor that should be considered is that the values of incident and background photon flux, and quantum efficiency are functions of wavelength, and when broadband illumination sources are employed, the calculation of signal-to-noise ratio requires these variables to be integrated over all wavelengths utilized for imaging.

Various approaches are used to increase signal-to-noise ratio in high-performance CCD imaging systems. To reduce thermal charge generation within the semiconductor layers of the CCD, which is manifested as dark current, special device fabrication techniques and operation modes are sometimes employed. It is common to cool the CCD to reduce dark current to a negligible level using thermoelectric or cryogenic refrigeration, or if necessary, the extreme approach of liquid nitrogen cooling may be taken. In general, high-performance CCD sensors exhibit a one-half reduction in dark current for every 5 to 9 degrees Celsius as they are cooled below room temperature, a specification referred to as the "doubling temperature". This rate of improvement typically continues to a temperature of approximately 5 to 10 degrees below zero, beyond which the reduction in dark current diminishes quickly. In addition to specialized circuit and electronics design, filtration techniques utilizing advanced integrators and double sampling methods are sometimes employed to remove certain components of read noise.

Figure 1 - Signal-to-Noise Variation with Integration Time

Figure 2 - Signal-to-Noise Variation with Integration Time

Because photon noise is an inherent property of CCD signal detection, which cannot be reduced by camera design factors, it essentially represents a "noise floor" that is the minimum achievable noise level, diminishing in relative effect as photon flux increases. Consequently, it is desirable to operate an imaging system under conditions that are limited by photon noise, with other noise components being reduced to relative insignificance. Under low illumination level conditions (assuming dark noise is essentially eliminated by CCD cooling), read noise is greater than photon noise and the image signal is said to be read-noise limited. The camera exposure time (integration time) can be increased to collect more photons and increase SNR, until a point is reached at which photon noise exceeds both read noise and dark noise. Above this exposure time, the image is said to be photon-noise limited.

The limited number of photons available for image formation is a critical factor in many microscopy techniques, and high-performance CCD camera systems are specifically designed to reach a photon-noise limited operation mode at much lower signal levels than conventional cameras, which typically never achieve photon-noise limited performance (and a suitably high SNR) at low light levels. In widefield microscopes, which commonly employ CCD cameras, the total signal available from the specimen focal volume may vary by several orders of magnitude, depending largely upon the imaging technique employed and the specimen itself. A photon flux of 10^6 (1 million) photons per second from the focal volume, an extremely low light level, is equivalent to an average of 1 photon/pixel/second distributed over the surface of a sensor having 1 million active pixels. As a point of reference, the minimum detection limit of the dark-adapted eye is approximately 40 times that (40 million photons/second). A properly designed fluorescence microscope typically yields 10^8 to 10^9 photons per second from the focal volume, or 100 to 1000 photons/pixel/second with the same 1-megapixel sensor. Conventional brightfield imaging mode commonly produces illumination levels, averaged over the full sensor area, of 5000 to approximately 40,000 photons/pixel/second. Unless the integration interval is very short, bright areas of a widefield image can generate a total detected signal of more than 100,000 photons per pixel.
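
As a quick worked check of the per-pixel figures quoted above, in R (the 1-megapixel sensor size is the one assumed in the text):

focal.flux <- c(1e6, 1e8, 1e9)   # photons/second from the focal volume
pixels <- 1e6                    # sensor with 1 million active pixels
focal.flux / pixels              # 1, 100 and 1000 photons/pixel/second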

Figure 1 presents a plot of signal-to-noise ratio versus integration (exposure) time for a typical high-performance CCD camera designed for imaging at low signal levels, with photon flux and sensor characteristics fixed at the values shown in the figure. In a plot of this type, a read-noise limited region and a photon-noise limited region can be identified, separated at the exposure time for which photon noise begins to exceed read noise (approximately 0.15 second for the sensor and light flux values specified in the figure). Because of the square-root relationship of photon noise to signal, this division between the two regions occurs at an exposure time for which the total detected signal per pixel is approximately the square of the read noise value. For example, with a read noise specification of 5 electrons rms per pixel, photon noise becomes the dominant noise source when the exposure time is sufficient to result in more than 25 detected photons per pixel at the existing incident photon flux. The interactive tutorial displays a plot similar to that in Figure 1, but reflects changes in the graphical plot as each of the variables controlled by a slider is adjusted. In addition to the calculated SNR value, shown on the left, the right-hand yellow window updates the value of Detected Photons/Pixel, and the red text message in the upper portion of the graph changes to indicate whether read noise or photon noise predominates for the values selected by the sliders. A red arrow follows the plotted curve as an indication of the currently selected integration time. The transition between the two dominant-noise regimes assumes that dark noise is negligible, which is typical in operation of scientific-grade CCD imaging systems, although other situations are possible. Operation at high dark-current levels alters the significance of the relative values of read noise and photon noise under some conditions, and in such circumstances dark noise can overwhelm both signal and other noise components.
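
As a rough illustration of the transition described above, the SNR expression can be evaluated directly in R and plotted against integration time; the parameter values below are illustrative assumptions, not the tutorial's defaults:

P  <- 1000    # incident photon flux, photons/pixel/second (assumed)
Qe <- 0.6     # quantum efficiency (assumed)
D  <- 0.01    # dark current, electrons/pixel/second (assumed)
Nr <- 5       # read noise, electrons rms/pixel (assumed)

snr <- function(t) (P * Qe * t) / sqrt(P * Qe * t + D * t + Nr^2)

t <- 10^seq(-2, 2, length.out = 200)   # integration times from 0.01 to 100 seconds
plot(t, snr(t), log = "xy", type = "l",
     xlab = "Integration time (s)", ylab = "SNR")

# rule of thumb from the text: photon noise overtakes read noise roughly where
# the detected signal per pixel equals the square of the read noise
t.cross <- Nr^2 / (P * Qe)
abline(v = t.cross, lty = 2)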

Figure 3 - Signal-to-Noise Improvement with Binning

Some scientific-grade CCD cameras allow implementing an on-chip pixel binning function as another mechanism for increasing signal-to-noise ratio. It should be realized that this method involves a sacrifice of some spatial resolution, as well as a concomitant increase in dark current. By improving the signal-to-noise ratio of the CCD, the imaging system is able to reach photon-noise limited conditions at a lower light level and/or shorter exposure time. Some camera systems automatically utilize a pixel binning mode for monitor display of a preview image to provide a brighter image at rapid frame rates, which facilitates specimen positioning and focusing. To demonstrate this binning effect on calculated SNR, the tutorial provides radio buttons corresponding to three binning factors. The button labels indicate the number of binned pixels as follows: 1 pixel (no binning); 4 pixels (a 2x2 pixel array combined into one); and 16 pixels (a 4x4 pixel array combined into one). Figure 2 illustrates the effect of different binning values on curves plotting the variation of SNR with exposure time. The equation used in the tutorial for calculating SNR is modified to account for binning, as shown below:

Formula 2 - Signal-to-Noise Ratio
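
In terms of the same variables, and assuming the signal-generated charge of M binned pixels is combined on-chip before readout (so that read noise is incurred only once per superpixel), the standard binned form, which Formula 2 presumably follows, is:

SNR = \frac{M \, P \, Q_e \, t}{\sqrt{M \left( P \, Q_e \, t + D \, t \right) + N_r^{2}}}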

In this modified equation, the symbol M represents the number of binned pixels, and it is assumed that the signal in each of those pixels is the same. The three curves are plotted for the same typical CCD specifications, as denoted on the graph, and for a very low specimen signal intensity, producing a photon flux of 40 photons per pixel per second incident on the sensor. Note that without pixel binning, an exposure time of approximately 4 seconds is required to achieve a photon-noise limited signal level. By implementing 16-pixel binning, an equivalent SNR and total number of detected photons per pixel is reached at an exposure time of only 0.25 seconds (see Figure 2), which would allow refreshing a preview image at an adequate frame rate to permit focusing and specimen positioning even at a low image intensity. Another consideration is that an image acquired using a 4-second integration time would benefit from an approximate 5-times improvement in signal-to-noise ratio with the use of 16-pixel binning, compared to the unbinned mode. In many situations, particularly at low light levels, the benefits of reduced noise and the resulting improved image contrast outweigh the loss of theoretical spatial resolution that is inherent to the pixel binning process.

Contributing Authors

Thomas J. Fellers, Kimberly M. Vogt, and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.



Coordinate System and Shapes

Before we begin programming with Processing, we must first channel our eighth grade selves, pull out a piece of graph paper, and draw a line. The shortest distance between two points is a good old fashioned line, and this is where we begin, with two points on that graph paper.

The above figure shows a line between point A (1,0) and point B (4,5). If you wanted to direct a friend of yours to draw that same line, you would give them a shout and say "draw a line from the point one-zero to the point four-five, please." Well, for the moment, imagine your friend was a computer and you wanted to instruct this digital pal to display that same line on its screen. The same command applies (only this time you can skip the pleasantries, and you will be required to use precise formatting). Here, the instruction will look like this:

line(1,0,4,5);

Even without having studied the syntax of writing code, the above statement should make a fair amount of sense. We are providing a command (which we will refer to as a "function") for the machine to follow entitled "line." In addition, we are specifying some arguments for how that line should be drawn, from point A (1,0) to point B (4,5). If you think of that line of code as a sentence, the function is a verb and the arguments are the objects of the sentence. The code sentence also ends with a semicolon instead of a period.

The key here is to realize that the computer screen is nothing more than a fancier piece of graph paper. Each pixel of the screen is a coordinate - two numbers, an "x" (horizontal) and a "y" (vertical) - that determines the location of a point in space. And it is our job to specify what shapes and colors should appear at these pixel coordinates.

Nevertheless, there is a catch here. The graph paper from eighth grade ("Cartesian coordinate system") placed (0,0) in the center with the y-axis pointing up and the x-axis pointing to the right (in the positive direction, negative down and to the left). The coordinate system for pixels in a computer window, however, is reversed along the y-axis. (0,0) can be found at the top left with the positive direction to the right horizontally and down vertically.

Simple Shapes

The vast majority of the programming examples you'll see with Processing are visual in nature. These examples, at their core, involve drawing shapes and setting pixels. Let's begin by looking at four primitive shapes.

For each shape, we will ask ourselves what information is required to specify the location and size (and later color) of that shape and learn how Processing expects to receive that information. In each of the diagrams below, we'll assume a window with a width of 10 pixels and height of 10 pixels. This isn't particularly realistic since when you really start coding you will most likely work with much larger windows (10x10 pixels is barely a few millimeters of screen space.) Nevertheless for demonstration purposes, it is nice to work with smaller numbers in order to present the pixels as they might appear on graph paper (for now) to better illustrate the inner workings of each line of code.

A point() is the easiest of the shapes and a good place to start. To draw a point, we only need an x and y coordinate.

A line() isn't terribly difficult either and simply requires two points: (x1,y1) and (x2,y2):

Once we arrive at drawing a rect(), things become a bit more complicated. In Processing, a rectangle is specified by the coordinate for the top left corner of the rectangle, as well as its width and height.

A second way to draw a rectangle involves specifying the centerpoint, along with width and height. If we prefer this method, we first indicate that we want to use the "CENTER" mode before the instruction for the rectangle itself. Note that Processing is case-sensitive.

Finally, we can also draw a rectangle with two points (the top left corner and the bottom right corner). The mode here is "CORNERS".

Once we have become comfortable with the concept of drawing a rectangle, an ellipse() is a snap. In fact, it is identical to rect() with the difference being that an ellipse is drawn where the bounding box of the rectangle would be. The default mode for ellipse() is "CENTER", rather than "CORNER."

It is important to acknowledge that these ellipses do not look particularly circular. Processing has a built-in methodology for selecting which pixels should be used to create a circular shape. Zoomed in like this, we get a bunch of squares in a circle-like pattern, but zoomed out on a computer screen, we get a nice round ellipse. Processing also gives us the power to develop our own algorithms for coloring in individual pixels (in fact, we can already imagine how we might do this using "point" over and over again), but for now, we are content with allowing the "ellipse" statement to do the hard work. (For more about pixels, start with: the pixels reference page, though be warned this is a great deal more advanced than this tutorial.)

Now let's look at some code with shapes in a more realistic setting, with window dimensions of 200 by 200. Note the use of the size() function to specify the width and height of the window.


(Digital Photo Professional software)

Another function of the Dual Pixel RAW Optimizer in our DPP software helps reduce the appearance of artefacts due to bright light sources in the frame. Views from the left and right dual pixel images are compared and selectively swapped so that flare spots and ghosting can be minimised or avoided.

Digital Photo Professional

Digital Photo Professional (DPP) is a high-performance RAW image processing, viewing and editing software for EOS digital cameras and PowerShot models with RAW capability.


Acknowledgements

This work was partially supported by the Research Grants Council of Hong Kong (Project Numbers 14206717, 14201415 and AoE/M-05/12), The Hong Kong Innovation and Technology Fund (Project Number ITS/371/16), The Engineering and Physical Sciences Research Council (Grant Number EP/S021442/1), National Natural Science Foundation of China (project number 51777023) and the Royal Society Wolfson Merit Award (E.P.M.). We thank Xudong Liu for providing suitable silicon samples and helpful discussions along with Edward P. J. Parrot.


aspect identifies the downslope direction of the maximum rate of change in value from each cell to its neighbors. Aspect can be thought of as the slope direction. The values of the output raster will be the compass direction of the aspect. For more information, see Aspect function (http://desktop.arcgis.com/en/arcmap/latest/manage-data/raster-and-images/aspect-function.htm) and How Aspect works (http://desktop.arcgis.com/en/arcmap/latest/tools/spatial-analyst-toolbox/how-aspect-works.htm).

Parameters: raster – the input raster / imagery layer
Returns: aspect applied to the input raster

The arguments for this function are as follows:

  • rasters – array of rasters. If a scalar is needed for the operation, the scalar can be a double or string
  • extent_type – one of “FirstOf”, “IntersectionOf”, “UnionOf”, “LastOf”
  • cellsize_type – one of “FirstOf”, “MinOf”, “MaxOf”, “MeanOf”, “LastOf”
  • astype – output pixel type

Code Availability

The R code developed for preparing the inputs to the SLEUTH model and integrating the modelling results into global maps is publicly and freely available (ref. 17). The code consists of two R scripts (R version 3.4.3, https://www.r-project.org/), one to prepare simulation inputs and one to integrate outputs. The scripts are internally documented to assist understanding and customisation for further use.

We have also shared the modified SLEUTH model, as well as the scenario.jinja file, in the Python package sleuth-automation (ref. 17; version 1.0.2, https://pypi.org/project/sleuth-automation/).


On a normal monitor the imaging elements form a square grid, so we say the aspect ratio of each pixel is 1. Aspect ratio is just width divided by height: an aspect ratio of 1 is a square, and an aspect ratio of 16/9 is elongated. In the case of monitors we have two separate aspect ratios: the ratio of the monitor itself and the shape of each pixel, called the pixel aspect ratio. These two are not to be confused; they are different things.

Image 1: Aspect ratio is defined as width divided by height

On some devices, most notably old TV signals and some movie formats, the pixel is not a square but somewhat elongated, wider than it is high. Setting this value to something other than 1 makes Photoshop emulate such a screen by stretching your image accordingly.

Image 2: A pixel-art Boba Fett by Shkvapper at a 1:1 pixel aspect ratio and at a 3:2 ratio (or 1.5 expressed as a single number)

Most users will never need this option for anything, so it is safe to keep it at 1 unless you know you need it. Odds are you will never encounter a situation where you need to change the pixel aspect ratio.

The View > Pixel Aspect Ratio setting in Photoshop simulates non-square (elongated, rectangular) pixels on a square-pixel screen, primarily for preview purposes.

Photoshop does this simply by scaling the work area along one of the axes to get the desired, simulated pixel shape. The scaling takes place for display purposes only; when you change the pixel aspect ratio, the software will not touch the underlying pixel data in the image you're working on.

The image resolution (number of pixels along the horizontal axis and number of pixels along the vertical axis) will stay the same regardless of whether you’re watching it in an aspect ratio-corrected mode or in a square-pixel mode. If you set a non-1:1 pixel aspect ratio and use the magnifier tool to zoom into a level which will show you the individual pixels as a grid, you will see that the cells of this grid are now elongated along one of the axes, following the x/y pixel aspect ratio you set.

However, Photoshop does allow you to paint on the image in this mode, and will scale the output of its tools accordingly, to match the new pixel aspect ratio. So you can e.g. draw circles which will look perfect with no distortion whatsoever, even though when you study them in the magnifier view, or using the ruler tool (set to use pixel units), there will be a different number of pixels along the horizontal and vertical axes.
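
As a rough worked check of that last point, with made-up numbers (a small R calculation; the ratio and diameter below are hypothetical): with pixels 1.5 times wider than they are tall, a circle that looks round on screen spans fewer pixel columns than pixel rows.

par.ratio   <- 1.5    # pixel width divided by pixel height (hypothetical)
visual.diam <- 300    # apparent diameter, in square-pixel screen units (hypothetical)
c(horizontal = visual.diam / par.ratio, vertical = visual.diam)
# -> 200 pixels across, 300 pixels down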

So why would you ever want to do this? Your pixels are supposed to be neat and square, their width matching their height, right?

As the preset options in the View > Pixel Aspect Ratio menu suggest, Photoshop mainly implements this feature for working with video frames. There are several industry-standard digital video formats – such as those used on PAL and NTSC DVDs and in SD-resolution digital TV broadcasts – which for technical and historical reasons employ a pixel aspect ratio other than 1:1.

The same also holds true for the early (1980s-era) home and office computers and video game consoles. The early video graphics chips usually produced a signal in which the pixels – realized as a video raster displayed on a CRT screen – were clearly wider or narrower than they were tall. If you wanted your computer to draw perfect circles instead of elongated ellipses, or design any other sort of graphics or art that was to be displayed on the computer screen, you needed to take the pixel aspect ratio into account and match your designs to the fundamental characteristics of the video graphics modes your computer could produce.

Later on, PCs began to standardize on graphics modes which would produce (nominally) 1:1-shaped pixels on properly adjusted CRT screens, while also filling the screen area from edge to edge. Yet later, LCD monitors fixed the pixel array once and for all, making it (for all practical purposes) mandatory to use square-pixel graphics modes and the native resolution of the display, instead of some arbitrary resolution.

This was all a sensible and welcome development, as standardizing on square pixels made it much easier to create and display graphics in a portable way. The early computers did not do this because they had various technical limitations and trade-offs, where getting a particular resolution or color palette on the screen was more important than the exact shape of the pixels.

You may still occasionally stumble upon special-purpose displays (think of something like a jumbo LED ad display on the outside wall of a shopping mall, or the LED array displays showing the next stop on a local bus, or the monochrome LCD display on the control panel of some industrial device) where the picture elements are not necessarily square-shaped, and where your pixel-graphics designs need to be scaled or shaped accordingly. That is, if you want to maintain the correct (physical) aspect ratio for graphics you output.

The less resolution and colors a display has, the more it calls for hand-tweaking your graphics pixel-by-pixel, or designing them from scratch for a particular graphics mode or display. Even more so if the final picture elements are not square. (Mere mechanical application of interpolation algorithms will usually produce quite bad results if the target resolution or color depth is small enough. Or inversely, the quality of your designs can be considerably better if you design for the limitations of the device and control the output to the level of the individual picture elements instead of just applying scaling algorithms and automatic conversions.)

The need for these considerations is altogether getting rarer now as even the lowest-end devices often have plenty of resolution and color on their displays, and engineers mostly try to make the addressable picture elements square in their shape, if at all possible. If you work with SD video (for archival or editing purposes), or design graphics for retrocomputing or demoscene projects, they’re still very real, though.

