Error Diffusion Method

Ping Wah Wong, in Handbook of Image and Video Processing (Second Edition), 2005

Image Quantization, Halftoning, and Printing

3.2 Error Diffusion

Error diffusion [11] is an excellent method for generating high quality halftones, particularly suitable for low to medium resolution devices. A block diagram of an error diffusion system is given in Fig. 7. It turns out [12] that error diffusion is the two-dimensional equivalent of sigma-delta modulation (also called delta-sigma modulation) [13], which was developed for performing high resolution analog-to-digital conversion using a one-bit quantizer embedded in a feedback loop.
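The connection to sigma-delta modulation can be made concrete with a minimal one-dimensional sketch (an illustration, not code from the chapter): a one-bit quantizer in a feedback loop, with the quantizer error from the previous sample subtracted from the next input.

```python
# A minimal first-order sigma-delta sketch: a one-bit quantizer in a
# feedback loop, the 1-D analogue of the error diffusion system in Fig. 7.
def sigma_delta(x):
    """Quantize a sequence of values in [0, 1] to bits {0, 1}."""
    out = []
    e = 0.0                       # previous quantizer error
    for f in x:
        u = f - e                 # modified input: subtract fed-back error
        b = 1 if u >= 0.5 else 0  # one-bit quantizer
        e = b - u                 # quantizer error, fed back at the next step
        out.append(b)
    return out

bits = sigma_delta([0.25] * 8)
```

For a constant input of 0.25 the bit stream averages exactly 0.25: the feedback loop forces the local mean of the output to track the input, pushing the quantization noise to high frequencies.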

FIGURE 7. A block diagram of the error diffusion system.

To apply the error diffusion algorithm in producing a halftone b(n1, n2), we need to scan the input image f(n1, n2) in some fashion. One of the most popular strategies is raster scanning, where we scan the image row by row from the top to the bottom, and within each row from the left to the right.

At each pixel location, we perform the operations

(7)  u(n1, n2) = f(n1, n2) − Σ_{m1, m2} h(m1, m2) e(n1 − m1, n2 − m2)

(8)  b(n1, n2) = Q(u(n1, n2)) = { 1 if u(n1, n2) ≥ 0.5;  0 otherwise }

(9)  e(n1, n2) = b(n1, n2) − u(n1, n2) = Q(u(n1, n2)) − u(n1, n2)

In the signal processing literature, u(n1, n2) is called either the state variable of the system or the modified input.

Notice that we need to buffer the binary quantizer error values e(n1, n2) because of the convolution operation in (7). The extent of the buffer required is dependent on the support of the error diffusion kernel h(m1, m2). The filter h(m1, m2) generally has a low pass characteristic, and the coefficients often satisfy

Σ_{m1, m2} h(m1, m2) = 1.

A very popular kernel suggested by Floyd and Steinberg [11] is shown in Fig. 8, with

FIGURE 8. The Floyd–Steinberg error diffusion filter coefficients.

h(0, 1) = 7/16,  h(1, −1) = 3/16,  h(1, 0) = 5/16,  h(1, 1) = 1/16.

This kernel consists of only four coefficients, so the complexity of the error diffusion algorithm is quite mild. Two other popular error diffusion kernels were proposed by Jarvis, Judice, and Ninke [14] and by Stucki [15]. A Floyd–Steinberg error diffused halftone and its power spectrum are shown in Fig. 9.
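The raster-scan update of (7)–(9) with the Floyd–Steinberg kernel of Fig. 8 can be sketched as follows (a minimal illustration; boundary pixels simply skip kernel taps that fall outside the image):

```python
# Raster-scan error diffusion following (7)-(9), using the Floyd-Steinberg
# kernel of Fig. 8. Kernel offsets (m1, m2) are (row, column) displacements.
H = {(0, 1): 7/16, (1, -1): 3/16, (1, 0): 5/16, (1, 1): 1/16}

def error_diffuse(f):
    """Halftone a 2-D list of gray values in [0, 1] to a binary image."""
    rows, cols = len(f), len(f[0])
    e = [[0.0] * cols for _ in range(rows)]   # buffered quantizer errors
    b = [[0] * cols for _ in range(rows)]
    for n1 in range(rows):
        for n2 in range(cols):
            # (7): modified input u = f - sum over the kernel of h * e
            u = f[n1][n2] - sum(h * e[n1 - m1][n2 - m2]
                                for (m1, m2), h in H.items()
                                if 0 <= n1 - m1 < rows and 0 <= n2 - m2 < cols)
            b[n1][n2] = 1 if u >= 0.5 else 0  # (8): one-bit quantizer
            e[n1][n2] = b[n1][n2] - u         # (9): error, buffered for later pixels
    return b
```

Because the kernel only references already-processed pixels (the left neighbor and the previous row), a single pass in raster order suffices, and the error buffer never needs more than the kernel's row support.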

FIGURE 9. Boats image (a) halftoned using Floyd–Steinberg error diffusion [11] at 150 dots per inch, and (b) the halftone power spectrum.

Using (7) and (9), we can write

(10)  b(n1, n2) = f(n1, n2) + Σ_{m1, m2} g(m1, m2) e(n1 − m1, n2 − m2)

where

g(m1, m2)=δ(m1, m2)−h(m1, m2),

and δ(m1, m2) is the Kronecker delta. Using the Floyd–Steinberg filter kernel as an example, we have

g(m1, m2) = {   1      if (m1, m2) = (0, 0)
               −7/16   if (m1, m2) = (0, 1)
               −3/16   if (m1, m2) = (1, −1)
               −5/16   if (m1, m2) = (1, 0)
               −1/16   if (m1, m2) = (1, 1)
                0      otherwise.

A nice interpretation of (10) is that the output is the sum of the input and a filtered version of the quantizer noise.

To calculate the power spectral density of b(n1, n2), we need its autocorrelation function. This calculation requires the autocorrelation function of e(n1, n2) and the cross-correlation function E[f(n1, n2) e(n1 + m1, n2 + m2)], both as functions of the statistical properties of the input image f(n1, n2). This turns out to be a very difficult task because of the nonlinearity introduced into the system by the binary quantizer. A general solution for the two-dimensional case is still an open problem, although solutions for the one-dimensional case (sigma-delta modulation) have been found [16, 17].

Despite the difficulty of an exact mathematical analysis of error diffusion, we can compute the spectrum of error diffused halftones empirically. An example of such a power spectrum is shown in Fig. 9. It is of interest to note that the noise of error diffusion is concentrated in the high frequency region, which agrees with the intuition that the quantization noise in a high quality halftone should be mostly located at high frequencies.
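Such an empirical spectrum can be estimated with a simple periodogram. The sketch below (an illustration using NumPy, not code from the chapter) removes the mean so the estimate shows only the quantization noise; real analyses usually average periodograms over many blocks:

```python
import numpy as np

# Empirical power spectrum of a halftone, as used for plots like Fig. 9(b):
# squared magnitude of the 2-D FFT of the mean-removed halftone.
def halftone_spectrum(b):
    b = np.asarray(b, dtype=float)
    B = np.fft.fft2(b - b.mean())           # remove DC so only noise remains
    return np.fft.fftshift(np.abs(B) ** 2)  # power, DC moved to the center

rng = np.random.default_rng(0)
fake_halftone = (rng.random((64, 64)) < 0.5).astype(float)
P = halftone_spectrum(fake_halftone)
```

For a Floyd–Steinberg halftone of a constant gray patch, the resulting P is near zero at low frequencies and rises toward the edges of the plot, the "blue noise" shape visible in Fig. 9(b).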

Although Floyd–Steinberg error diffusion generally produces high quality output, it can also generate undesirable artifacts such as “worms” and other objectionable patterns. Consequently, there have been a large number of reports in the literature focusing on improving error diffusion. These techniques can generally be categorized into two classes. The first class focuses on “breaking up” the undesirable output patterns of error diffusion, for example by using alternative scanning strategies [18, 19] or by injecting random noise into the error diffusion system [18, 20].

Another class of techniques attempts to optimize the error diffusion filter with respect to a distortion criterion. To this end, one can use the frequency weighted mean squared error in (4). Substituting (10) into (4), we have

ε = E[(v(n1, n2) * g(n1, n2) * e(n1, n2))²].

It is evident that ε is a function of h(n1, n2). Hence at least in principle, we can find an optimum error diffusion kernel h(n1, n2) to minimize ε.

As discussed earlier, error diffusion is a highly nonlinear system in which e(n1, n2) depends on f(n1, n2) and on the structure of the feedback loop in a very complicated way. This makes the optimization problem very difficult. In the literature, this optimization problem has been attacked by making certain assumptions about the quantizer noise [21], by using the LMS algorithm from adaptive signal processing [22], or by using an iterative optimization approach [23].

URL: https://www.sciencedirect.com/science/article/pii/B9780121197926501170

PIXEL MANIPULATION OF IMAGES

Thomas Strothotte, Stefan Schlechtweg, in Non-Photorealistic Computer Graphics, 2002

2.1.2 Error Diffusion

The technique described so far is also known as ordered dithering since clusters of dots are used where the order of the dots is given by the dither matrix and is not changed during execution of the dithering algorithm. Even when the dither matrices are designed very carefully, it is sometimes the case that there are visible artifacts in the resulting image. To circumvent this problem, so-called error diffusion techniques can be used. As the name implies, the error (difference between the exact pixel value from the original image and the approximated value being displayed in the result) is distributed to the neighboring pixels, thus introducing a kind of “smoothing” into the dithered image.

The best-known error diffusion technique was developed by Robert W. Floyd and Louis Steinberg in 1975 and can be considered one of the classical algorithms in computer graphics. The difference between the exact pixel value and its binary representation is distributed among the neighboring pixels with a certain ratio for each direction, as can be seen in Figure 2.4. There exists a wide range of other approaches to error diffusion dithering, which differ mainly in the fractions of the error term distributed to the neighboring pixels and in the set of pixels involved.

FIGURE 2.4. Distribution of the error within the Floyd-Steinberg algorithm. Note that all portions sum up to one.

ALGORITHM 2.2. Given an input image S, the Floyd-Steinberg algorithm computes an output image O by distributing the approximation error to neighboring pixels. The function approximate() returns the closest intensity value possible to display in the output image of the current pixel.

To present the procedure more formally, consider the pseudocode in Algorithm 2.2. Note that the algorithm processes the image from left to right and from the topmost pixel downwards, so that the error terms are always added to pixels that have not already been involved in the dithering process. Also note that all fractions of the error term that are distributed sum up to one. (That is, exactly the error is distributed; any inaccuracy here results in unwanted visual artifacts.) The intensity ramp dithered according to this algorithm can be found in Figure 2.5.

FIGURE 2.5. Intensity ramp dithered using Floyd-Steinberg error diffusion.

At the end of this theoretical section on the basics of halftoning, an example will show the results that are produced by the different methods when applied to one image. In Figure 2.6(a), an original grayscale image is given. Next, in Figure 2.6(b), a simple threshold quantization has been performed by setting all pixels having an intensity of less than 0.5 to white and all others to black. Figures 2.6(c) and (d) show an example of ordered dithering using the dither patterns from Figure 2.2 and, finally, the result of the Floyd-Steinberg algorithm.

FIGURE 2.6. Different halftoning techniques for the same original image (a): threshold quantization (b), ordered dithering (c), and Floyd-Steinberg error diffusion (d). The lower images show an enlarged part to visualize the pixel distribution.

URL: https://www.sciencedirect.com/science/article/pii/B9781558607873500037

Parallel computation of steady Navier-Stokes equations on uni-variant/multi-variant elements

Tony W.H. Sheu, … Morten M.T. Wang, in Parallel Computational Fluid Dynamics 1998, 1999

6 CONCLUDING REMARK

We have presented parallel computation of steady-state incompressible Navier-Stokes equations. The results presented here are based on the streamline upwind formulation, implemented on quadratic elements to avoid cross-wind diffusion errors. Both uni- and multi-variant elements have been examined. Since the algebraic system is prohibitively large in the present three-dimensional simulation, we have used the BICGSTAB iterative solver to cope with matrix asymmetry and indefiniteness. This iterative solver is implemented in an element-by-element format to improve computational performance. For a further improvement in computational efficiency, we ran the codes in parallel to take advantage of the speed-up that multi-processors can offer. In this study, we have focused on the parallel performance of the code on the CRAY parallel platforms C-90, J-90, and J-916, using the three-dimensional lid-driven problem as a benchmark.

URL: https://www.sciencedirect.com/science/article/pii/B978044482850750070X

Lossless Information Hiding in Images on the Spatial Domain

Zhe-Ming Lu, Shi-Ze Guo, in Lossless Information Hiding in Images, 2017

2.5.4.7 Experimental Results

Six 512 × 512 error-diffused halftone images, Lena, Baboon, Airplane, Boat, Pepper, and Barbara, are selected to test the performance of the proposed method, as shown in Fig. 2.35. These halftones are obtained by performing Floyd–Steinberg error diffusion filtering on the 8-bit gray level images. The capacities for different images and different sizes of LUT are listed in Table 2.4.

Figure 2.35. Six test error–diffused images Lena, Airplane, Baboon (top row, from left to right), Boat, Pepper, and Barbara (bottom row, from left to right).

Table 2.4. Capacity (Bits) With Different Images and Different Sizes of LUT(I)

LUT Lena Airplane Baboon Boat Pepper Barbara
I = 1 201 195 8 68 152 33
I = 2 339 385 21 142 258 73
I = 3 432 522 28 204 355 112
I = 4 512 635 37 261 429 140
I = 5 582 742 41 314 487 165
I = 6 641 844 45 366 540 188
I = 7 690 936 48 416 582 212
I = 8 739 1025 51 464 620 228
I = 9 788 1109 52 512 655 241
I = 10 831 1191 54 553 685 254

LUT, look-up table.

In our experiments, a 1D binary sequence created by a pseudo random number generator is chosen as the hidden data. Fig. 2.36a and b illustrate the original image Lena and its watermarked version, whereas Fig. 2.36c shows the recovered one. To evaluate the introduced distortion, we apply an effective quality metric proposed by Valliappan et al. [55], i.e., weighted signal-to-noise ratio (WSNR). The linear distortion is quantified in Ref. [55] by constructing a minimum mean squared error Wiener filter; in this way the residual image is uncorrelated with the input image. The residual image represents the nonlinear distortion plus additive independent noise. Valliappan et al. [55] spectrally weight the residual by a contrast sensitivity function (CSF) to quantify the effect of nonlinear distortion and noise on quality. A CSF is a linear approximation of the HVS response to a sine wave of a single frequency, and a low-pass CSF assumes that the human eye does not focus on one point but moves freely around the image. Since the halftone image is intended to preserve the useful information of the gray level image, we compare the halftone or watermarked image with the original gray level image. As with PSNR, a higher WSNR means a higher quality. In our experiments, the WSNR between the gray level Lena and the halftone Lena is 29.18 dB, whereas the WSNR between the gray level Lena and the watermarked Lena is 28.59 dB. It can be seen that the introduced distortion of the visual quality is slight. Since the WSNR between the gray level Lena and the recovered Lena is 29.18 dB, the recovered version is exactly the same as the original image.
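The flavor of the WSNR computation can be sketched as follows. This is an illustration only: the exponential low-pass weighting below is a simple stand-in for the CSF of Ref. [55], whose exact form is not reproduced here, and the Wiener-filter step is omitted.

```python
import numpy as np

# A weighted SNR in the spirit of WSNR: signal and residual spectra are
# weighted by a low-pass contrast sensitivity function before forming
# the energy ratio. The exponential CSF is a stand-in, not Ref. [55]'s.
def wsnr_db(original, degraded):
    x = np.asarray(original, dtype=float)
    r = x - np.asarray(degraded, dtype=float)   # residual image
    n1, n2 = x.shape
    u = np.fft.fftfreq(n1)[:, None]
    v = np.fft.fftfreq(n2)[None, :]
    csf = np.exp(-4.0 * np.hypot(u, v))         # low-pass CSF stand-in
    sig = np.sum((csf * np.abs(np.fft.fft2(x))) ** 2)
    noise = np.sum((csf * np.abs(np.fft.fft2(r))) ** 2)
    return 10 * np.log10(sig / noise)
```

Because the CSF de-emphasizes high frequencies, halftone noise that error diffusion pushes into the high band is penalized less than the same noise energy at low frequencies, which is exactly why WSNR suits halftone evaluation better than plain PSNR.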

Figure 2.36. Data hiding on the halftone Lena. (a) The original Lena, WSNR = 29.18 dB, (b) the watermarked Lena with 831 bits inserted, WSNR = 28.58 dB, (c) the recovered Lena, WSNR = 29.18 dB. WSNR, weighted signal-to-noise ratio.

Our method can also be used for halftone image authentication. For example, a hash sequence of the original halftone image can be hidden in the halftone image. We only need to compare the hash extracted from the watermarked image (Hash1) with the hash sequence computed from the recovered image (Hash2). When these two sequences are equal, we can confirm that the watermarked image has suffered no alteration. Under no attacks, both of them are certainly equal to the original hash, whereas if the watermarked image has been altered without authorization, the two sequences are different. The process is illustrated in Fig. 2.37.
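The comparison step can be sketched with a standard hash function. `image_hash` and `is_authentic` are illustrative names; the embedding and extraction of the hash are performed by the chapter's hiding scheme and are not shown here.

```python
import hashlib

# Authentication check: Hash1 (extracted from the watermark) must equal
# Hash2 (recomputed from the losslessly recovered halftone).
def image_hash(pixels):
    """Hash a binary halftone (list of rows of 0/1 values) with SHA-256."""
    data = bytes(p for row in pixels for p in row)
    return hashlib.sha256(data).hexdigest()

def is_authentic(extracted_hash, recovered_image):
    return extracted_hash == image_hash(recovered_image)
```

Because the hiding scheme is lossless, the recovered halftone is bit-identical to the original under no attack, so the two hashes match exactly; any single flipped pixel changes the hash.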

Figure 2.37. Application of our scheme in halftone image authentication.

URL: https://www.sciencedirect.com/science/article/pii/B9780128120064000024

Digital Color Reproduction

Brian A. Wandell, Louis D. Silverstein, in The Science of Color (Second Edition), 2003

8.5.5.3 Error diffusion

At low print resolutions, the best halftoning results are obtained using an adaptive algorithm in which the halftoning depends upon the data in the image itself. Floyd and Steinberg (1976) introduced the basic principles of adaptive halftoning methods in a brief and fundamental paper. Their algorithm is called error diffusion. The idea is to initiate the halftoning process by selecting the binary output level closest to the original intensity. This binary level will generally differ substantially from the original. The difference between the halftone output and the true image (i.e., the error) is added to neighboring pixels that have not yet been processed. Then, the binary output decision is made on the next pixel, whose value now includes both the original image intensity and the errors that have been added from previously processed pixels. Figure 8.23 shows a flow chart of the algorithm (panel A) and a typical result (panel B).

Figure 8.23. The steps involved in error diffusion algorithm (A) and the resulting image (B). See text for details.

The coefficients that distribute the error among neighboring pixels can be chosen depending on the source material and output device. Jarvis, Judice, and Ninke (1976) found the apportionment of error using the matrix

         ( 0  0  *  7  5 )
(1/48) × ( 3  5  7  5  3 )
         ( 1  3  5  3  1 )

to be satisfactory, where * denotes the current image point being processed. Notice that the error is propagated forward to unprocessed pixels. Also, the algorithm works properly when applied to the linear intensity of the image. The algorithm should not be applied to images represented in a nonlinear space, such as the frame buffer values of a monitor. Instead, the image should be converted to a format that is linear with intensity prior to application of the algorithm.
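As an illustration of the linearization step, a display nonlinearity can be inverted before halftoning. The sketch below uses the standard sRGB transfer function as the example nonlinearity (sRGB postdates the works cited here, so this is an illustration of the principle, not their procedure):

```python
# Error diffusion should operate on linear intensity, not on
# gamma-encoded frame-buffer values. The standard sRGB transfer
# function is inverted here as an example.
def srgb_to_linear(v):
    """Map an sRGB-encoded value in [0, 1] to linear intensity."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# halftone the linearized values, not the raw frame-buffer codes
linear_row = [srgb_to_linear(code / 255) for code in (0, 64, 128, 255)]
```

Skipping this step darkens or lightens midtones systematically, because the halftone's spatial averaging is linear in intensity while the frame-buffer encoding is not.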

For images printed at low spatial resolution, error-diffusion is considered the best method. Ulichney (1987) analyzed the spatial error of the method and showed that the error was mainly in the high spatial frequency regime. The drawback of error diffusion is that it is very time-consuming compared to the simple threshold operations used in dither patterns. For images at moderate to high spatial resolution (600 dpi), blue-noise masks are visually as attractive as error diffusion and much faster to compute. Depending on the nature of the paper, cluster dot can be preferred at high resolutions. The cluster dot algorithm separates the centroids of the ink so that there is less unwanted bleeding of the ink from cell to cell. In certain devices and at certain print resolutions, reducing the spread of the ink is more important than reducing the visibility of the mask.

URL: https://www.sciencedirect.com/science/article/pii/B978044451251250009X

Visual Cryptography: The Combinatorial and Halftoning Frameworks

Gonzalo R. Arce, … Zhi Zhou, in Handbook of Image and Video Processing (Second Edition), 2005

3.3.1 Halftone Visual Secret Sharing vs. Combinatorial Visual Secret Sharing

To compare the result of halftone visual SS with that of extended visual SS, a 256 × 256 secret binary image is cryptographically encoded into two 512 × 512 halftone images using the two methods, respectively. The pixel expansion (halftone cell size) and the relative difference are the same for both methods, being m = 4 and α = 1/2, respectively. The original halftone images, obtained by the error diffusion algorithm and pixel reversal, are shown in Fig. 5. The extended visual SS method [4] outputs two shares with poor visual quality and low contrast, as shown in Figs. 6A and 6B. The average PSNR of these two shares with respect to their original halftones is 3.46 dB. The halftone visual SS method results in the two visually pleasing halftone shares shown in Figs. 6C and 6D. The PSNR of these two halftone shares is 6.02 dB, so the new method gains 2.56 dB. Having the same relative difference in both methods indicates that the same contrast of the reconstructed secret images can be obtained by both methods. This is precisely the case, as shown in Figs. 6E and 6F. The superiority of the halftone visual SS method is that it generates halftone shares with much better visual quality, reducing suspicion that a secret is encrypted.

FIGURE 5. Original complementary halftone images generated by error diffusion algorithm and pixel reversal, respectively.

FIGURE 6. Comparison between extended visual secret sharing (SS) and halftone visual SS (Q = 2): (A, B) the two shares of extended visual SS; (C, D) the two shares of halftone visual SS; (E) decoded image of extended visual SS; (F) decoded image of halftone visual SS.

URL: https://www.sciencedirect.com/science/article/pii/B9780121197926501261

vic: A Flexible Framework for Packet Video 

Steven McCanne, Van Jacobson, in Readings in Multimedia Computing and Networking, 2002

4.3 Rendering

Another performance-critical operation is converting video from the YUV pixel representation used by most compression schemes to a format suitable for the output device. Since this rendering operation is performed after the decompression on uncompressed video, it can be a bottleneck and must be carefully implemented. Our profiles of vic match the experiences reported by Patel et al. [35], where image rendering sometimes accounts for 50% or more of the execution time.

Video output is rendered either through an output port on an external video device or to an X window. In the case of an X window, we might need to dither the output for a color-mapped display or simply convert YUV to RGB for a true-color display. Alternatively, HP’s X server supports a “YUV visual” designed specifically for video and we can write YUV data directly to the X server. Again, we use a C++ class hierarchy to support all of these modes of operation and special-case the handling of 4:2:2 and 4:1:1-decimated video and scaling operations to maximize performance.

For color-mapped displays, vic supports several modes of dithering that trade off quality for computational efficiency. The default mode is a simple error-diffusion dither carried out in the YUV domain. Like the approach described in [35], we use table lookups for computing the error terms, but we use an improved algorithm for distributing color cells in the YUV color space. The color cells are chosen uniformly throughout the feasible set of colors in the YUV cube, rather than uniformly across the entire cube using saturation to find the closest feasible color. This approach effectively doubles the number of useful colors in the dither. Additionally, we add extra cells in the region of the color space that corresponds to flesh tones for better rendition of faces.

While the error-diffusion dither produces a relatively high quality image, it is computationally expensive. Hence, when performance is critical, a cheap, ordered dither is available. Vic’s ordered dither is an optimized version of the ordered dither from nv.

An even cheaper approach is to use direct color quantization. Here, a color gamut is optimized to the statistics of the displayed video and each pixel is quantized to the nearest color in the gamut. While this approach can produce banding artifacts from quantization noise, the quality is reasonable when the color map is chosen appropriately. Vic computes this color map using a static optimization explicitly invoked by the user. When the user clicks a button, a histogram of colors computed across all active display windows is fed into Heckbert’s median cut algorithm [21]. The resulting color map is then downloaded into the rendering module. Since median cut is a compute-intensive operation that can take several seconds, it runs asynchronously in a separate process. We have found that this approach is qualitatively well matched to LCD color displays found on laptop PCs. The Heckbert color map optimization can also be used in tandem with the error diffusion algorithm. By concentrating color cells according to the input distribution, the dither color variance is reduced and quality increased.

Finally, we optimized the true-color rendering case. Here, the problem is simply to convert pixels from the YUV color space to RGB. Typically, this involves a linear transformation requiring four scalar multiplications and six conditionals. Inspired by the approach in [35], vic uses an algorithm that gives full 24-bit resolution using a single table lookup on each U-V chrominance pair and performs all the saturation checks in parallel. The trick is to leverage off the fact that the three coefficients of the Y term are all 1 in the linear transform. Thus we can precompute all conversions for the tuple (0, U, V) using a 64KB lookup table, T. Then, by linearity, the conversion is simply (R, G, B) = (Y, Y, Y) + T(U, V).
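The table-lookup trick can be sketched as follows. This is an illustration of the idea rather than vic's actual code: the table here stores floating-point offsets and uses plain clamping in place of vic's parallel saturation checks, and BT.601-style coefficients are assumed.

```python
# Table-driven YUV-to-RGB: because the Y coefficients are all 1, the
# chrominance contribution for every (U, V) pair can be precomputed once;
# each pixel then needs one table lookup plus an add of Y per channel.
def clamp(x):
    return 0 if x < 0 else 255 if x > 255 else x

# BT.601-style coefficients (assumed); U and V are centered on 128.
T = {}
for U in range(256):
    for V in range(256):
        u, v = U - 128, V - 128
        T[(U, V)] = (1.402 * v,                     # R offset for Y = 0
                     -0.344136 * u - 0.714136 * v,  # G offset for Y = 0
                     1.772 * u)                     # B offset for Y = 0

def yuv_to_rgb(Y, U, V):
    dr, dg, db = T[(U, V)]                          # single chroma lookup
    return clamp(int(Y + dr)), clamp(int(Y + dg)), clamp(int(Y + db))
```

In vic the 256 × 256 table is packed so the lookup and the saturation handling cost far less than the four multiplications and six conditionals of the direct transform.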

A final rendering optimization is to dither only the regions of the image that change. Each decoder keeps track of the blocks that are updated in each frame and renders only those blocks. Pixels are rendered into a buffer shared between the X server and the application so that only a single data copy is needed to update the display with a new video frame. Moreover, this copy is optimized by limiting it to a bounding box computed across all the updated blocks of the new frame.

URL: https://www.sciencedirect.com/science/article/pii/B9781558606517501352

Volume 3

Joachim Heinzl, in Comprehensive Microsystems, 2008

3.11.4.2 Halftones and Color

Sharp contours and clear letters are obtained by the use of quarter steps. Round and slanted contours are distinctly rendered. The difference between the possible dot positions along the lines and between the lines cannot be perceived with the naked eye, even if flat arcs follow the raster line. As for the contours, photo quality has been obtained. Halftones are more difficult to obtain in photo quality.

There are a number of ways to render halftones in a raster. You can repeatedly trigger small droplets in rapid sequence, vary the size of the drops, use inks of different degrees of saturation, and use halftoning with a fixed drop size using clusters or raster cells (Wild 1990). You can alter the distances between the droplets using error diffusion. Part of these strategies can be interpreted as amplitude modulation (AM) and others as frequency modulation (FM).

It would be ideal to produce droplets in various sizes in order to get a linear gray scale of 16 or 32 steps. Unfortunately, the variation in drop size is difficult to bring about. No more than three different drop sizes have been achieved so far. It is much easier to put together different dot sizes by superimposing varying numbers of small drops. Up to eight microdrops are in use.

If you use only one drop size, you can produce halftones by clustering, specifying which dots of the cluster are to be printed at every step of the gray scale. Typical strategies are Digital Halftone and Ordered Dither. Digital Halftone uses neighboring dots in the cluster, and this leads to a greater granularity. Ordered Dither separates the dots within the cluster as far as possible. The transitions in the highlights are especially critical with Ordered Dither.

Error Diffusion does without predetermined clusters. The desired levels of gray are added up along the line. When a threshold is crossed, the next dot is printed and the addition restarts. If this strategy is calculated along only one raster line, wavy artifacts may appear across the raster lines. Therefore you have to calculate several raster lines at a time and distribute the dots in such a way that they are staggered in neighboring raster lines.
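The accumulate-and-threshold strategy for a single raster line can be sketched as follows (an illustration; `line_dots` is a hypothetical helper, and the remainder above the threshold is carried over after each printed dot):

```python
# Accumulate gray levels along one raster line; print a dot each time the
# running sum crosses the threshold, then restart the addition, carrying
# the remainder so no gray value is lost.
def line_dots(grays, threshold=1.0):
    """Return dot positions for one raster line of gray values in [0, 1]."""
    dots = []
    acc = 0.0
    for i, g in enumerate(grays):
        acc += g
        if acc >= threshold:    # threshold crossed: print the next dot
            dots.append(i)
            acc -= threshold    # restart the addition with the remainder
    return dots

positions = line_dots([0.25] * 12)  # a 25% gray line
```

For a constant 25% gray, every fourth position receives a dot, which is the one-dimensional version of the staggering across raster lines described above.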

Normally white paper is used for ink-jet printing. The application of colored inks leads to subtractive color mixing. The colors cyan, magenta, and yellow (CMY) are used. As far as possible, dots of different colors should land on the paper without overlap. With dark colors this is only possible to a certain extent. The three basic colors can render a large part of the color gamut (Petschik 1994). However, superimposing the three colors does not lead to a satisfactory black, but to a brownish gray. Therefore, black has to be used as a fourth ink (cyan, magenta, yellow, black; CMYK). Because black is used most frequently in the printing of texts, many printers provide more nozzles for black than for the other three colors.

There is a major difference between color printing with ink-jet on the one hand and offset printing and laser printing on the other. This concerns registering, which is how the prints of the different colors fit together. Ink-jet printheads carry all the colors and print them in one run of the paper. Offset and laser print the colors in successive runs, so registering is much more difficult. To avoid moiré, the color screens are slightly twisted; this leads to circular patterns and rosettes in laser printing, especially in light areas.

The difference can be seen clearly in Figure 33. The sample on the left side is printed with the HP Laserprinter Color LaserJet 4600, the sample in the middle is printed with the ink-jet printer HP Officejet 6210. Both samples used the same data. The magnification shows the effect of the different raster strategies. Error diffusion was used in the ink-jet sample and avoids the artifacts that can be seen in the laser print. A further improvement can be achieved by two additional colors. With light cyan (Lc), light magenta (Lm), and CMYK, the granularity and the roughness can be reduced. The sample on the right-hand side is printed with the same ink-jet printer, but using six colors (Allen 1997, 1998). Increasing the number of nozzles by 25% brings on smooth color transitions and low image grain.

Figure 33. Enlarged samples printed by laser and four or six color ink-jet. (Source: Lehrstuhl für Mikrotechnik und Medizingerätetechnik, Institut für Mechatronik, Technische Universität Munich.)

Even on plain paper acceptable color prints can be achieved. For photo quality, special coated papers are necessary. They are available on the market in formats from 10 cm × 15 cm up to A4 and for large format as rolls with a breadth up to 60″ or 1524 mm.

URL: https://www.sciencedirect.com/science/article/pii/B9780444521903000616

Image Quantization, Halftoning, and Printing

Ping Wah Wong, in Handbook of Image and Video Processing (Second Edition), 2005

3.2 Error Diffusion

Error diffusion [11] is an excellent method for generating high quality halftones that are particularly suitable for low to medium resolution devices. A block diagram showing an error diffusion system is given in Fig. 7. It turns out [12] that error diffusion is the two-dimension equivalent of sigma-delta modulation (also called delta-sigma modulation) [13], which was developed for performing high resolution analog-to-digital conversion using a one-bit quantizer embedded in a feedback loop.

FIGURE 7. A block diagram of the error diffusion system.

To apply the error diffusion algorithm in producing a halftone b(n1, n2), we need to scan the input image f(n1, n2) in some fashion. One of the most popular strategies is raster scanning, where we scan the image row by row from the top to the bottom, and within each row from the left to the right.

At each pixel location, we perform the operations

(7)u(n1, n2)=f(n1, n2)−Σm1, m2h(m1, m2)e(n1−m1, n2−m2)

(8)b(n1, n2)=Q(u(n1, n2))={1if u(n1, n2)≥0.50otherwise.

(9)e(n1, n2)=b(n1, n2)−u(n1, n2)=Q(u(n1, n2))−u(n1, n2)

In the signal processing literature, u(n1, n2) is called either the state variable of the system or the modified input.

Notice that we need to buffer the binary quantizer error values e(n1, n2) because of the convolution operation in (7). The extent of the buffer required is dependent on the support of the error diffusion kernel h(m1, m2). The filter h(m1, m2) generally has a low pass characteristic, and the coefficients often satisfy

Σ(m1, m2)h(m1, m2)=1.

A very popular kernel suggested by Floyd and Steinberg [11] is shown in Fig. 8, with

FIGURE 8. The Floyd–Steinberg error diffusion filter coefficients.

h(0, 1)=7/16,h(1, −1)=3/16,h(1, 0)=5/16,h(1, 1)=1/16.

This kernel consists of four coefficients, and hence the complexity of the error diffusion algorithm is quite mild. Two other popular error diffusion kernels are proposed by Jarvis, Judice and Ninke [14] and Stucki [15]. A Floyd–Steinberg error diffused halftone and its power spectrum is shown in Fig. 9.

FIGURE 9. Boats image (a) halftoned using Floyd–Steinberg error diffusion [11] at 150 dots per inch, and (b) the halftone power spectrum.

Using (7) and (9), we can write

(10)b(n1, n2)=f(n1, n2)+Σm1, m2g(m1, m2)e(n1−m1, n2−m2)

where

g(m1, m2)=δ(m1, m2)−h(m1, m2),

and δ(m1, m2) is the Kronecker delta. Using the Floyd Steinberg filter kernel as an example, we have

g(m1, m2)={1(m1, m2)=(0, 0)−7/16(m1, m2)=(0, 1)−3/16(m1, m2)=(1, −1)−5/16(m1, m2)=(1, 0)−1/16(m1, m2)=(1, 1)0otherwise.

A nice interpretation of (10) is that the output is the sum of the input and a filtered version of the quantizer noise.

To calculate the power spectral density of b(n1, n2), we need to calculate its autocorrelation function. This calculation requires the autocorrelation function of e(n1, n2) and the cross correlation function E[f(n1, n2)e(n1 +n1, n2 + m2)], both as function of the statistical properties of the input image f(n1, n2). This turns out to be a very difficult task because of the nonlinear nature of the system induced by the binary quantizer. A general solution for the two-dimensional case is still an open problem, although solutions to the one-dimensional case (sigma-delta modulation) have already been found [16, 17].

Despite the difficulty in an exact mathematic analysis of error diffusion, we can compute the spectrum of error diffused halftones empirically. An example of such a power spectrum is shown in Fig. 9. It is of interest to note that the noise of error diffusion is primarily in the high frequency region, which agrees with the intuition that the quantization noise in a high quality halftone should be mostly located in the high frequency region.

Although Floyd–Steinberg error diffusion generally produces high quality output, it also generates artifacts such as “worms” and other undesirable patterns. Consequently, a large number of reports in the literature have focused on the improvement of error diffusion. These techniques can generally be categorized into two classes. The first class of techniques focuses on “breaking up” the undesirable output patterns of error diffusion. This includes using alternative scanning strategies [18, 19] and injecting random noise into the error diffusion system [18, 20].

Another class of techniques attempts to optimize the error diffusion filter with respect to a distortion criterion. To this end, one can use the frequency weighted mean squared error in (4). Substituting (10) into (4), we have

ε = E[(v(n1, n2) * g(n1, n2) * e(n1, n2))^2].

It is evident that ε is a function of h(n1, n2). Hence at least in principle, we can find an optimum error diffusion kernel h(n1, n2) to minimize ε.

As discussed earlier, error diffusion is a highly nonlinear system in which e(n1, n2) depends on f(n1, n2) and the structure of the feedback loop in a very complicated way. This makes the optimization problem very difficult. In the literature, there are reports of solving this optimization problem by making certain assumptions about the quantizer noise [21], by using the LMS algorithm from adaptive signal processing [22], or by using an iterative optimization approach [23].

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780121197926501170

PIXEL MANIPULATION OF IMAGES

Thomas Strothotte, Stefan Schlechtweg, in Non-Photorealistic Computer Graphics, 2002

2.1.2 Error Diffusion

The technique described so far is also known as ordered dithering since clusters of dots are used where the order of the dots is given by the dither matrix and is not changed during execution of the dithering algorithm. Even when the dither matrices are designed very carefully, it is sometimes the case that there are visible artifacts in the resulting image. To circumvent this problem, so-called error diffusion techniques can be used. As the name implies, the error (difference between the exact pixel value from the original image and the approximated value being displayed in the result) is distributed to the neighboring pixels, thus introducing a kind of “smoothing” into the dithered image.

The best-known error diffusion technique was developed by Robert W. Floyd and Louis Steinberg in 1975 and can be considered one of the classical algorithms in computer graphics. The difference between the exact pixel value and the binary representation is distributed among the neighboring pixels with a certain ratio for each direction, as can be seen in Figure 2.4. A wide range of other approaches to error diffusion dithering exists; they differ mainly in the fractions of the error term distributed to the neighboring pixels and in the set of pixels involved.

FIGURE 2.4. Distribution of the error within the Floyd-Steinberg algorithm. Note that all portions sum up to one.

ALGORITHM 2.2. Given an input image S, the Floyd-Steinberg algorithm computes an output image O by distributing the approximation error to neighboring pixels. The function approximate() returns the intensity value closest to that of the current pixel that can be displayed in the output image.

To present the procedure more formally, consider the pseudocode in Algorithm 2.2. Note that the algorithm processes the image from left to right and from the topmost pixel downwards, so that the error terms are always added to pixels that have not already been involved in the dithering process. Also note that all fractions of the error term that are distributed sum up to one. (That is, exactly the error is distributed; any inaccuracy here results in unwanted visual artifacts.) The intensity ramp dithered according to this algorithm can be found in Figure 2.5.
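The procedure of Algorithm 2.2 might be rendered in Python roughly as follows; this is a sketch based on the description above, not the book's listing, with approximate() specialized to binary output:

```python
def approximate(value):
    """Closest intensity displayable in a binary output image."""
    return 1.0 if value >= 0.5 else 0.0

def floyd_steinberg(S):
    """Dither image S (2D list of floats in [0, 1]) into output image O."""
    h, w = len(S), len(S[0])
    work = [row[:] for row in S]   # error terms are added to unprocessed pixels
    O = [[0.0] * w for _ in range(h)]
    for y in range(h):             # topmost row downwards
        for x in range(w):         # left to right
            O[y][x] = approximate(work[y][x])
            err = work[y][x] - O[y][x]
            # the four fractions sum to one, so exactly the error is distributed
            for dy, dx, frac in ((0, 1, 7/16), (1, -1, 3/16),
                                 (1, 0, 5/16), (1, 1, 1/16)):
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    work[y + dy][x + dx] += err * frac
    return O
```

At the image border, fractions that would leave the image are simply dropped in this sketch; other boundary treatments are possible.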

FIGURE 2.5. Intensity ramp dithered using Floyd-Steinberg error diffusion.

At the end of this theoretical section on the basics of halftoning, an example shows the results produced by the different methods when applied to one image. In Figure 2.6(a), an original grayscale image is given. Next, in Figure 2.6(b), a simple threshold quantization has been performed by setting all pixels having an intensity of less than 0.5 to white and all others to black.¹ Figures 2.6(c) and (d) show an example of ordered dithering using the dither patterns from Figure 2.2 and, finally, the result of the Floyd-Steinberg algorithm.

FIGURE 2.6. Different halftoning techniques for the same original image (a): threshold quantization (b), ordered dithering (c), and Floyd-Steinberg error diffusion (d). The lower images show an enlarged part to visualize the pixel distribution.

URL: https://www.sciencedirect.com/science/article/pii/B9781558607873500037

Parallel computation of steady Navier-Stokes equations on uni-variant/multi-variant elements

Tony W.H. SheuProfessor, … Morten M.T. Wang, in Parallel Computational Fluid Dynamics 1998, 1999

6 CONCLUDING REMARK

We have presented parallel computation of steady-state incompressible Navier-Stokes equations. The results presented here are based on the streamline upwind formulation, implemented on quadratic elements, to avoid cross-wind diffusion errors. Both uni- and multi-variant elements have been examined. Since direct solution of the algebraic system is prohibitive in the present three-dimensional simulation, we have used the BICGSTAB iterative solver to cope with matrix asymmetry and indefiniteness. This iterative solver is implemented in an element-by-element format to improve the computational performance. For a further improvement in computational efficiency, we ran the codes in parallel to take advantage of the speed-up potential that multi-processors can offer. In this study, we have focused on the parallel performance of code run on CRAY parallel platforms, CRAY C-90, J-90, and J-916, using the three-dimensional lid-driven cavity problem to benchmark the parallel performance.

URL: https://www.sciencedirect.com/science/article/pii/B978044482850750070X

Lossless Information Hiding in Images on the Spatial Domain

Zhe-Ming Lu, Shi-Ze Guo, in Lossless Information Hiding in Images, 2017

2.5.4.7 Experimental Results

Six 512 × 512 error-diffused halftone images, Lena, Baboon, Airplane, Boat, Pepper, and Barbara, are selected to test the performance of the proposed method, as shown in Fig. 2.35. These halftones are obtained by performing Floyd–Steinberg error diffusion filtering on the 8-bit gray level images. The capacities for different images and different sizes of LUT are listed in Table 2.4.

Figure 2.35. Six test error–diffused images Lena, Airplane, Baboon (top row, from left to right), Boat, Pepper, and Barbara (bottom row, from left to right).

Table 2.4. Capacity (Bits) With Different Images and Different Sizes of LUT(I)

LUT Lena Airplane Baboon Boat Pepper Barbara
I = 1 201 195 8 68 152 33
I = 2 339 385 21 142 258 73
I = 3 432 522 28 204 355 112
I = 4 512 635 37 261 429 140
I = 5 582 742 41 314 487 165
I = 6 641 844 45 366 540 188
I = 7 690 936 48 416 582 212
I = 8 739 1025 51 464 620 228
I = 9 788 1109 52 512 655 241
I = 10 831 1191 54 553 685 254

LUT, look-up table.

In our experiments, a 1D binary sequence created by a pseudorandom number generator is chosen as the hidden data. Fig. 2.36a and b illustrate the original image Lena and its watermarked version, whereas Fig. 2.36c shows the recovered one. To evaluate the introduced distortion, we apply an effective quality metric proposed by Valliappan et al. [55], i.e., the weighted signal-to-noise ratio (WSNR). The linear distortion is quantified in Ref. [55] by constructing a minimum mean squared error Wiener filter; in this way the residual image is uncorrelated with the input image. The residual image represents the nonlinear distortion plus additive independent noise. Valliappan et al. [55] spectrally weight the residual by a contrast sensitivity function (CSF) to quantify the effect of nonlinear distortion and noise on quality. A CSF is a linear approximation of the HVS response to a sine wave of a single frequency, and a low-pass CSF assumes that the human eye does not focus on one point but moves freely around the image. Since the halftone image attempts to preserve the useful information of the gray level image, we compare the halftone or watermarked image with the original gray level image. As with PSNR, a higher WSNR means a higher quality. In our experiments, the WSNR between the gray level Lena and the halftone Lena is 29.18 dB, whereas the WSNR between the gray level Lena and the watermarked Lena is 28.59 dB. It can be seen that the introduced distortion of the visual quality is slight. Since the WSNR between the gray level Lena and the recovered Lena is 29.18 dB, the recovered version is exactly the same as the original image.

Figure 2.36. Data hiding on the halftone Lena. (a) The original Lena, WSNR = 29.18 dB, (b) the watermarked Lena with 831 bits inserted, WSNR = 28.58 dB, (c) the recovered Lena, WSNR = 29.18 dB. WSNR, weighted signal-to-noise ratio.

Our method can also be used for halftone image authentication. For example, a hash sequence of the original halftone image can be hidden in the halftone image. We only need to compare the hash extracted from the watermarked image (Hash1) with the hash sequence computed from the recovered image (Hash2). When these two sequences are equal, we can confirm that the watermarked image has suffered no alteration. Under no attacks, both of them are certainly equal to the original hash, whereas if the watermarked image has been changed without authorization, the two sequences are different. The process is illustrated in Fig. 2.37.
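The comparison step itself is straightforward; a minimal sketch, assuming SHA-256 as the hash function (the text does not fix a particular hash):

```python
# Authentication check: compare Hash1 (extracted from the watermark)
# with Hash2 (recomputed from the recovered halftone).
import hashlib

def image_hash(pixels):
    """Hash a binary halftone given as a flat sequence of 0/1 ints."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

def is_authentic(hash1, recovered_pixels):
    """True when the extracted hash matches the recomputed one."""
    return hash1 == image_hash(recovered_pixels)
```

Since the recovery is lossless, Hash2 equals the original hash whenever the watermarked image is unmodified, so any mismatch signals tampering.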

Figure 2.37. Application of our scheme in halftone image authentication.

URL: https://www.sciencedirect.com/science/article/pii/B9780128120064000024

Digital Color Reproduction

Brian A. Wandell, Louis D. Silverstein, in The Science of Color (Second Edition), 2003

8.5.5.3 Error diffusion

At low print resolutions, the best halftoning results are obtained using an adaptive algorithm in which the halftoning depends upon the data in the image itself. Floyd and Steinberg (1976) introduced the basic principles of adaptive halftoning methods in a brief and fundamental paper. Their algorithm is called error diffusion. The idea is to initiate the halftoning process by selecting a binary output level closest to the original intensity. This binary level will differ substantially from the original. The difference between the halftone output and the true image (i.e., the error) is added to neighboring pixels that have not yet been processed. Then, the binary output decision is made on the next pixel, whose value now includes both the original image intensity and the errors that have been added from previously processed pixels. Figure 8.23 shows a flow chart of the algorithm (panel A) and a typical result (panel B).

Figure 8.23. The steps involved in error diffusion algorithm (A) and the resulting image (B). See text for details.

The coefficients that distribute the error among neighboring pixels can be chosen depending on the source material and output device. Jarvis, Judice, and Ninke (1976) found the apportionment of error using the matrix

       | 0  0  *  7  5 |
(1/48) | 3  5  7  5  3 |      (148)
       | 1  3  5  3  1 |

to be satisfactory, where * denotes the current image point being processed. Notice that the error is propagated forward to unprocessed pixels. Also, the algorithm works properly when applied to the linear intensity of the image. The algorithm should not be applied to images represented in a nonlinear space, such as the frame buffer values of a monitor. Instead, the image should be converted to a format that is linear with intensity prior to application of the algorithm.
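To illustrate both points, the sketch below applies error diffusion with the Jarvis, Judice, and Ninke weights after first linearizing the input; the gamma value of 2.2 is an illustrative assumption, not a figure from the text:

```python
# Jarvis-Judice-Ninke error diffusion on linear intensity.
# Weights are propagated forward to unprocessed pixels and divided by 48.
JJN = [(0, 1, 7), (0, 2, 5),
       (1, -2, 3), (1, -1, 5), (1, 0, 7), (1, 1, 5), (1, 2, 3),
       (2, -2, 1), (2, -1, 3), (2, 0, 5), (2, 1, 3), (2, 2, 1)]

def jarvis_dither(frame_buffer, gamma=2.2):
    """Dither gamma-encoded values in [0, 1]; returns 0/1 pixels."""
    h, w = len(frame_buffer), len(frame_buffer[0])
    img = [[v ** gamma for v in row] for row in frame_buffer]  # linearize first
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if img[y][x] >= 0.5 else 0
            err = img[y][x] - out[y][x]
            for dy, dx, wgt in JJN:
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    img[y + dy][x + dx] += err * wgt / 48
    return out
```

Skipping the linearization step and dithering frame-buffer values directly would make the printed gray levels systematically too dark or too light.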

For images printed at low spatial resolution, error-diffusion is considered the best method. Ulichney (1987) analyzed the spatial error of the method and showed that the error was mainly in the high spatial frequency regime. The drawback of error diffusion is that it is very time-consuming compared to the simple threshold operations used in dither patterns. For images at moderate to high spatial resolution (600 dpi), blue-noise masks are visually as attractive as error diffusion and much faster to compute. Depending on the nature of the paper, cluster dot can be preferred at high resolutions. The cluster dot algorithm separates the centroids of the ink so that there is less unwanted bleeding of the ink from cell to cell. In certain devices and at certain print resolutions, reducing the spread of the ink is more important than reducing the visibility of the mask.

URL: https://www.sciencedirect.com/science/article/pii/B978044451251250009X

Visual Cryptography: The Combinatorial and Halftoning Frameworks

Gonzalo R. Arce, … Zhi Zhou, in Handbook of Image and Video Processing (Second Edition), 2005

3.3.1 Halftone Visual Secret Sharing vs. Combinatorial Visual Secret Sharing

To compare the result of halftone visual SS with that of extended visual SS, a 256 × 256 secret binary image is cryptographically encoded into two 512 × 512 halftone images using the two methods, respectively. The pixel expansion (halftone cell size) and the relative difference of both methods are the same, being m = 4 and α = 1/2, respectively. The original halftone images, obtained by the error diffusion algorithm and pixel reversal, are shown in Fig. 5. Applying the extended visual SS method of [4] yields two shares with poor visual quality and low contrast, as shown in Figs. 6A and 6B. The average PSNR of these two shares with respect to their original halftones is 3.46 dB. The halftone visual SS method results in the two visually pleasing halftone shares shown in Figs. 6C and 6D. The PSNR of these two halftone shares is 6.02 dB. The new method gains 2.56 dB. Having the same relative difference in both methods indicates that the same contrast of the reconstructed secret images can be obtained by both methods. This is precisely the case, as shown in Figs. 6E and 6F. The superiority of the halftone visual SS method is that halftone shares with much better visual quality can be generated, reducing the suspicion that a secret has been encrypted.

FIGURE 5. Original complementary halftone images generated by error diffusion algorithm and pixel reversal, respectively.

FIGURE 6. Comparison between extended visual secret sharing (SS) and halftone visual SS (Q = 2): (A, B) the two shares of extended visual SS; (C, D) the two shares of halftone visual SS; (E) decoded image of extended visual SS; (F) decoded image of halftone visual SS.

URL: https://www.sciencedirect.com/science/article/pii/B9780121197926501261

vic: A Flexible Framework for Packet Video 

Steven McCanne, Van Jacobson, in Readings in Multimedia Computing and Networking, 2002

4.3 Rendering

Another performance-critical operation is converting video from the YUV pixel representation used by most compression schemes to a format suitable for the output device. Since this rendering operation is performed on uncompressed video after decompression, it can be a bottleneck and must be carefully implemented. Our profiles of vic match the experiences reported by Patel et al. [35], where image rendering sometimes accounts for 50% or more of the execution time.

Video output is rendered either through an output port on an external video device or to an X window. In the case of an X window, we might need to dither the output for a color-mapped display or simply convert YUV to RGB for a true-color display. Alternatively, HP’s X server supports a “YUV visual” designed specifically for video and we can write YUV data directly to the X server. Again, we use a C++ class hierarchy to support all of these modes of operation and special-case the handling of 4:2:2 and 4:1:1-decimated video and scaling operations to maximize performance.

For color-mapped displays, vic supports several modes of dithering that trade off quality for computational efficiency. The default mode is a simple error-diffusion dither carried out in the YUV domain. Like the approach described in [35], we use table lookups for computing the error terms, but we use an improved algorithm for distributing color cells in the YUV color space. The color cells are chosen uniformly throughout the feasible set of colors in the YUV cube, rather than uniformly across the entire cube using saturation to find the closest feasible color. This approach effectively doubles the number of useful colors in the dither. Additionally, we add extra cells in the region of the color space that corresponds to flesh tones for better rendition of faces.

While the error-diffusion dither produces a relatively high quality image, it is computationally expensive. Hence, when performance is critical, a cheap, ordered dither is available. Vic’s ordered dither is an optimized version of the ordered dither from nv.

An even cheaper approach is to use direct color quantization. Here, a color gamut is optimized to the statistics of the displayed video and each pixel is quantized to the nearest color in the gamut. While this approach can produce banding artifacts from quantization noise, the quality is reasonable when the color map is chosen appropriately. Vic computes this color map using a static optimization explicitly invoked by the user. When the user clicks a button, a histogram of colors computed across all active display windows is fed into Heckbert’s median cut algorithm [21]. The resulting color map is then downloaded into the rendering module. Since median cut is a compute-intensive operation that can take several seconds, it runs asynchronously in a separate process. We have found that this approach is qualitatively well matched to LCD color displays found on laptop PCs. The Heckbert color map optimization can also be used in tandem with the error diffusion algorithm. By concentrating color cells according to the input distribution, the dither color variance is reduced and quality increased.

Finally, we optimized the true-color rendering case. Here, the problem is simply to convert pixels from the YUV color space to RGB. Typically, this involves a linear transformation requiring four scalar multiplications and six conditionals. Inspired by the approach in [35], vic uses an algorithm that gives full 24-bit resolution using a single table lookup on each U-V chrominance pair and performs all the saturation checks in parallel. The trick is to leverage off the fact that the three coefficients of the Y term are all 1 in the linear transform. Thus we can precompute all conversions for the tuple (0, U, V) using a 64KB lookup table, T. Then, by linearity, the conversion is simply (R, G, B) = (Y, Y, Y) + T(U, V).
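The idea can be sketched as follows. The BT.601 conversion coefficients are assumed here, and vic's trick of performing all saturation checks in parallel is replaced by plain per-channel clamping for clarity:

```python
# Precompute the RGB conversion of (0, U, V) once; by linearity of the
# transform (the Y coefficients are all 1), (R, G, B) = (Y, Y, Y) + T(U, V).

def build_table():
    T = {}
    for u in range(256):
        for v in range(256):
            up, vp = u - 128, v - 128
            T[(u, v)] = (1.402 * vp,                       # R offset
                         -0.344136 * up - 0.714136 * vp,   # G offset
                         1.772 * up)                       # B offset
    return T

def yuv_to_rgb(y, u, v, T):
    clamp = lambda x: max(0, min(255, int(round(x))))
    r_off, g_off, b_off = T[(u, v)]
    return clamp(y + r_off), clamp(y + g_off), clamp(y + b_off)
```

In the real implementation the three offsets are packed into one table word so that a single lookup and add, plus a packed saturation check, replace the per-channel arithmetic shown here.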

A final rendering optimization is to dither only the regions of the image that change. Each decoder keeps track of the blocks that are updated in each frame and renders only those blocks. Pixels are rendered into a buffer shared between the X server and the application so that only a single data copy is needed to update the display with a new video frame. Moreover, this copy is optimized by limiting it to a bounding box computed across all the updated blocks of the new frame.

URL: https://www.sciencedirect.com/science/article/pii/B9781558606517501352

Volume 3

Joachim Heinzl, in Comprehensive Microsystems, 2008

3.11.4.2 Halftones and Color

Sharp contours and clear letters are obtained by the use of quarter steps. Round and slanted contours are distinctly rendered. The difference between the possible dot positions along the lines and between the lines cannot be perceived with the naked eye, even if flat arcs follow the raster line. As for the contours, photo quality has been obtained. Halftones are more difficult to obtain in photo quality.

There are a number of ways to render halftones in a raster. You can repeatedly trigger small droplets in rapid sequence, vary the size of the drops, use inks of different degrees of saturation, and use halftoning with a fixed drop size using clusters or raster cells (Wild 1990). You can alter the distances between the droplets using error diffusion. Some of these strategies can be interpreted as amplitude modulation (AM) and others as frequency modulation (FM).

It would be ideal to produce droplets in various sizes in order to get a linear gray scale of 16 or 32 steps. Unfortunately, the variation in drop size is difficult to bring about. No more than three different drop sizes have been achieved so far. It is much easier to put together different dot sizes by superimposing varying numbers of small drops. Up to eight microdrops are in use.

If you use only one drop size, you can produce halftones by clustering, specifying which dots of the cluster are to be printed at every step of the gray scale. Typical strategies are Digital Halftone and Ordered Dither. Digital Halftone uses neighboring dots in the cluster, and this leads to a greater granularity. Ordered Dither separates the dots within the cluster as far as possible. The transitions in the highlights are especially critical with Ordered Dither.

Error Diffusion does without predetermined clusters. The desired levels of gray are added up along the line. When a threshold is crossed, the next dot is printed and the addition restarts. If in this strategy you only calculate along one raster line, wavy lines may cross the raster lines. Therefore you have to calculate several raster lines at a time and to distribute the dots in such a way that they are staggered in neighboring raster lines.
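The along-the-line strategy described above amounts to an accumulate-and-fire loop; a one-line-at-a-time sketch (the threshold and scaling conventions here are assumptions, not from the text):

```python
def dot_line(gray_levels, threshold=1.0):
    """gray_levels: desired ink coverage per cell in [0, 1]; returns 0/1 dots."""
    dots, acc = [], 0.0
    for g in gray_levels:
        acc += g                  # add up the desired gray along the line
        if acc >= threshold:
            dots.append(1)        # threshold crossed: print the next dot
            acc -= threshold      # and restart the addition
        else:
            dots.append(0)
    return dots
```

Running this independently on each raster line produces the wavy-line artifact mentioned above; the multi-line variant additionally staggers the accumulated error across neighboring lines.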

Normally white paper is used for ink-jet printing. The application of colored inks leads to subtractive color mixing. The colors cyan, magenta, and yellow (CMY) are used. As far as possible, dots of different colors should aggregate on the paper without overlap. With dark colors this is only possible to a certain extent. The three basic colors can render a large part of the color gamut (Petschik 1994). However, superimposing the three colors does not lead to a satisfactory black, but to a brownish gray. Therefore, black has to be used as a fourth ink (cyan, magenta, yellow, black (CMYK)). Because black is used most frequently in the printing of texts, many printers provide more nozzles for black than for the other three colors.

There is a major difference between color printing with ink-jet on the one hand and offset printing and laser printing on the other. This concerns registering, which is how the prints of the different colors fit together. Ink-jet printheads carry all the colors and print them in one run of the paper. Offset and laser print the colors in successive runs; thus registering is much more difficult. To avoid moiré, the color screens are slightly rotated; this leads to circular patterns and rosettes for laser printing, especially in light areas.

The difference can be seen clearly in Figure 33. The sample on the left side is printed with the HP Laserprinter Color LaserJet 4600, the sample in the middle is printed with the ink-jet printer HP Officejet 6210. Both samples used the same data. The magnification shows the effect of the different raster strategies. Error diffusion was used in the ink-jet sample and avoids the artifacts that can be seen in the laser print. A further improvement can be achieved by two additional colors. With light cyan (Lc), light magenta (Lm), and CMYK, the granularity and the roughness can be reduced. The sample on the right-hand side is printed with the same ink-jet printer, but using six colors (Allen 1997, 1998). Increasing the number of nozzles by 25% brings on smooth color transitions and low image grain.

Figure 33. Enlarged samples printed by laser and four or six color ink-jet. (Source: Lehrstuhl für Mikrotechnik und Medizingerätetechnik, Institut für Mechatronik, Technische Universität Munich.)

Even on plain paper acceptable color prints can be achieved. For photo quality, special coated papers are necessary. They are available on the market in formats from 10 cm × 15 cm up to A4 and for large format as rolls with a breadth up to 60″ or 1524 mm.

URL: https://www.sciencedirect.com/science/article/pii/B9780444521903000616

Error diffusion is a type of halftoning in which the quantization residual is distributed to neighboring pixels that have not yet been processed. Its main use is to convert a multi-level image into a binary image, though it has other applications.

Unlike many other halftoning methods, error diffusion is classified as an area operation, because what the algorithm does at one location influences what happens at other locations. This means buffering is required, which complicates parallel processing. Point operations, such as ordered dither, do not have these complications.

Error diffusion tends to enhance edges in an image. This can make text in images more readable than with other halftoning techniques.

An error-diffused image.


Early History

Richard Howland Ranger received United States patent 1790723 for his invention "Facsimile system." The patent, issued in 1931, describes a system for transmitting images over telephone or telegraph lines, or by radio. Ranger's invention permitted continuous-tone photographs to be converted first into black and white, then transmitted to remote locations, where a pen moved over a piece of paper. To render black, the pen was lowered to the paper; to produce white, the pen was raised. Shades of gray were rendered by intermittently raising and lowering the pen, depending on the luminance of the gray desired.

Ranger's invention used capacitors to store charges and vacuum-tube comparators to determine when the present luminance, plus any accumulated error, was above a threshold (causing the pen to be raised) or below it (causing the pen to be lowered). In this sense, it was an analog version of error diffusion.

Digital Era

Floyd and Steinberg described a system for performing error diffusion on digital images based on a simple kernel:

\frac{1}{16}\left[\begin{array}{ccc} - & \# & 7 \\ 3 & 5 & 1 \end{array}\right]

where "−" denotes a pixel in the current row that has already been processed (hence diffusing error to it would be pointless), and "#" denotes the pixel currently being processed.

Nearly concurrently, J. F. Jarvis, C. N. Judice, and W. H. Ninke of Bell Labs disclosed a similar method, which they termed "minimized average error," using a larger kernel:

\frac{1}{48}\left[\begin{array}{ccccc} - & - & \# & 7 & 5 \\ 3 & 5 & 7 & 5 & 3 \\ 1 & 3 & 5 & 3 & 1 \end{array}\right]

Algorithm Description

Error diffusion takes a monochrome or color image and reduces the number of quantization levels. A popular application of error diffusion involves reducing the number of quantization states to two per channel. This makes the image suitable for printing on binary printers such as black-and-white laser printers.

In the discussion that follows, it is assumed that the number of quantization states in the error-diffused image is two per channel, unless otherwise stated.

One-Dimensional Error Diffusion

The simplest form of the algorithm scans the image one row at a time and one pixel at a time. The current pixel is compared to a half-gray value. If it is above the value, a white pixel is generated in the resulting image. If the pixel is below the half-gray value, a black pixel is generated. Different methods can be used if the target palette is not monochrome, for example thresholding with two values if the target palette is black, gray, and white. The generated pixel is either full bright or full black, so there is an error in the image. The error is then added to the next pixel in the image and the process repeats.
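A sketch of this one-dimensional scheme in Python (binary output assumed):

```python
def diffuse_1d(row, half_gray=0.5):
    """Threshold each pixel against half-gray; carry the error forward."""
    out, err = [], 0.0
    for pixel in row:
        value = pixel + err        # current pixel plus carried error
        if value >= half_gray:
            out.append(1)          # white
            err = value - 1.0
        else:
            out.append(0)          # black
            err = value
    return out
```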

Two-Dimensional Error Diffusion

One-dimensional error diffusion tends to produce severe image artifacts that show up as distinct vertical lines. Two-dimensional error diffusion reduces these visual artifacts. The simplest algorithm is exactly like one-dimensional error diffusion, except that half the error is added to the next pixel, and half of the error is added to the pixel on the next line below.

The kernel is:

\frac{1}{2}\left[\begin{array}{cc} \# & 1 \\ 1 & 0 \end{array}\right]

where "#" denotes the pixel currently being processed.

Further refinement can be achieved by dispersing the error farther from the current pixel, as in the matrix given above in the "Digital Era" section. The sample image at the start of this article is an example of two-dimensional error diffusion.
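The simplest two-dimensional variant, half of the error to the next pixel and half to the pixel below, can be sketched as:

```python
def diffuse_2d(img):
    """Binary error diffusion with the 1/2, 1/2 kernel; img holds floats in [0, 1]."""
    h, w = len(img), len(img[0])
    work = [row[:] for row in img]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if work[y][x] >= 0.5 else 0
            err = work[y][x] - out[y][x]
            if x + 1 < w:
                work[y][x + 1] += err / 2   # next pixel in the row
            if y + 1 < h:
                work[y + 1][x] += err / 2   # pixel on the line below
    return out
```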

Color Error Diffusion

The same algorithms may be applied to each of the red, green, and blue (or cyan, magenta, yellow, and black) channels of a color image to achieve a color effect on printers, such as color laser printers, that can print only single-color values.

However, better visual results may be obtained by first converting the color channels into a perceptive color model that separates lightness, hue, and saturation channels, so that a higher weight for error diffusion is given to the lightness channel than to the hue channel. The motivation for this conversion is that human vision perceives small differences of lightness in small local areas better than similar differences of hue in the same area, and better still than similar differences of saturation in the same area.

For example, if there is a small error in the green channel that cannot be represented, and another small error in the red channel at the same place, the properly weighted sum of these two errors may be used to adjust a perceptible lightness error, which can be represented in a balanced way among all three color channels (according to their respective statistical contribution to the lightness), even if this produces a larger error for the hue when converting the green channel. This error will be diffused to the neighboring pixels.

In addition, gamma correction may be needed on each of these perceptive channels if they do not scale linearly with human vision, so that error diffusion can be accumulated linearly on these gamma-corrected linear channels, before the final color channels of the rounded pixel colors are computed using a reverse conversion to the native, non-gamma-corrected image format, from which the new residual error is computed and converted again to be distributed to the next pixels.

It should also be noted that, because of the limited precision of the numeric conversion between color models (notably if this conversion is not linear or uses non-integer weights), additional rounding errors may occur that should be taken into account in the residual error.

Распространение ошибок с несколькими уровнями серого

Распространение ошибок также может использоваться для создания выходных изображений с более чем двумя уровнями (на канал, в случае цветных изображений). Это применяется в дисплеях и принтерах, которые могут создавать 4, 8 или 16 уровней в каждой плоскости изображения, например, в электростатических принтерах и дисплеях в компактных мобильных телефонах. Вместо того, чтобы использовать один порог для получения двоичного вывода, определяется ближайший разрешенный уровень, и ошибка, если таковая имеется, рассеивается, как описано выше.

Особенности принтера

Большинство принтеров слегка перекрывают черные точки, поэтому нет точного однозначного отношения к частоте точек (в точках на единицу площади) и яркости. К исходному изображению можно применить линеаризацию шкалы тонов, чтобы напечатанное изображение выглядело правильно.

Улучшение краев по сравнению с сохранением яркости

Когда изображение имеет переход от светлого к темному, алгоритм рассеивания ошибок стремится сделать следующий сгенерированный пиксель черным. Переходы от темного к светлому обычно приводят к тому, что следующий сгенерированный пиксель становится белым. Это вызывает эффект улучшения контуров за счет точности воспроизведения уровня серого. Это приводит к диффузии ошибок, имеющей более высокое видимое разрешение, чем другие методы полутонов. Это особенно полезно для изображений с текстом, таких как типичное факсимильное сообщение.

Этот эффект довольно хорошо показан на картинке в верхней части этой статьи. Трава и текст на вывеске хорошо сохранены, а легкость неба — мало деталей. Изображение кластерных точек полутонов того же разрешения будет намного менее резким.

См. Также

  • дизеринг Флойда – Стейнберга
  • Полутона

Ссылки

  1. ^Ричард Хоуленд Рейнджер, Факсимильная система. Патент США 1790723, выдан 3 февраля 1931 г.
  2. ^Дж. Ф. Джарвис, К. Н. Джудис и У. Н. Нинке, Обзор методов отображения изображений с непрерывным тоном на двухуровневых дисплеях. Компьютерная графика и обработка изображений, 5 : 1: 13–40 (1976).

Внешние ссылки

  • Распространение ошибок в Matlab

Error diffusion

Error diffusion is a type of halftoning in which the quantization residual is distributed to neighboring pixels that have not yet been processed. Its main use is to convert a multi-level image into a binary image, though it has other applications.

Unlike many other halftoning methods, error diffusion is classified as an area operation, because what the algorithm does at one location influences what happens at other locations. This means that buffering is required, and it complicates parallel processing. Point operations, such as ordered dither, do not have these complications.

Error diffusion tends to enhance edges in an image. This can make text in images more readable than in other halftoning techniques.

An error-diffused image.

Early history

Richard Howland Ranger received United States patent 1790723 for his invention, "Facsimile system." The patent, which issued in 1931, describes a system for transmitting images over telephone or telegraph lines, or by radio. Ranger's invention permitted continuous-tone photographs to be converted first into black and white, then transmitted to remote locations, which had a pen moving over a piece of paper. To render black, the pen was lowered to the paper; to produce white, the pen was raised. Shades of gray were rendered by intermittently raising and lowering the pen, depending upon the luminance of the gray desired.

Ranger's invention used capacitors to store charges, and vacuum-tube comparators to determine when the present luminance, plus any accumulated error, was above a threshold (causing the pen to be raised) or below it (causing the pen to be lowered). In this sense, it was an analog version of error diffusion.

Digital era

Floyd and Steinberg described a system for performing error diffusion on digital images based on a simple kernel:

    1/16 *  [ -  #  7 ]
            [ 3  5  1 ]

where "-" denotes a pixel in the current row that has already been processed (hence diffusing error into it would be pointless), and "#" denotes the pixel currently being processed.

Nearly concurrently, J. F. Jarvis, C. N. Judice, and W. H. Ninke of Bell Labs disclosed a similar method, which they termed "minimized average error," using a larger kernel:

    1/48 *  [ -  -  #  7  5 ]
            [ 3  5  7  5  3 ]
            [ 1  3  5  3  1 ]

Algorithm description

Error diffusion takes a monochrome or color image and reduces the number of quantization levels. A popular application of error diffusion involves reducing the number of quantization states to just two per channel. This makes the image suitable for printing on binary printers, such as black-and-white laser printers.

In the discussion that follows, it is assumed that the number of quantization states in the error-diffused image is two per channel, unless otherwise stated.

One-dimensional error diffusion

The simplest form of the algorithm scans the image one row at a time and one pixel at a time. The current pixel is compared to a half-gray value; if it is above that value, a white pixel is generated in the resulting image, and if it is below, a black pixel is generated. Different methods may be used if the target palette is not monochrome, for example thresholding with two values if the target palette is black, gray, and white. The generated pixel is either full bright or full black, so there is an error in the image; that error is added to the next pixel before it is quantized, and the process repeats.
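
The procedure above can be sketched in a few lines of Python (a minimal illustration; the function name, the [0, 1] value range, and the 0.5 threshold are assumptions, not anything the article specifies):

```python
def error_diffuse_1d(row, threshold=0.5):
    """One-dimensional error diffusion of a row of gray values in [0, 1]."""
    out = []
    error = 0.0
    for value in row:
        corrected = value + error        # add the error carried from the left
        pixel = 1 if corrected >= threshold else 0
        error = corrected - pixel        # residual passed on to the next pixel
        out.append(pixel)
    return out
```

Because each residual is carried into the very next pixel, a run of constant mid-gray produces an alternating pattern whose average matches the input, which a plain threshold would destroy.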

Two-dimensional error diffusion

One-dimensional error diffusion tends to produce severe image artifacts that show up as vertical lines. Two-dimensional error diffusion reduces such visual artifacts. The simplest form is exactly like one-dimensional error diffusion, except that half of the error is added to the next pixel and half to the pixel immediately below, giving the kernel:

    [  #   1/2 ]
    [ 1/2   0  ]

where "#" denotes the pixel currently being processed.

Further refinement can be achieved by dispersing the error farther away from the current pixel, as in the matrices given above in "Digital era". The sample image at the start of this article is an example of two-dimensional error diffusion.
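
As an illustration of dispersing the error over a two-dimensional neighborhood, here is a minimal Python sketch using the Floyd–Steinberg weights (7/16 right, 3/16 down-left, 5/16 down, 1/16 down-right); the function name and list-of-lists image representation are my own choices:

```python
def floyd_steinberg(img):
    """Floyd–Steinberg error diffusion on a 2-D list of gray values in [0, 1]."""
    h, w = len(img), len(img[0])
    buf = [row[:] for row in img]       # working copy accumulating diffused error
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 1 if old >= 0.5 else 0
            out[y][x] = new
            err = old - new
            # spread the residual with the 7/16, 3/16, 5/16, 1/16 weights,
            # dropping shares that would fall outside the image
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return out
```

Note that error shares falling outside the image are simply dropped here; implementations differ in how they treat the borders.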

Color error diffusion

The same algorithms may be applied to each of the red, green, and blue (or cyan, magenta, yellow, and black) channels of a color image to achieve a color effect on printers, such as color laser printers, that can print only single color values.
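
A minimal sketch of this per-channel scheme, assuming one row of RGB tuples with values in [0, 1] (all names here are illustrative, not from the article):

```python
def diffuse_channel(values, threshold=0.5):
    # one-dimensional error diffusion of a single color channel
    out, err = [], 0.0
    for v in values:
        v += err
        q = 1 if v >= threshold else 0
        out.append(q)
        err = v - q
    return out

def diffuse_rgb_row(pixels):
    """Apply error diffusion independently to the R, G, and B channels.

    `pixels` is a list of (r, g, b) tuples; returns binary (r, g, b) tuples.
    """
    channels = zip(*pixels)                        # split into R, G, B sequences
    halftoned = [diffuse_channel(list(c)) for c in channels]
    return list(zip(*halftoned))                   # re-interleave per pixel
```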

However, better visual results may be obtained by first converting the color channels into a perceptual color model that separates the lightness, hue, and saturation channels, so that a higher weight for error diffusion is given to the lightness channel than to the hue channel. The motivation for this conversion is that human vision better perceives small differences of lightness in small local areas than similar differences of hue in the same area, and still less similar differences of saturation in the same area.

For example, if there is a small error in the green channel that cannot be represented, and another small error in the red channel at the same place, a properly weighted sum of these two errors may be used to adjust a perceptible lightness error that can be represented in a balanced way between all three color channels (according to their respective statistical contributions to lightness), even if this produces a larger error for the hue when converting the green channel. This error will be diffused to the neighboring pixels.

In addition, gamma correction may be needed on each of these perceptual channels if they do not scale linearly with human vision, so that error diffusion can accumulate linearly in these gamma-corrected channels, before the final color channels of the rounded pixel colors are computed using an inverse conversion to the native, non-gamma-corrected image format, from which the new residual error will be computed and converted again to be distributed to the next pixels.

It should also be noted that, because of limitations of precision during the numeric conversion between color models (notably if this conversion is not linear or uses non-integer weights), additional rounding errors may occur that should be taken into account in the residual error.

Error diffusion with several gray levels

Error diffusion may also be used to produce output images with more than two levels (per channel, in the case of color images). This has application in displays and printers that can produce 4, 8, or 16 levels in each image plane, such as electrostatic printers and displays in compact mobile telephones. Rather than using a single threshold to produce binary output, the closest permitted level is determined, and the error, if any, is diffused as described above.
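
A minimal sketch of the multilevel case on one row, assuming equally spaced output levels in [0, 1] (the function name and level spacing are assumptions for illustration):

```python
def error_diffuse_multilevel(row, levels=4):
    """1-D error diffusion onto `levels` equally spaced output values in [0, 1].

    Instead of comparing against a single threshold, each pixel is snapped
    to the nearest permitted level, and only the residual is carried forward.
    """
    step = 1.0 / (levels - 1)
    out, err = [], 0.0
    for v in row:
        v += err
        q = round(v / step) * step        # nearest permitted level
        q = min(max(q, 0.0), 1.0)         # clamp to the valid range
        out.append(q)
        err = v - q
    return out
```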

Printer considerations

Most printers overlap the black dots slightly, so there is not an exact one-to-one relationship between dot frequency (in dots per unit area) and lightness. Tone-scale linearization may be applied to the source image to get the printed image to look correct.
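
One way to sketch such a linearization in Python, assuming a hypothetical measured tone curve (the sample points below are invented for illustration; real curves come from measuring printed test patches):

```python
# Hypothetical measured printer response: requested dot coverage ->
# printed density. Dot overlap makes the midtones print darker than asked.
MEASURED = [(0.0, 0.0), (0.25, 0.35), (0.5, 0.62), (0.75, 0.85), (1.0, 1.0)]

def linearize(requested):
    """Pre-compensate a requested density by inverting the measured curve
    with piecewise-linear interpolation, so the printed result is linear."""
    for (c0, d0), (c1, d1) in zip(MEASURED, MEASURED[1:]):
        if d0 <= requested <= d1:
            t = (requested - d0) / (d1 - d0)   # position within this segment
            return c0 + t * (c1 - c0)          # coverage that prints as requested
    return requested
```

Applying this mapping to the source image before halftoning compensates for the dot gain, so a requested 35% density is actually sent as 25% coverage on this hypothetical device.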

Edge enhancement versus lightness preservation

When an image has a transition from light to dark, the error-diffusion algorithm tends to make the next generated pixel black. Dark-to-light transitions tend to result in the next generated pixel being white. This causes an edge-enhancement effect at the expense of gray-level reproduction accuracy. As a result, error diffusion has a higher apparent resolution than other halftone methods. This is especially beneficial for images with text in them, such as a typical facsimile.

This effect shows fairly well in the picture at the top of this article. The grass detail and the text on the sign are well preserved, and the lightness in the sky, containing little detail, is maintained. A cluster-dot halftone image of the same resolution would be much less sharp.

See also

  • Floyd–Steinberg dithering
  • Halftone

References

  1. Richard Howland Ranger, Facsimile system. United States Patent 1790723, issued 3 February 1931.
  2. J. F. Jarvis, C. N. Judice, and W. H. Ninke, A survey of techniques for the display of continuous tone pictures on bilevel displays. Computer Graphics and Image Processing, 5:1:13–40 (1976).

External links

  • Error diffusion in Matlab
