the palette from frame to frame, lighting changes and apparent motion can be induced. In this paper, we demonstrated an optimization-based method for creating palette-cycling animations from arbitrary input videos. Our technique alternates between finding a set of per-frame palettes given fixed per-pixel palette indices, and finding the pixel indices given a fixed set of palettes. While many handcrafted palette animations have been created historically, this paper is the first to automate palette cycling.
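To make the alternation concrete, the following Python sketch gives one plausible instantiation, assuming a single index map shared across all frames and a per-pixel squared-error objective; the function names, the palette size K, and the random initialization are illustrative assumptions rather than details from the paper.

```python
# A minimal sketch of the alternating scheme, assuming a single index map
# shared across all frames and a squared-error objective. Function names,
# K, and the random initialization are illustrative, not from the paper.
import numpy as np

def solve_palettes(frames, indices, K):
    """Given fixed per-pixel indices, the least-squares palette entry for
    each frame is the mean color of the pixels assigned to that index."""
    T = frames.shape[0]
    palettes = np.zeros((T, K, 3))
    for k in range(K):
        mask = indices == k                  # (H, W): pixels using entry k
        if mask.any():
            palettes[:, k] = frames[:, mask].mean(axis=1)
    return palettes

def solve_indices(frames, palettes):
    """Given fixed per-frame palettes, assign each pixel the index that
    minimizes its squared error summed over all frames."""
    # diff has shape (T, K, H, W, 3); fine for a sketch, though a real
    # implementation would chunk this to bound memory.
    diff = frames[:, None] - palettes[:, :, None, None, :]
    cost = (diff ** 2).sum(axis=(0, 4))      # (K, H, W)
    return cost.argmin(axis=0)               # (H, W)

def optimize(frames, K=16, iters=10, seed=0):
    """Coordinate descent; frames is a (T, H, W, 3) float array."""
    rng = np.random.default_rng(seed)
    _, H, W, _ = frames.shape
    indices = rng.integers(0, K, size=(H, W))
    for _ in range(iters):
        palettes = solve_palettes(frames, indices, K)
        indices = solve_indices(frames, palettes)
    return palettes, indices
```

Since each half-step solves its subproblem exactly, the total squared error is non-increasing and the alternation converges to a local minimum.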
The method produces good results for traditional use cases such as scenes with minor natural motion or time-lapse videos. Intrinsically difficult scenarios, including large-scale motion and moving backgrounds, are handled less successfully and offer opportunities for further investigation.
The present method takes only an input video, without annotations. Better results might be achieved by allowing a user to mark regions of interest and prioritizing fidelity in those areas while discounting error outside them. Of course, the determination of regions of interest could also be automated.
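One remark on where such weights would enter an optimization like the one sketched above: a per-pixel weight cancels out of the per-pixel index assignment, so its natural home is the palette-solving step, where weighted least squares reduces to a weighted mean. The sketch below is a hypothetical extension of solve_palettes; the weight mask (in [0, 1], higher inside regions of interest) is our assumption, not part of the paper's method.

```python
# Hypothetical extension of solve_palettes with a per-pixel weight mask
# (higher weight = more important region); not the paper's method.
import numpy as np

def solve_palettes_weighted(frames, indices, weights, K):
    """Weighted least squares: each frame's palette entry becomes the
    weight-averaged color of its assigned pixels, so colors inside the
    region of interest are matched more faithfully."""
    T = frames.shape[0]
    palettes = np.zeros((T, K, 3))
    for k in range(K):
        mask = indices == k          # (H, W): pixels using entry k
        w = weights[mask]            # (n,): weights of assigned pixels
        if w.sum() > 0:
            px = frames[:, mask]     # (T, n, 3): colors of assigned pixels
            palettes[:, k] = (px * w[None, :, None]).sum(axis=1) / w.sum()
    return palettes
```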
It might also be worthwhile to preprocess the video to reduce the number of colors, rather than relying strictly on dithering; for example, L0 quantization could be employed to reduce gradients.
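As a hedged illustration of such preprocessing, the sketch below reduces the color count of a single frame with k-means clustering via SciPy's kmeans2; k-means is our stand-in here, not the L0 approach suggested above.

```python
# Color-count reduction for one frame using k-means; a stand-in for the
# L0-style quantization suggested in the text, not an implementation of it.
import numpy as np
from scipy.cluster.vq import kmeans2

def reduce_colors(frame, n_colors=64, seed=0):
    """Quantize an (H, W, 3) float frame to n_colors representative colors."""
    H, W, _ = frame.shape
    pixels = frame.reshape(-1, 3).astype(np.float64)
    centers, labels = kmeans2(pixels, n_colors, minit='++', seed=seed)
    return centers[labels].reshape(H, W, 3)
```

Applied per frame, this shrinks the color set the alternating optimization must approximate, at the cost of some banding that dithering would otherwise hide.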
We concentrated on photorealistic videos, while historical palette cycling used pixel art. A possible direction would be to jointly construct a pixel art stylization and a palette-cycling animation from an input video. Further, the animation could itself be stylized, as in handcrafted palette animations: one might imagine artist-drawn tracks for particle effects or lighting which could build on an input scene or video. Overall, we hope that this paper can spark renewed interest in the fascinating medium of palette cycling.
ACKNOWLEDGEMENTS
Thanks to the GIGL group for helpful suggestions as this project developed. The project received financial support from Carleton University and from NSERC. We thank Mark Ferrari for his kind words of encouragement as we were beginning this project and for his permission to reproduce frames from the jungle animation, shown in Figure 2.