A. Fatzilah Misman, Peter Blanchfield


One way to remedy the gap that evidently exists between images on the web and web users who are blind is to redefine the connection between the image and the web's most abundant element, namely text. Studies exploiting this connection have largely been carried out in fields such as HCI, the semantic web, and information retrieval, or through hybrid approaches combining them. However, many of these studies view the problem from a third-party perspective. This position paper argues that the problem can also be approached from the fundamental reasons for an image being on a web page, without neglecting the connection that develops from the web user's perspective. Effective and appropriate image tagging may take this view into account.



Paper Citation

in Harvard Style

Fatzilah Misman A. and Blanchfield P. (2011). ADAPTING WEB IMAGES FOR BLIND PEOPLE. In Proceedings of the 7th International Conference on Web Information Systems and Technologies - Volume 1: WEBIST, ISBN 978-989-8425-51-5, pages 430-437. DOI: 10.5220/0003402004300437

in Bibtex Style

@conference{webist11,
author={A. Fatzilah Misman and Peter Blanchfield},
title={ADAPTING WEB IMAGES FOR BLIND PEOPLE},
booktitle={Proceedings of the 7th International Conference on Web Information Systems and Technologies - Volume 1: WEBIST},
year={2011},
pages={430-437},
doi={10.5220/0003402004300437},
isbn={978-989-8425-51-5},
}

in EndNote Style

TY - CONF
JO - Proceedings of the 7th International Conference on Web Information Systems and Technologies - Volume 1: WEBIST
TI - ADAPTING WEB IMAGES FOR BLIND PEOPLE
SN - 978-989-8425-51-5
AU - Fatzilah Misman A.
AU - Blanchfield P.
PY - 2011
SP - 430
EP - 437
DO - 10.5220/0003402004300437