An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends

  • Survey
  • Published in the Journal on Multimodal User Interfaces

Abstract

Assistive technology for visually impaired and blind people is a research field gaining increasing prominence owing to an explosion of new interest in it from disparate disciplines. The field has a highly relevant social impact on our ever-increasing aging and blind populations. While many excellent state-of-the-art accounts have been written to date, all of them are subjective in nature. We performed an objective statistical survey across the various sub-disciplines in the field and applied information-analysis and network-theory techniques to answer several key questions relevant to the field. To analyze the field we compiled an extensive database of scientific research publications covering the last two decades. We inferred interesting patterns and statistics concerning the main research areas and underlying themes, identified leading journals and conferences, captured growth patterns of the research field, identified active research communities, and present our interpretation of trends in the field for the near future. Our results reveal sustained growth in this field: from fewer than 50 publications per year in the mid 1990s to close to 400 scientific publications per year in 2014. Assistive technology for persons with visual impairments is expected to grow at a swift pace and to impact the lives of individuals and the elderly in ways not previously possible.

Notes

  1. http://www.seeingwithsound.com/.

  2. http://www.monash.edu.au/bioniceye/.

  3. http://www.secondsight.com.

  4. http://www.project-ray.com.

  5. http://www.georgiephone.com.

  6. http://projects.csail.mit.edu/bocelli/.

  7. http://www.orcam.com.

  8. http://www.nano-retina.com.

  9. http://www.secondsight.com/.

  10. http://www.xowi.me/.

  11. http://www.nprlabs.org/.

  12. http://ieeexplore.ieee.org/.

  13. http://apps.webofknowledge.com/.

  14. http://dl.acm.org/.

  15. http://www.sciencedirect.com/.

  16. http://www.wordle.net.

  17. http://gephi.github.io/.

References

  1. Abboud S, Hanassy S, Levy-Tzedek S, Maidenbaum S, Amedi S (2014) EyeMusic: introducing a visual colorful experience for the blind using auditory sensory substitution. Restor Neurol Neurosci 32(2):247–257

  2. Ahuja AK, Dorn JD, Caspi A, McMahon MJ, Dagnelie G, daCruz L, Stanga P, Humayun MS, Greenberg RJ (2011) Blind subjects implanted with the Argus II retinal prosthesis are able to improve performance in a spatial-motor task. Br J Ophthalmol 95(4):539–543

  3. Arnoldussen A, Fletcher DC (2012) Visual perception for the blind: the brainport vision device. Retin Physician 9(January 2012):32–34

  4. Auvray M, Hanneton S, O’ Regan JK (2007) Learning to perceive with a visuo-auditory substitution system: localisation and object recognition with the vOICe. Perception 36(3):416–430

  5. Ayton LN, Blamey PJ, Guymer RH, Luu CD, Nayagam DAX, Sinclair NC, Shivdasani MN, Yeoh J, McCombe MF, Briggs RJ, Opie NL, Villalobos J, Dimitrov PN, Varsamidis M, Petoe MA, McCarthy CD, Walker JG, Barnes N, Burkitt AN, Williams CE, Shepherd RK, Allen PJ, for the Bionic Vision Australia Research Consortium (2014) First-in-human trial of a novel suprachoroidal retinal prosthesis. PLoS One 9(12):e115239

  6. Bach-y Rita P (1967) Sensory plasticity. Acta Neurol Scand 43(4):417–426

  7. Bach-y Rita PW, Kercel S (2003) Sensory substitution and the human-machine interface. Trends Cogn Sci 7(12):541–546

  8. Bach-y Rita P, Collins CC, Saunders FA, White B, Scadden L (1969) Vision substitution by tactile image projection. Nature 221(5184):963–964

  9. Bastian M, Heymann S, Jacomy M (2009) Gephi: an open source software for exploring and manipulating networks. In: Third international AAAI conference on weblogs and social media

  10. Belinky M, Jeremijenko N (2001) Sound through bone conduction in public interfaces. In: CHI ’01 extended abstracts on human factors in computing systems, CHI EA ’01. ACM, New York, pp 181–182

  11. Bhowmick A, Prakash S, Bhagat R, Prasad V, Hazarika SM (2014) IntelliNavi: navigation for blind based on kinect and machine learning. In: Murty MN, He X, Chillarige RR, Weng P (eds) Multi-disciplinary trends in artificial intelligence, Lecture notes in computer science, vol 8875. Springer International Publishing, UK, pp 172–183

  12. Bischof W, Krajnc E, Dornhofer M, Ulm M (2012) NAVCOM WLAN communication between public transport vehicles and smart phones to support visually impaired and blind people. In: Proceedings of the 13th international conference on computers helping people with special needs—Volume Part II, ICCHP’12. Springer, Berlin, pp 91–98

  13. Borenstein J (1990) The NavBelt—a computerized multi-sensor travel aid for active guidance of the blind. In: CSUN’s fifth annual conference on technology and persons with disabilities, Los Angeles, pp 107–116

  14. Bornmann L, Mutz R (2014) Growth rates of modern science: a bibliometric analysis based on the number of publications and cited references. J Assoc Inf Sci Technol 66(11):2215–2222

  15. Borodin Y, Bigham JP, Dausch G, Ramakrishnan IV (2010) More than meets the eye: a survey of screen-reader browsing strategies. In: Proceedings of the 2010 international cross disciplinary conference on web accessibility (W4A), W4A ’10. ACM, New York, pp 13:1–13:10

  16. Brabyn JA (1982) New developments in mobility and orientation aids for the blind. IEEE Trans Biomed Eng BME 29(4):285–289

  17. Brassai ST, Bako L, Losonczi L (2011) Assistive technologies for visually impaired people. Acta Univ Sapientiae Electr Mech Eng 3:39–50

  18. Brindley GS, Lewin WS (1968) The sensations produced by electrical stimulation of the visual cortex. J Physiol 196(2):479–493

  19. Calder D (2009) Assistive technology interfaces for the blind. In: 3rd IEEE international conference on digital ecosystems and technologies, 2009. DEST ’09. IEEE, Istanbul

  20. Campbell M, Bennett C, Bonnar C, Borning A (2014) Where’s my bus stop?: supporting independence of blind transit riders with StopInfo. In: Proceedings of the 16th international ACM SIGACCESS conference on computers and accessibility, ASSETS ’14. ACM, New York, pp 11–18

  21. Chen X, Zitnick CL (2015) Mind’s eye: a recurrent visual representation for image caption generation. In: 2015 IEEE conference on computer vision and pattern recognition (CVPR), pp 2422–2431

  22. Coughlan JM, Shen H (2012) The crosswatch traffic intersection analyzer: a roadmap for the future. In: Miesenberger K, Karshmer A, Penaz P, Zagler W (eds) Computers helping people with special needs, Lecture Notes in Computer Science, vol 7383. Springer, Berlin, pp 25–28

  23. Coughlan JM, Shen H (2013) Crosswatch: a system for providing guidance to visually impaired travelers at traffic intersections. J Assist Technol 7(2)

  24. Croce D, Gallo P, Garlisi D, Giarre L, Mangione S, Tinnirello I (2014) ARIANNA: a smartphone-based navigation system with human in the loop. In: 2014 22nd mediterranean conference of control and automation (MED), pp 8–13

  25. Csapó Á, Wersényi G, Nagy H, Stockman T (2015) A survey of assistive technologies and applications for blind users on mobile platforms: a review and foundation for research. J Multimodal User Interfaces 9:275–286

  26. Dagnelie G (ed) (2011) Visual prosthetics. Springer US, Boston

  27. Dagnelie G (2012) Retinal implants: emergence of a multidisciplinary field. Curr Opin Neurol 25(1):67–75

  28. Dakopoulos D, Bourbakis NG (2010) Wearable obstacle avoidance electronic travel aids for blind: a survey. IEEE Trans Syst Man Cybern Part C Appl Rev 40(1):25–35

  29. Dandona L, Dandona R (2006) Revision of visual impairment definitions in the International Statistical Classification of Diseases. BMC Med 4:7

  30. Dobelle WH (2000) Artificial vision for the blind by connecting a television camera to the visual cortex. ASAIO J (American Society for Artificial Internal Organs: 1992) 46(1):3–9

  31. Dobelle WH, Mladejovsky MG, Girvin JP (1974) Artificial vision for the blind: electrical stimulation of visual cortex offers hope for a functional prosthesis. Science (New York, NY) 183(4123):440–444

  32. Dorn JD, Ahuja AK, Caspi A, daCruz L, Dagnelie G, Sahel JA, Greenberg RJ, McMahon MJ, Argus II Study Group (2013) The detection of motion by blind subjects with the epiretinal 60-electrode (Argus II) retinal prosthesis. JAMA Ophthalmol 131(2):183–189

  33. Filipe V, Fernandes F, Fernandes H, Sousa A, Paredes H, Barroso J (2012) Blind navigation support system based on microsoft Kinect. Proc Comput Sci 14:94–101

  34. Fletcher JF (1980) Spatial representation in blind children. 1: development compared to sighted children. J Vis Impair Blind 74(10):381–385

  35. Fortunato S, Barthélemy M (2007) Resolution limit in community detection. Proc Natl Acad Sci 104(1):36–41

  36. Foster A, Resnikoff S (2005) The impact of vision 2020 on global blindness. Eye 19(10):1133–1135

  37. Freiberger H (1974) Mobility aids for the blind. Bull Prosthet Res, pp 73–78

  38. Fusco G, Shen H, Murali V, Coughlan JM (2014) Determining a Blind Pedestrian’s Location and Orientation at Traffic Intersections. In: International conference, ICCHP : proceedings international conference on computers helping people with special needs, vol 8547, pp 427–432

  39. Grammenos D, Savidis A, Stephanidis C (2009) Designing universally accessible games. Comput Entertain 7(1):8:1–8:29

  40. Hakobyan L, Lumsden J, O’Sullivan D, Bartlett H (2013) Mobile assistive technologies for the visually impaired. Surv Ophthalmol 58(6):513–528

  41. Hermann T, Hunt A, Neuhoff JG (2011) The sonification handbook. Logos, Berlin

  42. Hersh MA, Johnson MA (eds) (2008) Assistive technology for visually impaired and blind people. Springer, London

  43. Hersh MA, Johnson MA (2008) On modelling assistive technology systems part i: modelling framework. Technol Disabil 20(3):193–215

  44. Hicks SL, Wilson I, Muhammed L, Worsfold J, Downes SM, Kennard C (2013) A depth-based head-mounted visual display to aid navigation in partially sighted individuals. PLoS One 8(7):e67695

  45. Hodosh M, Young P, Hockenmaier J (2013) Framing image description as a ranking task: data, models and evaluation metrics. J Artif Int Res 47(1):853–899

  46. Hornig R, Zehnder T, Velikay-Parel M, Laube T, Feucht M, Richard G (2007) The IMI retinal implant system. In: Humayun MS, Weiland JD, Chader G, Greenbaum E (eds) Artificial sight biological and medical physics, biomedical engineering. Springer, New York, pp 111–128

  47. Hoyle B, Waters D (2008) Mobility AT: the Batcane (UltraCane). In: Hersh MA, Johnson MA (eds) Assistive technology for visually impaired and blind people. Springer, London, pp 209–229

  48. Humayun MS, deJuan E, Dagnelie G, Greenberg RJ, Propst RH, Phillips DH (1996) Visual perception elicited by electrical stimulation of retina in blind humans. Arch Ophthalmol (Chicago, Ill: 1960) 114(1):40–46

  49. Humayun MS, Dorn JD, daCruz L, Dagnelie G, Sahel JA, Stanga PE, Cideciyan AV, Duncan JL, Eliott D, Filley E, Ho AC, Santos A, Safran AB, Arditi A, Del Priore LV, Greenberg RJ, Group AIS (2012) Interim results from the international trial of Second Sight’s visual prosthesis. Ophthalmology 119(4):779–788

  50. Jacobson RD (1998) Cognitive mapping without sight: four preliminary studies of spatial learning. J Environ Psychol 18(3):289–305

  51. Jacquet C, Bellik Y, Bourda Y (2006) Electronic locomotion aids for the blind: towards more assistive systems. In: Ichalkaranje N, Ichalkaranje A, Jain LC (eds) Intelligent paradigms for assistive and preventive healthcare, Studies in computational intelligence, vol 19. Springer, Berlin, pp 133–163

  52. Jain D (2014) Path-guided indoor navigation for the visually impaired using minimal building retrofitting. In: Proceedings of the 16th international ACM SIGACCESS conference on computers and accessibility, ASSETS ’14. ACM, New York, pp 225–232

  53. Jain D, Jain A, Paul R, Komarika A, Balakrishnan M (2013) A path-guided audio based indoor navigation system for persons with visual impairment. In: Proceedings of the 15th international ACM SIGACCESS conference on computers and accessibility, ASSETS ’13. ACM, New York, pp 33:1–33:2

  54. Jameson B, Manduchi R (2010) Watch your head: a wearable collision warning system for the blind. In: 2010 IEEE Sensors, pp 1922–1927

  55. Jayant C, Ji H, White S, Bigham JP (2011) Supporting blind photography. In: The proceedings of the 13th international ACM SIGACCESS conference on computers and accessibility, ASSETS ’11. ACM, New York, pp 203–210

  56. Kajimoto H, Kawakami N, Tachi S (2002) Optimal design method for selective nerve stimulation and its application to electrocutaneous display. In: 10th symposium on haptic interfaces for virtual environment and teleoperator systems, 2002. HAPTICS 2002. Proceedings, pp 303–310

  57. Kajimoto H, Kanno Y, Tachi S (2006) A vision substitution system using forehead electrical stimulation. In: ACM SIGGRAPH 2006 sketches, SIGGRAPH ’06. ACM, New York

  58. Kajimoto H, Suzuki M, Kanno Y (2014) HamsaTouch: tactile vision substitution with smartphone and electro-tactile display. In: CHI ’14 extended abstracts on human factors in computing systems, CHI EA ’14. ACM, New York, pp 1273–1278

  59. Karpathy A, Fei-Fei L (2015) Deep visual-semantic alignments for generating image descriptions. In: The IEEE conference on computer vision and pattern recognition (CVPR), Boston

  60. Keller P, Stevens C (2004) Meaning from environmental sounds: types of signal-referent relations and their effect on recognizing auditory icons. J Exp Psychol Appl 10(1):3–12

  61. Kercel S, Bach-y Rita P (2006) Human nervous system, noninvasive coupling of electronically generated data into. In: Wiley encyclopedia of biomedical engineering. Wiley, New York

  62. Kesavan S, Giudice NA (2012) Indoor scene knowledge acquisition using a natural language interface. In: Workshop on spatial knowledge acquisition with limited information displays 2012, Kloster Seeon, Germany, vol 888, pp 1–6

  63. Krishnamoorthy N, Malkarnenkar G, Mooney RJ, Saenko K, Guadarrama S (2013) Generating natural-language video descriptions using text-mined knowledge. In: Proceedings of the 27th AAAI conference on artificial intelligence (AAAI-2013), pp 541–547

  64. Lane ND, Georgiev P (2015) Can deep learning revolutionize mobile sensing? In: Proceedings of the 16th international workshop on mobile computing systems and applications, HotMobile ’15. ACM, New York, pp 117–122

  65. Lanigan PE, Paulos AM, Williams AW, Rossi D, Narasimhan P (2006) Trinetra: assistive technologies for grocery shopping for the blind. In: 2006 10th IEEE international symposium on wearable computers, pp 147–148

  66. Larkin JH, Simon HA (1987) Why a diagram is (sometimes) worth ten thousand words. Cogn Sci 11(1):65–100

  67. Lepora NF, Verschure P, Prescott TJ (2013) The state of the art in biomimetics. Bioinspir Biomim 8(1):013001

  68. Levesque V (2005) Blindness, technology and haptics. Technical report, Centre for Intelligent Machines, McGill University, pp 19–21

  69. Lewis PM, Ackland HM, Lowery AJ, Rosenfeld JV (2015) Restoration of vision in blind individuals using bionic devices: a review with a focus on cortical visual prostheses. Brain Res 1595:51–73

  70. Li WH (2013) Wearable computer vision systems for a cortical visual prosthesis. In: 2013 IEEE international conference on computer vision workshops (ICCVW), pp 428–435

  71. Li WH (2014) A fast and flexible computer vision system for implanted visual prostheses. In: Agapito L, Bronstein MM, Rother C (eds) Computer vision—ECCV 2014 Workshops, Lecture notes in computer science, vol 8927. Springer International Publishing, pp 686–701

  72. Linvill JG, Bliss JC (1966) A direct translation reading aid for the blind. Proc IEEE 54(1):40–51

  73. Lorach H, Goetz G, Smith R, Lei X, Mandel Y, Kamins T, Mathieson K, Huie P, Harris J, Sher A, Palanker D (2015) Photovoltaic restoration of sight with high visual acuity. Nat Med 21(5):476–482

  74. Luo YHL, da Cruz L (2014) A review and update on the current status of retinal prostheses (bionic eye). Br Med Bull 109(1):31–44

  75. Maidenbaum S, Abboud S, Amedi A (2014a) Sensory substitution: closing the gap between basic research and widespread practical visual rehabilitation. Neurosci Biobehav Rev 41:3–15

  76. Maidenbaum S, Hanassy S, Abboud S, Buchs G, Chebat DR, Levy-Tzedek S, Amedi A (2014b) The EyeCane, a new electronic travel aid for the blind: technology, behavior and swift learning. Restor Neurol Neurosci 32(6):813–824

  77. Manduchi R, Coughlan J (2012) (Computer) vision without sight. Commun ACM 55(1):96–104

  78. Mankoff J, Fait H, Tran T (2005) Is your web page accessible?: A comparative study of methods for assessing web page accessibility for the blind. In: Proceedings of the SIGCHI conference on human factors in computing systems, CHI ’05. ACM, New York, pp 41–50

  79. Manning CD, Schütze H (1999) Foundations of statistical natural language processing. MIT Press, Cambridge

  80. Mathieson K, Loudin J, Goetz G, Huie P, Wang L, Kamins TI, Galambos L, Smith R, Harris JS, Sher A, Palanker D (2012) Photovoltaic retinal prosthesis with high pixel density. Nat Photonics 6(6):391–397

  81. Meijer PBL (1992) An experimental system for auditory image representations. IEEE Trans Biomed Eng 39(2):112–121

  82. Menzel-Severing J, Laube T, Brockmann C, Bornfeld N, Mokwa W, Mazinani B, Walter P, Roessler G (2012) Implantation and explantation of an active epiretinal visual prosthesis: 2-year follow-up data from the EPIRET-3 prospective clinical trial. Eye 26(4):501–509

  83. Millar S (1988) Models of sensory deprivation: the nature/nurture dichotomy and spatial representation in the blind. Int J Behav Dev 11(1):69–87

  84. Narasimhan P (2006) Assistive embedded technologies. Computer 39(7):85–87

  85. Nau A, Hertle RW, Yang D (2012) Effect of tongue stimulation on nystagmus eye movements in blind patients. Brain Struct Funct 217(3):761–765

  86. Preece A (2013) An evaluation of the RAY G300, an android-based smartphone designed for the blind and visually impaired. AccessWorld 14(July 2013)

  87. Jafri R, Ali SA, Arabnia HR, Fatima S (2014) Computer vision-based object recognition for the visually impaired in an indoors environment: a survey. Vis Comput 30(11):1197–1222

  88. Ren Q (2014) Visual prosthesis, optic nerve approaches. In: Jaeger D, Jung R (eds) Encyclopedia of computational neuroscience. Springer, New York, pp 1–3

  89. Rizzo JF, Wyatt J, Loewenstein J, Kelly S, Shire D (2003) Methods and perceptual thresholds for short-term electrical stimulation of human retina with microelectrode arrays. Investig Ophthalmol Vis Sci 44(12):5355–5361

  90. Roentgen UR, Gelderblom GJ, Soede M, de Witte LP (2008) Inventory of electronic mobility aids for persons with visual impairments: a literature review. J Vis Impair Blind 102(11):702–724

  91. Roentgen UR, Gelderblom GJ, Soede M, de Witte LP (2009) The impact of electronic mobility devices for persons who are visually impaired: a systematic review of effects and effectiveness. J Vis Impair Blind 103(11):743

  92. Roessler G, Laube T, Brockmann C, Kirschkamp T, Mazinani B, Goertz M, Koch C, Krisch I, Sellhaus B, Trieu HK, Weis J, Bornfeld N, Rothgen H, Messner A, Mokwa W, Walter P (2009) Implantation and explantation of a wireless epiretinal retina implant device: observations during the EPIRET3 prospective clinical trial. Investig Ophthalmol Vis Sci 50(6):3003

  93. Roth P, Petrucci L, Pun T, Assimacopoulos A (1999) Auditory browser for blind and visually impaired users. In: CHI ’99 extended abstracts on human factors in computing systems, CHI EA ’99. ACM, New York, pp 218–219

  94. Saez JM, Escolano F, Lozano MA (2015) Aerial obstacle detection with 3-D mobile devices. IEEE J Biomed Health Inf 19(1):74–80

  95. Sakaguchi H, Fujikado T, Fang X, Kanda H, Osanai M, Nakauchi K, Ikuno Y, Kamei M, Yagi T, Nishimura S, Ohji M, Yagi T, Tano Y (2004) Transretinal electrical stimulation with a suprachoroidal multichannel electrode in rabbit eyes. Jpn J Ophthalmol 48(3):256–261

  96. Saunders AL, Williams CE, Heriot W, Briggs R, Yeoh J, Nayagam DAX, McCombe M, Villalobos J, Burns O, Luu CD, Ayton LN, McPhedran M, Opie NL, McGowan C, Shepherd RK, Guymer R, Allen PJ (2014) Development of a surgical procedure for implantation of a prototype suprachoroidal retinal prosthesis. Clin Exp Ophthalmol 42(7):665–674

  97. Schmeidler E, Kirchner C (2001) Adding audio description: does it make a difference? J Vis Impair Blind 95(4):197–212

  98. Shilkrot R, Huber J, Liu C, Maes P, Nanayakkara SC (2014) A wearable text-reading device for the visually-impaired. In: CHI ’14 extended abstracts on human factors in computing systems, CHI EA ’14. ACM, New York, pp 193–194

  99. Shilkrot R, Huber J, Meng Ee W, Maes P, Nanayakkara SC (2015) FingerReader: a wearable device to explore printed text on the go. In: Proceedings of the 33rd annual ACM conference on human factors in computing systems, CHI ’15. ACM, New York, pp 2363–2372

  100. Shoval S, Borenstein J, Koren Y (1998) Auditory guidance with the Navbelt-a computerized travel aid for the blind. IEEE Trans Syst Man Cybern Part C Appl Rev 28(3):459–467

  101. Simonite T (2014) Google creates software that tells you what it sees in images. MIT Technology Review

  102. Solomon N, Bhandari P (2015) Patent landscape report on assistive devices and technologies for visually and hearing impaired persons. Tech. rep, World Intellectual Property Organization (WIPO)

  103. Stingl K, Bartz-Schmidt KU, Besch D, Braun A, Bruckmann A, Gekeler F, Greppmaier U, Hipp S, Hörtdörfer G, Kernstock C, Koitschev A, Kusnyerik A, Sachs H, Schatz A, Stingl KT, Peters T, Wilhelm B, Zrenner E (2013) Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS. Proc Biol Sci R Soc 280(1757):20130077

  104. Stingl K, Bartz-Schmidt KU, Besch D, Chee CK, Cottriall CL, Gekeler F, Groppe M, Jackson TL, MacLaren RE, Koitschev A, Kusnyerik A, Neffendorf J, Nemeth J, Naeem MAN, Peters T, Ramsden JD, Sachs H, Simpson A, Singh MS, Wilhelm B, Wong D, Zrenner E (2015) Subretinal visual implant alpha IMS clinical trial interim report. Vis Res 111(Part B):149–160

  105. Strumillo P (2010) Electronic interfaces aiding the visually impaired in environmental access, mobility and navigation. In: 3rd Conference on human system interactions (HSI), 2010, Rzeszow, pp 17–24

  106. Šubelj L, Bajec M, Mileva Boshkoska B, Kastrin A, Levnajić Z (2015) Quantifying the consistency of scientific databases. PLoS One 10:e0127390. doi:10.1371/journal.pone.0127390

  107. Tapu R, Mocanu B, Tapu E (2014) A survey on wearable devices used to assist the visual impaired user navigation in outdoor environments. In: 2014 11th international symposium on electronics and telecommunications (ISETC), pp 1–4

  108. Tekin E, Coughlan JM (2010) A mobile phone application enabling visually impaired users to find and read product barcodes. In: Proceedings of the 12th international conference on computers helping people with special needs, ICCHP’10. Springer, Berlin, pp 290–295

  109. Terven JR, Salas J, Raducanu B (2014) New opportunities for computer vision-based assistive technology systems for the visually impaired. Computer 47(4):52–58

  110. Thinus-Blanc C, Gaunet F (1997) Representation of space in blind persons: vision as a spatial sense? Psychol Bull 121(1):20–42

  111. Upson S (2007) Loser: tongue vision—IEEE spectrum. http://spectrum.ieee.org/consumer-electronics/portable-devices/loser-tongue-vision

  112. Velázquez R (2010) Wearable assistive devices for the blind. In: Lay-Ekuakille A, Mukhopadhyay SC (eds) Wearable and autonomous biomedical devices and systems for smart environment, Lecture notes in electrical engineering, vol 75. Springer, Berlin, pp 331–349

  113. WHO (2015) WHO—causes of blindness and visual impairment. http://www.who.int/mediacentre/factsheets/fs282/en/

  114. Ye H, Malu M, Oh U, Findlater L (2014) Current and future mobile and wearable device use by people with visual impairments. In: Proceedings of the SIGCHI conference on human factors in computing systems, CHI ’14. ACM, New York, pp 3123–3132

  115. Zhao H, Plaisant C, Shneiderman B, Lazar J (2008) Data sonification for users with visual impairment: a case study with georeferenced data. ACM Trans Comput Hum Interact 15(1):4:1–4:28

  116. Zhou JA, Woo SJ, Park SI, Kim ET, Seo JM, Chung H, Kim SJ (2008) A suprachoroidal electrical retinal stimulator design for long-term animal experiments and in vivo assessment of its feasibility and biocompatibility in rabbits. J Biomed Biotechnol 2008:547428

  117. Zollner M, Huber S, Jetter HC, Reiterer H (2011) NAVI a proof-of-concept of a mobile navigational aid for visually impaired based on the microsoft Kinect. In: Campos P, Graham N, Jorge J, Nunes N, Palanque P, Winckler M (eds) Human-computer interaction INTERACT 2011, Lecture notes in computer science, vol 6949. Springer, Berlin, pp 584–587

Acknowledgements

The authors are grateful for insightful communications from Dr Nathan Lepora (University of Bristol and Bristol Robotics Laboratory) at the early stage of this work.

Author information

Corresponding author

Correspondence to Alexy Bhowmick.

Appendices

Appendix A: Database construction

Four major scientific databases—IEEE Xplore (footnote 12), Thomson Reuters Web of Science (footnote 13), the ACM Digital Library (footnote 14), and Elsevier ScienceDirect (footnote 15)—were used as sources of quality multidisciplinary publications on assistive technology for the visually impaired and blind people, published in journals, conferences, and book chapters. The bar chart in Fig. 3 illustrates the proportions of publications captured from the different sources. While the search within Web of Science, IEEE Xplore, and the ACM Digital Library was a topical search over multiple fields containing relevant keywords or phrases, the search within ScienceDirect was restricted to computer science, engineering, medicine, and design. A range of keywords and phrases were used as search terms, including wearable assistive devices, electronic travel aids, portable assistive device, navigation systems, assistive technology solutions, rehabilitation technology, and vision substitution systems, all in the context of the blind and visually impaired. The search engines of these databases match each keyword individually, as well as the whole phrase, against the descriptive metadata stored for each published document, which includes the title, publication title, abstract, and author-defined keywords (the sketch below illustrates this matching rule). Many articles purely on medicine, neuroscience, psychology, and general disability and impairments appeared in later search listings and were excluded, as they bore no clear relation to engineering or technology. These terms restricted the publications we consider to those in engineering, rehabilitation engineering, assistive and rehabilitation technologies, computer vision, augmented reality, virtual reality, sensor processing and technologies, cognitive science, vision rehabilitation, and sensory substitution research.
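A minimal sketch of that matching rule, in Python: the record, its field names, and the search phrase are hypothetical, and the databases' actual search engines are of course far more sophisticated.

```python
# A record matches if the whole search phrase, or any individual keyword in
# it, occurs in the stored metadata (title, publication title, abstract,
# author-defined keywords).
def matches(record: dict, phrase: str) -> bool:
    haystack = " ".join(
        record.get(field, "")
        for field in ("title", "publication_title", "abstract", "keywords")
    ).lower()
    words = phrase.lower().split()
    return phrase.lower() in haystack or any(w in haystack for w in words)

record = {"title": "A wearable electronic travel aid for blind navigation",
          "abstract": "We present an assistive navigation system ..."}
print(matches(record, "electronic travel aids"))  # True: "electronic", "travel" match
```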

The information analysis was carried out using XPath (XML Path Language) and XSLT 2.0 (Extensible Stylesheet Language Transformations), both well suited to navigating, manipulating, querying, and transforming XML data. It was first necessary to convert the publication search results into a readable document in a non-proprietary format. This was achieved in two steps. First, the search results were exported to a personal library in Zotero (http://www.zotero.org), a powerful research tool for organizing and analyzing sources. Zotero comes with several bibliography styles, and all of the references in a given library can be exported automatically to specific bibliography formats such as BibTeX or EndNote XML. XML is a markup language for representing structured information—documents, books, transactions, and so on—in a format that is both human-readable and machine-readable; XML markup is a form of document metadata.
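To illustrate, the sketch below parses a small XML record and retrieves fields with XPath via lxml. The element names are illustrative only, not the exact Zotero/EndNote export schema.

```python
from lxml import etree  # third-party library; provides full XPath 1.0 support

# An illustrative exported record (hypothetical schema).
xml = b"""
<records>
  <record>
    <title>IntelliNavi: navigation for blind based on Kinect</title>
    <year>2014</year>
    <abstract>A wearable assistive system based on machine learning.</abstract>
  </record>
</records>
"""
root = etree.fromstring(xml)
print(root.xpath("//record/title/text()"))  # ['IntelliNavi: ...']
print(root.xpath("count(//record)"))        # 1.0
```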

A separate XML database was maintained for the search listings from each scientific database. Within XML, the metadata for each publication is enclosed in standardized element tags, so it is accessible through XPath queries and expressions that match specific patterns and retrieve results. An XML editor provides the platform for the XPath queries, regular-expression searches, and XSLT transformations used in the information analysis. Before the analysis, the metadata databases had to be cleaned of duplicate entries, which arise from overlapping search terms. The databases also needed preprocessing, since unexpected characters and strings of symbols crept into XML elements while the metadata was being captured from the sources. The end goal is a well-formed XML document ready for analysis; a sketch of this clean-up step follows below.
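A minimal sketch of the clean-up, under the same illustrative schema as above (the file names are hypothetical): duplicates are dropped by normalized title, and stray symbols are scrubbed from element text.

```python
import re
from lxml import etree

tree = etree.parse("publications.xml")  # hypothetical merged database file
seen = set()
for record in tree.xpath("//record"):
    title = record.findtext("title", default="")
    key = re.sub(r"\W+", " ", title).strip().lower()  # normalized title
    if key in seen:
        record.getparent().remove(record)  # duplicate from overlapping terms
        continue
    seen.add(key)
    for field in record:  # scrub unexpected characters captured at source
        if field.text:
            field.text = re.sub(r"[^\x20-\x7E]", " ", field.text)

tree.write("publications_clean.xml", encoding="utf-8")
```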

Appendix B: Database analysis

The complete database of publication metadata was then analysed using a variety of methods. A survey of topics provides a broad introduction to a research area, so the first step of the analysis was to survey the publications by topic. As a starting point, we retrieved specific fields, i.e., the titles and abstracts, from the publication database through a pattern-matching program in XSLT. Next, we extracted common topics (based on their frequency of occurrence in the titles and abstracts) through a word-frequency counter in XSLT and listed them. The most frequent words and collocations, however, turned out to be stop words: extremely common words of little semantic value that must be excluded from the analysis. Stop words such as ‘the’, ‘and’, ‘of’, and ‘have’, along with non-informative words such as ‘proposed’, ‘developed’, ‘approach’, and ‘towards’, were compiled into a stop-word list and filtered out (a sketch of this filtering follows below). This left a refined list of the most frequent words (Table 2 lists the top 100), identifying the most interesting and informative topics in the field of assistive technology for the visually impaired and blind people today. The same corpus of title and abstract text was used to generate a word cloud with Wordle (footnote 16), a web-based tool. The static word cloud (Fig. 2) visualizes our analysis, giving greater prominence to the more frequent words. Except for a mismatch of two words, the word list and rankings from our XSLT word-frequency program matched those of Wordle.
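The paper's counter was written in XSLT; the Python sketch below shows the same frequency count with stop-word filtering. The stop list here is a small excerpt of the one described above.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "and", "of", "have",
              "proposed", "developed", "approach", "towards"}

def top_words(texts, n=10):
    """Most frequent words across titles/abstracts, stop words removed."""
    counts = Counter()
    for text in texts:
        counts.update(w for w in re.findall(r"[a-z]+", text.lower())
                      if w not in STOP_WORDS)
    return counts.most_common(n)

print(top_words(["Assistive navigation for the blind",
                 "A blind navigation aid using sonification"]))
```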

Our second step of analysis identified the leading journals and conferences that publish or disseminate knowledge in the field of assistive technology for the visually impaired and blind people. These steps were performed with XPath queries (using path expressions) on the XML publication database. We gathered additional information, such as the proportion of content published by each journal and conference per year, alongside the reputation of these venues. The total numbers of journal articles, conference publications, and book sections are given in Sect. 5. The analyses by journal and by conference appear in Tables 3 and 4 respectively, while the analysis by year is plotted in Fig. 4 and discussed in detail in the main text of this article. We then summed the number of publications per year in our database to determine the growth of journals and conferences in the field over the last two decades; the expansion of research interest and developments is reflected in the increased number of publications within individual journals and conferences (Fig. 5). A sketch of this tally appears below.
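A sketch of the per-year and per-venue tally, again over the illustrative schema used earlier ("venue" is likewise an assumed element name):

```python
from collections import Counter
from lxml import etree

tree = etree.parse("publications_clean.xml")
by_year = Counter(tree.xpath("//record/year/text()"))
by_venue = Counter(tree.xpath("//record/venue/text()"))

for year in sorted(by_year):      # growth curve, cf. Fig. 4
    print(year, by_year[year])
print(by_venue.most_common(10))   # candidate leading journals/conferences
```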

Our next step considered the semantic environment of the most frequent words by analyzing their collocations; in corpus analysis, examining word collocations is known to reveal the main underlying themes in a corpus. For this step we employed a list of most frequent words longer than the one presented in Table 2, with the goal of obtaining a good overview of the main themes and sub-disciplines in the field of assistive technology for the visually impaired and blind people. Using absolute frequency as a collocational measure is the simplest method, but it is not recommended, as it does not always lead to interesting results. A more appropriate and widely accepted measure of statistical significance is the collocation metric based on the log-likelihood ratio. In collocation discovery, the likelihood ratio compares the likelihoods \(L(H_0)\) and \(L(H_1)\) of two hypotheses about words \(w_1\) and \(w_2\) [79]: \(H_0\), that the words occur independently, and \(H_1\), that they do not. If N is the total number of words in the corpus, and \(c_1\), \(c_2\), and \(c_{12}\) denote the occurrences of \(w_1\), \(w_2\), and the bigram \(w_{1}w_{2}\), then the maximum likelihood estimates are \(p = c_2/N\), \(p_1 = c_{12}/c_1\), and \(p_2 = (c_2-c_{12})/(N-c_1)\). Assuming a binomial distribution \(b(\cdot)\), the log of the likelihood ratio \(\lambda\) is calculated as follows:

$$\begin{aligned} \log \lambda= & {} \log \frac{L(H_0)}{L(H_1)} \nonumber \\= & {} \log \frac{b(c_{12}, c_1, p)\,b(c_2 - c_{12}, N - c_1, p)}{b(c_{12}, c_1, p_1)\,b(c_2 - c_{12}, N - c_1, p_2)} \nonumber \\= & {} \log L(c_{12}, c_1, p) + \log L(c_2 - c_{12}, N - c_1, p) \nonumber \\&- \log L(c_{12}, c_1, p_1) - \log L(c_2 - c_{12}, N - c_1, p_2) \end{aligned}$$
(1)

where \(L(k, n, x) = x^{k}(1-x)^{n-k}\). Applying the log-likelihood test to our database of preprocessed titles and abstracts from 3010 publications in the field, we obtained the most interesting two-word collocations and ranked them by their LLR scores (Table 5 lists 50 of them). The LLR score is simply a statistic: the higher the score, the stronger the evidence that the candidate is a collocation. A minimal implementation is sketched below. A trigram analysis could yield terms such as “electronic travel aid”, “object detection system”, or “low vision rehabilitation”, which could be interesting for analysis; however, we feel that such a second-order analysis of terms would not necessarily lead to more insights or novel terms.
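The sketch below transcribes Eq. (1) directly; the counts in the example are made up, whereas in the paper they come from the 3010-publication corpus. It reports the conventional \(-2\log\lambda\), so that higher scores indicate stronger collocations.

```python
import math

def log_L(k, n, x):
    """log L(k, n, x), where L(k, n, x) = x**k * (1 - x)**(n - k)."""
    x = min(max(x, 1e-12), 1.0 - 1e-12)  # guard against log(0)
    return k * math.log(x) + (n - k) * math.log(1.0 - x)

def llr_score(c1, c2, c12, N):
    """-2 log(lambda) for the bigram w1 w2, following Eq. (1) [79]."""
    p, p1, p2 = c2 / N, c12 / c1, (c2 - c12) / (N - c1)
    log_lambda = (log_L(c12, c1, p) + log_L(c2 - c12, N - c1, p)
                  - log_L(c12, c1, p1) - log_L(c2 - c12, N - c1, p2))
    return -2.0 * log_lambda

# Hypothetical counts: "visually" occurs 1200 times, "impaired" 900 times,
# the bigram "visually impaired" 800 times, in a corpus of 300000 words.
print(llr_score(c1=1200, c2=900, c12=800, N=300000))
```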

Community detection is a popular way of uncovering important structures and functions of complex networks. We performed a modularity-based analysis of the word co-occurrence graph (built from the publication data) in order to detect underlying community structures. The network analysis tool Gephi (footnote 17) was used to visualize and understand the graph. Our word co-occurrence graph treats words as vertices and the LLR scores of word pairs as edge weights; we constructed a temporal graph in the GEXF file format retaining this information. The Force Atlas algorithm was used to produce a layout with strongly connected nodes pulled together (Fig. 6), and Gephi's modularity statistic was used to detect communities. The Force Atlas algorithm and the modularity statistic are described more fully in the Gephi documentation [9]. The sketch below reproduces the pipeline programmatically.
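A programmatic sketch of the same pipeline: the paper used Gephi interactively, so networkx's greedy modularity maximization stands in here for Gephi's modularity statistic, and the word pairs and LLR weights are made up.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Words as vertices, LLR scores of word pairs as edge weights.
G = nx.Graph()
G.add_weighted_edges_from([
    ("visually", "impaired", 120.0), ("travel", "aid", 95.0),
    ("retinal", "prosthesis", 88.0), ("retinal", "implant", 75.0),
    ("impaired", "blind", 60.0),     ("travel", "blind", 40.0),
])

# Weighted community detection, analogous to Gephi's modularity analysis.
for i, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"community {i}: {sorted(community)}")
```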

Cite this article

Bhowmick, A., Hazarika, S.M. An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends. J Multimodal User Interfaces 11, 149–172 (2017). https://doi.org/10.1007/s12193-016-0235-6

Keywords

Navigation