

Poster

Exploring Effective Stimulus Encoding via Vision System Modeling for Visual Prostheses

Chuanqing Wang · Di Wu · Chaoming Fang · Jie Yang · Mohamad Sawan

Halle B
Tue 7 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

Visual prostheses are potential devices for restoring vision to the blind, and their effectiveness depends heavily on the quality of the stimulation patterns. However, existing processing frameworks use region detection or deep learning models to generate stimulation patterns, without effective optimization methods for achieving better vision recovery. In this paper, we propose, for the first time, an end-to-end stimulation pattern optimization framework consisting of a retinal network that mimics the behavior of the retina, a phosphene model that simulates the phosphenes generated by retinal prostheses, and a primary vision system network (PVS-net) that mimics the function from the retina to the visual cortex. Combining these three components, the framework simulates the whole process of visual signal processing, from external scenes to visual perception in the cortex. In addition, we adopt biological spike responses of the visual cortex as target signals during training, providing an efficient way to generate and verify the quality of stimulation patterns. The proposed retinal network adopts a spike-representation encoding technique to record external scenes and a spiking recurrent neural network to predict the stimulation patterns. The phosphene model and PVS-net simulate the phosphenes in the retina and predict the responses of multiple V1 neurons. Experimental results show that the generated stimulation patterns not only retain the features of the original scenes but are also biologically plausible enough to evoke similar perceptions in the visual cortex: the framework achieves a Pearson correlation coefficient of 0.78 between predicted values and the recorded responses of normal neurons.
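Below is a minimal sketch, in PyTorch, of the three-stage pipeline the abstract describes: a retinal network mapping scenes to stimulation patterns, a phosphene model, and a PVS-net predicting V1 responses, optimized end to end against recorded responses with a Pearson-correlation objective. All module names, layer choices, and sizes are illustrative assumptions; the paper's actual spiking recurrent network and phosphene simulation are not reproduced here.

# Hypothetical sketch of the pipeline in the abstract:
# retina network -> phosphene model -> PVS-net.
# Architectures and sizes are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class RetinaNet(nn.Module):
    """Maps external scenes to electrode stimulation patterns.
    The paper's spiking recurrent network is approximated with a GRU over
    Poisson-like spike samples of the input frames."""
    def __init__(self, n_pixels: int, n_electrodes: int, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(n_pixels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_electrodes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, n_pixels), intensities in [0, 1]
        spikes = torch.bernoulli(frames.clamp(0, 1))  # crude spike encoding
        h, _ = self.rnn(spikes)
        return torch.sigmoid(self.head(h[:, -1]))     # stimulation amplitudes

class PhospheneModel(nn.Module):
    """Differentiable stand-in for phosphene generation: each electrode
    contributes brightness to a simulated retinal image."""
    def __init__(self, n_electrodes: int, retina_size: int = 32):
        super().__init__()
        self.project = nn.Linear(n_electrodes, retina_size * retina_size)
        self.retina_size = retina_size

    def forward(self, stim: torch.Tensor) -> torch.Tensor:
        out = torch.relu(self.project(stim))
        return out.view(-1, 1, self.retina_size, self.retina_size)

class PVSNet(nn.Module):
    """Maps simulated phosphenes to predicted responses of V1 neurons."""
    def __init__(self, retina_size: int = 32, n_neurons: int = 100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
        )
        self.readout = nn.Linear(16 * 8 * 8, n_neurons)

    def forward(self, phosphenes: torch.Tensor) -> torch.Tensor:
        return self.readout(self.features(phosphenes))

def pearson(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Per-neuron Pearson correlation, averaged -- the paper's reported metric.
    p = pred - pred.mean(0)
    t = target - target.mean(0)
    r = (p * t).sum(0) / (p.norm(dim=0) * t.norm(dim=0) + 1e-8)
    return r.mean()

# End-to-end optimization: train the retina network so that predicted V1
# responses match recorded biological responses (random tensors stand in
# for real scene sequences and neural recordings here).
retina, phos, pvs = RetinaNet(64 * 64, 60), PhospheneModel(60), PVSNet()
opt = torch.optim.Adam(retina.parameters(), lr=1e-3)
frames = torch.rand(8, 10, 64 * 64)   # batch of scene sequences
recorded = torch.randn(8, 100)        # recorded V1 spike responses
for _ in range(100):
    opt.zero_grad()
    loss = -pearson(pvs(phos(retina(frames))), recorded)
    loss.backward()
    opt.step()

One plausible reading of the end-to-end setup, reflected in this sketch, is that the phosphene model and PVS-net act as a fixed differentiable simulator while gradients flow back to the retinal network, so the stimulation patterns are optimized directly against cortex-level target signals.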
