This technology uses machine learning to detect and modify harmful stimuli in videos. In the forward pass, the system generates a large dataset of videos, each corrupted with artifacts, such as flashing lights, that would adversely affect photosensitive individuals. In the inverse pass, the harmful videos created in the forward pass are transformed into innocuous videos without compromising quality. The inverse pass uses neural networks that learn to detect and selectively suppress problematic regions; videos lacking adverse stimuli pass through unchanged. The resulting filter can then generalize to a wide range of new videos.
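The detect-and-suppress step of the inverse pass can be illustrated with a minimal sketch. The snippet below is not the described neural-network filter; it is a simplified stand-in that flags pixels whose luminance changes abruptly between frames (a crude flash detector) and damps only those pixels, leaving unaffected regions untouched. All function names, thresholds, and blending weights are illustrative assumptions.

```python
import numpy as np

def suppress_flashes(frames, thresh=0.2, alpha=0.9):
    """Flag abrupt luminance changes between frames and damp them.

    frames: float array of shape (T, H, W), luminance in [0, 1].
    thresh: per-pixel change above this is treated as a potential flash.
    alpha:  blending weight toward the previous (already-safe) frame.
    Both parameter values are illustrative, not taken from the source.
    """
    out = frames.astype(float).copy()
    for t in range(1, len(out)):
        diff = np.abs(out[t] - out[t - 1])
        mask = diff > thresh  # candidate flash pixels only
        # Blend flagged pixels toward the previous frame to suppress
        # the flicker; unflagged pixels pass through unchanged.
        out[t][mask] = alpha * out[t - 1][mask] + (1 - alpha) * out[t][mask]
    return out

# Synthetic clip: static mid-gray video with a full-frame flash at t=2.
clip = np.full((5, 4, 4), 0.5)
clip[2] = 1.0
safe = suppress_flashes(clip)
```

In this toy run, only the flashing frame is attenuated (0.9 * 0.5 + 0.1 * 1.0 = 0.55), while the static frames are bit-identical to the input, mirroring the property that videos without adverse stimuli remain unaffected.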
For individuals with photosensitivity, including those with photosensitive epilepsy, these filters can be tuned to act more aggressively against specific stimuli. In addition, this technology can be used by large content providers or browsers to improve the safety of their video platforms. Because video quality is not degraded during the transformation, the method can be applied to a broad range of videos without compromising the visual experience of viewers without photosensitivity.
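One way the tunable aggressiveness might look in practice is a set of named sensitivity profiles mapped to filter parameters, with stricter detection thresholds and heavier damping for photosensitive users. The profile names and values below are hypothetical assumptions, not parameters from the source.

```python
# Hypothetical sensitivity profiles; names and numbers are illustrative.
PROFILES = {
    "default":   {"flash_thresh": 0.20, "damping": 0.90},
    # Flags smaller luminance changes and damps them more strongly.
    "sensitive": {"flash_thresh": 0.10, "damping": 0.97},
}

def pick_profile(photosensitive: bool) -> dict:
    """Select filter parameters based on a user-reported sensitivity flag."""
    return PROFILES["sensitive" if photosensitive else "default"]
```

A platform could expose the flag as a user preference and pass the resulting parameters into whatever filtering stage it runs server-side or in the browser.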