The inventors of this technology developed a method that protects photosensitive viewers from seizures by learning to adjust videos in a way that renders them safe. A large fraction of the population is sensitive to bright flashes and changing patterns; many people develop headaches or other adverse reactions even without having a seizure, and children, particularly children with autism, are disproportionately affected. Recent films, such as Incredibles 2 and the 2019 Star Wars movie, carried seizure warnings that risked harming their public image. This approach detects and modifies harmful content, rendering videos safe with minimal degradation in quality. It can benefit photosensitive consumers as well as large digital content providers and movie studios seeking to offer safe viewing experiences. Because individual tolerance for flashing or moving patterns varies significantly, providers might expose a simple sensitivity control that users can tune, or adapt the filter automatically based on the age of the viewer.
Photosensitivity is a prevalent condition in which visual stimuli, such as flashing lights, trigger seizures or other adverse reactions. Videos are common sources of these unsafe stimuli, and although there are efforts to reduce certain types of strobing, the range of problematic stimuli is broad, unpredictable, and difficult to filter against. The inventors have developed a method to actively detect and modify harmful stimuli in videos, thereby protecting photosensitive individuals from exposure. Importantly, this method can protect viewers against a wide range of problematic stimuli without degradation of video quality. This invention has broad consumer and business applications to ensure a safe visual environment for all photosensitive individuals.
This technology uses machine learning to detect and modify harmful stimuli in videos. In the forward pass, a large training dataset is generated: each source video is corrupted with artifacts, such as flashing lights, that would adversely affect photosensitive individuals. In the inverse pass, neural networks learn to transform the harmful videos created in the forward pass back into innocuous videos without compromising quality, detecting and selectively suppressing the problematic regions while leaving videos that lack adverse stimuli unaffected. The resulting filter then generalizes to a wide range of new videos.
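To make the detect-and-suppress idea concrete, the toy sketch below implements a simple hand-crafted heuristic, not the inventors' learned method: it flags frames whose mean luminance jumps sharply relative to the previous frame and blends them toward that frame, which damps full-field flashes while leaving stable footage untouched. The function name, threshold, and blend factor are illustrative assumptions; a trained network would replace this rule with learned detection and region-selective suppression.

```python
import numpy as np

def suppress_flashes(frames, threshold=0.2, alpha=0.5):
    """Damp sudden luminance jumps between consecutive frames.

    frames: array of shape (T, H, W), luminance values in [0, 1].
    threshold: mean absolute frame-to-frame change that counts as a flash.
    alpha: weight kept from the flashing frame when blending (illustrative).
    """
    out = frames.astype(float).copy()
    for t in range(1, len(out)):
        delta = np.abs(out[t] - out[t - 1]).mean()
        if delta > threshold:
            # Likely flash: blend toward the previous frame to soften it.
            out[t] = alpha * out[t] + (1 - alpha) * out[t - 1]
    return out

# Synthetic 10-frame clip: mid-gray throughout, with one bright
# full-field flash at frame 5.
clip = np.full((10, 4, 4), 0.5)
clip[5] = 1.0
safe = suppress_flashes(clip)
```

In this example the flash at frame 5 is softened from full white (1.0) to 0.75, halving the luminance jump, while the non-flashing frames before it are passed through unchanged.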
For individuals with photosensitivity, including photosensitive epilepsy, these filters can be tuned to act more aggressively against specific stimuli. In addition, this technology can be used by large content providers or browsers to improve the safety of their video platforms. Because video quality is not degraded by the transformation, the method can be applied to a broad range of videos without compromising the visual experience of viewers who are not photosensitive.
- Method to detect and modify a broad range of harmful stimuli in videos
- Filter can be tuned to individual photosensitivities or broadly applied to video platforms
- Provides safe visual environments without degradation of video quality