In occluding contour detection, cues from a static 2D image are combined with cues derived from the relative motion of objects in the scene. These pieces of local evidence are integrated globally using dynamic programming to determine contours in the image. The process begins when an image of a scene is captured; local pixel-wise flow fields are then estimated and used to produce a warped image. Comparing the warped image against the original yields a disparity map, which is combined with static image cues to form a product image map. This map is processed with a tracking technique to reveal the occluding contours, which may then be used with the original image for subsequent processing and editing. The evidence can also be integrated across time by tracking the moving contours, so that evidence gathered at one moment influences the interpretation of the same contour a few seconds later, for example using a particle filter. Integrating over time as well as across space allows the occluding contours to be identified robustly. In this way the camera can quickly compute and track the occluding contours around the time of the photographic exposure, storing that information compactly along with the image.
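The spatial half of this pipeline can be sketched in a few steps: warp one frame toward another along a flow field, take the absolute difference as a motion disparity map, multiply it by a static edge-strength cue to form the product map, and trace a contour through that map with dynamic programming. The sketch below is a minimal illustration of that idea, not the actual method described here; the flow field is assumed to be given (any optical-flow estimator could supply it), and the DP stage traces a single left-to-right contour whose row may shift by at most one pixel per column, seam-style.

```python
import numpy as np

def warp_image(img, flow):
    """Warp img backward along a per-pixel flow field (dy, dx)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + flow[..., 0]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + flow[..., 1]).round().astype(int), 0, w - 1)
    return img[src_y, src_x]

def product_map(frame0, frame1, flow):
    """Combine a motion cue (warp disparity) with a static cue (edges)."""
    warped = warp_image(frame1, flow)
    disparity = np.abs(frame0 - warped)   # motion-based evidence
    gy, gx = np.gradient(frame0)
    edges = np.hypot(gy, gx)              # static image evidence
    return disparity * edges              # the product image map

def dp_contour(score):
    """Dynamic programming: find the left-to-right path maximizing
    summed score, the row moving at most one pixel per column."""
    h, w = score.shape
    acc = score.copy()
    back = np.zeros((h, w), dtype=int)
    for x in range(1, w):
        for y in range(h):
            lo, hi = max(0, y - 1), min(h, y + 2)
            j = int(np.argmax(acc[lo:hi, x - 1])) + lo
            acc[y, x] = score[y, x] + acc[j, x - 1]
            back[y, x] = j
    path = [int(np.argmax(acc[:, -1]))]
    for x in range(w - 1, 0, -1):
        path.append(int(back[path[-1], x]))
    return path[::-1]  # contour row index for each column
```

In this toy form the DP stage plays the role of the global integration step: each column's local evidence (the product-map scores) is combined with its neighbors' through the accumulated costs, so weak local evidence can still be linked into a coherent contour.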