One of the central problems in image recognition is the extraction of salient “features” in a manner robust to variation in position, orientation, and scale, and suitable for further processing. Because real-world images contain distinct features at various resolutions, effective extraction may require combining edge and other information across several scales, which is itself a difficult problem. Our analysis suggests that these two problems are fundamentally interdependent and can be addressed in an integrated framework. We demonstrate improved results by combining edge detection and feature binding at each scale. This is accomplished by extending elements of the Sajda–Finkel neural-network model of perceptual binding to the multi-scale feature-extraction task.
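The multi-scale edge-extraction step underlying this framework can be illustrated with a minimal sketch. The code below is a hypothetical example, not the Sajda–Finkel model itself: it builds a simple Gaussian scale-space and computes a gradient-magnitude edge map at each scale, so that fine scales retain detail while coarse scales retain only large structures. All function names and parameters (`smooth`, `multiscale_edges`, the `sigmas` values) are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # 1-D Gaussian kernel, normalized to sum to 1.
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    # Separable Gaussian smoothing: filter rows, then columns.
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def edge_magnitude(img):
    # Gradient-magnitude edge map from central differences.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def multiscale_edges(img, sigmas=(1.0, 2.0, 4.0)):
    # One edge map per scale of the Gaussian scale-space.
    return {s: edge_magnitude(smooth(img, s)) for s in sigmas}

# Toy example: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = multiscale_edges(img)
```

A full binding stage would then group edge responses within and across these scales; this sketch shows only the per-scale extraction that such a stage would consume.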