The nearest neighbor idea and local learning in general are not limited to classification, and many of the ideas can be more easily illustrated for regression. Consider the following one-dimensional regression problems (examples from http://www.autonlab.org/tutorials/mbl08.pdf):
Clearly, linear models do not capture the data well, as Examples 0 through 2 illustrate. We can add more features, such as higher-order polynomial terms, or we can use a local approach, like nearest neighbor:
In one dimension, each training point will produce a horizontal segment that extends from halfway to the point on its left to halfway to the point on its right. These midway points are where the prediction changes. In higher dimensions, if we use the standard Euclidean distance (a.k.a. the L2 norm), the regions where the closest training instance is the same will be polygonal. This tessellation of the input space into nearest neighbor cells is known in geometry as a Voronoi diagram. In 2D, it can be easily visualized (try the applet below, which allows you to move the points around!).
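The piecewise-constant behavior described above can be sketched in a few lines. This is a minimal illustration, not the applet's implementation; the training points and targets below are made up for the example:

```python
# 1-nearest-neighbor regression in one dimension (hypothetical data).
def nn1_predict(x, xs, ys):
    """Predict by copying the target of the closest training point."""
    best = min(range(len(xs)), key=lambda i: abs(xs[i] - x))
    return ys[best]

# Training set: x-values and their targets (illustrative only).
xs = [0.0, 1.0, 3.0]
ys = [2.0, 5.0, 4.0]

# The prediction is piecewise constant and changes exactly at the
# midpoints between adjacent training x's: here at 0.5 and 2.0.
print(nn1_predict(0.2, xs, ys))  # closest training point is x=0.0
print(nn1_predict(0.9, xs, ys))  # closest training point is x=1.0
print(nn1_predict(2.5, xs, ys))  # closest training point is x=3.0
```

Queries on either side of a midpoint (e.g. 0.4 vs. 0.6) give different constant predictions, which is what produces the step-shaped regression curve.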
More generally, we can use an arbitrary distance function to define the nearest neighbor. Popular choices besides the Euclidean distance include other norms:
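As a concrete sketch, here are three commonly used norms as distance functions; the example points are arbitrary:

```python
import math

def l1(a, b):
    # Manhattan (L1) distance: sum of absolute coordinate differences.
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

def l2(a, b):
    # Euclidean (L2) distance.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def linf(a, b):
    # Chebyshev (L-infinity) distance: largest coordinate difference.
    return max(abs(ai - bi) for ai, bi in zip(a, b))

a, b = (0.0, 0.0), (3.0, 4.0)
print(l1(a, b), l2(a, b), linf(a, b))  # 7.0 5.0 4.0
```

Swapping the distance function changes the shape of the nearest neighbor cells: under L1 or L-infinity the cell boundaries are still piecewise linear, but they are no longer the same polygons as in the Euclidean Voronoi diagram.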
Here are all the learned models on our examples (1NN, 9NN, KR, LWR):
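Two of the models above can be sketched briefly. Reading KR as kernel regression (Nadaraya-Watson form) and taking k-NN regression as the average of the k closest targets, a minimal version with made-up data is:

```python
import math

def knn_predict(x, xs, ys, k):
    """k-NN regression: average the targets of the k closest training points."""
    idx = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x))[:k]
    return sum(ys[i] for i in idx) / k

def kernel_predict(x, xs, ys, bandwidth=1.0):
    """Kernel regression: a weighted average of all targets, with weights
    given by a Gaussian kernel on the distance to the query point."""
    w = [math.exp(-((xi - x) / bandwidth) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# Illustrative training data.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]

print(knn_predict(1.4, xs, ys, k=2))  # averages the targets at x=1 and x=2
print(kernel_predict(1.4, xs, ys))    # smooth blend weighted by distance
```

With k=1 this reduces to the 1NN model above; larger k (e.g. 9NN) and the kernel weighting both smooth the prediction. LWR goes one step further and fits a local linear model with those kernel weights rather than taking a weighted average; it is not shown here.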