
Image Morphing using the Beier-Neely Algorithm
Introduction: A Childhood Memory Becomes a Technical Journey
In the early 1990s, I encountered a small program on my i486SX-based FM Towns computer that could smoothly morph one face into another. Running at just 25 MHz, with no GPU, no FPU, and 6 MB of RAM, this tiny program produced effects that seemed impossibly sophisticated. That childhood fascination recently motivated me to understand and recreate this technology.
In this article I share what I learned from reading the seminal paper by Beier and Neely, which introduced the feature-based morphing algorithm that powered many iconic visual effects of the era, and from attempting to implement it in TypeScript.
Initial Approach: Why Simple Blending Fails
My initial thought before researching was to simply blend two images together using alpha compositing:
// Naive approach - cross-dissolve
morphed = (1 - t) * imageA + t * imageB
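In TypeScript with the Canvas ImageData type, this naive cross-dissolve is just a per-channel linear blend. Here is a minimal sketch of such a blend function (the same kind of function reappears later as the final cross-dissolve step):
function blendImages(imgA: ImageData, imgB: ImageData, t: number): ImageData {
  // Assumes both images have identical dimensions
  const result = new ImageData(imgA.width, imgA.height);
  for (let i = 0; i < result.data.length; i++) {
    // Linear interpolation per RGBA channel: (1 - t) * A + t * B
    result.data[i] = (1 - t) * imgA.data[i] + t * imgB.data[i];
  }
  return result;
}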
The results were predictably poor. This approach produces ghosting artifacts - double images where features don’t align. Frame-by-frame analysis of the original demo revealed why:
- Features maintain coherent motion paths during transformation
- The image appears to deform and flow, not just fade
- Pixel displacement follows structured patterns
Clearly, effective morphing requires more than simple alpha blending.
The Beier-Neely Algorithm: A Breakthrough in Feature-Based Morphing
While researching image morphing techniques, I discovered the seminal 1992 paper by Thaddeus Beier and Shawn Neely from Pacific Data Images. Their approach revolutionized the field by using line segments to define feature correspondences rather than tracking individual points.
The key insight: instead of establishing point-to-point mappings, you define corresponding line segments between images. For example:
- A line connecting the eye corners in the source image
- The corresponding line in the target image
- The algorithm interpolates the transformation between these line pairs
This method captures the semantic structure of the image, allowing for coherent transformations that respect the underlying geometry of facial features.
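In code, each correspondence can be stored as a pair of line segments, one per image. The following data types are my own assumption (the article never pins them down) and are used in the sketches throughout the rest of this article:
interface Point {
  x: number;
  y: number;
}
// A control line segment from P to Q, in image pixel coordinates
interface Line {
  p: Point;
  q: Point;
}
// A feature correspondence: the same semantic line in both images
interface LinePair {
  source: Line;
  target: Line;
}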
Interactive Demo
Before diving into the technical details, here’s my implementation of the algorithm. Try moving control lines on corresponding features between images to see how the morphing works:
Instructions:
- Load source and target images using the buttons above
- Click on the images to add control points (matching features)
- Use "Auto-Detect Features" for automatic face landmark detection
- Drag control points to adjust their positions
- Press Delete/Backspace to remove selected point
- Generate the morph and play the animation
Understanding the Algorithm
The Core Concept: Line-Based Coordinate Systems
Each line segment in the Beier-Neely algorithm defines a local coordinate system that influences nearby pixels. Think of each line as creating a field of influence - pixels close to the line are strongly affected by its transformation, while distant pixels receive minimal influence.
Mathematical Foundation
The Beier-Neely algorithm transforms pixels based on their relationship to control line segments. Here’s the complete mathematical formulation:
Line Parameterization
Given a line segment defined by points $P$ and $Q$, any point $X$ can be expressed in the line's local coordinate system $(u, v)$:
$$u = \frac{(X - P) \cdot (Q - P)}{\lVert Q - P \rVert^2}, \qquad v = \frac{(X - P) \cdot \operatorname{perp}(Q - P)}{\lVert Q - P \rVert}$$
where $\operatorname{perp}(Q - P)$ is the perpendicular vector; $u$ measures position along the line and $v$ the signed distance from it.
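As a sketch in TypeScript (using the Line and Point types above; perp rotates a vector by 90 degrees):
// Rotate a vector by 90 degrees (perpendicular vector of the same length)
function perp(v: Point): Point {
  return { x: -v.y, y: v.x };
}
function dot(a: Point, b: Point): number {
  return a.x * b.x + a.y * b.y;
}
// Express point X in the local (u, v) coordinate system of a line
function toLineCoords(X: Point, line: Line): { u: number; v: number } {
  const d = { x: line.q.x - line.p.x, y: line.q.y - line.p.y };
  const rel = { x: X.x - line.p.x, y: X.y - line.p.y };
  const lenSq = dot(d, d);
  const u = dot(rel, d) / lenSq;                   // position along the line (0 at P, 1 at Q)
  const v = dot(rel, perp(d)) / Math.sqrt(lenSq);  // signed perpendicular distance
  return { u, v };
}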
Pixel Displacement Calculation
For a pixel at position $X$ in the destination image, its corresponding position $X'$ in the source image is computed through the following steps:
- Calculate displacement for each line pair:
For line $i$ with source positions $P_i, Q_i$ and destination positions $P_i', Q_i'$:
$$D_i = X_i' - X$$
where $X_i'$ is found by:
  - Computing coordinates $(u, v)$ of $X$ relative to destination line $P_i'Q_i'$
  - Reconstructing the position using the same $(u, v)$ relative to source line $P_iQ_i$:
$$X_i' = P_i + u\,(Q_i - P_i) + \frac{v \cdot \operatorname{perp}(Q_i - P_i)}{\lVert Q_i - P_i \rVert}$$
- Weighted average of displacements:
$$X' = X + \frac{\sum_i w_i\, D_i}{\sum_i w_i}$$
where $D_i$ is the displacement vector for line $i$ and $w_i$ is its weight.
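A sketch of the per-line displacement, building on toLineCoords above; this is my own take on the calculateDisplacement helper used later (destination line first, source line second):
// Map pixel X through one line pair: (u, v) relative to the destination line,
// reconstructed against the source line, returned as a displacement vector
function calculateDisplacement(X: Point, dstLine: Line, srcLine: Line): Point {
  const { u, v } = toLineCoords(X, dstLine);
  const d = { x: srcLine.q.x - srcLine.p.x, y: srcLine.q.y - srcLine.p.y };
  const len = Math.sqrt(dot(d, d));
  const n = perp(d);
  // X_i' = P + u * (Q - P) + v * perp(Q - P) / |Q - P|
  const srcX = srcLine.p.x + u * d.x + (v * n.x) / len;
  const srcY = srcLine.p.y + u * d.y + (v * n.y) / len;
  // Displacement D_i = X_i' - X
  return { x: srcX - X.x, y: srcY - X.y };
}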
Weight Function
The weight of each line's influence is determined by:
$$w_i = \left( \frac{\mathrm{length}_i^{\,p}}{a + \mathrm{dist}_i} \right)^{b}$$
Where:
- $\mathrm{length}_i$ is the line segment length
- $\mathrm{dist}_i$ is the shortest distance from the pixel to the line segment
- $a$ is a small constant (typically 0.001) to prevent division by zero
- $b$ controls the falloff rate (typically 0.5-2.0)
- $p$ determines line length importance (typically 0-1.0)
Distance Calculation
The distance from point $X$ to line segment $PQ$ is:
$$\mathrm{dist} = \begin{cases} \lvert v \rvert & \text{if } 0 \le u \le 1 \\ \lVert X - P \rVert & \text{if } u < 0 \\ \lVert X - Q \rVert & \text{if } u > 1 \end{cases}$$
Through experimentation, I found that values of $b$ and $p$ within these typical ranges provide natural-looking deformations for facial morphing.
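The distance test can be sketched directly from the $(u, v)$ parameterization above (the weight itself is then a one-line formula, computed inline in the warping loop below):
// Shortest distance from X to the segment PQ, using the (u, v) parameterization
function distanceToLine(X: Point, line: Line): number {
  const { u, v } = toLineCoords(X, line);
  if (u < 0) return Math.hypot(X.x - line.p.x, X.y - line.p.y); // before P
  if (u > 1) return Math.hypot(X.x - line.q.x, X.y - line.q.y); // past Q
  return Math.abs(v);                                           // alongside the segment
}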
Implementation Details
Core Warping Algorithm
The implementation uses TypeScript and Canvas 2D API. Here’s the core warping function:
function warpImage(
  sourceImg: ImageData,
  srcLines: Line[],
  dstLines: Line[],
  t: number
): ImageData {
  const width = sourceImg.width;
  const height = sourceImg.height;
  const result = new ImageData(width, height);
  // Weight parameters from the formulation above (values chosen within the typical ranges)
  const a = 0.001, b = 1.0, p = 0.5;
  // Interpolate line positions for the current frame
  const currentLines = interpolateLines(srcLines, dstLines, t);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let totalDisplacement = { x: 0, y: 0 };
      let totalWeight = 0;
      // Accumulate the weighted displacement contributed by every line pair
      for (let i = 0; i < srcLines.length; i++) {
        const displacement = calculateDisplacement(
          { x, y }, currentLines[i], srcLines[i]
        );
        const dist = distanceToLine({ x, y }, currentLines[i]);
        const lineLength = Math.hypot(
          currentLines[i].q.x - currentLines[i].p.x,
          currentLines[i].q.y - currentLines[i].p.y
        );
        const weight = Math.pow(
          Math.pow(lineLength, p) / (a + dist), b
        );
        totalDisplacement.x += displacement.x * weight;
        totalDisplacement.y += displacement.y * weight;
        totalWeight += weight;
      }
      // Sample from the computed source position
      const srcX = x + totalDisplacement.x / totalWeight;
      const srcY = y + totalDisplacement.y / totalWeight;
      copyPixel(sourceImg, srcX, srcY, result, x, y);
    }
  }
  return result;
}
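The remaining helpers are straightforward. Below is a sketch of interpolateLines (linear interpolation of endpoints) and copyPixel (nearest-neighbor sampling with bounds clamping); the actual implementation may differ, for example by using bilinear sampling:
// Linearly interpolate each line's endpoints between source and destination positions
function interpolateLines(srcLines: Line[], dstLines: Line[], t: number): Line[] {
  return srcLines.map((src, i) => {
    const dst = dstLines[i];
    return {
      p: { x: (1 - t) * src.p.x + t * dst.p.x, y: (1 - t) * src.p.y + t * dst.p.y },
      q: { x: (1 - t) * src.q.x + t * dst.q.x, y: (1 - t) * src.q.y + t * dst.q.y },
    };
  });
}
// Copy the RGBA value at (srcX, srcY) in src to (dstX, dstY) in dst,
// using nearest-neighbor sampling and clamping to the image bounds
function copyPixel(
  src: ImageData, srcX: number, srcY: number,
  dst: ImageData, dstX: number, dstY: number
): void {
  const sx = Math.min(src.width - 1, Math.max(0, Math.round(srcX)));
  const sy = Math.min(src.height - 1, Math.max(0, Math.round(srcY)));
  const si = (sy * src.width + sx) * 4;
  const di = (dstY * dst.width + dstX) * 4;
  for (let c = 0; c < 4; c++) {
    dst.data[di + c] = src.data[si + c];
  }
}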
Bidirectional Morphing
For a morph at time $t$, the final image is computed as:
$$M(t) = (1 - t)\, W_A\!\left(L(t)\right) + t\, W_B\!\left(L(t)\right)$$
Where:
- $W_A$ and $W_B$ are the warping functions for images A and B
- $L(t)$ represents the interpolated line positions
- Both images are warped toward the same intermediate line configuration
The implementation:
function createMorph(
  imageA: ImageData,
  imageB: ImageData,
  linesA: Line[],
  linesB: Line[],
  t: number
): ImageData {
  // Warp both images toward the same intermediate line positions
  const warpedA = warpImage(imageA, linesA, linesB, t);
  const warpedB = warpImage(imageB, linesB, linesA, 1 - t);
  // Cross-dissolve the warped images
  return blendImages(warpedA, warpedB, t);
}
This bidirectional approach ensures both images deform toward a common intermediate configuration, creating smooth, natural transitions where features from both images meet at geometrically consistent positions.
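In practice the morph is rendered as a sequence of frames by sweeping $t$ from 0 to 1. A usage sketch (the frame count is an arbitrary choice):
// Generate an animation as a sequence of morphed frames
function generateMorphSequence(
  imageA: ImageData,
  imageB: ImageData,
  linesA: Line[],
  linesB: Line[],
  frameCount = 30
): ImageData[] {
  const frames: ImageData[] = [];
  for (let f = 0; f < frameCount; f++) {
    const t = f / (frameCount - 1); // t sweeps from 0 (image A) to 1 (image B)
    frames.push(createMorph(imageA, imageB, linesA, linesB, t));
  }
  return frames;
}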
Best Practices for Feature Correspondence
Here are some notes on how to place control lines effectively for morphing:
Principles for Effective Line Placement
Through extensive experimentation, I developed these guidelines for creating convincing morphs:
- Semantic Correspondence: Always match anatomically equivalent features. Left eye to left eye, not to nose. This seems obvious but is easy to violate when focusing on geometric similarity rather than semantic meaning.
- Hierarchical Feature Definition:
  - Primary features: Eye line, nose ridge, mouth line
  - Secondary features: Jaw outline, hairline
  - Tertiary features: Individual feature details
- Directional Consistency: Line direction must be preserved between images. A line drawn left-to-right in the source must correspond to a left-to-right line in the target. Violating this creates unnatural twisting artifacts.
- Optimal Line Density: Fewer, well-placed lines often produce better results than dense coverage. I found 8-12 lines optimal for facial morphing.
Common Pitfalls and Solutions
When implementing image morphing, I encountered several common problems that can ruin the effect. Here are the main issues and how to avoid them:
Crossing lines create impossible transformations that result in torn or distorted pixels. When control lines intersect, the algorithm tries to satisfy conflicting constraints, leading to chaotic results. Always ensure your control lines never cross each other, either within a single image or between corresponding lines in the source and target images.
Missing boundaries cause the background to bleed into the foreground during morphing. Without proper silhouette definition, the algorithm doesn’t understand where the subject ends and the background begins. Always define clear boundary lines around the main subject, especially along the face outline and hairline.
Over-constrained regions produce rigid, unnatural movement. When you place too many control lines in a small area, you restrict the algorithm’s ability to create smooth deformations. It’s better to use fewer, well-placed lines and trust the algorithm to interpolate naturally between them.
Mismatched topology occurs when you try to morph between structurally incompatible features. For example, morphing between an open mouth and a closed mouth, or between vastly different facial expressions, often produces poor results. Choose images with similar poses and expressions for the best morphing effects.
Automating Feature Detection
Modern Approach: MediaPipe Integration
Manual line placement becomes tedious for batch processing. I integrated Google’s MediaPipe for automatic facial landmark detection:
import { FaceLandmarker, FilesetResolver } from '@mediapipe/tasks-vision';

async function detectFacialFeatures(image: HTMLImageElement) {
  // Load the WASM assets for the vision tasks
  // (the path here is an assumption; point it at your copy of the tasks-vision WASM files)
  const vision = await FilesetResolver.forVisionTasks('/wasm');
  const faceLandmarker = await FaceLandmarker.createFromOptions(vision, {
    baseOptions: {
      modelAssetPath: 'face_landmarker.task',
      delegate: 'GPU'
    },
    numFaces: 1,
    runningMode: 'IMAGE'
  });
  const landmarks = faceLandmarker.detect(image);
  return convertLandmarksToLines(landmarks);
}
MediaPipe provides 468 facial landmarks, which I convert into semantically meaningful line segments:
- Eye contours (landmarks 33-133, 243-346)
- Nose ridge (landmarks 1-4, 5-6)
- Lip boundaries (landmarks 0-16, 17-26)
- Face outline (landmarks 356-454)
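A sketch of what such a conversion could look like; the index pairs below are illustrative assumptions (a handful of commonly cited mesh indices), not the exact mapping listed above, and this variant also takes the image dimensions to convert normalized coordinates to pixels:
import { FaceLandmarkerResult } from '@mediapipe/tasks-vision';

// Illustrative landmark index pairs (assumed); each pair becomes one control line
const LINE_LANDMARK_PAIRS: Array<[number, number]> = [
  [33, 133],   // one eye, outer to inner corner (assumed indices)
  [362, 263],  // other eye, inner to outer corner (assumed indices)
  [168, 1],    // nose bridge to nose tip (assumed indices)
  [61, 291],   // mouth corners (assumed indices)
];

function convertLandmarksToLines(
  result: FaceLandmarkerResult,
  width: number,
  height: number
): Line[] {
  const mesh = result.faceLandmarks[0]; // landmarks are normalized to [0, 1]
  return LINE_LANDMARK_PAIRS.map(([a, b]) => ({
    p: { x: mesh[a].x * width, y: mesh[a].y * height },
    q: { x: mesh[b].x * width, y: mesh[b].y * height },
  }));
}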
Classical Computer Vision Approaches
I also experimented with traditional edge detection methods:
// Sobel operator for edge detection
const sobelX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]];
const sobelY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]];
However, edge detection proves inadequate for morphing as it:
- Detects all edges indiscriminately (shadows, texture, hair)
- Lacks semantic understanding of facial features
- Produces noisy, disconnected segments
Hybrid Approach: Combining Automation with Manual Refinement
The optimal workflow combines automated detection with manual adjustment:
- Initial Detection: MediaPipe provides baseline feature locations
- Automatic Conversion: Algorithm converts landmarks to line segments
- Manual Refinement: User adjusts lines for specific morphing requirements
- Quality Control: Visual preview ensures proper correspondence
This approach reduces setup time by ~80% while maintaining artistic control.
Conclusion
This journey from childhood fascination to technical understanding illustrates how seemingly magical effects often have elegant mathematical foundations. The Beier-Neely algorithm’s brilliance lies not in complexity, but in its intuitive mapping of how we naturally perceive facial features as connected line segments.
References
- Beier, T., & Neely, S. (1992). Feature-based image metamorphosis. ACM SIGGRAPH Computer Graphics, 26(2), 35-42.