
Grzegorz Dubiel

29-10-2025

The Trick of Running Face Recognition in a Web Worker in React

When you want to perform simple face recognition tasks on your photos, you’ll probably end up using an app that calls an external API under the hood. What if you want to keep your data private, on your local machine? I’ve been in that situation a few times, as I’m quite obsessed with data privacy. From a JavaScript developer’s perspective, building that kind of app usually means calling an external service or asking our Python-proficient colleagues to write an API (which, by the way, would be a great opportunity to learn Python). Fortunately, there’s an old but still useful solution: face-api.js.

What is face-api.js?

When deciding which package to use in a project, its age and how actively it is maintained are usually among the main factors. But in this case, that’s not really an issue: face-api.js appears to be a complete library, ready to use in your project.

This solution does not use LLMs or any other transformer models. face-api.js is a wrapper around TensorFlow.js and mainly works with the SSD Mobilenet V1, ResNet-34-like Face Recognition, and MTCNN (experimental) models.

SSD (Single Shot MultiBox Detector) is a model designed for multiple object detection in a single pass, trained and adapted for face detection tasks. The model performs well in real-time face detection. Its size is relatively small (about 5.4 MB) because it has been quantized.

The ResNet-34-like Face Recognition Model is designed for face recognition tasks. It generates face descriptors that can be used to compare two faces by measuring the similarity between their descriptors with the Euclidean distance metric, an implementation of which is also provided by face-api.js.
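
To make that concrete, here is a minimal sketch of how two descriptors can be compared; the 0.6 threshold is a commonly used starting value, not something prescribed by the library:

typescript

import * as faceapi from "face-api.js";

// Compare two 128-dimensional face descriptors produced by the recognition model.
// The lower the Euclidean distance, the more likely both faces belong to the same person.
function isSamePerson(
  a: Float32Array,
  b: Float32Array,
  threshold = 0.6, // assumed starting point; tune it for your data
): boolean {
  return faceapi.euclideanDistance(a, b) < threshold;
}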

MTCNN (Multi-task Cascaded Convolutional Neural Networks) ships with face-api.js mostly for experimental purposes, but it can still be very useful. This model is more specialized for face detection tasks. Its distinguishing feature is landmark detection (for example, eyes, mouth, and nose), performed simultaneously with face detection. The model size is about 2 MB.

Those are, in my view, the most interesting models in face-api.js, but more are available, such as a lightweight variant for face recognition tasks and a face expression model.

Of course, it’s also possible to use your own models with face-api.js.

Why use face-api.js in a Web Worker?

Before we dive into implementing inference in a Web Worker, it’s good to understand why it’s worth the effort to manage communication between the main thread and the worker.

Let’s say we have a component that allows users to upload an example face to the browser and then upload a set of other images. The task of the component is to anonymize each face in the given set that belongs to the person from the example image.

First, we need to load the models:

typescript

import { useEffect } from "react";
import * as faceapi from "face-api.js";

const MODEL_PATH = `/models`;

async function loadModels(onError: (err: any) => void) {
  try {
    await Promise.all([
      faceapi.nets.ssdMobilenetv1.loadFromUri(MODEL_PATH),
      faceapi.nets.faceLandmark68Net.loadFromUri(MODEL_PATH),
      faceapi.nets.faceRecognitionNet.loadFromUri(MODEL_PATH),
    ]);
    console.log("models loaded");
  } catch (err) {
    onError("Failed to load face detection models");
    console.error(err);
  }
}

function useLoadModels() {
  useEffect(() => {
    loadModels((e) => console.log("Error Loading Models", e));
  }, []);
}

export default useLoadModels;

We can keep the models in the ./public directory. Once the component is mounted, the function for loading the models is called.

Next, we will define our core function for inference. For now, we’ll put the entire inference logic into a custom React hook:

typescript

import { useState } from "react";
import * as faceapi from "face-api.js";

type ImageWithDescriptor = {
  id: number;
  descriptors: Float32Array<ArrayBufferLike>[];
  detections: faceapi.WithFaceDescriptor<
    faceapi.WithFaceLandmarks<
      {
        detection: faceapi.FaceDetection;
      },
      faceapi.FaceLandmarks68
    >
  >[];
  imgElement: HTMLCanvasElement;
};

async function createCanvasFromDataUrl(
  dataUrl: string,
): Promise<HTMLCanvasElement> {
  const res = await fetch(dataUrl);
  const blob = await res.blob();
  const bitmap = await createImageBitmap(blob);
  const canvas = document.createElement("canvas");
  canvas.width = bitmap.width;
  canvas.height = bitmap.height;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(bitmap, 0, 0, canvas.width, canvas.height);
  return canvas;
}

function compareImages(
  exampleDescriptors: Float32Array[],
  targetImagesWithDescriptors: ImageWithDescriptor[],
) {
  const threshold = 0.5;

  const matchedImagesWithDescriptors: ImageWithDescriptor[] = [];
  targetImagesWithDescriptors.forEach(({ detections, ...rest }) => {
    const matchedDescriptors = detections.filter(({ descriptor }) => {
      return exampleDescriptors.some((exampleDescriptor) => {
        const distance = faceapi.euclideanDistance(
          exampleDescriptor,
          descriptor,
        );
        return distance < threshold;
      });
    });

    if (matchedDescriptors.length) {
      matchedImagesWithDescriptors.push({
        detections: matchedDescriptors,
        ...rest,
      });
    }
  });
  return matchedImagesWithDescriptors;
}

async function extractAllFaces(
  image: HTMLImageElement | HTMLCanvasElement | ImageBitmap,
) {
  const detections = await faceapi
    .detectAllFaces(image as any)
    .withFaceLandmarks()
    .withFaceDescriptors();

  return detections;
}

function useFace() {
  const [exampleImage, setExampleImage] = useState<string | null>(null);
  const [targetImages, setTargetImages] = useState<string[]>([]);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const [outputImages, setOutputImages] = useState<string[]>([]);

  const handleFace = async () => {
    if (!exampleImage) {
      return;
    }
    try {
      setIsLoading(true);
      setError(null);

      const exampleCanvas = await createCanvasFromDataUrl(exampleImage);

      const allFaces = await extractAllFaces(exampleCanvas);

      if (!allFaces.length) {
        setError("No faces detected in the example image");
        return;
      }

      const canvas = document.createElement("canvas");
      const ctx = canvas.getContext("2d")!;

      const updatedTargetImages: string[] = [];
      const targetImagesWithId = targetImages.map((targetImage, index) => ({
        id: index,
        src: targetImage,
      }));
      const targetImagesWithDescriptors = await Promise.all(
        targetImagesWithId.map(async ({ id, src }) => {
          const canvas = await createCanvasFromDataUrl(src);
          const detections = await extractAllFaces(canvas);

          return {
            id,
            detections,
            descriptors: detections.map((detection) => detection.descriptor),
            src,
            imgElement: canvas,
          };
        }),
      );

      const matchedImagesWithDescriptors = compareImages(
        allFaces.map((face) => face.descriptor),
        targetImagesWithDescriptors,
      );

      for (const targetImage of targetImagesWithDescriptors) {
        const detections =
          matchedImagesWithDescriptors.find(
            (matchedImageWithDescriptors) =>
              matchedImageWithDescriptors.id === targetImage.id,
          )?.detections || [];

        if (!detections?.length) {
          updatedTargetImages.push(targetImage.src);
          continue;
        }
        const targetImgElement = targetImage.imgElement;
        canvas.width = targetImgElement.width;
        canvas.height = targetImgElement.height;
        ctx.drawImage(targetImgElement, 0, 0, canvas.width, canvas.height);

        for (const detection of detections) {
          const { x, y, width, height } = detection?.detection?.box;
          ctx.filter = "blur(60px)";
          ctx.drawImage(
            targetImgElement,
            x,
            y,
            width,
            height,
            x,
            y,
            width,
            height,
          );
          ctx.filter = "none";
        }

        updatedTargetImages.push(canvas.toDataURL());
      }

      setOutputImages(updatedTargetImages);
    } catch (err) {
      setError("Error processing image");
      console.error(err);
    } finally {
      setIsLoading(false);
    }
  };

  return {
    handleFace,
    isLoading,
    error,
    outputImages,
    loadExampleImage: setExampleImage,
    loadTargetImages: setTargetImages,
    exampleImage,
    targetImages,
  };
}

export default useFace;

A lot happens here. The most important function is handleFace, which organizes the entire process of recognizing, comparing, and blurring target faces.

First, we need to extract all the faces from the example image. These faces will serve as examples to decide which faces in the target set need to be blurred.

typescript

// REST OF THE CODE

async function extractAllFaces(image: HTMLCanvasElement) {
  const detections = await faceapi
    .detectAllFaces(image)
    .withFaceLandmarks()
    .withFaceDescriptors();

  return detections;
}

// REST OF THE CODE

const exampleCanvas = await createCanvasFromDataUrl(exampleImage);
const allFaces = await extractAllFaces(exampleCanvas);

// REST OF THE CODE

Next, we need to extract the faces, together with their descriptor objects, from the target set of images, using the same helper function (extractAllFaces) as in the previous step.

typescript

// REST OF THE CODE
const targetImagesWithDescriptors = await Promise.all(
  targetImagesWithId.map(async ({ id, src }) => {
    const canvas = await createCanvasFromDataUrl(src);
    const detections = await extractAllFaces(canvas);

    return {
      id,
      detections,
      descriptors: detections.map((detection) => detection.descriptor),
      src,
      imgElement: canvas,
    };
  }),
); // REST OF THE CODE

Then, we compare the faces using the Euclidean distance algorithm:

typescript

// REST OF THE CODE

function compareImages(
  exampleDescriptors: Float32Array[],
  targetImagesWithDescriptors: ImageWithDescriptor[],
) {
  const threshold = 0.5;

  const matchedImagesWithDescriptors: ImageWithDescriptor[] = [];
  targetImagesWithDescriptors.forEach(({ detections, ...rest }) => {
    const matchedDescriptors = detections.filter(({ descriptor }) => {
      return exampleDescriptors.some((exampleDescriptor) => {
        const distance = faceapi.euclideanDistance(
          exampleDescriptor,
          descriptor,
        );
        return distance < threshold;
      });
    });

    if (matchedDescriptors.length) {
      matchedImagesWithDescriptors.push({
        detections: matchedDescriptors,
        ...rest,
      });
    }
  });
  return matchedImagesWithDescriptors;
}

// REST OF THE CODE
const matchedImagesWithDescriptors = compareImages(
  allFaces.map((face) => face.descriptor),
  targetImagesWithDescriptors,
);
// REST OF THE CODE

face-api.js also provides a built-in helper for comparing descriptors with this algorithm.
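
If you would rather not manage thresholds and loops yourself, face-api.js ships a FaceMatcher helper built on the same metric. A minimal sketch; the label string is arbitrary and 0.5 mirrors the threshold used above:

typescript

import * as faceapi from "face-api.js";

// Returns true when the query descriptor matches any of the example descriptors.
function matchWithFaceMatcher(
  exampleDescriptors: Float32Array[],
  queryDescriptor: Float32Array,
): boolean {
  const labeled = new faceapi.LabeledFaceDescriptors(
    "example-person",
    exampleDescriptors,
  );
  const matcher = new faceapi.FaceMatcher([labeled], 0.5);
  // Faces whose distance exceeds the threshold come back labeled "unknown".
  return matcher.findBestMatch(queryDescriptor).label !== "unknown";
}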

After a successful comparison, we have everything we need to blur the relevant faces. So, we perform a very simple face blurring:

typescript

// REST OF THE CODE
for (const targetImage of targetImagesWithDescriptors) {
  const detections =
    matchedImagesWithDescriptors.find(
      (matchedImageWithDescriptors) =>
        matchedImageWithDescriptors.id === targetImage.id,
    )?.detections || [];

  if (!detections?.length) {
    updatedTargetImages.push(targetImage.src);
    continue;
  }
  const targetImgElement = targetImage.imgElement;
  canvas.width = targetImgElement.width;
  canvas.height = targetImgElement.height;
  ctx.drawImage(targetImgElement, 0, 0, canvas.width, canvas.height);

  for (const detection of detections) {
    const { x, y, width, height } = detection?.detection?.box;
    ctx.filter = "blur(60px)";
    ctx.drawImage(targetImgElement, x, y, width, height, x, y, width, height);
    ctx.filter = "none";
  }

  updatedTargetImages.push(canvas.toDataURL());
}
// REST OF THE CODE

Finally, we can call the hooks inside the React component:

JSX

import { useCallback } from "react";
import { Button } from "./ui/button";

import { Preview } from "./preview";
import { ImageUploader } from "./preview/image-uploader";
import useFace from "@/hooks/use-face";
import useLoadModels from "@/hooks/use-load-models";

function AnonymizerClient() {
  useLoadModels();
  const {
    outputImages,
    handleFace,
    loadExampleImage,
    loadTargetImages,
    isLoading,
    exampleImage,
    targetImages,
  } = useFace();

  const handleProcess = () => {
    handleFace();
  };

  const handleOnImageUpload = useCallback(
    (data: string[]) => {
      loadTargetImages((prevData) => [...prevData, ...data]);
    },
    [loadTargetImages],
  );
  return (
    <div className="flex w-full max-w-[800px] flex-col items-center gap-1.5">
      <Preview
        exampleImagePlaceholder={
          <ImageUploader
            type="single"
            onImageUpload={(img) => loadExampleImage(img[0])}
          />
        }
        targetImagesPlaceholder={
          <ImageUploader type="multiple" onImageUpload={handleOnImageUpload} />
        }
        images={outputImages?.length ? outputImages : targetImages}
        exampleImage={exampleImage}
      />
      <div className="flex space-x-4"></div>{" "}
      {isLoading && <span>Loading...</span>}
      <Button type="button" onClick={handleProcess} disabled={isLoading}>
        Process
      </Button>
    </div>
  );
}

export default AnonymizerClient;

Everything looks great, so…

…where’s the problem?

The problem is that once users click the "Process" button, the UI freezes. That’s because JavaScript’s runtime is single-threaded: by default, everything runs on a single thread, the so-called main thread, even if the processor has eight or more cores available. Even though the models are lightweight, inference plus blurring faces plus rendering the UI is simply too much for the main thread alone.
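
To see the effect in isolation, here is a tiny illustration (not taken from the app’s code): any long synchronous task keeps the main thread busy, so clicks, animations, and React re-renders stall until it finishes.

typescript

// Click the button and the page becomes unresponsive for five seconds.
document.querySelector("button")?.addEventListener("click", () => {
  const start = performance.now();
  while (performance.now() - start < 5000) {
    // busy-wait: no rendering or input handling can happen in the meantime
  }
});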

Solving the problem

Fortunately, there is a solution for this: Web Workers. The plan is simple: we delegate the inference and blurring code to a Web Worker. This way, the UI stays responsive because the main thread is offloaded. In the browser environment, there are several types of workers: Dedicated Workers, Shared Workers, and Service Workers. We will focus on the Dedicated Worker, which, according to MDN, is accessible only from the script that spawned it.

To use it, a file containing the worker’s code must be created:

typescript

// worker.js

self.onmessage = () => {
  console.log("Hello from worker");
  self.postMessage("Done");
};

Then, on the main thread, an instance of the Worker class must be created, providing the path to the worker file:

typescript

// main.js

const worker = new Worker("worker.js");

worker.onmessage = (event) => {
  console.log(event.data); // 'Done'
};

worker.postMessage("start"); // triggers the worker's onmessage handler

Communication is handled through messages.
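
Messages are copied using the structured clone algorithm. Large binary payloads such as ArrayBuffers (which we will send later in this post) can instead be transferred, so no copy is made. A minimal sketch, reusing the worker instance from the snippet above:

typescript

// main.js: move the buffer into the worker instead of copying it.
const buffer = new ArrayBuffer(1024 * 1024);
worker.postMessage({ type: "process", payload: buffer }, [buffer]);
// The buffer is now detached on this side: buffer.byteLength === 0.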

Migrating the inference logic

Now that a solution to the problem has been identified, the inference logic must be moved from the main thread to the worker.

We will use a package to handle the communication, as native web solutions are not very convenient. The tool we’ll use is called Comlink. With this package, all we need to do is define a class that will abstract our worker; the methods of this class are called when the relevant message comes from the main thread.

To illustrate, here is an example of a worker written without any libraries:

typescript

// worker.ts

import type { MessageHandler } from "./types";

self.onmessage = (event: MessageEvent) => {
  const { type, payload } = event.data;

  switch (type) {
    case "printMessage":
      console.log(payload);
      break;

    case "getMessageLength":
      self.postMessage({
        type: "getMessageLengthResult",
        payload: payload.length,
      });
      break;
  }
};

// main.ts

import type { MessageHandler } from "./types";

const worker = new Worker(new URL("./worker.ts", import.meta.url), {
  type: "module",
});

const message = "Hello";

worker.onmessage = (event) => {
  const { type, payload } = event.data;

  if (type === "getMessageLengthResult") {
    console.log("Message length:", payload);
  }
};

worker.postMessage({ type: "printMessage", payload: message });
// logs -> 'Hello'
worker.postMessage({ type: "getMessageLength", payload: message });
// returns -> 5

Here is the version using Comlink:

typescript

// worker.ts
import type { MessageHandler } from "./types";
import * as Comlink from "comlink";

class MessageWorker implements MessageHandler {
  printMessage(message: string) {
    console.log(message);
  }
  getMessageLength(message: string) {
    return message.length;
  }
}
const worker = new MessageWorker();
Comlink.expose(worker);

// main.ts
import type { MessageHandler } from "./types";
import * as Comlink from "comlink";

const worker = new Worker(new URL("./worker.ts", import.meta.url), {
  type: "module",
});
const workerApi = Comlink.wrap<MessageHandler>(worker);
const message = "Hello";

workerApi.printMessage(message); // logs 'Hello' in the worker
workerApi.getMessageLength(message); // returns a promise that resolves to 5

Cleaner, isn't it?
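
Both snippets import a shared MessageHandler type from ./types, which the article doesn’t show. A minimal sketch of what it could look like, based on how it is used:

typescript

// types.ts
export interface MessageHandler {
  printMessage(message: string): void;
  getMessageLength(message: string): number;
}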

So, all we need to do is move our inference logic to the worker, right? Well... not quite.

Another issue...

There’s one problem we need to tackle if we want to make our client-side inference usable.

We need the canvas API to process the image, but it isn’t available in the web worker.

The problem with canvas is also related to face-api.js, as the library uses it under the hood. Fortunately, there’s a substitute for canvas called OffscreenCanvas. It should also solve the additional issues related to image blurring in the web worker.
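
Unlike document.createElement("canvas"), an OffscreenCanvas can be constructed directly inside a worker and exposes a similar 2D context. A minimal sketch of the pattern we will rely on later:

typescript

// Inside a worker: there is no DOM, but OffscreenCanvas is available.
async function makeBlurredSquare(): Promise<Blob> {
  const canvas = new OffscreenCanvas(640, 480);
  const ctx = canvas.getContext("2d")!;
  ctx.filter = "blur(50px)"; // 2D context filters work here as well (in browsers that support them)
  ctx.fillRect(0, 0, 640, 480);
  return canvas.convertToBlob(); // replaces canvas.toDataURL() from the DOM version
}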

Finally, We Can Migrate

First, we will create the code for our worker. This way, we can clearly see how the problem is solved.

Let’s define the utils:

typescript

// serializers.ts

export function serializeDetection(det: any): any {
  if (!det) return det;

  const box = det.box ?? det._box;
  return {
    score: det.score ?? det._score,
    classScore: det.classScore ?? det._classScore,
    className: det.className ?? det._className,
    box: box ? serializeBox(box) : undefined,
    imageDims: det.imageDims ?? det._imageDims,
  };
}

export function serializeBox(box: any): any {
  return {
    x: box.x ?? box._x,
    y: box.y ?? box._y,
    width: box.width ?? box._width,
    height: box.height ?? box._height,
  };
}

export function serializeLandmarks(landmarks: any): any {
  if (!landmarks?.positions) return landmarks;
  return {
    positions: landmarks.positions.map((pt: any) => ({
      x: pt.x ?? pt._x,
      y: pt.y ?? pt._y,
    })),
  };
}
export function serializeFaceApiResult(result: any) {
  if (!result) return result;

  const out: Record<string, any> = {};

  if ("detection" in result) {
    out.detection = serializeDetection(result.detection);
  }
  if ("descriptor" in result) {
    out.descriptor = ArrayBuffer.isView(result.descriptor)
      ? Array.from(result.descriptor) // or keep Float32Array for zero-copy
      : result.descriptor;
  }
  if ("expression" in result) {
    out.expression = result.expression;
  }
  if ("landmarks" in result) {
    out.landmarks = serializeLandmarks(result.landmarks);
  }
  if ("alignedRect" in result) {
    out.alignedRect = serializeDetection(result.alignedRect);
  }

  return out;
}

We need to serialize the properties of these objects, as face-api.js results (detections, landmarks, descriptors) are complex class instances (objects with prototypes and methods), and postMessage can only send data that survives structured cloning. In plain English: if we don’t serialize these objects, we end up with properties that are missing or renamed on the main thread, for example box._y instead of box.y.
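
A quick illustration of the issue (not from the app’s code): structured cloning, which postMessage uses, keeps own data fields such as _x, but getters defined on the prototype, like box.x on face-api.js classes, do not survive the trip.

typescript

class Box {
  constructor(private _x: number) {}
  get x() {
    return this._x;
  }
}

const cloned = structuredClone(new Box(1));
// cloned is a plain object: { _x: 1 }; the `x` getter and the prototype are gone.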

Next, we set up face-api.js in the worker and add some utils as well:

typescript

// face-detection.worker.ts

import * as Comlink from "comlink";
import * as faceapi from "face-api.js";
import type { FaceDetectionWorker, ImageWithDescriptors } from "./types";
import { serializeFaceApiResult } from "./worker/serializers";

const MODEL_PATH = `/models`;

faceapi.env.setEnv(faceapi.env.createNodejsEnv());

faceapi.env.monkeyPatch({
  Canvas: OffscreenCanvas,
  createCanvasElement: () => {
    return new OffscreenCanvas(480, 270);
  },
});

const createCanvas = async (transferObj: ArrayBuffer | undefined) => {
  try {
    const buf = transferObj;
    if (!buf) {
      return new OffscreenCanvas(20, 20);
    }

    const blob = new Blob([buf]);
    const bitmap = await createImageBitmap(blob);
    const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
    const ctx = canvas.getContext("2d")!;
    ctx.drawImage(bitmap, 0, 0, bitmap.width, bitmap.height);
    return canvas;
  } catch (error) {
    console.error(
      "Error creating image from buffer, using empty canvas instead",
      error,
    );
    return new OffscreenCanvas(20, 20);
  }
};

The first thing we need to do to make face-api.js work in a web worker is to explicitly provide a canvas alternative. We also have a function for creating a canvas from image data, which will be used in face detection and recognition tasks.

Now we can define our class, which, thanks to Comlink, will serve as syntactic sugar for the worker API:

typescript

class WorkerClass implements FaceDetectionWorker {
  async extractAllFaces(transferObj: ArrayBuffer) {
    const canvas = await createCanvas(transferObj);
    const detections = await faceapi
      .detectAllFaces(canvas as unknown as faceapi.TNetInput)
      .withFaceLandmarks()
      .withFaceDescriptors();
    return detections.map(serializeFaceApiResult);
  }

  async detectMatchingFaces(transferObj: {
    allExampleFaces: Float32Array[];
    allTargetImages: ArrayBuffer;
  }) {
    const allExampleFaces = transferObj.allExampleFaces;
    const detections = await this.extractAllFaces(transferObj.allTargetImages);
    const threshold = 0.5;

    const matchedDescriptors = detections.filter(({ descriptor }) => {
      return allExampleFaces.some((exampleDescriptor) => {
        const distance = faceapi.euclideanDistance(
          exampleDescriptor,
          descriptor,
        );
        return distance < threshold;
      });
    });
    return matchedDescriptors.map(serializeFaceApiResult);
  }

  async drawOutputImage(imageWithDescriptors: ImageWithDescriptors) {
    const decodedCanvas = await createCanvas(imageWithDescriptors.imgElement);

    const canvas = new OffscreenCanvas(
      decodedCanvas.width,
      decodedCanvas.height,
    );
    const ctxRes = canvas.getContext("2d", { willReadFrequently: true })!;
    const detections = imageWithDescriptors.detections;
    ctxRes.drawImage(
      decodedCanvas as unknown as CanvasImageSource,
      0,
      0,
      canvas.width,
      canvas.height,
    );

    for (const detection of detections) {
      const { x, y, width, height } = detection?.detection?.box;

      const padding = 0.2;
      const expandedX = Math.max(0, x - width * padding);
      const expandedY = Math.max(0, y - height * padding);
      const expandedWidth = Math.min(
        canvas.width - expandedX,
        width * (1 + 2 * padding),
      );
      const expandedHeight = Math.min(
        canvas.height - expandedY,
        height * (1 + 2 * padding),
      );

      for (let i = 0; i < 3; i++) {
        ctxRes.filter = "blur(50px)";
        ctxRes.drawImage(
          decodedCanvas as unknown as CanvasImageSource,
          expandedX,
          expandedY,
          expandedWidth,
          expandedHeight,
          expandedX,
          expandedY,
          expandedWidth,
          expandedHeight,
        );
      }
      ctxRes.filter = "none";
    }

    return canvas.convertToBlob();
  }
}

The approach proposed by Comlink is very convenient: each method of the class is invoked when the corresponding message reaches the worker, so we can reason about it as if we were calling a regular object.

We also moved the logic for comparing images and faces, which previously lived in the handleFace and compareImages functions of the useFace hook. That logic is now encapsulated in the detectMatchingFaces method, and the blurring code has been moved and adapted as well.
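
The FaceDetectionWorker interface that WorkerClass implements, and the ImageWithDescriptors type used by drawOutputImage, live in a shared types module that isn’t shown in the article. Here is a minimal sketch of what it could look like, inferred from how the methods are called; the loose any[] for serialized detections is a deliberate simplification:

typescript

// types.ts (assumed shape)
export type ImageWithDescriptors = {
  id: number;
  src: string;
  imgElement: ArrayBuffer; // raw image bytes, decoded into a canvas inside the worker
  detections: any[]; // serialized face-api.js detection results
};

export interface FaceDetectionWorker {
  extractAllFaces(transferObj: ArrayBuffer): Promise<any[]>;
  detectMatchingFaces(transferObj: {
    allExampleFaces: Float32Array[];
    allTargetImages: ArrayBuffer;
  }): Promise<any[]>;
  drawOutputImage(imageWithDescriptors: ImageWithDescriptors): Promise<Blob>;
}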

The last thing we need to do on the worker side is to load the models and expose the instance of our worker class:

typescript

// REST OF THE CODE

async function loadModels() {
  console.log("WorkerClass init...");
  await Promise.all([
    faceapi.nets.ssdMobilenetv1.loadFromUri(MODEL_PATH),
    faceapi.nets.faceLandmark68Net.loadFromUri(MODEL_PATH),
    faceapi.nets.faceRecognitionNet.loadFromUri(MODEL_PATH),
  ]);
  console.log("worker initialized and models loaded");
}

(async () => {
  await loadModels();
  const worker = new WorkerClass();
  Comlink.expose(worker);
})();

Rewriting the Logic on the Main Thread

Now, we need to adjust the logic from the hook a little bit.

The first thing we need to add is the worker initialization. We will initialize the worker in a useEffect hook and assign it to a ref.

typescript

import { useEffect, useRef, useState } from "react";
import * as faceapi from "face-api.js";
import * as Comlink from "comlink";
import type { FaceDetectionWorker } from "../types";

function useFace() {
  const [exampleImage, setExampleImage] = useState<string | null>(null);
  const [targetImages, setTargetImages] = useState<string[]>([]);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const [outputImages, setOutputImages] = useState<string[]>([]);
  const workerRef = useRef<Worker | null>(null);

  useEffect(() => {
    const runWorker = async () => {
      const worker = new Worker(
        new URL("../face-detection.worker", import.meta.url),
        { type: "module" },
      );

      workerRef.current = worker;
    };

    runWorker().catch((error) => {
      console.error("Error initializing worker:", error);
    });

    return () => {
      workerRef.current?.terminate();
    };
  }, []);
}

// REST OF THE CODE...

Then, in our handleFace handler, we need to wrap the worker using the wrap method from Comlink.

typescript

function useFace() {
  // REST OF THE CODE

  const handleFace = async () => {
    if (!exampleImage || !workerRef.current) {
      return;
    }
    try {
      const api = Comlink.wrap<Comlink.Remote<FaceDetectionWorker>>(
        workerRef.current as any,
      );

      setIsLoading(true);
      setError(null);
    } catch (err) {
      setError("Error processing image");
      console.error(err);
    } finally {
      setIsLoading(false);
    }
  };

  // REST OF THE CODE
}

This gives us a nice, accessible API that allows us to seamlessly communicate with our worker, just as if it were a class.

typescript

function useFace() {
  // REST OF THE CODE

  const handleFace = async () => {
    try {
      // REST OF THE CODE

      const exampleArrayBuffer = await (
        await fetch(exampleImage)
      ).arrayBuffer();

      const allExampleFaces = await api.extractAllFaces(exampleArrayBuffer);
      const targetImagesWithId = targetImages.map((targetImage, index) => ({
        id: index,
        src: targetImage,
      }));

      // REST OF THE CODE
    } catch (err) {
      setError("Error processing image");
      console.error(err);
    } finally {
      setIsLoading(false);
    }
  };

  // REST OF THE CODE
}

After detecting all example faces in the example image, we can finally detect and blur faces in the target images.

typescript

// REST OF THE CODE

async function getImageWithDetections(
  allExampleFaces: Float32Array[],
  targetImageData: { id: number; src: string },
  detector: (transferObj: {
    allExampleFaces: Float32Array[];
    allTargetImages: ArrayBuffer;
  }) => Promise<
    faceapi.WithFaceDescriptor<
      faceapi.WithFaceLandmarks<
        { detection: faceapi.FaceDetection },
        faceapi.FaceLandmarks68
      >
    >[]
  >,
) {
  const { id, src } = targetImageData;
  const arrayBuffer = await (await fetch(src)).arrayBuffer();

  const arrayBufferForDetector = arrayBuffer.slice(0);

  const payload = {
    allExampleFaces,
    allTargetImages: arrayBufferForDetector,
  };

  const matchedDescriptors = await detector(payload);

  return {
    id,
    src,
    imgElement: arrayBuffer,
    detections: matchedDescriptors,
  };
}

function useFace() {
  // REST OF THE CODE

  const handleFace = async () => {
    try {
      // REST OF THE CODE
      for (const targetImage of targetImagesWithId) {
        const imageWithDescriptors = await getImageWithDetections(
          allExampleFaces.map((face) => face.descriptor),
          targetImage,
          api.detectMatchingFaces,
        );
        const output = await api.drawOutputImage(imageWithDescriptors);

        const url = URL.createObjectURL(output);
        setOutputImages((prevState) => [...prevState, url]);
      }
    } catch (err) {
      setError("Error processing image");
      console.error(err);
    } finally {
      setIsLoading(false);
    }
  };

  // REST OF THE CODE
}

We extract getImageWithDetections into its own function because we don’t want to nest too much code inside the loop; it would quickly become messy. The function takes everything needed to obtain the faces to blur: the example face descriptors, the target image in which we look for faces matching the examples, and the function that detects the matching faces.
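
One optional refinement, not used in the article’s code: since the slice(0) copy of the ArrayBuffer is only needed by the worker, it could be passed with Comlink.transfer so postMessage moves it instead of cloning it a second time. A hypothetical variant of the call inside getImageWithDetections:

typescript

import * as Comlink from "comlink";

// Mark the buffer as transferable; after the call it is detached on the main thread.
const matchedDescriptors = await detector(
  Comlink.transfer(payload, [payload.allTargetImages]),
);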

The entire module with the useFace hook looks like this:

typescript

import { useEffect, useRef, useState } from "react";
import * as faceapi from "face-api.js";
import * as Comlink from "comlink";
import type { FaceDetectionWorker } from "../types";

async function getImageWithDetections(
  allExampleFaces: Float32Array[],
  targetImageData: { id: number; src: string },
  detector: (transferObj: {
    allExampleFaces: Float32Array[];
    allTargetImages: ArrayBuffer;
  }) => Promise<
    faceapi.WithFaceDescriptor<
      faceapi.WithFaceLandmarks<
        { detection: faceapi.FaceDetection },
        faceapi.FaceLandmarks68
      >
    >[]
  >,
) {
  const { id, src } = targetImageData;
  const arrayBuffer = await (await fetch(src)).arrayBuffer();

  const arrayBufferForDetector = arrayBuffer.slice(0);

  const payload = {
    allExampleFaces,
    allTargetImages: arrayBufferForDetector,
  };

  const matchedDescriptors = await detector(payload);

  return {
    id,
    src,
    imgElement: arrayBuffer,
    detections: matchedDescriptors,
  };
}

function useFace() {
  const [exampleImage, setExampleImage] = useState<string | null>(null);
  const [targetImages, setTargetImages] = useState<string[]>([]);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const [outputImages, setOutputImages] = useState<string[]>([]);
  const workerRef = useRef<Worker | null>(null);

  useEffect(() => {
    const runWorker = async () => {
      const worker = new Worker(
        new URL("../face-detection.worker", import.meta.url),
        { type: "module" },
      );

      workerRef.current = worker;
    };

    runWorker().catch((error) => {
      console.error("Error initializing worker:", error);
    });

    return () => {
      workerRef.current?.terminate();
    };
  }, []);

  const handleFace = async () => {
    if (!exampleImage || !workerRef.current) {
      return;
    }
    try {
      const api = Comlink.wrap<Comlink.Remote<FaceDetectionWorker>>(
        workerRef.current as any,
      );

      setIsLoading(true);
      setError(null);

      const exampleArrayBuffer = await (
        await fetch(exampleImage)
      ).arrayBuffer();

      const allExampleFaces = await api.extractAllFaces(exampleArrayBuffer);
      const targetImagesWithId = targetImages.map((targetImage, index) => ({
        id: index,
        src: targetImage,
      }));

      for (const targetImage of targetImagesWithId) {
        const imageWithDescriptors = await getImageWithDetections(
          allExampleFaces.map((face) => face.descriptor),
          targetImage,
          api.detectMatchingFaces,
        );
        const output = await api.drawOutputImage(imageWithDescriptors);

        const url = URL.createObjectURL(output);
        setOutputImages((prevState) => [...prevState, url]);
      }
    } catch (err) {
      setError("Error processing image");
      console.error(err);
    } finally {
      setIsLoading(false);
    }
  };

  return {
    handleFace,
    isLoading,
    error,
    outputImages,
    loadExampleImage: setExampleImage,
    loadTargetImages: setTargetImages,
    exampleImage,
    targetImages,
  };
}

export default useFace;

Wrapping up

After jumping through a few hoops, we managed to solve a problem that many developers would give up on, reaching for an external service instead. Even though our apps are usually connected to cloud services and remote servers these days, it’s worth thinking about users and their privacy, and it can benefit us as developers too. There’s no need to run a server or serverless function for every single task, and when we process users’ data locally on their machines, we don’t pay for the compute. We also don’t bear the same responsibility for sensitive data processed on the user’s machine as we would if it were stored in a database hosted on our own servers.

Thanks for reading and stay tuned! 🫡
