whitepaper / A Decade of AI

Chapter 1

This whitepaper feels like a new chapter for Pixelz. It’s about the journey, the thinking, and the experience behind how Pixelz approaches AI today. As the industry continues to evolve, we wanted to share not only what we’ve built, but the perspective that comes from spending years working at the intersection of technology, creativity, and e-commerce.

Who We Are

Pixelz began in 2011 in Holstebro, Denmark, back when we were called Remove the Background. The idea was to help e-commerce teams create high-quality product images for their sites.

That simple business idea shaped the next decade for us. As our customers grew and their needs became more complex, we learned that traditional post-production workflows were difficult to scale. So we set out to rethink the process entirely.

That led us to develop S.A.W.™ (Specialist-Assisted Workflow), an assembly-line approach to image post-production. By breaking the process into smaller, defined steps, we created a system where human expertise, technology, and AI work together.

Today, the industry looks very different. AI is everywhere, and new companies are emerging with promises of instant scale and speed. And while the tools may be new, many of these approaches are still starting from scratch. Pixelz is not.

Our AI capabilities are the result of more than 15 years of hands-on experience, continuous iteration and real-world application. We’ve spent over a decade building, training, and refining AI for post-production, always paired with human expertise. This hybrid model, combining people and technology, is what allows us to scale while maintaining the level of precision and consistency e-commerce demands.

And our AI journey is far from finished; it continues to evolve with every image we process and every new solution we add to our portfolio.


In the pages that follow, we’ll introduce Pixelz’s AI approach and products, share the thinking and milestones that led us here, and offer a full look into our generation solutions in Chapter 2. Most importantly, we want to demystify our AI, showing that it doesn’t have to be intimidating, especially when it’s designed to work alongside people, not replace them.

Campaign Imagery from Pixelz

Early experimentation with AI: Automated Mole Removal

Before we take you further, we want to tell you how it all started. One of our earliest functional AI models focused on skin retouching, specifically the identification and removal of moles. The system combined traditional skin detection, rule-based candidate filtering, and an AI model trained to distinguish true moles from false positives.

The challenge was in the accuracy. Features such as loose hair often produced false positives. To address this, we trained the AI using a dataset of approximately 65,000 manually labeled images, enabling it to reliably distinguish between moles and similar visual cues. Once classified, a standard Photoshop script performed the actual removal.

From a technical standpoint, the project worked. We were able to automate the process end-to-end. But as client preferences shifted toward a more natural look, the need for mole removal started to fade. What was once a clear use case became less relevant, but the learning didn’t go away. It became a stepping stone in how we think about and build AI today.

And more importantly, it reinforced a core principle for us: AI doesn’t replace human expertise; it works with it. That balance is still at the center of how we work today.

Mole Removal through AI Example

S.A.W.™ as a Hybrid Approach

50% of the workflow at Pixelz is automated using AI

That project was just one piece of a much bigger system. Over time, it became clear that scaling e-commerce imagery wasn’t about a single tool; it was about the production process around it.

Our approach to AI didn’t start with generative tools or the latest breakthroughs. It’s built on a foundation that is much less flashy, but far more important.

At Pixelz, that important foundation is S.A.W.™.

As we mentioned earlier, S.A.W.™ is about breaking image production into smaller, manageable steps and making sure each step is handled in the best possible way. Sometimes that’s a specialist human editor. Sometimes it’s an AI model. Sometimes it’s a simple automated script. The key is knowing what works best where.

Today, more than 50% of that workflow is automated using AI. But that didn’t happen overnight. It’s the result of years spent building structured, repeatable processes designed to scale both quality and efficiency. As we’ve seen across industries, lean production leads to better outcomes, more consistency, faster turnaround times, and fewer surprises.

Image editing is no different.

What makes S.A.W.™ powerful is that it’s not tied to one type of work. Whether it’s flat-lay, ghost mannequin, on-model, editorial content, AI-generated visuals, or even video, everything moves through the same underlying system. The steps may change, but the logic stays the same.

So what do those steps actually look like, and where does AI fit in?

S.A.W. Diagram

Where it all starts: Image Preparation

Order Specifications

Within this system, AI plays a central role in managing and directing images through the Pixelz workflow.

It starts the moment an image is uploaded.

Before any edits are made, the system needs to understand what it’s working with. Is it a model shot, a product on a mannequin, footwear, furniture, or something else entirely? That first step, known internally as Image Preparation, sets the direction for everything that follows.

It might sound simple, but classifying images at scale is anything but. Every image is different: materials, styling, props, lighting. To handle that, Pixelz has developed a suite of in-house AI classifiers that go beyond basic categorization, detecting everything from models and skin to props, contrast, and overall complexity of the image.

This information does more than describe the image; it helps determine what happens next. Each image is assigned a complexity score, which determines how it’s routed through production, how long it will take, and how it’s prioritized to meet deadlines.

In that sense, Image Preparation is about planning the work ahead. At scale, this is what allows us to schedule, prioritize, and deliver, even across very high volumes.
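To make the idea concrete, here is a minimal sketch of how classifier outputs might be combined into a complexity score that routes an image through production. The tag names, weights, and thresholds are hypothetical illustrations, not Pixelz’s actual values:

```python
def complexity_score(labels):
    """Combine detected image attributes into a single routing score.
    Weights here are made-up examples: model and skin shots tend to
    need more careful retouching than simple product shots."""
    weights = {"model": 3, "skin": 2, "props": 2, "high_contrast": 1}
    return sum(w for tag, w in weights.items() if tag in labels)

def route(labels):
    """Pick a production path based on the complexity score."""
    score = complexity_score(labels)
    if score >= 5:
        return "specialist_editor"   # complex: human-led retouching
    if score >= 2:
        return "ai_assisted"         # mixed: AI with human review
    return "automated"               # simple: fully scripted pipeline

print(route({"model", "skin", "props"}))  # score 7 → specialist_editor
```

The same score could also drive the scheduling and deadline estimates mentioned above, since it is a proxy for how long an image will spend in production.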

Behind the scenes, every step also feeds back into the system. Data is continuously logged and reused to improve performance over time, expanding the range of images AI can handle.

And preparation doesn’t stop at analysis. The system also sets the stage for production by automatically generating working files, complete with layered structures used by editors. What once required manual setup is now built into the process.

AI Trimaps and Masking

Most retouching steps aren’t limited by how to edit an image, but by how well the system understands it.

Before anything can be changed, the system needs to know what it’s looking at. Where is the product? Where is the background? What actually matters? The basic editing itself is often simple, but knowing where exactly to apply it is not.

At Pixelz, that’s where it starts most of the time. And when it comes to masking, we don’t begin with perfect outlines; we begin with certainty. Instead of asking AI to evaluate every pixel, the problem is simplified by creating a trimap. This divides the image into three zones:

  • Foreground — what’s clearly part of the product
  • Background — what can be removed
  • Unknown — the narrow edge where precision matters

That edge adapts to the image: wider for complex details like hair or sheer fabrics, tighter for clean, defined edges.

From there, the system focuses only on what’s uncertain. A matting model analyzes color, detail, transparency, and light to determine how the product should blend into the background.

The result is soft, natural edges rather than a hard cutout, while the rest of the image stays untouched.
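The trimap step described above can be sketched in a few lines. This is a hypothetical illustration, not Pixelz’s implementation: it derives the three zones from a rough binary mask using shift-based dilation and erosion, where the `band` parameter controls the width of the unknown edge. Note that `np.roll` wraps at image borders, which is acceptable here for an object that doesn’t touch the frame:

```python
import numpy as np

def make_trimap(mask, band=1):
    """Split a rough binary mask into background (0), an unknown
    edge band (1) where the matting model runs, and foreground (2)."""
    fg = mask.astype(bool)
    dilated, eroded = fg.copy(), fg.copy()
    for _ in range(band):
        grow, shrink = dilated.copy(), eroded.copy()
        for axis in (0, 1):
            for shift in (1, -1):
                grow |= np.roll(dilated, shift, axis)    # expand outward
                shrink &= np.roll(eroded, shift, axis)   # contract inward
        dilated, eroded = grow, shrink
    trimap = np.zeros(mask.shape, dtype=np.uint8)
    trimap[dilated] = 1   # anywhere the object might reach: unknown
    trimap[eroded] = 2    # definitely foreground
    return trimap
```

Only the pixels marked 1 then need the expensive matting pass; everything else is already decided, which is exactly the efficiency argument made above.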

From Understanding to Execution: AI, Code, and Algorithms

Once the system understands where the product is and where it isn’t, everything else becomes more precise.

While trimaps and object detection often provide the starting point, they’re just one part of the system. Behind the scenes, the whole retouching process relies on a large number of specialized AI models, each trained for specific tasks and applied at different stages of the workflow.

From there, a clear pattern takes over: AI decides what matters in the image, and our workflow turns that understanding into action. Take cropping as an example. The crop itself is simple, but knowing where to crop is not. AI identifies the right focal point, whether it’s a product or key body landmarks, and the system applies it consistently across every image.
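The cropping example can be shown as a small sketch. This is a hypothetical illustration of the division of labor: AI supplies the focal point, and straightforward code centers a fixed-size crop on it, clamped so the box never leaves the image:

```python
def crop_box(focal_x, focal_y, img_w, img_h, crop_w, crop_h):
    """Center a fixed-size crop on an AI-detected focal point,
    clamping the box so it stays inside the image bounds."""
    left = min(max(focal_x - crop_w // 2, 0), img_w - crop_w)
    top = min(max(focal_y - crop_h // 2, 0), img_h - crop_h)
    return left, top, left + crop_w, top + crop_h

# Focal point near the top edge: the box clamps to the image boundary.
print(crop_box(500, 120, 1200, 800, 600, 600))  # → (200, 0, 800, 600)
```

The interesting part is the input, not the arithmetic: the same function works for any image once the detector has identified the product or body landmark to center on.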

In other cases, AI provides the detail, and algorithms refine it. Path creation is a good example: AI generates a highly precise mask, and our system translates that into a clean, usable path, optimized for design, layout, and real-world use.

Specifications

Sitting above the technical layer is the specification system, which defines what should happen to each image.

Specifications determine:

  • which elements should be detected
  • which retouching steps should be applied
  • what output standards must be met

Behind the Scenes in the Studio

Behind the scenes, we can create thousands of different combinations of settings in a template.

AI might detect things like hangers, shadows, or mirrors, but whether anything is done about them depends on the specification.

Specifications are how we interact with our customers: they are how a brand tells the system what the final output should look like. That can be something straightforward, like a standard e-commerce template or marketplace requirement, or something more creative, like an editorial style defined together with the brand.

Specifications act as the decision layer of the workflow: AI identifies key elements in the image, code applies the changes, and specifications define what should happen and how the final output should look.
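That decision layer can be sketched as a small example. The structure and field names below are hypothetical, intended only to show the principle: detections describe what is in the image, while the specification decides what is done about it:

```python
# Hypothetical customer specification: which detected elements to clean up,
# which to leave alone, and what the delivered file should look like.
SPEC = {
    "remove": ["hanger", "shadow"],
    "keep": ["mirror"],
    "output": {"background": "white", "format": "JPEG", "max_px": 2000},
}

def plan_steps(detections, spec):
    """Turn AI detections plus a specification into a list of edit steps."""
    steps = [f"remove_{d}" for d in sorted(detections) if d in spec["remove"]]
    steps.append(f"background_{spec['output']['background']}")
    return steps

# A mirror is detected but kept, per the spec; the hanger is removed.
print(plan_steps({"hanger", "mirror"}, SPEC))
```

This is why the same detections can yield thousands of different outputs: the AI layer stays fixed while each template swaps in a different specification.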

A System Designed to Scale

This is how Pixelz uses AI in post-production today, across PDP imagery, editorial, lifestyle, and video.

More than a decade of structured workflows, experimentation, and iteration has led to a hybrid model where AI, code, and human expertise are intentionally combined within the S.A.W.™ framework. Each plays a distinct role, working together to support both quality and scale.

AI isn’t responsible for the entire process, and it’s not meant to be. Instead, it expands what’s possible: speed, consistency, and control, all while maintaining quality.

Now, we turn to the next evolution.

If post-production AI is about refining and optimizing what already exists, generative AI is about creating what does not. In the next section, we explore how that shift builds directly on the foundation established here.