Skugar6Ich Blog

Product Design Process Framework: How to turn a chaotic design workflow into a predictable system

When design processes are chaotic and unpredictable, the team wastes time and the business loses money. PDPF is a practical framework that turns design processes into clear rules of the game

Jul 28, 2025


In this post:

  1. Why PDPF Was Created

  2. PDPF Architecture: Key Components and Minimum Requirements

  • Principle: The Foundational Rules Behind PDPF

  • UpStream: What Tasks Enter the Process

  • DownStream: How We Work on Tasks

  • Analytics: What to Measure and Why It Matters

  • What You Need to Implement PDPF

  • How to Adapt PDPF for Different Teams

  • Common Risks in Adapting PDPF

  • How PDPF Works in Practice

  3. Final Thoughts


Short on time? Here's the gist:

PDPF is a framework that makes the design process manageable, predictable, and transparent.

It’s built on four key components:

  • Principle — sets shared rules and defines areas of responsibility

  • UpStream — helps clarify inputs and prioritize tasks

  • DownStream — structures the execution of tasks

  • Analytics — measures efficiency and helps improve the process

At the heart of the system is the «PonyMashka» — a template that helps frame tasks before work begins.

  • A basic Kanban process with clear stages and acceptance criteria

  • Metrics like Cycle Time, WIP, and Throughput enable deadline forecasting and bottleneck identification

  • The system works for both solo designers and multi-person teams

  • This article covers common implementation scenarios and real case studies

  • PDPF can be rolled out step by step — from templates and task boards to analytics and retrospectives


When there are no clear rules, teams miss release deadlines and end up redoing the same work multiple times. This breeds frustration both inside and outside the team: designers burn out, managers see no results. Ultimately, the company wastes time and money — often 20% to 50% of the original task budget.

Uncertainty seeps into everything — task scopes, communication, and estimation of timelines. Someone forgot to clarify the inputs, someone didn’t lock in agreements, someone handed off half-baked mockups — and the design team quickly adapts to chaos.

But design processes can be managed, transparent, and predictable. Not through bloated documentation, but through clear agreements and a system that reflects the real conditions of product development.

1. Why PDPF was created

In one project, we started building a billing section — a core feature meant to increase net revenue and boost transactional activity. The original plan was to ship a working product first, then improve it iteratively. Instead, we ended up with five different concepts for new functionality that, according to the founder, were going to «disrupt the market». We burned through the resources and shut the project down.

In another case, we were redesigning the subscriptions section in a streaming app. Managers wanted to release updates quickly to test new retention metrics. Developers asked for a simpler design to speed up implementation. But no one clarified upfront what the real priority was — speed, quality, or ease of development.

In these stories, the problem wasn’t just communication. What was missing was:

  • A shared understanding of the task

  • Common quality criteria

  • A clear forecast

Without those anchors, every discussion turns into endless debates and rework. That’s why I started documenting a set of principles — and over time, PDPF took shape. It helps teams reduce revision cycles, align on the expected outcome, and forecast timelines without making empty promises. The framework is useful not just for designers, but for everyone who works with them: managers, developers, product leads.

We tried the usual methods — Kanban, Scrum, sprints — but none of them offered clear guidance for action. Those frameworks were simply too abstract for design processes.

Each of these approaches helped in their own way — but none of them addressed the core questions:

  • How do we decide a task is ready to be worked on?

  • How do we align on quality criteria in advance?

  • How can we forecast timelines?

Design remained a weak spot — tasks were handed off without a clear understanding of what needed to be done or when it would be ready.


2. PDPF Architecture: Key Components and Minimum Requirements

The framework consists of four components — Principle, UpStream, DownStream, and Analytics. Each of them solves a specific problem:

  • Principle defines shared rules and agreements that let the other components function without constant supervision or conflict.

  • UpStream handles task preparation — what gets into the pipeline, why it matters, and in what order.

  • DownStream ensures a clear execution flow — from product features to research tasks.

  • Analytics shows how the system performs overall and helps identify quick wins for improvement.


2.1 Principle: the foundational rules behind PDPF

This section lays out the core rules, beliefs, and expectations that the entire framework is built on. They apply to any team, regardless of its size or the maturity of the product.

Design is the product

Design helps users achieve their goals — and helps the business get measurable results. In PDPF, every process starts with three questions: What are we doing? Who is it for? Why does it matter? A designer: 

  • Owns not just aesthetics, but logic, structure, and value;

  • Can challenge the initial request, clarify the brief, and help shape the task;

  • Becomes a co-creator of the product — not just an executor — thinking through how design impacts metrics and user experience.

Every decision is an investment

Every design decision is a form of investment. Team time, development cost, potential profit or loss — it all comes down to money. A designer:

  • Thinks like a product owner: prioritizes, weighs risks, and estimates ROI;

  • Looks at solutions through a business lens;

  • Reflects on the question: «If this were my money, would I still make this choice?»

Ownership means active involvement

In PDPF, the designer owns the task — from clarifying the brief to design oversight and retrospective. This means the designer has:

  • Autonomy — they make decisions independently, propose alternatives, and defend their point of view; 

  • Commitment — they take care of quality, timelines, and communication with other teams;

  • Responsibility — they don’t hand off a task until they’re confident it’s truly ready.

Quality is a shared agreement

Good design is about delivering a result that everyone aligned on. PDPF is built around clear quality criteria — defined in advance between the designer and the stakeholder, and used again during the retrospective to evaluate the outcome.

What makes a task «well-defined»:

  • Clarity — the goal is clear, and risks are documented

  • Validated hypothesis — we know the user problem is real

  • Logic — structure, script and architecture are thought through

  • Transparency — the status and next step are known

  • Final artifact — ready for handoff with no rework needed

  • Design oversight — the designer supports implementation through to delivery

This approach turns vague quality standards into something measurable and comprehensible. The team works in sync — regardless of personal taste. If you know in advance that the performer is not qualified for a task, you simply don’t assign it to them.

Process is a team effort

PDPF only works when everyone’s on the same page. The designer collaborates closely with product, engineering, and analytics — right from the start.

  • Decisions are made together;

  • Conflicts aren’t brushed aside — they’re resolved;

  • When in doubt, we rely on data, not hierarchy;

  • Everyone knows their scope of responsibility and where they make decisions.

Role-based responsibilities:

  1. Product Designer. Clarifies the task, generates solutions, aligns with stakeholders, prepares design artifacts, and oversees implementation

  2. Product Manager. Defines goals and priorities, helps decode the problem, co-creates solutions, and makes calls on MVP and value

  3. Developer. Reviews prototypes, clarifies constraints, and implements the solution

  4. Design Lead / Manager. Sets up processes, ensures quality, facilitates retrospectives, and supports team growth


2.2 UpStream: What tasks enter the process

UpStream ensures that tasks entering the workflow are clear and realistic. This component serves three key functions:

  • It reduces uncertainty and lowers risk

  • It enables backlog and priority management

  • It sets rules so only well-formed tasks move into DownStream

If you skip this step, it quickly becomes unclear what needs to be done and why — and quality starts to suffer. But before we define what makes a task «clear», we need to look at the different types of tasks, how much uncertainty they carry, and how to prioritize them.

Types of tasks

Tasks typically fall into two categories: new concepts and improvements. The first involves exploring new scenarios, researching, and generating hypotheses. The second is about growing, optimizing, and maintaining the existing product.

Other types may exist depending on the company, but the DownStream process can handle a wide range of task types — from user research to technical design work.

Levels of uncertainty

In PDPF, all tasks are classified into three levels — this helps quickly assess risks and forecast timelines:

  • S (Small) — simple and well-understood. For example, changing a button color or adding a field to a form.

  • M (Medium) — moderately complex. For instance, designing a new feature or updating a complex user flow.

  • L (Large) — high uncertainty. These require starting from scratch, doing research, testing multiple hypotheses, and breaking the work into smaller tasks.

How to prioritize tasks

Prioritization is always a shared effort — the designer, manager, and developer each assess the task from a different angle:

  • How much money it could generate or save

  • How difficult it is to implement

  • How it aligns with the company’s business goals

This approach is universal — it works in both startups and large-scale products. If a task brings no clear value or outcome, it’s better not to take it into work at all.

How to Make a Task Clear

Based on the PDPF principles, we created a simple template called the «PonyMashka». It’s a short, structured document that helps the whole team understand the task the same way. Think of it as a mini-brief, co-created by three people:

  • Product manager — owns business goals and prioritization

  • Developer — ensures the technical feasibility

  • Designer — defines user value, level of uncertainty, and input clarity

In PDPF, the «PonyMashka» acts as a Definition of Ready — a non-negotiable requirement for any task entering DownStream. If a task doesn’t meet the criteria, we send it back for revision.
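As an illustration, this Definition of Ready gate can be sketched as a simple validation check. A minimal sketch only: the field names below (`goal`, `user_problem`, `business_value`, `uncertainty`, `feasibility_confirmed`) are hypothetical stand-ins for whatever criteria your team agrees on, not fields prescribed by the framework.

```python
from dataclasses import dataclass

@dataclass
class PonyMashka:
    """Hypothetical mini-brief; the fields are illustrative, not prescribed by PDPF."""
    goal: str = ""
    user_problem: str = ""
    business_value: str = ""
    uncertainty: str = ""            # expected: "S", "M", or "L"
    feasibility_confirmed: bool = False

    def missing_criteria(self) -> list[str]:
        """List the criteria that are still empty or unconfirmed."""
        missing = [name for name in ("goal", "user_problem", "business_value")
                   if not getattr(self, name).strip()]
        if self.uncertainty not in ("S", "M", "L"):
            missing.append("uncertainty")
        if not self.feasibility_confirmed:
            missing.append("feasibility_confirmed")
        return missing

    def is_ready(self) -> bool:
        """Definition of Ready: the task enters DownStream only with no gaps."""
        return not self.missing_criteria()

draft = PonyMashka(goal="Redesign the subscriptions section", uncertainty="M")
print(draft.is_ready())              # False: the brief goes back for revision
print(draft.missing_criteria())
```

The point is not the tooling but the ritual: the check makes «send it back for revision» a mechanical outcome rather than a negotiation.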


2.3 DownStream: how we work on tasks

Once a task passes all filters and preparation, the execution phase begins. This part of the framework helps:

  • Manage time, quality, and expectations — through acceptance criteria and reviews

  • Design products people actually use

  • Reduce the number of revisions — thanks to a structured task intake and solid preparation

  • Improve the final output — through ongoing feedback and design supervision

How to structure task execution

DownStream is built on a simple and transparent system — a Kanban board. It shows what’s happening with each task at every stage, and what needs to happen to move it forward.

PDPF uses five columns that cover nearly all types of design work:

  • Ready to Design — tasks with a completed PonyMashka. These are ready to be picked up.

  • In Progress — active design work: research, prototyping, testing.

  • Review — final check and alignment with the manager, team, or stakeholders.

  • Support — post-handoff support: helping with implementation, answering questions.

  • Done — the task is fully completed and all commitments are fulfilled.

Depending on your organization’s workflow and lifecycle, your columns might vary. But in my experience, this setup covers around 90% of product design tasks.

What are WIP-limits and why they matter

WIP (Work in Progress) limits cap the number of tasks a team can have in progress at the same time.

Why this matters:

  • The team doesn’t take on more than it can finish

  • Tasks move all the way to completion, instead of getting stuck halfway

  • It's easier to forecast delivery timelines

If the In Progress column exceeds the WIP limit, we finish what’s already started before picking up anything new. This keeps the team focused and prevents overload. A good starting point is three tasks per designer in In Progress and one in Review — you can adjust based on workload and real data.
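A WIP check like this can be automated against a board export. The snapshot and task IDs below are invented for illustration; the limits of three tasks in In Progress and one in Review mirror the starting point suggested above.

```python
from collections import Counter

# Invented board snapshot: task id -> (assignee, column)
board = {
    "T-101": ("alice", "In Progress"),
    "T-102": ("alice", "In Progress"),
    "T-103": ("alice", "In Progress"),
    "T-104": ("alice", "Review"),
    "T-105": ("bob",   "In Progress"),
}

WIP_LIMITS = {"In Progress": 3, "Review": 1}

def over_limit(board: dict) -> list[tuple[str, str]]:
    """Return (assignee, column) pairs that exceed their WIP limit."""
    counts = Counter((who, col) for who, col in board.values())
    return [(who, col) for (who, col), n in counts.items()
            if col in WIP_LIMITS and n > WIP_LIMITS[col]]

print(over_limit(board))  # prints [] (alice is at the limit, not over it)
```

Running this on every board update gives the team an early signal to finish started work before pulling anything new.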

How the core process flows — from intake to MVP

Every task starts with alignment. The team comes together to review inputs, confirm agreements, and revisit the «PonyMashka» if anything’s unclear. Then the process follows this sequence:

  1. Task Research. We need to check whether we’re solving a real problem. This includes competitor analysis, user feedback, in-depth interviews, and quantitative surveys.

    Outcome: validated hypotheses or a reason to revisit the task.

  2. Information Architecture. At this stage, we define the structure of the future product — what entities exist, how they relate, and what user scenarios are involved. If anything is unclear, we test the structure using card sorting or tree testing.

    Outcome: an architecture diagram that can guide further design.

  3. High-Level Concept and Navigation Model. We map out how users will move through the product and what elements will appear on key screens.

    It’s good practice to explore several variations and test them with users when needed.

    Outcome: a prototype of the future product and a documented navigation model.

  4. Stakeholder Presentation. The team presents the proposed solution — explaining the rationale, the supporting data, and the goal behind the chosen concept. If open questions remain, we log them, revise the design, and return to discussion.

  5. Defining the MVP. Here, we decide what will go into the first iteration — and what can wait. We break the task into smaller sub-tasks and run each one through the same principles: clarification, hypotheses, quality criteria.

Once the MVP is defined, we begin the detailed design of each feature.

How detailing, review, and design oversight work

Even if a task seems clear, it still needs another round of validation:

  • What problem are we solving?

  • What outcome is the business expecting?

  • What limitations exist — time, tech, legal risks?

Only then do we move on to formulating hypotheses and prototyping. At this stage, we turn ideas into concrete, testable hypotheses.

Example

If we introduce recommendation cards — like «Tip of the week», «You might like this», or «Feature spotlight» — we could drive more internal clicks by presenting the content in non-ad-style formats.

Pros:

  • Users tend to avoid anything that looks like an ad — banner blindness is real

  • This format could positively impact both time spent and total click count

Cons:

  • Risk: generating and maintaining diverse content formats is expensive

  • Moderating the content may become a challenge

  • We’d likely need to build a content team

  • Long-term, it could turn into content for the sake of content

  • Without strong analytics, suggestions may feel vague and impersonal

Next, we create a draft — sketches or clickable prototypes. We review them with the team and, if needed, run quick user tests.

Once the solution is shaped, it needs full-team approval: we check the logic, refine edge cases, assess feasibility, and lock in quality criteria.

After the final review, the task is handed off to development. At this point, it’s critical to:

  • Ensure mockups are clear

  • Document all states

  • Specify animations and interactivity

The designer stays involved — joining team syncs, answering questions quickly, and making sure the implementation matches what was agreed on.

After release, we return to the task:

  • Compare actual metrics with target ones

  • Discuss with the product manager what worked and what didn’t

  • Improve iteratively, if needed

This final loop closes the cycle — helping the team avoid repeating the same mistakes in future tasks.


2.4 Analytics: what to measure and why it matters

Analytics in design is a tool for spotting weak points in the process — and fixing them in time. If the team doesn’t track metrics, it loses the ability to forecast timelines and respond quickly to problems.

In PDPF, the designer’s primary zone of responsibility runs from the Ready status to the Support handoff. This is where the designer owns the process — from clarifying the task to final sign-off.

Why this segment matters:

  • It’s where most design decisions are made

  • It’s where the highest risks occur — unclear requirements, conflicts, revisions

  • It’s where timeline and quality can actually be influenced

By measuring everything up to the Support stage, the team gets a real sense of how long design takes — and where bottlenecks are hiding.

Any task management system can support this — Jira, Trello, Notion — as long as it allows:

  • Clear task statuses

  • A traceable activity history

  • Data updates at least 3 times a week

Core metrics to track:

  • Cycle Time — how long it takes from start of active work to design handoff. This is key for deadline forecasting.

  • WIP (Work in Progress) — how many tasks are being worked on right now. Helps prevent overload.

  • Throughput — how many tasks the team actually completes over a set period.

  • Lead Time — how long it takes from entering Ready to reaching Done. Good for seeing the full picture.
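Given a task log with status-change dates, these metrics fall out of simple date arithmetic. A minimal sketch with invented dates, measuring Cycle Time from the start of active work to design handoff and Lead Time from entering Ready to closing, per the definitions above:

```python
from datetime import date
from statistics import median

# Invented task log: (task_id, entered Ready, started active work,
#                     design handoff, task closed)
tasks = [
    ("T-1", date(2025, 6, 2),  date(2025, 6, 4),  date(2025, 6, 12), date(2025, 6, 16)),
    ("T-2", date(2025, 6, 3),  date(2025, 6, 9),  date(2025, 6, 16), date(2025, 6, 19)),
    ("T-3", date(2025, 6, 10), date(2025, 6, 11), date(2025, 6, 20), date(2025, 6, 24)),
]

# Cycle Time: start of active work -> handoff; Lead Time: Ready -> Done
cycle_times = [(handoff - started).days for _, _, started, handoff, _ in tasks]
lead_times  = [(closed - ready).days    for _, ready, _, _, closed in tasks]

print("median Cycle Time:", median(cycle_times), "days")  # 8 days
print("median Lead Time:",  median(lead_times),  "days")  # 14 days
print("Throughput:", len(tasks), "tasks this period")     # 3 tasks
```

This is why the tracker requirements above matter: without honest status timestamps, none of these numbers can be computed.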


How to run retrospectives and improve the process

Tracking metrics is only half the job. The real value comes from turning them into improvements. That’s why PDPF includes a regular retrospective loop.

The optimal rhythm is once per quarter. The team reviews the data, discusses bottlenecks, and proposes process improvements. This cycle is known as the feedback loop — a continuous loop of improvement.

What a PDPF retro cycle looks like:

  1. Data collection — export metrics and charts (Cycle Time, WIP, Throughput)

  2. Analysis — discuss what went wrong and why

  3. Problem spotting — identify the tasks or patterns that slow things down

  4. Solutioning — decide which hypotheses are worth testing

  5. Fix the hypothesis — e.g., «If we introduce a sign-off template, delays will decrease»

  6. Check up next quarter — see if the change worked

📌 Example: Quarterly Analysis

A design team reviewed their data over a 3-month period:

  • Average WIP: 4.8 tasks per designer (with a limit of 3)

  • Throughput dropped by 12%

  • Median Cycle Time for large tasks increased from 10 to 16 days

To address this, the team could:

  • Lower the WIP limit to 2

  • Introduce a quick-approval template

  • Add another sync-review session with developers

How to forecast timelines by task type

To give realistic time estimates, PDPF uses median Cycle Time values for each task class. These forecasts are updated quarterly to stay relevant and accurate.

📌 Example: Median Timelines from a Quarterly Analysis

S-tasks

  • Median: 4 working days

  • Forecast: «An S-task will likely be completed in 5 days or less, with 90% confidence.»

M-tasks

  • Median: 9 working days

  • Forecast: «An M-task will likely be completed in 10 days or less, with 75% confidence.»

L-tasks

  • Median: 27 working days

  • Forecast: «An L-task typically takes around 6 weeks, with 85% confidence.»
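One way to derive forecasts like these is a nearest-rank percentile over last quarter’s Cycle Times per task class. The sample below is invented; with it, the 75th percentile of M-task Cycle Times yields the «10 days or less, with 75% confidence» style of statement.

```python
import math

def percentile(samples: list[int], pct: float) -> int:
    """Nearest-rank percentile of historical Cycle Times (in days)."""
    ranked = sorted(samples)
    k = max(0, math.ceil(pct / 100 * len(ranked)) - 1)
    return ranked[k]

# Invented Cycle Times (working days) for M-tasks from the last quarter
m_tasks = [6, 7, 8, 8, 9, 9, 10, 10, 12, 15]

print("median:", percentile(m_tasks, 50))           # 9
print("75th percentile:", percentile(m_tasks, 75))  # 10
```

Recomputing these percentiles quarterly keeps the forecasts anchored in the team’s actual throughput rather than optimistic estimates.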


2.5 What you need to implement PDPF

PDPF doesn’t require a large team or complex tools — but there are minimum conditions for the system to actually work.

Core roles required:

  • Product Designer — owns the task from start to finish, sets direction

  • Product Manager — defines goals, priorities, and aligns expectations; shapes the task in a developer-friendly way

  • Developer — checks feasibility, clarifies constraints, implements the solution

  • Team Lead — ensures quality, supports process development

If your team is small, roles can be combined. The key is to make responsibilities explicit and agreed upon.

Cultural requirements:

  • Willingness to be transparent

  • Ability to see value in templates and tracking

  • Regular retrospectives

  • Respect for agreements

  • Ownership from start to finish

Helpful tools:

  • Task tracker (Jira, Trello, Notion)

  • Knowledge base (Confluence, Notion)

  • Regular team syncs (Slack, Teams)

  • Metric dashboards (Google Sheets, Power BI, Miro)

If you can’t roll everything out at once, start small: introduce “PonyMashkas”, set up a Kanban board with basic columns, and commit to one retro cycle. That’s enough to see real results within a month.


2.6 How to adapt PDPF for different teams

PDPF is designed as a flexible structure, so it can be implemented in teams of any size. The approach just needs to adapt slightly — here’s what to focus on.


2.7 Common risks when adapting PDPF

Every system takes time to implement and fully integrate — PDPF is no exception. Here are the most common risks teams face during adoption:

  • Resistance to change. Teams may see the framework as unnecessary overhead — especially if they’ve been working intuitively. Designers, in particular, often see the «PonyMashka» or the tracker as extra “process for the sake of process.”

  • Late involvement of developers. If developers aren’t brought in early, you risk designing solutions that aren’t feasible. They should be part of the discussion from the UpStream stage.

  • Loss of accountability. As the team grows, ownership can become fuzzy. To avoid this, clearly define and document roles and areas of responsibility.

  • The Support phase. Many assume design ends at handoff. But support and iteration with engineering can take up to a quarter of the total effort — plan for that in your timelines.

  • Weak data foundation. If tasks aren’t tracked or statuses aren’t updated, your metrics won’t mean much. PDPF analytics only work if the data is honest and up to date.

  • Misinterpreting metrics. Metrics without context are dangerous. To make data useful, teams need to ask the right questions — and use answers to improve the process.

These risks shouldn’t block adoption. The key is to name them early and align on how the team will spot and handle them.


2.8 How PDPF works in practice

A framework is just theory until it’s tested in real projects and delivers measurable results. Here’s where the value becomes real.

Case 1: One designer doing it all

Problem: A single designer handled everything — from concept to dev handoff. Mockups were sent back for rework 3–4 times. Average task time: 12 working days, with 5 days lost to approvals and rework.

What was implemented:

  • «PonyMashka» introduced as a mandatory Definition of Ready

  • Priorities aligned with the product manager

  • Simple Kanban board with WIP limit set to 2

Result:

  • Rework rate dropped from 3–4 to 1

  • Average task time dropped to 7 working days

  • Team gained a shared understanding of task status

Now the designer spends time designing — not chasing approvals — and can confidently communicate scope and timelines to managers.

Case 2: No structure, no progress

Problem: A four-person design team had no shared process. Tasks were picked up chaotically — whoever saw a task first started it.

  • Average Cycle Time: 14 working days

  • 6–8 tasks in progress per designer at any time

  • 40% of tasks were returned for fixes

What was implemented:

  • Shared Kanban board with consistent columns

  • WIP limit set to 3

  • Twice-weekly team syncs

  • Assigned reviewer for each task

Result:

  • Median Cycle Time dropped to 9 working days

  • No more than 4 tasks in progress per designer

  • Task return rate fell from 40% to 10%

The team finally started functioning as a unit — with clear priorities, predictable delivery, and minimal rework.

Case 3: Everyone doing their own thing

Problem: In a 15-person design department, every team had its own process.

  • PMs complained about missed deadlines

  • Devs sent back nearly 50% of mockups

  • Managers had no tools to plan around design

What was implemented:

  • Unified «PonyMashka» template

  • Jira-based Kanban board with mandatory reviews

  • Quarterly retrospectives

  • A designated review lead in each sub-team

  • Metrics tracked via Google Sheets

Result after one quarter:

  • 100% of tasks had a documented status

  • Rework dropped from 47% to 14%

  • PMs and devs began engaging earlier in the process

  • Managers started using timeline forecasts for planning


3. Final Thoughts

It’s not about columns, templates, or metrics — it’s about culture. Any system breaks down if the team isn’t ready for transparency and doesn’t know how to collaborate. Culture is what determines whether the process works or just becomes another formality.

Remember, no framework can replace honest human communication and common sense. PDPF is a flexible tool that needs to be adapted to your team. The more responsibility the team takes for their work, the greater the benefits.

If you want to discuss implementing the framework in your team or have any questions, feel free to reach out via Telegram or email. I’m happy to share my experience and hear your story. Maybe your case will be the next step in PDPF’s evolution.


Thanks to these people

Dimitry Panuli (editor) and Matvey Ivanknov (English proofreader)

© 2025 Anton Skugarov