From 60 Repos to One: How Wix Tackled Monorepo Migration - Part 1
- Wix Engineering

At Wix, we faced a challenge that will sound familiar to many developers: managing an ever-growing backend codebase spread across 60 separate repositories. With over 30 million lines of code, 6,000 daily builds and 600 daily production deployments, our development workflows were starting to strain under the weight of fragmentation.
This is the story of how our teams came together to tackle one of the biggest engineering overhauls in Wix's history - a migration to a monorepo.
For us, the decision was not just about technology. It was about enabling better collaboration, improving developer efficiency, and ensuring our systems could scale for the future. But transitioning to a monorepo at this scale wasn’t simple. It required building new tools, rethinking workflows, and solving problems that many organizations face as they grow.
Let’s look at the common development feedback loop. It’s a rinse-and-repeat process that ends when a developer merges code to the master branch. When any component in the flow suffers from performance issues, the entire feedback loop slows down.

VMR - a.k.a. Virtual Monorepo
Back then, around 2018, Git didn’t support large-scale repositories the size of the Wix backend monorepo.
Before the monorepo era, we used a virtual monorepo. It was a clever trick: a commit-hash-based entity (a “vector”) that glued together around 60 smaller repositories. But despite its ingenuity, it came with headaches:
Aligning with the latest virtual monorepo vector - after pulling from Git, you also had to “pull” from the VMR to align the codebase with source dependencies and 2nd-party dependencies.
Why does my build take so much time? I only added a small change…
Excessive network traffic after every VMR vector pull.
Missing features - code search and cross-repository refactoring were limited at best.
Distributing changes across a virtual monorepo composed of 60 repositories is not a simple task.


We Needed a Monorepo
As developers at Wix, we faced growing challenges with our distributed virtual repositories. Our builds were running slowly, and whenever teams made low-level changes such as framework updates, it triggered extensive invalidations throughout the codebase.
This caused significant delays for teams working on interdependent projects. Looking ahead, we recognized that incorporating AI-generated code would only make these complex problems more difficult to manage.
In addition, working in a distributed virtual repository environment means that you sometimes need to perform cross-repo builds on up to 60 repositories for a single commit, just to make sure your change does not break anything. Moreover, since those builds did not share the Bazel cache, due to differing VMR vector states, test feedback could take a couple of hours to arrive.
As you can see below, during midday peak times builds waited in queue for more than 10 minutes, with over 17K Bazel actions queued for execution on Remote Build Execution (RBE) workers.

The development experience was lacking in such a distributed environment, as code search and code refactoring abilities were missing. This forced us to build our own in-house solutions, which raised the complexity bar even higher.
As we’ve previously mentioned, the number of builds and deployments has skyrocketed. Consequently, our existing CI/CD infrastructure faced immense pressure. Splitting work across 60 repositories wasn’t helping; it was fragmenting our efforts and slowing us down. We needed a single, unified codebase to:
Increase development velocity
Simplify the existing complexity of working on 60 repos
Enable faster production deployments
Solve the cross-repo builds pain
However, merging 60 repositories into a single monorepo of this size required us to rethink not only our tools but also how we worked as developers.
Overcoming the Challenges
At the start of this journey, we knew that moving to a monorepo wasn’t just about copying code into one place. It was about solving real, developer-focused problems, like keeping builds fast, optimizing workflows, and ensuring local development remained efficient.
A big challenge that we are still facing is running both systems, the VMR and the monorepo, side by side: first during the evaluation phase and later throughout the migration itself.
Scaling CI/CD Pipelines
A monorepo meant every developer’s change would now run through a single CI/CD pipeline. But with thousands of builds per day, the system had to be fast and reliable.
Knowing that the existing infrastructure wouldn’t behave the same when building 33 million lines of code, we had to rethink how to improve the performance and resilience of our build pipelines. A naive approach didn’t cut it at the monorepo’s scale: we hit out-of-memory (OOM) exceptions, server crashes, no-space-left-on-device errors, and so on.
Eventually, we achieved a small win with a fully successful build, which took approximately five hours, by:
Creating a lab environment just for the build with a customized Bazel build pipeline.
Scaling up resources from 8 CPU/16 GB to 30 CPU/160 GB.
Allowing only a single tenant per node on our build infrastructure.
Limiting the Remote Build Execution (RBE) build concurrency to 100 actions at a time.
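For illustration, such a concurrency cap can be expressed with Bazel’s --jobs flag, which limits the number of concurrently running actions. A minimal sketch, with a placeholder RBE endpoint rather than Wix’s actual configuration:

# Cap remote execution at 100 concurrent actions.
# The endpoint is a placeholder, not Wix's actual RBE service.
bazel build //... \
  --remote_executor=grpcs://rbe.example.com \
  --jobs=100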


The Scale of Wix Backend CI

At Wix’s scale, two facts were major hurdles we had to solve: 450 developers were each spending ~30 minutes on Git clone per CI build, and cloning the monorepo locally took ~1 hour.
We started by setting performance expectations:
Have a performant version control system to support any scale.
Total build performance must be the same as or better than what we have today.
Local development must support a monorepo of any size, allowing for future growth.
Drastically reduce the clone times, both on CI & Local.
Where do we start?
Comprehensive Research: Extensive research into monorepo implementations at other large-scale companies.
Centralized Repository: Creation of a new monorepo Git repository to house all existing virtual repositories within a unified Bazel WORKSPACE.
Third-Party Dependency Management: Some virtual repositories were using specific, non-managed 3rd-party versions; we had to allow such repositories to pin their custom versions in addition to the globally used 3rd-party versions.
Continuous Synchronization: Development of the MonoSync tool to keep the monorepo up-to-date by merging commits from all 60 virtual repositories (60 VMRs → 1 monorepo); a rough illustration follows this list.
Build Process Adaptation: Modification of the production build process, including RBE and the Bazel wrapper, to accommodate the monorepo structure.
Testing and Iteration: Testing various build scenarios within the monorepo environment to identify and address potential issues, surface obstacles, experiment with Git, and assess the road ahead.
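MonoSync’s internals deserve a post of their own, but to give a feel for the kind of operation involved, here is one standard Git technique for folding a source repository into a monorepo subdirectory while preserving its history. The repository names and paths are placeholders; this is not MonoSync’s actual mechanism:

# Hypothetical illustration - not MonoSync itself.
# Import one source repo into a monorepo subdirectory, keeping its history.
cd monorepo
git remote add repo-a git@github.com:organization/repo-a.git
git fetch repo-a
# Graft repo-a's master branch under the repos/repo-a/ prefix
git subtree add --prefix=repos/repo-a repo-a master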
Working with a 22GB Monorepo: What's the Process?
First, we focused on CI and wanted to solve the challenge of cloning the monorepo upon every build invocation. These are the iterations we went through until we reached a performant monorepo clone:
1. The Naive Approach
Use raw Git with a naive clone command, on both CI and local machines (see the command sketch after this list)
Long clone duration - both CI & local
Slow git commands - status, log, diff
IDE - the latest and greatest Git features didn’t always integrate well, due to the large number of files and occasional freezes
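For reference, the naive baseline amounts to a plain, full-history clone (the URL is a placeholder):

# Full-history clone of all refs - simple, but painfully slow at 22GB scale
git clone git@github.com:organization/monorepo.git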

We can safely say that the naive raw Git solution is not optimal, taking around 5 hours and 20 minutes in total.

2. Using Amazon EFS as Git Index Storage
We aimed to share a single network drive among approximately 400 build agents. This was achieved using Git worktree to enable multiple working directories for a single local Git index housed on Amazon Elastic File System (Amazon EFS). By employing Git worktree, we could make the Git index on the EFS network drive available to any build agent within the build queue.
What is Amazon EFS?
"Amazon EFS provides serverless, fully elastic file storage so that you can share file data without provisioning or managing storage capacity and performance"

Result:
Unfortunately, it didn’t go too well, due to:
Performance issues - transferring the Git index over the wire for every build was slow.
Git worktree - the same branch cannot be checked out simultaneously from multiple build agents, which meant we couldn’t run two master branch builds at the same time.
We can see that we improved the time it takes to fetch the monorepo, but the total build still took around 5 hours.

3. Git Index Compression and S3 Storage
Our next step was trying to control the size of the monorepo. The Git index was an ideal candidate for compression due to the following characteristics:
Text-based
Structured data
Repetitive information
Small file sizes
In addition, we used the following Git features:
Single branch - master branch only
History - limited to 90 days
GC - turned off
unpackLimit - keep the fetched objects loose without packing:
  Improves I/O performance
  Prioritizes fetch speed over immediate object accessibility
  Handles repositories with frequent, large batches of changes
manyFiles - enable Git support for large repositories:
  Faster git status checks
  Reduced index size
See an example git clone command:
git clone git@github.com:organization/monorepo.git \
--shallow-since="90 days ago" \
--single-branch \
-b master \
-c feature.manyFiles=true \
-c fetch.unpackLimit=1 \
-c core.fsmonitor=true \
-c merge.stat=false \
-c pull.rebase=true \
-c gc.auto=0 \
-c gui.GCWarning=false \
-c receive.autoGC=false \
/path/to/git/index
Result:
Compressing the result of the git index clone from above into a tar.gz file, reducing the git index size from ~22GB to ~2.5GB.
Downloading the compressed git index from Amazon S3.
Decompressing the git index using pigz (parallel gzip - decompression of the gzip while utilizing all available cores).
Pulling the latest delta changes on the build agent.
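A condensed sketch of that pipeline, with placeholder bucket names and paths:

# Seeding step: compress the cloned .git index with pigz, upload to S3.
tar -I pigz -cf git-index.tar.gz -C /path/to/clone .git
aws s3 cp git-index.tar.gz s3://build-cache/git-index.tar.gz

# On every build agent: download, decompress in parallel, pull the delta.
aws s3 cp s3://build-cache/git-index.tar.gz .
mkdir -p /workspace/monorepo
tar -I pigz -xf git-index.tar.gz -C /workspace/monorepo
cd /workspace/monorepo
git fetch origin master     # pull the latest delta changes
git checkout -f master      # materialize the working tree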

The clone duration was reduced from 20 minutes to an impressive 1 minute. Now the next challenge was reducing the 5 hour build duration.
In Part 2, I’ll dive into how we boosted our build strategy through intelligent target selection, shortening those lengthy builds and introducing efficient processes that only build what actually needs building. Stay tuned!
Go deeper - watch Zachi Nachshon's Wix Engineering Conference 2024 talk - From 60 Repos to One: How Wix Tackled Monorepo Migration like a Tech Giant:

This post was written by Zachi Nachshon