Houzz.com

Migration to Redis Cluster

At Houzz, we use Redis as the de-facto in-memory data store for our applications, including the web servers, mobile API servers and batch jobs. In order to support the growing demands of our applications, we migrated from an ad hoc collection of single Redis servers to Redis Cluster during the first half of the year.

To date, we have gained the following benefits from the migration:

  • Ability to scale up without the need to modify applications.
  • No additional proxies between clients and servers.
  • Lower capacity requirement and lower operational cost.
  • Built-in master/slave replication.
  • Greater resilience to single points of failure.
  • Functional parity with single Redis servers, including support for multi-key queries under certain circumstances.

Redis Cluster also has limitations. It does not support environments where IP addresses or TCP ports are remapped. Although it has built-in replication, as we ultimately discovered, few client libraries, if any, support it. For certain operations, such as creating new connections and multi-key queries, Redis Cluster has longer latencies than single servers.

In this post, we will share our experiences with the migration, the lessons we learned, the hurdles we encountered, and the solutions we proposed.

Functional Sharding

Some of our applications use Redis as a permanent data store, while others use it as a cache. In a typical setting, there is a Redis master that processes write requests and propagates the changes to a number of slaves. The slaves serve only read requests. One of the slaves is configured to dump the memory to disk periodically. The dumps are backed up into the cloud. We use Redis Sentinel to do automatic failover from failed masters to slaves. Our applications access Redis through HAProxy.

Historically we scaled up the Redis servers by “functional” sharding. We started with a single shard. When we were about to run out of capacity, we added another shard and moved a subset of the keys from the existing shard to the new one. The new shard was typically dedicated to keys for a specific application or feature, e.g., ads or user data. Code that accessed the moved keys was modified to access the new servers after the move. For example, the marketplace application would access shards that store data about the products and merchants in the marketplace, while the consumer-oriented applications would access shards that store user data such as activities and followed topics. The same process was repeated for several years, and the number of servers grew to several dozen. The process remained mostly manual due to the need to modify the applications.

The Redis servers ran on high-end hosts with large memory capacity. Such hosts typically also have a large number of processors. Since each Redis server is a single process, only a small fraction of the processors are utilized on each host. In addition, the manual partitioning caused imbalances in memory and CPU usage across the shards: some shards have a large memory footprint and/or serve a high rate of requests.

Large memory footprints are problematic for operations such as restarts and master-slave synchronization. It can take more than 30 minutes for a large shard to restart or to do a full master-slave sync. Since all our client requests depend on Redis accesses, this poses the risk of a severe site-wide outage should all replicas of a large shard go down.

In the beginning of the year, we evaluated options to scale up the Redis servers with fewer manual processes and a shorter time to production.

Redis Cluster vs. Twemproxy

One option we considered was Redis Cluster. It was released by the Redis community on April 1, 2015. It automatically shards data across multiple servers based on hashes of keys. The server selection for each query is done in the client libraries. If the contacted server does not have the queried shard, the client will be redirected to the right server.

There are several advantages with Redis Cluster. It is well documented and well integrated with Redis core. It does not require an additional server between clients and Redis servers, hence has a lower capacity requirement and a lower operational cost. It does not have a single point of failure. It has the ability to continue read/write operations when a subset of the servers are down. It supports multi-key queries as long as all the keys are served by the same server. Multiple keys can be forced to the same shard with “hash tags”, i.e., sub-key hashing. It has built-in master-slave replication.
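Concretely, the server selection follows the Redis Cluster specification: the slot for a key is the CRC16 of the key (the XMODEM variant) modulo 16384, and if the key contains a non-empty “{…}” hash tag, only the tag is hashed. A minimal Python sketch of that mapping:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM) as used by Redis Cluster (poly 0x1021, init 0)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of 16384 hash slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag found
            key = key[start + 1 : end]      # hash only the tag
    return crc16_xmodem(key.encode()) % 16384
```

Because only the tag is hashed, keys such as `user:{42}:follows` and `user:{42}:topics` land in the same slot, so a multi-key query across them succeeds.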

As mentioned above, Redis Cluster does not support NAT’ed environments and in general environments where IP addresses or TCP ports are remapped. This limitation makes it incompatible with our existing settings, in which we use Redis Sentinel to do automatic failover, and the clients access Redis through HAProxy. HAProxy provides two functions in this case: It does health checks on the Redis servers so that the client will not access unresponsive or otherwise faulty servers. It also detects the failover that is triggered by Redis Sentinel, so that write requests will be routed to the latest masters. Although Redis Cluster has built-in replication, as we discovered later, few client libraries, if any, have support for it. The open source client libraries we use, e.g., Predis and Jedis, would ignore the slaves in the cluster and send all requests to the masters.

The other option we evaluated was Twemproxy. Twitter developed and launched Twemproxy before Redis Cluster was available. Like Redis Cluster, Twemproxy automatically shards data across multiple servers based on hashes of keys. The clients send queries to the proxy as if it is a single Redis server that owns all the data. The proxy then relays the query to the Redis server that has the shard, and relays the response back to the client.

Like Redis Cluster, Twemproxy has no single point of failure if multiple proxies are running for redundancy. Twemproxy also has an option to enable/disable server ejection, which can mask individual server failures when Redis is used as a cache rather than a permanent data store.

One disadvantage of Twemproxy is that it adds an extra hop between clients and Redis servers, which may add up to 20% latency according to prior studies. It also has extra capacity requirement and operational cost for monitoring the proxies. It does not support multi-key queries. It may not be well integrated with Redis Sentinel.

Based on the above comparison, we decided to use Redis Cluster as the scale-up method going forward.

Building the cluster

Before we could migrate to Redis Cluster, we needed to bring Redis Cluster to functional parity with the functional shards. We implemented most of the improvements in our client libraries (e.g., the Predis library for PHP clients). We also built automation tools for cluster management in our infrastructure management toolkit, Salt.

As mentioned earlier, the main missing features in the client libraries for Redis Cluster are master-slave replication, health checks and master-slave failover.

We replaced the active health checks in HAProxy with passive mark down and retries in the PHP client library. When the client gets an error from a Redis server, e.g., connection timeout or unavailability due to loading, the client marks the server down in APC and retries another server. Since APC is shared by all PHP processes in the same web server, the marked down Redis server will not be accessed by another client until it expires from APC a few seconds later.
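The mark-down logic itself is simple. Our implementation lives in the PHP client and uses APC as the shared cache; the sketch below is a simplified Python rendition with an injectable clock (the class name and TTL are illustrative, not our actual code):

```python
import time

class ServerMarkdown:
    """Sketch of a shared mark-down table (APC-like): a server that
    returns errors is skipped by clients until the mark expires."""

    def __init__(self, ttl_seconds=5.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._down = {}  # server address -> expiry timestamp

    def mark_down(self, server: str) -> None:
        self._down[server] = self.clock() + self.ttl

    def is_down(self, server: str) -> bool:
        expiry = self._down.get(server)
        if expiry is None:
            return False
        if self.clock() >= expiry:
            del self._down[server]  # mark expired; try the server again
            return False
        return True

    def pick(self, replicas):
        """Return the first replica not currently marked down, else None."""
        return next((s for s in replicas if not self.is_down(s)), None)
```

On an error the client calls `mark_down` and retries with `pick`; other clients sharing the table then avoid that server until the entry expires a few seconds later.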

We also added support for cluster master-slave replication in the client library. It started out as a straightforward refactoring, but ended up with considerable complexity as it interacted with the passive health checks and retries in partial failure modes, especially in pipeline execution. I will discuss this in more detail later in the post.

Other improvements we made to the Predis library include:

  • Support for multi-server commands such as mset and mget
  • Reduced memory usage of the cluster configuration, such as the slot-to-server maps
  • Bug fixes, including memory leak fixes

We also added pipeline support to the Java Redis library, Jedis.
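As an illustration of the multi-server command support: on a cluster, a single mget has to be fanned out to every node that owns one of the keys, and the results reassembled in the caller's original order. A simplified Python sketch (the slot-map and connection functions are stand-ins, not the Predis API):

```python
def cluster_mget(keys, node_for, mget_on_node):
    """Sketch of client-side MGET over a cluster: group keys by the node
    that owns their slot, issue one MGET per node, then reassemble the
    results in the caller's original key order.

    node_for(key) -> node id, and mget_on_node(node, keys) -> list of
    values, stand in for the real slot map and per-node connections.
    """
    by_node = {}
    for key in keys:
        by_node.setdefault(node_for(key), []).append(key)
    results = {}
    for node, node_keys in by_node.items():
        for key, value in zip(node_keys, mget_on_node(node, node_keys)):
            results[key] = value
    return [results[key] for key in keys]
```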

Figures 1 and 2 show the Redis system architectures before and after the migration to Redis Cluster, respectively.

Figure 1. Functional shards

image


Figure 2. Redis Cluster

image


In addition to the client library improvements, we built tools to further automate the creation and maintenance of Redis Cluster.

There is an existing tool (redis-trib.rb) to create a cluster from a set of Redis servers. We built tools to place the masters and slaves in a more deterministic way than what redis-trib.rb does.

For example, we place the servers of the same shard across availability zones for better fault tolerance.

For data persistence, we enable one server per shard to dump its memory to disk and upload the dumps to cloud storage periodically. A memory dump is a resource-intensive operation; therefore, we chose a slave rather than the master for dumping, and distributed the masters and dumping slaves evenly across hosts for load balancing.

Another desirable feature of the layout is for servers in the same shard to have the same port number, which eases manual operations during debugging.
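The placement rules above can be expressed as a small deterministic planning function. The sketch below is illustrative, not our actual toolkit: it uses distinct hosts as a stand-in for availability zones, rotates master positions so masters and dumping slaves spread evenly across hosts, and gives every server in a shard the same port:

```python
def plan_layout(num_shards, hosts, replicas_per_shard=2, base_port=7000):
    """Sketch of deterministic cluster placement: shard i gets port
    base_port + i on every host it touches, its master and slaves land
    on distinct hosts, and master positions rotate so no host carries a
    disproportionate share of masters (or of dumping slaves)."""
    assert len(hosts) > replicas_per_shard  # need distinct hosts per shard
    layout = []  # (host, port, role, shard)
    for shard in range(num_shards):
        port = base_port + shard
        # rotate the host list so masters spread evenly across hosts
        offset = shard % len(hosts)
        ring = hosts[offset:] + hosts[:offset]
        layout.append((ring[0], port, "master", shard))
        layout.append((ring[1], port, "dumping-slave", shard))
        for r in range(2, replicas_per_shard + 1):
            layout.append((ring[r], port, "slave", shard))
    return layout
```

With four hosts and four shards, each host ends up with exactly one master and one dumping slave, and every shard's servers sit on distinct hosts.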

We built a toolkit to implement the desired layout during cluster creation as well as subsequent additions of new cluster nodes. Figure 3 shows an example layout of our Redis Cluster.

Since automatic failover can happen in the cluster from time to time, our toolkit periodically collects the cluster status and reconfigures the servers when necessary.

Figure 3. Example layout of Redis Cluster

image


Before the migration, we ran a set of performance tests to compare the latency of Redis Cluster vs. functional shard under various conditions.

Figure 4 shows the latencies with non-persistent connections on cluster vs. functional shard. While there is a significant difference for the multi-key commands, we do not expect such cases to be common in practice for Redis Cluster since all Redis accesses in the same client session will be able to share the same connection to the cluster.

Figure 4. Latencies with non-persistent connections

image


Figure 5 shows the latencies with persistent connections on cluster vs. functional shard. The latencies are measured when there is only one client accessing Redis. In practice, there will be multiple clients, and the actual latency per request will be increased by the processing time of the requests queued in front of it. The queues will be shorter in Redis Cluster since the workload is distributed across a larger number of processes.

Figure 5. Latencies with persistent connections

image


The live migration

According to the Redis Cluster documentation, no automatic live migration to Redis Cluster is currently possible, and the clients have to be stopped during the migration. Stopping the clients is not an option for us because it means shutting down the whole site.

We built our own live migration tool based on the append-only-file (AOF) feature in Redis. The AOF, when initially created, consists of a sequence of commands that can be replayed on a second server to reconstruct the data set in the first server that creates the file. Commands that the first server receives after the AOF is created will be appended to the file, hence the name “append only file”. Redis can be configured to rewrite the file when it gets too big. After the rewrite, the commands at the end of the old file may be re-ordered.

Our automatic live migration process involves the following steps:

  1. Pick a slave in the functional shard that has dumping disabled.
  2. Enable AOF in the picked slave but disable AOF rewrite, so that subsequent commands will not be re-ordered in the file.
  3. Write a certain key, e.g., “redis:migration:timestamp”, to the functional shard to serve as a bookmark for later use.
  4. Copy the AOF from the functional shard slave to all the hosts in the cluster.
  5. Replay the AOF on each master in the cluster, using the “redis-cli –pipe” command.
  6. Extract the new commands in the functional shard AOF that were added after the last bookmark, and store them in a delta file.
  7. Repeat steps 3 - 5 with the new delta file instead of the full AOF.
  8. When the number of new commands in the delta file drops below a certain threshold, make a live configuration change to the clients so that they start to access the cluster instead of the functional shard.
  9. Continue to repeat steps 3 - 7 after the configuration change, until the number of new commands in the delta file drops to a lower threshold.
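Step 6 is the heart of the loop: everything appended after the most recent bookmark write forms the delta to replay in the next round. Treating the AOF as a flat sequence of parsed commands (the real file is RESP-encoded, so a parser would sit in front of this), the extraction can be sketched as:

```python
BOOKMARK_KEY = "redis:migration:timestamp"  # the bookmark key from step 3

def extract_delta(commands, bookmark_key=BOOKMARK_KEY):
    """Sketch of step 6: given the AOF as a sequence of parsed commands
    (each a tuple like ("SET", key, value)), return only the commands
    appended after the most recent bookmark write."""
    last_bookmark = -1
    for i, cmd in enumerate(commands):
        if cmd[0].upper() == "SET" and cmd[1] == bookmark_key:
            last_bookmark = i
    return commands[last_bookmark + 1:]
```

Each round writes a fresh bookmark, copies the file, and replays only this tail, so the delta shrinks until the cutover thresholds in steps 8 and 9 are reached.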

The live migration of each functional shard took from a few minutes to a few hours, depending on the size of the shard. The process went smoothly, modulo a few errors from the functional shard slave due to overloading from the AOF writes.

Post migration outage

After we migrated about ¾ of the functional shards to the cluster, something unexpected happened. We created the cluster with the same capacity as the functional shards. However, the size of our data grew faster than we expected.

The Redis cluster was overcommitted in memory and started to swap in early May. An issue with a backup script resulted in high volumes of disk reads and writes, and triggered the first failover when a Redis master on the same host tried to access the swap space. The failover then triggered a sequence of cascading events.

Slaves were unresponsive during failover, triggering cross-shard slave migrations, i.e., healthy slaves changed their masters. The failovers and cross-shard migrations resulted in an imbalanced distribution of masters and dumping slaves across hosts. The hosts with more dumping slaves were overloaded when the slaves started to dump, and more failover/cross-shard migrations followed. Redis clients repeatedly retried when servers were unresponsive, and eventually held up all web server processes and caused site outages.

Recovery and resolution

It took three weeks to fully recover from the outage. During the process, we performed many operations such as resizing and resharding the cluster for the first time in production. We learned lots of lessons and made quite a few improvements to the error handling code in our client library.

The first step to recovery was to rebalance the cluster, i.e., to bring it back to the balanced layout as illustrated in Figure 3. Next, we added 50% more hosts to the cluster.

We ran into several issues while resharding the cluster, i.e., migrating data from existing servers to new servers. For a short period during the migration of a key, clients kept receiving “MOVED” responses from both the source and the destination servers of the migration, and therefore kept retrying between the two servers until they eventually overflowed the stack.

To contain the issue to the affected processes only, we applied a limit on the number of retries in this situation so that the affected processes would not generate too much load on the clients or servers. We also made the passive health checks in the client library more robust. The entire resharding process took 12 hours, during which a small percentage of requests failed while the site remained functional overall.
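The retry limit amounts to a bounded redirect loop. The sketch below is illustrative, not our actual Predis patch; the `send` function stands in for a real connection and returns either a MOVED redirection or a result:

```python
class TooManyRedirectsError(Exception):
    pass

def execute_with_redirects(send, node, command, max_redirects=5):
    """Sketch of bounded MOVED handling: follow each redirect to the
    node named in the MOVED response, but give up after max_redirects
    so a source/destination ping-pong during resharding cannot recurse
    until the client overflows its stack.

    send(node, command) stands in for a real connection and returns
    either ("MOVED", target_node) or ("OK", value)."""
    for _ in range(max_redirects + 1):
        status, payload = send(node, command)
        if status != "MOVED":
            return payload
        node = payload  # retry against the node the server pointed us to
    raise TooManyRedirectsError(f"gave up after {max_redirects} redirects")
```

A well-behaved redirect resolves in one hop; a ping-pong between two servers now fails fast with an error confined to the affected request.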

After the resharding, the old Redis servers reported a drop in their data size, but neither their memory usage nor their swap usage dropped. Since they still had data stored in the swap space, they could have spikes in latencies and client timeouts. We learned that the unchanged memory usage was a result of fragmentation in jemalloc, the memory allocator used by Redis, and that the only way to defragment is to restart the servers.

The last step in our recovery process was to rolling-restart all the old servers. For each shard, we first restarted a slave, then forced a failover to the restarted slave, causing the old master and the other slaves to resynchronize data from the new master. Resynchronization has the same effect as a restart on memory usage. After the resynchronization was completed, all servers in the shard had their memory defragmented and their swap space freed.

The rolling restart was a rigorous stress test on the clients and servers, and prompted us to make more improvements in the error handling of the client library. One improvement is to use different timeout values for masters vs. slaves, and to never mark down a master in the passive health checks. Masters are more heavily loaded than slaves during failover and are a single point of failure for write operations. Therefore, we would rather retry a bit longer when a master is slow than give up too soon.
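That policy can be sketched as role-aware settings in the client; the timeout values and error threshold below are illustrative, not our production numbers:

```python
# Role-aware client settings: be more patient with masters, since a slow
# master is still the only node that can take writes for its shard.
# All numbers here are illustrative.
TIMEOUTS = {
    "master": {"connect_ms": 500, "read_ms": 2000},
    "slave":  {"connect_ms": 200, "read_ms": 500},
}

def should_mark_down(role: str, consecutive_errors: int, threshold: int = 3) -> bool:
    """Never mark a master down in the passive health checks; mark a
    slave down only after repeated errors, so one transient timeout
    does not remove it from rotation."""
    if role == "master":
        return False
    return consecutive_errors >= threshold
```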

Open Questions

Clustering is a relatively new technology in Redis. Through the experience of building, migrating, and resharding Redis Cluster, we learned about its limitations as well as its potential.

While the ability to automate scaling in Redis Cluster opens up many new opportunities, it also brings up new questions. Will we be able to scale infinitely? What’s the best way of supporting a mixture of permanent store use case and cache use case? How can we minimize the impact of resharding and rolling restart on production traffic? We look forward to experimenting and learning more about Redis Cluster.

If you’re interested in joining us, we’re hiring! Check out opportunities on our team at houzz.com/jobs.

Houzzer Profile: Ella Zhang, Quantitative Analyst

image

As a member of the data analytics team, Ella uses data to inform and direct business decisions. When she’s away from work, Ella enjoys exploring local parks with her family.

Why did you choose to become a data scientist?
I actually studied environmental engineering, but when I graduated, data science was just beginning to gain momentum as a career path. Prior to that point, there weren’t many statistician positions available. Through my schooling, I’d had the opportunity to play with data and use different models to answer business-related questions and I realized that it was the perfect blend of my education and interests.

What benefits do data scientists bring to businesses?
All business decisions should be based on a clear understanding of how customers are using their products and services. Data provides the confidence to chart new paths and create greater efficiencies of resources, while evaluating the impact of those decisions. For a company like Houzz, which provides a bounty of products and services to our community, it’s important to understand how all decisions impact the overall dynamic of the experience from the minor ripples to the major waves.

What brought you to Houzz?
I was a big fan of Houzz long before I began working here and used to spend 20-30 minutes per night looking at photos and getting inspiration for my own home. In fact, one of my bathrooms looks very similar to a photo I found on Houzz!

Professionally, I craved the startup experience, which would allow me to broaden my focus and solve new challenges every day.

What do you most enjoy about your role?
It’s exciting! There’s an opportunity to blaze a trail, tap into my own creativity and provide useful information based on non-defined parameters. The many aspects of the Houzz platform mean that I am continuously learning and expanding my skills.

What project are you most proud of at Houzz?
I helped to develop the “Lifetime Value Model” for our marketing team, which is utilized for all campaigns to evaluate performance. I also analyze SEO-optimization and content partnerships to understand and educate the team on how and why people visit Houzz.com. It’s fascinating to learn what piques visitors’ interest about Houzz.

What’s something that has surprised you about working at Houzz?
I work with cross-functional teams across marketing, SEO and data operations, and have found that across the board, people are very data-driven at Houzz. It’s so refreshing. Everyone uses analysis to drive marketing efforts and business growth decisions to achieve greater, more collaborative results.

An Open (Design) Houzz: San Francisco Design Week

We hosted a special hands-on happy hour at our HQ in downtown Palo Alto as part of San Francisco Design Week to give guests a chance to meet our talented design team, learn about our work, culture and process, and enjoy small bites, drinks and games. Houzz product, graphic and mobile designers demoed our latest features, including Sketch for Web and View in My Room 3D, which allows users to see products and materials in a room before they purchase them.

Here are a few photos from the event:

image

Houzzer Aran Yeo welcomes newcomers to the happy hour.

image

Visitors get to know Sketch for the Web from Houzzer Kelvin Young.

image

Guests compete for a grand prize Houzz gift card by creating unique Lego designs (pictured here is the winning model in progress!)

image

Designers left their artistic mark on the epic blackboard wall.

image

The Houzz design team is growing! Check out career opportunities at houzz.com/jobs.

Houzz Sketch Now Available as a Web Experience!

image


Houzz Sketch, our popular communication, collaboration and design tool in the Houzz App, is now available as a web experience. With Sketch, people can communicate ideas and collaborate directly on any of the more than 14 million photos on Houzz – or images from their own library and around the web – by adding measurements, notes, stickers and more. Sketch also makes it easy to experiment with home decor options by adding products and materials from the Houzz Shop into any photo or blank canvas. A handy shopping list helps you keep track of all the products in your Sketch for easy checkout. In addition, you can choose any of the two dozen available Sketch Canvases to create mood boards and floor plans in an easy and lightweight way.

More than 1.5 million sketches have been created since Sketch was introduced on the Houzz app last year.

To start a Sketch, simply click the Sketch button on any photo on Houzz or upload a photo to an ideabook. Read more about the tool and check out a demo here.

Watch My Houzz: NBA All-Star Kyrie Irving Renovates his Father’s Home



In time for Father’s Day, check out the latest episode of My Houzz here, featuring NBA All-Star Kyrie Irving as he secretly renovates his father Drederick’s home in West Orange, New Jersey, where Kyrie and his sister Asia grew up.

Kyrie, with the help of Asia, worked with a New Jersey-based designer from the Houzz community to pull off the surprise remodel. He used Houzz at every step of the process, from finding a local professional with great reviews to sharing ideas for his father’s home with the designer and Asia to buying all of the furniture and accessories from the Houzz Shop. The end result is a more open, functional and beautifully updated space.

You can shop the finished look here and check out the ideabooks Kyrie used for inspiration on his Houzz profile.

Houzz Launches Trade Program for Home Professionals

image


At Houzz, we’re always looking for new ways to help home remodeling and design professionals build their businesses and work with clients. Today, we launched the Houzz Trade Program, which provides industry professionals with multiple ways to profit from purchasing and recommending products in the Houzz Marketplace, from bathtubs and fixtures, to furniture and decor products.

Some of the benefits of the Houzz Trade Program include trade-only discounts on hundreds of thousands of products, referral bonuses when sending clients to the Houzz Marketplace, a dedicated support team to streamline ordering and expediting products, and free shipping on most trade-discounted orders over $49.

All professionals in the home improvement industry, including designers, architects, contractors, and more, can enroll in the program for free. Houzz Pro+ members are pre-enrolled, as well as members of industry associations including the American Society for Interior Designers (ASID), the National Association of Home Builders (NAHB), the National Kitchen & Bath Association (NKBA), the American Institute of Architects (AIA), the National Association of Landscape Professionals (NALP), the National Association of the Remodeling Industry (NARI) and the Interior Design Society (IDS).

For more information and to apply, click here.

Why I Joined Houzz: A Data Scientist’s Perspective

Recently members of our analytics and data science teams hosted a meet-up in conjunction with DahShu.org for students and recent graduates where we shared our own career experience to shed some light on their next career move. Here’s what I shared with them: 

I joined Houzz at the end of 2015 after seven years at one of the largest companies in the Valley. Over time, I came to feel that I would benefit from working at a smaller company where I would have the opportunity to work on more diverse projects. Many of the people I worked with had started to leave for different startups, and would convey how happy they were to be able to be more creative, move faster, and to have a big impact through their work. I knew it was time for a change.

After speaking with several close friends who had left the company for recommendations, I applied to four startups and fortunately received offers from all of them. There were four primary factors I considered in deciding which one to accept:

  1. Do I like the product and believe in the business model?
  2. Is the company the right size? If it is too small, I won’t have enough data to play with. If it is too big, I’d face the same problem I had at my current job.
  3. Would there be learning opportunities and an intellectually stimulating environment?
  4. Would I make enough money?

Houzz emerged as the clear winner.

I loved the product and had been using it for many years. Having gone through a painful remodeling process myself, I knew how much value Houzz brings to users. In fact, it was, and still is, the only platform that covers the full funnel from getting inspiration, to finding a professional, to buying all the products and materials you need.

In terms of size, Houzz was and is still relatively small. I believed I could contribute to the company’s growth and have big impact through my work.

Houzz has over 40 million monthly unique users and over 1.5 million active home renovation and design professionals. I knew I would have plenty of data to get insights from. Finally, at each visit to Houzz, I met with people who were smart and fun. As an example, after learning that we used an internal language to obtain data from logs instead of SQL at my former company, one of the Houzz team members interviewing me took the effort to learn that language just to be able to interview me. That was very meaningful to me, and I knew that Houzz would provide an environment where I would be happy.

When I told people, including my mentors, about my decision to join Houzz, everyone was very supportive. In fact, during my last 1:1, my manager went from trying to convince me to stay to saying he believed everyone should have a startup experience as part of their professional development.

My first project upon joining Houzz was to develop a methodology to measure the impact on metrics from our A/B tests. This was an important project given that, as a data-driven company, we make many decisions based on the outcomes of these tests. What I thought would be a project taking multiple quarters to accomplish took two months. I soon found myself presenting to our cofounder during our weekly meetings, gaining a visibility that I never had before. I also got exposure to everything from marketing to monetization to user growth, whereas in the past, my work had felt very siloed.

I found Houzz to provide an intellectually stimulating environment. As an example: my team has an analytics reading group where we have covered things like hypothesis testing, experiment design and causal inference. We just built a data science library to encourage people to learn and grow. I also really like spending time with my colleagues, who are also friends. It turns out we have quite a few people who enjoy playing tennis as much as me (at least four of them above a 4.5!).

By the end of 2016, the four friends who tried very hard to convince me to join their companies had all left. In contrast, the two friends who referred me to Houzz are still happily working here. One of them was inspired by his wife, who is an architect, to help develop a product that will launch soon. The other is leading a massive project to enhance our local advertising program for home professionals.

When I share my experience with others, I tell them how important it is to believe in the product, in the people and in the opportunity for personal and professional growth. Of course, I also tell them we’re hiring ;)

image

Houzz and DahShu.org welcome students and recent graduates to the event


image

Guests mingle and get to know each other


image

Jerry Krikheli introduces a panel of Houzzers to share their experiences


image

Guests listen to a panel of Houzzers

Designing View in My Room 3D

image


One of the things I love most about being a product designer at Houzz is how I’m constantly challenged by projects. Most recently I worked on View in My Room 3D (VIMR 3D), which brings augmented reality to the Houzz shopping experience, making it easy for people to find products that will work great in their homes. We had to overcome major design and technical challenges to make VIMR 3D an intuitive and easy-to-use feature for our users. Here are some takeaways from the process:

Define clear goals. This first step was critical to finding the right design solution. We had to think about what we wanted VIMR 3D to do, and what the user experience should be. We wanted to give our users the most value from our visualization tool, and take online shopping a step further by creating a rich mobile experience that is both human-centered and enjoyable.

Research. As a part of the design process at Houzz, we use research to gather insights and identify the best approach for our problem. For VIMR 3D, we started by asking basic questions such as “Why do people shop online?” “How do people shop online?” “What is the difference between online and offline shopping?” Visualization emerged as a key to decision-making for online shoppers. The ability to view products in the context of your room, scale them, move them and share them with others, creates a better shopping experience.

Test and iterate. Any designer will tell you that crafting a simple interaction is a challenging task. The new technologies and sensing abilities of devices have allowed us to present a new way of interaction through augmented reality. We thought about the movements of the finger and thumb that allow a user to interact with the app, and how these gestures have a universal quality. Through user testing, we found that in general, people will most likely try to pinch to zoom or use a single finger to drag an object around in space. We continued to iterate several different options for gestures and tested them against each other with the aim of finding the most intuitive interaction for VIMR 3D.

By defining clear goals, researching, and iterating across all stages of design and development, we were able to deliver a feature that addresses a major pain-point in online shopping, and that we’re all very proud of as a team. What really excites me though is what comes after the launch of a new feature: the immediate feedback we get from our enormous community of homeowners and home professionals. This input is what challenges us to enhance new features like VIMR 3D even further to provide the best experience for home design – and shopping.

Houzz Survey: Moms Tell Us What They Want for Mother’s Day

image

With Mother’s Day just around the corner, we asked more than 1,500 moms what they really want for Mother’s Day and many of the responses centered on the home, from their preferred place to celebrate to the type of gift they hope to receive. In fact, the majority of moms say they’d like to stay home for Mother’s Day (53%), while going out is a distant second (28%).

Of those interested in staying at home, the most popular request is to enjoy a meal with the family, cooked by someone else (40%), followed by a family activity (25%) like watching movies, playing games or gardening. Responses from moms who want to go out for Mother’s Day have similar themes, with 47% of moms requesting a meal with family and 23% interested in attending a cultural event with their family, like visiting a museum, going to the movie theater or watching a live sporting event.

Flowers seem to be a popular request (at least, that’s what we hear from 19% of responding moms), but nearly the same amount of moms say their ideal gift is something for the home (16%). For inspiration, check out this collection of thoughtful home decor Mother’s Day gifts from Houzz.

However you choose to celebrate, we wish all moms a happy and healthy Mother’s Day!

My Houzz: Ludacris Surprises Mom with a Home Makeover



In time for Mother’s Day, we’re excited to share the latest “My Houzz,” episode, which follows renowned recording artist and actor Chris “Ludacris” Bridges as he makes over his mom Roberta’s home in Atlanta. Roberta lives in the first house Bridges bought when he became commercially successful as the rapper Ludacris. While she wanted to make the space her own, many of the rooms she had started were incomplete and reflected Bridges’ style.

Working with an Atlanta-based designer from the Houzz community, Bridges and his wife Eudoxie transformed what was once Bridges’ home into Roberta’s home, using Houzz at each step of the process. Check out the ideabooks he used for inspiration here and shop the look from the finished space.