Booking.com attracts over 500 million monthly visitors and processes more than a billion annual bookings, cementing its position as a dominant force in online travel. However, maintaining this market leadership in an increasingly competitive landscape requires more than just scale—it demands rapid software innovation and delivery capabilities.
The Amsterdam-based travel giant is betting heavily on artificial intelligence to accelerate its 3,000-person engineering organization. Rather than simply adopting AI tools and hoping for the best, Booking.com has developed a sophisticated measurement framework to quantify exactly how generative AI impacts developer productivity, satisfaction, and code quality.
“We’re very excited about GenAI because it has the potential to multiply the impact of our engineering organization,” explains Amos Haviv from Booking.com’s Developer Experience team. “Our vision is to give every developer the equivalent of a senior engineer sitting beside them to pair program and tackle problems,” adds Bruno Passos, Group Product Manager leading the company’s AI implementation efforts.
The measurement challenge
When Booking.com first introduced AI code assistants—software tools that help developers write, debug, and optimize code—the company encountered a familiar corporate dilemma: wildly different expectations between executives and frontline workers.
Executive enthusiasm ran high, fueled by industry reports claiming AI could save developers hundreds of thousands of hours annually. Meanwhile, the actual developers remained skeptical about whether these tools would genuinely improve their day-to-day work or simply create additional overhead.
“The whole world was talking about AI, and the numbers we were hearing were astronomical,” shares Passos. “But nothing was backing it up. So our initial goal was simply to understand what GenAI could actually do for us and figure out a way to measure the value it was delivering to the organization.”
Beyond settling internal debates, Booking.com needed concrete data to guide critical business decisions: which AI vendors to partner with, how to structure training programs, and whether to expand or scale back their AI investments. Without measurable outcomes, the company risked making expensive decisions based on hype rather than evidence.
“We needed to understand how AI affected engineering velocity, satisfaction, and code quality,” says Passos. “Without that insight, we wouldn’t be able to confidently talk about the ROI of Booking’s investment in AI with the rest of the business. We also wouldn’t know for sure whether we should be driving more adoption for a specific tool or use case.”
Building a data-driven measurement system
To solve this measurement challenge, Booking.com partnered with DX, a developer intelligence platform created by leading productivity researchers. The platform combines direct developer feedback with objective code delivery metrics to provide comprehensive insights into how AI tools actually impact software development teams.
Using DX’s analytics capabilities, Passos’ team quickly began quantifying the real-world impact of AI code assistants across their engineering organization. The results revealed several key insights:
Productivity gains were measurable but nuanced. Developers who used AI tools daily demonstrated 16% higher “change throughput”—essentially, they completed and deployed more code changes per unit of time compared to non-users. This metric matters because it directly correlates with how quickly teams can deliver new features and fix issues.
Time savings enabled higher-value work. Rather than simply making developers faster at routine tasks, AI tools freed up mental bandwidth for more complex problem-solving and creative work. Developers reported spending less time on boilerplate code and more time on architectural decisions and business logic.
Satisfaction improved with proper implementation. Developer satisfaction with AI tooling increased by 15 points over six months, driven by both product improvements from vendors and internal training efforts that helped developers use the tools more effectively.
These findings emerged from both developer surveys and quantitative analysis of code delivery patterns over time. By tracking the same developers before and after AI adoption, while also comparing AI users to non-users, the team gained confidence that the productivity improvements were genuine rather than statistical noise.
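To make that kind of comparison concrete, here is a minimal sketch in Python (pandas) of how a change-throughput gap between daily AI users and non-users could be computed. The column names, the cohort flag, and the toy data are assumptions made for illustration; they are not DX's actual metric definitions or Booking.com's data model.

```python
# Illustrative only: a toy "change throughput" comparison, assuming each row
# is one deployed code change tagged with the author's cohort. Field names
# and the cohort definition are hypothetical.
import pandas as pd

changes = pd.DataFrame({
    "developer_id": ["d1", "d1", "d2", "d3", "d3", "d3", "d4"],
    "deployed_at": pd.to_datetime([
        "2024-03-04", "2024-03-06", "2024-03-05",
        "2024-03-04", "2024-03-05", "2024-03-07", "2024-03-06",
    ]),
    "daily_ai_user": [True, True, False, True, True, True, False],
})

# Change throughput here = deployed changes per developer per week.
changes["week"] = changes["deployed_at"].dt.to_period("W")
per_dev_week = (
    changes.groupby(["developer_id", "daily_ai_user", "week"])
    .size()
    .rename("changes")
    .reset_index()
)

# Compare the average weekly throughput of daily AI users vs. non-users.
cohort_means = per_dev_week.groupby("daily_ai_user")["changes"].mean()
lift = (cohort_means[True] / cohort_means[False] - 1) * 100
print(f"Daily AI users ship {lift:.0f}% more changes per week in this toy sample")
```

The same per-developer, per-week frame can also support the before-and-after view described above, by splitting each developer's weeks at the date they started using the assistant.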
Zane Wright, Senior Product Manager, explains how this data transformed their decision-making: “We’ve been able to use data to make tactical and strategic decisions on where to invest further in our GenAI program. Decisions like which vendors we should be going for, where we should be looking further to assess deeper impact on our company, and how we should be structuring our programs to best impact developers moving forward.”
Addressing adoption barriers
One surprising insight from the data: many developers remained hesitant to adopt AI tools despite their proven benefits. This resistance stemmed primarily from skepticism about the technology’s reliability and confusion about how to integrate AI effectively into existing workflows.
“We realized that education would be just as important to increasing adoption as improvements to the technology itself,” shares Passos.
This revelation shifted the team’s strategy beyond simply providing access to AI tools toward actively supporting developers through the adoption process. The data showed that sporadic usage yielded minimal benefits—developers needed to use AI tools consistently to see meaningful productivity gains.
Driving widespread adoption
Armed with evidence that daily AI usage produced the strongest results, Passos’ team launched a comprehensive adoption campaign targeting 100% adoption across the engineering organization.
Targeted outreach based on usage patterns. The team uses DX analytics to identify which developer segments extract more or less value from AI tools, then provides customized support to underperforming groups (a rough sketch of this kind of segmentation appears at the end of this section). “We’ve found segmenting data particularly valuable for driving AI adoption,” explains Bailey Stewart, Principal Software Engineer. “This tells us which communities within Booking have not been finding as much value in GenAI tools. Then we reach out to them to figure out what we can do to help them find more value.”
Hands-on education programs. Booking.com runs intensive two-day workshops where the first day covers AI fundamentals—explaining how large language models (LLMs) work, effective prompting techniques, and context management—while the second day focuses on solving real business problems using AI. “We pick up a business problem from a specific business unit and attempt to solve it with GenAI by bringing in internal and external experts,” Passos explains.
Continuous communication about new capabilities. As AI tools evolve rapidly, the team maintains regular communication about feature updates and new use cases. “Every time we’ve updated our AI coding assistant with new features, we post and communicate what developers can now do,” says Passos. “For example, initially, we could only use one LLM; now developers can use several LLMs depending on their specific task. Each time we make a change like this, we communicate it internally to the rest of the organization.”
Regular office hours and training sessions. The team hosts weekly sessions where developers can get personalized help with AI tools and learn best practices from colleagues. “We now have almost 100% of our developers adopting GenAI. Some of the biggest keys to achieving this have been our office hours, as well as producing content on how to use GenAI: for example, what’s an LLM, how to prompt… There’s a lot of video content that we produce and post on a regular basis,” Passos notes.
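As a rough illustration of the segmentation approach Stewart describes above, the sketch below groups developers by engineering community and flags groups whose usage or self-reported value falls below a threshold. The field names, the 1–5 value score, and the cutoffs are all hypothetical assumptions, not taken from DX's schema or Booking.com's internal data.

```python
# Illustrative only: segmenting AI-assistant usage by community to spot
# groups getting less value. Fields, scales, and thresholds are hypothetical.
import pandas as pd

developers = pd.DataFrame({
    "developer_id": ["d1", "d2", "d3", "d4", "d5", "d6"],
    "community": ["payments", "payments", "search", "search", "trips", "trips"],
    "days_used_last_30": [22, 18, 3, 5, 20, 1],
    "value_score": [4.5, 4.0, 2.5, 3.0, 4.2, 2.0],  # e.g. a 1-5 survey rating
})

by_community = developers.groupby("community").agg(
    avg_days_used=("days_used_last_30", "mean"),
    avg_value=("value_score", "mean"),
    developers=("developer_id", "count"),
)

# Flag communities below (hypothetical) usage or value thresholds so they
# can be offered targeted office hours, workshops, or tooling fixes.
needs_outreach = by_community[
    (by_community["avg_days_used"] < 10) | (by_community["avg_value"] < 3.5)
]
print(needs_outreach)
```

The flagged communities then become the natural audience for the office hours, workshops, and educational content described above.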
Measuring success
These targeted adoption efforts have produced measurable results beyond simple usage statistics. Teams that fully embrace AI tools now demonstrate 30% higher throughput compared to teams with minimal adoption—nearly double the productivity gains seen in the initial pilot programs.
This improvement suggests that AI’s impact compounds as teams develop more sophisticated usage patterns and integrate the technology more deeply into their workflows. Rather than treating AI as an occasional helper, high-performing teams use it as a continuous collaboration partner throughout the development process.
Looking ahead
Booking.com’s systematic approach to measuring and optimizing AI adoption offers a blueprint for other large organizations grappling with similar technology transitions. By focusing on concrete metrics rather than industry hype, the company has built confidence in its AI investments while identifying specific strategies that drive real productivity improvements.
The travel company continues expanding AI integration across its software development lifecycle, using the same data-driven approach to evaluate new tools and use cases. This methodical strategy positions Booking.com to maintain its competitive edge as AI becomes increasingly central to software development across the technology industry.
For organizations considering their own AI adoption strategies, Booking.com’s experience demonstrates that measurement infrastructure and change management are just as critical as the underlying technology. Success requires not just providing access to AI tools, but actively supporting teams through the cultural and workflow changes necessary to realize AI’s full potential.