At Rogers Communications, we take pride in ensuring billing accuracy and integrity for our customers. To do that across a range of use cases, we need to put data to work throughout our various businesses. Everything from provisioning analysis to usage measurement depends on our ability to apply data and machine learning, enabling us to work faster and smarter.
To help us better understand our customers and internal operations, we rely on both historical and real-time data to provide insights and analytics that we can leverage for billing accuracy and preventing revenue leakage. Our legacy technology was unable to adapt and scale to meet our analytical requirements: Revenue Assurance was relying on monolithic, on-premises data warehouses and tools that created a number of challenges for our data teams.
To become insight-driven and adapt to the ever-growing telecommunications landscape, Revenue Assurance needed to migrate to the cloud and modernize tooling to keep up with the flow and volume of information. We needed to utilize tools that would democratize data access and collaboration across businesses, streamline efficiency through automation, and make better use of our data science talent for new insights. Business leaders were eager to keep up with industry peers and competitors, but they needed to understand the value of a completely new environment before providing support.
To help secure approvals for modernization, we created a KPI-based, year-long roadmap that outlined vital milestones. These included establishing a centralized data lake, implementing encryption for alignment with privacy laws, creating business intelligence (BI) dashboards to help visualize insights, and finally, accomplishing our goal of becoming a data-driven organization.
To achieve the outcomes we had promised, Revenue Assurance needed a modern data platform that unified our data and enabled data teams with analytics and ML at scale. It was time to clean up shop by transforming the way we interacted with our data.
Rogers chose to deploy the Databricks Lakehouse Platform on Azure based on the customer stories and achievements we read on the Databricks website. Across industries, we saw many successful implementations of Databricks that delivered the same results we were looking for.
We created a centralized, harmonized data repository in the Azure cloud called the Revenue Assurance Data Lake (RADL), using Azure Data Factory to migrate our on-premises Hadoop and Oracle data and pipelines into it. To comply with Canada's privacy laws, we built an encryption framework to protect personally identifiable information (PII). For data analysis, we actually tried a different tool first, but it could not do predictive work at the scale we required. From that experience, we learned how critical open source frameworks are for flexibility and freedom.
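The post doesn't describe the encryption framework's internals, but the general idea can be sketched as protecting PII columns before records land in the lake. A minimal standard-library sketch, assuming hypothetical field names and an HMAC-based pseudonymization (keyed tokenization) approach — the actual framework may use reversible encryption with keys in a managed vault:

```python
import hashlib
import hmac

# Hypothetical list of PII columns; the real framework's field inventory
# is not described in the post.
PII_FIELDS = {"name", "phone"}

def pseudonymize(record: dict, key: bytes) -> dict:
    """Replace PII values with keyed-hash tokens; pass other fields through.

    Keyed hashing (HMAC-SHA256) is deterministic, so the same input always
    maps to the same token, which keeps joins and group-bys working on
    protected data without exposing the raw values.
    """
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            out[field] = hmac.new(key, str(value).encode(), hashlib.sha256).hexdigest()
        else:
            out[field] = value
    return out

key = b"example-secret"  # in practice, managed in a key vault, not hard-coded
row = {"name": "Jane Doe", "phone": "555-0100", "usage_gb": 12.4}
masked = pseudonymize(row, key)
```

Because the tokenization is deterministic per key, analysts can still count distinct customers or join usage records to billing records on the masked columns.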
Databricks Lakehouse supports multiple languages, including SQL, Python, R, and Scala, which gives Rogers an advantage in the fierce competition for data engineers and scientists: we can widen our talent pool and attract top talent regardless of preferred programming language. With Databricks, we're also not locked into specific vendors or packages. A truly open source experience means we can invoke any open source package that exists and let data scientists apply whatever they think is best. Additionally, automated clusters let us scale to workload size rather than worry about overages, storage requirements, and other limitations.
We can now easily feed real-time insights to analysts and business teams through visual dashboards that can be sliced and diced to meet the needs of stakeholders across business units. More people now understand not only how data insights are generated, but also what those insights mean for their own teams. Using advanced ML packages, we've also improved the accuracy of both predictive forecasting and descriptive SQL reporting. From an operations standpoint, Databricks gives us a clear view of cost relative to capability: we can justify spending more on compute and storage because we can see the corresponding gains in performance.
With the migration to the cloud complete and our data in RADL on Databricks Lakehouse, Revenue Assurance is now putting data-driven use cases into production faster and more frequently than ever. Where Databricks continues to shine is in improving benchmark statistics, such as roaming trends for financial analysis. To dive deeper into roaming trends, we needed new data features to understand and predict customer behavior.
For example, we are using the number of travelers flying in and out of Canada (sourced from the national statistical office, Statistics Canada or StatsCAN) and other variables such as seasonality to help us better estimate future revenue. Now Revenue Assurance is able to better analyze roaming revenue, both presently and into the future, which is critical for billing integrity and accuracy.
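As a rough illustration of the approach, roaming revenue can be regressed on traveler volumes plus a seasonal indicator. The sketch below uses hypothetical numbers and a plain ordinary-least-squares fit via the normal equations; the actual model, features, and figures used by Rogers are not described in the post:

```python
# Hedged sketch: estimate roaming revenue from traveler counts plus a
# seasonal term, using only the standard library. All data is made up.

def fit_ols(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gaussian elimination."""
    n, k = len(X), len(X[0])
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Forward elimination with partial pivoting.
    for col in range(k):
        pivot = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution.
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (xty[r] - sum(xtx[r][c] * b[c] for c in range(r + 1, k))) / xtx[r][r]
    return b

# Toy monthly rows: [intercept, travelers (millions), summer indicator].
X = [[1, 2.1, 0], [1, 2.4, 0], [1, 3.9, 1],
     [1, 4.2, 1], [1, 2.2, 0], [1, 4.0, 1]]
y = [10.5, 11.8, 21.0, 22.3, 11.0, 21.5]  # hypothetical roaming revenue ($M)

beta = fit_ols(X, y)
# Project a summer month with 4.5M travelers (hypothetical scenario).
forecast = beta[0] + beta[1] * 4.5 + beta[2] * 1
```

In practice this would run at scale on the lakehouse with a proper ML library, but the structure is the same: external traveler volumes and seasonality enter as features, and the fitted model projects roaming revenue forward for billing-integrity checks.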
Going forward, Rogers will continue to evolve and modernize using the latest data efficiencies in the Databricks Lakehouse Platform. Overall, our goal is to make ML a core competency of Revenue Assurance so that data-driven reporting and predictive elements are always being applied to achieving business outcomes. As data volume and sources continue to grow, Rogers has confidence in our Lakehouse architecture and underlying cloud infrastructure to give us the ability to efficiently use that information for smarter business decisions.