Badger Analytics: InfoAccess Cloud Migration


Data, Academic Planning & Institutional Research (DAPIR) is working to modernize and expand the capabilities of UW–Madison’s enterprise data warehouse, InfoAccess. The modernized system will use cloud analytics technology to deliver a broad set of cross-functional data, substantially expanding how analytic insights are delivered to the UW–Madison community.

The cloud data warehouse project builds on the goals of the Administrative Transformation Project (ATP) by providing a modern cloud environment that is organized, integrated, and documented for use by analysts who can effectively share information with decision-makers. The project will address the following problems with the current state and deliver modern solutions:

  • Reduce manual effort and redundant data storage
  • Provide easy access to data storage and analytics tools to a wider audience
  • Improve the speed of analytics delivery
  • Offer flexible, scalable solutions for multiple analytic data types
  • Eliminate data silos and limited reporting caused by the many disparate source systems
  • Modernize and streamline data warehouse security access and enable more secure environments
  • Minimize capital expenses for hardware and software

The three-phase project will begin with the migration of recently developed InfoAccess content that uses modern data architecture and follows best-practice data migration and modeling. The second and third phases will migrate the remainder of InfoAccess, including a significant amount of the warehoused student data, which is not dimensionally modeled and will require a redesign and rebuild to provide performant, cross-domain integrated insight delivery. InfoAccess will be phased out after the completion of this project.
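To make the "dimensionally modeled" goal concrete, the sketch below illustrates the star-schema idea behind a dimensional redesign: a central fact table holds only keys and measures, and descriptive attributes live in separate dimension tables that queries join to. All table and column names here are hypothetical examples, not actual InfoAccess or Snowflake structures.

```python
# Illustrative star schema: two dimension tables and one fact table.
# (Hypothetical names; real warehouse design would use SQL tables.)

# Dimension tables: one row per term and per program, holding descriptions.
dim_term = {
    1: {"term_code": "1244", "term_name": "Fall 2024"},
    2: {"term_code": "1252", "term_name": "Spring 2025"},
}
dim_program = {
    10: {"program_code": "CS-BS", "program_name": "Computer Science"},
    11: {"program_code": "HIST-BA", "program_name": "History"},
}

# Fact table: one row per measurement, storing only keys and measures.
fact_enrollment = [
    {"term_key": 1, "program_key": 10, "headcount": 950},
    {"term_key": 1, "program_key": 11, "headcount": 310},
    {"term_key": 2, "program_key": 10, "headcount": 905},
]

def headcount_by_term(term_name: str) -> int:
    """Join the fact table to the term dimension and sum a measure."""
    keys = {k for k, row in dim_term.items() if row["term_name"] == term_name}
    return sum(f["headcount"] for f in fact_enrollment if f["term_key"] in keys)

print(headcount_by_term("Fall 2024"))  # 950 + 310 = 1260
```

Because every analytic question reduces to the same join-and-aggregate pattern, dimensionally modeled data tends to be faster to query and easier to integrate across domains than data kept in source-system shapes.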


Frequently Asked Questions

Do you intend to store FERPA/HIPAA data, and if so, how will it be protected?

Snowflake’s security model meets and exceeds that of the InfoAccess environment. As part of the university’s contracting process, the Office of Cybersecurity conducted a review of Snowflake’s security framework and approved it for campus use. The Data Governance Council, which includes DoIT’s Chief Information Security Officer (CISO), will be engaged to monitor Snowflake’s institutional data use cases and security assessments on an ongoing basis.

Will the process create a need to rewrite queries as pieces come online?

Possibly, depending on the changes implemented to the underlying data structures. During the initial phase, we do not anticipate major changes to the underlying data mappings or structures between InfoAccess and Snowflake, because we will only migrate the best practice, dimensionally modeled domains (Finance, Lumen, DARS).

What’s the impact of the redesign and migration on InfoAccess ad hoc query users?

We hope to identify common use cases across developers that can be collected and collaboratively built into enterprise solutions, rather than each developer recreating their own.

What will the process be for each department to switch to Snowflake? Will it be similar to the move from Query Library to RADAR?

The two data warehouses will run in parallel during the transition period to give users enough time to convert queries to the new, enhanced structures. Technical architecture, data definitions, and support documentation will be provided as each release is moved from testing to production, as well as communications and support from DAPIR experts.

How will we reach agreement on data definitions when every program has its own unique caveats? How will we consolidate from an operations standpoint?

Agreement will be reached data element by data element and metric by metric, through the data governance process, for any additions to curated data structures.