Looker is unique among popular business intelligence (BI) tools for its innovative approach to data modeling and exploration. Holistics, on the other hand, not only provides a data modeling layer but also promises full control over the whole data analytics workflow, from raw data to insights.
In this article, we compare Holistics and Looker in detail: how the two tools are similar, how they differ, and which use cases each fits best.
On the surface, Looker and Holistics are similar: both provide a data modeling layer that translates business users’ drag-and-drop inputs into database queries, powering a self-service data exploration interface.
The data modeling layer not only enables self-service analytics for business users but also provides a centralized management interface where data analysts can ensure that the data exposed is accurate, maintainable, and reusable.
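To make the idea of such a modeling layer concrete, here is a minimal sketch of how a semantic layer can compile a drag-and-drop selection into SQL. All names (`orders`, `total_revenue`, `compile_query`) are hypothetical, and neither Looker nor Holistics works exactly this way; this only illustrates the general translation step.

```python
# A toy model definition, written once by an analyst. The semantic
# layer uses it to compile a business user's selection into SQL.
# All table and field names here are hypothetical.
MODEL = {
    "table": "orders",
    "dimensions": {"status": "orders.status", "country": "orders.country"},
    "measures": {"total_revenue": "SUM(orders.amount)"},
}

def compile_query(model, dimensions, measures):
    """Turn a selection of dimension/measure names into a SQL string."""
    select_dims = [f"{model['dimensions'][d]} AS {d}" for d in dimensions]
    select_meas = [f"{model['measures'][m]} AS {m}" for m in measures]
    sql = f"SELECT {', '.join(select_dims + select_meas)}"
    sql += f" FROM {model['table']}"
    if dimensions:
        sql += f" GROUP BY {', '.join(model['dimensions'][d] for d in dimensions)}"
    return sql

# A business user dragging in "status" and "total_revenue" yields:
print(compile_query(MODEL, ["status"], ["total_revenue"]))
# SELECT orders.status AS status, SUM(orders.amount) AS total_revenue FROM orders GROUP BY orders.status
```

Because the field-to-SQL mapping lives in one model definition, every query built on top of it stays consistent, which is what makes the exposed data maintainable and reusable.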
However, the differences between Holistics and Looker become apparent when we look deeper into how the two handle various use cases.
Looker is very opinionated when it comes to querying. The product’s philosophy is to emphasize maintainability and reusability through an upfront modeling process before any querying is done, so its ad-hoc analysis feature is deliberately de-emphasized. Looker provides only a bare-bones SQL editor that returns a raw result set, without the rich exploration interface you would expect in a data exploration tool. The lack of version history and visualizations for ad-hoc queries discourages data analysts from using the feature indiscriminately.
Holistics, on the other hand, places no such restriction on ad-hoc analysis. Its default ad-hoc query editor is comprehensive, with features such as version history, query templates, and a schema explorer, and it provides the full visualization experience you would expect from a data exploration tool. Results from ad-hoc queries can then be converted directly into new models to be reused later, allowing for a seamless transition between ad-hoc analysis and data modeling.
A requirement that immediately sets Looker apart from other alternatives is the need to invest time in an upfront data modeling step before any data visualization can be done. For Looker, this means first learning LookML and writing the modeling code in it, as an abstraction on top of a SQL database.
Learning LookML is not rocket science, but it still takes significant effort, creating a fairly high barrier to entry. As a result, only analysts willing to invest in Looker’s approach go through the process of learning it. Furthermore, Looker’s domain-specific-language approach has a fairly long feedback cycle: analysts need to write the code, validate it, save it, and only then can they test it by exploring the resulting models. If there is an issue with the generated code, they have to go back, fix the code, and repeat the cycle.
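To make the upfront effort concrete, here is a minimal LookML sketch for a hypothetical `orders` table; the table and field names are illustrative, not taken from either product’s documentation:

```lookml
# A hypothetical "orders" view. Every field exposed to business
# users must first be declared here before it can be explored --
# this is the upfront modeling step described above.
view: orders {
  sql_table_name: public.orders ;;

  dimension: status {
    type: string
    sql: ${TABLE}.status ;;
  }

  measure: total_revenue {
    type: sum
    sql: ${TABLE}.amount ;;
  }
}

# In a separate model file, the view is exposed through an explore:
explore: orders {}
```

Only after definitions like these are written, validated, and saved can an analyst test them by exploring the resulting model, which is where the long feedback cycle comes from.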
Holistics’ data modeling process, on the other hand, does not require learning a new language. Only minimal SQL knowledge is needed, and only for advanced use cases such as custom fields. Analysts can jump straight into data modeling and produce models with minimal learning. Furthermore, the feedback cycle is short, because the data modeling interface has integrated visualization output, making any fault in the models immediately apparent. Since the underlying models are also stored in a domain-specific language, Holistics retains the advantages of that approach and gives more advanced users full control over the modeling process.
Looker prides itself on being a data exploration tool and thus does not give analysts the ability to do data preparation; its documentation refers you to partners when you need data workflows. Furthermore, Looker does not provide a data catalog feature that would let the data team maintain centralized definitions of metrics and models to foster cross-company learning and exploration.
Holistics’ philosophy is to provide full control over the data analytics process, so it offers both data workflow capabilities and a data catalog that the whole organization can participate in. The tight integration between data modeling and data workflows means an analyst can create a workflow automatically during the modeling process, making it frictionless to build a full data pipeline with minimal effort.
Furthermore, by integrating data exploration and data workflows, Holistics makes it easy to debug a data issue when one occurs. Tracing an inaccurate metric is as simple as clicking a button to see where, at which step of the data pipeline, the data goes wrong.
The addition of a data catalog in Holistics means data definitions can be shared across the whole organization, eliminating the need for informal back-and-forth between data stakeholders while fostering data learning and exploration. As a result, the organization’s overall data literacy improves and everyone becomes more data-driven.
At first glance, Holistics and Looker seem similar in approach, with both using a semantic layer that translates business users’ drag-and-drop inputs into SQL queries. Looking deeper, however, Holistics goes beyond data exploration and covers the whole data analytics process, providing the full flow from raw data to final insights.
In other words, while Looker provides a piece of the puzzle, Holistics provides the fully assembled picture for your data analytics needs.
The following table provides a more detailed comparison between Holistics and Looker, going into each aspect of both tools.
| Aspect | Feature |
|---|---|
| Data Delivery | Email/Slack schedules |
| Visualization | Beautiful, interactive visualizations |
| Embedded Analytics | Embedding into your own application |
| | Native SQL support |
| | 100% browser-based; works on all OSes (Mac, Linux, Windows) |
| | Works with detailed, operational table data |
| | Ability to drill down to explore more detailed data |
| Access Control | Row-based access control |
| | Version control support for reports and queries |
| Collaboration | Likes and commenting |
| Data Modeling | Easy-to-learn data modeling approach |
| | Centralized data definitions across the organization (data catalog) |
| | Full-featured ad-hoc querying interface |
| | Ad-hoc querying without the need for data modeling |
| | Data modeling without SQL knowledge |
| | Error reporting at the visualization layer |
| | Integrated data modeling and workflow system |
A data catalog helps users search and understand an organization’s data definitions and raw data while maintaining a high level of data governance.