If spreadsheets are eternal, are BI tools transitory?

A few months ago, Benn Stancil wrote about the eternal spreadsheet. While I appreciated the generous shout-out to both mdsinabox and MotherDuck (my employer), it really got the wheels turning on something I have been feeling but have only recently been able to put into words: are BI tools transitory?

Consider the following scenario in the Microsoft stack: ingest data with ADF, transform with Fabric (& maybe dbt?), build a semantic model in Power BI, and delicately craft an artisanal dashboard (with your mouse). Then your stakeholder takes a look at your dashboard, navigates to the top left corner and clicks “Analyze In Excel”. How did we get here?

I remember back in the 90s, hearing my dad talk about “killer apps”. The killer app was the app that made your whole platform work. If you wanted your platform to be adopted, it needed a killer app so good that users would switch their hardware and software (an expensive proposition at the time) just to use it. In my lifetime, I recall a few killer apps: the App Store (iOS), the web (the internet), and spreadsheets (personal computing).

Spreadsheets let a user make an update in one place and have the data flow to another place. If this concept seems oddly like a directed acyclic graph (DAG), that’s because it is. These spreadsheets contain a bunch of features that we find handy in the data stack today: snapshots (save as), version control (file naming), sharing (network shares & email attachments), business intelligence (pivot tables & charts), file interoperability (reading CSV, JSON, XML, etc.), and transformation (Power Query – there was an earlier, even more cursed version too). All of these pieces have obvious metaphors in the commonly used data stacks today. Critically, one piece is missing: orchestration (note: back in the mid-2010s, I used and loved an Excel plugin called “Jet Reports” that included an orchestrator, among other things).

Now if you were running a business in the 90s (like these guys), there was no need for orchestration in your spreadsheet. You, the business user, were the orchestrator. Your data came from many places – memos (later, emails), research (books, later PDFs), a filing cabinet (later, databases), phone calls (later, Slack), meetings (later, Zoom calls), and your own synthesis (later, ChatGPT – just kidding). Software could not contain these! We did not yet have digital twins for these analog processes. In some ways, the spreadsheet was the perfect digital collection point for these physical artifacts.

As each of these inputs to our business decision making transitioned to digital, our poor spreadsheet began to fall out of favor. We replaced memos with emails, phone calls with IM (via Skype, if you are old enough to remember), and so on. And these digital processes began to produce loads of data: every step produced an event that was stored in a database. The pace of change in the business environment increased in kind. Our once-per-month spreadsheets, orchestrated by humans, were a bit too slow, and these processes produced too much data for humans to aggregate. I fondly recall the launch of Excel 2007, which included a new architecture and file format so that we could process one million rows instead of only 65,536.

Unfortunately, the hardware at the time could not actually handle one million rows. Every single person using Excel, newly unencumbered by the old 16-bit row limit, ran into the hard limits of the Excel architecture, inevitably watching a spinning “waiting for Excel” icon before a crash (hopefully you saved recently). Hilariously, Microsoft trained users to tolerate an absolutely terrible experience. Excel could do too much. What we needed to do was unbundle this tool, take it apart piece by piece, so that we could have good, delightful experiences for our business users. Users could still use spreadsheets for things, but we needed to shift the load-bearing intelligence of our business decision making into better tools.

So we built even more powerful databases, and ways to automate decision making at scale. We began to use multiple computers, running in parallel, to solve these problems for us. Large complex systems like Hadoop were required to aggregate all this data. Companies like Google harnessed the immense scale enabled by these systems to become the largest in the world, building never-before-seen products and experiences.

At the same time, CPU clock speeds stopped increasing. We had maxed out the number of cycles we could push out of the silicon coming from the fabs. But innovation found a way to continue – we began to add more cores. Slowly but surely, Moore’s law kept holding, not on clock speed but on throughput.

The software built to take advantage of the scale made possible by huge quantities of networked computers baked in assumptions about how to work across many machines (i.e. Spark). Those assumptions did not generalize to single machines with many cores. This has not gone unnoticed, of course (see George’s tweet).

So what happened to our business intelligence while all this was going on? The number of tools exploded, while the consumption interface remained unchanged. Categories were split into sub-categories, and those into sub-sub-categories. We only had so many charting primitives, and whether we dragged and dropped in Tableau or wrote BI-as-code in Evidence, the output looked largely the same. But instead of the one tool we needed in the 90s, we now had thousands.

But I would argue we haven’t added anything new – we’ve merely unbundled it into a bunch of different products that don’t work that well together. REST APIs have enabled scalable, loosely coupled systems, but they really suck to work with. Behind every large enterprise data workflow is an SFTP server with CSVs sitting on it (if you are lucky, it’s object storage and a compressed format, but it’s the same thing).

If we look at the trends, in 5 years we will have roughly 10x more compute than we do today, and Backblaze estimates that the cost of storage will stabilize around $0.01/GB ($10/TB). If these trends hold, we will easily have enough horsepower on our laptops to put all these pieces we have decoupled over time back into one box. If BI tools are transitory, spreadsheets are eternal. The era of Spreadsheets 2.0 will be upon us.

What are the characteristics of Spreadsheets 2.0?

  • Runs on a single node with many cores (hundreds?)
  • One file format for handling all types of data (xlsx++)
  • One language for end-to-end data manipulation (sql)
  • A spreadsheet UI for interacting with data at any step in the data manipulation (power query-ish)
  • Fast, interactive charting (mosaic)
  • Intelligent, incremental orchestration (dynamic dags)
  • An AI trained on all these parts above to assist the user in documentation, understanding, and building (clippy but good)

I believe the first building block of this has emerged in front of our eyes: DuckDB. The hardware is being built as we speak (the feedback loop will build it whether we like it or not). Julian Hyde is advocating for “completing the SQL spec to handle metrics” (with apologies to Malloy) – humans have refined this language over the last 50 years and will continue to refine it for the next 50. We already have the UI primitives (Excel), so we merely need to bolt these together.
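
To make this concrete, here is a minimal sketch of the single-node, SQL-everywhere loop that DuckDB already enables – ingest, transform, and hand off – where the file and table names are hypothetical stand-ins:

-- Ingest: DuckDB can query a raw CSV directly; no separate load step.
CREATE TABLE orders AS
SELECT * FROM read_csv_auto('orders.csv');

-- Transform: plain SQL, parallelized across the local cores.
CREATE TABLE daily_revenue AS
SELECT order_date, SUM(amount) AS revenue
FROM orders
GROUP BY order_date;

-- Hand off: write a compressed, interoperable file for the next tool.
COPY daily_revenue TO 'daily_revenue.parquet' (FORMAT PARQUET);

Everything above runs in one process on one laptop, in one language – the unbundled pipeline collapsed back into a single box.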

It’s time for the humble spreadsheet to RETVRN. It’s time to bring humans back into the workflow, empowered by AI, to own the data ingestion, transformation, and synthesis required to make decisions. Of course, I’m tinkering with this idea today – if you are interested in what I have so far, please reach out; I would love to talk.

Duel: NBA Bubble Projections

For our inaugural duel, Jacob created a data set based on 538’s NBA Predictions. In a later post, he’ll take a deep dive into the mechanics of the model and how to leverage Excel’s data table function for no-code simulations. The data is available at the bottom of this post.

Jacob is a native Excel user and has created similar models for his Fantasy NBA league. He was able to take those models and dress them up for this duel – albeit in a format that was native to PowerBI and Excel. More on how that impacted the Tableau side of the analysis below.

Since we were using the 538 data set, we decided the first part of the challenge should be to replicate the view above in PowerBI & Tableau.

Some of the data weren’t readily available – namely, the projected point differentials and team logos. For the purposes of the commentary below, we will be ignoring these gaps.

As a phase 2 / stretch goal for this challenge, we also set out to create our own, novel visualization of the scenario combinations. This helped us to answer questions like “When the Bucks make the finals, who are their most likely opponents?” or “What are the paths for the Celtics to the conference finals?”.

PowerBI Commentary

This section is written in first-person by Jacob.

Part 1: 538’s visualization

Where PowerBI succeeded: Getting the calculations out of the attached data set was fairly easy once I sorted out the data model in my head. While the data wasn’t perfectly formed, it was quite easy to shape it using PowerQuery to get what I needed.

PowerBI Data Model

I added a couple of measures on top of it and had the table working pretty quickly. Getting the conditional formatting to match was fairly easy too, although to get an exact color match I used the “color dropper” from PowerPoint on a screenshot of the website (gross).

Where PowerBI struggled: I couldn’t quite figure out how to get the sorting to work when I replaced certain values with “icons”, i.e. >99% or the “checkmark” icon. PowerBI treats the field as a string and therefore does a character-based sort. This means that applying a single sort on the outcome of the model doesn’t really work! Instead, you have to sort by ELO rating or by projected standings to get a cohesive sort.
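
That character-based sort is easy to reproduce outside of PowerBI; here is a quick SQL illustration with made-up values:

-- Strings sort by character codes, not by numeric value,
-- so 'icon' strings like '>99%' land out of numeric order:
SELECT pct
FROM (VALUES ('7%'), ('12%'), ('>99%')) AS t(pct)
ORDER BY pct;
-- returns '12%', '7%', '>99%' – lexicographic, not numeric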

After I wrote this initially – I did find a workaround for this sorting issue, sort of. This video from Guy In A Cube explains the “hack” – but it is indeed just a hack.

Part 2: Scenario Modeling

I am pretty satisfied with how this visual turned out – but the sorting on probability fields continued to plague me. Also, the mental model for this data was effectively recursive, and I am not sure how to accomplish that in PowerBI, so I imported the same table twice. See the image on the left for how this was accomplished.

After fighting with DAX on and off for a few days, I was able to get a “base scenario” calculation using the ALL filter function. This meant that when you selected a team from “series_winners” you could calculate the odds of that scenario versus the “base” scenario. This surfaces really neat scenarios in the modeler, such as an OKC win in the second round, which doubles Milwaukee’s championship odds.

You can find the DAX for stripping the filters from the “series_winners” table below.

All Scenario Win Pct =
CALCULATE (
    COUNT ( series_detail[TeamID] )
        / DISTINCTCOUNT ( series_winners[ScenarioID] ),
    ALL ( series_winners ) // strip every filter from series_winners for the base scenario
)

Tableau Commentary

This section is written in first-person by Nate.

Part 1: Data Prepping

Where Tableau succeeded: Tableau handled the data really well once I had done a lot of trial and error to get it into the right format. The data model I put together brings two of the tables in twice, so it’s likely sub-optimal, but it is functional. Tableau is consistently improving how data can be loaded & prepared (see the recent changes just launched in 2020.2), but my unfamiliarity with those new features meant I didn’t have time to give them a go on this analysis.

Tableau Data Model

Where Tableau struggled: My experience getting data into Tableau nearly always involves a connection to SQL – either a direct connection to a table or a very clean CSV output of a SQL query. Since the data model created by Jacob is in PowerBI/Excel, I had to make some manual adjustments to the tables to get them into the format I needed, such as creating long tables (just a few columns) out of wide tables. This resulted in several more hours of work as I went back and forth between modifying the data and trying to visualize it in Tableau.
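
For anyone facing the same reshaping step, here is a minimal SQL sketch of the wide-to-long transformation (the table and column names are hypothetical):

-- Hypothetical wide table: one row per team, one probability column per round.
-- A portable UNION ALL unpivot produces a long (team, round, win_pct) table.
SELECT team, 'round_1' AS round, round_1_pct AS win_pct FROM series_odds
UNION ALL
SELECT team, 'round_2' AS round, round_2_pct AS win_pct FROM series_odds
UNION ALL
SELECT team, 'finals' AS round, finals_pct AS win_pct FROM series_odds;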

Part 2: 538’s visualization

Tableau’s version of 538’s table

Where Tableau succeeded: Getting this table created was very simple once I finalized the data model. Sorting works well across all columns and the Tableau method of dragging dimensions & measures around to get colors & formats worked well.

Where Tableau struggled: I could not get some of the nifty 538 features into the table – such as a checkmark at 100% or a string for “<1%”. Instead, Tableau displays 0% for situations that round down to 0%. I tried adding in a decimal place, but that just cluttered up the view. As well, Tableau does not have strong conditional formatting capabilities for the background of cells. It’s possible (see the KB article here), but I saved the development time for other work by instead just coloring the numbers and shifting to a darker cell background.

Part 3: Scenario Modeling

Where Tableau succeeded: After several rounds of arm wrestling, pleading, and bribing, Tableau finally assented to the view I wanted, which included:

(a) The original odds
(b) New odds based on selection
(c) Visualization demonstrating change in odds

There’s much more I wish I could have done, but in the interest of time (the playoffs are going on now!) it was time to hit ‘Publish’.

Where Tableau struggled: I spent multiple hours trying to get the FIXED LOD formula in Tableau to work, which would have let me compare the odds from the unfiltered view and the filtered scenario view at the same time. Tableau can be frustrating to use when trying to visualize multiple levels of detail in the same view, and my chosen data model likely contributed to the struggles.
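
For readers who think in SQL rather than LODs, the effect I was after is analogous to conditional aggregation – a baseline computed over all rows sitting next to a value computed under the current selection. A sketch with hypothetical table and column names:

-- One row per simulated outcome; won_title is 0/1, and
-- scenario_selected is a hypothetical flag for the chosen scenario.
SELECT team,
       AVG(CASE WHEN scenario_selected THEN won_title END) AS scenario_odds,
       AVG(won_title) AS baseline_odds  -- unaffected by the selection
FROM simulations
GROUP BY team;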

I asked the best Tableau user I know for some advice as I was getting this viz prepared and his advice sums up the struggle well: “When dealing with LODs, I usually just try every permutation until something somehow works.” Turns out he was right in this instance, too.

Overall Winner: PowerBI

Category Winners:

538 Duplication: PowerBI – but really, neither piece of software allows the kind of customization that the web package used by 538 offers. Still, we gave it a decent go. PowerBI does tables a bit better, so it wins here.

Scenario Modeling: While Tableau is very snappy and honestly more “discoverable” (good job with the tooltips, Nate), the PowerBI “tournament visual” is very intuitive for sports fans. Additionally, the analysis at the bottom of the chart is more comprehensive and more understandable than the Tableau bar charts of the same info. We give the edge to PowerBI.

A note on data prep (not scored): The in-app data prep with PowerQuery is a no-contest when compared to Tableau. This will pretty much always be true, and it can be both good and bad: good, because it puts a lot of control at the fingertips of the analyst, and bad, because a lot of code, either in M or DAX, gets added to PowerBI instead of the database engine of your choice.

PowerBI Commentary from a Tableau User

Where PowerBI succeeded: PowerBI is more equipped than Tableau to display data in a tabular format such as the one on 538 and that shows in the final product. The ability to put many small views of data into a single dashboard also proved to be powerful in the final scenario modeling output.

Where PowerBI struggled: PowerBI depended too strongly on its native table functionality, resulting in lots of detail but a lack of bold & clear visuals. Sorting also turned out to be tough, as you may notice when using the final interactive version.

Tableau Commentary from a PowerBI User

Where Tableau succeeded: The data model is much easier to grasp, even for technical users. It’s much faster to interact with, and the tooltips make it very easy to understand. Additionally, clever use of the NBA logo immediately contextualizes the user.

Where Tableau struggled: As Nate mentioned, the data prep took a significant amount of time. The conditional formatting inside tables is not very finely tuned, especially compared to PowerBI. Hilariously, sorting inside a table has its own set of issues (sort on “Win Championship”).

Links

Link: Interactive version of the PowerBI report.

Link: Interactive version of the Tableau report.

Let us know your thoughts below! A list of the files can be found after the jump.

Charts Reconsidered: Mask Wearing

Like any good analyst, I enjoy scrolling through r/dataisbeautiful. And when I say enjoy, I really mean “I’m doom-scrolling through Reddit because Twitter is depressingly worse.” Of course this leads me here, to our first entrant in “Charts Reconsidered”, where every week I will revisit a chart from Reddit and suggest some improvements.

That leads me to this chart – the 5th-ranked chart on the subreddit on July 21st, 2020. It tells us who does and doesn’t use masks, across a few different breakdowns. It is an interesting story, but it could be told in a better way.

3D bars. Yikes. This reminds me vaguely of “WordArt”.

Sorting

There are 4 distinct groups in this chart – Gender, Political Party, Education, & Overall – but they are all mashed together with no space.

In Excel, I would use “blank” series to add space between each group, improving readability while keeping shared axes. I would also pick a consistent series to sort on, from high to low.

Colors

Green & grey isn’t a great color combo, and it’s amplified by a lime-green gridline color. It’s not a good look.

Keep the gridlines in the background – a lighter gray, perhaps. For a chart like this, I would use a light and a dark tone of the same color. Or you can steal the Ben Evans approach and use shades of gray plus a single color for emphasis (in his case, red).

Labels & Gridlines

Too much info is crammed into this part, which muddles the story. There are major and minor Y gridlines, which are labeled without a percent sign. The bars are also labeled. Lastly, the X & Y axis labels are switched.

Turn off the minor gridlines and set the major gridline interval to either 25 or 50. Add percentage labels so the units are clear. And fix the axis labels (or remove them).

Chart Type

3D bars with series stacked front to back are not a good look. This is most obvious in the GOP group, where the labels overlap the bars. The lack of spacing also makes it challenging to see differences between groups.

Just use a regular, stacked bar chart.

New visualization

With the magic of PowerBI, I’ve crafted a new chart, with the same data, that tells a more visually appealing & easier-to-understand story.

Regular stacked bar chart, grouped and then sorted alphabetically.

The labels are removed, the legend is cleaned up, and the colors are simplified. Did it take longer to make this chart? Yes! Does it tell a better story? Also yes!

I hope you found this feedback helpful. Let me know what else you would change in the comments below.