How a Dead Power Grid Transformed an 8-Hour Databricks Workshop
Why a forced room change was the best thing that could have happened.
Picture this: It’s 8:00 AM at DataTune in Nashville. I’ve got a room full of data engineers sitting at classroom tables, laptops open, coffee in hand, ready to tackle an 8-hour marathon workshop on Modern Data Engineering with Databricks.
I’m about to kick off the opening slide on building resilient pipelines when we make a sudden discovery.
The power banks are dead. The room has no power.
For an 8-hour heavy-compute coding workshop, that’s a fatal error. But at Gambill Data, one of our three core architectural pillars is Anti-Fragility. We teach that junior engineers try to prevent errors, while senior architects assume errors will happen and build systems that survive them.
So, we practiced what we preached. We pivoted.
First, a massive shoutout to Cameron, the incredible AV tech for our floor. He did a fantastic job diving in to try and revive the power banks before ultimately helping us orchestrate a complete room change.
While Cameron was working his magic on the hardware, we didn’t just sit in the dark waiting. I took the class out into the hallway for an impromptu breakout session. We circled up and just talked… We talked about why everyone was passionate about data, what brought them into their data careers, and the real-world problems they were trying to solve.
Once we had the green light, we migrated the entire class out of the traditional classroom and into a large conference room. And honestly? It was the absolute best thing that could have happened.
The Psychology of the Room
A traditional classroom setup subconsciously dictates a dynamic: “Teacher and Students.” But when we all sat down together around a massive conference table after that hallway chat, the psychology of the room instantly shifted. It stopped being a lecture and immediately became an architectural working session between a Principal Architect and Engineering Peers.
That dynamic carried us through the next eight hours. An all-day technical deep-dive is a marathon, and the drop-off in attention usually hits right after lunch. But this group? Zero daydreaming. The engagement, the questions, and the “aha” moments were non-stop.
From Chaos to Trust
Our goal for the day was simple: Move data from a state of complete Chaos to a state of absolute Trust.
We didn’t just write Python syntax; we built a “Glass Factory.” We engineered a system that was transparent, highly governed, rigid where it mattered, and deeply automated. Here is what this incredible group of engineers accomplished in just a few hours:
The Governance Foundation: We started by building a “Mini-Enterprise” in Unity Catalog, enforcing strict boundaries between Development and Production environments. Rule #1: We don’t write code until we have a governed place to put it.
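That "governed place to put it" can be sketched in a few lines of Databricks SQL. This is an illustrative layout, not our exact workshop setup; the catalog and group names are placeholders:

```sql
-- Hypothetical Unity Catalog boundary: separate catalogs for dev and prod,
-- with engineers getting full rights in dev but read-only access in prod.
CREATE CATALOG IF NOT EXISTS dev;
CREATE CATALOG IF NOT EXISTS prod;

GRANT ALL PRIVILEGES ON CATALOG dev TO `data-engineers`;
GRANT USE CATALOG, SELECT ON CATALOG prod TO `data-engineers`;
```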
Strategic Ingestion: We used Lakeflow Connect to pull live Salesforce data, bypassing the need to write custom, brittle API polling scripts. Ingestion is configuration, not code.
Declarative Pipelines: We used Delta Live Tables (a.k.a. Spark Declarative Pipelines… this week) to move from Imperative micromanagement to Declarative leadership. We learned how to avoid the “AI Trap” by explicitly using Streaming Tables for cost-efficient Bronze/Silver cleansing, and Materialized Views for heavy Gold aggregations.
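The Streaming Table vs. Materialized View split looks like this in DLT's Python API. Table and column names here are illustrative, and this sketch only runs inside a Databricks pipeline (where `dlt` and `spark` are provided):

```python
# Sketch of the workshop pattern, assuming an illustrative Salesforce source table.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw Salesforce opportunities, appended incrementally.")
def bronze_opportunities():
    # Streaming table: processes only new records, keeping Bronze cheap.
    return spark.readStream.table("lakeflow.salesforce.opportunity")

@dlt.table(comment="Silver: cleansed opportunities.")
def silver_opportunities():
    # Still streaming: incremental cleansing of new rows only.
    return dlt.read_stream("bronze_opportunities").where(F.col("Amount").isNotNull())

@dlt.table(comment="Gold: revenue by region, fully recomputed each run.")
def gold_revenue_by_region():
    # Batch read -> materialized view: right for heavy aggregations
    # that need a complete recompute rather than an append.
    return (
        dlt.read("silver_opportunities")
        .groupBy("Region")
        .agg(F.sum("Amount").alias("revenue"))
    )
```

The "AI Trap" here is letting a code assistant default everything to one table type; choosing streaming for cleansing and materialized views for aggregation is what keeps the pipeline cost-efficient.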
The Senior Transition: After lunch, we stopped doing “ClickOps.” We took our UI-built pipelines and converted them into Infrastructure-as-Code (IaC) using Databricks Asset Bundles (DABs), allowing us to seamlessly promote code from Dev to Prod without changing a single line of Python.
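The Dev-to-Prod promotion hinges on the bundle's target configuration. A minimal `databricks.yml` sketch, with placeholder bundle names and workspace hosts:

```yaml
# Hypothetical Databricks Asset Bundle config: the Python stays identical,
# and only the deployment target changes between environments.
bundle:
  name: salesforce_pipeline

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://dev-workspace.cloud.databricks.com
  prod:
    mode: production
    workspace:
      host: https://prod-workspace.cloud.databricks.com
```

With this in place, `databricks bundle deploy -t dev` and `databricks bundle deploy -t prod` ship the same code to different environments, which is the whole point of leaving ClickOps behind.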
The Quarantine Pattern
My favorite moment of the day was the final stretch. We looked at our Silver layer, where we were originally dropping any Salesforce records that had negative dollar amounts.
I asked the room: “If we just silently drop a million-dollar bad record, how will the RevOps team ever know why their dashboard is wrong?”
True Anti-Fragility means we don’t just drop bad data; we route it. We refactored our code into a Quarantine Pattern (a Dead Letter Queue), routing the invalid records into a dedicated table for human triage. Then we queried our DLT Event Log to build a Data Quality (DQX) dashboard on top of it.
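Stripped of the Databricks machinery, the routing logic is simple. A minimal, framework-free sketch of the quarantine split, with illustrative record fields:

```python
# Quarantine (dead-letter) pattern: split records into valid and quarantined
# sets instead of silently dropping the bad ones.

def route_records(records, is_valid):
    """Return (valid, quarantined) so no record disappears without a trace."""
    valid, quarantined = [], []
    for rec in records:
        (valid if is_valid(rec) else quarantined).append(rec)
    return valid, quarantined

# The workshop rule: negative dollar amounts are invalid.
def is_valid_amount(rec):
    return rec["amount"] >= 0

raw = [
    {"id": "001", "amount": 250_000},
    {"id": "002", "amount": -1_000_000},  # the "million-dollar bad record"
]
silver, quarantine = route_records(raw, is_valid_amount)
```

In DLT terms, `silver` and `quarantine` become two tables fed by opposite filters on the same Bronze source, so the RevOps team can see exactly which records were held back and why.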
That is what it means to be a Strategic Partner. We didn’t just build a pipeline; we built a system that protects the business.
What’s Next?
To everyone who attended the workshop yesterday: Thank you. You showed up ready to elevate your game, proving that Strategy > Syntax every single time.
But our work this weekend isn’t done. Yesterday was about how to build systems that survive chaos. Today, we are talking about how to build careers that survive the next wave of tech.
If you are at DataTune Nashville today (Saturday, March 7th), come join my session at 1pm in room 411: Adapt or Be Automated: Continuous Learning in the Age of AI and Data Engineering. We’ve mastered the pipelines, now let’s talk about how to future-proof the engineer. I’ll see you there!

