data

Industry information management for causal inference

Proactive collection of data to comply with or confront assumptions

Crosspost: The Art of Abstraction in ETL

Rounding out my three-part ETL series from Airbyte's developer blog

The Art of Abstraction in ETL: Dodging Data Extraction Errors

Cross-post of my guest post on Airbyte's developer blog

Goin' to Carolina in my mind (or on my hard drive)

Larger-than-memory processing of North Carolina's voter file with DuckDB and Apache Arrow
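
A minimal sketch of the kind of pattern the post covers, using the {arrow} and {duckdb} R packages; the file path and column names here are hypothetical stand-ins for the real voter file layout.

```r
library(arrow)
library(duckdb)
library(dplyr)

# Lazily scan a folder of voter-file CSVs on disk instead of reading them into RAM
voters <- open_dataset("data/nc_voter_file/", format = "csv")

voters |>
  to_duckdb() |>                            # register the Arrow dataset with DuckDB
  count(county_desc, voter_status_desc) |>  # aggregation runs larger-than-memory
  collect()                                 # only the small summary lands in R
```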

Oh, I'm sure it's probably nothing

How we do (or don't) think about null values, and why the polyglot push makes getting them right all the more important
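
As a small illustration of why defaults matter (the toy vector is made up, and SQL engines make yet another choice by silently skipping NULLs in aggregates):

```r
x <- c(-1, 0, 1, NA)

mean(x)               # NA: base R propagates the missing value
mean(x, na.rm = TRUE) # 0: or silently drops it, if you ask
sum(is.na(x))         # 1: the count of missing values is a signal in its own right
```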

Update: grouped data quality check PR merged to dbt-utils

After a prior post on the merits of grouped data quality checks, I demo my newly merged implementation for dbt

Using databases with Shiny

Key issues when adding persistent storage to a Shiny application, featuring {golem} app development and hosting on DigitalOcean
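
A stripped-down sketch of the database wiring (without the {golem} scaffolding or DigitalOcean setup the post covers), assuming a hypothetical Postgres instance and `responses` table:

```r
library(shiny)
library(pool)
library(DBI)

# One shared connection pool for the whole app
pool <- dbPool(
  RPostgres::Postgres(),
  dbname = "app_db", host = "db.example.com",
  user = Sys.getenv("DB_USER"), password = Sys.getenv("DB_PASS")
)
onStop(function() poolClose(pool))  # release connections when the app stops

ui <- fluidPage(tableOutput("responses"))

server <- function(input, output, session) {
  output$responses <- renderTable({
    dbGetQuery(pool, "SELECT * FROM responses ORDER BY created_at DESC LIMIT 10")
  })
}

shinyApp(ui, server)
```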

Make grouping a first-class citizen in data quality checks

Which of these numbers doesn’t belong? -1, 0, 1, NA. You can't judge data quality without data context, so our tools should enable as much context as possible.
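
A tiny dplyr sketch of the argument, with made-up data:

```r
library(dplyr)

# All of the missing values come from a single group
orders <- tibble(
  region = c("east", "east", "west", "west"),
  amount = c(10, 12, NA, NA)
)

# The ungrouped check reports 50% missing, but gives no hint of where
orders |> summarize(p_missing = mean(is.na(amount)))

# The grouped check shows that "west" alone carries the problem
orders |>
  group_by(region) |>
  summarize(p_missing = mean(is.na(amount)), .groups = "drop")
```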

Update: column-name contracts with dbtplyr

Following up on 'Embedding Column-Name Contracts... with dbt' with a demo of my new dbtplyr package, which further streamlines the process

A lightweight data validation ecosystem with R, GitHub, and Slack

A right-sized solution to automated data monitoring, alerting, and reporting using R (`pointblank`, `projmgr`), GitHub (Actions, Pages, issues), and Slack
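
For a flavor of the `pointblank` piece (the GitHub Actions, Pages, and Slack wiring are in the post itself), a minimal sketch with a made-up table and checks:

```r
library(pointblank)
library(dplyr)

data <- tibble(id = 1:3, amount = c(10, 25, -5))

agent <-
  create_agent(tbl = data, label = "nightly checks") |>
  col_vals_not_null(columns = vars(id)) |>
  col_vals_gte(columns = vars(amount), value = 0) |>
  interrogate()

if (!all_passed(agent)) {
  # in the full pipeline, this is where a GitHub issue and Slack alert would fire
  message("Validation failed; see the agent report.")
}
```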