flowchart TB

  %% Node definitions
  gd[("`<b>Source Data</b>
    Google Drive: calcofi/data/{provider}/{dataset}/*.csv`")]
  iw["<b>Ingest Workflow</b>
    workflows: ingest_{provider}_{dataset}.qmd"]
  dd["<b>Data Definitions</b>
    workflows: /ingest/{provider}/{dataset}/:
    <ul>
      <li>tbls_redefine.csv</li>
      <li>flds_redefine.csv</li>
    </ul>"]
  db[("<b>Database</b>")]
  api["<b>API Endpoint</b>
    /db_tables
    /db_columns"]
  catalog["<b>R Function</b>
    calcofi4r::cc_db_catalog()"]
  eml["<b>Publish Workflow</b>
    workflows: publish_{dataset}_{portal}.qmd
    with {portal}s:
    <ul>
      <li>erddap</li>
      <li>edi</li>
      <li>obis</li>
      <li>ncei</li>
    </ul>"]

  %% Edge definitions
  gd --> iw
  iw -->|"1 auto-generated"| dd
  dd -->|"2 manual edit"| iw
  iw -->|"3 data"| db
  iw --> comments
  comments -->|"4 metadata"| db
  db --> api
  api --> catalog
  db --> eml

  %% Comments subgraph with internal nodes
  subgraph comments["<b>Database Comments</b>
    (stored as text in JSON format to differentiate elements)"]
    direction TB
    h["hideme"]:::hidden
    h ~~~ tbl
    h ~~~ fld
    tbl["per <em>Table</em>:
      <ul>
        <li>description</li>
        <li>source (<em>linked</em>)</li>
        <li>source_created (<em>datetime</em>)</li>
        <li>workflow (<em>linked</em>)</li>
        <li>workflow_ingested (<em>datetime</em>)</li>
      </ul>"]
    fld["per <em>Field</em>:
      <ul>
        <li>description</li>
        <li>units (SI)</li>
      </ul>"]
  end

  %% Clickable links
  click gd "https://drive.google.com/drive/folders/1xxdWa4mWkmfkJUQsHxERTp9eBBXBMbV7" "calcofi folder - Google Drive"
  click api "https://api.calcofi.io/db_tables" "API endpoint"
  click catalog "https://calcofi.io/calcofi4r/reference/cc_db_catalog.html" "R package function"

  %% Styling
  classDef source fill:#f9f9f9,stroke:#000,stroke-width:2px,color:#000
  classDef process fill:#a3e0f2,stroke:#000,stroke-width:2px,color:#000
  classDef eml fill:#F0FDF4,stroke:#22C55E,stroke-width:2px,color:#000,text-align:left
  classDef data fill:#ffbe75,stroke:#000,stroke-width:2px,color:#000
  classDef api fill:#9ad294,stroke:#000,stroke-width:2px,color:#000
  classDef meta fill:#c9a6db,stroke:#000,stroke-width:2px,color:#000,text-align:left
  classDef hidden display: none;

  class gd source
  class dd,comments,tbl,fld meta
  class iw process
  class db data
  class api,catalog api
  class tbl,fld li
  class eml eml
5 Database
5.1 Database naming conventions
There are only two hard things in Computer Science: cache invalidation and naming things. – Phil Karlton (Netscape architect)
We’re converging on the best conventions for naming. Here are some ideas:
5.1.1 Name tables
- Table names are singular and all lowercase.
5.1.2 Name columns
To name columns, use snake_case (i.e., lowercase with underscores) so as to prevent the need to quote identifiers in SQL statements. (TIP: Use janitor::clean_names() to convert a table.)

- Unique identifiers are suffixed with:
- Suffix with units where applicable (e.g., *_m for meters, *_km for kilometers, *_degc for degrees Celsius). See the units vignette.
- Set the geometry column to geom (used by the PostGIS spatial extension). If the table has multiple geometry columns, use geom for the default geometry column and geom_{type} for additional geometry columns (e.g., geom_point, geom_line, geom_polygon).
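A sketch of these conventions in DDL (a hypothetical site table, not an actual CalCOFI table; the geometry types assume the PostGIS extension is installed):

CREATE TABLE site (                        -- singular, lowercase table name
  site_id    serial PRIMARY KEY,           -- unique identifier
  site_name  text,                         -- snake_case column names
  depth_m    numeric,                      -- units suffix: meters
  temp_degc  numeric,                      -- units suffix: degrees Celsius
  geom       geometry(Point, 4326),        -- default geometry column
  geom_line  geometry(LineString, 4326));  -- additional geometry: geom_{type}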
5.2 Use Unicode for text
The default character encoding for PostgreSQL is Unicode (UTF8), which allows for international characters, accents and special characters. Improper encoding can royally mess up basic text.

Logging into the server, we can confirm this with the following command:
docker exec -it postgis psql -l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
--------------------+-------+----------+------------+------------+-------------------
gis | admin | UTF8 | en_US.utf8 | en_US.utf8 | =Tc/admin +
| | | | | admin=CTc/admin +
| | | | | ro_user=c/admin
lter_core_metabase | admin | UTF8 | en_US.utf8 | en_US.utf8 | =Tc/admin +
| | | | | admin=CTc/admin +
| | | | | rw_user=c/admin
postgres | admin | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | admin | UTF8 | en_US.utf8 | en_US.utf8 | =c/admin +
| | | | | admin=CTc/admin
template1 | admin | UTF8 | en_US.utf8 | en_US.utf8 | =c/admin +
| | | | | admin=CTc/admin
template_postgis | admin | UTF8 | en_US.utf8 | en_US.utf8 |
(6 rows)
Use Unicode (utf-8 in Python or UTF8 in PostgreSQL) encoding for all database text values to support international characters and documentation formatting (i.e., tabs, etc. for markdown conversion).
In Python, use pandas to read (read_csv()) and write (to_csv()) with UTF-8 encoding (i.e., encoding='utf-8'):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('postgresql://user:password@localhost:5432/dbname')

# read from a csv file
df = pd.read_csv('file.csv', encoding='utf-8')

# write to PostgreSQL (encoding is handled by the database connection)
df.to_sql('table_name', engine, if_exists='replace', index=False,
          method='multi', chunksize=1000)

# read from PostgreSQL
df = pd.read_sql('SELECT * FROM table_name', engine)

# write to a csv file with UTF-8 encoding
df.to_csv('file.csv', index=False, encoding='utf-8')
In R, use readr to read (read_csv()) and write (write_excel_csv()) to force UTF-8 encoding:

library(readr)
library(DBI)
library(RPostgres)

# connect to PostgreSQL
con <- dbConnect(
  RPostgres::Postgres(),
  dbname   = "dbname",
  host     = "localhost",
  port     = 5432,
  user     = "user",
  password = "password")

# read from a csv file
df <- read_csv('file.csv', locale = locale(encoding = 'UTF-8'))  # explicit
df <- read_csv('file.csv')                                       # implicit (UTF-8 is the default)

# write to PostgreSQL
dbWriteTable(con, 'table_name', df, overwrite = TRUE)

# read from PostgreSQL
df <- dbReadTable(con, 'table_name')

# write to a csv file (write_excel_csv() always writes UTF-8, with a byte
# order mark so that Excel detects the encoding)
write_excel_csv(df, 'file.csv')
5.3 Integrated database ingestion strategy
5.3.1 Overview
The CalCOFI database uses a two-schema strategy for development and production:
- dev schema: Development schema where new datasets, tables, fields, and relationships are ingested and QA/QC’d. This schema is recreated fresh with each ingestion run using the master ingestion script.
- prod schema: Production schema for stable, versioned data used by public APIs, apps, and data portals (OBIS, EDI, ERDDAP). Once dev is validated, it’s copied to prod with a version number.
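A minimal SQL sketch of this lifecycle (hypothetical site table; in practice the recreation is handled by the master ingestion script, and promotion is typically done with pg_dump/pg_restore rather than plain SQL):

-- fresh start: drop and recreate the dev schema on each ingestion run
DROP SCHEMA IF EXISTS dev CASCADE;
CREATE SCHEMA dev;

-- ... ingest, QA/QC ...

-- once validated, copy a table into prod (note: CREATE TABLE ... AS TABLE
-- copies data only, not constraints or indexes)
CREATE TABLE prod.site AS TABLE dev.site;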
5.3.2 Master ingestion workflow
All datasets are ingested using a single master Quarto script calcofi4db/inst/ingest.qmd that:

- Drops and recreates the dev schema (fresh start each run)
- Ingests multiple datasets from Google Drive source files (CSV, potentially SHP/NC in future)
- Applies transformations using redefinition files (tbls_redefine.csv, flds_redefine.csv)
- Creates relationships (primary keys, foreign keys, indexes); see the DDL sketch below
- Records schema version with metadata in the schema_version table
Each dataset section in the master script handles:
- Reading CSV files from Google Drive
- Transforming data according to redefinition rules
- Loading into database tables
- Adding table/field comments with metadata
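For example, the relationship-creation step boils down to DDL along these lines (hypothetical table and column names):

-- primary key
ALTER TABLE dev.site
  ADD CONSTRAINT site_pk PRIMARY KEY (site_id);

-- foreign key
ALTER TABLE dev.sample
  ADD CONSTRAINT sample_site_fk
  FOREIGN KEY (site_id) REFERENCES dev.site (site_id);

-- index on a commonly filtered column
CREATE INDEX sample_date_idx ON dev.sample (date);

-- spatial index on the geometry column
CREATE INDEX site_geom_idx ON dev.site USING gist (geom);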
5.3.3 Using calcofi4db package
The calcofi4db package provides streamlined functions for dataset ingestion:
library(calcofi4db)
library(DBI)
library(RPostgres)

# connect to database
con <- dbConnect(
  Postgres(),
  dbname   = "gis",
  host     = "localhost",
  port     = 5432,
  user     = "admin",
  password = "postgres")

# read CSV files and metadata
d <- read_csv_files(
  provider = "swfsc.noaa.gov",
  dataset  = "calcofi-db")

# transform data according to redefinitions
transformed_data <- transform_data(d)

# ingest into dev schema
ingest_csv_to_db(
  con              = con,
  schema           = "dev",
  transformed_data = transformed_data,
  d_flds_rd        = d$d_flds_rd,
  d_gdata          = d$d_gdata,
  workflow_info    = d$workflow_info)

# record schema version
record_schema_version(
  con         = con,
  schema      = "dev",
  version     = "1.0.0",
  description = "Initial ingestion of NOAA CalCOFI Database",
  script_permalink = "https://github.com/CalCOFI/calcofi4db/blob/main/inst/ingest.qmd")
5.3.4 Schema versioning
Each successful ingestion creates a new schema version recorded in the schema_version table with:
- version: Semantic version number (e.g., “1.0.0”, “1.1.0”)
- description: Changes introduced in this version
- date_created: Timestamp of ingestion
- script_permalink: GitHub permalink to the versioned ingestion script
Versions are also archived as SQL dumps in Google Drive for reproducibility.
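A sketch of what the schema_version table and a version record might look like (the actual DDL lives in the ingestion script; columns mirror the fields listed above):

CREATE TABLE IF NOT EXISTS dev.schema_version (
  version          text PRIMARY KEY,           -- semantic version, e.g. '1.0.0'
  description      text,                       -- changes introduced in this version
  date_created     timestamptz DEFAULT now(),  -- timestamp of ingestion
  script_permalink text);                      -- GitHub permalink to ingestion script

INSERT INTO dev.schema_version (version, description, script_permalink)
  VALUES (
    '1.0.0',
    'Initial ingestion of NOAA CalCOFI Database',
    'https://github.com/CalCOFI/calcofi4db/blob/main/inst/ingest.qmd');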
5.3.5 Metadata and documentation
After ingestion, metadata is stored in PostgreSQL COMMENTs as JSON at the table level:
- description: General description and row uniqueness
- source: CSV file link to Google Drive
- source_created: Source file creation timestamp
- workflow: Link to rendered ingestion script
- workflow_ingested: Ingestion timestamp
And at the field level:
- description: Field description
- units: SI units where applicable
These comments are exposed via the API /db_tables and /db_columns endpoints and rendered with calcofi4r::cc_db_catalog().
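For example, a table comment stored as JSON might look like the following (hypothetical table and values), and can be read back with PostgreSQL’s obj_description():

COMMENT ON TABLE dev.site IS '{
  "description": "sampling sites; one row per unique site_id",
  "source": "https://drive.google.com/...",
  "source_created": "2025-01-15 10:30:00",
  "workflow": "https://calcofi.io/workflows/ingest_swfsc.noaa.gov_calcofi-db.html",
  "workflow_ingested": "2025-02-01 12:00:00"}';

-- read the comment back
SELECT obj_description('dev.site'::regclass, 'pg_class');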
5.3.6 Publishing to portals
After the prod schema is versioned, additional workflows publish data to portals (ERDDAP, EDI, OBIS, NCEI) using Ecological Metadata Language (EML) via the EML R package, pulling metadata directly from database comments.
5.3.7 Alternative: describe tables and columns directly
Use the COMMENT clause to add descriptions to tables and columns, either through the GUI pgadmin.calcofi.io (by right-clicking on the table or column and selecting Properties) or with SQL. For example:

COMMENT ON TABLE public.aoi_fed_sanctuaries IS
  'areas of interest (`aoi`) polygons for federal **National Marine Sanctuaries**; loaded by _workflow_ [load_sanctuaries](https://calcofi.io/workflows/load_sanctuaries.html)';
Note the use of markdown for including links and formatting (e.g., bold, code, italics), such that the above SQL will render like so:
areas of interest (aoi) polygons for federal National Marine Sanctuaries; loaded by workflow load_sanctuaries

It is especially helpful to link to any workflows that are responsible for ingesting or updating the input data.
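Columns are commented the same way; for example (hypothetical column name):

COMMENT ON COLUMN public.aoi_fed_sanctuaries.sanctuary IS
  'name of the National Marine Sanctuary';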
5.3.8 Display tables and columns with metadata
- These descriptions can be viewed in the CalCOFI API api.calcofi.io as CSV tables (see code in calcofi/api: plumber.R):
  - api.calcofi.io/db_tables fields:
    - schema: (only “public” so far)
    - table_type: “table”, “view”, or “materialized view” (none yet)
    - table: name of table
    - table_description: description of table (possibly in markdown)
  - api.calcofi.io/db_columns fields:
    - schema: (only “public” so far)
    - table_type: “table”, “view”, or “materialized view” (none yet)
    - table: name of table
    - column: name of column
    - column_type: data type of column
    - column_description: description of column (possibly in markdown)
- Fetch and display these descriptions in an interactive table with calcofi4r::cc_db_catalog().
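Under the hood, the /db_tables response can be approximated with a query along these lines (a sketch only; see plumber.R for the actual implementation):

SELECT
  t.table_schema AS schema,
  t.table_type,
  t.table_name   AS "table",
  obj_description(
    format('%I.%I', t.table_schema, t.table_name)::regclass,
    'pg_class')  AS table_description
FROM information_schema.tables AS t
WHERE t.table_schema = 'public';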
5.4 Relationships between tables
TODO: add calcofi/apps: db to show latest tables, columns and relationships
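In the meantime, existing foreign-key relationships can be listed directly from the system catalogs:

SELECT
  conrelid::regclass         AS referencing_table,
  confrelid::regclass        AS referenced_table,
  conname                    AS constraint_name,
  pg_get_constraintdef(oid)  AS definition
FROM pg_constraint
WHERE contype = 'f'
ORDER BY conrelid::regclass::text;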
5.5 Spatial Tips
- Use ST_Subdivide() when running spatial joins on large polygons.
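A sketch of the pattern (hypothetical table names): subdivide the large polygons once into a working table, index it, then join points against the many small pieces instead of a few huge ones:

-- subdivide large polygons (default max of 256 vertices per piece)
CREATE TABLE aoi_subdivided AS
  SELECT id, ST_Subdivide(geom) AS geom
  FROM aoi_large_polygons;

CREATE INDEX aoi_subdivided_geom_idx
  ON aoi_subdivided USING gist (geom);

-- spatial join; DISTINCT collapses duplicates where a point touches the
-- shared edge of two subdivided pieces of the same polygon
SELECT DISTINCT p.point_id, a.id
  FROM sample_points p
  JOIN aoi_subdivided a ON ST_Intersects(a.geom, p.geom);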