613parts · an APC associate store
CATALOG GENERATOR · NIGHTLY CRON · v0.3.0
Cloudflare Worker · Scheduled cron

the nightly job that feeds the bot.

Every night at 02:00 EST, a Cloudflare cron pulls the DAI rim master from the vendor portal, joins it with AS400 stock + fitment + vehicles, writes two artifacts to R2, and invalidates the bot's KV cache. The run is idempotent and observable, and rolls back automatically if anything fails.

Schedule
02:00 EST
Inputs
DAI + AS400
Outputs
2 JSON files
Runtime
~6-15 sec
Cost
$0/mo
Rollback
Auto
The pipeline

nine steps. one transaction.

All nine steps run inside a single try/catch. Failure at any step preserves yesterday's catalog (no overwrite happens until step 7), and ops gets pinged on Slack with the stack trace. Bot keeps serving the cached version.

01
INPUT · R2
DAI Master CSV
dai/master-latest.csv · uploaded by vendor sync
PARSE
dai-parser.ts
Quote-aware CSV · alias-tolerant column mapping · ~150 rims
IN-MEMORY
DAIRim[]
SKU · cost · MSRP · bolt pattern · in_stock
02
INPUT · R2
AS400 Exports
as400/{parts,stock,fitment,vehicles}-latest.json
LOAD
as400-loader.ts
4 parallel R2 reads · ~5,000 parts + ~250 vehicles
IN-MEMORY
parts, stock, fits, vehicles
Strongly typed · ready to join
03
JOIN
catalog-builder.ts
parts × stock × fitment → flat Part[]
IN-MEMORY
catalog: Part[]
Every SKU with .stock + .fits filled in
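catalog-builder.ts isn't reproduced in this doc; a minimal sketch of the step-03 join, with field shapes assumed for illustration rather than taken from types.ts:

```typescript
// Hypothetical sketch of the step-03 join in catalog-builder.ts.
// Field shapes here are assumptions, not the real types.ts definitions.
type Stock = { belleville: number; kingston: number };
type FitmentRow = { part_id: string; vehicle_id: string };
type RawPart = { id: string; sku: string; name: string };
type JoinedPart = RawPart & { stock: Stock; fits: string[] };

function buildCatalog(
  parts: RawPart[],
  stock: Record<string, Stock>,
  fitment: FitmentRow[],
): { catalog: JoinedPart[]; stats: { total_skus: number; in_stock_skus: number } } {
  // Index fitment by part first, so the join is O(parts + fitment)
  // instead of O(parts × fitment).
  const fitsByPart = new Map<string, string[]>();
  for (const f of fitment) {
    const list = fitsByPart.get(f.part_id) ?? [];
    list.push(f.vehicle_id);
    fitsByPart.set(f.part_id, list);
  }

  const catalog = parts.map(p => ({
    ...p,
    stock: stock[p.sku] ?? { belleville: 0, kingston: 0 },
    fits: fitsByPart.get(p.id) ?? [],
  }));

  const in_stock_skus = catalog.filter(
    p => p.stock.belleville + p.stock.kingston > 0,
  ).length;

  return { catalog, stats: { total_skus: catalog.length, in_stock_skus } };
}
```

Missing stock defaults to zero at both branches, so a SKU absent from the AS400 stock export still appears in the catalog (out of stock) rather than vanishing.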
04
BUILD
package-builder.ts
DAI rims × in-stock tires × top-20 vehicles × 2 seasons × 3 tiers
IN-MEMORY
packages: Package[]
~120 packages with sell prices computed
05
SNAPSHOT
archive previous
Copy current catalog-search-latest.json → catalog-archive/{today}.json
R2
catalog-archive/2026-04-30.json
Daily rollback point · keeps 30 days by lifecycle policy
06
CHECK
anomaly detection
Compare in-stock count vs yesterday · alert if >50% delta
07
WRITE
R2 puts (parallel)
Two atomic writes · this is the only place existing files get overwritten
R2 · LIVE
catalog-search-latest.json
packages-latest.json
The bot reads these on next request
08
INVALIDATE
KV.delete('catalog')
Forces bot Worker to reload from R2 on next request
09
NOTIFY
notifier.ts
Slack webhook · success/warning/error blocks with stats
SLACK
#613parts-ops
"✅ Catalog regenerated · 2,847 SKUs in stock · 117 packages"
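notifier.ts isn't reproduced in this doc; a sketch of how the Slack payload might be assembled (the Block Kit layout below is an assumption, not the shipped format):

```typescript
// Hypothetical sketch of the Slack payload builder in notifier.ts.
// The exact Block Kit layout is an assumption.
type Level = 'info' | 'warning' | 'error';

interface Notification {
  level: Level;
  title: string;
  detail: string;
  fields?: Record<string, string | number>;
}

const EMOJI: Record<Level, string> = { info: '✅', warning: '⚠️', error: '🚨' };

function buildSlackPayload(n: Notification): { blocks: unknown[] } {
  const blocks: unknown[] = [
    { type: 'header', text: { type: 'plain_text', text: `${EMOJI[n.level]} ${n.title}` } },
    { type: 'section', text: { type: 'mrkdwn', text: n.detail } },
  ];
  if (n.fields) {
    blocks.push({
      type: 'section',
      fields: Object.entries(n.fields).map(([k, v]) => ({
        type: 'mrkdwn',
        text: `*${k}*\n${v}`,
      })),
    });
  }
  return { blocks };
}

// notify() would then POST this JSON to env.SLACK_WEBHOOK_URL with fetch().
```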
R2 layout

where everything lives.

One bucket: 613parts-catalog. Inputs go in subfolders by source (DAI vs AS400). Outputs live at the root for predictable lookup. Daily snapshots in catalog-archive/.

613parts-catalog/                      # R2 bucket
│
├── # INPUTS · written by upstream sync jobs
├── dai/
│   └── master-latest.csv              # DAI vendor portal export · daily
├── as400/
│   ├── parts-latest.json              # ~5,000 SKUs without stock/fits
│   ├── stock-latest.json              # { sku: { belleville, kingston } }
│   ├── fitment-latest.json            # [{ part_id, vehicle_id, position, qty }]
│   └── vehicles-latest.json           # [{ id, year, make, model, oem_tire_size, bolt_pattern }]
│
├── # OUTPUTS · written by this generator
├── catalog-search-latest.json         # Bot reads this for er_search
├── packages-latest.json               # Bot reads this for the tire flow
│
└── catalog-archive/                   # Daily snapshots · 30-day retention
    ├── 2026-04-29.json
    ├── 2026-04-28.json
    └── ...
Upstream contract. The DAI vendor sync and AS400 nightly export jobs are NOT part of this Worker — they're separate scripts (TBD per AS400 setup) that write the input files to R2. This Worker just consumes them. If those jobs don't run, the generator throws on step 1 or 2 and Slack alerts ops.
handlers/generator.ts

the orchestrator.

The full nine-step pipeline as one async function. ~145 lines including comments and the anomaly check. Wrapped in a try/catch that fires Slack on success or failure.

src/handlers/generator.ts
~145 lines · TypeScript · idempotent
import { parseDAIMaster } from '../lib/dai-parser';
import {
  loadStock, loadFitment, loadVehicles, loadAS400Parts,
} from '../lib/as400-loader';
import { buildCatalog } from '../lib/catalog-builder';
import { buildPackages } from '../lib/package-builder';
import { notify } from '../lib/notifier';
import type { Env, Part } from '../types';

const CATALOG_KEY  = 'catalog-search-latest.json';
const PACKAGES_KEY = 'packages-latest.json';
const DAI_CSV_KEY  = 'dai/master-latest.csv';
const ANOMALY_THRESHOLD_PCT = 0.50;

export async function handleScheduled(_event: ScheduledEvent, env: Env): Promise<void> {
  const startedAt = Date.now();
  console.log(`[generator] Run started at ${new Date(startedAt).toISOString()}`);

  try {
    // ============ 1. FETCH DAI MASTER CSV ============
    const daiObj = await env.CATALOG_BUCKET.get(DAI_CSV_KEY);
    if (!daiObj) throw new Error(`DAI master CSV missing at ${DAI_CSV_KEY}.`);
    const daiRims = parseDAIMaster(await daiObj.text());

    // ============ 2. FETCH AS400 EXPORTS (PARALLEL) ============
    const [parts, stock, fitment, vehicles] = await Promise.all([
      loadAS400Parts(env),
      loadStock(env),
      loadFitment(env),
      loadVehicles(env),
    ]);

    // ============ 3. BUILD catalog-search-latest.json ============
    const catalogResult = buildCatalog(parts, stock, fitment);

    // ============ 4. BUILD packages-latest.json ============
    const packagesResult = buildPackages(catalogResult.catalog, vehicles, daiRims);

    // ============ 5. SNAPSHOT PREVIOUS CATALOG ============
    const today = new Date().toISOString().slice(0, 10);
    const previous = await env.CATALOG_BUCKET.get(CATALOG_KEY);
    let previousCatalog: Part[] | null = null;

    if (previous) {
      const data = await previous.arrayBuffer();
      await env.CATALOG_BUCKET.put(`catalog-archive/${today}.json`, data);
      previousCatalog = JSON.parse(new TextDecoder().decode(data));
    }

    // ============ 6. ANOMALY CHECK ============
    if (previousCatalog && previousCatalog.length > 0) {
      const prevInStock = previousCatalog.filter(
        p => (p.stock?.belleville ?? 0) + (p.stock?.kingston ?? 0) > 0
      ).length;
      const todayInStock = catalogResult.stats.in_stock_skus;
      const pctDelta = prevInStock > 0 ? Math.abs((todayInStock - prevInStock) / prevInStock) : 0;

      if (pctDelta > ANOMALY_THRESHOLD_PCT) {
        await notify(env, {
          level: 'warning',
          title: 'Large catalog stock change detected',
          detail: `In-stock SKU count changed by ${(pctDelta * 100).toFixed(1)}% overnight.`,
          fields: { 'Yesterday': prevInStock, 'Today': todayInStock },
        });
      }
    }

    // ============ 7. WRITE NEW CATALOG + PACKAGES TO R2 ============
    await Promise.all([
      env.CATALOG_BUCKET.put(CATALOG_KEY,  JSON.stringify(catalogResult.catalog)),
      env.CATALOG_BUCKET.put(PACKAGES_KEY, JSON.stringify(packagesResult.packages)),
    ]);

    // ============ 8. INVALIDATE KV CACHE ============
    await env.SEARCH_CACHE.delete('catalog');

    // ============ 9. NOTIFY OPS (SUCCESS) ============
    await notify(env, {
      level: 'info',
      title: '613parts catalog regenerated',
      detail: `Nightly rebuild done in ${((Date.now() - startedAt) / 1000).toFixed(1)}s.`,
      fields: {
        'In-stock SKUs': catalogResult.stats.in_stock_skus,
        'Total SKUs':    catalogResult.stats.total_skus,
        'DAI rims':      daiRims.length,
        'Packages':      packagesResult.stats.total_packages,
        'Vehicles':      `${packagesResult.stats.vehicles_covered}/20`,
      },
    });
  } catch (error: any) {
    await notify(env, {
      level: 'error',
      title: '613parts catalog regeneration FAILED',
      detail: `Bot is still serving yesterday's catalog (no overwrite happened).\n\`\`\`\n${error.stack ?? error.message}\n\`\`\``,
    });
    throw error;
  }
}
lib/dai-parser.ts

tolerant of vendor weirdness.

DAI's portal exports occasionally rename columns between formats — Diameter vs WheelDiameter, MSRP vs SuggestedRetail. The parser uses an alias map so when the vendor changes things we just add a new alias instead of rewriting code.

src/lib/dai-parser.ts
~150 lines · zero deps · quote-aware CSV
const COLUMN_ALIASES = {
  sku:      ['SKU', 'PartNumber', 'ItemNumber', 'DealerSKU'],
  mpn:      ['MPN', 'VendorPartNumber', 'ManufacturerPartNumber', 'StockNumber'],
  brand:    ['Brand', 'Make', 'Manufacturer'],
  line:     ['Line', 'Series', 'ProductLine', 'Style', 'Model'],
  diameter: ['Diameter', 'WheelDiameter', 'Size', 'WheelSize'],
  width:    ['Width', 'WheelWidth'],
  bolt:     ['BoltPattern', 'BP', 'PCD', 'BoltCircle'],
  offset:   ['Offset', 'ET', 'OffsetMM'],
  hub_bore: ['HubBore', 'CB', 'CenterBore', 'HubBoreMM'],
  finish:   ['Finish', 'Color', 'WheelFinish'],
  weight:   ['Weight', 'WheelWeight', 'WeightLbs'],
  cost:     ['Cost', 'DealerCost', 'WholesalePrice'],
  msrp:     ['MSRP', 'SuggestedRetail', 'RetailPrice', 'ListPrice'],
  image:    ['ImageURL', 'Image', 'PhotoURL'],
  in_stock: ['InStock', 'Stock', 'Available', 'QtyAvailable'],
};

export function parseDAIMaster(csvText: string): DAIRim[] {
  const rows = parseCSV(csvText);
  if (rows.length === 0) return [];
  const headers = rows[0]!;

  const findColumn = (aliases: string[]): number => {
    for (const alias of aliases) {
      const idx = headers.findIndex(h => h.trim().toLowerCase() === alias.toLowerCase());
      if (idx >= 0) return idx;
    }
    return -1;
  };

  const colMap = Object.fromEntries(
    Object.entries(COLUMN_ALIASES).map(([field, aliases]) => [field, findColumn(aliases)]),
  ) as Record<keyof typeof COLUMN_ALIASES, number>; // each field's column index, or -1 if missing

  if (colMap.sku < 0) {
    throw new Error('DAI CSV missing required column: SKU/PartNumber. Headers: ' + headers.join(', '));
  }

  const rims: DAIRim[] = [];
  for (let i = 1; i < rows.length; i++) {
    const row = rows[i];
    if (!row || !row[colMap.sku]) continue;
    const cell = (idx: number): string => idx >= 0 && row[idx] != null ? row[idx]!.trim() : '';

    rims.push({
      sku:          cell(colMap.sku),
      mpn:          cell(colMap.mpn),
      brand:        cell(colMap.brand) || 'DAI',
      line:         cell(colMap.line),
      diameter:     parseNumber(cell(colMap.diameter)),
      width:        parseNumber(cell(colMap.width)),
      bolt_pattern: normalizeBoltPattern(cell(colMap.bolt)),
      offset:       parseNumber(cell(colMap.offset)),
      hub_bore:     parseNumber(cell(colMap.hub_bore)),
      finish:       cell(colMap.finish),
      cost:         parsePrice(cell(colMap.cost)),
      msrp:         parsePrice(cell(colMap.msrp)),
      image:        cell(colMap.image),
      in_stock:     parseStock(cell(colMap.in_stock)),
    });
  }
  return rims;
}

// Quote-aware CSV parser — handles "value, with comma" and escaped "" quotes
function parseCSV(text: string): string[][] {
  const rows: string[][] = [];
  let row: string[] = [];
  let cell = '';
  let inQuotes = false;
  let i = 0;

  while (i < text.length) {
    const ch = text[i]!;
    if (inQuotes) {
      if (ch === '"' && text[i + 1] === '"') { cell += '"'; i += 2; continue; }
      if (ch === '"') { inQuotes = false; i++; continue; }
      cell += ch; i++; continue;
    }
    if (ch === '"')  { inQuotes = true; i++; continue; }
    if (ch === ',')  { row.push(cell); cell = ''; i++; continue; }
    if (ch === '\r') { i++; continue; }
    if (ch === '\n') {
      row.push(cell);
      if (row.some(c => c.length > 0)) rows.push(row);
      row = []; cell = ''; i++; continue;
    }
    cell += ch; i++;
  }
  if (cell.length > 0 || row.length > 0) {
    row.push(cell);
    if (row.some(c => c.length > 0)) rows.push(row);
  }
  return rows;
}
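parseDAIMaster also leans on four small value helpers that aren't shown above. Plausible sketches, assuming typical vendor-CSV quirks (the real dai-parser.ts may differ):

```typescript
// Hypothetical sketches of the value helpers used by parseDAIMaster.
// The exact normalization rules in dai-parser.ts may differ.
function parseNumber(s: string): number {
  const n = parseFloat(s.replace(/[^0-9.+-]/g, ''));
  return Number.isFinite(n) ? n : 0;
}

function parsePrice(s: string): number {
  // Strip currency symbols and thousands separators: "$1,249.00" → 1249
  return parseNumber(s.replace(/,/g, ''));
}

function parseStock(s: string): number {
  // Vendors export either booleans ("Y", "Yes", "In Stock") or raw quantities.
  const t = s.trim().toLowerCase();
  if (['y', 'yes', 'true', 'in stock'].includes(t)) return 1;
  return Math.max(0, Math.floor(parseNumber(t)));
}

function normalizeBoltPattern(s: string): string {
  // "5X114.3", "5-114.3", "5x114,3" → "5x114.3"
  const m = s.trim().replace(',', '.').match(/(\d+)\s*[xX-]\s*([\d.]+)/);
  return m ? `${m[1]}x${m[2]}` : s.trim();
}
```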
lib/package-builder.ts

where 120 packages get born.

For each of the top 20 popular vehicles × 2 seasons (winter, all-season), build 3 packages by tier: cheapest rim+tire (value), median rim × median tire (popular), most expensive rim × premium-brand tire (premium). Pricing is component cost × 1.30 with psychological rounding.

src/lib/package-builder.ts
~180 lines · pure function · returns packages + stats
const SEASONS = [
  { id: 'winter',     tag: 'winter' },
  { id: 'all_season', tag: 'all-season' },
];

const PREMIUM_TIRE_BRANDS = ['Michelin', 'Bridgestone', 'Continental', 'Pirelli'];
const VALVE_STEM_COST    = 5;
const LUG_NUT_COST       = 2.5;
const MOUNT_BALANCE_COST = 25;
const PACKAGE_MARGIN     = 1.30;  // 30% markup on component cost

export function buildPackages(catalog: Part[], vehicles: Vehicle[], daiRims: DAIRim[]): BuildPackagesResult {
  const vehicleMap = new Map(vehicles.map(v => [v.id, v]));
  const tires = catalog.filter(p => p.cat === 'tires' && (p.stock.belleville + p.stock.kingston) > 0);

  const packages: Package[] = [];

  for (const popular of POPULAR_VEHICLES) {
    const vehicle = vehicleMap.get(popular.id);
    if (!vehicle?.oem_tire_size || !vehicle?.bolt_pattern) continue;

    const oemDiameter = parseInt(vehicle.oem_tire_size.split('R')[1], 10);

    // Compatible rims: matching bolt pattern, diameter ±1 (for plus-sizing)
    const compatibleRims = daiRims.filter(r =>
      r.bolt_pattern === vehicle.bolt_pattern &&
      Math.abs(r.diameter - oemDiameter) <= 1 &&
      r.in_stock > 0
    );
    if (compatibleRims.length === 0) continue;

    for (const season of SEASONS) {
      const seasonTires = tires.filter(t =>
        t.tags.includes(season.tag) &&
        t.tags.some(tag => tag === `size:${vehicle.oem_tire_size}`)
      );
      if (seasonTires.length === 0) continue;

      const sortedByPrice = [...seasonTires].sort((a, b) => a.price - b.price);

      const tierAssignments = [
        ['value',   pickRimByTier(compatibleRims, 'value'),   sortedByPrice[0]!],
        ['popular', pickRimByTier(compatibleRims, 'popular'), sortedByPrice[Math.floor(sortedByPrice.length / 2)]!],
        ['premium', pickRimByTier(compatibleRims, 'premium'), pickPremiumTire(sortedByPrice)],
      ];

      for (const [tier, rim, tire] of tierAssignments) {
        const componentCost =
          (rim.cost * 4) + (tire.price * 4) +
          (VALVE_STEM_COST * 4) + (LUG_NUT_COST * 20) + (MOUNT_BALANCE_COST * 4);

        const sellPrice = priceWithPsychologicalRounding(componentCost * PACKAGE_MARGIN);

        packages.push({
          id: `PKG-${popular.id.toUpperCase()}-${season.id.toUpperCase()}-${tier.toUpperCase()}`,
          type: 'tire_rim_package',
          vehicle_id: popular.id,
          vehicle_display: popular.display_name,
          season: season.id, tier,
          rim:  { sku: rim.sku, mpn: rim.mpn, brand: rim.brand, line: rim.line, /*...*/ },
          tire: { sku: tire.sku, mpn: tire.mpn, brand: tire.brand, name: tire.name, /*...*/ },
          components: [
            { sku: rim.sku, qty: 4, role: 'rim' },
            { sku: tire.sku, qty: 4, role: 'tire' },
            { sku: 'VALVE-STEM-RUB', qty: 4, role: 'valve_stem' },
            { sku: 'LUG-OEM', qty: 20, role: 'lug_nut' },
            { sku: 'MOUNT-BAL-LABOR', qty: 4, role: 'mount_balance' },
          ],
          price: sellPrice,
          price_label: `$${sellPrice.toLocaleString('en-CA')}`,
          branch_origin: pickBranch(tire),
          in_stock: true,
          description: `${popular.display_name} · ${season.id} · ${tier} pick · ${rim.line} ${rim.finish} + ${tire.brand} ${tire.name}`,
        });
      }
    }
  }
  return { packages, stats: { /* total, vehicles_covered, skip_reasons */ } };
}

function priceWithPsychologicalRounding(raw: number): number {
  // Round to nearest $10 ending in 9 (e.g. 1248 → 1249, 487 → 489)
  const rounded = Math.round(raw / 10) * 10;
  return Math.max(rounded - 1, 1);
}
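pickRimByTier and pickPremiumTire are referenced but not shown. Plausible sketches under the tier definitions above, plus the pricing math worked through with illustrative numbers (priceWithPsychologicalRounding is repeated so the sketch runs standalone):

```typescript
// Hypothetical sketches of the tier pickers; the shipped logic may differ.
interface Rim { sku: string; cost: number }
interface Tire { sku: string; brand: string; price: number }

const PREMIUM_TIRE_BRANDS = ['Michelin', 'Bridgestone', 'Continental', 'Pirelli'];

function pickRimByTier(rims: Rim[], tier: 'value' | 'popular' | 'premium'): Rim {
  const byCost = [...rims].sort((a, b) => a.cost - b.cost);
  if (tier === 'value') return byCost[0]!;                       // cheapest
  if (tier === 'premium') return byCost[byCost.length - 1]!;     // most expensive
  return byCost[Math.floor(byCost.length / 2)]!;                 // popular = median cost
}

function pickPremiumTire(sortedByPrice: Tire[]): Tire {
  // Prefer the priciest premium-brand tire; fall back to the most expensive overall.
  return (
    sortedByPrice.filter(t => PREMIUM_TIRE_BRANDS.includes(t.brand)).pop() ??
    sortedByPrice[sortedByPrice.length - 1]!
  );
}

function priceWithPsychologicalRounding(raw: number): number {
  const rounded = Math.round(raw / 10) * 10;
  return Math.max(rounded - 1, 1);
}

// Worked example with made-up numbers: a $95 rim and a $140 tire.
const componentCost = 95 * 4 + 140 * 4 + 5 * 4 + 2.5 * 20 + 25 * 4; // = 1110
const sellPrice = priceWithPsychologicalRounding(componentCost * 1.3); // 1443 → 1439
```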
Sample outputs

what r2 looks like at 02:01.

Two JSON files the bot Worker reads. Schemas match the existing 613parts catalog architecture so the bot endpoints work without changes.

R2 · catalog-search-latest.json
~5,000 SKUs · ~2 MB
[
  {
    "id":    "ACD-PF64",
    "sku":   "ACD-PF64",
    "mpn":   "PF64",
    "name":  "ACDelco PF64 Engine Oil Filter",
    "brand": "ACDelco",
    "cat":   "filters",
    "sub":   "oil-filter",
    "desc":  "OE-spec ceramic oil filter for Honda 1.5T...",
    "price": 14.99,
    "price_label": "$14.99",
    "unit":  "ea",
    "warranty": "24 month",
    "tags":  ["oe-spec", "ceramic", "ford-spec"],
    "image": "https://ic.carid.com/acdelco/items/pf64_1.jpg",
    "stock": { "belleville": 12, "kingston": 8 },
    "fits": [
      "honda_civic_2018_15t",
      "honda_civic_2019_15t",
      "honda_civic_2020_15t",
      "honda_civic_2021_15t",
      "honda_civic_2022_15t",
      "honda_crv_2018",
      "honda_accord_2018_15t"
    ]
  },
  /* +4,999 more */
]
R2 · packages-latest.json
~120 packages · ~250 KB
[
  {
    "id":    "PKG-HONDA_CIVIC_2018_15T-WINTER-POPULAR",
    "type":  "tire_rim_package",
    "vehicle_id":      "honda_civic_2018_15t",
    "vehicle_display": "2018-2022 Honda Civic",
    "season": "winter",
    "tier":   "popular",
    "rim": {
      "sku":      "DAI-MISSION-17-GB",
      "brand":    "DAI",
      "line":     "Mission",
      "diameter": 17,
      "width":    7.5,
      "finish":   "Gloss Black",
      "image":    "https://www.canadawheels.ca/.../mission.png"
    },
    "tire": {
      "sku":   "MICH-XICE-21550R17",
      "brand": "Michelin",
      "name":  "X-Ice Snow",
      "size":  "215/50R17",
      "image": "https://dxm.contentcenter.michelin.com/.../x-ice.webp"
    },
    "components": [
      { "sku": "DAI-MISSION-17-GB", "qty": 4,  "role": "rim" },
      { "sku": "MICH-XICE-21550R17", "qty": 4, "role": "tire" },
      { "sku": "VALVE-STEM-RUB",   "qty": 4,  "role": "valve_stem" },
      { "sku": "LUG-OEM",          "qty": 20, "role": "lug_nut" },
      { "sku": "MOUNT-BAL-LABOR",  "qty": 4,  "role": "mount_balance" }
    ],
    "price": 1489,
    "price_label":   "$1,489",
    "branch_origin": "belleville",
    "in_stock": true,
    "description":   "2018-2022 Honda Civic · winter · popular pick · Mission Gloss Black + Michelin X-Ice Snow"
  },
  /* +119 more */
]
Failure modes & recovery

things go wrong. the bot keeps working.

Defensive design — the bot's catalog never goes empty, even when upstream systems break. Every failure mode below has a tested recovery path.

DAI CSV missing or corrupt

Step 1 throws. Slack alert: "DAI master CSV missing at dai/master-latest.csv." No overwrite happens.

IMPACT · Bot keeps yesterday's catalog · 0 customer impact

AS400 export missing

Step 2 throws ("R2 object missing: as400/parts-latest.json"). Slack alert with stack trace. No overwrite.

IMPACT · Bot keeps yesterday's catalog · check AS400 cron

Stock count drops >50%

Step 6 fires a warning to Slack. The generator continues, since the swing could be legitimate (a big sale, a large shipment) or spurious (an AS400 export filter bug).

IMPACT · Catalog updates anyway · ops investigates
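The threshold math from step 6, worked with made-up counts:

```typescript
// Illustrative anomaly math (counts are made up): 2,800 in-stock SKUs
// yesterday, 1,300 today is a ~53.6% drop, which crosses the 50% threshold.
const ANOMALY_THRESHOLD_PCT = 0.50;
const prevInStock = 2800;
const todayInStock = 1300;
const pctDelta = Math.abs((todayInStock - prevInStock) / prevInStock); // ≈ 0.536
const shouldWarn = pctDelta > ANOMALY_THRESHOLD_PCT; // warning fires
```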

R2 write fails (step 7)

Catastrophic but rare. The catalog file may be partially written. Slack alert fires. Manual recovery: restore catalog-archive/{yesterday}.json over catalog-search-latest.json (see manual rollback below).

IMPACT · Bot may briefly serve corrupt data until cache reload

No packages built (zero compatible)

Step 4 returns an empty array. The generator finishes. Slack reports "0 packages, 20 vehicles skipped." The bot's tire flow has nothing to show.

IMPACT · Tire flow degraded · part-search flow unaffected

Manual rollback (rare)

Use the archive: wrangler r2 object get 613parts-catalog/catalog-archive/2026-04-29.json, then put back as catalog-search-latest.json. Delete KV key. Bot reloads on next request.

IMPACT · Recovery in <2 minutes
Deployment

add the cron to the existing worker.

The generator lives in the same Cloudflare Worker as the bot endpoints (er_search, er_fulfillment) — adding it is a config change plus one redeploy. No new infrastructure.

01 · CONFIG

Add cron to wrangler.toml

Already done in the worker repo. Schedule: 02:00 EST = 07:00 UTC. Cron triggers evaluate in UTC, so during daylight saving the run fires at 03:00 EDT.

[triggers]
crons = ["0 7 * * *"]
02 · SECRETS

Set Slack webhook + cron token

Webhook is for ops alerts; token gates the manual /cron/run endpoint.

npx wrangler secret put SLACK_WEBHOOK_URL
npx wrangler secret put CRON_TRIGGER_TOKEN
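The token gate on /cron/run can be a plain Bearer comparison; a hypothetical sketch (the route name comes from this doc, the wiring into index.ts is assumed):

```typescript
// Hypothetical sketch of the token gate for POST /cron/run in index.ts.
// handleScheduled and CRON_TRIGGER_TOKEN are real names from this doc;
// the routing glue is an assumption.
function isAuthorized(header: string | null, token: string): boolean {
  return header === `Bearer ${token}`;
}

// Inside the Worker's fetch handler, roughly:
// if (url.pathname === '/cron/run' && request.method === 'POST') {
//   if (!isAuthorized(request.headers.get('authorization'), env.CRON_TRIGGER_TOKEN)) {
//     return new Response('Unauthorized', { status: 401 });
//   }
//   await handleScheduled(/* stub event */, env);  // same nine-step pipeline as the cron
//   return new Response('OK');
// }
```

A hardened version might prefer a constant-time comparison over `===` for the token check.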
03 · UPSTREAM

Wire the input pipelines

NOT in this Worker. Two separate jobs need to write to R2:

# DAI vendor portal sync (someone)
# → 613parts-catalog/dai/master-latest.csv

# AS400 nightly export (existing IT job)
# → 613parts-catalog/as400/*.json
04 · DEPLOY

Push to Cloudflare

Single command. Cron starts running on the schedule immediately.

cd worker
npm run deploy
# Verify cron registered:
npx wrangler triggers list
05 · TEST

Manual trigger (recommended first)

Don't wait until 02:00 to find out it's broken. Trigger immediately:

curl -X POST https://bot.613parts.ca/cron/run \
  -H "authorization: Bearer $CRON_TRIGGER_TOKEN"

# Check results:
npx wrangler tail
06 · MONITOR

Watch the Slack channel

Every nightly run posts success or failure to #613parts-ops. After three consecutive failures, escalate to IT.

# Cloudflare dashboard:
# Workers → 613parts-bot-worker → Cron Triggers
# Shows last run, duration, success/failure
Project changeset

files added or changed.

Seven new files, four edits. Total ~660 lines added. The bot endpoints from the previous deliverable are unchanged — this is purely additive.

worker/
├── wrangler.toml                  # EDIT · added [triggers] section with cron
└── src/
    ├── index.ts                   # EDIT · added scheduled handler + /cron/run endpoint
    ├── types.ts                   # EDIT · added DAIRim, Fitment, Package, extended Vehicle + Env
    ├── handlers/
    │   └── generator.ts           # NEW · the 9-step pipeline orchestrator
    └── lib/
        ├── dai-parser.ts          # NEW · CSV parser, alias-tolerant column mapping
        ├── as400-loader.ts        # NEW · loads 4 nightly JSON exports
        ├── catalog-builder.ts     # NEW · joins parts × stock × fitment
        ├── package-builder.ts     # NEW · top-20 vehicles × 2 seasons × 3 tiers = ~120 packages
        ├── popular-vehicles.ts    # NEW · seed list of 20 popular vehicles
        └── notifier.ts            # NEW · Slack webhook with success/warning/error blocks
7 new files · 4 edits · ~660 lines added · 0 new dependencies · backwards compatible