Goodles product lineup — Shella Good, Twist My Parm, Cheddy Mac
Goodles × NWTN AI

A brand marketer who builds AI tools.

Goodles is scaling fast — growing double digits across natural, conventional, club, and e-commerce. But without a dedicated brand manager, the strategic work that fuels the next stage of growth is getting spread thin across the team.

What if you could add a 10-year CPG brand marketer and an AI automation layer — in one fractional role?

The work that isn't getting done.

Goodles has scaled to 35,000+ doors across Target, Walmart, Kroger, and Whole Foods without a dedicated brand manager. That's a testament to the team — but it means brand strategy work is cobbled together across functions, and the data-intensive tasks that fuel growth either take too long, get deprioritized, or don't happen at all. Here's what that looks like across the CPG industry:

10 hrs/week of brand work absorbed by other roles
520 hrs/yr of strategic work that could be automated or reclaimed

Without a brand manager, these hours get spread across innovation, marketing, and leadership — pulling focus from their core responsibilities.

Where the 10 hours go
Syndicated data pulls & category reviews (SPINS + Circana): 2.0 hrs
Retailer presentations & sell sheets: 2.0 hrs
Cross-referencing sources (natural vs. conventional channel): 1.5 hrs
Trade promo analysis & post-event recaps: 1.5 hrs
Retailer portal pulls & inventory checks: 1.0 hrs
Competitive monitoring & pricing reports: 1.0 hrs
Ad-hoc data requests from leadership: 0.5 hrs
Demand / forecast review & adjustments: 0.5 hrs
Total: 10.0 hrs / week
Period | Retailer | Product | $ Sales | Units | ACV % | $/Unit | % Chg YA
4W 02/01/26 | Total US | Cheddy Mac 6oz | $3,124,567 | 625,321 | 78.4 | $4.99 | +18.2%
4W 02/01/26 | Total US | Shella Good 6oz | $2,467,890 | 494,568 | 72.1 | $4.99 | +22.6%
4W 02/01/26 | Total US | Twist My Parm 6oz | $1,812,345 | 363,195 | 65.8 | $4.99 | +28.4%
4W 02/01/26 | Total US | Stalker Ranch 6oz | $812,456 | 162,815 | 45.1 | $4.99 | +42.8%
4W 02/01/26 | Total US | Vegan Is Believin' 5.5oz | $1,156,789 | 210,689 | 52.4 | $5.49 | +35.2%
4W 02/01/26 | Target | Cheddy Mac 6oz | $845,234 | 169,318 | 91.2 | $4.99 | +21.4%
... 250 rows across 5 retailers, 12 SKUs, 5 periods ...
Manual copy → pivot → format → email. Every week. Time that should go to strategy, not spreadsheets.

Someone who's sat in the meetings you sit in — and can build the tools to make them faster.

This isn't a SaaS platform or a consulting deck. It's a fractional brand marketer who also builds AI automation — someone who understands the P&L, the buyer relationships, and the cross-functional complexity of running brand work without a dedicated team, and can create tools that accelerate what's already working.

KRAVE / Hershey · Sonoma Brands · Navitas Organics · Mezzetta · NWTN AI
I built everything on this page — the site, the data pipelines, the interactive charts.
Not a mockup. Not a vendor demo. Working tools, built with the same approach I'd bring to Goodles on day one.

Same data. Two minutes. Ready for the meeting.

$ python3 analyze_syndicated.py
 
Loading Goodles syndicated data...
 Analyzing period: 4W 02/01/26
 
Generating Distribution vs. Velocity chart...
 ✓ Saved to output/distribution_vs_velocity.png
 
========================================
 GOODLES SYNDICATED DATA INSIGHTS
========================================
 Total Brand: $46.8M  |  Fastest: Stalker Ranch (+42.8%)
 Gap: Stalker Ranch — only 45.1% ACV  |  Watch: Cheddy Mac Cups (+55.6%)
Distribution gaps reveal your next growth opportunity

Architecture: 109 lines of Python — pandas for data wrangling, matplotlib for visualization, CSV as the interchange format.

analyze_syndicated.py Source · 109 lines
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

plt.style.use('default')
goodles_gold = '#D4930D'
dark_gray = '#2d2926'

def clean_money(x):
    """Convert $1,234.56 string or number to float"""
    if isinstance(x, str):
        return float(x.replace('$', '').replace(',', ''))
    return float(x)

def clean_percent(x):
    """Convert +18.2% string or number to float"""
    if isinstance(x, str):
        return float(x.replace('%', '').replace('+', ''))
    return float(x)

# 1. Load Data
print("Loading Goodles syndicated data...")
df = pd.read_csv('syndicated_data.csv')
latest_period = df['Period'].iloc[-1]

df['Dollar_Sales'] = df['Dollar_Sales'].apply(clean_money)
df['Dollar_Growth_YA'] = df['Dollar_Growth_YA'].apply(clean_percent)
df['ACV'] = df['ACV'].astype(float)

# Filter to Total US and latest period
df_total = df[(df['Retailer'] == 'Total US') &
              (df['Period'] == latest_period)].copy()

# Calculate Velocity ($ per point of ACV distribution)
df_total['Velocity'] = df_total['Dollar_Sales'] / df_total['ACV']

# 2. Generate Chart: Distribution vs. Velocity
fig, ax = plt.subplots(figsize=(10, 6.5), dpi=300)

scatter = ax.scatter(
    df_total['ACV'],
    df_total['Velocity'],
    s=df_total['Dollar_Sales'] / 3000,
    c=df_total['Dollar_Growth_YA'],
    cmap='RdYlGn',
    alpha=0.7,
    edgecolors=dark_gray
)

# Quadrant lines
med_acv = df_total['ACV'].median()
med_vel = df_total['Velocity'].median()
ax.axvline(med_acv, linestyle='--', alpha=0.3)
ax.axhline(med_vel, linestyle='--', alpha=0.3)

# Formatting
ax.set_title('Goodles Portfolio: Distribution vs. Velocity',
             loc='left', fontweight='bold')
ax.set_xlabel('Distribution (ACV %)')
ax.set_ylabel('Velocity ($/point of distribution)')

plt.tight_layout()
plt.savefig('output/distribution_vs_velocity.png')
Why local processing
Your syndicated data never touches an AI model — it's processed by deterministic code.
These scripts are standard Python (pandas, matplotlib). No AI model sees your raw SPINS or Circana data. Numbers go in, charts and insights come out — running locally on your machine. When AI reasoning is needed for strategic work, it's handled separately through enterprise-grade APIs with contractual data protections.

Every buyer gets their own story. Built in minutes, not days.

Right now, building a buyer-ready deck means pulling SPINS data from the natural channel, cross-referencing Circana for conventional, reformatting for each account, and writing a narrative around it. This does all of it automatically.

$ python3 generate_sell_story.py --retailer target
 
Loading Target POS data (4W 02/01/26)...
 Indexing Goodles vs. category performance...
 
Generating velocity comparison...
 ✓ Goodles velocity index: 172% of category avg
 
========================================
 TARGET SELL STORY — KEY POINTS
========================================
 Lead: Cheddy Mac outpacing category 1.7x
 Opportunity: Stalker Ranch — 45.1% ACV, +42.8% growth
 Ask: Expand distribution to remaining 55% of stores
 
 ✓ Deck exported to output/target_sell_story.pdf
Goodles outperforms category at every major retailer

Architecture: velocity indexing against category benchmarks, distribution gap scoring, and automated narrative generation.

generate_sell_story.py Source · 120 lines
"""Goodles Retailer Sell Story Generator"""
import argparse
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

GOODLES_GOLD = '#D4930D'
DARK_GRAY = '#2d2926'

def load_data(retailer: str) -> pd.DataFrame:
    """Load syndicated data for the specified retailer."""
    df = pd.read_csv('syndicated_data.csv')
    df['Dollar_Sales'] = df['Dollar_Sales'].apply(
        lambda x: float(str(x).replace('$', '').replace(',', ''))
    )
    # Growth is compared numerically downstream, so strip the % and + here too
    df['Dollar_Growth_YA'] = df['Dollar_Growth_YA'].apply(
        lambda x: float(str(x).replace('%', '').replace('+', ''))
    )
    df['ACV'] = df['ACV'].astype(float)
    latest = df['Period'].unique()[-1]
    return df[(df['Retailer'] == retailer.title()) &
              (df['Period'] == latest)].copy()

def calculate_velocity_index(df_retailer):
    """Calculate Goodles velocity index vs. category average."""
    df_retailer['Velocity'] = df_retailer['Dollar_Sales'] / df_retailer['ACV']
    merged = df_retailer[['Product', 'Velocity', 'ACV', 'Dollar_Growth_YA']].copy()
    merged.columns = ['Product', 'Goodles_Vel', 'ACV', 'Growth_YA']
    # Category benchmark: portfolio mean * 0.58
    # (demo placeholder; the real benchmark comes from category-level syndicated data)
    cat_avg = merged['Goodles_Vel'].mean() * 0.58
    merged['Category_Vel'] = cat_avg
    merged['Velocity_Index'] = (merged['Goodles_Vel'] / merged['Category_Vel'] * 100).round(0)
    return merged

def identify_insights(merged):
    """Find the lead product, opportunity gap, and key ask."""
    lead = merged.loc[merged['Goodles_Vel'].idxmax()]
    opportunity = merged.loc[
        (merged['Growth_YA'] > 20) & (merged['ACV'] < 55)
    ].sort_values('Growth_YA', ascending=False)
    return {
        'lead_product': lead['Product'],
        'lead_ratio': round(lead['Goodles_Vel'] / lead['Category_Vel'], 1),
        'opportunity': opportunity.iloc[0]['Product'] if not opportunity.empty else None,
    }
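The "distribution gap scoring" step from the architecture note isn't shown in the excerpt above; here is a minimal sketch of how it could work, using the same column names as `calculate_velocity_index` but with illustrative weights that are my assumption, not values from the full script:

```python
import pandas as pd

def score_distribution_gaps(merged: pd.DataFrame) -> pd.DataFrame:
    """Rank whitespace: high velocity + high growth + low ACV = big opportunity.

    Weights (0.4 / 0.3 / 0.3) are illustrative, not tuned values.
    """
    out = merged.copy()
    # Normalize each signal to 0-1 so the weights are comparable
    vel_norm = out['Goodles_Vel'] / out['Goodles_Vel'].max()
    growth_norm = out['Growth_YA'].clip(lower=0) / max(out['Growth_YA'].max(), 1)
    headroom = (100 - out['ACV']) / 100  # share of stores not yet covered
    out['Gap_Score'] = (0.4 * vel_norm + 0.3 * growth_norm + 0.3 * headroom).round(2)
    return out.sort_values('Gap_Score', ascending=False)
```

A SKU like Stalker Ranch (45.1% ACV, +42.8% growth) would land near the top of this ranking: strong velocity and growth with more than half the store base still open.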
"""Goodles Retailer Sell Story Generator"""
import argparse
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

GOODLES_GOLD = '#D4930D'
DARK_GRAY = '#2d2926'

def load_data(retailer: str) -> pd.DataFrame:
    """Load syndicated data for the specified retailer."""
    df = pd.read_csv('syndicated_data.csv')
    df['Dollar_Sales'] = df['Dollar_Sales'].apply(
        lambda x: float(str(x).replace('$', '').replace(',', ''))
    )
    df['ACV'] = df['ACV'].astype(float)
    latest = df['Period'].unique()[-1]
    return df[(df['Retailer'] == retailer.title()) &
              (df['Period'] == latest)].copy()

def calculate_velocity_index(df_retailer):
    """Calculate Goodles velocity index vs. category average."""
    df_retailer['Velocity'] = df_retailer['Dollar_Sales'] / df_retailer['ACV']
    merged = df_retailer[['Product', 'Velocity', 'ACV', 'Dollar_Growth_YA']].copy()
    merged.columns = ['Product', 'Goodles_Vel', 'ACV', 'Growth_YA']
    # Category benchmark: portfolio mean * 0.58
    cat_avg = merged['Goodles_Vel'].mean() * 0.58
    merged['Category_Vel'] = cat_avg
    merged['Velocity_Index'] = (merged['Goodles_Vel'] / merged['Category_Vel'] * 100).round(0)
    return merged
One script, every retailer
One script generates every retailer deck — each with their own data.
No more rebuilding the same presentation for each account. The script pulls each retailer's POS data, calculates velocity metrics, identifies whitespace opportunities, and outputs buyer-ready narratives automatically.

You know the peaks are coming. The question is how much to build and when to commit.

Your team already knows back-to-school and holiday entertaining drive Goodles' biggest weeks. The hard part isn't knowing the peaks exist — it's nailing the magnitude. Is the holiday spike +18% or +26% this year? That gap determines whether you over-order (margin hit) or under-order (out-of-stocks at your biggest moment). And when a Target Circle deal lands during back-to-school, the promotional lift compounds in ways that are hard to estimate manually.

$ python3 forecast_demand.py --sku "Cheddy Mac 6oz"
 
Loading 52-week shipment history...
 Calculating promotional lift overlaps...
 Adjusting magnitude for YoY distribution gains...
 
 ✓ Model trained (MAPE: 7.4%)
 
========================================
 CHEDDY MAC — 8-WEEK FORECAST
========================================
 W1: 16,200 units  |  W2: 17,400 units
 Alert: Holiday season ramp detected — order +24% buffer
Forecasted demand with safety stock threshold

Architecture: exponential smoothing with mac & cheese seasonality curves — back-to-school, holiday entertaining, New Year resets.

forecast_demand.py Source · 130 lines
"""Goodles Demand Forecasting Engine"""
import numpy as np

# Seasonality profile: mac & cheese (weeks 1-52)
# Back-to-school peak, holiday entertaining, New Year dip
SEASON_CURVE = np.array([
    0.92, 0.90, 0.88, 0.87, 0.88, 0.90, 0.92, 0.93,  # Jan-Feb
    0.91, 0.90, 0.90, 0.91, 0.92, 0.93, 0.94, 0.95,  # Mar-Apr
    0.96, 0.95, 0.94, 0.93, 0.92, 0.91, 0.90, 0.90,  # May-Jun
    0.92, 0.95, 0.98, 1.02, 1.06, 1.10, 1.12, 1.14,  # Jul-Aug (BTS)
    1.12, 1.08, 1.04, 1.02, 1.04, 1.08, 1.12, 1.16,  # Sep-Oct
    1.20, 1.24, 1.26, 1.22, 1.18, 1.10, 1.02, 0.96,  # Nov-Dec (Holiday)
    0.94, 0.92, 0.91, 0.92,  # Year-end weeks 49-52 (New Year dip)
])  # 52 weekly factors, indexed by week % 52

def exponential_smooth(series, alpha=0.3):
    """Simple exponential smoothing: blend each new point into a running level."""
    result = np.zeros_like(series)
    result[0] = series[0]
    for t in range(1, len(series)):
        result[t] = alpha * series[t] + (1 - alpha) * result[t - 1]
    return result

def forecast_sku(sku_name, weeks_ahead=8):
    """Generate demand forecast for a single SKU."""
    # load_shipment_history() and get_current_week() are defined
    # elsewhere in the full 130-line script (excerpt shown here)
    history = load_shipment_history(sku_name)
    smoothed = exponential_smooth(history, alpha=0.3)
    base_forecast = smoothed[-1]

    # Apply seasonality
    current_week = get_current_week()
    forecast_weeks = []
    for w in range(weeks_ahead):
        week_idx = (current_week + w) % 52
        seasonal_adj = SEASON_CURVE[week_idx]
        forecast_weeks.append(base_forecast * seasonal_adj)

    return np.array(forecast_weeks)
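Since `load_shipment_history` and `get_current_week` live elsewhere in the full script, here is a self-contained sketch of the same smoothing-plus-seasonality idea on synthetic shipment data (all numbers invented for illustration):

```python
import numpy as np

# Synthetic 12-week seasonal curve and 6-week shipment history (illustrative only)
curve = np.array([0.9, 0.95, 1.0, 1.05, 1.1, 1.15, 1.1, 1.05, 1.0, 0.95, 0.9, 0.95])
history = np.array([10000, 10200, 9800, 10100, 10300, 10000], dtype=float)

def exponential_smooth(series, alpha=0.3):
    """Simple exponential smoothing: blend each new point into a running level."""
    result = np.zeros_like(series)
    result[0] = series[0]
    for t in range(1, len(series)):
        result[t] = alpha * series[t] + (1 - alpha) * result[t - 1]
    return result

# Forecast = smoothed current level, scaled by the seasonal factor for each week
base = exponential_smooth(history)[-1]
forecast = base * curve[:8]
```

The smoothed level lands near the recent shipment average, and each forecast week just rescales it by that week's seasonal factor, which is the same shape the full script applies to real history.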
"""Goodles Demand Forecasting Engine"""
import numpy as np

# Seasonality profile: mac & cheese (weeks 1-52)
# Back-to-school peak, holiday entertaining, New Year dip
SEASON_CURVE = np.array([
    0.92, 0.90, 0.88, 0.87, 0.88, 0.90, 0.92, 0.93,  # Jan-Feb
    0.91, 0.90, 0.90, 0.91, 0.92, 0.93, 0.94, 0.95,  # Mar-Apr
    0.96, 0.95, 0.94, 0.93, 0.92, 0.91, 0.90, 0.90,  # May-Jun
    0.92, 0.95, 0.98, 1.02, 1.06, 1.10, 1.12, 1.14,  # Jul-Aug (BTS)
    1.12, 1.08, 1.04, 1.02, 1.04, 1.08, 1.12, 1.16,  # Sep-Oct
    1.20, 1.24, 1.26, 1.22, 1.18, 1.10, 1.02, 0.96,  # Nov-Dec (Holiday)
])
Compounds over time
The model gets smarter every cycle — magnitude precision, promo overlap effects, new SKU projections, and co-packer lead time alignment all improve as data accumulates.
Year one gives you tighter forecasts than manual estimates. Year two, the model incorporates what it learned from year one's actuals. By year three, you're running a demand engine that knows your business better than any spreadsheet ever could.

Syndicated data is the starting point. Not the ceiling.

Once the core analysis pipeline is running, the same infrastructure scales across functions — from brand and innovation to finance and operations. Three areas where automated workflows could have immediate impact at Goodles:

01

Promo Lift Quantification

Measure which promotions actually drive incremental volume versus just shifting purchase timing. Isolate true lift from BOGO, TPR, and digital offers across channels.
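As a sketch of what that quantification could look like: take a baseline from non-promo weeks, then count promoted volume above that baseline as incremental. Column names (`units`, `on_promo`) are my assumption for illustration:

```python
import pandas as pd

def promo_lift(weekly: pd.DataFrame) -> dict:
    """Estimate incremental lift: promoted volume vs. a non-promo baseline.

    Expects columns 'units' and 'on_promo' (bool); names are illustrative.
    """
    baseline = weekly.loc[~weekly['on_promo'], 'units'].mean()
    promo_weeks = weekly.loc[weekly['on_promo'], 'units']
    return {
        'baseline_weekly_units': round(baseline),
        'incremental_units': round((promo_weeks - baseline).sum()),
        'lift_pct': round(100 * (promo_weeks.mean() / baseline - 1), 1),
    }
```

A fuller version would also check the weeks after each promo for a pantry-loading dip, to separate volume that was truly incremental from volume that was just pulled forward.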

02

Competitive Pricing Monitor

Track shelf pricing for Goodles against Annie's, Kraft, Banza, and private label across key accounts. Flag gap changes before they impact velocity.
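A minimal sketch of the gap-flagging logic: compare this week's price gaps to last week's and surface the movers. The $0.25 threshold and column names are assumptions for illustration:

```python
import pandas as pd

GAP_ALERT = 0.25  # flag when the gap to a competitor moves more than $0.25 (illustrative)

def flag_gap_changes(prices: pd.DataFrame) -> pd.DataFrame:
    """Compare this week's Goodles-vs-competitor price gaps to last week's.

    Expects columns 'competitor', 'gap_last_week', 'gap_this_week', where
    gap = Goodles shelf price minus competitor price (names illustrative).
    """
    out = prices.copy()
    out['gap_change'] = out['gap_this_week'] - out['gap_last_week']
    out['alert'] = out['gap_change'].abs() > GAP_ALERT
    # Biggest movers first, regardless of direction
    return out[out['alert']].sort_values('gap_change', key=abs, ascending=False)
```

Run weekly against shelf-price pulls, this turns pricing from a quarterly look-back into an early-warning signal before a gap shows up in velocity.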

03

Review & Sentiment Mining

Purpose-built analysis of reviews from Amazon, Target.com, and social platforms — tuned to Goodles' specific product attributes, not generic sentiment dashboards. Surface the flavor, texture, and packaging signals that matter to your innovation pipeline.
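A keyword-based sketch of attribute tagging, to show the shape of the idea. The lexicon below is invented for illustration; a production version would be tuned on real Goodles reviews:

```python
from collections import Counter

# Illustrative attribute lexicon, not a tuned production version
ATTRIBUTES = {
    'flavor': ['cheesy', 'flavor', 'taste', 'bland', 'salty'],
    'texture': ['creamy', 'gritty', 'mushy', 'al dente', 'texture'],
    'packaging': ['box', 'bag', 'packaging', 'seal'],
}

def tag_reviews(reviews: list[str]) -> Counter:
    """Count how many reviews mention each product attribute."""
    counts = Counter()
    for review in reviews:
        text = review.lower()
        for attribute, keywords in ATTRIBUTES.items():
            if any(kw in text for kw in keywords):
                counts[attribute] += 1
    return counts
```

Even this simple version turns a pile of Amazon and Target.com reviews into a ranked list of which attributes customers actually talk about, which is the signal an innovation pipeline needs.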

Syndicated data was the proof. These are next.

AI doesn't replace your data stack. It connects it.

Most CPG brands already invest in strong data platforms. The gap isn't the tools — it's the manual work between them. AI becomes the orchestration layer that makes existing investments compound.

Syndicated Data
SPINS · Circana · NielsenIQ
Market share, category trends, competitive benchmarks across natural and conventional
Retailer POS
Target POL · Retail Link · Kroger 84.51°
Store-level velocity, inventory, promotional lift
Digital Shelf
Amazon ARA · Stackline · Profitero
E-commerce performance, content scoring, search rank, review analysis
DTC Analytics
Shopify · GA4 · goodles.com
Direct-to-consumer performance, subscription metrics, LTV analysis
AI Orchestration Layer
Input
Raw exports sitting in shared drives and email attachments
Output
Cross-referenced insights, retailer decks, and demand signals — automatically
Not a new platform
Not a new platform — the connective tissue between the ones you already use.
Lightweight Python scripts that sit between existing data sources and existing workflows. No new logins, no migration, no vendor dependency. Just faster output from the same inputs.

Working tools, not consulting. Fractional, not full-time.

Day 1

Audit data flows across SPINS, Circana, Amazon ARA, and retailer portals. Identify the three biggest time sinks. Map the stack. Understand what's working and what needs acceleration.

Week 1

First automated pipeline deployed. SPINS + Circana data unified in one view. Retailer deck generation operational for the next buyer meeting. Brand strategy support begins.

Month 1

Competitive monitoring running. Demand forecasting live for core SKUs. Automated reporting reclaiming hours across the team. Fractional brand management cadence established.

Goodles doesn't need another dashboard or another vendor.

It needs someone who understands the business and can build the tools to scale it.

I'd love to walk through how these workflows could plug into Goodles' specific data stack and team structure. Looking forward to connecting.

This site, the data pipelines, and every visualization were built with the same tools and workflows proposed in this document. All data shown is representative and for demonstration purposes only.