Goodles is scaling fast — posting double-digit growth across natural, conventional, club, and e-commerce channels. But without a dedicated brand manager, the strategic work that fuels the next stage of growth gets spread thin across the team.
What if you could add a 10-year CPG brand marketer and an AI automation layer — in one fractional role?
Goodles has scaled to 35,000+ doors across Target, Walmart, Kroger, and Whole Foods without a dedicated brand manager. That's a testament to the team — but it means brand strategy work is cobbled together across functions, and the data-intensive tasks that fuel growth either take too long, get deprioritized, or don't happen at all. Here's what that looks like across the CPG industry:
Without a brand manager, these hours get spread across innovation, marketing, and leadership — pulling focus from their core responsibilities.
| Period | Retailer | Product | $ Sales | Units | ACV% | $/Unit | %Chg YA |
|---|---|---|---|---|---|---|---|
| 4W 02/01/26 | Total US | Cheddy Mac 6oz | $3,124,567 | 625,321 | 78.4 | $4.99 | +18.2% |
| 4W 02/01/26 | Total US | Shella Good 6oz | $2,467,890 | 494,568 | 72.1 | $4.99 | +22.6% |
| 4W 02/01/26 | Total US | Twist My Parm 6oz | $1,812,345 | 363,195 | 65.8 | $4.99 | +28.4% |
| 4W 02/01/26 | Total US | Stalker Ranch 6oz | $812,456 | 162,815 | 45.1 | $4.99 | +42.8% |
| 4W 02/01/26 | Total US | Vegan Is Believin' 5.5oz | $1,156,789 | 210,689 | 52.4 | $5.49 | +35.2% |
| 4W 02/01/26 | Target | Cheddy Mac 6oz | $845,234 | 169,318 | 91.2 | $4.99 | +21.4% |
| *… 250 rows across 5 retailers, 12 SKUs, 5 periods …* | | | | | | | |
This isn't a SaaS platform or a consulting deck. It's a fractional brand marketer who also builds AI automation — someone who understands the P&L, the buyer relationships, and the cross-functional complexity of running brand work without a dedicated team, and can create tools that accelerate what's already working.
Architecture: 109 lines of Python — pandas for data wrangling, matplotlib for visualization, CSV as the interchange format.
```python
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

plt.style.use('default')
goodles_gold = '#D4930D'
dark_gray = '#2d2926'

def clean_money(x):
    """Convert a $1,234.56 string or number to float."""
    if isinstance(x, str):
        return float(x.replace('$', '').replace(',', ''))
    return float(x)

def clean_percent(x):
    """Convert a +18.2% string or number to float."""
    if isinstance(x, str):
        return float(x.replace('%', '').replace('+', ''))
    return float(x)

# 1. Load data
print("Loading Goodles syndicated data...")
df = pd.read_csv('syndicated_data.csv')
latest_period = df['Period'].iloc[-1]
df['Dollar_Sales'] = df['Dollar_Sales'].apply(clean_money)
df['Dollar_Growth_YA'] = df['Dollar_Growth_YA'].apply(clean_percent)
df['ACV'] = df['ACV'].astype(float)

# Filter to Total US and the latest period
df_total = df[(df['Retailer'] == 'Total US') & (df['Period'] == latest_period)].copy()

# Calculate velocity ($ per point of ACV distribution)
df_total['Velocity'] = df_total['Dollar_Sales'] / df_total['ACV']

# 2. Generate chart: distribution vs. velocity
fig, ax = plt.subplots(figsize=(10, 6.5), dpi=300)
scatter = ax.scatter(
    df_total['ACV'],
    df_total['Velocity'],
    s=df_total['Dollar_Sales'] / 3000,
    c=df_total['Dollar_Growth_YA'],
    cmap='RdYlGn',
    alpha=0.7,
    edgecolors=dark_gray,
)

# Quadrant lines at the portfolio medians
med_acv = df_total['ACV'].median()
med_vel = df_total['Velocity'].median()
ax.axvline(med_acv, linestyle='--', alpha=0.3)
ax.axhline(med_vel, linestyle='--', alpha=0.3)

# Formatting
ax.set_title('Goodles Portfolio: Distribution vs. Velocity', loc='left', fontweight='bold')
ax.set_xlabel('Distribution (ACV %)')
ax.set_ylabel('Velocity ($/point of distribution)')
plt.tight_layout()
plt.savefig('output/distribution_vs_velocity.png')
```
Right now, building a buyer-ready deck means pulling SPINS data from the natural channel, cross-referencing Circana for conventional, reformatting for each account, and writing a narrative around it. The sell story generator does all of it automatically.
Architecture: velocity indexing against category benchmarks, distribution gap scoring, and automated narrative generation.
"""Goodles Retailer Sell Story Generator""" import argparse import pandas as pd import numpy as np import matplotlib.pyplot as plt GOODLES_GOLD = '#D4930D' DARK_GRAY = '#2d2926' def load_data(retailer: str) -> pd.DataFrame: """Load syndicated data for the specified retailer.""" df = pd.read_csv('syndicated_data.csv') df['Dollar_Sales'] = df['Dollar_Sales'].apply( lambda x: float(str(x).replace('$', '').replace(',', '')) ) df['ACV'] = df['ACV'].astype(float) latest = df['Period'].unique()[-1] return df[(df['Retailer'] == retailer.title()) & (df['Period'] == latest)].copy() def calculate_velocity_index(df_retailer): """Calculate Goodles velocity index vs. category average.""" df_retailer['Velocity'] = df_retailer['Dollar_Sales'] / df_retailer['ACV'] merged = df_retailer[['Product', 'Velocity', 'ACV', 'Dollar_Growth_YA']].copy() merged.columns = ['Product', 'Goodles_Vel', 'ACV', 'Growth_YA'] # Category benchmark: portfolio mean * 0.58 cat_avg = merged['Goodles_Vel'].mean() * 0.58 merged['Category_Vel'] = cat_avg merged['Velocity_Index'] = (merged['Goodles_Vel'] / merged['Category_Vel'] * 100).round(0) return merged def identify_insights(merged): """Find the lead product, opportunity gap, and key ask.""" lead = merged.loc[merged['Goodles_Vel'].idxmax()] opportunity = merged.loc[ (merged['Growth_YA'] > 20) & (merged['ACV'] < 55) ].sort_values('Growth_YA', ascending=False) return { 'lead_product': lead['Product'], 'lead_ratio': round(lead['Goodles_Vel'] / lead['Category_Vel'], 1), }
"""Goodles Retailer Sell Story Generator""" import argparse import pandas as pd import numpy as np import matplotlib.pyplot as plt GOODLES_GOLD = '#D4930D' DARK_GRAY = '#2d2926' def load_data(retailer: str) -> pd.DataFrame: """Load syndicated data for the specified retailer.""" df = pd.read_csv('syndicated_data.csv') df['Dollar_Sales'] = df['Dollar_Sales'].apply( lambda x: float(str(x).replace('$', '').replace(',', '')) ) df['ACV'] = df['ACV'].astype(float) latest = df['Period'].unique()[-1] return df[(df['Retailer'] == retailer.title()) & (df['Period'] == latest)].copy() def calculate_velocity_index(df_retailer): """Calculate Goodles velocity index vs. category average.""" df_retailer['Velocity'] = df_retailer['Dollar_Sales'] / df_retailer['ACV'] merged = df_retailer[['Product', 'Velocity', 'ACV', 'Dollar_Growth_YA']].copy() merged.columns = ['Product', 'Goodles_Vel', 'ACV', 'Growth_YA'] # Category benchmark: portfolio mean * 0.58 cat_avg = merged['Goodles_Vel'].mean() * 0.58 merged['Category_Vel'] = cat_avg merged['Velocity_Index'] = (merged['Goodles_Vel'] / merged['Category_Vel'] * 100).round(0) return merged
Your team already knows back-to-school and holiday entertaining drive Goodles' biggest weeks. The hard part isn't knowing the peaks exist — it's nailing the magnitude. Is the holiday spike +18% or +26% this year? That gap determines whether you over-order (margin hit) or under-order (out-of-stocks at your biggest moment). And when a Target Circle deal lands during back-to-school, the promotional lift compounds in ways that are hard to estimate manually.
Architecture: exponential smoothing with mac & cheese seasonality curves — back-to-school, holiday entertaining, New Year resets.
"""Goodles Demand Forecasting Engine""" import numpy as np # Seasonality profile: mac & cheese (weeks 1-52) # Back-to-school peak, holiday entertaining, New Year dip SEASON_CURVE = np.array([ 0.92, 0.90, 0.88, 0.87, 0.88, 0.90, 0.92, 0.93, # Jan-Feb 0.91, 0.90, 0.90, 0.91, 0.92, 0.93, 0.94, 0.95, # Mar-Apr 0.96, 0.95, 0.94, 0.93, 0.92, 0.91, 0.90, 0.90, # May-Jun 0.92, 0.95, 0.98, 1.02, 1.06, 1.10, 1.12, 1.14, # Jul-Aug (BTS) 1.12, 1.08, 1.04, 1.02, 1.04, 1.08, 1.12, 1.16, # Sep-Oct 1.20, 1.24, 1.26, 1.22, 1.18, 1.10, 1.02, 0.96, # Nov-Dec (Holiday) ]) def exponential_smooth(series, alpha=0.3): """Holt-Winters-style double exponential smoothing.""" result = np.zeros_like(series) result[0] = series[0] for t in range(1, len(series)): result[t] = alpha * series[t] + (1 - alpha) * result[t - 1] return result def forecast_sku(sku_name, weeks_ahead=8): """Generate demand forecast for a single SKU.""" history = load_shipment_history(sku_name) smoothed = exponential_smooth(history, alpha=0.3) base_forecast = smoothed[-1] # Apply seasonality current_week = get_current_week() forecast_weeks = [] for w in range(weeks_ahead): week_idx = (current_week + w) % 52 seasonal_adj = SEASON_CURVE[week_idx] forecast_weeks.append(base_forecast * seasonal_adj) return np.array(forecast_weeks)
"""Goodles Demand Forecasting Engine""" import numpy as np # Seasonality profile: mac & cheese (weeks 1-52) # Back-to-school peak, holiday entertaining, New Year dip SEASON_CURVE = np.array([ 0.92, 0.90, 0.88, 0.87, 0.88, 0.90, 0.92, 0.93, # Jan-Feb 0.91, 0.90, 0.90, 0.91, 0.92, 0.93, 0.94, 0.95, # Mar-Apr 0.96, 0.95, 0.94, 0.93, 0.92, 0.91, 0.90, 0.90, # May-Jun 0.92, 0.95, 0.98, 1.02, 1.06, 1.10, 1.12, 1.14, # Jul-Aug (BTS) 1.12, 1.08, 1.04, 1.02, 1.04, 1.08, 1.12, 1.16, # Sep-Oct 1.20, 1.24, 1.26, 1.22, 1.18, 1.10, 1.02, 0.96, # Nov-Dec (Holiday) ])
Once the core analysis pipeline is running, the same infrastructure scales across functions — from brand and innovation to finance and operations. Three areas where automated workflows could have immediate impact at Goodles:
Measure which promotions actually drive incremental volume versus just shifting purchase timing. Isolate true lift from BOGO, TPR, and digital offers across channels.
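A minimal sketch of the incrementality check: compare promoted-week units to a non-promoted baseline for the same SKU. The column names and the sample weekly numbers below are illustrative assumptions, not Goodles' actual feed:

```python
import pandas as pd

# Hypothetical weekly unit sales for one SKU, with a promo flag
weekly = pd.DataFrame({
    'sku': ['Cheddy Mac 6oz'] * 6,
    'units': [10000, 10400, 9900, 14800, 15200, 10100],
    'on_promo': [False, False, False, True, True, False],
})

# Baseline = average of non-promoted weeks; lift = promoted vs. baseline
baseline = weekly.loc[~weekly['on_promo'], 'units'].mean()
promo_avg = weekly.loc[weekly['on_promo'], 'units'].mean()
lift_pct = (promo_avg / baseline - 1) * 100

# A post-promo dip well below baseline would suggest pantry-loading
# (shifted purchase timing) rather than truly incremental volume.
print(f"Promo lift: {lift_pct:.1f}% over a {baseline:,.0f}-unit baseline")
```

A production version would control for seasonality and distribution changes before attributing lift, but the baseline-vs-promoted comparison is the core of the analysis.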
Track shelf pricing for Goodles against Annie's, Kraft, Banza, and private label across key accounts. Flag gap changes before they impact velocity.
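The gap flag can be sketched in a few lines of pandas. The shelf prices, accounts, and alert threshold below are illustrative assumptions, not observed data:

```python
import pandas as pd

# Hypothetical shelf prices by account and brand
shelf = pd.DataFrame({
    'account': ['Target', 'Target', 'Kroger', 'Kroger'],
    'brand': ['Goodles', "Annie's", 'Goodles', "Annie's"],
    'price': [4.99, 4.49, 5.29, 4.59],
})

# Pivot to one row per account, then compute the Goodles-vs-competitor gap
pivot = shelf.pivot(index='account', columns='brand', values='price')
pivot['gap'] = pivot['Goodles'] - pivot["Annie's"]

GAP_THRESHOLD = 0.60  # assumed alert threshold, tuned per category

flags = pivot[pivot['gap'] > GAP_THRESHOLD]
for account, row in flags.iterrows():
    print(f"{account}: Goodles is ${row['gap']:.2f} above Annie's — review before velocity slips")
```

Run weekly against scraped or syndicated price data, this surfaces gap drift early enough to act before it shows up in velocity.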
Purpose-built analysis of reviews from Amazon, Target.com, and social platforms — tuned to Goodles' specific product attributes, not generic sentiment dashboards. Surface the flavor, texture, and packaging signals that matter to your innovation pipeline.
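At its simplest, attribute tagging maps each review to the product dimensions it mentions. The lexicons and sample reviews below are illustrative; a production version would use an LLM or trained classifier rather than keyword matching, but the output shape — reviews tagged by attribute — is the same:

```python
# Hypothetical keyword lexicons per product attribute
ATTRIBUTE_LEXICON = {
    'flavor':    ['cheesy', 'flavor', 'taste', 'bland', 'salty'],
    'texture':   ['creamy', 'gritty', 'mushy', 'texture', 'chewy'],
    'packaging': ['box', 'bag', 'seal', 'packaging', 'resealable'],
}

def tag_review(text: str) -> list[str]:
    """Return the product attributes a review mentions."""
    text = text.lower()
    return [attr for attr, words in ATTRIBUTE_LEXICON.items()
            if any(w in text for w in words)]

reviews = [
    "Super cheesy and creamy — kids loved it",
    "The box arrived crushed but the taste is great",
]
for r in reviews:
    print(tag_review(r))
```

Aggregated over thousands of reviews, the attribute counts and their sentiment become a trend line the innovation team can act on.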
Syndicated data was the proof. These are next.
Most CPG brands already invest in strong data platforms. The gap isn't the tools — it's the manual work between them. AI becomes the orchestration layer that makes existing investments compound.
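The orchestration idea can be sketched as a thin pipeline that chains per-source extractors into one scheduled refresh. The step functions below are placeholders standing in for real connectors (SPINS and Circana exports, retailer portals), not working integrations:

```python
# Placeholder extractors — each would wrap a real data-source connector
def pull_spins():
    return {'source': 'SPINS', 'rows': 120}

def pull_circana():
    return {'source': 'Circana', 'rows': 130}

def unify(*extracts):
    """Combine per-source extracts into one unified row count."""
    return sum(e['rows'] for e in extracts)

PIPELINE = [pull_spins, pull_circana]

def run_weekly_refresh():
    """Run every extractor, unify the results, and publish."""
    extracts = [step() for step in PIPELINE]
    total = unify(*extracts)
    print(f"Unified view refreshed: {total} rows")
    return total

run_weekly_refresh()
```

The value is in the glue: adding Amazon ARA or a retailer portal later means adding one extractor to `PIPELINE`, not rebuilding the reporting.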
Audit data flows across SPINS, Circana, Amazon ARA, and retailer portals. Identify the three biggest time sinks. Map the stack. Understand what's working and what needs acceleration.
First automated pipeline deployed. SPINS + Circana data unified in one view. Retailer deck generation operational for the next buyer meeting. Brand strategy support begins.
Competitive monitoring running. Demand forecasting live for core SKUs. Automated reporting reclaiming hours across the team. Fractional brand management cadence established.
I'd love to walk through how these workflows could plug into Goodles' specific data stack and team structure. Looking forward to connecting.
This site, the data pipelines, and every visualization were built with the same tools and workflows proposed in this document. All data shown is representative and for demonstration purposes only.