Is Your AI Learning Too Much?
A Privacy-First Guide for Creative SMBs
By: Nick Fastje
At Foundation, we believe in using AI responsibly, including when we write. This post was drafted with AI’s help, while fine-tuned and sourced by a real human (👋).
You’re moving fast, thinking big, and using AI to do more with less. Smart. But there’s a catch: some of these tools might be quietly learning from your work — and not in the good “AI assistant gets better over time” way. We’re talking your pitch decks, logos, campaign copy, and client strategies becoming training data.
If you’re a creative firm (agency, architecture practice, design studio, or anything in between), you’ve got more than speed to protect. You’ve got ideas. IP. Reputation. The last thing you want as an organization is to trade short-term productivity gains for long-term losses in data security and reputation. So let’s talk about using AI responsibly, without giving away the secret sauce.
What Does “Training on Input” Actually Mean?
It sounds harmless: “Your data may be used to improve our models.” Translation? Whatever you type, paste, or upload could end up in the training set for the next generation of that AI tool.
That could mean:
• Your client’s branding ideas influence someone else’s project
• Your internal tone-of-voice guide leaks into public-facing responses
• Your carefully crafted architectural concept ends up as a Midjourney vibe
Now, not all AI tools do this, but many do unless you’re paying attention (and often, paying for a higher-tier plan).
Tools That Don’t Train on Your Data (Bless Them)
Thankfully, some AI vendors are starting to get it. Here’s who is doing privacy right (or at least better):
• ChatGPT Team or Enterprise: Doesn’t train on your input. On the free tier, your input is used for training unless you opt out.
• Claude: Training on input is opt-in — it stays out of the training set unless you say otherwise.
• Adobe Firefly: Trained only on licensed content.
• Midjourney and Google Gemini: Input stays private on paid subscriptions; free tiers train on input.
You’ve likely noticed a trend here — your data is safe as long as you’re willing to pay for the privilege. If it’s free and flashy, assume your input’s going into the stew.
Why This Is a Big Deal for Creative Teams
When your work is the product, not just a means to an end, privacy hits different. Here’s why:
• Client trust is everything. You wouldn’t CC a competitor on your client proposal. Don’t let your AI do it either.
• Your IP is your value. Whether it’s mood boards or ad copy, you need to own what you create — and where it ends up.
• Creative work is context-heavy. Generic AI doesn’t understand nuance. But if it’s trained on your confidential input, that context could slip into someone else’s session.
Bottom line? You’re not paranoid. You’re being professional.
Red Flags and Questions to Ask Before You Prompt
Training isn’t just for AI — your team needs it too. Odds are, they’re already using these tools, with or without a green light. A little guidance now saves a lot of cleanup later.
Before you feed that next tagline into a chatbot, ask:
• “Would I be OK with this being published on the internet?” Because in some roundabout way, it might be.
• “Does this tool mention privacy or training in its Terms?” If you can’t find it, that’s a sign.
• “Am I logged in under a personal or company account?” (Tip: Always use work accounts for business-related prompts.)
And if the tool doesn’t offer team controls, admin dashboards, or any mention of enterprise usage? It’s probably not built for business.
AI Is a Power Tool. Use It Safely.
We’re not here to scare you off AI — far from it. We’re big fans. But just like you’d never cut tile with a chainsaw, you shouldn’t paste sensitive client work into a chatbot that doesn’t know how to keep a secret.
Use AI. Just don’t let it use you.
Need help now? Foundation Technologies helps creative teams choose the AI tools that match their business needs in a safe and responsible fashion — no jargon, no “did-I-just-leak-a-campaign?” moments. Let’s talk.