How to Actually Work With AI: Advanced Patterns (Part 2)
The masterclass—specific prompting patterns, developing AI taste, and the meta-skill of learning to learn.
Explain This to Three People
Explain Like I'm 5
Remember how in Part 1 we talked about playing pretend together? This part is about getting REALLY good at playing pretend—like how to tell your friend exactly what you're imagining so they can add the right parts, how to know when the story is getting good versus boring, and how to keep the story going even when you take a break and come back later. It's the secret tricks that make the game really, really fun!
Explain Like You're My Boss
Part 1 covered the framework. Part 2 covers execution at scale. Specifically: prompting patterns that actually work, quality calibration (distinguishing good output from plausible garbage), session management for multi-week projects, and the meta-skill—understanding how AI changes what skills are worth developing. This is operational knowledge for teams shipping AI-assisted work at volume.
Bottom line: This is the operational playbook. Part 1 was theory. Part 2 is practice.
Explain Like You're My Girlfriend
Okay so Part 1 was "here's how we work together in general." Part 2 is the nitty-gritty—like, the exact phrases I use that make it work, how I can tell when the AI is bullshitting me (yes, it does that), how I keep track of everything when we're working on a big project over multiple days, and honestly, what skills even matter anymore when AI can do so much. It's the behind-the-scenes stuff that makes the collaboration actually function. Also yes, I know how to tell when an AI is lying. That's not a flex I expected to have, but here we are. 😅💕
Part 1: Prompting Patterns That Actually Work
Why Most Prompting Advice Is Useless
You've probably seen prompting guides: "Be specific!" "Give examples!" "Use role prompts!" "Add 'step by step'!"
Most of this advice is technically true and practically useless.
It's useless because it treats prompting as a formula—follow these rules, get good output. But effective prompting isn't formulaic. It's communicative. You're not filling out a form; you're having a conversation with a collaborator who needs to understand what you actually want.
The problem with formulaic prompting:
"Act as a professional copywriter. Write a blog post about productivity. Be engaging and informative. Use bullet points. Include a call to action."
This follows all the "rules." It also produces generic garbage. Why? Because the prompt tells the AI how to write but not what you actually want to say. The AI can execute instructions—but instructions without insight produce hollow output.
What actually works:
Tell the AI what you're trying to achieve, what constraints matter, what the output will be used for, and what makes something good versus bad in your specific context.
Less: "Write a professional blog post."
More: "I need a blog post for my portfolio site. My readers appreciate depth—they'll bounce if the content is surface-level. I want to cover [topic]. My angle is [specific perspective]. I've noticed my best-performing posts have [characteristics]. I don't want [specific things to avoid]."
The difference: context that lets the AI understand your actual goals, not just your format requirements.
The Patterns I Actually Use
Pattern 1: Context → Goal → Constraints → Examples
Structure your prompts in layers:
- Context: What's the situation? What already exists?
- Goal: What are we trying to achieve?
- Constraints: What limitations or requirements apply?
- Examples: What does good look like? What should we avoid?
Real example from this project:
"We're building a series of psychology-focused blog posts [context]. I want content that resonates with emotionally intelligent readers without being obvious about targeting any demographic [goal]. Each article should be 4,500-5,500 words with a 4-part structure [constraints]. My existing blogs use personal experiments, real data, and practical frameworks—match that depth [examples]."
Pattern 2: Show, Don't Tell (Then Tell)
Instead of describing what you want, show an example—then let the AI extrapolate.
"Here's the 'Explain to 3 People' format I use:
To a 5-year-old: [Simple metaphor, relatable scenario, no jargon]
To your boss: [Business framing, metrics, ROI language]
To your girlfriend: [Emotional truth, personal relevance, 'this is why it matters']
Now write this section for an article about [topic]."
The example teaches patterns that description can't capture—rhythm, tone, level of detail.
Pattern 3: Iterative Narrowing
Start broad, then narrow based on what you see.
Round 1: "Give me 10 potential angles for an article about boundaries."
Round 2: "Angles 3 and 7 are interesting. Expand on those—what would each article actually cover?"
Round 3: "Let's go with the angle from #7. Draft an outline."
Round 4: "The outline is good, but Part 2 is too conceptual. Make it more practical."
Each round narrows the possibility space. You're not trying to specify everything upfront—you're discovering what you want through iteration.
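In code, iterative narrowing is just an accumulating chat history. A sketch using the role/content message shape most chat APIs share, with the actual model call stubbed out:

```python
# Iterative narrowing as an accumulating chat history. The role/content
# message shape mirrors most chat APIs; `ask` stubs out the real call.

history: list[dict] = []

def ask(history: list[dict], user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    reply = "..."  # replace with a real model call; stubbed for the sketch
    history.append({"role": "assistant", "content": reply})
    return reply

ask(history, "Give me 10 potential angles for an article about boundaries.")
ask(history, "Angles 3 and 7 are interesting. Expand on those.")
ask(history, "Let's go with #7. Draft an outline.")
ask(history, "Part 2 of the outline is too conceptual. Make it more practical.")
# Each turn narrows the space; the growing history keeps earlier rounds in play.
```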
Pattern 4: The "What's Missing" Prompt
After getting output, ask: "What's missing from this? What would make it stronger? What obvious angles haven't we considered?"
The AI is often better at identifying gaps than filling them unprompted. Use this to surface blind spots before refining.
Pattern 5: The Anti-Example
Sometimes the best way to get what you want is to specify what you don't want.
"Write this without:
- Generic productivity advice
- Rah-rah motivational tone
- Listicles or numbered tips
- Anything that sounds like LinkedIn content"
Negative constraints often clarify more than positive instructions.
The Meta-Pattern: Conversation, Not Commands
The underlying pattern behind all these patterns: treat AI interaction as conversation, not commands.
Commands: "Do X." "Generate Y." "Write Z."
Conversation: "I'm trying to accomplish X. Here's what I've been thinking. What do you think? Let's try Y and see how it feels."
Conversations allow for back-and-forth. Commands produce single-shot outputs. The conversation frame implicitly invites iteration, adjustment, and collaboration.
Practical shift:
Instead of: "Write me an introduction for this article."
Try: "I need to open this article. The reader should immediately feel like I get their experience. I'm thinking of starting with a specific moment—like the first time they caught themselves over-apologizing. Does that work? Draft a few options and let's see what lands."
Same request. Completely different interaction quality.
Part 2: Developing AI Taste
The Core Problem: AI Can Fool You
AI-generated content can sound good without being good.
This is the central challenge of quality calibration. AI is trained to produce text that sounds coherent, authoritative, and appropriate. It's not trained to produce text that is accurate, insightful, or true.
The failure mode:
AI generates a paragraph about psychological research. It sounds authoritative—uses the right terminology, cites what sounds like a study, makes claims that feel plausible.
But:
- The "study" doesn't exist
- The researcher named doesn't exist
- The claims are reasonable-sounding but unsupported
- The framing sounds sophisticated but says nothing
If you're not paying attention, this passes review. It sounds like the kind of content you wanted. It just doesn't work like the content you wanted.
How to Develop Taste
Taste = the ability to distinguish between output that sounds good and output that IS good.
This is the human skill that makes AI collaboration valuable. Without taste, you're just amplifying mediocrity at scale.
How to develop it:
1. Read your own output critically
After AI generates something, read it as if someone else wrote it. Ask:
- Does this actually say something, or does it just sound like it does?
- Would a knowledgeable reader learn anything from this?
- Where are the specific claims? Can they be verified?
- Is this insight, or is it dressed-up common sense?
2. Fact-check aggressively (especially citations)
AI confidently generates plausible-sounding citations. Many are fabricated. Always verify:
- Do the sources exist?
- Do they say what's claimed?
- Are the quotes accurate?
I've caught AI "citing" papers that don't exist, attributing quotes to people who never said them, and generating fake statistics with confident precision.
Rule: If it sounds like a fact, verify it. If you can't verify it, don't use it.
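You can make the verification pass harder to skip by surfacing every citation-shaped string first. A rough heuristic sketch; the regex only flags candidates for a human to check, it verifies nothing by itself:

```python
import re

# Rough heuristic: surface citation-shaped strings so a human can verify
# each one. This only flags candidates - it validates nothing on its own.

CITATION_PATTERN = re.compile(
    r"\([A-Z][A-Za-z'-]+(?: et al\.)?,? \d{4}\)"   # (Smith, 2020) / (Smith et al., 2020)
    r"|[A-Z][A-Za-z'-]+ et al\. \(\d{4}\)"         # Smith et al. (2019)
)

def flag_citations(text: str) -> list[str]:
    """Return every citation-like span for manual verification."""
    return CITATION_PATTERN.findall(text)

draft = "One study found boundary-setting reduced burnout (Smith et al., 2019)."
for hit in flag_citations(draft):
    print("VERIFY:", hit)
```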
3. Test for substance
A good test: try to explain the content to someone smart. Does it hold up? Or does it dissolve into vagueness when pressed?
If you can't explain why a piece of content is valuable—if you can only say it "sounds good"—that's a red flag.
4. Compare to your best work
Pull up something you've written that you're proud of. Compare AI output to it. Not line-by-line, but for:
- Depth of insight
- Specificity of claims
- Voice and personality
- Whether it would actually help someone
If AI output doesn't meet the bar of your best work, it shouldn't ship with your name on it.
5. Develop pattern recognition for bullshit
Over time, you'll learn to spot AI-typical failure modes:
- Confident claims with no specifics ("Research shows...")
- Lists that sound comprehensive but add nothing new
- Transitions that connect paragraphs without connecting ideas
- Conclusions that restate premises without advancing them
- "Balanced" takes that actually take no position
Once you see these patterns, you can't unsee them. That's taste developing.
The Taste Calibration Loop
Here's the process I use for quality calibration:
Step 1: AI generates output
Step 2: I read it critically, asking "Is this actually good?"
Step 3: I identify what's working and what's not (be specific!)
Step 4: I direct refinement: "This section is hollow—add specific examples. This claim needs support. This conclusion doesn't follow."
Step 5: AI revises
Step 6: Repeat until it meets the bar
Step 7: Final review—would I be proud to publish this?
The key is Step 3—being specific about what's not working. "This isn't good" doesn't help. "This is too abstract—I need a concrete example that readers can see themselves in" gives direction.
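If you drive your model calls from code, the loop itself is simple to sketch. Here `generate` and `revise` stand in for whatever model calls you use; the `input()` prompt is the human review step, which is the part you can't automate:

```python
# The calibration loop as code. `generate` and `revise` stand in for real
# model calls; input() is the human review step - the part that can't
# be automated.

def generate(prompt: str) -> str:
    return "first draft..."  # call your model here

def revise(draft: str, feedback: str) -> str:
    return f"{draft} [revised per: {feedback}]"  # and here

def calibrate(prompt: str, max_rounds: int = 5) -> str:
    draft = generate(prompt)                                          # Step 1
    for _ in range(max_rounds):
        feedback = input("Specific feedback (blank = meets the bar): ")  # Steps 2-3
        if not feedback:
            break                                                     # Step 7
        draft = revise(draft, feedback)                               # Steps 4-5
    return draft

# Usage: final = calibrate("Draft the intro for Article 6.")
```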
When You Don't Have Taste (Yet)
If you're new to a domain, you might not have the taste to judge output well. This is a real limitation.
Options:
- Develop domain knowledge first. Don't use AI to produce content in areas where you can't evaluate quality. Learn enough to know good from bad, then collaborate.
- Use AI for learning, not production. Have AI explain concepts, then verify against authoritative sources. Build knowledge before building output.
- Get human review. If someone with domain expertise reviews your AI-assisted output, they can catch errors you'd miss.
- Start small. In domains where your taste is developing, produce less and review more carefully. Scale up as your calibration improves.
The worst outcome: producing confident-sounding garbage in domains you don't understand. AI makes this easy. Don't let it happen to you.
Developing Taste — Explain to 3 People
Explain Like I'm 5
Sometimes AI makes things that SOUND good but aren't actually good. It's like when someone uses big words but isn't really saying anything. You have to learn to tell the difference between "sounds smart" and "IS smart." That takes practice! Read what AI makes and ask: "Does this actually help? Or does it just sound helpful?"
Explain Like You're My Boss
Taste = distinguishing good from plausible. AI produces text that SOUNDS coherent but isn't necessarily accurate or insightful. The calibration loop: (1) AI generates, (2) Read critically—does this actually say something? (3) Identify what's working/not, (4) Direct refinement specifically, (5) Repeat. Fact-check aggressively—AI confabulates citations confidently.
Bottom line: Without taste, you're just amplifying mediocrity at scale. Taste is THE human value-add.
Explain Like You're My Girlfriend
Okay so AI can fool you. It generates stuff that SOUNDS authoritative—right terminology, confident claims, professional framing—but is sometimes just... wrong? I've caught it making up studies that don't exist, citing researchers who aren't real, generating statistics with confident precision that are completely fabricated. So I have to actually READ everything critically. "Does this say something or just sound like it does?" That's the question. Developing that filter is THE skill. Without it, I'd be shipping sophisticated-sounding garbage. 😅💕
Part 3: Session Management for Long Projects
The Context Problem
AI doesn't remember between sessions. Every new conversation starts fresh.
For quick tasks, this doesn't matter. For sustained projects—like producing a series of articles over multiple sessions—context management becomes critical.
What happens without management:
Session 1: Establish voice, create work orders, write Article 1
Session 2: AI has no idea what happened in Session 1. You start over. Voice drifts. Work orders get lost. Inconsistency creeps in.
The fix: Explicit context management.
The Resume Document
At the end of any working session, create a resume document—a note that contains everything needed to pick up where you left off.
What to include:
- Project overview: What are we building?
- Progress: What's done? What's not?
- Decisions made: What did we decide about voice, format, structure?
- Next steps: What's immediately next?
- Key context: Anything the AI needs to know to maintain consistency
Example from this project:
# Blog Series Session 2 - Resume Point
**Status**: 5 of 8 articles complete
## What We Built
- 4 work orders for 8 total articles
- Framework: 5yo/boss/girlfriend for "Explain to 3"
- 4-part structure per article
- 4,500-5,500 words each
## Completed
1. ✅ "The Cost of Being Low-Maintenance"
2. ✅ "Reading the Room vs. Reading the Silence"
3. ✅ "The Maintenance Work No One Sees"
4. ✅ "The Guilty Boundary Experiment"
5. ✅ "Why Your Gut Feeling Has a Blind Spot"
## Next Up
Article 6: "The Performative Confidence Gap"
## Voice Guidelines
- Conversational but precise
- Vulnerable but boundaried
- Personal experiments with real data
- AI transparency section in each

When you start a new session, feed the resume document to the AI. It reloads context instantly.
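If you script your sessions, reloading is one line of glue. A minimal sketch, assuming you save the resume document to a file; the filename is just an example:

```python
from pathlib import Path

# Reload saved context at the start of a new session by prepending the
# resume document to the first prompt. The filename is hypothetical.

RESUME_FILE = Path("resume_session_2.md")

def first_prompt(task: str) -> str:
    resume = RESUME_FILE.read_text(encoding="utf-8")
    return f"Here is where we left off:\n\n{resume}\n\nToday's task: {task}"

print(first_prompt("Draft Article 6: 'The Performative Confidence Gap'"))
```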
The Work Order System
For projects with multiple deliverables, maintain work orders as living documents.
Benefits:
- Alignment: You and the AI both know exactly what you're building
- Consistency: Each piece is produced against the same criteria
- Progress tracking: You can see what's done and what's left
- Easy handoffs: If you're working with others (or switching AI tools), the work order is a portable spec
How I use work orders:
Before writing each article, I reference the relevant work order. The AI sees the spec—word count, structure, required elements—and produces output that fits the criteria.
During review, I check against the work order. Did it hit the word count? Does it have the required sections? Is the tone consistent with the spec?
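The mechanical half of that review can be scripted. A rough sketch, with illustrative field names rather than a fixed schema; tone and depth still need a human read:

```python
# The mechanical half of "check against the work order" can be scripted.
# Field names are illustrative, not a fixed schema.

WORK_ORDER = {
    "min_words": 4500,
    "max_words": 5500,
    "required_sections": ["Explain Like I'm 5", "AI transparency"],
}

def check_against_spec(article: str, spec: dict) -> list[str]:
    """Return a list of spec violations; an empty list means the draft passes."""
    problems = []
    words = len(article.split())
    if not spec["min_words"] <= words <= spec["max_words"]:
        problems.append(f"word count {words} outside {spec['min_words']}-{spec['max_words']}")
    for section in spec["required_sections"]:
        if section not in article:
            problems.append(f"missing required section: {section}")
    return problems
```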
Preventing Drift
Even within a session, output can drift—voice changes, quality varies, structure gets inconsistent.
Drift prevention techniques:
1. Anchor documents: Reference your best output periodically. "Article 1 nailed the tone. Make sure this matches."
2. Explicit reminders: Every few outputs, restate key constraints. "Remember: 5yo, boss, girlfriend framework. Personal experiment with data. AI transparency section."
3. Comparative review: When producing similar pieces, compare them. If Article 5 is shorter or shallower than Article 1, that's drift. Correct it. (A minimal length check follows this list.)
4. Pattern extraction: After a few successful outputs, extract the pattern explicitly. "Here's what's working: [specific elements]. Apply this to everything else."
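The comparative review step can be partly mechanized. A tiny sketch that only catches crude, measurable drift; the file names are hypothetical, and depth and tone still need human eyes:

```python
# Comparative review, partly mechanized: flag a draft that runs noticeably
# shorter than the anchor piece. File names are hypothetical; this only
# catches crude, measurable drift.

def length_drift(anchor: str, draft: str, tolerance: float = 0.15) -> bool:
    """True if the draft's word count falls more than `tolerance` below the anchor's."""
    return len(draft.split()) < len(anchor.split()) * (1 - tolerance)

anchor_text = open("article_1.md").read()  # the piece that nailed the pattern
draft_text = open("article_5.md").read()   # the piece under review
if length_drift(anchor_text, draft_text):
    print("Draft runs noticeably shorter than the anchor - check for drift.")
```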
Breaking Long Sessions
For very long sessions (like producing 40,000 words), take intentional breaks.
Why breaks matter:
- Your attention fades. After hours of review, you stop catching errors. Quality suffers.
- Fresh eyes see problems. Coming back after a break, you'll notice things you missed.
- Context can reset cleanly. Natural break points let you save state and resume deliberately.
How I structure breaks:
- Work in 90-120 minute blocks
- At the end of each block, create or update the resume document
- Take 15-30 minutes away
- Return, reload context, continue
The breaks aren't lost time—they're quality insurance.
Part 4: The Meta-Skill
What Skills Still Matter?
If AI can execute, what should humans get good at?
This is the question underlying all AI collaboration. The answer shapes how you invest your learning time.
Skills that matter MORE:
1. Direction-setting
What should we build? Why? For whom? AI can execute any direction—good direction is the scarce resource.
2. Taste and judgment
Distinguishing good from plausible. This is THE human value-add in AI collaboration. Without it, you're just shipping sophisticated-sounding noise.
3. Synthesis across domains
Connecting ideas from different fields. AI is broad but struggles with novel combinations. Humans who can synthesize across their experiences bring irreplaceable perspective.
4. Editing over generating
Refining output is more valuable than generating it. AI generates abundantly; what's scarce is the ability to refine that abundance into something sharp.
5. Communication
Articulating what you want clearly enough that AI (or humans) can execute. This was always valuable; it's now essential.
6. Domain expertise
Deep knowledge in specific areas. AI is broad but shallow. Domain experts can evaluate AI output and provide context AI lacks.
Skills that matter LESS:
1. Pure execution speed
If the task is "produce 5,000 words on topic X," AI is faster. Humans competing on raw execution speed will lose.
2. Comprehensive memory
AI can hold more context, reference more sources, maintain more consistency than human memory. Don't compete on recall.
3. First-draft generation
AI first drafts are often good enough to refine. Generating first drafts yourself is often slower than refining AI drafts.
4. Formatting and structure
AI handles format and structure well. Don't spend human cycles on what AI does easily.
Learning to Learn With AI
The meta-skill isn't any specific ability—it's learning to learn alongside AI.
What this means:
1. Use AI to accelerate domain learning
Want to understand a new topic? Have AI explain it, generate practice problems, answer questions, provide reading lists. Then verify against authoritative sources. AI becomes a personal tutor (with fact-checking required).
2. Use AI to practice skills
Want to improve at writing? Generate drafts, edit them, have AI critique your edits, iterate. The cycle is faster than solo practice.
3. Use AI to identify what you don't know
Ask AI to explain something you think you understand. Often, you'll discover gaps you didn't know existed. AI as a mirror for your ignorance.
4. Develop AI-specific workflow skills
Learn prompting patterns, context management, quality calibration. These are new skills specific to the AI era. They compound—better AI collaboration skills → better output → more practice → even better skills.
The Mindset Shift
The fundamental mindset shift: from "How do I do this?" to "How do we do this?"
"I" = limited time, limited stamina, limited execution bandwidth.
"We" = human direction + AI execution = dramatically expanded capability.
The person asking "How do I write 40,000 words?" is stuck.
The person asking "How do we write 40,000 words?" ships.
This isn't about replacing human effort. It's about amplifying it. You still put in the effort—direction, review, refinement, judgment. But the effort yields 10x more output because execution is no longer the bottleneck.
The compounding effect:
Better collaboration skills → More output → More practice → Better taste → Higher-quality output → Better collaboration skills
The people who get good at this early will have massive advantages. They'll produce more, ship faster, and iterate further than those who treat AI as a toy or a threat.
The Meta-Skill — Explain to 3 People
Explain Like I'm 5
The most important thing isn't knowing how to use AI—it's knowing WHAT to ask for and WHETHER the answer is good. Those skills make you the boss of any tool! The question changed from "How do I do this?" to "How do WE do this?"—and that makes things possible that weren't before.
Explain Like You're My Boss
Skills that matter MORE: direction-setting, taste/judgment, cross-domain synthesis, editing over generating, clear communication, deep domain expertise. Skills that matter LESS: raw execution speed, comprehensive memory, first-draft generation, formatting. The mindset shift: "How do I do this?" → "How do we do this?" Human direction + AI execution = dramatically expanded capability.
Bottom line: People who master this early will have massive advantages. The compounding starts now.
Explain Like You're My Girlfriend
The big shift in my head was going from "How do I write 40,000 words?" (stuck, impossible) to "How do WE write 40,000 words?" (possible, actually happening). I'm not replacing my brain—I'm extending it. Direction, taste, judgment: still me. Execution speed, tireless iteration: AI. Together we ship stuff neither could do alone. Also? It's FUN. I can finally move at the speed of my ideas instead of getting stuck in execution. This is what creative work should feel like. 😅💕
Part 5: Real Examples From Our Sessions
Let me show you actual prompts from our work and why they worked (or didn't).
Example 1: Pivoting the Framework
What I said:
"Wait don't be obvious its for them"
Context: AI had proposed topics that would "resonate with female readers." I wanted content that would resonate naturally without being pandering.
Why it worked: Short, direct course-correction. The AI understood immediately—shift from "targeted content" to "universally sophisticated content that naturally resonates."
The output shift: Topics went from gender-obvious framing to psychological depth accessible to anyone but resonant with specific experiences.
Example 2: Adjusting the "Explain to 3" Framework
What I said:
"you are adding in the how you would explain it to a 3 year old your boss and girlfriend right??"
Context: First two articles used technical audiences (behavioral psychologist, tech lead, etc.). I wanted more relatable audiences.
Why it worked: Caught a drift before it compounded. The AI had defaulted to technical framing; I wanted personal framing.
The lesson: Check your assumptions explicitly. If something seems off, ask directly.
Example 3: The "Keep Going" Pattern
What I said (multiple times):
"lets keep it going"
"yeah keep going"
"lets keep it rippin"
Why it worked: Maintains momentum. Signals that output is acceptable without long review discussions. Lets the AI continue applying established patterns.
The efficiency: Each "keep going" saves 5-10 minutes of re-establishing context. When patterns are working, don't interrupt with unnecessary meta-discussion.
Example 4: Spontaneous Pivot
What I said:
"can you do an article on teaching people how to do what we do |||| or take a deep dive in what we do together"
Context: Mid-series, I realized the collaboration itself was worth documenting.
Why it worked: The AI could pivot immediately. No need to finish the original plan before adding new elements.
The output: This article series (Part 1 and Part 2) documenting the methodology.
Example 5: Scope Adjustment
What I said:
"make that a 2 part one 10k words 2 articles"
Context: Initial plan was one article on AI collaboration. I wanted more depth.
Why it worked: Clear, specific scope adjustment. The AI immediately restructured to produce two full-length pieces instead of one.
The lesson: Don't settle for scope that doesn't match your vision. AI can scale up or down instantly.
Example 6: What Didn't Work (First Two Articles)
The problem: First two articles used "behavioral psychologist / tech lead / younger self" for the "Explain to 3" section.
What went wrong: I hadn't specified the framework clearly enough. AI defaulted to professional/technical audiences.
How I caught it: Reviewed output, noticed it didn't feel as relatable as I wanted.
The fix: "yeah keep that same framework ppl liked it" (after correcting to 5yo/boss/girlfriend)
The lesson: Early outputs establish patterns. Review early and correct fast before the pattern calcifies.
The Synthesis
Here's what this all comes together into:
The Collaboration Stack:
- Context (load it explicitly)
- Direction (what are we building, why)
- Iteration (refine, don't one-shot)
- Taste (distinguish good from plausible)
- Session management (maintain context across time)
- Meta-skills (learn to learn with AI)
Master these layers, and you have a production system that dramatically outperforms either human or AI alone.
The Future This Points To:
The people who thrive won't be "writers" or "coders" or "designers" in the traditional sense. They'll be directors—humans who know what to build, can recognize quality, and can orchestrate AI to execute at scale.
The execution barrier is falling. Direction, taste, and judgment become the scarce resources.
The question isn't whether AI will be involved in creative work. It already is. The question is who learns to collaborate effectively and who gets left behind.
The AI-Assisted Reality (Meta Edition, Part 2)
What AI helped with in writing this:
- Structuring the prompting patterns section
- Generating the session management frameworks
- Articulating the meta-skills analysis
- Producing the real examples with context
What I did:
- Decided this needed to be two parts
- Directed the depth and focus
- Provided all the actual examples from our sessions
- Caught when explanations were too abstract
- Maintained the honest assessment of limitations
- Made it about learning, not just instruction
The recursive proof:
This article about advanced AI collaboration was produced using advanced AI collaboration. The patterns it describes are the patterns that created it.
If you can read this and think "I could do that"—you're right. The method is documented. The tools are available. What's required is practice.
Where to Go From Here
If you haven't started: Go back to Part 1. Understand the basic framework. Try one project.
If you've started: Pick one pattern from Part 2 and apply it deliberately. Context loading? Iterative narrowing? Session management? Choose one, practice it, add another.
If you're experienced: Share what you've learned. Write about your methods. The craft is developing in real-time, and everyone's experiments add knowledge.
For everyone: The skill compounds. Every session makes you better. Start wherever you are and keep going.
The collaboration is available. The question is whether you'll learn to use it.
Resources
On AI collaboration:
- Ethan Mollick's work on AI augmentation
- Simon Willison's blog on practical LLM usage
- Anthropic's prompt engineering documentation
On taste and judgment:
- "Taste" by Kyle Chayka (what aesthetic judgment means now)
- Paul Graham's essays on taste and quality
On the future of work:
- "Co-Intelligence" by Ethan Mollick
- Economic research on AI and labor complementarity
This series:
- Part 1: the core collaboration framework
- Part 2: advanced patterns (this article)
Final thought:
The best thing about AI collaboration is that it lets you move at the speed of your ideas instead of the speed of your execution.
For years, I had more ideas than I could build. Projects I wanted to create but couldn't resource. Content I wanted to produce but couldn't find time for.
Now? The bottleneck isn't execution. It's direction.
And direction is where human creativity actually lives.
The AI handles the how. You provide the why.
Together, you ship things neither could ship alone.
That's the collaboration. That's the skill. That's the future.
Now go build something.