Claude Project: Build a Firm-Wide Proposal Content Library AI
What This Builds
You'll build a Claude Project loaded with your firm's approved proposal content — past performance summaries, capability descriptions, management approach templates, key personnel bios, certifications and differentiators — so any writer on the team can generate customized first drafts from approved content in minutes, not hours. It's like giving every proposal writer immediate access to the entire content library, with AI that understands which content is relevant to which opportunity.
Instead of: "Search SharePoint for 'cloud past performance,' find 5 documents, read all of them, pick the most relevant, paste and rewrite for the current RFP."
With the project: "Give me our best past performance citations for a DoD cloud modernization contract, framed for an Army customer who cares about FedRAMP and zero trust." — answered in 60 seconds from approved content.
Prerequisites
- Claude Pro subscription ($20/month at claude.ai)
- Access to your firm's proposal content library (SharePoint, network drive, or email archives)
- 2–3 hours to organize and load initial content
- Approval from your proposal manager to use this approach (discuss the data security considerations below)
The Concept
A Claude Project is a persistent workspace where uploaded documents and written context become available to every conversation you start inside it. Think of it as training a new team member on your firm's entire proposal content library — except the training takes 2 hours instead of 2 months, and the "team member" is available at 2am on the night before submission.
Important data security consideration: Only load content that is approved for external use or that your firm has determined is safe to upload to cloud AI services. Do NOT load:
- Classified or CUI (Controlled Unclassified Information) content
- Contract documents containing sensitive program information
- Pricing data or cost estimates
- Personnel information beyond name, title, and professional bio
Load content that is: cleared for public release (capability statements, website content), internal boilerplate, and redacted past performance summaries.
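Before anything goes into the project, a quick automated screen can catch the most obvious red flags. This is a minimal sketch: the patterns below are illustrative assumptions, not a complete list, and a match means "have a human review this," not "auto-reject" (a sanitized value range like "$5M–$10M" may be perfectly fine to keep).

```python
import re

# Illustrative patterns only -- tune to your firm's actual data. This is a
# first-pass screen, not a substitute for human review or a CUI determination.
FLAG_PATTERNS = {
    "possible pricing data": re.compile(r"\$\s?\d[\d,]*(\.\d+)?\s?(K|M|B)?", re.IGNORECASE),
    "CUI/classification marking": re.compile(r"\b(CUI|SECRET|NOFORN|FOUO)\b"),
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_text(text: str) -> list[str]:
    """Return the reasons this text should be reviewed before upload."""
    return [label for label, pattern in FLAG_PATTERNS.items() if pattern.search(text)]
```

Run it over each candidate document and route anything flagged to your proposal manager before upload.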
Build It Step by Step
Part 1: Create the Claude Project
- Log in to claude.ai with your Pro account
- Click "Projects" in the left sidebar
- Click "New Project"
- Name it: "Proposal Content Library — [Firm Name]"
- You'll see the project page with "Project Instructions" at the top and a "Knowledge" section below
Part 2: Write the Project Instructions
Click "Project Instructions" and paste the following (customize with your firm's details):
You are a proposal content assistant for [Firm Name], a government contractor specializing in [primary capabilities, e.g., "IT modernization, cybersecurity, and professional services for federal agencies"].
FIRM PROFILE:
- Certifications: [e.g., 8(a) certified, ISO 9001:2015, CMMI Level 3, FedRAMP Authorized]
- Primary agencies: [DoD, VA, DHS, etc.]
- Contract vehicles: [OASIS+, CIO-SP3, GSA MAS, etc.]
- Typical contract types: [FFP, T&M, IDIQ task orders]
- Proposal methodology: Shipley Associates framework with Pink/Red/Gold color reviews
YOUR JOB:
When I ask for proposal content, search the uploaded knowledge documents first. Generate drafts based on approved content from our library. Flag when you're generating content not found in the knowledge base — that content needs verification before use.
VOICE RULES:
- Active voice throughout
- Benefit-focused (frame capabilities as government benefits)
- Present tense for approach; past tense for past performance
- No contractions
- Spell out acronyms on first use
- Lead sections with win themes tied to evaluation criteria
ACCURACY RULE (Critical):
NEVER invent specific facts: contract numbers, dollar amounts, specific dates, named personnel, award fees, or CPARS ratings. If the uploaded content doesn't contain specific details, say "I don't have this information in the content library — please verify with the project manager."
Click Save.
Part 3: Organize and Upload Your Content Library
Create a content inventory before uploading — decide which categories to load:
Category 1: Past Performance Summaries
Prepare a document with 8–12 of your firm's best past performance citations. Format:
- Contract name/number (or generic "Program A" if sensitive)
- Agency (can be generic "DoD customer" if needed)
- Contract value range ($xM)
- Period of performance
- Scope (3–4 sentences)
- Performance highlights (CPARS ratings if public; outcomes; metrics)
Category 2: Core Capability Descriptions
One paragraph per capability area: "Cloud Modernization," "Cybersecurity," "Application Development," "IT Operations and Maintenance," "Program Management," etc. Use your most recent approved boilerplate.
Category 3: Company Overview Variants
2–5 versions of your company overview paragraph at different lengths (50 words, 150 words, 300 words) for different proposal contexts.
Category 4: Key Personnel Bios
Sanitized 2-page bios for your most commonly proposed key personnel. Include: name, title, years of experience, education, certifications, relevant experience summary.
Category 5: Management Approach Templates
2–3 management approach section templates for different contract types (IT services, professional services, O&M). These are starting points, not finished sections.
Upload each as a text document or PDF: In the Knowledge section, click "Add Content" → type or paste content as a text note, or upload a document file.
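If your past performance citations live in a spreadsheet or scattered notes, a small script can stamp them into the citation format above and join them into one upload-ready document. A sketch, with hypothetical field names you would adjust to your own inventory:

```python
# Hypothetical field names -- rename to match your own content inventory.
PP_TEMPLATE = """\
Contract: {name}
Agency: {agency}
Value range: {value_range}
Period of performance: {pop}
Scope: {scope}
Highlights: {highlights}
"""

def format_citation(entry: dict) -> str:
    """Render one past performance entry in the library's standard format."""
    return PP_TEMPLATE.format(**entry)

def build_library_doc(entries: list[dict]) -> str:
    """Concatenate citations into one upload-ready text document."""
    return "\n---\n".join(format_citation(e) for e in entries)
```

One consolidated document per category also keeps the project's Knowledge section easy to audit later.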
Part 4: Test with Realistic Queries
After loading content, start a new conversation inside the project and test:
Test 1 — Past performance retrieval: "Give me our 3 most relevant past performance citations for a DoD cloud modernization contract with the Air Force. Evaluation criteria emphasize FedRAMP experience and large-scale migration."
Expected response: The project pulls from your uploaded past performance summaries and selects the 3 most relevant based on the context you provided.
Test 2 — Custom section draft: "Using our approved company capability descriptions, draft a 400-word Technical Approach introduction for an Army cybersecurity contract. Evaluation factors: technical depth, understanding of requirements, innovative approach. Our differentiators for this bid: active CMMC Level 2 assessment, dedicated SOC with 24/7 coverage, 5-year incumbency at the same installation."
Expected response: A draft that integrates approved capability language with the specific differentiators you provided — not generic content invented from scratch.
Test 3 — Accuracy check: "Do we have any past performance on HHS contracts in our library?"
Expected response: "Based on the uploaded content, I don't see any HHS past performance citations. Our library includes DoD, VA, and DHS examples." (It should NOT invent HHS experience.)
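Since Projects are a claude.ai interface feature, tests like Test 3 are easiest to automate offline: paste a draft into a small checker that compares the agencies it names against the agencies actually in your library. A sketch, with hypothetical agency lists you would replace with your own:

```python
import re

# Hypothetical lists -- replace with the agencies actually in your uploaded
# library and the agencies you commonly see named in federal proposals.
LIBRARY_AGENCIES = {"DoD", "VA", "DHS"}
AGENCY_WATCHLIST = {"DoD", "VA", "DHS", "HHS", "DOE", "USDA", "GSA"}

def unverified_agency_claims(draft: str) -> set[str]:
    """Return agencies the draft names that are NOT in the uploaded library."""
    mentioned = {a for a in AGENCY_WATCHLIST if re.search(rf"\b{a}\b", draft)}
    return mentioned - LIBRARY_AGENCIES
```

Any agency this returns is a claim the project could not have drawn from your library, so it needs verification before the draft goes further.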
Real Example: A Day With the Content Library Project
7am — New proposal kickoff: "We're bidding on a Navy IT O&M IDIQ. What past performance do we have that's most relevant? How should we frame our management approach for a Navy customer?"
Response: Pulls the 3 most relevant past performance citations from the library, notes our Navy-specific experience, and drafts a management approach framing tailored to Navy culture.
10am — SME section editing: "A mechanical engineer wrote this technical section. Rewrite it in proposal voice using our approved capability descriptions where relevant. [paste section]"
Response: Edits the section AND integrates approved boilerplate language where applicable — ensuring consistency with the library.
2pm — Executive summary: "Draft an executive summary using our standard company overview and our most relevant differentiators for this specific bid. Differentiators: [list 4]. Section M factors: [paste]."
Response: A draft that leads with win themes, integrates the firm's approved boilerplate language, and ties differentiators to evaluation criteria.
Time saved: What would have taken 4 hours of SharePoint searching, reading, and writing took 45 minutes of AI-assisted queries.
What to Do When It Breaks
- Project generates content not in the library → Ask "Is this from the uploaded content library or generated from general knowledge?" Claude should acknowledge when it's going beyond the documents. Add a stronger instruction in Project Instructions: "You MUST flag when generating content not found in the knowledge documents."
- Past performance details are wrong → The project may be interpolating from partial information. Ask "What specific documents are you drawing from for this past performance?" If it can't cite the source, treat the content as unverified.
- Project forgets instructions between conversations → Project instructions apply to all new conversations in the project, but very long conversations may drift. Start a fresh conversation in the project when starting a new proposal.
- Content is too generic despite loaded documents → Your queries may be too broad. Add specificity: "From our uploaded past performance library specifically, give me..." to force it to reference the documents.
Variations
- Simpler version: Don't use Projects — instead maintain a single well-organized text document with your content library and paste the relevant sections into Claude as context for each query. Less automated but no Pro subscription required.
- Extended version: Load your firm's style guide, color review checklists, and past debrief lessons into the project — creating a comprehensive proposal process assistant that not only drafts content but also self-checks against your established process.
- Team version: All proposal writers at your firm can share the project (Project sharing is available on Claude Team and Enterprise plans, not individual Pro accounts) — one content library, used by everyone.
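The simpler version, maintaining library files locally and pasting relevant sections as context, can itself be lightly scripted. A sketch that assumes one plain-text file per category in a library folder (file naming is an assumption, not a requirement):

```python
from pathlib import Path

def build_paste_context(library_dir: str, categories: list[str]) -> str:
    """Concatenate selected library files into one paste-ready context block.

    Assumes one UTF-8 text file per category, named <category>.txt.
    Categories with no matching file are silently skipped.
    """
    parts = []
    for name in categories:
        path = Path(library_dir) / f"{name}.txt"
        if path.exists():
            parts.append(f"=== {name} ===\n{path.read_text()}")
    return "\n\n".join(parts)
```

Paste the returned block ahead of your query so Claude draws on the same approved content a Project would.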
What to Do Next
- This week: Build and test the project with your 10 most important content documents
- This month: Expand to all major capability areas and all recent past performance; track the time savings per proposal
- Advanced: Add your color review checklist, proposal style guide, and lessons learned library — creating a full proposal process AI, not just a content library
This is an advanced guide for proposal professionals. These techniques use more sophisticated AI features that may require paid subscriptions.