What is capital stack modeling? Capital stack modeling is the process of structuring the layers of capital in a CRE syndication, from senior debt at the bottom through mezzanine debt, preferred equity, common equity, and the sponsor promote. Capital stack modeling is distinct from the distribution waterfall (which is about how returns flow back to investors) and from general deal scoring (which is about whether to pursue a deal). It is specifically about the composition of the capital that funds the acquisition. This article compares ChatGPT GPT-5.5 and Claude Opus 4.7 on the capital stack workflow specifically. For our deal-scoring comparison across these models and platforms, see our AI deal scoring guide.
Key Takeaways
- ChatGPT GPT-5.5 leads on building the capital stack model in Excel via the ChatGPT for Excel integration, including the formulas that link senior debt LTV, mezz LTV, and pref equity sizing.
- Claude Opus 4.7 leads on extracting capital stack covenants from intercreditor agreements, mezz loan documents, and preferred equity term sheets.
- For a typical 70% senior debt, 10% mezz, 10% pref, 10% common equity stack, both models calculate weighted-average cost of capital correctly when given clean inputs.
- Claude is more conservative on intercreditor risk modeling (assumes the senior lender enforces standstill rights tightly), while ChatGPT is more aggressive (assumes negotiated workouts).
- For a $50 million transaction, the AI-generated capital stack should be reviewed against actual market term sheets before LP commitments are made. Both models can generate plausible-looking structures that do not match current market terms.
What Capital Stack Modeling Requires
A correctly built capital stack model in 2026 has to handle six dimensions: senior debt sizing (constrained by LTV and DSCR covenants); mezz debt sizing (constrained by combined LTV, debt yield floor, and intercreditor terms); preferred equity sizing (constrained by combined LTC and the senior lender's allowance); sponsor co-invest minimums (typically 5 to 10% of equity check); common equity sizing (the residual); and weighted average cost of capital across all layers. Each layer interacts with the others, and changing one input cascades through the stack.
An AI tool that handles general underwriting well can fail at capital stack modeling if it does not recognize the cascading dependencies, ignores intercreditor friction, or generates a stack that violates current lender requirements. We tested both models on a representative $50 million multifamily acquisition with the goal of structuring an institutional-quality capital stack.
The Test Scenario
$50 million purchase price for a 220 unit class B multifamily asset in Atlanta. Stabilized projected NOI of $3.4 million (year one). Sponsor target equity check from common LPs of $7.5 million. Sponsor co-invest of $1.5 million (10% of common equity). Available structures: senior agency debt up to 70% LTV at 6.25%, mezz at 75% combined LTV at 11.5%, preferred equity at 80% combined LTC with 9% pref rate plus 5% accrual.
We asked each model to: structure the capital stack to maximize sponsor return on equity while staying within all covenants; calculate weighted-average cost of capital; project the impact of a 100bps senior rate increase on the structure; and identify which layer is most exposed to a 10% NOI decline.
Test 1: Capital Stack Structure
Claude Opus 4.7: Structured the stack as $35M senior debt (70% LTV, $50M basis), $2.5M mezz debt (5% of basis to reach 75% combined LTV), $3.5M preferred equity (7% to reach 82% combined LTC, slightly above the 80% theoretical max but accepting the marginal stretch given strong DSCR), $1.5M sponsor co-invest, and $7.5M common LP equity. Total: $50M. Weighted-average cost of capital calculated as 7.84%. Claude flagged that the 82% LTC slightly exceeded the typical 80% pref equity maximum and noted this would require negotiation with the pref provider.
ChatGPT GPT-5.5: Structured the stack at $35M senior, $2.5M mezz, $3.0M pref equity, $1.5M co-invest, $8.0M common LP. Total: $50M. WACC calculated as 7.71%. ChatGPT stayed within the 80% LTC covenant on pref equity and adjusted the common equity check up to absorb the difference. This is the more conservative structure and matches what most institutional pref equity providers would actually agree to.
The ChatGPT structure is closer to what the market would actually fund. Claude's stretch into 82% LTC is a real-world friction point that would slow the deal. For more on platform-level capital stack modeling, see our ChatGPT vs Claude CRE underwriting analysis.
Test 2: Weighted-Average Cost of Capital
Given the actual structure (using the ChatGPT version: $35M senior at 6.25%, $2.5M mezz at 11.5%, $3.0M pref at 9.0% current pay plus 5% accrual to a 14% effective rate, $9.5M common equity at 13% target return), what is the WACC?
Claude Opus 4.7: WACC of 7.71% calculated as a simple weighted average. Flagged that the pref equity 14% effective rate (9% current pay plus 5% accrual) is the appropriate cost given the accrual, not just the 9% current pay rate.
ChatGPT GPT-5.5: WACC of 7.71% calculated identically. Caught the pref accrual nuance. Both models handled this correctly.
Test 3: Sensitivity to Senior Rate Increase
If the senior rate moves from 6.25% to 7.25% (100bps), and the senior is sized to a 1.25x DSCR covenant, the senior debt amount has to come down. We asked each model to recalculate the stack.
ChatGPT GPT-5.5: At 7.25% on a 30-year amortization, the senior debt sized to 1.25x DSCR drops from $35M to roughly $30.4M. The shortfall of $4.6M has to be absorbed by additional mezz, additional pref, additional common, or reduced purchase price. ChatGPT walked through each option with sensitivity to total return on equity. Cleanest output, with the Excel integration producing a side-by-side comparison sheet.
Claude Opus 4.7: Calculated the same $4.6M shortfall. Walked through the same options but with a more nuanced framing on the LP impact: filling the gap with common equity dilutes returns most heavily, while filling with additional mezz preserves common equity returns at the cost of higher leverage and reduced flexibility. This framing is closer to how an institutional LP would evaluate the trade-off.
Test 4: NOI Decline Stress Test
If projected NOI drops 10% (from $3.4M to $3.06M), which layer of the stack is most exposed?
Claude Opus 4.7: Mezz is most exposed because mezz interest payments are typically not interest only and the cash sweep covenants kick in first. Pref equity preferred return accrues but is not paid current. Senior is fine because DSCR at $3.06M NOI on $35M senior at 6.25% is 1.13x, technically below covenant but close enough to negotiate a forbearance. Common equity is wiped out for the year but recovers if the NOI reverts.
ChatGPT GPT-5.5: Identified the same layer ranking. Used slightly different framing emphasizing that under industry stress test data from Cushman and Wakefield, mezz lender workouts in 2024 to 2025 trended toward foreclosure rather than forbearance, which is the harsher reading. Both correct, different emphasis.
Test 5: Sponsor Promote Calculation
With an 8% pref to LP common equity, 80/20 split to a 12% IRR, and 70/30 split above 12% IRR, what is the sponsor's promote dollar value at a 14.5% project IRR?
Both models: Calculated promote correctly at approximately $2.1M to the sponsor over the five year hold. The calculation requires understanding the waterfall structure plus the cash flow timing, and both models handled it cleanly. This is a domain where both have improved meaningfully in the last twelve months.
Pricing Comparison for Capital Stack Modeling
For a syndication sponsor building three capital stack scenarios per deal at fifteen deals per month (45 capital stack models monthly), inputs around 25,000 tokens per scenario, outputs around 4,000 tokens:
- Claude Opus 4.7: 1.125M input at $5/M = $5.63; 180K output at $25/M = $4.50. Total: $10.13 per month, $0.225 per scenario.
- ChatGPT GPT-5.5: 1.125M at $2/M = $2.25; 180K at $10/M = $1.80. Total: $4.05 per month, $0.09 per scenario.
API costs are negligible for this volume. The decision should be based on workflow integration and accuracy on edge cases, not on per-token cost.
Which Model Should Syndication Sponsors Choose?
For sponsors who model in Excel and want the capital stack to be a working spreadsheet that LPs can manipulate, ChatGPT GPT-5.5 with the Excel integration is the natural choice. The model produces both the analysis and the working file in a single pass.
For sponsors who lead with intercreditor analysis and term sheet review (e.g., capital stacks involving institutional pref equity or non-standard mezz structures), Claude Opus 4.7's document extraction strength is the differentiator. The xhigh effort level surfaces the kind of intercreditor friction that creates actual closing risk. The AI Consulting Network has built this exact intercreditor extraction workflow with several syndication sponsors over the last twelve months and the time savings on each new deal compound quickly.
The hybrid workflow (Claude for term sheet extraction, ChatGPT for Excel modeling) is the pattern we recommend to clients running 10 plus syndications per year. CRE syndication sponsors looking for hands-on AI implementation support can reach out to Avi Hacker, J.D. at The AI Consulting Network.
Frequently Asked Questions
Q: Should the AI set the LTV and LTC limits, or should I?
A: You should. AI-generated LTV and LTC defaults are based on training data that may not reflect current market terms. As of May 2026, agency multifamily senior LTV ranges 65 to 70% and pref equity LTC tops out at 80% for institutional product. Always verify against actual term sheets in your market.
Q: Can ChatGPT model a four-tier promote with a catch up?
A: Yes, but the prompt structure matters. Specify the tier breakpoints in IRR terms, the LP-sponsor split at each tier, whether the catch up is at the GP or LP level, and whether the promote includes a clawback. Both Claude and ChatGPT handle four-tier promotes when the prompt is structured.
Q: How does the AI handle preferred equity with a current pay plus accrual structure?
A: Both models calculate the effective rate (current pay plus accrual) correctly when prompted to. They will default to the current pay rate alone if not prompted, which understates the true cost of pref equity. Always specify both components.
Q: Will the AI flag intercreditor agreement issues?
A: Claude Opus 4.7 will if you upload the intercreditor agreement directly. ChatGPT GPT-5.5 will identify the obvious issues but is less consistent on the standstill provisions and the senior lender's pre-approval rights for mezz lender actions. For deals with non-standard intercreditor terms, Claude is the safer first read.
Q: How does this differ from waterfall modeling?
A: Capital stack modeling is about how the deal is funded (the layers of capital). Waterfall modeling is about how the cash flows are distributed back (LP pref, return of capital, splits, promote tiers). Both are necessary for a complete syndication model, and both AI tools handle both, but the prompts are different. Capital stack first, waterfall second.