Bayesian Auto-Tuner Guide
Step-by-step guide to the EXAWin Auto-Tuner — running analysis, result screens (Summary, Signal Lift, Impact, T/k, Dampening, Silence, Prior, AUC, Cross-validation), MCMC posterior distributions, Particle Storm visualization, simulation structure panel, and parameter application.
Auto-Tuner User Guide
Auto-Tuner is a feature that learns and recommends optimal parameter values for the Bayesian engine using outcome data from past Won/Lost projects. Administrators review recommendations backed by data-driven evidence and decide whether to apply them.
Location: Sidebar → Bayesian → Auto-Tune
⚠️ Auto-Tuner requires admin or super_user privileges.
1. Running Analysis
Step 1: Navigate to Auto-Tune Screen
Click Bayesian → Auto-Tune in the sidebar. An introductory screen is displayed on first visit; if a previous analysis exists, it is auto-restored from sessionStorage.
Step 2: Start Analysis
Click the Start Analysis button at the top or center of the screen.
After starting, a progress bar displays real-time progress across 11 stages:
| Component | Duration | Notes |
|---|---|---|
| Ruby Grid Search (Impact, T, k, Dampening, Silence) | < 1s | Instant |
| Cross Validation (5-fold) | < 1s | Overfitting check |
| Emcee MCMC Sampling | 15–30s | Phase 3+ only |
💡 You can cancel during analysis with the Cancel button. Cancellation does not affect data.
⚠️ When MCMC is included, total analysis takes 15–40 seconds. Elapsed time is shown in the progress bar.
2. Results Screen Layout
After completion, results are organized in the following sections from top to bottom:
① Summary Cards (4 columns)
Four summary cards are displayed at the top.
| Card | Content |
|---|---|
| Completed Data | Number of analyzed projects (Won / Lost breakdown) + Phase badge |
| Current Separation | Separation calculated with current parameters (Grade A–D) |
| Projected Separation | Simulated separation after applying all recommendations (Grade shown) |
| Won vs Lost Average | Won project average P(Win) vs Lost project average P(Win) |
Grade criteria:
| Grade | Separation | Meaning |
|---|---|---|
| A | ≥ 0.40 | Excellent — Parameters reflect reality well |
| B | ≥ 0.25, < 0.40 | Good — Adequate but room for improvement |
| C | ≥ 0.10, < 0.25 | Needs Improvement — Adjustment recommended |
| D | < 0.10 | Urgent — Immediate re-calibration needed |
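As a hedged sketch (the exact separation formula is inferred from the Won vs Lost Average card, not documented here): separation is the gap between the average P(Win) of Won projects and Lost projects, graded on the thresholds above. Function names are illustrative, not the EXAWin API.

```python
# Illustrative sketch, not the EXAWin implementation.

def separation(won_pwins, lost_pwins):
    """Mean P(Win) of Won projects minus mean P(Win) of Lost projects."""
    return sum(won_pwins) / len(won_pwins) - sum(lost_pwins) / len(lost_pwins)

def grade(sep):
    """Grade thresholds copied from the table above."""
    if sep >= 0.40:
        return "A"
    if sep >= 0.25:
        return "B"
    if sep >= 0.10:
        return "C"
    return "D"

won = [0.72, 0.81, 0.65]    # example P(Win) values of Won projects
lost = [0.35, 0.28, 0.41]   # example P(Win) values of Lost projects
print(grade(separation(won, lost)))  # separation 0.38 -> "B"
```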
② Simulation Structure Panel
Below the summary, a collapsible 🔬 Auto-Tuner Simulation Structure panel is available. Click to reveal the complete simulation structure and counts.
Summary Cards (4 columns):

| Card | Content |
|---|---|
| PROJECTS | Number of analyzed projects (Won / Lost) |
| RUBY ENGINE | Ruby Grid Search simulation count |
| MCMC ENGINE | Estimated simulate_project() calls by Emcee |
| GRAND TOTAL | Ruby + MCMC combined |
Computation Distribution Bar: Visually displays the ratio between Ruby and MCMC computations. Typically MCMC accounts for 99%+.
Analysis Pipeline (3 columns):

- GRID SEARCH: Tries 10 points within a ± range around the current values to maximize separation
- CROSS VALIDATION: 5-fold cross-validation for overfitting detection
- MCMC Emcee: 32 walkers × (500 warmup + 1,500 draws) ensemble sampling
| Step | Analysis | Calls |
|---|---|---|
| 1 | current_separation | P |
| 2 | signal_lift_analysis | 0 (DB aggregate) |
| 3 | impact_grid_search | I × G × P |
| 4 | optimal_thresholds | 0 (DB based) |
| 5 | k_recommendations | 0 (statistical) |
| 6 | dampening_search | D × P |
| 7 | silence_penalty_search | S × P |
| 8 | projected_separation | P |
| 9 | calculate_auc | 0 (P(Win) based) |
| 10 | cross_validate | F × P |
| 11 | mcmc_ensemble_sampling | W × Steps × P |
Where P=projects, I=Impact types, G=Grid points (10), D=Dampening trials, S=Silence trials, F=5 (folds), W=32 (walkers), Steps=2,000 (warmup+draws)
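Plugging example values into the call-count formulas above makes the distribution concrete. P=40 projects, I=6 impact types, and D=S=10 trials are assumptions for illustration (the guide does not list the actual trial counts); G, F, W, and Steps come from the table.

```python
# Example tally of simulate_project() calls from the pipeline table.
# P, I, D, S are assumed example values.

def call_counts(P, I, G, D, S, F, W, steps):
    ruby = P * (1 + I * G + D + S + 1 + F)   # steps 1, 3, 6, 7, 8, 10
    mcmc = W * steps * P                      # step 11
    return ruby, mcmc, ruby + mcmc

ruby, mcmc, total = call_counts(P=40, I=6, G=10, D=10, S=10, F=5, W=32, steps=2000)
print(ruby, mcmc, total)                  # 3480 2560000 2563480
print(f"MCMC share: {mcmc / total:.1%}")  # ~99.9%, matching the distribution bar
```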
③ Signal Lift Analysis
Analyzes the discriminative power (Lift) of each signal.
| Column | Description |
|---|---|
| SIGNAL | Signal name |
| WON% | Occurrence rate in Won projects |
| LOST% | Occurrence rate in Lost projects |
| LIFT | Won rate / Lost rate |
| GRADE | Discriminative power grade + emoji |
- Lift > 1: Appears more often in Won projects → Positive indicator
- Lift < 1: Appears more often in Lost projects → Negative indicator
- ⚠ MISMATCH: Red warning when current classification (Positive/Negative) doesn't match actual discriminative power
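The Lift computation can be sketched as follows; the sample signal sets below are made up for illustration.

```python
# Lift = occurrence rate in Won projects / occurrence rate in Lost projects.

def lift(signal, won_projects, lost_projects):
    won_rate = sum(signal in p for p in won_projects) / len(won_projects)
    lost_rate = sum(signal in p for p in lost_projects) / len(lost_projects)
    return float("inf") if lost_rate == 0 else won_rate / lost_rate

won = [{"demo", "exec_meeting"}, {"demo"}, {"demo", "poc"}, {"poc"}]
lost = [{"demo"}, {"silence"}, {"silence", "demo"}, {"silence"}]

print(lift("demo", won, lost))     # 0.75 / 0.50 = 1.5 -> positive indicator
print(lift("silence", won, lost))  # 0.00 / 0.75 = 0.0 -> negative indicator
```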
④ Prior α, β Recommendation
Recommends optimal initial Prior based on historical data.
| Item | Description |
|---|---|
| Method | Estimation method (Method of Moments or MLE) |
| α (Success Weight) | Current → Recommended (95% CI shown) |
| β (Failure Weight) | Current → Recommended (95% CI shown) |
| Evidence Maturity | 🌱 Early / 🌿 Growing / 🌳 Mature (average α+β+n per project) |
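Method of Moments fits a Beta(α, β) prior by matching its mean and variance to the sample's. A minimal sketch, assuming the prior is fit to per-project outcome rates (that input choice is our assumption, not documented here):

```python
# Method-of-Moments Beta fit (illustrative input data).

def beta_mom(samples):
    n = len(samples)
    m = sum(samples) / n
    v = sum((x - m) ** 2 for x in samples) / (n - 1)
    common = m * (1 - m) / v - 1          # requires v < m * (1 - m)
    return m * common, (1 - m) * common   # (α, β)

rates = [0.55, 0.62, 0.48, 0.70, 0.51, 0.66, 0.58, 0.44]
alpha, beta = beta_mom(rates)
print(f"α={alpha:.2f}, β={beta:.2f}")  # fitted prior mean α/(α+β) = sample mean
```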
⑤ Impact Optimization
Grid Search recommendations for each Impact Type. Only Impact types with adjustment recommendations are shown as cards.
Each card contains:
- Impact Type name
- Current value → Recommended value
- Separation improvement (+%p)
- Checkbox — Select parameters to apply (Select All available)
💡 Search range varies by Phase: Phase 3 ±30%, Phase 4 ±40%, Phase 5 ±50%
⑥ Threshold (T) & Velocity (k)
Two tables displayed side by side.
Threshold (T):

| Column | Description |
|---|---|
| STAGE | Sales stage name |
| CURRENT | Current threshold |
| OPTIMAL | Youden J optimal threshold |
| J | Youden J statistic (< 0.20 = not recommended) |
Velocity (k):

| Column | Description |
|---|---|
| STAGE | Sales stage name |
| CURRENT | Current k value |
| OPTIMAL | Grid Search optimal k (max: 12) |
| AVG α+β | Average evidence for the stage |
⑦ Impedance Impact
Simulates how T/k recommendations affect the impedance function.
| Column | Description |
|---|---|
| STAGE | Sales stage |
| P(WIN) | Average P(Win) for the stage |
| Current | Impedance (%) with current T/k |
| Recommended | Impedance (%) with recommended T/k |
| Change | ↑ / ↓ + %p difference |
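The impedance function itself is not defined in this guide. Purely as an illustration, assume a logistic curve in P(Win) with threshold T and steepness (velocity) k; this shows why changing T/k shifts the Current/Recommended percentages, even if the real EXAWin formula differs.

```python
import math

def impedance(p_win, T, k):
    """Hypothetical logistic impedance: high when P(Win) is below T."""
    return 1 / (1 + math.exp(k * (p_win - T)))

p = 0.45                                   # stage-average P(Win)
current = impedance(p, T=0.50, k=6)        # current T/k
recommended = impedance(p, T=0.55, k=8)    # recommended T/k
print(f"{current:.1%} -> {recommended:.1%}")  # impedance rises under the new T/k
```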
⑧ Dampening & Silence Penalty
Two cards displayed side by side.
| Parameter | Description | Default |
|---|---|---|
| Dampening | Multi-signal attenuation rate. 0=strongest only, 1=all equal | 0.25 |
| Silence Penalty | Penalty ratio for activity gaps | 0.30 |
⚠️ Below Phase 4, Dampening/Silence checkboxes are disabled with a "🟢 Phase 4+ required" notice.
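One plausible reading of Dampening (an assumption on our part, not the documented EXAWin formula): rank signals by strength and scale the i-th strongest by dampening**i, so 0 keeps only the strongest signal and 1 weighs every signal equally, matching the 0/1 endpoints described above.

```python
# Assumed dampening model for illustration only.

def combined_evidence(signal_weights, dampening):
    ranked = sorted(signal_weights, reverse=True)
    return sum(w * dampening ** i for i, w in enumerate(ranked))

signals = [3.0, 2.0, 1.0]
print(combined_evidence(signals, 0.0))   # 3.0 -> strongest only
print(combined_evidence(signals, 1.0))   # 6.0 -> all signals equal
print(combined_evidence(signals, 0.25))  # 3.5625 -> default dampening
```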
3. MCMC Posterior Distribution Analysis
When Phase 3+, MCMC posterior distribution estimates appear below Grid Search results.
MCMC Header
| Item | Description |
|---|---|
| Samples × Walkers | Samples × walkers (e.g., 1,500 samples × 32 walkers) |
| R̂ | Overall max R-hat + convergence status (✅ or ⚠️) |
| Runtime | MCMC execution time (seconds) |
| Projects | Number of projects in analysis |
Particle Storm Visualization
Click ▶ Play Particle Storm to see real-time animations of each parameter's posterior distribution:
- Density Histogram: MCMC samples accumulate one by one revealing the distribution shape
- Green solid line: MCMC estimated Mean
- Red dashed line: Current setting value
- Purple box: 95% HDI interval
MCMC Data Table
| Column | Description |
|---|---|
| ☑ | Checkbox — Select parameters to apply (Select All available) |
| Parameter | Parameter name (⚙️ = Dampening, 🔇 = Silence) |
| Current | Current value (red) |
| MCMC | Posterior mean (green) |
| ±SD | Standard deviation — estimation uncertainty |
| HDI 95% | 95% Highest Density Interval (purple) |
| R̂ | Convergence diagnostic (green < 1.05, yellow < 1.10, red ≥ 1.10) |
| Δ | Difference between current and MCMC estimate (↑ / ↓ / ≈) |
⚠️ R̂ > 1.05 warns that convergence is incomplete. R̂ > 1.10 means the chain has not converged; do not apply such parameters.
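R̂ compares between-walker and within-walker variance; it approaches 1 when all walkers sample the same distribution. A minimal Gelman-Rubin sketch (illustrative only; production tools such as ArviZ compute a refined split-R̂):

```python
import random

def r_hat(chains):
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)  # between-chain variance
    w = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m              # within-chain variance
    return (((n - 1) / n * w + b / n) / w) ** 0.5

random.seed(1)
mixed = [[random.gauss(0, 1) for _ in range(500)] for _ in range(4)]
stuck = [[random.gauss(mu, 1) for _ in range(500)] for mu in (0, 0, 3, 3)]
print(round(r_hat(mixed), 3))  # near 1.0 -> converged
print(round(r_hat(stuck), 3))  # well above 1.10 -> do not apply
```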
💡 Rows without checkboxes: When the difference between the MCMC mean and the current value is ≤ 0.02 (the ε threshold), the change is deemed negligible and no checkbox is shown; the Δ column shows ≈.
HDI Interpretation Guide
HDI [3.5, 6.2], Current = 5.0
→ Current value within HDI — reasonable. No change needed.
HDI [2.0, 3.5], Current = 5.0
→ Current value outside HDI. Likely overestimated. Adjustment recommended.
HDI [0.8, 8.0] (very wide)
→ Insufficient data. Estimation uncertain. Reference only.
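The 95% HDI shown in the purple box is the shortest interval containing 95% of the posterior samples. A minimal sketch:

```python
import random

def hdi(samples, mass=0.95):
    s = sorted(samples)
    k = int(len(s) * mass)   # number of samples the interval must cover
    i = min(range(len(s) - k + 1), key=lambda i: s[i + k - 1] - s[i])
    return s[i], s[i + k - 1]

random.seed(7)
posterior = [random.gauss(5.0, 0.7) for _ in range(4000)]
lo, hi = hdi(posterior)
print(f"HDI [{lo:.2f}, {hi:.2f}]")  # roughly [3.6, 6.4] for this posterior
```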
4. Applying Recommendations
Apply Bar
A fixed bar appears at the bottom of the results screen:
- Left: Number of selected parameters
- Right: Apply Selected button
Phase-specific Application Limits
| Phase | Applicable Scope |
|---|---|
| 1–2 | Not applicable — Apply button fully disabled |
| 3 | Impact, T, k + MCMC (Dampening/Silence locked) |
| 4+ | All parameters applicable |
Application Targets
| Source | Target Parameters | Application Target |
|---|---|---|
| Grid Search | Impact, T, k, Dampening, Silence | Direct DB update |
| MCMC | Impact(mcmc_impact), Dampening(mcmc_dampening), Silence(mcmc_silence_ratio) | Direct DB update |
⚠️ If you check both Grid Search and MCMC for the same parameter, the last applied value will be reflected. Select only one per parameter.
Pre-Application Checklist
- Grade Check: Review the change between current and projected grades
- Overfitting Risk: Be cautious if cross-validation shows overfitting warnings
- MCMC R̂: Use MCMC recommendations for R̂ > 1.05 parameters as reference only
- HDI Range: Very wide HDI indicates insufficient data
- Phase Limits: Dampening/Silence checkboxes only activate at Phase 4+
After Application
- A confirmation dialog appears; upon approval, the DB is updated
- sessionStorage is cleared after application, requiring re-analysis
- Applied parameter count is shown as a toast message
5. Data Maturity (Phase) and Feature Restrictions
Auto-Tuner assigns a 5-level confidence grade based on the lesser count (min) of Won/Lost records.
| Phase | Condition | Badge | Available Features |
|---|---|---|---|
| 1 | min < 5 | ❌ | Analysis unavailable — Insufficient data |
| 2 | min 5–9 | 🟠 | Signal Lift directional reference only |
| 3 | min 10–19 | 🟡 | Impact, T, k + MCMC |
| 4 | min 20–49 | 🟢 | + Dampening, Silence |
| 5 | min ≥ 50 | 🔵 | Full features + Stable MCMC |
💡 Best way to advance Phase: Close more projects as Won or Lost. Active (in-progress) projects are not included in analysis.
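The Phase rule above in code form (thresholds copied from the table; the function name is illustrative):

```python
def phase(won_count, lost_count):
    """Phase is graded on the lesser of the Won and Lost record counts."""
    m = min(won_count, lost_count)
    if m < 5:
        return 1
    if m < 10:
        return 2
    if m < 20:
        return 3
    if m < 50:
        return 4
    return 5

print(phase(30, 8))   # only 8 Lost records -> Phase 2
print(phase(60, 25))  # min is 25 -> Phase 4
```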
6. FAQ
Q. Can I run analysis multiple times?
Yes. Auto-Tuner does not modify the DB at all. Analysis is an in-memory simulation, and existing data is unaffected until you click "Apply Selected".
Q. What happens if I cancel during analysis?
Click the red Cancel button next to the progress bar to immediately cancel the AJAX request. No data is affected.
Q. Analysis takes a long time
When MCMC (Phase 3+) is included, it takes 15–40 seconds. This is because the Python MCMC engine (Emcee) performs hundreds of thousands of simulations. Without MCMC (Phase 2), Ruby Grid Search alone completes within 1 second.
Q. What if Grid Search and MCMC recommendations differ?
| Situation | Interpretation | Recommendation |
|---|---|---|
| Both results agree | High confidence | Apply recommended ✅ |
| MCMC similar to Grid Search but wide HDI | Direction correct but uncertain | Apply cautiously |
| Results significantly different | Grid Search may be at local optimum | Prioritize MCMC HDI range |
Q. R̂ warning is showing
R̂ > 1.05 means the MCMC chain has not yet converged. MCMC results for this parameter are unreliable. Use Grid Search recommendation first and re-run MCMC after more data accumulates.
Q. MCMC section is not visible
MCMC is skipped when:
- Below Phase 3 (min < 10 records)
- Python Emcee environment not installed on the server
In both cases, Grid Search results display normally and Auto-Tuner is usable without MCMC.
Q. What is Particle Storm?
A visualization feature showing MCMC sample data as real-time animation. It provides an intuitive understanding of posterior distribution shapes. Click Play to sequentially animate all parameter distributions.