
Bayesian Auto-Tuner Guide

Step-by-step guide to the EXAWin Auto-Tuner — running analysis, result screens (Summary, Signal Lift, Impact, T/k, Dampening, Silence, Prior, AUC, Cross-validation), MCMC posterior distributions, Particle Storm visualization, simulation structure panel, and parameter application.

Auto-Tuner User Guide

Auto-Tuner is a feature that learns and recommends optimal parameter values for the Bayesian engine using outcome data from past Won/Lost projects. Administrators review recommendations backed by data-driven evidence and decide whether to apply them.

Location: Sidebar → Bayesian → Auto-Tune

⚠️ Auto-Tuner requires admin or super_user privileges.


1. Running Analysis

Step 1: Navigate to Auto-Tune Screen

Click Bayesian → Auto-Tune in the sidebar. An introductory screen is displayed on first visit; if a previous analysis exists, it is auto-restored from sessionStorage.

Step 2: Start Analysis

Click the Start Analysis button at the top or center of the screen.

After starting, a progress bar displays real-time progress across 11 stages:

| Component | Duration | Notes |
|---|---|---|
| Ruby Grid Search (Impact, T, k, Dampening, Silence) | < 1s | Instant |
| Cross Validation (5-fold) | < 1s | Overfitting check |
| Emcee MCMC Sampling | 15–30s | Phase 3+ only |

💡 You can cancel during analysis with the Cancel button. Cancellation does not affect data.

⚠️ When MCMC is included, total analysis takes 15–40 seconds. Elapsed time is shown in the progress bar.


2. Results Screen Layout

After completion, results are organized in the following sections from top to bottom:

① Summary Cards (4 columns)

Four summary cards are displayed at the top.

| Card | Content |
|---|---|
| Completed Data | Number of analyzed projects (Won / Lost breakdown) + Phase badge |
| Current Separation | Separation calculated with current parameters (Grade A–D) |
| Projected Separation | Simulated separation after applying all recommendations (Grade shown) |
| Won vs Lost Average | Won project average P(Win) vs Lost project average P(Win) |

Grade criteria:

| Grade | Separation | Meaning |
|---|---|---|
| A | ≥ 0.40 | Excellent — Parameters reflect reality well |
| B | 0.25 – 0.40 | Good — Adequate but room for improvement |
| C | 0.10 – 0.25 | Needs Improvement — Adjustment recommended |
| D | < 0.10 | Urgent — Immediate re-calibration needed |
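
Concretely, the separation score and its grade can be sketched in Python. Both `separation` (taken here as the gap between the average P(Win) of Won and Lost projects, matching the Won vs Lost Average card) and `separation_grade` are illustrative helpers, not the engine's actual code:

```python
def separation(won_pwin, lost_pwin):
    # One plausible definition: gap between the average P(Win) of Won
    # projects and of Lost projects (see the Won vs Lost Average card).
    return sum(won_pwin) / len(won_pwin) - sum(lost_pwin) / len(lost_pwin)

def separation_grade(sep):
    # Boundaries mirror the grade criteria table above.
    if sep >= 0.40:
        return "A"   # Excellent
    if sep >= 0.25:
        return "B"   # Good
    if sep >= 0.10:
        return "C"   # Needs Improvement
    return "D"       # Urgent

sep = separation([0.82, 0.74, 0.69], [0.35, 0.41])  # ~0.75 - 0.38 = 0.37
grade = separation_grade(sep)                        # "B"
```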

② Simulation Structure Panel

Below the summary, a collapsible 🔬 Auto-Tuner Simulation Structure panel is available. Click to reveal the complete simulation structure and counts.

Summary Cards (4 columns):
| Card | Content |
|---|---|
| PROJECTS | Number of analyzed projects (Won / Lost) |
| RUBY ENGINE | Ruby Grid Search simulation count |
| MCMC ENGINE | Estimated simulate_project() calls by Emcee |
| GRAND TOTAL | Ruby + MCMC combined |

Computation Distribution Bar: Visually displays the ratio between Ruby and MCMC computations. Typically MCMC accounts for 99%+.

Analysis Pipeline (3 columns):
  1. GRID SEARCH: Tries 10 points within ± range from current values to maximize separation
  2. CROSS VALIDATION: 5-fold cross-validation for overfitting detection
  3. MCMC Emcee: 32 walkers × (500 warmup + 1,500 draws) ensemble sampling
Step-by-Step Breakdown (11 steps):
| Step | Analysis | Calls |
|---|---|---|
| 1 | current_separation | P |
| 2 | signal_lift_analysis | 0 (DB aggregate) |
| 3 | impact_grid_search | I × G × P |
| 4 | optimal_thresholds | 0 (DB based) |
| 5 | k_recommendations | 0 (statistical) |
| 6 | dampening_search | D × P |
| 7 | silence_penalty_search | S × P |
| 8 | projected_separation | P |
| 9 | calculate_auc | 0 (P(Win) based) |
| 10 | cross_validate | F × P |
| 11 | mcmc_ensemble_sampling | W × Steps × P |

Where P=projects, I=Impact types, G=Grid points (10), D=Dampening trials, S=Silence trials, F=5 (folds), W=32 (walkers), Steps=2,000 (warmup+draws)
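
Plugging sample counts into these formulas shows why MCMC dominates the grand total. A sketch with assumed values (P, I, D, and S are hypothetical):

```python
def grid_search_calls(P, I, G=10, D=10, S=10, F=5):
    # simulate_project() calls for the Ruby steps, per the formulas above.
    # D and S (dampening/silence trial counts) are assumed values here.
    return (P            # 1. current_separation
            + I * G * P  # 3. impact_grid_search
            + D * P      # 6. dampening_search
            + S * P      # 7. silence_penalty_search
            + P          # 8. projected_separation
            + F * P)     # 10. cross_validate

def mcmc_calls(P, W=32, steps=2000):
    # 11. mcmc_ensemble_sampling: W x Steps x P
    return W * steps * P

P = 40                                  # hypothetical project count
ruby = grid_search_calls(P, I=6)        # 3,480
mcmc = mcmc_calls(P)                    # 2,560,000
share = mcmc / (ruby + mcmc)            # ~0.9986 -> "MCMC accounts for 99%+"
```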

③ Signal Lift Analysis

Analyzes the discriminative power (Lift) of each signal.

| Column | Description |
|---|---|
| SIGNAL | Signal name |
| WON% | Occurrence rate in Won projects |
| LOST% | Occurrence rate in Lost projects |
| LIFT | Won rate / Lost rate |
| GRADE | Discriminative power grade + emoji |
  • Lift > 1: Appears more often in Won projects → Positive indicator
  • Lift < 1: Appears more often in Lost projects → Negative indicator
  • ⚠ MISMATCH: Red warning when current classification (Positive/Negative) doesn't match actual discriminative power
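
The LIFT column reduces to a simple ratio. A sketch with hypothetical helper names:

```python
def signal_lift(won_hits, won_total, lost_hits, lost_total):
    # Lift = occurrence rate in Won / occurrence rate in Lost,
    # as in the table above.
    won_rate = won_hits / won_total
    lost_rate = lost_hits / lost_total
    return won_rate / lost_rate if lost_rate > 0 else float("inf")

# A signal present in 12 of 20 Won but only 3 of 15 Lost projects:
lift = signal_lift(12, 20, 3, 15)   # 0.60 / 0.20 ~= 3.0 -> positive indicator

# A signal classified "Positive" whose lift comes out below 1
# would be flagged with the MISMATCH warning described above:
weak = signal_lift(4, 20, 9, 15)    # ~0.33 -> actual behavior is negative
```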

④ Prior α, β Recommendation

Recommends optimal initial Prior based on historical data.

| Item | Description |
|---|---|
| Method | Estimation method (Method of Moments or MLE) |
| α (Success Weight) | Current → Recommended (95% CI shown) |
| β (Failure Weight) | Current → Recommended (95% CI shown) |
| Evidence Maturity | 🌱 Early / 🌿 Growing / 🌳 Mature (average α+β+n per project) |
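
When the Method of Moments is used, a Beta prior can be fitted from historical per-project win rates roughly as follows. This is the textbook MoM estimator; the engine's actual implementation (and its MLE alternative) may differ:

```python
def beta_prior_mom(win_rates):
    # Method-of-moments estimate of Beta(alpha, beta) from a sample of
    # observed win rates. Requires var < mean * (1 - mean).
    n = len(win_rates)
    mean = sum(win_rates) / n
    var = sum((r - mean) ** 2 for r in win_rates) / (n - 1)
    common = mean * (1 - mean) / var - 1
    return mean * common, (1 - mean) * common

alpha, beta = beta_prior_mom([0.2, 0.3, 0.4, 0.5, 0.6])  # ~(3.44, 5.16)
```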

⑤ Impact Optimization

Grid Search recommendations for each Impact Type. Only Impact types with adjust recommendations are shown as cards.

Each card contains:

  • Impact Type name
  • Current value → Recommended value
  • Separation improvement (+%p)
  • Checkbox — Select parameters to apply (Select All available)

💡 Search range varies by Phase: Phase 3 ±30%, Phase 4 ±40%, Phase 5 ±50%
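
Assuming an evenly spaced grid, the candidate values for one Impact type could be generated like this (the even spacing and the 10-point count are assumptions about the Ruby engine):

```python
def impact_grid(current, phase, points=10):
    # Candidate values for one Impact type: `points` evenly spaced
    # values within the phase-dependent radius noted above
    # (Phase 3: +/-30%, Phase 4: +/-40%, Phase 5: +/-50%).
    pct = {3: 0.30, 4: 0.40, 5: 0.50}[phase]
    lo, hi = current * (1 - pct), current * (1 + pct)
    step = (hi - lo) / (points - 1)
    return [lo + i * step for i in range(points)]

grid = impact_grid(1.0, phase=3)   # 10 values from 0.70 to 1.30
```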

⑥ Threshold (T) & Velocity (k)

Two tables displayed side by side.

Threshold (T):
| Column | Description |
|---|---|
| STAGE | Sales stage name |
| CURRENT | Current threshold |
| OPTIMAL | Youden J optimal threshold |
| J | Youden J statistic (< 0.20 = not recommended) |
Velocity (k):
| Column | Description |
|---|---|
| STAGE | Sales stage name |
| CURRENT | Current k value |
| OPTIMAL | Grid Search optimal k (max: 12) |
| AVG α+β | Average evidence for the stage |
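
The Youden J optimal threshold can be found by scanning candidate thresholds and maximizing J = sensitivity + specificity − 1. This is the standard construction; the candidate set and score inputs below are illustrative:

```python
def youden_j_threshold(won_scores, lost_scores, candidates):
    # Sensitivity: share of Won projects at or above the threshold.
    # Specificity: share of Lost projects below it.
    # Returns the candidate maximizing Youden's J = sens + spec - 1;
    # J < 0.20 is flagged "not recommended" in the table above.
    best_t, best_j = None, -1.0
    for t in candidates:
        sens = sum(s >= t for s in won_scores) / len(won_scores)
        spec = sum(s < t for s in lost_scores) / len(lost_scores)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

t, j = youden_j_threshold([0.7, 0.8, 0.9], [0.2, 0.3, 0.4], [0.3, 0.5, 0.8])
# t = 0.5 separates the two groups perfectly (J = 1.0)
```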

⑦ Impedance Impact

Simulates how T/k recommendations affect the impedance function.

| Column | Description |
|---|---|
| STAGE | Sales stage |
| P(WIN) | Average P(Win) for the stage |
| Current | Impedance (%) with current T/k |
| Recommended | Impedance (%) with recommended T/k |
| Change | ↑ / ↓ + %p difference |

⑧ Dampening & Silence Penalty

Two cards displayed side by side.

| Parameter | Description | Default |
|---|---|---|
| Dampening | Multi-signal attenuation rate. 0 = strongest only, 1 = all equal | 0.25 |
| Silence Penalty | Penalty ratio for activity gaps | 0.30 |
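
One plausible reading of these two parameters, consistent with the endpoint descriptions above but not necessarily the engine's exact formula:

```python
def combine_impacts(impacts, dampening=0.25):
    # Plausible attenuation scheme matching the endpoints described
    # above: the strongest signal counts fully, each weaker signal is
    # scaled by `dampening`. 0 -> strongest only, 1 -> all equal.
    # The engine's actual formula may differ.
    ordered = sorted(impacts, key=abs, reverse=True)
    return ordered[0] + sum(dampening * x for x in ordered[1:])

def apply_silence_penalty(p_win, ratio=0.30):
    # Hypothetical: shave off the penalty ratio from P(Win) during an
    # activity gap. When and how the penalty applies is engine-defined.
    return p_win * (1 - ratio)

combine_impacts([0.5, 0.2, 0.1], dampening=0.0)   # 0.5     (strongest only)
combine_impacts([0.5, 0.2, 0.1], dampening=1.0)   # ~0.8    (all equal)
combine_impacts([0.5, 0.2, 0.1], dampening=0.25)  # ~0.575  (default)
```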

⚠️ Below Phase 4, Dampening/Silence checkboxes are disabled with a "🟢 Phase 4+ required" notice.


3. MCMC Posterior Distribution Analysis

When Phase 3+, MCMC posterior distribution estimates appear below Grid Search results.

MCMC Header

| Item | Description |
|---|---|
| Samples × Walkers | e.g., 1,500 samples × 32 walkers |
| Max R̂ | Overall max R-hat + convergence status (✅ or ⚠️) |
| Runtime | MCMC execution time (seconds) |
| Projects | Number of projects in analysis |

Particle Storm Visualization

Click ▶ Play Particle Storm to see real-time animations of each parameter's posterior distribution:

  • Density Histogram: MCMC samples accumulate one by one revealing the distribution shape
  • Green solid line: MCMC estimated Mean
  • Red dashed line: Current setting value
  • Purple box: 95% HDI interval

MCMC Data Table

| Column | Description |
|---|---|
| (checkbox) | Select parameters to apply (Select All available) |
| Parameter | Parameter name (⚙️ = Dampening, 🔇 = Silence) |
| Current | Current value (red) |
| MCMC | Posterior mean (green) |
| ±SD | Standard deviation — estimation uncertainty |
| HDI 95% | 95% Highest Density Interval (purple) |
| R̂ | Convergence diagnostic (green < 1.05, yellow < 1.10, red ≥ 1.10) |
| Δ | Difference between current and MCMC estimate (↑ / ↓ / ≈) |

⚠️ R̂ > 1.05 is a convergence incomplete warning. R̂ > 1.10 means non-converged — do not apply.

💡 Rows without checkboxes: when the difference between the MCMC mean and the current value is ≤ 0.02 (the ε threshold), the change is deemed negligible and no checkbox is shown; the Δ column shows ≈ instead.
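
R̂ compares between-walker and within-walker variance. A basic Gelman–Rubin sketch (the diagnostic actually reported for Emcee chains may differ in detail):

```python
def r_hat(chains):
    # Basic Gelman-Rubin R-hat over equal-length chains. Values near
    # 1.0 indicate convergence; the table flags > 1.05 and > 1.10.
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)   # between-chain
    w = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m               # within-chain
    var_plus = (n - 1) / n * w + b / n
    return (var_plus / w) ** 0.5

# Two chains exploring the same range -> R-hat near 1:
good = [[i % 10 for i in range(100)], [(i + 3) % 10 for i in range(100)]]
# One chain stuck 5 units away -> R-hat well above 1.10:
bad = [good[0], [x + 5 for x in good[1]]]
```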

HDI Interpretation Guide

HDI [3.5, 6.2], Current = 5.0
Current value within HDI — reasonable. No change needed.

HDI [2.0, 3.5], Current = 5.0
Current value outside HDI. Likely overestimated. Adjustment recommended.

HDI [0.8, 8.0] (very wide)
Insufficient data. Estimation uncertain. Reference only.
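
The HDI itself is the narrowest interval that contains 95% of the posterior samples. A standard sample-based sketch, assuming a unimodal posterior:

```python
def hdi(samples, mass=0.95):
    # Narrowest interval containing `mass` of the samples -- the
    # standard sample-based HDI for a unimodal posterior.
    s = sorted(samples)
    n = len(s)
    k = max(1, round(mass * n))          # number of samples inside
    spans = [(s[i + k - 1] - s[i], i) for i in range(n - k + 1)]
    _, i = min(spans)
    return s[i], s[i + k - 1]

hdi(list(range(100)))              # (0, 94) for uniform samples
hdi([0] * 50 + list(range(50)))    # (0, 44): interval hugs the mass near 0
```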

4. Applying Recommendations

Apply Bar

A fixed bar appears at the bottom of the results screen:

  • Left: Number of selected parameters
  • Right: Apply Selected button

Phase-specific Application Limits

| Phase | Applicable Scope |
|---|---|
| 1–2 | Not applicable — Apply button fully disabled |
| 3 | Impact, T, k + MCMC (Dampening/Silence locked) |
| 4+ | All parameters applicable |

Application Targets

| Source | Target Parameters | Application Target |
|---|---|---|
| Grid Search | Impact, T, k, Dampening, Silence | Direct DB update |
| MCMC | Impact (mcmc_impact), Dampening (mcmc_dampening), Silence (mcmc_silence_ratio) | Direct DB update |

⚠️ If you check both Grid Search and MCMC for the same parameter, the last applied value will be reflected. Select only one per parameter.

Pre-Application Checklist

  1. Grade Check: Review the change between current and projected grades
  2. Overfitting Risk: Be cautious if cross-validation shows overfitting warnings
  3. MCMC R̂: Use MCMC recommendations for R̂ > 1.05 parameters as reference only
  4. HDI Range: Very wide HDI indicates insufficient data
  5. Phase Limits: Dampening/Silence checkboxes only activate at Phase 4+

After Application

  • A confirmation dialog appears; upon approval, the DB is updated
  • sessionStorage is cleared after application, requiring re-analysis
  • Applied parameter count is shown as a toast message

5. Data Maturity (Phase) and Feature Restrictions

Auto-Tuner assigns a 5-level confidence grade based on the lesser count (min) of Won/Lost records.

| Phase | Condition | Badge | Available Features |
|---|---|---|---|
| 1 | min < 5 | | Analysis unavailable — Insufficient data |
| 2 | min 5–9 | 🟠 | Signal Lift directional reference only |
| 3 | min 10–19 | 🟡 | Impact, T, k + MCMC |
| 4 | min 20–49 | 🟢 | + Dampening, Silence |
| 5 | min ≥ 50 | 🔵 | Full features + Stable MCMC |
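
The Phase assignment reduces to a threshold check on min(Won, Lost). A sketch mirroring the table:

```python
def data_phase(won_count, lost_count):
    # Phase from the lesser of Won/Lost record counts, per the table above.
    m = min(won_count, lost_count)
    if m < 5:
        return 1
    if m < 10:
        return 2
    if m < 20:
        return 3
    if m < 50:
        return 4
    return 5

data_phase(100, 3)   # 1 -- many Won records can't offset too few Lost
data_phase(25, 60)   # 4 -- Dampening/Silence tuning unlocked
```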

💡 Best way to advance Phase: Close more projects as Won or Lost. Active (in-progress) projects are not included in analysis.


6. FAQ

Q. Can I run analysis multiple times?

Yes. Auto-Tuner does not modify the DB at all. Analysis is an in-memory simulation, and existing data is unaffected until you click "Apply Selected".

Q. What happens if I cancel during analysis?

Click the red Cancel button next to the progress bar to immediately cancel the AJAX request. No data is affected.

Q. Why does analysis take so long?

When MCMC (Phase 3+) is included, it takes 15–40 seconds. This is because the Python MCMC engine (Emcee) performs hundreds of thousands of simulations. Without MCMC (Phase 2), Ruby Grid Search alone completes within 1 second.

Q. What if the Grid Search and MCMC recommendations differ?

| Situation | Interpretation | Recommendation |
|---|---|---|
| Both results agree | High confidence | Apply recommended ✅ |
| MCMC similar to Grid Search but wide HDI | Direction correct but uncertain | Apply cautiously |
| Results significantly different | Grid Search may be at a local optimum | Prioritize the MCMC HDI range |

Q. R̂ warning is showing

R̂ > 1.05 means the MCMC chain has not yet converged. MCMC results for this parameter are unreliable. Use Grid Search recommendation first and re-run MCMC after more data accumulates.

Q. MCMC section is not visible

MCMC is skipped when:

  • Below Phase 3 (min < 10 records)
  • Python Emcee environment not installed on the server

In both cases, Grid Search results display normally and Auto-Tuner is usable without MCMC.

Q. What is Particle Storm?

A visualization feature showing MCMC sample data as real-time animation. It provides an intuitive understanding of posterior distribution shapes. Click Play to sequentially animate all parameter distributions.