Parameter Optimization
Every strategy has parameters — SL/TP ticks, indicator periods, thresholds, R:R ratios. Choosing the right values can make the difference between a losing strategy and a profitable one. But optimization is a double-edged sword: done poorly, it leads to overfitting — a strategy that performs brilliantly on past data and fails in the future.
This tutorial covers practical parameter optimization techniques you can use with TestMax.
The Optimization Problem
Consider an EMA crossover strategy with two parameters:
- Fast EMA period: Could be 5, 7, 9, 12, 15, 20
- Slow EMA period: Could be 20, 30, 50, 75, 100
That is 6 x 5 = 30 possible combinations. Each combination produces a different equity curve, win rate, and P&L. How do you pick the right one?
Grid Search: The Brute Force Approach
The most straightforward method: test every combination and compare results.
Manual Grid Search with TestMax
- Define your parameter ranges — list every value you want to test for each parameter:

  ```python
  fast_periods = [5, 7, 9, 12, 15, 20]
  slow_periods = [20, 30, 50, 75, 100]
  sl_ticks_options = [10, 20, 30, 40]
  tp_ratios = [1.5, 2.0, 2.5, 3.0]
  ```

- Run each combination in TestMax — For each parameter set, create a run in the Algo Playground with those settings. TestMax stores every run’s results so you can compare them later.

- Compare results in the Analytics page — Use TestMax’s built-in run comparison to see which parameter combinations performed best across the same date range and instrument.
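Those ranges multiply out quickly. A minimal sketch of enumerating every run a full grid would require (plain Python; the rule that skips `fast >= slow` pairs is an assumption carried over from the EMA example, not a TestMax feature):

```python
from itertools import product

# The example ranges listed above
fast_periods = [5, 7, 9, 12, 15, 20]
slow_periods = [20, 30, 50, 75, 100]
sl_ticks_options = [10, 20, 30, 40]
tp_ratios = [1.5, 2.0, 2.5, 3.0]

# Enumerate every combination, skipping pairs where fast is not below slow
combos = [
    (f, s, sl, tp)
    for f, s, sl, tp in product(fast_periods, slow_periods, sl_ticks_options, tp_ratios)
    if f < s
]
print(f"{len(combos)} runs needed for a full grid")  # → 464 runs needed for a full grid
```

Four parameters already mean hundreds of runs, which is why the manual workflow above works best on one or two parameters at a time.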
Automated Grid Search in Code
You can also build the grid search into your strategy itself, running multiple parameter sets within a single execution. This is useful for quick exploration:
```python
#!/usr/bin/env python3
"""
Parameter Grid Search — EMA Crossover
Tests multiple parameter combinations and reports results.
Note: This is an analysis tool, not a live trading strategy.
"""
import os, time

ACCOUNT_ID = int(os.environ.get("ACCOUNT_ID", "0"))
CONTRACT_ID = os.environ.get("CONTRACT_ID", "")
TOTAL_BARS = int(os.environ.get("TOTAL_BARS", "5000"))
STEP_DELAY = float(os.environ.get("STEP_DELAY", "0.02"))
SPEED_FILE = os.environ.get("SPEED_FILE", "")

# First, collect all bar data without trading.
# get_next_bar() and read_speed() are provided by the TestMax runtime.
print("[INFO] Phase 1: Collecting bar data...")
bars_data = []
for i in range(TOTAL_BARS):
    bar = get_next_bar()
    if bar is None:
        break
    bars_data.append(bar)
    read_speed()
    if STEP_DELAY > 0:
        time.sleep(STEP_DELAY)
    if i % 1000 == 0:
        print(f"[INFO] Collected {i}/{TOTAL_BARS} bars")

print(f"[INFO] Collected {len(bars_data)} bars total")
print("[INFO] Phase 2: Running parameter grid search...")
print("-" * 70)

# EMA calculation helper
def calc_ema_series(closes, period):
    if len(closes) < period:
        return []
    mult = 2 / (period + 1)
    ema = [sum(closes[:period]) / period]  # seed with SMA of the first `period` closes
    for i in range(period, len(closes)):
        ema.append((closes[i] - ema[-1]) * mult + ema[-1])
    return ema

closes = [b["c"] for b in bars_data]

# Grid search
results = []
fast_periods = [5, 9, 12, 20]
slow_periods = [21, 50, 100]

for fast in fast_periods:
    for slow in slow_periods:
        if fast >= slow:
            continue  # Skip invalid combinations

        fast_ema = calc_ema_series(closes, fast)
        slow_ema = calc_ema_series(closes, slow)

        # Align: slow_ema[j] corresponds to closes[slow - 1 + j]
        fast_start = slow - fast  # offset to align indices
        min_len = min(len(fast_ema) - fast_start, len(slow_ema))

        trades = 0
        wins = 0
        total_pnl = 0.0
        position = None
        entry_price = 0.0

        for j in range(1, min_len):
            f_prev = fast_ema[j - 1 + fast_start]
            f_curr = fast_ema[j + fast_start]
            s_prev = slow_ema[j - 1]
            s_curr = slow_ema[j]
            price = closes[j + slow - 1]  # close of the signal bar

            golden = f_prev <= s_prev and f_curr > s_curr
            death = f_prev >= s_prev and f_curr < s_curr

            if position is None and golden:
                position = "LONG"
                entry_price = price
            elif position is None and death:
                position = "SHORT"
                entry_price = price
            elif position == "LONG" and death:
                pnl = price - entry_price
                total_pnl += pnl
                trades += 1
                if pnl > 0:
                    wins += 1
                position = None
            elif position == "SHORT" and golden:
                pnl = entry_price - price
                total_pnl += pnl
                trades += 1
                if pnl > 0:
                    wins += 1
                position = None

        win_rate = (wins / trades * 100) if trades > 0 else 0
        results.append({
            "fast": fast,
            "slow": slow,
            "trades": trades,
            "wins": wins,
            "win_rate": win_rate,
            "pnl": total_pnl
        })

# Sort by P&L and print results
results.sort(key=lambda r: r["pnl"], reverse=True)

print(f"{'Fast':>6} {'Slow':>6} {'Trades':>8} {'WinRate':>8} {'P&L (pts)':>12}")
print("-" * 44)
for r in results:
    print(f"{r['fast']:>6} {r['slow']:>6} {r['trades']:>8} {r['win_rate']:>7.1f}% {r['pnl']:>+12.2f}")

print("-" * 44)
best = results[0]
print(f"[BEST] EMA {best['fast']}/{best['slow']} | {best['trades']} trades | {best['win_rate']:.1f}% | {best['pnl']:+.2f} pts")
```

The Overfitting Problem
Overfitting occurs when you optimize parameters to perfectly fit historical data, capturing noise rather than real patterns. The result: stellar backtests, terrible live performance.
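Selection bias of this kind can be demonstrated without any market data. A sketch (standard library only; the trade counts and strategy count are arbitrary illustrative numbers): generate 100 "strategies" that are literally coin flips, then pick the best one.

```python
import random

random.seed(42)

# 100 "strategies" of 50 trades each, every trade a 50/50 coin flip.
# By construction, none of them has any real edge.
win_rates = []
for _ in range(100):
    wins = sum(1 for _ in range(50) if random.random() < 0.5)
    win_rates.append(wins / 50)

avg = sum(win_rates) / len(win_rates)
best = max(win_rates)
print(f"Average win rate: {avg:.1%}")   # hovers near 50%, as expected
print(f"Best win rate:    {best:.1%}")  # noticeably above 50%, purely by chance
```

Picking the "best" of many configurations always produces an apparent edge in-sample; on new data, the coin-flip strategy reverts to 50%. Grid searches over real data suffer exactly the same bias, just less obviously.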
Signs of Overfitting
| Sign | Description |
|---|---|
| Too-good results | Win rate > 80% or P&L that looks unrealistic |
| Very specific parameters | The “best” settings are oddly precise (e.g., EMA 13.7/47.3) |
| Narrow peak | Small parameter changes cause huge performance drops |
| Inconsistent across dates | Works on Jan-Mar, fails on Apr-Jun |
How to Avoid Overfitting
1. Use robust parameter ranges, not exact values
Instead of picking EMA 9/21 because it tested 0.3% better than 9/20, look for a plateau — a range of values that all perform well:
```
EMA 7/20:  +$2,100
EMA 9/21:  +$2,350   ← If this is an island, it is likely overfit
EMA 10/25: +$2,200
EMA 12/30: +$2,050
```
vs.
```
EMA 7/20:  +$2,100
EMA 9/21:  +$2,350   ← If nearby values also work, this is robust
EMA 10/25: +$2,380
EMA 12/30: +$2,280
```

If neighboring parameter values produce similar results, the pattern is real. If only one exact combination works, it is noise.
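The plateau check can be automated against a list of grid-search results in the shape used earlier in this tutorial. A sketch (the neighborhood thresholds and scoring rule are arbitrary assumptions for illustration, not a TestMax feature):

```python
def robustness_score(results, fast, slow, fast_step=3, slow_step=10):
    """Average P&L of a parameter set together with its grid neighbors.

    A neighborhood average close to the point's own P&L suggests a plateau;
    an average far below it suggests a narrow, likely overfit peak.
    """
    neighborhood = [
        r["pnl"] for r in results
        if abs(r["fast"] - fast) <= fast_step and abs(r["slow"] - slow) <= slow_step
    ]
    return sum(neighborhood) / len(neighborhood)

# Hypothetical grid-search output (same dict shape as the grid search above)
results = [
    {"fast": 7,  "slow": 20, "pnl": 2100.0},
    {"fast": 9,  "slow": 21, "pnl": 2350.0},
    {"fast": 10, "slow": 25, "pnl": 2380.0},
    {"fast": 12, "slow": 30, "pnl": 2280.0},
]
score = robustness_score(results, 9, 21)
print(f"Neighborhood average around 9/21: {score:+.2f}")  # → +2277.50
```

Ranking parameter sets by neighborhood average instead of raw P&L naturally favors plateaus over islands.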
2. Out-of-sample testing
Never optimize and validate on the same data:
```
Data:  Jan 1 - Dec 31
Wrong: Optimize on Jan-Dec, report results on Jan-Dec
Right: Optimize on Jan-Jun (in-sample), validate on Jul-Dec (out-of-sample)
```

```python
# Split your data
total_bars = len(bars_data)
split = int(total_bars * 0.6)  # 60% for optimization

in_sample = bars_data[:split]      # Optimize on this
out_of_sample = bars_data[split:]  # Validate on this

# Run grid search on in_sample
# Take the top 3-5 parameter sets
# Test those on out_of_sample
# The one that performs best on BOTH is your pick
```

3. Walk-forward analysis
The gold standard. Instead of a single train/test split, do multiple:
```
Window 1: Optimize on Jan-Mar, test on Apr
Window 2: Optimize on Feb-Apr, test on May
Window 3: Optimize on Mar-May, test on Jun
...

Final performance = average of all test windows
```

This simulates what would happen if you re-optimized monthly.
```python
def walk_forward(bars_data, optimize_months=3, test_months=1):
    """Walk-forward analysis framework."""
    # Approximate bars per month (1m bars, ~22 trading days, ~390 bars/day)
    bars_per_month = 22 * 390

    results = []
    start = 0

    while start + (optimize_months + test_months) * bars_per_month <= len(bars_data):
        opt_end = start + optimize_months * bars_per_month
        test_end = opt_end + test_months * bars_per_month

        opt_data = bars_data[start:opt_end]
        test_data = bars_data[opt_end:test_end]

        # 1. Find best params on opt_data (your grid search function)
        best_params = run_grid_search(opt_data)

        # 2. Test those params on test_data
        test_result = run_strategy(test_data, best_params)

        results.append({
            "window": len(results) + 1,
            "params": best_params,
            "test_pnl": test_result["pnl"]
        })

        start += test_months * bars_per_month  # Slide forward

    return results
```

What to Optimize (and What Not To)
Worth optimizing:
- Indicator periods (EMA 9 vs 12 vs 20)
- SL/TP tick distances (within a reasonable range)
- R:R ratio (1.5:1 vs 2:1 vs 3:1)
- Time filters (which hours to trade)
Not worth optimizing:
- Core logic — if your strategy concept does not work with reasonable parameters, no amount of tuning will save it
- Too many parameters simultaneously — the more you optimize, the higher the chance of overfitting
- Exact thresholds — RSI 30 vs 32 vs 28 should all work similarly if the strategy is sound
Using TestMax’s Run Comparison
TestMax stores every strategy run with its full results. The most practical optimization workflow:
- Run baseline — Pick reasonable default parameters and run the strategy. Note the P&L, win rate, and trade count.
- Change one parameter at a time — Run the strategy again with one parameter changed. Compare results in the Analytics page.
- Build a picture — After 10-15 runs, you will see which parameters are sensitive (large P&L changes) and which are not (similar results regardless of value).
- Pick robust values — Choose parameters from the “plateau” — values where small changes do not cause large performance swings.
- Validate on different dates — Run your final parameter set on a completely different date range to confirm the results hold up.
Practical Tips
- Start with default values from the tutorials. The parameters in these tutorials (EMA 9/21, RSI 14, ATR 14, etc.) are industry-standard defaults. They work reasonably well across many instruments and timeframes.

- Optimize the risk-reward ratio first. This has the largest impact on overall P&L. A strategy with a 40% win rate at 3:1 R:R makes more money than one with a 55% win rate at 1:1 R:R.

- Test across multiple date ranges. A strategy that only works on NQ in January 2025 is not a strategy — it is a coincidence.

- Keep a log. Track what you tested and what happened. It is easy to run 50 backtests and forget which combination was which.

- When in doubt, use wider stops and higher R:R. This forgives more imprecision in your entry signal.
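The risk-reward claim in the tips above is plain arithmetic. Expectancy per trade, measured in units of risk (R), makes it concrete:

```python
def expectancy(win_rate, rr):
    """Expected profit per trade, in units of risk (R)."""
    return win_rate * rr - (1 - win_rate) * 1.0

# 40% win rate at 3:1 vs 55% win rate at 1:1
print(f"40% @ 3:1 -> {expectancy(0.40, 3.0):+.2f} R per trade")  # +0.60 R
print(f"55% @ 1:1 -> {expectancy(0.55, 1.0):+.2f} R per trade")  # +0.10 R
```

The lower-win-rate strategy earns six times more per trade, which is why optimizing the R:R ratio usually pays off before fine-tuning entry parameters.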
What’s Next
- From TestMax to Live Trading — once you have optimized parameters, port your strategy to live trading
- ICT Smart Money Strategy — see how the ICT strategy adapts parameters per timeframe automatically