Systematic Hyperparameter Tuning for Neural Network-Based PDE Solvers

When working with deep learning methods for partial differential equations (PDEs) such as the Hamilton-Jacobi-Bellman (HJB) equation, effective hyperparameter tuning is critical to success. Based on my experience with the DeepBSDE and Deep Splitting methods, here are my battle-tested key principles:

  1. Engineering-First Mindset:
    • Start with the minimal interesting example.
    • Test one thing at a time through iterative, controlled experiments.
    • Favor incremental improvements over one-shot analytical fixes.
    • Trust empirical evidence over theoretical assumptions.
    • Record everything (see the sweep sketch after this list).
  2. Key Metrics Only:
    • Make decisions based only on key metrics (e.g., policy performance).
    • Use secondary indicators (loss landscapes, gradient norms, etc.) only to understand the model, never to drive decisions (a gradient-norm helper is sketched after this list).
  3. In-depth, Fact-based Analysis:
    • Focus strictly on facts, not stories.
    • Seek the simplest explanation from first principles.
  4. Use the Most Straightforward Solutions:
    • Prefer direct fixes to complex changes.
    • Don’t fear hard work; it’s often the quickest way.
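
To make point 1 concrete, here is a minimal sketch of a one-factor-at-a-time sweep with seeded, fully logged runs. The solver interface (`train_solver`), the hyperparameter names, and the directory layout are hypothetical placeholders for illustration; the pattern of reseeding every run and recording the full configuration is the point.

```python
import json
import random
import time
from pathlib import Path

import numpy as np
import torch


def train_solver(config: dict) -> dict:
    """Hypothetical stand-in for an actual DeepBSDE / Deep Splitting run.

    Replace with your real training loop; it should return the primary
    metric (e.g., policy performance) plus any secondary diagnostics.
    """
    raise NotImplementedError("plug in your solver here")


def sweep_one_hyperparameter(base_config: dict, key: str,
                             values: list, seed: int = 0) -> None:
    """Vary exactly one hyperparameter while holding everything else fixed."""
    log_dir = Path("runs") / f"sweep_{key}_{int(time.time())}"
    log_dir.mkdir(parents=True)

    for value in values:
        # Reseed before every run so the runs differ only in `key`.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)

        config = {**base_config, key: value}
        result = train_solver(config)

        # Record everything: full config, result, and seed, one file per run.
        record = {"config": config, "result": result, "seed": seed}
        with open(log_dir / f"{key}={value}.json", "w") as f:
            json.dump(record, f, indent=2, default=str)


if __name__ == "__main__":
    base = {"lr": 1e-3, "hidden_width": 64, "num_time_steps": 50}
    sweep_one_hyperparameter(base, key="lr", values=[1e-4, 3e-4, 1e-3])
```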

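For the secondary indicators in point 2, a small helper along these lines can track gradient norms without promoting them to decision criteria. This is a sketch assuming a PyTorch model; the name `global_grad_norm` is my own, not part of any solver library.

```python
import torch


def global_grad_norm(model: torch.nn.Module) -> float:
    """Global L2 norm over all parameter gradients: a diagnostic, not a target."""
    total = 0.0
    for p in model.parameters():
        if p.grad is not None:
            total += p.grad.detach().pow(2).sum().item()
    return total ** 0.5
```

Call it after `loss.backward()` and log the value next to the primary metric, so it is available for diagnosis without influencing tuning decisions.
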
In addition, I have written a note on how to tune hyperparameters systematically; please refer to it below: