
hop: All-Dielectric 3D FDTD Solver w/GPU Acceleration

hop solves Maxwell's equations for arbitrary dielectric mediums in a Yee-discretized 3D domain.
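
For reference, the leapfrog update that such a Yee-discretized solver performs looks as follows for a simple, non-dispersive dielectric (a generic textbook sketch, not necessarily hop's exact formulation):

```latex
% Source-free, non-magnetic, simple dielectric: the generic Yee leapfrog update.
\mathbf{H}^{\,n+\frac{1}{2}} = \mathbf{H}^{\,n-\frac{1}{2}}
    - \frac{\Delta t}{\mu_0}\,\nabla\times\mathbf{E}^{\,n}

\mathbf{E}^{\,n+1} = \mathbf{E}^{\,n}
    + \frac{\Delta t}{\varepsilon_0\,\varepsilon_r(\mathbf{r})}\,\nabla\times\mathbf{H}^{\,n+\frac{1}{2}}
```

E and H components are staggered by half a cell in space and half a step in time.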

Warning

This project is currently in a prototype state. Proceed at your own risk!

Development

Development Tools

  • (optional) prek is suggested to make it easy to conform to all project rules via pre-commit hooks.
  • (optional) cocogitto is suggested to ease creating conformant commit messages and changelog entries, and to standardize release processing.
  • (optional) licensesnip is suggested to make adding correct license headers easy.

Release Process

Releases shall proceed in the following fashion:

  1. Execute cog bump --auto (nothing may be staged). This performs a few checks, e.g. running prek on all files and requiring a passing test suite.
  2. Validate that the generated tags, changelog, and bump-commit are appropriate and as expected. If anything is wrong, this is the last chance to revert.
  3. Run git push origin main and git push origin --tags.
  4. Publish docs and packages, and create a Forgejo release (TODO). Perhaps a Justfile that validates the GPG signature, builds and publishes the docs to a locally defined destination, builds and publishes any packages, and creates a Forgejo release using the changelog plus a custom string entry - maybe it even publishes a blog post somewhere, reusing that custom string entry?

TODO

In-Scope

  • Bounds

    • Naive
  • Sources

    • EPointDipole
  • Structures

    • Cuboid
    • Sphere
  • Mediums

    • Vacuum
    • SimpleDielec
    • CPML
  • Dispersive Mediums: These are "trivial" in the sense that each one is "just" a function over aux fields / data that the solver has already moved into place for it (see the ADE sketch after this list). NOTE: We may want to reconsider how aux fields are updated with subpixel averaging. Also, could PMLs be made better with subpixel averaging?

  • Inhomogeneous Mediums: Requires some kind of sampling within the single-medium structure.

    • Frankly, this is just another aux-field thing. No solver logic needed.
  • Nonlinear Mediums: Weirdly enough, nothing special in the solver. The medium will be doing all kinds of special stuff; namely, solving an auxiliary differential equation with substeps.
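
To make the "just a function over aux fields" framing above concrete (for both the dispersive and nonlinear cases), here is a minimal, hypothetical Rust sketch of an auxiliary-differential-equation update for a single Lorentz pole. Every name is illustrative and the naive explicit-Euler integrator is chosen purely for brevity; none of this is hop's actual API:

```rust
/// Hypothetical sketch: a dispersive medium expressed as a pure function over
/// auxiliary per-cell state (single Lorentz pole via an auxiliary ODE).
/// None of these names exist in hop; explicit Euler is used only for brevity.
#[derive(Clone, Copy)]
struct LorentzAde {
    omega0: f64,    // resonance angular frequency
    gamma: f64,     // damping rate
    delta_eps: f64, // oscillator strength (eps_dc - eps_inf)
    eps0: f64,      // vacuum permittivity (1.0 in normalized units)
}

impl LorentzAde {
    /// Advance one cell's aux state (p, dp) by dt, driven by the local E,
    /// and return the polarization current J_p = dP/dt that the main
    /// E-update would consume.
    fn step(&self, e: f64, p: &mut f64, dp: &mut f64, dt: f64) -> f64 {
        // d2P/dt2 + gamma * dP/dt + omega0^2 * P = eps0 * delta_eps * omega0^2 * E
        let accel = self.eps0 * self.delta_eps * self.omega0 * self.omega0 * e
            - self.gamma * *dp
            - self.omega0 * self.omega0 * *p;
        *p += dt * *dp;
        *dp += dt * accel;
        *dp
    }
}

fn main() {
    let medium = LorentzAde { omega0: 1.0, gamma: 0.01, delta_eps: 2.0, eps0: 1.0 };
    let (mut p, mut dp) = (0.0, 0.0);
    // One cell driven by a constant E field for a few steps.
    for _ in 0..5 {
        let j_p = medium.step(1.0, &mut p, &mut dp, 1e-2);
        println!("P = {p:.6}  J_p = {j_p:.6}");
    }
}
```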

Fanciness

  • Bounds

    • PEC
    • PMC
    • Bloch(k=0)
    • Bloch(k!=0)
  • Sources

    • GaussBeam
    • PlaneWave
    • TFSF: Not really a source, but likely best expressed as one. Think on this.
  • Structures

  • Mediums

    • SimpleChiral
    • Lorentz
    • Drude
    • DrudeLorentz
    • Debye
  • Medium Enhancements:

    • General: Medium built from models of P, J_f, M (see the trait sketch after this list).
      • We then split up "how do I model mediums" into "how do I model P, J_f, M".
      • We keep existing mediums as-is, since they are sensible "special cases".
    • Animated: Requires recomputing kernel layout on every time step with altered parameters.
    • Nonlinear: This is just a P model, for use in the General medium.
    • Chiral: This is just a P model, for use in the General medium. The only kink is that we need an approximation of the local value of the dual field, since chirality is fundamentally a crossover effect.
    • Spatial Variation: This is just a P model, for use in the General medium.
  • Bloch periodic bounds: Much like NaivePeriodic, except a phase shift is applied, forcing fields to be complex (see the split-field sketch after this list). This is the only correct method of doing periodic boundary conditions - in fact, periodic sim results require a photonic band gap computation.

    • There is a split-field formulation that allows keeping all the same real-valued sim logic: F_real and F_imag, the real and imaginary parts of a Bloch mode, are both kept and updated independently (I don't know how sources fit in). Boundaries then enforce the crossover, i.e. the phase shifting.
    • That does require running the sim twice per time step, effectively. Anyway, the problem is well studied.
    • See: https://arxiv.org/pdf/2007.05091
  • Subpixel Averaging: Integrate by sampling the medium at various points (if it is varying). For two-medium regions, integrate by multiplying the sample-point-wise SDF with the sample-point-wise medium (see the sketch after this list).

    • Can we use Lines creatively to do this very efficiently? An interesting thought.
  • Subgrids: Needed for TFSF, but also generally useful.

  • Structure Kernel Layout Optimization: Directly use SDFs to label cells without binning, and use the standard binned algo first. Then, refine the solution by annealing, using a cost function that rewards finding larger z-stripes.

    • Experiment with loading data into Lines more aggressively, esp. in structure vectors. Each kernel thus has more data to load - and much will be strided and inefficient, yes - but can also do its actual calculations very quickly. In particular, using Z lines
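
As referenced from the Medium Enhancements bullet above, a hedged Rust sketch of splitting "how do I model a medium" into independent P, J_f, and M models. Every trait, signature, and type here is invented for illustration; it is not hop's actual API:

```rust
#![allow(dead_code)]
// Hypothetical decomposition: a "General" medium is just the composition of
// three independent response models (polarization P, free current J_f,
// magnetization M).  Existing mediums stay as-is as special cases.

/// Bound polarization response; returns dP/dt for this step.
trait PModel {
    fn step(&self, e: f64, aux: &mut f64, dt: f64) -> f64;
}

/// Free-current response J_f(E).
trait JfModel {
    fn step(&self, e: f64, aux: &mut f64, dt: f64) -> f64;
}

/// Magnetization response; returns dM/dt for this step.
trait MModel {
    fn step(&self, h: f64, aux: &mut f64, dt: f64) -> f64;
}

/// "How do I model a medium" thus becomes "how do I model P, J_f, and M":
/// the General medium only wires the three models together, and the solver
/// only ever sees the currents they return.
struct General<P: PModel, J: JfModel, M: MModel> {
    p: P,
    jf: J,
    m: M,
}

/// One concrete example: a trivial ohmic free-current model, J_f = sigma * E.
struct Ohmic {
    sigma: f64,
}

impl JfModel for Ohmic {
    fn step(&self, e: f64, _aux: &mut f64, _dt: f64) -> f64 {
        self.sigma * e
    }
}

fn main() {
    let jf = Ohmic { sigma: 0.5 };
    let mut aux = 0.0;
    println!("J_f = {}", jf.step(2.0, &mut aux, 1e-2)); // prints "J_f = 1"
}
```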
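
For the Bloch periodic bounds bullet above, the split-field idea in standard textbook form (independent of how hop would implement it): the Bloch condition over one period a along x, and the boundary copy that mixes the stored real and imaginary parts:

```latex
% Bloch (Floquet) condition over one period a along x:
F(x + a) = e^{i k_x a}\, F(x)

% Split-field form: keep F = F_{re} + i F_{im} as two real fields, update both
% with the unchanged real-valued update, and let only the boundary copy mix them:
F_{re}(x + a) = \cos(k_x a)\, F_{re}(x) - \sin(k_x a)\, F_{im}(x)
F_{im}(x + a) = \sin(k_x a)\, F_{re}(x) + \cos(k_x a)\, F_{im}(x)
```

For k_x = 0 the mixing terms vanish and this reduces to a plain periodic copy, matching the "much like NaivePeriodic" framing above.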
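
For the Subpixel Averaging bullet above, a minimal sketch of the two-medium case: sample the structure's SDF on a small grid of points inside one Yee cell and blend the two permittivities by the resulting fill fraction. This is a plain volume-fraction average chosen for brevity (proper subpixel smoothing is typically anisotropic), and none of the names come from hop:

```rust
/// Illustrative two-medium subpixel average for one Yee cell: sample a signed
/// distance function (negative = inside medium A) on an n^3 grid of points and
/// blend eps_a / eps_b by the fill fraction.  Not hop's actual code.
fn subpixel_eps<F>(sdf: F, cell_min: [f64; 3], dx: f64, n: usize, eps_a: f64, eps_b: f64) -> f64
where
    F: Fn(f64, f64, f64) -> f64,
{
    let mut inside = 0usize;
    for i in 0..n {
        for j in 0..n {
            for k in 0..n {
                // Sample at the centers of the n^3 sub-cells.
                let x = cell_min[0] + (i as f64 + 0.5) * dx / n as f64;
                let y = cell_min[1] + (j as f64 + 0.5) * dx / n as f64;
                let z = cell_min[2] + (k as f64 + 0.5) * dx / n as f64;
                if sdf(x, y, z) < 0.0 {
                    inside += 1;
                }
            }
        }
    }
    let fill = inside as f64 / (n * n * n) as f64;
    fill * eps_a + (1.0 - fill) * eps_b
}

fn main() {
    // Sphere of radius 0.4 centered at the origin as medium A (eps = 12),
    // embedded in vacuum (eps = 1); average a cell straddling its surface.
    let sphere = |x: f64, y: f64, z: f64| (x * x + y * y + z * z).sqrt() - 0.4;
    let eps = subpixel_eps(sphere, [0.35, -0.05, -0.05], 0.1, 8, 12.0, 1.0);
    println!("effective eps ~ {eps:.3}");
}
```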