This is it!
This is the milestone.
We can now make, run, and analyze simulations in one big chain.
Jank remains:
- Dynamic enums still need caching, lest the user think to restart
Blender while the math nodes are riding high on `FlowPending`.
- GN remains untested, and so forth.
- Still no plane wave node. Easy to jank together, though.
- Still no reindexing in the Transform math, so only frequencies for
now.
- Active kinds still don't update shape; we still need an explicit (postinit?) directive to do that.
- No colors for expr sockets :(
But we have a beautifully solid foundation to work on.
The new abstractive tools for defining event-driven actions via nodes
have had very few showstoppers, and are incredibly nice to work with.
There are sharp edges, of course, but generally they only matter where
the problem was so very difficult to begin with.
We'll start doing physics immediately, and fixing bugs / implementing
more nodes as we go.
We've added enough nodes to run simulations, and there is now only an
"air gap" remaining between the math nodes and the sim design nodes.
Of particular importance right now are:
- Finalizing the exporter w/Sim Data output.
- Finishing the plane wave, as well as other key source nodes (TFSF and Gaussian Beam stand out in particular).
- Reduce node, as we need to be able to reconstruct total flux
through a field monitor, as well as generally do statistics.
- Transform node, particularly w/pure-InfoFlow reindexing (we need to be able to reindex frequency -> vacwl) and Fourier transform (we need to be able to cast a time-domain field recording to the frequency domain).
- Finish debugging the basic structures, in particular the Cylinder
primitive, but also the generic GeoNodes node, so we can build the
membrane properly.
- Ah yes, also, the grid definition. The default can sometimes act a
little suspiciously.
It now supports selecting the variant, and exporting a generated
LazyArrayRange of valid freqs/wls.
The UI was also revamped, with far more readable range values, dynamic labels, a link button pointing to the original data, etc. While it's more LOC, the code structure is also far more explicit and predictable to maintain.
A very healthy amount of research on how to choose the Bloch vector was
performed.
It is encapsulated not only in the documentation, but also in the modes
available for how to derive one that fits a given simulation.
The theory of the Bloch boundaries can really bite you, and the hope is
that by focusing on such invalid-usage-prevention, a lot of time can be
saved in the sim design stages.
It's useful for scenarios where we need to intersect geometry with the
simulation boundary.
Generally, not preferred, which we make clear in the docs :)
Also fixed a bug in `events.py` that was misjudging `FlowInitializing`,
and causing `BoundConds` to fail.
Weird.
Anyway, fixed!
We now have a solid category (w/accompanying sockets) for defining
boundary conditions.
We also have a single-boundary-condition node for fully configuring the
all-important PML condition.
Some insights (sketched in Tidy3D terms after this list):
- PEC/PMC are so dead-simple that giving them their own nodes doesn't
even make sense.
- StablePML and PML are the same, just with differing layer counts. We chose to require that the user add layers to the PML node for the same effect.
- "Periodic" is a special case of "Bloch", so we only need "Bloch".
- "Absorber" vs "PML" is an important choice for the user, which we
must ensure shines through.
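For reference, here's a rough sketch of how these insights map onto Tidy3D's own boundary API. This is plain `tidy3d` usage (layer counts are placeholders), not the node implementation itself:

```python
import tidy3d as td

# One PML node with a user-set layer count covers the StablePML case too,
# per the "differing layers" note above.
pml = td.PML(num_layers=12)

# "Periodic" is just Bloch with a zero Bloch vector.
periodic_as_bloch = td.BlochBoundary(bloch_vec=0.0)

# Absorber vs. PML is a real user decision (ex. structures crossing the
# boundary), so it stays an explicit, visible choice.
absorber = td.Absorber(num_layers=40)

boundary_spec = td.BoundarySpec(
    x=td.Boundary(minus=pml, plus=pml),
    y=td.Boundary(minus=periodic_as_bloch, plus=periodic_as_bloch),
    z=td.Boundary(minus=absorber, plus=absorber),
)
```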
I'm of the belief that the correct abstractions are now actually
available, and that most-to-all of the required functionality actually
already exists within the code base.
The art is bringing it together!
The performance difference isn't as clear-cut as hoped.
However, the plotting procedure is enormously more straightforward, and
performance is more predictable.
So it's worth it.
We're managing to perfectly reuse figure/canvas/axis, but still hovering at around 70-80ms.
Mind you, the tested machine is an older laptop.
Still, things feel interactive enough, especially together with the
other modifications.
To really amp it up, we can look into blitting. It requires alterations
to the plotting methodology, but it offers a cached approach to drawing
only altered pixels (the bottleneck with `canvas.draw()` is that it
needs to render all the pixels, every time).
We can also try to lower the resolution if it's too slow.
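For the record, a minimal blitting sketch on the Agg backend (variable names are illustrative; the real plotting lives in `image_ops`):

```python
import matplotlib

matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.set_ylim(-1.0, 1.0)
(line,) = ax.plot(np.linspace(0, 1, 256), np.zeros(256), animated=True)

fig.canvas.draw()                                # one full render
background = fig.canvas.copy_from_bbox(ax.bbox)  # cache the static pixels

def redraw(ydata: np.ndarray) -> None:
    """Redraw only the changed artist instead of re-rendering every pixel."""
    fig.canvas.restore_region(background)
    line.set_ydata(ydata)
    ax.draw_artist(line)
    fig.canvas.blit(ax.bbox)

redraw(np.sin(np.linspace(0, 2 * np.pi, 256)))
```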
Various small adjustments with a big total impact:
- Toggling 3D preview no longer propagates a DataChanged, which prevents
chronic `bl_select()` invocation that greatly slows down switching
between images.
- Viewer now uses a replot() context manager to hide any active plots
whenever no plots were generated, so that turning off plots /
previewing a node (chain) without plots properly turns off the image,
instead of letting the image hang around.
- `self.managed_objs` is now properly invalidated, instead of trying to
  set the `name` attribute of in-memory objects, so that persistence
  keeps up with `sim_node_name` changes and all the ex. 'Viz' nodes
  don't try to hog a single 'Viz' image name.
- A pre-save handler was added, which packs all images into the .blend,
  so that the images will pop up on the next file load (see the sketch
  after this list).
- A fake user is now assigned to all new images, to nail down the idea
that `ManagedBLImage` is the owner.
- The `name` setter of `ManagedBLImage` was unreasonably bugged (it's
  actually incredible that it worked); it has been fixed, and other
  changes were applied to the class as a whole (including @classmethod
  on the UI-area/space getters and minor None-sanitizing).
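The pre-save packing and fake-user ideas, sketched with the standard `bpy` API (the handler below is illustrative, not the addon's actual registration code):

```python
import bpy
from bpy.app.handlers import persistent

@persistent
def pack_generated_images(*_args) -> None:
    """Pack all unpacked images into the .blend so they survive a file reload."""
    for image in bpy.data.images:
        if image.has_data and image.packed_file is None:
            image.pack()

bpy.app.handlers.save_pre.append(pack_generated_images)

# Fake user on creation, so the image clearly "belongs" to ManagedBLImage and
# is never garbage-collected between saves.
image = bpy.data.images.new('Viz', width=512, height=512)
image.use_fake_user = True
```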
This was a nasty (interesting?) one: usually, no attempt is made to use
an input socket after it no longer exists.
Various checks in ex. `events` tend to help that process along.
Unfortunately (fortunately?), `Extract` uses a `_compute_input` query with
`optional=True`, which results in a direct attempt to hit the cache
without any other checks.
Because old input socket caches were never deleted, it would
**continue to get cached data from sockets that no longer exist**.
While on the surface this could be considered a case of "the
private method (`_compute_input`) is private for a reason", or
alternatively, "don't hijack the graph flow", I'm more convinced that
the usage is actually quite clean, being read-only and generally
well-conceived. It's reasonable to presume that asking for a thing that
doesn't exist won't produce output!
Moreover, I wouldn't be surprised if several other mysterious bugs were
caused by this. Not to mention the memory leak of endless caching! (Well,
until the node is deleted.) It's a good thing we noticed!
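For posterity, the rough shape of the fix, with hypothetical names (the real cache structure and removal path differ):

```python
def remove_input_socket(self, socket_name: str) -> None:
    """Remove an input socket AND evict its cached flow data (hypothetical sketch)."""
    self.inputs.remove(self.inputs[socket_name])
    # Without this eviction, `_compute_input(..., optional=True)` keeps returning
    # stale data for a socket that no longer exists.
    self._input_cache.pop(socket_name, None)  # `_input_cache` is an illustrative name
```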
The Viz node now detects the shape of the data, and presents compatible
plot options.
Not all are implemented, but a few quite important ones are.
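Purely to illustrate the idea (this is not the node's actual table), the compatible options follow from the dimensionality of the incoming data:

```python
def compatible_plot_options(shape: tuple[int, ...]) -> list[str]:
    """Illustrative mapping from data shape to plausible plot kinds."""
    match len(shape):
        case 1:
            return ['Curve 1D', 'Points 1D', 'Bar']
        case 2:
            return ['Heatmap 2D', 'Curves 2D']
        case _:
            return []  # reduce/transform first, then plot
```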
Additionally, a number of dataflow-related bugs were investigated and
fixed. A few were truly damaging, but many simply resulted in gross
inefficiencies - we must be careful declaring BLFields that are updated
in hot loops!
Moreover, it is exceptionally easy to add more as needed, as we analyze
more and more sims.
The only limit is `matplotlib`, which is... well, yeah.
Due to the BLField work, the dynamicness of the Viz node is quite
under control, so there will not be any critical issues there.
The plotting lags (70ms total in the hot loop), but that's actually
entirely fixable.
It's also entirely the `managed_bl_image`'s fault.
Fixing these inefficiencies will also make Tidy3D's builtin plots
near-realtime, incidentally.
Current profiling results:
- 25ms: Creating `fig` via `plt.subplots()`. We can reuse the fig per managed
  image.
- 43ms: The BytesIO roundtrip, including `savefig`. We can instead use
  the Agg backend, `fig.canvas.draw()`, and `np.frombuffer` to plot
  directly into a memory buffer (sketched after this list).
- ~3ms: Actual plotting functions in `image_ops`. They are seriously fast.
- ~0ms: Blitting pixels to the Blender image - this was optimized in
  Blender 4.1, and it shows; the time to copy the data over is essentially nothing.
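The Agg replacement for the BytesIO roundtrip, sketched (figure size/dpi are placeholders):

```python
import numpy as np
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure

fig = Figure(figsize=(4, 4), dpi=128)
canvas = FigureCanvasAgg(fig)
ax = fig.add_subplot()
ax.plot([0, 1], [0, 1])

canvas.draw()
width, height = canvas.get_width_height()
# Zero-copy view of the rendered RGBA pixels, ready to push into a Blender image.
pixels = np.frombuffer(canvas.buffer_rgba(), dtype=np.uint8).reshape(height, width, 4)
```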
BLField has gotten a huge facelift, to make it practical to wrangle
properties without the sharp edges (see the sketch after this list):
- All the "special" UI-exposed property types can now be directly
constructed in a BLField marked with 'prop_ui'.
- The most appropriate internal representation is chosen for the
  attribute based on its type annotation, including sized vector-like
  `bool`/`int`/`float` properties for `tuple[...]` annotations.
- Static EnumProperties can now be derived from a special `StrEnum`, to
which a `to_name` and `to_icon` method is attached.
- Dynamic `EnumProperty` can now be used safely, with builtin
workarounds to the real-world reference-loss-crash (realized
in the Tidy3D Cloud Task node) and jankiness like empty enum.
- The update method is now fully managed, negating all bugs related to
improper update callback naming.
- Python-side getter caching is preserved for UI-exposed
  properties, with the help of node/socket base class support for
  passing a `Signal.InvalidateCache` to BLFields that are altered in the
  UI.
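To give a flavor of how the new surface reads, a sketch using the names from these notes (imports of `BLField` and the node base class are elided; exact signatures are paraphrased, not copied from the code base):

```python
import enum

class ColorMode(enum.StrEnum):
    Viridis = enum.auto()
    Grayscale = enum.auto()

    @staticmethod
    def to_name(value: 'ColorMode') -> str:
        return value.name

    @staticmethod
    def to_icon(value: 'ColorMode') -> str:
        return ''

class VizNode(MaxwellSimNode):  # base class name illustrative
    # Static EnumProperty derived from the StrEnum above; UI exposure via 'prop_ui'.
    color_mode: ColorMode = BLField(ColorMode.Viridis, prop_ui=True)
    # Sized vector-like annotation maps to a float-vector property internally.
    scale: tuple[float, float] = BLField((1.0, 1.0), prop_ui=True)
```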
The cost of all this niceness is rather low and, arguably, a net positive:
- Dynamic Enum/String searchers no longer "magically" invoke all the
time, since the values seen by Blender are cached by the BLField.
- To regenerate the searcher output, the user should write an
  `@on_value_changed` method that passes `Signal.ResetEnumItems` or
  `Signal.ResetStrSearch` to the `BLField` (see the sketch after this list).
- Since searching is no longer eager, there is no danger of
  out-of-reference strings (which crash Blender via EnumProperty), and
  the performance problems associated with hot-loop regeneration of
  EnumProperty strings are greatly reduced.
- The base classes are now involved in BLField invalidation, to ensure
  that the getter caches are cleared when the UI changes. For the
  price of that small indirection (done cheaply with a set lookup),
  most attribute accesses resolve in a single cached lookup, completely
  avoiding Blender until needed.
- This does represent another increase in reliance on the event
  system's integrity, but so far, that has been a very productive
  direction.
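The regeneration pattern, sketched (the decorator arguments and the "assign the Signal to the field" mechanism are paraphrased from these notes, not copied from the code base):

```python
class CloudTaskNode(MaxwellSimNode):  # base class name illustrative
    # Dynamic enum whose items are cached by the BLField (searcher elided).
    cloud_task = BLField(None, prop_ui=True)

    @events.on_value_changed(prop_name='api_key')  # decorator args illustrative
    def regenerate_cloud_tasks(self) -> None:
        # Explicit, user-declared regeneration instead of eager searching.
        self.cloud_task = Signal.ResetEnumItems
```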
**NOTE**: The entire feature set of BLField is not well tested, and will
likely need adjustments as the codebase is converted to use them.
Enormously important changes to the data flow semantics and invalidation
rules. Especially significant is the way in which the node graph
produces a deeply composed function, compiles it to optimized machine
code with `jax`, and uses a separately cached data flow to insert values
into the function from anywhere along the node graph without recompiling
the function.
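In plain `jax` terms, the idea amounts to something like this sketch (node-graph machinery elided; function names illustrative):

```python
import jax
import jax.numpy as jnp

# Each node contributes a small function; the graph composes them lazily.
def source_amplitude(freqs, amp):
    return amp * jnp.sin(freqs)

def monitor_power(field):
    return jnp.sum(jnp.abs(field) ** 2)

@jax.jit
def composed_flow(freqs, amp):
    return monitor_power(source_amplitude(freqs, amp))

# Values inserted anywhere along the graph arrive as arguments, so changing
# them re-runs the compiled function without recompiling it.
freqs = jnp.linspace(1.0, 2.0, 1024)
print(composed_flow(freqs, 0.5))
print(composed_flow(freqs, 2.0))  # new value, same compiled code
```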
A critical portion of the math system, namely the unit-aware dimensional
representation, is also finished. The `Data` node socket type now
dynamically reports the dimensional properties of the object flowing
through it, courtesy of a separate data flow for information.
This allows for very high-performance unit-aware nearest-value indexing
built on binary search.
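The indexing itself is conceptually just this (unit handling is simplified to a single scale factor for illustration):

```python
import numpy as np

def nearest_index(axis_values: np.ndarray, query: float, query_to_axis_scale: float = 1.0) -> int:
    """Return the index of the sorted axis value nearest to `query`, in O(log n)."""
    q = query * query_to_axis_scale           # convert the query into axis units
    i = int(np.searchsorted(axis_values, q))  # binary search: first index >= q
    if i == 0:
        return 0
    if i == len(axis_values):
        return len(axis_values) - 1
    # Pick the closer of the two neighbors.
    return i if (axis_values[i] - q) < (q - axis_values[i - 1]) else i - 1

freqs_hz = np.linspace(180e12, 220e12, 1001)  # sorted frequency axis in Hz
print(nearest_index(freqs_hz, 200.5, 1e12))   # query given in THz
```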
Also, dependency management is completely ironed out. The `pip install`
process now runs concurrently, and the installation log is parsed in the
background to update a progress bar. This is the foundational work for a
similar concurrent process wrt. Tidy3D progress reporting.
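Conceptually, the concurrent install looks something like this (illustrative sketch, not the addon's actual implementation):

```python
import subprocess
import sys
import threading

def install_concurrently(packages: list[str], on_progress) -> subprocess.Popen:
    """Run `pip install` in the background, parsing its log to drive a progress callback."""
    proc = subprocess.Popen(
        [sys.executable, '-m', 'pip', 'install', *packages],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )

    def _parse_log() -> None:
        done = 0
        for line in proc.stdout:
            # Count "Collecting"/"Installing" lines as coarse progress ticks.
            if line.startswith(('Collecting ', 'Installing collected packages')):
                done += 1
                on_progress(min(done / (len(packages) + 1), 1.0))

    threading.Thread(target=_parse_log, daemon=True).start()
    return proc

install_concurrently(['tidy3d'], on_progress=lambda fraction: print(f'{fraction:.0%}'))
```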
The serialization routines are fast and effective.
Overall, the node graph feels snappy, and everything updates smoothly.
Logging on the action chain suggests that there aren't extraneous calls,
and that existing calls (ex. no-op previews) are fast.
There will likely be edge cases, and we'll see how it scales - but
for now, let's go with it!
This especially involved fixing the invalidation logic in
`trigger_action`.
It should now be far more accurate, concise, and performant.
The invalidation check ought still to be optimized.
The reason this isn't trivial is because of the loose sockets:
To use our new `@keyed_cache` on a function like `_should_recompute_output_socket`, the loose
socket would also need to do an appropriate invalidation.
Such caching without accounting for invalidation on loose-socket change
would be incorrect.
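For clarity, the rough shape `@keyed_cache` needs in order to be correct here (hypothetical sketch; the real decorator differs):

```python
import functools

def keyed_cache(key_fn):
    """Cache method results under an explicit key, with explicit invalidation."""
    def decorator(method):
        cache: dict = {}

        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            key = key_fn(self, *args, **kwargs)
            if key not in cache:
                cache[key] = method(self, *args, **kwargs)
            return cache[key]

        # Anything that changes loose sockets must call this, or answers go stale.
        wrapper.invalidate = lambda key: cache.pop(key, None)
        return wrapper
    return decorator
```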
For now, it seems as though performance is quite good, although it is
unknown whether this will scale to large graphs.
We've also left `kind`-specific invalidation alone for now (maybe
forever).
The crash: When a linked loose socket was deleted, the link remained in the
NodeLinkCache, and caused a crash when trying to ask the already-deleted
socket for removal consent. We fixed this by reporting all socket
removals to the node tree, so that links could be correctly removed
independently of link-change calculation.
The bug: The collection getter was cached improperly; Blender ID types
can't just be saved like that. We need to search every time. Performance
seems unaffected at first glance.
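The rule, sketched against the standard `bpy` API (class/property names are illustrative):

```python
import bpy

class ManagedBLCollection:
    """Store only the datablock's name; look the ID up fresh on every access."""

    def __init__(self, name: str) -> None:
        self._name = name  # never hold the ID object itself across accesses

    @property
    def collection(self) -> bpy.types.Collection:
        coll = bpy.data.collections.get(self._name)
        if coll is None:
            coll = bpy.data.collections.new(self._name)
            bpy.context.scene.collection.children.link(coll)
        return coll
```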
This is essential for:
- Representing ranges as bounds
- Arbitrary symbolic/numeric representation of spectral distributions
- Parametric representation and JIT of critical-path procedures (sketched below).
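A sketch of the parametric-JIT point with `sympy` and `jax` (symbol names are illustrative; assumes a sympy version whose `lambdify` supports the 'jax' target):

```python
import jax
import jax.numpy as jnp
import sympy as sp

# Keep the spectral distribution symbolic for as long as possible...
wl, mu, sigma = sp.symbols('wl mu sigma', real=True, positive=True)
gaussian = sp.exp(-((wl - mu) ** 2) / (2 * sigma**2))

# ...then lower it to a jitted numeric function only when values are needed.
gaussian_fn = jax.jit(sp.lambdify([wl, mu, sigma], gaussian, modules='jax'))

wls = jnp.linspace(0.4, 0.7, 512)  # ex. vacuum wavelengths in µm
print(gaussian_fn(wls, 0.55, 0.02))
```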
Unfortunately this broke a lot of nodes in small ways.
Next step is to finish the low-hanging fruit nodes + fix the ones we
have.