Hey everyone,
I’ve been exploring a conceptual idea for a new kind of neural network architecture and would love to hear your thoughts or pointers to similar work if it already exists.
Instead of each node in a neural network representing a scalar or vector value, each node would itself be a small neural network (a subnetwork), potentially nested many levels deep (e.g. 10 levels of recursion, where every node at every level is itself a subnetwork). In essence, the network would have a recursive, fractal-like structure, where computation flows through nested subnetworks.
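To make that concrete, here is a rough PyTorch sketch of what I'm picturing. The name FractalUnit and its exact structure are just placeholders I made up for illustration, not an established architecture: each "node" is either a tiny MLP (base case) or, recursively, a bank of shallower nodes whose outputs get mixed back together.

```python
# Hypothetical sketch only: FractalUnit is a made-up name, not a known architecture.
import torch
import torch.nn as nn

class FractalUnit(nn.Module):
    """A 'node' that is itself a small network; above depth 0 its
    computation is built recursively out of shallower FractalUnits."""
    def __init__(self, dim: int, depth: int, width: int = 2):
        super().__init__()
        self.depth = depth
        if depth == 0:
            # Base case: an ordinary tiny MLP plays the role of a single node.
            self.inner = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
        else:
            # Recursive case: the node contains a bank of shallower units
            # whose outputs are concatenated and mixed back to `dim`.
            self.inner = nn.ModuleList(
                [FractalUnit(dim, depth - 1, width) for _ in range(width)]
            )
            self.mix = nn.Linear(width * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.depth == 0:
            return self.inner(x)
        outs = [unit(x) for unit in self.inner]
        return torch.tanh(self.mix(torch.cat(outs, dim=-1)))

if __name__ == "__main__":
    net = FractalUnit(dim=8, depth=3, width=2)   # 3 levels of nesting
    x = torch.randn(5, 8)
    print(net(x).shape)                           # torch.Size([5, 8])
    print(sum(p.numel() for p in net.parameters()), "parameters")
```

One thing this toy version already makes obvious: the parameter count grows roughly like width^depth, which is part of why I'm unsure about feasibility (see the open questions below).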
The idea is inspired by:
- Fractals / self-similarity in nature
- Recursive abstraction: like how functions can call other functions
Possible benefits:
- It might allow adaptive complexity: more expressive regions of the model where needed.
- It could encourage modular learning, compositionality, or hierarchical abstraction.
- It might help reuse patterns in different contexts or improve generalization.
Open Questions:
- Has this been tried before? (I’d love to read about it!)
- Would this be computationally feasible on today’s hardware?
- What kinds of tasks (if any) might benefit most from such an architecture?
- Any suggestions on how to prototype something like this with PyTorch or TensorFlow?
I’m not a researcher or ML expert, just a software developer with an idea who is curious about how we might rethink neural architectures by blending recursion and modularity. I’ve come across somewhat similar concepts like capsule networks, recursive neural networks, and hypernetworks, but they seem to differ substantially from what I have in mind.
Thanks in advance for any feedback, pointers, or criticism!
submitted by /u/raunchard to r/learnmachinelearning