Announcing Yarrow
I’ve recently been funded to work on my project yarrow, which provides a datastructure and data-parallel (read: GPU-accelerated) algorithms for string diagrams. A key feature of yarrow is differentiability; more on that shortly.
Quick note: yarrow is still early in development, and the API is unlikely to be stable for a few weeks. That aside, so far I’ve released:
- A paper describing the datastructures and algorithms of yarrow
- The yarrow library in Python (still in development!)
- Documentation for the library
- A homepage for the project
Within a week or so I’ll also be finishing up a more in-depth blog post about yarrow, but for now the documentation is the best place to understand what it’s about.
Differentiable Circuits and Zero-Knowledge Machine Learning
So how does yarrow relate to Zero-Knowledge ML? Well, to describe a zero-knowledge computation, you need an arithmetic circuit, and these can be represented as string diagrams.
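To make “arithmetic circuit” concrete, here is a minimal sketch in plain Python: a circuit as a flat list of add/multiply gates over a prime field, the kind of object a ZK proof system consumes. The encoding, field, and names below are illustrative assumptions of mine, not yarrow’s datastructure (yarrow represents such circuits as string diagrams).

```python
# Illustrative only: an arithmetic circuit as a flat list of gates over
# a prime field. This is NOT yarrow's representation.
from dataclasses import dataclass

P = 2**31 - 1  # an illustrative prime modulus

@dataclass
class Gate:
    op: str   # "add" or "mul"
    lhs: int  # index of the left input wire
    rhs: int  # index of the right input wire

def evaluate(inputs: list[int], gates: list[Gate]) -> int:
    """Wires 0..len(inputs)-1 hold the inputs; each gate adds one wire."""
    wires = list(inputs)
    for g in gates:
        a, b = wires[g.lhs], wires[g.rhs]
        wires.append((a + b) % P if g.op == "add" else (a * b) % P)
    return wires[-1]

# The circuit for f(x0, x1) = x0 * x1 + x0:
circuit = [Gate("mul", 0, 1), Gate("add", 2, 0)]
print(evaluate([3, 4], circuit))  # 3*4 + 3 = 15
```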
In fact, we wrote a paper doing not just that, but also showing how the gradients of such circuits can be computed efficiently via “reverse derivatives”1.
Ultimately, this means that yarrow will2 be able to:
- Represent a ZK computation as an arithmetic circuit \(c\)
- Transform the circuit \(c\) into a circuit \(d\) which computes a single step of gradient descent
The idea is to allow for zero-knowledge training of models.
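To give a flavour of what “a circuit which computes a single step of gradient descent” means, here is a hedged sketch in plain Python rather than in yarrow (whose API for this is still in development). The names f, rf, and gd_step and the toy one-parameter circuit are illustrative assumptions of mine, not yarrow’s API.

```python
# Sketch only: a gradient-descent step built from a reverse derivative.
# For f : A -> B, the reverse derivative R[f] : A x B -> A takes a point
# of A and an output gradient, and returns an input gradient.

def f(params):
    """Toy arithmetic circuit: f(w) = w * w."""
    (w,) = params
    return w * w

def rf(params, dy):
    """Reverse derivative of f: R[f](w, dy) = 2 * w * dy."""
    (w,) = params
    return (2 * w * dy,)

def gd_step(params, target, lr=0.01):
    """One gradient-descent step on the squared loss (f(params) - target)^2."""
    y = f(params)
    dy = 2 * (y - target)    # gradient of the loss at the circuit's output
    grads = rf(params, dy)   # pull it back through the circuit
    return tuple(p - lr * g for p, g in zip(params, grads))

params = (3.0,)
for _ in range(100):
    params = gd_step(params, target=4.0)
print(params)  # approaches (2.0,), since f(2) = 4
```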
This is made possible by a key feature of yarrow: it natively handles a certain kind of hypergraph structure. This means that, given a circuit \(f\), we can compute an “optic” diagram which pairs the forwards computation of \(f\) with a backwards pass.
Roughly speaking, in the specific case of machine learning with neural networks, the “backwards” arrows in these diagrams carry the gradients of the model. For a more formal description of this process, see Section 10 of the paper.
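To make the “backwards” part precise: as defined in the Reverse Derivative Categories paper, a map \(f : A \to B\) has a reverse derivative \(R[f] : A \times B \to A\), which takes a point of \(A\) and a gradient at the output and returns a gradient at the input. Composites then obey the chain rule

\[ R[g \circ f](a, c) = R[f]\bigl(a,\, R[g](f(a), c)\bigr) \]

which is exactly what the backwards arrows of the optic diagram compute: pull the gradient back through \(g\) at \(f(a)\), then through \(f\) at \(a\).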
That’s it for now! If you’d like to stay updated, follow me on Twitter or check back later!
First defined in the paper Reverse Derivative Categories.↩︎
Feature still in development!↩︎