I'm trying to figure out how networks learn—not just how to make them learn faster, but what's actually going on when a system updates itself. Somewhere between mathematics, neuroscience, and silicon.
Background: maths degree, MSc in Embedded AI, time in industry I don't miss. Currently chasing a PhD so I can spend five years on problems that don't have quarterly deadlines. I've taped out chips, contributed to open-source EDA tools, and spent more time than is healthy thinking about equilibrium propagation.
This blog is where I write about what I'm learning: mostly neural network theory, physical learning systems, and whatever else I can't stop thinking about.
Email: thomaspluck95@gmail.com