https://orcid.org/0009-0007-3938-379X
I am a PhD candidate in machine learning and cybersecurity. I am also the laboratory director of the AI Testing Laboratory at CLR Labs, where I work with a wonderful team of passionate, brilliant people on evaluating neural networks, from robustness to system cybersecurity and explainability. I am based in the Marseille area, between sea, mountains, and buildings.
I am a proud husband, father of one and son.
I study how deep models break and how to make them unbreakable — adversarial robustness, model security, and the geometry of learned representations applied to cyber threats.
This model visualizes gradient descent optimization, the fundamental algorithm used to train neural networks and other machine learning models. Agents represent different optimization algorithms searching for the minimum of a loss landscape (the “error surface” that ML models try to minimize during training).
The model demonstrates how different optimizer types (SGD, Momentum with different parameters) behave on various loss landscapes, from simple bowls to the notoriously difficult Rosenbrock “banana valley” function. This helps build intuition about why certain optimization algorithms work better than others for different problem geometries.
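The contrast the model illustrates can be sketched in a few lines of code. This is not the model's actual implementation, just a minimal Python comparison of plain SGD and heavy-ball momentum on the Rosenbrock function; the learning rate, momentum coefficient, starting point, and step count are illustrative choices.

```python
def rosenbrock(x, y):
    """Rosenbrock 'banana valley': minimum f(1, 1) = 0."""
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def grad(x, y):
    """Analytic gradient of the Rosenbrock function."""
    dx = -2 * (1 - x) - 400 * x * (y - x ** 2)
    dy = 200 * (y - x ** 2)
    return dx, dy

def optimize(lr=1e-3, beta=0.0, steps=2000, start=(-1.0, 1.0)):
    """beta = 0 gives plain gradient descent; beta > 0 adds momentum,
    which accumulates a velocity and coasts along the flat valley floor."""
    x, y = start
    vx = vy = 0.0
    for _ in range(steps):
        dx, dy = grad(x, y)
        vx = beta * vx - lr * dx
        vy = beta * vy - lr * dy
        x += vx
        y += vy
    return rosenbrock(x, y)

sgd_loss = optimize(beta=0.0)       # crawls along the curved valley
momentum_loss = optimize(beta=0.9)  # typically much closer to the minimum
```

In the same number of steps, momentum usually reaches a far lower loss: the valley's gentle gradient barely moves plain SGD, while momentum's accumulated velocity carries it along, which is exactly the intuition the landscape visualization is meant to build.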
…
Under development.