Wesley Ladd

I build technology that has to work in the field, then ask why the science behind it can't tell me how wrong it is.
That tension is the thread connecting everything I do. As CTO of Polaris EcoSystems, I lead technology for a professional services firm that works in tribal energy, conservation planning, and infrastructure. That work led us to build custom computer vision tools for infrastructure inspection and time-lapse analysis, which led to 3D reconstruction, which led to prototyping our own sensor hardware, which led to learning metrology. Each step pulled me deeper into the same question: when does a model's output become a measurement you can defend?
My research sits at the intersection of 3D reconstruction and measurement science.
Monocular depth estimation, SLAM, and multi-view reconstruction produce geometry that looks convincing but lacks the calibrated uncertainty, sensor traceability, and error characterization that metrology requires. I believe closing that gap will unlock new frontiers in computer perception, and by extension, in what AI-based systems are capable of when they have to be right, not just plausible. I lay out the research direction in Your Depth Map Is Not a Measurement and the epistemological scaffolding in The State Space of Research.
At LSU, I teach and research at the intersection of AI, audit, and cyber risk.
I'm Associate Director of the Center for Internal Auditing and Cybersecurity Risk Management in the E.J. Ourso College of Business, where I teach internal audit, cybersecurity risk management, AI, and ESG to undergraduate and graduate students. I also serve as faculty advisor for the AI Club at LSU. My research interest is automating assessment: cyber risk assessment, where the tools are large language models and reasoning systems, and physical infrastructure assessment, where the tools are computer vision and 3D reconstruction. The underlying problem is the same: how do you trust an automated system's judgment when the cost of being wrong is high?
I write for people who need to use AI without becoming AI researchers.
Auditors, engineers, attorneys, physicians, board members. Professionals who carry liability for their decisions and need to evaluate AI systems without treating abstract model scores as a substitute for accountability. That's the audience for Practical AI for Professionals: Understanding, Using, and Surviving AI (Chapman & Hall / CRC Press, 2026), which I coauthored with William Yarberry.
The deeper question is whether our tools for evaluating AI have any epistemic authority at all.
Benchmarks, ablations, and peer review are the instruments the field uses to decide what counts as progress. But they are instruments designed by and for human researchers, and there is a real question about whether they survive the transition to machine-speed science. That question isn't abstract. It determines what “trustworthy AI” can even mean, and it connects the metrology, the governance work, and the teaching into a single program.
© 2026 Wesley Ladd. All rights reserved.
Last updated: 3/24/2026