Titas Anciukevičius
I work on research problems combining generative models, computer graphics, and computer vision. My primary focus is on learning a generative model that understands and represents the world around us.
I am a Research Scientist at Waymo working on World Modelling. Previously, I was a Research Scientist at Meta and a Visiting Researcher at KAUST with Jürgen Schmidhuber. I completed my PhD at the University of Edinburgh, where I was advised by Hakan Bilen and Chris Williams.
Email /
Scholar /
Bluesky /
Github /
LinkedIn
All Life is Problem Creation: Learning to Generate Environments that Maximize Performance Gain
Titas Anciukevičius,
Yuhui Wang,
Piotr Piękos,
Li Nanbo,
Wenyi Wang,
Jürgen Schmidhuber
NeurIPS Workshop SEA, 2025
PDF
/
OpenReview
We propose a framework where a generative Proposer agent learns to create environments that explicitly maximize a Solver agent's performance gain on a target task. By conditioning on the Solver's policy, our method generates adaptive curricula that accelerate learning on both in-distribution and out-of-distribution tasks.
Denoising Diffusion via Image-Based Rendering
Titas Anciukevičius,
Fabian Manhardt,
Federico Tombari,
Paul Henderson
ICLR, 2024
PDF
/
project page
/
arXiv
/
OpenReview
We introduce the first diffusion model that can perform fast, detailed reconstruction and generation of real-world 3D scenes. Our method uses a novel neural scene representation, IB-planes, that can efficiently represent large 3D scenes, and learns a denoising-diffusion prior over this representation using only 2D images.
Sampling 3D Gaussian Scenes in Seconds with Latent Diffusion Models
Paul Henderson,
Melonie de Almeida,
Daniela Ivanova,
Titas Anciukevičius
arXiv, 2024
PDF
/
project page
/
arXiv
A latent diffusion model over 3D scenes that can be trained using only 2D image data. Our approach generates 3D scenes in as little as 0.2 seconds using 3D Gaussian splats, running an order of magnitude faster than earlier NeRF-based generative models.
RenderDiffusion: Image Diffusion for 3D Reconstruction, Inpainting and Generation
Titas Anciukevičius,
Zexiang Xu,
Matthew Fisher,
Paul Henderson,
Hakan Bilen,
Niloy J. Mitra,
Paul Guerrero
CVPR, 2023
PDF
/
project page
/
arXiv
We present the first diffusion model for 3D generation and inference that can be trained using only monocular 2D supervision. Our method uses a novel image denoising architecture that generates and renders an intermediate 3D representation in each denoising step, enabling view-consistent 3D reconstruction, generation, and inpainting.
Learning to Predict Keypoints and Structure of Articulated Objects without Supervision
Titas Anciukevičius,
Paul Henderson,
Hakan Bilen
ICPR, 2022
IEEE
We learn to predict the keypoints and structure of articulated objects from images alone, without any supervision.
Unsupervised Causal Generative Understanding of Images
Titas Anciukevičius,
Patrick Fox-Roberts,
Edward Rosten,
Paul Henderson
NeurIPS, 2022
PDF
/
NeurIPS
A causal generative model for unsupervised object-centric 3D scene understanding that generalizes robustly to out-of-distribution images. Our model explicitly represents objects as separate neural radiance fields and accurately reconstructs scene geometry without supervision.
Object-Centric Image Generation with Factored Depths, Locations, and Appearances
Titas Anciukevičius,
Christoph H. Lampert,
Paul Henderson
ICML Workshop, 2020
PDF
/
arXiv
A generative model that explicitly reasons over objects in images, learning structured latent representations that separate objects from each other and from the background. The model can be trained from images alone, without object masks or depth information.