Rendering realistic 3D scenes requires rich, high-quality material datasets, which are currently scarce. We propose to leverage recent progress in computer vision and generative models to synthesize materials (color maps and normal maps) at large scale from a data-driven perspective. Our pipeline couples generative models that synthesize texture color maps with a prediction model that maps each color map to a corresponding normal map. The resulting color and normal maps are large, seamless, realistic, and diverse, and can be rendered directly in 3D scenes.
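To make the two-stage structure concrete, the sketch below shows one way such a pipeline could be wired together. It is purely illustrative: the abstract does not specify the model architectures, so both the latent-to-color generator and the color-to-normal predictor here are hypothetical stand-ins (simple convolutional networks in PyTorch), not the method described in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of the two-stage pipeline: stage 1 samples a
# color map from a generative model, stage 2 predicts a normal map
# from that color map. Both networks are illustrative stand-ins.

class ColorMapGenerator(nn.Module):
    """Stand-in generator: maps a latent vector to an RGB color map."""
    def __init__(self, latent_dim=128, size=256):
        super().__init__()
        self.size = size
        self.fc = nn.Linear(latent_dim, 64 * (size // 8) ** 2)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, self.size // 8, self.size // 8)
        return self.deconv(h)  # (B, 3, size, size) RGB in [0, 1]

class NormalMapPredictor(nn.Module):
    """Illustrative color-to-normal predictor (image-to-image CNN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, color):
        n = self.net(color)           # raw 3-channel output
        return F.normalize(n, dim=1)  # unit-length per-pixel normals

# Sample a color map, then predict its normal map.
gen, pred = ColorMapGenerator(), NormalMapPredictor()
z = torch.randn(1, 128)
color_map = gen(z)
normal_map = pred(color_map)
print(color_map.shape, normal_map.shape)  # both (1, 3, 256, 256)
```

The key design point the sketch captures is the decoupling: the generative stage only has to model color appearance, while the second stage treats normal-map estimation as a supervised image-to-image prediction problem conditioned on the generated color map.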