jcjohnson/neural-style 17698

Torch implementation of neural style algorithm

jcjohnson/pytorch-examples 3766

Simple examples to introduce PyTorch

jcjohnson/fast-neural-style 3665

Feedforward style transfer

jcjohnson/cnn-benchmarks 2316

Benchmarks for popular CNN models

jcjohnson/densecap 1406

Dense image captioning in Torch

google/sg2im 1109

Code for "Image Generation from Scene Graphs", Johnson et al, CVPR 2018

jcjohnson/cnn-vis 494

Use CNNs to generate images

gkioxari/aims2020_visualrecognition 17

AIMS 2020, class on Visual Recognition

jcjohnson/neural-animation 12

Implementing neural art on video

push event jcjohnson/website

Justin Johnson

commit sha 03a4ae6499fa366d14d71eaf092dcedaa20b210d

Long overdue website update

push time in 11 days

issue comment facebookresearch/pytorch3d

Antialiasing boundary

Thanks for the detailed explanation and reproducible code @naoto0804!

This is not a bug -- blur_radius isn't meant to be used for antialiasing; instead setting blur_radius>0 instructs the rasterizer to assign pixels outside of triangle boundaries to the triangle, up to the specified radius. This functionality is used to achieve differentiable rendering by having triangles become partially transparent outside their boundaries; to achieve this effect you'd have to use a SoftPhongShader instead of a HardPhongShader, and fiddle with the sigma and gamma values in blend_params.
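
For reference, here is a minimal sketch of that soft-shading route; the sigma, gamma, and faces_per_pixel values are just illustrative starting points (the blur_radius formula is the one used in the PyTorch3D tutorials), and it reuses the device, cameras, lights, and imports from the full example below:

from pytorch3d.renderer import SoftPhongShader

blend_params = BlendParams(sigma=1e-4, gamma=1e-4)
soft_raster_settings = RasterizationSettings(
    image_size=16,
    # Grow triangles outward so edge pixels get partial coverage
    blur_radius=np.log(1.0 / 1e-4 - 1.0) * blend_params.sigma,
    faces_per_pixel=50,
)
soft_shader = SoftPhongShader(
    device=device, cameras=cameras, lights=lights, blend_params=blend_params
)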

However if you just want antialiased edges, the easiest way is to use supersampling antialiasing: render the image at a higher resolution than you actually want, then use average pooling to reduce back to your target resolution. This is easy to implement:

import torch
import torch.nn.functional as F
import numpy as np
from skimage.io import imsave

from pytorch3d.structures import Meshes
# rendering components
from pytorch3d.renderer import (
    look_at_view_transform,
    FoVOrthographicCameras,
    PointLights,
    RasterizationSettings,
    MeshRenderer,
    MeshRasterizer,
    HardPhongShader,
    TexturesVertex,
    BlendParams
)

device = torch.device("cpu")

# Create two triangle meshes
vertices = torch.from_numpy(np.array([
    [0.5, 0.25, 0.0],
    [0.0, 0.5, 0.0],
    [-0.5, -0.25, 0.0],
    [0.0, -0.5, 0.0]
], dtype=np.float32))

faces = torch.from_numpy(np.array([
    [0, 1, 2],
    [0, 2, 3],
]))


# Black vertex colors for both triangles, shape (1, V, 3)
verts_rgb = torch.zeros_like(vertices)[None]
textures = TexturesVertex(verts_features=verts_rgb.to(device))

# Construct mesh
mesh = Meshes(
    verts=[vertices.to(device)], faces=[faces.to(device)], textures=textures)

# Supersampling factor: render at aa_factor times the target resolution,
# then average-pool back down
aa_factor = 3

# Setup renderer
R, T = look_at_view_transform(2.0, 0, 0, device=device)
cameras = FoVOrthographicCameras(R=R, T=T, device=device)
lights = PointLights(
    device=device,
    location=[[0.0, 0.0, 3.0]],
    ambient_color=((1.0, 1.0, 1.0), ),
    diffuse_color=((0.0, 0.0, 0.0), ),
    specular_color=((0.0, 0.0, 0.0), )
)
raster_settings = RasterizationSettings(
    image_size=16 * aa_factor,  # render larger, then pool down to 16x16
)

# blend_params = BlendParams(background_color=(0.,0.,0.), sigma=0.0, gamma=0.0)
blend_params = BlendParams()
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras,
        raster_settings=raster_settings
    ),
    shader=HardPhongShader(
        device=device,
        cameras=cameras,
        lights=lights,
        blend_params=blend_params
    )
)

# render image
images = renderer(mesh)
images = images.permute(0, 3, 1, 2)  # NHWC -> NCHW
images = F.avg_pool2d(images, kernel_size=aa_factor, stride=aa_factor)
images = images.permute(0, 2, 3, 1)  # NCHW -> NHWC
print(images.shape, images.dtype)
image = images.detach().cpu().numpy()[0][..., :3]
imsave('aa3.png', (255 * image).astype(np.uint8))

Here are some results with aa_factor = 1, 2, and 3: [three rendered images attached]

naoto0804

comment created time in 11 days

issue closed facebookresearch/pytorch3d

Make a bunch of methods of the `Pointclouds` class into properties

🚀 Feature

Many methods in the Pointclouds class could be made properties. For example, points_packed and points_padded are among these methods.

Motivation

  • To get rid of the annoying parentheses.
  • To prevent "accidentally" overwriting the points, normals, and features.
  • IMO, points, features, and normals are "properties" rather than "methods".

Pitch

p = Pointclouds(...)
points_padded = p.points_padded()  # current interface
points_padded = p.points_padded  # new interface

PR

If the request is granted, I would be happy to send a PR.

closed time in 15 days

justanhduc

issue commentfacebookresearch/pytorch3d

Make a bunch of methods of the `Pointclouds` class into properties

Hi @justanhduc, thanks for the suggestion! We actually considered this when we originally designed the Pointclouds and Meshes classes (and I think even had an internal version implemented in this way before the first public release).

However, we ultimately decided to stick with methods instead of properties. Even though they are conceptually similar to properties, methods like points_packed can involve nontrivial computation and may even build part of a computational graph, depending on the current state of the object. Property lookups should not, in general, have side effects, so we felt that keeping these as methods makes the potential for side effects clearer to users, and that this outweighs the minor syntactic clutter of typing an extra pair of parentheses.
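
As a hedged illustration of the tradeoff (LazyCloud is a toy class, not the actual Pointclouds internals): with a property, a lazy computation hides behind what looks like a plain attribute read, while a method call at least signals that work may happen:

import torch

class LazyCloud:
    def __init__(self, points_list):
        self._points_list = points_list
        self._points_packed = None  # computed on demand and cached

    @property
    def points_packed_property(self):
        # Reads like a cheap attribute lookup, but the first access may
        # allocate a new tensor and extend the autograd graph.
        if self._points_packed is None:
            self._points_packed = torch.cat(self._points_list, dim=0)
        return self._points_packed

    def points_packed(self):
        # The parentheses at the call site hint that computation
        # (and possible side effects) can happen here.
        if self._points_packed is None:
            self._points_packed = torch.cat(self._points_list, dim=0)
        return self._points_packed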

justanhduc

comment created time in 15 days

issue comment facebookresearch/pytorch3d

Support for higher order derivatives

I agree this is a useful feature, but I think it would be very nontrivial to implement. Adding second derivatives for packed_to_padded and interpolate_face_attributes probably wouldn't be too difficult, but I think adding second derivatives for rasterization would be a pretty big undertaking.

I also realized that rasterization isn't currently marked as once_differentiable, but it probably should be as well: https://github.com/facebookresearch/pytorch3d/blob/master/pytorch3d/renderer/mesh/rasterize_meshes.py#L214
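
For context, once_differentiable is a decorator applied to the backward of a custom autograd Function; attempting to differentiate through the backward then raises an error instead of silently producing wrong gradients. A minimal sketch (MyRasterize and its doubling forward are hypothetical stand-ins, not the real kernel):

import torch
from torch.autograd.function import once_differentiable

class MyRasterize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, verts):
        return verts * 2.0  # stand-in for a real rasterization kernel

    @staticmethod
    @once_differentiable  # double backward through this now raises an error
    def backward(ctx, grad_out):
        return grad_out * 2.0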

hectorbasevi

comment created time in 20 days

issue closed facebookresearch/pytorch3d

How can I do a faster rendering with a big mesh?

We are trying to replace the renderer in the paper 3D Photography using Context-aware Layered Depth Inpainting (another Facebook project) with PyTorch3D. The mesh contains about 1 million verts and 4 million faces. It takes about one second to render an image, whereas Vispy takes less than 0.1 seconds. Is this because of my program settings? The relevant code is as follows:

import time

import numpy as np
import torch

from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    look_at_view_transform,
    FoVPerspectiveCameras,
    RasterizationSettings,
    MeshRenderer,
    MeshRasterizer,
    TexturesVertex,
    BlendParams,
)
from pytorch3d.renderer.blending import softmax_rgb_blend

device = torch.device("cuda")


class myShader(torch.nn.Module):

    def __init__(self, device="cpu", cameras=None, blend_params=None):
        super().__init__()
        self.cameras = cameras
        self.blend_params = blend_params if blend_params is not None else BlendParams()

    def forward(self, fragments, meshes, **kwargs) -> torch.Tensor:
        cameras = kwargs.get("cameras", self.cameras)
        if cameras is None:
            msg = "Cameras must be specified either at initialization \
                or in the forward pass of TexturedSoftPhongShader"
            raise ValueError(msg)
        # get renderer output
        blend_params = kwargs.get("blend_params", self.blend_params)
        texels = meshes.sample_textures(fragments)
        images = softmax_rgb_blend(texels, fragments, blend_params)
        return images


R, T = look_at_view_transform(0.01, 180, 0)
R1 = torch.eye(3).unsqueeze(0)
R1[0, 1, 1] = -1
T1 = torch.zeros_like(T)

verts = np.load("./mesh/4029_verts.npy")
colors = np.load("./mesh/4029_colors.npy")
faces = np.load("./mesh/4029_faces.npy")

verts = torch.tensor(verts.astype(np.float32))
colors = torch.tensor(colors[:, :-1].astype(np.float32)).unsqueeze(0)
faces = torch.tensor(faces)
textures = TexturesVertex(verts_features=colors.to(device))

hFov = 2 * np.arctan((1. / 2.) * (480 / 640))
vFov = 2 * np.arctan((1. / 2.) * (640 / 640))
fov_in_rad = max(vFov, hFov)
fov = fov_in_rad * 180 / np.pi
img_h_len = 640
img_w_len = 480
cur_H = 640
cur_W = 640

mesh = Meshes(
    verts=[verts.to(device)],
    faces=[faces.to(device)],
    textures=textures
)

cameras = FoVPerspectiveCameras(device=device, fov=fov)
# blend_params = BlendParams(sigma=1e-4, gamma=1e-4)
raster_settings = RasterizationSettings(
    image_size=640,
    blur_radius=0.0,
    faces_per_pixel=1,
)

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras,
        raster_settings=raster_settings
    ),
    shader=myShader(
        device=device,
        cameras=cameras,
    )
)

R1 = R1.to(device)
T1 = T1.to(device)
a = time.time()
img = renderer(mesh, R=R1, T=T1)
print(time.time() - a)

closed time in 20 days

TSKongLingwei

issue comment facebookresearch/pytorch3d

How can I do a faster rendering with a big mesh?

Great! Closing this for now, but feel free to re-open if you have more questions.

TSKongLingwei

comment created time in 20 days

issue comment facebookresearch/pytorch3d

How can I do a faster rendering with a big mesh?

I don't see anything obvious in your settings that would give an easy speedup. You might try explicitly setting bin_size in your RasterizationSettings; if it's not specified, we set it heuristically based on the image size, but you might get a small speedup by tuning it for your task.
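
For example (the values here are arbitrary; you'd want to benchmark a few):

raster_settings = RasterizationSettings(
    image_size=640,
    blur_radius=0.0,
    faces_per_pixel=1,
    bin_size=64,              # try a few values; bin_size=0 forces the naive rasterizer
    max_faces_per_bin=50000,  # may need raising for meshes this large
)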

We did not do much testing with meshes of this size while developing our renderer, so it's very likely that there are CUDA-level improvements we could make to the rasterizer for such large meshes. However, no matter how much additional optimization we do, I don't expect we will be able to match the speed of Vispy. The reason is that Vispy relies on OpenGL under the hood, which can make use of the GPU's specialized rasterization hardware (e.g. for edge and plane equations, clipping, etc.); in contrast, PyTorch3D implements its rasterizer purely in software, which runs on the GPU's generic CUDA cores. For this reason we should always expect a significant performance gap between OpenGL rendering and pure software rendering. A 2011 paper from NVIDIA found a ~2x-8x performance gap between their software rasterizer (which is likely better optimized than the PyTorch3D rasterizer) and OpenGL.

In short, if you care only about raw rendering speed, you are probably better off sticking with Vispy. The main advantages of PyTorch3D would be (1) computing gradients; (2) running in contexts where you can't use OpenGL.

TSKongLingwei

comment created time in 20 days

issue comment madewithml/utterances

https://madewithml.com/projects/1575/deep-learning-for-computer-vision/

Looks like some of the links here are outdated. The FA2019 videos are now also on YouTube; you can also find updated assignments for the Fall 2020 version of the class.

utterances-bot

comment created time in 25 days

started mseitzer/pytorch-fid

started time in a month

issue comment facebookresearch/pytorch3d

Why not use reshape here?

Sorry, I think I misunderstood your question -- this isn't really a question about reshape vs view; it's more about how to construct face_to_edge. I think you're right -- we could construct face_to_edge more simply by reshaping inverse_idxs directly, rather than indexing into it with an auxiliary index tensor.

In that case the answer is probably because we didn't think of it ;)

Your suggestion might be a bit more efficient -- but I'm not sure whether it would make much difference in end-to-end performance, since this is probably not a bottleneck for most applications.

CharlesNord

comment created time in 2 months

issue comment facebookresearch/pytorch3d

Why not use reshape here?

If the input tensor is contiguous, then view and reshape do exactly the same thing. In this case, the output of torch.arange is a new tensor that is guaranteed to be contiguous, so view and reshape will give the same result; here I prefer view since it's shorter.

More generally, view is guaranteed to return a view of the input tensor; in contrast reshape returns a view if possible, and if not then returns a copy. I often prefer view since it makes it more explicit when copies of tensors are being made; this can make it easier to get a sense of where memory is being allocated, in case you later need to optimize memory usage.
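
A quick demonstration of the difference:

import torch

x = torch.arange(6)   # freshly created, guaranteed contiguous
a = x.view(2, 3)      # fine
b = x.reshape(2, 3)   # identical result

y = x.view(2, 3).t()  # transposing makes the tensor non-contiguous
# y.view(6) would raise a RuntimeError here
z = y.reshape(6)      # silently falls back to making a copy
print(y.is_contiguous(), z.is_contiguous())  # False True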

CharlesNord

comment created time in 2 months

started karpathy/minGPT

started time in 2 months

issue comment cs231n/cs231n.github.io

cs231n.github.io visited error!!

https://cs231n.github.io/ loads fine for me -- maybe double-check your internet connection?

ZHJD

comment created time in 2 months

started tmbdev/webdataset

started time in 2 months

issue comment facebookresearch/synsin

Neural point cloud renderer

It's part of PyTorch3D:

C++ / CUDA core: https://github.com/facebookresearch/pytorch3d/tree/master/pytorch3d/csrc/rasterize_points

Python interface: https://github.com/facebookresearch/pytorch3d/tree/master/pytorch3d/renderer/points

VitorGuizilini-TRI

comment created time in 3 months
