pycvcam.OrthographicExtrinsic#

OrthographicExtrinsic Class#

class OrthographicExtrinsic(parameters=None, constants=None)[source]#

Subclass of the pycvcam.core.Extrinsic class that represents the orthographic extrinsic transformation.

Note

This class represents the extrinsic transformation, which is the first step of the process from the world_points to the image_points.

The OrthographicExtrinsic model is the composition of a change of reference frame and an orthographic projection.

Let us consider world_points in the global coordinate system \(\vec{X}_w = (X_w, Y_w, Z_w)\); the corresponding normalized_points \(\vec{x}_n\) in the camera normalized coordinate system can be obtained by:

\[\vec{X}_c = R \cdot \vec{X}_w + T\]
\[\vec{x}_n = (X_c, Y_c)\]

where \(R\) is the rotation matrix, \(T\) is the translation vector, and \(Z_c\), the depth of the point in the camera coordinate system, is ignored.
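These two equations can be sketched with plain numpy, using the Rodrigues formula to convert the rotation vector into a matrix. This is a standalone illustration of the math above, not the pycvcam implementation:

```python
import numpy

def rodrigues(rvec):
    """Convert a rotation vector to a 3x3 rotation matrix (Rodrigues formula)."""
    theta = numpy.linalg.norm(rvec)
    if theta < 1e-12:
        return numpy.eye(3)
    k = rvec / theta  # unit rotation axis
    K = numpy.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])  # cross-product matrix of k
    return numpy.eye(3) + numpy.sin(theta) * K + (1.0 - numpy.cos(theta)) * (K @ K)

def orthographic_extrinsic(world_points, rvec, tvec):
    """Apply X_c = R . X_w + T, then keep (X_c, Y_c) and drop the depth Z_c."""
    R = rodrigues(numpy.asarray(rvec, dtype=float))
    camera_points = world_points @ R.T + tvec  # shape (n_points, 3)
    return camera_points[:, :2]                # shape (n_points, 2)
```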

Note

To compute the translation vector and the rotation vector, you can use cv2.Rodrigues() or py3dframe.Frame with convention 4.

See also

Package py3dframe (Artezaru/py3dframe) for the implementation of the 3D frame and the rotation vector.

This transformation is characterized by 6 parameters and 0 constants:

  • 3 parameters as rotation vector \(\vec{rvec} = (r_x, r_y, r_z)\).

  • 3 parameters as translation vector \(\vec{tvec} = (t_x, t_y, t_z)\).
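As a minimal sketch, the (6,) parameters vector is simply the concatenation of these two vectors:

```python
import numpy

rvec = numpy.array([0.1, 0.2, 0.3])  # rotation vector (r_x, r_y, r_z)
tvec = numpy.array([0.5, 0.5, 0.5])  # translation vector (t_x, t_y, t_z)

# The parameters vector concatenates rvec and tvec, giving shape (6,).
parameters = numpy.concatenate([rvec, tvec])
```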

Two short-hand notations are provided to access the Jacobians with respect to the rotation vector and the translation vector in the result class:

  • jacobian_dr: The Jacobian of the normalized points with respect to the rotation vector. It has shape (…, 2, 3).

  • jacobian_dt: The Jacobian of the normalized points with respect to the translation vector. It has shape (…, 2, 3).
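Since \(\vec{x}_n\) is simply the first two components of \(R \cdot \vec{X}_w + T\), the Jacobian with respect to the translation vector is the constant matrix [[1, 0, 0], [0, 1, 0]]. A standalone finite-difference sketch (not calling pycvcam) confirms this:

```python
import numpy

def normalized_point(world_point, R, tvec):
    """Forward model: keep the first two components of R @ X_w + T."""
    return (R @ world_point + tvec)[:2]

# Central finite differences of x_n with respect to the translation vector.
R = numpy.eye(3)  # the result is independent of the rotation
world_point = numpy.array([1.0, 2.0, 3.0])
tvec = numpy.array([0.5, 0.5, 0.5])
eps = 1e-6

jacobian_dt = numpy.zeros((2, 3))
for j in range(3):
    dt = numpy.zeros(3)
    dt[j] = eps
    jacobian_dt[:, j] = (normalized_point(world_point, R, tvec + dt)
                         - normalized_point(world_point, R, tvec - dt)) / (2 * eps)
# jacobian_dt is [[1, 0, 0], [0, 1, 0]] for every point
```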

Parameters:
  • parameters (Optional[numpy.ndarray]) – The parameters of the extrinsic transformation. It should be a numpy array of shape (6,) containing the rotation vector and translation vector concatenated.

  • constants (Optional[None])

Instantiate an OrthographicExtrinsic object#

The pycvcam.OrthographicExtrinsic class can be instantiated using:

  • a rotation and translation vector (rvec, tvec).

  • a \(4 \times 4\) transformation matrix.

  • a frame of reference associated with the extrinsic parameters.

OrthographicExtrinsic.from_frame(frame)

Class method to create an OrthographicExtrinsic object from a 3D frame.

OrthographicExtrinsic.from_rt(rvec, tvec)

Class method to create an OrthographicExtrinsic object from a rotation vector and a translation vector.

OrthographicExtrinsic.from_tmatrix(tmatrix)

Class method to create an OrthographicExtrinsic object from a 4x4 transformation matrix.
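For from_tmatrix, a homogeneous matrix in the usual [[R, t], [0, 0, 0, 1]] layout can be assembled as follows. This is a sketch under that assumed convention; check the method's documentation for the exact layout it expects:

```python
import numpy

def make_tmatrix(R, tvec):
    """Assemble a 4x4 homogeneous transformation matrix [[R, t], [0, 0, 0, 1]]."""
    tmatrix = numpy.eye(4)
    tmatrix[:3, :3] = R
    tmatrix[:3, 3] = tvec
    return tmatrix

tmatrix = make_tmatrix(numpy.eye(3), numpy.array([0.5, 0.5, 0.5]))
# extrinsic = OrthographicExtrinsic.from_tmatrix(tmatrix)
```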

Accessing the parameters of OrthographicExtrinsic objects#

The parameters and constants properties can be accessed using pycvcam.core.Transform methods. Some additional convenience methods are provided to access commonly used parameters of the OrthographicExtrinsic model:

OrthographicExtrinsic.frame

Get or set the 3D frame of the extrinsic transformation.

OrthographicExtrinsic.rotation_vector

Get or set the rotation vector rvec of the extrinsic transformation.

OrthographicExtrinsic.transformation_matrix

Get the 4x4 transformation matrix of the extrinsic transformation.

OrthographicExtrinsic.translation_vector

Get or set the translation vector tvec of the extrinsic transformation.

Performing projections with OrthographicExtrinsic objects#

The transform and inverse_transform methods can be used to perform projections and unprojections using the OrthographicExtrinsic model (as described in the pycvcam.core.Transform documentation).

The implementation of these transformations, and more details on the available options, can be found in the following methods:

OrthographicExtrinsic._transform(world_points, *)

Compute the transformation from the world_points to the normalized_points.

OrthographicExtrinsic._inverse_transform(...)

Compute the transformation from the normalized_points to the world_points.

OrthographicExtrinsic._compute_rays(...)

Compute the rays from the camera to the scene for the extrinsic model in the world coordinate system.
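For an orthographic model, all viewing rays are parallel: in the camera frame, a normalized point (x_n, y_n) corresponds to the line (x_n, y_n, Z_c) as the depth Z_c varies. The following standalone sketch illustrates the geometry implied by the model equations; the exact origin/direction convention used by _compute_rays may differ:

```python
import numpy

def orthographic_ray(normalized_point, R, tvec):
    """World-frame ray origin and direction for an orthographic camera.

    In the camera frame the point sweeps (x_n, y_n, Z_c) as the depth Z_c
    varies, so every ray shares the world-frame direction R.T @ e_z.
    """
    x_n, y_n = normalized_point
    origin = R.T @ (numpy.array([x_n, y_n, 0.0]) - tvec)  # point at depth Z_c = 0
    direction = R.T @ numpy.array([0.0, 0.0, 1.0])        # common ray direction
    return origin, direction
```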

Examples#

Create an extrinsic object with a rotation vector and a translation vector:

import numpy
from pycvcam import OrthographicExtrinsic

rvec = numpy.array([0.1, 0.2, 0.3])
tvec = numpy.array([0.5, 0.5, 0.5])

extrinsic = OrthographicExtrinsic.from_rt(rvec, tvec)

Then you can use the extrinsic object to transform world_points to normalized_points:

world_points = numpy.array([[1, 2, 3],
                            [4, 5, 6],
                            [7, 8, 9],
                            [10, 11, 12]]) # shape (n_points, 3)

result = extrinsic.transform(world_points)
normalized_points = result.normalized_points # shape (n_points, 2)
print(normalized_points)

You can also access the Jacobians of the extrinsic transformation:

result = extrinsic.transform(world_points, dx=True, dp=True)
normalized_points_dx = result.jacobian_dx  # Shape (n_points, 2, 3)
normalized_points_dp = result.jacobian_dp  # Shape (n_points, 2, 6)
print(normalized_points_dx)
print(normalized_points_dp)

The inverse transformation can be computed using the inverse_transform method. By default, the depth is assumed to be 1.0 for all points, but you can provide a specific depth for each point with shape (…,).

depth = numpy.array([1.0, 2.0, 3.0, 4.0])  # Example depth values for each point

inverse_result = extrinsic.inverse_transform(normalized_points, dx=True, dp=True, depth=depth)
world_points = inverse_result.world_points  # Shape (n_points, 3)
print(world_points)
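Inverting the model equations gives \(\vec{X}_w = R^T \cdot (\vec{X}_c - T)\) with \(\vec{X}_c = (x_n, y_n, \text{depth})\). A standalone numpy sketch of this inverse (an illustration of the math, not the pycvcam implementation):

```python
import numpy

def orthographic_inverse(normalized_points, R, tvec, depth):
    """Invert X_c = R @ X_w + T, rebuilding X_c = (x_n, y_n, depth) first."""
    camera_points = numpy.column_stack([normalized_points, depth])  # shape (n_points, 3)
    return (camera_points - tvec) @ R  # row-wise R.T @ (X_c - T)
```

A point transformed forward and then inverted with its true depth is recovered exactly, which is a convenient sanity check for the depth argument.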

Note

The jacobian with respect to the depth is not computed.

See also

For more information about the transformation process, see: