pycvcam.Cv2Extrinsic#
Cv2Extrinsic Class#
- class Cv2Extrinsic(parameters=None, constants=None)[source]#
Subclass of the pycvcam.core.Extrinsic class that represents the OpenCV extrinsic model.

Note

This class represents the extrinsic transformation, which is the first step of the process from the world_points to the image_points. The Cv2Extrinsic model is the composition of a change of reference frame and a normalization.

Let the world_points be given in the global coordinate system as \(\vec{X}_w = (X_w, Y_w, Z_w)\). The corresponding normalized_points \(\vec{x}_n\) in the camera normalized coordinate system are obtained by:

\[\vec{X}_c = R \cdot \vec{X}_w + T\]

\[\vec{x}_n = \frac{\vec{X}_c}{Z_c}\]

where \(R\) is the rotation matrix, \(T\) is the translation vector, and \(Z_c\) is the depth of the point in the camera coordinate system.
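The two equations above can be sketched in plain NumPy. This is a minimal illustration of the math, not the pycvcam implementation; the `rodrigues` helper is hand-rolled here (in practice `cv2.Rodrigues` does the same conversion):

```python
import numpy as np

def rodrigues(rvec):
    """Convert a rotation vector to a 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def extrinsic_transform(world_points, rvec, tvec):
    """Apply X_c = R @ X_w + T, then divide by the depth Z_c."""
    R = rodrigues(np.asarray(rvec, dtype=float))
    X_c = world_points @ R.T + tvec          # camera coordinates, shape (..., 3)
    return X_c[..., :2] / X_c[..., 2:3]      # normalized points, shape (..., 2)
```

With a zero rotation vector and zero translation, this reduces to \((X_w / Z_w, Y_w / Z_w)\), which is a quick sanity check for the formulas.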
Note
To compute the rotation vector and the translation vector, you can use cv2.Rodrigues() or py3dframe.Frame with convention 4.

See also

Package py3dframe (Artezaru/py3dframe) for the implementation of the 3D frame and the rotation vector.

This transformation is characterized by 6 parameters and 0 constants:
3 parameters as rotation vector \(\vec{rvec} = (r_x, r_y, r_z)\).
3 parameters as translation vector \(\vec{tvec} = (t_x, t_y, t_z)\).
Two short-hand notations are provided to access the Jacobians with respect to the rotation vector and the translation vector in the results class:

jacobian_dr: The Jacobian of the normalized points with respect to the rotation vector. It has shape (…, 2, 3).

jacobian_dt: The Jacobian of the normalized points with respect to the translation vector. It has shape (…, 2, 3).
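Since the parameters concatenate the rotation vector and the translation vector, these short-hands presumably correspond to slices of the full parameter Jacobian jacobian_dp of shape (…, 2, 6). A sketch of that slicing, under this assumption and with dummy data:

```python
import numpy as np

# Hypothetical full parameter Jacobian for 4 points: shape (n_points, 2, 6)
jacobian_dp = np.arange(4 * 2 * 6, dtype=float).reshape(4, 2, 6)

# The short-hand views split the last axis: rotation columns first, then translation.
jacobian_dr = jacobian_dp[..., 0:3]   # d(normalized)/d(rvec), shape (4, 2, 3)
jacobian_dt = jacobian_dp[..., 3:6]   # d(normalized)/d(tvec), shape (4, 2, 3)
```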
- Parameters:
parameters (Optional[numpy.ndarray]) – The parameters of the extrinsic transformation. It should be a numpy array of shape (6,) containing the rotation vector and translation vector concatenated.
constants (Optional[None])
Instantiate a Cv2Extrinsic object#
The pycvcam.Cv2Extrinsic class can be instantiated using:

a rotation vector and a translation vector (rvec, tvec).

a \(4 \times 4\) transformation matrix.

a frame of reference associated with the extrinsic parameters.
Class method to create a Cv2Extrinsic object from a 3D frame.

Class method to create a Cv2Extrinsic object from a rotation vector and a translation vector.

Class method to create a Cv2Extrinsic object from a 4x4 transformation matrix.
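For the transformation-matrix route, the expected input is presumably the homogeneous 4x4 matrix assembled from \(R\) and \(T\) as described by the equations above. A minimal sketch of building such a matrix (the example rotation is arbitrary; any valid rotation matrix would do):

```python
import numpy as np

# Example rotation (90 degrees about z) and translation vector.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([0.5, 0.5, 0.5])

# Homogeneous 4x4 transformation matrix: [[R, T], [0, 0, 0, 1]]
M = np.eye(4)
M[:3, :3] = R
M[:3, 3] = T
```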
Accessing the parameters of Cv2Extrinsic objects#
The parameters and constants properties can be accessed using pycvcam.core.Transform methods.
Some additional convenience methods are provided to access commonly used parameters of the Cv2Extrinsic model:
Get or set the 3D frame of the extrinsic transformation.

Get or set the rotation vector.

Get the 4x4 transformation matrix of the extrinsic transformation.

Get or set the translation vector.
Performing projections with Cv2Extrinsic objects#
The transform and inverse_transform methods can be used to perform projections and unprojections using the Cv2Extrinsic model (as described in the pycvcam.core.Transform documentation).
The implementation of these transformations and more details on the available options can be found in the following methods:

Compute the transformation from the world_points to the normalized_points.

Compute the transformation from the normalized_points to the world_points.

Computes the rays from the camera to the scene for the extrinsic model in the world coordinate system.
Examples#
Create an extrinsic object with a rotation vector and a translation vector:
import numpy
from pycvcam import Cv2Extrinsic
rvec = numpy.array([0.1, 0.2, 0.3])
tvec = numpy.array([0.5, 0.5, 0.5])
extrinsic = Cv2Extrinsic.from_rt(rvec, tvec)
Then you can use the extrinsic object to transform world_points to normalized_points:
world_points = numpy.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12]]) # shape (n_points, 3)
result = extrinsic.transform(world_points)
normalized_points = result.normalized_points # shape (n_points, 2)
print(normalized_points)
You can also access the Jacobians of the extrinsic transformation:
result = extrinsic.transform(world_points, dx=True, dp=True)
normalized_points_dx = result.jacobian_dx # Shape (n_points, 2, 3)
normalized_points_dp = result.jacobian_dp # Shape (n_points, 2, 6)
print(normalized_points_dx)
print(normalized_points_dp)
The inverse transformation can be computed using the inverse_transform method. By default, the depth is assumed to be 1.0 for all points, but you can provide a specific depth for each point with shape (…,).
depth = numpy.array([1.0, 2.0, 3.0, 4.0]) # Example depth values for each point
inverse_result = extrinsic.inverse_transform(normalized_points, dx=True, dp=True, depth=depth)
world_points = inverse_result.world_points # Shape (n_points, 3)
print(world_points)
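Conceptually, the inverse presumably lifts each normalized point back to camera coordinates using the given depth, then undoes the rigid motion: \(\vec{X}_c = Z_c \cdot (x_n, y_n, 1)\) followed by \(\vec{X}_w = R^T (\vec{X}_c - T)\). A NumPy sketch under that assumption (not the pycvcam implementation):

```python
import numpy as np

def inverse_extrinsic(normalized_points, R, tvec, depth=1.0):
    """Lift normalized points to world coordinates:
    X_c = depth * (x_n, y_n, 1), then X_w = R^T @ (X_c - T)."""
    x = np.asarray(normalized_points, dtype=float)
    d = np.broadcast_to(np.asarray(depth, dtype=float), x.shape[:-1])
    ones = np.ones(x.shape[:-1] + (1,))
    X_c = np.concatenate([x, ones], axis=-1) * d[..., None]  # shape (..., 3)
    return (X_c - tvec) @ R  # row-vector form of R^T @ (X_c - T)
```

With the identity rotation and zero translation, a normalized point \((x_n, y_n)\) at depth \(Z_c\) maps back to \((Z_c x_n, Z_c y_n, Z_c)\), which round-trips with the forward equations above.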
Note
The jacobian with respect to the depth is not computed.
See also
For more information about the transformation process, see:
pycvcam.Cv2Extrinsic._transform() to transform the world_points to normalized_points.

pycvcam.Cv2Extrinsic._inverse_transform() to transform the normalized_points back to world_points.