pycvcam.Cv2Intrinsic#

Cv2Intrinsic Class#

class Cv2Intrinsic(parameters=None, constants=None)[source]#

Subclass of the pycvcam.core.Intrinsic class that represents the OpenCV intrinsic model.

Note

This class represents the intrinsic transformation, which is the last step of the process from the world_points to the image_points.

The Cv2Intrinsic model uses an intrinsic matrix to transform the distorted_points into the image_points.

The equation used for the intrinsic transformation is:

\[\begin{split}\begin{align*} \vec{x}_i &= K \cdot \vec{x}_d \\ \end{align*}\end{split}\]

where \(\vec{x}_d\) is the distorted points, \(\vec{x}_i\) is the image points, and \(K\) is the intrinsic matrix defined as:

\[\begin{split}K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}\end{split}\]

where \(f_x\) and \(f_y\) are the focal lengths in pixels in x and y direction, \(c_x\) and \(c_y\) are the principal point coordinates in pixels.
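The matrix product above can be checked with a minimal numpy sketch (the sample values for \(f_x, f_y, c_x, c_y\) are assumptions for illustration, not defaults of the library):

```python
import numpy

# Intrinsic matrix built from assumed sample values (f_x, f_y, c_x, c_y)
fx, fy, cx, cy = 1000.0, 1000.0, 320.0, 240.0
K = numpy.array([[fx, 0.0, cx],
                 [0.0, fy, cy],
                 [0.0, 0.0, 1.0]])

# A distorted point in homogeneous coordinates (x_d, y_d, 1)
x_d = numpy.array([0.1, -0.2, 1.0])

# x_i = K . x_d ; the last homogeneous component stays 1
x_i = K @ x_d
print(x_i[:2])  # pixel coordinates: fx * 0.1 + cx = 420, fy * -0.2 + cy = 40
```

Because the last row of \(K\) is \((0, 0, 1)\), the transformation is affine in each coordinate: \(x_i = f_x x_d + c_x\) and \(y_i = f_y y_d + c_y\).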

Note

If no distortion is applied, the distorted_points are equal to the normalized_points.

Warning

No skew parameter is included in the intrinsic matrix in this implementation. If you need to include a skew parameter, you can use pycvcam.SkewIntrinsic.

This transformation is characterized by 4 parameters and 0 constants:

  • 2 parameters as focal length \(\vec{f} = (f_x, f_y)\).

  • 2 parameters as principal point \(\vec{c} = (c_x, c_y)\).

Two short-hand notations are provided in the results class to access the Jacobian with respect to the focal length and principal point:

  • jacobian_df: The Jacobian of the image points with respect to the focal length parameters. It has shape (…, 2, 2), where the last dimension represents (df_x, df_y).

  • jacobian_dc: The Jacobian of the image points with respect to the principal point parameters. It has shape (…, 2, 2), where the last dimension represents (dc_x, dc_y).
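Since \(x_i = f_x x_d + c_x\) and \(y_i = f_y y_d + c_y\), both Jacobians have a closed form: the derivative with respect to the focal length is a diagonal matrix holding the distorted coordinates, and the derivative with respect to the principal point is the identity. A numpy sketch of these analytic expressions (not the pycvcam implementation, just the math above):

```python
import numpy

def intrinsic_jacobians(distorted_points):
    """Analytic Jacobians of the image points w.r.t. f = (f_x, f_y)
    and c = (c_x, c_y), for points of shape (n_points, 2)."""
    n = distorted_points.shape[0]
    x_d, y_d = distorted_points[:, 0], distorted_points[:, 1]
    jacobian_df = numpy.zeros((n, 2, 2))
    jacobian_df[:, 0, 0] = x_d  # d(x_i)/d(f_x) = x_d
    jacobian_df[:, 1, 1] = y_d  # d(y_i)/d(f_y) = y_d
    # d(x_i)/d(c_x) = d(y_i)/d(c_y) = 1, off-diagonal terms vanish
    jacobian_dc = numpy.broadcast_to(numpy.eye(2), (n, 2, 2)).copy()
    return jacobian_df, jacobian_dc

df, dc = intrinsic_jacobians(numpy.array([[0.1, -0.2]]))
```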

Note

The Cv2Intrinsic class can be instantiated in 2 different ways:

  • Passing the parameters directly to the __init__ method as a numpy array of shape (4,) containing the focal length and principal point concatenated.

  • Using the classmethod from_matrix to set the intrinsic matrix.
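The two forms carry the same information. A plain numpy sketch of the correspondence between the parameter vector (f_x, f_y, c_x, c_y) and the \(3 \times 3\) matrix (helper names here are hypothetical, for illustration only):

```python
import numpy

def matrix_from_parameters(params):
    """Build the 3x3 intrinsic matrix from (f_x, f_y, c_x, c_y)."""
    f_x, f_y, c_x, c_y = params
    return numpy.array([[f_x, 0.0, c_x],
                        [0.0, f_y, c_y],
                        [0.0, 0.0, 1.0]])

def parameters_from_matrix(K):
    """Extract (f_x, f_y, c_x, c_y) from the intrinsic matrix."""
    return numpy.array([K[0, 0], K[1, 1], K[0, 2], K[1, 2]])

params = numpy.array([1000.0, 1000.0, 320.0, 240.0])
K = matrix_from_parameters(params)
assert numpy.allclose(parameters_from_matrix(K), params)  # round-trip
```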

Parameters:
  • parameters (Optional[numpy.ndarray]) – The parameters of the intrinsic transformation. It should be a numpy array of shape (4,) containing the focal length and principal point concatenated.

  • constants (Optional[None])

Instantiate a Cv2Intrinsic object#

The pycvcam.Cv2Intrinsic class can be instantiated using:

  • a \(3 \times 3\) intrinsic matrix.

Cv2Intrinsic.from_matrix(intrinsic_matrix)

Class method to create a Cv2Intrinsic object from an intrinsic matrix.

Accessing the parameters of Cv2Intrinsic objects#

The parameters and constants properties can be accessed using pycvcam.core.Transform methods. Some additional convenience methods are provided to access commonly used parameters of the Cv2Intrinsic model:

Cv2Intrinsic.focal_length_x

Get or set the focal length fx of the intrinsic transformation.

Cv2Intrinsic.focal_length_y

Get or set the focal length fy of the intrinsic transformation.

Cv2Intrinsic.intrinsic_matrix

Get or set the intrinsic matrix of the intrinsic transformation.

Cv2Intrinsic.intrinsic_vector

Get or set the intrinsic vector of the intrinsic transformation.

Cv2Intrinsic.principal_point_x

Get or set the principal point cx of the intrinsic transformation.

Cv2Intrinsic.principal_point_y

Get or set the principal point cy of the intrinsic transformation.

Performing projections with Cv2Intrinsic objects#

The transform and inverse_transform methods can be used to perform intrinsic transformations using the Cv2Intrinsic model (as described in the pycvcam.core.Transform documentation).

The implementation of these transformations and more details on the options available can be found in the following methods:

Cv2Intrinsic._transform(distorted_points, *)

Compute the transformation from the distorted_points to the image_points.

Cv2Intrinsic._inverse_transform(image_points, *)

Compute the inverse transformation from the image_points to the distorted_points.
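Because \(K\) is upper triangular with a unit bottom row, the inverse transformation has a simple closed form, \(x_d = (x_i - c_x) / f_x\) and \(y_d = (y_i - c_y) / f_y\), without a full matrix inversion. A minimal numpy sketch of this inverse (sample intrinsic values are assumptions, not library defaults):

```python
import numpy

fx, fy, cx, cy = 1000.0, 1000.0, 320.0, 240.0

def inverse_intrinsic(image_points):
    """Map image points (n_points, 2) back to distorted points."""
    out = numpy.empty_like(image_points, dtype=float)
    out[:, 0] = (image_points[:, 0] - cx) / fx  # x_d = (x_i - c_x) / f_x
    out[:, 1] = (image_points[:, 1] - cy) / fy  # y_d = (y_i - c_y) / f_y
    return out

distorted = inverse_intrinsic(numpy.array([[420.0, 40.0]]))
print(distorted)  # recovers (0.1, -0.2)
```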

Examples#

Create an intrinsic object with a given intrinsic matrix:

import numpy
from pycvcam import Cv2Intrinsic

intrinsic_matrix = numpy.array([[1000, 0, 320],
                                [0, 1000, 240],
                                [0, 0, 1]])
intrinsic = Cv2Intrinsic.from_matrix(intrinsic_matrix)

Then you can use the intrinsic object to transform distorted_points to image_points:

distorted_points = numpy.array([[100, 200],
                                [150, 250],
                                [200, 300]])  # Shape (n_points, 2)
result = intrinsic.transform(distorted_points)
image_points = result.image_points # Shape (n_points, 2)
print(image_points)

You can also access the Jacobians of the intrinsic transformation:

result = intrinsic.transform(distorted_points, dx=True, dp=True)
image_points_dx = result.jacobian_dx  # Jacobian of the image points with respect to the distorted points
image_points_dp = result.jacobian_dp  # Jacobian of the image points with respect to the intrinsic parameters
print(image_points_dx)

The inverse transformation can be computed using the inverse_transform method:

inverse_result = intrinsic.inverse_transform(image_points, dx=True, dp=True)
distorted_points = inverse_result.distorted_points  # Shape (n_points, 2)
print(distorted_points)

See also

For more information about the transformation process, see: