Karthick B
29. June 2021 · Categories: Miscellaneous

Nvidia Jetson platforms powered by the Tegra processors have carved themselves a niche in the edge analytics market, especially in the fields of video analytics and machine vision. With a wide range of interfaces such as MIPI-CSI, USB and Gigabit Ethernet, video data can be acquired in many different ways. Of these, the CSI interface remains the most preferred for machine vision applications.

In this blog, we will discuss in detail the camera interface and data flow in Jetson Tegra platforms, along with the typical configuration and setup of a MIPI CSI driver. For specifics, we will consider the Jetson Nano and the OmniVision OV5693 camera.

Jetson Camera Subsystem

While there are significant architectural differences between the Tegra TX1, TX2, Xavier and Nano platforms, the camera hardware sub-system remains more or less the same. Its high-level design is captured below.

Nvidia Tegra Camera Sub System

As seen, the major components and their functionalities are:

  • CSI Unit: The MIPI-CSI compatible input sub-system that is responsible for data acquisition from the camera; it organizes the pixel data and sends it to the VI unit. There are 6 Pixel Parser (PP) units, each of which can accept input from a single 2-lane camera. Apart from this 6-camera model, it is also possible to reconfigure the inputs such that 3 mono or stereo 4-lane cameras can be connected to the PPA, CSI1_PPA and CSI2_PPA pairs.
  • VI: The Video Input unit accepts data from the CSI unit over a 24-bit bus, with the positioning of data determined by the input format. This data can then be routed to any one or two of the following destinations. The VI also has a Host1x interface with 2 channels – one to control I2C access to the cameras and another for VI register programming.
  • Memory: Data is written to system memory for further consumption by the applications.
  • Image Signal Processor ISP A: Pre-processes the input data and converts/packs it to a different format. ISP A can also acquire data from memory.
  • Image Signal Processor ISP B: Pre-processes the input data and converts/packs it to a different format. ISP B can also acquire data from memory.

The VI unit provides a hardware-software synchronization mechanism called VI Sync Points (syncpts) that can be used to wait for a particular condition to be met and increment a counter, or to wait for the counter to reach a particular value. Multiple predefined indices are available, each corresponding to one functionality such as frame start, line end or completion of ISP processing. For example, the software can choose to wait till one frame is received by the VI, indicated via the next counter value corresponding to that index.
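As an illustration of these semantics only (a toy model, not the actual Host1x/nvhost syncpt API), the increment/wait-for-value behaviour can be sketched as:

```python
# Toy model of VI sync-point (syncpt) semantics: hardware increments a
# counter on events (e.g. frame start) and software blocks until the
# counter reaches a threshold. Purely illustrative, not the real API.
import threading

class SyncPoint:
    def __init__(self):
        self._value = 0
        self._cond = threading.Condition()

    def increment(self):
        # "Hardware" side: bump the counter on an event such as frame start.
        with self._cond:
            self._value += 1
            self._cond.notify_all()

    def wait_for(self, threshold, timeout=1.0):
        # "Software" side: block until the counter reaches `threshold`.
        with self._cond:
            return self._cond.wait_for(lambda: self._value >= threshold, timeout)

# Simulate three "frame start" events, then wait for the third frame.
sp = SyncPoint()
for _ in range(3):
    sp.increment()
print(sp.wait_for(3))  # True once three frames have "arrived"
```

A real driver would use the indices mentioned above (frame start, line end, ISP done) rather than a single counter.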

With these powerful components, the Tegra camera sub-system offers options to handle data seamlessly from multiple sources in different formats.

Linux 4 Tegra Camera Driver

With an understanding of the hardware sub-system, we will now look into the software architecture of the Tegra camera interface. Nvidia supports the Linux OS with its Linux4Tegra (L4T) software. The camera driver configures and reads data from camera sensors over the CSI bus in the sensor’s native format and optionally converts it to a different format.

Nvidia provides two types of camera access paths that can be chosen depending on the camera and the application use case:

  • Direct V4L2 Interface

Primarily meant for capturing RAW data from the camera, this is a minimal path where no processing is done and the data is directly consumed by the user application.

  • Camera Core Library Interface

In this model, the camera data is consumed via a few Nvidia libraries such as Camera Core and libArgus. In this case, various kinds of processing can be done on the input data, efficiently leveraging the GPU available on the chip.

In either case, the application can be a Gstreamer plugin or a custom one.
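In the direct V4L2 path, the application receives the sensor's Bayer frames untouched. As a small sketch of what consuming such data can look like — assuming, for illustration, a layout where each 10-bit sample occupies a 16-bit little-endian word (the helper and the synthetic buffer below are illustrative, not from the original text; real frames would come from /dev/videoX):

```python
# Sketch of unpacking RAW10 Bayer samples, assuming one 10-bit sample
# per 16-bit little-endian word. A synthetic two-pixel buffer stands in
# for data that a real application would read from /dev/videoX.
import struct

def unpack_raw10_in_16bit(buf):
    """Unpack 10-bit samples stored one per 16-bit word."""
    words = struct.unpack('<%dH' % (len(buf) // 2), buf)
    return [w & 0x3FF for w in words]  # keep the 10 significant bits

# Synthetic "line" of two pixels: values 0x155 and 0x2AA.
line = struct.pack('<2H', 0x155, 0x2AA)
print(unpack_raw10_in_16bit(line))  # [341, 682]
```

The actual in-memory layout depends on the VI output format negotiated with the driver, so the masking above should be adapted accordingly.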

OV5693 Camera for Jetson

To take a deep dive, let us consider the 5 MP (2592 x 1944, Bayer sensor) OmniVision CSI camera module OV5693 that comes with the Tegra TX1 and TX2 carrier boards by default. The high-level software architecture is captured below:

L4T Camera Driver Architecture

The OV5693 camera is connected to I2C bus 6 (default I2C slave address 0x36) via a TCA9548 I2C expander chip. The address can be changed to 0x40 by adding a pull-up resistor on the SID pin.

The OV5693 driver is probed via the I2C bus driver and registers itself with the Tegra V4L2 camera framework. This in turn exposes a /dev/videoX device that can be used by the application to consume the data.

To bring up the OV5693 driver, the following must be handled; these are further explained in the next sections:

  • Appropriate node in the Device Tree
  • V4L2 compatible sensor driver

In the next section, we will see how to set up the device tree for OV5693 camera.

Device Tree Changes for Tegra Camera

The tegra194-camera-e3333-a00.dtsi file is located in the hardware/nvidia/platform/t19x/common/kernel-dts/t19x-common-modules/ folder.


tegra-camera-platform consists of one or more modules, which define the basic information of the cameras/sensors connected to the Tegra SoC. While the common part at the top contains consolidated information about all the connected modules, each module sub-section defines one of them individually. In this case, a single OV5693 camera is connected over two MIPI lanes.

tegra-camera-platform {
    compatible = "nvidia,tegra-camera-platform";
    num_csi_lanes = <2>;           // Number of lanes
    max_lane_speed = <1500000>;    // Maximum lane speed
    min_bits_per_pixel = <12>;     // Bits per pixel
    vi_peak_byte_per_pixel = <2>;  // Bytes per pixel
    vi_bw_margin_pct = <25>;       // Don't care
    max_pixel_rate = <160000>;     // Don't care
    isp_peak_byte_per_pixel = <5>; // Don't care
    isp_bw_margin_pct = <25>;      // Don't care

    modules {
        module0 { // OV5693 basic details
            badge = "ov5693_right_iicov5693";
            position = "right";
            orientation = "1";
            drivernode0 {
                pcl_id = "v4l2_sensor";
                devname = "ov5693 06-0036";
                proc-device-tree = "/proc/device-tree/i2c@31c0000/tca9548@77/i2c@6/ov5693_a@36"; // Device tree node path
            };
        };
    };
};
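The vi_peak_byte_per_pixel and vi_bw_margin_pct properties feed into the VI bandwidth request. The exact formula L4T uses internally is not shown here, but a rough sketch of how such a peak-bandwidth number can plausibly be derived from these values is:

```python
# Rough, assumption-laden estimate of peak VI bandwidth from the
# tegra-camera-platform properties above. The exact formula used by
# L4T internally is not part of this text.
pix_clk_hz = 160_000_000        # max_pixel_rate is 160000 kHz in the DT
vi_peak_byte_per_pixel = 2      # bytes per pixel, from the DT
vi_bw_margin_pct = 25           # headroom margin in percent

peak_bw_bytes_per_sec = pix_clk_hz * vi_peak_byte_per_pixel
peak_bw_with_margin = peak_bw_bytes_per_sec * (1 + vi_bw_margin_pct / 100)
print(peak_bw_with_margin / 1e6)  # MB/s
```

With these numbers the estimate comes out to 400 MB/s of peak memory traffic for a single 2-lane OV5693 stream.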

Device tree node

In the device tree node, all the camera properties (output resolution, FPS, MIPI clock, etc.) must be added for proper operation of the device.

i2c@31c0000 {   // I2C-6 base address
    tca9548@77 { // I2C expander IC
        i2c@6 {
            ov5693_a@36 {
                compatible = "nvidia,ov5693";
                reg = <0x36>;       // I2C slave address
                devnode = "video0"; // Device name

                /* Physical dimensions of sensor */
                physical_w = "3.674"; // Physical width of the sensor
                physical_h = "2.738"; // Physical height of the sensor

                /* Enable EEPROM support */
                has-eeprom = "1";

                /* Define any required hw resources needed by driver */
                /* i.e. clocks, io pins, power sources */
                avdd-reg = "vana"; // Power regulator
                iovdd-reg = "vif"; // Power regulator

                mode0 { // OV5693_MODE_2592X1944
                    mclk_khz = "24000";            // MIPI driving clock
                    num_lanes = "2";               // Number of lanes
                    tegra_sinterface = "serial_a"; // Serial interface
                    phy_mode = "DPHY";             // Physical connection mode
                    discontinuous_clk = "yes";
                    dpcm_enable = "false";         // Don't care
                    cil_settletime = "0";          // Don't care

                    active_w = "2592";             // Active width
                    active_h = "1944";             // Active height
                    mode_type = "bayer";           // Sensor type
                    pixel_phase = "bggr";          // Output format
                    csi_pixel_bit_depth = "10";    // Bits per pixel
                    readout_orientation = "0";     // Don't care
                    line_length = "2688";          // Total width
                    inherent_gain = "1";           // Don't care
                    mclk_multiplier = "6.67";      // pix_clk_hz/mclk_hz
                    pix_clk_hz = "160000000";      // Pixel clock: HTotal*VTotal*FPS
                    gain_factor = "10";            // Don't care
                    min_gain_val = "10";  /* 1 dB */  // Don't care
                    max_gain_val = "160"; /* 16 dB */ // Don't care
                    step_gain_val = "1";           // Don't care
                    default_gain = "10";           // Don't care
                    min_hdr_ratio = "1";           // Don't care
                    max_hdr_ratio = "1";           // Don't care
                    framerate_factor = "1000000";  // Don't care
                    min_framerate = "1816577";     // Don't care
                    max_framerate = "30000000";
                    step_framerate = "1";
                    default_framerate = "30000000";
                    exposure_factor = "1000000";   // Don't care
                    min_exp_time = "34";           // Don't care
                    max_exp_time = "550385";       // Don't care
                    step_exp_time = "1";           // Don't care
                    default_exp_time = "33334";    // Don't care
                    embedded_metadata_height = "0";// Don't care
                };
            };
        };
    };
};

In this example, the pixel clock is calculated as below:

pix_clk_hz = HTotal*VTotal*FPS

For OV5693:- 2592×1944@30fps

The total width and total height for 2592×1944 are 2688 and 1984 respectively.

pix_clk_hz = 2688 x 1984 x 30 = 159989760

pix_clk_hz is ~160000000

And the mclk multiplier is

mclk_multiplier = pix_clk_hz / mclk_hz
mclk_multiplier = 160000000 / 24000000 ≈ 6.67
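The arithmetic above can be checked with a few lines of Python:

```python
# Reproduce the mode0 clock calculations for OV5693 2592x1944@30fps.
h_total, v_total, fps = 2688, 1984, 30  # total width/height and frame rate
mclk_hz = 24_000_000                    # mclk_khz = 24000 in the device tree

pix_clk_hz = h_total * v_total * fps
print(pix_clk_hz)                       # 159989760, rounded up to 160000000 in the DT

mclk_multiplier = 160_000_000 / mclk_hz
print(round(mclk_multiplier, 2))        # 6.67, matching the DT value
```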

DTS binding

As seen earlier, the camera data flow is as follows:

Sensor Output → CSI Input → CSI Output → VI Input
Hardware – Device Tree Nodes Data flow mapping

The binding between the internal ports is done using the settings below.

ports {
    #address-cells = <1>;
    #size-cells = <0>;
    port@0 {
        reg = <0>;
        ov5693_ov5693_out0: endpoint {
            port-index = <0>;
            bus-width = <2>;
            remote-endpoint = <&ov5693_csi_in0>;
        };
    };
};

nvcsi@15a00000 {
    num-channels = <1>;
    #address-cells = <1>;
    #size-cells = <0>;
    status = "okay";
    channel@0 {
        reg = <0>;
        ports {
            #address-cells = <1>;
            #size-cells = <0>;
            port@0 {
                reg = <0>;
                ov5693_csi_in0: endpoint@0 {
                    port-index = <0>;
                    bus-width = <2>;
                    remote-endpoint = <&ov5693_ov5693_out0>;
                };
            };
            port@1 {
                reg = <1>;
                ov5693_csi_out0: endpoint@1 {
                    remote-endpoint = <&ov5693_vi_in0>;
                };
            };
        };
    };
};

host1x {
    vi@15c10000 {
        num-channels = <1>;
        ports {
            #address-cells = <1>;
            #size-cells = <0>;
            port@0 {
                reg = <0>;
                ov5693_vi_in0: endpoint {
                    port-index = <0>;
                    bus-width = <2>;
                    remote-endpoint = <&ov5693_csi_out0>;
                };
            };
        };
    };
};

The driver gets the data from the VI output via the Host1x DMA engine module.


L4T employs a mechanism of DTB overlays to enable/disable drivers. The OV5693 driver can be enabled in the DTS by setting its status field to “okay”.

fragment-ov5693@0 {
    ids = "2180-*";
    override@0 {
        target = <&ov5693_cam0>;
        _overlay_ {
            status = "okay";
        };
    };
};


During boot up, if the proper camera module is detected, the overlay is added to the device tree node, and further driver and device registration is done by the camera driver (ov5693.c), as described in the next blog.

About Embien: Embien is a leading product engineering service provider with specialised expertise on the Nvidia Tegra and Jetson platforms. We have been interfacing various types of cameras over different interfaces with the Nvidia platforms and enabling them with the libArgus framework as well as customised Gstreamer plugins and applications. Our customers include Fortune 500 companies in the fields of defence, avionics, industrial automation, medical, automotive and semiconductors.
