Karthick B
07. July 2022 · Categories: Miscellaneous

In the growing IoT world, embedded devices play a major role in people's day-to-day lives. From the 1980s to the 2000s, an embedded product typically offered one specific feature, such as printing or display, and was updated only for bug fixes. Today, in 2022, thanks to technology growth, a product carries multiple features and use cases, and it is upgraded both for bug fixes and for new features in the interest of the customer/end user.

So, if we look into product upgrade history: in 1990–2000 OEMs used to upgrade a device once a year, but now the same device is upgraded at least 10 times a year. The number varies based on the use case and feature competition.

Nowadays, features/applications are developed with dependencies on 3rd party components such as OpenSSL, cloud libraries, databases, Protobuf, etc. The owners of these 3rd party components release updated versions that address bug fixes or add new features. The OEM must adopt the new versions and release an update for existing products. In this process, certain challenges are inevitable, such as i) all modules in the product need to be updated, ii) all features need to be retested, and iii) the impact on existing users must be mitigated.

All modules in the product need to be updated

Since we cannot have multiple versions of a 3rd party component in the product, if the component changes, it is the responsibility of all module/feature owners to adapt to the new component along with their new feature roadmap.

This challenge can be addressed in a couple of ways, by making the product Docker based or snap based. With either approach, we no longer need to wait for all module/feature owners to finish adapting to the new component before releasing an upgrade.

Here we will look into the snap based approach and how it helps in rolling out new product features to market in less time.

Legacy Product Architecture

In the legacy model, the product software runs directly on one physical machine dedicated to that single purpose. This is the best option when you need full access to the hardware and the best performance with no overhead, but every component update requires the whole image to be rebuilt and revalidated.

What is SNAP?

Snap is a Linux-based software utility for packaging and deploying applications. It works with major Linux distributions such as Ubuntu, Debian, Arch Linux, Fedora, CentOS, and Manjaro. Snaps are self-contained apps that run in a sandbox and have limited access to the host system. Snap uses the .snap file format, which is a single compressed file system based on the SquashFS standard. The file system contains the applications and libraries, as well as declarative metadata.

Any Linux based product feature can be packaged as a snap, making it easy to deploy across distributions and devices in isolated environments. A snap can be upgraded or downgraded without affecting other snaps/features, which enables secure execution and better transactional management. Snaps can also be upgraded remotely.

Snaps allow easy and clean installation of full software packages. They can deploy complex systems with all supporting components, such as database servers, with everything already configured to work properly.

Elements in SNAP

The Snap’s key elements are as follows:

  • Snapd: Snapd uses snap metadata to create a safe and secure sandbox for a program on the system. Snapd (the snap daemon) runs in the background and handles all of the system's snap functions, such as maintaining and administering the complete snap environment.
  • Snaps: Snaps are dependency-free, simple-to-install packages. A snap includes the whole module, i.e. the application and its dependencies, in the .snap file.
  • Channels: A channel determines which release of a snap is installed and checked for updates.

    $ snap install xxx --channel=latest/edge

    • Tracks: All snaps have a default track. Unless the snap developer specifies otherwise, the default track is called latest. When no track is supplied, a snap installs from the latest track by default. It is also possible to specify the track explicitly.
    • Risk Levels: Stable, candidate, beta, and edge are the four risk levels. Installing from a less stable risk level will usually result in more frequent updates.
      • Stable: suitable for the great majority of users in production scenarios.
      • Candidate: for users who need to test changes prior to a stable deployment, or who want to double-check that an issue has been handled.
      • Beta: for users who wish to try out new features before they go live, usually outside of a production environment.
      • Edge: for those who wish to closely track ongoing development.

Use the --channel option to select a different risk level:

$ snap install --channel=beta xxxx

After installation, the risk-level being tracked can be changed with:

$ snap switch --channel=stable xxxx
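
The channels a snap publishes, and the channel currently being tracked after installation, can be inspected with snap info (xxxx is a placeholder snap name):

$ snap info xxxx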

  • Branches: As a developer, you may have a released application with defects that users encounter but that you are unable to duplicate. A temporary branch can be used to store a bug-fixing test build of the application you're working on. If you're tracking and fixing many bugs at the same time, each one can have its own branch in the Snap Store under the same snap name. Because branches are 'hidden', users are unlikely to come across potentially broken bug-fix builds of your application unless they guess the name. Branches are only active for 30 days; after that they are removed and any user tracking the branch is moved back to the parent channel.
  • Snap Store: It is like any other package manager, where the packages are published by creators and consumers install them.
  • Snapcraft: It's the tool used to build snaps.

SNAP Updates

The snap list --all command displays all installed revisions of every snap. By specifying a snap name with snap list --all, only results for that snap are returned.

$ snap list --all xxxxx

Snaps are automatically updated. To check for updates manually, run the following command:

            $ snap refresh xxxx

With the snap revert command, a snap can be reverted to a previous revision.

            $ snap revert xxxx 

SNAP Configuration

The following files control the behavior of a snap:

            snap.yaml

            hooks

            icon.{svg,png}

            *.desktop

snap.yaml: Every snap package contains a meta/snap.yaml file that holds the basic metadata for the snap.

snap.yaml lives inside every snap package and is read by snapd. snapcraft.yaml contains the instructions to create a snap package and is read by the snapcraft command used to build snaps.
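
For illustration, the snap.yaml generated inside a built snap can be inspected by unpacking the SquashFS image; the file name below is the hello snap built later in this post:

            $ unsquashfs -d snap-contents hello_2.10_amd64.snap meta/snap.yaml
            $ cat snap-contents/meta/snap.yaml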

Hooks: A hook is an executable file that runs within a snap’s confined environment when a certain action occurs. Hooks provide a mechanism for snapd to alert snaps that something has happened, or to ask the snap to provide its opinion about an operation that is in progress.

Common examples of actions requiring hooks include:

Notifying a snap that something has happened

Example: If a snap has been upgraded, the snap may need to trigger a scripted migration process to port an old data format to the new one.

Notifying a snap that a specific operation is in progress

Example: A snap may need to know when a specific interface connects or disconnects.
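
As a minimal sketch of a hook, the executable below could be shipped as snap/hooks/configure in the snapcraft project (it ends up under meta/hooks/ in the built snap). The log-level option is a hypothetical configuration key used only for illustration; snapctl is the helper snapd exposes to hooks:

            #!/bin/sh
            # configure hook: runs whenever 'snap set <snap> ...' changes the configuration
            set -e

            # Read the hypothetical log-level option
            level="$(snapctl get log-level)"

            case "$level" in
                ""|debug|info|warn|error)
                    # accept the value (an empty string means "use the default")
                    ;;
                *)
                    echo "invalid log-level: $level" >&2
                    exit 1  # a non-zero exit makes snapd roll back the configuration change
                    ;;
            esac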

SNAP Overview

Create your own SNAP

First create a working directory and move into it.

            $ mkdir snap-workspace

$ cd snap-workspace

Snapcraft is the tool for creating snaps. Running $ snapcraft init will produce a template in the snap/snapcraft.yaml file.

The directory structure looks like this,

mysnaps/

└── snap-workspace

    └── snap

        └── snapcraft.yaml

If you were to open it, the snapcraft.yaml file looks like this:

name: snap-demo

base: core18

version: '0.1'

summary: Single-line elevator pitch for your amazing snap

description: This is my-snap's description.

grade: devel # must be 'stable' to release into candidate/stable channels

confinement: devmode # use 'strict' once you have the right plugs and slots

Where,

Name: The name of the snap.

Base: A foundation snap that gives users a run-time environment and a small selection of libraries used by most apps. Core18, which is equivalent to Ubuntu 18.04 LTS, is the default setting for the template.

Version: The current version of the snap

Summary: A short, one-line summary or tag-line for your snap.

Description: A longer description of the snap.

Grade: The publisher may use this to express the build quality of the snap. The store will prevent 'devel' grade builds from being published to the 'stable' channel.

Confinement: There are three levels of confinement for snaps: strict, classic, and devmode. The confinement level determines how isolated a snap is from your system.

Strict snaps run in complete isolation. Devmode snaps are built as strict but run with open access to the system (useful during development), while classic snaps have open access to system resources.
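
For reference, the confinement level also matters at install time: a locally built snap that is not strictly confined must be installed with the matching flag. hello_2.10_amd64.snap is the snap built later in this post; the other names are placeholders:

            $ sudo snap install --devmode hello_2.10_amd64.snap    # devmode confinement
            $ sudo snap install --classic some-classic-snap        # classic confinement (store snap)
            $ sudo snap install --dangerous my-strict-snap.snap    # strict, unsigned local snap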

SNAP Compilation:

To build the hello world app, add the following 'parts' section to our snapcraft.yaml file (replace anything else that might be there):

parts:

  gnu-hello:

    source: http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz

    plugin: autotools

We added a 'part' called gnu-hello (its name is arbitrary). The 'source' points to a tarball located on the GNU project's FTP server. As the 'plugin' we have chosen autotools, which uses the traditional ./configure && make && make install build steps.

To build our snap,

            $ snapcraft

Snapcraft will produce a lot of output while the build is underway, but a successful build will result in:

[…]

Staging gnu-hello

+ snapcraftctl stage

Priming gnu-hello

+ snapcraftctl prime

Snapping |                                                                                                                                                 

Snapped hello_2.10_amd64.snap

To Install the SNAP,

$ sudo snap install --devmode hello_2.10_amd64.snap

Output:

            hello 2.10 installed

To execute,

            $ hello

Output:

Hello, world!

Karthick B
29. June 2021 · Categories: Miscellaneous

Nvidia Jetson platforms powered by the Tegra processors have carved themselves a niche in the edge analytics market especially in the field of video analytics, machine vision etc. With a wide range of interfaces like MIPI-CSI, USB, Gigabit Ethernet, it is possible to acquire video data over many different interfaces. Of them, the CSI interface remains the most preferred interface for machine vision applications.

In this blog, we will discuss in detail about the camera interface and data flow in Jetson Tegra platforms and typical configuration and setup of a MIPI CSI driver. For specifics, we will consider Jetson Nano and Onsemi OV5693 camera.

Jetson Camera Subsystem

While there are significant architectural differences between the Tegra TX1, TX2, Xavier and Nano platforms, the camera hardware sub-system remains more or less the same. The high level design of the same is captured below.

Nvidia Tegra Camera Sub-system

As seen, the major components and their functionalities are:

  • CSI Unit: The MIPI-CSI compatible input sub-system that is responsible for data acquisition from the camera; it organizes the pixel format and sends the data to the VI unit. There are 6 Pixel Parser (PP) units, each of which can accept input from a single 2-lane camera. Apart from this 6-camera model, it is also possible to reconfigure the inputs such that 3 mono or stereo 4-lane cameras can be connected to the PPA, CSI1_PPA and CSI2_PPA pairs.
  • VI: The Video Input unit accepts data from the CSI unit over a 24-bit bus, with the positioning of data determined by the input format. This data can then be routed to one or two of the following consumers. The VI also has a Host1x interface with 2 channels – one to control I2C access to the cameras and another for VI register programming.
  • Memory: Data is written to system memory for further consumption by the applications.
  • Image Signal Processor ISP A: Pre-processes the input data and converts/packs it to a different format. ISP A can also acquire data from memory.
  • Image Signal Processor ISP B: Pre-processes the input data and converts/packs it to a different format. ISP B can also acquire data from memory.

The VI unit provides a hardware-software synchronization mechanism called VI Sync Points (syncpts), which are used to wait for a particular condition to be met and increment a counter, or to wait for the counter to reach a particular value. Multiple predefined indices are available, each corresponding to one functionality such as frame start, line end, or completion of ISP processing. For example, the software can choose to wait till one frame is received by the VI, indicated via the next counter value corresponding to the index.

With these powerful components, the Tegra camera sub-system offers options to handle data seamlessly from multiple sources in different formats.

Linux 4 Tegra Camera Driver

With an understanding of the hardware sub-system, we will now look into the software architecture of the Tegra camera interface. Nvidia supports Linux with its Linux4Tegra (L4T) software. The camera driver configures and reads data from camera sensors over the CSI bus in the sensor's native format and optionally converts it to a different format.

Nvidia provides two types of camera access paths, that can be chosen depending on the camera and application use case:

  • Direct V4L2 Interface

Primarily for capturing RAW data from the camera, this is a minimal path where no processing is done and the data is directly consumed by the user application.

  • Camera Core Library Interface

In this model, the camera data is consumed via a few Nvidia libraries such as Camera Core and libArgus. In this case, various data processing can be done on the input data, efficiently leveraging the GPU available in the SoC.

In either case, the application can be a Gstreamer plugin or a custom one.
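
For instance, on a stock L4T image the camera core/libArgus path can be exercised directly from the command line with the nvarguscamerasrc GStreamer plugin; the resolution and sink below are illustrative values:

            $ gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
                  'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! \
                  nvoverlaysink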

OV5693 Camera for Jetson

To take a deep dive, let us consider the 5MP (2592 x 1944, Bayer sensor) Omnivision CSI camera module OV5693 that comes with the Tegra TX1 and TX2 carrier boards by default. The high level software architecture is captured below:

L4T Camera Driver Architecture

The OV5693 camera is connected to I2C bus 6 (default I2C address 0x36) via a TCA9548 I2C expander chip. The address can be changed to 0x40 by adding a pull-up resistor on the SID pin.

The OV5693 driver is probed via the I2C bus driver and registers itself with the Tegra V4L2 camera framework. This in turn exposes a /dev/videoX device that can be used by the application to consume the data.
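
As a quick sanity check of the direct V4L2 path, a single raw frame can be captured from that device with v4l2-ctl; the width, height and pixel format below assume the 10-bit BGGR 2592x1944 mode configured later in this post:

            $ v4l2-ctl -d /dev/video0 \
                  --set-fmt-video=width=2592,height=1944,pixelformat=BG10 \
                  --stream-mmap --stream-count=1 --stream-to=ov5693_frame.raw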

To bring up the OV5693 driver, the following must be handled; each is explained further in the next sections:

  • Appropriate node in the Device Tree
  • V4L2 compatible sensor driver

In the next section, we will see how to set up the device tree for OV5693 camera.

Device Tree Changes for Tegra Camera

The tegra194-camera-e3333-a00.dtsi file is located in the /hardware/nvidia/platform/t19x/common/kernel-dts/t19x-common-modules/ folder.

Tegra-camera-platform:

The tegra-camera-platform node consists of one or more modules that define the basic information of the cameras/sensors connected to the Tegra SoC. While the common part at the top contains consolidated information about all the connected cameras, each module sub-section defines one camera individually. In this case, a single OV5693 camera is connected over two MIPI lanes.

tegra-camera-platform {
    compatible = "nvidia, tegra-camera-platform";
    num_csi_lanes = <2>;        //Number of lanes
    max_lane_speed = <1500000>; //Maximum lane speed
    min_bits_per_pixel = <12>;  //bits per pixel
    vi_peak_byte_per_pixel = <2>;   //byte per pixel
    vi_bw_margin_pct = <25>;    //Don't care
    max_pixel_rate = <160000>;  //Don't care
    isp_peak_byte_per_pixel = <5>;//Don't care
    isp_bw_margin_pct = <25>;   //Don't care

    modules {
        module0 { //OV5693 basic details
            badge = "ov5693_right_iicov5693";
            position = "right";
            orientation = "1";
            drivernode0 {
                pcl_id = "v4l2_sensor";
                devname = "ov5693 06-0036";
                proc-device-tree = "/proc/device-tree/i2c@31c0000/tca9548@77/i2c@6/ov5693_a@36"; //Device tree node path
            };
        };
    };
};  

Device tree node

In the device tree node, all the camera properties (output resolution, FPS, MIPI clock, etc.) must be added for proper operation of the device.

i2c@31c0000 {	//I2C-6 base address
	tca9548@77 { //I2C expander IC
		i2c@6 {
			ov5693_a@36 {
				compatible = "nvidia,ov5693";
				reg = <0x36>; //I2C slave address
				devnode = "video0";//device name

				/* Physical dimensions of sensor */
				physical_w = "3.674";	//physical width of the sensor
				physical_h = "2.738";	//physical height of the sensor

				/* Enable EEPROM support */
				has-eeprom = "1";

				/* Define any required hw resources needed by driver */
				/* ie. clocks, io pins, power sources */
				avdd-reg = "vana";	//Power Regulator 
				iovdd-reg = "vif";	//Power Regulator
				mode0 { // OV5693_MODE_2592X1944
					mclk_khz = "24000";		//MIPI driving clock
					num_lanes = "2";		//Number of lanes
					tegra_sinterface = "serial_a"; //Serial interface
					phy_mode = "DPHY";		//physical connection mode
					discontinuous_clk = "yes";
					dpcm_enable = "false";		//Don't care
					cil_settletime = "0";		//Don't care

					active_w = "2592";		//active width
					active_h = "1944";		//active height
					mode_type = "bayer";		//sensor type
					pixel_phase = "bggr";		//output format
					csi_pixel_bit_depth = "10";	//bit per pixel
					readout_orientation = "0";	//Don't care
					line_length = "2688";		//Total width
					inherent_gain = "1";		//Don't care
					mclk_multiplier = "6.67";	//pix_clk_hz/mclk_khz
					pix_clk_hz = "160000000";	//Pixel clock HTotal*VTotal*FPS 
					gain_factor = "10";		//Don't care
					min_gain_val = "10";/* 1DB*/	//Don't care
					max_gain_val = "160";/* 16DB*/ //Don't care
					step_gain_val = "1";		//Don't care
					default_gain = "10";		//Don't care
					min_hdr_ratio = "1";		//Don't care
					max_hdr_ratio = "1";		//Don't care
					framerate_factor = "1000000";	//Don't care
					min_framerate = "1816577";	//Don't care
					max_framerate = "30000000";
					step_framerate = "1";
					default_framerate = "30000000";
					exposure_factor = "1000000";	//Don't care
					min_exp_time = "34";		//Don't care
					max_exp_time = "550385";	//Don't care
					step_exp_time = "1";		//Don't care
					default_exp_time = "33334";	//Don't care
					embedded_metadata_height = "0";//Don't care
			};	
			};
		};
	}; 
};

In this example, the pixel clock is calculated as below:

pix_clk_hz = HTotal*VTotal*FPS

For OV5693:- 2592×1944@30fps

Total height and Total width for 2592×1944 is 2688×1984

pix_clk_hz = 2688 x 1984 x 30 = 159989760

pix_clk_hz is ~160000000

And the mclk multiplier is:

mclk_multiplier = pix_clk_hz / mclk_hz (where mclk_khz = 24000 corresponds to 24000000 Hz)
mclk_multiplier = 160000000 / 24000000 ≈ 6.67
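
These values can be cross-checked with a quick shell calculation:

            $ echo "2688 * 1984 * 30" | bc
            159989760
            $ echo "scale=4; 160000000 / 24000000" | bc
            6.6666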

DTS binding

As seen earlier, the camera data flow is as follows:

Sensor Output          CSI Input          CSI Output          VI Input
ov5693_ov5693_out0  →  ov5693_csi_in0  →  ov5693_csi_out0  →  ov5693_vi_in0

Hardware – Device Tree nodes data flow mapping

The binding between internal ports is done by using the below settings.

ports {
	#address-cells = <1>;
	#size-cells = <0>;
	port@0 {
		reg = <0>;
		ov5693_ov5693_out0: endpoint {
			port-index = <0>;
			bus-width = <2>;
			remote-endpoint = <&ov5693_csi_in0>;
		};
	};
};

nvcsi@15a00000 {
	num-channels = <1>;
	#address-cells = <1>;
	#size-cells = <0>;
	status = "okay";
	channel@0 {
		reg = <0>;
		ports {
			#address-cells = <1>;
			#size-cells = <0>;
			port@0 {
				reg = <0>;
				ov5693_csi_in0: endpoint@0 {
					port-index = <0>;
					bus-width = <2>;
					remote-endpoint = <&ov5693_ov5693_out0>;
					};
				};
			port@1 {
				reg = <1>;
				ov5693_csi_out0: endpoint@1 {
					remote-endpoint = <&ov5693_vi_in0>;
					};
				};
			};
		};
	};
		
			
host1x {
	vi@15c10000 {
		num-channels = <1>;
		ports {
			#address-cells = <1>;
			#size-cells = <0>;
			port@0 {
				reg = <0>;
				ov5693_vi_in0: endpoint {
				port-index = <0>;
				bus-width = <2>;
				remote-endpoint = <&ov5693_csi_out0>;
				};
			};
		};
	};
};

The driver gets the data from the VI output via the Host1x DMA engine module.

Overlay

L4T employs a mechanism of DTB overlays to enable/disable drivers. The ov5693 driver can be enabled in the DTS by setting its status field to "okay".

fragment-ov5693@0 {
    ids = "2180-*";
    override@0 {
        target = <&ov5693_cam0>;
        _overlay_ {
            status = "okay";
        };
    };

};

During boot up, if the proper camera module is detected, the overlay is applied to the device tree node, and further driver and device registration is done by the camera driver (ov5693.c) as described in the next blog.
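
Once the driver and device tree changes are in place, a quick way to confirm that the camera registered correctly is to check the kernel log and the exposed video devices (exact messages vary across L4T releases):

            $ dmesg | grep -i ov5693
            $ v4l2-ctl --list-devices
            $ ls /dev/video*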

About Embien: Embien is a leading product engineering service provider with specialised expertise on the Nvidia Tegra and Jetson platforms. We have been interfacing various types of cameras over different interfaces with the Nvidia platforms and enabling them with the libArgus framework as well as customised GStreamer plugins and applications. Our customers include Fortune 500 companies in the fields of defence, avionics, industrial automation, medical, automotive and semiconductors.

As discussed in the earlier blog, it is becoming very important, in an embedded system, to ensure authenticity of the firmware before running it. Also, the system has to be made tamper proof against further hacking, especially for remotely managed internet connected IoT applications.

To prevent breach of security, the software can be strengthened with various techniques based on the underlying MCU and peripheral set. This blog discusses in particular how it can be done for iMx RT1020 based devices using the High Assurance Boot (HAB) mechanism as recommended by NXP.

Secure Boot Concepts

NXP's HAB uses asymmetric encryption to protect the firmware. To give a quick introduction to asymmetric encryption: it essentially creates a pair of keys such that one of the keys can encrypt a message and the other can decrypt it (and vice versa). It is computationally infeasible to use the key that encrypted a message to also decrypt it. Also, with increased key sizes, it becomes highly resource consuming to decrypt a message without the paired key.

Thus, with asymmetric encryption, it is enough to protect one of the keys (the private key) and the other can be shared (the public key). A message encrypted by the private key can only be decrypted with the public key. Further, if the public key (or at least its hash) is stored in a location that cannot be modified, such as One Time Programmable (OTP) fuses, it becomes practically impossible for anyone to compromise the system. Any attempt to modify the public key will be nullified by the check against the OTP memory.
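
To make the asymmetric signing idea concrete, here is a generic illustration using OpenSSL; this is not the actual HAB/CST tooling or key format used by NXP, only the underlying concept:

            # Generate a key pair; the private key stays with the provisioning system
            $ openssl genrsa -out private.pem 2048
            $ openssl rsa -in private.pem -pubout -out public.pem

            # Sign the firmware: hash it and encrypt the hash with the private key
            $ openssl dgst -sha256 -sign private.pem -out firmware.sig firmware.bin

            # Verification: the public key checks the signature against the image
            $ openssl dgst -sha256 -verify public.pem -signature firmware.sig firmware.bin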

The high-level flow of the above sequence is captured in the below sequence diagram.

Secure Boot i.MX RT1020 HAB process

During the device provisioning process, the public and private key pairs are generated and the private key is secured in the provisioning system. A hash of the public key is generated and stored in the device OTP area, which prevents further modification.

In the code signing sequence, the firmware image is hashed and the hash is encrypted using the private key. The final signed image is comprised of the firmware image and its encrypted hash along with the public key, and this is programmed on to the boot memory.

During the boot-up sequence, the HAB logic extracts the individual components of the signed image and validates the authenticity of the public key by comparing its computed hash with the one stored in the OTP fuses. It is practically impossible to create a different public key with the same hash, thereby preventing any attempt by external parties to override the public key. HAB then calculates the hash of the firmware and compares it with the hash obtained by decrypting the encrypted hash using the public key. If it is a match, the boot proceeds. If any of these checks fail, the boot is aborted.

Code Signing for i.Mx RT1020

NXP provides all the tools necessary for generating public-private key pairs, signing code and programming the boot fuses and flash, such as MfgTool, elftosb, cst, etc.

The device can be programmed using two methods: Dev Boot and Secure Boot. The Dev Boot mode can be used during development, and Secure Boot for final programming. Once the device is programmed in Secure Boot mode, it is not possible to revert back to Dev Boot mode, and all further firmware has to be signed properly. The programming process is carried out by the Flashloader tools: the elftosb tool for boot image creation and the MfgTool for boot image programming.

Dev Boot Mode

To program the device, use the Mfgtool.

  • Create an unsigned boot_image.sb using the elftosb tool from the SREC format of the application image (app.s19 file); a rough sketch of this command is shown after this list.
  • Make sure the configuration file inside the MfgTool is named cfg.ini
  • The content of the file should be in the following format: chip → MXRT102X, name → MXRT102X-DevBoot
  • Copy the generated boot_image.sb from …/Tools/elftosb/linux/amd64/ to …/Tools/mfgtools-rel/Profiles/MXRT102X/OS Firmware
  • Change the device's boot mode to serial downloader mode and connect it to the host PC
  • Run the MfgTool and press the Start button to program the target.
  • To exit the MfgTool, click "Stop" and then "Exit"
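
As a rough sketch of the boot image creation step, an elftosb invocation generally looks like the following. The .bd command files and file names are assumptions based on the NXP Flashloader package layout and will differ per setup; refer to the Flashloader documentation for the exact files for the i.MX RT1020:

            # Wrap the application SREC into a bootable (here unsigned) IVT image
            $ ./elftosb -f imx -V -c imx-flexspinor-normal-unsigned.bd -o ivt_app.bin app.s19
            # Package the IVT image into an SB file consumed by the MfgTool
            $ ./elftosb -f kinetis -V -c program_flexspinor_image_qspinor.bd -o boot_image.sb ivt_app.bin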

Secure Boot Mode

To program the OTP fuses once with the hash of the public key, use the MfgTool as follows:

  • Check whether the device is in serial downloader mode
  • Generate the private/public keys using the CST tool and create the fuse.bin and fuse.table files.
  • Make sure the configuration file inside the MfgTool is named cfg.ini
  • The content of the file should be in the following format: chip → MXRT102X, name → MXRT102X-Burnfuse
  • Create the enable_hab.sb file in …/Flashloader_RT1020_1.0_GA/Tools/elftosb/linux/amd64/ and copy it to …/Tools/mfgtools-rel/Profiles/MXRT102X/OS Firmware
  • After programming the above mentioned enable_hab.sb file successfully, the device will be ready for secure boot.

The above process of programming the fuses has to be executed only once. Then, to program the device with a signed image, use the MfgTool as follows:

  • Create a signed boot_image.sb using the elftosb tool from the SREC format of the application image (app.s19 file).
  • Check that the configuration file inside the MfgTool is named cfg.ini
  • The content of the file should be in the following format: chip → MXRT102X, name → MXRT102X-SecureBoot
  • Copy the signed boot_image.sb from …/Flashloader_RT1020_1.0_GA/Tools/elftosb/linux/amd64/ to …/Tools/mfgtools-rel/Profiles/MXRT102X/OS Firmware

The details of the process can be obtained from the NXP i.MX RT1020 product page. Once secured, it will be impossible to run unauthorized software on the device.

Same concepts can be extended to OTA updates so that the new firmware can be authenticated even before programming.

About Embien:

Embien has been actively developing IoT devices that form an important part of larger networks with significant security ramifications. We have been using advanced tools and techniques to prevent unauthorized access to and tampering with such devices. Get in touch with us to give your design unprecedented security.