Sathya Kumari R
30. March 2023 · Categories: Miscellaneous

Nowadays, No-Code and Low-Code development is gaining popularity in enterprises. It essentially means developing business logic through a visual approach rather than the traditional text-based manual coding approach. With many programming language nuances removed and user interface and functional complexities abstracted, it shortens development time. This enables more applications to be developed in the same time, giving businesses agility and hence a competitive advantage.

In this blog, we will explore the scope for No-Code and Low-Code development of embedded systems and how the Flint tool can help realize it in this field.

Embedded System Ecosystem 

Low-code development has found a firm footing in enterprise-grade application development such as mobile apps, web apps, etc. When it comes to embedded device development, things remain quite traditional today due to various constraints. Unlike general application development, embedded systems are often delivered as customized solutions. It is rare to find two identical specifications over the course of many developments. There are many reasons for this: differences in end-user functional requirements, variations in component selection, the need for compliance based on the underlying use case, cost-versus-complexity trade-offs and, more recently, supply chain issues.

With all these challenges, adopting a no-code or low-code approach for embedded system development is difficult. Still, there is considerable scope for it, and solutions like Node-RED and IAR Visual State cater to this growing market.

Visualizing Embedded No-Code Application Development 

Given the limitations of today and the inherent complexities of embedded hardware development, hardware design can be kept outside the ambit of low-code generation. A good amount of embedded software development, however, can be done with this approach. Many modern designs can be realized with one or more of the following:

  • State machine-based system operation
  • Event-driven transitions and actions
  • Leveraging libraries for complex functional units
  • Standardizing the HAL using BSW and MCAL
  • Visual logical programming

If a tool can provide these capabilities, it should be possible to cater to a wide range of developers.

No-Code Development with Flint System Configurator 

Embien is taking a pole position in No/Low-Code development with its flagship Flint tool. While Flint's graphical programming is known more as a UI designer capability, the tool offers much more than embedded GUI development. The underlying design methodology is based on UML Hierarchical State Machines (HSM), so all events, actions and transitions can be mapped using the tool. Pin mapping to the peripheral interfaces and adding driver blocks to these peripherals can be done using the Module editor of the Flint System Configurator. On top of this, built-in application modules can be added, such as protocols like Modbus Server, UDS Server and EtherNet/IP, file systems such as FATFS and LittleFS, and functionalities such as Non-Volatile Memory. Each of these can be configured to a very detailed level.

Customized Logic Creation with Flint Visual Programmer 

While the System Configurator offers precise control over the underlying modules, there is still a need for custom logic and control. This can be achieved using the Flint Visual Programmer. Here, programming blocks such as arithmetic operators, logical operators, mathematical functions and digital filters are available and can be dragged and dropped to create complex business logic. Multiple such logic flows can be created and run independently. In addition, custom C/C++ functions can be called from inside the Visual Programmer logic.

RAPIDSEA and Sparklet 

Flint is an IDE for low-code development; upon building the project, platform-agnostic binaries are generated. When these binaries are programmed onto the system, they are read by the Sparklet and RAPIDSEA runtime engines and the configuration/program is created dynamically. The system then behaves as designed, effectively reducing a lot of development effort.

We believe that Flint could be an ideal Low-Code tool for embedded system development. Equipped with an intuitive user interface and a vast set of libraries and tools, together with RAPIDSEA and Sparklet, Flint can significantly shorten development time. Get in touch with us to know more about Flint and evaluate it.

About Embien

Embien is a leading product engineering company offering ready-to-use solutions that help customers realize complex products quickly without compromising on quality. Our solutions like Flint IDE, the Sparklet graphical engine, the RAPIDSEA suite and TestBot, our automated testing tool, are being used by leading companies across the globe for their products. Our service offerings include hardware design, FPGA development, BSP/BSW development, electro-mechanical engineering and end-product development.

Karthick B
07. July 2022 · Categories: Miscellaneous

In the growing IoT world, embedded devices play a major role in people's day-to-day lives. From the 1980s to the 2000s, an embedded product typically shipped with a specific feature such as printing or display, and it was updated only for bug fixes. Nowadays, in 2022, thanks to technology growth, a product carries multiple features/use cases and is upgraded both for bug fixes and for new features in the interest of the customer/end user.

So, if we look into product upgrade history: between 1990 and 2000, OEMs used to upgrade a device once a year, but now the same device gets upgraded at least 10 times a year. The number varies based on the use case and feature competition.

Nowadays, features/applications are developed with dependencies on 3rd-party components such as OpenSSL, cloud libraries, databases, Protobuf, etc. These 3rd-party component owners release updated versions addressing bug fixes or adding new features. The OEM must adopt the new versions and release an update for existing products. In this process, certain challenges are inevitable: i) all modules in the product need to be updated; ii) all features must be retested; iii) the impact on existing users must be mitigated.

All modules in product need to be updated

Since we cannot have multiple versions of a 3rd-party component in the product, if the component changes, it is the responsibility of all module/feature owners to adapt to the new component along with their new feature roadmap.

This challenge can be addressed in a couple of ways, such as a dockerized or a snap-based approach. This way, there is no need to wait for all module/feature owners to finish adapting to the component before releasing an upgrade.

Here, we will look into the snap-based approach and how it helps roll out new product features to market in less time.

Legacy Product Architecture

If you have one physical machine dedicated to only one purpose, this is the best option: the software has full access to the hardware and delivers the best performance, with no overhead.

What is SNAP?

Snap is a Linux-based software utility for packaging and deploying applications. It works with major Linux distributions such as Ubuntu, Debian, Arch Linux, Fedora, CentOS and Manjaro. Snaps are self-contained apps that run in a sandbox and have limited access to the host system. Snap uses the .snap file format, a single compressed file system based on the SquashFS standard. Applications and libraries, as well as declarative metadata, make up the file system.

Any Linux-based product feature can be packaged as a snap, making it easy to deploy across distributions and devices with isolated environments. A snap can be flexibly upgraded or downgraded without affecting other snaps/features, which enables secure execution and better transactional management. Snaps can also be upgraded remotely.

Snaps allow easy and clean installation of full software packages. They can deploy complex systems with all supporting services, such as database servers, with everything already configured and working.

Elements in SNAP

The Snap’s key elements are as follows:

  • Snapd: Snapd uses the snap metadata to set up a safe and secure sandbox for a program on the system. Snapd (the snap daemon) runs in the background and takes care of maintaining and administering the complete snap environment.
  • Snaps: Snaps are dependency-free, simple-to-install packages. A snap includes the whole module, i.e. the application and all its dependencies, in the .snap file.
  • Channels: A channel determines which release of a snap is installed and checked for updates. A channel is made up of a track and a risk level, for example:

    $ snap install xxx --channel=latest/edge
  • Tracks: All snaps have a default track. The default track is called latest unless the snap developer specifies otherwise. When no track is supplied, a snap will install from the latest track by default. It is also possible to specify the track explicitly.
  • Risk Levels: Stable, candidate, beta and edge are the four risk levels. Installing from a less stable risk level will usually result in more frequent updates.
      • Stable: suitable for the great majority of users in production scenarios.
      • Candidate: for users who need to test changes prior to a stable deployment, or who want to double-check that an issue has been fixed.
      • Beta: for users who wish to try out new features before they go live, usually outside of a production environment.
      • Edge: for those who wish to closely follow development.

Use the --channel option to select a different risk level:

$ snap install --channel=beta xxxx

After installation, the risk-level being tracked can be changed with:

$ snap switch --channel=stable xxxx
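Note that snap switch only changes the channel being tracked; the snap itself moves to that channel's revision on the next refresh. The switch and refresh can also be combined in one step:

            $ snap refresh --channel=stable xxxx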

  • Branches: As a developer, you may have a released program with bugs that users encounter but that you are unable to reproduce. A temporary branch can be used to hold a bug-fixing test build of the application you are working on. If you are tracking and fixing many bugs at the same time, each one can have its own branch in the Snap Store under the same snap name. Because branches are ‘hidden,’ users are unlikely to come across potentially broken bug-fix builds of your application unless they guess the name. Branches are only active for 30 days, after which users following the branch are moved back to the branch's underlying channel.
  • Snap Store: Like any other package manager's repository, this is where creators publish packages and consumers install them (see the example after this list).
  • Snapcraft: A tool to build snaps.
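For instance, the store can be searched and a snap's available channels inspected from the command line (hello here is just an example package name):

            $ snap find hello
            $ snap info hello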

SNAP Updates

The snap list --all command displays all available revisions of all installed snaps. By specifying a snap name with snap list --all, only results for that snap will be returned.

$ snap list --all xxxxx

Snaps are automatically updated. To check for updates manually, run the following command:

            $ snap refresh xxxx

With the snap revert command, a snap can be reverted to a previous revision.

            $ snap revert xxxx 

SNAP Configuration

The following files control the behavior of a snap:

            snap.yaml

            hooks

            icon.{svg,png}

            *.desktop

snap.yaml: Every snap package contains a meta/snap.yaml file that holds the basic metadata for the snap.

snap.yaml lives inside every snap package and is read by snapd, while snapcraft.yaml contains the instructions to create a snap package and is read by the snapcraft command used to build snaps.
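For illustration, a minimal meta/snap.yaml could look like the following sketch (the field values and the hello app are hypothetical, not taken from a real package):

name: snap-demo
version: '0.1'
summary: Basic metadata read by snapd
description: A minimal example of the metadata snapd reads at install time.
apps:
  hello:
    command: bin/hello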

Hooks: A hook is an executable file that runs within a snap’s confined environment when a certain action occurs. Hooks provide a mechanism for snapd to alert snaps that something has happened, or to ask the snap to provide its opinion about an operation that is in progress.

Common examples of actions requiring hooks include:

Notifying a snap that something has happened

Example: If a snap has been upgraded, the snap may need to trigger a scripted migration process to port an old data format to the new one.

Notifying a snap that a specific operation is in progress

Example: A snap may need to know when a specific interface connects or disconnects.
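As an illustration of the first case, the data-format migration could be implemented as a post-refresh hook: an executable placed at snap/hooks/post-refresh in the snapcraft project, which snapd runs after the snap has been refreshed. The migration logic below is a hypothetical sketch:

#!/bin/sh
# post-refresh: runs inside the snap's confined environment after a refresh.
# Migrate the old data format to the new one if the old file exists
# (data.v1/data.v2 are hypothetical file names).
if [ -f "$SNAP_DATA/data.v1" ]; then
    mv "$SNAP_DATA/data.v1" "$SNAP_DATA/data.v2"
fi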

SNAP Overview

Create your own SNAP

First create a working directory and enter it:

            $ mkdir snap-workspace
            $ cd snap-workspace

Snapcraft is the tool for creating snaps. Running the following command will produce a template in the snap/snapcraft.yaml file:

            $ snapcraft init

The directory structure looks like this:

snap-workspace/
└── snap
    └── snapcraft.yaml

If you were to open it, the snapcraft.yaml file looks like this:

name: snap-demo
base: core18
version: '0.1'
summary: Single-line elevator pitch for your amazing snap
description: This is my-snap's description.
grade: devel # must be 'stable' to release into candidate/stable channels
confinement: devmode # use 'strict' once you have the right plugs and slots

Where,

name: The name of the snap.

base: A foundation snap that provides a run-time environment and a minimal set of libraries common to most applications. core18, which is equivalent to Ubuntu 18.04 LTS, is the default in the template.

version: The current version of the snap.

summary: A short, one-line summary or tag-line for your snap.

description: A longer description of the snap.

grade: The publisher may use this to express their confidence in the build quality. The store will prevent 'devel' grade builds from being published to the 'stable' channel.

confinement: There are three levels of confinement for snaps: strict, classic and devmode. The confinement level determines how isolated a snap is from your system. Strict snaps run in complete isolation; devmode snaps are built as strict but run with open access to the system, while classic snaps have open access to system resources.

SNAP Compilation

To build the hello world app, add the following 'parts' section to our snapcraft.yaml file (replacing anything else that might be there):

parts:
  gnu-hello:
    source: http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz
    plugin: autotools

This adds a 'part' called gnu-hello (the name is arbitrary). The 'source' points to a tarball located on the GNU project's FTP server. As the 'plugin' we have chosen autotools, which uses the traditional ./configure && make && make install build steps.
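For the hello command to be available after installation, an apps section should also be added to snapcraft.yaml. The following is a minimal sketch along the lines of the standard snapcraft hello tutorial; the bin/hello path assumes the autotools plugin installs the binary there:

apps:
  hello:
    command: bin/hello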

To build our snap,

            $ snapcraft

Snapcraft will produce a lot of output while the build is underway, but a successful build will result in:

[…]

Staging gnu-hello

+ snapcraftctl stage

Priming gnu-hello

+ snapcraftctl prime

Snapping |                                                                                                                                                 

Snapped hello_2.10_amd64.snap

To install the snap,

$ sudo snap install --devmode hello_2.10_amd64.snap

Output:

            hello 2.10 installed

To execute,

            $ hello

Output:

Hello, world!

Karthick B
29. June 2021 · Categories: Miscellaneous

Nvidia Jetson platforms powered by the Tegra processors have carved themselves a niche in the edge analytics market, especially in the fields of video analytics, machine vision, etc. With a wide range of interfaces like MIPI-CSI, USB and Gigabit Ethernet, it is possible to acquire video data over many different interfaces. Of these, the CSI interface remains the most preferred for machine vision applications.

In this blog, we will discuss in detail the camera interface and data flow in the Jetson Tegra platforms and the typical configuration and setup of a MIPI CSI driver. For specifics, we will consider the Jetson Nano and the OmniVision OV5693 camera.

Jetson Camera Subsystem

While there are significant architectural differences between the Tegra TX1, TX2, Xavier and Nano platforms, the camera hardware subsystem remains more or less the same. A high-level design is captured below.

Nvidia Tegra Camera Subsystem

As seen, the major components and their functionalities are:

  • CSI Unit: The MIPI-CSI-compatible input subsystem responsible for data acquisition from the camera; it organizes the pixel format and sends the data to the VI unit. There are 6 Pixel Parser (PP) units, each of which can accept input from a single 2-lane camera. Apart from this 6-camera model, it is also possible to reconfigure the inputs such that 3 mono or stereo 4-lane cameras can be connected to the PPA, CSI1_PPA and CSI2_PPA pairs.
  • VI: The Video Input unit accepts data from the CSI unit over a 24-bit bus, with the positioning of data determined by the input format. This data can then be routed to any one or two of the following consumers. The VI also has a Host1x interface with 2 channels: one to control I2C access to the cameras and another for VI register programming.
  • Memory: Data is written to system memory for further consumption by the applications.
  • Image Signal Processor ISP A: Pre-processes the input data and converts/packs it to a different format. ISP A can also acquire data from memory.
  • Image Signal Processor ISP B: Pre-processes the input data and converts/packs it to a different format. ISP B can also acquire data from memory.

The VI unit provides a hardware-software synchronization mechanism called VI Sync Points (syncpts), which are used to wait for a particular condition to be met and increment a counter, or to wait for the counter to reach a particular value. Multiple predefined indices are available, each corresponding to one functionality such as frame start, line end or completion of ISP processing. For example, the software can choose to wait until one frame is received by the VI, indicated via the next counter value corresponding to the index.

With these powerful components, the Tegra camera subsystem offers options to handle data seamlessly from multiple sources in different formats.

Linux 4 Tegra Camera Driver

With an understanding of the hardware subsystem, we will now look into the software architecture of the Tegra camera interface. Nvidia supports the Linux OS with its Linux4Tegra (L4T) software. The camera drivers configure and read data from the camera sensors over the CSI bus in the sensor's native format and optionally convert it to a different format.

Nvidia provides two types of camera access paths that can be chosen depending on the camera and the application use case:

  • Direct V4L2 Interface

Primarily for capturing RAW data from the camera, this is a minimal path where no processing is done and the data is directly consumed by the user application.

  • Camera Core Library Interface

In this model, the camera data is consumed via Nvidia libraries such as Camera Core and libargus. Various kinds of processing can be applied to the input data efficiently, leveraging the on-chip GPU.

In either case, the application can be a GStreamer plugin or a custom one.
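For example, a raw frame can be grabbed over the direct V4L2 path with v4l2-ctl, or a live pipeline run over the libargus path with the nvarguscamerasrc GStreamer element. The device node, resolution and pixel format below are assumptions for the OV5693 sensor discussed next:

            $ v4l2-ctl -d /dev/video0 --set-fmt-video=width=2592,height=1944,pixelformat=BG10 \
                  --stream-mmap --stream-count=1 --stream-to=ov5693.raw

            $ gst-launch-1.0 nvarguscamerasrc num-buffers=120 ! \
                  'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! nvoverlaysink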

OV5693 Camera for Jetson

To take a deep dive, let us consider the 5 MP (2592 x 1944, Bayer sensor) OmniVision CSI camera module OV5693 that comes with the Tegra TX1 and TX2 carrier boards by default. The high-level software architecture is captured below:

L4T Camera Driver Architecture

The OV5693 camera is connected to I2C bus 6 (default I2C address 0x36) via a TCA9548 I2C expander chip. The address can be changed to 0x40 by adding a pull-up resistor on the SID pin.

The OV5693 driver is probed via the I2C bus driver and registers itself with the Tegra V4L2 camera framework. This in turn exposes a /dev/videoX device that can be used by the application to consume the data.
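Once the driver has registered, the exposed node and the modes it advertises can be checked with the standard V4L2 tooling (assuming the sensor enumerates as /dev/video0):

            $ v4l2-ctl --list-devices
            $ v4l2-ctl -d /dev/video0 --list-formats-ext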

To bring up the OV5693 driver, the following must be in place; they are further explained in the next sections:

  • Appropriate node in the Device Tree
  • V4L2 compatible sensor driver

In the next section, we will see how to set up the device tree for OV5693 camera.

Device Tree Changes for Tegra Camera

The tegra194-camera-e3333-a00.dtsi file is located in the /hardware/nvidia/platform/t19x/common/kernel-dts/t19x-common-modules/ folder.

Tegra-camera-platform:

tegra-camera-platform consists of one or more modules that define the basic information of the camera/sensor connected to the Tegra SoC. While the common part at the top contains consolidated information about all the connected cameras, each module subsection defines them individually. In this case, a single OV5693 camera is connected over two MIPI lanes.

tegra-camera-platform {
    compatible = "nvidia, tegra-camera-platform";
    num_csi_lanes = <2>;        //Number of lanes
    max_lane_speed = <1500000>; //Maximum lane speed
    min_bits_per_pixel = <12>;  //bits per pixel
    vi_peak_byte_per_pixel = <2>;   //byte per pixel
    vi_bw_margin_pct = <25>;    //Don't care
    max_pixel_rate = <160000>;  //Don't care
    isp_peak_byte_per_pixel = <5>;//Don't care
    isp_bw_margin_pct = <25>;   //Don't care

    modules {
        module0 { //OV5693 basic details
            badge = "ov5693_right_iicov5693";
            position = "right";
            orientation = "1";
            drivernode0 {
                pcl_id = "v4l2_sensor";
                devname = "ov5693 06-0036";
                proc-device-tree = "/proc/device-tree/i2c@31c0000/tca9548@77/i2c@6/ov5693_a@36"; //Device tree node path
            };
        };
    };
};  

Device tree node

In the device tree node, all the camera properties (output resolution, FPS, MIPI clock, etc.) must be added for proper operation of the device.

i2c@31c0000 {   //I2C-6 base address
	tca9548@77 { //I2C expander IC
		i2c@6 {
			ov5693_a@36 {
				compatible = nvidia,ov5693";
				reg = <0x36>; //I2C slave address
				devnode = "video0";//device name

				/* Physical dimensions of sensor */
				physical_w = "3.674";	//physical width of the sensor
				physical_h = "2.738";	//physical height of the sensor

				/* Enable EEPROM support */
				has-eeprom = "1";

				/* Define any required hw resources needed by driver */
				/* ie. clocks, io pins, power sources */
				avdd-reg = "vana";	//Power Regulator 
				iovdd-reg = "vif";	//Power Regulator
				mode0 { // OV5693_MODE_2592X1944
					mclk_khz = "24000";		//MIPI driving clock
					num_lanes = "2";		//Number of lanes
					tegra_sinterface = "serial_a"; //Serial interface
					phy_mode = "DPHY";		//physical connection mode
					discontinuous_clk = "yes";
					dpcm_enable = "false";		//Don't care
					cil_settletime = "0";		//Don't care

					active_w = "2592";		//active width
					active_h = "1944";		//active height
					mode_type = "bayer";		//sensor type
					pixel_phase = "bggr";		//output format
					csi_pixel_bit_depth = "10";	//bit per pixel
					readout_orientation = "0";	//Don't care
					line_length = "2688";		//Total width
					inherent_gain = "1";		//Don't care
					mclk_multiplier = "6.67";	//pix_clk_hz/mclk_khz
					pix_clk_hz = "160000000";	//Pixel clock HTotal*VTotal*FPS 
					gain_factor = "10";		//Don't care
					min_gain_val = "10";/* 1DB*/	//Don't care
					max_gain_val = "160";/* 16DB*/ //Don't care
					step_gain_val = "1";		//Don't care
					default_gain = "10";		//Don't care
					min_hdr_ratio = "1";		//Don't care
					max_hdr_ratio = "1";		//Don't care
					framerate_factor = "1000000";	//Don't care
					min_framerate = "1816577";	//Don't care
					max_framerate = "30000000";
					step_framerate = "1";
					default_framerate = "30000000";
					exposure_factor = "1000000";	//Don't care
					min_exp_time = "34";		//Don't care
					max_exp_time = "550385";	//Don't care
					step_exp_time = "1";		//Don't care
					default_exp_time = "33334";	//Don't care
					embedded_metadata_height = "0";//Don't care
				};
			};
		};
	}; 
};

In this example, the pixel clock is calculated as below:

pix_clk_hz = HTotal*VTotal*FPS

For OV5693: 2592×1944@30fps

The total width and total height for 2592×1944, including blanking, are 2688 and 1984.

pix_clk_hz = 2688 x 1984 x 30 = 159989760

pix_clk_hz is ~160000000

And the mclk multiplier is:

mclk_multiplier = pix_clk_hz / mclk_hz
mclk_multiplier = 160000000 / 24000000 ≈ 6.67

DTS binding

As seen earlier, the camera data flow is as follows:

Sensor Output      | CSI Input      | CSI Output      | VI Input
ov5693_ov5693_out0 | ov5693_csi_in0 | ov5693_csi_out0 | ov5693_vi_in0

Hardware to Device Tree Nodes data flow mapping

The binding between the internal ports is done using the settings below.

ports {
	#address-cells = <1>;
	#size-cells = <0>;
	port@0 {
		reg = <0>;
		ov5693_ov5693_out0: endpoint {
			port-index = <0>;
			bus-width = <2>;
			remote-endpoint = <&ov5693_csi_in0>;
		};
	};
};

nvcsi@15a00000 {
	num-channels = <1>;
	#address-cells = <1>;
	#size-cells = <0>;
	status = "okay";
	channel@0 {
		reg = <0>;
		ports {
			#address-cells = <1>;
			#size-cells = <0>;
			port@0 {
				reg = <0>;
				ov5693_csi_in0: endpoint@0 {
					port-index = <0>;
					bus-width = <2>;
					remote-endpoint = <&ov5693_ov5693_out0>;
				};
			};
			port@1 {
				reg = <1>;
				ov5693_csi_out0: endpoint@1 {
					remote-endpoint = <&ov5693_vi_in0>;
				};
			};
		};
	};
};
host1x {
	vi@15c10000 {
		num-channels = <1>;
		ports {
			#address-cells = <1>;
			#size-cells = <0>;
			port@0 {
				reg = <0>;
				ov5693_vi_in0: endpoint {
					port-index = <0>;
					bus-width = <2>;
					remote-endpoint = <&ov5693_csi_out0>;
				};
			};
		};
	};
};
The driver gets the data from the VI output via the Host1x DMA engine module.

Overlay

L4T employs a mechanism of DTB overlays to enable/disable drivers. The OV5693 driver can be enabled in the DTS by setting its status field to "okay".

fragment-ov5693@0 {
    ids = "2180-*";
    override@0 {
        target = <&ov5693_cam0>;
        _overlay_ {
            status = "okay";
        };
    };

};

During boot-up, if the proper camera module is detected, the overlay is applied to the device tree node, and further driver and device registration is done by the camera driver (ov5693.c), as described in the next blog.

About Embien: Embien is a leading product engineering service provider with specialised expertise in the Nvidia Tegra and Jetson platforms. We have been interfacing various types of cameras over different interfaces with the Nvidia platforms and enabling them with the libargus framework as well as customised GStreamer plugins and applications. Our customers include Fortune 500 companies in the fields of defence, avionics, industrial automation, medical, automotive and semiconductors.