30. April 2014 · Write a comment · Categories: Embedded System

Any element that can retain a state over time is called memory. Memory is one of the most important parts of a processing system, considering that both the instructions to be executed and the data being manipulated are stored in memory. In this fourth post of the series on embedded system design, we will discuss memory types and their applications.

Memory Usage

As mentioned earlier, memories are used to store primarily two kinds of information – Program and data.

Program information consists of the instructions, i.e. the opcodes, that are to be executed by the processor. Generally they are stored in a non-volatile memory that is mapped directly into the address space of the processor. Alternatively, they might be stored in external storage (say, as files in a partition) and loaded into volatile memory just prior to execution of the program.

Data memory is used to store primarily two kinds of information. One is the intermediate data being processed – e.g. a variable holding a value during the course of execution of an algorithm, or a Process Control Block in an OS. The other is the stack, which is used by the processor to store return addresses and local variables. In either case the memory type is volatile.

Memory Types

The primary differentiation of memory is based on volatility, i.e. whether the stored data is retained after power cycling the device. Accordingly, memory can be either volatile or non-volatile.

Volatile memory

Volatile memories can hold their contents only while power is continuously applied to the memory devices. As soon as the power is removed, the contents of the memories are lost. Their primary usage is to store the data and stack, as well as, in many systems, the program instructions during execution.

Examples of volatile memories include static RAM, dynamic RAM and synchronous dynamic RAM.

Generally the volatile memories used are of the Random Access Memory (RAM) type, i.e. the data at any address in the memory can be accessed by placing that address on the address bus of the memory. Primarily, volatile memory is divided into two types:

SRAM – Static Random Access Memory

Static RAM is a type of memory that uses bi-stable latching circuitry to store each bit. Due to this design, the memory does not need to be refreshed; the stored data remains static for as long as power is applied to the RAM.

The primary advantage of SRAM is its speed. Fast SRAMs can operate on par with the processor speed enabling access times equal to a clock cycle used by the microprocessor. Synchronous SRAMs are the preferred way of implementing Instruction and Data caches in a processor system. Further since there is no need for specialized controllers to refresh the RAM, they are easier to use with low end microcontrollers.

The downside is that the density of SRAMs is comparatively lower than that of DRAMs. Their cost is also comparatively higher.

DRAM – Dynamic Random Access Memory

DRAM stores each bit in a storage cell consisting of a capacitor and a transistor. Since capacitors lose their charge quickly, they need to be recharged. So by design, each bit in a DRAM must be refreshed periodically to maintain its contents, hence the name "Dynamic". Due to the structural simplicity (only one transistor and one capacitor per bit), DRAM can be packed much more densely than SRAM.

Even though they need a specialized controller to take care of refreshing, their higher density yields a much lower cost per bit compared to SRAMs.

The most popular type of DRAM in use is the SDRAM.

SDRAM – Synchronous Dynamic Random Access Memory

SDRAM is a type of DRAM that is synchronized with the system bus. The device needs an SDRAM controller, typically part of the SoC, for it to function properly. The data is organized as rows and columns, and an internal state machine takes care of the fetch and refresh logic.

High speed varieties of SDRAM include DDR, DDR2 and DDR3. DDR – Double Data Rate – RAMs can transfer data on both edges of the clock, hence the name. DDR2 and DDR3 achieve higher data rates through deeper prefetch buffers and have different power requirements, even though internally their memory arrays operate at rates similar to DDR.
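As a rough illustration of why DDR doubles throughput, the peak transfer rate of a memory bus is the bus clock times the byte width times the number of transfers per clock. The numbers below are examples, not taken from any particular datasheet:

```python
def peak_bandwidth_mb_s(bus_clock_mhz, bus_width_bits, transfers_per_clock):
    """Peak transfer rate in MB/s for a memory bus."""
    return bus_clock_mhz * (bus_width_bits // 8) * transfers_per_clock

# SDR SDRAM: one transfer per clock cycle
sdr = peak_bandwidth_mb_s(133, 64, 1)   # 1064 MB/s
# DDR: data transferred on both the rising and falling clock edges
ddr = peak_bandwidth_mb_s(133, 64, 2)   # 2128 MB/s
```

The same arithmetic explains the marketing names: a "DDR-266" part is simply a 133MHz bus transferring twice per clock.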

Non-volatile memory

Non-volatile memories retain their contents even when power to the memory device is removed. This makes them a better choice for storing data that has to be retrieved after the system is restarted. Configuration settings are typically stored in non-volatile memory. Non-volatile memories are typically slower than volatile memories and require more involved procedures for reading and writing.

Though many other technologies such as Disk-On-Chip, SSDs, MMC cards etc. are available, the most common non-volatile memories found in embedded systems are the following:

  • Flash memory
  • SD cards

Flash memory

Flash memory is the most commonly used type of non-volatile memory in embedded systems, owing to its durability and large number of erase cycles.

Microcontroller units mostly contain flash memory to which the programs are written for execution. Since this flash memory is integrated on-chip with the microcontroller, its usage becomes easier. Flash memory is generally sector/block erasable, which means one sector/block of the memory is erased at a time, with every erased bit moving to state '1'. When a bit is written (programmed), its state is changed from '1' to '0'.
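The erase-to-'1', program-to-'0' behaviour described above can be modelled in a few lines. This is a simplified sketch (the 4KB sector size is an illustrative assumption; real parts vary), but it captures the key rule: programming can only clear bits, and only an erase can set them back:

```python
SECTOR_SIZE = 4096  # illustrative sector size; part-specific in reality

def erase_sector(flash, sector):
    """Erasing drives every bit in the sector to '1' (bytes become 0xFF)."""
    base = sector * SECTOR_SIZE
    for i in range(base, base + SECTOR_SIZE):
        flash[i] = 0xFF

def program_byte(flash, addr, value):
    """Programming can only flip bits from '1' to '0', never back to '1'."""
    flash[addr] &= value

flash = bytearray(2 * SECTOR_SIZE)
erase_sector(flash, 0)
program_byte(flash, 0, 0xA5)   # 0xFF -> 0xA5: legal, bits only cleared
program_byte(flash, 0, 0xFF)   # no-op: set bits need a full sector erase
```

This is why flash file systems and firmware updaters always erase before rewriting a sector.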

Apart from on-chip flash memories, there are two types of flash memories available for external storage: NAND and NOR flash.

NAND flash

NAND flash memories are the most commonly used type of flash memory. NAND flash is written and read in blocks. Its cells are smaller, and it is primarily used in USB flash drives and SSDs. The core cells are connected in series, typically in strings of 8 or 16 cells.

NOR flash

NOR flash contains core cells connected in parallel (common ground). Since random access is supported, it is used for storing Execute-in-Place (XIP) code.

Though NAND technology is slower for random access compared to NOR flash, it offers higher density and a better cost per bit, as well as a life span up to 10 times longer than NOR. A typical interface from flash memory to the processor is the SPI bus.

EEPROM (Electrically Erasable Programmable Read Only Memory)

EEPROM is a special type of memory that supports erasing and reprogramming at the level of individual bytes, unlike flash technology, which supports only block erases. Further, the power consumption of EEPROM is very low. SPI and I2C are the most commonly available interface options for EEPROM.

SD cards (Secure Digital cards)

SD cards are a type of non-volatile memory commonly used in portable devices. The SD card itself has a processor inside to take care of the complex interface requirements as well as internal operations like error correction, wear levelling etc. SD cards are also used as a boot device in most high performance embedded systems. The common SD card interface modes available are SD and SPI.

Memory Selection

Selection of suitable memory is an essential step in high performance applications, because the challenges and limitations of system performance are often determined by the memory architecture.

A system's memory requirements depend primarily on the nature of the application planned to run on it. Memory performance and capacity requirements for low cost systems are small, whereas memory throughput can be the most critical requirement in a complex, high performance system.

The following factors are to be considered while selecting memory devices:

  • Speed
  • Data storage size and capacity
  • Bus width
  • Latency
  • Power consumption
  • Cost

SRAMs have lower storage capacity and are hence suitable for lower end systems, whereas SDRAM suits higher end systems with complex requirements.

Among the high speed types of SDRAM, DDR2 memory modules come in capacities from 256MB to 4GB. Most DDR2 memory chips come in an FBGA (Fine-pitch Ball Grid Array) package, which allows higher memory densities in a smaller space with better electrical properties. DDR2 memory uses 1.8V for power, resulting in lower power and cooler operation, whereas DDR uses 2.5V.

Further, there are variations of DDR fine-tuned for particular applications. For example, Graphics DDR (GDDR) memory is designed for higher performance than standard DDR memory; to achieve this, it operates at a higher voltage of around 2.0V. The capacity of GDDR devices tends to be smaller than DDR, typically 256Mb to 512Mb, as they are optimized for the bandwidth demands of resource intensive video cards. On the other end of the spectrum, Mobile DDR (MDDR) devices are optimized for low power applications such as battery operated and handheld devices. In deep power down (DPD) mode, their current draw can go as low as 10uA.

The data rates are specified by the RAM manufacturer and depend on various factors such as CAS latency, RAS-to-CAS delay etc. Even an increase of 0.5 cycle can change the speed by up to 10%.
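To see why half a cycle matters, the latency contributed by CAS can be converted to nanoseconds. The 200MHz bus and CL values below are illustrative numbers, not from any datasheet; note the overall throughput impact is smaller than the raw CAS change once RAS delays and burst transfers are included:

```python
def access_latency_ns(cas_cycles, bus_clock_mhz):
    """First-word access latency contributed by CAS, in nanoseconds."""
    return cas_cycles * 1000.0 / bus_clock_mhz

cl25 = access_latency_ns(2.5, 200)  # 12.5 ns
cl30 = access_latency_ns(3.0, 200)  # 15.0 ns
# 0.5 cycle more CAS latency adds 2.5 ns to every first access here
```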

Again, these high speed varieties of SDRAM need careful PCB layout with signal integrity considerations, including the presence of suitable terminations.

Obviously, a 32-bit wide memory can fetch twice the data in the same cycle as a 16-bit memory. Thus the greater the data width, the better the transfer rate, provided the processor supports the wider data lines.

Another factor, when going for non-volatile programmable storage, is the programming model. For example, it could be ISP (In-System Programming), which allows programming the flash but requires the application to be stopped at that time. Or it could be IAP (In-Application Programming), which allows re-programming of the memory even while the application firmware is running. This is determined by the memory architecture. Nowadays many microcontrollers support both options: ISP is typically used during manufacturing, while IAP is appropriate for field updates.

Though nowadays the memory controllers available in the SoC largely dictate the selection of memory devices, we believe this blog provides a good insight into the various memory technologies, their applications and selection. In the next blog, we will analyze power supply design in an embedded system.

Saravana Pandian Annamalai
10. April 2014 · Write a comment · Categories: Embedded System, Internet of Things, Technology

Internet of Things has been the buzzword for the past couple of years. Without doubt, technology companies across the spectrum are vying for a slice of the IoT pie. The acquisitions of NEST by Google, ThingWorx by PTC and Pachube by LogMeIn are definite pointers to the trend.

In this post we will look at IoT – its architecture, applications etc. – and see how things are going to communicate more than humans have in their entire history.

What is Internet of Things?

Internet of Things is a concept where real world physical devices, each with a unique identifier, are connected to the Internet and decisions and actions are made based on the acquired data without any human interaction. The physical device could be anything like ambient light sensor in our home, a human heart rate monitor, a temperature controller in a refrigerator truck etc. The decision could be taken by a Cloud platform or by a gateway in our home or even by our mobile phone.

An IoT scenario can be as simple as our car, on detecting a crash, calling the ambulance service automatically. Or it can be complex enough to manage a complete inventory: detecting reduced stock, triggering an order based on financials, historical supply/demand parameters and other factors, monitoring the production, tracking the shipment, monitoring the quality, updating the inventory and so on.

IoT is an extension of earlier technologies combining sensor-based data acquisition, the internet and human intelligence. Earlier, we had a data logger that collected all the data and pushed it to a central server. The user would view these parameters, take decisions based on them and control the system.

But with IoT, the data analytics and decisions are handled by intelligent devices. Human intervention is minimized or removed, so that the Things manage themselves to bring about a desired, optimal result.
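A tiny sketch of the idea: instead of a person watching a dashboard, the device itself applies a rule and acts. The inventory rule, threshold and callback below are entirely hypothetical, just to make the "no human in the loop" point concrete:

```python
REORDER_LEVEL = 20  # hypothetical stock threshold

def on_inventory_reading(level, place_order):
    """Local decision logic an IoT node might run: when stock falls below
    the threshold, invoke the order callback with the top-up quantity."""
    if level < REORDER_LEVEL:
        place_order(REORDER_LEVEL * 2 - level)  # restock to twice threshold
        return True
    return False
```

In a real deployment `place_order` would be an HTTP/CoAP call to a supplier's service; here it is just a callback so the rule itself stays visible.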

Internet of Things Architecture

Basically IoT involves Sensors, Connectivity and Intelligence.

Internet of Things – An Overview

Connecting millions of devices to the internet is going to create a scenario not seen before. Technically, IPv6 has enough unique addresses to identify the devices that will be manufactured and connected to the Internet for a long time to come. The complexity arises from handling the data from these devices, extracting meaningful information from it and taking decisions on it.

The communication technology will depend on various factors including the usage/application, data rate, location and other requirements. For example, it could be low power ZigBee/BLE communication to a gateway and then on to the Internet, or a direct Internet connection using a GSM modem or WiFi/broadband.

The architecture of IoT is evolving, and most likely it will be based on events from real world objects. The events are likely to be propagated to a control center and processed, with each event interpreted differently depending on the current state of the object, and action taken accordingly. Equally, events could originate from the control center and move towards the object.

The architecture is expected to keep evolving until a majority of the devices in the world are connected.

Internet of Things Standards

There are so many standards currently being created for IoT that the number of standards may soon exceed the number of devices. Many organizations and consortia are trying to define a common standard for these devices to communicate among themselves and inter-operate. Some include IoT-A by www.iot-a.eu, the Global Standards Initiative on Internet of Things (IoT-GSI) by the ITU, the IEEE Standards Association, the Open Mobile Alliance etc.

Since the applications for IoT are varied, each company is creating its own protocol for communication. Most, though, are based on RESTful services, CoAP, JSON and similar technologies. Consolidation of these different standards into a common one is nowhere in sight for the near future.
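To make the JSON-over-REST pattern concrete, here is what a minimal telemetry message a device might POST to a cloud endpoint could look like. The field names and device ID are invented for illustration; they do not follow any particular standard or product:

```python
import json
import time

def telemetry_payload(device_id, sensor, value, unit):
    """Build a minimal JSON message a device might POST to a REST endpoint."""
    return json.dumps({
        "device_id": device_id,     # unique identifier of the Thing
        "sensor": sensor,           # which physical quantity was measured
        "value": value,
        "unit": unit,
        "timestamp": int(time.time()),  # seconds since epoch
    })

msg = telemetry_payload("node-42", "temperature", 23.5, "C")
```

CoAP deployments typically carry a similar payload, just over UDP with a compact binary header instead of HTTP.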

Internet of Things Applications

Internet of Things finds numerous applications. It will change the way we interact with the physical world and the devices around us. Devices will understand us better and make our lives easier. Many application scenarios are available on the Internet for reference.

Some prominent examples include:

  • IoT-A’s vision of IoT usage
  • Libelium's take on IoT Applications

Internet of Things Opportunities

The IoT will create new business possibilities. Primarily it will be in the following segments.

Sensors: With the requirement for far more sensors, large scale production will lead to low cost devices.

Connectivity: Different connectivity options are expected to be employed for different scenarios. Along with new devices manufactured with connectivity, there will be huge opportunity for making existing devices connected using gateways, protocol converters and other technologies.

Cloud Services: Eventually the data generated by the devices is to be consumed by a central platform, most likely a cloud based one. So cloud business models, including SaaS and PaaS, are expected to undergo radical changes.

Applications: The algorithms that are going to understand the data and control the devices are the most important part of the IoT universe. Hence they are going to be the next mobile app store.


IoT is definitely going to be the trend for the next few years, and within those years we can expect some major changes in the way we live because of it.

Embien has launched its SkyCase solution for enabling IoT for devices across industries. As the first phase, it supports data collection and visualization using the many widgets available. Soon the second phase of SkyCase will incorporate intelligence for manipulating data and controlling connected devices.

03. April 2014 · Write a comment · Categories: Embedded Hardware

In continuation of our Part 2 article on embedded processor classification, where we discussed the various processor architectures and types available, we will now look at the considerations in selecting a processor for an embedded product design.

Processor selection for an embedded system

With numerous kinds of processors of various design philosophies at our disposal, the following considerations need to be factored in during processor selection for an embedded system:

  • Performance Considerations
  • Power considerations
  • Peripheral Set
  • Operating Voltage
  • Specialized Processing Units

Now let us discuss each of them in detail.

Performance considerations

The first and foremost consideration in selecting a processor is its performance. The performance of a processor depends primarily on its architecture and its silicon design. The evolution of fabrication techniques has helped pack more transistors into the same area, thereby reducing propagation delay. The presence of caches reduces instruction/data fetch times. Pipelining and super-scalar architectures further improve performance. Branch prediction, speculative execution etc. are other techniques used to improve the execution rate. Multi-core designs are the new direction in improving performance.

Rather than simply stating the clock frequency of the processor, which has limited significance for its processing power, it makes more sense to describe the capability in a standard notation. MIPS (Million Instructions Per Second) or MIPS/MHz was an early notation, followed by Dhrystones and, most recently, EEMBC's CoreMark. CoreMark is one of the best ways to compare the performance of various processors.

Processor architectures with support for extra instructions can help improve performance for specific applications. For example, SIMD (Single Instruction, Multiple Data) instruction sets and Jazelle (Java acceleration) can help improve multimedia and JVM execution speeds respectively.
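The SIMD idea can be illustrated without any real vector hardware: one SIMD instruction applies the same operation to several independent data elements at once. The sketch below models a 4-lane byte add in plain Python, including the per-lane wraparound that real packed-integer instructions exhibit:

```python
def simd_add_u8(a, b):
    """Model of a 4-lane SIMD unsigned-byte add: what a single packed-add
    instruction does, with each 8-bit lane wrapping independently."""
    return [(x + y) & 0xFF for x, y in zip(a, b)]

# one 'instruction' produces four results; lane 3 wraps: 250 + 10 -> 4
result = simd_add_u8([1, 2, 3, 250], [1, 1, 1, 10])
```

A scalar core would need four separate add instructions (plus loop overhead) for the same work, which is why SIMD pays off in pixel and audio processing.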

So the size of the cache, the processor architecture, the instruction set etc. have to be taken into account when comparing performance.

Power Considerations

Increasing the logic density and clock speed has an adverse impact on the power requirement of the processor. A higher clock implies faster charge and discharge cycles, leading to more power consumption. More logic leads to higher power density, thereby making heat dissipation difficult. Further, with more emphasis on greener technologies and many systems becoming battery operated, it is important that the design targets optimal power usage.

Techniques like frequency scaling (reducing the clock frequency of the processor depending on the load) and voltage scaling (varying the voltage based on load) can help achieve lower power usage. Further, asymmetric multiprocessors can, under near-idle conditions, power off the more powerful core and move the tasks to the less powerful core. SoCs come with advanced power gating techniques that can shut down clocks and power to unused modules.
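The payoff of combining the two techniques follows from the standard CMOS switching-power relation P = C·V²·f: frequency scaling saves linearly, but scaling the voltage down with the frequency saves quadratically on top of that. The capacitance, voltage and frequency values below are illustrative, not from any specific device:

```python
def dynamic_power(c_eff, voltage, freq_hz):
    """Dynamic (switching) power of CMOS logic: P = C * V^2 * f, in watts."""
    return c_eff * voltage ** 2 * freq_hz

full   = dynamic_power(1e-9, 1.2, 1e9)     # full speed
half_f = dynamic_power(1e-9, 1.2, 0.5e9)   # frequency scaling alone: 1/2 power
scaled = dynamic_power(1e-9, 0.9, 0.5e9)   # DVFS: lower V too, well under 1/2
```

This is why DVFS governors in operating systems drop both the clock and the core voltage together rather than the clock alone.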

Peripheral Set

Every system design needs, apart from the processor, many other peripherals for input and output operations. Since in an embedded system almost all the processors used are SoCs, it is better if the necessary peripherals are available in the chip itself. This offers various benefits over peripherals in external ICs, such as an optimal power architecture, effective data transfer using DMA and a lower BoM. So it is important to keep the peripheral set in consideration when selecting the processor.

Operating Voltages

Each processor has its own operating voltage conditions. The maximum and minimum operating voltage ratings are provided in the respective data sheet or user manual.

Higher end processors typically operate with two to five voltage rails, including 1.8V for the core/analogue domains and 3.3V for the IO lines, and need specialized PMIC devices. For low end microcontrollers, the operating voltage becomes a deciding factor based on the input supply: for example, it is cheaper to use a 5V microcontroller when the input supply is 5V, and a 3.3V microcontroller when operating from Li-ion batteries.

Specialized Processing

Apart from the core, the presence of various co-processors and specialized processing units can help achieve the necessary processing performance. Co-processors execute the specialized instructions fetched by the primary processor, thereby reducing the load on the primary. Some of the popular co-processors include:

Floating Point Co-processor:

RISC cores primarily support an integer-only instruction set. Hence the presence of an FP co-processor can be very helpful in applications involving complex mathematical operations, including multimedia, imaging, codecs, signal processing etc.

Graphic Processing Unit:

A GPU (Graphics Processing Unit), also called a visual processing unit, is responsible for drawing images into the frame buffer memory to be displayed. Since human visual perception needs at least 16 frames per second for smooth viewing, drawing to HD displays involves a lot of data bandwidth. With increasing graphics requirements such as textures, lighting and shaders, GPUs have become a mandatory requirement for mobile phones, gaming consoles etc.
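The bandwidth claim is easy to check with back-of-the-envelope arithmetic: repainting every pixel of a display each frame costs width × height × bytes-per-pixel × frame rate. The 720p/32bpp/60fps figures below are illustrative choices:

```python
def framebuffer_bandwidth_mb_s(width, height, bytes_per_pixel, fps):
    """Raw memory bandwidth needed to repaint every pixel each frame, MB/s."""
    return width * height * bytes_per_pixel * fps / 1e6

# a 1280x720 display at 32 bits/pixel, 60 frames per second
hd = framebuffer_bandwidth_mb_s(1280, 720, 4, 60)  # ~221 MB/s just for scan-out
```

And that is before compositing, texturing or multiple render passes, each of which touches the same pixels again.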

Various GPUs, like ARM's Mali and Imagination's PowerVR, with support for APIs such as OpenGL ES, are increasingly available in higher end processors. Choosing the right co-processor can enable smooth design of the embedded application.

Digital Signal Processors

A DSP is a processor designed specifically for signal processing applications. Its architecture supports processing multiple data in parallel. It can manipulate real time signals and convert them to other domains for processing. DSPs are available either as part of the SoC or in a separate external package. They are very helpful in multimedia applications. It is possible to use a DSP alongside a main processor, or to use the DSP as the main processor itself.


The various considerations discussed above can be taken into account when selecting a processor for an embedded design. It is better to have some extra buffer in processing capacity to enable enhancements in functionality without going for a major change in the design. While engineers (especially software/firmware engineers) will want to have all the functionality, price will be the determining factor when designing the system and choosing the right processor.

In the upcoming blog, we will discuss various memory technologies and the factors to be considered when selecting them.