hUMA – Shared Memory for Heterogeneous System Architecture

hUMA

A multiprocessor system contains two or more processors. All processors share access to a common set of memory modules, I/O channels, and peripheral devices, so inter-processor communication can be done through the shared memory. hUMA applies this shared-memory model to CPU and GPU workloads.

Types of memory access models:

  • UNIFORM MEMORY ACCESS (UMA):

Refers to how processing cores in a system view and access memory.

All processing cores in a true UMA system share a single memory address space.


  • NON-UNIFORM MEMORY ACCESS (NUMA):

Data is managed in separate address spaces spread across multiple memory regions, each local to a particular processor.

It adds programming complexity due to frequent copies, synchronization, and address translation.
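The copy overhead above can be sketched in plain Python. This is a conceptual simulation, not a real GPU API: `run_on_device` and its two explicit copies stand in for the host-to-device and device-to-host transfers that a split NUMA-style address space forces on the programmer.

```python
# Conceptual sketch (plain Python, no real GPU): in a NUMA-style split
# address space, the CPU cannot hand the device a pointer. It must copy
# the data into the device's own memory, compute there, and copy back.

def run_on_device(host_data):
    device_mem = list(host_data)                    # copy 1: host -> device
    result_on_device = [x * 2 for x in device_mem]  # device computes
    return list(result_on_device)                   # copy 2: device -> host

host_data = [1, 2, 3]
result = run_on_device(host_data)
print(result)  # [2, 4, 6] -- correct, but paid for with two full copies
```

Every call pays for two full buffer copies on top of the computation itself, which is exactly the overhead hUMA is designed to remove.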

 


Now, with heterogeneous computing, HSA restores the GPU to uniform memory access.

HSA

An intelligent computing architecture that enables the CPU, GPU, and other processors (such as I/O processors) to work in harmony on a single piece of silicon by moving each task to the best-suited processing element.

INTRODUCING hUMA

 


The CPU and GPU share the same memory.

KEY FEATURES:

  •  BI-DIRECTIONAL COHERENT MEMORY

Any changes made to memory by the CPU can be seen by the GPU, and vice versa.

  • PAGEABLE MEMORY

The GPU can take page faults and is no longer restricted to page-locked memory.

  • ENTIRE MEMORY SPACE

The CPU and GPU can dynamically allocate memory from the entire virtual address space.
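The bi-directional coherence feature above can be illustrated with a plain-Python sketch. The `gpu_` function here is a hypothetical stand-in for a coherent GPU, not a real driver call: both sides touch the same buffer object, so each side's writes are visible to the other without any copy or explicit synchronization.

```python
# Conceptual sketch (plain Python): with a single coherent shared buffer,
# a write by either side is immediately visible to the other side.
# "gpu_read_and_write" is a stand-in for a GPU kernel, not a real API.

shared = [0, 0, 0]          # one buffer in one shared address space

def cpu_write(buf):
    buf[0] = 42             # the CPU writes...

def gpu_read_and_write(buf):
    assert buf[0] == 42     # ...the GPU sees it (no copy-in needed)
    buf[1] = buf[0] + 1     # the GPU writes back into the same buffer

cpu_write(shared)
gpu_read_and_write(shared)
print(shared)               # CPU sees the GPU's update: [42, 43, 0]
```

The same single-address-space assumption is what lets both processors allocate from the entire virtual memory range rather than from separate carve-outs.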

Heterogeneous Uniform Memory Access (hUMA)

When sharing data through pointers:

  •  The CPU passes a pointer to the GPU
  •  The GPU completes the computation
  •  The CPU reads the result directly, without copying
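The three steps above can be sketched in plain Python, using an object reference as a stand-in for the shared pointer. `gpu_kernel` is a hypothetical name for illustration; with hUMA the real kernel would likewise mutate the buffer in place, so no copy-back step exists.

```python
# Conceptual sketch of the three steps (plain Python, no real GPU):
# the "pointer" is just a reference to one shared buffer; the kernel
# mutates it in place, so the CPU reads results with zero copies.

def gpu_kernel(buf):
    # Step 2: the GPU completes the computation, writing in place.
    for i, x in enumerate(buf):
        buf[i] = x * x

data = [1, 2, 3, 4]    # lives in the single shared address space
gpu_kernel(data)       # Step 1: pass the "pointer" (reference) to the GPU
print(data)            # Step 3: CPU reads [1, 4, 9, 16] -- no copy-back
```

Contrast this with the NUMA-style sketch earlier: the computation is the same, but both buffer copies disappear.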

BENEFITS OF hUMA TO DEVELOPERS

Ease and simplicity of programming.

A single, standard computing environment.

Support for mainstream languages

Python, C++, Java

Lower development cost

A more efficient architecture enables fewer people to do the same work.

Conclusion:

This unified memory access in heterogeneous computing helps distribute workloads across both the CPU and GPU: CPU work can be handled by the GPU and vice versa. It also saves power, paving the way for the next generation of efficient computing.
