
HFT Requires “High-Performance Computing Systems” to Achieve Low Latency July 9, 2009

Posted by jbarseneau

Unlike the traditional definition of high-performance computing (HPC), which often states that HPC is “the use of supercomputers and computer clusters to solve advanced computation problems”, High-Performance Computing Systems (HPCS) is more encompassing. It is a superset of HPC, made up of many high-performance components that form an end-to-end system, and it covers much more than just the compute engine, or processor; it usually consists of (i) computing capabilities, (ii) storage capabilities, (iii) external data acquisition, (iv) network communications, and (v) application performance. In traditional HPC, whether the compute engine is a supercomputer or a set of distributed computers, the goal is to accelerate the calculation of the problem at hand, because the tasks that require acceleration are so computationally intensive. This is not necessarily true in HPCS: the task at hand may be deterministic and simple in nature, but the need for it to “happen” as quickly as possible is paramount. Not unlike the task of high-frequency trading.

In essence, one can appreciate the difference by thinking of HPC’s objective as maximizing the throughput of a compute engine so that difficult problems can be solved as quickly as possible. The objective of an HPCS, by contrast, is to maximize the throughput of a system so that transactions can be completed as quickly as possible and latency is therefore low.
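To make the distinction concrete, here is a minimal, purely illustrative C++ sketch (not from the original post; the workload and all names are hypothetical): the same batch of transactions viewed through the HPC lens, aggregate throughput, and through the HPCS lens, per-transaction latency.

    // Illustrative sketch only: contrast aggregate throughput (HPC view)
    // with per-transaction latency (HPCS view) for a trivial placeholder task.
    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    int main() {
        using clock = std::chrono::steady_clock;
        constexpr int kTransactions = 100000;

        std::vector<int64_t> latencies_ns;
        latencies_ns.reserve(kTransactions);

        const auto run_start = clock::now();
        for (int i = 0; i < kTransactions; ++i) {
            const auto t0 = clock::now();
            // Placeholder for a simple, deterministic transaction
            // (e.g. parse a message, update state, emit an order).
            volatile int64_t work = static_cast<int64_t>(i) * 31;
            (void)work;  // keeps the compiler from removing the loop body
            const auto t1 = clock::now();
            latencies_ns.push_back(
                std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count());
        }
        const auto run_end = clock::now();

        const double run_seconds =
            std::chrono::duration<double>(run_end - run_start).count();

        // HPC view: how much work completes per unit time.
        std::cout << "throughput: " << kTransactions / run_seconds << " tx/s\n";

        // HPCS view: how long each individual transaction takes, worst case.
        int64_t worst = 0;
        for (int64_t ns : latencies_ns) worst = std::max(worst, ns);
        std::cout << "worst-case latency: " << worst << " ns\n";
    }

High aggregate throughput and low worst-case latency are not the same thing; a system can score well on the first while individual transactions still stall, which is exactly the gap the HPCS framing is meant to expose.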

The HPC objective of minimizing latency through the processor will certainly help increase the throughput of transactions, or messages, in a system, but the processor is not often the bottleneck. To approach frictionless throughput in a computer system, one must analyze all potential sources of latency and address them so that the individual improvements complement one another and produce an all-encompassing, system-wide gain. I will try to create a taxonomy of candidate latency areas and of what may be used to lower that latency. The taxonomy is divided into the most logical components of a computing system:

HPCS Architecture Taxonomy

  1. Computational components
    1. Large Bus (64-bit & 128-bit)
    2. Multiple Cores
    3. Large On-Chip Memory
    4. Clusters
    5. Grids
    6. Special Processors (GPUs, Gate Arrays, & EPROMs)
    7. Quantum Processors
  2. Storage components
    1. Solid State Discs
    2. High Performance Databases
    3. Cross-CPU Shared Memory
  3. External data sources
  4. Network communications
    1. Utilizing Optimal Routing Protocols
  5. Application components

Outlined above are the components I believe must be addressed to develop a low-latency, High-Performance Computing System. I have also added subcategories under the components that represent some of the technologies or approaches one might explore, and possibly include, in a low-latency architecture.
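Before attacking any single component area, it helps to know where the time actually goes. The following hypothetical C++ sketch (all names are invented for illustration) shows the kind of bookkeeping that attributes end-to-end latency to the taxonomy’s component areas: external data acquisition, network handling, computation, and storage.

    // Hypothetical sketch: timestamp a message at the boundaries between
    // component areas so latency can be attributed per area rather than
    // measured only end-to-end.
    #include <chrono>
    #include <cstdint>
    #include <iostream>

    struct HopTimestamps {
        using clock = std::chrono::steady_clock;
        clock::time_point feed_received;    // external data source hands off the tick
        clock::time_point decoded;          // network/protocol handling done
        clock::time_point decision_made;    // computational component finished
        clock::time_point order_persisted;  // storage component acknowledged
    };

    static int64_t ns(std::chrono::steady_clock::time_point a,
                      std::chrono::steady_clock::time_point b) {
        return std::chrono::duration_cast<std::chrono::nanoseconds>(b - a).count();
    }

    int main() {
        using clock = std::chrono::steady_clock;
        HopTimestamps t;

        // In a real system each timestamp would be taken by the component that
        // owns that boundary; here they are taken back-to-back purely to show
        // the bookkeeping.
        t.feed_received   = clock::now();
        t.decoded         = clock::now();
        t.decision_made   = clock::now();
        t.order_persisted = clock::now();

        std::cout << "network/decode: " << ns(t.feed_received, t.decoded)         << " ns\n"
                  << "compute:        " << ns(t.decoded, t.decision_made)         << " ns\n"
                  << "storage:        " << ns(t.decision_made, t.order_persisted) << " ns\n"
                  << "end-to-end:     " << ns(t.feed_received, t.order_persisted) << " ns\n";
    }

Per-area measurements like these are what make it possible to decide which branch of the taxonomy is the real bottleneck before investing in any one of the technologies listed above.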

I will explore each component area in much more detail in further postings.

