Unlike the traditional definition of high-performance computing (HPC), which often states that "HPC is the use of supercomputers and computer clusters to solve advanced computation problems," High-Performance Computing Systems (HPCS) is more encompassing. It is a superset of HPC: an end-to-end system built from many high-performance components, going well beyond the compute engine, or processor, alone. It typically consists of (i) computing capabilities, (ii) storage capabilities, (iii) external data acquisition, (iv) network communications, and (v) application performance. In traditional HPC, whether the compute engine is a supercomputer or a set of distributed computers, the goal is to accelerate the calculation of the problem at hand, because the tasks that require acceleration are so computationally intensive. This is not necessarily true in HPCS: the task at hand may be deterministic and simple in nature, but the need for it to "happen" as quickly as possible is paramount, not unlike the task of high-frequency trading.
In essence, one can appreciate the difference this way: HPC's objective is to maximize the throughput of a compute engine so that difficult problems can be solved as quickly as possible, while the objective of an HPCS is to maximize the throughput of a system so that transactions can be completed as quickly as possible and latency is therefore low.
The objective of HPC, minimizing latency through the processor, will certainly improve the overall throughput of transactions, or messages, in a system, but the processor is not often the bottleneck. To approach frictionless throughput in a computer system, one must analyze all potential sources of latency and address them so that the individual improvements complement one another and produce an all-encompassing, system-wide improvement. I will try to create a taxonomy of candidate latency areas and of what may be used to lower that latency. The taxonomy is divided into the most logical components of a computing system:
HPCS Architecture Taxonomy
- Computational components
  - Large Buses (64-bit & 128-bit)
  - Multiple Cores
  - Large On-Chip Memory
  - Special Processors (GPUs, Gate Arrays, & EPROMs)
  - Quantum Processors
- Storage components
  - Solid-State Disks
  - High-Performance Databases
  - Cross-CPU Shared Memory
- External data sources
- Network communications
  - Optimal Routing Protocols
- Application components
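To make the taxonomy concrete, here is a minimal sketch of how one might budget latency across these component areas. All of the component names and microsecond figures below are illustrative assumptions, not measurements; the point is simply that end-to-end latency is a sum, and the component to attack first is the largest contributor.

```python
# Hypothetical end-to-end latency budget across the HPCS component areas
# listed above. The figures are illustrative assumptions only.

BUDGET_US = {  # microseconds per transaction
    "external_data_acquisition": 120.0,
    "network_communications": 250.0,
    "storage_io": 80.0,
    "computation": 40.0,
    "application_logic": 60.0,
}

def total_latency_us(budget):
    """End-to-end latency is the sum of each component's contribution."""
    return sum(budget.values())

def worst_component(budget):
    """The component to optimize first is the largest contributor."""
    return max(budget, key=budget.get)

total = total_latency_us(BUDGET_US)     # 550.0 microseconds end to end
worst = worst_component(BUDGET_US)      # "network_communications"
```

With numbers like these, shaving 20% off the processor saves 8 microseconds, while shaving 20% off the network saves 50, which is exactly why an HPCS must be analyzed as a whole system rather than processor-first.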
Outlined above are the components I believe must be addressed to develop a low-latency, High-Performance Computing System. I have also added subcategories under the components that represent some of the technologies or approaches one might explore, and possibly include, in a low-latency architecture.
I will explore each component area in much more detail in future postings.
A Return to Commentary: What has advanced? July 5, 2009. Posted by jbarseneau in 1.
Advances in computing have profoundly changed our society. They have provided us with the ability to capture and process massive volumes of meaningful data. Never before have we had this amount of financial data available for analysis, nor the processing power to analyze a complete market in real time. Combined with advances in computational methods, this has recently equipped us to examine financial data in real time and more efficiently than ever before.
This series will examine the scientific and commercial relevance exposed by the convergence of four advanced and diverse fields: (i) model-driven trading, (ii) computational intelligence, (iii) the availability of high-frequency market data, and (iv) the evolution of enabling technologies, such as available 64-bit processors, high-performance data managers, and grid computing. We will demonstrate the unique power of this technology convergence by analyzing quote depth, which is still not commercially available in historic form, in real time and identifying important non-seasonal patterns. We will examine the BID-ASK depth of the NASDAQ cash equity market by loading and committing inhomogeneous time-series market data into cache memory. We will then apply the dataset to a continuously adaptive, biologically-inspired computational method that conducts high-speed pattern recognition. The resultant patterns will indicate market anomalies and form stylized facts that can in turn supply a paradigm for model-driven trading. Because of the technology barriers to entry and the high level of domain-specific knowledge required, the method described here has not been attempted by any known large non-bank entity and is truly groundbreaking.
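The pipeline described above can be sketched in miniature: commit an inhomogeneous (irregularly spaced) series of BID-ASK depth observations to memory, then scan it for anomalies. The z-score detector below is a deliberately simple stand-in for the adaptive, biologically-inspired method the post refers to, and the field names are hypothetical.

```python
# Hedged sketch: in-memory BID-ASK depth series with a simple anomaly scan.
# The z-score test stands in for the adaptive method described in the post.

from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class DepthTick:
    ts_ns: int        # event timestamp; spacing between events is irregular
    bid_depth: int    # shares resting on the bid side
    ask_depth: int    # shares resting on the ask side

def depth_imbalance(t: DepthTick) -> float:
    """Signed imbalance in [-1, 1]: +1 is all bid depth, -1 all ask depth."""
    total = t.bid_depth + t.ask_depth
    return 0.0 if total == 0 else (t.bid_depth - t.ask_depth) / total

def flag_anomalies(ticks, z_cut=2.0):
    """Return ticks whose imbalance deviates > z_cut std-devs from the mean."""
    imb = [depth_imbalance(t) for t in ticks]
    mu, sd = mean(imb), stdev(imb)
    return [t for t, x in zip(ticks, imb) if sd and abs(x - mu) / sd > z_cut]
```

A production version would replace the batch statistics with an online, continuously adapting estimator, since the whole point of the approach is real-time operation over a live feed.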
Automated Trading September 24, 2007. Posted by jbarseneau in 1.
Improvement #1: Discovery of opportunities in near real time.
No more missing a trading opportunity just because you aren’t staring at the right chart closely enough. Or wasting precious seconds entering an order manually while the market moves away from you. Using the superior processing speed of your computer, TradeStation is designed to monitor the markets, seek and identify trading opportunities based on the trading rules you’ve specified, and then send your buy, sell, and even your cancel orders within fractions of a second to all major ECNs and exchanges. And when seconds can mean the difference between a big gain and a disappointing trading loss, we believe you’ll find this a significant advantage.
Improvement #2: High-frequency data; track more, or all, markets in real time.
No longer do you have to be glued to your screen, trying to keep up with each stock at once. TradeStation gives you the power to monitor dozens or even hundreds of securities at once, and to do it more efficiently than ever before. It’s designed to follow multiple markets for you easily, no matter how complex your trading strategies or how precise your trading rules. That means you can include multiple conditional entries and exits, profit targets, protective stops, trailing stops, and more in your strategies, and have them all automated simultaneously.
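One of the automated rules mentioned above, the trailing stop, is simple enough to sketch in a few lines. This is an illustrative toy, not TradeStation's actual implementation; the entry price and trail amount are hypothetical.

```python
# Minimal sketch of a trailing stop for a long position (illustrative only).

class TrailingStop:
    """Exit a long position when price falls `trail` below its peak."""

    def __init__(self, entry_price: float, trail: float):
        self.peak = entry_price   # highest price seen since entry
        self.trail = trail        # how far below the peak we tolerate

    def update(self, price: float) -> bool:
        """Feed each new price; return True when the stop fires."""
        self.peak = max(self.peak, price)
        return price <= self.peak - self.trail

stop = TrailingStop(entry_price=100.0, trail=2.0)
fired = [stop.update(p) for p in [101.0, 103.5, 102.8, 101.4]]
# The stop fires only on the last tick, once price has fallen more than
# 2.0 below the 103.5 peak.
```

The appeal of automating even a rule this simple is exactly the discipline argument made below: the machine ratchets the peak and exits without hesitation or second-guessing.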
Improvement #3: Implement behavioral finance & increase trading discipline.
How often have you missed a trading opportunity simply because you hesitated too long…or watched your profits disappear because you held out for more, instead of sticking to your planned exit strategy? There’s simply no doubt that emotions can be your worst enemy when trading. TradeStation helps you combat your emotions by helping you get into the market, and out of it, based on the historically tested strategies you design.
Faster than Fast: Why FX needs to be Sooo Fast August 16, 2007. Posted by jbarseneau in Uncategorized.
Trade execution in half a blink of an eye. While equity algorithmic systems are seeing round trips of 500 ms down to 50 ms, FX managers are looking for turnaround times of 300 ms down to 25 ms.
Latency: Survival of the fastest. August 14, 2007. Posted by jbarseneau in Uncategorized.
Latency, the speed and movement of sensitive data, has obvious effects on the general quality of execution. There are many drivers, including the need for speed, consistency, and systemic capacity or throughput. Any slowdown can be a losing proposition for the executor. Along with competitive pressure, there are regulatory requirements that need to be addressed, including MiFID, Reg NMS, and decimalization. The result of these market-infrastructure changes is unprecedented pressure on the speed of execution.
Historically, the transportation of data mattered as much as it does today, and was just as important. Without Nathan Rothschild’s elaborate network of hilltop lantern semaphores, he would not have known the outcome of the Battle of Waterloo a full day before the British government itself, profiting in an unprecedented way; data and data latency have always been, and will always be, a crucial component of the execution process. Along with the continuing growth of MTFs, ECNs, and crossing networks, a large fragmentation of liquidity is happening. Terms like “light” and “dark” pools are being used to describe the difficulty of finding and accessing these pools of liquidity. There are even special tools that “sweep” and aggregate liquidity pools for execution before a pool moves.
All of these issues contribute to execution latency: the time to make a complete round trip, including the broker’s internal latency, the exchange’s hardware and software configuration, order-book processing time, and even proximity to exchange hosting services. Latency can run from 60 ms to 850 ms. Latency is not determined solely by the fastest communication link; like a Tour de France rider, a system must be versatile, slowing down and navigating corners, other obstructions, and subjective dangers. It is more accurate to say that low-touch and no-touch trading will continue to consume an even larger percentage of the execution world.
The new National Market System Regulation (Reg. NMS) is finalized, and dates are being set for implementation deadlines. It seems like it has been a long road that we have traveled to get to where we are. But considering that the mandate of Reg. NMS was first spelled out back in 1970, under §11A of the Securities Exchange Act of 1934, and the confused state of the pan-European efforts to unify their own financial system, the US is doing a relatively “good” job despite differing opinions on how Reg. NMS should be implemented.
Since the original 1970 notion of an NMS, the market and its supporting technologies have naturally changed dramatically, due in part to a “renaissance” in technology and the realization of a global economy. In response, the SEC has proposed updated high-level regulations to address these profound progressions, which include the following:
A uniform trade-through rule for both exchange- and Nasdaq-listed securities;
A uniform market access rule with de minimis fee standard;
A sub-penny rule prohibiting market participants from displaying sub-penny quotes except for securities with a share price of below $1.00;
A modified system for the dissemination and pricing of market data; and
A new Regulation NMS, which would consolidate the existing NMS rules under §11A of the Exchange Act.
So the overall objectives are set, and complaints from market participants about possibly unfair changes continue. Some say the new regulation does not go far enough to renovate the current system, while others say it lacks any kind of global regulatory reassessment of the capital markets.
The speed at which electronic trading now takes place is forcing market supervision and market surveillance to become electronic, semi-automated, and real-time. Combined with the adoption of Reg. NMS in the US and MiFID in Europe, this has created an even greater need for market surveillance to be ultra-fast, predictive, and adaptive. No longer will surveillance technology be “labeled” pedestrian or back-office slop work. It will be akin to high-tech, MI5- or CIA-style capabilities.
The primary objective of a modern surveillance department is to maintain a fair and maximally efficient market for all participants. A fair market is one where all participants face a transparent set of trading rules that are effectively enforced. An efficient market is one where the instantaneous exchange of securities for cash, or cash for securities, takes place quickly and at the lowest possible cost. These objectives are pursued by all the different market participants, including broker-dealers, exchanges, ATSs, and clearing & settlement houses. Each participant, acting as a layer of surveillance, provides compliance assurance so that the transaction moves to the next party with high confidence that it complies with all regulations.
Most traditional surveillance systems are tailored for a non-electronic world. They are basically a large database of regulatory definitions used to instantiate exceptions when a suspicious trade is identified. This form allows the user to drill down into the data in order to inspect and identify exceptions that actually violate a rule. Starting from a broad view of the exchange, the system monitors the trading activity of each firm, each individual trader, and the nature of each individual trade. This method forces many exceptions to the surface and requires experienced human intervention to distinguish true breaches of a rule from merely unusual trading behavior. It is, in other words, a sophisticated real-time filter.
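The exception-based design described above can be sketched as a rule table applied to each trade, with every match surfaced for a human analyst. The rule names and thresholds below are hypothetical, chosen only to illustrate the shape of such a filter.

```python
# Hedged sketch of an exception-based surveillance filter: a table of rule
# definitions, applied per trade; matches become exceptions for review.
# Rule names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Trade:
    trader: str
    symbol: str
    qty: int
    price: float
    prev_close: float

# Each rule is a (name, predicate) pair; a matching trade raises an exception
# record rather than blocking the trade, mirroring the drill-down workflow.
RULES = [
    ("oversize_order", lambda t: t.qty > 100_000),
    ("price_spike", lambda t: abs(t.price - t.prev_close) / t.prev_close > 0.10),
]

def surveil(trades):
    """Real-time filter: yield (rule_name, trade) exceptions for an analyst."""
    for t in trades:
        for name, pred in RULES:
            if pred(t):
                yield name, t
```

Note that the filter deliberately over-triggers: as the post says, the hard part is the experienced human (or, in newer systems, an adaptive model) that separates true breaches from merely unusual behavior.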
The SEC’s Reg. NMS profoundly solidifies the paramount role of electronic trading in the US equities market and continues to encourage innovation, both business-wise and technologically. In one simple sentence: Reg. NMS will institutionalize electronic trading. But it does so by directly threatening the old way of conducting trade operations, including the jobs of specialists, floor brokers, and traditional traders.
In the post-Reg. NMS environment, algorithmic-based trading is expected to play an even greater role as traders attempt to capture alpha in increasingly difficult market conditions for institutional-size trading. Algorithmic-based trading will be used for efficiency and productivity reasons, but an increasing number of firms will rely on algorithmic trading for regulatory compliance as well. This is truly the institutionalization of electronic trading and will push algorithmic-based trading into the mainstream. The job now is up to the market participants to ensure all the necessary policies, procedures, and underlying technology infrastructure are in place to facilitate the implementation of Reg. NMS in 2007.
“The adoption today of Reg NMS was the culmination of the Securities and Exchange Commission’s efforts over the past several years to re-examine and modernize the national market system. The SEC has worked diligently to resolve complex issues that are critical to investors. SIA has continuously supported an open dialogue on these important issues to ensure the enhancement of investor protection and increased competition among the markets.”
— Statement by SIA EVP Don Kittell on April 6, 2005.
With the new deadline extensions, the proposed Reg. NMS is “finalized,” and dates are being set for phased implementation deadlines. It seems like it has been a long road traveled, but considering that the mandate of Reg. NMS was first spelled out back in 1970, under §11A of the Securities Exchange Act of 1934, and the challenging state of the pan-European efforts, addressed by MiFID, to unify their own financial system, the US is doing a relatively “good” job despite differing opinions on how Reg. NMS should be implemented.
Electronic trading strategies, strategies, strategies… Everyone, including market service providers like exchanges, broker-dealers, market utilities, and the buy side, is hiring buckets of people to sit, scratch their heads, and come up with an “electronic trading” strategy; which, by the way, they need to! Well, it is going to be interesting: so many products and services have been developed over the last few years, some innovative and useful, others not so much, that there is a quagmire of capabilities to sort through. I believe this situation was precipitated by several factors, including the T+1 mandate, followed by the “promise” of STP, then Basel II, and continuing with the Agency Disclosure Act, FAS 133, and now finally Reg. NMS. All these initiatives are certainly good and necessary, but they have created a log-jam of electronic capabilities that we are now all piecing together. There are so many that:
There are small companies out there that have sophisticated algorithmic capabilities that support program trading, basket trading and crossing networks that no one has heard of.
I believe that underlying the regulatory mandates mentioned above there are common implementation characteristics. Some of the main ones are transparency, better transaction times, and lower operational risk, all of which lead to automation and thus some sort of electronic implementation. This is why I believe there is such a backlog of capabilities. Not to say that they would not naturally have gone electronic anyway, but perhaps not so dramatically and in such mass. What we are now seeing is the major players sifting through all the surplus electronic capabilities and determining how they fit into their overall “electronic trading strategy.” Thus we are seeing a large and systemic consolidation in the electronic trading area. For instance, when Citigroup bought Lava, JPMorgan quickly bought Neovest and then BoNY bought Sonic, so they were all covered on the program-trading front. I believe this will continue.
Electronic Trading of SWAPS: Are we ready now? July 18, 2006. Posted by jbarseneau in Uncategorized.
SWAPS are on the electronic move! (again…) With the recent purchase of Swapstream by the CME, the past release of PBWire by SwapWire, and the increasing use of ICAP’s i-Swap & FRA-Cross, we are seeing some encouraging movement in the electronic derivatives space again. It seems that the ever-increasing success of electronic trading in both the equity and fixed-income markets is pressuring many participants to accelerate the implementation of electronic capabilities for interest-rate swaps. The SWAPS market has been ready for a renovation for a long time. After all, trading, confirmation, and processing remain highly manual, while brokers continue to reap some of their largest fees from interest-rate swaps. The market needs to get it right this time and deliver the following:
- Increased market transparency. Multi-dealer platforms can function as anonymous liquidity aggregators, providing a deeper view of the market while tightening spreads.
- Significantly reduced transaction costs. Technology and reduced overhead allow these firms to offer significantly reduced brokerage fees. While none of the firms Celent spoke with would disclose exact fee schedules, all maintained that they offered deep discounts (as much as 50 percent) against prices charged by traditional voice brokers.
- Increased operational efficiency via straight-through processing (STP). Firms will see an overall reduction in operational overhead and error rates through increased STP integration.
- Decreased operational risk & increased relief on regulatory capital. Again, operational improvements that decrease error rates will allow firms to reduce Basel II-mandated capital reserve requirements, making swap trading significantly less capital-intensive.
Although swaps, like other OTC derivatives, can be highly customized financial instruments, many are the same, and the industry felt that the swaps market was sufficiently standardized to be traded online. In response, several platforms were implemented that promised to make the swap market markedly more efficient. To be fair, the first attempt was not promising; the majority of these initiatives failed to attract significant interest from the market. The past two years, however, have seen a “renaissance” in electronic swap-trading initiatives, and things seem to be moving in the right direction.
CME’s acquisition of Swapstream expands the CME into global SWAPS trading. Swapstream is a neutral inter-dealer platform that supports the trading of SWAPS and is considered an ATS governed by the Financial Services Authority (FSA). Swapstream will help the CME penetrate the fast-growing OTC SWAPS market, with $164 trillion in notional value. Swapstream’s established market position, offering the greatest available liquidity and innovative functionality, together with CME’s global distribution, post-trade processing, and clearing capabilities, make a very synergetic match.
ICAP is the world’s largest electronic inter-dealer broker and is seeing great success with both its i-Swap and FRA-Cross products. i-Swap is an electronic booking model that provides a sophisticated view of liquidity, strategy trading, and STP for better execution. FRA-Cross provides a matching system so that traders can hedge their reset risk efficiently and cheaply.
SwapWire’s PBWire product allows firms to “give up” their SWAP trades to other executing brokers. These so-called give-up trades are cleared through other brokerages without burdening the originating dealer. This service is becoming very popular with hedge funds, as it gives them instantaneous execution and significantly reduces the possibility of operational and trading risk.
If technology innovation and business collaboration continue, we should see a marked increase in electronic SWAPS trading, which may pave the road for other derivatives.