Capital Markets in the Cloud

Markets have always been ‘places’ where traders and investors have met to transact and establish an equilibrium price for assets. As cloud architectures take hold, those places may one day become more ethereal and less tied to a single physical location. The advent of capital markets in the cloud may not be far away.

The first stock markets were informal meeting places – a coffee shop in the City of London or under a buttonwood tree on New York’s Wall Street. As capital and commodities markets developed, they became purpose-built facilities designed for open outcry trading. Then came the advent of electronic order books. Noisy floor traders were replaced by the hum of servers. But the ‘place’ where those servers were housed still mattered. With many strategies predicated on being the first to react to new information, exchange co-location venues were born, and the industry embarked on its obsession with latency.

Then came a tipping point – the Flash Crash of May 2010. More than a trillion dollars was wiped from US equity markets in a matter of minutes, for no fundamental reason. Although markets demonstrated their resilience and quickly rebounded, the crash highlighted the vulnerability of ever faster trading in ever smaller lots of liquidity. It was almost like a nuclear test case – market liquidity was temporarily vaporised. Although the fallout on the broader economy was contained, it brought the public’s attention to high-frequency trading (HFT), with regulators, politicians and even author Michael Lewis chiming in on how to “fix” what was broken (the absence of circuit breakers), or in some eyes how to “un-fix” what was “fixed.”

Personally, I don’t think there’s anything inherently wrong with low-latency or high-frequency trading. Information asymmetries have always existed, and will always exist. Healthy and fair competition over who can be first to source, analyse and respond to new information contributes to a smarter, more efficient market. As technology has advanced, bid/offer spreads and transaction costs have fallen – benefiting the retail investor. The only problem is that it has made transacting in size more difficult, as markets have become more sensitive to the touch.

New market models have therefore been emerging that either look to equalise latency among trading participants (such as IEX’s speed bump model) or introduce some random element of timing (such as BATS Europe’s periodic auction book).

Actually, to call these market models “new” is not entirely fair. When it comes to equalising latency, I remember covering a story more than a decade ago about OMX (before it was acquired by Nasdaq) filing a patent for a non-deterministic trading strategy server that would run in tandem with an exchange’s deterministic core matching engine. The approach was somewhat different to the speed bump employed by IEX. Rather than slow everyone down, it proposed speeding everyone up – equalising latency by offering all market participants access to the same high-speed algo engine that ran alongside the exchange’s matching core. Equally, introducing a random timing element into an auction trading model is an idea I first saw suggested in a paper by Cinnober in 2010, several years before BATS brought it into practice.
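To make the mechanics concrete, here is a minimal Python sketch of a periodic auction book with a randomised uncrossing time. It is purely illustrative: the class names, the assumed 100ms call window, the single-pass matching and the pricing shortcut are my own assumptions, not a description of how BATS Europe or any other venue actually implements its auction.

```python
# Illustrative sketch only: a toy periodic auction call with a randomised
# uncrossing time. All names, parameters and timings are hypothetical.
import random
from dataclasses import dataclass, field

@dataclass
class Order:
    side: str        # "buy" or "sell"
    price: float
    qty: int
    arrival_us: int  # arrival time within the call window, in microseconds

@dataclass
class PeriodicAuctionBook:
    max_call_us: int = 100_000                 # assumed 100ms maximum call phase
    orders: list = field(default_factory=list)

    def submit(self, order: Order) -> None:
        self.orders.append(order)

    def uncross(self) -> list:
        # The uncrossing moment is drawn at random, so arriving a few
        # microseconds earlier than a competitor confers no advantage.
        cutoff = random.randint(0, self.max_call_us)
        eligible = [o for o in self.orders if o.arrival_us <= cutoff]
        buys = sorted((o for o in eligible if o.side == "buy"), key=lambda o: -o.price)
        sells = sorted((o for o in eligible if o.side == "sell"), key=lambda o: o.price)
        trades = []
        while buys and sells and buys[0].price >= sells[0].price:
            qty = min(buys[0].qty, sells[0].qty)
            # Shortcut: trade at the midpoint; a real auction computes a single
            # uncrossing price that maximises executable volume.
            trades.append(((buys[0].price + sells[0].price) / 2, qty))
            buys[0].qty -= qty
            sells[0].qty -= qty
            if buys[0].qty == 0:
                buys.pop(0)
            if sells[0].qty == 0:
                sells.pop(0)
        return trades
```

Because the uncrossing point is random, being first within the call window gains nothing, which is the essence of a latency-agnostic model.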

Whether these market models are truly new is beside the point, though. The attention they are gathering is certainly novel (in an industry that has spent so long obsessing over latency). Should these latency-agnostic models begin to command an increasing share of order flow, it could be an eye-opener for many firms. It could prompt something of a snowball effect. Ultimately, liquidity begets liquidity. As more flow is channelled toward a particular market, it should help improve the quality and likelihood of execution, which in turn should attract more flow – creating a virtuous circle.

In turn, more participants could come to realise that their investment strategies are not predicated on latency, and begin to prioritise other factors.

Now comes the leap of faith – should more market participants realise that latency is not the be-all and end-all, it could pave the way for more trading infrastructure to be hosted in the public cloud. Traditionally, most firms have shied away from hosting real-time infrastructure in the cloud. But if latency were less of a concern, a new world of possibility would open up.

Imagine how much global financial markets would save if they hosted all of their trading systems on public IaaS, replacing their leased lines, premium exchange co-lo space, specialised low-latency hardware, point-to-point circuits and financial extranet services with cost-effective, software-defined infrastructure and connectivity.

Running the entirety of the global capital markets infrastructure in the cloud is not realistic. After all, there will always be aspects of trading and price formation that are latency-sensitive and require a higher level of determinism. Besides, given that the IaaS market is dominated by only a couple of key players (and with one clear leader), concentration risk would eventually be a significant concern. That said, there are huge swathes of the market that could benefit from the kind of simple, cost-effective, rapidly provisioned and resilient infrastructure offered by the likes of AWS, Azure and Google.
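As a rough illustration of what ‘rapidly provisioned’ means in practice, the sketch below uses the AWS SDK for Python (boto3) to stand up a compute node on demand for a non-latency-sensitive workload. The AMI ID, instance type and tag are placeholders, and the equivalent can be done on Azure or Google Cloud through their own SDKs.

```python
# Illustrative sketch only: provisioning a compute node on public IaaS with a
# couple of API calls, rather than procuring hardware, racks and circuits.
# The AMI ID, instance type and tag below are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder machine image
    InstanceType="c5.2xlarge",         # placeholder instance type
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "post-trade-analytics"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```

The point is less about the specific calls and more about the delivery model: capacity that would once have taken weeks of procurement can be requested, tagged, billed by the hour and torn down again in minutes.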

Some regulators are already making the right noises in promoting innovation – not only encouraging new fintech business models, but also giving a green light to the use of cloud services (the UK’s FCA and MAS in Singapore are two cases in point).

But the prescriptive nature of rules such as Reg NMS (the US Regulation National Market System) may actually be stifling innovation by locking down price/time priority across the market, to the detriment of other factors such as size and cost. If regulators begin rethinking market infrastructure, they may also consider that IT infrastructure delivery models have evolved significantly since Reg NMS was first devised.
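For readers less familiar with the jargon, ‘price/time priority’ simply means that resting orders at the same price are ranked by arrival time, so the fastest order wins the queue. The toy comparison below contrasts that with a hypothetical size-aware ranking; it is an illustration of the concept, not a description of any venue’s actual allocation rules.

```python
# Illustrative sketch: two ways of ranking resting buy orders at the same
# price level. Price/time priority ranks by arrival time; the hypothetical
# size-aware scheme favours larger displayed size instead.
from dataclasses import dataclass

@dataclass
class RestingOrder:
    price: float
    qty: int
    arrival_us: int

book = [
    RestingOrder(price=100.0, qty=50_000, arrival_us=120),  # large but slower
    RestingOrder(price=100.0, qty=200, arrival_us=5),        # small but faster
]

# Price/time priority: best price first, then earliest arrival.
price_time = sorted(book, key=lambda o: (-o.price, o.arrival_us))

# Hypothetical size-aware priority: best price first, then largest size.
price_size = sorted(book, key=lambda o: (-o.price, -o.qty))

print([o.qty for o in price_time])  # [200, 50000]: the faster order is first
print([o.qty for o in price_size])  # [50000, 200]: the larger order is first
```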

A version of this blog was first published on the Tabb Forum.