Last night I got a demo of a brilliantly innovative IR derivatives system from the founder team. It prompted some thoughts on the cyclical nature of the software industry, and the return of the fat client. Back in the 90s, when Microsoft still ruled the world, client-server computing and the fat client were the dominant model for application development. Often the server tier was simply a DB, and all app logic happened in a Windows app built in MFC C++ or VB. The rise of Java and the browser threatened that model, and was met with a forceful response from Microsoft. That’s a well-worn story, and I won’t repeat it here. Java never delivered on its promise of zero-deploy, OS-agnostic desktop apps; anyone remember Marimba? Instead the browser began its long march toward becoming the dominant desktop client. After the dot com bust it took time for web 2.0 to emerge. For a while there was a debate about Silverlight vs Flex vs Ajax. Thankfully Ajax – the only one not aligned with a major vendor – won out, despite myriad incompatibilities between browsers and unsatisfactory server push mechanisms. WebSockets and HTML5 resolved those issues, setting the scene for thin-client, zero-deploy nirvana. Or so it seemed.

Concurrently with these technical developments on the desktop we’ve seen the rise of SaaS. The advance of the browser as client enabled ISVs, starting with SalesForce, to bypass corporate IT departments and server rooms, and deliver apps direct to users, often with UI quality approaching consumer products, and very low entry prices. Now there’s no end of CRM, budgeting and financial management, project management and industry-specific vertical solutions delivered direct from cloud hosting to user desktops. Each browser-delivered SaaS app may be cheap and compelling on its own, but collectively they preclude integration and data sharing across apps. Unless, that is, the business users buying those SaaS apps get their IT department involved again to build integrations across the SaaS REST APIs. But that ain’t gonna happen, because the users bought into the SaaS apps to free themselves from the dysfunctional IT depts in the first place, and because of that the IT depts hate the SaaS apps in the same way they hate Excel.

So how does the integration problem get solved? On the desktop of course! We’re now seeing the re-emergence of 90s style app-to-app interop on the desktop. This time the enabler is not Microsoft’s OLE (Object Linking and Embedding), but proprietary extensions to the HTML5 container. OpenFin’s Chromium-based HTML5 container supports drag and drop between apps, and also provides a pubsub bus. DDE anybody? This trend is writ large in the startup’s IRD options pricing solution, which addresses the financial sector’s rates options trading niche in a fiendishly clever manner. Anyone who’s ever stepped on a trading floor knows about the ubiquity of the Bloomberg terminal. Bloomberg’s terminal comes with desktop interop; its Excel addin enables live ticking market data in spreadsheets. APIs do the same for desktop apps, but license terms mean that data can only be used by apps running on the same desktop. That precludes publishing Bloomberg market data to the server-side pricing systems typical in the bigger hedge funds and banks. So this startup’s master stroke is to deliver IRD options pricing in a browser client, as SaaS, driven by Bloomberg market data. The analytics – the number-crunching special sauce – are coded in JavaScript and run in the browser. Truly a browser-based fat client offering a unique solution enabled by desktop integration.

We’re going to see more and more of this kind of SaaS-delivered HTML5 fat client adding value with smart desktop integrations. Proprietary extensions like OpenFin’s, or the startup’s Bloomberg integration, will multiply. They will inevitably clash, and the benefits of zero-touch deployment to a standardised HTML5 browser client will dissipate. It’s all yet another instance of the endlessly repeating cycle in our industry: standards consolidate, and competition differentiates. Standards are introduced to ameliorate the pain of incompatibility, then vendors differentiate with proprietary extensions until the level of incompatibility becomes too painful and the cycle repeats. Intelligent market participants should think hard about where we are in the cycle, and how to get ahead of it. But not too far ahead, of course!

This week I’ve been testing the SpreadServe addin with Tiingo’s IEX market data. I was checking performance on my sscalc0.online AWS host for a group of SpreadServeEngines executing various test and demo spreadsheets, including one that subscribes to IEX tickers for AAPL & SPY via Tiingo API websockets. That API gives us real time top of book, as well as last trade price and size, for cash equities traded on IEX. In my test scenario I was running five engines: two idle, and three running spreadsheets, one of which was a simple IEX market data subscriber. In Process Explorer I saw some odd CPU spiking on the idle engines. Zooming in, I could see the busyness was on a thread that should have been idle, sleeping inside a WaitForSingleObject call, waiting for a signal to check its input queue. The event object being waited upon was created by some generic code invoking win32’s CreateEvent, code that is also used in another thread. Reading the docs I found that CreateEvent’s fourth param, the event object name, means the caller gets a handle to a previously created event object if the names match. And I was using a hardwired name! So my thread was being repeatedly woken by events intended for another thread. A quick fix to make the names unique produced idling engines with no unnecessary CPU burn. All very instructive, partly because running on AWS makes one very aware of paying by the CPU hour.
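
Here’s a minimal repro of the trap, assuming nothing about SpreadServe’s internals beyond the Win32 calls named above (the "InputQueueEvent" name is made up for illustration): two handles created with the same hardwired name refer to one kernel object, so a signal meant for one waiter wakes the other.

    // Minimal repro of the named-event trap: not SpreadServe's actual code,
    // just the Win32 calls discussed above.
    #include <windows.h>
    #include <cstdio>

    HANDLE makeQueueEvent(const char* name) {
        // If an event with this name already exists, CreateEventA returns a
        // handle to the *existing* kernel object and GetLastError() reports
        // ERROR_ALREADY_EXISTS.
        HANDLE h = CreateEventA(nullptr, FALSE, FALSE, name);
        if (GetLastError() == ERROR_ALREADY_EXISTS)
            std::printf("warning: reusing existing event '%s'\n", name);
        return h;
    }

    int main() {
        // Buggy: both "engine threads" end up sharing one kernel object
        // because of the hardwired name.
        HANDLE a = makeQueueEvent("InputQueueEvent");
        HANDLE b = makeQueueEvent("InputQueueEvent");   // same object as a!

        SetEvent(a);                                    // meant for a only...
        DWORD r = WaitForSingleObject(b, 0);            // ...but b is signalled too
        std::printf("b woke: %s\n", r == WAIT_OBJECT_0 ? "yes - the bug" : "no");

        // The fix: a nullptr name (anonymous event) or a per-engine unique name.
        HANDLE c = CreateEventA(nullptr, FALSE, FALSE, nullptr);

        CloseHandle(a); CloseHandle(b); CloseHandle(c);
        return 0;
    }

With anonymous or uniquely named events, each engine thread only wakes for its own queue.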

quandl badly formed URL

April 20, 2015

I’ve started working on some new code that pulls data from quandl, and I was getting this error…

 { "error":"Unknown api route."}

I was using the first example from quandl’s own API page

https://www.quandl.com/api/v1/WIKI/AAPL.csv

and googling didn’t turn up any answers. Fortunately the quandl folk responded on Twitter, and all’s well. The URL should be…

https://www.quandl.com/api/v1/datasets/WIKI/AAPL.csv

So I’m recording the issue here for any others that get stuck. Looks like “unknown api route”==”badly formed url”.
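
For anyone hitting the same error from code rather than a browser, here’s a rough libcurl sketch, purely illustrative and not my actual quandl client, fetching the corrected URL:

    // Rough libcurl sketch, not my actual quandl client code.
    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    // libcurl write callback: append each received chunk to a std::string.
    static size_t collect(char* ptr, size_t size, size_t nmemb, void* userdata) {
        static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
        return size * nmemb;
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        std::string csv;
        // Note the /datasets/ path segment: without it quandl returns
        // {"error":"Unknown api route."}
        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://www.quandl.com/api/v1/datasets/WIKI/AAPL.csv");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &csv);
        CURLcode rc = curl_easy_perform(curl);
        if (rc == CURLE_OK)
            std::cout << csv.substr(0, 200) << "\n";   // first few rows of the CSV
        else
            std::cerr << curl_easy_strerror(rc) << "\n";
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }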

I’ve been heads down working on SpreadServe recently, so I haven’t paid as much attention to the etrading topics I used to blog about. Thanks to an update from mdavey, I’ve been catching up on the excellent, thought-provoking content that jgreco has been posting on his plans for a new US Treasury trading venue, organised as a limit order book, with buy side and sell side trading on an equal footing. I enjoyed the post on internalization and adverse selection. His points about single-dealer platforms are well founded too, though my own experience in rates trading is that it’s difficult to get client flow on to SDPs: by their very nature they can’t offer multi-dealer RFQs, which are critical for real money clients that must prove best execution for regulatory reasons. Of course, if the inter-dealer prices from BrokerTec, eSpeed and EuroMTS were public in the same way as equity prices from the major exchanges are, then more solutions to the best execution problem would be possible. As jgreco rightly points out, transparency is key.

Now I want to raise a few questions prompted by jgreco’s posts, both pure tech, and market microstructure…

  • Java? Really? I wonder if it’s inspired by LMAX’s Java exchange implementation, their custom collections and Disruptor. I would have expected C++, but then I’m an old school C++er.
  • Is that really the workflow? That must be a tier 2 or 3 bank. All my experience has been at tier 1 orgs where all pricing and RFQ handling is automated. If a trader quotes a price by voice, it’s a price made by the bank’s own pricing engines. Those engines will be coded in C++, driven by Eurex futures or on-the-run USTs, and showing ticking prices on the trader desktop. Mind you, obfuscation techniques were used to frustrate step 2: copy and paste the quote. After you’ve spent a fortune building a rates etrading infrastructure, you don’t want everyone backing out your curve from your Bloomberg pages.
  • Will DirectMatch have decimal pricing, or are you going to perpetuate that antiquated 1/32nd stuff? (There’s a quick conversion sketch after this list.)
  • How will you handle settlement/credit risk? Will each trade be novated into two, with both counterparties facing a clearing house?
  • How do you shift liquidity? When liquidity is concentrated at a single venue, it’s difficult to move; the only case I know of is German Govt Bond futures moving from Liffe to Eurex. I guess UST liquidity is fragmented across D2D and D2C venues, so it’s not concentrated all in one place, which improves DirectMatch’s chances of capturing some of the flow.
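
For anyone unfamiliar with the convention behind that pricing question: a UST quote like 99-16 is 99 plus 16/32, i.e. 99.5, and a trailing + adds half a 32nd. A throwaway sketch of the conversion, not tied to any particular venue’s tick rules:

    // Throwaway sketch of the 32nds convention: "99-16" -> 99.5, "99-16+" -> 99.515625.
    // Real venues have their own tick rules (quarter 32nds etc); this is just the basic case.
    #include <iostream>
    #include <string>

    double thirtySecondsToDecimal(const std::string& quote) {
        // Expected form: "<handle>-<32nds>[+]", e.g. "99-16" or "99-16+"
        const auto dash = quote.find('-');
        const double handle = std::stod(quote.substr(0, dash));
        std::string frac = quote.substr(dash + 1);
        double half = 0.0;
        if (!frac.empty() && frac.back() == '+') {   // trailing '+' adds half a 32nd
            half = 0.5;
            frac.pop_back();
        }
        return handle + (std::stod(frac) + half) / 32.0;
    }

    int main() {
        std::cout << thirtySecondsToDecimal("99-16")  << "\n";   // 99.5
        std::cout << thirtySecondsToDecimal("99-16+") << "\n";   // 99.5156 (99.515625)
        return 0;
    }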

Banks as platforms

September 29, 2014

Zac Townsend’s post on how Standard Treasury aims to turn banks into platforms is intriguing. There’s certainly no lack of ambition in his goal. But I do wonder if he’s setting himself up to tilt against the very nature of both banks and platforms. One of the key phrases in Zac’s post is: “allowing developers to think of banks as platforms”. I’ll just unpack that a little. First, platforms, as explicated in Evans & Hagiu’s excellent Invisible Engines. Platforms are multi-sided markets. One side pays for access to the revenue-generating customers that an effective platform aggregates by offering free or cheap access. For example, in gaming, game devs pay licenses to platform (console) owners so they can sell to gamers, while the console manufacturers sell consoles at or even below cost. In financial trading, clients pay Bloomberg for access to information & liquidity, while dealers get access to the platform without paying Bloomberg fees. Famously, Google and Facebook offer services free to consumers so that they can sell to advertisers. So if banks are going to spend a load of cash adopting Standard Treasury tech so they can become more like real software platforms, who is going to pay?

Let’s bear in mind that banks are already liquidity platforms. They charge fees for access to the liquidity they provide by aggregating client capital. They disguise fees by making some things “free” and charging for others when they cross-sell. If you attempt to commoditise or aggregate their services by means of a software platform, they lose the cross-sell, and so the margins. They will certainly resist that prospect. So any software platform that integrates banks with software services needs to offer the prospect of more margin on existing deal flow, or new deal flow, to justify the cost of adoption. Judging by Zac’s post, it looks as if he thinks the new deal flow would come from the underbanked via mobile apps. Will that deal flow justify the cost of implementing Standard Treasury tech? I’m sceptical…

Standard Treasury should also be considering the cost of decommissioning those expensive legacy systems. In banking, old and new systems tend to run in parallel until all stakeholders are confident that the new system supports all of the legacy system’s functionality. So new tech that promises cost savings tends to cause a cost spike until the old system really has been put to bed. And, believe me, that can be a lengthy and painful process! I have first-hand experience of systems that have resisted retirement for decades…

Magic Ink

March 1, 2012

Thanks to reddit I’ve just discovered Bret Victor. I watched the Invention video, and enjoyed the whole theme of tightening the feedback loop between changing code and seeing results. The later part on moral crusading was interesting, if not entirely convincing. So I checked out the web site, and am reading Magic Ink. Wow! This is a full-blown vision of doing software differently. Back in the 90s I got really excited by, in turn, Brad Cox’s vision, Patterns, and Open Source. About 10 years ago I discovered dynamically typed languages with Python and Smalltalk. And that’s the last time I had a real rush of excitement about some new approach in software. Sure, I’ve dabbled in functional languages like F#, and played with various OSS projects. But for the most part my attention has been on the trading topics that fascinate me, like electronic limit order books.

So what’s Magic Ink about? Victor divides software into three categories: information, manipulation and communication software. He focuses on information software, which is most apps really. And that includes most financial and trading apps. And then he proceeds to argue that there’s too much interactivity, and that interaction is bad. The way forward is context sensitivity combined with history and graphic design. Counterintuitive, and utterly convincing. A joy to read!

I can’t help wondering what the UX crew over at Caplin make of this; I haven’t seen them blogging on it. Victor’s views have radical implications for how etrading apps should work. I’d expect Sean Park to be pushing this angle with his portfolio companies too…

Fascinating post from Quantivity – I’m hoping for more on the same topic from him. Many of the advantages listed would be enjoyed by any small non-real-money fund: hedge, prop, family office etc. Of course there are some serious obstacles that small, (relatively) unregulated funds face, and Lars Kroijer describes them in detail in Money Mavericks. And a lack of legacy technology is indeed an advantage in building trading systems quickly. A relatively recent pre-existing framework, either from a vendor or built in house, can be a big advantage though. A classic example is gateways for exchange/ECN connectivity.

Fascinating blog on HFT implementation from WK. He comments, “a variation on this structure is to store the Limits in a sparse array instead of a tree.” More detail on the implications for L1 and L2 cache behaviour of trees versus arrays for limits would be welcome. I’m assuming a C++ implementation here of course, though WK points out you can make Java go fast if you avoid GC, which chimes with the experience of the LMAX guys. I raise this because I interviewed with a big HFT firm last year: they gave me a programming exercise based on L1/L2 cache behaviour.
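
To make the tree-versus-array tradeoff concrete, here’s my own rough sketch of the sparse array idea, not WK’s code: one slot per price tick, so stepping to the next populated level is a scan over contiguous memory rather than a pointer chase through tree nodes.

    // My own rough illustration of the sparse-array layout, not WK's code.
    // One Level slot per price tick; best-price searches walk contiguous memory.
    #include <cstdint>
    #include <vector>

    struct Level {
        uint64_t totalQty = 0;   // aggregate resting quantity at this price
        // a real book would also hang an intrusive list of orders off each level
    };

    class SparseBook {
    public:
        SparseBook(int64_t minTick, int64_t maxTick)
            : minTick_(minTick), levels_(static_cast<size_t>(maxTick - minTick + 1)) {}

        void add(int64_t priceTicks, uint64_t qty) {
            levels_[static_cast<size_t>(priceTicks - minTick_)].totalQty += qty;
        }

        // Best bid = highest populated level. Scanning down from the last known
        // best touches a handful of adjacent cache lines, versus pointer-chasing
        // node to node in a std::map-style tree.
        int64_t bestBidFrom(int64_t hintTicks) const {
            for (int64_t p = hintTicks; p >= minTick_; --p)
                if (levels_[static_cast<size_t>(p - minTick_)].totalQty > 0)
                    return p;
            return -1;   // empty side
        }

    private:
        int64_t minTick_;
        std::vector<Level> levels_;   // "sparse": most slots sit empty
    };

    int main() {
        SparseBook bids(9900, 10100);   // arbitrary tick range for illustration
        bids.add(9950, 1000000);
        bids.add(9948, 2500000);
        return bids.bestBidFrom(10100) == 9950 ? 0 : 1;
    }

The cost, of course, is memory proportional to the tick range rather than to the number of populated levels, which is exactly where the L1/L2 cache questions get interesting.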

The Greatest Trade Ever

February 1, 2011

I’m reading Zuckerman’s The Greatest Trade Ever, an account of how John Paulson’s hedge fund profited from the credit crunch. There’s a lot of anecdotage and general background, but among all that there’s some good detail on implementation. How to implement a view of the markets as a trade is a key question for any trader; Drobny’s House of Money is excellent on this. Zuckerman’s book has some good stuff on why shorting the bonds or equity of home loan origination companies didn’t work, why CDSs on sub-prime MBSs didn’t become tradeable until 2005, and why they were the right vehicle for shorting. Also on why using CDSs means negative carry, and why that’s generally a difficult thing for any portfolio. Taleb has some good comments on why his out-of-the-money options strategy suffered from the same problem.

LMAX conference

January 28, 2011

So I’ve signed myself up for the LMAX UCL algo trading conference – see you there! I’m looking forward to Martin Thompson’s talk, and hoping I can bend his ear on a few server engineering questions over drinks at the end of the afternoon. I’m also keen to know more about the LMAX market design. In the infoq presentation I linked earlier there are some intriguing comments about ensuring low and stable latency for market makers. This makes me wonder what the terms are for market maker participation on the LMAX order books. Do market makers have a different API from market takers? Do makers get processing priority for placing, pulling and amending orders? Can maker orders cross with other maker orders, or only with market taker orders?