Python 3 & PyCharm

July 1, 2015

I’ve been coding in Python since 2000, and for a long time my dev env preferences didn’t change. Like many, I started out on Python 1.5.2 with a basic text editor, often vim. When the 2.x series of releases started I held off and stuck with 1.5.2; I never used 1.6.x. I can’t remember whether I made the jump at 2.1 or 2.2, but I’ve been on 2.x ever since, usually with notepad++ as my editor. Part of the reason is that it takes time for the extensive Python ecosystem to catch up and port all the libraries and frameworks.

Anyway, I’ve just finished a contract where I used Python 3.3 and the PyCharm IDE, and it was a breath of fresh air. I’d never consider development in Java or C++ without an IDE (my preferences are IntelliJ and MS Visual C++ respectively), but previously I’d felt an IDE was unnecessary in Python, mainly because the cycle time is so rapid. Unlike C++, the cycle is not edit, compile, link, test. In Python one just edits and tests, which makes the printf style of debugging far more effective. PyCharm turbocharges the debugging process with breakpoints and visual object graph traversal, and during coding it interactively highlights syntax errors and unresolved variable references. That’s a big time saver too, since code tends to run at the first attempt without throwing syntax errors. +1 for PyCharm!

So what about the shift from Python 2.x to 3.x? For me the important point has been the move to more iterator based coding. The iteritems()/iterkeys()/itervalues() methods no longer exist, as items()/keys()/values() no longer return lists; they return iterable view objects. Those view objects are not stand-in replacements for lists. I also had to get used to using the built-in next() function with generators. And, of course, print is now a function and no longer a statement. But apart from that it was straightforward.

Update 2015-07-01: I’ve just been pinged by an old coding compadre who downloaded PyCharm on my recommendation, and needs a tip on fixing up interpreter paths to pick up libs. I had to read a couple of StackOverflow articles to figure this out too, so I thought I’d document it here. I’m using PyCharm Community Edition 4.5.2, and to add libraries to my interpreter search path I go to the File/Settings dialog. In the left-hand tree control, under the Project: <myproj> node, I select Project Interpreter. Then I click on the cog icon in the top right, next to the selected interpreter, and choose the More… option. This throws up another dialog: Project Interpreters. On the right are several icons. The bottom one is a mini tree control that shows a pop-up tooltip saying “show paths for the selected interpreter”. Click on that, and finally you get the Interpreter Paths dialog, where you can add your library. Phew! Could this config be buried any deeper? IntelliJ: sort it out! PyCharm is very, very good, but this is quite a usability flaw.

I’ve been doing a lot of Excel RTD addin coding recently, as I’ve been adding RTD support to SpreadServe. As part of that work I’ve developed two new addins, both of which I’ve posted on github. Of course, both addins work in Excel and SpreadServe. The first, SSAddin, supports quandl.com queries and Unix cron style timer events on background threads. Both these things can be done with VBA, of course, and that’s how quandl’s existing Excel addin does it. However, SSAddin gives you the means to achieve automated, scheduled downloads from quandl with no Visual Basic and no manual keystrokes into a GUI. The second addin, kkaddin, is based on Kenny Kerr’s example C# RTD code. While I was researching RTD I read Kenny’s excellent material on the topic, and John Greenan also has some quality content on his blog. However, I wasn’t able to find a single, simple download with C# boilerplate code that would build and run; so that’s what kkaddin addresses.

quandl badly formed URL

April 20, 2015

I’ve started working on some new code that pulls data from quandl, and I was getting this error…

 { "error":"Unknown api route."}

I was using the first example from quandl’s own API page

https://www.quandl.com/api/v1/WIKI/AAPL.csv

and googling didn’t turn up any answers. Fortunately the quandl folk responded on Twitter, and all’s well. The URL should be…

https://www.quandl.com/api/v1/datasets/WIKI/AAPL.csv

So I’m recording the issue here for any others that get stuck. Looks like “unknown api route”==”badly formed url”.

Excel industrialisation

April 3, 2015

John Greenan has produced an excellent series of posts on Excel VBA Industrialisation on his blog. It’s a topic dear to me, so I figured I’d better respond. In his posts JG presents a series of VB Extensions based techniques to enable the export of embedded VB from a spreadsheet, so it can be version controlled, as well as techniques for error logging and reporting. The code is out there on github, and it’s a valuable addition to the public domain, especially since there are several commercial offerings addressing this space: spreadgit, ClusterSeven and Finsbury Solutions, for instance. JG kicks off his discussion in part one by observing that VBA is in the doldrums, and that the cool kids are using MEAN, Scala, OCaml or Haskell. Sure, the cool kids are never going to use VBA. But that’s not just because other languages are cooler; it’s because VBA and the latest programming languages are aimed at completely different audiences. Scala, OCaml & Haskell are for developers, and Excel is for non-developers: end users, business users. The very reason for Excel’s phenomenal success and ubiquity is that it enables end users to create software solutions. Apparently there are eleven million professional software developers in the world. But even those eleven million can’t meet the world’s demand for software, so end users have to build their own solutions, and they use Excel to do it. The result is, as JG points out in the comments to part six of his series: “In many cases the requirement for Excel Industrialisation is for a firm with an existing portfolio of ‘000s of spreadsheets that cannot all, in a cost-effective manner, be manually rewritten to conform to a coding standard.”

A version control system is an important part of controlling those portfolios of end user developed spreadsheets. However, it solves only part of the problem. Another major factor behind so many spreadsheet problems is their manual, desktop operation. Since Excel is a desktop application, Excel spreadsheets must be manually operated by their users. Users have to start up Excel, load the sheet, key in unvalidated data, hit F9, and then copy & paste or email the results out. All of that is error prone, and all of that manual operation is a major factor preventing any organised, systematic testing. These problems were writ large with the London Whale. They could be resolved if we could decouple Excel as a development environment from Excel as a runtime. It’s great that end users can develop their own solutions in Excel, but it’s burdensome and error prone for these solutions to be operated manually on desktop PCs. Those solutions should be automated, resilient and scalable, and hosted by a server side runtime. That, of course, is SpreadServe.

I’ve been heads down working on SpreadServe recently, so haven’t paid so much attention to the etrading topics that I used to blog about so much. Thanks to an update from mdavey, I’ve been catching up on the excellent, thought provoking content that jgreco has been posting on his plans for a new US Treasury trading venue, organised as a limit order book, with buy and sell side trading on an equal footing. I enjoyed the post on internalization and adverse selection. His points about single dealer platforms are well founded too, though my own experience in rates trading is that it’s difficult to get client flow on to SDPs as by their very nature they can’t offer multi dealer RFQs, which are critical for real money clients that must prove best execution for regulatory reasons. Of course, if the inter dealer prices from BrokerTec, eSpeed and EuroMTS were public in the same way as equity prices from major exchanges are public, then more solutions to the best execution problem would be possible. As jgreco rightly points out, transparency is key.

Now I want to raise a few questions prompted by jgreco’s posts, both pure tech, and market microstructure…

  • Java? Really? I wonder if it’s inspired by LMAX’s Java exchange implementation, their custom collections and Disruptor. I would have expected C++, but then I’m an old school C++er.
  • Is that really the workflow? That must be a tier 2 or 3 bank. All my experience has been at tier 1 orgs where all pricing and RFQ handling is automated. If a trader quotes a price by voice, it’s a price made by the bank’s own pricing engines. Those engines will be coded in C++, driven by Eurex futures or UST on the runs, and showing ticking prices on the trader desktop. Mind you, obfuscation techniques were used to frustrate step 2: copy and paste quote. After you’ve spent a fortune building a rates etrading infrastructure, you don’t want everyone backing out your curve from your Bloomberg pages.
  • Will DirectMatch have decimal pricing, or are you going to perpetuate that antiquated 1/32nd stuff?
  • How will you handle settlement/credit risk? Will each trade result in two, with counterparties facing off with a clearing house?
  • How do you shift liquidity? When liquidity is concentrated at a single venue, it’s difficult to move. The only case I know of is German Govt Futures moving from Liffe to Eurex. I guess UST liquidity is fragmented across D2D and D2C venues, so it’s not concentrated all in one place, improving DirectMatch’s chances of capturing some of the flow.

The old ones are the best ones, and I spent too many hours yesterday butting my head against a classic Windows bug. I was testing an optimisation in the SpreadServeEngine’s handling of nested invocations of XLL supplied functions. I was using QuantLib’s YieldCurveBootstrapping spreadsheet, and the QuantLib 1.4.0 XLL addin. The sheet invokes the qlPiecewiseYieldCurve function, and the fourth parameter, RateHelpers, is supplied by an invocation of ohPack. Normally in this scenario, the XLOPER returned by ohPack would be marshalled to the engine’s internal representation, before being marshalled back to an XLOPER for handing into qlPiecewiseYieldCurve. I added a shortcut that made the XLOPER available as well as the internal representation, so the marshalling process could proceed by shortcut if possible. Unwittingly I ended up newing an object in one DLL which was released by another DLL when qlPiecewiseYieldCurve returned and released its stack frame. My debug build threw a run time assert on _BLOCK_TYPE_IS_VALID. Cue a bug hunt, with me combing through the codebase for a double delete or a buffer overrun. Spreadsheet engines are complex beasts, since they’re basically a runtime for a functional programming language, so they must maintain an object graph and dispatch imperatively at the nodes to tokenised code and XLL supplied C/C++ functions. There’s a lot of memory pool and stack management to do in all that! So I was sure there was a subtle memory management bug somewhere that had been exposed by my optimisation for the nested XLL functions corner case. While I was looking for the bug I did get some good reading on Windows heap debugging done: this and this are recommended. When debugging it’s easy to keep going deeper and deeper in pursuit of a supposedly subtle bug. Better to take a breath and try to widen one’s focus. It was this article that made me think again about the fact that each DLL has its own C runtime linked in, and if their memory management implementations differ, there will be problems. So it was in this case. Adding a static allocator function to the DLL responsible for freeing the object ensured that the new & delete were done by the same DLL, and the bootstrapping model started running through smoothly. Of course what I need to do now is revisit my build system’s link model and ensure all DLLs and EXEs link the same CRT implementation…

SpreadServe Beta

December 6, 2014

For the last few months I’ve been working on a new product: SpreadServe. SpreadServe’s mission is to take all those unwieldy spreadsheets full of XLL addins and VBA macros off trader desktops, and turn them into resilient, automated, scalable enterprise services. Excel spreadsheets are great, because they empower users to create their own solutions quickly. But on the other hand, they’re a liability, because they’re usually poorly tested and they have to be manually operated on the desktop. SpreadServe’s goal is to fix that by providing an alternate runtime for Excel spreadsheets. Retain business agility by continuing to design and build models in Excel. Auto-test, deploy, scale, log and automate those models in SpreadServe. Do contact us if you’re interested in joining the beta program.

Back in April I posted on my use of tornado for real time push up to a browser client using websockets. Then I was building a proof of concept; I’ve come a lot further since, and I’m now building the beta of my product. I’m still using tornado on the server side, and I’ve adopted Twitter’s Bootstrap framework for the browser GUI. There’s a bit of a learning curve to become productive in Bootstrap & jQuery, but it’s been worthwhile, and I’ve been pleasantly surprised by how far browser GUI development has advanced in the last five years. Anyway, the purpose of this post is to flag an excellent resource for anyone digging deeper into tornado server dev: Oscar Vilaplana’s Tornado In Depth EuroPython talk. There’s no sound on that youtube video for the first couple of minutes, and even after that the audio is a little scratchy. But bear with it. Oscar spends an hour walking you through tornado based code and the supporting internals, and makes it all beautifully simple. All the examples from his talk can be found here on bitbucket. Thanks Oscar!

Suppose you have a third party Win32 DLL, but don’t have access to the source code. The DLL exports a well known init function that makes callbacks into your code to register more functions. So you can call Win32’s LoadLibrary to load up the DLL, and use GetProcAddress to get hold of a pointer to the init func. You can invoke the init function from your code because you know the function prototype: the number and type of parameters. Then you get callbacks into a Register function you supply, which gives your code the names of the other functions exported by the third party DLL, as well as the number and type of their parameters. Excel developers will recognise this description of the XLL function registration mechanism. So, given the names of those functions, you can use GetProcAddress to get function pointers for them. But how do you invoke them? You don’t have function declarations available at compile time in your code. The functions don’t use varargs, so you can’t use va_start and va_end to figure out the params at run time.

The only way to resolve this dilemma is to kick down to assembler, and hand code some x86 that follows the Win32 calling conventions, which are well explained here and here. So here’s the code I wrote to invoke arbitrary functions exported from a DLL. It’s in inline assembler, together with the C++ preamble that sets up parameters to simplify the assembler. I used a couple of great resources to refresh my ASM chops, which had become very rusty after years of neglect: this primer and this x86 instruction set reference.

bool cc_cdecl = true;                         // stdcall if false
int parmBytes = ( parmCount - 1) * 4;         // parmCount includes ret val, so subtract 1
int parmPop = ( cc_cdecl ? parmBytes : 0);    // number of bytes to pop off the stack after call
void* rvInt = 0;                              // for receiving int or ptr return value
double rvDbl = 0.0;                           // for a float return value from ST(0)
int paddr = ( int)parms;                      // parms is void** array of parameters. Cast to int
                                              // to prevent implicit ptr deref by asm
// Then asm code to do a cdecl or stdcall dispatch and call xf.
__asm {                      // push parms onto stack in reverse order
        push eax             // save eax
        mov eax, paddr       // point to start of parms
        add eax, parmBytes   // point to last parm
    pp: push [eax]           // stack a parm
        sub eax, 4           // point to next parm
        cmp eax, paddr       // have we hit the start yet?
        jg pp                // if eax > parms goto pp
        call xf              // invoke the function!
        add esp, parmPop     // pop parms if cdecl
        mov rvInt, eax       // int or ptr retvals are in eax
        fst rvDbl            // float ret vals are in st0
        pop eax              // restore eax
 }

Invisible Engines

June 4, 2014

I first read Invisible Engines back in 2007. I still rate it highly now, and I’m pleased to see you can download the whole book as a PDF from the MIT site. Its topic is the economic and technical aspects of software platforms. Anyone who’s followed the fortunes of IBM, Microsoft, Apple and Sun, and their respective software and hardware platforms, should get a lot from this book. I had high expectations when I originally read it, and I wasn’t disappointed. Looking at the download again today, I can see it’s stood the test of time. The book goes well beyond operating systems as platforms, with excellent material on gaming consoles and the three way tension between mobile handsets, their OSes and network operators. It came out in 2006, so it predates the rise of social software platforms. But the principles it elucidates on multi sided markets and APIs are obviously applicable to Facebook, Google and Twitter. And in the financial sector they apply equally to industry giants like Bloomberg, a classic multi sided play. Bloomberg as a platform derives revenue from clients paying $1000/month for a terminal. The other sides of its market are dealers contributing quotes and liquidity to Bloomberg as an ECN, and software developers using Bloomberg APIs.