August 31, 2006
Harris bills section 14.3 of Trading & Exchanges, on adverse selection and uninformed traders, as the most important lesson in the book. Trading authors often comment that novice traders get ground down by transaction costs. We all know that trading is a zero sum game. Harris explains exactly how trading by dealers and informed speculators grinds down the rest in that zero sum game. “Informed” means informed on fundamental values.
So what is adverse selection? Dealers quote buy and sell prices for all the instruments they deal. Adverse selection is the risk that a better informed trader will take one of a dealer’s prices and leave the dealer with a position against which the market subsequently moves, making it difficult to unwind. If a dealer thinks they’ve just traded with a better informed trader, they can take several steps in mitigation: they can change quoted prices and sizes to discourage further trades on the same side and encourage trades on the other side. Or they can unwind an unwanted position immediately by paying for liquidity and taking someone else’s prices. Or they can hedge, e.g. buy the future if they just sold the bond.
It’s not that uninformed traders get ground down by always picking the wrong side of the market: buying before a drop or selling before a rally. They lose because dealers build the cost of adverse selection into their spreads, among other costs. So in the zero sum game, dealers charge uninformed traders for their losses to informed traders. Of course, dealers can and should be well informed traders themselves, even if they do encounter better informed traders in the market.
August 18, 2006
I’m using R to fit curves to data that has outliers backed by low observation counts. I’ve been thrashing round trying different R fitting methods, including glm.fit. But linear fits were obviously wrong. Not being a maths graduate, I was a bit stumped until I had a chat with one of our more maths and tech minded model traders. He looked at my charts and data, and suggested a weighted least squares fit.
In R we can do a weighted local regression fit with loess() and predict(). Given sample data including weights, loess() returns a fitted model object. Then you feed the data points to be charted through predict(), along with the fitted model, and plot the nicely smoothed result. Do example(loess) in R to get started.
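Here’s a minimal sketch of that recipe. The x, y and w vectors are made-up data shaped like the jagged line in the August 15 entry below, not my real curves:

```r
# jagged line: every second point in each group of 3 sits on the
# true line y = x, the others are shifted down or up by 1 (toy data)
x <- 0:19
y <- x + rep(c(-1, 0, 1), length.out = 20)
# weights favouring the unshifted points
w <- rep(c(1, 3, 1), length.out = 20)

# weighted local regression; loess() returns a fitted model object
fit <- loess(y ~ x, weights = w)
# predict() evaluates the smooth at the points we want to chart
smoothed <- predict(fit, data.frame(x = x))

plot(x, y, type = "b")
lines(x, smoothed, col = "red")
```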
August 16, 2006
I started programming as a kid back in the 70s, and I’ve never stopped getting a buzz from learning the new paradigm that goes with any genuinely different programming environment. But I haven’t picked up a new paradigm since I started using dynamic languages with Python and Smalltalk in 00/01. Now I’m starting out with R I’m experiencing all the excitement that goes with every new glimpse of the possibilities of a fresh conceptual toolkit.
Here’s a timeline of the programming languages I’ve picked up along the way, with comments on which ones excited me, and which ones didn’t. Bear in mind this is purely about programming languages, and not operating systems or hardware…
- Late 70s: Basic on a Commodore Pet, coding home brew games. You always remember the first time!
- Early 80s: Z80 on a ZX81. My first taste of the power, control and efficiency of close to the metal coding.
- Mid 80s: early professional coding in Basic, dBaseII and Fortran. Nothing new or interesting there.
- Late 80s: learned to code in C. Data structures and pointers! They say all programming problems can be solved by adding a level of indirection, and all bugs can be fixed by removing one! One has to master indirection to code in assembler, and in C I rediscovered something crucial that’s missing in Basic and Fortran. Combine that with structs and dynamic memory allocation and you have a big jump forward in expressive capability over Basic and Fortran.
- Early 90s: C++, object orientation and polymorphism. After initial excesses with inheritance I discovered the power of interfaces and composition with the GoF patterns book. A huge jump forward.
- Late 90s: Java. Big standard libraries and portability courtesy of a VM, but no improvement in terms of expressiveness, power, control or efficiency over C++.
- 2000: Python & Smalltalk. All the power of OO, but much less code. Don’t need to write acres of type declarations to get anything done, and containers are built in to the language. Radically interactive too, so we get away from tedious compile, link, debug cycles.
- 2001: C#. Yawn.
- 2006: R. Array operations. tapply. Built in stats and charting. Wow! R is to Excel as Unix is to Windows…
August 15, 2006
I’ve got a bunch of XY cartesian coords that I want to plot. When charted, the curves are jagged. Fortunately, each point has an associated weight – the number of observations used to create it. So how do I fit a curve to the data, using the weights to smooth out and reduce the influence of the outlying points that result from fewer observations?
With R, it’s easy…
# x,y define a straight line at 45 degrees
x <- 0:19
y <- 0:19
# make the line jagged by shifting points
# [down, no move, up] in a repeating group of 3
y <- y + rep(c(-1, 0, 1), length.out = 20)
plot(x, y, type = 'b')
# generate a vector of weights that give more
# weight to the unshifted points
w <- rep(c(1, 3, 1), length.out = 20)
# reconstruct the unshifted straight line; glm.fit() wants a
# design matrix, so bind on a column of 1s for the intercept
f <- glm.fit(cbind(1, x), y, weights = w)
August 7, 2006
I was struggling to parse a big CSV with R this morning, until I discovered this lecture. There are some useful tips in there on the use of read.table(), with good detail on separators and quotes. I’ve got 350806 lines of data, and hunting down the lines causing the “more columns than column names” error was like looking for a needle in a haystack. The lecture shows how to use count.fields() and table() to quickly identify the problem.
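The technique looks something like this. The four line CSV is a made-up stand-in; the real 350806 line file isn’t reproduced here:

```r
# a tiny stand-in CSV: line 3 has an extra field (toy data)
csv <- tempfile(fileext = ".csv")
writeLines(c("a,b,c", "1,2,3", "4,5,6,7", "8,9,10"), csv)

# count.fields() gives the number of fields on each line
n <- count.fields(csv, sep = ",")
# table() summarises the field counts, so any odd ones stand out
table(n)
# and which() points straight at the offending line numbers
which(n != n[1])
```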
August 2, 2006
“The benchmarking proposal could also create a dysfunctional market structure, says Liba. By promoting the use of e-trading systems to obtain the price data required for delivering and monitoring best execution, the proposal fails to recognise the importance of a voice market in the price formation process. This could lead to a distortion in the market in favour of dealing via an electronic “request for quote” model in an MTF market, which would not be required to deliver best execution under MiFID”
I’d be quite happy to see our electronic prices used to keep voice dealers in line. Most RFQs are multi dealer, so the competition and price transparency keeps dealers honest online. But which price source would be used: Bloomberg?
August 1, 2006
A small illustration culled from the R intro…
# 30 elements in each array..
# ..a salary and an Aussie state
incomes <- c(60, 49, 40, 61, 64, 60,
             59, 54, 62, 69, 70, 42, 56,
             61, 61, 61, 58, 51, 48, 65,
             49, 49, 41, 48, 52, 46, 59, 46, 58, 43)
state <- c("tas", "sa", "qld", "nsw",
           "nsw", "nt", "wa", "wa", "qld",
           "vic", "nsw", "vic", "qld", "qld",
           "sa", "tas", "sa", "nt", "wa",
           "vic", "qld", "nsw", "nsw", "wa",
           "sa", "act", "nsw", "vic", "vic", "act")
# Categorise the states and categorise the incomes
# into buckets using cut(). Then use table() to
# generate a table of counts for each category
statef <- factor(state)
incomef <- factor(cut(incomes, breaks = 35 + 10*(0:7)))
table(incomef, statef)
         statef
incomef   act nsw nt qld sa tas vic wa
  (35,45]   1   1  0   1  0   0   1  0
  (45,55]   1   1  1   1  2   0   1  3
  (55,65]   0   3  1   3  2   2   2  1
  (65,75]   0   1  0   0  0   0   1  0
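The same vectors also show off the tapply() side of R: apply a function to each group defined by a factor, here mean income per state. A quick sketch, with the data repeated so the snippet stands alone:

```r
# the same incomes and states as above, repeated for self-containment
incomes <- c(60, 49, 40, 61, 64, 60, 59, 54, 62, 69, 70, 42, 56, 61, 61,
             61, 58, 51, 48, 65, 49, 49, 41, 48, 52, 46, 59, 46, 58, 43)
state <- c("tas", "sa", "qld", "nsw", "nsw", "nt", "wa", "wa", "qld", "vic",
           "nsw", "vic", "qld", "qld", "sa", "tas", "sa", "nt", "wa", "vic",
           "qld", "nsw", "nsw", "wa", "sa", "act", "nsw", "vic", "vic", "act")

# tapply() applies a function to the first argument, grouped by the
# second: mean income per state, e.g. act comes out at 44.5
m <- tapply(incomes, factor(state), mean)
round(m, 1)
```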
August 1, 2006
VentureBlog is discussing the business ecology necessary to support a startup, and whether such an ecology could be fostered in Europe. I’m intrigued by the comment at the end, “Amsterdam … may just succeed while London isn’t looking”, which reads to me as assuming that London has pole position in European startups. That’s possibly a natural assumption for an American to make, as London is the financial centre of Europe, and is English speaking.
I suspect London’s role as a financial centre will prevent it ever being a start up centre. In London hacker types are drawn into banks to work on trading systems by the excess remuneration. Start ups can’t compete on salaries. Consider New York: all the coders are working on Wall St.
Fortunately the UK does have start up action outside London. Cambridge is the centre of a lot of business park activity, and MS Research has an office there. Autonomy and ARM are recent Cambridge success stories. The Thames Valley hosts a lot of software activity too.
I wonder if this is invisible to the US VC mindset because so many of the UK companies are focused on verticals. ARM and Autonomy are exceptions in cutting across sectors and having the horizontal reach and scale that generate really big pay offs. Before I worked in finance, I spent several years with UK ISVs that were world leaders in mechanical engineering and petroleum production software. They were more typical of the UK start up scene. Both were businesses founded in the 80s.