Last weekend, I had the pleasure of attending Jane Street’s Electronic Trading Challenge. Two fellow interns and I formed a team to compete in a simulated market to see whose bot could generate the most money. We took third place in the first part of the competition, but didn’t do so hot in the second part. Unfortunately, there were only prizes for the first place team in the second part, so we didn’t win anything :(. Regardless, I had an awesome time and it was fun to learn about something that I’d previously had very little exposure to.
You can see our code here: https://github.com/charlieyou/jsetc
It’s a ten-hour competition split into two parts: the first nine hours run continuously, and in the final hour everyone’s points reset back to zero.
There was a simulated market containing seven securities available to trade: GOOG, AAPL, MSFT, NOKIA, BOND, NOKADR, and XLK. GOOG, AAPL, MSFT, and NOKIA each had a random, unknown fair value. BOND’s fair value was known to be 100, NOKADR was an ADR of NOKIA, and XLK was an ETF composed of the first four securities. Each security’s market price fluctuated randomly around its underlying fair value. The ADR shared NOKIA’s fair value and the ETF’s fair value was the sum of its underlying securities, but both were much less liquid.
For five minutes, you and another bot trade against each other in this simulated market to see who can walk away with the most money.
What We Did
First, we traded BOND whenever the ask was below 100 or the bid was above 100. This gave us some profit to start, but not much. We then moved onto parallel development of three other strategies: ADR pair trading, mean reversion, and fair value prediction.
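Since BOND’s fair value is known to be 100, the logic here is just a threshold check on the top of the book. A minimal sketch of that idea (the function name and quote representation are my own, not the contest API):

```python
# Sketch of the BOND strategy: the fair value is known to be 100, so any ask
# below it is a profitable buy and any bid above it a profitable sell.
FAIR_VALUE = 100

def bond_orders(best_bid, best_ask):
    """Return a list of (side, price) orders given the top of the book."""
    orders = []
    if best_ask is not None and best_ask < FAIR_VALUE:
        orders.append(("BUY", best_ask))   # someone is selling below fair value
    if best_bid is not None and best_bid > FAIR_VALUE:
        orders.append(("SELL", best_bid))  # someone is buying above fair value
    return orders
```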
The NOKIA/NOKADR pair shared the same underlying value, so whenever their prices diverged, we traded each one in the direction of the other. Like trading BOND, this was fast and easy to implement, but didn’t yield much profit.
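The pair trade boils down to comparing the two mid prices and, past some divergence threshold, selling the rich symbol while buying the cheap one. A hedged sketch, with the threshold and return format as assumptions of mine:

```python
# Illustrative NOKIA/NOKADR pair trade: the symbols share a fair value, so
# when their mids diverge past a threshold, sell the expensive one and buy
# the cheap one, expecting the spread to close.
def pair_trade(mid_a, mid_b, threshold=1.0):
    """Return (action_on_a, action_on_b), or None if the spread is small."""
    spread = mid_a - mid_b
    if spread > threshold:        # A is rich relative to B
        return ("SELL_A", "BUY_B")
    if spread < -threshold:       # A is cheap relative to B
        return ("BUY_A", "SELL_B")
    return None
```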
Mean reversion was the main thing I worked on, but I ultimately couldn’t get it profitable enough to deploy. The securities’ prices were stochastic, so mean reversion had no predictive power. Unfortunately, I didn’t realize this during the competition, so I wasted a lot of time implementing and testing it.
Fair value prediction was our main money-maker but was also very finicky. We used an exponential moving average to estimate the fair values of the first four securities, then traded each one toward its estimated fair value. The exact way we calculated the prediction took a lot of tuning, and the computation involved made our bot quite slow.
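The core of this strategy is just an exponential moving average plus a rule to trade toward it. A minimal sketch, where `alpha` and the quote-vs-estimate comparison are exactly the kinds of knobs that needed tuning (names are illustrative):

```python
# Estimate a security's fair value with an exponential moving average of
# observed prices, then trade toward that estimate.
class EMA:
    def __init__(self, alpha=0.1):
        self.alpha = alpha   # smoothing factor: higher reacts faster
        self.value = None

    def update(self, price):
        if self.value is None:
            self.value = price
        else:
            self.value += self.alpha * (price - self.value)
        return self.value

def signal(fair_estimate, best_bid, best_ask):
    """Trade in the direction of the estimated fair value."""
    if best_ask < fair_estimate:
        return "BUY"    # market is selling below our estimate
    if best_bid > fair_estimate:
        return "SELL"   # market is buying above our estimate
    return None
```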
Because we got such an early start in getting something to work, we jumped to second place and stayed there for the majority of the competition, until we were moved to third by the team that would ultimately take first in both parts. Our performance in the second part was lackluster at best: we finished in the middle of the pack (15/30). I hypothesize that our bot was slower than most and thus couldn’t keep up with the trading speeds of the other final bots. However, that doesn’t explain why we did so well in the first part; I attribute that to our large head start and to randomness. Unfortunately, we’ll never know the exact reasons.
What the Best Team Did
One team absolutely dominated everyone else in the second portion of the competition: they scored over one hundred thousand points, while the next best team was at thirty thousand and most were under ten thousand. Afterwards, I asked them what their strategy was, and they graciously shared it with me.
Like us, they started by trading only BOND to get something going as quickly as possible, then moved on to NOK* ADR pair trading. Our strategies differed in that they didn’t simply average the two symbols; they weighted the more liquid one higher.
They made the majority of their money with a strategy that we tried to implement but ran out of time for: ETF arbitrage. Whenever the price of the ETF diverged from the sum of its underlying securities, they would trade the ETF in the direction of the difference. They hedged this trade 5:1 and could control the frequency of trading by adjusting the threshold at which a trade was made.
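As I understood their description, the trade compares the ETF’s price against the summed component mids and fires past a tunable threshold, hedging in the basket. A sketch under those assumptions (the 5:1 ratio is from their description; the data shapes and names are mine):

```python
# ETF arbitrage sketch: when the ETF's mid diverges from the sum of its
# components' mids by more than a threshold, trade the ETF toward the basket
# and hedge with the components. Raising the threshold trades less often.
def etf_arb(etf_mid, component_mids, threshold=2.0, hedge_ratio=5):
    """Return (etf_side, hedge_side, hedge_ratio), or None if no edge."""
    basket = sum(component_mids)
    diff = etf_mid - basket
    if diff > threshold:     # ETF rich: sell ETF, buy the basket as a hedge
        return ("SELL_ETF", "BUY_BASKET", hedge_ratio)
    if diff < -threshold:    # ETF cheap: buy ETF, sell the basket
        return ("BUY_ETF", "SELL_BASKET", hedge_ratio)
    return None
```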
Three other things that they did that gave them an edge:
- Coded everything in C++, which made them faster than whoever they were trading against (almost everyone else used Python or Java).
- Limited the orders on the book to only two, canceling any orders that were made based on old information.
- Kept track of what was in their portfolio at all times, letting them execute some optimizations based on the holding limits.
One thing we did well was to set up our codebase so that everyone could easily develop and test strategies independently of one another. The strategies used in each test or deployment were specified on the command line, so enabling a new strategy required no code changes beyond adding the file containing the subclass.
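The pattern above can be sketched with a subclass registry plus `argparse`; this is a minimal illustration of the idea, not our actual code, and every class and flag name here is made up:

```python
# Each strategy is a subclass that registers itself by name; the names passed
# on the command line decide which strategies run, so adding a strategy only
# means adding a new subclass in its own file.
import argparse

STRATEGIES = {}

class Strategy:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        STRATEGIES[cls.__name__.lower()] = cls   # auto-register subclasses

class Bond(Strategy):
    def on_tick(self, book): ...

class PairTrade(Strategy):
    def on_tick(self, book): ...

def build(names):
    """Instantiate the strategies named on the command line."""
    return [STRATEGIES[name]() for name in names]

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("strategies", nargs="+", choices=sorted(STRATEGIES))
    args = parser.parse_args()
    active = build(args.strategies)
```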
What we didn’t do such a good job of was our deployment system. We would push code to git and then pull it on the server, which was perhaps the worst way to do it: it caused lots of unnecessary commits and made us lazy with our version control, leading to multiple problems later on. In addition, we had separate folders on the server for each person’s working code, but they weren’t named very distinctly. People sometimes accidentally overwrote files in someone else’s folder, costing us lost code and developer time spent chasing confusing errors.
Overall, the event was extremely well run and I highly recommend that anyone attend it if they can. I will definitely be applying for Jane Street’s summer internship as I’d like to get a better sense of the work that they do as well as the tech behind it.
Update (2017-09-06): Jane Street rejected me for a summer internship :(.