by Yuvi (@crypto_yuvi), Conclave Head of DeFi
I recently started placing trades on Pear Protocol because I believe that pair trading is a fantastic way to bet on relative outperformance while mitigating the influence of extraneous variables. You can long one asset and short another, and if the market is moved by something you had not considered, both legs are likely to move together, leaving your exposure focused on the relative outperformance of the specific pair.
In doing so, I found myself swamped with tickers and ideas, and decision paralysis meant that I could never settle on a pair. So I set out to build a process that could help me synthesize the volumes of data that are available and find indicators for pair trading ideas that I could consider.
I shared some outputs with a few friends (s/o HanSolar) who helped me come up with ideas and indicators, so in this article I would like to share one particular tool that I have been using, and how my use developed in response to some losing trades.
The hypothesis is simple: tokens with large upcoming unlocks will not see commensurate demand to absorb the new supply.
To begin, I looked at the DefiLlama Unlocks dashboard (s/o DefiLlama). I downloaded the same data that informs the unlock dashboard and built a Python script to parse it for my own dashboard.
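A minimal sketch of that parsing step might look like the following. It assumes the unlock data has already been downloaded to a local JSON file; the file name and field names (symbol, timestamp, amount, circulating_supply) are placeholders rather than DefiLlama's actual schema, so they would need to be mapped to whatever the downloaded file actually contains.

```python
# Sketch: parse a locally downloaded unlock dataset into per-token 30-day inflation.
# The file name and field names below are placeholders, not DefiLlama's real schema.
import json
from datetime import datetime, timedelta, timezone

def load_upcoming_unlocks(path="unlocks.json", horizon_days=30):
    """Return {symbol: supply inflation rate} for unlocks within the next horizon_days."""
    with open(path) as f:
        events = json.load(f)

    now = datetime.now(timezone.utc)
    cutoff = now + timedelta(days=horizon_days)
    inflation = {}

    for event in events:
        unlock_time = datetime.fromtimestamp(event["timestamp"], tz=timezone.utc)
        if not (now <= unlock_time <= cutoff):
            continue
        symbol = event["symbol"]
        # Inflation rate = tokens unlocked / current circulating supply
        rate = event["amount"] / event["circulating_supply"]
        inflation[symbol] = inflation.get(symbol, 0.0) + rate

    return inflation

if __name__ == "__main__":
    for symbol, rate in sorted(load_upcoming_unlocks().items(), key=lambda kv: -kv[1]):
        print(f"{symbol}: {rate:.1%} supply inflation over the next 30 days")
```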

Unlocks dashboard as at 07 Oct 24
Since this was a problem of supply and demand, I thought a good way to present the data would be to plot each asset’s ‘average return over the last 14 days’ against the ‘next 30 day inflation rate’ implied by any unlocks occurring in the next 30 days, and compare both to the ‘expected analytical price action’.
The expected analytical price action is simply an inflation curve whereby no unlocks means no price movement, and 100% supply inflation corresponds with a -50% price movement (if you double the supply, you halve the value).
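In other words, the curve assumes a constant market cap, so price scales with 1 / (1 + inflation). A quick sanity check of the arithmetic in Python:

```python
def expected_price_move(inflation_rate: float) -> float:
    """Expected price change if market cap stays constant: price ~ 1 / (1 + inflation).
    0% inflation -> 0% move; 100% inflation -> -50% move."""
    return 1.0 / (1.0 + inflation_rate) - 1.0

print(f"{expected_price_move(0.0):.0%}")    # 0%
print(f"{expected_price_move(0.10):+.1%}")  # about -9.1%
print(f"{expected_price_move(1.0):+.0%}")   # -50%
```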

Output at 07 Oct 24
This simply provides another view of the unlock data; an edge, however, may lie in assessing market activity. To see whether unlocks were priced in, I used the Binance public API to fetch the last two weeks of price action for each asset and compared the average daily price movement to the expected price movement implied by upcoming inflation events. If an unlock is being priced in, the asset’s average price movement should track the analytical expected price action curve for its level of upcoming inflation.
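Fetching the price-action side of that comparison only takes a few lines. A rough sketch, assuming each asset trades as a USDT pair on Binance and using the 14-day window described above (the klines endpoint is public; requests is the only dependency):

```python
# Sketch: pull ~two weeks of daily closes from Binance's public klines endpoint
# and compute the average daily return for a symbol.
import requests

BINANCE_KLINES = "https://api.binance.com/api/v3/klines"

def average_daily_return(symbol: str, days: int = 14) -> float:
    resp = requests.get(
        BINANCE_KLINES,
        params={"symbol": f"{symbol}USDT", "interval": "1d", "limit": days + 1},
        timeout=10,
    )
    resp.raise_for_status()
    closes = [float(k[4]) for k in resp.json()]  # index 4 = close price
    daily_returns = [curr / prev - 1 for prev, curr in zip(closes, closes[1:])]
    return sum(daily_returns) / len(daily_returns)

# Usage (comparing against the analytical curve from the earlier sketch):
# avg = average_daily_return("ARB")
# print(f"avg daily move {avg:+.2%} vs expected for 5% inflation {expected_price_move(0.05):+.2%}")
```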
Lesson 1: Timeline. Unlocks will occur in the future, whereas the data I am looking at is from the past. Since the supply hasn’t hit the market yet, there is no material impetus for an asset’s price action to align with the expected curve. The opportunity here, however, is to find the relative outperformers and look for likely cases of reversion.
It is critical to understand the context of these assets, what ‘unlock’ means in each case, and what kind of liquidity they have. Having narrowed the opportunities down to a few, I could go back to the unlock dashboard and look at the purpose of each unlock and what that supply was doing. Unlocking treasury assets? Less likely to get sold off. Early investor or advisor unlocks? More likely to get sold off. Similarly, a 2% depth calculation from the Binance API data could reveal illiquid assets. The context of an unlock is essential to understanding its effect on circulating supply.
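The 2% depth check can be sketched against Binance’s public order book endpoint, measuring depth as the quote value resting within 2% of the mid price. The band and the 1000-level limit here are illustrative choices, not anything prescribed:

```python
# Sketch: total USDT value resting within +/-2% of the mid price on Binance.
import requests

def depth_within_2pct(symbol: str) -> float:
    resp = requests.get(
        "https://api.binance.com/api/v3/depth",
        params={"symbol": f"{symbol}USDT", "limit": 1000},
        timeout=10,
    )
    resp.raise_for_status()
    book = resp.json()
    best_bid = float(book["bids"][0][0])
    best_ask = float(book["asks"][0][0])
    mid = (best_bid + best_ask) / 2

    # Sum quote value of bids within 2% below mid and asks within 2% above mid
    bid_value = sum(
        float(p) * float(q) for p, q in book["bids"] if float(p) >= mid * 0.98
    )
    ask_value = sum(
        float(p) * float(q) for p, q in book["asks"] if float(p) <= mid * 1.02
    )
    return bid_value + ask_value
```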

Output at 07 Oct 24
So now I had narrowed down my picks to a few assets whose unlocks I thought would hit the market, and whose price action I thought was not reflecting the upcoming influx.
Lesson 2: Thematic pairs. Initially I was only looking for shorts and ignored the long side. I paired with majors because I assumed they would be less volatile, allowing me to focus on the inflating assets. I realized that this introduced more variables into the question of relative outperformance, because now network effects came into play. There is definitely a time and place for this, but I decided that I wanted even fewer variables influencing my trades, and I think this is where Pear Protocol really sets itself apart. If the short token was an AI token and the long was BTC, there were cases where the AI narrative saw a tailwind in the order books and even the lowest-tier assets caught huge bids. Now, when I select a short leg based on this approach, I look for a theme-aligned long pair. If the short asset is a gaming token, I look for a relatively stronger-looking gaming token. If the short asset is an AI token, I look for a relatively stronger-looking AI token.
Narratives catching a bid was one hurdle to navigate, but it has been easy enough to mitigate. Other hurdles have presented themselves with root causes that were much harder to identify. I recently opened a pair trade shorting $BIGTIME, shortly before it launched over 50% higher in a matter of days. I could not work out why. No other gaming tokens caught a bid, no other network tokens were up, and volume on Binance had barely moved.
Lesson 3: Geography. Binance may have the deepest liquidity and typically the most volume, but it is not the only exchange. In the case of $BIGTIME, trading volume on Bithumb and Upbit, largely driven by Korean traders, had climbed beyond BTC volumes on those venues. After losing a few trades to the Korean markets, I integrated Bithumb and Upbit volume data into my dashboard to help me identify cases where Korean market momentum might catch me off guard. Fool me twice, shame on me.
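The integration can be sketched against Upbit’s public ticker endpoint as below (Bithumb exposes a similar public ticker that can be wired in the same way). The acc_trade_price_24h field reflects my reading of Upbit’s response schema and should be treated as an assumption to verify:

```python
# Sketch: flag tokens whose 24h KRW traded value on Upbit rivals BTC's, as a proxy
# for Korean-market momentum. Field names are assumptions about Upbit's schema.
import requests

UPBIT_TICKER = "https://api.upbit.com/v1/ticker"

def upbit_volume_vs_btc(symbol: str) -> float:
    """Return 24h KRW traded value of `symbol` on Upbit as a fraction of BTC's."""
    markets = f"KRW-BTC,KRW-{symbol}"
    resp = requests.get(UPBIT_TICKER, params={"markets": markets}, timeout=10)
    resp.raise_for_status()
    by_market = {t["market"]: t for t in resp.json()}
    btc_value = by_market["KRW-BTC"]["acc_trade_price_24h"]
    token_value = by_market[f"KRW-{symbol}"]["acc_trade_price_24h"]
    return token_value / btc_value

# ratio = upbit_volume_vs_btc("BIGTIME")
# if ratio > 1:  # traded value exceeds BTC's on Upbit -- Korean momentum warning
#     print(f"BIGTIME 24h value is {ratio:.1f}x BTC's on Upbit")
```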

Output at 07 Oct 24
It’s not all pair trading, and I have other tools and indicators that I am using to inform my trades and help me generate ideas. My trading accounts are still small and I have gone from losing money to losing less money, which means I am trending towards not losing money, and perhaps will go as far as making money. One thing that is certain is that I will lose more trades. I know that and I am okay with it. My goal is to develop a process by which I win more trades than I lose, and hopefully in larger sizes. My pair trading process has evolved from ‘short token unlocks’ to quantitatively analyzing data and identifying relative outperformance to trade against.