Today's post is the second part of the AI Recap, and this week we delve into Automated Portfolios and Causal Analysis. Artificial Intelligence can be quite information-dense, so each topic has been separated into sections to help break everything down in an easier-to-understand way.
Architecture of Automated Crypto-Finance Agent
The goal of this architecture design is to provide an autonomous agent for active portfolio management in decentralized finance, covering activities such as asset selection, portfolio balancing, liquidity provision, and trading. This article will cover the implementation of the architecture, the experimental results, and what they mean.
One of the most important aspects of portfolio management is asset selection, along with balancing and rebalancing the portfolio according to market dynamics. That requires appropriate metrics to evaluate assets, both for their inclusion in portfolios and for reweighting the portfolios as time goes on and the market changes. The volatile nature of crypto markets poses challenges for some of the metrics traditionally used in conventional finance.
To explain the methods and architecture of the Automated Crypto-Finance Agent, we'll cover the components, experiments, and results.
The Portfolio Planner Oracle/AI: uses accumulated on-chain market data, together with predictions of price trends and volatility from the Price Predictor Oracle/AI, to provide long-term weights on tokens, thereby helping human or programmatic Portfolio Managers build long-term investment portfolios.
The Strategy Evaluator Oracle/AI: uses the same on-chain data and price-trend/volatility predictions to evaluate competing strategies and parameters, so that the winning strategy can be recommended for current operations on a portfolio maintained by Portfolio Balancer applications and smart contracts: rebalancing inventory, deploying smart contracts for liquidity provision on the portfolio instruments, and executing the corresponding trades.
The Pool Weighting Oracle/AI: suggests short-term weights on market instruments (tokens), helping the Portfolio Balancer adjust portfolio inventory given short-term risks.
The Signal Generator Oracle/AI: takes predictions of current price fluctuations and sentiment buzz around specific tokens and generates signals for trading and liquidity-provision applications and smart contracts: when to buy, sell, create, or cancel limit orders, as well as the optimal sell, buy, ask, and bid prices appropriate given the market momentum.
The Sentiment Watcher Oracle/AI: monitors news feeds on social and online media regarding specific tokens and overall crypto-related buzz, providing an overall assessment of sentiment for the Signal Generator and Price Predictor.
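To make the division of labor among the oracles concrete, here is a minimal sketch of how they could feed one another. All class and function names, formulas, and inputs are illustrative assumptions, not the project's actual API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the oracle pipeline; all names are illustrative.

@dataclass
class Prediction:
    trend: float       # expected relative price change, e.g. +0.02 for +2%
    volatility: float  # expected volatility level

def sentiment_watcher(post_scores):
    """Aggregate per-post sentiment scores into one value in [-1, +1]."""
    if not post_scores:
        return 0.0
    return sum(post_scores) / len(post_scores)

def price_predictor(onchain, sentiment):
    """Combine on-chain features with media sentiment into a trend estimate."""
    trend = 0.5 * onchain["momentum"] + 0.5 * sentiment
    return Prediction(trend=trend, volatility=onchain["realized_vol"])

def portfolio_planner(tokens, predictions):
    """Long-term weights: favour tokens with higher trend per unit volatility."""
    scores = {t: max(predictions[t].trend, 0.0) / (predictions[t].volatility + 1e-9)
              for t in tokens}
    total = sum(scores.values()) or 1.0
    return {t: scores[t] / total for t in tokens}

# Example wiring of the pipeline with toy inputs
sentiment = sentiment_watcher([0.3, 0.6, -0.1])
preds = {
    "BTC": price_predictor({"momentum": 0.04, "realized_vol": 0.5}, sentiment),
    "ETH": price_predictor({"momentum": 0.02, "realized_vol": 0.8}, sentiment),
}
weights = portfolio_planner(["BTC", "ETH"], preds)
```

The point of the sketch is the data flow, not the formulas: sentiment feeds prediction, prediction feeds planning, and the planner emits normalized long-term weights.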
Simulation and Backtesting Architecture in Strategy Evaluator:
The key component of the Strategy Evaluator is expected to be the Simulation and Backtesting frameworks, which serve two different yet complementary purposes. The Simulation framework simulates multi-agent trading and market-making activity within a configured environment of virtual agents and market conditions; results are recorded during the simulation, and the returns or losses of every agent are evaluated at the end. The Backtesting framework is similar to the Simulation one, except that the price trend is not simulated but taken from real live or historical data.
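The backtesting side of this can be sketched as a simple replay loop: feed a historical price series to an agent, track its resulting portfolio value, and compare against "hodling." The toy agent and all names here are illustrative assumptions, not the project's framework:

```python
# Minimal backtesting sketch: replay historical prices to a trading agent
# and compare its final portfolio value against simply "hodling".

def backtest(prices, agent, cash=1000.0):
    """Feed prices to `agent(price) -> 'buy'|'sell'|'hold'` one unit at a time
    and return the final portfolio value marked to the last price."""
    inventory = 0.0
    for price in prices:
        action = agent(price)
        if action == "buy" and cash >= price:
            cash -= price
            inventory += 1.0
        elif action == "sell" and inventory >= 1.0:
            cash += price
            inventory -= 1.0
    return cash + inventory * prices[-1]

def hodl_value(prices, cash=1000.0):
    """Buy at the first price, hold to the end."""
    units = cash / prices[0]
    return units * prices[-1]

# Toy agent: buy below 100, sell above 110
agent = lambda p: "buy" if p < 100 else ("sell" if p > 110 else "hold")
prices = [95, 105, 115, 90, 120]
print(backtest(prices, agent), hodl_value(prices))
```

A full simulation framework would run many such agents against each other with simulated order matching; the backtest variant above simply replaces the simulated price trend with recorded data.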
The Prediction Oracle (Price Predictor): would ideally provide the following inputs for the other services and AI oracles. An expectation of the price at a specific time in the near future can be used by Market Makers/Takers to execute their trading orders, while expectations of price trends and volatility levels can be used for inventory re-weighting (Portfolio Manager) or for selecting the inventory-rebalancing strategy (Portfolio Balancer). The main principle of the Predictor is the ability to predict expected market prices for target symbol pairs on specific exchanges in real time, updating its model on the fly. The Predictor can do both simultaneously, around the clock, potentially applying them to different symbol pairs and exchanges.
Incremental training: Update predictive models by periodically retraining on historical data spanning a training interval of an hour, day, week, month, quarter, half-year, or year, using a period increment of five minutes, an hour, six hours, a day, a week, a month, etc., and a historical interval window to derive features from the entire training interval.
Incremental prediction: Use the latest model to predict the expected market price during the next period. The input data fed to the model is the latest rolling "historical interval," covering part of the latest "training interval" and part of the period currently being predicted, from the very beginning to the very end of the latter.
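The incremental train/predict cycle described above can be sketched as a rolling-window loop. The mean-drift "model" and the specific interval lengths are illustrative stand-ins, assuming any `retrain`/`predict` pair could be plugged in:

```python
# Sketch of the incremental scheme: periodically retrain on a rolling
# training interval, then predict the next period from the latest
# historical window. The drift "model" is a toy stand-in.

TRAIN_INTERVAL = 24   # periods of history used for each retraining
HIST_WINDOW = 6       # feature window fed to the model at prediction time

def retrain(history):
    """Toy model: estimate average per-period drift over the interval."""
    changes = [b - a for a, b in zip(history, history[1:])]
    drift = sum(changes) / len(changes)
    return lambda window: window[-1] + drift  # next-period price estimate

def run(prices):
    predictions = []
    for t in range(TRAIN_INTERVAL, len(prices)):
        # retrain every period increment on the latest training interval
        model = retrain(prices[t - TRAIN_INTERVAL:t])
        # predict the next period from the latest historical window
        predictions.append(model(prices[t - HIST_WINDOW:t]))
    return predictions

prices = [100 + 0.5 * t for t in range(30)]  # toy upward trend
preds = run(prices)
```

On this perfectly linear toy series the drift model recovers the trend exactly; real market data is where the periodic retraining earns its keep, since the drift estimate changes with each new training interval.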
Portfolio Planner and Pool Weighter for Active Portfolio Management:
Portfolio Planner and Pool Weighter have to be evaluated together, as decisions in one affect the other. First, the assets included in the portfolio by the Portfolio Planner and their weights will be adjusted by the Pool Weighter. Second, a strategy to execute the portfolio must be chosen: "Hodling," "Liquidity Provision" (Market Making), or "Trading" (Liquidity Taking). Third, each of these strategies may have more precise settings, set up as a sub-strategy with specific parameters for profit margin, limit-order cancellation, order-grid settings, and many others. For the Pool Weighter, it might not be a single strategy but a combination of strategies applied to dedicated fractions of the entire portfolio.
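The idea of applying a combination of strategies to dedicated fractions of one portfolio can be sketched in a few lines. The strategy names, fractions, and returns below are toy assumptions for illustration only:

```python
# Sketch of splitting one portfolio across several strategies by fraction,
# as the Pool Weighter might do. All values here are illustrative.

def combined_return(fractions, strategy_returns):
    """Blend per-strategy returns, weighted by the portfolio fraction
    each strategy manages. Fractions must sum to 1."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(frac * strategy_returns[name]
               for name, frac in fractions.items())

fractions = {"hodling": 0.5, "market_making": 0.3, "trading": 0.2}
returns = {"hodling": 0.04, "market_making": 0.10, "trading": -0.02}
blended = combined_return(fractions, returns)
```

Evaluating the pair of components together then amounts to searching jointly over asset weights and over the strategy fractions, since a change in either changes the blended outcome.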
Given the current state of development, some aspects of the AI Oracles presented above have been tested on real data from the Binance CEX. The results are shown below, with losses in red and profits in green.
Profits (right bars) and losses (left bars) for market making by different strategies, compared to "hodling" (at the bottom), where strategies based on price predictions actually know the future price, as if they had "insider" information.
Simulation and Backtesting have been carried out on the Bitcoin BTC/USDT exchange rate over different time intervals during the past half year, with consistent results across time intervals and different market-making strategies. It has been shown that almost any market-making strategy can be profitable (with no losses) if an agent can anticipate the price movements and the price level. With this in mind, the most profitable strategy has been "zero-spread market making" (middle green line in the image above), with the highest returns compared to "hodling" (bottom line). The next two strategies using the same "prediction" technology set bid and ask orders at a price level one point better than the competitors, based on the limit order book information known from the last snapshot (top line). All three strategies substantially outperformed the "hodling" strategy.
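The two quoting schemes mentioned above can be made concrete. A "zero-spread" market maker quotes both bid and ask at the predicted price; the "one point better" variant quotes one tick inside the best competing quotes from the last LOB snapshot. A minimal sketch, with a hypothetical tick size:

```python
TICK = 0.01  # hypothetical price increment ("one point")

def zero_spread_quotes(predicted_price):
    """Zero-spread market making: bid and ask both at the predicted price."""
    return predicted_price, predicted_price

def one_point_better_quotes(best_bid, best_ask):
    """Quote one tick inside the best competing bid/ask taken from the
    last limit order book snapshot."""
    return best_bid + TICK, best_ask - TICK

bid, ask = one_point_better_quotes(100.00, 100.10)
```

Zero-spread quoting only makes sense if the predicted price is accurate, since any prediction error turns one side of the quote into an immediate loss; the "one point better" scheme trades some of that edge for queue priority in the book.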
The suggested architecture of an automated agent for active portfolio management in decentralized finance, including portfolio planning and balancing, liquidity provision, and trading, appears to be quite flexible, covering all aspects of the crypto-investment business. The preliminary results show the utility of automated simulation and backtesting for strategy selection, the value of using the contents of the limit order book for liquidity provision on a CEX, and the possibility of increasing market-making profitability even with a small increase in price-prediction accuracy.
Causal Analysis of Generic Time Series Data Applied for Market Prediction
The goal of this work is to develop a suitable general-purpose algorithmic framework capable of identifying causal connections across data from different sources, including sparse and unreliable ones. It supports the other AI work on the generic architecture for active portfolio management, employed by automated adaptive trading and market-making agents that need to make predictions about future market dynamics from diverse temporal streams of data. These include market data, social and online media news, and so-called "on-chain" data computed from transactional activity on public financial ecosystems such as blockchains.
While the team understood that the operations performed by a hypothetical completely autonomous trading or market-making agent might be considered a narrow artificial general intelligence (Narrow AGI), they wanted its operational environment to have as much reach as possible, maximizing its capability for intelligent decision making based on a wide range of information sources, including market data and technical indicators from different exchanges, "on-chain" data, and sentiment and emotional data from online and social media sources. The team explored the possibility of causal analytics for market prediction across as much information as possible.
Given that the practical objective of the work is operating on crypto exchanges such as Binance, and that crypto finance is actively discussed on social media channels such as Twitter and Reddit, the team collected as much data as possible from both kinds of sources.
The present data acquisition framework streams live market data from the Binance exchange, including both raw trades and snapshots of the LOB (limit order book), at different sampling rates or granularity periods: 1 day, 1 hour, 1 minute, and 1 second. The market data for the BTC/USDT pair discussed in this work covers almost 1.5 years, from August 2020 to December 2021.
Two kinds of metrics were derived from the online social media data: public posts from about 80 Twitter and Reddit channels relevant to the crypto market, with the overall volume of media content exceeding 100,000 posts across all channels. The sentiment-analysis metrics were discussed in the AI Recap: Part One article released last week.
Since the practical goal of the study was the prediction of the market price, the causal-analytical framework considered the price movement as the target "effect" and all other metrics as potential "causes." The conceptual causal frameworks justifying the studies turned out to be difficult to implement literally due to the lack of clearly identifiable "events" in the time series data, even assuming the data is represented by stationary functions in the range [-1.0, +1.0]. Determining events such as "price goes up" or "there is positive sentiment" was considered, but it was clear that it could only be done on the basis of thresholds, which would be either subjective or a source of extra errors and uncertainty, or both.
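The thresholding problem can be illustrated directly: turning a continuous series in [-1.0, +1.0] into discrete "events" depends entirely on an arbitrary cutoff, and different cutoffs yield different event histories. The series and thresholds below are toy values:

```python
# Illustration of why threshold-based "events" are fragile: the same series
# yields different event counts under different (arbitrary) thresholds.

series = [0.05, 0.30, -0.10, 0.12, 0.45, -0.02]  # values in [-1, +1]

def events(series, threshold):
    """Mark a 'price goes up' event wherever the value exceeds the threshold."""
    return [x > threshold for x in series]

for th in (0.1, 0.2, 0.4):
    print(th, sum(events(series, th)))
```

Because any choice of cutoff is defensible, downstream causal conclusions would inherit that subjectivity, which is why the team avoided event discretization and worked with the continuous series instead.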
Given the rich data, the team performed the causal analysis in a three-dimensional space, with time as the first dimension, channel as the second, and metric as the third. The channel might be a Twitter or Reddit channel used to derive a media metric, or a source of market data (such as Binance), "on-chain" data (such as Bitcoin), or third-party sources. The metrics are specific to the channel being used.
Causal connectivity as correlation has been studied on the full scope of the media and market data discussed in the previous article. It is clearly seen that the ability to build a well-correlated SACI (synthetic additive cause indicator) from media data one day before the anticipated "effect" dominates all other time lags/shifts, so it can be said with greater certainty that some combination of the metrics represented by the SACI model has a causal connection with the target price change.
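The SACI idea can be sketched as follows: additively combine candidate "cause" metrics into one indicator, then check how well that indicator correlates with the future price change at each time lag. The weights, metric series, and lag range below are toy assumptions constructed so that sentiment leads price by one step:

```python
# Sketch of a synthetic additive cause indicator (SACI): a weighted sum of
# candidate metric series, correlated with the price change at various lags.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def saci(metrics, weights):
    """Additive combination of metric series into one indicator series."""
    return [sum(w * m[t] for m, w in zip(metrics, weights))
            for t in range(len(metrics[0]))]

def lagged_correlation(indicator, price_change, lag):
    """Correlate the indicator at time t with the price change at t + lag."""
    return pearson(indicator[:-lag], price_change[lag:])

# Toy data: sentiment leads the price change by exactly one step
sentiment = [0.2, -0.1, 0.4, 0.1, -0.3, 0.5, 0.0, 0.2]
volume    = [0.1,  0.0, 0.2, 0.1, -0.1, 0.3, 0.1, 0.0]
price_change = [0.0] + [0.5 * s for s in sentiment[:-1]]

indicator = saci([sentiment, volume], weights=[0.7, 0.3])
best = max(range(1, 4),
           key=lambda lag: lagged_correlation(indicator, price_change, lag))
```

On this constructed data the one-step lag dominates, mirroring the reported one-day dominance; in the real study the weights themselves would be fitted, rather than fixed, to maximize the lagged correlation.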
While the sentiment metrics appeared promising, the market metrics turned out to be substantially less inspiring. The daily study for market metrics does render a promising correlation of the SACI one day before the "effect," however much lower than that obtained using the media metrics. The team also considered non-sentiment metrics such as circulating supply, active addresses, and GitHub activity available on the platform, but none of these could be identified as having an impact on price prediction.
The team then tried to apply the results to price prediction for the BTC/USDT trading pair on the Binance exchange. The objective was to hit two targets: first, to exceed the baseline provided by a prediction that simply copies the last known price (LKP), and to approach the prediction made by looking up the "future known price" (FKP) in historical test data; second, to feed the obtained predictions, via the backtesting framework, to the market-making bots according to their strategies. To accomplish this, the team tried classical Machine Learning algorithms, without any clear success in outperforming the LKP baseline.
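The two bounds can be made concrete: LKP predicts each next price as the current one, while FKP "cheats" by reading the future price from the test data. Any useful model's error must fall between the two. A sketch with a toy price series and mean absolute error as the (assumed) metric:

```python
# Last-known-price (LKP) baseline vs future-known-price (FKP) bound.
# A model is only useful if its error lands between these two.

def mae(preds, actual):
    """Mean absolute error between predictions and actual prices."""
    return sum(abs(p - a) for p, a in zip(preds, actual)) / len(actual)

prices = [100.0, 101.5, 99.8, 102.2, 103.0, 101.1]

lkp = prices[:-1]    # predict each next price as the last known one
fkp = prices[1:]     # "insider" prediction: the future price itself
actual = prices[1:]

print("LKP MAE:", mae(lkp, actual))  # nonzero baseline to beat
print("FKP MAE:", mae(fkp, actual))  # perfect lookahead
```

The LKP baseline is deceptively strong on high-frequency financial series, since consecutive prices are highly autocorrelated, which is why beating it with classical ML proved difficult.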
The team found a way to determine causal connections in massive time series data. They also discovered such connections between the price change, as an effect, and combinations of specific cognitive distortions and sentiment patterns in online media content, as well as changes in trade, sell, and buy volumes and the imbalances between them, on a daily basis applied to the Bitcoin cryptocurrency. That gives hope for building reliable price-prediction mechanisms usable in financial applications.
Thank you so much for taking the time to read the AI Recap. The team has worked extremely hard on this research for almost two years, and the community's support is very much appreciated.