Humans vs. AI in the Stock Market: The Worst Trade Ever Made?
In an essay titled ‘Why AI Will Save the World’, a16z General Partner Marc Andreessen argues that the contemporary panic over artificial intelligence’s harmful potential is overblown.
Fundamentally, Andreessen argues, AI is a glorified toaster; it’s made up of inputs, processes, and outputs. In other words, it has no potential—and will not develop any desire—to take over the world on its own terms.
On that premise, I agree. I think it’s unlikely that AI will ever reach truly human levels of autonomy and decision-making. Is this an optimistic or pessimistic outlook? That often depends on where you work: the debate over AI focuses on different issues depending on the industry.
In my industry, stock trading and market making, the outlook is generally a mixture of excitement and apprehension. High-frequency traders are excited about the prospect of applying even more sophisticated algorithms to crack the market. Others—and I would include myself in this category—are worried that AI will make the negative externalities of algorithmic trading even worse.
For example, as I’ve previously argued in the Wall Street Journal, algorithmic trading can amplify market volatility. This is because volatility is one of the most important inputs in the preset algorithms used to make computerized trades. When true volatility hits the market, the computers exacerbate the problem.
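To make the mechanism concrete, here is a minimal sketch of one common pattern, volatility targeting, in which measured volatility directly sets position size. The risk budget, starting volatility, and feedback multiplier are all invented for illustration; no real desk runs anything this simple, but the feedback loop is the point.

```python
# Minimal sketch of a volatility-targeting sizing rule. All parameters
# here are invented; real trading systems are far richer.

def position_size(capital: float, volatility: float, risk_budget: float = 0.01) -> float:
    """Size the position so its expected daily swing stays inside a fixed risk budget."""
    return capital * risk_budget / volatility

capital = 1_000_000.0
vol = 0.01  # calm market: 1% daily volatility
for step in range(5):
    size = position_size(capital, vol)
    print(f"vol={vol:.3f}  target position=${size:,.0f}")
    # When many algorithms shrink positions at once, the forced selling
    # itself moves prices, which raises measured volatility further.
    vol *= 1.8
```

Because every algorithm following this rule must de-risk at the same moment, the rule that keeps each individual firm safe makes the market as a whole more fragile.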
But another issue should be an obvious cause for concern, and it points to a nuance that Andreessen’s argument misses. Robots don’t need to be human-like, autonomous, conscious, or evil to be inscrutable. In other words, there is a middle ground between a toaster and ‘killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything’, to use Andreessen’s phrasing.
For stock trading purposes, this middle ground lies squarely in the realm of decision-making. There is no question that AI will reach a point of sophistication, or inscrutability, at which humans cannot explain how or why a decision was made.
Many people in Silicon Valley already admit that they can’t explain why large language models (LLMs) do or say much of what they do, even though every decision – every output – consists only of inputs and processes. At that point, the crucial question arises: who do you blame for a bad decision?
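A toy model makes the point. In the sketch below (hypothetical features and random weights, not a real trading system), every input, weight, and arithmetic step is fully visible, yet there is no human-readable account of why the output is one decision rather than another.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network with fixed random weights. Every input, weight,
# and arithmetic step is fully inspectable -- "inputs and processes" in
# Andreessen's sense -- yet there is no human-readable account of why it
# outputs BUY rather than SELL. (Hypothetical features; not a real model.)
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def decide(features: np.ndarray) -> str:
    hidden = np.tanh(W1 @ features + b1)
    score = (W2 @ hidden + b2).item()
    return "BUY" if score > 0 else "SELL"

# features: [recent return, volatility, volume z-score, bid-ask spread]
print(decide(np.array([0.002, 0.015, 1.3, 0.01])))
```

Scale this toy up to billions of weights and the gap between "we can compute the output" and "we can explain the output" becomes the middle ground described above.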
This is fundamentally new territory for our two existing categories of decision-making, namely decisions made by humans and by computers whose parameters were clearly set—or errors obviously triggered—by humans.
Select Vantage Inc (SVI), the proprietary trading firm I run, sits in the first category. Ascertaining accountability is never an issue because a human makes every trading decision. We employ over 2,100 traders in more than 40 countries, and on any given day we can trade over US$3 billion on global stock markets. If a trader makes a bad decision, they get less capital to trade with, and their losses are capped. It’s easy to identify who made the trade, analyze their reasoning to understand where they went wrong, and learn from past mistakes.
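A minimal sketch of that kind of per-trader risk control follows. The haircut and loss-cap parameters are invented for illustration, not SVI’s actual settings; the point is that every number traces back to a named human.

```python
from dataclasses import dataclass

@dataclass
class TraderAccount:
    trader_id: str
    capital: float         # buying power allocated to this trader
    daily_loss_cap: float  # hard stop for the day
    pnl_today: float = 0.0

    def record_fill(self, pnl: float) -> None:
        self.pnl_today += pnl

    def may_trade(self) -> bool:
        # Losses are capped: hitting the cap halts the trader, not the firm.
        return self.pnl_today > -self.daily_loss_cap

def end_of_day_review(acct: TraderAccount, haircut: float = 0.25) -> None:
    """If a trader loses money, cut their capital allocation.
    Every decision maps to a named human, so attribution is trivial."""
    if acct.pnl_today < 0:
        acct.capital *= (1 - haircut)

acct = TraderAccount("trader-0042", capital=50_000, daily_loss_cap=1_000)
acct.record_fill(-1_200)
print(acct.may_trade())         # False: loss cap breached, trading halted
end_of_day_review(acct)
print(f"${acct.capital:,.0f}")  # $37,500: reduced buying power tomorrow
```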
The second category consists of human error applied to computers. A case in point is that of Knight Capital, a market-making firm that in 2012 suffered a loss of $440 million in less than an hour due to a glitch in its trading software.
Knight was the largest trader in U.S. equities, with a market share of around 17.3% on the New York Stock Exchange (NYSE) and 16.9% on NASDAQ. Its Electronic Trading Group (ETG) managed an average daily volume of over 3.3 billion trades, worth more than $21 billion daily.
It took 17 years of hard work to build Knight Capital Group into one of the leading trading houses on Wall Street. And it almost went up in smoke in less than 60 minutes.
What happened to Knight on that day is every trading firm’s worst nightmare. On August 1, 2012, some new trading software contained a flaw that became apparent only after the software was activated when the NYSE opened that day. The errant software sent Knight on a buying spree, snapping up 150 different stocks at a total cost of around $7 billion, all in the first hour of trading.
Though it was difficult to predict in advance, with the benefit of hindsight it was clear that a simple human error was at fault: during the rollout, the new code had not been copied to one of the eight servers running the system, leaving obsolete code active and reachable.
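The sketch below is a drastically simplified, hypothetical rendering of the failure mode regulators later described: a legacy flag repurposed for new functionality, plus an incomplete deployment. The function names and structure are invented.

```python
# Simplified sketch of the Knight-style failure: a flag once used by
# retired test code is reused for new functionality, and one server
# never receives the new release. Names are invented for illustration.

def run_new_retail_logic(order: dict) -> None:
    print("route to retail liquidity program")  # intended behavior

def run_power_peg(order: dict) -> None:
    # The obsolete test routine bought continuously and, crucially,
    # no longer tracked fills -- so it never knew when to stop.
    print("buy, and keep buying")

def handle_order(order: dict, server_has_new_code: bool) -> None:
    if order.get("flag_e"):            # repurposed legacy flag
        if server_has_new_code:
            run_new_retail_logic(order)
        else:
            run_power_peg(order)       # dead code, still reachable

# Seven servers got the new release; one did not.
for server_updated in [True] * 7 + [False]:
    handle_order({"flag_e": True}, server_updated)
```

Note that every individual piece behaves exactly as written; the catastrophe lives in the deployment process around the code, which is why this episode still counts as human error.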
Other episodes have been less clear-cut, however. Two years earlier, during the “Flash Crash” of May 6, 2010, the Dow Jones Industrial Average experienced an unprecedented and rapid decline, losing over 1,000 points (about 9% of its value) in just a few minutes before partially recovering. This incident was one of the first major crises that brought the potential risks of algorithmic trading to the forefront of public and regulatory attention.
While the initial investigation by the U.S. Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) pointed to a complex interplay of high-frequency trading algorithms as a significant factor, pinpointing specific fault lines proved difficult.
The algorithms involved acted according to their programming, responding to market conditions in ways their designers had not fully anticipated when combined at scale. To this day, the regulators don’t really know exactly what happened.
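A toy simulation illustrates the dynamic. Each agent below follows a rule that is individually defensible: trend-followers cut exposure, risk managers enforce stops, market makers withdraw from disorderly markets. All parameters are invented; the point is that the crash emerges from the combination, not from any single faulty rule.

```python
# Toy illustration of a Flash Crash-style spiral. Every parameter is
# invented; no agent here is "wrong" in isolation.

fundamental = 100.0
price, last = 99.0, 100.0  # an initial large sell order has knocked price down

def momentum_seller(price_change: float) -> float:
    # Trend-follower: sell into downward momentum.
    return -10.0 if price_change < 0 else 0.0

def stop_loss(p: float, trigger: float = 97.0) -> float:
    # Risk manager: dump inventory if price breaches a stop level.
    return -20.0 if p < trigger else 0.0

def market_maker(p: float) -> float:
    # Liquidity provider: lean against the move, but withdraw entirely
    # when the move looks disorderly -- exactly when liquidity is needed most.
    if abs(p - fundamental) / fundamental > 0.05:
        return 0.0
    return (fundamental - p) * 0.5

for t in range(10):
    change = price - last
    net_flow = momentum_seller(change) + stop_loss(price) + market_maker(price)
    last = price
    price += 0.05 * net_flow  # crude linear price impact of net order flow
    print(f"t={t}  price={price:.2f}")
```

Run it and the decline accelerates precisely when the stop-loss triggers and the market maker steps away, which is a fair caricature of what the SEC and CFTC pieced together.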
So what happens when we apply artificial intelligence to trading decisions, and trades go wrong, but we have no idea how they made their decisions? Financial markets cannot function without accountability, but who—or what—is ultimately accountable under these circumstances?
The complexity of attributing blame for financial losses caused by AI extends into legal and ethical dimensions. Legally, the current frameworks primarily hold the deploying institution accountable because it is responsible for the actions of the tools and technologies it employs.
However, as AI systems become more autonomous, distinguishing between the software acting within its programmed parameters and genuinely unforeseeable consequences becomes difficult, to say the least.
The challenge is not just theoretical; it has practical implications for regulating and operating financial markets. For instance, the European Union’s General Data Protection Regulation (GDPR) includes provisions for the right to explanation, where individuals can ask for the rationale behind automated decisions that affect them.
While this represents a step towards addressing AI's black-box nature, translating such principles to the high-stakes arena of financial trading involves complex considerations of privacy, intellectual property, and the technical feasibility of providing understandable explanations for AI decisions.
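To see why a “right to explanation” is tractable for some models and not others, consider a sketch with invented weights and features: a linear score decomposes cleanly into per-feature reason codes, while a deep network admits no such decomposition.

```python
# For a linear model, a "right to explanation" is easy to honor: each
# feature's signed contribution to the score is a human-readable reason code.
# Weights and features below are invented; this is not a real scoring model.

weights = {"momentum": 1.2, "volatility": -2.0, "spread": -0.7}
features = {"momentum": 0.5, "volatility": 0.9, "spread": 0.2}

contributions = {name: weights[name] * features[name] for name in weights}
decision = "trade" if sum(contributions.values()) > 0 else "pass"

print(f"decision: {decision}")
for name, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name:<10} contributed {c:+.2f}")

# A deep network with millions of interacting weights admits no decomposition
# that is both faithful and humanly meaningful, which is why translating the
# GDPR principle to AI-driven trading is so hard.
```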
The problem is becoming increasingly urgent because algorithmic trading is on the rise. Algorithmic trading in the U.S. stock market now constitutes approximately 60-75% of total trading volume, according to Quantified Strategies. With such a substantial portion of trading activity driven by algorithms, the potential for systemic risk arising from opaque AI decision-making processes cannot be overstated.
This point was made in a report by the Bank of England last December. Of AI-driven trading, the Bank’s Governor Andrew Bailey said: “All of us who have used it have had the experience of a sort of hallucination, and it sort of comes up with something that you think: ‘How on Earth did that come out?’ If you’re going to use it for the real world and financial services, you can’t have that sort of thing happening. You have to have controls and an understanding of how this works.”
Financial regulators are grappling with how to ensure that markets remain fair and transparent. The SEC, for instance, has been exploring ways to regulate AI and algorithmic trading to protect investors and maintain market integrity, including potential rules around algorithmic trading practices and disclosures that would ensure investors are aware of the role AI plays in their investments.
Realistically, however, what is more likely: that humans will learn to discern the black box just in time, before it’s too late? Or that, as usual, we will promise to learn from our mistakes long after the train has left the station?
In my view, the future of trading lies in a balanced approach that leverages the best of technology while preserving and enhancing the role of human insight and accountability. Real trading will always be the preserve of good traders.