When finance professionals discuss AI, a certain uneasiness comes into their voices. Not quite panic; something more subdued. You hear it at the coffee carts outside trading floors, at conferences in Midtown, and in the way portfolio managers half-joke about their analysts asking ChatGPT for sector recommendations. Andrew Lo, who has spent decades studying how markets behave, and misbehave, has come forward with a list of what really matters, and MIT Sloan appears to have noticed the same thing.
Lo doesn’t act as though he knows everything. He recently stated, “This is definitely not business as usual,” framing the moment as a turning point without fully committing to where it leads. It’s a cautious statement from someone who has watched many trends arrive with fanfare and fade. This time, though, he seems less skeptical.
| Field | Detail |
| --- | --- |
| Institution | MIT Sloan School of Management |
| Featured Faculty | Andrew W. Lo, Professor of Finance |
| Affiliated Lab | Director, MIT Laboratory for Financial Engineering |
| Course Title | Artificial Intelligence for Financial Services: Tools, Opportunities, and Challenges |
| Format | In-person executive education at MIT Sloan |
| Core Themes | Machine learning, LLMs, quantamental investing, AI governance |
| Related Voices | Daron Acemoglu (MIT), Ethan Mollick (Wharton) |
| Broader Initiative | MIT Stone Center on Inequality and Shaping the Future of Work |
| Target Audience | Finance executives, risk managers, investment professionals |
| Publication Source | MIT Sloan Ideas Made to Matter |
He wants finance professionals to start by watching the peculiar new union of large language models and machine learning. Machine learning has been working quietly and effectively in quant shops for years. LLMs are louder, messier, and more theatrical. Lo contends that together they may finally make the black box readable: an LLM can explain why a model produced a particular forecast in something close to plain English. Whether traders will accept that explanation remains to be seen.
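The idea of a model explaining its own forecast can be made concrete with a toy sketch. Everything below is hypothetical, not anything Lo or MIT Sloan has published: a linear model’s per-feature contributions are ranked and rendered as the kind of plain-English summary one might hand to an LLM, or a trader, for review.

```python
# Toy sketch: turning a quant model's forecast into a readable explanation.
# All feature names, weights, and values are invented for illustration.

def explain_forecast(weights, features, values):
    """Rank each feature's contribution to a linear forecast and
    render the result as a short plain-English summary."""
    contributions = {
        name: w * v for name, w, v in zip(features, weights, values)
    }
    forecast = sum(contributions.values())
    # Sort drivers by absolute impact, largest first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    lines = [f"Forecast: {forecast:+.2f}"]
    for name, c in ranked:
        direction = "raised" if c > 0 else "lowered"
        lines.append(f"- {name} {direction} the forecast by {abs(c):.2f}")
    return "\n".join(lines)

summary = explain_forecast(
    weights=[0.8, -0.5, 0.2],
    features=["earnings_surprise", "rate_sensitivity", "momentum"],
    values=[1.5, 2.0, 0.5],
)
print(summary)
```

In a real deployment the attributions would come from the production model and an LLM would turn the ranked drivers into prose; the template above just shows what that handoff could look like.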
Then there is quantamental investing, a term that sounds like a consultant coined it on a Tuesday but refers to a real phenomenon. A hybrid is emerging from the long-standing conflict between fundamental stock pickers and the quants, the screens-versus-stories argument that dominated Wall Street for twenty years.

Large language models can read earnings transcripts much as a human analyst would, and that reading can then be fed into a quantitative engine. On paper, it’s a simple concept. In practice, it raises the harder question of whose judgment is actually being exercised.
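A minimal sketch of that pipeline, under heavy assumptions: here a crude keyword count stands in for the LLM’s reading of a transcript, and the blending weights are arbitrary, not any firm’s actual methodology.

```python
# Toy quantamental blend. The keyword count is a crude stand-in for an
# LLM's reading of an earnings transcript; the word lists and weights
# are invented for illustration only.

POSITIVE = {"beat", "growth", "record", "strong"}
NEGATIVE = {"miss", "decline", "headwind", "impairment"}

def transcript_sentiment(text):
    """Score a transcript in [-1, 1] by counting signal words."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def quantamental_score(quant_signal, transcript, text_weight=0.3):
    """Blend a numeric quant signal with the transcript reading."""
    text_score = transcript_sentiment(transcript)
    return (1 - text_weight) * quant_signal + text_weight * text_score

score = quantamental_score(
    quant_signal=0.5,
    transcript="Record growth this quarter despite one headwind",
)
```

The interesting design choice is `text_weight`: set it, and you have decided exactly how much the machine’s “reading” counts against the numbers, which is the whose-judgment question in a single parameter.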
Lo keeps returning to the issue of trust. LLMs sound confident whether or not they are correct, which is a polite way of saying they hallucinate convincingly. In finance, where a misplaced decimal can move a market, that confidence is a liability. Anyone who has watched a junior analyst defend a wrong number in a meeting knows the problem. Now imagine it at scale, automated, with no one’s signature on it.
The optimism tends to fade at the deployment problem. Moving an AI model from a research notebook into a regulated banking workflow is a discipline unto itself, involving legacy systems, unstructured data, and compliance officers who have seen too much to be won over by a demo. Some firms will figure it out. Most won’t, at least not soon.
Governance appears to worry Lo most, and it’s hard not to share the concern. Someone must take responsibility when an algorithm flags a transaction or rejects a loan. Regulators already struggle to audit decisions they cannot fully reconstruct. Whether AI in finance grows or stagnates may depend on the unglamorous work of designing systems that are accountable by default rather than as an afterthought.
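“Accountable by default” is ultimately a design choice: every automated decision carries its audit record from the moment it is made, rather than having one reconstructed after a regulator asks. A hypothetical sketch, with field names invented for illustration:

```python
# Sketch of an accountable-by-default decision record. The audit trail
# is created with the decision itself, not reconstructed later.
# Field names and values are illustrative, not a real compliance schema.

from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    decision_id: str
    model_version: str          # which model made the call
    inputs: dict                # the exact features it saw
    outcome: str                # e.g. "loan_denied", "txn_flagged"
    responsible_owner: str      # a named human, not a team alias
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    decision_id="D-0001",
    model_version="credit-risk-2.3",
    inputs={"income": 54000, "dti_ratio": 0.41},
    outcome="loan_denied",
    responsible_owner="j.rivera",
)
# An auditor can replay the decision from the record alone.
audit_row = asdict(record)
```

The point is the `responsible_owner` field: the schema refuses to create a decision without a name attached, which is the “someone must take responsibility” requirement enforced in code rather than in policy.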
Speaking nearby, at the opening of the MIT Stone Center, Daron Acemoglu framed the stakes differently. AI could expand the range of tasks workers can do, taking over drudgery like reading financial statements at three in the morning. Or it could quietly hollow those jobs out. The outcome, he said, will be decided by choices made now, in rooms most people will never see. As this unfolds, it’s hard to shake the sense that the next five years will reveal more than anyone is prepared for.
