AI is underhyped
An enthusiastic and optimistic view of the future, powered by AI
AI, Large Language Models (LLMs), Generative Models. Gallons of metaphorical ink have been spilt on these technologies over the past several years. The tech world is alight with ideas for their implementation, and regulators worldwide are trying to catch up to their mass-market introduction.
Techno-optimists have waxed lyrical about their power and ability to change the world positively, while many worry about the societal impact of their introduction.
I’m here to say I think their impact is underrated: if you think you’re thinking big enough, you’re not - the total addressable market (TAM) of LLMs is the entirety of white-collar work, and trillions of dollars are about to change hands.
The future is almost here
The concept of AI is, of course, not new. Fiction exploring artificial beings was being published in the 1920s, Turing wrote papers on machine intelligence in the 1950s, and the phrase ‘artificial intelligence’ was coined by John McCarthy in 1955. What we’re seeing now is not new; it’s just much better.
Through a confluence of compute power improvements, the development of transformer models by Google and others, and OpenAI’s public introduction of ChatGPT in late 2022, the power of AI is at everyone’s fingertips. The worlds I follow are ablaze: Twitter, LinkedIn, and my friends have all played with this thing and talk about it ad nauseam.
For me, GPT - especially GPT-4 - passes the Turing Test: it’s hard to tell whether the responses were typed by a human or a machine. There are many examples of ‘hallucinations’ where the model has produced something blatantly wrong, and many critics have used these instances as reasons to doubt its power. Similarly, people lament these models for having a training cut-off date, i.e., their datasets are outdated and don’t include recent information. These are genuine issues today, but they won’t be for long. I expect the data-recency issue will be fixed long before the hallucination problem.
Downplaying these technologies because of these two current issues is myopic.
My recent experience
Recently, I used GPT to build a toy Customer Relationship Management (CRM) platform. This experience, and reflections on what I see in the market, prompted me to write this.
For context, I started to learn to program when I was 13, have a degree in Computer Science, and worked as a software engineer at Morgan Stanley after graduating. While I’m technical and have a good foundation of knowledge, I haven’t coded professionally for years. I’ve run two startups since my last full-time engineering role and haven’t meaningfully kept up-to-date with coding. All skills atrophy over time, and I’ve certainly felt that.
Most important for this story, I used to write ‘backend’ code. I built risk systems for equity derivative traders that ran valuation models for stock positions and fed data into various systems. Another project I led automated the hedging of FX exposure when traders bought an asset denominated in a different currency than the one used to pay for it.
While highly technical behind the scenes, I didn’t build the User Interface (UI) associated with these projects. I’ve never spent time learning how to build UIs, and I don’t have the fundamental knowledge to assemble them myself.
My project - a basic CRM
CRMs are commonplace within companies I’ve founded and worked with. For those unfamiliar, they’re systems that allow users to create a list of customers, attach ‘deals’ to them, and perform analytics. This description is a gross oversimplification - the largest CRM companies are worth hundreds of billions, and people specialise their careers focusing on them - but it will serve the purpose of this story.
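To make that concrete, the core of such a data model fits in a few lines of Python. This is my own illustrative sketch - hypothetical names, not code from any real CRM product:

```python
from dataclasses import dataclass, field

# A hypothetical, minimal CRM data model: customers, with deals attached.
@dataclass
class Deal:
    name: str
    value: float  # deal value, in whatever currency the CRM uses

@dataclass
class Customer:
    name: str
    deals: list[Deal] = field(default_factory=list)

    def pipeline_value(self) -> float:
        # A basic piece of CRM analytics: total value of this customer's deals.
        return sum(d.value for d in self.deals)

# Usage: create a customer, attach deals, run the analytic.
acme = Customer("Acme Ltd")
acme.deals.append(Deal("Renewal", 10_000.0))
acme.deals.append(Deal("Upsell", 2_500.0))
print(acme.pipeline_value())  # → 12500.0
```

Everything a real CRM adds - persistence, permissions, reporting - is layered on top of a core this simple.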
I asked GPT-4 to help me build one. I’ve included the prompts I used to create the CRM at the bottom of this article. The workflow to build went something like this:
I asked GPT to create a Python backend, creating the concept of a Customer and Deal.
I iterated on this, extending the concepts, adding new functionality, and so on, copying and pasting code from my Python coding environment (PyCharm) to GPT and copying the fixes and new code back to PyCharm.
I cycled around this improvement loop until I had what I wanted.
I then asked GPT to build me a React frontend.
I repeated the same improvement and bug-fixing copy/paste cycle until I had something working that did what I needed.
I connected the frontend and backend and repeated the same copy/paste, improve/fix loop until I was happy with what I had.
After about four hours of coding - or, really, four hours of me copying and pasting text from one window to another - I had a (very basic) CRM. I could add deals, add customers, assign them deals, change the deal values, and chart them over time.
What I built was a toy: there’s no way I could release it or charge for it, but it was an incredible first step towards a usable piece of software, especially for someone who, it’s charitable to say, is rusty.
GPT allowed me to build something in a language I’m unfamiliar with (Python), with a frontend (something I’m entirely unaccustomed to building), producing a working prototype in a couple of hours.
I had four thoughts:
I was surplus to requirements. I had the idea for the project, and I knew what I wanted, but other than that, I just copied and pasted things from one window to another. As soon as GPT has access to an entire desktop and as soon as it can do Robotic Process Automation (RPA), I wouldn’t need to do… anything. I’d just describe the app I was looking to build, review its progress, and give feedback and pointers.
Humans will transition from being the creators of work to the curators - supervising, editing and improving work, not initiating it.
I had grossly underestimated the TAM of LLMs. For years, when thinking about technological progress and creative destruction, I thought job disruption would be at the low end: blue-collar workers would be replaced with robots - automated machines performing their tasks 24/7, with no ‘people’ overheads like wages, health care, bathroom breaks, and so on.
While that will continue to happen - Figure, Tesla and others are pursuing this vision - it’s not where the most change will happen in the next ten years. The scope, dynamics, and number of white-collar jobs, especially those that rely on operating over large corpora of text - think lawyers, accountants, even programmers - will fundamentally change.
LLMs with full access to our computers and company file systems will take over all repeatable and explainable important-but-rote work.
This is unbelievably powerful. I wish this thing had been available during my programming days; it is so, so, so much better than the previous approach of Googling, reading Stack Overflow and other sites, consulting - dare I say it - physical books, speaking to others, and so on. None of those options is as powerful as this $20/month product.
How would your role change if you had the smartest helper in the world aiding you 24/7 for $20/month?
It’s the best teacher of all time. This thing is: never tired, never snarky, doesn’t judge, there to help, available 24/7, highly personalisable, with the backing of nearly all information ever published online.
How much better would you be at your job, hobby, or whatever if you had access to an always-on teacher whose only role is to help you?
Where will the value go?
Lots has been written and predicted about the future transition of money in this new economy.
Many commentators have predicted value accruing to incumbents. I expect the multinational software houses - Microsoft, Alphabet, SAP, Oracle, Databricks, Snowflake, etc. - will benefit enormously from LLMs: their reach and distribution into the major buyers, and their ability to improve COGS or sell net-new products, will create a significant windfall.
From my point of view as a startup founder, whole new industries and ideas, previously impossible, have become possible due to the power of LLMs. These products will need to add significant end-user functionality above the underlying power of the LLM - but the market has received that message, and “GPT-wrappers” are likely to become a thing of the past.
Regarding LLM providers, it’s clear to me that (1) competition and open source will compete away the excess profits of any pure-play LLM provider[1]; (2) training new LLMs will require an enormous amount of capital to be competitive with those already in the field, which are compounding additional benefits and power - at some point, quite soon, new entrants will not be funded for fear of too much competition; and (3) developers will want a ‘company-safe’ provider that offers value above pure LLM functionality.
All this means, in my view, that a lot of the value will accrue to the hyperscalers - AWS, Azure, GCP, etc. - which will either build their own LLMs or host third-party models, likely through creative investment deals, à la Microsoft’s investment in OpenAI, Amazon’s in Anthropic, and others to come. Will we see another pure-play LLM provider post-Mistral? I don’t think so.
Then, of course, there’s money spent on jobs: $10.5tr in the US, according to ChatGPT[2]. How much of that spend will transition to LLM providers? Will companies need five people in FP&A, eight people in Legal, or ten in HR? What about the 150 programmers? Could their work be magnified through LLMs, or could the company just reduce headcount? This is an evolving field that is going to play out before us in real-time. I wouldn’t be surprised if we start seeing LLM-inspired layoffs before long, especially if the much-spoken-about-but-still-impending recession does come knocking.
I believe that roles in organisations that are currently pivotal will be substantially transformed and potentially erased. Trillions of dollars will change hands over the next few years: from human capital to AI-backed service providers that understand the roles, apply AI - and other technologies - to them, and can execute faster, with no ‘people overhead’. People in my tech bubble are talking about a potential future business worth $10bn run by only ten people. This, I think, is entirely possible.
The subheading of this post promises an optimistic view of the future, yet I have just argued that a lot of people are going to lose their jobs. I’m a capitalist, certainly, but also a human, and I don’t want to be seen as celebrating an enormous reduction in available roles.
What I’m describing is the next step in the creative destruction cycle: innovation leads to the obsolescence of old industries and jobs but simultaneously creates new opportunities and markets. Who knows what new roles this LLM revolution will create.
Who knew that the GPS in smartphones would create Uber, a $145bn[3] juggernaut that has fundamentally changed how car drivers make money, opening new opportunities and flexibility for them at the expense of traditional cab firms? Who knew that smartphone cameras would create the world of the digital influencer? Instagram, YouTube, TikTok, and other apps have created a world where teenagers can make millions from dancing, unboxing toys[4], and creating makeup brands[5].
Consumers still want to speak to someone when setting up their Wills; they still require a human to survey a house they’re considering buying; they want to be sure they’ve correctly submitted their tax return; they want help when choosing an expensive good in a store; etc.
I expect humans will continue this way. Perhaps the automation of some white-collar roles will create pressure on blue-collar roles - will the next wave of applicants to management roles in restaurants have multiple degrees? Will restaurant service improve as higher-quality applicants compete for roles that, even with the AI improvements to come, still require a human? What about call centres where humans field questions from customers? Will those be replaced with human-like voice systems?
There have been many conversations around Universal Basic Income (UBI) over the past few years. I expect these conversations will recur as the transition - already underway in the coastal cities of America and the major cities of Europe, China, and elsewhere - spreads.
The future is not bleak for humanity, but the introduction of LLMs marks a definite shift. Many have said before me that “AI isn’t going to take your job; someone knowing how to use AI will”. This, I do believe - it’s time to get learning.
The future is here and is relatively evenly distributed: $20/month gets you access to the most powerful resource of all time. What will you build?
Thanks for reading; if you enjoyed this, please do share. Contact me if you’re building something interesting in this area or have other ideas.
I have invested in several AI-focused startups: Jitty, building a better home-moving experience by indexing the property market with GPT; Channel, helping companies answer their burning questions by converting natural language to SQL; Quantly, providing financial insights in seconds; Platformed, helping companies accelerate procurement processes; and AudioStack, generating studio-quality audio for adverts, walk-throughs, and so on, with AI. None of this article is influenced by my experience with these companies or is intended to promote them.
The prompts I used
Here are the first few prompts I used:
Write a simple CRM in python including APIs, including models for Customer and Deal
It produced a Python web app that did exactly what I needed. It included the concept of customers out of the box, with the ability to create, fetch, update, and remove them - the four horsemen of CRUD (Create, Read, Update, and Delete), a core paradigm for building web apps.
I tried to run the code it gave me in a code editor called PyCharm (think Word, but for writing Python). It didn’t work and gave me an error, so I asked:
I've run the pip install step on the command line, but pycharm can't find the module
GPT told me what to do. It fixed that error, but I got another:
now I get this error when running the app:

UserWarning: Flask-SQLAlchemy integration requires marshmallow-sqlalchemy to be installed.
  warnings.warn(
Traceback (most recent call last):
  File "/Users/tony/programming/python/crm/main.py", line 15, in <module>
    class CustomerSchema(ma.SQLAlchemyAutoSchema):
          ^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Marshmallow' object has no attribute 'SQLAlchemyAutoSchema'
It told me what to do - install the missing marshmallow-sqlalchemy package - which fixed the error, and the app ran.
I wanted to expand it:
cool now it's running - give me some URLs I can use to call the API
It gave me the code to do so, but with some warnings:
I get python warnings saying "Shadows built-in name 'id' " in the code - what does this mean?
It told me what the problem was and how to fix it. Eventually, I added a UI:
write a simple react frontend that queries all of the APIs
This web app worked perfectly, so I asked GPT to extend it:
add a text box to the front end to allow the user to specify the first and last names
And so it went on.
[1] This year, OpenAI preemptively reduced prices: this, I’m sure, will continue.
[2] At the time of writing, February 2024.