The tech world is buzzing right now over ChatGPT. The chat-based AI software is smart, useful, and just fun to interact with. I’ve already used it in my role as a TA, and undoubtedly students have been using it to write essays, tech workers to debug code, and economists to ask when the Fed will stop raising interest rates.
If you haven’t already, I’d highly recommend checking it out. It is mostly accurate (though there seem to be inaccuracies inherited from the data it was trained on) and it is especially helpful for writing code.
Because of its ability to write code accurately, however, some coders have been worried about AI’s ability to replace them in their jobs.
I don’t think this will happen, and an economist I often quote on this blog, Noah Smith, has an excellent post on this topic.
In short, new technology replaces tasks, not jobs. For example, autocorrect automates the mundane task of checking spelling. With further advancements in AI, instead of automating spell checks, a whole paper or blog post might be drafted by AI. But the writer likely wouldn’t be out of a job: the things AI is good at, like repetitive tasks and quick recall of information (things that humans typically aren’t great at!), could be used for structure, but you probably still need someone to tell it what to write about and to have original ideas. The writer’s value added, then, is the imagination to connect pieces of information in an interesting way that compels their audience, and imagination is not something AI is great at as of now.
I would expect something similar for coders. At least in my experience of scripting models and tools for personal or professional use, actually typing out the code is easy. Or if it isn’t, there is a StackOverflow post somewhere out there that works well enough to get me where I want to go. It’s deciding what you want to do, what you can do, what data is available to you, and how you want these pieces to interact that takes up the majority of time and effort. Actually building just takes time.
So on net this is a good thing. Humans can spend more time doing human things while AI can spend more time doing things it is good at. But even if AI became so creative and interesting that it was better than humans, there would likely still be jobs because of the concept of comparative advantage.
This concept is often taught in economics to explain international trade. If France is better at producing both wine and widgets, but the UK is only a little bit worse at producing widgets and a lot worse at producing wine, then the two countries can specialize and trade to increase consumption of both goods. So AI might become better at everything, but server space is limited in the end, so humans would still have a place handling other tasks.
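The wine-and-widgets logic comes down to opportunity costs. A quick sketch with made-up numbers (the specific outputs are illustrative, not from any real data):

```python
# Toy comparative-advantage arithmetic. Numbers are illustrative:
# output per worker per day for each good.
france = {"wine": 6, "widgets": 4}
uk = {"wine": 1, "widgets": 3}  # worse at both, much worse at wine

# Opportunity cost of one widget, measured in wine given up.
oc_france = france["wine"] / france["widgets"]  # 1.5 wine per widget
oc_uk = uk["wine"] / uk["widgets"]              # ~0.33 wine per widget

# The UK gives up less wine per widget, so despite being worse at
# producing both goods, it has the comparative advantage in widgets.
widget_specialist = "UK" if oc_uk < oc_france else "France"
print(widget_specialist)  # UK
```

France is absolutely better at both goods, yet it still pays to let the UK make widgets, because France’s widget hours are wine hours forgone. Swap in “AI” for France and “humans” for the UK and the same arithmetic goes through.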
Even if the tasks left for humans are purely for entertainment. The value of a piece of art generated by AI might be different from that of art generated by people, but even if it’s not, people will still dance and paint and play sports because… that’s what they like to do!
I was thinking about how I could use ChatGPT as a researcher rather than as a coder / writer. I wanted to replace a task that was mundane: something tedious and time consuming that required combing through a lot of information. Something that I didn’t particularly like to do and would happily hand off to a robot. And I landed on: a literature review.
Reading through academic papers is difficult and time consuming; each one contains a lot of information and often doesn’t really help with what I need. And doing that dozens of times counts for… one literature review. I’ve found that after a literature review, either the research question has already been answered, there isn’t data for it, or there was something I was severely misunderstanding. Decreasing the time these reviews take lets me, the researcher, spend more time hypothesizing interesting research questions.
So, I went back to one of the papers I became familiar with during my dissertation last year and asked ChatGPT to summarize it. This is what I got:
Identifying Uncertainty Shocks Using the Price of Gold is a study published in The Economic Journal by Piffer and Podstawski in 2018. The study aims to identify uncertainty shocks in the economy using the price of gold as a measure of economic uncertainty.
To do this, the authors use a structural vector autoregression (SVAR) model to estimate the effects of uncertainty shocks on the economy. They find that uncertainty shocks, as measured by the price of gold, have negative effects on economic activity, including GDP growth, investment, and consumption. They also find that uncertainty shocks are associated with increased volatility in financial markets.
The authors conclude that the price of gold can be an effective measure of economic uncertainty and can provide valuable insights into the potential impacts of uncertainty shocks on the economy. Their findings suggest that policymakers and market participants should consider the potential effects of uncertainty on economic activity and financial markets when making decisions.
This summary is a good starting place because it tells me the high-level conclusions and methods in a very clear way, without too much jargon known only to those at the edge of the knowledge frontier in this space.
Compare that to the abstract of the paper:
We propose an instrument to identify uncertainty shocks in a proxy structural vector autoregressive model (SVAR). The instrument equals the variations in the price of gold around events associated with unexpected changes in uncertainty. These variations correlate with uncertainty shocks because gold is perceived as a safe haven asset. To control for news-related effects associated with the events we identify uncertainty and news shocks jointly, developing a set-identified proxy SVAR. We find that the popular recursive approach underestimates the effects of uncertainty shocks and delivers responses for economic activity and monetary policy that have more in common with news shocks than with uncertainty shocks.
This is almost unreadable unless you are one of the dozens of academics in the space. Now, the ChatGPT summary wasn’t as detailed or as precisely correct as the abstract (the last sentence isn’t a direct implication of the paper), but it is good enough to help me decide whether to investigate further during a literature review.
Pairing this with another tool I use often, Connected Papers, could really help automate the process: Connected Papers makes it easier to find related papers, and ChatGPT helps me understand them quickly.
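If I wanted to scale this beyond pasting abstracts into the chat window, the summarization step could be scripted. A minimal sketch, assuming the `openai` Python package, an `OPENAI_API_KEY` environment variable, and a model name — the prompt wording and function names are my own illustration, not an official recipe:

```python
# Sketch: ask an LLM for a jargon-free summary of one paper.
# Assumes the `openai` package (pre-1.0 API) and OPENAI_API_KEY are set;
# the model name and prompt text are illustrative choices.
import os


def build_prompt(title: str, abstract: str) -> str:
    """Compose a plain-language summary request for one paper."""
    return (
        "Summarize the following paper for a literature review, "
        "avoiding field-specific jargon.\n\n"
        f"Title: {title}\n\nAbstract: {abstract}"
    )


def summarize(title: str, abstract: str) -> str:
    # Imported lazily so build_prompt works without the package installed.
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": build_prompt(title, abstract)}],
    )
    return resp.choices[0].message.content
```

Looping `summarize` over a list of candidate papers pulled from Connected Papers would turn a week of skimming into an afternoon of triage — with the caveat that every summary still needs a human sanity check.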
This would be a good thing for saving time, but I think we should be careful about the information lost when distilling things down too much, to the point where incorrect inferences are drawn from suggestive rather than conclusive evidence.
With that caveat in mind, I’ll still be using ChatGPT to try to find other uses in my work. Who knows, maybe I even wrote parts of this blog post using it :)
Awesome read — particularly the section on comparative advantage. I also loved the example of parsing research papers.
I think ChatGPT could have some momentum for no-code founders, though. The tool seems capable enough to generate snippets that could be used in early prototypes before a non-technical founder identifies their first tech lead.