They say AI will replace the analyst. But there are some things AI can’t and may never be able to do. Thought I’d share the main ones that stand out to me.
By the way, I’m not an AI denier - like many in the tech space, I’m extremely bullish on AI’s potential. But there are already plenty of articles in the bullish camp, and I want to take a more neutral tone with this one.
At least for the medium term, human-driven data analysis & modeling won’t go away. And with it, neither will the spreadsheet. This is why I still recommend investing in the simple tricks and tools that maximize your time in Google Sheets, and why I continue to talk about shortcut & trace-precedents add-ons like SheetWhiz.
Here are some things AI cannot and may never be able to do:
1. AI can't always explain itself. While AI can provide valuable recommendations, it is often hard to trace how those recommendations were synthesized. That lack of transparency makes it difficult to trust and rely on AI-generated conclusions, particularly when the analysis informs high-stakes business decisions. The result? We’ll need to do the analysis ourselves.
2. Similarly, AI is not a replacement for human empathy and understanding. Financial decision-making often involves understanding the needs and perspectives of different stakeholders. As you build your assumptions and forecasts, you’ll need to internalize those perspectives, and transferring that understanding to an AI analyst may be difficult.
3. AI is not infallible. While AI can certainly help reduce errors, it is not immune to mistakes (including obvious ones - read any of the recent Bing AI articles). Business decisions based solely on AI insights could lead to costly errors. In the end, what are you willing to leave an AI to do independently? How much will you need to QA? And how much will you just do yourself for your own peace of mind?
4. AI can be biased. Like humans, AI is subject to bias, particularly when it is trained on biased data. This can lead to skewed recommendations or outcomes that don't align with ethical or legal requirements. Humans are needed to provide oversight and ensure that AI behaves as intended. Startups like Anthropic, focused on building safe AI systems, have demonstrated the importance of reinforcement learning from human feedback for fine-tuning language models. I’m bullish on a future where AI tools are consistently helpful and unbiased, but today we’re still very much in the Wild West of AI, where trust in an AI needs to be earned, not assumed.
Overall, it’s remarkable how fast the AI space is moving given that ChatGPT was only released last November. I believe there’s a place for AI in data analysis - heck, let me be the first to sign up for a beta test - but there are some material barriers that will continue to necessitate human-driven data analysis in a spreadsheet.