Embrace AI or Risk Irrelevance: The Future of Work Is Now
Navigating the Impact of AI on Job Roles: Complementing Human Workers
Artificial intelligence, with its varied use cases, is sure to enhance business processes across domains, but measuring its impact on existing job roles is tricky. Human oversight is undoubtedly necessary to prevent AI inaccuracies, but fears of job loss are overblown: AI complements workers rather than replacing them. ChatGPT arrived to a mix of wonder and curiosity. Its groundbreaking, human-like way of interacting with data opens up vast integration possibilities.
A whole new class of users can now interact with data that was previously the preserve of data analysts and scientists. No longer do we need to learn cryptic regex formulas to be search-engine heroes. Natural language can finally be used to get users the data they want. More importantly, it can be formatted or adjusted based on arbitrary requirements, e.g., "in less than 200 words" or, if it's your thing, "generated in the voice of Morgan Freeman."
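The contrast is easy to show in code. Below is a minimal sketch: the "old way" extracts data with a regex only a specialist would write, while the "new way" simply states the request, plus an arbitrary formatting constraint, in plain English. The `build_prompt` helper and the sample log line are illustrative assumptions, not any particular product's API.

```python
import re

# The "old way": a cryptic regex only a search-engine hero can read.
LOG_LINE = "2024-05-01 12:03:44 ERROR user=alice@example.com failed login"
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
emails = EMAIL_RE.findall(LOG_LINE)  # ['alice@example.com']

# The "new way": plain English, with formatting constraints bolted on.
# Sending this prompt to a model is left out; only the request is built.
def build_prompt(question: str, constraint: str) -> str:
    """Combine a natural-language question with a formatting constraint."""
    return f"{question}\nConstraint: {constraint}"

prompt = build_prompt(
    "List every email address mentioned in this log line: " + LOG_LINE,
    "answer in less than 200 words",
)
```

The point is not that the regex disappears, but that the person asking no longer has to write it.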
Freshly buoyed by their newfound, albeit shallow, understanding of AI, MBAs have started clamouring for adoption, with the promise of productivity gains exceeding 30%, reduced headcount costs, and a leg up over the competition.
But the productivity gains touted by some organizations have proven hard to reproduce. They are either the result of inefficient practices to start with, or of tooling that is not yet available to the general public. We saw this with some pre-release studies of Copilot, GitHub's AI pair-programming assistant: many CEOs were disappointed that they could not achieve similar successes with the public version.
Some gains are even harder to measure. Productivity has many lenses: code generated in the same timeframe but with better unit-test coverage and actual documentation carries greater value in terms of engineering excellence.
AI programs still need to be monitored by humans to prevent inaccuracies, which can be severe. You cannot simply pipe an AI directly to your customers; it is too raw and unfiltered, and may present inaccurate facts to the end user.
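One way to keep a human in that loop is a routing gate: nothing generated goes straight to the customer unless it clears every check, and everything else lands in a review queue. The sketch below is a deliberately minimal illustration of that pattern; the confidence score, threshold, and flagged-phrase list are all assumptions for the example.

```python
# Phrases we never want an unreviewed AI answer to contain (illustrative).
BANNED_CLAIMS = ("guaranteed", "100% accurate")

def route_answer(answer: str, confidence: float, threshold: float = 0.9):
    """Return ('send', answer) only when the answer clears every check;
    otherwise return ('review', answer) so a human inspects it first."""
    flagged = any(term in answer.lower() for term in BANNED_CLAIMS)
    if confidence >= threshold and not flagged:
        return ("send", answer)
    return ("review", answer)
```

A confident but overreaching answer like "Your refund is guaranteed." is routed to review, while a plain factual reply above the threshold is sent on. Real deployments layer many more checks, but the shape is the same: the default path is human review, not the customer.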
So what does the future of AI look like? We’re currently in a twilight zone where, on one end, content generators use AI to generate reams of content from a set of five bullet points, while on the other side, consumers use AI to take all that content and condense it into five bullet points.
Users can expect customisable journalistic flavours of news from traditional outlets, such as Wired, The Wall Street Journal or USA Today, each of which has built its own style and audience. For example, with the help of generative AI, you can choose to have Reddit content generated in the style of The Wall Street Journal!
AI has introduced a new need for verifiable truth, and we might finally have found an intelligent use case for blockchains and NFTs to allow us to track the source of truthful content.
Such provenance tracking restores trust in content sources, potentially reclaiming value lost to platforms such as Google and Facebook. It allows news organizations to track and monetize their original content and to reward upstream collaborators.
It turns out that humans are adept at detecting generated content and, less surprisingly, quickly get bored of it. A measure of content value will emerge in which human-generated, or at least human-curated, content holds more human attention than AI-generated content. The success of generated content will therefore depend on the ability to hide the fact that it is autogenerated.
While we are yet to discover what new job roles AI will create, we are already seeing increased demand for skills in ChatGPT question formulation and context injection.
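"Context injection" sounds exotic but is mechanically simple: paste the relevant background material into the prompt so the model answers from your material instead of guessing. The template below is a common pattern, not a standard; the section labels and instruction wording are assumptions for illustration.

```python
def inject_context(question: str, documents: list[str]) -> str:
    """Build a prompt that grounds the model in supplied documents.

    The instruction, separators, and layout are illustrative; the key
    idea is that the caller's material travels inside the prompt.
    """
    context = "\n---\n".join(documents)
    return (
        "Answer using ONLY the context below. If the answer is not "
        "in the context, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = inject_context(
    "What is our refund window?",
    ["Refunds are accepted within 30 days of purchase."],
)
```

Formulating the question well and choosing which documents to inject is exactly the skill the new roles are forming around.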
The narrative of AI replacing human jobs is similar to the panic created when computers first hit the consumer market. We could never have imagined back then the thousands of roles that did not exist at that point but are now part of our everyday vernacular.
With the influx of new ChatGPT users, AI specialists will need to modernize their ways of working to keep pace with new competition who are not data analysts: they now compete with the skills these newcomers bring to the domain, such as product knowledge and market expertise.
We are still missing key capabilities, such as instrumenting AI with company content, although this is changing fast. You can upload indexes of your content that can be used to simulate training, but eventually tooling will support full parsing and ingestion of your content, at which point AI's search capability will become the core of most company intranets. Later, once a proper data taxonomy is defined, it becomes possible to start synthesising new observations from your own domain-specific knowledge.
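The intranet scenario above reduces to a retrieval pipeline: score company documents against the query, then inject the best match into the prompt. The toy below uses bag-of-words overlap purely to show the shape of the pipeline; production systems use embeddings and a vector index, and the sample intranet snippets are invented for the example.

```python
def score(query: str, doc: str) -> int:
    """Count query words that appear in the document (case-insensitive)."""
    doc_words = set(doc.lower().split())
    return sum(1 for w in query.lower().split() if w in doc_words)

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document with the highest overlap score."""
    return max(docs, key=lambda d: score(query, d))

# A two-document "intranet" (entirely made up for illustration).
INTRANET = [
    "Expense reports are due on the 5th of each month.",
    "The VPN endpoint for remote staff is vpn.example.internal.",
]

query = "when are expense reports due"
best = retrieve(query, INTRANET)
prompt = f"Context: {best}\nQuestion: {query}"
```

Swap the scorer for embeddings and the list for a document store, and this is the skeleton of an AI-backed intranet search.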
We started with an audit of our current engineering capabilities by understanding firstly the span and quality of our data as well as our systems and processes, where AI could provide some quick wins. It was a constant battle between engineers looking for problems to apply our solutions to and product or business owners struggling to innovate in their domain of expertise with scant knowledge of the promise of AI.
We are entering a paradigm shift where the role stays the same, but the process gets better for everyone. As a generator of knowledge, we now have a powerful assistant to augment output, accuracy and transcription capabilities. As a consumer of knowledge, AI offers a powerful tool to demand personalization of data in ways that were not cost-effective to scale to individual preference levels.
In the end, AI will make knowledge workers even more valuable, and lower-skilled workers less so, exactly as the invention of the computer disrupted the workforce.