Title: Sridhar Ramaswamy, SVP of AI, Snowflake
Prediction: Generative AI’s negative impacts will be hard to manage early on — including job loss, deep fakes, and a deepening digital divide.
Although generative AI is reimagining how we interact with machines, there are some immediate concerns that will be particularly challenging in the early years of widespread AI and language model adoption. For a lot of people involved in what we loosely call “knowledge work,” quite a few of their jobs are going to vaporize. Rapid change makes it hard to quickly absorb displaced workers elsewhere in the workforce, so both the private sector and governments will need to step up. Deep fakes are another hurdle, and we can expect increased attacks on what we humans collectively think of as our reality, resulting in a world where no one can, or should, trust a video of you because it may be AI-generated. Finally, advances in AI will exacerbate the digital divide that has been widening over the past 20-30 years between the haves and the have-nots, and will further increase inequality across the globe. I can only hope that by making information more accessible, this emerging technology leads to a new generation of young adults who better understand the issues and the potential, and can counter that risk.
Prediction: Ethical guardrails for AI will emerge, from both the private and public sectors, faster than they did for past tech upheavals such as privacy.
I’d like to think that we’ve learned from our past when it comes to establishing safe and ethical rules for leveraging new technologies, with the lack of privacy frameworks and guidelines around sensitive data serving as a cautionary tale of what not to do. Governments are stepping up earlier in the cycle when it comes to AI adoption and use. For example, in mid-September the U.S. Senate hosted a private, informational round table that included leaders from OpenAI, NVIDIA, Google, Meta, and others. However, quick regulatory intervention will not solve every problem, and I suspect the industry will primarily be responsible for defining what “responsible AI” means. Narrow tech regulation is very hard. While such regulation made the internet as we know it possible, the internet is also rife with lies, hate speech, and bullying. We’ve seen that well-meaning regulation can sometimes play out in bad ways.
Prediction: LLMs will become commonplace, but most people will use “MLMs” (smaller models trained using the very large ones) because we don’t all need trillion-parameter models!
As large language models (LLMs) become more democratized, we’ll see most organizations start to downsize their models, with smaller language models becoming the industry standard. There will still be some big players, but in general most vendors will fine-tune smaller models tailored to specific verticals and use cases. I see a future with millions of smaller language models, operating at the company or department level and providing hyper-customized insights for a specific employee or need. Smaller language models require less time and fewer resources to maintain, can be operated inside a company’s existing security perimeter, and are often faster and more accurate because they’re optimized for a narrower set of tasks than the do-it-all models that have garnered most of the attention to date. There is growing evidence that a 20-billion-parameter model can do most of what you want from a language model, and can be just as effective as, if not more effective than, a model on the scale of OpenAI’s GPT-4 (reportedly around 1.8 trillion parameters).
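To make the idea concrete, below is a minimal sketch of what fine-tuning a small open model on a company’s own documents could look like, using the Hugging Face transformers, datasets, and peft libraries. The base model, the internal_docs.jsonl file, and the training settings are illustrative assumptions, not a description of Snowflake’s approach.

```python
# Illustrative sketch: adapt a small open model to an internal corpus.
# Model name, data file, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base = "EleutherAI/gpt-neo-1.3B"  # a small open model, chosen only as an example
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters keep the fine-tune cheap enough to run on in-house hardware,
# which is part of the appeal of company- or department-level models.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Hypothetical internal corpus: one JSON record per document, with a "text" field.
data = load_dataset("json", data_files="internal_docs.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dept-model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("dept-model")  # weights stay inside the company's own perimeter
```

The point of the sketch is the shape of the workflow rather than the particular libraries: a model small enough to fine-tune and serve on infrastructure the organization already controls, trained on the narrow data that matters to it.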