Concerns about AI, and calls to regulate it, are everywhere. However, focusing on regulations specific to LLMs can miss the bigger picture: model development best practices, financial services regulations, and global data privacy regulations already provide a substantial foundation.
That raises a question worth exploring: how far apart are the calls for new regulations and the rules that already exist? Sid Mangalik and Dr. Andrew Clark discussed this topic on a recent episode of The AI Fundamentalists podcast. We dig further into their points in this Q&A:
Q: Around the time of the podcast discussion, Sam Altman of OpenAI and other high-profile tech leaders warned that AI poses a risk of human extinction. Wrapped in that warning was a call for immediate regulation. What complexities underlie that call?
Andrew: It's really great to see that people are starting to think about how we actually use these generative AI foundation models rather than just using ChatGPT as a productivity tool. If companies actually want to leverage these technologies, they need to start thinking about the reality of deployments. These models aren't just a magical box; engineering and development teams have to figure out how they're going to do this. That's where regulation comes in. But one of my concerns with the current conversation is the oversimplified premise of "regulating generative AI models." Okay, well, what does that mean? What are we regulating? Why are we saying only certain people can build them?
Q: The "what" of regulation seems to be the muddy middle in deploying these models. Can you say more about that?
Andrew: I'm a little concerned about regulating the technology itself versus regulating the use cases and putting guardrails in place. For instance, a lot of the meat and potatoes that people want regulated for LLMs is data privacy, meaning the data used from the start to train the models. How do I know whether my data is being used in these algorithms? How do I know the models are being truthful over time? How do we know individuals aren't being exploited? Data regulations already exist in some states, and maybe we start talking about GDPR-type scenarios in the US as well. Data privacy regulation could be more impactful than generative AI legislation, because "generative AI" is a very broad category.
Sid: Building on what Andrew is saying, we probably need to do more as practitioners. What will data privacy look like for business models, and what counts as valid and fair training data? What will be allowable and permissible questions and prompts to send to these servers? And what are acceptable answers for these systems to give? These are questions the major players in this field aren't going to be as interested in; they're going to be much more interested in who is allowed to play in this space.
Q: You’re saying that the existing regulations play a part, but it also sounds like we have to watch the players who benefit from potential regulations, as well as those who don’t. Is that correct?
Sid: I think this touches on the underlying problem: OpenAI, which makes ChatGPT, released it to the public and then immediately turned around and asked for regulation of generative AI. To most practitioners, that looks a lot like closing the door behind you after forcing it open. We're now approaching a situation where the major players are pushing for regulations that will look more like "Are you allowed to release these models at all?"
Andrew: Federal regulators would have to do a lot of discovery just to know what that means, not to mention the impact it would have. If you have OpenAI implying they want a certain thing regulated a certain way because it benefits them, that's not really benefiting the American people or making us safer, right?
Q: Andrew, in your earlier response about enforcing existing regulations, what’s another example besides those found in data privacy?
Andrew: Consider consumer-facing use cases with LLMs and generative AI. The FTC has made it very clear that its existing regulations apply to AI. So I'm a little hesitant about "let's regulate LLMs because they're LLMs" versus asking what we're actually trying to solve. There are real privacy concerns here, and I'm not trying to dismiss them at all. It's more a question of breaking this down to first principles: what are we fundamentally trying to accomplish? Are we talking about individual privacy? About making sure individuals aren't being exploited? Let's hammer in on those. If we need to give the FTC a few more teeth, or expand some data privacy laws, let's talk about those aspects rather than a blanket rule for generative AI, because isolating regulation to a single model type like an LLM can confuse the issues and actually slow innovation.
Sid: We also can't forget NIST. The NIST AI Risk Management Framework is already an overarching framework for the policies you should have in place around models.
Andrew: Yes! And although it's specific to banking, the OCC's model risk management guidance is the gold standard, and it's what NIST drew on for its framework. I would even go back to the OCC guidance as a model for how we should enhance regulatory standards for AI.
Q: Any final thoughts on newer regulations for AI as the space evolves?
Andrew: We want to make sure that regulations focus on use cases, such as progress on basic human needs like healthcare. Again, what problem needs to be solved? From there, let the technology evolve along the trajectory we've seen before, where automating mundane tasks creates richer life and work experiences.
Sid: As AI weaves further into our daily lives, we want to see regulations that look first at how models affect human lives. That means comprehensive laws defining permissible data collection and use, guardrails on high-risk model decision-making, mandated auditability of model decisions, and regular performance monitoring of models in production.
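To make those last two points concrete, here is a minimal Python sketch of what decision auditability and production performance monitoring can look like in practice. The wrapper class, the toy approval rule, the log file name, and the 0.8 accuracy threshold are all hypothetical illustrations, not a prescribed standard.

```python
import json
import time
from collections import deque

class AuditedModel:
    """Wraps a prediction function with an audit trail and a rolling
    performance check. All names and thresholds here are illustrative."""

    def __init__(self, predict_fn, window=100, min_accuracy=0.8):
        self.predict_fn = predict_fn
        self.recent = deque(maxlen=window)  # rolling record of correct/incorrect
        self.min_accuracy = min_accuracy

    def predict(self, features):
        decision = self.predict_fn(features)
        # Auditability: record what went in and what came out, with a
        # timestamp, as JSON lines that an auditor can replay later.
        record = {"ts": time.time(), "features": features, "decision": decision}
        with open("audit_log.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")
        return decision

    def report_outcome(self, decision, actual):
        # Performance monitoring: compare decisions to ground truth as it
        # arrives, and alert when rolling accuracy degrades in production.
        self.recent.append(decision == actual)
        if len(self.recent) == self.recent.maxlen:
            accuracy = sum(self.recent) / len(self.recent)
            if accuracy < self.min_accuracy:
                print(f"ALERT: rolling accuracy {accuracy:.2f} below threshold")

# A toy rule standing in for a real model.
model = AuditedModel(lambda x: x["income"] > 50000)
approved = model.predict({"income": 62000})
model.report_outcome(approved, actual=True)
```

In a real deployment the audit log would go to durable, access-controlled storage and the monitoring would cover more than accuracy (drift, fairness metrics, latency), but the shape of the obligation is the same: every decision is recorded, and performance is checked continuously rather than only at launch.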
For more discussions about the foundations of building AI, subscribe to The AI Fundamentalists on Spotify, Apple Podcasts, and other popular podcast players.