For all the talk about the impact of artificial intelligence in the workplace, the AI and machine learning industry is still maturing.
Generative AI is going to change how organizations create, share and store data and will change the narrative around collaboration, Docker CEO Scott Johnston told the audience at JFrog’s SwampUp 2024 last fall.
However, just as companies added DevOps and DevSecOps when software and app building moved in-house, organizations need to follow a similar approach to secure AI development. This has led to the rise of MLOps and MLSecOps.
MLOps is a set of practices that manage the machine learning lifecycle while bridging the gap between development and operations, according to Google. Without MLOps, any application relying on ML or AI could see more errors, reduced efficiency and an inability to collaborate effectively across teams.
MLSecOps builds security and privacy practices into ML development. It plays a vital role in ensuring that machine learning and AI processes can meet governance, regulatory and compliance standards.
Organizations that are developing AI, especially generative AI applications, need to deploy MLOps and MLSecOps when they are building models, according to Yuval Fernbach, VP and CTO of MLOps at JFrog.
“MLOps is a sub-category of DevOps, and companies need to think about how their DevOps process can cover ML pipelines,” said Fernbach during a conversation at SwampUp.
Why organizations need MLOps and MLSecOps
Companies that run ML in production or rely on any type of AI need to think about how to best secure those technologies. But few organizations are using MLOps right now.
“It’s still in its infancy, technology-wise, but deploying MLOps is the only way for companies to achieve the impact of ML within their organization,” said Fernbach.
That’s because the process of building ML and AI into applications is complicated.
ML applications have complex components. Models depend on quality data, which requires intensive preparation, and they also need training, monitoring and tuning. On top of that, ML and AI applications rely on collaboration across data scientists, engineers, cloud developers and security teams.
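For teams trying to picture how those pieces connect, the sketch below shows one minimal way to wire them together: data preparation, training and an evaluation gate that decides whether a model gets promoted. It is an illustration only, assuming scikit-learn; the accuracy threshold and the "registry" step are stand-ins, not any particular vendor's MLOps API.

```python
# Minimal sketch of an ML pipeline with an explicit evaluation gate.
# Illustrative only: the dataset, threshold and registry step are assumptions,
# not any specific MLOps product's interface.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Data prep: split the data and keep scaling inside the pipeline so the
# same preprocessing runs identically in production.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Monitoring/tuning hook: the gate blocks promotion if quality drops.
accuracy = accuracy_score(y_test, model.predict(X_test))
MIN_ACCURACY = 0.95  # assumed threshold, set by the team
if accuracy >= MIN_ACCURACY:
    print(f"accuracy={accuracy:.3f} -> promote model to the registry")
else:
    print(f"accuracy={accuracy:.3f} -> block promotion, retrain or retune")
```

The point of the gate is less the specific metric than the habit: promotion is a decision the pipeline records, not something an individual engineer does by hand.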
There are a lot of nuances that go into training an ML model, and any inaccuracies could come back to haunt the organization, Dilip Bachwani, CTO at Qualys, pointed out in a conversation at QSC 2024 in October.
In one example, Air Canada’s chatbot hallucinated and gave a passenger incorrect information about refunds, and a court ruled the airline liable because it was responsible for the information fed into the model.
“You have to put guardrails around your LLMs before you deploy them,” said Bachwani.
Having visibility into the AI system is also a must, because without it, the organization risks losing control over what the AI models produce. If everyone across the company is adding data without oversight, the result is a system littered with vulnerabilities and risk.
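What a guardrail looks like in practice varies, but the idea can be sketched simply: validate the model's output against policy before it reaches a user, and log whatever gets blocked so the team keeps visibility. In the illustration below, `call_llm` is a placeholder for whatever model endpoint an organization actually uses, and the refund-policy check is an assumed example rather than a complete safety framework.

```python
# Minimal sketch of output guardrails wrapped around an LLM call.
# `call_llm` stands in for a real model endpoint; the policy check is
# an illustrative assumption, not a full safety framework.
import re

REFUND_POLICY_NOTE = "Refund questions must cite the published policy page."

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (swap in your provider's client).
    return "You can request a refund within 90 days of travel."

def within_policy(answer: str) -> bool:
    # Example guardrail: block answers that invent specific refund terms
    # instead of deferring to the documented policy.
    invents_terms = re.search(r"\b\d+\s*(days|hours|weeks)\b", answer, re.IGNORECASE)
    cites_policy = "policy" in answer.lower()
    return cites_policy or not invents_terms

def answer_with_guardrails(prompt: str) -> str:
    raw = call_llm(prompt)
    if not within_policy(raw):
        # Fail closed: log the event for visibility and return a safe fallback.
        print(f"guardrail blocked: {raw!r}")
        return f"I can't confirm that. {REFUND_POLICY_NOTE}"
    return raw

print(answer_with_guardrails("Can I get a refund after my trip?"))
```

Failing closed and logging the blocked response is the design choice that matters here: it keeps a record the security team can audit instead of letting questionable answers disappear into user chats.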
MLOps and MLSecOps are meant to be the guardrails, and they should work in tandem with DevOps teams.
“There is commonality here in how you're building models [and deploying] models and we need to think of it, not separately, but in a more unified manner,” said Bachwani.
Just as developers should run security checks and verify the security gates in place before software goes live, ML engineers need to take the same kinds of steps. These checks may lead to another realization.
“In some cases, you may realize that you just have to go to more traditional machine learning, deep learning. And LLM might not be the answer for your needs,” said Bachwani.
Organizations will likely have a mix of traditional ML and generative AI, eventually adding on whatever comes next. The challenge is to not just adopt the technology, but to ensure the models used are secure and aren’t adding new risk.
Deploying MLOps and MLSecOps allows engineers and security teams to work together to train models designed for the organization’s specific use cases and to provide the governance needed to keep those models free of vulnerabilities.
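One way to picture that governance is as a CI step that refuses to ship a model unless its artifact matches an approved build and its dependencies have been reviewed. The sketch below is an assumption-heavy illustration, not any scanner's actual interface; the hash and allow-list values are placeholders a team would record at training time.

```python
# Minimal sketch of an MLSecOps gate run in CI before a model is deployed.
# The checks (artifact hash pinning, dependency allow-list) are illustrative
# assumptions, not a specific product's API.
import hashlib
import sys
from pathlib import Path

# Provenance: only artifacts whose hash matches the approved build may ship.
APPROVED_SHA256 = {"model.pkl": "d2b2..."}  # assumed value recorded at training time

# Supply chain: packages allowed in the serving image, reviewed by security.
ALLOWED_PACKAGES = {"scikit-learn", "numpy", "fastapi"}

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def gate(artifact: Path, requirements: Path) -> bool:
    ok = True
    expected = APPROVED_SHA256.get(artifact.name)
    if expected is None or sha256_of(artifact) != expected:
        print(f"FAIL: {artifact} does not match an approved build")
        ok = False
    for line in requirements.read_text().splitlines():
        name = line.split("==")[0].strip()
        if name and name not in ALLOWED_PACKAGES:
            print(f"FAIL: dependency '{name}' is not on the reviewed allow-list")
            ok = False
    return ok

if __name__ == "__main__":
    passed = gate(Path("model.pkl"), Path("requirements.txt"))
    sys.exit(0 if passed else 1)
```

A non-zero exit code is all a CI system needs to stop a deployment, which is how the security gate becomes part of the pipeline rather than a separate review step.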