
Deploying Machine Learning Models with Sparkflows MLOps



Machine learning models provide powerful capabilities to make predictions and gain insights from data. However, developing accurate models is only the first step. To fully realize value, models need to be properly deployed and served. This allows them to be used in applications and drive business impact.


Sparkflows provides MLOps capabilities to deploy models built within its platform or models developed externally. This post will explore how to serve Sparkflows models for both offline batch scoring and online real-time usage. We’ll cover the administrator setup and configurations required, as well as how users can leverage the options.


Offline Model Serving


Offline serving is best suited for use cases that involve periodic batch scoring, such as scoring new customer data each night to generate predictions. Key aspects include:

  • High throughput for scoring batches of data

  • Relaxed latency requirements

  • Ability to leverage spare compute capacity

Sparkflows supports multiple methods for offline model deployment:


Sparkflows Scoring Workflows


Since Sparkflows already includes a full data engineering platform, scoring models with Sparkflows is straightforward. Users can create a workflow that loads the model and uses it to generate predictions on new data. The workflow can be scheduled or triggered on-demand via REST API.


No additional administrative setup is required compared to running other Sparkflows workflows. This option is great for users who are already using Sparkflows for their data pipelines and want integrated model deployment.
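
For illustration, the logic such a scoring workflow performs is similar to the following PySpark sketch; the model and data paths are placeholders, and in Sparkflows the equivalent steps are configured in the workflow itself rather than written by hand.

    from pyspark.sql import SparkSession
    from pyspark.ml import PipelineModel

    spark = SparkSession.builder.appName("nightly-batch-scoring").getOrCreate()

    # Load the trained pipeline model saved during training (placeholder path).
    model = PipelineModel.load("/models/churn_pipeline")

    # Read the new data to be scored (placeholder path).
    new_data = spark.read.parquet("/data/customers/incoming")

    # Generate predictions and persist them for downstream use.
    predictions = model.transform(new_data)
    predictions.write.mode("overwrite").parquet("/data/customers/predictions")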


Standalone Python Scorer Docker Image


For deployment outside the Sparkflows platform, models can be encapsulated in a custom Docker image. The image bundles the Python model-loading and scoring logic needed to perform batch predictions on input data files or directories.


Administrators need to pull the scorer image from the Sparkflows repository for the specific platform version. The ML model file itself also needs to be downloaded from the Sparkflows model registry. When running the Docker image, the model file should be mounted and input/output directories provided.
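
The scorer image itself ships with Sparkflows, but the shape of its batch scoring logic is roughly as sketched below. This is illustrative only: the mount points, environment variables, and model format are assumptions, not the actual contract of the image.

    import glob
    import os

    import joblib
    import pandas as pd

    # Hypothetical locations, supplied to the container via Docker mounts.
    MODEL_PATH = os.environ.get("MODEL_PATH", "/model/model.pkl")
    INPUT_DIR = os.environ.get("INPUT_DIR", "/input")
    OUTPUT_DIR = os.environ.get("OUTPUT_DIR", "/output")

    # Load the model file mounted into the container.
    model = joblib.load(MODEL_PATH)

    # Score every input file and write the results to the output directory.
    for path in glob.glob(os.path.join(INPUT_DIR, "*.csv")):
        batch = pd.read_csv(path)
        batch["prediction"] = model.predict(batch)
        batch.to_csv(os.path.join(OUTPUT_DIR, os.path.basename(path)), index=False)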


This approach provides the flexibility to run scoring on-premises or in the cloud. The Docker container can be executed on the Sparkflows server itself or on a separate machine. It also enables cluster deployment, for example on Kubernetes.


MLflow Model Serving


Sparkflows leverages MLflow for model tracking and artifacts. Once a model is trained, users can easily register it with MLflow from the Sparkflows UI.


The administrator needs an MLflow server accessible from Sparkflows for model registration. On-premises or cloud-hosted options are available. After setup, the MLflow URL and credentials are specified in the Sparkflows configuration.


With the backend configured, users can register models with a single click. The model files and metadata are logged in MLflow, and batch scoring Python scripts can then be executed against the registered model.
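
As an illustration, such a batch scoring script might look like the following; the tracking URI, model name, and version are assumptions that depend on the MLflow server the administrator has configured.

    import mlflow
    import pandas as pd

    # Placeholder tracking server; use the URL configured in Sparkflows.
    mlflow.set_tracking_uri("http://mlflow.example.com:5000")

    # Load version 1 of a registered model (name and version are assumptions).
    model = mlflow.pyfunc.load_model("models:/customer_churn/1")

    # Score a batch of new records and save the results.
    batch = pd.read_parquet("new_customers.parquet")
    batch["prediction"] = model.predict(batch)
    batch.to_parquet("scored_customers.parquet")
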

Online Model Serving


Online serving provides low latency predictions on streaming data, supporting real-time usage. Example use cases include web services, IoT applications, and personalized user experiences.

Sparkflows enables online serving via workflows with REST APIs, Docker containers, and MLflow model deployment.


Sparkflows Scoring Workflows with REST API


Sparkflows workflows natively support triggering via REST API requests. This allows them to be invoked on-demand for low latency scoring.


Once a scoring workflow is created, users can simply enable REST API triggering in the configuration. There is no additional admin setup required compared to running batch workflows. Calling the workflow for online predictions is as simple as sending a REST request and handling the response.
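
As a sketch, a client application might trigger a scoring workflow like this; the URL path, authentication scheme, and request payload are hypothetical and will vary with your Sparkflows installation and the parameters the workflow expects.

    import requests

    SPARKFLOWS_URL = "http://sparkflows.example.com:8080"  # placeholder host
    WORKFLOW_ID = "1234"                                   # placeholder workflow id

    # Invoke the scoring workflow with a single record and wait for the result.
    response = requests.post(
        f"{SPARKFLOWS_URL}/api/v1/workflows/{WORKFLOW_ID}/execute",
        headers={"Authorization": "Bearer <api-token>"},   # placeholder credentials
        json={"customer_id": 42, "tenure_months": 18, "monthly_charges": 79.5},
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())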


Standalone Python Scorer Docker Image with REST API


The Sparkflows standalone scoring Docker images also support real-time usage. When the container is run, it starts a REST endpoint that handles model scoring.


The administrator workflow is similar to the batch case. After pulling the image and running it with a mounted model file, the REST endpoint becomes available. The container can be run anywhere the endpoint needs to be reachable, including on cloud infrastructure.


Requests can be sent to the endpoint to score data in real time. The image handles loading the model, serving predictions, and returning results. It provides a turnkey REST API for any Sparkflows model.
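
A client call against the running container might look like the sketch below; the port, endpoint path, and payload schema are assumptions, so consult the scorer image documentation for the exact contract of your Sparkflows version.

    import requests

    SCORER_URL = "http://localhost:8080/predict"  # placeholder host, port, and path

    # Send one record for real-time scoring and print the prediction.
    record = {"age": 37, "plan": "premium", "monthly_usage_gb": 42.0}
    response = requests.post(SCORER_URL, json=record, timeout=5)
    response.raise_for_status()
    print("prediction:", response.json())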


MLflow Model Serving and Deployment


In addition to batch scoring, the MLflow model registry also enables hosting models for online serving. After a Sparkflows model is registered, it can be deployed with a REST endpoint directly from the UI.


The administrator needs to have the MLflow server accessible and configured in Sparkflows, as covered earlier. Once set up, users can deploy models as REST servers with the click of a button.


Deployed models have their endpoint and a sample scoring code snippet shown in the UI. Users can easily view and test the live REST API for model inference. The endpoint can also be called from applications and services that need low-latency predictions.
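
Calling such an endpoint from an application could look like the following; the host and port are placeholders, and the JSON layout (dataframe_records) matches recent MLflow scoring servers, so adjust it to the MLflow version in use.

    import requests

    ENDPOINT = "http://mlflow-serving.example.com:5001/invocations"  # placeholder

    # MLflow scoring servers accept records as a JSON payload.
    payload = {
        "dataframe_records": [
            {"age": 37, "plan": "premium", "monthly_usage_gb": 42.0}
        ]
    }
    response = requests.post(ENDPOINT, json=payload, timeout=5)
    response.raise_for_status()
    print(response.json())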


For models that no longer need to be served, the REST server is easily torn down by clicking Undeploy. Models can also be re-registered if needed.


Advanced Integrations


Sparkflows provides seamless integration with popular model-serving platforms such as Amazon SageMaker and Azure Machine Learning.


After developing and registering models in Sparkflows, users can deploy them by clicking “Deploy to SageMaker” or “Deploy to Azure ML”. Sparkflows handles packaging the model and managing the deployment.


The administrator needs to configure credentials for the target cloud platform in Sparkflows. Once configured, users can deploy with zero code. The platforms then handle hosting the model REST endpoint.


In the Sparkflows UI, model consumers can view the created endpoint and sample scoring code for calling it from applications. Cloud hosting enables easy scalability, security, and reliability.
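
As an illustration of consuming a SageMaker-hosted model from an application, a boto3 call might look like this; the endpoint name, region, and payload format are assumptions, and the actual values appear alongside the sample scoring code in the Sparkflows UI.

    import boto3

    # Placeholder region and endpoint name.
    runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

    # Send one CSV-formatted record to the hosted endpoint and read the result.
    response = runtime.invoke_endpoint(
        EndpointName="sparkflows-churn-model",
        ContentType="text/csv",
        Body="37,premium,42.0",
    )
    print(response["Body"].read().decode("utf-8"))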

In summary, Sparkflows provides a range of options for deploying ML models, both for offline batch workloads and online real-time usage.


Built-in support for Sparkflows scoring workflows makes it easy to integrate with existing data pipelines. For portable deployment, Docker containers package models for batch or online serving.


Integration with MLflow enables advanced model management with just a few clicks. Models can be registered, deployed, and managed from the Sparkflows UI. Finally, one-click deployment to SageMaker and Azure ML provides highly scalable cloud hosting.


Combined, these capabilities provide a complete MLOps toolkit to seamlessly transition models from development to production. Companies can rapidly deploy models at scale to unlock business value from AI and machine learning.


References:

Sparkflows MLOps: MLOps | Sparkflows

Sparkflows User Guide: User Guide | Sparkflows

Sparkflows Tutorials: Tutorials | Sparkflows

Learn from the Experts: Videos | Sparkflows

Try Sparkflows Yourself: Download | Sparkflows



