Scoring Models in Production

June 18, 2020

One of the three pillars of SAS Viya is Deployment. SAS Viya makes it easy to deploy models to production environments. Depending on the business use case, users can choose to deploy their models with SAS Model Manager to different environments:

  • SAS Cloud Analytic Services (CAS): for batch scoring; optimized for scoring large tables of records.
  • SAS Micro Analytic Service (MAS): for scoring transactions; MAS is a high-performance execution service best suited for scoring individual records in real time.
  • Docker containers: used for scoring Open Source models (Python and R).
  • External Databases: such as Hadoop or Teradata, for specific use cases.

We will see how users can execute models in the first three environments and how to integrate them into third-party applications.

This guide requires these publishing destinations to be configured on SAS Model Manager. For more information, see Configuring Publishing Destinations in SAS Model Manager: Administrator’s Guide.

The code steps are implemented on SAS Viya 3.5.

Batch Scoring: CAS

SAS Cloud Analytic Services is the analytical engine of SAS Viya: an in-memory, distributed engine. It is the go-to publishing destination for scoring vast amounts of data. To score models on CAS, users can invoke the runModelLocal action of the modelPublishing action set.

import swat

# Connect to CAS (credentials can be passed as arguments or read from an .authinfo file)
s = swat.CAS("cloud.example.com", 5570)

# Load modelPublishing action set
s.loadactionset('modelPublishing')

# Score Model in CAS
s.modelPublishing.runModelLocal(
    modelName="Great Forest Model",                             # Model Name
    modelTable={"caslib":"Models","name":"sas_model_table"},    # CAS destination
    intable={"caslib":"public","name":"INPUT_TABLE"},           # Input Table
    outTable={"caslib":"public", "name":"OUTPUT_TABLE"})        # Output Table
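
To verify the output, you can fetch a few rows of the scored table. A minimal sketch, assuming the scoring run above completed successfully:

# Preview the first rows of the scored output table
s.table.fetch(table={"caslib":"public", "name":"OUTPUT_TABLE"}, to=5)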

Transaction Scoring: MAS

SAS Micro Analytic Service (MAS) is a high-performance execution service. It's multi-threaded and can be clustered for high availability. MAS also supports the execution of Python programs.

Models deployed to a SAS Micro Analytic Service destination from SAS Model Manager can be scored through the REST interface.

The REST API is documented on developers.sas.com. You can follow the “Getting Started” guide to configure your environment for API use.
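
For example, you can request an access token from the SAS Logon service using the password grant. This is a minimal sketch, assuming an OAuth client has already been registered by an administrator (the client ID, secret, and user credentials below are placeholders):

import requests

# Request an OAuth access token from the SAS Logon service
# (client ID/secret and user credentials are placeholders)
token_response = requests.post(
    "https://cloud.example.com/SASLogon/oauth/token",
    data={"grant_type": "password", "username": "myuser", "password": "mypassword"},
    auth=("app_client_id", "app_client_secret"))
access_token = token_response.json()["access_token"]

The token is then passed in the Authorization header of the scoring request, as shown below.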

To score a model, check the Execute a step operation. The documentation includes examples in Shell, JavaScript, Python, and Go for calling the REST API. Below is an example written in Python.

import requests

url = "https://cloud.example.com/microanalyticScore/modules/great_forest_model/steps/score"

# Input variables expected by the model's score step
payload = {"inputs": [{"name": "var1", "value": 25},
                      {"name": "var2", "value": 100}]}
headers = {
  'Content-Type': 'application/json',
  'Authorization': 'Bearer ***YOUR*TOKEN*HERE***'
}

response = requests.post(url, headers=headers, json=payload)

print(response.text)
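
The response is a JSON object containing an "outputs" array of name/value pairs that mirrors the inputs. A minimal sketch for reading the scores (the output variable names depend on your model):

# Print each output variable returned by the score step
for item in response.json().get("outputs", []):
    print(item["name"], "=", item["value"])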

Docker Containers

Starting with SAS Viya 3.5, users can configure Docker publishing destinations and use them to execute models. Two types of destinations are available:

  • Amazon Web Services container
  • Private Docker container

There are several requirements to meet before deploying models to Docker destinations. For more information about how to format your score code for Open Source models, see Concepts: Open-Source Models.

Once your model is published, you can start the container and start scoring!

The model container exposes the following REST endpoints:

  • GET /: returns pong (a simple liveness check; see the example after this list)
  • POST /executions: takes a CSV file as input, executes the scoring, and returns the “id” of the process
  • GET /query/<id>: returns the results of the scoring process
  • GET /query/<id>/log: returns the log of the scoring process
  • GET /system/log: returns the logs of the container
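
A quick way to confirm that the container is up is to call the root endpoint. A minimal sketch (the host and port are assumptions that match the scoring snippet below):

import requests

# Liveness check against the container's root endpoint
print(requests.get('http://docker.example.com:33720/').text)   # expected: pong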

The code snippet below shows how to score a CSV file using the container:

import requests
import os

input_path = 'input_data.csv'           # input file
host = 'http://docker.example.com'      # docker host

headers = {'Accept': 'application/json'}
file_name = os.path.basename(input_path)

# execute the model on the input data
with open(input_path, 'rb') as f:
    files = {'file': (file_name, f, 'application/octet-stream')}
    response = requests.post(host + ':33720/executions', files=files, headers=headers)
resp_json = response.json()

# the execution id identifies the scoring process
test_id = resp_json['id']
result_file = test_id + '.csv'

# query the results
result_url = host + ':33720/query/' + result_file
r = requests.get(result_url, allow_redirects=True)

# save the scores to a local CSV file
scored_file = test_id + '.csv'
with open(scored_file, 'wb') as out:
    out.write(r.content)
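
If the scoring fails, the log endpoints listed above can help with troubleshooting:

# retrieve the execution log for this scoring process
log_url = host + ':33720/query/' + test_id + '/log'
print(requests.get(log_url).text)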

Conclusion

These code samples show how users can easily execute models in production. There is a destination optimized for each use case, whether it is scoring batch or transaction data. Docker containers give users more flexible environments for deploying Open Source models. The REST interfaces for the MAS and Docker destinations also allow users to integrate models into client applications.