{"id":2221,"date":"2020-09-30T15:05:01","date_gmt":"2020-09-30T15:05:01","guid":{"rendered":"https:\/\/data-science.gotoauthority.com\/2020\/09\/30\/machine-learning-model-deployment\/"},"modified":"2020-09-30T15:05:01","modified_gmt":"2020-09-30T15:05:01","slug":"machine-learning-model-deployment","status":"publish","type":"post","link":"https:\/\/wealthrevelation.com\/data-science\/2020\/09\/30\/machine-learning-model-deployment\/","title":{"rendered":"Machine Learning Model Deployment"},"content":{"rendered":"<div id=\"post-\">\n<p><b>By <a href=\"\" target=\"_blank\" rel=\"noopener noreferrer\">Asha Ganesh<\/a>, Data Scientist<\/b><\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/950\/1*I3cuMuGIRlNPbMK1zST5KA.png\" alt=\"Figure\" width=\"100%\"><br \/><span><\/p>\n<p>ML models serverless deployment<\/p>\n<p><\/span>\n<\/div>\n<p>\u00a0<\/p>\n<h3>What is serverless deployment<\/h3>\n<p>\u00a0<br \/>Serverless is the next step in Cloud Computing. This means that servers are simply hidden from the picture. In serverless computing, this separation of server and application is managed by using a platform. The responsibility of the platform or serverless provider is to manage all the needs and configurations for your application. These platforms manage the configuration of your server behind the scenes. This is how in serverless computing, one can simply focus on the application or code itself being built or deployed.<\/p>\n<p>Machine Learning Model Deployment is not exactly the same as software development. In ML models a constant stream of new data is needed to keep models working well. Models need to adjust in the real world because of various reasons like adding new categories, new levels and many other reasons. Deploying models is just the beginning, as many times models need to retrain and check their performance. 
Serverless deployment can therefore save the time and effort of redeploying the model every time it is retrained, which is cool!<\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/1011\/1*MiOcSJy84obS4DFcCtYzPw.png\" alt=\"Figure\" width=\"100%\"><br \/><span><\/p>\n<p>ML Workflow<\/p>\n<p><\/span>\n<\/div>\n<p>\u00a0<\/p>\n<p>Models often perform worse in production than in development, and the remedy usually lies in how they are deployed. Serverless deployment makes deploying ML models easy.<\/p>\n<p>\u00a0<\/p>\n<h3>Prerequisites to understand serverless deployment<\/h3>\n<p>\u00a0<\/p>\n<ul>\n<li>Basic understanding of cloud computing\n<\/li>\n<li>Basic understanding of cloud functions\n<\/li>\n<li>Machine learning\n<\/li>\n<\/ul>\n<p>\u00a0<\/p>\n<h3>Deployment models for prediction<\/h3>\n<p>\u00a0<br \/>We can deploy an ML model in 3 ways:<\/p>\n<ul>\n<li>Web hosting frameworks like Flask, Django, etc.\n<\/li>\n<li>Serverless compute: AWS Lambda, Google Cloud Functions, Azure Functions\n<\/li>\n<li>Cloud-platform-specific frameworks: AWS SageMaker, Google AI Platform, Azure Machine Learning\n<\/li>\n<\/ul>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/967\/1*kR1A4aPlNYm7gIWRfFgREg.png\" alt=\"Figure\" width=\"100%\"><br \/><span><\/p>\n<p>Deploy models in various ways<\/p>\n<p><\/span>\n<\/div>\n<p>\u00a0<\/p>\n<h3>Serverless deployment architecture overview<\/h3>\n<p>\u00a0<\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/497\/1*h7A3pZFIQ_AqDJtSSy7zWQ.png\" alt=\"Figure\" width=\"100%\"><br \/><span><\/p>\n<p>Image taken and modified from Google Cloud<\/p>\n<p><\/span>\n<\/div>\n<p>\u00a0<\/p>\n<p>Store the models in Google Cloud Storage buckets, then write Google Cloud Functions. 
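<p>As a rough sketch of this retrieval step, assuming a pickled scikit-learn model and the google-cloud-storage client (the bucket and blob names below are placeholders, not the repository's real ones):<\/p>

```python
import pickle


def deserialize_model(local_path):
    """Load a pickled model from a local file."""
    with open(local_path, "rb") as f:
        return pickle.load(f)


def load_model_from_gcs(bucket_name, blob_path, local_path="/tmp/model.pkl"):
    """Download model.pkl from a Cloud Storage bucket and deserialize it.

    In a Cloud Function only /tmp is writable, so the file is staged there.
    """
    # Imported lazily so the rest of the module works without the package.
    from google.cloud import storage  # pip install google-cloud-storage

    client = storage.Client()
    client.bucket(bucket_name).blob(blob_path).download_to_filename(local_path)
    return deserialize_model(local_path)
```

<p>A call such as load_model_from_gcs(\"my-models\", \"models\/logistic_regression_model\/model.pkl\") would then return a ready-to-use model object (names hypothetical).<\/p>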
A Python Cloud Function retrieves the model from the bucket, and an HTTP JSON request to that function returns predicted values for the given inputs.<\/p>\n<p>\u00a0<\/p>\n<h3><strong>Steps to start model deployment<\/strong><\/h3>\n<p>\u00a0<br \/><strong>1. About the data, code and models<\/strong><\/p>\n<p>We take the movie reviews dataset for sentiment analysis; see the solution\u00a0<a href=\"https:\/\/github.com\/Asha-ai\/ServerlessDeployment\/blob\/65037f323cd5d32e52d9bae90f271ed1a59a2f6d\/ServerlessDeployment.ipynb\" rel=\"noopener noreferrer\" target=\"_blank\"><strong>here<\/strong><\/a>\u00a0in my GitHub repository. The\u00a0<a href=\"https:\/\/github.com\/Asha-ai\/ServerlessDeployment\/tree\/65037f323cd5d32e52d9bae90f271ed1a59a2f6d\" rel=\"noopener noreferrer\" target=\"_blank\"><strong>data<\/strong><\/a>\u00a0and models are also available in the same repository.<\/p>\n<p><strong>2. Create a storage bucket<\/strong><\/p>\n<p>By executing the \u201c<strong>ServerlessDeployment.ipynb<\/strong>\u201d file you will get 3 ML models: DecisionClassifier, LinearSVC, and Logistic Regression.<\/p>\n<p>Click <strong>Browser<\/strong> under the Storage option to create a new bucket, as shown in the image:<\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/528\/1*8ETeoBVgQtHyaRqlI38vDg.png\" alt=\"Figure\" width=\"100%\"><br \/><span><\/p>\n<p>Create a storage bucket<\/p>\n<p><\/span>\n<\/div>\n<p>\u00a0<\/p>\n<p><strong>3. 
Upload the models to the bucket<\/strong><\/p>\n<p>In the new bucket, create a main folder and upload the 3 models into it by creating 3 subfolders, as shown.<\/p>\n<p>Here\u00a0<strong>models<\/strong>\u00a0is my main folder name and my subfolders are:<\/p>\n<ul>\n<li>\n<strong>decision_tree_model<\/strong>\n<\/li>\n<li>\n<strong>linear_svc_model<\/strong>\n<\/li>\n<li>\n<strong>logistic_regression_model<\/strong>\n<\/li>\n<\/ul>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/738\/1*z26B4MZMgVyuPSvL4CjE0Q.png\" alt=\"Figure\" width=\"100%\"><br \/><span><\/p>\n<p>Storing models in Google Storage in 3 folders<\/p>\n<p><\/span>\n<\/div>\n<p>\u00a0<\/p>\n<p><strong>4. Create the function<\/strong><\/p>\n<p>Then go to Google Cloud Functions and create a function; select HTTP as the trigger type and Python as the runtime (you can choose any supported language):<\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/399\/1*g1MbtYJecIdAN59LyCNKqg.png\" alt=\"Figure\" width=\"100%\"><br \/><span><\/p>\n<p>Create a Cloud Function<\/p>\n<p><\/span>\n<\/div>\n<p>\u00a0<\/p>\n<p><strong>5. Write the cloud function in the editor<\/strong><\/p>\n<p>Check the full cloud function in my repository. 
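<p>As a hedged, illustrative sketch of such a function (the exact code lives in the repository; the bucket name is a placeholder, the folder layout follows the subfolders created above, and the model is assumed to be a scikit-learn pipeline that accepts raw text):<\/p>

```python
import pickle

# Bucket name is an assumption for illustration; folders mirror the layout above.
BUCKET_NAME = "your-bucket-name"
MODEL_FOLDERS = {
    "DecisionClassifier": "models/decision_tree_model",
    "LinearSVC": "models/linear_svc_model",
    "LogisticRegression": "models/logistic_regression_model",
}


def pick_model_folder(model_name):
    """Fall back to Logistic Regression when no known model is requested."""
    return MODEL_FOLDERS.get(model_name, MODEL_FOLDERS["LogisticRegression"])


def classify_review(request):
    """HTTP Cloud Function entry point (Flask-style request object)."""
    if request.method == "GET":
        return "welcome to classifier"

    # POST: e.g. {"model": "LinearSVC", "instances": ["great movie!"]}
    data = request.get_json()
    blob_path = pick_model_folder(data.get("model")) + "/model.pkl"

    from google.cloud import storage  # needs google-cloud-storage

    # Stage the pickled model in /tmp, the only writable path on the VM.
    client = storage.Client()
    client.bucket(BUCKET_NAME).blob(blob_path).download_to_filename("/tmp/model.pkl")
    with open("/tmp/model.pkl", "rb") as f:
        model = pickle.load(f)

    instances = data["instances"]
    predictions = model.predict(instances)
    return {"text": instances, "pred_class": [str(p) for p in predictions]}
```

<p>A matching requirements.txt would list, for example, google-cloud-storage and scikit-learn (pinned versions are an assumption).<\/p>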
It imports the required libraries for loading models from the Google Cloud bucket and for handling the HTTP request.<\/p>\n<ul>\n<li>The GET method is used to test the URL response; the POST method carries the actual prediction request\n<\/li>\n<li>Delete the default template and paste in our code; <strong>pickle<\/strong>\u00a0is used for deserializing our model\n<\/li>\n<li>google.cloud \u2014 accesses our Cloud Storage bucket\n<\/li>\n<li>If the incoming request is\u00a0<strong>GET<\/strong>,\u00a0we simply return \u201cwelcome to classifier\u201d\n<\/li>\n<li>If the incoming request is\u00a0<strong>POST<\/strong>, we access the JSON data in the request body\n<\/li>\n<li>After reading the JSON, we instantiate the storage client object and access the models in the bucket; here we have 3 classification models in the bucket\n<\/li>\n<li>If the user specifies \u201cDecisionClassifier\u201d we load that model from its folder, and likewise for the other models\n<\/li>\n<li>If the user does not specify any model, the default model is Logistic Regression\n<\/li>\n<li>The blob variable contains a reference to the model.pkl file of the chosen model\n<\/li>\n<li>We download the .pkl file onto the machine where this cloud function is running. Every invocation might run on a different VM, and only the \/tmp folder on the VM is writable, which is why we save our model.pkl file there\n<\/li>\n<li>We deserialize the model by invoking pickle.load, read the prediction instances from the incoming request, and call model.predict on the prediction data\n<\/li>\n<li>The response sent back from the serverless function is the original text (the review we want to classify) together with its predicted class\n<\/li>\n<li>After main.py, write requirements.txt with the required libraries and versions\n<\/li>\n<\/ul>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/896\/1*eGoY1iDldLG8slQSA6_a_A.png\" alt=\"Figure\" width=\"100%\"><br \/><span><\/p>\n<p><\/span>\n<\/div>\n<p>\u00a0<\/p>\n<p><strong>6. 
Deploy the model<\/strong><\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/925\/1*G8lgPACJn0Ob-C0UcgZ3lQ.png\" alt=\"Figure\" width=\"100%\"><br \/><span><\/p>\n<p>A green tick represents successful model deployment<\/p>\n<p><\/span>\n<\/div>\n<p>\u00a0<\/p>\n<p><strong>7. Test the model<\/strong><\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/907\/1*kNlm_rxDs-x2BJwvHX9djw.png\" alt=\"Figure\" width=\"100%\"><br \/><span><\/p>\n<p>Give the model name and review(s) for testing<\/p>\n<p><\/span>\n<\/div>\n<p>\u00a0<\/p>\n<p>Test the function with the other models as well.<\/p>\n<p><img alt=\"Image for post\" class=\"aligncenter\" src=\"https:\/\/miro.medium.com\/max\/725\/1*u9A8ilkGkMGlvpprDnCiIA.png\" width=\"100%\"><\/p>\n<p>I will follow up with complete UI details for this model deployment.<\/p>\n<p>\u00a0<\/p>\n<h3>Code references<\/h3>\n<p>\u00a0<br \/>My GitHub repository:\u00a0<a href=\"https:\/\/github.com\/Asha-ai\/ServerlessDeployment\" rel=\"noopener noreferrer\" target=\"_blank\">https:\/\/github.com\/Asha-ai\/ServerlessDeployment<\/a><\/p>\n<p>Don\u2019t hesitate to give more &amp; more claps \ud83d\ude42<\/p>\n<p>\u00a0<br \/><b>Bio: <a href=\"https:\/\/medium.com\/@ashaicy99\" target=\"_blank\" rel=\"noopener noreferrer\">Asha Ganesh<\/a><\/b> is a data scientist.<\/p>\n<p><a href=\"https:\/\/medium.com\/@ashaicy99\/machine-learning-model-deployment-748e0c2437b8\" target=\"_blank\" rel=\"noopener noreferrer\">Original<\/a>. 
Reposted with permission.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/www.kdnuggets.com\/2020\/09\/machine-learning-model-deployment.html<\/p>\n","protected":false},"author":0,"featured_media":2222,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[2],"tags":[],"_links":{"self":[{"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/posts\/2221"}],"collection":[{"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/comments?post=2221"}],"version-history":[{"count":0,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/posts\/2221\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/media\/2222"}],"wp:attachment":[{"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/media?parent=2221"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/categories?post=2221"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/tags?post=2221"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}