{"id":888,"date":"2020-09-01T19:57:54","date_gmt":"2020-09-01T19:57:54","guid":{"rendered":"https:\/\/data-science.gotoauthority.com\/2020\/09\/01\/pycaret-2-1-is-here-whats-new\/"},"modified":"2020-09-01T19:57:54","modified_gmt":"2020-09-01T19:57:54","slug":"pycaret-2-1-is-here-whats-new","status":"publish","type":"post","link":"https:\/\/wealthrevelation.com\/data-science\/2020\/09\/01\/pycaret-2-1-is-here-whats-new\/","title":{"rendered":"PyCaret 2.1 is here: What\u2019s new?"},"content":{"rendered":"<div id=\"post-\">\n<p><b>By <a href=\"https:\/\/www.linkedin.com\/in\/profile-moez\/\" target=\"_blank\" rel=\"noopener noreferrer\">Moez Ali<\/a>, Founder &amp; Author of PyCaret<\/b><\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/414\/1*OYS6O-iLkoE88fBbd3IKcw.jpeg\" alt=\"Figure\" width=\"100%\"><br \/><span><\/p>\n<p><\/span>\n<\/div>\n<p>\u00a0<\/p>\n<p>We are excited to announce PyCaret 2.1 \u2014 update for the month of Aug 2020.<\/p>\n<p>PyCaret is an open-source,\u00a0<strong>low-code<\/strong>\u00a0machine learning library in Python that automates the machine learning workflow. It is an end-to-end machine learning and model management tool that speeds up the machine learning experiment cycle and makes you 10x more productive.<\/p>\n<p>In comparison with the other open-source machine learning libraries, PyCaret is an alternate low-code library that can be used to replace hundreds of lines of code with few words only. This makes experiments exponentially fast and efficient.<\/p>\n<p>If you haven\u2019t heard or used PyCaret before, please see our\u00a0<a href=\"https:\/\/towardsdatascience.com\/announcing-pycaret-2-0-39c11014540e\" rel=\"noopener noreferrer\" target=\"_blank\">previous announcement<\/a>\u00a0to get started quickly.<\/p>\n<p>\u00a0<\/p>\n<h3>Installing PyCaret<\/h3>\n<p>\u00a0<br \/>Installing PyCaret is very easy and takes only a few minutes. We strongly recommend using a virtual environment to avoid potential conflict with other libraries. See the following example code to create a\u00a0<strong><em>conda environment\u00a0<\/em><\/strong>and install pycaret within that conda environment:<\/p>\n<div>\n<pre><code><strong># create a conda environment <\/strong>\r\nconda create --name yourenvname python=3.6  <strong># activate environment <\/strong>\r\nconda activate yourenvname  <strong># install pycaret <\/strong>\r\npip install pycaret <strong># create notebook kernel linked with the conda environment \r\n<\/strong>python -m ipykernel install --user --name yourenvname --display-name \"display-name\"<\/code><\/pre>\n<\/div>\n<p>If you have PyCaret already installed, you can update it using pip:<\/p>\n<div>\n<pre><code>pip install --upgrade pycaret<\/code><\/pre>\n<\/div>\n<p>\u00a0<\/p>\n<h3><strong>PyCaret 2.1 Feature Summary<\/strong><\/h3>\n<p>\u00a0<\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/4286\/0*cf_p85-ytqAWXneJ\" alt=\"Figure\" width=\"100%\"><br \/><span><\/p>\n<p><\/span>\n<\/div>\n<p>\u00a0<\/p>\n<p>\u00a0<\/p>\n<h3>\ud83d\udc49\u00a0Hyperparameter Tuning on GPU<\/h3>\n<p>\u00a0<br \/>In PyCaret 2.0 we have announced GPU-enabled training for certain algorithms (XGBoost, LightGBM and Catboost). 
New in 2.1, you can also tune the hyperparameters of those models on GPU.<\/p>\n<div>\n<pre><code><strong># train xgboost using gpu<\/strong>\r\nxgboost = create_model('xgboost', tree_method = 'gpu_hist')\r\n\r\n<strong># tune xgboost<\/strong>\r\ntuned_xgboost = tune_model(xgboost)<\/code><\/pre>\n<\/div>\n<p>No additional parameter is needed inside the\u00a0<strong>tune_model\u00a0<\/strong>function, as it automatically inherits the tree_method from the xgboost instance created with the\u00a0<strong>create_model\u00a0<\/strong>function. If you are interested in a little comparison, here it is:<\/p>\n<blockquote>\n<p>\n<strong>100,000 rows with 88 features in a multiclass problem with 8 classes<\/strong>\n<\/p>\n<\/blockquote>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/818\/1*1lAya7O3sEad9-epPH1sUw.jpeg\" alt=\"Figure\" width=\"100%\">\n<p>XGBoost Training on GPU (using Google Colab)<\/p>\n<\/div>\n<p>\u00a0<\/p>\n<h3>\ud83d\udc49 Model Deployment<\/h3>\n<p>Since the first release of PyCaret in April 2020, you have been able to deploy trained models on AWS simply by calling\u00a0<strong>deploy_model\u00a0<\/strong>from your notebook. In this release, we have added support for deployment on GCP as well as Microsoft Azure.<\/p>\n<p>\u00a0<\/p>\n<h3><strong>Microsoft Azure<\/strong><\/h3>\n<p>To deploy a model on Microsoft Azure, the environment variable for the connection string must be set. The connection string can be obtained from the \u2018Access Keys\u2019 section of your storage account in Azure.<\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/1437\/1*XPH0ZtRmQkRxVHiqEMLaIw.png\" alt=\"Figure\" width=\"100%\">\n<p>https:\/\/portal.azure.com \u2014 Getting the connection string from the storage account<\/p>\n<\/div>\n<p>\u00a0<\/p>\n<p>Once you have copied the connection string, you can set it as an environment variable. See the example below:<\/p>\n<div>\n<pre><code><strong>import os<\/strong>\r\nos.environ['AZURE_STORAGE_CONNECTION_STRING'] = 'your-conn-string'\r\n\r\n<strong>from pycaret.classification import deploy_model<\/strong>\r\ndeploy_model(model = model, model_name = 'model-name', platform = 'azure', authentication = {'container' : 'container-name'})<\/code><\/pre>\n<\/div>\n<p>BOOM! That\u2019s it. With one line of code<strong>,\u00a0<\/strong>your entire machine learning pipeline is now shipped to a container in Microsoft Azure. You can access it using the\u00a0<strong>load_model<\/strong>\u00a0function.<\/p>\n<div>\n<pre><code><strong>import os<\/strong>\r\nos.environ['AZURE_STORAGE_CONNECTION_STRING'] = 'your-conn-string'\r\n\r\n<strong>from pycaret.classification import load_model<\/strong>\r\nloaded_model = load_model(model_name = 'model-name', platform = 'azure', authentication = {'container' : 'container-name'})\r\n\r\n<strong>from pycaret.classification import predict_model<\/strong>\r\npredictions = predict_model(loaded_model, data = new_dataframe)<\/code><\/pre>\n<\/div>\n<p>\u00a0<\/p>\n<h3>Google Cloud Platform<\/h3>\n<p>To deploy a model on Google Cloud Platform (GCP), you must first create a project, either using the command line or the GCP console. Once the project is created, you must create a service account and download the service account key as a JSON file, which is then used to set the GOOGLE_APPLICATION_CREDENTIALS environment variable.<\/p>
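<p>Before deploying, you can sanity-check that the key file resolves to valid credentials. A minimal sketch, assuming the\u00a0<em>google-auth<\/em>\u00a0package is installed (it ships with the Google Cloud client libraries; the file path below is a placeholder):<\/p>\n<div>\n<pre><code><strong>import os<\/strong>\r\n<strong># point GOOGLE_APPLICATION_CREDENTIALS at the downloaded key file<\/strong>\r\nos.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'c:\/path-to-json-file.json'\r\n\r\n<strong>import google.auth<\/strong>\r\n<strong># default() loads the key file and returns the credentials and their project<\/strong>\r\ncredentials, project = google.auth.default()\r\nprint(project)<\/code><\/pre>\n<\/div>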
<div>\n<img src=\"https:\/\/miro.medium.com\/max\/1438\/1*nN6uslyOixxmYpFcVel8Bw.png\" alt=\"Figure\" width=\"100%\">\n<p>Creating a new service account and downloading the JSON from the GCP Console<\/p>\n<\/div>\n<p>\u00a0<\/p>\n<p>To learn more about creating a service account, read the\u00a0<a href=\"https:\/\/cloud.google.com\/docs\/authentication\/production\" rel=\"noopener noreferrer\" target=\"_blank\">official documentation<\/a>. Once you have created a service account and downloaded the JSON file from your GCP console, you are ready for deployment.<\/p>\n<div>\n<pre><code><strong>import os<\/strong>\r\nos.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'c:\/path-to-json-file.json'\r\n\r\n<strong>from pycaret.classification import deploy_model<\/strong>\r\ndeploy_model(model = model, model_name = 'model-name', platform = 'gcp', authentication = {'project' : 'project-name', 'bucket' : 'bucket-name'})<\/code><\/pre>\n<\/div>\n<p>Model uploaded. You can now access the model from the GCP bucket using the\u00a0<strong>load_model<\/strong>\u00a0function.<\/p>\n<div>\n<pre><code><strong>import os<\/strong>\r\nos.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'c:\/path-to-json-file.json'\r\n\r\n<strong>from pycaret.classification import load_model<\/strong>\r\nloaded_model = load_model(model_name = 'model-name', platform = 'gcp', authentication = {'project' : 'project-name', 'bucket' : 'bucket-name'})\r\n\r\n<strong>from pycaret.classification import predict_model<\/strong>\r\npredictions = predict_model(loaded_model, data = new_dataframe)<\/code><\/pre>\n<\/div>\n<p>\u00a0<\/p>\n<h3>\ud83d\udc49 MLflow Deployment<\/h3>\n<p>In addition to PyCaret\u2019s native deployment functionalities, you can now also use all of MLflow\u2019s deployment capabilities. To use them, you must log your experiment using the\u00a0<strong>log_experiment<\/strong>\u00a0parameter in the\u00a0<strong>setup\u00a0<\/strong>function.<\/p>\n<div>\n<pre><code><strong># init setup<\/strong>\r\nexp1 = setup(data, target = 'target-name', log_experiment = True, experiment_name = 'exp-name')\r\n\r\n<strong># create xgboost model<\/strong>\r\nxgboost = create_model('xgboost')\r\n\r\n<strong># ... rest of your script ...<\/strong>\r\n\r\n<strong># start mlflow server on localhost:5000<\/strong>\r\n!mlflow ui<\/code><\/pre>\n<\/div>\n<p>Now open\u00a0<a href=\"http:\/\/localhost:5000\/\" rel=\"noopener noreferrer\" target=\"_blank\">http:\/\/localhost:5000<\/a>\u00a0in your favorite browser.<\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/1439\/1*y0nMOMuDeMS1sdFepDngKw.png\" alt=\"Figure\" width=\"100%\">\n<\/div>\n<p>\u00a0<\/p>\n<p>You can see the details of a run by clicking the\u00a0<strong>\u201cStart Time\u201d<\/strong>\u00a0entry shown to the left of the\u00a0<strong>\u201cRun Name\u201d<\/strong>. Inside you will find all the hyperparameters and scoring metrics of the trained model, and if you scroll down a little, all the artifacts are shown as well (see below).<\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/1311\/1*NS7ifCnHHKRpLHCWeYhNZg.png\" alt=\"Figure\" width=\"100%\">\n<p>MLflow Artifacts<\/p>\n<\/div>\n<p>\u00a0<\/p>\n<p>The trained model, along with other metadata files, is stored under the directory \u201c\/model\u201d.<\/p>
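<p>For reference, the logged model directory typically looks something like the sketch below for a scikit-learn-flavored pipeline; the exact file list varies by flavor and MLflow version:<\/p>\n<div>\n<pre><code>model\/\r\n  MLmodel       <strong># model metadata: flavors, and how to load the model<\/strong>\r\n  conda.yaml    <strong># environment specification for reproducing the run<\/strong>\r\n  model.pkl     <strong># the serialized model pipeline itself<\/strong><\/code><\/pre>\n<\/div>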
<p>MLflow follows a standard format for packaging machine learning models that can be used in a variety of downstream tools \u2014 for example, real-time serving through a REST API or batch inference on Apache Spark. If you want to serve this model locally, you can do so using the MLflow command line.<\/p>\n<div>\n<pre><code>mlflow models serve -m local-path-to-model<\/code><\/pre>\n<\/div>\n<p>You can then send requests to the model using curl to get predictions.<\/p>\n<div>\n<pre><code>curl http:\/\/127.0.0.1:5000\/invocations -H 'Content-Type: application\/json' -d '{\r\n    \"columns\": [\"age\", \"sex\", \"bmi\", \"children\", \"smoker\", \"region\"],\r\n    \"data\": [[19, \"female\", 27.9, 0, \"yes\", \"southwest\"]]\r\n}'<\/code><\/pre>\n<\/div>\n<p><em>(Note: this functionality of MLflow is not yet supported on Windows.)<\/em><\/p>\n<p>MLflow also provides integrations with AWS SageMaker and Azure Machine Learning Service. You can train models locally in a Docker container with a SageMaker-compatible environment, or remotely on SageMaker. To deploy remotely to SageMaker, you need to set up your environment and AWS user account.<\/p>\n<p><strong>Example workflow using the MLflow CLI<\/strong><\/p>\n<div>\n<pre><code>mlflow sagemaker build-and-push-container\r\nmlflow sagemaker run-local -m &lt;path-to-model&gt;\r\nmlflow sagemaker deploy &lt;parameters&gt;<\/code><\/pre>\n<\/div>\n<p>To learn more about all the deployment capabilities of MLflow,\u00a0<a href=\"https:\/\/www.mlflow.org\/docs\/latest\/models.html#\" rel=\"noopener noreferrer\" target=\"_blank\">click here<\/a>.<\/p>\n<p>\u00a0<\/p>\n<h3>\ud83d\udc49 MLflow Model Registry<\/h3>\n<p>The MLflow Model Registry component is a centralized model store, set of APIs, and UI for collaboratively managing the full lifecycle of an MLflow Model. It provides model lineage (which MLflow experiment and run produced the model), model versioning, stage transitions (for example from staging to production), and annotations.<\/p>\n<p>If you are running your own MLflow server, you must use a database-backed backend store in order to access the model registry.\u00a0<a href=\"https:\/\/www.mlflow.org\/docs\/latest\/tracking.html#backend-stores\" rel=\"noopener noreferrer\" target=\"_blank\">Click here<\/a>\u00a0for more information. However, if you are using\u00a0<a href=\"https:\/\/databricks.com\/\" rel=\"noopener noreferrer\" target=\"_blank\">Databricks<\/a>\u00a0or any of the managed Databricks services such as\u00a0<a href=\"https:\/\/azure.microsoft.com\/en-ca\/services\/databricks\/\" rel=\"noopener noreferrer\" target=\"_blank\">Azure Databricks<\/a>, you don\u2019t need to set up anything. It comes with all the bells and whistles you would ever need.<\/p>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/768\/1*XlT58YrFuszGb-1PIXvKZw.gif\" alt=\"Figure\" width=\"100%\">\n<\/div>
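<p>If you are hosting your own tracking server, a minimal sketch of registering a model might look like this (the SQLite URI, run ID, and model name below are placeholder assumptions, not part of PyCaret):<\/p>\n<div>\n<pre><code><strong>import mlflow<\/strong>\r\n\r\n<strong># the registry requires a database-backed store<\/strong>\r\nmlflow.set_tracking_uri('sqlite:\/\/\/mlflow.db')\r\n\r\n<strong># register the model artifact logged by a previous run<\/strong>\r\nresult = mlflow.register_model('runs:\/&lt;run-id&gt;\/model', 'pycaret-model')<\/code><\/pre>\n<\/div>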
<p>\u00a0<\/p>\n<h3>\ud83d\udc49 High-Resolution Plotting<\/h3>\n<p>This is not ground-breaking, but it is a very useful addition for people using PyCaret for research and publications. The\u00a0<strong>plot_model<\/strong>\u00a0function now has an additional parameter called \u201cscale\u201d through which you can control the resolution and generate high-quality plots for your publications.<\/p>\n<div>\n<pre><code><strong># create linear regression model<\/strong>\r\nlr = create_model('lr')\r\n\r\n<strong># plot in high-quality resolution<\/strong>\r\nplot_model(lr, scale = 5) # default is 1<\/code><\/pre>\n<\/div>\n<div>\n<img src=\"https:\/\/miro.medium.com\/max\/1296\/1*O413K8IUvgYTgD3aTtcYjw.png\" alt=\"Figure\" width=\"100%\">\n<p>High-Resolution Residual Plot from PyCaret<\/p>\n<\/div>\n<p>\u00a0<\/p>\n<h3>\ud83d\udc49 User-Defined Loss Function<\/h3>\n<p>This has been one of the most requested features since the release of the first version. Allowing the hyperparameters of a model to be tuned with a custom \/ user-defined function gives data scientists immense flexibility. It is now possible to use user-defined custom loss functions through the\u00a0<strong>custom_scorer\u00a0<\/strong>parameter in the\u00a0<strong>tune_model\u00a0<\/strong>function.<\/p>\n<div>\n<pre><code><strong># define the loss function<\/strong>\r\ndef my_function(y_true, y_pred):\r\n    ...\r\n\r\n<strong># create scorer using sklearn<\/strong>\r\nfrom sklearn.metrics import make_scorer\r\nmy_own_scorer = make_scorer(my_function, needs_proba=True)\r\n\r\n<strong># train catboost model<\/strong>\r\ncatboost = create_model('catboost')\r\n\r\n<strong># tune catboost using custom scorer<\/strong>\r\ntuned_catboost = tune_model(catboost, custom_scorer = my_own_scorer)<\/code><\/pre>\n<\/div>
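<p>As a concrete illustration, here is a minimal sketch of a cost-sensitive loss you could plug in as\u00a0<strong>my_function<\/strong>, assuming a binary target where false negatives are five times as costly as false positives (the weighting and the 0.5 threshold are illustrative assumptions, not part of PyCaret):<\/p>\n<div>\n<pre><code><strong>import numpy as np<\/strong>\r\n\r\n<strong># with needs_proba=True, y_pred holds the predicted probability of the positive class<\/strong>\r\ndef my_function(y_true, y_pred):\r\n    fn = np.sum((y_true == 1) &amp; (y_pred &lt; 0.5))   # false negatives\r\n    fp = np.sum((y_true == 0) &amp; (y_pred &gt;= 0.5))  # false positives\r\n    return -(5 * fn + fp)  # negated cost, since the scorer maximizes<\/code><\/pre>\n<\/div>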
<p>\u00a0<\/p>\n<h3>\ud83d\udc49 Feature Selection<\/h3>\n<p>Feature selection is a fundamental step in machine learning. You have a set of features and want to keep only the relevant ones, discarding the others. The aim is to simplify the problem by removing features that would only introduce unnecessary noise.<\/p>\n<p>In PyCaret 2.1 we have introduced an implementation of the Boruta algorithm in Python (originally implemented in R). Boruta is a clever algorithm dating back to 2010, designed to perform feature selection on a dataset automatically. To use it, simply pass the\u00a0<strong>feature_selection_method\u00a0<\/strong>parameter within the\u00a0<strong>setup<\/strong>\u00a0function.<\/p>\n<div>\n<pre><code>exp1 = setup(data, target = 'target-var', feature_selection = True, feature_selection_method = 'boruta')<\/code><\/pre>\n<\/div>\n<p>To read more about the Boruta algorithm,\u00a0<a href=\"https:\/\/towardsdatascience.com\/boruta-explained-the-way-i-wish-someone-explained-it-to-me-4489d70e154a\" rel=\"noopener noreferrer\" target=\"_blank\">click here.<\/a><\/p>\n<p>\u00a0<\/p>\n<h3>\ud83d\udc49 Other Changes<\/h3>\n<p>\u00a0<\/p>\n<ul>\n<li>The\u00a0<code>blacklist<\/code>\u00a0and\u00a0<code>whitelist<\/code>\u00a0parameters of the\u00a0<code>compare_models<\/code>\u00a0function have been renamed to\u00a0<code>exclude<\/code>\u00a0and\u00a0<code>include<\/code>, with no change in functionality.\n<\/li>\n<li>To set an upper limit on training time in the\u00a0<code>compare_models<\/code>\u00a0function, a new parameter\u00a0<code>budget_time<\/code>\u00a0has been added (see the sketch after this list).\n<\/li>\n<li>PyCaret is now compatible with the Pandas categorical datatype. Internally, such columns are converted to object and treated the same way as\u00a0<code>object<\/code>\u00a0or\u00a0<code>bool<\/code>\u00a0columns.\n<\/li>\n<li>Numeric imputation: a new method\u00a0<code>zero<\/code>\u00a0has been added to the\u00a0<code>numeric_imputation<\/code>\u00a0parameter in the\u00a0<code>setup<\/code>\u00a0function. When the method is set to\u00a0<code>zero<\/code>, missing values are replaced with the constant 0.\n<\/li>\n<li>To make the output more human-readable, the\u00a0<code>Label<\/code>\u00a0column returned by the\u00a0<code>predict_model<\/code>\u00a0function now contains the original value instead of the encoded value.\n<\/li>\n<\/ul>
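<p>For instance, a quick sketch of how two of these new parameters fit together; the model codes, time budget, and column name below are illustrative assumptions:<\/p>\n<div>\n<pre><code><strong># impute missing numeric values with the constant 0<\/strong>\r\nexp1 = setup(data, target = 'target-name', numeric_imputation = 'zero')\r\n\r\n<strong># compare all models except the SVM variants, capped at 10 minutes<\/strong>\r\nbest_model = compare_models(exclude = ['svm', 'rbfsvm'], budget_time = 10)<\/code><\/pre>\n<\/div>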
<p>To learn more about all the updates in PyCaret 2.1, please see the\u00a0<a href=\"https:\/\/github.com\/pycaret\/pycaret\/releases\/tag\/2.1\" rel=\"noopener noreferrer\" target=\"_blank\">release notes<\/a>.<br \/>There is no limit to what you can achieve using this lightweight workflow automation library in Python. If you find it useful, please do not forget to give us a \u2b50\ufe0f on our\u00a0<a href=\"https:\/\/www.github.com\/pycaret\/pycaret\/\" rel=\"noopener noreferrer\" target=\"_blank\">GitHub repo<\/a>.<\/p>\n<p>To hear more about PyCaret, follow us on\u00a0<a href=\"https:\/\/www.linkedin.com\/company\/pycaret\/\" rel=\"noopener noreferrer\" target=\"_blank\">LinkedIn<\/a>\u00a0and\u00a0<a href=\"https:\/\/www.youtube.com\/channel\/UCxA1YTYJ9BEeo50lxyI_B3g\" rel=\"noopener noreferrer\" target=\"_blank\">YouTube<\/a>.<\/p>\n<p>\u00a0<\/p>\n<h3>Important Links<\/h3>\n<p><a href=\"https:\/\/www.pycaret.org\/guide\" rel=\"noopener noreferrer\" target=\"_blank\">User Guide<\/a><br \/><a href=\"https:\/\/pycaret.readthedocs.io\/en\/latest\/\" rel=\"noopener noreferrer\" target=\"_blank\">Documentation<\/a><br \/><a href=\"https:\/\/github.com\/pycaret\/pycaret\/tree\/master\/tutorials\" rel=\"noopener noreferrer\" target=\"_blank\">Official Tutorials<\/a><br \/><a href=\"https:\/\/github.com\/pycaret\/pycaret\/tree\/master\/examples\" rel=\"noopener noreferrer\" target=\"_blank\">Example Notebooks<\/a><br \/><a href=\"https:\/\/github.com\/pycaret\/pycaret\/tree\/master\/resources\" rel=\"noopener noreferrer\" target=\"_blank\">Other Resources<\/a><\/p>\n<p>\u00a0<\/p>\n<h3>Want to learn about a specific module?<\/h3>\n<p>Click on the links below to see the documentation and working examples.<br \/><a href=\"https:\/\/www.pycaret.org\/classification\" rel=\"noopener noreferrer\" target=\"_blank\">Classification<\/a><br \/><a href=\"https:\/\/www.pycaret.org\/regression\" rel=\"noopener noreferrer\" target=\"_blank\">Regression<\/a><br \/><a href=\"https:\/\/www.pycaret.org\/clustering\" rel=\"noopener noreferrer\" target=\"_blank\">Clustering<\/a><br \/><a href=\"https:\/\/www.pycaret.org\/anomaly-detection\" rel=\"noopener noreferrer\" target=\"_blank\">Anomaly Detection<\/a><br \/><a href=\"https:\/\/www.pycaret.org\/nlp\" rel=\"noopener noreferrer\" target=\"_blank\">Natural Language Processing<\/a><br \/><a href=\"https:\/\/www.pycaret.org\/association-rules\" rel=\"noopener noreferrer\" target=\"_blank\">Association Rule Mining<\/a><\/p>\n<p><b>Bio: <a href=\"https:\/\/www.linkedin.com\/in\/profile-moez\/\" target=\"_blank\" rel=\"noopener noreferrer\">Moez Ali<\/a><\/b> is a Data Scientist and the Founder &amp; Author of PyCaret.<\/p>\n<p><a href=\"https:\/\/towardsdatascience.com\/pycaret-2-1-is-here-whats-new-4aae6a7f636a\" target=\"_blank\" rel=\"noopener noreferrer\">Original<\/a>. Reposted with permission.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/www.kdnuggets.com\/2020\/09\/pycaret-21-new.html<\/p>\n","protected":false},"author":0,"featured_media":889,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[2],"tags":[],"_links":{"self":[{"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/posts\/888"}],"collection":[{"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/comments?post=888"}],"version-history":[{"count":0,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/posts\/888\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/media\/889"}],"wp:attachment":[{"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/media?parent=888"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/categories?post=888"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/tags?post=888"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}