{"id":8079,"date":"2021-01-11T17:31:27","date_gmt":"2021-01-11T17:31:27","guid":{"rendered":"https:\/\/wealthrevelation.com\/data-science\/2021\/01\/11\/pycaret-2-2-efficient-pipelines-for-model-development\/"},"modified":"2021-01-11T17:31:27","modified_gmt":"2021-01-11T17:31:27","slug":"pycaret-2-2-efficient-pipelines-for-model-development","status":"publish","type":"post","link":"https:\/\/wealthrevelation.com\/data-science\/2021\/01\/11\/pycaret-2-2-efficient-pipelines-for-model-development\/","title":{"rendered":"PyCaret 2.2: Efficient Pipelines for Model Development"},"content":{"rendered":"<div>\n<p>Data science is an exciting field, but it can be intimidating to get started, especially for those new to coding.\u00a0 Even for experienced developers and data scientists, the process of developing a model could involve stringing together many steps from many packages, in ways that might not be as elegant or efficient as one might like.\u00a0 The creator of the <a href=\"http:\/\/topepo.github.io\/caret\/index.html\">Caret<\/a> library in R (\u201cshort for <strong>C<\/strong>lassification <strong>A<\/strong>nd <strong>RE<\/strong>gression <strong>T<\/strong>raining\u201d) was a software engineer named Max Kuhnwho sought to improve the situation by creating <a href=\"https:\/\/www.r-project.org\/conferences\/useR-2010\/slides\/Kuhn.pdf\">a more efficient, \u201cstreamlined\u201d process for developing models<\/a>.\u00a0 Eventually, data scientist <a href=\"http:\/\/philipmgoddard.com\/Python\/pycaret\">Philip Goddard<\/a> switched from R to Python and, to bring the \u201csmooth and intuitive\u201d workflows of Caret with him, he created <a href=\"https:\/\/pycaret.org\/\">PyCaret<\/a>.<\/p>\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" src=\"https:\/\/blog.dominodatalab.com\/wp-content\/uploads\/2021\/01\/image.png\" alt=\"\" class=\"wp-image-7355\" width=\"580\" height=\"296\"><figcaption>Image from<a href=\"http:\/\/github.com\/pycaret\" data-type=\"URL\" data-id=\"github.com\/pycaret\"> github.com\/pycaret<\/a><\/figcaption><\/figure>\n<p>PyCaret is a convenient entree into machine learning and a productivity tool for experienced practitioners.\u00a0 It gives scientists and analysts a simple, concise, low-code interface into many of the most popular and powerful machine learning libraries in the data science ecosystem, making it easy to start exploring new techniques or datasets.\u00a0 The clear, concise code that PyCaret users generate is also easy for collaborators and teammates to read and adapt, whether these colleagues are new to the field or experienced data scientists.\u00a0<\/p>\n<p><strong>Building Blocks<\/strong><\/p>\n<p>Goddard initially built out PyCaret as a package that wrapped many useful Python libraries and common tasks into concise repeatable components, making it easy to construct pipelines with the minimal number of statements.\u00a0 Today, PyCaret still utilizes many modules that will be familiar to Pythonistas: Pandas and Numpy for data wrangling, Matplotlib, Plotly and Seaborn for visualization, scikit-learn and XGBoost for modeling, Gensim, Spacy and NLTK for natural language processing, among others.\u00a0<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"624\" height=\"371\" src=\"https:\/\/blog.dominodatalab.com\/wp-content\/uploads\/2021\/01\/image-1.png\" alt=\"\" class=\"wp-image-7356\"><figcaption>Image from <a 
href=\"http:\/\/pycaret.org\">pycaret.org<\/a><\/figcaption><\/figure>\n<p><strong>Building a Pipeline<\/strong><\/p>\n<p>While the internals and usage have changed considerably from the first version to PyCaret 2.2, the experience is still rooted in the same goal: simple efficiency for the whole model development lifecycle.\u00a0 This means that you can utilize PyCaret to go from raw data through training, tuning, interpretability analysis, to model selection and experiment logging, all with just a few lines of code.\u00a0<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"641\" height=\"323\" src=\"https:\/\/blog.dominodatalab.com\/wp-content\/uploads\/2021\/01\/image-2.png\" alt=\"\" class=\"wp-image-7357\"><figcaption>Training and comparing models in just a few lines of code<\/figcaption><\/figure>\n<p>Let\u2019s walk through each step of building a classification pipeline in PyCaret.\u00a0<\/p>\n<p><strong>0.\u00a0\u00a0 Installation<\/strong><\/p>\n<p>You can quickly try out PyCaret in the pre-configured example project here: <a href=\"https:\/\/try.dominodatalab.com\/u\/katie_shakman\/PyCaret\/overview\">PyCaret Project on Domino<\/a>.\u00a0 If you\u2019d like to install it in another Domino compute environment or using a Docker image elsewhere, you can add the following RUN statement to add PyCaret 2.2.2 (and dependencies required for this walk-through):<\/p>\n<pre class=\"brush: python; gutter: false; title: ; notranslate\" title=\"\">\nRUN sudo apt-get purge python-numpy -y \n&amp;amp;&amp;amp; sudo apt-get autoremove --purge python-numpy -y \n&amp;amp;&amp;amp; sudo pip uninstall numpy -y \n&amp;amp;&amp;amp; sudo pip install numpy==1.17 &amp;amp;&amp;amp; sudo pip install pycaret==2.2.2 &amp;amp;&amp;amp; sudo pip install shap==0.36.0\n<\/pre>\n<p>Or you can install with pip (though you may want to do this in a virtual environment if you\u2019re not using Docker): <strong><code>pip install pycaret<\/code><\/strong><\/p>\n<p>For this example I started with a Domino Analytics Distribution base environment.\u00a0 Depending on your starting environment you may also need to install Pandas and some additional dependencies that ship with Domino Analytics Distributions.\u00a0<\/p>\n<p>You can verify you have everything installed by importing packages as follows:<\/p>\n<pre class=\"brush: python; gutter: false; title: ; notranslate\" title=\"\">\nimport pycaret\nprint('Using PyCaret Version', pycaret.__version__)\nprint('Path to PyCaret: ', pycaret.__file__)\nimport os\nimport pandas as pd\nfrom pycaret.classification import *\nfrom pycaret import datasets\n<\/pre>\n<p><strong>Accessing Data<\/strong><\/p>\n<p>There are two ways to register your data into PyCaret: via the repository or a Pandas dataframe.\u00a0 Let\u2019s take a look at each method.<\/p>\n<p><strong>Loading a Dataframe with Pandas<\/strong><\/p>\n<p>The first way to get data into PyCaret is simply to load up a Pandas dataframe and then pass it to PyCaret.\u00a0<\/p>\n<pre class=\"brush: python; gutter: false; title: ; notranslate\" title=\"\">\ndata_path = os.path.join(os.environ['DOMINO_WORKING_DIR'], 'mushrooms.csv')\nprint('Data path: ', data_path)\ndata = pd.read_csv(data_path)\ndata.head()\n<\/pre>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"624\" height=\"127\" src=\"https:\/\/blog.dominodatalab.com\/wp-content\/uploads\/2021\/01\/image-3.png\" alt=\"\" class=\"wp-image-7359\"><\/figure>\n<p><strong>Using the Data Repository<\/strong><\/p>\n<p>The second 
**Experiment Setup**

Many often-tedious preprocessing steps are taken care of automatically in PyCaret, which standardizes and conveniently packages the fundamental data preparation steps into repeatable, time-saving workflows. Users can automate cleaning (e.g., handling missing values with one of several available imputation methods), splitting the data into train and test sets, and some aspects of feature engineering and training. While many of the objects created in this process aren't explicitly shown to the user (such as the train and test sets, or the label vectors), they are accessible if needed or desired by more experienced practitioners.

```python
clf1 = setup(data,
             target = target_variable_name, # Use your target variable.
             session_id=123,
             log_experiment=True,
             experiment_name='experiment1', # Use any experiment name.
             silent=True # Runs the command without user input.
            )
```
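For example, one way to reach those internal objects is PyCaret's get_config helper. A quick sketch, assuming the setup() call above has already run:

```python
from pycaret.classification import get_config

# Retrieve the transformed training split that setup() created behind the scenes.
X_train = get_config('X_train')
y_train = get_config('y_train')
print(X_train.shape, y_train.shape)
```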
**Compare Baseline Models**

Here is where we begin to see the full power of PyCaret. In a single line of code, we can train and compare baseline versions of all available models on our dataset:

```python
best_model = compare_models()
```

This trains a baseline version of each available model type, yields a detailed comparison of metrics across the trained models, and highlights the best result for each metric.

Note that we did not have to do any data preparation by hand: we just needed to make the data available as a CSV and run the setup function. Behind the scenes of those two steps, the data was passed into PyCaret and transformed as needed to train and evaluate the available models. To see which models PyCaret knows about, we can run

```python
models()
```

which returns a dataframe of all available models, their proper names, the reference class each is drawn from (e.g. sklearn.linear_model._logistic.LogisticRegression), and whether Turbo is supported (a mode that limits model training time, which may be desirable for rapid comparisons).

**Train and Tune Specific Models**

From **compare_models**, we could easily see the best baseline model for each metric and select it for further investigation. For example, if we were looking for the model with the highest AUC above, we would continue with the random forest. We can then build and fine-tune our model using the **create_model** and **tune_model** functions.

```python
rf = create_model('rf', fold = 5)
```

```python
tuned_rf = tune_model(rf)
```
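Both functions also take optional arguments worth knowing about. As an illustrative sketch (the metric and iteration count below are arbitrary choices, not defaults), compare_models can rank by a chosen metric and return the top few models, and tune_model can optimize a specific metric over a larger random search:

```python
# Rank all baseline models by AUC and keep the top three instead of a single winner.
top3 = compare_models(sort = 'AUC', n_select = 3)

# Fine-tune the best of them for AUC over 50 random grid iterations.
tuned_best = tune_model(top3[0], optimize = 'AUC', n_iter = 50)
```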
**Combine Models (Optional)**

We can combine our trained models in various ways. First, we can create ensemble models with methods such as bagging (bootstrap aggregating) and boosting; both are invoked with the **ensemble_model** function. We can further apply blending and stacking methods to combine diverse models, or estimators: a list of estimators can be passed to **blend_models** or **stack_models**. If desired, one could create ensemble models and combine them via blending or stacking, all in a single line of code. For clarity, we'll show each of the four methods sequentially in its own cell, which also lets us see the default output from PyCaret when each method is used.

Creating a bagged decision tree ensemble model:

```python
dt = create_model('dt') # Train the base decision tree first.
bagged_dt = ensemble_model(dt)
```

Creating a boosted decision tree ensemble model:

```python
boosted_dt = ensemble_model(dt, method = 'Boosting')
```

Blending estimators:

```python
blender = blend_models(estimator_list = [boosted_dt, bagged_dt, tuned_rf], method = 'soft')
```

Stacking bagged, boosted, and tuned estimators:

```python
stacker = stack_models(estimator_list = [boosted_dt, bagged_dt, tuned_rf], meta_model = rf)
```

**AutoML (Optional)**

Quick and painless tuning for a particular metric can be accomplished using the AutoML feature.

```python
# Select the best model based on the chosen metric.
best = automl(optimize = 'AUC')
best
```

While AutoML techniques generally reduce human oversight of the model selection process, which may not be ideal or appropriate in many contexts, they can be a useful tool for quickly identifying the highest-performing option for a particular purpose.

**Analyze Models with Plots**

Once the preferred model has been selected, whatever the method, its performance can be visualized with the built-in plot options. For example, you can simply call **plot_model** on a random forest model to return overlaid ROC curves for each class:

```python
plot_model(best)
```
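Beyond the default ROC plot, plot_model accepts a plot argument that selects other diagnostics. Two illustrative calls (plot names as documented for PyCaret 2.2):

```python
# Confusion matrix for the selected model on the hold-out set.
plot_model(best, plot = 'confusion_matrix')

# Feature importance for the selected model.
plot_model(best, plot = 'feature')
```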
src=\"https:\/\/blog.dominodatalab.com\/wp-content\/uploads\/2021\/01\/image-15.png\" alt=\"\" class=\"wp-image-7371\"><\/figure>\n<p><strong>Interpret Models with SHAP ( For compatible model types )<\/strong><\/p>\n<p>Increasingly, having a well-performing model is not enough \u2014 in many industries and applications, the model must also be explainable.\u00a0 Our Chief Data Scientist Josh Poduska has written fantastic overviews of SHAP and other explainability tools: take a look at <a href=\"https:\/\/blog.dominodatalab.com\/shap-lime-python-libraries-part-1-great-explainers-pros-cons\/\">SHAP and LIME Python Libraries: Part 1 \u2013 Great Explainers, with Pros and Cons to Both<\/a> and <a href=\"https:\/\/blog.dominodatalab.com\/shap-lime-python-libraries-part-2-using-shap-lime\/\">SHAP and LIME Python Libraries: Part 2 \u2013 Using SHAP and LIME<\/a>.\u00a0 PyCaret provides seamless integration with SHAP so you can easily add interpretation plots to your model analysis.\u00a0\u00a0<\/p>\n<pre class=\"brush: python; gutter: false; title: ; notranslate\" title=\"\">\ninterpret_model(best)\n<\/pre>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"585\" height=\"422\" src=\"https:\/\/blog.dominodatalab.com\/wp-content\/uploads\/2021\/01\/image-16.png\" alt=\"\" class=\"wp-image-7372\"><\/figure>\n<pre class=\"brush: python; gutter: false; title: ; notranslate\" title=\"\">\ninterpret_model(best, plot = 'correlation')\n<\/pre>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"501\" height=\"302\" src=\"https:\/\/blog.dominodatalab.com\/wp-content\/uploads\/2021\/01\/image-17.png\" alt=\"\" class=\"wp-image-7373\"><\/figure>\n<pre class=\"brush: python; gutter: false; title: ; notranslate\" title=\"\">\ninterpret_model(best, plot = 'reason', observation = 12)\n<\/pre>\n<p><strong>Predict<\/strong><\/p>\n<p>As we\u2019ve come to expect by now, generating predictions on our held-out test data is a cinch:<\/p>\n<pre class=\"brush: python; gutter: false; title: ; notranslate\" title=\"\">\npredict_model(best)\n<\/pre>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"638\" height=\"314\" src=\"https:\/\/blog.dominodatalab.com\/wp-content\/uploads\/2021\/01\/image-18.png\" alt=\"\" class=\"wp-image-7374\"><\/figure>\n<p><strong>Save and Load Model<\/strong><\/p>\n<p>Once we\u2019re satisfied with our selected model, we can easily save it:<\/p>\n<pre class=\"brush: python; gutter: false; title: ; notranslate\" title=\"\">\nsave_model(best, model_name='best-model')\n<\/pre>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"579\" height=\"289\" src=\"https:\/\/blog.dominodatalab.com\/wp-content\/uploads\/2021\/01\/image-19.png\" alt=\"\" class=\"wp-image-7375\"><\/figure>\n<p>And finally we can load our saved model for use:<\/p>\n<pre class=\"brush: python; gutter: false; title: ; notranslate\" title=\"\">\nloaded_bestmodel = load_model('best-model')\nprint(loaded_bestmodel)\n<\/pre>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"609\" height=\"371\" src=\"https:\/\/blog.dominodatalab.com\/wp-content\/uploads\/2021\/01\/image-20.png\" alt=\"\" class=\"wp-image-7376\"><\/figure>\n<p>If you\u2019d like to see how we can put all of this together into a reproducible and shareable Domino project, please take a look at the reference project below.\u00a0 It includes everything discussed above, as well as a few other examples and resources. 
**Save and Load Model**

Once we're satisfied with our selected model, we can easily save it:

```python
save_model(best, model_name='best-model')
```

And finally, we can load our saved model for use:

```python
loaded_bestmodel = load_model('best-model')
print(loaded_bestmodel)
```

If you'd like to see how all of this comes together in a reproducible and shareable Domino project, please take a look at the reference project below. It includes everything discussed above, as well as a few other examples and resources. Anyone can browse the project and download the code files. You must be logged in (free to do – [sign up here](https://www.dominodatalab.com/try)) to run the code in Domino.

We'd love to hear from you about your use cases and experiences with PyCaret. Let us know if it's been helpful to you, and whether you'd like to see more projects and pieces about tools in this space. Drop us a note or bring your questions and feedback to the [Domino Community](https://community.dominodatalab.com/) (also free – [register here](https://community.dominodatalab.com/entry/register?Target=categories)).

**Domino Reference Project**

[PyCaret Project on Domino](https://try.dominodatalab.com/u/katie_shakman/PyCaret/view/README.md)

The reference project accompanies this post and provides a quick way to try out PyCaret and follow along with this walk-through. It includes a classification pipeline [template notebook](https://try.dominodatalab.com/u/katie_shakman/PyCaret/view/PyCaret_Template.ipynb) that covers the above and demonstrates additional functionality.