{"id":8246,"date":"2021-05-10T03:16:46","date_gmt":"2021-05-10T03:16:46","guid":{"rendered":"https:\/\/wealthrevelation.com\/data-science\/2021\/05\/10\/predicting-house-prices-with-xgboost\/"},"modified":"2021-05-10T03:16:46","modified_gmt":"2021-05-10T03:16:46","slug":"predicting-house-prices-with-xgboost","status":"publish","type":"post","link":"https:\/\/wealthrevelation.com\/data-science\/2021\/05\/10\/predicting-house-prices-with-xgboost\/","title":{"rendered":"Predicting House Prices with XGBoost"},"content":{"rendered":"<div>\n<p><a title=\"My LinkedIn\" href=\"https:\/\/www.linkedin.com\/in\/tyronewilkinson\/\" target=\"_blank\" rel=\"noopener noreferrer\">LinkedIn<\/a> | <a title=\"My GitHub\" href=\"https:\/\/github.com\/TyroneWilkinson\/AimesHousingML\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub<\/a> | <a title=\"Email Me\" href=\"\/cdn-cgi\/l\/email-protection#3d49444f5253584f4a54515654534e52537d5a505c5451135e5250\">Email<\/a> | <a title=\"Kaggle\" href=\"https:\/\/www.kaggle.com\/c\/house-prices-advanced-regression-techniques\/data\" target=\"_blank\" rel=\"noopener\">Data<\/a> | <a title=\"Heroku App\" href=\"https:\/\/housingexplainer.herokuapp.com\/\">Web App<\/a> | <a title=\"Jupyter Notebook\" href=\"https:\/\/colab.research.google.com\/github\/TyroneWilkinson\/AimesHousing-ML\/blob\/master\/Predicting%20House%20Prices%20with%20Advanced%20Regression%20Techniques.ipynb\" target=\"_blank\" rel=\"noopener\">Notebook<\/a><\/p>\n<p>\u00a0<\/p>\n<h2>Introduction<\/h2>\n<p><span>\u201cLocation, location, location.\u201d The likelihood that you will hear that phrase if you are looking into purchasing a house, apartment, condo, or timeshare, is .9999999999. (Yes, I performed that study myself.) 
However, there are many other factors that contribute to the price of real estate, some of which do not relate to the quality of the house itself &#8212; like the month in which it is sold.\u00a0 Additional factors that contribute to the sale price include the unfinished square feet of basement area. Location itself can also be applied in ways that people do not necessarily anticipate, like the location of the home\u2019s garage.\u00a0<\/span><\/p>\n<p><span>All these variables can add up to hundreds of features. Accordingly, arriving at the correct sale price involves some advanced statistical techniques. Who is going to want to sit down with paper and pencil and try to determine how so many features interact to determine a home&#8217;s market value? Not me, and I was only working with 79 features. This is where computers help by running through calculations that would take far too long to work out on paper, but we do need to set them up by training models. The challenge of this exercise was coming up with the best model for predicting a home price.\u00a0<\/span><\/p>\n<p>\u00a0<\/p>\n<h2>Objective<\/h2>\n<p><span>I was tasked with predicting house prices given a combination of 79 features. I did so mostly following the data science methodology. Using the sklearn.metrics module, I attained the following metric scores on my train-test split:\u00a0<\/span><\/p>\n<p><em><span>Mean Squared Error<\/span> 395114426.0445745<\/em><\/p>\n<p><em><span>Mean Absolute Error<\/span> 13944.044001807852<\/em><\/p>\n<p><em><span>R-Squared<\/span> 0.908991109360274<\/em><\/p>\n<p><span>However, my Kaggle submission was <\/span><span>evaluated on Root-Mean-Squared-Error (RMSE) between the logarithm of the predicted value and the logarithm of the observed sales price. 
My score was 0.13244.<\/span><\/p>\n<p><span>Mean absolute error is likely the easiest to interpret of the above metrics, being \u201cthe average of the absolute values of the errors\u201d (<\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/Root-mean-square_deviation\"><span>Root-mean-square deviation &#8211; Wikipedia<\/span><\/a><span>). In other words, my model\u2019s predictions were off by about $13,944 on average.\u00a0\u00a0<\/span><\/p>\n<p>\u00a0<\/p>\n<h2>Process<\/h2>\n<div id=\"attachment_73786\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/towardsdatascience.com\/data-science-methodology-101-ce9f0d660336\"><img loading=\"lazy\" aria-describedby=\"caption-attachment-73786\" alt=\"\" width=\"1458\" height=\"879\" data-srcset=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/pasted-image-0-977646-joLG6gY5.png 1458w, https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/pasted-image-0-977646-joLG6gY5-300x181.png 300w, https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/pasted-image-0-977646-joLG6gY5-1024x617.png 1024w, https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/pasted-image-0-977646-joLG6gY5-768x463.png 768w, https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/pasted-image-0-977646-joLG6gY5-600x362.png 600w\" data-src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/pasted-image-0-977646-joLG6gY5.png\" data-sizes=\"(max-width: 1458px) 100vw, 1458px\" class=\"wp-image-73786 size-full lazyload\" src=\"image\/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==\"><img loading=\"lazy\" aria-describedby=\"caption-attachment-73786\" class=\"wp-image-73786 size-full\" src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/pasted-image-0-977646-joLG6gY5.png\" alt=\"\" width=\"1458\" height=\"879\"><\/a><\/p>\n<p id=\"caption-attachment-73786\" class=\"wp-caption-text\"><em>Data Science Methodology<\/em><\/p>\n<\/div>\n<p><strong><span>I
will provide a simplified overview of the steps I took in order to achieve my desired outcome. Feel free to visit my <\/span><a href=\"https:\/\/github.com\/TyroneWilkinson\/AimesHousingML\"><span>GitHub<\/span><\/a><span> for a more thorough dive.<\/span><\/strong><\/p>\n<p>\u00a0<\/p>\n<h3>Business Understanding<\/h3>\n<p><span>This step determines the trajectory of one&#8217;s project. Although my undertaking was purely academic in nature, there are conceivably several reasons why a similar goal might be pursued in the \u201creal world.\u201d Perhaps an online real estate competitor has entered the fray offering more accurate home value estimates than Zillow does. Not wanting to lose market share, Zillow desires to revamp its home valuation model by utilizing features it had previously ignored and by considering a wider array of data models. In any case, the objective is fairly straightforward. <\/span><\/p>\n<p>\u00a0<\/p>\n<h3>Analytic Approach<\/h3>\n<p><span>The approach depends on the goal. Since I must predict sale prices, I know that predicting quantities is a regression problem. If I were predicting labels or discrete values, I would have to utilize classification algorithms. There are different types of regression models. I know that tree-based regression models have typically performed well with similar problems, but I will have to see what the data looks like before I decide. Ultimately, I will evaluate different models and choose the one that performs best.\u00a0<\/span><\/p>\n<p>\u00a0<\/p>\n<h3>Data Requirements &amp; Data Collection<\/h3>\n<p><span>The data has already been provided. If that were not the case, I would have to define the data requirements, determine the best way of collecting the data, and perhaps revise my definitions depending on whether the data could be used to fulfill the objective.<\/span><\/p>\n<p>\u00a0<\/p>\n<h3>Data Understanding<\/h3>\n<p><span>This step encompasses exploratory data analysis. 
Reading relevant information about the data and conducting my own research to increase my domain knowledge were also necessary, as I did not perform the Data Requirements and Data Collection steps myself. The documentation that accompanied the dataset proved useful, as it explained much of the missingness.\u00a0<\/span><\/p>\n<p><span>According to the paper (<\/span><a href=\"http:\/\/jse.amstat.org\/v19n3\/decock.pdf\"><span>decock.pdf<\/span><\/a><span>), the dataset describes \u201cthe sale of individual residential property in Ames, Iowa from 2006 to 2010.\u201d Its origins lie in the Ames City Assessor\u2019s Office, but its journey from that office to my computer was not direct. It had been modified by Dean De Cock, who is credited with popularizing this dataset for educational purposes in hopes of replacing the Boston Housing dataset, and then again by the community at Kaggle, the website from which I downloaded the data. <\/span><\/p>\n<p><span>All of my work can be viewed in the <\/span><a href=\"https:\/\/colab.research.google.com\/github\/TyroneWilkinson\/AimesHousing-ML\/blob\/master\/Predicting%20House%20Prices%20with%20Advanced%20Regression%20Techniques.ipynb\"><span>Jupyter Notebook<\/span><\/a><span> I created for this project (give it a few minutes to load). 
Here I discuss some of the descriptive statistics I performed.\u00a0<\/span><\/p>\n<p><span>In order to view the feature distributions, I created histograms of the numerical and continuous features and viewed the count distributions of the categorical features.<\/span><\/p>\n<p><span>Continuous Features:<\/span><\/p>\n<p><img loading=\"lazy\" alt=\"\" width=\"512\" height=\"377\" data-srcset=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-7-953697-McbscEAM.png 512w, https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-7-953697-McbscEAM-300x221.png 300w\" data-src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-7-953697-McbscEAM.png\" data-sizes=\"(max-width: 512px) 100vw, 512px\" class=\"aligncenter size-full wp-image-73787 lazyload\" src=\"image\/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==\"><\/p>\n<p><img loading=\"lazy\" class=\"aligncenter size-full wp-image-73787\" src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-7-953697-McbscEAM.png\" alt=\"\" width=\"512\" height=\"377\"><\/p>\n<p>\u00a0<\/p>\n<p><span>I visualized the missing values present in the dataset then examined the relationships between the missing values and the sale price of the houses.<\/span><\/p>\n<p><img loading=\"lazy\" alt=\"\" width=\"372\" height=\"326\" data-srcset=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-6-432591-eHHwHdXN.png 372w, https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-6-432591-eHHwHdXN-300x263.png 300w\" data-src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-6-432591-eHHwHdXN.png\" data-sizes=\"(max-width: 372px) 100vw, 372px\" class=\"aligncenter size-full wp-image-73788 lazyload\" src=\"image\/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==\"><\/p>\n<p><img loading=\"lazy\" class=\"aligncenter size-full wp-image-73788\" 
src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-6-432591-eHHwHdXN.png\" alt=\"\" width=\"372\" height=\"326\"><\/p>\n<p>\u00a0<\/p>\n<p><span>I examined the correlation among the values with a heatmap.<\/span><\/p>\n<p><img src=\"https:\/\/lh4.googleusercontent.com\/qFV54v9-trcYYn-4vg7kLRrqQyBkrZM4c5YvhI7m8_eeF4ANzCnWgyiKL-42sYhzWFE1k9ku2ZeKhb2Ef1Mql9hg3hWg7XNxS4BAdCb8WIxl64HznLXCnueM7BPV1wS-8ZxgthLM\"><\/p>\n<p>\u00a0<\/p>\n<p><span>I also examined the presence of outliers. Throughout this process I noted observations and potential steps I might take when I prepared the data. <\/span><\/p>\n<p>\u00a0<\/p>\n<h3>Data Preparation<\/h3>\n<p><span>During this stage, missing values, skewed features, outliers, redundant features, and multicollinearity are handled, and feature engineering is done. As I mentioned before, the documentation explained much of the missingness, removing the need to impute any of the missing data. I handled the missing values, removed some outliers, and encoded the categorical features. Dummy encoding was used for the nominal data and integer encoding for the ordinal data. 
Tree-based models are robust to outliers, multicollinearity, and skewed data, so I decided to utilize those models in order to avoid altering the data further.<\/span><\/p>\n<p><span>Here are some of the obvious outliers I removed:<\/span><\/p>\n<p><img loading=\"lazy\" alt=\"\" width=\"408\" height=\"262\" data-srcset=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-5-692754-lRTOaKC7.png 408w, https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-5-692754-lRTOaKC7-300x193.png 300w\" data-src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-5-692754-lRTOaKC7.png\" data-sizes=\"(max-width: 408px) 100vw, 408px\" class=\"aligncenter size-full wp-image-73789 lazyload\" src=\"image\/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==\"><\/p>\n<p><img loading=\"lazy\" class=\"aligncenter size-full wp-image-73789\" src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-5-692754-lRTOaKC7.png\" alt=\"\" width=\"408\" height=\"262\"><\/p>\n<p><img loading=\"lazy\" alt=\"\" width=\"512\" height=\"167\" data-srcset=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-4-587226-2Dv61vXk.png 512w, https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-4-587226-2Dv61vXk-300x98.png 300w\" data-src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-4-587226-2Dv61vXk.png\" data-sizes=\"(max-width: 512px) 100vw, 512px\" class=\"aligncenter size-full wp-image-73782 lazyload\" src=\"image\/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==\"><\/p>\n<p><img loading=\"lazy\" class=\"aligncenter size-full wp-image-73782\" src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-4-587226-2Dv61vXk.png\" alt=\"\" width=\"512\" height=\"167\"><\/p>\n<p>\u00a0<\/p>\n<h3>Modeling and Evaluation<\/h3>\n<p><span>These stages go hand-in-hand, given that typically multiple models are created in order to find the one that performs best. In light of<\/span><span> the high number of features, a tree-based regression model would be better suited than something like linear regression. I decided to utilize the XGBoost Python library due to its known advantages over<\/span><span> the Gradient Boosting and Random Forest algorithms in the scikit-learn library. I then used Grid Search to determine the best parameters to use in each model.\u00a0<\/span><\/p>\n<p><span>The XGBRegressor took 4162.4 minutes to complete, while the XGBRFRegressor took 8049.0 minutes.<\/span><\/p>\n<p><span>Interestingly enough, the top features for each model were First Floor in Square Feet and Lot Area in Square Feet for the gradient boost, and First Floor in Square Feet and Ground Living Area in Square Feet for the random forest. The scoring metric used was negative root mean squared error. The top features using R<sup>2<\/sup><\/span><span> were Ground Living Area in Square Feet, followed by Overall Quality, which rates the <\/span><span>overall material and finish of the house. While I was surprised that Overall Quality was not at the top, the importance of features that measured the size of the house was in line with some of my findings (look <\/span><a href=\"https:\/\/www.fortunebuilders.com\/what-are-the-biggest-factors-in-determining-property-value\/\"><span>here<\/span><\/a><span> and <\/span><a href=\"https:\/\/www.opendoor.com\/w\/blog\/factors-that-influence-home-value\"><span>here<\/span><\/a><span>).<\/span><\/p>\n<p><span>I interactively explored my best-performing model with <\/span><a href=\"https:\/\/github.com\/oegedijk\/explainerdashboard\"><span>ExplainerDashboard<\/span><\/a><span>, an awesome library for building interactive dashboards that explain the inner workings of machine learning models. My web app, a stripped-down version of the dashboard, can be found <\/span><a href=\"https:\/\/housingexplainer.herokuapp.com\/\"><span>here<\/span><\/a><span>. 
I used <\/span><a href=\"https:\/\/www.heroku.com\/\"><span>Heroku<\/span><\/a><span>, a free cloud application platform, to host my web app, alongside <\/span><a href=\"https:\/\/kaffeine.herokuapp.com\/\"><span>Kaffeine<\/span><\/a><span> to keep it running. If that link does not work, you can go to my <\/span><a href=\"https:\/\/colab.research.google.com\/github\/TyroneWilkinson\/AimesHousing-ML\/blob\/master\/Predicting%20House%20Prices%20with%20Advanced%20Regression%20Techniques.ipynb\"><span>notebook<\/span><\/a><span> and scroll all the way to the bottom to view the dashboard. You can also check out my <\/span><a href=\"https:\/\/github.com\/TyroneWilkinson\/AimesHousingML\"><span>GitHub<\/span><\/a><span> for the complete experience. My favorite feature is the ability to adjust the values of features and generate a predicted house price. The library also comes with a number of unique visualizations and features, making it a must-use when working with \u201cblack box\u201d models.\u00a0<\/span><\/p>\n<p><img loading=\"lazy\" alt=\"\" width=\"512\" height=\"428\" data-srcset=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-3-704264-dA4YDVzx.png 512w, https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-3-704264-dA4YDVzx-300x251.png 300w\" data-src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-3-704264-dA4YDVzx.png\" data-sizes=\"(max-width: 512px) 100vw, 512px\" class=\"aligncenter size-full wp-image-73783 lazyload\" src=\"image\/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==\"><\/p>\n<p><img loading=\"lazy\" class=\"aligncenter size-full wp-image-73783\" src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-3-704264-dA4YDVzx.png\" alt=\"\" width=\"512\" height=\"428\"><\/p>\n<p><img loading=\"lazy\" alt=\"\" width=\"512\" height=\"317\" 
data-srcset=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-2-719885-Bk1IO84j.png 512w, https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-2-719885-Bk1IO84j-300x186.png 300w\" data-src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-2-719885-Bk1IO84j.png\" data-sizes=\"(max-width: 512px) 100vw, 512px\" class=\"aligncenter size-full wp-image-73784 lazyload\" src=\"image\/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==\"><\/p>\n<p><img loading=\"lazy\" class=\"aligncenter size-full wp-image-73784\" src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-2-719885-Bk1IO84j.png\" alt=\"\" width=\"512\" height=\"317\"><\/p>\n<p><img loading=\"lazy\" alt=\"\" width=\"512\" height=\"343\" data-srcset=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-1-780227-egAmggLQ.png 512w, https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-1-780227-egAmggLQ-300x201.png 300w\" data-src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-1-780227-egAmggLQ.png\" data-sizes=\"(max-width: 512px) 100vw, 512px\" class=\"aligncenter size-full wp-image-73785 lazyload\" src=\"image\/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==\"><\/p>\n<p><img loading=\"lazy\" class=\"aligncenter size-full wp-image-73785\" src=\"https:\/\/nycdsa-blog-files.s3.us-east-2.amazonaws.com\/2021\/05\/unnamed-1-780227-egAmggLQ.png\" alt=\"\" width=\"512\" height=\"343\"><\/p>\n<p>\u00a0<\/p>\n<h2>Conclusion<\/h2>\n<p><span>Experimenting with advanced regression techniques on real data in order to come up with the best prediction was an informative experience. 
Zillow makes billions a year, which indicates that a model that accurately predicts the sale price of a house would be a very valuable tool for a competitor or Zillow itself.<\/span><\/p>\n<p>\u00a0<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/nycdatascience.com\/blog\/student-works\/predicting-house-prices-with-xgboost\/<\/p>\n","protected":false},"author":0,"featured_media":8247,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[2],"tags":[],"_links":{"self":[{"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/posts\/8246"}],"collection":[{"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/comments?post=8246"}],"version-history":[{"count":0,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/posts\/8246\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/media\/8247"}],"wp:attachment":[{"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/media?parent=8246"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/categories?post=8246"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wealthrevelation.com\/data-science\/wp-json\/wp\/v2\/tags?post=8246"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}