Forecasting US GDP using Machine Learning and Mathematics | by Dron Mongia

What can we learn from this modern problem?

Photo by Igor Omilaev on Unsplash

GDP is a very strong metric of a country's economic well-being; therefore, forecasts of the measurement are highly sought after. Policymakers and legislators, for example, may want a rough forecast of the trends regarding the country's GDP prior to passing a new bill or law. Researchers and economists will also consider these forecasts for various endeavors in both academic and industrial settings.

Forecasting GDP, similarly to many other time series problems, follows a general workflow.

1. Using the integrated FRED (Federal Reserve Economic Data) library and API, we will create our features by constructing a data frame composed of US GDP along with some other metrics that are closely related (GDP = Consumption + Investment + Govt. Spending + Net Exports).
2. Using a variety of statistical tests and analyses, we will explore the nuances of our data in order to better understand the underlying relationships between features.
3. Finally, we will use a variety of statistical and machine-learning models to conclude which approach can lead us to the most accurate and efficient forecast.

Along all of these steps, we will delve into the nuances of the underlying mathematical backbone that supports our tests and models.

To construct our dataset for this project, we will be utilizing the FRED (Federal Reserve Economic Data) API, which is the premier application for gathering economic data. Note that to use this data, one must register an account on the FRED website and request a custom API key.

Each time series on the website is connected to a specific character string (for example, GDP is linked to 'GDP', Net Exports to 'NETEXP', etc.). This is important because when we make a call for each of our features, we need to make sure that we specify the correct character string to go along with it.

Keeping this in mind, let's now construct our data frame:

```python
# used to label and construct each feature dataframe
def gen_df(category, series):
    gen_ser = fred.get_series(series, frequency='q')
    return pd.DataFrame({'Date': gen_ser.index, category + ' Billions of dollars': gen_ser.values})

# used to merge every constructed dataframe
def merge_dataframes(dataframes, on_column):
    merged_df = dataframes[0]
    for df in dataframes[1:]:
        merged_df = pd.merge(merged_df, df, on=on_column)
    return merged_df

# list of features to be used
dataframes_list = [
    gen_df('GDP', 'GDP'),
    gen_df('PCE', 'PCE'),
    gen_df('GPDI', 'GPDI'),
    gen_df('NETEXP', 'NETEXP'),
    gen_df('GovTotExp', 'W068RCQ027SBEA'),
]

# defining and displaying dataset
data = merge_dataframes(dataframes_list, 'Date')
data
```
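One note on setup: the snippet above assumes that pandas has been imported and that a FRED client named fred already exists. The calls match the fredapi package, so a minimal setup might look like the following sketch (the API key string is a placeholder you would replace with your own):

```python
import pandas as pd
import matplotlib.pyplot as plt  # used for the plots later in the article
from fredapi import Fred

# create the FRED client with your personal API key (placeholder value)
fred = Fred(api_key='YOUR_FRED_API_KEY')
```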
Notice that since we have defined functions as opposed to static chunks of code, we are free to expand our list of features for further testing. Running this code, our resulting data frame is the following:

(final dataset)

We notice that our dataset starts from the 1960s, giving us fairly broad historical context. In addition, looking at the shape of the data frame, we have 1285 instances of actual economic data to work with, a number that is not necessarily small, but not big either. These observations will come into play during our modeling phase.
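If you want to verify those observations yourself, a couple of quick pandas checks will do it (illustrative only; the column names follow the gen_df labeling above):

```python
# quick sanity checks on the merged dataset
print(data.shape)                              # rows x columns of the combined frame
print(data['Date'].min(), data['Date'].max())  # sample period covered
data.head()
```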
Now that our dataset is initialized, we can begin visualizing and conducting tests to gather some insights into the behavior of our data and how our features relate to one another.

Visualization (Line plot)

Our first approach to analyzing this dataset is to simply graph each feature on the same plot in order to catch some patterns. We can write the following:

```python
# separating date column from feature columns
date_column = 'Date'
feature_columns = data.columns.difference([date_column])

# set the plot
fig, ax = plt.subplots(figsize=(10, 6))
fig.suptitle('Features vs Time', y=1.02)

# graphing features onto plot
for i, feature in enumerate(feature_columns):
    ax.plot(data[date_column], data[feature], label=feature, color=plt.cm.viridis(i / len(feature_columns)))

# label axes
ax.set_xlabel('Date')
ax.set_ylabel('Billions of Dollars')
ax.legend(loc='upper left', bbox_to_anchor=(1, 1))

# display the plot
plt.show()
```

Running the code, we get the result:

(features plotted against one another)

Looking at the graph, we notice that some of the features resemble GDP far more than others. For instance, GDP and PCE follow nearly the exact same trend, while NETEXP shares no visible similarities. Though it may be tempting, we cannot yet begin selecting and removing certain features before conducting further exploratory tests.

ADF (Augmented Dickey-Fuller) Test

The ADF (Augmented Dickey-Fuller) Test evaluates the stationarity of a particular time series by checking for the presence of a unit root, a characteristic that defines a time series as nonstationary. Stationarity essentially means that a time series has a constant mean and variance. This is important to test because many popular forecasting methods (including ones we will use in our modeling phase) require stationarity to function properly.

(Formula for Unit Root)
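As a point of reference, the unit-root idea can be sketched with a simple AR(1) process; this is standard textbook notation and may differ slightly from the formula pictured above:

y_t = ρ·y_{t−1} + ε_t

The series has a unit root when ρ = 1, in which case shocks accumulate rather than decay and the mean and variance drift over time. The ADF test takes the presence of a unit root as its null hypothesis, so a small p-value (rejecting the null) is evidence of stationarity.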
Although we can determine the stationarity of most of these time series just by looking at the graph, doing the testing is still beneficial because we will likely reuse it in later parts of the forecast. Using the statsmodels library, we write:

```python
from statsmodels.tsa.stattools import adfuller

# iterating through each feature
for column in data.columns:
    if column != 'Date':
        result = adfuller(data[column])
        print(f"ADF Statistic for {column}: {result[0]}")
        print(f"P-value for {column}: {result[1]}")
        print("Critical Values:")
        for key, value in result[4].items():
            print(f"   {key}: {value}")
        # creating separation line between each feature
        print("\n" + "=" * 40 + "\n")
```

giving us the result:

(ADF Test results)

The numbers we are interested in from this test are the p-values. A p-value close to zero (equal to or less than 0.05) implies stationarity, while a value closer to 1 implies nonstationarity. We can see that all of our time series features are highly nonstationary due to their statistically insignificant p-values; in other words, we are unable to reject the null hypothesis that a unit root is present. Below is a simple visual representation of the test for one of our features. The red dotted line represents the p-value at which we would be able to conclude stationarity for the time series feature, and the blue box represents the p-value where the feature currently sits.

(ADF visualization for NETEXP)

VIF (Variance Inflation Factor) Test

The purpose of finding the Variance Inflation Factor of each feature is to check for multicollinearity, or the degree of correlation the predictors share with one another. High multicollinearity is not necessarily detrimental to our forecast; however, it can make it much harder for us to determine the individual effect of each feature time series on the prediction, thus hurting the interpretability of the model.

Mathematically, the calculation is as follows:

(Variance Inflation Factor of predictor)

VIF(X_j) = 1 / (1 − R²_j)

with X_j representing our selected predictor and R²_j being the coefficient of determination for that predictor (obtained by regressing X_j on the remaining predictors). Applying this calculation to our data, we arrive at the following result:

(VIF scores for each feature)

Evidently, our predictors are very closely tied to one another. A VIF score greater than 5 implies multicollinearity, and the scores our features achieved far exceed that threshold. Predictably, PCE by far had the highest score, which makes sense given how its shape on the line plot resembled many of the other features.
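For readers who want to reproduce a table like this, statsmodels ships a variance_inflation_factor helper. The sketch below is one way to do it and is an assumption on my part rather than the article's exact code (in particular, whether the GDP column is kept among the predictors):

```python
from statsmodels.stats.outliers_influence import variance_inflation_factor
import pandas as pd

# predictors only: drop the date column (optionally drop the GDP target as well)
X = data.drop(columns=['Date'])

# VIF for column j comes from regressing feature j on all of the other features
vif_scores = pd.DataFrame({
    'feature': X.columns,
    'VIF': [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
})
print(vif_scores)
```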
Now that we have looked thoroughly through our data to better understand the relationships and characteristics of each feature, we will begin to make modifications to our dataset in order to prepare it for modeling.

Differencing to achieve stationarity

To begin modeling, we need to first ensure our data is stationary. We can achieve this using a technique called differencing, which essentially transforms the raw data using a mathematical formula similar to the tests above.

The concept is defined mathematically as:

(First Order Differencing equation)

Δy_t = y_t − y_{t−1}

This makes it so we are removing the nonlinear trends from the features, resulting in a constant series. In other words, we are taking values from our time series and calculating the change which occurred following the previous point.

We can apply this concept to our dataset and check the results with the previously used ADF test with the following code:

```python
# differencing and storing original dataset
data_diff = data.drop('Date', axis=1).diff().dropna()

# printing ADF test for new dataset
for column in data_diff.columns:
    result = adfuller(data_diff[column])
    print(f"ADF Statistic for {column}: {result[0]}")
    print(f"P-value for {column}: {result[1]}")
    print("Critical Values:")
    for key, value in result[4].items():
        print(f"   {key}: {value}")
    print("\n" + "=" * 40 + "\n")
```

Running this results in:

(ADF test for differenced data)

We notice that our new p-values are less than 0.05, meaning that we can now reject the null hypothesis that our dataset is nonstationary. Taking a look at the graph of the new dataset proves this assertion:

(Graph of Differenced Data)

We see how all of our time series are now centered around 0, with the mean and variance remaining constant. In other words, our data now visibly demonstrates characteristics of a stationary system.

VAR (Vector Auto Regression) Model

The first step of the VAR model is performing the Granger Causality Test, which will tell us which of our features are statistically significant to our prediction. The test indicates whether a lagged version of a specific time series can help us predict our target time series, though not necessarily that one time series causes the other (note that causation in the context of statistics is a far more difficult concept to prove). Using the statsmodels library, we can apply the test as follows:

```python
from statsmodels.tsa.stattools import grangercausalitytests

columns = ['PCE Billions of dollars', 'GPDI Billions of dollars',
           'NETEXP Billions of dollars', 'GovTotExp Billions of dollars']
lags = [6, 9, 1, 1]  # determined from individually testing each combination

for column, lag in zip(columns, lags):
    df_new = data_diff[['GDP Billions of dollars', column]]
    print(f'For {column}')
    gc_res = grangercausalitytests(df_new, lag)
    print("\n" + "=" * 40 + "\n")
```
Running the code results in the following table:

(Sample of Granger Causality for two features)

Here we are just looking for a single lag for each feature that has a statistically significant p-value (< 0.05). So, for example, since both NETEXP and GovTotExp are significant on the first lag, we will consider both of these features for our VAR model. Personal consumption expenditures (PCE) arguably did not make this cut-off (see table); however, its sixth lag is so close that I decided to keep it in. Our next step is to create our VAR model now that we have decided that all of our features are significant based on the Granger Causality Test.

VAR (Vector Auto Regression) is a model which can leverage multiple time series to capture patterns and determine a flexible forecast. Mathematically, the model is defined by:

(Vector Auto Regression Model)

Y_t = C + A_1·Y_{t−1} + A_2·Y_{t−2} + … + A_p·Y_{t−p} + ε_t

where Y_t is the vector of time series values at a particular time t, C is an intercept vector, and each A_i is a fitted coefficient matrix. We are essentially using the lagged values of a time series (and, in our case, other time series) to make a prediction for Y_t. Knowing this, we can now apply this algorithm to the data_diff dataset and evaluate the results.
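As a rough sketch of how such a VAR fit and forecast can be produced with statsmodels (the holdout split and maximum lag order below are illustrative assumptions, not necessarily the settings behind the results that follow):

```python
from statsmodels.tsa.api import VAR

# use the differenced series; trim this to the features kept after the Granger test
model_data = data_diff

# hold out the final 40 quarters for evaluation (illustrative split)
train, test = model_data[:-40], model_data[-40:]

# fit the VAR, letting statsmodels choose the lag order by AIC (illustrative choice)
var_result = VAR(train).fit(maxlags=8, ic='aic')

# forecast the held-out horizon from the last observed lags
lag_order = var_result.k_ar
forecast = var_result.forecast(train.values[-lag_order:], steps=len(test))
```

Because the model is trained on differenced data, the GDP forecast still needs to be inverted (cumulatively summed back onto the last observed GDP level) before it can be compared with the actual series or scored with MAE and MAPE.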
(Evaluation Metrics)

(actual vs forecasted GDP for VAR)

Looking at this forecast, we can clearly see that despite missing the mark rather heavily on both evaluation metrics used (MAE and MAPE), our model visually was not too inaccurate barring the outliers caused by the pandemic. We managed to stay on the testing line for the most part from 2018-2019 and from 2022-2024; however, the global events that followed obviously threw in some unpredictability, which affected the model's ability to accurately gauge the trends.

VECM (Vector Error Correction Model)

VECM (Vector Error Correction Model) is similar to VAR, albeit with a few key differences. Unlike VAR, VECM does not rely on stationarity, so differencing and standardizing the time series will not be necessary. VECM also assumes cointegration, or a long-term equilibrium relationship between the time series. Mathematically, we define
